\section{Design of Detection Algorithms}
\label{sec:approaches}
To systematically present the approaches that have been used by the various lines of work in the past, we have opted to look at them from three distinct viewpoints:
\begin{description}
\item [Features:] What features are used?
\item [Method:] What technology is the detection method based on?
\item [Outcome:] What outcome is produced?
\end{description}
The following subsections address each of these viewpoints separately, while Table~\ref{tab:approaches} provides an overview of the section by itemizing only the most relevant examples and related articles.
\input{tables/table_approaches}
\subsection{Features}
\label{subsec:features}
Feature extraction (a.k.a. feature engineering) is a challenging task that has a major impact on the quality (accuracy and robustness) of detection approaches. Well-crafted features contribute considerably to the success of an approach; conversely, poor features may ruin even good detection algorithms. At the same time, even if a feature has good predictive power and leads to high detection accuracy, the robustness of any approach relying on it will be low if it can be easily forged by an attacker. Therefore, successful detection approaches must strike a delicate balance between accuracy and robustness when selecting their features.
Very few approaches simply parse Resource Records from DNS traffic and use values from specific fields as they appear. Instead, a multitude of treatments can be applied to these raw values before consuming them for detection purposes (average, standard deviation, max, min, rate, outlier detection, etc.). Furthermore, external data from outside the DNS environment may be used to enrich the initial dataset. Some approaches require transforming the DNS data into a distinct data structure, such as a graph, before using it in their detection methods. For instance, this is the case in the approach proposed by Lee et al.~\cite{GMAD_Lee2014, TrackingMultipleCCBotnetsByAnalyzingDnsTraffic_Lee2010}, where a graph representing the communication sequences of clients with domains is built. The authors call it the Domain Name Travel Graph (DNTG) and use it to identify clusters of related domains that need to be considered by their detection method. In the approach proposed by Oprea et al.~\cite{DetectionOfEarlyStageEnterpriseInfection_Oprea2015}, another type of graph is built, representing the association between host IP addresses and queried domains, while in the approach of Khalil et al.~\cite{GuiltyByAssociation_Khalil2016} a graph captures the movement of domains in bulk among different ASNs.
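To make this concrete, the following minimal Python sketch (illustrative only; it does not reproduce the exact constructions of the cited works) builds a simple host-domain graph from a list of (client, domain) query pairs using the \texttt{networkx} library; real approaches attach much richer attributes to nodes and edges.
\begin{verbatim}
# Hypothetical sketch: build a bipartite host-domain graph from DNS
# query logs. Names and structure are illustrative, not from the papers.
import networkx as nx

def build_host_domain_graph(query_log):
    """query_log: iterable of (client_ip, domain) pairs."""
    g = nx.Graph()
    for client_ip, domain in query_log:
        host_node = ("host", client_ip)
        domain_node = ("domain", domain)
        g.add_node(host_node, kind="host")
        g.add_node(domain_node, kind="domain")
        if g.has_edge(host_node, domain_node):
            g[host_node][domain_node]["queries"] += 1   # repeated query
        else:
            g.add_edge(host_node, domain_node, queries=1)
    return g

log = [("10.0.0.5", "example.com"), ("10.0.0.5", "evil.example"),
       ("10.0.0.7", "evil.example")]
graph = build_host_domain_graph(log)
\end{verbatim}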
The number of individual treatment, enrichment and preprocessing techniques is very large, and going through each and every one of them is out of the scope of this paper. In order to present the state of the art in a systematic way, we distinguish the consumed features at a higher level of abstraction. Specifically, we consider the following three dimensions to differentiate features:
\begin{enumerate}
\item Internal vs. contextual features
\item DNS dataset dependent vs. independent features
\item Mono vs. multi domain features
\end{enumerate}
\subsubsection{Internal vs. Contextual Features}~\\
The distinction between internal and contextual features is quite similar to the one proposed by Perdisci et al.~\cite{DetectingMaliciousFluxServiceNetworks_Perdisci2009} to divide features into \textit{passive} and \textit{active}. According to the authors, \textit{passive} features are the ones ``that can be directly extracted from the information collected by passively monitoring the DNS queries'' from resolvers, while ``\textit{active} features need some additional external information to be computed''. Since we consider elsewhere the possibility of collecting the data passively or actively, this terminology could be misleading; we therefore opt for different terms, namely internal and contextual, which are described below.
\paragraph{Internal features}
These features can be extracted from DNS Resource Records alone; no external complementary data source is required. However, they may be, and most of the time are, transformed before being fed into the detection method. For instance, the ``domain average TTL value'' used in~\cite{Exposure_Bilge2011, Exposure_Bilge2014, DetectingMaliciousFluxServiceNetworks_Perdisci2009, FluxBuster_Perdisci2012, FrameworkForDnsBasedDetectionAndMitigation_Stalmans2011} is an example of this type of feature. Additionally, features extracted from domain names, which are popular in DGA detection and attribution~\cite{Notos_Antonakakis2010, Pleiades_Antonakakis2012, BotGAD_Choi2012, MeasuringAndDetectingFastFluxServiceNetworks_Holz2008, Phoenix_Schiavoni2014, MethodForDetectingDgaBotnet_Tong2016}, belong to this category. Moreover, association-based features popular in graph-based approaches~\cite{DetectingMalwareBasedOnDnsGraphMining_Zou2015, DetectionOfEarlyStageEnterpriseInfection_Oprea2015, GMAD_Lee2014, Segugio_Rahbarinia2015, Segugio_Rahbarinia2016, TrackingMultipleCCBotnetsByAnalyzingDnsTraffic_Lee2010, MethodForIdentifyingCompromisedClients_Stevanovic2017} are usually built using internal DNS features.
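As an illustration, the sketch below computes simplified versions of a few internal features (average TTL, number of distinct resolved IPs, and a crude character-entropy score over the leftmost label); the exact definitions vary across the cited works, so this is only a toy approximation.
\begin{verbatim}
# Toy internal features computed from Resource Records alone.
import math
from collections import Counter

def internal_features(domain, ttl_values, resolved_ips):
    """domain: FQDN string; ttl_values / resolved_ips: observed values."""
    name = domain.split(".")[0]               # leftmost label only
    counts = Counter(name)
    entropy = (-sum((c / len(name)) * math.log2(c / len(name))
                    for c in counts.values()) if name else 0.0)
    return {
        "avg_ttl": sum(ttl_values) / len(ttl_values) if ttl_values else 0.0,
        "num_distinct_ips": len(set(resolved_ips)),
        "name_length": len(name),
        "char_entropy": entropy,              # crude DGA-style signal
    }

print(internal_features("xq3f9a.example", [30, 60, 30],
                        ["198.51.100.7", "203.0.113.10"]))
\end{verbatim}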
\paragraph{Contextual features}
On the other hand, \textit{contextual} features are built by combining DNS data with external information sources. For instance, to calculate ``the number of ASNs to which the IP addresses of a domain belong''~\cite{DomainProfiler_Chiba2016, FrameworkForDnsBasedDetectionAndMitigation_Stalmans2011, BotGAD_Choi2012, MeasurementAndAnalysisOfGlobalIpUsagePatternsOfFastFluxBotnets_Hu2011}, information about the IP-AS mapping is required. In another example~\cite{GuiltyByAssociation_Khalil2016}, the authors use a similarity score calculated over the number of different AS numbers to assign a weight to domain-domain associations. Zhang et al.~\cite{Smash_Zhang2015} also exploit associations inferred from WHOIS data for domain clustering.
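A minimal sketch of such a contextual feature is shown below; the \texttt{ip\_to\_asn} mapping is a hypothetical placeholder for an external IP-to-AS data source, which is not part of the DNS data itself.
\begin{verbatim}
def asn_diversity(resolved_ips, ip_to_asn):
    """Contextual feature: number of distinct ASNs hosting a domain.

    ip_to_asn is an external IP -> ASN mapping (e.g., derived from BGP
    data); here it is just a placeholder dictionary.
    """
    asns = {ip_to_asn[ip] for ip in resolved_ips if ip in ip_to_asn}
    return len(asns)

# Example with a toy mapping (not real data):
mapping = {"203.0.113.10": 64501, "198.51.100.7": 64502}
print(asn_diversity(["203.0.113.10", "198.51.100.7"], mapping))  # -> 2
\end{verbatim}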
We note that some contextual features require querying resources controlled by attackers. For instance, Prieto et al.~\cite{BotnetDetectionBasedOnDnsRecordsAndActiveProbing_Prieto2011} use domain web presence as one of their features, i.e., every time a new domain appears in their list, they check whether a webpage is available for this domain. Another special type of contextual feature relies on enrichment using DNS data itself; for instance, Prieto et al.~\cite{BotnetDetectionBasedOnDnsRecordsAndActiveProbing_Prieto2011} check whether a domain has an associated MX record. The usage of such features may warn an attacker that the domain is under scrutiny. However, it is not always necessary to interact actively with the domains: such data can sometimes be obtained from systems like Thales~\cite{Thales_Kountouras2016}, Censys~\cite{Censys_Durumeric2015} or Shodan~\cite{Shodan}.
While the usage of internal features has a number of benefits, mostly in terms of simplicity, their ability to capture information that has been shown to be significant for distinguishing between good and bad domain names is limited. For instance, the registration time of a given domain is often a very important feature, but it cannot be obtained solely from DNS data. It has been shown that attackers sometimes register domains in bulk several months before the start of malicious activities~\cite{Predator_Hao2016}. Detection of such registration patterns enables researchers to proactively detect malicious domains, as done in~\cite{OnThePotentialOfProactiveDomainBlacklisting_Felegyhazi2010, Predator_Hao2016}. However, this information is usually not available for country code TLDs (ccTLDs) because ccTLD registries very rarely offer access to their zone files. Therefore, the existence of a domain can remain unknown until it is queried for the very first time, at which moment it may be possible (sometimes, but not always) to retrieve that information by querying a WHOIS server. This makes approaches relying on such features inapplicable to a very large number of domains. Similarly, some other useful enrichment information can be hard to obtain due to limited accessibility, privacy concerns, excessive cost, etc. Despite all these issues, the usage of contextual information allows researchers to extract more meaningful features and hence provides broader coverage of malicious behavior signals.
\subsubsection{DNS Dataset Dependent vs. Independent Features}~\\
Based on our review of the literature, we believe it is important to distinguish between features that are influenced by the specific DNS dataset and those that are independent from the DNS dataset at hand. We call them DNS Dataset Dependent (DDD) and DNS Dataset Independent (DDI) features, respectively. The rationale behind these two classes is linked to the validation phase. The performance of an approach relying solely on DDD features is highly influenced by the chosen dataset. Thus, to evaluate the quality of such methods, it is very important to perform cross-dataset validation, using datasets from different places, for different periods, of different sizes, etc. (see Section~\ref{subsec:evaluation_challenges} for more). On the contrary, approaches relying on DDI features are more stable and can be applied equally across different environments.
\paragraph{DNS dataset dependent features}
For instance, ``the number of IP addresses observed as being assigned to a domain'' during the observation period is a DDD feature, because its value depends on the specific dataset~\cite{MeasurementAndAnalysisOfGlobalIpUsagePatternsOfFastFluxBotnets_Hu2011, DynamicsOfOnlineScamHostingInfrastructure_Konte2009, DetectingMaliciousFluxServiceNetworks_Perdisci2009, FluxBuster_Perdisci2012}. Similarly, ``the number of observed common ASNs shared by a pair of domains'', a feature used by Khalil et al.~\cite{GuiltyByAssociation_Khalil2016} to build associations between domain names, is also dataset dependent, because a graph built using this association hinges on where and how the dataset has been collected.
\paragraph{DNS dataset independent features}
On the other hand, the ``hit-count of a particular domain in popular search engines''~\cite{Exposure_Bilge2011, Exposure_Bilge2014} is a DNS dataset independent feature because it does not depend on what one can see in the DNS dataset chosen. Similarly, the ``n-gram'' distribution of a domain name~\cite{Notos_Antonakakis2010, Pleiades_Antonakakis2012, ProactiveDiscoveryOfPhishingRelatedDomainNames_Marchal2012} is DNS dataset independent since it does not hinge on the chosen dataset.
\subsubsection{Mono vs. Multi Domain Features}
\paragraph{Mono domain features} Mono domain features are extracted for each single domain. For example, ``the number of countries which host a given domain''~\cite{OnTheGroundTruthProblem_Stevanovic2015, Notos_Antonakakis2010, Kopis_Antonakakis2011, DetectingMaliciousActivityWithDnsBackscatter_Fukuda2015, DomainProfiler_Chiba2016} is an example of a mono domain feature. One of the advantages of this type of feature is that the approaches relying on them can be trained and operated on completely different datasets.
\paragraph{Multi domains features}
Domain association features calculated over a pair of domains, which are used in many graph-based and clustering approaches~\cite{Segugio_Rahbarinia2015, Segugio_Rahbarinia2016, GuiltyByAssociation_Khalil2016, DetectingMalwareBasedOnDnsGraphMining_Zou2015, Smash_Zhang2015, KinderedDomains_Thomas2014}, are examples of multi domain features, as are the ones used, for instance, in~\cite{Notos_Antonakakis2010, GMAD_Lee2014, Pleiades_Antonakakis2012}. We note that approaches relying on \textit{multi domain} features usually require larger datasets to work properly. Indeed, the association between two arbitrary domains may be indirect; hence, intermediate domains must also be taken into consideration for such an association to be built and for the approach to work properly.
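As an illustration of a typical multi domain association measure, the following sketch computes the Jaccard similarity of the client sets of two domains; it is a generic example rather than the exact metric used in any of the cited works.
\begin{verbatim}
def client_jaccard(clients_a, clients_b):
    """Multi domain association feature: Jaccard similarity of the sets
    of clients that queried two domains."""
    a, b = set(clients_a), set(clients_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Two domains queried by overlapping sets of clients:
print(client_jaccard(["10.0.0.5", "10.0.0.7"], ["10.0.0.7", "10.0.0.9"]))
\end{verbatim}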
\subsection{Detection Methods}
\label{subsec:methods}
We have identified two main paradigms in the detection methods we are considering. In the first, the method may benefit from some external expertise to figure out how to discriminate between good and bad domains. This expertise is implemented by means of various heuristics and does not use machine learning techniques. Therefore, we call the approaches under this paradigm \textbf{Knowledge Based methods}.
In the second case, whereas the authors may have some examples of malicious and benign domains at their disposal, they have no a priori understanding of how to distinguish between the two. They rely on data-driven algorithms to automate the discrimination process; therefore, we call the approaches under this paradigm \textbf{Machine Learning Based methods}.
In general, the approaches belonging to the former category appeared earlier than the latter. In early research efforts, through the analysis of data, researchers identified characteristics allowing them to distinguish malicious domains from benign ones. However, over time, adversaries adapted their behavior, degrading the detection abilities of these approaches and forcing researchers to look for more descriptive characteristics. This arms race eventually made the number of characteristics to be considered in a single model unmanageable, pushing researchers towards \textbf{Machine Learning Based methods} able to automatically derive knowledge from high-dimensional data.
As the field developed further, researchers started to stack methods: to produce a list of malicious domains, several steps are chained, with the output of one method passed as input to the next. Since these techniques combine different detection methods, including machine learning and knowledge based ones, we call them \textbf{Hybrid approaches}.
\subsubsection{Knowledge Based Approaches}~\\
To detect domains involved in malicious activities, knowledge based approaches rely on expert insights. Such insights can be obtained through measurement studies, which explore anomalies relevant to malicious domain activities. There are a number of such studies in the literature~\cite{DnsMeasurementsAtRootServer_Brownlee2001, PassiveMonitoringOfDnsAnomalies_Zdrnja2007, DayAtTheRootOfTheInternet_Castro2008, AnalyzingDnsActivitiesOfBotProcesses_Morales2009, BayesianBotDetectionBasedOnDnsTrafficSimilarity_VillamarinSalomon2009, BotnetDetectionByMonitoringGroupActivitesInDnsTraffic_Choi2007, BotnetDetectionBasedOnDnsRecordsAndActiveProbing_Prieto2011, CrossingTheThreashold_Krishnan2013,DetectingDgaMalwareUsingNetflow_Grill2015, ExtendingBlackDomainNameListByUsingCoOccurrenceRelationBetweenDnsQueries_Sato2010, PrivacyPreservingDomainFluxBotnetDetection_Guerid2013}. For instance, Sato et al.~\cite{ExtendingBlackDomainNameListByUsingCoOccurrenceRelationBetweenDnsQueries_Sato2010} observed that malicious domains belonging to one malware family tend to be queried simultaneously. Hence, by measuring the degree of co-occurrence between known malicious and unknown domains and comparing the result with a threshold, it is possible to detect new malicious domains. Choi exploited the same observation in his works~\cite{BotnetDetectionByMonitoringGroupActivitesInDnsTraffic_Choi2007,BotGAD_Choi2009,BotGAD_Choi2012}. Krishnan et al.~\cite{CrossingTheThreashold_Krishnan2013} and Guerid et al.~\cite{PrivacyPreservingDomainFluxBotnetDetection_Guerid2013} observed that communities of bots in a network tend to exhibit similar patterns in terms of DNS queries that cannot be resolved by the DNS infrastructure.
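The sketch below illustrates the general idea of such co-occurrence heuristics with a toy score and an arbitrary threshold; it is not the exact metric of Sato et al.
\begin{verbatim}
# Toy co-occurrence heuristic: flag unknown domains that frequently
# appear in the same time window as known malicious domains.
from collections import defaultdict

def cooccurrence_scores(windows, known_bad, threshold=0.5):
    """windows: list of sets of domains queried in one time window.
    known_bad: set of blacklisted domains. Threshold is arbitrary."""
    seen = defaultdict(int)       # windows in which a domain appears
    together = defaultdict(int)   # windows shared with a known bad domain
    for window in windows:
        bad_present = bool(window & known_bad)
        for d in window - known_bad:
            seen[d] += 1
            if bad_present:
                together[d] += 1
    return {d: together[d] / seen[d] for d in seen
            if together[d] / seen[d] >= threshold}

windows = [{"bad1.example", "x.example"}, {"x.example", "ok.example"},
           {"bad1.example", "x.example"}]
print(cooccurrence_scores(windows, {"bad1.example"}))
\end{verbatim}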
Unfortunately, this family of approaches has limitations. Experts can be biased, intentionally or, most often, unintentionally. For instance, Grill et al.~\cite{DetectingDgaMalwareUsingNetflow_Grill2015} built their approach on the observation that DGA malware performs many DNS resolutions in order to find the right domain to communicate with. Therefore, for hosts infected with this type of malware, the number of DNS resolutions is larger than the number of subsequent communications. Comparing the ratio between them with a manually set threshold allowed the authors to detect hosts infected with the malware. However, modern browsers try to predict users' Internet behavior and resolve some domains ahead of time, even if they are never actually visited. Hence, in such a scenario, if the threshold is not adjusted automatically, the approach will generate false positives, since such behavior was unknown to the experts at the time of analysis. Furthermore, experts are usually not good at analyzing high-dimensional data, because it is difficult for a human being to grasp all the correlations and dependencies between features extracted from the data.
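For illustration, the following toy sketch mimics a resolution-to-communication ratio heuristic of the kind described above; the counts and the threshold value are placeholders, not the parameters used by Grill et al.
\begin{verbatim}
def flag_dga_hosts(resolutions, contacts, ratio_threshold=5.0):
    """resolutions / contacts: dicts host -> number of DNS resolutions
    and of subsequent connections observed in the same time window."""
    suspicious = []
    for host, n_res in resolutions.items():
        n_con = max(contacts.get(host, 0), 1)   # avoid division by zero
        if n_res / n_con >= ratio_threshold:
            suspicious.append(host)
    return suspicious

print(flag_dga_hosts({"10.0.0.5": 120, "10.0.0.7": 12},
                     {"10.0.0.5": 3, "10.0.0.7": 10}))
\end{verbatim}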
\subsubsection{Machine Learning Based Approaches}~\\
The majority of the methods developed to detect malicious domains are data-driven, with machine learning algorithms at their core~\cite{OnTheGroundTruthProblem_Stevanovic2015}. Generally, machine learning algorithms allow computers to learn from data without being explicitly programmed~\cite{SomeStudiesInMachineLearning_Samuel1959,MachineLearningBook_Mitchell1997}. Depending on what data is used for learning, existing machine learning techniques can be broadly divided into three subcategories:
\begin{itemize}
\item Supervised learning
\item Semi-supervised learning
\item Unsupervised learning
\end{itemize}
\paragraph{Supervised learning algorithms} These algorithms require the complete training set to be labeled, i.e., every feature vector corresponding to a data sample must be associated with a label representing the class this sample belongs to. With respect to the topic of our paper, this means that every domain name in the training set must be explicitly labeled as either malicious or benign. However, considering the amount of domains typically observed during the training period of experiments, it is almost impossible to label all of them correctly. Therefore, in the case of supervised learning approaches, the training dataset is usually trimmed to contain only the domains labeled with high confidence. Interested readers may refer to~\cite{SupervisedMachineLearning_Kotsiantis2007} for a review of supervised learning algorithms. Supervised machine learning approaches such as~\cite{Fluxor_Passerini2008, FrameworkForDnsBasedDetectionAndMitigation_Stalmans2011, DomainProfiler_Chiba2016, ExecScent_Nelms2013, Exposure_Bilge2011, Exposure_Bilge2014, Kopis_Antonakakis2011, Mentor_Kheir2014, DetectingMaliciousActivityWithDnsBackscatter_Fukuda2015, MeasurementAndAnalysisOfGlobalIpUsagePatternsOfFastFluxBotnets_Hu2011} are quite popular in this area due to their simplicity, automatic selection of the most relevant features, and effectiveness. Indeed, researchers relying on such approaches only need to extract features from raw data and train a classifier on a labeled dataset; applying the trained classifier to new data is straightforward. For example, DomainProfiler~\cite{DomainProfiler_Chiba2016} uses 55 features extracted from related IP addresses and domain names, and applies the Random Forest algorithm to discover abused domains. Antonakakis et al.~\cite{Kopis_Antonakakis2011} also employ a Random Forest; however, in this work the features are extracted from the passive DNS data of authoritative name servers.
Unfortunately, supervised learning approaches have several drawbacks. First, they require a labeled dataset for training. It is not easy to obtain a complete and fully correct dataset because of the fickle nature of DNS and blacklist data. As discussed in Section~\ref{subsec:sources_ground_truth}, manual labeling is time-consuming and does not result in extensive training datasets, while automatic labeling using information from different white- and blacklists is likewise prone to the inclusion of incorrect data~\cite{PaintItBlack_Kuhrer2014, OnTheGroundTruthProblem_Stevanovic2015, CanDnsBasedBlacklistsKeepUpWithBots_Ramachandran2006, ShadesOfGrey_Sinha2008, EmpiricalAnalysisOfPhishingBlacklists_Sheng2009, EmpiricalResearchOfIpBlacklists_Dietrich2009, EmpiricalAnalysisOfMalwareBlacklists_Kuhrer2012}. Second, supervised learning approaches are more vulnerable to overfitting to a particular dataset. If the labeled dataset is biased, a classifier may unintentionally learn incorrect distributions of the feature variables. Moreover, in a real feed of DNS data only a portion of domains can be assigned labels; in practice, the vast majority of samples are not labeled and thus cannot participate in training, making the training dataset unrepresentative of the real data.
\paragraph{Semi-supervised learning algorithms} Semi-supervised learning algorithms~\cite{SemiSupervisedLearningLiteratureSurvey_Zhu2005,SemiSupervisedLearningBook_Chapelle2010} have been proposed to overcome such limitations. They learn from both labeled and unlabeled data; the unlabeled data helps a machine learning algorithm to modify or reprioritize the hypotheses obtained from the labeled dataset~\cite{SemiSupervisedLearningLiteratureSurvey_Zhu2005}. Yet, the adoption of such algorithms is often quite challenging and requires more effort from researchers. We refer to~\cite{SemiSupervisedLearningLiteratureSurvey_Zhu2005} and~\cite{SemiSupervisedLearningBook_Chapelle2010} for more information about semi-supervised learning algorithms. Graph-based inference methods are among the most popular approaches in this category~\cite{DetectingMalwareBasedOnDnsGraphMining_Zou2015, DetectingMaliciousDomainsViaGraphInference_Manadhata2014, DetectionOfEarlyStageEnterpriseInfection_Oprea2015, GMAD_Lee2014, GuiltyByAssociation_Khalil2016, TopologyBasedFlowModel_Mishsky2015, TrackingMultipleCCBotnetsByAnalyzingDnsTraffic_Lee2010, LargeScaleGraphMiningForWebReputationInference_Huang2015}. For instance, Manadhata et al.~\cite{DetectingMaliciousDomainsViaGraphInference_Manadhata2014} detected malicious domains by applying the belief propagation algorithm to a host-domain graph extracted from enterprise HTTP proxy logs\footnote{We consider this work because the paper assures the same algorithm can be applied to DNS data.}. Assuming that malicious hosts are more likely to communicate with malware domains, while benign hosts only occasionally query malicious domains, and starting from a feed of known malicious and benign domains, the authors used belief propagation to assess the marginal probability that an unknown domain in the graph is malicious. In~\cite{DetectingMalwareBasedOnDnsGraphMining_Zou2015}, the authors predicted malicious hosts and domains by applying their method to two types of graphs. The first, the Domain Query Response Graph (DQRG), is built using information from DNS query-response pairs: clients' IP addresses are connected with the queried domain names, which in turn are associated with the returned domains' IP addresses. The second, the Passive DNS Graph (PDG), is built using domain names, their canonical connections, and the corresponding IP addresses (CNAME and A resource records) extracted from passive DNS data. Belief propagation is then applied to these graphs. Contrary to~\cite{DetectingMaliciousDomainsViaGraphInference_Manadhata2014}, where all benign domains receive the same initial score, Zou et al.~\cite{DetectingMalwareBasedOnDnsGraphMining_Zou2015} assign the value based on the domain's rank in the Alexa top $K$ list. Mishsky et al.~\cite{TopologyBasedFlowModel_Mishsky2015} applied a flow algorithm to a domain-IP graph. However, besides the weighted domain-IP edges commonly used in this area, this graph also includes domain-domain and IP-IP edges that represent a ``tell me who your friends are and I will tell you who you are'' relation.
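To convey the intuition behind such graph inference, the sketch below implements a drastically simplified score-propagation loop over a host-domain adjacency structure; it is not the belief propagation algorithm used in the cited papers, and the damping factor and iteration count are arbitrary.
\begin{verbatim}
def propagate_scores(graph, seeds, iterations=10, damping=0.85):
    """graph: dict node -> set of neighbor nodes; every node (including
    neighbors) must appear as a key. seeds: dict node -> fixed
    maliciousness score in [0, 1] for ground-truth nodes."""
    scores = {n: seeds.get(n, 0.0) for n in graph}
    for _ in range(iterations):
        updated = {}
        for node, neighbors in graph.items():
            if node in seeds:                 # keep ground truth fixed
                updated[node] = seeds[node]
                continue
            if neighbors:
                avg = sum(scores[n] for n in neighbors) / len(neighbors)
            else:
                avg = 0.0
            updated[node] = damping * avg     # dampened neighbor average
        scores = updated
    return scores

g = {"host1": {"bad.example", "x.example"}, "host2": {"x.example"},
     "bad.example": {"host1"}, "x.example": {"host1", "host2"}}
print(propagate_scores(g, {"bad.example": 1.0}))
\end{verbatim}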
The Cluster-and-Label semi-supervised learning technique is also widely used in the area~\cite{EmpiricalReexaminationOfGlobalDnsBehavior_Gao2013, ReexaminingDnsFromAGlobalRecursiveResolverPerspective_Gao2016, TrackingMultipleCCBotnetsByAnalyzingDnsTraffic_Lee2010, GMAD_Lee2014, OnThePotentialOfProactiveDomainBlacklisting_Felegyhazi2010}. Gao et al.~\cite{EmpiricalReexaminationOfGlobalDnsBehavior_Gao2013, ReexaminingDnsFromAGlobalRecursiveResolverPerspective_Gao2016} proposed an approach to detect malicious domains through clustering based on co-occurrence patterns. Queries to the DNS from the same malicious agents frequently co-occur, e.g., when a bot tries to resolve algorithmically generated domain names in order to find the IP address of its master; in this case, the same domain names will frequently appear together in DNS resolver logs. The authors exploited this observation in the following way. First, they performed coarse-grained clustering of the traffic: they selected a time window and, for every anchor domain (a malicious domain from a labeled dataset), measured how often it co-occurs with other domains within that window. They calculated two metrics: term frequency, which shows how often other domain names are queried together with the anchor domain, and inverse document frequency, which shows how rarely other domains appear across all the windows. Using predefined thresholds for both metrics, the authors selected coarse-grained clusters associated with every anchor domain. Then, to perform fine-grained clustering, every domain is assigned a bit vector whose length equals the number of occurrences of anchor domains during the observation period; a bit in this vector is set if the query to the domain happens within a small time window of the query to the corresponding anchor occurrence. These vectors are then clustered using X-means to obtain fine-grained clusters. Lee and Lee~\cite{GMAD_Lee2014} proposed an approach that builds a graph representing the sequences of client-domain communications, which they call the \textit{Domain Name Travel Graph (DNTG)}. A node in this directed graph represents a domain, while an edge is added between two nodes if the corresponding domains have been queried sequentially by the same client. The weight of an edge grows with the number of transitions between those domains, while the direction of an edge shows the order of the transitions. An edge is also associated with a client-sharing-ratio score, the Jaccard similarity of the sets of clients that queried the two domains. After the graph is built, it is clustered using the values assigned to the edges and some predefined thresholds; the authors then mark all the domains in clusters containing blacklisted domains as malicious.
At the same time, this type of algorithm is not a silver bullet in the case of limited ground truth. The use of unlabeled data does not always help; hence, researchers must put additional effort into the validation of the proposed methods. Moreover, the problems related to obtaining a correctly labeled dataset are relevant here as well.
\paragraph{Unsupervised learning algorithms} Unsupervised learning methods~\cite{IdentifyingSuspiciousActivitiesThroughDnsFailureGraphAnalysis_Jiang2010, ModelingDnsAgilityWithDnsMap_Berger2013, KinderedDomains_Thomas2014, Smash_Zhang2015, BotGAD_Choi2009, BotGAD_Choi2012} have been introduced, among other reasons, to eliminate the dependence on labeled datasets. Unsupervised learning approaches, a.k.a. clustering techniques~\cite{DataClusteringReview_Jain1999}, automatically divide domains into clusters using only the internal properties of the data. In theory, by carefully selecting features that exhibit completely different behavior for malicious and benign domains, it is possible to enable clustering algorithms to divide the provided samples into two clusters; a researcher then decides which cluster contains the malicious domains and which the benign ones~\cite{BotGAD_Choi2009,BotGAD_Choi2012,OnTheGroundTruthProblem_Stevanovic2015}. However, some approaches, e.g.,~\cite{KinderedDomains_Thomas2014, Smash_Zhang2015}, do not follow this path and go a step further: they group domains across several dimensions related to different malicious behaviors, and then select the clusters of malicious domains by correlating the identified groups with each other.
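A minimal sketch of this idea is shown below: toy feature vectors are split into two clusters with $k$-means, and the analyst then decides which cluster corresponds to malicious domains; the features and values are placeholders.
\begin{verbatim}
from sklearn.cluster import KMeans

# Toy feature vectors (e.g., avg TTL, #distinct IPs, name entropy).
X = [[3600.0, 1, 2.3], [30.0, 40, 3.9], [1800.0, 2, 2.5], [25.0, 35, 3.8]]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# The analyst inspects the two clusters and decides which one is malicious.
print(km.labels_)
\end{verbatim}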
Although such approaches have a clear benefit in terms of independence from labeled data, they are not very common in the literature. We believe this is mainly due to the fact that these techniques are the most difficult to design. Additionally, given that labeled datasets usually exist in this area (although neither complete nor fully correct), researchers prefer to explore supervised and semi-supervised methods, which are easier to employ.
\subsubsection{Hybrid Approaches}~\\
Although a single detection algorithm can be categorized according to the provided classification, the majority of the existing real-world approaches are hybrid and employ several algorithms of different types to produce a result. This can be a combination of machine learning techniques~\cite{DetectingMaliciousFluxServiceNetworks_Perdisci2009, Notos_Antonakakis2010, FluxBuster_Perdisci2012, Pleiades_Antonakakis2012, DetectionOfEarlyStageEnterpriseInfection_Oprea2015}. For instance, such an approach is used in the Notos system~\cite{Notos_Antonakakis2010}: during the first stage, it trains five meta-classifiers to evaluate the closeness of a domain to predefined groups of domains (Popular, Common, Akamai, CDN and Dynamic DNS) using a supervised learning technique; the calculated closeness scores are then used as features for a second-stage supervised learning algorithm. Oprea et al.~\cite{DetectionOfEarlyStageEnterpriseInfection_Oprea2015} combine a semi-supervised method (belief propagation) with a supervised learning algorithm (linear regression). A mix of machine learning and knowledge based methods is also used in the area~\cite{Segugio_Rahbarinia2015, Segugio_Rahbarinia2016, SemiSupervisedTimeSeriesModeling_Yu2014}. For instance, the Segugio system~\cite{Segugio_Rahbarinia2015, Segugio_Rahbarinia2016} combines graph-based prefiltering with supervised machine learning. It works as follows. First, using DNS data collected in front of a recursive DNS resolver, the system builds a host-domain graph. Given a set of benign and malicious domains and some heuristics, it filters this graph: known domain nodes are marked as benign or malicious, respectively, leaving the rest as unknown. Similarly, the system labels host nodes as malicious if they query one of the malicious domains, as benign if they resolve only benign domains, and as unknown otherwise. After this, the system prunes the graph by removing: 1) machines querying 5 domains or less; 2) proxy hosts (machines querying substantially more domains than other machines); 3) domains that are queried by only one machine; and 4) very popular domains (domains queried by a very large number of machines). Every domain node left in the graph is then assigned the following properties: 1) the set of IP addresses the domain pointed to during the observation window; and 2) how long ago the domain was first queried with respect to the observation window. Using this information, Segugio calculates several features: 1) machine behavior features (the fraction of known infected machines, the fraction of unknown machines, the total number of machines); 2) domain activity features (the number of days a domain was actively queried during the last two weeks, the number of consecutive days a domain was queried); and 3) IP abuse features (the fraction of IPs associated with known malware domains during the selected time window, the number of IPs and /24's used by unknown domains during the time window). Using these features and supervised machine learning algorithms, the authors predict the labels of unknown domains.
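The following sketch illustrates graph pruning rules in the spirit of the ones listed above, operating on a simple host-to-domains mapping; apart from the ``5 domains or less'' rule taken from the description, the cutoff values are arbitrary placeholders rather than Segugio's actual parameters.
\begin{verbatim}
def prune_graph(host_queries, popular_cutoff=1000, proxy_cutoff=500):
    """host_queries: dict host -> set of queried domains.
    Cutoffs are illustrative placeholders, not the original values."""
    # 1) drop machines querying 5 domains or less; 2) drop proxy-like hosts
    hosts = {h: ds for h, ds in host_queries.items()
             if 5 < len(ds) <= proxy_cutoff}
    # count, for each domain, how many remaining machines query it
    domain_clients = {}
    for h, ds in hosts.items():
        for d in ds:
            domain_clients.setdefault(d, set()).add(h)
    # 3) drop domains queried by a single machine; 4) drop very popular ones
    keep = {d for d, cs in domain_clients.items()
            if 1 < len(cs) < popular_cutoff}
    return {h: ds & keep for h, ds in hosts.items()}
\end{verbatim}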
\subsection{Outcome}
\label{subsec:outcome}
In the end, all we want to know is whether a domain is malicious or not. However, the term \textit{malicious} can be understood in different ways. For instance, some domains may be involved in spamming or phishing, serving C\&C communications, or simply acting as proxies for other types of campaigns. Among the many methods proposed, some are capable of recognizing specific types of ``maliciousness'', whereas others cannot explain why they judge a certain domain to be malicious or not. Therefore, in this paper we divide approaches, according to the outcome of their operation, into those detecting \textit{specific malicious behavior} and those that are \textit{agnostic to malicious behavior}.
\paragraph{Malicious behavior agnostic approaches}
Roughly speaking, \textit{malicious behavior agnostic approaches} do not try to capture a particular malicious behavior. Instead, they base their intelligence on different types of \textit{associations} between domains. The approaches of this type~\cite{DetectingMalwareBasedOnDnsGraphMining_Zou2015, DetectingMaliciousDomainsViaGraphInference_Manadhata2014, DetectionOfEarlyStageEnterpriseInfection_Oprea2015, GMAD_Lee2014, GuiltyByAssociation_Khalil2016, TopologyBasedFlowModel_Mishsky2015} predict the maliciousness of domains by exploiting connections with the domains constituting the ground truth. Such a technique is sometimes called ``guilty by association''~\cite{GuiltyByAssociation_Khalil2016}: if a domain has a strong connection with a group of known malicious domains, then most probably it is also involved in malicious activities. For instance, if adult-related domains are used as ground truth, such approaches will produce a list of domains of the same type, provided these domains exhibit the same kind of association. Similarly, if such approaches are fed with spam domains, they will predict domains related to spam activities. At the same time, only a few blacklists report malicious domains of a particular type, e.g., PhishTank~\cite{PhishTank} or Spamhaus~\cite{Spamhaus}. Moreover, the same infrastructure is often used for different malicious activities. Therefore, even if an approach is fed with ground truth of a particular type, the output may include other types of malicious domains. For example, an attacker may use a server with a single IP address to host different types of malicious domains; if an approach builds associations between domains according to common IP addresses, it will establish a connection between these domains.
\paragraph{Malicious behavior specific approaches}
On the contrary, \textit{malicious behavior specific approaches} are built to capture specific features relevant to a particular malicious behavior. For instance, there are a number of approaches that specifically try to capture lexical~\cite{DetectingAlgorithmicallyGeneratedMaliciousDomainNames_Yadav2010, DetectingAlgorithmicallyGeneratedDomainFluxAttacks_Yadav2012, MaliciousAutomaticallyGeneratedDomainNameDetection_Haddadi2013} or resolution~\cite{DetectingDgaMalwareUsingNetflow_Grill2015, Pleiades_Antonakakis2012} features suitable for the detection of automatically generated domain names. Some approaches extract features detecting multiple malicious activities. For instance, Bilge et al.~\cite{Exposure_Bilge2011, Exposure_Bilge2014} extract domain name based features which are relevant (although perhaps not perfect~\cite{DeepDGA_Anderson2016, StealthyDomainGenerationAlgorithms_Fu2017}) for capturing DGAs, and DNS answer-based features (e.g., the number of distinct IP addresses, TTL values, etc.), which are apt for the detection of domains exhibiting IP fluxing behavior.
\subsection{Challenges}
\label{subsec:approaches_challenges}
\subsubsection{Feature Related Challenges}~\\
Finding meaningful features is not easy in any research area, but it is especially challenging in the field of malicious domain detection. Features not only need to be well crafted to separate benign from malicious domains, but also have to be resilient to potential manipulation by miscreants. For example, certain DGAs produce easily recognizable names (e.g., ``ccd2.cn'', ``syx4.cn'', ``oif1.cn'', etc.), and one could see this as a powerful feature to identify these malicious domain names. While this is currently true for a very limited number of DGAs, it is trivial for the attacker to render this feature inoperative by simply changing some parameters of the domain generation algorithm. On the other hand, a feature that takes into account the limited capacity of certain resources (e.g., the number of public IP addresses) is more robust, because it is harder to forge without negatively impacting the attacker's gain.
Unfortunately, it is not easy to evaluate the robustness of features in a systematic and measurable way. The importance of the problem has been recognized by many researchers, e.g., in~\cite{Smash_Zhang2015, DnsRadar_Ma2015, DetectionOfEarlyStageEnterpriseInfection_Oprea2015, ExecScent_Nelms2013, FluxBuster_Perdisci2012, Psybog_Kwon2014, Kopis_Antonakakis2011, DomainProfiler_Chiba2016, GMAD_Lee2014}. However, to the best of our knowledge, none of the existing approaches provides a framework that can be used to quantitatively evaluate the robustness of features. Stinson et al.~\cite{TowardsSystematicEvaluationOfTheEvadabilityOfBotnetDetectionMethods_Stinson2008} present a qualitative, high-level evaluation of the evadability of some botnet detection approaches. Others, such as Hao et al.~\cite{Predator_Hao2016}, qualitatively discuss the robustness of some of the important features used in their approach. Nevertheless, providing a framework that offers qualitative and quantitative evaluation of feature robustness remains an open problem that calls for attention from the research community. Such a framework has to consider simultaneously the complexity of forging features and the impact of forging on attack utility. We argue that such a framework could be an effective means against adaptive attackers, as it would help researchers and security experts build detection tools leveraging features whose forging negatively impacts the attackers' benefits.
\subsubsection{Detection Methods Related Challenges}~\\
Even though the effectiveness of a detection method is important and receives due attention in most of the approaches, its performance is somewhat overlooked. However, a thorough performance analysis is as important as effectiveness analysis for practical considerations and real-world deployments. In real-world deployments, the amount and the rate of DNS traffic can be considerably larger than those of the datasets used in publications. Hence, detection approaches have to be scalable to work in such production systems. Moreover, some approaches require large datasets to train and to tune their detection algorithms. To address this problem, some authors propose to use distributed computing platforms such as Apache Hadoop~\cite{ApacheHadoop} or Apache Giraph~\cite{ApacheGiraph}. Others reduce their dataset sizes by filtering out data elements deemed to be less important. For example, Exposure~\cite{Exposure_Bilge2011, Exposure_Bilge2014} filters out all domains from the Alexa Top 1000 domains~\cite{Alexa} and those that have been queried less than 20 times during a predefined period of time. Unfortunately, such filtering may result in overlooking important sets of domains which could potentially be malicious. In such cases, we need a systematic performance evaluation that takes into account not only the complexity and scalability of a detection method but also the characteristics of the filtering and preprocessing steps required for the needed data size reduction.
Next to the performance evaluation challenge, the second challenge faced by malicious domain detection methods relates to the latency incurred before detection. Some approaches, like~\cite{Exposure_Bilge2011, Exposure_Bilge2014}, rely on aggregated data or run in batch mode, and hence have to observe a number of DNS requests before being able to make a decision about the malicious status of a domain. However, the delay incurred by such approaches may render them ineffective against domains that serve malicious activities for short periods of time, as is the case with domain fluxing. For example, Sheng et al.~\cite{EmpiricalAnalysisOfPhishingBlacklists_Sheng2009} showed that ``63\% of the phishing campaigns lasted for less than two hours''. On the other hand, some approaches leverage real-time features (as opposed to aggregates) and can flag domains on the fly. However, non-aggregated features are usually easier to forge compared to aggregated ones. Both categories of approaches have advantages and limitations, and hence the optimal selection of one over the other is heavily influenced by the deployment environment.
The third challenge is linked to the adaptive nature of adversaries. They continuously adapt their behavior to evade detection tools, and detection techniques have to regularly retrain and adapt their models to capture such changes. Moreover, this also means that, over time, the techniques themselves become obsolete, rendering the corresponding approaches unusable.
The fourth challenge lies in the lack of any systematic way to quantitatively compare and contrast the effectiveness and the efficiency of the various domain detection methods. To obtain reliable quantitative results, every approach should be reproducible and measurable. Reproducibility means the results can be regenerated given the same dataset used in the initial training, while measurability means the use of quantitative metrics in evaluating effectiveness and performance. Unfortunately, the authors of approaches rarely share datasets and implementation code, possibly due to privacy, proprietary, and sometimes security related issues, which makes it hard to reproduce the results and considerably complicates comparison. One way to overcome this challenge is to implement the tools proposed in these works using information available in public sources such as papers and technical reports. However, the complexity of such tools is usually substantial, and the public sources rarely contain sufficiently detailed information to produce a faithful implementation of the approach.
\subsubsection{Outcome Related Challenges}~\\
As a result of an algorithm's execution, the system predicts whether a domain is malicious or not. However, a domain may be malicious in different respects. For instance, in the obvious case, a domain can be defined as malicious because it is used to send spam or to distribute malicious software. Unfortunately, what constitutes malicious behaviour is not always that well defined. An example is domains hosting adult content. Some approaches, e.g., Predator~\cite{Predator_Hao2016}, consider these domains as malicious because they are often used in spam-related campaigns. Others~\cite{PaintItBlack_Kuhrer2014,DetectingAlgorithmicallyGeneratedMaliciousDomainNames_Yadav2010,Segugio_Rahbarinia2015} consider such domains as benign. At the same time, it has been shown that they often cause higher false positive rates, especially if the ground truth contains this type of domains~\cite{ReevaluatingWisdomOfCrowds_Chia2012}. More generally, Wondracek et al.~\cite{IsTheInternetForPorn_Wondracek2010} confirmed that adult domains are often used for malware distribution and aggressive marketing, and should not be blindly considered benign. Hence, researchers should clearly identify in their work which domains are considered malicious.
\section{Domain Name System Background}
\label{sec:background}
This section aims at setting a common background on DNS and introducing the necessary terminology. Readers who are familiar with the topic can safely skip this section and move on to Section~\ref{sec:datasources}.
\subsection{Domain Name System Operation}
\label{subsec:dns_operation}
The \textit{Domain Name System} is a hierarchical decentralized naming system that decouples the physical location (i.e., IP address) of a service from its logical address (i.e., its domain name), so that one can connect to the service using only its domain name. The DNS protocol was introduced in November 1983 (IETF RFCs 882~\cite{RFC0882} and 883~\cite{RFC0883}, later superseded by RFCs 1034~\cite{RFC1034} and 1035~\cite{RFC1035}, respectively), and it is now an essential part of the Internet infrastructure, as exemplified by the attack in October 2016 against the DynDNS provider~\cite{DynDnsAttack}. By overloading the provider's DNS servers with spurious requests, the attack prevented an extremely large portion of users from connecting to the Internet resources they needed access to. In this section, we briefly describe the key DNS concepts; the interested reader is referred to IETF RFC 1034~\cite{RFC1034} for more details.
Domain names are organized as a suffix tree structure called the domain namespace. The root of this tree is the domain called \emph{root}, represented by a zero-length label. The dot character is used in domain names to separate hierarchy levels, and the parts between the dots are called labels. It should be noted that the trailing dot separating the root domain is usually omitted; in the rest of the paper we follow this convention. The rightmost label is called the \emph{Top Level Domain} (TLD), such as ``com''. The domain directly to the left of a TLD is a \emph{Second Level Domain} (2LD), e.g., ``example.com''. A \emph{fully qualified domain name} (FQDN), e.g., ``www.example.com'', identifies a single node within the domain tree and is associated with the resource information composed of separate Resource Records (\emph{RR}s). A Resource Record is defined by an \textbf{owner} (the domain name where the RR is found), a \textbf{type} field (an encoded 16 bit value that specifies the type of the RR), a \textbf{time-to-live} (TTL) value (the time in seconds during which an RR may be cached), and an \textbf{RDATA} field, whose content and semantics depend on the value of the \textbf{type} field. In this work we are interested in the following RR types (see~\cite{RFC1035} for the whole list): \texttt{A/AAAA} stores an IPv4/IPv6 host address; \texttt{NS} points to the authoritative server storing the information about the domain; \texttt{MX} is used to determine where mail should be sent; \texttt{PTR} maps a host's IP address to its domain name or host name.
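For readers who want to inspect such records directly, the short sketch below queries a few RR types for a domain using the \texttt{dnspython} library (assuming version 2.0 or later is installed); it is merely illustrative and not part of any detection approach discussed here.
\begin{verbatim}
# Illustrative only: look up a few record types and print their TTLs.
import dns.resolver

for rtype in ("A", "NS", "MX"):
    try:
        answer = dns.resolver.resolve("example.com", rtype)
    except Exception:
        continue                      # record type absent or lookup failed
    print(rtype, "TTL:", answer.rrset.ttl)
    for rdata in answer:
        print("  ", rdata.to_text())
\end{verbatim}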
The domain namespace information (in the form of \emph{resource records}) is stored in a hierarchical distributed database. Given the hierarchical structure, it is possible to divide it into separate \emph{zones} (all domains under a particular node) and delegate control over them to different authorities, which maintain this information in \emph{zone files}. The Internet Corporation for Assigned Names and Numbers (ICANN)~\cite{ICANN}, a non-profit organization, is responsible for the creation of TLDs and the delegation of their control to companies called \emph{registries}, which are in charge of all the domains ending with that particular TLD. Registries work in close collaboration with \emph{registrars}, companies like GoDaddy, which sell second level domains to domain owners (\emph{registrants}) and provide billing and customer support.
To query information from the DNS database, a client specifies in its request a domain name and the type of resource record it wants to obtain. The main algorithm specifying how standard queries are processed is described in RFC 1034~\cite{RFC1034}. For the sake of this paper, it suffices to say (see also Figure~\ref{fig:process}) that clients in need of a domain name resolution, e.g., in need of knowing which IP corresponds to a given domain name, use the service of a \emph{resolver}. The resolver runs what is called a \emph{recursive} query on behalf of the clients contacting it. This means that it will do its best to eventually return the needed response to the client and, to do so, may send a number of queries to various name servers without the client being involved. Once the resolver obtains an IP for a given domain name, it caches the information and, in most cases, will not query the name servers again when other clients request the same information. The important point to note here is that, due to caching, the resolver is the only one to have a complete view of how many clients use its services to resolve a given domain name. If a resolver does not have an answer in its cache for a given request, it looks for the \emph{authoritative name server} in charge of that domain name. This search usually starts by asking the so-called \emph{root name servers}. Authoritative name servers typically do not respond to recursive queries but, instead, to \emph{iterative} ones, providing the information at their disposal and leaving it up to the requester to continue its quest by following the leads provided.
\subsection{Domain Name System Security}
\label{subsec:dns_security}
DNS security has received a lot of attention from the research community over the years. There are plenty of attacks in which DNS is involved, and an even larger number of methods to detect them. In this section, we briefly outline the area of \emph{DNS Security}, splitting it into four subareas: \emph{Securing DNS}, \emph{Securing Data Provided by DNS}, \emph{Securing Users from Attacks Leveraging DNS Disingenuously}, and \emph{Securing Users from Attacks Leveraging DNS Genuinely}.
\smallbreak\noindent\textbf{Securing DNS}. As DNS is a cornerstone technology of the Internet, all of its components have been widely attacked and exploited by adversaries. The DNS infrastructure has been targeted by a number of denial of service attempts, the latest major case being the already mentioned attack on the DynDNS infrastructure~\cite{DynDnsAttack}. DNS software has been the subject of attacks for many years now. According to Hoglund and McGraw~\cite{ExploitingSoftware_Hoglund2004}, one of the very first reported Linux worms, the ADM worm, was spreading in a stealthy way in 1999 thanks to a buffer overflow vulnerability in DNS servers. Better software security engineering techniques, large numbers of replicas for key DNS servers, and the deployment of anti-DDoS mitigation tools are among the various solutions that have, quite successfully, been brought forward to secure DNS against these attacks.
\smallbreak\noindent\textbf{Securing Data Provided by DNS}. Attackers constantly try to subvert the data provided by legitimate DNS servers, because doing so allows them to redirect traffic to resources they control. In June 2008, two of the world's most important Internet regulatory web sites, ICANN and IANA, were hijacked~\cite{IcannAndIanaSitesHacked_Kravets2008}, which led to the creation of a set of best practices~\cite{SAC40_SSAC2009} that registrars should implement in order to keep the domain names of their customers secure. DNS hijacking and DNS poisoning attacks have been known for more than 25 years, with the seminal work by Steven Bellovin produced in 1990 but withheld from publication until 1995~\cite{UsingDomainNameSystemForSystemBreakIns_Bellovin1995}, followed by the 2002 birthday paradox attack~\cite{VUNote457875}. However, only in 2008, with the so-called Kaminsky attack~\cite{VUNote800113}, did people really start paying attention to them. A number of approaches have been proposed to detect such attacks, but the ultimate protection comes with the ever wider deployment of DNSSEC. Attackers have also misused the popularity of some web sites by (re-)registering their domain names just after their expiration dates, usually taking advantage of the oversight of their legitimate owners who failed to renew the registration in due time~\cite{ThisGuyBoughtGoogle_Carson2016}. In this case, attackers exploit what is usually known as the ``residual trust'' of these stolen domains~\cite{DomainZ_Lever2016, WhoisLostInTranslation_Lauinger2016}, collecting money from ads shown to regular visitors of the domain, hijacking emails, or pushing malicious content to the fooled client machines~\cite{AllYourDnsRecordsPointToUs_Liu2016, WhoisLostInTranslation_Lauinger2016, DomainZ_Lever2016}. Another group of attacks falling into this category is generally known as cybersquatting~\cite{Cybersquatting_Wright2012}, in which an attacker registers an Internet domain name somehow similar to a victim's domain name. Typosquatting is one such attack; it exploits common mistakes users make when typing domain names into the address bar. Being very common, this attack has received a lot of attention from the research community~\cite{SUT_Banerjee2011, TheLongTaile_Szurdi2014, SevenMonthsWorthOfMistakes_Agten2015, EverySecondsCounts_Khan2015}. Other attacks of this group include bitsquatting~\cite{Bitsquatting_Nikiforakis2013}, soundsquatting~\cite{Soundsquatting_Nikiforakis2014}, and combosquatting~\cite{HidingInPlainSight_Kintis2017}.
\smallbreak\noindent\textbf{Securing Users from Attacks Leveraging DNS Disingenuously}. A third class of DNS security threats has to do with disingenuous uses of the protocol. DNS is among the very few protocols allowed in probably every computer network. Not surprisingly, a number of malware samples and botnets have misused it to enable the communications between compromised hosts and their command and control servers. Various approaches have been exploited, e.g., using well-known fields such as the free-form TXT field, or encoding commands in the queried domain names~\cite{OnBotnetsThatUseDnsForCommandAndControl_Dietrich2011}. The same techniques have also been used for data exfiltration and malicious payload distribution~\cite{DetectionOfMaliciousPayloadDistributionChannelsInDns_MertKara2014}. Last but not least, more recently, attackers have taken advantage of the fact that DNS replies are UDP based and much larger than the queries sent. They leverage these properties to mount denial of service attacks, dubbed Reflective Denial of Service attacks~\cite{AmplificationHell_Rossow2014}, in which genuine DNS servers are used to flood victims with large amounts of unwanted replies.
\smallbreak\noindent\textbf{Securing Users from Attacks Leveraging DNS Genuinely}. In this survey we concentrate on attacks that leverage DNS genuinely, i.e., that use the properties and features of the DNS protocol to make themselves more resilient. To run malicious campaigns, miscreants need various kinds of services hosted on remote servers. In the early days, it was a common practice for malware to hardcode the IP addresses of the servers used to receive orders or to exfiltrate data. That practice was abandoned very rapidly because the capture of a single malware sample could lead to the extraction of all these IPs, which was enough to shut down the whole botnet. It became clear that these servers needed to be able to move across the IP space. This is exactly what DNS had been made for.
Moreover, to avoid being blacklisted, domain names also have to move across the domain name space. There are two main techniques used to achieve this agile behavior: \textbf{Domain-Flux} and \textbf{IP-Flux} (or \textbf{Fast-Flux}). The former refers to the strategy of associating several FQDNs with one IP address. Using a \textit{Domain Generation Algorithm} (DGA), malware is able to dynamically generate new domain names (see~\cite{TaxonomyOfDga_Sood2016} for a taxonomy of DGAs), usually as a function of the date and time. This technique makes it difficult, short of having reverse engineered the DGA, to block the domain names used by a given botnet, since these domains have a very short lifetime. The latter (IP-Flux) is characterized by the continuous change of the IP addresses associated with a particular domain name. In this case, the malware builds a \textit{Fast Flux Service Network} (FFSN)~\cite{MeasuringAndDetectingFastFluxServiceNetworks_Holz2008} consisting of hundreds or even thousands of IP addresses assigned to a given domain name. When such a domain is queried, it resolves to these IPs, which are frequently changed, thus protecting the real location of the malicious service. Usually, the large pool of rotating IP addresses is not the final destination of the request for content; the addresses are just stopovers, so to speak, on the way to the final destination, possibly after several other stops. \textit{Double-flux} networks are a more complex technique providing an additional layer of redundancy: both the DNS A record sets and the authoritative NS records for a malicious domain are continually changed in a round robin manner and advertised into the fast flux service network. Clearly, these techniques can also be used in combination, providing a many-to-many relationship between FQDNs and IP addresses.
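To give a flavor of domain-flux, the toy sketch below generates pseudo-random domain names from a seed and the current date; real DGAs differ widely in design, and this example is purely illustrative.
\begin{verbatim}
# Toy, date-seeded domain generation; not any real malware family's DGA.
import hashlib
from datetime import date

def toy_dga(seed, day, count=5, tld=".example"):
    """Derive `count` pseudo-random labels from a seed and a date."""
    domains = []
    for i in range(count):
        data = f"{seed}-{day.isoformat()}-{i}".encode()
        label = hashlib.md5(data).hexdigest()[:12]
        domains.append(label + tld)
    return domains

print(toy_dga("botnet-x", date(2017, 1, 1)))
\end{verbatim}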
Although these techniques are aligned with the specification of the DNS protocol, malware has abused them in various ways to improve the mobility of its servers and, thus, its resilience. The good news is that these techniques leave traces within DNS data. Such traces give researchers important clues to develop detection approaches that take into account the changes in domain-IP mappings, using the unique viewpoint provided by the observation of DNS traffic. In this survey, we focus on the approaches that are designed to detect domains involved in such malicious activities through the analysis of the relevant traces left in DNS data.
\section{Conclusion}
\label{sec:conclusion}
DNS data carry rich traces of Internet activities and are a powerful resource in the fight against malicious domains, which are a key platform for a variety of attacks. In this paper, we presented a large body of research efforts on utilizing DNS data to detect malicious domains. Table~\ref{tab:summary} summarizes our systematization scheme and findings. As our survey shows, to design a malicious domain detection scheme, one has to consider the following major questions: (1) data sources (Section~\ref{sec:datasources}): what types of DNS data, ground truth and auxiliary information are available;
(2) features and data analysis techniques (Section~\ref{sec:approaches}): how to derive features to match intuitions of malicious behaviors, and what types of detection techniques the malicious domain discovery problem can be mapped to;
(3) evaluation strategies and metrics (Section~\ref{sec:evaluation}): how well standard evaluation methodologies fit the detection problem in a specific application context, whether there is a need for additional evaluation strategies that better capture the operational settings when a detection scheme is deployed in practice, how to evaluate the robustness of a technique given the adaptive nature of attackers, and what metrics to use for these purposes.
\input{tables/table_summary}
Our analysis identifies several significant challenges that hinder the advancement of the field. First, in terms of data availability, we observe that large-scale real DNS data logs are seldom publicly available, and sharing of such information across organizational boundaries often faces legal, privacy-related or bureaucratic obstacles. Also, in terms of ground truth, there is no widely agreed practice in the community on how to build ground truth from noisy public intelligence. Second, in terms of features and detection techniques, we have highlighted a number of challenges such as the resilience of the features, the adaptability of algorithms to evading attackers, and the interpretation of the results. Third, in terms of evaluation strategies and metrics, current research lacks established theoretical foundations and systematic empirical frameworks to evaluate the robustness of malicious domain detection schemes.
By providing a deep overview of the area, identifying existing challenges, and sharing the insights we obtained while doing research in this field, we hope this survey will facilitate future research and the development of methods and applications to fight against attacks leveraging malicious domains.
\section{Data Sources Definitions}
\label{sec:datasources}
In this section we categorize the different types of DNS data, auxiliary information and ground truth that are used in the schemes proposed in the literature. The way these data are collected has a significant impact on the underlying assumptions and intuitions of malicious domain detection schemes. Table~\ref{tab:datasources} presents a short summary of this section and itemizes relevant articles. Notice that the table is not exhaustive; it only includes the most relevant examples of sources and articles.
\input{tables/table_datasources}
\subsection{Sources of DNS Data}
\label{subsec:dns_data}
The collection of DNS data can be categorized along the following two orthogonal dimensions: (1) \emph{where} and (2) \emph{how} the data is collected.
\smallbreak\noindent\textbf{Where the Data is Collected}. Due to the distributed nature of the DNS infrastructure, multiple locations can be considered to collect information about DNS queries and replies. Among all servers involved, the resolver (as defined in Section~\ref{sec:background}) is unique as it is the only location which has access to queries coming directly from client machines. Therefore, in the following, we distinguish two specific cases for the sources of the data. We call the first one ``\textbf{Host-Resolver}''. It refers to DNS data obtained by observing the communications between an end host and its resolver. The second is called ``\textbf{DNS-DNS}'' and refers to the data that can be obtained by observing the communications between two DNS servers (and one of them could, possibly, be a resolver).
\smallbreak\noindent\textbf{How the Data is Collected}. Obtaining information about the existing associations between IPs and domain names at a given point in time can be done in two ways. One way is to \textbf{actively} and regularly resolve a large collection of domain names to obtain that information. Another way is to \textbf{passively} observe all the requests sent to DNS servers and extract the necessary data. In the following, we distinguish these two methods as \textbf{Active vs. Passive DNS data collection.}
\subsubsection{Where the Data is Collected}
\paragraph{Host-Resolver (Flows 1 and 8 in Figure~\ref{fig:process})}
One major advantage of the data captured at the internal interface of a resolver is that it provides detailed information about the clients in terms of DNS queries and responses, which may directly link to certain types of malicious behaviors~\cite{BotnetDetectionBasedOnDnsRecordsAndActiveProbing_Prieto2011, BotnetDetectionByMonitoringGroupActivitesInDnsTraffic_Choi2007, BotGAD_Choi2012, DetectingMaliciousDomainsViaGraphInference_Manadhata2014, Segugio_Rahbarinia2015, Segugio_Rahbarinia2016, DetectionOfEarlyStageEnterpriseInfection_Oprea2015}. For example, hosts controlled by a botnet often exhibit similar DNS query patterns in terms of both queried domains and temporal behavior. Choi et al.~\cite{BotGAD_Choi2009, BotGAD_Choi2012} use the information ``what host queries what domain'' to build, for every domain, a matrix that shows which machine queried this particular domain during which period of time. Such a representation is very handy because it allows analysts to prune the matrix both column-wise and row-wise, correcting errors that could arise from the deactivation of part of a botnet or from a misconfigured time-window parameter. In the Segugio system~\cite{Segugio_Rahbarinia2015, Segugio_Rahbarinia2016}, this information is used to build a host-domain graph representing the ``who-queries-what'' relation between hosts and domains. It would be harder to observe such behavior patterns from DNS-DNS data due to caching by the intermediate servers. Another advantage of this source of data is the ease of access: any company or research institute can directly deploy sensors at its own resolver(s), requiring no cooperation with other parties. For these reasons, many existing schemes for malicious domain detection are built on data from resolvers, in particular those whose features are tied to the behavior of individual hosts. It should also be mentioned that approaches using Host-Resolver DNS data may also be adapted to detect malicious hosts, roughly speaking, the hosts which query malicious domains.
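As a simplified illustration of the ``who-queries-what'' intuition (a minimal sketch, not the actual BotGAD or Segugio implementation), the following Python fragment groups hypothetical Host-Resolver query tuples per domain and time window; the log format and the window length are assumptions.

\begin{verbatim}
from collections import defaultdict

# Hypothetical Host-Resolver log entries: (epoch seconds, client IP, queried FQDN).
queries = [
    (1489400000, "10.0.0.5", "login.example.com"),
    (1489400030, "10.0.0.7", "xj3kq9.example-dga.info"),
    (1489400600, "10.0.0.5", "xj3kq9.example-dga.info"),
]

WINDOW = 300  # 5-minute buckets; the window length is a tunable assumption

# domain -> host -> set of time buckets in which that host queried the domain
activity = defaultdict(lambda: defaultdict(set))
for ts, host, domain in queries:
    activity[domain][host].add(ts // WINDOW)

# Toy group-activity indicator: the number of distinct hosts per domain.
# Bot-infected hosts tend to query the same rendezvous domains in a
# synchronized fashion, which makes such counts informative.
for domain, per_host in activity.items():
    print(domain, "queried by", len(per_host), "hosts")
\end{verbatim}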
One limitation of sensors deployed at the internal interface of a resolver is that they can only see the behavior of hosts inside a single organization, which may not be comprehensive enough to establish patterns related to malicious activities. One notable exception is when the client chooses to use, as a resolver, a publicly available DNS server willing to serve recursive queries, such as Google Public DNS~\cite{GooglePublicDns}, OpenDNS~\cite{OpenDNS}, or Norton ConnectSafe~\cite{NortonConnectSafe}. Due to the sheer volume and diversity of hosts they interact with, the data collected at these resolvers are suitable to comprehensively reveal suspicious behaviors related to different kinds of attacks. The DNS resolvers of large ISPs also serve a large number of individual users and can be used for the same purpose. Unfortunately, DNS data logs from public DNS servers or ISP DNS servers are not easily accessible to the research community, often because of privacy concerns~\cite{AnalysisOfPrivacyDisclosure_Zhao2008, BehaviorBasedTracking_Herrmann2013, TrackedWithoutTrace_Kirchler2016}.
\paragraph{DNS-DNS (Flows 2 to 7 in Figure~\ref{fig:process})}
On the other hand, sensors deployed near other DNS servers usually observe queries issued from several organizations. In the literature, the most frequent locations considered to observe DNS-DNS traffic are (i) at the authoritative name servers~\cite{Kopis_Antonakakis2011}, including the servers responsible for TLDs~\cite{KinderedDomains_Thomas2014, Kopis_Antonakakis2011}, and (ii) at the external interface of the resolvers~\cite{Exposure_Bilge2011,Exposure_Bilge2014,Notos_Antonakakis2010,GuiltyByAssociation_Khalil2016}. The closer the sensor is to the root of the DNS tree, the larger its visibility. The data collected from TLD servers could offer unique insights and enable early detection of newly emerged malicious domains.
Note that such logs would only reveal the existence of the requests but not the answers to them (i.e., the IPs returned), since TLD servers typically serve only iterative queries. Such signals would be hard to capture from the logs of resolvers alone. Getting logs from an authoritative server solves this issue but, due to caching, not all queries will be visible to that server. Therefore, the view provided by the logs of DNS servers higher in the DNS tree can quickly become rather coarse grained. The extreme case is the requests observed at the root servers, which give almost full visibility of all names queried over the Internet but none of the responses. Volumetric analysis of these requests is also heavily impacted by the caching happening in the intermediate servers between the end clients and the root servers~\cite{EmpiricalReexaminationOfGlobalDnsBehavior_Gao2013,ReexaminingDnsFromAGlobalRecursiveResolverPerspective_Gao2016}. Therefore, the features offered by the data captured at servers other than the resolvers are often limited. Furthermore, the logs from such servers cannot be easily obtained by researchers.
\subsubsection{How the Data is Collected}
\paragraph{Active DNS data collection}
To actively obtain DNS data, a data collector deliberately sends DNS queries and records the corresponding DNS responses~\cite{AsTheNetChurns_Nazario2008, MeasuringAndDetectingFastFluxServiceNetworks_Holz2008, Fluxor_Passerini2008, DynamicsOfOnlineScamHostingInfrastructure_Konte2009, BeyondBlacklists_Ma2009, DomainProfiler_Chiba2016, Thales_Kountouras2016}. The list of queried domains is built from multiple sources; typical ones include popular domain lists such as the Alexa Top Sites~\cite{Alexa}, domains appearing in various blacklists, or those extracted from the zone files of authoritative servers. Clearly, as the queries are issued by the data collector, they do not reflect the behavior of actual users. Instead, active DNS data mainly capture the DNS records of domains, e.g., the resolved IPs, canonical names, the TTL of a record, etc. The major advantages of actively crawled DNS data are the flexibility and ease of use of the data collection method. Data collectors can easily control which domains to query. Additionally, active DNS can reveal abuse signals about domains before their actual malicious use. For example, an active DNS collector can discover in zone files a potentially malicious domain that has been newly registered but not yet used~\cite{OnThePotentialOfProactiveDomainBlacklisting_Felegyhazi2010, Predator_Hao2016}, while passive sensors cannot see it yet. Moreover, active DNS data are not linked to the behavior of individual users and, therefore, can be shared with the research community without any privacy concern. Meanwhile, for the same reason, active DNS data cannot be used to detect malicious domains with techniques that rely on user-level features (e.g., temporal statistics of user queries). Another limitation is that, if the DNS queries are issued from only a limited set of hosts, the collected data could be biased. Specifically, a domain could be associated with different IPs depending on the geo-location of the query issuer. Therefore, active DNS data may contain a small set of IPs that are a function of where the queries are issued.
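A minimal sketch of such an active collector is shown below; it assumes the third-party dnspython package and a toy seed list, and omits the rate limiting, retry logic and geographically distributed vantage points a real crawler would need.

\begin{verbatim}
import time

import dns.resolver  # third-party package "dnspython" (assumed installed)

# Hypothetical seed list; a real crawler would use millions of entries taken
# from Alexa-like rankings, blacklists or zone files.
seed_domains = ["example.com", "example.org"]

records = []
for domain in seed_domains:
    try:
        answer = dns.resolver.resolve(domain, "A")
    except Exception:
        continue  # NXDOMAIN, timeouts, etc. are simply skipped in this sketch
    records.append({
        "ts": int(time.time()),
        "domain": domain,
        "ttl": answer.rrset.ttl,
        "ips": sorted(rdata.address for rdata in answer),
    })

print(records)
\end{verbatim}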
\paragraph{Passive DNS data collection}
Collecting DNS data passively is done by deploying sensors in front of DNS servers or by having access to DNS server logs in order to obtain real DNS queries and responses~\cite{BotGAD_Choi2012, DetectingMaliciousDomainsViaGraphInference_Manadhata2014, Exposure_Bilge2011, Exposure_Bilge2014, Notos_Antonakakis2010, GuiltyByAssociation_Khalil2016, Kopis_Antonakakis2011, Segugio_Rahbarinia2015, Segugio_Rahbarinia2016, DetectionOfEarlyStageEnterpriseInfection_Oprea2015}. Therefore, DNS data collected passively are more representative and more ``revealing'', in the sense that a richer set of features and statistics can be derived to identify malicious activities. For instance, the Kopis system~\cite{Kopis_Antonakakis2011}, in order to build a requester profile and to assess requester diversity, requires information regarding every resolver that queried data about a particular domain from an authoritative or TLD DNS server. Further, if sensors are deployed in the DNS servers of diverse organizations at different locations, DNS data collected passively are likely to be more comprehensive than the ones collected actively. This assumption is indirectly confirmed by Rahbarinia et al.~\cite{Segugio_Rahbarinia2015, Segugio_Rahbarinia2016}: their system performed better when training and testing were executed on data from the same ISP than on data from different ISPs. Moreover, such approaches do not require an initial precompiled list of domains. On the other hand, the sharing of such data could be hindered by privacy concerns, especially if sensors are deployed between clients and resolvers. Therefore, the existing publicly available passive DNS datasets are collected above the resolvers and usually provide only aggregated views of queries to hide individual activities. For example, the Farsight passive DNS database~\cite{DNSDB} does not contain the IP addresses of requesters. Furthermore, for a given domain and one of its resolved IPs, it offers only the timestamps of the first and last seen resolutions and the total number of resolutions in between. This is a trade-off between privacy protection, the ability to share and the utility of the data. As with actively obtained DNS data, it would be impossible to build fine-grained user-level features from this dataset. However, we note that, as the aggregation is done over queries and responses caused by actual host/user activities, some important aggregated user statistics can still be derived that may be very useful for malicious domain detection. For example, it is still possible to observe a sudden global increase of queries for a set of domains in a short period of time, even after aggregation. Such statistics would not be available from actively collected DNS data.
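The aggregated form just described can be approximated by the following sketch, which collapses hypothetical per-query observations into first-seen/last-seen/count tuples per domain-IP pair, in the spirit of the Farsight data model; the input format is an assumption.

\begin{verbatim}
# Hypothetical sensor output captured above a resolver:
# (epoch seconds, queried FQDN, IP found in the A record of the response).
observations = [
    (1489400000, "cdn.example.com", "93.184.216.34"),
    (1489400500, "cdn.example.com", "93.184.216.34"),
    (1489490000, "cdn.example.com", "93.184.216.35"),
]

# (domain, ip) -> [first_seen, last_seen, count]; no client IPs are kept,
# which reflects the privacy/utility trade-off discussed above.
aggregated = {}
for ts, domain, ip in observations:
    key = (domain, ip)
    if key not in aggregated:
        aggregated[key] = [ts, ts, 0]
    entry = aggregated[key]
    entry[0] = min(entry[0], ts)
    entry[1] = max(entry[1], ts)
    entry[2] += 1

for (domain, ip), (first, last, count) in aggregated.items():
    print(domain, ip, first, last, count)
\end{verbatim}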
\subsubsection{Challenges}~\\
The challenges of access to DNS data faced by the research community lie in two aspects. The first is in the data collection phase. Though DNS traffic is present in all networks, collecting a dataset is not an easy task. As discussed earlier, it is relatively easy to set up DNS traffic sensors in a single organization's network (e.g., a campus network), but then the collected data could offer only a limited local view of global threats. The peculiarity of many existing DNS-based malicious domain detection techniques is that they work best in big data scenarios. Thus, they may not be able to produce meaningful results on datasets collected in small networks. Meanwhile, integrating data from DNS servers belonging to different organizations often faces significant bureaucratic and legal obstacles, due to the sensitive nature of DNS logs. The same is true if researchers would like to gain access to the data of public DNS servers or ISPs.
An even bigger challenge lies in data sharing. Unfortunately, security-related data are notoriously sensitive and hard to share. Even if a researcher is able to gain access to DNS logs from an ISP, it would be extremely difficult to make the same data available to peers for validation. At the same time, scientific advances rely on the validation of and comparison with existing approaches. There have been some attempts to compare new approaches with previous ones (e.g., Rahbarinia et al.~\cite{Segugio_Rahbarinia2015} compared their approach with Notos~\cite{Notos_Antonakakis2010}), but current research significantly lacks extensive and systematic experimental validation and comparison of different techniques. The primary reason lies in the difficulty of making publicly available a set of common or comparable reference datasets. Although there are currently several publicly available DNS datasets, collected either passively (e.g., from Farsight~\cite{DNSDB}) or actively (e.g., Thales~\cite{Thales_Kountouras2016}), they cannot be used in many approaches, especially in those relying on client-side patterns~\cite{DetectionOfEarlyStageEnterpriseInfection_Oprea2015, DetectingMaliciousDomainsViaGraphInference_Manadhata2014}. It should also be noted that even though some approaches may work on data collected both actively and passively (for instance, the one proposed by Khalil et al.~\cite{GuiltyByAssociation_Khalil2016}, which relies on domain co-location information obtainable from both types of datasets), such a comparison has never been performed.
Moreover, researchers must ensure that the results obtained with a particular dataset can be generalized to other datasets. Clearly, some datasets may have spatial or temporal peculiarities that can influence the results considerably. For instance, Yadav et al.~\cite{DetectingAlgorithmicallyGeneratedMaliciousDomainNames_Yadav2010, DetectingAlgorithmicallyGeneratedDomainFluxAttacks_Yadav2012} grounded their approach on the insight that automatically generated domain names have an abnormal distribution of character frequencies and that algorithmically produced names are usually unpronounceable for an English speaker. Although in general this may be true for the majority of domain names, there are many countries in the world, e.g., China or Russia, where such intuitions may not hold, and the model may have to be adjusted to the peculiarities of the region.
\subsection{Sources of Data Enrichment}
\label{subsec:data_enrichment}
DNS data represents an important source of intelligence that has been successfully used by many approaches to discover and predict malicious activities. However, to provide deeper insights about malicious activities and to enhance the accuracy and coverage, the majority of the detection approaches presented in this survey utilizes external sources of data to enrich DNS information. For example, mapping the IP address to a hosting country enables some approaches to use the trustworthiness of the country as a feature in classifying the maliciousness of domains/IPs~\cite{OnTheGroundTruthProblem_Stevanovic2015}. Generally, the sources of data enrichment can be classified by the \textbf{Type of Information} they provide.
\subsubsection{Enrichment Information Types}
\paragraph{Geo-location} The geo-locations of IPs and domains are commonly used to understand the diversity of the origins of the DNS queries as well as of machines hosting the domains. Such kind of enrichment is seen in a large number of papers, e.g., in~\cite{Exposure_Bilge2011, Exposure_Bilge2014, Fluxor_Passerini2008, OnTheGroundTruthProblem_Stevanovic2015, TopologyBasedFlowModel_Mishsky2015}. The most common source of IP geolocation information observed in the literature is the Maxmind database~\cite{MaxmindDbs}.
\paragraph{Autonomous system number (ASN)} This source of information helps to understand the distribution and utilization of adversary resources~\cite{MonitoringFastFluxBotnetUsingRecursiveAndPassiveDns_Mahjoub2013, Exposure_Bilge2014, TopologyBasedFlowModel_Mishsky2015}. For example, legitimate domains (except those using CDNs) are usually hosted on one or a few ASNs, as opposed to malicious domains, which hop from one ASN to another to evade detection. The ASN is also a valuable source of information for distinguishing different types of Internet services (e.g., IPs used only by dedicated organizations vs. those belonging to cloud service providers). The IP-ASN mapping can be found in the Maxmind database~\cite{MaxmindDbs} or obtained through the Team Cymru service~\cite{TeamCymru}.
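As a minimal enrichment sketch (assuming the third-party geoip2 package and locally downloaded GeoLite2 Country and ASN databases), the fragment below computes simple country and ASN diversity counts for the IPs a domain resolved to; the feature definitions are illustrative and not taken from any specific surveyed paper.

\begin{verbatim}
import geoip2.database  # third-party package "geoip2" (assumed installed)
import geoip2.errors

# Locally downloaded Maxmind databases (paths are assumptions).
country_reader = geoip2.database.Reader("GeoLite2-Country.mmdb")
asn_reader = geoip2.database.Reader("GeoLite2-ASN.mmdb")

def diversity_features(resolved_ips):
    """Country and ASN diversity of the IPs a domain resolved to."""
    countries, asns = set(), set()
    for ip in resolved_ips:
        try:
            countries.add(country_reader.country(ip).country.iso_code)
            asns.add(asn_reader.asn(ip).autonomous_system_number)
        except geoip2.errors.AddressNotFoundError:
            continue
    return {"n_countries": len(countries), "n_asns": len(asns)}

# Benign domains hosted by a single provider usually show low diversity,
# whereas fast-flux domains tend to spread over many countries and ASNs.
print(diversity_features(["93.184.216.34", "203.0.113.7"]))
\end{verbatim}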
\paragraph{Registration records} Even though domain registration records are often not verified by authorities, the information they contain can sometimes be used as supporting evidence to link malicious domains controlled by the same adversary. Further, the temporal information of registration records (e.g., their creation/expiration time) is critical to identify domains registered automatically in bulk to be used later for malicious activities. In fact, some previous works rely purely on registration records to identify malicious domains~\cite{OnThePotentialOfProactiveDomainBlacklisting_Felegyhazi2010, UnderstandingTheDomainRegistrationBehaviorOfSpammers_Hao2013, Predator_Hao2016}. Registration records are usually obtained from servers which provide access to them through the WHOIS protocol~\cite{RFC3912}. It should be mentioned that there is no common standard for the format of the data provided. Hence, researchers must develop custom parsers in order to extract the necessary data.
\paragraph{IP/domain blacklists/whitelists} Domains are also often checked against well-known IP/domain blacklists (more information about blacklists/whitelists will be given in Section~\ref{subsec:sources_ground_truth}). For example, Notos~\cite{Notos_Antonakakis2010} checks how many of the IPs associated with a domain are blacklisted, which is expected to be an indicator of the maliciousness of this domain. Other approaches check if the related IPs/domains are blacklisted. For instance, Prieto et al.~\cite{BotnetDetectionBasedOnDnsRecordsAndActiveProbing_Prieto2011} consider a domain suspicious if its authoritative name server is blacklisted.
\paragraph{Associated resource records} It is possible to gain more information about a given domain or IP by exploring other \textsc{RRs} related to it that can be retrieved from the DNS database. For instance, Hao et al.~\cite{MonitoringInitialDnsBehaviorOfMaliciousDomains_Hao2011} have shown that the distribution of DNS MX records in the IP space is different for malicious domains than for benign ones. Moreover, Prieto et al.~\cite{BotnetDetectionBasedOnDnsRecordsAndActiveProbing_Prieto2011} observed that domain names associated with a botnet usually do not have any associated MX record.
\paragraph{Network data} The IP/domain data can also be enriched with information from network activities~\cite{BotnetDetectionBasedOnDnsRecordsAndActiveProbing_Prieto2011}, e.g., whether a website is associated with a domain, what the HTTP response is, which ports are open, etc. Researchers usually obtain such information by developing their own probes or by using the information provided by Internet-wide scanners such as Censys~\cite{Censys_Durumeric2015} or Shodan~\cite{Shodan}.
\subsubsection{Challenges}~\\
It is important to understand that the information associated with an IP or a domain does vary over time. For instance, the Maxmind database~\cite{MaxmindDbs}, which is used to enrich data with geolocation and ASN information, is frequently updated. Therefore, the values of the features calculated using these data also change. This results in a number of challenges. First, since researchers often work with historical DNS data, they should rely on the enrichment information available in the same time frame in which the DNS data was collected. For instance, if they calculate the number of countries hosting a particular domain at a given date, they have to use the Maxmind database available at exactly that date. As an alternative, they can use the most recent available enrichment data. Either approach may be valid, but researchers must clearly identify which one is used. The second challenge is tightly connected to the first one. Given the large number of IP addresses, the fast-growing number of domain names and the frequent changes of the corresponding enrichment data, the maintenance and management of the related historical information requires a lot of resources that may not be available to researchers.
\subsection{Sources of Ground Truth}
\label{subsec:sources_ground_truth}
Practically every approach to detect malicious domains requires high-quality ground truth for training and validation. Ground truth data in this area is associated with domains and can be divided according to its \textbf{Type}.
\subsubsection{Type of Ground Truth}
\paragraph{Malicious Ground Truth}
To get a ground truth of malicious domains, the dominant practice in existing works is to extract it from various public blacklists. Some of the blacklists cover only specific malicious activities, e.g., spam (Spamhaus~\cite{Spamhaus}, Yahoo Webspam Database~\cite{YahooWebspamDatabase}) or phishing (PhishTank~\cite{PhishTank}, OpenPhish~\cite{OpenPhish}), while others are more general and include domains/IPs involved in any kind of malicious activity, e.g., VirusTotal~\cite{VirusTotal}, McAfee SiteAdvisor~\cite{McafeeSiteAdvisor}, Malware Domains~\cite{MalwareDomains} and Malware Domains List~\cite{MalwareDomainList}. Some of these sources, such as WoT~\cite{WOT}, can also blacklist domains that are not, per se, associated with malicious activities. This is the case when the content of such web sites is considered inappropriate with respect to the policies in place for the specific blacklist considered (e.g., pornographic content, violence, racism, copyrighted material, etc.). Another source to build ground truth is proprietary blacklists/whitelists, or proprietary reputation systems deployed by anti-virus security companies (e.g., Symantec), whose availability to the general research community is quite limited.
\paragraph{Benign Ground Truth}
Ground truth of benign domains in the literature is largely drawn from highly ranked popular domains. For example, Alexa top ranked domains~\cite{Alexa} are commonly used\footnote{We will explain later the need to apply a supplementary filter to the Alexa lists because they do contain malicious domains as well.}. Another common practice, at least when building an initial candidate set of benign domains, is based on the top level domains. For example, domains from ``gov'' and ``mil'' zones or those belonging to Google and Microsoft (used, e.g., in~\cite{DetectingMalwareBasedOnDnsGraphMining_Zou2015}) are generally considered more trustworthy than those from ``com'' or ``info''. Additionally, some public cyber intelligence tools like McAfee SiteAdvisor~\cite{McafeeSiteAdvisor}, Google Safe Browsing~\cite{GoogleSafeBrowsing} or Web Of Trust~\cite{WOT} report not only malicious and suspicious domains but also benign ones and hence, can also be used to extract benign ground truth.
\subsubsection{Challenges}
\paragraph{Malicious Ground Truth Challenges} Even though reputable blacklists generally provide robust evidence about blacklisted domains, they still have a number of subtle issues. First, a malicious domain can be malignant in different ways: spam, phishing, C\&C, unethical or adult content, etc. Thus, the mere definition of the term ``malicious'' differs from one ground truth dataset to another. The ground truth collected for one approach may not work for another one that focuses on detecting domains involved in other types of malicious activities. Second, blacklists employ different collection methods. For instance, they may rely on crowd-sourced data (e.g., PhishTank~\cite{PhishTank}, Web of Trust~\cite{WOT}), may crawl and analyze website content (e.g., Wepawet~\cite{Wepawet}), may run malicious software in sandboxes and analyze the accessed domains (e.g., Anubis~\cite{Anubis}), may reverse engineer a botnet protocol and generate a feed of names produced by DGAs (e.g., Conficker~\cite{ContainingConficker_Honeynet2009}), may be obtained using internal tools (e.g., Google Safe Browsing~\cite{GoogleSafeBrowsing}) or may aggregate data from different sources (for instance, UrlVoid~\cite{UrlVoid} or VirusTotal~\cite{VirusTotal}). Third, none of the blacklists is completely reliable. Sinha et al.~\cite{ShadesOfGrey_Sinha2008} and Ramachandran et al.~\cite{CanDnsBasedBlacklistsKeepUpWithBots_Ramachandran2006} showed that blacklists exhibit high false positive and false negative rates. Some approaches address this by cross-checking domains against multiple blacklists. For example, Kheir et al.~\cite{Mentor_Kheir2014} built a ground truth dataset by voting over three different blacklists.
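A minimal sketch of such a voting-based construction of the malicious ground truth is given below; the blacklist snapshots and the threshold of two votes are hypothetical.

\begin{verbatim}
from collections import Counter

# Hypothetical snapshots of three blacklists (in practice: Spamhaus, PhishTank,
# VirusTotal exports, etc.), already normalized to FQDNs.
blacklists = [
    {"xj3kq9.example-dga.info", "phish-login.example.net"},
    {"xj3kq9.example-dga.info"},
    {"xj3kq9.example-dga.info", "spam-sender.example.org"},
]

MIN_VOTES = 2  # label a domain malicious only if at least two sources list it

votes = Counter(domain for bl in blacklists for domain in bl)
malicious_ground_truth = {d for d, v in votes.items() if v >= MIN_VOTES}

print(malicious_ground_truth)  # {'xj3kq9.example-dga.info'}
\end{verbatim}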
\paragraph{Benign Ground Truth Challenges} Although blacklists may contain false positives, generally a domain can be considered malicious if it has appeared in a reputable blacklist. At the same time, building a benign domain ground truth is a far harder task. A domain cannot be deemed benign simply because it is not present in any known blacklist. The large number of Internet domains (according to Verisign~\cite{InternetGrows_Verisign2016}, in 2016 there were around 314 million 2LDs) makes it impossible to scan and check them regularly. Although this number is large, it represents only a very small portion of the total number of FQDNs on the Internet. Even worse, that number keeps growing every day. Therefore, a malicious domain may not be blacklisted because it did not expose malicious content when it was scanned, or because it has never been scanned.
Although the usage of the top $K$ Alexa domains~\cite{Alexa} as benign ground truth makes sense (the administrators of popular web pages devote more effort to protecting their resources), this source is both limited and prone to false positives. The list contains only 2LD domains and does not provide any information about sub-domains, which makes it rather limited. Domains are ranked according to their popularity and not according to their security or safety, which leads to a high false positive rate: the list contains proxies to malicious web pages and even domains hosting malicious content. For instance, a quite popular 2LD, \url{unblocksit.es} (ranked 11550 as of April 1, 2016), offers to proxy access to other, potentially blacklisted, domains. This 2LD is not, per se, malicious, since it can be used by legitimate users to try to circumvent censorship measures they are facing. Similarly, malicious users can abuse this service as a safe haven to defeat known blocking mechanisms. Moreover, some malicious domains could appear among the top $K$ Alexa domains due to a burst of requests from a high number of infected clients querying them. Stevanovic et al.~\cite{OnTheGroundTruthProblem_Stevanovic2015} cross-checked the Alexa top $K$ domains with UrlVoid~\cite{UrlVoid}, a service which aggregates information from different blacklists. The results show that a relatively high percentage of domains (around 15\% of the top 10,000 domains) is reported to be malicious by at least one blacklist.
Such impurities in the benign ground truth negatively affect the accuracy of domain detection approaches. For instance, consider a malicious domain $d$ that is mislabeled as benign in the ground truth because it appears in the Alexa top $K$ domains. A correct detection of $d$ would incorrectly be counted as a false positive, making the measured false positive rate higher than it really is. At the same time, a malicious domain with a strong association with $d$ may be missed due to the apparent lack of associations with malicious domains, which negatively affects the true positive rate. To mitigate the impact of Alexa top $K$ impurities, some approaches filter the domains before adding them to the benign ground truth. For example, Rahbarinia et al.~\cite{Segugio_Rahbarinia2015} consider only domains that consistently appear in the Alexa top 1 million sites for one year. Similarly, Bilge et al.~\cite{Exposure_Bilge2011,Exposure_Bilge2014} consider only domains older than 1 year as benign. Some other approaches, e.g.,~\cite{Notos_Antonakakis2010,GuiltyByAssociation_Khalil2016}, remove dynamic DNS service domains, such as \url{no-ip.com}, before building a ground truth of benign domains. As one can see, there is no consensus on what could or should constitute the ground truth for benign domains.
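The filtering heuristics mentioned above can be combined as in the following sketch; the persistence threshold, the dynamic DNS exclusion list and the snapshot format are all assumptions made for illustration.

\begin{verbatim}
# Hypothetical daily Alexa snapshots: date -> set of 2LDs present in the top K.
daily_top_k = {
    "2016-01-01": {"google.com", "facebook.com", "unblocksit.es"},
    "2016-01-02": {"google.com", "facebook.com"},
    # ... one entry per day over the whole observation period
}

# Dynamic DNS providers whose subdomains cannot be trusted wholesale.
DYNAMIC_DNS_2LDS = {"no-ip.com", "3322.org"}

MIN_PRESENCE = 1.0  # keep only domains present in every snapshot

presence = {}
for snapshot in daily_top_k.values():
    for domain in snapshot:
        presence[domain] = presence.get(domain, 0) + 1

n_snapshots = len(daily_top_k)
benign_ground_truth = {
    d for d, c in presence.items()
    if c / n_snapshots >= MIN_PRESENCE and d not in DYNAMIC_DNS_2LDS
}

print(benign_ground_truth)  # {'google.com', 'facebook.com'}
\end{verbatim}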
\paragraph{Common Challenges} One of the common issues is to understand which domain level to use for ground truth compilation: 2LD, 3LD, or FQDN. Some ground truth sources contain domains of a specific level, e.g., the top $K$ Alexa domains~\cite{Alexa} mostly consist of 2LDs. This creates trouble for approaches that focus on domain levels different from those found in the ground truth. The relations between domains at different levels are also unclear. Should we consider any subdomain of a malicious/benign domain as malicious/benign? Should we consider a domain as malicious/benign if the majority of its subdomains are malicious/benign? Unfortunately, there is no definite ``Yes'' or ``No'' answer to these questions. It may be reasonable, to a certain extent, to answer ``Yes'' to these questions for 2LDs that belong to private organizations like Google or Facebook. However, the subdomains of dynamic DNS services such as \url{no-ip.com} and \url{3322.org} may be totally unrelated to one another and hence, a given subdomain cannot be assumed to be benign even if the vast majority of the other subdomains are benign.
Another common issue, which we have identified in the literature, is the limited quantitative discussion of training and testing sets comprising the ground truth data. It has been shown (see~\cite{DataMiningForImbalancedDatasets_Chawla2005,TheRoleOfBalancedTrainingAndTestingDatasets_Wei2013}) that an imbalanced training dataset may have considerable influence on the learning of a classifier and thus, may influence some of the measured metrics.
\section{Evaluation Methods}
\label{sec:evaluation}
As discussed in the previous section, the majority of DNS-based malicious domain detection approaches leverage machine learning concepts and techniques such as clustering and classification. Therefore, it is natural for them to use the evaluation metrics and strategies that have been developed and used by the machine learning community. However, this area has the unique challenge of adaptive attackers, who continuously change behavior to evade detection. This limits the time and the scope of the validation results and calls for adaptive evaluation strategies. In this section, we present the commonly used evaluation metrics and strategies, and articulate the unique challenges that researchers face when they validate malicious domain detection approaches. Table~\ref{tab:evaluation} provides a short summary of the information considered here.
\input{tables/table_evaluation}
\subsection{Metrics}
\label{subsec:metrics}
As mentioned earlier, evaluation is tightly coupled with the ground truth. For the purpose of this section, the ground truth consists of a set of domains labeled either as malicious or benign. Let $P$ and $N$ be the number of malicious and benign domains in the test set, respectively; $TP$ (True Positives) and $TN$ (True Negatives) be the number of correctly identified malicious and benign domains; and $FP$ (False Positives) and $FN$ (False Negatives) be the number of benign domains that have been incorrectly identified as malicious and the number of malicious domains that have been incorrectly identified as benign, respectively. The most commonly used evaluation metrics in this area are the following (a short computation sketch is given after the list):
\begin{itemize}
\item \textbf{True Positive Rate ($TPR$) or Recall}: The ratio of the correctly identified malicious domains to the total number of malicious domains ($TPR = TP / P$); the higher the value is, the better ($ TPR \in [0,1] $).
\item \textbf{False Positive Rate ($FPR$)}: The ratio of the benign domains flagged as malicious to the total number of benign domains ($FPR = FP / N$); the lower the value is, the better ($ FPR \in [0,1] $).
\item \textbf{True Negative Rate ($TNR$)}: The ratio of the correctly identified benign domains to the total number of benign domains ($TNR = TN / N$); the higher the value is, the better ($ TNR \in [0,1] $).
\item \textbf{False Negative Rate ($FNR$)}: The ratio of the malicious domains flagged as benign to the total number of malicious domains ($FNR = FN / P$); the lower the value is, the better ($ FNR \in [0,1] $).
\item \textbf{Precision}: The ratio of the correctly identified malicious domains to the number of all identified malicious domains ($precision = TP / (TP+FP)$); the higher the value is, the better ($ Precision \in [0,1] $).
\item \textbf{Accuracy ($Acc$)}: The ratio of the correctly identified domains to the whole size of the test set ($Acc = (TP+TN) / (P+N)$); the higher the value is, the better ($ Acc \in [0,1] $).
\item \textbf{F1-measure or F1-score}: The harmonic mean of precision and recall ($F_1 = 2*precision*recall / (precision+recall)$); the higher the value is, the better ($ F_1 \in [0,1] $).
\end{itemize}
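To make the definitions above concrete, the following minimal Python sketch computes these metrics from a confusion matrix; the toy numbers are hypothetical and chosen to show how accuracy can look flattering on an imbalanced test set.

\begin{verbatim}
def detection_metrics(tp, fp, tn, fn):
    """Compute the metrics defined above from a confusion matrix."""
    p, n = tp + fn, tn + fp
    tpr = tp / p                       # recall
    fpr = fp / n
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (p + n)
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"TPR": tpr, "FPR": fpr, "Precision": precision,
            "Accuracy": accuracy, "F1": f1}

# Toy confusion matrix on an imbalanced test set (90 benign, 10 malicious):
# accuracy is 0.93 although only half of the malicious domains are caught.
print(detection_metrics(tp=5, fp=2, tn=88, fn=5))
\end{verbatim}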
During the design phase, a detection algorithm is tuned to identify the thresholds that optimize the desired metrics. However, some of these metrics are negatively correlated, i.e., enhancing the value of a desired metric may result in degrading the value of another one. For example, the desire to increase the $TPR$ may result in an undesired increase of the $FPR$. Therefore, detection accuracy is usually assessed with the \textbf{Receiver Operating Characteristics (ROC)} curve, which plots the $TPR$ against the $FPR$ as the discrimination threshold is varied. The ROC graphical representation enables researchers to assess the achieved true positive rate once the value of the false positive rate is fixed. Although a ROC curve is a good graphical representation, it cannot serve as a comparative quantitative metric. Therefore, the \textbf{Area Under the ROC Curve (AUC)} has been proposed as a quantitative comparison metric (e.g.,~\cite{DetectingDgaMalwareUsingNetflow_Grill2015}). In general, a system with a higher AUC score is better. However, only a few approaches report AUC values, which makes it difficult to compare and contrast different methods.
Finally, we note that some approaches use customized metrics to evaluate other important parameters of their system. For example, Khalil et al.~\cite{GuiltyByAssociation_Khalil2016} report the ``expansion'' as the number of newly detected domains for a given number of known malicious domains (the seed). Hao et al.~\cite{Predator_Hao2016} define ``completeness'' as the number of detected domains compared to other blacklists, and ``delay'' as the time it takes blacklists to identify a spammer domain after registration, while Ma et al.~\cite{DnsRadar_Ma2015} use the time lag as a metric to evaluate how long it takes other public sources to blacklist a detected domain.
\subsection{Evaluation Strategies}
\label{subsec:strategies}
Malicious domain detection approaches use different evaluation strategies. Most of them borrow strategies from the machine learning community, where cross-validation is one of the most popular techniques. In cross-validation, the dataset is split into training and testing parts and multiple rounds are performed using different partitions to reduce variability. The partitioning can be exhaustive, as in the case of \textbf{leave-p-out cross-validation}, or non-exhaustive, as in \textbf{k-fold cross-validation}. In leave-p-out cross-validation, $p$ out of the total $n$ observations are used for testing and the remaining observations are used for training. The results are averaged over all possible combinations of $p$ out of the $n$ observations, which makes it difficult to apply in practice due to the large number of rounds. Therefore, this strategy is rarely used in the area. The k-fold cross-validation is more practical and hence, more popular (e.g.,~\cite{Fluxor_Passerini2008, MeasuringAndDetectingFastFluxServiceNetworks_Holz2008, Notos_Antonakakis2010, ExtendingBlackDomainNameListByUsingCoOccurrenceRelationBetweenDnsQueries_Sato2010, Exposure_Bilge2011, Kopis_Antonakakis2011, Pleiades_Antonakakis2012, FluxBuster_Perdisci2012, CrossingTheThreashold_Krishnan2013, DetectingMaliciousDomainsViaGraphInference_Manadhata2014, LargeScaleGraphMiningForWebReputationInference_Huang2015, DetectingMalwareBasedOnDnsGraphMining_Zou2015, DomainProfiler_Chiba2016, GuiltyByAssociation_Khalil2016}). According to this strategy, the ground truth dataset is divided into $k$ equal parts, where $k-1$ parts are used for training and the remaining part is used for testing. The experiment is repeated $k$ times, changing each time the part used for testing, and the final score is obtained as an average over the $k$ rounds. Other popular strategies include: (i) \textbf{Validation against the whole dataset} (e.g.,~\cite{BotnetDetectionByMonitoringGroupActivitesInDnsTraffic_Choi2007, IdentifyingBotnetsUsingAnomalyDetectionTechniques_VillamarinSalomon2008, OnThePotentialOfProactiveDomainBlacklisting_Felegyhazi2010, MeasurementAndAnalysisOfGlobalIpUsagePatternsOfFastFluxBotnets_Hu2011, BotGAD_Choi2012, ReevaluatingWisdomOfCrowds_Chia2012, EmpiricalReexaminationOfGlobalDnsBehavior_Gao2013, ExecScent_Nelms2013, PrivacyPreservingDomainFluxBotnetDetection_Guerid2013, ReexaminingDnsFromAGlobalRecursiveResolverPerspective_Gao2016, OnTheGroundTruthProblem_Stevanovic2015, Smash_Zhang2015, Predator_Hao2016}). According to this strategy, all predictions are verified against the whole ground truth data. This method is popular in unsupervised approaches, where there is no need for a training set. (ii) \textbf{One round train-test split}, which divides the ground truth into two non-overlapping training and testing sets (e.g.,~\cite{TrackingMultipleCCBotnetsByAnalyzingDnsTraffic_Lee2010, MaliciousAutomaticallyGeneratedDomainNameDetection_Haddadi2013, Mentor_Kheir2014, AnalyzingStringFormatBasedClassifiersForBotnetDetection_Haddadi2013, DetectingMaliciousActivityWithDnsBackscatter_Fukuda2015, DetectionOfEarlyStageEnterpriseInfection_Oprea2015}). For example, the authors in~\cite{MaliciousAutomaticallyGeneratedDomainNameDetection_Haddadi2013, AnalyzingStringFormatBasedClassifiersForBotnetDetection_Haddadi2013} use 70\% of the ground truth for training and the remaining 30\% for testing.
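A minimal k-fold sketch is given below; it assumes the scikit-learn library, a per-domain feature matrix and binary labels derived from the ground truth, and the choice of classifier and of $k=5$ folds is purely illustrative rather than taken from any specific surveyed system.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 20))             # placeholder per-domain feature vectors
y = rng.integers(0, 2, size=1000)      # placeholder labels: 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# One AUC value per fold; the mean over the folds is what is typically reported.
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(scores.mean(), scores.std())
\end{verbatim}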
Although these validation strategies provide quite reliable results from the machine learning community's point of view, they have issues when applied to malicious domain detection. For instance, attackers and benign users in different parts of the world may behave differently and hence, different organizations have different traffic profiles. For example, the traffic in a governmental organization's network differs from that of a supplier company. Thus, the model produced from the data in one part of the world may not be suitable for the data produced in other parts of the world. Moreover, attackers usually change their behavior over time to avoid being detected; therefore, testing on time periods closer to the training period may produce better results. Finally, an approach may be good at detecting domains belonging to one specific botnet, while performing poorly for other malware types. To address these issues, several cross-dataset strategies have been proposed and applied in this area: (i) \textbf{Cross-networks validation}, in which the training and testing datasets are separated in space, i.e., training and testing datasets are collected at different locations (e.g.,~\cite{Segugio_Rahbarinia2015, DnsRadar_Ma2015}); (ii) \textbf{Cross-time validation}, in which training and testing datasets are collected at different time periods (e.g.,~\cite{Segugio_Rahbarinia2015, DetectingAlgorithmicallyGeneratedMaliciousDomainNames_Yadav2010, Pleiades_Antonakakis2012, DetectingAlgorithmicallyGeneratedDomainFluxAttacks_Yadav2012, DomainProfiler_Chiba2016, Predator_Hao2016}); (iii) \textbf{Cross-blacklists}, in which training and testing datasets are collected from different malware blacklists (e.g.,~\cite{FrameworkForDnsBasedDetectionAndMitigation_Stalmans2011, Segugio_Rahbarinia2015}). Ideally, a system should be trained and tested on completely different data, separated both in terms of time and space.
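A cross-time split can be sketched as follows; it assumes that every labeled domain carries the timestamp of its first observation, and the cut-off date and feature vectors are arbitrary assumptions.

\begin{verbatim}
# Hypothetical labeled observations: (first-seen epoch seconds, features, label).
samples = [
    (1451606400, [0.1, 3, 12], 0),   # January 2016
    (1454284800, [0.9, 47, 2], 1),   # February 2016
    (1459468800, [0.2, 5, 10], 0),   # April 2016
    (1462060800, [0.8, 51, 1], 1),   # May 2016
]

CUTOFF = 1456790400  # 2016-03-01: train strictly before, test on or after

train = [(x, y) for ts, x, y in samples if ts < CUTOFF]
test = [(x, y) for ts, x, y in samples if ts >= CUTOFF]

# Training on the past and testing on the future avoids the optimistic bias
# that random splits introduce when attackers change behavior over time.
print(len(train), "training samples,", len(test), "testing samples")
\end{verbatim}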
\subsection{Challenges}
\label{subsec:evaluation_challenges}
The first challenge malicious domain detection approaches face lies in the difficulty of validating newly acquired knowledge. The majority of the approaches validate effectiveness only against part of the ground truth, the testing set, which is usually a small subset of the whole dataset. However, most of the approaches do not systematically show how to validate the predicted malicious domains that are not part of the ground truth. A few detection approaches have partially addressed this challenge (e.g.,~\cite{Pleiades_Antonakakis2012, GuiltyByAssociation_Khalil2016, OnTheGroundTruthProblem_Stevanovic2015, Smash_Zhang2015, Predator_Hao2016}) by one or a combination of the following strategies:
\begin{description}
\item [Cross-inspection.] Newly detected malicious domains are checked against sources of intelligence other than those used for the ground truth collection. However, it is clear that no combination of blacklists covers all existing malicious domains; otherwise, the new approach would only generate already known data and thus, would be redundant. Hence, when this technique is applied and the approach identifies genuinely new malicious domains, it is impossible to validate them this way.
\item [Manual content inspection.] The content of newly detected domains is manually checked for malicious traces. In addition to not being scalable~\cite{Fluxor_Passerini2008}, manual inspection is not reliable~\cite{AllYourIframesPointToUs_Provos2008}. The cost of manually crawling and investigating the content of the potentially large number of newly detected domains is prohibitive. Therefore, only a small set of randomly selected domains is usually checked, while the content of the rest remains unverified.
\item [Automatic content inspection.] Newly detected domains are fed into tools that perform automatic content scanning. Automatic verification is not always reliable, because the traces of automatic tools can be detected by malicious domain owners, or the malicious domain could be hidden behind benign-looking proxy domains~\cite{DetectionAndAnalysisOfDriveByDownloadAttacks_Cova2010,HuntingTheRedFoxOnline_Li2014,EscapeFromMonkeyIsland_Kapravelos2011}. Additionally, malware domains may simply not expose their malicious services to the public but rather target only specific visitors.
\item [Cross-time validation.] Newly detected domains are periodically checked after prediction against reputable commercial and public blacklists, or using manual content checking. However, one caveat of this strategy is that malicious owners may completely abandon domains which had been predicted to be malicious or simply have them behaving benignly~\cite{EmpiricalAnalysisOfPhishingBlacklists_Sheng2009}. Indeed, there is significant evidence that some attackers verify the presence of their resources in public blacklists before launching an attack~\cite{SarvdapSpambot}. Another caveat is that this strategy is affected by the dynamic maliciousness status of some domains over time. Domains being malicious at the detection time may become benign later and vice versa. For example, in February 2016, Linux Mint web server was hacked and used to distribute malicious content~\cite{LinuxMintHack_Murdock2016} but later it has been regained and cleaned. The first transition (malicious to benign) negatively impacts true positives, while the second transition (benign to malicious) does the same with true negatives.
\end{description}
The second challenge is the absence of a publicly available reference dataset. Although there has been an attempt to provide such a dataset (see the Los Alamos DNS Dataset for the APT Infection Discovery Challenge~\cite{LosAlamosDnsData}), this practice has not become widespread. Having a publicly available reference test set is an important step towards providing a benchmark to compare the effectiveness of various approaches, and it can help researchers to further advance the area in a more systematic way. The absence of a reference dataset, combined with difficulties in sharing code, makes it hard to repeat experiments for a systematic comparison of different approaches. However, we admit that attackers change behavior over time to avoid detection, moving from one network to another and adjusting their attack methods. Therefore, it may be hard, if not impossible, to collect a reference dataset that covers different deployment environments and survives the dynamic behavior of adversaries. Complementary to this issue is the absence of reference ground truth data. Different approaches use different sources. As discussed before, such sources may target different malicious activities and hence, cover different domains. Additionally, the lack of sharing among different sources could increase the gap. For example, in~\cite{ReevaluatingWisdomOfCrowds_Chia2012}, out of the 296 and 192 malicious sites that SiteAdvisor and Safe Web identified, respectively, only 8 were common. That is, an evaluation based on ground truth collected from one source may differ from one based on ground truth collected from another.
The third challenging issue is building a unified approach for metric calculation. Real DNS data usually consist of domains which are not all covered by white- and blacklists. This leaves the treatment of some metrics to the discretion of researchers. For instance, one may consider all domains appearing in blacklists as malicious while treating all others as benign. Others may take into consideration only the labeled part of the domains out of the whole DNS dataset when computing the metrics. Such choices may considerably influence the results. Along the same line, the filtering of a dataset also influences the evaluation results. Indeed, the filtering of domains applied in some approaches (e.g.,~\cite{IdentifyingSuspiciousActivitiesThroughDnsFailureGraphAnalysis_Jiang2010, DetectingMaliciousFluxServiceNetworks_Perdisci2009, Exposure_Bilge2011, Exposure_Bilge2014, OnTheGroundTruthProblem_Stevanovic2015}) may affect both false positives and false negatives. For instance, in~\cite{Exposure_Bilge2011, Exposure_Bilge2014} Bilge et al. filtered out domains ``queried less than 20 times during the entire monitoring period'' (because some aggregated statistics simply do not work with fewer queries than that). However, among these domains there may be a number of malicious ones. Therefore, filtering out these domains will increase the number of false negatives. Similarly, the detection rate is also impacted once long-lived domains are removed~\cite{Exposure_Bilge2011, Exposure_Bilge2014}.
Last but not least, the approaches in this area are sometimes compared using the accuracy metric. This metric is not reliable in the case of imbalanced datasets, i.e., those where the number of samples of one class is considerably higher than that of the other. Such datasets are quite common in the area. Indeed, it is easy to find a large number of benign domains, e.g., by using the Alexa Top 1,000,000 domains~\cite{Alexa}, while the number of malicious domains is limited to the ones available in blacklists. Therefore, it is better either to use metrics that are less sensitive to imbalanced datasets (e.g., AUC or F1-measure) or to balance the sets before measuring accuracy~\cite{DataMiningForImbalancedDatasets_Chawla2005,TheRoleOfBalancedTrainingAndTestingDatasets_Wei2013}. Finally, even though in the majority of works the results are reported using TPR and FPR scores, such approaches can barely be compared because the TPR and FPR metrics depend on each other. Therefore, in order to compare two methods, the value of one of the metrics should be fixed to the same level in both approaches.
\section{Introduction}
\label{sec:introduction}
It is well known that the Internet is continuously being used to run attacks against different targets. Benign services and protocols are being misused for various malicious activities: to disseminate malware, to facilitate command and control (C\&C) communications, to send spam messages, and to host scam and phishing webpages. Clearly, it is very important to detect the origins of such malevolent activities, be it by identifying a URL, a domain name or an IP address. Many approaches have been proposed for this purpose: analysis of network traffic~\cite{Effort_Shin2012, TrafficAggregationForMalwareDetection_Yen2008}, inspection of web content~\cite{Prophiler_Canali2011, BInspect_Eshete2013}, URL scrutiny~\cite{LearningToDetectMaliciousURLs_Ma2011}, or a combination of those techniques~\cite{RbSeeker_Hu2009, BeyondBlacklists_Ma2009}. On top of these, one of the most promising directions relies on the analysis of \emph{Domain Name System} data.
The \emph{Domain Name System} (DNS) protocol is an essential part of the Internet. It maps easy-to-remember domain names to hard-to-remember Internet Protocol (IP) addresses. The detection of malicious domains through the analysis of DNS data has a number of benefits compared to other approaches. First, DNS data constitute only a small fraction of the overall network traffic, which makes them suitable for analysis even in large-scale networks covering large areas. Moreover, caching, being an integral part of the protocol, naturally decreases the amount of data to be analyzed even further, allowing researchers to analyze even the DNS traffic reaching Top Level Domain servers~\cite{Kopis_Antonakakis2011}. Second, DNS traffic contains a significant number of meaningful features to identify domain names associated with malicious activities. Third, many of these features can further be enriched with associated information, such as the AS number, the domain owner, etc., providing an even richer space exploitable for detection. The large number of features and the vast quantity of traffic data available have made DNS traffic a prime candidate for experimentation with various machine learning techniques applied to the context of security. Fourth, although solutions to encrypt DNS data, like DNSCrypt~\cite{DNSCrypt}, exist, a large fraction of DNS traffic still remains unencrypted, making it available for inspection at various Internet vantage points. Last but not least, researchers are sometimes able to reveal attacks at their early stages, or even before they happen, thanks to traces left in the DNS data.
The purpose of this paper is to survey all the approaches that aim at detecting domains involved in malicious activities through the analysis of DNS data. To do so, we have built a comprehensive bibliography by collecting papers from several sources. First, we have crawled 4 major digital libraries, namely, ACM\footnote{\url{http://dl.acm.org}}, IEEEXplore\footnote{\url{http://ieeexplore.ieee.org}}, Springer\footnote{\url{https://rd.springer.com/}} and Scopus\footnote{\url{https://www.scopus.com/}}, feeding them with a search string consisting of keywords relevant to the area. Second, we asked credible experts to provide us with the most pertinent articles. Third, we extracted from these papers the references which had not yet been included in our compiled list. Additionally, we continued to monitor major conferences for any relevant new work appearing in the area. Note that the focus of this paper is not limited to domains involved in specific types of malicious activities, as done in~\cite{SurveyOfBotnetAndBotnetDetection_Feily2009, TaxonomyOfBotnetBehaviorDetectionAndDefense_Khattak2014, Alieyan2015, Dhole16}, which provide surveys specifically about botnets; or in~\cite{PhishingDetection_Khonji2013, FeatureSelectionForPhishingDetection_Zuhair2016}, \cite{SurveyOnWebSpamDetection_Spirin2012} and~\cite{MaliciousUrlDetection_Sahoo2017}, which cover the areas of phishing, web spam and malicious URL detection, respectively.
We have carefully read each paper in our study list and extracted the information that could help us cover the targeted research topic. The first observation we made is that this research area is relatively new. The seminal paper~\cite{PassiveDnsReplication_Weimer2005}, which led to the area as we know it today, dates back to 2005. Authored by Florian Weimer, it was the very first published work not only to consider using DNS records to detect malicious domains but also to propose a practical solution to obtain large amounts of data amenable to various types of analysis. In order to position the numerous pieces of work that have followed, we propose a general framework (represented in Figure~\ref{fig:process}) describing the various components required to implement a DNS-based detection technique. It involves the following key components.
\begin{figure}[t!]
\centering
\includegraphics[width=0.75\textwidth]{survey_overview}
\caption{The general process to design a DNS data-based technique to detect malicious domains.}
\label{fig:process}
\end{figure}
\smallbreak\noindent\textbf{DNS Data Collection.} DNS data can be collected at different locations of the DNS architecture and with different granularity. For example, it can be gathered at the recursive DNS server of a company or Internet Service Provider (ISP), or at higher-level authority servers. It may be available in the form of detailed DNS query/response logs, or only in aggregated form. The location and granularity of the data can reveal different behaviors related to malicious domains, and thus have a significant impact on the intuition and design of the detection algorithms.
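To make the granularity aspect concrete, the following minimal Python sketch parses a single, hypothetical query/response log line into a structured record; the field layout is an assumption made purely for illustration and does not correspond to the format of any particular collector.
\begin{verbatim}
# Minimal sketch: parse one line of a simplified, hypothetical passive DNS log.
# Assumed layout: timestamp, client IP, queried domain, query type, answer IP, TTL.
from dataclasses import dataclass

@dataclass
class DnsRecord:
    timestamp: float
    client_ip: str
    domain: str
    qtype: str
    answer: str
    ttl: int

def parse_line(line: str) -> DnsRecord:
    ts, client, domain, qtype, answer, ttl = line.strip().split()
    return DnsRecord(float(ts), client, domain.lower().rstrip("."),
                     qtype, answer, int(ttl))

record = parse_line("1466900000.1 10.0.0.5 example.com. A 93.184.216.34 3600")
print(record.domain, record.answer, record.ttl)
\end{verbatim}
Aggregated feeds typically expose far less than this (e.g., only domain/IP pairs with first-seen and last-seen timestamps), which directly constrains the features a detection algorithm can rely on.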
\smallbreak\noindent\textbf{Data Enrichment.} To get a more comprehensive view of malicious activities, DNS data often needs to be enriched by integrating networking and application data from various sources. Typical data sources used for this purpose include domain registration records, autonomous system numbers and geo-location information of IPs hosting domains.
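As an illustration only, the sketch below enriches a parsed DNS record using small in-memory lookup tables; in a real system these would be replaced by queries to WHOIS, BGP/ASN feeds or a geolocation database, and the concrete values shown are placeholders.
\begin{verbatim}
# Minimal sketch of DNS data enrichment with hypothetical lookup tables.
ASN_TABLE = {"93.184.216.34": 15133}          # IP -> AS number (placeholder value)
GEO_TABLE = {"93.184.216.34": "US"}           # IP -> country code (placeholder)
REGISTRATION = {"example.com": "1995-08-14"}  # domain -> registration date (placeholder)

def enrich(record: dict) -> dict:
    ip = record["answer"]
    record["asn"] = ASN_TABLE.get(ip)
    record["country"] = GEO_TABLE.get(ip)
    record["registered"] = REGISTRATION.get(record["domain"])
    return record

print(enrich({"domain": "example.com", "answer": "93.184.216.34"}))
\end{verbatim}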
\smallbreak\noindent\textbf{Algorithm Design.} A detection algorithm identifies a set of potentially malicious domains based on DNS data, enrichment information and, possibly, intelligence on existing known benign and malicious domains. Existing machine learning algorithms (supervised, semi-supervised and unsupervised) are often adapted to this context, relying on various intuitions about the behavior of malicious domains.
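For illustration (and not as the design of any particular surveyed system), the following sketch casts detection as binary classification over a toy feature matrix; the feature choice, values and labels are placeholders.
\begin{verbatim}
# Minimal sketch: supervised detection as binary classification (illustrative only).
# Each row describes a domain, e.g. [distinct resolved IPs, mean TTL, name length].
from sklearn.ensemble import RandomForestClassifier

X_train = [[12, 300, 23], [1, 86400, 10], [30, 60, 35], [2, 43200, 12]]
y_train = [1, 0, 1, 0]   # 1 = known malicious, 0 = known benign (ground truth)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

X_new = [[25, 120, 28]]  # a domain observed in fresh DNS traffic
print(clf.predict(X_new), clf.predict_proba(X_new))
\end{verbatim}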
\smallbreak\noindent\textbf{Ground Truth.} A ground truth of malicious and benign domains is needed both in the algorithm design phase and in the evaluation phase. Supervised and semi-supervised detection algorithms rely on known malicious and benign domains to train a machine learning model and tune important parameters. The evaluation of detection algorithms is also greatly influenced by how the ground truth set is collected, cleaned and applied.
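As a minimal sketch (with hypothetical list contents), ground truth labels can be assembled by intersecting the observed domains with known-bad and known-good lists, leaving the remainder unlabeled:
\begin{verbatim}
# Minimal sketch: derive ground truth labels from hypothetical domain lists.
BLACKLIST = {"badsite.example", "evil.example"}   # placeholder known-malicious feed
WHITELIST = {"example.com", "example.org"}        # placeholder known-benign list

def label(domain: str):
    if domain in BLACKLIST:
        return 1          # malicious
    if domain in WHITELIST:
        return 0          # benign
    return None           # unlabeled: not used for training

observed = ["example.com", "badsite.example", "unknown.example"]
print({d: label(d) for d in observed})
\end{verbatim}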
\smallbreak\noindent\textbf{Evaluation Methodology.} Malicious domain detection imposes unique challenges that are not observed in typical machine learning problems, including its highly dynamic nature and the adaptiveness of attackers. Therefore, besides following standard evaluation methodologies from the machine learning community, additional evaluation criteria and methods need to be adopted to reflect the true effectiveness of a malicious domain detection scheme in practice. For example, it is highly desirable to evaluate the robustness of the approaches against adaptive attackers who could change their behaviors deliberately to evade detection.
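As a minimal sketch of one such precaution, the snippet below evaluates a (hypothetical) classifier on a chronologically later portion of the data rather than on a random split, so that the reported precision and recall better reflect how the detector would face previously unseen, newer domains; all labels and predictions are placeholders.
\begin{verbatim}
# Minimal sketch: time-aware evaluation (train on older domains, test on newer ones).
def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

labels = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1]   # domains ordered by first-seen time
split = int(0.7 * len(labels))            # last 30% form the test period
y_test = labels[split:]
y_pred = [1, 1, 0]                        # hypothetical classifier output
print(precision_recall(y_test, y_pred))
\end{verbatim}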
A DNS-based malicious domain detection technique can be characterized by the above five key components. When reading the relevant articles, we paid particular attention to (i) the sources of DNS data, enrichment data, and ground truth data, (ii) the extracted features and how they are used in the various approaches, and (iii) the evaluation metrics and strategies. The collected information constitutes the core of Sections~\ref{sec:datasources},~\ref{sec:approaches}, and~\ref{sec:evaluation}. Drawing on the papers and on our domain expertise, we also identified a number of open problems and challenges faced by the research community, which we discuss in each of these sections. Section~\ref{sec:background} provides the necessary background on DNS and on malicious activities leveraging DNS, while Section~\ref{sec:conclusion} concludes our discussion.
\section{Introduction}
The first boundary cross theorem
was discovered by Malgrange--Zerner in the pioneering work \cite{ze}.
Subsequent results in this direction were obtained by
Komatsu \cite{ko} and Dru\.{z}kowski \cite{dr}.
More recently, Gonchar \cite{go1,go2} proved a more
general result in the one-dimensional case. It should be noted that
Airapetyan and Henkin published a version of
the edge-of-the-wedge theorem for CR manifolds (see \cite{ah1} for a brief version
and \cite{ah2} for a complete proof);
Gonchar's result can be deduced from the latter works.
In the articles \cite{pn1, pn2, pn3, pn}, the authors generalized Gonchar's result
to the one-dimensional case under weaker hypotheses and to the higher-dimensional case.
On the other hand, cross theorems with analytic or pluripolar singularities have been developed by
many mathematicians (see, for example, \cite{jp2,jp3,jp4,jp5}
and the references therein). The question naturally arises whether
there exists a mixture of these two types of cross theorems, namely, a boundary cross theorem with singularities.
The purpose of this article is to establish such a theorem in a
simple but very useful setting: the one-dimensional case with optimal hypotheses, in the spirit of our previous work \cite{pn2}.
This is our first step towards a general cross theorem with singularities \cite{pn5} (see also \cite{nv1,nv2}).
\smallskip
\indent{\it{\bf Acknowledgment.}} The paper was started during
the stay of the second author at the University of Oldenburg in 2006. He was supported by a grant from DFG, Az. PF 227/8-2.
The paper was written while
he was visiting the Abdus Salam International
Centre
for Theoretical Physics
in Trieste. He wishes to express his gratitude to these
organizations.
\section{Background and statement of the main result}
First we introduce some notation and terminology. In this article,
$E$ always denotes the open unit disc in $\Bbb C.$ For $a\in\Bbb C$ and $r>0,$
$\Delta_a(r)$ is the disc centered at $a$ with radius $r.$
Finally, the one-dimensional
Lebesgue measure is denoted by $\operatorname{mes}.$
\subsection{(Sub)harmonic measure}
Let $\Omega\subset \Bbb C$ be an open set. For any function $u:\ \Omega\longrightarrow \Bbb R\cup\{-\infty\},$ let
\begin{equation*}
\hat{u}(z):=
\begin{cases}
u(z),
& z\in \Omega,\\
\limsup\limits_{\Omega\ni w\to z}u(w), & z \in \partial \Omega.
\end{cases}
\end{equation*}
For a set $A\subset \overline{\Omega}$ put
\begin{equation*}
h_{A,\Omega}:=\sup\left\lbrace u\ :\ u\in\mathcal{SH}(\Omega),\
u\leq 1\ \text{on}\ \Omega,\
\hat{u}\leq 0\ \text{on}\ A \right\rbrace,
\end{equation*}
where $\mathcal{SH}(\Omega)$ denotes the cone of all functions
subharmonic on $\Omega.$
The {\it subharmonic measure} of $A$ relative to $\Omega$ is
the function $\omega(\cdot,A,\Omega)\in \mathcal{SH}(\Omega)$
defined by
\begin{equation*}
\omega(z,A,\Omega):=h^{\ast}_{A,\Omega}(z),\qquad z\in\Omega,
\end{equation*}
where $h^{\ast}$ denotes the upper semicontinuous regularization
of $h.$
If $A\subset\partial \Omega,$ then
$\omega(\cdot,A,\Omega)$ is also called
the {\it harmonic measure} of $A$ relative to $\Omega.$
In this case, $\omega(\cdot,A,\Omega)$
is a harmonic function.
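For orientation, consider the model case where $\Omega=E$ and $A\subset\partial E$ is a closed arc of length $\ell\in(0,2\pi).$ A classical argument shows that the harmonic measure is then given by the Poisson integral over the complementary arc,
\begin{equation*}
\omega(z,A,E)=\frac{1}{2\pi}\int\limits_{\partial E\setminus A}\frac{1-\vert z\vert^{2}}{\vert \zeta-z\vert^{2}}\,\vert d\zeta\vert,\qquad z\in E,
\end{equation*}
so that, in particular, $\omega(0,A,E)=1-\frac{\ell}{2\pi}.$ Thus $\omega(\cdot,A,E)$ has boundary value $0$ at the interior points of $A$ and $1$ at the interior points of $\partial E\setminus A.$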
We recall the following elementary property which will be used several times later on.
Let $(A_k)_{k=1}^{\infty}$ be a sequence of measurable subsets of $\partial E$
and $A$ a measurable subset of $\partial E$ such that
$
\operatorname{mes}(A_k)>0,$ $ A_k\subset A_{k+1},$ and $ \operatorname{mes}\big( A\setminus \bigcup_{k=1}^{\infty}A_k\big)=0.$
Then
\begin{equation}\label{eq_elementary}
\omega(\cdot,A_k,E)\searrow\omega(\cdot,A,E)\qquad\text{as}\ k\nearrow\infty.
\end{equation}
\subsection{Angular approach regions and locally regular points}
Let $D\subset \Bbb C$ be a Jordan domain. Fix a conformal mapping $\Phi$ from
$D$ onto $E$ which extends continuously from $\overline{D}$ onto $\overline{E}.$
For $\zeta\in\partial D$ and $0<\alpha <\frac{\pi}{2},$ the {\it Stolz region} or
{\it angular approach region} $\mathcal{A}_{\alpha}(\zeta)$ is given by
\begin{equation*}
\mathcal{A}_{\alpha}(\zeta):=
\left\lbrace \Phi^{-1}(t):\ t\in E\ \text{and}\ \left\vert
\operatorname{arg}\left(\frac{\Phi(\zeta)-t}{\Phi(\zeta)}\right)
\right\vert<\alpha\right\rbrace ,
\end{equation*}
where $\operatorname{arg}:\ \Bbb C\longrightarrow (-\pi,\pi]$ is as usual the argument function.
Let $A\subset\overline D.$ We say that a
point
$\zeta\in \overline{ D} $ is a
{\it locally regular point relative to}
$A$
if
\begin{equation*}
\lim\limits_{D\cap \Delta_{\zeta}(r)\ni z\to \zeta} \omega(z,A\cap
\Delta_{\zeta}(r),D\cap \Delta_{\zeta}(r))=0,\qquad r>0.
\end{equation*}
Obviously, $\zeta\in \overline{A}.$ The set of all locally regular points relative to $A$ is denoted
by $A^{\ast}.$ $A$ is said to be {\it locally regular} if $A\subset
A^{\ast}.$
If $A\subset \partial D$ is measurable, then it is classical that $\Phi(A^{\ast})$ contains all
density-points of $\Phi(A),$ hence $\operatorname{mes}\Big(\Phi\big(A\setminus(A\cap
A^{\ast})\big)\Big)=0,$ and $A\cap A^{\ast}$ is again locally
regular. Moreover, it follows from (\ref{eq_elementary}) that
\begin{equation}\label{eq_elementary_new}
\omega(\cdot,A\cap A^{\ast},D)=\omega(\cdot,A,D).
\end{equation}
Recall from Definition 4.8 in \cite{pn2} the following
definition.
A point $\zeta\in\partial D$ is
said to be an {\it end-point} of an open subset $\Omega\subset D$ if, for every
$0<\alpha<\frac{\pi}{2},$ there is an open neighborhood
$U=U_{\alpha}$ of $\zeta$ such that $U\cap
\mathcal{A}_{\alpha}(\zeta)\subset \Omega.$ The set of all
end-points of $\Omega$ is denoted by $\operatorname{End}(\Omega).$
We say that a function $f,$ defined in an open subset $\Omega\subset D,$ admits an {\it angular limit}
$\lambda\in\Bbb C$ at a point $a\in\operatorname{End}(\Omega) $ if
\begin{equation*}
\lim\limits_{ \mathcal{A}_{\alpha}(a)\cap \Omega\ni z\to a }
f(z)=\lambda,\qquad 0<\alpha<\frac{\pi}{2}.
\end{equation*}
\subsection{Cross and separate holomorphy}
Let $D,G\subset \Bbb C $ be two open sets, let
$A$ (resp. $B$) be a subset of $\overline{D}$ (resp.
$\overline{G}$). We define
a {\it $2$-fold cross} $W,$ its {\it interior} $W^{\text{o}}$ as
\begin{eqnarray*}
W &:=&\Bbb X(A,B; D,G)
:=\big((D\cup A)\times B\big)\cup \big(A\times(B\cup G)\big),\\
W^{\text{o}} &:=&\Bbb X^{\text{o}}(A,B; D,G)
:= (D\times B)\cup (A\times G).
\end{eqnarray*}
For a $2$-fold cross $W :=\Bbb X(A,B; D,G)$
define
\begin{equation*}
\widehat{W}=\widehat{\Bbb X}(A,B;D,G)
:=\left\lbrace (z,w)\in D\times G:\ \omega(z,A,D)+\omega(w,B,G) <1
\right\rbrace.
\end{equation*}
Let $M$ be a subset of $W.$ Then the {\it fibers} $M_a$ and $M^b$ are given by
\begin{equation*}
M_a:=\{w\in G:\ (a,w)\in M\}\quad (a\in A); \qquad
M^b:=\{z\in D:\ (z,b)\in M\}\quad (b\in B).
\end{equation*}
We say that $M$ possesses a certain property in fibers over $A$ (resp. over $B$) if
all fibers $M_a$ with $a\in A$ (resp. all fibers $M^b$ with $b\in B$) possess this property.
Suppose that $M$ is relatively closed in fibers over $A$ and $B.$
We say that a function $f:W\setminus M\longrightarrow \Bbb C$ is {\it separately holomorphic}
{\it on $W^{\text{o}}\setminus M $} and write $f\in\mathcal{O}_s(W^{\text{o}}\setminus M),$ if
for any $a\in A $ (resp. $b\in B$)
the function $f(a,\cdot)|_{G\setminus M_a}$ (resp. $f(\cdot,b)|_{D\setminus M^b}$)
is holomorphic.
From now on we assume, in addition, that $D$ and $G$ are Jordan domains, and
$A\subset \partial D,$ $B\subset\partial G.$ Then we define the {\it regular part} $W^{\ast}$ relative to $W$ as
\begin{equation*}
W^{\ast} :=\Bbb X(A^{\ast},B^{\ast}; D,G).
\end{equation*}
Let $\Omega$ be an open subset of $D\times G.$
A point $(a,b)\in A^{\ast}\times G$ (resp. $(a,b)\in D\times B^{\ast}$) is said to be an {\it end-point} of $\Omega$
if, for every
$0<\alpha<\frac{\pi}{2},$ there are an open neighborhood
$U=U_{\alpha}$ of $a$ and an open neighborhood $V=V_{\alpha}$ of $b$ such that
\begin{equation*}
\big(U\cap
\mathcal{A}_{\alpha}(a)\big)\times V\subset \Omega\quad\Big(\text{resp.}\ U\times \big(V\cap
\mathcal{A}_{\alpha}(b)\big)\subset \Omega\ \Big).
\end{equation*}
The set of all end-points of $\Omega$ is denoted by $\operatorname{End}(\Omega).$
We say that a function $f:\ \Omega\longrightarrow \Bbb C$ admits an {\it angular limit}
$\lambda\in\Bbb C$ at $(a,b)\in \operatorname{End}(\Omega) $ if
under the previous notation one of the following cases occurs:\\
{\bf Case 1:} $(a,b)\in\ A^{\ast}\times G$ and the following limits exist and are equal to $\lambda$
\begin{equation*}
\lim\limits_{\Omega\ni (z,w)\to (a,b),\ z \in \mathcal{A}_{\alpha}(a)}
f(z,w),\qquad 0<\alpha<\frac{\pi}{2};
\end{equation*}
{\bf Case 2:} $(a,b)\in D\times B^{\ast}$ and the following limits exist and are equal to $\lambda$
\begin{equation*}
\lim\limits_{\Omega\ni (z,w)\to (a,b),\ w \in \mathcal{A}_{\alpha}(b)}
f(z,w),\qquad 0<\alpha<\frac{\pi}{2}.
\end{equation*}
For an open set $\Omega\subset\Bbb C^k,$ let $\mathcal{O}(\Omega)$ denote
the space of all holomorphic functions on $\Omega.$
A function $f:\ \mathcal{P}\rightarrow \Bbb C,$ where $ \mathcal{P}$ is a topological space,
is said to be {\it locally bounded,} if for every point $p\in \mathcal{P}$
there exists a neighborhood $U$ of $p$ such that $\sup\limits_U \vert f\vert <\infty.$
\subsection{Statement of the main result}
Now we are able to state the following
\renewcommand{\thethmspec}{Main Theorem}
\begin{thmspec}
Let $D=G=E$ and let $A\subset\partial D, $ $B\subset\partial G$
be measurable subsets such that $\operatorname{mes}(A)>0,$
$\operatorname{mes}(B)>0.$
Consider the cross $W:=\Bbb X(A,B;D,G).$
Let $M$ be a relatively closed subset of $W$ such that
\begin{itemize}
\item[$\bullet$] $M_a$ is polar (resp.
discrete) in $G$ for all $a\in A$ and
$M^b$
is polar (resp. discrete) in $D$ for all $b\in B;$\footnote{ In other words,
$M$ is polar (resp. discrete) in fibers over $A$ and $B.$}
\item[$\bullet$] $M\cap (A\times B)=\varnothing.$
\end{itemize}
Then there exists a relatively closed pluripolar subset (resp. an analytic subset) $\widehat{M}$ of $\widehat{W}$
with the following two properties:
\begin{itemize}
\item[(i)]
The set of end-points of $\widehat{W}\setminus \widehat{M}$ contains $(W^{\text{o}}\cap W^{\ast})\setminus M.$
\item[(ii)] Let $f:\ W\setminus M \longrightarrow\Bbb C$
be a locally bounded
function such that
\begin{itemize}
\item[$\bullet$] for all $a\in A,$
$f(a,\cdot)|_{G\setminus M_a}$ is
holomorphic and admits the angular limit $f(a,b)$ at all points
$b\in B;$
\item[$\bullet$] for all $b\in B,$
$f(\cdot,b)|_{D\setminus M^b}$ is holomorphic and admits the
angular limit $f(a,b)$ at all points $a\in A;$
\item[$\bullet$]
$f|_{A\times B}$ is measurable.
\end{itemize}
Then
there is a unique function
$\hat{f}\in\mathcal{O}(\widehat{W}\setminus \widehat{M})$ such
that $\hat{f}$ admits the angular limit $f$ at all points of $(W^{\text{o}}\cap W^{\ast})\setminus M.$
\end{itemize}
Moreover, if $M=\varnothing,$ then $\widehat{M}=\varnothing.$
\end{thmspec}
\section{Preparatory results}
\subsection{Auxiliary results} First recall the following
well-known result (see, for example, \cite{jp1}).
\begin{thm}\label{classical_cross_thm}
Let $D,G$ and $A,\ B$ be open subsets of $\Bbb C$ such that $A\subset
D$ and $B\subset G.$ Put $W:=\Bbb X(A,B;D,G)$ and
$\widehat{W}:=\widehat{\Bbb X}(A,B;D,G).$ Then $W\subset \widehat{W}$ and every function
$f\in\mathcal{O}_s(W)$ extends uniquely to a function $\hat{f}\in
\mathcal{O}(\widehat{W}).$
\end{thm}
The following mixed cross theorem has been proved in \cite[Theorem 7.3]{pn2}
(see also \cite[Theorem 4.2]{nv2} for another proof using the method of holomorphic discs).
\begin{thm}\label{mixed_cross_thm}
Let $A$ be a measurable subset of $\partial E$ such that
$A$ is locally regular.
Let $G\subset \Bbb C$ be an open set and $B$ an open subset of $G.$ For $0\leq\delta<1$ put
$\Omega:=\left\lbrace
z\in E:\ \omega(z,A,E)<1-\delta\right\rbrace.$
Let $W:= \Bbb X(A,B;\Omega,G)$, $W^{\text{o}}:= \Bbb X^{\text{o}}(A,B;\Omega,G),$
and\footnote{ It will be shown in Lemma \ref{lem_formula_level_sets} below that
$\widetilde{\omega}(\cdot,A,\Omega)= \frac{\omega(\cdot,A,E)}{1-\delta}$ on $\Omega,$
where $\widetilde{\omega}(\cdot,A,\Omega)$ is, in some sense, the ``angular" version of the harmonic measure.}
\begin{equation*}
\widehat{\widetilde{W}}=
\widehat{\widetilde{\Bbb X}}(A,B;\Omega,G):=\left\lbrace (z,w)\in E\times G:\ \frac{\omega(z,A,E)}{1-\delta}
+\omega
(w,B,G)<1 \right\rbrace.
\end{equation*}
Let $f:\ W\longrightarrow \Bbb C$ be such that
\begin{itemize}
\item[(i)] $f\in\mathcal{O}_s(W^{\text{o}},\Bbb C);$
\item[(ii)] $f$ is locally bounded on $W,$ $f|_{A\times B}$ is a
measurable function;
\item[(iii)] for all $w\in B,$\footnote{ Since $A$ is locally regular, it follows that $A\subset \operatorname{End}(\Omega).$}
\begin{equation*}
\lim\limits_{
\mathcal{A}_{\alpha}(a)\ni z\to a }f(z,w)=f(a,w),\qquad a\in A,\
0<\alpha<\frac{\pi}{2}.
\end{equation*}
\end{itemize}
Then there exists a unique function
$\hat{f}\in\mathcal{O}(\widehat{\widetilde{W}})$ such that $\hat{f}=f$ on $\Omega\times B$
and\footnote{ Since $A$ is locally regular, we have $A\times G\subset \operatorname{End}(\widehat{\widetilde{W}} ).$}
\begin{equation*}
\lim\limits_{
\mathcal{A}_{\alpha}(a)\ni z\to a }\hat{f}(z,w)=f(a,w),\qquad a\in A,\
w\in G,\
0<\alpha<\frac{\pi}{2}.
\end{equation*}
Moreover, $\vert f\vert_W=\vert \hat{f}\vert_{\widehat{W}}.$
\end{thm}
The next result proved by the authors in \cite{pn2}
generalizes the work of Gonchar in \cite{go1,go2}.
\begin{thm}\label{Pflug_Nguyen_thm}
We keep the hypotheses and notation of the Main Theorem. Suppose
in addition that $M=\varnothing.$ Then the conclusion of the Main
Theorem holds for $\widehat{M}=\varnothing.$
\end{thm}
The following two extension theorems are also needed in the
sequel.
\begin{thm}[Chirka \cite{Chi}] \label{Chirka_Thm}
Let $D\subset\Bbb C^n$ be a domain and let $\widehat{ D}$ be the
envelope of holomorphy of $D$. Assume that $S$ is a relatively
closed pluripolar subset of $D$. Then there exists a relatively
closed pluripolar subset $\widehat{ S}$ of $\widehat{ D}$ such
that $\widehat{S}\cap D\subset S$ and $\widehat
{D}\setminus\widehat {S}$ is the envelope of holomorphy of
$D\setminus S$.
\end{thm}
\begin{thm} [Imomkulov--Khujamov \cite{ik}, Imomkulov \cite{im}]\label{Imomkulov_thm}
Let $A$ be a measurable subset of $\partial E$ with $\operatorname{mes}(A)>0,$ let $M$ be a relatively closed subset of $A\times (\Bbb C\setminus \overline{E})$
such that $M_a:=\{w\in\Bbb C:\ (a,w)\in M\}$ is polar (resp. finite) for all $a\in A.$
Then there exists a relatively closed pluripolar (resp. analytic) subset $S$ of
$E\times (\Bbb C\setminus \overline{E})$ with the following property:
Let $f:\
(E\cup A)\times E\longrightarrow\Bbb C$ be bounded, $f|_{E\times E}\in\mathcal{O}(E\times E)$ such
that $\lim\limits_{z\to a,\ z
\in \mathcal{A}_{\alpha}(a)} f(z,w)=f(a,w)$ for all $ a\in A,
w\in E$ and $0<\alpha<\frac{\pi}{2}.$ Moreover, assume that
the (holomorphic) function $f(a,\cdot)$
extends to a holomorphic function on $\Bbb C\setminus M_a$ for every $a\in A.$ Then $f|_{E\times E}$
extends holomorphically to $(E\times \Bbb C)\setminus S.$
\end{thm}
\begin{proof}
This is a slightly modified version of the result in \cite{ik}.
In fact, Imomkulov--Khujamov suppose that $f|_{E\times E}$ can be extended continuously onto $\overline{E}\times\overline{E}.$
But their proof still works under the hypotheses of Theorem \ref{Imomkulov_thm}. Consequently,
for each function
$f$ as in the statement of the theorem,
there is a relatively closed pluripolar (resp. analytic) subset $S_f$ of
$E\times (\Bbb C\setminus \overline{E})$ such that $f$ extends to a holomorphic function on $G_f:= (E\times\Bbb C)\setminus S_f$
and that the latter function does not extend holomorphically across any point of $ S_f.$
Let $G$ denote the connected component of the interior of
$\bigcap_{f}G_f$ that
contains $E^2$ and let $S:=(E\times\Bbb C)\setminus G.$ It remains to show
that $S$ is pluripolar (resp. analytic).
Take $(a,b)\in ((A\cap A^\ast)\times\Bbb C)\setminus M$. Since $M$ is
relatively closed in $A\times \Bbb C$ and $M_{a}$ is polar, there
exists a smooth curve $\gamma:[0,1]\rightarrow\Bbb C\setminus M_{a}$
such that $\gamma(0)=0$, $\gamma(1)=b$. Take an $\epsilon>0$ so
small that
$$
\big (\Delta_a(\epsilon)\times(\gamma([0,1])+\Delta_0(\epsilon))\big )\cap M=\varnothing
$$
and that $V_b:=\Delta_0(\frac{1}{2})\cup(\gamma([0,1])+\Delta_0(\epsilon))$ is a Jordan domain. Consider the cross
\[
Y:=\Bbb X(A\cap\Delta_a(\epsilon), \partial V_b\cap
\partial\Delta_0(\frac{1}{2}) ;\Delta_a(\epsilon)\cap E,V_b).
\]
Then $f|_Y$ satisfies the hypotheses of Theorem \ref{Pflug_Nguyen_thm}. Consequently,
we get $\widehat{Y}\subset G_f$ for all $f$ as in the statement of the theorem.
Hence $\widehat {Y}\subset G$.
Thus $S^{\ast}_{a}\subset M_{a}$ for all $a\in A,$
where $S^{\ast}_{a}$ is the non-tangential boundary layer of a pseudoconcave set $S$ (see \cite[p. 358]{im}).
Consequently, by Lemmas 6 and 7 from \cite{im} (see also Lemmas 7 and 8 in \cite{ik}), $S$ is pluripolar (resp. analytic).
\end{proof}
\subsection{Two techniques and their applications}
The technique of {\it level sets of (plurisub)harmonic measure} was introduced by the authors in \cite{pn1}.
It has since turned out that it can be successfully used
in solving many problems arising from the
theory of separately holomorphic and meromorphic mappings (see
\cite{pn2,pn3,nv1,nv2}). For an open set $D\subset \Bbb C,$ a
subset $A\subset \partial D,$ and $0<\delta<1$ the {\it
$\delta$-level set of the harmonic measure $\omega(\cdot,A,D)$}
is, by definition,
\begin{equation*}
D_{A,\delta}:=\left\lbrace z\in D:\ \omega(z,A,D)<\delta
\right\rbrace.
\end{equation*}
The technique of level sets consists in
``replacing" $A$ (resp. $D$) by $D_{A,\delta}$ (resp.
$D_{A,1-\delta}$) for a suitable $0<\delta<\frac{1}{2}.$
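For instance, if $D=E$ and $A\subset\partial E$ is a closed arc, then by the Poisson-integral formula recalled above, $E_{A,\delta}$ consists of the points of $E$ at which the Poisson integral over $\partial E\setminus A$ is smaller than $\delta.$ Classically, the level curve $\{z\in E:\ \omega(z,A,E)=\delta\}$ is a circular arc joining the two endpoints of $A,$ and $E_{A,\delta}\nearrow E$ as $\delta\nearrow 1.$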
Recall the following property of the level sets.
\begin{lem}\label{lem_formula_level_sets}
Let $D$ be either an empty set or a Jordan domain such that $E\not\subset D$ and that $D\cup E$
is a Jordan domain.
For a measurable subset $A$ of $\partial E\cap \partial (D\cup E)$ with $\operatorname{mes}(A)>0$ and $0<\delta <1$ let $\Omega_{\delta}:=E_{A,\delta}\cup D.$
Define the {\rm angular harmonic measure}
\begin{equation*}
\widetilde{\omega}(z,A,\Omega_{\delta}):=
\sup\limits_{u\in\mathcal{U}_{A,\delta}} u(z), \qquad z\in \Omega_{\delta},
\end{equation*}
where $\mathcal{U}_{A,\delta}$ is the cone of all subharmonic functions $u\leq 1$ on $\Omega_{\delta}$ such that
\begin{equation*}
\limsup\limits_{\Omega_{\delta}\cap \mathcal{A}_{\alpha}(\zeta)\ni z\to\zeta}u(z)\leq 0,\qquad
\zeta\in A\cap A^{\ast},\ 0<\alpha<\frac{\pi}{2}.
\end{equation*}
\begin{itemize}
\item[1)] If $D=\varnothing,$ then $\widetilde{\omega}(z,A,\Omega_{\delta})=\frac{\omega(z,A,E)}{1-\delta},$ $z\in\Omega_{\delta}.$
\item[2)] If $D$ is a Jordan domain, then $\widetilde{\omega}(z,A,\Omega_{\delta})\searrow\omega(z,A,E\cup D)$ as $\delta\searrow 0^{+}.$
\end{itemize}
\end{lem}
\begin{proof}
Part 1) follows from \cite[Theorem 4.10]{pn2}.
Part 2) is a consequence of Part 1).
\end{proof}
The technique of {\it conformal mappings} was introduced by the
second author in \cite{nv2}. It allows one to reduce the study of
holomorphic extensions on certain level sets to the unit disc.
The main idea of this technique is described below (see Proposition 5.2 in \cite{nv2} for a proof).
\begin{prop}
\label{prop_conformal}
Let $A$ be a measurable subset of $\partial E$ with $\operatorname{mes}(A)>0.$ For $0\leq\delta<1$ put
$G:=\left\lbrace
w\in E:\ \omega(w,A,E)<1-\delta\right\rbrace.$
Let $\Omega$ be an arbitrary connected component of $G.$
Then
\begin{itemize}
\item[1)] $\operatorname{End}(\Omega) $ is a measurable subset of $\partial
E$ and $\operatorname{mes}(\operatorname{End}(\Omega))>0.$ Moreover, $\Omega$ is a simply
connected domain.
By virtue of Part 1) and the Riemann mapping theorem, let $\Phi$
be a conformal mapping of $\Omega$ onto $E.$
\item[2)] For every $\zeta\in \operatorname{End}(\Omega),$ there is $\eta\in\partial
E$ such that
\begin{equation*}
\lim\limits_{ \Omega\cap
\mathcal{A}_{\alpha}(\zeta)\ni z\to \zeta}\Phi( z)=\eta, \qquad
0<\alpha<\frac{\pi}{2}.
\end{equation*}
$\eta$ is called {\rm the limit of $\Phi$ at the end-point $\zeta$} and it is denoted by $\Phi(\zeta).$
Moreover, $\Phi|_{\operatorname{End}(\Omega)}$ is one-to-one.
\item[3)] Let $f$ be a bounded holomorphic function on $\Omega,$
$\zeta\in\operatorname{End}(\Omega),$ and $\lambda\in\Bbb C$ such that
$\lim\limits_{\Omega\cap
\mathcal{A}_{\alpha}(\zeta)\ni z\to \zeta}f(z)=\lambda$ for some
$0<\alpha<\frac{\pi}{2}.$ Then $f\circ\Phi^{-1}\in\mathcal{O}(E)$
admits the angular limit $\lambda$ at $\Phi(\zeta).$
\item[4)] Let $\Delta$ be a measurable subset of $\operatorname{End}(\Omega)$ such that
$\operatorname{mes}(\Delta)=\operatorname{mes}(\operatorname{End}(\Omega)).$ Put
$\Phi(\Delta):=\{\Phi(\zeta),\ \zeta\in \Delta\},$ where
$\Phi(\zeta)$ is given by Part 2). Then $\Phi(\Delta)$ is a
measurable subset of $\partial E$ with $\operatorname{mes}\big(\Phi(\Delta) \big)>0$ and
\begin{equation*}
\omega(\Phi(z),\Phi(\Delta),E)=\frac{\omega(z,A,E)}{1-\delta},\qquad
z\in \Omega.
\end{equation*}
\end{itemize}
\end{prop}
As an application of the technique of conformal mappings, we give the following extended version of Theorem \ref{Imomkulov_thm}.
\begin{thm}\label{new_Imomkulov_thm}
Let $A$ be a measurable subset of $\partial E$ with $\operatorname{mes}(A)>0.$ For a given $0\leq\delta<1$ put
$\Omega:=\left\lbrace
w\in E:\ \omega(w,A,E)<1-\delta\right\rbrace.$
Let
$$f:\
\big(\Omega\cup (A\cap \operatorname{End}(\Omega))\big)\times E\longrightarrow\Bbb C$$ be a bounded function such
that $f|_{\Omega\times E}$ is holomorphic and $\lim\limits_{z\to a,\ z
\in \mathcal{A}_{\alpha}(a)} f(z,w)=f(a,w)$ for all $ a\in A\cap \operatorname{End}(\Omega),$
$w\in E$ and $0<\alpha<\frac{\pi}{2}.$ Suppose in addition that
for every $a\in A\cap \operatorname{End}(\Omega),$ the function $f(a,\cdot)$
is holomorphic and it extends to a holomorphic function on the whole plane except for a
closed polar (resp. finite) set of singularities. Then $f|_{\Omega\times E}$
extends holomorphically to $(\Omega\times \Bbb C)\setminus S,$ where $S$
is a relatively closed pluripolar (resp. analytic) subset of
$\Omega\times \Bbb C.$
\end{thm}
Theorem \ref{Imomkulov_thm}
is a special case of the above result for $\delta=0.$
\begin{proof} We only treat the case where
the set of singularities of $f(a,\cdot)$ is closed polar for $a\in A\cap \operatorname{End}(\Omega).$
The remaining case, where these sets are finite, is analogous and is left to the interested reader.
Using (\ref{eq_elementary_new}) we may suppose without loss of generality that $A$ is locally regular. Then $A\subset \operatorname{End}(\Omega).$
Let $(\Omega_k)_{k\in K}$ be the family of all connected components of $\Omega,$ where $K$ is a countable index set.
By Theorem 4.9 in \cite{pn2},
\begin{eqnarray*}
\operatorname{End}(\Omega)&=&\bigcup\limits_{k\in K}\operatorname{End}(\Omega_k),\quad \operatorname{mes}\big(\operatorname{End}(\Omega_k)\cap A\big)= \operatorname{mes}\big(\operatorname{End}(\Omega_k)\big),\\
& & \operatorname{End}(\Omega_k)\cap\operatorname{End}(\Omega_{k^{'}})=\varnothing\quad
\text{for}\ k\not=k^{'}.
\end{eqnarray*}
By Proposition \ref{prop_conformal}, we may fix a conformal mapping $\Phi_k$ from $\Omega_k$ onto $E$ for every $k\in K.$
Put
\begin{equation}\label{eq1_new_Imomkulov_thm}
A_k:=\Phi_k(\operatorname{End}(\Omega_k)\cap A),\ W_k:=(E\cup A_k)\times E,\qquad k\in K.
\end{equation}
Recall from the hypothesis that for every fixed $w\in E,$ the holomorphic function $f(\cdot,w)|_{\Omega}$ is bounded
and that for every $\zeta\in A\cap \operatorname{End}(\Omega),$
\begin{equation*}
\lim\limits_{\Omega\cap \mathcal{A}_{\alpha}(\zeta)\ni z\to \zeta}f(z,w)=f(\zeta,w),\qquad 0<\alpha<\frac{\pi}{2}.
\end{equation*}
Consequently, Part 3) of Proposition \ref{prop_conformal}, applied to $f(\cdot,w)|_{\Omega_k}$ with $k\in K,$
implies that for every fixed $w\in E,$ $f(\Phi_k^{-1}(\cdot),w)\in\mathcal{O}(E)$ admits the angular limit $f(\zeta,w)$ at $\Phi_k(\zeta)$
for all $\zeta\in A\cap \operatorname{End}(\Omega_k).$ By Part 1) of that proposition, we know that $\operatorname{mes}\big( A\cap \operatorname{End}(\Omega_k)\big)>0.$
This discussion and the hypothesis
allow us to apply Theorem \ref{Imomkulov_thm} to the function
$g_k:\ W_k\longrightarrow\Bbb C$ defined by
\begin{equation}\label{eq2_new_Imomkulov_thm}
g_k(z,w):=\begin{cases}
f(\Phi_k^{-1}(z),w), & (z,w)\in E\times E\\
f(\Phi_k^{-1}(z),w), & (z,w)\in A_k\times E
\end{cases},
\end{equation}
where in the second line we have used the definition of $\Phi_k|_{\operatorname{End}(\Omega_k)}$ and its one-to-one property proved
by Part 2) of Proposition \ref{prop_conformal}.
Consequently, we obtain a relatively closed pluripolar set $S_k\subset E\times \Bbb C$ such that
$S_k\cap (E\times E) =\varnothing$ and that
$g_k|_{E\times E}$
extends holomorphically to a function $\hat{g}_k\in\mathcal{O}\big((E\times \Bbb C)\setminus S_k\big)$
with
\begin{equation}\label{eq3_new_Imomkulov_thm}
\lim\limits_{
\mathcal{A}_{\alpha}(a)\ni z\to a} \hat{g}_k(z,w)=g_k(a,w),\qquad (a,w)\in A_k\times E .
\end{equation}
Put
\begin{equation*}
\widehat{\mathcal{W}}_k:=\left\lbrace(\Phi^{-1}_k(z),w),\ (z,w)\in (E\times\Bbb C)\setminus S_k
\right\rbrace,\qquad k\in K.
\end{equation*}
Observe that the open sets $ (\widehat{\mathcal{W}}_k)_{k\in K}$ are pairwise disjoint. Moreover,
by (\ref{eq1_new_Imomkulov_thm}),
\begin{multline*}
\bigcup\limits_{k\in K} \widehat{\mathcal{W}}_k= \bigcup\limits_{k\in K}
\left\lbrace (z,w)\in \Omega_k\times \Bbb C:\
(\Phi_k(z),w)\not\in S_k \right\rbrace\\
= ( \Omega\times \Bbb C)\setminus\bigcup\limits_{k\in K} \left\lbrace (z,w)\in \Omega_k\times \Bbb C:\
(\Phi_k(z),w)\in S_k \right\rbrace=: ( \Omega\times \Bbb C)\setminus S.
\end{multline*}
Since $S_k$ is relatively closed pluripolar in $E\times \Bbb C$ for $k\in K,$
we see that $S$ is relatively closed pluripolar in $\Omega\times \Bbb C.$
Therefore, we define the desired extension function $\hat{f}\in\mathcal{O}\big((\Omega\times\Bbb C)\setminus S\big)$
by the formula
\begin{equation*}
\hat{f}(z,w):=\hat{g}_k(\Phi_k(z),w),\qquad (z,w)\in (\Omega_k\times\Bbb C)\setminus S,\ k\in K.
\end{equation*}
This, combined with (\ref{eq1_new_Imomkulov_thm})--(\ref{eq3_new_Imomkulov_thm}), implies that $ \lim\limits_{
\mathcal{A}_{\alpha}(a)\ni z\to a}\hat{f}(z,w)=f(a,w)$ for all $(a,w)\in A\times E.$
The uniqueness of $\hat{f}$ follows from the one of $\hat{g}_k,$ $k\in K.$
Hence, the proof of the theorem is complete.
\end{proof}
\subsection{Gluing theorems}
The following theorems will be very useful in the next sections when we need to glue different local extensions.
\begin{thm}\label{gluing_thm_1}
Let $A$ and $\mathcal{N}$ be measurable subsets of $\partial E$ with $\operatorname{mes}(\mathcal{N})=0.$
Let $0<\delta<1$ and $E_{A,\delta}:=\left\lbrace z\in E:\ \omega(z,A,E)<\delta\right\rbrace.$
Suppose that $f\in\mathcal{O}(E_{A,\delta})$ admits the angular limit $0$ at all points
of $(A\cap A^{\ast})\setminus \mathcal{N}.$ Then $f\equiv 0.$
\end{thm}
\begin{proof}
See Theorem 5.4 in \cite{pn2}.
\end{proof}
\begin{thm}\label{minimum_principle}
Let $(\Omega_i)_{i\in I}$ be a family of open subsets of an open set $\Omega\subset \Bbb C^n.$
Let $M$ be a relatively closed pluripolar subset (resp. an analytic subset) of $\Omega$ and $M_i$
a non-pluripolar subset of $\Omega_i$ such that $M\cap M_i=\varnothing,$ $i\in I.$
Suppose that $f\in\mathcal{O}(\Omega\setminus M)$ and $f_i\in\mathcal{O}(\Omega_i)$
satisfy $f=f_i$ on $M_i$ for $i\in I.$
Then there exist a relatively closed pluripolar subset (resp. an analytic subset) $\widehat{M}\subset \Omega$
and a function
$\hat{f}\in \mathcal{O}(\Omega\setminus \widehat{M})$ such that
$\widehat{M}\subset M$ and $\hat{f}=f$ on $\Omega\setminus M,$
and that for all $i\in I,$ we have $\widehat{M}\cap \Omega_i=\varnothing$ and $\hat{f}=f_i$ on $\Omega_i.$
\end{thm}
\begin{proof} The case where $M$ is a relatively closed pluripolar subset of $\Omega$ is not difficult.
The remaining case where $M$ is an analytic subset of $\Omega$ follows from an easy application of Proposition
3.4.5 in \cite{jp1}.
\end{proof}
\begin{thm} \label{gluing_thm_2}
Let $(\Omega_n)_{n=1}^{\infty}$ be an increasing sequence of open subsets of an open set $\Omega\subset \Bbb C^n$
such that $\Omega_n\nearrow\Omega$ as $n\nearrow\infty.$
For every $n\in \Bbb N$ let $M_n$ be a
relatively closed pluripolar subset (resp. an analytic subset) of $\Omega_n$
and $f_n\in\mathcal{O}(\Omega_n\setminus M_n).$
Suppose in addition that
$f_n=f_{n+1} $ on $\Omega_n\setminus (M_n\cup M_{n+1}),$ $n\in\Bbb N.$
Then there exist a relatively closed pluripolar subset (resp. an analytic subset) $M\subset \Omega$
and a function
$f\in \mathcal{O}(\Omega\setminus M)$ such that
$M\cap \Omega_n\subset M_n$ and $f=f_n$ on $\Omega_n\setminus M_n$ for all $n\in\Bbb N.$
\end{thm}
\begin{proof}
It is left to the interested reader.
\end{proof}
\section{Extensions through the singularities}
We keep the hypotheses and notation of the Main Theorem.
Moreover, we only give the proof for the case where the singular set is {\it fiberwise polar}, that is,
$M_a$ (resp. $M^b$) is polar in $G$ (resp. $D$) for all $a\in A$ (resp. $b\in B$).
The remaining case, where the singular set is fiberwise discrete, is analogous and is left to the interested reader.
In this section and the beginning of the next one we assume that
\smallskip
{\bf $A$ and $B$ are compact sets.}
\smallskip
This assumption will be removed at the end of the next section.
Since $(A\times B)\cap M=\varnothing$ (by the hypothesis), we may find
$N$ points $a_1,\ldots,a_N\in A,$ $N$ numbers $r_1,\ldots,r_N>0,$ $N^{'}$ points
$b_1,\ldots,b_{N^{'}}\in B,$ and $N^{'}$ numbers $s_1,\ldots,s_{N^{'}}>0$ such that
\begin{equation*}
A\subset \bigcup\limits_{k=1}^N\Delta_{a_k}(r_k),\quad B\subset \bigcup\limits_{l=1}^{N^{'}}\Delta_{b_l}(s_l),\quad
M\cap\Big( \bigcup\limits_{k=1}^N\Delta_{a_k}(r_k)\times \bigcup\limits_{l=1}^{N^{'}}\Delta_{b_l}(s_l) \Big)=\varnothing.
\end{equation*}
Put
\begin{equation}\label{subdomains_D_G}
\widetilde{D}:=D\cap \bigcup\limits_{k=1}^N\Delta_{a_k}(r_k),\quad
\widetilde{G}:=G\cap \bigcup\limits_{l=1}^{N^{'}}\Delta_{b_l}(s_l).
\end{equation}
Then it is clear that $\Bbb X(A,B;\widetilde{D},\widetilde{G})\cap M=\varnothing.$
We introduce the following notation.
For an $a\in A$ (resp. $b\in B$) and $0<r,\delta <1,$ let
\begin{equation}\label{eq_Dardelta}
\begin{split}
D_{a,r,\delta}&:=\left\lbrace z\in D\cap \Delta_a(r):\
\omega(z,A\cap \Delta_a(r),D\cap \Delta_a(r))<\delta
\right\rbrace,\\
G_{b,r,\delta}&:=\left\lbrace w\in G\cap
\Delta_b(r):\ \omega(w,B\cap \Delta_b(r),G\cap \Delta_b(r))<\delta
\right\rbrace.
\end{split}
\end{equation}
Let $\Omega$ be an open subset of $\widehat{W}.$
A point $(a,b)\in (A\cap A^{\ast})\times G$ (resp. $(a,b)\in D\times (B\cap B^{\ast})$) is said to be a {\it strong end-point} of $\Omega$
if there exist $0<r,\delta <1$
and an open neighborhood $V$ of $b$ (resp. and an open neighborhood $U$ of $a$) such that
\begin{equation*}
D_{a,r,\delta}\times V\subset \Omega\quad\Big(\text{resp.}\ U\times G_{b,r,\delta}\subset \Omega\ \Big).
\end{equation*}
It is clear that a strong end-point of $\Omega$ is also an end-point. But the converse statement is in general false.
Now we are in a position to extend $f$ holomorphically through the singular set $M.$
\begin{prop}
\label{prop_extension_2}
For any $a\in A\cap A^{\ast},$ $w\in G,$ there exist $r,\rho,\delta\in ( 0,1) $
and a
relatively closed pluripolar subset $S\subset D_{a,r,\delta}\times
\Delta_w(\rho)$ with the following properties:
\begin{itemize}
\item[1)] $\Delta_w(\rho)\subset G$ and the set $$ T:=\Big( \big (A\cap A^{\ast}\cap
\Delta_a(r)\big) \times \Delta_w(\rho)\Big)\setminus M$$ is contained in the set of strong end-points
of $( D_{a,r,\delta}\times
\Delta_w(\rho))\setminus S.$
\item[2)] There is a function $\hat{f}\in\mathcal{O}\Big(
\big(D_{a,r,\delta}\times\Delta_w(\rho)\big )\setminus S\Big) $ which admits
the angular limit $f$ at all points of $T.$
\end{itemize}
\end{prop}
\begin{proof}
Fix an $a_0\in A\cap A^{\ast}$ and a $w_0\in G$ as in the proposition.
First we
determine $0<r,\rho,\delta<1$ and then we will construct a function $\hat{f}\in\mathcal{O}\Big(
\big(D_{a_0,r,\delta}\times\Delta_{w_0}(\rho)\big )\setminus \widetilde{S}\Big), $
where $ \widetilde{S}$ is a relatively closed pluripolar subset of $D_{a_0,r,\delta}\times\Delta_{w_0}(\rho).$
Since $M_{a_0}$ is a relatively closed polar set in $G,$ one may choose
$\rho>0$ such that $\Delta_{w_0}(\rho)\Subset G$ and
$M_{a_0}\cap\partial\Delta_{w_0}(\rho)=\varnothing$
(cf.~\cite{ArmGar}, Theorem 7.3.9). Take $\rho^-,\rho^+>0$ such that $\rho^-<\rho<\rho^+$,
$\Delta_{w_0}(\rho^+)\Subset G$, and $M_{a_0}\cap\overline
P=\varnothing$, where
\begin{equation*}
P:=\{w\in\Bbb C: \rho^-<|w-w_0|<\rho^+\}.
\end{equation*}
Define
\begin{equation*}
\mathcal{G}:=\left\lbrace w\in \widetilde{G}:\ \omega(w,B,\widetilde{G})<\frac{1}{2} \right\rbrace.
\end{equation*}
Let $\gamma:[0,1]\to G\setminus M_{a_0}$ be a curve such that
$\gamma(0)\in \mathcal{G}$,
$\gamma(1)\in\partial\Delta_{w_0}(\rho)$. Since $M$ is relatively
closed in $W,$ there exist $r,t\in (0,1)$ such that
\begin{equation}\label{since_M_is_closed}
\Delta_{a_0}(r)\cap D\subset \widetilde{D}\quad\text{and}\quad (A\cap \Delta_{a_0}(r))\times\Big((\gamma([0,1])+\Delta_0(t))\cup
P\Big)\subset W\setminus M.
\end{equation}
Put
\begin{equation*}
V:=\mathcal{G}\cup\big(\gamma([0,1])+\Delta_0(t)\big)\cup
P \end{equation*}
and consider the cross
\begin{equation*}
Y:=\Bbb X(A\cap A^{\ast}\cap \Delta_{a_0}(r), \mathcal{G}; D_{a_0,r,\frac{1}{2}},V).
\end{equation*}
Using (\ref{subdomains_D_G}) and (\ref{since_M_is_closed}) and the hypotheses on $f$ in the Main Theorem, we
are able to apply Theorem \ref{Pflug_Nguyen_thm} to the function
$f$ restricted to $\Bbb X(A\cap \Delta_{a_0}(r),B;D\cap\Delta_{a_0}(r) ,\widetilde{G} ).$
Consequently, we obtain $\widetilde{f}\in \mathcal{O}\big(\widehat{\Bbb X}(A \cap\Delta_{a_0}(r),B; D \cap\Delta_{a_0}(r),\widetilde{G} ) \big )
$ which admits the angular limit $f$ on
$\Bbb X^{\text{o}}(A \cap A^{\ast}\cap \Delta_{a_0}(r),B\cap B^{\ast}; D \cap\Delta_{a_0}(r),\widetilde{G} ).
$
Define
\begin{equation*}
f_0:=
\begin{cases}
f
& \text{on}\ (A\cap A^{\ast}\cap \Delta_{a_0}(r))\times V\\
\widetilde{f} & \text{on}\ D_{a_0,r,\frac{1}{2}}\times
\mathcal{G}
\end{cases}.
\end{equation*}
Then $f_0\in\mathcal{O}_s(Y),$ $f_0|_{ (A\cap A^{\ast}\cap \Delta_{a_0}(r))\times\mathcal{G}}$ is measurable,
and
\begin{multline*}
\lim\limits_{ \mathcal{A}_{\alpha}(\zeta)\ni z\to \zeta}
f_0(z,w)=
f(\zeta,w)=f_0(\zeta,w),\\
(\zeta,w)\in (A\cap A^{\ast}\cap \Delta_{a_0}(r))\times \mathcal{G},\ 0<\alpha<\frac{\pi}{2}.
\end{multline*}
Consequently, we are able to apply Theorem \ref{mixed_cross_thm} to $f_0$ in order to obtain
a function $\hat{f}_0$ holomorphic on
\begin{equation*}
\widehat{Y}=\left\lbrace (z,w)\in D_{a_0,r,\frac{1}{2}}\times V:\
2\omega\Big(z, A\cap\Delta_{a_0}(r),D\cap \Delta_{a_0}(r)\Big)
+\omega(w, \mathcal{G},V)<1 \right\rbrace
\end{equation*}
such that $\hat{f}_0=\widetilde{f}$ on $D_{a_0,r,\delta}\times
\mathcal{G}$ and
\begin{equation}\label{eq_hatf0_prop_extension_2}
\lim\limits_{\mathcal{A}_{\alpha}(\zeta)\ni z\to \zeta}
\hat{f}_0(z,w)=
f(\zeta,w)=:\hat{f}_0(\zeta,w),\ (\zeta,w)\in (A\cap A^{\ast}\cap \Delta_{a_0}(r))\times V,\ 0<\alpha<\frac{\pi}{2}.
\end{equation}
We have just extended $\hat{f}_0$ to $\widehat{Y}\cup \big((A\cap A^{\ast}\cap \Delta_{a_0}(r))\times V\big).$
Fix $s^-,s^+>0$ such that $\rho^-<s^-<\rho< s^+<\rho^+,$ and consider the annulus
\begin{equation*}
Q:=\{w\in\Bbb C: s^{-}<|w-w_0|<s^{+}\}.
\end{equation*}
Let $\delta$ be such that
\begin{equation*}
0<\delta<\frac{1}{2}\Big(1-\sup\limits_{w\in Q}\omega(w, \mathcal{G},V) \Big).
\end{equation*}
Using this and applying Lemma \ref{lem_formula_level_sets}, we see that
$D_{a_0,r,\delta}\times \overline{Q}\subset \widehat{Y}.$
Therefore, $\hat{f}_0$ is holomorphic on $D_{a_0,r,\delta}\times Q$ and continuous on $D_{a_0,r,\delta}\times \overline{Q}.$ Moreover,
for any $a\in A\cap A^{\ast}\cap \Delta_{a_0}(r)$ the function
$\hat{f}_0(a,\cdot)$ is holomorphic on $Q$ and continuous on $\overline{Q}.$ Therefore, by the Cauchy integral formula we have
\begin{eqnarray*}
\hat{f}_0(z,w)&=&\frac{1}{2i\pi}\int\limits_{\vert \eta-w_0\vert=s^+}
\frac{ \hat{f}_0(z,\eta)}{\eta-w}\,d\eta-\frac{1}{2i\pi}\int\limits_{\vert \eta-w_0\vert=s^-}
\frac{ \hat{f}_0(z,\eta)}{\eta-w}\,d\eta\\
& =: & \hat{f}^+(z,w)+\hat{f}^-(z,w),\quad
z\in D_{a_0,r,\delta} \cup (A\cap A^{\ast}\cap \Delta_{a_0}(r)) ,\ w\in Q,
\end{eqnarray*}
where $
\hat{f}^+\in\mathcal{O}\Big( D_{a_0,r,\delta} \times \Delta_{w_0}(s^+)\Big)$
and
$\hat{f}^-\in\mathcal{O}\Big( D_{a_0,r,\delta}\times (\Bbb C\setminus
\overline\Delta_{w_0}(s^-))\Big)$.
Recall from (\ref{eq_hatf0_prop_extension_2}) and the hypotheses that for any $a\in A\cap A^{\ast}\cap \Delta_{a_0}(r)$ the function
$\hat{f}_0(a,\cdot)$ extends holomorphically to $G\setminus M_a.$
Consequently, for any $a\in A\cap A^{\ast}\cap \Delta_{a_0}(r)$ the function
$ \hat{f}^-(a,\cdot)$ extends holomorphically to
$\Bbb C\setminus( M_a\cap\overline\Delta_{w_0}(s^-))$. Using (\ref{eq_hatf0_prop_extension_2})
and the above integral formula for $\hat{f}^-\in\mathcal{O}\Big( D_{a_0,r,\delta}\times Q\Big),$
we see that
\begin{equation*}
\lim\limits_{(z,w)\to (\zeta,\eta),\ z\in \mathcal{A}_{\alpha}(\zeta)}
\hat{f}^-(z,w)=\hat{f}^{-}(\zeta,\eta),\qquad (\zeta,\eta)\in (A\cap A^{\ast}\cap \Delta_{a_0}(r))\times Q ,\ 0<\alpha<\frac{\pi}{2}.
\end{equation*}
Now we are in a position to apply
Theorem \ref{new_Imomkulov_thm} to $\hat{f}^-.$ Consequently,
there exists a relatively closed pluripolar set
$\widetilde{S}\subset D_{a_0,r,\delta}\times
\Bbb C$ such that $\hat{f}^-$ extends holomorphically to a function
$\overset\approx f{}^-\in\mathcal{O}\big( (D_{a_0,r,\delta} \times\Bbb C)\setminus \widetilde{S}\big)$.
Since $\hat{f}_0=\hat{f}^++\hat{f}^-$, the function $\hat{f}_0$ extends holomorphically
to a function (still denoted by)
$\hat{f}_0:=\hat{f}^++\overset\approx f{}^-
\in\mathcal{O}\big( ( D_{a_0,r,\delta} \times \Delta_{w_0}(s^{+}))\setminus \widetilde{S}\big)$.
To prove Part 1) and Part 2) fix an arbitrary
$a_1\in A\cap A^{\ast}\cap\Delta_{a_0}(r)$ and $w_1\in \Delta_{w_0}(\rho).$
Since $M_{a_1}$
is polar in $G,$ there exists a smooth curve $\alpha:[0,1]\rightarrow\Bbb C\setminus M_{a_1}$
such that $\alpha(0)\in \widetilde{G}$ and $\alpha(1)=w_1.$
Moreover, using (\ref{subdomains_D_G}) and the hypothesis that $M$ is a relatively closed subset of $W,$ we may find $r_1>0$ so small that
$\widetilde{V}:= \widetilde{G}\cup(\alpha([0,1])+\Delta_0(r_1))$ is a Jordan domain
and that
$$
\Delta_{a_1}(r_1)\Subset \Delta_{a_0}(r),\qquad \big( \Delta_{a_1}(r_1)\times \widetilde{V} \big)\cap M=\varnothing.
$$
Using this,
(\ref{subdomains_D_G}), and the hypotheses on $f$ in the Main Theorem, we
are able to apply Theorem \ref{Pflug_Nguyen_thm} to the function
$f$ restricted to $\Bbb X(A\cap \Delta_{a_1}(r_1),B;D\cap\Delta_{a_1}(r_1) ,\widetilde{V} ).$
Consequently, we obtain $\widehat{f}_1=\widehat{f}_{(a_1,w_1)}
\in \mathcal{O}\big(\widehat{\Bbb X}(A \cap\Delta_{a_1}(r_1),B; D \cap\Delta_{a_1}(r_1),\widetilde{V} ) \big )
$ which admits the angular limit $f$ on
$\Bbb X^{\text{o}}(A \cap A^{\ast}\cap \Delta_{a_1}(r_1),B\cap B^{\ast}; D \cap\Delta_{a_1}(r_1),\widetilde{V} ).
$
Fix a $w_2\in\mathcal{G}.$ Then $w_2\in V\cap \widetilde{V}.$
Choose $\delta_1,\rho_1>0$ so small that
\begin{equation}\label{eq_choosing_delta_1}
\delta_1< 1- \omega(w_1, B, \widetilde{V} )
\end{equation}
and that
\begin{equation*}
D_{a_1,r_1,\delta_1}\times \Delta_{w_2}(\rho_1)\subset \big( D_{a_0,r,\delta} \times \Delta_{w_0}(s^{+}) \big)
\cap \widehat{\Bbb X}(A \cap\Delta_{a_1}(r_1),B; D \cap\Delta_{a_1}(r_1),\widetilde{V} ).
\end{equation*}
Consequently, using (\ref{eq_hatf0_prop_extension_2}), we obtain
\begin{multline*}
\lim\limits_{ \mathcal{A}_{\alpha}(\zeta)\ni z\to \zeta}
\hat{f}_0(z,w)=\lim\limits_{ z\to \zeta,\ z\in \mathcal{A}_{\alpha}(\zeta)}
\hat{f}_1(z,w)=f(\zeta,w),\\
\zeta\in A\cap A^{\ast}\cap \Delta_{a_1}(r_1),\ w\in \Delta_{w_2}(\rho_1), 0<\alpha<\frac{\pi}{2}.
\end{multline*}
By Theorem \ref{gluing_thm_1}, $\hat{f}_0=\hat{f}_1$ on $ D_{a_1,r_1,\delta_1}\times \Delta_{w_2}(\rho_1).$
By shrinking $\rho_1$ (if necessary) and by using the fact that $w_1,w_2\in \widetilde{V}$ and estimate (\ref{eq_choosing_delta_1}),
we deduce from the latter
identity that
\begin{equation}\label{last_eq_prop_extension_2}
\hat{f}_0=\hat{f}_1\quad\text{on}\ \big(D_{a_1,r_1,\delta_1}\times \Delta_{w_1}(\rho_1)\big)\setminus \widetilde{S}.
\end{equation}
Now we are in a position to apply
Theorem \ref{minimum_principle} to $\hat{f}_0
\in\mathcal{O}\big( ( D_{a_0,r,\delta} \times \Delta_{w_0}(s^{+}))\setminus \widetilde{S}\big)$ and to
the family of functions $\big(\widehat{f}_{(a_1,w_1)}\big)$ with
$a_1\in A\cap A^{\ast}\cap\Delta_{a_0}(r)$ and $w_1\in \Delta_{w_0}(\rho)\setminus M_{a_1}.$
Consequently, we obtain the relatively closed pluripolar subset $S\subset D_{a_0,r,\delta}\times\Delta_{w_0}(\rho)
$ satisfying Part 1) and the function $\hat{f}\in\mathcal{O}\Big(
\big(D_{a_0,r,\delta}\times\Delta_{w_0}(\rho)\big )\setminus S\Big). $
Part 2) follows from (\ref{last_eq_prop_extension_2}).
\end{proof}
The role of strong end-points is illustrated by the following uniqueness theorem.
\begin{thm}\label{uniqueness}
Let $f\in\mathcal{O}(\Omega),$ where $\Omega$ is a subdomain of $\widehat{W}.$
Suppose that there exist $a_0\in A\cap A^{\ast},$ $r>0$ and an open subset
$V\subset G$ such that $(A\cap A^{\ast}\cap \Delta_{a_0}(r))\times V$ is contained in the set of strong end-points of $\Omega$
and that the angular limit of $f$ at all points of $(A\cap A^{\ast}\cap \Delta_{a_0}(r))\times V$ equals $0.$
Then $f\equiv 0.$
\end{thm}
\begin{proof}
Applying Theorem \ref{gluing_thm_1} to $f$ restricted to an open set of the form $D_{a_0,r_0,\delta}\times U\subset \Omega$
for suitable $r_0,\delta>0$ and $U\subset G,$ the theorem follows.
\end{proof}
\section{Proof of the Main Theorem}
We keep the notation of the previous section and
introduce some additional notation.
For any $\zeta\in A,$ $r, R\in (0,1)$ with $\Delta_0(R)\cap \widetilde{G}\not=\varnothing,$ let
\begin{equation*}
W_{\zeta,r, R}:= \Bbb X\big(A\cap \Delta_{\zeta}(r),B; D\cap \Delta_{\zeta}(r),
\Delta_0(R)\cup\widetilde{G}\big).
\end{equation*}
Similarly,
for any $\eta\in B,$ $r, R\in (0,1) $ with $\Delta_0(R)\cap \widetilde{D}\not=\varnothing,$ put
\begin{equation*}
W_{\eta,r, R}:= \Bbb X\big(A,B\cap \Delta_{\eta}(r);
\Delta_0(R)\cup\widetilde{D}, G\cap \Delta_{\eta}(r) \big).
\end{equation*}
For any $R\in (0,1)$ with $\Delta_0(R)\cap \widetilde{D}\not=\varnothing$
and $\Delta_0(R)\cap \widetilde{G}\not=\varnothing,$ put
\begin{equation*}
W_{ R}:= \Bbb X\big(A,B;
\Delta_0(R)\cup\widetilde{D}, \Delta_0(R)\cup\widetilde{G} \big).
\end{equation*}
Fix a sequence $(\delta_n:=\frac{1}{2^n})_{n=1}^{\infty}.$
The proof is divided into several steps. In the first three steps $A$ and $B$ are supposed to be compact.
\smallskip
\noindent {\bf Step 1.} { \it For any $\zeta\in A\cap A^{\ast}$ and $R\in (0,1)$ with $\Delta_0(R)\cap \widetilde{G}\not=\varnothing,$
there exist $r\in (0,1)$ and a relatively closed pluripolar subset $\widehat{S}$ of
$
\widehat{W}_{\zeta,r,R}$
and
a function $\hat{f}\in \mathcal{O}(\widehat{W}_{\zeta,r,R} \setminus \widehat{S}) $ with the following properties:
\begin{itemize}
\item[$\bullet$] $ (W^{\text{o}}_{\zeta,r,R}\cap W^{\ast}_{\zeta,r,R})\setminus M $ is contained in the set
of strong end-points of $\widehat{W}_{\zeta,r,R} \setminus \widehat{S}.$
\item[$\bullet$] $\hat{f}$ admits the angular limit
$f$ at all points of
$ (W^{\text{o}}_{\zeta,r,R}\cap W^{\ast}_{\zeta,r,R} )\setminus M.$
\end{itemize}
}
Applying Proposition
\ref{prop_extension_2}
to the points $\zeta\in A\cap A^{\ast}$ and $ w\in \overline{\Delta}_0(R)$ and using the compactness of $ \overline{\Delta}_0(R),$
we find $r,\delta\in (0,1),$ $p\in \Bbb N,$ and for any $j\in\{1,\ldots,p\},$ a point $w_j\in \overline{\Delta}_0(R),$ a number
$\rho_j>0,$ a relatively closed subset $S_j\subset D_{\zeta,r,\delta}\times \Delta_{w_j}(\rho_j),$
and a function $\hat{f}_j\in\mathcal{O}\big( (D_{\zeta,r,\delta}\times \Delta_{w_j}(\rho_j)) \setminus S_j \big)$ such that
\begin{itemize}
\item[$\bullet$] $\overline{\Delta}_0(R)\subset\bigcup\limits_{k=1}^p \Delta_{w_k}(\rho_k);$
\item[$\bullet$]
$\hat{f}_j$ admits the angular limit $f$ at all points of
$\Big (\big ( A\cap A^{\ast}\cap
\Delta_{\zeta}(r) \big)\times \Delta_{w_j}(\rho_j)\Big)\setminus M.$
\end{itemize}
Using this we are able to apply Theorem \ref{uniqueness}. Consequently,
$$\hat{f}_i=\hat{f}_j\quad\text{on}\
\big( D_{\zeta,r,\delta}\times (\Delta_{w_i}(\rho_i)\cap \Delta_{w_j}(\rho_j)\cap \Delta_0(R) ) \big) \setminus (S_i\cup S_j).$$
Therefore, we obtain an $\tilde{f}\in \mathcal{O}\big( ( D_{\zeta,r,\delta}\times\Delta_0(R))\setminus S^{'}\big),$ where
$\tilde{f} =\hat{f}_j$ on $ (D_{\zeta,r,\delta}\times \Delta_{w_j}(\rho_j)) \setminus S^{'}$
and $S^{'}:=\bigcup\limits_{j=1}^p S_j$ is a relatively closed pluripolar set. Moreover, $\Big(\big ( A\cap A^{\ast}\cap
\Delta_{\zeta}(r) \big)\times \Delta_0(R)\Big)\setminus M$ is contained in the set of strong end-points of
$ \big ( D_{\zeta,r,\delta}\times\Delta_0(R)\big)\setminus S^{'}$ and $\tilde{f}$ admits the angular limit $f$ at all points of
the former set.
On the other hand, applying Theorem \ref{Pflug_Nguyen_thm} to the function
$f$ restricted to $\Bbb X(A\cap \Delta_{\zeta}(r),B;D\cap\Delta_{\zeta}(r) ,\widetilde{G} ),$
we obtain $\overset\approx f{}
\in \mathcal{O}\big(\widehat{\Bbb X}(A \cap\Delta_{\zeta}(r),B; D \cap\Delta_{\zeta}(r),\widetilde{G} ) \big )
$ which admits the angular limit $f$ on
$\Bbb X^{\text{o}}(A \cap A^{\ast}\cap \Delta_{\zeta}(r),B\cap B^{\ast}; D \cap\Delta_{\zeta}(r),\widetilde{G} ).
$
Next, we fix an $n_0$ such that $\delta_{n_0}<\delta.$ For $s\in (0,1)$ let
$\widetilde{G}_s:=\{w\in \widetilde{G}:\ \omega(w,B,\widetilde{G})<s\}.$
For all $n\geq n_0$ let
\begin{equation*}
W_n:=\Bbb X\Big( D_{\zeta,r,\delta_n}, \widetilde{G}_{\delta_n}; D_{\zeta,r,1-\delta_n},
\Delta_0(R)\cup \widetilde{G}_{1-\delta_n}\Big).
\end{equation*}
Define $f_n:\ W_n\setminus S^{'}\rightarrow\Bbb C$ as follows
\begin{equation}\label{eq_fn}
f_n:=
\begin{cases}
\tilde{f}, &\text{on}\ \Big(D_{\zeta,r,\delta_n}\times \big ( \Delta_0(R)\cup \widetilde{G}_{1-\delta_n}\big)\Big)\setminus S^{'}\\
\overset\approx f{}, &\text{on}\ D_{\zeta,r,1-\delta_n}\times \widetilde{G}_{\delta_n}
\end{cases};
\end{equation}
here we have applied Theorem \ref{gluing_thm_1} in order to show that $ \tilde{f}=\overset\approx f{}$
on the overlapping set. Clearly, $f_n\in\mathcal{O}( W_n\setminus S^{'}).$
Therefore, applying Theorem \ref{classical_cross_thm} and Theorem \ref{Chirka_Thm} to $ W_n\setminus S^{'},$ we obtain
a relatively closed
pluripolar subset $ \widehat{S}_n$ of $ \widehat{W}_n$ with $ \widehat{S}_n\cap W_n\subset S^{'}$ and a function
$\hat{f}_n\in \mathcal{O}( \widehat{W}_n\setminus \widehat{S}_n)$
with $\hat{f}_n=f_n$ on $ W_n\setminus S^{'}.$ Now, using Lemma \ref{lem_formula_level_sets}, we define
\begin{eqnarray*}
X_n&:=&\Bbb X\big( A\cap A^{\ast}\cap\Delta_{\zeta}(r), B\cap B^{\ast} ; D_{\zeta,r,1-\delta_n},
\Delta_0(R)\cup \widetilde{G}_{1-\delta_n}\big),\\
\widehat{X}_n&:=&\left\lbrace (z,w)\in D_{\zeta,r,1-\delta_n} \times ( \Delta_0(R)\cup \widetilde{G}_{1-\delta_n}):\
\widetilde{\omega}\big(z, A\cap A^{\ast}\cap\Delta_{\zeta}(r), D_{\zeta,r,1-\delta_n} \big ) \right.\\
&\qquad& \left. + \widetilde{\omega}\big(w, B\cap B^{\ast}, \Delta_0(R)\cup \widetilde{G}_{1-\delta_n}\big )<1 \right\rbrace.
\end{eqnarray*}
Then it follows from (\ref{eq_fn}) that $\hat{f}_n$ restricted
to $\widehat{X}_n\setminus \widehat{S}_n$
admits the angular limit $f$ at all points
of $X^{\text{o}}_n$ and the latter set is contained in the set of strong end-points
of $\widehat{X}_n\setminus \widehat{S}_n.$
Therefore, applying Theorem \ref{uniqueness} we see that
$\hat{f}_n=\hat{f}_{n+1}$ on $\widehat{X}_n\setminus (\widehat{S}_n\cup\widehat{S}_{n+1}). $
Moreover, using Theorem \ref{minimum_principle} we may assume that
$\widehat{S}_{n+1}\cap\widehat{X}_n \subset \widehat{S}_n .$
Next, we will show that $\widehat{X}_n\nearrow \widehat{W}_{\zeta,r,R}$ as $n\nearrow\infty.$ To see this
it suffices to observe by Lemma \ref{lem_formula_level_sets} that
\begin{eqnarray*}
\omega\big (\cdot, A\cap A^{\ast}\cap\Delta_{\zeta}(r), D_{\zeta,r,1-\delta_n}\big)&\searrow&
\omega(\cdot, A\cap A^{\ast}, D\cap\Delta_{\zeta}(r)),\\
\omega\big (\cdot , B\cap B^{\ast}, \Delta_0(R)\cup \widetilde{G}_{1-\delta_n}\big)&\searrow&
\omega(\cdot, B\cap B^{\ast},\widetilde{G}),
\end{eqnarray*}
when $n\nearrow\infty.$
Now we are in a position to apply Theorem \ref{gluing_thm_2} to the functions
$\hat{f}_{n}\in\mathcal{O}(\widehat{X}_n\setminus \widehat{S}_n)$ for $n\geq n_0.$
Consequently, we obtain the desired relatively closed pluripolar subset $\widehat{S}$ of $ \widehat{W}_{\zeta,r,R} $
and the desired extension function $\hat{f}.$
This finishes Step 1.
\smallskip
\noindent {\bf Step 2.} { \it
For any $R\in (0,1)$ such that $\Delta_0(R)\cup \widetilde{D}$ and
$\Delta_0(R)\cup \widetilde{G}$ are Jordan domains,
there exist a relatively closed pluripolar subset $\widehat{S}$ of
$
\widehat{W}_{R}
$
and
a function $\hat{f}\in \mathcal{O}(\widehat{W}_{R} \setminus \widehat{S}) $
such that the set $( W^{\text{o}}_{R} \cap W^{\ast}_{R})\setminus M $ is contained in the set
of strong end-points of $\widehat{W}_{R} \setminus \widehat{S}$ and that $\hat{f}$ admits the angular limit
$f$ at all points of the former set.
}
\smallskip
Choose a sequence of closed subsets $(\widetilde{A}_m)_{m=1}^{\infty}$ (resp. $(\widetilde{B}_m)_{m=1}^{\infty}$) of $\partial D$
(resp. $\partial G$) such that
\begin{equation}\label{eq1_Step2}
\begin{split}
\operatorname{mes}(\widetilde{A}_m)&>0, \ \widetilde{A}_m\subset \widetilde{A}_{m+1}\subset A\cap A^{\ast},\ \operatorname{mes}\Big( A\setminus
\bigcup_{m=1}^{\infty}\widetilde{A}_m\Big)=0,\\
\operatorname{mes}(\widetilde{B}_m)&>0,\ \widetilde{B}_m\subset \widetilde{B}_{m+1}\subset B\cap B^{\ast} ,\ \operatorname{mes}\Big( B\setminus \bigcup_{m=1}^{\infty}
\widetilde{B}_m\Big)=0.
\end{split}
\end{equation}
Let
\begin{equation}\label{eq2_Step2}
\mathcal{W}_m:= \Bbb X(\widetilde{A}_m,\widetilde{B}_m;\Delta_0(R)\cup \widetilde{D},\Delta_0(R)\cup \widetilde{G}),
\quad \widehat{\mathcal{W}}_m := \widehat{\Bbb X}
(\widetilde{A}_m,\widetilde{B}_m;\Delta_0(R)\cup \widetilde{D},\Delta_0(R)\cup \widetilde{G}).
\end{equation}
First, we will show that for every $m$ there exist a relatively closed pluripolar subset $\widehat{\mathcal{S}}_m$ of
$
\widehat{\mathcal{W}}_{m}
$
and
a function $\widetilde{f}_m\in \mathcal{O}(\widehat{\mathcal{W}}_{m} \setminus \widehat{\mathcal{S}}_m) $
such that the set $( \mathcal{W}^{\text{o}}_{m} \cap \mathcal{W}^{\ast}_{m})\setminus M $ is contained in the set
of strong end-points of $\widehat{\mathcal{W}}_{m} \setminus \widehat{\mathcal{S}}_m$ and that $\widetilde{f}_m$ admits the angular limit
$f$ at all points of the former set.
For this purpose fix an $m\in \Bbb N.$
Applying Step 1 and using a compactness argument with respect to $\widetilde{A}_m$ we may find $K$ points $\zeta_1,\ldots,\zeta_K\in A\cap A^{\ast}$ and
$K$ numbers $r_1,\ldots,r_K>0$ with the following properties:
\begin{itemize}
\item[$\bullet$]
$\widetilde{A}_m\subset \bigcup\limits_{k=1}^K\Delta_{\zeta_k}(r_k)$ and $ D\cap \bigcup\limits_{k=1}^K\Delta_{\zeta_k}(r_k) \subset \widetilde{D};$
\item[$\bullet$] for every $1\leq k\leq K,$ there are
a relatively closed pluripolar subset $S_k$ of $\widehat{W}_{\zeta_k,r_k,R}$ and
a function $\hat{g}_k\in \mathcal{O}\big(\widehat{W}_{\zeta_k,r_k,R} \setminus S_k\big)$ such that
the set $ (W^{\text{o}}_{\zeta_k,r_k,R} \cap W^{\ast}_{\zeta_k,r_k,R})\setminus M$ is contained
in the set of strong end-points of $\widehat{W}_{\zeta_k,r_k,R}\setminus S_k$
and that $\hat{g}_k$ admits the angular limit
$f$ at all points of the former set.
\end{itemize}
Similarly, using Step 1 again but exchanging the role between $A$ and $B$ (resp. $D$ and $G$),
we may find $L$ points $\eta_1,\ldots,\eta_{L}\in B\cap B^{\ast}$ and
$L$ numbers $s_1,\ldots,s_L>0$
with the following properties:
\begin{itemize}
\item[$\bullet$]
$\widetilde{B}_m\subset \bigcup\limits_{l=1}^{L}\Delta_{\eta_{l}}(s_{l})$ and
$ G\cap \bigcup\limits_{l=1}^{L}\Delta_{\eta_{l}}(s_{l}) \subset \widetilde{G};$
\item[$\bullet$] for every $1\leq l\leq L,$ there are
a relatively closed pluripolar subset $T_{l}$ of $\widehat{W}_{\eta_{l},s_{l},R} $
and a function $\hat{h}_{l}\in \mathcal{O}\big(\widehat{W}_{\eta_{l}, s_{l}, R}\setminus T_{l}\big)$
such that the set $ (W^{\text{o}}_{\eta_{l},s_{l},R} \cap W^{\ast}_{\eta_{l},s_{l},R})\setminus M$ is contained
in the set of strong end-points of $\widehat{W}_{\eta_{l},s_{l},R}\setminus T_{l}$
and that $\hat{h}_{l}$
admits the angular limit
$f$ at all points of the former set.
\end{itemize}
Put $S:=\bigcup\limits_{k=1}^K S_k$ and $ T:= \bigcup\limits_{l=1}^{L} T_{l}.$
For every $n\geq 1$ let
\begin{eqnarray*}
A_n &:=& \bigcup\limits_{k=1}^K D_{\zeta_k,r_k,\delta_n},\quad \quad B_{n}:=
\bigcup\limits_{l=1}^{L} G_{\eta_{l},s_{l},\delta_n},\\
D_n&:=&\left\lbrace z\in \Delta_0(R)\cup \widetilde{D}:\ \omega\Big( z, \widetilde{A}_m, \Delta_0(R)\cup \widetilde{D} \Big)
<1-\delta_n \right\rbrace,\\
G_{n} &:=&\left\lbrace w\in \Delta_0(R)\cup \widetilde{G}:\ \omega\Big( w, \widetilde{B}_m, \Delta_0(R)\cup \widetilde{G} \Big)
<1-\delta_n \right\rbrace,\\
W_{n} &:=& \Bbb X\Big(A_{n},
B_{n};D_{n},G_{n}\Big),\qquad X_n:=\Bbb X(\widetilde{A}_m\cap \widetilde{A}_m^{\ast},\widetilde{B}_m\cap \widetilde{B}_m^{\ast};D_n,G_n),\\
\widehat{\widetilde{X}}_n &:=& \left\lbrace (z,w)\in D_n\times G_n:\ \widetilde{\omega}(z,\widetilde{A}_m\cap \widetilde{A}_m^{\ast},D_n)+
\widetilde{\omega}(w,\widetilde{B}_m\cap \widetilde{B}_m^{\ast},G_n)<1
\right\rbrace ,
\end{eqnarray*}
where in the last line we can apply Lemma \ref{lem_formula_level_sets} since $\Delta_0(R)\cup \widetilde{D}$
and $ \Delta_0(R)\cup \widetilde{G}$ are Jordan domains.
Applying Theorem \ref{uniqueness} and Theorem \ref{gluing_thm_2}, we may glue $(\hat{g}_k)_{k=1}^K$ together
in order to define the function
$g_{n}:\ (A_n\times G_n)\setminus S\longrightarrow\Bbb C$ as follows
\begin{equation*}
g_{n} :=
\hat{g}_k
\qquad \text{on}\ (D_{\zeta_k,r_k,\delta_n}\times G_{n}) \setminus S.
\end{equation*}
Similarly, we may glue $(\hat{h}_{l})_{l=1}^{L}$ together
in order to define the function
$h_{n}:\ (D_n\times B_n)\setminus T\longrightarrow\Bbb C$ as follows
\begin{equation*}
h_{n} :=
\hat{h}_{l}
\qquad \text{on}\ (D_n\times G_{\eta_{l},s_{l},\delta_n} ) \setminus T.
\end{equation*}
Finally, we glue $g_n$ and $h_n$ together in order to define the function
$f_{n}:\ W_n\setminus (T\cup S)\longrightarrow\Bbb C$ as follows
\begin{equation}\label{eq_new_fn}
f_{n} :=
\begin{cases}
g_n
&\text{on}\ (A_n\times G_{n}) \setminus S\\
h_n & \text{on}\ (D_{n}\times B_{n}) \setminus T
\end{cases}.
\end{equation}
The remaining part of the proof follows along the same lines as in Step 1.
Applying Theorem \ref{classical_cross_thm} and Theorem \ref{Chirka_Thm} to $ W_n\setminus( S\cup T),$ we obtain
a relatively closed
pluripolar subset $ \widehat{S}_n$ of $ \widehat{W}_n$ with $ \widehat{S}_n\cap W_n\subset (S\cup T)$ and a function
$\hat{f}_n\in \mathcal{O}( \widehat{W}_n\setminus \widehat{S}_n)$
with $\hat{f}_n=f_n$ on $ W_n\setminus (S\cup T).$
In particular, it follows from (\ref{eq_new_fn}) that $\hat{f}_n$ restricted
to $\widehat{\widetilde{X}}_n\setminus \widehat{S}_n,$
admits the angular limit $f$ at all points
of $X^{\text{o}}_n\setminus M.$ Here observe that the latter set is contained in the set of strong end-points
of $\widehat{\widetilde{X}}_n\setminus \widehat{S}_n.$
Therefore, applying Theorem \ref{uniqueness} we see that
$\hat{f}_n=\hat{f}_{n+1}$ on $\widehat{\widetilde{X}}_n\setminus (\widehat{S}_n\cup\widehat{S}_{n+1}). $
Moreover, using Theorem \ref{minimum_principle} we may assume that
$\widehat{S}_{n+1}\cap\widehat{\widetilde{X}}_n \subset \widehat{S}_n .$
Next, an application of Lemma \ref{lem_formula_level_sets} gives that
$\widehat{\widetilde{X}}_n\nearrow \widehat{\mathcal{W}}_{m}$ as $n\nearrow\infty.$
Now we are in a position to apply Theorem \ref{gluing_thm_2} to the functions
$\hat{f}_{n}\in\mathcal{O}(\widehat{\widetilde{X}}_n\setminus \widehat{S}_n)$ for $n\geq 2.$
Consequently, we obtain the desired relatively closed pluripolar subset $\widehat{\mathcal{S}}_m$ of $ \widehat{\mathcal{W}}_{m} $
and the desired extension function $\widetilde{f}_m.$
Using (\ref{eq1_Step2})--(\ref{eq2_Step2}) and (\ref{eq_elementary}) we see that
$\widehat{\mathcal{W}}_m\nearrow \widehat{W}_R$ as $m\nearrow\infty.$
Therefore, applying
Theorem \ref{uniqueness}, Theorem \ref{minimum_principle} and Theorem \ref{gluing_thm_2}, Step 2 follows.
\smallskip
\noindent {\bf Step 3.} { \it Completion of the case where $A$ and $B$ are compact.}
\smallskip
Fix $R\in (0,1)$ such that $\Delta_0(R)\cup \widetilde{D}$
and $\Delta_0(R)\cup \widetilde{G}$ are nonempty Jordan domains. Choose a sequence
$(R_n)_{n=1}^{\infty}$ such that
$R_n >R,$ $R_n\nearrow 1$ as $n\nearrow\infty.$
For $n\geq 1$ put $
W_{n} := \Bbb X\Big(A,B; \Delta_0(R_n)\cup \widetilde{D} ,\Delta_0(R_n)\cup \widetilde{G}\Big).
$
Applying the result of Step 2, we may find, for every $n\geq 1,$ a relatively closed pluripolar subset $\widehat{M}_n$ of $\widehat{W}_{n}$
and
a function $\hat{f}_n\in \mathcal{O}(\widehat{W}_{n} \setminus \widehat{M}_n) $ such that
$(W_{n}^{\ast}\cap W^{\text{o}}_n)\setminus M$ is contained in the set of strong end-points of $\widehat{W}_{n} \setminus \widehat{M}_n$ and that
$\hat{f}_n$ admits the angular limit
$f$ at all points of the former set.
Since $\widehat{W}_{n}\nearrow \widehat{W}$ as $n\nearrow \infty,$ we conclude this step by applying
Theorem \ref{uniqueness}, Theorem \ref{minimum_principle} and Theorem \ref{gluing_thm_2} as in Step 2.
\smallskip
\noindent {\bf Step 4.} { \it The general case.}
\smallskip
Choose a sequence of closed subsets $(A_n)_{n=1}^{\infty}$ (resp. $(B_n)_{n=1}^{\infty}$) of $\partial D$
(resp. $\partial G$) such that
\begin{eqnarray*}
\operatorname{mes}(A_n)&>&0, \ A_n\subset A_{n+1}\subset A,\ \operatorname{mes}\Big( A\setminus \bigcup_{n=1}^{\infty}A_n\Big)=0,\\
\operatorname{mes}(B_n)&>&0,\ B_n\subset B_{n+1}\subset B,\ \operatorname{mes}\Big( B\setminus \bigcup_{n=1}^{\infty}B_n\Big)=0.
\end{eqnarray*}
Let $W_n:= \Bbb X(A_n,B_n;D,G).$
Applying the hypotheses to $f|_{W_n\setminus M}$ for $n\geq 1,$
we obtain a relatively closed pluripolar subset $\widehat{M}_n$ of $\widehat{W}_n$ and
a function $\hat{f}_n\in\mathcal{O}(\widehat{W}_n\setminus \widehat{M}_n)$
such that $(W_{n}^{\ast}\cap W^{\text{o}}_n)\setminus M$ is contained in the set of strong end-points of $\widehat{W}_{n} \setminus \widehat{M}_n$ and that
$\hat{f}_n$ admits the angular limit
$f$ at all points of the former set. Using (\ref{eq_elementary}) we see that
$\widehat{W}_n\nearrow \widehat{W}$ as $n\nearrow\infty.$
Therefore, arguing as at the end of the previous step, Step 4 follows.
\hfill $\square$
\section{Introduction}
\label{Sec-intro}
Examples of excitable media are frequently found in biological and
chemical systems. Prominent examples are cardiac and neural tissue
\cite{WinfreeBook,Davidenko}, slime mold colonies in a starving
environment \cite{dictyostelium} and intracellular calcium waves
\cite{calcium}. There are two defining features of excitable media
which are crucial to enable effective signal transmission in
biological systems such as cardiac or neural tissue: threshold
behaviour and relaxation to a stable rest state. The threshold
behaviour ensures that a signal is produced only for sufficiently large
stimuli, whereas small perturbations decay immediately. For
super-threshold perturbations a signal will decay only after a long
excursion -- called action potential in the context of cardiac
dynamics -- back to its stable rest state. This relaxation allows for
repeated stimulation which is essential for wave propagation in
cardiac and neural tissue.\\ Typical solutions in one-dimensional
excitable media are wave trains. The wavelength $L$ can range from
$L=\infty$ corresponding to an isolated pulse to a minimal value $L_c$
below which propagation fails. Besides wave trains, rotating spiral
waves may form in two-dimensional excitable media, and scroll waves in
three-dimensional excitable media
\cite{Winfree,Winfree90,Winfree94,Margerit01,Margerit02}. In the
context of cardiac excitable media propagation failure of these
solutions is often linked to clinical situations. In particular the
break-up of spiral waves has been associated with pathological cardiac
arrhythmias \cite{Chaos}. Spiral waves may be created in cardiac
tissue when wave trains propagate through inhomogeneities of the
cardiac tissue. A reentrant spiral may move around an anatomical
obstacle or around a region of partially or totally inexcitable
tissue. Once created they drive the heart at a rate much faster than
normal sinus rhythm and cause tachycardia. If these spiral waves then
subsequently break up into multiple drifting and meandering spirals and
disintegrate into a disorganized state, fibrillation may occur with a
possible fatal result for the patient, especially when occurring in
the ventricles. It is therefore of great interest to understand the
transition from one reentrant spiral to the disorganized collection of
complex reentrant pathways. Rather than investigating the full
two-dimensional problem of spiral wave break-up which would include
interactions of numerous wave arms, one can study some aspects of
spiral wave breakup by looking at a one-dimensional slice of a spiral,
i.e. at a one-dimensional wave train
\cite{Courtemanche}. A pulse circulating around a one-dimensional ring
constitutes the simplest model for a spiral rotating around an
anatomical obstacle. Such models concentrate solely on the dynamics
close to the anatomical obstacle and ignore the influence of the
dynamics of the spiral arms.\\
\noindent
Experimentally this problem has been studied since the beginning of
the last century. In \cite{Mines} the circulation of an electrical
pulse around a ring-shaped piece of atrial cardiac tissue from a
tortoise heart was investigated as a model for reentrant
activity. Recently, the experiments in \cite{Frame} investigated the
dynamics of pulse circulation around a ring-shaped piece of myocardial
tissue from a dog heart. Remarkably, it was found that oscillations of
the circulation period of the pulse occurred leading to conduction
block and subsequent termination of reentry.\\
\noindent
This indicates that a one-dimensional oscillatory instability, named
alternans, may be the mechanism triggering spiral wave breakup
\cite{Nolasco,Courtemanche,Karma_A,Karma93,Fenton02}. This instability
occurs when the circulation time of the pulse around the ring is below
a certain threshold. Below this threshold alternans arise and
action potential durations alternate periodically between short
and long values. We make here the distinction between alternans which
are mediated by external periodic stimulations
\cite{Hastings00,Guevara02,Echebarria02,Echebarria02b,Fox02,Henry05,Vinet99,Vinet03,Vinet05}
and alternans which have a purely dynamical origin. We call the latter
ones {\em dynamical alternans}. It is these dynamical alternans with
which this paper is concerned. We will analyze dynamical
alternans using a normal form proposed in \cite{GottKram05} for
travelling waves in one-dimensional excitable media. The stability of
dynamical alternans will be determined by center manifold theory and
by multiple scale analysis. Typically oscillatory instabilities arise
in one-dimensional media via Hopf bifurcations. These Hopf
bifurcations may be supercritical resulting in sustained stable
oscillations or subcritical leading to a collapse of the oscillations
and possibly of the pulse solution. Alternans are widely believed to
originate via a supercritical bifurcation
\cite{Courtemanche,Karma93,Karma94,Courtemanche96}. This belief is
based on numerical simulations of certain models for cardiac
dynamics. However we note that the above mentioned experiments
\cite{Mines,Frame} show that the occurrence of oscillations leads to a
subsequent termination of the pulse, which is not suggestive of a
supercritical bifurcation.\\
\noindent
Our main result for a single pulse and for a wave train in a ring is
that in the framework of our normal form alternans arise as a
subcritical Hopf bifurcation. This result is in contrast to the common
belief that alternans are stable oscillations. We corroborate our
result by numerical simulations of a model for cardiac tissue in which
previous numerical simulations suggested the occurrence of stable
oscillations. We will see that previous numerical experiments have not
been performed sufficiently long to reveal the subcritical character
of the Hopf bifurcation. The subcritical character is in agreement
with the above mentioned experiments \cite{Mines,Frame} and may
explain why alternans often trigger spiral wave breakup and are
associated with cardiac failures.
The normal form describes excitable media in parameter regions where
the system is close to the saddle-node of the travelling wave. This is
the case for models such as the FitzHugh-Nagumo model
\cite{FHN}, and is typically the case when the Hopf bifurcation occurs
close to the saddle-node bifurcation of the isolated pulse. In
particular the normal form is valid for those classes of excitable
media (or parameter regions of a particular excitable medium) where
the activator weakly interacts with the preceding inhibitor which
exponentially decays towards the homogeneous rest state. However for
other models such as the model by Echebarria and Karma
\cite{Echebarria02} there exist parameter regions where upon
decreasing the length of the ring the pulse solution is driven away
from the solitary pulse solution and does not weakly interact with the
inhibitor. Our normal form cannot describe these oscillations and we
present numerical simulations where oscillations are indeed stable in
Section~\ref{Sec-numerics}.\\
\noindent
Recently we have constructed a normal form for travelling waves in
one-dimensional excitable media which takes the form of a delay
differential equation \cite{GottKram05} (see (\ref{NF}) and
(\ref{NF2})). The construction is based on the well-known observation
that the interaction of a pulse with the inhibitor of the preceding
pulse modifies the generic saddle-node bifurcation of an isolated
pulse.
In Fig.~\ref{Fig-barkley2} we illustrate this scenario for a
modified Barkley model \cite{Barkley91}.
\begin{figure}
\centerline{
\psfig{file=Fig12.eps,angle=0,width=3.0in}}
\caption{Plot of the activator $u$ (continuous line) and the inhibitor
$v$ (dashed line) for the modified Barkley system (\ref{barkley})
showing how the activator $u$ weakly interacts with the exponentially
decaying tail of the inhibitor $v$. The parameters are $a=0.22$,
$u_s=0.1$, $\epsilon=0.03755$. The ring length is $L=245$ which is
close to the critical value $L_c$ for the saddle-node bifurcation.}
\label{Fig-barkley2}
\end{figure}
The normal form exhibits a rich bifurcation behaviour which we
could verify by numerically simulating partial differential equation
models of excitable media. Besides the well known saddle-node
bifurcations for isolated pulses and for periodic wave trains the
normal form also exhibits a Hopf bifurcation and a symmetry breaking,
spatially inhomogeneous pitchfork bifurcation. Moreover, the normal
form shows that the saddle-node and the Hopf bifurcation are an
unfolding of a Bogdanov-Takens point as previously suggested in
\cite{Knees92,GottwaldKramer04}. The Hopf bifurcation is found to
occur before the saddle-node bifurcation for a single pulse in a
ring. For a wave train consisting of several pulses in a ring, the
Hopf- and the saddle-node bifurcations occur after the symmetry
breaking pitchfork bifurcation in which every second pulse dies. We
could verify these scenarios in numerical simulations of a modified
Barkley-model \cite{Barkley91} and the FitzHugh-Nagumo equations
\cite{FHN}. The normal form provides a unified framework to study all
possible bifurcations of travelling waves in one-dimensional excitable
media.\\ We were able to determine the parameters of the normal form
from numerical simulations of the modified Barkley model
\cite{Barkley91,GottwaldKramer04}. Using these numerically determined
parameters we showed excellent agreement between the normal form and
the full partial differential equation. We quantitatively described
the Hopf bifurcation and the inhomogeneous pitchfork bifurcation with
the normal form. Moreover, we were able to quantify the
Bogdanov-Takens bifurcation.\\ Whereas the subcritical character of
the pitchfork bifurcation had been established in \cite{GottKram05}, a
detailed analysis of the Hopf bifurcation was missing. For example, an
interesting question is whether the Hopf bifurcation is subcritical
(as numerically observed for the parameters chosen in
\cite{GottKram05} for the modified Barkley model), or whether it is
possible to observe sustained stable oscillations. This has important
implications for cardiac alternans as described above. In this paper
we will analyze the Hopf bifurcation of the normal form in detail. We
derive a normal form for the Hopf bifurcation which allows us to
determine the stability of the bifurcating solutions close to
criticality.\\
\noindent
Before we embark on the analytical investigation of the Hopf
bifurcation and the implications for cardiac dynamics, we state that
all conclusions drawn obviously depend on the validity of the normal
form. So far the proposed normal form which we briefly review in
Section~\ref{Sec-NF} has not been rigorously derived for any excitable
medium. However, we point out that in \cite{GottKram05} we have shown
good quantitative agreement with some real excitable media. We will
discuss the limitations of our approach in more detail in
Section~\ref{Sec-Disc}.\\
\noindent
In Section~\ref{Sec-NF} we recall the normal form and some of its
properties. In Section~\ref{Sec-BifHopf} we perform a center manifold
reduction of the normal form to describe the character of the Hopf
bifurcation for a single pulse in a ring. In Section~\ref{Sec-CGL} we
look at the case where a pseudo continuum of modes undergo a Hopf
bifurcation and derive in a multiple scale analysis a Ginzburg-Landau
equation which allows us to study the stability of the Hopf
bifurcation of a wave train. The paper concludes with
Section~\ref{Sec-Disc} where we present results from numerical
simulations, discuss the implications of our analysis to cardiac
dynamics and make connections to previous studies on alternans. In
particular, we show that our condition for the onset of the Hopf
bifurcation coincides with the well known restitution condition for
cardiac alternans.
\noindent
\section {A normal form}
\label{Sec-NF}
\vskip 5pt
In \cite{GottKram05} we introduced a normal form for a single pulse
on a periodic domain with length $L$
\begin{eqnarray}
\label{NF}
\partial_t X = -\mu - g X^2 - \beta (\gamma + X(t-\tau) + \gamma_1X(t))\; ,
\end{eqnarray}
where
\begin{eqnarray}
\label{beta}
\beta=\beta_0 \exp{(-\kappa \tau)}\; ,
\end{eqnarray}
for positive $\beta$, $\kappa$, $\gamma$ and $\gamma_1$. Here $X(t)$
may be, for example, the difference of the amplitude or of the velocity of
a pulse from its respective value at the saddle-node. The terms
proportional to $\beta$ incorporate finite domain effects associated
with the activator of an excitable medium running into its own
inhibitor with speed $c_0$ after the temporal delay $\tau=(L-\nu)/c_0$
where $\nu$ is the finite width of the pulse. Note that for $\beta=0$
(i.e. for the isolated pulse with $\tau \to
\infty$) we recover the generic saddle-node bifurcation which is well
known for excitable media. Numerical simulations of excitable media
show that the bifurcations of a {\it single} propagating pulse in a
ring are different from the bifurcations of a wave train consisting of
several distinct pulses. In the case of a wave train with finite wave
length where a pulse may run into the inhibitor created by its
preceding pulse, we showed that it was sufficient to consider two
alternating populations of pulses $X$ and $Y$. We derived the
following extension
\begin{eqnarray}
\label{NF2}
\partial_t X &=&-\mu - g X^2 - \beta (\gamma + Y(t-\tau) +
\gamma_1X(t)) \nonumber \\
\partial_t Y &=& -\mu - g Y^2 - \beta (\gamma + X(t-\tau) + \gamma_1Y(t))\; .
\end{eqnarray}
To avoid confusion we state that we use the term {\em normal form}
here in two different contexts. Equations (\ref{NF}) and (\ref{NF2})
are coined ``normal form'' as they are an attempt to describe the
behaviour of travelling waves in generic one-dimensional excitable
media. However, these normal forms exhibit a rather rich bifurcation
behaviour. We will focus in Section~\ref{Sec-BifHopf} on the solution
behaviour close to criticality. In that context we will speak of
``normal forms'' in the sense of bifurcation theory.
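Although the analysis below is carried out analytically, the normal form (\ref{NF}) is also easy to explore numerically. The following Python sketch (all parameter values are illustrative placeholders, not fitted to any particular excitable medium) shows the basic structure of such an integration, namely an explicit Euler step in which the delayed value $X(t-\tau)$ is read from a stored history:
\begin{verbatim}
import numpy as np

# Explicit Euler integration of the delay equation (NF); a minimal sketch
# with placeholder parameter values.
beta0, kappa, tau = 10.0, 1.0, 5.0
mu, g, gamma, gamma1 = -0.2, 1.0, 0.1, 0.1
beta = beta0 * np.exp(-kappa * tau)          # coupling, eq. (beta)

dt = 1.0e-3
n_delay = int(round(tau / dt))               # steps per delay interval
n_steps = 100 * n_delay

X = np.empty(n_steps + 1)
X[0] = 0.0                                   # value at t = 0
X_history = 0.0                              # constant history X(t) = 0 for t <= 0

for n in range(n_steps):
    X_del = X[n - n_delay] if n >= n_delay else X_history   # X(t - tau)
    X[n + 1] = X[n] + dt * (-mu - g * X[n]**2
                            - beta * (gamma + X_del + gamma1 * X[n]))

print("X(T) =", X[-1])
\end{verbatim}
For the weakly coupled values chosen here ($\beta\tau<1$) the trajectory is expected to relax to the upper stationary branch derived in the following subsection; increasing $\beta\tau$ beyond one opens up the oscillatory instabilities studied below.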
\noindent
\subsection {Properties of the normal form}
\label{Sec-BT}
\subsubsection {Saddle-node bifurcation}
\label{Sec-SN}
Equation (\ref{NF}) supports the following stationary solutions
\begin{eqnarray}
\label{statio}
{\bar X}_{1,2} =\frac{1}{2g}[-\beta(1+\gamma_1)\pm
\sqrt{\beta^2(1+\gamma_1)^2-4g(\mu+\beta \gamma)}] \;.
\end{eqnarray}
It is readily seen that the upper solution branch is stable whereas
the lower one is unstable \cite{GottKram05}. The two solutions
coalesce in a saddle-node bifurcation with
\begin{eqnarray}
\label{SN_wt}
{\bar X}_{SN} = -\frac{\beta}{2g}(1+\gamma_1) \qquad {\rm{at}} \qquad
\mu_{SN}= \frac{\beta^2}{4g}(1+\gamma_1)^2-\beta \gamma \;.
\end{eqnarray}
One expects the parameter $\beta$, which describes the coupling to the
inhibitor, to be small (see \cite{GottKram05}). This implies
$\mu_{SN}< 0$, indicating that the saddle-node of a single pulse or of
a periodic wave train on a finite ring occurs at smaller values of the
bifurcation parameter $\mu$ than for the isolated pulse. Hence, the
bifurcation is shifted to the left when compared to the isolated pulse
(see Figure~\ref{Fig-Hopfsketch}).\\
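This can be checked symbolically: at $\mu=\mu_{SN}$ the steady-state condition obtained from (\ref{NF}) with $X(t)=X(t-\tau)={\bar X}$ vanishes at ${\bar X}={\bar X}_{SN}$ together with its derivative with respect to ${\bar X}$. A brief sympy sketch of this check:
\begin{verbatim}
import sympy as sp

# Symbolic check of the saddle-node (SN_wt): at mu = mu_SN the steady-state
# condition for (NF) and its X-derivative vanish simultaneously at X = X_SN.
beta, g, gamma, gamma1, mu, X = sp.symbols('beta g gamma gamma1 mu X', real=True)
rhs = -mu - g*X**2 - beta*(gamma + X + gamma1*X)   # X(t) = X(t - tau) = X

X_SN  = -beta*(1 + gamma1)/(2*g)
mu_SN = beta**2*(1 + gamma1)**2/(4*g) - beta*gamma

print(sp.simplify(rhs.subs({X: X_SN, mu: mu_SN})))     # -> 0
print(sp.simplify(sp.diff(rhs, X).subs(X, X_SN)))      # -> 0
\end{verbatim}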
\noindent
Besides this stationary saddle node bifurcation the normal form
(\ref{NF}) also contains a Hopf bifurcation.
\begin{figure}
\centerline{
\psfig{file=Fig1new.eps,angle=0,width=3.0in}}
\caption{Sketch of the bifurcation diagram for a single
pulse in a ring showing a stationary saddle-node bifurcation (SN) and
a subcritical Hopf bifurcation (HH).}
\label{Fig-Hopfsketch}
\end{figure}
\subsubsection{Hopf bifurcation}
\label{Sec-Hopf}
Linearization of the normal form around the homogeneous solution
${\bar X}$ with respect to small perturbations of the form $\delta X
\exp{\lambda t}$ with $\lambda=\sigma + i \omega$ yields
\begin{eqnarray}
\label{lin}
\lambda+2g{\bar X} + \beta \gamma_1 + \beta e^{-\lambda \tau} = 0\; .
\end{eqnarray}
Besides the stationary saddle-node bifurcation (\ref{SN_wt}) with
$\lambda=0$, the linearization (\ref{lin}) also reveals the existence
of a Hopf bifurcation with $\lambda=i\omega$. We readily find from
(\ref{lin})
\begin{eqnarray}
\label{HomHopfa}
\omega&=&\beta\sin{\omega \tau}\\
\label{HomHopfb}
{\bar X}_{H}&=& -\frac{\beta}{2g}(\cos{\omega \tau} + \gamma_1)\; .
\end{eqnarray}
The first equation (\ref{HomHopfa}) allows us to formulate a necessary
condition for the existence for a Hopf bifurcation
\[
\beta \tau > 1 \; ,
\]
i.e. if the coupling is strong enough and the pulse feels the presence
of the inhibitor of the preceding pulse sufficiently strongly. The
Hopf bifurcation occurs in parameter space before the saddle-node
bifurcation and bifurcates from the upper stable branch ${\bar X}_1$
of (\ref{statio}) as is readily seen by observing ${\bar X}_{H} \ge
{\bar X}_{SN}$, independent of the value of $\beta$. Hence we may
equate ${\bar X}_{H}= {\bar X}_1$ and solve for the bifurcation
parameter. Setting $\mu_{H}=\mu_{SN}-\delta\mu$ we find
\begin{eqnarray}
\label{mueHopf}
\delta \mu=\frac{\beta^2}{4g}(1-\cos{\omega \tau})^2\; .
\end{eqnarray}
The saddle-node bifurcation coalesces with the Hopf bifurcation in a
codimension-$2$ Bogdanov-Takens point. At the Bogdanov-Takens point
with $\mu_{H}=\mu_{SN}$ we have $\omega \tau=0$ and $\omega=0$,
i.e. the period of the oscillation goes to infinity. The amplitude at
the Bogdanov-Takens point is readily determined by comparison of
(\ref{SN_wt}) with (\ref{HomHopfb}) for $\omega \tau=0$. From
(\ref{HomHopfa}) we infer that this occurs at $\beta \tau=1$. We note
that if $\beta \tau$ is large enough there can be arbitrarily many
solutions $\omega_l$ of (\ref{HomHopfa}). We will discuss this
scenario in Section~\ref{Sec-CGL}.
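For given $\beta$ and $\tau$ with $\beta\tau>1$ the Hopf point is readily located numerically by bracketing the unique root of (\ref{HomHopfa}) in $(0,\pi)$ and inserting it into (\ref{HomHopfb}) and (\ref{mueHopf}); a brief sketch with illustrative parameter values:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Numerical sketch: locate the Hopf bifurcation from (HomHopfa)-(mueHopf).
# Parameter values are illustrative placeholders.
beta, tau, g, gamma, gamma1 = 0.8, 5.0, 1.0, 0.1, 0.1
bt = beta * tau
assert bt > 1.0, "beta*tau <= 1: no Hopf bifurcation"

f = lambda x: x - bt * np.sin(x)             # x = omega*tau
x = brentq(f, 1e-9, np.pi - 1e-9)            # unique root in (0, pi) for beta*tau > 1
omega = x / tau

X_H      = -beta / (2 * g) * (np.cos(x) + gamma1)                 # (HomHopfb)
delta_mu = beta**2 / (4 * g) * (1 - np.cos(x))**2                 # (mueHopf)
mu_SN    = beta**2 * (1 + gamma1)**2 / (4 * g) - beta * gamma     # (SN_wt)
print(f"omega*tau = {x:.4f},  X_H = {X_H:.4f},  mu_H = {mu_SN - delta_mu:.4f}")
\end{verbatim}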
In Figure~\ref{Fig-Hopfsketch} we show a schematic bifurcation diagram
with the saddle-node bifurcation and the subcritical Hopf bifurcation
for a single pulse in a ring.
\subsubsection{Spatially inhomogeneous pitchfork bifurcation}
\label{Sec-PF}
When a group of several pulses in a ring is numerically simulated one
observes that this wave train group does not undergo a symmetry
preserving Hopf bifurcation on increasing the refractoriness, but
instead develops a symmetry breaking, spatially inhomogeneous
instability whereby every second pulse dies.
The normal form (\ref{NF2}) is able to predict and quantitatively
describe this scenario \cite{GottKram05}. The system (\ref{NF2}) for
wave trains supports two types of solutions. Besides the homogeneous
solution (\ref{statio}), ${\bar X}_h={\bar Y}_h={\bar{X}}_1$, which
may undergo a saddle-node bifurcation as described by (\ref{SN_wt}),
there exists another stationary solution, an alternating mode $X_a$
and $Y_a$, with
\begin{eqnarray}
\label{AI_2}
{\bar X}_{a}=-{\bar Y}_a+\frac{\beta}{g}(1-\gamma_1)\; .
\end{eqnarray}
Associated with this alternating solution is a pitchfork bifurcation
at
\begin{eqnarray}
\label{AI_PFa}
\mu_{PF}=\frac{1}{4}\frac{\beta^2(1+\gamma_1)^2}{g}-\frac{\beta^2}{g}-\beta
\gamma
= \mu_{SN}-\frac{\beta^2}{g} \le \mu_{SN}\; ,
\end{eqnarray}
when
\begin{eqnarray}
\label{AI_PFb}
X_{PF}=Y_{PF}=\frac{\beta}{2g}(1-\gamma_1)\; .
\end{eqnarray}
The pitchfork bifurcation sets in before the saddle-node bifurcation
as can be readily seen from (\ref{AI_PFa}).
The upper branch of the homogeneous solution ${\bar X}_h$ given by
(\ref{statio}) at the pitchfork bifurcation point $\mu_{PF}$ coincides
with (\ref{AI_PFb}). Hence, like the Hopf bifurcation, the pitchfork
bifurcation branches off the upper branch of the homogeneous
solution. The pitchfork bifurcation is always subcritical because
there are no solutions ${\bar X}_{a}$ possible for $\mu >
\mu_{PF}$. No bifurcation theory is needed to determine the
subcritical character of the pitchfork bifurcation.
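The algebra behind (\ref{AI_2})--(\ref{AI_PFb}) is easily reproduced symbolically from the steady states of (\ref{NF2}); a brief sympy sketch:
\begin{verbatim}
import sympy as sp

# Symbolic sketch: recover the alternating branch (AI_2) and the pitchfork
# point (AI_PFa)/(AI_PFb) from the steady states of (NF2), X(t)=X(t-tau)=X etc.
mu, g, beta, gamma, gamma1, X, Y = sp.symbols('mu g beta gamma gamma1 X Y')
eqX = -mu - g*X**2 - beta*(gamma + Y + gamma1*X)
eqY = -mu - g*Y**2 - beta*(gamma + X + gamma1*Y)

# The difference factors out the symmetric branch X = Y and yields (AI_2)
print(sp.factor(eqX - eqY))   # (X - Y)*(beta - beta*gamma1 - g*X - g*Y), up to sign

# On the alternating branch Y = -X + beta*(1 - gamma1)/g; it merges with X = Y at X_PF
Y_a   = -X + beta*(1 - gamma1)/g
X_PF  = beta*(1 - gamma1)/(2*g)
mu_PF = sp.solve(eqX.subs(Y, Y_a).subs(X, X_PF), mu)[0]
print(sp.simplify(mu_PF - (beta**2*(1 + gamma1)**2/(4*g)
                           - beta**2/g - beta*gamma)))   # -> 0
\end{verbatim}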
The stability of the homogeneous solution ${\bar X}={\bar Y}={\bar
X}_h={\bar Y}_h$ is determined by linearization. We study
perturbations $X={\bar X}_h + x\exp{\lambda t}$ and $Y={\bar X}_h +
y\exp{\lambda t}$, and obtain as a condition for nontrivial solutions
$x$ and $y$
\begin{eqnarray}
\label{lin_ai2}
(\lambda+2g {\bar X}_h+\beta \gamma_1) = \pm \beta e^{-\lambda
\tau}\; .
\end{eqnarray}
The upper sign denotes an antisymmetric mode $x=-y$ whereas the lower
sign denotes a symmetric mode $x=y$. Stationary bifurcations are
characterized by $\lambda=0$, and in this case the symmetric mode
coincides with the saddle-node bifurcation (\ref{SN_wt}) and the
antisymmetric mode terminates at the pitchfork bifurcation
(\ref{AI_PFb}).\\
\noindent
As for the case of a single pulse in a ring, non-stationary Hopf
bifurcations are possible if $\lambda=i\omega$ for wave trains. We
obtain from (\ref{lin_ai2})
\begin{eqnarray}
\label{lin_ai3}
\omega=\mp \beta \sin{\omega \tau}
\end{eqnarray}
and
\begin{eqnarray}
\label{lin_ai4}
{\bar X}_h= \frac{\beta}{2g}(\pm\cos{\omega \tau}-\gamma_1)\; .
\end{eqnarray}
We consider only the symmetric case (the lower signs) which reproduces
our results (\ref{HomHopfa}) and (\ref{HomHopfb}) for the symmetry
preserving Hopf bifurcation. The antisymmetric case does not allow for
a single-valued positive $\omega$. For $\omega \tau\to 0$ the onset of
the Hopf bifurcation moves towards the saddle-node (\ref{SN_wt}) and
coalesces with it at $\beta \tau=1$ in a Bogdanov-Takens point as
described in Section~\ref{Sec-BT}. For $\omega \tau\to\pi$ the Hopf
bifurcation moves towards the pitchfork bifurcation with a limiting
value of ${\bar X}_{h}=\beta(1-\gamma_1)/(2g)=X_{PF}$ in a
codimension-2 bifurcation. At the point of coalescence the Hopf
bifurcation has a period $T = 2 \tau$ which corresponds exactly to the
inhomogeneous pitchfork bifurcation with $p=\pi$ whereby every second
pulse dies. For values $\omega \tau \in [0,\pi)$ the Hopf bifurcation
always sets in after the pitchfork bifurcation. Hence for wave trains
one can see only a Hopf bifurcation of the steady solution at the
point where the Hopf bifurcation collides with the pitchfork
bifurcation.\\
\noindent
This allows us to sketch the full bifurcation scenario for a wave
train in a periodic ring as depicted in
Figure~\ref{Fig-Pitchsketch}. In the subsequent sections we study the
bifurcations of the steady-state solution (\ref{statio}).
\begin{figure}
\centerline{
\psfig{file=Fig2new.eps,angle=0,width=3.0in}}
\caption{Sketch of the bifurcation diagram for a wave train
in a ring showing a stationary saddle-node bifurcation (SN) and a
subcritical pitchfork bifurcation (PF). In between these two
bifurcations is also a Hopf bifurcation (HH).}
\label{Fig-Pitchsketch}
\end{figure}
\medskip
\section{Bifurcation analysis of the Hopf bifurcation for a single pulse}
\label{Sec-BifHopf}
\noindent
In this Section we study the direction and the stability of the Hopf
bifurcation. In excitable media the Hopf bifurcation is a result of
stronger and stronger coupling of the activator with its own
inhibitor. Upon reducing the length of a ring for fixed excitability,
or reducing the excitability for a fixed ring length, a single pulse
will feel the tail of its own inhibitor created during its previous
passage through the ring. In a wave train each pulse will feel the
tail of the inhibitor of its neighbour in front. In the context of our
normal form (\ref{NF}) the increasing coupling translates into an
increase of $\beta \tau$. Upon increasing $\beta \tau$ from $\beta
\tau=1$ up to a critical value of $\beta \tau\approx 7.789$ we have
only one solution $\omega=\omega_0$ of the characteristic equation
(\ref{HomHopfa}); see Fig.~\ref{Fig-sine}. We will now study the
dynamics for this case. The case of arbitrarily many solutions when one
encounters a pseudo-continuum of frequencies will be discussed further
down in Section~\ref{Sec-CGL}.\\
\noindent
The theory of bifurcation analysis for delay-differential equations is
well developed
\cite{Krasovskii,Hale,Hale2,Diekmann}. For example, in \cite{Diekmann}
a formula for the coefficients of the normal form for a Hopf
bifurcation is given explicitly. However, we found that the theory of
delay differential equations is not as well known amongst scientists
as its age may suggest. We find it therefore instructive to perform
the calculations explicitly and lead the reader through the
calculations.\\
\noindent
To study the direction of the Hopf bifurcation we have to
determine the sign of $d\sigma/d\mu$ at the bifurcation point
$\mu_H$. From (\ref{lin}) we infer
\begin{eqnarray*}
\frac{d\lambda}{d\mu}=-\frac{2g}{1-\beta \tau e^{-\lambda
\tau}}\frac{d{\bar X}}{d\mu}\; .
\end{eqnarray*}
Using (\ref{mueHopf}) we find from (\ref{statio})
\begin{eqnarray*}
\frac{d{\bar X}}{d\mu}_{|_{\mu_H}} =-\frac{1}{\beta(1-\cos{\omega \tau})}\; ,
\end{eqnarray*}
and subsequently, taking the real part,
\begin{eqnarray}
\label{direction}
\frac{d\sigma}{d\mu}_{|_{\mu_H}}
=\frac{2g}{\beta(1-\cos{\omega \tau})}\,
\frac{1-\beta \tau \cos{\omega \tau}}{|1-\beta \tau e^{-i\omega \tau}|^2}>0\; ,
\end{eqnarray}
where positivity follows from $\beta\tau\cos(\omega\tau)=\omega\tau\cot(\omega\tau)<1$
for $0<\omega\tau<\pi$. This implies that the stationary solution loses stability with
increasing values of the bifurcation parameter $\mu$.\\
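The sign in (\ref{direction}) is also easily confirmed numerically by evaluating $d\lambda/d\mu$ along the Hopf line and taking the real part; a brief sketch (with $g=\tau=1$, which only rescale the expression by positive factors):
\begin{verbatim}
import numpy as np

# Sketch: evaluate dlambda/dmu = -2g*(dXbar/dmu)/(1 - beta*tau*exp(-lambda*tau))
# at lambda = i*omega along the Hopf line and check that its real part,
# i.e. dsigma/dmu in (direction), is positive.
g, tau = 1.0, 1.0
x = np.linspace(0.05, np.pi - 0.05, 500)        # x = omega*tau along the Hopf line
beta = (x / np.sin(x)) / tau                    # from omega = beta*sin(omega*tau)

dXbar_dmu = -1.0 / (beta * (1.0 - np.cos(x)))   # value at mu = mu_H derived above
dlam_dmu = -2 * g * dXbar_dmu / (1.0 - beta * tau * np.exp(-1j * x))
print("min Re[dlambda/dmu] along the Hopf line:", dlam_dmu.real.min())   # > 0
\end{verbatim}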
\noindent
We now study the character of the Hopf bifurcation and derive the
normal form for a Hopf bifurcation from (\ref{NF}). In order to do
that we first transform the normal form (\ref{NF}) into standard form
by subtracting the stationary solution (\ref{statio}) according to
$X={\bar{X}}+x$ where ${\bar{X}}={\bar{X}}_1$. We obtain
\begin{eqnarray}
\label{nf}
\partial_t x = - 2g {\bar{X}} x - \beta (x(t-\tau) + \gamma_1x(t)) - gx^2\; ,
\end{eqnarray}
with the stationary solution being now $x(t)=0$. We will employ a
center manifold reduction for this equation to describe the dynamics
close to criticality. Center manifold theory is well-established for
maps, ordinary differential equations and partial differential
equations. However, although known for some time
\cite{Krasovskii,Hale,Hale2}, it is not well known how to cast an
essentially infinite dimensional delay differential equation such as
(\ref{NF}) into a form to which center manifold reduction can be
applied. For ordinary differential equations, for example, the
application of center manifold theory is a straight-forward expansion
of the state vectors in critical eigenmodes. The problem for delay
differential equations is their inherent infinite dimensional
character. An initial condition $x(\theta)=x_0(\theta)$ for $-\tau \le
\theta \le 0$ is mapped onto a finite dimensional space; in the case
of (\ref{nf}) onto a $1$-dimensional space. The resulting lack of uniqueness of
solutions is one obstacle which prohibits a straightforward
application of center manifold reduction. The way out of this
dilemma is to reformulate the problem as a {\em mapping} from an
infinite-dimensional space of differentiable functions defined on the
interval $[-\tau,0]$, which we denote as
${\cal{C}}={\cal{C}}[[-\tau,0];{\mathbb R}]$ (i.e. $x_0(\theta)\in {\cal{C}}$),
onto itself. This allows us to employ the well established and
understood center manifold reduction for mappings. These ideas go back
to Hale \cite{Hale,Krasovskii}. Well written examples of
center manifold reductions can be found in
\cite{Wischert94,LeBlanc02,Krauskopf04}. In essence, the history of a
state vector $x(t)\in {\mathbb R}$ is folded to a single element of an extended
state space $x_t(\theta)\in {\cal{C}}$. In order to achieve this we
define $x_t(\theta)\in {\cal{C}}$ as
\[ x_t(\theta)=x(t+\theta)\quad {\rm for} \quad -\tau\le \theta \le 0
\; .\]
The time-evolution for $x(t)\in {\mathbb R}$ (\ref{nf}) needs to be
reexpressed in terms of propagators and operators acting on elements
of the extended state space $x_t(\theta)\in {\cal{C}}$ which can be
done by writing
\begin{eqnarray}
\label{XC}
\frac{d}{dt}x_t(\theta) = {\cal{A}}[x_t](\theta) = \left\{
\begin{array}{ll}
\frac{d}{d\theta}x_t(\theta) & \mbox{if $-\tau\le \theta < 0$}\\
{\cal{F}}[x_t] & \mbox{if $\theta=0$}
\end{array}
\right.
\end{eqnarray}
For $-\tau\le \theta < 0$ we used the invariance condition
$dx(t+\theta)/dt=dx(t+\theta)/d\theta$. For $\theta=0$ we can split
the operator ${\cal{F}}$ into a linear part ${\cal{L}}$ and a
nonlinear part ${\cal{N}}$ and reformulate the right-hand side of
(\ref{nf}) as
\begin{eqnarray}
\label{F}
{\cal{F}}[x_t] = {\cal{L}}[x_t]+{\cal{N}}[x_t] \; ,
\end{eqnarray}
where
\begin{eqnarray}
\label{FA}
{\cal{L}}[x_t] = \int_{-\tau}^0d\theta w_1(\theta) x_t(\theta)
\quad {\rm with} \quad
w_1(\theta) = - (2g{\bar{X}} + \beta\gamma_1)\delta(\theta)
- \beta \delta(\theta+\tau)
\end{eqnarray}
and
\begin{eqnarray}
\label{FN}
{\cal{N}}[x_t] = \int_{-\tau}^0\!\!\int_{-\tau}^0 d\theta_1\, d\theta_2\,
w_2(\theta_1,\theta_2) x_t(\theta_1)x_t(\theta_2)
\quad {\rm with} \quad
w_2(\theta_1,\theta_2) = - g\delta(\theta_1)\delta(\theta_2)\; ,
\end{eqnarray}
where $\delta(\theta)$ denotes the Dirac $\delta$-function. Once
$x_t(\theta)$ is computed via solving (\ref{XC}), one may convert back
to $x(t)$ by \[ x(t)=\int_{-\tau}^0d\theta
\delta(\theta)x_t(\theta) \; .\]
\subsection{Linear eigenvalue problem}
We now linearize (\ref{XC}) around the stationary solution
$x_t(\theta)=0$ to obtain
\begin{eqnarray}
\label{XL}
\frac{d}{dt}\xi_t(\theta) = {\cal{A}}_L[\xi_t](\theta)
= \left\{
\begin{array}{ll}
\frac{d}{d\theta}\xi_t(\theta) & \mbox{if $-\tau\le \theta < 0$}\\
{\cal{L}}[\xi_t] & \mbox{if $\theta=0$}
\end{array}
\right.
\end{eqnarray}
The linear eigenvalue problem (\ref{XL}) can be solved using the
ansatz
\begin{eqnarray*}
\xi_t(\theta) = e^{\lambda t} \Phi(\theta) \quad {\rm for} \quad
-\tau\le \theta \le 0 \; .
\end{eqnarray*}
On the interval $-\tau\le \theta < 0$ we obtain
\begin{eqnarray*}
\lambda \Phi(\theta) = \frac{d}{d\theta}\Phi(\theta)\; ,
\end{eqnarray*}
which is solved by
\begin{eqnarray}
\label{Phi}
\Phi(\theta) = e^{\lambda \theta}\Phi(0)\; .
\end{eqnarray}
Plugging the solution (\ref{Phi}) into (\ref{XL}) we can now evaluate
the $\theta=0$-part of (\ref{XL}) to obtain again the characteristic
equation (\ref{lin}). We recall the transcendental characteristic
equation as
\begin{eqnarray}
\label{LIN}
\lambda+2g{\bar X} + \beta \gamma_1 + \beta e^{-\lambda \tau} = 0\; .
\end{eqnarray}
Since in general the linear operator ${\cal{A}}_L$ is not selfadjoint,
we need to consider the corresponding adjoint eigenvalue problem on
the dual extended state space
${\cal{C}}^\dagger={\cal{C}}^\dagger[[0,\tau];{\mathbb R}] $. The dual problem
is given by backward evolution for $t\le0$,
i.e. $x_t^\dagger(s)=x_{-t}(-s)$ for $0\le s \le \tau$. The adjoint
problem can be formally written as
\begin{eqnarray*}
\frac{d}{dt}\xi^\dagger_t(s) = - {\cal{A}}^\dagger_L[\xi^\dagger_t](s) \; .
\end{eqnarray*}
To provide an explicit form of the dual operator ${\cal{A}}_L^\dagger$ we need
to define an inner product. It turns out that the normal scalar
product used for ordinary differential equations is not capable of
respecting the memory effects of delay-differential equations. The
following inner product is used
\begin{eqnarray}
\label{BiLin}
\langle\Psi^\dagger,\Phi \rangle =
\Psi^\dagger(0)\Phi(0)-\int_{-\tau}^0 d\theta \int_0^\theta ds
\Psi^\dagger(s-\theta) w_1(\theta) \Phi(s)
\; .
\end{eqnarray}
The adjoint operator is then explicitly given as
\begin{eqnarray}
\label{XLA}
{\cal{A}}^\dagger_L[\xi^\dagger_t](s) = \left\{
\begin{array}{ll}
-\frac{d}{ds}\xi^\dagger_t(s) & \mbox{if $0 < s \le \tau$}\\
{\cal{L}}^\dagger[\xi^\dagger_t] & \mbox{if $s=0$}
\end{array}
\right.
\end{eqnarray}
where
\begin{eqnarray}
\label{LA}
{\cal{L}}^\dagger [\xi^\dagger_t] = \int_{0}^\tau ds w_1(-s)
\xi^\dagger_t(s)
\; .
\end{eqnarray}
The adjoint eigenvalue problem (\ref{XLA}) is now solved using the
ansatz
\begin{eqnarray*}
\xi^\dagger_t(s) = e^{-\lambda t} \Psi^\dagger(s) \quad {\rm for} \quad
0< s\le \tau \; .
\end{eqnarray*}
On the interval $0 < s \le \tau$ we obtain
\begin{eqnarray*}
-\lambda \Psi^\dagger(s) = \frac{d}{ds}\Psi^\dagger(s)\; ,
\end{eqnarray*}
which is solved by
\begin{eqnarray}
\label{Psi}
\Psi^\dagger(s) = e^{-\lambda s}\Psi^\dagger(0)\; .
\end{eqnarray}
Plugging the solution (\ref{Psi}) into (\ref{LA}) we can now evaluate
the $s=0$-part of (\ref{XLA}) to obtain again the characteristic
equation (\ref{LIN}). Note that the two solutions (\ref{Phi}) and
(\ref{Psi}) for the eigenvalue problem and its dual can be transformed
into each other by simple time-reversal $\theta\to -s$.
\noindent
Since the transcendental equation has two solutions with vanishing
real part $\sigma=0$, i.e. $\lambda=\pm i \omega$ with $\omega$ given
by (\ref{HomHopfa}), we have two solutions of the linear eigenvalue
problem (\ref{XL}) and its associated adjoint problem (\ref{XLA}). We
denote them as $\Phi_{1,2}$ and $\Psi^\dagger_{1,2}$ respectively. The
bilinear form (\ref{BiLin}) was constructed in order to assure
biorthogonality of the eigenfunctions. Defining the eigenfunctions as
\begin{eqnarray}
\label{EigFunc}
\Phi_1(\theta) = e^{i\omega \theta} \quad \Phi_2(\theta) = e^{-i\omega \theta}\\
\Psi_1^\dagger(s)
= \nu e^{-i\omega s} \quad \Psi_2^\dagger(s) = \nu^\star e^{i\omega s}
\end{eqnarray}
with the normalization constant
\begin{eqnarray}
\label{nu}
\nu = \frac{1}{1-\beta \tau e^{-i\omega \tau}}\; ,
\end{eqnarray}
we have $\langle\Psi^\dagger_i,\Phi_j \rangle = \delta_{ij}$ with
$i,j=1,2$ and $\delta_{ij}$ being the Kronecker symbol.\\
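As a short check of the normalization constant (\ref{nu}), note that for $\Psi_1^\dagger$ and $\Phi_1$ the integrand in (\ref{BiLin}) reduces to $\Psi_1^\dagger(s-\theta)\Phi_1(s)=\nu e^{i\omega\theta}$, so that the inner $s$-integral yields $\theta\,\nu e^{i\omega\theta}$ and only the $\delta(\theta+\tau)$-part of $w_1$ contributes:
\[
\langle\Psi_1^\dagger,\Phi_1 \rangle
= \nu - \int_{-\tau}^0 d\theta\, w_1(\theta)\,\theta\,\nu e^{i\omega\theta}
= \nu - \beta\tau\,\nu e^{-i\omega\tau}
= \nu\big(1-\beta\tau e^{-i\omega\tau}\big) = 1\; .
\]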
\subsection{Center-manifold theory}
\noindent
For the nonlinear theory we need properties for the linear operator
${\cal{A}}$ developed in \cite{Hale,Hale2}. We summarize properties of
the transcendental characteristic equation (\ref{lin}): (i)
${\cal{A}}$ has a pure point spectrum, (ii) the real part of the
eigenvalues is bounded from above, and (iii) defining $a= \tau\{2g{\bar{X}}+\beta
\gamma_1\}$ and $b=\beta\tau$ all eigenvalues of
${\cal{A}}$ have negative real part if and only if (1.) $a>-1$, (2.)
$a+b>0$ and (3.) $b<\zeta
\sin{\zeta}-a\cos{\zeta}$ where $\zeta$ is the root of
$\zeta=-a\tan{\zeta}$ with $0<\zeta<\pi$ for $a\neq 0$ and
$\zeta=\pi/2$ if $a=0$. These conditions can be translated for our
particular case (\ref{LIN}) using $a=-\beta\tau \cos(\omega
\tau)$. Condition (1) translates into $\beta \tau \cos(\omega\tau)<1$;
condition (2) into $\cos(\omega
\tau)<1$ and condition (3) defines a parameterized stability boundary
$\beta\tau<\zeta/\sin(\zeta)$ and $\cos(\omega
\tau)>\cos(\zeta)$ with $0<\zeta<\pi$. This last condition defines the
line in $\beta \tau$-$\cos(\omega\tau)$ space where the Hopf
bifurcation occurs. In particular we have $\beta\tau\ge 1$ and a
coalescence with the saddle node at $\cos(\omega \tau)=1$ with
$\beta\tau=1$ at the Bogdanov-Takens point. In Figure~\ref{Fig-Stabil}
we show the stability region. We note that care has to be taken in
interpreting the diagram in terms of the parameter $\tau$ because
$\beta=\beta(\tau)$ according to (\ref{beta}). In the limit $\tau\to
\infty$ we have $\beta \tau \to 0$. Note that stable solutions may
exist for $\beta \tau\ge 1$.
\begin{figure}
\centerline{
\psfig{file=Fig3a.eps,angle=0,width=3.0in}
\psfig{file=Fig3b4.eps,angle=0,width=3.0in}}
\caption{(a): Stability diagram for the equilibrium solution
(\ref{statio}). We also impose $\beta \tau\ge 0$. The region within
the bold lines is stable. Crossing the upper boundary corresponds to a
Hopf bifurcation. (b): Hopf line obtained by numerical simulations of
the modified Barkley model (\ref{barkley}) with $a=0.22$ and $u_s=0.1$
and $D=3$, \cite{Barkley91,GottwaldKramer04}. (Note that each point
corresponds to different values of $\epsilon$ and $L$). The numerical
results of the partial differential equations could be fitted to the
normal form (\ref{NF}) to obtain the parameters $\beta_0$ and $\tau$,
see Ref.~\cite{GottKram05}. The continuous line is the same Hopfline
as in (a).}
\label{Fig-Stabil}
\end{figure}
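The Hopf line of Figure~\ref{Fig-Stabil}(a) can be generated directly from the parameterization given above; a brief numerical sketch (not the code used for the figure), which also verifies conditions (1) and (2) along the line:
\begin{verbatim}
import numpy as np

# Sketch: trace the Hopf line from beta*tau = zeta/sin(zeta),
# cos(omega*tau) = cos(zeta), 0 < zeta < pi, and check conditions (1) and (2).
zeta = np.linspace(0.01, np.pi - 0.01, 400)
beta_tau = zeta / np.sin(zeta)
cos_wt = np.cos(zeta)

a = -beta_tau * cos_wt                    # a = tau*(2*g*Xbar + beta*gamma1) here
print("condition (1), a > -1, holds:", bool((a > -1).all()))
print("condition (2), a + b > 0, holds:", bool((a + beta_tau > 0).all()))
# (cos_wt, beta_tau) traces the upper stability boundary of Fig. 3(a).
\end{verbatim}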
In \cite{Hale,Hale2} it is shown that under these circumstances one
may perform center manifold reduction. In particular we can decompose
$x_t(\theta)$ into slow modes associated with the eigenvalues
$\lambda=\pm i \omega$ and fast modes which correspond to modes with
negative real part of the eigenvalues. Center manifold theory says
that the fast modes are slaved to the slow modes and can be expressed
in terms of the slow modes. We therefore write
\begin{eqnarray}
\label{fastslow}
x_t(\theta)=z(t) \Phi_1(\theta) + {z^\star}(t) \Phi_2(\theta) +
h(z,z^\star)\; ,
\end{eqnarray}
where $z(t)\in{\mathbb C}$ and its complex conjugate $z^\star(t)$ are the
time-dependent amplitudes of the slow modes $\Phi_{1,2}(\theta)$ and
$h(z,z^\star)$ is the remaining fast component written as a function
of the slow amplitudes. The function $h(z,z^\star)$ is called the
center manifold. The expansion (\ref{fastslow}) is reminiscent of
center-manifold theory for partial differential equations where
$\theta$ would be the spatial coordinate, and the expansion would be
an expansion of critical spatial eigenmodes. The connection between
delay differential equations and partial differential equations will
be explored further in Section~\ref{Sec-CGL}.\\
\noindent
We require the fast modes, and hence the center manifold, to lie in
the spectral complement of the centre space spanned by $\Phi(\theta)$;
we therefore have the constraint
\begin{eqnarray*}
\langle\Psi_j^\dagger,h(z,z^\star)\rangle=0\; \; \; \; \; \; j=1,2 \; .
\end{eqnarray*}
This implies for the slow amplitudes
\begin{eqnarray}
\label{slowamps}
z(t)=\langle\Psi_1^\dagger(\theta),x_t(\theta)\rangle
\quad {\rm and} \quad
z^\star(t)=\langle\Psi_2^\dagger(\theta),x_t(\theta)\rangle \; .
\end{eqnarray}
We use a near-identity transformation for the center manifold $h$ and
express it as a power series in $z$ and $z^\star$. The center manifold
is tangential to the manifold spanned by the slow modes which implies
the ansatz
\begin{eqnarray}
\label{CM}
h(z,z^\star)=\frac{1}{2}\left(h_{20}(\theta)z^2+2h_{11}(\theta) z
z^\star + h_{02}(\theta){z^\star}^2 \right) + {\cal{O}}(|z|^3) \; .
\end{eqnarray}
Since $x_t(\theta)$ is real we have
$h_{02}(\theta)=h^\star_{20}(\theta)$. Since the normal form for a
Hopf bifurcation only involves cubic terms, we only need to consider
quadratic terms in the equation for $h$ (\ref{ODEh}). The cubic terms
will then be generated via ${\cal{N}}[x_t](\theta=0)$ in the equations
for $z$ and $z^\star$ (see below, (\ref{ODEz}) and
(\ref{ODEzstar})).\\
\noindent
We will derive now ordinary differential equations for the slow
amplitudes $z$ and $z^\star$ which describe the dynamics on the slow
manifold. The theory of center manifolds tells us that the full
dynamics of (\ref{nf}) is well approximated by the slow dynamics
\cite{Carr}. The derivative of (\ref{slowamps}) with respect to time
$t$ is given by
\begin{eqnarray}
\label{ODEz}
{\dot{z}}&=&\langle \Psi_1^\dagger,{\dot{x}}_t\rangle \nonumber \\
&=& i \omega \langle \Psi_1^\dagger,x_t\rangle
+ \langle \Psi_1^\dagger,{\cal{N}}[x_t]\rangle_{|_{\theta=0}} \nonumber \\
&=& i \omega z
+ \Psi_1^\dagger(0) {\cal{N}}[x_t](\theta=0) \nonumber \\
&=& i \omega z
+ \nu {\cal{N}}[x_t](\theta=0) \\
\label{ODEzstar}
{\dot{z}}^\star&=& -i \omega z^\star + \nu^\star {\cal{N}}[x_t](\theta=0)\\
{\dot{h}}&=&
{\dot{x_t}}-{\dot{z}}\Phi_1(\theta)-{\dot{z}}^\star\Phi_2(\theta)\nonumber\\
&=&{\cal{A}}_Lx_t+{\cal{N}}[x_t]-i\omega z\Phi_1(\theta)+i\omega
z^\star\Phi_2(\theta)-{\cal{N}}[x_t](\theta=0)\{\nu \Phi_1(\theta) +
\nu^\star\Phi_2(\theta)\} \nonumber\\
\label{ODEh}
&=&{\cal{A}}_Lh+{\cal{N}}[x_t]-\{\nu \Phi_1(\theta) +
\nu^\star\Phi_2(\theta)\}{\cal{N}}[x_t](\theta=0) \; ,
\end{eqnarray}
where the dot denotes a time derivative. Note that
${\cal{N}}[x_t]\neq0$ only for $\theta=0$ which can be written as
${\cal{N}}[x_t](\theta=0) = {\cal{N}}[z\Phi_1 + {z^\star} \Phi_2 +
h(z,z^\star)](\theta=0)$. Using (\ref{CM}) we have therefore
\begin{eqnarray}
\label{N}
{\cal{N}}[x_t](\theta=0) &=& -gx_t^2(\theta=0)\nonumber\\
&=&-g( z\Phi_1+z^\star\Phi_2+h)^2|_{\theta=0} \nonumber \\
&=&
-g(z^2+{z^\star}^2
+ 2|z|^2 \nonumber \\
&& \hphantom{-g (}
+ h_{20}(0)z^3
+ (h_{20}(0)+2h_{11}(0)) |z|^2z
+ (h_{02}(0)+2h_{11}(0)) |z|^2z^\star
+ h_{02}(0){z^\star}^3)\nonumber\\
&& + {\cal{O}}(z^4,{z^\star}^4)\; .
\end{eqnarray}
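The cubic coefficients collected in (\ref{N}) are conveniently checked with a computer algebra system; a minimal sympy sketch for the $|z|^2z$-term:
\begin{verbatim}
import sympy as sp

# Sketch: expand -g*(z + zbar + h(0))**2 with h(0) from (CM) and read off the
# coefficient of |z|^2 z, which should equal -g*(h20(0) + 2*h11(0)) as in (N).
z, zb, g, h20, h11, h02 = sp.symbols('z zbar g h20 h11 h02')
h0 = sp.Rational(1, 2) * (h20*z**2 + 2*h11*z*zb + h02*zb**2)
N = sp.expand(-g * (z + zb + h0)**2)

coeff = N.coeff(z, 2).coeff(zb, 1)                  # coefficient of z**2*zbar
print(sp.simplify(coeff + g * (h20 + 2 * h11)))     # -> 0
\end{verbatim}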
Using the definition of ${\cal{A}}_L$ and ${\cal{N}}[x_t]$ we can
evaluate the evolution equation (\ref{XC}) by differentiating the
center manifold (\ref{CM}) with respect to time, equate with
(\ref{ODEh}), and obtain
\begin{eqnarray}
\label{ODE2}
{\dot{h}}= i\omega h_{20}(\theta)z^2 -i\omega
h_{02}(\theta){z^\star}^2 =
-2{\cal{R}}[\nu e^{i\omega \theta}]{\cal{N}}[x_t](\theta=0) +
\left\{
\begin{array}{ll}
\frac{d}{d\theta}h(\theta) & \mbox{if $-\tau\le \theta < 0$}\\
{\cal{H}}[z,z^\star;h] & \mbox{if $\theta=0$}
\end{array}
\right.
\end{eqnarray}
where
\begin{eqnarray}
{\cal{H}}[z,z^\star;h] = -(2g{\bar{X}}+\beta \gamma_1)h(0)-\beta
h(-\tau) + {\cal{N}}[x_t](\theta=0)\; .
\end{eqnarray}
Here ${\cal{R}}$ denotes the real part.
\noindent
Comparison of powers of $z$ and $z^\star$ yields differential
equations for $h_{ij}$ for the part with $-\tau\le\theta<0$ with an
associated boundary value problem coming from a comparison of powers
of $z$ and $z^\star$ from the $\theta=0$ part. We summarize
\begin{eqnarray}
\label{h20}
h_{20}^\prime&=&2i\omega h_{20}-4g{\cal{R}}[\nu e^{i\omega\theta}] \\
\label{h02}
h_{02}^\prime&=&-2i\omega h_{02}-4g{\cal{R}}[\nu e^{i\omega\theta}] \\
\label{h11}
h_{11}^\prime&=&-4g{\cal{R}}[\nu e^{i\omega\theta}] \; ,
\end{eqnarray}
and the boundary conditions are given by
\begin{eqnarray}
\label{HBC1}
(i\omega -\beta e^{-i\omega\tau})h_{20}(0) + \beta h_{20}(-\tau)
&=&
2g(2{\cal{R}}[\nu]-1)\\
\label{HBC2}
(-i\omega -\beta e^{i\omega\tau})h_{02}(0) + \beta h_{02}(-\tau)
&=&
2g(2{\cal{R}}[\nu]-1)\\
\label{HBC3}
(-i\omega -\beta e^{-i\omega\tau})h_{11}(0) + \beta h_{11}(-\tau)
&=&
2g(2{\cal{R}}[\nu]-1) \; .
\end{eqnarray}
Note that the nonlinearity enters the differential equation in form of
an inhomogeneity. The ordinary differential equations
(\ref{h20})-(\ref{h11}) can be solved by variations of constants
\begin{eqnarray}
h_{20}(\theta)&=&H_{20}e^{2i\omega\theta}-2i\frac{g}{\omega}\left( \nu
e^{i\omega\theta}+\frac{1}{3}\nu^\star e^{-i\omega\theta} \right)\\
h_{02}(\theta)&=&H_{02}e^{-2i\omega\theta}+2i\frac{g}{\omega}\left( \nu^\star
e^{-i\omega\theta}+\frac{1}{3}\nu e^{i\omega\theta} \right)\\
h_{11}(\theta)&=&H_{11}+2i\frac{g}{\omega}\left( \nu
e^{i\omega\theta}-\nu^\star e^{-i\omega\theta} \right) \; .
\end{eqnarray}
The constants of integrations $H_{20}=H_{02}^\star$ and $H_{11}$ can
be determined using the boundary conditions
(\ref{HBC1})--(\ref{HBC3}). We obtain
\begin{eqnarray}
\label{HHH}
H_{20} &=& -\frac{2g}{i\omega -\beta e^{-i\omega\tau} +\beta
e^{-2i\omega\tau}} \\
H_{11} &=& -\frac{2g}{-i\omega -\beta e^{-i\omega\tau} +\beta} \; .
\end{eqnarray}
Note that $H_{11}=-2g/(\beta-\beta\cos(\omega \tau))=H_{11}^\star$. By
means of transformations \cite{Wiggins} equation (\ref{ODEz}) can be
transformed into the standard form for a normal form for Hopf
bifurcations
\begin{eqnarray}
{\dot{z}} = i \omega z + c |z|^2z\; .
\end{eqnarray}
The quadratic terms appearing in (\ref{ODEz}) can be eliminated by the
near-identity transformation $z \to z +
\eta_{20}z^2+\eta_{11}|z|^2+\eta_{02}z^{\star 2}$ using
$\eta_{20}=ig\nu/\omega$, $\eta_{11}=-2ig\nu/\omega$ and
$\eta_{02}=-ig\nu/3\omega$. Note that at the Bogdanov-Takens point
where $\omega=0$ such an elimination of quadratic terms is not
possible anymore. However this transformation generates further cubic
terms in (\ref{ODEz}). All of these cubic terms except those
proportional to $|z|^2z$ may be eliminated by means of another
transformation $z \to z + h_3(z,z^\star)$ where $h_3(z,z^\star)$ is a
cubic polynomial. After the transformation to eliminate the quadratic
terms the coefficient in front of the $|z|^2z$-term is found to be
\begin{eqnarray}
\label{cubicC}
c &=& -g \nu (h_{20}(0)+2h_{11}(0))\nonumber\\
&&
+\left(2\eta_{02}g\nu^\star+2\eta_{11}g\nu^\star-\eta_{11}(i\omega
\eta_{20} + 3 g\nu)-2\eta_{20}(2i\omega \eta_{11}-g\nu) -2\eta_{02}g\nu \right) \nonumber \\
&=& -g \nu (h_{20}(0)+2h_{11}(0)) -
\frac{14}{3}ig^2\frac{|\nu|^2}{\omega}+\frac{20}{3}ig^2\frac{\nu^2}{\omega}
\nonumber \\ &=& -g \nu
(H_{20}+2H_{11}) + \frac{14}{3}ig^2\frac{\nu^2}{\omega} \; .
\end{eqnarray}
The stability and character of the Hopf bifurcation is determined by
the sign of the real part of $c$. Because of (\ref{direction}) the Hopf
bifurcation is supercritical provided ${\cal{R}}[c]<0$ and subcritical
provided ${\cal{R}}[c]>0$. These criteria can be easily deduced by
writing $z=re^{i\phi}$. Note that ${\cal{R}}[c]$ can be written as a
function of $g,\tau$ and $\omega$ only since $\beta=\omega/\sin(\omega
\tau)$ at the Hopf bifurcation. Using algebraic software packages such
as Maple, we can show that ${\cal{R}}[c]>0$ for all values of $g,\tau$
and $\beta$. In Figure~\ref{Fig-c} we show the real part of $c$ as a
function of $\omega \tau$. This confirms that the Hopf bifurcation is
indeed as conjectured in \cite{GottKram05} subcritical. We have
checked our result against numerical simulations of the full
normal-form (\ref{NF}) and also using the software package DDE-BIFTOOL
\cite{ddebiftool}. Again, the degeneracy at $\omega \tau \to 0$ is
reflected in ${\cal{R}}[c]$ by a singularity at $\omega \tau=0$.
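The behaviour shown in Figure~\ref{Fig-c} can be reproduced by evaluating (\ref{nu}), (\ref{HHH}) and (\ref{cubicC}) along the Hopf line; the following brief sketch (not the code used for the figure) confirms ${\cal{R}}[c]>0$:
\begin{verbatim}
import numpy as np

# Sketch: real part of the cubic coefficient c from (cubicC), with nu from (nu)
# and H20, H11 from (HHH), along the Hopf line.  g = tau = 1 as in Fig. 4
# (they only rescale c by positive factors).
g, tau = 1.0, 1.0
x = np.linspace(0.1, np.pi - 0.01, 300)            # x = omega*tau
omega = x / tau
beta = omega / np.sin(x)                            # Hopf condition (HomHopfa)

nu  = 1.0 / (1.0 - beta * tau * np.exp(-1j * x))
H20 = -2 * g / (1j * omega - beta * np.exp(-1j * x) + beta * np.exp(-2j * x))
H11 = -2 * g / (-1j * omega - beta * np.exp(-1j * x) + beta)
c   = -g * nu * (H20 + 2 * H11) + (14.0 / 3.0) * 1j * g**2 * nu**2 / omega

print("min Re[c] along the Hopf line:", c.real.min())   # positive -> subcritical
\end{verbatim}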
\begin{figure}
\centerline{
\psfig{file=Fig4.eps,angle=0,width=3.0in}}
\caption{Plot of the real part of the cubic coefficient ${\cal{R}}[c]$ (\ref{cubicC})
as a function of $\omega \tau$. To produce the plot we set $g=1$ and
$\tau=1$. Both parameters are just coefficients multiplying $c$, so
they do not change the sign of $c$.}
\label{Fig-c}
\end{figure}
We will discuss the implications of this result in
Section~\ref{Sec-Disc}.
\medskip
\section{The limit of large delay times: The Hopf bifurcation for wave
trains}
\label{Sec-CGL}
In the previous Section we have described the Hopf bifurcation for
small $\omega \tau$ when there is only one marginal mode. This
describes the behaviour of a single pulse on a ring. In this Section
we will pursue the case of large delay times when a pseudo-continuum
of critical Hopf modes occurs. We will derive a Ginzburg-Landau
equation as an amplitude equation describing near-threshold behaviour
of such a pseudo-continuum. The connection between amplitude equations
and delay-differential equations has long been known
\cite{Giacomelli96,Giacomelli98,Schanz03,Nizette03,Amann06,WolframYanchuk}. The
cross-over from a finite-dimensional center-manifold to an
infinite-dimensional amplitude equation can be best viewed when
looking at the Hopf condition (\ref{HomHopfa})
\begin{eqnarray}
\label{omegasin}
\omega=\beta\sin{\omega \tau}\; .
\end{eqnarray}
For $\beta \tau > 7.789$ there are at least two solutions of
(\ref{omegasin}) for $\omega$. This equation has arbitrarily many
solutions $\omega_k$ for $\beta \tau
\to \infty$ and we obtain a pseudo-continuum in an interval with lower
closed boundary at $\omega \tau$ and upper boundary $\omega \tau =
\pi$. At the singular limit $\omega \tau=\pi$ there are countably
infinitely many eigenvalues $\omega_k \tau = k
\pi$. An illustration is given in Fig.~\ref{Fig-sine}. Note that the
upper boundary $\omega \tau = \pi$ corresponds to the coalescence of
the Hopf bifurcation with the pitchfork bifurcation in the case of
several pulses on a ring (see Section~\ref{Sec-NF}).
\begin{figure}
\centerline{
\psfig{file=Fig7.eps,angle=0,width=4.0in}}
\caption{Illustration of the solutions and number of solutions of the
implicit equation (\ref{omegasin}) for $\omega$. Green curve:
$\beta\tau=0.1$, no Hopf bifurcation; light blue curve: $\beta\tau=5$,
Hopf bifurcation with one marginal mode; dark blue curve:
$\beta\tau=143$, Hopf bifurcation with finitely many marginal modes;
pink curve: $\beta\tau=\infty$, Hopf bifurcation with infinitely many
marginal modes.}
\label{Fig-sine}
\end{figure}
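The counting underlying Fig.~\ref{Fig-sine} can also be illustrated numerically; the following sketch counts the positive solutions of (\ref{omegasin}) for the values of $\beta\tau$ used in the figure:
\begin{verbatim}
import numpy as np

# Sketch: count the positive solutions of omega = beta*sin(omega*tau), i.e. the
# roots of f(x) = x - beta*tau*sin(x) with x = omega*tau > 0, by sign changes.
for bt in (0.1, 5.0, 143.0):
    x = np.linspace(1e-6, bt + np.pi, 1_000_000)    # roots satisfy x <= beta*tau
    f = x - bt * np.sin(x)
    n_roots = int(np.count_nonzero(np.sign(f[1:]) != np.sign(f[:-1])))
    print(f"beta*tau = {bt:6.1f}: {n_roots} marginal mode(s)")
\end{verbatim}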
All these solutions are marginal and would have to be included in the
ansatz (\ref{fastslow}). Note that for excitable media where
$\beta=\beta_0 \exp(-\kappa \tau)$ (see (\ref{beta})) this limit
cannot be achieved by simply letting $\tau\to \infty$. The function
$\beta \tau$ has a maximum at $\tau=1/\kappa$. So in order to have
$\beta \tau \to \infty$ one can either have $\beta_0\to \infty$ which
seems unphysical or $\kappa\to 0$ with $\tau\to \infty$ to keep
$\kappa \tau$ finite. Hence the limit $\beta \tau\to \infty$ applies
to media with a very slowly decaying inhibitor in a very large
domain. Large domain instabilities are known from certain excitable
reaction diffusion systems in the context of autocatalytic oxidation
of $CO$ to $CO_2$ on platinum \cite{Baer1,Baer2,Baer3}.\\
\noindent
The case $\omega \tau \approx \pi$ is important for single pulses in a
ring and for wave trains. Firstly, it describes the case for a single
pulse when a continuum of modes becomes unstable to a Hopf
bifurcation. But more importantly it describes the case of a wave
train with distinct members when the pitchfork bifurcation coalesces
with the Hopf bifurcation. The point of coalescence is at $\mu_{PF}$
given by (\ref{AI_PFa}) and with amplitude given by (\ref{AI_PFb})
which we recall
\[
X_{PF}=\frac{\beta}{2g}(1-\gamma_1)\; .
\]
At this point of coalescence the Hopf frequency is in resonance with
the spatial instability in which every second pulse dies. At
$\mu_{PF}$ the two equations (\ref{NF2}) describing the alternating
modes in a wave train collapse to the single equation for one pulse
(\ref{NF}) (see also Fig.~\ref{Fig-Pitchsketch}). This Section will
investigate whether the coalescence of the subcritical pitchfork
bifurcation with the Hopf bifurcation may produce stable
oscillations.\\
\noindent
We will perform a multiple scale analysis of the normal form
(\ref{NF}) along the lines of \cite{Nizette03}. We will obtain at
third order an evolution equation for the amplitude as a solvability
condition which describes the dynamics close to the Hopf
bifurcation. We consider the case of large delay times $\tau$ and
introduce a small parameter $\epsilon=1/\tau$. To capture the dynamics
close to the point of coalescence we introduce a slow time scale
\[
s=\epsilon t \; ,
\]
and rewrite the normal form (\ref{NF}) in terms of the slow variable
as
\begin{eqnarray}
\label{NFGL}
\epsilon \partial_s X = -\mu - g X^2 - \beta (\gamma + X(s-1) + \gamma_1X)\; .
\end{eqnarray}
We expand the scalar field $X(s)$ as
\[
X=x_{PF} + \epsilon x_1 + \epsilon^2 x_2 + \epsilon^3 x_3 + \cdots\; .
\]
Using the generic scaling the bifurcation parameter can be written as
\[
\mu=\mu_{PF}+\epsilon^2\Delta \mu + \cdots\; .
\]
A Taylor expansion of (\ref{omegasin}) around $\omega \tau=\pi$ yields
at first order $\omega \tau=\beta \tau(\pi-\omega \tau)$ which for
large $\tau$ (small $\epsilon$) we may write as
\begin{equation}
\label{e-omegataupi}
\omega \tau = \pi(1-\frac{1}{\beta \tau}+\frac{1}{(\beta \tau)^2}) =
\pi(1-\frac{1}{\beta}\epsilon + \frac{1}{\beta^2}\epsilon^2)\; .
\end{equation}
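The expansion follows from solving the first-order relation for
$\omega\tau$ and expanding the resulting geometric series,
\begin{eqnarray*}
\omega\tau=\frac{\pi\beta\tau}{1+\beta\tau}
=\pi\left(1-\frac{1}{\beta\tau}+\frac{1}{(\beta\tau)^2}-\cdots\right)\; ,
\end{eqnarray*}
of which (\ref{e-omegataupi}) retains the first three terms.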
This suggests a multiple time scaling
\[
\partial_s=\partial_{s_0}+\epsilon \partial_{s_1}+\epsilon^2
\partial_{s_2} + \cdots\; .
\]
Close to the bifurcation point critical slowing down occurs which
allows us to expand the delay term for large delays as
\begin{eqnarray}
X(s-1)&=&e^{-\partial_{s}}X(s)\\
&\approx&\left[1 - \epsilon\partial_{s_1} +
\epsilon^2\left(\frac{1}{2}\partial_{s_1s_1} -
\partial_{s_2} \right)
\right] e^{-\partial_{s_0}}X(s) \; .
\end{eqnarray}
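The bracket results from factorizing the shift operator according to the
multiple time scaling,
\begin{eqnarray*}
e^{-\partial_{s}}=e^{-\epsilon\partial_{s_1}-\epsilon^2\partial_{s_2}}\,
e^{-\partial_{s_0}}
\approx\left[1-\epsilon\partial_{s_1}-\epsilon^2\partial_{s_2}
+\frac{1}{2}\epsilon^2\partial_{s_1s_1}\right]e^{-\partial_{s_0}}\; ,
\end{eqnarray*}
and keeping terms up to ${\cal{O}}(\epsilon^2)$.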
\noindent
At lowest order, ${\cal{O}}(1)$, we obtain the equation determining
$x_{PF}$. At the next order we obtain
\begin{eqnarray}
\label{e-firstO}
{\cal{L}}x_1=0 \;,
\end{eqnarray}
with the linear operator
\begin{eqnarray*}
{\cal{L}} = \beta \left[1+e^{-\partial_{s_0}}\right]\; .
\end{eqnarray*}
Equation (\ref{e-firstO}) is solved by
\begin{eqnarray}
\label{e-x1}
x_1(s_0,s_1,s_2)=z(s_1,s_2) e^{i\pi s_0} + {\bar{z}}(s_1,s_2) e^{-i\pi
s_0}\; ,
\end{eqnarray}
with complex amplitude $z$ and its complex conjugate ${\bar{z}}$. Note
that on the fast time scale $t$ we would have $x_1(t)=z\exp(i\omega
t)+ {\rm c.c.}$ with $\omega \tau = \pi$, which, of course, is the
Hopf mode at onset.\\
\noindent
At the next order,
${\cal{O}}(\epsilon^2)$, we obtain
\begin{eqnarray}
\label{e-secondO}
{\cal{L}}x_2=-\Delta\mu-gx_1^2-\partial_{s_0}x_1 + \beta
\partial_{s_1}e^{-\partial_{s_0}}x_1\; .
\end{eqnarray}
The right-hand side involves terms proportional to $\exp(\pm i \pi
s_0)$, which are resonant with the homogeneous solution of
${\cal{L}}x_2=0$. We therefore impose the solvability condition
\begin{eqnarray*}
\partial_{s_0}x_1 -\beta\partial_{s_1}e^{-\partial_{s_0}}x_1 = 0\; ,
\end{eqnarray*}
which using (\ref{e-firstO}) reads as
\begin{eqnarray}
\label{e-cg0}
\partial_{s_1}x_1 + \frac{1}{\beta}\partial_{s_0}x_1 = 0\; .
\end{eqnarray}
In terms of the complex amplitude $z$ using (\ref{e-x1}) this reads as
\begin{eqnarray}
\label{e-cg}
\partial_{s_1}z + \frac{1}{\beta}i\pi z = 0\; .
\end{eqnarray}
This amounts to the time scale $i\omega \tau \approx \tau\partial_t
\approx \partial_{s_0}+\epsilon \partial_{s_1} = i\pi -
i\epsilon\pi/\beta=i\pi(1-1/(\beta \tau))$ which corresponds to our
scaling (\ref{e-omegataupi}) at first order. Provided (\ref{e-cg}) is
satisfied we can readily solve (\ref{e-secondO}) by solving for each
appearing harmonic, and find
\begin{eqnarray}
\label{e-x2}
x_2&=&-\frac{1}{2\beta}\left[ \Delta\mu + 2g|z|^2
+ gz^2e^{2i\pi s_0}
+ g{\bar{z}}^2e^{-2i\pi s_0} \right]
\nonumber
\\
&=&-\frac{1}{2\beta}\left[ \Delta\mu + gx_1^2 \right]
\; ,
\end{eqnarray}
where we used (\ref{e-firstO}).
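The factor $1/(2\beta)$ arises because the operator ${\cal{L}}$ acts on
the remaining harmonics as
\begin{eqnarray*}
{\cal{L}}\,1=2\beta\; ,\qquad
{\cal{L}}\,e^{\pm 2i\pi s_0}
=\beta\left[1+e^{\mp 2i\pi}\right]e^{\pm 2i\pi s_0}
=2\beta\,e^{\pm 2i\pi s_0}\; .
\end{eqnarray*}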
\noindent
At the next order, ${\cal{O}}(\epsilon^3)$, we obtain the desired
evolution equation as a solvability condition. At
${\cal{O}}(\epsilon^3)$ we obtain
\begin{eqnarray}
\label{e-third0}
{\cal{L}}x_3&=&-\partial_{s_0}x_2 + \beta
\partial_{s_1}e^{-\partial_{s_0}} x_2
- \partial_{s_1}x_1 - 2gx_1x_2 - \frac{1}{2}\beta
\partial_{s_1s_1} e^{-\partial_{s_0}} x_1 + \beta \partial_{s_2}
e^{-\partial_{s_0}} x_1
\nonumber \\
&=&
-\partial_{s_0}x_2 + \beta
\partial_{s_1}e^{-\partial_{s_0}} x_2 - \partial_{s_1}x_1
- 2gx_1x_2 + \frac{1}{2}\beta \partial_{s_1s_1} x_1
- \beta \partial_{s_2}
x_1
\; .
\end{eqnarray}
Again resonant terms proportional to $\exp(\pm i \pi s_0)$ are
eliminated by imposing a solvability condition which upon using the
expressions for $x_2$ yields the desired amplitude equation
\begin{eqnarray}
\label{GL}
\partial_{s_2} x_1 -\frac{1}{\beta^2}\partial_{s_0} x_1
=
\frac{g}{\beta^2} \Delta \mu \,x_1
+ \frac{1}{2\beta^2}\partial_{s_0 s_0}x_1
+ \frac{g^2}{\beta^2}x_1^3\; .
\end{eqnarray}
This is the well-studied real Ginzburg-Landau equation
\cite{KramerGL}. The time-like variable is the slow time scale $s_2$
and the space-like variable the faster time scale $s_0$ which is
${\cal{O}}(\tau)$. As in the finite dimensional case studied in
Section~\ref{Sec-BifHopf} the Hopf bifurcation is clearly subcritical
since the real part of the coefficient in front of the cubic term in
(\ref{GL}) is positive for all parameter values. Hence the coalescence
of the Hopf bifurcation and the pitchfork bifurcation cannot lead to
stable oscillations. We have shown that wave trains also undergo
unstable oscillations in the framework of the normal form
(\ref{NF}).\\
The usefulness of the spatio-temporal view point for delay
differential equations as expressed here in the Ginzburg-Landau
equation (\ref{GL}) has been pointed out
\cite{Giacomelli96,Giacomelli98,Nizette03,WolframYanchuk}. However the
Ginzburg-Landau equation (\ref{GL}) may be cast into a finite
dimensional system which emphasizes the underlying multiple scale
analysis. We start by rewriting (\ref{GL}) as an equation for the
complex amplitude $z$. One can explicitly express $s_0$-derivatives
and obtain the following finite dimensional system
\begin{eqnarray}
\label{GLz}
\partial_{s_2} z -i\pi \frac{1}{\beta^2} z
=
\frac{g}{\beta^2} \left(
\Delta \mu
-\frac{\pi^2}{2 g}
\right)\,z
+ 3\frac{g^2}{\beta^2}|z|^2z\; .
\end{eqnarray}
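Equation (\ref{GLz}) follows from inserting $x_1=z\,e^{i\pi s_0}+{\rm
c.c.}$ into (\ref{GL}) and collecting the coefficients of $e^{i\pi s_0}$,
using
\begin{eqnarray*}
\partial_{s_0}x_1 \to i\pi z\; ,\qquad
\partial_{s_0s_0}x_1 \to -\pi^2 z\; ,\qquad
x_1^3 \to 3|z|^2z\; ,
\end{eqnarray*}
where only the part of the cubic term proportional to $e^{i\pi s_0}$ is
retained.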
The time-scaling on the left hand-side is as expected from our initial
linearization and expansion of the frequency (\ref{e-omegataupi}). We
have in total
\[
i\omega \tau\approx\tau \partial_t=\partial_s\approx
\partial_{s_0}+\epsilon\partial_{s_1}+\epsilon^2\partial_{s_2}=i\pi\left(
1-\frac{1}{\beta \tau}+\frac{1}{(\beta\tau)^2} \right)\;,
\]
which corresponds to (\ref{e-omegataupi}). This illustrates the
multiple-scale character of our analysis where the nonlinear term may
be interpreted as a frequency correction \cite{F00}. The correction
term to the linear term on the right-hand side of (\ref{GLz}) shows
that the onset is retarded on the very slow time scale $s_2$.\\
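The subcritical nature of (\ref{GLz}) can be illustrated with a minimal
numerical sketch of its radial part, $\partial_{s_2}R=aR+bR^3$ with
$b=3g^2/\beta^2>0$; the coefficients below are illustrative only and do
not correspond to a particular excitable medium:
\begin{verbatim}
import numpy as np

# Radial part of (GLz): dR/ds2 = a*R + b*R**3 with b > 0 (subcritical).
# Illustrative coefficients: a < 0 below onset, b > 0 always.
a, b = -1.0, 3.0
R_branch = np.sqrt(-a / b)            # unstable branch, here ~0.577

def evolve(R0, ds=1e-3, s_max=10.0, cap=10.0):
    R, s = R0, 0.0
    while s < s_max and R < cap:
        R += ds * (a * R + b * R**3)  # explicit Euler step
        s += ds
    return R

print("below the branch:", evolve(0.9 * R_branch))  # decays towards zero
print("above the branch:", evolve(1.1 * R_branch))  # grows without bound
\end{verbatim}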
\noindent
In \cite{Echebarria02} a real Ginzburg-Landau equation was derived for
paced excitable media with an additional integral term modeling the
pacing. It would be interesting to see whether the amplitude equation
derived therein can be obtained in a multiple scale analysis along the
lines presented here.\\
\medskip
\section{Summary and Discussion}
\label{Sec-Disc}
\noindent
We have explored the Hopf bifurcations of a single pulse and of a wave
train in a ring of excitable medium. We have found that for the
phenomenological normal form (\ref{NF}) the Hopf bifurcation for a
single pulse on a ring and for a wave train on a ring is always
subcritical, independent of the equation parameters.\\
\noindent
Hopf bifurcations in excitable media have been studied
previously. Besides numerical investigations of the Barkley model
\cite{Knees92}, the modified Barkley model \cite{GottwaldKramer04},
the Beeler-Reuter model
\cite{BeelerReuter,QuanRudy,Courtemanche,Karma94,Courtemanche96,Vinet00}, the
Noble-model \cite{Noble,Karma93,Karma94} and the Karma-model \cite{Karma93},
where a Hopf bifurcation has been reported, there have been many
theoretical attempts to quantify this bifurcation for a single-pulse
on a ring. Interest has risen recently in the Hopf bifurcation in the
context of cardiac dynamics because it is believed to be a precursor
of propagation failure of pulses on a ring. The Hopf bifurcation has
been related to a phenomenon in cardiac excitable media which goes
under the name of {\it alternans}. Alternans describe the scenario
whereby action potential durations alternate periodically between
short and long values. The interest in alternans has risen as
they are believed to trigger spiral wave breakup in cardiac tissue and
ventricular fibrillation
\cite{Nolasco,Courtemanche,Karma93,Karma_A,Fenton02}.\\
\noindent
Our results may shed a new light on what may be called {\it
alternans}. The occurrence of alternans in clinical situations is
often followed by spiral wave breakup and ventricular fibrillation
\cite{Nolasco,Courtemanche,Karma_A,Karma93,Fenton02}. The subcritical
character of the Hopf bifurcation gives a simple and straightforward
explanation for this phenomenon. Moreover, if the system length $L$ is
slowly varied, long transients of apparently stable oscillations may
be observed (see Figure~\ref{Fig-karma} and
Figure~\ref{Fig-karma2}). Depending on whether the system length is
below or above the critical length $L_H$ the oscillations will relax
towards the homogeneous state or the instability will lead to wave
breakup. However, even for the case of relaxation towards the stable
homogeneous solution, these oscillations may lead to wave breakup upon
further reduction of the system length, because of the subcritical
character of the Hopf bifurcations. This illustrates the diagnostic
importance of cardiac alternans.\\
\subsection{Limitations and range of validity of our results}
\noindent
Strictly speaking, our result that the Hopf bifurcation is subcritical
for the normal form (\ref{NF}) cannot be taken as a proof that
alternans are unstable for all excitable media. The normal form
(\ref{NF}) is only valid for a certain class of excitable media. In
particular it describes the situation in which the activator weakly
interacts with the exponentially decaying tail of the inhibitor
generated by the preceding pulse. Moreover, the normal form has only
been phenomenologically derived in \cite{GottKram05}. Of course, until
a rigorous derivation of the normal form (\ref{NF}) has been provided,
the results presented here may serve as nothing more than guidance in
interpreting
alternans in real cardiac systems or more complex ionic models of
excitable media, and may alert scientists to check results on
stability of oscillations more carefully.\\
\noindent
Several simplifications have been made to obtain the normal form
(\ref{NF}) in \cite{GottKram05}. For example, the time delay
$\tau=L/c_0$ is treated as constant. This is obviously not correct for
Hopf bifurcations.
However, the inclusion of $\gamma_1$ (which is essential in the
quantitative description of the Hopf bifurcation) allows for velocity
dependent effects. Guided by the success of the normal form to
quantitatively describe a certain class of excitable media and by
numerical experiments, we are hopeful that our result may help in
interpreting experiments and numerical simulations.\\
\noindent
In Section~\ref{Sec-numerics} we will discuss a particular model for
cardiac dynamics in which for certain parameter values the assumptions
for the derivation of our normal form are violated. For these
parameter values stable oscillations may occur. However even for
systems which are described by the normal form (\ref{NF}) a word of
caution is appropriate. If the oscillatory solutions bifurcating from
the stationary solution are unstable as we have proven here, the
unstable Hopf branch could in principle fold back and restabilize. Our
analysis does not include such secondary bifurcations. Another
scenario which we cannot exclude based on our analysis is that the
unstable branch may lie within the basin of attraction of a stable oscillatory
solution far away from the homogeneous solution. However, our
numerical simulations do not hint towards such scenarios.\\ From an
observational perspective the relevance of the subcritical instability
for spiral wave breakup is a matter of the time scale of the
instability. The time scale associated with the subcritical Hopf
bifurcation may be very long as seen in Fig.~\ref{Fig-barkley}. This
time scale becomes shorter the further the perturbation in the
bifurcation parameter is from its value at the corresponding stable
stationary pulse solution. In any case, if the parameter is kept fixed
above the critical value, the instability will eventually develop
unless the life time of a reentrant spiral is less than the time scale
of the instability. For clinical applications one would need to
estimate the time scale of a reentrant spiral and compare it with the
time scale of the instability. Such estimates however are not
meaningful for simple models such as the Barkley model.\\
\noindent
Our definition of alternans is restricted to non-paced pulses on a
ring. If the excitable medium is paced, the subcritical character of
the Hopf bifurcation is not guaranteed anymore, and there is no {\it a
priori} reason why stable alternans cannot occur. Indeed, in
periodically stimulated excitable media stable alternans have been
reported
\cite{Guevara81,Lewis90,Hastings00,Guevara02,Echebarria02,Fox02,Henry05}.
A non-paced single pulse on a ring is a simple model for a reentrant
spiral moving around an anatomical obstacle or around a region of
partially or totally inexcitable tissue. As such it ignores the
dynamics of the spiral away from the obstacle. An extension would be
to look at a transversal one-dimensional slice through a spiral and
consider wave trains and instabilities of such wave trains.
\subsection{Relation to the restitution condition}
Since the pioneering work \cite{Guevara84} alternans have been related
to a period-doubling bifurcation. This work has rediscovered the
results by \cite{Nolasco}, which had hardly been noticed by the
scientific community until then. In there it was proposed that the
bifurcation can be described by a one-dimensional return map relating
the action potential duration ($APD$) to the previous recovery time,
or diastolic interval ($DI$), which is the time between the end of a
pulse to the next excitation. A period-doubling bifurcation was found
if the slope of the so-called restitution curve, which relates the
$APD$ to the $DI$, exceeds one. A critical account of the predictive
nature of the restitution curve for period-doubling bifurcations is
given in
\cite{Fenton99,Fox02}. In \cite{Karma94} the instability was analyzed
by reducing the partial differential equation describing the excitable
media to a discrete map via a reduction to a free-boundary problem. In
\cite{GottwaldKramer04} the Hopf bifurcation could be described by
means of a reduced set of ordinary-differential equations using a
collective coordinate approach. In
\cite{Courtemanche,Courtemanche96,Vinet00,Vinet03} the bifurcation was
linked to an instability of a single integro-delay equation. The
condition for instability given by this approach states - as in some
previous studies involving one-dimensional return maps - that the
slope of the restitution curve needs to be greater than one. However,
as evidenced in experiments \cite{Hall,Banville02} and in theoretical
studies \cite{Fenton99,Fox02,Keener02,Cherry04,Bauer07} alternans do
not necessarily occur when the slope of the restitution curve is
greater than one. In our work we have a different criterion for
alternans (which we interpret now as unstable periodic
oscillations). Our condition for the occurrence of alternans, $\beta
\tau > 1$, does not involve the restitution curve but involves the
coupling strength and the wave length. Moreover, in
Fig.~\ref{Fig-Stabil}(b) we can see that for our normal form pulses
can be stable for values of $\beta \tau \gg 1$ in accordance with the
above mentioned experiments and numerical studies.\\
\noindent
In the following we will show how our necessary condition for the
onset of instability $\beta \tau > 1$ can be related to the
restitution condition, namely that the onset of instability occurs
when the slope of the restitution curve exceeds $1$.\\
\noindent
Close to the saddle node the Hopf frequency is $\omega \tau
\approx 0$. We introduce a small parameter $\delta\ll1$ and write
close to the saddle node
\[
X={\bar X}_{SN} + \delta x \; ,
\]
where ${\bar X}_{SN}$ is given by (\ref{SN_wt}). The generic scaling
close to the saddle node implies that we may write
$\mu=\mu_{SN}+\delta^2 \Delta \mu$. Using the critical slowing down at
the saddle node and the fact that $\omega \tau \approx 0$ we may
approximate the normal form (\ref{NF}) to describe the temporal change
of $X$ at some time $t$ and at some later time $t+\tau$.
\begin{eqnarray*}
\label{e-backwardeuler}
\frac{\delta x_{n+1}-\delta x_n}{\tau} = -\mu_{SN}-\delta^2\Delta \mu
- g({\bar X}_{SN} + \delta x_n)^2 - \beta(\gamma + {\bar X}_{SN} +
\gamma_1 {\bar X}_{SN} + \delta x_{n-1} + \gamma_1 \delta x_{n})\; .
\end{eqnarray*}
Here $x_n=x(t_n)$ and $x_{n+1}=x(t_n+\tau)$. Neglecting terms of
${\cal{O}}(\delta^2)$ and using the definition of the saddle node
(\ref{SN_wt}) we end up with
\begin{eqnarray*}
\label{e-backwardeuler2}
x_{n+1}-(1+\beta \tau) x_n + \beta \tau x_{n-1} = 0\; .
\end{eqnarray*}
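The standard ansatz $x_n=\lambda^n$ turns this difference equation into
the characteristic equation
\begin{eqnarray*}
\lambda^2-(1+\beta\tau)\lambda+\beta\tau
=(\lambda-1)(\lambda-\beta\tau)=0\; ,
\end{eqnarray*}
with roots $\lambda=1$ and $\lambda=\beta\tau$.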
The root $\lambda=1$ yields a constant solution, which corresponds to
the stable steady solution described by ${\bar X}_1$ of (\ref{statio});
the root $\lambda=\beta\tau$ yields
\begin{eqnarray*}
x_{n}= (\beta \tau)^n x_0\;,
\end{eqnarray*}
which implies
\begin{eqnarray}
\label{e-restcond}
x_{n}= \beta \tau x_{n-1}\; .
\end{eqnarray}
Close to the saddle node the amplitude of the activator correlates
well with the APD, and we find that $\beta \tau > 1$ is exactly the
restitution condition whereby the slope of the restitution curve has
to be larger than one.\\
\noindent
Our model contains the restitution condition as a limiting case when
the Hopf bifurcation occurs close to the saddle node. However, as seen
in Fig.~\ref{Fig-Stabil} $\beta \tau$ may be larger than one but still
the system supports stable pulses. These corrections to the
restitution conditions are captured by our model. Moreover, the normal
form is able to determine the frequency at onset.\\
\noindent
We note that the parameter $\gamma_1$ does not enter the restitution
condition; it is not needed for the {\it existence} of a Hopf
bifurcation (cf. (\ref{HomHopfa}) and (\ref{HomHopfb})). However, as
pointed out in \cite{GottKram05} quantitative agreement with numerical
simulations is only given if $\gamma_1$ is included. In
\cite{GottKram05} the inclusion of the $\gamma_1$-term takes into
account the velocity dependent modifications of the bifurcation
behaviour: large-amplitude pulses have a higher velocity than
low-amplitude ones. A larger pulse will therefore run further into the
inhibitor generated by its predecessor. Velocity restitution curves
have been studied in
\cite{Keener02} to allow for a modification of the restitution
condition derived in \cite{Courtemanche96} for a single pulse in a
ring. The normal form incorporates naturally these velocity dependent
terms.\\
\noindent
For a recent numerical study on the validity of the restitution
condition the reader is referred to \cite{Bauer07}. In this work the
stability of certain excitable media is investigated by means of
numerical continuation methods which allow a precise identification
of the onset of oscillations. At the onset of alternans the
restitution curve was determined. It was found that the restitution
condition failed for three out of four cases for pulses in a
one-dimensional ring. Our result suggests that the restitution
condition may be a good indicator for the onset of alternans close to
the saddle node.\\
\subsection{Numerical simulations}
\label{Sec-numerics}
\noindent
In the context of alternans the Hopf bifurcation had been described as
a supercritical bifurcation
\cite{Courtemanche,Karma93,Karma94,Courtemanche96} and not as we have
found here as a subcritical bifurcation (although at the same time
their occurrence had been related to wave breakup \cite{Karma94}). We
therefore revisit some of the previous numerical studies. In
\cite{Karma93} the following two-variable model was proposed
\begin{eqnarray}
\label{karma}
\epsilon \partial_t E&=&\epsilon^2\partial_{xx}E-E + \left[
A-\left(\frac{n}{n_B}\right)^M\right] \left( 1-\tanh(E-3)\right)
\frac{E^2}{2} \nonumber\\
\partial_t n&=&\theta(E-1)-n\; ,
\end{eqnarray}
as a model for action potential propagation in cardiac tissue. Here
$\theta(x)$ is the Heaviside step function. This model incorporates
essential features of electrophysiological cardiac models. For the
parameters $A=1.5415$, $\epsilon=0.009$, $M=30$ and $n_B=0.525$ a
supercritical Hopf bifurcation was reported upon diminishing the
system length $L$. We integrate this model using a pseudospectral
Crank-Nicolson method where the nonlinearity is treated with an
Adams-Bashforth scheme. We use a timestep of $dt=0.00001$ and $4096$
spatial grid points. A Hopf bifurcation occurs around $L=0.215$. To
approach the Hopf bifurcation we created a stable pulse for some large
system length, and subsequently diminished the system length $L$. In
Figure~\ref{Fig-karma} we show that for these parameters the
bifurcation is actually subcritical. The subcritical character has not
been recognized before, probably because of insufficiently long
integration times.
For system length $L$ just above the critical length the oscillations
can appear stable for a very long time (see Figure~\ref{Fig-karma2})
before they settle down to the homogeneous solution.
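For reference, the following Python sketch illustrates the type of
semi-implicit pseudospectral scheme used here for (\ref{karma}); it is
not the production code, and the grid, time step, number of steps and
initial condition are purely illustrative:
\begin{verbatim}
import numpy as np

# Karma model (karma): eps*dE/dt = eps^2*E_xx - E
#                      + [A-(n/n_B)^M]*(1-tanh(E-3))*E^2/2,
#                      dn/dt = theta(E-1) - n,  periodic ring of length L.
A, eps, M, n_B, L = 1.5415, 0.009, 30, 0.525, 0.25
N, dt, nsteps = 512, 1e-5, 1000        # much coarser than in the text
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

def f_E(E, n):                         # reaction term of E, divided by eps
    return (-E + (A - (n / n_B)**M)
            * (1.0 - np.tanh(E - 3.0)) * 0.5 * E**2) / eps

def f_n(E, n):
    return np.heaviside(E - 1.0, 0.0) - n

E = 3.0 * np.exp(-((x - 0.5 * L) / (0.05 * L))**2)   # crude initial pulse
n = np.zeros(N)
FE_old, Fn_old = f_E(E, n), f_n(E, n)  # Adams-Bashforth start-up
sym = eps * (-k**2)                    # Fourier symbol of eps*d^2/dx^2
num, den = 1.0 + 0.5 * dt * sym, 1.0 - 0.5 * dt * sym

for step in range(nsteps):
    FE, Fn = f_E(E, n), f_n(E, n)
    # E: Crank-Nicolson for the diffusion (spectral), AB2 for the reaction
    E_hat = (num * np.fft.fft(E)
             + dt * np.fft.fft(1.5 * FE - 0.5 * FE_old)) / den
    E = np.real(np.fft.ifft(E_hat))
    n = n + dt * (1.5 * Fn - 0.5 * Fn_old)   # n: no diffusion, AB2 only
    FE_old, Fn_old = FE, Fn

print("max E after %d steps: %.3f" % (nsteps, E.max()))
\end{verbatim}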
\begin{figure}
\centerline{
\psfig{file=Fig5a.ps,angle=270,width=3.0in}}
\caption{Temporal behaviour of the maximal amplitude $E_{max}$ of
the activator $E$ for model (\ref{karma}) just above the subcritical
Hopf bifurcation. The parameters are $A=1.5415$, $\epsilon=0.009$,
$M=30$, $n_B=0.525$ and $L=0.215$. The inset shows the behaviour at
$L=0.210$.}
\label{Fig-karma}
\end{figure}
\begin{figure}
\centerline{
\psfig{file=Fig5b1.ps,angle=270,width=3.0in}
\quad
\psfig{file=Fig5b2.ps,angle=270,width=3.0in}}
\caption{Temporal behaviour of the maximal amplitude $E_{max}$ of
the activator $E$ for model (\ref{karma}). The system length is just
below the Hopf bifurcation with $L=0.22$; the other parameters are as
in Figure~\ref{Fig-karma}. (a): The oscillations appear to be stable
over some time. (b): Same parameters as in (a) but longer integration
time. The apparent stability has to be accounted for by insufficiently
long integration times. The solution adjusts to the homogeneous
solution. Note the long time scales which contain hundreds of
oscillations.}
\label{Fig-karma2}
\end{figure}
Indeed, as already stated in our paper \cite{GottKram05}, the number
of oscillations may be rather large when the instability is weak. In
Figure~\ref{Fig-barkley} we show such a case for the maximal amplitude
of the activator $u$ for the modified Barkley model
\begin{eqnarray}
\label{barkley}
\partial_t u &=& D \partial_{xx}u + u(1-u)(u-u_s-v) \nonumber \\
\partial_t v &=& \epsilon \ (u- a\ v)\ ,
\end{eqnarray}
which is a reparameterized version of a model introduced by Barkley
\cite{Barkley91}. It is clearly seen that the oscillations can appear
stable for a very long time and for many oscillations (in this case
more than $500$ oscillations), which has led scientists to the wrong
conclusion that the Hopf bifurcation is supercritical.\\
\begin{figure}
\centerline{
\psfig{file=Fig6new.eps,angle=0,width=3.0in}}
\caption{Temporal behaviour of the maximal amplitude $u_{max}$ of
the activator $u$ for model (\ref{barkley}) just above the subcritical
Hopf bifurcation. The parameters are $a=0.22$, $u_s=0.1$,
$\epsilon=0.03755$ and $L=246$. The oscillations appear stable for a
very long time but will eventually either damp out and attain a
constant non-zero value when $L$ is larger than the critical length
$L_H$ at which the Hopf bifurcation occurs, or, in the case $L<L_H$,
the pulse will collapse as depicted in Figure~\ref{Fig-karma},
confirming the subcritical character of the Hopf bifurcation.}
\label{Fig-barkley}
\end{figure}
\noindent
The normal forms (\ref{NF}) or (\ref{NF2}) were derived for situations
in which the activator weakly interacts with the tail of the preceding
inhibitor which exponentially decays towards the homogeneous rest
state. Then one can describe the influence of the tail of the
preceding inhibitor as a perturbation to the generic saddle node of
the isolated pulse. The models discussed so far all fall into this
category. A different model was introduced by Echebarria and Karma in
\cite{Echebarria02} which, as we will see below, for certain
parameter regions does not fall into this class of models but supports
stable oscillations. Originally the model was studied for a paced
strand but recently has also been studied in a ring geometry
\cite{Echebarria06}. It has been argued in \cite{Echebarria06} that
the stability of the spatially extended pulse is determined by the
stability of a paced single cell. In the following we study
numerically the Hopf bifurcation for this model in a ring
geometry. This will illustrate the range of validity for our normal
form and the conclusions which may be drawn with respect to the
stability of cardiac alternans. The model consists of the standard
cable equation
\begin{eqnarray}
\label{EcheKarma1}
\partial_t V &=& D \partial_{xx}V - \frac{I_{{\rm ion}}}{C_m}\; ,
\end{eqnarray}
where $I_{\rm ion}$ models the membrane current and $C_m$ is the
capacitance of the membrane. In \cite{Echebarria02} the following form
for the membrane current was proposed
\begin{eqnarray}
\label{EcheKarma2}
\frac{I_{{\rm ion}}}{C_m} = \frac{1}{\tau_0}\left(
S+(1-S)\frac{V}{V_c}\right)
-\frac{1}{\tau_a}hS
\; ,
\end{eqnarray}
with a switch function
\begin{eqnarray}
\label{EcheKarmaS}
S=\frac{1}{2}\left(1+\tanh(\frac{V-V_c}{\epsilon})\right)
\; .
\end{eqnarray}
The gate variable $h$ evolves according to
\begin{eqnarray}
\label{EcheKarma3}
\frac{dh}{dt}=\frac{1-S-h}{\tau_m(1-S)+\tau_pS}
\; .
\end{eqnarray}
The stable homogeneous rest state is at $V=0$ and $h=1$; however for
small $\tau_a$ a second stable focus may arise. For details on the
physiological interpretations of the model the reader is referred to
\cite{Echebarria02,Echebarria06}. For the numerical integration we use
again a semi-implicit pseudospectral Crank-Nicolson method where the
nonlinearity is treated with an Adams-Bashforth scheme. We use a
timestep of $dt=0.01$ and $1024$ spatial grid points. In
Fig.~\ref{Fig-EcheKarma1} we show an example for a subcritical Hopf
bifurcation in this model consistent with our theory. However, for
sufficiently small $\tau_a$ a supercritical Hopf bifurcation arises
upon decreasing the ring length $L$. In Fig.~\ref{Fig-EcheKarma2} we
present a space-time plot for such a situation of stable oscillations.
\begin{figure}
\centerline{
\psfig{file=Fig8.eps,angle=0,width=3.0in}}
\caption{Temporal behaviour of the maximal amplitude $V_{max}$ of
the activator $V$ for model (\ref{EcheKarma1})-(\ref{EcheKarma3}). The
parameters are $\tau_0 = 150$, $\tau_a= 26$, $\tau_m = 60$, $\tau_p = 12$,
$V_c = 0.1$, $D= 0.00025$ and $\epsilon = 0.005$. The main figure is
obtained for $L=1.11$ which is slightly above the subcritical Hopf
bifurcation confirming the subcritical character of the Hopf
bifurcation. The inset is for $L=1.1175$ which is slightly below the
bifurcation point.}
\label{Fig-EcheKarma1}
\end{figure}
\begin{figure}
\centerline{
\psfig{file=Fig11.eps,angle=0,width=4.0in}}
\caption{Space-time plot of stable oscillations occurring at
$\tau_a=6$ with $L=4.8$. The other parameters are as in
Fig.~\ref{Fig-EcheKarma1}. Stable oscillations are found for a range
of ring lengths $L$.}
\label{Fig-EcheKarma2}
\end{figure}
Whereas the subcritical case is consistent with our theory we now have
to understand why for small $\tau_a$ stable oscillations occur. In
order to do so it is helpful to look at the spatial profiles of the
activator and the inhibitor close to the Hopf bifurcation which are
presented in Fig.~\ref{Fig-EcheKarma3}. In the left figure we see the
activator $V$ and the inhibitor $1-h$ for the case of a subcritical
Hopf bifurcation as seen in Fig.~\ref{Fig-EcheKarma1}. The figure is
similar to Fig.~\ref{Fig-barkley2} for the modified Barkley model. The
activator weakly interacts with the exponentially decaying tail of the
inhibitor it created during its previous revolution. In this parameter
region our normal form is valid and correctly predicts a subcritical
bifurcation. In the right figure of Fig.~\ref{Fig-EcheKarma3} the
situation is depicted for the supercritical case seen in
Fig.~\ref{Fig-EcheKarma2}. Here the situation is very different. The
inhibitor does not approach the homogeneous rest state $1-h=0$ but
rather develops a metastable $1-h=1$ plateau. This has two
consequences: firstly, the solution is driven away from the homoclinic
pulse solution around which the normal form is built, and secondly, the
interaction is no longer weak.
\begin{figure}
\centerline{
\psfig{file=Fig9.eps,angle=0,width=2.5in}
\psfig{file=Fig10.eps,angle=0,width=2.5in}}
\caption{Plot of the activator $V$ (continuous line) and the inhibitor
$h$ (dashed line) for the system
(\ref{EcheKarma1})-(\ref{EcheKarma3}). We plot here $1-h$ rather than
$h$ to have the homogeneous rest state at $V=0$ and
$1-h=0$. Parameters are $\tau_0 = 150$, $\tau_m = 60$, $\tau_p = 12$,
$V_c = 0.1$, $D= 0.00025$ and $\epsilon = 0.005$. Left: The activator
runs into the exponentially decaying tail of the inhibitor which
decays towards the rest state $1-h=0$. This is similar to the
behaviour in Fig.~\ref{Fig-barkley2}. Parameters are $\tau_a=26$ with
$L=1.11$. This scenario is well described by the normal form. Right:
The activator does not interact with the exponentially decaying tail
corresponding to the rest state but rather with the metastable state
defined by ${\dot h}=0$. Parameters are $\tau_a=6$ with $L=4.8$. This
case cannot be captured by the normal form.}
\label{Fig-EcheKarma3}
\end{figure}
The reason for this different behaviour can be understood by looking
at the nullclines of the homogeneous problem of
(\ref{EcheKarma1})-(\ref{EcheKarma3}), i.e. setting $\partial_x=0$. In
Fig.~\ref{Fig-nullclines-echekarma} we show the nullclines for the two
cases $\tau_a=6$ (supercritical) and $\tau_a=26$ (subcritical). Note
that for $\tau_a=6$ the only stable fixed point is at $V=0$ and
$h=1$. The difference is that in the supercritical case the ${\dot
h}=0$ nullcline and the ${\dot V}=0$ nullcline are very close to each
other. This forces the trajectory to spend a long time on the ${\dot
h}=0$-nullcline near $h=0$ (as seen in the plateau part of the spatial
profile of $1-h$ in Fig.~\ref{Fig-EcheKarma3}). We call this state a
metastable state. For decreasing ring length it dominates the profile
of the inhibitor and does not allow the inhibitor to come close to the
rest state $h=1$. Therefore our normal form, which is formulated
around the saddle-node of the pulse, breaks down. The solution is not
close to the travelling pulse in phase space anymore, and our local
analysis around the saddle-node of the travelling wave no longer
applies. However, we note that the system
(\ref{EcheKarma1})-(\ref{EcheKarma3}) is rather unusual with the two
nullclines being parallel to each other with the possibility of a
metastable state, resulting in rather particular dynamical
behaviour.\\
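Setting $\partial_x=0$, the ${\dot V}=0$ nullcline is given by
$h=(\tau_a/\tau_0)\left(S+(1-S)V/V_c\right)/S$ and the ${\dot h}=0$
nullcline by $h=1-S$. Their closeness can be quantified with the
following minimal Python sketch (the $V$-range is chosen for
illustration):
\begin{verbatim}
import numpy as np

# Nullclines of the homogeneous (d/dx = 0) system (EcheKarma1)-(EcheKarma3):
#   dV/dt = -(1/tau0)*(S + (1-S)*V/Vc) + (1/tau_a)*h*S
#   dh/dt = (1 - S - h)/(tau_m*(1-S) + tau_p*S)
tau0, tau_m, tau_p, Vc, eps = 150.0, 60.0, 12.0, 0.1, 0.005

def S(V):
    return 0.5 * (1.0 + np.tanh((V - Vc) / eps))

V = np.linspace(0.2, 4.0, 400)         # region where S(V) is close to one
for tau_a in (26.0, 6.0):              # subcritical / supercritical cases
    h_Vnull = (tau_a / tau0) * (S(V) + (1.0 - S(V)) * V / Vc) / S(V)
    h_hnull = 1.0 - S(V)
    print("tau_a = %4.1f : minimal nullcline gap %.3f"
          % (tau_a, np.min(np.abs(h_Vnull - h_hnull))))
\end{verbatim}
For $V$ well above $V_c$ the gap is roughly $\tau_a/\tau_0$, i.e.\ more
than four times smaller for $\tau_a=6$ than for $\tau_a=26$, in line
with Fig.~\ref{Fig-nullclines-echekarma}.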
\begin{figure}
\centerline{
\psfig{file=Fig13.eps,angle=0,width=2.5in,height=2.0in}
\psfig{file=Fig14.eps,angle=0,width=2.5in,height=2.0in}
\psfig{file=Fig15.eps,angle=0,width=2.5in,height=2.0in}
}
\caption{Nullclines for the system
(\ref{EcheKarma1})-(\ref{EcheKarma3}). The continuous lines denote the
${\dot V}=0$ nullclines and the dashed lines the ${\dot h}=0$
nullclines. Parameters are $\tau_0 = 150$, $\tau_m = 60$, $\tau_p =
12$, $V_c = 0.1$ and $\epsilon = 0.005$, and for all cases only one
stable fixed point exists at $V=0$ and $h=1$. Left: The subcritical case
with $\tau_a=26$. Middle: The supercritical case with $\tau_a=6$. Note
the closeness of the nullclines for large $V$. Right: Nullclines for
the modification (\ref{EcheKarmacorr}) which breaks the near
degeneracy of the nullclines observed in the middle figure. Here
$\tau_a=6$ and $\tau_l=3$.}
\label{Fig-nullclines-echekarma}
\end{figure}
We may modify the model (\ref{EcheKarma1})-(\ref{EcheKarma3}) to break
the degenerate situation in which the two nullclines of the inhibitor
and activator run parallel to each other and subsequently may get too
close to each other for certain parameters. We can destroy the
existence of the metastable state for finite ring length by allowing
the nullcline of the activator to bend away from the nullcline of the
inhibitor if we, for example, consider the following modification of
the membrane current
\begin{eqnarray}
\label{EcheKarmacorr}
\frac{I_{{\rm ion}}}{C_m} = \frac{1}{\tau_0}\left(
S+(1-S)\frac{V}{V_c}\right)
+ \frac{1}{\tau_l}V^2
-\frac{1}{\tau_a}hS
\; ,
\end{eqnarray}
with some sufficiently small $\tau_l$. Then we are again in the
situation where the rest state $V=0$ and $h=1$ dominates the dynamics
upon decreasing the ring length $L$. The nullclines are shown in
Fig.~\ref{Fig-nullclines-echekarma}. We confirmed that for $\tau_l=3$
the Hopf bifurcation is indeed subcritical, consistent with our
theoretical result. We note that the actual value of $\tau_l$ is not
important for the existence of the subcritical bifurcation but rather that
a sufficiently small $\tau_l$ breaks the geometric structure of the
degenerate nullclines and allows the activator nullcline to bend away
from the inhibitor nullcline. The model
(\ref{EcheKarma1})-(\ref{EcheKarma3}) illustrates for which class of
excitable media our normal form is applicable and for which systems we
may draw conclusions on the stability of dynamical alternans in a
ring.\\
\medskip
{\underbar{\bf Acknowledgements }} I would like to thank Sebastian
Hermann for helping with the DDE-BIFTOOL software, and Martin
Wechselberger for fruitful discussions. I gratefully acknowledge
support by the Australian Research Council, DP0452147 and DP0667065.
\section{Introduction}
As structure in the Universe grows hierarchically, galaxies occasionally experience major mergers with one another \citep{SandersMirabel1996}. Such collisions can induce prodigious levels of star formation \citep[e.g.,][]{Fensch2017} and potentially boost the growth of the central supermassive black hole (SMBH) in each galaxy \citep[e.g.,][]{DiMatteo2005, Hopkins2006}. Cosmological hydrodynamic simulations \citep[e.g.,][]{Volonteri2016,Capelo2017,Rosas-Guevara2019} of galaxy evolution predict that in some cases, the two supermassive black holes in a merging galaxy pair will accrete matter at the same time, and thus appear as a luminous dual quasar.
Searches for dual quasars \citep[e.g.,][]{Hewett1998,Hennawi2010} and pairs of less luminous Active Galactic Nuclei \citep[e.g.,][]{Liu2011,Comerford2012,Imanishi2014,Hou2019,Pfeifle2019} have had some success. Recently, \citet{DeRosa2019} provide a thorough review on the searches for binary supermassive black holes on various scales and across different wavebands. Briefly, detailed investigations of double-peaked [OIII]$\lambda4959,5007$ emitters have enabled estimates of the frequency of dual AGNs \citep[e.g.,][]{Comerford2013} and their host galaxy properties \citep[e.g.,][]{Liu2013,Comerford2015}. However, a fair fraction of these lower luminosity candidates are attributed to gas motions in the narrow-line region due to other effects such as outflows \citep[e.g.,][]{Rosario2011,Fu2012,Comerford2012,Mueller-Sanchez2015}. Only a modest number of more luminous dual quasars have been identified at close separation, i.e., during the final stages of the merger at $\lesssim$20 kpc \citep{Komossa2003,Hennawi2006,Shields2012,Inada2012,More2016,Eftekharzadeh2017,Goulding2019}, due to limitations in instrumental spatial resolution.
Current and future wide-field deep imaging surveys have the potential to significantly improve upon the number of confirmed dual quasars, particularly at higher luminosity and closer separations. For instance, the wide-area coverage, depth, and superb seeing of the Subaru Strategic Program \citep[SSP; ][]{Aihara2018,Aihara2019} with Hyper Suprime-Cam (HSC; \citealt{Miyazaki2018}) can be used to search for rare and key phases of black hole growth, including dual quasars as signposts of major galaxy mergers. Simulations show that quasar accretion is episodic \citep[e.g.,][]{Capelo2017}, thus the supermassive black holes in a major merger will both shine as a quasar only a small fraction of the merger time. The areal coverage of HSC is now sufficient to detect these rare events in significant numbers. Equally important, the median seeing of the HSC imaging is $0\farcs6$ in the i-band, meaning that dual quasars can be identified with projected separations as small as $\sim$3 kpc at $z\sim0.3$. For those at $z<1$, the underlying host galaxies can be robustly detected with HSC including those with faint structures such as tidal debris from an ongoing merger \citep{Goulding2018}.
To exploit the capabilities of HSC on this topic, we are carrying out a program to search for SDSS quasars, falling within the HSC survey footprint, that are dual quasar candidates based on an automated 2D image detection algorithm to identify multiple compact emission on the scale of the HSC point spread function. These objects were initially identified as single SDSS quasars based on optical photometry and spectroscopy, given the lower spatial resolution of the SDSS data. Dual quasars with separations $>$2$^{\prime\prime}$ are also missed by SDSS, as the survey could not obtain spectra of close pairs of objects due to a limitation on the minimum distance between two optical fibers on a single spectroscopic plate. Even so, subsequent dedicated spectroscopic followup programs of dual quasar candidates have confirmed 47 cases with most at wider separations \citep[$>2\farcs9$;][]{Eftekharzadeh2017} with essentially no overlap with the HSC SSP program.
Using the Subaru HSC SSP imaging, we report here on the identification of dual quasar candidates having projected separations between 3 and 30 kpc. An optical spectroscopic campaign is underway using Keck-I/LRIS, Subaru/FOCAS and Gemini (GMOS+NIFS) to confirm a statistical sample of dual quasars. Here, we present our first three spectroscopic confirmations based on Keck and Gemini/NIFS observations. In Section~\ref{sec:selection}, we describe the procedure to identify dual quasar candidates using HSC imaging. The followup spectroscopic campaigns are summarized in Section~\ref{sec:spectroscopy} and three confirmed dual quasars are highlighted in Section~\ref{sec:results}. Based on these initial results, we give a preliminary estimate of the dual fraction of luminous quasars (Section~\ref{sec:dualfrac}).
We discuss our results in the context of cosmological hydrodynamic simulations that span volumes capable of providing expectations on the rate of dual AGN activity and its evolution with redshift \citep{Rosas-Guevara2019,Volonteri2016,Steinborn2016}. In particular, dual quasar fractions from the Horizon-AGN simulation (M. Volonteri - private communication) are presented with parameters (i.e., redshift, quasar luminosity, ratio between the luminosity of the two quasars, projected physical separation) broadly matched to our sample (Section~\ref{sec:dualfrac}). However, any firm conclusions from such comparisons require a larger spectroscopic sample and better understanding of the selection function for both the observational and simulated samples. Throughout this paper, we use a Hubble constant of $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and cosmological density parameters $\Omega_\mathrm{m} = 0.3$ and $\Omega_\Lambda = 0.7$.
\begin{figure}
\epsscale{1.2}
\plotone{f1.jpeg}
\caption{SDSS quasar population imaged by Subaru HSC and identified as dual quasar candidates: Distribution of bolometric luminosity ($L_{bol}$), as tabulated in the SDSS DR14 catalog (small grey points), as a function of redshift. The initial 421 dual candidates are marked as open blue circles (including those having optical colors consistent with stars), while red circles indicate those (116) with the companion having $g-r<1$, a cut that removes many contaminating stars. The yellow squares mark the location of our first three spectroscopically confirmed dual quasars presented in this work. Other spectroscopic targets are shown by the green circles. The horizontal dashed line indicates the luminosity limit of the primary target used to calculate the dual quasar fraction.}
\label{fig:sample}
\end{figure}
\begin{figure*}
\epsscale{0.9}
\plotone{f2.jpeg}
\caption{Example of our two-dimensional image decomposition of a dual quasar candidate (SDSS J1416+0033) with two point-source components and a S\'{e}rsic model. The panels are as follows: (a) original $i$-band HSC image (logarithmic stretch) with a spatial resolution of $0\farcs51$, (b) smooth model galaxy plus two point-source components convolved with the PSF, (c) residual emission after subtracting the best-fit model from the data and dividing by the error map (linear scaling), (d) data minus the two point sources (image of the host galaxy only), (e) data minus the smooth host model (two quasars only) and (f) data minus the model including all components (residual emission) similar to panel c. A scale bar of 1$^{\prime\prime}$ is shown in each panel.}
\label{fig:decomp_example}
\end{figure*}
\section{Identifying dual quasar candidates from the luminous SDSS quasar population with Subaru HSC}
\label{sec:selection}
The imaging used in this study is drawn from the second public data release of the HSC SSP program \citep{Aihara2019}, including 796 deg$^2$ of $i$-band imaging. We include all imaging with at least one 200 second exposure, so not all data reach the final depth of 26.2 mag (5$\sigma$; AB). Our initial selection of dual quasar candidates is based on the $i$-band due to the quality of the seeing in that band. We then use the images taken in the other four broad-band filters \citep[i.e., $g$, $r$, $z$, $y$;][]{Kawanomoto2018} to provide color information. The imaging data are processed through the standard pipeline \citep{Bosch2018} hscPipe Version 6.7.
The SDSS DR14 v.4.4 quasar catalog \citep{Paris2018} contains 526,357 spectroscopically confirmed quasars with spectroscopic redshift up to $z\sim5$ (Figure~\ref{fig:sample}). The SDSS quasar selection includes magnitude- and color-selected samples \citep{Richards2002,Ross2012,Myers2015} and objects identified through other means, e.g., radio, X-ray, infrared. Of these, 34,476 quasars have catalog entries in the HSC database within the usable HSC exposure area which are not flagged as saturated ($i_{AB}\gtrsim18$), having a bad pixel, or unable to determine a magnitude based on a model fit. HSC image cutouts of size $60^{\prime\prime}\times60^{\prime\prime}$ are generated for each object, together with the variance image and model PSF.
An automated algorithm in Python is run on each Subaru/HSC i-band cutout image to detect those with multiple optical components. These analysis tools, provided by the open-source package Lenstronomy \citep{Birrer2018}, have been slightly modified to process HSC images, starting from versions used to de-blend AGN and host galaxy emission based on HST imaging \citep{Ding2020}. We initially detect peaks in each image based on emission above a given signal-to-noise ratio and over a number of contiguous pixels. We then perform a forward modeling of the two-dimensional distribution of the emission. This model includes unresolved emission characteristic of the point-spread function (PSF; \citealt{Coulton2018}) of each quasar, and 2-dimensional S\'ersic profiles for the host galaxy, usually detected for only those quasars with $z < 1$. Accurate models of the PSF for each quasar are of primary importance for this analysis. We utilize the empirical PSF models used for the lensing measurements, which have been constructed from stars that fall on the same CCD as the main target. \citet{Carlsten2018} presents a detailed analysis of the wavelength dependence of the PSF models. In most cases, the HSC pixel scale of $0\farcs168$ is sufficient to sample the PSF. We simultaneously fit all galaxies in the field-of-view of the image cutout since their extended profiles can contribute flux to the main targets of interest. We also allow for a local constant sky level to account for errors in the global sky subtraction routine of the standard pipeline analysis. In Figure~\ref{fig:decomp_example}, we demonstrate the results for an example dual quasar candidate. This object has been selected to demonstrate our analysis routine and may be an exceptional case as described further below. For each component, our procedure returns the centroid, total flux, S\'ersic parameters (half-light radius, S\'ersic index) and ellipticity. Further details and examples of the fitting procedure of AGNs and their host galaxies using these tools with HSC data will be presented in a paper in preparation. In particular, while the deep limiting sensitivity of the HSC imaging can allow us to tie the growth of the individual SMBHs to the evolutionary state of the galaxy merger for cases at $z<1$, we reserve such analysis for a more focused effort, primarily since the two-dimensional modeling of the host optical emission from merging galaxies may require additional components in the fitting routine due to complexity in their light distribution.
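As a schematic illustration of the initial peak-finding step (the
actual modeling is done with Lenstronomy as described above; the
thresholds and the toy image below are purely illustrative), pixels
above a signal-to-noise cut can be grouped into contiguous islands and
their centroids recorded:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def detect_components(image, variance, snr_min=5.0, npix_min=5):
    """Return centroids of contiguous islands of pixels above an assumed
    signal-to-noise threshold; thresholds are illustrative only."""
    mask = image / np.sqrt(variance) > snr_min
    labels, nlab = ndimage.label(mask)                 # contiguous islands
    sizes = ndimage.sum(mask, labels, np.arange(1, nlab + 1))
    keep = np.arange(1, nlab + 1)[sizes >= npix_min]   # drop tiny islands
    return ndimage.center_of_mass(image, labels, keep) # list of (y, x)

# Toy 60x60 "cutout" with two point-like sources separated by ~12 pixels
y, x = np.mgrid[0:60, 0:60]
img = (50 * np.exp(-((x - 24)**2 + (y - 30)**2) / 8.0)
       + 30 * np.exp(-((x - 36)**2 + (y - 30)**2) / 8.0)
       + np.random.normal(0.0, 1.0, (60, 60)))
print(detect_components(img, np.ones_like(img)))
\end{verbatim}
A candidate would then be flagged when more than one component is found
within the adopted separation window.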
Dual quasar candidates are those with multiple optical components with a separation between $0\farcs6$ and 4$^{\prime\prime}$. The lower bound is determined by the spatial resolution of the imaging while the upper bound is arbitrarily set to reach separations of a few tens of kpc while minimizing the level of contamination. The procedure initially identified 452 candidates at these separations (Table~\ref{tab:sample}). Visual inspection caused us to reject 27 objects as artifacts. In addition, four objects were found to be gravitational lenses, including one identified through our Keck spectroscopy (see below). Thus, there are 421 dual quasar candidates with 385 having a flux ratio of 10:1 or less, prior to consideration of the contamination by foreground stars (Section~\ref{sec:stars}). In Figure~\ref{fig:sample} we show the bolometric luminosities of this sample relative to the overall parent quasar population as a function of redshift. In Table 1, we provide the sample size at each step in the selection process.
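For reference, the projected physical scales probed by this angular
window can be computed with the cosmology adopted in this paper; a
minimal sketch using astropy (values are indicative):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this paper

def projected_separation_kpc(sep_arcsec, z):
    """Proper transverse separation corresponding to an angular offset."""
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    return (sep_arcsec * u.arcsec * scale).value

for z in (0.3, 1.0):
    print(z, [round(projected_separation_kpc(s, z), 1) for s in (0.6, 4.0)])
\end{verbatim}
At $z\sim0.3$ the lower bound of $0\farcs6$ corresponds to roughly 3
kpc, consistent with the separations quoted above.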
\begin{figure}
\epsscale{1.1}
\plotone{f3.png}
\caption{Optical colors ($g-r$ vs. $r-i$) of the individual sources (primary SDSS target: small black circles; companion: small red circles) within our dual quasar sample. The stellar locus \citep{Covey2007} is shown by the blue line, adjusted slightly to match the HSC photometric system. The three spectroscopically-confirmed dual quasars are shown as larger stars with each pair connected by a solid line. A larger open circle marks the companions in our sample having spectroscopy.}
\label{fig:colors}
\end{figure}
\begin{deluxetable}{lr}
\tabletypesize{\scriptsize}
\tablecaption{Sample selection\label{tab:sample}}
\tablehead{\colhead{Category}&\colhead{Number of objects}}
\startdata
SDSS DR14 quasar catalog&526357\\
Imaged by the HSC wide-area survey&34476\\
Dual quasar candidates with 0.6--4$^{\prime\prime}$ separation&452\\
"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" after visual inspection&425\\
"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" minus known lenses&421\\
"~~~~~~~~~~~~~~~~" with 5-band photometry available&401\\
"~~~~~~~~~~~~~~~~~~~~~~~~~" having flux ratio within 10:1&385\\
"~~~~~" with the companion having $g-r<1.0$&116
\enddata
\end{deluxetable}
\subsection{Stellar contamination}
\label{sec:stars}
We inspect the optical colors of the individual components of our dual quasar sample (Figure~\ref{fig:colors}) for those (401) having photometry in all five bands. Many of these objects lie close to the stellar locus \citep{Covey2007}, suggesting that foreground stars are a major contaminant. In an attempt to remove some of the contaminating stars, we select the 116 dual candidates in which the companion to the target quasar has $g - r < 1$. However, this color cut is not definitive; massive stars can be this blue, and dust-reddened quasars can be redder; thus followup spectroscopy is needed to confirm candidates as dual quasars. In Section~\ref{sec:dualfrac}, we implement this simple color selection in a preliminary assessment of the dual quasar fraction. In future work, we will explore more rigorous methods to remove stars, such as by using the distance to the stellar locus.
\section{Spectroscopic followup}
\label{sec:spectroscopy}
The presence of a pair of close point sources does not prove that a given object is a dual quasar. It could be a single quasar gravitationally lensed by a foreground galaxy, or the chance projection of a star, a compact galaxy, or a quasar at a different redshift. An object can be confirmed as a dual quasar with spatially resolved spectroscopy of the two objects, which requires seeing good enough to separate the two components. Therefore, we have acquired spectroscopic observations of dual quasar candidates in 2019 using Keck-I, Subaru and Gemini. Here, we describe the initial Keck-I/LRIS and Gemini-NIFS spectroscopic observations used to confirm our first dual quasar candidates (Figure~\ref{fig:sample}). A subsequent study (Tang et al. in preparation) will present in detail the spectroscopic programs with Subaru/FOCAS and Gemini GMOS-N.
\begin{figure}
\epsscale{1.1}
\plotone{f4.png}
\caption{Spatial extraction windows for Keck spectroscopy of the three confirmed dual quasar pairs ($top$ SDSSJ1214+0102; $middle$: SDSSJ0847-0013, $bottom$: SDSSJ1416+0033). Images on the left are 2D cutouts surrounding the Mg II emission line. The x-axis is the spatial dimension and the y-axis is the spectral dimension. These images are then collapsed along the spectral dimension to illustrate their spatial profiles (right panels) where the grey shaded regions indicate the spatial range over which each quasar spectrum was extracted. While the two components of SDSSJ1214+0102 and SDSS1416+0033 are blended, the wings of the profile avoid contamination between the two components, allowing us to classify each as a different quasar.}
\label{fig:spatial_profiles}
\end{figure}
\begin{figure}
\epsscale{0.95}
\hskip -1.0cm
\plotone{f5.png}
\caption{Placement of the Keck-I/LRIS slits as shown by the green rectangular regions with the red and blue boxes indicating the regions from which the spectra were extracted. For SDSSJ0847-0013 and SDSS J1416+0033, spectra are extracted from regions (A and B) offset from the cores to minimize blending since the spatial resolution ($\sim0\farcs5$) achieved with Keck was not adequate to fully isolate the two nuclei separated by $0\farcs67$. The circles in the lower right corner of each panel indicate the size (i.e., FWHM) of the PSF for the lowest resolution of the HSC images used to construct the color image.}
\label{fig:slit_placement}
\end{figure}
\begin{figure*}
\epsscale{0.75}
\plotone{f6.png}
\caption{Optical spectroscopic confirmation of three dual quasar systems with Keck-I/LRIS. (Top) SDSS J1214+0102, a dual broad-line (type 1--1) quasar at \hbox{$z = 0.4927$} based on the detection of Mg II and H$\beta$ in each component. (Middle) SDSS J0847-0013, a second dual broad-line (type 1--1) quasar (\hbox{$z = 0.6269$}) based on the detection of Mg II and H$\beta$ in each component. (Bottom) SDSS J1416+0033, a dual quasar system at \hbox{z = 0.4336} having an unobscured quasar associated with component B based on the detection of broad Mg II and component A is a quasar as indicated by the strong [NeV] emission. The green vertical line indicates the centroid of [OIII]$\lambda$5007 (rest-frame wavelength) for one of the components of each dual quasar system, highlighting the velocity offsets of the narrow emission lines.
}
\label{fig:spectra}
\end{figure*}
\subsection{Keck-I/LRIS}
We observed eight candidates under suitable conditions with the Low Resolution Imaging Spectrometer (LRIS; \citealt{Oke1995}) on the Keck-I telescope on January 10, 2019. These candidates were selected to be at $z \lesssim 1$, have flux ratios of 3:1 or smaller, and be observable at either the beginning or end of the nights to not conflict with targets for the main program. The flux ratio cut was chosen so that both components would be bright enough to acquire spectra with sufficient signal-to-noise ratio in a 10--15 minute exposure. The 600/7500 grism was used, giving spectral coverage of 3200--5500 \AA~(blue channel) and 5500--8500 \AA~(red channel). Standard stars were observed for flux calibration.
The spectra were taken under a range of seeing conditions ($0.4 - 1\farcs0$). In five cases, the two components could be cleanly separated while in others significant blending occurred (see Figures~\ref{fig:spatial_profiles} and~\ref{fig:slit_placement}). While we were not able to separate the emission into two components for all candidates, we could extract spectra from the wings of the spatial profile in some cases, allowing us to classify each component.
Out of the eight candidates observed with Keck-I, three (SDSSJ0847-0013, SDSSJ1214+0102 and SDSSJ1416+0033) are confirmed as pairs of broad-line quasars (Figure~\ref{fig:spectra}). SDSSJ0847-0013 was previously identified \citep{Inada2008} while the other two are newly discovered dual quasar systems. The other candidates include one gravitationally-lensed quasar at $z = 1.097$ (Jaelani et al. in preparation), two quasar--star pairs, one triple quasar--star--galaxy system, and one unclassified object due to low signal-to-noise.
\subsection{Gemini-NIFS}
SDSS J1416+0033 (Fig.~\ref{fig:decomp_example}), whose Keck spectrum, shown in Figure~\ref{fig:spectra}, was compelling but not definitive evidence of being a dual quasar, was observed during Director's Discretionary Time (GN-2019A-DD-102) on April 24, 2019 with Gemini North's Near-Infrared Integral Field Spectrometer (NIFS; \citealt{McGregor2003}). In seeing-limited mode, we targeted the H$\alpha$+[NII] spectral region in the z-band to identify emission from the broad line region of each nucleus separately (Figure~\ref{fig:nifs}). The on-source exposure time was 80 min. The wavelength solution was based on an Ar/Xe arc lamp exposure. A dome flat exposure was used to flat-field the image while a single standard star observation allowed us to remove telluric absorption and flux calibrate the spectra. The NIFS data reduction was accomplished following the standard procedure, including the trimming of the images, flat-fielding, cosmic ray rejection, sky subtraction, wavelength and s-distortion calibrations, telluric absorption cancellation, and flux calibration. The construction of the data cubes for individual exposures was done using an angular sampling of 0\farcs05$\times$0\farcs05, and mosaicing of the individual data cubes using as a reference the position of the brightest nucleus (for details see \citet{Riffel2008}). Following the procedures described in \citet{Menezes2014}, we resampled the final data cube to 0\farcs025 width spaxels, applied a Butterworth spatial band-pass filter to remove high-frequency noise, executed a Richardson-Lucy deconvolution using the flux distributions of the telluric standard star as the point-spread function,
and finally we resampled the data cube back to the 0\farcs05 width spaxels.
\begin{figure}
\epsscale{0.75}
\plotone{f7a.png}
\epsscale{0.85}
\plotone{f7b.png}
\caption{Gemini/NIFS $z$-band IFU observations of the central region of SDSS J1416+0033. An image of a spectral region containing H$\alpha$+[NII] is shown on top. North is up and East is to the left. The vertical stripes are likely due to scattered light along the NIFS slices and do not impact the results of this work. The spectrum of each nucleus is displayed in the middle and bottom panels with arbitrary flux units of ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$. The spectral features have been modeled with Gaussian components as shown in color. The continuum fit is displayed by a dashed black line. The sum of all model components is given in blue.}
\label{fig:nifs}
\end{figure}
\begin{deluxetable}{llll}
\tabletypesize{\scriptsize}
\tablecaption{Spectroscopically-confirmed dual quasars\label{tab:specsample}}
\tablehead{\colhead{ID}&\colhead{RA}&\colhead{DEC}&\colhead{Redshift}\\
&\colhead{(J2000)}&\colhead{(J2000)}}
\startdata
SDSS~J121405.12+010205.1&12:14:05.13& +01:02:05.0&0.4927\\
SDSS~J084710.40-001302.6&08:47:10.41&-00:13:02.5&0.6269\\
SDSS~J141637.44+003352.2 &14:16:37.46& +00:33:52.2 &0.4336
\enddata
\end{deluxetable}
\section{Results}
\label{sec:results}
We confirm three candidates as dual quasars at $z < 1$ with spectroscopic observations using Keck-I/LRIS and Gemini-N/NIFS (Table~\ref{tab:specsample}). In Figures~\ref{fig:decomp_example},~\ref{fig:1214decomp} and~\ref{fig:0847decomp}, we show the HSC images of the three dual quasars. HSC spatially separates the individual components cleanly in each case. After subtracting the point source components of the model, the host galaxies show tidal features and asymmetries indicative of a merger. These dual quasar systems increase the number of known galaxy mergers in which the individual nuclei are at close projected separations (4 -- 24 kpc) and both SMBHs are accreting at quasar-like luminosities: the primary target has $L_{bol}>10^{45}$ ergs s$^{-1}$ (Figure~\ref{fig:sample}) and the companion is more than one-tenth as luminous as the primary. At these separations, most studies have reported dual AGNs at lower luminosities ($L_{bol} \lesssim 10^{44}$ ergs s$^{-1}$; \citealt{Liu2013,Comerford2015,Hou2019,Pfeifle2019}), with the majority well below this limit. Here, we describe these three examples while highlighting some common features (e.g., relatively low [OIII]$\lambda$5007 velocity offsets). We further derive the stellar masses of the host galaxies from de-blended HSC photometry using CIGALE \citep{Boquien2019}, a delayed star-formation history, and a Chabrier initial mass function \citep{Chabrier2003}. While the individual black hole mass estimates will be presented elsewhere (Tang et al. in preparation), we do mention the range of black hole mass spanned by the sample in Section~\ref{sec:dualfrac}. We order the three examples by decreasing projected physical separation, possibly representing different stages in a merger sequence of galaxies and their SMBHs.
\subsection{SDSS J1214+0102}
This system is a newly discovered dual quasar at \hbox{$z = 0.4927$} with a projected separation of $2\farcs2$, corresponding to a distance of 13.2 kpc. As shown in Figure~\ref{fig:1214decomp} (top right panel), there is strong evidence for an underlying massive host galaxy (Table~\ref{tab:decomp}) associated with each quasar and an additional companion to the northwest. Thus this system is likely in the process of merging. While the two main components of SDSS J1214+0102 each have broad H$\beta$ and MgII emission (Figure~\ref{fig:spectra}; top row), the line widths are different (MgII full width at half maximum of 4400 (A) and 7000 (B) km s$^{-1}$). Furthermore, there is a velocity offset of 442 km s$^{-1}$ between the two components in the [OIII]$\lambda$5007 emission line, which traces the kinematics of the warm ionized gas on the larger scale of the host galaxy. Thus, this system is not two images of the same quasar in a gravitational lens. This velocity offset is too small for the pair to be identified as two separate systems in the SDSS joint spectrum of this object, and the system was thus not found in searches of quasars with double-peaked [OIII] lines \citep{Liu2011,Comerford2012}. We conclude that the three galaxy components, with a stellar mass ratio of 2.6:1.3:1 (Table~\ref{tab:decomp}), are undergoing a major merger. The two most massive galaxies have SMBHs shining as quasars. The system is at an intermediate stage, with the three galaxies distinguishable, but will presumably merge into a single galaxy in much less than a Hubble time.
\subsection{SDSS J0847-0013}
HSC imaging (Fig.~\ref{fig:0847decomp}) shows a single galaxy (right panel) with two bright nuclei separated by 1$^{\prime\prime}$ (left panel), thus a projected separation of 6.8 kpc. The single host galaxy has a stellar mass of $(8.5\pm4.6)\times10^{10}$ M$_{\odot}$ (Table~\ref{tab:decomp}) based on a S\'{e}rsic model fit. In addition, there is diffuse emission on scales of a few arcseconds, faint tidal features to the north, and four nearby faint companion galaxies; this system is clearly undergoing a merger. In fact, SDSS J0847-0013 was previously confirmed as a dual quasar in a search for lensed quasars \citep{Inada2008}, although no host galaxy was detected in the original discovery images. The published spectra show broad MgII emission lines in each component with different line profiles, thus ruling out the possibility that this system is a lensed quasar. Our Keck-I/LRIS spectroscopy (Fig.~\ref{fig:spectra}; middle panels) confirms the presence of the two dissimilar broad MgII profiles. The Keck spectra include H$\beta$ and [OIII]$\lambda$5007 as well. Both H$\beta$ lines are broad and are similar in width to the MgII lines. The [OIII] lines differ in velocity along the line of sight by 194 km s$^{-1}$, too small for a spatially unresolved spectrum to show a double-peaked [OIII] profile such as that seen in SDSS J171544.05+600835.7 \citep{Comerford2011}. This small offset is a common characteristic of all three dual quasars in the sample presented here.
\begin{figure}
\epsscale{1.2}
\plotone{f8.jpeg}
\caption{SDSS J1214+0102: 2D decomposition of the HSC $i$-band image with panels as described in Figure~\ref{fig:decomp_example}. The spatial resolution is $0\farcs52$. In addition, we show the surface brightness profile and the contribution of the two quasars, host galaxy, and neighboring galaxies.}
\label{fig:1214decomp}
\end{figure}
\begin{figure}
\epsscale{1.18}
\plotone{f9.jpeg}
\caption{SDSS J0847-0013: 2D decomposition of the HSC $i$-band image with panels as described in Figures~\ref{fig:decomp_example} and~\ref{fig:1214decomp}. The spatial resolution is $0\farcs61$.}
\label{fig:0847decomp}
\end{figure}
\subsection{SDSS J1416+0033}
As shown in Figure~\ref{fig:decomp_example}, this galaxy at $z = 0.4336$ has undergone a major merger as indicated by tidal structures and a double nucleus. The HSC color image (Fig.~\ref{fig:spectra}, bottom panel) shows two nuclei, one blue and the other red (Fig.~\ref{fig:colors}), separated by only $0\farcs67$, corresponding to a distance of 3.9 kpc. Based on a S\'{e}rsic model fit, the single host galaxy has a stellar mass of $(1.61\pm0.05)\times10^{11}$ M$_{\odot}$ (Table~\ref{tab:decomp}). At larger separations, there are nearby galaxies with photometric redshifts consistent with J1416+0033 being in a galaxy group.
Since the optical emission from the two nuclei seen in the Keck spectra (Fig.~\ref{fig:spatial_profiles}, bottom panels) is significantly blended, the regions from which the spectra were extracted were chosen to be offset from each other (Fig.~\ref{fig:spectra}, bottom panel) to minimize contaminating light from the other nucleus. This method is effective due to the acceptable seeing conditions ($\sim0\farcs5$), the brightness of the two quasars, and the relatively high signal-to-noise of the spectra in the outer wings of the spatial profiles.
The Keck spectra reveal broad MgII and H$\beta$ emission lines (Fig.~\ref{fig:spectra}) associated with the primary blue nucleus (component B). The red nucleus (component A) shows weaker evidence in the Keck spectra for broad emission in MgII and H$\beta$, but displays prominent narrow lines of high-ionization species, [OIII]$\lambda4959, 5007$ and [NeV], likely indicative of photoionization from a quasar, particularly given the presence of strong [NeV] emission \citep{Gilli2010,Mignoli2013,Yuan2016}. Thus, this component is moderately obscured by dust; the [NeV] arises from the narrow-line region of a quasar, photo-ionized by the accretion disk around the central SMBH. Here, the [OIII]$\lambda$5007 lines in the two components are separated by 113 km s$^{-1}$, again too small a velocity offset to produce two distinct peaks in a blended spectrum.
The coverage of the H$\alpha$+[NII] spectral region with Gemini/NIFS appears to confirm the nature of the red nucleus (component A, shown in the bottom panel of Figure~\ref{fig:slit_placement}). In Figure~\ref{fig:nifs}, we show the IFU observation with a spectral extraction of the two nuclei, which are now cleanly separated spatially with an effective resolution of $0\farcs28$. In the middle panel, the spectrum of the fainter nucleus (component A) to the northwest has a clear detection of H$\alpha$ and [NII]. From Gaussian fits to the emission-line complex, the ratios of the narrow components of [NII]$\lambda6584$ to H$\alpha$, and [OIII]$\lambda5007$ to H$\beta$ (Figure~\ref{fig:optfit}), suggest a hard photo-ionizing source. More compellingly, there is evidence for a broad component that is shifted bluewards by 1608 km s$^{-1}$ from the narrower H$\alpha$ component. Based on the broad lines in each component (MgII, H$\beta$, and H$\alpha$) and the narrow emission-line ratios [NII]$\lambda6584$/H$\alpha$ (Fig.~\ref{fig:nifs}) and [OIII]$\lambda5007$/H$\beta$ (Figure~\ref{fig:optfit}), including those of the brighter component B (Fig.~\ref{fig:spectra}, bottom panel), we conclude that this system is a dual quasar in which one component is unobscured while the other is moderately obscured, but not enough to completely hide the broad-line region. Thus, given the hint of a broad component to the weak H$\beta$ line shown in Figure~\ref{fig:spectra}, we classify this moderately obscured component as a type 1.5 quasar. We note that the velocity offsets between the broad emission lines (MgII, H$\beta$ and H$\alpha$) are not yet understood and will be further investigated.
We cannot comment yet on how common pairs of obscured and unobscured quasars may be in the dual quasar population based on this one example. Such pairs are likely to be prevalent within the dual quasar population given reports of similar cases confirmed through X-ray observations \citep{Ellison2017,DeRosa2018,Hou2020}, and studies of AGN obscuration in galaxy mergers \cite[e.g.,][]{Ricci2017}. In our case, while some of our dual candidates include a red component, many may be chance projections with foreground stars (Fig.~\ref{fig:colors}). However, those with extended optical emission and/or anomalous colors likely require X-ray followup or further long-slit optical or near-infrared spectroscopy to secure their nature as dual quasars.
\begin{figure}
\epsscale{0.9}
\plotone{f10.png}
\caption{Model Gaussian fits to the [OIII]$\lambda4959, 5007$ and H$\beta$ emission lines in component A of SDSS J1416+0033. The model components are the same as in Figure~\ref{fig:nifs}.}
\label{fig:optfit}
\end{figure}
\begin{deluxetable*}{llllll}
\tabletypesize{\scriptsize}
\tablecaption{2D imaging analysis of dual quasars and their hosts\label{tab:decomp}}
\tablehead{\colhead{ID}&\colhead{Quasar offsets\tablenotemark{a}}&\colhead{Separation\tablenotemark{b}}&\colhead{Quasar mag\tablenotemark{c}}&\colhead{Host mag\tablenotemark{c}}&\colhead{Host stellar mass}\\
&\colhead{(RA: \arcsec, Dec: \arcsec)}&\colhead{(\arcsec; kpc)}&\colhead{(HSC: $i$-band)}&\colhead{(HSC: $i$-band)}&\colhead{($\times10^{10}$ M$_{\odot}$)}}
\startdata
SDSS~J1214+0102&0.12, -0.26; -0.20, 1.9&2.2 (13.2)&20.22, 20.08&19.98, 20.40 &$7.6\pm2.8$, $3.8\pm1.3$\\
SDSS~J0847-0013&0.06, 0.06; 0.59, -0.73&1.0 (6.8)&19.15, 19.58&19.57&$8.5\pm4.6$\\
SDSS~J1416+0033 &0.38, -0.22; -0.22, -0.00&0.7 (3.9)&19.97, 19.94&18.80&$16.1\pm0.5$
\enddata
\tablenotetext{a}{These are spatial offsets relative to the position given in Table~\ref{tab:specsample}.}
\tablenotetext{b}{Projected separation given in arcsec and kiloparsecs.}
\tablenotetext{c}{Magnitudes resulting from the decomposition of the 2D HSC spatial profiles. Statistical errors are at the 0.01 mag level. Systematic uncertainties due to PSF mismatch have not been assessed.}
\end{deluxetable*}
\section{Discussion: preliminary insights on the dual fraction of quasars}
\label{sec:dualfrac}
With a significant sample of dual quasar candidates drawn from a large parent population of quasars, we explore the frequency of dual quasar activity and compare it with a simulated sample of dual quasars from Horizon-AGN, after roughly modeling the sample selection effects. Since follow-up spectroscopic confirmation is still in an early stage, we consider this estimate to have a substantial level of uncertainty. However, we report our current assessment of the dual quasar fraction here, since our candidates already provide interesting constraints and the observational results in the literature are themselves highly uncertain. Current observational estimates of the dual quasar rate at these close separations are mainly limited to the low redshift ($z<0.1$) Universe \citep[e.g.,][]{Liu2011,Koss2012}.
We measure a dual quasar fraction out to $z\sim3.5$ (i.e., across $\sim$12 billion years of cosmic time). The observed dual quasar fraction is the ratio of the number of objects meeting our criteria of dual quasar candidates (Section~\ref{sec:selection}) to the number of SDSS DR14 quasars with imaging from the HSC SSP survey. Again, we consider a quasar to be part of a dual quasar system if it has a nearby companion at least 10\% as bright. For this exercise, we exclude cases where the companion has a red color ($g - r > 1$), solely to remove many stellar contaminants (Section~\ref{sec:stars}). The projected physical separation is restricted to be between 5 and 30 kpc. While our initial selection of dual quasar candidates included objects with separations between $0\farcs6$ and $4\farcs0$, we are likely incomplete below $1\farcs2$ (twice the median FWHM of the $i$-band PSF; Figure~\ref{fig:sample_sep}~top panel). We thus limit ourselves to the 96 candidates (dual quasar systems) with separations between $1\farcs2$ and 4$^{\prime\prime}$.
There is a multiplicative correction factor that we apply to the observed dual quasar fraction to account for selection effects. We are likely to be missing dual quasar candidates at the smaller separations ($1\farcs2 - 2\farcs5$), even though we increased our lower limit from $0\farcs6$ to $1\farcs2$. As seen in Figure~\ref{fig:sample_sep} (top panel), there is a decline in the number of candidates with separations below $2\farcs5$. While we do not know the true distribution of the projected angular separations of dual quasars, we assume it to be flat over the range of separations considered here, normalized to the observed distribution at its peak near $2\farcs5$. Under this assumption, the estimated completeness is 0.58; hence, the observed dual quasar fraction is increased by a factor of $1/0.58 \approx 1.72$.
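As a rough illustration of this completeness estimate (not the exact procedure used here; the data and variable names below are placeholders), the correction can be computed from the observed separation histogram under the flat-distribution assumption as in the following sketch:
\begin{verbatim}
import numpy as np

# Placeholder separations (arcsec), purely illustrative -- not our catalog.
rng = np.random.default_rng(0)
seps = rng.triangular(1.2, 2.5, 4.0, size=96)

lo, hi, peak = 1.2, 4.0, 2.5       # selection limits and observed peak (arcsec)
bins = np.arange(lo, hi + 0.1, 0.1)
counts, edges = np.histogram(seps, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])

# Assume the true distribution is flat at the level observed near the peak;
# completeness = observed counts / counts expected from that flat distribution.
flat_level = counts[np.argmin(np.abs(centers - peak))]
completeness = counts.sum() / (flat_level * len(counts))  # quoted as ~0.58 above
correction = 1.0 / completeness                           # ~1.72
print(completeness, correction)
\end{verbatim}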
\begin{figure}
\epsscale{2}
\plottwo{f11a.png}{f11b.png}
\caption{(a) Angular separation in arcseconds of the 116 dual quasar candidates with companions having a blue color $g-r<1$. The projected physical distance (kpc) is shown in panel (b), assuming the components of each dual quasar candidate are at the same redshift. The distributions are based on observed quantities thus impacted by selection effects, particularly at smaller ($\lesssim$2$^{\prime\prime}$) separations, and dependent on the redshift distribution in the case of the physical separation.}
\label{fig:sample_sep}
\end{figure}
We then multiply the dual quasar fraction by the spectroscopic success rate based on initial results from our observing program with Keck and Gemini. With the requirement that the companion have $g-r<1$, there are three spectroscopically-confirmed dual quasar systems out of six candidates (Section~\ref{sec:spectroscopy}). The observed dual quasar fraction is then multiplied by a factor of 0.5. This fraction and its uncertainty are applied equally across all redshift bins. Therefore, there remains much uncertainty due to the limited sample of candidates with spectroscopy, especially at $z>1$.
In Figure~\ref{fig:dualfraction}~$top~panel$, we plot the fraction of quasars that are dual with projected separation between 5--30 kpc. The statistical uncertainty of the dual quasar fraction is calculated in every redshift bin based on Poisson statistics. It is dominated by the total number of confirmed dual AGN and the number of dual AGN candidates per bin. Overall, we find a dual fraction of $0.26\pm0.18\%$, with no evidence of a dependence on redshift. If we limit ourselves to objects in which the two components have a flux ratio of 3:1 or less, the dual fraction is $0.16\pm0.12\%$, about a factor of 1.6 less, again showing no evidence for redshift evolution. Given the errors based on Poisson statistics and the considerable uncertainty in our spectroscopic success rate, particularly as a function of redshift, we cannot yet rule out the existence of some mild evolution. However, since we expect the spectroscopic success rate at higher redshift to drop given the greater likelihood of contamination due to projection effects and lensing, it seems unlikely that the fraction of dual quasars rises strongly with redshift, in contrast with expectations from simulations (see below).
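Schematically, letting $N_{\rm pair}$ and $N_{\rm QSO}$ denote, in a given redshift bin, the number of dual quasar candidates satisfying the cuts above and the number of SDSS DR14 quasars with HSC imaging (this notation is introduced here only for convenience), the corrected fraction plotted in Figure~\ref{fig:dualfraction} is of the form
\begin{align*}
f_{\rm dual} \simeq \frac{N_{\rm pair}}{N_{\rm QSO}}\times\frac{1}{0.58}\times 0.5,
\end{align*}
with a Poisson uncertainty driven by $N_{\rm pair}$ and by the number of spectroscopically confirmed pairs, to which the uncertainty on the 0.5 success rate is added.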
Our dual quasar fraction is a factor of three higher than that measured using dual quasar samples found in lens searches \citep{Kayo2012}. The difference may be because we relaxed the color criteria, allowing redder quasars (although still bluer than the stellar locus; Figure~\ref{fig:colors}), or because there are still foreground stars contaminating the sample. At $z < 1$, we find a dual quasar fraction considerably lower, by about a factor of four, than previously published observational results \citep{Liu2011,Koss2012}. This is also likely due to the varying selection of the quasar samples, which is highly impacted by luminosity and obscuration. Even so, a larger sample of spectroscopically confirmed dual quasars is needed at all redshifts. In particular, a better understanding of our selection function is needed since the number of pairs in our sample declines below $\sim2^{\prime\prime}$ (Figure~\ref{fig:sample_sep}), a separation larger than expected given the spatial resolution of the imaging in the $i$-band.
\begin{figure}
\epsscale{1.1}
\plotone{f12.pdf}
\caption{Dual quasar fraction with projected separation between 5 and 30 kpc as a function of redshift. $Top$ Observational results from our Subaru/HSC program are displayed with the colored region indicating a 1$\sigma$ confidence interval based on Poisson statistics and the uncertainty on the spectroscopic success rate. The blue region is based on our sample having a flux ratio within 10:1 while the maroon region shows only those having a flux ratio within 3:1. For comparison, we show other published observational studies (thin light-blue square, \citealt{Liu2011}; purple circle, \citealt{Koss2012}) that only exist for the local Universe ($z < 0.1$). $Bottom$ Results from cosmological hydrodynamic simulations based on real-space separations. Horizon-AGN \citep{Volonteri2016} is shown with a quasar selection matched to our observed sample (M. Volonteri, private comm.) thus labelled Horizon-AGN 'select'. For further comparison, we indicate the dual AGN fraction from the Magneticum \citep{Steinborn2016} and EAGLE \citep{Rosas-Guevara2019} simulations having a lower threshold in AGN luminosity that can lead to more pronounced differences in the dual AGN fraction, particularly at higher redshifts. The Magneticum, EAGLE and Horizon-AGN curves are based on the matched results shown in Figure 6 of \citet{DeRosa2019}.}
\label{fig:dualfraction}
\end{figure}
\subsection{Comparison with the Horizon-AGN and other simulations}
Cosmological hydrodynamic simulations now reach a large enough volume to allow a comparison to observational studies of luminous dual quasars. Here, we qualitatively compare the dual quasar fraction from observations and cosmological hydrodynamic simulations, particularly Horizon-AGN \citep{Volonteri2016}. The Horizon-AGN simulation \citep{Dubois2014} is based on the adaptive mesh refinement code RAMSES \citep{Teyssier2002} using a standard $\Lambda$CDM cosmology. The simulation box is 100 $h^{-1}$ Mpc on a side, contains 1024$^3$ dark matter particles, and has an effective spatial resolution of 1 kpc.
Before proceeding, it is important to note that the SDSS quasar sample, used as our primary selection, is dominated by optically unobscured objects selected using a variety of color-based algorithms, and has a complex dependence on redshift and luminosity. This fact will likely contribute to differences between the observed and simulated samples. Not surprisingly, comparisons between simulations also require a careful matching of parameters, as extensively discussed in \citet{DeRosa2019}.
To minimize such selection effects, we attempt to match the quasar properties of our sample in the Horizon-AGN simulation. We place a lower limit on the bolometric luminosity of the primary SDSS quasar at \hbox{log $L_\mathrm{primary} > 45.3$}. This limit is indicated in Figure~\ref{fig:sample}. We then require the companion to be within a factor of 10:1; this amounts to a luminosity limit of \hbox{log $L_\mathrm{secondary} > 44.3$}. A sample of simulated dual quasars (M.\ Volonteri, priv.\ comm.) is constructed from Horizon-AGN meeting these luminosity thresholds and having real-space separations of 5 -- 30 kpc. While it is beyond the scope of this study to fully assess the differences between projected and real-space separations, projections onto different axes within the simulation, evaluated as a simple check by assuming that the two quasars are at the same redshift, do not greatly change the results. We will more accurately model the selection in the simulations in future papers, particularly after we have obtained spectroscopic confirmation of a larger sample of dual quasar candidates. To avoid any confusion, we chose to plot the observations and simulations separately in Figure~\ref{fig:dualfraction}.
As pointed out in \citet{DeRosa2019}, these comparisons should also be matched in black hole mass and the stellar mass of the underlying host galaxies. Therefore, we place lower limits on these quantities for dual quasars in the simulation based on the measurements from our three confirmed dual quasars. From model fits to the broad emission lines (Tang et al. in preparation), we report that their black hole masses fall between 10$^8$ and $10^9$ M$_{\odot}$. Therefore, we place a lower limit of 10$^8$ M$_{\odot}$ on both black holes in a dual system. Based on the stellar mass measurements of these dual quasars (Table~\ref{tab:decomp}), we impose the additional constraint that $M_{\rm stellar}>10^{10}$ M$_{\odot}$ for both galaxies. With these cuts, we find 10 dual quasars in the Horizon-AGN simulation volume with $z \leq 3$. We label this matched simulated sample as Horizon-AGN 'select'.
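In summary, and writing $\Delta r$ for the real-space separation (the notation is ours), the Horizon-AGN 'select' sample consists of simulated pairs at $z\leq3$ satisfying
\begin{align*}
&\log L_{\mathrm{primary}} > 45.3,\qquad \log L_{\mathrm{secondary}} > 44.3,\qquad 5~{\rm kpc}\leq \Delta r \leq 30~{\rm kpc},\\
&M_{\rm BH}\geq10^{8}~M_{\odot} \quad{\rm and}\quad M_{\rm stellar}>10^{10}~M_{\odot}\quad {\rm for\ both\ members},
\end{align*}
with the bolometric luminosities in ergs s$^{-1}$ and the secondary limit corresponding to a flux ratio within 10:1.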
As shown in Figure~\ref{fig:dualfraction}~$bottom~panel$, we find that the expected rates of dual quasar activity in the Horizon-AGN 'select' sample are about a factor of five higher than the observed sample of SDSS quasars with HSC. However, the errors on the simulated data are large, and thus the discrepancy is between 1 and 2$\sigma$, particularly at $z < 2$. As mentioned above, the differences in normalization are expected since it is challenging to exactly match the selection between the full DR14 SDSS quasars catalog and quasars in the simulation. Putting aside the differences in rates, the simulation exhibits a lack of strong evolution similar to our observations.
We also provide the predictions of the dual quasar fraction from other cosmological hydrodynamic simulations \citep{Rosas-Guevara2019,Steinborn2016}. Unlike for the Horizon-AGN sample, we have not imposed our selection cuts on these samples; thus, these comparisons mainly illustrate the impact of selection when comparing with the lower luminosity AGNs that dominate these simulated samples. The EAGLE \citep{Rosas-Guevara2019} simulation has a rate higher than our observations by an order of magnitude (Figure~\ref{fig:dualfraction}), particularly at $z>1$, and exhibits an increase in the dual quasar fraction with redshift that is not seen in our observed dual quasar fraction or in the Horizon-AGN 'select' sample, as mentioned above. Much of the discrepancy is likely due to the different luminosity thresholds: in the EAGLE simulation, AGN are defined to be objects with hard (2--10 keV) X-ray luminosity above $10^{42}$ erg s$^{-1}$, a limit substantially lower than that of our SDSS sample. The Magneticum Pathfinder Simulations \citep{Steinborn2016} also show rates above our observed value at $z\sim2$, which is likely due to the different luminosity thresholds. To conclude, a higher-accuracy comparison between observed and simulated samples requires larger number statistics for these luminous dual quasars over a parameter space that includes redshift, projected separation, color, black hole mass, stellar mass, and Eddington accretion rate.
\section{Brief summary and concluding remarks}
We presented first results from an ongoing program to identify dual supermassive black holes, both being in a luminous quasar phase, using the Subaru HSC SSP. The exquisite quality of the deep and wide imaging of HSC in five optical bands facilitates the detection of dual quasars down to separations of a few kiloparsecs. We presented our selection of candidates from the known SDSS quasar population up to $z\sim4.5$ with consideration of the PSF and de-blended optical photometry, with the latter important in providing the color information needed to remove chance projections due to stars. Using spectroscopy from Keck and Gemini, we have confirmed three dual quasars, two of which were previously unknown. In the three cases presented here, the redshifts of the two components are so close to each other that spatially unresolved spectroscopy of the pair (e.g., from SDSS fiber spectroscopy) would not show a double-peaked [OIII]$\lambda$5007 line. In one case, we have found a pair of quasars in which one is unobscured while its close neighbor, just $\sim$4 kpc away, is moderately reddened. With the sample in hand, we have made a first attempt to quantify the dual fraction of quasars. The dual fraction of $0.26\pm0.18\%$ shows no evidence for a dependence on redshift. This may indicate that the triggering of simultaneous dual quasar activity is not any different at earlier epochs, when the merger rates \citep[e.g., ][]{Lackner2014} and gas fractions \citep[e.g., ][]{Tacconi2020} of galaxies were significantly higher than today. Thus, the rates of dual quasar activity may simply be a result of the stochastic behavior of the instantaneous mass accretion rate onto the individual SMBHs in galaxy mergers \citep{Capelo2017,Goulding2018}. However, any firm conclusions require a larger spectroscopic sample and a better understanding of our selection function. This work demonstrates the improvements to be made in the study of dual quasars with the larger samples and deep imaging to come from wide-area imaging surveys with HSC, LSST, Euclid, and WFIRST. In addition to the luminous SDSS quasars, the optical imaging is deep enough to begin identifying dual quasars where both components are less luminous than those presented in this study.
\acknowledgments
We thank Marta Volonteri for providing the sample of dual quasars from the Horizon-AGN simulation and valuable guidance on the comparison between the observed and simulated data. Further input from Hugo Pfister was useful in this respect. We thank the anonymous referee for their valuable comments that improved the paper. We also recognize contributions from Thaisa Storchi Bergmann, Nadia Zakamska, and Jonelle Walsh in the reduction of the Gemini/NIFS data. JDS is supported by the JSPS KAKENHI Grant Number JP18H01521, and the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. KGL is supported by the JSPS KAKENHI Grant Numbers JP18H05868 and JP19K14755. KI acknowledges support by the Spanish MICINN under grant PID2019-105510GB-C33. YM is supported by KAKENHI Grant Numbers 17H04831, 17KK0098, 19H00697, and 20H01953. R.A.R thanks partial financial support from Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (302280/2019-7) and Funda\c c\~ao de Amparo \`a pesquisa do Estado do Rio Grande do Sul (17/2551-0001144-9 and 16/2551-0000251-7).
\bibliographystyle{apj.bst}
\section{Introduction}
Let $(M,g)$ and $(M',g')$ be Riemannian manifolds and $\phi:(M,g)\to (M', g')$ be a smooth map. Then $\phi$ is said to be {\it harmonic} if the tension field $\tau(\phi)={\rm tr}_{g}(\nabla d\phi)$ vanishes or, equivalently, if $\phi$ is a critical point of the energy functional defined by
\begin{align*}
E(\phi)={1\over 2}\int_{M} | d\phi|^2\mu_{M},
\end{align*}
where $\mu_{M}$ is the volume element of $M$ \cite{ES}. In 2003, J. Konderak and R. Wolak introduced the notion of transversally harmonic maps between foliated Riemannian manifolds \cite{KW1}. Let $(M,g,\mathcal F)$ and $(M',g',\mathcal F')$ be foliated Riemannian manifolds and $\phi:(M,g,\mathcal F)\to (M', g',\mathcal F')$ be a smooth foliated map (i.e., $\phi$ is a smooth leaf-preserving map). Let $Q$ be the normal bundle of $\mathcal F$ and $d_{T}\phi=d\phi|_{Q}$. Then $\phi$ is said to be {\it transversally harmonic} if $\phi$ is a solution of the Euler-Lagrange equation $\tau_{b}(\phi)=0$, where $\tau_b(\phi)={\rm tr}_{Q}(\nabla_{\rm tr} d_T\phi)$ is the transversal tension field of $\phi$. Transversally harmonic maps on foliated Riemannian manifolds have been studied by many authors \cite{CZ,KW1,KW2,OSU}. However, a transversally harmonic map is not a critical point of the transversal energy functional \cite{JJ2} defined by
\begin{align*}
E_{B}(\phi)=\frac{1}{2}\int_{M} | d_T \phi|^2\mu_{M}.
\end{align*}
In 2013, S. Dragomir and A. Tommasoli \cite{DT} defined a new type of harmonic map, called an {\it $(\mathcal F,\mathcal F')$-harmonic map}, which is a critical point of the transversal energy functional $E_{B}$. The two definitions are equivalent when $\mathcal F$ is minimal. Analogously to $p$-harmonic maps on ordinary manifolds, we define $(\mathcal F,\mathcal F')_{p}$-harmonic maps, $p>1$, which generalize $(\mathcal F,\mathcal F')$-harmonic maps.
In fact, a smooth foliated map $\phi$ is said to be {\it $(\mathcal F,\mathcal F')_{p}$-harmonic} if $\phi$ is a critical point of the {\it transversal $p$-energy functional} defined by
\begin{align*}
E_{B,p}(\phi)={1\over p}\int_{M} | d_T \phi|^p\mu_{M}.
\end{align*}
Trivially, $(\mathcal F,\mathcal F')_{2}$-harmonic map is just $(\mathcal F,\mathcal F')$-harmonic map \cite{DT} and $(\mathcal F,\mathcal F')_{p}$-harmonic maps are $p$-harmonic maps for point foliations.
Similarly, we can define the {\it transversally $p$-harmonic map} as a solution of the Euler-Lagrange equation $\tau_{b,p}(\phi)=0$, where $\tau_{b,p}(\phi)= {\rm tr}_{Q}(\nabla_{\rm tr} |d_T\phi|^{p-2}d_T\phi)$ is the transversal $p$-tension field of $\phi$ (cf. (\ref{eq3-3})).
The two definitions are not equivalent if $\mathcal F$ is not minimal (cf. Theorem \ref{th3}), but if $\mathcal F$ is minimal, they coincide, and so both notions generalize $p$-harmonic maps on an ordinary manifold. In this paper, we study the Liouville type theorem for $(\mathcal F,\mathcal F')_p$-harmonic maps, not for transversally $p$-harmonic maps; the Liouville type theorem for transversally $2$-harmonic maps has been studied in \cite{FJ,JJ}. The Liouville type theorem on Riemannian manifolds has been studied by many researchers \cite{Jung1,mlj,NN,EN,sy,t,y}.
In particular, D.J. Moon et al. \cite{mlj} proved the Liouville type theorem for $p$-harmonic maps on a complete Riemannian manifold as follows.
\begin{thm} \cite{mlj} Let $(M,g)$ be a complete Riemannian manifold and let $(M',g')$ be a Riemannian manifold of non-positive sectional curvature. Assume that the Ricci curvature ${\rm Ric^M}$ of $M$ satisfies ${\rm Ric}^M \geq - {4(p-1)\over p^2 }\mu_0$ and $> - {4(p-1)\over p^2}\mu_0$ at some point, where $\mu_0$ is the infimum of the eigenvalues of the Laplacian acting on $L^2$-functions on $M$. Then any $p$-harmonic map $\phi:M\to M'$ of finite $p$-energy is constant.
\end{thm}
We generalize Theorem 1.1 to $(\mathcal F,\mathcal F')_p$-harmonic maps on foliated manifolds.
This paper is organized as follows. In Section 2, we review some basic facts on foliated Riemannian manifolds. In Section 3, we prove the first variational formula for the transversal $p$-energy functional (Theorem \ref{th3}). From the first variational formula, we see that a transversally $p$-harmonic map is not, in general, a critical point of the transversal $p$-energy functional. The second variational formula for $(\mathcal F,\mathcal F')_p$-harmonic maps is then given (Theorem \ref{th4}) and the stability of $(\mathcal F,\mathcal F')_p$-harmonic maps is studied.
In Section 4, we prove the generalized Weitzenb\"ock type formula for $(\mathcal F,\mathcal F')_p$-harmonic map (Corollary \ref{co3}). As an application of the Weitzenb\"ock type formula, we prove the Liouville type theorem for $(\mathcal F,\mathcal F')_{p}$-harmonic maps (Theorem \ref{th5}). Namely, let $\lambda_0$ be the infimum of the eigenvalues of the basic Laplacian acting on $L^2$-basic functions on $(M,\mathcal F)$.
\begin{thm} (cf. Theorem 4.5)
Let $(M,g,\mathcal F)$ be a complete foliated Riemannian manifold with a coclosed mean curvature form and compact leaves.
Let $(M',g',\mathcal F')$ be a foliated Riemannian manifold with non-positive transversal sectional curvature. Assume that the transversal Ricci curvature ${\rm Ric^{Q}}$ of $M$ satisfies ${\rm Ric^{Q}}\geq-\frac{4(p-1)}{p^{2}}\lambda_{0}$ and ${\rm Ric^{Q}}>-\frac{4(p-1)}{p^{2}}\lambda_{0}$ at some point $x_{0}$.
Then any $(\mathcal F,\mathcal F')_{p}$-harmonic map $\phi : (M,g,\mathcal F) \rightarrow (M', g',\mathcal F')$ of $E_{B,p}(\phi)<\infty$ is transversally constant.
\end{thm}
In particular, under the same assumptions on $(M,\mathcal F)$ as in Theorem 1.2, if ${\rm Ric}^Q \geq -{4(p-1)\over p^2}\lambda_0$ and ${\rm Ric}^Q> -{4(p-1)\over p^2}\lambda_0$ at some point, then for any $q$ with $2\leq q \leq p$, every $(\mathcal F,\mathcal F')_q$-harmonic map of finite transversal $q$-energy $E_{B,q}(\phi)$ is transversally constant (Corollary \ref{co5}).
\section{Preliminaries}
Let $(M,g,\mathcal F)$ be a foliated Riemannian
manifold with a foliation $\mathcal F$ of codimension $q$ and a bundle-like metric $g$ with respect to $\mathcal F$ \cite{Molino,Tond}. Let $TM$ be the tangent bundle of $M$, $T\mathcal F$
the tangent bundle of $\mathcal F$, and $Q=TM/T\mathcal F$ the
corresponding normal bundle of $\mathcal F$.
Let $g_Q$ be the holonomy invariant metric on
$Q$ induced by $g$.
This means that $L_Xg_Q=0$ for $X\in T\mathcal F$, where
$L_X$ is the transverse Lie derivative. We denote by $\nabla^Q$ the transverse Levi-Civita
connection on the normal bundle $Q$ \cite{Tond,Tond1}. The transversal curvature tensor $R^Q$ of $\nabla^Q\equiv\nabla$ is defined by $R^Q(X,Y)=[\nabla_X,\nabla_Y]-\nabla_{[X,Y]}$ for any $X,Y\in\Gamma TM$. Let $K^Q$ and ${\rm Ric}^Q $ be the transversal
sectional curvature and transversal Ricci operator with respect to $\nabla$, respectively. The foliation $\mathcal F$ is said to be {\it
minimal} if the mean curvature form $\kappa$ of $\mathcal F$ vanishes, that is, $\kappa=0$ \cite{Tond}.
Let $\Omega_B^r(\mathcal F)$ be the space of all {\it basic
$r$-forms}, i.e., $\omega\in\Omega_B^r(\mathcal F)$ if and only if
$i(X)\omega=0$ and $L_X\omega=0$ for any $X\in\Gamma T\mathcal F$, where $i(X)$ is the interior product. Then $\Omega^*(M)=\Omega_B^*(\mathcal F)\oplus \Omega_B^*(\mathcal F)^\perp$ \cite{Lop}. Let $\kappa_B$ be the basic part of $\kappa$. Then $\kappa_B$ is closed, i.e., $d\kappa_B=0$ \cite{Lop}.
Let $\bar *:\Omega_B^r(\mathcal F)\to \Omega_B^{q-r}(\mathcal F)$ be the star operator given by
\begin{align*}
\bar *\omega = (-1)^{(n-q)(q-r)} *(\omega\wedge\chi_{\mathcal F}),\quad \omega\in\Omega_B^r(\mathcal F),
\end{align*}
where $\chi_{\mathcal F}$ is the characteristic form of $\mathcal F$ and $*$ is the Hodge star operator associated to $g$. Let $\langle\cdot,\cdot\rangle$ be the pointwise inner product on $\Omega_B^r(\mathcal F)$, which is given by
\begin{align*}
\langle\omega_1,\omega_2\rangle \nu = \omega_1\wedge\bar * \omega_2,
\end{align*}
where $\nu$ is the transversal volume form such that $*\nu =\chi_{\mathcal F}$.
Let $\delta_B :\Omega_B^r (\mathcal F)\to \Omega_B^{r-1}(\mathcal F)$ be the operator defined by
\begin{align*}
\delta_B\omega = (-1)^{q(r+1)+1} \bar * (d_B-\kappa_B \wedge) \bar *\omega,
\end{align*}
where $d_B = d|_{\Omega_B^*(\mathcal F)}$. It is well known \cite{Park} that $\delta_B$ is the formal adjoint of $d_B$ with respect to the global inner product $\ll\cdot,\cdot\gg$, which is defined by
\begin{align}\label{2-1}
\ll \omega_1,\omega_2\gg =\int_M \langle\omega_1,\omega_2\rangle\mu_M,
\end{align}
where $\mu_M =\nu\wedge\chi_{\mathcal F}$ is the volume form.
The basic
Laplacian $\Delta_B$ acting on $\Omega_B^*(\mathcal F)$ is given by
\begin{equation*}
\Delta_B=d_B\delta_B+\delta_B d_B.
\end{equation*}
Let $\{E_a\}_{a=1,\cdots,q}$ be a local orthonormal basic frame on $Q$. Then $\delta_{B}$ is locally expressed by
\begin{equation}\label{2-2}
\delta_{B} = -\sum_a i(E_a) \nabla_{E_a} + i (\kappa_{B}^\sharp),
\end{equation}
where $(\cdot)^\sharp$ is the dual vector field of $(\cdot)$.
Let $Y$ be a transversal infinitesimal automorphism, i.e., $[Y,Z]\in \Gamma T\mathcal F$ for
all $Z\in \Gamma T\mathcal F$ \cite{Kamber2}. Let $\bar Y = \pi (Y)$.
Then we obtain the transversal divergence theorem
on a foliated Riemannian
manifold.
\begin{thm} \label{thm1-1} \cite{Yorozu}
Let $(M,g,\mathcal F)$ be a closed, oriented Riemannian manifold
with a transversally oriented foliation $\mathcal F$ and a
bundle-like metric $g$ with respect to $\mathcal F$. Then for a transversal infinitesimal automorphism $X$,
\begin{equation*}
\int_M \operatorname{div_\nabla}(\bar X) \mu_{M}
= \int_M g_Q(\bar X,\kappa_B^\sharp)\mu_{M},
\end{equation*}
where $\operatorname{div_\nabla} \bar{X}$
denotes the transversal divergence of $\bar{X}$ with respect to the
connection $\nabla$.
\end{thm}
Now we define the bundle map $A_Y:\Gamma Q\to \Gamma Q$ for any $Y\in TM$ by
\begin{align}\label{eq1-11}
A_Y s =L_Ys-\nabla_Ys,
\end{align}
where $L_Y s = \pi [Y,Y_s]$ for $\pi(Y_s)=s$. It is well known \cite{Kamber2} that for any infinitesimal automorphism $Y$
\begin{align*}
A_Y s = -\nabla_{Y_s}\bar Y,
\end{align*}
where $Y_s$ is the vector field such that $\pi(Y_s)=s$. So $A_Y$ depends only on $\bar Y=\pi(Y)$ and is a linear operator. Moreover, $A_Y$ extends in an obvious way to tensors of any type on $Q$ \cite{Kamber2}.
Since
$L_X\omega=\nabla_X\omega$ for any $X\in\Gamma T\mathcal F$, $A_Y$
preserves the basic forms.
Then we
have the generalized Weitzenb\"ock formula on $\Omega_B^*(\mathcal F)$ \cite{Jung}: for any $\omega\in\Omega_B^r(\mathcal F),$
\begin{align}\label{2-3}
\Delta_B \omega = \nabla_{\rm tr}^*\nabla_{\rm tr}\omega +
F(\omega)+A_{\kappa_B^\sharp}\omega,
\end{align}
where $F(\omega)=\sum_{a,b}\theta^a \wedge i(E_b)R^Q(E_b,
E_a)\omega$ and
\begin{align}\label{2-4}
\nabla_{\rm tr}^*\nabla_{\rm tr}\omega =-\sum_a \nabla^2_{E_a,E_a}\omega
+\nabla_{\kappa_B^\sharp}\omega.
\end{align}
The operator $\nabla_{\rm tr}^*\nabla_{\rm tr}$
is positive definite and formally self adjoint on the space of
basic forms \cite{Jung}.
If $\omega$ is a basic 1-form, then $F(\omega)^\sharp
={\rm Ric}^Q(\omega^\sharp)$.
\section{Variational formulas for $(\mathcal F,\mathcal F')_{p}$-harmonic map}
Let $(M, g,\mathcal F)$ and $(M', g',\mathcal F')$ be two foliated Riemannian manifolds and let $\phi:(M,g,\mathcal F)\to (M', g',\mathcal F')$ be a smooth foliated map,
i.e., $d\phi(T\mathcal F)\subset T\mathcal F'$. We define $d_T\phi:Q \to Q'$ by
\begin{align}
d_T\phi := \pi' \circ d \phi \circ \sigma.
\end{align}
where $\pi':TM'\to Q'$ is the natural projection and $\sigma:Q\to TM$ is the bundle map splitting the exact sequence $0\to T\mathcal F\to TM\to Q\to 0$ determined by the bundle-like metric $g$. Then $d_T\phi$ is a section in $ Q^*\otimes
\phi^{-1}Q'$, where $\phi^{-1}Q'$ is the pull-back bundle on $M$. Let $\nabla^\phi$
and $\tilde \nabla$ be the connections on $\phi^{-1}Q'$ and
$Q^*\otimes \phi^{-1}Q'$, respectively. Then a foliated map $\phi:(M, g,\mathcal F)\to (M', g',\mathcal F')$ is called {\it transversally totally geodesic} if it satisfies
\begin{align}
\tilde\nabla_{\rm tr}d_T\phi=0,
\end{align}
where $(\tilde\nabla_{\rm tr}d_T\phi)(X,Y)=(\tilde\nabla_X d_T\phi)(Y)$ for any $X,Y\in \Gamma Q$. Note that if $\phi:(M,g,\mathcal F)\to (M',g',\mathcal F')$ is transversally totally geodesic with $d\phi(Q)\subset Q'$, then, for any transversal geodesic $\gamma$ in $M$, $\phi\circ\gamma$ is also a transversal geodesic.
From now on, we use $\nabla$ instead of all induced connections if we have no confusion.
The {\it transversal $p$-tension field} $\tau_{b,p}(\phi)$ of $\phi$ is defined by
\begin{align}\label{eq3-3}
\tau_{b,p}(\phi):={\rm tr}_{Q}(\nabla_{\rm tr} (|d_T\phi|^{p-2}d_T\phi)),
\end{align}
where $|d_T\phi|^2=\sum_a g_{Q'}(d_T\phi(E_a),d_T\phi(E_a))$. By a direct calculation, we get
\begin{align*}
\tau_{b,p}(\phi)=|d_T\phi|^{p-2}\{\tau_{b}(\phi)+(p-2)d_T\phi({\rm grad_{Q}}(\ln|d_T\phi|))\},
\end{align*}
where $\tau_{b}(\phi)={\rm tr}_{Q}(\nabla_{\rm tr} d_T\phi)$ is the transversal tension field \cite{JJ2}. It follows that $\tau_{b,2}(\phi)=\tau_{b}(\phi)$.
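For completeness, the direct calculation behind the identity above, carried out at a point with a local orthonormal basic frame $\{E_a\}$ satisfying $\nabla E_a=0$ there and assuming $d_T\phi\neq0$ so that $\ln|d_T\phi|$ is defined, reads
\begin{align*}
\tau_{b,p}(\phi)&=\sum_a\big(\nabla_{E_a}(|d_T\phi|^{p-2}d_T\phi)\big)(E_a)\\
&=|d_T\phi|^{p-2}\sum_a(\nabla_{E_a}d_T\phi)(E_a)+\sum_a E_a(|d_T\phi|^{p-2})\,d_T\phi(E_a)\\
&=|d_T\phi|^{p-2}\Big\{\tau_{b}(\phi)+(p-2)\sum_a E_a(\ln|d_T\phi|)\,d_T\phi(E_a)\Big\},
\end{align*}
since $E_a(|d_T\phi|^{p-2})=(p-2)|d_T\phi|^{p-2}E_a(\ln|d_T\phi|)$ and $\sum_a E_a(\ln|d_T\phi|)E_a={\rm grad_{Q}}(\ln|d_T\phi|)$.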
Let $\Omega$ be a compact domain of $M$. Then the {\it transversal $p$-energy} of $\phi$ on $\Omega\subset
M$ is defined by
\begin{align}\label{eq2-4}
E_{B,p}(\phi;\Omega)={1\over p}\int_{\Omega} | d_T \phi|^p\mu_{M}.
\end{align}
\begin{defn}
Let $\phi: (M, g,\mathcal F) \to (M', g',\mathcal F')$ be
a smooth foliated map. Then $\phi$ is said to be {\it $(\mathcal F,\mathcal F')_{p}$-harmonic} if $\phi$ is a critical point of the transversal $p$-energy functional $E_{B,p}$.
\end{defn}
In particular, $(\mathcal F,\mathcal F')_{2}$-harmonic map is called $(\mathcal F,\mathcal F')$-harmonic map.
Let $V\in\phi^{-1}Q'$. Then there is a 1-parameter family of foliated maps $\phi_t$ with $\phi_0=\phi$ and ${d\phi_t\over dt}|_{t=0}=V$. The family $\{\phi_t\}$ is said to be a {\it foliated variation} of $\phi$ with the {\it normal variation vector field} $V$. Then we have the first variational formula.
\begin{thm} $(${\rm The first variational formula}$)$ \label{th3}
Let $\phi:(M, g, \mathcal F)\to (M', g', \mathcal F')$
be a smooth foliated map and $\{\phi_t\}$ be a smooth foliated variation of $\phi$ supported in a compact domain $\Omega$. Then
\begin{align}\label{eq2-5}
{d\over dt}E_{B,p}(\phi_t;\Omega)|_{t=0}=-\int_{\Omega} \langle V,\tilde{\tau}_{b,p}(\phi)\rangle \mu_{M},
\end{align}
where $\tilde{\tau}_{b,p}(\phi)=\tau_{b,p}(\phi)-|d_T\phi|^{p-2}d_T\phi(\kappa_B^\sharp),$
$V={d\phi_t\over dt}|_{t=0}$ is the normal variation
vector field of $\{\phi_t\}$ and $\langle\cdot,\cdot\rangle$ is the pull-back metric on $\phi^{-1}Q'$.
\end{thm}
\begin{proof}
Fix $x\in M$. Let $\{E_a\}$ be a local orthonormal basic frame on $Q$ such that $(\nabla E_a)(x)=0$. Define
$\Phi:M \times (-\epsilon,\epsilon) \to M'$ by $\Phi(x,t)=\phi_t (x)$. Then
$d\Phi(E_a)=d_T\phi_t (E_a)$, $d\Phi({\partial\over\partial t})={{d\phi_t}\over {dt}}$ and
$\nabla_{\partial\over {\partial t}} {\partial\over{\partial t}}=\nabla_{\partial\over {\partial t}} E_a=\nabla_{E_a}{\partial\over{\partial t}}=0.$
Hence at $x$,
\begin{align*}
\frac{d}{dt} E_{B,p}(\phi_t;\Omega)
&=\frac{1}{p}\frac{d}{dt}\int_{\Omega}(\sum_a\langle d\Phi(E_a), d\Phi(E_a)\rangle)^{\frac{p}{2}}\mu_{M}\\
&= \int_{\Omega}\sum_a |d_{T}\Phi|^{p-2}\langle\nabla_{\partial\over{\partial t}} d\Phi(E_a), d\Phi(E_a)\rangle\mu_{M} \\
&= \int_{\Omega}\sum_a |d_{T}\Phi|^{p-2}\langle\nabla_{E_a} d\Phi({\partial\over\partial t}), d\Phi(E_a)\rangle\mu_{M}\\
&=\int_{\Omega} \sum_a\{E_a\langle d\Phi(\frac{\partial}{\partial t}), |d_T\Phi|^{p-2}d\Phi (E_a)\rangle - \langle d\Phi(\frac{\partial}{\partial t}), (\nabla_{E_a}|d_T\Phi|^{p-2}d\Phi) (E_a)\rangle \}\mu_{M}\\
&= \int_{\Omega}\sum_a E_a \langle \frac{d\phi_t}{dt}, |d_T\phi_{t}|^{p-2}d_T \phi_t (E_a)\rangle \mu_{M} -\int_{\Omega} \langle \frac{d\phi_t}{d t}, \tau_{b,p}(\phi_t)\rangle\mu_{M},
\end{align*}
where $|d_T\Phi|^2=\sum_{a=1}^{q} \langle d\Phi(E_a),d\Phi(E_a)\rangle=|d_T\phi_{t}|^2.$
If we choose a normal vector field $X_{t}$ with
\begin{align*}
\langle X_{t},Z\rangle = \langle \frac{d\phi_t}{d t}, |d_T\phi_{t}|^{p-2}d_T \phi_t(Z)\rangle
\end{align*}
for any vector field $Z$, then
\begin{align*}
{\rm div}_\nabla (X_{t}) = \sum_a E_a\langle \frac{d\phi_t}{dt}, |d_T\phi_{t}|^{p-2}d_T \phi_t(E_a)\rangle.
\end{align*}
So by the transversal divergence theorem (Theorem \ref{thm1-1}), we have
\begin{align*}
\frac{d}{dt}E_{B,p}(\phi_t;\Omega)
&= \int_{\Omega} {\rm div}_\nabla (X_{t})\mu_M-\int_{\Omega} \langle \frac{d\phi_t}{dt}, \tau_{b,p}(\phi_t)\rangle\mu_{M}\\
&=\int_\Omega\langle X_{t},\kappa_B^\sharp\rangle\mu_M-\int_{\Omega} \langle \frac{d\phi_t}{dt}, \tau_{b,p}(\phi_t)\rangle\mu_{M}\\
&=-\int_{\Omega}\langle \frac{d\phi_t}{dt}, \tau_{b,p}(\phi_t)-|d_T\phi_{t}|^{p-2}d_T\phi_{t}(\kappa_B^\sharp)\rangle\mu_{M}\\
&=-\int_{\Omega}\langle \frac{d\phi_t}{dt}, \tilde{\tau}_{b,p}(\phi_{t})\rangle\mu_{M},
\end{align*}
which proves (\ref{eq2-5}) by $t=0$.
\end{proof}
\begin{cor}\label{co1}
Let $\phi:(M, g, \mathcal F) \rightarrow (M', g', \mathcal F')$ be a smooth foliated map. Then $\phi$ is a $(\mathcal F,\mathcal F')_{p}$-harmonic map if and only if $\tilde{\tau}_{b,p}(\phi)=0$.
\end{cor}
Now, we consider the second variational formula for the transversal $p$-energy.
Let $V,W\in\phi^{-1}Q'$. Then there exists a family of foliated maps $\phi_{t,s}(-\epsilon<s,t<\epsilon)$ satisfying
\begin{align}\label{ee1}
\left\{
\begin{array}{ll}
V=\frac{\partial \phi_{t,s}}{\partial t}|_{(t,s)=(0,0)},\\\\
W=\frac{\partial \phi_{t,s}}{\partial s}|_{(t,s)=(0,0)}, \\\\
\phi_{0,0}=\phi.
\end{array}
\right.
\end{align}
The family $\{\phi_{t,s}\}$ is said to be the {\it foliated variation} of $\phi$ with the {\it normal variation vector fields} $V$ and $W$.
\begin{thm} $(${\rm The second variational formula}$)$\label{th4}
Let $\phi:(M, g, \mathcal F)\to (M', g', \mathcal F')$ be a $(\mathcal F,\mathcal F')_{p}$-harmonic map. Then for the normal variation vector fields $V$ and $W$ of the foliated variation $\{\phi_{t,s}\}$,
\begin{align*}
\frac{\partial^{2}}{\partial t\partial s}& E_{B,p}(\phi_{t,s};\Omega)|_{(t,s)=(0,0)}\notag\\
=&\int_{\Omega}|d_T\phi|^{p-2}\langle \nabla_{\rm tr}V, \nabla_{\rm tr}W\rangle \mu_M
-\int_{\Omega} |d_T\phi|^{p-2}\langle {\rm tr_{Q}}R^{Q'}(V, d_T \phi)d_T \phi,W\rangle\mu_M \notag\\
&+(p-2)\int_{\Omega}|d_T\phi|^{p-4}\langle \nabla_{\rm tr}V,d_T \phi\rangle\langle \nabla_{\rm tr}W, d_T \phi\rangle\mu_M,
\end{align*}
where ${\rm tr_{Q}}R^{Q'}(V, d_T \phi)d_T \phi=\sum_a R^{Q'}(V, d_T \phi(E_{a}))d_T \phi(E_{a})$.
\end{thm}
\begin{proof}
Let $\Phi: M\times(-\epsilon, \epsilon)\times(-\epsilon, \epsilon)\rightarrow M'$ be a smooth map which is defined by $\Phi(x,t,s)=\phi_{t,s}(x)$.
Then $d\Phi(E_a)=d_T\phi_{t,s} (E_a)$, $d\Phi(\frac{\partial}{\partial s})=\frac{\partial \phi_{t,s}}{\partial s}$
and $d\Phi(\frac{\partial}{\partial t})=\frac{\partial \phi_{t,s}}{\partial t}$.
Trivially, $[X, \frac{\partial}{\partial t}]=[X, \frac{\partial}{\partial s}]=0$ for any vector field $X\in TM$.
For convenience, we put $f=|d_T\phi_{t,s}|^{p-2}$ and $f_{0}=|d_T\phi|^{p-2}$. By making use of the first variational formula, it turns out that
\begin{align}\label{ee2}
\frac{\partial}{\partial s}E_{B,p}(\phi_{t,s};\Omega)
=-\int_{\Omega}\langle d\Phi(\frac{\partial}{\partial s}), \tilde{\tau}_{b,p}(\phi_{t,s})\rangle \mu_{M}.
\end{align}
Differentiating (\ref{ee2}) with respect to $t$, we get
\begin{align}\label{ee43}
\frac{\partial^{2}}{\partial t \partial s}E_{B,p}(\phi_{t,s};\Omega)
=-\int_{\Omega}\langle \nabla_{\frac{\partial}{\partial t}}d\Phi(\frac{\partial}{\partial s}), \tilde{\tau}_{b,p}(\phi_{t,s})\rangle \mu_{M}
-\int_{\Omega}\langle d\Phi(\frac{\partial}{\partial s}), \nabla_{\frac{\partial}{\partial t}}\tilde{\tau}_{b,p}(\phi_{t,s})\rangle \mu_{M}.
\end{align}
Since $\phi$ is $(\mathcal F,\mathcal F')_{p}$-harmonic map, from Corollary \ref{co1}, we have that
at $(t,s)=(0,0)$,
\begin{align}\label{ee50}
\frac{\partial^{2}}{\partial t \partial s}E_{B,p}(\phi_{t,s};\Omega)|_{(0,0)}
=-\int_{\Omega}\langle W, \nabla_{\frac{\partial}{\partial t}}\tilde{\tau}_{b,p}(\phi_{t,s})|_{(0,0)}\rangle \mu_{M}.
\end{align}
By choosing a local orthonormal basic frame field $E_{a}$ with $\nabla E_{a}(x)=0$ at some point $x\in M$, we have that at $x$,
\begin{align}\label{ee23}
\nabla_{\partial\over{\partial t}}&\tilde{\tau}_{b,p}(\phi_{t,s}) \notag\\
=&\nabla_{\partial\over{\partial t}}\tau_{b,p}(\phi_{t,s})-\nabla_{\partial\over{\partial t}}fd\Phi(\kappa_B^\sharp)\notag\\
=&\sum_a \nabla_{\partial\over{\partial t}}\{(\nabla_{E_{a}} fd\Phi)(E_{a})\}-f\nabla_{\kappa_B^\sharp}d\Phi(\frac{\partial}{\partial t})-\frac{\partial f}{\partial t} d\Phi(\kappa_B^\sharp)\notag\\
=&\sum_a \{\nabla_{E_a}\nabla_{\partial\over{\partial t}}fd\Phi(E_a)+ R^{\Phi}(\frac{\partial}{\partial t}, E_a)fd\Phi(E_a)\}-f\nabla_{\kappa_B^\sharp}d\Phi(\frac{\partial}{\partial t})-\frac{\partial f}{\partial t} d\Phi(\kappa_B^\sharp) \notag\\
=&\sum_a \{ \nabla_{E_a}\nabla_{E_a}fd\Phi(\frac{\partial}{\partial t})
+\nabla_{E_a}(\frac{\partial f}{\partial t}d\Phi(E_a)-E_a(f)d\Phi(\frac{\partial}{\partial t}))+R^{\Phi}(\frac{\partial}{\partial t}, E_a)fd\Phi(E_a)\}\notag\\
&-f\nabla_{\kappa_B^\sharp}d\Phi(\frac{\partial}{\partial t})-\frac{\partial f}{\partial t} d\Phi(\kappa_B^\sharp).
\end{align}
From (\ref{ee23}), we have
\begin{align}\label{ee44}
\int_{\Omega} &\langle \nabla_{\frac{\partial}{\partial t}}\tilde{\tau}_{b,p}(\phi_{t,s}),d\Phi(\frac{\partial}{\partial s})\rangle\mu_M \notag\\
=&\int_{\Omega} \sum_a \langle \nabla_{E_a}\nabla_{E_a}fd\Phi(\frac{\partial}{\partial t}), d\Phi(\frac{\partial}{\partial s})\rangle\mu_M
+\int_{\Omega}\sum_a \langle R^{Q'}(d\Phi(\frac{\partial}{\partial t}), d\Phi(E_a))fd\Phi(E_a),d\Phi(\frac{\partial}{\partial s})\rangle\mu_M \notag\\
&+\int_{\Omega}\sum_a E_a \langle \frac{\partial f}{\partial t}d\Phi(E_a), d\Phi(\frac{\partial}{\partial s}) \rangle\mu_M
-\int_{\Omega}\sum_a \langle \frac{\partial f}{\partial t}d\Phi(E_a), \nabla_{E_a}d\Phi(\frac{\partial}{\partial s}) \rangle\mu_M \notag\\
&-\int_{\Omega}\sum_a E_a \langle E_a(f)d\Phi(\frac{\partial}{\partial t}),d\Phi(\frac{\partial}{\partial s}) \rangle\mu_M
+\int_{\Omega}\sum_a \langle E_a(f)d\Phi(\frac{\partial}{\partial t}), \nabla_{E_a}d\Phi(\frac{\partial}{\partial s}) \rangle\mu_M\notag\\
&-\int_{\Omega}\langle f\nabla_{\kappa_B^\sharp}d\Phi(\frac{\partial}{\partial t}),d\Phi(\frac{\partial}{\partial s})\rangle\mu_M
-\int_{\Omega}\langle\frac{\partial f}{\partial t}d\Phi(\kappa_B^\sharp),d\Phi(\frac{\partial}{\partial s})\rangle\mu_M .
\end{align}
Let $X_{t,s}$ and $Y_{t,s}$ be two normal vector fields such that
\begin{align}\label{ee45}
\left\{
\begin{array}{ll}
\langle X_{t,s}, Z\rangle=\langle \frac{\partial f}{\partial t}d\Phi(Z),d\Phi(\frac{\partial}{\partial s})\rangle,\\\\
\langle Y_{t,s}, Z\rangle=\langle Z(f)d\Phi(\frac{\partial}{\partial t}), d\Phi(\frac{\partial}{\partial s})\rangle
\end{array}
\right.
\end{align}
for any vector field $Z$ on $M$, respectively.
Then
\begin{align}\label{ee46}
\left\{
\begin{array}{ll}
{\rm div}_\nabla (X_{t,s})=\sum_a E_a \langle \frac{\partial f}{\partial t}d\Phi(E_{a}),d\Phi(\frac{\partial}{\partial s})\rangle,\\\\
{\rm div}_\nabla (Y_{t,s})=\sum_a E_a \langle E_{a}(f)d\Phi(\frac{\partial}{\partial t}), d\Phi(\frac{\partial}{\partial s})\rangle.
\end{array}
\right.
\end{align}
By (\ref{ee46}) and the transversal divergence theorem (Theorem \ref{thm1-1}), we have
\begin{align}\label{ee47}
\int_{\Omega}\sum_a & E_a \langle \frac{\partial f}{\partial t}d\Phi(E_a), d\Phi(\frac{\partial}{\partial s}) \rangle\mu_M
-\int_{\Omega}\sum_a E_a \langle E_a(f)d\Phi(\frac{\partial}{\partial t}), d\Phi(\frac{\partial}{\partial s}) \rangle\mu_M \notag\\
=&\int_{\Omega} {\rm div_{\nabla}}(X_{t,s})\mu_M -\int_{\Omega}{\rm div_{\nabla}}(Y_{t,s})\mu_M \notag\\
=&\int_{\Omega}\langle \frac{\partial f}{\partial t}d\Phi(\kappa_B^\sharp), d\Phi(\frac{\partial}{\partial s})\rangle\mu_M
-\int_{\Omega}\langle \kappa_B^\sharp(f)d\Phi(\frac{\partial}{\partial t}), d\Phi(\frac{\partial}{\partial s})\rangle\mu_M.
\end{align}
From (\ref{ee50}), (\ref{ee44}) and (\ref{ee47}), we get
\begin{align}\label{ee19}
\frac{\partial^{2}}{\partial t \partial s}& E_{B,p}(\phi_{t,s};\Omega)|_{(0,0)} \notag\\
=&\int_{\Omega}\langle\nabla_{\rm tr}^*\nabla_{\rm tr}f_{0}V, W \rangle \mu_M
-\int_{\Omega}\sum_a f_{0}\langle R^{Q'}(V, d\Phi(E_a))d\Phi(E_a),W\rangle\mu_M \notag\\
&+\int_{\Omega}\sum_a \langle \frac{\partial f}{\partial t}|_{(0,0)}d\Phi(E_a), \nabla_{E_a}W \rangle\mu_M
-\int_{\Omega}\sum_a \langle E_a(f_{0})V, \nabla_{E_a}W \rangle\mu_M \notag\\
=&\int_{\Omega}\sum_a |d_T\phi|^{p-2}\langle \nabla_{E_a}V, \nabla_{E_a}W \rangle \mu_M
-\int_{\Omega}\sum_a |d_T\phi|^{p-2}\langle R^{Q'}(V, d_{T}\phi(E_a))d_{T}\phi(E_a),W\rangle\mu_M \notag\\
&+\int_{\Omega}\sum_a \langle \frac{\partial f}{\partial t}|_{(0,0)}d_{T}\phi(E_a), \nabla_{E_a}W \rangle\mu_M.
\end{align}
Since
\begin{align*}
\frac{\partial f}{\partial t}|_{(0,0)}=(p-2)|d_T\phi|^{p-4}\sum_{b}\langle \nabla_{E_b}V,d_T \phi(E_b)\rangle,
\end{align*}
the proof follows from (\ref{ee19}).
\end{proof}
\begin{cor} \cite{DT} \label{co4}
Let $\phi:(M, g, \mathcal F)\to (M', g', \mathcal F')$ be a $(\mathcal F,\mathcal F')$-harmonic map. Then
\begin{align*}
\frac{\partial^{2}}{\partial t\partial s} E_{B}(\phi_{t,s};\Omega)|_{(t,s)=(0,0)}
=\int_{\Omega}\langle \nabla_{\rm tr}V, \nabla_{\rm tr}W\rangle \mu_M
-\int_{\Omega}\langle {\rm tr_{Q}}R^{Q'}(V, d_T \phi)d_T \phi,W\rangle\mu_M.
\end{align*}
\end{cor}
We define the index form for $(\mathcal F,\mathcal F')_{p}$-harmonic maps by
\begin{align}\label{em1}
I_{p}(V,W):=\frac{\partial^{2}}{\partial t\partial s}E_{B,p}(\phi_{t,s})|_{(t,s)=(0,0)}
\end{align}
for vector fields $V$ and $W$ along $\phi$.
\begin{rem}
From Theorem \ref{th4} and (\ref{em1}), we obtain $I_{p}(V,W)=I_{p}(W,V)$.
\end{rem}
\begin{defn}\label{de11}
A $(\mathcal F,\mathcal F')_{p}$-harmonic map $\phi$ is said to be {\it transversally stable} if $I_{p}(V,V)\geq0$ for any vector field $V$ along $\phi$.
\end{defn}
It is easy to obtain the following theorem from Theorem \ref{th4}.
\begin{thm}\label{th1}$(${\rm Stability}$)$
Let $\phi: (M, g,\mathcal F) \to (M', g',\mathcal F')$ be a $(\mathcal F,\mathcal F')_{p}$-harmonic map with compact $M$. If the transversal sectional curvature of $M'$ is non-positive, then $\phi$ is transversally stable.
\end{thm}
\begin{proof}
By Theorem \ref{th4}, we have
\begin{align}\label{ee36}
I_{p}(V,V)
=&\int_{M}|d_T\phi|^{p-2}\{|\nabla_{\rm tr}V|^{2}-\langle R^{Q'}(V, d_T \phi)d_T \phi,V\rangle\}\mu_{M} \notag\\
&+(p-2)\int_{M}|d_T\phi|^{p-4}\langle \nabla_{\rm tr}V, d_T\phi\rangle^{2}\mu_{M}.
\end{align}
Since $K^{Q'}\leq0$ and each summand below equals the transversal sectional curvature $K^{Q'}(V, d_T \phi(E_{a}))$ multiplied by a non-negative factor, we get
$$\langle R^{Q'}(V, d_T \phi)d_T \phi,V\rangle=\sum_a\langle R^{Q'}(V, d_T \phi(E_{a}))d_T \phi(E_{a}),V\rangle\leq0.$$
Together with (\ref{ee36}), this implies $I_{p}(V,V)\geq0$. So the proof follows.
\end{proof}
\section{Liouville type theorem}
Let $\phi :(M,g,\mathcal F) \rightarrow (M', g',\mathcal F')$ be a smooth foliated map and $\Omega_B^r(E)=\Omega_B^r(\mathcal F)\otimes E$ be the space of $E$-valued basic $r$-forms, where $E=\phi^{-1}Q'$. We define $d_\nabla : \Omega_B^r(E)\to \Omega_B^{r+1}(E)$ by
\begin{align}
d_\nabla(\omega\otimes s)=d_B\omega\otimes s+(-1)^r\omega\wedge\nabla s
\end{align}
for any $s\in \Gamma E$ and $\omega\in\Omega_B^r(\mathcal F)$.
Let $\delta_\nabla$ be a formal adjoint of $d_\nabla$ with respect to the inner product induced from (\ref{2-1}).
Then the Laplacian $\Delta$ on $\Omega_B^*(E)$ is defined by
\begin{align}\label{ee8}
\Delta =d_\nabla \delta_\nabla +\delta_\nabla d_\nabla.
\end{align}
Moreover, the operators $A_X$ and $L_X$ are extended to $\Omega_B^r(E)$ as follows:
\begin{align}
A_X(\omega\otimes s)&=A_X\omega\otimes s\\
L_X(\omega\otimes s)&=L_X\omega\otimes s+\omega\otimes\nabla_X s
\end{align}
for any $\omega\otimes s\in\Omega_B^r(E)$ and $X\in\Gamma TM$. Then $L_X=d_\nabla i(X) +i(X)d_\nabla$ for any $X\in \Gamma TM$, where $i(X)(\omega\otimes s)=i(X)\omega\otimes s$. Hence $\Psi \in\Omega_B^*(E)$ if and only if $i(X)\Psi=0$ and $L_X\Psi=0$ for all $ X\in \Gamma T\mathcal F$.
Then the generalized Weitzenb\"ock type formula (\ref{2-3}) is extended to $\Omega_B^*(E)$ as follows \cite{JJ2}: for any $\Psi\in\Omega_B^r(E)$,
\begin{align}\label{eq4-6}
\Delta \Psi = \nabla_{\rm tr}^*\nabla_{\rm tr} \Psi
+ A_{\kappa_{B}^\sharp} \Psi + F(\Psi),
\end{align}
where $ \nabla_{\rm tr}^*\nabla_{\rm tr}$ is the operator induced from (\ref{2-4}) and $F(\Psi)=\sum_{a,b=1}^{q}\theta^a\wedge i(E_b) R(E_b,E_a)\Psi$.
Moreover, we have that for any $ \Psi\in\Omega_B^r(E)$,
\begin{align}\label{ee51}
\frac12\Delta_B|\Psi |^{2}
=\langle\Delta \Psi, \Psi\rangle -|\nabla_{\rm tr} \Psi|^2-\langle A_{\kappa_{B}^\sharp}\Psi, \Psi\rangle -\langle F(\Psi),\Psi\rangle.
\end{align}
If we put $\Psi=|d_T\phi|^{p-2}d_T \phi$, then we have the following theorem.
\begin{thm}\label{th2}
Let $\phi:(M, g,\mathcal F) \to (M', g', \mathcal F')$ be a smooth foliated map. Then the generalized Weitzenb\"ock type formula is given by
\begin{align}\label{ee6}
\frac12\Delta_B| d_T \phi |^{2p-2}
=& \langle\Delta |d_T\phi|^{p-2}d_T \phi, |d_T\phi|^{p-2}d_T \phi\rangle -
|\nabla_{\rm tr} |d_T\phi|^{p-2}d_T \phi|^2 \notag\\
& -\langle A_{\kappa_{B}^\sharp}|d_T\phi|^{p-2}d_T \phi, |d_T\phi|^{p-2}d_T \phi\rangle -|d_T\phi|^{2p-4}\langle F(d_T\phi),d_T\phi\rangle,
\end{align}
where
\begin{align}\label{ee3}
\langle F(d_T\phi),d_T\phi\rangle&=\sum_a g_{Q'}(d_T \phi({\rm Ric^{Q}}(E_a)),d_T \phi(E_a)) \notag\\
&-\sum_{a,b}g_{Q'}( R^{Q'}(d_T \phi(E_b), d_T \phi(E_a))d_T \phi(E_a), d_T \phi(E_b)).
\end{align}
\end{thm}
\begin{proof}
Substituting $\Psi=|d_T\phi|^{p-2}d_T \phi$ into (\ref{ee51}) yields (\ref{ee6}), and the equation (\ref{ee3}) follows from \cite[Theorem 5.1]{JJ2}.
\end{proof}
\begin{lem}\label{lem1}
Let $\phi:(M, g,\mathcal F) \to (M', g', \mathcal F')$ be a $(\mathcal F,\mathcal F')_{p}$-harmonic map. Then
\begin{align*}
\delta_\nabla |d_T\phi|^{p-2}d_T\phi=0.
\end{align*}
\end{lem}
\begin{proof}
Locally, $\delta_\nabla$ is expressed by (\ref{2-2}). Hence, by Corollary \ref{co1}, we have
\begin{align*}
\delta_\nabla |d_T\phi|^{p-2}d_T\phi
=&-\sum_a i(E_a) \nabla_{E_a}|d_T\phi|^{p-2}d_T\phi+i(\kappa_{B}^\sharp)|d_T\phi|^{p-2}d_T\phi\\
=&-\sum_a (\nabla_{E_a}|d_T\phi|^{p-2}d_T\phi)(E_a)+i(\kappa_{B}^\sharp)|d_T\phi|^{p-2}d_T\phi\\
=&-\tau_{b,p} (\phi) +|d_T\phi|^{p-2}i(\kappa_{B}^\sharp)d_T\phi\\
=&-\tilde{\tau}_{b,p}(\phi)\\
=&0.
\end{align*}
\end{proof}
\begin{cor}\label{co3}
Let $\phi:(M, g,\mathcal F) \to (M', g', \mathcal F')$ be a $(\mathcal F,\mathcal F')_{p}$-harmonic map. Then
\begin{align}\label{ee71}
|d_T&\phi|\Delta_B|d_T\phi|^{p-1}
-\langle\delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T \phi, d_T \phi\rangle \notag\\
&+\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,|d_T\phi|^{p-2}d_T \phi\rangle
-|d_T\phi|^{p-1}\kappa_{B}^\sharp(|d_T\phi|)\notag\\
&\leq-|d_T\phi|^{p-2}\langle F(d_T\phi),d_T\phi\rangle.
\end{align}
\end{cor}
\begin{proof}
Since $\phi$ is a $(\mathcal F,\mathcal F')_{p}$-harmonic map, from Theorem \ref{th2} and Lemma \ref{lem1}, we have
\begin{align}\label{ee55}
\frac12\Delta_B| d_T \phi |^{2p-2}
=& \langle\delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T \phi, |d_T\phi|^{p-2}d_T \phi\rangle -|\nabla_{\rm tr} |d_T\phi|^{p-2}d_T \phi|^2 \notag\\
&-|d_T\phi|^{p-2}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,|d_T\phi|^{p-2}d_T \phi\rangle
+|d_T\phi|^{2p-3}\kappa_{B}^\sharp(|d_T\phi|)\notag\\
&-|d_T\phi|^{2p-4}\langle F(d_T\phi),d_T\phi\rangle.
\end{align}
By a simple calculation, we have
\begin{align}\label{ee72}
\frac12\Delta_B| d_T \phi |^{2p-2}
=|d_T\phi|^{p-1}\Delta_B|d_T\phi|^{p-1}-|d_{B}|d_T\phi|^{p-1}|^{2}.
\end{align}
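For completeness, we sketch the elementary computation behind (\ref{ee72}): with the sign convention for the basic Laplacian used in this paper, every basic function $f$ satisfies
\begin{align*}
\frac12\Delta_B f^{2}=f\Delta_B f-|d_{B}f|^{2},
\end{align*}
and (\ref{ee72}) is this identity applied to $f=|d_T\phi|^{p-1}$.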
From (\ref{ee55}) and (\ref{ee72}), we get
\begin{align}\label{ee73}
|d_T\phi|^{p-1}\Delta_B|d_T\phi|^{p-1}
=&|d_{B}|d_T\phi|^{p-1}|^{2}-|\nabla_{\rm tr}|d_T\phi|^{p-2}d_T\phi|^{2}
+\langle\delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T \phi, |d_T\phi|^{p-2}d_T \phi\rangle \notag\\
&-|d_T\phi|^{p-2}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,|d_T\phi|^{p-2}d_T \phi\rangle
+|d_T\phi|^{2p-3}\kappa_{B}^\sharp(|d_T\phi|)\notag\\
&-|d_T\phi|^{2p-4}\langle F(d_T\phi),d_T\phi\rangle.
\end{align}
By the first Kato inequality \cite{BE}, we have
\begin{align}\label{ee74}
|\nabla_{\rm tr}|d_T\phi|^{p-2}d_T\phi|\geq|d_{B}|d_T\phi|^{p-1}|.
\end{align}
Therefore, the result follows from (\ref{ee73}) and (\ref{ee74}).
\end{proof}
The following result is obtained as an application of the generalized Weitzenb\"ock type formula.
\begin{thm}
Let $(M,g,\mathcal F)$ be a closed foliated Riemannian manifold of non-negative transversal Ricci curvature.
Let $(M',g',\mathcal F')$ be a foliated Riemannian manifold of non-positive transversal sectional curvature. If $\phi:(M, g,\mathcal F) \rightarrow (M', g',\mathcal F')$ is a
$(\mathcal F,\mathcal F')_{p}$-harmonic map, then $\phi$ is transversally totally geodesic.
Furthermore, \\
$(1)$ If the transversal Ricci curvature of $\mathcal F$ is
positive somewhere, then $\phi$ is transversally constant.
\\
$(2)$ If the transversal sectional curvature of $\mathcal F'$ is
negative, then $\phi$ is either transversally constant or $\phi(M)$ is a transversally geodesic closed curve.
\end{thm}
\begin{proof}
By the hypothesis and (\ref{ee3}), we know
$\langle F(d_T\phi),d_T\phi\rangle \geq 0.$
Since $\phi$ is a $(\mathcal F,\mathcal F')_{p}$-harmonic map, from Corollary \ref{co3}, we have
\begin{align}\label{ee61}
|d_T\phi|\Delta_B|d_T\phi|^{p-1}
\leq&\langle\delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T \phi, d_T \phi\rangle
-\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,|d_T\phi|^{p-2}d_T \phi\rangle \notag\\
&+|d_T\phi|^{p-1}\kappa_{B}^\sharp(|d_T\phi|).
\end{align}
Integrating (\ref{ee61}), we have
\begin{align}\label{ee62}
\int_{M}\langle&|d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle\mu_{M}\notag \\
\leq&\int_{M}\langle \delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T\phi,d_T\phi\rangle\mu_{M}
-\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,|d_T\phi|^{p-2}d_T \phi\rangle\mu_{M}\notag \\
&+\int_{M}|d_T\phi|^{p-1}\kappa_{B}^\sharp(|d_T\phi|)\mu_{M}.
\end{align}
Since $d_\nabla (d_T\phi)=0$, we get
\begin{align}\label{ee64}
\int_{M}\langle \delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T\phi,d_T\phi\rangle\mu_{M}
=\int_{M}\langle d_\nabla |d_T\phi|^{p-2}d_T\phi,d_\nabla d_T\phi\rangle\mu_{M}
=0.
\end{align}
Since $\phi$ is a $(\mathcal F,\mathcal F')_{p}$-harmonic map, from Lemma \ref{lem1}, we obtain
\begin{align}\label{ee65}
\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,|d_T\phi|^{p-2}d_T \phi\rangle\mu_{M}
=\int_{M}\langle i(\kappa_{B}^\sharp)d_T\phi,\delta_\nabla |d_T\phi|^{p-2}d_T \phi\rangle\mu_{M}
=0.
\end{align}
Now, we choose a bundle-like metric $g$ such that $\delta_{B}\kappa_{B}=0$. Then we have
\begin{align}\label{ee66}
\int_{M}|d_T\phi|^{p-1}\kappa_{B}^\sharp(|d_T\phi|)\mu_{M}
=\frac{1}{p}\int_{M}\kappa_{B}^\sharp(|d_T\phi|^{p})\mu_{M}
=0.
\end{align}
From (\ref{ee62})$\sim$(\ref{ee66}), we get
\begin{align}\label{ee68}
\int_{M}\langle|d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle\mu_{M}\leq0.
\end{align}
On the other hand, we know that
\begin{align}\label{ee63}
\int_{M}\langle |d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle\mu_{M}
&=\int_{M}\langle d_{B}|d_T\phi|,d_{B}|d_T\phi|^{p-1}\rangle\mu_{M} \notag \\
&=(p-1)\int_{M}|d_T\phi|^{p-2}| d_{B}|d_T\phi||^{2}\mu_{M}\notag \\
&\geq0.
\end{align}
Then from (\ref{ee68}) and (\ref{ee63}), we get
\begin{align}\label{ee67}
0=\int_{M}\langle |d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle\mu_{M}=(p-1)\int_{M}|d_T\phi|^{p-2}| d_{B}|d_T\phi||^{2}\mu_{M},
\end{align}
which yields $d_T\phi=0$ or $ d_{B}|d_T\phi|=0$. If $ d_{B}|d_T\phi|\neq0$, then $d_T\phi=0$, i.e., $\phi$ is transversally constant. Trivially, $\phi$ is transversally totally geodesic.
If $d_T\phi\neq0$, then $ d_{B}|d_T\phi|=0$. It means that $|d_T\phi|$ is constant.
From (\ref{ee73}), we have
\begin{align}\label{ee81}
\langle|d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle
=&-|d_T\phi|^{p-2}|\nabla_{\rm tr}d_T\phi|^{2}
-\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,|d_T\phi|^{p-2}d_T \phi\rangle \notag \\
&-|d_T\phi|^{p-2}\langle F(d_T\phi),d_T\phi\rangle.
\end{align}
From (\ref{ee67}), (\ref{ee81}) and Lemma \ref{lem1}, we get
\begin{align}\label{ee69}
\int_{M}|d_T\phi|^{p-2}|\nabla_{\rm tr}d_T\phi|^{2}\mu_{M}
+\int_{M}|d_T\phi|^{p-2}\langle F(d_T\phi),d_T\phi\rangle\mu_{M}=0.
\end{align}
Since $|\nabla_{\rm tr}d_T\phi|^{2}\geq0$ and $\langle F(d_T\phi),d_T\phi\rangle \geq 0$,
from (\ref{ee69}), we have
\begin{align}\label{ee11}
|\nabla_{\rm tr} d_T \phi|^2+\langle F(d_T\phi),d_T\phi\rangle=0.
\end{align}
Thus, $\nabla_{\rm tr}d_T\phi=0$, i.e., $\phi$ is transversally totally geodesic.
Furthermore, from (\ref{ee3}) and (\ref{ee11}), we get
\begin{align}\label{ee70}
\left\{
\begin{array}{ll}
g_{Q'}(d_T\phi({\rm Ric}^{Q}(E_a)),d_T\phi(E_a))= 0,\\\\
g_{Q'}(R^{Q'}(d_T\phi(E_a),d_T\phi(E_b))d_T\phi(E_a),d_T\phi(E_b))= 0
\end{array}
\right.
\end{align}
for any indices $a$ and $b$.
If ${\rm Ric}^{Q}$ is positive at some point, then $d_T\phi=0$, i.e., $\phi$ is transversally constant, which proves (1). For the statement (2), if the rank of $d_T\phi$ is $\geq2$, then there exists a point $x\in M$ at which at least two of the vectors $d_T\phi(E_a)$ at $\phi(x)$ are linearly independent, say $d_T\phi(E_1)$ and $d_T\phi(E_2)$.
Since the transversal sectional curvature $K^{Q'}$ of $\mathcal F'$ is negative,
\begin{align*}
g_{Q'}(R^{Q'}(d_T\phi(E_1),d_T\phi(E_2))d_T\phi(E_2),d_T\phi(E_1))<0,
\end{align*}
which contradicts (\ref{ee70}). Hence the rank of $d_T\phi$ is $<2$, that is, the rank of $d_T\phi$ is zero or one everywhere. If the rank of $d_T\phi$ is zero, then $\phi$ is transversally constant. If the rank of $d_T\phi$ is one, then $\phi(M)$ is a transversally geodesic closed curve.
\end{proof}
Next, we investigate the Liouville type theorem for $(\mathcal F,\mathcal F')_{p}$-harmonic maps on foliated Riemannian manifolds. Let $\mu_{0}$ be the infimum of the eigenvalues of the basic Laplacian $\Delta_{B}$ acting on $L^{2}$-basic functions on $M$. Then we obtain the following theorem.
\begin{thm}\label{th5}
Let $(M,g,\mathcal F)$ be a complete foliated Riemannian manifold with coclosed mean curvature form $\kappa_{B}$ whose leaves are all compact.
Let $(M',g',\mathcal F')$ be a foliated Riemannian manifold with non-positive transversal sectional curvature $K^{Q'}$. Assume that the transversal Ricci curvature ${\rm Ric^{Q}}$ of $M$ satisfies ${\rm Ric^{Q}}\geq-\frac{4(p-1)}{p^{2}}\mu_{0}$ for all $x\in M$ and ${\rm Ric^{Q}}>-\frac{4(p-1)}{p^{2}}\mu_{0}$ at some point $x_{0}$.
Then any $(\mathcal F,\mathcal F')_{p}$-harmonic map $\phi : (M,g,\mathcal F) \rightarrow (M', g',\mathcal F')$ of $E_{B,p}(\phi)<\infty$ is transversally constant.
\end{thm}
\begin{proof}
Let $M$ be a complete foliated Riemannian manifold such that ${\rm Ric^{Q}}\geq-C$ for all $x$ and ${\rm Ric^{Q}}>-C$ at some point $x_{0}$, where $C=\frac{4(p-1)}{p^{2}}\mu_{0}$.
Since $K^{Q'}\leq0$ and ${\rm Ric^{Q}}\geq-C$, from (\ref{ee3}), we have
\begin{align}\label{ee13}
\langle F(d_T\phi),d_T\phi\rangle\geq \sum_a g_{Q'}(d_T \phi({\rm Ric^{Q}}(E_a)),d_T \phi(E_a))\geq-C|d_T\phi|^{2}.
\end{align}
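We note, as a sketch of the last step in (\ref{ee13}), that the inequality
\begin{align*}
\sum_a g_{Q'}(d_T \phi({\rm Ric^{Q}}(E_a)),d_T \phi(E_a))\geq-C|d_T\phi|^{2}
\end{align*}
can be checked pointwise: since the left-hand side does not depend on the choice of the local orthonormal basic frame, we may assume ${\rm Ric^{Q}}(E_a)=\lambda_a E_a$ with $\lambda_a\geq-C$, and then the left-hand side equals $\sum_a\lambda_a|d_T\phi(E_a)|^{2}\geq-C|d_T\phi|^{2}$.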
Since $\phi$ is a $(\mathcal F,\mathcal F')_{p}$-harmonic map, from Corollary \ref{co3}, we have
\begin{align}\label{ee52}
|d_T&\phi|\Delta_B|d_T\phi|^{p-1}
-\langle\delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T \phi, d_T \phi\rangle \notag\\
&+\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,|d_T\phi|^{p-2}d_T \phi\rangle
-|d_T\phi|^{p-1}\kappa_{B}^\sharp(|d_T\phi|)\notag\\
&\leq-|d_T\phi|^{p-2}\sum_a g_{Q'}(d_T \phi({\rm Ric^{Q}}(E_a)),d_T \phi(E_a))\leq C|d_T\phi|^{p}.
\end{align}
Let $B_{l}=\{y\in M\,|\,\rho(y)\leq l\}$, where $\rho(y)$ is the distance between the leaf through a fixed point $x_{0}$ and the leaf through $y$.
Let $\omega_{l}$ be the Lipschitz continuous basic function such that
\begin{align*}
\left\{
\begin{array}{ll}
0\leq\omega_{l}(y)\leq1 \quad {\rm for \, any} \, y\in M\\
{\rm supp}\, \omega_{l}\subset B_{2l}\\
\omega_{l}(y)=1 \quad {\rm for \, any} \, y\in B_{l}\\
\lim\limits_{l\rightarrow\infty}\omega_{l}=1\\
|d\omega_{l}|\leq\frac{\alpha}{l} \quad {\rm almost \, everywhere \, on}\ M,
\end{array}
\right.
\end{align*}
where $\alpha$ is a positive constant \cite{Y1}. Therefore, $\omega_{l}\psi$ has compact support for any basic form $\psi\in\Omega_{B}^{*}(\mathcal F)$.
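For instance (this explicit choice is only an illustration and is not used below), provided $\rho$ is $1$-Lipschitz, the basic function
\begin{align*}
\omega_{l}(y)=\min\Big\{1,\ \max\Big\{0,\ 2-\frac{\rho(y)}{l}\Big\}\Big\}
\end{align*}
satisfies all of the conditions above with $\alpha=1$.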
Multiplying (\ref{ee52}) by $\omega_{l}^{2}$ and integrating by parts, this yields
\begin{align}\label{ee4}
\int_{M}&\langle \omega_{l}^{2}|d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle\mu_{M}
-\int_{M}\langle \omega_{l}^{2}d_T\phi, \delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T\phi\rangle\mu_{M}\notag \\
&+\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,\omega_{l}^{2}|d_T\phi|^{p-2}d_T \phi\rangle\mu_{M}
-\int_{M}\langle \omega_{l}^{2}|d_T\phi|^{p-1},\kappa_{B}^\sharp(|d_T\phi|)\rangle\mu_{M}\notag \\
&\leq -\sum_a \int_{M}\omega_{l}^{2} |d_T\phi|^{p-2}g_{Q'}(d_T \phi({\rm Ric^{Q}}(E_a)),d_T \phi(E_a))\mu_{M} \notag \\
&\leq C\int_{M}\omega_{l}^{2}|d_T\phi|^{p}\mu_{M}.
\end{align}
By Lemma \ref{lem1}, since
\begin{align*}
\delta_\nabla (\omega_{l}^{2}|d_T\phi|^{p-2}d_T \phi)=-i(d_{B}\omega_{l}^{2})|d_T\phi|^{p-2}d_T \phi=-2\omega_{l}i(d_{B}\omega_{l})|d_T\phi|^{p-2}d_T \phi,
\end{align*}
we have
\begin{align*}
\bigg{|}\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,\omega_{l}^{2}|d_T\phi|^{p-2}d_T \phi\rangle\mu_{M}\bigg{|}
=&\bigg{|}\int_{M}\langle i(\kappa_{B}^\sharp)d_T\phi, -2\omega_{l}i(d_{B}\omega_{l})|d_T\phi|^{p-2}d_T \phi\rangle\mu_{M}\bigg{|}\notag \\
\leq&2\int_{M}\omega_{l}|i(\kappa_{B}^\sharp)d_T\phi||i(d_{B}\omega_{l})|d_T\phi|^{p-2}d_T \phi|\mu_{M}\notag \\
\leq&2\int_{M}\omega_{l}|\kappa_{B}||d_{B}\omega_{l}||d_T\phi|^{p}\mu_{M}\notag \\
\leq&2\frac{\alpha}{l}\max\{|\kappa_{B}|\}\int_{M}\omega_{l}|d_T\phi|^{p}\mu_{M}.
\end{align*}
For the second inequality above, we used the fact
\begin{align}\label{ee82}
|X^{\flat}\wedge d_T\phi|^{2}+|i(X)d_T\phi|^{2}=|X|^{2}|d_T\phi|^{2}
\end{align}
for any vector $X$. If we let $l\rightarrow\infty$, then
\begin{align}\label{ee57}
\lim\limits_{l\rightarrow\infty}\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,\omega_{l}^{2}|d_T\phi|^{p-2}d_T \phi\rangle\mu_{M}=0.
\end{align}
At the same time, from $\delta_{B}\kappa_{B}=0$ and the Cauchy-Schwarz inequality, we get
\begin{align*}
\bigg{|}\int_{M}\langle \omega_{l}^{2}|d_T\phi|^{p-1},\kappa_{B}^{\sharp}(|d_T\phi|)\rangle\mu_{M}\bigg{|}
=&\frac{1}{p}\bigg{|}\int_{M}\kappa_{B}^{\sharp}(\omega_{l}^{2}|d_T\phi|^{p})\mu_{M}-\int_{M}2\omega_{l}|d_T\phi|^{p}\langle \kappa_{B},d_{B}\omega_{l}\rangle\mu_{M} \bigg{|} \notag \\
=&\frac{2}{p}\bigg{|}\int_{M}\omega_{l}|d_T\phi|^{p}\langle \kappa_{B},d_{B}\omega_{l}\rangle\mu_{M}\bigg{|}\notag \\
\leq&\frac{\alpha}{l}\max\{|\kappa_{B}|\}\int_{M}\omega_{l}|d_T\phi|^{p}\mu_{M}.
\end{align*}
So by letting $l\rightarrow\infty$,
\begin{align}\label{ee35}
\lim\limits_{l\rightarrow\infty}\int_{M}\langle \omega_{l}^{2}|d_T\phi|^{p-1},\kappa_{B}^{\sharp}(|d_T\phi|)\rangle\mu_{M}=0.
\end{align}
By the Cauchy-Schwarz inequality, we know that
\begin{align}\label{ee16}
\int_{M}\langle& \omega_{l}^{2}|d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle\mu_{M}\notag\\
&=\int_{M}\langle d_{B}(\omega_{l}^{2}|d_T\phi|),d_{B}|d_T\phi|^{p-1}\rangle\mu_{M} \notag\\
&=\frac{A_{1}}{p}\int_{M}\omega_{l}^{2}|d_{B}|d_T\phi|^{\frac{p}{2}}|^{2}\mu_{M}+A_{1}\int_{M}\langle |d_T\phi|^{\frac{p}{2}}d_{B}\omega_{l},\omega_{l}d_{B}|d_T\phi|^{\frac{p}{2}}\rangle\mu_{M} \notag\\
&\geq\frac{A_{1}}{p}\int_{M}\omega_{l}^{2}|d_{B}|d_T\phi|^{\frac{p}{2}}|^{2}\mu_{M}-A_{1}\int_{M}\omega_{l}|d_T\phi|^{\frac{p}{2}}|d_{B}\omega_{l}||d_{B}|d_T\phi|^{\frac{p}{2}}|\mu_{M},
\end{align}
where $A_{1}=\frac{4(p-1)}{p}$.
Since $d_{\nabla}(d_T\phi)=0$, for a basic function $f$ on $M$ we get from (\ref{ee82}) (see also \cite{prs})
\begin{align}\label{ee5}
|d_{\nabla}(fd_{T}\phi)|=|d_{B}f\wedge d_{T}\phi|\leq |d_{B}f||d_{T}\phi|.
\end{align}
Hence we have
\begin{align}\label{ee7}
\bigg{|}\int_{M}\langle \omega_{l}^{2}d_T\phi, \delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T\phi\rangle\mu_{M}\bigg{|}
&=\bigg{|}\int_{M}\langle d_\nabla(\omega_{l}^{2}d_T\phi), d_\nabla |d_T\phi|^{p-2}d_T\phi\rangle\mu_{M}\bigg{|} \notag \\
&\leq \int_{M}|d_\nabla(\omega_{l}^{2}d_T\phi)||d_\nabla |d_T\phi|^{p-2}d_T\phi|\mu_{M} \notag \\
&\leq 2\int_{M}|\omega_{l}d_{B}\omega_{l}||d_{B}|d_T\phi|^{p-2}||d_T\phi|^{2}\mu_{M} \notag \\
&\leq A_{2}\int_{M} \omega_{l}|d_{B}\omega_{l}||d_T\phi|^{\frac{p}{2}}|d_{B}|d_T\phi|^{\frac{p}{2}}|\mu_{M},
\end{align}
where $A_{2}=\frac{4(p-2)}{p}$.
From (\ref{ee16}) and (\ref{ee7}), we get
\begin{align}\label{ee10}
\int_{M}&\langle \omega_{l}^{2}|d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle\mu_{M}
-\int_{M}\langle \omega_{l}^{2}d_T\phi, \delta_\nabla d_\nabla |d_T\phi|^{p-2}d_T\phi\rangle\mu_{M} \notag \\
&\geq -(A_{1}+A_{2})\int_{M} \omega_{l}|d_{B}\omega_{l}||d_T\phi|^{\frac{p}{2}}|d_{B}|d_T\phi|^{\frac{p}{2}}|\mu_{M}
+\frac{A_{1}}{p}\int_{M}\omega_{l}^{2}|d_{B}|d_T\phi|^{\frac{p}{2}}|^{2}\mu_{M}.
\end{align}
From (\ref{ee4}) and Fatou's lemma, it follows that $d_{B}|d_T\phi|^{\frac{p}{2}}\in L^{2}$.
Hence, by the H\"{o}lder inequality,
$$\int_{M}\omega_{l}|d_{B}\omega_{l}||d_T\phi|^{\frac{p}{2}}|d_{B}|d_T\phi|^{\frac{p}{2}}|\mu_{M}\leq
(\int_{M}|d_T\phi|^{p}|d_{B}\omega_{l}|^{2}\mu_{M})^{\frac{1}{2}}(\int_{M}\omega_{l}^{2}|d_{B}|d_T\phi|^{\frac{p}{2}}|^{2}\mu_{M})^{\frac{1}{2}}.$$
If we let $l\rightarrow\infty$, then
\begin{align}\label{ee32}
\lim\limits_{l\rightarrow\infty}\int_{M}\omega_{l}|d_T\phi|^{\frac{p}{2}}|d_{B}\omega_{l}||d_{B}|d_T\phi|^{\frac{p}{2}}|\mu_{M}=0.
\end{align}
From (\ref{ee16}) and (\ref{ee32}), we have
\begin{align}\label{ee83}
\lim\limits_{l\rightarrow\infty}\int_{M}\langle& \omega_{l}^{2}|d_T\phi|,\Delta_B|d_T\phi|^{p-1}\rangle\mu_{M}
\geq\frac{A_{1}}{p}\int_{M}|d_{B}|d_T\phi|^{\frac{p}{2}}|^{2}\mu_{M}.
\end{align}
On the other hand, by the Rayleigh quotient theorem, we have
\begin{align}\label{ee22}
\frac{\int_{M}\langle d_{B}|d_T\phi|^{\frac{p}{2}}, d_{B}|d_T\phi|^{\frac{p}{2}}\rangle\mu_{M}}{\int_{M} |d_T\phi|^{p}\mu_{M}}\geq\mu_{0}.
\end{align}
From (\ref{ee4}), (\ref{ee57}), (\ref{ee35}), (\ref{ee32}), (\ref{ee83}) and (\ref{ee22}),
letting $l\rightarrow\infty$, we get
\begin{align}\label{ee17}
\frac{A_{1}}{p}\mu_{0}\int_{M}|d_T\phi|^{p}\mu_{M}
&\leq\frac{A_{1}}{p}\int_{M}|d_{B}|d_T\phi|^{\frac{p}{2}}|^{2}\mu_{M} \notag\\
&\leq-\sum_a \int_{M} |d_T\phi|^{p-2}g_{Q'}(d_T \phi({\rm Ric^{Q}}(E_a)),d_T \phi(E_a))\mu_{M} \notag\\
&\leq C\int_{M}|d_T\phi|^{p}\mu_{M}.
\end{align}
Since $C=\frac{A_{1}}{p}\mu_{0}$, the equation (\ref{ee17}) implies that
\begin{align}\label{ee18}
\sum_a \int_{M}|d_T\phi|^{p-2}g_{Q'}(d_T \phi(({\rm Ric^{Q}}+C)(E_a)),d_T \phi(E_a))\mu_{M}=0.
\end{align}
Since ${\rm Ric^{Q}}>-C$ at some point $x_{0}$, (\ref{ee18}) implies $d_T \phi=0$.
It means that $\phi$ is transversally constant.
\end{proof}
\begin{rem}
Theorem \ref{th5} for the point foliation can be found in \cite{mlj}.
\end{rem}
\begin{cor}\label{co5}
Let $(M,g,\mathcal F)$ be a complete foliated Riemannian manifold with coclosed mean curvature form $\kappa_{B}$ whose leaves are all compact.
Let $(M',g',\mathcal F')$ be a foliated Riemannian manifold with non-positive transversal sectional curvature $K^{Q'}$. Assume that the transversal Ricci curvature ${\rm Ric^{Q}}$ of $M$ satisfies ${\rm Ric^{Q}}\geq-\frac{4(p-1)}{p^{2}}\mu_{0}$ for all $x\in M$ and ${\rm Ric^{Q}}>-\frac{4(p-1)}{p^{2}}\mu_{0}$ at some point $x_{0}$.
Then any $(\mathcal F,\mathcal F')_{q}$-harmonic map $\phi : (M,g,\mathcal F) \rightarrow (M', g',\mathcal F')$ with $2\leq q\leq p$ of $E_{B,q}(\phi)<\infty$ is transversally constant.
\end{cor}
\begin{proof}
For $2\leq q\leq p$, we have $\frac{4(q-1)}{q^{2}}\geq\frac{4(p-1)}{p^{2}}$, so the proof follows from Theorem \ref{th5}.
\end{proof}
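Indeed, the inequality used above is elementary: for $t\geq2$,
\begin{align*}
\frac{d}{dt}\Big(\frac{4(t-1)}{t^{2}}\Big)=\frac{4(2-t)}{t^{3}}\leq0,
\end{align*}
so $t\mapsto\frac{4(t-1)}{t^{2}}$ is non-increasing on $[2,\infty)$ and hence $\frac{4(q-1)}{q^{2}}\geq\frac{4(p-1)}{p^{2}}$ for $2\leq q\leq p$. Since $\mu_{0}\geq0$, the curvature assumptions of Theorem \ref{th5} then hold with $p$ replaced by $q$.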
The following corollary can be obtained easily when $p=2$.
\begin{cor}\label{co6}
Let $(M,g,\mathcal F)$ be a complete foliated Riemannian manifold with coclosed mean curvature form $\kappa_{B}$ whose leaves are all compact.
Let $(M',g',\mathcal F')$ be a foliated Riemannian manifold with non-positive transversal sectional curvature $K^{Q'}$. Assume that the transversal Ricci curvature ${\rm Ric^{Q}}$ of $M$ satisfies ${\rm Ric^{Q}}\geq-\mu_{0}$ for all $x\in M$ and ${\rm Ric^{Q}}>-\mu_{0}$ at some point $x_{0}$.
Then any $(\mathcal F,\mathcal F')$-harmonic map $\phi : (M,g,\mathcal F) \rightarrow (M', g',\mathcal F')$ of $E_{B}(\phi)<\infty$ is transversally constant.
\end{cor}
\begin{rem} Corollary \ref{co6} for the transversal harmonic map has been studied by Fu and Jung \cite{FJ}. If $\mathcal F$ is minimal, then the Liouville type theorem (Theorem \ref{th5}) holds for the transversal $p$-harmonic map. However, we do not know whether Theorem \ref{th5} holds for the transversal $p$-harmonic map $(p>2)$ on arbitrary foliated Riemannian manifolds.
\end{rem}
\section{Introduction}
A symplectic toric manifold is a compact
connected symplectic manifold of dimension $2n$ with an effective
Hamiltonian action of a compact $n$-dimensional torus $T$. A
famous result of Delzant \cite{de88} describes a bijective
correspondence between symplectic toric manifolds and simple
convex polytopes, called Delzant polytopes. The polytope
associated to a symplectic toric manifold $M$ is the image of the
moment map on $M$.
Origami manifolds appeared in differential geometry recently as a
generalization of symplectic manifolds \cite{ca-gu-pi11}.
A folded symplectic form on a $2n$-dimensional manifold $M$ is a
closed $2$-form $\omega$ whose top power $\omega^n$ vanishes
transversally on a subset $\Sing$ and whose restriction to points
in $\Sing$ has maximal rank. Then $\Sing$ is a codimension-one
submanifold of $M$, called the fold. The maximality of the
restriction of $\omega$ to $\Sing$ implies the existence of a line
field on $\Sing$. If the
line field is the vertical bundle of some principal
$S^1$-fibration $\Sing\to X$, then $\omega$ is called an \emph{origami form}.
Toric origami manifolds are generalizations of symplectic toric manifolds.
The notions of a Hamiltonian action
and a moment map are defined similarly to the symplectic case, and
a toric origami manifold is defined to be a compact connected
origami manifold $(M^{2n}, \omega)$ equipped with an effective
Hamiltonian action of a torus $T$. Similarly to Delzant's theorem
for symplectic toric manifolds, toric origami manifolds
bijectively correspond to special combinatorial structures, called
origami templates, via moment maps \cite{ca-gu-pi11}. An origami
template is a collection of Delzant polytopes with some additional
gluing data encoded by a template graph $G$.
A problem of particular interest is to describe the cohomology ring and the
$T$-equivariant cohomology ring of a toric origami manifold $M$ in terms of the corresponding
origami template, see \cite{ca-gu-pi11} and \cite{ho-pi12}. In this paper
we present some partial results concerning this problem.
While in general toric origami manifolds can be non-orientable, in
this paper we restrict to the orientable case. Under this
assumption the action of $T$ on a toric origami manifold $M$ is
locally standard, so the orbit space $M/T$ is a manifold with
corners. One can describe the orbit space $M/T$ as a result of
gluing polytopes of the origami template. This shows that $M/T$ is
homotopy equivalent to the template graph $G$. Every proper face of
$M/T$ is homotopy equivalent to some subgraph of $G$. Thus a toric
origami manifold has the property that the orbit space and all its
faces are homotopy equivalent to wedges of circles or
contractible.
If $G$ is a tree, then $M/T$ and all its faces are contractible.
Hence, a general result of \cite{ma-pa06} applies. It gives a
description similar to toric varieties (or quasitoric manifolds):
$H^*_T(M)\cong \Z[M/T]$ and $H^*(M)\cong
\Z[M/T]/(\theta_1,\ldots,\theta_n)$. Here, $\Z[M/T]$ is the face ring
of the manifold with corners $M/T$, and
$(\theta_1,\ldots,\theta_n)$ is the ideal generated by the linear
system of parameters, defined by the characteristic map on $M/T$.
This case is discussed in detail in \cite{ho-pi12}. However, if $G$ has
cycles, even the Betti numbers of $M$ remain unknown in general; only when $M$ is of dimension $4$ have the Betti numbers been described, by Holm and Pires in \cite{ho-pi13}.
In this paper we study the cohomology of an orientable toric
origami manifold $M$ in the case when $M/T$ is itself arbitrary,
but every proper face of $M/T$ is acyclic (this assumption is not always satisfied, see Section~\ref{sectNonAcyclicFaces}). A different approach to
this task, based on the spectral sequence of the filtration by
orbit types, is proposed in a more general situation in
\cite{ayze14}. For toric origami manifolds, the calculation of
Betti numbers in this paper gives the same answer but a simpler
proof.
The paper is organized as follows. Section
\ref{sectToricOrigami} contains necessary definitions and
properties of toric origami manifolds and origami templates. In
Section \ref{sectBettiNumbers} we describe the procedure which
simplifies a given toric origami manifold step-by-step, and give
an inductive formula for Betti numbers. Section
\ref{sectBettiFace} provides more convenient formulas expressing
Betti numbers of $M$ in terms of the first Betti number of $M/T$
and the face numbers of the dual simplicial poset. Section
\ref{sectEquivar} is devoted to an equivariant cohomology. While
toric origami manifolds serve as a motivating example, we describe
the equivariant cohomology ring in a more general setting. In
Section \ref{sectSerreSpec}, we describe the properties of the
Serre spectral sequence of the fibration $\pi\colon ET\times_TM\to
BT$ for a toric origami manifold $M$. The restriction homomorphism
$\iota^*\colon H^*_T(M)\to H^*(M)$ induces a graded ring
homomorphism $\bar\iota^*\colon H^*_T(M)/(\pi^*(H^{2}(BT)))\to
H^*(M)$. In Section \ref{sectTowardsRing} we use Schenzel's
theorem and the calculations of previous sections to show that
$\bar\iota^*$ is an isomorphism except in degrees $2$, $4$ and
$2n-1$, a monomorphism in degrees $2$ and $2n-1$, and an epimorphism in
degree $4$; we also find the ranks of the kernels and cokernels in these exceptional degrees. Since $\bar\iota^*$ is a ring
homomorphism, these considerations describe the product structure
on the most part of $H^*(M)$, except for the cokernel of
$\bar\iota^*$ in degree $2$. Section \ref{sect4dimCase}
illustrates our considerations in the $4$-dimensional case. In
section \ref{sectCokernel} we give a geometrical description of
the cokernel of $\bar\iota^*$ in degree 2, and suggest a partial
description of the cohomology multiplication for these extra
elements. The discussion of Section~\ref{sectNonAcyclicFaces}
shows which part of the results can be generalized to the case of
non-acyclic faces.
\section{Toric origami manifolds}\label{sectToricOrigami}
In this section, we recall the definitions and properties of toric
origami manifolds and origami templates. Details can be found in
\cite{ca-gu-pi11}, \cite{ma-pa13} or \cite{ho-pi12}.
A folded symplectic form on a $2n$-dimensional manifold $M$ is a
closed 2-form $\omega$ whose top power $\omega^n$ vanishes
transversally on a subset $\Sing$ and whose restriction to points
in $\Sing$ has maximal rank. Then $\Sing$ is a codimension-one
submanifold of $M$ and is called the fold. If $\Sing$ is empty,
$\omega$ is a genuine symplectic form. The pair $(M, \omega)$ is
called a folded symplectic manifold. Since the restriction of
$\omega$ to $\Sing$ has maximal rank, it has a one-dimensional
kernel at each point of $\Sing$. This determines a line field on
$\Sing$ called the null foliation. If the null foliation is the
vertical bundle of some principal $S^1$-fibration $\Sing\to X$
over a compact base $X$, then the folded symplectic form $\omega$
is called an origami form and the pair $(M,\omega)$ is called an
origami manifold. The action of a torus $T$ on an origami manifold
$(M,\omega)$ is Hamiltonian if it admits a moment map $\mu\colon
M\to \ta^*$ to the dual Lie algebra of the torus, which satisfies
the conditions: (1) $\mu$ is equivariant with respect to the given
action of $T$ on $M$ and the coadjoint action of $T$ on the vector
space $\ta^*$ (this action is trivial for the torus); (2) $\mu$
collects Hamiltonian functions, that is, $d\langle\mu,V\rangle =
\imath_{V^\#}\omega$ for each $V\in \ta$, where $V^\#$ is the
vector field on $M$ generated by $V$.
\begin{definition}
A toric origami manifold $(M,\omega,T,\mu)$, abbreviated as $M$,
is a compact connected origami manifold $(M,\omega)$ equipped with
an effective Hamiltonian action of a torus $T$ with $\dim T =
\frac12\dim M$ and with a choice of a corresponding moment map
$\mu$.
\end{definition}
When the fold $\Sing$ is empty, a toric origami manifold is a
symplectic toric manifold. A theorem of Delzant \cite{de88} says
that symplectic toric manifolds are classified by their moment
images called Delzant polytopes. Recall that a Delzant polytope $P$ in
$\R^n$ is a simple convex polytope whose normal fan is smooth
(with respect to some given lattice $\Z^n\subset\R^n$). In other
words, all normal vectors to the facets of $P$ have rational
coordinates, and the primitive normal vectors
$\nu(F_1),\ldots,\nu(F_n)$ form a basis of the lattice $\Z^n$ whenever facets $F_1,\ldots,F_n$ meet in a
vertex of $P$. Let
$\Dd_n$ denote the set of all Delzant polytopes in $\R^n$ (w.r.t.
a given lattice) and $\Ff_n$ be the set of all their facets.
The moment data of a toric origami manifold can be encoded into an
origami template $(G,\PsiV,\PsiE)$, where
\begin{itemize}
\item $G$ is a connected graph (loops and multiple edges are allowed) with the
vertex set $V$ and edge set $E$;
\item $\PsiV\colon V\to \Dd_n$;
\item $\PsiE\colon E\to \Ff_n$;
\end{itemize}
subject to the following conditions:
\begin{itemize}
\item If $e\in E$ is an edge of $G$ with endpoints $v_1, v_2\in
V$, then $\PsiE(e)$ is a facet of both polytopes $\PsiV(v_1)$ and
$\PsiV(v_2)$, and these polytopes coincide near $\PsiE(e)$ (this
means there exists an open neighborhood $U$ of $\PsiE(e)$ in
$\R^n$ such that $U\cap\PsiV(v_1)=U\cap\PsiV(v_2)$).
\item If $e_1,e_2\in E$ are two edges of $G$ adjacent to $v\in V$,
then $\PsiE(e_1)$ and $\PsiE(e_2)$ are disjoint facets of
$\PsiV(v)$.
\end{itemize}
The facets of the form $\PsiE(e)$ for $e\in E$ are called the fold
facets of the origami template.
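To make the definition concrete, here is a minimal example (used only as an illustration): let $G$ consist of two vertices $v_{1},v_{2}$ joined by a single edge $e$, let $\PsiV(v_{1})=\PsiV(v_{2})=P$ for a fixed Delzant polytope $P\in\Dd_n$, and let $\PsiE(e)$ be any facet of $P$. Both conditions above are satisfied trivially: the two polytopes coincide, hence coincide near $\PsiE(e)$, and each vertex of $G$ meets only one edge. The unique fold facet of this template is $\PsiE(e)$.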
The following is a generalization of the theorem by Delzant to
toric origami manifolds.
\begin{theorem}[\cite{ca-gu-pi11}]\label{theo:classifOrigami}
Assigning the moment data of a toric origami manifold induces a
one-to-one correspondence
\[
\{\mbox{toric origami
manifolds}\}\leftrightsquigarrow\{\mbox{origami templates}\}
\]
up to equivariant origami symplectomorphism on the left-hand side,
and affine equivalence on the right-hand side.
\end{theorem}
Denote by $|(G,\PsiV,\PsiE)|$ the topological space constructed
from the disjoint union $\bigsqcup_{v\in V}\PsiV(v)$ by
identifying facets $\PsiE(e)\subset\PsiV(v_1)$ and
$\PsiE(e)\subset\PsiV(v_2)$ for every edge $e\in E$ with endpoints
$v_1$ and $v_2$.
An origami template $(G,\PsiV,\PsiE)$ is called co\"{o}rientable
if the graph $G$ has no loops (this means all edges have different
endpoints). Then the corresponding toric origami manifold is also
called co\"{o}rientable. If $M$ is orientable, then $M$ is
co\"{o}rientable \cite{ho-pi12}. If $M$ is co\"{o}rientable, then
the action of $T^n$ on $M$ is locally standard \cite[Lemma
5.1]{ho-pi12}. We review the definition of locally standard action
in Section \ref{sectEquivar}.
Let $(G,\PsiV,\PsiE)$ be an origami template and $M$ the
associated toric origami manifold which is supposed to be
orientable in the following. The topological space
$|(G,\PsiV,\PsiE)|$ is a manifold with corners with the face
structure induced from the face structures on polytopes
$\PsiV(v)$, and the space $|(G,\PsiV,\PsiE)|$ is homeomorphic to $M/T$ as a
manifold with corners. The space $|(G,\PsiV,\PsiE)|$ has the same
homotopy type as the graph $G$, thus $M/T\cong |(G,\PsiV,\PsiE)|$
is either contractible or homotopy equivalent to a wedge of
circles.
Under the correspondence of Theorem \ref{theo:classifOrigami}, the
fold facets of the origami template correspond to the connected
components of the fold $\Sing$ of $M$. If $F=\PsiE(e)$ is a fold
facet of the template $(G,\PsiV,\PsiE)$, then the corresponding
component $Z\subseteq\mu^{-1}(F)$ of the fold $\Sing\subset M$ is a
principal $S^1$-bundle over a compact space $B$. The space $B$ is
a $(2n-2)$-dimensional symplectic toric manifold corresponding to
the Delzant polytope $F$. In the following we also call the
connected components $Z$ of the fold $\Sing$ the ``folds'' by
abuse of terminology.
\section{Betti numbers of toric origami manifolds} \label{sectBettiNumbers}
Let $M$ be an orientable toric origami manifold of dimension $2n$
with a fold $Z$. Let $F$ be the corresponding folded facet in the
origami template of $M$ and let $B$ be the symplectic toric
manifold corresponding to $F$. The normal line bundle of $Z$ to
$M$ is trivial so that an invariant closed tubular neighborhood of
$Z$ in $M$ can be identified with $Z\times [-1,1]$. We set
\[
\tM:=M-\Int(Z\times [-1,1]).
\]
This has two boundary components which are copies of $Z$. We
close $\tM$ by gluing two copies of the disk bundle associated to
the principal $S^1$-bundle $Z\to B$ along their boundaries. The
resulting closed manifold (possibly disconnected), denoted $M'$,
is again a toric origami manifold and the graph associated to $M'$ is the graph associated to $M$ with the edge corresponding to the folded facet $F$ removed.
Let $G$ be the graph associated to the origami template of $M$ and
let $b_1(G)$ be its first Betti number. We assume that $b_1(G)\ge
1$. A folded facet in the origami template of $M$ corresponds to
an edge of $G$. We choose an edge $e$ in a (non-trivial) cycle of
$G$ and let $F$, $Z$ and $B$ be respectively the folded facet, the
fold and the symplectic toric manifold corresponding to the edge
$e$. Then $M'$ is connected and since the graph $G'$ associated to $M'$
is the graph $G$ with the edge $e$ removed, we have
$b_1(G')=b_1(G)-1$.
Two copies of $B$ lie in $M'$ as closed submanifolds, denoted
$B_+$ and $B_-$. Let $N_+$ (resp. $N_-$) be an invariant closed
tubular neighborhood of $B_+$ (resp. $B_-$) and $Z_+$ (resp.
$Z_-$) be the boundary of $N_+$ (resp. $N_-$). Note that
$M'-\Int(N_+\cup N_-)$ can naturally be identified with $\tM$, so
that
\[
\tM=M'-\Int(N_+\cup N_-)=M-\Int(Z\times [-1,1])
\]
and
\begin{alignat}{2}
M'&=\tM\cup(N_+\cup N_-),\hspace{1cm} &\tM\cap(N_+\cup N_-)=Z_+\cup Z_-,\label{eq:3-1-1} \\
M&=\tM\cup(Z\times[-1,1]),\hspace{1cm} &\tM\cap(Z\times[-1,1])=Z_+\cup Z_-. \label{eq:3-1-2}
\end{alignat}
\begin{remark} It follows from \eqref{eq:3-1-1} and \eqref{eq:3-1-2} that
\[
\chi(M')=\chi(\tM)+2\chi(B),\quad \chi(M)=\chi(\tM)
\]
and hence $\chi(M')=\chi(M)+2\chi(B)$. Note that this formula
holds without the acyclicity assumption (made later) on proper
faces of $M/T$.
\end{remark}
We shall investigate relations among the Betti numbers of $M, M',
\tM, Z$, and $B$. The spaces $\tM$ and $Z$ are auxiliary ones and
our aim is to find relations among the Betti numbers of $M, M'$
and $B$. In the following, all cohomology groups and Betti numbers
are taken with $\Z$ coefficients unless otherwise stated but the
reader will find that the same argument works over any field.
\begin{lemma} \label{lemm:3-1}
The Betti numbers of $Z$ and $B$ have the relation
$$b_{2i}(Z)-b_{2i-1}(Z)=b_{2i}(B)-b_{2i-2}(B)$$ for every~$i$.
\end{lemma}
\begin{proof}
Since $\pi\colon Z\to B$ is a principal $S^1$-bundle and
$H^{odd}(B)=0$, the Gysin exact sequence for the principal
$S^1$-bundle splits into a short exact sequence
\begin{equation} \label{eq:3-4}
0\to H^{2i-1}(Z)\to H^{2i-2}(B)\to H^{2i}(B)\xrightarrow{\pi^*} H^{2i}(Z)\to 0\quad \text{for every $i$}
\end{equation}
and this implies the lemma.
\end{proof}
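As a quick illustration of Lemma~\ref{lemm:3-1} (independent of whether this particular bundle arises as a fold of the given $M$), consider the Hopf bundle $S^{3}\to\mathbb{C}P^{1}$, a principal $S^1$-bundle over the symplectic toric manifold $B=\mathbb{C}P^{1}$: for $i=1$ both sides of the relation vanish, while for $i=2$ we have $b_{4}(S^{3})-b_{3}(S^{3})=-1=b_{4}(B)-b_{2}(B)$.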
\begin{lemma} \label{lemm:3-4}
The Betti numbers of $\tilde{M}$, $M'$, and $B$ have the relation
$$b_{2i}(\tM)-b_{2i-1}(\tM)=b_{2i}(M')-b_{2i-1}(M')-2b_{2i-2}(B)$$ for every $i$.
\end{lemma}
\begin{proof}
We consider the Mayer-Vietoris exact sequence in cohomology for
the triple $(M', \tM, N_+\cup N_-)$:
\begin{alignat*}{5}
\to &H^{2i-2}(M')&\to &H^{2i-2}(\tM)\oplus H^{2i-2}(N_+\cup N_-)&\to &H^{2i-2}(Z_+\cup Z_-)\\
\xrightarrow{\delta^{2i-2}}&H^{2i-1}(M')&\to &H^{2i-1}(\tM)\oplus H^{2i-1}(N_+\cup N_-)&\to &H^{2i-1}(Z_+\cup Z_-)\\
\xrightarrow{\delta^{2i-1}}&H^{2i}(M')&\to &H^{2i}(\tM)\oplus H^{2i}(N_+\cup N_-)&\to &H^{2i}(Z_+\cup Z_-)\\
\xrightarrow{\delta^{2i}}&H^{2i+1}(M')&\to& &&
\end{alignat*}
Since the inclusions $B=B_\pm\hookrightarrow N_\pm$ are homotopy
equivalences and $Z_\pm=Z$, the restriction homomorphism
$H^q(N_+\cup N_-)\to H^q(Z_+\cup Z_-)$ above can be replaced by
$\pi^*\oplus\pi^*\colon H^q(B)\oplus H^q(B)\to H^q(Z)\oplus
H^q(Z)$ which is surjective for even $q$ from the sequence~\eqref{eq:3-4}.
Therefore, $\delta^{2i-2}$ and $\delta^{2i}$ in the exact sequence
above are trivial. It follows that
\[
\begin{split}
&b_{2i-1}(M')-b_{2i-1}(\tM)-2b_{2i-1}(B)+2b_{2i-1}(Z)\\
-&b_{2i}(M')+b_{2i}(\tM)+2b_{2i}(B)-2b_{2i}(Z)=0.
\end{split}
\]
Here $b_{2i-1}(B)=0$ because $B$ is a symplectic toric manifold,
and $2b_{2i-1}(Z)+2b_{2i}(B)-2b_{2i}(Z)=2b_{2i-2}(B)$ by
Lemma~\ref{lemm:3-1}. Using these identities, the identity above
reduces to the identity in the lemma.
\end{proof}
Next we consider the Mayer-Vietoris exact sequence in cohomology
for the triple $(M,\tM,Z\times [-1,1])$:
\begin{alignat*}{5}
\to &H^{2i-2}(M)&\to &H^{2i-2}(\tM)\oplus H^{2i-2}(Z\times[-1,1])&\to &H^{2i-2}(Z_+\cup Z_-)\\
\to &H^{2i-1}(M)&\to &H^{2i-1}(\tM)\oplus H^{2i-1}(Z\times[-1,1])&\to &H^{2i-1}(Z_+\cup Z_-)\\
\to &H^{2i}(M)&\to &H^{2i}(\tM)\oplus H^{2i}(Z\times[-1,1])&\to &H^{2i}(Z_+\cup Z_-)\to
\end{alignat*}
We make the following assumption:
\begin{quote}
$(*)$ The restriction map $H^{2j}(\tM)\oplus H^{2j}(Z\times[-1,1])\to H^{2j}(Z_+\cup Z_-)$ in the Mayer-Vietoris sequence above is surjective for $j\ge 1$.
\end{quote}
Note that the restriction map above is not surjective when $j=0$ because the image is the diagonal copy of $H^0(Z)$ in this case. We will see in Lemma~\ref{lemm:3-5} below that the assumption
$(*)$ is satisfied when every proper face of $M/T$ is acyclic.
\begin{lemma} \label{lemm:3-6}
Suppose that the assumption $(*)$ is satisfied. Then
\[
\begin{split}
&b_2(\tM)-b_1(\tM)=b_2(M)-b_1(M)+b_2(B),\\
&b_{2i}(\tM)-b_{2i-1}(\tM)=b_{2i}(M)-b_{2i-1}(M)+b_{2i}(B)-b_{2i-2}(B) \quad\text{for $i\ge 2$}.
\end{split}
\]
\end{lemma}
\begin{proof}
By the assumption $(*)$, the Mayer-Vietoris exact sequence for the
triple $(M,\tM,Z\times [-1,1])$ splits into short exact sequences:
\begin{alignat*}{5}
0\to &H^{0}(M)&\to &H^{0}(\tM)\oplus H^{0}(Z\times[-1,1])&\to &H^{0}(Z_+\cup Z_-)\\
\to &H^{1}(M)&\to &H^{1}(\tM)\oplus H^{1}(Z\times[-1,1])&\to &H^{1}(Z_+\cup Z_-)\\
\to &H^{2}(M)&\to &H^{2}(\tM)\oplus H^{2}(Z\times[-1,1])&\to &H^{2}(Z_+\cup Z_-)\to 0
\end{alignat*}
and for $i\ge 2$
\begin{alignat*}{5}
0 \to &H^{2i-1}(M)&\to &H^{2i-1}(\tM)\oplus H^{2i-1}(Z\times[-1,1])&\to &H^{2i-1}(Z_+\cup Z_-)\\
\to &H^{2i}(M)&\to &H^{2i}(\tM)\oplus H^{2i}(Z\times[-1,1])&\to &H^{2i}(Z_+\cup Z_-)\to 0.
\end{alignat*}
The former short exact sequence above yields
\[
b_2(\tM)-b_1(\tM)=b_2(M)-b_1(M)+b_2(Z)-b_1(Z)+1
\]
while the latter above yields
\[
b_{2i}(\tM)-b_{2i-1}(\tM)=b_{2i}(M)-b_{2i-1}(M)+b_{2i}(Z)-b_{2i-1}(Z) \quad\text{for $i\ge 2$}.
\]
Here $b_{2i}(Z)-b_{2i-1}(Z)=b_{2i}(B)-b_{2i-2}(B)$ for every $i$ by Lemma~\ref{lemm:3-1}, so our lemma follows.
\end{proof}
\begin{lemma} \label{lemm:3-7}
Suppose that the assumption $(*)$ is satisfied and $n\ge 2$. Then
\[
\begin{split}
&b_1(M')=b_1(M)-1,\quad b_2(M')=b_2(M)+b_2(B)+1,\\
&b_{2i+1}(M')=b_{2i+1}(M)\quad \text{for $1\le i\le n-2$}.
\end{split}
\]
\end{lemma}
\begin{proof}
It follows from Lemma~\ref{lemm:3-4} and Lemma~\ref{lemm:3-6} that
\begin{equation}\label{eq:3-5}
b_{2i}(M')-b_{2i-1}(M')=b_{2i}(M)-b_{2i-1}(M)+b_{2i}(B)+b_{2i-2}(B) \quad\text{for $i\ge 2$}.
\end{equation}
Take $i=n$ in \eqref{eq:3-5} and use Poincar\'e duality. Then we
obtain
\[
b_0(M')-b_1(M')=b_0(M)-b_1(M)+b_0(B)
\]
which reduces to the first identity in the lemma. This together
with the first identity in Lemma~\ref{lemm:3-6} implies the second
identity in the lemma.
Similarly, take $i=n-1(\ge 2)$ in \eqref{eq:3-5} and use
Poincar\'e duality. Then we obtain
\[
b_2(M')-b_3(M')=b_2(M)-b_3(M)+b_0(B)+b_2(B).
\]
This together with the second identity in the lemma implies
$b_3(M')=b_3(M)$.
Take $i$ to be $n-i$ in \eqref{eq:3-5} (so $2\le i\le n-2$) and
use Poincar\'e duality. Then we obtain
\begin{equation*}
b_{2i}(M')-b_{2i+1}(M')=b_{2i}(M)-b_{2i+1}(M)+b_{2i-2}(B)+b_{2i}(B).
\end{equation*}
This together with \eqref{eq:3-5} implies
\[
b_{2i+1}(M')-b_{2i-1}(M')=b_{2i+1}(M)-b_{2i-1}(M) \quad \text{for $2\le i\le n-2$}.
\]
Since we know $b_3(M')=b_3(M)$, this implies the last identity in
the lemma.
\end{proof}
The following is a key lemma.
\begin{lemma} \label{lemm:3-5}
Suppose that every proper face of $M/T$ is acyclic. Then the
homomorphism $H^{2j}(\tM)\to H^{2j}(Z_+\cup Z_-)$ induced from the
inclusion is surjective for $j\ge 1$, in particular, the
assumption $(*)$ is satisfied.
\end{lemma}
\begin{proof}
Since $B_+\cup B_-$ is a deformation retract of $N_+\cup N_-$, the
following diagram is commutative:
\[
\begin{CD}
H^{2j}(M')@>>> H^{2j}(B_+\cup B_-)\\
@VVV @VV \pi_\pm^* V\\
H^{2j}(\tM) @>>> H^{2j}(Z_+\cup Z_-)
\end{CD}
\]
where $\pi_\pm\colon Z_+\cup Z_-\to B_+\cup B_-$ is the projection
and the other homomorphisms are induced from the inclusions. By
\eqref{eq:3-4}, $\pi_\pm^*$ is surjective, so it suffices to show
that the homomorphism $H^{2j}(M')\to H^{2j}(B_+\cup B_-)$ is
surjective for $j\ge 1$.
The inverse image of a codimension $j$ face of $M'/T$ by the
quotient map $M'\to M'/T$ is a codimension $2j$ closed orientable
submanifold of $M'$ and defines an element of $H_{2n-2j}(M')$ so
that its Poincar\'e dual yields an element of $H^{2j}(M')$. The
same is true for $B=B_+$ or $B_-$. Note that $H^{2j}(B)$ is
additively generated by $\tau_{K}$'s where $K$ runs over all
codimension $j$ faces of $F=B/T$.
Set $F_\pm=B_\pm/T$, which are copies of the folded facet $F=B/T$.
Let $K_+$ be a codimension $j$ face of $F_+$. Then there is a
codimension $j$ face $L$ of $M'/T$ such that $K_+=L\cap F_+$. We
note that $L\cap F_-=\emptyset$. Indeed, if $L\cap
F_-\not=\emptyset$, then $L\cap F_-$ must be a codimension $j$
face of $F_-$, say $H_-$. If $H_-$ is the copy $K_-$ of $K_+$,
then $L$ will create a codimension $j$ non-acyclic face of $M/T$
which contradicts the acyclicity assumption on proper faces of
$M/T$. Therefore, $H_-\not=K_-$. However, $F_\pm$ are
respectively facets of some Delzant polytopes, say $P_\pm$, and
the neighborhood of $F_+$ in $P_+$ is same as that of $F_-$ in
$P_-$ by definition of an origami template (although $P_+$ and
$P_-$ may not be isomorphic). Let $\bar H$ and $\bar K$ be the
codimension $j$ faces of $P_-$ such that $\bar H\cap F=H_-$ and
$\bar K\cap F=K_-$. Since $H_-\not=K_-$, the normal cones of
$\bar H$ and $\bar K$ are different. However, these normal cones
must agree with that of $L$ because $L\cap F_+=K_+$ and $L\cap
F_-=H_-$ and the neighborhood of $F_+$ in $P_+$ is same as that of
$F_-$ in $P_-$. This is a contradiction.
The codimension $j$ face $L$ of $M'/T$ associates an element
$\tau_L\in H^{2j}(M')$. Since $L\cap F_+=K_+$ and $L\cap
F_-=\emptyset$, the restriction of $\tau_L$ to $H^{2j}(B_+\cup
B_-)=H^{2j}(B_+)\oplus H^{2j}(B_-)$ is $(\tau_{K_+},0)$, where
$\tau_{K_+}\in H^{2j}(B_+)$ is associated to $K_+$. Since
$H^{2j}(B_+)$ is additively generated by $\tau_{K_+}$'s where
$K_+$ runs over all codimension $j$ faces of $F_+$, for each element $(x_+,0)$ in $H^{2j}(B_+)\oplus
H^{2j}(B_-)=H^{2j}(B_+\cup B_-)$, there is an element $y_+\in
H^{2j}(M')$ whose restriction image is $(x_+,0)$. The same is
true for each element $(0,x_-)\in H^{2j}(B_+)\oplus H^{2j}(B_-)$.
This implies the lemma.
\end{proof}
Finally, we obtain the following.
\begin{theorem} \label{theo:3-1}
Let $M$ be an orientable toric origami manifold of dimension $2n$
$(n\ge 2)$ such that every proper face of $M/T$ is acyclic. Then
\begin{equation} \label{eq:theo3-1}
b_{2i+1}(M)=0\quad\text{for $1\le i\le n-2$}.
\end{equation}
Moreover, if $M'$ and $B$ are as above, then
\begin{equation} \label{eq:theo3-1-1}
\begin{split}
&b_1(M')=b_1(M)-1\,\,(\text{hence $b_{2n-1}(M')=b_{2n-1}(M)-1$ by Poincar\'e duality}),\\
&b_{2i}(M')=b_{2i}(M)+b_{2i}(B)+b_{2i-2}(B)\quad\text{for $1\le i\le n-1$}.
\end{split}
\end{equation}
Finally, $H^*(M)$ is torsion free.
\end{theorem}
\begin{proof}
We have $b_1(M')=b_1(M)-1$ by Lemma~\ref{lemm:3-7}. Therefore, if
$b_1(M)=1$, then $b_1(M')=0$, that is, the graph associated to
$M'$ is acyclic and hence $b_{odd}(M')=0$ by \cite{ho-pi12} (or
\cite{ma-pa06}). This together with Lemma~\ref{lemm:3-7} shows
that $b_{2i+1}(M)=0$ for $1\le i\le n-2$ when $b_1(M)=1$. If
$b_1(M)=2$, then $b_1(M')=1$ so that $b_{2i+1}(M')=0$ for $1\le
i\le n-2$ by the observation just made and hence $b_{2i+1}(M)=0$
for $1\le i\le n-2$ by Lemma~\ref{lemm:3-7}. Repeating this
argument, we see \eqref{eq:theo3-1}.
The relations in~\eqref{eq:theo3-1-1} follow from Lemma~\ref{lemm:3-7} and
\eqref{eq:3-5} together with the fact $b_{2i+1}(M)=0$ for $1\le
i\le n-2$.
As we remarked before Lemma~\ref{lemm:3-1}, the arguments developed in this section work with any field coefficients, in particular with $\Z/p$-coefficients
for every prime $p$. Hence \eqref{eq:theo3-1} and \eqref{eq:theo3-1-1}
hold for Betti numbers with $\Z/p$-coefficients. Accordingly, the Betti
numbers of $M$ with $\Z$-coefficients agree with the Betti numbers
of $M$ with $\Z/p$-coefficients for every prime $p$. This implies
that $H^*(M)$ has no torsion.
\end{proof}
As for $H^1(M)$, we have a clear geometrical picture.
\begin{proposition} \label{prop:3-1}
Let $M$ be an orientable toric origami manifold of dimension $2n$
$(n\ge 2)$ such that every proper face of $M/T$ is acyclic. Let
$Z_1,\dots,Z_{b_1}$ be folds in $M$ such that the graph associated
to the origami template of $M$ with the $b_1$ edges corresponding
to $Z_1,\dots,Z_{b_1}$ removed is a tree. Then
$Z_1,\dots,Z_{b_1}$ freely generate $H_{2n-1}(M)$, equivalently,
their Poincar\'e duals $z_1$, \dots, $z_{b_1}$ freely generate
$H^1(M)$. Furthermore, all the products generated by
$z_1,\dots,z_{b_1}$ are trivial because $Z_1,\dots,Z_{b_1}$ are
disjoint and the normal bundle of $Z_j$ is trivial for each $j$.
\end{proposition}
\begin{proof}
We will prove the proposition by induction on $b_1$. When $b_1=0$, the proposition is trivial; so we may assume $b_1\ge 1$. Let $Z$ and $M'$
be as before. Since $b_1(M')=b_1-1$, there are folds $Z_1, \dots,
Z_{b_1-1}$ in $M'$ such that $Z_1,\dots,Z_{b_1-1}$ freely generate
$H_{2n-1}(M')$ by induction assumption. The folds
$Z_1,\dots,Z_{b_1-1}$ are naturally embedded in $M$ and we will
prove that these folds together with $Z$ freely generate
$H_{2n-1}(M)$.
We consider the Mayer-Vietoris exact sequence for the triple
$(M,\tM,Z\times [-1,1])$:
\[
\begin{split}
0&\to H_{2n}(M)\xrightarrow{\partial_*} H_{2n-1}(Z_+\!\cup\! Z_-) \xrightarrow{{\iota_1}_*\oplus {\iota_2}_*} H_{2n-1}(\tM)\oplus H_{2n-1}(Z\!\times\! [-1,1])\\
&\to H_{2n-1}(M)\xrightarrow{\partial_*} H_{2n-2}(Z_+\!\cup\! Z_-) \xrightarrow{{\iota_1}_*\oplus {\iota_2}_*} H_{2n-2}(\tM)\oplus H_{2n-2}(Z\!\times\! [-1,1])
\end{split}
\]
where $\iota_1$ and $\iota_2$ are the inclusions. Since
$\iota_1^*\colon H^{2n-2}(\tM)\to H^{2n-2}(Z_+\cup Z_-)$ is
surjective by Lemma~\ref{lemm:3-5}, ${\iota_1}_*\colon
H_{2n-2}(Z_+\cup Z_-)\to H_{2n-2}(\tM)$ is injective when tensored
with $\Q$. However, $H^*(Z)$ has no torsion in odd degrees because
$H^{2i-1}(Z)$ is a subgroup of $H^{2i-2}(B)$ for every $i$ by
\eqref{eq:3-4} and $H^*(B)$ is torsion free. Therefore, $H_*(Z)$
has no torsion in even degrees, so ${\iota_1}_*\colon
H_{2n-2}(Z_+\cup Z_-)\to H_{2n-2}(\tM)$ is injective without
tensoring with $\Q$, and hence the above exact sequence reduces to
this short exact sequence:
\[
\begin{split}
0&\to H_{2n}(M)\xrightarrow{\partial_*} H_{2n-1}(Z_+\cup Z_-) \xrightarrow{{\iota_1}_*\oplus {\iota_2}_*} H_{2n-1}(\tM)\oplus H_{2n-1}(Z\times [-1,1])\\
&\to H_{2n-1}(M)\to 0.
\end{split}
\]
Noting $\partial_*([M])=[Z_+]-[Z_-]$ and
${\iota_2}_*([Z_\pm])=[Z]$, one sees that the above short exact
sequence implies an isomorphism
\begin{equation} \label{eq:3-6}
\iota_*\colon H_{2n-1}(\tM)\cong H_{2n-1}(M)
\end{equation}
where $\iota\colon \tM\to M$ is the inclusion map.
Consider the Mayer-Vietoris exact sequence for
$(M',\tM, N_+\cup N_-)$:
\[
\begin{split}
0&\to H_{2n}(M')\xrightarrow{\partial'_*} H_{2n-1}(Z_+\cup Z_-) \xrightarrow{{\iota_1}_*\oplus {\iota_3}_*} H_{2n-1}(\tM)\oplus H_{2n-1}(N_+\cup N_-)\\
&\to H_{2n-1}(M')\xrightarrow{\partial'_*} H_{2n-2}(Z_+\cup Z_-) \xrightarrow{{\iota_1}_*\oplus {\iota_3}_*} H_{2n-2}(\tM)\oplus H_{2n-2}(N_+\cup N_-)
\end{split}
\]
where $\iota_3$ is the inclusion map of the unit sphere bundle in $N_+\cup N_-$. Note that $H_{2n-1}(N_+\cup
N_-)=H_{2n-1}(B_+\cup B_-)=0$ and ${\iota_1}_*\colon
H_{2n-2}(Z_+\cup Z_-)\to H_{2n-2}(\tM)$ is injective as observed
above. Therefore, the above exact sequence reduces to this short
exact sequence:
\[
0\to H_{2n}(M')\xrightarrow{\partial'_*} H_{2n-1}(Z_+\cup Z_-) \xrightarrow{{\iota_1}_*} H_{2n-1}(\tM)
\xrightarrow{\iota_*} H_{2n-1}(M')\to 0.
\]
Here $\partial'_*([M'])=[Z_+]-[Z_-]$ and $H_{2n-1}(M')$ is freely
generated by $Z_1$, \dots, $Z_{b_1-1}$ by induction assumption.
Therefore, the above short exact sequence implies that
\begin{equation*}
\text{$H_{2n-1}(\tM)$ is freely generated by $Z_1,\dots,Z_{b_1-1}$ and $Z_+$ (or $Z_-$).}
\end{equation*}
This together with \eqref{eq:3-6} completes the induction step and
proves the proposition.
\end{proof}
\section{Relations between Betti numbers and face numbers} \label{sectBettiFace}
Let $M$ be an orientable toric origami manifold of dimension $2n$
$(n\ge 2)$ such that every proper face of $M/T$ is acyclic. In
this section we describe $b_{2i}(M)$ in terms of the face
numbers of $M/T$ and $b_1(M)$. Let $\P$ be the simplicial poset
dual to $\partial(M/T)$. As usual, we define
\[
\begin{split}
f_i&=\text{ the number of $(n-1-i)$-faces of $M/T$}\\
&=\text{ the number of $i$-simplices in $\P$}\quad \text{for $i=0,1,\dots,n-1$}
\end{split}
\]
and the $h$-vector $(h_0,h_1,\dots,h_n)$ by
\begin{equation} \label{eq:4-1}
\sum_{i=0}^nh_it^{n-i}=(t-1)^n+\sum_{i=0}^{n-1}f_i(t-1)^{n-1-i}.
\end{equation}
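As a simple illustration of \eqref{eq:4-1} (the choice of polytope is only for concreteness), let $n=2$ and let $M/T$ be a single Delzant square, e.g.\ the moment polytope of $\mathbb{C}P^{1}\times\mathbb{C}P^{1}$, so that $f_{0}=4$ (edges) and $f_{1}=4$ (vertices). Then
\[
\sum_{i=0}^{2}h_{i}t^{2-i}=(t-1)^{2}+4(t-1)+4=t^{2}+2t+1,
\]
so $(h_{0},h_{1},h_{2})=(1,2,1)$, which agrees with the even Betti numbers of $\mathbb{C}P^{1}\times\mathbb{C}P^{1}$.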
\begin{theorem} \label{theo:4-1}
Let $M$ be an orientable toric origami manifold of dimension $2n$
such that every proper face of $M/T$ is acyclic. Let $b_j$ be the
$j$th Betti number of $M$ and $(h_0,h_1,\dots,h_n)$ be the
$h$-vector of $M/T$. Then
\[
\sum_{i=0}^nb_{2i}t^i=\sum_{i=0}^nh_it^i+b_1(1+t^n-(1-t)^n),
\]
in other words, $b_0=h_0=1$ and
\[
\begin{split}
b_{2i}&=h_i-(-1)^i\binom{n}{i}b_1\quad\text{for $1\le i\le n-1$},\\
b_{2n}&=h_n+(1-(-1)^n)b_1.
\end{split}
\]
\end{theorem}
\begin{remark}
Since every
proper face of $M/T$ is acyclic, we have $h_n=(-1)^n+\sum_{i=0}^{n-1}(-1)^{n-1-i}f_i$ by \eqref{eq:4-1}
and $\chi(\partial(M/T))=\sum_{i=0}^{n-1}(-1)^if_i$. Therefore,
$h_n=(-1)^n-(-1)^n\chi(\partial(M/T))$. Since $b_{2n}=1$, it
follows from the last identity in Theorem~\ref{theo:4-1} that
\[
\chi(\partial(M/T))-\chi(S^{n-1})=((-1)^n-1)b_1.
\]
Moreover, since $b_{2i}=b_{2n-2i}$, we have
\[
\begin{split}
h_{n-i}-h_i&=(-1)^i((-1)^n-1)b_1\binom{n}{i}\\
&=(-1)^i(\chi(\partial(M/T))-\chi(S^{n-1}))\binom{n}{i} \quad\text{for $0\le i\le n$}.
\end{split}
\]
These are generalized Dehn-Sommerville relations for
$\partial(M/T)$ (or for the simplicial poset $\P$), see \cite[p.
74]{stan96} or \cite[Theorem 7.44]{bu-pa02}.
\end{remark}
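To illustrate Theorem~\ref{theo:4-1} in the lowest dimension (this is merely the case $n=2$ of the theorem, not an additional statement), note that for $n=2$ the theorem reads
\[
b_{0}=h_{0}=1,\qquad b_{2}=h_{1}+2b_{1},\qquad b_{4}=h_{2},
\]
and \eqref{eq:4-1} gives $h_{1}=f_{0}-2$, where $f_{0}$ is the number of facets of $M/T$; hence $b_{2}(M)=f_{0}-2+2b_{1}(M)$. Moreover, since $b_{4}(M)=1$, we get $h_{2}=1$, in agreement with the remark above because $\partial(M/T)$ is a union of circles and $\chi(\partial(M/T))=0$.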
We will use the notations in Section~\ref{sectBettiNumbers} freely.
For a manifold $Q$ of dimension $n$ with corners (or faces), we
define the $f$-polynomial and $h$-polynomial of $Q$ by
\[
f_Q(t)=t^n+\sum_{i=0}^{n-1}f_i(Q)t^{n-1-i},\qquad h_Q(t)=f_Q(t-1)
\]
as usual.
\begin{lemma} \label{lemm:4-4}
The $h$-polynomials of $M'/T$, $M/T$, and $F$ have the relation
$h_{M'/T}(t) =h_{M/T}(t)+(t+1)h_F(t)-(t-1)^n$. Therefore
$$t^nh_{M'/T}(t^{-1})
=t^nh_{M/T}(t^{-1})+(1+t)t^{n-1}h_F(t^{-1})-(1-t)^n.$$
\end{lemma}
\begin{proof}
In the proof of Lemma~\ref{lemm:3-5} we observed that no facet of
$M'/T$ intersects with both $F_+$ and $F_-$. This means that no
face of $M'/T$ intersects with both $F_+$ and $F_-$ because every
face of $M'/T$ is contained in some facet of $M'/T$. Noting this
fact, one can find that
\[
f_i(M'/T)=f_i(M/T)+2f_{i-1}(F)+f_i(F)\quad\text{for $0\le i\le n-1$}
\]
where $F$ is the folded facet and $f_{n-1}(F)=0$.
Therefore,
\[
\begin{split}
f_{M'/T}(t)&=t^n+\sum_{i=0}^{n-1}f_i(M'/T)t^{n-1-i}\\
&=t^n+\sum_{i=0}^{n-1}f_i(M/T)t^{n-1-i}+2\sum_{i=0}^{n-1}f_{i-1}(F)t^{n-1-i}+\sum_{i=0}^{n-2}f_i(F)t^{n-1-i}\\
&=f_{M/T}(t)+2f_F(t)+tf_F(t)-t^n.
\end{split}
\]
Replacing $t$ by $t-1$ in the identity above, we obtain the former
identity in the lemma. Replacing $t$ by $t^{-1}$ in the former
identity and multiplying the resulting identity by $t^n$, we
obtain the latter identity.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:4-1}]
Since $\sum_{i=0}^nh_i(M/T)t^i=t^nh_{M/T}(t^{-1})$,
Theorem~\ref{theo:4-1} is equivalent to
\begin{equation} \label{eq:4-8}
\sum_{i=0}^nb_{2i}(M)t^i=t^nh_{M/T}(t^{-1}) +b_1(M)(1+t^n-(1-t)^n).
\end{equation}
We shall prove \eqref{eq:4-8} by induction on $b_1(M)$. The
identity \eqref{eq:4-8} is well-known when $b_1(M)=0$. Suppose
that $k=b_1(M)$ is a positive integer and the identity
\eqref{eq:4-8} holds for $M'$ with $b_1(M')= k-1$. Then
\[
\begin{split}
&\sum_{i=0}^nb_{2i}(M)t^i\\
=&1+t^n+\sum_{i=1}^{n-1} (b_{2i}(M')-b_{2i}(B)-b_{2i-2}(B))t^i\quad \text{(by Theorem~\ref{theo:3-1})}\\
=&\sum_{i=0}^nb_{2i}(M')t^i-(1+t)\sum_{i=0}^{n-1}b_{2i}(B)t^i+1+t^n\\
=&t^nh_{M'/T}(t^{-1})+b_1(M')(1+t^n-(1-t)^n)-(1+t)t^{n-1}h_{F}(t^{-1})+1+t^n\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\text{(by \eqref{eq:4-8} applied to $M'$)}\\
=&t^nh_{M/T}(t^{-1})+b_1(M)(1+t^n-(1-t)^n) \\
&\qquad\qquad\qquad\qquad\text{(by Lemma~\ref{lemm:4-4} and $b_1(M')=b_1(M)-1$),}
\end{split}
\]
proving \eqref{eq:4-8} for $M$. This completes the induction step
and the proof of Theorem~\ref{theo:4-1}.
\end{proof}
\section{Equivariant cohomology and face ring} \label{sectEquivar}
A torus manifold $M$ of dimension $2n$ is an orientable connected
closed smooth manifold with an effective smooth action of an
$n$-dimensional torus $T$ having a fixed point (\cite{ha-ma03}).
An orientable toric origami manifold with acyclic proper faces in
the orbit space has a fixed point, so it is a torus manifold. The
action of $T$ on $M$ is called \emph{locally standard} if every
point of $M$ has a $T$-invariant open neighborhood equivariantly
diffeomorphic to a $T$-invariant open set of a faithful
representation space of $T$. Then the orbit space $M/T$ is a nice
manifold with corners\footnote{Faces of $M/T$ are defined using types of isotropy subgroups of the $T$-action on $M$. The vertices in $M/T$ correspond to the $T$-fixed points in $M$ and \emph{nice} means that there are exactly $n$ facets (i.e., codimension-one faces) meeting at each vertex in $M/T$. \emph{A nice manifold with corners} is often called \emph{a manifold with faces}.}. The torus action on an orientable toric
origami manifold is locally standard. In this section, we study
the equivariant cohomology of a locally standard torus manifold
with acyclic proper faces of the orbit space.
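As a basic illustration of local standardness (this local model plays no role in the arguments below), the standard action of $T=(S^1)^n$ on $\mathbb{C}^n$, $(t_1,\dots,t_n)\cdot(z_1,\dots,z_n)=(t_1z_1,\dots,t_nz_n)$, is a faithful representation, and the map $(z_1,\dots,z_n)\mapsto(|z_1|^2,\dots,|z_n|^2)$ identifies the orbit space $\mathbb{C}^n/T$ with $\R^n_{\geq0}$, a nice manifold with corners in which exactly $n$ facets meet at the vertex, the image of the unique fixed point $0$.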
We review some facts from \cite{ma-pa06}. Let $Q$ be a nice
manifold with corners of dimension $n$. Let $\kr$ be a
ground commutative ring with unit. We denote by $G\vee H$ the unique minimal face of $Q$ that contains both
$G$ and $H$. The face ring $\kr[Q]$ of $Q$
is a graded ring defined by
\[
\kr[Q]:=\kr[v_F:\text{ $F$ a face}]/I_Q
\]
where $\deg v_F=2\codim F$ and $I_Q$ is the ideal generated by all
elements
\[
v_Gv_H-v_{G\vee H}\sum_{E\in G\cap H}v_E.
\]
The dual poset of the face poset of $Q$ is a
simplicial poset of dimension $n-1$ and its face ring over $\kr$
(see \cite[p.113]{stan96}) agrees with $\kr[Q]$. For each vertex
$p\in Q$, the restriction map $s_p$ is defined as the quotient map
\[
s_p\colon \kr[Q]\to \kr[Q]/(v_F:\ p\notin F)
\]
and it is proved in \cite[Proposition 5.5]{ma-pa06} that the image
$s_p(\kr[Q])$ is the polynomial ring
$\kr[v_{Q_{i_1}},\dots,v_{Q_{i_n}}]$ where $Q_{i_1},\dots,Q_{i_n}$
are the $n$ different facets containing $p$.
\newpage
\begin{lemma}[Lemma 5.6 in \cite{ma-pa06}] \label{lemm:1-1}
If every face of $Q$ has a vertex, then the sum $s=\oplus_{p}s_p$
of restriction maps over all vertices $p\in Q$ is a monomorphism
from $\kr[Q]$ to the sum of polynomial rings.
\end{lemma}
In particular, $\kr[Q]$ has no nonzero nilpotent element if every
face of $Q$ has a vertex. It is not difficult to see that every
face of $Q$ has a vertex if every proper face of $Q$ is acyclic.
Let $M$ be a locally standard torus manifold. Then the orbit space
$M/T$ is a nice manifold with corners. Let $q\colon M\to M/T$ be
the quotient map. Note that $M^\circ:=M-q^{-1}(\partial(M/T))$ is the
$T$-free part. The projection $ET\times M\to M$ induces a map
$\bar q\colon ET\times_T M\to M/T$, where $ET$ denotes the total space of the universal principal $T$-bundle and $ET\times_T M$ denotes the orbit space of $ET\times M$ by the diagonal action of $T$ on $ET\times M$. Similarly we have a map $\bar
q^\circ\colon ET\times_T M^\circ\to M^\circ/T$. The exact sequence
of the equivariant cohomology groups for the pair $(M,M^\circ)$ together
with the maps $\bar q$ and $\bar q^\circ$ produces the following
commutative diagram:
\begin{equation*}
\begin{CD}
H^*_T(M,M^\circ)@>\eta^*>> H^*_T(M)@>{\iota^*}>> H^*_T(M^\circ)\\
@. @A\bar q^*AA @AA{(\bar q^\circ)}^* A\\
@. H^*(M/T) @>\bar\iota^*>> H^*(M^\circ/T)
\end{CD}
\end{equation*}
where $\eta$, $\iota$ and $\bar\iota$ are the inclusions and $H^*_T(X,Y):=H^*(ET\times_T X,ET\times_TY)$ for a $T$-space $X$ and its $T$-subspace $Y$ as usual. Since
the action of $T$ on $M^\circ$ is free and $\bar\iota\colon M^\circ/T\to M/T$
is a homotopy equivalence, we have graded ring isomorphisms
\begin{equation} \label{eq:1-0-0-0}
H^*_T(M^\circ)\xrightarrow{((\bar q^\circ)^*)^{-1}} H^*(M^\circ/T)\xrightarrow{(\bar\iota^*)^{-1}} H^*(M/T)
\end{equation}
and the composition $\rho:=\bar q^*\circ (\bar\iota^*)^{-1}\circ
((\bar q^\circ)^*)^{-1}$, which is a graded ring homomorphism, gives
the right inverse of $\iota^*$, so the exact sequence above
splits. Therefore, $\eta^*$ and $\bar q^*$ are both injective and
\begin{equation} \label{eq:1-0-0}
H^*_T(M)= \eta^*(H^*_T(M,M^\circ))\oplus \rho(H^*_T(M^\circ)) \quad\text{as graded groups}.
\end{equation}
Note that both factors at the right hand side above are graded
subrings of $H^*_T(M)$ because $\eta^*$ and $\rho$ are both graded
ring homomorphisms.
Let $\P$ be the poset dual to the face poset of $M/T$ as before.
Then $\Z[\P]=\Z[M/T]$ by definition.
\begin{proposition} \label{prop:1-1}
Suppose every proper face of the orbit space $M/T$ is acyclic, and
the free part of the action gives a trivial principal bundle
$M^\circ\to M^\circ/T$. Then $H^*_T(M)\cong \Z[\P]\oplus \tilde H^*(M/T)$
as graded rings.
\end{proposition}
\begin{proof}
Let $R$ be the cone of $\partial(M/T)$ and let $M_R=M_R(\Lambda)$
be the $T$-space $R\times T/\sim$ where we use the characteristic
function $\Lambda$ obtained from $M$ for the identification
$\sim$. Let $M^\circ_R$ be the $T$-free part of $M_R$. Since the
free part of the action on $M$ is trivial, we have
$M-M^\circ=M_R-M^\circ_R$. Hence,
\begin{equation} \label{eq:1-0}
H^*_T(M,M^\circ)\cong H^*_T(M_R,M^\circ_R)\quad\text{as graded rings}
\end{equation}
by excision. Since $H^*_T(M^\circ_R)\cong H^*(M^\circ_R/T)\cong H^*(R)$
and $R$ is a cone, $H^*_T(M^\circ_R)$ is isomorphic to the cohomology
of a point. Therefore,
\begin{equation} \label{eq:1-1}
H^*_T(M_R,M^\circ_R)\cong H^*_T(M_R) \quad\text{as graded rings in positive degrees.}
\end{equation}
On the other hand, the dual decomposition on the geometric
realization $|\P|$ of $\P$ defines a face structure on the cone
$P$ of $\P$. Let $M_P=M_P(\Lambda)$ be the $T$-space $P\times
T/\sim$ defined as before. Then a similar argument to that in
\cite[Theorem 4.8]{da-ja91} shows that
\begin{equation} \label{eq:1-2}
H^*_T(M_P)\cong \Z[\P] \quad\text{as graded rings}
\end{equation}
(this is mentioned as Proposition 5.13 in \cite{ma-pa06}). Since
every face of $P$ is a cone, one can construct a face preserving
degree one map from $R$ to $P$ which induces an equivariant map
$f\colon M_R\to M_P$. Then a similar argument to the proof of
Theorem 8.3 in \cite{ma-pa06} shows that $f$ induces a graded ring
isomorphism
\begin{equation} \label{eq:1-3}
f^*\colon H^*_T(M_P)\xrightarrow{\cong} H^*_T(M_R)
\end{equation}
since every proper face of $R$ is acyclic. It follows from
\eqref{eq:1-0}, \eqref{eq:1-1}, \eqref{eq:1-2} and \eqref{eq:1-3}
that
\begin{equation} \label{eq:1-3-1}
H^*_T(M,M^\circ) \cong \Z[\P] \quad\text{as graded rings in positive
degrees.}
\end{equation}
Thus, by \eqref{eq:1-0-0-0} and \eqref{eq:1-0-0} it suffices to
prove that the cup product of every $a\in \eta^*(H^*_T(M,M^\circ))$ and
every $b\in \rho(\tilde H^*_T(M^\circ))$ is trivial. Since
$\iota^*(a)=0$ (as $\iota^*\circ \eta^*=0$), we have
$\iota^*(a\cup b)=\iota^\ast(a)\cup \iota^\ast(b)=0$ and hence $a\cup b$
lies in $\eta^*(H^*_T(M,M^\circ))$. Since $\rho(H^*_T(M^\circ))\cong
H^*(M/T)$ as graded rings by \eqref{eq:1-0-0-0} and $H^m(M/T)=0$
for a sufficiently large $m$, $(a\cup b)^m=\pm a^m\cup b^m=0$.
However, we know that $a\cup b\in \eta^*(H^*_T(M,M^\circ))$ and
$\eta^*(H^*_T(M,M^\circ))\cong \Z[\P]$ in positive degrees by
\eqref{eq:1-3-1}. Since $\Z[\P]$ has no nonzero nilpotent element
as remarked before, $(a\cup b)^m=0$ implies $a\cup b=0$.
\end{proof}
As discussed in \cite[Section 6]{ma-pa06}, there is a homomorphism
\begin{equation} \label{eq:1-4}
\varphi\colon \Z[\P]=\Z[M/T] \to H^*_T(M)/\text{$H^*(BT)$-torsions}.
\end{equation}
In fact, $\varphi$ is defined as follows. For a codimension $k$
face $F$ of $M/T$, $q^{-1}(F)=:M_F$ is a connected closed
$T$-invariant submanifold of $M$ of codimension $2k$, and
$\varphi$ assigns $v_F\in \Z[M/T]$ to the equivariant Poincar\'e
dual $\tau_F \in H^{2k}_T(M)$ of $M_F$. One can see that $\varphi$
followed by the restriction map to $H^*_T(M^T)$ can be identified
with the map $s$ in Lemma~\ref{lemm:1-1}. Therefore, $\varphi$ is
injective if every face of $Q$ has a vertex as mentioned in
\cite[Lemma 6.4]{ma-pa06}.
\begin{proposition} \label{prop:1-2}
Let $M$ be a torus manifold with a locally standard torus action. If every proper face of
$M/T$ is acyclic and the free part of the action gives a trivial
principal bundle, then the $H^*(BT)$-torsion submodule of
$H^*_T(M)$ agrees with $\bar q^*(\tilde H^*(M/T))$, where $\bar
q\colon ET\times_T M\to M/T$ is the map mentioned before.
\end{proposition}
\begin{proof} First we prove that all elements in $\bar q^*(\tilde H^*(M/T))$
are $H^*(BT)$-torsions. We consider the following commutative diagram:
\[
\begin{CD}
H^*_T(M)@>\psi^*>> H^*_T(M^T)\\
@A \bar q^* AA @AA A\\
H^*(M/T) @>\bar\psi^*>> H^*(M^T)
\end{CD}
\]
where the horizontal maps $\psi^*$ and $\bar \psi^*$ are
restrictions to $M^T$ and the right vertical map is the
restriction of $\bar q^*$ to $M^T$. Since $M^T$ is isolated,
$\bar\psi^*(\tilde H^*(M/T))$ vanishes. This together with the
commutativity of the above diagram shows that $\bar q^*(\tilde
H^*(M/T))$ maps to zero by $\psi^*$. This means that all elements in $\bar
q^*(\tilde H^*(M/T))$ are $H^*(BT)$-torsions because the kernel of
$\psi^*$ are $H^*(BT)$-torsions by the Localization Theorem in
equivariant cohomology.
On the other hand, since every face of $M/T$ has a vertex, the map
$\varphi$ in \eqref{eq:1-4} is injective as remarked above. Hence, by Proposition~\ref{prop:1-1}, there are no other
$H^*(BT)$-torsion elements.
\end{proof}
We conclude this section with observation on the orientability of
$M/T$.
\begin{lemma} \label{lemm:1-3}
Let $M$ be a closed smooth manifold of dimension $2n$ with a
locally standard smooth action of the $n$-dimensional torus $T$.
Then $M/T$ is orientable if and only if $M$ is.
\end{lemma}
\begin{proof}
Since $M/T$ is a manifold with corners and $M^\circ/T$ is its
interior, $M/T$ is orientable if and only if $M^\circ/T$ is. On
the other hand, $M$ is orientable if and only if $M^\circ$ is.
Indeed, since the complement of $M^\circ$ in $M$ is the union of
finitely many codimension-two submanifolds, the inclusion
$\iota\colon M^\circ\to M$ induces an epimorphism on their
fundamental groups and hence on their first homology groups with
$\Z/2$-coefficients. Then it induces a monomorphism
$\iota^*\colon H^1(M;\Z/2)\to H^1(M^\circ;\Z/2)$ since
$H^1(X;\Z/2)=\Hom(H_1(X;\Z/2);\Z/2)$. Since
$\iota^*(w_1(M))=w_1(M^\circ)$ and $\iota^*$ is injective,
$w_1(M)=0$ if and only if $w_1(M^\circ)=0$. This means that $M$ is
orientable if and only if $M^\circ$ is.
Thus, it suffices to prove that $M^\circ/T$ is orientable if and
only if $M^\circ$ is. But, since $M^\circ/T$ can be regarded
as an iterated quotient of $M^\circ$ by free $S^1$-actions, it suffices to
prove the following general fact: for a principal $S^1$-bundle
$\pi\colon E\to B$ where $E$ and $B$ are both smooth manifolds,
$B$ is orientable if and only if $E$ is. First we note that the
tangent bundle of $E$ is isomorphic to the Whitney sum of the
tangent bundle along the fiber $\tau_fE$ and the pullback of the
tangent bundle of $B$ by $\pi$. Since the free action of $S^1$ on
$E$ yields a nowhere zero vector field along the fibers, the line
bundle $\tau_fE$ is trivial. Therefore
\begin{equation} \label{eq:1-5}
w_1(E)=\pi^*(w_1(B)).
\end{equation}
We consider the Gysin exact sequence for our $S^1$-bundle:
\[
\cdots\to H^{-1}(B;\Z/2)\to H^{1}(B;\Z/2)\xrightarrow{\pi^*} H^1(E,\Z/2)\to H^{0}(B;\Z/2)\to\cdots.
\]
Since $H^{-1}(B;\Z/2)=0$, the exact sequence above tells us that the map
$\pi^*\colon H^1(B;\Z/2) \to H^{1}(E;\Z/2)$ is injective. This
together with \eqref{eq:1-5} shows that $w_1(E)=0$ if and only if
$w_1(B)=0$, proving the desired fact.
\end{proof}
\section{Serre spectral sequence} \label{sectSerreSpec}
Let $M$ be an orientable toric origami manifold of dimension
$2n$ such that every proper face of $M/T$ is acyclic. Note that
$M^\circ/T$ is homotopy equivalent to a graph, hence does not admit
nontrivial torus bundles. Thus the free part of the action gives a
trivial principal bundle $M^\circ\to M^\circ/T$, and we may apply the
results of the previous section.
We consider the Serre spectral sequence of the fibration
$\pi\colon ET\times_T M\to BT$. Since $BT$ is simply connected and
both $H^*(BT)$ and $H^*(M)$ are torsion free by
Theorem~\ref{theo:3-1}, the $E_2$-terms are given as follows:
$$E_2^{p,q}=H^p(BT;H^q(M))=H^p(BT)\otimes H^q(M).$$
Since $H^{odd}(BT)=0$ and $H^{2i+1}(M)=0$ for $1\le i\le n-2$ by
Theorem~\ref{theo:3-1},
\begin{equation} \label{eq:2-1-1}
\text{$E_2^{p,q}$ with $p+q$ odd vanishes unless $p$ is even and
$q=1$ or $2n-1$.}
\end{equation}
We have differentials
\[
\to E_r^{p-r,q+r-1}\xrightarrow{d_r^{p-r,q+r-1}}E_r^{p,q}\xrightarrow{d_r^{p,q}}E_r^{p+r,q-r+1}\to
\]
and
$$E_{r+1}^{p,q}=\ker d_r^{p,q}/\im d_r^{p-r,q+r-1}.$$
We will often abbreviate $d_r^{p,q}$ as $d_r$ when $p$ and $q$ are
clear in the context. Since
\[
d_r(u\cup v)=d_ru\cup v+(-1)^{p+q}u\cup d_rv \quad\text{for $u\in E_r^{p,q}$ and $v\in E_r^{p',q'}$}
\]
and $d_r$ is trivial on $E_r^{p,0}$ and $E_r^{p,0}=0$ for odd $p$,
\begin{equation} \label{eq:2-1-10}
\text{$d_r$ is an $H^*(BT)$-module map.}
\end{equation}
Note that $E_r^{a,b}=0$ if either $a<0$ or $b<0$. Accordingly,
\begin{equation} \label{eq:2-1-2}
\text{$E^{p,q}_r=E^{p,q}_\infty$\quad if $p<r$ and $q+1<r$.}
\end{equation}
There is a filtration of subgroups
\[
H_T^{m}(M)=\F^{0,m}\supset \F^{1,m-1}\supset \dots\supset
\F^{m-1,1}\supset \F^{m,0}\supset\F^{m+1,-1}=\{0\}
\]
such that
\begin{equation} \label{eq:2-1-3}
\F^{p,m-p}/\F^{p+1,m-p-1}= E_\infty^{p,m-p}\quad \text{for
$p=0,1,\dots,m$}.
\end{equation}
There are two edge homomorphisms.
One edge homomorphism
\[
H^p(BT)=E_2^{p,0}\to E_3^{p,0}\to \dots \to E_\infty^{p,0}\subset H^p_T(M)
\]
agrees with $\pi^*\colon H^*(BT)\to H^*_T(M)$. Since
$M^T\not=\emptyset$, one can construct a cross section of the
fibration $\pi\colon ET\times_T M\to BT$ using a fixed point in
$M^T$. So $\pi^*$ is injective and hence
\begin{equation} \label{eq:2-0}
\text{$d_r\colon E_r^{p-r,r-1}\to E_r^{p,0}$ is trivial for every $r\ge 2$ and $p\ge 0$,}
\end{equation}
which is equivalent to $E_2^{p,0}=E_\infty^{p,0}$.
The other edge homomorphism
\[
H_T^q(M)\twoheadrightarrow E_\infty^{0,q}\subset \dots \subset E_3^{0,q}\subset E_2^{0,q}=H^q(M)
\]
agrees with the restriction homomorphism $\iota^*\colon
H^q_T(M)\to H^q(M)$. Therefore, $\iota^*$ is surjective if and
only if the differential $d_r\colon E_r^{0,q}\to E_r^{r,q-r+1}$ is
trivial for every $r\ge 2$.
We shall investigate the restriction homomorphism $\iota^*\colon
H^q_T(M)\to H^q(M)$. Since $M/T$ is homotopy equivalent to the
wedge of $b_1(M)$ circles, $H^q_T(M)$ vanishes unless $q$ is $1$
or even by Proposition~\ref{prop:1-1}, while $H^q(M)$ vanishes
unless $q$ is $1$, $2n-1$, or even between $0$ and $2n$ by
Theorem~\ref{theo:3-1}.
\begin{lemma} \label{lemm:2-1}
The homomorphism $\iota^*\colon H^1_T(M)\to H^1(M)$ is an isomorphism (so
$H^1(M)\cong H^1(M/T)$ by Proposition~\ref{prop:1-1}).
\end{lemma}
\begin{proof}
By \eqref{eq:2-0}, $$d_2\colon E_2^{0,1} = H^1(M)\to
E_2^{2,0}=H^2(BT)$$ is trivial. Therefore
$E_2^{0,1}=E_\infty^{0,1}$. On the other hand,
$E_\infty^{1,0}=E_2^{1,0}=H^1(BT)=0$. These imply the lemma.
\end{proof}
Since $H^{2n-1}_T(M)=0$, the homomorphism $\iota^*\colon H^{2n-1}_T(M)\to
H^{2n-1}(M)$ cannot be surjective unless $H^{2n-1}(M)=0$.
\begin{lemma} \label{lemm:2-1-1}
The homomorphism $\iota^*\colon H^{2j}_T(M)\to H^{2j}(M)$ is surjective except for
$j=1$ and the rank of the cokernel of $\iota^*$ for $j=1$ is
$nb_1(M)$.
\end{lemma}
\begin{proof}
Since $\dim M=2n$, we may assume $1\le j\le n$.
First we treat the case where $j=1$. Since $H^3_T(M)=0$,
$E_\infty^{2,1}=0$ by \eqref{eq:2-1-3} and
$E_\infty^{2,1}=E_3^{2,1}$ by \eqref{eq:2-1-2}. This together
with \eqref{eq:2-0} implies that
\begin{equation} \label{eq:2-1-9}
d_2\colon H^2(M)=E_2^{0,2}\to E^{2,1}_2=H^2(BT)\otimes H^1(M) \quad\text{is surjective}.
\end{equation}
Moreover $d_3\colon E_3^{0,2}=\ker d_2\to E_3^{3,0}$ is trivial
since $E_3^{3,0}=0$. Therefore, $E_3^{0,2}=E_\infty^{0,2}$ by
\eqref{eq:2-1-2}. Since $E_\infty^{0,2}$ is the image of
$\iota^*\colon H_T^2(M)\to H^2(M)$, the rank of
$H^2(M)/\iota^*(H^2_T(M))$ is $nb_1(M)$ by \eqref{eq:2-1-9}.
Suppose that $2\le j\le n-1$. We need to prove that the
differentials
\[
d_r\colon E_r^{0,2j}\to E_r^{r,2j-r+1}
\]
are all trivial. In fact, the target group $E_r^{r,2j-r+1}$
vanishes. This follows from \eqref{eq:2-1-1} unless $r=2j$. As
for the case $r=2j$, we note that
\begin{equation} \label{eq:2-1-8}
d_2\colon E_2^{p,2}\to E_2^{p+2,1} \quad \text{is surjective for $p\ge 0$,}
\end{equation}
which follows from \eqref{eq:2-1-10} and \eqref{eq:2-1-9}.
Therefore $E_3^{p+2,1}=0$ for $p\ge 0$, in particular
$E_{r}^{r,2j-r+1}=0$ for $r=2j$ because $j\ge 2$. Therefore
$\iota^*\colon H^{2j}_T(M)\to H^{2j}(M)$ is surjective for $2\le
j\le n-1$.
The remaining case $j=n$ can be proved directly, namely without
using the Serre spectral sequence. Let $x$ be a $T$-fixed point of
$M$ and let $\varphi\colon x\to M$ be the inclusion map. Since
$M$ is orientable and $\varphi$ is $T$-equivariant, the
equivariant Gysin homomorphism $\varphi_!\colon H_T^0(x)\to
H_T^{2n}(M)$ can be defined and $\varphi_!(1)\in H_T^{2n}(M)$
restricts to the ordinary Gysin image of $1\in H^0(x)$, that is,
the cofundamental class of $M$. This implies the surjectivity of
$\iota^*\colon H^{2n}_T(M)\to H^{2n}(M)$ because $H^{2n}(M)$ is an
infinite cyclic group generated by the cofundamental class.
\end{proof}
\section{Towards the ring structure}\label{sectTowardsRing}
Let $\pi\colon ET\times_TM\to BT$ be the projection. Note that
$\pi^*(H^{2}(BT))$ maps to zero under the restriction homomorphism
$\iota^*\colon H^*_T(M)\to H^*(M)$. Hence, $\iota^*$ induces a graded
ring homomorphism
\begin{equation} \label{eq:5-0}
\bar\iota^*\colon H^*_T(M)/(\pi^*(H^{2}(BT)))\to H^*(M)
\end{equation}
which is surjective except in degrees $2$ and $2n-1$ by
Lemma~\ref{lemm:2-1-1} (and bijective in degree $1$ by
Lemma~\ref{lemm:2-1}). Here $(\pi^*(H^2(BT)))$ denotes the ideal
in $H^*_T(M)$ generated by $\pi^*(H^2(BT))$. The purpose of this
section is to prove the following.
\begin{proposition} \label{prop:5-1}
The map $\bar\iota^*$ in \eqref{eq:5-0} is an isomorphism except in
degrees $2$, $4$ and $2n-1$. Moreover, the rank of the cokernel of
$\bar\iota^*$ in degree $2$ is $nb_1(M)$ and the rank of the
kernel of $\bar\iota^*$ in degree $4$ is $\binom{n}{2}b_1(M)$.
\end{proposition}
The rest of this section is devoted to the proof of
Proposition~\ref{prop:5-1}. We recall the following result, which
was proved by Schenzel (\cite{sche81}, \cite[p.73]{stan96}) for
Buchsbaum simplicial complexes and generalized to Buchsbaum
simplicial posets by Novik-Swartz (\cite[Proposition
6.3]{no-sw09}).
There are several equivalent definitions for Buchsbaum simplicial complexes (see \cite[p.73]{stan96}). A convenient one for us would be that a finite simplicial complex $\Delta$ is Buchsbaum (over a field $\Bbbk$) if $H_i(|\Delta|,|\Delta|\backslash\{p\};\Bbbk)=0$ for all $p\in |\Delta|$ and all $i<\dim |\Delta|$, where $|\Delta|$ denotes the realization of $\Delta$. In particular, a triangulation $\Delta$ of a manifold is Buchsbaum over any field $\Bbbk$. A simplicial poset is a (finite) poset $P$ that has a unique minimal element, $\hat 0$, and
such that for every $\tau\in P$, the interval $[\hat 0,\tau]$ is a Boolean algebra. The face poset of a simplicial complex is a simplicial poset and one has the realization $|P|$ of $P$ where $|P|$ is a regular CW complex, all of whose closed cells are simplices corresponding to the intervals $[\hat 0,\tau]$. A simplicial poset $P$ is Buchsbaum (over $\Bbbk$) if its order complex $\Delta(\overline P)$ of the poset $\overline P=P\backslash\{\hat 0\}$ is Buchsbaum (over $\Bbbk$). Note that $|\Delta(\overline P)|=|P|$ as spaces since $|\Delta(\overline P)|$ is the barycentric subdivision of $|P|$. See \cite{no-sw09} and \cite{stan96} for more details.
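(A standard example of a simplicial poset that is not the face poset of a simplicial complex is obtained by gluing two $1$-simplices along both of their vertices; its realization is a circle built from two edges.)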
\begin{theorem}[Schenzel, Novik-Swartz] \label{theo:5-1}
Let $\Delta$ be a Buchsbaum simplicial poset of dimension $n-1$ over a field $\Bbbk$, $\Bbbk[\Delta]$ be the face ring of $\Delta$
and let $\theta_1,\dots,\theta_n\in \Bbbk[\Delta]_1$ be a linear
system of parameters. Then
\[
\begin{split}
F(\Bbbk[\Delta]/(\theta_1,\dots,\theta_n),t)=&(1-t)^nF(\Bbbk[\Delta],t)\\
&+\sum_{j=1}^n\binom{n}{j}\Big(\sum_{i=-1}^{j-2}(-1)^{j-i}\dim_\Bbbk
\tilde H_{i}(\Delta)\Big)t^j
\end{split}
\]
where $F(M,t)$ denotes the Hilbert series of a graded module $M$.
\end{theorem}
As is well-known, the Hilbert series of the face ring $\Bbbk[\Delta]$ satisfies
\[
(1-t)^nF(\Bbbk[\Delta],t)=\sum_{i=0}^nh_it^i.
\]
We define $h_i'$ for $i=0,1,\dots,n$
by
\[
F(\Bbbk[\Delta]/(\theta_1,\dots,\theta_n),t)=\sum_{i=0}^nh_i't^i,
\]
following \cite{no-sw09}.
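For instance (a standard example, recorded only for illustration): if $\Delta$ is the boundary complex of a triangle, then $n=2$, $\Bbbk[\Delta]=\Bbbk[v_1,v_2,v_3]/(v_1v_2v_3)$ with the $v_i$ placed in degree one, $F(\Bbbk[\Delta],t)=(1-t^3)/(1-t)^3$ and $(1-t)^2F(\Bbbk[\Delta],t)=1+t+t^2$, so $(h_0,h_1,h_2)=(1,1,1)$. Since $\tilde H_{-1}(\Delta)=\tilde H_{0}(\Delta)=0$, the correction terms in Theorem~\ref{theo:5-1} vanish and $h_i'=h_i$ in this case.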
\begin{remark}
Novik-Swartz \cite{no-sw09} introduced
\[
h_j'':=h_j'-\binom{n}{j}\dim_\Bbbk \tilde
H_{j-1}(\Delta)=h_j+\binom{n}{j}\Big(\sum_{i=-1}^{j-1}(-1)^{j-i}\dim_\Bbbk
\tilde H_{i}(\Delta)\Big)
\]
for $1\le j\le n-1$ and showed that $h''_j\ge 0$ and
$h''_{n-j}=h''_j$ for $1\le j\le n-1$.
\end{remark}
We apply Theorem~\ref{theo:5-1} to our simplicial poset $\P$ which
is dual to the face poset of $\partial(M/T)$. For that we need to
know the homology of the geometric realization $|\P|$ of $\P$.
First we show that $|\P|$ has the same homological features as
$\partial(M/T)$.
\begin{lemma} \label{lemm:samehomology}
The simplicial poset $\P$ is Buchsbaum, and $|\P|$ has the same
homology as $\partial(M/T)$.
\end{lemma}
\begin{proof}
We give a sketch of the proof. Details can be found in \cite[Lemma
3.14]{ayze14}. There is a dual face structure on $|\P|$, and there
exists a face preserving map $g\colon \partial(M/T)\to |\P|$
mentioned in the proof of Proposition \ref{prop:1-1}. Let $F$ be a
proper face of $M/T$ and $F'$ the corresponding face of $|\P|$. By
induction on $\dim F$ we can show that $g$ induces the
isomorphisms $g_*\colon H_*(\partial F)\xrightarrow{\cong}
H_*(\partial F')$, $g_*\colon H_*(F)\xrightarrow{\cong} H_*(F')$,
and $g_*\colon H_*(F,\partial F)\xrightarrow{\cong}
H_*(F',\partial F')$. Since $F$ is an acyclic orientable manifold
with boundary, we deduce by Poincar\'{e}-Lefschetz duality that
$H_*(F',\partial F')\cong H_*(F,\partial F)$ vanishes except in
degree $\dim F$. Note that $F'$ is a cone over $\partial F'$ and
$\partial F'$ is homeomorphic to the link of a nonempty simplex of
$\P$. Thus the links of nonempty simplices of $\P$ are homology
spheres, and $\P$ is Buchsbaum \cite[Prop.6.2]{no-sw09}. Finally,
$g$ induces an isomorphism of spectral sequences corresponding to
skeletal filtrations of $\partial(M/T)$ and $|\P|$, thus induces
an isomorphism $g_*\colon H_*(\partial
(M/T))\xrightarrow{\cong}H_*(|\P|)$.
\end{proof}
\begin{lemma} \label{lemm:5-1}
$|\P|$ has the same homology as $S^{n-1}\sharp b_1(S^1\times
S^{n-2})$ (the connected sum of $S^{n-1}$ and $b_1$ copies of
$S^1\times S^{n-2}$).
\end{lemma}
\begin{proof}
By Lemma~\ref{lemm:samehomology} we only need to prove that $\partial(M/T)$
has the same homology groups as $S^{n-1}\sharp b_1(S^1\times
S^{n-2})$. Since $M/T$ is homotopy equivalent to a wedge of
circles, $H^i(M/T)=0$ for $i\ge 2$ and hence the homology exact
sequence of the pair $(M/T,\partial(M/T))$ shows that
\[
H_{i+1}(M/T,\partial(M/T))\cong H_{i}(\partial(M/T)) \quad\text{for $i\ge 2$}.
\]
On the other hand, $M/T$ is orientable by Lemma~\ref{lemm:1-3} and
hence
\[
H_{i+1}(M/T,\partial(M/T))\cong H^{n-i-1}(M/T)
\]
by Poincar\'e--Lefschetz duality, and $H^{n-i-1}(M/T)=0$ for
$n-i-1\ge 2$. These show that
\begin{equation*} \label{eq:5-0-0}
H_i(\partial(M/T))=0\quad\text{for $2\le i\le n-3$}.
\end{equation*}
Thus it remains to study $H_i(\partial(M/T))$ for $i=0,1,n-2,n-1$
but since $\partial(M/T)$ is orientable (because $M/T$ is orientable), it
suffices to show
\begin{equation} \label{eq:5-0-1}
H_i(\partial(M/T))\cong H_i(S^{n-1}\sharp b_1(S^1\times
S^{n-2}))\quad\text{for $i=0,1$}.
\end{equation}
When $n\ge 3$, $S^{n-1} \sharp b_1(S^1\times S^{n-2})$ is
connected, so \eqref{eq:5-0-1} holds for $i=0$ and $n\ge 3$.
Suppose that $n\ge 4$. Then $H^{n-2}(M/T)=H^{n-1}(M/T)=0$, so the
cohomology exact sequence for the pair $(M/T,\partial(M/T))$ shows
that $H^{n-2}(\partial(M/T))\cong H^{n-1}(M/T,\partial(M/T))$ and
hence $H_1(\partial(M/T))\cong H_1(M/T)$ by Poincar\'e--Lefschetz
duality. Since $M/T$ is homotopy equivalent to a wedge of $b_1$
circles, this proves \eqref{eq:5-0-1} for $i=1$ and $n\ge 4$. Assume that
$n=3$. Then $H_1(M/T,\partial(M/T))\cong H^2(M/T)=0$. We also know that
$H_2(M/T)=0$. The homology exact sequence for the pair
$(M/T,\partial(M/T))$ yields a short exact sequence
\[
0\to H_2(M/T,\partial(M/T))\to H_1(\partial(M/T))\to H_1(M/T)\to 0.
\]
Here $H_2(M/T,\partial(M/T)) \cong H^1(M/T)$ by
Poincar\'e--Lefschetz duality. Since $M/T$ is homotopy equivalent
to a wedge of $b_1$ circles, this implies \eqref{eq:5-0-1} for
$i=1$ and $n=3$.
It remains to prove \eqref{eq:5-0-1} when $n=2$. We use induction
on $b_1$. The assertion is true when $b_1=0$. Suppose that
$b_1=b_1(M/T)\ge 1$. We cut $M/T$ along a fold so that
$b_1(M'/T)=b_1(M/T)-1$, where $M'$ is the toric origami manifold
obtained from the cut, see Section \ref{sectBettiNumbers}. Then
$b_0(\partial(M'/T))=b_0(\partial(M/T))-1$. Since \eqref{eq:5-0-1}
holds for $\partial(M'/T)$ by induction assumption, this
observation shows that \eqref{eq:5-0-1} holds for $\partial(M/T)$.
\end{proof}
\begin{lemma} \label{lemm:5-2}
For $n\ge 2$, we have
\[
\sum_{i=0}^nh_i't^i= \sum_{i=0}^nb_{2i}t^i-nb_1t+\binom{n}{2}b_1t^2.
\]
\end{lemma}
\begin{proof}
By Lemma~\ref{lemm:5-1}, for $n\ge 4$, we have
\[
\dim\tilde H_i(\P)=\begin{cases} b_1 \quad&\text{if $i=1,\ n-2$},\\
1 \quad&\text{if $i=n-1$},\\
0\quad &\text{otherwise}.
\end{cases}
\]
Hence
\[
\sum_{i=-1}^{j-2}(-1)^{j-i}\dim \tilde H_{i}(\P)=
\begin{cases} 0\quad&\text{if $j=1,2$},\\
(-1)^{j-1}b_1 \quad&\text{if $3\le j\le n-1$},\\
((-1)^{n-1}+1)b_1\quad&\text{if $j=n$}.
\end{cases}
\]
Then, it follows from Theorem~\ref{theo:5-1} that
\begin{equation*} \label{eq:5-3}
\begin{split}
\sum_{i=0}^nh_i't^i=& \sum_{i=0}^nh_it^i+\sum_{j=3}^{n-1}(-1)^{j-1}b_1\binom{n}{j}t^j+((-1)^{n-1}+1)b_1t^n\\
=& \sum_{i=0}^nh_it^i-b_1(1-t)^n+b_1(1-nt+\binom{n}{2}t^2)+b_1t^n\\
=&\sum_{i=0}^nh_it^i+b_1(1+t^n-(1-t)^n)-nb_1t+\binom{n}{2}b_1t^2\\
=&\sum_{i=0}^nb_{2i}t^i-nb_1t+\binom{n}{2}b_1t^2
\end{split}
\end{equation*}
where we used Theorem~\ref{theo:4-1} at the last identity. This
proves the lemma when $n\ge 4$. When $n=3$,
\[
\dim\tilde H_i(\P)=\begin{cases} 2b_1 \quad&\text{if $i=1$},\\
1 \quad&\text{if $i=2$},\\
0\quad &\text{otherwise},
\end{cases}
\]
and the same argument as above shows that the lemma still holds
for $n=3$. When $n=2$,
\[
\dim\tilde H_i(\P)=\begin{cases} b_1 \quad&\text{if $i=0$},\\
b_1+1 \quad&\text{if $i=1$},\\
0\quad &\text{otherwise},
\end{cases}
\]
and the same holds in this case too.
\end{proof}
\begin{remark}
One can check that
\[
\sum_{i=1}^{n-1}h_i''t^i=\sum_{i=1}^{n-1}b_{2i}t^i-nb_1(t+t^{n-1}).
\]
Therefore, $h_i''=h_i''(\P)$ is not necessarily equal to
$b_{2i}=b_{2i}(M)$ although both are symmetric. This is not surprising
because $h_i''$ depends only on the boundary of $M/T$. It would
be interesting to ask whether $h_i''(\P)\le b_{2i}(M)$
when the face poset of $\partial(M/T)$ is dual to $\P$ and
whether the equality can be attained for some such $M$ ($M$ may
depend on $i$).
\end{remark}
Now we are ready to prove Proposition \ref{prop:5-1}.
\begin{proof}[Proof of Proposition~\ref{prop:5-1}]
First suppose that
$\Bbbk$ is a field. By Proposition~\ref{prop:1-1}, we have
$\Z[\P]=H^{even}_T(M)$. The images of ring generators of
$H^*(BT;\Bbbk)$ under $\pi^*$ provide a homogeneous system of parameters (h.s.o.p.)
$\theta_1,\dots,\theta_n$ in $H^{even}_T(M;\Bbbk)=\Bbbk[\P]$. This fact
simply follows from the characterization of homogeneous systems of
parameters in face rings given by \cite[Th.5.4]{bu-pa04}. Thus we
have
\begin{equation} \label{eq:5-4}
F(H^{even}_T(M;\Bbbk)/(\pi^*(H^{2}(BT;\Bbbk))),t)=\sum_{i=0}^nb_{2i}(M)t^i-nb_1t+\binom{n}{2}b_1t^2
\end{equation}
by Lemma~\ref{lemm:5-2}. Moreover, the graded ring homomorphism
in \eqref{eq:5-0}
\begin{equation} \label{eq:5-4-1}
\bar\iota^*\colon
\Bbbk[\P]/(\theta_1,\dots,\theta_n)=H^{even}_T(M;\Bbbk)/(\pi^*(H^{2}(BT;\Bbbk)))\to
H^{even}(M;\Bbbk)
\end{equation}
is surjective except in degree $2$ as remarked at the beginning of
this section. Therefore, the identity \eqref{eq:5-4} implies that
$\bar\iota^*$ in \eqref{eq:5-4-1} is an isomorphism except in
degrees $2$ and $4$. Finally, the rank of the cokernel of
$\bar\iota^*$ in degree $2$ is $nb_1(M)$ by Lemma~\ref{lemm:2-1-1}
and the rank of the kernel of $\bar\iota^*$ in degree $4$ is
$\binom{n}{2}b_1$ by \eqref{eq:5-4}, proving
Proposition~\ref{prop:5-1} over fields.
Now we explain the case
$\Bbbk=\Z$.
The map $\pi^*\colon H^*(BT;\Bbbk)\to H_T^*(M;\Bbbk)$ coincides with the
map $\pi^*\colon H^*(BT;\Z)\to H_T^*(M;\Z)$ tensored with $\Bbbk$,
since both $H^*(BT;\Z)$ and $H_T^*(M;\Z)$ are $\Z$-torsion free.
In particular, the ideals $(\pi^*(H^2(BT;\Bbbk)))$ and
$(\pi^*(H^2(BT;\Z))\otimes \Bbbk)=(\pi^*(H^2(BT;\Z)))\otimes \Bbbk$
coincide in $H_T^*(M;\Bbbk)\cong H_T^*(M;\Z)\otimes \Bbbk$. Consider the
exact sequence
\[
(\pi^*(H^2(BT;\Z)))\to H_T^*(M;\Z)\to
H_T^*(M;\Z)/(\pi^*(H^2(BT;\Z)))\to 0.
\]
The functor $-\otimes \Bbbk$ is right exact, thus the sequence
\[\begin{split}
&(\pi^*(H^2(BT;\Z)))\otimes \Bbbk\to H_T^*(M;\Z)\otimes \Bbbk \\
&\qquad\qquad\qquad\qquad\qquad\qquad\to
H_T^*(M;\Z)/(\pi^*(H^2(BT;\Z)))\otimes \Bbbk\to 0
\end{split}\]
is exact. These considerations show that
\[
H_T^*(M;\Z)/(\pi^*(H^2(BT;\Z)))\otimes \Bbbk \cong
H_T^*(M;\Bbbk)/(\pi^*(H^2(BT;\Bbbk))).
\]
Finally, the map
\[
\bar\iota^*\colon H_T^*(M;\Bbbk)/(\pi^*(H^2(BT;\Bbbk)))\to H^*(M;\Bbbk)
\]
coincides (up to isomorphism) with the map
\[
\bar\iota^*\colon H_T^*(M;\Z)/(\pi^*(H^2(BT;\Z)))\to H^*(M;\Z),
\]
tensored with $\Bbbk$. The statement of Proposition \ref{prop:5-1}
holds for any field, and thus it holds for~$\Z$.
\end{proof}
We conclude this section with some observations on the kernel of
$\bar\iota^*$ in degree $4$ from the viewpoint of the Serre
spectral sequence. Recall
\[
H^4_T(M)=\F^{0,4}\supset \F^{1,3}\supset \F^{2,2}\supset
\F^{3,1}\supset \F^{4,0}\supset \F^{5,-1}=0
\]
where $\F^{p,q}/\F^{p+1,q-1}=E^{p,q}_\infty$. Since
$E^{p,q}_2=H^p(BT)\otimes H^q(M)$, we have $E_\infty^{p,q}=0$ for $p$ odd.
Therefore,
\[
\rank H^4_T(M)=\rank E_\infty^{0,4}+\rank E_\infty^{2,2}+\rank
E_\infty^{4,0},
\]
where we know $E_\infty^{0,4}=E_2^{0,4}= H^4(M)$ and
$E_\infty^{4,0}=E_2^{4,0}=H^4(BT)$. As for $E_\infty^{2,2}$, we
recall that
\begin{equation*}
d_2\colon E_2^{p,2}\to E^{p+2,1}_2 \quad\text{is surjective for every $p\ge 0$}
\end{equation*}
by \eqref{eq:2-1-8}. Therefore, noting $H^3(M)=0$, one sees
$E_3^{2,2}=E_\infty^{2,2}$. It follows that
\[
\rank E_\infty^{2,2}=\rank E_2^{2,2}-\rank
E_2^{4,1}=nb_2-\binom{n+1}{2}b_1.
\]
On the other hand, $\rank E_\infty^{0,2}=b_2-nb_1$ and there is a
product map
\[
\varphi\colon E_\infty^{0,2}\otimes E_\infty^{2,0}\to
E_\infty^{2,2}.
\]
The image of this map lies in the ideal $(\pi^*(H^2(BT)))$ and the
rank of the cokernel of this map is
\[
nb_2-\binom{n+1}{2}b_1-n(b_2-nb_1)=\binom{n}{2}b_1.
\]
Therefore
\[
\rank E_\infty^{0,4}+\rank \coker\varphi=b_4+\binom{n}{2}b_1
\]
which agrees with the coefficient of $t^2$ in
$F(H^{even}_T(M)/(\pi^*(H^{2}(BT))),t)$ by \eqref{eq:5-4}. This
suggests that the cokernel of $\varphi$ could correspond to the
kernel of $\bar\iota^*$ in degree 4.
\section{4-dimensional case}\label{sect4dimCase}
In this section, we explicitly describe the kernel of
$\bar\iota^*$ in degree $4$ when $n=2$, that is, when $M$ is of
dimension $4$. In this case, $\partial(M/T)$ is the union of
$b_1+1$ closed polygonal curves.
First we recall the case when $b_1=0$. In this case,
$H^{even}_T(M)=H^*_T(M)$. Let $\partial(M/T)$ be an $m$-gon and
$v_1,\dots,v_m$ be the primitive edge vectors in the multi-fan of
$M$, where $v_i$ and $v_{i+1}$ span a 2-dimensional cone for every
$i=1,2,\dots,m$ (see \cite{ma-pa13}). Note that $v_i\in H_2(BT)$
and we understand $v_{m+1}=v_1$ and $v_0=v_m$ in this section.
Since $\{v_j, v_{j+1}\}$ is a basis of $H_2(BT)$ for every $j$, we have
$\det(v_j,v_{j+1})=\pm 1$.
Let $\tau_i\in H^2_T(M)$ be the equivariant Poincar\'e dual class to the
characteristic submanifold corresponding to $v_i$. Then we have
\begin{equation} \label{eq:5-5}
\pi^*(u)=\sum_{i=1}^m\langle u,v_i\rangle\tau_i\quad \text{for every $u\in H^2(BT)$},
\end{equation}
where $\langle\ ,\ \rangle$ denotes the natural pairing between
cohomology and homology (see \cite{masu99} for example). We
multiply both sides in \eqref{eq:5-5} by $\tau_i$. Then, since
$\tau_i\tau_j=0$ if $v_i$ and $v_j$ do not span a 2-dimensional
cone, \eqref{eq:5-5} turns into
\begin{equation} \label{eq:5-7}
0=\langle u,v_{i-1}\rangle \tau_{i-1}\tau_i+\langle
u,v_i\rangle\tau_i^2+\langle u,v_{i+1}\rangle\tau_i\tau_{i+1}
\quad\text{in $H^*_T(M)/(\pi^*(H^2(BT)))$.}
\end{equation}
If we take $u$ with $\langle u,v_i\rangle=1$, then \eqref{eq:5-7}
shows that $\tau_i^2$ can be expressed as a linear combination of
$\tau_{i-1}\tau_i$ and $\tau_{i}\tau_{i+1}$. If we take
$u=\det(v_i,\ )$, then $u$ can be regarded as an element of $H^2(BT)$
because $H^2(BT)=\Hom(H_2(BT),\Z)$. Hence, \eqref{eq:5-7} reduces
to
\begin{equation} \label{eq:5-8}
\det(v_{i-1},v_i)\tau_{i-1}\tau_i=\det(v_i,v_{i+1})\tau_i\tau_{i+1}
\quad\text{in $H^*_T(M)/(\pi^*(H^2(BT)))$.}
\end{equation}
Finally we note that $\tau_i\tau_{i+1}$ maps to the cofundamental
class of $M$ up to sign. We denote by $\mu\in H^4_T(M)$ the
element (either $\tau_i\tau_{i+1}$ or $-\tau_i\tau_{i+1}$) which
maps to the cofundamental class of $M$.
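For instance, for $M=\C P^2$ with the standard action (so $b_1=0$ and $m=3$) one may take $v_1=(1,0)$, $v_2=(0,1)$ and $v_3=(-1,-1)$. Then $\det(v_i,v_{i+1})=1$ for every $i$, so \eqref{eq:5-8} gives $\tau_1\tau_2=\tau_2\tau_3=\tau_3\tau_1=\pm\mu$ in $H^*_T(M)/(\pi^*(H^2(BT)))$, in agreement with the usual description of $H^*(\C P^2)$.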
When $b_1\ge 1$, the above argument works for each component of
$\partial(M/T)$. In fact, according to \cite{masu99},
\eqref{eq:5-5} holds in $H^*_T(M)$ modulo $H^*(BT)$-torsion but in
our case there is no $H^*(BT)$-torsion in $H^{even}_T(M)$ by
Proposition~\ref{prop:1-2}. Suppose that $\partial(M/T)$ consists
of $m_j$-gons for $j=1,2,\dots,b_1+1$. To each $m_j$-gon, we have
the class $\mu_j\in H^4_T(M)$ (mentioned above as $\mu$). Since
$\mu_j$ maps to the cofundamental class of $M$, $\mu_i-\mu_j$
$(i\not=j)$ maps to zero in $H^4(M)$; so it is in the kernel of
$\bar{\iota}^*$. The subgroup of $H^{even}_T(M)/(\pi^*(H^2(BT)))$
in degree 4 generated by $\mu_i-\mu_j$ $(i\not=j)$ has the desired
rank $b_1$.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=.5]
\pgfsetfillopacity{0.5}
\filldraw[fill=yellow] (1,0)--(2,-0.1)--(2,5.9)--(1,6)--(0,5)--(0,1)--cycle;
\filldraw[fill=green] (4,-0.1)--(5,0)--(6,1)--(6,5)--(5,6)--(4,5.9)--cycle;
\filldraw[fill=orange] (-0.1,2)--(0,1)--(1,0)--(5,0)--(6,1)--(5.9,2)--cycle;
\filldraw[fill=blue!50] (-0.1,4)--(5.9,4)--(6,5)--(5,6)--(1,6)--(0,5)--cycle;
\draw[ultra thick,blue] (1,0)--(0,1);
\draw[ultra thick,blue] (5,0)--(6,1);
\draw[ultra thick,blue] (6,5)--(5,6);
\draw[ultra thick,blue] (0,5)--(1,6);
\end{tikzpicture}
\end{center}
\caption{The origami template with four polygons}
\label{fig:cycle}
\end{figure}
\begin{example}
Take the 4-dimensional toric origami manifold $M$ corresponding to
the origami template shown on Figure \ref{fig:cycle} (Example 3.15
of \cite{ca-gu-pi11}). Topologically $M/T$ is homeomorphic to
$S^1\times [0,1]$ and the boundary of $M/T$ as a manifold with
corners consists of two closed polygonal curves, each having $4$
segments. The multi-fan of $M$ is the union of two copies of the
fan of $\C P^1\times \C P^1$ with the product torus action.
Indeed, if $v_1$ and $v_2$ are primitive edge vectors in the fan of $\C
P^1\times \C P^1$ which span a 2-dimensional cone, then the other
primitive edge vectors $v_3,\dots,v_8$ in the multi-fan of $M$ are
\[
v_3=-v_1,\quad v_4=-v_2,\quad \text{and}\quad v_{i}=v_{i-4} \quad
\text{for $i=5,\dots,8$}
\]
and the 2-dimensional cones in the multi-fan are
\[
\begin{split}
&\angle v_1v_2,\quad \angle v_2v_3,\quad \angle v_3v_4,\quad \angle v_4v_1,\\
&\angle v_5v_6,\quad \angle v_6v_7,\quad \angle v_7v_8,\quad \angle v_8v_5,
\end{split}
\]
where $\angle vv'$ denotes the 2-dimensional cone spanned by
vectors $v$ and $v'$. Note that
\begin{equation} \label{eq:5-9-0}
\text{$\tau_i\tau_j=0$ if $v_i, v_j$ do not span a 2-dimensional cone.}
\end{equation}
We have
\begin{equation} \label{eq:5-9}
\pi^*(u)=\sum_{i=1}^8\langle u,v_i\rangle \tau_i \quad \text{for every $u\in H^2(BT)$.}
\end{equation}
Let $\{v_1^*,v_2^*\}$ be the dual basis of $\{v_1,v_2\}$. Taking
$u=v_1^*$ or $v_2^*$, we see that
\begin{equation} \label{eq:5-9-1}
\tau_1+\tau_5=\tau_3+\tau_7,\quad \tau_2+\tau_6=\tau_4+\tau_8\quad\text{in $H^*_T(M)/(\pi^*(H^2(BT)))$}.
\end{equation}
Since we applied \eqref{eq:5-9} to the basis $\{v_1^*, v_2^*\}$ of
$H^2(BT)$, there are no other essentially new linear relations among
the $\tau_i$'s.
Now, multiply the equations \eqref{eq:5-9-1} by $\tau_i$ and use
\eqref{eq:5-9-0}. Then we obtain
\[
\begin{split}
&\tau_i^2=0\quad \text{for every $i$},\\
&(\mu_1:=)\tau_1\tau_2=\tau_2\tau_3=\tau_3\tau_4=\tau_4\tau_1,\\
&(\mu_2:=) \tau_5\tau_6=\tau_6\tau_7=\tau_7\tau_8=\tau_8\tau_5\quad\text{in $H^*_T(M)/(\pi^*(H^2(BT)))$}.
\end{split}
\]
Our argument shows that these together with \eqref{eq:5-9-0} are
the only degree two relations among the $\tau_i$'s in
$H^*_T(M)/(\pi^*(H^2(BT)))$. The kernel of
\[
\bar\iota^*\colon H^{even}_T(M;\Q)/(\pi^*(H^{2}(BT;\Q)))\to
H^{even}(M;\Q)
\]
in degree 4 is spanned by $\mu_1-\mu_2$.
\end{example}
\section{On the cokernel of $\bar\iota^*$ in degree $2$} \label{sectCokernel}
In this section we describe the elements of $H^2(M)$ that do not
lie in the image of the map \eqref{eq:5-4-1}. In fact, we describe
geometrically the homology $(2n-2)$-cycles, which are Poincar\'e
dual classes to the elements in $H^2(M)$. A very similar technique was used in
\cite{po-sa11} to calculate the homology of 4-dimensional torus
manifolds whose orbit spaces are polygons with holes. In contrast
to \cite{po-sa11} we do not introduce particular cell structures
on $M$, because this approach becomes more complicated for higher
dimensions.
Denote the orbit space $M/T$ by $Q$, so $Q$ is a manifold with
corners and acyclic proper faces, and $Q$ is homotopy equivalent
to a wedge of $b_1$ circles. Also let $q\colon M\to Q$ denote the
projection to the orbit space, and $\Gamma_i$ be the
characteristic subgroup, i.e. the stabilizer of orbits in
$F_i^{\circ}\subset Q$. For each face $G$ of $Q$, we denote by
$\Gamma_G$ the stabilizer subgroup of orbits $x\in G^{\circ}$.
Thus $\Gamma_G=\prod_i\Gamma_i\subset T$, where the product is
taken over all $i$ such that $G\subseteq F_i$. The origami
manifold $M$ is homeomorphic to the model
$ Q\times T/\sim,$
where $(x_1,t_1)\sim(x_2,t_2)$ if $x_1=x_2\in G^{\circ}$ and
$t_1t_2^{-1}\in \Gamma_G$ for some face $G\subset Q$. This fact is
a consequence of a general result in \cite{yo11}. In the
following we identify $M$ with the model $Q\times T/\sim$.
Consider a homology cycle $\sigma\in H_{n-1}(Q,\partial Q)$. Note
that $\sigma$ is Poincar\'e--Lefschetz dual to some element of
$H^1(Q)\cong H^1(\bigvee_{b_1}S^1)\cong \Z^{b_1}$. Let $\sigma$ be
represented by a pseudomanifold $\xi\colon(L,\partial L)\to
(Q,\partial Q)$, where $\dim L=n-1$, and let $[L]\in
H_{n-1}(L,\partial L)$ denote the fundamental cycle, so that
$\xi_*([L])=\sigma$. We assume that $\xi(L\setminus
\partial L)\subset Q\setminus\partial Q$. Moreover, since every face of
$\partial Q$ is acyclic, we may assume that $\xi(\partial L)$ is
contained in $\partial Q^{(n-2)}$, the codimension-two
skeleton of $Q$. A pseudomanifold $(L,\partial L)$ defines a
collection of $(2n-2)$-cycles in homology of $M$, one for each
codimension-one subtorus of $T$, by the following construction.
\begin{con}\label{conNonEquiCycle}
First fix a coordinate splitting of the torus,
$T=\prod_{i\in[n]}T_i^1$ in which the orientation of each $T_i^1$
is arbitrary but fixed. For each $j\in [n]$ consider the subtorus
$T_{\jh}=T^{[n]\setminus j}=\prod_{i\in[n]\setminus j}T_i^1$, and
let $\inc\colon T_{\jh}\to T$ be the inclusion map. Given a
pseudomanifold $(L,\partial L)$ as in the previous paragraph,
consider the space $L\times T_{\jh}$ and the quotient construction
$(L\times T_{\jh})/\sim_*$, where the identification $\sim_*$ is
naturally induced from $\sim$ by the map $\xi$. Since
$\xi(\partial L)\subset \partial Q^{(n-2)}$, the space $(\partial
L\times T_{\jh})/\sim_*$ has dimension at most $2n-4$. Thus
$(L\times T_{\jh})/\sim_*$ has the fundamental cycle $V_{L,j}$.
Indeed, there is a diagram:
\begin{equation}\label{eqFundCycleOfCut}
\xymatrix@C-=0.26cm{ &&H_{n-1}\left(L,\partial L\right)\otimes H_{n-1}(T_{\jh}) \ar@{->}[d]^{\cong \mbox{ (K\"unneth)}} &\\
0\ar@{=}[d]&& H_{2n-2}(L\times T_{\jh},\partial L\times T_{\jh})
\ar@{->}[d]^{\cong \mbox{
(excision)}}&0\ar@{=}[d]\\
H_{2n-2}(\frac{\partial L\times T_{\jh}}{\sim_*}) \ar@{->}[r]&
H_{2n-2}(\frac{L\times T_{\jh}}{\sim_*})\ar@{->}[r]^(0.41){\cong}&
H_{2n-2}(\frac{L\times T_{\jh}}{\sim_*}, \frac{\partial L\times
T_{\jh}}{\sim_*}) \ar@{->}[r]& H_{2n-3}(\frac{\partial L\times
T_{\jh}}{\sim_*}) }
\end{equation}
Let $T_{\jh}$ be oriented by the splitting $T\cong T_{\jh}\times
T_j^1$. This orientation determines a distinguished
generator $\Omega_j\in H_{n-1}(T_{\jh})$. Then the fundamental
cycle $V_{L,j}\in H_{2n-2}((L\times T_{\jh})/\sim_*)$ is defined
as the image of $[L]\otimes \Omega_j\in H_{n-1}(L,\partial
L)\otimes H_{n-1}(T_{\jh})$ under the isomorphisms of diagram
\eqref{eqFundCycleOfCut}. The induced map
\[
\zeta_{L,j}\colon (L\times T_{\jh})/\sim_* \to (Q\times
T_{\jh})/\sim \hookrightarrow (Q\times T)/\sim=M
\]
determines the element $x_{L,j}=(\zeta_{L,j})_*(V_{L,j})\in
H_{2n-2}(M)$.
\end{con}
\begin{proposition}
Let $\{\sigma_1,\ldots,\sigma_{b_1}\}$ be a basis of
$H_{n-1}(Q,\partial Q)$, and let $L_1$, \dots, $L_{b_1}$ be
pseudomanifolds representing these cycles and satisfying the
restrictions stated above. Consider the set of homology classes
$\{x_{L,j}\}\subset H_{2n-2}(M)$, where $L$ runs over the set
$\{L_1,\ldots,L_{b_1}\}$ and $j$ runs over $[n]$. Then the set of
Poincar\'e dual classes of $x_{L,j}$ is a basis of the cokernel
$H^2(M)/\iota^*(H^2_T(M))$.
\end{proposition}
\begin{proof}
Consider disjoint circles $S_1,\dots,S_{b_1}\subset Q^{\circ}$
whose corresponding homology classes $[S_1],\dots,[S_{b_1}]\in
H_1(Q)$ are dual to $\sigma_1,\ldots,\sigma_{b_1}$. Thus for the
intersection numbers we have $[S_i]\cap \sigma_k=\delta_{ik}$.
Consider 2-dimensional submanifolds of the form $S_i\times
T^1_{l}\subset M$, where $l\in [n]$. They lie in
$q^{-1}(Q^{\circ})\subset M$. Let $[S_i\times T^1_{l}]\in H_2(M)$
be the homology classes represented by these submanifolds. Then
\begin{equation}\label{eq:pairing}
[S_i\times T^1_{l}]\cap x_{L_k,j} = \delta_{ik}\delta_{lj},
\end{equation}
since all the intersections lie in $q^{-1}(Q^{\circ}) =
Q^{\circ}\times T$.
The equivariant cycles of $M$ sit in $q^{-1}(\partial Q)$. Thus
the intersection of $S_i\times T^1_{l}\subset q^{-1}(Q^{\circ})$
with any equivariant cycle is empty. The nondegenerate pairing
\eqref{eq:pairing} shows that the set $\{x_{L,j}\}$ is linearly
independent modulo equivariant cycles. Its cardinality is
precisely $nb_1$ and the statement follows from
Lemma~\ref{lemm:2-1-1}.
\end{proof}
\begin{remark} The element $x_{L,j}\in H_{2n-2}(M)$ depends on the
representing pseudomanifold $L$, not only on its homology class in
$H_{n-1}(Q,\partial Q)$. The classes corresponding to different
representing pseudomanifolds are connected by linear relations
involving characteristic submanifolds. We describe these relations
next.
\end{remark}
First, let us introduce orientations on the objects under
consideration. We fix an orientation of the orbit space $Q$. This
defines an orientation of each facet ($F_i$ is oriented by
$TF_i\oplus \nu\cong TQ$, where the inward normal vector of the
normal bundle $\nu$ is set to be positive). Since the torus $T$ is
oriented, we have a distinguished orientation of $M = Q\times
T/\sim$. Recall that $\Gamma_i$ is the characteristic subgroup
corresponding to a facet $F_i\subset Q$. Since the action is
locally standard, $\Gamma_i$ is a $1$-dimensional connected
subgroup of $T$. Let us fix orientations of all characteristic
subgroups (this choice of orientations is usually called an
omniorientation). Then every $\Gamma_i$ can be written as
\begin{equation}\label{eq:charsubgrp}
\Gamma_i=\{(t^{\lambda_{i,1}},\ldots,t^{\lambda_{i,n}})\in T\mid
t\in T^1\},
\end{equation}
where $(\lambda_{i,1},\ldots,\lambda_{i,n})\in \Z^n$ is a uniquely
determined primitive integral vector.
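For example, when $n=2$ the diagonal circle $\{(t,t)\in T\mid t\in T^1\}$ is encoded in \eqref{eq:charsubgrp} by the primitive vector $(\lambda_{i,1},\lambda_{i,2})=(1,1)$.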
Let us orient every quotient torus $T/\Gamma_i$ by the following
construction. For each $\Gamma_i$ choose a codimension 1 subtorus
$\Upsilon_i\subset T$ such that the product map $\Upsilon_i\times
\Gamma_i\to T$ is an isomorphism. The orientations of $T$ and
$\Gamma_i$ induce an orientation of $\Upsilon_i$. The quotient map
$T\to T/\Gamma_i$ induces an isomorphism between $\Upsilon_i$ and
$T/\Gamma_i$ providing the quotient group with an orientation. The
orientation of $T/\Gamma_i$ defined this way does not depend on
the choice of the auxiliary subgroup $\Upsilon_i$.
Finally, the orientations on $F_i$ and $T/\Gamma_i$ give an
orientation of the characteristic submanifold $M_i=q^{-1}(F_i)$.
This follows from the fact that $M_i$ contains an open dense
subset $q^{-1}(F_i^{\circ}) = F_i^{\circ}\times(T/\Gamma_i)$.
\begin{con}
Let $F_i$ be a facet of $Q$, and $[F_i]\in H_{n-1}(F_i,\partial
F_i)$ its fundamental cycle. The cycles $[F_i]$ form a basis of
$$H_{n-1}(\partial Q,\partial Q^{(n-2)})=\bigoplus_{\mbox{facets}}
H_{n-1}(F_i,\partial F_i).$$
Let $\xi_\varepsilon\colon
(L_\varepsilon,\partial L_\varepsilon)\to (Q,\partial Q)$,
$\varepsilon=1,2$, be two pseudomanifolds representing the same
element $\sigma\in H_{n-1}(Q,\partial Q)$. Then there exists a
pseudomanifold $(N,\partial N)$ of dimension $n$ and a map
$\eta\colon N\to Q$ such that $L_1$ and $L_2$ are disjoint submanifolds
of $\partial N$, $\eta|_{L_\varepsilon}=\xi_\varepsilon$ for
$\varepsilon=1,2$, and $\eta(\partial N\setminus
(L_1^{\circ}\sqcup L_2^{\circ}))\subset
\partial Q$ (this follows from the geometrical definition of homology,
see \cite[App. A.2]{ru-sa}). The skeletal stratification of $Q$
induces a stratification on $N$. The restriction of the map $\eta$
sends $\partial N^{(n-2)}$ to $\partial Q^{(n-2)}$. Let $\delta$
be the connecting homomorphism
\[
\delta\colon H_n(N,\partial N)\to H_{n-1}(\partial
N,\partial N^{(n-2)})
\]
in the long exact sequence of the triple $(N,\partial N,\partial
N^{(n-2)})$. Consider the sequence of homomorphisms
\begin{multline*}
H_n(N,\partial N)\xrightarrow{\delta} H_{n-1}(\partial N,\partial
N^{(n-2)}) \cong \\ H_{n-1}(L_1,\partial L_1)\oplus
H_{n-1}(L_2,\partial L_2)\oplus H_{n-1}(\partial N\setminus
(L_1^{\circ}\cup L_2^{\circ}), \partial N^{(n-2)}) \\ \xrightarrow{\id\oplus\id\oplus \eta_*}
H_{n-1}(L_1,\partial L_1)\oplus H_{n-1}(L_2,\partial L_2)\oplus
H_{n-1}(\partial Q,\partial Q^{(n-2)}).
\end{multline*}
This sequence of homomorphisms sends the fundamental cycle $[N]\in
H_n(N,\partial N)$ to the element
\begin{equation}\label{eqRelPseudBordism}
\left([L_1],-[L_2],\sum\nolimits_i\alpha_i[F_i]\right)
\end{equation}
of the group $H_{n-1}(L_1,\partial L_1)\oplus H_{n-1}(L_2,\partial
L_2)\oplus H_{n-1}(\partial Q,\partial Q^{(n-2)})$, for some
coefficients $\alpha_i\in \Z$.
\end{con}
\begin{proposition}\label{propLinearRelation}
If $L_1$ and $L_2$ are two pseudomanifolds representing a class
$\sigma\in H_{n-1}(Q,\partial Q)$, then for each $j\in [n]$
\begin{equation}\label{eqZeroCombination}
x_{L_1,j}-x_{L_2,j} +
\sum_{\mbox{facets}}\alpha_i\lambda_{i,j}[M_i]=0\quad\mbox{ in
}H_{2n-2}(M).
\end{equation}
Here $M_i$ is the characteristic submanifold of $M$ corresponding
to $F_i$, the numbers $\alpha_i$ are given by
\eqref{eqRelPseudBordism}, and the numbers $\lambda_{i,j}$ are
given by \eqref{eq:charsubgrp}.
\end{proposition}
\begin{proof}
Choose a relative pseudomanifold bordism $N$ between $L_1$ and
$L_2$ and consider the space $(N\times T_{\jh})/\sim_*$. Here
$\sim_*$ is the equivalence relation induced from $\sim$ by the
map $\eta$. We have a map $(\eta\times\inc)/\sim\colon(N\times
T_{\jh})/\sim_*\to M$. By the diagram chase, similar to
\eqref{eqFundCycleOfCut}, the space $(N\times T_{\jh})/\sim_*$ is
a $(2n-1)$-pseudomanifold with boundary. Its boundary represents
the element \eqref{eqZeroCombination}. Thus this element vanishes
in homology. We only need to prove the following technical lemma.
\begin{lemma}
Let $F_i$ be a facet, and let $\Gamma_i$ be its characteristic subgroup
encoded by the vector $(\lambda_{i,1},\ldots,\lambda_{i,n})\in
\Z^n$ as in \eqref{eq:charsubgrp}, and let $j\in[n]$. Let $\Omega_j\in H_{n-1}(T_{\jh})$ and
$\Phi_i\in H_{n-1}(T/\Gamma_i)$ be the fundamental classes (in the
orientations introduced previously). Then the composite map
$T_{\jh}\hookrightarrow T\twoheadrightarrow T/\Gamma_i$ sends
$\Omega_j$ to $\lambda_{i,j}\Phi_i$.
\end{lemma}
\begin{proof}
Let $\{\mathbf{e}_s\mid s\in[n]\}$ be the positive basis of
$H_1(T)$ corresponding to the splitting $T=\prod_sT_s^1$, and let
$\{\mathbf{f}_r\mid r\in[n]\}$ be a positive basis of $H_1(T)$
such that $\mathbf{f}_n=(\lambda_{i,1},\ldots,\lambda_{i,n})$.
Thus $\Omega_j=(-1)^{n-j}\mathbf{e}_1\wedge \ldots\wedge
\widehat{\mathbf{e}_j}\wedge\ldots\wedge \mathbf{e}_n$. Let $D$ be
the matrix of basis change,
$\mathbf{e}_s=\sum_{r=1}^nD_s^r\mathbf{f}_r$, and let $C=D^{-1}$.
The element $\Omega_j$ maps to
\[
(-1)^{n-j}\det(D_s^r)_{\begin{subarray}{l} r\in \{1,\ldots,n-1\}
\\ s\in \{1,\ldots,\hat{j},\ldots,n\}\end{subarray}}
\]
which is equal to the element $C_n^j$ by Cramer's rule. $C_n^j$ is
the $j$th coordinate of $\mathbf{f}_n$ in the basis
$\{\mathbf{e}_s\}$. Thus, by construction, $C_n^j =
\lambda_{i,j}$.
\end{proof}
This proves the proposition.
\end{proof}
Proposition \ref{propLinearRelation} gives an idea of how to
describe the multiplication in $H^*(M)$. Equivalently, we need to
describe the intersections of cycles in $H_*(M)$. Intersections of
equivariant cycles are known --- they are encoded by the face ring
of $Q$. To describe the intersections of the additional cycles
$x_{L,j}$ we can sometimes proceed as follows:
(1) Let $M_F$ be the face submanifold of $M$, corresponding to the
face $F\subset~Q$. If $F\cap \partial L=\varnothing$, then
$[M_F]\cap x_{L,j}=0$ in the homology of $M$. Otherwise, in many
cases we can choose a different representative $L'$ of the same
homology class as $L$ with the property $\partial L'\cap F=\varnothing$.
Then, by Proposition \ref{propLinearRelation},
\[\begin{split}
[M_F]\cap
x_{L,j}&=[M_F]\cap x_{L',j}+[M_F]\cap \sum_{\mbox{facets}}
\alpha_i\lambda_{i,j}[M_i]\\
&=\sum_{\mbox{facets}}
\alpha_i\lambda_{i,j}[M_F]\cap [M_i]
\end{split}
\]
which can be computed using
relations in $\Bbbk[Q]/(\theta_1,\dots,\theta_n)$.
(2) To compute the intersection of two elements of the form
$x_{L_1,j_1}$ and $x_{L_2,j_2}$ sometimes we can use the same
trick: find a pseudomanifold $L_1'$ which does not intersect $L_2$
and replace $x_{L_1,j_1}$ by $x_{L'_1,j_1}+\sum_i
\alpha_i\lambda_{i,j_1}[M_i]$. Then the intersection
$x_{L'_1,j_1}\cap x_{L_2,j_2}$ vanishes and intersections of
$x_{L_2,j_2}$ with $[M_i]$ are computed using~(1).
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=.5]
\draw (0,0)..controls (0.5,0.7) and (1.5,0.7)..(2,0);
\draw (0,0)..controls (0.5,-0.7) and (1.5,-0.7)..(2,0);
\draw (-2,0)..controls (0,2.5) and (2,2.5)..(4,0);
\draw (-2,0)..controls (0,-2.5) and (2,-2.5)..(4,0);
\fill (-2,0) circle(3pt);
\fill (0,0) circle(3pt);
\fill (2,0) circle(3pt);
\fill (4,0) circle(3pt);
\draw[gray] (-2,0)..controls (-1,.5)..(0,0);
\draw[gray] (-2,0)..controls (-1,-1) and (1.5,-1.5)..(2,0);
\draw (-0.5,0.7) node{$L$};
\pgfsetfillopacity{0.5}
\filldraw[fill=yellow!50] (6,-1)--(6,1)--(8.3,2)--(10,1)--(10,-1)--(8.3,-2)--cycle;
\filldraw[fill=orange!50] (6,-1)--(6,1)--(8,2)--(10,1)--(10,-1)--(8,-2)--cycle;
\draw[ultra thick, blue] (6,-1)--(6,1);
\draw[ultra thick, blue] (10,-1)--(10,1);
\end{tikzpicture}
\end{center}
\caption{Manifold with corners $Q$ for which the products of extra
elements cannot be calculated using linear relations of
Proposition~\ref{propLinearRelation}} \label{fig:2gons}
\end{figure}
\begin{remark}
This general idea may not work in particular cases. Figure
\ref{fig:2gons} provides an example of $Q$ such that every
pseudomanifold $L$ with $\partial L\subset
\partial Q^{(0)}$, representing the generator of $H_1(Q,\partial Q)$,
intersects every facet of $Q$. Unfortunately, such situations may
appear as realizations of origami templates. The picture on the
right shows an origami template, whose geometric realization is
the manifold with corners shown on the left.
\end{remark}
\section{Some observation on non-acyclic
cases}\label{sectNonAcyclicFaces}
The face acyclicity condition we assumed so far is not preserved
under taking the product with a symplectic toric manifold $N$, but
every face of codimension $\ge \frac{1}{2}\dim N+1$ is acyclic.
Motivated by this observation, we will make the following
assumption on our toric origami manifold $M$ of dimension $2n$:
\begin{quote}
every face of $M/T$ of codimension $\ge r$ is acyclic for some integer~$r$.
\end{quote}
Note that $r=1$ in the previous sections. Under the above
assumption, the arguments in Section~\ref{sectBettiNumbers} work
to some extent in a straightforward way. The main point is that
Lemma~\ref{lemm:3-5} can be generalized as follows.
\begin{lemma} \label{lemm:7-1}
The homomorphism $H^{2j}(\tM)\to H^{2j}(Z_+\cup Z_-)$ induced by
the inclusion is surjective for $j\ge r$.
\end{lemma}
Using this lemma, we see that Lemma~\ref{lemm:3-6} turns into the
following.
\begin{lemma} \label{lemm:7-2} We have the relations
\[
\begin{split}
&\sum_{i=1}^r(b_{2i}(\tilde M)-b_{2i-1}(\tilde M))=\sum_{i=1}^r(b_{2i}(M)-b_{2i-1}(M))+b_{2r}(B)\\
&b_{2i}(\tilde M)-b_{2i-1}(\tilde M)=b_{2i}(M)-b_{2i-1}(M)+b_{2i}(B)-b_{2i-2}(B)\quad\text{for $i\ge r+1$}.
\end{split}
\]
\end{lemma}
Combining Lemma~\ref{lemm:7-2} with Lemma~\ref{lemm:3-4}, Lemma~\ref{lemm:3-7} turns into the following.
\begin{lemma} \label{lemm:7-3} We have the relations
\[
\begin{split}
&b_1(M')=b_1(M)-1,\quad b_{2r}(M')=b_{2r}(M)+b_{2r-2}(B)+b_{2r}(B),\\
&b_{2i+1}(M')=b_{2i+1}(M)\quad \text{for $r\le i\le n-r-1$}.
\end{split}
\]
\end{lemma}
Finally, Theorem~\ref{theo:3-1} is generalized as follows.
\begin{theorem}
Let $M$ be an orientable toric origami manifold of dimension $2n$
$(n\ge 2)$ such that every face of $M/T$ of codimension $\ge r$ is
acyclic. Then
\[
b_{2i+1}(M)=0\quad\text{for $r\le i\le n-r-1$}.
\]
Moreover, if $M'$ and $B$ are as above, then
\[
\begin{split}
&b_1(M')=b_1(M)-1\,\,(\text{hence $b_{2n-1}(M')=b_{2n-1}(M)-1$ by Poincar\'e duality}),\\
&b_{2i}(M')=b_{2i}(M)+b_{2i}(B)+b_{2i-2}(B)\quad\text{for $r\le i\le n-r$}.
\end{split}
\]
\end{theorem}
\section{Introduction}\label{sec:intro}
Shannon's rate-distortion function $R(D)$ for a stationary zero-mean Gaussian source $X$ with memory and under the mean squared error (MSE) fidelity criterion can be written in a parametric form (the reverse water-filling solution)~\cite{gallag68}
\begin{subequations}\label{eq:ShannonsRDF}
\begin{align}
R(D)&= \frac{1}{2\pi}\int_{\omega: S_X(\omega)>\theta} \frac{1}{2}\log\left(\frac{S_X(\omega)}{\theta}\right) \, d\omega\\
D &= \frac{1}{2\pi}\int_{-\pi}^{\pi} S_Z(\omega) \, d\omega,\label{eq:D_shannon}
\end{align}
where $S_X(\omega)$ denotes the \emph{power spectral density} (PSD) of $X$ and the distortion PSD $S_Z(\omega)$ is given by
\begin{equation}
S_Z(\omega) = \begin{cases}
\theta, & \text{if}\ S_X(\omega) > \theta \\
S_X(\omega), & \text{otherwise}.
\end{cases}
\end{equation}
\end{subequations}
The water level $\theta$ is chosen such that the distortion constraint~\eqref{eq:D_shannon} is satisfied.
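For instance, for a white source with $S_X(\omega)=\sigma_X^2$ and any $D\le\sigma_X^2$, the water level is $\theta=D$ and \eqref{eq:ShannonsRDF} reduces to the familiar expression $R(D)=\frac{1}{2}\log\left(\sigma_X^2/D\right)$.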
It is well known that in order to achieve Shannon's rate-distortion function (RDF) in the quadratic Gaussian case, the distortion must be independent of the output.
This clearly implies that the distortion must be \emph{correlated} to the source.
Interestingly, many well known source coding schemes actually lead, by construction, to source-uncorrelated distortions. %
In particular, this is the case when the source coder satisfies the following two conditions:
a) The linear processing stages (if any) achieve \emph{perfect reconstruction} (PR) in the absence of quantization;
b) the quantization error is uncorrelated to the source.
The first condition is typically satisfied by PR filterbanks~\cite{vaidya93}, transform coders~\cite{goyal01} and feedback quantizers~\cite{jaynol84}.
The second condition is met when subtractive (and often when non-subtractive) dither quantizers are employed~\cite{grasto93}.
Thus, any PR scheme using, for example, subtractively dithered quantization, leads to source-uncorrelated distortions.
An important fundamental question, which was raised by the authors in a recent paper~\cite{derost08}, is: ``What is the impact on Shannon's rate-distortion function, when we further impose the constraint that the end-to-end distortion must be uncorrelated to the input?''
In~\cite{derost08}, we formalized the notion of $R^\perp(D)$, which is the quadratic rate-distortion function subject to the constraint that the distortion is uncorrelated to the input.
For a Gaussian source $X\in\mathbb{R}^{N}$, we defined $R^{\perp}(D)$ as~\cite{derost08}
\begin{equation}
R^{\perp}(D) \triangleq \min_{%
\substack{Y: \mathbb{E}[X(Y-X)^T]=\boldsymbol{0}, \\
\frac{1}{N} tr(\boldsymbol{K}_{Y-X}) \leq D,
\frac{1}{N}|\boldsymbol{K}_{Y-X}|^{\frac{1}N} > 0}}
\tfrac{1}{N}
I(X ; Y),
\end{equation}
where the notation $\boldsymbol{K}_{X}$ denotes the covariance matrix of $X$ and $\abs{\cdot}$ refers to the determinant.
For zero mean Gaussian stationary sources, we showed in~\cite{derost08} that the above minimum (in the limit when $N\to\infty$)
satisfies the following equations:
\begin{subequations}\label{eq:Rperp_equations}
\begin{align}
R^\perp(D)
&=
\frac{1}{2\pi} \!\int_{-\pi}^{\pi} \!\!\log\!\left(\!\!\frac{\hsqrt{S_X(\omega)+\alpha} + \hsqrt{S_X(\omega)}}{\hsqrt{\alpha}}\!\right) d\omega\label{eq:RperpProcDef}\\
D &= \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{Z}(\omega) \, d\omega,\nonumber
\end{align}
\text{where}
\begin{equation}\label{eq:Sz_def}
S_Z(\omega) \!= \!\frac{1}{2}\!\left(\! \hsqrt{S_X(\omega)\!+\!\alpha} - \hsqrt{S_X(\omega)}\right)\!\hsqrt{S_X(\omega)}, \; \forall \omega,
\end{equation}
\end{subequations}
is the PSD of the optimal distortion, which needs to be Gaussian.
Notice that here the parameter $\alpha$ (akin to $\theta$ in~\eqref{eq:ShannonsRDF}) does not represent a ``water level''.
Indeed, unless $X$ is white, the PSD of the optimal distortion for $R^{\perp}(D)$ is not white, \emph{for all $D>0$}.
\footnote{Other similarities and differences between $R^{\perp}(D)$ and Shannon's $R(D)$ are discussed in~\cite{derost08}.}
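For instance, for a white source with $S_X(\omega)=\sigma_X^2$, \eqref{eq:Sz_def} yields the white distortion PSD $S_Z(\omega)=D$ with $\alpha=4D(1+D/\sigma_X^2)$, and substituting into \eqref{eq:RperpProcDef} one can verify that $R^{\perp}(D)=\frac{1}{2}\log\left(1+\sigma_X^2/D\right)$, which is slightly larger than $R(D)=\frac{1}{2}\log\left(\sigma_X^2/D\right)$.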
In the present paper we prove achievability of $R^\perp(D)$ by constructing coding schemes based on dithered lattice quantization, which, in the limit as the quantizer dimension approaches infinity, are able to achieve $R^\perp(D)$ for any positive $D$.
We also show that $R^{\perp}(D)$ can be realized causally, i.e.,
that for all Gaussian sources and for all positive distortions one can build forward test channels that realize $R^{\perp}(D)$ without using non-causal filters.
This is contrary to the case of Shannon's rate-distortion function $R(D)$, where at least one of the filters of the forward test channel that realizes $R(D)$ needs to be non-causal~\cite{gallag68}.
To further illustrate the causality of
$R^{\perp}(D)$,
we present a causal transform coding architecture that realizes it.
We also show that the use of feedback noise-shaping allows one to achieve $R^{\perp}(D)$ with memoryless entropy coding.
This parallels a recent result by Zamir, Kochman and Erez for $R(D)$~\cite{zamkoc08}.
We conclude the paper by showing that, in all the discussed architectures, the
rate-loss (with respect to $R^{\perp}(D)$) when using a finite-dimensional quantizer can be upper bounded by the space-filling loss of the quantizer.
Thus, for any Gaussian source with memory, by using noise-shaping and scalar dithered quantization,
the \emph{scalar} entropy (conditioned to the dither) of the quantized output exceeds $R^{\perp}(D)$ by at most 0.254 bit/dimension.
\section{Background on Dithered Lattice Quantization}\label{sec:background}
A randomized lattice quantizer is a lattice quantizer with subtractive dither $\nu$, followed by entropy encoding.
The dither $\nu \sim \mathcal{U}(V_0)$ is uniformly distributed over a Voronoi cell $V_0$ of the lattice quantizer.
Due to the dither, the quantization error is truly independent of the input. Furthermore, it was shown in~\cite{zamfed92} that the coding rate of the quantizer, i.e.\
\begin{equation}
R_{\mathcal{Q}_{N}} \triangleq
\tfrac{1}{N}H(\mathcal{Q}_{N}(X+\nu)|\nu)
\end{equation}
can be written as the mutual information between the input and the output of an additive noise channel $Y'=X+E'$, where $E'$ denotes the channel's additive noise and is distributed as $-\nu$.
More precisely,
$R_{\mathcal{Q}_{N}}=\frac{1}{N}I(X;Y')=\frac{1}{N}I(X;X+E')$ and the quadratic distortion per dimension is given by $\frac{1}{N}\mathbb{E}\|Y'-X\|^2 = \frac{1}{N}\mathbb{E}\|E'\|^2$.
It has furthermore been shown that when $\nu$ is white
there exists a sequence of lattice quantizers $\{\mathcal{Q}_{N}\}$ for which the quantization error (and therefore also the dither) becomes approximately Gaussian distributed (in the divergence sense) for large $N$.
Specifically, let $E'$ have a probability density function (PDF) $f_{E'}$, and let $E'_G$ be Gaussian distributed with the same mean and covariance as $E'$.
Then $\lim_{N\rightarrow\infty}\frac{1}{N}D(f_{E'}(e) \| f_{E'_G}(e)) = 0$ with a convergence rate of $\frac{\log(N)}{N}$ if the sequence $\{\mathcal{Q}_{N}\}$ is chosen appropriately~\cite{zamfed96}.
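Before shaping is introduced, the basic mechanism is easy to reproduce in a toy simulation of a scalar (one-dimensional lattice) quantizer with subtractive dither; the step size, the Gaussian test source, and the sample size below are arbitrary choices made only for illustration.
\begin{verbatim}
# Toy check of subtractive dithering with a scalar (1-D lattice) quantizer.
import numpy as np

rng = np.random.default_rng(0)
Delta = 0.5                                        # quantizer step (assumed)
x = rng.normal(0.0, 1.0, size=200_000)             # Gaussian source samples
nu = rng.uniform(-Delta/2, Delta/2, size=x.size)   # subtractive dither

q = Delta*np.round((x + nu)/Delta)                 # lattice point for x + dither
y = q - nu                                         # subtract the dither at the decoder
err = y - x                                        # end-to-end error

print("error mean     :", err.mean())              # ~ 0
print("error variance :", err.var(), "vs Delta^2/12 =", Delta**2/12)
print("corr(x, err)   :", np.corrcoef(x, err)[0, 1])   # ~ 0: error uncorrelated with input
\end{verbatim}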
In the next section we will be interested in the case where the dither is not necessarily white.
By shaping the Voronoi cells of a lattice quantizer whose dither $\nu$ is white, we also shape $\nu$, obtaining a colored dither $\nu'$.
This situation was considered in detail in~\cite{zamfed96}, from which we obtain the following lemma (proved in~\cite{zamfed96}, although not stated there as a lemma).
\begin{lem}\label{lem:shapedlattice}
Let $E\sim \mathcal{U}(V_0)$ be white, i.e.\ $E$ is uniformly distributed over the Voronoi cell $V_0$ of the lattice quantizer $\mathcal{Q}_{N}$ and $\boldsymbol{K}_E=\epsilon \boldsymbol{I}$.
Furthermore, let $E'\sim \mathcal{U}(V'_0)$, where
$V'_0$ denotes the shaped Voronoi cell
$V'_0 = \{x\in \mathbb{R}^{N} : \boldsymbol{M}^{-1}x \in V_0\}$ and $\boldsymbol{M}$ is some invertible linear transformation.
Denote the covariance of $E'$ by $\boldsymbol{K}_{E'} = \boldsymbol{M}\boldsymbol{M}^T\epsilon$.
Similarly, let $E_G\sim \mathcal{N}(\boldsymbol{0},\boldsymbol{K}_{E_G})$ having covariance matrix
$\boldsymbol{K}_{E_G} = \boldsymbol{K}_{E}$ and let $E'_G\sim \mathcal{N}(\boldsymbol{0},\boldsymbol{K}_{E'_G})$ where $\boldsymbol{K}_{E'_G} = \boldsymbol{K}_{E'}$.
Then there exists a sequence of shaped lattice quantizers such that
\begin{equation}
\tfrac{1}{N}D(f_{E'}(e) \| f_{E'_G}(e)) = \mathcal{O}\left({\log(N)}/{N}\right).
\end{equation}
\end{lem}
\begin{proof}
The divergence is invariant to invertible transformations since
$h(E')=h(E)+\log_2(|\boldsymbol{M}|)$. Thus, $D(f_{E'}(e) \| f_{E'_G}(e)) = D(f_{\boldsymbol{M}E}(e)\|f_{\boldsymbol{M}E_G}(e)) = D(f_{E}(e)\|f_{E_G}(e))$ for any $N$.
\end{proof}
\section{Achievability of $R^{\perp}(D)$}
The simplest forward channel that realizes $R^{\perp}(D)$ is shown in Fig.~\ref{fig:forwartdtc}.
According to~\eqref{eq:Rperp_equations}, all that is needed for the mutual information per dimension between $X$ and $Y$ to equal $R^{\perp}(D)$ is that $Z$ be Gaussian with PSD equal to the right hand side (RHS) of~\eqref{eq:Sz_def}.
\begin{figure}[htp]
\centering
\input{forwardtestchannel.pstex_t}
\caption{Forward test channel}
\label{fig:forwartdtc}
\end{figure}
In view of the asymptotic properties of randomized lattice quantizers discussed in Section~\ref{sec:background},
the achievability of~$R^{\perp}(D)$ can be shown by replacing the test channel of Fig.\ref{fig:forwartdtc} by an adequately \emph{shaped} $N$-dimensional randomized lattice quantizer $\mathcal{Q}_{N}'$ and then letting $N\rightarrow \infty$.
In order to establish this result, the following lemma is needed.
\begin{lem}\label{lem:excessrate}
\emph{
Let $X$, $X'$, $Z$ and $Z'$ be mutually independent random vectors.
Let $X'$ and $Z'$ be arbitrarily distributed, and let $X$ and $Z$ be Gaussian having the same mean and covariance as $X'$ and $Z'$, respectively.
Then
\begin{align}
I(X';X'+Z') \leq I(X;X+Z) + D(Z'\Vert Z).
\end{align}
}
\end{lem}
\begin{proof}
\begin{align*}
I(&X';X'+Z')
\overset{\hphantom{(a)}}{=} h(X'+Z') - h(Z')\\
&\overset{(a)}{=} h(X+Z) - h(Z) + D(Z'\Vert Z) - D(X'+Z'\Vert X + Z)\\
&\overset{\hphantom{(b)}} {\leq} I(X;X+Z) + D(Z'\Vert Z),
\end{align*}
where $(a)$ stems from the well-known result $D(X'\Vert X) = h(X) - h(X')$, see, e.g.,~\cite[p.~254]{covtho06}.
\end{proof}
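A quick numerical sanity check of the lemma is possible in the scalar case, taking $X'=X$ Gaussian and $Z'$ uniform (so that $Z$ is Gaussian with variance $\Delta^{2}/12$); the step $\Delta$ and the other parameter values below are illustrative assumptions.
\begin{verbatim}
# Scalar sanity check of the lemma: X Gaussian, Z' uniform on [-Delta/2, Delta/2].
import numpy as np
from scipy import integrate
from scipy.stats import norm

sigma_x, Delta = 1.0, 1.0                     # illustrative parameters
var_z = Delta**2/12                           # variance of Z' and of Gaussian Z

def f_y(y):
    """Density of X' + Z' (Gaussian convolved with a centered uniform)."""
    return (norm.cdf((y + Delta/2)/sigma_x) - norm.cdf((y - Delta/2)/sigma_x))/Delta

def integrand(y):
    f = f_y(y)
    return -f*np.log(f) if f > 0 else 0.0

h_sum, _ = integrate.quad(integrand, -10, 10, limit=200)   # h(X'+Z') in nats
I_prime = h_sum - np.log(Delta)               # I(X';X'+Z') = h(X'+Z') - h(Z')
I_gauss = 0.5*np.log(1 + sigma_x**2/var_z)    # I(X;X+Z) with Z Gaussian
D_kl = 0.5*np.log(np.pi*np.e/6)               # D(Z'||Z), uniform vs. matched Gaussian

print(f"I(X';X'+Z')                = {I_prime:.4f} nats")
print(f"bound I(X;X+Z) + D(Z'||Z)  = {I_gauss + D_kl:.4f} nats")
\end{verbatim}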
We can now prove the achievability of $R^\perp(D)$.\\
\begin{thm}\label{thm:achievable}
\emph{For a source $X$ being an infinite-length Gaussian random vector with zero mean,
$R^\perp(D)$ is achievable.}
\end{thm}
\begin{proof}
Let $X^{(N)}$ be the sub-vector containing the first $N$ elements of $X$.
For a fixed distortion $D=\textrm{tr}(\boldsymbol{K}_{Z^{(N)}})/N$, the
average mutual information per dimension $\frac{1}{N}I({X^{(N)}};{X^{(N)}}+{Z^{(N)}})$ is minimized when ${X^{(N)}}$ and ${Z^{(N)}}$ are jointly Gaussian and
\begin{align}
\boldsymbol{K}_{Z^{(N)}} = \frac{1}{2}\sqrt{\boldsymbol{K}_{X^{(N)}}^2 + \alpha\boldsymbol{K}_{X^{(N)}}}-\frac{1}{2}\boldsymbol{K}_{X^{(N)}},
\end{align}
see~\cite{derost08}.
Let the $N$-dimensional shaped randomized lattice quantizer $\mathcal{Q}'_{N}$ be such that the dither is distributed as $-{{E'}^{(N)}}\sim\mathcal{U}(V'_0)$,
with $\boldsymbol{K}_{E'^{(N)}}=\boldsymbol{K}_{Z^{(N)}}$.
It follows that the coding rate of the quantizer is given by $R_{\mathcal{Q}'_{N}} = \frac{1}{N}I({X^{(N)}};{X^{(N)}} + {{E'}^{(N)}})$.
The rate loss due to using $\mathcal{Q}'_{N}$ to quantize ${X^{(N)}}$ is given by
\begin{align}
R_{\mathcal{Q}'_{N}}(D) - R^{\perp}(D)
&= \tfrac{1}{N}\Big[ I({X^{(N)}};{X^{(N)}}+{{E'}^{(N)}}) \nonumber\\
&\quad- I({X^{(N)}};{X^{(N)}}+{{E'}^{(N)}}_G)\Big] \nonumber\\
&\overset{(a)}{\leq} \tfrac{1}{N}D(f_{{{E'}^{(N)}}}(e)\|f_{{{E'_G}^{(N)}}}(e)),\label{eq:middle}
\end{align}
where $f_{{{E'_G}^{(N)}}}$ is the PDF of the Gaussian random vector ${{E'_G}^{(N)}}$, independent of ${{E'}^{(N)}}$ and ${X^{(N)}}$, and having the same first and second order statistics as ${{E'}^{(N)}}$.
In~\eqref{eq:middle}, inequality~$(a)$ follows directly from Lemma~\ref{lem:excessrate}, since the use of subtractive dither yields the error ${{E'}^{(N)}}$ independent of ${X^{(N)}}$.
To complete the proof, we invoke Lemma~\ref{lem:shapedlattice}, which guarantees that the RHS of~\eqref{eq:middle} vanishes as $N\rightarrow \infty$.
\end{proof}
\begin{rem}
\begin{enumerate}
\item For zero mean stationary Gaussian random sources, $R^{\perp}(D)$ is achieved by taking $X$ in Theorem~\ref{thm:achievable} to be the complete input process.
For this case, as shown in~\cite{derost08}, the Fourier transform of the autocorrelation function of $Z^{(N)}$ tends to the RHS of~\eqref{eq:Sz_def}.
\item For vector processes, the achievability of $R^{\perp}(D)$ follows by building $X$ in Theorem~\ref{thm:achievable} from the concatenation of infinitely many consecutive vectors.
\item Note that if one has an infinite number of parallel scalar random processes, $R^{\perp}(D)$ can be achieved \emph{causally} by forming $X$ in Theorem~\ref{thm:achievable} from the $k$-th sample of each of the processes and using entropy coding after $\mathcal{Q}$.
\end{enumerate}
\end{rem}
The fact that $R^{\perp}(D)$ can be realized causally is further illustrated in the following section.
\section{Realization of $R^{\perp}(D)$ by Causal Transform Coding }\label{sec:realiz_TC}
We will next show that for a Gaussian random vector $X\in\mathbb{R}^{N}$ with positive definite covariance matrix $\boldsymbol{K}_{X}$, $R^{\perp}(D)$ can be realized by \emph{causal} transform coding~\cite{habher74,pholin00}.
A typical transform coding architecture is shown in Fig.~\ref{fig:Causal_TCNF}.
In this figure, $\boldsymbol{T}$ is an $N\times N$ matrix, and $W$ is a Gaussian vector, independent of $X$, with covariance matrix $\boldsymbol{K}_{W}=\sigma^{2}_{W}\boldsymbol{I}$.
The system clearly satisfies the perfect reconstruction condition $Y=X + \boldsymbol{T}^{-1}W$.
The reconstruction error is the Gaussian random vector $Z\triangleq Y-X$, and the MSE is
$D=\frac{1}{N}\textrm{tr}\set{\boldsymbol{K}_{Z}}$, where
$
\boldsymbol{K}_{Z} = \sigma^{2}_{W} \boldsymbol{T}^{-1} \boldsymbol{T}^{-T}
$.
\begin{figure}[htp]
\centering
\input{Block_Diag_Causal_TCNF.pstex_t}
\caption{Transform coder.}
\label{fig:Causal_TCNF}
\end{figure}
By restricting $\boldsymbol{T}$ to be lower triangular, the transform coder in Fig.~\ref{fig:Causal_TCNF} becomes causal, in the sense that $\forall k\in\set{1,\ldots,N}$, the $k$-th elements of $U$ and $\hat{U}$ can be determined using just the first $k$ elements of $X$ and the $k$-th element of $W$.
To have
$
\frac{1}{N}I(X;Y)=R^{\perp}(D)
$,
it is necessary and sufficient that
\begin{align}
\boldsymbol{T}^{-1} \boldsymbol{T}^{-T} = \boldsymbol{K}_{Z^{\star}}/\sigma^{2}_{W} \label{eq:bTbT_NF}
\end{align}
where the covariance matrix of the optimal distortion is~\cite{derost08}
\begin{align}
\boldsymbol{K}_{Z^{\star}}\triangleq
\frac{1}{2}\hsqrt{\boldsymbol{K}_{X}^2 + \alpha\boldsymbol{K}_{X}}-\frac{1}{2}\boldsymbol{K}_{X}.\label{eq:Kzstardef2}
\end{align}
Since $\boldsymbol{T}^{-1}$ is lower triangular,~\eqref{eq:bTbT_NF} is the Cholesky decomposition of $\boldsymbol{K}_{Z^{\star}}/\sigma^{2}_{W}$, which always exists.%
\footnote{Furthermore, since $\boldsymbol{K}_{Z^{\star}}>0$, there exists a unique $\boldsymbol{T}$ having only positive elements on its main diagonal that satisfies~\eqref{eq:bTbT_NF}, see~\cite{horjoh85}.}
Thus, $R^{\perp}(D)$ can be realized by causal transform coding.
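As a concrete illustration, the construction can be carried out numerically for a small covariance matrix; the Toeplitz $\boldsymbol{K}_{X}$, the value of $\alpha$, and $\sigma^{2}_{W}$ below are arbitrary choices made only for this sketch.
\begin{verbatim}
# Sketch: causal transform realizing the optimal distortion covariance.
import numpy as np
from scipy.linalg import sqrtm, cholesky, toeplitz

N, alpha, sigma_w2 = 6, 0.5, 1.0                    # illustrative parameters
K_X = toeplitz(0.8**np.arange(N))                   # an AR(1)-like covariance (assumed)

# optimal distortion covariance K_{Z*} (cf. eq:Kzstardef2)
K_Zstar = 0.5*np.real(sqrtm(K_X @ K_X + alpha*K_X)) - 0.5*K_X

# Cholesky factor of K_{Z*}/sigma_W^2 gives T^{-1} (cf. eq:bTbT_NF)
Tinv = cholesky(K_Zstar/sigma_w2, lower=True)
T = np.linalg.inv(Tinv)                             # causal (lower-triangular) transform

print("T lower triangular               :", np.allclose(T, np.tril(T)))
print("T^{-1} T^{-T} = K_{Z*}/sigma_W^2 :", np.allclose(Tinv @ Tinv.T, K_Zstar/sigma_w2))
\end{verbatim}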
In practice, transform coders are implemented by replacing the (vector) additive white Gaussian noise (AWGN) channel $\hat{U}=V+W$ by a quantizer (or several quantizers) followed by entropy coding.
The latter process is simplified if the quantized outputs are independent.
When using quantizers with subtractive dither, this can be shown to be equivalent to having $\frac{1}{N}\sumfromto{k=1}{N}I(\hat{U}_{k}-W_{k};\hat{U}_{k})=\frac{1}{N} I(U;\hat{U})$
in the transform coder when using the AWGN channel.
Notice that, since $\boldsymbol{T}$ in~\eqref{eq:bTbT_NF} is invertible, the mutual information per dimension
$\frac{1}{N} I(U;\hat{U})$ is also equal to $R^{\perp}(D)$.
By the chain rule of mutual information we have
\begin{align}
\frac{1}{N}\sumfromto{k=1}{N}I(\hat{U}_{k}-W_{k};\hat{U}_{k})
\geq \frac{1}{N} I(U;\hat{U}) = R^{\perp}(D),\label{eq:Ineq_key}
\end{align}
with equality iff the elements of $\hat{U}$ are mutually independent.
If $\hat{U}$ is Gaussian, this is equivalent to $\boldsymbol{K}_{\hat{U}}$ being diagonal.
Clearly, this cannot be obtained with the architecture shown in Fig.~\ref{fig:Causal_TCNF} using causal matrices (while at the same time satisfying~\eqref{eq:bTbT_NF}).
However, it can be achieved by using error feedback, as we show next.
Consider the scheme shown in Fig.~\ref{fig:Causal_TC}, where $\boldsymbol{A}\in\mathbb{R}^{N\times N}$ is lower triangular and $\boldsymbol{F}\in\mathbb{R}^{N\times N}$ is strictly lower triangular.
\begin{figure}[htp]
\centering
\input{Block_Diag_Causal_TC2.pstex_t}
\caption{A causal transform coding scheme with error feedback.}
\label{fig:Causal_TC}
\end{figure}
Again, a sufficient and necessary condition to have $\frac{1}{N}I(X;Y)=R^{\perp}(D)$ is that $\boldsymbol{K}_{Z}=\boldsymbol{K}_{Z^{\star}}$, see~\eqref{eq:Kzstardef2}, i.e.,
\begin{align}
\sigma^{2}_{W} \boldsymbol{A}^{-1}(\boldsymbol{I}-\boldsymbol{F})\left[\boldsymbol{A}^{-1}(\boldsymbol{I}-\boldsymbol{F})\right]^{T} = \boldsymbol{K}_{Z^{\star}}\nonumber\\
\iff
(\boldsymbol{I}-\boldsymbol{F})(\boldsymbol{I}-\boldsymbol{F})^{T} = \boldsymbol{A} \boldsymbol{K}_{Z^{\star}} \boldsymbol{A}^{T}/\sigma^{2}_{W}. \label{eq:I_F_I_F}
\end{align}
On the other hand, equality in~\eqref{eq:Ineq_key} is achieved only if
\begin{align}
\boldsymbol{K}_{\hat{U}} = \boldsymbol{A}\boldsymbol{K}_{X} \boldsymbol{A}^{T}+ \sigma^{2}_{W}(\boldsymbol{I} - \boldsymbol{F})(\boldsymbol{I} - \boldsymbol{F})^{T}= \boldsymbol{D},\label{eq:bI}
\end{align}
for some diagonal matrix $\boldsymbol{D}$ with positive elements.
If we substitute the Cholesky factorization $\boldsymbol{K}_{Z^{\star}}=\boldsymbol{L} \boldsymbol{L}^{T}$ into~\eqref{eq:I_F_I_F}, we obtain
$
(\boldsymbol{I}-\boldsymbol{F})(\boldsymbol{I}-\boldsymbol{F})^{T} = \boldsymbol{A} \boldsymbol{L}\bL^{T}\boldsymbol{A}^{T}/\sigma^{2}_{W}
$,
and thus
\begin{align}
\boldsymbol{A} = \sigma_{W}(\boldsymbol{I}-\boldsymbol{F})\boldsymbol{L}^{-1}.\label{eq:bA_short}
\end{align}
Substituting the above into~\eqref{eq:bI} we obtain
\begin{align}
\boldsymbol{D} = \sigma^{2}_{W}(\boldsymbol{I}-\boldsymbol{F})\left[ \boldsymbol{L}^{-1}\boldsymbol{K}_{X}\boldsymbol{L}^{-T} + \boldsymbol{I}\right](\boldsymbol{I}-\boldsymbol{F})^{T}\label{eq:the_one}
\end{align}
Thus, there exist%
\footnote{For any positive definite matrices $\boldsymbol{K}_{X}$ and $\boldsymbol{K}_{Z^{\star}}=\boldsymbol{L}\bL^{T}$, there exists a \emph{unique} matrix $\boldsymbol{F}$ having zeros on its main diagonal that satisfies~\eqref{eq:the_one}, see~\cite{derque08}.
}
$\boldsymbol{A}$ and $\boldsymbol{F}$ satisfying~\eqref{eq:I_F_I_F} and~\eqref{eq:bI}.
Substitution of~\eqref{eq:bA_short} into~\eqref{eq:the_one} yields
$
\boldsymbol{D} = \boldsymbol{A}\left(\boldsymbol{K}_{X} + \boldsymbol{K}_{Z^{\star}}\right) \boldsymbol{A}^{T}
$,
and
$
\log \abs{\boldsymbol{D}} = 2\log\abs{\boldsymbol{A}} + \log\abs{\boldsymbol{K}_{X} + \boldsymbol{K}_{Z^{\star}}}
$.
From~\eqref{eq:I_F_I_F} and the fact that $\abs{\boldsymbol{I}-\boldsymbol{F}}=1$ it follows that
$
\abs{\boldsymbol{A}}^{2} = \sigma^{2N}_{W} / \abs{\boldsymbol{K}_{Z^{\star}}}
$,
and therefore%
\footnote{The last equality in~\eqref{eq:esta} follows from the expression for $R^{\perp}(D)$ for Gaussian vector sources derived in~\cite{derost08}.}
\begin{align}
\tfrac{1}{N}&\sumfromto{k=1}{N}I(V_{k};\hat{U}_{k})
=
\tfrac{1}{2N}\sumfromto{k=1}{N}\log\Big(\tfrac{\sigma^{2}_{\hat{U}_{k}}}{\sigma^{2}_{W}}\Big)
=\tfrac{1}{2N} \log \tfrac{\abs{\boldsymbol{D}}}{\sigma^{2N}_{W}} \nonumber\\
&= \tfrac{1}{2N}\log\abs{\boldsymbol{K}_{X} + \boldsymbol{K}_{Z^{\star}}} - \tfrac{1}{2N}\log\abs{ \boldsymbol{K}_{Z^{\star}}}\nonumber\\
&= \tfrac{1}{2N}\!\sumfromto{k=1}{N}\!\log
\left(
\tfrac{\hsqrt{\lambda_{k}^{2} +\lambda_{k}\alpha } + \lambda_{k}}{ \hsqrt{\lambda_{k}^{2} +\lambda_{k}\alpha } - \lambda_{k}}
\right)
= R^{\perp}(D),\label{eq:esta}
\end{align}
thus achieving equality in~\eqref{eq:Ineq_key}.
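The chain of identities above can also be verified numerically. The sketch below constructs $\boldsymbol{F}$ and $\boldsymbol{A}$ for a small, arbitrarily chosen Toeplitz covariance (the same kind of illustrative choice as before), checks that $\boldsymbol{K}_{\hat{U}}$ is diagonal, and compares $\frac{1}{N}\sum_k I(V_{k};\hat{U}_{k})$ with the eigenvalue expression in~\eqref{eq:esta}.
\begin{verbatim}
# Numerical check of the error-feedback transform coding construction (illustrative).
import numpy as np
from scipy.linalg import sqrtm, cholesky, toeplitz

N, alpha, sigma_w2 = 6, 0.5, 1.0
K_X = toeplitz(0.8**np.arange(N))
K_Zstar = 0.5*np.real(sqrtm(K_X @ K_X + alpha*K_X)) - 0.5*K_X
L = cholesky(K_Zstar, lower=True)                     # K_{Z*} = L L^T

# unit-lower-triangular I-F diagonalizing P = L^{-1} K_X L^{-T} + I (cf. eq:the_one)
P = np.linalg.inv(L) @ K_X @ np.linalg.inv(L).T + np.eye(N)
G = cholesky(P, lower=True)                           # P = G G^T
Lp = G/np.diag(G)                                     # unit lower-triangular factor of P
I_F = np.linalg.inv(Lp)                               # I - F
F = np.eye(N) - I_F                                   # strictly lower triangular
A = np.sqrt(sigma_w2)*I_F @ np.linalg.inv(L)          # cf. eq:bA_short

K_U = A @ K_X @ A.T + sigma_w2*I_F @ I_F.T            # cf. eq:bI
print("F strictly lower triangular:", np.allclose(F, np.tril(F, -1)))
print("K_U diagonal               :", np.allclose(K_U, np.diag(np.diag(K_U)), atol=1e-10))

sum_MI = 0.5*np.sum(np.log(np.diag(K_U)/sigma_w2))/N  # (1/N) sum_k I(V_k; U_k), nats
lam = np.linalg.eigvalsh(K_X)
R_perp = 0.5*np.mean(np.log((np.sqrt(lam**2 + lam*alpha) + lam)
                            /(np.sqrt(lam**2 + lam*alpha) - lam)))
print(f"sum of scalar MIs = {sum_MI:.6f} nats,  R_perp = {R_perp:.6f} nats")
\end{verbatim}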
We have seen that the use of error feedback allows one to make the average scalar mutual information between the input and output of each AWGN channel in the transform domain equal to $R^{\perp}(D)$.
In the following section we show how this result can be extended to stationary Gaussian processes.
\section{Achieving $R^{\perp}(D)$ by Noise Shaping}\label{sec:noise_shap}
In this section we show that, for any colored stationary Gaussian source and for any positive distortion, $R^{\perp}(D)$ can be realized by noise shaping, and that $R^{\perp}(D)$ is achievable using \emph{memoryless} entropy coding.
\subsection{Realization of $R^{\perp}(D)$ by Noise-Shaping}
The fact that $R^{\perp}(D)$ can be realized by the additive colored Gaussian noise test channel of Fig.~\ref{fig:forwartdtc} suggests that $R^{\perp}(D)$ could also be achieved by an
\emph{additive white Gaussian noise} (AWGN) channel embedded in a noise-shaping feedback loop, see Fig.~\ref{fig:Block_diag_NSDPCM}.
In this figure,
$\set{X_{k}}$ is a Gaussian stationary process with PSD $S_{X}(\eexp{j\omega})$.
The filters $A(z)$ and $F(z)$ are linear time-invariant (LTI).
The AWGN channel is situated between $V$ and $\hat{U}$, where white Gaussian noise $\set{W_{k}}$, independent of $\set{X_{k}}$, is added.
The reconstructed signal $Y$ is obtained by passing $\hat{U}$ through the filter $A(z)^{-1}$, yielding the reconstruction error
$Z_{k}= Y_{k}-X_{k}$.
\begin{figure}[htp]
\centering
\input{Block_Diag_FB_Test_Channel.pstex_t}
\caption{Test channel built by embedding the AWGN channel $\hat{U}_{k}= V_{k}+W_{k}$ in a noise feedback loop.}
\label{fig:Block_diag_NSDPCM}
\end{figure}
The following theorem states that, for this scheme, the \emph{scalar} mutual information across the AWGN channel can actually equal
$R^{\perp}(D=\sigma^{2}_{Z})$.
\begin{thm}\label{thm:Realizable_FQ}
\emph{
Consider the scheme in Fig.~\ref{fig:Block_diag_NSDPCM}.
Let $\set{X_{k}}$, $\set{W_{k}}$ be independent stationary Gaussian random processes.
Suppose that
the differential entropy rate of $\set{X_{k}}$ is bounded,
and that
$\set{W_{k}}$ is white.
Then, for every $D>0$, there exist causal and stable filters $A(z)$, $A(z)^{-1}$ and $F(z)$ such that
\begin{align}
I(V_{k};\hat{U}_{k}) = R^{\perp}(D), \textrm{ where $D\triangleq \sigma^{2}_{Z}$.}\label{eq:scalar_I_equals_Rperp}
\end{align}
}
\end{thm}
\begin{proof}
Consider all possible choices of the filters $A(z)$ and $F(z)$ such that
the obtained sequence $\set{\hat{U}_{k}}$ is white, i.e., such that $S_{\hat{U}}(\eexp{j\omega})=\sigma^{2}_{\hat{U}},\,\, \forall \w \in [-\pi,\pi]$.
From Fig.~\ref{fig:Block_diag_NSDPCM}, this is achieved iff the filters $A(z)$ and $F(z)$ satisfy
\begin{align}
\sigma^{2}_{\hat{U}} = \abs{A(\eexp{j\omega})}^{2}S_{X}(\eexp{j\omega}) + \abs{1-F(\eexp{j\omega})}^{2}\sigma^{2}_{W}.\label{eq:uhat_whitwe}
\end{align}
On the other hand, since $\set{W_{k}}$ is Gaussian, a necessary and sufficient condition in order to achieve
$R^{\perp}(D)$ is that
\begin{align}
S_{Z}(\eexp{j\omega})
&= \abs{1-F(\eexp{j\omega})}^{2} \abs{A(\eexp{j\omega})}^{-2} \sigma^{2}_{W}\label{eq:Sz}\\
&=\frac{1}{2}\!\left(\! \hsqrt{S_X(\eexp{j\omega})+\alpha} - \hsqrt{S_X(\eexp{j\omega})}\right)\!\hsqrt{S_X(\eexp{j\omega})} \nonumber\\
& \triangleq S_{Z^{\star}}(\eexp{j\omega}),\quad \, \forall \w \in [-\pi,\pi].\label{eq:Szstar2}
\end{align}
This holds iff
$
\abs{A(\eexp{j\omega})}^{2} = \sigma^{2}_{W}\abs{1-F(\eexp{j\omega})}^{2}/ S_{Z^{\star}}(\eexp{j\omega})
$.
Substituting the latter and~\eqref{eq:Szstar2} into~\eqref{eq:uhat_whitwe}, and after some algebra, we obtain
\begin{subequations}\label{eq:opt_filters}
\begin{align}
\!\!\!\abs{1\!-\!F(\eexp{j\omega})}^{2} \!\!
&=
\frac{\sigma^{2}_{\hat{U}}}{\sigma^{2}_{W}}\!\!
\left[
\frac
{
\!\hsqrt{S_{X}(\eexp{j\omega}) \!+\! \alpha} -\! \hsqrt{S_{X}(\eexp{j\omega})}
}
{\hsqrt{\alpha}}
\right]^{2},
\label{eq:uno_min_F_opt}
\\
\abs{A(\eexp{j\omega})}^{2}
&=
2\sigma^{2}_{\hat{U}}
\frac{\hsqrt{S_{X}(\eexp{j\omega}) \!+\! \alpha} -\! \hsqrt{S_{X}(\eexp{j\omega})}}{\alpha \hsqrt{S_{X}(\eexp{j\omega})}}\label{eq:Aopt}.
\end{align}
\end{subequations}
Notice that the functions on the right hand sides of~\eqref{eq:opt_filters} are bounded and positive for all $\omega\in[-\pi,\pi]$, and that
a bounded differential entropy rate of $\set{X_{k}}$ implies that $|\int_{-\pi}^{\pi}\log S_{X}(\eexp{j\omega})\, d\omega|<\infty$.
From the Paley-Wiener criterion~\cite{wiepal34} (see also, e.g.,~\cite{proman96}), this implies that $(1-F(z))$, $A(z)$ and $A(z)^{-1}$ can be chosen to be stable and causal.
Furthermore, recall that for any fixed $D>0$, the corresponding value of $\alpha$ is unique (see~\cite{derost08}), and thus fixed.
Since the variance $\sigma^{2}_{W}$ is also fixed, it follows that each frequency response magnitude $\abs{1-F(\eexp{j\omega})}$ that satisfies~\eqref{eq:uno_min_F_opt} can be associated to a unique value of
$\sigma^{2}_{\hat{U}}$.
Since $F(z)$ is strictly causal and stable, the minimum value of the variance $\sigma^{2}_{\hat{U}}$ is achieved when
\begin{align}
\intpipi{\log\abs{1-F(\eexp{j\omega})}} =0,\label{eq:Bode}
\end{align}
i.e., if $1-F(z)$ has no zeros outside the unit circle (equivalently, if $1-F(z)$ is minimum phase), see, e.g.,~\cite{serbra97}.
If we choose in~\eqref{eq:uno_min_F_opt} a filter $F(z)$ that satisfies~\eqref{eq:Bode}, and then we take the logarithm and integrate both sides of~\eqref{eq:uno_min_F_opt}, we obtain
\begin{align*}
\frac{1}{2}&
\!\log\!\left(\frac{\sigma^{2}_{\hat{U}}}{\sigma^{2}_{W}}\!\right)
=
\frac{1}{2\pi}\!\Intfromto{-\pi}{\pi}{\!\log
\left[\frac{\hsqrt{\alpha}}{\hsqrt{S_{X}(\eexp{j\omega}) \!+\! \alpha} - \hsqrt{S_{X}(\eexp{j\omega})}}
\right] }d\omega\\
&=
\frac{1} {2\pi}\!
\Intfromto{-\pi}{\pi} {\!\log\!
\left[
\frac
{\hsqrt{S_{X}(\eexp{j\omega}) \!+\! \alpha} + \hsqrt{S_{X}(\eexp{j\omega})}}
{\hsqrt{\alpha}}
\right] }d\omega
=
R^{\perp}(D),
\end{align*}
where~\eqref{eq:RperpProcDef} has been used.
We then have that
\begin{align*}
R^{\perp}(D)
&\overset{\hphantom{(a)}}{=} \frac{1}{2}\log\Big(\frac{\sigma^{2}_{\hat{U}}}{\sigma^{2}_{W}}\Big)
= \frac{1}{2}\log(2\pi\expo{}\sigma^{2}_{\hat{U}}) -\frac{1}{2}\log(2\pi\expo{}\sigma^{2}_{W}) \\
&\overset{(a)}{=} h(\hat{U}_{k}) - h(W_{k})\\
&\overset{(b)}{=} h(\hat{U}_{k}) - h(V_{k}+ W_{k}| V_{k} )
= I(V_{k};\hat{U}_{k}),
\end{align*}
where $(a)$ follows from the Gaussianity of $W_{k}$ and $\hat{U}_{k}$,
and
$(b)$ from the fact that $W_{k}$ is independent of $V_{k}$ (since $F$ is strictly causal).
This completes the proof.
Alternatively, the same conclusion can be reached through the following chain of (in)equalities:
\begin{align*}
R^{\perp}(&D)
\overset{(a)}{\leq}
\Irate{X}{Y}\\
&\overset{\hphantom{(a)}}{=}
\bar{h}(A^{-1}\set{\hat{U}_{k}})
-
\bar{h}(\set{X_{k}} + A^{-1}(1-F)\set{W_{k}}|\set{X_{k}})\\
&\overset{\hphantom{(b)}}{=}
\bar{h}(A^{-1}\set{\hat{U}_{k}})
-
\bar{h}( A^{-1}(1-F)\set{W_{k}} )\\
&\overset{(b)}{=}
\bar{h}(\set{\hat{U}_{k}})
-
\bar{h}((1-F)\set{W_{k}} )\\
&\overset{(c)}{\leq}
h(\hat{U}_{k}|\hat{U}_{k}^{-})
-
h(W_{k}) \overset{(d)}{\leq}
h(\hat{U}_{k})
-
h(W_{k})\\
&\overset{(e)}{=} h(\hat{U}_{k}) - h(V_{k}+ W_{k}| V_{k} )
= I(V_{k};\hat{U}_{k}).
\end{align*}
In~$(a)$, equality is achieved iff the right hand side of~\eqref{eq:Sz} equals~\eqref{eq:Szstar2}, i.e., iff $Z$ has the optimal PSD.
Equality~$(b)$ holds because $\big|\int_{-\pi}^{\pi}\log\abs{A(\eexp{j\omega})}\, d\omega\big|<\infty$, which follows from~\eqref{eq:Aopt}.
The fact that $\set{\hat{U}_{k}}$ is stationary has been used in~$(c)$, wherein equality is achieved iff $\abs{1-F}$ is minimum phase, i.e., if~\eqref{eq:Bode} holds.
Equality in~$(d)$ holds if and only if the elements of $\set{\hat{U}_{k}}$ are independent, which, from the Gaussianity of $\set{\hat{U}_{k}}$, is equivalent to~\eqref{eq:uhat_whitwe}.
Finally, $(e)$ stems from the fact that
$W_{k}$ is independent of $V_{k}$.
\end{proof}
Notice that the key to the proof of Theorem~\ref{thm:Realizable_FQ} is knowing a priori the PSD of the end-to-end distortion required to realize $R^{\perp}(D)$.
Indeed, one could also use this fact to realize $R^{\perp}(D)$ by embedding the AWGN in a DPCM feedback loop, and then following a reasoning similar to that in~\cite{zamkoc08}.
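These relations can be checked numerically in the frequency domain. The sketch below evaluates the optimal magnitude responses~\eqref{eq:opt_filters} on a grid for an AR(1) source (an illustrative choice, as are the values of $a$, $\alpha$ and $\sigma^{2}_{W}$), and confirms the whiteness of $\hat{U}$, the Bode condition~\eqref{eq:Bode}, and the identity $\frac{1}{2}\log(\sigma^{2}_{\hat{U}}/\sigma^{2}_{W})=R^{\perp}(D)$.
\begin{verbatim}
# Frequency-domain check of the optimal noise-shaping magnitudes (illustrative).
import numpy as np

w = np.linspace(-np.pi, np.pi, 8001)
a, alpha, sigma_w2 = 0.9, 0.5, 1.0                 # assumed parameters
S_X = (1 - a**2)/(1 - 2*a*np.cos(w) + a**2)        # AR(1) source PSD
rp, rx = np.sqrt(S_X + alpha), np.sqrt(S_X)

# sigma_U^2 fixed by the Bode condition applied to eq:uno_min_F_opt
log_shape = 2*np.log((rp - rx)/np.sqrt(alpha))     # log of the shaped part of |1-F|^2
sigma_u2 = sigma_w2*np.exp(-np.trapz(log_shape, w)/(2*np.pi))

F_mag2 = (sigma_u2/sigma_w2)*((rp - rx)/np.sqrt(alpha))**2   # |1-F|^2, eq:uno_min_F_opt
A_mag2 = 2*sigma_u2*(rp - rx)/(alpha*rx)                     # |A|^2,   eq:Aopt

S_U = A_mag2*S_X + F_mag2*sigma_w2                 # PSD of U-hat, cf. eq:uhat_whitwe
R_perp = np.trapz(np.log((rp + rx)/np.sqrt(alpha)), w)/(2*np.pi)  # eq:RperpProcDef, nats

print("U-hat white                :", np.allclose(S_U, sigma_u2))
print("Bode integral of log|1-F|  :", np.trapz(np.log(F_mag2), w)/(4*np.pi))  # ~ 0
print("0.5*log(su2/sw2) vs R_perp :", 0.5*np.log(sigma_u2/sigma_w2), R_perp)
\end{verbatim}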
\subsection{Achieving $R^{\perp}(D)$ Through Feedback Quantization}
In order to achieve $R^{\perp}(D)$ by using a quantizer instead of an AWGN channel, one would require the quantization errors to be Gaussian.
This cannot be achieved with scalar quantizers.
However, as we have seen in Section~\ref{sec:background}, dithered lattice quantizers are able to yield quantization errors that are approximately Gaussian as the lattice dimension tends to infinity.
The sequential (causal) nature of the feedback architecture does not immediately allow for the possibility of using vector quantizers. However, if several sources are to be processed simultaneously, we can overcome this difficulty by using an idea suggested in~\cite{zamkoc08} where the sources are processed in parallel by separate feedback quantizers.
The feedback quantizers operate independently of each other, except that their scalar quantizers are replaced by a single vector quantizer.
If the number of parallel sources is large, then the vector quantizer guarantees that the marginal distributions of the individual components of the quantized vectors become approximately Gaussian. Thus, due to the dithering within the vector quantizer, each feedback quantizer observes a sequence of i.i.d.\ Gaussian quantization noises.
Furthermore, the effective coding rate (per source) is that of a high dimensional entropy constrained dithered quantizer (per dimension).
The fact that the scalar mutual information between $V_{k}$ and $\hat{U}_{k}$
equals the mutual information rate between $\set{V_{k}}$ and $\set{\hat{U}_{k}}$
in each of the parallel coders implies that $R^{\perp}(D)$ can be achieved by using a memoryless entropy coder.
\section{Rate Loss with Dithered Feedback Quantization}
The results presented in sections~\ref{sec:realiz_TC} and~\ref{sec:noise_shap} suggest that
if a test channel embedding an AWGN channel realizes $R^{\perp}(D)$, then a source coder obtained by replacing the AWGN channel by a dithered, finite dimensional lattice quantizer, would exhibit a rate close to $R^{\perp}(D)$.
The next theorem, whose proof follows the line of the results given in~\cite[sec.~VII]{zamkoc08}, provides an upper bound on the rate-loss incurred in this case.
\begin{thm}
\emph{
Consider a source coder with a finite dimensional subtractively dithered lattice quantizer $\mathcal{Q}$.
If, when the quantizer is replaced by an AWGN channel, the scalar mutual information across the channel equals $R^{\perp}(D)$, then
the scalar entropy of the quantized output exceeds $R^{\perp}(D)$ by at most $0.254$ bit/dimension.}
\end{thm}
\begin{proof}
Let $W$ be the noise of the AWGN channel, and $V$ and $\hat{U}$ denote the channel input and output signals.
From the conditions of the theorem, we have that
\begin{align}
I(V_{k};\hat{U}_{k}) = R^{\perp}(D).\label{eq:Iscalar_eq_Rperp}
\end{align}
If we now replace the AWGN channel by a dithered quantizer with subtractive dither $\nu$, such that the quantization noise $W'$ has the same first- and second-order statistics as $W$, then the end-to-end MSE remains the same.
The corresponding signals in the quantized case, namely $V'$ and $\hat{U}'$, will also have the same second-order statistics as their Gaussian counterparts $V$ and $\hat{U}$.
Thus, by using Lemma~\ref{lem:excessrate} we obtain
\begin{align}
I(V_{k}';\hat{U}_{k}')
&\overset{ }{\leq} R^{\perp}(D) + D(\hat{U}_{k}'\Vert \hat{U}_{k})\label{eq:I_rate_loss}.
\end{align}
Finally, from~\cite[Theorem~1]{zamfed92}, we have that
$
H(\mathcal{Q}(V_{k}+\nu_{k})|\nu_{k} )= I(V_{k}';\hat{U}_{k}')
$.
Substitution of~\eqref{eq:I_rate_loss} into this last equation yields the result.
\end{proof}
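The numerical value $0.254$ is the familiar space-filling (divergence) loss of a scalar uniform cell, which coincides with the divergence between a uniform random variable and a Gaussian one of equal variance, $\frac{1}{2}\log_{2}(2\pi\expo{}/12)\approx 0.254$ bit. A two-line check, included purely for illustration:
\begin{verbatim}
import numpy as np
# space-filling loss: D(uniform || Gaussian of equal variance)
print(0.5*np.log2(2*np.pi*np.e/12), "bit/dimension")   # ~ 0.2546
\end{verbatim}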
\section{Conclusions}
We have proved the achievability of $R^{\perp}(D)$ by using lattice quantization with subtractive dither.
We have shown that $R^{\perp}(D)$ can be realized causally, and that the use of feedback allows one to achieve $R^{\perp}(D)$ by using memoryless entropy coding.
We also showed that the scalar entropy of the quantized output when using optimal finite-dimensional dithered lattice quantization exceeds $R^{\perp}(D)$ by at most $0.254$ bits/dimension.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Descriptions of nuclear matter and finite nuclei, which are ultimately
governed by the physics of low-energy quantum chromodynamics (QCD),
are efficiently formulated using low-energy degrees of
freedom---the hadrons.
In the absence of direct derivations from QCD, such effective descriptions
should be constrained by the underlying
symmetries of QCD, both broken and unbroken.
Nevertheless, the appropriate realization of these symmetries for
phenomenological models is not yet established.
In this paper, we explore some consequences of applying QCD symmetry
constraints to a relativistic model of finite nuclei that features a
light scalar meson.
At present,
the most developed framework for constraining hadronic physics by
QCD symmetries is
chiral perturbation theory (ChPT)\cite{WEINBERG68},
which provides a systematic expansion in
energy for low-energy scattering processes.
The degrees of freedom are the Goldstone bosons (pions, {\it etc}.)
and, when appropriate, nucleons.
This approach builds in constraints due to chiral symmetry without any
additional constraints on the dynamics or {\it ad hoc\/} model assumptions;
physics beyond chiral symmetry is incorporated through constants
in the low-energy lagrangian,
which are usually determined from experiment.
Because additional constants are needed at each stage in the energy
expansion,
ChPT is predictive only
at sufficiently low energies, where the number
of parameters introduced does not overwhelm the data to be described.
The prospects for extending ChPT in a useful way to calculations at
finite density are unclear at present.
On the other hand, the general framework of ChPT has validated the
principle of resonance dominance of low-energy QCD.
In particular, the $E^4$ coupling constants in ChPT in the meson sector
are well reproduced
from a meson resonance lagrangian applied at tree level,
with the vector mesons playing the leading role\cite{ECKER89}.
Meson dominance is also the key principle
underlying phenomenological models of nuclei with hadronic degrees of freedom,
which we consider here.
But while the correspondence
in the vector channels is relatively straightforward
because of well-defined resonances, the dynamics in the
scalar channel is more difficult to identify and to model.
Within meson-exchange phenomenology, the mid-range attraction between nucleons
is generally believed to be a dynamical
consequence of the strong interactions between two pions
exchanged with scalar, isoscalar quantum numbers \cite{DURSO}.
No nearby underlying resonance at the relevant mass ($\approx 500\,$MeV)
is evident or, in principle, needed.
(Note that in ChPT investigations, the scalar resonance is identified
with mesons around 1~GeV.)
Nevertheless, this physics is efficiently, conveniently, and adequately
represented at the one-meson exchange
level by the exchange of a light scalar degree of freedom\cite{MACHLEIDT}.
This light scalar is also an essential element of phenomenologically
successful mean-field models of nuclei\cite{WALECKA74,SW}.
These mean-field models are significantly constrained by the bulk properties
of finite nuclei\cite{REINHARD89,BODMER91,FS}.
The question then arises:
How should QCD symmetry constraints be manifested in these models?
There is a long history of
attempts to generalize the linear sigma model to build models with
chiral symmetry;
it is almost irresistible to identify the scalar meson mediating the
mid-range nucleon--nucleon (NN) attraction with the chiral partner of the pion.
More recently, interest in models realizing the broken scale invariance
of QCD has been revived.
Scale invariance is particularly compelling to consider because of its
connection to the scalar channel.
The breaking of scale invariance by the trace anomaly
implies relations involving
zero-momentum Green's functions
of the scalar trace of the energy-momentum tensor
[see Eqs.~(\ref{eq:leta})--(\ref{eq:letb})];
these are called low-energy theorems\cite{VAINSHTEIN82}.
If these relations are assumed to be
saturated by scalar particles at tree level,
significant constraints arise on
the associated scalar potentials (in the chiral limit).
We will exploit such constraints in this paper.
In Ref.\cite{FS}, a broad class of models that attempt to
unite successful mean-field phenomenology with chiral symmetry and
the broken scale invariance of QCD were studied.
Generalizations of the conventional linear sigma model that feature
a ``Mexican hat'' potential were found to fail generically,
even with modifications inspired by the realization of broken scale
invariance.
A significant improvement was found by the Minnesota group\cite{HEIDE94}
when the ``Mexican hat'' potential is abandoned, and a reasonable
description of the properties of closed-shell nuclei was obtained.
In this paper, we build a different effective model of nuclei
by implementing a {\em nonlinear\/} realization of chiral symmetry together
with the low-energy theorems of broken scale invariance.
The detailed construction
of the full model will be reported elsewhere\cite{FST}.
Our focus here is primarily on how vacuum dynamics might be treated in an
effective field theory of nuclei.
The role and manifestation of vacuum dynamics is
an important issue in any field-theoretic description of
nuclear matter and finite nuclei.
Valence nucleons in the Fermi sea interact with each other and also
with the QCD vacuum.
In turn, the vacuum is modified by interactions with valence nucleons.
In nonrelativistic models, such effects are never dealt with explicitly, but
are absorbed implicitly
into phenomenological effective interactions involving only valence nucleons.
As a result, the interactions may acquire
additional density dependence and nonlocalities.
In previous
relativistic models of nuclear matter involving a scalar field coupled
to the nucleon, vacuum modifications were incorporated in the
renormalized scalar effective potential \cite{WALECKA74,SW}.
This in turn affects the density dependence.
In principle, the one-baryon-loop
effective potential contains an infinite number
of undetermined coupling constants, which are the coefficients in
a polynomial of infinite order in the scalar field.
In conventional renormalizable models, the
nucleon vacuum one-loop correction is well defined\cite{CHIN77} and
determines these coefficients, except for the terms of degree four and less,
which are fixed by a renormalization prescription.
However,
renormalizable models with one-loop corrections do not achieve the
phenomenological success of models without vacuum terms for the bulk
properties of finite nuclei \cite{FOX,FPW}.
We interpret this failure as a phenomenological indication that the
vacuum is not treated adequately.
In previous studies involving nonrenormalizable models,
the effective potential is simply truncated, usually at degree four,
and mean-field theory is applied without considering
vacuum effects.
In this paper, we begin to address the problem of constructing
consistent calculations
in effective field theories
of nuclear matter and finite nuclei that
explicitly address the role of the vacuum dynamics.
In particular,
we show how vacuum loop contributions are absorbed in the
renormalization of coupling constants in the lagrangian
in a model constrained to satisfy the low-energy theorems of QCD.
In contrast to the situation in ChPT, we cannot
expand in powers of the energy,
since we are not limited to derivative couplings and light meson
masses.
We observe, however, that the meson fields develop nonzero expectation values
(mean fields) at finite density, and to begin, we assume that these mean
fields dominate the contributions to the energy.
Successful mean-field phenomenology shows that for densities not much higher
than nuclear matter equilibrium density, the corresponding mean fields (or
nucleon self-energies) are small compared to the free nucleon mass
(roughly $\textstyle{1 \over 4}$ to $\textstyle{1 \over 3}$ the size); these
ratios are therefore useful expansion parameters.
Moreover, since the derivatives of the mean fields are small for normal
nuclei, a truncation of the lagrangian at some low order of derivatives is
also appropriate.
(We verify this assertion explicitly later.)
The end result is an energy functional for nuclear matter and nuclei that
contains small powers of the mean fields and their derivatives; nuclear
phenomenology implies that these fields are an efficient way to incorporate
the density dependence of nuclear observables.
Our objective here is to see how the low-energy behavior of QCD constrains
the coefficients in this energy functional, particularly with regard to
contributions from the quantum vacuum.
The assumption of mean-field dominance also has phenomenological support
from Dirac--Brueckner--Hartree--Fock (DBHF)
calculations, which indicate that exchange terms and
short-range correlations do not significantly
change the size of the nucleon self-energies nor
introduce strong momentum dependence (at least for occupied states)
\cite{CJHBDS,TERHAAR,MACHLEIDT}.
Thus we have the favorable situation that the mean fields are large enough
(compared to nuclear energy scales)
to dominate the bulk dynamics
but small enough (compared to the nucleon mass) to provide useful
expansion parameters.
Going beyond one-loop order systematically is an essential issue, but we will
leave this as a topic for future study.
A nonlinear realization of chiral symmetry will be adopted, in which the
Goldstone bosons (pions) are derivatively coupled to the nucleons.
Historically,
a linear representation (as in the usual linear sigma model) has been favored
by model builders, in part because the sigma model is renormalizable.
In this work, we wish to introduce a light scalar degree of freedom, but
we do not want to make the restrictive dynamical assumption that this scalar is
the chiral partner of the pions in a linear representation.
By realizing chiral symmetry nonlinearly, we are not committed to
such assumptions about the scalar degree of freedom.
In addition, it will be easier to introduce vector mesons
in a chirally invariant way that manifests the
vector-meson dominance of Sakurai\cite{SAKURAI69}.
Finally, the nonlinear representation is more efficient for preserving
the consequences of chiral symmetry at finite density when making
approximations involving pions, because sensitive cancellations are not
needed\cite{SW}.%
\footnote{We are unaware, however, of any proof
of the independence of finite-density observables
with respect to nonlinear field transformations,
analogous to the theorem that applies to $S$-matrix elements \cite{SMATRIX}.}
As suggested above,
the nonderivative terms of the light scalar effective potential
can be constrained by the
low-energy theorems of QCD, so that vacuum effects are
``built in.''
This, together with the truncation of our expansion
in derivatives and powers of fields, leaves us with relatively few
parameters, which can be determined by fitting to the properties of
finite nuclei.
The paper is organized as follows:
In Section II, broken scale invariance is discussed and
the model is introduced and its renormalization is considered.
An approximation scheme for nuclear matter and finite nuclei
is proposed in Section III and the energy functional is derived.
Results are given in Section IV.
Section V contains some discussion of the results and Section VI is
a summary.
\section{The Model}
In meson-exchange phenomenology there is a light scalar degree of freedom
that simulates two-pion-exchange physics in the scalar channel
\cite{DURSO,LIN}.
Here we seek to describe this physics by introducing a
light scalar field $S(x)$.
We do not associate the scalar with a bound state or resonance,
so we allow $S(x)$ to have anomalous behavior under a scale transformation
in the effective theory.
In particular, when $x\rightarrow \lambda^{-1} x$,
$S(x)\rightarrow \lambda^d S(\lambda x)$, where $d$ can differ from unity
and is to be determined phenomenologically.
A QCD-inspired scenario that leads to such a scalar was proposed by
Miransky and collaborators\cite{MIRANSKY}.
They introduced a light scalar generated by
dynamical chiral symmetry breaking in QCD, which was consequently
associated with the
quark condensate $\langle\bar{q}q\rangle$ and referred
to as quarkonium.
We will take all other fields to have canonical scale dimension.
While massless QCD is scale invariant at the classical level, this symmetry
is broken at the quantum level.
This breaking is manifested in a nonzero trace of the energy-momentum
tensor of QCD, which is referred to as the trace anomaly.
The QCD trace anomaly in the chiral limit\cite{COLLINS77}
is given by
\begin{equation}
\theta_{\mu}^{\mu}(x)\equiv-H(x)=(\beta(g)/2g)
G_{\mu\nu}^aG^{a\mu\nu} \ , \qquad a=1, 2, \cdots, N_{\rm c}^2-1
\ ,
\end{equation}
where $G^{a}_{\mu\nu}$ is the gluon field tensor and
$\beta(g)=-(g^3/48\pi^2)(11N_{\rm c}-2N_{\rm f})$ is the one-loop beta
function with $N_{\rm c}$ colors and ${N_{\rm f}}$ flavors.
There are remnants of scale invariance, which imply low-energy theorems that
relate Green's functions involving the trace of the
energy-momentum tensor $H(x)$\cite{VAINSHTEIN82}:
\begin{eqnarray}
i\int{\rm d}^4x\, \langle 0|T[H(x)H(0)]|0\rangle & = &
4H_{\scriptscriptstyle 0} \ , \label{eq:leta} \\[4pt]
i^2\int{\rm d}^4x \, {\rm d}^4y \, \langle 0|T[H(x)H(y)H(0)]|0\rangle
& = & 4^2H_{\scriptscriptstyle 0} \ , \label{eq:letb} \\[4pt]
\vdots \qquad\qquad & & \nonumber
\end{eqnarray}
where $H_{\scriptscriptstyle 0}\equiv \langle 0|H|0\rangle$.
Effective lagrangians for pure-glue QCD (no quarks) featuring a scalar
glueball field $\chi(x)$
(``gluonium'') that saturates these low-energy theorems at tree level
have been considered many times \cite{SCHMIGELL}.
Lattice QCD calculations indicate that the scalar glueball is quite
heavy on hadronic mass scales, with a mass of roughly
1.6--1.8\,GeV \cite{LATTICE}.
Its fate in the real world with light quarks is not entirely clear.
Here we will generalize the effective gluonium
model to include the light scalar discussed earlier;
this extension was proposed in a different context in Ref.\cite{MIRANSKY}.
We take the trace anomaly to consist of two contributions,
corresponding to a vacuum expectation value
$H_{\scriptscriptstyle 0}=H_{\rm g}+H_{\rm q}$.
Here $H_{\rm g}$ is identified with the
heavy glueball contribution, while $H_{\rm q}$ is nonzero only
when chiral symmetry is dynamically broken in the presence of light quarks.
One can argue that $H_{\rm g}$
dominates $H_{\scriptscriptstyle 0}$ (which is equal to the gluon condensate
up to a factor) so that $H_{\rm q} \ll H_{\rm g}$ \cite{MIRANSKY}.
How the QCD trace anomaly actually separates into the two parts is not
explored here, since we will determine $H_{\rm q}$ by fitting to the
properties of finite nuclei.
Nevertheless, we find that the value of $H_{\rm q}$ determined
in our fits satisfies $H_{\rm q} \ll H_0$ (see Table~\ref{tab:one}).
The low-energy theorems involving the trace
$\theta_{\mu}^{\mu}(x)$ of the energy-momentum tensor are
assumed to be saturated by the scalar gluonium $\chi(x)$
\cite{SCHMIGELL} {\it and\/} the light scalar $S(x)$.
For simplicity we adopt a model with no mixing between the scalars.
A candidate effective lagrangian
of the scalars that satisfies the low-energy theorems at
tree level in the chiral limit
is\cite{SCHMIGELL,MIRANSKY}
\begin{equation}
{\cal L}_{\rm s}(x)={1\over 2}\partial _{\mu}\chi\partial
^{\mu}\chi
+ {1\over 2}\bigg [\alpha_1
\bigg ({\chi^2 \over \chi_{ 0}^2}\bigg ) ^{1-d}
+(1-\alpha_1) \bigg (
{S^2 \over S_{ 0}^2}\bigg )^{(1-d)/d}\
\bigg ]
\partial _{\mu}S\partial ^{\mu}S- V(\chi, S) \ , \label{eq:Lscalar}
\end{equation}
where $\alpha_1$ is a real constant, $d$ is the scale dimension of
the $S(x)$ field, and the scale-breaking potential $V$ is
\begin{equation}
V(\chi, S) = H_{\rm g}
{\chi^4 \over \chi_{ 0}^4}
\bigg ( \ln {\chi \over \chi_0} - {1\over 4}\bigg)
+H_{\rm q}\bigg ({S^2 \over S_{ 0}^2}
\bigg)^{2 /d} \bigg ( {1 \over 2d}
\ln {S^2 \over S_{ 0}^2}
- {1 \over 4} \bigg ) \ . \label{eq:potl}
\end{equation}
Here $ \chi_{ 0}$ and $S_{ 0}$
are the vacuum expectation values of $ \chi$ and $S$ respectively.
Notice that $\alpha_1$ has been introduced so that after expanding the terms
in square brackets in Eq.~(\ref{eq:Lscalar}), the
kinetic term for $S$ is canonical.
The mass of the light scalar $S$ is given by
$m_{\rm s}^2 = 4H_{\rm q}/(d^2S_{ 0}^2)$.
The scale dimension of the $\chi$ field is assumed to be unity.
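As a consistency check (not part of the derivation), one can verify symbolically that $S_{0}$ is a stationary point of the $S$-dependent part of the potential in Eq.~(\ref{eq:potl}) and that its curvature there reproduces $m_{\rm s}^2 = 4H_{\rm q}/(d^2S_{0}^2)$; the short sketch below does this with a computer algebra system.
\begin{verbatim}
# Symbolic check of the light-scalar sector of the scale-breaking potential.
import sympy as sp

S, S0, Hq, d = sp.symbols('S S_0 H_q d', positive=True)
V_q = Hq*(S**2/S0**2)**(2/d)*(sp.log(S**2/S0**2)/(2*d) - sp.Rational(1, 4))

print(sp.simplify(sp.diff(V_q, S).subs(S, S0)))                          # 0: stationary at S_0
print(sp.simplify(sp.diff(V_q, S, 2).subs(S, S0) - 4*Hq/(d**2*S0**2)))   # 0: curvature = m_s^2
\end{verbatim}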
One can define the energy-momentum tensor so that the Noether current
for scale transformations is $x_\nu \theta^{\mu\nu}$.
The trace of this ``improved'' energy-momentum tensor\cite{CALLAN70}
corresponding to the lagrangian in Eqs.~(\ref{eq:Lscalar})
and (\ref{eq:potl}) is
\begin{eqnarray}
\theta_{\mu}^{\mu}(x)& =& Sd{\partial V \over \partial S}
+\chi{\partial V \over \partial \chi}
-4 V(\chi, S) \nonumber \\
& = &- H_{\rm g}
{\chi ^4 \over \chi_{ 0}^4}
-H_{\rm q}\biggl({S^2 \over S_{ 0}^2}
\biggr)^{{2/ d}} \ . \label{eq:trace}
\end{eqnarray}
With the
dynamics of the scalar field fluctuations governed by the lagrangian
in Eqs.~(\ref{eq:Lscalar}) and (\ref{eq:potl}), the preceding trace
satisfies the low-energy theorems at the tree level.
The usual direct demonstration\cite{SCHMIGELL}, in which the gluonium
alone is assumed to saturate the Green's functions,
involves parametrizing
the fluctuations $\widetilde\chi(x)$ in the exponential form
$\chi=\chi_{ 0}\exp [\widetilde\chi(x)/
\chi_{ 0}]$ and substituting this into
the low-energy theorems and into the potential,
$V(\chi,0)$ of Eq.~(\ref{eq:potl}), to determine vertices.
Keeping only tree level diagrams (no loops), the theorems then follow.
To extend the demonstration to the present case,
one first notes that the low-energy theorems should not depend on
how the gluonium field fluctuation $\widetilde\chi$ is parametrized.
One then observes that if
$\widetilde {\chi}$ is defined through $\chi \equiv \chi_{ 0}
(1 - \widetilde {\chi}/\chi_{ 0})^d$, the
resulting form for the gluonium parts of the trace
and the potential in Eqs.~(\ref{eq:potl}) and (\ref{eq:trace})
become the same as for the light scalar,
when the fluctuation of the latter
is parametrized simply as $S = S_0 - \phi$.
Since there are no couplings between the $\chi$ and the $S$
fields, the low-energy theorems follow directly.
We can now add to ${\cal L}_s$
a {\it scale-invariant\/} lagrangian with these scalars coupled
to pion, nucleon, and
vector degrees of freedom.
(We neglect pion mass terms at this point.)
The resulting model would be a candidate model for nuclei that satisfies
the low-energy theorems.
On the other hand,
there are many other possible terms allowed,
and even Eq.~(\ref{eq:Lscalar}) is not
the most general form involving two scalars.
We choose to take advantage of the heaviness of the gluonium and
the usefulness of an expansion in powers and derivatives of the other fields
to both simplify and generalize the effective lagrangian.
We expect the expansion to be valid and
useful when applied near normal nuclear matter densities.
Since the mass scale of the heavy gluonium field (roughly 1.6--1.8\,GeV)
is significantly higher
than the scales involved in the nuclear matter problem,
the heavy gluonium field fluctuations $\widetilde\chi$
can be integrated out as in Ref.~\cite{ECKER89}.
In particular, we can eliminate $\widetilde\chi$ by iteratively solving
its equation of motion, exploiting the dominance of the mass term over
powers and derivatives of $\widetilde\chi$.
This results in complicated terms involving powers and derivatives
of the other fields, but we can expand these terms.
For example, the second term in Eq.~(\ref{eq:Lscalar}) would become
\begin{equation}
{1\over 2}\biggl[1 + \beta_1 {\phi\over S_0} +
\beta_2 {\phi^2\over S_0^2}
+ \cdots \biggr]
\partial _{\mu}\phi\partial ^{\mu}\phi
+ \beta_3(\partial _{\mu}\phi\partial ^{\mu}\phi)^2 + \cdots ,
\end{equation}
where the $\beta_i$ are functions of the constants
$\chi_0$, $\alpha_1$, and $d$.
The lagrangian will be much simpler if we can
truncate this expansion at leading order in derivatives and
neglect high powers of the meson fields.
We can follow this prescription to write a general chirally invariant
effective lagrangian for nuclear matter and nuclei.
The ground states of even-even nuclei and nuclear matter will be
assumed to have good parity, so there is no pion mean field.
Thus the pion will not play an explicit role in the present
discussion of uniform nuclear matter and closed-shell nuclei
in the Hartree approximation.
Nevertheless, we wish to stress the connection to pion physics and
the underlying constraints of chiral symmetry.
Thus,
we give an overview of the full model in order to motivate the form of
the lagrangian and to set the stage for future work.
We restrict consideration to a low-energy representation of
massless, two-flavor QCD.
The Goldstone pion fields are represented by a chiral phase
angle that corresponds to a pure chiral rotation of the identity matrix:
\begin{equation}
\xi {\bf 1} \xi \equiv U(x) =
\exp (i \bbox{\pi}(x) \bbox{\cdot \tau}/ f_{\pi})
\ , \label{eq:Udef}
\end{equation}
where
$\xi (x) = \exp (i \bbox{\pi}(x)
\bbox{\cdot \tau}/ 2f_{\pi})$,
$\tau^a$ ($a=1,2,3$) are the Pauli matrices,
$\pi^a(x)$ are the Goldstone pion fields, and $f_{\pi}=93$ MeV
is the pion-decay constant.
This parametrization and the nucleon representation that follows is
conventional;
see, for example, Ref.~\cite{DONOGHUE92}.
The nucleon field is written as
\begin{equation}
N(x)=\left(\begin{array}{c} p(x) \\ n(x) \end{array} \right)\ ,
\end{equation}
with $p(x)$ and $n(x)$ being the proton and neutron fields.
Under chiral transformations of
$SU(2)_{\rm L} \otimes SU(2)_{\rm R}$,
$U(x)$ transforms globally, $U(x)\rightarrow L U(x) R^{\dagger}$,
where $L = \exp(i\bbox{\theta_L\cdot\tau})$ and
$R = \exp(i\bbox{\theta_R\cdot\tau})$
are $x$-independent elements of $SU(2)_{\rm L}$ and $SU(2)_{\rm R}$,
respectively.
In general, the transformation of $\xi$ is local since
it depends on the pion field \cite{GEORGI,DONOGHUE92,CALLAN69}:
\begin{equation}
\xi(x) \rightarrow \xi'(x) = L \xi(x) h^{\dagger}(x) = h(x) \xi(x) R^{\dagger}
\ , \label{eq:Xitrans}
\end{equation}
where the second equation defines the $SU(2)$-valued function
$h(x)$ as a {\it nonlinear\/} function of $L$, $R$, and $U(x)$.
Note that $h=L$ when $L=R$, {\it i.e.,} in the case of a pure isospin
rotation.
The nucleon field also transforms locally:
$N(x)\rightarrow h(x) N(x)$,
which implies that nucleons mix with pions under chiral transformations.
(See Ref.~\cite{GEORGI} for alternative representations of the nucleon field.)
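Since $h(x)$ is defined only implicitly by Eq.~(\ref{eq:Xitrans}), a short numerical experiment may help make the construction concrete: choose pion fields and group elements at random, form $U'=LUR^{\dagger}$, take $\xi'$ as the principal matrix square root of $U'$, and read off $h$. The specific parameter values and random draws below are arbitrary and purely illustrative.
\begin{verbatim}
# Numerical illustration of the nonlinear realization: extracting h from L, R, U.
import numpy as np
from scipy.linalg import expm, sqrtm

rng = np.random.default_rng(1)
tau = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]        # Pauli matrices

def su2(theta):                                     # exp(i theta . tau)
    return expm(1j*sum(t*m for t, m in zip(theta, tau)))

f_pi = 93.0
xi = su2(rng.normal(0.0, 30.0, 3)/(2*f_pi))         # xi = exp(i pi.tau/(2 f_pi)), pi in MeV
U = xi @ xi
L, R = su2(rng.normal(0, 0.3, 3)), su2(rng.normal(0, 0.3, 3))

def h_of(L, R):
    xi_p = sqrtm(L @ U @ R.conj().T)                # xi' from U' = L U R^dagger
    return xi_p.conj().T @ L @ xi                   # h defined via xi' = L xi h^dagger

h = h_of(L, R)
print("h unitary, det 1:", np.allclose(h @ h.conj().T, np.eye(2)),
      np.isclose(np.linalg.det(h), 1.0))
print("xi' = h xi R^dag:", np.allclose(sqrtm(L @ U @ R.conj().T), h @ xi @ R.conj().T))
print("h = L when L = R:", np.allclose(h_of(L, L), L))
\end{verbatim}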
We will incorporate the physics of vector dominance in our lagrangian
by introducing vector mesons as gauge bosons \cite{SMATRIX}.
For simplicity, since
we concentrate on the properties of nearly symmetric
($N \approx Z$) nuclear matter in this
paper, we will not explicitly write down the rho and the
electromagnetic fields.
Thus only the $\omega$ meson field $V^\mu$ appears explicitly here.
We will present a full discussion of the lagrangian elsewhere
\cite{FST}, including
how the vector mesons are gauged and how vector dominance results.
To build chirally invariant terms,
it is useful to define the vector and axial vector fields
\begin{eqnarray}
v_{\mu}(x) & = & -{i \over 2}(\xi^{\dagger} \partial_{\mu} \xi +
\xi \partial_{\mu}\xi^{\dagger} )
\ , \label{eq:vdef} \\[4pt]
a_{\mu}(x) & = & -{i \over 2}(\xi^{\dagger} \partial_{\mu} \xi -
\xi \partial_{\mu}\xi^{\dagger} ) \ , \label{eq:adef}
\end{eqnarray}
which transform as
$v_{\mu} \rightarrow h v_{\mu} h^{\dagger}
-ih\partial_{\mu}h^{\dagger}$ and
$a_{\mu} \rightarrow h a_{\mu} h^{\dagger}$.
The coupling of the pion to the nucleon is realized through $a_\mu$
and the
covariant derivative
\begin{equation}
{\cal D}_{\mu} = \partial _{\mu}
+ i v_{\mu} +i g_{\rm v}V_{\mu}
\ . \label{eq:nulcov}
\end{equation}
Now we can write the complete chirally invariant lagrangian;
all terms not contained in ${\cal L}_s$ are scale invariant.
After integrating out $\widetilde\chi$ and expanding about $S_0$,
the lagrangian takes the form
\begin{eqnarray}
{\cal L}(x) &=&
\overline N \Bigl(i\gamma^{\mu} {\cal D}_{\mu}
-ig_{\rm \scriptscriptstyle A}\gamma^{\mu}\gamma_5
a_{\mu} - M + g_{\rm s}\phi + \cdots \Bigr)N
-{1\over 4}F_{\mu\nu}F^{\mu\nu}
\nonumber \\
& & \null + {1\over 2} \bigg [
1+ \eta {\phi\over S_0} + \cdots \bigg ]
\Big [ {1\over 2}f_{\pi}^2\, {\rm tr}\,
(\partial _{\mu}U\partial ^{\mu}U^{\dagger})
+ m_{\rm v}^2 V_{\mu}V^{\mu} \Big ]
\nonumber \\
& & \null +{1\over 4!}\zeta
(g_{\rm v}^2 V_{\mu}V^{\mu})^2
+ {1\over 2}\partial_\mu \phi \partial^\mu \phi
- H_{\rm q}\bigg ({S^2 \over S_{ 0}^2}
\bigg)^{2 / d} \bigg ( {1 \over 2d}
\ln {S^2 \over S_{ 0}^2}
-{1 \over 4} \bigg ) + \cdots
\ ,\label{eq:NLag}
\end{eqnarray}
where $g_{\rm \scriptscriptstyle A}=1.23$ is the
axial coupling
constant, $g_{\rm s}$ ($g_{\rm v}$) is the light scalar
(vector $\omega$) coupling
to the nucleon, the $\omega$ field strength tensor
is $F_{\mu\nu}= \partial_{\mu} V_{\nu}
-\partial_{\nu} V_{\mu}$,
and $\eta$ and $\zeta$ are real constants.
Several features of this lagrangian are of interest:
\begin{itemize}
\item
We have combined terms after expanding and have rewritten the
coefficients,
where appropriate, in terms of physical masses.
Note that the nucleon mass $M$ has contributions from the vacuum
expectation values of both scalars; we do not assume that it comes
entirely from the light scalar (although this possibility is not excluded).
\item
The combination of the $\omega$ mass term and the pion kinetic
term in Eq.~(\ref{eq:NLag}) appears naturally, if we assume the
vector mesons to be gauge bosons\cite{SMATRIX,FST}.
\item
The original separation of the lagrangian into a scale-invariant
piece and a scale-breaking piece, in which the latter
involved only the scalar
fields, is now largely hidden
because the $\chi$ dependence is not explicit and we have expanded
about $S_0$.
Nevertheless, there is a remnant for our purposes here: the
scale-breaking potential of the light scalar
[the last term in Eq.~(\ref{eq:NLag})], which is not changed by
the elimination of $\widetilde\chi$. (Recall that
the $\chi$ and $S$ do not mix.)
Thus the low-energy theorems still protect the form
of this potential, which places constraints on vacuum
loop renormalizations, as discussed below.
\item
We have omitted many higher-order terms, as indicated
by the ellipses, which represent higher powers of fields and their
derivatives.
Only Yukawa couplings to the nucleon fields are kept, based on the
phenomenological dominance of one-meson exchange and the implicit
elimination of heavier fields.
(So $\overline NN\phi^2$ terms, {\it etc.}, are omitted.)
Higher-order terms with meson fields
should give numerically small contributions (in nuclei)
or can be absorbed into slight adjustments of the other parameters.
Some explicit justification for these claims is given in the results below.
\end{itemize}
The lagrangian in Eq.~(\ref{eq:NLag})
is written with renormalized coefficients.
Counterterms are not written explicitly, but are implied.
In particular,
these counterterms include {\it all\/} powers of the scalar field,
not just terms up to $O(\phi^4)$, as in a renormalizable model.
To understand how these counterterms are fixed, we start by
integrating out the baryon fields at zero density and temperature.
The result is a fermion determinant that contributes to the
meson action as an additive term given by
\begin{equation}
S_{\text{fd}}[\phi,V_\mu] \equiv \int {\rm d}^4 x\, {\cal L}_{\text
{fd}}
= -i\, {\rm Tr}\ln K(0)
\ , \label{eq:Seff}
\end{equation}
where
``Tr'' indicates a trace over spacetime, spin, and isospin, and
the kernel $K(\mu)$ is defined in coordinate space by
\begin{equation}
\langle x | K(\mu) | y \rangle =
[i\gamma^\mu \partial_\mu - g_{\rm
v}\gamma^{\mu}V_{\mu}(x)
+\mu \gamma_0 -M+g_{\rm s}\phi(x)] \delta^4(x-y) \ .
\end{equation}
The introduction of the chemical potential $\mu$ is for later
convenience, and baryon counterterms, which are needed beyond one-loop,
are suppressed.
Note that no approximation has been made at this point;
$S_{\text{fd}}$ is a functional of the dynamical fields $\phi$
and $V^\mu$ that still must be integrated over in a path integral,
for example.
The techniques for expanding a determinant in powers of
derivatives can be found in Ref.\cite{AITCHISON}; see also
the heat-kernel method in Ref.\cite{DONOGHUE92}.
The expansion of Eq.~(\ref{eq:Seff}) in a {\it renormalizable\/} model has
been discussed in Ref.\cite{PERRY86}.
We first focus on the nonderivative terms, which can be obtained
from Eq.~(\ref{eq:Seff}) by treating the fields as constants and
by expanding the logarithm in a power series in the fields.
Baryon number conservation implies that for the vector field,
only its derivatives can appear in the expansion.
Thus the nonderivative part of ${\cal L}_{\text{fd}}$
is an infinite polynomial in $\phi$;
for example, at the one-loop level,
\begin{eqnarray}
{\cal L}_{\text {fd}}[\phi]&=&i\int\!\! {{\rm d}^\tau k \over
(2\pi)^4} \,{\rm tr}\, \ln
G^0(k)
+ i \sum_{n=1}^{\infty} {(-1)^{n}\over n} [g_{\rm s}\phi(x)]^n
\int\!\! {{\rm d}^\tau k \over (2\pi)^4}\,
{\rm tr}\,
[G^0(k)]^n
\ . \label{eq:cts}
\end{eqnarray}
Here we have regularized dimensionally to maintain Lorentz covariance and
baryon number conservation,
``${\rm tr}$'' denotes a trace over spin and isospin only, and
\begin{equation}
G^0(k)
={1 \over \rlap/{\mkern-1mu k}-M+i\epsilon}
\end{equation}
is the free baryon propagator.
Beyond one loop there are additional terms in the coefficients,
including baryon counterterm contributions.
The polynomial in $\phi$ of Eq.~(\ref{eq:cts})
must be combined with the corresponding counterterms; in this way
the vacuum contributions are absorbed into the
renormalization of the scalar polynomial.
If one insists that the low-energy theorems be satisfied at tree
level in the meson fields,
the end result for the scalar potential should be of the form in
Eq.~(\ref{eq:NLag}), where the couplings are renormalized.
(Note that this potential can be expanded as a polynomial in $\phi$,
with {\it all\/} coefficients determined by $H_q$, $S_0$, and $d$.)
One never has to explicitly calculate any counterterms
or evaluate Eq.~(\ref{eq:cts}); when we write down the scalar potential,
the nucleon-loop effects have already been taken into
account.
Furthermore, although we have illustrated the renormalization by evaluating
nucleon loops only, any additional baryonic
degrees of freedom in the lagrangian would be treated analogously and the
final result would be the same.
Thus the phenomenological fitting of parameters accommodates a
general characterization of the vacuum response.
The renormalization of the derivative terms is analogous except
that we
do not have low-energy theorems to reduce the number of renormalized
coupling constants.
We note, however, that each additional derivative is accompanied by an
inverse power of a
typical scale in the problem, which is the nucleon mass here.
Experience with mean-field models of nuclei also suggests that the
derivatives
of the mean fields are small
(for example, $|\nabla\phi / \phi |
\lower0.6ex\vbox{\hbox{$\ \buildrel{\textstyle <}
\over{\sim}\ $}} 100\,$MeV).
Thus if we assume
mean-field dominance, such that fluctuations around
the mean fields are small, and the naturalness of the
coefficients in the derivative expansion (see the discussion in Section~V),
we can truncate the derivative terms at some tractable order.
In this work, we will stop at the lowest order for the derivatives.
Thus we have only a few unknown renormalized constants (parameters),
which are determined by fitting to experiment;
in our case, we will use finite-density observables.
At finite density, we work in the grand canonical ensemble through the
introduction of a chemical potential $\mu$\cite{TANGTHS}.
We consider only zero temperature in this work, which allows a
simplified discussion.
The relevant lagrangian density is now
\begin{equation}
{ \cal{L}}'(x,\mu)= {\cal{L}}(x) +\mu \overline N\gamma_0N
\ .
\end{equation}
Here the effective action of ${ \cal{L}}'$ is associated with
the thermodynamic potential $\Omega$ of the system, instead of the
energy. The energy follows from
\begin{equation}
E = \Omega + \mu B \label{eq:En} \ ,
\end{equation}
where
\begin{equation}
B=-{\partial \Omega \over \partial \mu} \label{eq:Bn}
\end{equation}
is the baryon number of the system.
Now we integrate out the baryon field as at zero density. The result
is the fermion determinant at finite density (or chemical potential),
$-i\, {\rm Tr}\ln K(\mu)$, to which
we can add and subtract the fermion determinant at $\mu=0$,
$-i\, {\rm Tr}\ln K(0)$.%
\footnote{We assume that $\mu=0$ still separates the positive-energy
levels from the Dirac sea. This will be the case if the density
is not too high.}
The added term $-i\, {\rm Tr}\ln K(0)$ combines with the
counterterms exactly as described above
so that the renormalization goes through as before.
Note that it
contains the same dynamical scalar and vector fields as the fermion
determinant at $\mu$.
The remaining combination
\begin{equation}
-i\, {\rm Tr}\ln K(\mu) + i\, {\rm Tr}\ln K(0)
\end{equation}
is an explicitly density-dependent piece (it vanishes for $\mu=0$),
which is finite if baryon counterterms are included in $K(\mu)$.
(This combination is evaluated in the Hartree approximation in the next
section, for which the baryon counterterms are not needed.)
Once again the scalar potential in the form shown in
Eq.~(\ref{eq:NLag}) is left intact;
the only difference is that the scalar field now acquires a different
expectation value due to the presence of valence nucleons at finite density.
\section{Finite Nuclei and Nuclear Matter}
To perform a realistic calculation, we need a good starting approximation.
Since our focus here is on bulk nuclear properties and on single-particle
spectra, we assume that the mean meson fields dominate the dynamics, and
we expand the finite-density thermodynamic potential around the mean fields.
The lowest-order result (Hartree approximation)
is obtained by replacing all the meson fields
by their mean values, and this will be the starting point of any systematic
approximation for treating the fluctuations.
The thermodynamic potential for nuclei in the Hartree approximation
is given by
\begin{equation}
\int\!{\rm d}x_0\,
\Omega =i\, {\rm Tr}\ln \overline{K}(\mu) - i\, {\rm Tr}\ln
\overline{K}(0)-\int {\rm d}^4x\, U_{\rm m}({\bf x})\ ,
\label{eq:Omega}
\end{equation}
where the baryon kernel in coordinate space is now
\begin{equation}
\langle x | \overline{K}(\mu) | y \rangle =\gamma_0
[i\partial_0+\mu-h({\bf x})]\delta^{(4)}(x-y)
\ .
\end{equation}
The single particle hamiltonian $h$ is
\begin{equation}
h({\bf x}) = -i\bbox{\alpha\cdot\nabla}
+g_{\rm v}V_0({\bf x})+
\beta(M-g_{\rm s}\phi_0({\bf x}))\ ,
\label{eq:sph}
\end{equation}
with $\beta = \gamma_0$ and $\bbox{\alpha}=\gamma_0\bbox{\gamma}$,
and the static scalar and vector
mean fields are denoted by $\phi_0({\bf x})$ and $V_0({\bf x})$.
The contribution from the meson fields is
\begin{eqnarray}
U_{\rm m}({\bf x}) &=&
-{1\over 2}(\bbox{\nabla}\phi_0)^2
-{1\over 4} m_{\rm s}^2 S_0^2 d^2
\Big \{ \biggl(1-{\phi_0\over S_0}\biggr)^{4 / d}
\Big[{1 \over d}\ln \biggl(1-{\phi_0\over S_0}\biggr)
-{1\over 4}\Big ]+{1 \over 4}\Big \}
\nonumber \\[6pt]
& & \null +{1\over 2}(\bbox{\nabla} V_0)^2+{1\over 2}
\left(1+\eta {\phi_0\over S_0}\right)m_{\rm v}^2 V_0^2
+{1\over 4!}\zeta (g_{\rm v} V_0)^4
\ .
\end{eqnarray}
Note that $\overline{K}(\mu)$ is diagonal in the single-particle basis
$\psi_{\alpha}({\bf x}) e^{i\omega x_0} $, where
$\psi_{\alpha}({\bf x})$ are the normalized
eigenfunctions of the Dirac equation with eigenvalues
$E_\alpha$ \cite{HOROWITZ,SW}:
\begin{equation}
h \psi_{\alpha}({\bf x}) = E_{\alpha} \psi_{\alpha}({\bf x})\ , \
\ \ \ \
\int {\rm d}^3x\,
\psi^{\dagger}_{\alpha}({\bf x})\psi_{\alpha}({\bf x})
= 1 \ . \label{eq:norm}
\end{equation}
{}From a path integral formulation, one can see that
the appropriate boundary condition
or $i\epsilon$ prescription for evaluating the baryon kernel
is $\omega\rightarrow (1+i\epsilon)\omega$.
{}From Eq.~(\ref{eq:Omega}) one can now obtain,
after a Wick rotation,
\begin{eqnarray}
\Omega &=&
-\sum_{\alpha}\int {{\rm d}\omega \over 2\pi}\,
[\ln (-i\omega +\mu-E_{\alpha}) - \ln (-i\omega
-E_{\alpha})]
-\int {\rm d}^3 x \, U_{\rm m}
\nonumber \\
&=& -\sum_{\alpha} (\mu -E_{\alpha})
[\theta(\mu -E_{\alpha})-\theta(-E_{\alpha})]
-\int {\rm d}^3 x \, U_{\rm m}
\nonumber \\
&\equiv& -\sum_{\alpha}^{\text{occ}}\, (\mu - E_{\alpha})
-\int {\rm d}^3 x \, U_{\rm m}
\ . \label{eq:Om}
\end{eqnarray}
Here we have used
\begin{equation}
\sum_{\alpha}\theta(-E_{\alpha})=\sum_{\alpha}\theta(E_{\alpha})
=\sum_{\alpha}{1\over 2} \ ,
\end{equation}
which is valid when $\mu=0$ separates the nucleon levels
from the antinucleon levels.
The summation superscript
``occ'' means that the sum runs only over occupied states in the Fermi sea.
Moreover, using Eqs.~(\ref{eq:En}) and (\ref{eq:Bn}), we find
\begin{eqnarray}
B &=& \sum_{\alpha}\, [\theta(\mu -E_{\alpha})-\theta(-E_{\alpha})]
= \sum_{\alpha}^{\text{occ}} \, 1 \ ,\\
E &=& \sum_{\alpha}^{\text{occ}} E_{\alpha}
-\int {\rm d}^3 x \, U_{\rm m} \ . \label{eq:Eeqn}
\end{eqnarray}
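As a cross-check of this step (standard, but spelled out here for convenience), one can
differentiate the first line of Eq.~(\ref{eq:Om}) with respect to $\mu$ under the
frequency integral and use the principal-value result
\[
\int_{-\infty}^{\infty} {{\rm d}\omega \over 2\pi}\,
{1 \over -i\omega + a} = \theta(a) - {1\over 2} \ ,
\]
which gives $B=-\partial\Omega/\partial\mu
=\sum_{\alpha}\,[\theta(\mu-E_{\alpha})-{1\over 2}]$ directly; this coincides with the
expression for $B$ above once
$\sum_{\alpha}\theta(-E_{\alpha})=\sum_{\alpha}{1\over 2}$ is used.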
We emphasize that the final sum over only occupied (valence) states
is not the result of a {\it vacuum\/} subtraction,
as the term with $\mu=0$ still contains the background fields,
which must be determined self-consistently.
The true vacuum subtraction was performed earlier when we derived the
renormalized $U_{\rm m}$.
The equations for the mean fields are obtained from
extremizing the energy functional
with respect to $\phi_0({\bf x})$ and $V_0({\bf x})$.
{}From Eqs.~(\ref{eq:sph}) and (\ref{eq:norm}) one finds
\begin{eqnarray}
{\delta E_\alpha\over\delta\phi_0({\bf x})}
& = & {\delta\over\delta\phi_0({\bf x})}
\int\!{\rm d}^3y\, \psi^\dagger_\alpha({{\bf y}}) h({{\bf y}})
\psi_\alpha({{\bf y}})
\nonumber \\
& = & \psi^\dagger_\alpha({{\bf x}})
{\partial h\over \partial\phi_0} \psi_\alpha({{\bf x}}) +
E_\alpha {\delta\over\delta\phi_0({\bf x})}
\int\!{\rm d}^3y\, \psi^\dagger_\alpha({{\bf y}})
\psi_\alpha({{\bf y}})
\nonumber \\
& = & \psi^\dagger_\alpha({{\bf x}}) {\partial h\over
\partial\phi_0}
\psi_\alpha({{\bf x}})
\ ,
\label{eq:inter}
\end{eqnarray}
and a similar expression for the variation with respect to $V_0$;
evaluating the derivatives yields
\begin{eqnarray}
{\delta \over \delta \phi_0 ({\bf x})} \sum_{\alpha}^{\text{occ}}
E_{\alpha}
&=& -g_{\rm s} \sum_{\alpha}^{\text{occ}}
\overline \psi_{\alpha}({\bf x})\psi_{\alpha}({\bf x})
\ , \\
{\delta \over \delta V_0 ({\bf x})} \sum_{\alpha}^{\text{occ}}
E_{\alpha}
&=& g_{\rm v} \sum_{\alpha}^{\text{occ}}
\psi^\dagger_{\alpha}({\bf x})\psi_{\alpha}({\bf x})
\ .
\end{eqnarray}
Upon applying these results to Eq.~(\ref{eq:Eeqn}),
one obtains the mean-field equations:
\begin{eqnarray}
-{\bbox{\nabla}}^2 \phi_0 + m_{\rm s}^2 \phi_0 & = &
g_{\rm s} \sum_{\alpha}^{\text{occ}}
\overline \psi_{\alpha}({\bf x})\psi_{\alpha}({\bf x})
\nonumber
\\ & & \null + m_{\rm s}^2 \phi_0 +
m_{\rm s}^2 S_0 \biggl(1-{\phi_0\over S_0}\biggr)^{(4/ d)
-1}
\ln \biggl(1-{\phi_0\over S_0}\biggr) +
{\eta \over 2 S_0} m_{\rm v}^2 V_0^2 \ , \\
-{\bbox{\nabla}}^2 V_0 + m_{\rm v}^2 V_0 & = &
g_{\rm v} \sum_{\alpha}^{\text{occ}}
\psi^\dagger_{\alpha}({\bf x})\psi_{\alpha}({\bf x})
\nonumber
\\ & & \null - \eta {\phi_0\over S_0} m_{\rm v}^2 V_0
- {1\over 6} \zeta g_{\rm v}^4 V_0^3 \ . \label{eq:Vfeqn}
\end{eqnarray}
Note that we have added an explicit mass term to each side of the
scalar field equation to put it in a form that can be solved
with conventional numerical techniques \cite{HOROWITZ}.
The vector mean-field equation is actually a constraint since the
time component of the vector field is not a dynamical degree of freedom.
(See below for further comments in the case of nuclear matter.)
This lowest-order result (Hartree approximation)
is similar to that obtained from conventional derivations
of relativistic mean-field models in which one-loop vacuum corrections
are simply neglected. We emphasize, however, that we are not merely
presenting another mean-field model; the vacuum effects {\it are\/}
incorporated and systematic improvement is possible (in principle).
Rather, we consider our procedure a {\it justification\/}
for the phenomenologically successful mean-field approach.
The energy density for uniform nuclear matter in the Hartree
approximation can be obtained from the preceding results
by observing that the single-particle energy eigenvalue becomes
\begin{equation}
E({\bf k})=g_{\rm v} V_0 +\sqrt{{\bf k}^2+{M^*}^2} \ ,
\end{equation}
where $M^*=M-g_{\rm s}\phi_0$, and $\phi_0$ and $V_0$ are now constant
mean fields. The energy density ${\cal E}$ becomes
\begin{eqnarray}
{\cal E}[M^\ast,\rho_{{\scriptscriptstyle\rm B}}]
&=& {1\over 4} m_{\rm s}^2 S_0^2 d^2
\Bigl\{ \Bigl(1-{\phi_0\over S_0}\Bigr)^{4/d}
\Bigl[{1 \over d}\ln \Bigl(1-{\phi_0\over S_0}\Bigr)
-{1\over 4}\Bigr]
+ {1\over 4} \Bigr\}
+g_{\rm v}\rho_{\scriptscriptstyle\rm B}
V_0 - {1\over 4!}\zeta
(g_{\rm v} V_0)^4
\nonumber \\[4pt]
& & - {1 \over 2}\Bigl(1+\eta {\phi_0\over S_0}\Bigr)
\, m_{\rm v}^2
V_0^2
+ {\gamma\over (2\pi)^3}\! \int^{k_{\scriptscriptstyle\rm F}}
{\kern-.1em}{\rm d}^3{\kern-.1em}{k} \,
\sqrt{{\bf k}^2+{M^*}^2}
\, , \label{eq:endens}
\end{eqnarray}
where $k_{\scriptscriptstyle\rm F}$ is the Fermi momentum
defined by $\mu =g_{\rm v}V_0+\sqrt{k_{\rm \scriptscriptstyle F}^2+{M^*}^2}$,
and
$\rho_{\scriptscriptstyle\rm B}=
\gamma k_{\scriptscriptstyle\rm F}^3/(6\pi^2)$
is the baryon density.
The spin-isospin degeneracy $\gamma = 4$
for nuclear matter and $\gamma = 2$ for
neutron matter.
The equation that determines $V_{ 0}$
can be obtained either from the Euler--Lagrange equations or by using
Dirac's procedure \cite{DIRAC}, with the result
\begin{equation}
g_{\rm v}\rho_{\scriptscriptstyle\rm B}=
\Bigl(1+\eta {\phi_0\over S_0}\Bigr)\,
m_{\rm v}^2 V_{ 0}
+ {1\over 6}\zeta g_{\rm v}^4 V_{ 0}^3
\ . \label{eq:vcnstrnt}
\end{equation}
This equation can also be obtained
from Eq.~(\ref{eq:endens}) by setting
$(\partial {\cal E}/\partial V_{ 0})_{
\rho_{\scriptscriptstyle\rm B}, M^*}=0\ $.
Note, however, that this is not a minimization condition for ${\cal E}$.
In fact, the $V_{ 0}$ obtained from Eq.~(\ref{eq:vcnstrnt})
corresponds to a local maximum of the energy density.
Equation~(\ref{eq:vcnstrnt}), like Eq.~(\ref{eq:Vfeqn}),
is a constraint equation for
$V_{ 0}$, which is not a dynamical variable.
\begin{figure}[tbhp]
\centerline{%
\vbox to 3.5in{\vss
\hbox to 3.3in{\special{psfile=ep_fig.ps angle=270.
hscale=90 vscale=90
hoffset=-50 voffset=300
}\hss}}
}
\caption{\capcrunch{%
Finite-density
effective potential ${\cal E}$ from Eq.~(\protect\ref{eq:endens}),
plotted as a function of $M^\ast$ (solid line). $V_{ 0}$ is
eliminated for each $M^\ast$ using Eq.~(\protect\ref{eq:vcnstrnt}).
Parameter set T1 is used and
$k_{{\scriptscriptstyle\rm F}} = 1.30\mbox{\,fm}^{-1}$.
Results for other parameter sets and other densities are
qualitatively similar.
Also shown are the analogous potentials for the Walecka model
RHA \protect\cite{WALECKA74} (dotted line) and the
nonlinear parameter set B from Ref.~\protect\cite{FS} (dashed line).}
}
\label{fig:one}
\end{figure}
The energy density at a given baryon density is found
by using Eq.~(\ref{eq:vcnstrnt}) to eliminate
$V_{ 0}$ from ${\cal E}$ in Eq.~(\ref{eq:endens})
and then by minimizing the resulting finite-density
effective potential with respect
to $M^\ast$.
The effective potential at fixed baryon density is shown in Fig.~\ref{fig:one}.
Notice that in contrast to the conventional one-loop approximation
(relativistic Hartree approximation or RHA \cite{WALECKA74,SW}) in
renormalizable models, the finite-density
effective potential of our truncated model is meaningful
only when $|g_{\rm s}\phi_0 |$ is sufficiently
small that higher-order terms can be neglected.
Similar considerations apply to the solutions of Eq.~(\ref{eq:vcnstrnt}).
(See Section~V for further discussion.)
Parameters can be chosen so that nuclear matter exhibits saturation at
the empirical point;
one approach to determining the parameters is discussed in the next
section.
\section{Results}
To test the utility of the model,
we must see if it can successfully describe finite nuclei \cite{FS}.
The basic features we seek to reproduce are the nuclear charge densities
(including the observed flatness in heavy nuclei), the characteristics
of the single-particle spectrum, and the bulk binding-energy
systematics.
Relativistic mean-field models unconstrained by QCD symmetries
have been successful in reproducing these properties for nuclei
across the periodic table.
The Hartree equations for finite nuclei in our model were given
in Section III, but only isoscalar mesons were discussed.
To make realistic comparisons to experiment, we must include
the $\rho$ and the Coulomb interactions.
Here we simply introduce
the $\rho$ and the photon as in Ref.~\cite{FS}, except that we also
include a coupling between the $\rho$ and the scalar $\phi$, exactly as for the
$\omega$ [see Eq.~(\ref{eq:endens})].
A more complete treatment of the isovector mesons
will be presented elsewhere\cite{FST}.
We take the nucleon, $\omega$, and $\rho$ masses as given
by their experimental values:
$M=939\,$MeV, $m_{\rm v} = 783\,$MeV, and $m_{\rho}=770\,$MeV.
We then fit the rest of the parameters
($g_{\rm s}$, $g_{\rm v}$,
$g_{\rho}$, $\eta$, $\zeta$, $m_{\rm s}$, $S_0$, and $d$)
to the binding energies, the charge radii,
and the spin-orbit splittings of the least-bound proton and
neutron in $^{16}$O, $^{40}$Ca, and $^{208}$Pb, as well as to
the charge density of $^{16}$O at $r \approx 1\,$fm.
An optimization process similar to that of Ref.~\cite{LosAlamos} is used.
Here we are principally interested in showing that a good fit to properties
of finite nuclei {\it can\/} be achieved;
Table~\ref{tab:one} lists three such parameter sets (T1, T2, and T3).
In set T1, $d$ is an optimization parameter, while it is fixed
(arbitrarily) in sets T2 and T3 to illustrate the range of possible $d$.
In a future paper, we will study in more detail the regions of the parameter
space that produce a reasonable fit and examine which conditions are important
in determining individual parameters.
\begin{table}[tbh]
\caption{Parameter sets
from fits to finite nuclei.
The vector masses are $m_{\rm v} = 783\,$MeV and $m_\rho = 770\,$MeV;
the nucleon mass is $M = 939\,$MeV.
Values for $S_0$, the scalar mass $m_{\rm s}$, and
$H_{\rm q}^{1/4}$ are in MeV.
Note that $m_{\rm s}^2 = 4 H_{\rm q}/(d^2 S_0^2)$.
}
\smallskip
\begin{tabular}[tbh]{cccccccccc}
Set & $g_{\rm s}^2$ & $m_{\rm s}$ & $g_{\rm v}^2$ & $g_{\rho}^2$ &
$S_0$
& $\zeta$ & $\eta$ & $d$ & $H_{\rm q}^{1/4}$ \\
\hline
T1 & 99.3 & 509. & 154.5 & 70.2
& 90.6 & 0.0402 & $-0.496$ & 2.70
& 250. \\
T2 & 96.3 & 529. & 138.0 & 69.6
& 95.6 & 0.0342 & $-0.701$ & 2.20
& 236. \\
T3 & 109.5 & 508. & 178.6 & 67.2
& 89.8 & 0.0346 & $-0.160$ & 3.50
& 283. \\
\end{tabular}
\label{tab:one}
\end{table}
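As a quick arithmetic check of the caption relation (ours, not part of the original
analysis), the tabulated $m_{\rm s}$ can be recovered from $H_{\rm q}$, $d$, and $S_0$:
\begin{verbatim}
# Check of the Table I caption relation m_s^2 = 4 H_q/(d^2 S_0^2),
# i.e. m_s = 2 (H_q^{1/4})^2/(d S_0), against the quoted entries.
sets = {"T1": (509.0, 90.6, 2.70, 250.0),   # m_s, S_0, d, H_q^{1/4} (MeV)
        "T2": (529.0, 95.6, 2.20, 236.0),
        "T3": (508.0, 89.8, 3.50, 283.0)}
for name, (m_s, S0, d, Hq14) in sets.items():
    print(name, "m_s from H_q: %.0f MeV (quoted %.0f MeV)"
          % (2.0*Hq14**2/(d*S0), m_s))
# The two numbers agree at the level of the rounding of the tabulated values.
\end{verbatim}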
\narrowtext
\begin{table}[tbh]
\caption{Binding-energy systematics for the model proposed here
(sets T1, T2, and T3), for model B from Ref.~\protect\cite{FS},
and for the point-coupling
(PC) model of Ref.~\protect\cite{LosAlamos}.
Binding energies per nucleon are given in MeV.}
\begin{tabular}[tbh]{cccc}
Model & $^{16}$O & $^{40}$Ca & $^{208}$Pb \\ \hline
T1 & 7.99 & 8.61 & 7.91 \\
T2 & 7.94 & 8.55 & 7.89 \\
T3 & 7.95 & 8.53 & 7.91 \\
B & 7.82 & 8.35 & 7.62 \\
PC & 7.97 & 8.58 & 7.87 \\
exp't & 7.98 & 8.55 & 7.87 \\
\end{tabular}
\label{tab:two}
\end{table}
\narrowtext
\begin{table}[tbh]
\caption{Rms charge radii (in fm)
for the model proposed here
(sets T1, T2, and T3), for model B from Ref.~\protect\cite{FS},
and for the point-coupling
(PC) model of Ref.~\protect\cite{LosAlamos}.}
\begin{tabular}[tbh]{cccc}
Model & $^{16}$O & $^{40}$Ca & $^{208}$Pb \\ \hline
T1 & 2.73 & 3.47 & 5.56 \\
T2 & 2.72 & 3.47 & 5.56 \\
T3 & 2.72 & 3.48 & 5.57 \\
B & 2.74 & 3.48 & 5.56 \\
PC & 2.73 & 3.45 & 5.51 \\
exp't & 2.74 & 3.47 & 5.50 \\
\end{tabular}
\label{tab:three}
\end{table}
\begin{figure}[tbhp]
\centerline{%
\vbox to 3.5in{\vss
\hbox to 3.3in{\special{psfile=pb208_chg.ps angle=270.
hscale=90 vscale=90
hoffset=-50 voffset=300
}\hss}}
}
\caption{\capcrunch{%
Charge density of $^{208}$Pb.
The solid line is taken from experiment \protect\cite{DEVRIES87}.
Charge densities are shown for
a successful mean-field model (model B from
Ref.~\protect\cite{FS}) and for the three parameter sets
from Table~I.}
}
\label{fig:two}
\end{figure}
We have calculated $^{16}$O, $^{40}$Ca, and $^{208}$Pb for these
parameter sets and
for a representative mean-field model (set B from Ref.~\cite{FS}).
Bulk binding-energy systematics are summarized in Table~\ref{tab:two}
and rms charge radii are summarized in Table~\ref{tab:three}.
For comparison, we also include results from the point-coupling model
of Ref.~\cite{LosAlamos}.
The binding energies include center-of-mass corrections as in
Ref.~\cite{REINHARD89}.
We show charge densities and single-particle levels for
$^{208}$Pb in Figs.~\ref{fig:two} and \ref{fig:three},
and charge densities for $^{16}$O and $^{40}$Ca in Figs.~\ref{fig:four}
and \ref{fig:five}.
The charge densities are determined from point-proton densities following
the conventional procedure \cite{HOROWITZ}, which folds them with
a phenomenological proton form factor.
Form factors generated within the model itself, originating from vector
dominance physics, will be considered elsewhere.
\begin{figure}[tbhp]
\centerline{%
\vbox to 5.in{\vss
\hbox to 3.3in{\special{psfile=level.ps angle=0.
hscale=75 vscale=75
hoffset=-30 voffset=40
}\hss}}
}
\caption{\capcrunch{%
Predicted proton single-particle spectra for $^{208}$Pb using
the parameter sets from Table~I.
Only the least-bound major shell is shown.
The leftmost values are from experiment, model B
is a successful mean-field model from
Ref.~\protect\cite{FS}, and model PC is the point-coupling
model of Ref.~\protect\cite{LosAlamos}.
Note that the $1h_{9/2}$ level is an {\it unoccupied\/} state.}
}
\label{fig:three}
\end{figure}
\begin{figure}[tbhp]
\centerline{%
\vbox to 3.5in{\vss
\hbox to 3.3in{\special{psfile=o16_chg.ps angle=270.
hscale=90 vscale=90
hoffset=-50 voffset=300
}\hss}}
}
\caption{\capcrunch{%
Charge density of $^{16}$O.
The solid line is taken from experiment \protect\cite{DEVRIES87}.
Charge densities are shown for
a successful mean-field model (model B from
Ref.~\protect\cite{FS}) and for the three parameter sets
from Table I.}
}
\label{fig:four}
\end{figure}
\begin{figure}[tbhp]
\centerline{%
\vbox to 3.5in{\vss
\hbox to 3.3in{\special{psfile=ca40_chg.ps angle=270.
hscale=90 vscale=90
hoffset=-50 voffset=300
}\hss}}
}
\caption{\capcrunch{%
Charge density of $^{40}$Ca.
The solid line is taken from experiment \protect\cite{DEVRIES87}.
Charge densities are shown for
a successful mean-field model (model B from
Ref.~\protect\cite{FS}) and for the three parameter sets
from Table I.}
}
\label{fig:five}
\end{figure}
The fits to nuclear charge radii, binding energies, and spin-orbit splittings
are quite good.
The only deficiencies in the sets illustrated here are some
small deviations from
experiment in the charge densities.
Changes in the optimization procedure can improve the agreement of the
charge densities at the cost of worsening slightly the agreement with
empirical binding energies.
A good reproduction of the spin-orbit force in finite nuclei necessarily
leads to large scalar and vector mean fields in the interiors of the nuclei
or in nuclear matter.
In particular, as discussed many times (recently by Bodmer \cite{BODMER91}),
vector and scalar fields of roughly 250--300\,MeV are needed to reproduce
the observed spin-orbit splittings in the least-bound levels (and also
the deformations in light, axially symmetric nuclei \cite{FPW}).
While these fields are large on the scale of the nuclear binding energies,
$| g_{\rm v}V_0 |/M$ and
$| g_{\rm s}\phi_0 |/M$ and their gradients in finite nuclei
are relatively small; thus, these remain useful
expansion parameters.
This justifies our truncation of the energy density at small powers of the
meson fields.
While it is possible in principle to add additional monomials in the fields
(with undetermined parameters), the quality of the present fit makes it
unlikely
that there is much to be gained by this.
The scale dimension $d$ of the light scalar field was found to
be about 2.7 when $d$ was included in the optimization.
Note that the canonical dimension would have $d=1$.
Changes in the optimization procedure or a relaxation in the goals of
the fit allow for a considerable range in $d$
(sets T2 and T3 are examples), but it does not seem possible
to find a reasonable parameter set with $d<2$.
{\em Thus the introduction of an anomalous dimension for the light scalar
degree of freedom is an essential feature for the phenomenological
success of our model}.
\begin{table}[tbh]
\caption{Nuclear matter saturation properties for the model proposed
here
(sets T1, T2, and T3), for model B from Ref.~\protect\cite{FS},
and for the point-coupling
(PC) model of Ref.~\protect\cite{LosAlamos}.
Values are given for the binding energy per nucleon (in MeV),
the Fermi momentum $k_{{\scriptscriptstyle\rm F}}$ (in fm$^{-1}$), the
compressibility $K$ (in MeV), the bulk symmetry energy coefficient
$a_4$ (in MeV), $M^\ast /M$,
and $g_{\rm v} V_0$ (in MeV) at equilibrium.
}
\smallskip
\begin{tabular}[tbh]{ccccccc}
Model & $E/B-M$ & $k_{{\scriptscriptstyle\rm F}}$
& $K$ & $a_4$ & $M^\ast/M$ & $g_{\rm v} V_0$ \\
\hline
T1 & 16.2 & 1.30 & 194. & 39. & 0.60 & 302. \\
T2 & 16.3 & 1.29 & 240. & 40. & 0.61 & 298. \\
T3 & 16.1 & 1.29 & 244. & 34. & 0.61 & 297. \\
B & 15.8 & 1.30 & 220. & 35. & 0.63 & 277. \\
PC & 16.1 & 1.30 & 264. & & 0.58 & 322. \\
\end{tabular}
\label{tab:oneb}
\end{table}
Experience with a broad class of relativistic mean-field models
shows that models that successfully reproduce bulk properties of finite
nuclei share characteristic properties in infinite nuclear matter \cite{FS}.
These properties
are the equilibrium binding energy and density, the compressibility $K$,
and the value of $M^\ast/M$ at equilibrium.
One further condition, that the light scalar mass $m_{\rm s} \approx 500\,$MeV,
is needed to ensure reasonably smooth charge densities
and good surface-energy systematics.
If we calculate nuclear matter with the parameter sets in
Table~\ref{tab:one}, we find good agreement with values found
in investigations with unconstrained mean-field models
(see Table~\ref{tab:oneb}).
In particular, the saturation density corresponds to a Fermi momentum
of about 1.3\,fm$^{-1}$,
and the binding energy per nucleon at saturation is about 16\,MeV.
The compressibility is less well determined (190--250\,MeV).
The nucleon effective mass is
$M^*/M \approx 0.60$, and the scalar mass $m_{\rm s}$ is just over
500\,MeV.
We emphasize that these values are obtained after a fit to finite nuclei only.
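For readers who wish to reproduce these numbers, the following minimal numerical sketch
(ours, not the code used for the fits) evaluates Eq.~(\ref{eq:endens}) for set T1 of
Table~\ref{tab:one}, eliminates $V_0$ with Eq.~(\ref{eq:vcnstrnt}), and minimizes the
resulting finite-density effective potential with respect to $M^\ast$ at
$k_{\scriptscriptstyle\rm F}=1.30\,{\rm fm}^{-1}$; up to numerical and rounding details
it should land close to the T1 entries of Table~\ref{tab:oneb}.
\begin{verbatim}
# Minimal sketch: nuclear matter in the Hartree approximation, set T1.
# Units: hbar = c = 1; all energies in MeV.
import numpy as np

hbarc = 197.327                      # MeV fm
M, m_v, m_s = 939.0, 783.0, 509.0    # nucleon, omega, scalar masses
gs2, gv2 = 99.3, 154.5               # g_s^2, g_v^2 (set T1)
S0, zeta, eta, d = 90.6, 0.0402, -0.496, 2.70
g_s, g_v = np.sqrt(gs2), np.sqrt(gv2)
gamma = 4                            # spin-isospin degeneracy (nuclear matter)

def V0_of(phi0, rhoB):
    # Solve g_v rhoB = (1 + eta phi0/S0) m_v^2 V0 + zeta g_v^4 V0^3/6
    a = (1.0 + eta*phi0/S0)*m_v**2
    b = zeta*g_v**4/6.0
    lo, hi = 0.0, 1.0e4              # the left side is monotonic for a > 0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if b*mid**3 + a*mid - g_v*rhoB < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

def fermi(kF, Ms):
    # int_0^kF k^2 sqrt(k^2 + Ms^2) dk, closed form
    EF = np.sqrt(kF**2 + Ms**2)
    return kF*EF**3/4.0 - Ms**2*kF*EF/8.0 - Ms**4/8.0*np.log((kF + EF)/Ms)

def eps_dens(Mstar, kF):
    # Energy density of Eq. (endens) with V0 eliminated via Eq. (vcnstrnt)
    phi0 = (M - Mstar)/g_s
    rhoB = gamma*kF**3/(6.0*np.pi**2)
    V0 = V0_of(phi0, rhoB)
    x = phi0/S0
    Uscal = 0.25*m_s**2*S0**2*d**2*(
        (1.0 - x)**(4.0/d)*(np.log(1.0 - x)/d - 0.25) + 0.25)
    kin = gamma/(2.0*np.pi**2)*fermi(kF, Mstar)
    E = (Uscal + g_v*rhoB*V0 - zeta*(g_v*V0)**4/24.0
         - 0.5*(1.0 + eta*x)*m_v**2*V0**2 + kin)
    return E, rhoB, V0

kF = 1.30*hbarc                      # 1.30 fm^-1 in MeV
Ms_grid = np.linspace(0.40*M, 0.95*M, 1101)
vals = [eps_dens(Ms, kF) for Ms in Ms_grid]
EB = np.array([E/rho - M for (E, rho, _) in vals])
i = int(np.argmin(EB))
print("M*/M    :", round(Ms_grid[i]/M, 3))          # should come out near 0.60
print("E/B - M :", round(float(EB[i]), 1), "MeV")   # should be close to -16 MeV
print("g_v V_0 :", round(float(g_v*vals[i][2])), "MeV")  # near 300 MeV
\end{verbatim}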
\section{Discussion}
We can relate the phenomenological success of the model proposed here
to the characteristics of successful relativistic mean-field models
of finite nuclei.
A key feature is the logarithmic
potential for the scalar field, which allows for relatively
weak nonlinearities and the dominance of the cubic and quartic scalar terms,
with the values of the scaling dimension $d$ used here.
In contrast, chiral models with a Mexican hat potential have large cubic
and quartic terms, which preclude a good fit to bulk nuclear properties
\cite{FS}.
Bodmer \cite{BODMER91} has shown that nuclear matter properties that lead
to good predictions for finite nuclei can be achieved if one adds to (small)
cubic and quartic scalar terms a term that is quartic in the vector field
(here with coupling $\zeta$).
Thus our model has all of the ingredients needed to allow a good fit
through optimization.
In addition, adjustments can be made through the
scalar-vector coupling $\eta$.
Note that the scalar-vector coupling and the quartic vector
self-coupling can be used to define an
effective, density-dependent mass of the vector meson at the mean-field level.
For example, one can use
the second derivative of the lagrangian with respect to the vector field.
For the model parameters in Table~\ref{tab:one}, the two contributions
largely cancel, so that the vector effective mass $m_{\rm v}^\ast$
is essentially independent of density.
This is in contrast to the universal scaling hypothesis of Brown and Rho
\cite{BROWN91},
which predicts $m_{\rm v}^\ast/m_{\rm v} = M^\ast/M$.
We have excluded many terms from our model: higher-order
polynomials in the vector fields and mixed scalar-vector terms,
non-Yukawa couplings to the nucleon, derivative terms, and so on.
In retrospect, were we justified in neglecting them?
An analysis of mean-field models \cite{BODMER91,FST}
implies that one can identify
dimensionless ratios that can be used to set the scale of individual
contributions to the energy.
For example,
one can rewrite the scaled energy density of nuclear matter,
${\cal E}/M^4$, in terms of the dimensionless ratios
$g_{\rm v}V_0 /M$ and $g_{\rm s}\phi_0 /M$,
which then become our finite-density expansion parameters.
Moreover, an important assumption in applying effective field theories,
such as chiral perturbation theory, is that the coefficients of terms in the
lagrangian are ``natural,'' {\it i.e.,\/} of order unity,
when written in appropriate dimensionless units.
This assumption makes the organization of terms through a
power-counting scheme useful, because one can systematically
truncate the expansion when working to a desired accuracy.
We have proposed an analogous concept of naturalness for the
finite-density problem,
which will justify the neglect of higher derivatives and
powers of the fields when applying Eq.~(\ref{eq:NLag}) to nuclei.
For example, if one expresses the nuclear matter energy density in terms of
the scaled field variables written above, one finds that the ratios
\begin{equation}
{m_{\rm v}^2 \over 2 g^2_{\rm v} M^2} \ , \quad
{m_{\rm s}^2 \over 2 g^2_{\rm s} M^2} \ , \quad
{\zeta \over 8}\ , \quad
{\eta m_{\rm v}^2 \over 2 g_{\rm s} g_{\rm v}^2 S_0 M} \ ,
\label{eq:scaledstuff}
\end{equation}
should all be of roughly equal size for our expansion to be ``natural.''
One can verify that the values in sets T1, T2, and T3 satisfy this condition.
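For concreteness, the following snippet (ours) evaluates the four ratios of
Eq.~(\ref{eq:scaledstuff}) for set T1 of Table~\ref{tab:one}:
\begin{verbatim}
# Dimensionless ratios of Eq. (scaledstuff) for set T1 (entries in MeV).
import math
M, m_v, m_s = 939.0, 783.0, 509.0
gs2, gv2 = 99.3, 154.5
S0, zeta, eta = 90.6, 0.0402, -0.496
g_s = math.sqrt(gs2)
print("m_v^2/(2 g_v^2 M^2)           = %+.1e" % (m_v**2/(2*gv2*M**2)))
print("m_s^2/(2 g_s^2 M^2)           = %+.1e" % (m_s**2/(2*gs2*M**2)))
print("zeta/8                        = %+.1e" % (zeta/8.0))
print("eta m_v^2/(2 g_s g_v^2 S_0 M) = %+.1e" %
      (eta*m_v**2/(2*g_s*gv2*S0*M)))
# All four come out at the few-times-10^-3 level in magnitude,
# i.e. of comparable size, as the naturalness argument requires.
\end{verbatim}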
To examine the size of the scalar self-interactions, one expands the
logarithmic potential in Eq.~(\ref{eq:endens}) with the result
\begin{eqnarray}
{1\over 4} m_{\rm s}^2 S_0^2 d^2
\Bigl\{ \Bigl(1 &-&{\phi_0\over S_0}\Bigr)^{4/d}
\Bigl[{1 \over d}\ln \Bigl(1-{\phi_0\over S_0}\Bigr)
-{1\over 4}\Bigr]
+ {1\over 4} \Bigr\}\Big/ M^4 \nonumber \\[4pt]
& = & \Bigl[ {m_{\rm s}^2 \over 2! g_{\rm s}^2 M^2} \Bigr]
{\widetilde \Phi}^2
+ \Bigl[ {1\over 3! M g_{\rm s}^3}\ {(3d - 8) m_{\rm s}^2\over d S_0}
\Bigr] {\widetilde \Phi}^3
+ \Bigl[ {1\over 4! g_{\rm s}^4}\ {(11d^2 - 48d + 48) m_{\rm s}^2\over
(d S_0)^2} \Bigr] {\widetilde \Phi}^4
\nonumber \\[4pt]
& & + \Bigl[ {M\over 5! g_{\rm s}^5}\ {2(25d^3 - 140d^2 + 240d - 128)
m_{\rm s}^2\over (d S_0)^3} \Bigr] {\widetilde \Phi}^5
\nonumber \\[4pt]
& & + \Bigl[ {M^2\over 6! g_{\rm s}^6}\ {2(137d^4 - 900d^3 + 2040d^2
-1920d + 640) m_{\rm s}^2 \over (d S_0)^4} \Bigr]
{\widetilde \Phi}^6 + \cdots \ , \label{eq:logexp}
\end{eqnarray}
where ${\widetilde \Phi} \equiv g_{\rm s} \phi_0 /M$.
The coefficients in square brackets give the combinations that should be
compared to those in Eq.~(\ref{eq:scaledstuff}), and one can verify that
these are also natural for parameter sets T1, T2, and T3.
It is interesting that for set T1, in which $d$ is an optimized parameter,
the scaled coefficients are extremely small due to nearly complete
cancellations among terms in the polynomials in $d$.
(Further discussion of these issues is given in Ref.~\cite{FST}.)
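The expansion itself is straightforward to verify symbolically; the following sketch
(ours, not part of the original analysis) reproduces the coefficients of
Eq.~(\ref{eq:logexp}) up to the overall factors of $m_{\rm s}^2$, $g_{\rm s}$, $S_0$,
and $M$ that come from restoring $\phi_0 = M{\widetilde\Phi}/g_{\rm s}$:
\begin{verbatim}
# Symbolic check of the Taylor coefficients in Eq. (logexp).  With
# u = phi_0/S_0 the scalar potential is (m_s^2 S_0^2 d^2/4)*g(u); the
# printed u^n coefficient of d^2 g(u)/4 should equal the bracketed
# polynomial in d of Eq. (logexp) divided by n! d^(n-2)
# (e.g. (3d-8)/(3! d) for n = 3).
import sympy as sp
u, d = sp.symbols('u d', positive=True)
g = (1 - u)**(4/d)*(sp.log(1 - u)/d - sp.Rational(1, 4)) + sp.Rational(1, 4)
poly = sp.series(g, u, 0, 7).removeO()
for n in range(2, 7):
    print("u^%d :" % n, sp.factor(sp.simplify(poly.coeff(u, n)*d**2/4)))
\end{verbatim}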
Further support for the naturalness assumption comes from extending the
model to include $\phi^2 V_\mu V^\mu$ and $(V_\mu V^\mu)^3$ terms and then
repeating the optimization.
The new fit is very close to the fit obtained without these terms.
Furthermore, contributions to the energy from the new terms are
less than 10\% of those from the old terms at nuclear
matter density, and the old coefficients change only slightly in the
new fit \cite{FST}.
Thus contributions from the higher-order terms can be absorbed into
slight adjustments of the coefficients in Eq.~(\ref{eq:NLag}).
The astute reader will note that if our naturalness assumption is
justified, we could construct a variation of our model
without the constraints of the low-energy theorems of broken scale invariance.
Indeed, at nuclear matter density,
numerics alone would let us truncate the scalar potential, and the same
arguments about renormalization apply, so that vacuum effects are still
built in.
This explains the success of previous relativistic mean-field models of nuclear
structure and illustrates the power of the assumption of
naturalness.
Here we note that the scalar potential constrained by the low-energy theorems
actually provides some justification for naturalness, which we simply
assume is valid for higher-order and derivative terms.
Thus one should be cautious in drawing strong conclusions about the role of
broken scale invariance when applied in models that are restricted to
moderate nuclear density.
How widely can our model be applied?
A prime motivation for developing relativistic models of nuclei and
nuclear matter is to extrapolate to extremes of density and
temperature \cite{WALECKA74}.
Such conditions can be reached experimentally in relativistic heavy-ion
collisions.
One hopes that the calibration of such models to observables at
ordinary nuclear densities and zero temperature, in conjunction with
constraints from QCD symmetries, will permit reliable extrapolations.
Unfortunately, our framework of mean-field dominance, naturalness, and the
truncation at small powers of the fields and their derivatives,
which limits the number of parameters at
ordinary nuclear densities, is bound to break down as the density increases.
With increasing density, we will find increasing mean fields and
expansion parameters that are no longer small.
Thus we become increasingly less justified in ignoring the effects of
higher-order terms, and the calibration at nuclear matter density becomes
less and less of a constraint.
The limits of reliable extrapolation are not clear, but one should
certainly be cautious in applying models like ours much above nuclear
matter density.
Nevertheless, the utility of an accurate relativistic mean-field model for
nuclear structure and reactions, which is compatible with the low-energy
behavior of QCD, should be obvious.
We close our discussion with some interesting observations.
{}From Table~\ref{tab:one},
one sees that $S_0$, the vacuum expectation value of the
light scalar field $S$, is close to the experimental value of
$f_\pi$ (93\,MeV).
Furthermore, the scalar coupling constant $g_{\rm s}$ is close to
$g_\pi/g_A$.
If we forget for the moment complications
from requiring terms to be scale invariant,
it is tempting to say that the model has a preference for the nucleon
mass to be generated entirely from the vacuum expectation value of $S$.
That is, if the only nucleon coupling to scalar fields is $g_{\rm
s}\overline NN S$, then we recover the empirical nucleon mass and the
Goldberger--Treiman relation from the fit values of the other parameters.
This scenario is also consistent with Miransky's model, in which the
light scalar (quarkonium) is associated with the quark condensate, and
with QCD sum rules, which associate the nucleon mass predominantly with
the quark condensate.
It is premature to do more than to point out these results, but the
coincidence of numbers certainly merits further investigation.
\section{Summary}
In summary, we have introduced a new model for nuclear matter and finite
nuclei that realizes QCD symmetries at the hadronic level.
In particular, the model incorporates chiral symmetry, broken scale
invariance, and the phenomenology of vector dominance.
An important feature is the light scalar degree of freedom, which
is given an anomalous scale dimension.
The renormalized scalar potential is constrained by the low-energy
theorems of broken scale invariance.
Vacuum loop effects are absorbed into the renormalized parameters,
which are determined by fits
to hadron masses and finite-density observables.
The truncation of the model lagrangian is based on mean-field
dominance and the identification of expansion parameters that
are reasonably small at nuclear matter densities.
Due to the characteristics of the constrained scalar potential,
we adopt a ``naturalness'' assumption, which justifies the truncation.
The parameters of the truncated model are identified by an optimization
procedure designed to reproduce bulk properties of finite nuclei.
Good fits are obtained, which also lead to very reasonable nuclear
matter properties.
The scale dimension of the scalar field comes out greater than two,
but is not tightly constrained by the fit.
It is important to emphasize what we have learned about the relationship
between effective (hadronic) theories of QCD and successful relativistic
mean-field phenomenology.
The vacuum dynamics of QCD is constrained by the trace anomaly and the
consequent low-energy theorems of QCD.
At the level of hadronic fields, this physics manifests itself in the
scalar-isoscalar sector of the theory.
We have proposed that this sector can be divided into a low-mass part that
is adequately described by a scalar meson with anomalous dimension and a
high-mass part that is ``integrated out,'' leading to various couplings
among the remaining fields.
We believe this latter characterization is quite general and independent
of the details of the high-mass part of the scalar sector.
Nevertheless, whereas the realization of the Goldstone boson dynamics is
well known ({\em i.e.}, chiral perturbation theory), as is the dynamics of
the vector sector ({\em i.e.}, vector-meson dominance), little is known
about the precise form and magnitudes of the nonlinear couplings
originating from the scalar degrees of freedom.
We find that our primary source of information on this dynamics comes from
{\em nuclear structure physics}, which provides strong constraints on this
sector of the theory.
In subsequent work, we will further explore the parameter space that leads
to good fits to nuclear properties and identify the observables that
constrain individual terms.
We will also investigate the chiral properties of the model and
study the implications of vector dominance for nuclear observables.
Work to extend the model beyond the one-baryon-loop
level in a manner consistent with
conservation laws and Ward identities is in progress.
Finally, we will continue the development of the naturalness concept for
finite-density systems.
\acknowledgments
We are pleased to thank P. J. Ellis, S.~V. Gardner, C.~J. Horowitz,
D.~B. Leinweber, V.~A. Miransky,
R.~J. Perry, S.~Rudaz, and J.~Rusnak for useful comments.
This work was supported in part by the National Science Foundation
under Grant Nos.~PHY-9203145, PHY-9258270, PHY-9207889, and
PHY-9102922, by the Sloan Foundation, and
by the U.S. Department of Energy under
contract No.~DE--FG02--87ER40365.
\section{Introduction}
The issue of the maximal possible synchrotron radiation energy in astrophysics (Guilbert, Fabian \& Rees 1983) has been discussed
in various frameworks: the extreme Fermi acceleration of electrons at shocks (e.g. Ghisellini 1999) and the converter mechanism (Derishev 2003; Stern 2003). In the extreme acceleration scenario the electron energy is limited by the condition that the Larmor gyration time, being the shortest available acceleration time, is shorter than the synchrotron cooling time.
In the converter mechanism, a new high-energy electron appears in the relativistic jet as a result of photon-photon pair
(or photomeson) production and its initial energy can be arbitrarily high in the fluid frame. However, its initial direction in the jet frame is opposite to the jet bulk motion.
Therefore, its synchrotron radiation is initially
Doppler deboosted in the external reference frame. When the electron has turned around, the emitted synchrotron photons become Doppler boosted. While gyrating in the magnetic field by the angle $\pi$, it cools down below the limiting Lorentz
factor (in the comoving frame) $\gamma_{\rm m} \sim 10^8 B^{-1/2}$, where $B$[G] is the magnetic field strength.
In the extreme acceleration scenario it is approximately the same, and in both cases the maximal
comoving energy of synchrotron photons does not depend on the magnetic field: $\varepsilon_{\rm m} \sim 1/\alpha$ (i.e. $\sim 10^2$ MeV); hereafter $\varepsilon = E/m_{\rm e} c^2$ is the photon energy in electron rest-mass units. In the case of blazars, this cutoff is blueshifted by a factor of up to $2\Gamma$, where $\Gamma$ is the bulk Lorentz factor of the jet. Observations of the superluminal proper motion in blazars imply that a typical Lorentz factor is about
$\Gamma \sim 10-20$ (see Jorstad et al. 2001). In some cases $\Gamma \approx 40$
and its value in the emission region could be even higher, when taking into account possible deceleration of the jet (see Stern \& Poutanen 2008, hereafter SP08). Therefore, the ultimate synchrotron cutoff in the blazar spectra should be expected in the GeV range, maybe slightly above 10 GeV in some extreme cases. This feature should be seen as a spectral dip with a continuation of the spectrum to higher energies
as a result of Comptonization of the external or synchrotron radiation.
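As an aside, the $B$-independence of the comoving limit $\varepsilon_{\rm m}$ quoted above
is easy to verify with a back-of-the-envelope estimate (ours, not taken from the
simulations; prefactors of order unity depend on the exact definitions of the gyration
and cooling times, so only the scalings should be trusted):
\begin{verbatim}
# Equate half a gyration time, pi*gamma*m_e*c/(e*B), to the synchrotron
# cooling time, 6*pi*m_e*c/(sigma_T*gamma*B^2), and evaluate the
# characteristic synchrotron energy gamma_m^2*B/B_cr of the limiting
# electrons (cgs; photon energies in m_e c^2 units).
import math
e       = 4.803e-10      # electron charge
sigma_T = 6.652e-25      # Thomson cross section, cm^2
B_cr    = 4.414e13       # critical field m_e^2 c^3/(e hbar), G
mec2    = 0.511          # MeV

for B in (0.1, 1.0, 10.0, 100.0):        # magnetic field strength in G
    gamma_m = math.sqrt(6.0*e/(sigma_T*B))
    eps_m   = gamma_m**2*B/B_cr          # independent of B
    print("B = %6.1f G  gamma_m = %.1e  eps_m ~ %.0f (%.0f MeV)"
          % (B, gamma_m, eps_m, eps_m*mec2))
# gamma_m ~ 7e7 B^(-1/2), i.e. the ~1e8 B^(-1/2) of the text up to an
# order-unity factor, while eps_m ~ 1e2 (a few tens of MeV, of order
# 1/alpha) comes out the same for every B.
\end{verbatim}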
In which physical context can the observable synchrotron cutoff in the GeV range appear?
The main mechanism of particle acceleration, which is responsible for the gamma-ray
emission of blazars is still unknown. There are at least three possibilities
which are discussed:
(i) {\it Diffusive acceleration of electrons} by various plasma perturbations
or turbulence (Fermi II mechanism). The ultimate acceleration in this case would imply
that the relative bulk velocities of the fluid at the gyroradius scale are relativistic (in the conditions of the examples
considered here the maximal gyroradius is $r_{\rm m} \sim 10^{-5} - 10^{-4} R_{\rm j}$, where $R_{\rm j}$ is the jet radius). This is scarcely possible.
(ii) {\it Shock acceleration of charged particles} (Fermi I mechanism). It can work at
internal shocks in the jet or at the shear layer between the jet
and the external environment. In this case, the acceleration rate can
be much faster, especially at the shear layer, where an electron entering the jet from the external
environment can gain a factor of $\Gamma^2$ in energy (provided that the thickness of the transition layer is
less than the gyroradius). Therefore, the particle spectra can extend up to $\gamma_{\rm m}$.
The issue is how much power can be emitted by the particles near the cutoff.
Simulations of shock acceleration (Niemiec \& Ostrowski 2006; Ostrowski 2008) demonstrate a large variety of resulting particle spectra depending on the geometry of the magnetic field and the power spectrum of its perturbations. For some parameters the spectra are very hard. On the other hand, in these simulations the radiative energy losses are neglected, and it is difficult to conclude whether the spectra extend close to $\gamma_{\rm m}$.
In the shear layer scenario, an electron having the energy above $\gamma_{\rm m}$ cannot penetrate into the jet deeper than
$r_{\rm m}$. Therefore, the energy budget for extreme acceleration at the shear layer is restricted by the bulk energy of a thin outer layer of the jet.
(iii) {\it Converter mechanism}, which can take a
form of a runaway photon breeding in relativistic jets (Derishev et al. 2003; Stern 2003; Stern \& Poutanen 2006).
The photon breeding mechanism is probably the most efficient way of dissipation of the relativistic jet energy into gamma-rays. The mechanism should work if the active galactic nucleus (AGN) is
sufficiently powerful (above $L_{\rm d} \sim 10^{42} - 10^{44}\rm erg\ s^{-1}$, depending on the scale of the emission site) and if the jet is sufficiently
fast (the Lorentz factor above $\Gamma \sim 10$, see SP08).
When the photon breeding mechanism operates, the instantaneous energy gain at the conversion of a photon into an electron--positron pair in
the jet can reach $4 \Gamma^2$ and the ultimate energy $\gamma_{\rm m}$ can be
easily reached. Stern \& Poutanen (2006), studying the photon
breeding regime in jets, showed that
the energy of particles through the breeding cycle grows until the
Lorentz factor of the produced pairs reaches $\gamma_{\rm m}$ in the jet frame. SP08, performing numerical simulations of the photon breeding,
obtained some hard ($\beta \sim 2$) spectra at a moderate luminosity of the source, where the ultimate synchrotron cutoff is clearly seen (hereafter we define the photon spectral index $\beta$ via ${\rm d}N/{\rm d}\varepsilon\propto \varepsilon^{-\beta}$).
As discussed above, the possibility of reproducing such a cutoff with other mechanisms is uncertain at best.
Therefore, the observation of such a feature would be an
argument in favor of the converter mechanism as a source of the jet radiation.
Existing high-energy data, supplied mainly by EGRET (30 MeV -- 20 GeV energy range), do not reveal such a cutoff
at a significant level. Nevertheless, there are some indications of a nontrivial shape of
the blazar spectra in the high-energy range, more complicated than a single high-energy hump.
Such an indication has been found by Costamante, Aharonian \& Khangulyan (2007) in the spectrum of BL Lac
PKS 2155--304. This is one of five objects that have been detected by both EGRET and Cherenkov arrays: the
combination of these two data sets implies a double-humped structure in the ten MeV -- TeV range with a depression around 10 GeV.
This study is motivated by the launch of {\it Fermi} (formerly {\it GLAST}). This instrument,
covering the 200 MeV -- 200 GeV range, is well suited for the detection of the feature we
study (see also the discussion in Costamante et al. 2007).
In this work we perform a series of simulations
in order to outline the variety of conditions
where the ultimate synchrotron cutoff can be observed, and to demonstrate the variety of spectral
shapes in the MeV -- TeV range produced by the photon breeding mechanism. The
problem setup, described in Section 2, is similar to that of SP08. In the same section we present results of numerical
simulations and discuss them. In Section 3 we compare our results with the EGRET blazar data in order to illustrate that the existing data have slightly insufficient accuracy to make significant conclusions about the structure of blazar spectra in the GeV range.
In section 4 we discuss the perspectives of observations.
\section{The numerical simulations}
We have performed Monte-Carlo simulations of the jet radiation in the photon
breeding regime using the Large Particle Monte-Carlo Code (LPMC) for particle
propagation and interactions and a two dimensional ballistic model for the
jet dynamics. The problem setup and the technique of numerical simulations
are described in SP08,
with details given by Stern \& Poutanen (2006). LPMC code is described
in Stern et al. (1995).
Briefly, the problem setup can be summarized as follows:
\begin{enumerate}
\item {\it The jet} is represented as a cylinder of radius $R_{\rm j}$ and length $\Delta z = 20 R_{\rm j}$ split into an array of 64$\times$20
concentric shells and slices, where each cell can independently decelerate by
exchanging 4-momentum with interacting photons. The internal pressure is neglected. The initial Lorentz factor $\Gamma $ at the inlet is constant. The magnetic field has toroidal geometry.
\item {\it Initial state.} A simulation starts with a uniform jet of a constant Lorentz factor and a small amount of seed high-energy photons.
\item {\it The external environment} is kept at rest (see SP08 for the corresponding condition) and has the magnetic field perpendicular to the jet with random orientation in azimuthal direction.
\item {\it Emission sites.} SP08 studied three possible emission sites: at the scale of the broad emission line region ($R \sim 10^{17}$cm, model A), the parsec scale (model B) and small scale ($R \sim 30 - 100 R_{\rm g} \sim 10^{15}$ cm, where $R_{\rm g}$ is the gravitational radius, model C). Here we consider models A and C.
\item {\it The external radiation.} For model A we assume a two-component radiation field, which includes the direct radiation of the accretion disc (the multicolor blackbody spectrum and the X-ray tail) and an isotropic radiation field with the spectrum extending
from the UV (lines and scattered disc radiation) to the far IR (dust radiation
from a few parsec scale and a possible contribution of synchrotron radiation).
The energy density of the isotropic component is 0.05 of that
of the direct disc radiation. The spectrum of the isotropic component is
parametrised as a power law ${\rm d}N/{\rm d}\varepsilon \propto \varepsilon ^{-1.4}$ in the
range $10^{-8} < \varepsilon < T_{\rm max}$, where $T_{\rm max} = 0.5 \times 10^{-5}$ is the maximal temperature of the accretion disc in $m_{\rm e} c^2$ units.
Model C represents a smaller distance scale: the vicinity of the accretion disc at tens to hundreds of gravitational radii. At such a distance the photon
breeding does not require a separate isotropic component of the soft background radiation, because
the accretion disc supplies some photons at sufficiently large angles.
In the case of model C, the picture of the radiation field is more certain than in model A: the temperature
of the disc varies as $T = T_{\rm max} (r/r_{\rm min})^{-3/4}$, the luminosity is
distributed as ${\rm d}L/{\rm d}r \propto r^{-2}$, and the angular distribution of the emitted
photon flux is described by the Lambert law ${\rm d}F/{\rm d}\cos\theta \propto \cos\theta$.
The disc radiation field was precomputed as a function of the distance from the central
engine in the range $( 30 - 100) R_{\rm g}$.
\end{enumerate}
In both models, the spectrum of the disc radiation includes a nonthermal X-ray tail described by the power law with photon index $\beta = 2$ extending up to 100 keV with the total luminosity 0.07 of that of the thermal component. Such X-ray emission exists in many AGNs. For model C the X-ray component is important, because it provides a sufficient opacity for photons of energy $\varepsilon \sim 10^4$ which are otherwise too soft to interact with the thermal disc photons (see Fig. \ref{fig:opac}).
The main difference between the two models can be expressed in terms of
the dependence of the photon-photon opacity on the direction. Fig. \ref{fig:opac} shows
the opacity for photons moving along the jet and in the opposite direction.
One can see that in the case of model C, the anisotropy in the opacity is higher
than that in model A.
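The sense of this anisotropy follows from the standard kinematic threshold for
photon-photon pair production (quoted here for orientation only): two photons with
energies $\varepsilon_1$ and $\varepsilon_2$ (in $m_{\rm e}c^2$ units) colliding at an
angle $\theta$ can produce an $e^+e^-$ pair only if
\[
\varepsilon_1 \varepsilon_2 \, (1-\cos\theta) \ge 2 \ ,
\]
so a high-energy photon moving against the jet meets the external soft photons nearly
head-on ($1-\cos\theta \approx 2$) and interacts at the lowest possible energies,
whereas a photon moving along the jet sees only small collision angles and a
correspondingly higher effective threshold.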
We have chosen these two models as idealised extreme versions of external
radiation: the emission of a point-like disc plus an isotropic background (model A) and the radiation of a geometrically large accretion disc only (model C). Actually the soft radiation field should be their superposition. Particularly, model C should also include some scattered radiation, which we neglect.
The relevance of each model for different classes of objects is not evident.
For the operation of the photon breeding in model C, a lower luminosity
of the accretion disc is required than in model A: $L_{\rm d} \sim 10^{42} R_{15} \rm erg\ s^{-1}$ versus $\sim 10^{43} R_{17}\rm erg\ s^{-1}$ ($R_{n}= R / 10^n$cm). In units of compactness (which is proportional to
$L_{\rm d}/R$), model A has a lower threshold. Model C is probably the only one which can
work in the case of BL Lacs. On the other hand, model C implies that the acceleration of the jet is fast:
the Lorentz factor $\Gamma > 20$ should be reached at a few tens of $R_{\rm g}$. Otherwise this model does not provide the conditions for the exponential photon breeding process.
In model A the conversion of the jet energy into the radiation is more efficient.
Other parameters for model A: $\Gamma = 20$, the average distance from the black hole
$2 \times 10^{17}$ cm, the jet radius $R_{\rm j} =10^{16}$cm, the jet power $L_{\rm j} = L_{\rm d}$, the magnetic energy flux $L_{\rm B} = 0.2 L_{\rm d}$. Note that the jet power is not important unless nonlinear effects become substantial. The latter take place at $L_{\rm j} \sim 10^{45} \rm erg\ s^{-1}$.
Parameters for model C: the distance between the beginning of the active region and the black hole $R=10^{15}$ cm, gravitational radius (which defines the accretion disc scale)
$R_{\rm g} = 3 \times 10^{13} $cm, the jet radius $R_{\rm j} = 10^{14}$cm, $L_{\rm B} = 0.25 L_{\rm d}$, $L_{\rm j} =L_{\rm d}$.
We have performed a series of simulations for each model varying the density of
the external soft photon field. This was parametrised through the
corresponding luminosity of the accretion disc $L_{\rm d}$. In both cases we have started
from the luminosity which is slightly above the threshold for the supercritical
photon breeding and have increased it by steps small enough to trace the evolution of the spectrum. For model A the simulations (A1--A4) correspond to
$L_{\rm d}= (0.5, 1, 1.4, 2)\times 10^{44} \rm erg\ s^{-1}$. For model C (runs C1--C6):
$L_{\rm d}= (1, 2, 4, 8, 16, 32) \times10^{42} \rm erg\ s^{-1} $.
\begin{figure}
\begin{center}
\leavevmode \epsfxsize=7.0cm \epsfbox{fig1.eps}
\end{center}
\caption{ \label{fig:opac}
The photon-photon opacity over a photon path $R_{\rm j}$ as a function of the energy
of photons moving in the counter-jet (upper curves for both models) and
the jet direction (lower curves for both models). The absolute value of the opacity corresponds to the disc luminosity $2 \times 10^{44}\rm erg\ s^{-1}$ for model A and $2.5 \times 10^{43} \rm erg\ s^{-1}$ for model C.}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode \epsfxsize=7.0cm \epsfbox{fig2.eps}
\end{center}
\caption{ \label{fig:spec}
Spectra produced by the photon breeding mechanism obtained with
the numerical simulations. Model A:
curves from bottom to top correspond to runs A1--A4
(disc luminosity varies from
$0.5 \times 10^{44}\rm erg\ s^{-1}$ to $2 \times 10^{44} \rm erg\ s^{-1}$).
Model C: curves from bottom to top
correspond to runs C1--C6
(the disc luminosity varies from $10^{42}\rm erg\ s^{-1} $ to $3.2\times 10^{43}\rm erg\ s^{-1}$).}
\end{figure}
The resulting efficiency of the jet power conversion into radiation varies from 0.16 (run A1) to 0.6 (run A4) for model A and from 0.05 (run C1) to 0.3 (run C6) for model C.
The resulting spectra as a function of the disc luminosity for each model are given in Fig.~\ref{fig:spec}.
In both cases the ultimate synchrotron component is relatively strong and
the cutoff is well seen at a low disc luminosity: within a factor of
$\sim 3 $ above the threshold of supercritical photon breeding in the case of
model A, and within one order of magnitude in the case of model C.
In real sources, the compactness parameter is probably widely distributed, and
therefore the number of objects demonstrating the ultimate synchrotron cutoff in
its clear form should be relatively small. At a higher disc luminosity the cutoff
turns into a shallow dip. In both cases, the feature
is associated with the transition between the synchrotron and the Compton components;
however, the position and the shape of the break are different for the two models.
The reasons why the transition between the synchrotron and the Compton components shifts to a lower energy as the compactness increases are as follows:
(i) In the case of model C the high-energy photons ($\varepsilon \sim 1/T_{\rm max}$) produced {\it in the external environment} are removed from the breeding cycle.
Indeed, the photon-photon opacity across the jet for $\varepsilon =1/T_{\rm max}$
can be as large as a few hundred and most of the external high-energy photons are absorbed beyond the jet. Only the photons with $\varepsilon < 5\times 10^4$ can
penetrate the jet deeply enough (see Fig.\ref{fig:opac}) to support efficiently the breeding cycle. The synchrotron component induced by these photons is cut off at
\begin{equation}
\varepsilon_{\rm s} \sim (\varepsilon \Gamma)^2 {B \over B_{\rm 0}} \Gamma \sim 10 {\big(}{\Gamma \over 30}{\big)}^3 {B \over 10{\rm G}},
\end{equation}
where $B_{\rm 0} = 4.4 \times 10^{13}$G. Indeed, one can see a maximum at $\varepsilon \sim 10$ for model C at a high compactness.
(ii) For model A the high-energy photons produced {\it in the jet} are absorbed because of a high opacity produced by the isotropic soft photons. One can see in Fig.\ref{fig:opac} that for model A the opacity of the isotropic background at the path length $\sim 10 R_{\rm j}$ exceeds unity when
$\varepsilon > 10^6$. The corresponding absorption turnover is seen
in Fig. \ref{fig:spec}. In this case, the absorption initiates the electromagnetic cascade inside the jet producing softer pairs and broader spectra without sharp features.
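A quick arithmetic check of the estimate in point (i), using the fiducial values that
appear in it ($\Gamma \sim 30$, $B \sim 10$ G) and $\varepsilon \sim (4-5)\times 10^4$,
is given by the following snippet (ours):
\begin{verbatim}
# A photon of energy eps (m_e c^2 units, external frame) converts in the
# jet into a pair of comoving Lorentz factor ~ eps*Gamma; its synchrotron
# emission, boosted by Gamma, cuts off near (eps*Gamma)^2*(B/B_0)*Gamma.
B0 = 4.4e13                                   # G
for Gamma, B, eps in [(30, 10.0, 4e4), (30, 10.0, 5e4), (20, 10.0, 5e4)]:
    eps_s = (eps*Gamma)**2*(B/B0)*Gamma
    print("Gamma=%d  B=%g G  eps=%.0e  ->  eps_s ~ %.1f (%.1f MeV)"
          % (Gamma, B, eps, eps_s, eps_s*0.511))
# For Gamma ~ 30, B ~ 10 G and eps ~ (4-5)e4 this gives eps_s ~ 10-15
# (about 5-8 MeV), consistent with the maximum at eps ~ 10 noted above
# for model C at high compactness.
\end{verbatim}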
The variety of spectra we present here is incomplete. For example, when the ratio between the magnetic energy flux and the disc luminosity is increased, the Compton peak becomes lower and the jet radiative efficiency decreases. A larger series of simulations will be presented in a separate publication.
We also do not discuss the
observed low-energy synchrotron maximum treated by SP08 as a
secondary effect: radiation of pairs in heating/cooling balance further
downstream along the jet.
\section{Egret data}
The EGRET catalog (Hartman et al. 1999) includes 66 high-confidence
identifications of blazars (BL Lac objects, flat-spectrum radio quasars,
or unidentified flat-spectrum radio sources).
The data errors are rather large, and
all spectra of the 66 blazars, with a few exceptions, can be fitted with a simple power law at a satisfactory $\chi^2$. In order to illustrate the present accuracy of the observations, we show four of the seven spectra with the largest deviations from a power law (reduced $\chi^2 > 2$) in comparison with our simulated spectra, applying their measured redshifts (Fig.~\ref{fig:egret}).
In these cases our spectra give a better description than a power law. In the case of
3EG J0210--5055 the cutoff in the simulated curve is at a slightly smaller energy than indicated by the data, but this could be compensated for by a larger Lorentz factor of the jet.
Of course, none of these examples confirms the existence of the synchrotron turnover in the GeV range: the statistical significance is insufficient.
Additional evidence for the high-energy synchrotron maximum could be obtained from a comparison of the EGRET and TeV
spectra for those objects where both kinds of data exist. At present, there are
five blazars detected both by EGRET and by Cherenkov telescopes. Note that the observations in the EGRET and TeV ranges are not simultaneous.
1. PKS 2155--304. As we already mentioned in the Introduction this case has been studied by Costamante et al. (2007) as a possible example of the synchrotron maximum in the EGRET range.
2. Mkn 421. It has a comparatively hard spectrum in the EGRET range ($\beta \sim 1.6$) extending up to 10 GeV, which is inconsistent with the high-energy synchrotron component unless the Lorentz factor of the jet is very large, $\Gamma > 50$.
3. 1ES 1959+650. The TeV spectrum is published by Aharonian et al. (2003). The EGRET spectrum is soft ($\beta \sim 2.5$) with the GeV energy flux $ \varepsilon^2 {\rm d}N/{\rm d}\varepsilon \sim 10^{-11}$ TeV s$^{-1}$ cm$^{-2}$, while the TeV energy flux corrected for the infrared background absorption is slightly higher. This resembles the case of PKS 2155--304 and the intermediate curves in Fig.~\ref{fig:spec}.
4. 3C 279. In the EGRET range the spectrum has a smooth maximum at
1 GeV. The spectrum of TeV emission discovered by Teshima et al. (2007) has not been published yet.
5. BL Lacertae. The spectrum is very variable both in the EGRET and the TeV ranges (see Albert et al. 2007), and no clear conclusions can be made because of the lack of simultaneous observations.
The above consideration just illustrates the present observational status.
The data do not reject the hypothesis of the existence of the ultimate
synchrotron cutoff in some blazar spectra, but cannot confirm it either, because of the insufficient gamma-ray statistics. However, it is clear that even a moderate increase in the statistics will provide sufficient significance in cases like those presented in Fig. \ref{fig:egret}.
\begin{figure}
\begin{center}
\leavevmode \epsfxsize=7.0cm \epsfbox{fig3.eps}
\end{center}
\caption{ \label{fig:egret}
Examples of the EGRET spectra sampled from the seven spectra with the largest reduced $\chi^2$ of a power-law fit
resembling some two-component simulated curves from Fig. \ref{fig:spec}.
None of these examples gives a significant confirmation of the two-componentness because of the large data errors.
The simulated curves correspond to: run C1 at $z$=1.003 (3EG J0210--5055),
run C4 at $z$=0.902 (3EG J1733--1313),
run A2 at $z$=0.894 (3EG J0540--4402) and
run A2 at $z$=1.494 (3EG J1409--0745).
}
\end{figure}
\section{Conclusions and perspectives of observations}
Our simulations demonstrate that the ultimate synchrotron cutoff
in its clear form is a relatively rare phenomenon. In the framework of the photon breeding mechanism, the cutoff in the GeV range appears when the compactness of the external radiation is just above the threshold of the supercritical photon breeding. When the compactness
exceeds the threshold by an order of magnitude, the spectrum becomes almost flat or the synchrotron component moves to the softer range (see Fig. \ref{fig:spec}).
Another tendency, which can be derived from the results, is that the spectral sequence depends on the angular distribution of the external radiation. When the large-angle component of the external radiation in the emission region is relatively strong (e.g. $\sim 0.1$ of the direct disc radiation, as in model A), the spectra are smoother and the separation between the
synchrotron and Compton components is only weakly expressed. When the side-on radiation is weak, e.g. when it is supplied only by the periphery of the accretion disc, all features are much sharper.
The number of blazars with spectra demonstrating the clear ultimate cutoff should be relatively small. However, {\it Fermi} will be able to reveal this kind of spectrum, if it exists, because it will observe a large number of objects simultaneously.
A cutoff in the right place is by itself insufficient to conclude that we observe the ultimate synchrotron cutoff:
such a cutoff can also be caused by the photon-photon opacity in the same energy range (see SP08). To be sure that the observed cutoff has a synchrotron nature, one has to observe the continuation of the spectrum above the cutoff in the form of the
higher-energy Compton component extending to the TeV range. In principle, {\it Fermi} can
detect the low-energy part of the Compton maximum at $E \sim 100$ GeV and confirm the two-componentness in this way even without detection of the TeV
emission by large ground-based detectors.
Although the possibility of the extreme acceleration up to the energy $\gamma_{\rm m}$ in internal shocks is not evident, it is not yet ruled out. Do independent signatures of the photon breeding mechanism exist?
A possible one is the correlation between the disc brightness and the spectral shape like those in Fig. \ref{fig:spec}.
If the total power of the jet is known, then a high radiative efficiency, i.e. a ratio of the gamma-ray luminosity to the total jet power
of, e.g., more than 10 per cent, can give additional support to the photon breeding hypothesis and argue against the internal shock scenario. In the internal shock mechanism the luminosity budget is limited by
the {\it internal energy} of the jet, while in the case of the photon breeding a large fraction of the {\it total bulk energy} of the jet can
be converted into radiation.
Another possible clue is the compactness threshold: at a low compactness of the disc radiation, the photon breeding does not work (the threshold for model C is $L_{\rm d}/R_{15} \sim (1 - 10)\times 10^{42} \rm erg\ s^{-1}$). Usually it is
difficult to estimate the distance $R$ from the disc to the emission region; however, approximate constraints can sometimes be obtained using the time variability
or the mass of the black hole ($R$ cannot be smaller than a few $R_{\rm g}$).
In this way the photon breeding mechanism can be revealed statistically: sources with a low estimated compactness should be less efficient and have different spectra. When the photon breeding turns on, the luminosity should increase sharply. Therefore one can expect a kind of bimodality in the efficiency and probably in the high-energy spectra of blazars.
\section*{Acknowledgments}
The work is supported by the Russian Foundation for Basic Research
(N07-02-00629a) and the Presidium of Russian Academy
of Sciences (Scientific school 2469.2008.2 and ''Origin and
evolution of stars and galaxies'').
We thank Pavel Ivanov, Juri Poutanen and Stephanie Patoir for useful comments.
\section{Introduction}
\label{}
Geometric numerical integration of Hamiltonian systems has been a
central topic in numerical solution of differential equations since
the late 1980s
\cite{Feng95kfc,Fengq10sga,hairerlw06gni,Leimkuhlerr04shd,ruth83aci,sanzc94nhp}.
Hamiltonian systems can be written in the compact form
\begin{equation}\label{Hs}
\dot{\vec{z}}=J^{-1}\nabla H(\vec{z}), \quad
\vec{z}(t_0)=\vec{z}_0\in \mathbb{R}^{2d},
\end{equation}
where $J$ is a standard structure matrix and $H$ is the Hamiltonian
function. Symplecticity (Poincar\'{e} 1899) has been found to be
a characteristic property of Hamiltonian systems (see
\cite{hairerlw06gni}, page 185), and it is therefore natural to
construct numerical methods that share this geometric property. Such
special-purpose methods are naturally called
``symplectic''
\cite{Feng95kfc,hairerlw06gni,Leimkuhlerr04shd,ruth83aci,sanzc94nhp},
which means that the discrete numerical flow $\phi_h$ induced by
such an algorithm is a symplectic transformation, i.e., it satisfies
\begin{equation*}
\phi'_h(\vec{z})^TJ\phi'_h(\vec{z})=J,
\end{equation*}
where $\phi'_h$ denotes the Jacobian matrix of the numerical flow
$\phi_h$. Usually, especially for (near-)integrable systems,
symplectic methods exhibit many excellent numerical behaviors,
including linear error growth, long-time near-conservation of first
integrals, and existence of invariant tori
\cite{hairerlw06gni,Shang99kam}. Moreover, by backward error
analysis, the numerical flow of a symplectic method lies on the
trajectories of an interpolating Hamiltonian system, which implies
that it exactly preserves a modified Hamiltonian
\cite{Benetting94oth,Tang94feo}.
There exists a particularly important class of symplectic methods
called ``symplectic Runge-Kutta (RK) methods'', which were discovered
independently by three authors in 1988
\cite{sanz88rkm,suris88otc,lasagni88crk}. Afterwards, symplectic RK
methods were fully explored in the context of classic RK methods
(see, for example, \cite{sanz92srk,sun93coh,sun00asw}), and the
well-known $W$-transformation technique proposed by Hairer \& Wanner
\cite{hairerw96sod} was frequently used. More recently, however, RK
methods have been creatively extended to RK methods with continuous
stage \cite{butcher72ato,butcher87tna,hairer10epv}, and thus
symplectic RK-type methods gained a new growth point
\cite{tangs12ana,Tangs12tfe,Tang13tfe,Tangs14cor,Tanglx16cos,Tangsc17dgm,
Tang18ano,Tangz18spc,Tang18csr}.
In this paper, we further develop symplectic RK-type methods within
the newly-developed framework of continuous-stage RK methods. The
use of Chebyshev polynomials enables us to obtain a rich family of
Chebyshev symplectic methods. It should be noted that the
$W$-transformation is closely related to Legendre polynomials, while
our approach can be applied with any other weighted orthogonal
polynomials (although Chebyshev polynomials are the main ingredient
of the constructions in this paper). In this respect,
our approach to constructing symplectic methods is rather different
from the $W$-transformation technique used previously.
This paper is organized as follows. In the next section, we
briefly revisit some newly-developed theoretical results for
constructing general-purpose RK-type methods. Section 3 is
devoted to the construction of Chebyshev RK-type methods with the
symplecticity-preserving property. Some numerical tests are given in
Section 4. Finally, we conclude the paper.
\section{Theory of continuous-stage RK methods}
We are concerned with the following initial value problem
\begin{equation}\label{eq:ode}
\dot{\vec{z}}=\vec{f}(t,\vec{z}),\quad \vec{z}(t_0)=\vec{z}_0\in
\mathbb{R}^d,
\end{equation}
with $\vec{f}$ being sufficiently differentiable.
\begin{defi}\cite{hairer10epv}\label{defi1}
Let $A_{\tau,\, \sigma}$ be a function of two variables $\tau$,
$\sigma$ $\in [0, 1]$, and $B_\tau$, $C_\tau$ be functions of
$\tau\in [0, 1]$. The one-step method $\Phi_h: \vec{z}_0 \mapsto
\vec{z}_{1}$ given by
\begin{equation}\label{crk}
\begin{split}
&\vec{Z}_\tau=\vec{z}_0+h\int_0^{1}A_{\tau,\,\sigma}\vec{f}(t_0+C_\sigma
h,\vec{Z}_\sigma)\,\mathrm{d}
\sigma,\;\tau\in[0,\,1],\\
&\vec{z}_{1}=\vec{z}_0+h\int_0^{1}B_\tau\vec{f}(t_0+C_\tau
h,\vec{Z}_\tau)\,\mathrm{d}\tau,
\end{split}
\end{equation}
is called a \emph{continuous-stage Runge-Kutta} (csRK) method, where
$\vec{Z}_\tau\approx \vec{z}(t_0+C_\tau h).$ Here, we always assume
\begin{equation}\label{consis}
C_\tau=\int_0^1A_{\tau,\,\sigma}\,\mathrm{d}\sigma,
\end{equation}
and often use a triple $(A_{\tau,\, \sigma},\,B_\tau,\,C_\tau)$ to
represent such a method. In this paper, we adopt the
following assumption almost everywhere, as previously done in
\cite{hairer10epv,Tangs14cor,Tang18ano,Tang18csr}:
\begin{equation}\label{C_assum}
C_\tau\equiv\tau, \quad\tau\in [0, 1].
\end{equation}
\end{defi}
We introduce the following \emph{simplifying assumptions} proposed
by Hairer in \cite{hairer10epv}
\begin{equation}\label{csRK_sim_assum}
\begin{split}
&\breve{B}(\xi):\quad \int_0^1B_\tau C_\tau^{\kappa-1}\,\mathrm{d}
\tau=\frac{1}{\kappa},\quad \kappa=1,\ldots,\xi,\\
&\breve{C}(\eta):\quad
\int_0^1A_{\tau,\,\sigma}C_\sigma^{\kappa-1}\,\mathrm{d}
\sigma=\frac{1}{\kappa}C_\tau^{\kappa},\quad \kappa=1,\ldots,\eta,\\
&\breve{D}(\zeta):\quad \int_0^1B_\tau C_\tau^{\kappa-1}
A_{\tau,\,\sigma}\,\mathrm{d}
\tau=\frac{1}{\kappa}B_\sigma(1-C_\sigma^{\kappa}),\quad
\kappa=1,\ldots,\zeta.
\end{split}
\end{equation}
The following result, which is a counterpart of the classic result by
Butcher in 1964 \cite{butcher64ipa}, is useful for analyzing the order
of csRK methods.
\begin{thm}\cite{hairer10epv,Tangs14cor}\label{crk:order}
If the coefficients $(A_{\tau,\,\sigma},\,B_\tau,\,C_\tau)$ of
method \eqref{crk} satisfy $\breve{B}(\xi)$, $\breve{C}(\eta)$ and
$\breve{D}(\zeta)$, then the method is of order at least $\min
(\xi,2\eta+2,\,\eta+\zeta+1)$.
\end{thm}
\begin{lem}\cite{Tang18csr}\label{lem_assum}
Under the assumption \eqref{C_assum}, the simplifying assumptions
$\breve{B}(\xi), \breve{C}(\eta)$ and $\breve{D}(\zeta)$ are
equivalent to
\begin{align}\label{eq:cd1}
&\breve{B}(\xi):\quad \int_0^1B_\tau \phi(\tau)\,\mathrm{d}
\tau=\int_0^1\phi(x)\,\mathrm{d} x,\quad \text{for}\; \forall\, \phi\;
\text{with}\;\emph{deg}(\phi)\leq\xi-1,\\\label{eq:cd2}
&\breve{C}(\eta):\quad \int_0^1A_{\tau,\,\sigma} \phi(\sigma)\,\mathrm{d}
\sigma=\int_0^\tau \phi(x)\,\mathrm{d} x,\quad \text{for}\; \forall\,
\phi\; \text{with}\; \emph{deg}(\phi)\leq\eta-1,\\\label{eq:cd3}
&\breve{D}(\zeta):\quad \int_0^1B_\tau
A_{\tau,\,\sigma}\phi(\tau)\,\mathrm{d}
\tau=B_\sigma\int_\sigma^1\phi(x)\,\mathrm{d} x,\quad \text{for}\;
\forall\, \phi\; \text{with}\; \emph{deg}(\phi)\leq\zeta-1,
\end{align}
where $\emph{deg}(\phi)$ stands for the degree of polynomial
function $\phi$.
\end{lem}
The concept of a weight function, which can be found in almost every
textbook of numerical analysis (see, for example, \cite{sulim03ait}),
is rather important for our later discussions.
\begin{defi}\label{weight_func}
A non-negative function $w(x)$ is called a \emph{weight function} on
$[a, b]$, if it satisfies the following two conditions:
\begin{itemize}
\item[(a)] The $k$-th moment $\int_a^bx^k w(x)\,\mathrm{d} x, \;k\in\mathbb{N}$ exists;
\item[(b)] For $\forall\,u(x)\geq0$, $\int_a^bu(x)w(x)\,\mathrm{d}
x=0\;\Longrightarrow\;u(x)\equiv0$.
\end{itemize}
\end{defi}
It is known that for a given weight function $w(x)$, there exists a
sequence of orthogonal polynomials in the \emph{weighted function
space} (Hilbert space) \cite{Szeg85op}
\begin{equation*}
L^2_w[a,b]=\{u \text{ is measurable on}\,
[a,b]:\;\int_a^b|u(x)|^2w(x)\,\mathrm{d} x<+\infty\}
\end{equation*}
with respect to the inner product
\begin{equation*}
(u,v)_w=\int_a^bu(x)v(x)w(x)\,\mathrm{d} x.
\end{equation*}
In what follows, we denote the orthogonal polynomials by
$\{P_n(x)\}_{n=0}^\infty$ and assume they have been normalized in
$[a,b]$, i.e.,
\begin{equation*}
(P_i,P_j)_w=\delta_{ij},\;\;i, j=0,1,2,\cdots.
\end{equation*}
It is worth recalling that these polynomials form a complete
orthogonal set in the Hilbert space $L^2_w[a,b]$ and that the degree-$n$
polynomial $P_n(x)$ has exactly $n$ simple real zeros in the open
interval $(a,b)$.
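As a familiar example, for the constant weight $w(x)\equiv 1$ on $[0,1]$ the normalized orthogonal polynomials are the shifted Legendre polynomials
\begin{equation*}
P_0(x)=1,\quad P_1(x)=\sqrt{3}\,(2x-1),\quad P_2(x)=\sqrt{5}\,(6x^2-6x+1),\;\ldots,
\end{equation*}
while the two Chebyshev weights used later lead to the polynomials $T_n$ and $U_n$ recalled there.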
Assume $A_{\tau,\sigma}$ and $B_{\tau}$ have the following
decompositions
\begin{equation*}
A_{\tau,\, \sigma}=\widehat{A}_{\tau,\sigma}
w(\sigma),\;\;B_{\tau}=\widehat{B}_{\tau}w(\tau),
\end{equation*}
where $w$ is a weight function defined on $[0, 1]$, and then the
csRK method \eqref{crk} can be written as
\begin{equation}\label{wcrk}
\begin{split}
&\vec{Z}_\tau=\vec{z}_0+h\int_0^{1}\widehat{A}_{\tau,\,\sigma}
w(\sigma)\vec{f}(t_0+\sigma h,\vec{Z}_\sigma)\,\mathrm{d}
\sigma,\;\tau\in[0,\,1],\\
&\vec{z}_{1}=\vec{z}_0+h\int_0^{1}\widehat{B}_\tau
w(\tau)\vec{f}(t_0+\tau h,\vec{Z}_\tau)\,\mathrm{d}\tau.
\end{split}
\end{equation}
\begin{thm}\cite{Tang18csr}\label{ordcon_var}
Suppose\footnote{We use the notation $A_{\ast,\,\sigma}$ to stand
for the one-variable function in terms of $\sigma$, and
$A_{\tau,\,\ast},\,\widehat{A}_{\ast,\,\sigma}$ can be understood
likewise.}
$\widehat{B}_{\tau},\,\,\widehat{A}_{\ast,\,\sigma},\,\,(\widehat{B}_\tau\,A_{\tau,\,\ast})\in
L^2_w[0,1]$, then, under the assumption \eqref{C_assum} we have
\begin{itemize}
\item[\emph{(a)}] $\breve{B}(\xi)$ holds $\Longleftrightarrow$ $B_\tau$ has
the following form in terms of the normalized orthogonal polynomials
in $L^2_w[0,1]$:
\begin{equation}\label{Bt}
B_\tau=\Big(\sum\limits_{j=0}^{\xi-1}\int_0^1P_j(x)\,\mathrm{d} x
P_j(\tau)+\sum\limits_{j\geq\xi}\lambda_j P_j(\tau)\Big)w(\tau),
\end{equation}
where $\lambda_j$ are any real parameters;
\item[\emph{(b)}] $\breve{C}(\eta)$ holds $\Longleftrightarrow$ $A_{\tau,\,\sigma}$ has
the following form in terms of the normalized orthogonal polynomials
in $L^2_w[0,1]$:
\begin{equation}\label{Ats}
A_{\tau,\,\sigma}=\Big(\sum\limits_{j=0}^{\eta-1}\int_0^{\tau}P_j(x)
\,\mathrm{d} x P_j(\sigma)+\sum\limits_{j\geq\eta}\varphi_j(\tau)
P_j(\sigma)\Big)w(\sigma),
\end{equation}
where $\varphi_j(\tau)$ are any real functions;
\item[\emph{(c)}] $\breve{D}(\zeta)$ holds $\Longleftrightarrow$ $B_\tau A_{\tau,\,\sigma}$ has
the following form in terms of the normalized orthogonal polynomials
in $L^2_w[0,1]$:
\begin{equation}\label{BtAts}
B_\tau\,A_{\tau,\,\sigma}=\Big(\sum\limits_{j=0}^{\zeta-1}B_\sigma\int_\sigma^1P_j(x)
\,\mathrm{d} x P_j(\tau)+\sum\limits_{j\geq\zeta}\psi_j(\sigma)
P_j(\tau)\Big)w(\tau),
\end{equation}
where $\psi_j(\sigma)$ are any real functions.
\end{itemize}
\end{thm}
For simplicity and practical application, we have to truncate the
series \eqref{Bt} and \eqref{Ats} suitably according to our needs.
Consequently, only the polynomial case of
$\widehat{A}_{\tau,\sigma}$ and $\widehat{B}_{\tau}$ needs to be
considered. Besides, it is generally impossible to compute the
integrals of a csRK scheme exactly (except when $\vec{f}$ is a
polynomial vector field), so we have to approximate them with an
$s$-point \emph{weighted interpolatory quadrature formula}
\begin{equation}\label{wquad}
\int_0^1\Phi(x)w(x)\,\mathrm{d} x\approx\sum\limits_{i=1}^sb_i
\Phi(c_i),\;\;c_i\in[0,1],
\end{equation}
where
\begin{equation*}
b_i=\int_0^1\ell_i(x)w(x)\,\mathrm{d}
x,\;\;\ell_i(x)=\prod\limits_{j=1,j\neq
i}^s\frac{x-c_j}{c_i-c_j},\;\;i=1,\cdots,s.
\end{equation*}
Here, we remark that for the simplest case $s=1$, we define
$\ell_1(x)=x/c_1$.
Thus, applying the quadrature rule \eqref{wquad} to the weighted
csRK method \eqref{wcrk} leads to a traditional $s$-stage RK
method
\begin{equation}\label{crk:quad}
\begin{split}
&\widehat{\vec{Z}}_i=\vec{z}_0+h\sum_{j=1}^sb_j
\widehat{A}_{c_i,\,c_j}\vec{f}(t_0+c_{j}h,
\widehat{\vec{Z}}_j),\quad i=1,\cdots,s,\\
&\vec{z}_{1}=\vec{z}_0+h\sum_{i=1}^sb_{i}\widehat{B}_{c_i}
\vec{f}(t_0+c_{i}h,\widehat{\vec{Z}}_i),
\end{split}
\end{equation}
where $\widehat{\vec{Z}}_i\approx\vec{Z}_{c_i}$. After that, we can
use the following result to determine the order of the resulting RK
methods.
\begin{thm}\cite{Tang18csr}\label{qua:csRK}
Assume the underlying quadrature formula \eqref{wquad} is of order
$p$, and $\widehat{A}_{\tau,\,\sigma}$ is of degree $\pi_A^\tau$
with respect to $\tau$ and of degree $\pi_A^{\sigma}$ with respect
to $\sigma$, and $\widehat{B}_{\tau}$ is of degree $\pi_B^\tau$. If
all the simplifying assumptions $\breve{B}(\xi)$, $\breve{C}(\eta)$
and $\breve{D}(\zeta)$ in \eqref{csRK_sim_assum} are fulfilled,
then the standard RK method \eqref{crk:quad} is at least of order
$$\min(\rho, 2\alpha+2, \alpha+\beta+1),$$
where $\rho=\min(\xi, p-\pi_B^{\tau})$, $\alpha=\min(\eta,
p-\pi_A^{\sigma})$ and $\beta=\min(\zeta,
p-\pi_A^{\tau}-\pi_B^{\tau})$.
\end{thm}
\begin{proof}
Please refer to \cite{Tang18csr} for the details of the proof.
\end{proof}
Next, we introduce the following optimal quadrature technique of
``Gauss-Christoffel type'' for practical use, though other suboptimal
quadrature rules can also be considered
\cite{Abramowitzs65hom,sulim03ait}.
\begin{thm}\label{GCqua}
If $c_1, c_2,\cdots, c_s$ are chosen as the $s$ distinct zeros of
the normalized orthogonal polynomial $P_s(x)$ of degree $s$ in
$L^2_w[0,1]$, then the interpolatory quadrature formula
\eqref{wquad} is exact for polynomials of degree $2s-1$, i.e., of
the optimal order $p=2s$. If $\Phi\in C^{2s}$, then it has the
following error estimate
\begin{equation*}
\int_0^1\Phi(x)w(x)\,\mathrm{d} x-\sum\limits_{i=1}^sb_i
\Phi(c_i)=\frac{\Phi^{(2s)}(\xi)}{(2s)!\mu^2_s},
\end{equation*}
for some $\xi\in[0,1]$, where $\mu_s$ is the leading coefficient of
$P_s(x)$.
\end{thm}
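For example, for the constant weight $w(x)\equiv 1$ on $[0,1]$ and $s=2$, the nodes are the zeros of the shifted Legendre polynomial of degree two, $c_{1,2}=\frac{1}{2}\mp\frac{\sqrt{3}}{6}$, with weights $b_1=b_2=\frac{1}{2}$; this is the classical two-point Gauss-Legendre rule of order $4$.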
\section{Construction of Chebyshev symplectic methods}
It is known that Chebyshev polynomials, as a special class of Jacobi
polynomials, are frequently used in various fields, especially in the
study of spectral methods (see \cite{Gottliebo77nao,Guo98sma} and
references therein). In particular, zeros of Chebyshev polynomials of
the first kind are often used in polynomial interpolation because
the resulting interpolation polynomial minimizes the effect of
Runge's phenomenon. Unfortunately, as far as we know, there are
few Chebyshev symplectic methods available in the scientific
literature, except for two methods given in \cite{Tang18csr}. For
this reason, we are interested in this subject and develop such
methods based on the previous work
\cite{Tang18csr}.
The construction of symplectic methods mainly relies on the
following results (please refer to \cite{Tang18ano,Tang18csr} for
more information).
\begin{thm}\cite{Tang18ano}\label{symcond1}
If the coefficients of a csRK method \eqref{crk} satisfy
\begin{equation}\label{symcond}
B_\tau A_{\tau,\sigma}+B_\sigma A_{\sigma,\tau}\equiv B_\tau
B_\sigma,\;\; \tau,\,\sigma \in [0,1],
\end{equation}
then it is symplectic. In addition, the RK scheme with coefficients
$(b_jA_{c_i,c_j}, b_iB_{c_i}, c_i)^s_{i=1}$ (derived by using
quadrature formula, c.f., \eqref{crk:quad}) based on the underlying
symplectic csRK method with coefficients satisfying \eqref{symcond}
is always symplectic.
\end{thm}
\begin{thm}\cite{Tang18csr}\label{sym_design}
Under the assumption \eqref{C_assum}, for a symplectic csRK method
with coefficients satisfying \eqref{symcond}, we have the following
statements:
\begin{itemize}
\item[\emph{(a)}] $\breve{B}(\xi)$ and $\breve{C}(\eta)$
$\Longrightarrow$ $\breve{D}(\zeta)$, where
$\zeta=\min\{\xi,\,\eta\}$;
\item[\emph{(b)}] $\breve{B}(\xi)$ and $\breve{D}(\zeta)$
$\Longrightarrow$ $\breve{C}(\eta)$, where
$\eta=\min\{\xi,\,\zeta\}$.
\end{itemize}
\end{thm}
\begin{thm}\cite{Tang18csr}\label{symcond3}
Suppose that $A_{\tau,\sigma}/B_\sigma\in L_w^2([0,1]\times[0,1])$;
then the symplectic condition \eqref{symcond} is equivalent to the fact
that $A_{\tau,\sigma}$ has the following form in terms of the
orthogonal polynomials $P_n(x)$ in $L_w^2[0,1]$
\begin{equation}\label{sympconvari}
A_{\tau,\sigma}=B_\sigma\Big(\frac{1}{2}+\sum_{0<i+j\in
\mathbb{Z}}\alpha_{(i,j)}
P_i(\tau)P_j(\sigma)\Big),\quad\alpha_{(i,j)}\in \mathbb{R},
\end{equation}
where $\alpha_{(i,j)}$ is skew-symmetric, i.e.,
$\alpha_{(i,j)}=-\alpha_{(j,i)},\,i+j>0$.
\end{thm}
By virtue of these theorems and the relevant results given in the
previous section, we can introduce the following \emph{procedure}
for constructing symplectic csRK methods\footnote{Then, symplectic
RK methods can be obtained easily by using any quadrature rule, as
revealed by Theorem
\ref{symcond1}.} \cite{Tang18csr}:\\
\textbf{Step 1.} Make an ansatz for $B_\tau$ which satisfies
$\breve{B}(\xi)$ with $\xi\geq1$ according to \eqref{Bt}, where a
finite number of the $\lambda_j$ may be kept as parameters;\\
\textbf{Step 2.} Suppose $A_{\tau,\,\sigma}$ is in the form
(according to Theorem \ref{symcond3})
\begin{equation*}
A_{\tau,\sigma}=B_\sigma\Big(\frac{1}{2}+\sum_{0<i+j\in
\mathbb{Z}}\alpha_{(i,j)}
P_i(\tau)P_j(\sigma)\Big),\quad\alpha_{(i,j)}=-\alpha_{(j,i)},
\end{equation*}
where finitely many $\alpha_{(i,j)}$ are kept as parameters,
and then substitute $A_{\tau,\,\sigma}$ into $\breve{C}(\eta)$ (see
\eqref{eq:cd2}; usually we let $\eta<\xi$):
\begin{equation*}
\int_0^1A_{\tau,\,\sigma} P_k(\sigma)\,\mathrm{d} \sigma=\int_0^\tau
P_k(x)\,\mathrm{d} x,\;\;k=0,1,\cdots,\eta-1,
\end{equation*}
in order to determine $\alpha_{(i,j)}$;\\
\textbf{Step 3.} Write down $B_\tau$ and $A_{\tau,\,\sigma}$
(which satisfy $\breve{B}(\xi)$ and $\breve{C}(\eta)$ automatically);
this results in a symplectic method of order at least
$\min\{\xi,\,2\eta+2,\,\eta+\zeta+1\}$ with
$\zeta=\min\{\xi,\,\eta\}$ by Theorems \ref{crk:order} and
\ref{sym_design}.
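To make the procedure concrete, the following small \texttt{sympy} sketch (our own illustration; it uses the constant weight $w(x)\equiv 1$ on $[0,1]$, i.e. the normalized shifted Legendre polynomials, rather than the Chebyshev weights treated below) carries out Steps 1--3 with $\xi=2$, $\eta=1$, $\rho=1$. It recovers the coefficient $A_{\tau,\sigma}=\tau-\sigma+\frac{1}{2}$, whose $1$-point Gauss discretization ($c_1=\frac{1}{2}$, $b_1=1$) is the implicit midpoint rule.
\begin{verbatim}
# Minimal sketch of Steps 1-3 (illustration only): weight w(x) = 1 on [0,1],
# i.e. normalized shifted Legendre polynomials, with xi = 2, eta = 1, rho = 1.
import sympy as sp

tau, sigma, alpha, x = sp.symbols('tau sigma alpha x')

def P(j, t):
    # normalized orthogonal polynomials in L^2_w[0,1] for w = 1:
    # P_0(t) = 1,  P_1(t) = sqrt(3)*(2t - 1)
    return [sp.Integer(1), sp.sqrt(3)*(2*t - 1)][j]

# Step 1: B_tau = sum_{j < xi} (int_0^1 P_j(x) dx) P_j(tau) * w(tau)
B_tau = sum(sp.integrate(P(j, x), (x, 0, 1))*P(j, tau) for j in range(2))
B_sigma = B_tau.subs(tau, sigma)

# Step 2: symplectic ansatz with one parameter alpha = alpha_(1,0) = -alpha_(0,1)
A = B_sigma*(sp.Rational(1, 2)
             + alpha*(P(1, tau)*P(0, sigma) - P(0, tau)*P(1, sigma)))

# Impose C(1): int_0^1 A dsigma = tau identically in tau
residual = sp.expand(sp.integrate(A, (sigma, 0, 1)) - tau)
a0 = sp.solve(residual.coeff(tau, 1), alpha)[0]    # coefficient of tau must vanish
assert sp.simplify(residual.subs(alpha, a0)) == 0  # then the whole residual vanishes

# Step 3: the resulting coefficients
print(a0)                            # sqrt(3)/6
print(sp.expand(A.subs(alpha, a0)))  # tau - sigma + 1/2
\end{verbatim}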
However, the procedure above only provides a general framework for
establishing symplectic methods. For simplicity and practical use,
it needs to be further refined or particularized. Actually, in view of
Theorems \ref{qua:csRK} and \ref{sym_design}, it is suggested to
design Butcher coefficients with low-degree
$\widehat{A}_{\tau,\,\sigma}$ and $\widehat{B}_\tau$, and it is
better to take $\eta\approx\frac{1}{2}\xi$. Besides, in order to
conveniently compute the integrals in $\breve{C}(\eta)$ in
the second step, the following ansatz may be advisable (with
$C_\tau$ given by \eqref{C_assum}, $\rho\geq\eta$ and
$\xi\geq2\eta$)
\begin{equation}\label{symBA}
B_\tau=\sum\limits_{j=0}^{\xi-1}\int_0^1P_j(x)\,\mathrm{d} x
P_j(\tau)w(\tau),\qquad
A_{\tau,\sigma}=B_\sigma\Big(\frac{1}{2}+\sum_{0<i+j\in
\mathbb{Z}\atop i\leq\rho,\,j\leq \xi-\eta}\alpha_{(i,j)}
P_i(\tau)P_j(\sigma)\Big),
\end{equation}
where $\alpha_{(i,j)}=-\alpha_{(j,i)}$. Because the index $j$ is
restricted by $j\leq \xi-\eta$ in the second formula of
\eqref{symBA}, we can use $\breve{B}(\xi)$ to arrive at (cf.
\eqref{eq:cd1})
\begin{equation*}
\begin{split}
&\int_0^1A_{\tau,\,\sigma} P_k(\sigma)\,\mathrm{d}
\sigma=\int_0^1B_\sigma\Big(\frac{1}{2}+\sum_{0<i+j\in
\mathbb{Z}\atop i\leq\rho,\,j\leq \xi-\eta}\alpha_{(i,j)}
P_i(\tau)P_j(\sigma)\Big)P_k(\sigma)\,\mathrm{d} \sigma\\
&=\frac{1}{2}\int_0^1 P_k(x)\,\mathrm{d} x+\sum_{0<i+j\in \mathbb{Z}\atop
i\leq\rho,\,j\leq \xi-\eta}\alpha_{(i,j)}
P_i(\tau)\int_0^1P_j(\sigma)P_k(\sigma)\,\mathrm{d} \sigma,\;\;0\leq
k\leq\eta-1.
\end{split}
\end{equation*}
Therefore, $\breve{C}(\eta)$ implies that
\begin{equation}\label{symp_eqs}
\frac{1}{2}\int_0^1 P_k(x)\,\mathrm{d} x+\sum_{0<i+j\in \mathbb{Z}\atop
i\leq\rho,\,j\leq \xi-\eta}\alpha_{(i,j)}
P_i(\tau)\int_0^1P_j(\sigma)P_k(\sigma)\,\mathrm{d} \sigma=\int_0^\tau
P_k(x)\,\mathrm{d} x,\;\;0\leq k\leq\eta-1.
\end{equation}
Finally, one needs to determine the $\alpha_{(i,j)}$ by transposing,
comparing and merging similar terms in \eqref{symp_eqs}, after the
polynomial on the right-hand side has been represented in the basis
$\{P_j(x)\}_{j=0}^\infty$. In view of the skew-symmetry of
$\alpha_{(i,j)}$, if we let $r=\min\{\rho,\xi-\eta\}$, then the number
of degrees of freedom of these parameters is actually $r(r+1)/2$,
noticing that
\begin{equation*}
\alpha_{(i,i)}=0,\;i\geq1\;\;\text{and}\;\;\alpha_{(i,j)}=0,
\;\;\text{for}\;i>r\;\text{or}\;j>r.
\end{equation*}
When $r(r+1)/2\gg(r+1)\eta$ (the number of equations), i.e.,
$r\gg2\eta$, we can appropriately reduce the number of free
parameters by setting some of them to zero in pairs, if
needed.
\subsection{Chebyshev symplectic methods of the first kind}
Firstly, let us consider using the following shifted normalized
Chebyshev polynomials of the first kind denoted by $T_n(x)$, i.e.,
\begin{equation*}
T_0(x)=\frac{\sqrt{2}}{\sqrt{\pi}},\;\;T_n(x)=\frac{2\cos
\big(n\arccos(2x-1)\big)}{\sqrt{\pi}},\;n\geq1.
\end{equation*}
It is known that these Chebyshev polynomials have the following
properties:
\begin{equation}\label{Cheby_first_proper}
\begin{split}
&\int_0^1T_k(t)\,\mathrm{d} t=\left\{
\begin{array}{lll}
0, & \hbox{\text{if}\;$k$\;\text{is}\;\text{odd}},\\[3pt]
\frac{2}{\sqrt{\pi}(1-k^2)}, & \hbox{\text{if}\;$k$\;
\text{is}\;\text{even}},\\[3pt]
\frac{\sqrt{2}}{\sqrt{\pi}}, & \hbox{\text{if}\;$k=0$},
\end{array}
\right.\\
&\int_0^xT_k(t)\,\mathrm{d} t=\left\{
\begin{array}{lll}
\frac{T_{k+1}(x)}{4(k+1)}-\frac{T_{k-1}(x)}{4(k-1)}
+\frac{(-1)^{k+1}}{(k^2-1)\sqrt{\pi}}, & \hbox{\text{if}\;$k\geq 2$},\\
\frac{T_2(x)-2/\sqrt{\pi}}{8}, & \hbox{\text{if}\;$k=1$},\\
\frac{\sqrt{2}T_1(x)}{4}+\frac{1}{\sqrt{2\pi}}, &
\hbox{\text{if}\;$k=0$},
\end{array}
\right.\\
&\int_0^1T_j(t)T_k(t)\,\mathrm{d} t=\left\{
\begin{array}{lll}
\frac{1}{\sqrt{\pi}}\int_0^1T_{j+k}(t)+T_{j-k}(t)\,\mathrm{d}
t, & \hbox{\text{if}\;$j,\,k\geq 1,\;j> k$},\\[3pt]
\frac{1}{\sqrt{\pi}}\int_0^1T_{j+k}(t)\,\mathrm{d}
t+\frac{2}{\pi}, & \hbox{\text{if}\;$j,\,k\geq 1,\;j=k$},\\[3pt]
\frac{\sqrt{2}}{\sqrt{\pi}}\int_0^1T_j(t)\,\mathrm{d} t, &
\hbox{\text{if}\;$j\geq0,\;k=0$},
\end{array}
\right.
\end{split}
\end{equation}
Notice that the properties given in \eqref{Cheby_first_proper} are
helpful for computing the integrals\footnote{Of course, we can
alternatively use a symbolic computing tool (e.g., Mathematica, Maple,
Maxima, etc.) to treat these integrals.} in
\eqref{symp_eqs}, hence we can conveniently construct Chebyshev
symplectic methods of the first kind. Next, we give some examples,
in which the following shifted Gauss-Christoffel-Chebyshev(I) quadrature
rule will be used \cite{Abramowitzs65hom}:
\begin{equation}\label{wquadI}
\int_0^1\Phi(x)w(x)\,\mathrm{d} x\approx\sum\limits_{i=1}^sb_i
\Phi(c_i),\;\;c_i\in[0,1],
\end{equation}
where
\begin{equation*}
w(x)=\frac{1}{2\sqrt{x-x^2}},\;b_i=\frac{\pi}{2s},\;
c_i=\frac{1+\cos(\frac{2i-1}{2s}\pi)}{2},\;\;i=1,\cdots,s,
\end{equation*}
with $c_i$ being the zeros of Chebyshev polynomial $T_s(x)$.
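For instance, for $s=3$ the rule \eqref{wquadI} has the nodes and weights
\begin{equation*}
c_{1,2,3}=\frac{2+\sqrt{3}}{4},\ \frac{1}{2},\ \frac{2-\sqrt{3}}{4},\qquad b_1=b_2=b_3=\frac{\pi}{6},
\end{equation*}
and these abscissae coincide with the ones appearing in Tab. \ref{exa1:sympRK1} below.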
\begin{exa}
With the orthogonal polynomials $P_j(x)$ in \eqref{symBA} replaced
by $T_j(x)$, we consider the following three cases separately,
\begin{description}
\item[(i)] Let $\xi=2,\,\eta=1,\,\rho=1$; then we have only one degree of freedom.
After some elementary calculations, we obtain the unique solution
\begin{equation*}
\alpha_{(0,1)}=-\alpha_{(1,0)}=-\frac{\sqrt{2}\pi}{8},
\end{equation*}
which results in a symplectic method of order $2$. By using the
$1$-point Gauss-Christoffel-Chebyshev(I) quadrature rule we regain
the well-known implicit midpoint rule;
\item[(ii)] Let $\xi=3,\,\eta=1,\,\rho=2$; this leads to
\begin{equation*}
\alpha_{(1,0)}=\frac{\sqrt{2}}{3}\alpha_{(1,2)}+\frac{\sqrt{2}}{8}\pi,
\;\;\;\alpha_{(0,2)}=-\alpha_{(2,0)}=0.
\end{equation*}
If we let $\mu=\alpha_{(1,2)}=-\alpha_{(2,1)}$ be a free parameter,
then we get a one-parameter family (parametrized by $\mu$) of
symplectic csRK methods of
order $\geq3$. Actually, it is easy to verify that the resulting
methods are also symmetric\footnote{See Theorem 4.6 in
\cite{Tang18csr}.} and thus they possess the even order $4$. By using
the $3$-point Gauss-Christoffel-Chebyshev(I) quadrature rule we get
a family of $3$-stage $4$-order symplectic RK methods which are
shown in Tab. \ref{exa1:sympRK1}, with
$\gamma:=\frac{4\sqrt{3}\mu}{27\pi}$. We find that this class of
methods is exactly the same as the one given in \cite{Tang18csr}.
\item[(iii)] If we take $\xi=5,\,\eta=2,\,\rho=2$, then it gives a
unique solution
\begin{equation*}
\alpha_{(0,1)}=-\alpha_{(1,0)}=-\frac{3\sqrt{2}}{32}\pi,\;\;\alpha_{(1,2)}=
-\alpha_{(2,1)}=-\frac{3\pi}{32},\;\;\alpha_{(0,2)}=-\alpha_{(2,0)}=0.
\end{equation*}
The resulting symplectic csRK method is symmetric and of order $6$.
By using the $5$-point Gauss-Christoffel-Chebyshev(I) quadrature
rule we get a $5$-stage $6$-order symplectic RK method which is
shown numerically (the exact Butcher tableau is too complicated to
be exhibited) in Tab. \ref{exa1:sympRK2}. It has been tested that this
method satisfies the classic symplectic condition (i.e., the stability
matrix $M=0$ \cite{sanz88rkm}) and the order conditions (from order $1$
to order $6$) up to machine precision.
\end{description}
\end{exa}
\begin{table}
\[\begin{array}{c|ccc} \frac{2-\sqrt{3}}{4} & \frac{1}{9}
& \frac{10-5\sqrt{3}}{36}+5\gamma& \frac{1-\sqrt{3}}{9}-5\gamma\\[2pt]
\frac{1}{2} &\frac{2+\sqrt{3}}{18}-2\gamma
&\frac{5}{18}&\frac{2-\sqrt{3}}{18}+2\gamma\\[2pt]
\frac{2+\sqrt{3}}{4} & \frac{1+\sqrt{3}}{9}+5\gamma&
\frac{10+5\sqrt{3}}{36}-5\gamma & \frac{1}{9}\\[2pt]
\hline & \frac{2}{9} & \frac{5}{9} & \frac{2}{9} \end{array}\] \caption{A
family of one-parameter $3$-stage $4$-order symplectic RK methods,
based on Chebyshev polynomials of the first
kind.}\label{exa1:sympRK1}
\end{table}
\begin{table}
\tiny{\[\begin{array}{c|ccccc} 0.97552825814758 & 0.04194530711667 &
0.24300466547350 &
0.37207633208122&0.26512850280807 & 0.05337345066811\\[2pt]
0.79389262614624 &0.00631196709497 & 0.13138802621666 & 0.28852394136060
& 0.28302706319253 & 0.08464162828148\\[2pt]
0.50000000000000 & -0.01789322937530 & 0.01554611000971 & 0.15333333333333
& 0.24722994242362 & 0.10178384360864\\[2pt]
0.20610737385376& -0.00075101404814 & -0.02025101075920 &
0.01814272530606 & 0.13138802621666 & 0.07757864713837\\[2pt]
0.02447174185242& 0.03051716356523 & -0.00235245037475 &
-0.06540966541455 & 0.01977138695982 & 0.04194530711667\\[2pt]
\hline & 0.08389061423334 & 0.26277605243332 & 0.30666666666667&
0.26277605243332 & 0.08389061423334 \end{array}\]} \caption{A $5$-stage
$6$-order symplectic RK method, based on Chebyshev polynomials of
the first kind.}\label{exa1:sympRK2}
\end{table}
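As an independent cross-check (ours, not part of the original text), the following short Python snippet verifies numerically that the tableau of Tab. \ref{exa1:sympRK1} satisfies the symplecticity condition $b_ia_{ij}+b_ja_{ji}-b_ib_j=0$ for arbitrary values of the parameter $\gamma$, together with the consistency conditions $\sum_j a_{ij}=c_i$, $\sum_i b_i=1$ and $\sum_i b_ic_i=\frac{1}{2}$.
\begin{verbatim}
# Check (illustration only): the 3-stage Chebyshev(I) tableau is symplectic
# for every value of the free parameter gamma.
import numpy as np

s3 = np.sqrt(3.0)
b = np.array([2/9, 5/9, 2/9])
c = np.array([(2 - s3)/4, 1/2, (2 + s3)/4])
for gamma in (0.0, 0.25, -1.3):          # arbitrary sample values of gamma
    A = np.array([[1/9, (10 - 5*s3)/36 + 5*gamma, (1 - s3)/9 - 5*gamma],
                  [(2 + s3)/18 - 2*gamma, 5/18, (2 - s3)/18 + 2*gamma],
                  [(1 + s3)/9 + 5*gamma, (10 + 5*s3)/36 - 5*gamma, 1/9]])
    M = b[:, None]*A + (b[:, None]*A).T - np.outer(b, b)   # symplecticity matrix
    print(gamma,
          np.max(np.abs(M)),                  # ~ round-off for every gamma
          np.max(np.abs(A.sum(axis=1) - c)),  # row sums equal the abscissae c
          abs(b.sum() - 1.0),                 # order condition  sum b_i = 1
          abs(b @ c - 0.5))                   # order condition  sum b_i c_i = 1/2
\end{verbatim}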
\subsection{Chebyshev symplectic methods of the second kind}
Secondly, let us consider the shifted normalized Chebyshev
polynomials of the second kind denoted by $U_n(x)$, i.e.,
\begin{equation*}
U_n(x)=\frac{\sin\big((n+1)\arccos(2x-1)\big)}
{\sqrt{\pi(x-x^2)}}=\frac{T'_{n+1}(x)}{2(n+1)},\;n\geq0.
\end{equation*}
The following properties can be easily verified (define
$U_{-1}(x)=0$)
\begin{equation}\label{Cheby_second_proper}
\begin{split}
&\int_0^1U_k(t)\,\mathrm{d}
t=\frac{1+(-1)^k}{(k+1)\sqrt{\pi}},\;k\geq0,\\
&\int_0^xU_k(t)\,\mathrm{d}
t=\frac{U_{k+1}(x)-U_{k-1}(x)}{4(k+1)}+\frac{(-1)^k}{(k+1)\sqrt{\pi}},\;k\geq0,\\
&\int_0^1U_j(t)U_k(t)\,\mathrm{d}
t=\frac{2}{\pi}\sum_{l=0}^{k}\frac{1+(-1)^{j+k}}{j-k+1+2l},\;j\geq
k\geq0,
\end{split}
\end{equation}
where the last formula is deduced from
\begin{equation*}
U_j(t)U_k(t)=\frac{2}{\sqrt{\pi}}\sum_{l=0}^{k}U_{j-k+2l}(t),\;j\geq
k\geq0.
\end{equation*}
Applying the properties given in \eqref{Cheby_second_proper} to the
integrals in \eqref{symp_eqs}, we obtain Chebyshev symplectic
methods of the second kind. In our examples below, the following
shifted Gauss-Christoffel-Chebyshev(II) quadrature rule will be used
\cite{Abramowitzs65hom}:
\begin{equation}\label{wquadII}
\int_0^1\Phi(x)w(x)\,\mathrm{d} x\approx\sum\limits_{i=1}^sb_i
\Phi(c_i),\;\;c_i\in[0,1],
\end{equation}
where
\begin{equation*}
w(x)=2\sqrt{x-x^2},\;b_i=\frac{\pi}{2(s+1)}\sin^2(\frac{i}{s+1}\pi),\;
c_i=\frac{1+\cos(\frac{i}{s+1}\pi)}{2},\;\;i=1,\cdots,s,
\end{equation*}
with $c_i$ being the zeros of $U_s(x)$ as well as the inner extrema
on $[0,1]$ of $T_{s+1}(x)$.
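For instance, for $s=3$ the rule \eqref{wquadII} has the nodes and weights
\begin{equation*}
c_{1,2,3}=\frac{2+\sqrt{2}}{4},\ \frac{1}{2},\ \frac{2-\sqrt{2}}{4},\qquad b_1=b_3=\frac{\pi}{16},\quad b_2=\frac{\pi}{8},
\end{equation*}
and these abscissae coincide with the ones appearing in Tab. \ref{exa2:sympRK1} below.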
\begin{exa}
With the orthogonal polynomials $P_j(x)$ in \eqref{symBA} replaced
by $U_j(x)$, we consider the following three cases separately,
\begin{description}
\item[(i)] Let $\xi=2,\,\eta=1,\,\rho=1$; then we have only one degree of freedom.
After some elementary calculations, we obtain the unique solution
\begin{equation*}
\alpha_{(0,1)}=-\alpha_{(1,0)}=-\frac{\pi}{16},
\end{equation*}
which results in a symplectic csRK method of order $2$. By using the
$1$-point Gauss-Christoffel-Chebyshev(II) quadrature rule we recover
the implicit midpoint rule;
\item[(ii)] Let $\xi=3,\,\eta=1,\,\rho=2$; after some elementary
calculations, we obtain
\begin{equation*}
\alpha_{(1,0)}=-\frac{1}{3}\alpha_{(1,2)}+\frac{1}{16}\pi,
\;\;\;\alpha_{(0,2)}=-\alpha_{(2,0)}=0.
\end{equation*}
If we regard $\mu=\alpha_{(1,2)}=-\alpha_{(2,1)}$ as a free
parameter, then we get a one-parameter family (parametrized by $\mu$)
of symplectic and
symmetric csRK methods of order $4$. By using the $3$-point
Gauss-Christoffel-Chebyshev(II) quadrature rule we get a family of
$3$-stage $4$-order symplectic RK methods which are shown in Tab.
\ref{exa2:sympRK1}, with $\gamma:=\frac{16\sqrt{2}\mu}{9\pi}$.
\item[(iii)] Alternatively, if we take $\xi=5,\,\eta=2,\,\rho=2$, then it gives a
unique solution
\begin{equation*}
\alpha_{(0,1)}=-\alpha_{(1,0)}=-\frac{9\pi}{128},\;\;\alpha_{(1,2)}=
-\alpha_{(2,1)}=-\frac{3\pi}{128},\;\;\alpha_{(0,2)}=-\alpha_{(2,0)}=0.
\end{equation*}
The resulting symplectic csRK method is symmetric and of order $6$.
By using the $5$-point Gauss-Christoffel-Chebyshev(II) quadrature
rule we get a $5$-stage $6$-order symplectic RK method which is
shown in Tab. \ref{exa2:sympRK2}.
\end{description}
\end{exa}
\begin{table}
\[\begin{array}{c|ccc} \frac{2-\sqrt{2}}{4} & \frac{1}{6}
& \frac{2-\sqrt{2}}{12}+\gamma& \frac{1-\sqrt{2}}{6}-\gamma\\[2pt]
\frac{1}{2} &\frac{2+\sqrt{2}}{12}-\gamma
&\frac{1}{6}&\frac{2-\sqrt{2}}{12}+\gamma\\[2pt]
\frac{2+\sqrt{2}}{4} & \frac{1+\sqrt{2}}{6}+\gamma&
\frac{2+\sqrt{2}}{12}-\gamma & \frac{1}{6}\\[2pt]
\hline & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{array}\] \caption{A
family of one-parameter $3$-stage $4$-order symplectic RK methods,
based on Chebyshev polynomials of the second
kind.}\label{exa2:sympRK1}
\end{table}
\begin{table}
\small{\[\begin{array}{c|ccccc} \frac{2-\sqrt{3}}{4} & \frac{7}{90} &
\frac{19-9\sqrt{3}}{160}
&\frac{52-39\sqrt{3}}{360}&\frac{13-9\sqrt{3}}{160}
& \frac{56-21\sqrt{3}}{720}\\[2pt]
\frac{1}{4} & \frac{91+63\sqrt{3}}{1440} & \frac{1}{10} &
\frac{13}{360}& -\frac{1}{80} & \frac{91-63\sqrt{3}}{1440}\\[2pt]
\frac{1}{2} & \frac{28+21\sqrt{3}}{360} & \frac{7}{40} &
\frac{13}{90}& \frac{1}{40} & \frac{28-21\sqrt{3}}{360}\\[2pt]
\frac{3}{4}& \frac{133+63\sqrt{3}}{1440} & \frac{17}{80} &
\frac{91}{360} & \frac{1}{10} & \frac{133-63\sqrt{3}}{1440}\\[2pt]
\frac{2+\sqrt{3}}{4}& \frac{56+21\sqrt{3}}{720} &
\frac{19+9\sqrt{3}}{160} &
\frac{52+39\sqrt{3}}{360} & \frac{13+9\sqrt{3}}{160} & \frac{7}{90}\\[2pt]
\hline & \frac{7}{45} & \frac{1}{5} & \frac{13}{45}& \frac{1}{5} &
\frac{7}{45} \end{array}\]} \caption{A $5$-stage $6$-order symplectic RK
method, based on Chebyshev polynomials of the second
kind.}\label{exa2:sympRK2}
\end{table}
\section{Numerical tests}
We consider the perturbed Kepler problem given by the Hamiltonian
function \cite{Calvofmr09sos}
\begin{equation*}
H(p, q)=\frac{1}{2}(p_1^2+p_2^2)-(q_1^2+q_2^2)^{-\frac{1}{2}}-
\frac{2\varepsilon+\varepsilon^2}{3}(q_1^2+q_2^2)^{-\frac{3}{2}}
\end{equation*}
with the initial value condition
$(p_1(0),\,p_2(0),\,q_1(0),\,q_2(0))=(0,1+\varepsilon,1,0)$. The
exact solution is
\begin{equation*}
\begin{split}
&p_1(t)=-(1+\varepsilon)\textrm{sin}(t+\varepsilon
t),\;\;q_1(t)=\textrm{cos}(t+\varepsilon t),\\
&p_2(t)=(1+\varepsilon)\textrm{cos}(t+\varepsilon
t),\;\;q_2(t)=\textrm{sin}(t+\varepsilon t).
\end{split}
\end{equation*}
In our numerical experiments, we take $\varepsilon=0.1$ and use the
step size $h=0.1$. The Chebyshev symplectic methods of order 4 given
in Tab. \ref{exa1:sympRK1} (with $\gamma=0$, denoted by ``Chebyshev
I order 4'') and Tab. \ref{exa2:sympRK1} (with $\gamma=0$, denoted by
``Chebyshev II order 4'') are tested and compared with the well-known
Gauss-Legendre RK method of order 4 (denoted by ``Gauss order 4'').
It is observed from Fig. \ref{fig-4orderS} and Fig.
\ref{fig-4orderH} that our Chebyshev symplectic methods of order 4
behave very similarly to the classic method
``Gauss order 4'', although the latter gives slightly better results
in terms of the growth of the solution error and the conservation of
energy. As expected, we observe a bounded error in the energy
conservation and a linear growth of the solution error, which coincides
well with the common view in general symplectic integration
\cite{Fengq10sga,hairerlw06gni}. Besides, the newly-derived
Chebyshev symplectic methods of order 6, denoted by ``Chebyshev I
order 6'' and ``Chebyshev II order 6'' respectively (see Tab.
\ref{exa1:sympRK2} and \ref{exa2:sympRK2}), are also tested and
compared with the 6-order Gauss-Legendre RK method; the numerical
results are shown in Fig. \ref{fig-6orderS} and Fig.
\ref{fig-6orderH}. It is seen that these symplectic methods exhibit
almost the same numerical results. These numerical tests conform
well with our expectations and the associated theoretical results.
Therefore, the newly-constructed Chebyshev methods are effective for
solving Hamiltonian systems.
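As an illustration of the experimental setup, the following short Python script (our own sketch, not the original experiments; the final time, the stopping tolerance and the fixed-point solver for the stage equations are our choices) integrates the perturbed Kepler problem with the ``Chebyshev I order 4'' method of Tab. \ref{exa1:sympRK1} (taking $\gamma=0$) and monitors the energy error and the solution error. One should observe a bounded energy error and a slowly (linearly) growing solution error, consistent with Fig. \ref{fig-4orderS} and Fig. \ref{fig-4orderH}.
\begin{verbatim}
# Illustrative sketch (not the authors' code): perturbed Kepler problem solved
# with the 3-stage 4-order Chebyshev(I) symplectic RK method (gamma = 0).
# Stage equations are solved by fixed-point iteration; h = 0.1 as in the paper,
# the final time t = 200 is our choice.
import numpy as np

eps = 0.1
s3 = np.sqrt(3.0)
A = np.array([[1/9, (10 - 5*s3)/36, (1 - s3)/9],
              [(2 + s3)/18, 5/18, (2 - s3)/18],
              [(1 + s3)/9, (10 + 5*s3)/36, 1/9]])
b = np.array([2/9, 5/9, 2/9])

def f(z):                                  # z = (p1, p2, q1, q2)
    p, q = z[:2], z[2:]
    r2 = q @ q
    dp = -q*r2**-1.5 - (2*eps + eps**2)*q*r2**-2.5   # dp/dt = -dH/dq
    return np.concatenate([dp, p])                   # dq/dt = p

def H(z):
    p, q = z[:2], z[2:]
    r = np.sqrt(q @ q)
    return 0.5*(p @ p) - 1/r - (2*eps + eps**2)/(3*r**3)

def rk_step(z, h, tol=1e-14, max_iter=100):
    Z = np.tile(z, (3, 1))                 # stage values Z_i
    for _ in range(max_iter):              # fixed-point iteration
        F = np.array([f(Zi) for Zi in Z])
        Z_new = z + h*(A @ F)
        if np.max(np.abs(Z_new - Z)) < tol:
            Z = Z_new
            break
        Z = Z_new
    F = np.array([f(Zi) for Zi in Z])
    return z + h*(b @ F)

z0 = np.array([0.0, 1 + eps, 1.0, 0.0])    # (p1(0), p2(0), q1(0), q2(0))
z, h, n_steps = z0.copy(), 0.1, 2000
H0, err_H = H(z0), 0.0
for n in range(n_steps):
    z = rk_step(z, h)
    err_H = max(err_H, abs(H(z) - H0))
t = n_steps*h
exact = np.array([-(1 + eps)*np.sin(t + eps*t), (1 + eps)*np.cos(t + eps*t),
                  np.cos(t + eps*t), np.sin(t + eps*t)])
print("max energy error:", err_H)
print("solution error at t = %g:" % t, np.linalg.norm(z - exact))
\end{verbatim}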
\begin{figure}
\begin{center}
\scalebox{0.58}[0.50]{\includegraphics{SEorder4.pdf}}
\caption{Solution error by three symplectic methods of order 4, step
size $h=0.1$.}\label{fig-4orderS}
\end{center}
\begin{center}
\scalebox{0.58}[0.50]{\includegraphics{HEorder4.pdf}}
\caption{Energy error by three symplectic methods of order 4, step
size $h=0.1$.}\label{fig-4orderH}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{0.58}[0.50]{\includegraphics{SEorder6.pdf}}
\caption{Solution error by three symplectic methods of order 6, step
size $h=0.1$.}\label{fig-6orderS}
\end{center}
\begin{center}
\scalebox{0.58}[0.50]{\includegraphics{HEorder6.pdf}}
\caption{Energy error by three symplectic methods of order 6, step
size $h=0.1$.}\label{fig-6orderH}
\end{center}
\end{figure}
\section{Conclusions}
This paper discusses in detail the construction of Chebyshev
symplectic RK-type methods with the help of the newly-built theory
for csRK methods. We present new families of symplectic RK methods
based on the Chebyshev polynomials of the first and second kind,
respectively. Although these methods are developed mainly in terms of
Chebyshev polynomials, the approach can be directly extended to
other types of orthogonal polynomials. In addition, we notice that
Chebyshev-Gauss-Lobatto collocation methods have been considered in
\cite{zhangk11coa} for solving Hamiltonian systems, where it is stated
that these spectral collocation methods preserve both the energy and
the symplectic structure up to round-off error in each time step. However,
those methods are not exactly symplectic, and hence the correct
qualitative behavior cannot be guaranteed over very long times. In
contrast, by using interpolatory quadrature rules with Chebyshev
abscissae, we have constructed Chebyshev methods which are
exactly symplectic.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation
of China (11401055), China Scholarship Council and Scientific
Research Fund of Hunan Provincial Education Department (15C0028).
\section{Introduction}
\noindent Let $G$ be a group which acts on itself by conjugation, and let $H^1(G, G)$ be the first cohomology pointed set. Denote by $\Sh(G)$ the subset of $H^1(G, G)$
consisting of the cohomology classes becoming trivial after restricting to every cyclic subgroup of $G$. The set $\Sh(G)$, for a given group $G$, is called the Shafarevich-Tate set of $G$.
Following Kunyavski\u{\i} \cite{BK}, we say that $G$ is a $\Sh$-rigid group if the set $\Sh(G)$ consists of one element. Throughout the paper $\Sh$-rigid groups will be called rigid groups.
The Bogomolov multiplier $B_0(G)$ of a finite group $G$ is defined as the subgroup of the Schur multiplier consisting of the cohomology classes vanishing after
restriction to all abelian subgroups of $G$. See \cite{BK} for further details on rigidity and the Bogomolov multipliers of certain groups, and possible ties between them. Kang and Kunyavski\u{\i} in \cite{KK} observed that $B_0(G) = 0$ for most of the known classes of finite $\Sh$-rigid groups $G$, and asked the following question:
\vspace{.05in}
\noindent {\bf Question}(\cite[Question 3.2]{KK}). Let $G$ be a finite $\Sh$-rigid group. Is it true that $B_0(G) = 0$?
\vspace{.1in}
This question has an affirmative answer for all $p$-groups of order at most $p^5$. For $p$-groups of order $\le p^4$, it is proved in \cite{KK}, and for groups of order $p^5$, $p > 3$, it follows from \cite{PM} and \cite{MKY}. For $p = 2, 3$, it can be easily verified using GAP \cite{GAP}.
Although this question does not have a positive solution in general (a negative answer for a group of order $256$ is provided in \cite{KK} itself), it is interesting to explore for which classes of groups it has a positive solution. This paper is devoted to studying the rigidity property of groups of order $p^6$ for an odd prime $p$ and to answering the above question in the affirmative for the class of groups considered here. Our viewpoint on this study is a bit different: we study the rigidity problem through automorphisms of groups. Let us make this more precise. For a finite group $G$, an automorphism of $G$ is called (conjugacy) class-preserving if it maps each element of $G$ to some conjugate of it. The set of all class-preserving automorphisms of $G$, denoted by $Aut_c(G)$, forms a normal subgroup of $Aut(G)$ and contains $Inn(G)$, the group of all inner automorphisms of $G$. Let $Out_c(G)$ denote the quotient group $Aut_c(G)/Inn(G)$. Then there is a bijection between $Out_c(G)$ and $\Sh(G)$ for any finite group $G$ \cite[2.12]{Ono}. Therefore a finite group $G$ is $\Sh$-rigid if and only if $Out_c(G) = 1$. Both approaches, i.e., through the Shafarevich-Tate sets (cf. \cite{KV1, OW1, OW2}) as well as through $Out_c(G)$ (cf. \cite{FtSt, HL, MV2, MV3, MKY}), have been explored by many mathematicians studying the rigidity of various classes of groups. As mentioned above, in the present paper we take the latter approach.
Groups of order $p^6$, $p$ an odd prime, are classified into $43$ isoclinism families by R. James \cite{RJ}, which are denoted by $\Phi_k$ for $1 \le k \le 43$. The concept of isoclinism was introduced by P. Hall \cite{PH}. That $Aut_c(G)$, for a non-abelian finite group $G$, is independent (up to isomorphism) of the choice of a group in a given isoclinism family is shown in \cite[Theorem 4.1]{MKY}. This result allows us to select and work with any group from each of the $43$ isoclinism families of groups of order $p^6$. The following is the main result of this paper, which classifies the rigid groups of order $p^6$, $p$ an odd prime, and computes the size $| \Sh(G)| = |Out_c(G)|$ for the groups $G$ which are not rigid.
\begin{thma}
Let $G$ be a group of order $p^6$ for an odd prime $p$. Then $Out_c(G) \neq 1$ if and only if $G$ belongs to one of the isoclinism families
$\Phi_k$ for $k = 7, 10, 13, 15, 18, 20, 21, 24, 30, 36, 38, 39$. Moreover,
\begin{enumerate}
\item if $G$ belongs to one of the isoclinism families $\Phi_k$ for $k = 7, 10, 24, 30, 36, 38, 39$, then $|Out_c(G)|$ = $p$,
\item if $G$ belongs to one of the isoclinism families $\Phi_k$ for $k = 13, 18, 20$, then $|Out_c(G)|$ = $p^2$, and
\item if $G$ belongs to one of the isoclinism families $\Phi_k$ for $k = 15, 21$, then $|Out_c(G)| = p^4$.
\end{enumerate}
\end{thma}
The proof of Theorem A follows from Theorem \ref{prop1} and Theorem \ref{prop2}. Using GAP \cite{GAP} it can be verified that the above question has an affirmative answer for groups of order $2^6$ and $3^6$ (the Bogomolov multipliers of groups of order $2^6$ are also computed in \cite{CHKK}). With this information and the results of Chen and Ma \cite[Theorem 1.4]{CM}, Theorem A provides an affirmative answer to the above-mentioned question of Kang and Kunyavski\u{\i} for groups of order $p^6$ for all primes $p$ in the following result.
\begin{thmb}
Let $G$ be a $\Sh$-rigid group of order $p^6$ for a prime $p$. Then its Bogomolov multiplier $B_0(G)$ is zero.
\end{thmb}
Theorem A is also interesting from another point of view. In 1911 W. Burnside \cite{WB1} asked the following question: Does there exist
any finite group $G$ such that $G$ has a non-inner class-preserving automorphism? In 1913, he himself answered this question affirmatively by constructing a group $W$ of order $p^6$, $p$ an odd prime \cite{WB}.
This group $W$ is of nilpotency class 2 with $|Aut_c(W)| = p^8$ and $|Out_c(W)| = p^4$. More groups with non-inner class-preserving automorphisms were constructed in
\cite{BM, HEIN, HW, IM, FS}. We refer the reader to \cite{MKYSRV} for a more comprehensive survey on the topic. Thus Theorem A above classifies all groups of order $p^6$, $p$ an odd prime, having non-inner class-preserving automorphisms. \\
We use the following notation. For a multiplicatively written group $G$, let $x,y,a \in G$. Then $[x,y]$ denotes the commutator $x^{-1}y^{-1}xy$ and $x^a$ denotes the conjugate of $x$ by $a$, i.e. $a^{-1}xa$.
By $\gen{x}$ we denote the cyclic subgroup of $G$ generated by $x$. By $Z(G)$ and $Z_2(G)$ we denote the center and the second center of $G$ respectively. For $x \in G$, $C_G(x)$ denotes the centralizer of $x$ in $G$. By $C_G(H)$ we denote the centralizer of $H$ in $G$, where $H$ is a subgroup of $G$. We write
$\gamma_2(G)$ for the commutator subgroup of $G$. The group of all homomorphisms from a group $H$ to an abelian group $K$ is denoted by $Hom(H, K)$. For $x \in G, \ x^G$ denotes
the $G$-conjugacy class of $x$ and $[x, G]$ denotes the set of all $[x,y]$, for $y \in G$. By $C_p$ we denote the cyclic group of order $p$. The subgroup generated by all elements of order $p$ in $Z(G)$ is denoted by $\Omega_1(Z(G))$. \\
\section{ \bf{Preliminaries}}
The following commutator identities are easy to prove.
\begin{lemma}\label{lem1}
Let $G$ be a group and $x,y,z \in G$. Then
\begin{enumerate}
\item $[xy, z] = [x, z]^y[y, z]$, and
\item $[z, xy] = [z, y][z, x]^y$.
\end{enumerate}
\end{lemma}
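For instance, the first identity follows from the direct computation
\begin{equation*}
[xy,z]=y^{-1}x^{-1}z^{-1}xyz=\big(y^{-1}x^{-1}z^{-1}xzy\big)\big(y^{-1}z^{-1}yz\big)=[x,z]^y[y,z],
\end{equation*}
and the second one is verified analogously.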
An automorphism of a group $G$ is called central if it acts trivially on the central quotient group $G/Z(G)$, or equivalently, if it commutes with all the inner automorphisms of $G$.
The set of all central automorphisms of $G$ forms a normal subgroup of $Aut(G)$. We denote this subgroup by $Autcent(G)$. The following lemma follows from \cite{AY}.
\begin{lemma}\label{lem2}
Let $G$ be a purely non-abelian finite $p$-group. Then $|Autcent(G)|$ = $|Hom(G/\gamma_2(G),$ $Z(G))|$.
\end{lemma}
\begin{lemma}[\cite{MKY}, Lemma 2.2]\label{lem3}
Let $G$ be a finite $p$-group such that $Z(G) \leq [x, G]$ for all $x \in G-\gamma_2(G)$. Then $|Aut_c(G)| \geq |Autcent(G)||G/Z_2(G)|$.
\end{lemma}
\begin{lemma}[\cite{HW}, Proposition 14.4]\label{lem4}
Let $G$ be a finite group and $H$ be an abelian normal subgroup of $G$ such that $G/H$ is cyclic. Then $Out_c(G) = 1$.
\end{lemma}
\begin{lemma}[\cite{PKR}, Lemma 5] \label{lempkr}
For a group $G$, $C_{Aut_c(G)}(Inn(G)) = Z(Aut_c(G))$.
\end{lemma}
\begin{lemma}[\cite{MKY}, Lemma 2.6]\label{lem6}
Let $G$ be a finite group. Let ${x_1, x_2, \ldots,x_d}$ be a minimal generating set for $G$. Then $|Aut_c(G)| \leq \prod\limits_{i=1}^d |{x_i}^G|$.
\end{lemma}
\begin{lemma} \label{lemay}
Let $G$ be a purely non-abelian $p$-group, minimally generated by $\alpha_1, \alpha_2, \ldots, \alpha_t$. Suppose that $G/\gamma_2(G)$ is elementary abelian. If $\beta_i \in \Omega_1(Z(G))$ for $i = 1, \ldots,t$, then the map $\delta :\{\alpha_1, \alpha_2, \ldots, \alpha_t\} \rightarrow G$, defined as $\delta(\alpha_i) = \alpha_i\beta_i$, extends to a central automorphism of $G$.
\end{lemma}
\begin{proof}
Note that the map $f_{\delta} : \{\alpha_i\gamma_2(G) \mid \ i = 1, \ldots,t\} \rightarrow \Omega_1(Z(G))$ defined by
$\alpha_i\gamma_2(G) \mapsto \beta_i$ for $i = 1, \ldots,t$ extends to a homomorphism from $G/\gamma_2(G)$ to $Z(G)$ because $G/\gamma_2(G)$ is elementary abelian. Now it follows from \cite[Theorem 1]{AY} that the map $g \mapsto gf_{\delta}(g)$, $g \in G$, is a central automorphism of $G$. This proves the lemma. \hfill $\Box$
\end{proof}
Let $G$ be a group minimally generated by $\alpha_1, \alpha_2, \ldots, \alpha_t$. Then note that any element of $G$ can be written as
$\eta\alpha_1^{k_1}\alpha_2^{k_2}\cdots\alpha_t^{k_t}$ for some $\eta \in \gamma_2(G)$ and some $k_1, k_2, \ldots,k_t \in \mathbb{Z}$. Also note that any conjugate of an $\alpha_i$ can be written as
$\alpha_i[\alpha_i, \eta\alpha_1^{k_1}\alpha_2^{k_2}\cdots\alpha_t^{k_t}]$ for some $\eta \in \gamma_2(G)$ and some $k_1, k_2, \ldots,k_t \in \mathbb{Z}$. It follows that an automorphism $\delta$
of $G$ is class-preserving if and only if for every $k_1, k_2, \ldots,k_t \in \mathbb{Z}$ and for all $\eta_1 \in \gamma_2(G)$, there exist $l_1, l_2, \ldots,l_t \in \mathbb{Z}$ and $\eta_2 \in \gamma_2(G)$
(depending on $k_1, k_2, \ldots,k_t$ and $\eta_1$) such that
\[\bigg[\eta_1\prod\limits_{i=1}^t \alpha_i^{k_i}, \ \ \eta_2\prod\limits_{i=1}^t \alpha_i^{l_i}\bigg] =
\bigg(\eta_1\prod\limits_{i=1}^t \alpha_i^{k_i}\bigg)^{-1}\delta\bigg(\eta_1\prod\limits_{i=1}^t \alpha_i^{k_i}\bigg).\]
We will be using these facts and Lemma \ref{lem1} very frequently in the proofs without any further reference.
\section{Groups $G$ with trivial $Out_c(G)$}
In this section, we deal with those groups $G$ for which $Out_c(G)$ is trivial. The isoclinism family $\Phi_k$ of groups of order $p^6$ contains a group $\Phi_k{(1^6)}$ for certain values of $k$. These groups will be considered frequently throughout the paper.
\begin{lemma}
Let $G$ be the group $\Phi_{11}(1^6)$. Then $Out_c(G) = 1$.
\end{lemma}
\begin{proof}
The group $G$ is a special $p$-group, minimally generated by $\alpha_1, \alpha_2$ and $\alpha_3$. The commutator subgroup $\gamma_2(G)$ is generated by $\beta_1 := [\alpha_2, \alpha_3], \beta_2 := [\alpha_3, \alpha_1], \beta_3 := [\alpha_1, \alpha_2]$.
The conjugates of $\alpha_1, \alpha_2$ and $\alpha_3$ are $\alpha_1\beta_3^t\beta_2^s$, $\alpha_2\beta_3^t\beta_1^r$ and
$\alpha_3\beta_2^s\beta_1^r$ respectively, where $r$, $s$ and $t$ vary over $\mathbb{Z}$. Since the exponent of $\gamma_2(G)$ is $p$, it follows that
$|\alpha_i^G| = p^2$ for $i = 1,2,3$. Therefore by Lemma \ref{lem6}, $|Aut_c(G)| \leq p^6$.
Define a map $\delta : \{\alpha_1, \alpha_2, \alpha_3\} \rightarrow G$ such that $\alpha_1 \mapsto \alpha_1\beta_3^{t_1}\beta_2^{s_1}, \alpha_2 \mapsto
\alpha_2\beta_3^{t_2}\beta_1^{r_2}$ and $\alpha_3 \mapsto \alpha_3\beta_2^{s_3}\beta_1^{r_3}$, for some $s_1, t_1, r_2, t_2, r_3, s_3 \in \mathbb{Z}$.
By Lemma \ref{lemay}, this map extends to a central automorphism of $G$. Since $\delta$ fixes $\gamma_2(G)$ element-wise,
for $k_1, l_1, m_1 \in \mathbb{Z}$ and $\eta \in \gamma_2(G)$,
\[\delta(\eta\alpha_1^{k_1}\alpha_2^{l_1} \alpha_3^{m_1}) =
\eta\alpha_1^{k_1}\alpha_2^{l_1} \alpha_3^{m_1}\beta_1^{l_1r_2 + m_1r_3}\beta_2^{k_1s_1 + m_1s_3}\beta_3^{k_1t_1 + l_1t_2}.\]
Therefore $\delta$ extends to a class-preserving automorphism if and only if for every $k_1, l_1, m_1 \in \mathbb{Z}$, and $\eta_1 \in \gamma_2(G)$
there exist $k_2, l_2, m_2$ (depending on $k_1, l_1, m_1$) and $\eta_2 \in \gamma_2(G)$ such that
\[[\eta_1\alpha_1^{k_1}\alpha_2^{l_1} \alpha_3^{m_1}, \eta_2\alpha_1^{k_2}\alpha_2^{l_2} \alpha_3^{m_2}] =
\beta_1^{l_1r_2 + m_1r_3}\beta_2^{k_1s_1 + m_1s_3}\beta_3^{k_1t_1 + l_1t_2}.\]
Expanding the left hand side, we get
\[\beta_1^{l_1m_2 - l_2m_1}\beta_2^{k_2m_1 - k_1m_2}\beta_3^{k_1l_2 - k_2l_1} = \beta_1^{l_1r_2 + m_1r_3}\beta_2^{k_1s_1 + m_1s_3}\beta_3^{k_1t_1 + l_1t_2}.\]
Comparing the powers of $\beta_i$'s, we have that $\delta$ extends to a class-preserving automorphism if and only if the following equations hold true:
\[l_1m_2 - l_2m_1 \equiv l_1r_2 + m_1r_3 \pmod{p}\]
\[k_2m_1 - k_1m_2 \equiv k_1s_1 + m_1s_3 \pmod{p}\]
\[k_1l_2 - k_2l_1 \equiv k_1t_1 + l_1t_2 \pmod{p}. \]
Let $\delta$ be a class-preserving automorphism.
Choose $k_1 = 0$ and $m_1, l_1$ to be non-zero modulo $p$. Then $k_2m_1 \equiv m_1s_3 \pmod{p}$ and $-k_2l_1 \equiv l_1t_2 \pmod{p}$. It follows that $t_2 \equiv -s_3 \pmod{p}$. Similarly, if we choose $l_1$ to be zero and
$k_1, m_1$ to be non-zero modulo $p$, then $r_3 \equiv -t_1 \pmod{p}$, and if $m_1$ is zero and $l_1, k_1$ are non-zero modulo $p$, then $r_2 \equiv -s_1 \pmod{p}$.
It follows that $|Aut_c(G)| \leq p^3 = |Inn(G)|$. Therefore $Out_c(G) = 1$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G$ be one of the groups $\Phi_{17}(1^6)$ and $\Phi_{19}(1^6)$.
Then $Out_c(G) = 1$.
\end{lemma}
\begin{proof}
First assume that $G$ is the group $\Phi_{17}(1^6)$. Then $G$ is a $p$-group of nilpotency class $3$, minimally generated by $\alpha, \alpha_1$ and $\beta$. The commutator subgroup
$\gamma_2(G)$ is abelian and generated by $\alpha_2 := [\alpha_1, \alpha], \alpha_3 := [\alpha_2, \alpha]$ and $\gamma := [\beta, \alpha_1]$. The center $Z(G)$ is of order $p^2$, generated by $\alpha_3$ and $\gamma$. It is easy to see that
$|\alpha_1^{G}| \leq p^2, |\alpha^{G}| \leq p^2$
and $|\beta^{G}| = p$. Therefore, applying Lemma \ref{lem6}, we have $|Aut_c(G)| \leq p^5$. Define a map $\delta : \{\alpha, \alpha_1, \beta\} \rightarrow G$ such that
$\alpha \mapsto \alpha, \alpha_1 \mapsto \alpha_1$ and $\beta \mapsto \alpha_1^{-1}\beta\alpha_1 = \beta\gamma$. Suppose that $|Aut_c(G)| = p^5$. Then $\delta$
must extend to a class-preserving automorphism of $G$. Hence there exist $\eta_1 (= \alpha_2^{r_1}\alpha_3^{s_1}\gamma^{t_1}$ say)
$\in \gamma_2(G)$ and $k_1, l_1, m_1 \in \mathbb{Z}$ such that
$[\alpha\beta, \ \eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}] = \gamma$. It is a routine calculation to show that
\[[\alpha\beta, \eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}] = \alpha_2^{-l_1}\alpha_3^{-r_1}\gamma^{l_1},\]
which can not be equal to $\gamma$ for any value of $l_1$ and $r_1$. Hence $\delta$ can not be a class-preserving automorphism, and hence $|Aut_c(G)| \leq p^4$. But
$|Inn(G)| = p^4$, so we have $Aut_c(G) = Inn(G)$.
Now we take $G$ to be $\Phi_{19}(1^6)$. Then $G$ is a $p$-group of class $3$, minimally generated by $\alpha, \alpha_1$ and $\alpha_2$. Define a map $\delta : \{\alpha, \alpha_1, \alpha_2\} \rightarrow G$ such that
$\alpha \mapsto \alpha_1^{-1}\alpha\alpha_1 (= \alpha\beta_1), \alpha_1 \mapsto \alpha_1$ and $\alpha_2 \mapsto \alpha_2$. Now the proof follows on the same lines as for the group $\Phi_{17}(1^6)$.
\hfill $\Box$
\end{proof}
\begin{lemma}
Let $G$ be the group $\Phi_{23}(1^6)$. Then $Out_c(G) = 1$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $4$, minimally generated by $\alpha, \alpha_1$. The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\alpha_{i+1} := [\alpha_i, \alpha]$ for $1 \leq i \leq 3$ and $\gamma := [\alpha_1, \alpha_2]$. The center $Z(G)$ is of order $p^2$, generated by $\alpha_4$ and $\gamma$. It is easy to check that $|\alpha^{G}| \leq p^3$ and
$|\alpha_1^{G}| \leq p^2.$ Therefore $|Aut_c(G)| \leq p^5$. Let $H = \gen{\alpha_4}$. Since $\alpha_4 \in Z(G)$, $H$ is normal. Consider the
quotient group $G/H$. One can check that the group $G/H$ belongs to the family $\Phi_6.$ Therefore it follows from \cite[Lemma 5.3]{MKY} that
$Aut_c(G/H) = Inn(G/H)$. Now define a map $\delta : \{\alpha, \alpha_1\} \rightarrow G$, such that $\delta(\alpha) = \alpha$ and $
\delta(\alpha_1) = \alpha_2^{-1}\alpha_1\alpha_2 = \alpha_1\gamma$. Suppose that $|Aut_c(G)| = p^5$. Then $\delta$ must extend to a class-preserving automorphism of
$G$. It also induces a non-trivial class-preserving automorphism (say $\bar{\delta}$) of $G/H$. But then $\bar{\delta}$ is an inner automorphism of $G/H$ because $Aut_c(G/H) = Inn(G/H)$.
Now note that $\bar{\delta}$ is also a central automorphism of $G/H$. It follows that it
is induced by some element in $Z_2(G/H) = \gen{\alpha_2H, Z(G/H)}$. Let
$\bar{\delta}$ be induced by $\alpha_2^tH$ for some $ t \in \mathbb{Z}$.
Hence \[\bar{\delta}(\alpha H) = (\alpha_2^tH)^{-1}\alpha H (\alpha_2^tH) = \alpha\alpha_3^{-t}H .\]
This must be equal to $\alpha H$. But then $t \equiv 0 \pmod{p}$ and hence $\bar{\delta}(\alpha_1H) = \alpha_1H$, a contradiction. Therefore we have $|Aut_c(G)| \leq p^4$. But
$|Inn(G)| = p^4$, therefore $Out_c(G) = 1$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G$ be the group $\Phi_{27}(1^6)$. Then $Out_c(G) = 1$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $4$, minimally generated by $\alpha, \alpha_1, \beta$. The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\alpha_{i+1} := [\alpha_i, \alpha]$ for $1 \leq i \leq 2$ and $\alpha_4 := [\alpha_3, \alpha] = [\alpha_1, \beta] = [\alpha_1, \alpha_2]$. The center $Z(G)$ is of order $p$, generated by $\alpha_4$. It is easy to check that
$|\alpha^{G}| \leq p^3, |\alpha_1^{G}| \leq p^2$ and $|\beta^{G}| = p$. Therefore $|Aut_c(G)| \leq p^6$. Define
$\delta :\{\alpha, \alpha_1, \beta\} \rightarrow G$ such that $\delta(\alpha)= \alpha, \delta(\alpha_1) = \alpha_1$ and $\delta(\beta)= \alpha_1\beta\alpha_1^{-1} =
\beta\alpha_4$. Suppose that $|Aut_c(G)| = p^6$. Then $\delta$ extends to a class-preserving automorphism of $G$. Also note that $\delta$ fixes $\alpha_2$, therefore
$\delta(\alpha_2\beta^{-1}) = \alpha_2\beta^{-1}\alpha_4^{-1}$. Since $\delta$ is a class-preserving automorphism, there exist some $g \in G$ such that
$[\alpha_2\beta^{-1}, g] = \alpha_4^{-1}$. Note that $C_G(\alpha_2\beta^{-1}) = \gen{\alpha_1, \alpha_2, \alpha_3, \alpha_4, \beta}$. Therefore without loss of generality we can assume
that $g = \alpha^{k_1}$. It can be calculated that
\[[\alpha_2\beta^{-1}, \alpha^{k_1}] = \alpha_3^{k_1}\alpha_4^{k_1(k_1-1)/2},\]
which, for any value of $k_1$, can never be equal to $\alpha_4^{-1}$. Hence, we get a contradiction. Therefore $|Aut_c(G)| \leq p^5$. But
$|Inn(G)| = p^5$, therefore $Out_c(G) = 1$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G \in \{\Phi_{28}(222), \Phi_{29}(222)\}$. Then $|Out_c(G)| = 1$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $4$, minimally generated by $\alpha$ and $\alpha_1$. The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\alpha_{i+1} := [\alpha_i, \alpha]$ for $1 \leq i \leq 2$ and $\alpha_4 := [\alpha_3, \alpha] = [\alpha_1, \alpha_2]$. The center $Z(G)$ is of order $p$, generated by $\alpha_4$. Note that $\gen{\alpha, \alpha_4} \leq C_G(\alpha)$ and hence $|C_G(\alpha)| \geq p^3$, therefore
$|\alpha^G| \leq p^3$. Now note that, since $\alpha_2^{(p)} = \alpha_4^y$, we have for $p =3, \alpha_2^p = \alpha_4^{y-1}$, and for $ p > 3, \alpha_2^p = \alpha_4^{y}$.
The following is a routine calculation:
\[[\alpha_1, \alpha^p] = \alpha_2^p\alpha_3^{p(p-1)/2}\alpha_4^{\sum\limits_{n=1}^{p-2} n(n-1)/2} = \alpha_2^p\alpha_4^{(p-2)(p-1)p/3},\]
which, for $p=3$, equals $\alpha_4^{y+1}$, and for $p > 3$, equals $\alpha_4^y$. But we have $[\alpha_1, \alpha_2^t] = \alpha_4^t$, therefore
$\alpha_2^{y+1}\alpha^{-p} \in C_G(\alpha_1)$ for $p=3$ and $\alpha_2^{y}\alpha^{-p} \in C_G(\alpha_1)$ for $p>3$. Now it is easy to see that
$|C_G(\alpha_1)| \geq p^4$. Hence $|\alpha_1^G| \leq p^2$. It follows from Lemma \ref{lem6} that $|Aut_c(G)| \leq p^5$. But $|G/Z(G)| = p^5$, therefore
$Out_c(G) = 1$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G \in \{\Phi_{k}(1^6), \Phi_{34}(321)a \mid \ k = 31, \ldots, 33\}$. Then $Out_c(G) = 1$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $3$, minimally generated by $\alpha, \alpha_1$ and $\alpha_2$.
The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\beta_i := [\alpha_i, \alpha]$ for $i = 1, 2$ and $\gamma := [\alpha_1, \beta_1]$. For $k = 31, 32$, $[\alpha_2, \beta_2] = \gamma^y$, and for $k = 33$ and $\Phi_{34}(321)a$, $\gamma = [\beta_2, \alpha]$. The center $Z(G)$ is of order $p$, generated by $\gamma$. It is easy to see that, if $G \in \{\Phi_{k}(1^6)| \ k = 31, 32\}$, then
$|\alpha_1^{G}| \leq p^2, |\alpha_2^{G}| \leq p^2, |\alpha^{G}| \leq p^2$, and if $G \in \{\Phi_{33}(1^6), \Phi_{34}(321)a\}$, then
$|\alpha_1^{G}| \leq p^2, |\alpha_2^{G}| = p$ and $|\alpha^{G}| \leq p^3$.
Therefore for all the four groups $G$, $|Aut_c(G)| \leq p^6$. Define a map $\delta : \{\alpha, \alpha_1, \alpha_2\} \rightarrow G$ such that
$\alpha \mapsto \alpha, \alpha_1 \mapsto \alpha_1$ and $\alpha_2 \mapsto \alpha^{-1}\alpha_2\alpha = \alpha_2\beta_2$. Suppose that $|Aut_c(G)| = p^6$. Then $\delta$
must extend to a class-preserving automorphism of $G$. Hence there exist elements $\eta_1$
$\in \gamma_2(G)$ and $k_1, l_1, m_1 \in \mathbb{Z}$ such that
$[\alpha_1\alpha_2, \ \eta_1\alpha^{k_1}\alpha_1^{l_1}\alpha_2^{m_1}] = \beta_2$, but it is a routine calculation that
\[[\alpha_1\alpha_2, \ \eta_1\alpha^{k_1}\alpha_1^{l_1}\alpha_2^{m_1}] = \beta_1^{k_1}\beta_2^{k_1}\gamma^{a}\]
for some $a \in \mathbb{Z}$. Clearly it can not be equal to $\beta_2$ for any values of $k_1, l_1, m_1$ and $\eta_1$.
Thus $\delta$ is not a class-preserving automorphism, and therefore it follows that $|Aut_c(G)| \leq p^5$. But
$|Inn(G)| = p^5$. Hence $Aut_c(G) = Inn(G)$ proving that $Out_c(G) = 1$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G \in \{\Phi_{k}(1^6), \Phi_{j}(222)a_0 \mid k = 40, 41, \; j = 42, 43 \}$. Then $Out_c(G) = 1$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $4$, minimally generated by $\alpha_1$ and $\alpha_2$. The commutator subgroup $\gamma_2(G)$ is abelian and generated by
$\beta := [\alpha_1, \alpha_2], \beta_i := [\beta, \alpha_i]$ for $i = 1,2$ and $\gamma$,
where, for $k = 40$, $\gamma := [\beta_1, \alpha_2] = [\beta_2, \alpha_1]$, for $k = 41$, $\gamma^{-\nu} := [\alpha_2, \beta_2] = [\alpha_1, \beta_1]^{-\nu}$, for $j = 42$, $\gamma := [\alpha_1, \beta_2] = [\alpha_2, \beta_1]$ and for $j = 43$, $\gamma^{-\nu} := [\alpha_2, \beta_2] = [\alpha_1, \beta_1]^{-\nu}$. The center $Z(G)$ is of order $p$, generated by $\gamma$. It is easy to see that $|\alpha_1^{G}| \leq p^3$ and $|\alpha_2^{G}| \leq p^3$.
Therefore $|Aut_c(G)| \leq p^6$. Define a map $\delta : \{\alpha_1, \alpha_2\} \rightarrow G$ such that
$\alpha_1 \mapsto \alpha_1$ and $\alpha_2 \mapsto \alpha_2\beta_2$. Suppose that $|Aut_c(G)| = p^6$. Then $\delta$
must extend to a class-preserving automorphism of $G$. Hence there exist $\eta_1 (= \beta^{r_1}\beta_1^{s_1}\beta_2^{t_1}\gamma^{u_1}$ say)
$\in \gamma_2(G)$ and $k_1, l_1 \in \mathbb{Z}$ such that
$[\alpha_1\alpha_2, \ \eta_1\alpha_1^{k_1}\alpha_2^{l_1}] = \beta_2$. It is a routine calculation to show that,
\begin{eqnarray*}
[\alpha_1\alpha_2, \ \eta_1\alpha_1^{k_1}\alpha_2^{l_1}] &=& \beta^{l_1-k_1}\beta_1^{-k_1(k_1-1)/2 - r_1}\beta_2^{l_1(l_1 + 1)/2 - k_1l_1 - r_1}\gamma^{a}
\end{eqnarray*}
for some $a \in \mathbb{Z}$. It is easy to see that if powers of $\beta$ and $\beta_1$ in the above expression are $0$ modulo $p$, then the power of $\beta_2$ is also $0$ modulo $p$.
It follows that $\delta$ is not a class-preserving automorphism, and therefore $|Aut_c(G)| \leq p^5$. But $|Inn(G)| = p^5$, therefore $Out_c(G) = 1$. \hfill $\Box$
\end{proof}
We are now ready to prove the following theorem.
\begin{thm} \label{prop1}
Let $G$ be a group of order $p^6$ which belongs to one of the isoclinism families
$\Phi_k$ for $k = 2, \ldots,6,8,9,11,12,14,16, 17,19,23,25, \ldots,29, 31, \ldots,35,37, 40, \ldots,43$ of \cite[Section 4.6]{RJ}. Then $Out_c(G) = 1$.
\end{thm}
In the following proof, $\Phi_{k}(1^5)$ is the group of order $p^5$ from the isoclinism family $(k)$ of \cite[Section 4.5]{RJ} and $\Phi_{8}(32)$ is the group of order $p^5$ from the isoclinism family $(8)$ of \cite[Section 4.5]{RJ}.\\
\begin{proof}
Note that $Aut_c(H \times K) \cong Aut_c(H) \times Aut_c(K)$, for any two groups $H$ and $K$.
Let $G \in \{\Phi_k(1^6), \ \Phi_8(321)a \mid k = 2, \ldots,6,9\}$. Then, since $\Phi_k(1^6) = \Phi_k(1^5) \times C_p$ and $\Phi_8(321)a = \Phi_8(32) \times C_p$, it follows from \cite[Theorem 5.5]{MKY} that $Out_c(G) = 1$. Since the group
$\Phi_{12}(1^6)$ is a direct product of groups of order $p^3$, $Out_c(\Phi_{12}(1^6)) = 1$. The group $\Phi_{14}(1^6)$ is of nilpotency class 2 and $\gamma_2(\Phi_{14}(1^6))$ is cyclic, therefore from \cite[Corollary 3.6]{MKY}, we have $Out_c(\Phi_{14}(1^6)) = 1$.
Note that if $G \in \{\Phi_{16}(1^6), \Phi_{k}(222) \mid k = 25, 26\}$, then $G$ is abelian-by-cyclic. Hence by Lemma \ref{lem4}, $Out_c(G) = 1$. If $G \in \{\Phi_{k}(1^6) \mid k = 22, 35, 37\}$, then by Lemma \ref{lem6} it follows that $|Aut_c(G)| \le p^5$. Since $|Inn(G)| = p^5$, $Out_c(G) = 1$. This, along with Lemmas 3.1--3.7, completes the proof of Theorem \ref{prop1}. \hfill $\Box$
\end{proof}
\section{Groups $G$ with non-trivial $Out_c(G)$}
In this section we deal with those groups for which there exists a non-inner class-preserving automorphism.
\begin{lemma} \label{lem8}
Let $G$ be the group $\Phi_{24}(1^6)$.
Then $|Out_c(G)| = p$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $4$, minimally generated by $\alpha, \alpha_1$ and $\beta$. The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\alpha_{i+1} := [\alpha_i, \alpha]$ for $i = 1,2$ and $\alpha_4 := [\alpha_3, \alpha] = [\alpha_1, \beta]$. The center $Z(G)$ is of order $p$, generated by $\alpha_4$.
We will show that for every element $g \in G-\gamma_2(G), Z(G) \leq [g, G]$. Let $g = \eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}$.
Then \[[g, \beta^{l_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}, \beta^{l_2}] = \alpha_4^{l_1l_2}.\]
Therefore, if $l_1$ is non-zero modulo $p$, we have $Z(G) \leq [g, G]$. Let $l_1 \equiv 0\pmod{p}$. Then, since $\alpha_1^p \in \gamma_2(G)$,
\[[g, \alpha_3^{k_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}, \alpha_3^{k_2}] = \alpha_4^{k_1k_2}.\]
Therefore, if $k_1$ is non-zero modulo $p$, $Z(G) \leq [g, G]$. Now let $k_1 \equiv 0\pmod{p}$. Then
\[[g, \alpha_1^{m_2}] = [ \eta_1\beta^{m_1}, \alpha_1^{m_2}] = \alpha_4^{-m_1m_2}.\]
Therefore, if $m_1$ is non-zero modulo $p$, $Z(G) \leq [g, G]$.
It follows that for every $g \in G-\gamma_2(G), Z(G) \leq [g, G]$. Hence by lemmas \ref{lem2} and \ref{lem3} we have,
\[|Aut_c(G)| \geq |Autcent(G)||G|/|Z_2(G)| = p^3p^6/p^3 = p^6.\] But,
$|\alpha^{G}| \leq p^3, |\alpha_1^{G}| \leq p^2$ and $|\beta^{G}| = p$. Therefore we have $|Aut_c(G)| \leq p^6$. Hence
$|Aut_c(G)| = p^6$. Since $|G/Z(G)| = p^5$, it follows that $|Out_c(G)| = p$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G$ be the group $\Phi_{30}(1^6)$.
Then $|Out_c(G)| = p$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $4$, minimally generated by $\alpha, \alpha_1$ and $\beta$. The commutator subgroup $\gamma_2(G)$ is abelian and generated by
$\alpha_2 := [\alpha_1, \alpha]$, $\alpha_3 := [\alpha_2, \alpha] = [\alpha_1, \beta]$ and $\alpha_4 := [\alpha_3, \alpha] = [\alpha_2, \beta]$. The center $Z(G)$ is of order $p$, generated by
$\alpha_4$. It is easy to see that $|\alpha^{G}| \leq p^3, |\alpha_1^{G}| \leq p^2$,
and $|\beta^{G}| \leq p^2$. Therefore $|Aut_c(G)| \leq p^7$. Define a map $\delta : \{\alpha, \alpha_1, \beta\} \rightarrow G$ such that
$\alpha \mapsto \alpha\alpha_4, \alpha_1 \mapsto \alpha_1$ and $\beta \mapsto \beta\alpha_4$. By Lemma \ref{lemay}, the map $\delta$ extends to a central automorphism of $G$.
We will show that $\delta$ is also a non-inner class-preserving automorphism.
Let $g = \eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}.$ Then
\[[g, \alpha_3^{k_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}, \alpha_3^{k_2}] = \alpha_4^{k_1k_2}.\]
Therefore, if $k_1$ is non-zero modulo $p$, we have $Z(G) \leq [g, G].$
Hence $\delta$ maps $g$ to a conjugate of $g$. Now let $k_1 \equiv 0\pmod{p}$, then
\[[g, \alpha_2^{m_2}] = [\eta_1\alpha_1^{l_1}\beta^{m_1}, \alpha_2^{m_2}] = \alpha_4^{-m_1m_2}.\]
It follows that, if $m_1$ is non-zero modulo $p$, $\delta$ maps $g$ to a conjugate of $g$. Now let $m_1 \equiv 0\pmod{p}$. But then, $\delta(\eta_1\alpha_1^{l_1}) = \eta_1\alpha_1^{l_1},$ because being a central automorphism
$\delta$ fixes $\gamma_2(G)$ element-wise. We have shown that for every $g \in G$,
$\delta(g)$ is a conjugate of $g$. Therefore $\delta$ is a class-preserving automorphism. Now suppose $\delta$ is an inner automorphism. Then being central,
$\delta$ is induced by some element in $Z_2(G) = \gen{\alpha_3, \alpha_4}$. Since $\alpha_4 \in Z(G)$, we can assume that $\delta$ is induced by $\alpha_3^t$. But then $\delta(\beta) = \beta$,
a contradiction. Therefore $\delta$ is non-inner class-preserving automorphism. Since $|Inn(G)| = p^5$, it follows that $|Aut_c(G)| \geq p^6.$
Now define a map $\sigma : \{\alpha, \alpha_1, \beta\} \rightarrow G$ such that
$\alpha \mapsto \alpha_1^{-1}\alpha\alpha_1 = \alpha\alpha_2^{-1}, \alpha_1 \mapsto \alpha_1$ and $\beta \mapsto \beta$. Suppose $|Aut_c(G)| = p^7$. Then
$\sigma$ extends to a class-preserving automorphism. But then, $1 = \sigma([\alpha, \beta]) = [\alpha\alpha_2^{-1}, \beta] = \alpha_4^{-1}$, which is not possible. Therefore,
we have $|Aut_c(G)| = p^6.$ Since $|G/Z(G)| = p^5$, we have $|Out_c(G)| = p$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G$ be the group $\Phi_{36}(1^6).$ Then $|Out_c(G)| = p$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of maximal class, minimally generated by $\alpha$ and $\alpha_1$. The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\alpha_{i+1} := [\alpha_i, \alpha]$ for $i = 1, 2 ,3$ and $\alpha_5 := [\alpha_4, \alpha] = [\alpha_1, \alpha_2]$. The center
$Z(G)$ is of order $p$, generated by $\alpha_5.$ We will show that for every element $g \in G-\gamma_2(G), Z(G) \leq [g, G]$. Let $g = \eta_1\alpha^{k_1}\alpha_1^{l_1}$.
Then \[[g, \alpha_4^{k_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}, \alpha_4^{k_2}] = \alpha_5^{-k_1k_2}.\]
Therefore, if $k_1$ is non-zero modulo $p$, we have $Z(G) \leq [g, G]$. Let $k_1 \equiv 0\pmod{p}$. Then
\[[g, \alpha_2^{l_2}] = [\eta_1\alpha_1^{l_1}, \alpha_2^{l_2}] = \alpha_5^{l_1l_2}.\]
Therefore, if $l_1$ is non-zero modulo $p$, $Z(G) \leq [g, G]$. Let $l_1 \equiv 0\pmod{p}$, then we have $g \in \gamma_2(G)$ because $\alpha_1^p \in \gamma_2(G)$.
It follows that, for every $g \in G-\gamma_2(G), Z(G) \leq [g, G]$. Hence from lemmas \ref{lem2} and \ref{lem3} we get
\[|Aut_c(G)| \geq |Autcent(G)||G|/|Z_2(G)| = p^2p^6/p^2 = p^6.\] But it is easy to see that
$|\alpha^{G}| \leq p^4$ and $|\alpha_1^{G}| \leq p^2$. Therefore $|Aut_c(G)| \leq p^6$. Hence
$|Aut_c(G)| = p^6$. Since $|G/Z(G)| = p^5$, we have $|Out_c(G)| = p$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G$ be the group $\Phi_{38}(1^6)$. Then $|Out_c(G)| = p$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of maximal class, minimally generated by $\alpha$, $\alpha_1$. The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\alpha_{i+1} := [\alpha_i, \alpha]$ for $i = 1, 2 ,3$ and $\alpha_5 := [\alpha_4, \alpha] = [\alpha_1, \alpha_3]$. Also $[\alpha_1, \alpha_2] = \alpha_4\alpha_5^{-1}.$ The center $Z(G)$ is generated by $\alpha_5.$
We show that for every element $g \in G-\gamma_2(G), Z(G) \leq [g, G]$. Let $g = \eta_1\alpha^{k_1}\alpha_1^{l_1}$.
Then \[[g, \alpha_4^{k_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}, \alpha_4^{k_2}] = \alpha_5^{-k_1k_2}.\]
Therefore, if $k_1$ is non-zero modulo $p$, $Z(G) \leq [g, G]$. Hence, let $k_1 \equiv 0\pmod{p}$. Then
\[[g, \alpha_3^{l_2}] = [\eta_1\alpha_1^{l_1}, \alpha_3^{l_2}] = \alpha_5^{l_1l_2}.\]
Therefore, if $l_1$ is non-zero modulo $p$, we have $Z(G) \leq [g, G]$. If $l_1 \equiv 0\pmod{p}$, then we have $g \in \gamma_2(G)$ because $\alpha_1^p \in \gamma_2(G)$.
It follows that, for every $g \in G-\gamma_2(G), \ Z(G) \leq [g, G]$. Therefore applying lemmas \ref{lem2} and \ref{lem3} we get
\[|Aut_c(G)| \geq |Autcent(G)||G|/|Z_2(G)| = p^2p^6/p^2 = p^6.\] It is easy to check that
$|\alpha^{G}| \leq p^4$ and $|\alpha_1^{G}| \leq p^3$. Hence the upper bound for $|Aut_c(G)|$ is $p^7$. Suppose that $|Aut_c(G)| = p^7$.
Then the map $\delta$ defined on the generating set $\{\alpha, \alpha_1\}$ as $\delta(\alpha) = \alpha$ and $\delta(\alpha_1) = \alpha_2^{-1}\alpha_1\alpha_2 = \alpha_1\alpha_4\alpha_5^{-1}$
must extend to a class-preserving automorphism of $G$. Note that \[\delta(\alpha_2) = \delta([\alpha_1, \alpha]) = [\alpha_1\alpha_4\alpha_5^{-1}, \alpha] = \alpha_2\alpha_5.\]
Since $\delta$ is a class-preserving automorphism, there exists an $\eta_1 \in \gamma_2(G)$ and $k_1, l_1 \in \mathbb{Z}$ such that
\[[\alpha_2, \eta_1\alpha^{k_1}\alpha_1^{l_1}] = \alpha_5.\]
We have that
\[[\alpha_2, \eta_1\alpha^{k_1}\alpha_1^{l_1}] = \alpha_3^{k_1}\alpha_4^{k_1(k_1-1)/2 - l_1}\alpha_5^{l_1 - k_1l_1 + \sum\limits_{n=1}^{k_1-2} n(n-1)/2},\]
which for no values of $k_1$ and $l_1$ can be equal to $\alpha_5$. This gives a contradiction. Hence $|Aut_c(G)| = p^6$.
Since $|G/Z(G)| = p^5$, we have $|Out_c(G)| = p$. \hfill $\Box$
\end{proof}
\begin{lemma} \label{lem9}
Let $G$ be the group $\Phi_{39}(1^6)$.
Then $|Out_c(G)| = p$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of maximal class, minimally generated by $\alpha, \alpha_1$. The commutator subgroup $\gamma_2(G)$ is generated by $\alpha_{i+1} := [\alpha_i, \alpha]$ for $i = 1,2$, $\alpha_4 := [\alpha_3, \alpha] = [\alpha_1, \alpha_2]$, and $\alpha_5 := [\alpha_2, \alpha_3] = [\alpha_3, \alpha_1] = [\alpha_4, \alpha_1]$. Note that any element of $\gamma_2(G)$
can be written as $\alpha_2^r\alpha_3^s\alpha_4^t\alpha_5^u$ for some $r, s, t, u \in \mathbb{Z}$. The center $Z(G)$ is generated by $\alpha_5$. It is easy to see that
$|\alpha^{G}| \leq p^3$ and $|\alpha_1^{G}| \leq p^3$.
Therefore $|Aut_c(G)| \leq p^6$. Define a map $\delta : \{\alpha, \alpha_1 \} \rightarrow G$ such that
$\alpha \mapsto \alpha$, and $\alpha_1 \mapsto \alpha_2^{-1}\alpha_1\alpha_2 = \alpha_1\alpha_4$.
We will show that $\delta$ extends to a class preserving automorphism of $G$. It is easy to check that $\delta$ preserves all the defining relations of the group. Hence $\delta$ extends to an endomorphism. Note that $\delta
$ fixes $\gamma_2(G)$ element-wise, therefore for $k_1, l_1 \in \mathbb{Z}$ and $\eta_1 = \alpha_2^{r_1}\alpha_3^{s_1}\alpha_4^{t_1}\alpha_5^{u_1}
\in \gamma_2(G)$,
\[\delta(\eta_1\alpha^{k_1}\alpha_1^{l_1}) = \eta_1\alpha^{k_1}\alpha_1^{l_1}\alpha_4^{l_1}\alpha_5^{l_1(l_1-1)/2}.\]
It is a routine calculation that
\[[\eta_1\alpha^{k_1}\alpha_1^{l_1}, \alpha_3^{s_2}\alpha_4^{t_2}] = \alpha_5^{r_1s_2 - l_1s_2 - l_1t_2 - l_1k_1s_2}\alpha_4^{-k_1s_2}.\]
Since $\delta(\eta_1\alpha^{k_1}) = \eta_1\alpha^{k_1}$, which is obviously a conjugate of $\eta_1\alpha^{k_1}$, we may assume that $l_1$ is non-zero modulo $p$.
Now if $k_1$ is non-zero modulo $p$, then clearly there exist $s_2$ and $t_2$ such that
\[-k_1s_2 \equiv l_1 \pmod{p}\]
and
\[r_1s_2 - l_1s_2 - l_1t_2 - l_1k_1s_2 \equiv l_1(l_1-1)/2 \pmod{p}.\]
Suppose $k_1 \equiv 0\pmod{p}$, then it can be calculated that
\[[\eta_1\alpha_1^{l_1}, \alpha_2^{r_2}\alpha_4^{t_2}] = \alpha_5^{-s_1r_2 + r_2l_1(l_1-1)/2 - l_1t_2}\alpha_4^{l_1r_2}.\]
Since $l_1$ is non-zero modulo $p$, clearly there exist $r_2$ and $t_2$ such that
\[l_1r_2 \equiv l_1 \pmod{p}\]
and
\[-s_1r_2 + r_2l_1(l_1-1)/2 - l_1t_2 \equiv l_1(l_1-1)/2 \pmod{p}.\]
It follows that $\delta$ maps every element of $G$ to a conjugate of itself. Therefore $\delta$ is a bijection, and hence a class-preserving automorphism of $G$. Now we show that
$\delta$ is a non-inner automorphism. On the contrary, suppose that $\delta$ is an inner automorphism. Note that $g^{-1}\delta(g) \in Z_2(G)$. Thus it follows that $\delta$ is induced by some element in
$Z_3(G) = \gen{\alpha_3, \alpha_4, Z(G)}$, where $Z_3(G)$ denotes the third term in the upper central series of $G$. Let it be induced by $\alpha_3^{s_1}\alpha_4^{t_1}$. Since $\delta(\alpha) = \alpha$, we have
$\alpha_3^{-s_1}\alpha\alpha_3^{s_1} = \alpha$, which implies that $s_1 \equiv 0\pmod{p}$. But then $\delta(\alpha_1) = \alpha_4^{-t_1}\alpha_1\alpha_4^{t_1} = \alpha_1\alpha_5^{-t_1}$,
a contradiction. Therefore $\delta$ is a non-inner class preserving automorphism. Since $|Inn(G)| = p^5$, we have $|Aut_cG)| = p^6$, and therefore $|Out_c(G)| = p$. \hfill $\Box$
\end{proof}
\begin{lemma}\label{lem10}
Let $G$ be the group $\Phi_{13}(1^6)$. Then $|Out_c(G)| = p^2$.
\end{lemma}
\begin{proof}
The group $G$ is a special $p$-group, minimally generated by $\alpha_1, \alpha_2, \alpha_3$ and $\alpha_4$.
The commutator subgroup $\gamma_2(G)$ is generated by $\beta_1 := [\alpha_1, \alpha_2]$ and $\beta_2 := [\alpha_1, \alpha_3] = [\alpha_2, \alpha_4]$. The conjugates of $\alpha_1, \alpha_2, \alpha_3$ and $\alpha_4$ are
$\alpha_1\beta_1^r\beta_2^s$, $\alpha_2\beta_1^r\beta_2^s, \alpha_3\beta_2^s$ and $\alpha_4\beta_2^s$ respectively, where $r$ and $s$ vary over $\mathbb{Z}$.
Since the exponent of $\gamma_2(G)$ is $p$, it follows that $|\alpha_1^G| = |\alpha_2^G| = p^2$ and $|\alpha_3^G| = |\alpha_4^G| = p$.
Therefore by Lemma \ref{lem6}, $|Aut_c(G)| \leq p^6$.
Define a map $\delta : \{\alpha_1, \alpha_2, \alpha_3, \alpha_4\} \rightarrow G$ such that $\alpha_1 \mapsto \alpha_1\beta_1^{r_1}\beta_2^{s_1}, \alpha_2 \mapsto
\alpha_2\beta_1^{r_2}\beta_2^{s_2}$, $\alpha_3 \mapsto \alpha_3\beta_2^{s_3}$ and $\alpha_4 \mapsto \alpha_4\beta_2^{s_4}$, for some $r_1, s_1, r_2, s_2, s_3, s_4 \in \mathbb{Z}$. By Lemma \ref{lemay}
this map extends to a central automorphism of $G$. Since $\delta$ fixes $\gamma_2(G)$ element-wise,
for $k_1, l_1, m_1, n_1 \in \mathbb{Z}$ and $\eta_1 \in \gamma_2(G)$,
\[\delta(\eta_1\alpha_1^{k_1}\alpha_2^{l_1} \alpha_3^{m_1}\alpha_4^{n_1}) =
\eta_1\alpha_1^{k_1}\alpha_2^{l_1} \alpha_3^{m_1}\alpha_4^{n_1}\beta_1^{k_1r_1 + l_1r_2}\beta_2^{k_1s_1 + l_1s_2 + m_1s_3 + n_1s_4}.\]
Therefore $\delta$ extends to a class-preserving automorphism if and only if for every $k_1, l_1, m_1, n_1 \in \mathbb{Z}$, and $\eta_1 \in \gamma_2(G)$,
there exist $k_2, l_2, m_2, n_2$ (depending on $k_1, l_1, m_1, n_1$) and $\eta_2 \in \gamma_2(G)$ such that
\[[\eta_1\alpha_1^{k_1}\alpha_2^{l_1}\alpha_3^{m_1}\alpha_4^{n_1}, \eta_2\alpha_1^{k_2}\alpha_2^{l_2}\alpha_3^{m_2}\alpha_4^{n_2}] =
\beta_1^{k_1r_1 + l_1r_2}\beta_2^{k_1s_1 + l_1s_2 + m_1s_3 + n_1s_4}.\]
Expanding the left hand side, we get
\[\beta_1^{k_1l_2 - k_2l_1}\beta_2^{k_1m_2 - k_2m_1 + l_1n_2 - l_2n_1} = \beta_1^{k_1r_1 + l_1r_2}\beta_2^{k_1s_1 + l_1s_2 + m_1s_3 + n_1s_4}.\]
Comparing the powers of $\beta_i$'s, we see that $\delta$ extends to a class-preserving automorphism if the following equations hold:
\[k_1l_2 - k_2l_1 \equiv k_1r_1 + l_1r_2 \pmod{p},\]
\[k_1m_2 - k_2m_1 + l_1n_2 - l_2n_1 \equiv k_1s_1 + l_1s_2 + m_1s_3 + n_1s_4 \pmod{p}.\]
It is easy to see that for any given $k_1, l_1, m_1, n_1$ there exist $k_2, l_2, m_2, n_2$ such that the above two equations are satisfied. Thus it follows that
$|Aut_c(G)| = p^6$. Since $|G/Z(G)| = p^4$, we have $|Out_c(G)| = p^2$. \hfill $\Box$
\end{proof}
\begin{lemma}
Let $G$ be the group $\Phi_{18}(1^6)$. Then $|Out_c(G)| = p^2$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $3$, minimally generated by $\alpha, \alpha_1, \beta$.
The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\alpha_2:= [\alpha_1, \alpha], \alpha_3 := [\alpha_2, \alpha] = [\alpha_1, \beta]$ and $\gamma := [\alpha, \beta]$. The center $Z(G)$ is of order $p^2$, generated by $\alpha_3$ and $\gamma$.
Note that $|\alpha^{G}| \leq p^3, |\alpha_1^{G}| \leq p^2$ and $|\beta^{G}| \leq p^2$. It follows from Lemma \ref{lem6} that
$|Aut_c(G)| \leq p^7$.
Now define a map $\delta : \{\alpha, \alpha_1, \beta \} \rightarrow G$ such that $\alpha \mapsto \alpha\alpha_3^{r_1}\gamma^{s_1}, \alpha_1 \mapsto
\alpha_1\alpha_3^{r_2}$ and $\beta \mapsto \beta\alpha_3^{r_3}$ for some $r_1, s_1, r_2, r_3 \in \mathbb{Z}$.
By Lemma \ref{lemay} this map extends to a central automorphism of $G$.
Let $g = \eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}$, where $\eta_1 \in \gamma_2(G)$ and
$k_1, l_1, m_1 \in \mathbb{Z}$. Let $k_1 \equiv 0\pmod{p}$. Then note that $\delta(g) = g\alpha_3^r$ for some $r \in \mathbb{Z}$ and
\[[g, \beta^{l_2}] = [\eta_1\alpha_1^{l_1}\beta^{m_1}, \beta^{l_2}] = \alpha_3^{l_1l_2}.\]
Therefore if $l_1$ is non-zero modulo $p$, we have $\gen{\alpha_3} \leq [g, G]$. Let $l_1 \equiv 0\pmod{p}$. Note that $\alpha_1^p \in Z(G)$, hence
\[[g, \alpha_1^{m_2}] = [\eta_1\beta^{m_1}, \alpha_1^{m_2}] = \alpha_3^{-m_1m_2}.\]
Therefore if $m_1$ is non-zero modulo $p$, we have $\gen{\alpha_3} \leq [g, G]$. Thus we have shown that, if $k_1 \equiv 0\pmod{p}$, then
$g^{-1}\delta(g) \in [g, G].$ It follows that, if $k_1 \equiv 0\pmod{p}$, $\delta$ maps $g$ to a conjugate of $g$. Now suppose $k_1$ is non-zero modulo $p$. Then
\[[g, \alpha_2^{k_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}, \alpha_2^{k_2}] = \alpha_3^{-k_1k_2},\]
so that $\gen{\alpha_3} \leq [g, Z_2(G)]$. Also, we have
\[[g, \beta^{n_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}\beta^{m_1}, \beta^{n_2}] = \alpha_3^{l_1n_2}\gamma^{k_1n_2}.\]
Since $\beta \in Z_2(G)$ and $[g, Z_2(G)]$ is a subgroup, it follows that $Z(G) \leq [g, G]$. Because
$\delta$ is a central automorphism, we have $g^{-1}\delta(g) \in [g, G]$. We have shown that, for every $g \in G$,
$g^{-1}\delta(g) \in [g, G].$
It follows that $\delta$ is a class-preserving automorphism.
Since $r_1, s_1, r_2, r_3$ were arbitrary, we have that $|Aut_c(G) \cap Autcent(G)| \geq p^4$.
Applying Lemma \ref{lempkr} we get $|Z(Aut_c(G))| \geq p^4$. Note that $Aut_c(G)$ is non-abelian because $G$ is a group of class 3; being a $p$-group, its centre therefore has index at least $p^2$.
Therefore $|Aut_c(G)| \geq p^6$. Now suppose that $|Aut_c(G)| = p^7.$ Then the map $\sigma$ defined on the generating set $\{\alpha, \alpha_1, \beta\}$
as $\alpha \mapsto \alpha, \alpha_1 \mapsto \alpha_1$ and $\beta \mapsto \alpha\beta\alpha^{-1} = \beta\gamma$ extends to a class-preserving automorphism.
Hence there exist $\eta_2 \in \gamma_2(G)$ and $k_2, l_2, m_2 \in \mathbb{Z}$ such that
$[\alpha_1\beta, \ \eta_2\alpha^{k_2}\alpha_1^{l_2}\beta^{m_2}] = \gamma$, but by a routine calculation it can be checked that
\[[\alpha_1\beta, \ \eta_2\alpha^{k_2}\alpha_1^{l_2}\beta^{m_2}] = \alpha_2^{k_2}\beta\alpha_3^{m_2 - l_2 + k_2(k_2 - 1)/2}\gamma^{-k_2},\]
which can not be equal to $\gamma$ for any values of $k_2, l_2$ and $m_2$. Therefore we get a contradiction and it follows that $|Aut_c(G)| = p^6$. Hence $|Out_c(G)| = p^2$ as $|Inn(G)| = p^4$. \hfill $\Box$
\end{proof}
\begin{lemma} \label{lem11}
Let $G$ be the group $\Phi_{20}(1^6)$. Then $|Out_c(G)| = p^2$.
\end{lemma}
\begin{proof}
The group $G$ is a $p$-group of class $3$, minimally generated by $\alpha, \alpha_1, \alpha_2$.
The commutator subgroup $\gamma_2(G)$ is abelian and generated by $\beta:= [\alpha_1, \alpha_2], \beta_1 := [\beta, \alpha_1]$ and $\beta_2 := [\beta, \alpha_2] = [\alpha, \alpha_1]$. The center $Z(G)$ is of order $p^2$, generated by $\beta_1$ and $\beta_2$.
Note that $|\alpha^{G}| = p, |\alpha_1^{G}| \leq p^3$ and $|\alpha_2^{G}| \leq p^2$. It follows from Lemma \ref{lem6} that
$|Aut_c(G)| \leq p^6$.
Now define a map $\delta : \{\alpha, \alpha_1, \alpha_2\} \rightarrow G$ such that $\alpha \mapsto \alpha\beta_2^{t_1}, \alpha_1 \mapsto
\alpha_1\beta_1^{s_2}\beta_2^{t_2}$ and $\alpha_2 \mapsto \alpha_2\beta_2^{t_3}$ for some $t_1, s_2, t_2, t_3 \in \mathbb{Z}$.
By Lemma \ref{lemay} this map extends to a central automorphism of $G$.
Let $g = \eta_1\alpha^{k_1}\alpha_1^{l_1}\alpha_2^{m_1}$, where $\eta_1 = \beta^{u_1}\beta_1^{v_1}\beta_2^{w_1}$ and
$k_1, l_1, m_1, u_1, v_1, w_1 \in \mathbb{Z}$. Note that, if $l_1 \equiv 0\pmod{p}$, then $\delta(g) = g\beta_2^r$ for some $r \in \mathbb{Z}$.
Consider \[[g, \beta^{m_2}] = [\eta_1\alpha^{k_1}\alpha_2^{m_1}, \beta^{m_2}] = \beta_2^{-m_1m_2}.\]
Therefore if $m_1$ is non-zero modulo $p$, we have $\gen{\beta_2} \leq [g, G]$. Let $m_1 \equiv 0\pmod{p}$. Note that $\alpha_2^p \in Z(G)$, hence
\[[g, \alpha_2^{u_2}] = [\eta_1\alpha^{k_1}, \alpha_2^{u_2}] = [\beta^{u_1}\beta_1^{v_1}\beta_2^{w_1}\alpha^{k_1}, \ \alpha_2^{u_2}] = \beta_2^{u_1u_2}.\]
Therefore if $u_1$ is non-zero modulo $p$, we have $\gen{\beta_2} \leq [g, G]$. Let $u_1 \equiv 0\pmod{p}$. Then
\[[g, \alpha_1^{k_2}] = [\alpha^{k_1}, \alpha_1^{k_2}] = \beta_2^{k_1k_2},\]
so that if $k_1$ is non-zero modulo $p$, then $\gen{\beta_2} \leq [g, G]$. Thus we have shown that if $l_1 \equiv 0\pmod{p}$,
then $g^{-1}\delta(g) \in [g, G].$ It follows that if $l_1 \equiv 0\pmod{p}$, then $\delta$ maps $g$ to a conjugate of $g$. Now suppose that $l_1$ is non-zero modulo $p$. Then
\[[g, \alpha^{l_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}\alpha_2^{m_1}, \alpha^{l_2}] = \beta_2^{-l_1l_2},\]
so that $\gen{\beta_2} \leq [g, Z_2(G)]$. Also we have
\[[g, \beta^{n_2}] = [\eta_1\alpha^{k_1}\alpha_1^{l_1}\alpha_2^{m_1}, \beta^{n_2}] = \beta_1^{-l_1n_2}\beta_2^{-m_1n_2}.\]
Since $\beta \in Z_2(G)$ and $[g, Z_2(G)]$ is a subgroup, it follows that $Z(G) \leq [g, G]$. Because
$\delta$ is a central automorphism, we have $g^{-1}\delta(g) \in [g, G]$. It follows that $\delta$ is a class-preserving automorphism of $G$.
Since $t_1, s_2, t_2, t_3$ were arbitrary, we have that $|Aut_c(G) \cap Autcent(G)| \geq p^4$.
Applying Lemma \ref{lempkr} we get $|Z(Aut_c(G))| \geq p^4$. But $Aut_c(G)$ is non-abelian since $G$ is of class 3; being a $p$-group, its centre has index at least $p^2$.
Therefore $|Aut_c(G)| \geq p^6$. Hence $|Aut_c(G)| = p^6.$ Since $|G/Z(G)| = p^4$, we have $|Out_c(G)| = p^2$. \hfill $\Box$
\end{proof}
Now we are ready to prove the following result, which together with Theorem \ref{prop1} proves Theorem A.
\begin{thm} \label{prop2}
Let $G$ be a group of order $p^6$. Then the following hold true.
\begin{enumerate}
\item If $G$ belongs to any of the isoclinism families $\Phi_k$ for $k = 7, 10, 24, 30, 36, 38, 39$, then $|Out_c(G)| = p$.
\item If $G$ belongs to any of the isoclinism families $\Phi_k$ for $k = 13, 18, 20$, then $|Out_c(G)| = p^2$.
\item If $G$ belongs to any of the isoclinism families $\Phi_k$ for $k = 15, 21$, then $|Out_c(G)| = p^4$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $G$ be either the group $\Phi_{7}(1^6)$ or the group $\Phi_{10}(1^6)$. Then, since
$\Phi_{7}(1^6) = \Phi_{7}(1^5) \times C_p$ and $\Phi_{10}(1^6) = \Phi_{10}(1^5) \times C_p$, it follows from \cite[Lemma 5.1 and Lemma 5.2]{MKY} that $|Out_c(G)| = p$.
With this observation in hand, (i) follows from Lemmas \ref{lem8}--\ref{lem9}. It is readily seen that (ii) follows from Lemmas \ref{lem10}--\ref{lem11}. Now
let $G$ be the group $\Phi_{15}(1^6)$. We observe that
James' list of groups of order $p^6$ and class 2 consists of exactly 5 isoclinism families $\Phi_{k}$ for $k =11, \ldots, 15$. As we have seen above,
$|Aut_c(H)| \neq p^8$ for $H \in \{\Phi_k \mid 11\le k \le 14\}$. But, as shown by Burnside \cite{WB}, there exists a group $W$ of order $p^6$ and of nilpotency class 2 such that $|Aut_c(W)| = p^8$.
Therefore $G$ must be isoclinic to the group $W$ and $|Aut_c(G)| = p^8$. Since $|G/Z(G)| = p^4$, we have $|Out_c(G)| = p^4$.
Next, it follows from \cite[Proposition 5.8]{MKYLMS}, that $|Out_c(\Phi_{21}(1^6))| = p^4$. This completes the proof of the theorem. \hfill $\Box$
\end{proof}
\noindent{\bf Acknowledgements.} We thank the referee for his/her useful comments and suggestions, which have made the paper more readable.
\section{Introduction}\label{S:intro}
The maximum force/maximum tension conjecture was independently mooted some 20 years ago by Gary Gibbons~\cite{Gibbons2002} and Christoph Schiller~\cite{Schiller1997}.
At its heart one starts by noting that in (3+1) dimensions the quantity
\begin{eqnarray}
\label{eq:F_*} F_* =\frac{c^4}{G_N} \approx 1.2\times10^{44} \hbox{ N}
\end{eqnarray}
has the dimensions of force (equivalently, tension). Here $c$ is the speed of light in vacuum, and $G_N$ is Newton's gravitational constant. Thereby \emph{any} physical force can \emph{always} be written in the form
\begin{equation}
F_\mathrm{physical} = f\; F_*,
\end{equation}
where the quantity $f$ is some dimensionless function of dimensionless parameters.
In very many situations~\cite{Gibbons2002,Schiller1997,Barrow2014,Barrow2020} explicit calculations yield $f \leq {1\over4}$, though sometimes numbers such as $f\leq {1\over2}$ also arise~\cite{Yen2018}. Specifically, Yen Chin Ong \cite{Yen2018} formulated strong and weak versions of the conjecture:
\begin{enumerate}
\itemsep-1pt
\item {Strong form:} \quad $f\leq {1\over4}$.
\item {Weak form:} \quad $f = {\mathcal{O}}(1)$.
\end{enumerate}
Note that $F_* = E_\mathrm{Planck}/L_\mathrm{Planck}$ can also be interpreted as the Planck force, though it is not intrinsically quantum as the various factors of $\hbar$ cancel, at least in (3+1) dimensions. Furthermore it is sometimes interesting~\cite{Schiller2005} to note that the Einstein equations
\begin{equation}
G_{ab}= 8\pi \; {G_N\over c^4}\; T_{ab},
\end{equation}
can be written in terms of $F_*$ as
\begin{equation}
T_{ab} = { F_*\over8\pi} \; G_{ab}.
\end{equation}
When recast in this manner, maximum forces conjectures have tentatively been related to Jacobson's entropic derivation of the Einstein equations~\cite{Jacobson-entropy}.
Considerable work has also gone into attempts at pushing various modifications of the maximum force conjecture beyond the framework of standard general relativity~\cite{Dabrowski2015,Bolotin2015}.
Overall, while there is little doubt that the quantity $F_*$ is physically important, we feel that the precise status of the maximum force conjecture is much less certain, and is less than universal.
We shall investigate these conjectures within the context of standard general relativity, focussing on illustrative \emph{counter-examples} based on simple physical systems, analyzing the internal forces, and checking the extent to which the counter-examples are physically reasonable.
Specifically, we shall consider static spherically symmetric fluid spheres~\cite{Delgaty, Rahman:2001, Martin:2003-a, Martin:2003-b, Boonserm:2005, Boonserm:2006-a, Boonserm:2006-b, Boonserm:2007-1, Boonserm:2007-2}, and investigate both radial and equatorial forces. We shall also include an analysis of the speed of sound, and the relevant classical energy conditions, specifically the dominant energy condition (DEC),
see~\cite{ECs, book, twilight, Visser:1997, Visser:1996, Cattoen:2006, LNP-survey, Martin-Moruno:2013-a, Martin-Moruno:2013-b, Martin-Moruno:2015}.
We shall see that even the most elementary static spherically symmetric fluid sphere, Schwarzschild's constant density star, raises significant issues for the maximum force conjecture. Other models, such as the Tolman~IV solution and its variants are even worse.
Generically, we shall see that any perfect fluid sphere on the verge of gravitational collapse will violate the
weak (and strong) maximum force conjectures.
Consequently, if one wishes to retain any truly universal notion of ``maximum force'' then one will at the very least have to very carefully delineate precisely which forces are to be allowed within the domain of discourse.
\enlargethispage{40pt}
\vspace{-15pt}
\section{Spherical symmetry}\label{C:SS}
\vspace{-10pt}
Consider a spherically symmetric spacetime, with metric given in Schwarzschild curvature coordinates:
\begin{equation}
ds^2 = g_{tt} \; dt^2 + g_{rr} \; dr^2 + r^2 (d\theta^2 +\sin^2\theta\; d\phi^2).
\end{equation}
We do not yet demand pressure isotropy, and for the time being allow radial and transverse pressures to differ, that is $p_r \neq p_t$.
Pick a spherical surface at some specified value of the radial coordinate $r$. Define
\begin{equation}
F_r(r) = \int p_r(r) \; dA = 4\pi \;p_r(r) \; r^2.
\label{eq:Radial Force}
\end{equation}
This quantity simultaneously represents the compressive force exerted by outer layers of the system on the core, and the supporting force exerted by the core on the outer layers of the system.
\enlargethispage{20pt}
Consider any equatorial slice through the system and define the equatorial force by
\begin{equation}
F_{eq} = \int p_t(r) \; dA = 2\pi \int^{R_s}_{0} \sqrt{g_{rr}} \; p_t(r) \; r dr.
\label{eq:Equatorial Force}
\end{equation}
This quantity simultaneously represents the force exerted by the lower hemisphere of the system on the upper hemisphere, and the force exerted by the upper hemisphere of the system on the lower hemisphere.
Here $R_s$ is the location of the surface of the object (potentially taken as infinite). As we are investigating spherically symmetric systems, the specific choice of hemisphere is irrelevant.
\section{Perfect fluid spheres}\label{C:PF}
\subsection{Generalities}\label{SS:generalities}
The perfect fluid condition excludes pressure anisotropy, so that radial and transverse pressures are set equal: $p(r) = p_r(r) = p_t(r)$. Once this is done, the radial and equatorial forces simplify to
\begin{equation}
F_r(r) = \int p(r) \; dA = 4\pi \;p(r) \; r^2;
\label{eq:Radial Force2}
\end{equation}
\begin{equation}
F_{eq} = \int p(r) \; dA = 2\pi \int^{R_s}_{0} \sqrt{g_{rr}} \; p(r) \; r dr.
\label{eq:Equatorial Force2}
\end{equation}
Additionally, we shall impose the conditions that pressure is positive and decreases as one moves outwards, with zero pressure defining the surface of the object~\cite{Delgaty, Rahman:2001, Martin:2003-a, Martin:2003-b, Boonserm:2005, Boonserm:2006-a, Boonserm:2006-b, Boonserm:2007-1, Boonserm:2007-2}.\footnote{There is a minor technical change in the presence of a cosmological constant: the surface is then defined by $p(R_s) = p_\Lambda$.}
Similarly density is positive and does not increase as one moves outwards, though density need not be and typically is not zero at the surface~\cite{Delgaty, Rahman:2001, Martin:2003-a, Martin:2003-b, Boonserm:2005, Boonserm:2006-a, Boonserm:2006-b, Boonserm:2007-1, Boonserm:2007-2}.
We note that for the radial force we have by construction
\begin{equation}
F_r(0)=0; \qquad F_r(R_s) = 0; \qquad \hbox{and for} \quad r\in(0,R_s):\; F_r(r) > 0.
\end{equation}
In particular in terms of the central pressure $p_0$ we have the particularly simple bound
\begin{equation}
F_r(r) < 4\pi\; p_0 \; R_s^2.
\end{equation}
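It is perhaps useful to note that, dividing through by $F_* = c^4/G_N$, this is simply the dimensionless statement
\begin{equation}
f_r = \frac{F_r}{F_*} < \frac{4\pi \, G_N \, p_0 \, R_s^2}{c^4},
\end{equation}
which is the form most directly relevant to the maximum force conjecture.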
This suggests that in general an (extremely) weak version of the maximum force conjecture might hold for the radial force, at least within the framework outlined above, and as long as the central pressure is finite. Unfortunately without some general relationship between central pressure $p_0$ and radius $R_s$ this bound is less useful than one might hope. For the strong version of the maximum force conjecture no such simple argument holds for $F_r$, and one must perform a case-by-case analysis.
For the equatorial force $F_{eq}$ there is no similar argument of comparable generality, and one must again perform a case-by-case analysis.
Turning now to the classical energy conditions~\cite{ECs,book,twilight, Visser:1997, Visser:1996, Cattoen:2006, LNP-survey,Martin-Moruno:2013-a,Martin-Moruno:2013-b,Martin-Moruno:2015}, they add extra restrictions to ensure various physical properties remain well-behaved. For our perfect fluid solutions, these act as statements relating the pressure $p$ and the density $\rho$ given by the stress-energy tensor $T_{\hat{\mu}\hat{\nu}}$.
Since, (in view of our fundamental assumptions that pressure and density are both positive), the null, weak, and strong energy conditions, (NEC, WEC, SEC) are always automatically satisfied, we will \emph{only} be interested in the dominant energy condition (DEC).
In the current context the dominant energy condition only adds the condition $|p| \leq \rho$.
But since in the context of perfect fluid spheres, the pressure is always positive, it is more convenient to simply write this as
\begin{equation}
\frac{p}{\rho} \leq 1; \qquad \hbox{that is} \qquad p \leq \rho.
\label{eq:DEC}
\end{equation}
The best physical interpretation of the DEC is that it guarantees that any timelike observer with 4-velocity $V^a$ will observe a flux $F^a = T^{ab} \,V_b$ that is non-spacelike (either timelike or null)~\cite{LNP-survey}. However, it should be pointed out that the DEC, being the strongest of the classical energy conditions, is also the easiest to violate --- indeed there are several known situations in which the classical DEC is violated by quantum effects~\cite{book, twilight, Visser:1997, Visser:1996, Cattoen:2006, LNP-survey,Martin-Moruno:2013-a,Martin-Moruno:2013-b,Martin-Moruno:2015}.
The DEC is sometimes [somewhat misleadingly] interpreted in terms of the speed of sound not being superluminal: naively $v_s^2 = \partial p/\partial \rho \leq 1$; whence $p\leq \rho-\rho_\mathrm{surface} <\rho$.
But the implication is only one-way, and in addition the argument depends on extra technical assumptions to the effect that the fluid sphere is well-mixed with a unique barotropic equation of state $p(\rho)$ holding throughout the interior. To clarify this point, suppose the equation of state is not barotropic, so that $p=p(\rho, z_i)$, with the $z_i$ being some collection of intensive variables, (possibly chemical concentrations, entropy density, or temperature). Then we have
\begin{equation}
{dp \over dr } = {\partial p\over\partial\rho}\; {d\rho\over dr} +
\sum_i {\partial p\over\partial z^i} \;{d z^i\over dr}
=
v_s^2(\rho,z^i)\; {d\rho\over dr} +
\sum_i {\partial p\over\partial z^i} \;{d z^i\over dr}.
\end{equation}
Then, (noting that $d\rho/dr$ is non-positive as one moves outwards), enforcing the speed of sound to not be superluminal implies
\begin{equation}
{dp \over dr } \geq {d\rho\over dr} +
\sum_i {\partial p\over\partial z^i} \;{d z^i\over dr}.
\end{equation}
Integrating this from the surface inwards we have
\begin{equation}
p(r) \leq \rho(r) - \rho(R_s)+
\sum_i \int_r^{R_s} {\partial p(\rho,z^i)\over\partial z^i} \;{d z^i\over dr} \;dr.
\end{equation}
Consequently, unless one either makes an explicit barotropic assumption $\partial p/\partial z^i=0$, or otherwise at the very least has some very tight control over the partial derivatives $\partial p/\partial z^i$,
one simply cannot use an assumed non-superluminal speed of sound to deduce the DEC.
Neither can the DEC be used to derive a non-superluminal speed of sound, at least not without many extra and powerful technical assumptions. We have been rather explicit with this discussion since we have seen considerable confusion on this point.
Finally we note that there is some disagreement as to whether or not the DEC is truly fundamental~\cite{twilight, Visser:1997, Visser:1996, Cattoen:2006}.
\subsection{Schwarzschild's constant density star}
We shall now consider a classic example of perfect fluid star, Schwarzschild's constant density star~\cite{Sch:Int}, (often called the Schwarzschild interior solution), which was discovered very shortly after Schwarzschild's original vacuum solution~\cite{Sch:Ext}, (often called the Schwarzschild exterior solution).
It is commonly argued that Schwarzschild's constant density star is ``unphysical'' on the grounds that it allegedly leads to an infinite speed of sound. But this is a naive result predicated on the physically unreasonable hypothesis that the star is well-mixed with a barotropic equation of state $p=p(\rho)$. To be very explicit about this, all realistic stars are physically stratified with non-barotropic equations of state $p=p(\rho, z_i)$, with the $z_i$ being some collection of intensive variables, (possibly chemical concentrations, entropy density, or temperature). We have already seen that
\begin{equation}
{dp \over dr } = {\partial p\over\partial\rho}\; {d\rho\over dr} +
\sum_i {\partial p\over\partial z^i} \;{d z^i\over dr}
=
v_s^2(\rho,z^i)\; {d\rho\over dr} +
\sum_i {\partial p\over\partial z^i} \;{d z^i\over dr}.
\end{equation}
Thence for a constant density star, $d\rho/dr=0$, we simply deduce
\begin{equation}
{dp \over dr } =
\sum_i {\partial p\over\partial z^i} \;{d z^i\over dr}.
\end{equation}
This tells us nothing about the speed of sound, one way or the other --- it does tell us that there is a fine-tuning between the pressure $p$ and the intensive variables $z^i$, but that is implied by the definition of being a ``constant density star''.
We have been rather explicit with this discussion since we have seen considerable confusion on this point. Schwarzschild's constant density star is not ``unphysical''; it may be ``fine-tuned'' but it
is not \emph{a priori} ``unphysical''.
\clearpage
Specifically, the Schwarzschild interior solution describes the geometry inside a static spherically symmetric perfect fluid constant density star with radius $R_s$ and mass $M$ by the metric:
\begin{equation}
\label{Eq:IntSchwarzschild}
ds^2 = -\frac{1}{4}\left(3\sqrt{1- \frac{2 M}{ R_s}} - \sqrt{1 - \frac{2 M r^2}{ R_s^3}}\right)^2 dt^2
+ \left(1 - \frac{2 M r^2}{ R_s^3}\right)^{-1} dr^2 + r^2d\Omega^2.
\end{equation}
Here we have adopted geometrodynamic units ($c\to1$, $G_N\to1$). Calculating the non-zero orthonormal stress-energy components from the Einstein equations applied to this metric yields:
\begin{eqnarray}
T_{\hat{t}\hat{t}} &=& \rho = \frac{3M}{4\pi R_s^3};\\
T_{\hat{r}\hat{r}} &=& T_{\hat{\theta}\hat{\theta}} = T_{\hat{\phi}\hat{\phi}}
= p =
\rho \;\; \frac{\sqrt{1-\frac{2 M r^2}{ R_s^3}}-\sqrt{1-\frac{2 M}{ R_s}}}
{3\sqrt{1-\frac{2 M}{ R_s}}-\sqrt{1-\frac{2 M r^2}{ R_s^3}}}.
\end{eqnarray}
This gives us the relation between density and pressure, as well as demonstrating the perfect fluid condition ($p = p_r = p_t$), and also verifying that the density is (inside the star) a position independent constant.
In these geometrodynamic units both density and pressure have units 1/(length)$^2$, while forces are dimensionless.
Note that the pressure does in fact go to zero at $r\to R_s$, so $R_s$ really is the surface of the ``star''.
Rewriting the relation between pressure and density in terms of the simplified dimensionless quantities $\chi = \frac{2 M}{ R_s}$ and $y = \frac{r^2}{R_s^2}$ we see
\begin{eqnarray}
\label{eq:InteriorPressure}
p = \rho \;\;\frac{\sqrt{1-\chi y}-\sqrt{1-\chi}}{3\sqrt{1-\chi}-\sqrt{1-\chi y}}.
\end{eqnarray}
Here $0 \leq y \leq 1$, and $0\leq \chi <\frac{8}{9}$. The first of these ranges is obvious from the definition of $y$, while the second comes from considering the central pressure at $y=0$:
\begin{equation}
p_0 = \rho \;\;\frac{1-\sqrt{1-\chi}}{3\sqrt{1-\chi}-1}.
\end{equation}
Demanding that the central pressure be finite and positive requires $3\sqrt{1-\chi}-1 > 0$, that is, $\chi <\frac{8}{9}$.
(This is actually a rather more general result of general relativistic stellar dynamics, not restricted to constant density, see various discussions of the Buchdahl--Bondi bound \cite{Buchdahl-bound,Bondi-bound}.)
\subsubsection{Radial Force}
The radial force $F_r$ as defined by equation (\ref{eq:Radial Force2}) can be combined with the pressure-density relation given by equation (\ref{eq:InteriorPressure}), giving:
\begin{equation}
\label{eq:ISC F_r}
F_r = 4\pi p r^2 = 4\pi \rho R_s^2\; y \; \frac{\sqrt{1-\chi y}-\sqrt{1-\chi}}{3\sqrt{1-\chi}-\sqrt{1-\chi y}} = \frac{3}{2} \chi y \; \frac{\sqrt{1-\chi y}-\sqrt{1-\chi}}{3\sqrt{1-\chi}-\sqrt{1-\chi y}}.
\end{equation}
As advertised in both abstract and introduction, this quantity is indeed a dimensionless function of dimensionless variables. Furthermore this quantity is defined on the bounded range $0 \leq y \leq 1$, $0\leq \chi <\frac{8}{9}$. To find if $F_r$ itself is bounded we analyse the multi-variable derivative for critical points.
\enlargethispage{20pt}
For $\partial_\chi F_r$ we find:
\begin{equation}
\partial_\chi F_r = -{3 y\over2} \left(\frac
{ \{4-\chi(3+y) \} \sqrt{1-\chi}\sqrt{1-\chi y} -\{4-\chi(3+5y-4\chi y)\} }
{\sqrt{1-\chi}\sqrt{1 - \chi y}\;(3\sqrt{1-\chi}-\sqrt{1 - \chi y})^2} \right).
\end{equation}
For $\partial_y F_r$ we find:
\begin{equation}
\partial_y F_r = -{3\chi\over2} \left(\frac
{\{4-\chi(3+y)\} \sqrt{1-\chi y}-\{4-5\chi y\} \sqrt{1-\chi} }
{\sqrt{1 - \chi y}\;(3\sqrt{1-\chi}-\sqrt{1 - \chi y})^2} \right).
\end{equation}
In particular we see that
\begin{equation}
\chi \partial_\chi F_r - y \partial_y F_r =
{3\chi^2 y\over2} \left(\frac
{ \sqrt{1-\chi y} }
{\sqrt{1-\chi}\;(3\sqrt{1-\chi}-\sqrt{1 - \chi y})^2} \right).
\end{equation}
To have a critical point, $ \partial_\chi F_r = \partial_y F_r = 0$, we certainly require $\chi y =0$. So either $\chi=0$ or $y=0$.
But for $y=0$, and $\chi\in(0,{8\over9})$ we have
\begin{equation}
\partial_y F_r
\quad\longrightarrow\quad
{3\chi(3\chi + 4 \sqrt{1-\chi}-4)\over 2 (3\sqrt{1-\chi}-1)^2} > 0.
\end{equation}
In contrast, for $\chi=0$, and $y\in(0,1)$, we have $\partial_\chi F_r \to 0$.
So the only critical points lie on one of the boundary segments:
\begin{equation}
\partial_\chi F_r = \partial_y F_r = 0 \quad\iff\quad \chi = 0.
\end{equation}
Therefore to find the maxima of $F_r(\chi,y)$ we must inspect all four of the boundary segments of the viable region. Along three of the boundary segments we can see that the three lines corresponding to $\chi =0$, $y = 0$, and $y = 1$ all give $F_r(\chi,y) = 0$, leaving only $\chi \rightarrow \frac{8}{9}$ to be investigated.
We note
\begin{equation}
\lim_{\chi \rightarrow \frac{8}{9}} F_r(\chi, y) = {4y\over3}\; {(\sqrt{9-8y}-1)\over3-\sqrt{9-8y}}.
\end{equation}
Inserting this into the partial derivative $\partial_y F_r$ reveals:
\begin{equation}
\lim_{\chi \rightarrow \frac{8}{9}} \partial_y F_r = -\frac{4}{3} \left(1 + \frac{1}{\sqrt{9 - 8y}}\right).
\end{equation}
This is a strictly negative function in the range $0 \leq y \leq 1$.
Thus the maximum of $F_r(\chi,y)$ can be found by taking the limit $\lim_{y\rightarrow 0}$ giving:
\begin{equation}
(F_r)_\mathrm{max} = \lim_{y\rightarrow 0} \lim_{\chi \rightarrow \frac{8}{9}} F_r = 2.
\end{equation}
This is therefore bounded, with the radial force approaching its maximum at the centre of a fluid star which is on the verge of collapse. This force violates the strong maximum force conjecture, though it satisfies the weak maximum force conjecture. This limit can easily be seen graphically in Figure \ref{Fig:IntSch-Radial-Force}.
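One way to make this limit explicit (a short check, using the substitution $u = \sqrt{9-8y}$) is to note that the limiting profile collapses to an elementary function,
\begin{equation}
\lim_{\chi \rightarrow \frac{8}{9}} F_r(\chi, y) = \frac{(3+u)(u-1)}{6}\bigg|_{u = \sqrt{9-8y}} = 1 - \frac{4y}{3} + \frac{\sqrt{9-8y}}{3},
\end{equation}
which decreases monotonically from $2$ at $y=0$ to $0$ at $y=1$.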
\begin{figure}[!!htbp]
\centering
\includegraphics{F-Int-Sch-Fr-max.pdf}
\caption{Radial force $F_r(\chi,y)$ for the interior Schwarzschild solution. \hfill \break
Note $F_r(\chi,y)$ is bounded above by 2 in the region of interest $y\in[0,1]$, $\chi\in[0,8/9)$. }
\label{Fig:IntSch-Radial-Force}
\end{figure}
\subsubsection{Equatorial force}
Using equation (\ref{eq:Equatorial Force2}) and the metric defined in equation (\ref{Eq:IntSchwarzschild}), with the relabelling of the previous subsection in terms of $\chi$ and $y$ gives:
\begin{eqnarray}
F_{eq}(\chi) = \frac{3}{8}\;\chi \int^1_{0} \frac{1}{\sqrt{1-\chi y}}\; \left(\frac{\sqrt{1-\chi y}-\sqrt{1-\chi}}{3\sqrt{1-\chi}-\sqrt{1-\chi y}}\right) dy.
\end{eqnarray}
The integral evaluates to:
\begin{eqnarray}
F_{eq}(\chi) &=& \frac{3}{4} \left.\left[\sqrt{1-\chi y} + 2\sqrt{1-\chi}\;\ln(3\sqrt{1-\chi} - \sqrt{1-\chi y}) \right]\right\vert_{y=0}^{y=1}.
\end{eqnarray}
Ultimately
\begin{eqnarray}
\label{E:Feq-sch}
F_{eq}(\chi)
&=&\frac{3}{4}\left[\sqrt{1-\chi}\left\{1 + \ln(4-4\chi)-2\ln(3\sqrt{1-\chi}-1)\right\}-1 \right].
\end{eqnarray}
However, due to the presence of the $-\ln(3\sqrt{1-\chi}-1)$ term in this equation, it can be seen that as $\chi \rightarrow \frac{8}{9}$, $F_{eq}(\chi) \rightarrow +\infty$.
Indeed
\begin{equation}
F_{eq}(\chi) = -{\ln\left({8\over9}-\chi\right)\over 2} + {\mathcal{O}}(1),
\end{equation}
implying that the equatorial force in this space-time will grow without bound as the star approaches the critical size, (just prior to gravitational collapse), in violation of both the strong and weak maximum force conjectures.
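The coefficient of the logarithm can be checked directly: setting $\epsilon = \frac{8}{9}-\chi$, one has $\sqrt{1-\chi} = \frac{1}{3}\sqrt{1+9\epsilon}$, so that $3\sqrt{1-\chi}-1 = \frac{9}{2}\epsilon + \mathcal{O}(\epsilon^2)$, and the only divergent contribution to equation (\ref{E:Feq-sch}) is
\begin{equation}
-\frac{3}{2}\sqrt{1-\chi}\,\ln\!\left(3\sqrt{1-\chi}-1\right) = -\frac{1}{2}\ln\epsilon + \mathcal{O}(1).
\end{equation}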
So while the interior Schwarzschild solution has provided a nice example of a bounded radial force, $F_r(y,\chi)$, it also clearly provides an explicit counter-example, where the equatorial force $F_{eq}(\chi)$ between two hemispheres of the fluid star grows without bound.
\subsubsection{DEC}
Imposing the DEC (equation \ref{eq:DEC}) within the fluid sphere we would require:
\begin{equation}
\frac{p}{\rho}=\frac{\sqrt{1-\chi y}-\sqrt{1-\chi}}{3\sqrt{1-\chi}-\sqrt{1-\chi y}}\leq1
\end{equation}
\enlargethispage{20pt}
That is
\begin{equation}
\sqrt{1-\chi y} \leq 2\sqrt{1-\chi},
\end{equation}
whence
\begin{equation}
1-\chi y \leq 4(1-\chi).
\end{equation}
Applying the boundary conditions of $0\leq\chi\leq\frac{8}{9}$, $0\leq y \leq 1$, we have a solution range:
\begin{equation}
\left(0\leq \chi \leq \frac{3}{4},\quad 0\leq y \leq 1\right)
\quad\quad \bigcup \quad\quad
\left(\frac{3}{4}< \chi \leq \frac{8}{9},\quad 4 - \frac{3}{\chi}\leq y \leq 1\right).
\end{equation}
See figures \ref{F:IntSw-DEC1} and \ref{F:IntSw-DEC2}.
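For instance, in the limit $\chi \rightarrow \frac{8}{9}$ the DEC restricts the interior to $y \geq 4 - \frac{27}{8} = \frac{5}{8}$, which is precisely where the region-2 maximum of the radial force found below is located.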
\begin{figure}[!htbp]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{F-Int-Sch-DEC-1.pdf}
\caption{$\frac{p}{\rho}$ in first range}
\label{F:IntSw-DEC1}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{F-Int-Sch-DEC-2.pdf}
\caption{$\frac{p}{\rho}$ in second range}
\label{F:IntSw-DEC2}
\end{minipage}
\end{figure}
Within the first region $0\leq \chi\leq \frac{3}{4},\quad 0\leq y\leq 1$, the radial force is maximised at:
\begin{equation}
\chi = \frac{3}{4},\quad y=\frac{1}{6}(5-\sqrt{5})\approx 0.46;
\quad\rightarrow \quad F_r=\frac{3}{16}(\sqrt{5}-1)\approx 0.23 <{1\over4}.
\end{equation}
Under these conditions the strong maximum force conjecture is satisfied.
This can be seen visually in figure~\ref{F:region-1}.
\begin{figure}[!htbp]
\begin{center}
\includegraphics{F-Constant-Density-DEC-bound1-Fr.pdf}
\caption{Radial force $F_r(\chi,y)$ for the interior Schwarzschild solution in region 1 $ \left(0\leq \chi \leq \frac{3}{4},\quad 0\leq y \leq 1\right) $ where the DEC is satisfied.}
\label{F:region-1}
\end{center}
\end{figure}
Within the second region $\frac{3}{4}< \chi\leq \frac{8}{9},\quad 4-\frac{3}{\chi}\leq y\leq 1$, the radial force is maximised at:
\begin{equation}
\chi = \frac{8}{9},\quad y=\frac{5}{8};\quad\rightarrow \quad F_r=\frac{5}{6} > {1\over4}.
\end{equation}
Under these conditions the strong maximum force conjecture is violated, though the weak maximum force conjecture is satisfied.
This can be seen visually in figure~\ref{F:region-2}.
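Using the same explicit form of $F_r(\chi,y)$ as in the cross-check above, at $\chi=\frac{8}{9}$, $y=\frac{5}{8}$ one has $\sqrt{1-\chi y}=\frac{2}{3}$ and $\sqrt{1-\chi}=\frac{1}{3}$, so
\begin{equation}
F_r = \frac{3}{2}\cdot\frac{8}{9}\cdot\frac{5}{8}\cdot
\frac{\frac{2}{3}-\frac{1}{3}}{3\cdot\frac{1}{3}-\frac{2}{3}} = \frac{5}{6},
\end{equation}
confirming the value quoted above.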
\begin{figure}[!htbp]
\begin{center}
\includegraphics{F-Constant-Density-DEC-bound2-Fr.pdf}
\caption{Radial force $F_r(\chi,y)$ for the interior Schwarzschild solution in region 2 $\left(\frac{3}{4}< \chi \leq \frac{8}{9},\quad 4 - \frac{3}{\chi}\leq y \leq 1\right)$ where the DEC is satisfied.}
\label{F:region-2}
\end{center}
\end{figure}
\enlargethispage{40pt}
Turning to the equatorial force, we see that the integrand used to define the integral for $F_{eq}(\chi)$ satisfies the DEC for all $y\in[0,1]$ only within the range $0\leq\chi\leq\frac{3}{4}$. Using the result for $F_{eq}(\chi)$ given above, equation (\ref{E:Feq-sch}), we have:
\begin{equation}
(F_{eq})_\mathrm{max,DEC} = F_{eq}(\chi=3/4) = {3\over8} (2\ln 2 -1) \approx 0.1448603854 <{1\over4}.
\end{equation}
This now satisfies the strong maximum force conjecture.
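This value follows directly from equation (\ref{E:Feq-sch}): at $\chi=\frac{3}{4}$ one has $\sqrt{1-\chi}=\frac{1}{2}$, $\ln(4-4\chi)=0$ and $3\sqrt{1-\chi}-1=\frac{1}{2}$, so
\begin{equation}
F_{eq}(\tfrac{3}{4}) = \frac{3}{4}\left[\frac{1}{2}\left(1+2\ln 2\right)-1\right]
= \frac{3}{8}\left(2\ln 2-1\right).
\end{equation}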
\subsubsection{Summary}
Only if we enforce the DEC can we make Schwarzschild's constant density star satisfy the weak and strong maximum force conjectures. Without adding the DEC, Schwarzschild's constant density star violates both the weak and strong maximum force conjectures.
Since it is not entirely clear that the DEC represents fundamental physics~\cite{twilight, Visser:1997, Visser:1996, Cattoen:2006},
it is perhaps a little sobering to see that one of the very simplest idealized stellar models already raises issues for the maximum force conjecture.
We shall soon see that the situation is even worse for the Tolman~IV model (and its variants).
\subsection{Tolman IV solution}
The Tolman IV solution is another perfect fluid star space-time; however, it does not have the convenient (albeit fine-tuned) property of constant density enjoyed by the interior Schwarzschild solution. The metric can be written in the traditional form \cite{Delgaty}:
\begin{equation}
ds^2 = -\left(1 + \frac{r^2}{A^2}\right)dt^2 + \frac{1 + \frac{2r^2}{A^2}}{\left(1- \frac{r^2}{R^2}\right)\left(1+ \frac{r^2}{A^2}\right)}dr^2 + r^2d\Omega^2.
\end{equation}
Here $A$ and $R$ represent some arbitrary scale factors with units of length. This metric yields the orthonormal stress-energy tensor:
\begin{eqnarray}
T_{\hat{t}\hat{t}} &=& \rho = \frac{1}{8\pi}\frac{6r^4 + (7A^2 + 2R^2) r^2 + 3A^2(A^2 + R^2)}{ R^2(A^2 + 2r^2)^2};\\
T_{\hat{r}\hat{r}} &=& T_{\hat{\theta}\hat{\theta}} = T_{\hat{\phi}\hat{\phi}} = p = \frac{1}{8\pi } \frac{R^2 - A^2 - 3r^2}{R^2(A^2 + 2r^2)}.
\end{eqnarray}
This demonstrates the non-constancy of the energy-density $\rho$, as well as the perfect fluid conditions. Physically, the surface of a fluid star is defined as the zero-pressure point, which is now at:
\begin{equation}
\label{eq:R_s}
R_s = \sqrt{\frac{R^2 - A^2}{3}}.
\end{equation}
Likewise we can find the surface density ($\rho$ evaluated at $R_s$):
\begin{equation}
\rho_s = \frac{3}{4\pi }\frac{2A^2 + R^2}{R^2(A^2 + 2R^2)}.
\end{equation}
The central pressure and central density are
\begin{equation}
p_0 = {1\over8\pi} {R^2-A^2\over R^2 A^2}; \qquad
\rho_0 ={1\over8\pi} {3(R^2+A^2)\over R^2 A^2}.
\end{equation}
Moving forwards, we will likewise calculate the radial and equatorial forces in this space-time.
\subsubsection{Radial force}
Using the previously defined radial force equation, (\ref{eq:Radial Force}), we can write the radial force for the Tolman IV space-time as:
\begin{equation}
F_r(r,R,A) = 4\pi p_r r^2
= \frac{r^2}{2 R^2} \left(\frac{R^2 - A^2 - 3r^2}{A^2 + 2r^2}\right).
\end{equation}
Defining $y= r^2/R_s^2$ and $a=A^2/R^2$ we have $y\in(0,1)$ and $a\in(0,1)$. The radial force then simplifies to
\begin{equation}
F_r(a,y) = {(1-a)^2 y (1-y)\over 6a + 4 (1-a) y}.
\end{equation}
Note this is, as expected, a dimensionless function of dimensionless variables.
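Explicitly, the substitution proceeds via $r^2 = yR_s^2 = \frac{1}{3}yR^2(1-a)$, so that
\begin{equation}
R^2-A^2-3r^2 = R^2(1-a)(1-y),
\qquad
A^2+2r^2 = \frac{R^2\left[3a+2(1-a)y\right]}{3},
\end{equation}
from which the quoted form of $F_r(a,y)$ follows immediately.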
The multivariable derivatives are:
\begin{equation}
\partial_a F_r(a,y)= {y(1-y)(1-a)(2ay-3a-2y-3)\over2(2ay-3a-2y)^2}.
\end{equation}
\begin{equation}
\partial_y F_r(a,y)= {(1-a)^2(2ay^2-6ay-2y^2+3a)\over2(2ay-3a-2y)^2}.
\end{equation}
For both derivatives to vanish (within the physical region) we require $a=1$.
However $a=1$ actually minimises the function, with $F_r(a,y)=0$. So we need to look at the boundaries of the physical region. Both $y=0$ and $y=1$ also minimise the function, with $F_r(a,y)=0$. We thus consider $a=0$:
\begin{equation}
F_r(0,y)=\frac{1}{4}(1-y),
\end{equation}
where it is then clear that the function is maximised at $a=0$, $y\to0$, corresponding to
$(F_r)_\mathrm{max}={1\over 4}$.
This can be seen visually in figure~\ref{F:Tolman-IV-Fr}.
Thus $F_r(a,y)$ for the Tolman~IV solution is compatible with the strong maximum force conjecture, but as for the Schwarzschild constant density star, we shall soon see that the equatorial force does not behave as nicely.
\begin{figure}[!htbp]
\begin{center}
\includegraphics{F-Tolman-IV-Fr-max.pdf}
\caption{$F_r(a,y)$ for the Tolman~IV solution.}
\label{F:Tolman-IV-Fr}
\end{center}
\end{figure}
\subsubsection{Equatorial force}
Using equation (\ref{eq:Equatorial Force2}) for this space-time, and combining it with the radial surface result of equation (\ref{eq:R_s}), we obtain:
\begin{equation}
F_{eq} = 2\pi\int^{R_s}_0\sqrt{g_{rr}} \;p(r)\; r\, dr
= \frac{1}{4}\int^{\sqrt{\frac{R^2 - A^2}{3}}}_{0}
\sqrt{\frac{1 + \frac{2r^2}{A^2}}{\left(1- \frac{r^2}{R^2}\right)\left(1+ \frac{r^2}{A^2}\right)}}
\; \frac{R^2 - A^2 - 3r^2}{R^2(A^2 + 2r^2)} \; r \,dr.
\end{equation}
This integral converges; however, the resulting function is intractable. Instead, we opt for a simpler approach and derive a simple bound. Since the radial coordinate is physically bounded by $0\leq r \leq R_s = \sqrt{\frac{R^2 - A^2}{3}}<R$, we find that in this range:
\begin{eqnarray}
g_{rr} = \frac{1}{1- \frac{r^2}{R^2}} \frac{1 + \frac{2r^2}{A^2}}{1+ \frac{r^2}{A^2}} \geq \frac{1 + \frac{2r^2}{A^2}}{1+ \frac{r^2}{A^2}} \geq 1.
\end{eqnarray}
This is actually a much more general result; for any perfect fluid sphere we have
\begin{equation}
g_{rr}={1\over1-2m(r)/r},
\end{equation}
where $m(r)$ is the Misner--Sharp quasi-local mass.
So as long as $m(r)$ is positive, which is guaranteed by positivity of the density $\rho(r)$, we have $g_{rr}>1$, and so in all generality we have
\begin{equation}
F_{eq} > 2\pi\int^{R_s}_0 \;p(r)\; r\, dr.
\end{equation}
For the specific case of Tolman~IV we can write
\begin{eqnarray}
F_{eq}
&>& \frac{1}{4}\int^{\sqrt{\frac{R^2 - A^2}{3}}}_0 \frac{R^2 - A^2 - 3r^2}{R^2(A^2 + 2r^2)} \; r\, dr.
\end{eqnarray}
Now make the substitutions $y= r^2/R_s^2$ and $a=A^2/R^2$. We find
\begin{equation}
F_{eq} > \pi R_s^2 \int^{1}_0 p\; dy =
{1\over 8} \int^{1}_0 \frac{\left(1-a\right)^2 \left(1-y\right)}{2 \left(1-a\right) y+3 a} \; dy.
\end{equation}
This integral yields
\begin{equation}
F_{eq} > \left. \left\{{(a+2)\ln[2(1-a)y+3a]\over32} -{(1-a)y\over16} \right\} \right|_0^1.
\end{equation}
Thence
\begin{equation}
F_{eq} > {(a+2)[\ln(2+a)-\ln(3a)]\over32} -{(1-a)\over16}.
\label{eq:F_eqInequal}
\end{equation}
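For completeness, the antiderivative follows from the substitution $u = 2(1-a)y+3a$ (a short sketch):
\begin{equation}
\frac{(1-a)^2(1-y)}{2(1-a)y+3a}\; dy = \left[\frac{2+a}{4u}-\frac{1}{4}\right] du,
\qquad
\int \left[\frac{2+a}{4u}-\frac{1}{4}\right] du = \frac{2+a}{4}\ln u - \frac{u}{4},
\end{equation}
which, once the overall factor of $\frac18$ is included and the boundary values $u(0)=3a$ and $u(1)=2+a$ are inserted, reproduces inequality (\ref{eq:F_eqInequal}), the antiderivative differing from the form displayed above only by an irrelevant additive constant.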
In the limit $a\rightarrow0$ the term $-\ln(3a) \rightarrow +\infty$, so the lower bound in inequality (\ref{eq:F_eqInequal}) diverges, demonstrating that the equatorial force in the Tolman~IV space-time can be made to violate the weak maximum force conjecture.
Thus, as in the case of the interior Schwarzschild solution, we have shown that the radial force is bounded (and in this case obeys both the weak and strong maximum force conjectures). However, the equatorial force can be made to diverge to infinity and act as a counter example to both weak and strong conjectures.
\subsubsection{DEC}
To see if the DEC is satisfied over the range of integration for the equatorial force, we inquire as to whether or not
\begin{equation}
\frac{p}{\rho} =
\frac{(A^2 + 2r^2)(R^2 - A^2 - 3r^2)}{3A^4 + 7A^2r^2 + 6r^4 + (3A^2 + 2r^2)R^2} \leq1\;?
\end{equation}
It is straightforward to check that this inequality will always hold in the physical region.
Using the definitions $a=A^2/R^2$ and $z=r^2/R^2$, so that $a\in(0,1)$,
and $z\in(0,{1-a\over3})$, we can write this as
\begin{equation}
{p\over\rho}-1 =
-{2(2a^2+6az+a+6z^2)\over(7a+2)z +3a(a+1) +6z^2}
<0,
\end{equation}
which is manifestly negative.
So adding the DEC does not affect or change our conclusions.
Indeed, we have already seen that the equatorial force diverges in the limit $a\to0$, that is $A\rightarrow 0$. Applying this limit to the ratio $p/\rho$ gives:
\begin{equation}
\lim_{A \rightarrow 0 } \frac{p}{\rho} = 1 - \frac{6r^2}{3r^2+R^2}= 1 - {6z\over1+3z} \leq 1.
\end{equation}
Again, this is always true within any physical region,
so we verify that adding the DEC does not change our conclusions.
\subsubsection{Summary}
For the Tolman IV solution, while the radial force is bounded (and obeys both the weak and strong maximum force conjectures), the equatorial force can be made to diverge to infinity in certain parts of parameter space ($A\to0$) and acts as a counter-example to both weak and strong maximum force conjectures. For the Tolman IV solution, adding the DEC does not save the situation; the violation of both weak and strong maximum force conjectures is robust.
\subsection{Buchdahl--Land spacetime: $\rho = \rho_s + p$}
The Buchdahl--Land spacetime is a special case of the Tolman IV spacetime, corresponding to the limit $A\rightarrow0$ (equivalently $a\rightarrow0$). It is sufficiently simple that it is worth some discussion in its own right. The Tolman IV metric (with a re-scaled time coordinate $t\rightarrow At$) can be written:
\begin{equation}
ds^2 = -(A^2 + r^2)dt^2 + \frac{1 + \frac{2r^2}{A^2}}{\left(1 - \frac{r^2}{R^2}\right)\left(1 + \frac{r^2}{A^2}\right)}dr^2 + r^2d\Omega^2.
\end{equation}
Under the limit $A\rightarrow0$, this becomes:
\begin{equation}
ds^2 = -r^2dt^2 + \frac{2R^2}{R^2-r^2}dr^2 + r^2d\Omega^2.
\label{eq:A=0 Limit TIV}
\end{equation}
Then the orthonormal stress-energy components are:
\begin{eqnarray}
T_{\hat{t}\hat{t}} &=& \rho = \frac{1}{16\pi}\left(\frac{1}{r^2}+\frac{3}{R^2}\right); \nonumber\\
T_{\hat{r}\hat{r}} &=& T_{\hat{\theta}\hat{\theta}} = T_{\hat{\phi}\hat{\phi}} = p = \frac{1}{16\pi}\left(\frac{1}{r^2}-\frac{3}{R^2}\right).
\end{eqnarray}
The surface is located at
\begin{equation}
R_s = \frac{R}{\sqrt{3}}; \qquad \hbox{with} \qquad \rho_s = {3\over8\pi R^2} = {1\over8\pi R_s^2}.
\end{equation}
At the centre the pressure and density both diverge --- more on this point later.
We recast the metric as
\begin{equation}
ds^2 = -r^2dt^2 + {2 \over1-{1\over3}{r^2\over R_s^2}} dr^2 + r^2d\Omega^2.
\label{eq:BuchLand}
\end{equation}
This is simply a relabelling of equation (\ref{eq:A=0 Limit TIV}).
The orthonormal stress-energy tensor is now relabelled as:
\begin{eqnarray}
T_{\hat{t}\hat{t}} &=& \rho = \frac{1}{16\pi}\left(\frac{1}{r^2} +{1\over R_s^2} \right);\nonumber\\
T_{\hat{r}\hat{r}} &=& T_{\hat{\theta}\hat{\theta}} = T_{\hat{\phi}\hat{\phi}} = p = \frac{1}{16\pi}\left(\frac{1}{r^2} -{1\over R_s^2}\right).
\end{eqnarray}
Note that
\begin{equation}
p = \rho-\rho_s; \qquad \hbox{that is} \qquad \rho=\rho_s + p.
\end{equation}
That is, the Buchdahl--Land spacetime represents a ``stiff fluid''.
This perfect fluid solution has a naked singularity at $r=0$ and a well behaved surface at finite radius.
The singularity at $r=0$ is not really a problem as one can always excise a small core region near $r=0$ to regularize the model.
\subsubsection{Radial force}
Due to the simplicity of the pressure, the radial force can be easily calculated as:
\begin{equation}
F_r = \frac{1}{4}\left(1 -{r^2\over R_s^2}\right).
\end{equation}
The radial force is trivially bounded with a maximum of $\frac{1}{4}$ at the centre of the star.
This obeys the strong (and so also the weak) maximum force conjecture.
\subsubsection{Equatorial force}
The equatorial force is:
\begin{eqnarray}F_{eq} &=& 2\pi\int^{R_s}_{0}
\sqrt{ 2 \over1-{1\over3}{r^2\over R_s^2}}\;\;
\frac{1}{16\pi}\left(\frac{1}{r^2} -{1\over R_s^2}\right)\; r dr.
\end{eqnarray}
This is now simple enough to handle analytically. Using the dimensionless variable $y= r^2/R_s^2$, with range $y\in(0,1)$, we see:
\begin{eqnarray}F_{eq} &=& {1\over16} \int^{1}_{0}
\sqrt{ 2 \over1-{1\over3}{y}}\;\;
\left(1-y\right)\; {dy\over y}.
\end{eqnarray}
This is manifestly dimensionless, and manifestly diverges to $+\infty$.
If we excise a small region $r<r_\mathrm{core}$ (corresponding to $y < y_\mathrm{core}$) to regularize the model, replacing $r<r_\mathrm{core}$
with some well-behaved fluid ball, then we have the explicit logarithmic divergence
\begin{equation}
F_{eq} = -{\sqrt{2}\over16} \ln y_\mathrm{core} + {\mathcal{O}}(1).
\end{equation}
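The coefficient can be read off from the small-$y$ behaviour of the integrand: since $\sqrt{2/(1-y/3)}\to\sqrt{2}$ as $y\to0$,
\begin{equation}
{1\over16}\sqrt{2 \over1-{1\over3}{y}}\;\;{1-y\over y} = {\sqrt{2}\over 16\, y} + {\mathcal{O}}(1),
\end{equation}
and integrating down to $y_\mathrm{core}$ gives the logarithm quoted above.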
This violates the weak (and so also the strong) maximum force conjecture.
\subsubsection{DEC}
The DEC for this space-time requires:
\begin{eqnarray}
\frac{p}{\rho} = {\rho-\rho_s\over \rho} = 1 - {\rho_s\over\rho} \leq 1,
\end{eqnarray}
which is always satisfied for positive $\rho$ and $\rho_s$.
\subsubsection{Summary}
The Buchdahl--Land spacetime is another weak maximum force conjecture counter-example, one which again obeys the classical energy conditions.
\subsection{Scaling solution}
The scaling solution is
\begin{equation}
ds^2 = -r^{\frac{4w}{1 + w}} dt^2 + \left(\frac{w^2 + 6w + 1}{(1+w)^2}\right) dr^2 + r^2d\Omega^2.
\end{equation}
This produces the following stress energy tensor:
\begin{eqnarray}
T_{\hat{t}\hat{t}} &=& \rho = \frac{w}{2\pi(w^2 + 6w + 1)r^2};\nonumber\\
T_{\hat{r}\hat{r}} &=& T_{\hat{\theta}\hat{\theta}} = T_{\hat{\phi}\hat{\phi}} = p = \frac{w^2}{2\pi(w^2 + 6w + 1)r^2}.
\end{eqnarray}
This perfect fluid solution has a naked singularity at $r=0$ and does not have a finite surface --- it requires $r\rightarrow\infty$ for the pressure to vanish. Nevertheless, apart from a small region near $r=0$ and small fringe region near the surface $r=R_s$, this is a good approximation to the bulk geometry of a star that is on the verge of collapse~\cite{collapse,Yunes}.
To regularize the model excise two small regions, a core region at $r\in(0,r_\mathrm{core})$, and an outer shell at $r\in(r_\mathrm{fringe},R_s)$, replacing them by segments of well-behaved fluid spheres.
Note that for $r\in(r_\mathrm{core},r_\mathrm{fringe})$ we have $p/\rho=w$, (and since $\rho>0$ we must have $w>0$), so the DEC implies $w\in(0,1]$.
\subsubsection{Radial force}
Using equation (\ref{eq:Radial Force}), we find that the radial force is very simply given by:
\begin{equation}
F_r = \frac{2w^2}{w^2 + 6w + 1}.
\end{equation}
This is independent of $r$ and, within the DEC-allowed range $w\in(0,1]$, attains a maximum value of $\frac{1}{4}$ at $w=1$, giving a bounded force obeying the strong maximum force conjecture.
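To see this, note that for $w>0$ one can rewrite
\begin{equation}
F_r = \frac{2w^2}{w^2 + 6w + 1} = \frac{2}{1+6/w+1/w^2},
\end{equation}
which is monotonically increasing in $w$; on the DEC-allowed range $w\in(0,1]$ the maximum is therefore attained at $w=1$, where $F_r=\frac{1}{4}$.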
\subsubsection{Equatorial force}
Now, using equation (\ref{eq:Equatorial Force2}), the equatorial force can be calculated as:
\begin{equation}
F_{eq} =\int^{r_\mathrm{fringe}}_{r_\mathrm{core}}\sqrt{\frac{w^2 + 6w + 1}{(1+w)^2}}\left(\frac{w^2}{(w^2 + 6w + 1)r}\right) dr + {\mathcal{O}}(1).
\end{equation}
That is
\begin{equation}
F_{eq} =\frac{w^2}{(1+w)\sqrt{w^2 + 6w + 1}}\;
\ln(r_\mathrm{fringe}/r_\mathrm{core}) +{\mathcal{O}}(1),
\end{equation}
which trivially diverges logarithmically as either $r_\mathrm{core}\to0$ or $r_\mathrm{fringe}\to\infty$,
providing a counter-example to the weak maximum force conjecture.
\subsubsection{Summary}
Again we have an explicit model where the radial force $F_r$ is well-behaved, but the equatorial force $F_{eq}$ provides an explicit counter-example to the weak maximum force conjecture.
This counter-example is compatible with the DEC.
\subsection{TOV equation}
Let us now see how far we can push this sort of argument using only the TOV equation for the pressure profile in perfect fluid spheres --- we will (as far as possible) try to avoid making specific assumptions on the metric components and stress-energy. The TOV equation is
\begin{equation}
{dp(r)\over dr} = - {\{\rho(r)+p(r)\}\{m(r)+4\pi p(r) r^3\}\over r^2\{1-2m(r)/r\}}.
\end{equation}
\subsubsection{Radial force}
From the definition of radial force $F_r = 4\pi p r^2$, we see that at the maximum of $F_r$ we must have
\begin{equation}
\left.(2 p r + r^2 p')\right|_{r_\mathrm{max}}=0.
\end{equation}
Thence, at the maximum
\begin{equation}
(F_r)_\mathrm{max} = (4\pi p r^2)_\mathrm{max} = -2\pi \left.(r^3 p')\right|_{r_\mathrm{max}}.
\end{equation}
In particular, now using the TOV at the location $r_\mathrm{max}$ of the maximum of $F_r$:
\begin{equation}
(F_r)_\mathrm{max} =
2\pi \left.\left[ {(\rho+p) r(m +4\pi p r^3)\over (1-2m/r)}\right]\right|_{r_\mathrm{max}}.
\end{equation}
Let us define the two parameters
\begin{equation}
\chi=\left[2m(r)\over r\right]_{r_\mathrm{max}}={2m(r_\mathrm{max})\over r_\mathrm{max}},
\quad\hbox{and}\quad
w = \left[p(r)\over\rho(r)\right]_{r_\mathrm{max}}= {p(r_{max})\over\rho(r_{max})}.
\end{equation}
Then
\begin{equation}
(F_r)_\mathrm{max} =
{1\over2} \left.\left({4\pi p (1+{1\over w}) r^2(\chi/2 +4\pi p r^2)\over 1-\chi}\right)\right|_{r_\mathrm{max}}.
\end{equation}
Simplifying, we see:
\begin{equation}
(F_r)_\mathrm{max} =
{1\over2} {(F_r)_\mathrm{max}\; [1+1/w]\; [(F_r)_\mathrm{max}+\chi/2]\over 1-\chi}.
\end{equation}
Discarding the unphysical solution $(F_r)_\mathrm{max} = 0$, we find
\begin{equation}
(F_r)_\mathrm{max} = {4w -\chi-5w\chi\over2(1+w)} = {2w\over1+w} - \chi \;{1+5w\over2(1+w)}.
\end{equation}
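In more detail, dividing the previous relation by $(F_r)_\mathrm{max}\neq0$ gives
\begin{equation}
1 = \frac{1}{2}\left(1+\frac1w\right)\frac{(F_r)_\mathrm{max}+\chi/2}{1-\chi},
\qquad\hbox{so}\qquad
(F_r)_\mathrm{max} = \frac{2w(1-\chi)}{1+w} - \frac{\chi}{2},
\end{equation}
which rearranges to the expression above.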
The physical region corresponds to $0\leq \chi <1$, while $w>0$. Furthermore we have $(F_r)_\mathrm{max} >0$, whence $4w -\chi-5w\chi>0$, implying $\chi < 4w/(1+5w) < 4/5$. That is, at the location $r_\mathrm{max}$ of the maximum of $F_r$ we have
\begin{equation}
\left[2m(r)\over r\right]_{r_\mathrm{max}} = {2m(r_\mathrm{max})\over r_\mathrm{max}} < {4\over5}.
\end{equation}
This is not the Buchdahl--Bondi bound; it is instead a bound on the compactness of the fluid sphere at the internal location $r_\mathrm{max}$ where $F_r$ is maximized.
Observe that $(F_r)_\mathrm{max}$ is maximized in the limit $\chi\to0$ and $w\to\infty$, where $(F_r)_\mathrm{max}\to 2$. This violates the strong maximum force conjecture but not the weak maximum force conjecture.
If we impose the DEC then $w\leq 1$, and $(F_r)_\mathrm{max}$ is maximized at $\chi=0$ and $w=1$, where $(F_r)_\mathrm{max}\to 1$. This still violates the strong maximum force conjecture but not the weak maximum force conjecture.
Consequently the weak maximum force conjecture for $F_r$ generically holds for any perfect fluid sphere satisfying the TOV equation.
\subsubsection{Equatorial force}
As we have by now come to expect, dealing with the equatorial force will be considerably trickier.
In view of the non-negativity of the Misner--Sharp quasi-local mass we have:
\begin{equation}
F_{eq} = 2\pi \int_0^{R_s} \sqrt{g_{rr}} \; p \; r dr
= 2\pi \int_0^{R_s} {1\over\sqrt{1-2m(r)/r}} \; p \; r dr > 2\pi\int_0^{R_s} p \, r \; dr .
\end{equation}
To make the integral $\int_0^{R_s} p \, r \; dr $ converge it is sufficient to demand $p(r) = o(1/r^2)$.
However, for stars on the verge of gravitational collapse it is known that $p(r) \sim K/r^2$, see for instance~\cite{collapse, Yunes}. More specifically, there is some core region $r\in(0,r_\mathrm{core})$ designed to keep the central pressure finite but arbitrarily large,
a large scaling region $r\in (r_\mathrm{core},r_\mathrm{fringe})$ where $p \sim K/r^2$, and an
outer fringe $r\in (r_\mathrm{fringe},R_s)$ where one has $p(r)\to p(R_s)=0$.
Then we have the identity
\begin{equation}
\int_0^{R_s} p \, r \; dr = \int_0^{r_\mathrm{core}} p \, r \; dr + \int_{r_\mathrm{core}}^{r_\mathrm{fringe}} p \, r \; dr + \int_{r_\mathrm{fringe}}^{R_s} p \, r \; dr.
\end{equation}
But under the assumed conditions this implies
\begin{equation}
\int_0^{R_s} p \, r \; dr = {\mathcal{O}}(1) +
\left[\int_{r_\mathrm{core}}^{r_\mathrm{fringe}} {K\over r} \; dr + {\mathcal{O}}(1) \right]
+{\mathcal{O}}(1).
\end{equation}
Thence
\begin{equation}
\int_0^{R_s} p \, r \; dr = K \ln\left({r_\mathrm{fringe}}/r_\mathrm{core} \right)
+{\mathcal{O}}(1).
\end{equation}
Finally
\begin{equation}
F_{eq} > 2\pi K \ln\left({r_\mathrm{fringe}}/r_\mathrm{core} \right)
+{\mathcal{O}}(1).
\end{equation}
This can be made arbitrarily large for a star on the verge of gravitational collapse, so the weak and strong maximum force conjectures are both violated.
Note that the technical aspects of the argument are very similar to what we saw for the exact scaling solution to the Einstein equations, but the physical context is now much more general.
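As a consistency check, for the exact scaling solution above one would identify $K = \frac{w^2}{2\pi(w^2+6w+1)}$; multiplying $2\pi K$ by the (constant) factor $\sqrt{g_{rr}}=\frac{\sqrt{w^2+6w+1}}{1+w}$ then reproduces the coefficient of the logarithm found there,
\begin{equation}
2\pi K \sqrt{g_{rr}} = \frac{w^2}{w^2+6w+1}\;\frac{\sqrt{w^2+6w+1}}{1+w}
= \frac{w^2}{(1+w)\sqrt{w^2+6w+1}}.
\end{equation}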
\subsubsection{Summary}
We see that the weak maximum force conjecture generically holds for the radial force $F_r$ when considering perfect fluid spheres satisfying the TOV. In contrast we see that the weak maximum force conjecture fails for the equatorial force $F_{eq}$ when considering perfect fluid spheres satisfying the TOV that are close to gravitational collapse.
\section{Discussion}
With the notion of a natural unit of force $F_* = F_\mathrm{Planck} = c^4/G_N$ in hand, one can similarly define a natural unit of power~\cite{Dyson, Barrow:2017, footnote5, Cardoso:2018, Bruneton:2013}
\begin{equation}
P_*= P_\mathrm{Planck} = {c^5\over G_N} = 1 \hbox{ Dyson} \approx 3.6\times 10^{52} \hbox{ W},
\end{equation}
a natural unit of mass-loss-rate
\begin{equation}
(\dot m)_* = (\dot m) _\mathrm{Planck}= {c^3\over G_N} \approx 4.0\times 10^{35} \hbox{ kg/s},
\end{equation}
and even a natural unit of mass-per-unit-length
\begin{equation}
(m')_* = (m') _\mathrm{Planck}= {c^2\over G_N} \approx 1.36\times 10^{27} \hbox{ kg/m}.
\end{equation}
Despite being Planck units, all these concepts are purely classical (the various factors of $\hbar$ cancel, at least in (3+1) dimensions).
Indeed, consider the classical Stoney units which pre-date Planck units by some 20 years~\cite{Stoney1, Stoney2, Stoney3}, and use $G_N$, $c$,
and Coulomb's constant $e^2\over4\pi\epsilon_0$, instead of $G_N$, $c$, and Planck's constant $\hbar$.
Then we have $F_* = F_\mathrm{Planck} =F_\mathrm{Stoney}$. Similarly we have $P_*= P_\mathrm{Planck}= P_\mathrm{Stoney}$, $(\dot m)_* = (\dot m)_\mathrm{Planck}= (\dot m)_\mathrm{Stoney}$,
and $(m')_* = (m') _\mathrm{Planck} = (m')_\mathrm{Stoney}$.
Based ultimately on dimensional analysis, any one of these quantities might be used to advocate for a maximality conjecture: maximum luminosity~\cite{Dyson, Barrow:2017, footnote5, Cardoso:2018, Bruneton:2013}, maximum mass-loss-rate, or maximum mass-per-unit-length.
The specific counter-examples to the maximum force conjecture that we have discussed above suggest that it might also be worth looking for specific counter-examples to these other conjectures~\cite{Cardoso:2018}.
\section{Conclusions}
Through the analysis of radial and equatorial forces within perfect fluid spheres in general relativity, we have produced a number of counter-examples to both the strong and weak forms of the maximum force conjecture. These counter-examples highlight significant issues with the current phrasing and understanding of this conjecture, since the bare statement that forces are bounded within the framework of general relativity is manifestly false.
As such, should one wish some version of the maximum force conjecture to be considered viable as a potential physical principle, one must very clearly specify what types of forces it pertains to.
\section*{Acknowledgments}
MV was supported by the Marsden Fund, via a grant administered by the Royal Society of New Zealand.
\section{Introduction}
Recently, the
LHCb collaboration reported evidence
for the first penguin-dominated charmless two-body baryonic mode, the $B^-\to \Lambda\bar p$ decay,
at the 4.1$\sigma$ level~\cite{Aaij:2016xfa}, giving
\begin{eqnarray}
{\cal B}(B^-\to\Lambda\bar p)=(2.4^{+1.0}_{-0.8}\pm 0.3)\times 10^{-7}.
\end{eqnarray}
The result is consistent with the predictions given in a pole model calculation~\cite{Cheng:2001tr} and a topological amplitude approach~\cite{Chua:2013zga}. The latter work made use of the large $m_B$ mass,
the experimental rate of the tree-dominated $\overline B{}^0\to p\bar p$ decay~\cite{Aaij:2013fta},
\begin{eqnarray}
{\cal B}(\overline B {}^0\to p \overline{p})=
(1.47{}^{+0.62+0.35}_{-0.51-0.14})\times 10^{-8},
\end{eqnarray}
and a na\"{i}ve estimate of the tree-penguin ratio.
In fact, it was advocated in \cite{Chua:2013zga} that the $B^-\to\Lambda\bar p$ decay mode could be the second charmless two-body baryonic mode to be found experimentally.
With experimental evidence for both the $\overline B {}^0\to p \overline{p}$ and $B^-\to\Lambda\bar p$ decays, it is now possible to extract the tree and penguin amplitudes at the same time.
Most of the decay amplitudes of the two-body baryonic decays are non-factorizable.~\footnote{For a study on the factorization contributions, see Ref.~\cite{Hsiao:2014zza}.}
Various models, such as the
pole model~\cite{Deshpande:1987nc,Jarfi:1990ej,Cheng:2001tr,Cheng:2001ub},
sum rules~\cite{Chernyak:ag}, the diquark
model~\cite{Ball:1990fw,Chang:2001jt} and approaches employing flavor symmetry~\cite{Gronau:1987xq,He:re,Sheikholeslami:fa,Luo:2003pv},
were used to calculate the amplitudes
(for recent reviews, see~\cite{review, review1}).
Nevertheless, all predictions for the $\overline B{}^0\to p\bar p$ rate are off by several orders of magnitude compared with the LHCb result~\cite{Aaij:2013fta,review, review1}.
Indeed,
the $B^0\to p\bar p$ decay mode is more suppressed than expected. To see this, one scales the
$\bar B^0\to\Lambda_c^+\bar p$ rate by the ratio of Cabibbo--Kobayashi--Maskawa (CKM) matrix elements $|V_{ub}/V_{cb}|^2$, with a possible dynamical suppression factor $f_{dyn}$ included~\cite{Cheng:2014qxa},
\begin{eqnarray}
{\mathcal B}(\overline B^0\to p\bar p) = {\mathcal B}(\overline B^0\to\Lambda_c^+\bar p)|V_{ub}/V_{cb}|^2\times f_{dyn}
\sim 2\times 10^{-7}\times f_{dyn}.
\end{eqnarray}
The data demands $f_{dyn}\sim 0.1$.
It was pointed out in Ref.~\cite{Cheng:2014qxa} that for a given tree operator $O_i$, one needs to consider additional contributions, which were missed in the literature, from its Fierz-transformed operator $O'_i$.
It was found that there are cancellations between the Feynman diagrams induced by $O_i$ and those induced by $O'_i$
and, consequently, the smallness of the tree-dominated charmless two-body baryonic $B$ decays results from this partial cancellation.
Furthermore, as pointed out in Ref.~\cite{Cheng:2014qxa}, the internal $W$-emission tree amplitude should be proportional to the Wilson coefficient combination $c_1+c_2$ rather than $c_1-c_2$, as is usually claimed in the literature. We shall adopt this point in the present work.
We can make use of the newly measured $\overline B {}^0\to p\bar p$ and $B^-\to\Lambda \bar p$ rates to give information on other modes in a symmetry related approach.
The quark diagram or the so-called topological
approach has been extensively used in mesonic modes~\cite{Zeppenfeld:1980ex,Chau:tk,Chau:1990ay,Gronau:1994rj,Gronau:1995hn,Cheng:2014rfa}.
In fact, the approach is closely related to the flavor SU(3)
symmetry~\cite{Zeppenfeld:1980ex,Gronau:1994rj,Savage:ub}.
In~\cite{Chua:2003it} the approach was extended to the charmless two-body baryonic case.
It was further developed in \cite{Chua:2013zga}, where amplitudes for all low lying charmless two-body baryonic modes with full topological amplitudes are obtained.
Note that in general a typical decay amplitude involves more than one tree amplitude and more than one penguin amplitude.
Asymptotic relations in the large $m_B$ limit~\cite{Brodsky:1980sx} can be used to relate various
topological amplitudes~\cite{Chua:2003it, Chua:2013zga}.
In this work,
using the experimental results on the $\overline B{}^0\to p\bar p$ and $B^-\to\Lambda \bar p$ decay rates,
we will extract both tree and penguin amplitudes for the first time.
Rates and direct $CP$ asymmetries of all low lying charmless two body baryonic decays can be explored.
Rates and $CP$ asymmetries of some modes can be checked experimentally in the near future in LHCb and Belle-II.
Note that $CP$ asymmetries of some modes can be added to the list of the tests of the Standard Model.
In particular, $\Delta S=-1$ pure penguin modes have small $CP$ asymmetries and they are expected to be sensitive to New Physics contributions.
These modes are good candidates to be added to the list of tests of the Standard Model, especially those with unsuppressed rates.
The layout of this paper is as follows.
In Sec.~II, we briefly review and update our formulation for charmless two-body baryonic decay modes.
In Sec.~III, we present the numerical results on rates and direct $CP$ asymmetries of all low lying baryonic modes.
Relations among rates and $CP$ asymmetries will also be studied.
Conclusions are given in Sec.~IV, which is followed by two appendices.
\section{Two-body charmless baryonic $B$ decay amplitudes}
There are more than 160 $\overline B\to{{\cal B} \overline {\cal B}}$, ${{\cal B} \overline {\cal D}}$, ${{\cal D} \overline {\cal B}}$, ${{\cal D} \overline {\cal D}}$ decay amplitudes, with ${\mathcal B}$ and ${\cal D}$ the low lying octet and decuplet baryons, respectively. Their decay amplitudes expressed in terms of topological amplitudes can be found in~\cite{Chua:2013zga} and are collected in Appendix~\ref{appendix:amplitudes}, since they will be used extensively in this work.
We show a few examples here,
\begin{eqnarray}
A(B^-\to\Lambda\overline{p})
&=&\frac{1}{\sq6}(T'_{1{{\cal B} \overline {\cal B}}}+2T'_{3{{\cal B} \overline {\cal B}}})
+\frac{1}{\sqrt6}(10P'_{1{{\cal B} \overline {\cal B}}}-P'_{2{{\cal B} \overline {\cal B}}})
-\frac{1}{3\sq6}(P'_{1EW{{\cal B} \overline {\cal B}}}-P'_{2EW{{\cal B} \overline {\cal B}}}
\nonumber\\
&&-4P'_{3EW{{\cal B} \overline {\cal B}}}+4P'_{4EW{{\cal B} \overline {\cal B}}})
+\frac{1}{\sq6}(10 A'_{1{{\cal B} \overline {\cal B}}}- A'_{2{{\cal B} \overline {\cal B}}}),
\nonumber\\
A(\bar B^0\to p\overline{p})
&=&-T_{2{{\cal B} \overline {\cal B}}}+2T_{4{{\cal B} \overline {\cal B}}}
+P_{2{{\cal B} \overline {\cal B}}}
+\frac{2}{3}P_{2EW{{\cal B} \overline {\cal B}}}
-5 E_{1{{\cal B} \overline {\cal B}}}+ E_{2{{\cal B} \overline {\cal B}}}
-9PA_{{\cal B} \overline {\cal B}},
\nonumber\\
A(B^-\to\Xi^{-}\overline{\Xi^{0}})
&=&-P_{2{{\cal B} \overline {\cal B}}}+\frac{1}{3}P_{2EW{{\cal B} \overline {\cal B}}}- A_{2{{\cal B} \overline {\cal B}}},
\nonumber\\
A(\bar B^0_s\to p\overline{p})
&=&-5E'_{1{{\cal B} \overline {\cal B}}}+ E'_{2{{\cal B} \overline {\cal B}}}-9 PA'_{{\cal B} \overline {\cal B}},
\label{eq:ampBB}
\end{eqnarray}
\begin{eqnarray}
A(B^-\to p\overline{\Delta^{++}})
&=&-\sq6 (T_{1{{\cal B} \overline {\cal D}}}-2T_{2{{\cal B} \overline {\cal D}}})+\sq6 P_{{\cal B} \overline {\cal D}}+2\sqrt{\frac{2}{3}}P_{1EW{{\cal B} \overline {\cal D}}}+\sq6 A_{{\cal B} \overline {\cal D}},
\nonumber\\
A(\bar B^0\to\Xi^{-}\overline{\Sigma^{*-}})
&=&\sq2P'_{{\cal B} \overline {\cal D}}
-\frac{\sq2}{3}P'_{1EW{{\cal B} \overline {\cal D}}},
\end{eqnarray}
\begin{eqnarray}
A(B^-\to\Delta^0\overline{p})
&=&\sq2T_{1{{\cal D} \overline {\cal B}}}
-\sq2 P_{{\cal D} \overline {\cal B}}
+\frac{\sq2}{3}(3P_{1EW{{\cal D} \overline {\cal B}}}+P_{2EW{{\cal D} \overline {\cal B}}})
-\sq2 A_{{\cal D} \overline {\cal B}},
\nonumber\\
A(\bar B^0\to \Sigma^{*+}\overline{p})
&=&\sq2 T'_{2{{\cal D} \overline {\cal B}}}
+\sq2 P'_{{\cal D} \overline {\cal B}}
+\frac{2\sq2}{3}P'_{2EW{{\cal D} \overline {\cal B}}},
\end{eqnarray}
\begin{eqnarray}
A(\bar B^0\to\Delta^0\overline{\Delta^0})
&=&2T_{{\cal D} \overline {\cal D}}+4 P_{{\cal D} \overline {\cal D}}+\frac{2}{3}P_{EW{{\cal D} \overline {\cal D}}}+2E_{{\cal D} \overline {\cal D}}+18PA_{{\cal D} \overline {\cal D}},
\nonumber\\
A(B^-\to \Sigma^{*+}\overline{\Delta^{++}})
&=&2\sq3 T'_{{\cal D} \overline {\cal D}}+2\sq3 P'_{{\cal D} \overline {\cal D}}+\frac{4}{\sq3}P'_{EW{{\cal D} \overline {\cal D}}}+2{\sq3}A'_{{\cal D} \overline {\cal D}},
\end{eqnarray}
where $T^{(\prime)}$, $P^{(\prime)}$, $E^{(\prime)}$, $A^{(\prime)}$, $PA^{(\prime)}$ and $P^{(\prime)}_{EW}$ are
tree, penguin, $W$-exchange, annihilation, penguin annihilation and electroweak penguin amplitudes, respectively,
for $\Delta S=0(-1)$ decays (see Fig.~\ref{fig:TA}).
In most cases we need more than one tree amplitude and more than one penguin amplitude in
the baryonic decay amplitudes.
Note that from the above amplitudes, $\overline B{}^0\to p\bar p$ decay is expected to be a tree dominated mode,
$B^-\to \Lambda\bar p$ decay a penguin dominated mode and $\overline B{}^0_s\to p\bar p$ decay a suppressed mode.
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=0.4\textwidth]{tree.pdf}
}\subfigure[]{
\includegraphics[width=0.4\textwidth]{penguin.pdf}
}\\\subfigure[]{
\includegraphics[width=0.4\textwidth]{exchange.pdf}
}\subfigure[]{
\includegraphics[width=0.4\textwidth]{annihilation.pdf}
}
\\\subfigure[]{
\includegraphics[width=0.4\textwidth]{pannihilation.pdf}
}\subfigure[]{
\includegraphics[width=0.4\textwidth]{ewpenguin.pdf}
}
\caption{Flavor flow or topological diagrams of
(a) $T^{(\prime)}$ (tree), (b) $P^{(\prime)}$ (penguin), (c) $E^{(\prime)}$ ($W$-exchange),
(d) $A^{(\prime)}$ (annihilation), (e) $PA^{(\prime)}$ (penguin annihilation) and (f) $P^{(\prime)}_{EW}$ (electroweak penguin)
amplitudes in $\overline B$ to baryon pair decays for $\Delta S=0(-1)$ decays.
} \label{fig:TA}
\end{figure}
By considering the chiral nature of the weak
interaction and the asymptotic relations~\cite{Brodsky:1980sx} in the large $m_B$ limit ($m_q/m_B,m_{{\cal B},{\cal D}}/m_B\ll1$), the
number of independent amplitudes is significantly reduced~\cite{Chua:2003it,Chua:2013zga}:
\begin{eqnarray}
T^{(\prime)}
&=&T^{(\prime)}_{1{{\cal B} \overline {\cal B}},2{{\cal B} \overline {\cal B}},3{{\cal B} \overline {\cal B}},4{{\cal B} \overline {\cal B}}}
=T^{(\prime)}_{1{{\cal B} \overline {\cal D}},2{{\cal B} \overline {\cal D}}}
=T^{(\prime)}_{1{{\cal D} \overline {\cal B}},2{{\cal D} \overline {\cal B}}}
=T^{(\prime)}_{{{\cal D} \overline {\cal D}}},
\nonumber\\
P^{(\prime)}
&=&P^{(\prime)}_{1{{\cal B} \overline {\cal B}},2{{\cal B} \overline {\cal B}}}
=P^{(\prime)}_{{\cal B} \overline{\cal D}}
=P^{(\prime)}_{{\cal D} \overline{\cal B}}
=P^{(\prime)}_{{\cal D} \overline{\cal D}},
\nonumber\\
P^{(\prime)}_{EW}
&=&P^{(\prime)}_{1EW{{\cal B} \overline {\cal B}},2EW{{\cal B} \overline {\cal B}},3EW{{\cal B} \overline {\cal B}},4EW{{\cal B} \overline {\cal B}}}
=P^{(\prime)}_{1EW{{\cal B} \overline {\cal D}},2EW{{\cal B} \overline {\cal D}}}
=P^{(\prime)}_{1EW{{\cal D} \overline {\cal B}},2EW{{\cal D} \overline {\cal B}}}
=P^{(\prime)}_{EW{{\cal D} \overline {\cal D}}},
\nonumber\\
&&
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
E_{1{{\cal B} \overline {\cal B}},2{{\cal B} \overline {\cal B}},{{\cal B} \overline {\cal D}},{{\cal D} \overline {\cal B}},{{\cal D} \overline {\cal D}}},\,A_{1{{\cal B} \overline {\cal B}},2{{\cal B} \overline {\cal B}},{{\cal B} \overline {\cal D}},{{\cal D} \overline {\cal B}},{{\cal D} \overline {\cal D}}},
\,PA_{{{\cal B} \overline {\cal B}},{{\cal D} \overline {\cal D}}}\to0.
\label{eq:asymptoticrelations}
\end{eqnarray}
There are then only one tree, one penguin and one electroweak penguin amplitude, which are estimated to be
\begin{eqnarray}
T^{(\prime)}&=&V_{ub} V^*_{ud(s)}\frac{G_f}{\sq2}(c_1+c_2) \chi \bar u'(1-\gamma_5)v,
\nonumber\\
P^{(\prime)}&=&-V_{tb} V^*_{td(s)}\frac{G_f}{\sq2}[c_3+c_4+\kappa_1 c_5+\kappa_2 c_6]\chi \bar u'(1-\gamma_5)v,
\nonumber\\
P^{(\prime)}_{EW}&=&-\frac{3}{2}V_{tb} V^*_{td(s)}\frac{G_f}{\sq2}[c_9+c_{10}+\kappa_1c_7+\kappa_2 c_8]\chi \bar u'(1-\gamma_5)v,
\label{eq:asymptotic1}
\end{eqnarray}
where $c_i$ are the next-to-leading order Wilson coefficients, given by
\begin{eqnarray}
c_1 = 1.081,\,
c_2 = -0.190,\,
c_3 = 0.014,\,
c_4 = -0.036,\,
c_5 = 0.009,\,
c_6 = -0.042,
\nonumber\\
c_7 = -0.011\alpha_{EM},\,
c_8 = 0.060\alpha_{EM},\,
c_9 = -1.254\alpha_{EM},\,
c_{10} = 0.223\alpha_{EM},
\end{eqnarray}
in the naive dimensional regularization scheme at the scale $\mu=4.2$~GeV~\cite{Beneke:2001ev}.
Note that the relative signs of $c_{1,3,9}$ and $c_{2,4,10}$ in Eq. (\ref{eq:asymptotic1}) are determined according to Ref.~\cite{Cheng:2014qxa}.
Since the Fierz transformations of $O_{5,6,7,8}$ differ from those of $O_{1,2,3,4}$, additional
coefficients $\kappa_i$ are assigned in front of $c_{5(7)}$ and $c_{6(8)}$ in Eq.~(\ref{eq:asymptotic1}).
For simplicity we take $\kappa_1=\kappa_2=\kappa$.
The parameters $\chi$ and $\kappa$ will be extracted from $\overline B{}^0\to p\bar p$ and $B^-\to\Lambda\bar p$ data.
Note that $|\kappa|$ is expected to be of order ${\cal O}(1)$.
In reality the $m_B$ mass is finite,
the decay amplitudes are not exactly in their asymptotic forms, and some corrections are expected.
The corrections to $T^{(\prime)}_i$, $P^{(\prime)}_i$ and $P^{(\prime)}_{EWi}$ are estimated to be of order $m_{\cal B}/m_B$ (the baryon to $B$ meson mass ratio).
Hence, we have
\begin{eqnarray}
T^{(\prime)}_i=(1+r_{t,i}^{(\prime)}) T^{(\prime)},
\,
P^{(\prime)}_i=(1+r_{p,i}^{(\prime)}) P^{(\prime)},
\,
P^{(\prime)}_{EW i}=(1+r_{ewp,i}^{(\prime)}) P^{(\prime)}_{EW},
\label{eq:correction0}
\end{eqnarray}
with
\begin{eqnarray}
|r_{t,i}^{(\prime)}|, |r_{p,i}^{(\prime)}|, |r_{ewp,i}^{(\prime)}|\leq m_p/m_B,
\label{eq:correction1}
\end{eqnarray}
parametrizing the correction to the asymptotic relations, Eq.~(\ref{eq:asymptoticrelations}).
Note that the phases of these $r^{(\prime)}$ can be any value.
For $P^{(\prime)}_i$, we replace the CKM factor $-V_{tb} V^*_{td(s)}$ by the sum of $V_{ub} V^*_{ud(s)}$ and $V_{cb} V^*_{cd(s)}$. The penguins with $V_{ub} V^*_{ud(s)}$ and $V_{cb} V^*_{cd(s)}$ are $u$-penguin ($P_i^{(\prime) u}$) and $c$-penguin ($P_i^{(\prime) c}$), respectively. The $r_{p,i}$ of the $u$-penguin and the $c$-penguin are independent.
Furthermore, although in the asymptotic limit the $\bar u' u$ and $-\bar u' \gamma_5 u$ terms have the same coefficients, see Eq.~(\ref{eq:asymptotic1}),
this will no longer be true in the finite $m_B$ case.
In other words, the $r^{(\prime)}$ for $\bar u' u$ and $-\bar u' \gamma_5 u$ terms are independent.
For subleading terms, such as exchange, penguin annihilation and annihilation amplitudes, we have
\begin{eqnarray}
E^{(\prime)}_i\equiv\eta_i \frac{f_B}{m_B} \frac{m_{\cal B}}{m_B} T^{(\prime)},\,
A^{(\prime)}_j\equiv\eta_j \frac{f_B}{m_B} \frac{m_{\cal B}}{m_B}T^{(\prime)},\,
PA^{(\prime)}_k\equiv\eta_k \frac{f_B}{m_B} \frac{m_{\cal B}}{m_B}P^{(\prime)},
\label{eq:correction2}
\end{eqnarray}
where the ratio $f_B/m_B$ is from the usual estimation and the factor $m_{\cal B}/m_B$ is from the chirality structure.
Note that $|\eta_{i,j,k}|$ are estimated to be of order 1 and,
explicitly, we take
$
0\leq|\eta_{i,j,k}|\leq |\eta|=1,
$
with the bound $|\eta|$ set to 1.
With the above amplitudes and formulas given in Appendices~\ref{appendix:amplitudes} and \ref{appendix:formulas}, we are ready to study the two-body baryonic decay rates.
Note that for $\bar B\to {{\cal B} \overline {\cal D}}, {{\cal D} \overline {\cal B}}$ decays, we need to add a correction factor of $p_{cm}/(m_B/2)$ to the topological amplitudes shown in Eq.~(\ref{eq:asymptotic1}) with $p_{cm}$ the momentum of the final state baryons in the center of mass frame. The factor will further correct the amplitudes from their asymptotic forms.
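For reference, $p_{cm}$ is given by the standard two-body kinematics formula,
\begin{equation}
p_{cm}=\frac{\sqrt{[m_B^2-(m_{{\cal B}_1}+m_{{\cal B}_2})^2]\,[m_B^2-(m_{{\cal B}_1}-m_{{\cal B}_2})^2]}}{2 m_B},
\end{equation}
with $m_{{\cal B}_1}$ and $m_{{\cal B}_2}$ the masses of the final-state baryons.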
\section{Numerical Results on Rates and Direct $CP$ Asymmetries}
\subsection{Tree and penguin amplitudes}
Using the recent data on the $\overline B{}^0\to p\bar p$ rate and the $B^-\to \Lambda\bar p$ rate,
the unknown parameters $\chi$ and $\kappa$ in the asymptotic amplitudes, Eq.~(\ref{eq:asymptotic1}), are fitted to be
\begin{eqnarray}
\chi=(5.08^{+1.12}_{-1.02})\times 10^{-3}~{\rm GeV}^2,
\qquad
\kappa=1.92^{+0.39}_{-0.46}.
\label{eq:chikappa}
\end{eqnarray}
Note that the value of $\kappa$ is indeed of order 1. The tree-penguin ratio is close to the na\"{i}ve estimation.
The penguin-tree and tree-penguin ratios of the asymptotic amplitudes [see Eq. (\ref{eq:asymptoticrelations})] for $\Delta S=0$ and $-1$ transitions, respectively, can be extracted for the first time to be
\begin{eqnarray}
\left|\frac{P}{T}\right|=0.24\pm0.04,
\quad
\left|\frac{T'}{P'}\right|=0.21^{+0.05}_{-0.03}.
\label{eq:P/T}
\end{eqnarray}
The above equation is one of the major results of this work.
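As a rough numerical cross-check (a sketch only, using representative CKM magnitudes $|V_{ub}|\simeq0.0037$, $|V_{ud}|\simeq0.974$, $|V_{td}|\simeq0.0087$, $|V_{us}|\simeq0.225$, $|V_{ts}|\simeq0.040$ and $|V_{tb}|\simeq1$, which are not inputs quoted in this work), the central values of the Wilson coefficients and of $\kappa$ give
\begin{eqnarray}
\left|\frac{P}{T}\right|
&\simeq& \left|\frac{V_{tb}V^*_{td}}{V_{ub}V^*_{ud}}\right|
\frac{|c_3+c_4+\kappa (c_5+ c_6)|}{c_1+c_2}
\simeq 2.4\times\frac{0.085}{0.89}\simeq0.23,
\nonumber\\
\left|\frac{T'}{P'}\right|
&\simeq& \left|\frac{V_{ub}V^*_{us}}{V_{tb}V^*_{ts}}\right|
\frac{c_1+c_2}{|c_3+c_4+\kappa (c_5+c_6)|}
\simeq 0.021\times 10.4\simeq0.22,
\end{eqnarray}
consistent with the fitted central values quoted above.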
\subsection{Numerical Results on Rates}
In the following we make use of the $\overline B{}^0\to p\bar p$ and $B^-\to\Lambda\bar p$ data as inputs and predict the rates of all other $\overline B\to{{\cal B} \overline {\cal B}}$, ${{\cal B} \overline {\cal D}}$, ${{\cal D} \overline {\cal B}}$, ${{\cal D} \overline {\cal D}}$ modes.
Results are shown in Tables \ref{tab:BBDS=0}$\sim$\ref{tab:DDDS=-1}.
They are major results of this work.
A first glance at these tables reveals that all experimental upper bounds are satisfied.
This is a non-trivial check.
We will go through the discussion of the updated results on $\overline B\to{{\cal B} \overline {\cal B}}$, ${{\cal B} \overline {\cal D}}$, ${{\cal D} \overline {\cal B}}$, ${{\cal D} \overline {\cal D}}$ decay rates below and make suggestions for future experimental searches.
We will give a summary of our suggestions at the end of this subsection.
\begin{table}[t!]
\caption{\label{tab:BBDS=0} Decay rates of $\Delta S=0$, $\overline B_q\to{{\cal B} \overline {\cal B}}$ modes.
The first uncertainty is from the uncertainties of the $\chi$ and $\kappa$ parameters, reflecting the uncertainties in the measurements of ${\cal B}(\overline B{}^0\to p\bar p)$ and ${\cal B}(B^-\to \Lambda\bar p)$;
the second uncertainty is obtained by varying the tree and penguin strong phase $\phi$;
the third uncertainty is from relaxing the asymptotic relations, by varying $r_{t,i}, r_{p,i}, r_{ewp,i}$ (see Eqs. (\ref{eq:correction0}) and
(\ref{eq:correction1}));
and the last uncertainty is from sub-leading contributions, terms with $\eta_{i,j,k}$ (see Eq. (\ref{eq:correction2})). Occasionally the last uncertainty is shown to
an extra decimal place.
The latest experimental results are given in parentheses under the theoretical results.
The experimental $\overline B{}^0\to p\bar p$ rate is one of the inputs.}
\begin{ruledtabular}
\centering
{\begin{tabular}{llll}
Mode
& ${\mathcal B}(10^{-8})$
& Mode
& ${\mathcal B}(10^{-8})$
\\
\hline $B^-\to n\overline{p}$
& $3.45^{+1.50}_{-1.23}{}^{+0.68}_{-0}{}^{+1.99}_{-1.50}\pm0.09$
& $\overline B{}^0_s\to p\overline{\Sigma^{+}}$
& $1.42^{+0.69}_{-0.51}{}^{+0.13}_{-0}{}^{+2.01}_{-1.12}\pm0$
\\
$B^-\to \Sigma^{0}\overline{\Sigma^{+}}$
& $3.29^{+1.56}_{-1.18}{}^{+0.52}_{-0}{}^{+2.09}_{-1.58}\pm0.11$
& $\overline B{}^0_s\to n\overline{\Sigma^{0}}$
& $0.75^{+0.36}_{-0.27}{}^{+0}_{-0.05}{}^{+0.32}_{-0.26}\pm0$
\\
$B^-\to \Sigma^{-}\overline{\Sigma^{0}}$
& $0.62^{+0.28}_{-0.22}\pm0{}^{+0.42}_{-0.31}{}^{+0.006}_{-0.004}$
&$\overline B{}^0_s\to n\overline{\Lambda}$
& $2.96^{+1.36}_{-1.06}{}^{+0.57}_{-0}{}^{+1.88}_{-1.41}\pm0$
\\
$B^-\to \Sigma^{-}\overline{\Lambda}$
& $0.47^{+0.21}_{-0.17}\pm0{}^{+0.20}_{-0.17}{}^{+0.003}_{-0.002}$
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{0}}$
& $10.85^{+5.23}_{-3.90}{}^{+1.22}_{-0}{}^{+5.23}_{-4.20}\pm0$
\\
$B^-\to \Xi^{-}\overline{\Xi^{0}}$
& $0.07^{+0.03}_{-0.03}\pm0\pm0.03{}^{+0.0005}_{-0.0003}$
& $\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{-}}$
& $1.76^{+0.78}_{-0.63}\pm0{}^{+0.76}_{-0.63}\pm0$
\\
$B^-\to \Lambda\overline{\Sigma^+}$
& $0.47^{+0.21}_{-0.17}\pm0{}^{+0.38}_{-0.17}{}^{+0.003}_{-0.002}$
& $\overline B{}^0_s\to \Lambda\overline{\Xi^0}$
& $0.11^{+0.05}_{-0.04}\pm0{}^{+0.63}_{-0.08}\pm0$
\\
$\overline B{}^0\to p\overline{p}$
& $1.47^{+0.71}_{-0.53}{}^{+0.14}_{-0}{}^{+2.07}_{-1.16}\pm0.12$
& $\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{+}}$
& $0\pm0\pm0\pm0{}^{+0.003}_{-0}$
\\
&$1.47{}^{+0.62+0.35}_{-0.51-0.14}$~\cite{Aaij:2013fta}
&
&
\\
$\overline B{}^0\to n\overline{n}$
& $6.66^{+3.15}_{-2.39}{}^{+1.05}_{-0}{}^{+4.25}_{-3.20}\pm0.07$
& $\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{0}}$
& $1.52^{+0.72}_{-0.55}{}^{+0.24}_{-0}{}^{+0.97}_{-0.73}\pm0.07$
\\
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{0}}$
& $0\pm0\pm0\pm0{}^{+0.0004}_{-0}$
& $\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{-}}$
& $1.15^{+0.51}_{-0.41}\pm0{}^{+0.78}_{-0.58}\pm0.04$
\\
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{-}}$
& $0.07^{+0.03}_{-0.02}\pm0{}^{+0.03}_{-0.02}\pm0.01$
& $\overline B{}^0\to \Sigma^{0}\overline{\Lambda}$
& $4.10^{+1.98}_{-1.47}{}^{+0.39}_{-0}{}^{+1.90}_{-1.54}\pm0.05$
\\
$\overline B{}^0\to \Lambda\overline{\Lambda}$
& $0\pm0\pm0{}^{+0.23}_{-0}{}^{+0.0005}_{-0}$
& $\overline B{}^0\to \Lambda\overline{\Sigma^{0}}$
& $0.22^{+0.10}_{-0.08}\pm0{}^{+0.18}_{-0.08}\pm0.001$
\\
& ($<32$)~\cite{Tsai:2007pp}
&
&
\\
\end{tabular}
}
\\
\end{ruledtabular}
\end{table}
\subsubsection{Rates of $\overline B_q\to {{\cal B} \overline {\cal B}}$ decays}
Predictions on $\Delta S=0$, $\overline B_q\to {{\cal B} \overline {\cal B}}$ decay rates are shown in Table \ref{tab:BBDS=0}.
The first uncertainty is from the uncertainties of the $\chi$ and $\kappa$ parameters [see Eq. (\ref{eq:chikappa})], reflecting the uncertainties in the measurements of ${\cal B}(\overline B{}^0\to pp)$ and ${\cal B}(B^-\to \Lambda\bar p)$.
The second uncertainty is obtained by varying the tree and penguin strong phase $\phi$,
where we assign to the penguin amplitude:~\footnote{The strong phase of the tree amplitude is factored out and set to zero. Therefore, the strong phase $\phi$ is the relative strong phase of penguin and tree amplitudes.}
\begin{eqnarray}
P^{(\prime)}&=&-V_{tb} V^*_{td(s)}\frac{G_f}{\sq2}[c_3+c_4+\kappa c_5+\kappa c_6]\chi e^{i\phi} \bar u'(1-\gamma_5)v
\label{eq: penguin strong phase}
\end{eqnarray}
and a similar expression for $P^{(\prime)}_{EW}$. We use a common strong phase for simplicity.
The third uncertainty is from relaxing the asymptotic relations by varying $r_{t,i}, r_{p,i}, r_{ewp,i}$ [see Eqs. (\ref{eq:correction0}) and
(\ref{eq:correction1})]
and the last uncertainty is from sub-leading contributions, such as annihilation, penguin annihilation and exchange amplitudes
[see Eq. (\ref{eq:correction2})].
From the tables we see that errors are reduced at least by a factor of two compared to the previous analysis in \cite{Chua:2013zga}.
These errors can provide useful informations:
(i) As noted before the first errors reflect the uncertainties in ${\cal B}(\overline B{}^0\to pp)$ and ${\cal B}(B^-\to \Lambda\bar p)$.
(ii) The second errors reflect the size of tree-penguin interferences.
We can see from the table that in general the effects of tree-penguin interference on rates are not sizable. This is consistent with the tree-penguin ratios shown in Eq.~(\ref{eq:P/T}).
(iii)~The third errors, which correspond to corrections to the amplitudes away from the asymptotic limit, are usually the largest ones. (iv)~Occasionally we show the last errors, which are from sub-leading contributions, to an extra decimal place.
These are the terms with $\eta_{i,j,k}$ [see Eq. (\ref{eq:correction2})].
Note that for modes that only have sub-leading contributions, the rates are proportional to $|\eta_{i,j,k}|^2$, while for modes that also have tree and/or penguin terms, these (sub-leading) contributions are roughly linear in $\eta_{i,j,k}$.
For $\Delta S=0$, $\overline B_q\to{{\cal B} \overline {\cal B}}$ decays, there are three modes that can decay cascadely to all charged final states,
namely $\overline B{}^0\to p\bar p$, $\overline B{}^0\to\Xi^-\overline{\Xi^-}$ and $\Lambda\overline{\Lambda}$ decays.~\footnote{
To assess the experimental accessibility of the charmless two-body baryonic modes,
we note that
(i)~$\Delta^{++,0}$, $\Lambda$, $\Xi^-$, $\Sigma^{*\pm}$, $\Xi^{*0}$ and $\Omega^-$ have non-suppressed decay modes of final states with all charged particles,
(ii)~$\Delta^+$, $\Sigma^{+}$, $\Xi^0$, $\Sigma^{*0}$ and $\Xi^{*-}$ can be detected by detecting a $\pi^0$,
(iii)~$\Sigma^{0}$ can be detected by detecting $\gamma$,
(iv)~one needs $n$ in detecting $\Delta^-$ and $\Sigma^-$~\cite{Chua:2013zga,PDG}.
Note that although some particles, such as $\Xi^-$ and $\Xi^{*0}$, can decay to final states with all charged particles, they may suffer from low reconstruction efficiencies. }
We see from Table~\ref{tab:BBDS=0} that among these modes the $\overline B{}^0\to p\bar p$ decay has the highest rate and the best detectability. The rates of the other two modes are, in fact, one or two orders of magnitude smaller than the $p\bar p$ rate.
In particular, the predicted $\overline B{}^0\to\Lambda\overline\Lambda$ rate is much smaller than the present experimental upper limit~\cite{Tsai:2007pp}.
More modes can be searched for with $\pi^0$, $\gamma$, $\pi^0\pi^0$ and $\pi^0\gamma$ in the future experiments, such as in Belle II.
For example, with one $\pi^0$ or $\gamma$, one can search for
$\overline B^0\to \Sigma^0\overline \Lambda$ and
$\overline B{}^0_s\to p\overline{\Sigma^+}$ decays,
with $\pi^0\gamma$
one can also search for
$\overline B{}^0_s\to\Sigma^0\overline{\Xi^0}$ and
$B^-\to\Sigma^0\overline{\Sigma^+}$ decays, while with $\gamma\gamma$ one can search for
$\overline B{}^0\to\Sigma^0\overline{\Sigma^0}$ decays.
In fact, the $\overline B{}^0_s\to\Sigma^0\overline{\Xi^0}$ decay rate is of the order of $10^{-7}$, which is the highest rate in the table, but the mode is reconstructed through the cascade decay $\overline B{}^0_s\to\Sigma^0\overline{\Xi^0}\to\Lambda\gamma\,\bar\Lambda\pi^0$,
which requires both $\gamma$ and $\pi^0$ for the detection.
It is understandable why the $\overline B{}^0\to p\bar p$ decay is the first mode with experimental evidence.
From Eqs. (\ref{eq: BBBm, DS=0}), (\ref{eq: BBB0, DS=0}) and (\ref{eq: BBBs, DS=0}), we see that there are several modes without any tree ($T_{i{{\cal B} \overline {\cal B}}}$) contribution.
These include
$B^-\to \Sigma^{-}\overline{\Sigma^{0}}$,
$B^-\to \Sigma^{-}\overline{\Lambda}$,
$B^-\to \Xi^{-}\overline{\Xi^{0}}$,
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{-}}$,
$\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{-}}$,
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{0}}$,
$\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{+}}$
and
$\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{-}}$ decays.
As shown in Table~\ref{tab:BBDS=0} the second uncertainties of the rates of these modes are vanishing.
Note that
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{-}}$,
$\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{-}}$
and $\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{-}}$ decays are pure penguin modes, which only have $P_{i{{\cal B} \overline {\cal B}}}$, $P_{iEW{{\cal B} \overline {\cal B}}}$ and $PA_{{{\cal B} \overline {\cal B}}}$ terms,
while $\overline B{}^0\to \Xi^{0}\overline{\Xi^{0}}$ and
$\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{+}}$ decays only have subleading contributions, namely $E_{i{{\cal B} \overline {\cal B}}}$ and $PA_{{\cal B} \overline {\cal B}}$.
Note that although
$B^-\to \Lambda\overline{\Sigma^+}$,
$\overline B{}^0_s\to \Lambda\overline{\Xi^0}$,
$\overline B{}^0\to \Lambda\overline{\Lambda}$ and
$\overline B{}^0\to \Lambda\overline{\Sigma^{0}}$ decays
have tree amplitudes $T_{i{{\cal B} \overline {\cal B}}}$, these tree amplitudes cancel out in the asymptotic limit
[see Eqs (\ref{eq: BBBm, DS=0}), (\ref{eq: BBB0, DS=0}), (\ref{eq: BBBs, DS=0}) and (\ref{eq:asymptoticrelations})].
In particular, the tree, penguin, electroweak penguin and exchange amplitudes
in the $\overline B{}^0\to \Lambda\overline{\Lambda}$ amplitude
all cancel out in the asymptotic limit.
This mode is therefore sensitive to the correction to the asymptotic relations, Eqs. (\ref{eq:asymptoticrelations}),
(\ref{eq:correction0}) and (\ref{eq:correction1}).
\begin{table}[t!]
\caption{\label{tab:BBDS=-1} Same as Table~\ref{tab:BBDS=0}, but with $\Delta S=-1$, $\overline B_q\to{{\cal B} \overline {\cal B}}$ modes.
The latest experimental result is given in the parenthesis under the theoretical results.
The experimental $B^-\to\Lambda\bar p$ rate is one of the inputs.}
\begin{ruledtabular}
\begin{tabular}{llll}
Mode
& ${\mathcal B}(10^{-8})$
& Mode
& ${\mathcal B}(10^{-8})$
\\
\hline $B^-\to \Sigma^{0}\overline{p}$
& $0.76^{+0.35}_{-0.27}{}^{+0}_{-0.19}{}^{+0.50}_{-0.37}\pm0.001$
& $\overline B{}^0\to \Sigma^{+}\overline{p}$
& $1.83^{+0.76}_{-0.65}{}^{+0.47}_{-0}{}^{+0.82}_{-0.61}\pm0$
\\
$B^-\to \Sigma^{-}\overline{n}$
& $1.67^{+0.74}_{-0.59}\pm0{}^{+0.71}_{-0.58}{}^{+0.002}_{-0.002}$
& $\overline B{}^0\to \Sigma^{0}\overline{n}$
& $1.12^{+0.47}_{-0.40}{}^{+0.52}_{-0}{}^{+0.47}_{-0.36}\pm0$
\\
$B^-\to \Xi^{0}\overline{\Sigma^{+}}$
& $39.98^{+17.53}_{-14.23}{}^{+2.14}_{-0}{}^{+17.06}_{-14.02}\pm0.03$
&$\overline B{}^0\to \Xi^{0}\overline{\Sigma^{0}}$
& $18.50^{+8.11}_{-6.58}{}^{+0.99}_{-0}{}^{+7.89}_{-6.48}\pm0$
\\
$B^-\to\Xi^{-}\overline{\Sigma^{0}}$
& $19.40^{+8.59}_{-6.90}\pm0{}^{+8.38}_{-6.88}\pm0.02$
& $\overline B{}^0\to \Xi^{0}\overline{\Lambda}$
& $2.59^{+1.07}_{-0.92}{}^{+0.67}_{-0}{}^{+2.80}_{-1.74}\pm0$
\\
$B^-\to \Xi^{-}\overline{\Lambda}$
& $2.36^{+1.05}_{-0.84}\pm0{}^{+2.65}_{-1.67}\pm0.005$
& $\overline B{}^0\to \Xi^{-}\overline{\Sigma^{-}}$
& $35.88^{+15.89}_{-12.77}\pm0{}^{+15.50}_{-12.73}\pm0$
\\
$B^-\to \Lambda\overline{p}$
& $24.00^{+10.44}_{-8.54}{}^{+2.13}_{-0}{}^{+12.48}_{-9.85}\pm0.02$
& $\overline B{}^0\to \Lambda\overline{n}$
& $23.48^{+10.00}_{-8.36}{}^{+4.12}_{-0}{}^{+12.13}_{-9.50}\pm0$
\\
& $24^{+10}_{-8}\pm 3$~\cite{Aaij:2016xfa}
&
&
\\
$\overline B{}^0_s\to p\overline{p}$
& $0\pm0\pm0{}^{+0.007}_{-0}$
& $\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{+}}$
& $1.76^{+0.73}_{-0.63}{}^{+0.45}_{-0}{}^{+0.79}_{-0.59}{}^{+0.21}_{-0.20}$
\\
& ($2.84{}^{+2.03+0.85}_{-1.68-0.18}$)~\cite{Aaij:2013fta}
&
&
\\
$\overline B{}^0_s\to n\overline{n}$
& $0\pm0\pm0{}^{+0.007}_{-0}$
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{0}}$
& $1.61^{+0.69}_{-0.57}{}^{+0.21}_{-0}{}^{+0.69}_{-0.55}{}^{+0.20}_{-0.19}$
\\
$\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{0}}$
& $24.46^{+10.53}_{-8.71}{}^{+3.24}_{-0}{}^{+16.28}_{-12.07}{}^{+0.75}_{-0.74}$
& $\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{-}}$
& $1.49^{+0.66}_{-0.53}\pm0{}^{+0.63}_{-0.52}{}^{+0.19}_{-0.18}$
\\
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$
& $22.63^{+10.02}_{-8.05}\pm0{}^{+15.27}_{-11.36}{}^{+0.72}_{-0.71}$
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Lambda}$
& $0.05^{+0.03}_{-0.02}{}^{+0.04}_{-0}{}^{+0.06}_{-0.04}\pm0.001$
\\
$\overline B{}^0_s\to \Lambda\overline{\Lambda}$
& $14.90^{+6.42}_{-5.31}{}^{+1.97}_{-0}{}^{+7.58}_{-5.99}{}^{+0.61}_{-0.60}$
& $\overline B{}^0_s\to \Lambda\overline{\Sigma^{0}}$
& $0.05^{+0.03}_{-0.02}{}^{+0.04}_{-0}\pm0.02\pm0.001$
\\
\end{tabular}
\end{ruledtabular}
\end{table}
Predictions on $\Delta S=-1$, $\overline B_q\to {{\cal B} \overline {\cal B}}$ decay rates are shown in Table \ref{tab:BBDS=-1}.
There are 9 modes that have rates of order $10^{-7}$, namely
$B^-\to\Xi^0\overline{\Sigma^+}$,
$\Xi^-\overline{\Sigma^0}$,
$\Lambda\bar p$,
$\overline B{}^0\to \Xi^0\overline{\Sigma^0}$,
$\Xi^-\overline{\Sigma^-}$,
$\Lambda \bar n$,
$\overline B{}^0_s\to\Xi^0\overline{\Xi^0}$,
$\Xi^-\overline{\Xi^-}$
and $\Lambda\overline\Lambda$ decays.
On the other hand, there are 5 modes that can cascadely decay to all charged final states, namely
$B^-\to \Xi^-\overline \Lambda,\Lambda\bar p$,
$\overline B{}^0_s\to p\bar p$,
$\Xi^-\overline{\Xi^-}$
and
$\Lambda\overline\Lambda$.
Comparing these two sets, we see that
$B^-\to \Lambda\bar p$,
$\overline B{}^0_s\to \Lambda\overline{\Lambda}$
and
$\Xi^-\overline{\Xi^-}$ decays are the only three modes that have rates of order $10^{-7}$ and can cascadely decay to all charge final states.
In fact, the $B^-\to \Lambda\bar p$ mode has the highest rate among these three modes and has the best detectability, as the others need both $\Lambda$ and $\overline\Lambda$ for detection ($\overline B{}^0_s\to \Lambda\overline{\Lambda}$,
$\overline B{}^0_s\to\Xi^-\overline{\Xi^-}\to \Lambda\pi^-\,\overline\Lambda \pi^+$).
It is understandable why $B^-\to \Lambda\bar p$ is the first penguin mode with experimental evidence.
Nevertheless, it is interesting to search for $\overline B{}^0_s\to \Lambda\overline{\Lambda}$ and
$\Xi^-\overline{\Xi^-}$ decays as well.
One can also search for other modes.
For example, the
$B^-\to \Xi^{-}\overline{\Lambda}$
mode can also cascadely decay to all charged final state and its rate is of order $10^{-8}$,
but this mode suffers from the low reconstruction efficiency of $\Xi^-$.
With $\gamma$ one can search for
$B^-\to\Xi^{-}\overline{\Sigma^{0}}$
decay, which has a rate of order $10^{-7}$.
With $\pi^0$ one can search for
$\overline B{}^0\to \Xi^{0}\overline{\Lambda}$
and
$\overline B{}^0\to \Sigma^{+}\overline{p}$ decays at $10^{-8}$ level.
With $\pi^0\pi^0$ one can search for
$B^-\to \Xi^{0}\overline{\Sigma^{+}}$,
$\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{0}}$,
and
$\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{+}}$
decays, where the first two modes have rates of order $10^{-7}$,
while with $\pi^0\gamma$ one can search for $\overline B{}^0\to \Xi^{0}\overline{\Sigma^{0}}$,
which has a rate of order $10^{-7}$,
and finally with $\gamma\gamma$ one can search for
$\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{0}}$
decay at $10^{-8}$ level.
Note that with all charged final states, $\gamma$, $\pi^0\pi^0$ and $\pi^0\gamma$,
most of the modes with rates of the order of $10^{-7}$ can be searched for in the future.
From Eqs. (\ref{eq: BBBm, DS=-1}), (\ref{eq: BBB0, DS=-1}) and (\ref{eq: BBBs, DS=-1}),
we see that
$B^-\to \Sigma^{-}\overline{n}$,
$B^-\to\Xi^{-}\overline{\Sigma^{0}}$,
$B^-\to \Xi^{-}\overline{\Lambda}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$,
$\overline B{}^0_s\to p\overline{p}$ and
$\overline B{}^0_s\to n\overline{n}$ decays
do not have any tree ($T'_{i{{\cal B} \overline {\cal B}}}$) contribution.
As shown in Table~\ref{tab:BBDS=-1} the rates of these modes have vanishing second uncertainties.
In particular, we note that
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{-}}$
and
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$ are pure penguin modes,
which only have $P'_{i{{\cal B} \overline {\cal B}}}$, $P'_{iEW{{\cal B} \overline {\cal B}}}$ and $PA'_{{{\cal B} \overline {\cal B}}}$ terms,
while
$\overline B{}^0_s\to p\overline{p}$ and
$\overline B{}^0_s\to n\overline{n}$ decays
only have subleading contributions, the $E'_{i{{\cal B} \overline {\cal B}}}$ and $PA'_{{{\cal B} \overline {\cal B}}}$ terms.
We will return to these modes later.
The predicted $\overline B{}^0_s\to p\bar p$ rate is several orders of magnitude smaller than the present experimental result, which, however, has a large uncertainty.
As noted, this mode only receives contributions from sub-leading terms [see Eq. (\ref{eq:ampBB})].
One may try to enhance the subleading contributions, but one soon runs into contradictions. For example, after such an enhancement, the so-called ``subleading'' contributions in the $\overline B{}^0\to p\bar p$ mode would exceed the leading tree contribution.
Note that some enhancement of the subleading contributions is possible in the presence of final state rescattering (see, for example, \cite{FSI, FSI1}),
but it is unlikely that the enhancement can be that significant.
Note that in Ref.~\cite{Hsiao:2014zza}, when partial conservation of axial-vector current is relaxed, the calculated $B_s\to p\bar p$ rate can be close to the data, but the predicted $B^-\to\Lambda\bar p$ rate is of the order of $10^{-8}$ and is in tension with the data~\cite{Aaij:2016xfa}.
We need a more precise measurement of the $B_s\to p\bar p$ rate to settle the issue.
\begin{table}[t!]
\caption{\label{tab:BDDS=0} Same as Table~\ref{tab:BBDS=0}, but with $\Delta S=0$, $\overline B_q\to{{\cal B} \overline {\cal D}}$ modes.
The latest experimental result is given in the parenthesis.
}
\begin{ruledtabular}
\centering
\begin{tabular}{llll}
Mode
& ${\mathcal B}(10^{-8})$
& Mode
& ${\mathcal B}(10^{-8})$
\\
\hline $B^-\to p\overline{\Delta^{++}}
& $6.21^{+3.01}_{-2.23}{}^{+0.58}_{-0}{}^{+8.77}_{-4.89}\pm0.08
& $\overline B{}^0_s\to p\overline{\Sigma^{*+}}
& $1.84^{+0.89}_{-0.66}{}^{+0.17}_{-0}{}^{+2.60}_{-1.45}\pm0
\\
& ($<14$)~\cite{Wei:2007fg}
&
&
\\
$B^-\to n\overline{\Delta^+}$
& $2.18^{+1.05}_{-0.79}{}^{+0}_{-0.15}{}^{+0.93}_{-0.75}\pm0.03
& $\overline B{}^0_s\to n\overline{\Sigma^{*0}}$
& $0.97^{+0.47}_{-0.35}{}^{+0}_{-0.07}{}^{+0.41}_{-0.33}\pm0
\\
$B^-\to\Sigma^0\overline{\Sigma^{*+}}
& $3.14^{+1.53}_{-1.13}{}^{+0.17}_{-0}{}^{+1.34}_{-1.10}\pm0.02
&$\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{*0}}
& $2.77^{+1.35}_{-1.00}{}^{+0.15}_{-0}{}^{+1.19}_{-0.97}\pm0
\\
$B^-\to\Sigma^-\overline{\Sigma^{*0}}$
& $0.04\pm0.02\pm0\pm0.02^{+0.0003}_{-0.0002}
& $\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{*-}}$
& $0.08\pm0.03\pm0\pm0.03\pm0
\\
$B^-\to\Xi^{-}\overline{\Xi^{*0}}$
& $0.07^{+0.03}_{-0.02}\pm0{}^{+0.03}_{-0.02}{}^{+0.0004}_{-0.0003}
& $\overline B{}^0_s\to \Xi^{-}\overline{\Omega^-}$
& $0.18^{+0.08}_{-0.07}\pm0{}^{+0.08}_{-0.06}\pm0
\\
$B^-\to\Lambda\overline{\Sigma^{*+}}$
& $0.14^{+0.06}_{-0.05}\pm0{}^{+0.21}_{-0.05}\pm0.001
& $\overline B{}^0_s\to \Lambda\overline{\Xi^{*0}}$
& $0.12^{+0.05}_{-0.04}\pm0{}^{+0.19}_{-0.05}\pm0
\\
$\overline B{}^0\to p\overline{\Delta^+}
& $1.92^{+0.93}_{-0.69}{}^{+0.18}_{-0}{}^{+2.71}_{-1.51}\pm0.02
& $\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{*+}}$
& $0\pm0\pm0{}^{+0.0001}_{-0}
\\
$\overline B{}^0\to n\overline{\Delta^0}$
& $2.01^{+0.97}_{-0.73}{}^{+0}_{-0.14}{}^{+0.86}_{-0.69}\pm0.03
& $\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{*0}}
& $1.45^{+0.71}_{-0.52}{}^{+0.08}_{-0}{}^{+0.62}_{-0.51}\pm0.01
\\
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{*0}}$
& $0\pm0\pm0{}^{+0.0001}_{-0}
& $\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{*-}}$
& $0.08^{+0.04}_{-0.03}\pm0{}\pm0.03\pm0
\\
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{*-}}$
& $0.06^{+0.03}_{-0.02}\pm0{}^{+0.03}_{-0.02}\pm0
& $\overline B{}^0\to \Lambda\overline{\Sigma^{*0}}$
& $0.06^{+0.03}_{-0.02}\pm0{}^{+0.10}_{-0.02}{}^{+0.0004}_{-0.0003}
\\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{Rates of $\overline B_q\to {{\cal B} \overline {\cal D}}$ decays}
Predictions on rates of $\Delta S=0$, $\overline B_q\to{{\cal B} \overline {\cal D}}$ decays are shown in Table~\ref{tab:BDDS=0}.
Modes that can cascade to all charged final states and have unsuppressed rates are
$B^-\to p\overline{\Delta^{++}}$ and $\overline B{}^0_s\to p\overline{\Sigma^{*+}}$ decays.
In particular, the predicted rate of $B^-\to p\overline{\Delta^{++}}$ decay is the highest one in the table and is just roughly a factor of 2 smaller than the present experimental upper bound \cite{Wei:2007fg}, which, however, has not been updated for quite a while.
This mode could be just around the corner.
On the other hand, four other modes that can cascade to all charged final states, namely
$B^-\to \Xi^-\overline{\Xi^{*0}}$,
$\Lambda\overline{\Sigma^{*+}}$,
$\overline B{}^0_s\to \Xi^-\overline{\Omega^-}$
and
$\Lambda\overline{\Xi^{*0}}$ decays,
are one or two orders of magnitude more suppressed.
Note that
with $\gamma$ one can search for
$B^-\to\Sigma^0\overline{\Sigma^{*+}}$
and
$\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{*0}}$
decays,
with $\pi^0$ one can search for
$\overline B{}^0\to p\overline{\Delta^+}$
decay,
while with $\gamma\pi^0$ one can search for
$\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{*0}}$
decay in the future.
All of these modes have rates of order $10^{-8}$.
From Eqs. (\ref{eq: BDBm, DS=0}), (\ref{eq: BDB0, DS=0}) and (\ref{eq: BDBs, DS=0}),
we see that
$B^-\to\Sigma^-\overline{\Sigma^{*0}}$,
$B^-\to\Xi^{-}\overline{\Xi^{*0}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Xi^{-}\overline{\Omega^-}$,
$\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{*+}}$ and
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{*0}}$ decay amplitudes,
do not have any tree ($T_{i{{\cal B} \overline {\cal D}}}$) contribution.
As shown in Table~\ref{tab:BDDS=0}, the rates of these modes have vanishing second uncertainties.
We note that
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{*-}}$ and
$\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{*-}}$
decays are pure penguin modes with only
$P_{{{\cal B} \overline {\cal D}}}$ and $P_{iEW{{\cal B} \overline {\cal D}}}$ contributions,
while
$\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{*+}}$ and
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{*0}}$ decays
are pure exchange modes with only $E_{{{\cal B} \overline {\cal D}}}$ contributions.
Although
$B^-\to\Lambda\overline{\Sigma^{*+}}$,
$\overline B{}^0_s\to \Lambda\overline{\Xi^{*0}}$ and
$\overline B{}^0\to \Lambda\overline{\Sigma^{*0}}$ decay amplitudes
contain tree amplitudes, these cancel out in the asymptotic limit.
There are relations among rates.
Using formulas in Appendices~\ref{appendix:amplitudes} and \ref{appendix:formulas},
we have~\cite{Chua:2013zga}
\begin{eqnarray}
{\mathcal B}(B^-\to\Xi^{-}\overline{\Xi^{*0}})
&=&2 {\mathcal B}(B^-\to\Sigma^-\overline{\Sigma^{*0}})
\left(\frac{p_{cm}(B^-\to\Xi^{-}\overline{\Xi^{*0}})}{p_{cm}(B^-\to\Sigma^-\overline{\Sigma^{*0}})}\right)^3
\nonumber\\
&\simeq&2 {\mathcal B}(B^-\to\Sigma^-\overline{\Sigma^{*0}}),
\nonumber\\
3\tau_{B_d} {\mathcal B}(\bar B^0\to\Sigma^{-}\overline{\Sigma^{*-}})
&\simeq&3\tau_{B_d}{\mathcal B}(\bar B^0\to\Xi^{-}\overline{\Xi^{*-}})
\simeq
3\tau_{B_s} {\mathcal B}(\bar B^0_s\to\Sigma^{-}\overline{\Xi^{*-}})
\nonumber\\
&\simeq&\tau_{B_s}{\mathcal B}(\bar B^0_s\to\Xi^{-}\overline{\Omega^-}),
\nonumber\\
{\mathcal B}(\bar B^0\to\Sigma^{+}\overline{\Sigma^{*+}})
&\simeq&{\mathcal B}(\bar B^0\to\Xi^{0}\overline{\Xi^{*0}}).
\end{eqnarray}
One can check from Table~\ref{tab:BDDS=0} that the rates of these modes roughly satisfy the above relations and the agreement will be improved when the SU(3) breaking effects are taken into account.
Note that these relations do not rely on the asymptotic relations.
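As a rough numerical cross-check (a minimal sketch, not part of the analysis itself), the first two relations can be compared with the central values quoted in Table~\ref{tab:BDDS=0}; the only extra inputs are approximate $B$-meson lifetimes.
\begin{verbatim}
# Rough check of the SU(3) relations above, using the central values
# (in units of 10^-8) from Table tab:BDDS=0.  The lifetimes are
# approximate PDG values and are an assumption of this sketch.
tau_Bd, tau_Bs = 1.52, 1.51   # ps

br = {"B- > Xi- Xi*0bar":   0.07,
      "B- > Sig- Sig*0bar": 0.04,
      "B0 > Sig- Sig*-bar": 0.08,
      "B0 > Xi- Xi*-bar":   0.06,
      "Bs > Sig- Xi*-bar":  0.08,
      "Bs > Xi- Omega-bar": 0.18}

# BR(B- -> Xi- Xi*0bar) ~ 2 BR(B- -> Sig- Sig*0bar)
print(br["B- > Xi- Xi*0bar"], 2 * br["B- > Sig- Sig*0bar"])   # 0.07 vs 0.08

# 3 tau_Bd BR(B0 -> Sig- Sig*-bar) ~ 3 tau_Bd BR(B0 -> Xi- Xi*-bar)
#   ~ 3 tau_Bs BR(Bs -> Sig- Xi*-bar) ~ tau_Bs BR(Bs -> Xi- Omega-bar)
print(3 * tau_Bd * br["B0 > Sig- Sig*-bar"],
      3 * tau_Bd * br["B0 > Xi- Xi*-bar"],
      3 * tau_Bs * br["Bs > Sig- Xi*-bar"],
      tau_Bs * br["Bs > Xi- Omega-bar"])   # roughly 0.36, 0.27, 0.36, 0.27
\end{verbatim}
The relations are indeed satisfied within the quoted uncertainties.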
Predictions on $\Delta S=-1$, $\overline B_q\to{{\cal B} \overline {\cal D}}$ decay rates are shown in Table \ref{tab:BDDS=-1}.
Both the $\overline B{}^0\to \Xi^-\overline{\Sigma^{*-}}$ and $\Lambda\overline{\Delta^0}$ modes can cascade to all charged final states,
but only the $\overline B{}^0\to \Xi^-\overline{\Sigma^{*-}}$ rate can reach the $10^{-8}$ level.
In principle, this mode can be detected through $\overline B{}^0\to \Xi^-\overline{\Sigma^{*-}}\to \Lambda\pi^-\,\overline\Lambda\pi^+$ decay, but may be restricted by the low reconstruction efficiency of $\Xi^-$.
Note that with $\pi^0$ one can search for
$\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{*+}}$,
$B^-\to \Xi^{0}\overline{\Sigma^{*+}}$,
$B^-\to \Sigma^+\overline{\Delta^{++}}$,
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{*0}}$
and
$B^-\to \Xi^{-}\overline{\Sigma^{*0}}$
decays, which have rates of order $10^{-8}$,
while with $\gamma$ one can search for
$\overline B{}^0\to \Sigma^{0}\overline{\Delta^0}$
decay in the future.
In particular, the $B^-\to \Sigma^+\overline{\Delta^{++}}$ decay rate is the highest one in the table.
With $\gamma\pi^0$, $\pi^0\pi^0$ or $\gamma\gamma$, one can search for
$B^-\to \Sigma^0\overline{\Delta^+}$,
$\overline B{}^0\to \Sigma^{+}\overline{\Delta^+}$,
$\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{*0}}$
and
$\overline B{}^0\to \Xi^{0}\overline{\Sigma^{*0}}$
decays, which have rates of order (or close to) $10^{-8}$, in the future.
Note that the predicted $\overline B{}^0\to \Lambda\overline{\Delta^0}$ and $B^-\to \Lambda\overline{\Delta^+}$ rates
are both roughly two orders of magnitude below the present experimental bounds~\cite{Wang:2007as}.
From Eqs. (\ref{eq: BDBm, DS=-1}), (\ref{eq: BDB0, DS=-1}) and (\ref{eq: BDBs, DS=-1}), we see that
$B^-\to \Sigma^-\overline{\Delta^0}$,
$B^-\to \Xi^{-}\overline{\Sigma^{*0}}$,
$\overline B{}^0\to \Sigma^{-}\overline{\Delta^-}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0_s\to p\overline{\Delta^+}$ and
$\overline B{}^0_s\to n\overline{\Delta^0}$ decays,
do not have any tree ($T'_{i{{\cal B} \overline {\cal D}}}$) contribution.
As shown in Table~\ref{tab:BDDS=-1}, the rates of these modes have vanishing second uncertainties.
Note that
$\overline B{}^0\to \Sigma^{-}\overline{\Delta^-}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{*-}}$ and
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$
decays are pure penguin modes with only
$P'_{{{\cal B} \overline {\cal D}}}$ and $P'_{iEW{{\cal B} \overline {\cal D}}}$ contributions,
while
$\overline B{}^0_s\to p\overline{\Delta^+}$ and
$\overline B{}^0_s\to n\overline{\Delta^0}$
are pure exchange modes with only $E'_{{{\cal B} \overline {\cal D}}}$ contributions.
\begin{table}[t!]
\caption{\label{tab:BDDS=-1} Same as Table~\ref{tab:BBDS=0}, but with $\Delta S=-1$, $\overline B_q\to{{\cal B} \overline {\cal D}}$ modes.
The latest experimental results are given in parentheses.
}
\begin{ruledtabular}
\centering
\begin{tabular}{llll}
Mode
& ${\mathcal B}(10^{-8})$
& Mode
& ${\mathcal B}(10^{-8})$
\\
\hline $B^-\to \Sigma^+\overline{\Delta^{++}}
& $6.99^{+2.90}_{-2.49}{}^{+1.80}_{-0}{}^{+3.13}_{-2.33}\pm0.002
& $\overline B{}^0\to \Sigma^{+}\overline{\Delta^+}
& $2.16^{+0.90}_{-0.77}{}^{+0.56}_{-0}{}^{+0.97}_{-0.72}\pm0
\\
$B^-\to \Sigma^0\overline{\Delta^+}
& $4.25^{+1.83}_{-1.51}{}^{+0.56}_{-0}{}^{+1.83}_{-1.46}\pm0.003
& $\overline B{}^0\to \Sigma^{0}\overline{\Delta^0}
& $3.93^{+1.69}_{-1.40}{}^{+0.52}_{-0}{}^{+1.69}_{-1.35}\pm0
\\
$B^-\to \Sigma^-\overline{\Delta^0}$
& $1.96^{+0.87}_{-0.70}\pm0{}^{+0.83}_{-0.69}\pm0.002
&$\overline B{}^0\to \Sigma^{-}\overline{\Delta^-}$
& $5.46^{+2.42}_{-1.94}\pm0{}^{+2.32}_{-1.91}\pm0
\\
$B^-\to \Xi^{0}\overline{\Sigma^{*+}}$
& $1.49^{+0.67}_{-0.53}{}^{+0}_{-0.38}{}^{+0.93}_{-0.70}\pm0.002
& $\overline B{}^0\to \Xi^{0}\overline{\Sigma^{*0}}$
& $0.69^{+0.31}_{-0.24}{}^{+0}_{-0.17}{}^{+0.43}_{-0.33}\pm0
\\
$B^-\to \Xi^{-}\overline{\Sigma^{*0}}$
& $0.81^{+0.36}_{-0.29}\pm0{}^{+0.34}_{-0.28}\pm0.001
& $\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}
& $1.49^{+0.66}_{-0.53}\pm0{}^{+0.63}_{-0.52}\pm0
\\
$B^-\to \Lambda\overline{\Delta^{+}}$
& $0.15^{+0.07}_{-0.05}{}^{+0.11}_{-0}{}^{+0.07}_{-0.05}\pm0
& $\overline B{}^0\to \Lambda\overline{\Delta^0}$
& $0.14^{+0.07}_{-0.05}{}^{+0.10}_{-0}{}^{+0.06}_{-0.05}\pm0
\\
& ($<82$)~\cite{Wang:2007as}
&
& ($<93$)~\cite{Wang:2007as}
\\
$\overline B{}^0_s\to p\overline{\Delta^+}$
& $0\pm0\pm0{}^{+0.000005}_{-0}
& $\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{*+}}
& $2.07^{+0.86}_{-0.74}{}^{+0.53}_{-0}{}^{+0.93}_{-0.69}\pm0.001
\\
$\overline B{}^0_s\to n\overline{\Delta^0}$
& $0\pm0\pm0{}^{+0.000005}_{-0}
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{*0}}
& $1.89^{+0.81}_{-0.67}{}^{+0.25}_{-0}{}^{+0.81}_{-0.65}\pm0.001
\\
$\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{*0}}
& $1.31^{+0.59}_{-0.47}{}^{+0}_{-0.33}{}^{+0.82}_{-0.62}\pm0.002
& $\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{*-}}$
& $1.74^{+0.77}_{-0.62}\pm0{}^{+0.74}_{-0.61}\pm0
\\
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}
& $1.42^{+0.63}_{-0.51}\pm0{}^{+0.60}_{-0.50}\pm0
& $\overline B{}^0_s\to \Lambda\overline{\Sigma^{*0}}$
& $0.07^{+0.03}_{-0.02}{}^{+0.05}_{-0}{}^{+0.03}_{-0.02}\pm0.001
\\
\end{tabular}
\end{ruledtabular}
\end{table}
The rates of some modes are related.
From Appendices~\ref{appendix:amplitudes} and \ref{appendix:formulas},
we have~\cite{Chua:2013zga}
\begin{eqnarray}
2{\mathcal B}(B^-\to\Xi^{-}\overline{\Sigma^{*0}})
&=&{\mathcal B}(B^-\to\Sigma^-\overline{\Delta^0}),
\nonumber\\
3\tau_{B_s}{\mathcal B}(\bar B^0_s\to\Sigma^{-}\overline{\Sigma^{*-}})
&=&3\tau_{B_s}{\mathcal B}(\bar B^0_s\to\Xi^{-}\overline{\Xi^{*-}})
=3 \tau_{B_d}{\mathcal B}(\bar B^0\to\Xi^{-}\overline{\Sigma^{*-}})
\nonumber\\
&=&\tau_{B_d}{\mathcal B}(\bar B^0\to\Sigma^{-}\overline{\Delta^-}),
\nonumber\\
{\mathcal B}(\bar B^0_s\to p\overline{\Delta^+})
&=&{\mathcal B}(\bar B^0_s\to n\overline{\Delta^0}),
\end{eqnarray}
where these relations are subject to corrections from SU(3) breaking in $|p_{cm}|^3$.
These relations do not rely on the asymptotic relations.
As shown in Table~\ref{tab:BDDS=-1} the rates of these modes roughly satisfy the above relations and the agreement will be improved when the SU(3) breaking effects are taken into account.
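For reference, the center-of-mass momentum entering these $|p_{cm}|^3$ corrections is the standard two-body expression (quoted here for convenience; $m_{1,2}$ denote the final-state baryon masses),
\begin{eqnarray}
p_{cm}=\frac{\sqrt{\left[m_{B_q}^2-(m_1+m_2)^2\right]\left[m_{B_q}^2-(m_1-m_2)^2\right]}}{2 m_{B_q}},
\nonumber
\end{eqnarray}
so the SU(3) breaking in the above relations amounts to the deviation of the corresponding $(p_{cm,1}/p_{cm,2})^3$ ratios from unity.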
\begin{table}[t!]
\caption{\label{tab:DBDS=0} Same as Table~\ref{tab:BBDS=0}, but with $\Delta S=0$, $\overline B_q\to{{\cal D} \overline {\cal B}}$ modes.
The latest experimental result is given in the parenthesis.
}
\begin{ruledtabular}
\centering
\begin{tabular}{llll}
Mode
& ${\mathcal B}(10^{-8})$
& Mode
& ${\mathcal B}(10^{-8})$
\\
\hline $B^-\to \Delta^0\overline{p}
& $2.19^{+1.05}_{-0.79}{}^{+0}_{-0.15}{}^{+0.92}_{-0.74}\pm0.03
& $\overline B{}^0_s\to \Delta^+\overline{\Sigma^{+}}
& $1.70^{+0.82}_{-0.61}{}^{+0.16}_{-0}{}^{+0.79}_{-0.64}\pm0
\\
& ($<138$)~\cite{Wei:2007fg}
&
&
\\
$B^-\to \Delta^-\overline{n}$
& $0.33^{+0.15}_{-0.12}\pm0{}^{+0.14}_{-0.12}{}^{+0.002}_{-0.001}
& $\overline B{}^0_s\to \Delta^0\overline{\Sigma^{0}}
& $0.96^{+0.46}_{-0.34}{}^{+0.15}_{-0}{}^{+0.50}_{-0.40}\pm0
\\
$B^-\to \Sigma^{*0}\overline{\Sigma^{+}}$
& $0.85^{+0.41}_{-0.31}{}^{+0}_{-0.06}{}^{+0.36}_{-0.29}\pm0.01
&$\overline B{}^0_s\to \Delta^-\overline{\Sigma^{-}}$
& $0.27^{+0.12}_{-0.10}\pm0{}^{+0.11}_{-0.09}\pm0
\\
$B^-\to \Sigma^{*-}\overline{\Sigma^{0}}$
& $0.04\pm0.02\pm0\pm0.02{}^{+0.0003}_{-0.0002}
& $\overline B{}^0_s\to \Sigma^{*0}\overline{\Xi^{0}}
& $2.73^{+1.33}_{-0.98}{}^{+0.15}_{-0}{}^{+1.17}_{-0.96}\pm0
\\
$B^-\to \Xi^{*-}\overline{\Xi^{0}}$
& $0.07^{+0.03}_{-0.02}\pm0{}^{+0.03}_{-0.02}{}^{+0.0004}_{-0.0003}
& $\overline B{}^0_s\to \Sigma^{*-}\overline{\Xi^{-}}$
& $0.07\pm0.03\pm0\pm0.03\pm0
\\
$B^-\to \Sigma^{*-}\overline{\Lambda}$
& $0.14^{+0.06}_{-0.05}\pm0{}^{+0.06}_{-0.05}\pm0.001
& $\overline B{}^0_s\to \Delta^0\overline{\Lambda}
& $2.59^{+1.27}_{-0.93}{}^{+0.03}_{-0}{}^{+1.01}_{-0.84}\pm0
\\
$\overline B{}^0\to \Delta^+\overline{p}
& $1.92^{+0.93}_{-0.69}{}^{+0.18}_{-0}{}^{+0.89}_{-0.72}\pm0.02
& $\overline B{}^0\to \Sigma^{*+}\overline{\Sigma^{+}}$
& $0\pm0\pm0\pm0{}^{+0.0001}_{-0}
\\
$\overline B{}^0\to \Delta^0\overline{n}$
& $7.46^{+3.64}_{-2.69}{}^{+0.40}_{-0}{}^{+3.19}_{-2.62}\pm0.05
& $\overline B{}^0\to \Sigma^{*0}\overline{\Sigma^{0}}$
& $0.39^{+0.19}_{-0.14}{}^{+0}_{-0.03}{}^{+0.17}_{-0.13}\pm0.005
\\
$\overline B{}^0\to \Xi^{*0}\overline{\Xi^{0}}$
& $0\pm0\pm0\pm0{}^{+0.0001}_{-0}
& $\overline B{}^0\to \Sigma^{*-}\overline{\Sigma^{-}}$
& $0.08^{+0.04}_{-0.03}\pm0\pm0.03\pm0
\\
$\overline B{}^0\to \Xi^{*-}\overline{\Xi^{-}}$
& $0.06^{+0.03}_{-0.02}\pm0{}^{+0.03}_{-0.02}\pm0
& $\overline B{}^0\to \Sigma^{*0}\overline{\Lambda}
& $1.18^{+0.57}_{-0.42}{}^{+0.11}_{-0}{}^{+0.55}_{-0.44}\pm0.02
\\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{Rates of $\overline B_q\to {{\cal D} \overline {\cal B}}$ decays}
Predictions on $\Delta S=0$, $\overline B_q\to {{\cal D} \overline {\cal B}}$ decay rates are shown in Table \ref{tab:DBDS=0}.
The mode with the highest decay rate is the $\overline B{}^0\to\Delta^0\bar n$ decay, which is unfortunately hard to detect.
Both $B^-\to \Delta^0\overline p$ and $\overline B{}^0_s\to\Delta^0\overline\Lambda$ decays can cascade to all charged final states.
Both modes have rates of order $10^{-8}$, but the $B^-\to \Delta^0\overline p$ decay has better detectability.
Note that the predicted rate of this mode is roughly two orders of magnitude below the present experimental limit~\cite{Wei:2007fg}, which, however, has not been updated since 2008.
With $\pi^0$, one can search for
$\overline B{}^0\to \Delta^+\overline{p}$
and
$\overline B{}^0\to \Sigma^{*0}\overline{\Lambda}$
decays,
while the $\overline B{}^0_s\to \Delta^0\overline{\Sigma^{0}}$ decay,
which has a slightly smaller rate, can be searched for with $\pi^0$ in the future.
With $\pi^0\pi^0$, one can search for
$\overline B{}^0_s\to \Sigma^{*0}\overline{\Xi^{0}}$,
$\overline B{}^0_s\to \Delta^+\overline{\Sigma^{+}}$,
and
$B^-\to \Sigma^{*0}\overline{\Sigma^{+}}$
decays, in the future.
These modes all have rates at the level of $10^{-8}$.
Note that
$B^-\to \Delta^-\overline{n}$,
$B^-\to \Sigma^{*-}\overline{\Sigma^{0}}$,
$B^-\to \Xi^{*-}\overline{\Xi^{0}}$,
$B^-\to \Sigma^{*-}\overline{\Lambda}$,
$\overline B{}^0\to \Xi^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Delta^-\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0\to \Sigma^{*-}\overline{\Sigma^{-}}$,
$\overline B{}^0\to \Xi^{*0}\overline{\Xi^{0}}$ and
$\overline B{}^0\to \Sigma^{*+}\overline{\Sigma^{+}}$ decays
do not have any tree ($T_{i{{\cal D} \overline {\cal B}}}$) contribution
[see Eqs. (\ref{eq: DBBm, DS=0}), (\ref{eq: DBB0, DS=0}) and (\ref{eq: DBBs, DS=0})],
and, consequently, their rates in Table~\ref{tab:DBDS=0} have vanishing second uncertainties.
In particular, the
$\overline B{}^0\to \Xi^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Delta^-\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-}\overline{\Xi^{-}}$ and
$\overline B{}^0\to \Sigma^{*-}\overline{\Sigma^{-}}$ decays
are pure penguin modes, which only have $P_{{\cal D} \overline {\cal B}}$ and $P_{i EW{{\cal D} \overline {\cal B}}}$ contributions,
while
$\overline B{}^0\to \Xi^{*0}\overline{\Xi^{0}}$ and
$\overline B{}^0\to \Sigma^{*+}\overline{\Sigma^{+}}$ decays
are pure exchange ($E_{{\cal D} \overline {\cal B}}$) modes.
The rates of some modes are related.
Using the formulas in Appendices~\ref{appendix:amplitudes} and \ref{appendix:formulas},
we have~\cite{Chua:2013zga}
\begin{eqnarray}
{\mathcal B}(B^-\to\Delta^0\overline{p})
&=&
2{\mathcal B}(B^-\to\Sigma^{*0}\overline{\Sigma^{+}}),
\nonumber\\
{\mathcal B}(B^-\to\Delta^-\overline{n})
&=&
3 {\mathcal B}(B^-\to\Xi^{*-}\overline{\Xi^{0}})
=6 {\mathcal B}(B^-\to\Sigma^{*-}\overline{\Sigma^{0}})
\nonumber\\
&=&2 {\mathcal B}(B^-\to\Sigma^{*-}\overline{\Lambda}),
\nonumber\\
3\tau_{B_d}{\mathcal B}(\bar B^0\to\Sigma^{*-}\overline{\Sigma^{-}})
&=&3\tau_{B_d}{\mathcal B}(\bar B^0\to\Xi^{*-}\overline{\Xi^{-}})
= \tau_{B_s}{\mathcal B}(\bar B^0_s\to\Delta^-\overline{\Sigma^{-}})
\nonumber\\
&=&3\tau_{B_s}{\mathcal B}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{-}}),
\nonumber\\
{\mathcal B}(\bar B^0\to\Sigma^{*+}\overline{\Sigma^{+}})
&=&{\mathcal B}(\bar B^0\to\Xi^{*0}\overline{\Xi^{0}}),
\end{eqnarray}
where these relations are subject to corrections from SU(3) breaking in $|p_{cm}|^3$.
These relations do not rely on the asymptotic relations.
Note that the relations on $B^-$ decay rates are new compared to those in \cite{Chua:2013zga}.
As shown in Table~\ref{tab:DBDS=0} the rates of these modes roughly satisfy the above relations and the agreement will be improved when the SU(3) breaking effects are taken into account.
\begin{table}[t!]
\caption{\label{tab:DBDS=-1} Same as Table~\ref{tab:BBDS=0}, but with $\Delta S=-1$, $\overline B_q\to{{\cal D} \overline {\cal B}}$ modes.}
\begin{ruledtabular}
\centering
\begin{tabular}{llll}
Mode
& ${\mathcal B}(10^{-8})$
& Mode
& ${\mathcal B}(10^{-8})$
\\
\hline $B^-\to \Sigma^{*0}\overline{p}
& $0.94^{+0.43}_{-0.33}{}^{+0}_{-0.24}{}^{+0.51}_{-0.40}\pm0.001
& $\overline B{}^0\to \Sigma^{*+}\overline{p}
& $2.25^{+0.93}_{-0.80}{}^{+0.58}_{-0}{}^{+0.93}_{-0.75}\pm0
\\
&$<47$~\cite{Wang:2007as}
&
&$<26$~\cite{Wang:2007as}
\\
$B^-\to \Sigma^{*-}\overline{n}$
& $2.05^{+0.91}_{-0.73}\pm0{}^{+0.87}_{-0.72}\pm0.002
& $\overline B{}^0\to \Sigma^{*0}\overline{n}$
& $1.39^{+0.58}_{-0.49}{}^{+0.65}_{-0}{}^{+0.54}_{-0.45}\pm0
\\
$B^-\to \Xi^{*0}\overline{\Sigma^{+}}
& $1.44^{+0.65}_{-0.51}{}^{+0}_{-0.37}{}^{+0.78}_{-0.61}\pm0.002
&$\overline B{}^0\to \Xi^{*0}\overline{\Sigma^{0}}$
& $0.67^{+0.30}_{-0.24}{}^{+0}_{-0.17}{}^{+0.36}_{-0.28}\pm0
\\
$B^-\to \Xi^{*-}\overline{\Sigma^{0}}
& $0.78^{+0.35}_{-0.28}\pm0{}^{+0.33}_{-0.27}\pm0.001
& $\overline B{}^0\to \Xi^{*-}\overline{\Sigma^{-}}$
& $1.45^{+0.64}_{-0.51}\pm0{}^{+0.61}_{-0.51}\pm0
\\
$B^-\to \Omega^-\overline{\Xi^{0}}
& $3.77^{+1.67}_{-1.34}\pm0{}^{+1.60}_{-1.32}\pm0.003
& $\overline B{}^0\to \Omega^-\overline{\Xi^{-}}
& $3.47^{+1.54}_{-1.24}\pm0{}^{+1.47}_{-1.21}\pm0
\\
$B^-\to \Xi^{*-}\overline{\Lambda}
& $2.48^{+1.10}_{-0.88}\pm0{}^{+1.05}_{-0.87}\pm0.002
& $\overline B{}^0\to \Xi^{*0}\overline{\Lambda}$
& $2.71^{+1.13}_{-0.97}{}^{+0.70}_{-0}{}^{+1.12}_{-0.90}\pm0
\\
$\overline B{}^0_s\to \Delta^+\overline{p}$
& $0\pm0\pm0\pm0{}^{+0.000005}_{-0}
& $\overline B{}^0_s\to \Sigma^{*+}\overline{\Sigma^{+}}
& $1.98^{+0.82}_{-0.71}{}^{+0.51}_{-0}{}^{+0.82}_{-0.66}\pm0.001
\\
$\overline B{}^0_s\to \Delta^0\overline{n}$
& $0\pm0\pm0\pm0{}^{+0.000005}_{-0}
& $\overline B{}^0_s\to \Sigma^{*0}\overline{\Sigma^{0}}
& $1.81^{+0.78}_{-0.64}{}^{+0.24}_{-0}{}^{+0.74}_{-0.61}\pm0.001
\\
$\overline B{}^0_s\to \Xi^{*0}\overline{\Xi^{0}}
& $1.99^{+0.83}_{-0.71}{}^{+0.93}_{-0}{}^{+0.78}_{-0.64}\pm0.0002
& $\overline B{}^0_s\to \Sigma^{*-}\overline{\Sigma^{-}}$
& $1.67^{+0.74}_{-0.59}\pm0{}^{+0.71}_{-0.58}\pm0
\\
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}
& $1.36^{+0.60}_{-0.48}\pm0{}^{+0.58}_{-0.47}\pm0
& $\overline B{}^0_s\to \Sigma^{*0}\overline{\Lambda}$
& $0.06^{+0.03}_{-0.02}{}^{+0.05}_{-0}\pm0.02\pm0.001
\\
\end{tabular}
\end{ruledtabular}
\end{table}
Predictions on $\Delta S=-1$, $\overline B_q\to {{\cal D} \overline {\cal B}}$ decay rates are shown in Table \ref{tab:DBDS=-1}.
Note that
$\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$,
$\overline B{}^0\to \Xi^{*0}\overline{\Lambda}$
and
$\overline B{}^0\to \Sigma^{*+}\overline{p}$
decays are the only three modes that can cascade to all charged final states.
All of them have rates of order $10^{-8}$.
We need more data to search for them as the reconstruction efficiencies of the final states of the first two modes are low,
while the predicted $\overline B{}^0\to \Sigma^{*+}\overline{p}$ rate is one order of magnitude below the experimental limit,
which has not been updated since 2007~\cite{Wang:2007as}.
Note that with one $\pi^0$ one can search for
$B^-\to \Omega^-\overline{\Xi^{0}}$,
$B^-\to \Xi^{*-}\overline{\Lambda}$,
$\overline B{}^0_s\to \Xi^{*0}\overline{\Xi^{0}}$,
$\overline B{}^0_s\to \Sigma^{*+}\overline{\Sigma^{+}}$,
$B^-\to \Xi^{*0}\overline{\Sigma^{+}}$,
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$
and
$B^-\to \Sigma^{*0}\overline{p}$
decays, which have rates of order $10^{-8}$, in the future.
In particular,
the $B^-\to\Omega^-\overline{\Xi^0}$ decay has the highest rate in the table
and the predicted $B^-\to \Sigma^{*0}\overline{p}$ rate is more than one order of magnitude below the experimental limit,
which has not been updated since 2007~\cite{Wang:2007as}.
With $\pi^0\gamma$ one can search for
$\overline B{}^0_s\to \Sigma^{*0}\overline{\Sigma^{0}}$
and
$B^-\to \Xi^{*-}\overline{\Sigma^{0}}$
decays, which have rates of order (or close to) $10^{-8}$, in the future.
Note that
$B^-\to \Sigma^{*-}\overline{n}$,
$B^-\to \Xi^{*-}\overline{\Sigma^{0}}$,
$B^-\to \Omega^-\overline{\Xi^{0}}$,
$B^-\to \Xi^{*-}\overline{\Lambda}$,
$\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-}\overline{\Sigma^{-}}$,
$\overline B{}^0\to \Xi^{*-}\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Delta^+\overline{p}$ and
$\overline B{}^0_s\to \Delta^0\overline{n}$ decays
do not have any tree ($T'_{i{{\cal D} \overline {\cal B}}}$) contribution
[see Eqs. (\ref{eq: DBBm, DS=-1}), (\ref{eq: DBB0, DS=-1}) and (\ref{eq: DBBs, DS=-1})],
and, consequently, their rates in Table~\ref{tab:DBDS=-1} have vanishing second uncertainties.
In particular, the
$\overline B{}^0\to \Xi^{*-}\overline{\Sigma^{-}}$ and
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$
decays
are pure penguin modes, which only have $P'_{{\cal D} \overline {\cal B}}$ and $P'_{i EW{{\cal D} \overline {\cal B}}}$ contributions,
while
$\overline B{}^0_s\to \Delta^+\overline{p}$ and
$\overline B{}^0_s\to \Delta^0\overline{n}$ decays
are pure exchange ($E'_{{\cal D} \overline {\cal B}}$) modes.
The rates of some modes are related.
Using the formulas in Appendices~\ref{appendix:amplitudes} and \ref{appendix:formulas},
we have~\cite{Chua:2013zga}
\begin{eqnarray}
2 {\mathcal B}(B^-\to\Sigma^{*0}\overline{p})
&=&{\mathcal B}(B^-\to\Xi^{*0}\overline{\Sigma^{+}}),
\nonumber\\
3{\mathcal B}(B^-\to\Sigma^{*-}\overline{n})
&=&
6 {\mathcal B}(B^-\to\Xi^{*-}\overline{\Sigma^{0}})
=
{\mathcal B}(B^-\to\Omega^-\overline{\Xi^{0}})
\nonumber\\
&=&
2 {\mathcal B}(B^-\to\Xi^{*-}\overline{\Lambda}),
\nonumber\\
3\tau_{B_s}{\mathcal B}(\bar B^0_s\to\Sigma^{*-}\overline{\Sigma^{-}})
&=&3\tau_{B_s}{\mathcal B}(\bar B^0_s\to\Xi^{*-}\overline{\Xi^{-}})
=3 \tau_{B_d}{\mathcal B}(\bar B^0\to\Xi^{*-}\overline{\Sigma^{-}})
\nonumber\\
&=& \tau_{B_d}{\mathcal B}(\bar B^0\to\Omega^-\overline{\Xi^{-}}),
\nonumber\\
{\mathcal B}(\bar B^0_s\to\Delta^+\overline{p})
&=&{\mathcal B}(\bar B^0_s\to\Delta^0\overline{n}),
\end{eqnarray}
where these relations are subject to corrections from SU(3) breaking in $|p_{cm}|^3$.
Note that the relations on $B^-$ decay rates are new compared to those in \cite{Chua:2013zga}.
These relations do not rely on the asymptotic relations.
The rates in Table~\ref{tab:DBDS=-1} roughly satisfy the above relations and the agreement will be improved when the SU(3) breaking effects are taken into account.
\begin{table}[t!]
\caption{\label{tab:DDDS=0} Same as Table~\ref{tab:BBDS=0}, but with $\Delta S=0$,
$\overline B_q\to{{\cal D} \overline {\cal D}}$ modes.
}
\centering
\begin{ruledtabular}
\begin{tabular}{llll}
Mode
& ${\mathcal B}(10^{-8})$
& Mode
& ${\mathcal B}(10^{-8})$
\\
\hline $B^-\to \Delta^+ \overline{\Delta^{++}}$
& $17.15^{+8.31}_{-6.17}{}^{+1.61}_{-0}{}^{+7.97}_{-6.45}\pm0.22
& $\overline B{}^0_s\to \Delta^{+} \overline{\Sigma^{*+}}$
& $5.18^{+2.51}_{-1.86}{}^{+0.49}_{-0}{}^{+2.41}_{-1.95}\pm0
\\
$B^-\to \Delta^0 \overline{\Delta^{+}}$
& $6.47^{+3.07}_{-2.33}{}^{+1.02}_{-0}{}^{+3.39}_{-2.67}\pm0.14
& $\overline B{}^0_s\to \Delta^{0} \overline{\Sigma^{*0}}
& $2.93^{+1.39}_{-1.05}{}^{+0.46}_{-0}{}^{+1.53}_{-1.21}\pm0
\\
$B^-\to \Delta^- \overline{\Delta^{0}}$
& $0.92^{+0.41}_{-0.33}\pm0{}^{+0.39}_{-0.32}{}^{+0.006}_{-0.004}
&$\overline B{}^0_s\to \Delta^{-} \overline{\Sigma^{*-}}$
& $0.83^{+0.37}_{-0.29}\pm0{}^{+0.35}_{-0.29}\pm0
\\
$B^-\to \Sigma^{*0} \overline{\Sigma^{*+}}$
& $3.02^{+1.43}_{-1.08}{}^{+0.47}_{-0}{}^{+1.58}_{-1.24}\pm0.07
& $\overline B{}^0_s\to \Sigma^{*0} \overline{\Xi^{*0}}$
& $2.73^{+1.29}_{-0.98}{}^{+0.43}_{-0}{}^{+1.43}_{-1.12}\pm0
\\
$B^-\to \Sigma^{*-} \overline{\Sigma^{*0}}$
& $0.57^{+0.25}_{-0.20}\pm0{}^{+0.24}_{-0.20}\pm0.003
& $\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}}$
& $1.03^{+0.46}_{-0.37}\pm0{}^{+0.44}_{-0.36}\pm0
\\
$B^-\to \Xi^{*-} \overline{\Xi^{*0}}$
& $0.26^{+0.12}_{-0.09}\pm0{}^{+0.11}_{-0.09}{}^{+0.002}_{-0.001}
& $\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}}
& $0.71^{+0.31}_{-0.25}\pm0{}^{+0.30}_{-0.25}\pm0
\\
$\overline B{}^0\to \Delta^{++} \overline{\Delta^{++}}$
& $0\pm0\pm0\pm0{}^{+0.004}_{-0}
& $\overline B{}^0\to \Sigma^{*+} \overline{\Sigma^{*+}}$
& $0\pm0\pm0\pm0{}^{+0.002}_{-0}
\\
& $<1.1\times10^4$~\cite{Bortoletto:1989mu}
\\
$\overline B{}^0\to \Delta^{+} \overline{\Delta^{+}}
& $5.29^{+2.56}_{-1.90}{}^{+0.50}_{-0}{}^{+2.46}_{-2.00}\pm0.16
& $\overline B{}^0\to \Sigma^{*0} \overline{\Sigma^{*0}}$
& $1.40^{+0.66}_{-0.50}{}^{+0.22}_{-0}{}^{+0.73}_{-0.58}\pm0.06
\\
$\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}}
& $5.99^{+2.83}_{-2.15}{}^{+0.94}_{-0}{}^{+3.14}_{-2.47}\pm0.13
& $\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}}
& $1.05^{+0.47}_{-0.37}\pm0{}^{+0.45}_{-0.37}\pm0.07
\\
&$<1.5\times10^5$~\cite{Bortoletto:1989mu}
\\
$\overline B{}^0\to \Delta^{-} \overline{\Delta^{-}}$
& $2.55^{+1.13}_{-0.91}\pm0{}^{+1.08}_{-0.89}\pm0.11
& $\overline B{}^0\to \Xi^{*0} \overline{\Xi^{*0}}$
& $0\pm0\pm0\pm0{}^{+0.001}_{-0}$
\\
$\overline B{}^0\to \Omega^{-} \overline{\Omega^{-}}$
& $0\pm0\pm0\pm0{}^{+0.001}_{-0}$%
& $\overline B{}^0\to \Xi^{*-} \overline{\Xi^{*-}}$
& $0.24^{+0.11}_{-0.09}\pm0{}^{+0.10}_{-0.08}\pm0.03
\\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{Rates of $\overline B_q\to {{\cal D} \overline {\cal D}}$ decays}
Predictions on $\Delta S=0$, $\overline B_q\to{{\cal D} \overline {\cal D}}$ decay rates are shown in Table~\ref{tab:DDDS=0}.
There are six modes that can cascade to all charged final states.
They are
$\overline B{}^0\to\Delta^{++}\overline{\Delta^{++}}$,
$\Delta^0\overline{\Delta^0}$,
$\Omega^-\overline{\Omega^-}$,
$\Sigma^{*+}\overline{\Sigma^{*+}}$,
$\Sigma^{*-}\overline{\Sigma^{*-}}$
and $\Xi^{*0}\overline{\Xi^{*0}}$ decays.
However, most of them have highly suppressed rates, except
$\overline{B}{}^0\to \Delta^0\overline{\Delta^0}$
and
$\overline{B}{}^0\to \Sigma^{*-}\overline{\Sigma^{*-}}$ decays, which have rates of order $10^{-8}$ and should be searchable.
In particular, the bound on $\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}}$ rate has not been updated for almost three decades~\cite{Bortoletto:1989mu}.
Note that with $\pi^0$, one can search for a number of modes with unsuppressed rates.
These modes include
$B^-\to \Delta^+ \overline{\Delta^{++}}$,
$B^-\to \Delta^0 \overline{\Delta^{+}}$,
$\overline B{}^0_s\to \Delta^{+} \overline{\Sigma^{*+}}$,
$B^-\to \Sigma^{*0} \overline{\Sigma^{*+}}$,
$\overline B{}^0_s\to \Delta^{0} \overline{\Sigma^{*0}}$,
$\overline B{}^0_s\to \Sigma^{*0} \overline{\Xi^{*0}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}}$,
and
$\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}}$
decays.
In particular, the mode with the highest rate (the only one at order $10^{-7}$) can be searched for through the
$B^-\to \Delta^+\overline{\Delta^{++}}\to p\pi^0\bar p\pi^-$ decay.
With $\pi^0\pi^0$, one can also search for
$\overline B{}^0\to \Delta^{+} \overline{\Delta^{+}}$
and
$\overline B{}^0\to \Sigma^{*0} \overline{\Sigma^{*0}}$
decays.
Note that
$B^-\to \Delta^- \overline{\Delta^{0}}$,
$B^-\to \Sigma^{*-} \overline{\Sigma^{*0}}$,
$B^-\to \Xi^{*-} \overline{\Xi^{*0}}$,
$\overline B{}^0\to \Delta^{-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Delta^{-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}}$,
$\overline B{}^0\to \Delta^{++} \overline{\Delta^{++}}$,
$\overline B{}^0\to \Sigma^{*+} \overline{\Sigma^{*+}}$,
$\overline B{}^0\to \Xi^{*0} \overline{\Xi^{*0}}$ and
$\overline B{}^0\to \Omega^{-} \overline{\Omega^{-}}$ decays
do not have any tree ($T_{{{\cal D} \overline {\cal D}}}$) contribution
[see Eqs. (\ref{eq: DDBm, DS=0}), (\ref{eq: DDB0, DS=0}) and (\ref{eq: DDBs, DS=0})],
and, consequently, their rates in Table~\ref{tab:DDDS=0} have vanishing second uncertainties.
In particular, the
$\overline B{}^0\to \Delta^{-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Delta^{-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}}$ and
$\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}}$
decays
are pure penguin modes, which only have $P_{{\cal D} \overline {\cal D}}$, $P_{EW{{\cal D} \overline {\cal D}}}$ and $PA_{{\cal D} \overline {\cal D}}$ contributions,
the
$\overline B{}^0\to \Delta^{++} \overline{\Delta^{++}}$,
$\overline B{}^0\to \Sigma^{*+} \overline{\Sigma^{*+}}$ and
$\overline B{}^0\to \Xi^{*0} \overline{\Xi^{*0}}$ decays
are subleading modes,
which only have $E_{{\cal D} \overline {\cal D}}$ and $P_{{\cal D} \overline {\cal D}}$ contributions,
while the
$\overline B{}^0\to \Omega^{-} \overline{\Omega^{-}}$ decay
is a pure penguin annihilation ($PA_{{\cal D} \overline {\cal D}}$) mode.
The rates of some modes are related.
Using formulas in Appendices~\ref{appendix:amplitudes} and \ref{appendix:formulas},
we have~\cite{Chua:2013zga}
\begin{eqnarray}
2 {\mathcal B}(B^-\to\Delta^-\overline{\Delta^{0}})
&=&3 {\mathcal B}(B^-\to\Sigma^{*-}\overline{\Sigma^{*0}})
=6 {\mathcal B}(B^-\to\Xi^{*-}\overline{\Xi^{*0}}),
\nonumber\\
{\mathcal B}(B^-\to\Delta^0\overline{\Delta^{+}})
&=&2 {\mathcal B}(B^-\to\Sigma^{*0}\overline{\Sigma^{*+}}),
\nonumber\\
{\mathcal B}(\overline{B^0_s}\to\Delta^{0}\overline{\Sigma^{*0}})
&=&{\mathcal B}(\overline{B^0_s}\to \Sigma^{*0}\overline{\Xi^{*0}}),
\nonumber\\
4 {\mathcal B}(\overline{B^0_s}\to\Delta^{-}\overline{\Sigma^{*-}})
&=&4 {\mathcal B}(\overline{B^0_s}\to\Xi^{*-}\overline{\Omega^-})
=3 {\mathcal B}(\overline{B^0_s}\to\Sigma^{*-}\overline{\Xi^{*-}}),
\end{eqnarray}
where these relations are subject to SU(3) breaking from the phase space factors.
These relations do not rely on the asymptotic relations.
As shown in Table~\ref{tab:DDDS=0} the rates of these modes roughly satisfy the above relations and the agreement will be improved when the SU(3) breaking effects are taken into account.
\begin{table}[t!]
\caption{\label{tab:DDDS=-1} Same as Table~\ref{tab:BBDS=0}, but with $\Delta S=-1$, $\overline B_q\to{{\cal D} \overline {\cal D}}$ modes.}
\begin{ruledtabular}
\centering
\begin{tabular}{llll}
Mode
& ${\mathcal B}(10^{-8})$
& Mode
& ${\mathcal B}(10^{-8})$
\\
\hline $B^-\to \Sigma^{*+} \overline{\Delta^{++}}$
& $21.48^{+8.92}_{-7.65}{}^{+5.53}_{-0}{}^{+8.86}_{-7.15}\pm0.007
& $\overline B{}^0\to \Sigma^{*+} \overline{\Delta^{+}}$
& $6.63^{+2.75}_{-2.36}{}^{+1.71}_{-0}{}^{+2.73}_{-2.21}\pm0
\\
$B^-\to \Sigma^{*0} \overline{\Delta^{+}}$
& $13.08^{+5.63}_{-4.66}{}^{+1.73}_{-0}{}^{+5.32}_{-4.39}\pm0.008
& $\overline B{}^0\to \Sigma^{*0} \overline{\Delta^{0}}$
& $12.11^{+5.21}_{-4.31}{}^{+1.60}_{-0}{}^{+4.93}_{-4.06}\pm0
\\
$B^-\to \Sigma^{*-} \overline{\Delta^{0}}$
& $6.06^{+2.68}_{-2.16}\pm0{}^{+2.57}_{-2.12}\pm0.005
&$\overline B{}^0\to \Sigma^{*-} \overline{\Delta^{-}}$
& $16.83^{+7.46}_{-5.99}\pm0{}^{+7.15}_{-5.88}\pm0$
\\
$B^-\to \Xi^{*0} \overline{\Sigma^{*+}}$
& $24.26^{+10.44}_{-8.64}{}^{+3.21}_{-0}{}^{+9.87}_{-8.13}\pm0.01
& $\overline B{}^0\to \Xi^{*0} \overline{\Sigma^{*0}}$
& $11.23^{+4.83}_{-4.00}{}^{+1.49}_{-0}{}^{+4.57}_{-3.77}\pm0
\\
$B^-\to \Xi^{*-} \overline{\Sigma^{*0}}$
& $11.24^{+4.98}_{-4.00}\pm0{}^{+4.77}_{-3.93}\pm0.01
& $\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$
& $20.79^{+9.21}_{-7.40}\pm0{}^{+8.83}_{-7.27}\pm0$
\\
$B^-\to \Omega^{-} \overline{\Xi^{*0}}$
& $15.49^{+6.86}_{-5.51}\pm0{}^{+6.57}_{-5.41}\pm0.01
& $\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$
& $14.32^{+6.34}_{-5.10}\pm0{}^{+6.08}_{-5.01}\pm0$
\\
$\overline B{}^0_s\to \Delta^{++} \overline{\Delta^{++}}$
& $0\pm0\pm0\pm0{}^{+0.02}_{-0}$
& $\overline B{}^0_s\to \Sigma^{*+} \overline{\Sigma^{*+}}$
& $6.49^{+2.69}_{-2.31}{}^{+1.67}_{-0}{}^{+2.67}_{-2.16}{}^{+0.77}_{-0.72}$
\\
$\overline B{}^0_s\to \Delta^{+} \overline{\Delta^{+}}$
& $0\pm0\pm0\pm0{}^{+0.02}_{-0}$
& $\overline B{}^0_s\to \Sigma^{*0} \overline{\Sigma^{*0}}$
& $5.93^{+2.55}_{-2.11}{}^{+0.79}_{-0}{}^{+2.41}_{-1.99}{}^{+0.74}_{-0.70}$
\\
$\overline B{}^0_s\to \Delta^{0} \overline{\Delta^{0}}$
& $0\pm0\pm0\pm0{}^{+0.02}_{-0}$
& $\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$
& $5.48^{+2.43}_{-1.95}\pm0{}^{+2.33}_{-1.92}{}^{+0.72}_{-0.67}$
\\
$\overline B{}^0_s\to \Delta^{-} \overline{\Delta^{-}}$
& $0\pm0\pm0\pm0{}^{+0.02}_{-0}$
& $\overline B{}^0_s\to \Xi^{*0} \overline{\Xi^{*0}}$
& $21.92^{+9.44}_{-7.81}{}^{+2.90}_{-0}{}^{+8.92}_{-7.35}{}^{+1.36}_{-1.32}
\\
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$
& $41.95^{+18.58}_{-14.93}\pm0{}^{+17.81}_{-14.67}{}^{+1.79}_{-1.75}$
& $\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$
& $20.29^{+8.99}_{-7.22}\pm0{}^{+8.62}_{-7.10}{}^{+1.30}_{-1.26}$
\\
\end{tabular}
\end{ruledtabular}
\end{table}
Predictions on $\Delta S=-1$, $\overline B_q\to{{\cal D} \overline {\cal D}}$ decay rates are shown in Table~\ref{tab:DDDS=-1}.
There are eight modes that can cascade to all charged final states with unsuppressed rates.
They are
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$,
$B^-\to \Xi^{*0} \overline{\Sigma^{*+}}$,
$\overline B{}^0_s\to \Xi^{*0} \overline{\Xi^{*0}}$,
$B^-\to \Sigma^{*+} \overline{\Delta^{++}}$,
$B^-\to \Omega^{-} \overline{\Xi^{*0}}$,
$\overline B{}^0_s\to \Sigma^{*+} \overline{\Sigma^{*+}}$,
$B^-\to \Sigma^{*-} \overline{\Delta^{0}}$
and
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$
decays.
It is interesting that many (the first five) of them have rates of order $10^{-7}$.
In particular, the $\overline B{}^0_s\to\Omega^-\overline{\Omega^-}$ decay has the highest rate and good detectability.
Note that with one $\pi^0$ one can search for
$\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$,
$B^-\to \Sigma^{*0} \overline{\Delta^{+}}$,
$\overline B{}^0\to \Sigma^{*0} \overline{\Delta^{0}}$
and
$\overline B{}^0\to \Xi^{*0} \overline{\Sigma^{*0}}$
decays in the future.
All of them have rates of order $10^{-7}$.
With $\pi^0\pi^0$ one can search for
$B^-\to \Xi^{*-} \overline{\Sigma^{*0}}$,
$\overline B{}^0_s\to \Sigma^{*0} \overline{\Sigma^{*0}}$
and
$\overline B{}^0\to \Sigma^{*+} \overline{\Delta^{+}}$
decays,
where the first one has a rate of order $10^{-7}$.
Note that
$B^-\to \Sigma^{*-} \overline{\Delta^{0}}$,
$B^-\to \Xi^{*-} \overline{\Sigma^{*0}}$,
$B^-\to \Omega^{-} \overline{\Xi^{*0}}$,
$\overline B{}^0\to \Sigma^{*-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$,
$\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Delta^{++} \overline{\Delta^{++}}$,
$\overline B{}^0_s\to \Delta^{+} \overline{\Delta^{+}}$,
$\overline B{}^0_s\to \Delta^{0} \overline{\Delta^{0}}$ and
$\overline B{}^0_s\to \Delta^{-} \overline{\Delta^{-}}$ decays
do not have any tree ($T'_{{{\cal D} \overline {\cal D}}}$) contribution
[see Eqs. (\ref{eq: DDBm, DS=-1}), (\ref{eq: DDB0, DS=-1}) and (\ref{eq: DDBs, DS=-1})],
and, consequently, their rates in Table~\ref{tab:DDDS=-1} have vanishing second uncertainties.
The
$\overline B{}^0\to \Sigma^{*-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$ and
$\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$
decays
are pure penguin modes, which only have $P'_{{\cal D} \overline {\cal D}}$, $P'_{EW{{\cal D} \overline {\cal D}}}$ and $PA'_{{\cal D} \overline {\cal D}}$ contributions,
the
$\overline B{}^0_s\to \Delta^{++} \overline{\Delta^{++}}$,
$\overline B{}^0_s\to \Delta^{+} \overline{\Delta^{+}}$ and
$\overline B{}^0_s\to \Delta^{0} \overline{\Delta^{0}}$ decays
are subleading modes,
which only have $E'_{{\cal D} \overline {\cal D}}$ and $P'_{{\cal D} \overline {\cal D}}$ contributions,
while the
$\overline B{}^0_s\to \Delta^{-} \overline{\Delta^{-}}$ decay
is a pure penguin annihilation ($PA'_{{\cal D} \overline {\cal D}}$) mode.
The rates of some modes are related.
Using formulas in Appendices~\ref{appendix:amplitudes} and \ref{appendix:formulas},
we have~\cite{Chua:2013zga}~\footnote{A typo in the last relation is corrected.}
\begin{eqnarray}
6 {\mathcal B}(B^-\to\Sigma^{*-}\overline{\Delta^0})
&=&
2 {\mathcal B}(B^-\to\Omega^-\overline{\Xi^{*0}})
=3 {\mathcal B}(B^-\to\Xi^{*-}\overline{\Sigma^{*0}}),
\nonumber\\
2 {\mathcal B}(B^-\to\Sigma^{*0}\overline{\Delta^+})
&=&{\mathcal B}(B^-\to\Xi^{*0}\overline{\Sigma^{*+}}),
\nonumber\\
{\mathcal B}(\bar B^0\to\Sigma^{*0}\overline{\Delta^0})
&=&{\mathcal B}(\bar B^0\to\Xi^{*0}\overline{\Sigma^{*0}}),
\nonumber\\
4 {\mathcal B}(\bar B^0\to\Sigma^{*-}\overline{\Delta^-})
&=& 3 {\mathcal B}(\bar B^0\to\Xi^{*-}\overline{\Sigma^{*-}})
= 4 {\mathcal B}(\bar B^0\to\Omega^-\overline{\Xi^{*-}}).
\end{eqnarray}
These relations are subject to SU(3) breaking from the phase space factors
and do not rely on the asymptotic relations.
The rates in Table~\ref{tab:DDDS=-1} roughly satisfy the above relations and the agreement will be improved when the SU(3) breaking effects are taken into account.
\begin{table}[t!]
\caption{\label{tab:rate} Modes with (relatively) unsuppressed rates and (relatively) good detectability are summarized.}
\begin{ruledtabular}
\centering
{\footnotesize\begin{tabular}{llll}
& Group I
& Group II
& Group III
\\
& All charged final states
& with single $\pi^0/\gamma$
& with $\pi^0\pi^0$, $\pi^0\gamma$ or $\gamma\gamma$
\\
\hline
${{\cal B} \overline {\cal B}}, \Delta S=0$
& $\overline B{}^0\to p\overline{p}$;
& $\overline B{}^0\to \Sigma^{0}\overline{\Lambda}$,
$\overline B{}^0_s\to p\overline{\Sigma^{+}}$;
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{0}}$,
$B^-\to \Sigma^{0}\overline{\Sigma^{+}}$,
\\
&
&
& $\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{0}}$
\\
\hline
${{\cal B} \overline {\cal B}}, \Delta S=-1$
& $B^-\to \Lambda\overline{p}$,
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$;
& $B^-\to\Xi^{-}\overline{\Sigma^{0}}$,
$\overline B{}^0\to \Xi^{0}\overline{\Lambda}$;
& $B^-\to \Xi^{0}\overline{\Sigma^{+}}$,
$\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{0}}$,
\\
& $\overline B{}^0_s\to \Lambda\overline{\Lambda}$,
$B^-\to \Xi^{-}\overline{\Lambda}$;
& $\overline B{}^0\to \Sigma^{+}\overline{p}$
& $\overline B{}^0\to \Xi^{0}\overline{\Sigma^{0}}$,
$\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{+}}$,
\\
&
&
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{0}}$
\\
\hline
${{\cal B} \overline {\cal D}},\Delta S=0$
& $B^-\to p\overline{\Delta^{++}}$,
$\overline B{}^0_s\to p\overline{\Sigma^{*+}}$;
& $B^-\to\Sigma^0\overline{\Sigma^{*+}}$,
$\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{*0}}$;
& $\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{*0}}
\\
&
& $\overline B{}^0\to p\overline{\Delta^+}$
&
\\
\hline
${{\cal B} \overline {\cal D}},\Delta S=-1$
& $\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$;
& $B^-\to \Sigma^+\overline{\Delta^{++}}$,
$\overline B{}^0\to \Sigma^{0}\overline{\Delta^0}$;
& $B^-\to \Sigma^0\overline{\Delta^+}$,
$\overline B{}^0\to \Sigma^{+}\overline{\Delta^+}$,
\\
&
& $\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{*+}}$,
$B^-\to \Xi^{0}\overline{\Sigma^{*+}}$;
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{*0}}$,
$\overline B{}^0\to \Xi^{0}\overline{\Sigma^{*0}}$
\\
&
& $\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{*0}}$
&
\\
&
& $B^-\to \Xi^{-}\overline{\Sigma^{*0}}$
&
\\
\hline
${{\cal D} \overline {\cal B}},\Delta S=0$
& $B^-\to \Delta^0\overline{p}$,
$\overline B{}^0_s\to \Delta^0\overline{\Lambda}$;
& $\overline B{}^0\to \Delta^+\overline{p}$,
$\overline B{}^0\to \Sigma^{*0}\overline{\Lambda}$;
& $\overline B{}^0_s\to \Sigma^{*0}\overline{\Xi^{0}}$,
$\overline B{}^0_s\to \Delta^+\overline{\Sigma^{+}}$,
\\
&
& $\overline B{}^0_s\to \Delta^0\overline{\Sigma^{0}}
& $B^-\to \Sigma^{*0}\overline{\Sigma^{+}}$
\\
\hline
${{\cal D} \overline {\cal B}},\Delta S=-1$
& $\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$,
$\overline B{}^0\to \Xi^{*0}\overline{\Lambda}$
& $B^-\to \Omega^-\overline{\Xi^{0}}$,
$B^-\to \Xi^{*-}\overline{\Lambda}$;
& $\overline B{}^0_s\to \Sigma^{*0}\overline{\Sigma^{0}}$,
$B^-\to \Xi^{*-}\overline{\Sigma^{0}}$
\\
& $\overline B{}^0\to \Sigma^{*+}\overline{p}$
&
$\overline B{}^0_s\to \Xi^{*0}\overline{\Xi^{0}}$,
$\overline B{}^0_s\to \Sigma^{*+}\overline{\Sigma^{+}}$,
&
\\
&
& $B^-\to \Xi^{*0}\overline{\Sigma^{+}}$,
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$,
&
\\
&
& $B^-\to \Sigma^{*0}\overline{p}
&
\\
\hline
${{\cal D} \overline {\cal D}},\Delta S=0$
& $\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}}$,
$\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}}$;
& $B^-\to \Delta^+ \overline{\Delta^{++}}$,
$B^-\to \Delta^0 \overline{\Delta^{+}}$;
& $\overline B{}^0\to \Delta^{+} \overline{\Delta^{+}}$,
$\overline B{}^0\to \Sigma^{*0} \overline{\Sigma^{*0}}$
\\
&
& $\overline B{}^0_s\to \Delta^{+} \overline{\Sigma^{*+}}$,
$B^-\to \Sigma^{*0} \overline{\Sigma^{*+}}$,
&
\\
&
& $\overline B{}^0_s\to \Delta^{0} \overline{\Sigma^{*0}}$,
$\overline B{}^0_s\to \Sigma^{*0} \overline{\Xi^{*0}}$,
&
\\
&
& $\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}}$
&
\\
\hline
${{\cal D} \overline {\cal D}},\Delta S=-1$
& $\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$,
$B^-\to \Xi^{*0} \overline{\Sigma^{*+}}$;
& $\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$;
& $\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$,
$B^-\to \Sigma^{*0} \overline{\Delta^{+}}$,
\\
& $\overline B{}^0_s\to \Xi^{*0} \overline{\Xi^{*0}}$,
$B^-\to \Sigma^{*+} \overline{\Delta^{++}}$;
& $\overline B{}^0\to \Sigma^{*0} \overline{\Delta^{0}}$,
$\overline B{}^0\to \Xi^{*0} \overline{\Sigma^{*0}}$;
& $B^-\to \Xi^{*-} \overline{\Sigma^{*0}}$,
$\overline B{}^0_s\to \Sigma^{*0} \overline{\Sigma^{*0}}$
\\
& $B^-\to \Omega^{-} \overline{\Xi^{*0}}$,
$\overline B{}^0_s\to \Sigma^{*+} \overline{\Sigma^{*+}}$;
& $\overline B{}^0\to \Sigma^{*+} \overline{\Delta^{+}}$
&
\\
& $B^-\to \Sigma^{*-} \overline{\Delta^{0}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$
&
&
\\
\end{tabular}
}
\\
\end{ruledtabular}
\end{table}
We have shown that the $\overline B{}^0\to p\bar p$ and $B^-\to\Lambda\bar p$ decays have a better chance of being found experimentally.
We have identified several modes that can cascade to all charged final states with (relatively) unsuppressed rates.
They are searchable in the near future.
In particular, we note that the predicted $B^-\to p\overline{\Delta^{++}}$ rate is close to the experimental bound,
which has not been updated in the last ten years~\cite{Wei:2007fg}.
Furthermore, the bounds on $B^-\to \Delta^0\overline{p}$ and $\overline B{}^0\to \Sigma^{*+}\overline{p}$ rates have not been updated in the last ten years~\cite{Wei:2007fg, Wang:2007as} and the bound on $\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}}$ rate was last given in 1989~\cite{Bortoletto:1989mu},
while their rates are predicted to be of the order of $10^{-8}$.
Also note that the $\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$ rate is predicted to be the highest among all the modes considered.
It will also be interesting for Belle-II, which will be turned on soon, to search for baryonic modes, as there are many modes that have unsuppressed rates but require $\pi^0$ or $\gamma$ for detection.
We have pointed out several modes without tree amplitudes; some of them are pure penguin modes.
As we shall see in the next subsection, this affects their $CP$ asymmetries.
We summarize our suggestions for experimental searches in Table~\ref{tab:rate}. These modes have (relatively) unsuppressed rates and better detectability. The modes are arranged according to their quantum numbers, detection requirements and rates (in descending order), and they are assigned to Groups I, II and III accordingly: Group I modes have unsuppressed rates and can cascade to all charged final states, Group II modes can be searched for with a $\pi^0$ or $\gamma$, and Group III modes can be searched for with $\pi^0\pi^0$, $\pi^0\gamma$ or $\gamma\gamma$.
\subsection{Numerical Results on Direct $CP$ Asymmetries}
In this subsection,
we will give results of direct $CP$ asymmetries of all modes and plot asymmetries of several interesting modes.
Note that this study becomes possible as we now have information on the tree-penguin ratio, Eq. (\ref{eq:P/T}).
\begin{table}[t!]
\caption{\label{tab:AcpBBDS=0} Direct $CP$ asymmetries (${\mathcal A}$ in $\%$) of $\Delta S=0$, $\overline B_q\to{{\cal B} \overline {\cal B}}$ modes
for $\phi=0$, $\pm\pi/4$ and $\pm\pi/2$.
The uncertainties are from varying the strong phases of $r^{(\prime)}_{t,i}$, $r^{(\prime)}_{p,i}$, $r^{(\prime)}_{ewp,i}$ and $\eta_{i,j,k}$ (see Eqs. (\ref{eq:correction0}) and (\ref{eq:correction2})).}
\begin{ruledtabular}
\centering
{
\footnotesize
\begin{tabular}{lccclccc}
Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
& Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
\\
\hline $B^-\to n\overline{p}$
& $0\pm32.1$%
& $\mp(74.0_{-25.6}^{+19.7})$%
& $\mp(97.9_{-15.6}^{+2.1})$%
& $\overline B{}^0_s\to p\overline{\Sigma^{+}}$
& $0\pm21.2$%
& $\mp(36.0_{-17.7}^{+21.2})$%
& $\mp(49.3^{+20.2}_{-15.2})$%
\\
$B^-\to \Sigma^{0}\overline{\Sigma^{+}}$
& $0\pm31.2$%
& $\mp(59.3^{+23.4}_{-27.2})$%
& $\mp(79.5^{+16.1}_{-22.4})
& $\overline B{}^0_s\to n\overline{\Sigma^{0}}$
& $0\pm14.9$%
& $\pm(26.7^{+14.6}_{-13.8})$%
& $\pm(38.8^{+13.9}_{-12.8})$%
\\
$B^-\to \Sigma^{-}\overline{\Sigma^{0}}$
& $0\pm54.6$%
& $0\pm54.6$%
& $0\pm54.6$%
&$\overline B{}^0_s\to n\overline{\Lambda}$
& $0\pm31.0$%
& $\mp(71.6^{+19.3}_{-25.4})$%
& $\mp(94.9^{+5.1}_{-16.3})$%
\\
$B^-\to \Sigma^{-}\overline{\Lambda}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{0}}$
& $0\pm15.4$%
& $\mp(42.8^{+14.1}_{-13.9})$%
& $\mp(58.3^{+12.9}_{-12.6})$%
\\
$B^-\to \Xi^{-}\overline{\Xi^{0}}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{-}}$
& $0\pm30.0$%
& $0\pm30.0$%
& $0\pm30.0$%
\\
$B^-\to \Lambda\overline{\Sigma^+}$
& $0\pm70.7$%
& $0\pm70.7$%
& $0\pm70.7$%
& $\overline B{}^0_s\to \Lambda\overline{\Xi^0}$
& $0\pm100$%
& $0\pm100$%
& $0\pm100$%
\\
$\overline B{}^0\to p\overline{p}$
& $0\pm26.9$%
& $\mp(36.0^{+26.8}_{-22.0})$%
& $\mp(49.3^{+25.2}_{-18.3})$%
& $\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{+}}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
\\
$\overline B{}^0\to n\overline{n}$
& $0\pm31.6$%
& $\mp(59.3^{+23.5}_{-27.9})$%
& $\mp(79.5^{+16.0}_{-23.2})$%
& $\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{0}}$
& $0\pm33.6$%
& $\mp(59.3^{+24.8}_{-29.6})$%
& $\mp(79.5^{+16.7}_{-24.4})$%
\\
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{0}}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
& $\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{-}}$
& $0\pm47.1$%
& $0\pm47.1$%
& $0\pm47.1$%
\\
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{-}}
& $0\pm32.1$%
& $0\pm32.1$%
& $0\pm32.1$%
& $\overline B{}^0\to \Sigma^{0}\overline{\Lambda}$
& $0\pm 13.3$%
& $\mp(36.0^{+12.8}_{-12.1})$%
& $\mp(49.3^{+12.1}_{-11.2})$%
\\
$\overline B{}^0\to \Lambda\overline{\Lambda}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
& $\overline B{}^0\to \Lambda\overline{\Sigma^{0}}$
& $0\pm70.7$%
& $0\pm70.7$%
& $0\pm70.7$%
\\
\end{tabular}
}
\\
\end{ruledtabular}
\end{table}
\subsubsection{$CP$ asymmetries of $\overline B_q\to {{\cal B} \overline {\cal B}}$ decays}
In Table~\ref{tab:AcpBBDS=0} we give results of direct $CP$ asymmetries of $\Delta S=0$, $\overline B_q\to{{\cal B} \overline {\cal B}}$ modes.
The central values are the asymmetries generated from tree-penguin interference
where only the asymptotic amplitudes and Eq.~(\ref{eq: penguin strong phase}) are used.
We show results for $\phi=0$, $\pm\pi/4$ and $\pm\pi/2$.
The uncertainties are from relaxing the asymptotic relations by using Eq.~(\ref{eq:correction0}) and varying the strong phases of $r^{(\prime)}_{t,i}$, $r^{(\prime)}_{p,i}$ and $r^{(\prime)}_{ewp,i}$, and from the sub-leading terms in Eq.~(\ref{eq:correction2}) with strong phases from $\eta_{i,j,k}$.
Note that to satisfy the experimental $\overline B{}^0\to p\overline p$ and $B^-\to\Lambda\overline p$ rates,
the sizes of $|r^{(\prime)}|$ are reduced by 60\% in all $\overline B_q\to{{\cal B} \overline {\cal B}}$ modes.
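As a reminder of how the tree-penguin interference generates these asymmetries (a generic two-amplitude relation, quoted only for orientation and not as part of the present framework), writing an amplitude as $A=A_1+A_2\,e^{i\delta}e^{i\phi_W}$, with $A_{1,2}$ real, $\delta$ the relative strong phase and $\phi_W$ the relative weak phase (which flips sign in $\overline A$), one has
\begin{eqnarray}
{\mathcal A}
=\frac{|\overline A|^2-|A|^2}{|\overline A|^2+|A|^2}
=\frac{2 A_1 A_2 \sin\delta\sin\phi_W}{A_1^2+A_2^2+2 A_1 A_2\cos\delta\cos\phi_W},
\nonumber
\end{eqnarray}
up to the overall sign convention adopted for ${\mathcal A}$.
A nonvanishing direct $CP$ asymmetry therefore requires both a relative weak phase and a relative strong phase, which is why the central values in Table~\ref{tab:AcpBBDS=0} vanish for $\phi=0$ and become sizable as $|\phi|$ increases.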
Now we discuss the $CP$ asymmetries of the Group I, II and III modes, classified according to their rates and detectability as in Table~\ref{tab:rate}.
For the Group I mode, the $CP$ asymmetry of the
$\overline B{}^0\to p\overline{p}$ decay
can be as large as $\mp 49\%$.
For the Group II modes,
${\mathcal A}(\overline B{}^0\to \Sigma^{0}\overline{\Lambda})$
and ${\mathcal A}(\overline B{}^0_s\to p\overline{\Sigma^{+}})$
are similar to ${\mathcal A}(\overline B{}^0\to p\overline{p})$.
For the Group III modes, the $CP$ asymmetries of
$B^-\to \Sigma^{0}\overline{\Sigma^{+}}$
and
$\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{0}}$ decays
are similar and can reach $\mp 80\%$,
while the $CP$ asymmetry of the
$\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{0}}$
decay
is smaller than that of these two modes, but can still reach $\mp58\%$.
The $CP$ asymmetries of these modes basically all have the same sign when $|\phi|$ is large enough.
As noted in the previous subsection,
$B^-\to \Sigma^{-}\overline{\Sigma^{0}}$,
$B^-\to \Sigma^{-}\overline{\Lambda}$,
$B^-\to \Xi^{-}\overline{\Xi^{0}}$,
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{-}}$ and
$\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{-}}$ decays
do not have any tree ($T_{i{{\cal B} \overline {\cal B}}}$) contribution.
Although the $\overline B{}^0\to \Xi^{-}\overline{\Xi^{-}}$,
$\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{-}}$
and $\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{-}}$ decays are pure penguin modes,
which only have $P_{i{{\cal B} \overline {\cal B}}}$, $P_{iEW{{\cal B} \overline {\cal B}}}$ and $PA_{{{\cal B} \overline {\cal B}}}$ contributions,
it is still possible for these modes to have sizable $CP$ asymmetries.
This is because the sizes of the $u$-penguin ($P^u$) and the $c$-penguin ($P^c$) in $\Delta S=0$ modes are not very different, as their ratio can be estimated as
\begin{eqnarray}
\left|\frac{P^u}{P^c}\right|\simeq \left|\frac{V_{ub}V^*_{ud}}{V_{cb}V^*_{cd}}\right|\simeq 0.38,
\label{eq: PuPc}
\end{eqnarray}
and the two penguins, which carry different weak phases, can also have different strong phases, thereby producing $CP$ asymmetries.~\footnote{Similarly, the sizable $CP$ asymmetry of the pure penguin mode, ${\mathcal A}(\overline B^0\to K^0\overline K^0)=-16.7^{+4.7+4.5+1.5+4.6}_{-3.7-5.1-1.7-3.6}\,\%$ as predicted in a QCD factorization (QCDF) calculation \cite{Beneke:2003zv}, can be understood in the same way.}
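As a rough numerical illustration of Eq.~(\ref{eq: PuPc}) (using representative CKM magnitudes, $|V_{ub}|\simeq 3.7\times 10^{-3}$, $|V_{ud}|\simeq 0.974$, $|V_{cb}|\simeq 4.1\times 10^{-2}$ and $|V_{cd}|\simeq 0.22$, which are typical PDG-like values quoted here for orientation rather than inputs of this work), one finds
\begin{eqnarray}
\left|\frac{V_{ub}V^*_{ud}}{V_{cb}V^*_{cd}}\right|
\simeq\frac{(3.7\times 10^{-3})\times 0.974}{(4.1\times 10^{-2})\times 0.22}
\simeq 0.4,
\end{eqnarray}
roughly reproducing the value quoted above, so the $u$- and $c$-penguin amplitudes can indeed be of comparable size.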
For the subleading $\overline B{}^0\to \Xi^{0}\overline{\Xi^{0}}$ and
$\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{+}}$ decays, which only have $E_{i{{\cal B} \overline {\cal B}}}$ and $PA_{{\cal B} \overline {\cal B}}$ contributions,
the $CP$ asymmetries can take any value.
Note that
$B^-\to \Lambda\overline{\Sigma^+}$,
$\overline B{}^0_s\to \Lambda\overline{\Xi^0}$,
$\overline B{}^0\to \Lambda\overline{\Lambda}$ and
$\overline B{}^0\to \Lambda\overline{\Sigma^{0}}$ decays,
whose tree amplitudes $T_{i{{\cal B} \overline {\cal B}}}$ cancel in the asymptotic limit,
have large uncertainties in their $CP$ asymmetries,
which mostly come from the corrections to the asymptotic relations.
Measuring $CP$ asymmetries of these modes can give information on the corrections to the asymptotic relations.
\begin{table}[t!]
\caption{\label{tab:AcpBBDS=-1} Same as Table~\ref{tab:AcpBBDS=0}, but with $\Delta S=-1$, $\overline B_q\to{{\cal B} \overline {\cal B}}$ modes.}
\begin{ruledtabular}
\centering
{
\footnotesize
\begin{tabular}{lccclccc}
Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
& Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
\\
\hline $B^-\to \Sigma^{0}\overline{p}$
& $0\pm20.9$%
& $\mp(28.9^{+22.1}_{-18.6})$%
& $\mp(45.0^{+23.0}_{-16.7})$%
& $\overline B{}^0\to \Sigma^{+}\overline{p}$
& $0\pm16.4$%
& $\pm(27.1^{+16.6}_{-13.6})$%
& $\pm(35.3^{+15.5}_{-11.9})$%
\\
$B^-\to \Sigma^{-}\overline{n}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Sigma^{0}\overline{n}$
& $0\pm29.0$%
& $\pm(47.8^{+23.8}_{-23.1})$%
& $\pm(58.6^{+19.5}_{-17.8})$%
\\
$B^-\to \Xi^{0}\overline{\Sigma^{+}}$
& $0\pm2.6$%
& $\pm(5.8^{+3.4}_{-2.3})$%
& $\pm(8.1^{+3.9}_{-2.8})$%
&$\overline B{}^0\to \Xi^{0}\overline{\Sigma^{0}}$
& $0\pm2.3$%
& $\pm(5.8^{+3.0}_{-2.1})$%
& $\pm(8.1^{+3.6}_{-2.5})$%
\\
$B^-\to\Xi^{-}\overline{\Sigma^{0}}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Xi^{0}\overline{\Lambda}$
& $0\pm24.5$%
& $\pm(27.1^{+28.3}_{-16.2})$%
& $\pm(35.3^{+27.2}_{-13.0})$%
\\
$B^-\to \Xi^{-}\overline{\Lambda}$
& $0\pm5.5$%
& $0\pm5.5$%
& $0\pm5.5$%
& $\overline B{}^0\to \Xi^{-}\overline{\Sigma^{-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$B^-\to \Lambda\overline{p}$
& $0\pm4.8$%
& $\pm(9.6^{+5.9}_{-4.0})$%
& $\pm(13.2^{+6.5}_{-4.3})$%
& $\overline B{}^0\to \Lambda\overline{n}$
& $0\pm8.4$%
& $\pm(18.7^{+9.6}_{-6.9})$%
& $\pm(24.9^{+9.8}_{-6.6})$%
\\
$\overline B{}^0_s\to p\overline{p}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
& $\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{+}}$
& $0\pm21.0$%
& $\pm(27.1^{+21.6}_{-16.7})$%
& $\pm(35.3^{+20.1}_{-14.1})$%
\\
$\overline B{}^0_s\to n\overline{n}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{0}}$
& $0\pm11.1$%
& $\pm(14.2^{+12.5}_{-8.9})$%
& $\pm(19.2^{+12.7}_{-8.2})$%
\\
$\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{0}}$
& $0\pm8.4$%
& $\pm(14.2^{+10.4}_{-6.5})$%
& $\pm(19.2^{+11.1}_{-6.3})$%
& $\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{-}}$
& $0\pm1.6$%
& $0\pm1.6$%
& $0\pm1.6$%
\\
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$
& $0\pm2.5$%
& $0\pm2.5$%
& $0\pm2.5$%
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Lambda}$
& $0\pm46.4$%
& $\pm(74.3^{+21.7}_{-33.6})$%
& $\pm(84.7^{+13.8}_{-19.7})$%
\\
$\overline B{}^0_s\to \Lambda\overline{\Lambda}$
& $0\pm7.0$%
& $\pm(14.2^{+8.3}_{-5.7})$%
& $\pm(19.2^{+8.8}_{-5.6})$%
& $\overline B{}^0_s\to \Lambda\overline{\Sigma^{0}}$
& $0\pm42.8$%
& $\pm(74.3^{+19.7}_{-33.2})$%
& $\pm(84.7^{+12.5}_{-20.9})$%
\\
\end{tabular}
}
\end{ruledtabular}
\end{table}
In Table~\ref{tab:AcpBBDS=-1} we give results of direct $CP$ asymmetries of $\Delta S=-1$, $\overline B_q\to{{\cal B} \overline {\cal B}}$ modes.
The $CP$ asymmetries of the Group I modes are as follows.
The $CP$ asymmetries of the $B^-\to \Lambda\overline{p}$
and
$\overline B{}^0_s\to \Lambda\overline{\Lambda}$
decays are similar reaching $\pm 13\%$ and $\pm19\%$, respectively,
and their signs are opposite to
the sign of ${\mathcal A}(\overline B{}^0\to p\overline p)$.
The $CP$ asymmetries of
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$
and
$B^-\to \Xi^{-}\overline{\Lambda}$ decays
are vanishingly small.
We will return to these modes later.
From the table we see that for the Group II modes,
${\mathcal A}(B^-\to\Xi^{-}\overline{\Sigma^{0}})$
is vanishingly small,
while
${\mathcal A}(\overline B{}^0\to \Xi^{0}\overline{\Lambda})$
and
${\mathcal A}(\overline B{}^0\to \Sigma^{+}\overline{p})$
are similar and can be as large as $\pm35\%$.
For the Group III modes,
${\mathcal A}(\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{+}})$
is the largest one reaching $\pm35\%$,
${\mathcal A}(\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{0}})$
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{0}})$
are similar and can be as large as $\pm 19\%$,
while
${\mathcal A}(B^-\to \Xi^{0}\overline{\Sigma^{+}})$
and
${\mathcal A}(\overline B{}^0\to \Xi^{0}\overline{\Sigma^{0}})$
are similar and not sizable, but can still reach $\pm8\%$.
Some of the above modes have rates of order $10^{-7}$ (see Table~\ref{tab:BBDS=-1})
with unsuppressed $CP$ asymmetries.
From Table~\ref{tab:AcpBBDS=-1}, we see that
the $CP$ asymmetries of
$B^-\to \Sigma^{-}\overline{n}$,
$B^-\to\Xi^{-}\overline{\Sigma^{0}}$,
$B^-\to \Xi^{-}\overline{\Lambda}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{-}}$ and
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$
decays
have vanishing central values as they
do not have any tree ($T'_{i{{\cal B} \overline {\cal B}}}$) contribution.
In particular,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{-}}$
and
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$ decays are pure penguin modes,
which only have $P'_{i{{\cal B} \overline {\cal B}}}$, $P'_{iEW{{\cal B} \overline {\cal B}}}$ and $PA'_{{{\cal B} \overline {\cal B}}}$ terms.
Their $CP$ asymmetries are small.
This can be understood from the fact that $P^{\prime u}$ is much smaller than $P^{\prime c}$. We can estimate the $CP$ asymmetry as follows:
\begin{eqnarray}
|{\mathcal A}|\simeq
2 \left|\frac{P^{\prime u}}{P^{\prime c}}\right|
\sin\gamma\,
|\sin\delta|
\lesssim
2\left|\frac{V_{ub}V^*_{us}}{V_{cb}V^*_{cs}}\right|
\sin\gamma
\simeq 3.8\%,
\label{eq: A DeltaS penguin}
\end{eqnarray}
where $\delta$ is the strong phase difference of the penguins.~\footnote{It is useful to compare with the $CP$ asymmetry of the $\Delta S=-1$ pure penguin $PP$ mode:
${\mathcal A}(\overline B{}^0_s\to K^0\overline K^0)=0.9^{+0.2+0.2+0.1+0.2}_{-0.2-0.2-0.2-0.3}\,\%$ in a QCDF calculation~\cite{Beneke:2003zv}.}
The smallness of these direct $CP$ asymmetries reflects the fact that in $\Delta S=-1$ decays the $c$-penguin is much larger than the $u$-penguin,
as their CKM factors differ by a factor of about 50.
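As a rough numerical cross-check of Eq.~(\ref{eq: A DeltaS penguin}) (again with representative CKM inputs, $|V_{ub}|\simeq 3.7\times 10^{-3}$, $|V_{us}|\simeq 0.225$, $|V_{cb}|\simeq 4.1\times 10^{-2}$, $|V_{cs}|\simeq 0.97$ and $\gamma\simeq 66^\circ$, quoted for illustration only),
\begin{eqnarray}
\left|\frac{V_{ub}V^*_{us}}{V_{cb}V^*_{cs}}\right|
\simeq\frac{(3.7\times 10^{-3})\times 0.225}{(4.1\times 10^{-2})\times 0.97}
\simeq 0.021,
\qquad
2\times 0.021\times\sin 66^\circ\simeq 3.8\%,
\end{eqnarray}
and the inverse of this CKM ratio, roughly $50$, is the factor mentioned above.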
These suppressed asymmetries can thus serve as tests of the Standard Model.
In particular, the $\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$ decay, being a Group I mode,
can decay through cascades to all-charged final states with an unsuppressed rate ($\sim 2\times 10^{-7}$, see Table~\ref{tab:BBDS=-1}).
The $CP$ asymmetry of this mode can be searched for.
Nevertheless, the search is quite demanding, as it requires tagging of $B_s$ and suffers from the low efficiency of $\Xi^-$ reconstruction.
The subleading $\overline B{}^0_s\to p\overline{p}$ and
$\overline B{}^0_s\to n\overline{n}$ decays, with only $E'_{i{{\cal B} \overline {\cal B}}}$ and $PA'_{{{\cal B} \overline {\cal B}}}$ contributions,
can have $CP$ asymmetries of any value.
There are relations between the direct $CP$ asymmetries of $\Delta S=0$ and $\Delta S=-1$ modes that follow from
the so-called $U$-spin symmetry~\cite{Uspin, Uspin1}.
With
\begin{eqnarray}
\Delta_{CP}(\overline B_q\to f)\equiv \Gamma(\overline B_q\to f)-\Gamma(B_q\to \bar f)
=\frac{2}{\tau(B_q)}{\cal A}(\overline B_q\to f){\cal B}(\overline B_q\to f),
\end{eqnarray}
we have~\cite{Chua:2013zga}
\begin{eqnarray}
\Delta_{CP}(B^-\to n\overline{p})
&=&-\Delta_{CP}(B^-\to\Xi^{0}\overline{\Sigma^{+}}),
\nonumber\\
\Delta_{CP}(B^-\to\Xi^{-}\overline{\Xi^{0}})
&=&-\Delta_{CP}(B^-\to\Sigma^{-}\overline{n}),
\nonumber\\
\Delta_{CP}(\bar B^0\to p\overline{p})
&=&
-\Delta_{CP}(\bar B^0_s\to\Sigma^{+}\overline{\Sigma^{+}}),
\nonumber\\
\Delta_{CP}(\bar B^0\to n\overline{n})
&=&
-\Delta_{CP}(\bar B^0_s\to\Xi^{0}\overline{\Xi^{0}}),
\nonumber\\
\Delta_{CP}(\bar B^0\to\Sigma^{+}\overline{\Sigma^{+}})
&=&-\Delta_{CP}(\bar B^0_s\to p\overline{p}),
\nonumber\\
\Delta_{CP}(\bar B^0\to\Sigma^{-}\overline{\Sigma^{-}})
&=&-\Delta_{CP}(\bar B^0_s\to\Xi^{-}\overline{\Xi^{-}}),
\nonumber\\
\Delta_{CP}(\bar B^0\to\Xi^{0}\overline{\Xi^{0}})
&=&-\Delta_{CP}(\bar B^0_s\to n\overline{n}),
\nonumber\\
\Delta_{CP}(\bar B^0\to\Xi^{-}\overline{\Xi^{-}})
&=&-\Delta_{CP}(\bar B^0_s\to\Sigma^{-}\overline{\Sigma^{-}}),
\nonumber\\
\Delta_{CP}(\bar B^0_s\to p\overline{\Sigma^{+}})
&=&-\Delta_{CP}(\bar B^0\to \Sigma^{+}\overline{p}),
\nonumber\\
\Delta_{CP}(\bar B^0_s\to\Sigma^{-}\overline{\Xi^{-}})
&=&-\Delta_{CP}(\bar B^0\to\Xi^{-}\overline{\Sigma^{-}}).
\label{eq: DCPBB}
\end{eqnarray}
The minus signs in the above relations are from
$Im(V_{ub}V^*_{ud} V^*_{tb}V_{td})
=-Im(V_{ub}V^*_{us} V^*_{tb}V_{ts})$.
Note that these relations do not rely on the large $m_B$ limit, but are subject to corrections from SU(3) breaking in the phase space factors and topological amplitudes.
Some relations are satisfied trivially as all the related $CP$ asymmetries are always vanishing.
One can check that the results shown in Tables~\ref{tab:AcpBBDS=0} and \ref{tab:AcpBBDS=-1} roughly satisfy these relations, and the agreement can be improved when SU(3) breaking effects are taken into account.~\footnote{
Note that the values and signs of
$\Delta_{CP}(\bar B^0\to\Sigma^{+}\overline{\Sigma^{+}})$,
$\Delta_{CP}(\bar B^0\to\Xi^{0}\overline{\Xi^{0}})$,
$\Delta_{CP}(\bar B^0_s\to p\overline{p})$
and
$\Delta_{CP}(\bar B^0_s\to n\overline{n})$
cannot be read out from the tables. The relative signs of modes in the sixth, eighth and tenth relations cannot be read out from the tables.}
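Explicitly, combining a relation of the form $\Delta_{CP}(\overline B_q\to f)=-\Delta_{CP}(\overline B_{q'}\to f')$ with the definition of $\Delta_{CP}$ given above, one obtains a relation among measurable quantities,
\begin{eqnarray}
{\cal A}(\overline B_{q'}\to f')
=-{\cal A}(\overline B_q\to f)\,
\frac{\tau(B_{q'})}{\tau(B_q)}\,
\frac{{\cal B}(\overline B_q\to f)}{{\cal B}(\overline B_{q'}\to f')},
\end{eqnarray}
which is the form used in the examples below.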
For example, using the first three relations of the above equation and the corresponding rates from Tables~\ref{tab:BBDS=0} and \ref{tab:BBDS=-1}, we have
\begin{eqnarray}
{\cal A}(B^-\to\Xi^{0}\overline{\Sigma^{+}})
&=&-{\cal A}(B^-\to n\overline{p})
\frac{{\cal B}(B^-\to n\overline{p})}{{\cal B}(B^-\to\Xi^{0}\overline{\Sigma^{+}})}
\nonumber\\
&\simeq&-(0.087) {\cal A}(B^-\to n\overline{p}),
\nonumber\\
{\cal A}(B^-\to\Xi^{-}\overline{\Xi^{0}})
&=&-{\cal A}(B^-\to\Sigma^{-}\overline{n})
\frac{{\cal B}(B^-\to\Sigma^{-}\overline{n})}{{\cal B}(B^-\to\Xi^{-}\overline{\Xi^{0}})}
\nonumber\\
&\simeq&-(23.9) {\cal A}(B^-\to\Sigma^{-}\overline{n}),
\nonumber\\
{\cal A}(\overline B{}^0_s\to\Sigma^+\overline{\Sigma^+})
&=&-{\cal A}(\overline B{}^0\to p\bar p)\frac{\tau(B^0_s)}{\tau(B^0)}
\frac{{\cal B}(\overline B{}^0\to p\bar p)}{{\cal B}(\overline B{}^0_s\to\Sigma^+\overline{\Sigma^+})}
\nonumber\\
&\simeq&-(0.8) {\cal A}(\overline B{}^0\to p\bar p),
\end{eqnarray}
which are roughly satisfied when compared with the results for the $CP$ asymmetries in Tables~\ref{tab:AcpBBDS=0} and \ref{tab:AcpBBDS=-1}. Note that the rate ratios in the above relations are not fixed. For example, they can change with $\phi$. The rate ratios used here are just rough estimates obtained from the central values of the rates in Tables~\ref{tab:BBDS=0} and \ref{tab:BBDS=-1}.
Some of these relations are useful for constraining the sizes of the $CP$ asymmetries of $\Delta S=-1$ pure penguin modes in a model-independent way. From the sixth, eighth and tenth relations we have
\begin{eqnarray}
|{\mathcal A}(\bar B^0_s\to\Xi^{-}\overline{\Xi^{-}})|
&=&
|{\mathcal A}(\bar B^0\to\Sigma^{-}\overline{\Sigma^{-}})|
\frac{\tau(B^0_s)}{\tau(B^0)}
\frac{{\cal B}(\bar B^0\to\Sigma^{-}\overline{\Sigma^{-}})}{{\cal B}(\bar B^0_s\to\Xi^{-}\overline{\Xi^{-}})}
\nonumber\\
&\leq& \frac{\tau(B^0_s)}{\tau(B^0)}
\frac{{\cal B}(\bar B^0\to\Sigma^{-}\overline{\Sigma^{-}})}{{\cal B}(\bar B^0_s\to\Xi^{-}\overline{\Xi^{-}})}
\simeq 6.5\%,
\nonumber\\
|{\mathcal A}(\bar B^0_s\to\Sigma^{-}\overline{\Sigma^{-}})|
&=&
|{\mathcal A}(\bar B^0\to\Xi^{-}\overline{\Xi^{-}})|
\frac{\tau(B^0_s)}{\tau(B^0)}
\frac{{\cal B}(\bar B^0\to\Xi^{-}\overline{\Xi^{-}})}{{\cal B}(\bar B^0_s\to\Sigma^{-}\overline{\Sigma^{-}})}
\nonumber\\
&\leq& \frac{\tau(B^0_s)}{\tau(B^0)}
\frac{{\cal B}(\bar B^0\to\Xi^{-}\overline{\Xi^{-}})}{{\cal B}(\bar B^0_s\to\Sigma^{-}\overline{\Sigma^{-}})}
\simeq 4.6\%,
\nonumber\\
|{\mathcal A}(\bar B^0\to\Xi^{-}\overline{\Sigma^{-}})|
&=&
|{\mathcal A}(\bar B^0_s\to\Sigma^{-}\overline{\Xi^{-}})|
\frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Sigma^{-}\overline{\Xi^{-}})}{{\cal B}(\bar B^0\to\Xi^{-}\overline{\Sigma^{-}})}
\nonumber\\
&\leq& \frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Sigma^{-}\overline{\Xi^{-}})}{{\cal B}(\bar B^0\to\Xi^{-}\overline{\Sigma^{-}})}
\simeq 5.0\%.
\end{eqnarray}
We see from Table~\ref{tab:AcpBBDS=-1} that the above inequalities are all satisfied.
Note that the constraining power of these inequalities is similar to that of Eq.~(\ref{eq: A DeltaS penguin}).
\begin{table}[t!]
\caption{\label{tab:AcpBDDS=0} Same as Table~\ref{tab:AcpBBDS=0}, but with $\Delta S=0$, $\overline B_q\to{{\cal B} \overline {\cal D}}$ modes.
}
\begin{ruledtabular}
\centering
{
\footnotesize
\begin{tabular}{lccclccc}
Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
& Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
\\
\hline $B^-\to p\overline{\Delta^{++}}$
& $0\pm48.7$%
& $\mp(36.0^{+49.3}_{-33.9})$%
& $\mp(49.3^{+44.8}_{-22.0})$%
& $\overline B{}^0_s\to p\overline{\Sigma^{*+}}$
& $0\pm47.9$%
& $\mp(36.0^{+48.6}_{-33.4})$%
& $\mp(49.3^{+44.3}_{-21.9})$%
\\
$B^-\to n\overline{\Delta^+}$
& $0\pm19.9$%
& $\pm(26.7^{+20.5}_{-17.3})$%
& $\pm(38.8^{+20.1}_{-15.2})$%
& $\overline B{}^0_s\to n\overline{\Sigma^{*0}}$
& $0\pm19.6$%
& $\pm(26.7^{+20.1}_{-17.1})$%
& $\pm(38.8^{+19.7}_{-15.0})$%
\\
$B^-\to\Sigma^0\overline{\Sigma^{*+}}$
& $0\pm11.5$%
& $\mp(20.8^{+12.4}_{-9.9})$%
& $\mp(28.9^{+12.5}_{-9.0})$%
&$\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{*0}}$
& $0\pm11.4$%
& $\mp(20.8^{+12.3}_{-9.8})$%
& $\mp(28.9^{+12.4}_{-8.9})$%
\\
$B^-\to\Sigma^-\overline{\Sigma^{*0}}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{*-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$B^-\to\Xi^{-}\overline{\Xi^{*0}}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Xi^{-}\overline{\Omega^-}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$B^-\to\Lambda\overline{\Sigma^{*+}}$
& $0\pm100$%
& $0\pm100$%
& $0\pm100$%
& $\overline B{}^0_s\to \Lambda\overline{\Xi^{*0}}$
& $0\pm100$%
& $0\pm100$%
& $0\pm100$%
\\
$\overline B{}^0\to p\overline{\Delta^+}$
& $0\pm48.7$%
& $\mp(36.0^{+49.3}_{-33.9})$%
& $\mp(49.4^{+44.8}_{-22.0})$%
& $\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{*+}}$
& $0$%
& $0$%
& $0$%
\\
$\overline B{}^0\to n\overline{\Delta^0}$
& $0\pm19.9$%
& $\pm(26.7^{+20.5}_{-17.3})$%
& $\pm(38.8^{+20.1}_{-15.2})$%
& $\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{*0}}$
& $0\pm11.5$%
& $\mp(20.8^{+12.4}_{-9.9})$%
& $\mp(28.9^{+12.5}_{-9.0})$%
\\
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{*0}}$
& $0$%
& $0$%
& $0$%
& $\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{*-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{*-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
& $\overline B{}^0\to \Lambda\overline{\Sigma^{*0}}$
& $0\pm100$%
& $0\pm100$%
& $0\pm100$%
\\
\end{tabular}
}
\end{ruledtabular}
\end{table}
\subsubsection{$CP$ asymmetries of $\overline B_q\to {{\cal B} \overline {\cal D}}$ decays}
In Table~\ref{tab:AcpBDDS=0} we give results of direct $CP$ asymmetries of $\Delta S=0$, $\overline B_q\to{{\cal B} \overline {\cal D}}$ modes.
The $CP$ asymmetries of Group I modes,
$B^-\to p\overline{\Delta^{++}}$
and
$\overline B{}^0_s\to p\overline{\Sigma^{*+}}$
decays are similar and can be as large as $\mp49\%$.
For Group II modes,
${\mathcal A}(B^-\to\Sigma^0\overline{\Sigma^{*+}})$
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{*0}})$
are similar and can be as large as $\mp29\%$,
while
${\mathcal A}(\overline B{}^0\to p\overline{\Delta^+})$
is more sizable and is similar to the $CP$ asymmetries of
$B^-\to p\overline{\Delta^{++}}$
and
$\overline B{}^0_s\to p\overline{\Sigma^{*+}}$
decays, reaching $\mp49\%$.
The $CP$ asymmetry of the Group III mode,
${\mathcal A}(\overline B{}^0\to \Sigma^{0}\overline{\Sigma^{*0}})$
is similar to ${\mathcal A}(B^-\to\Sigma^0\overline{\Sigma^{*+}})$
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{0}\overline{\Xi^{*0}})$,
reaching $\mp29\%$.
The $CP$ asymmetries of these modes basically all share the sign of ${\mathcal A}(\overline B{}^0\to p\overline p)$
for large enough $|\phi|$.
From Eqs. (\ref{eq: BDBm, DS=0}), (\ref{eq: BDB0, DS=0}) and (\ref{eq: BDBs, DS=0}),
we can easily identify the following relations
\begin{eqnarray}
{\mathcal A}(B^-\to\Xi^{-}\overline{\Xi^{*0}})
&=&{\mathcal A}(B^-\to\Sigma^-\overline{\Sigma^{*0}}),
\nonumber\\
{\mathcal A}(\bar B^0\to\Sigma^{-}\overline{\Sigma^{*-}})
&=&{\mathcal A}(\bar B^0\to\Xi^{-}\overline{\Xi^{*-}})
={\mathcal A}(\bar B^0_s\to\Sigma^{-}\overline{\Xi^{*-}})
\nonumber\\
&=&{\mathcal A}(\bar B^0_s\to\Xi^{-}\overline{\Omega^-}),
\nonumber\\
{\mathcal A}(\bar B^0\to\Sigma^{+}\overline{\Sigma^{*+}})
&=&{\mathcal A}(\bar B^0\to\Xi^{0}\overline{\Xi^{*0}})=0.
\end{eqnarray}
We see from Table~\ref{tab:AcpBDDS=0} that these relations are satisfied.
Note that these relations do not rely on the asymptotic limit, while the first relation is subject to corrections from SU(3) breaking in the topological amplitudes.
As shown in Table~\ref{tab:AcpBDDS=0},
the $CP$ asymmetries of
$B^-\to\Sigma^-\overline{\Sigma^{*0}}$,
$B^-\to\Xi^{-}\overline{\Xi^{*0}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Xi^{-}\overline{\Omega^-}$,
$\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{*+}}$ and
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{*0}}$ decays,
have vanishing central values,
as they do not have any tree ($T_{i{{\cal B} \overline {\cal D}}}$) contribution.
In particular,
$\overline B{}^0\to \Xi^{-}\overline{\Xi^{*-}}$ and
$\overline B{}^0\to \Sigma^{-}\overline{\Sigma^{*-}}$
decays are $\Delta S=0$ pure penguin modes with only
$P_{{{\cal B} \overline {\cal D}}}$ and $P_{iEW{{\cal B} \overline {\cal D}}}$ contributions,
while
$\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{*+}}$ and
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{*0}}$ decays
are pure exchange modes with only $E_{{{\cal B} \overline {\cal D}}}$ contributions.
The $CP$ asymmetries of the former two modes need not be vanishing [see discussion around Eq.~(\ref{eq: PuPc})], while the $CP$ asymmetries of the latter two modes are always vanishing, since a single topological amplitude carrying a single weak phase cannot generate direct $CP$ violation.
Note that although
$B^-\to\Lambda\overline{\Sigma^{*+}}$,
$\overline B{}^0_s\to \Lambda\overline{\Xi^{*0}}$ and
$\overline B{}^0\to \Lambda\overline{\Sigma^{*0}}$ decays
have vanishing tree contributions in the asymptotic limit, so that the central values of their $CP$ asymmetries vanish,
their uncertainties, which mostly come from corrections to the asymptotic relations, are nevertheless sizable.
Measuring the $CP$ asymmetries of these modes can give information on the breaking of the asymptotic relations.
\begin{table}[t!]
\caption{\label{tab:AcpBDDS=-1} Same as Table~\ref{tab:AcpBBDS=0}, but with $\Delta S=-1$, $\overline B_q\to{{\cal B} \overline {\cal D}}$ modes.
}
\begin{ruledtabular}
\centering
{
\footnotesize
\begin{tabular}{lccclccc}
Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
& Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
\\
\hline $B^-\to \Sigma^+\overline{\Delta^{++}}$
& $0\pm30.3$%
& $\pm(27.1^{+29.0}_{-25.9})$%
& $\pm(35.3^{+26.4}_{-21.8})$%
& $\overline B{}^0\to \Sigma^{+}\overline{\Delta^+}$
& $0\pm30.1$%
& $\pm(27.1^{+28.8}_{-25.7})$%
& $\pm(35.3^{+26.2}_{-21.6})$%
\\
$B^-\to \Sigma^0\overline{\Delta^+}$
& $0\pm16.1$%
& $\pm(14.2^{+17.1}_{-13.7})$%
& $\pm(19.2^{+16.9}_{-12.6})$%
& $\overline B{}^0\to \Sigma^{0}\overline{\Delta^0}$
& $0\pm15.8$%
& $\pm(14.2^{+16.8}_{-13.5})$%
& $\pm(19.2^{+16.6}_{-12.3})$%
\\
$B^-\to \Sigma^-\overline{\Delta^0}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
&$\overline B{}^0\to \Sigma^{-}\overline{\Delta^-}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$B^-\to \Xi^{0}\overline{\Sigma^{*+}}$
& $0\pm21.3$%
& $\mp(28.9^{+22.9}_{-18.5})$%
& $\mp(45.0^{+24.0}_{-16.3})$%
& $\overline B{}^0\to \Xi^{0}\overline{\Sigma^{*0}}$
& $0\pm21.0$%
& $\mp(28.9^{+22.6}_{-18.2})$%
& $\mp(45.0^{+23.7}_{-16.0})$%
\\
$B^-\to \Xi^{-}\overline{\Sigma^{*0}}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$B^-\to \Lambda\overline{\Delta^{+}}$
& $0\pm52.9$%
& $\pm(74.3^{+22.9}_{-41.7})$%
& $\pm(84.7^{+14.4}_{-25.1})$%
& $\overline B{}^0\to \Lambda\overline{\Delta^0}$
& $0\pm52.9$%
& $\pm(74.3^{+22.9}_{-41.7})$%
& $\pm(84.7^{+14.4}_{-25.1})$%
\\
$\overline B{}^0_s\to p\overline{\Delta^+}$
& $0$%
& $0$%
& $0$%
& $\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{*+}}$
& $0\pm30.3$%
& $\pm(27.1^{+29.0}_{-25.9})$%
& $\pm(35.3^{+26.4}_{-21.8})$%
\\
$\overline B{}^0_s\to n\overline{\Delta^0}$
& $0$%
& $0$%
& $0$%
& $\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{*0}}$
& $0\pm16.0$%
& $\pm(14.2^{+17.0}_{-13.6})$%
& $\pm(19.2^{+16.7}_{-12.4})$%
\\
$\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{*0}}$
& $0\pm21.3$%
& $\mp(28.9^{+22.9}_{-18.5})$%
& $\mp(45.0^{+24.0}_{-16.3})$%
& $\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{*-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
& $\overline B{}^0_s\to \Lambda\overline{\Sigma^{*0}}$
& $0\pm53.6$%
& $\pm(74.3^{+23.1}_{-42.3})$%
& $\pm(84.7^{+14.5}_{-25.4})$%
\\
\end{tabular}
}
\end{ruledtabular}
\end{table}
In Table~\ref{tab:AcpBDDS=-1} we give results of direct $CP$ asymmetries of $\Delta S=-1$, $\overline B_q\to{{\cal B} \overline {\cal D}}$ modes.
The $CP$ asymmetry of the Group I mode
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$
is vanishing. We will return to this later.
For Group II modes,
${\mathcal A}(B^-\to \Sigma^+\overline{\Delta^{++}})$
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{+}\overline{\Sigma^{*+}})$
are similar reaching $\pm35\%$,
${\mathcal A}(B^-\to \Xi^{0}\overline{\Sigma^{*+}})$
and
${\mathcal A}(\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{*0}})$
are similar and sizable reaching $\mp45\%$,
but with sign opposite to most modes in Table~\ref{tab:AcpBDDS=-1},
${\mathcal A}(\overline B{}^0\to \Sigma^{0}\overline{\Delta^0})$
can reach $\pm19\%$,
while
${\mathcal A}(\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}})$
and
${\mathcal A}(B^-\to \Xi^{-}\overline{\Sigma^{*0}})$
are vanishingly small.
In the Group III modes,
${\mathcal A}(\overline B{}^0\to \Xi^{0}\overline{\Sigma^{*0}})$
is the largest one reaching $\mp45\%$ and is similar to
${\mathcal A}(B^-\to \Xi^{0}\overline{\Sigma^{*+}})$
and
${\mathcal A}(\overline B{}^0_s\to \Xi^{0}\overline{\Xi^{*0}})$,
${\mathcal A}(\overline B{}^0\to \Sigma^{+}\overline{\Delta^+})$
can be as large as $\pm35\%$,
while
${\mathcal A}(B^-\to \Sigma^0\overline{\Delta^+})$
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{0}\overline{\Sigma^{*0}})$
are similar and can be as large as $\pm19\%$.
Note that for large enough $|\phi|$ the $CP$ asymmetries of most modes in the table basically have the same sign as ${\mathcal A}(B^-\to\Lambda \overline p)$.
From Eqs. (\ref{eq: BDBm, DS=-1}), (\ref{eq: BDB0, DS=-1}) and (\ref{eq: BDBs, DS=-1}),
we can easily identify the following relations
\begin{eqnarray}
{\mathcal A}(B^-\to\Xi^{-}\overline{\Sigma^{*0}})
&=&{\mathcal A}(B^-\to\Sigma^-\overline{\Delta^0}),
\nonumber\\
{\mathcal A}(\bar B^0_s\to\Sigma^{-}\overline{\Sigma^{*-}})
&=&{\mathcal A}(\bar B^0_s\to\Xi^{-}\overline{\Xi^{*-}})
={\mathcal A}(\bar B^0\to\Xi^{-}\overline{\Sigma^{*-}})
\nonumber\\
&=&{\mathcal A}(\bar B^0\to\Sigma^{-}\overline{\Delta^-}),
\nonumber\\
{\mathcal A}(\bar B^0_s\to p\overline{\Delta^+})
&=&{\mathcal A}(\bar B^0_s\to n\overline{\Delta^0})=0.
\end{eqnarray}
We see from Table~\ref{tab:AcpBDDS=-1} that these relations are satisfied.
Note that these relations do not rely on the asymptotic limit, but are subject to SU(3) breaking in the topological amplitudes.
As shown in Table~\ref{tab:AcpBDDS=-1},
the $CP$ asymmetries of
$B^-\to \Sigma^-\overline{\Delta^0}$,
$B^-\to \Xi^{-}\overline{\Sigma^{*0}}$,
$\overline B{}^0\to \Sigma^{-}\overline{\Delta^-}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0_s\to p\overline{\Delta^+}$ and
$\overline B{}^0_s\to n\overline{\Delta^0}$ decays
have vanishing central values,
as they do not have any tree ($T'_{i{{\cal B} \overline {\cal D}}}$) contribution.
In particular,
$\overline B{}^0\to \Sigma^{-}\overline{\Delta^-}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{*-}}$ and
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$
decays are pure penguin modes with only
$P'_{{{\cal B} \overline {\cal D}}}$ and $P'_{iEW{{\cal B} \overline {\cal D}}}$ contributions,
while
$\overline B{}^0_s\to p\overline{\Delta^+}$ and
$\overline B{}^0_s\to n\overline{\Delta^0}$
are pure exchange modes with only $E'_{{{\cal B} \overline {\cal D}}}$ contributions.
The $CP$ asymmetries of the $\Delta S=-1$ pure penguin modes are small [see Eq.~(\ref{eq: A DeltaS penguin})], while the $CP$ asymmetries of pure exchange modes are always vanishing.
These can be tests of the Standard Model.
In particular, the $\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$ decay
can decay through cascades to all-charged final states and has an unsuppressed decay rate (see Table~\ref{tab:BDDS=-1}),
but may suffer from low reconstruction efficiencies of the final-state particles.
Nevertheless, it is still interesting to search for this mode and its $CP$ asymmetry.
The $U$-spin relations for octet-antidecuplet modes are given by~\cite{Chua:2013zga}
\begin{eqnarray}
\Delta_{CP}(B^-\to n\overline{\Delta^+})
&=&-\Delta_{CP}(B^-\to\Xi^{0}\overline{\Sigma^{*+}}),
\nonumber\\
\Delta_{CP}(B^-\to\Xi^{-}\overline{\Xi^{*0}})
&=&2 \Delta_{CP}(B^-\to\Sigma^-\overline{\Sigma^{*0}})
=-2\Delta_{CP}(B^-\to\Xi^{-}\overline{\Sigma^{*0}})
\nonumber\\
&=&-\Delta_{CP}(B^-\to\Sigma^-\overline{\Delta^0}),
\nonumber\\
\Delta_{CP}(B^-\to p\overline{\Delta^{++}})
&=&-
\Delta_{CP}(B^-\to \Sigma^+\overline{\Delta^{++}}),
\nonumber\\
\Delta_{CP}(\bar B^0\to n\overline{\Delta^0})
&=&-\Delta_{CP}(\bar B^0_s\to\Xi^{0}\overline{\Xi^{*0}}),
\nonumber\\
3\Delta_{CP}(\bar B^0\to\Sigma^{-}\overline{\Sigma^{*-}})
&=&3 \Delta_{CP}(\bar B^0\to\Xi^{-}\overline{\Xi^{*-}})
=3 \Delta_{CP}(\bar B^0_s\to\Sigma^{-}\overline{\Xi^{*-}})
\nonumber\\
&=&\Delta_{CP}(\bar B^0_s\to\Xi^{-}\overline{\Omega^-})
= -3 \Delta_{CP}(\bar B^0_s\to\Sigma^{-}\overline{\Sigma^{*-}})
\nonumber\\
&=&-3 \Delta_{CP}(\bar B^0_s\to\Xi^{-}\overline{\Xi^{*-}})
=-3 \Delta_{CP}(\bar B^0\to\Xi^{-}\overline{\Sigma^{*-}})
\nonumber\\
&=&-\Delta_{CP}(\bar B^0\to\Sigma^{-}\overline{\Delta^-}),
\nonumber\\
\Delta_{CP}(\bar B^0\to\Sigma^{+}\overline{\Sigma^{*+}})
&=&\Delta_{CP}(\bar B^0\to\Xi^{0}\overline{\Xi^{*0}})
=-\Delta_{CP}(\bar B^0_s\to p\overline{\Delta^+})
\nonumber\\
&=&-\Delta_{CP}(\bar B^0_s\to n\overline{\Delta^0})=0,
\nonumber\\
\Delta_{CP}(\bar B^0\to p\overline{\Delta^+})
&=&-\Delta_{CP}(\bar B^0_s\to\Sigma^{+}\overline{\Sigma^{*+}}),
\nonumber\\
\Delta_{CP}(\bar B^0_s\to n\overline{\Sigma^{*0}})
&=&-\Delta_{CP}(\bar B^0\to\Xi^{0}\overline{\Sigma^{*0}}),
\nonumber\\
\Delta_{CP}(\bar B^0_s\to p\overline{\Sigma^{*+}})
&=&-\Delta_{CP}(\bar B^0\to \Sigma^{+}\overline{\Delta^+}).
\end{eqnarray}
These relations are subject to corrections from SU(3) breaking in the $|p_{cm}|^3$ factors and the topological amplitudes.
The above relations are roughly satisfied by the results shown in Tables~\ref{tab:AcpBDDS=0} and \ref{tab:AcpBDDS=-1}
and the agreement can be improved when SU(3) breaking effects are taken into account.
One can make a quick but non-trivial check on the relative signs of these asymmetries
and see that they are indeed in agreement with the above relations.~\footnote{Note that for the second relation, the $CP$ asymmetries of these modes all have vanishing central values; their relative signs cannot be read out from Tables~\ref{tab:AcpBDDS=0} and \ref{tab:AcpBDDS=-1}.}
Note that several of the vanishing $CP$ asymmetries discussed previously are related to each other.
The fifth relation can be used to constrain the sizes of the $CP$ asymmetries of $\Delta S=-1$ pure penguin modes in a model-independent way. For example, from this relation we have
\begin{eqnarray}
|{\mathcal A}(\bar B^0\to\Xi^{-}\overline{\Sigma^{*-}})|
&=&
\frac{1}{3}|{\mathcal A}(\bar B^0_s\to\Xi^{-}\overline{\Omega^-})|
\frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Xi^{-}\overline{\Omega^-})}{{\cal B}(\bar B^0\to\Xi^{-}\overline{\Sigma^{*-}})}
\nonumber\\
&\leq& \frac{1}{3}\frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Xi^{-}\overline{\Omega^-})}{{\cal B}(\bar B^0\to\Xi^{-}\overline{\Sigma^{*-}})}
\simeq 4.1\%,
\end{eqnarray}
which is satisfied by ${\mathcal A}(\bar B^0\to\Xi^{-}\overline{\Sigma^{*-}})$ shown in Table~\ref{tab:AcpBDDS=-1}.
Note that the above inequality is generic and can be tested experimentally, as the quantities on both sides are measurable.
\begin{table}[t!]
\caption{\label{tab:AcpDBDS=0} Same as Table~\ref{tab:AcpBBDS=0}, but with $\Delta S=0$, $\overline B_q\to{{\cal D} \overline {\cal B}}$ modes.
}
\begin{ruledtabular}
\centering
{
\footnotesize
\begin{tabular}{lccclccc}
Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
& Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
\\
\hline $B^-\to \Delta^0\overline{p}$
& $0\pm18.5$%
& $\pm(26.7^{+19.1}_{-16.0})$%
& $\pm(38.8^{+18.7}_{-14.0})$%
& $\overline B{}^0_s\to \Delta^+\overline{\Sigma^{+}}$
& $0\pm19.0$%
& $\mp(36.0^{+18.8}_{-16.2})$%
& $\mp(49.3^{+17.9}_{-14.2})$%
\\
$B^-\to \Delta^-\overline{n}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Delta^0\overline{\Sigma^{0}}$
& $0\pm30.9$%
& $\mp(59.3^{+23.6}_{-25.6})$%
& $\mp(79.5^{+16.2}_{-19.3})$%
\\
$B^-\to \Sigma^{*0}\overline{\Sigma^{+}}$
& $0\pm18.5$%
& $\pm(26.7^{+19.1}_{-16.0})$%
& $\pm(38.8^{+18.7}_{-14.0})$%
&$\overline B{}^0_s\to \Delta^-\overline{\Sigma^{-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$B^-\to \Sigma^{*-}\overline{\Sigma^{0}}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Sigma^{*0}\overline{\Xi^{0}}$
& $0\pm11.0$%
& $\mp(20.8^{+11.8}_{-9.5})$%
& $\mp(28.9^{+12.0}_{-8.6})$%
\\
$B^-\to \Xi^{*-}\overline{\Xi^{0}}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Sigma^{*-}\overline{\Xi^{-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$B^-\to \Sigma^{*-}\overline{\Lambda}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Delta^0\overline{\Lambda}$
& $0\pm2.3$%
& $\mp(4.4^{+2.6}_{-2.0})$%
& $\mp(6.2^{+2.7}_{-1.9})$%
\\
$\overline B{}^0\to \Delta^+\overline{p}$
& $0\pm19.4$%
& $\mp(36.0^{+19.3}_{-16.5})$%
& $\mp(49.3^{+18.3}_{-14.4})$%
& $\overline B{}^0\to \Sigma^{*+}\overline{\Sigma^{+}}$
& $0$%
& $0$%
& $0$%
\\
$\overline B{}^0\to \Delta^0\overline{n}$
& $0\pm11.1$%
& $\mp(20.8^{+12.0}_{-9.6})$%
& $\mp(28.9^{+12.1}_{-8.7})$%
& $\overline B{}^0\to \Sigma^{*0}\overline{\Sigma^{0}}$
& $0\pm18.5$%
& $\pm(26.7^{+19.1}_{-16.0})$%
& $\pm(38.8^{+18.7}_{-14.0})$%
\\
$\overline B{}^0\to \Xi^{*0}\overline{\Xi^{0}}$
& $0$%
& $0$%
& $0$%
& $\overline B{}^0\to \Sigma^{*-}\overline{\Sigma^{-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$\overline B{}^0\to \Xi^{*-}\overline{\Xi^{-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
& $\overline B{}^0\to \Sigma^{*0}\overline{\Lambda}$
& $0\pm19.4$%
& $\mp(36.0^{+19.3}_{-16.5})$%
& $\mp(49.3^{+18.3}_{-14.4})$%
\\
\end{tabular}
}
\end{ruledtabular}
\end{table}
\subsubsection{$CP$ asymmetries of $\overline B_q\to {{\cal D} \overline {\cal B}}$ decays}
In Table~\ref{tab:AcpDBDS=0} we give results of direct $CP$ asymmetries of $\Delta S=0$, $\overline B_q\to{{\cal D} \overline {\cal B}}$ modes.
The $CP$ asymmetries of two Group I modes,
$B^-\to \Delta^0\overline{p}$
and
$\overline B{}^0_s\to \Delta^0\overline{\Lambda}$
decays,
are opposite in sign; the former can reach $\pm39\%$
and is more sizable than the latter.
For Group II modes,
${\mathcal A}(\overline B{}^0\to \Delta^+\overline{p})$
and
${\mathcal A}(\overline B{}^0\to \Sigma^{*0}\overline{\Lambda})$
are similar reaching $\mp49\%$,
while
${\mathcal A}(\overline B{}^0_s\to \Delta^0\overline{\Sigma^{0}})$
can be as large as $\mp80\%$.
For Group III modes,
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*0}\overline{\Xi^{0}})$
and
${\mathcal A}(\overline B{}^0_s\to \Delta^+\overline{\Sigma^{+}})$
can reach $\mp29\%$ and $\mp49\%$, respectively,
and have the same sign,
while
${\mathcal A}(B^-\to \Sigma^{*0}\overline{\Sigma^{+}})$
can be as large as $\pm39\%$, but with a sign opposite to that of the other two.
From Eqs. (\ref{eq: DBBm, DS=0}), (\ref{eq: DBB0, DS=0}) and (\ref{eq: DBBs, DS=0}),
we can easily identify the following relations
\begin{eqnarray}
{\mathcal A}(B^-\to\Delta^0\overline{p})
&=&
{\mathcal A}(B^-\to\Sigma^{*0}\overline{\Sigma^{+}}),
\nonumber\\
{\mathcal A}(B^-\to\Delta^-\overline{n})
&=&
{\mathcal A}(B^-\to\Xi^{*-}\overline{\Xi^{0}})
={\mathcal A}(B^-\to\Sigma^{*-}\overline{\Sigma^{0}})
\nonumber\\
&=&{\mathcal A}(B^-\to\Sigma^{*-}\overline{\Lambda}),
\nonumber\\
{\mathcal A}(\bar B^0\to\Sigma^{*-}\overline{\Sigma^{-}})
& =&{\mathcal A}(\bar B^0\to\Xi^{*-}\overline{\Xi^{-}})
={\mathcal A}(\bar B^0_s\to\Delta^-\overline{\Sigma^{-}})
\nonumber\\
&=&{\mathcal A}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{-}}),
\nonumber\\
{\mathcal A}(\bar B^0\to\Sigma^{*+}\overline{\Sigma^{+}})
&=&{\mathcal A}(\bar B^0\to\Xi^{*0}\overline{\Xi^{0}})=0.
\end{eqnarray}
We see from Table~\ref{tab:AcpDBDS=0} that these relations are satisfied.
Note that these relations do not rely on the asymptotic limit,
but are subject to SU(3) breaking in the topological amplitudes.
Note that the central values of the $CP$ asymmetries of
$B^-\to \Delta^-\overline{n}$,
$B^-\to \Sigma^{*-}\overline{\Sigma^{0}}$,
$B^-\to \Xi^{*-}\overline{\Xi^{0}}$,
$B^-\to \Sigma^{*-}\overline{\Lambda}$,
$\overline B{}^0\to \Xi^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Delta^-\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0\to \Sigma^{*-}\overline{\Sigma^{-}}$,
$\overline B{}^0\to \Xi^{*0}\overline{\Xi^{0}}$ and
$\overline B{}^0\to \Sigma^{*+}\overline{\Sigma^{+}}$ decays
are vanishing,
as they do not have any tree ($T_{i{{\cal D} \overline {\cal B}}}$) contribution.
Since the
$\overline B{}^0\to \Xi^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Delta^-\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-}\overline{\Xi^{-}}$ and
$\overline B{}^0\to \Sigma^{*-}\overline{\Sigma^{-}}$ decays
are pure penguin modes, which only have $P_{{\cal D} \overline {\cal B}}$ and $P_{i EW{{\cal D} \overline {\cal B}}}$ contributions,
while
$\overline B{}^0\to \Xi^{*0}\overline{\Xi^{0}}$ and
$\overline B{}^0\to \Sigma^{*+}\overline{\Sigma^{+}}$ decays
are pure exchange ($E_{{\cal D} \overline {\cal B}}$) modes,
the $CP$ asymmetries of the $\Delta S=0$ pure penguin modes need not be vanishing [see Eq.~(\ref{eq: PuPc})], while the $CP$ asymmetries of the pure exchange modes are always vanishing.
\begin{table}[t!]
\caption{\label{tab:AcpDBDS=-1} Same as Table~\ref{tab:AcpBBDS=0}, but with $\Delta S=-1$, $\overline B_q\to{{\cal D} \overline {\cal B}}$ modes.}
\begin{ruledtabular}
\centering
{
\footnotesize
\begin{tabular}{lccclccc}
Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
& Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
\\
\hline $B^-\to \Sigma^{*0}\overline{p}$
& $0\pm19.4$%
& $\mp(28.9^{+20.6}_{-17.2})$%
& $\mp(45.0^{+21.5}_{-15.4})$%
& $\overline B{}^0\to \Sigma^{*+}\overline{p}$
& $0\pm14.8$%
& $\pm(27.1^{+15.2}_{-12.3})$%
& $\pm(35.3^{+14.2}_{-10.8})$%
\\
$B^-\to \Sigma^{*-}\overline{n}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Sigma^{*0}\overline{n}$
& $0\pm26.4$%
& $\pm(47.8^{+22.0}_{-20.8})$%
& $\pm(58.6^{+18.1}_{-16.1})$%
\\
$B^-\to \Xi^{*0}\overline{\Sigma^{+}}$
& $0\pm19.4$%
& $\mp(28.9^{+20.6}_{-17.2})$%
& $\mp(45.0^{+21.5}_{-15.4})$%
&$\overline B{}^0\to \Xi^{*0}\overline{\Sigma^{0}}$
& $0\pm19.2$%
& $\mp(28.9^{+20.4}_{-16.9})$%
& $\mp(45.0^{+21.3}_{-15.2})$%
\\
$B^-\to \Xi^{*-}\overline{\Sigma^{0}}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Xi^{*-}\overline{\Sigma^{-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$B^-\to \Omega^-\overline{\Xi^{0}}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$B^-\to \Xi^{*-}\overline{\Lambda}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Xi^{*0}\overline{\Lambda}$
& $0\pm14.8$%
& $\pm(27.1^{+15.2}_{-12.3})$%
& $\pm(35.3^{+14.2}_{-10.8})$%
\\
$\overline B{}^0_s\to \Delta^+\overline{p}$
& $0$%
& $0$%
& $0$%
& $\overline B{}^0_s\to \Sigma^{*+}\overline{\Sigma^{+}}$
& $0\pm15.1$%
& $\pm(27.1^{+15.5}_{-12.5})$%
& $\pm(35.3^{+14.5}_{-11.0})$%
\\
$\overline B{}^0_s\to \Delta^0\overline{n}$
& $0$%
& $0$%
& $0$%
& $\overline B{}^0_s\to \Sigma^{*0}\overline{\Sigma^{0}}$
& $0\pm7.9$%
& $\pm(14.2^{+8.8}_{-6.7})$%
& $\pm(19.2^{+9.0}_{-6.5})$%
\\
$\overline B{}^0_s\to \Xi^{*0}\overline{\Xi^{0}}$
& $0\pm26.7$%
& $\pm(47.8^{+22.1}_{-21.0})$%
& $\pm(58.6^{+18.2}_{-16.2})$%
& $\overline B{}^0_s\to \Sigma^{*-}\overline{\Sigma^{-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
& $\overline B{}^0_s\to \Sigma^{*0}\overline{\Lambda}$
& $0\pm41.6$%
& $\pm(74.3^{+20.0}_{-29.8})$%
& $\pm(84.7^{+12.8}_{-17.9})$%
\\
\end{tabular}
}
\end{ruledtabular}
\end{table}
In Table~\ref{tab:AcpDBDS=-1} we give results of direct $CP$ asymmetries of $\Delta S=-1$, $\overline B_q\to{{\cal D} \overline {\cal B}}$ modes.
In the Group I modes,
${\mathcal A}(\overline B{}^0\to \Xi^{*0}\overline{\Lambda})$
and
${\mathcal A}(\overline B{}^0\to \Sigma^{*+}\overline{p})$
are similar and sizable reaching $\pm35\%$,
while
${\mathcal A}(\overline B{}^0\to \Omega^-\overline{\Xi^{-}})$
is vanishing and will be discussed later.
For Group II modes,
${\mathcal A}(\overline B{}^0_s\to \Xi^{*0}\overline{\Xi^{0}})$,
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*+}\overline{\Sigma^{+}})$,
${\mathcal A}(B^-\to \Xi^{*0}\overline{\Sigma^{+}})$
and
${\mathcal A}(B^-\to \Sigma^{*0}\overline{p})$
are sizable reaching $\pm59\%$, $\pm35\%$, $\mp45\%$ and $\mp45\%$, respectively,
and the signs of the first two asymmetries are opposite to the last two,
while
${\mathcal A}(B^-\to \Omega^-\overline{\Xi^{0}})$,
${\mathcal A}(B^-\to \Xi^{*-}\overline{\Lambda})$
and
${\mathcal A}(\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}})$
are vanishingly small and will be discussed later.
For the Group III modes,
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*0}\overline{\Sigma^{0}})$
can be as large as $\pm 19\%$, but
${\mathcal A}(B^-\to \Xi^{*-}\overline{\Sigma^{0}})$
is highly suppressed.
From Eqs. (\ref{eq: BDBm, DS=-1}), (\ref{eq: BDB0, DS=-1}) and (\ref{eq: BDBs, DS=-1}),
we can easily identify the following relations
\begin{eqnarray}
{\mathcal A}(B^-\to\Sigma^{*0}\overline{p})
&=&{\mathcal A}(B^-\to\Xi^{*0}\overline{\Sigma^{+}}),
\nonumber\\
{\mathcal A}(B^-\to\Sigma^{*-}\overline{n})
&=&
{\mathcal A}(B^-\to\Xi^{*-}\overline{\Sigma^{0}})
=
{\mathcal A}(B^-\to\Omega^-\overline{\Xi^{0}})
\nonumber\\
&=&
{\mathcal A}(B^-\to\Xi^{*-}\overline{\Lambda}),
\nonumber\\
{\mathcal A}(\bar B^0_s\to\Sigma^{*-}\overline{\Sigma^{-}})
&=&{\mathcal A}(\bar B^0_s\to\Xi^{*-}\overline{\Xi^{-}})
={\mathcal A}(\bar B^0\to\Xi^{*-}\overline{\Sigma^{-}})
\nonumber\\
&=& {\mathcal A}(\bar B^0\to\Omega^-\overline{\Xi^{-}}),
\nonumber\\
{\mathcal A}(\bar B^0_s\to\Delta^+\overline{p})
&=&{\mathcal A}(\bar B^0_s\to\Delta^0\overline{n})=0.
\end{eqnarray}
We see from Table~\ref{tab:AcpDBDS=-1} that these relations are satisfied.
Note that these relations do not rely on the asymptotic limit, but are subject to SU(3) breaking in the topological amplitudes.
The central values of $CP$ asymmetries of
$B^-\to \Sigma^{*-}\overline{n}$,
$B^-\to \Xi^{*-}\overline{\Sigma^{0}}$,
$B^-\to \Omega^-\overline{\Xi^{0}}$,
$B^-\to \Xi^{*-}\overline{\Lambda}$,
$\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-}\overline{\Sigma^{-}}$,
$\overline B{}^0\to \Xi^{*-}\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Delta^+\overline{p}$ and
$\overline B{}^0_s\to \Delta^0\overline{n}$ decays
are vanishing,
as they do not have any tree ($T'_{i{{\cal D} \overline {\cal B}}}$) contribution.
Since
$\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-}\overline{\Sigma^{-}}$,
$\overline B{}^0\to \Xi^{*-}\overline{\Sigma^{-}}$ and
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$
decays
are pure penguin modes, which only have $P'_{{\cal D} \overline {\cal B}}$ and $P'_{i EW{{\cal D} \overline {\cal B}}}$ contributions,
while
$\overline B{}^0_s\to \Delta^+\overline{p}$ and
$\overline B{}^0_s\to \Delta^0\overline{n}$ decays
are pure exchange ($E'_{{\cal D} \overline {\cal B}}$) modes,
the $CP$ asymmetries of the $\Delta S=-1$ pure penguin modes are small [see Eq. (\ref{eq: A DeltaS penguin})], while the $CP$ asymmetries of the pure exchange modes are always vanishing.
These can be tests of the Standard Model.
In particular, the $\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$
and $\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$ decays, belonging to Group I and Group II, respectively,
have unsuppressed decay rates (see Table~\ref{tab:DBDS=-1}).
These modes should be searched for,
but the latter mode requires $B_s$ tagging to search for its $CP$ asymmetry.
The $U$-spin relations for decuplet-antioctet modes are given by~\cite{Chua:2013zga}
\begin{eqnarray}
\Delta_{CP}(B^-\to\Delta^0\overline{p})
&=&
2\Delta_{CP}(B^-\to\Sigma^{*0}\overline{\Sigma^{+}})
=-2 \Delta_{CP}(B^-\to\Sigma^{*0}\overline{p})
\nonumber\\
&=&-\Delta_{CP}(B^-\to\Xi^{*0}\overline{\Sigma^{+}}),
\nonumber\\
\Delta_{CP}(B^-\to\Delta^-\overline{n})
&=&
3 \Delta_{CP}(B^-\to\Xi^{*-}\overline{\Xi^{0}})
=6 \Delta_{CP}(B^-\to\Sigma^{*-}\overline{\Sigma^{0}})
\nonumber\\
&=&2 \Delta_{CP}(B^-\to\Sigma^{*-}\overline{\Lambda})
= -3\Delta_{CP}(B^-\to\Sigma^{*-}\overline{n})
\nonumber\\
&=&
-6 \Delta_{CP}(B^-\to\Xi^{*-}\overline{\Sigma^{0}})
=
-\Delta_{CP}(B^-\to\Omega^-\overline{\Xi^{0}})
\nonumber\\
&=&
-2 \Delta_{CP}(B^-\to\Xi^{*-}\overline{\Lambda}),
\nonumber\\
\Delta_{CP}(\bar B^0\to\Delta^+\overline{p})
&=&
-\Delta_{CP}(\bar B^0_s\to\Sigma^{*+}\overline{\Sigma^{+}}),
\nonumber\\
3\Delta_{CP}(\bar B^0\to\Sigma^{*-}\overline{\Sigma^{-}})
& =&3\Delta_{CP}(\bar B^0\to\Xi^{*-}\overline{\Xi^{-}})
= \Delta_{CP}(\bar B^0_s\to\Delta^-\overline{\Sigma^{-}})
\nonumber\\
&=&3\Delta_{CP}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{-}})
=-3\Delta_{CP}(\bar B^0_s\to\Sigma^{*-}\overline{\Sigma^{-}})
\nonumber\\
&=&-3\Delta_{CP}(\bar B^0_s\to\Xi^{*-}\overline{\Xi^{-}})
=-3 \Delta_{CP}(\bar B^0\to\Xi^{*-}\overline{\Sigma^{-}})
\nonumber\\
&=& -\Delta_{CP}(\bar B^0\to\Omega^-\overline{\Xi^{-}}),
\nonumber\\
\Delta_{CP}(\bar B^0\to\Sigma^{*+}\overline{\Sigma^{+}})
&=&\Delta_{CP}(\bar B^0\to\Xi^{*0}\overline{\Xi^{0}})
=-\Delta_{CP}(\bar B^0_s\to\Delta^+\overline{p})
\nonumber\\
&=&-\Delta_{CP}(\bar B^0_s\to\Delta^0\overline{n})=0,
\nonumber\\
\Delta_{CP}(\bar B^0\to\Delta^0\overline{n})
&=&
-\Delta_{CP}(\bar B^0_s\to\Xi^{*0}\overline{\Xi^{0}}),
\nonumber\\
\Delta_{CP}(\bar B^0_s\to \Delta^+\overline{\Sigma^{+}})
&=&
-\Delta_{CP}(\bar B^0\to \Sigma^{*+}\overline{p}),
\nonumber\\
\Delta_{CP}(\bar B^0_s\to\Sigma^{*0}\overline{\Xi^{0}})
&=&
-\Delta_{CP}(\bar B^0\to\Sigma^{*0}\overline{n}).
\end{eqnarray}
These relations are subject to corrections from SU(3) breaking in the $|p_{cm}|^2$ factors and the topological amplitudes.
They are roughly satisfied by the results shown in Tables~\ref{tab:AcpDBDS=0} and \ref{tab:AcpDBDS=-1} and the agreement can be improved when SU(3) breaking effects are taken into account.
One can make a quick but non-trivial check on the relative signs of these asymmetries
and see that they are indeed in agreement with the above relations.~\footnote{Note that for the second and fourth relations, the $CP$ asymmetries of these modes all have vanishing central values; their relative signs cannot be read out from Tables~\ref{tab:AcpDBDS=0} and \ref{tab:AcpDBDS=-1}.}
The fourth relation can be used to constrain the sizes of the $CP$ asymmetries of $\Delta S=-1$ pure penguin modes in a model-independent way. For example, we have
\begin{eqnarray}
|{\mathcal A}(\bar B^0\to\Omega^-\overline{\Xi^{-}})|
&=&
3|{\mathcal A}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{-}})|
\frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{-}})}{{\cal B}(\bar B^0\to\Omega^-\overline{\Xi^{-}})}
\nonumber\\
&\leq&3 \frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{-}})}{{\cal B}(\bar B^0\to\Omega^-\overline{\Xi^{-}})}
\simeq 6.1\%,
\end{eqnarray}
which is satisfied by the result shown in Table~\ref{tab:AcpDBDS=-1}. Note that the two modes in the above inequality have final states that can decay through cascades to all-charged particles. The inequality is generic and can be verified experimentally.
\begin{table}[t!]
\caption{\label{tab:AcpDDDS=0} Same as Table~\ref{tab:AcpBBDS=0}, but with $\Delta S=0$,
$\overline B_q\to{{\cal D} \overline {\cal D}}$ modes.
}
\begin{ruledtabular}
\centering
{
\footnotesize
\begin{tabular}{lccclccc}
Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
& Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
\\
\hline $B^-\to \Delta^+ \overline{\Delta^{++}}$
& $0\pm19.4$%
& $\mp(36.0^{+19.3}_{-16.5})$%
& $\mp(49.3^{+18.3}_{-14.4})$%
& $\overline B{}^0_s\to \Delta^{+} \overline{\Sigma^{*+}}$
& $0\pm19.0$%
& $\mp(36.0^{+18.8}_{-16.2})$%
& $\mp(49.3^{+17.9}_{-14.2})$%
\\
$B^-\to \Delta^0 \overline{\Delta^{+}}$
& $0\pm32.1$%
& $\mp(59.3^{+24.4}_{-26.5})$%
& $\mp(79.5^{+16.7}_{-19.9})$%
& $\overline B{}^0_s\to \Delta^{0} \overline{\Sigma^{*0}}$
& $0\pm30.9$%
& $\mp(59.3^{+23.6}_{-25.6})$%
& $\mp(79.5^{+16.2}_{-19.3})$%
\\
$B^-\to \Delta^- \overline{\Delta^{0}}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Delta^{-} \overline{\Sigma^{*-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$B^-\to \Sigma^{*0} \overline{\Sigma^{*+}}$
& $0\pm32.1$%
& $\mp(59.3^{+24.4}_{-26.5})$%
& $\mp(79.5^{+16.7}_{-19.9})$%
& $\overline B{}^0_s\to \Sigma^{*0} \overline{\Xi^{*0}}$
& $0\pm30.9$%
& $\mp(59.3^{+23.6}_{-25.6})$%
& $\mp(79.5^{+16.2}_{-19.3})$%
\\
$B^-\to \Sigma^{*-} \overline{\Sigma^{*0}}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$B^-\to \Xi^{*-} \overline{\Xi^{*0}}$
& $0\pm35.7$%
& $0\pm35.7$%
& $0\pm35.7$%
& $\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}}$
& $0\pm29.9$%
& $0\pm29.9$%
& $0\pm29.9$%
\\
$\overline B{}^0\to \Delta^{++} \overline{\Delta^{++}}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
& $\overline B{}^0\to \Sigma^{*+} \overline{\Sigma^{*+}}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
\\
$\overline B{}^0\to \Delta^{+} \overline{\Delta^{+}}$
& $0\pm22.9$%
& $\mp(36.0^{+22.4}_{-19.4})$%
& $\mp(49.3^{+21.1}_{-16.8})$%
& $\overline B{}^0\to \Sigma^{*0} \overline{\Sigma^{*0}}$
& $0\pm37.1$%
& $\mp(59.3^{+27.1}_{-31.2})$%
& $\mp(79.5^{+17.9}_{-23.4})$%
\\
$\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}}$
& $0\pm34.0$%
& $\mp(59.3^{+25.4}_{-28.4})$%
& $\mp(79.5^{+17.1}_{-21.4})$%
& $\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}}$
& $0\pm30.6$%
& $0\pm30.6$%
& $0\pm30.6$%
\\
$\overline B{}^0\to \Delta^{-} \overline{\Delta^{-}}$
& $0\pm30.6$%
& $0\pm30.6$%
& $0\pm30.6$%
& $\overline B{}^0\to \Xi^{*0} \overline{\Xi^{*0}}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
\\
$\overline B{}^0\to \Omega^{-} \overline{\Omega^{-}}$
& $0$%
& $0$%
& $0$%
& $\overline B{}^0\to \Xi^{*-} \overline{\Xi^{*-}}$
& $0\pm32.1$%
& $0\pm32.1$%
& $0\pm32.1$%
\\
\end{tabular}
}
\end{ruledtabular}
\end{table}
\subsubsection{$CP$ asymmetries of $\overline B_q\to {{\cal D} \overline {\cal D}}$ decays}
In Table~\ref{tab:AcpDDDS=0}, we give results of direct $CP$ asymmetries of $\Delta S=0$, $\overline B_q\to{{\cal D} \overline {\cal D}}$ modes.
For Group I modes,
${\mathcal A}(\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}})$
is sizable and can reach $\mp 80\%$,
but ${\mathcal A}(\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}})$
is vanishing and will be discussed later.
For Group II modes,
${\mathcal A}(B^-\to \Delta^0 \overline{\Delta^{+}})$,
${\mathcal A}(B^-\to \Sigma^{*0} \overline{\Sigma^{*+}})$,
${\mathcal A}(\overline B{}^0_s\to \Delta^{0} \overline{\Sigma^{*0}})$,
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*0} \overline{\Xi^{*0}})$
and
${\mathcal A}(\overline B{}^0\to \Sigma^{*0} \overline{\Sigma^{*0}})$
are similar and sizable, reaching $\mp80\%$,
${\mathcal A}(B^-\to \Delta^+ \overline{\Delta^{++}})$
and
${\mathcal A}(\overline B{}^0_s\to \Delta^{+} \overline{\Sigma^{*+}})$
are similar and sizable, reaching $\mp49\%$,
but
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}})$
and
${\mathcal A}(\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}})$
are vanishing and will be discussed later.
The Group III mode,
the
$\overline B{}^0\to \Delta^{+} \overline{\Delta^{+}}$
decay,
has sizable $CP$ asymmetry,
reaching $\mp49\%$.
Most of these $CP$ asymmetries basically share the sign of ${\mathcal A}(\overline B{}^0\to p\overline p)$.
From Eqs. (\ref{eq: DDBm, DS=0}), (\ref{eq: DDB0, DS=0}) and (\ref{eq: DDBs, DS=0}),
we can easily identify the following relations
\begin{eqnarray}
{\mathcal A}(B^-\to\Delta^-\overline{\Delta^{0}})
&=&{\mathcal A}(B^-\to\Sigma^{*-}\overline{\Sigma^{*0}})
={\mathcal A}(B^-\to\Xi^{*-}\overline{\Xi^{*0}}),
\nonumber\\
{\mathcal A}(B^-\to\Delta^0\overline{\Delta^{+}})
&=&{\mathcal A}(B^-\to\Sigma^{*0}\overline{\Sigma^{*+}}),
\nonumber\\
{\mathcal A}(\overline{B^0_s}\to\Delta^{0}\overline{\Sigma^{*0}})
&=&{\mathcal A}(\overline{B^0_s}\to \Sigma^{*0}\overline{\Xi^{*0}}),
\nonumber\\
{\mathcal A}(\overline{B^0_s}\to\Delta^{-}\overline{\Sigma^{*-}})
&=&{\mathcal A}(\overline{B^0_s}\to\Xi^{*-}\overline{\Omega^-})
={\mathcal A}(\overline{B^0_s}\to\Sigma^{*-}\overline{\Xi^{*-}}).
\end{eqnarray}
We see from Table~\ref{tab:AcpDDDS=0} that these relations are satisfied.
These relations do not rely on the asymptotic limit and receive no corrections from phase-space ratios, but are subject to SU(3) breaking in the topological amplitudes.
Note that the $CP$ asymmetries of several Group II modes, the $B^-\to\Delta^0\overline{\Delta^{+}}$, $B^-\to\Sigma^{*0}\overline{\Sigma^{*+}}$,
$\overline{B^0_s}\to\Delta^{0}\overline{\Sigma^{*0}}$ and $\overline{B^0_s}\to \Sigma^{*0}\overline{\Xi^{*0}}$ decays, are related.
The central values of the $CP$ asymmetries of
$B^-\to \Delta^- \overline{\Delta^{0}}$,
$B^-\to \Sigma^{*-} \overline{\Sigma^{*0}}$,
$B^-\to \Xi^{*-} \overline{\Xi^{*0}}$,
$\overline B{}^0\to \Delta^{-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Delta^{-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}}$,
and
$\overline B{}^0\to \Omega^{-} \overline{\Omega^{-}}$ decays
are vanishing,
as they do not have any tree ($T_{{{\cal D} \overline {\cal D}}}$) contribution.
The
$\overline B{}^0\to \Delta^{++} \overline{\Delta^{++}}$,
$\overline B{}^0\to \Sigma^{*+} \overline{\Sigma^{*+}}$ and
$\overline B{}^0\to \Xi^{*0} \overline{\Xi^{*0}}$ decays
are subleading modes,
which only have $E_{{\cal D} \overline {\cal D}}$ and $PA_{{\cal D} \overline {\cal D}}$ contributions.
Their $CP$ asymmetries can take any value.
The
$\overline B{}^0\to \Delta^{-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Delta^{-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Xi^{*-}}$ and
$\overline B{}^0_s\to \Xi^{*-} \overline{\Omega^{-}}$
decays
are $\Delta S=0$ pure penguin modes,
which only have $P_{{\cal D} \overline {\cal D}}$, $P_{EW{{\cal D} \overline {\cal D}}}$ and $PA_{{\cal D} \overline {\cal D}}$ contributions,
while the
$\overline B{}^0\to \Omega^{-} \overline{\Omega^{-}}$ decay
is a pure penguin annihilation ($PA_{{\cal D} \overline {\cal D}}$) mode.
The $CP$ asymmetries of the $\Delta S=0$ penguin modes need not be vanishing [see Eq.~(\ref{eq: PuPc})], while the $CP$ asymmetry of the pure penguin annihilation mode is always vanishing.
\begin{table}[t!]
\caption{\label{tab:AcpDDDS=-1} Same as Table~\ref{tab:AcpBBDS=0}, but with $\Delta S=-1$, $\overline B_q\to{{\cal D} \overline {\cal D}}$ modes.}
\begin{ruledtabular}
\centering
{
\footnotesize
\begin{tabular}{lccclccc}
Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
& Mode
& $\phi=0$
& $\phi=\pm\pi/4$
& $\phi=\pm\pi/2$
\\
\hline $B^-\to \Sigma^{*+} \overline{\Delta^{++}}$
& $0\pm15.1$%
& $\pm(27.1^{+15.5}_{-12.5})$%
& $\pm(35.3^{+14.5}_{-11.0})$%
& $\overline B{}^0\to \Sigma^{*+} \overline{\Delta^{+}}$
& $0\pm14.8$%
& $\pm(27.1^{+15.2}_{-12.3})$%
& $\pm(35.3^{+14.2}_{-10.8})$%
\\
$B^-\to \Sigma^{*0} \overline{\Delta^{+}}$
& $0\pm8.0$%
& $\pm(14.2^{+8.9}_{-6.8})$%
& $\pm(19.2^{+9.1}_{-6.6})$%
& $\overline B{}^0\to \Sigma^{*0} \overline{\Delta^{0}}$
& $0\pm7.7$%
& $\pm(14.2^{+8.6}_{-6.6})$%
& $\pm(19.2^{+8.8}_{-6.4})$%
\\
$B^-\to \Sigma^{*-} \overline{\Delta^{0}}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
&$\overline B{}^0\to \Sigma^{*-} \overline{\Delta^{-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$B^-\to \Xi^{*0} \overline{\Sigma^{*+}}$
& $0\pm8.0$%
& $\pm(14.2^{+8.9}_{-6.8})$%
& $\pm(19.2^{+9.1}_{-6.6})$%
& $\overline B{}^0\to \Xi^{*0} \overline{\Sigma^{*0}}$
& $0\pm7.7$%
& $\pm(14.2^{+8.6}_{-6.6})$%
& $\pm(19.2^{+8.8}_{-6.4})$%
\\
$B^-\to \Xi^{*-} \overline{\Sigma^{*0}}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$B^-\to \Omega^{-} \overline{\Xi^{*0}}$
& $0\pm1.8$%
& $0\pm1.8$%
& $0\pm1.8$%
& $\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
\\
$\overline B{}^0_s\to \Delta^{++} \overline{\Delta^{++}}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
& $\overline B{}^0_s\to \Sigma^{*+} \overline{\Sigma^{*+}}$
& $0\pm18.3$%
& $\pm(27.1^{+19.0}_{-14.4})$%
& $\pm(35.3^{+17.8}_{-12.3})$%
\\
$\overline B{}^0_s\to \Delta^{+} \overline{\Delta^{+}}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
& $\overline B{}^0_s\to \Sigma^{*0} \overline{\Sigma^{*0}}$
& $0\pm9.6$%
& $\pm(14.2^{+11.0}_{-7.7})$%
& $\pm(19.2^{+11.2}_{-7.2})$%
\\
$\overline B{}^0_s\to \Delta^{0} \overline{\Delta^{0}}$
& $-100\sim100$%
& $-100\sim100$%
& $-100\sim100$%
& $\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$
& $0\pm1.6$%
& $0\pm1.6$%
& $0\pm1.6$%
\\
$\overline B{}^0_s\to \Delta^{-} \overline{\Delta^{-}}$
& $0$%
& $0$%
& $0$%
& $\overline B{}^0_s\to \Xi^{*0} \overline{\Xi^{*0}}$
& $0\pm8.6$%
& $\pm(14.2^{+9.8}_{-7.2})$%
& $\pm(19.2^{+10.0}_{-6.8})$%
\\
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$
& $0\pm1.5$%
& $0\pm1.5$%
& $0\pm1.5$%
& $\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$
& $0\pm1.6$%
& $0\pm1.6$%
& $0\pm1.6$%
\\
\end{tabular}
}
\end{ruledtabular}
\end{table}
In Table~\ref{tab:AcpDDDS=-1} we give results of direct $CP$ asymmetries of $\Delta S=-1$, $\overline B_q\to{{\cal D} \overline {\cal D}}$ modes.
For Group I modes,
${\mathcal A}(B^-\to \Sigma^{*+} \overline{\Delta^{++}})$
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*+} \overline{\Sigma^{*+}})$
are similar and can be as large as $\pm35\%$,
${\mathcal A}(B^-\to \Xi^{*0} \overline{\Sigma^{*+}})$
and
${\mathcal A}(\overline B{}^0_s\to \Xi^{*0} \overline{\Xi^{*0}})$
are similar and can be as large as $\pm19\%$,
but
${\mathcal A}(\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}})$,
${\mathcal A}(B^-\to \Omega^{-} \overline{\Xi^{*0}})$,
${\mathcal A}(B^-\to \Sigma^{*-} \overline{\Delta^{0}})$
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}})$
are vanishingly small and will be discussed later.
For Group II modes,
${\mathcal A}(\overline B{}^0\to \Sigma^{*+} \overline{\Delta^{+}})$
can reach $\pm35\%$,
${\mathcal A}(\overline B{}^0\to \Sigma^{*0} \overline{\Delta^{0}})$
and
${\mathcal A}(\overline B{}^0\to \Xi^{*0} \overline{\Sigma^{*0}})$
are similar and can be $\pm19\%$,
but
${\mathcal A}(\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}})$
and
${\mathcal A}(\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}})$
are vanishing.
For Group III modes,
${\mathcal A}(B^-\to \Sigma^{*0} \overline{\Delta^{+}})$
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*0} \overline{\Sigma^{*0}})$
are similar and can be $\pm19\%$,
but
${\mathcal A}(\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}})$
and
${\mathcal A}(B^-\to \Xi^{*-} \overline{\Sigma^{*0}})$
are vanishingly small and will be discussed later.
From Eqs. (\ref{eq: DDBm, DS=-1}), (\ref{eq: DDB0, DS=-1}) and (\ref{eq: DDBs, DS=-1}),
we can easily identify the following relations
\begin{eqnarray}
{\mathcal A}(B^-\to\Sigma^{*-}\overline{\Delta^0})
&=&
{\mathcal A}(B^-\to\Omega^-\overline{\Xi^{*0}})
={\mathcal A}(B^-\to\Xi^{*-}\overline{\Sigma^{*0}}),
\nonumber\\
{\mathcal A}(B^-\to\Sigma^{*0}\overline{\Delta^+})
&=&{\mathcal A}(B^-\to\Xi^{*0}\overline{\Sigma^{*+}}),
\nonumber\\
{\mathcal A}(\bar B^0\to\Sigma^{*0}\overline{\Delta^0})
&=&{\mathcal A}(\bar B^0\to\Xi^{*0}\overline{\Sigma^{*0}}),
\nonumber\\
{\mathcal A}(\bar B^0\to\Sigma^{*-}\overline{\Delta^-})
&=& {\mathcal A}(\bar B^0\to\Xi^{*-}\overline{\Sigma^{*-}})
={\mathcal A}(\bar B^0\to\Omega^-\overline{\Xi^{*-}}).
\end{eqnarray}
We see from Table~\ref{tab:AcpDDDS=-1} that these relations are satisfied.
Note that these relations do not rely on the asymptotic limit, but they are subject to SU(3) breaking in the topological amplitudes.
The central values of the $CP$ asymmetries of
$B^-\to \Sigma^{*-} \overline{\Delta^{0}}$,
$B^-\to \Xi^{*-} \overline{\Sigma^{*0}}$,
$B^-\to \Omega^{-} \overline{\Xi^{*0}}$,
$\overline B{}^0\to \Sigma^{*-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$,
$\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$
and
$\overline B{}^0_s\to \Delta^{-} \overline{\Delta^{-}}$ decays
are vanishing,
as they do not have any tree ($T'_{{{\cal D} \overline {\cal D}}}$) contribution.
The
$\overline B{}^0_s\to \Delta^{++} \overline{\Delta^{++}}$,
$\overline B{}^0_s\to \Delta^{+} \overline{\Delta^{+}}$ and
$\overline B{}^0_s\to \Delta^{0} \overline{\Delta^{0}}$ decays
are subleading modes, which have $E'_{{\cal D} \overline {\cal D}}$ and $P'_{{\cal D} \overline {\cal D}}$ contributions.
Their $CP$ asymmetries can take any value.
Since the
$\overline B{}^0\to \Sigma^{*-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$ and
$\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$
decays
are $\Delta S=-1$ pure penguin modes, which only have $P'_{{\cal D} \overline {\cal D}}$, $P'_{EW{{\cal D} \overline {\cal D}}}$ and $PA'_{{\cal D} \overline {\cal D}}$ contributions,
and the
$\overline B{}^0_s\to \Delta^{-} \overline{\Delta^{-}}$ decay
is a pure penguin annihilation ($PA'_{{\cal D} \overline {\cal D}}$) mode,
their $CP$ asymmetries are small [see Eq.~(\ref{eq: A DeltaS penguin})] or always vanishing.
These can be tests of the Standard Model.
Note that some of these modes have relatively good detectability.
These include
two Group I modes,
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$
and
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$
decays,
two Group II modes,
$\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$
and
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$
decays,
and
a Group III mode,
the $\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$
decay, all of which have rates of order $10^{-7}$ (see Table~\ref{tab:DDDS=-1}).
It will be interesting to search for these modes and use their $CP$ asymmetries as null tests of the Standard Model.
The $U$-spin relations for decuplet-antidecuplet modes are given by~\cite{Chua:2013zga}~\footnote{A typo in the fifth relation is corrected.}
\begin{eqnarray}
2\Delta_{CP}(B^-\to\Delta^-\overline{\Delta^{0}})
&=&3\Delta_{CP}(B^-\to\Sigma^{*-}\overline{\Sigma^{*0}})
=\Delta_{CP}(B^-\to\Xi^{*-}\overline{\Xi^{*0}})
\nonumber\\
&=&-6 \Delta_{CP}(B^-\to\Sigma^{*-}\overline{\Delta^0})
= -2 \Delta_{CP}(B^-\to\Omega^-\overline{\Xi^{*0}})
\nonumber\\
&=&-3 \Delta_{CP}(B^-\to\Xi^{*-}\overline{\Sigma^{*0}}),
\nonumber\\
\Delta_{CP}(B^-\to\Delta^0\overline{\Delta^{+}})
&=&2\Delta_{CP}(B^-\to\Sigma^{*0}\overline{\Sigma^{*+}})
=-2 \Delta_{CP}(B^-\to\Sigma^{*0}\overline{\Delta^+})
\nonumber\\
&=&-\Delta_{CP}(B^-\to\Xi^{*0}\overline{\Sigma^{*+}}),
\nonumber\\
\Delta_{CP}(B^-\to\Delta^+\overline{\Delta^{++}})
&=&-\Delta_{CP}(B^-\to \Sigma^{*+}\overline{\Delta^{++}}),
\nonumber\\
\Delta_{CP}(\overline{B^0_s}\to\Delta^{0}\overline{\Sigma^{*0}})
&=&\Delta_{CP}(\overline{B^0_s}\to \Sigma^{*0}\overline{\Xi^{*0}})
=- \Delta_{CP}(\bar B^0\to\Sigma^{*0}\overline{\Delta^0})
\nonumber\\
&=&-\Delta_{CP}(\bar B^0\to\Xi^{*0}\overline{\Sigma^{*0}}),
\nonumber\\
4 \Delta_{CP}(\overline{B^0_s}\to\Delta^{-}\overline{\Sigma^{*-}})
&=&4 \Delta_{CP}(\overline{B^0_s}\to\Xi^{*-}\overline{\Omega^-})
=3 \Delta_{CP}(\overline{B^0_s}\to\Sigma^{*-}\overline{\Xi^{*-}})
\nonumber\\
&=& -4 \Delta_{CP}(\bar B^0\to\Sigma^{*-}\overline{\Delta^-})
=-3\Delta_{CP} (\bar B^0\to\Xi^{*-}\overline{\Sigma^{*-}})
\nonumber\\
&=& -4 \Delta_{CP}(\bar B^0\to\Omega^-\overline{\Xi^{*-}}),
\nonumber\\
\Delta_{CP}(\overline{B^0_s}\to\Delta^+\overline{\Sigma^{*+}})
&=&-\Delta_{CP}(\bar B^0\to \Sigma^{*+}\overline{\Delta^+}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to \Xi^{*0}\overline{\Xi^{*0}})
&=&-\Delta_{CP}(\bar B^0_s\to\Delta^0\overline{\Delta^0}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Xi^{*-}\overline{\Xi^{*-}})
&=& -\Delta_{CP}(\bar B^0_s\to\Sigma^{*-}\overline{\Sigma^{*-}}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Sigma^{*0}\overline{\Sigma^{*0}})
&=&-\Delta_{CP}(\bar B^0_s\to\Sigma^{*0}\overline{\Sigma^{*0}}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Omega^-\overline{\Omega^-})
&=&-\Delta_{CP}(\bar B^0_s\to\Delta^-\overline{\Delta^-})=0,
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Delta^{++}\overline{\Delta^{++}})
&=& -\Delta_{CP}(\bar B^0_s\to \Delta^{++}\overline{\Delta^{++}}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Sigma^{*+}\overline{\Sigma^{*+}})
&=&-\Delta_{CP}(\bar B^0_s\to\Delta^+\overline{\Delta^+}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Delta^{-}\overline{\Delta^{-}})
&=&-\Delta_{CP}(\bar B^0_s\to\Omega^{-}\overline{\Omega^{-}}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Sigma^{*-}\overline{\Sigma^{*-}})
&=&-\Delta_{CP}(\bar B^0_s\to\Xi^{*-}\overline{\Xi^{*-}}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Delta^+\overline{\Delta^{+}})
&=&-\Delta_{CP}(\bar B^0_s\to\Sigma^{*+}\overline{\Sigma^{*+}}),
\nonumber\\
\Delta_{CP}(\overline{B^0}\to\Delta^0\overline{\Delta^{0}})
&=&-\Delta_{CP}(\bar B^0_s\to\Xi^{*0}\overline{\Xi^{*0}}).
\label{eq: Uspin DD}
\end{eqnarray}
These relations are roughly satisfied by the results shown in Tables~\ref{tab:AcpDDDS=0} and \ref{tab:AcpDDDS=-1}
and the agreement can be improved when SU(3) breaking effects are taken into account.
For example, one can make a quick but non-trivial check on the relative signs of these asymmetries
and see that they indeed agree with the above relations.\footnote{Note that for the first relation, the $CP$ asymmetries of these modes all have vanishing central values, so their relative signs cannot be read from Tables~\ref{tab:AcpDDDS=0} and \ref{tab:AcpDDDS=-1}.
The relative signs of $\Delta_{CP}(\overline{B^0}\to \Xi^{*0}\overline{\Xi^{*0}})$ and $\Delta_{CP}(\bar B^0_s\to\Delta^0\overline{\Delta^0})$,
of $\Delta_{CP}(\overline{B^0}\to\Delta^{++}\overline{\Delta^{++}})$
and $\Delta_{CP}(\bar B^0_s\to \Delta^{++}\overline{\Delta^{++}})$,
and of $\Delta_{CP}(\overline{B^0}\to\Sigma^{*+}\overline{\Sigma^{*+}})$
and $\Delta_{CP}(\bar B^0_s\to\Delta^+\overline{\Delta^+})$
cannot be read from Tables~\ref{tab:AcpDDDS=0} and \ref{tab:AcpDDDS=-1} either. The signs of the modes in the fifth, eighth, thirteenth and fourteenth relations also cannot be read from the tables.}
Note that several vanishing $CP$ asymmetries as discussed previously are related to each other.
For example, from the last relation of the above equation, we have
\begin{eqnarray}
{\cal A}(\bar B^0\to\Delta^0\overline{\Delta^0})
&=&-{\cal A}(\bar B^0_s\to\Xi^{*0}\overline{\Xi^{*0}})\frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Xi^{*0}\overline{\Xi^{*0}})}{{\cal B}(\bar B^0\to\Delta^0\overline{\Delta^{0}})}
\nonumber\\
&\simeq&-(3.7) {\cal A}(\bar B^0_s\to\Xi^{*0}\overline{\Xi^{*0}}).
\label{eq: Usipin DD1}
\end{eqnarray}
The results in Tables~\ref{tab:AcpDDDS=0} and \ref{tab:AcpDDDS=-1} agree with it.
Note that these two modes are both Group I modes. We will return to this later.
The fifth, eighth, thirteenth and fourteenth relations can be used to constrain the sizes of the $CP$ asymmetries of $\Delta S=-1$ pure penguin modes in a model-independent way. For example, using the fifth relation we have
\begin{eqnarray}
|{\cal A}(\bar B^0\to\Omega^-\overline{\Xi^{*-}})|
&=&\frac{3}{4}|{\cal A}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{*-}})|\frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{*-}})}{{\cal B}(\bar B^0\to\Omega^-\overline{\Xi^{*-}})}
\nonumber\\
&\leq&\frac{3}{4}\frac{\tau(B^0)}{\tau(B^0_s)}
\frac{{\cal B}(\bar B^0_s\to\Sigma^{*-}\overline{\Xi^{*-}})}{{\cal B}(\bar B^0\to\Omega^-\overline{\Xi^{*-}})}
\simeq 5.5\%,
\end{eqnarray}
which is satisfied by the result shown in Table \ref{tab:AcpDDDS=-1}. Note that the above two modes are both Group II modes and one does not need $B_s$ tagging to test the inequality experimentally.
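As a simple illustration of how such relations are used numerically, the following schematic snippet (not part of this analysis; all numerical inputs are placeholders to be replaced by the rates of Table~\ref{tab:DDDS=-1} and the measured lifetimes) evaluates the scaling factor of Eq.~(\ref{eq: Usipin DD1}) and the bound above.
\begin{verbatim}
# Schematic numerical check of the U-spin consequences above.
# All inputs are placeholders; the rates of Table tab:DDDS=-1 and the
# measured B0/Bs lifetimes would reproduce the factor of about 3.7 in
# Eq. (eq: Usipin DD1) and the bound of about 5.5% quoted above.

def uspin_scaling(tau_b0_over_bs, br_partner, br_mode):
    """Factor relating two U-spin-partner CP asymmetries,
    A(mode) = -(tau_B0/tau_Bs) * BR(partner)/BR(mode) * A(partner)."""
    return tau_b0_over_bs * br_partner / br_mode

def uspin_penguin_bound(tau_b0_over_bs, br_bs_mode, br_b0_mode):
    """Bound |A(B0 -> Omega- anti-Xi*-)| <= (3/4) (tau_B0/tau_Bs)
    BR(Bs -> Sigma*- anti-Xi*-) / BR(B0 -> Omega- anti-Xi*-),
    which follows from |A(Bs -> Sigma*- anti-Xi*-)| <= 1."""
    return 0.75 * tau_b0_over_bs * br_bs_mode / br_b0_mode

if __name__ == "__main__":
    # purely illustrative placeholder numbers
    print(uspin_scaling(1.0, 2.0e-7, 1.0e-7))
    print(uspin_penguin_bound(1.0, 1.0e-8, 1.0e-7))
\end{verbatim}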
\begin{figure}[h!]
\centering
\subfigure[]{
\includegraphics[width=0.315\textwidth]{B0barPbarP.pdf}
}\hspace{0.7cm}
\subfigure[]{
\includegraphics[width=0.315\textwidth]{BmLambdabarP.pdf}
}\\\subfigure[]{
\includegraphics[width=0.315\textwidth]{BsbarLambdabarLambda.pdf}
}\hspace{0.7cm}
\subfigure[]{
\includegraphics[width=0.315\textwidth]{BmpDeltappbar.pdf}
}
\\\subfigure[]{
\includegraphics[width=0.315\textwidth]{BspSigmastpbar.pdf}
}\hspace{0.7cm}
\subfigure[]{
\includegraphics[width=0.315\textwidth]{BmDelta0pbar.pdf}
}
\\\subfigure[]{
\includegraphics[width=0.315\textwidth]{BsDelta0Lambdabar.pdf}
}
\caption{Direct $CP$ asymmetries of some interesting modes are plotted with respect to the penguin-tree relative strong phase $\phi$.
The solid lines are obtained from tree-penguin interference using the asymptotic relation.
The bands bounded by the dashed (dotted) lines include corrections to the asymptotic relation without (with) contributions from subleading terms.
} \label{fig:ACPI}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[]{
\includegraphics[width=0.315\textwidth]{BoXist0Lambdabar.pdf}
}\hspace{0.7cm}
\subfigure[]{
\includegraphics[width=0.315\textwidth]{B0Sigmastppbar.pdf}
}\\\subfigure[]{
\includegraphics[width=0.315\textwidth]{B0Delta0Delta0bar.pdf}
}\hspace{0.7cm}
\subfigure[]{
\includegraphics[width=0.315\textwidth]{BmSigmastpDeltappbar.pdf}
}
\\\subfigure[]{
\includegraphics[width=0.315\textwidth]{BsSigmastpSigmastpbar.pdf}
}\hspace{0.7cm}
\subfigure[]{
\includegraphics[width=0.315\textwidth]{BmXist0Sigmastpbar.pdf}
}
\\\subfigure[]{
\includegraphics[width=0.315\textwidth]{BsXist0Xist0bar.pdf}
}
\caption{Direct $CP$ asymmetries of some interesting modes are plotted with respect to the penguin-tree relative strong phase $\phi$.
The solid lines are obtained from tree-penguin interference using the asymptotic relation.
The bands bounded by the dashed (dotted) lines include corrections to the asymptotic relation without (with) contributions from subleading terms.
} \label{fig:ACPII}
\end{figure}
In Tables~\ref{tab:AcpBBDS=0} to \ref{tab:AcpDDDS=-1}, we show results on $CP$ asymmetries for some specific values of the penguin-tree relative strong phase $\phi$ (0, $\pm\pi/4$ and $\pm\pi/2$).
It will be useful to plot the $CP$ asymmetries of some interesting modes in the full range of $\phi$.
In Fig.~\ref{fig:ACPI}, we plot the $CP$ asymmetries of
$\overline B{}^0\to p\overline{p}$,
$B^-\to \Lambda\overline{p}$,
$\overline B{}^0_s\to \Lambda\overline{\Lambda}$,
$B^-\to p\overline{\Delta^{++}}$,
$\overline B{}^0_s\to p\overline{\Sigma^{*+}}$,
$B^-\to \Delta^0\overline{p}$
and
$\overline B{}^0_s\to \Delta^0\overline{\Lambda}$
decays.
In Fig.~\ref{fig:ACPII}, we plot the $CP$ asymmetries of
$\overline B{}^0\to \Xi^{*0}\overline{\Lambda}$,
$\overline B{}^0\to \Sigma^{*+}\overline{p}$,
$\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}}$,
$B^-\to \Sigma^{*+} \overline{\Delta^{++}}$,
$\overline B{}^0_s\to \Sigma^{*+} \overline{\Sigma^{*+}}$,
$B^-\to \Xi^{*0} \overline{\Sigma^{*+}}$
and
$\overline B{}^0_s\to \Xi^{*0} \overline{\Xi^{*0}}$
decays.
These are direct $CP$ asymmetries of several Group I modes, which have unsuppressed rates and can cascade into all-charged final states.
The solid lines are obtained from tree-penguin interference using the asymptotic relation.
The bands bounded by the dashed (dotted) lines in the figures include corrections to the asymptotic relation without (with) contributions from subleading terms.
From the figures we see that corrections to the asymptotic relation dominate the uncertainties.
Note that the remaining Group I modes, including
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$,
$B^-\to \Xi^{-}\overline{\Lambda}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$,
$\overline B{}^0\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$,
$B^-\to \Omega^{-} \overline{\Xi^{*0}}$,
$B^-\to \Sigma^{*-} \overline{\Delta^{0}}$
and
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$
decays
do not depend on $\phi$; hence, their $CP$ asymmetries are not plotted.
In particular, as noted before,
${\mathcal A}(\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}})$,
${\mathcal A}(\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}})$,
${\mathcal A}(\overline B{}^0\to \Omega^-\overline{\Xi^{-}})$,
${\mathcal A}(\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}})$,
and
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}})$
are small and can be used to test the Standard Model,
especially since these modes have good detectability in rates.
We now return to Figs.~\ref{fig:ACPI} and \ref{fig:ACPII}.
Most of these asymmetries are sizable.
We can classify these modes, according to their dependence on $\phi$, into two groups.
The
${\mathcal A}(\overline B{}^0\to p\overline{p})$,
${\mathcal A}(B^-\to p\overline{\Delta^{++}})$,
${\mathcal A}(\overline B{}^0_s\to p\overline{\Sigma^{*+}})$,
${\mathcal A}(\overline B{}^0_s\to \Delta^0\overline{\Lambda})$
and
${\mathcal A}(\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}})$
have similar behavior, while
${\mathcal A}(B^-\to \Lambda\overline{p})$,
${\mathcal A}(\overline B{}^0_s\to \Lambda\overline{\Lambda})$,
${\mathcal A}(B^-\to \Delta^0\overline{p})$,
${\mathcal A}(\overline B{}^0\to \Xi^{*0}\overline{\Lambda})$,
${\mathcal A}(\overline B{}^0\to \Sigma^{*+}\overline{p})$,
${\mathcal A}(B^-\to \Sigma^{*+} \overline{\Delta^{++}})$,
${\mathcal A}(\overline B{}^0_s\to \Sigma^{*+} \overline{\Sigma^{*+}})$,
${\mathcal A}(B^-\to \Xi^{*0} \overline{\Sigma^{*+}})$
and
${\mathcal A}(\overline B{}^0_s\to \Xi^{*0} \overline{\Xi^{*0}})$
have similar behavior, but different from the first group.
The asymmetries in these two groups basically have opposite signs for $\phi$ away from 0, $\pi$, $2\pi$.
Note that, through $U$-spin,
${\mathcal A}(\overline{B^0}\to\Delta^0\overline{\Delta^{0}})$ and
${\mathcal A}(\bar B^0_s\to\Xi^{*0}\overline{\Xi^{*0}})$ are related by Eq.~(\ref{eq: Uspin DD}).
We see from Fig.~\ref{fig:ACPII} (c) and (g) that they indeed respect the relation.
It will be interesting to measure the $CP$ asymmetries of these Group I modes and compare them to the predictions plotted in Figs.~\ref{fig:ACPI} and \ref{fig:ACPII}. Measuring these asymmetries can provide useful information on the decay amplitudes.
\section{Conclusion}
With the experimental evidence for $\overline B {}^0\to p \overline{p}$ and $B^-\to\Lambda\bar p$ decays,
it is now possible to extract both tree and penguin amplitudes of charmless two-body baryonic decays
for the first time.
The extracted penguin-tree ratio agrees with the expectation.
Predictions on all $\overline B_q\to {{\cal B} \overline {\cal B}}$, ${{\cal B} \overline {\cal D}}$, ${{\cal D} \overline {\cal B}}$ and ${{\cal D} \overline {\cal D}}$ decay rates are given.
It is non-trivial that the results do not violate any existing experimental upper limit.
From the results, it is understandable why the $\overline B {}^0\to p \overline{p}$ and $B^-\to\Lambda\bar p$ modes are the first two modes with experimental evidence.
Relations on rates are verified.
There are 23 modes that have relatively sizable rates and can cascade into all-charged final states,
including
$\overline B{}^0\to p\bar p$,
$B^-\to \Lambda\bar p$, $\Xi^{-}\overline{\Lambda}$,
$\overline B{}^0_s\to \Lambda\overline{\Lambda}$,
$\Xi^-\overline{\Xi^-}$;
$B^-\to p\overline{\Delta^{++}}$,
$\overline B{}^0_s\to p\overline{\Sigma^{*+}}$,
$\overline B{}^0\to \Xi^-\overline{\Sigma^{*-}}$;
$B^-\to \Delta^0\overline p$,
$\overline B{}^0_s\to\Delta^0\overline\Lambda$,
$\overline B{}^0\to \Sigma^{*-}\overline p$,
$\Omega^-\overline{\Xi^-}$, $\Xi^{*0}\overline\Lambda$;
$\overline{B}{}^0\to \Delta^0\overline{\Delta^0}$,
$\overline{B}{}^0\to \Sigma^{*-}\overline{\Sigma^{*-}}$,
$B^-\to\Sigma^{*+}\overline {\Delta^{++}}$,
$\Xi^{*0}\overline{\Sigma^{*+}}$,
$\Omega^-\overline{\Xi^{*0}}$,
$\Sigma^{*-} \overline{\Delta^{0}}$,
$\overline B{}^0_s\to\Omega^-\overline{\Omega^-}$,
$\Xi^{*0}\overline{\Xi^{*0}}$,
$\Sigma^{*+} \overline{\Sigma^{*+}}$
and
$\Sigma^{*-} \overline{\Sigma^{*-}}$
decays.
With $\pi^0$ and $\gamma$, another 38 modes can be searched for, while with $\pi^0\pi^0$, $\pi^0\gamma$ and $\gamma\gamma$, 38 more modes can be searched for in the future.
In particular, we note that the predicted $B^-\to p\overline{\Delta^{++}}$ rate is close to the experimental bound,
which has not been updated in the last ten years~\cite{Wei:2007fg}.
The bounds on $B^-\to \Delta^0\overline{p}$ and $\overline B{}^0\to \Sigma^{*+}\overline{p}$ rates have not been updated in the last ten years~\cite{Wei:2007fg, Wang:2007as} and the bound on $\overline B{}^0\to \Delta^{0} \overline{\Delta^{0}}$ rate has not been updated in about three decades~\cite{Bortoletto:1989mu},
while their rates are predicted to be of the order of $10^{-8}$.
Also note that the $\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$ rate is predicted to be the largest.
The analysis of this work can be improved systematically when more modes are measured.
Direct $CP$ asymmetries of all $\overline B_q\to {\cal B} \overline {\cal B}$,
${\cal B} \overline {\cal D}$,
${\cal D} \overline {\cal B}$ and
${\cal D} \overline {\cal D}$ modes are explored.
Relations on $CP$ asymmetries are verified.
Results of $CP$ asymmetries for modes with relatively good detectability in rates are highlighted.
In particular, the direct $CP$ asymmetry of $\overline B {}^0\to p \overline{p}$ decay can be as large as $\pm 50\%$.
Some of the $CP$ asymmetries are small or vanishing.
For $\overline B\to {{\cal B} \overline {\cal B}}$, $\Delta S=-1$ decays,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{-}}$
and
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$
decays are pure penguin modes.
For $\overline B\to {{\cal B} \overline {\cal D}}$, $\Delta S=0$ decays,
$\overline B{}^0\to \Sigma^{+}\overline{\Sigma^{*+}}$ and
$\overline B{}^0\to \Xi^{0}\overline{\Xi^{*0}}$ decays
are pure exchange modes.
For $\overline B\to {{\cal B} \overline {\cal D}}$, $\Delta S=-1$ decays,
$\overline B{}^0\to \Sigma^{-}\overline{\Delta^-}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Sigma^{-}\overline{\Sigma^{*-}}$ and
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$
decays are pure penguin modes
and
$\overline B{}^0_s\to p\overline{\Delta^+}$ and
$\overline B{}^0_s\to n\overline{\Delta^0}$
decays are pure exchange modes.
For $\overline B\to {{\cal D} \overline {\cal B}}$, $\Delta S=0$ decays,
$\overline B{}^0\to \Xi^{*0}\overline{\Xi^{0}}$ and
$\overline B{}^0\to \Sigma^{*+}\overline{\Sigma^{+}}$ decays
are pure exchange modes.
For $\overline B\to {{\cal D} \overline {\cal B}}$, $\Delta S=-1$ decays,
$\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-}\overline{\Sigma^{-}}$,
$\overline B{}^0\to \Xi^{*-}\overline{\Sigma^{-}}$ and
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$
decays
are pure penguin modes,
while
$\overline B{}^0_s\to \Delta^+\overline{p}$ and
$\overline B{}^0_s\to \Delta^0\overline{n}$ decays
are pure exchange modes.
For $\overline B\to {{\cal D} \overline {\cal D}}$, $\Delta S=0$ decays,
$\overline B{}^0\to \Omega^{-} \overline{\Omega^{-}}$ decay
is a pure penguin annihilation mode.
For $\overline B\to {{\cal D} \overline {\cal D}}$, $\Delta S=-1$ decays,
$\overline B{}^0\to \Sigma^{*-} \overline{\Delta^{-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$,
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$ and
$\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$
decays
are pure penguin modes,
and the
$\overline B{}^0_s\to \Delta^{-} \overline{\Delta^{-}}$ decay
is a pure penguin annihilation mode.
The $CP$ asymmetries of the above modes are either small, as follows from the hierarchy of the CKM factors, or vanishing.
They can be added to the list of the tests of the Standard Model.
Note that some of these modes have relatively good detectability in rates.
These include 5 Group I modes,
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{-}}$,
$\overline B{}^0\to \Xi^{-}\overline{\Sigma^{*-}}$,
$\overline B{}^0\to \Omega^-\overline{\Xi^{-}}$,
$\overline B{}^0_s\to \Sigma^{*-} \overline{\Sigma^{*-}}$
and
$\overline B{}^0_s\to \Omega^{-} \overline{\Omega^{-}}$
decays,
4 Group II modes,
$\overline B{}^0_s\to \Xi^{-}\overline{\Xi^{*-}}$,
$\overline B{}^0_s\to \Xi^{*-}\overline{\Xi^{-}}$,
$\overline B{}^0\to \Xi^{*-} \overline{\Sigma^{*-}}$
and
$\overline B{}^0\to \Omega^{-} \overline{\Xi^{*-}}$
decays,
and a Group III mode,
the $\overline B{}^0_s\to \Xi^{*-} \overline{\Xi^{*-}}$
decay,
but some require $B_s$ tagging to search for their $CP$ asymmetries.
It will be interesting to search for these modes and use their $CP$ asymmetries to search for New Physics.
Furthermore, since these modes are rare decay modes and all of them are pure penguin modes,
they are expected to be sensitive to New Physics contributions.
\section{Acknowledgments}
The author thanks Paoti Chang and Eduardo Rodrigues for discussions and useful comments.
This research was supported in part by the Ministry of Science and Technology of R.O.C. under Grant
No. 103-2112-M-033-002-MY3.
\section{Introduction}
\label{sec:introduction}
\par
With the discovery of the Higgs boson (\PH)~\cite{higgs:atlas,higgs:cms,long}
it is possible to probe new physics by measuring its coupling to
other particles. Of particular interest is the flavor-changing neutral
current (FCNC) decay of the top quark to the Higgs boson. The
investigation of this process at the CERN LHC is motivated by the large
\ttbar production cross section and the variety of possible decay modes
of the Higgs boson.
The next-to-next-to-leading-order $\ttbar$ production cross section at a
center-of-mass energy of 8\TeV and with a top quark mass ($m_{\rm t}$)
of 173.5\GeV~\cite{PDG} is 252\unit{pb}~\cite{PhysRevLett.110.252004}. The standard
model~(SM) predicts that the top quark decays with a branching fraction
of nearly 100\% into a bottom quark and a \PW boson ($\rm t \to
Wb$).
In the SM, FCNC decays are absent at leading-order and occur only via
loop-level processes that are additionally suppressed by the
Glashow-Iliopoulos-Maiani mechanism \cite{Eilam:1990zc,sm-pred-thc2}.
Because the leading-order decay rate of $\rm t \to Wb$ is also quite
large, the SM branching fraction $\mathcal{B}\left({\rm t \to
Hq}\right)$, where q is an up or charm quark, is predicted to be of
$\mathcal{O}(10^{-15})$~\cite{Eilam:1990zc,sm-pred-thc2,sm-pred-thc3},
far below the experimental sensitivity at the LHC. However, some
extensions of the SM predict an enhanced $\rm t \to Hq$ decay rate.
Thus, observation of a large branching fraction would be clear
evidence for new physics. The largest enhancement in $\mathcal{B}({\rm
t \to Hq})$ is predicted in models that incorporate two Higgs doublets,
where the branching fraction can be of
$\mathcal{O}(10^{-3})$~\cite{sm-pred-thc3}.
Previous searches for FCNC in top quark decays mediated by a Higgs boson
have been performed at the LHC by ATLAS~\cite{Aad:2014dya,Aad:2015pja} and
CMS~\cite{Khachatryan:2014jya}. The CMS search considered both
multilepton and diphoton final states and the observed upper limit of
$\mathcal{B}({\rm t \to Hc})$ at the 95\% confidence level (CL) was
determined to be 0.56\%. The recent ATLAS result included final states
where the Higgs boson decays to b quark pairs, and measured the observed
upper limits of $\mathcal{B}({\rm t \to Hc})$ and $\mathcal{B}({\rm t
\to Hu})$ at the 95\% CL to be 0.46\% and 0.45\%, respectively.
The analysis presented here uses a data sample recorded with the CMS
detector and corresponding to an integrated luminosity of 19.7\fbinv of
\Pp\Pp\, collisions at $\sqrt{s} = 8\TeV$. The data were recorded in
2012 with instantaneous luminosities of 5--8\,$\times10^{33}~\rm
cm^{-2}s^{-1}$ and an average of 21 interactions per bunch crossing. The
inelastic collisions that occur in addition to the hard-scattering
process in the same beam crossing produce mainly low-\pt particles that
form the so-called ``pileup'' background.
In this paper, the FCNC decays $\cPqt\to\PH\cPqc$ and $\cPqt\to\PH\cPqu$
are searched for through the processes $\ttbar \to \rm Hc+Wb$ or $\rm
Hu+Wb$. Three independent analyses are performed and their results are
then combined. The multilepton analysis considers events with two
same-sign (SS) leptons or three charged leptons (electrons or muons).
This channel is sensitive to the Higgs boson decaying into WW, ZZ, or
$\tau\tau$, which have branching fractions of 21.5\%, 2.6\%, and 6.3\%,
respectively~\cite{YR3}. The diphoton analysis considers events with
two photons, a bottom quark, and a W boson that decays either
hadronically or leptonically. The two photons in this channel are used
to reconstruct the Higgs boson which decays to diphotons with
$\mathcal{B}\left({\rm H}\to \gamma\gamma \right) =
0.23\%$~\cite{YR3}. Finally, events with at least four jets, three of
which result from the hadronization of bottom quarks (b jets), and a
leptonically decaying W boson are considered. The b jet + lepton
channel takes advantage of the large Higgs boson branching fraction into
\bbbar pairs, $\mathcal{B}(\PH\to\bbbar) =
57\%$~\cite{Denner:2011mq}. A summary of the enumerated final states is
shown in Table~\ref{topologies}.
The CMS detector and trigger are described in Section~\ref{sec:cms}, and
the event selection and reconstruction in
Section~\ref{sec:preselection}. Section~\ref{sec:data} then discusses
the Monte Carlo (MC) simulation samples. The signal selection and
background estimations for each of the three analyses are given in
Section~\ref{sec:signal}, and the systematic uncertainties in
Section~\ref{sec:sys}. Finally, the individual and combined results
from the analyses are presented in Section~\ref{sec:results}.
\begin{table}[ht]
\begin{center}
\topcaption{
Summary of the requirements for the ${\rm pp \to \ttbar \to Hq + Wb}$ channels used
in this analysis.
\label{topologies}}
\resizebox{\textwidth}{!}{
\begin{tabular}{ l | c c c c c }
\multicolumn{1}{c}{Decay channels} & Leptons & Photons & Jets & b jets & Category \\
\hline
H $\to$ WW, ZZ, $\tau\tau$ \& W$\to\ell\nu$ & eee, ee$\mu$, e$\mu\mu$, $\mu\mu\mu$ & --- & ${\ge}2$ & --- & trilepton \\
H $\to$ WW, ZZ, $\tau\tau$ \& W$\to\ell\nu$ & e$^\pm$e$^\pm$, e$^\pm\mu^\pm$, $\mu^\pm\mu^\pm$ & --- & ${\ge}2$ & --- & dilepton SS \\
H $\to$ $\gamma\gamma$ \& W$\to\ell\nu$ & e$^\pm$, $\mu^\pm$ & ${\ge}2$ & ${\ge}2$ & =1 & diphoton + lepton \\
H $\to$ $\gamma\gamma$ \& W$\to q_1 q_2$ & --- & ${\ge}2$ & ${\ge}4$ & =1 & diphoton + hadron \\
H $\to$ $\bbbar$ \& W$\to\ell\nu$ & e$^\pm$, $\mu^\pm$ & --- & ${\ge}4$ & ${\ge}3$ & b jet + lepton \\
\end{tabular}
}
\end{center}
\end{table}
\section{The CMS detector and trigger}
\label{sec:cms}
A detailed description of the CMS detector, together with a definition
of the coordinate system used and the relevant kinematic variables, can
be found in Ref.~\cite{ref:cms}. The central feature of the CMS
apparatus is a superconducting solenoid, 13\unit{m} in length and
6\unit{m} in diameter, which provides an axial magnetic field of
3.8\unit{T}. Within the field volume there are several particle
detection systems. Charged particle trajectories are measured by silicon
pixel and strip trackers, covering $0\le \phi \le 2\pi$ in azimuth and
$\abs{\eta} < 2.5$ in pseudorapidity. A lead tungstate crystal
electromagnetic calorimeter (ECAL) surrounds the tracking volume. It is
comprised of a barrel region $\abs{\eta} < 1.48$ and two endcaps that
extend up to $\abs{\eta} = 3$. A brass and scintillator hadron
calorimeter (HCAL) surrounds the ECAL and also covers the region
$\abs{\eta} < 3$. The forward hadron calorimeter (HF) uses steel as the
absorber and quartz fibers as the sensitive material. The HF extends the
calorimeter coverage to the range $3.0 < |\eta| < 5.2$. A lead and
silicon-strip preshower detector is located in front of the ECAL
endcaps. Muons are identified and measured in gas-ionization detectors
embedded in the steel flux-return yoke outside the solenoid. The
detector is nearly hermetic, allowing momentum balance measurements in
the plane transverse to the beam direction.
Depending on the final state under consideration, events are selected at
the trigger level by either requiring at least two leptons, ($\Pe\Pe$,
$\mu\mu$ or $\Pe\mu$), at least two photons, or a single lepton ($\Pe$
or $\mu$) to be within the detector acceptance and to pass loose
identification and kinematic requirements.
The dilepton triggers used in the multilepton selection require one
lepton with $\pt > 17\GeV$ and one lepton with $\pt > 8\GeV$. At
the trigger level and during the offline selection, electrons are required to
be within $\abs{\eta} < 2.5$, and muons are required to be within
$\abs{\eta} < 2.4$. All leptons must be isolated, as described in
Section~\ref{sec:preselection}, and have $\pt > 20\GeV$ for the
highest-\pt lepton, and $\pt > 10\GeV$ for all subsequent leptons in the
event. For events satisfying the full multilepton selection, the dimuon,
dielectron, and electron-muon trigger efficiencies are measured to be
98\%, 91\%, and 94\%, respectively, for the SS dilepton selection, and
100\% for the trilepton selection.
The diphoton trigger requires the presence of one photon with $\pt >
36\GeV$ and a second photon with $\pt > 22\GeV$. Loose isolation and
shower shape requirements are applied to both photons~\cite{EGM-14-001}.
The average diphoton trigger efficiency is measured to be 99.4\% after
applying the full event selection for photons within $\abs{\eta} < 2.5$,
excluding the barrel-endcap transition region $1.44 <\abs{\eta} < 1.57$.
The b jet + lepton selection uses the single-lepton triggers. The
single-muon trigger requires at least one isolated muon with $\pt >
24\GeV$ and $\abs{\eta} < 2.1$ to be reconstructed online. The
single-electron trigger requires at least one isolated electron with
$\pt > 27\GeV$ and $\abs{\eta} < 2.5$. The offline selection further
requires that electrons have $\pt > 30\GeV$ and muons have $\pt >
26\GeV$. This results in an average trigger efficiency of 84\% for the
single-electron triggers and 92\% for the single-muon trigger after the
b jet + lepton selection.
\section{Event selection and reconstruction}
\label{sec:preselection}
Events are required to have a primary vertex with a reconstructed
longitudinal position within 24\unit{cm} of the geometric center of the
detector and a transverse position within 2\unit{cm} from the nominal
interaction point. To distinguish the hard-scattering vertex from
vertices arising from pileup interactions, the reconstructed vertex with
the highest scalar sum of the ${\pt^2}$ of its associated tracks is
chosen as the primary vertex. To ensure that leptons originate from
the same primary vertex, a loose requirement is applied to their
longitudinal and transverse impact parameters with respect to the
primary vertex.
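As an illustration, this vertex choice amounts to the following simple selection; the event model below is a schematic placeholder, not the actual reconstruction code.
\begin{verbatim}
def choose_primary_vertex(vertices):
    """Pick the vertex with the highest scalar sum of pt^2 of its
    associated tracks; vertices are dicts with a 'tracks' list of
    dicts carrying 'pt' (placeholder event model)."""
    return max(vertices,
               key=lambda v: sum(t["pt"] ** 2 for t in v["tracks"]))
\end{verbatim}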
The particle-flow event algorithm~\cite{ref:pf,CMS:2010byl} is used to
reconstruct and identify individual particles using an optimized
combination of information from the elements of the detector. Prompt
electrons and muons arising from W and Z decays are typically more
isolated than nonprompt leptons arising from the decay of hadrons within
jets. In order to distinguish between prompt and nonprompt lepton
candidates, a relative isolation parameter is defined for each lepton
candidate. This is calculated by summing the \pt of all charged and
neutral particles reconstructed using the particle-flow algorithm within
a cone of angular radius $\DR \equiv \sqrt{\smash[b]{(\Delta\eta)^2 +
(\Delta \phi)^2}} = 0.4$ around the lepton candidate momentum, where
$\Delta\eta$ and $\Delta\phi$ are the pseudorapidity and azimuthal angle
(in radians) differences, respectively, between the directions of the
lepton and the other particle~\cite{Baffioni:2006cd,Chatrchyan:2012xi}.
This cone excludes the lepton candidate and the charged particles
associated with the pileup vertices. The resulting quantity is
corrected for additional underlying-event activity owing to neutral
particles~\cite{long}, and then divided by the lepton candidate's \pt.
The relative isolation parameter is required to be less than
0.15 for electrons and 0.12 for muons.
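For illustration, the relative isolation described above can be sketched as follows; the event model (dictionaries with $\pt$, $\eta$, $\phi$) and the neutral-particle correction are schematic placeholders rather than the actual particle-flow implementation.
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt(d_eta^2 + d_phi^2)."""
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)  # wrap to [-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, pf_candidates, cone=0.4, neutral_corr=0.0):
    """Sum the pt of particle-flow candidates within the cone around the
    lepton (excluding the lepton itself), subtract a schematic correction
    for neutral pileup/underlying-event energy, and divide by the lepton
    pt.  Objects are dictionaries with 'pt', 'eta', 'phi'."""
    iso = sum(c["pt"] for c in pf_candidates
              if c is not lepton
              and delta_r(lepton["eta"], lepton["phi"],
                          c["eta"], c["phi"]) < cone)
    return max(iso - neutral_corr, 0.0) / lepton["pt"]

# e.g. a muon candidate is kept if relative_isolation(mu, pf) < 0.12
\end{verbatim}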
The electron selection criteria are optimized using a multivariate
approach that combines information from both the tracks and ECAL clusters,
and have a combined identification and isolation efficiency of
approximately 60\% at low \pt (10\GeV) and 90\% at high \pt (50\GeV) for
electrons from $\PW$ or $\Z$ boson decays~\cite{Khachatryan:2015hwa}.
The training of the multivariate electron reconstruction is performed
using simulated events, while the performance is validated using data.
Muon candidates are reconstructed with a global trajectory fit using
hits in the tracker and the muon system. The efficiency for muons to
pass both the identification and isolation criteria is measured from
data to be larger than 95\%~\cite{eleReg1b,long}.
For events in which there is an overlap between a muon and an electron,
\ie, an electron within $\DR < 0.1$ of a muon, precedence is given to
the muon by vetoing the electron. In the multilepton selection, events
in which there are more than three isolated leptons (electron or muon)
with $\pt > 10\GeV$ are rejected to reduce diboson contamination. The
invariant mass of dilepton pairs in the SS channel is required to be
greater than 30\GeV in order to reject low-mass resonances and reduce
poorly modeled backgrounds (\eg, QCD). In the b jet + lepton selection,
events in which there are additional isolated electrons with
$\pt > 20\GeV$ and $\abs{\eta} < 2.5$ or isolated muons with
$\pt > 10\GeV$ and $\abs{\eta} < 2.4$ are rejected.
The photon energy is reconstructed from the sum of signals in the ECAL
crystals~\cite{EGM-14-001}. The ECAL signals are
calibrated~\cite{Calib-ECAL}, and a multivariate regression, developed
for a previous $\PH\to\gamma\gamma$ analysis~\cite{cms-Hgg-Legacy}, is
used to estimate the energy of the photon. Clusters are
formed from the neighboring ECAL crystals seeded around local maxima
of energy deposits, and the collection of clusters that contain the
energy of a photon or an electron is called a supercluster.
Identification criteria are applied to distinguish photons from jets
and electrons. The observables used in the photon identification
criteria are the isolation variables, the ratio of the energy in the
HCAL towers behind the supercluster to the electromagnetic energy in
the supercluster, the transverse width in $\eta$ of the
electromagnetic shower, and the number of charged tracks matched to
the supercluster. The photon identification efficiency is measured
using $\Z \to \Pe^{+}\Pe^{-}$ events in data by reconstructing the electron
showers as photons~\cite{CMS:2011aa}, taking into account the shower
shape and whether the electron probe is located in the barrel or
endcap. The transverse momenta of the two highest-\pt photons must exceed 33
and 25\GeV, respectively.
Jets are reconstructed from the candidates produced by the particle-flow
algorithm. An anti-$\kt$ clustering algorithm~\cite{ref:kt} with a distance
parameter of 0.5 is used for jet reconstruction. Jets with a significant
fraction of energy coming from pileup interactions or not associated with the
primary vertex are rejected. Remaining pileup energy in jets is
subtracted using a technique that relies on information about the jet
area~\cite{pile1,pile2,pile3}. Reconstructed jets are calibrated to
take into account differences in detector response~\cite{ref:jetscale}. The
jets in the multilepton and b jet + lepton selections are required to have $\pt
> 30\GeV$, $\abs{\eta} < 2.5$, and to be separated from leptons such that
$\DR(\text{lepton, jet}) > 0.3$. The selection of jets in the diphoton events
differs by requiring the jet $\ET > 20\GeV$ and the jets be
separated from both photons such that $\DR(\text{photon, jet}) > 0.3$.
To characterize the amount of hadronic activity in an event, the scalar sum of
the transverse energy of jets passing all of these requirements ($H_\mathrm{T}$)
is calculated. The missing transverse energy (\MET) is calculated as the
magnitude of the vector sum of the transverse momenta of all reconstructed
particle-flow candidates in the event.
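Schematically, these two event-level quantities can be computed from the selected objects as in the following sketch (the object collections are placeholders).
\begin{verbatim}
import math

def ht(jets):
    """Scalar sum of the transverse energies of the selected jets."""
    return sum(j["et"] for j in jets)

def met(pf_candidates):
    """Magnitude of the vector sum of the transverse momenta of all
    reconstructed particle-flow candidates (dicts with 'pt', 'phi')."""
    px = sum(c["pt"] * math.cos(c["phi"]) for c in pf_candidates)
    py = sum(c["pt"] * math.sin(c["phi"]) for c in pf_candidates)
    return math.hypot(px, py)
\end{verbatim}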
Jets originating from the hadronization of b quarks are identified by the
combined secondary vertex (CSV) b tagging algorithm~\cite{CSV}. The selection
criteria that are used have an identification efficiency of 66\%, and a
misidentification rate of 18\% for charm quarks and 1\% for light-quark and
gluon jets. The diphoton and b jet + lepton selections require
b-tagged jets. Although the identification of b jets is not used to select
signal events in the multilepton selection, it is used for the purpose of
defining control samples to check the normalization of simulated background
processes. No additional tagging is used to discriminate between jets
originating from c quarks.
The inclusion of b jets in the diphoton and b jet + lepton selections
results in a difference in the sensitivity to the $ \PQt \to \PH \PQu $ and
$ \PQt \to \PH \PQc $ decay modes. This is caused by the larger likelihood
of b tagging a jet originating from a charm quark than from an up quark.
The multilepton analyses do not use b tagging, in order to enhance the signal
sensitivity, so the two FCNC top quark decay modes are indistinguishable.
\section{Simulated samples}
\label{sec:data}
The determination of the expected signal and background yields relies on
simulated events, as well as an estimation based on control samples in
data, as discussed in later sections. Samples of Drell--Yan, $\ttbar$,
\PW+jets, $\PW + \bbbar$, diboson, $\ttbar + \PZ$, $\ttbar + \PW$,
and triboson events are generated using the \MADGRAPH event generator
(v5.1.5.11)~\cite{Alwall:2011uj}. The samples of $\PZ\PZ$ to four charged
leptons and single top quark events are generated using \POWHEG (v1.0
r1380)~\cite{ref:Nason:2004rx,ref:Frixione:2007vw,ref:Alioli:2010xd}.
In all cases, hadronization and showering are done through \PYTHIA
(v6.426)~\cite{ref:pythia}, and $\tau$ decays are simulated using
\TAUOLA (v2.75)~\cite{Was:2000st}. Three additional production
processes are considered for the nonresonant diphoton backgrounds, where
the dominant one coming from $\gamma\gamma$ + jets is simulated with
\SHERPA (v1.4.2)~\cite{SHERPA}. Top quark pairs with one additional
photon are simulated with \MADGRAPH, while those with two additional
photons are simulated using the \WHIZARD (v2.1.1)~\cite{WHIZARD}
generator interfaced with \PYTHIA. The Z2 tune~\cite{Field:2011iq} of
\PYTHIA is used to model the underlying event.
Events that arise from the SM Higgs boson production are treated as a
background. The gluon-fusion (ggH) and vector-boson-fusion (VBF) Higgs
boson production processes are generated with \POWHEG at
next-to-leading order (NLO) in QCD, interfaced with \PYTHIA . The
associated W/ZH production and $\ttbar$H processes are simulated with
\PYTHIA at leading order. The cross sections and branching fractions of
the SM Higgs boson processes are set to the values recommended by the
LHC Higgs cross section working group~\cite{YR3}.
The simulated samples for the signal process ${\ttbar \to \rm Hq
+ Wb}$ (q = c or u) are produced using \PYTHIA for the case of the
Higgs boson decaying to WW, ZZ, $\tau\tau$, and $\gamma\gamma$, and with
\MADGRAPH for $\rm H \to \bbbar$. The use of different generators is an
artifact of the various modes being analyzed separately. The Higgs boson
is assumed to have a mass of 125\GeV.
The set of parton distribution functions (PDF) used is
CTEQ6L~\cite{ref:CTEQ6LL} in all cases, except for $\rm H \to
\bbbar$, where CT10~\cite{PhysRevD.78.013004} is used.
The CMS detector response is simulated using a $\GEANTfour$-based
(v9.4)~\cite{ref:geant} model, and the events are reconstructed and
analyzed using the same software used to process collision data.
The effect of pileup is included in the simulation process by
superimposing simulated events on the process of interest. The
simulated signal events are weighted to account for the differences
between data and simulation of the trigger, reconstruction, and
isolation efficiencies, and the distributions of the reconstructed
vertices coming from pileup. Additional corrections are applied to
account for the energy scale and lepton \pt resolution. The observed jet
energy resolution and scale~\cite{ref:jetscale}, top quark
\pt distribution~\cite{CMS-PAS-TOP-12-027}, and b tagging
efficiency and discriminator distribution~\cite{Khachatryan:2014qaa} in
data are used to correct the simulated events. Corrections accounting
for the differences in lepton selection efficiencies are derived using the
tag-and-probe technique~\cite{Chatrchyan:2011cm}.
\section{Signal selection and background estimation}
\label{sec:signal}
The sensitivity of the search is enhanced by combining the twelve exclusive
channels, shown in Table~\ref{topologies}, defined according to the
expected decay modes of the Higgs and W bosons.
\subsection{Multilepton channels}
\par
The multilepton analysis is conducted with the goal of
enhancing the signal sensitivity in the trilepton channel:
$\ttbar \to \PH\cPq + \PW\cPqb \to \ell\nu\ell\nu\cPq + \ell\nu\cPqb$,
and the SS dilepton channel: $\ttbar \to \PH\cPq + \PW\cPqb \to \ell\nu {
\cPq\cPq\cPq} + \ell\nu\cPqb$, where $\ell$ represents either a muon or
electron. The main target of optimization is final states
resulting from $\rm H \to WW$ decays.
\begin{figure}[h]
\begin{center}
\vspace{1cm}
\includegraphics[width=0.99\textwidth]{Figure_001.pdf}\\
\caption{\label{fig:1}
Trilepton invariant mass versus opposite-sign dilepton invariant
mass in the trilepton channel after the event selection described in
Section~\ref{sec:preselection} for simulated signal, estimated
background, and data, from left to right.
}
\end{center}
\end{figure}
In the case of the trilepton channel, rejection of events containing
dileptons originating from resonant \Z boson production is necessary to
remove backgrounds from WZ production, asymmetric internal conversions
(AIC, the process in which final-state radiation in a Drell--Yan event
converts to dileptons where one of the leptons carries most of the
photon momentum)~\cite{asy-conv} or final-state radiation where the
photon is misidentified as an electron. A comparison of the
two-dimensional distribution of the trilepton mass versus the opposite-sign
dilepton mass is shown in Figure~\ref{fig:1} for the estimated signal
and background processes, and data. Events satisfying any of
the following criteria are vetoed to reduce the contribution from
resonant \PZ production: (1) the invariant mass of an opposite-sign,
same-flavor (OSSF) lepton pair is within 15\GeV of the \PZ boson
mass~\cite{PDG}; (2) the invariant mass of an OSSF lepton pair is
greater than 30\GeV and the trilepton invariant mass is within 10\GeV of
the \PZ boson mass. For the SS dielectron channel, electron pairs with
an invariant mass within 15\GeV of the \PZ boson mass are rejected to
reduce the background arising from misidentification of the electron
charge. No invariant mass requirement is applied to the
$\mu^{\pm}\mu^{\pm}$ and e$^{\pm}\mu^{\pm}$ final states since there is
a negligible contamination from resonant \PZ boson production.
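A schematic implementation of this resonant-\PZ veto, with a placeholder event model, is given below.
\begin{verbatim}
import math

Z_MASS = 91.19  # GeV

def inv_mass(p4_list):
    """Invariant mass of a set of (E, px, py, pz) four-vectors."""
    e = sum(p[0] for p in p4_list)
    px = sum(p[1] for p in p4_list)
    py = sum(p[2] for p in p4_list)
    pz = sum(p[3] for p in p4_list)
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def passes_z_veto(leptons):
    """Resonant-Z veto for trilepton events.  Each lepton is a dict with
    'charge', 'flavor' and 'p4' = (E, px, py, pz) (placeholder model)."""
    m_3l = inv_mass([l["p4"] for l in leptons])
    for i in range(len(leptons)):
        for j in range(i + 1, len(leptons)):
            l1, l2 = leptons[i], leptons[j]
            if l1["flavor"] != l2["flavor"] or l1["charge"] == l2["charge"]:
                continue  # only opposite-sign, same-flavor pairs
            m_ll = inv_mass([l1["p4"], l2["p4"]])
            if abs(m_ll - Z_MASS) < 15.0:                  # criterion (1)
                return False
            if m_ll > 30.0 and abs(m_3l - Z_MASS) < 10.0:  # criterion (2)
                return False
    return True
\end{verbatim}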
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figure_002-a}
\includegraphics[width=0.45\textwidth]{Figure_002-b}
\caption{
Jet multiplicity in the samples featuring three identified leptons
(left) and two SS leptons (right) after rejecting events with \Z
bosons. The data are represented by the points with vertical bars,
and the unfilled histogram shows the expected signal. A value of
$\mathcal{B}({\rm t\to Hc}) = 3\%$ is used for the sake of improved
visualization. The dominant backgrounds are represented with filled
histograms and the background (BG) uncertainty is shown as shaded
bands.
}\label{fig:2}
\end{center}
\end{figure}
The jet multiplicity after rejecting events containing a \Z boson is
shown in Figure~\ref{fig:2}. To improve the sensitivity of the search, we
require at least two jets in the final state.
Figure~\ref{fig:3} shows the \MET and \HT distributions for trilepton
and SS dilepton events after applying the Z veto and jet requirement. A
candidate event in the trilepton channel has no additional requirements
on \MET or \HT. The SS events are required to pass an \MET-dependent
\HT requirement (shown in Table~\ref{table:MetHT}) and have \MET greater
than 30\GeV. The \MET and \HT requirements are obtained by maximizing
the estimated signal significance, defined as the number of signal
events over the square root of the number of background
events.
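One simple way to carry out such an optimization, shown here only as an illustration and not necessarily the procedure used in the analysis, is a grid scan over candidate thresholds.
\begin{verbatim}
import math

def best_met_ht_cuts(signal, background,
                     met_grid=range(0, 101, 10),
                     ht_grid=range(0, 201, 20)):
    """Scan (MET, HT) thresholds and return the pair maximizing
    S/sqrt(B).  'signal' and 'background' are lists of dictionaries
    with 'met' and 'ht' entries (placeholder event model)."""
    def count(events, met_cut, ht_cut):
        return sum(1 for e in events
                   if e["met"] > met_cut and e["ht"] > ht_cut)

    best = None
    for met_cut in met_grid:
        for ht_cut in ht_grid:
            s = count(signal, met_cut, ht_cut)
            b = count(background, met_cut, ht_cut)
            if b == 0:
                continue
            significance = s / math.sqrt(b)
            if best is None or significance > best[0]:
                best = (significance, met_cut, ht_cut)
    return best  # (S/sqrt(B), MET threshold, HT threshold)
\end{verbatim}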
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figure_003-a}
\includegraphics[width=0.45\textwidth]{Figure_003-b}\\
\includegraphics[width=0.45\textwidth]{Figure_003-c}
\includegraphics[width=0.45\textwidth]{Figure_003-d}
\caption{
The \MET (top) and \HT (bottom) distributions in the trilepton (left)
and SS dilepton (right) channels in data (points
with bars) and predicted by the SM background simulations (filled
histograms) after rejecting events containing \Z bosons, requiring at
least two jets, and the event selection described in
Section~\ref{sec:preselection}. The overall background uncertainty
is shown in shaded black. The expected signal assuming a
$\mathcal{B}({\rm t\to Hc})$ of 3\% is shown by the unfilled
histogram.
}\label{fig:3}
\end{center}
\end{figure}
The main sources of background can be divided into two categories
according to the origin of the identified leptons and the \MET. These
include (1) \textit{irreducible background processes}: events with leptons
originating from the decay of SM bosons and having large \MET arising
from neutrinos; (2) \textit{reducible background processes:} events with
misidentified leptons produced either by nonprompt leptons from hadron
decays (\eg, semileptonic decays of B mesons), by misidentified hadrons,
or by mismeasurement of the lepton charge.
\begin{table}[ht]
\begin{center}
\topcaption{ Two-dimensional selection
requirements on \MET and \HT applied in the SS dilepton channel. An
event is selected if it satisfies one of the three listed sets. }
\label{table:MetHT}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{ c | c c c }
Selection set & 1 & 2& 3 \\ \hline \MET & $\phantom{1}{<}70\GeV$ & $70{-}90\GeV$ &
${>}90\GeV$ \\ \HT & ${>}140\GeV$ & ${>}100\GeV$ & ${>}60\GeV$ \\
\end{tabular}
\end{center}
\end{table}
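The requirements of Table~\ref{table:MetHT}, combined with the $\MET > 30\GeV$ condition, can be expressed compactly as in the following sketch.
\begin{verbatim}
def passes_ss_met_ht(met, ht):
    """MET-dependent HT requirement for the SS dilepton channel
    (Table table:MetHT), combined with MET > 30 GeV; inputs in GeV."""
    if met <= 30.0:
        return False
    if met < 70.0:
        return ht > 140.0   # selection set 1
    if met <= 90.0:
        return ht > 100.0   # selection set 2
    return ht > 60.0        # selection set 3
\end{verbatim}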
Given that at least two isolated leptons and two jets are required in
the final state, the main sources of irreducible backgrounds are
$\ttbar$ associated with vector boson production, $\PW\PZ\to
3\ell\nu$, $\PZ\PZ\to 4\ell$, $\Z\to 4\ell$, and, to a
lesser extent, triboson and $\PW^{\pm}\PW^{\pm}$ production. The
contribution from all of these processes except $\Z\to 4\ell$
production are estimated from simulated samples. The WZ cross section
used in the simulation is cross-checked against a control sample from
data that is enriched in WZ events by requiring that there be three
leptons, with two of them forming a dilepton pair whose invariant mass
is consistent with a Z boson. No correction to the WZ normalization is
needed. This sample is also used to assess the systematic uncertainty
in the simulation of the background.
For the presentation of the results, several of the backgrounds are
grouped into a single category referred to as the rare backgrounds. The
rare background contribution is estimated mainly from simulation (see
the following paragraph), and the processes include $\PZ\PZ\to
4\ell$, $\ttbar$+Z, $\ttbar$+W, triboson, $\PW^{\pm}\PW^{\pm}$, and
$\ttbar$+$\PH$. The $\PW\Z\to 3\ell\nu$ background contribution is
presented separately.
The residual contribution in the trilepton channel from asymmetric
internal conversions (AIC) arising from Drell--Yan events is estimated
using a data-driven technique~\cite{asy-conv} that uses $\Z\to
\ell^{+}\ell^{-}+\gamma$ events in data to model $\Z\to
\ell^{+}\ell^{-}+\text{e}/\mu$ events. This is because the process that
gives rise to the two final states is the same (final-state radiation in
Drell--Yan events), and the third lepton that is detected in the AIC
event carries most of the photon momentum. The
$\ell^{+}\ell^{-}+\gamma$ events are scaled based on photon
\pt-dependent weights coming from a control sample defined as having a
three-body invariant mass within 15\GeV of the Z boson mass. The
average conversion probabilities for photons in dimuon and dielectron
events are $(0.57 \pm 0.07)\%$ and $(0.7 \pm 0.1)\%$, respectively.
There are two major types of reducible backgrounds coming from \bbbar,
Drell--Yan, W+jets, and $\ttbar$ processes. One source comes from
events with either nonprompt leptons produced during the hadronization
process of the outgoing quarks (\eg, semileptonic decays of B mesons) or
hadrons misidentified as prompt leptons. The other source originates
from the charge misidentification of a lepton in the more frequent
production of opposite-sign dileptons. This background mostly
contaminates the SS dielectron final states. Data-driven methods are
used to estimate these two types of reducible backgrounds.
Mismeasuring the charge of a lepton can be a significant source of
background in SS dilepton final states when there are one or more
electrons. Even though the probability for mismeasuring the charge of
an electron is relatively low (${\approx}0.1\%$), the production rate of
opposite-sign dileptons is very high in comparison to processes that
result in genuine SS dileptons. The probability of mismeasuring the
charge of a muon is negligible (${<}10^{-6}$) and is therefore not
considered here. In order to estimate the probability of misidentifying
the charge of an electron from data, a control sample is selected
consisting of events containing a dielectron pair with an invariant mass
within 15\GeV of the \Z boson mass. The rate of charge
misidentification is then determined from the ratio of the number of SS
events to opposite-sign events as a function of \pt and $\eta$. The
measured charge misidentification for electrons with $\abs{\eta} <
1.48$ is less than 0.2\% for $\pt < 100\GeV$, while for
$\abs{\eta} > 1.48$ it is 0.1\% at 10\GeV and increases with
\pt to 2.5\% at 125\GeV. These measurements are in agreement with those
obtained from simulated Drell--Yan events.
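Schematically, the extraction of the charge misidentification rate from the $\Z\to \Pe^{+}\Pe^{-}$ control sample can be written as below; the binning and event model are placeholders.
\begin{verbatim}
from collections import defaultdict

def charge_misid_rates(z_ee_events, pt_edges, eta_edges):
    """Ratio of same-sign to opposite-sign Z->ee candidates in bins of
    the probe electron (pt, |eta|).  Events are dictionaries with 'pt',
    'abs_eta' and a boolean 'same_sign' (placeholder event model)."""
    def bin_index(value, edges):
        for i in range(len(edges) - 1):
            if edges[i] <= value < edges[i + 1]:
                return i
        return None

    n_ss, n_os = defaultdict(int), defaultdict(int)
    for ev in z_ee_events:
        key = (bin_index(ev["pt"], pt_edges),
               bin_index(ev["abs_eta"], eta_edges))
        if None in key:
            continue
        (n_ss if ev["same_sign"] else n_os)[key] += 1

    return {k: n_ss[k] / n_os[k] for k in n_os if n_os[k] > 0}
\end{verbatim}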
Two control samples are used to estimate the misidentification rate of
prompt
leptons~\cite{Chatrchyan:2011wba,Chatrchyan:2012ira,Chatrchyan:2012ty}:
one region is enriched in \bbbar events; the other is enriched
in Z + jet production. Both samples are used to estimate the
probability of misidentifying nonprompt electrons and muons as a
function of $\pt$ and $\eta$. The measured misidentification rate for
electrons ranges from 2\% to 8\% and for muons ranges from 1\% to 6\%.
Simulated events are used to correct for the contamination arising from
prompt leptons in the nonprompt misidentification rate measurement
(e.g., WZ production in the Z+jet control region). The rates are then
applied to events where one or more of the lepton candidates fail the
tight lepton identification requirements. The differences between the
nonprompt misidentification rates in the two measurement regions and the
signal region are then used to estimate the systematic uncertainty of
this background. To further assess the systematic uncertainty, the
misidentification rates are also measured in simulated events that
reproduce the background composition of events in the signal region and
compared to the rates measured from data.
The predicted numbers of background and signal events for the trilepton
and SS dileptons are given in Table~\ref{table:yieldsa}. The backgrounds
are separated into nonprompt lepton, charge misidentification,
$\PW\PZ\to 3\ell\nu$, and the rare backgrounds. The predicted
number of signal events assumes $\mathcal{B}({\rm t\to Hq}) =
1\%$. The total number of observed events, also given in
Table~\ref{table:yieldsa}, is consistent with the predicted number of
background events.
\begin{table}[t]
\begin{center}
\topcaption{
The predicted and observed inclusive event yields after the full
event selection for the trilepton and SS dilepton
categories assuming $\mathcal{B}({\rm t\to Hq}) = 1\%$.
The quoted uncertainties include both statistical and systematic
uncertainties added in quadrature. The total number of observed
events is given in the last row.
}\label{table:yieldsa}
\begin{tabular}{ l | c c | c c }
Process & \multicolumn{2}{c|}{Trilepton} & \multicolumn{2}{c}{SS dilepton} \\
\hline
Nonprompt & \multicolumn{2}{c|}{$49.4 \pm 9.0$} & \multicolumn{2}{c}{$409 \pm 72$} \\
Charge misidentification & \multicolumn{2}{c|}{---} & \multicolumn{2}{c}{$32.1 \pm 6.4$} \\
$\mathrm{WZ}\to3\ell\nu$ & \multicolumn{2}{c|}{$15.8 \pm 1.1$} & \multicolumn{2}{c}{$83.9 \pm 5.4$} \\
Rare backgrounds & \multicolumn{2}{c|}{$19.6 \pm 1.4$} & \multicolumn{2}{c}{$128.1 \pm 6.4\ensuremath{\phantom{0}}$} \\
\hline
Total background & \multicolumn{2}{c|}{$86.2 \pm 9.3$} & \multicolumn{2}{c}{$654 \pm 73$} \\
\hline
Signal & t$\to$Hu & t$\to$Hc & t$\to$Hu & t$\to$Hc \\
\hline
H$ \to$ WW & 12.4 $\pm$ 1.4 & 14.4 $\pm$ 1.1 & 135 $\pm$ 12 & 130.3 $\pm$ 8.1\ensuremath{\phantom{0}} \\
H$ \to \tau\tau$ & \x4.1 $\pm$ 0.4 & \x4.4 $\pm$ 0.3 & 36.4 $\pm$ 3.2 & 35.3 $\pm$ 2.2 \\
H$ \to$ ZZ & \x0.4 $\pm$ 0.1 & \x0.4 $\pm$ 0.1 & \x1.6 $\pm$ 0.1 & \x1.4 $\pm$ 0.1 \\
\hline
Total signal & 16.9 $\pm$ 1.5 & 19.2 $\pm$ 1.1 & 173 $\pm$ 13 & 167.0 $\pm$ 8.4\ensuremath{\phantom{0}} \\
\hline\hline
Observed & \multicolumn{2}{c|}{79} & \multicolumn{2}{c}{631} \\
\end{tabular}
\end{center}
\end{table}
\subsection{Diphoton channel}
The diphoton analysis is performed using both leptonic and hadronic W
boson decays: $\ttbar \to \PH\cPq + \PW\cPqb \to \gamma\gamma\cPq +
\ell\nu\cPqb$, and $\ttbar \to \PH\cPq + \PW\cPqb \to \gamma\gamma\cPq +
\cPq\cPq\cPqb$. The mass of the diphoton system $m_{\gamma\gamma}$ is
the primary variable used to search for the Higgs boson decay. The
contribution of the nonresonant backgrounds is estimated by fitting the
$m_{\gamma\gamma}$ distribution from data in the mass range $100 <
m_{\gamma\gamma}< 180\,\GeV$, whereas the contribution of resonant
backgrounds is taken from the simulation.
The two highest-\pt photons must have $\pt > m_{\gamma\gamma}/3$
and $\pt > m_{\gamma\gamma}/4$, respectively. The use of $\pt$
thresholds scaled by $m_{\gamma\gamma}$ prevents a distortion of the low
end of the $m_{\gamma\gamma}$ spectrum that would result from a fixed
threshold~\cite{Chatrchyan:2012twa}. In the rare case of multiple
diphoton candidates in an event, the one with the highest $\pt$ sum is
selected.
The hadronic analysis uses events with at least four jets and exactly
one b jet. The b jet and the three jets with the highest $\pt$ are used
to reconstruct the invariant masses of the two top quarks, $m_{{\rm
j}\gamma\gamma}$ and $m_{{\rm bjj}}$. There are three possible
$(m_{{\rm j}\gamma\gamma}, m_{{\rm bjj}})$ pairs per event. The
combination of jets with the minimum value of $\left| m_{{\rm
j}\gamma\gamma}/ m_{{\rm bjj}} -1 \right| + \left| m_{{\rm bjj}}/m_{{\rm
j}\gamma\gamma} -1\right|$ is selected. The allowed ranges for $m_{{\rm
j}\gamma\gamma}$, $m_{{\rm bjj}}$, and the W boson mass $m_{\PW}$
associated with $m_{{\rm bjj}}$ are obtained by maximizing the signal
significance $S/\sqrt{B}$ in the simulation, where $S$ is the number of
signal events and $B$ is the number of background events. The background
events are assumed to come from $\gamma\gamma$+jets and are taken from
simulation. The highest signal significance, found to be 16\%, is
obtained for $142 \le m_{{\rm bjj}} \le 222\GeV$, $158 \le m_{{\rm
j}\gamma\gamma} \le 202\GeV$, and $44\le m_{\rm W} \le 140\GeV$.
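The combinatorial selection described above can be sketched as follows (Python; the four-vector inputs and function names are ours and not from the analysis code).
\begin{verbatim}
import math

def inv_mass(*vectors):
    """Invariant mass of the sum of (E, px, py, pz) four-vectors."""
    E, px, py, pz = (sum(v[i] for v in vectors) for i in range(4))
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

def best_pairing(diphoton, bjet, light_jets):
    """Pick which of the three light jets accompanies the diphoton system.

    Returns (balance, m_jgg, m_bjj) for the combination minimizing
    |m_jgg/m_bjj - 1| + |m_bjj/m_jgg - 1|.
    """
    best = None
    for j in range(3):
        m_jgg = inv_mass(diphoton, light_jets[j])
        others = [light_jets[i] for i in range(3) if i != j]
        m_bjj = inv_mass(bjet, *others)
        if m_jgg <= 0.0 or m_bjj <= 0.0:
            continue  # skip degenerate combinations
        balance = abs(m_jgg / m_bjj - 1.0) + abs(m_bjj / m_jgg - 1.0)
        if best is None or balance < best[0]:
            best = (balance, m_jgg, m_bjj)
    return best
\end{verbatim}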
The leptonic analysis uses events with at least three jets, exactly one
b jet, and at least one lepton. The reconstructed top mass $m_{{\rm
b}\ell\nu}$ is found from the b jet, the lepton, and \MET. The
longitudinal momentum of the neutrino is estimated by using the W boson
mass as a constraint, which leads to a quadratic equation. If the
equation has a complex solution, the real part of the solution is used.
If the equation has two real solutions, the one with the smaller value
of $\left| m_{{\rm j}\gamma\gamma}/ m_{{\rm b}\ell\nu} -1 \right|
+\left| m_{{\rm b}\ell\nu}/m_{{\rm j}\gamma\gamma} -1\right|$ is
chosen. The mass windows for $m_{\rm bjj}$, $m_{\rj\gamma\gamma}$, and
$m_{\PW}$ are the same as in the hadronic channel.
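The W boson mass constraint mentioned above leads to a standard quadratic equation for the longitudinal neutrino momentum. The following sketch (Python; the lepton is treated as massless and the variable names are ours) illustrates one way to solve it, returning the real part when the solutions are complex; when two real solutions exist, the one minimizing the mass-balance quantity defined above is retained downstream.
\begin{verbatim}
import math

M_W = 80.4  # GeV, nominal W boson mass used as the constraint

def neutrino_pz(lep, met_x, met_y):
    """Candidate longitudinal neutrino momenta from the W mass constraint.

    lep: (E, px, py, pz) of the lepton (treated as massless);
    (met_x, met_y): missing transverse momentum components.
    Returns one value (real part) or two real solutions.
    """
    E, px, py, pz = lep
    pt2 = px*px + py*py
    lam = 0.5*M_W*M_W + px*met_x + py*met_y
    disc = lam*lam - pt2*(met_x*met_x + met_y*met_y)
    if disc < 0.0:                      # complex solutions: keep the real part
        return [lam*pz/pt2]
    root = E*math.sqrt(disc)
    return [(lam*pz - root)/pt2, (lam*pz + root)/pt2]
\end{verbatim}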
The signal region is defined using the experimental width of the Higgs
boson, 1.4\GeV, around the nominal mass peak position. As in the
analysis of the inclusive SM Higgs boson decaying into
diphotons~\cite{Chatrchyan:2012twa}, the signal shape of the diphoton
invariant mass distribution is described by the sum of three Gaussian
functions. Although the contribution from the SM Higgs boson
background, dominated by the $\ttbar$H process, is relatively small in
comparison to the contribution of the nonresonant diphoton background,
the resonant diphoton background cannot be ignored because it has a very
similar $m_{\gamma\gamma}$ distribution to that of the signal.
To determine the shape of the nonresonant diphoton background,
a function consisting of a test model and the resonant diphoton
background is fitted to the data under the background-only hypothesis. The
model of the resonant diphoton background is the same as the signal
function. The background function is used to generate 1000 pseudo-experiment
samples that are fitted with the background plus signal probability density
function.
A pull is then defined as $(N_{\rm fit}-N_{\rm gen})/\sigma_{N_{\rm
fit}}$, where $N_{\rm fit}$ is the fitted number of signal events in the
pseudo-experiments, $N_{\rm gen}$ is the number of generated signal
events, and $\sigma_{N_{\rm fit}}$ is the corresponding uncertainty. In
the case under consideration, $N_{\rm gen} = 0$. The procedure is
verified by injecting signal in the pseudo-experiments. Several models
are tried, and the chosen function for the nonresonant diphoton background
is the one whose bias (offset of the pull distribution) is less than
0.15 and which has the minimum number of degrees of freedom among the
set of tested models. A third-order Bernstein polynomial is selected as
the functional form of the background for both the hadronic and leptonic
channels. After determining the function to describe the nonresonant
diphoton background, a function given by the sum of probability density
functions of the resonant and nonresonant diphoton backgrounds and signal
is fitted to the data. The normalization of the resonant diphoton
background is allowed to vary within its uncertainties, while the
normalization of the nonresonant component is unconstrained.
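The bias criterion used to select the nonresonant background function can be summarized by the following sketch (Python), which assumes that the fitted signal yields and their uncertainties from the pseudo-experiments are already available as arrays; the function name and interface are hypothetical.
\begin{verbatim}
import numpy as np

def bias_from_pseudoexperiments(n_fit, sigma_fit, n_gen=0.0, max_bias=0.15):
    """Pull-based bias check for a candidate background function.

    n_fit, sigma_fit: fitted signal yields and their uncertainties from an
    ensemble of pseudo-experiments (assumed to be produced elsewhere by
    fitting the signal-plus-background pdf); n_gen is the injected signal.
    """
    n_fit = np.asarray(n_fit, dtype=float)
    sigma_fit = np.asarray(sigma_fit, dtype=float)
    pulls = (n_fit - n_gen) / sigma_fit
    bias = pulls.mean()               # offset of the pull distribution
    return bias, abs(bias) < max_bias

# hypothetical usage for one candidate model:
# bias, ok = bias_from_pseudoexperiments(fitted_yields, fitted_errors)
\end{verbatim}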
Table~\ref{tab:gg-s-bg} gives a summary of the observed and expected
event yields for the two diphoton channels and Figure~\ref{fig:gghl}
shows the fit result overlaid with the data.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figure_004-a}
\includegraphics[width=0.45\textwidth]{Figure_004-b}
\caption{\label{fig:gghl}
The $m_{\gamma\gamma}$ distribution and the fit result of the
hadronic (left) and leptonic (right) channels. The dashed
line represents the component of the nonresonant diphoton
background, while the solid line represents the total background
plus signal. The shaded bands represent one and two
standard deviation uncertainties of the fit.}
\end{center}
\end{figure}
\begin{table}[h]
\topcaption{
Observed event yield and the expected numbers of background and
signal events for the diphoton selection in the hadronic and
leptonic channels in the $100<m_{\gamma\gamma} < 180\GeV$ mass
range. The signal yields assume $\mathcal{B}(\cPqt\to\PH\cPq)
= 1\%$. The uncertainties are statistical only.
\label{tab:gg-s-bg}}
\begin{center}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l | c c}
Process & Hadronic channel & Leptonic channel\\
\hline
Nonresonant background & 28.9 $\pm$ 5.4\ensuremath{\phantom{0}} & 8.0 $\pm$ 2.8 \\
Resonant background & 0.15 $\pm$ 0.02 & 0.04 $\pm$ 0.01 \\
\hline
$\cPqt\to\PH\cPqc$ & 6.26 $\pm$ 0.07 & 1.91 $\pm$ 0.04 \\
$\cPqt\to\PH\cPqu$ & 7.09 $\pm$ 0.08 & 2.02 $\pm$ 0.04 \\
\hline\hline
Observed & 29 & 8\\
\end{tabular}
\end{center}
\end{table}
\subsection{b jet + lepton channel}
The basic event selection requirements for the b jet + lepton channel
are a single-lepton trigger, one isolated lepton, a minimum \MET of
30\GeV, and at least four jets, with at least three of them tagged as b
jets. The background is dominated by $\ttbar \to
\bbbar\PW^{+}\PW^{-}$ production. Figure~\ref{fig:1c} shows the
distributions of \MET and the W boson transverse mass ($M_{\mathrm{T}}$) for data
and simulation after the basic event selection criteria are applied.
The transverse mass is defined as
\begin{equation*}
M_{\mathrm{T}} = \sqrt{2\pt^{\ell}\MET[1 - \cos(\Delta\phi(\ell, \nu))]},
\end{equation*}
where $\pt^{\ell}$ is the \pt of the lepton, \MET is used in place of
the \pt of the neutrino, and $\Delta\phi(\ell, \nu)$ is the azimuthal
angular difference between the directions of the lepton and neutrino.
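For reference, the transverse mass defined above can be evaluated as in this minimal sketch (Python; inputs in \GeV and radians).
\begin{verbatim}
import math

def transverse_mass(pt_lep, met, dphi):
    """W transverse mass from lepton pt, missing ET and their azimuthal gap."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# e.g. transverse_mass(40.0, 35.0, 2.9) is about 74 GeV
\end{verbatim}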
\begin{figure}[ht]
\begin{center}
\vspace{1cm}
\includegraphics[width=0.45\textwidth]{Figure_005-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_005-b.pdf}
\caption{
Comparison between data and simulated events after the basic
selection for b jet + lepton events has been applied: the \MET
distribution (left) and the reconstructed transverse mass of the W
boson candidate (right). A value of $\mathcal{B}({\rm t\to Hc}) =
3\%$ is used for the sake of improved visualization.
}\label{fig:1c}
\end{center}
\end{figure}
For both top quark decays $\cPqt \to \PH\cPq \to \bbbar \rj$ and $\cPqt
\to \PW\cPqb \to \cPqb\ell\nu$, a full reconstruction of the top
quark invariant mass $m_{\PH\cPq}$ or $m_{\PW\cPqb}$ is possible. However,
combinatorial background arises since there is no unambiguous way to
match multiple light-quark and b quark jets with the final-state
quarks. Therefore, all possible combinations are examined and a
multivariate analysis (MVA) technique~\cite{Hocker:2007ht} is used to
select the best candidate for each event. Several variables based on
event kinematics and event topology are examined. Considering their
signal-to-background separation power, the following variables are used
to form a boosted decision tree (BDT)~classifier~\cite{Hocker:2007ht}:
\begin{itemize}
\item the invariant masses $m_{\PH\cPq}$ and $m_{\PW\cPqb}$ of the reconstructed top quarks,
\item the energy of the u or c jet from the
$\cPqt\to\PH\cPq$ decay in the rest frame of its parent top quark,
\item the azimuthal angle between the directions of the reconstructed
top quarks,
\item the azimuthal angle between the reconstructed W boson and
the associated b jet directions,
\item the azimuthal angle between the Higgs boson and the
associated jet directions,
\item the azimuthal angle between the directions of the b jets resulting from the
Higgs boson decay.
\end{itemize}
The BDT classifier is trained with the correct and wrong combinations of
simulated FCNC events determined from the generator-level parton
matching. Because only event kinematics and topological variables are
used, the Hu and Hc channels share the same BDT classifier. The
jet-parton assignment in each event is determined by choosing the
combination with the largest BDT classifier score, resulting in the correct
assignment in 54\% of events, as determined from simulation.
The signal is determined using a template fit of the output of an
artificial neural network (ANN)~\cite{Hocker:2007ht}. The ANN takes its
inputs from the invariant mass of the reconstructed Higgs boson
candidate and the CSV discriminator variables of the three b jets from
the hadronic top quark and Higgs boson daughters. The training of the
ANN is done separately for the $\cPqt\to\PH\cPqu$ and
$\cPqt\to\PH\cPqc$ channels. A control sample dominated by
$\ttbar$ is selected to validate the simulation used in the training.
The sample is constructed by requiring one lepton and four jets, of
which exactly two are b jets.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figure_006-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_006-b.pdf}
\caption{
The output distributions from the ANN discriminator for data
(points) and simulated background (lines) where the ANN was
trained to discriminate the backgrounds from either
$\cPqt\to\PH\cPqc$ (left) or $\cPqt\to\PH\cPqu$ (right) decays.
The solid line shows the result of the fit of the signal and
background templates to data. The dotted line gives the
predicted signal distribution from simulation for
$\mathcal{B}(\cPqt\to\PH\cPqc) = 3\%$ and the filled histogram
shows the proportion of signal estimated from the fit.
\label{plot:bb-fit}}
\end{center}
\end{figure}
Figure~\ref{plot:bb-fit} shows the results of the fit performed with the
6840 observed events. The observed number of events and the expected
yields of the signal and the main backgrounds estimated from simulation
are shown in Table~\ref{table:multijet_sim_results}. The estimated
background and signal based on the fit of the ANN discriminator output
is shown in Table~\ref{table:multijet_fit_results}. The number of
signal and background events from the fit result for the Hc channel are
$74 \pm 109\stat\pm~24\syst$ and $6770 \pm 130\stat\pm~950\syst$, respectively. The corresponding yields for the Hu channel are
$197 \pm 87\stat\pm~59\syst$ and $6640 \pm 120\stat\pm~800\syst$, respectively.
\begin{table}[h]
\topcaption{The expected number of background and signal events for the
b jet + lepton selection from simulation. The signal yields from
the simulation of the signal assume $\mathcal{B}(\cPqt\to\PH\cPq) =
1\%$. Uncertainties combine both statistical and
systematic components in quadrature.
\label{table:multijet_sim_results}}
\centering
\begin{tabular}{l | c}
Process & Predicted number of events \\
\hline
\ttbar & $7100 \pm 1500$ \\
\ttbar\PH & $55 \pm 11$ \\
\PW\bbbar & $71 \pm 14$ \\
\hline
Total background & $7226 \pm 1500$ \\
\hline
$\cPqt\to\PH\cPqc$ & $272 \pm 90\ensuremath{\phantom{0}}$ \\
$\cPqt\to\PH\cPqu$ & $215 \pm 65\ensuremath{\phantom{0}}$ \\
\hline\hline
Observed & 6840 \\
\end{tabular}
\end{table}
\begin{table}[h]
\topcaption{The measured number of background and signal events for the
b jet + lepton selection from fitting the ANN output
trained on $\cPqt\to\PH\cPqc$ and $\cPqt\to\PH\cPqu$ final states.
Uncertainties are statistical and systematic values,
respectively. The observed number of events is shown in the last
row.
\label{table:multijet_fit_results}}
\centering
\begin{tabular}{l | c c}
Process & $\cPqt\to\PH\cPqc$ & $\cPqt\to\PH\cPqu$ \\
\hline
Background & $6770 \pm 130 \pm 950$ & $6440 \pm 120 \pm 800$ \\
Signal & $\x74 \pm 109 \pm 24$ & $\x197 \pm \x87 \pm 59\ensuremath{\phantom{0}}$ \\
\hline\hline
Observed & \multicolumn{2}{c}{6840} \\
\end{tabular}
\end{table}
\section{Systematic uncertainties}
\label{sec:sys}
In the fit to the data, systematic uncertainties are treated as nuisance
parameters. Each of them is assigned a log-normal or Gaussian pdf,
which is incorporated into the likelihood in a frequentist manner by
interpreting it as arising from a pseudo-measurement.
Nuisance parameters can affect either the signal yield, the shape of
kinematic variable distributions, or both. If a specific source of
uncertainty is not included for a given channel, it indicates that the
uncertainty is either not applicable to that channel or is found to have
negligible impact on the result.
The sources of uncertainties common to all analysis channels are: the
uncertainty in the total integrated luminosity
(2.6\%)~\cite{CMS-PAS-LUM-13-001}; the effects of the event pileup
modeling for the signal samples (0.2--3\%), which is particularly
important for the b jet + lepton channel; the uncertainty in the Higgs
boson branching fractions (5\%)~\cite{Denner:2011mq}; the uncertainty in the \ttbar cross
section (7.5\%)~\cite{Khachatryan:2014loa}; the uncertainty in the jet
energy scale (1--15\%)~\cite{ref:jetscale} and resolution (0.4--8\%),
where the larger uncertainty is for the b jet + lepton selection; the
uncertainty in the PDF used in the event generators
($<9\%$)~\cite{Bourilkov:2006cj}; the assumed top quark \pt distribution
(1--4\%)~\cite{CMS-PAS-TOP-12-027}; the \MET resolution
(0.2--4\%)~\cite{ref:jetscale}; the uncertainty in the trigger
efficiency (${<}2\%$); and the corrections applied to the simulation to
account for the differences in lepton identification and isolation
efficiencies in data and simulation (0.01--6\%), where the larger
uncertainty is for the selection of events with a three-electron final
state.
The uncertainties specific to the signal description and background
estimation for the multilepton analysis come from the 11--13\%
uncertainty in the $\ttbar$W and $\ttbar$Z theoretical cross
sections~\cite{Garzelli:2012bn}; the 15\% uncertainty in the WZ
normalization (determined from a control region); the uncertainty in the
lepton misidentification rate (40\% for electrons, 30\% for muons); and
the 20\% uncertainty in the electron charge mismeasurement probability.
The uncertainties specific to the signal description and background estimation
for the diphoton channels are the corrections applied to the
simulation to account for differences of the photon identification efficiency
in data and simulation (0.1--5\%); and the uncertainty in the jet and b jet
identification efficiency (2--3.5\%)~\cite{CSV}. The resonant background from
the SM Higgs boson production has an uncertainty of 8.1\% from the PDF
uncertainty and 9.3\% from the QCD scale~\cite{lhcwg12}.
The uncertainties specific to the signal description and background
estimation for the b jet + lepton channel are dominated by the b jet
identification. The uncertainty in the b tagging correction has two
components: one is from the sample purity (4\%)~\cite{CSV} and the other
from the sample statistical uncertainty (24\%). The uncertainty in
the $\ttbar$+jets cross section, determined using a leading-order event
generator, is 1\%. The uncertainty in the modeling of the heavy-flavor
daughters of the W decay in the \ttbar simulated sample is estimated to
be 3\%. Additional uncertainties arise from the event generator
parameters such as the renormalization and factorization scales
(5\%)~\cite{PhysRevD.78.013004}, the parton-jet matching threshold (1--9\%), and
the top quark mass (4\%).
The uncertainties owing to the integrated luminosity, jet energy scale
and resolution, pileup, reconstruction of physics objects, signal PDFs,
and top quark related uncertainties are assumed to be fully correlated,
while all others are treated as uncorrelated.
The systematic uncertainties are summarized in Table~\ref{sys}.
\begin{table}[ht!]
\topcaption{
Systematic uncertainties for the $\rm t\bar{t} \to Hq + Wb$~(q
= u, c) channels in percent. Ranges are quoted to indicate
values that vary across the different analyses.
} \label{sys}
{
\begin{center}
\renewcommand{\arraystretch}{1.05}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|ccccc}
Channel & SS dilepton & Trilepton & $\gamma\gamma$ hadronic& $\gamma\gamma$ leptonic & b jet + lepton \\
\hline
Integrated luminosity & 2.6 & 2.6 & 2.6 & 2.6 & 2.6 \\
Pileup & 1.0 & 1.0 & 0.3 & 0.8 & 0.2--3.0 \\
Higgs boson branching fraction & 5.0 & 5.0 & 5.0 & 5.0 & 5.0\\
${\rm t\bar{t}}$ cross section & 7.5 & 7.5 & 7.5 & 7.5 & 7.5\\
Jet energy scale & 0.5 & 0.6 & 1.2 & 1.0 & 5.2--15\phantom{.}\\
Jet energy resolution & 0.8 & 2.2 & 2.7 & 0.4 & 2.2--7.8\\
Signal PDF & 6.0 & 6.0 & 5.9 & 5.2 & ${<}1$--9.0\\
Top quark $p_{\rm T}$ correction & --- & --- & 1.4 & 3.2 & 0.8--4.3\\
\MET & 4.0 & 4.0 & --- & --- & 0.2--2.5 \\
Trigger efficiency & 1.0--2.0 & --- & 1.0 & 1.0 & ${<}0.1$--0.4\ensuremath{\phantom{0}}\\
Identification and isolation & & & & & \\
~~~~- muon & 1.0--2.0 & 1.0--3.0 & --- & 0.3 & 0.01--0.04 \\
~~~~- electron & 2.0--4.0 & 2.0--6.0 & --- & 0.3 & ${<}0.1$--0.2\ensuremath{\phantom{0}} \\
\hline
${\rm t\bar{t}}$W normalization & 11.0 & 11.0 & --- & --- & --- \\
${\rm t\bar{t}}$Z normalization & 13.0 & 13.0 & --- & --- & ---\\
WZ normalization & 15.0 & 15.0 & --- & --- & ---\\
Lepton misidentification & & & & & \\
~~~~- electron & 40.0 & 40.0 & --- & --- & ---\\
~~~~- muon & 30.0 & 30.0 & --- & --- & ---\\
Charge misidentification & 20.0 & --- & --- & --- & ---\\
\hline
Photon identification efficiency & --- & --- & 5.2 & 5.2 & --- \\
Corrections per photon & & & & & \\
~~~- energy scale & --- & --- & 0.1 & 0.1 & --- \\
~~~- energy resolution & --- & --- & 0.1 & 0.1 & --- \\
~~~- material mismodeling & --- & --- & 0.3 & 0.3 & --- \\
~~~- nonlinearity & --- & --- & 0.1 & 0.1 & --- \\
Jet identification efficiency & --- & --- & 2.0 & 2.0 & --- \\
b jet identification efficiency & --- & --- & 2.9 & 3.5 & --- \\
Higgs boson background & & & & & \\
~~~~- cross section scale factors & --- & --- & 9.3 & 9.3 & ---\\
~~~~- PDF & --- & --- & 8.1 & 8.1 & ---\\
\hline
b jet CSV distribution & & & & & \\
~~~~- purity & --- & --- & --- & --- & 1.0--3.4\\
~~~~- statistical precision & --- & --- & --- & --- & 1.0--24\phantom{.}\\
\ttbar + heavy flavor jets & --- & --- & --- & --- & 0.3--1.0\\
Modeling W decay daughters & --- & --- & --- & --- & 1.6--2.7\\
Generator parameters & & & & & \\
~~~- QCD scale & --- & --- & --- & --- & 1.0--4.9 \\
~~~- matching parton-jet threshold & --- & --- & --- & --- & 1.3--9.4 \\
~~~- top quark mass & --- & --- & --- & --- & 0.8--4.1 \\
\end{tabular}
}
\end{center}}
\end{table}
\section{Results}
\label{sec:results}
The expected number of events from the SM background processes and the
expected number of signal events in data assuming a branching fraction
$\mathcal{B}(\cPqt\to\PH\cPq) = 1\%$ are shown in
Tables~\ref{table:yieldsa}, \ref{tab:gg-s-bg}, and
\ref{table:multijet_fit_results} for the multilepton, diphoton, and b jet +
lepton selections, respectively. The final results are based on the
combination of 12 channels: three SS dilepton, four trilepton, one
diphoton + hadrons, two diphoton + lepton, and two b jet + lepton. The
combination requires the simultaneous fit of the data selected by all
the individual analyses, accounting for all statistical and systematic
uncertainties, and their correlations. As $\mathcal{B}({\rm t \to Hq})$
is expected to be small, the possibility of both top quarks decaying via
FCNC is not considered.
No excess beyond the expected SM background is observed and upper limits
at the 95\% CL on the branching fractions of $\rm t \to Hc$ and $\rm t
\to Hu$ are determined using the modified frequentist approach
(asymptotic CLs
method~\cite{ref:Junk:1999kv,ref:Read:2002hq,Cowan:2010js}).
The observed 95\% CL upper limits on the branching fractions ${\cal
B}({\rm t \to Hc})$ and ${\cal B}({\rm t \to Hu})$ are 0.40\% and
0.55\%, respectively, obtained from the combined multilepton, diphoton,
and b jet + lepton channels. A summary
of the observed and expected limits is presented in
Table~\ref{table:limits_final}. The diphoton channels are significantly
more sensitive than the other channels, largely because of the lower
uncertainty in the background model. The multilepton and b jet + lepton
channels provide a 15\% (37\%) improvement on the observed (expected)
upper limit when combined with the diphoton channel. A previous search
for FCNC mediated by Higgs boson interactions via the $\rm t \to Hc$ decay
at the LHC made use of trilepton and diphoton final
states~\cite{Khachatryan:2014jya}. The inclusion of new channels (SS
dilepton, diphoton, and b jet + lepton final states) in addition to
refinements in the trilepton and diphoton channels results in an
improvement of 30\% (34\%) in the observed (expected) upper limit
on $\mathcal{B}(\rm t\to Hc)$.
The partial width of the ${\rm t \to Hq}$ process is related to the
square of the Yukawa coupling $\lambda_{\rm tq}$
by the formula~\cite{Craig:2012vj,Atwood}:
\begin{equation*}
\Gamma_{\cPqt\to\PH\cPq}= \frac{m_{\cPqt}}{16\pi}
\left|\lambda_{\rm tq}^{\PH}\right|^{2}
\left[(y_{\rm q}+1)^2-y^2\right]
\sqrt{1-(y-y_{\rm q})^2} \sqrt{1-(y+y_{\rm q})^2},
\end{equation*}
where $y = m_{\PH}/m_{\rm t}$ and $y_{\rm q} = m_{\rm q}/m_{\rm t}$.
(Note that a convention where the parity of the coupling is ignored is
adopted here: this introduces a factor of two when comparing to the
ATLAS result.) Assuming the ${\rm t \to Wb}$ partial width to be
dominant, the upper limit on the ${\rm t \to Hq}$ branching fractions
can be translated into an upper limit on the square of the couplings
using the relations:
\begin{equation*}
\mathcal{B}({\rm t \to Hc}) = \Gamma_{\rm t\to Hc}/\Gamma_{\rm Total} =
(0.58 \pm 0.01) \left|\lambda_{\rm tc}^{\PH}\right|^{2},
\end{equation*}
\begin{equation*}
\mathcal{B}({\rm t \to Hu}) = \Gamma_{\rm t\to Hu}/\Gamma_{\rm Total} =
(0.56 \pm 0.01) \left|\lambda_{\rm tu}^{\PH}\right|^{2},
\end{equation*}
where the CKM matrix element $\left|V_{\rm tb}\right|$ is assumed to be
equal to unity in the NLO calculation~\cite{Denner} of
$\Gamma_{\rm Total}\approx\Gamma_{\rm t\to Wb}=1.372\GeV$, and the
uncertainties arise from the uncertainties in the mass values. The Particle
Data Group~\cite{PDG} values of $m_{\rm H} = 125\GeV$, $m_{\rm t} =
173.5\GeV$, $m_{\rm c} = 1.29\GeV$, and $m_{\rm u}=2.3\MeV$ are used.
Based on the analysis results, the observed (expected) 95\% CL upper
limits on the squares of the top-Higgs Yukawa couplings are:
\begin{equation*}
\left|\lambda_{\rm tc}^{\PH}\right|^{2} < 6.9~(7.4^{+3.6}_{-2.2}) \times 10^{-3},\\
\end{equation*}
\begin{equation*}
\left|\lambda_{\rm tu}^{\PH}\right|^{2} < 9.8~(7.1^{+3.2}_{-2.3}) \times 10^{-3}.
\end{equation*}
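Numerically, these coupling limits follow from dividing the branching fraction limits by the coefficients quoted above, as in the following sketch (Python).
\begin{verbatim}
# Convert 95% CL limits on B(t -> Hq) into limits on |lambda|^2
# using B(t->Hc) = 0.58 |lambda_tc|^2 and B(t->Hu) = 0.56 |lambda_tu|^2.
LIMITS = {"tc": {"obs": 0.0040, "exp": 0.0043},   # branching fractions
          "tu": {"obs": 0.0055, "exp": 0.0040}}
COEFF = {"tc": 0.58, "tu": 0.56}

for q, vals in LIMITS.items():
    for kind, br in vals.items():
        print(f"|lambda_{q}|^2 {kind} < {br / COEFF[q]:.2e}")
# reproduces ~6.9e-3 (obs) and 7.4e-3 (exp) for tc,
# and ~9.8e-3 (obs) and 7.1e-3 (exp) for tu
\end{verbatim}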
\begin{table}[ht!]
\topcaption{The observed and expected upper limits at
the $95\%$ CL on the branching fraction (in \%) of $\rm t
\to Hq$ (q = u, c) for: trilepton, SS dilepton, and
combined multilepton channels; diphoton; b jet + lepton; and the
combination of all channels. For the expected upper limit, the
limit plus and minus a standard deviation are also shown.
} \label{table:limits_final}
\vspace{0.1in}
\centering
\begin{tabular}{ l | c | c c c }
& $\mathcal{B}_\text{obs}$($ \PQt \to \PH \PQc $) & $\mathcal{B}_\text{exp}$($ \PQt \to \PH \PQc $) & $\mathcal{B}_\text{exp}{+}\sigma$ & $\mathcal{B}_\text{exp}{-}\sigma$ \\
\hline
Trilepton & 1.26 & 1.33 & 1.87 & 0.95 \\
Same-sign dilepton & 0.99 & 0.93 & 1.26 & 0.68 \\
Multilepton combined & 0.93 & 0.89 & 1.22 & 0.65 \\
Diphoton hadronic & 1.26 & 1.33 & 1.87 & 0.95 \\
Diphoton leptonic & 0.99 & 0.93 & 1.26 & 0.68 \\
Diphoton combined & 0.47 & 0.67 & 1.06 & 0.44 \\
b jet + lepton & 1.16 & 0.89 & 1.37 & 0.60 \\
\hline
Full combination & 0.40 & 0.43 & 0.64 & 0.30 \\
\hline \hline
& $\mathcal{B}_\text{obs}$($ \PQt \to \PH \PQu $) & $\mathcal{B}_\text{exp}$($ \PQt \to \PH \PQu $) & $\mathcal{B}_\text{exp}{+}\sigma$ & $\mathcal{B}_\text{exp}{-}\sigma$ \\
\hline
Trilepton & 1.34 & 1.47 & 2.09 & 1.05 \\
Same-sign dilepton & 0.93 & 0.85 & 1.16 & 0.62 \\
Multilepton combined & 0.86 & 0.82 & 1.14 & 0.60 \\
Diphoton hadronic & 1.26 & 1.33 & 1.87 & 0.95 \\
Diphoton leptonic & 0.99 & 0.93 & 1.26 & 0.68 \\
Diphoton combined & 0.42 & 0.60 & 0.96 & 0.39 \\
b jet + lepton & 1.92 & 0.84 & 1.31 & 0.57 \\
\hline
Full combination & 0.55 & 0.40 & 0.58 & 0.27 \\
\end{tabular}
\end{table}
\section{Summary}
\label{sec:conclusions}
\par
A search for flavor-changing neutral currents in the decay of a top
quark to a charm or up quark and a Higgs boson based on $\sqrt{s} =
8\TeV$ proton-proton collisions has been presented. Samples of
multilepton, diphoton, and b jet + lepton events were selected from data
recorded with the CMS detector, corresponding to an integrated
luminosity of 19.7\fbinv. The search targets the topology ${\rm pp \to \ttbar \to}$
${\rm Hq + Wb}$, where q = u, c and the Higgs boson is allowed to decay into
WW, ZZ, $\tau \tau$, $\gamma\gamma$, and \bbbar. No excess of
events above the SM background is observed, and branching fractions of
${\cal B}({\rm t \to Hc})$ larger than 0.40\% and ${\cal B}({\rm t \to
Hu})$ larger than 0.55\% are excluded at the 95\% confidence level.
These observed upper limits on ${\cal B}({\rm t \to Hq})$ and the
corresponding constraints on the top quark flavor-changing Higgs boson
Yukawa couplings are amongst the most stringent measured to date.
\begin{acknowledgments}
\hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren Rachada-pisek}
We congratulate our colleagues in the CERN accelerator departments for
the excellent performance of the LHC and thank the technical and
administrative staffs at CERN and at other CMS institutes for their
contributions to the success of the CMS effort. In addition, we
gratefully acknowledge the computing centers and personnel of the
Worldwide LHC Computing Grid for delivering so effectively the computing
infrastructure essential to our analyses. Finally, we acknowledge the
enduring support for the construction and operation of the LHC and the
CMS detector provided by the following funding agencies: the Austrian
Federal Ministry of Science, Research and Economy and the Austrian
Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds
voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq,
CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and
Science; CERN; the Chinese Academy of Sciences, Ministry of Science and
Technology, and National Natural Science Foundation of China; the
Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of
Science, Education and Sport, and the Croatian Science Foundation; the
Research Promotion Foundation, Cyprus; the Secretariat for Higher
Education, Science, Technology and Innovation, Ecuador; the Ministry of
Education and Research, Estonian Research Council via IUT23-4 and
IUT23-6 and European Regional Development Fund, Estonia; the Academy of
Finland, Finnish Ministry of Education and Culture, and Helsinki
Institute of Physics; the Institut National de Physique Nucl\'eaire et
de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie
Atomique et aux \'Energies Alternatives~/~CEA, France; the
Bundesministerium f\"ur Bildung und Forschung, Deutsche
Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher
Forschungszentren, Germany; the General Secretariat for Research and
Technology, Greece; the National Scientific Research Foundation, and
National Innovation Office, Hungary; the Department of Atomic Energy and
the Department of Science and Technology, India; the Institute for
Studies in Theoretical Physics and Mathematics, Iran; the Science
Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy;
the Ministry of Science, ICT and Future Planning, and National Research
Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences;
the Ministry of Education, and University of Malaya (Malaysia); the
Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and
UASLP-FAI); the Ministry of Business, Innovation and Employment, New
Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science
and Higher Education and the National Science Centre, Poland; the
Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna;
the Ministry of Education and Science of the Russian Federation, the
Federal Agency of Atomic Energy of the Russian Federation, Russian
Academy of Sciences, and the Russian Foundation for Basic Research; the
Ministry of Education, Science and Technological Development of Serbia;
the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e
Innovaci\'on and Programa Consolider-Ingenio 2010, Spain; the Swiss
Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich,
and SER); the Ministry of Science and Technology, Taipei; the Thailand
Center of Excellence in Physics, the Institute for the Promotion of
Teaching Science and Technology of Thailand, Special Task Force for
Activating Research and the National Science and Technology Development
Agency of Thailand; the Scientific and Technical Research Council of
Turkey, and Turkish Atomic Energy Authority; the National Academy of
Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine;
the Science and Technology Facilities Council, UK; the US Department of
Energy, and the US National Science Foundation.
Individuals have received support from the Marie-Curie program and the
European Research Council and EPLANET (European Union); the Leventis
Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt
Foundation; the Belgian Federal Science Policy Office; the Fonds pour la
Formation \`a la Recherche dans l'Industrie et dans l'Agriculture
(FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en
Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports
(MEYS) of the Czech Republic; the Council of Science and Industrial
Research, India; the HOMING PLUS program of the Foundation for Polish
Science, cofinanced from European Union, Regional Development Fund, the
Mobility Plus program of the Ministry of Science and Higher Education,
the National Science Center (Poland), contracts Harmonia
2014/14/M/ST2/00428, Opus 2013/11/B/ST2/04202, 2014/13/B/ST2/02543 and
2014/15/B/ST2/03998, Sonata-bis 2012/07/E/ST2/01406; the Thalis and
Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the National
Priorities Research Program by Qatar National Research Fund; the
Programa Clar\'in-COFUND del Principado de Asturias; the Rachadapisek
Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and
the Chulalongkorn Academic into Its 2nd Century Project Advancement
Project (Thailand); and the Welch Foundation, contract C-1845.
\end{acknowledgments}
\section*{Supplemental Material}
\subsection{The investigation of $\Lambda$ dependence}
The typical value of $\Lambda$ lies in the range $\Lambda \sim 0.9\pm 0.2~\text{GeV}$.
Here, we provide two additional fits with $\Lambda = 0.8$ and
$1.2$ GeV.
The final results are shown in Table~\ref{tab:fitmass} and Fig.~\ref{fig:fitspec}. The three sets of
results are similar to each other, which implies that the $\Lambda$
dependence can be absorbed by the renormalization of the interaction
kernel.
The final results are therefore almost independent of the choice of
$\Lambda$ within the commonly used range.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.8\linewidth]{spec-lattice-L08.pdf}\vspace{0.5cm}
\includegraphics[width=0.8\linewidth]{spec-lattice-L12.pdf}
\caption{The dependence of the fitted binding energies on the
length $L$ for the $D^*_{s0}(2317)$ (left) and the
$D^*_{s1}(2460/2536)$ (middle) states, with the pion masses
$m_{\pi}=150$ MeV~[69] and $m_{\pi}=156$ MeV~[68], for $\Lambda =
0.8$ and $1.2$ GeV from top to bottom.} \label{fig:fitspec}
\end{figure*}
\\
\begin{table}[t]
\caption{The content of the bare $c\bar s$ cores in
the $D_s$ states $P(c\bar{s})$ for three different $\Lambda$.
}\label{tab:fitmass}
\begin{ruledtabular}
\begin{tabular}{cccc}
& $\Lambda=1.0 \,[\%]$ & $\Lambda=0.8\,[\%]$ & $\Lambda=1.2\,[\%]$ \vspace{0.1cm}\\
\hline
$D^*_{s0}(2317)$ & $32.0^{+5.2}_{-3.9}$ & $32.5^{+4.7}_{-3.6}$ & $31.6^{+5.9}_{-3.5}$ \vspace{0.1cm} \\
$D^*_{s1}(2460)$ & $52.4^{+5.1}_{-3.8}$ & $53.0^{+4.5}_{-3.9}$ & $51.9^{+5.9}_{-3.9}$ \vspace{0.1cm} \\
$D^*_{s1}(2536)$ & $98.2^{+0.1}_{-0.2}$ & $98.2^{+0.1}_{-0.2}$ & $98.2^{+0.1}_{-0.2}$ \vspace{0.1cm} \\
\hline
$D^*_{s2}(2573)$ & $95.9^{+1.0}_{-1.5}$ & $95.7^{+0.9}_{-1.3}$ & $96.0^{+0.8}_{-1.7}$ \vspace{0.1cm} \\
\end{tabular}
\end{ruledtabular}
\end{table}
\end{document}
\section{Introduction}
A code of length $n^2$ is obtained by taking the centralizer of a matrix from the vector space $\mathbb{F}_q^{n \times n}$. As a consequence, such codes cannot attain most lengths. In contrast, a code of length $n \cdot k$ is formed by taking the solutions of a matrix equation for some matrices $A \in \mathbb{F}_q^{n \times n}$ and $C \in \mathbb{F}_q^{k \times k}$ over $\mathbb{F}_q$. Here we consider one such matrix equation, $AB=BC$, where $A \in \mathbb{F}_q^{n \times n}$ and $C \in \mathbb{F}_q^{k \times k}$. The set of solutions $\mathcal{R}(A,C)= \{B \in \mathbb{F}_q^{n \times k} \mid AB=BC\}$ then gives a code of length $n\cdot k$ by a construction similar to those in \cite{Alahmadi2017235} \cite{Alahmadi201468} \cite{joydeb1}. This code is called an intertwining code \cite{Glasby2107}. Intertwining codes extend the improved decoding ability of GTC codes \cite{joydeb1} to a much larger class of linear codes.
Finding an efficient error-correcting procedure for a linear code is a challenging problem. For centralizer codes and twisted centralizer codes, there is a very convenient method to detect and correct a single error using the syndrome. This technique, however, does not extend easily to correcting more than a single error. We therefore present two algorithms that provide a better decoding procedure. Our algorithms work for the family of intertwining codes as well as for any linear code.
In this paper, we explore a few properties of intertwining codes. We then show that there exists an intertwining code containing a given linear code as a subcode. We give an effective upper bound on the minimum distance of the code, and prove the existence of a codeword of a certain weight determined by the matrices $A$ and $C$. Finally, we show a possible way to find, from a purely computational perspective, a matrix pair $(A,C)$ whose intertwining code is related to a given linear code.
Throughout this paper we denote by $\mathbb{F}_q$ a finite field with $q$ elements and by $\mathbb{F}_q^{n \times k}$ the set of all matrices of order $n \times k$ over $\mathbb{F}_q$. We take two matrices $A$ and $C$ from the vector spaces $\mathbb{F}_q^{n \times n}$ and $\mathbb{F}_q^{k \times k}$, respectively, and $O$ denotes the null matrix.
\begin{defn}
For any matrices $A \in \mathbb{F}_q^{n \times n}$ and $C \in \mathbb{F}_q^{k \times k}$, the set $\mathcal{R}(A, C)= \lbrace B \in \mathbb{F}_q^{n \times k}| AB=BC \rbrace$ is called intertwining code \cite{Glasby2107}.
\end{defn}
Clearly, the set $\mathcal{R}(A, C)$ is a linear subspace of the vector space $\mathbb{F}_q^{n \times k}$ and hence it is a linear code. The construction of the code is very similar to those in \cite{Alahmadi2017235} \cite{Alahmadi201468} \cite{joydeb1}. We develop a few basic results on intertwining codes as follows.
\begin{enumerate}
\item If $A \in \mathcal{R} (A,C)$ then $C$ must be an $n \times n$ matrix. If, in addition, $A$ belongs to the intertwining code, then $A^2=AC$. If $A$ is idempotent then $A(C-I_n) = O$, and if $A^2=O$ then every column of $C$ lies in the null space of $A$. However, these cases do not necessarily describe all solutions of the equation $A^2=AC$.
\item If $A$ and $C$ are invertible matrices of order $n$ and $k$ respectively, then $\mathcal{R}(A, C)$ is isomorphic to $\mathcal{R}(A^{-1}, C^{-1})$.
\item Let $X \in \mathbb{F}_q^{n \times n}$ and $Y \in \mathbb{F}_q^{k \times k}$ be two invertible matrices. Then $\mathcal{R}(A,C)$ is isomorphic to the conjugate code $\mathcal{R}(XAX^{-1}, YCY^{-1})$.
\item Let $A$ be a matrix for which $c$ is not an eigenvalue, and let $C =cI_k$. Then the matrix equation $AB=BC$ possesses only the trivial solution, i.e., $\mathcal{R}(A, C)=\{O\}$.
\end{enumerate}
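To make the construction concrete, the following sketch (Python, over $\mathbb{F}_2$; an illustration rather than an optimized implementation) computes a basis of $\mathcal{R}(A,C)$ by treating the entries of $B$ as unknowns and solving the homogeneous system $AB-BC=O$ by Gaussian elimination modulo $2$. For $A=C$ it recovers the centralizer code of $A$.
\begin{verbatim}
import numpy as np

def nullspace_mod2(M):
    """Basis of the null space of M over GF(2), returned as vectors."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]          # swap pivot row into place
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]               # eliminate column c (XOR = mod-2)
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:
        v = np.zeros(cols, dtype=int)
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = M[i, f]                 # back-substitution mod 2
        basis.append(v)
    return basis

def intertwining_code_basis(A, C):
    """Basis of R(A,C) = {B : AB = BC} over GF(2), as n-by-k matrices."""
    n, k = A.shape[0], C.shape[0]
    cols = []
    for idx in range(n * k):
        E = np.zeros((n, k), dtype=int)
        E[idx // k, idx % k] = 1           # unit matrix e_{ij}
        cols.append(((A @ E - E @ C) % 2).reshape(-1))
    M = np.array(cols).T % 2               # matrix of the map B -> AB - BC
    return [v.reshape(n, k) for v in nullspace_mod2(M)]
\end{verbatim}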
\section{Analysis on weight distribution}
\begin{thm}
Let $J \in \mathbb{F}_2^{n \times k}$ be the matrix whose entries are all $1$. If $J \in \mathcal{R}(A,C)$ then the weight distribution of the code $\mathcal{R}(A,C)$ is symmetric, i.e., $A_i=A_{nk-i}$ for all $i$. In addition, if every row sum of $A$ equals $0$ and every column sum of $C$ equals $0$, then $J \in \mathcal{R}(A,C)$.
\end{thm}
\begin{proof}
Let $B_1 \in \mathcal{R}(A,C)$ be a codeword of weight $i$, where $0 \leq i \leq nk$. If $J \in \mathcal{R}(A,C)$ then $J +B_1 \in \mathcal{R}(A,C)$, since $\mathcal{R}(A,C)$ is a linear code. The weight of $J +B_1$ is $nk-i$. Thus, whenever a codeword of weight $i$ appears, there simultaneously exists a codeword of weight $nk-i$. Hence the weight distribution is symmetric, i.e., the number of codewords of weight $i$ equals the number of codewords of weight $nk-i$ for every integer $0 \leq i \leq nk$, that is, $A_i=A_{nk-i}$.
For the second part, observe that $AJ = [\text{row-}1\text{-sum} ~~\text{row-}2\text{-sum} ~\dots ~\text{row-}n\text{-sum}]^T \cdot [1 ~1 ~1 ~\dots ~1]$ and, similarly, $JC=[1 ~1 ~1 ~\dots ~1]^T \cdot [\text{col-}1\text{-sum} ~~\text{col-}2\text{-sum} ~\dots ~\text{col-}k\text{-sum}]$. Hence, if all row sums of $A$ equal $0$ and all column sums of $C$ equal $0$, then $AJ=O$ and $JC=O$, so $J$ satisfies the equation $AJ-JC=O$.
\end{proof}
\subsection{Existence of a certain weight codeword}
Consider the matrices $A$ and $C$ from which $\mathcal{R}(A,C)$ has been constructed, and assume that neither of them is invertible. Then $A$ has dependent columns and $C$ has dependent rows. If $d_A$ is the minimum number of linearly dependent columns of $A$ and $d_C$ is the minimum number of linearly dependent rows of $C$, then $\mathcal{R}(A,C)$ contains at least one codeword of weight $d_A \cdot d_C$. Indeed, we have a relation $\sum c_i A_i=0$ with $c_i \neq 0$ for all $i$, where the $A_i$ are columns of $A$, since the relation involves a minimum number of columns. The coefficients $c_i$ form a column vector; similarly, we obtain a row vector from $C$. Multiplying these vectors as an $n\times 1$ by $1 \times k$ matrix product gives an element of $\mathcal{T}_O=\{ B \in \mathbb{F}_q^{n \times k} \mid AB=BC=O\}$.
Since a codeword of weight $d_A \cdot d_C$ exists, and by construction this weight is at most $(r_A+1)\cdot(r_C+1)$, the minimum weight is also at most $(r_A+1) \cdot(r_{C}+1)$. We state the result of the above discussion as follows.
\begin{thm}
For an intertwining code generated by $A$ and $C$, there exists at least one codeword of weight $d_A \cdot d_C$, and therefore the minimum distance $d$ of $\mathcal{R}(A,C)$ is less than or equal to $d_A \cdot d_C$, i.e., $d(\mathcal{R}(A,C)) \leq d_A\cdot d_C$.
\end{thm}
\begin{proof}
The product code $\operatorname{Ker} A \otimes \operatorname{Ker} C^T$ is contained in the intertwining code. By the definition of $d_A$, there exist $d_A$ linearly dependent columns of $A$ such that no smaller set of columns is dependent. Therefore a column vector $v$ of weight $d_A$ exists such that $Av=0$. Similarly, we find a row vector $w$ of weight $d_C$ with $wC=0$. Therefore $vw$ belongs to the code, and the rest follows.
\end{proof}
\section{Decoding process}
The encoding procedure for intertwining codes is similar to the encoding procedures mentioned in \cite{Alahmadi2017235} \cite{Alahmadi201468} \cite{joydeb1}. To check whether a received word is erroneous, we first define the syndrome.
\begin{defn}
Let $A$ be a square matrix of order $n$ and $C$ be a square matrix of order $k$. Then the syndrome of an element $B \in \mathbb{F}_q^{n \times k}$ with respect to the intertwining code $\mathcal{R}(A,C)$ is defined as $S_{A,C}(B)= AB-BC$.
\end{defn}
If a word $B$ is erroneous then $S_{A,C}(B)= AB-BC \neq O$. This is the easiest way to check whether a word belongs to the code. Below we propose two algorithms to correct errors in an intertwining code.
\subsection{\textbf{Algorithm 1:}}
Since we do not know how to find the minimum-weight element of a coset algorithmically, we cannot compute it inside the decoding process itself, as this would require too much time. We therefore modify the procedure slightly in Step $3$. As the vector space $\mathbb{F}_q^{n \times k}$ can be partitioned into the intertwining code and its cosets, we first calculate the minimum-weight element of each coset separately. The list of coset leaders is then stored as a table inside the decoding system; a small sketch of this table-based decoder is given after the cautionary remarks below.
\begin{enumerate}
\item[Step $1.$] The receiver receives a word $B'$ from the channel.
\item[Step $2.$] Calculate the syndrome $S_{A,C}(B')=AB'-B'C$. If $S_{A,C}(B')=O$ then the transmitted codeword is $B'$; go to Step $5$. Otherwise, go to Step $3$.
\item[Step $3.$] Find the least-weight error matrix $E$ corresponding to this syndrome, already stored in the table. The matrix $E$ is the error pattern for the word $B'$.
\item[Step $4.$] Since both $B'$ and $E$ belong to the same coset $B'+\mathcal{R}(A,C)$, the matrix $B'-E$ is the transmitted codeword of the code $\mathcal{R}(A,C)$.
\item[Step $5.$] End.
\end{enumerate}
\noindent \textbf{Caution!} This table may look like a syndrome look-up table and it works similarly, but the following differences must be kept in mind.
\begin{enumerate}
\item This syndrome is different from the standard syndrome, which is obtained by multiplying a received word of length $n$ by an $(n-k) \times n$ parity-check matrix, resulting in a syndrome of length $n-k$, e.g., $7-4=3$ for the $[7,4]$ Hamming code. Here, instead, we store for each coset its minimum-weight element, which is a whole matrix of length $n\cdot k$, the same length as our codewords.
\item This table is not the syndrome look-up table of the intertwining code. A different one could be constructed, but the construction would be more complex.
\end{enumerate}
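The table-based decoder of Algorithm 1 can be sketched as follows (Python, binary case). The coset-leader table is built here by brute force over $\mathbb{F}_2^{n \times k}$, so the sketch is only feasible for very small $n$ and $k$ and is meant purely as an illustration of Steps 2--4.
\begin{verbatim}
import itertools
import numpy as np

def syndrome(A, C, B):
    """Syndrome S_{A,C}(B) = AB - BC over GF(2)."""
    return (A @ B - B @ C) % 2

def coset_leader_table(A, C):
    """Map each syndrome to a minimum-weight element of its coset (GF(2))."""
    n, k = A.shape[0], C.shape[0]
    table = {}
    for bits in itertools.product((0, 1), repeat=n * k):
        B = np.array(bits, dtype=int).reshape(n, k)
        key = tuple(syndrome(A, C, B).reshape(-1))
        if key not in table or B.sum() < table[key].sum():
            table[key] = B
    return table

def decode(A, C, received, table):
    """Algorithm 1: subtract the stored error pattern of the observed syndrome."""
    key = tuple(syndrome(A, C, received).reshape(-1))
    E = table[key]                      # least-weight error pattern
    return (received - E) % 2           # transmitted codeword estimate
\end{verbatim}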
\subsection{\textbf{Algorithm 2:}}
In the previous algorithm, we have to store an entire table, which occupies a substantial amount of memory. We therefore provide a better algorithm that requires less memory and still finds the least-weight error matrix, i.e., the error pattern.
For this algorithm, partition the codewords of the code $\mathcal{R}(A,C)$ by their weights. Let $\mathcal{R}(A,C) = \cup_{j=0}^{k} A_j$, where $A_j = \lbrace c \in \mathcal{R}(A,C): wt(c)=a_j \rbrace$. Clearly $A_i \cap A_j = \emptyset$ for all $i \neq j$. The available weights are $a_0, a_1, \dots, a_k$. The algorithm is presented below.
\begin{enumerate}
\item[Step $1.$] The receiver receives a word $B'$ and starts to calculate its syndrome $S_{A,C}(B')=AB'-B'C$. To reduce the complexity of computing the syndrome, we calculate it bitwise. Whenever a nonzero bit appears in the syndrome, we stop the computation and go to Step $2$. Otherwise, if $S_{A,C}(B')= O$, then the transmitted word is $B'$; go to Step $5$.
\item[Step $2.$] Find $b=wt(B')$, the weight of $B'$. Now evaluate the intervals in two cases: if $n \geq b+a_i$, take the interval $[|b-a_i|, b+a_i]$; otherwise take the interval $[|b-a_i|, 2n-b-a_i]$.
\item[Step $3.$] Delete those intervals whose lower bounds are greater than $t$, where $t = \lfloor \frac{d-1}{2} \rfloor$ and $d$ is the minimum distance of the code $\mathcal{R}(A,C)$.
\item[Step $4.$] Arrange the remaining intervals in ascending order of their lower bounds, storing the ordering of the indices. Take the first interval and its corresponding weight; let $a_m$ be that weight. Choose $A \in A_m$ such that $wt(A+B')$ is minimum, i.e., $A$ belongs to $S = \{A \in A_m : wt(A+B') \leq wt(A'+B') ~\forall ~A' \in A_m\}$; if it is not unique, choose one such $A$ at random, and set $E=A+B'$. Compute the weight of $E$ and delete the intervals whose lower bounds are greater than or equal to $wt(E)$. Move to the next partition and, in the same way, find its least-weight word $E'$; compare it with the previous least-weight word $E$ and keep the one of minimum weight as $E$. Continuing this process over the remaining partitions yields a matrix $E$, which is the error pattern. Therefore the transmitted codeword was $B'-E$.
\item[Step $5.$] End.
\end{enumerate}
\textbf{Note:} In Step $4$ of the above algorithm, the error pattern matrix $E$ is always unique, because if there were two patterns $E_1$ and $E_2$ of weight at most $t$ such that $B'-E_1$ and $B'-E_2$ are codewords, then the distance between these two codewords would be at most $2t<d$, which is impossible for two distinct codewords.
\textbf{Analysis:} In our algorithm we store the code sorted according to weights. Thus we have to store $2^k$ codewords at worst, where $k$ is the dimension of the code. This is less than $2^{n-k}$ if $k<\frac{n}{2}$, where $n$ is the length. So for a code of dimension less than $\frac{n}{2}$, our algorithm takes less memory. Moreover, we can view the weight-wise sorted codewords as nonlinear codes of constant weight, so each sorted partition can be stored using the standard representation of a nonlinear code through its kernel and coset representatives, as discussed in \cite{Villanueva2015}. This requires even less memory.
\section{Can any linear code be represented as a subcode of intertwining code?}
Let us consider an $l$-dimensional subspace of the $nk$-dimensional vector space over $\mathbb{F}_q$. This $nk$-dimensional vector space can be represented as the vector space of matrices $\mathbb{F}_q^{n \times k}$. Can we find a pair of matrices $A \in \mathbb{F}_q^{n \times n}$ and $C \in \mathbb{F}_q^{k \times k}$ such that the intertwining code $\mathcal{R}(A,C)$ contains the $l$-dimensional linear code? This problem can be formulated as follows.
Given a linear code $\mathcal{C}$ over $\mathbb{F}_q$ of length $nk$ and dimension $l$, with a generator matrix $G$ of order $l \times nk$, can we represent the linear code $\mathcal{C}$ as a subcode of an intertwining code $\mathcal{R}(A,C)$?
\subsection{Forming the equations and existence of solutions}
We represent each row of the generator matrix as an $n \times k$ matrix. So there are $l$ linearly independent matrices $B_1, B_2, \dots, B_l$ corresponding to the generator matrix of the known linear code $\mathcal{C}$. Our aim is to find two non-zero matrices $A$ and $C$ that satisfy the equation $AB_i=B_iC$ for each $B_i$, $i=1, 2, \dots, l$. To find $A$ and $C$, we treat the entries of $A$ and $C$ as variables, which gives $n^2+k^2$ variables. Each matrix $B_i$ contributes $nk$ equations, so there are $nkl$ equations in total in these variables. Here we use the mapping $\bar{\cdot}\,:\mathbb{F}_q^{n \times n} \rightarrow \mathbb{F}_q^{n^2 \times 1}$ with $B \mapsto \bar{B}$, where $\bar{B}$ is formed by concatenating the columns of $B$. Now the equation $AB_i =B_iC$ can be written as $$ AB_i - B_iC = O \Rightarrow \begin{bmatrix} I_n \otimes B_i^T & | & -B_i \otimes I_k \end{bmatrix} \begin{bmatrix} \bar{A} \\ \bar{C} \end{bmatrix}=O \Rightarrow D_i \begin{bmatrix} \bar{A} \\ \bar{C} \end{bmatrix} = O,$$ where $D_i= \begin{bmatrix} I_n \otimes B_i^T ~|-B_i \otimes I_k \end{bmatrix}$. Let $ D = \begin{bmatrix} D_1 & D_2 & \cdots & D_l \end{bmatrix}^T$. Then the above system of $l$ equations is written as
\begin{equation}\label{eq_1}
D \begin{bmatrix} \bar{A} \\ \bar{C} \end{bmatrix} =O.
\end{equation}
Here each $D_i$ comes from the corresponding $B_i$, and the final solution is the solution of equation (\ref{eq_1}). The matrix $D$ is of order $nkl \times (n^2+k^2)$. Thus the existence of a non-trivial solution of the above equation is ensured if $n^2+k^2 \geq nkl$; this is a sufficient condition.
Observe that each $D_i$ consists of two blocks, namely $I_n \otimes B_i^T$ and $-B_i \otimes I_k$. So $D_i= \begin{bmatrix} I_n \otimes B_i^T ~|-B_i \otimes I_k \end{bmatrix}=\begin{bmatrix} I_n \otimes B_i^T ~|~ O \end{bmatrix}+\begin{bmatrix} O ~| -B_i \otimes I_k \end{bmatrix}=A_1+A_2$.
Now $rank(D_i) \leq rank(A_1)+rank(A_2)$. Clearly, $rank(A_1)=n \cdot rank(B_i^T) = n \cdot rank(B_i)$ and $rank(A_2)= k \cdot rank(B_i)$, so $rank(D_i) \leq (n+k) \cdot rank(B_i)$ and hence $rank(D) \leq \sum_{i=1}^l (n+k) \cdot rank(B_i)$. The final solution space therefore has dimension at least $n^2+k^2- \sum_{i=1}^l (n+k) \cdot rank(B_i)$.
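Computationally, the pair $(A,C)$ can be obtained by building the coefficient matrix of the system column by column and computing its null space over $\mathbb{F}_q$. The following sketch (Python, binary case) applies the map $(A,C)\mapsto (AB_i-B_iC)_{i=1,\dots,l}$ directly to each unit coordinate matrix instead of assembling the Kronecker blocks explicitly; the modulo-$2$ elimination routine is repeated here to keep the sketch self-contained.
\begin{verbatim}
import numpy as np

def nullspace_mod2(M):
    """Basis (as vectors) of the null space of M over GF(2)."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    out = []
    for f in free:
        v = np.zeros(cols, dtype=int)
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = M[i, f]
        out.append(v)
    return out

def find_pairs(Bs, n, k):
    """Solutions (A, C) of A B_i = B_i C for all i, over GF(2).

    Bs: list of n-by-k generator-row matrices B_1, ..., B_l.  The
    coefficient matrix is built by applying the linear map
    (A, C) -> (A B_i - B_i C)_i to each unit coordinate matrix.
    """
    cols = []
    for idx in range(n * n + k * k):
        A = np.zeros((n, n), dtype=int)
        C = np.zeros((k, k), dtype=int)
        if idx < n * n:
            A[idx // n, idx % n] = 1
        else:
            j = idx - n * n
            C[j // k, j % k] = 1
        col = np.concatenate([((A @ B - B @ C) % 2).reshape(-1) for B in Bs])
        cols.append(col)
    D = np.array(cols).T % 2            # an nkl x (n^2 + k^2) matrix
    sols = []
    for v in nullspace_mod2(D):
        sols.append((v[:n * n].reshape(n, n), v[n * n:].reshape(k, k)))
    return sols     # always contains the scalar pair (I_n, I_k) over GF(2)
\end{verbatim}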
\section{Conclusion}
Centralizer codes, twisted centralizer codes and generalized twisted centralizer codes have length $n^2$, which is one reason why they cannot match most well-known linear codes. Intertwining codes, however, can reach most linear codes because their length is $nk$. We have therefore studied intertwining codes and tried to establish a correspondence between them and existing linear codes. We have found an upper bound on the minimum distance and proposed two decoding algorithms that require less storage memory.\\
\textbf{Acknowledgements}
The author Joydeb Pal is thankful to DST-INSPIRE for financial support to pursue his research work.
\section{Background concepts: sound waves, power and dynamic}\label{sec:tech}
Let $x(t)$ be a continuous-time waveform defined on the time interval $T_1 \leq t \leq T_2$, such that $\int_{T_1}^{T_2} x(t)\, dt=0$. Its power is given by
\begin{equation}\label{eq:power}
P^{RMS}=C\sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} x(t)^2 dt}
\end{equation}
where $C$ is an appropriate scaling constant that depends on the measurement unit. Equation \eqref{eq:power} defines the so-called root mean square (RMS) power. It tells us that the power expressed by a waveform is determined by the average magnitude of the wave swings around its average level. In other words, equation \eqref{eq:power} recalls the concept of standard deviation.
A sound wave $x(t)$ can be recorded and stored by means of analog and/or digital processes. In the digital world $x(t)$ is represented numerically by sampling and quantizing its analog version. The sampling scheme underlying the so-called Compact Disc Digital Audio (CDDA) is called Pulse Code Modulation (PCM). In PCM sampling, a voltage signal $x(t)$ is sampled as a sequence of integer values proportional to the level of $x(t)$ at equally spaced times $t_0, t_1,\ldots$~. The CDDA standard is based on PCM with a sampling frequency of 44.1KHz and 16-bit precision. The quantization process introduces rounding errors, also known as quantization noise. Based on the PCM samples $\{x_0,x_1, \ldots, x_T\}$, and under strong conditions on the structure of the underlying $x(t)$, the RMS power can be approximated by
\begin{equation}\label{eq:pcm_power}
P^{RMS}_T = \sqrt{\frac{1}{T+1} \sum_{t=0}^{T} x_t^2}.
\end{equation}
The latter is the square root of the sample variance, because these signals have zero mean. The power encoded in a PCM stream is expressed in full-scale decibels:
\begin{equation}\label{eq:pcm_dbfs}
\text{dBFS} = 20\log_{10}\frac{P^{RMS}_T}{P_0},
\end{equation}
where $P_0$ is the RMS power of a reference wave. Usually $P_0 = 1/\sqrt{2}$, the RMS power of a pure sine wave, or $P_0=1$, the RMS power of a pure square wave. For simplicity we set $P_0=1$ in this paper. dBFS is commonly considered a DR measure because it measures the spread between the sound power and the power of a reference signal.
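In terms of code, $P^{RMS}_T$ of \eqref{eq:pcm_power} and the dBFS value of \eqref{eq:pcm_dbfs} amount to the following sketch (Python; the PCM samples are assumed to be rescaled to the $[-1,1]$ range).
\begin{verbatim}
import numpy as np

def rms_power(x):
    """RMS power P_T^RMS of a zero-mean PCM sequence."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean(x ** 2))

def dbfs(x, p0=1.0):
    """Full-scale decibels 20*log10(P/P0) of a PCM sequence."""
    return 20.0 * np.log10(rms_power(x) / p0)

# e.g. a full-scale square wave gives 0 dBFS,
# a full-scale sine wave about -3 dBFS (with p0 = 1)
\end{verbatim}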
For most real-world signals the power changes strongly over time. In Figure \ref{fig:intheflash2039} we report a piece of sound extracted from the left channel of the song ``In the Flesh?'' by Pink Floyd. The song starts with a soft sweet lullaby corresponding to Block 1, magnified in the bottom plot. However, at circa 20.39s the band abruptly starts a sequence of blasting riffs. This teaches us that: (i) the sound power of music signals can change tremendously over time; (ii) the power depends on $T$, that is, on the time horizon. In the audio engineering community the practical approach is to time-window the signal and compute the average power across windows; several forms of DR statistics are then computed \cite[see][]{ballou-2005} based on dBFS. In practice one chooses a $T$, then splits the PCM sequence into blocks of $T$ samples, allowing a certain number $n_o$ of overlapping samples between blocks. Let $\overline{P^{RMS}}_T$ be the average of the $P_T^{RMS}$ values computed on each block; a simple DR measure, which we call ``sequential DR'', is then computed as
\begin{equation}\label{eq:AvgDR}
\text{DRs} = - 20 \log_{10} \frac{\overline{P^{RMS}}_T}{x_{\text{peak}}},
\end{equation}
where $x_{\text{peak}}$ is the peak sample. The role of $x_{\text{peak}}$ is to scale the DR measure so that it does not depend on the quantization range. DRs=10 means that on average the RMS power is 10dB below the maximum signal amplitude. Notice that a 3dB increment translates into twice the power. DRs numbers are easily interpretable; however, their statistical foundation is weak. Since the blocks are sequential, $T$ and $n_o$ determine the blocks uniquely regardless of the structure of the signal at hand. The second issue is whether the average $\overline{P^{RMS}}_T$ is a good summary of the power distribution for expressing the DR concept. Certainly the descriptive nature of the DRs statistic, and the lack of a stochastic framework, do not allow one to make inference and judge the numbers consistently.
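For completeness, the sequential DR statistic \eqref{eq:AvgDR} can be computed as in the following sketch (Python), where \texttt{T} is the block length and \texttt{n\_o} the number of overlapping samples between consecutive blocks.
\begin{verbatim}
import numpy as np

def sequential_dr(x, T, n_o=0):
    """Sequential DR (DRs): block-averaged RMS power relative to the peak.

    Assumes 0 <= n_o < T <= len(x).
    """
    x = np.asarray(x, dtype=float)
    hop = T - n_o                                   # step between block starts
    starts = range(0, len(x) - T + 1, hop)
    block_rms = [np.sqrt(np.mean(x[s:s + T] ** 2)) for s in starts]
    avg_rms = np.mean(block_rms)
    peak = np.max(np.abs(x))
    return -20.0 * np.log10(avg_rms / peak)
\end{verbatim}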
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{pics/intheflesh_wave_1.jpg}\\
\vspace{5mm}
\includegraphics[width=\textwidth]{pics/intheflesh_wave_2.jpg}
\caption{Waveform extracted from the left channel of the song ``{\sl In the Flesh?}'' from ``{\sl The Wall''} album by Pink Floyd (Mobile Fidelity Sound Lab remaster, catalog no. UDCD 2-537). Top plot captures 980ms of music centered at time 20.39524s (vertical dashed line). Bottom plot magnifies Block 1.}%
\label{fig:intheflash2039}
\end{figure}
\section{Simulation experiment}\label{sec:empirical_evidence_montecarlo}
A common way of assessing a statistical procedure is to simulate data from a certain known stochastic process fixed as the reference truth, and then compute Monte Carlo expectations of bias and efficiency measures. The problem here is that writing down a stochastic model capable of reproducing the features of real-world music signals is too complex. Instead of simulating such a signal, we assess our methods based on simulated perturbations of real data. We consider two well-recorded songs and add various degrees of dynamic compression to assess whether our measure is able to highlight dynamic differences. A good method for estimating a measure of DR should consistently measure the loss of DR introduced by compressing the dynamics. In order to achieve a fair comparison we need songs to which only a small amount of digital processing has been applied. Chesky Records is a small label specialized in audiophile recordings; their ``{\sl Ultimate Demonstration Disc: Chesky Records' Guide to Critical Listening}'' (catalog number UD95) is almost a standard among audiophiles as a test source for various aspects of hifi reproduction. We consider the left channel of track no.29, called ``{\sl Dynamic Test}'', and track no.17, called ``{\sl Visceral Impact}''. Both waveforms are reported in Figure \ref{fig:chesky_tracks}. The ``Dynamic Test'' consists of a drum recorded near field and played with an increasing level. Its sound power is so huge that a voice message warns against playback at deliberately high volumes, which could in fact cause equipment and hearing damage. Most audiophiles subjectively consider this track one of the most illustrious examples of dynamic recording. The track is roughly one minute long. ``{\sl Visceral Impact}'' is actually the song ``{\sl Sweet Giorgia Brown}'' by Monty Alexander, published elsewhere in the Chesky catalog. The song has an energetic groove from beginning to end and is about three minutes long. Unlike the previous track, which has an increasing dynamic level, this song has a uniform profile. This can be clearly seen in Figure \ref{fig:chesky_tracks}.\\
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{pics/dynamic_test_wave.jpg}\\
\vspace{1em}
\includegraphics[width=\textwidth]{pics/visceral_impact_wave.jpg}
\caption{Waveforms of the left channel of the songs titled ``{\sl Dynamic Test}'' (track no.29), and ``{\sl Visceral Impact}'' (track no.17) from the audiophile CD ``{\sl Ultimate Demonstration Disc: Chesky Records' Guide to Critical Listening}'' (Chesky Record, catalog no. UD95).}%
\label{fig:chesky_tracks}
\end{figure}
We removed the initial and final silence from both tracks; the final length (in sample units) is $n=2,646,000$ for ``{\sl Dynamic Test}'' and $n=7,938,000$ for the second song. We then applied compression to both waves. A dynamic compressor is a function such that, whenever the original signal exceeds a given power (threshold parameter), the power of the output is scaled down by a certain factor (compression ratio parameter).
With a threshold of -12dBFS and a compression ratio of 1.5, whenever the signal power is above -12dBFS the compressor reduces the signal level to $2/3$, so that the input power is 1.5${\times}$(output power). All this has been performed using SoX, a high-quality digital audio processing software package, with all other tuning parameters set at their default values. For both threshold levels of -12dBFS and -24dBFS we applied to each song compression ratios equal to $\{1.5,2,2.5,3,3.5,4,4.5,5\}$. This gives 16 compressed versions of the original wave for each song, hence the total number of tracks involved in the simulation experiment is 34. Even though the random subsampling makes the computational effort feasible, 34 cases still require a considerable amount of computation. The subsampling algorithm has been run with $K=500$ for both songs, while $b=2205$ (i.e. 50ms) for ``{\sl Dynamic Test}'' and $b=3528$ (i.e. 80ms) for ``{\sl Visceral Impact}''. A larger $b$ for larger $n$ obeys the theoretical requirement that $b/n \rightarrow 0 $ as $n$ and $b$ grow to $\infty$. The constant $M$ is fixed according to Proposition \ref{prop:1}, i.e. $M= \lfloor \sqrt{n \hat h} \rfloor$. A stability analysis has been conducted by changing these parameters. In particular, we tried several values of $b$ for both tracks, but the results did not change overall. A larger value of $K$ also had almost no impact on the final results; however, a larger $K$ increases the computational load considerably. In a comparison like this, one can choose to fix the seeds for all cases so that the statistics are computed over the same subsamples in all cases. However, this would not allow us to assess the stability of the procedure against subsampling-induced variance. The results presented here are obtained with different seeds for each case, but fixed seeds have also been tested and they did not change the main results. Moreover, we estimate $s(\cdot)$ in $(h,1-h)$ to avoid the well-known boundary effect of the kernel estimator.
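For concreteness, the following Python sketch implements an idealized, instantaneous hard-knee compressor with the ratio defined on the dB scale (the excess of the input level above the threshold is divided by the ratio), which is the usual convention in audio processing. It is only an illustration: the power-scaling description above is a simplification, the experiment itself was run with SoX, and SoX additionally applies attack/decay smoothing that is not modelled here, so this toy function is not meant to reproduce the processed tracks exactly.
\begin{verbatim}
import numpy as np

def compress(x, threshold_db=-12.0, ratio=1.5):
    """Toy hard-knee compressor (instantaneous, no attack/decay).

    Samples whose instantaneous level exceeds `threshold_db`
    (dBFS, full scale = 1) have the excess above the threshold
    divided by `ratio` on the dB scale.
    """
    x = np.asarray(x, dtype=float)
    tiny = np.finfo(float).tiny                # avoids log10(0)
    level_db = 10.0 * np.log10(x**2 + tiny)    # instantaneous power, dBFS
    excess = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -excess * (1.0 - 1.0 / ratio)    # attenuate the excess
    return x * 10.0 ** (gain_db / 20.0)

# Example: a -6 dBFS sine peak ends up at about -8 dBFS
# (threshold -12 dBFS, ratio 1.5).
t = np.arange(44100) / 44100.0
y = 0.5 * np.sin(2 * np.pi * 440.0 * t)
print(round(20 * np.log10(np.abs(compress(y)).max()), 2))
\end{verbatim}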
\begin{figure}[!t]
\centering
\includegraphics[width=.49\textwidth]{pics/mesdr_dynamic_test_90.pdf}
\hfill
\includegraphics[width=.49\textwidth]{pics/mesdr_dynamic_test_95.pdf}
\caption{MeSDR statistics for the song ``Dynamic Test''. The left plot reports 90\%-confidence bands, while the right plot reports 95\%-confidence bands.}%
\label{fig_msdr_dynamic}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=.49\textwidth]{pics/mesdr_visceral_impact_90.pdf}
\hfill
\includegraphics[width=.49\textwidth]{pics/mesdr_visceral_impact_95.pdf}
\caption{MeSDR statistics for the song ``Visceral Impact''. The left plot reports 90\%-confidence bands, while the right plot reports 95\%-confidence bands.}%
\label{fig_msdr_visceral}
\end{figure}
The results for all the cases are summarized in Figures \ref{fig_msdr_dynamic} and \ref{fig_msdr_visceral}, where we report the MeSDR statistics with 90\% and 95\%-confidence bands. The simulation experiment reveals several interesting facts.
\begin{enumerate}
\item First notice that each of the curves in Figures \ref{fig_msdr_dynamic} and \ref{fig_msdr_visceral} closely emulates the theoretical behaviour of DR versus compression ratio. In fact, if we had an ideal input signal with constant unit RMS power, the DR would decrease at the speed of $\log_{10} (1/\text{compression ratio})$ for any threshold value. When the RMS power is not constant, a consistent DR statistic should still behave similarly to $\log_{10} (1/\text{compression ratio})$, with a curvature that depends both on the threshold parameter and on the amount of power above the threshold.
\item At both threshold levels, and for both songs, the MeSDR provides a remarkable discrimination between compression levels. For a given positive compression level, no confidence band for the -12dBFS threshold overlaps with the confidence band for the -24dBFS case. Over the 32 cases, a single overlap occurs in Figure \ref{fig_msdr_dynamic} for a compression ratio equal to 1.5 when considering the wider 95\%--confidence interval.
\item For both songs the confidence intervals are larger for the -12dBFS case, and on average the ``Dynamic Test'' reports longer intervals. This is expected, because moving the threshold from -12dBFS down to -24dBFS increases the proportion of samples affected by compression, so that the variations of the MeSDR are reduced. Moreover, we also expect that if the dynamics of a song do not follow a roughly uniform path, as is the case for the ``Dynamic Test'', the variability of the MeSDR will be larger. Summarizing, not only the level of the MeSDR, but also the length of the bands (i.e. the uncertainty) reveals important information on the DR.
\item There is a smooth transition going from the 90\% to the 95\%--confidence intervals. This is an indication that the tails of the distribution of the MeSDR are well behaved.
\end{enumerate}
The experiment above shows how the MeSDR is able to consistently detect even small differences in compression levels. Hence it can be used to effectively discriminate between levels of recording quality.
\section{Real data application}\label{sec_real_dataset}
There are a number of small record labels that gained success by issuing remastered versions of famous albums. Some of these reissues are now out of catalogue and are traded at remarkable prices on the second-hand market. This means that music lovers actually value recording quality. On the other hand, major labels keep issuing new remastered versions promising miracles, often claiming the use of new super technologies with spectacular names. But music lovers are often critical. Despite the marketing trend of mastering music with obscene levels of dynamic compression to make records sound louder, human ears perceive dynamic compression better than is commonly thought. In this section we measure the DR of three different digital masterings of the song {``In the Flesh?''} from ``{\sl The Wall}'' album by Pink Floyd. The album is considered one of the best rock recordings of all time and {``In the Flesh?''} is a champion in dynamics, especially at the beginning (as reported in Figure ~\ref{fig:intheflash2039}) and at the end. We analysed three different masters: the MFSL by Mobile Fidelity Sound Lab (catalog UDCD 2-537, issued in 1990); EMI94 by EMI Records Ltd (catalog 8312432, issued in 1994); and EMI01 produced by EMI Music Distribution (catalog 679182, issued in 2001). There are many more remasters of the album not considered here. The EMI01 has been marketed as a remaster with superior sound obtained with state-of-the-art technology. The MFSL has been produced by a company specialized in classic album remasters. The first impression is that the MFSL sounds softer than the EMI versions. However, there are Pink Floyd fans arguing that the MFSL sounds more dynamic and is overall better than anything else. Also, the difference between the EMI94 and EMI01 is often discussed on internet forums, with fans arguing that EMI01 did not improve upon EMI94 as advertised.\\
We measured the MeSDR of the three tracks and compared the results. Since there was a large correlation between the two channels, we only measured the channel with the largest peak, that is the left one. The three waves have been time--aligned, and the initial and final silence has been trimmed. The MeSDR has been computed with a block length of $b=2205$ (that is 50ms) and $K=500$. We computed the MeSDR both with equal and unequal seeds across the three waves to test for subsampling-induced variability when $K$ is on the low side. The results are summarized in Table \ref{tab:intheflash}. First, notice that the seed changes the MeSDR only slightly. Increasing the value of $K$ to 1000 would make this difference even smaller. The second thing to notice is that the MeSDR reports almost no difference between EMI94 and EMI01. In a way, the figure provided by the MeSDR is consistent with most subjective opinions that the MFSL, while sounding softer, has more dynamic texture. We recall here that a 3dBFS difference is about twice the dynamics in terms of power. Table \ref{tab:intheflash} suggests that the MFSL remaster is about 4dBFS more dynamic than its competitors, which is a huge difference.\\
A further question is whether there are statistically significant differences between the sound power distributions of the tracks. It is obvious that the descriptive nature of the approaches described in Section \ref{sec:tech} cannot answer such a question. Our subsampling approach can instead answer this kind of question by hypothesis testing. If the mixing coefficients of $ \{\varepsilon_t\}_{t \in \mathbb{Z}}$ go to zero at a proper rate, one can take the loudness measurements $\set{ \text{DR}_{n,b,I_i} \quad i=1,2,\ldots,K}$ as almost asymptotically independent and then apply some standard test for location shift. One could study an ad-hoc test for this problem; in this paper we simply apply a convenient and simple test such as the Mann-Whitney test. Using the set-up with unequal seeds, we applied the Mann-Whitney test for the null hypothesis that the DR distributions of MFSL and EMI94 are not location-shifted, against the alternative that MFSL is right-shifted with respect to EMI94. The resulting p-value$<2.2\times 10^{-16}$ suggests rejecting the null at any sensible confidence level, which confirms that MFSL sounds more dynamic than EMI94. The comparison between MFSL and EMI01 leads to a similar result. We also tested the null hypothesis that the DR distributions of EMI94 and EMI01 are not location-shifted, against the alternative that there is some shift different from zero. The resulting p-value=0.6797 suggests not rejecting the null at any standard confidence level. The latter confirms the picture suggested by the MeSDR, that is, there is no significant overall dynamic difference between the two masterings. \\
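For illustration, the test above can be reproduced with standard scientific software. The sketch below uses SciPy's \texttt{mannwhitneyu} on two hypothetical sets of $K$ block-wise DR values; the numbers are placeholders and not the values estimated from the actual recordings.
\begin{verbatim}
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical block-wise DR values (dBFS) standing in for the K
# subsample statistics DR_{n,b,I_i} of MFSL and EMI94 (placeholders).
rng = np.random.default_rng(0)
dr_mfsl = rng.normal(loc=29.6, scale=1.0, size=500)
dr_emi94 = rng.normal(loc=25.6, scale=1.0, size=500)

# One-sided test: is the MFSL DR distribution right-shifted
# with respect to EMI94?
stat, pval = mannwhitneyu(dr_mfsl, dr_emi94, alternative="greater")
print(f"U = {stat:.0f}, p-value = {pval:.3g}")
\end{verbatim}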
\begin{table}[!t]
\centering
\caption{MeSDR statistics for ``{\sl In the Flesh?}'' from ``{\sl The Wall''} album by Pink Floyd. ``Lower'' and ``Upper'' columns are limits of the confidence intervals based on the asymptotic approximation of the distribution of the empirical quantiles.}
\label{tab:intheflash}
\begin{tabular}{llccccc}
\toprule
Seed number & Version & MeSDR & \multicolumn{2}{c}{90\%--Interval} & \multicolumn{2}{c}{95\%--Interval}\\
& & & Lower & Upper & Lower & ~Upper \\
\midrule
Equal & MFSL & 29.77 & 29.32 & 30.29 & 29.22 & 30.35 \\
& EMI94 & 25.96 & 25.65 & 26.48 & 25.38 & 26.61 \\
& EMI01 & 26.00 & 25.66 & 26.52 & 25.40 & 26.67 \\
\midrule
Unequal & MFSL & 29.55 & 29.31 & 29.94 & 29.27 & 30.22 \\
& EMI94 & 25.63 & 25.36 & 26.11 & 25.20 & 26.18 \\
& EMI01 & 25.81 & 25.52 & 26.18 & 25.48 & 26.34 \\
\bottomrule
\end{tabular}
\end{table}
\section{Concluding Remarks}\label{sec:concl-final-remarks}
Starting from the DR problem, we developed a novel methodology to estimate the variance distribution of a time series produced by a stochastic process additive to a smooth function of time. The general set of assumptions on the error term makes the proposed model flexible and general enough to be applied in various situations not explored in this paper. The smoothing and the subsampling theory is developed for a fixed global bandwidth and a fixed subsample size. We constructed a DR statistic that is based on the random term. This has two main advantages: (i) it allows one to draw conclusions based on inference; (ii) since power variations are about sharp changes in the energy levels, it is likely that these changes affect the stochastic part of \eqref{eq:Yt} more than the smooth component. In a controlled experiment our DR statistic has been able to consistently highlight dynamic range compression. Moreover, we provided an example where the MeSDR statistic is able to reconstruct differences perceived subjectively on real music signals.
\section{Statistical properties of sound waves and modelling}\label{sec:Statistical_Modelling_and_Estimation}
The German physicist Hermann von Helmholtz (\citeyear{vonhelmholtz-1885}) discovered that within small time intervals sound signals produced by instruments are periodic, and hence representable as sums of periodic functions of time, also known as ``harmonic components''. The latter implies a discrete power spectrum. \cite{risset-mathews-1969} discovered that the intensity of the harmonic components varies strongly over time, even over short time lengths. \cite{Serra_Smith_1990} proposed to model sounds from single instruments as a sum of sinusoids plus a white noise process. While the latter can model simple signals, e.g. a flute playing a single note, in general such a model is too simple to represent more complex sounds.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{pics/zarathustra.jpg}
\caption{Spectrogram of the right channel of the opening fanfare of ``Also Sprach Zarathustra'' (Op. 30) by Richard Strauss, performed by Vienna Philharmonic Orchestra conducted by Herbert von Karajan (Decca, 1959). The power spectra values are expressed in dBFS scale and coded as colors ranging from black (low energy) to white (high energy).}
\label{fig:zarathustra}
\end{center}
\end{figure}
Figure \ref{fig:zarathustra} reports the spectrogram of a famous fanfare expressed in dBFS. This is a particularly dynamic piece of sound. The orchestra plays a soft opening followed by a series of transients at full blast with varying decay times. There are several changes in the spectral distribution. At particular time points there are peaks localized in several frequency bands, but there is a continuum of energy spread between peaks. This shows how major variations are characterized by a strong continuous component in the spectrum that is also time-varying. In their pioneering works, \cite{Voss_Clarke_1975, Voss_Clarke_1978} found evidence that, at least for some musical instruments, once the recorded signal is passed through a band-pass filter with cutoffs set at 100Hz and 10KHz, the signal within the passband has a spectrum that resembles $1/f$--noise or similar fractal processes. But this empirical evidence is based on spectral methods acting as if the processes involved were stationary, which is often not true. Moreover, while most acoustic instruments produce most of their energy between 100Hz and 10KHz, this is not true if one considers complex ensembles. It is well known that for groups of instruments playing together, on average 50\% of the energy is produced in the range [20Hz, 300Hz]. Whether or not the $1/f$--noise hypothesis is true is yet to be demonstrated; in this paper we give examples showing that the $1/f$--noise hypothesis does not hold in general.
These observations are essential to motivate the following model for the PCM samples. Let $\{Y_t\}_{t \in\mathbb{Z}}$ be a sequence such that
\begin{equation}\label{eq:Yt}
Y_t = s(t)+\varepsilon_t,
\end{equation}
under the following assumptions:
\begin{assumption}{\bf A1}{}\label{ass:s}
The function $s(t)$ has a continuous second derivative.
\end{assumption}
\begin{assumption}{\bf A2}{}\label{ass:eps}
$\{\eps_t\}_{t \in \mathbb{Z}}$ is a strictly stationary and $\alpha$-mixing process with mixing coefficients $\alpha(k)$, $\ex[\eps_t]=0$, $\ex \abs{\eps_t^2}^{2+\delta} < +\infty$, and $\sum_{k=1}^{+\infty} \alpha^{\delta/(2+\delta)}(k)<\infty$ for some $\delta>0$.
\end{assumption}
(\ref{eq:Yt}) is by no means interpretable as a Tukey-style signal plus noise decomposition. The observable (recorded) sound wave $Y_t$ deviates from $s(t)$ because of several factors: (i) transient changes in acoustic energy; (ii) several sources of noise injected along the recording path; (iii) non-harmonic components. We call the process $\{\varepsilon_t\}_{t \in \mathbb{Z}}$ the ``{\sl stochastic sound wave}'' (SSW). The main difference with respect to \cite{Serra_Smith_1990} is that in their work $\varepsilon_t$ is a white noise and $s(t)$ is a sum of sinusoids. \cite{Serra_Smith_1990} are mainly interested in the spectral structure, hence they use $s(t)$ to study the discrete part of the spectrum. Moreover, they are interested in simple sounds from single instruments, hence they simply assume that $\varepsilon_t$ is white noise. Their assumptions are reasonable for simple sounds, but in general they do not hold for complex sounds produced by an ensemble of instruments.
\ref{ass:s} imposes a certain degree of smoothness on $s(\cdot)$. This is because we want the stochastic term to absorb transients while
$s(\cdot)$ mainly models long-term periodic (harmonic) components. The $\alpha$-mixing assumption allows us to handle nonlinearity, departures from Gaussianity, and a certain slowness in the decay of the dependence structure of the SSW. Certainly the $\alpha$-mixing assumption \ref{ass:eps} would not be consistent with the long-memory nature of $1/f$--noise. However, the $1/f$--noise features disappear once the long-term harmonic components are caught by $s(\cdot)$ (see the discussion of the example in Figure~\ref{fig_intheflash_residuals}). Whereas \ref{ass:eps} allows for various stochastic structures, the restrictions on the moments and mixing coefficients are needed for technical reasons. Nevertheless, the existence of the fourth moment is not that strong a requirement in practice, because it implies that the SSW has finite power variations, which has to hold anyway, otherwise it would be impossible to record the signal.\\
\section*{\centering Appendix: Proofs of Statements}\label{sec:proofs}
\paragraph{Proof of Proposition \ref{prop:1}.} First notice that \ref{ass:eps}--\ref{ass:K} in this paper imply that Assumptions A--E in \cite{altman-1990} are fulfilled. In particular, \ref{ass:eps} on the mixing coefficients of $\eps_t$ ensures that Assumptions D and E in \cite{altman-1990} are satisfied.
Let %
$\hat{\gamma}(j)=\frac{1}{n}\sum_{t=1}^{n-j}\hat{\varepsilon}_t\hat{\varepsilon}_{t+j}$ %
be the estimator of the autocovariance $\gamma(j)$, $j=0,1,\ldots$, and let $r_n=\frac{1}{nh}+h^4$, which is of order $n^{-4/5}$ by \ref{ass:K}.
First we note that, by the Markov inequality,
\begin{equation}\label{eq:proof_1_1}
\frac{1}{n}\sum_{i=1}^{n}\left(s(i/n)-\hat{s}(i/n)\right)^2 - \text{MISE}(h;\hat{s}) = o(r_n)
\end{equation}
Let us now rearrange $\hat{\gamma}(j)$ as
\begin{eqnarray}
\label{eq:proof_1_2}
\hat{\gamma}(j) &=& %
\frac{1}{n}\sum_{i=1}^{n-j}\left(s(i/n)-\hat{s}(i/n)\right)\left(s((i+j)/n)-\hat{s}((i+j)/n)\right) + \nonumber \\ %
&+& \frac{1}{n}\sum_{i=1}^{n-j}\left(s((i+j)/n)-\hat{s}((i+j)/n)\right)\varepsilon_{i} + \nonumber \\ %
&+& \frac{1}{n}\sum_{i=1}^{n-j}\left(s(i/n)-\hat{s}(i/n)\right)\varepsilon_{i+j}+\frac{1}{n}\sum_{i=1}^{n-j}\varepsilon_i\varepsilon_{i+j} =I+II+III+IV
\end{eqnarray}
By \eqref{eq:proof_1_1} and the Schwarz inequality it follows that $I=O_p\left(r_n\right)$. Now, consider term $III$ in \eqref{eq:proof_1_2}. Since $\hat s(t)=s(t)\left(1+O_p\left(r_n^{1/2}\right)\right)$, it is sufficient to investigate the behaviour of
\[
\frac{1}{n}\sum_{i=1}^{n-j}s(i/n)\varepsilon_{i+j}.
\]
By \ref{ass:s} and the Chebyshev inequality, it follows that $III=O_p(n^{-1/2})$. By similar arguments, $II=O_p(n^{-1/2})$. Finally, $IV=\gamma(j)+O_p(n^{-1/2})$. Then
$\hat{\gamma}(j)=\gamma(j) + O_p(r_n) + O_p(n^{-1/2})+O_p(j/n)$, where the $O_p(j/n)$ term is due to the bias of $\hat{\gamma}(j)$. It follows that $\hat{\rho}(j) = \rho(j) + O_p(r_n)+ O_p(n^{-1/2})+O_p(j/n)$. Since $\mathcal{K}(\cdot)$ is bounded from above, one can write
\begin{displaymath}
\frac{1}{nh}\sum_{j=-M}^M
\mathcal{K}\left(\frac{j}{nh}\right)\hat{\rho}(j)=\frac{1}{nh}\sum_{j=-M}^M
\mathcal{K}\left(\frac{j}{nh}\right)\rho(j)+\frac{M}{nh}O_p(r_n)+\frac{M}{nh}O_p(n^{-1/2})+\frac{M^2}{n}O_p\left(\frac{1}{nh}\right);
\end{displaymath}
by \ref{ass:M} and $h=O(n^{-1/5})$, it holds true that
\begin{equation}
\label{eq:proof_1_3}
\frac{1}{nh}\sum_{j=-M}^M
\mathcal{K}\left(\frac{j}{nh}\right)\hat{\rho}(j)=\frac{1}{nh}\sum_{j=-M}^M
\mathcal{K}\left(\frac{j}{nh}\right)\rho(j)+o_p(r_n).
\end{equation}
So, taking the quantity in the expression (22) of
\citet{altman-1990}, we have to evaluate
\[
Q_1=\left|\frac{1}{nh}\sum_{j=-nh/2}^{nh/2}\mathcal{K}\left(\frac{j}{nh}\right)\rho(j)-\frac{1}{nh}\sum_{j=-M}^M
\mathcal{K}\left(\frac{j}{nh}\right)\hat{\rho}(j)\right|
\]
where we consider the nearest integer in place of $nh$.
By \eqref{eq:proof_1_3}, it follows that
\[
Q_1=\left|\frac{2}{nh}\sum_{j=M+1}^{nh/2}\mathcal{K}\left(\frac{j}{nh}\right)\rho(j)\right|+o_p(r_n).
\]
Since, by assumptions \ref{ass:eps}, \ref{ass:K} and
\ref{ass:M},
\[
\frac{1}{nh}\sum_{j=M+1}^{nh/2}\mathcal{K}\left(\frac{j}{nh}\right)\rho(j)\sim\rho(nh)=o(r_n),
\]
then $Q_1=o_p(r_n)$. Notice that the first term in $Q_1$ is the analog of expression (22) in \citet{altman-1990}. Therefore, $\text{CV}(h)$ can now be written as
\[
\text{CV}(h)= %
\quadre{1-\frac{1}{nh}\sum_{j=-M}^M\mathcal{K}\left(\frac{j}{nh}\right)\hat{\rho}(j)} ^{-2} %
\frac{1}{n} \sum_{i=1}^n \hat{\eps}_i^2=
\]
\[
=\quadre{1-\frac{1}{nh}\sum_{j=-nh/2}^{nh/2}\mathcal{K}\left(\frac{j}{nh}\right)\rho(j)} ^{-2} %
\frac{1}{n} \sum_{i=1}^n \hat{\eps}_i^2+o_p(r_n).
\]
Applying the classical bias correction and using (14) in \citet{altman-1990}, we
have that
\[
\text{CV}(h)=\frac{1}{n}\sum_{t=1}^n\varepsilon_t^2+\text{MISE}(h;\hat{s})+
o_p(r_n),
\]
using the same arguments as in the proof of Theorem 1 in
\citet{chu-marron-1991}. Since $\text{MISE}(h;\hat s)=O(r_n)$, it follows that $\hat{h}$, the
minimizer of $\text{CV}(h)$, is equal to $h^\star$, the minimizer
of $\text{MISE}(h;\hat{s})$, asymptotically in probability. \qed
Before proving Proposition \ref{prop:2}, we need a technical Lemma stating the consistency of the classical (non-random) subsampling when the estimator $\hat s(t)$ is computed on the entire sample.
\begin{lemma}
\label{lemma1} Assume \ref{ass:s}, \ref{ass:eps}, \ref{ass:K} and \ref{ass:M}. Let $\hat s(t)$ be the estimator of $s(t)$ computed on the entire sample (of length $n$). If $b/n\rightarrow 0$ whenever $n\rightarrow \infty$ and $b\rightarrow \infty$, then
\begin{eqnarray}
(i)&\sup_x\left|\hat G_{n,b}(x)-G(x)\right|\stackrel{p}{\longrightarrow} 0 \nonumber \\
(ii) & \hat q_{n,b}(\gamma)\stackrel{p}{\longrightarrow}q(\gamma) \nonumber
\end{eqnarray}
where $\gamma\in(0,1)$ and $\hat G_{n,b}(x)$, $ G(x)$,
$\hat q_{n,b}(\gamma)$ and $q(\gamma)$ are defined in Section \ref{sec:estimation}.
\end{lemma}
\begin{proof}
For part \textit{(i)}, notice that under assumption \ref{ass:eps} the conditions of Theorem (4.1) in \cite{politis-romano-wolf-2001} hold. Let us denote $r_n=\frac{1}{nh}+h^4$, which is of order $n^{-4/5}$ by Proposition (\ref{prop:1}). As in the proof of Proposition (\ref{prop:1}), we can write
\begin{displaymath}
\frac{1}{b}\sum_{i=t}^{t+b-1}\left(\hat{\varepsilon}_i-\varepsilon_i\right)^2 = O_p(r_n) + O(1/b) \qquad \forall t,
\end{displaymath}
since $\hat s(\cdot)$ is estimated on the entire sample of length $n$ and $\{\varepsilon_i\}$ is a sequence of stationary random variables by \ref{ass:eps}. The term $O(1/b)$ is due to the approximation error for the deterministic component $s(\cdot)$ when we consider the mean instead of the integral. Note that we do not report this error ($O(1/n)$) in (\ref{eq:proof_1_1}) because the leading term is $r_n$, given that $r_n^{-1}/n\rightarrow 0$ when $n\rightarrow\infty$. In this case $b^{-1} \sum_{i=t}^{t+b-1}\left(\hat{\varepsilon}_i-\varepsilon_i\right)^2$ is an estimator of $\text{MISE}_I(h;\hat s)$ on a set $I\subset (0,1)$. Now $\sqrt{b}(r_n+b^{-1}) \rightarrow 0$ as $n \rightarrow \infty$, and this is sufficient to conclude that $\sqrt{b}\left(\hat{V}_{n,b,t}-V_{n,b,t}\right) \stackrel{\text{p}}{\longrightarrow} 0$, $\forall t$. Then, we can conclude that $\sqrt{b}\left(\hat{V}_{n,b,t}-V_{n}\right)$ has the same asymptotic distribution as $\sqrt{b}\left(V_{n,b,t}-V_n\right)$. Let us denote $Z_{1t}=\sqrt{b}\left(V_{n,b,t}-V_n\right)$ and $Z_{2t}=\sqrt{b}\left(\hat{V}_{n,b,t}-V_{n,b,t}\right)$; hence
\begin{displaymath}
\hat{G}_{n,b}(x) = \frac{1}{n-b+1}\sum_{t=1}^{n-b+1} \text{\bf \large 1} \set{Z_{1t}+Z_{2t} \le x}.
\end{displaymath}
By the same arguments used in the proof of Slutsky's theorem, the previous expression can be bounded as
\begin{eqnarray}
\sup_x \abs{\hat{G}_{n,b}(x)-G(x)} &\le& \sup_x \abs{ G_{n,b}(x \pm \xi)-G(x)} + \nonumber\\
&+& \frac{1}{n-b+1}\sum_{t=1}^{n-b+1} \text{\bf \large 1} \set{ |Z_{2t}| > \xi} \nonumber
\end{eqnarray}
for any positive constant $\xi$. Since $G(x)$ is continuous at any $x$ (Normal distribution), it follows that $\sup_x\abs{ G_{n,b}(x) -G(x)} \stackrel{\text{p}}{\longrightarrow} 0$ by Theorem (4.1) in \cite{politis-romano-wolf-2001}. Moreover, by \ref{ass:eps} $Z_{2t} \stackrel{\text{p}}{\longrightarrow} 0$, for all $t$ and thus
\begin{displaymath}
\frac{1}{n-b+1}\sum_{t=1}^{n-b+1} \text{\bf \large 1} \set{ |Z_{2t}|> \xi} \stackrel{\text{p}}{\longrightarrow} 0,
\end{displaymath}
for all $\xi>0$, which proves the result.
Finally, part \textit{(ii)} is straightforward following Theorem 5.1 of \cite{politis-romano-wolf-2001} and part \textit{(i)} of this Lemma.
\end{proof}
\paragraph{Proof of Proposition \ref{prop:2}.} Let $P^*(X)$ and $E^*(X)$ be the conditional probability and the conditional expectation of a random variable $X$ with respect to a set $\chi = \set{Y_1,\ldots,Y_n}$.
Here $\hat G_{n,b}(x)$ uses the estimator $\hat s_b(\cdot)$ on each subsample of length $b$. Then,
\begin{equation*}
\frac{1}{b}\sum_{i=t}^{t+b-1}\left(\hat \varepsilon_i-\varepsilon_i\right)^2=O_p\left(b^{-4/5}\right) \qquad \forall t,
\end{equation*}
as in the proofs of Proposition \ref{prop:1} and Lemma \ref{lemma1} (i).
Then, $\sqrt{b}b^{-4/5}\rightarrow 0$. So, by Lemma
\ref{lemma1} (i), it follows that
\[
\sup_x\left|\hat G_{n,b}(x)-G(x)\right|\stackrel{\text{p}}{\longrightarrow} 0,
\]
since $G(x)$ is continuous $\forall x$.
Let $Z_t(x)=\text{\bf \large 1}\set{\sqrt{b}\left(\hat V_{n,b,t}-V_n\right)\le x}$, $t=1,\ldots,n-b+1$,
and $Z_i^*(x)=\text{\bf \large 1}\set{\sqrt{b}\left(\hat V_{n,b,I_i}-V_n\right)\le x}$, $i=1,\ldots,K$,
where each $I_i$ is a random index drawn from $I=\set{1,2,\ldots,n-b+1}$,
so that $P(I_i=t)=\frac{1}{n-b+1}$ for all $i$ and $t$.
Then we can write
$\tilde{G}_{n,b}(x)=\frac{1}{K}\sum_{i=1}^KZ_i^*(x)$ and
\begin{displaymath}
E^* \tonde{ \tilde{G}_{n,b}(x)} = \frac{1}{n-b+1}
\sum_{t=1}^{n-b+1}Z_t(x) = \hat{G}_{n,b}(x)\stackrel{\text{p}}{\longrightarrow} G(x).
\end{displaymath}
By Corollary 2.1 in \citet{politis-romano-1994} we have that
\[
\sup_x\left|\tilde G_{n,b}(x)-\hat G_{n,b}(x)\right|\rightarrow 0 \quad \mbox{w.p.~1}.
\]
Then, the result follows. \qed
\paragraph{Proof of Corollary \ref{cor:1}.}
It follows the proof of Lemma \ref{lemma1} (ii) by replacing Lemma \ref{lemma1} (i) with Proposition \ref{prop:2}. \qed
\section{Introduction}\label{Section_Introduction}
Music signals have a fascinating complex structure with interesting statistical properties. A music signal is the sum of periodic components plus transient components that determine changes from one dynamic level to another. The term ``transients'' refers to changes in acoustic energy. Transients are of huge interest. For technical reasons most recording and listening media have to compress acoustic energy variations to some extent, and this causes peaks to be strongly reduced with respect to the average level. The latter is also known as ``dynamic range compression''. Compression of the dynamic range (DR) increases the perceived loudness. The DR of a signal is the spread of its acoustic power. Loss of DR along the recording-to-playback chain translates into a loss of audio fidelity. While DR is a well established technical concept, there is no consensus on how to define it and how to measure it, at least in the field of music signals. DR measurement has become a hot topic in the audio business. In 2008 the release of the album ``Death Magnetic'' (by Metallica) attracted the media's attention for its extreme and aggressively loud sound, caused by massive DR compression. DR manipulations are not reversible: once applied, the original dynamics are lost forever \citep[see][and references therein]{katz-2007, vickers-2010}. Furthermore, there is now consensus that there is a strong correlation between the DR and the recording quality perceived by the listener. Practitioners in the audio industry measure the DR based on various descriptive statistics for which little is known in terms of their statistical properties \citep{boley-etal-2010, ballou-2005}.
The aim of this paper is twofold: (i) define a DR statistic that is able to characterize the dynamics of a music signal and to detect DR compression effectively; (ii) build a procedure to estimate the DR with proven statistical properties. In signal processing a ``dynamic compressor'' is a device that reduces the peakedness of the sound energy. The idea here is that the dynamic structure of a music signal is characterized by the energy produced by transient dynamics, so that the DR is measured by looking at the distribution of transient power. We propose a nonparametric model composed of two elements: (i) a smooth regression function mainly accounting for long-term harmonic components; (ii) a stochastic component representing transients. In this framework transient power is given by the variance of the stochastic component. By consistently estimating the distribution of the variance of the stochastic component, we obtain the distribution of its power, which, in turn, is the basis for constructing our DR statistic. The DR, as well as other background concepts, is introduced in Sections \ref{sec:tech} and \ref{sec:Statistical_Modelling_and_Estimation}.
This paper makes four contributions. The idea of decomposing the music signal into a deterministic function of time plus a stochastic component is due to the work of \cite{Serra_Smith_1990}. However, the stochastic term of this decomposition is usually assumed to be white noise. While this is appropriate in some situations, in general the white noise assumption is too restrictive. The first novelty of this paper is that we propose a decomposition where the stochastic term is an $\alpha$-mixing process, an assumption that accommodates transient structures beyond those allowed by linear processes. The stochastic component is obtained by filtering out the smooth component of the signal, which is approximated with a simple kernel estimator inspired by \cite{altman-1990}. The second contribution of this work is that we build upon Altman's seminal paper, obtaining a rate-optimal kernel estimator without assuming linearity or knowledge of the correlation structure of the stochastic term. An important advantage of the proposed smoothing is that only one data-driven tuning is needed (see Assumption \ref{ass:M} and Proposition \ref{prop:1}), while existing methods require two tunings to be fixed by the user \citep[e.g. ][]{Hall-a-etal-1995}. Approximation of the distribution of the variance of the stochastic component of the signal is done by a subsampling scheme inspired by those developed by \cite{politis-romano-1994} and \cite{politis-romano-wolf-2001}. However, standard subsampling requires computing the variance of the stochastic component on the entire sample, which in turn means computing the kernel estimate of the smooth component over the entire sample. The latter is unfeasible given the astronomically large nature of the typical sample size. Hence, a third contribution of the paper is that we propose a consistent random subsampling scheme that only requires computations at the subsample level. The smoothing and the subsampling are discussed in Section \ref{sec:estimation}. A further contribution of the paper is that we propose a DR statistic based on the quantiles of the variance distribution of the stochastic component. The smoothing--subsampling previously described is used to obtain estimates of such a statistic. The performance of the DR statistic is assessed in a simulation study where we use real data to produce simulated levels of compression. Various combinations of compression parameters are considered. We show that the proposed method is quite accurate in capturing the DR concept. DR is considered a measure of hifi quality, and based on a real dataset we show how the estimated DR measure emulates comparative subjective judgements about hifi quality given by experts. All this is treated in Sections \ref{sec:empirical_evidence_montecarlo} and \ref{sec_real_dataset}. Conclusions and final remarks are given in Section \ref{sec:concl-final-remarks}. All proofs of statements are given in the final Appendix.
\section{Dynamic range statistic}\label{sec:dynam-range-stat}
The random nature of $\eps_t$ allows us to use statistical theory to estimate its distribution. If the SSW catches transient energy variations, then its distribution will highlight important information about the dynamics.
The square root of $\hat V_{n,b,I_i}$ is a consistent estimate of the RMS power of $\eps_t$ over the block starting from $t=I_i$. The loudness of the $\eps_t$ component over each block can be measured on the dBFS scale by taking $L_{n,b,I_i} = -10 \log_{10}\hat V_{n,b,I_i} $. Notice that whenever we take continuous monotone transformations of $\hat V_{n,b,I_i}$, consistency of the quantities obtained through the subsampling is preserved.
In analogy with the DRs, we can define a DR measure based on the subsampling distribution of $\hat V_{n,b,I_i}$. We define the DR measure block-wise as $\text{DR}_{n,b,I_i} \equiv L_{n,b,I_i}$. For a sound wave scaled onto the interval [-1,1] this is actually a measure of the DR of $\eps_t$, because it tells us how far the SSW is below the maximum attainable instantaneous power. We propose a measure of DR given by the median of the subsampling distribution of $\text{DR}_{n,b,I_i}$; that is, we define what we call the ``{\sl Median Stochastic DR}'' as
$$\text{MeSDR} = \text{med}_K \{\text{DR}_{n,b,I_i} \quad i=1,2\ldots,K\},$$
where $\text{med}_K\{\cdot\}$ denotes the empirical median over a set of $K$ observations. Since $\text{DR}_{n,b,I_i}$ is obtained by applying a continuous monotone transform to $\hat V_{n,b,I_i}$, the consistency theory for the quantiles of $\text{DR}_{n,b,I_i}$ applies. For waves scaled onto [-1,1] it is easy to see that our statistic is expressed in dBFS. If the wave is not scaled onto [-1,1], it suffices to add $20\log_{10}$(maximum absolute observed sample), and this will correct for the existence of headroom.\\
The MeSDR has a nice practical interpretation. Suppose MeSDR=20: this means that 50\% of the stochastic sound power is at least 10dBFS below the maximum instantaneous power. Hence large values of the MeSDR indicate large dynamic swings. There are several advantages to such a measure. The DRs statistic is based on mean values, which are pushed toward the very peak power they are meant to be compared with. Contrary to what happens with the DRs, the MeSDR is less influenced by the peaks. It can be argued that the SSW does not only catch transients and non-periodic smooth components: in fact, it is likely that it also fits noise, mainly quantization noise. This is certainly true, but quantization noise operates at extremely low levels and its power is constant over time. Moreover, since it is likely that the digital operations producing DR compression (compressors, limiters, equalizers, etc.) increase the quantization noise (which in theory is not serially correlated), this would be reflected in a decrease of the MeSDR, so we should be able to detect DR compression better than classical DRs-like measures. \\
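The following Python sketch shows how the MeSDR, together with the block-wise DR values, could be computed once the $K$ subsample variances are available. The function name and the toy input are illustrative only; the headroom correction follows the remark above.
\begin{verbatim}
import numpy as np

def mesdr(block_variances, peak=1.0):
    """Median Stochastic DR from K subsample variances of the SSW.

    `block_variances` collects the K estimates hat-V_{n,b,I_i};
    `peak` is the maximum absolute observed sample, used to correct
    for headroom when the wave is not scaled onto [-1, 1].
    """
    v = np.asarray(block_variances, dtype=float)
    dr = -10.0 * np.log10(v) + 20.0 * np.log10(peak)   # block-wise DR, dBFS
    return np.median(dr), dr

# Toy example with K = 500 block variances (illustrative values only).
rng = np.random.default_rng(1)
med, dr_values = mesdr(rng.uniform(5e-4, 2e-3, size=500))
print(round(med, 2))
\end{verbatim}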
\section{Estimation}\label{sec:estimation}
In analogy with equation \eqref{eq:pcm_power}, the RMS power of the SSW is given by the sampling standard deviation of $ \{\varepsilon_t\}_{t \in \mathbb{Z}}$. The main goal is to obtain an estimate of the distribution of the variance of $\{\eps_t\}_{t \in \mathbb{Z}}$. Application of existing methods would require nonparametric estimation of $s(\cdot)$ on the entire sample. However, the sample size is typically of the order of millions of observations. Moreover, since the smooth component is time-varying, one would have to estimate $s(\cdot)$ by kernel methods with a local window. It is clear that all this is computationally infeasible. Instead, we propose a statistical procedure where all computations are performed subsample-wise:
{\bf random subsampling procedure}\\
\newcommand{{\color{white}{xxx}}}{{\color{white}{xxx}}}
fix constants: $b \in \mathbb{N}, K \in \mathbb{N}$; \\
draw a random sample $\set{I_1, I_2, \ldots, I_K}$ from the set of indexes $\set{1,2,\ldots, n-b+1};$\\
{\bf for} ($i:=1$ to $K$); {\bf do:}\\
{\color{white}{xxx}} optimally estimate $s(\cdot)$ on the subsample $Y^{\text{sub}}_i=\set{y_{I_i}, y_{I_{i}+1}, \ldots, y_{I_{i}+b-1}},$ \\
{\color{white}{xxx}} compute the sampling variance of $\hat \eps$ over $Y^{\text{sub}}_i$\\
{\bf end for}\\
construct the empirical distribution of the variances of $\hat \eps$.\\
Compared with standard subsampling, the previous procedure gives clear advantages: (i) randomization reduces the otherwise impossibly large number of subsamples to be explored; (ii) none of the computations is performed on the entire sample; in particular, estimation of $s(\cdot)$ is performed subsample-wise as in Proposition \ref{prop:2} and Corollary \ref{cor:1}; (iii) estimation of $s(\cdot)$ on smaller blocks of observations allows us to adopt a global, rather than a local, bandwidth approach. Points (ii) and (iii) are crucial for keeping the computing load feasible. The smoothing and the random subsampling parts of the procedure are treated separately in the next two sections.
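A compact Python rendition of the procedure is sketched below. The block-level smoother is a Priestley--Chao-type kernel estimator with the Epanechnikov kernel (see Section \ref{sec_smoothing}) but with a fixed bandwidth instead of the cross-validated one, and the residual variance is computed on the interior of each block to limit boundary effects; it is therefore a structural illustration rather than a faithful reimplementation.
\begin{verbatim}
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel, compactly supported on [-1, 1].
    return 0.75 * np.clip(1.0 - u**2, 0.0, None)

def smooth_block(y, h):
    # Priestley-Chao kernel estimate of s(.) on one block, with the
    # block time axis rescaled onto (0, 1] and a fixed bandwidth h.
    b = len(y)
    t = np.arange(1, b + 1) / b
    u = (t[:, None] - t[None, :]) / h
    return (epanechnikov(u) @ y) / (b * h)

def random_subsampling(y, b, K, h, seed=0):
    # Returns the K block-wise residual variances; h is held fixed,
    # whereas the paper selects it by corrected cross-validation.
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(y) - b + 1, size=K)   # random starts I_i
    lo = int(np.ceil(h * b))                           # trim (0,h), (1-h,1)
    V = np.empty(K)
    for i, t0 in enumerate(starts):
        block = y[t0:t0 + b]
        resid = (block - smooth_block(block, h))[lo:b - lo]
        V[i] = resid.var(ddof=1)
    return V

# Toy usage: smooth trend plus short-memory noise.
rng = np.random.default_rng(2)
n = 100_000
tt = np.arange(n) / n
eps = 0.05 * np.convolve(rng.standard_normal(n), np.ones(5) / 5, "same")
y = 0.3 * np.sin(2 * np.pi * 3.0 * tt) + eps
V = random_subsampling(y, b=2205, K=50, h=0.05)
print(round(np.median(-10.0 * np.log10(V)), 2))        # MeSDR-style summary
\end{verbatim}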
\subsection{Smoothing}\label{sec_smoothing}
This section treats the smoothing with respect to the entire sample. The theory developed in this section underpins the \textit{local} estimation of $s(\cdot)$ at the subsample level, which will be treated in Section \ref{sec_random_subsampling}. First notice that, without loss of generality, we can always rescale $t$ onto $(0,1)$ with equally spaced values. Therefore, model (\ref{eq:Yt}) can be written as
\begin{equation}
\label{eq:Ytbis}
Y_i = s(i/n)+\varepsilon_i, \qquad i=1,\ldots,n.
\end{equation}
Estimation of $s(t)$, $t\in(0,1)$, is performed based on the classical Priestley-Chao kernel estimator \citep[][]{priestley-chao-1972}
\begin{equation}\label{eq:ker}
\hat{s}(t) = \frac{1}{nh} \sum_{i=1}^n \mathcal{K}\left(\frac{t-i/n}{h}\right)y_i,
\end{equation}
under the assumption
\begin{assumption}{\bf A3}{} \label{ass:K}
$\mathcal{K}(\cdot)$ is a density function symmetric about zero with compact support. Moreover, $\mathcal{K}(\cdot)$ is Lipschitz continuous of some order. The bandwidth $h\in
H=[c_1n^{-1/5};c_2n^{-1/5}]$, where $c_1$ and $c_2$ are two positive constants such that $c_1$ is arbitrarily small while $c_2$ is arbitrarily large.
\end{assumption}
Without loss of generality, we will use the Epanechnikov kernel for its well known efficiency properties, but any other kernel function fulfilling \ref{ass:K} can be used.
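We recall that the Epanechnikov kernel is $\mathcal{K}(u) = \frac{3}{4}\left(1-u^2\right) \text{\bf \large 1}\set{|u| \le 1}$: it is a density, symmetric about zero, compactly supported and Lipschitz continuous, and therefore it fulfils \ref{ass:K}.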
\cite{altman-1990} studied the kernel regression problem when the error term additive to the regression function exhibits serial correlation. Furthermore, in the setup considered by \cite{altman-1990} the error term is a linear process. That paper showed that when the stochastic term exhibits serial correlation, standard bandwidth optimality theory no longer applies. The author proposed an optimal bandwidth estimator based on a correction factor that assumes that the autocorrelation function is known. Therefore Altman's theory does not apply here for two reasons: (i) in this paper $\set{\eps_t}_{t \in \mathbb{Z}}$ is not restricted to the class of linear processes; (ii) we do not assume that the serial correlations are known. Let
$\hat{\eps}_i = y_i - \hat{s}(i/n)$, and let us define the cross-validation function
\begin{equation}\label{eq:CV}
\text{CV}(h)= %
\quadre{1-\frac{1}{nh}\sum_{j=-M}^M \mathcal{K}\left(\frac{j}{nh}\right)\hat{\rho}(j)} ^{-2} %
\frac{1}{n} \sum_{i=1}^n \hat{\eps}_i^2 ;%
\end{equation}
where the first term is the correction factor \`{a} la Altman, with the difference that it depends on the estimated autocorrelations of $ \{\varepsilon_t\}_{t \in \mathbb{Z}}$ up to order $M$. We show that this modification does not affect consistency at the optimal rate. The number of lags in the correction factor depends both on $n$ and $h$. Intuitively, consistency of the bandwidth selector can only be achieved if $M$ increases at a rate smaller than $nh$, and in fact we will need the following technical requirement:
\begin{assumption}{\bf A4}{}\label{ass:M}
Whenever $n\to \infty$; then $M \to \infty$ and $M=O(\sqrt{nh})$.
\end{assumption}
The previous condition makes clear the relative order of the two smoothing parameters $M$ and $h$. The bandwidth is estimated by minimizing the cross-validation function, that is
\begin{equation*}
\hat h = {\text{argmin}}_{h \in H} \; \text{CV}(h).
\end{equation*}
Let $\text{MISE}(h; \hat{s})$ be the mean integrated square error of $\hat{s}(\cdot)$, and let $h^\star$ be the global minimizer of $\text{MISE}(h; \hat{s})$.
\begin{proposition}\label{prop:1}
Assume {\ref{ass:s}}, {\ref{ass:eps}}, {\ref{ass:K}} and {\ref{ass:M}}. $\hat{h}/{h^\star} \stackrel{\text{p}}{\longrightarrow} 1$ as $n \to \infty$.
\end{proposition}
Proof of Proposition~\ref{prop:1} is given in the Appendix. It shows that $\hat h$ achieves the optimal global bandwidth.
The previous result improves on the existing literature in several respects. Previous works on kernel regression with correlated errors all require stronger assumptions on $ \{\varepsilon_t\}_{t \in \mathbb{Z}}$, e.g. linearity, Gaussianity, existence of higher-order moments and some stringent technical conditions \citep[see][]{altman-1990,altman_1993,Hart_1991,xia_li_2002, FranciscoFernandez_etal_2004}. None of the contributions in the existing literature treats the choice of the smoothing parameter appearing in the cross-validation function, that is $M$. \cite{FranciscoFernandez_etal_2004} mention its crucial importance, but give no clear indication on how to set it. \ref{ass:M} improves upon this by giving a clear indication of how this tuning has to be fixed automatically in order to achieve optimality. In fact, Proposition \ref{prop:1} suggests taking $M=\lfloor\sqrt{n h}\rfloor$. Therefore the smoothing step is completely data-driven. Notice that, alternatively, standard cross-validation would require fixing two tuning parameters \citep[see Theorem 2.3 in][]{Hall-a-etal-1995}.
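For concreteness, the corrected criterion \eqref{eq:CV} can be coded directly. The Python sketch below takes as input the residuals obtained with a candidate bandwidth $h$ and evaluates $\text{CV}(h)$ with $M=\lfloor\sqrt{nh}\rfloor$; the kernel fit producing the residuals is omitted for brevity, so this is a sketch of the criterion only.
\begin{verbatim}
import numpy as np

def epanechnikov(u):
    return 0.75 * np.clip(1.0 - np.asarray(u, float) ** 2, 0.0, None)

def acf(e, max_lag):
    # rho_hat(j) = gamma_hat(j) / gamma_hat(0),
    # gamma_hat(j) = n^{-1} sum_t e_t e_{t+j}
    n = len(e)
    g = np.array([np.dot(e[: n - j], e[j:]) / n
                  for j in range(max_lag + 1)])
    return g / g[0]

def cv_criterion(eps_hat, h):
    """Corrected cross-validation criterion CV(h).

    `eps_hat` are the residuals y_i - s_hat(i/n) obtained with the
    candidate bandwidth h; the correction factor uses the estimated
    autocorrelations up to M = floor(sqrt(n h)).
    """
    e = np.asarray(eps_hat, dtype=float)
    n = len(e)
    M = int(np.floor(np.sqrt(n * h)))
    rho = acf(e, M)
    j = np.arange(-M, M + 1)
    corr = np.sum(epanechnikov(j / (n * h)) * rho[np.abs(j)]) / (n * h)
    return (1.0 - corr) ** (-2) * np.mean(e**2)

# One would compute eps_hat on a grid of bandwidths in
# H = [c1 * n**(-1/5), c2 * n**(-1/5)] and minimize CV(h); here the
# kernel fit is omitted and white-noise residuals are used as a check.
e = np.random.default_rng(0).standard_normal(10_000)
print(cv_criterion(e, h=10_000 ** (-0.2)))
\end{verbatim}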
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{pics/intheflesh_spectra_1.jpg}\\
\vspace{1em}
\includegraphics[width=\textwidth]{pics/intheflesh_spectra_2.jpg}
\caption{Points represent windowed periodogram power spectral density estimates of the SSW obtained in two subsamples of size 50ms extracted from the wave reported in Figure~\ref{fig:intheflash2039}. The solid line has been obtained by kernel smoothing. The top plot refers to a subsample randomly chosen within ``Block 1'', while the bottom plot refers to a subsample randomly chosen within ``Block 2''.}
\label{fig_intheflash_residuals}
\end{figure}
In order to see how the behavior of $ \{\varepsilon_t\}_{t \in \mathbb{Z}}$ is time-varying, see Figure~\ref{fig_intheflash_residuals}. The SSW has been estimated based on \eqref{eq:ker} and \eqref{eq:CV} on subsamples of length equal to 50ms. The first subsample has been randomly chosen within Block 1 of Figure~\ref{fig:intheflash2039}, while the second has been randomly chosen within Block 2. Discrete-time Fourier transform measurements have been windowed using a Hanning window. Points in the plots correspond to spectral estimates at FFT frequencies scaled to dBFS (log-scale). It can be seen that the two spectra show dramatic differences. In the first one the energy spread by the SSW is modest and near the shape and level of uncorrelated quantization noise. On the other hand, the bottom spectrum shows a pattern suggesting that correlations vanish at a slow rate, which is consistent with \ref{ass:eps}. The steep linear shape on log-log coordinates above 3KHz is approximately reminiscent of the $1/f$--noise spectrum; however, below 3KHz the almost flat behavior suggests a strong departure from the $1/f$--noise hypothesis. All this confirms the idea that there are music sequences where the SSW in \eqref{eq:Yt} cannot be seen as the usual ``error term''. Tests proposed in \cite{Berg_McMurry_Politis_2012} also lead to rejection of the linearity hypothesis for the SSW. In the end, it is remarkable that these extremely diverse stochastic structures coexist within just one second of music.
\subsection{Random Subsampling}\label{sec_random_subsampling}
Equation \eqref{eq:pcm_power} only takes into account sums of squares; this is because, theoretically, PCM samples are always scaled to have zero mean. Notice that, even though we assume that $ \{\varepsilon_t\}_{t \in \mathbb{Z}}$ has zero expectation, we define the RMS based on variances, taking into account the fact that quantization could introduce an average offset in the PCM samples. Let us introduce the following quantities:
\begin{equation}\label{eq:Vn}
V_n=\frac{1}{n-1}\sum_{i=1}^n\left(\eps_i-\bar{\eps} \right)^2, %
\qquad \text{with} \quad \bar{\eps}=\frac{1}{n} \sum_{i=1}^n \eps_i.
\end{equation}
The distribution of the RMS power of the SSW is given by the distribution of $\sqrt{V_n}$. By \ref{ass:eps} it can be shown that
$\sqrt{n} (V_n - \sigma^2_{\eps} ) \stackrel{\text{d}}{\longrightarrow} \text{Normal}(0,V),$ where $\sigma^2_{\eps} =\ex[\eps_t^2]$ and $ V = \lim_{n\rightarrow\infty} n \var[V_n]$. From now onward, $G(\cdot)$ will denote the distribution function of a $\text{Normal}(0,V)$.
Although the sequence $\set{\eps_i}$ is not observable, one can approximate its power distribution based on $\hat \eps_i=y_i-\hat{s}(i/n)$. Replacing $\eps_i$ with $\hat \eps_i$ in \eqref{eq:Vn} we obtain:
\begin{equation}\nonumber
\hat V_n=\frac{1}{n-1}\sum_{i=1}^n\left(\hat \eps_i-\bar{\hat \eps} \right)^2, %
\qquad \text{with} \quad \bar{\hat \eps}=\frac{1}{n} \sum_{i=1}^n \hat \eps_i.
\end{equation}
The distribution of $\hat V_n$ can now be used to approximate the distribution of $V_n$.
One way to do this is to implement a subsampling scheme \`{a} la \cite{politis-romano-wolf-2001}. That is, for every block of observations of length $b$ (the subsample size) one computes $\hat V_n$; in this case there would be $n-b+1$ subsamples to explore. One then hopes that the empirical distribution of the $n-b+1$ subsample estimates of $\hat V_n$ agrees with the distribution of $V_n$ when both $n$ and $b$ grow large enough at a certain relative speed. This is essentially the subsampling scheme proposed by \cite{politis-romano-wolf-2001} and \cite{Politis_Romano_2010}, but it is of limited practical use here. A three-minute stereo song contains $n=15,876,000$ samples, which means that one has to compute $\hat s(\cdot)$ on such a long series. Even if this is in principle possible, it would require a large computational power hardly achievable by regular computers. We solve the problem by introducing a variant of the subsampling scheme previously described. Namely, instead of estimating $s(\cdot)$ on the entire series, we estimate it on each subsample separately; then we use the average estimated error computed block-wise instead of $\bar{\hat \eps}$ computed on the whole sample. Moreover, a block-wise kernel estimate of $s(\cdot)$ allows us to work with the simpler global bandwidth instead of the more complex local bandwidth without losing too much in the smoothing step.
Let $\hat s_b(\cdot)$ be the estimator of $s(\cdot)$ on a subsample of length $b$, that is
\begin{equation}\label{eq_sb}
\hat{s}_b(t) = \frac{1}{bh} \sum_{i=1}^b \mathcal{K}\left(\frac{t-i/b}{h}\right)y_i.
\end{equation}
At a given time point $t$ we consider a block of observations of length $b$ and we consider the following statistics
\begin{equation}\nonumber
V_{n,b,t}=\frac{1}{b-1}\sum_{i=t}^{t+b-1} (\eps_i- \bar{\eps}_{b,t})^2, %
\qquad \text{and} \qquad
\hat{V}_{n,b,t}=\frac{1}{b-1}\sum_{i=t}^{t+b-1} (\hat{\eps}_i-\bar{\hat{\eps}}_{b,t})^2,
\end{equation}
with $\bar{{\eps}}_{b,t}=b^{-1}\sum_{i=t}^{t+b-1} {\eps}_i$ and $\bar{\hat{\eps}}_{b,t}=b^{-1}\sum_{i=t}^{t+b-1} \hat{\varepsilon}_i$. Note that in $\hat V_{n,b,t}$ we can consider either $\hat{\varepsilon}_{i}=y_i-\hat s(i/n)$ or $\hat{\varepsilon}_{i}=y_i-\hat s_b((i-t+1)/b)$, $i=t,\ldots,t+b-1$. Of course, the bandwidth $h$ depends on $n$ or $b$; we do not make this dependence explicit because it will be clear from the context.
The empirical distribution functions of $V_{n,b,t}$ and $\hat{V}_{n,b,t}$ will be computed as
\begin{eqnarray}
G_{n,b}(x)&=&\frac{1}{n-b+1}\sum_{t=1}^{n-b+1}\text{\bf \large 1}\set{\sqrt{b}\left(V_{n,b,t}-V_n\right)\le
x}, \nonumber\\
\hat{G}_{n,b}(x)&=&\frac{1}{n-b+1}\sum_{t=1}^{n-b+1} \text{\bf \large 1}\set{\sqrt{b} (\hat{V}_{n,b,t}-V_n)\le x} ; \nonumber
\end{eqnarray}
where $\text{\bf \large 1}\set{A}$ denotes the usual indicator function of the set $A$.
Furthermore, the quantiles of the subsampling distribution also converge to the quantities of interest, that is those of $V_n$. This is a consequence of the fact that $\sqrt{n}V_n$ converges weakly to a Normal distribution, which we denote by $F$. Let us define the empirical distributions:
\begin{eqnarray}
F_{n,b}(x)&=&\frac{1}{n-b+1}\sum_{t=1}^{n-b+1}\text{\bf \large 1} \set{ \sqrt{b}V_{n,b,t}\le
x},\nonumber\\
\hat{F}_{n,b}(x)&=&\frac{1}{n-b+1}\sum_{t=1}^{n-b+1} \text{\bf \large 1} \set{\sqrt{b}\hat{V}_{n,b,t}\le
x}. \nonumber
\end{eqnarray}
For $ \gamma \in (0,1)$ the quantities $q(\gamma)$, $q_{n,b}(\gamma)$ and $\hat{q}_{n,b}(\gamma)$ denote respectively the $\gamma$-quantiles with respect to the distributions $F$, $F_{n,b}$ and $\hat{F}_{n,b}$. We adopt the usual definition that $q(\gamma)=\inf\set{x: F(x)\ge \gamma}$.
However, exploring all subsamples still makes the procedure computationally heavy. A second variant reduces the number of subsamples by introducing a random block selection. Let $I_i$, $i=1,\ldots,K$, be random variables indicating the initial point of each block of length $b$. We draw the sequence $\set{I_i}_{i=1}^K$, with or without replacement, from the set $I=\{1,2,\ldots,n-b+1\}$. The empirical distribution function of the subsampling variances of $\eps_{t}$ over the random blocks is:
$$
\tilde{G}_{n,b}(x)=\frac{1}{K}\sum_{i=1}^{K} \text{\bf \large 1} \set{\sqrt{b} \tonde{ \hat{V}_{n,b,I_i}-V_{n}} \leq x },
$$
and the next result states the consistency of $\tilde{G}$ in approximating $G$.
\begin{proposition}\label{prop:2} %
Assume {\ref{ass:s}}, {\ref{ass:eps}}, {\ref{ass:K}} and {\ref{ass:M}}. Let $\hat s_b(t)$ be the estimator of $s(t)$ on a subsample of length $b$. If $K \rightarrow \infty$, $b/n\rightarrow 0$, $b\rightarrow \infty$ then $\sup_x\left|\tilde{G}_{n,b}(x)-G(x)\right| \stackrel{\text{p}}{\longrightarrow} 0$ when $n\rightarrow \infty$.
\end{proposition}
We can also establish consistency for the quantiles based on $\set{ \hat{V}_{n,b,I_i} }_{i=1}^{K}$. Let us define the distribution function
$$
\tilde{F}_{n,b}(x) = \frac{1}{K} \sum_{t=1}^{K} \text{\bf \large 1} \set{ \sqrt{b}\hat{V}_{n,b,I_t}\le x},
$$
and let $\tilde{q}_{n,b}(\gamma)$ be the $\gamma$-quantile with respect to $\tilde{F}$.
\begin{corollary}\label{cor:1} %
Assume {\ref{ass:s}}, {\ref{ass:eps}}, {\ref{ass:K}} and {\ref{ass:M}}. Let $\hat s_b(t)$ be the estimator of $s(t)$ on a subsample of length $b$. If $K \rightarrow \infty$, $b/n\rightarrow 0$, $b\rightarrow \infty$ then $\tilde{q}_{n,b}(\gamma) \stackrel{\text{p}}{\longrightarrow} q(\gamma)$ when $n\rightarrow \infty$.
\end{corollary}
Proposition \ref{prop:2} and Corollary \ref{cor:1} are novel in two respects. First, the two statements are based on the estimated residuals $\hat{\varepsilon}_i$ rather than on directly observed quantities, as in standard subsampling. Second, we replace $\hat s(\cdot)$ by $\hat s_b(\cdot)$, allowing for local smoothing without using local windowing on the entire sample.\\
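In practice, the consistency of the subsampling quantiles is what justifies reporting interval summaries for the MeSDR. The Python sketch below computes the empirical quantiles of $K$ hypothetical DR values and a distribution-free, order-statistics-based confidence interval for the median; this is one standard construction, and the exact finite-sample convention used for the tables in the paper may differ.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def median_ci(x, level=0.90):
    """Approximate distribution-free CI for the median of x.

    Uses the normal approximation to the Binomial(K, 1/2) law of the
    ranks (one standard construction; finite-sample conventions vary).
    """
    xs = np.sort(np.asarray(x, dtype=float))
    K = len(xs)
    z = norm.ppf(0.5 + level / 2.0)
    lo = int(np.floor(K / 2.0 - z * np.sqrt(K) / 2.0))  # 1-indexed lower rank
    hi = int(np.ceil(K / 2.0 + z * np.sqrt(K) / 2.0))   # xs[hi] = (hi+1)-th
    return xs[max(lo - 1, 0)], xs[min(hi, K - 1)]

# Toy usage on K = 500 hypothetical DR values (dBFS, placeholders).
dr = np.random.default_rng(3).normal(29.5, 1.2, size=500)
print(round(np.median(dr), 2), median_ci(dr, 0.90), median_ci(dr, 0.95))
\end{verbatim}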
Hence the subsampling procedure proposed here consistently estimates the distribution of $V_n$ and its quantiles. The key tuning constant of the procedure is $b$. One could estimate an optimal $b$, but again we have to accept that the astronomically large nature of $n$ would make the whole estimation infeasible. Moreover, for the particular problem at hand, there are subject-matter considerations that can effectively drive the choice of $b$. For music signals, dynamic variations are usually investigated on time intervals ranging from 35ms to 125ms (these are metering ballistics established with the IEC61672-1 protocol). Longer time horizons up to 1s are also used, but these are usually considered for long-term noise pollution monitoring. In professional audio software, 50ms is usually the default starting value. Therefore, we suggest starting from $b=2205$ as the ``50ms--default'' for signals recorded at the standard 44.1KHz sampling rate.
\section{Introduction}
What is referred to today as Kaluza--Klein theory (see \cite{Review} for an extensive collection of papers) is Klein's
modification \cite{Klein} of the original Kaluza theory \cite{Kaluza}. These two theories are dual \cite{ip1, ip2},
and the duality between them is the duality between the threading and slicing decompositions \cite{Boersma} of the
five-dimensional spacetime --- foliation with one-dimensional or codimension-one surfaces. \\
The field equations \cite{Thiry} of Klein's theory (threading decomposition) express Newton's constant as a dynamical
field (dilaton) and do not allow a constant solution for the dilaton unless an unphysical restriction on the Maxwell
tensor $F_{ij}$ is imposed: namely, $F_{ij} F^{ij} = 0$ (Latin indexes run from 1 to 4, Greek --- from 1 to 5).
In 1983 Gross {\it et al.} \cite{Gross} and Sorkin \cite{Sorkin} found magnetic monopoles in Klein's theory by
considering four-dimensional Euclidean and periodic in time Kerr \cite{Kerr} and Taub--NUT \cite{Taub, NUT} solutions
which were trivially embedded into a vacuum five-dimensional Klein universe with a timelike fifth dimension. The
original Euclidean periodic time was then identified as the fifth dimension and the magnetic vector potentials --- as
former degrees of freedom of the four-dimensional Kerr or Taub--NUT solution. The resulting four-dimensional gravity
has a non-constant dilaton and has lost the original Kerr or Taub--NUT geometry. \\
Unlike Klein's theory, in the original Kaluza's theory (slicing decomposition) the gauge degrees of freedom of the
electromagnetic potentials $A_i$ are transferred to the dilaton \cite{ip1, ip2}. This allows us to consider
four-dimensional spacetimes with constant dilaton (i.e. Newton's constant) and fixed gauge. We show
that a constant dilaton and a vacuum five-dimensional Kaluza universe necessitate a Ricci-flat four-dimensional slice.
In the dual Kaluza's setup, the Kerr or Taub--NUT geometry of the four-dimensional slice is preserved. The field
equations of the original Kaluza's theory lead to the interpretation of the four-dimensional Lorentzian Kerr and
Taub--NUT solutions as resulting from static electric and magnetic charges and dipoles in the presence of ghost matter.
\section{Field Equations of Kaluza's Theory}
\noindent
The five-dimensional Kaluza's metric is:
\begin{eqnarray}
\label{ka}
G_{\mu \nu} = \left( \begin{array}{ccc|c}
& & & \\
& g_{ij}^{\phantom{A^A}} & & A_i \\
& & & \\
\cline{1-4}
& A_i^{\phantom{A^A}} & & \phi \\
\end{array} \right) \,
\end{eqnarray}
The five-dimensional interval in mostly-plus metric is:
\begin{eqnarray}
\label{inte}
d\sigma^2 = g_{ij} (dy^i + A^i d s)(dy^j + A^j d s) + \frac{1}{N^2} ds^2 \, ,
\end{eqnarray}
where $y^1 \equiv t \, , \, y^5 \equiv s \, , \, g^{ij}$ is the inverse of $g_{ij} \, , \, A^i = g^{ij} A_j$ and
$N^{-2} = \phi - A^2$. \\
The slicing lapse function is $N^{-1}$, while the slicing shift vector field is given by $A^i$. \\
If one is to require $g_{ij}$ to be the metric of our four-dimensional world and $A_i$ --- the electromagnetic
potentials, then $N$ is the dilaton field and can be expressed as \cite{ip1}:
\begin{eqnarray}
N^2 = \frac{\mbox{det } g}{\mbox{det } G} \, .
\end{eqnarray}
The five-dimensional Kaluza metric $G_{\mu \nu}$ is a solution to the five-dimensional vacuum equations
$R_{\mu \nu} = 0$, where $R_{\mu \nu}$ is the five-dimensional Ricci tensor. These equations were written in terms of
the extrinsic curvature $\pi_{ij} = -(N/2) (\nabla_i A_j + \nabla_j A_i - \partial_s g_{ij}) $ of the four-dimensional
world as follows \cite{ip1, ip2, Wesson}:
\begin{eqnarray}
r_{ij} - \frac{1}{2}g_{ij} r & = & N \nabla_i \nabla_j \frac{1}{N} - N (\mathcal{L}_{\mbox{\tiny A}} \pi_{ij}
+ \partial_s \pi_{ij}) + (\pi \pi_{ij} - 2 \pi_{ik} \pi^k_j) \nonumber \\
& & \hskip4.99cm + \frac{1}{2} g_{ij} (\pi^2 - \pi_{kl} \pi^{kl}) \, , \\
0 & = & \nabla_i (\pi^i_j - \delta^i_j \pi) \, , \\
\lower2pt \hbox{$\phantom{.}^{\mbox{\tiny \fbox{}}}$} \frac{1}{N} &
= & A^i \nabla_i \pi - \frac{1}{N} \pi_{ij} \pi^{ij} - \partial_s \pi \, ,
\end{eqnarray} where $r_{ij}$ is the four-dimensional Ricci tensor, $r$ is the
four-dimensional scalar curvature, $\: \nabla_i$ is the
four-dimensional covariant derivative, $\lower1pt
\hbox{$\phantom{.}^{\mbox{\tiny \fbox{}}}$} = g^{ij} \nabla_i
\nabla_j \, ,$
$\:\: \mathcal{L}_{\mbox{\tiny A}}$ is the Lie derivative with respect to $A_i$ and $\pi = \pi^k_k$. \\
These equations can be equivalently written as \cite{ip1, ip2}:
\begin{eqnarray}
\label{einstein}
& & r_{ij} - \frac{1}{2} g_{ij}r = \frac{N^2}{2} T_{ij} \, , \\
\label{maxgen}
& & \nabla_i F^{ij} = - 2 A_i r^{ij} + \frac{2}{N^2} (\pi^{ij} - \pi^k_k g^{ij}) \partial_i N \, , \\
\label{div}
& & \nabla_i (A_j \pi^{ij} - \nabla^i \frac{1}{N}) = 0 \, ,
\end{eqnarray}
where $F_{ij} = \partial_i A_j - \partial_j A_i$ is the Maxwell electromagnetic tensor. \\
The first of these equations, (\ref{einstein}), are the equations of general relativity with matter; the second,
(\ref{maxgen}), are a generalization of Maxwell's equations; and the last, (\ref{div}), is the gauge-fixing condition.
The dilaton $N$ is related to Newton's constant $G_N$ via \cite{ip1}:
\begin{eqnarray}
\frac{N^2}{2} = \kappa = \frac{8 \pi G_N}{c^4} \, .
\end{eqnarray}
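As an illustrative aside (with standard SI values quoted only for orientation, $G_N \approx 6.674 \times 10^{-11}\;\mathrm{m^3\,kg^{-1}\,s^{-2}}$ and $c \approx 2.998 \times 10^{8}\;\mathrm{m\,s^{-1}}$), this gives $N^2 = 16 \pi G_N/c^4 \approx 4.2 \times 10^{-43}\;\mathrm{s^2\,kg^{-1}\,m^{-1}}$, i.e. twice the usual value of $\kappa$.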
The energy-momentum tensor $T_{ij}$ appearing in equation (\ref{einstein}) is given by \cite{ip1, ip2}:
\begin{eqnarray}
T_{ij} = T^{\mbox{\tiny Maxwell}}_{ij} \, + \, \nabla^k \Psi_{ijk} \, +
\, \nabla^k \Theta_{ijk} \, + \, C_{ij} \, + \, D_{ij} \, ,
\end{eqnarray}
where:
\begin{eqnarray}
T^{\mbox{\tiny Maxwell}}_{ij} & = & F_{ik} F^{\phantom{j} k}_j - \frac{1}{4} g_{ij} F_{kl} F^{kl} \, , \\ \nonumber \\
\Psi_{ijk} & = & A_k \nabla_j A_i - A_j \nabla_k A_i + A_i F_{jk} \, , \\ \nonumber \\
\Theta_{ijk} & = & \nabla_i (A_k A_j) + g_{ij}(A^l \nabla_k A_l - A_k \nabla_l A^l) \, , \\ \nonumber \\
C_{ij} & = & g_{ij} A^k A^l r_{kl} - 2 A^k A_j r_{ik} - 2 A^k A_i r_{jk} \, , \\ \nonumber \\
D_{ij} & = & \frac{2}{N} \nabla_i \nabla_j \frac{1}{N} - \frac{2}{N^2} \pi^k_k (A_i \partial_j N + A_j \partial_i N)
\nonumber \\
& & \!\!\!\!\! + \frac{2}{N^2} \Bigl[ - A^k \pi_{ij} + A_i \pi^k_j + A_j \pi^k_i - g_{ij}
(A^l \pi^k_l - A^k \pi^l_l)\Bigr] \partial_k N.
\end{eqnarray}
The field equations reveal a very interesting relation between the type of solution of the four-dimensional general
relativity and the dilaton. \\
We first suppose that the dilaton is constant: $N = \mbox{const}$. Let us write the five-dimensional vacuum
metric $G_{\mu \nu}$ in a block-diagonal form: $G_{\mu \nu} = \mbox{diag }(g'_{ij}, \, N^{-2})$. Having $A_i = 0$
in the field equations, together with $N = \mbox{const}$, results in the vanishing of the full energy-momentum tensor and,
therefore, in a Ricci-flat four-dimensional relativity ($r_{ij} = 0$). One can re-introduce the electromagnetic
potentials via a five-dimensional coordinate transformation. The only transformation which leaves the five-dimensional
interval (\ref{inte}) invariant is: $y^i \rightarrow y^i + s c^i, \: s \rightarrow s, \, $ where $c^i$ are constants.
Then the physical electromagnetic potentials will be $A_j = g_{jk} c^k$ . Under this transformation the fields in
the five-dimensional interval (\ref{inte}) transform as $g'_{ij} = g_{ij} \, , \: A'^i = A^i + c^i \, , \: N' = N$.
Thus $r_{ij} = 0$ remains unchanged. Therefore constant dilaton and a vacuum five-dimensional Kaluza universe
necessarily result in a Ricci-flat four-dimensional slice.
\section{Flat Four-dimensional Universe}
It is interesting to consider whether the converse is true, namely, if a Ricci-flat four-dimensional slice
($r_{ij} = 0$), embedded in a vacuum five-dimensional Kaluza universe ($R_{\mu \nu} = 0$), results in a constant
dilaton ($N = \mbox{const}$). We will give an example which shows that this is not the case. Consider a flat
four-dimensional slice with $g_{ij} = \eta_{ij} = \mbox{diag}(-1, \, 1, \, 1, \, 1)$. This is clearly a vacuum
solution. Let us now see if the five-dimensional metric $G_{\mu \nu} = \mbox{diag}(-1, \, 1,
\, 1, \, 1, \, N^{-2})$ admits a non-constant solution for $N$. From the field equations we see that when
$r_{ij} = 0$ and $A_i = 0$, then $N$ must be a solution to:
\begin{eqnarray}
\nabla_i \nabla_j \frac{1}{N} = 0 \, .
\end{eqnarray}
For the flat case, the obvious solution is: $N = (a_k y^k + a_5)^{-1}$, where $a_\mu$ are constants. It will be very
interesting to take:
\begin{eqnarray}
\label{kas} a_2 = a_3 = a_4 = a_5 = 0 \, , \:\: a^1 = \frac{c^2}{4 \sqrt{\pi} t_0} \, .
\end{eqnarray}
Then Newton's constant will become $G_N = c^4 N^2/(16 \pi) = (t_0/t)^2$ --- the gravitational attraction falling off
with time from infinity. This can be explained as a purely geometric effect between a vacuum universe embedded into
another vacuum universe. \\
The Kasner metric \cite{Kasner} is:
\begin{eqnarray}
d \sigma^2 = -dt^2 + \sum_{i=2}^4 \left( \frac{t}{t_0} \right)^{2p_i} (dy^i)^2 + \left(\frac{t}{t_0} \right)^{2p_5}
ds^2 \, ,
\end{eqnarray}
where $\sum_{i=2}^5 \: p_i = \sum_{i=2}^5 \: p_i^2 = 1$. \\
Solution (\ref{kas}) corresponds to the special case: $p_2 = p_3 = p_4 = 0 \, , \, p_5 = 1$.
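Indeed, these exponents satisfy both Kasner conditions: $\sum_{i=2}^5 p_i = 0 + 0 + 0 + 1 = 1$ and $\sum_{i=2}^5 p_i^2 = 1$.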
\section{Ghost Energy-Momentum Tensor}
As we are interested in solutions to the vacuum five-dimensional relativity with constant four-dimensional Newton's
constant (dilaton), we will have to consider Ricci-flat solutions to four-dimensional relativity only. For $r_{ij} = 0$
and $N = \mbox{const}$ the field equations reduce to:
\begin{eqnarray}
\label{e1}
r_{ij} - \frac{1}{2} g_{ij}r & = & \frac{N^2}{2} (T^{\mbox{\tiny Maxwell}}_{ij} + \nabla^k \Psi_{ijk}
+ \nabla^k \Theta_{ijk}) \, = \, 0 \, , \\
\label{e2}
\nabla_i F^{ij} & = & 0 \, , \\
\label{e3}
\nabla_i (A_j \pi^{ij}) & = & 0 \, .
\end{eqnarray}
We further have:
\begin{eqnarray}
\label{eqm1}
\nabla^j T^{\mbox{\tiny Maxwell}}_{ij} = F_{ik} \nabla_j F^{jk} = 0 \, ,
\end{eqnarray}
due to (\ref{e2}). Equation (\ref{eqm1}) is the conservation law for the energy and momentum resulting from Maxwell's
equations (\ref{e2}). \\
The tensor $\nabla^k \Theta_{ijk}$ satisfies:
\begin{eqnarray}
\label{eqm2}
\nabla^j \nabla^k \Theta_{ijk} = - \frac{2}{N} \nabla_j \nabla_i (A_k \pi^{ik}) = 0 \, ,
\end{eqnarray}
in view of the gauge-fixing condition (\ref{e3}). \\
Considering the remaining term, $\nabla^k \Psi_{ijk}$, we see that it satisfies:
\begin{eqnarray}
\label{eqm3}
\nabla^j \nabla^k \Psi_{ijk} & = & \frac{1}{2} (\nabla^j \nabla^k + \nabla^k \nabla^j) \Psi_{ijk} +
\frac{1}{2} [\nabla^j, \nabla^k] \Psi_{ijk} = 0
\end{eqnarray}
in view of the antisymmetry $\Psi_{ijk} = - \Psi_{ikj}$ and $r_{ij} = 0$. Thus the tensor $\nabla^k \Psi_{ijk}$ does
not describe any dynamics. \\
The gauge-fixing condition (\ref{e3}) and equation (\ref{eqm3}) lead to the conservation law:
\begin{eqnarray}
\nabla^j T^{\mbox{\tiny Ghost}}_{ij} = 0 \, ,
\end{eqnarray}
where $T^{\mbox{\tiny Ghost}}_{ij}$ is the Belinfante symmetric energy-momentum tensor of the ghost fields:
\begin{eqnarray}
T^{\mbox{\tiny Ghost}}_{ij} = \nabla^k (\Psi_{ijk} + \Theta_{ijk}) \, .
\end{eqnarray}
For the ``haunted'' Kaluza universe, the energy and momentum of the ghost fields completely compensate the energy
and momentum of the Maxwell fields:
\begin{eqnarray}
T^{\mbox{\tiny Ghost}}_{ij} + T^{\mbox{\tiny Maxwell}}_{ij} = 0
\end{eqnarray}
and, therefore, it is possible to have matter co-existing with ghost matter in a Ricci flat universe.
\section{Four-dimensional Lorentzian slice with Kerr \\ Geometry}
We will generate the five-dimensional solution simply by
starting off with a four-dimensional static Ricci-flat solution, promoting it trivially to five dimensions
(by adding $ds^2$ in the metric) and performing a five-dimensional coordinate transformation:
\begin{eqnarray}
\label{shift}
t \rightarrow t + \beta s \, ,
\end{eqnarray}
where $\beta$ is the inverse of the ``speed of light'' along the fifth, transverse dimension. This coordinate
transformation will not introduce $s$-dependence in the four-dimensional world (as the four-dimensional metric is
static and time appears only with its differential) and as a result the new {\it five-dimensional} ``observer'' will
``see'' the electromagnetic potentials:
\begin{eqnarray}
\label{pot}
A_j = \beta g_{tj} \, ,
\end{eqnarray}
where $j \in \{ t, \, r, \, \theta, \, \phi \}$. \\
The four-dimensional Kerr metric \cite{Kerr} in Boyer--Lindquist coordinates \cite{Boyer}, trivially promoted to
five dimensions is:
\begin{eqnarray}
d \sigma^2 & = & - \frac{\Delta}{\rho^2} (dt - a \sin^2 \theta \,\, d \phi)^2
+ \frac{\sin^2 \theta}{\rho^2} [(r^2 + a^2) \, d\phi - a \, dt]^2 \nonumber \\
& & \hskip30pt + \, \frac{\rho^2}{\Delta} \, dr^2 + \rho^2 \, d\theta^2 + ds^2 \, ,
\end{eqnarray}
where $\Delta = r^2 - \alpha r + a^2$ and $\rho^2 = r^2 + a^2 \cos^2 \theta$. Here $\alpha$ and $a$ are integration
constants which will be identified further (in the four-dimensional Kerr solution these are the mass and the angular
momentum per unit mass of a black hole). We consider the physically interesting case $\alpha > a$ (black hole
solution). \\
The electromagnetic potentials (\ref{pot}) are:
\begin{eqnarray}
A_t & = & \beta (-1 + \frac{\alpha r}{\rho^2}) \, , \\
A_r & = & 0 \, , \\
A_\theta & = & 0 \, , \\
A_\phi & = & - \frac{a \alpha \beta r \sin^2 \theta}{\rho^2} \, .
\end{eqnarray}
For large $r$, the non-zero components of the vector potential are:
\begin{eqnarray}
\label{el}
A_t & \sim & \beta (-1 + \frac{\alpha}{r}) \, , \\
\label{mag} A_\phi & \sim & - \, \frac{a \alpha \beta \sin^2
\theta}{r} \, .
\end{eqnarray}
Therefore, from (\ref{el}), one can identify the constant $\alpha \beta$ as electric charge:
\begin{eqnarray}
\label{qkerr}
q = \alpha \beta \, .
\end{eqnarray}
Equation (\ref{mag}) describes the field of a magnetic dipole of strength $a \alpha \beta$, located at the
origin \cite{Gross, Sorkin}:
\begin{eqnarray}
\label{mkerr}
m = a \alpha \beta = a q \, .
\end{eqnarray}
Thus one can interpret the Kerr solution as a black hole generated by an electric charge and magnetic dipole (and
not by a rotating massive centre). The potentials (\ref{pot}) satisfy the vacuum Maxwell's equations (\ref{e2}).
The electric charge and the magnetic dipole are located at the singularity $\rho = 0$ . \\
This is the only singularity of the Kerr spacetime and can be better understood in Cartesian coordinates \cite{ch}:
\begin{eqnarray}
x & = & \sqrt{r^2 + a^2} \, \sin \theta \, \cos \phi \, , \\
y & = & \sqrt{r^2 + a^2} \, \sin \theta \, \sin \phi \, , \\
z & = & r \cos \theta \, .
\end{eqnarray}
The singularity $\rho = 0$, i.e. $r = 0$ and $\cos \theta = 0$, corresponds to the ring $x^2 + y^2 = a^2$. \\
One can analytically continue the Kerr solution for negative values of $r$ \cite{ch}. The horizons are at:
\begin{eqnarray}
r_\pm = \alpha \pm \sqrt{\alpha^2 - a^2} \, .
\end{eqnarray}
The equations of the corresponding static horizons are:
\begin{eqnarray}
r_\pm(\theta) = \alpha \pm \sqrt{\alpha^2 - a^2 \cos^2 \theta} \, .
\end{eqnarray}
There are no timelike coordinates inside the ergosphere --- the region between the event horizon and surrounding
static horizon. \\
The Kerr solution thus describes two universes which behave asymptotically as Schwarzschild universes --- one with
$r > 0$ and having a positive centre $\alpha$, event horizon at $r_+$, and a Cauchy horizon at $r_- \, $; the
other --- with $r < 0$ and having a negative centre $\alpha$, event horizon at $r_-$, and a Cauchy horizon at
$r_+$. In our context this has the natural interpretation of a black hole solution with positive/negative charge
$q = \alpha \beta$ and a magnetic dipole of strength $m =a \alpha \beta$.
\section{Four-dimensional Lorentzian slice with Taub--NUT Geometry}
We consider five-dimensional Kaluza's universe with a four-dimensional \linebreak Lorentzian Taub-NUT \cite{Taub, NUT}
slice:
\begin{eqnarray}
d \sigma^2 & = & - V(r)(dt + 2 \ell \cos\theta \, d\phi)^2 + \frac{1}{V(r)} \, dr^2 \nonumber \\
& & \hskip30pt + \, (r^2 + \ell^2) (d \theta^2 + \sin^2 \theta \, d\phi^2) + ds^2 \, ,
\end{eqnarray}
where
\begin{eqnarray}
V(r) = 1 - 2 \, \frac{\alpha r + \ell^2}{r^2 + \ell^2} \, ,
\end{eqnarray}
and $\alpha$ and $\ell$ are, again, integration constants. \\
This metric has conical singularities at $\theta = 0, \, \pi$ (Misner string \cite{Misner}). The event horizon is
where $V(r)$ vanishes, i.e. at:
\begin{eqnarray}
r_\pm = \alpha \pm \sqrt{\alpha^2 + \ell^2} \, .
\end{eqnarray}
The metric can be analytically continued for negative $r$ in a similar way and we will be interested in the two regions:
Region I with $\alpha \le 0$ and $r < r_- < 0$ and Region II with $\alpha \ge 0$ and $r > r_+ > 0$. \\
There is also an ``ergoregion'', surrounding the Misner string, inside which $\phi$ is another timelike coordinate. The
equation of these horizons is:
\begin{eqnarray}
\tan^2 \theta = \frac{4 \ell^2 V(r)}{r^2 + \ell^2} \, , \: \: r < r_- \, \mbox{ or } \, r > r_+ \, .
\end{eqnarray}
For the electromagnetic potentials (\ref{pot}) introduced with the transformation (\ref{shift}), we get:
\begin{eqnarray}
\label{hop1}
A_t & = & - \beta V(r) \, , \\
A_r & = & 0 \, , \\
A_\theta & = & 0 \, , \\
\label{hop2}
A_\phi & = & - 2 \ell \beta V(r) \cos \theta \, .
\end{eqnarray}
Asymptotically, for large $r$ we have:
\begin{eqnarray}
\label{electric}
A_t & = & - \beta \left(1 - \frac{2 \alpha}{r} \right) \, , \\
\label{magnetic}
A_\phi & = & - 2 \ell \beta \left(1 - \frac{2 \alpha}{r} \right) \cos \theta \, .
\end{eqnarray}
Equation (\ref{electric}) is the electric potential due to charge
\begin{eqnarray}
\label{qtaub}
q = 2 \alpha \beta \, .
\end{eqnarray}
Equation (\ref{magnetic}) is the potential due to a magnetic monopole of charge $m = 2 \beta \ell$. This can be seen by
integrating the flux of the magnetic field $B^r$ through the infinite sphere:
\begin{eqnarray}
\label{mtaub}
4 \pi m = \lim_{r \to \infty} \oint_{S_r} B^r \, ds_r = \lim_{r \to \infty} \oint_{S_r} F_{\theta \phi} \, d\theta d\phi
= 4 \pi (2 \beta \ell) \, .
\end{eqnarray}
For Kerr geometry this integral vanishes (we do not have a monopole but a magnetic dipole there). \\
The location of these charges appears to be unclear as $V(r) = 1 - 2 (\alpha r + \ell^2)(r^2 + \ell^2)^{-1}$, appearing
in (\ref{hop1}) and (\ref{hop2}), is not singular outside the horizons. However, every spacelike hypersurface which
is pushed between the horizons becomes singular \cite{Brill} and therefore, we can interpret the points
$\, ir = \pm \ell \, $ (note that $ir$ is a spacelike coordinate between the horizons) as the loci where the images of
the charges are seen by an observer with $r > r_+$ or $r < r_-$. For the case $\alpha = 0$, the proper distance
between the origin and the location of the images is:
\begin{eqnarray}
\int\limits_{i0}^{\pm i \ell} \frac{dr}{\sqrt{V(r)}} \, =
\pm \, \, \ell \!\!\!\!\! \int\limits_{0}^{\ln (1 + \sqrt{2})} \sqrt{1 - \sinh ^2 x} \, dx \: \approx
\: \pm \, 0.7 \ell \, .
\end{eqnarray}
Therefore, an observer with $r > r_+ > 0$ (Region II) will register the image of a monopole a proper distance
$\vert 0.7 \ell \vert$ from the origin, while an observer with $r < r_- < 0$ (Region I) will register the image of
a monopole a proper distance $-\vert 0.7 \ell \vert$ from the origin.
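As a quick numerical cross-check of the quoted factor (a minimal sketch assuming standard NumPy and SciPy, not part of the original derivation):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Integrand sqrt(1 - sinh^2 x), clipped at zero to guard against a tiny
# negative argument from floating-point rounding near the upper limit,
# where sinh(x) reaches 1 and the integrand vanishes.
f = lambda x: np.sqrt(np.clip(1.0 - np.sinh(x)**2, 0.0, None))
upper = np.log(1.0 + np.sqrt(2.0))   # arcsinh(1)

value, _ = quad(f, 0.0, upper)
print(value)   # approximately 0.70, in agreement with the estimate above
\end{verbatim}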
\section{Conclusions}
We have presented solutions to the original Kaluza's theory which describe static electric and magnetic fields
generated by point-like electric and magnetic charges and dipoles. Unlike the dual Kaluza--Klein theory (namely,
Klein's modification of the original Kaluza's theory) these solutions allow for a constant Newton's constant,
as the gauge degrees of freedom are now transferred to the dilaton. The gauge-fixing of the electromagnetic
potentials results in the appearance of a Belinfante ghost part in the full energy-momentum tensor which fully
compensates the electromagnetic energy-momentum tensor. Four-dimensional solutions with vanishing Ricci tensor
($r_{ij} = 0$) are the only possible solutions when the dilaton is required to be constant in a Ricci-flat
five-dimensional universe. These four-dimensional gravitational Ricci-flat solutions can be interpreted from a
five-dimensional Kaluza's perspective, as solutions generated by four-dimensional electromagnetism of charges
and dipoles or their images (for the case of a Taub--NUT four-dimensional slice). The integration constants
$a \, , \alpha \mbox{ and } \beta$ (for Kerr geometry) and $\alpha \, , \ell \mbox{ and } \beta$ (for Taub--NUT
geometry) can be interpreted as charges (see (\ref{qkerr}) and (\ref{mkerr}) for Kerr geometry and
(\ref{qtaub}) and (\ref{mtaub}) for Taub--NUT geometry) and the solutions represent gravitational attraction
without unphysical regions with gravitational repulsion, as, for example, in the Reissner--Nordstr\"{o}m
case \cite{MTW}.
\section*{Acknowledgements}
\noindent
We would like to thank Siddhartha Sen, Brian Nolan and Vesselin Gueorguiev for very helpful discussions.
\section{Introduction}
\label{sec:intro}
Fragmentation of liquid droplets due to the flow of a high-speed air stream is observed in many industrial applications \cite{sikroria2014experimental} and natural phenomena \cite{Villermaux2009}. In atomization, the fuel jet initially forms ligaments and breaks into droplets (primary atomization). These droplets become unstable when the aerodynamic force overcomes the influence of the surface tension force and the viscous force, and subsequently break into smaller satellite droplets (secondary atomization). The surface area to volume ratio increases significantly in the secondary atomization process, which in turn increases its efficiency (see e.g. Refs. \cite{varga2003initial,lefebvre2017atomization}). Thus the fragmentation of a liquid droplet into tiny satellite droplets in in-line and cross-flow configurations has been a subject of research for the past several decades (see for instance \cite{taylor1963shape,Villermaux2009,Jain2015,dai2001temporal,pilch1987,chou1998temporal,krzeczkowski1980measurement}).
Previous experiments suggest that different types of breakup modes exist when a liquid droplet interacts with a high-speed air stream, namely, the vibration, bag, bag-stamen, dual-bag, multi-mode, shear and catastrophic breakup modes \cite{dai2001temporal,guildenbecher2009,cao2007,Suryaprakash2019}, owing to different mechanisms and flow characteristics. The Weber number, which measures the relative importance of the inertial force over the surface tension force, is mainly used to characterise the different breakup modes. The Weber number is given by $We \equiv {\rho_a U^2 D / \sigma}$. The other dimensionless numbers which have been used to study the droplet breakup phenomenon are the Reynolds number, $Re \equiv {\rho_l U D / \mu_l}$ (inertia force/viscous force), the E\"{o}tv\"{o}s number, $Eo \equiv {(\rho_l-\rho_a) g D^{2} / \sigma}$ (gravitational force/surface tension force), and the Ohnesorge number, $Oh \equiv {\mu_{l} / \sqrt{\rho_l \sigma D}}$ (viscous force relative to the inertial and surface tension forces), wherein $\sigma$ is the interfacial tension, $g$ is the acceleration due to gravity, $D$ is the diameter of the droplet, $\rho_a$ and $\rho_l$ are the densities of the air and liquid, respectively, and $U$ is the average velocity of the air stream.
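As a minimal numerical illustration of these definitions (a sketch, not part of the original analysis), the snippet below evaluates $We$, $Eo$ and $Oh$ for a representative water droplet; the diameter and air velocity are taken from the parameter tables reported later in this paper, while the air density of $1.16$ kg/m$^3$ at $30^{\circ}$C is an assumed textbook value.
\begin{verbatim}
import math

rho_l, rho_a = 998.0, 1.16      # liquid density / assumed air density (kg/m^3)
sigma = 72e-3                   # surface tension of water (N/m)
mu_l = 0.78e-3                  # liquid dynamic viscosity (Pa s)
D, U, g = 3.03e-3, 12.33, 9.81  # droplet diameter (m), air velocity (m/s), gravity (m/s^2)

We = rho_a * U**2 * D / sigma               # aerodynamic vs. surface tension force
Eo = (rho_l - rho_a) * g * D**2 / sigma     # gravitational vs. surface tension force
Oh = mu_l / math.sqrt(rho_l * sigma * D)    # viscous vs. inertial-capillary scale

print(f"We = {We:.2f}, Eo = {Eo:.2f}, Oh = {Oh:.4f}")
\end{verbatim}
With these inputs $Eo \approx 1.25$ and $We \approx 7.4$; the slight difference from the tabulated $We$ values simply reflects the assumed air density.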
At low Weber numbers, a droplet undergoes shape oscillations at a certain frequency (vibrational regime). As the Weber number is slowly increased by increasing the aerodynamic force while keeping the surface tension force constant, the droplet exhibits a transition from the vibrational mode to the bag breakup mode. The value of the Weber number at which this transition occurs (i.e. the Weber number at which the droplet just starts to form a bag) is termed the critical Weber number ($We_{cr}$). Taylor \cite{taylor1963shape} was the first to predict that a liquid droplet undergoes breakup above a critical Weber number. Since then, many researchers \cite{Jain2015,dai2001temporal,pilch1987,chou1998temporal,krzeczkowski1980measurement} have reported different values of the critical Weber number in different flow configurations. For the cross-flow configuration, Table \ref{tab2} presents the range of the Weber numbers associated with the bag breakup region obtained by previous researchers. The discrepancies in obtaining the critical Weber number can be attributed to the definition of the breakup regime and the different experimental facilities (shock tubes, wind tunnels, convergent nozzles, etc.) used in their investigations. Moreover, the fragmentation process, due to its inherent nature, is highly sensitive to the flow conditions. Among all the fragmentation regimes described above, the bag breakup regime has the widest range of applications since it occurs at relatively low Weber numbers (see for instance, Refs. \cite{dai2001temporal,guildenbecher2009,Jain2015,kulkarni2014,flock2012,cao2007,Villermaux2009}). The bag breakup occurs at the beginning of the secondary atomization process and shares similar characteristics with the bag-stamen, the multi-bag and the shear-stripping breakup modes \cite{Jain2015}. Hanson {\it et al.} \cite{hanson1963} and Wierzba \cite{wierzba1990deformation} demonstrated that the critical Weber number decreases as the droplet size increases in the cross-flow configuration. Their results are summarised in Table \ref{tab3}.
\begin{table}
\caption{The Weber number range for the bag type breakup.}
\label{tab2}
\centering
\begin{tabular}{|c|c|}
\hline
References & Weber number range \\ \hline
Pilch and Erdman \cite{pilch1987} & $12 < We \leq 50$ \\ \hline
Guildenbecher {\it et al.} \cite{guildenbecher2009} & $11 < We < 35$ \\ \hline
Krzeczkowski \cite{krzeczkowski1980} & $10 < We <18$ \\ \hline
Jain {\it et al.} \cite{Jain2015} & $12 < We < 24$ \\ \hline
Hsiang and Faeth \cite{hsiang1993} & $We \leq 11 \pm 2$ \\ \hline
Wierzba \cite{wierzba1990deformation} & $13.7 < We <14.07$ \\ \hline
Kulkarni and Sojka \cite{kulkarni2014} & $12 < We <16$ \\ \hline
Wang {\it et al.} \cite{wang2014} & $10 < We < 35$ \\ \hline
\end{tabular}
\end{table}
Three types of approaches have been used to study the droplet breakup phenomena: (i) the shock tube approach \cite{hsiang1993,dai2001,chou1998,krzeczkowski1980} (ii) the continuous air jet method \cite{zhao2016,Jain2015,kulkarni2014,flock2012,cao2007} and (iii) the droplet tower approach \cite{Villermaux2009}. In the present study, we implement a continuous air jet method. The main components of this method are an air nozzle that can produce a top-hat velocity profile and a compressed air measuring and controlling device. The results obtained using the first and the second approaches can be made similar by ensuring that the droplet interacts with the continuous air stream. This is possible when the time taken by the droplet to cross the shear boundary layer is much smaller than the residence time over which the droplet deforms and breaks \cite{guildenbecher2009}.
\begin{table}[H]
\caption{The critical Weber number for different droplet sizes reported by Hanson {\it et al.} \cite{hanson1963} and Wierzba \cite{wierzba1990deformation}.}
\label{tab3}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
References & Fluids & $D$ (mm) & $We_{cr}$ \\ \hline
\multirow{19}{*}{} & \multirow{4}{*}{} & 0.426 & 6.62 \\ \cline{3-4}
& & 0.285 & 7.18 \\ \cline{3-4}
& Silicone oil (10 cSt) & 0.180 & 7.93 \\ \cline{3-4}
& & 0.117 & 7.94 \\ \cline{2-4}
& \multirow{4}{*}{} & 0.531 & 10.4 \\ \cline{3-4}
& & 0.273 & 11.8 \\ \cline{3-4}
& Silicone oil (50 cSt) & 0.210 & 13.5 \\ \cline{3-4}
& & 0.126 & 15.4 \\ \cline{2-4}
& \multirow{6}{*}{} & 0.540 & 13.1 \\ \cline{3-4}
Hanson {\it et al.} \cite{hanson1963} & & 0.338 & 14.3 \\ \cline{3-4}
& & 0.239 & 15.5 \\ \cline{3-4}
& & 0.25 & 16.2 \\ \cline{3-4}
& Silicone oil (100 cSt) & 0.185 & 22.6 \\ \cline{3-4}
& & 0.150 & 23.8 \\ \cline{2-4}
& \multirow{3}{*}{} & 0.391 &4.79 \\ \cline{3-4}
& & 0.180 & 6.37 \\ \cline{3-4}
& Water & 0.0945 & 7.14 \\ \cline{2-4}
& \multirow{2}{*}{} & 0.471 & 6.76 \\ \cline{3-4}
& Methyl alcohol & 0.186 & 7.45 \\ \hline
Wierzba \cite{wierzba1990deformation}& Water & 2.22-3.9 & 13.7 to 14.07 \\ \hline
\end{tabular}
\end{table}
As the above review indicates, all the previous studies considered small droplets either in the cross-flow (e.g. \cite{wierzba1990deformation,hanson1963}) or in the in-line (e.g. \cite{Inamura2009,Villermaux2009}) configurations. However, in many situations, such as swirling spray applications and during rain, droplets may interact with the continuous air stream at an angle. In such situations, apart from the aerodynamic, the viscous and the surface tension forces, the gravitational force also influences the droplet breakup morphology when the droplet size is comparable to or larger than the capillary length scale, given by $l_c = \sqrt{\sigma/[(\rho_l - \rho_a) g]}$ \cite{brenner1993}. When $D \ge l_c$, the surface tension force will not be able to hold the drop in a spherical shape and the droplet will deform due to the effect of gravity, thereby changing the critical force requirement for the different types of breakup. To the best of our knowledge, such a study has not been conducted yet.
In the present work, we experimentally investigate the deformation and breakup of a droplet interacting with an air stream flowing at an angle $\alpha$ to the horizontal direction. A high-speed imaging system is employed to record the trajectory and topological changes of the droplet, and the effect of the obliquity of the air stream on the droplet breakup process has been investigated. The droplet size, the orientation of the air nozzle and the fluid properties (surface tension and viscosity) are varied to study the different breakup modes. The critical Weber number has been obtained for different values of the E\"{o}tv\"{o}s number $(Eo)$, the angle of inclination of the air stream $(\alpha)$ and the Ohnesorge number $(Oh)$. It is found that, although the droplet exhibits rectilinear motion during its entry into the oblique air stream and prior to the formation of the bag breakup mode, it follows a curvilinear path at later times, which indicates the variation of the forces acting on the droplet. The apparent acceleration experienced by the droplet and its size are linked with the variation in the critical Weber number requirement for the bag breakup mode. The departure from the cross-flow arrangement shows a sharp decrease in the critical Weber number for the bag breakup, with asymptotic behaviour at higher oblique angles. At high obliquity, the critical Weber number approaches a value of 6, as reported in the literature (see for instance, Ref. \cite{Villermaux2009}) for the in-line (opposed) flow configuration of droplet breakup. It is also noted that in the present study, an effort has been made to give a detailed description of the experimental procedure, the precautions taken during the experiments, and the uncertainty analysis that confirms the repeatability of the experimental results.
The rest of the paper is organized as follows. A brief description of the mathematical analysis is given in Section \ref{sec:math}, the experimental procedure is discussed in Section \ref{sec:expt}, which is followed by the discussion of our experimental results in Section \ref{sec:dis}. Finally, concluding remarks are given in Section \ref{sec:conc}.
\section{Mathematical formulation}
\label{sec:math}
A schematic diagram showing a liquid droplet freely falling under the action of gravity and subjected to an air stream at an angle $\alpha$ to the horizontal is shown in Fig. \ref{fig1a}. In the cross-flow configuration, $\alpha=0$, such that the air stream flows in the horizontal $(X)$ direction and the droplet falls in the vertical $(Y)$ direction. For any oblique angle $\alpha$ of the air stream, the various forces acting on the droplet are the gravity force $(F_g = {\pi \over 6} D^{3}\rho_l g)$, the buoyant force $(F_B = {\pi \over 6} {D}^{3}\rho_a g)$, the inertia force $(F_i = \frac{\pi}{6} {D}^{3}\rho_l \frac{d \textbf{u}_d}{dt})$ and the drag force ($F_D = {1 \over 2}C_d \rho_a U^2 A$), such that $F_i= F_g - F_B + F_D$. Here $\textbf{u}_d$ is the velocity of the droplet, $A$ is the projected area of the deformed droplet, $C_d$ is the drag coefficient and $t$ is time.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Figure1.pdf}
\caption{Schematic diagram showing a liquid droplet freely falling under the action of gravity and subjected to an air stream at an angle $\alpha$ to the horizontal.}
\label{fig1a}
\end{figure}
In the absence of the air stream, balancing the net weight of the droplet, $F_g - F_B$, with the surface tension force, $F_s = \pi D \sigma$, one obtains the following expression for the maximum droplet size, $D_{c}$, for which the surface tension dominates the other forces acting on the droplet and the droplet maintains its sphericity.
\begin{equation}
D_c= \sqrt{\frac{6\sigma}{(\rho_l-\rho_a) g}}.
\end{equation}
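As an illustrative estimate, for a water droplet in air at $30^{\circ}$C ($\sigma \approx 72$ mN/m and $\rho_l - \rho_a \approx 997$ kg/m$^3$, representative values), $D_c \approx \sqrt{6 \times 0.072/(997 \times 9.81)} \approx 6.6$ mm, which is comparable to the largest droplet diameters listed in Table~\ref{tab:4}.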
In the case of an oblique air stream, the droplet follows a curvilinear motion and experiences an apparent weight due to its acceleration. In this case, the resultant acceleration acting on the droplet ($\widehat{a}$) is given by
\begin{eqnarray}
\widehat{a} = \frac{F_i}{ \frac{\pi}{6} D^{3} \rho_l} = {g}+\frac{{F_D}}{ \frac{\pi}{6} D^{3} \rho_l} = \underbrace{\left [{g}+\frac{{F_D} \sin{\alpha}}{ \frac{\pi}{6} D^{3} \rho_l} \right ]}_{\rm {Vertical ~ component}} +\underbrace{\left [ \frac{{F_D} \cos{\alpha}}{ \frac{\pi}{6} D^{3} \rho_l} \right ]}_{\rm Horizontal ~ component}
\label{eq:neta}
\end{eqnarray}
The net resultant acceleration has two components: the vertical component, $\left ({g}+\frac{{F_D} \sin{\alpha}}{ \frac{\pi}{6} D^{3} \rho_l} \right )$, and the horizontal component, $\left ( \frac{{F_D} \cos{\alpha}}{ \frac{\pi}{6} D^{3} \rho_l} \right )$. Thus, for the cross-flow condition $(\alpha=0^\circ)$, the vertical and the horizontal components of the acceleration are ${g}$ and $\left ( \frac{{F_D} }{ \frac{\pi}{6} D^{3} \rho_l} \right )$, respectively. For the in-line (counter) flow configuration ($\alpha =90^\circ$), the component of acceleration in the vertical direction is $\left( {g} + \frac{{F_D} }{ \frac{\pi}{6} D^{3} \rho_l} \right)$ and the droplet does not encounter any acceleration in the horizontal direction.
Using the apparent weight of the droplet, the modified critical droplet diameter for the droplet to maintain its sphericity can be obtained as
\begin{equation}
\widehat{D_c} = \sqrt{\frac{6\sigma}{(\rho_l-\rho_a) \widehat{a}}}.
\end{equation}
Thus for a droplet with $D>\widehat{D_c}$, the surface tension is weaker than the external force and the droplet deforms from its spherical shape.
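The sketch below (illustrative only) evaluates the apparent acceleration and the corresponding modified critical diameter for one of the water-droplet cases considered later; the drag coefficient $C_d = 0.5$ and the air density are assumed values not taken from this paper, and the projected area is approximated by that of an undeformed spherical droplet.
\begin{verbatim}
import math

rho_l, rho_a, g = 998.0, 1.16, 9.81   # liquid density, assumed air density, gravity
sigma = 72e-3                          # surface tension of water (N/m)
D, U = 4.23e-3, 9.82                   # droplet diameter (m), air velocity (m/s)
alpha = math.radians(60.0)             # oblique angle of the air stream
C_d = 0.5                              # assumed drag coefficient

A = math.pi * D**2 / 4.0               # projected area of a spherical droplet
mass = math.pi / 6.0 * D**3 * rho_l    # droplet mass
F_D = 0.5 * C_d * rho_a * U**2 * A     # drag force

a_vert = g + F_D * math.sin(alpha) / mass   # vertical component
a_horz = F_D * math.cos(alpha) / mass       # horizontal component
a_hat = math.hypot(a_vert, a_horz)          # resultant magnitude (root sum square)

D_c_hat = math.sqrt(6.0 * sigma / ((rho_l - rho_a) * a_hat))
print(f"a_hat = {a_hat:.1f} m/s^2, modified D_c = {1e3 * D_c_hat:.1f} mm")
\end{verbatim}
For these illustrative inputs the resultant acceleration is roughly twice $g$ and the modified critical diameter falls below 5 mm, i.e. well below the quiescent estimate $D_c$.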
We now turn our attention to the time scales used in previous studies. Different researchers have used different time scales to non-dimensionalise their results \cite{krzeczkowski1980,jalaal2012fragmentation}. Most of the researchers non-dimensionalised the breakup time with the transport time (a ratio of droplet diameter to its velocity) as suggested by Nicholls and Ranger \cite{nicholls1969}. Such a nondimensionalisation provides an opposite trend as compared to the actually observed breakup time in the case of oblique flow configurations. Considering that the droplet breakup process is dictated by the forces acting on the droplet, we propose and use the time scale $t_s \equiv \rho_a U/[(\rho_l - \rho_a) g]$ to nondimensionalise time in the present study.
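As an illustrative estimate (with an assumed air density of $\rho_a \approx 1.2$ kg/m$^3$), for a water droplet in an air stream with $U \approx 12$ m/s, $t_s \approx 1.2 \times 12/(997 \times 9.81) \approx 1.5$ ms, so a dimensionless time of $t/t_s \approx 20$ corresponds to roughly 30 ms of physical time.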
\section{Experimental set-up and procedure}
\label{sec:expt}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Figure2.pdf}
\caption{Schematic of the experimental set-up. It consists of a high-speed camera, a nozzle, a diffuser sheet, a light source and a dispensing needle. The angle of inclination of the air nozzle to the horizontal is denoted by $\alpha$. $H$ ($\approx 40$ mm) is the height of the needle tip from the centre of the nozzle. Gravity, $g$ acts in the negative $Y$ direction. The orientation of the nozzle and the deflection of the nozzle is shown in $X-Y$ plane. }
\label{fig1}
\end{figure}
Figure~\ref{fig1} depicts the schematic of the experimental set-up used in the present investigation. A metallic circular nozzle has been used to create a continuous and uniform air jet flow. This nozzle is hinged in such a way that the orientation of the nozzle can be fixed at any angle, $\alpha$, to the horizontal. A set of honeycomb pallets is placed near the upstream end of the nozzle to straighten the flow and to minimise disturbances. The internal section of the nozzle is designed to obtain a top-hat velocity profile at the nozzle exit, which has been verified with the aid of pitot tube and differential manometer head measurements. The internal and external diameters of the pitot tube are 1.5 mm and 4 mm, respectively. The horizontal distance between the nozzle and the pitot tube is kept within 10 mm ($< 0.5$ diameter of the nozzle). For every experiment, it is ensured that the droplet enters the continuous velocity profile of the air, i.e. the potential jet core in the immediate downstream region of the nozzle exit, before undergoing deformation and breakup. The air flow rate has been measured and controlled using an accurate and reliable ALICAT digital mass flow controller (range 0 to 2000 litres per minute) which is installed before the air nozzle.
For generating liquid droplets of the same size, a regular infusion pump (SP-810 from Medical Point) is used. The drop size is varied in two ways: (i) by changing the mass flow rate setting in the infusion pump and (ii) by using spouts with different exit diameters. The droplet generator and the circular nozzle are arranged in the cross-flow configuration ($\alpha=0^{\circ}$). The liquid droplet drips from the tip of the spout under the action of gravity and enters the air stream.
A high-speed camera (Phantom VEO 710) with a 50 mm Nikon lens is used to capture the droplet deformation and breakup phenomena occurring near the nozzle. The images are recorded at 15000 frames per second (fps) with an exposure time of 10 $\mu$s for all the experiments conducted in the present study. The camera is connected to a computer and is operated via Phantom Camera Control (PCC) 3.1 software. The camera has a built-in memory card that allows for direct recording of the images. A diffused back-lit illumination (Light Emitting Diode (LED) based lighting, model 900445, 12000 lm, Visual Instrumentation Corporation) shadowgraph technique has been used to illuminate the background for droplet identification. In this technique, a light source is placed behind a diffusion screen to diffuse the backlight into uniform light.
The recorded images are analysed using the Matlab software. The geometric size of the droplet has been calculated by image processing. The droplet deforms while suspended at the syringe spout and no longer remains perfectly spherical; hence, the equivalent initial droplet diameter has been calculated as \cite{rioboo2002}
\begin{equation}
D = (D^2_a \times D_b)^\frac{1}{3},
\end{equation}
where $D_a$ and $D_b$ are the droplet diameters corresponding to the major and minor axes of the slightly deformed droplet.
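For instance, a slightly oblate pendant droplet with hypothetical axes $D_a = 4.3$ mm and $D_b = 4.1$ mm has an equivalent diameter $D = (4.3^2 \times 4.1)^{1/3} \approx 4.23$ mm.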
The droplet velocity is found to be insignificant as compared to the air velocity, which allows us to use the absolute velocity of the air stream instead of the relative velocity in our calculations. Note that a free jet is employed to investigate the droplet deformation and breakup. The uniform flow requirement for the droplet breakup study is ensured by the droplet entering the potential core region immediately downstream of the nozzle exit. For oblique air stream angles, if sufficient care is not taken, the droplet deformation and breakup process will occur in the shear layer instead of the jet potential core region. This will result in a higher critical Weber number as compared to the actual value. Thus, in the present study, sufficient care has been taken to ensure that the droplet enters the potential core region of the air stream.
In our experiments, droplets of uniform size are created with the help of a syringe pump. For calculating the dimensionless numbers, the standard properties of the fluids are taken at $30^{\circ}$C. To see the effect of surface tension and viscosity on the droplet breakup phenomenon, four different fluids, namely, deionised water and 50 \%, 70 \% and 80 \% aqueous mixtures of glycerine (by volume), have been considered. The viscosities of the fluids are measured using a rotational rheometer (Model: RheolabQC). The density and surface tension of the fluids are taken from Ref. \cite{fp}. The density (kg/m$^3$), the surface tension (mN/m) and the dynamic viscosity (mPa$\cdot$s) of the working fluids are given in Table~\ref{tab:4}.
\begin{table}[ht]
\caption{Physical properties of the working fluids.}
\label{tab:4}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Working fluid & Dynamic viscosity & Surface tension & Density & Droplet diameter \\
& $\mu_l$ (mPa$\cdot$s) & $\sigma$ (mN/m) & $\rho_l$ (kg/m$^3$) & $D$ (mm) \\
\hline
Water & 0.78 & 72 & 998 & 2.0-6.3 \\ \hline
Glycerine (50\%) & 4.85 & 67.5& 1127 & 2.5 -5.75 \\ \hline
Glycerine (70\%) & 17.48 & 65.8 & 1182 & 2.8-5.75 \\ \hline
Glycerine (80\%) & 43.08 & 64.8 & 1209 & 3-5.6 \\ \hline
\end{tabular}
\end{table}
For the current experiments, the nozzle orientation is varied from $\alpha=0^{\circ}$ to $\alpha=60^{\circ}$ in intervals of $10^{\circ}$ in order to investigate the droplet deformation and breakup.
The uncertainties in the calculation of the field variables, like the droplet diameter $(D)$, the air phase velocity and the dimensionless numbers, are listed in Table~\ref{tab:5}. The maximum uncertainty propagation rule has been applied to calculate the uncertainty. According to this rule, if $p$, $q$, $r$, $...$ are the field variables having uncertainties $U_p$, $U_q$, $U_r$, $...$ in the measurements, the uncertainty in $Q = Q(p,q,r,...)$ can be calculated as follows
\begin{equation}
\delta Q = \sqrt{ \left ( \frac{\partial Q}{\partial p} \cdot U_{p}\right )^2 + \left ( \frac{\partial Q}{\partial q} \cdot U_{q}\right )^2 + \left ( \frac{\partial Q}{\partial r} \cdot U_{r}\right )^2 + \cdots} \quad . \label{uncer}
\end{equation}
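As a minimal sketch of how Eq.~(\ref{uncer}) is applied, the snippet below propagates the air-velocity and droplet-diameter uncertainties of Table~\ref{tab:5} into the Weber number; the uncertainties in the fluid properties are neglected and the air density is an assumed value, so the result (about $0.25$ for the chosen inputs) is only indicative of the order of the figure quoted in the table.
\begin{verbatim}
import math

def we_uncertainty(rho_a, U, D, sigma, dU, dD):
    """Root-sum-square propagation of dU and dD into We = rho_a U^2 D / sigma."""
    dWe_dU = 2.0 * rho_a * U * D / sigma   # partial derivative with respect to U
    dWe_dD = rho_a * U**2 / sigma          # partial derivative with respect to D
    return math.sqrt((dWe_dU * dU)**2 + (dWe_dD * dD)**2)

# Illustrative inputs: water droplet, D = 3.03 mm, in a 12.33 m/s air stream;
# dU and dD as in the uncertainty table; rho_a is an assumed air density.
print(we_uncertainty(rho_a=1.16, U=12.33, D=3.03e-3, sigma=72e-3,
                     dU=6e-3, dD=1e-4))
\end{verbatim}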
\begin{table}[ht]
\caption{Uncertainty in the calculations of different variables in the present study.}
\label{tab:5}
\centering
\begin{tabular}{|c|c|}
\hline
Variable & Maximum uncertainty \\ \hline
$\alpha$ (degree) & $5 \times 10^{-1}$ \\ \hline
Air velocity (m/s) & $6 \times 10^{-3}$ \\ \hline
Weber number ($We$) & $3.3 \times 10^{-1}$ \\ \hline
E\"{o}tv\"{o}s number ($Eo$) & $8 \times 10^{-2}$ \\ \hline
Nozzle diameter (m) & $5 \times 10^{-4}$ \\ \hline
Droplet diameter (m) & $ 10^{-4}$ \\ \hline
\end{tabular}
\end{table}
\section{Results and discussion}
\label{sec:dis}
We begin the presentation of our results by demonstrating the topological changes of a freely falling water droplet under the action of gravity when subjected to an air stream flowing at an angle $\alpha=60^\circ$ to the horizontal. The dynamics is compared against the cross-flow configuration $(\alpha=0^\circ)$, which has been previously studied by several researchers (see for instance, Refs. \cite{guildenbecher2009,Jain2015,kulkarni2014,flock2012,cao2007}). Figs. \ref{fig2a} and \ref{fig2b} present the temporal variations of a droplet undergoing the vibrational, the transitional and the bag breakup modes for $\alpha=0^\circ$ and $60^\circ$, respectively. By varying the droplet size and air-stream velocity, $We$ and $Eo$ are varied to demonstrate the dynamics for $\alpha=0^\circ$ and $60^\circ$. The dimensionless time, $\tau \left( \equiv (\rho_l-\rho_a) g t / (\rho_a U) \right)$, is used to show the temporal evolutions of the droplet, such that $\tau=0$ represents the dimensionless time when the droplet just enters the uniform air-stream.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{Figure3.pdf}
\caption{The breakup dynamics of a water droplet in the cross-flow condition ($\alpha=0^\circ$) for $Eo \approx 2.4$ demonstrating the vibrational ($We=7.87$), the transitional ($We=8.27$) and the bag breakup ($We=9.13$) modes. The dimensionless times, $\tau$, are shown at the top of each panel, such that $\tau=0$ represents the time when the droplet just enters the uniform air-stream.}
\label{fig2a}
\end{figure}
For $\alpha=0^\circ$ and $Eo \approx 2.4$, it can be seen in the first row of Fig. \ref{fig2a} that (for $We=7.87$) the aerodynamic force is not enough to overcome the effect of the surface tension and viscous force, and the droplet undergoes shape oscillations (the vibrational regime). Due to the aerodynamic force, the amplitude of the oscillations continues to increase and the droplet decomposes into droplets of comparable size (not shown). In this case, the disturbance created by the aerodynamic force is not necessarily sufficient for the fragmentation process. The deformation time scale of the droplet undergoing the vibrational mode is also large compared to that of the other modes. For a higher value of the Weber number ($We=8.27$), the droplet exhibits the transitional mode (second row of Fig. \ref{fig2a}) which is an intermediate stage between the vibrational mode and the bag breakup mode. As we increase the value of the Weber number further ($We=9.13$), keeping the rest of the parameters fixed, the droplet exhibits the bag breakup mode, as seen in the third row of Fig. \ref{fig2a}. The minimum value of the Weber number at which the droplet undergoes the bag breakup phenomenon is known as the critical Weber number, $We_{cr}$. It can be concluded that, for a fixed set of other parameters, the bag breakup mode can be achieved by increasing the air stream velocity alone.
The temporal evolutions for the vibrational, the transitional and the bag breakup modes for $\alpha=60^\circ$ and $Eo \approx 1.8$ are shown in Fig. \ref{fig2b} for $We=5.13$, 5.73 and 6.35, respectively. It can be seen that the phenomena are qualitatively similar to those observed in the cross-flow configuration. In all these cases, the droplet enters inside the air nozzle at $\tau \approx 10$, 10.38 and 11.56 and comes out at $\tau \approx 42.72$, 31.13 and 31.07 with oblate, disk and bag-type shapes for the vibrational, the transitional and the bag breakup modes, respectively. This confirms that even in the oblique configuration the droplet was interacting with the continuous air stream in the potential core region.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{Figure4.pdf}
\caption{The breakup dynamics of a water droplet in an oblique configuration ($\alpha=60^\circ$) for $Eo \approx 1.8$ demonstrating the vibrational ($We=5.13$), the transitional ($We=5.73$) and the bag breakup ($We=6.35$) modes. The dimensionless times, $\tau$, are shown at the top of each panel, such that $\tau=0$ represents the time when the droplet just enters the uniform air-stream. The inset in each panel of the last two columns represents a zoomed view of the droplet.}
\label{fig2b}
\end{figure}
Next, we investigate the effect of the air stream directionality on the droplet deformation and breakup phenomena. The drop size, the height from the nozzle to the needle tip and the air flow velocity are kept constant and only the angle of the air stream, $\alpha$, is varied. Figs. \ref{fig:1a}(a), (b) and (c) present the temporal evolutions of the droplet along with their trajectories for $\alpha=0^\circ$, $30^\circ$ and $60^\circ$, respectively, at $Eo=1.25$. Note that the Weber number can be calculated based on the relative velocity of the air stream with respect to the droplet or only the average velocity of the air stream from the nozzle. The parameters used to generate Fig. \ref{fig:1a} are given in Table~\ref{tab:1aa}. It can be seen that the droplet velocity, $u_d$ (calculated by the image processing method) is much smaller than the air stream velocity in all the cases. Two sizes of water droplet with $D=3.03$ mm and 4.23 mm are considered for which the values of the E\"{o}tv\"{o}s number are 1.25 and 2.44, respectively. It can be observed that the Weber number calculated based on the air stream velocity, $U$, is constant for each droplet size, but the Weber number calculated based on the relative velocity between the air stream and the droplet decreases slightly with the increase in the oblique angle of the air stream, $\alpha$. Thus, the Weber number, $We$, calculated based on the average air stream velocity is used for the rest of the results presented in this study. Note that in Fig. \ref{fig:1a}, only $\alpha$ is varied while the rest of the parameters remain fixed.
It can be seen in Fig. \ref{fig:1a}(a) for $\alpha=0^\circ$ that the droplet undergoes the vibrational mode. As we increase the angle of the air stream, the droplet exhibits a transitional mode for $\alpha=30^\circ$ (see Fig. \ref{fig:1a}(b)) and the bag breakup mode for $\alpha=60^\circ$ (see Fig. \ref{fig:1a}(c)). At $\tau$=0, the droplet is almost spherical for all the cases. In the vibrational mode, the non-uniform pressure distribution around the interface of the spherical drop initiates an internal flow within the drop, which in turn deforms the droplet to an oblate shape at $\tau \approx 6.44$ and a disk type shape at $\tau \approx 14.79$. In this case, the aerodynamic force is not enough to disintegrate the droplet into multiple pieces. Due to the competition between the aerodynamic force and the surface tension force, the droplet undergoes shape oscillations with a certain frequency. In some situations, these oscillations may lead to a breakup of the droplet into large ligament and satellite droplets of comparable sizes.
\begin{table}[H]
\caption{The parameters considered to generate Fig. \ref{fig:1a}. Here, $We_r$ corresponds the Weber number defined based on the relative velocity between the air stream and the droplet.}
\label{tab:1aa}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\alpha$ & $D$ (mm) & $U$ (m/s) & ${u}_d$ (m/s) & $Eo$ & $We$ & $We_r$ \\ \hline
$0^\circ$ & 3.03 & 12.33& 0.3 & 1.25 & 8.0 & 8.40 \\
& 4.23 & 9.82 & 0.3 & 2.44 & 7.1 & 7.54 \\ \hline
$30^\circ$ & 3.03 &12.33 & 0.3 & 1.25 & 8.0 & 8.34 \\
& 4.23 &9.82& 0.3 &2.44 & 7.1 & 7.48 \\ \hline
$60^\circ$ & 3.03 &12.33 & 0.3 &1.25 & 8.0 & 8.20 \\
& 4.23 &9.82 & 0.3 & 2.44 & 7.1& 7.32 \\ \hline
\end{tabular}
\end{table}
For the same operating condition, as the air stream angle is increased to $\alpha$=30$^\circ$, the droplet exhibits the transitional mode (Fig. \ref{fig:1a}(b)). In this case, as the droplet enters the air stream, the unequal distribution of the pressure changes the droplet shape from spherical to a disk-like shape at $\tau \approx 10.69$. At $\tau \approx 15.85$, a bag-like structure appears in the air flow direction, but due to insufficient aerodynamic force, the bag retracts back to a disk-like shape. Subsequently, at later times ($\tau > 20.53$), the droplet breaks into similar-sized droplets (not shown). The resultant satellite droplets exhibit vibrational modes.
At $\alpha=60^\circ$, the droplet clearly shows the bag-type breakup (Fig. \ref{fig:1a}(c)). In this case, when the spherical drop enters the region of the steady air stream, it is influenced by the distribution of air pressure around the drop. At an equilibrium condition, the internal and the surface tension forces are just balanced with the external aerodynamic force. However, when a steady air stream flows around a drop, the air velocity distribution and the air pressure distribution at any point on the drop surface are not uniform (see Fig. \ref{fig:1b}). The air velocity is maximum at point $ii$ and equal to zero at point $i$ (the stagnation point) in Fig. \ref{fig:1b}. Thus, by Bernoulli's equation, the air pressure becomes highest at point $i$, and lowest at point $ii$. In this situation, the external aerodynamic pressure causes the drop to deform to an oblate ellipsoid shape in the direction normal to the air flow. As the velocity at the equator of the drop increases, the Bernoulli pressure difference increases accordingly, which flattens the drop further. Finally, the drop takes a pancake-like shape (not shown as the droplet is inside the air nozzle at this instant). Due to the contraction of the drop, a wake region \cite{flock2012} is developed on the back side of the droplet, which facilitates the bag formation at $\tau \approx 19$. The bag carries more mass at the periphery due to the pressure distribution along the interface, which suppresses the instability there, while the centre of the bag grows parallel to the air stream direction. The hollow portion of the bag becomes thin, like a membrane, and finally ruptures after $\tau \approx 23.62$ for this set of parameters. Due to this rupture, many satellite droplets are generated. Then the toroidal ring expands in the radial direction and breaks into relatively bigger child droplets in the downstream direction (not shown).
Comparison of the trajectories of the droplet in Figs. \ref{fig:1a}(a), (b) and (c) also reveals that as the oblique angle of the air stream, $\alpha$, increases, the droplet follows a curvilinear motion with a sharp turn and the distance travelled by the droplet from the nozzle decreases. The change in the projected area of the droplet changes the drag coefficient. As the tangent to the path of the droplet gives its velocity, the acceleration, and thus the inertia force acting on the droplet, also changes in an oblique configuration. Thus, using such oblique configurations, one can design compact combustors with a small primary reaction zone.
In Fig. \ref{fig:1a}(d), the trajectories of the droplets exhibiting the vibrational, the transitional and the bag breakup modes for $Eo=1.25$ ($D=3.03$ mm, solid lines) and 2.44 ($D=4.23$ mm, dashed lines) are shown. For both values of $Eo$, at $\alpha=0^\circ$, the droplet enters the aerodynamic field, deforms into a disk-like shape and is dragged away in the direction of the air flow. The shape of the droplet for $\alpha=0^\circ$ is shown at $\tau$=25.59. At this time, the droplet is in the shear layer region outside the potential core region of the air stream. For higher values of $\alpha$, the droplet decelerates while migrating in the downward direction. When the droplet reaches the equilibrium stage, i.e., when the forces acting on the droplet in the downward direction are balanced by the forces in the upward direction, the droplet comes to a stationary position for a very short time and starts to move in the opposite direction. At this stage, the droplet deforms to a shape that facilitates its migration in the air flow direction. The droplet remains in contact with the strong aerodynamic force for a longer time as compared to the low $\alpha$ case (this residence time is proportional to the value of $\alpha$). This increases the probability of breaking the droplet even with a smaller air velocity. Therefore, based on the above discussion, it can be concluded that the critical Weber number is sensitive to the air flow direction, $\alpha$. Increasing $\alpha$ decreases the critical Weber number for the bag breakup. This observation can play a significant role in the atomization process where co- and counter-swirling secondary air streams are used to promote the secondary atomization process. The critical Weber number is also sensitive to the droplet size. It is observed that for water droplets of diameters 3.03 mm $(Eo=1.25)$ and 4.23 mm ($Eo=2.44$), the values of $We_{cr}$ are 8 and 7.1, respectively.
\begin{figure}[H]
\centering
\hspace{0.5cm} {\large (a)} \hspace{7.4cm} {\large (b)}\\
\includegraphics[width=0.45\textwidth]{Figure5a.pdf} \includegraphics[width=0.45\textwidth]{Figure5b.pdf} \\
\vspace{2mm}
\hspace{0.5cm} {\large (c)} \hspace{7.4cm} {\large (d)}\\
\includegraphics[width=0.45\textwidth]{Figure5c.pdf}
\includegraphics[width=0.45\textwidth]{Figure5d.pdf}
\caption{Trajectory and the shape of the droplet at different times for $Eo=1.25$ that corresponds to $D=3.03$ mm for different values of $\alpha$. (a) $\alpha=0^\circ$, (b) $\alpha=30^\circ$ (c) $\alpha=60^\circ$. The panel (d) combines the trajectories for $Eo=1.25$ ($D=3.03$ mm) and $Eo=2.44$ ($D=4.23$ mm) at different oblique angles of the air stream. The numbers in each panel show the instants at which the droplet shapes are presented.}
\label{fig:1a}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.38\textwidth]{Figure6.pdf}
\caption{Pressure distribution on the droplet subjected to air stream.}
\label{fig:1b}
\end{figure}
\begin{figure}[H]
\centering
\hspace{0.5cm} {\large (a)} \hspace{7.8cm} {\large (b)}\\
\includegraphics[width=0.45\textwidth]{Figure7a.pdf} \hspace{2mm} \includegraphics[width=0.45\textwidth]{Figure7b.pdf} \\
\hspace{0.5cm} {\large (c)} \hspace{7.8cm} {\large (d)}\\
\includegraphics[width=0.45\textwidth]{Figure7c.pdf} \hspace{2mm} \includegraphics[width=0.45\textwidth]{Figure7d.pdf} \\
\hspace{0.5cm} {\large (e)} \hspace{7.8cm} {\large (f)}\\
\includegraphics[width=0.45\textwidth]{Figure7e.pdf} \hspace{2mm} \includegraphics[width=0.45\textwidth]{Figure7f.pdf} \\
\caption{Trajectories of a water droplet ($Oh \approx 0.0015$) exhibiting (a,c,e) the vibrational and (b,d,f) the bag breakup modes for (a,b) $\alpha=0^\circ$, (c,d) $\alpha=30^\circ$ and (e,f) $\alpha=60^\circ$. }
\label{fig8}
\end{figure}
Next we investigate the path followed by the droplet subjected to an oblique air flow for different sets of $Eo$ and $We$, as it plays an important role in combustor design and optimisation. Figs. \ref{fig8}(a,b), (c,d) and (e,f) correspond to $\alpha=0^\circ$ (cross-flow configuration), $\alpha=30^\circ$ and $\alpha=60^\circ$, respectively. Figs. \ref{fig8} (a,c,e) and (b,d,f) show the trajectories for the droplet exhibiting the vibrational mode and the bag breakup, respectively. Here, $Y=0$ represents the centre line of the nozzle. The trajectories are plotted before fragmentation of the droplet. It can be seen that the droplet travels a shorter distance from the nozzle and undergoes a sharper turn in the bag breakup cases as compared to the vibrational cases. It is observed that for all $\alpha$ values considered, when the droplet velocity vector (tangent to the trajectory) aligns closer to the air flow direction, the drag force exerted on the droplet is larger, which increases the stretching of the droplet; the surface tension force can then no longer keep the droplet intact and it disintegrates into smaller droplets. Increasing the droplet size (increasing $Eo$) increases the curvature of the droplet trajectory. It is to be noted here that the value of $We$ considered is close to the critical Weber number for each case. {However, the larger droplets, although smaller than the critical diameter based on the capillary length scale, break up even when the trajectory of the droplet is not aligned with the air stream direction. This indicates that the gravity force, by aiding the droplet distortion, increases the drag sufficiently to overcome the cohesive force that keeps the droplet intact.}
\begin{figure}[H]
\centering
\hspace{1cm} {\large (a) $\alpha=0^\circ$} \hspace{3cm} {\large (b) $\alpha=30^\circ$} \hspace{3cm} {\large (c) $\alpha=60^\circ$}\\
\includegraphics[width=0.3\textwidth]{Figure8a.pdf}
\includegraphics[width=0.31\textwidth]{Figure8b.pdf}
\includegraphics[width=0.31\textwidth]{Figure8c.pdf}
\caption{Breakup of a water droplet ($Oh\approx0.0015$) for (a) $\alpha=0^\circ$, (b) $\alpha=30^\circ$ and (c) $\alpha=60^\circ$. For each angle, the left-side images correspond to the drop shape just before reaching the air stream, and the right images show the same droplet at the value of $\tau$ (written below the images) just before the onset of the bag breakup.}
\label{fig11a}
\end{figure}
Figs. \ref{fig11a}(a), (b) and (c) show instantaneous snapshots of droplets of different sizes just before entering the air stream (panel (i); $\tau=0$) and just before the onset of bag breakup (panel (ii); the value of $\tau$ is written below the droplets) for $\alpha=0^{\circ}$, $30^{\circ}$ and $60^{\circ}$, respectively. The value of $\tau$ mentioned in panel (ii) for each $\alpha$ essentially represents the time required to complete the bag formation for the different droplet sizes at their critical Weber number, $We_{cr}$. It can be observed that, for each value of $\alpha$, increasing the droplet size increases the dimensionless breakup time. Increasing $Eo$ (increasing the droplet size) decreases the value of $We_{cr}$ and increases the breakup time, because the critical air velocity required for the larger droplets is lower. It can also be seen that with the increase in the obliquity angle of the air nozzle, the dimensionless breakup time, $\tau$, increases significantly. This can be attributed to the deceleration of the droplet prior to aligning closely with the air jet direction. Interestingly, the droplet breakup occurs much closer to the air nozzle for a high obliquity angle ($\alpha=60^\circ$), with a significantly larger residence time. This is an important requirement from the design perspective of swirl-stabilised combustors, as the flame would anchor much closer to the swirler, warranting an accelerating (or converging) swirl flow to increase the flame standoff distance. Such a practice has been adopted in previous studies (see Refs. \cite{merkle2003effect,gupta2001swirl}). For $\alpha=0^\circ$ (cross-flow configuration), the path traveled by the droplet is longer than at any other obliquity angle considered. Note that although the droplet travels a greater distance from the nozzle in the $\alpha=0^\circ$ case than in the oblique cases, its dimensionless breakup time is lower, which can be attributed to the acceleration of the droplet in the cross-flow condition; unlike in the oblique cases, the droplet does not decelerate there because its motion is never directed against gravity.
In order to demonstrate that small droplets require less time to disintegrate than bigger droplets at their respective values of the critical Weber number, in Fig. \ref{fig11b} we present the shapes of droplets of different sizes at two instants, $t=0$ and $t=23.67$ ms, in the cross-flow configuration ($\alpha=0^\circ$). It can be seen that the droplets are almost spherical when they enter the air stream. We observed that the small droplets (with $D=2.62$ mm and 2.76 mm) at their critical Weber numbers completely disintegrate, the intermediate size droplets ($D=3.37$ mm and 4.10 mm) form bags, and the bigger droplet ($D=5.91$ mm) appears as a disk at $t=23.67$ ms. Although the bigger droplet breaks at a smaller Weber number, the time required for the breakup increases with the droplet size. As the velocity requirement is lower in this case, the rate of energy transfer from the air stream to the droplet is reduced, resulting in a larger residence time to overcome the effects of the surface tension and viscous forces.
\begin{figure}[H]
\centering
\includegraphics[width=0.25\textwidth]{Figure9.pdf}
\caption{Breakup of a water droplet ($Oh\approx0.0015$) at $\alpha=0^\circ$ (cross-flow configuration). The left-hand images correspond to the droplet shape just before entering the air stream, and the right-hand images show the same droplets at $t=23.67$ ms.}
\label{fig11b}
\end{figure}
Next, we investigate the effect of the angle of the air stream, $\alpha$, on the critical Weber number, $We_{cr}$. In Fig. \ref{fig6}, we present the region maps showing the vibrational and the bag breakup modes in the $We-Eo$ plane for different values of $\alpha$. Here, $Eo$ is varied by changing the droplet size in our experiments. In each panel, the red dashed line with filled square symbols depicts the boundary between the vibrational (open circle symbols) and the bag breakup (open square symbols) regimes. For $\alpha=0^\circ$ (cross-flow configuration), at $Eo=1.1$, the value of $We_{cr}$ is approximately 14, which agrees with the findings of previous studies \cite{wierzba1990deformation,hanson1963}. Increasing the value of $Eo$ initially decreases the value of $We_{cr}$ sharply, and then at a slower rate. For high values of $Eo$, $D \ge l_c$. Thus the gravitational force starts to dominate the surface tension force, and the droplet deforms from its initially spherical shape. This, in turn, increases the drag force exerted on the droplet due to the increase in its surface area, which helps the droplet to deform further \cite{jalaal2012fragmentation}. Thus, the critical Weber number decreases with the increase in $Eo$ (or the size of the droplet). Note that in the cross-flow condition ($\alpha=0^\circ$), air flows in the horizontal direction and the component of the net acceleration in the vertical direction remains $g$. Hence the reported E\"{o}tv\"{o}s number is calculated based on the gravitational acceleration component only.
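For concreteness, the dimensionless groups used above can be illustrated numerically. The short sketch below assumes nominal room-temperature properties of water and air and the conventional definitions $Eo=\rho_w g D^2/\sigma$ and $We=\rho_a U^2 D/\sigma$; these property values and definitions are assumptions made for illustration rather than the calibrated experimental inputs, although they do reproduce the quoted values, e.g. $Eo\approx1.25$ for $D=3.03$ mm.
\begin{verbatim}
import math

# Nominal fluid properties at room temperature (illustrative assumptions,
# not the calibrated values of the experiments).
rho_w, rho_a = 998.0, 1.2      # water and air densities [kg/m^3]
sigma, g = 0.072, 9.81         # air-water surface tension [N/m], gravity [m/s^2]

l_c = math.sqrt(sigma / (rho_w * g))             # capillary length
print(f"capillary length l_c = {1e3 * l_c:.2f} mm")

def eotvos(D):
    """Eo = rho_w * g * D^2 / sigma, vertical gravitational component only."""
    return rho_w * g * D**2 / sigma

def weber(U, D):
    """Aerodynamic Weber number, We = rho_a * U^2 * D / sigma."""
    return rho_a * U**2 * D / sigma

for D_mm in (2.62, 3.03, 4.23, 5.91):
    print(f"D = {D_mm} mm -> Eo = {eotvos(D_mm * 1e-3):.2f}")

# Air velocity corresponding to We = 14 for a droplet with Eo ~ 1.1:
D = math.sqrt(1.1 * sigma / (rho_w * g))
U = math.sqrt(14 * sigma / (rho_a * D))
print(f"D = {1e3 * D:.2f} mm, U(We = 14) = {U:.1f} m/s")
\end{verbatim}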
For the cases with the oblique air flow, the droplet undergoes a curvilinear motion with a rapid deceleration followed by acceleration, with a small radius of curvature in its trajectory (see Fig. \ref{fig:1a}). The curvilinear trajectory causes the internal pressure distribution to fluctuate as the droplet continuously changes its direction. The pressure variation in a curvilinear motion is normal to the instantaneous velocity vector, which increases the stretching of the droplet, and hence the frontal area and the drag force also increase. Comparison of the results presented in Fig. \ref{fig6}(a)-(f) reveals that increasing the value of $\alpha$ decreases the value of $We_{cr}$. For all values of $\alpha$ considered, $We_{cr}$ drops sharply as we increase $Eo$ initially and then decreases slightly or remains constant. This behaviour can also be seen in Fig. \ref{fig7}, where the variations of $We_{cr}$ versus $\alpha$ are plotted for different values of $Eo$. It can be seen that $We_{cr}$ decreases to an asymptotic value of approximately 6. Interestingly, Villermaux and Bossa \cite{Villermaux2009} also theoretically found that $We_{cr} \approx 6$ for $\alpha = 90^\circ$ (the opposed flow configuration). It can be seen in Fig. \ref{fig:1a} that at $\alpha=60^\circ$ the droplet trajectory is almost aligned with the direction of the air stream. This is the reason why $We_{cr}$ for $\alpha=60^\circ$ is approximately equal to 6, as in the case of the opposed flow configuration considered by Villermaux and Bossa \cite{Villermaux2009}.
\begin{figure}[H]
\centering
\hspace{1cm} {\large (a) $\alpha = 0^\circ$} \hspace{6cm} {\large (b) $\alpha = 10^\circ$}
\vspace{5mm}
\includegraphics[width=0.45\textwidth]{Figure10a.pdf} \hspace{2mm} \includegraphics[width=0.45\textwidth]{Figure10b.pdf} \\
\vspace{1mm}
\hspace{1.cm} {\large (c) $\alpha = 20^\circ$} \hspace{6cm} {\large (d) $\alpha = 30^\circ$} \\
\includegraphics[width=0.45\textwidth]{Figure10c.pdf} \hspace{2mm} \includegraphics[width=0.45\textwidth]{Figure10d.pdf} \\
\vspace{3mm}
\hspace{1cm} {\large (e) $\alpha = 40^\circ$} \hspace{6cm} {\large (f) $\alpha = 60^\circ$}
\vspace{4mm}
\includegraphics[width=0.45\textwidth]{Figure10e.pdf} \hspace{2mm} \includegraphics[width=0.45\textwidth]{Figure10f.pdf} \\
\caption{Region maps showing the vibrational and the bag breakup modes in the $We - Eo$ plane for a water droplet ($Oh\approx0.0015$) at different angles of inclination of the air stream, $\alpha$. The red dashed line in each panel shows the transition between the vibrational and the bag breakup regimes.}
\label{fig6}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Figure11.pdf}
\caption{Variation of the transitional Weber number, $We_{cr}$, with $\alpha$ (in degrees) for a water droplet ($Oh\approx0.0015$) and different values of the E\"{o}tv\"{o}s number. The uncertainty in the measurement of the angles is $\pm0.5^\circ$.}
\label{fig7}
\end{figure}
\begin{figure}[H]
\centering
\hspace{0.5cm} {\large (a)} \hspace{7.8cm} {\large (b)}\\
\includegraphics[width=0.45\textwidth]{Figure12a.pdf} \hspace{0mm} \includegraphics[width=0.46\textwidth]{Figure12b.pdf} \\
\hspace{0.5cm} {\large (c)} \hspace{7.8cm} {\large (d)}\\
\includegraphics[width=0.45\textwidth]{Figure12c.pdf} \hspace{2mm} \includegraphics[width=0.47\textwidth]{Figure12d.pdf} \\
\caption{ (a,c) The critical dimensional air velocity, $U_c$ (m/s) versus $D$ (mm), and (b,d) $We_{cr}Re_{cr}$ versus $Eo$ for different values of $\alpha$. Panels (a,b) and (c,d) correspond to the droplets of water ($Oh\approx0.0015$) and 80\% glycerol + 20\% water solution ($Oh\approx0.078$), respectively.}
\label{fig4}
\end{figure}
In Figs. \ref{fig4}(a) and (c), the critical air stream velocity, $U_c$, that is just sufficient for the bag breakup is plotted against the droplet diameter, $D$, for three different values of $\alpha$, for droplets of water and of the 80\% glycerol + 20\% water solution, respectively. It can be seen that increasing the size of the droplet decreases the velocity of the air stream required for the bag breakup. This behaviour can be attributed to the larger frontal cross-sectional area and the larger drag coefficient due to the deformation associated with bigger droplets. Additionally, the wake region \cite{flock2012} developed behind the deformed droplet facilitates the skewed internal pressure distribution, leading to excessive stretching. A similar finding was also reported in Refs. \cite{wierzba1990deformation,chou1998temporal}. {Close inspection of Figs. \ref{fig4}(a) and (c) also reveals that increasing the viscosity of the droplet slightly increases the critical velocity requirement for the same droplet size, as also observed by previous researchers \cite{pilch1987,guildenbecher2009,hanson1963}. This is due to the suppression of the surface instability as the viscosity is increased. It can also be seen that increasing $\alpha$ for the same droplet size decreases the critical velocity required for the droplet to undergo the bag breakup.} This can be attributed to the effective energy dissipation to overcome the surface tension and the viscous effects through the larger turning in the droplet trajectory. Note that the rate of change of the angular momentum provides a measure of the energy transfer from the air stream to the droplet to overcome the viscous and surface energy dissipation. Thus, decreasing the radius of curvature of the curvilinear trajectory of the droplet increases the change in its angular momentum, leading to a more effective energy transfer in accordance with the conservation of energy principle.
It is obvious that as the droplet size increases, the critical Weber number ($We_{cr}$) decreases but the corresponding critical Reynolds number ($Re_{cr}$) of the air stream increases even though the air velocity is decreased (except when the stretching is excessive). However, in Figs. \ref{fig4}(b) and (d), it can be seen that the product of $We_{cr}$ and $Re_{cr}$ does not vary much with the increase in the droplet size. This is observed for all angles of the air stream. The product of $We_{cr}$ and $Re_{cr}$ signifies the inertial force required to overcome the viscous and the surface tension forces, such that a low value of $We_{cr} \times Re_{cr}$ represents a situation in which a smaller inertial force can lead to the droplet breakup. Increasing $\alpha$ decreases $We_{cr} \times Re_{cr}$, as the curvature of the droplet trajectory increases with increasing $\alpha$, leading to a more effective energy transfer from the air stream to the droplet to overcome the viscous and surface tension forces. Hence, it can be concluded that the opposed flow configuration is an effective way to promote the secondary atomization at a low cost of input energy \cite{merkle2003effect,gupta2001swirl}.
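For reference, writing the product out with the conventional aerodynamic definitions (an assumption about the exact definitions used, with $\rho_a$ and $\mu_a$ the air density and dynamic viscosity) gives
\begin{equation*}
We_{cr}\,Re_{cr} = \frac{\rho_a U_c^2 D}{\sigma}\,\frac{\rho_a U_c D}{\mu_a} = \frac{\rho_a^2\,U_c^3\,D^2}{\sigma\,\mu_a},
\end{equation*}
so the observed near constancy of $We_{cr}\,Re_{cr}$ with droplet size corresponds to the scaling $U_c \propto D^{-2/3}$ at a fixed obliquity angle.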
\begin{figure}[H]
\centering
\hspace{1cm} {\large (a) $\alpha = 0^\circ$} \hspace{6.5cm}{ \large (b) $\alpha = 30^\circ$}\\
\includegraphics[width=0.45\textwidth]{Figure13a.pdf} \hspace{2mm} \includegraphics[width=0.45\textwidth]{Figure13b.pdf} \\
\hspace{0.6cm} {\large (c) $\alpha = 60^\circ$} \\
\includegraphics[width=0.45\textwidth]{Figure13c.pdf}
\caption{The critical Weber number, $We_{cr}$ versus Ohnesorge number, $Oh$ for different values of $Eo$: (a) $\alpha=0^\circ$, (b) $\alpha=30^\circ$ and (c) $\alpha=60^\circ$.}
\label{fig12}
\end{figure}
Two dimensionless numbers, namely the Weber number, $We$, and the Ohnesorge number, $Oh$, are found to be effective in characterising the breakup regimes and the transition between the vibrational and the bag breakup regimes. For relatively low Ohnesorge numbers ($Oh < 0.1$), the Weber number has a dominant influence on droplet breakup, and it determines the breakup regimes, as discussed above for $Oh\approx0.0015$ (water droplets). In Fig. \ref{fig12}(a-c), we present the variations of $We_{cr}$ versus $Oh$ for different values of $Eo$ and $\alpha$. The value of $Oh$ is varied by changing the concentration of the glycerol-water solution. As expected, it can be seen that $We_{cr}$ increases with increasing $Oh$ for all the cases. Aalburg {\it et al.} \cite{Aalburg2003} also found that the effect of surface tension, which opposes fragmentation, decreases with increasing $Oh$.
Finally, we investigate in Fig. \ref{fig13} the variations of $We_{cr}$ with $\alpha$ for different values of $Eo$ and for different glycerol-water solutions (i.e., different values of $Oh$). It can be seen that $We_{cr}$ decreases with the increase in $\alpha$ irrespective of the liquid considered. However, unlike the low-$Oh$ case (see Fig. \ref{fig7}), $We_{cr}$ does not approach the asymptotic value of $6$ observed in the opposed flow configuration \cite{Villermaux2009}. The Ohnesorge number, $Oh$, is accounted for in the following correlation used to predict the modified critical Weber number, $We_{cr}^m$, of a viscous droplet \cite{pilch1987,guildenbecher2009,kulkarni2014}
\begin{equation}\label{eq:1}
We_{cr}^m=We_{cr} (1+ c_1 Oh^{c_2}),
\end{equation}
where the values of $c_1$ and $c_2$, obtained using a regression analysis, are 0.776 and 0.338, respectively. In the present study, the mean absolute percentage error between the predicted and the experimental values is found to be $<6\%$.
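For illustration, the sketch below shows how coefficients of this form can be obtained with a nonlinear least-squares regression. The $(Oh, We_{cr}^m)$ points are synthetic values generated from the published coefficients with a few percent of noise, and the base value $We_{cr}=12$ is an assumed low-$Oh$ plateau; neither corresponds to the measured data, so the refitted values only approximately recover $c_1$ and $c_2$.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Model of Eq. (1): We_cr^m = We_cr * (1 + c1 * Oh^c2).
def we_cr_viscous(Oh, c1, c2, We_cr_base):
    return We_cr_base * (1.0 + c1 * Oh**c2)

# Synthetic data for demonstration only (not the measured values).
rng = np.random.default_rng(0)
Oh = np.array([0.0015, 0.009, 0.032, 0.078])
We_base = 12.0                                  # assumed low-Oh value
We_meas = we_cr_viscous(Oh, 0.776, 0.338, We_base) \
          * (1 + 0.02 * rng.standard_normal(Oh.size))

# Fit (c1, c2) with the base value held fixed.
popt, _ = curve_fit(lambda oh, c1, c2: we_cr_viscous(oh, c1, c2, We_base),
                    Oh, We_meas, p0=[1.0, 0.5])
c1_fit, c2_fit = popt
pred = we_cr_viscous(Oh, c1_fit, c2_fit, We_base)
mape = 100 * np.mean(np.abs(pred - We_meas) / We_meas)
print(f"c1 = {c1_fit:.3f}, c2 = {c2_fit:.3f}, MAPE = {mape:.1f}%")
\end{verbatim}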
\begin{figure}[H]
\centering
\hspace{1cm} {\large (a) $Oh\approx0.009$} \hspace{6.5cm} {\large (b) $Oh\approx0.032$}\\
\includegraphics[width=0.45\textwidth]{Figure14a.pdf} \hspace{2mm} \includegraphics[width=0.45\textwidth]{Figure14b.pdf} \\
\hspace{0.6cm} {\large (c) $Oh\approx0.078$} \\
\includegraphics[width=0.45\textwidth]{Figure14c.pdf}
\caption{Variations of the critical Weber number, $We_{cr}$ versus $\alpha$ (in degree) for (a) 50\% glycerol + 50\% water solution ($Oh \approx 0.009$), (b) 70\% glycerol + 30\% water solution ($Oh \approx 0.032$) and (c) 80\% glycerol + 20\% water solution ($Oh \approx 0.078$).}
\label{fig13}
\end{figure}
\section{Concluding remarks}
\label{sec:conc}
We have conducted experiments to investigate the effect of the air stream obliquity $(\alpha)$, the droplet diameter $(D)$ and the fluid properties of the liquid on the deformation and breakup of droplets exposed to an oblique continuous air stream. A high-speed imaging system is used to record the trajectories and topological changes of the droplets. Four different liquids, namely deionised water and 50\%, 70\% and 80\% aqueous mixtures of glycerol (by volume), are considered in our experiments. The angle of the continuous air stream is varied from $\alpha=0^{\circ}$ to $\alpha=60^{\circ}$ in intervals of $10^{\circ}$. The values of the critical Weber number $(We_{cr})$ for the bag-type breakup are obtained as a function of the E\"{o}tv\"{o}s number $(Eo)$, the angle of inclination of the air stream $(\alpha)$ and the Ohnesorge number $(Oh)$. It is found that, in the case of an oblique air stream, the droplet entering the air stream follows a rectilinear motion that transforms into a curvilinear motion as the droplet undergoes topological changes. As we gradually change the cross-flow configuration towards the in-line configuration, a sharp decrease in the critical Weber number is observed, which asymptotically approaches the value observed in an in-line (opposed) flow configuration. The droplet breakup time also increases with the increase in the oblique angle of the continuous air stream. This behaviour can be attributed to the deceleration of the droplet as it follows a curvilinear trajectory and shifts its direction towards the nearly in-line (co-flow) arrangement. The resulting increase in the residence time over a short path length leads to a more effective energy transfer from the air stream to the droplet, which is dissipated in overcoming the viscous and surface tension forces to form smaller satellite droplets. As expected, increasing the droplet diameter (or increasing $Eo$, which signifies a decrease in the relative influence of the surface tension force over the gravitational force) decreases the value of $We_{cr}$. Increasing the viscosity of the droplet (which is equivalent to increasing the value of $Oh$) requires a higher $We$ for the droplet to undergo the bag-type breakup.
\section*{Acknowledgments}
The authors would like to acknowledge the financial support from the Department of Science and Technology, India (SERB grant No. ECR/2015/000365). SKS and PKK also thank the Ministry of Human Resource Development, India for providing research fellowships.
\section{Introduction}
\label{introduction}
The discovery of the Higgs boson ($\Ph$) in 2012~\cite{Chatrchyan:2012xdj,Aad:2012tfa,CMS:2012nga,ATLASscience} has led to a detailed program of studies of the Higgs field couplings to the elementary particles of the standard model (SM) of particle physics: leptons, quarks, and gauge bosons. To fully understand the form of the Higgs field potential, which is a key element in the formulation of the SM, it is important to also study the self-interaction of the Higgs boson. The self-interaction can be investigated through measurements of the production of a pair of Higgs bosons ($\Ph\Ph$). In the SM, $\Ph\Ph$ production is a rare, nonresonant process, with a small production rate~\cite{HiggsXS} that will require the future data sets of the high-luminosity LHC to be observed~\cite{HiggsXS}. Hence, an early observation of $\Ph\Ph$ production, a resonant production in particular, would be a spectacular signature of physics beyond the standard model (BSM). The production of gravitons, radions, or stoponium~\cite{Tang:2012pv,PhysRevD.63.056007,PhysRevD.90.055007}, for example, could lead to $s$-channel $\Ph\Ph$ production via narrow-width resonances. The breadth of the Higgs boson decay channels provides a unique opportunity to test the self-consistency of an $\Ph\Ph$ signal with the SM or models with extended electroweak sectors, such as two-Higgs doublet models (2HDM)~\cite{2HDM1, 2HDM2} or extensions of the minimal supersymmetric standard model~\cite{hH2,hH3,hH5}.
This paper reports a search for resonant $\Pp\Pp\to\mathrm{X}\to\PH\PH$ production in the $\PH\PH\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace$ decay channel, where X is a narrow-width resonance of spin-0 or spin-2, and $\PH$ can represent either $\Ph$ or an additional Higgs boson from an extended electroweak sector. The search uses proton-proton ($\Pp\Pp$) collision data at $\sqrt{s}=13\TeV$, recorded with the CMS detector at the LHC in 2016, and corresponding to an integrated luminosity of 35.9\fbinv. It covers a range of resonance masses between 260 and 1000\GeV. The final state consists of two \PQb jets from one Higgs boson decay and two distinct $\PZ$ boson decay signatures from the other $\PH\to\PZ\PZ$ decay: two same-flavor, opposite-sign (OS) leptons from a decay of one of the \PZ bosons, and either two jets of any flavor (the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel) or significant missing transverse momentum (the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel) from the decay of the second \PZ boson to neutrinos. In both cases, the selected charged leptons are either electrons or muons. In the SM, the branching fractions of these signatures represent 0.43 (0.12)\% of the full $\Ph\Ph$ decay through the \ensuremath{\PQb\PQb\PZ\PZ}\xspace intermediate state in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace (\ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace{}) channel. The challenging aspect of the search in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel is the ability to discriminate the signal containing two \PQb jets and two additional jets from multijet background events. For a search in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel, the challenge resides in discriminating the signal against top quark anti-top quark ($\ttbar$) events and instrumental background sources of large missing transverse momentum arising from the mismeasurement of the energies of jets in the final state. The two channels are kept independent by applying orthogonal selections on the missing transverse momentum of the event. Signal yields are calculated for each individual channel and are then combined. Having multiple decay channels with complementary background compositions and sensitivities over a large resonance mass ($m_X$) range makes this combination of the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace and \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channels highly efficient for covering the \ensuremath{\PQb\PQb\PZ\PZ}\xspace final state. This is the first search for Higgs boson resonant pair production in the \ensuremath{\PQb\PQb\PZ\PZ}\xspace channel.
Previous searches for resonant $\Ph\Ph$ production have been performed by the CMS and ATLAS Collaborations in the $\ensuremath{\PQb\PQb}\xspace\bb$~\cite{hh4b,Aaboud:2018knk}, $\ensuremath{\PQb\PQb}\xspace\tau\tau$~\cite{bbtautau,Aaboud:2018bun}, $\ensuremath{\PQb\PQb}\xspace\gamma\gamma$~\cite{bbgammagamma}, and $\ensuremath{\PQb\PQb}\xspace\ell\nu\ell\nu$~\cite{bbWW,Aaboud:2018bun} channels. While coverage of as many $\Ph\Ph$ decay channels as possible remains necessary to understand the exact nature of the Higgs boson self-coupling and the electroweak symmetry breaking mechanism, a \ensuremath{\PQb\PQb\PZ\PZ}\xspace search is particularly interesting in models with extended electroweak sectors, where the phenomenology of additional Higgs bosons can lead to significantly enhanced \ensuremath{\PQb\PQb\PZ\PZ}\xspace production, while suppressing the BSM production of $\ensuremath{\PQb\PQb}\xspace\bb$, $\ensuremath{\PQb\PQb}\xspace\tau\tau$, or $\ensuremath{\PQb\PQb}\xspace\gamma\gamma$ final states.
\section{Benchmark models}
\label{benchmarks}
As in the previous searches, a class of narrow width resonance models arising from the Randall--Sundrum (RS) model~\cite{RandallSundrum} in warped extra dimensions~\cite{radion1,radion2,radion3,radion4} are considered. This scenario introduces one small spatial extra dimension with a nonfactorizable geometry, where the SM particles are not allowed to propagate along that extra dimension, and is referred to in this search as RS1. The resonant particle can be a radion (spin-0) or the first Kaluza--Klein (KK) excitation of a graviton (spin-2). The production cross section of the radion is proportional to $1/\lambda_{\mathrm{R}}^{2}$ where $\lambda_{\mathrm{R}}$ is the interaction scale parameter of the theory. In this analysis, we consider the cases where $\lambda_{\mathrm{R}}=1\TeV$ with $kL = 35$, where $k$ is the constant in the warp factor ($e^{-kL}$) appearing in the space-time metric of the theory and $L$ is the size of the extra dimension. The free parameter of the model for the graviton case is $\tilde{k} = k/\overline{M}_{\mathrm{Pl}}$, where $\overline{M}_{\mathrm{Pl}}$ is the reduced Planck scale, and we consider $\tilde{k}=0.1$ in this analysis~\cite{Oliveira:2014kla}. We further scan the model parameter space in the $\lambda_{\mathrm{R}}$ and $\tilde{k}$ parameters for their respective models. Production at hadron colliders is expected to be dominated by gluon-gluon fusion, and we assume that the radion or graviton is produced exclusively via this process. Due to the small branching fraction of $\Ph\Ph\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace$ and the high multiplicities of the final states, the analyses presented in this paper are less sensitive to these models compared to the previous searches. As noted in Section~\ref{introduction}, however, certain models with extended electroweak sectors can produce significantly enhanced \ensuremath{\PQb\PQb\PZ\PZ}\xspace production, while suppressing final states with Higgs boson decays to fermions and scalar bosons.
Such an enhancement can be produced for example in the next-to-minimal 2HDM (N2HDM) extended Higgs sector~\cite{N2HDM,N2HDM1}, where an additional real singlet is introduced in addition to the usual two doublet Higgs bosons of the 2HDM. This analysis is further interpreted in this context. The so-called broken phase is considered, wherein both the Higgs doublets and the singlet acquire vacuum expectation values (vev)~\cite{N2HDM1}. Mixing between these states produces 3 $CP$-even Higgs bosons ${\PH}_1$, ${\PH}_2$, and ${\PH}_3$, with masses that are free parameters of the model. This search considers the nearly mass-degenerate case where the masses of the two bosons ${\PH}_1$ and ${\PH}_2$ are constrained to the experimental measurements of the $\Ph$ mass, which would be indistinguishable from $\Ph$ production with current LHC data sets~\cite{hH1,hH4,hH2}, but may give rise to manifestly non-SM-like rates in the case of $\Ph\Ph$ production. In what is commonly referred to as Higgs boson cascade decays, ${\PH}_3$ can decay to any combination of bosons ${\PH}_1$ and ${\PH}_2$, which then both can have different decay branching fractions compared to the SM Higgs boson. The model spectrum depends on the ratio of the vevs of the two Higgs doublets $\tan\beta$, low values of which enhance ${\PH}_3$ production; the vev of the singlet, which affects the decay branching fractions of ${\PH}_3$ to ${\PH}_1$ and ${\PH}_2$; and three mixing angles, which determine the decay branching fractions of ${\PH}_1$ and ${\PH}_2$~\cite{N2HDM1}. The model spectra described below are determined using \textsc{N2HDECAY}~\cite{N2HDM2}, and are chosen to enhance production of the \ensuremath{\PQb\PQb\PZ\PZ}\xspace final state while respecting current LHC measurements of the SM $\Ph$ branching fractions within their experimental uncertainties~\cite{HiggsXS}. The gluon-gluon fusion production cross sections of ${\PH}_3$ are determined from the BSM Higgs boson predictions of the LHC Higgs Cross Section Working Group~\cite{HiggsXS}. These cross sections assume SM decay branching fractions of the Higgs boson, and changing these branching fractions affects the production cross section. The cross sections are corrected at leading order (LO) by the ratio of the relative partial width of ${\PH}_3$ in the decay to two gluons compared to the BSM Higgs boson prediction. Enhanced (reduced) coupling of ${\PH}_3$ to gluons will enhance (reduce) the production cross section of ${\PH}_3$. The mass of the Higgs bosons ${\PH}_1$ and ${\PH}_2$ are set to 125\GeV, and the mass of ${\PH}_3$ is generated in the range $260\leq m_\mathrm{X}\leq 1000\GeV$. Two benchmark points are chosen for this analysis, corresponding to $\tan\beta=0.5$ and 2.0. In both cases, the scalar vev is set to 45\GeV, and the mixing angles $\alpha_1$, $\alpha_2$, $\alpha_3$ are set to 0.76, 0.48, and 1.00, respectively. For $\tan\beta=0.5$, this results in branching fractions of ${\PH}_3$ to ${\PH}_1{\PH}_1$, ${\PH}_1{\PH}_2$, and ${\PH}_2{\PH}_2$ around 0.02, 0.29, and 0.64 respectively, branching fractions of ${\PH}_1\to\PQb\PQb$ (${\PH}_1\to\PZ\PZ$) of 0.70 (0.01), and branching fractions of ${\PH}_2\to\PQb\PQb$ (${\PH}_2\to\PZ\PZ$) of 0.42 (0.05). This represents a 33\% increase in the branching fraction to \ensuremath{\PQb\PQb\PZ\PZ}\xspace compared to SM $\Ph\Ph$ decays. The correction factor based on the relative partial width of ${\PH}_3$ to two gluons is around 3.0. 
For $\tan\beta=2.0$, this results in branching fractions of ${\PH}_3$ to ${\PH}_1{\PH}_1$, ${\PH}_1{\PH}_2$, and ${\PH}_2{\PH}_2$ around 0.07, 0.22, and 0.67 respectively, branching fractions of ${\PH}_1\to\PQb\PQb$ (${\PH}_1\to\PZ\PZ$) of 0.53 (0.03), and branching fractions of ${\PH}_2\to\PQb\PQb$ (${\PH}_2\to\PZ\PZ$) of 0.58 (0.03). This represents a 5\% increase in the branching fraction to \ensuremath{\PQb\PQb\PZ\PZ}\xspace compared to SM $\Ph\Ph$ decays. The correction factor based on the relative partial width of ${\PH}_3$ to two gluons is around 0.7. These corrections and branching fractions produce significant differences in the production rates of the \ensuremath{\PQb\PQb\PZ\PZ}\xspace system compared to $\Ph\Ph$ production both in the SM and through resonant production of radions or gravitons.
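The quoted enhancements can be cross-checked with a short calculation, since the branching fraction of a boson pair to \ensuremath{\PQb\PQb\PZ\PZ}\xspace is the sum over the two possible assignments of the $\PQb\PQb$ and $\PZ\PZ$ decays to the two parents. The sketch below is illustrative only: it uses the rounded branching fractions quoted above together with nominal SM values $\mathcal{B}(\Ph\to\PQb\PQb)\approx0.58$ and $\mathcal{B}(\Ph\to\PZ\PZ)\approx0.026$ (assumptions), so it reproduces the quoted enhancement only approximately.
\begin{verbatim}
# BR of a pair (Ha, Hb) to bbZZ: Ha -> bb and Hb -> ZZ, or the reverse.
def br_pair_to_bbzz(br_a, br_b):
    return br_a["bb"] * br_b["ZZ"] + br_a["ZZ"] * br_b["bb"]

# Nominal SM branching fractions of the 125 GeV Higgs boson (assumed).
sm = {"bb": 0.58, "ZZ": 0.026}
br_sm = br_pair_to_bbzz(sm, sm)

# N2HDM benchmark with tan(beta) = 0.5, rounded values quoted in the text
# (the tan(beta) = 2.0 benchmark can be treated in the same way).
h1 = {"bb": 0.70, "ZZ": 0.01}
h2 = {"bb": 0.42, "ZZ": 0.05}
br_h3_pairs = {"H1H1": 0.02, "H1H2": 0.29, "H2H2": 0.64}

br_n2hdm = (br_h3_pairs["H1H1"] * br_pair_to_bbzz(h1, h1)
            + br_h3_pairs["H1H2"] * br_pair_to_bbzz(h1, h2)
            + br_h3_pairs["H2H2"] * br_pair_to_bbzz(h2, h2))

print(f"SM    hh -> bbZZ: {br_sm:.4f}")
print(f"N2HDM H3 -> bbZZ: {br_n2hdm:.4f} (ratio {br_n2hdm / br_sm:.2f})")
\end{verbatim}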
\section{The CMS detector}
\label{cms}
The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors, where pseudorapidity is defined as $\eta = -\ln[\tan(\theta /2)]$, and $\theta$ is the polar angle. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. CMS uses a two-level trigger system~\cite{Khachatryan:2016bia}. The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events. The high-level trigger (HLT) processor farm further decreases the event rate from around 100\unit{kHz} to a rate of around 1\unit{kHz}, before data storage. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008aa}.
\section{Event simulation}
\label{samples}
The signal samples of RS1 spin-0 radion and RS1 KK spin-2 graviton narrow resonances decaying to a pair of Higgs bosons ($\mathrm{X}\to\Ph\Ph$) are generated at LO using {\MGvATNLO}. The $\Ph$ mass is set to 125\GeV, and the $X$ resonance mass $m_X$ is generated in the range of 260--1000\GeV. In the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel the final state can be produced via either the \ensuremath{\PQb\PQb\PZ\PZ}\xspace or $\PQb\PQb\PWp\PWm$ intermediate states.
The main background processes to production of a pair of Higgs bosons in the $\ensuremath{\PQb\PQb\PZ\PZ}\xspace\to$ \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace or \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace final states are \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace and $\ttbar$ processes. Less significant backgrounds arise from single top quark, $\PW$+jets, diboson+jets, SM Higgs boson production, and quantum chromodynamics (QCD) multijet production. Signal and background processes are modeled with simulations, with the exception of the QCD multijet background that is estimated using data control regions.
In the analysis using the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel, the \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace and $\PW$+jets processes are generated with {\MGvATNLO}2.4.2~\cite{Alwall:2014hca} at next-to-leading order (NLO). In this case, the generator uses the \textsc{FxFx} jet merging scheme~\cite{Frederix:2012ps}. The analysis of the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel uses samples of \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace generated with {\MGvATNLO} at LO, with the MLM matching scheme~\cite{Alwall:2007fs}, and reweighted to account for higher order QCD and electroweak effects~\cite{DY_QCDnEWK}.
{\tolerance=800 The \ttbar process is generated at NLO with \POWHEG2.0~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Alioli:2009je,Re:2010bp,Alioli:2008tz}. Single top processes and SM Higgs boson production processes are simulated at NLO either with \POWHEG or {\MGvATNLO}, depending on the particular channel. The diboson processes ($\PW\PW$+jets, $\PW\PZ$+jets, $\PZ\PZ$+jets) are simulated at NLO with {\MGvATNLO}.\par}
The simulated samples are normalized to their best-known highest-order-QCD cross sections, either evaluated at NLO with \MCFM~\cite{Campbell:2010ff} (diboson+jets) or at next-to-next-to-leading order with \FEWZ 3.1~\cite{Li:2012wna} (single top quark, $\PW$+jets, SM Higgs boson), with the exception of $\ttbar$ and \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace processes, which are normalized using data.
The simulated samples are interfaced with \PYTHIA8.212~\cite{pythia8p2} for parton showering and hadronization. The \PYTHIA generator uses the CUETP8M1 underlying event tune~\cite{Khachatryan:2015pea}. The \textsc{nnpdf3.0} NLO and LO parton distribution functions (PDFs)~\cite{Ball:2014uwa} are used for the various processes, with the precision matching that in the matrix element calculations.
For all the simulated samples used in this analysis, a simulation of CMS detector response based on \GEANTfour~\cite{GEANT4} is applied. The presence of additional interactions in the same bunch crossing (pileup, or PU), both in-time and out-of-time with respect to the primary interaction, is simulated and corrected to agree with a multiplicity corresponding to the distribution measured in data.
\section{Event reconstruction and background estimation}
\label{selection}
\subsection{Event reconstruction}
\label{reconstruction}
Events are selected using triggers that require two muons with transverse momentum $\pt >17$ (8)\GeV or two electrons with $\pt >23$ (12)\GeV for the leading (subleading) lepton.
The particle-flow (PF) algorithm~\cite{PFlast}, which combines information from various elements of the CMS detector, is used to reconstruct and identify final-state particles, such as photons, electrons, muons, and charged and neutral hadrons, as individual PF objects. Combinations of PF objects are then used to reconstruct higher-level objects such as jets and missing transverse momentum.
Jets are reconstructed from the PF objects, using the anti-\kt~\cite{antikt,fastjet} algorithm with a distance parameter of $R = 0.4$. In order to reduce instrumental backgrounds and the contamination from PU, selected jets are required to satisfy loose identification criteria~\cite{CMS:2017wyc} based on the multiplicities and energy fractions carried by charged and neutral hadrons. The energy of reconstructed jets is calibrated using $\pt $- and $\eta$-dependent corrections to account for nonuniformity and nonlinearity effects of the ECAL and HCAL energy response to neutral hadrons, for the presence of extra particles from PU, for the thresholds used in jet constituent selection, reconstruction inefficiencies, and possible biases introduced by the clustering algorithm. These jet energy corrections are extracted from the measurement of the momentum balance in dijet, $\text{photon}+\text{jet}$, \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace, and multijet events~\cite{Khachatryan:2016kdb}. A residual $\eta$- and $\pt $-dependent calibration is applied to correct for the small differences between data and simulated jets. The jets that are candidates to be from the decay of one of the Higgs bosons and of one of the \PZ bosons are required to have $\pt >20\GeV$. Furthermore, jets are required to have a spatial separation of $\Delta R > 0.3$ from lepton candidates.
Jets originating from \PQb quarks are identified with the combined multivariate analysis (cMVA) algorithm~\cite{Sirunyan:2017ezt}. A jet is tagged as a \PQb jet if the cMVA discriminant is above a certain threshold, chosen such that the misidentification rate is about 1\% for light-flavor quark and gluon jets, and about 13\% for charm quark jets. The \PQb jet tagging efficiency for this working point is about 66\%.
The missing transverse momentum vector \ptvecmiss is computed as the negative vector sum of the transverse momenta of all the PF candidates in an event, and its magnitude is denoted as \ptmiss~\cite{Sirunyan:2019kia}. The \ptvecmiss is modified to account for corrections to the energy scale of the reconstructed jets in the event.
The candidate vertex with the largest value of summed physics-object $\pt^2$ is taken to be the primary $\Pp\Pp$ interaction vertex. The physics objects are the jets, clustered using the jet finding algorithm~\cite{antikt,fastjet} with the tracks assigned to candidate vertices as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the \pt of those jets.
Muons are reconstructed as tracks in the muon system that are matched to the tracks reconstructed in the inner silicon tracking system~\cite{MuId}. The leading muon is required to have $\pt > 20\GeV$, while the subleading muon must have $\pt > 15$ (10)\GeV in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace (\ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace{}) channel. Muons are required to be reconstructed in the HLT fiducial volume, \ie{}, with $\abs{\eta} < 2.4$, to ensure that the offline selection is at least as restrictive as the HLT requirements. The selected muons are required to satisfy a set of identification requirements based on the number of spatial measurements in the silicon tracker and in the muon system and the fit quality of the combined muon track~\cite{Khachatryan:2015pea}, and are required to be consistent with originating from the primary vertex.
Electrons are reconstructed by matching tracks in the silicon tracker to the clusters of energy deposited in the ECAL~\cite{EMId}. The leading (subleading) electron is required to have $\pt > 25$ (15)\GeV and $\abs{\eta} < 2.5$ to be within the geometrical acceptance, excluding candidates in the range $1.4442 < \abs{\eta} < 1.5660$, which is the transition region between the ECAL barrel and endcaps, because the reconstruction of an electron in this region is poor compared to other regions. Electrons are required to pass an identification requirement based on an MVA~\cite{TMVA} technique that combines information from various observables related to the shower shape in the ECAL and the quality of the matching between the tracks and the associated ECAL clusters~\cite{EMId}. They are further required to be consistent with originating from the primary vertex. Candidates that are identified as originating from photon conversions in the material of the detector are removed.
Both muons and electrons have a requirement that the lepton relative isolation, defined in Eq.(\ref{isoeq}), be less than 0.25 (0.15) and 0.15 (0.06), respectively, for the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace (\ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace{}) channel. In Eq.(\ref{isoeq}), the sums run over charged hadrons originating from the primary vertex of the event, neutral hadrons, and photons inside a cone of radius $\Delta R = \sqrt{\smash[b]{(\Delta\phi )^2 + (\Delta\eta) ^2}} < 0.4$ (0.3) around the direction of the muon (electron), where $\phi$ is the azimuthal angle in radians.
\ifthenelse{\boolean{cms@external}}{
\begin{multline}
\label{isoeq}
I_\text{iso} = \frac{1}{\pt^{\ell}}\Biggl[\sum^{\text{charged}}\pt \\+ \max\Bigl(0, \sum^{\text{neutral}}\pt + \sum^{\text{photons}} \pt - \text{Corr}_{\text{PU}}\Bigr)\Biggr]
\end{multline}
}{
\begin{equation}
\label{isoeq}
I_\text{iso} = \frac{1}{\pt^{\ell}}\left[\sum^{\text{charged}}\pt + \max\left(0, \sum^{\text{neutral}}\pt + \sum^{\text{photons}} \pt - \text{Corr}_{\text{PU}}\right)\right]
\end{equation}
}
{\tolerance=800 The isolation includes a correction for pileup effects, $\text{Corr}_{\text{PU}}$. For electrons, $\text{Corr}_{\text{PU}}=\rho A_{\text{eff}}$, where $\rho$ is the average transverse momentum flow density, calculated using the jet area method~\cite{PUcorr}, and $A_{\text{eff}}$ is the geometric area of the isolation cone times an $\eta$-dependent correction factor that accounts for residual pileup effects. For muons, $\text{Corr}_{\text{PU}}=0.5\sum^{\text{PU}}\pt $, where the sum runs over charged particles not associated with the primary vertex and the factor 0.5 corresponds to an approximate average ratio of neutral to charged particles in the isolation cone~\cite{CMS:2010eua}.\par}
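As an illustration of Eq.~(\ref{isoeq}), the sketch below computes the relative isolation from a toy list of particle-flow candidates; the candidate list, the cone size, and the pileup correction are placeholder assumptions rather than the calibrated CMS inputs.
\begin{verbatim}
import math
from dataclasses import dataclass

@dataclass
class PFCandidate:
    pt: float        # GeV
    eta: float
    phi: float
    kind: str        # "charged", "neutral", "photon", "charged_pu"

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lep_pt, lep_eta, lep_phi, cands, cone, corr_pu):
    """(sum charged + max(0, sum neutral + sum photon - Corr_PU)) / pt(lepton)."""
    charged = neutral = photons = 0.0
    for c in cands:
        if delta_r(lep_eta, lep_phi, c.eta, c.phi) >= cone:
            continue
        if c.kind == "charged":
            charged += c.pt
        elif c.kind == "neutral":
            neutral += c.pt
        elif c.kind == "photon":
            photons += c.pt
    return (charged + max(0.0, neutral + photons - corr_pu)) / lep_pt

# Toy example for a muon, with Corr_PU = 0.5 * sum(pt of pileup charged hadrons).
cands = [PFCandidate(2.0, 0.1, 0.1, "charged"),
         PFCandidate(1.5, -0.2, 0.0, "neutral"),
         PFCandidate(0.8, 0.0, 0.3, "photon"),
         PFCandidate(3.0, 0.2, -0.1, "charged_pu")]
corr_pu = 0.5 * sum(c.pt for c in cands if c.kind == "charged_pu")
print(relative_isolation(35.0, 0.0, 0.0, cands, cone=0.4, corr_pu=corr_pu))
\end{verbatim}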
Simulated background and signal events are corrected with scale factors for differences observed between data and simulation, in trigger efficiencies, in lepton $\pt $- and $\eta$-dependent identification and isolation efficiencies, and in \PQb\ tagging efficiencies.
\subsection{\texorpdfstring{Event selection in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel}{Event selection in the bblljj channel}}
\label{lljjselection}
After selection of the candidate physics objects, an initial event selection is performed by requiring at least two same-flavor leptons (muons or electrons) in each event. The two leptons are required to be oppositely charged. The invariant mass of the two leptons, $m_{\ell\ell}$, is required to be larger than 15\GeV. Four of the jets in an event are designated as the $\Ph$ and \PZ boson decay products. These jets are required to have $\pt >20\GeV$ and at least one of those must be \PQb tagged with a minimum requirement on the \PQb tagging discriminant, that is looser than the requirement in the final selection. We refer to this selection as the preselection.
Since the signal contains two \PQb jets from the decay of a Higgs boson, and two jets of any flavor from the decay of a \PZ boson, it is important to carefully categorize the jets in the event. Starting from a collection of jets identified as described above, the information from the \PQb tagging discriminant, as well as the kinematic properties of the jets, are taken into account when assigning jets as each particle's decay products.
The following selection is applied to identify the \PQb jets originating from the decay of the Higgs boson. The two jets with the highest \PQb tagging scores above a certain threshold are assigned to the decay of the Higgs boson. If only one jet is found that meets the minimum \PQb tagging score value, a second jet that leads to an invariant mass closest to 125\GeV~is selected. If no jets with \PQb tagging scores above threshold are found, the two jets whose invariant mass is closest to 125\GeV~are chosen.
After jets are assigned to the decay of \ensuremath{\Ph\to\PQb\PQb}\xspace, from the remaining jets the two jets with four-object invariant mass $M(\ell\ell\text{jj})$ closest to 125\GeV are assigned to the decay of the \PZ boson.
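The assignment logic described in the two preceding paragraphs can be summarized in the sketch below; the four-vector container, the \PQb tagging threshold, and the handling of ties are illustrative placeholders, and the actual analysis implementation may differ in such details.
\begin{verbatim}
import itertools
from dataclasses import dataclass

HIGGS_MASS = 125.0  # GeV

@dataclass
class FourVec:
    px: float
    py: float
    pz: float
    e: float
    btag: float = 0.0      # b-tagging discriminant (only meaningful for jets)

def inv_mass(objs):
    px = sum(o.px for o in objs); py = sum(o.py for o in objs)
    pz = sum(o.pz for o in objs); e = sum(o.e for o in objs)
    return max(e * e - px * px - py * py - pz * pz, 0.0) ** 0.5

def assign_h_to_bb_jets(jets, btag_threshold):
    """Jets attributed to h -> bb, following the priority described in the text."""
    tagged = sorted((j for j in jets if j.btag > btag_threshold),
                    key=lambda j: j.btag, reverse=True)
    if len(tagged) >= 2:          # two highest b-tag scores above threshold
        return tagged[0], tagged[1]
    if len(tagged) == 1:          # one tagged jet: partner giving mass closest to m_h
        rest = [j for j in jets if j is not tagged[0]]
        return tagged[0], min(rest,
            key=lambda j: abs(inv_mass([tagged[0], j]) - HIGGS_MASS))
    # no tagged jet: dijet pair with invariant mass closest to m_h
    return min(itertools.combinations(jets, 2),
               key=lambda pair: abs(inv_mass(pair) - HIGGS_MASS))

def assign_z_to_jj_jets(jets, h_jets, leptons):
    """From the remaining jets, the pair giving M(ll jj) closest to m_h."""
    remaining = [j for j in jets if j is not h_jets[0] and j is not h_jets[1]]
    return min(itertools.combinations(remaining, 2),
               key=lambda pair: abs(inv_mass(list(pair) + list(leptons)) - HIGGS_MASS))
\end{verbatim}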
After preselection, additional requirements are imposed. At least one of the four jets assigned as the decay products of the $\Ph$ or \PZ boson must satisfy the \PQb tagging requirement, to increase the signal-to-background ratio. To impose orthogonality with the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace decay channel, upper limits on the $\ptmiss$ are imposed as follows: $\ptmiss < 40,\ 75,\ \mathrm{and}\ 100\GeV$ for the $m_X$ of 260--350, 350--650, and ${\geq}$650\GeV, respectively. We refer to this selection as the final selection in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel.
After the final selection, twenty-two variables that exploit the differences in kinematic and angular distributions between the signal and background processes are combined into a boosted decision tree (BDT) discriminant~\cite{bdt}. In the $m_X$ range of 260--300\GeV, the most important variables are $m_{\ell\ell}$, the separation between the leading lepton and leading \PQb tagged jet $\Delta R_{\ell1\PQb1}$, and the invariant mass of the pair of \PQb tagged jets $m^{\Ph}_{\PQb\PQb}$. In the $m_X$ range of 350--550\GeV, $m^{\Ph}_{\PQb\PQb}$ is the most important variable, while $m_{\ell\ell}$ becomes less important, and the separation between the pair of leptons $\Delta R_{\ell\ell}$ gradually becomes more important when the $m_X$ increases. For the $m_X$ higher than 550\GeV, $\Delta R_{\ell\ell}$ becomes the most important variable followed by $m^{\Ph}_{\PQb\PQb}$ and the separation between the pair of \PQb tagged jets $\Delta R^{\Ph}_{\PQb\PQb}$. The BDTs are configured to use stochastic gradient boosting with the binomial log-likelihood loss function. The software package TMVA~\cite{TMVA} is used for BDT implementation, training, and application.
The BDT is trained using all background processes described in Section~\ref{samples}, excluding the multijet background. In each lepton channel and for each spin hypothesis, one BDT is trained for each simulated signal $m_X$. In the training, signal events include samples from the two neighboring mass points, in addition to the targeted mass point. In total, 48 BDTs are trained. These BDT distributions for data and expected backgrounds are used as the final discriminating variable in the analysis.
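The BDTs here are implemented and trained with TMVA; as a rough analogue of the configuration described above (stochastic gradient boosting with a binomial log-likelihood loss), the sketch below uses scikit-learn with random numbers standing in for the 22 kinematic inputs. It illustrates the training setup only and is not the analysis code.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy stand-ins for the 22 kinematic inputs (m_ll, DeltaR_l1b1, m_bb, ...):
# the signal class is shifted slightly with respect to the background class.
n_feat = 22
bkg = rng.normal(0.0, 1.0, size=(5000, n_feat))
sig = rng.normal(0.3, 1.0, size=(5000, n_feat))
X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(len(bkg)), np.ones(len(sig))])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=1)

# The default loss is the binomial deviance (log-likelihood); subsample < 1
# makes the boosting stochastic, loosely mirroring the settings in the text.
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                 subsample=0.5, learning_rate=0.1)
bdt.fit(X_train, y_train)
print("test accuracy:", bdt.score(X_test, y_test))
\end{verbatim}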
\subsection{\texorpdfstring{Background estimation in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel}{Background estimation in the bblljj channel}}
\label{lljjbackground}
The main processes that can mimic the signature of the signal in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel are \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace and $\ttbar$, with smaller contributions from QCD multijets, diboson+jets, $\PW$+jets, and SM Higgs boson production.
The contribution from the principal background, \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace, is estimated with simulated events normalized to the data at the preselection level in the $\PZ$ boson-enriched control region $80 < m_{\ell\ell} < 100\GeV$. The contribution from $\ttbar$ is estimated in a similar manner, with the $\ttbar$-enriched control region defined by $m_{\ell\ell} > 100\GeV$, and $\ptmiss>100\GeV$. The data-to-simulation normalization factors derived from the two control regions are $R_{\PZ} = 1.14 \pm 0.01\stat$ and $R_{\ttbar} = 0.91 \pm 0.01\stat$ in the muon channel and $R_{\PZ} = 1.24 \pm 0.01\stat$ and $R_{\ttbar} = 0.97 \pm 0.02\stat$ in the electron channel. These normalization factors are found to be consistent between lepton flavors when applying lepton-specific systematic variations.
The contribution from QCD multijet processes is determined from data with a method that exploits the fact that neither signal events nor events from other backgrounds produce final states with same-sign leptons at any significant level. Data events with same-sign isolated leptons are used to model the shape of the multijet background, after all non-QCD sources of background contributing to this selection are subtracted using simulation. The yield in this region is normalized with the ratio of the number of events with nonisolated OS leptons to the number of events with nonisolated same-sign leptons. Here, nonisolated leptons are those muons (electrons) that fail the relative isolation requirements described in Section~\ref{reconstruction}. All non-QCD sources of background, estimated with simulated events, are subtracted from the numerator and the denominator before computing the ratio.
The contributions from diboson+jets, $\PW$+jets, and SM Higgs boson production are estimated from simulation.
\subsection{\texorpdfstring{Event selection in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel}{Event selection in the bbllnunu channel}}
\label{llnunuselection}
Candidate events in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel are reconstructed from the physics objects, as described above. The two leptons (muons or electrons) are required to have OS, and the invariant mass of the two leptons, $m_{\ell\ell}$, is required to exceed 76\GeV. One of the Higgs bosons is formed from the pair of \PQb jets with the highest output value of the \PQb tagging discriminant, and the second Higgs boson is reconstructed as a combination of the two charged leptons and the \ptvecmiss, representing the visible and invisible decays products, respectively, of the pair of \PZ bosons. The requirement on $m_{\ell\ell}$ reduces the contribution from resonant $\mathrm{X}\to\Ph\Ph$ production in the $\PQb\PQb\PW\PW$ final state, and makes this measurement orthogonal to a previous $\PQb\PQb\PW\PW$ search~\cite{bbWW}, where only events with $m_{\ell\ell}$ below 76\GeV were considered.
For the Higgs boson decaying to a pair of \PZ bosons, the two neutrinos are not reconstructed in the detector, and a pseudo invariant mass of the Higgs boson is used to approximate the incomplete momentum four-vector of the $\PH$. The pseudo invariant mass is formed from the momenta of the two charged leptons coming from one of the \PZ bosons and the four-vector $(\ptmiss, \ptvecmiss)$ approximating that of the two-neutrino system coming from the other of the \PZ bosons, where the $z$ component of \ptvecmiss is zero. While the true invariant mass of the pair of neutrinos is not zero but is equal to the invariant mass of the parent \PZ boson, that boson is off the mass shell and has relatively low mass.
In order to suppress the backgrounds from the \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace and QCD multijet processes as well as from the SM Higgs boson production via the $\PZ\Ph$ process, a requirement is imposed on the minimum \ptmiss, which is 40 (75)\GeV for the $m_X$ of 260--300 (350--600)\GeV, and 100\GeV for higher $m_X$.
Three regions, a signal region (SR) and two control regions (CRs), are further defined using $m_{\ell\ell}$ and the invariant mass $m^{\Ph}_{\PQb\PQb}$ of the two \PQb jets. The SR is defined by the requirements $76 < m_{\ell\ell} <106\GeV$ and $90< m^{\Ph}_{\PQb\PQb} <150\GeV$. A first CR, dominated by \ttbar events, is defined by $m_{\ell\ell} >106\GeV$ and $90< m^{\Ph}_{\PQb\PQb} <150\GeV$. A second CR, enriched in \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace events, is defined by requiring $20< m^{\Ph}_{\PQb\PQb} <90\GeV$ or $m^{\Ph}_{\PQb\PQb} >150\GeV$, and $76 < m_{\ell\ell} <106\GeV$. The two CRs and the SR are used to estimate the backgrounds in the SR via a simultaneous fit.
To further differentiate signal from backgrounds in the SR, a BDT discriminant is trained using all simulated signal and background processes described in Section~\ref{samples}. Of the nine input distributions to the BDT, the most important variables in the low-mass range are the separation between the pair of $\PQb$ tagged jets $\Delta R^{\Ph}_{\PQb\PQb}$, \ptmiss, and $m^{\Ph}_{\PQb\PQb}$. In the high-mass region, $m^{\Ph}_{\PQb\PQb}$ and $\Delta R^{\Ph}_{\PQb\PQb}$ are also the most significant, together with the separation between the pair of charged leptons $\Delta R_{\ell\ell}$, which becomes more important as the resonance mass increases. Two BDTs are trained for each lepton channel and each resonance spin hypothesis, one for $m_X$ in the range of 250--450\GeV, and another one for the $m_X$ above 450\GeV. A minimum BDT value is required for candidates in the SR, optimized for each narrow $m_X$ hypothesis to yield the best 95\% confidence level (\CL) expected upper limit on resonant production. The BDTs are configured with the same classification and loss function parameters described in Section~\ref{lljjselection}.
Finally, a quantity closely correlated with the energy-momentum four-vector of the $\Ph\Ph$ system is constructed as the vector sum of the four-momenta of the two leptons and the two $\PQb$ jets, and the four-vector formed as $(\ptmiss, \ptvecmiss)$ for the neutrinos, as described above. Subsequently, the pseudo transverse mass of the $\Ph\Ph$ system is defined as $\widetilde{M}_{\mathrm{T}}(\Ph\Ph) = \sqrt{\smash[b]{E^2 - p_{z}^2}}$, where $E$ and $p_{z}$ are the energy and the $z$ component of the combined four-vector.
The $\widetilde{M}_{\mathrm{T}}(\Ph\Ph)$ distributions for data and expected backgrounds, in the combined signal and CRs, will be used as the final discriminating variable in the analysis.
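A minimal sketch of the $\widetilde{M}_{\mathrm{T}}(\Ph\Ph)$ construction is given below; the kinematic values are placeholders chosen only to make the example runnable.
\begin{verbatim}
import math

def p4_from_pt_eta_phi_m(pt, eta, phi, m):
    px, py, pz = pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta)
    return [math.sqrt(m * m + px * px + py * py + pz * pz), px, py, pz]

def pseudo_mt_hh(leptons, bjets, met_pt, met_phi):
    """sqrt(E^2 - pz^2) of the (ll + bb + 'neutrino') system, where the neutrino
    pair is approximated by the massless four-vector (ptmiss, ptvecmiss, pz = 0)."""
    met = [met_pt, met_pt * math.cos(met_phi), met_pt * math.sin(met_phi), 0.0]
    total = [0.0, 0.0, 0.0, 0.0]
    for v in leptons + bjets + [met]:
        total = [a + b for a, b in zip(total, v)]
    e, _, _, pz = total
    return math.sqrt(max(e * e - pz * pz, 0.0))

# Placeholder kinematics (GeV), for illustration only.
leps = [p4_from_pt_eta_phi_m(45.0, 0.3, 0.1, 0.0),
        p4_from_pt_eta_phi_m(30.0, -0.5, 2.8, 0.0)]
bjets = [p4_from_pt_eta_phi_m(60.0, 1.0, -1.2, 4.8),
         p4_from_pt_eta_phi_m(40.0, -0.8, 1.9, 4.8)]
print(pseudo_mt_hh(leps, bjets, met_pt=80.0, met_phi=-2.5))
\end{verbatim}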
After the event selection in this channel is applied, the signal $\Ph\Ph$ events in the SR come predominantly from the decays with the \ensuremath{\PQb\PQb\PZ\PZ}\xspace intermediate state (80\%) with a smaller contribution from the $\PQb\PQb\PWp\PWm$ intermediate state (20\%). Both intermediate states are used to calculate the limit on $\Pp\Pp\to\mathrm{X}\to\Ph\Ph$ in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel.
\subsection{\texorpdfstring{Background estimation in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel}{Background estimation in the bbllnunu channel}}
\label{llnunubackground}
The dominant sources of background in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel are \ttbar and \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace production. Several other background processes contribute, including single top quark and diboson production, and SM Higgs boson production in association with a \PZ boson. While these are typically minor backgrounds, their contribution can vary over the $m_X$ range. The QCD multijet background is negligible across the full mass range because of the stringent selection on $m_{\ell\ell}$.
The event yields in the signal and two CRs, which are dominated by \ttbar and \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace events, are determined from data. The corresponding normalizations of the simulated $\widetilde{M}_{\mathrm{T}}(\Ph\Ph)$ distributions are free parameters in the simultaneous fit of all three regions. The remaining backgrounds are estimated from simulation and normalized according to their theoretical cross sections.
\section{Systematic uncertainties}
\label{systematics}
The dominant source of systematic uncertainty in this analysis is the jet energy scale (JES) uncertainty, which is of the order of a few percent and is estimated as a function of jet \pt and $\eta$~\cite{Khachatryan:2016kdb}. The $\eta$-dependent jet energy resolution (JER) correction factors are varied by $\pm$1 standard deviation in order to estimate the effect of the uncertainty. Uncertainties in the JES are propagated to the calculation of $\ptmiss$. A residual $\ptmiss$ uncertainty of 3\% is applied in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel to take into account the effect, at low $\ptmiss$, of the unclustered energy from neutral hadrons and photons that do not belong to any jet, and from jets with $\pt <10\GeV$.
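As a simplified illustration of how a jet energy scale variation is propagated to \ptmiss, the sketch below applies a single flat fractional shift to all jets and recomputes the missing transverse momentum; the flat 2\% value is an assumption, whereas the actual corrections are \pt- and $\eta$-dependent.
\begin{verbatim}
def vary_jets_and_met(jets, met_px, met_py, scale_shift):
    """Scale each jet transverse momentum by (1 + scale_shift) and propagate
    the change to ptmiss, which absorbs the opposite of the total shift.
    `jets` is a list of (px, py) pairs; scale_shift = +0.02 mimics a +1 sigma
    jet energy scale variation."""
    varied = []
    for px, py in jets:
        dpx, dpy = scale_shift * px, scale_shift * py
        varied.append((px + dpx, py + dpy))
        met_px -= dpx      # what is added to the jets is removed from ptmiss
        met_py -= dpy
    return varied, (met_px ** 2 + met_py ** 2) ** 0.5

# Placeholder values (GeV): two jets and an initial ptmiss vector.
jets = [(60.0, 10.0), (-25.0, 35.0)]
print(vary_jets_and_met(jets, met_px=-20.0, met_py=-40.0, scale_shift=0.02))
\end{verbatim}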
An uncertainty of 2\% per muon in the muon reconstruction, identification, and isolation requirements, as well as a 1\% per muon uncertainty in the muon HLT efficiency are assigned~\cite{MuId}. A per-muon uncertainty due to measured differences of tracking efficiency in data and simulation is estimated to be 0.5\% for muon $\pt <300\GeV$ and 1.0\% for muon $\pt >300\GeV$~\cite{wprime}. Per-electron uncertainties in the efficiency for electron trigger, identification and isolation requirements, estimated by varying the scale factors within their uncertainties, are applied. The uncertainties in the efficiency scale factors are generally $<$2\% for trigger and $<$3\% for identification and isolation~\cite{EMId}. The effect of the variations on the yield of the total background is $<$1\%. Uncertainties in the data-to-simulation correction factors of the \PQb tagging and of light flavor mis-tagging efficiencies are included.
Normalization and shape uncertainties are assigned to the modeling of the backgrounds. An uncertainty in the shape of the signal and background models is determined by varying the factorization and the renormalization scales between their nominal values and 0.5 to 2.0 times the nominal values in the simulated signal and background samples. The variations where one scale increases and the other decreases are not considered. Each of the remaining variations of the renormalization and the factorization scales are considered, and the maximum variation among all the samples with respect to the nominal sample used in the analysis is taken as the systematic uncertainty, which is found to be 5--7\% depending on the process. An uncertainty in the signal acceptance and background acceptance and cross section due to PDF uncertainties and to the value chosen for the strong coupling constant is estimated by varying the NNPDF set of eigenvectors within their uncertainties, following the PDF4LHC prescription~\cite{PDF4LHC}. Statistical uncertainties in the simulated samples for \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace and $\ttbar$ background estimates result in uncertainties on the data-derived normalization factors in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel.
An uncertainty of 2.5\% is assigned to the determination of the integrated luminosity~\cite{lumi2016}. The uncertainty in the PU condition and modeling is assessed by varying the inelastic $\Pp\Pp$ cross section from its central value by $\pm$4.6\%~\cite{pu2016}.
All the uncertainties discussed are applied to all background and signal simulated samples. The sensitivity of the presented search is limited by the statistical uncertainties.
\section{Results}
\label{results}
Results are obtained by performing a binned maximum likelihood fit of the BDT distributions for the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel, and of the $\Ph\Ph$ pseudo transverse mass simultaneously in the SR and two CRs for the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel.
The data and background predictions at final selection level in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel are shown in Fig.~\ref{figapp:bdt}, for the distributions of the BDT discriminant for signal masses of 500 and 1000\GeV, in the muon and electron final states. Studies performed on all 48 BDT discriminants show stability of the trainings with no evidence of bias or overtraining.
\begin{figure*}[htbp]
\centering
{\includegraphics[width=.49\textwidth]{Figure_001-a.pdf}}
{\includegraphics[width=.49\textwidth]{Figure_001-b.pdf}}
{\includegraphics[width=.49\textwidth]{Figure_001-c.pdf}}
{\includegraphics[width=.49\textwidth]{Figure_001-d.pdf}}
\caption{Comparison of the BDT discriminant for $m_\mathrm{X} =500$ and 1000\GeV after the final selection in the muon (upper row) and electron (lower row) final states of the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel. The signals of an RS1 radion with mass of 500 (left) and 1000\GeV (right) are normalized to a cross section of 1\unit{pb} for the $\Pp\Pp\to\mathrm{X}\to\Ph\Ph$ process. The shaded area represents the combined statistical and systematic uncertainties in the background estimate.}
\label{figapp:bdt}
\end{figure*}
Figure~\ref{MCcomparisons} shows the $\Ph\Ph$ pseudo transverse mass distributions in the data, background estimates, and spin-2 RS1 graviton for the 300\GeV mass hypothesis, after the final selection in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel.
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.32\textwidth]{Figure_002-a.pdf}
\includegraphics[width=0.32\textwidth]{Figure_002-b.pdf}
\includegraphics[width=0.32\textwidth]{Figure_002-c.pdf} \\
\includegraphics[width=0.32\textwidth]{Figure_002-d.pdf}
\includegraphics[width=0.32\textwidth]{Figure_002-e.pdf}
\includegraphics[width=0.32\textwidth]{Figure_002-f.pdf}
\caption{Pseudo transverse mass of the reconstructed $\Ph\Ph$ candidates, in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel, for data, simulated spin-2 RS1 graviton signal with a mass of 300\GeV, and simulated backgrounds scaled according to the fit results. The upper and lower
rows correspond to the muon and electron channels. For each row, the left and middle plots are for the \ensuremath{\PZ/\gamma^{*}\text{+jets}}\xspace and \ttbar control regions, and the right plot is for the signal region. The signals are normalized to 1\unit{pb} for the $\Pp\Pp\to\mathrm{X}\to\Ph\Ph$ process.
The shaded area represents the combined statistical and systematic uncertainties in the background estimate.}
\label{MCcomparisons}
\end{figure*}
The systematic uncertainties are represented by nuisance parameters that are varied in the fit according to their probability density functions, prescribed as follows. A log-normal probability density function is assumed for the nuisance parameters affecting the event yields of the various background contributions, whereas systematic uncertainties that affect the distributions are represented by nuisance parameters whose variation corresponds to a vertical interpolation of the distribution in each bin, using a sixth-order polynomial between the upward and downward shifts of one standard deviation and a linear extrapolation beyond them~\cite{ATLAS:2011tau}.
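The interpolation just described can be made explicit. In the short Python sketch below (our own illustration with invented numbers, not the actual fitting code), the bin content follows the $\pm1$ standard-deviation templates linearly for $|\theta|\geq1$, while for $|\theta|<1$ it is given by a sixth-order polynomial whose coefficients are fixed by matching the value and the first two derivatives at $\theta=\pm1$:
\begin{verbatim}
import numpy as np

def vertical_interp(nominal, up, down, theta):
    """Bin yield as a function of a shape nuisance parameter theta."""
    d_up = up - nominal     # shift reached at theta = +1
    d_dn = nominal - down   # shift reached (with opposite sign) at theta = -1
    if theta >= 1.0:
        return nominal + theta * d_up
    if theta <= -1.0:
        return nominal + theta * d_dn
    # delta(theta) = sum_{i=1..6} a_i theta^i with six matching conditions
    i = np.arange(1, 7)
    A = np.vstack([(+1.0) ** i, i * (+1.0) ** (i - 1), i * (i - 1) * (+1.0) ** (i - 2),
                   (-1.0) ** i, i * (-1.0) ** (i - 1), i * (i - 1) * (-1.0) ** (i - 2)])
    b = np.array([d_up, d_up, 0.0, -d_dn, d_dn, 0.0])
    a = np.linalg.solve(A, b)
    return nominal + np.dot(a, theta ** i)

# example: nominal 100 events, +1 sigma template 110, -1 sigma template 95
print(vertical_interp(100.0, 110.0, 95.0, 0.5))
\end{verbatim}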
The statistical uncertainty from the limited number of events in the simulated samples is taken into account, for each bin of the discriminant distributions, by assigning a nuisance parameter to scale the sum of the process yields in that bin according to the statistical uncertainty using the Barlow--Beeston ``lite'' prescription~\cite{BARLOW1993219,Conway:2011in}.
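A single-channel sketch of this ``lite'' treatment is given below (our own illustration with invented numbers; the actual fit uses the full statistical machinery). One multiplicative nuisance parameter per bin rescales the total expected yield and is constrained, here with a Gaussian term, by the relative MC statistical uncertainty of that bin:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

data     = np.array([12.0, 30.0,  7.0])   # observed events per bin
expected = np.array([10.0, 28.0,  9.0])   # total simulated yield per bin
mc_err   = np.array([ 1.0,  2.0,  1.5])   # MC statistical uncertainty per bin

def nll(gammas):
    """Poisson negative log-likelihood with one Barlow-Beeston 'lite'
    scale factor per bin, Gaussian-constrained by the MC uncertainty."""
    mu = expected * gammas
    poisson = np.sum(mu - data * np.log(mu))
    constraint = np.sum((gammas - 1.0) ** 2 / (2.0 * (mc_err / expected) ** 2))
    return poisson + constraint

fit = minimize(nll, x0=np.ones_like(expected), method="L-BFGS-B",
               bounds=[(1e-3, None)] * len(expected))
print(fit.x)   # fitted per-bin scale factors, all close to unity
\end{verbatim}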
In both channels the data distributions are well reproduced by the SM background processes. Upper limits on the resonance production cross section are set using the asymptotic \CLs modified frequentist approach~\cite{Junk:1999kv,Read_2002,AsympOpt}.
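The logic of the \CLs criterion can be illustrated with a single-bin counting experiment; the toy-based sketch below is our own illustration (the limits in this paper are obtained from the full binned likelihood in its asymptotic approximation), and the event counts are invented:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

def cls_counting(n_obs, b, s, n_toys=200000):
    """Toy-based CLs for one bin with known background b and signal s."""
    toys_sb = rng.poisson(b + s, size=n_toys)
    toys_b  = rng.poisson(b,     size=n_toys)
    cl_sb = np.mean(toys_sb <= n_obs)   # P(n <= n_obs | s+b)
    cl_b  = np.mean(toys_b  <= n_obs)   # P(n <= n_obs | b)
    return cl_sb / cl_b

# 5 observed events on an expected background of 4.2
for s in (2.0, 6.0, 10.0):
    print(s, cls_counting(5, 4.2, s))   # values below 0.05 are excluded at 95% CL
\end{verbatim}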
The observed and expected 95\% \CL upper limits on $\sigma(\Pp\Pp\to\mathrm{X}\to\PH\PH\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace)$ in the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace and \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channels as a function of $m_X$ are shown in Fig.~\ref{fig:limit_plots_channel}, together with the NLO predictions for the RS1 radion, RS1 KK graviton, and N2HDM resonance production cross sections, where $\PH$ can represent either the SM Higgs boson or an additional Higgs boson from an extended electroweak sector. As two different BDTs are defined for the search in the low- and high-mass ranges of the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel, the limit calculation is performed with both of the BDTs at the boundary of the two ranges, around 450\GeV, where a discontinuity is seen.
\begin{figure*}[!htbp]
\centering
{\includegraphics[width=.49\textwidth]{Figure_003-a.pdf}}
{\includegraphics[width=.49\textwidth]{Figure_003-b.pdf}}
{\includegraphics[width=0.49\textwidth]{Figure_003-c.pdf}}
{\includegraphics[width=0.49\textwidth]{Figure_003-d.pdf}}
\caption{
Expected (black dashed line) and observed (black solid line) limits on the cross section of resonant $\PH\PH$ production times the branching fraction of $\PH\PH\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace$ as a function of the resonance mass for the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace (upper row) and \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace (lower row) channels, where $\PH$ can represent either the SM Higgs boson or an additional Higgs boson from an extended electroweak sector. The spin-0 case is shown on the left and the spin-2 case is shown on the right. The red solid line shows the theoretical prediction for the cross section of an RS1 radion with $\lambda_{\mathrm{R}}=1\TeV$ and $kL=35$ (left) and an RS1 KK graviton with $\tilde{k} = 0.1$ (right). In the spin-0 case only, the blue (green) line shows the decays of ${\PH}_3\to{\PH}_1{\PH}_1/{\PH}_1{\PH}_2/{\PH}_2{\PH}_2\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace$ in the N2HDM formulation, with $\tan\beta=0.5\ (2.0)$, the scalar ${\PH}_3$ vev set to 45\GeV, and the mixing angles $\alpha_1$, $\alpha_2$, $\alpha_3$ set to 0.76, 0.48, and 1.00, respectively. The correction factor based on the relative partial width of ${\PH}_3$ to two gluons is around 3.0 (0.7) for $\tan\beta=0.5\ (2.0)$. In the lower row, the vertical black dashed line indicates the resonance mass of 450\GeV, a mass point where the BDT used in the analysis is switched from the one trained for low mass resonance to the one trained for high mass resonance.
}
\label{fig:limit_plots_channel}
\end{figure*}
Combined 95\% \CL upper limits from both channels on $\sigma(\Pp\Pp\to\mathrm{X}\to\PH\PH\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace)$ as a function of $m_X$ are shown in Fig.~\ref{fig:limit_plots_combined}, together with the theoretical predictions for the RS1 radion and RS1 KK graviton. In the $m_X$ range between 260 and 1000\GeV, the limits on the production cross section times branching fraction of the RS1 radion and the RS1 KK graviton range from 0.1 to 5.0\unit{pb} and from 0.1 to 3.6\unit{pb}, respectively. In the spin-0 case, the predictions of the N2HDM model with $\tan\beta=0.5\mathrm{\ and\ }2.0$ are shown, for all ${\PH}_3\to{\PH}_1{\PH}_1/{\PH}_1{\PH}_2/{\PH}_2{\PH}_2\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace$ decays. In the $\tan\beta=0.5$ case, the model can be excluded for ${\PH}_3$ in the $m_X$ range of 360--620\GeV. In comparison to previous searches in other channels, the sensitivity achieved here for the RS1 radion and RS1 KK graviton models is consistent with the lower value of the $\Ph\Ph$ branching fraction into the \ensuremath{\PQb\PQb\PZ\PZ}\xspace final state relative to the other channels.
\begin{figure*}[!htbp]
\centering
{\includegraphics[width=.49\textwidth]{Figure_004-a.pdf}}
{\includegraphics[width=.49\textwidth]{Figure_004-b.pdf}}
\caption{
Expected (black dashed line) and observed (black solid line) limits on the cross section of resonant $\PH\PH$ production times the branching fraction of $\PH\PH\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace$ as a function of the mass of the resonance for the combination of the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace and \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channels, where $\PH$ can represent either the SM Higgs boson or an additional Higgs boson from an extended electroweak sector. The spin-0 case is shown on the left and the spin-2 case is shown on the right. The expected limits for each individual channel are shown with a red dashed line for the \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace channel and blue dashed line for the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace channel. The red solid lines show the theoretical prediction for the cross section of an RS1 radion with $\lambda_{\mathrm{R}}=1\TeV$ and $kL=35$ (left) and an RS1 KK graviton with $\tilde{k} = 0.1$ (right). In the spin-0 case only, the blue (green) line shows the decays of ${\PH}_3\to{\PH}_1{\PH}_1/{\PH}_1{\PH}_2/{\PH}_2{\PH}_2\to\ensuremath{\PQb\PQb\PZ\PZ}\xspace$ in the N2HDM formulation, with $\tan\beta=0.5\ (2.0)$, the scalar ${\PH}_3$ vev set to 45\GeV, and the mixing angles $\alpha_1$, $\alpha_2$, $\alpha_3$ set to 0.76, 0.48, and 1.00, respectively. The correction factor based on the relative partial width of ${\PH}_3$ to two gluons is around 3.0 (0.7) for $\tan\beta=0.5\ (2.0)$. The vertical black dashed line indicates the resonance mass of 450\GeV, a mass point where the BDT used in the analysis is switched from the one trained for low mass resonance to the one trained for high mass resonance.
}
\label{fig:limit_plots_combined}
\end{figure*}
Finally, the results are also interpreted as a function of both $m_X$ and $\lambda_{\mathrm{R}}$ ($\tilde{k}$) for the radion (graviton) case, with $\lambda_{\mathrm{R}}<0.3\TeV$ ($\tilde{k}>0.6$) excluded for all values of $m_X$ considered, as shown in Fig.~\ref{fig:limit_plots_combined_2D}.
\begin{figure*}[!htbp]
\centering
{\includegraphics[width=.49\textwidth]{Figure_005-a.pdf}}
{\includegraphics[width=.49\textwidth]{Figure_005-b.pdf}}
\caption{
The expected and observed exclusion limits at 95\% \CL on the RS1 radion with $kL=35$ (RS1 KK graviton) hypothesis in the $\lambda_{\mathrm{R}}$ ($\tilde{k}$) versus mass plane for the individual \ensuremath{\PQb\PQb\ell\ell\mathrm{jj}}\xspace (red) and \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace (blue) channels and their combination (black). The dark green and light yellow expected limit uncertainty bands represent the 68 and 95\% confidence intervals. Solid lines represent the observed limits and dashed lines represent the expected limits. The shaded region is excluded by the current limits. The vertical black dashed line indicates the resonance mass of 450\GeV, a mass point where the BDT used in the \ensuremath{\PQb\PQb\ell\ell\nu\nu}\xspace analysis is switched from the one trained for low mass resonance to the one trained for high mass resonance.
}
\label{fig:limit_plots_combined_2D}
\end{figure*}
\section{Summary}
\label{summary}
A search for the production of a narrow-width resonance decaying into a pair of Higgs bosons that subsequently decay into the \ensuremath{\PQb\PQb\PZ\PZ}\xspace final state is presented. The analysis is based on data collected with the CMS detector during 2016, in proton-proton collisions at the LHC, corresponding to an integrated luminosity of 35.9\fbinv. The final states considered are the ones where one of the \PZ bosons decays into a pair of muons or electrons, and the other \PZ boson decays either into a pair of quarks or a pair of neutrinos. Upper limits at 95\% confidence level are placed on the production of narrow-width spin-0 or spin-2 particles decaying to a pair of Higgs bosons, in models with and without an extended Higgs sector. For a resonance mass between 260 and 1000\GeV, limits on the production cross section times branching fraction of a spin-0 and spin-2 resonance range from 0.1 to 5.0\unit{pb} and from 0.1 to 3.6\unit{pb}, respectively. These results set limits on the parameter space of the bulk Randall--Sundrum radion, the Kaluza--Klein excitation of the graviton, and the N2HDM. For specific choices of parameters the N2HDM can be excluded in a mass range between 360 and 620\GeV for a resonance decaying to two Higgs bosons. This is the first search for resonant Higgs boson pair production in the \ensuremath{\PQb\PQb\PZ\PZ}\xspace channel.
\begin{acknowledgments}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract Nos.\ 675440, 752730, and 765710 (European Union); the Leventis Foundation; the A.P.\ Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science -- EOS" -- be.h project n.\ 30820817; the Beijing Municipal Science \& Technology Commission, No. Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe" -- 390833306; the Lend\"ulet (``Momentum") Program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Higher Education, project no. 02.a03.21.0005 (Russia); the Tomsk Polytechnic University Competitiveness Enhancement Program and ``Nauka" Project FSWW-2020-0008 (Russia); the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
\end{acknowledgments}
\section{Introduction}
Recently, the discovery of the $AdS/CFT$ correspondence \cite{Maldacena:1997re,Aharony:1999ti,Witten:1998qj,Gubser:1998bc} has created great interest in the ${\cal N}=4$ $SUSY$
$SU(N)$ gauge theory because the duality allows the investigation of several aspects of the gauge theory (see for instance \cite{Maldacena:1998im,Witten:1998zw,Kovchegov:2007pq,Brower:2010wf,Beuf:2009cx,Albacete:2008vs,Janik:2010we,Heller:2007qt,Kovchegov:2009du,Lin:2009pn,Janik:2005zt,Peschanski:2010cs,Gubser:2008pc,Gubser:2008yx,Albacete:2008dz,Iatrakis:2010jb,Gursoy:2010fj,Gursoy:2009kk,Kovchegov:2010zg,Cornalba:2010vk,Cornalba:2009ax,Taliotis:2010pi,Lin:2010cb,Athanasiou:2010pv,D'Eramo:2010ak,Dumitru:2009ni,Noronha:2009ia}) at strong coupling through gravitational and generally classical computations.
In particular, one of the first computations using the gauge/gravity duality was the evaluation of the heavy-quark potential at large 't Hooft coupling (and large $N$), both at zero temperature \cite{Maldacena:1998im} and at finite temperature \cite{Rey:1998bq,Brandhuber:1998bs,Albacete:2008dz,Albacete:2009bp}. In particular, in \cite{Albacete:2008dz,Albacete:2009bp} it was found that at large enough distances the real part of the potential has a power-law fall-off, extending the results obtained earlier in the literature \cite{Rey:1998bq,Brandhuber:1998bs}. The method of \cite{Albacete:2008dz,Albacete:2009bp} involved an analytic continuation of the solution of \cite{Rey:1998bq,Brandhuber:1998bs}, applying the ideas of \cite{Albacete:2008ze,Taliotis:2009ne,Albacete:2008vv}.
Amazingly enough, exactly the same power-law fall-off was found in \cite{Gale:1987en} for pQCD at finite temperature. This fact partially motivated investigating the ${\cal N}=4$ $SUSY$ $SU(N)$ gauge theory at non-zero temperature in the framework of a perturbative, field-theoretical approach.
Therefore, in this work we compute the heavy-quark singlet potential of ${\cal N}=4$ $SUSY$ at non-zero temperature at weak 't Hooft coupling. We compute the Debye mass and apply, in our case, the technique of \cite{Gale:1987en} that had previously been applied to QCD, and investigate whether we may predict a similar power-law fall-off of the potential. Such an investigation allows for a comparison with the result of \cite{Gale:1987en} obtained in the framework of pQCD and with the result of \cite{Albacete:2008dz,Albacete:2009bp} obtained in the framework of $AdS/CFT$. We note that the literature is rich in perturbative calculations at finite temperature, both for the heavy-quark potential in $QCD$ \cite{Kuti:1980gh,Dumitru:2010id,Gale:1987en,Kajantie:1982xx,DeGrand:1986uf} and for several (other) contexts of the ${\cal N}=4$ $SUSY$ theory \cite{Chesler:2006gr,Fotopoulos:1998es,KorthalsAltes:2009dp,Blaizot:2006tk,Yamada:2006rx}.
\vspace{0.1in}
We organize this work as follows:
\vspace{0.1in}
In section \ref{SU} we set up the problem, arguing that the singlet potential is a gauge-invariant quantity and hence it makes sense to compute it (perturbatively). We also give an elementary introduction to the objects from thermal field theory that we need in this work.
Section \ref{diag} deals with the relevant diagrams involved in the calculation. For a subset of the diagrams under consideration, we borrow results from the literature calculated for $QCD$ and we show how these results may be adapted for our case.
In section \ref{poten} we use the results of the previous sections in order to evaluate the quark-pair potential. We initially compute the Debye mass, giving rise to the expected Yukawa fall-off, and then we extend the computation in the spirit of \cite{Gale:1987en}.
Finally, in section \ref{conc} we summarize and discuss our results. In particular, we find that the potential has a power-law fall-off (at sufficiently large distances) which agrees precisely with \cite{Gale:1987en} and (the absolute value of) \cite{Albacete:2008dz,Albacete:2009bp}.
The notation and conventions we use, as well as many useful formulas, may be found in Appendix \ref{A}. In the rest of the Appendices we explicitly write the Lagrangian for ${\cal N}=4$ $SUSY$ and derive the Feynman rules. We also show which diagrams contribute to the calculation we are interested in and evaluate in great detail (one of) the relevant diagram(s) in order to exhibit the general idea behind these sorts of calculations.
\section{Setting up the Problem}\label{SU}
\subsection{$q{\bar q}$ potential and gauge invariance}\label{GI}
In $QED$, the magnetic and electric fields are gauge invariant. In non-abelian gauge theories, on the other hand, gauge invariance (of the chromo-electromagnetic field) does not hold, as a consequence of the presence of the non-linear terms in the field strength tensor $F_{\mu \nu}^a$. However, one gauge-invariant quantity one may construct is the free energy (potential) of a static, color-singlet (total color charge zero), quark-antiquark pair as a function of the separation $r$ of the pair \cite{Kapusta:2006pm}. As the quantity we wish to compute is gauge invariant, we choose to compute it in the Temporal Axial Gauge ({\bf TAG}).
\subsection{Elements of thermal field theory}
In this subsection we present the minimum knowledge about field theory at finite temperature that one needs in order to perform the calculation we are interested in.
The self energy of the gauge boson (2-point irreducible diagram\footnote{Whose precise definition is given below in equation (\ref{Dexact}).}) at finite temperature may be decomposed as
\begin{align}\label{gse}
\Pi^{\mu \nu}=G(q^0,|{\bf q}|)P_T^{\mu \nu}+F(q^0,|{\bf q}|)P_L^{\mu \nu}
\end{align}
where $q^{\mu}$ is the four-momentum of the gauge boson, $G$ and $F$ are scalar functions of $q^0$ and $|{\bf q}|$ while $P_T$ and $P_L$ are the transverse and longitudinal projection tensors (to ${\bf q}$) respectively and their explicit form is given by
\begin{align}\label{ptl}
P_T^{\mu \nu}= \left( \delta_{ij}-\frac{q_iq_j}{{\bf q}^2}\right) g^{i\mu}g^{j\nu} \hspace{0.3in} P_L^{\mu \nu}=(-g^{\mu \nu}+\frac{q^{\nu}q^{\mu}}{q^2})-P_T^{\mu \nu}, \hspace{0.1in} \mu={0,i}.
\end{align}
These tensors satisfy
\begin{align}\label{proj}
q_iP_T^{i \nu}=q_{\mu}P_T^{\mu \nu}=q_{\mu}P_L^{\mu \nu}=P_T^{\mu \kappa}P_{L \hspace{0.01in} \kappa}\hspace{0.01in}^{ \nu}=0.
\end{align}
Motivated by equations (\ref{proj}) and (\ref{ptl}), it is natural to associate $G$ with the chromomagnetic modes and $F$ with the chromoelectric ones. These factors may be expressed in terms of the diagonal components of $\Pi^{\mu \nu}$ as
\begin{align}\label{GF}
F=\frac{k^2}{{\bf k}^2}\Pi^{00} \hspace{0.3in} F+2G=\Pi^{ii}-\Pi^{00}.
\end{align}
Taking into account the expression for the bare propagator of the gluon $D^{\mu \nu}_0$ from figure \ref{propFR}, written in the TAG, one may, formally at least, compute the exact (dressed) propagator to all orders in perturbation theory, whose non-zero components are
\begin{align}\label{Dexact}
D^{i j}=D_0^{i k} \sum_{N=0}^{\infty}[(-\Pi D_0)^N]_{k}\hspace{0.015in}^{j}=\frac{1}{G-k^2} \left( \delta_{ij}-\frac{q_iq_j}{{\bf q}^2}\right) +\frac{1}{F-k^2}\frac{k^2}{k_0^2}\frac{k^ik^j}{{\bf k}^2} \hspace{0.15in} \mathrm{(TAG)}.
\end{align}
This expression in fact provides the definition for $\Pi^{\mu \nu}$.
Now let us suppose that we place two oppositely (chromo)charged fermions at a distance $r$ and seek the mutual force of the two charges, treating them as small (static) perturbations in the thermal medium. Then, in the spirit of linear response theory one may show \cite{Kapusta:2006pm} that the potential of the $q{\bar q}$
system as a function of the separation is given by
\begin{align}\label{qqV}
V(r)=Q_1Q_2\int \frac{d^3q}{(2 \pi)^3}\frac{e^{i {\bf q}\cdot{\bf r}}}{{\bf q}^2+F(0,{\bf q})}=Q_1Q_2\int \frac{d^3q}{(2 \pi)^3}\frac{e^{i {\bf q}\cdot{\bf r}}}{{\bf q}^2-\Pi^{00}(0,{\bf q})}.
\end{align}
where the last equality is a consequence of the (left) equation (\ref{GF}) and (\ref{gmn}). Thus, we conclude that in order to compute the potential we only need the (00) component of $\Pi^{\mu \nu}$. In this work, we compute it perturbatively to $O(g^2T^2)$ (see section \ref{diag}).
\subsection{Approximations}
As we will see, $\Pi^{00}$ depends on the temperature $T$ and the chemical potentials $\mu_i$\footnote{The index $i$ is associated with every global symmetry $(i)$ that gives rise to $\mu_i$.} through the ratio $\mu_i/T$, that is
\begin{align}\label{PiTmu}
\Pi^{00}=\Pi^{00}(q^0,{\bf q};T, \mu_i/T).
\end{align}
We will work in the region where
\begin{align}\label{approx}
\frac{1}{\mu_i} \gg \frac{1}{T} \hspace{0.1in}\forall \hspace{0.015in} i \hspace {0.6in} r\gg \frac{1}{T} \hspace{0.015in}.
\end{align}
In particular, the right approximation above implies that the integral in equation (\ref{qqV}) receives its main contribution from ${\bf q} \rightarrow 0 $ and as a result (\ref{qqV}) reduces to
\begin{align}\label{appV}
V(r)=Q_1Q_2\int \frac{d^3q}{(2 \pi)^3}\frac{e^{i {\bf q}\cdot{\bf r}}}{{\bf q}^2- \lim \limits_{{\bf q} \to 0} \Pi^{00}(0,{\bf q})}.
\end{align}
where in the color singlet state, the product of charges is
\begin{align}\label{q1q2}
Q_1Q_2=(gT^a_G) \otimes (-g T^a_{{\bar G}}) =-C_2(G) g^2=-N g^2.
\end{align}
The expression $T^a_G \otimes T^a_{{\bar G}}$ is the tensor product of the adjoint times the (anti)adjoint
representation, resulting in the Casimir operator, which in turn yields the expected factor of $N$.
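As a quick sanity check of the colour algebra used here and in the next section (namely $f^{acd}f^{bcd}=N\delta^{ab}$), one may verify the $SU(2)$ case, where the structure constants reduce to the Levi-Civita symbols; the few lines below are our own and purely illustrative:
\begin{verbatim}
import numpy as np

# structure constants of SU(2): f^{abc} = epsilon_{abc}
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c] = 1.0
    f[a, c, b] = -1.0

# f^{acd} f^{bcd} should equal N * delta^{ab} with N = 2
print(np.einsum('acd,bcd->ab', f, f))   # 2 times the identity matrix
\end{verbatim}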
\section{Evaluating the Relevant Diagrams} \label{diag}
The gauge boson self energy tensor is defined diagrammatically through figure \ref{1PI}.
\begin{figure}[h!]
\centering
\includegraphics*[scale=0.7,angle=0,clip=true]{1PI.eps} \hspace{1cm}
\caption{Diagrammatic definition of (the colored) $\Pi^{\mu \nu}$, suppressing color indices. There is no imaginary $i$ involved in the definition.}
\label{1PI}
\end{figure}
The diagrams one has to calculate in order to compute $V(r)$ to $O(g^2)$ are given in Appendix \ref{C}. Most of them have been calculated and may be found (in addition to other places) in \cite{Kapusta:2006pm}. The important point here is that no scalar propagator is involved in this computation, because according to subsection \ref{GI} the potential we compute involves a color singlet and hence the fermion pair should be of the same flavor. However, according to the Lagrangian (\ref{LN4}) and the resulting Feynman rules for scalars, figures \ref{Yuk}, the vertices change the flavor of the fermions (observe the presence of $\epsilon_{ijk}$ in the corresponding Feynman rules), so we conclude that scalar propagators do not contribute to the calculation under consideration.\footnote{In practice one may imagine that the two interacting fermions are massive, charged under color and couple (directly) only to the gauge bosons but not to the scalars. It is evident that these fermions are external to the ${\cal N}=4$ $SUSY$ particles.}
\subsection{Contribution of $\Pi^{00}_g$ due to the gauge loops}
The contribution due to the gluons arises from the diagrams of figure \ref{pig}. The result in the TAG, given by equation (8.65) of \cite{Kapusta:2006pm} and also found in \cite{Kajantie:1982xx}, is
\begin{align}\label{p00g}
\Pi^{00;ab}_{g \hspace{0.01in}mat}(q_0,|{\bf q}|)=&-\delta^{ab} \frac{g^2N}{4\pi^2} \int_0^{\infty} dk k N_B(k) \times
\mbox{Re} \Bigg[4-\frac{({\bf q}^2-2kq_0-q_0^2)(2k+q_0)^2}{2k^2(k+q_0)^2}+\frac{(2k+q_0)^2}{2k|{\bf q}|}\times\notag\\&
\left(1+\frac{(k^2+(k+q_0)^2-{\bf q}^2)^2}{4k^2(k+q_0)^2}\right)\ln \left( \frac{R_{g+}}{R_{g-}} \right) \Bigg], \hspace{0.3in} q_0=i 2\pi n T, \hspace{0.015in}k\equiv|{\bf k}|
\end{align}
where $N_B(k)$ is defined in (\ref{NFB}), $R_{g\pm}={\bf q}^2-2kq_0-q_0^2\pm 2k|{\bf q}| $
while the operator Re is defined in (\ref{Re}). In the limits in which we are interested (see (\ref{approx})) we eventually obtain that the matter (thermal) part of the contribution is
\begin{align}\label{p00gL}
\Pi^{00;ab}_{g \hspace{0.01in}mat}(0,|{\bf q}|\rightarrow 0)= -\delta^{ab} Ng^2\left(\frac{1}{3}T^2-\frac{1}{4}T |{\bf q}| +O({\bf q}^2\log({\bf q}^2/T^2))\right), \hspace{0.2in} q\ll T
\end{align}
where we have used the left equation of (\ref{IBF}) and (\ref{I4}) in order to extract the zeroth and first order in $|{\bf q}|$ respectively.
\subsection{Contribution of $\Pi^{00}_f$ due to the fermion loops}
This is the contribution due to the diagrams of figure \ref{pif}. For a Dirac fermion this contribution has been calculated in the literature and has been reproduced by us. For instance, equation (5.51) of \cite{Kapusta:2006pm} gives the answer (matter contribution) for an abelian gauge theory ($QED$). In order to adapt the answer to our case, we must multiply (5.51) by $f^{acd}f^{bcd}=\delta^{ab} N$ to account for the color factors, times $4$\footnote{Assuming they have the same chemical potentials $\mu_i$ for simplicity; in any case the $\mu_i$'s will not contribute (to zeroth order) in the view of (\ref{approx}).}
to account for the four fermions (one gaugino and three adjoint fermions; see (\ref{LN4})). Finally, we should multiply by $(1/2)$ in order to account for the fact that we deal with Weyl spinors, whose corresponding trace involves (four) sigma (instead of gamma) matrices; in the view of (\ref{traces}) the overall factor is $2$ instead of the $4$ which occurs with the gamma matrices. The antisymmetric part in the trace cancels because $\Pi^{00}$ is symmetric in the spacetime indices. Therefore, we effectively multiply equation (5.51) of \cite{Kapusta:2006pm} by $2N\delta^{ab}$, assuming massless fermions, yielding
\begin{align}\label{p00f}
\Pi^{00;ab}_{f \hspace{0.01in} mat}(q_0,|{\bf q}|)=&-2\delta^{ab} \frac{g^2N}{\pi^2} \mbox{Re} \int_0^{\infty} dk k (N^+_F(k)+N^-_F(k)) \times \notag\\&
\Bigg[1 + \frac{4kq_0-4k^2-q_0^2+q^2}{4 k |{\bf q}|} \ln \left( \frac{R_{f+}}{R_{f-}} \right) \Bigg],
\hspace{0.3in} q_0=i 2\pi n T
\end{align}
where $R_{f \pm}=q_0^2-{\bf q}^2-2q_0k\pm 2k|{\bf q}|$. In the limits with which we are concerned (see (\ref{approx}) and (\ref{appV})) we obtain that the matter (thermal) part is
\begin{align}\label{p00fL}
\Pi^{00;ab}_{f \hspace{0.01in}mat}(0,|{\bf q}|\rightarrow0)&=-4\delta^{ab} \frac{g^2N}{\pi^2} \int_0^{\infty} dk k N_F(k) \Bigg[1 - \frac{k}{ |{\bf q}|} \ln \left( \frac{R_{f+}(q^0=0)}{R_{f-}(q^0=0)} \right) \Bigg] +O({\bf q}^2) \notag\\&
=-\frac{2}{3}\delta^{ab}Ng^2T^2+O({\bf q}^2)
\end{align}
where in extracting the zeroth-order contribution in $|{\bf q}|$ we have used (the right integral of) (\ref{IBF}). We point out that, unlike in (\ref{p00gL}), there is no term linear in $|{\bf q}|$ present, and that in extracting the leading terms in $|{\bf q}|$ one should be careful (see (\ref{I3}) for one example).
\subsection{Contribution of $\Pi^{00}_{s_i}$ due to the scalar loops}
The corresponding diagrams are the two diagrams of figure \ref{pis}.
{\bf First Contribution $(s_1)$:} This is due to the diagram on the left of the figure and has been computed in the literature for the case of $\phi^4$ theory (see equation (3.48) of \cite{Kapusta:2006pm}). In order to adapt it to our case we should, according to the Feynman rule of figure \ref{gsFR} (in the middle), replace $\lambda \rightarrow g^2 \left(f^{cad}f^{dbe}+f^{cbd}f^{dae} \right)\delta^{ce} \delta_{ii} g^{00}=-6g^2N\delta^{ab}$, as expected.\footnote{A factor of $2$ exists in scalar QED as well, while $3N$ counts all the different complex colored scalars in the loop. The additional minus sign is traced simply to the fact that $g^2$ is used instead of $-\lambda$. The chemical potentials are again ignored.} We find that the contribution of this diagram is
\begin{align}\label{p00s1L}
\Pi^{00;ab}_{s_1 \hspace{0.01in}mat} (q_0,|{\bf q}|)= 6g^2 N \delta^{ab}T\sum_n \int \frac{d^3{\bf k}}{(2 \pi)^3}\frac{1}{(k^0_n)^2-k^2} =
-6 g^2N \delta^{ab}\int \frac{d^3{\bf k}}{(2 \pi)^3}\frac{1}{k} N_B(k) =-\delta^{ab} \frac{1}{2}g^2NT^2
\end{align}
where $k_n^0= i 2\pi nT$, $ n \in \mathbb{Z}$, $k \equiv |{\bf k}|$ and $N_B$ is given by (\ref{NFB}). The first equality is a direct application of the Feynman rules for thermal field theory, the second equality follows by carrying out the frequency sum using (\ref{sumB}), and the third equality uses the left integral of (\ref{IBF}). We note that this diagram is $q^{\mu}$ independent.
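The elementary thermal integrals underlying the $T^2$ terms above and below, $\int_0^{\infty} dk\, k\, N_B(k)=\pi^2T^2/6$ and $\int_0^{\infty} dk\, k\, N_F(k)=\pi^2T^2/12$, can also be checked numerically in a few lines (our own cross-check, written in units where $T=1$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T = 1.0   # work in units of the temperature

bose  = lambda k: k / (np.exp(k / T) - 1.0)   # k * N_B(k)
fermi = lambda k: k / (np.exp(k / T) + 1.0)   # k * N_F(k)

print(quad(bose,  0.0, np.inf)[0], np.pi**2 * T**2 / 6.0)    # both ~ 1.6449
print(quad(fermi, 0.0, np.inf)[0], np.pi**2 * T**2 / 12.0)   # both ~ 0.8225
\end{verbatim}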
{\bf Second Contribution $(s_2)$:} This is due to the diagram on the right of figure \ref{pis}, which we evaluate in Appendix \ref{D}\footnote{We evaluate this diagram in great detail (also ignoring the chemical potential contribution) as the rest of the five diagrams (see (\ref{p00gL}), (\ref{p00fL}) and (\ref{p00s1L})) are evaluated in an analogous way.}, obtaining equation (\ref{sd}). For $q^0=0$, which is the case of interest, we have
\begin{align}\label{p00s2}
\Pi^{00;ab}_{s_2 \hspace{0.01in}mat}(q_0,|{\bf q}|)=- 3\frac{g^2N}{4 \pi^2} \delta^{ab} \int _0^{\infty} dx \frac{1}{2}x^2 |{\bf q}|^2 \ln \left( \frac{1+x}{|1-x|}\right) N_B(\frac{|{\bf q}|}{2} x)
\end{align}
where we have made in the integral (\ref{sd}) the substitution $2k=|{\bf q}| x$. Extracting the leading and sub-leading contributions for small $ |{\bf q}|$ is not straightforward, but it may be achieved by working in the same way as in proving (\ref{I3}), yielding
\begin{align}\label{p00s2L}
\Pi^{00;ab}_{s_2 \hspace{0.01in}mat}(0,|{\bf q}| \rightarrow 0)=-\frac{1}{2} \delta^{ab} N g^2 T^2+O( |{\bf q}|^2).
\end{align}
It is crucial to highlight that, as in the fermion case, the scalars have no linear contribution in $ |{\bf q}|$ either. This completes the set of contributions to the order in $g$ and $ |{\bf q}|$ in which we are interested.
\section{Calculating the Potential}\label{poten}
Collecting the results from equations (\ref{p00gL}), (\ref{p00fL}), (\ref{p00s1L}) and (\ref{p00s2L}), the self energy tensor to $O(g^2)$ is
\begin{align}\label{P00}
\Pi^{00}_{mat}(q_0=0,|{\bf q}|\rightarrow 0 )= -m_D^2+2 m_D t|{\bf q}| +O({\bf q}^2\log({\bf q}^2/T^2))
\end{align}
where $m_D$ is the Debye mass and is given by
\begin{align}\label{mD}
m_D^2=2Ng^2 T^2.
\end{align}
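Explicitly, adding the $T^2$ coefficients of the four contributions computed in section \ref{diag},
\begin{align*}
m_D^2=\left(\tfrac{1}{3}+\tfrac{2}{3}+\tfrac{1}{2}+\tfrac{1}{2}\right)Ng^2T^2=2Ng^2T^2,
\end{align*}
where the four terms come from the gauge loops, the fermion loops and the two scalar diagrams, respectively.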
This is one of our main results for this work. On the other hand $t$ is positive and is given by
\begin{align}\label{t}
t=\frac{1}{16 } \frac{m_D}{T}.
\end{align}
\subsection{$V(r)$ from pure electric mode corrections}
Performing the integral of (\ref{appV}) using just the $-m_D^2$ term for $\Pi^{00}$ we obtain
\begin{align}\label{YukV}
V(r)=-\frac{g^2 N}{4 \pi} \frac{1}{r} e^{-m_D r}
\end{align}
which is the expected Yukawa potential.
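The Fourier transform leading to (\ref{YukV}) is elementary, but it can also be checked numerically. In the short script below (our own cross-check, in units where $m_D=1$ and with the overall factor $Q_1Q_2$ stripped off) the radial integral $\frac{1}{2\pi^2 r}\int_0^{\infty} dq\, q\sin(qr)/(q^2+m_D^2)$ reproduces $e^{-m_D r}/(4\pi r)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m_D = 1.0   # Debye mass in arbitrary units

def fourier_screened(r):
    """Radial Fourier transform of 1/(q^2 + m_D^2)."""
    integral, _ = quad(lambda q: q / (q**2 + m_D**2),
                       0.0, np.inf, weight='sin', wvar=r)
    return integral / (2.0 * np.pi**2 * r)

for r in (0.5, 1.0, 2.0):
    print(fourier_screened(r), np.exp(-m_D * r) / (4.0 * np.pi * r))   # agree
\end{verbatim}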
\subsection{$V(r)$ from magnetic mode corrections}
One may push the calculation a little further and include corrections to the potential due to the contribution linear in $|{\bf q}|$ in $\Pi^{00}(0,{\bf q})$. As this term is gauge invariant \cite{Toimela:1982hv,Gale:1987en}, it is tempting to try to include it in the evaluation of the potential, extending the result of (\ref{YukV}). Although it has not been rigorously proved that this expansion in $|{\bf q}|$ is well defined, it is believed \cite{Gale:1987en} that it is very likely to be the case. Assuming this and applying the analysis of \cite{Gale:1987en} to our case, we find that
\begin{align}\label{V1}
V(r)=\frac{2 g^2N }{\pi^2} \frac{t}{m_D^3 r^4}=\frac{1}{(4 \pi)^2} \frac{1}{T^3 r^4}
\end{align}
which shows that at sufficiently large distances the potential falls off as $1/r^4$, is repulsive, and in addition is independent of both $N$ and $g$! This is the second main result of our investigation.
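Heuristically, the origin of the $1/r^4$ behaviour can be seen by expanding the screened propagator of (\ref{appV}) to first order in the non-analytic term,
\begin{align*}
\frac{1}{{\bf q}^2+m_D^2-2m_Dt|{\bf q}|}\simeq\frac{1}{{\bf q}^2+m_D^2}+\frac{2m_Dt\,|{\bf q}|}{({\bf q}^2+m_D^2)^2}+\ldots\,:
\end{align*}
the first term only produces the exponentially suppressed Yukawa piece (\ref{YukV}), while the large-distance behaviour of the second term is governed by the $|{\bf q}|\to0$ non-analyticity through the distributional Fourier transform $\int \frac{d^3q}{(2\pi)^3}\, e^{i{\bf q}\cdot{\bf r}}\,|{\bf q}|=-\frac{1}{\pi^2 r^4}$, which together with $Q_1Q_2=-Ng^2$ reproduces (\ref{V1}).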
\section{Discussion}\label{conc}
In this work we perform the calculation of the $q{\bar q}$ potential in a thermal medium for the ${\cal N}=4$ $SUSY$ theory at weak coupling. By considering the purely electric modes at high temperatures we find the expected Yukawa potential, equation (\ref{YukV}), with the Debye mass given by equation (\ref{mD}). In particular, we observe that each of the (eight $\times N$) bosonic degrees of freedom contributes to $m_D^2$ with $\frac{1}{6}g^2T^2$ (see (\ref{p00g}), (\ref{p00s1L}) and (\ref{p00s2L})), while each of the (eight $\times N$) fermionic degrees of freedom contributes with $\frac{1}{12}g^2T^2$ (see (\ref{p00fL})), leading to (\ref{mD}).
Next, and following \cite{Gale:1987en}, we include (a subset of the) magnetic corrections, obtaining the potential of equation (\ref{V1}) at large distances, which is independent of the coupling and of the number of colors and falls off as $1/r^4$, but is repulsive (as in \cite{Gale:1987en}). On the other hand, in $AdS/CFT$ calculations the same power-law fall-off at large distances was found \cite{Albacete:2008dz}, but with an attractive force between the $q$ and the ${\bar q}$. Motivated by that result, we were hoping, by including the scalar contributions of the ${\cal N}=4$ $SUSY$ theory at weak coupling, to obtain the result of \cite{Gale:1987en} with an additional overall negative sign, agreeing with \cite{Albacete:2008dz} and also with our intuition. We find that neither the fermions nor the scalars contribute to the magnetic modes (linearly in ${\bf |q|}$) and hence the analysis of \cite{Gale:1987en} proceeds (up to an overall factor) unaltered, yielding the repulsive potential of equation (\ref{V1}).
This work provides a concrete example of an observable of ${\cal N}=4$ $SUSY$ whose behavior is qualitatively the same as the corresponding one of $QCD$. Consequently, one may conclude that studying non-perturbative phenomena of $QCD$ by applying the $AdS/CFT$ correspondence may not be far from reality and has the potential to yield qualitatively correct results.
\section*{Acknowledgments}
We would like to thank A. Majumder for fruitful discussions and also for giving a great set of lectures on Field Theory at Finite Temperature in spring of 2010 at The Ohio State University. We acknowledge S. Dam for useful discussions during TASI School 2010 which has been inspiring. We also thank the organizers of ``Quantifying the Properties of Hot QCD Matter" and especially U. Heinz, held at INT in Seattle, the organizers of HESI10 and especially K. Fukushima held at the Yukawa Institute in Kyoto and the organizers of the ``AdS Holography and the QGP"/the EMMI workshop for ``Hot Matter", and especially A. Rebhan, held at ESI in Vienna, for their excellent hospitality during the set-up of this project. Finally, we would like to thank E. Braaten, H. Haber, I. Sarcevic, K. Dienes, John Terning, A. Stergiou, J. Kapusta and Y. Kovchegov for informative discussions, S. Avery for reading the manuscript and making extremely useful comments and in particular Y. Kovchegov for sharing his striking insight.
\vspace{0.1in}
This work is supported by The Department of Energy under contracts DE-FG02-04ER41319 and DE-FG02-04ER41298 and partially by the Institute of Governmental Scholarships of Cyprus.
\section{Introduction}
\subsection{Number-theoretic motivation} For a field $K$, let $\bar K_s$ denote its separable closure.
The absolute Galois group of $K$ is the profinite group $G_K=\Gal(\bar K_s/K)$.
One of the main challenges in current Galois theory is to describe absolute Galois groups of fields among profinite groups.
Already describing the maximal pro-$p$ quotient $G_K(p)$ of $G_K$ among pro-$p$ groups, for $p$ a prime number,
is a remarkable challenge.
The most amazing advancement in Galois theory in the last decades is the proof
of the \emph{Bloch-Kato Conjecture} by M.~Rost and V.~Voevodsky, with the contribution of Ch.~Weibel
(cf.\ \cite{rost:BK, voev, weibel}). One of the most important consequences of this result is the following:
if $K$ contains a root of unity of order $p$, then the $\mathbb{F}_p$-cohomology algebra of $G_K$ ---
i.e.\ the graded algebra $\bigoplus_{n\geq0}H^n(G_K,\mathbb{F}_p)$, endowed with the cup product and with $\mathbb{F}_p$ as a trivial $G_K$-module ---
is a \emph{quadratic algebra} over $\mathbb{F}_p$, namely, all its elements of positive degree are combinations
of products of elements of degree 1, and its defining relations are homogeneous relations of degree 2
(see Definition~\ref{def:quadratic algebra}).
This property is inherited by the maximal pro-$p$ quotient $G_K(p)$ (cf.\ \cite[\S~2]{cq:bk}).
It is therefore of major interest to study pro-$p$ groups whose $\mathbb{F}_p$-cohomology algebra is quadratic,
which we call \emph{$H^\bullet$-quadratic} (or simply \emph{quadratic}) pro-$p$ groups.
This class has been investigated in some recent papers, in order to find obstructions in the realization
of pro-$p$ groups as the maximal pro-$p$ quotient of absolute Galois groups (cf.\ \cite{cq:bk,cq:qc,qw:cyclotomic}).
Unfortunately, as it often happens in profinite group theory, there is an astounding lack of examples of $H^\bullet$-quadratic pro-$p$ groups. We will try to partly remedy this lack of examples in our work.
\subsection{Main Results} In the present paper we start a systematic investigation of $H^\bullet$-quadratic pro-$p$ groups from various points of view.
First of all, we deal with torsion in $H^\bullet$-quadratic pro-$p$ groups. In the next proposition we will collect some facts that might be already known to experts. For further discussion on finite $H^\bullet$-quadratic $2$-groups see Remark~\ref{rmk:propCp2}.
\begin{proABC}\label{prop:properties quadratic}
Let $G$ be a finitely generated pro-$p$ group.
\begin{itemize}
\item[(a)] If $p$ is odd and $G$ is $H^\bullet$-quadratic, then $G$ is torsion-free.
\item[(b)] If $p=2$, $G$ is finite and abelian, then $G$ is $H^\bullet$-quadratic if and only if $G$ is $2$-elementary abelian.
\item[(c)] If $p=2$ and $G$ is finite, then every subgroup of $G$ is quadratic if and only if $G$ is $2$-elementary abelian.
\end{itemize}
\end{proABC}
Next, we study the closure of the class of $H^\bullet$-quadratic pro-$p$ groups under free constructions in the category of pro-$p$ groups, such as
\emph{amalgamated free products} and \emph{HNN-extensions} (see Section~\ref{sec:combgrth} for the definitions).
The corresponding results for free products and direct products are well known to experts. In Section~\ref{sec:combgrth} we prove the following theorems.
\begin{thmABC}\label{thm:cohomology amalgam}
Let $G$ be a finitely generated pro-$p$ group which can be written as a proper amalgam $G=G_1\amalg_{H}G_2$ with $H^\bullet$-quadratic
pro-$p$ groups $G_1,G_2,H$.
Assume that the restriction maps
\begin{equation}\label{eq:surj cohomology amalgam}
\mathrm{res}_{G_i,H}^\bullet\colon H^\bullet(G_i,\mathbb{F}_p)\longrightarrow H^\bullet(H,\mathbb{F}_p),
\end{equation}
with $i=1,2$, satisfy the following conditions:
\begin{itemize}
\item[(i)] ${\rm res}_{G_1,H}^1$ and ${\rm res}_{G_2,H}^1$ are surjective;
\item[(ii)] $\mathrm{ker}({\rm res}_{G_i,H}^2)=\mathrm{ker}({\rm res}_{G_i,H}^1)\wedge H^1(G_i,\mathbb{F}_p)$ for both $i=1,2$.
\end{itemize}
Then also $G$ is $H^\bullet$-quadratic.
\end{thmABC}
\begin{thmABC}\label{thm:HNN}
Let $G_0$ be a $H^\bullet$-quadratic pro-$p$ group, and let $A,B\le G_0$ two isomorphic
$H^\bullet$-quadratic subgroups, with isomorphism $\phi\colon A\to B$.
Assume that
\begin{itemize}
\item[(i)] the restriction map ${\rm res}_{G_0,A}^1$ is surjective,
\item[(ii)] $\mathrm{ker}({\rm res}_{G_0,A}^2)=\mathrm{ker}({\rm res}_{G_0,A}^1)\wedge H^1(G_0,\mathbb{F}_p)$ and
\item[(iii)] the map $f^1_{G_0}\colon H^1(G_0,\mathbb{F}_p)\to H^1(A,\mathbb{F}_p)$ is trivial, where
$$f_{G_0}^1={\rm res}_{G_0,A}^1-\phi^*\circ{\rm res}_{G_0,B}^1,$$
and $\phi^*\colon H^1(B,\mathbb{F}_p)\to H^1(A,\mathbb{F}_p)$ is the map induced by $\phi$.
\end{itemize}
Assume further that $G=\mathrm{HNN}(G_0,A,\phi)$ is proper.
Then $G$ is $H^\bullet$-quadratic.
\end{thmABC}
Moreover, we exhibit several examples to show that each numbered condition in Theorem~\ref{thm:cohomology amalgam}
and Theorem~\ref{thm:HNN} is necessary (see Examples~\ref{example:amalg1}, \ref{example:amalg2}, \ref{ex:HNNcondi}, \ref{ex:HNNcondii} and \ref{ex:HNNcondiii}). In passing we find two new criteria to ensure that an amalgam of pro-$p$ groups is \emph{proper}, see Proposition~\ref{prop:amalgunif} and Proposition~\ref{lem:amalgunifsubgroup}. In particular, in Proposition~\ref{prop:amalgunif} we show that the amalgamated free product of two uniform groups $G_1$ and $G_2$ over an isomorphic uniform subgroup $H$ is always proper, provided that the generators of $H$ are part of bases of both $G_1$ and $G_2$.
\begin{tcolorbox}
In the rest of this section $p$ will be a prime number greater or equal to $3$. For $p=2$ see Section~\ref{sec:p=2}.
\end{tcolorbox}
We proceed by collecting some known results on $p$-adic analytic pro-$p$ groups to characterise $H^\bullet$-quadratic groups in this class (see Section~\ref{sec:quadraticpadic} for the definitions).
\begin{thmABC}\label{thm:quadratic analytic}
A $p$-adic analytic pro-$p$ group $G$ is $H^\bullet$-quadratic if and only if $G$ is uniform.
\end{thmABC}
Note that this provides a new characterisation of uniform pro-$p$ groups among $p$-adic analytic ones. As a direct consequence we have that if a $p$-adic analytic pro-$p$ group $G$ can be realised as $G_K(p)$ for some $K$ containing a root of unity of order $p$ then every open subgroup of $G$ is uniform, i.e., $G$ is a hereditarily uniform pro-$p$ group. Note that hereditarily uniform pro-$p$ groups have been classified and all those groups can be realised as maximal pro-$p$ Galois groups (cf. \cite{cq:bk}, \cite{ware} and \cite{ks}; see also \cite{ks-j.algebra}).
We continue by looking at $H^\bullet$-quadratic groups from a more ``combinatorial'' point of view.
Namely we introduce a new class of pro-$p$ groups, called \emph{generalised Right Angled Artin pro-$p$ Groups}
--- \emph{$p$-RAAGs} for short.
Define a \emph{$p$-graph} $\Gamma=({\mathcal G},f)$ to be an oriented graph ${\mathcal G} = ({\mathcal V},{\mathcal E})$ together with labels $f(e)=(f_1(e),f_2(e))\in p\mathbb{Z}_p\times p\mathbb{Z}_p$
for every edge $e\in{\mathcal E}$.
The $p$-RAAG associated to $\Gamma$ is the pro-$p$ group defined by the pro-$p$ presentation
\begin{equation*}
G_\Gamma = \pres{x_1,\ldots,x_n}{[x_i,x_j]= x_i^{f_1(e)} x_j^{f_2(e)}\ \text{for } e=(x_i,x_j)\in {\mathcal E}}
\end{equation*}
(see Definition~\ref{def:pRAAGs} for the precise details). The class of $p$-RAAGs is extremely interesting.
For instance, we can completely determine the $\mathbb{F}_p$-cohomology algebra of an $H^\bullet$-quadratic $p$-RAAG
and this only depends on the underlying graph.
\begin{thmABC}\label{thm:pRAAGs quadratic}
Let $G_\Gamma$ be an $H^\bullet$-quadratic $p$-RAAG with associated $p$-graph $\Gamma=(\mathcal{G},f)$.
Then
\begin{equation*}
H^\bullet(G_\Gamma,\mathbb{F}_p)\cong \Lambda_\bullet(\mathcal{G}^{\mathrm{op}}).
\end{equation*}
\end{thmABC}
In Theorem~\ref{thm:galois pRAAGs} we show that there is a further condition, besides yielding
a $H^\bullet$-quadratic pro-$p$ RAAG, that a $p$-graph $\Gamma$ must satisfy in order to generate
a $p$-RAAG which occurs as a Galois group $G_K(p)$ for some field $K$.
Next we prove that $p$-RAAGs provide an abundance of new examples of $H^\bullet$-quadratic pro-$p$ groups.
\begin{thmABC}\label{thm:pRAAGs mild}
Let $\Gamma=(\mathcal{G},f)$ be a $p$-graph.
Then the following are equivalent.
\begin{itemize}
\item[(i)] $\mathcal{G}$ is triangle-free.
\item[(ii)] The generalised $p$-RAAG $G_\Gamma$ is mild.
\item[(iii)] $G_\Gamma$ has cohomological dimension $\mathrm{cd}(G_\Gamma)=2$.
\end{itemize}
In particular, if these conditions are satisfied, $G_\Gamma$ is quadratic.
\end{thmABC}
In fact, by a classical theorem of Erd\"os et al., the number of graphs on $n$ vertices that do not involve a
``triangle'' is asymptotic to $2^{n^2/4}$ for $n$ that tends to infinity;
while the total number of graphs on $n$ vertices is asymptotic to $2^{n^2/2}$ for $n$ that tends to infinity.
The condition of \emph{mildness} for a pro-$p$ group, introduced by J.~Labute in \cite{labute:mild},
is quite technical and we direct the reader to \S~\ref{sec:mild} and references therein for the definition.
In light of Theorem~\ref{thm:pRAAGs mild}, we start a careful investigation of $p$-RAAGs associated
to triangle $p$-graphs. It so happens that a classical example of Mennicke, of a \emph{finite} $3$-generated $3$-related pro-$p$ group, can be written as a triangle $p$-RAAG (cf.\ Example~\ref{ex:mennicke}); so there are $p$-RAAGs that are not quadratic, because they contain non-trivial torsion. Furthermore, it is in general very hard to decide whether a given presentation of a pro-$p$ group yields a finite group. So we content ourselves with classifying the possible \emph{$H^\bullet$-quadratic} $p$-RAAGs that arise from triangle $p$-RAAGs.
\begin{thmABC}\label{thm:quadratictrianglepraags}
Let $G$ be a $p$-RAAG associated to a triangle $p$-graph, i.e., let $$G = \pres{x,y,z}{[x,y]=x^{\a_1} y^{\a_2},\ [y,z]=y^{\b_2} z^{\b_3},\ [z,x]=z^{\c_3} x^{\c_1}},$$ where $\a_1,\a_2, \b_2,\b_3, \c_1,\c_3 \in p\mathbb{Z}_p$. If $G$ is
$H^\bullet$-quadratic, then either:
\begin{itemize}
\item[(a)] $G$ is a metabelian uniform pro-$p$ group; or
\item[(b)] $G$ is isomorphic to a $3$-dimensional uniform open subgroup of the $p$-Sylow subgroup of $\mathrm{SL}_2(\mathbb{Z}_p)$.
\end{itemize}
\end{thmABC}
Additionally, if $G$ is metabelian in part (a) of the previous theorem,
it has to belong to one of three different explicitly described families of pro-$p$ groups.
We investigate also part (b) in more detail: namely, we explicitly realise $\mathrm{SL}_2^1(\mathbb{Z}_p)$
as a triangle $p$-RAAG (see Proposition~\ref{prop:SL21triangle}).
To boot, we analyse whether all \emph{torsion-free} $p$-RAAGs are $H^\bullet$-quadratic.
We fall short of proving this in full generality, but we can use Theorem~\ref{thm:cohomology amalgam}
to deal with a large number of them (see \S~\ref{sec:sometrianglefull}). In particular, we can prove the following theorem for
$p$-RAAGs associated to \emph{chordal} graphs. A graph is chordal if all cycles of four or more vertices have a chord (see Definition~\ref{def:chordal}).
\begin{thmABC}\label{thm:chordal}
Let $\Gamma=({\mathcal G},f)$ be a $p$-graph with ${\mathcal G}$ a chordal graph, such that the associated $p$-RAAG $G_\Gamma$ is non-degenerate.
Then $G$ is a quadratic pro-$p$ group.
\end{thmABC}
Morever, we can prove that all $p$-RAAGs arising from $p$-graphs on $5$ vertices are $H^\bullet$-quadratic,
if they are \emph{non-degenerate} (cf.\ Definition~\ref{defn:nondeg}).
Finally, we investigate the presence of free non-abelian closed subgroups in $H^\bullet$-quadratic pro-$p$ groups.
In the arithmetic case one has a ``Tits alternative type'' result: if
the field $K$ contains a root of unity of order $p$, then either $G_K(p)$ contains a free non-abelian
closed pro-$p$ subgroup, or every finitely generated subgroup is uniform
(cf. \cite[Thm.~B]{cq:bk}, \cite[Thm.~3]{ware} and \cite[\S~3.1]{CMQ:fast}).
We prove the following (see \S~\ref{sec:free subgp}).
\begin{thmABC}\label{thm:pRAAGs free subgroup}
Let $G_\Gamma$ be a $p$-RAAG, with associated $p$-graph $\Gamma=({\mathcal G},f)$.
Then either $G_{\Gamma}$ is a powerful pro-$p$ group, or it contains a free non-abelian closed pro-$p$ subgroup.
\end{thmABC}
As an immediate corollary of the previous theorem we deduce: a quadratic $p$-RAAG is either uniform, or it contains a free non-abelian closed pro-$p$ subgroup.
We also investigate the presence of free non-abelian subgroups in mild pro-$p$ groups. For this, in Proposition~\ref{prop:mild GoSha} we show that ---under certain conditions--- several mild pro-$p$ groups are generalised Golod--Shafarevich pro-$p$ groups (see Section~\ref{sec:ggs} for the definition). As a corollary, we deduce that all $H^\bullet$-quadratic groups with at most $3$ generators are either uniform or contain a closed free non-abelian pro-$p$ subgroup (see Corollary~\ref{coro:small free subgp}). Motivated by the aforementioned results, we formulate the following.
\begin{conjABC}\label{conj:subgp free}
Let $G$ be a finitely generated $H^\bullet$-quadratic pro-$p$ group. Then either $G$ is a uniform pro-$p$ group, or it contains a closed free non-abelian pro-$p$ subgroup.
\end{conjABC}
\subsection{Case \texorpdfstring{$\boldsymbol{p=2}$}{p=2}}\label{sec:p=2}
As it often happens in the theory of pro-$p$ groups, one needs to take extra care in the case $p=2$. We will assume a technical condition on $H^\bullet$-quadratic pro-$2$ groups (see Remark~\ref{rem:p2}) that will ensure that most of our proofs also work for the even prime.
We remark that Theorems~\ref{thm:cohomology amalgam} and \ref{thm:HNN} hold without change for $p=2$.
Theorem~\ref{thm:quadratic analytic} does not hold for $p=2$, see Examples~\ref{ex:analyticp21} and \ref{ex:analyticp22}.
The definition of $p$-RAAGs needs to be slightly modified for $p=2$ (see Section~\ref{sec:pRAAGs}), namely labels of edges have to be in $4\mathbb{Z}_2$. After this modification, Theorems~\ref{thm:pRAAGs quadratic}, \ref{thm:pRAAGs mild}, \ref{thm:quadratictrianglepraags}, \ref{thm:chordal} and \ref{thm:pRAAGs free subgroup} hold for $p=2$.
Finally, Proposition~\ref{prop:mild GoSha} also holds for $p=2$ under the additional assumption of Remark~\ref{rem:p2}.
\subsection{Structure of the article}
The main definitions and conventions of this article are laid out in Section~\ref{sec:Preliminaries}. We prove Proposition~\ref{prop:properties quadratic} in Section~\ref{sec:quadratic}. Theorems~\ref{thm:cohomology amalgam} and \ref{thm:HNN} are proved in Section~\ref{sec:combgrth}, where we also discuss the properness of amalgamated free products and HNN extensions in the category of pro-$p$ groups. Theorem~\ref{thm:quadratic analytic} is proved in Section~\ref{sec:quadraticpadic}. We introduce the class of $p$-RAAGs in Section~\ref{sec:pRAAGs}, where we also prove Theorems~\ref{thm:pRAAGs quadratic} and \ref{thm:pRAAGs mild}. Non-degenerate $p$-RAAGs are defined and studied in Section~\ref{sec:trianglefull}. Triangle $p$-RAAGs are analysed in Section~\ref{sec:triang}, which also includes a proof of Theorem~\ref{thm:quadratictrianglepraags}. Finally, in Section~\ref{sec:free subgp} we investigate ``Tits alternative behaviour'' in the classes of $p$-RAAGs and mild pro-$p$ groups, proving Theorem~\ref{thm:pRAAGs free subgroup}.
\section{Preliminaries}\label{sec:Preliminaries}
\subsection{Quadratic algebras}
An associative unital algebra $A_\bullet$ over the finite field $\mathbb{F}_p$ is graded if it decomposes
as the direct sum of $\mathbb{F}_p$-vector
spaces $A_\bullet=\bigoplus_{n\ge 0}A_n$ such that $A_n\cdot A_m\subseteq A_{n+m}$ for all $n,m\ge 0$.
The graded algebra $A_\bullet$ is of finite type if every space
$A_n$ has finite dimension.
Hereinafter we will restrict ourselves to
graded $\mathbb{F}_p$-algebras of finite type,
and to finite dimensional vector spaces over $\mathbb{F}_p$.
For an $\mathbb{F}_p$-vector space $V$, let $T_\bullet(V)=\bigoplus_{n\geq0}V^{\otimes n}$ denote the free graded tensor algebra generated by $V$. For $\Omega\subseteq V\otimes V$, let $(\Omega)$ denote the two-sided ideal generated by $\Omega$ in $T_\bullet(V)$.
\begin{definition}\label{def:quadratic algebra}
A graded algebra $A_\bullet$ is called \emph{quadratic} if one has an isomorphism
\begin{equation}\label{eq:quadratic algebra}
A_\bullet\cong Q(A_1,\Omega) := \frac{T_\bullet(A_1)}{(\Omega)},
\end{equation}
for some $\Omega\subseteq A_1\otimes A_1$. We will call $Q(A_1,\Omega)$ the \emph{quadratic algebra} generated by $A_1$ and $\Omega$.
\end{definition}
\begin{example}\label{ex:quad algebras}
Let $V$ be a vector space. Then
\begin{itemize}
\item[(a)] the tensor algebra $T_\bullet(V) = Q(V,0)$,
\item[(b)] the trivial algebra $ Q(V,V^{\otimes 2})$,
\item[(c)] the symmetric algebra $S_\bullet(V) = Q(V,\{v\otimes w - w\otimes v\mid v,w\in V\})$ and
\item[(d)] the exterior algebra $\Lambda_\bullet(V) = Q(V, \{v\otimes v\mid v\in V\})$
\end{itemize}
are quadratic. Moreover, for two quadratic algebras $A_\bullet,B_\bullet$, one has the following general constructions of new quadratic algebras (cf.~\cite[\S~3.1]{pp:quad}).
\begin{itemize}
\item[(a)] The direct sum $C_\bullet=A_\bullet\oplus B_\bullet$ is the algebra with
$C_n=A_n\oplus B_n$ for every $n\geq1$.
\item[(b)] The wedge product $C_\bullet=A_\bullet\wedge B_\bullet$ is the algebra with
$C_n=\bigoplus_{i+j=n}A_i\wedge B_j$ for every $n\geq2$ (note that in \cite{pp:quad,MPQT} this algebra is
called the skew-commutative tensor product $A_\bullet\otimes^{-1}B_\bullet$).
\end{itemize}
\end{example}
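For a concrete comparison of the two constructions (a quick illustration, not needed later), take $A_\bullet=B_\bullet=\Lambda_\bullet(\mathbb{F}_p)$, the exterior algebra on a one-dimensional space. Then $A_\bullet\oplus B_\bullet$ has Hilbert series $1+2T$, whereas $A_\bullet\wedge B_\bullet\cong\Lambda_\bullet(\mathbb{F}_p^2)$ has Hilbert series $(1+T)^2=1+2T+T^2$. These two algebras will reappear as the $\mathbb{F}_p$-cohomology of the free pro-$p$ product $\mathbb{Z}_p\ast\mathbb{Z}_p$ and of the direct product $\mathbb{Z}_p\times\mathbb{Z}_p$, respectively (see Proposition~\ref{prop:cohomology freeproduct}).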
\subsection{Presentations of pro-\texorpdfstring{$p$}{p} groups}
Throughout this paper, subgroups of pro-$p$ groups are assumed to be closed (in the pro-$p$ topology) and generators will be intended as topological generators. In particular, given two (closed) subgroups $H_1$ and $H_2$ of a pro-$p$ group $G$, the subgroup $[H_1,H_2]$ is the (closed) subgroup of $G$ generated by the commutators $[g_1,g_2]$, with $g_i \in H_i$ for $i = 1,2$. Also, for a positive integer $n$, $G^n$ denotes the (closed) subgroup of $G$ generated by the $n$-th powers of elements of $G$. Similarly, all homomorphisms between pro-$p$ groups will be assumed continuous.
Finally, we will always consider $\mathbb{F}_p$ as a trivial $G$-module.
We will now recall some basic facts and definitions for presentations of pro-$p$ groups. The experienced reader might wish to skip ahead to the next section.
For a pro-$p$ group $G$, set $G_{(2)}=G^p [G,G]$ and
\begin{equation} \label{eq:G3}
G_{(3)}=\begin{cases} G^p [[G,G],G] & \text{if }p\geq3 \\ G^4 [[G,G],G] & \text{if }p=2 \end{cases}.
\end{equation}
Then $G_{(2)}$ and $G_{(3)}$ coincide with the second and third terms of the $p$-Zassenhaus filtration of $G$,
respectively (cf.\ \cite[\S~11.1]{ddms:padic}).
The pro-$p$ group $G$ is finitely generated if and only if $G/G_{(2)}$ is a vector space of finite dimension over $\mathbb{F}_p$, and
$\mathrm{d}(G):=\dim G/G_{(2)}$ is the minimal number of generators of $G$.
Moreover, the equality $H^1(G,\mathbb{F}_p)=\mathrm{Hom}(G,\mathbb{F}_p)$ implies the isomorphism
of vector spaces
\begin{equation}\label{eq:H1}
H^1(G,\mathbb{F}_p)\cong(G/G_{(2)})^*,
\end{equation}
where $\cdot^*=\mathrm{Hom}(\cdot,\mathbb{F}_p)$ denotes the dual for vector spaces.
Thus, $\mathrm{d}(G)=\dim H^1(G,\mathbb{F}_p)$.
A presentation
\begin{equation}\label{eq:presentation}
\xymatrix{ 1\ar[r] & R\ar[r] & F\ar[r] & G\ar[r] & 1 }
\end{equation}
of a finitely presented pro-$p$ group $G$ is said to be \emph{minimal} if one of the following equivalent conditions is satisfied:
\begin{itemize}
\item[(i)] $F\to G$ induces an isomorphism $F/F_{(2)}\cong G/G_{(2)}$;
\item[(ii)] $R\le F_{(2)}$.
\end{itemize}
If \eqref{eq:presentation} is minimal, it follows from \cite[\S~1.6]{nsw:cohn} that
$$H^2(G,\mathbb{F}_p)\cong H^1(R,\mathbb{F}_p)^F\cong (R/R^p[R,F])^*.$$
A minimal subset $\mathcal{R}\subseteq F$ which generates $R$ as a normal subgroup is called a set of defining relations of $G$.
Thus, $\mathrm{r}(G):=\dim H^2(G,\mathbb{F}_p)$ is the cardinality of a set of defining relations.
For a pro-$p$ group $G$ and a minimal presentation \eqref{eq:presentation}, set $d=\mathrm{d}(G)$ and let $\mathcal{X}=\{x_1,\ldots,x_d\}$ be a basis of $F$. We will identify $\mathcal{X}$ with the induced basis of $G$ via $F\to G$.
Given $r\in R \subseteq F_{(2)}$, one may write
\begin{equation}\label{eq:shape r}
r=\begin{cases}
\prod_{i<j}[x_i,x_j]^{a_{ij}}\cdot r' & \text{if }p\neq2 \\
\prod_{i=1}^d x_i^{2a_{ii}}\cdot\prod_{i<j}[x_i,x_j]^{a_{ij}}\cdot r' & \text{if }p=2
\end{cases} \qquad r'\in F_{(3)},
\end{equation}
with $0\leq a_{ij}<p$, and such numbers are uniquely determined by $r$
(cf.\ \cite[Prop.~1.3.2]{vogel:thesis}).
\subsection{Pro-\texorpdfstring{$p$}{p} groups and \texorpdfstring{$\mathbb{F}_p$}{Fp}-cohomology}
The $\mathbb{F}_p$-cohomology of a pro-$p$ group $G$ comes endowed with the cup-product
\[
\smile: H^i(G,\mathbb{F}_p)\times H^j(G,\mathbb{F}_p)\longrightarrow H^{i+j}(G,\mathbb{F}_p)
\]
which is bilinear and graded-commutative, i.e.\ $\beta\smile \alpha=(-1)^{ij}\alpha\smile \beta$ for $\alpha\in H^i(G,\mathbb{F}_p)$
and $\beta\in H^j(G,\mathbb{F}_p)$, so that $H^\bullet(G,\mathbb{F}_p)=\bigoplus_{n\geq0}H^n(G,\mathbb{F}_p)$
is a graded algebra (cf.\ \cite[\S~I.4]{nsw:cohn}).
Let $G$ be a finitely presented pro-$p$ group with minimal presentation \eqref{eq:presentation}, with
$\mathcal{X}$ as above, and let $\mathcal{X}^*=\{\alpha_1,\ldots,\alpha_d\}$ be the basis of $H^1(G,\mathbb{F}_p)$
dual to $\mathcal{X}$.
Moreover, set $m=\mathrm{r}(G)$ and let $\mathcal{R}=\{r_1,\ldots, r_m\}$ be a set of defining relations of $G$.
Cup-products of elements of degree 1 and defining relations are connected by the following
(cf.\ \cite[Prop.~1.3.2]{vogel:thesis} and \cite[Prop.~7.1]{MPQT}).
\begin{proposition}\label{prop:cupproduct}
Let $G$, $\mathcal{X}$, $\mathcal{X}^*$ and $\mathcal{R}$ be as above.
Then for every $r_h\in\mathcal{R}$ and all $i,j\in\{1,\ldots,d\}$ one has
\[
\mathrm{trg}^{-1}(\alpha_i\smile \alpha_j).r_h=\begin{cases}
-a_{ij} & \text{if }i<j \\ a_{ij} & \text{if }j<i \\ -\binom{p}{2}a_{ii} & \text{if }i=j,
\end{cases} \]
where $a_{ij}$ are the exponents in the expression of $r_h$ as in \eqref{eq:shape r},
interpreted as elements of $\mathbb{F}_p$.
In particular, $\alpha_i \smile\alpha_j\neq0$ if and only if $a_{ij}\neq0$ for some $r_h\in\mathcal{R}$.
\end{proposition}
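As a small worked illustration of \eqref{eq:shape r} and of Proposition~\ref{prop:cupproduct} (for $p\geq3$), consider the single relator
\[
r=x_1^p[x_1,x_2][x_3,x_4]\in F_{(2)},\qquad \mathcal{X}=\{x_1,\ldots,x_4\}.
\]
Since $x_1^p\in F_{(3)}$, one has $r\equiv[x_1,x_2][x_3,x_4]\bmod F_{(3)}$, that is $a_{12}=a_{34}=1$ and all other $a_{ij}$ vanish. Hence $\mathrm{trg}^{-1}(\alpha_1\smile\alpha_2).r=\mathrm{trg}^{-1}(\alpha_3\smile\alpha_4).r=-1$, and, if $\mathcal{R}=\{r\}$, then for $i<j$ one has $\alpha_i\smile\alpha_j\neq0$ precisely when $(i,j)\in\{(1,2),(3,4)\}$.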
Moreover, one has the following (cf.\ \cite[Thm.~7.3]{MPQT}).
\begin{proposition}\label{prop:relations}
Let $G$ be a finitely generated pro-$p$ group.
Then the following are equivalent.
\begin{itemize}
\item[(i)] The cup-product induces an epimorphism $H^1(G,\mathbb{F}_p)^{\otimes2}\to H^2(G,\mathbb{F}_p)$.
\item[(ii)] One has an equality $R\cap F_{(3)}=R^p[R,F]$.
\end{itemize}
If the above two conditions hold, then $r_i\in R\smallsetminus F_{(3)}$ for any set of defining relations $\mathcal{R}=\{r_1,\ldots,r_m\}$ of $G$.
\end{proposition}
Hence, if a pro-$p$ group $G$ is to have a chance of being quadratic (see Definition~\ref{definition:Hquadratic}), it has to satisfy one of the equivalent conditions of Proposition~\ref{prop:relations}. In the rest of the paper we will mostly restrict ourselves to this situation. In fact, one of the easiest shapes of relations compatible with the above conditions is involved in the definition of $p$-RAAGs (see Definition~\ref{def:pRAAGs} in Section~\ref{sec:pRAAGs}).
\begin{remark}[Gauss reduction of relations]\label{rem:Gauss reduction}
Let $G$ and $\mathcal{X}$ be as above and set $d=\mathrm{d}(G)$, $m=\mathrm{r}(G)$. We will consider the lexicographic order $<$ on the set of couples $\{(i,j) \mid 1\le i<j\le d\}$.
The quotient $F_{(2)}/F_{(3)}$ is an $\mathbb{F}_p$-vector space with basis $\{[x_i,x_j]F_{(3)} \mid 1\leq i<j\leq d\}$.
Suppose that $G$ satisfies the conditions of Proposition~\ref{prop:relations} and let $\mathcal{R}=\{r_1,\ldots,r_m\}$ be a set of defining relations of $G$. Then $R/R\cap F_{(3)}$ is a subspace with basis $\bar{\mathcal{R}}=\{r_h F_{(3)} \mid 1\leq h\leq m\}$ and we will write $$r_h F_{(3)} = \sum_{1\le i< j\le d} a(h,i,j) \cdot [x_i,x_j]F_{(3)}$$ with $a(h,i,j) \in \mathbb{F}_p$.
Let $A=(a(h,i,j))_{h,(i,j)}$ be the $m\times {d \choose 2}$ matrix of the coefficients of the elements of $\mathcal{R}$ in $F_{(2)}/F_{(3)}$.
Gauss reduction on $A$ yields a lexicographically ordered sequence $(i_1,j_1)<\ldots<(i_m,j_m)$
and a set $\mathcal{R}' = \{r_1',\ldots,r_m'\}\subseteq F$ of new relators for $G$ such that:
$$r_h' \equiv \prod_{1\leq i<j\leq d}[x_i,x_j]^{b(h,i,j)}\mod F_{(3)},\quad h=1,\ldots,m$$ with exponents $b(h,i,j)\in\mathbb{F}_p$ such that $$b(h,i,j) = \begin{cases}
1 & \text{ if } (i,j) = (i_h,j_h) \\
0 & \text{ if } (i,j)<(i_h,j_h)
\end{cases}$$
The commutator $[x_{i_h},x_{j_h}]$ may be considered the ``leading term'' of $r_h$. This will be relevant in Section~\ref{sec:free subgp}.
\end{remark}
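The reduction described in Remark~\ref{rem:Gauss reduction} is plain Gaussian elimination over $\mathbb{F}_p$ on the matrix $A$. The following short script (an illustrative sketch of ours, not used in the arguments of this paper; the function name \texttt{leading\_commutators} is ad hoc) computes the leading pairs $(i_h,j_h)$ from the coefficients $a(h,i,j)$.
\begin{verbatim}
# Illustrative sketch: Gaussian elimination over F_p on the matrix of
# coefficients a(h,i,j); returns the "leading" pairs (i_h, j_h).
# (Requires Python >= 3.8 for pow(x, -1, p).)
from itertools import combinations

def leading_commutators(A, p, d):
    pairs = list(combinations(range(1, d + 1), 2))   # columns in lexicographic order
    A = [[x % p for x in row] for row in A]
    pivots, row = [], 0
    for col in range(len(pairs)):
        piv = next((r for r in range(row, len(A)) if A[r][col]), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        inv = pow(A[row][col], -1, p)                # normalise the pivot to 1
        A[row] = [(x * inv) % p for x in A[row]]
        for r in range(len(A)):                      # clear the rest of the column
            if r != row and A[r][col]:
                c = A[r][col]
                A[r] = [(x - c * y) % p for x, y in zip(A[r], A[row])]
        pivots.append(pairs[col])
        row += 1
    return pivots

# Example: p = 5, d = 4, relators r_1 = [x1,x2][x3,x4] and r_2 = [x1,x2]^2[x2,x3];
# columns: (1,2),(1,3),(1,4),(2,3),(2,4),(3,4).
print(leading_commutators([[1, 0, 0, 0, 0, 1],
                           [2, 0, 0, 1, 0, 0]], 5, 4))   # [(1, 2), (2, 3)]
\end{verbatim}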
\subsection{\texorpdfstring{$H^\bullet$}{H}-quadratic pro-\texorpdfstring{$p$}{p} groups}\label{sec:quadratic}
We introduce the main object of our investigations.
\begin{definition}\label{definition:Hquadratic}
A pro-$p$ group is called \emph{$H^\bullet$-quadratic} (or simply \emph{quadratic}) if the $\mathbb{F}_p$-cohomology algebra $H^\bullet(G,\mathbb{F}_p)$,
endowed with the cup-product, is a quadratic algebra.
\end{definition}
In the rest of the paper we will refer to $H^\bullet$-quadratic pro-$p$ groups simply as \emph{quadratic pro-$p$ groups}.
Let $G$ be a quadratic pro-$p$ group.
If $p\neq2$, then $\alpha\smile \alpha=0$ for every $\alpha\in H^\bullet(G,\mathbb{F}_p)$ by graded-commutativity.
If we further assume that $\alpha\smile \alpha=0$ for every $\alpha$ also in the case $p=2$,
then one has an epimorphism of quadratic algebras
\begin{equation}\label{eq:wedge}
\Lambda_\bullet\bigl(H^1(G,\mathbb{F}_p)\bigr)\twoheadrightarrow H^\bullet(G,\mathbb{F}_p).
\end{equation}
Recall that $\mathrm{cd}(G)$ denotes the cohomological dimension of $G$ (cf.\ \cite[\S~III.3]{nsw:cohn}). In particular, if $G$ is finitely generated then one has the inequalities
\begin{equation}\label{eq:inequalities}
\mathrm{cd}(G)\leq \mathrm{d}(G) \quad \text{and} \quad \mathrm{r}(G)\leq\binom{\mathrm{d}(G)}{2},
\end{equation}
so that $G$ is finitely presented.
\begin{notation}
With a slight abuse of notation, from now on we denote the cup product $\alpha \smile \beta$ of $\alpha \in H^i(G,\mathbb{F}_p)$ and $\beta \in H^j(G,\mathbb{F}_p)$ by $\alpha\wedge \beta \in H^{i+j}(G,\mathbb{F}_p)$. Moreover, when it is clear from context, the cup-product symbol will be omitted.
\end{notation}
\begin{example}
Free pro-$p$ groups are trivially quadratic, as $\mathrm{cd}(G)=1$ for a free pro-$p$ group $G$ (cf.\ \cite[Prop.~3.5.17]{nsw:cohn}), i.e.\ $H^n(G,\mathbb{F}_p)=0$ for $n\geq2$.
\end{example}
Given a field $K$, let $G_K(p)$ denote the maximal pro-$p$ quotient of the absolute Galois group $G_K$ ---
i.e.\ $G_K(p)$ is the Galois group of the maximal $p$-extension of $K$.
Then one has the following consequence of the Rost-Voevodsky Theorem (cf.\ \cite[\S~24.3]{ido:miln}),
which is one of the reasons for our interest in quadratic pro-$p$ groups.
\begin{theorem}\label{thm:GKp quadratic}
Let $K$ be a field containing a root of unity of order $p$.
Then the maximal pro-$p$ Galois group $G_{K}(p)$ of $K$ is quadratic.
\end{theorem}
The well-known Artin-Schreier Theorem (and its pro-$p$ version) implies that the only non-trivial finite group
which occurs as a maximal pro-$p$ Galois group is $C_2$, the cyclic group of order~$2$.
We will now prove Proposition~\ref{prop:properties quadratic}.
\begin{proof}[\textbf{Proof of Proposition~\ref{prop:properties quadratic}}]
For (a), assume by contradiction that $G$ is not torsion-free.
Then $G$ contains a subgroup $H$ which is cyclic of order $p$, so that $H^n(H,\mathbb{F}_p)\cong \mathbb{F}_p$
for every $n\geq0$ and $\mathrm{cd}(H)$ is infinite.
Thus, also $\mathrm{cd}(G)$ is infinite by \cite[Prop.~3.3.5]{nsw:cohn}, contradicting \eqref{eq:inequalities}. This proves claim (a).
Regarding (b), it is well known that the cohomology algebra of a $2$-elementary abelian group $C_2^d$ ($d\in \mathbb{N}$) with coefficients in the finite field $\mathbb{F}_2$ is the symmetric algebra $S_\bullet(H^1(C_2^d,\mathbb{F}_2))$. Thus, $C_2^d$ is quadratic.
Now, let $G$ be a finite $2$-group and suppose that $G$ has a minimal presentation of the form $\langle X \vert R \rangle$ with $x,y \in X$ and either $x^{2^k}y^{-2^m}\in R$ or $x^{2^k}\in R$ for some $m,k\ge 2$. Then $G$ is not quadratic. Since the only finite abelian $2$-groups that do not have relations of the above form are elementary abelian, item (b) follows.
For (c), by item (b), all elements of $G$ must have order $2$ and hence $G$ is abelian.
\end{proof}
We were informed by J.\ Min\'a\v{c} that a result similar to our Proposition~\ref{prop:properties quadratic} has also been obtained by him together with D.\ Benson, S.\ Chebolu, C.\ Okay and J.\ Swallow, and it will appear in a forthcoming paper.
\begin{remark}[Proposition~\ref{prop:properties quadratic} for $p=2$]\label{rmk:propCp2}
We believe that, if a finite $2$-group is quadratic, then it must be elementary abelian.
One can check this directly for several finite $2$-groups: dihedral groups $D_{2^n}$ (see \cite[Chap.~IV, Thm.~2.7]{atem}), quaternion group and generalised quaternion groups (see \cite[Chap.~IV, Thm.~2.9]{atem} and \cite[Chap.~IV, Lem.~2.11]{atem}) and extraspecial $2$-groups (see \cite[Rem.~2.13]{atem}).
Unfortunately, we have not been able to verify our conviction in full generality, but we have two additional comments.
First, we can show---using a spectral sequence argument---that the Mennicke $2$-group $M$ of order $2^{11}$ given by the presentation $$M= \langle a, b, c \mid [a, b] = b^{-2}, [b, c] = c^{-2}, [c, a] = a^{-2} \rangle $$ is not quadratic. We have decided to not include a proof, as this is very similar to the cited examples in \cite[Chap.~IV]{atem}.
Secondly, the method of proof of item (b) in Proposition~\ref{prop:properties quadratic} cannot be applied in general, as there are $2$-groups
that do not satisfy the condition above. These include finite $2$-groups $G$ with balanced presentations (i.e., with $d(G) = r(G)$), which among other groups include the Mennicke $2$-group $M$, the quaternion group and the generalised quaternion groups.
\end{remark}
\begin{remark}\label{rem:p2}
Henceforth we will always implicitly assume that $\alpha^2$ is trivial in $H^2(G,\mathbb{F}_2)$ for every
$\alpha\in H^1(G,\mathbb{F}_2)$ in the case $p=2$.
By Proposition~\ref{prop:cupproduct} and \cite[Prop.~1.3.2]{vogel:thesis},
this is equivalent
to assuming that $a_{ii}=0$ for every $i\in\{1,\ldots,d\}$
and for every defining relation $r_h\in\mathcal{R}$ of $G$.
If $G=G_K(p)$ for some field $K$, then this holds if $\sqrt{-1}\in K$.
\end{remark}
\subsection{Mild pro-\texorpdfstring{$p$}{p} groups}\label{sec:mild}
A good source of quadratic pro-$p$ groups is the class of \emph{mild} pro-$p$ groups.
Such groups were introduced by J.~P.~Labute in \cite{labute:mild} to study the Galois groups of pro-$p$
extensions of number fields with restricted ramification.
Since a quadratic pro-$p$ group needs to satisfy the conditions of Proposition~\ref{prop:relations},
we give the definition of mild pro-$p$ groups among those satisfying these conditions.
For the general definition and properties of mild pro-$p$ groups we refer to \cite{labute:mild,forre:mild,gartner:mild}.
\begin{definition}\label{def:mild}
Let $G$ be a finitely presented pro-$p$ group with presentation \eqref{eq:presentation}
and satisfying Proposition~\ref{prop:relations}. Consider the graded algebra $M_\bullet(G) = Q(V,\Omega)$, where $V\cong G/G_{(2)}= \langle x_1,\ldots,x_d\rangle$
and $$\Omega=\left\{\left.\sum_{i<j}a_{ij}(r)\,(x_i \otimes x_j-x_j \otimes x_i)\;\right|\; r\in\mathcal{R}\right\},$$ where $\mathcal{R}$ is a set of defining relations of $G$ and the $a_{ij}(r)$ are the exponents of $r$ as in \eqref{eq:shape r}.
The group $G$ is \emph{mild} if one has an equality of formal power series
\begin{equation}\label{eq:series mild}
\sum_{n\geq 0}\dim(M_n(G))\cdot T^n=\frac{1}{1-\mathrm{d}(G)T+\mathrm{r}(G)T^2}.
\end{equation}
\end{definition}
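Comparing coefficients on both sides of \eqref{eq:series mild}, mildness amounts to the equalities $\dim(M_0(G))=1$, $\dim(M_1(G))=\mathrm{d}(G)$ and $\dim(M_n(G))=\mathrm{d}(G)\dim(M_{n-1}(G))-\mathrm{r}(G)\dim(M_{n-2}(G))$ for all $n\geq2$. The following lines (a toy sketch of ours, purely illustrative) expand the right-hand side of \eqref{eq:series mild} and list the dimensions that $M_\bullet(G)$ has to attain.
\begin{verbatim}
# Toy sketch: coefficients c_n of 1/(1 - d*T + r*T^2), i.e. the dimensions
# that M_n(G) must attain for G to be mild: c_0 = 1, c_1 = d and
# c_n = d*c_{n-1} - r*c_{n-2} for n >= 2.
def target_dimensions(d, r, n_terms=8):
    c = [1, d]
    while len(c) < n_terms:
        c.append(d * c[-1] - r * c[-2])
    return c[:n_terms]

# e.g. d = 3 generators and r = 2 relations (a triangle-free graph with
# three vertices and two edges):
print(target_dimensions(3, 2))   # [1, 3, 7, 15, 31, 63, 127, 255]
\end{verbatim}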
The most interesting --- and useful --- feature of mild pro-$p$ groups is that they have cohomological dimension
equal to 2 (cf.\ \cite[Thm.~1.2]{labute:mild}). In fact, it is fairly easy to decide if a pro-$p$ group with cohomological dimension $2$ is quadratic.
\begin{proposition}\label{prop:mild cd2}
Let $G$ be a finitely generated pro-$p$ group with $\mathrm{cd}(G)=2$ which satisfies Proposition~\ref{prop:relations}.
Write $H^2(G,\mathbb{F}_p)=H^1(G,\mathbb{F}_p)^{\otimes2}/\Theta$ for some subspace $\Theta$ of $H^1(G,\mathbb{F}_p)^{\otimes2}$.
If \begin{equation}\label{eq:H3}
\Theta \wedge H^1(G,\mathbb{F}_p)=H^1(G,\mathbb{F}_p)^{\otimes3},
\end{equation}
then $G$ is quadratic.
\end{proposition}
\begin{proof}
Since $H^n(G,\mathbb{F}_p)$ is trivial for $n\geq3$, one just needs to check whether $H^3(G,\mathbb{F}_p)=0$
follows from the relations $\Theta$ in $H^2(G,\mathbb{F}_p)$. But this follows immediately from \eqref{eq:H3}.
\end{proof}
This fact has the following consequence.
\begin{remark}
In particular, Proposition~\ref{prop:mild cd2} applies to mild pro-$p$ groups satisfying Proposition~\ref{prop:relations}, since such groups have cohomological dimension~$2$.
\end{remark}
Usually, it is quite difficult to check whether a finitely presented pro-$p$ group is mild directly from the definition.
Instead, one has the following handy criterion.
\begin{proposition}[{\cite[p.~789]{gartner:mild}}]\label{prop:cd2 cupproduct}
Let $G$ be a finitely presented pro-$p$ group such that $H^2(G,\mathbb{F}_p)\neq0$.
Assume that $H^1(G,\mathbb{F}_p)$ admits a decomposition $H^1(G,\mathbb{F}_p)=V_1\oplus V_2$ such that the following holds:
\begin{itemize}
\item[(i)] the cup-product $V_1\otimes V_1\to H^2(G,\mathbb{F}_p)$ is trivial;
\item[(ii)] the cup-product $V_1\otimes V_2\to H^2(G,\mathbb{F}_p)$ is surjective.
\end{itemize}
Then $G$ is mild.
\end{proposition}
We remark that the condition of mildness of a pro-$p$ group satisfying Proposition~\ref{prop:relations}
depends only on the ``shape'' of its defining relations modulo $F_{(3)}$.
\begin{remark}\label{rem:mild approximation}
Let $G$ be a mild quadratic pro-$p$ group with minimal presentation \eqref{eq:presentation}
and defining relations $\mathcal{R}=\{r_1,\ldots,r_m\}$. If $\tilde{\mathcal{R}}=\{\tilde r_1,\ldots,\tilde r_m\}$
is a subset of $F$ with $\tilde r_h\equiv r_h\bmod F_{(3)}$ for every $h=1,\ldots,m$, then also the pro-$p$
group $\tilde G=F/\tilde R$ with defining relations $\tilde{\mathcal{R}}$ is mild and quadratic --- since $H^2(\tilde G,\mathbb{F}_p)\cong H^2(G,\mathbb{F}_p)$ by Proposition~\ref{prop:cupproduct}.
\end{remark}
The previous remark will be used in Section~\ref{sec:free subgp}.
\subsection{Generalised Golod-Shafarevich pro-\texorpdfstring{$p$}{p} groups}\label{sec:ggs}
A generalised Go\-lod-Sha\-fa\-re\-vich pro-$p$ group is a pro-$p$ group which satisfies
a weighted version of the celebrated Golod-Shafarevich condition.
We briefly recall the definition and we refer the reader to \cite[\S~4.1]{ershov} for a deeper treatment.
Let $\mathcal{X}=\{x_1,\ldots,x_d\}$ and $U=\{u_1,\ldots,u_d\}$ be two finite sets of the same cardinality.
The free pro-$p$ group $F$ on $\mathcal{X}$ can be embedded in the \emph{free associative algebra}
$\mathbb{F}_p \langle\!\langle U \rangle\!\rangle$ over $U$ via the Magnus embedding $\iota: x_i\mapsto 1+u_i$.
Choose weights $w(u_i)\in \mathbb{R}_{\ge0}\cup \{\infty\}$ for $x_i\in \mathcal{X}$ and extend this
to a \emph{weight function} $w: \mathbb{F}_p \langle\!\langle U \rangle\!\rangle \to \mathbb{R}_{\ge0}\cup \{\infty\}$
by setting
\begin{equation*}
w(1)=0, \text{\quad } w(0)=\infty,\quad w(u_{i_1}\ldots u_{i_k})= w(u_{i_1})+ \ldots + w(u_{i_k})
\end{equation*}
\begin{equation*}
w\left(\sum c_\alpha m_\alpha\right) = \min\{w(m_\alpha) \mid c_\alpha \neq 0\}.
\end{equation*}
For an element $f\in F$, define the \emph{valuation} of $f$ by $D(f) = w(\iota(f)-1)$ and,
for a subset $S$ of $F$, define $H_{S,D}(T) = \sum_{s\in S} T^{D(s)}$.
A pro-$p$ group $G$ is said to be \emph{generalised Golod-Shafarevic} if there exist:
\begin{enumerate}
\item a minimal presentation \eqref{eq:presentation} for $G$, with generators $\mathcal{X}=\{x_1,\ldots,x_d\}$
and relators $\mathcal{R}$;
\item a weight function $w: \mathbb{F}_p \langle\!\langle U \rangle\!\rangle \to \mathbb{R}_{\ge0}\cup \{\infty\}$
(with associated valuation $D$) on $F(\mathcal{X})$ and
\item a real number $T_0\in (0,1)$
\end{enumerate}
such that
\[
1- H_{\mathcal{X},D}(T_0) + H_{\mathcal{R},D}(T_0) <0.
\]
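Once the weights are chosen, the condition above is straightforward to test numerically. The following toy sketch (ours, purely illustrative) evaluates the left-hand side for constant weight $1$ on the generators and prescribed valuations of the relators; with constant weights this is essentially the classical Golod-Shafarevich test, and it is the freedom in the choice of $w$ that makes the generalised condition flexible.
\begin{verbatim}
# Toy sketch: evaluate 1 - H_X(T0) + H_R(T0); a negative value at some
# T0 in (0,1) witnesses the generalised Golod-Shafarevich condition.
def ggs_value(gen_weights, rel_valuations, T0):
    HX = sum(T0 ** w for w in gen_weights)
    HR = sum(T0 ** D for D in rel_valuations)
    return 1 - HX + HR

# d = 5 generators of weight 1 and m = 4 relators of valuation 2, at T0 = 1/2:
print(ggs_value([1] * 5, [2] * 4, 0.5))   # -0.5 < 0, so the condition holds
\end{verbatim}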
In Section~\ref{sec:free subgp} we will use the following theorem.
\begin{theorem}[{\cite[Thm.~7.1]{ershov}}]\label{thm:ershovfreesub}
A generalised Golod-Shafarevich pro-$p$ group contains a free non-abelian closed subgroup.
\end{theorem}
\section{Combinatorial group theory for quadratic pro-\texorpdfstring{$p$}{p} groups}\label{sec:combgrth}
In this section we concern ourselves with finding new ways to construct quadratic pro-$p$ groups from old ones.
\subsection{Free and direct products}
The following results are well-known to experts.
\begin{proposition}\label{prop:cohomology freeproduct}
Let $G_1,G_2$ be two finitely generated quadratic pro-$p$ groups.
\begin{itemize}
\item[(a)] The free pro-$p$ product $G=G_1\ast G_2$ is again quadratic; furthermore, we have $H^\bullet(G,\mathbb{F}_p)\cong H^\bullet(G_1,\mathbb{F}_p)\oplus H^\bullet(G_2,\mathbb{F}_p)$.
\item[(b)] The direct product $G=G_1\times G_2$ is again quadratic;
furthermore, we have $H^\bullet(G,\mathbb{F}_p)\cong H^\bullet(G_1,\mathbb{F}_p)\wedge H^\bullet(G_2,\mathbb{F}_p)$.
\end{itemize}
\end{proposition}
\begin{proof}
Statement (a) follows from \cite[\S~IV.1]{nsw:cohn}.
Statement (b) follows from \cite[\S~II.4, Ex.~7]{nsw:cohn}. Note that both statements also follow from the discussion following Example \ref{ex:quad algebras}.
\end{proof}
\begin{remark}\label{rem:directproduct}
Let $G_1,G_2$ be two quadratic pro-$p$ groups which occur as maximal pro-$p$ Galois groups
of fields.
By \cite[Thm.~C]{cq:bk}, the direct product $G_1\times G_2$ may occur as a maximal pro-$p$ Galois group
only if one of the two groups is abelian.
\end{remark}
As we have seen above, quadratic pro-$p$ groups are closed under taking free and direct products. There are other universal ``free product-like'' constructions and, in the rest of this section, we will explore to what extent the class of quadratic pro-$p$ groups is closed under these constructions.
\subsection{Amalgamated products}
Let $G_1$ and $G_2$ be pro-$p$ groups and let $\phi_i\colon H\to G_i$ (for $i\in\{1,2\}$) be continuous
monomorphisms of pro-$p$ groups.
An \emph{amalgamated free pro-$p$ product} (or simply \emph{amalgam}) of $G_1$ and $G_2$ with \emph{amalgamated subgroup} $H$ is defined
to be the pushout
\[
\xymatrix{ H\ar[r]^{\phi_1}\ar[d]_{\phi_2} & G_1\ar@{-->}[d]^{\psi_1} \\ G_2\ar@{-->}[r]^{\psi_2} & G }
\]
in the category of pro-$p$ groups, which is unique (cf.\ \cite[\S~9.2]{ribzal:book}).
We write $G=G_1\amalg_H G_2$.
An amalgamated free pro-$p$ product $G=G_1\amalg_H G_2$ is said to be \emph{proper}
if the homomorphisms $\psi_i$ ($i = 1, 2$) are monomorphisms.
In that case one identifies $G_1$, $G_2$ and $H$ with their images in $G$.
\begin{remark}
For a proper amalgam $G=G_1\amalg_H G_2$ it follows from \cite[Prop.~9.2.13(a)]{ribzal:book} that $$\mathrm{cd}(G)\le \max\{\mathrm{cd}(G_1),\mathrm{cd}(G_2),\mathrm{cd}(H)+1\}.$$
\end{remark}
\begin{example}\label{example:demushkin amalg}
For $p\neq2$, let $G$ be a Demushkin group with presentation
\[
G=\left\langle x_1,\ldots,x_d\mid x_1^q[x_1,x_2][x_3,x_4]\cdots[x_{d-1},x_d]=1 \right\rangle,
\]
with $d=\mathrm{d}(G)\geq4$ and $q$ a power of $p$.
Let $G_1,G_2\le G$ be the subgroups generated by
$x_1,x_2$ and by $x_3,\ldots,x_d$ respectively, and $H\le G$ the pro-cyclic subgroup
generated by $[x_2,x_1]x_1^{-q}=[x_3,x_4]\cdots[x_{d-1},x_d]$.
Then $G$ is (isomorphic to) the proper amalgam $G_1\amalg_H G_2$ (cf.\ \cite[Ex.~9.2.12]{ribzal:book}; the same holds also if $p=2$). Note that $\mathrm{cd}(G_1)=\mathrm{cd}(G_2)=1$, but $\mathrm{cd}(G)=2$.
\end{example}
We come to the proof of the first of the two main results of this section: Theorem~\ref{thm:cohomology amalgam}.
\begin{remark}\label{remark:restriction}
For a pro-$p$ group $G$ and a subgroup $H\le G$, the restriction map
\[
\mathrm{res}_{G,H}^\bullet\colon H^\bullet(G,\mathbb{F}_p)\longrightarrow H^\bullet(H,\mathbb{F}_p),
\]
induced by $\mathrm{res}_{G,H}^n$ for every $n\geq0$, is a morphism of graded algebras
(cf.\ \cite[Prop.~1.5.3]{nsw:cohn}).
\end{remark}
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:cohomology amalgam}}]
By hypothesis~(i), the maps ${\rm res}_{G_i,H}^1$ are surjective, for $i=1,2$.
Set $V_i=\mathrm{ker}({\rm res}_{G_i,H}^1)$, and let $W_i\subseteq H^1(G_i,\mathbb{F}_p)$ be a complement for $V_i$ for each $i$.
Then we may identify both $W_1$ and $W_2$ with $H^1(H,\mathbb{F}_p)$ via ${\rm res}^1_{G_1,H}$ and ${\rm res}_{G_2,H}^1$, respectively, and with an abuse of notation we write $W=W_1=W_2$, so that $H^1(G_i,\mathbb{F}_p)=V_i\oplus W$ for $i=1,2$.
Since $H$ is quadratic, one has an isomorphism of quadratic algebras
\[
H^\bullet(H,\mathbb{F}_p)\simeq \frac{\Lambda_\bullet(W)}{(\Omega_H)},
\]
for some subspace $\Omega_H\subseteq \Lambda_2(W)$.
Moreover, by hypothesis~(ii), there exist subspaces $\Omega_i\subseteq\Lambda_2(V_i)\oplus(V_i\wedge W)$ and isomorphisms of quadratic algebras
\[
H^\bullet(G_i,\mathbb{F}_p)\simeq \frac{\Lambda_\bullet(V_i\oplus W)}{(\Omega_i\cup\Omega_H)}
\]
for $i=1,2$.
In particular, in $H^\bullet(G_i,\mathbb{F}_p)$ one has a relation $a+b=0$ with $a\in H^1(G_i,\mathbb{F}_p)\wedge V_i$ and $b\in \Lambda_2(W)$ if and only if $a\in \Omega_i$ and $b\in \Omega_H$.
Since $G=G_1\amalg_{H}G_2$ is a proper amalgam, by \cite[Prop.~9.2.13]{ribzal:book} the monomorphisms
$H\to G_i$ and $G_i\to G$ for $i=1,2$ induce a long exact sequence in cohomology
\[
\xymatrix@C=0.5truecm{ 0\ar[r] & H^1(G,\mathbb{F}_p)\ar[r]^-{ f_G^1}& \cdots\ar[r]^-{ f_H^{n-1}} & H^{n-1}(H,\mathbb{F}_p) \ar`r[d]`[l] `[dlll] `[dll] [dll] \\
&H^n(G,\mathbb{F}_p)\ar[r]^-{ f_G^n}& H^n(G_1,\mathbb{F}_p)\oplus H^n(G_2,\mathbb{F}_p)\ar[r]^-{ f_H^n} & H^n(H,\mathbb{F}_p) \ar`r[d]`[l] `[dlll] `[dll] [dll] \\
& H^{n+1}(G,\mathbb{F}_p)\ar[r]^-{ f_G^{n+1}} & H^{n+1}(G_1,\mathbb{F}_p)\oplus H^{n+1}(G_2,\mathbb{F}_p)\ar[r]^-{ f_H^{n+1}} &\cdots &}
\]
for $n\geq1$, where
\[ \begin{split}
f_G^n(\xi) &=\left(\mathrm{res}_{G,G_1}^n(\xi),\mathrm{res}_{G,G_2}^n(\xi)\right), \\
f_H^n(\eta,\eta')&=\mathrm{res}_{G_1,H}^n(\eta)-\mathrm{res}_{G_2,H}^n(\eta')
\end{split} \]
for every $\xi\in H^n(G,\mathbb{F}_p)$ and $\eta\in H^n(G_1,\mathbb{F}_p)$, $\eta'\in H^n(G_2,\mathbb{F}_p)$ (cf.\ \cite{gildenribes:amalg}).
Since $G_1$, $G_2$ and $H$ are quadratic, Remark~\ref{remark:restriction} implies
that the restriction maps $\mathrm{res}_{G_1,H}^\bullet$ and $\mathrm{res}_{G_2,H}^\bullet$
are epimorphisms of quadratic algebras. Therefore also the maps $f_H^n$ are surjective for all $n\geq1$.
Thus,
\[
H^\bullet(G,\mathbb{F}_p)\cong \mathbb{F}_p\oplus\left(\bigoplus_{n\geq1}\mathrm{ker}(f_H^n)\right),
\]
where $\mathbb{F}_p$ is the degree 0 part --- note that this is a morphism of graded algebras, as the cup-product commutes with the restriction maps (cf.\ \cite[Prop.~1.5.3]{nsw:cohn}).
For every $n\geq1$, we may decompose the map $f_H^n$ as
\[
\xymatrix@C=0.5truecm{ H^n(G_1,\mathbb{F}_p)\oplus H^n(G_2,\mathbb{F}_p)\ar[r]^-{ \varphi_n} & H^n(H,\mathbb{F}_p)\oplus H^n(H,\mathbb{F}_p)\ar[r]^-{\psi_n}
& H^n(H,\mathbb{F}_p)},\]
where $\varphi_n=\mathrm{res}_{G_1,H}^n\oplus\mathrm{res}_{G_2,H}^n$ and $\psi_n=\pi_n-\pi_n'$,
with $\pi_n$ and $\pi_n'$ the canonical projections onto the first and the second summand respectively.
Clearly, $\mathrm{ker}(\psi_n)$ is the diagonal $\Delta(H^n(H,\mathbb{F}_p))\subseteq H^n(H,\mathbb{F}_p)\oplus H^n(H,\mathbb{F}_p)$.
Moreover, by hypothesis (ii) and by Remark~\ref{remark:restriction},
$\mathrm{ker}(\mathrm{res}_{G_i,H}^\bullet)$ is generated as ideal of
$H^\bullet(G_i,\mathbb{F}_p)$ by $\mathrm{ker}({\rm res}_{G_i,H}^1)$ for $i=1,2$, so that
\begin{equation}\label{eq:ker resn Gi}
\ker({\rm res}_{G_i,H}^n)=\mathrm{ker}({\rm res}_{G_i,H}^1)\wedge H^{n-1}(G_i,\mathbb{F}_p)
\end{equation}
for each $n\geq1$ and $i=1,2$.
Therefore, one has
\begin{equation}
\begin{split}
\mathrm{ker}(f_H^n) & = \mathrm{ker}(\varphi_n)\oplus \Delta(H^n(H,\mathbb{F}_p)) \\
&\cong (V_1\wedge H^{n-1}(G_1,\mathbb{F}_p))\oplus (V_2\wedge H^{n-1}(G_2,\mathbb{F}_p))\oplus H^n(H,\mathbb{F}_p)
\end{split}
\end{equation}
for all $n\geq1$ (here we identify $H^n(H,\mathbb{F}_p)$ with $H^n(G_i,\mathbb{F}_p)/\mathrm{ker}({\rm res}^n_{G_i,H})$).
Now let $A_\bullet$ be the quadratic algebra $\Lambda_\bullet(V_1\oplus W\oplus V_2)/(\Omega_G)$, with $\Omega_G=\Omega_1\oplus\Omega_2\oplus\Omega_H\oplus (V_1\wedge V_2)$.
In particular, one has the isomorphisms of graded algebras $A_\bullet/(V_1)\cong H^\bullet(G_2,\mathbb{F}_p)$,
$A_\bullet/(V_2)\cong H^\bullet(G_1,\mathbb{F}_p)$ and
\begin{equation}\label{eq:cong W H}
A_\bullet/(V_1\oplus V_2)\cong \Lambda_\bullet(W)/(\Omega_H)\cong H^\bullet(H,\mathbb{F}_p).
\end{equation}
Moreover, for every $n\geq 1$, we have the isomorphisms of vector spaces
\begin{equation}\label{eq:cong V1V2 HG}
\frac{V_i\wedge (\Lambda_{n-1}(V_i\oplus W))}{(\Omega_i)_n}\cong V_i\wedge H^{n-1}(G_i,\mathbb{F}_p),
\end{equation}
where $(\Omega_i)_n$ denotes the part of degree $n$ of the ideal $(\Omega_i)$.
Let $$\phi_\bullet\colon A_\bullet\longrightarrow H^\bullet(G_1,\mathbb{F}_p)\oplus H^\bullet(G_2,\mathbb{F}_p)$$
be the morphism of quadratic algebras given by
$ \phi_1\vert_{V_1}=\mathrm{id}_{V_1}\oplus 0$, $ \phi_1\vert_{V_2}=0\oplus \mathrm{id}_{V_2}$, and $\phi_1\vert_{W}=\mathrm{id}_W\oplus\mathrm{id}_W$ (here 0 denotes the 0-map).
Since $\varphi_1\circ\phi_1\vert_{V_1\oplus V_2}=0$, and $f_H^1\circ\phi_1\vert_{W}=0$, by quadraticity of $A_\bullet$ one has that the image of $\phi_n$ is contained in $\mathrm{ker}(f_H^n)$ for every $n\geq1$.
Moreover, by \eqref{eq:ker resn Gi}, \eqref{eq:cong W H} and \eqref{eq:cong V1V2 HG}, the map $\phi_n\colon A_n\to \mathrm{ker}(f_H^n)$ is an isomorphism.
Therefore, $\phi_\bullet\colon A_\bullet\to \mathrm{im}(\phi_\bullet)$ is an isomorphism of quadratic algebras, and $H^\bullet(G,\mathbb{F}_p)$ is quadratic.
\end{proof}
\begin{remark}\label{rmk:hypotheses pRAAGs}
By duality \eqref{eq:H1}, the restriction maps \eqref{eq:surj cohomology amalgam}, with $k=1$,
are surjective if and only if the monomorphisms $H\to G_1$ and $H\to G_2$ induce monomorphisms
\[
\frac{H}{H_{(2)}}\longrightarrow \frac{G_1}{(G_1)_{(2)}},
\quad \frac{H}{H_{(2)}}\longrightarrow \frac{G_2}{(G_2)_{(2)}}.
\]
In other words, one may find bases $\mathcal{X}_1$ and $\mathcal{X}_2$ for $G_1$ and $G_2$ respectively, such that
$H$ is generated as a subgroup of $G_1$ and $G_2$ by $\mathcal{X}_1\cap H$ and $\mathcal{X}_2\cap H$ respectively.
Let $\mathcal{X}_1$ and $\mathcal{X}_2$ be as above and, for $i\in\{1,2\}$, let $x_h,x_j\in \mathcal{X}_i\cap H$.
By Proposition~\ref{prop:cupproduct}, condition (ii) of the statement of Theorem~\ref{thm:cohomology amalgam}
holds in the following case: if $a_{hj}\neq0$ for some relation $r$ of $G_i$, then there is a relation
$[x_h,x_j]=t$, with $t\in\Phi(H)$. In particular, as will become clear later, Theorem~\ref{thm:cohomology amalgam} works for generalised $p$-RAAGs: a
relation of $G_i$ is also a relation of $H$ (see Section~\ref{sec:pRAAGs}).
\end{remark}
The following two examples of proper amalgams of quadratic pro-$p$ groups show that both conditions
of the statement of Theorem~\ref{thm:cohomology amalgam} are necessary.
\begin{example}[\textbf{Condition (i) is necessary}]\label{example:amalg1}
Let $G_1=\langle x_1,y_1\rangle$ and $G_2=\langle x_2,y_2\rangle$ be two 2-generated free
pro-$p$ groups, and set $z_1=[x_1,[x_1, y_1]]$ and $z_2=[x_2,[x_2,y_2]]$.
Then $G_1$, $G_2$, $\langle z_1\rangle$ and $\langle z_2\rangle$ are all quadratic pro-$p$ groups.
Let $G$ be the proper amalgam $G_1\amalg_{z_1=z_2}G_2$ (the amalgam is proper by \cite[Thm.~3.2]{ribes:amalg}).
Then $G$ is not quadratic by \cite[Cor.~9.2]{CEM}.
Here the restriction maps ${\rm res}_{G_i,\langle z_i\rangle}^1$ are trivial for both $i=1,2$:
indeed $z_i\in\Phi(G_i)$, so that $\alpha(z_i)=0$ for any $\alpha\in H^1(G_i,\mathbb{F}_p)$.
\end{example}
\begin{example}[\textbf{Condition (ii) is necessary}]\label{example:amalg2}
For $p\neq2$ let $G_1,G_2$ be the pro-$p$ groups with minimal presentations
\[
G_1=\langle x_1,x_2,x_3\mid x_1^p[x_2,x_3]=1\rangle,\quad G_2=\langle x_2,x_3,x_4\mid x_4^p[x_2,x_3]=1\rangle.
\]
Then $G_1,G_2$ are isomorphic free-by-Demushkin pro-$p$ groups (cf.\ \cite{kochzal}).
Also, they are quadratic by \cite[Prop.~4.3]{cq:onerel}.
Let $H\le G_1,G_2$ be the closed subgroup generated by $x_2,x_3$.
Then $H$ is a free 2-generated pro-$p$ group by the Freiheitssatz (cf.\ \cite{romanov:freiheit}) and thus it is quadratic as well.
Set $G=G_1\amalg_H G_2$. By \cite[Ex.~9.2.6(a)]{ribzal:book}, $G$ is a proper amalgam. Let $\{\alpha_1,\ldots,\alpha_4\}$ be a basis of $H^1(G,\mathbb{F}_p)$
dual to $\{x_1,\ldots,x_4\}$.
Then, since $\mathrm{cd}(H)=1$, the long exact sequence in cohomology induces an isomorphism in degree 2
\begin{equation}\label{eq:example amalg isoH2}
{\rm res}_{G,G_1}^2\oplus{\rm res}_{G,G_2}^2\colon H^2(G,\mathbb{F}_p)\overset{\sim}{\longrightarrow} H^2(G_1,\mathbb{F}_p)\oplus H^2(G_2,\mathbb{F}_p),
\end{equation}
with $H^2(G_1,\mathbb{F}_p)=\langle{\rm res}_{G,G_1}^2(\alpha_2\alpha_3)\rangle$ and
$H^2(G_2,\mathbb{F}_p)=\langle{\rm res}_{G,G_2}^2(\alpha_2\alpha_3)\rangle$, both isomorphic to $\mathbb{F}_p$.
Moreover, ${\rm res}_{G_i,H}^1$ is surjective and
$$\ker({\rm res}_{G_i,H}^2)=H^2(G_i,\mathbb{F}_p)\neq\ker({\rm res}_{G_i,H}^1)\wedge H^1(G_i,\mathbb{F}_p),$$
for both $i=1,2$.
By Proposition~\ref{prop:cupproduct}, the only element of $H^2(G,\mathbb{F}_p)$ generated in degree 1 is $\alpha_2\alpha_3$,
and since $H^2(G,\mathbb{F}_p)$ has dimension 2 by \eqref{eq:example amalg isoH2}, $H^\bullet(G,\mathbb{F}_p)$ is not quadratic.
\end{example}
\subsection{HNN extensions}
Let $G_0$ be a pro-$p$ group and let $\phi\colon A\to B$ be a continuous isomorphism between subgroups
$A,B\le G_0$.
A \emph{pro-$p$ HNN-extension} of $G_0$ with associated subgroups $A$ and $B$ is given by the pro-$p$ group
$G=\mathrm{HNN}(G_0,A,\phi)$, together with an element $t\in G$ and a continuous homomorphism $\psi\colon G_0\to G$ such that
$$t(\psi(a))t^{-1}=\psi\circ\phi(a)$$ for every $a\in A$, which satisfy the following universal property:
for any pro-$p$ group $G'$, any $g\in G'$ and any continuous homomorphism $f\colon G_0\to G'$
satisfying $g(f(a))g^{-1}=f\circ\phi(a)$ for all $a\in A$, there is a unique continuous
homomorphism $\tilde f\colon G\to G'$ with $\tilde f(t)=g$ such that $\tilde f\circ \psi=f$.
Such a pro-$p$ group $G$ is unique (cf.\ \cite[\S~9.4]{ribzal:book}).
If the homomorphism $\psi\colon G_0\to G$ is injective, the HNN extension is said to be \emph{proper}.
\begin{remark}
Note that, for a proper HNN extension, it follows from \cite[Prop.~9.4.2(a)]{ribzal:book} that $$\mathrm{cd}(\mathrm{HNN}(G_0,A,\phi)) \le \max\{\mathrm{cd}(G_0),\mathrm{cd}(A)+1\}.$$
\end{remark}
\begin{example}\label{example:demushkin HNN}
In \cite[Ex.~9.2.12]{ribzal:book}, it is shown that a Demushkin pro-$p$ group is an HNN-extension.
\end{example}
We come to the second main result of this section: Theorem~\ref{thm:HNN}.
\begin{remark}\label{remark:HNN thm}
Condition (ii) in Theorem~\ref{thm:HNN} amounts to saying that the morphism
$$\bar\phi\colon \frac{A}{A\cap(G_0)_{(2)}}\longrightarrow\frac{B}{B\cap(G_0)_{(2)}},$$
induced by $\phi$, is the restriction of the identity of $G_0/(G_0)_{(2)}$ on $A/A\cap(G_0)_{(2)}$.
\end{remark}
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:HNN}}]
Since $G=\mathrm{HNN}(G_0,A,\phi)$ is proper, by \cite[Prop.~9.4.2]{ribzal:book}
the monomorphisms $G_0\to G$ and $A\to G_0$ induce a long exact sequence in cohomology
\[
\xymatrix@C=0.8truecm{ 0\ar[r] &\mathbb{F}_p\ar[r] & H^1(G,\mathbb{F}_p)\ar[r]^-{{\rm res}_{G,G_0}^1 }& \cdots\ar[r]^-{ f_{G_0}^{n-1}} & H^{n-1}(A,\mathbb{F}_p) \ar`r[d]`[l] `[dlll]_-{\delta^{n-1}} `[dll] [dll] \\
& & H^n(G,\mathbb{F}_p)\ar[r]^-{{\rm res}_{G,G_0}^n}& H^n(G_0,\mathbb{F}_p)\ar[r]^-{ f_{G_0}^n} & H^n(A,\mathbb{F}_p) \ar`r[d]`[l] `[dlll]_-{\delta^n} `[dll] [dll] \\
& & H^{n+1}(G,\mathbb{F}_p)\ar[r]^-{{\rm res}_{G,G_0}^{n+1}} & H^{n+1}(G_0,\mathbb{F}_p)\ar[r]^-{f_{G_0}^{n+1}} &\cdots &}
\]
for $n\geq1$, where $f_{G_0}^n={\rm res}_{G_0,A}^n-\phi^*\circ{\rm res}_{G_0,B}^n$,
with $\phi^*\colon H^n(B,\mathbb{F}_p)\to H^n(A,\mathbb{F}_p)$ the map induced by $\phi$ --- see also \cite{bieri:HNN}.
Since the map $\phi^*$ commutes with the cup-product, for every $n\geq1$ and $\alpha_1,\ldots,\alpha_n\in H^1(G_0,\mathbb{F}_p)$
one has
\[\begin{split}
{\rm res}_{G_0,A}^n(\alpha_1\cdots\alpha_n) &= {\rm res}_{G_0,A}^1(\alpha_1)\cdots{\rm res}_{G_0,A}^1(\alpha_n) \\
&= \phi^*{\rm res}_{G_0,B}^1(\alpha_1)\cdots\phi^*{\rm res}_{G_0,B}^1(\alpha_n)\\
&= \phi^*{\rm res}_{G_0,B}^n(\alpha_1\cdots\alpha_n),
\end{split}\]
and hence by hypothesis (ii) the maps $f_{G_0}^n$ are trivial for every $n\geq1$.
Therefore, for every $n\geq2$ one has a short exact sequence of vector spaces
\begin{equation}\label{eq:ses hnn cohomology}
\xymatrix{ 0\ar[r] & H^{n-1}(A,\mathbb{F}_p)\ar[r]^{\delta^{n-1}} & H^n(G,\mathbb{F}_p)\ar[r]^-{{\rm res}_{G,G_0}^n} & H^n(G_0,\mathbb{F}_p)\ar[r] &0 }.
\end{equation}
We will identify $H^n(G_0,\mathbb{F}_p)$ and $H^n(A,\mathbb{F}_p)$ as subspaces of $H^n(G,\mathbb{F}_p)$.
Let $\alpha_t\in H^1(G,\mathbb{F}_p)$ be the generator of $\mathrm{ker}({\rm res}_{G,G_0}^1)$ --- i.e.\ $\alpha_t$ is dual to $t\in G$.
By Remark~\ref{remark:HNN thm}, for every $a\in A$ one has a relation $[t,a]=a'$, with $a'\in A\cap(G_0)_{(2)}$.
Hence, by Proposition~\ref{prop:cupproduct}, the cup-product $\alpha \alpha_t\in H^2(G,\mathbb{F}_p)$ is not trivial for every
$\alpha\in H^1(G_0,\mathbb{F}_p)$ such that ${\rm res}_{G_0,A}^1(\alpha)\neq0$. Therefore
$\mathrm{im}(\delta^{1})=\ker({\rm res}_{G,G_0}^2)=\alpha_t\wedge H^1(A,\mathbb{F}_p)$.
Thus, for every $n\geq1$, one has an isomorphism of vector spaces
\begin{equation}\label{eq:HNN iso vectspa}
H^n(G,\mathbb{F}_p)=H^n(G_0,\mathbb{F}_p)\oplus\left(\alpha_t\wedge H^{n-1}(A,\mathbb{F}_p)\right)
\end{equation}
and $H^\bullet(G,\mathbb{F}_p)$ is generated in degree 1.
Moreover, from \eqref{eq:HNN iso vectspa} one deduces
\[
H^n(G,\mathbb{F}_p)\cong \frac{H^n(G_0,\mathbb{F}_p)\oplus (\alpha_t\wedge H^{n-1}(G_0,\mathbb{F}_p))}{\alpha_t\wedge\mathrm{ker}({\rm res}_{G_0,A}^{n-1})},
\]
for every $n\geq2$.
By hypothesis (i), $\mathrm{ker}({\rm res}_{G_0,A}^{n-1})=\mathrm{ker}({\rm res}_{G_0,A}^1)\wedge H^{n-2}(G_0,\mathbb{F}_p)$,
that is, it is generated in degree 1. Thus $\alpha_t\wedge\mathrm{ker}({\rm res}_{G_0,A}^{n-1})$
is generated in degree 2 for every $n\geq2$.
It follows that $H^\bullet(G,\mathbb{F}_p)$ is a quadratic algebra.
\end{proof}
The following is an example of a proper HNN-extension satisfying all the hypotheses of Theorem~\ref{thm:HNN};
it provides a new example of a quadratic pro-$p$ group obtained in this way.
\begin{example}\label{example:HNN1}
Let $G_0 =\langle x,y,z \rangle$ be a free abelian pro-$p$ group.
Set $A= \langle x,y \rangle \le G_0$ and let $\phi\colon A\to B$ be the isomorphism onto $B=\langle xy^p, yz^p\rangle\le G_0$ induced by $x\mapsto xy^p$
and $y\mapsto yz^p$. Consider $G=\mathrm{HNN}(G_0,A,\phi)$. It follows easily using \cite[Prop.~9.4.3(2)]{ribzal:book} that $G$ is a proper HNN-extension.
Thus, $G$ has a presentation
\[
G=\langle x,y,z,t\mid [x,y]=[y,z]=[z,x]=1,[x,t]=y^p,[y,t]=z^p\rangle.
\]
Let $\{\alpha_x,\alpha_y,\alpha_z\}\subseteq H^1(G_0,\mathbb{F}_p)$ be a basis dual to $\{x,y,z\}$.
Then $\mathrm{ker}({\rm res}_{G_0,A}^1)=\langle\alpha_z\rangle$ and
\[\mathrm{ker}({\rm res}_{G_0,A}^2)=\langle\alpha_x\alpha_z,\alpha_y\alpha_z\rangle=
\mathrm{ker}({\rm res}_{G_0,A}^1)\wedge H^1(G_0,\mathbb{F}_p).\]
Moreover, $\phi^*\colon H^1(A,\mathbb{F}_p)\to H^1(A,\mathbb{F}_p)$ is the identity.
Therefore, $G$ is a quadratic pro-$p$ group.
\end{example}
On the other hand, Theorem~\ref{thm:HNN} can also be used to show that certain HNN-extensions are not proper.
\begin{example}
Let $G$ be the pro-$p$ group with presentation $$\langle x_1,x_2,x_3,x_4 \mid [x_1,x_2][x_3,x_4]=1,[x_1,x_3]=1,[x_1,x_4][x_2,x_3]=1,[x_2,x_4]=1 \rangle.$$
Let $\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ be a basis of $H^1(G,\mathbb{F}_p)$ dual to $\{x_1,x_2,x_3,x_4\}$.
Then we have $\alpha_1\alpha_2=\alpha_3\alpha_4$, $\alpha_1\alpha_4=\alpha_2\alpha_3$ and therefore $\{\alpha_1\alpha_2,\alpha_1\alpha_3,\alpha_1\alpha_4,\alpha_2\alpha_4\}$
is a basis of $H^2(G,\mathbb{F}_p)$ by Proposition~\ref{prop:cupproduct}.
It is clear that $\alpha_i\alpha_j\alpha_k=0$ for any triple $(i,j,k)$; also it is easy to check that conditions (i)-(iii) of Theorem~\ref{thm:HNN} are satisfied.
Let $N,H\le G$ be the subgroups generated by $x_2,x_3,x_4$ and by $x_1$, respectively.
Then $N$ is normal in $G$. The short exact sequence \eqref{eq:ses cohom GGS} implies that $\mathrm{cd}(G)\geq3$.
Thus $G$ is not quadratic.
Notice that we can realise $G$ as $\mathrm{HNN}(G_0,G_0,\phi)$ with $$G_0 = \langle x_2,x_4\mid [x_2,x_4]=1\rangle \ast \langle x_3 \rangle \cong (\mathbb{Z}_p\times \mathbb{Z}_p) \ast\mathbb{Z}_p ,\quad t=x_1$$ and $\phi(x_2)=x_2[x_3,x_4]$, $\phi(x_3)=x_3$, $\phi(x_4)=x_4[x_2,x_3]$.
Hence, $\mathrm{HNN}(G_0,G_0,\phi)$ is not proper by Theorem~\ref{thm:HNN}.
\end{example}
The following is a list of examples of proper HNN-extensions which are not quadratic
pro-$p$ groups, each of which fails one of the hypotheses of Theorem~\ref{thm:HNN}. That the given HNN-extensions are proper can be shown easily using \cite[Prop.~9.4.3(2)]{ribzal:book}.
\begin{example}[\textbf{Condition (i) is necessary}]\label{ex:HNNcondi}
Let $G_0 = \langle x,y \rangle$ be a free abelian pro-$p$ group.
Set $A=\langle x^p \rangle$ and $B=\langle y^p \rangle$ and let $\phi\colon A\to B$ be the isomorphism induced by $x^p\mapsto y^p$. Consider $G=\mathrm{HNN}(G_0,A,\phi)$.
Thus $G$ has a presentation
\[
G=\langle x,y,t\mid [x,y]=1,[x^p,t]=(x^{-1}y)^p\rangle,
\]
and $G$ is not quadratic, as Proposition~\ref{prop:relations} is not satisfied.
Indeed, the map ${\rm res}_{G_0,A}^1$ is not surjective.
\end{example}
\begin{example}[\textbf{Condition (ii) is necessary}]\label{ex:HNNcondii}
Let $G_0$ be the pro-$p$ group with minimal presentation
\[
G_0=\langle x,y,z\mid x^p[y,z]=1\rangle,
\]
let $A=B\le G_0$ be the subgroup generated by $y,z$ (in particular, $A$ is a free 2-generated pro-$p$
group by the Freiheitssatz, cf.\ \cite{romanov:freiheit}) and let $\phi\colon A\to A$ be the identity.
Consider $G=\mathrm{HNN}(G_0,A,\phi)$.
Thus, $G$ has a presentation
\[
G=\langle x,y,z,t\mid x^p[y,z]=[t,y]=[t,z]=1 \rangle.
\]
Let $\{\alpha_x,\alpha_y,\alpha_z,\alpha_t\}\subseteq H^1(G,\mathbb{F}_p)$ be the basis dual to $\{x,y,z,t\}$ and, with a slight abuse of notation, consider $\alpha_x,\alpha_y,\alpha_z$
as elements of $H^1(G_0,\mathbb{F}_p)$.
Then $\mathrm{ker}({\rm res}_{G_0,A}^1)=\langle\alpha_x\rangle$ and
\[
\mathrm{ker}({\rm res}_{G_0,A}^2)=\langle\alpha_y\alpha_z\rangle\neq\mathrm{ker}({\rm res}_{G_0,A}^1)\wedge H^1(G_0,\mathbb{F}_p).
\]
On the other hand, the long exact sequence in cohomology induced by the HNN-extension
implies that $H^2(G,\mathbb{F}_p)=\langle\alpha_y\alpha_z,\alpha_t\alpha_y,\alpha_t\alpha_z\rangle$,
whereas $H^3(G,\mathbb{F}_p)=0$ as $\mathrm{cd}(G_0)=2$ and $\mathrm{cd}(A)=1$, so that $G$ is not quadratic.
\end{example}
\begin{example}[\textbf{Condition (iii) is necessary}]\label{ex:HNNcondiii}
Let $G_0 =\langle x,y \rangle$ be a free abelian pro-$p$ group.
Set $A=\langle x \rangle$, $B=\langle y \rangle$ and let $\phi\colon A\to B$ be the isomorphism induced by $x\mapsto y$. Consider $G=\mathrm{HNN}(G_0,A,\phi)$.
Thus, $G$ has a presentation
\[
G=\langle x,y,t\mid [x,y]=1,[x,t]=x^{-1}y\rangle=\langle x,t\mid [x,[x,t]]=1\rangle
\]
and $G$ is not quadratic, as Proposition~\ref{prop:relations} is not satisfied.
Indeed, the map $f_{G_0}^1$ is not trivial.
\end{example}
\section{Analytic pro-\texorpdfstring{$p$}{p} groups}\label{sec:quadraticpadic}
A pro-$p$ group $G$ is said to be \emph{powerful} if $p \geq 3$ and
$[G,G]\leq G^p$, or $p=2$ and $[G,G]\leq G^4$. Here, $[G,G]$ and
$G^p$ denote the commutator subgroup and the
subgroup generated by all $p$th powers. Recall that the \emph{descending $p$-central series of $G$} is defined inductively by $P_1(G)=G$ and $P_{n+1}(G) = P_n(G)^p [P_n(G),G]$ for $n\ge 1$.
A pro-$p$ group $G$ is called \emph{uniform} if
it is finitely generated, powerful and $$\lvert P_i(G) : P_{i+1}(G) \rvert = \lvert G : P_2(G) \rvert$$
for all $i \in \mathbb{N}$.
It is worth noting that a powerful finitely generated pro-$p$ group is uniform if and only if it is torsion-free (see \cite[Thm.~4.5]{ddms:padic}).
Uniform pro-$p$ groups play an important role in the theory of $p$-adic Lie groups (see \cite{ddms:padic}).
In light of Lazard's work we have the following (cf.\ \cite{lazard:padic}, see also \cite[Thm.~5.1.5]{sw:cpg}).
\begin{proposition}\label{prop:uniform}
Let $G$ be a uniform pro-$p$ group.
Then $$H^\bullet(G,\mathbb{F}_p)\cong\Lambda_\bullet H^1(G,\mathbb{F}_p).$$
\end{proposition}
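For instance, the free abelian pro-$p$ group $\mathbb{Z}_p^d$ is uniform, so $H^\bullet(\mathbb{Z}_p^d,\mathbb{F}_p)\cong\Lambda_\bullet(\mathbb{F}_p^d)$; in particular $\dim H^n(\mathbb{Z}_p^d,\mathbb{F}_p)=\binom{d}{n}$ and $\mathrm{cd}(\mathbb{Z}_p^d)=d$.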
We can now prove Theorem~\ref{thm:quadratic analytic}.
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:quadratic analytic}}]
If $G$ is uniform, then the claim follows by Proposition~\ref{prop:uniform}.
Assume now that $G$ has quadratic $\mathbb{F}_p$-cohomology.
Since $G$ is torsion-free, one has $\mathrm{d}(G)\leq\dim(G)$, by \cite[Prop.~1.2]{klopsch:rank}. Moreover, $G$ is a Poincar\'e-Duality pro-$p$ group
so that $\dim(G)=\mathrm{cd}(G)$. Now by (\ref{eq:inequalities}),
\[
\mathrm{d}(G)\leq\dim(G)=\mathrm{cd}(G)\leq \mathrm{d}(G),
\]
so $\mathrm{cd}(G)=\mathrm{d}(G)$ and $H^\bullet(G,\mathbb{F}_p)\cong\Lambda_\bullet(H^1(G,\mathbb{F}_p))$
(cf.\ \cite[Prop.~4.3]{cq:bk}).
In particular, $G$ is powerful by \cite[Thm.~5.1.6]{sw:cpg}, and hence uniform.
\end{proof}
The following examples show that the above theorem cannot be extended to $p=2$, even for torsion-free groups.
\begin{example}\label{ex:analyticp21}
Let $G_d= N_d\rtimes H$ be the pro-$2$ group with $$N_d = \langle x_1,\ldots,x_d \mid [x_i,x_j]=1,\ 1\le i<j\le d \rangle \cong \mathbb{Z}_2^d,\quad H=\langle y \mid \ \rangle \cong \mathbb{Z}_2$$ and $y^{-1}x_i y=x_i^{-1}$, for $1\le i\le d$.
Then $G_d$ is $2$-adic analytic and torsion-free, but it is not uniform.
Moreover the group $G_d$ is quadratic, by \cite[Thm.~3.16]{qw:cyclotomic}.
Thus, the hypothesis $p\ge 3$ in the above theorem cannot be removed.
\end{example}
There are also infinite quadratic pro-$2$ groups with torsion.
\begin{example}\label{ex:analyticp22}
The infinite dihedral pro-$2$ group $D_{\infty}= \mathbb{Z}_2\rtimes C_2$, with action given by inversion, is isomorphic to the free pro-$2$ product $C_2\ast C_2$. By Proposition~\ref{prop:cohomology freeproduct}, we have
\[ H^\bullet(D_{\infty},\mathbb{F}_2) \cong H^\bullet(C_2,\mathbb{F}_2)\oplus H^\bullet(C_2,\mathbb{F}_2) \cong \mathbb{F}_2[X_1]\oplus \mathbb{F}_2[X_2]\]
with $X_1,X_2$ indeterminates. Hence $D_{\infty}$ is $2$-adic analytic and quadratic, but it contains torsion elements.
\end{example}
\begin{remark}
Since there are uncountably many uniform pro-$p$ groups which are not {com\-men\-su\-ra\-ble} (cf.\ \cite{snopche:uncountably}), we have that there are uncountably many non-commensurable quadratic pro-$p$ groups. Taking free products, it is then easy to see that there are also uncountably many \emph{non-isomorphic} quadratic non-uniform pro-$p$ groups.
\end{remark}
\section{Generalised \texorpdfstring{$p$}{p}-RAAGs}\label{sec:pRAAGs}
Right angled Artin groups (RAAGs for short) are a combinatorial construction that plays a prominent role in Geometric Group Theory. These can be defined as (abstract) groups given by a presentation where all relators are commutators of generators. One possible way to obtain a pro-$p$ group out of a RAAG is to consider the pro-$p$ completion of the group; this yields pro-$p$ groups whose structure remains quite close to that of a RAAG. In this section we introduce a generalised construction of RAAGs for pro-$p$ groups. For more evidence of the novelty and flexibility of this construction see Section~\ref{sec:triang}.
\subsection{\texorpdfstring{$p$}{p}-Graphs and \texorpdfstring{$p$}{p}-RAAGs}\label{sec:pgraphpraag}
We will state some conventions that we will keep for the rest of the article.
An \emph{(oriented) graph} is a couple $\mathcal{G}=({\mathcal V},{\mathcal E})$ where ${\mathcal V}$ is a finite set, whose elements
are called \emph{vertices}, and ${\mathcal E} \subseteq{\mathcal V}^{2}$, whose elements
are called \emph{edges}.
For an edge $e=(x_1,x_2)\in {\mathcal E}$, $o(e):=x_1$ and $t(e):=x_2$ are called the \emph{origin} and the \emph{terminus}
of $e$, respectively. The \emph{opposite edge} of $e=(x_1,x_2)\in {\mathcal E}$ is $\overline{e}:=(x_2,x_1) \in{\mathcal V}^2$.
We denote the set of \emph{opposite edges} of edges in $\mathcal{G}=({\mathcal V},{\mathcal E})$ as $\overline{{\mathcal E}}$.
Let $\mathcal{G}=({\mathcal V},{\mathcal E})$ be a graph. A \emph{loop} in $\mathcal{G}$ is an edge $e\in {\mathcal E}$ with $\vert\{o(e),t(e)\}\vert=1$. An \emph{(unoriented) circuit of length $2$} in $\mathcal{G}$ is a couple $\{e,\overline{e}\}$ for $e\in {\mathcal E}$. An \emph{(unoriented) circuit} in $\mathcal{G}$ is a sequence of distinct edges $e_1,\ldots,e_n$
with $n\ge 3$ such that $\{o(e_i), t(e_{i})\} \cap \{o(e_{i+1}), t(e_{i+1})\} \neq \emptyset$, for $i=1,\ldots,n-1$
and $\{o(e_n), t(e_{n})\} \cap \{o(e_{1}), t(e_{1})\} \neq \emptyset$.
A graph $\mathcal{G}$ is said to be \emph{combinatorial} if it has no loops and no circuits of length $2$.
Note that, in particular, a combinatorial graph has a natural ``orientation'', i.e.\ only one of the pairs $(x_1,x_2)$, $(x_2,x_1)$ with $x_1,x_2 \in {\mathcal V}$ can appear in ${\mathcal E}$ and ${\mathcal E} \cap \overline{{\mathcal E}}=\emptyset$.
A \emph{$p$-labelling} of a combinatorial graph $\mathcal{G}=({\mathcal V},{\mathcal E})$ is a function $f:{\mathcal E} \to p\mathbb{Z}_p \times p\mathbb{Z}_p$ if $p\ge 3$, and $f:{\mathcal E} \to 4\mathbb{Z}_2 \times 4\mathbb{Z}_2$ if $p=2$.
\begin{definition}\label{def:graph}
Let $\mathcal{G}=({\mathcal V},{\mathcal E})$ be a combinatorial graph.
\begin{itemize}
\item[(a)] The graph $\mathcal{G}$ is said to be \emph{complete} if $\overline{{\mathcal E}}={\mathcal V}^2 \smallsetminus ({\mathcal E} \cup \{(v,v)\mid v\in {\mathcal V}\})$.
\item[(b)] A couple $\mathcal{G}' = ({\mathcal V}',{\mathcal E}')$ with ${\mathcal V}'\subseteq {\mathcal V}$ and ${\mathcal E}' \subseteq {\mathcal E}$ is said to be a \emph{subgraph} of $\mathcal{G}$.
\item[(c)] A subgraph $\mathcal{G}'= ({\mathcal V}',{\mathcal E}')$ of $\mathcal{G}$ is said to be \emph{full} if ${\mathcal E}'={\mathcal E}\cap({\mathcal V}')^2$.
\item[(d)] A full subgraph ${\mathcal G}'$ of ${\mathcal G}$ is said to be a \emph{clique} of ${\mathcal G}$ if ${\mathcal G}'$ is complete.
\end{itemize}
\end{definition}
\begin{definition}
A \emph{$p$-graph} is a couple $\Gamma=(\mathcal{G},f)$ where $\mathcal{G}=({\mathcal V},{\mathcal E})$ is a combinatorial graph and $f$ is a $p$-labelling of $\mathcal{G}$.
\end{definition}
Throughout the paper all graphs and $p$-graphs will be finite.
\begin{definition}\label{def:pRAAGs}
Let $\Gamma= ({\mathcal V},{\mathcal E},f)$ be a $p$-graph with $p$-labelling $(f_1,f_2):{\mathcal E}\to p\mathbb{Z}_p \times p\mathbb{Z}_p$.
The \textit{generalised Right Angled Artin {pro-$p$} group} ($p$-RAAG for short) associated to $\Gamma$, denoted by $G_\Gamma$, is the pro-$p$ group defined by the following pro-$p$ presentation:
\begin{equation}\label{eq:pRAAGdefn}
G_\Gamma = \pres{{\mathcal V}}{[o(e),t(e)]= o(e)^{f_1(e)} t(e)^{f_2(e)}\ \text{for } e\in {\mathcal E}}
\end{equation}
\end{definition}
We present a couple of examples to clarify the definition.
\begin{example}
Let $\mathcal{G}$ be a graph, let $c\equiv (0,0)\in p\mathbb{Z}_p \times p\mathbb{Z}_p$ be the constant $p$-labelling on $\mathcal{G}$ and set $\Gamma=(\mathcal{G},c)$; then $G_\Gamma$ is the pro-$p$ completion of the abstract RAAG associated to ${\mathcal G}$.
\end{example}
\begin{example}\label{ex:pRAAGs}
Let $\Gamma_1$ and $\Gamma_2$ be the $p$-graphs
\begin{center}
\begin{minipage}{0.1\textwidth}
$\Gamma_1 =$
\end{minipage}
\begin{minipage}{0.15\textwidth}
\[
\xymatrix{x_1 \ar[r]^{(a,b)} & x_2}
\]
\end{minipage}
\begin{minipage}{0.2\textwidth}
\begin{center} and \end{center}
\end{minipage}
\begin{minipage}{0.1\textwidth}
\qquad $\Gamma_2 =$
\end{minipage}
\begin{minipage}{0.2\textwidth}
\[
\xymatrix{ & x_2 & \\ x_1 \ar[ur]^{(\lambda,p^2)} \ar[rr]_{(0,0)}& & x_3 }
\]
\end{minipage}
\end{center}
\noindent with $a,b,\lambda\in p\mathbb{Z}_p$. Then the pro-$p$ groups $G_{\Gamma_1}$ and $G_{\Gamma_2}$ are defined by the presentations
\[
G_{\Gamma_1} = \pres{x_1,x_2}{[x_1,x_2] = x_1^a x_2^b} \text{\quad and}
\]
\[
G_{\Gamma_2} = \pres{x_1,x_2,x_3}{[x_1,x_2]= x_1^{\lambda} x_2^{p^2},\ [x_1,x_3] = 1},
\]
respectively.
\end{example}
We will see in Lemma~\ref{lem:isom} that $G_{\Gamma_1}$ is a $2$-generated Demushkin group, hence a uniform pro-$p$ group.
\begin{remark}\label{rem:pRAAGs H2}
One of our main motivations to introduce generalised $p$-RAAGs is that $G_\Gamma$ satisfies the equivalent conditions of Proposition~\ref{prop:relations} and Remark~\ref{rmk:hypotheses pRAAGs}.
\end{remark}
\subsection{Cohomology of quadratic \texorpdfstring{$p$}{p}-RAAGs}
For a set $S$, we will denote by $\mathbb{F}_p\langle S \rangle$ the vector space with basis $S$.
Let $\Gamma=(\mathcal{G},f)$ be a $p$-graph. In this subsection we will show that the $\mathbb{F}_p$-cohomology of a quadratic $p$-RAAG $G_\Gamma$ is completely determined by its ``underlying graph'' $\mathcal{G}$. Recall that the opposite graph $\mathcal{G}^{\mathrm{op}} = ({\mathcal V}^{\mathrm{op}},{\mathcal E}^{\mathrm{op}})$ is defined by $\mathcal{V}^{\mathrm{op}}=\mathcal{V}$, and $\mathcal{E}^{\mathrm{op}}=\mathcal{V}^2\smallsetminus (\mathcal{E}\cup \overline{{\mathcal E}})$. For instance,
\[
\begin{minipage}{0.1\textwidth}
\begin{center}$\mathcal{G}$ = \end{center}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\xymatrix{ \bullet \ar[r] \ar[dr] & \bullet \ar[d] \\ \bullet \ar[u] & \bullet \ar[l]}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\begin{center}$\xrightarrow{\phantom{aaa}\mathrm{op}\phantom{aaa}}$ \end{center}
\end{minipage}
\begin{minipage}{0.1\textwidth}
\begin{center}$\mathcal{G}^{\mathrm{op}}$ = \end{center}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\xymatrix{ \bullet & \bullet \ar@/_/[dl] \\ \bullet \ar@/_/[ur] & \bullet }
\end{minipage}
\]
\begin{definition}\label{def:gammaop}
Let $\mathcal{G}=({\mathcal V},{\mathcal E})$ be a graph. Define the algebra $\Lambda_\bullet(\mathcal{G}^{\mathrm{op}}) = Q(\mathbb{F}_p\langle{\mathcal V}\rangle,\Omega)$ with
\[
\Omega=\{v\otimes w + w\otimes v \mid (v,w)\notin{\mathcal E}\cup \overline{{\mathcal E}}\}\subseteq \mathbb{F}_p\langle{\mathcal V}\rangle^{\otimes 2}
\]
(cf.\ \cite[\S~4.2.2]{weigel:koszul}).
Namely, we kill the wedge product of two vertices if they are not connected in $\mathcal{G}$.
In particular, $\Lambda_\bullet(\mathcal{G}^{\mathrm{op}})$ is quadratic and graded-commutative.
\end{definition}
Let $G_\Gamma$ be a generalised $p$-RAAG with associated $p$-graph $\Gamma=(\mathcal{G},f)$ and underlying graph $\mathcal{G}=({\mathcal V},{\mathcal E})$.
Clearly, by \eqref{eq:H1} one has $H^1(G_\Gamma,\mathbb{F}_p)\cong\Lambda_1(\mathcal{G}^{\mathrm{op}})$.
In degree $2$ one has the following.
\begin{lemma}\label{lemma:H2 pRAAGs}
Let $G_\Gamma$ be a generalised $p$-RAAG with associated $p$-graph $\Gamma=(\mathcal{G},f)$ and underlying graph $\mathcal{G}=({\mathcal V},{\mathcal E})$.
Then $H^2(G_\Gamma,\mathbb{F}_p)\cong \Lambda_2(\mathcal{G}^{\mathrm{op}})$.
\end{lemma}
\begin{proof}
Set $d=\mathrm{d}(G_\Gamma)$. Let $\mathcal{X}=\{x_1,\ldots,x_d\}$ and $\mathcal{R}$ be the basis and
the set of defining relations induced by $\Gamma$, respectively. Also let $\mathcal{X}^*=\{\alpha_1,\ldots,\alpha_d\}$ be the dual basis of $\mathcal{X}$ in $H^1(G_\Gamma,\mathbb{F}_p)$.
Then by Proposition~\ref{prop:cupproduct} one has
\begin{equation}\label{eq:H2alpha}
H^2(G_\Gamma,\mathbb{F}_p)=\bigoplus_{\substack{i<j\\(x_i,x_j)\in{\mathcal E}}}\mathbb{F}_p\cdot\alpha_i\alpha_j,
\end{equation}
with $\alpha_j\alpha_i=-\alpha_i\alpha_j$ for all $i,j$.
Finally, \eqref{eq:H2alpha} coincides with $\Lambda_2(\mathbb{F}_p\langle{\mathcal V}\rangle)/(\Omega)$,
with $\Omega$ as in Definition~\ref{def:gammaop}.
\end{proof}
Once we know that a $p$-RAAG is quadratic, the previous lemma completely determines the $\mathbb{F}_p$-cohomology.
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:pRAAGs quadratic}}]
The proof follows at once from the definition of quadraticity and Lemma~\ref{lemma:H2 pRAAGs}.
\end{proof}
In the setting of Theorem~\ref{thm:pRAAGs quadratic}, we may describe the cohomology algebra of $G_\Gamma$ as follows.
Let $\mathcal{X}$ be a basis of $G_\Gamma$ induced by the homomorphism $F\to G_\Gamma$ in \eqref{eq:presentation} and let $\mathcal{X}^*=\{\alpha_1,\ldots,\alpha_d\}$ be its dual basis of $H^1(G_\Gamma,\mathbb{F}_p)$. Fix an integer $n$ with $1\leq n\leq \mathrm{cd}(G_\Gamma)$.
Then, by Theorem~\ref{thm:pRAAGs quadratic}, we have
$$\beta=\alpha_{i_1}\cdots\alpha_{i_n}\neq0,\qquad\text{with }1\leq i_1<\ldots<i_n\leq d,$$
if, and only if, $(x_{i_j},x_{i_l})\in{\mathcal E}$ for every
$i_j,i_l\in I=\{i_1,\ldots,i_n\}$.
Namely, $\beta$ is not trivial in $H^n(G_\Gamma,\mathbb{F}_p)$ if, and only if, there exists a clique
$\mathcal{G}_I$ of $\mathcal{G}$ with ${\mathcal V}(\mathcal{G}_I)=\{x_{i_1},\ldots,x_{i_n}\}$.
Let us denote by $\mathrm{Cl}_n(\Gamma)$ the set of $n$-cliques in $\Gamma$. In particular, one has the following.
\begin{coro}
Let $G_\Gamma$ be a quadratic $p$-RAAG with associated $p$-graph $\Gamma=(\mathcal{G},f)$ and underlying graph $\mathcal{G}=({\mathcal V},{\mathcal E})$. For $1\leq n\leq\mathrm{cd}(G_\Gamma)$ and $1\leq i_1<\ldots<i_n\leq \mathrm{d}(G_\Gamma)$, the assignment
\[
\alpha_{i_1}\cdots\alpha_{i_n}\longmapsto \begin{cases}\mathcal{G}_I & \text{if }\alpha_{i_1}\cdots\alpha_{i_n}\neq0,\\
0 & \text{if }\alpha_{i_1}\cdots\alpha_{i_n}=0, \end{cases} \]
with $\mathcal{G}_I$ the $n$-clique of $\Gamma$ defined as above,
induces an isomorphism of vector spaces $$H^n(G_\Gamma,\mathbb{F}_p)\overset{\sim}{\longrightarrow}\mathbb{F}_p\langle\mathrm{Cl}_n(\Gamma)\rangle.$$
\end{coro}
Since we have a clear picture of the cohomology ring of a quadratic $p$-RAAG,
we can describe more precisely how the underlying graph influences the cohomology.
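For instance, suppose that $G_\Gamma$ is quadratic and $\mathcal{G}$ is a square, i.e.\ a cycle on four vertices. Then $\Gamma$ has four $1$-cliques, four $2$-cliques and no $n$-cliques for $n\geq3$, so that $\dim H^1(G_\Gamma,\mathbb{F}_p)=\dim H^2(G_\Gamma,\mathbb{F}_p)=4$ and $H^n(G_\Gamma,\mathbb{F}_p)=0$ for every $n\geq3$.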
\subsection{Triangle-free \texorpdfstring{$p$}{p}-RAAGs}
In this subsection we prove that triangle-free graphs always yield quadratic pro-$p$ groups.
The graph $\mathcal{G}=({\mathcal V},{\mathcal E})$ is said to be \emph{triangle-free}, if there are no triples $\{x_i,x_j,x_h\}\subseteq{\mathcal V}$
such that $$(x_i,x_j),(x_i,x_h),(x_j,x_h)\in{\mathcal E} \cup\overline{{\mathcal E}}.$$
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:pRAAGs mild}}]
Assume $\mathcal{G}$ is triangle-free.
By Remark~\ref{rem:pRAAGs H2}, $G_\Gamma$ satisfies the conditions of Proposition~\ref{prop:relations}.
Consider the quadratic algebra $A_\bullet(\mathcal{G}) =Q(\mathbb{F}_p\langle {\mathcal V} \rangle,\Omega)$ where $$ \Omega = \{x_i\otimes x_j -x_j\otimes x_i \mid i<j,\ (x_i,x_j) \in {\mathcal E}\cup \overline{{\mathcal E}}\}.$$
By the definition of a $p$-RAAG, the coefficients $a_{ij}$ from \eqref{eq:shape r} are either zero or one, depending on whether $(x_i,x_j) \in {\mathcal E}$.
Hence $A_\bullet(\mathcal{G}) = M_\bullet(G_\Gamma)$ from Definition~\ref{def:mild}.
By \cite[\S~4.2.2]{weigel:koszul}, one has an equality of formal power series
\begin{equation*}
\sum_{n\geq0}\dim(A_n(\mathcal{G}))\cdot T^n=\frac{1}{1-dT+rT^2},
\end{equation*}
where $d=|{\mathcal V}|$ and $r=|{\mathcal E}|$. This yields (ii).
Condition (ii) implies (iii) by \cite[Thm.~2.12]{gartner:mild}.
Assume that $\mathrm{cd}(G_\Gamma)=2$, and suppose that $\Gamma$ contains a triangle $T=(\mathcal{T},f\vert_{\mathcal{T}})$
as a full subgraph, with ${\mathcal V}(\mathcal{T})=\{x_1,x_2,x_3\}$.
Let $H$ be the subgroup of $G_\Gamma$ generated by $x_1,x_2,x_3$.
Then $\mathrm{cd}(H)\leq\mathrm{cd}(G_\Gamma)$ by \cite[Prop.~3.3.5]{nsw:cohn} and $H$ is powerful.
If $H$ is torsion-free, then $\mathrm{cd}(H)=3$ by Proposition~\ref{prop:uniform} --- in contradiction with (iii).
On the other hand, if $H$ has non-trivial torsion, then $\mathrm{cd}(H)=\infty$ --- again contradicting (iii). Thus, (iii) implies (i).
Finally, if the three conditions hold, Proposition~\ref{prop:mild cd2} and Lemma~\ref{lemma:H2 pRAAGs}
imply that $H^\bullet(G_\Gamma,\mathbb{F}_p)$ is a quadratic algebra.
\end{proof}
For example, we can show that every ``cycle'' $p$-graph yields a quadratic pro-$p$ group.
\begin{example}
Let $G$ be the pro-$p$ group with minimal presentation
\[
G=\left\langle x_1,\ldots,x_d\mid x_1^p[x_1,x_2]=x_2^p[x_2,x_3]=\ldots=x_d^p[x_d,x_1]=1\right\rangle,
\]
with $d\geq4$.
By Theorem~\ref{thm:pRAAGs mild}, $G$ is quadratic.
Note that the abelianization $G/[G,G]$ is a $p$-elementary abelian group; moreover, $G$ does not occur
as a maximal pro-$p$ Galois group by Theorem~\ref{thm:galois pRAAGs}.
\end{example}
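In the notation of the proof of Theorem~\ref{thm:pRAAGs mild}, for the cycle on $d=4$ vertices one has $r=4$, so the formal power series there specialises to
\[
\sum_{n\geq0}\dim(A_n(\mathcal{G}))\cdot T^n=\frac{1}{1-4T+4T^2}=\frac{1}{(1-2T)^2}=\sum_{n\geq0}(n+1)2^n\,T^n,
\]
that is, $\dim(A_n(\mathcal{G}))=(n+1)2^n$ for every $n\geq0$.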
The above theorem shows that many $p$-RAAGs are quadratic, since every $p$-graph with triangle-free underlying graph yields a quadratic group. The asymptotic number of triangle-free graphs was calculated by Erd\H{o}s, Kleitman and Rothschild; we record their result below.
\begin{theorem}{\cite{MR0463020}}\label{lem:mostpRAAGs}
The number of triangle-free graphs on $n$ vertices is asymptotic to $2^{n^2/4 + o(n^2)}$ for $n$ tending to infinity.
\end{theorem}
Proposition~\ref{prop:mild cd2} raises the following questions, one the ``dual'' of the other.
\begin{ques}
Is every mild pro-$p$ group satisfying Proposition~\ref{prop:relations} quadratic?
\end{ques}
\begin{ques}
Is every finitely presented quadratic pro-$p$ group of cohomological dimension 2 mild?
\end{ques}
By Theorem~\ref{thm:pRAAGs mild}, the above questions have a positive answer for $p$-RAAGs.
\subsection{Triangle-ful \texorpdfstring{$p$}{p}-RAAGs}\label{sec:trianglefull}
In the previous section we saw that a triangle-free $p$-graph always yields a quadratic pro-$p$ group. In particular, the $p$-RAAG associated to a triangle-free $p$-graph is always torsion-free. This is also the case if every edge of $\Gamma$ is labelled by $(0,0)$, i.e.\ $G_\Gamma$ is the pro-$p$ completion of a RAAG.
On the other hand, it turns out that general $p$-RAAGs have a surprisingly rich structure. For instance, it is possible for a $p$-RAAG to be finite.
\begin{example}\label{ex:mennicke}
The $p$-RAAG $G_\Gamma$ associated to the $p$-graph
\[
\xymatrix{& x_2 \ar[dr]^{(0,-p)} & \\ x_1 \ar[ur]^{(0,-p)} & & x_3 \ar[ll]^{(0,-p)}}
\]
is given by the presentation $$G=\left\langle x_1, x_2, x_3 \mid [x_1,x_2]=x_2^{-p}, [x_2,x_3]=x_3^{-p}, [x_3,x_1]=x_1^{-p} \right\rangle.$$
This is a finite $p$-group (see \cite[\S~4.4, Ex.~2(e)]{ser:gal}).
\end{example}
Of course a $p$-RAAG as in Example~\ref{ex:mennicke} cannot be quadratic, by Proposition~\ref{prop:properties quadratic} and Remark~\ref{rmk:propCp2}. Hence we need to somehow exclude the possibility that ``triangles collapse the group''. One such condition is given by the following definition.
\begin{definition}\label{defn:nondeg}
The $p$-RAAG $G_\Gamma$ associated to the $p$-graph $\Gamma=(({\mathcal V},{\mathcal E}),f)$ is said to be \emph{non-degenerate} if there exist a subset $\widetilde{{\mathcal E}} \subseteq {\mathcal V}^2$ and a $p$-labelling $\widetilde{f}: \widetilde{{\mathcal E}} \to p\mathbb{Z}_p \times p\mathbb{Z}_p$ such that
\begin{enumerate}
\item $\widetilde{\mathcal{G}} = ({\mathcal V},\widetilde{{\mathcal E}})$ is a combinatorial graph,
\item ${\mathcal E} \subseteq \widetilde{{\mathcal E}}$ and $\widetilde{f}_{\vert{\mathcal E}} = f$,
\item $G_{(\widetilde{\mathcal{G}},\widetilde{f})}$ is a uniform pro-$p$ group.
\end{enumerate}
\end{definition}
The above definition just says that a $p$-RAAG is non-degenerate if its $p$-graph can be ``completed'' to the combinatorial $p$-graph of a uniform pro-$p$ group.
\begin{example}
We will see later that the $p$-RAAG $G_{\Gamma_1}$ from Example~\ref{ex:pRAAGs} is a uniform pro-$p$ group. Hence it is non-degenerate.
We can complete the $p$-graph $\Gamma_2$ from Example~\ref{ex:pRAAGs} as below; see Section~\ref{sec:triang} for the details.
\[
\begin{minipage}{0.1\textwidth}
\begin{center}$\Gamma_2$ = \end{center}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\xymatrix{& x_2 & \\ x_1 \ar[ur]^{(\lambda,p^2)} & & x_3 \ar[ll]^{(0,0)}}
\end{minipage}
\begin{minipage}{0.1\textwidth}
\begin{center}$\longmapsto$ \end{center}
\end{minipage}
\begin{minipage}{0.1\textwidth}
\begin{center}$\widetilde{\Gamma}_2$ = \end{center}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\xymatrix{& x_2 \ar[dr]^{(0,0)} & \\ x_1 \ar[ur]^{(\lambda,p^2)} & & x_3 \ar[ll]^{(0,0)}}
\end{minipage}
\]
On the other hand, again using the methods from Section~\ref{sec:triang}, one can show that the $p$-graph $\Gamma_3$ below cannot be completed to give a uniform group.
\[
\begin{minipage}{0.1\textwidth}
\begin{center}$\Gamma_3$ = \end{center}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\xymatrix{ x_2 \ar[r]^{(p,p)} & x_3 \ar[d]^{(p,p)} \\ x_1 \ar[u]^{(p,p)} & x_4 \ar[l]^{(p,0)}}
\end{minipage}
\]
\end{example}
\subsubsection{Complete $p$-graphs and $p$-subgraphs}
We will now exhibit a criterion to check if a $p$-RAAG associated to a complete $p$-graph is non-degenerate (or equivalently uniform).
\begin{proposition}\label{lem:unif}
\begin{enumerate}[(a)]
\item Let $G=\langle x_1,\ldots,x_d\rangle$ be a uniform pro-$p$ group with $d = d(G)$ and assume that for all $1\le i<j\le d$ there exist $\a_{ij}, \beta_{ij} \in p\mathbb{Z}_p$ such that $[x_i,x_j] = x_i^{\a_{ij}} x_j^{\beta_{ij}}$.
Then there exists a complete $p$-graph $\Gamma$ such that $G=G_\Gamma$.
\item Let $\Gamma$ be a complete $p$-graph and let $G_\Gamma= \langle x_1,\ldots,x_d \rangle$ be its associated generalised $p$-RAAG. Then $G_\Gamma$ is uniform if and only if every triple of generators $x_r,x_s,x_t$ in $G_\Gamma$ generates a torsion-free pro-$p$ group (which must be uniform of dimension 3). In particular, $G_\Gamma$ is quadratic if and only if it is torsion-free, in which case it is uniform.
\end{enumerate}
\end{proposition}
\begin{proof}
If $G$ is a uniform pro-$p$ group as in $(a)$, then there is a presentation of the required form over a complete $p$-graph $\Gamma$
by \cite[Proposition~4.32]{ddms:padic}. This proves $(a)$.
Let $\Gamma$ be a complete $p$-graph.
Modulo reversing some arrows, the $p$-RAAG $G_\Gamma$ has a presentation
\begin{equation}\label{eq:presuniform}
G_\Gamma = \pres{x_1,\ldots,x_d}{[x_i,x_j] = x_i^{\a_{ij}} x_j^{\beta_{ij}},\ 1\le i<j\le d}.
\end{equation}
Clearly $G_\Gamma$ is a powerful group. Let $H_{r,s,t}$ be the subgroup of $G_\Gamma$ generated by a triple of generators
$x_r,x_s,x_t$ in $G_\Gamma$. If $G_\Gamma$ is uniform, then it is torsion-free and therefore $H_{r,s,t}$ is torsion-free as well. Now suppose that $H_{r,s,t}$ is torsion-free for every triple of generators $x_r,x_s,x_t$ in $G_\Gamma$. Then $H_{r,s,t}$ is uniform of dimension $3$.
Hence the $\mathbb{Z}_p$-Lie algebra $L_{H_{r,s,t}}$ associated to $H_{r,s,t}$ has the presentation
\begin{equation}\label{eq:presliealg}
L_{H_{r,s,t}} = \left\langle X_r, X_s, X_t \ \left\vert \ \begin{matrix} [X_r,X_s] = \alpha_{r,s}' X_r + \beta_{r,s}' X_s,\\ [X_s,X_t] = \alpha_{s,t}' X_s + \beta_{s,t}' X_t,\\ [X_t,X_r] = \alpha_{t,r}' X_t + \beta_{t,r}' X_r \end{matrix} \right. \right\rangle
\end{equation}
for some $\alpha_{ij}', \beta_{ij}'\in p\mathbb{Z}_p$. In fact, \eqref{eq:presliealg} follows from \eqref{eq:presuniform} by direct computation using \eqref{eq:Liebracket} and the hypothesis.
Define the powerful $\mathbb{Z}_p$-Lie algebra $L$ given by the presentation
\[
L = \pres{X_1, \ldots, X_d}{[X_i,X_j] = \a_{ij}' X_i + \beta_{ij}' X_j,\ 1\le i<j\le d}.
\]
It is easy to see that the free $ \mathbb{Z}_p$-module $L$ with basis $X_1,\ldots,X_d$ has a Lie algebra structure with the given value of the Lie bracket on the generators; it suffices to check the Jacobi identity for triples of generators. Consider the uniform pro-$p$ group $G_L$ associated to $L$
via the Lazard correspondence using the map $\exp: L\to G_L$.
Using the properties of the commutator Campbell-Hausdorff formula $\Psi: L\to G_L$
(\cite[Lem.~7.12(iii)]{ddms:padic}), one can show that
$$\exp ([X_r,X_s]) = [\exp(X_r), \exp(X_s)]= \exp(X_r)^{\a_{rs}} \exp(X_s)^{\b_{rs}}.$$
Writing $G=G_\Gamma$ for short, the map $G_L \to G$ defined by $\exp(X_r) \mapsto x_r$ yields a surjective homomorphism,
which induces the five-term exact sequence
\[
\xymatrix@C=1.2truecm{ 0\ar[r] & H^1(G,\mathbb{F}_p)\ar[r]^-{\inf_{G_L,N}^1}& H^1(G_L,\mathbb{F}_p)\ar[r]^-{{\rm res}_{G_L,N}^1}
& H^1(N,\mathbb{F}_p)^G \ar`r[d]`[l] `[dlll] `[dll] [dll] \\
& H^{2}(G,\mathbb{F}_p)\ar[r]^-{\inf_{G_L,N}^2} & H^{2}(G_L,\mathbb{F}_p) & &}
\]
where $N$ denotes the kernel of $G_L\to G$.
The map $\inf_{G_L,N}^1$ is an isomorphism because we may identify the bases of $G_L$ and of $G$.
Moreover, also the map $\inf_{G_L,N}^2$ is an isomorphism, since Proposition~\ref{prop:uniform} yields
$H^2(G,\mathbb{F}_p)\cong H^2(G_L,\mathbb{F}_p)\cong\Lambda_2H^{1}(G,\mathbb{F}_p)$.
Therefore, $H^1(N,\mathbb{F}_p)^G$ is trivial, so that $N=\mathrm{ker}(G_L\to G)$ is trivial too.
\end{proof}
Proposition~\ref{lem:unif} also gives a handy criterion to check if a complete $p$-RAAG is non-degenerate. In fact, we believe that deciding whether a generalised $p$-RAAG is degenerate boils down to the same question for $p$-subgraphs that are triangles. This can be a very difficult task. We attempt to give some partial answers in Section~\ref{sec:triang} where we will study the groups arising from $p$-graphs of the form
\[
\xymatrix{& x_2 \ar[dr]^{(\b_2,\b_3)} & \\ x_1 \ar[ur]^{(\a_1,\a_2)}& & x_3 \ar[ll]^{(\c_3,\c_1)}}
\]
We will call these groups \emph{triangle $p$-RAAGs}.
Furthermore, it is hard to decide whether the $p$-RAAG associated to a $p$-subgraph $\Gamma_1\subset \Gamma$ embeds into the $p$-RAAG associated to $\Gamma$. This can be addressed for complete $p$-subgraphs of non-degenerate $p$-graphs.
\begin{lemma}\label{lemma:complete in nondeg}
Let $\Gamma=(({\mathcal V},{\mathcal E}),f)$ be a $p$-graph such that the associated $p$-RAAG $G_\Gamma$ is non-degenerate,
and let $\Delta=(({\mathcal V}_\Delta,{\mathcal E}_\Delta),f_\Delta)$ be a complete $p$-subgraph of $\Gamma$.
Then the $p$-RAAG $G_\Delta$ is uniform, and it embeds in $G_\Gamma$.
\end{lemma}
\begin{proof}
Let $\tilde\Gamma$ be a complete $p$-graph which completes $\Gamma$.
Then $\Delta$ is a $p$-subgraph of $\tilde\Gamma$, and one has the morphisms of pro-$p$ groups
\[
\xymatrix{ G_\Delta\ar[r]^-{\phi_\Delta} & G_\Gamma\ar[r]^-{\phi_{\tilde\Gamma}} & G_{\tilde\Gamma}} ,
\]
with $G_{\tilde\Gamma}$ uniform.
Set $\psi=\phi_{\tilde\Gamma}\circ\phi_\Delta$.
Since $\Delta$ is complete, $G_\Delta$ is powerful, and therefore also $\mathrm{im}(\psi)$
is powerful.
Moreover, $G_{\tilde\Gamma}$ is torsion-free, thus also $\mathrm{im}(\psi)$ is torsion-free,
and hence uniform.
Finally, both $G_\Delta$ and $\mathrm{im}(\psi)$ are minimally generated by ${\mathcal V}_\Delta$.
Hence, $\psi\colon G_\Delta\to\mathrm{im}(\psi)$ is an isomorphism.
\end{proof}
In the rest of this section we will try to apply Theorem~\ref{thm:cohomology amalgam} to show that
several families of $p$-RAAGs yield quadratic pro-$p$ groups.
We have seen in Remark~\ref{rmk:hypotheses pRAAGs} that $p$-RAAGs automatically satisfy the cohomological hypotheses
of Theorem~\ref{thm:cohomology amalgam}. It turns out that the main obstruction to apply the theorem
is the fact that we do not have a general criterion to decide whether a certain $p$-RAAG $G_\Gamma$ is
a \emph{proper} amalgam of two $p$-RAAGs associated to full subgraphs of $\Gamma$.
In the next subsection, we will add two novel criteria for an amalgam of pro-$p$ groups to be proper.
\subsection{Proper amalgams of \texorpdfstring{$p$}{p}-RAAGs}
We will show below that the amalgam of two uniform pro-$p$ groups over a uniform subgroup $H$ is always proper, provided that the generators of $H$ are part of the minimal generating sets of both groups. This adds a new criterion to the known criteria from \cite[\S~9.2]{ribzal:book}. We will add yet another criterion for properness later in this section.
\begin{proposition}\label{prop:amalgunif}
Let $1\le d\le k \le n$. Let $G_1=\langle x_1,\ldots,x_k\rangle$ and $G_2= \langle x_d,\ldots,x_n \rangle$ be uniform pro-$p$ groups with common closed uniform subgroup $H=\langle x_d,\ldots,x_k \rangle$. Then the amalgamated free pro-$p$ product $G=G_1 \amalg_H G_2$ is proper.
\begin{proof}
First note that it is sufficient to show that $H^{p^n} = H\cap G_i^{p^n}$ for every $n$ and $i=1,2$. In fact, we can then apply \cite[Thm.~9.2.4]{ribzal:book} and we are done.
By symmetry, it is sufficient to show the property for $i=1$. It is clear that $H^{p^n} \le H\cap G_1^{p^n}$ for all $n$. By \cite[Thm.~2.7]{ddms:padic}, we have that $G_1^{p^n} = \langle x_1^{p^n},\ldots,x_k^{p^n} \rangle$ and $H^{p^n} = \langle x_d^{p^n},\ldots,x_k^{p^n} \rangle$. Suppose by contradiction that there is $g\in (H\cap G_1^{p^n}) \smallsetminus H^{p^n}$. Then $$ g= x_1^{a_1 p^n} \ldots x_{d-1}^{a_{d-1} p^n} x_d^{a_d p^n} \ldots x_k^{a_k p^n}$$ for some $a_i \in \mathbb{Z}_p$. Since $x_d^{a_d p^n} \ldots x_k^{a_k p^n} \in H$, we also have $x_1^{a_1 p^n} \ldots x_{d-1}^{a_{d-1} p^n} \in H$. Thus there exist $c_d,\ldots,c_k\in \mathbb{Z}_p$ such that
$$x_1^{a_1 p^n} \ldots x_{d-1}^{a_{d-1} p^n}\cdot x_d^{c_d p^n} \ldots x_{k}^{c_{k} p^n} = 1.$$ Since $G_1$ is uniform, we can conclude that $$a_1=\ldots =a_{d-1} = c_d =\ldots = c_k=0,$$ and hence $g\in H^{p^n}$, which yields a contradiction.
\end{proof}
\end{proposition}
We now need an auxiliary lemma on free products of pro-$p$ groups and their Frattini subgroups.
Recall that the Frattini series of a pro-$p$ group $G$ is defined inductively by $\Phi^1(G)=G$ and $\Phi^{n+1}(G) = \Phi^n(G)^p [\Phi^n(G),\Phi^n(G)]$.
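For instance, $\Phi^n(\mathbb{Z}_p)=p^{n-1}\mathbb{Z}_p$ for every $n\geq1$; more generally, if $G$ is uniform, then $\Phi(G)=G^p$ and, inductively, $\Phi^n(G)=G^{p^{n-1}}$.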
\begin{lemma}\label{lem:frattini}
Let $H$ and $K$ be pro-$p$ groups and let $G= H\ast K$. Then $$\Phi^n(H) = H\cap \Phi^n(G) \text{\quad for all $n$}.$$
\begin{proof}
Clearly $\Phi^n(H) \le H\cap \Phi^n(G)$. For the other inclusion, we first observe that there exists a retraction $r:G\to H$, i.e.\ a continuous homomorphism such that $r_{\vert H} = \mathrm{id}_H$, given by $h\mapsto h$ and $k\mapsto 1$ for $h\in H$ and $k\in K$. Hence $r(\Phi^n(G)) \le \Phi^n(r(G))$ and \begin{multline*} \Phi^n(G)\cap H = r(\Phi^n(G)\cap H) \le r(\Phi^n(G)) \cap r(H) \le \\ \le \Phi^n(r(G)) \cap H = \Phi^n(H) \cap H = \Phi^n(H).\end{multline*}
\end{proof}
\end{lemma}
We are almost ready to state our criterion for an amalgam of $p$-RAAGs to be proper. First, we record one more proposition which might be of independent interest.
\begin{proposition}\label{prop:frattiniunifsubgroup}
Let $G=G_\Gamma=\langle x_1,\ldots,x_d\rangle $ be a non-degenerate $p$-RAAG. Suppose that the subgroup $H=\langle x_k,\ldots, x_d\rangle$ is uniform for some $1 \le k\le d$. Then $$ \Phi^n(H) = H\cap \Phi^n(G) $$ for every $n\in \mathbb{N}$.
\begin{proof}
Clearly $\Phi^n(H) \le H\cap \Phi^n(G)$. Consider the free pro-$p$ group $F=F(x_1,\ldots,x_{k-1})$ generated by $x_1,\ldots,x_{k-1}$ and the canonical projection $\varphi: F\ast H \to G$ given by $x_i\mapsto x_i$ for $i=1,\ldots,d$.
Also, denote by $\widetilde{G}$ the uniform quotient of $G$ from Definition~\ref{defn:nondeg} and by $\pi: G\to \widetilde{G}$ the associated projection.
Then the map $(\pi \circ \varphi)_{\vert H}$ is an isomorphism, since $H$ is uniform by hypothesis. We will use the same symbol for the three different copies of $H$ to make the notation lighter.
Suppose by contradiction that there is $x\in (H\cap \Phi^n(G)) \smallsetminus \Phi^n(H)$. Then, by Lemma~\ref{lem:frattini}, $\pi(x)\in H\cap \Phi^n(G) = \Phi^n(H)$. Moreover, there exists $y\in \Phi^n(H)= H\cap \Phi^n(F\ast H)$ such that $\pi(\varphi(y)) = \pi(x)$. Since $y\in H$ and $\pi_{\vert H}$ is also an isomorphism, we deduce that $\varphi(y)=x$. Thus $x\in \varphi(\Phi^n(H)) \le \Phi^n(H)$, which yields a contradiction.
\end{proof}
\end{proposition}
Let us go back to the task at hand, that is, showing the properness of amalgams over uniform subgroups in certain non-degenerate cases.
\begin{proposition}\label{lem:amalgunifsubgroup}
Let $G_1= G_{\Gamma_1}$ and $G_2 = G_{\Gamma_2}$ be non-degenerate $p$-RAAGs with underlying $p$-graphs $\Gamma_1$ and $\Gamma_2$, respectively. Let $\Gamma'$ be a complete $p$-graph that embeds as a $p$-subgraph in both $\Gamma_1$ and $\Gamma_2$, and let $H=G_{\Gamma'}$. Then the amalgamated product $G_1 \amalg_H G_2$ is proper.
\begin{proof}
By Proposition~\ref{prop:frattiniunifsubgroup}, we have that $H\cap \Phi^n(G_i)=\Phi^n(H)$ for $i=1,2$ and every $n\in \mathbb{N}$. Now the result follows from \cite[Thm.~9.2.4]{ribzal:book}, where we can take $U_{i n} = \Phi^{n}(G_i)$.
\end{proof}
\end{proposition}
This generalises the well-known fact that amalgams over pro-cyclic subgroups are always proper.
\begin{example}
Let $K_1$ and $K_2$ be non-degenerate $p$-RAAGs and let $H$ be a uniform pro-$p$ group. Consider the groups $G_1=K_1\times H$ and $G_2= H\times K_2$. Then $G=G_1\amalg_H G_2$ is a proper amalgam by \cite[Ex.~9.2.6(c)]{ribzal:book}. Note that, if $H$ is a $p$-RAAG, then properness follows also from Proposition~\ref{lem:amalgunifsubgroup}.
Moreover, if both $K_1$ and $K_2$ are quadratic, by Proposition~\ref{prop:cohomology freeproduct}
and Proposition~\ref{prop:uniform} also $G_1$ and $G_2$ are quadratic, and for both $i=1,2$ one has
\[\begin{split}
H^1(G_i,\mathbb{F}_p) &= H^1(K_i,\mathbb{F}_p)\oplus H^1(H,\mathbb{F}_p),\\
H^2(G_i,\mathbb{F}_p) &= H^2(K_i,\mathbb{F}_p)\oplus (H^1(K_i,\mathbb{F}_p)\wedge H^1(H,\mathbb{F}_p))\oplus H^2(H,\mathbb{F}_p),
\end{split} \]
and the restriction maps ${\rm res}_{G_i,H}^1$ and ${\rm res}_{G_i,H}^2$ are the projections onto the second summand
of $H^1(G_i,\mathbb{F}_p)$ and the third summand of $H^2(G_i,\mathbb{F}_p)$ respectively.
Hence, all the hypotheses of Theorem~\ref{thm:cohomology amalgam} are satisfied
and $G$ is quadratic.
\end{example}
\subsection{Some quadratic triangle-ful \texorpdfstring{$p$}{p}-RAAGs}\label{sec:sometrianglefull}
Next we will produce several examples of triangle-ful $p$-RAAGs that are quadratic. First of all we remark that all ``small'' non-degenerate $p$-RAAGs are quadratic.
\begin{lemma}\label{lem:smallpRRAG}
Let $\Gamma=(({\mathcal V},{\mathcal E}),f)$ be a $p$-graph.
\begin{enumerate}
\item If $\norm{{\mathcal V}}\le 3$ and $\norm{{\mathcal E}}\le 2$, then $G_\Gamma$ is quadratic.
\item Suppose that $\norm{{\mathcal V}}= 3$ and $\norm{{\mathcal E}} = 3$ and that $G_\Gamma$ is non-degenerate. Then $G_\Gamma$ is quadratic.
\end{enumerate}
\begin{proof}
In the first case, $G_\Gamma$ is either a free pro-$p$ group, a Demushkin group or an amalgamated free product of a free pro-$p$ group and a Demushkin group over a pro-cyclic subgroup. The result follows from \cite[Thm.~3.2]{ribes:amalg} and Theorem~\ref{thm:cohomology amalgam}.
In the second case, $G_\Gamma$ is a uniform pro-$p$ group by Proposition~\ref{lem:unif}. The result follows from Proposition~\ref{prop:uniform}.
\end{proof}
\end{lemma}
\begin{remark}\label{rmk:newquadpraags}
We now exhibit some operations that, starting from quadratic $p$-RAAGs, produce new quadratic $p$-RAAGs. We point out that, in all the following examples, we can apply Theorem~\ref{thm:cohomology amalgam} because of Remark~\ref{rmk:hypotheses pRAAGs}.
\begin{enumerate}[(a)]
\item \textbf{Disjoint union.} If $\Gamma$ is the union of two disjoint subgraphs $\Gamma_1$ and $\Gamma_2$, then $G_\Gamma$ is isomorphic to $G_{\Gamma_1}\ast G_{\Gamma_2}$. Therefore, if $\Gamma_1$ and $\Gamma_2$ are quadratic $p$-RAAGs, then $G_\Gamma$ is a quadratic $p$-RAAG as well. Moreover, free products of quadratic $p$-RAAGs are quadratic $p$-RAAGs.
\item \textbf{Mirroring.} Let $G_\Gamma$ be a quadratic $p$-RAAG and let $\Gamma'$ be any full subgraph of $\Gamma$. Then the amalgamated product $G =G_\Gamma \amalg_{G_{\Gamma'}} G_\Gamma$ of $G_\Gamma$ with itself over $G_{\Gamma'}$ (identified via the identity map) is proper by \cite[Ex.~9.2.6(a)]{ribzal:book}. Hence such a $G$ is quadratic.
\item \textbf{Amalgam over uniform subgroups.} The $p$-RAAGs obtained from Proposition~\ref{prop:amalgunif} and Proposition~\ref{lem:amalgunifsubgroup}.
\item \textbf{RAAGs.} Pro-$p$ completions of abstract RAAGs can be written as a series of proper HNN-extensions, hence they are quadratic. This fact was already proved by Riley and Weigel (unpublished).
\end{enumerate}
\end{remark}
\begin{example}
Let $\Gamma$ be a $p$-graph obtained from a complete $p$-graph by removing one edge. Using Proposition~\ref{prop:amalgunif}, we can show that the $p$-RAAG $G_\Gamma$ is quadratic. One can make similar considerations using Proposition~\ref{lem:amalgunifsubgroup}.
\end{example}
Next we exhibit another special class of triangle-ful $p$-graphs that yields several new quadratic $p$-RAAGs.
\begin{definition}\label{def:chordal}
A graph ${\mathcal G}$ is \emph{chordal} (or \emph{triangulated}) if it contains no cycles of length at least four as full subgraphs.
\end{definition}
Chordal graphs are characterized by the following property (cf.\ \cite[Prop.~5.5.1]{graphbook}).
\begin{proposition}\label{prop:chordalgraph}
A graph is chordal if and only if it can be constructed
recursively by pasting along complete subgraphs, starting from
complete graphs.
\end{proposition}
Now we will prove Theorem~\ref{thm:chordal}.
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:chordal}}]
We proceed by induction on the size of $\Gamma$.
Clearly, the theorem holds if $\Gamma$ has only one vertex.
Therefore, we can assume that $\Gamma$ is a graph with more than one vertex.
If $\Gamma$ is complete, then $G= G_{\Gamma}$ is a uniform pro-$p$ group, and therefore quadratic (cf.\ Proposition~\ref{lem:unif}(b)).
Otherwise, by Proposition~\ref{prop:chordalgraph}, there are proper full subgraphs $\Gamma_1,\Gamma_2,\Delta$ of $\Gamma$, with $\Delta$ complete such that $\Gamma$ is obtained by pasting together $\Gamma_1$ and $\Gamma_2$, and $\Delta= \Gamma_1 \cap \Gamma_2$.
Thus
\begin{equation}\label{eq:amalg chordal}
G_{\Gamma} = G_{\Gamma_1} \amalg_{G_\Delta} G_{\Gamma_2},
\end{equation}
where $G_{\Delta}$ is a uniform pro-$p$ group, since $G_\Gamma$ is non-degenerate and $\Delta$ is complete (cf.\ Lemma~\ref{lemma:complete in nondeg}).
Clearly, also $G_{\Gamma_1}$ and $G_{\Gamma_2}$ are non-degenerate, and $\Gamma_1,\Gamma_2$ are chordal.
Thus, by Proposition~\ref{lem:amalgunifsubgroup}, the amalgam~\eqref{eq:amalg chordal} is proper.
By induction, $G_{\Gamma_1}$ and $G_{\Gamma_2}$ are quadratic pro-$p$ groups.
Hence, by Theorem~B, $G_\Gamma$ is a quadratic pro-$p$ group.
\end{proof}
We believe that all non-degenerate $p$-RAAGs are quadratic, but this might be very hard to prove given the limited knowledge of properness of amalgamated free products in the category of pro-$p$ groups.
As evidence of the power of our methods, we remark that every $p$-graph on at most $4$ vertices whose associated $p$-RAAG is non-degenerate yields a quadratic $p$-RAAG.
Moreover, all such $p$-graphs on $5$ vertices except
those with underlying graph $\mathcal{H}$ as in Figure~\ref{fig:2} can be handled with the same methods.
\begin{figure}[h]
\[
\xymatrix{ & \bullet \ar@{-}[dr] \ar@{-}[dl] \ar@{-}[d] & \\ \bullet \ar@{-}[dr] \ar@{-}[r] & \bullet \ar@{-}[d] & \bullet \ar@{-}[dl] \\ & \bullet & \\ & & }
\vspace{-1cm}
\]
\caption{The graph $\mathcal{H}$}\label{fig:2}
\end{figure}
If the $p$-labels on $\mathcal{H}$ are ``symmetric along the horizontal axis'', this $p$-graph can be handled via mirroring (i.e.\ Remark~\ref{rmk:newquadpraags}(b) above).
On the other hand, there are instances where we cannot decide, in general, whether a non-degenerate $p$-graph
always yields a quadratic group.
\subsection{Generalised \texorpdfstring{$p$}{p}-RAAGs and Galois groups}\label{ssec:Galois}
Throughout this subsection, $K$ denotes a field containing a root of unity of order $p$, and also $\sqrt{-1}$ if
$p=2$.
The following result shows that for some fields $K$, the maximal pro-$p$ Galois group $G_{K}(p)$
is a generalised $p$-RAAG.
\begin{proposition}\label{prop:solvable galois}
Let $G$ be a finitely generated solvable pro-$p$ group which occurs as $G_{K}(p)$ for some field $K$.
Then $G\cong G_\Gamma$ for a complete $p$-graph $\Gamma$.
\end{proposition}
\begin{proof}
By \cite[Cor.~4.9]{cq:bk}, if $G$ is solvable then it is uniform, and moreover every 2-generated subgroup of $G$
is again uniform --- i.e. it is a 2-generated Demushkin group.
Thus, given a basis $\mathcal{X}=\{x_1,\ldots,x_d\}$ of $G$, one has $[x_i,x_j]\in\langle x_i,x_j\rangle$
and Proposition~\ref{lem:unif} yields the claim.
\end{proof}
Clearly, the Bloch-Kato conjecture implies that being quadratic is a necessary condition
for a generalised $p$-RAAG to occur as maximal pro-$p$ Galois group for some field $K$.
It is natural to ask whether there are further conditions that a generalised $p$-RAAG must fulfill in order
to occur as maximal pro-$p$ Galois group of a field $K$.
For example, one has the following obstruction.
\begin{example}\label{example:galois square}
Let $\Gamma=({\mathcal G},f)$ be the square $p$-graph with labels all equal to $(0,0)$.
Then, by \cite[Thm.~5.6]{cq:bk}, the $p$-RAAG $G_\Gamma$ cannot be realized as $G_{K}(p)$ for any field $K$.
\end{example}
We will shortly see another necessary condition, concerning how the edges of a $p$-graph $\Gamma=({\mathcal G},f)$ and their labels patch together.
Let $(x_i,x_j)$ be an edge of ${\mathcal G}$ and let $H_{ij}=\langle x_i,x_j\rangle$ be the subgroup
of the $p$-RAAG $G_\Gamma$ generated by $x_i$ and $x_j$.
By Example~\ref{ex:pRAAGs} and Lemma~\ref{lem:isom}, if $H_{ij}$ is uniform then it is a 2-generated
Demushkin group and there exist $u_{ij},w_{ij}\in H_{ij}$ such that
\begin{equation}\label{eq:pres cyc}
H_{ij}=\langle u_{ij},w_{ij}\mid u_{ij}w_{ij}u_{ij}^{-1}=w_{ij}^{1+\lambda_{ij}}\rangle,
\end{equation}
for some $\lambda_{ij}\in p\mathbb{Z}_p$.
In particular, the structure of $H_{ij}$ induces a homomorphism of pro-$p$ groups
\begin{equation}\label{eq:cyc morph}
\theta_{ij}\colon H_{ij}\to1+p\mathbb{Z}_p, \qquad \theta_{ij}(u_{ij})=1+\lambda_{ij},\ \theta_{ij}(w_{ij})=1.
\end{equation}
Recall that $1+p\mathbb{Z}_p=\{1+p\lambda\mid\lambda\in\mathbb{Z}_p\}$, which is isomorphic to $\mathbb{Z}_p$ if $p\neq2$,
and to $C_2\oplus\mathbb{Z}_2$ if $p=2$.
\begin{definition}\label{defi:kummerian graph}
A $p$-graph $\Gamma=({\mathcal G},f)$, with underlying graph ${\mathcal G}=({\mathcal V},{\mathcal E})$, is called \emph{cyclotomic} if:
\begin{itemize}
\item[(a)] for every edge $(x_i,x_j)\in{\mathcal E}$, the subgroup $H_{ij}$ is uniform, and
\item[(b)] for all edges $(x_i,x_j),(x_j,x_h)\in{\mathcal E}$ we have $\theta_{ij}(x_j)=\theta_{jh}(x_j)$ (cf.\ \eqref{eq:cyc morph}).
\end{itemize}
\end{definition}
Namely, in a cyclotomic $p$-graph $\Gamma=({\mathcal G},f)$
the homomorphisms $\theta_{ij}$ induced by the edges of ${\mathcal G}$ (whose associated subgroups $H_{ij}$ are 2-generated Demushkin groups)
agree on common vertices.
The following shows that being cyclotomic is a necessary condition for a $p$-graph in order
to give rise to a generalised $p$-RAAG which is a maximal pro-$p$ Galois group.
\begin{theorem}\label{thm:galois pRAAGs}
Let $K$ be a field containing a root of unity of order $p$ and let $\Gamma=({\mathcal G},f)$ be a $p$-graph with underlying graph ${\mathcal G}=({\mathcal V},{\mathcal E})$. Suppose that $G_\Gamma\cong G_{K}(p)$.
Then $\Gamma$ is cyclotomic.
\end{theorem}
\begin{proof}
Let $$\theta\colon G_\Gamma\longrightarrow1+p\mathbb{Z}_p$$ be the $p$th cyclotomic character induced by the action of $G_{K}(p)$ on the
roots of unity of order a power of $p$ lying in the maximal pro-$p$ extension $K(p)$ of $K$
(cf., e.g., \cite[Def.~7.3.6]{nsw:cohn}).
For an edge $(x_i,x_j)\in{\mathcal E}$, the subgroup $H_{ij}$ is the maximal pro-$p$ Galois group
of the subextension $K(p)/K(p)^{H_{ij}}$.
Since $H_{ij}$ is not free, it is uniform by \cite[Thm.~4.6]{cq:bk},
with $\theta_{ij}\colon H_{ij}\to1+p\mathbb{Z}_p$ the cyclotomic character of the extension $K(p)/K(p)^{H_{ij}}$,
which coincides with the restriction $\theta\vert_{H_{ij}}$.
Therefore, for all edges $(x_i,x_j),(x_j,x_h)\in{\mathcal E}$, one has
$ \theta_{ij}(x_j)=\theta(x_j)=\theta_{jh}(x_j)$.
\end{proof}
The following result shows that cyclotomic $p$-graphs are also a good source of non-de\-ge\-ne\-ra\-te $p$-RAAGs.
\begin{proposition}\label{prop:cyclotomic graph}
Let $\Gamma=({\mathcal G},f)$ be a cyclotomic $p$-graph with underlying graph ${\mathcal G}=({\mathcal V},{\mathcal E})$.
Then the $p$-RAAG $G_{\Gamma}$ is non-degenerate.
\end{proposition}
\begin{proof}
By Remark~\ref{rmk:newquadpraags}(a), we may assume that ${\mathcal G}$ is connected, so that every vertex $x_i\in{\mathcal V}$ belongs to some edge.
Let $\theta\colon G_\Gamma\to1+p\mathbb{Z}_p$ be the homomorphism induced by the homomorphisms $\theta_{ij}$ for every edge
of $\Gamma$ --- since $\Gamma$ is cyclotomic, $\theta$ is well defined.
Moreover, let \eqref{eq:presentation} be the minimal presentation of $G_\Gamma$ induced by $\Gamma$ and let
$\hat\theta\colon F\to1+p\mathbb{Z}_p$ be the composition of the projection $F\to G_\Gamma$ with $\theta$.
Consider the following normal subgroups
\[\begin{split}
K(G_\Gamma)&=\left\langle\left. y^{-\theta(x)}xyx^{-1}\right\vert x\in G_\Gamma,y\in\mathrm{ker}(\theta) \right\rangle \lhd G_\Gamma,\\
K(F)&=\left\langle\left. y^{-\hat\theta(x)}xyx^{-1}\right\vert x\in F,y\in\mathrm{ker}(\hat\theta) \right\rangle \lhd F
\end{split}\]
(cf.\ \cite[\S~3]{eq:kummer}).
Note that $K(G_\Gamma)\le\Phi(G_\Gamma)\cap\mathrm{ker}(\theta)$ and $K(F)\le\Phi(F)\cap\mathrm{ker}(\hat\theta)$.
By \eqref{eq:pres cyc}, $R$ is generated as a normal subgroup of $F$ by the relations
$$w_{ij}^{-\hat\theta(u_{ij})}u_{ij}w_{ij}u_{ij}^{-1},\qquad \text{for }(x_i,x_j)\in{\mathcal E}({\mathcal G}),$$
and consequently $R\le K(F)$.
Therefore, \cite[Thm.~5.6]{eq:kummer} implies that the quotient $\bar G_\Gamma:=G_\Gamma/K(G_\Gamma)$ is torsion-free
and it splits as semi-direct product
\[
\bar G_\Gamma\cong\mathbb{Z}_p^m\rtimes (G_\Gamma/\mathrm{ker}(\theta)),\qquad \text{for some }m\geq0,\]
with action $\bar xz\bar x^{-1}= z^{\bar\theta(\bar x)}$ for all $z\in\mathbb{Z}_p^m$ and $\bar x\in\bar G_\Gamma$,
where $\bar\theta\colon \bar G_\Gamma\to1+p\mathbb{Z}_p$ is the morphism induced by $\theta$
(namely, the pro-$p$ group $G_\Gamma$ endowed with the morphism $\theta$ is ``Kummerian'', following the language of \cite{eq:kummer}).
In particular, $[x,y]\in\langle x,y\rangle\le \bar G_\Gamma$.
Therefore, by Proposition~\ref{lem:unif}~(a) there exists a complete $p$-graph $\tilde \Gamma$ such that
$\bar G_{\Gamma}\cong G_{\tilde\Gamma}$.
Since ${\mathcal V}(\tilde\Gamma)={\mathcal V}(\Gamma)$, $\tilde\Gamma$ is a completion of $\Gamma$.
\end{proof}
The class of \emph{Koszul graded algebras} is a particular class of quadratic algebras,
singled out by Priddy in \cite{priddy} --- the definition of Koszul graded algebra
is highly technical; we refer to \cite[Ch.~2]{pp:quad} and to \cite[\S~2]{MPQT}.
Recently, Koszul graded algebras became of great interest in the context of Galois cohomology
(see, e.g., \cite{pos:K,posi:koszul,MPQT}).
In particular, Positselski conjectured in \cite{posi:koszul} that the cohomology algebra $H^\bullet(G_{K}(p),\mathbb{F}_p)$ is Koszul, if $G_{K}(p)$ is finitely generated.
Moreover, Weigel conjectured in \cite{weigel:collection} that the graded group algebra
\[
\mathrm{gr}(\mathbb{F}_pG_{K}(p))=\bigoplus_{n\geq0}I^n/I^{n+1},\qquad I^0=\mathbb{F}_pG_{K}(p),
\]
where $I$ denotes the augmentation ideal of the group algebra $\mathbb{F}_pG_{K}(p)$ (cf.\ \cite[\S~3.2]{MPQT}),
is also a Koszul graded algebra.
Usually it is quite hard to check whether a graded algebra is Koszul.
Nonetheless, in the setting of generalised $p$-RAAGs we can easily deduce the following.
\begin{coro} Let $\Gamma=({\mathcal G},f)$ be a $p$-graph and let $G_\Gamma$ be the associated $p$-RAAG.
\begin{itemize}
\item[(i)] If $G_\Gamma$ is quadratic, then
the cohomology algebra $H^\bullet(G_\Gamma,\mathbb{F}_p)$ is Koszul.
\item[(ii)] If $\Gamma$ is triangle-free, then the graded group algebra $\mathrm{gr}(\mathbb{F}_pG_\Gamma)$ is Koszul.
\end{itemize}
\end{coro}
\begin{proof}
Statement (i) follows from Theorem~\ref{thm:pRAAGs quadratic} and \cite[\S~4.2.2]{weigel:koszul}.
Statement (ii) follows from Theorem~\ref{thm:pRAAGs mild} and \cite[Thm.~8.4]{MPQT}.
\end{proof}
Thus, generalised $p$-RAAGs provide a huge source of pro-$p$ groups for which Positselski's and Weigel's
\emph{Koszulity conjectures} hold.
The above result raises the following question.
\begin{ques}
Let $G_\Gamma$ be a quadratic $p$-RAAG with associated $p$-graph $\Gamma$.
Is the graded algebra $\mathrm{gr}(\mathbb{F}_p G_\Gamma)$ Koszul?
\end{ques}
\section{Triangle \texorpdfstring{$p$}{p}-RAAGs}\label{sec:triang}
In this section we slightly change the focus of our investigation. Until now we were mainly concerned with finding new examples of quadratic pro-$p$ groups in the family of $p$-RAAGs. Here we will mainly be concerned with determining the isomorphism classes of quadratic $p$-RAAGs arising from triangle $p$-graphs. These will be called \emph{triangle $p$-RAAGs}.
\subsection{The Lazard correspondence}
Given a powerful pro-$p$ group $G$ and $n\in \mathbb{N}$, we have $P_{n+1}(G)=G^{p^n}=\{ x^{p^n} ~|~ x\in G \}$ (see \cite[Thm.~3.6]{ddms:padic}). Moreover, if $G$ is uniform, then the mapping $ x \mapsto x^{p^n}$ is a homeomorphism from $G$ onto $G^{p^n}$ (see \cite[Lem.~4.10]{ddms:padic}). This shows that each element $x\in G^{p^n}$ admits a unique $p^n$th root in $G$, which we denote by $x^{p^{-n}}$.
As in the case of pro-$p$ groups, a $\mathbb{Z}_p$-Lie algebra $L$ is called \emph{powerful} if $L\cong \mathbb{Z}_p^d$ for some $d>0$ as $\mathbb{Z}_p$-module and $(L,L)_{Lie}\subseteq pL$ if $p$ is odd, or $(L,L)_{Lie} \subseteq 4L$ if $p=2$.
If $G$ is an analytic pro-$p$ group, then it has a characteristic open subgroup which is uniform.
For every open uniform subgroup $H \leq G$, $\mathbb{Q}_p[H]$ can be made into a normed $\mathbb{Q}_p$-algebra,
call it $A$, and $\log(H)$, considered as a subset of the completion $\hat{A}$ of $A$,
has the structure of a Lie algebra over $\mathbb{Z}_p$.
There is a different construction of an intrinsic Lie algebra over $\mathbb{Z}_p$ for uniform groups.
The uniform group $U$ and its Lie algebra over $\mathbb{Z}_p$, call it $L_U$, are identified as sets,
and the Lie operations are defined by
\begin{equation}\label{eq:Liesum}
g+h=\lim_{n \to \infty}(g^{p^n}h^{p^n})^{p^{-n}}
\end{equation}
and
\begin{equation}\label{eq:Liebracket}
(g,h)_{Lie}=\lim_{n \to \infty}[g^{p^n},h^{p^n}]^{p^{-2n}}=\lim_{n\to \infty} (g^{-p^n}h^{-p^n}g^{p^n}h^{p^n})^{p^{-2n}} .
\end{equation}
It turns out that $L_U$ is a powerful $\mathbb{Z}_p$-Lie algebra and it is isomorphic to the $\mathbb{Z}_p$-Lie algebra $\log(U)$ (cf.\ \cite[Cor.~7.14]{ddms:padic}).
On the other hand, if $L$ is a powerful $\mathbb{Z}_p$-Lie algebra, then the Campbell-Hausdorff formula induces
a group structure on $L$; the resulting group is a uniform pro-$p$ group.
If this construction is applied to the $\mathbb{Z}_p$-Lie algebra $L_U$ associated to a uniform group $U$,
one recovers the original group.
Indeed, the assignment $U\mapsto L_U$ gives an equivalence between the category of uniform pro-$p$ groups
and the category of powerful $\mathbb{Z}_p$-Lie algebras (see \cite[Thm.~9.10]{ddms:padic}).
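As an elementary illustration of \eqref{eq:Liesum} and \eqref{eq:Liebracket}: if $U\cong\mathbb{Z}_p^d$ is abelian, then $g^{p^n}h^{p^n}=(gh)^{p^n}$ for all $g,h\in U$, so $g+h=gh$ and the bracket $(g,h)_{Lie}$ is trivial; hence $L_U$ is the abelian $\mathbb{Z}_p$-Lie algebra $\mathbb{Z}_p^d$, and applying the Campbell-Hausdorff construction to $L_U$ recovers $U$.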
In light of Theorem~\ref{thm:pRAAGs mild} and Proposition~\ref{lem:unif}, the structure of quadratic $p$-RAAGs
associated to triangle $p$-graphs is of particular interest.
So we are interested in torsion-free pro-$p$ groups defined by presentations of the form
\begin{equation}
G_{A'} =\langle x_1,x_2,x_3 \mid [x_1,x_2]=x_1^{\a_1'} x_2^{\a_2'}, [x_2,x_3]=x_2^{\b_2'} x_3^{\b_3'}, [x_3,x_1]=x_3^{\c_3'} x_1^{\c_1'} \rangle
\end{equation}
with parameters $A'=(\a_1',\a_2', \b_2',\b_3', \c_1',\c_3') \in (p\mathbb{Z}_p)^6$. Since the torsion-free group $G_{A'}$ is uniform, we can associate to it the $\mathbb{Z}_p$-Lie lattice $L_{G_{A'}}$. It follows from \eqref{eq:Liebracket} that $L_{G_{A'}}$ has a ($\mathbb{Z}_p$-Lie algebra) presentation of the form
\begin{equation}\label{eq:Lielattice}
L_A = \left\langle x_1,x_2,x_3\ \left\vert\ \begin{matrix}
[x_1,x_2] = \a_1 x_1 + \a_2 x_2,\ [x_2,x_3] = \b_2 x_2 + \b_3 x_3, \\
[x_3,x_1] = \c_1 x_1 + \c_3 x_3
\end{matrix} \right. \right\rangle
\end{equation}
for some $\a_1,\a_2$, $\b_2,\b_3$, $\c_1,\c_3 \in p\mathbb{Z}_p$.
Hence, to understand the structure of triangle $p$-RAAGs, it is important to determine all $3$-dimensional $\mathbb{Z}_p$-Lie lattices of type \eqref{eq:Lielattice}. This will be done in the next subsection.
\subsection{Triangle Lie algebras}
\begin{lemma}\label{lem:system}
Let $L$ be a free $\mathbb{Z}_p$-module with basis $\{x_1,x_2,x_3\}$ and let $\a_1,\a_2$, $\b_2,\b_3$, $\c_1,\c_3 \in p\mathbb{Z}_p$. Consider the bilinear function $[\cdot,\cdot] : L\times L \to L$ defined by $[x_i,x_i]=0$ for $i=1,2,3$ and
\[
[x_1,x_2] = \a_1 x_1 + \a_2 x_2,\quad [x_2,x_3] = \b_2 x_2 + \b_3 x_3,\quad [x_3,x_1] = \c_1 x_1 + \c_3 x_3.
\]
Then $[\cdot,\cdot]$ defines a bracket on $L$ if and only if the following system is satisfied:
\begin{equation}\label{eq:system}
\begin{cases}
\a_1 \b_2 - \c_1 \b_3 =0 \\
\c_1 \a_2 - \b_2 \c_3 = 0 \\
\a_1 \c_3 - \a_2 \b_3 = 0
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
We only need to check that \eqref{eq:system} is satisfied if and only if the function $[\cdot,\cdot]$ satisfies the Jacobi identity. Now, a straight-forward computation yields the claim.
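For instance, expanding $[[x_1,x_2],x_3]+[[x_2,x_3],x_1]+[[x_3,x_1],x_2]$ and collecting the coefficient of $x_1$ gives
\[
-\a_1\c_1+(\b_3\c_1-\a_1\b_2)+\a_1\c_1=\c_1\b_3-\a_1\b_2,
\]
whose vanishing is the first equation of \eqref{eq:system}; the coefficients of $x_2$ and $x_3$ yield the remaining two equations.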
\end{proof}
\begin{proposition}\label{prop:families}
Let $\a_1,\a_2$, $\b_2,\b_3$, $\c_1,\c_3 \in p\mathbb{Z}_p$ and consider the $\mathbb{Z}_p$-Lie lattice $L_A$ defined in \eqref{eq:Lielattice}. Then there exist $\eta,\rho,\mu,\lambda \in p\mathbb{Z}_p$, satisfying $\eta \rho - \mu \lambda =0$, such that $L_A$ is isomorphic to one of the following:
\begin{itemize}
\setlength\itemsep{1em}
\item $L_1(\eta,\rho,\mu,\lambda) = \langle x, y, z \vert [x,y]= \eta y,\ [y,z]= \mu y,\ [z,x] = \lambda z + \rho x \rangle.$
\item $L_2(\eta,\mu) = \langle x, y, z \vert [x,y]= 0,\ [y,z]= \eta y + \mu z,\ [z,x] = 0 \rangle.$
\item $L_3(\eta,\mu) = \langle x, y, z \vert [x,y]= 0,\ [y,z]= \eta z,\ [z,x] = \mu z \rangle.$
\item For $\eta\mu\lambda \neq 0$, $$L_4(\eta,\mu,\lambda) = \langle x,y,z \vert [x,y] =\eta x + \mu y, [y,z] = \lambda y- \eta z, [z,x] = - \lambda x - \mu z \rangle.$$
\item For $\eta\mu\lambda \neq 0$, $$L_\ast(\eta,\mu,\lambda) = \langle x,y,z \vert [x,y] =\eta x + \mu y, [y,z] = \lambda y+ \eta z, [z,x] = \lambda x + \mu z \rangle.$$
\end{itemize}
\end{proposition}
\begin{proof}
Since $L_A$ is a Lie lattice, the parameters have to satisfy \eqref{eq:system}, by Lemma~\ref{lem:system}. First suppose that $\a_1=0$. It is easy to check the statement in this case; see Figure~\ref{fig:1} below for a schematic treatment.
\begin{figure}[h]
$$
\resizebox{0.80\linewidth}{!}{%
\Tree[.{\framebox{$\a_1 = 0$}} [ .{\framebox{$\b_3 \c_1= 0 \text{\quad and \quad} \a_2 \b_3 =0$}} [.{\framebox{$\b_3 = 0 \text{\quad and \quad} \a_2 \c_1 -\b_2 \c_3 =0$}} [.\framebox{{\framebox{$L_A= L_1(\a_2,\b_2,\c_3,\c_1)$}}} ]] [.\framebox{$\a_2 = \c_1 =0$} [.{\framebox{$\b_2 \c_3 = 0$}} [.{\framebox{$\b_2=0$}} \framebox{\framebox{$L_A=L_3(\b_3,\c_3)$}} ] [.{\framebox{$\c_3 = 0$}} \framebox{\framebox{$L_A = L_2(\b_2,\b_3) $}} ]]]]]}
$$
\caption{Proof of Proposition~\ref{prop:families} for $\a_1=0$.}\label{fig:1}
\end{figure}
If $\a_1\neq 0$ and some other coefficient is zero, we can use a change of basis to go back to the above case.
Suppose now that all coefficients are non-zero. Define the constants $$ q = \frac{\b_2}{\c_1} = \frac{\b_3}{\a_1},\quad r = \frac{\c_1}{\b_2} = \frac{\c_3}{\a_2},\quad s= \frac{\a_1}{\b_3} = \frac{\a_2}{\c_3} .$$ Then $\a_1 \b_2 = \b_3 \c_1$, together with the definition of $r$, implies that $\a_1 = \b_3 r$. Since $\a_1= s \b_3$, we must have $r=s$. Furthermore $\b_2 \c_3 = \a_2 \c_1$, together with the definition of $q$, implies that $q \c_3 = \a_2$. Since $\a_2 = \c_3 s$, we must have $s=q$. So $q=r=s$.
Finally note that $\b_2 = q \c_1$ and $\c_1 = \b_2 r = \b_2 q$, therefore $q= \pm 1$. So either $q=r=s = 1$ or $q=r=s =-1$. In the first case $\b_2=\c_1$, $\b_3=\a_1$ and $\c_3=\a_2$, so that $L_A=L_\ast(\a_1,\a_2,\b_2)$; in the second case $\b_2=-\c_1$, $\b_3=-\a_1$ and $\c_3=-\a_2$, so that $L_A=L_4(\a_1,\a_2,\b_2)$.
\end{proof}
The unusual numbering in the previous proposition will become clear after Lemma~\ref{lem:metabelianorSL21}. Recall that a non-zero element $z\in \mathbb{Z}_p$ can be written as a formal power series $z=\sum_{n=N}^\infty a_n p^n$, for $a_n\in \{0,\ldots,p-1\}$ with $a_N\neq 0$, and we define $v_p(z)=N$.
\begin{lemma}\label{lem:xyz}
Each of the algebras of Proposition~\ref{prop:families} can be written in the form
\[
L_{\a,\b,\c}^{\xi_1,\xi_2,\xi_3} = \pres{x,y,z}{[x,y]= \a \xi_1 , [y,z] = \b \xi_2 , [z,x]= \c \xi_3}
\]
where $\a,\b,\c \in p\mathbb{Z}_p$, and $\xi_1\in \{0,x,y\}$, $\xi_2 \in \{0,y,z\}$ and $\xi_3 \in \{0,z,x,y\}$. Moreover, we have the following isomorphisms of Lie algebras
\[
L_1(\eta,\rho,\mu,\lambda) \cong L_{0,\lambda,\eta}^{0,x,y},\ L_2(\eta,\mu) \cong L_{0,\mu,\eta}^{0,z,z},\ L_3(\eta,\mu) \cong L_{0,\eta,\mu}^{0,z,z}
\]
\[
L_4(\eta,\mu,\lambda) \cong L_{0,\eta,-\eta}^{0,y,x} \text{ and } L_\ast(\eta,\mu,\lambda) \cong L_{\eta,\eta,-2\rho}^{x,z,y}.
\]
\begin{proof}
The isomorphisms above can again be obtained by base-change. We will spell out the details in a few cases for the sake of clarity.
Clearly $L_3(\eta,\mu)\cong L_{0,\eta,\mu}^{0,z,z}$.
Consider the algebra $L_2(\eta,\mu)$. Without loss of generality we can assume $v_p(\eta) \le v_p(\mu)$. The change of basis $$u=x-\frac{\mu}{\eta}z, \quad v=y+\frac{\mu}{\eta} z, \quad t=z$$ yields
\[
[u,v] = \mu v,\quad [v,t] = \eta v,\quad [t,u] = 0.
\]
and $L_2(\eta,\mu)\cong L_3(\mu,\eta)$. Hence $L_2(\eta,\mu) \cong L_{\mu,\eta,0}^{y,y,0} \cong L_{0,\mu,\eta}^{0,z,z} \cong L_3(\mu,\eta) $.
The calculations for the other cases are completely analogous and will be omitted.
\end{proof}
\end{lemma}
\begin{lemma}\label{lem:metabelianorSL21}
Let $L=L_{\a,\b,\c}^{\xi_1,\xi_2,\xi_3}$ be the Lie algebra defined in Lemma~\ref{lem:xyz}. Then $L$ is not metabelian if, and only if, $L = L_{\eta,\eta,-2\rho}^{x,z,y}$. Moreover, in this case $L$ is commensurable to the Lie algebra of $\mathrm{SL}_2^1(\mathbb{Z}_p)$.
\begin{proof}
From the proof of Lemma~\ref{lem:xyz}, we can see that it is sufficient to check that $L_1(\eta,\mu,\lambda,\rho)$ and $L_3(\eta,\mu)$ are metabelian for every allowed choice of coefficients. It is clear that $L_3(\eta,\mu)$ is metabelian.
The derived subalgebra of $L_{0,\lambda,\eta}^{0,y,x}$ is generated by $\lambda y$ and $\eta x$ and these elements commute in $L_{0,\lambda,\eta}^{0,y,x}$.
The last claim follows from the fact that $\mathbb{Q}_p \otimes L \cong \mathfrak{sl}_2(\mathbb{Q}_p)$ (see for instance \cite[Prop.~2.30]{ns:selfsim}).
\end{proof}
\end{lemma}
\subsection{Solvable triangle groups}
We start by determining the isomorphism classes of $p$-RAAGs with exactly one edge.
\begin{lemma}\label{lem:isom}
Let $G_{\a,\b}$ be the pro-$p$ group defined by $$\pres{x,y}{[x,y]=x^\a y^\b}$$ with $\a,\b\in p^{1+\varepsilon}\mathbb{Z}_p$ ($\varepsilon =0$ if $p$ is odd and $\varepsilon=1$ if $p=2$) and set $c=\min \{v_p(\a),v_p(\b)\}$. Then $G_{\a,\b} \cong G_{p^{c},0}$.
\end{lemma}
\begin{proof}
For $\a=\b=0$, the group $G_{0,0}$ is isomorphic to $\mathbb{Z}_p^2$ and the statement is clear. We can then suppose that $\a\neq 0$ and $v_p(\a) \le v_p(\b)\le \infty$.
Note that $G_{\a,\b}$ is a $2$-generated powerful pro-$p$ group. In particular, $[G_{\a,\b},G_{\a,\b}]$ is a finitely generated normal subgroup of $G_{\a,\b}$ and it is non-trivial. In fact, one can easily show that $$G_{\a,\b}/[G_{\a,\b},G_{\a,\b}] \cong \mathbb{Z}_p \times \left(\mathbb{Z}_p/p^{v_p(\a)}\mathbb{Z}_p\right).$$
Since $G_{\a,\b}$ has positive deficiency, \cite[Thm.~4]{HS:deficiency} yields that $G_{\a,\b}$ is a pro-$p$ duality group of dimension $2$. Hence, $G_{\a,\b}$ is a $p$-adic analytic Demushkin group of dimension $2$. By \cite[Prop.~7.1]{kgs:small}, $G_{\a,\b}$ is isomorphic to the group
\[
H= \pres{x,y}{[x,y]= x^{p^k}}
\]
for some positive integer $k$ ($k\ge 2$ for $p=2$). Comparing the abelianisations of $G_{\a,\b}$ and $H$, we conclude that $k= v_p(\a)$.
\end{proof}
\begin{lemma}\label{lem:2gens}
Let $\a,\b$ and $G_{\a,\b}$ be as above. Consider the powerful $\mathbb{Z}_p$-Lie lattice $L_{\a,\b}$ defined by $$\pres{x,y}{[x,y] = \a x+ \b y}$$ and the associated pro-$p$ group $G_{L_{\a,\b}}$ under the Lazard correspondence. Then
\[
G_{L_{\a,\b}} \cong G_{\a,\b}.
\]
\end{lemma}
\begin{proof}
Without loss of generality we may suppose that $v_p(\a)\le v_p(\b) \le \infty$. We will first perform a change of basis in $L_{\a,\b}$: set $u = x+ (\b/\a) y$ and $v= y$, then $L_{\a,\b}' = \pres{u,v}{[u,v] = \a u}_{\mathrm{Lie}}$ is clearly isomorphic to $L_{\a,\b}$.
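Indeed, $[u,v]=[x+(\b/\a)y,\,y]=[x,y]=\a x+\b y=\a u$; note that $\a=0$ would force $\b=0$, in which case both Lie lattices are abelian and the claim of the lemma is clear.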
Let $a=p^{v_p(\a)}$. Working directly with the Lie bracket definition of the uniform $p$-adic analytic pro-$p$ group $H_{a} = \pres{x,y}{[x,y]=x^{a}}$, it is easy to see that its associated Lie algebra $L_{H_a}$ is isomorphic to $\pres{x,y}{[x,y]= a x}_{\mathrm{Lie}}$. By a standard Lie-theoretic computation, it follows that $L_{\a,\b}'$ is isomorphic to $L_{H_a}$. Therefore $G_{L_{\a,\b}} \cong G_{L_{H_{a}}}$. By the Lazard correspondence $G_{L_{H_{a}}} \cong H_a$ and finally $H_a \cong G_{\a,\b}$, by Lemma~\ref{lem:isom}.
\end{proof}
In a similar fashion to the previous section, we are going to define some groups that will turn out to correspond to the above Lie algebras. For $\a,\b,\c \in p\mathbb{Z}_p$, and $\xi_1\in \{0,x,y\}$, $\xi_2\in \{0,y,z\}$ and $\xi_3\in \{0,z,x,y\}$, define the pro-$p$ group
\[
G_{\a,\b,\c}^{\xi_1,\xi_2,\xi_3} = \pres{x,y,z}{[x,y] = \xi_1^\a,\ [y,z] = \xi_2^\b,\ [z,x] = \xi_3^\c}
\]
Also define $$G_2(\a,\b) = \pres{x,y,z}{[x,y] = 1,\ [y,z] = y^\a z^\b,\ [z,x] = 1}.$$ It turns out that the picture for groups is analogous to that of Lie lattices (cf.\ Proposition~\ref{prop:families}).
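Unwinding the notation, one has for instance
\[
G_{0,\b,\c}^{0,y,x} = \pres{x,y,z}{[x,y] = 1,\ [y,z] = y^\b,\ [z,x] = x^\c}
\quad\text{and}\quad
G_{0,\b,\c}^{0,z,z} = \pres{x,y,z}{[x,y] = 1,\ [y,z] = z^\b,\ [z,x] = z^\c}.
\]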
\begin{lemma}\label{lem:3gens}
The groups $G_2(\a,\b)$, $G_{0,\b,\c}^{0,z,z}$ and $G_{0,\b,\c}^{0,y,x}$ are metabelian uniform pro-$p$ groups of dimension $3$ for every choice of parameters. Moreover,
\[
L_{G_2(\a,\b)} \cong L_2(\a,\b)\cong L_{0,\b,\a}^{0,z,z}, \quad L_{G_{0,\b,\c}^{0,z,z}} \cong L_{0,\b,\c}^{0,z,z} \text{\ \ and \ \ } L_{G_{0,\b,\c}^{0,y,x}} \cong L_{0,\b,\c}^{0,y,x}.
\]
In particular, $G_{0,\b,\c}^{0,z,z} \cong G_2(\b,\c)$.
\end{lemma}
\begin{proof}
First of all notice that these groups are powerful pro-$p$ groups by definition.
We first consider $G_{0,\b,\c}^{0,y,x}$. Define the homomorphisms
\[
\varphi_1: G_{0,\b,\c}^{0,y,x} \to \pres{z,x}{[z,x] = x^\c} \text{ and } \varphi_2: G_{0,\b,\c}^{0,y,x} \to \pres{y,z}{[y,z] = y^\b}.
\]
By Lemma~\ref{lem:2gens}, the subgroup generated by $x$ and $z$ is uniform of dimension $2$. Moreover, the kernel of $\varphi_1$ is generated by $y$ and hence infinite, because $y$ has infinite image via $\varphi_2$. In particular, the dimension of $G_{0,\b,\c}^{0,y,x}$ as a $p$-adic analytic group must satisfy $$ 3\ge \dim(G_{0,\b,\c}^{0,y,x}) \ge \dim (\mathrm{Im}\ \varphi_1 ) + \dim (\mathrm{Ker}\ \varphi_1 ) = 2+1 =3. $$ Now, $G_{0,\b,\c}^{0,y,x}$ is a powerful pro-$p$ group with $d(G)=\dim(G)$ and, by \cite[Prop.~2.12]{ks}, it must be uniform. The isomorphism of Lie algebras now follows from the definition of the Lazard Lie bracket via a straight-forward calculation using \eqref{eq:Liebracket}.
For the group $G_2(\a,\b)$ the proof is similar to the previous case using the homomorphisms induced by $x\mapsto 1$ and $y\mapsto 1$.
Moreover, it is clear that $G_2(\a,\b) = \langle x \rangle \times \langle y,z \rangle$. By Lemma~\ref{lem:2gens} applied to the subgroup generated by $y$ and $z$, we deduce that $L(G_2(\a,\b)) \cong \mathbb{Z}_p \oplus L_{\a,\b} \cong L_2(\a,\b)$.
We need to use a different strategy to prove the claims about $G_{0,\b,\c}^{0,z,z}$. Without loss of generality, we can assume that $v_p(\b)\ge v_p(\a)$. For $\delta \in \mathbb{Z}_p$, since $y^{-1}z y = z^{1-\a}$, we deduce that $[z, y^\delta] = z^{(1-\a)^\delta -1}$. Hence, $$ [y^\delta x, z] = [y^\delta,z]^x [x,z] = (z^{1-(1-\a)^\delta})^x z^{-\b} = z^{(1+\b)(1-(1-\a)^\delta)} z^{-\b}.$$ By setting $[z,y^\delta]=1$, we obtain the equation $ (1+\b)(1-(1-\a)^\delta) = \b. $ Solving for $\delta$ we obtain $\delta = \log(1-\frac{\b}{1+\b})/\log(1-\a) \in \mathbb{Z}_p,$ since $v_p(\b)\ge v_p(\a)$. Therefore $$ G_{0,\b,\c}^{0,z,z} = \pres{y^\delta x, y, z}{[y^\delta x, y] =1,\ [y,z] = z^\a,\ [z,y^\delta x] =1} \cong G_2(0,\a).$$
In conclusion, the isomorphisms of Lie algebras are obtained using the definition of Lie bracket \eqref{eq:Liebracket} and comparing abelianisations.
\end{proof}
We are now ready to prove Theorem~\ref{thm:quadratictrianglepraags}.
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:quadratictrianglepraags}}]
Note that the quadratic triangle $p$-RAAG $G_A$ is powerful. Hence, $G_A$ must be uniform of dimension $3$, by Theorem~\ref{thm:quadratic analytic}. Since $G_A$ is uniform, we can consider the associated Lazard Lie algebra. The result now follows from Lemma~\ref{lem:xyz}, Lemma~\ref{lem:metabelianorSL21} and Lemma~\ref{lem:3gens}.
\end{proof}
To the interested reader, the last theorem might seem unsatisfactory, as we have a fairly clear picture in the solvable case but not in the non-solvable one.
In the next section, we will content ourselves with the computation of a somewhat special case to outline the general method that could be used to produce many non-solvable triangle groups.
\subsection{Unsolvable triangle groups}
In this section we will use the methods of \cite{ac} to study non-solvable triangle $p$-RAAGs.
Let $G=G(\eta,\mu,\lambda)$ be the pro-$p$ group defined by the balanced pro-$p$ presentation
\begin{equation}\label{eq:pres}
\pres{x,y,z}{[x,y]= x^\eta y^\mu,\ [y,z] = y^\lambda z^\eta,\ [z,x] = z^\mu x^\lambda }
\end{equation} with $\eta,\mu, \lambda \neq 0$. Suppose additionally that $v_p(\lambda) = v_p(\mu) = v_p(\eta)=1$. We will show that $G$ is isomorphic to $\mathrm{SL}_2^1(\mathbb{Z}_p)$ by proving that the latter admits a presentation of the form \eqref{eq:pres}.
Given a pro-$p$ group $G$, we can form its \emph{graded Lie algebra}: for $n\ge 1$, define
$$\mathfrak{gr}_n(G) = P_n(G)/P_{n+1}(G) \text{ and } \mathfrak{gr}(G) = \bigoplus_{n\ge 1} \mathfrak{gr}_n (G). $$
Denote by $\phi_n: P_n(G) \to \mathfrak{gr}_n (G)$ the quotient map.
It is straightforward to check that the maps $\pi_n : \mathfrak{gr}_n (G) \to \mathfrak{gr}_{n+1} (G)$ induced by $p$-powers
are linear and these extend uniquely to the linear map $\pi : \mathfrak{gr} (G)\to \mathfrak{gr} (G)$.
Finally, commutators in the graded components $\mathfrak{gr}_n (G)$ endow $\mathfrak{gr} (G)$ with the structure of
a graded Lie $\mathbb{F}_p [\pi]$-algebra.
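For instance, for $G=\mathbb{Z}_p$ one has $P_n(\mathbb{Z}_p)=p^{n-1}\mathbb{Z}_p$, so every component $\mathfrak{gr}_n(\mathbb{Z}_p)$ is isomorphic to $\mathbb{F}_p$, the map $\pi$ sends each component isomorphically onto the next one, and all brackets vanish; that is, $\mathfrak{gr}(\mathbb{Z}_p)$ is the abelian Lie $\mathbb{F}_p[\pi]$-algebra which is free of rank one as an $\mathbb{F}_p[\pi]$-module.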
We will need the following
two lemmas from \cite{ac} which can also be checked directly.
\begin{lemma}
The Lie $\mathbb{F}_p [\pi]$-algebra $\mathfrak{gr} (G)$ is generated by its homogeneous component $\mathfrak{gr}_1 (G)$; moreover $\mathfrak{gr} (G)$ is generated by $\{\phi_1(x) \vert x\in X \}$ for every generating set $X$ of $G$.
\end{lemma}
\begin{lemma}
The graded Lie algebra $\mathfrak{gr}(\mathrm{SL}_2^1(\mathbb{Z}_p))$ of $\mathrm{SL}_2^1(\mathbb{Z}_p)$ can be presented by
\begin{equation}\label{eq:sl21}
\pres{\ol{x}_1,\ol{x}_2,\ol{x}_3}{[\ol{x}_1,\ol{x}_2] = \pi \ol{x}_1, [\ol{x}_2,\ol{x}_3] = \pi \ol{x}_3, [\ol{x}_3,\ol{x}_1] = \pi \ol{x}_2}.
\end{equation}
In particular, it is a free $\mathbb{F}_p [\pi]$-module of rank $3$.
\end{lemma}
\begin{proposition}\label{prop:SL21triangle}
The group $\mathrm{SL}_2^1(\mathbb{Z}_p)$ has a pro-$p$ presentation $G(\eta,\mu,\lambda)$ of the form \eqref{eq:pres}
for every $\eta,\mu,\lambda\in p\mathbb{Z}_p$ and $v_p(\eta)= v_p(\mu)= v_p(\lambda)=1$.
\begin{proof}
We will write $G=\mathrm{SL}_2^1(\mathbb{Z}_p)$. We proceed by induction. Suppose that, for $n\ge 1$, we have defined three generators $x_{n},y_{n},z_{n} \in G$ and three coefficients $\eta_{n},\mu_{n},\lambda_{n}\in \mathbb{Z}_p$ which satisfy the relations of \eqref{eq:pres} modulo $P_{n+2}(G)$, that is \begin{equation} \label{eq:xnynzn} [x_{n},y_{n}] \equiv x_{n}^{\eta_{n}} y_{n}^{\mu_{n}},\quad [y_{n},z_{n}] \equiv y_{n}^{\lambda_{n}} z_{n}^{\eta_{n}}, \quad [z_{n},x_{n}] \equiv z_{n}^{\mu_{n}} x_{n}^{\lambda_{n}} \mod P_{n+2}(G)
\end{equation}
\begin{equation}
\eta_{n} \equiv \eta,\quad \mu_{n} \equiv \mu,\quad \lambda_{n} \equiv \lambda \mod p^{n+1}\mathbb{Z}_p. \end{equation} By the proof of Lemma~\ref{lem:xyz} (or by base-change), the Lie $\mathbb{F}_p$-algebra $\mathfrak{gr}_1(G(\eta,\mu,\lambda))$ admits a presentation of the form \eqref{eq:sl21}, hence we have the case $n=1$.
Let $\d_1,\d_2,\d_3 \in \{0,\ldots, p-1\}$ be the integers such that
\begin{equation}
p^{n+1}\delta_1 \equiv \eta -\eta_n,\quad p^{n+1}\delta_2 \equiv \mu -\mu_n,\quad p^{n+1}\delta_3 \equiv \lambda -\lambda_n \mod p^{n+2}\mathbb{Z}_p
\end{equation}
and define the new coefficients $\eta_{n+1}= \eta_{n} + p^{n+1} \d_1$, $\mu_{n+1}= \mu_{n} + p^{n+1} \d_2$ and $\lambda_{n+1}= \lambda_{n} + p^{n+1} \d_3$.
Let $\Delta_1,\Delta_2,\Delta_3 \in P_{n+1}(G)$ and define the new elements $x_{n+1} = x_{n} \Delta_1$, $y_{n+1}= y_n \Delta_2$ and $z_{n+1}= z_n \Delta_3$. We will prove that we can choose $\Delta_1,\Delta_2,\Delta_3$ so that
\begin{align*}
f_1 &= [x_{n+1},y_{n+1}] (x_{n+1}^{\eta_{n+1}} y_{n+1}^{\mu_{n+1}})^{-1} \cdot ( [x_{n},y_{n}] (x_{n}^{\eta_{n}} y_{n}^{\mu_{n}})^{-1} )^{-1} \\
f_2 &= [y_{n+1},z_{n+1}] (y_{n+1}^{\lambda_{n+1}} z_{n+1}^{\eta_{n+1}})^{-1} \cdot ( [y_{n},z_{n}] (y_{n}^{\lambda_{n}} z_{n}^{\eta_{n}})^{-1} )^{-1} \\
f_3 &= [z_{n+1},x_{n+1}] (z_{n+1}^{\mu_{n+1}} x_{n+1}^{\lambda_{n+1}})^{-1} \cdot ( [z_{n},x_{n}] (z_{n}^{\mu_{n}} x_{n}^{\lambda_{n}})^{-1} )^{-1}
\end{align*}
are congruent to $1$ modulo $P_{n+3}(G)$ for every $\d_1,\d_2,\d_3 \in \mathbb{Z}_p$. This will allow us to define the new generators $x_{n+1}$, $y_{n+1}$ and $z_{n+1}$ with the required properties. Finally, the elements $x= \lim_{n\to \infty} x_n$, $y= \lim_{n\to \infty} y_n$ and $z= \lim_{n\to \infty} z_n$ clearly deliver a presentation of the form $G(\eta,\mu,\lambda)$.
First we note that $f_i \in P_{n+2}(G)$, $i=1,2,3$. Now, denote by $\ol{g}$ the image of $g\in G$ in $\mathfrak{gr}_1(G)$ and write $\ol{x}=\phi_1(x_n)$, $\ol{y}=\phi_1(y_n)$, $\ol{z}=\phi_1(z_n)$. Thus $\ol{\Delta_i}$ can be written as $\pi^n ( \c_{i1} \ol{x}+ \c_{i2} \ol{y} + \c_{i3} \ol{z})$ for some $\c_{ij}\in \mathbb{F}_p$. Reducing $f_i$ modulo $P_{n+3}(G)$, it is easy to show that
\begin{align*}
\ol{f}_1 &= \pi^{n+1} ( [\ol{x},\ol{\Delta}_2] + [\ol{\Delta}_1,\ol{y}] - \d_1 \ol{x} - \d_2 \ol{y} ) \\
\ol{f}_2 &= \pi^{n+1} ( [\ol{y},\ol{\Delta}_3] + [\ol{\Delta}_2,\ol{z}] - \d_3 \ol{y} - \d_1 \ol{z} ) \\
\ol{f}_3 &= \pi^{n+1} ( [\ol{z},\ol{\Delta}_1] + [\ol{\Delta}_3,\ol{x}] - \d_2 \ol{z} - \d_3 \ol{x} ).
\end{align*}
Using the relations \eqref{eq:xnynzn} and the expressions of $\ol{\Delta}_i$, we obtain the equations
\begin{align*}
(\c_{22} + \c_{11} -\c_{23} -\d_1) \ol{x} + (\c_{22} + \c_{11} -\c_{13} -\d_2) \ol{y} +(-\c_{23} -\c_{13}) \ol{z} &=0 \\
(-\c_{31} -\c_{21}) \ol{x} + (-\c_{31} +\c_{33} +\c_{22} -\d_3) \ol{y} + (\c_{33} - \c_{21} + \c_{22} -\d_1) \ol{z} &=0 \\
(\c_{11} -\c_{32} + \c_{33} -\d_3) \ol{x} + (-\c_{12} - \c_{32} ) \ol{y} + (\c_{11} - \c_{12} + \c_{33} -\d_2) \ol{z} &=0.
\end{align*}
Therefore we have to solve, in the unknowns $\c_{ij}$, the $9\times 9$ linear system obtained by equating all the coefficients above to $0$, with $\d_1,\d_2,\d_3$ as parameters. This system can be represented by
$$ \begin{bmatrix}
1 & 0 & 0 & 0 &1 & -1 & 0 & 0 & 0 \\
1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & -1 & 0 & 1 \\
0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 \\
0 & -1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 &1 \\
\end{bmatrix}
\begin{bmatrix}
\c_{11} \\
\c_{12} \\
\c_{13} \\
\c_{21} \\
\c_{22} \\
\c_{23} \\
\c_{31} \\
\c_{32} \\
\c_{33} \\
\end{bmatrix}
=
\begin{bmatrix}
\d_1 \\
\d_2 \\
0 \\
0 \\
\d_3 \\
\d_1 \\
\d_3 \\
0 \\
\d_2
\end{bmatrix}
$$
One can easily check that this system admits the solution
\begin{equation*}
\frac{1}{2} \left( \d_2,\ -(\d_2-\d_3),\ \d_1-\d_2,\ -(\d_1-\d_3),\ \d_1,\ -(\d_1-\d_2),\ \d_1-\d_3,\ \d_2-\d_3,\ \d_3 \right),
\end{equation*}
which provides the required $\Delta_1,\Delta_2,\Delta_3$ and completes the induction step.
\end{proof}
\end{proposition}
\begin{remark}
Note that the $9\times 9$ matrix appearing above has rank $8$.
\end{remark}
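For readers who wish to double-check the linear algebra, the following short script (a sketch assuming the SymPy library; the symbols \texttt{d1,d2,d3} stand for $\d_1,\d_2,\d_3$) verifies symbolically that the vector displayed in the proof solves the $9\times 9$ system over the rationals, and hence over $\mathbb{F}_p$ whenever $2$ is invertible.
\begin{verbatim}
# Sanity check (sketch): the candidate vector solves the 9x9 system.
import sympy as sp

d1, d2, d3 = sp.symbols('d1 d2 d3')

# Coefficient matrix, unknowns ordered c11, c12, c13, c21, ..., c33.
A = sp.Matrix([
    [1,  0,  0,  0, 1, -1,  0,  0, 0],
    [1,  0, -1,  0, 1,  0,  0,  0, 0],
    [0,  0, -1,  0, 0, -1,  0,  0, 0],
    [0,  0,  0, -1, 0,  0, -1,  0, 0],
    [0,  0,  0,  0, 1,  0, -1,  0, 1],
    [0,  0,  0, -1, 1,  0,  0,  0, 1],
    [1,  0,  0,  0, 0,  0,  0, -1, 1],
    [0, -1,  0,  0, 0,  0,  0, -1, 0],
    [1, -1,  0,  0, 0,  0,  0,  0, 1],
])
rhs = sp.Matrix([d1, d2, 0, 0, d3, d1, d3, 0, d2])

# Candidate solution from the proof (the factor 1/2 requires p odd).
c = sp.Rational(1, 2) * sp.Matrix(
    [d2, -(d2 - d3), d1 - d2, -(d1 - d3), d1,
     -(d1 - d2), d1 - d3, d2 - d3, d3])

print(sp.simplify(A * c - rhs))   # prints the zero vector
\end{verbatim}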
Finally, in order to get a large family of subgroups of $\mathrm{SL}_2^1(\mathbb{Z}_p)$ as triangle groups, one can consider the subgroups $\gen{x^{p^{a_1}}, y^{p^{a_2}}, z^{p^{a_3}}}$ for $a_i\ge 1$ inside $G(\eta,\mu,\lambda)$ with $v_p(\eta)=v_p(\mu)=v_p(\lambda)=1$. These are powerful pro-$p$ groups and they satisfy relations of the form
\begin{equation*}
[x^{p^{a_1}},y^{p^{a_2}}] = (x^{p^{a_1}})^{\a_1} (y^{p^{a_2}})^{\a_2},\quad
[y^{p^{a_2}},z^{p^{a_3}}] = (y^{p^{a_2}})^{\b_2} (z^{p^{a_3}})^{\b_3},
\end{equation*}
\begin{equation*}
[z^{p^{a_3}},x^{p^{a_1}}] = (z^{p^{a_3}})^{\c_3} (x^{p^{a_1}})^{\c_1}
\end{equation*}
for some $\a_1,\a_2,\b_2,\b_3,\c_1,\c_3 \in p\mathbb{Z}_p$.
It is possible to slightly modify the proof of Proposition~\ref{prop:SL21triangle} to show that $\mathrm{SL}_2^k(\mathbb{Z}_p)$ admits a presentation $$G(\eta,\mu,\lambda)= \pres{x,y,z}{[x,y]= x^\eta y^\mu,\ [y,z] = y^\lambda z^\eta,\ [z,x] = z^\mu x^\lambda}$$ with $\eta,\mu,\lambda\in p\mathbb{Z}_p$ and $v_p(\eta)= v_p(\mu)= v_p(\lambda)=k$ for $k\ge 2$.
We believe that all the groups of the form \eqref{eq:pres} are torsion-free and therefore uniform for any choice of parameters. Note that, by \cite{ac} and \cite[Prop.~2.12]{ks}, it would be sufficient to show that these groups are infinite.
\section{Non-abelian free subgroups}\label{sec:free subgp}
In Galois theory one has the following version of the celebrated \emph{Tits alternative} (cf.\ \cite{cq:bk} and \cite[Thm.~3]{ware}).
\begin{theorem}\label{thm:tits}
Let $K$ be a field containing a root of unity of order $p$.
Then either $G_{K}(p)$ is (metabelian) uniform, or it contains a free non-abelian pro-$p$ subgroup.
\end{theorem}
One may ask whether a similar result holds also for quadratic pro-$p$ groups which do not arise as maximal
pro-$p$ Galois groups of fields. In the case of $p$-RAAGs we have Theorem~\ref{thm:pRAAGs free subgroup}, which we prove next.
\subsection{Non-abelian free subgroups in \texorpdfstring{$p$}{p}-RAAGs}
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:pRAAGs free subgroup}}]
Let $G_\Gamma$ be a $p$-RAAG with associated $p$-graph $\Gamma=({\mathcal G},f)$ and underlying graph ${\mathcal G}=({\mathcal V},{\mathcal E})$. Assume that $G_\Gamma$ is not powerful.
Set $d=\mathrm{d}(G_\Gamma)$ and let $$G_\Gamma=\langle x_1,\ldots,x_d\mid R\rangle$$ be the presentation induced by $\Gamma$.
By Proposition~\ref{lem:unif}~(b), ${\mathcal G}$ is not complete, hence there exist
two vertices in ${\mathcal V}({\mathcal G})$ --- say $x_1$ and $x_2$ --- such that $(x_1,x_2)\notin{\mathcal E}({\mathcal G})$.
Consider the pro-$p$ group $S=\langle a,b\mid a^p=b^p=1 \rangle$. Note that $S\cong C_p\ast C_p$. Let $\phi\colon G_\Gamma\to S$
be the homomorphism defined by $\phi(x_1)=a$, $\phi(x_2)=b$ and $\phi(x_i)=1$ for $i\neq1,2$.
Set $H=\langle x_1, x_2\rangle$.
Then $\phi|_H\colon H\to S$ is an epimorphism.
The elements $t_1=aba$ and $t_2=bab$ in $S$ have infinite order and the subgroup they generate is not pro-cyclic.
Thus, $\langle t_1,t_2\rangle$ is a 2-generated free pro-$p$ group.
Now choose two elements $u,v\in H$ such that $\phi(u)=t_1$ and $\phi(v)=t_2$.
Then $\phi|_{\langle u,v\rangle}:\langle u,v\rangle\to\langle t_1,t_2\rangle$
is an epimorphism, and from the hopfian property it follows that $\langle u,v\rangle$ is a 2-generated free
pro-$p$ group.
\end{proof}
\begin{remark}
Let $G=G_1\amalg_{H}G_2$ be a pro-$p$ group as in Theorem~\ref{thm:cohomology amalgam} and suppose that $H$ is not equal to $G_1$ or $G_2$, i.e., the amalgam is non-fictitious. Then the standard graph $S=S(G)$ of $G$ is defined as follows (cf. \cite{ribzal:horizons}):
$$S = G/H \charfusion[\mathop]{\bigcup}{\cdot} G/G_1 \charfusion[\mathop]{\bigcup}{\cdot} G/G_2,\ V(S) = G/G_1 \charfusion[\mathop]{\bigcup}{\cdot} G/G_2,\ d_0(gH)=gG_1,\ d_1(gH)=gG_2.$$
Now let $G=\mathrm{HNN}(G_0,A,\phi) = \pres{G_0, t}{tat^{-1}=\phi(a), a\in A}$ be as in Theorem~\ref{thm:HNN}. Then the standard graph $S=S(G)$ of $G$ is defined as follows:
$$S = G/A \charfusion[\mathop]{\bigcup}{\cdot} G/G_0,\quad V(S) = G/G_0,\quad d_0(gA)=gG_0,\quad d_1(gA)=gtG_0.$$
In both cases $S$ is a pro-$p$ tree (see \cite[Thm.~4.1]{ribzal:horizons}). Moreover, it is not difficult to see that $G$ acts faithfully and irreducibly on $S$. By \cite[Thm.~3.15]{ribzal:horizons}, if $G$ is not isomorphic to $C_2\oplus\mathbb{Z}_2$ or $\mathbb{Z}_p$, then $G$ contains a free non-abelian pro-$p$ subgroup.
\end{remark}
As we have seen in the previous proof, in a quadratic $p$-RAAG which is not uniform there must be a ``missing'' commutator among its relations. It is interesting to remark that the same is true for quadratic \emph{mild} pro-$p$ groups.
\begin{proposition}\label{prop:mild cupproduct}
Let $G$ be a mild quadratic pro-$p$ group with $\mathrm{r}(G)\geq\mathrm{d}(G)$.
Then there exist linearly independent elements $\alpha,\alpha'\in H^1(G,\mathbb{F}_p)$ such that $\alpha \alpha'=0$.
\end{proposition}
\begin{proof}
Let \eqref{eq:presentation} be a minimal presentation of $G$. Let $\mathcal{X}=\{x_1,\ldots,x_d\}$ be a basis of $F$
and let $\{\alpha_1,\ldots,\alpha_d\}$ be a basis of $H^1(G,\mathbb{F}_p)$ dual to $\mathcal{X}$.
Suppose that $\alpha \alpha'\neq0$ for every linearly independent couple $\alpha,\alpha'\in H^1(G,\mathbb{F}_p)$.
Then, by bilinearity of the cup-product, $\{\alpha_1 \alpha_h \mid h=2,\ldots,d\}$ is a set of linearly independent
elements of $H^2(G,\mathbb{F}_p)$. This implies that, for every $h=1,\ldots,d-1$, the commutator $[x_1,x_{h+1}]$ appears in some relation. Moreover, setting $m:=\mathrm{r}(G)$, we have $m\geq d> d-1$.
By Remark~\ref{rem:Gauss reduction} and the above discussion, one may pick a set of defining relations
$\mathcal{R}=\{r_1,\ldots,r_m\}$ such that
\begin{equation}\label{eq:rels1}
r_h\equiv [x_1,x_{h+1}]\cdot \displaystyle\prod_{2\leq i<j\leq d}[x_i,x_j]^{b(h,i,j)} \mod F_{(3)},\quad \text{ for } 1\le h\le d-1
\end{equation}
and
\begin{equation}\label{eq:rels2}
r_h\equiv \displaystyle\prod_{2\leq i<j\leq d}[x_i,x_j]^{b(h,i,j)} \mod F_{(3)}, \quad \text{ for } h\geq d
\end{equation}
for appropriate coefficients $b(h,i,j) \in \mathbb{F}_p$. Let $\widetilde{G}$ be the pro-$p$ group with presentation $\langle \mathcal{X}\mid \widetilde{\mathcal{R}} \rangle$, where $\widetilde{\mathcal{R}}=\{\widetilde{r}_1,\ldots,\widetilde{r}_m\}$ and $\widetilde{r}_h$ is the coset representative of $r_h$ modulo $F_{(3)}$ appearing in the right-hand side of \eqref{eq:rels1} and \eqref{eq:rels2}. Since $G$ is mild, by Remark~\ref{rem:mild approximation} $\widetilde{G}$ is also mild. Let $N,H\le \widetilde{G}$ be the subgroups generated by $\{x_2,\ldots,x_d\}$ and $\{x_1\}$, respectively.
Then $N$ is a normal subgroup of $\widetilde{G}$. Thus the short exact sequence of pro-$p$ groups
\[
\xymatrix{1\ar[r] & N\ar[r] & \widetilde{G}\ar[r] & H \ar[r] & 1},
\]
induces the short exact sequences in cohomology
\begin{equation}\label{eq:ses cohom GGS}
\xymatrix{ 0\ar[r] & H^1(H,H^{n-1}(N,\mathbb{F}_p))\ar[r] & H^n(\widetilde{G},\mathbb{F}_p)\ar[r] & H^n(N,\mathbb{F}_p)^{\widetilde{G}}\ar[r] &0}
\end{equation}
for every $n\geq1$ (cf. \cite[\S~II.4, Ex.~4]{nsw:cohn}).
In particular, from \eqref{eq:ses cohom GGS} for $n=2$ we can deduce that $H^2(N,\mathbb{F}_p)^{\widetilde{G}}\neq0$.
Therefore $H^1(H,H^{2}(N,\mathbb{F}_p))\neq0$. Finally \eqref{eq:ses cohom GGS} for $n=3$ yields $H^3(\widetilde{G},\mathbb{F}_p)\neq0$,
contradicting $\mathrm{cd}(\widetilde{G})=2$.
\end{proof}
\subsection{Non-abelian free subgroups in mild pro-\texorpdfstring{$p$}{p} groups}
Even with Proposition~\ref{prop:mild cupproduct} in hand, we were not able to prove an analogue of Theorem~\ref{thm:pRAAGs free subgroup} for mild pro-$p$ groups in full generality. One reason for this is that the condition of mildness for a pro-$p$ group $G$ depends only on the shape of the defining relations modulo $G_{(3)}$ (cf.\ Remark~\ref{rem:mild approximation}).
Nevertheless, we can show that many mild pro-$p$ groups contain a free non-abelian subgroup. To do this, we will show that, in several cases, a mild pro-$p$ group is a \emph{generalized Golod-Shafarevich group} (see Section~\ref{sec:ggs}).
\begin{proposition}\label{prop:mild GoSha}
Let $G$ be a mild quadratic and non-uniform pro-$p$ group
and let \eqref{eq:presentation} be a minimal presentation of $G$. Choose a basis $\mathcal{X}=\{x_1,\ldots,x_d\}$ of $F$
and a set of defining relations $\mathcal{R}=\{r_1,\ldots,r_m\}\subset F$ for $G$.
Suppose that one of the following conditions holds:
\begin{itemize}
\item[(a)] $\mathrm{r}(G) \neq \mathrm{d}(G)^2/4$;
\item[(b)] there are $x_i,x_j\in\mathcal{X}$, $i\neq j$, such that $x_i^3,x_j^3$, $[x_i,x_j]$ and any other higher commutator involving only $x_i$ and $x_j$ do not appear in any defining relation $r_h\in\mathcal{R}$;
\item[(c)] every defining relation consists of a single elementary commutator modulo $F_{(3)}$, i.e.,
$r_h\equiv [x_{i_h},x_{j_h}]\bmod F_{(3)}$ for some $1\leq i_h<j_h\leq d$, for all $r_h\in\mathcal{R}$.
\end{itemize}
Then $G$ is generalized Golod-Shafarevich. In particular, it contains a free non-abelian pro-$p$ subgroup.
\end{proposition}
\begin{proof}
Note that, by Theorem~\ref{thm:ershovfreesub}, it is sufficient to show that $G$ is a generalized Golod-Shafarevich pro-$p$ group. Set $d=\mathrm{d}(G)$ and $m=\mathrm{r}(G)$. By \cite[Prop.~4]{labute:fabulous},
\begin{equation}\label{eq:gocha}
m \le \frac{d^2}{4}.
\end{equation}
If \eqref{eq:gocha} is a strict inequality, then $G$ is a Golod-Shafarevich pro-$p$ group by \cite{zelmanov}.
Thus it is also generalized Golod-Shafarevich.
This settles part (a).
If \eqref{eq:gocha} is an equality, then $d=2n$ for some $n\geq2$.
Without loss of generality, suppose that condition (b) holds with $i=1$, $j=2$.
We define a valuation $D$ on $F$ as follows: set $D(x_i)=1$, for $i=1,2$, and $D(x_i)=N$, with $N$ to be chosen later.
It is easy to show that $D$ comes from a weight function on $\mathbb{F}_p\langle\!\langle\mathcal{X}\rangle\!\rangle$
and that $D(x_i^p)=p D(x_i)$ and $D([x_i,x_j])=D(x_i)+D(x_j)$, for $1\le i<j\le d$.
By (b), if $D(r_h)\le 5$ for some $h$, then one has $p=5$ and $r_h=x_1^{ap}x_2^{bp}r_h'$, with $a,b\in\{0,1\}$ not both equal to 0, and $D(r_h')\geq 1+N$.
Thus one may choose $\mathcal{R}$ such that $x_1^p$ and $x_2^p$ appear in at most two relations,
say $r_1$ and $r_2$.
Hence $D(r_1),D(r_2)\geq 5$ and $D(r_h)\geq1+N$, for $h\geq3$.
It follows that
\[\begin{split}
1-H_{\mathcal{X},D}(T)+H_{\mathcal{R},D}(T)&=1-(2T+(d-2)T^N)+(T^{D(r_1)}+T^{D(r_2)}+\mathcal{O}(T^N))\\
&\leq 1-2T+2T^5+\mathcal{O}(T^{N-1})
\end{split}\]
for $T\in (0,1)$.
Since $N$ can be chosen arbitrarily large, there exists $T_0\in(0,1)$ such that
$1-H_{\mathcal{X},D}(T_0)+H_{\mathcal{R},D}(T_0)<0$.
Hence $G$ is generalized Golod-Shafarevich, and this settles part (b).
Finally, suppose condition (c) holds.
Let ${\mathcal G}=({\mathcal V},{\mathcal E})$ be the combinatorial graph with
${\mathcal V}=\mathcal{X}$ and ${\mathcal E}=\{(x_{i_h},x_{j_h}),h=1,\ldots,m\}$.
Since $\mathrm{cd}(G)=2$ and $G$ is quadratic, the graph ${\mathcal G}$ is triangle-free. Since ${\mathcal G}$ has $2n$ vertices and $n^2$ edges, Mantel's theorem (cf. \cite{mantel}) forces
${\mathcal G}$ to be the complete bipartite graph $K_{n,n}$.
Thus, after renumbering, we may assume that the pairs $(i_h,j_h)$ are precisely the pairs with $1\leq i_h,j_h\leq d$, $i_h$ odd and $j_h$ even.
We define a valuation $D$ on $F$ as follows: set $D(x_i)=1$, for $i$ odd, and $D(x_j)=2$ for $j$ even.
It is easy to show that $D$ comes from a weight function on $\mathbb{F}_p\langle\!\langle\mathcal{X}\rangle\!\rangle$
and that $D([x_i,x_j])=D(x_i)+D(x_j)$, for $1\le i<j\le d$.
Hence $D(r_h)=3$ for every $h=1,\ldots,m$.
It follows that
\[\begin{split}
1-H_{\mathcal{X},D}(T)+H_{\mathcal{R},D}(T)&=1-\left(nT+nT^2\right)+n^2T^3\\
&= \left(1-nT\right)\left(1-nT^2\right).
\end{split}\]
Thus, there exists $T_0\in(1/n,1/\sqrt{n})$ such
that $1-H_{\mathcal{X},D}(T_0)+H_{\mathcal{R},D}(T_0)<0$.
Hence $G$ is generalized Golod-Shafarevich.
\end{proof}
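The elementary power-series manipulation used in part (c) can also be confirmed with a short symbolic computation; the following sketch (assuming the SymPy library) checks the factorization and, for a sample value of $n$, the sign of $1-H_{\mathcal{X},D}(T)+H_{\mathcal{R},D}(T)$ on the interval $(1/n,1/\sqrt{n})$.
\begin{verbatim}
# Sketch: verify the factorisation used in part (c) of the proof.
import sympy as sp

n, T = sp.symbols('n T', positive=True)
expr = 1 - (n*T + n*T**2) + n**2*T**3
print(sp.factor(expr))            # (n*T - 1)*(n*T**2 - 1), up to ordering

# Sign check for n = 3 at T = 9/20, which lies in (1/n, 1/sqrt(n)).
val = expr.subs({n: 3, T: sp.Rational(9, 20)})
print(val, val < 0)               # -1099/8000 True
\end{verbatim}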
We conclude this section by showing that all ``small'' quadratic groups are either uniform or contain a non-abelian free pro-$p$ subgroup.
\begin{coro}\label{coro:small free subgp}
Let $G$ be a quadratic pro-$p$ group with $\mathrm{d}(G)\leq3$.
Then either $G$ is uniform, or $G$ contains a free non-abelian pro-$p$ subgroup.
\end{coro}
\begin{proof}
By Proposition~\ref{prop:mild GoSha}, it is enough to show that every quadratic pro-$p$ group with $\mathrm{d}(G)\leq3$ which is not analytic is either mild or free.
This is clear for $\mathrm{d}(G)\leq2$. For $\mathrm{d}(G)=3$ and $\mathrm{r}(G)=1$ it follows from \cite{cq:onerel}.
If $\mathrm{d}(G)=3$ and $\mathrm{r}(G)=2$ we may use Proposition~\ref{prop:cd2 cupproduct}.
In this case $\dim H^2(G,\mathbb{F}_p)=2$, so $H^2(G,\mathbb{F}_p)$ is generated by, say, the products $\alpha_1\alpha_2$
and $\alpha_1\alpha_3$. Then the subspaces $V_1=\langle\alpha_1\rangle$ and $V_2=\langle\alpha_2,\alpha_3\rangle$ of $H^1(G,\mathbb{F}_p)$ satisfy the hypotheses of Proposition~\ref{prop:cd2 cupproduct} and $G$ is mild.
\end{proof}
In light of the previous results, Conjecture~\ref{conj:subgp free} is very natural. Notice that it holds for almost all known examples of quadratic groups: maximal pro-$p$ Galois groups, $p$-RAAGs, many mild pro-$p$ groups and ``small'' quadratic pro-$p$ groups.
\section*{}
\subsection*{Acknowledgments}
We would like to thank Thomas Weigel for inspiring us to work on quadratic pro-$p$ groups and for his continuous encouragement.
The second and third author thank the Heinrich-Heine University of D\"usseldorf and the University of Milan-Bicocca for their hospitality and support.
\section{Introduction}
Stabilization of an underactuated mechanical system is a challenging
problem since such a system is not fully feedback linearizable, having
fewer actuators than degrees of freedom (DOF). Nevertheless,
interconnection and damping assignment passivity-based control
(IDA-PBC) is a general approach to stabilize an underactuated system
by shaping its total energy~\cite{ortega2002stabilization}.
In this method, a desired structure for the closed-loop system is
selected instead of imposing classical passivity, and all assignable
energy functions are then derived. For this purpose, two sets of PDEs
related to the shaping of the kinetic and potential energy functions,
called \emph{matching equations}, should be solved
analytically~\cite{ferguson2019matched}. The desired
inertia matrix is first derived from the kinetic energy PDE, and
with this matrix in hand, the potential energy PDE is then solved.
Since these equations must be solved analytically,
the applicability of the approach is significantly restricted.
Several papers have focused on solving or reformulating the matching
equations. In \cite{acosta2005interconnection}, solving the matching
equations of systems with one degree of underactuation has been
reported. It has been shown that, under certain conditions, the
solutions of the kinetic and potential energy PDEs can be derived
analytically. Furthermore, solving the PDEs of 2-DOF underactuated
systems satisfying some conditions has been reported
in~\cite{d2006further}. Simplification of the kinetic energy PDE by
means of coordinate transformations has been studied
in~\cite{viola2007total}. In~\cite{donaire2015shaping}, total
energy shaping without solving the matching equations has been
investigated for a class of underactuated systems (see also
\cite{romero2016energy}). A method for simplifying the kinetic
energy PDE has been reported in~\cite{harandi2021matching}, in which
a particular structure for the desired inertia matrix is
considered to simplify the related matching equation. Recently, in
\cite{harandi2021solution}, it has been shown that replacing a PDE with
a set of Pfaffian differential equations may ease this stumbling
block of solving the matching equations. Furthermore, it has
been shown that the solution of the potential energy PDE may be divided
into homogeneous and non-homogeneous components,
while~\cite{j2021bounded} reports on the general structure of the
homogeneous solution. In some cases, derivation of the desired potential
energy, especially its non-homogeneous component, is a prohibitive
task. The situation is even more critical when the kinetic energy PDE is
so challenging that deriving an alternative desired inertia matrix for the
system is impractical.
In this note, we concentrate on a reformulation of the potential
energy PDE. For this purpose, we derive a verifiable condition,
applicable to many systems, under which it suffices to derive only
the homogeneous solution of the PDE. Verification of the proposed
condition is quite easy since it is based on the physical parameters of
the system, the desired inertia matrix, and the controller gains
appearing in the homogeneous part of the desired potential function.
Furthermore, it is shown that the proposed condition may be reformulated as a
linear matrix inequality (LMI), which may impose some constraints on
the controller gains. Note that in this formulation, the gravity
torques of the unactuated configuration variables remain in the
closed-loop system. Thus, the contribution of this note may be
interpreted as the rejection of a specific unmatched disturbance by a
suitable selection of the controller gains alone (see
\cite{Donaire-2019} for the challenges of unmatched disturbance
rejection). The effectiveness of the method is verified through several
examples.
\section{Review of IDA-PBC for Mechanical Systems}\label{s2}
The dynamic equations of a mechanical system in port-Hamiltonian
form are as follows~\cite{ortega2002stabilization}
\begin{equation}
\label{1}
\begin{bmatrix}
\dot{q} \\ \dot{p}
\end{bmatrix}
=\begin{bmatrix}
0_{n\times n} & I_n \\ -I_n & 0_{n\times n}
\end{bmatrix}
\begin{bmatrix}
\nabla_q H \\ \nabla_p H
\end{bmatrix}
+\begin{bmatrix}
0_{n\times m} \\ G(q)
\end{bmatrix}
u,
\end{equation}
where $q,p\in\mathbb{R}^{n}$ are the generalized position and momentum,
respectively, $H(q,p)=\frac{1}{2}p^TM^{-1}(q)p+V(q)$ denotes the
Hamiltonian of the system, i.e., the sum of the kinetic and
potential energy, $0<M\in\mathbb{R}^{n\times n}$ denotes the inertia
matrix, $u\in\mathbb{R}^m$ is the input, $G(q)\in\mathbb{R}^{n\times
m}$ denotes the full-rank input mapping matrix, and the operator $\nabla$
denotes the gradient of a function, represented as a column
vector. Presume that the target dynamics of the closed-loop system are
of the following form
\begin{align}
\label{2}
\begin{bmatrix}
\dot{q} \\ \dot{p}
\end{bmatrix}
=
\begin{bmatrix}
0_{n\times n} & M^{-1}M_d \\ -M_dM^{-1} & J_2-GK_vG^T
\end{bmatrix}
\begin{bmatrix}
\nabla_q H_d \\ \nabla_p H_d
\end{bmatrix},
\end{align}
in which
$$H_d=\frac{1}{2}p^TM_d^{-1}(q)p+V_d(q)$$
is the sum of the desired kinetic and potential energy,
$J_2\in\mathbb{R}^{n\times n}$ is a free skew-symmetric matrix,
$K_v\in\mathbb{R}^{m\times m}$ is a positive definite damping gain,
and $V_d$ should be designed such that $q_d=\arg\min V_d(q)$,
with $q_d$ being the desired equilibrium point. By setting (\ref{1})
equal to (\ref{2}), the control law is given as
\begin{align}
u&=(G^TG)^{-1}G^T(\nabla_q H-M_dM^{-1}\nabla_q H_d+J_2\nabla_p H_d)\nonumber\\&-K_vG^T\nabla_p H_d,\label{3}
\end{align}
while the following PDEs shall be satisfied
\begin{align}
&G^\bot \{\nabla_q \big(p^TM^{-1}(q)p\big) - M_dM^{-1}(q)\nabla_q \big(p^TM_d^{-1}(q)p\big) \nonumber \\ &\hspace{2.9cm}+2J_2 M_d^{-1}p \} =0, \label{4} \\
&G^\bot\{\nabla_q V(q)-M_dM^{-1}\nabla_q V_d(q)\} =0. \label{5}
\end{align}
The first PDE is related to kinetic energy shaping, and the second
one corresponds to potential energy shaping. First, $M_d$ should be
derived from the nonlinear PDE (\ref{4}), and then $V_d$ is obtained from
(\ref{5}). Stability of $q_d$ is ensured by considering $H_d$ as a
Lyapunov candidate, whose derivative is $\dot{H}_d =-(\nabla_p H_d)^T
GK_vG^T\nabla_p H_d$. In the next
section, we focus on the reformulation of PDE (\ref{5}).
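Before doing so, note that for a given candidate pair $(M_d,V_d)$ the potential energy PDE (\ref{5}) can be checked symbolically in a few lines. The following sketch (assuming the SymPy library; the toy data $M=M_d=I_2$, $V=a\cos(q_1)$, $V_d=a\cos(q_1)+\tfrac{k}{2}(q_2-1)^2$ and $G=[0,\ 1]^T$ are chosen purely for illustration) computes the residual of (\ref{5}).
\begin{verbatim}
# Sketch: symbolic residual of the potential-energy matching PDE (5).
import sympy as sp

q1, q2, a, k = sp.symbols('q1 q2 a k')
q = sp.Matrix([q1, q2])

M  = sp.eye(2)                        # toy inertia matrix
Md = sp.eye(2)                        # toy desired inertia matrix
G_perp = sp.Matrix([[1, 0]])          # left annihilator of G = [0, 1]^T

V  = a*sp.cos(q1)                     # toy open-loop potential
Vd = a*sp.cos(q1) + k/2*(q2 - 1)**2   # candidate desired potential

def grad(f):
    return sp.Matrix([sp.diff(f, x) for x in q])

residual = G_perp * (grad(V) - Md*M.inv()*grad(Vd))
print(sp.simplify(residual))          # Matrix([[0]]) -> (5) is satisfied
\end{verbatim}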
\section{Main Results}
As indicated in \cite{harandi2021solution} and~\cite{j2021bounded},
the solution of the potential energy PDE can be divided into homogeneous
($V_{dh}$) and non-homogeneous ($V_{dn}$) components. Furthermore,
$V_{dh}$ is derived from the following PDE
\begin{align}\label{6}
G^\bot M_dM^{-1}\nabla_q V_d(q) =0,
\end{align}
and has the following form
$$V_{dh}=\phi(V_{dh_1},V_{dh_2},\dots),$$
in which, $V_{dh_i}$s are functions satisfying (\ref{6}). Typically,
the free design function $\phi$ is set as:
\begin{align}\label{7}
V_{dh}=\displaystyle\sum \frac{k_i}{2} (V_{dh_{i}}-V_{dh_{i}}^*)^2,
\end{align}
in which $V_{dh_{i}}^*=\left.V_{dh_{i}}\right|_{q=q_d}$ and $k_i$s
are free positive gains. Furthermore, $V_{dn}$ is the particular
solution of the PDE (\ref{5}).
The solution of (\ref{5}) clearly depends on $M_d$, which is derived
from the PDE (\ref{4}). Since the nonlinear PDE (\ref{4}) is in general
very hard to solve analytically, especially in the cases where
the first term of (\ref{4}) is non-zero, deriving multiple candidates for $M_d$ is
a prohibitive task. This means that the designer is effectively forced
to solve (\ref{5}) for whatever $M_d$ is available. Furthermore, derivation of the
non-homogeneous solution of (\ref{5}) is more challenging than that
of the homogeneous part. As an example, examine the case reported in
\cite{harandi2021matching} for systems with $G=P[I_m,0_{m\times
(n-m)}]^T$, where $P$ denotes a permutation matrix. In this case, the term
$G^\bot M_d$ in (\ref{6}) is a constant vector, so that the PDE (\ref{6}) is
straightforward to solve\footnote{Note that based
on the structure of $M_d^{-1}$ given in \cite{harandi2021matching},
the $k$th row of the adjugate of $M_d^{-1}$ is constant, but its
determinant is configuration-dependent. However, the determinant is
omitted in (\ref{6}).}. However, for the same case, derivation of the
non-homogeneous solution is still very difficult (see Section
\ref{pen} for more details).
To tackle this problem, a condition is proposed in the following theorem
under which it is enough to solve the PDE (\ref{6}) instead of (\ref{5}),
so that derivation of the homogeneous solution is sufficient.
\begin{theorem}\label{th1}
Consider the IDA-PBC methodology introduced in Section~\ref{s2} with
$V_d=V_{dh}$ as the solution of (\ref{6}). Then $q_d$ is stable if
the following condition is satisfied
\begin{align}\label{8}
\left.\Big(\frac{\partial^2 (V_{dh}+\eta)}{\partial q^2}\Big)\right|_{q=q_d}>0,
\end{align}
in which,
\begin{align}
\eta:&=\int\displaystyle\sum_{i=1}^{n-m}\bigg(0.5\frac{( G_i^\bot\nabla V) G_i^{\bot}}{\| G_i^{\bot}\|^2}\bigg){M_d}^{-1} M\dot{q}+0.5\dot{q}^TMM_d^{-1}\nonumber\\&\bigg(\frac{( G_i^\bot\nabla V) G_i^{\bot^T}}{\| G_i^{\bot}\|^2}\bigg)dt,\label{9}
\end{align}
while $G_i^\bot$ denotes the $i$th row of $G^\bot$.\carre
\end{theorem}
Notice that verification of condition (\ref{8}) is quite simple
since only positive definiteness of a constant matrix
should be checked.
\proof Substitute the control law (\ref{3}) with $V_d=V_{dh}$ as the
solution of (\ref{6}) in (\ref{1}). Then, the closed-loop equations
are given by:
\begin{align}
\begin{bmatrix}
\dot{q} \\ \dot{p}
\end{bmatrix}
&=
\begin{bmatrix}
0_{n\times n} & M^{-1}M_d \\ -M_dM^{-1} & J_2-GK_vG^T
\end{bmatrix}
\begin{bmatrix}
\nabla_q H_d \\ \nabla_p H_d
\end{bmatrix}\nonumber\\&-\begin{bmatrix}
0 \\ \displaystyle\sum_{i=1}^{n-m}\frac{( G_i^\bot\nabla V) G_i^{\bot ^T}}{\| G_i^{\bot }\|^2}
\end{bmatrix},
\label{10}
\end{align}
with $H_d=\frac{1}{2}p^TM_d^{-1}p+V_{dh}$. The last term in
(\ref{10}) results from the fact that (\ref{5}) is not satisfied, and it is
interpreted as the natural gravity torques/forces of the system in
the directions of the unactuated coordinates. Now, consider the following
function
\begin{align}\label{11}
\boldsymbol{V}=H_d+\eta.
\end{align}
It is easy to verify that its derivative along the trajectories of the system is
$$\dot{\boldsymbol{V}}=-(\nabla_p H_d)^T GK_vG^T\nabla_p H_d.$$
Thus, if it is shown that $\boldsymbol{V}$ is a suitable Lyapunov
function, the proof is completed. For this purpose, we should establish
the following conditions
\begin{align}
\left.\frac{\partial \boldsymbol{V}}{\partial x}\right|_{x=x_d}=0, \quad \left.\frac{\partial^2 \boldsymbol{V}}{\partial x^2}\right|_{x=x_d}>0 \quad \mbox{with} \hspace{1mm}x=[q^T,p^T]^T.\label{12}
\end{align}
Note that $\eta$ may be represented by:
\begin{align*}
\eta&=\int_{q_0}^{q}\displaystyle\sum_{i=1}^{n-m}\bigg(\frac{( G_i^\bot(\nu)\nabla_\nu V(\nu)) G_i^{\bot}(\nu)}{\| G_i^{\bot}(\nu)\|^2}\bigg){M_d}^{-1}(\nu) M(\nu)d\nu.
\end{align*}
Hence, the only $p$-dependent term in $\boldsymbol{V}$ is
$\frac{1}{2}p^TM_d^{-1}p$, which clearly satisfies (\ref{12}).
Therefore, we verify conditions (\ref{12}) with respect to $q$ for
the other terms of $\boldsymbol{V}$. By this means, we have
\begin{align*}
&\left.\frac{\partial (V_{dh}+\eta)}{\partial q}\right|_{q=q_d}=\left.(\nabla V_{dh})^T\right|_{q=q_d}+\\& \displaystyle\sum_{i=1}^{n-m}\bigg(\frac{\big( G_i^\bot(q_d)\left.(\nabla V)\right|_{q=q_d} \big) G_i^{\bot}(q_d)}{\| G_i^{\bot}(q_d)\|^2}\bigg){M_d}^{-1}(q_d) M(q_d)=0.
\end{align*}
Recall that $q_d$ is an equilibrium point, and therefore, it lies on
the manifold $G^\bot \nabla V=0$. Hence, the second condition of
(\ref{12}) reduces to (\ref{8}), and this completes the
proof.\carrew
Notice that the closed-loop equations (\ref{10}) may be interpreted
as a simple IDA-PBC design in the presence of an unmatched disturbance. Thus,
the result may be seen as the rejection of a particular form of external disturbance.
\begin{remark}
In the proof of Theorem~\ref{th1}, based on the IDA-PBC methodology,
it is shown that the closed-loop system is represented by
(\ref{10}). Here, it is explained from another point of view.
Suppose that (\ref{10}) is the target dynamics of the
closed-loop system. By multiplying (\ref{1}) and (\ref{10}) from the
left by the full-rank matrix $[G,G^{\bot^T}]^T$, the control
law and the kinetic energy PDE are derived as (\ref{3}) and (\ref{4}),
respectively, while $V_d$ is the solution of the PDE (\ref{6}).
\end{remark}
From (\ref{7}), it is deduced that the condition (\ref{8}) depends
on $M_d$ and $V_{dh}$, and may impose a constraint on $k_i$s.
Generally, the first term of (\ref{8}) is positive semi-definite;
thus, $M_d$ should be designed such that the second term is not
negative definite. In the following, some cases are analyzed.
Case 1: Presume that $q_d$ is a natural equilibrium point of the
system and $M_d=M$. In this case, the condition (\ref{8}) is
trivially satisfied since it is the sum of two positive (semi-)
definite matrices. The advantage of Theorem~\ref{th1} in this case
is the simple design of $V_d$ for systems with $\left. \nabla
V\right|_{q=q_d}\neq 0$. This might be very useful in applications
such as underactuated cable-driven robots~\cite{harandi}.
Case 2: If $V_{dh}$ is designed as the general form of (\ref{7}),
the first term of (\ref{8}) is given by:
\begin{align*}
\left.\nabla^2V_{dh}\right|_{q=q_d}=\displaystyle\sum k_i\left.\begin{bmatrix}
\frac{\partial^2V_{dh_i}}{\partial q_1^2} & \dots & \frac{\partial(\partial V_{dh_i})}{\partial q_n\partial q_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial(\partial V_{dh_i})}{\partial q_1\partial q_n} & \dots & \frac{\partial^2V_{dh_i}}{\partial q_n^2}
\end{bmatrix}\right|_{q=q_d}.
\end{align*}
Therefore, generally we can argue that the inequality (\ref{8}) can
be reformulated as an LMI in the gains $k_i$. LMI problems have been studied
thoroughly in several references such as \cite{boyd1994linear}, and many
tractable software packages are available to accomplish this task.
Case 3: Consider systems with $G=P[I_m,0_{m\times (n-m)}]^T$. The
condition (\ref{8}) is in the following form
\begin{align*}
&\left.\nabla^2V_{dh}\right|_{q=q_d}+\displaystyle\sum_{i=1}^{n-m}\bigg(\left.\frac{\partial (G_i^\bot\nabla_{q}V)}{2\partial q}\right|_{q=q_d}G_i^\bot M_d^{-1}(q_d)M(q_d)\\&+M(q_d)M_d^{-1}(q_d)G_i^{\bot^T}\left.\Big(\frac{\partial (G_i^\bot\nabla_{q}V)}{2\partial q}\Big)^T\right|_{q=q_d}\bigg)>0
\end{align*}
In particular, assume that $n=2$ and $m=1$ which corresponds to most
of the benchmark systems given in the literature. In this case, we
have an inequality in the form of:
\begin{align}
\label{13}
k\begin{bmatrix}
\alpha_1 & \alpha_2 \\ \alpha_2 & \alpha_3
\end{bmatrix}+\begin{bmatrix}
\beta_1 & \beta_2 \\ \beta_2 & \beta_3
\end{bmatrix}>0,
\end{align}
in which the first and second matrix represent
$\left.\nabla^2V_{dh}\right|_{q=q_d}$ and
$\left.\frac{\partial^2\eta}{\partial q^2}\right|_{q=q_d}$,
respectively. The first matrix is assumed to be positive semi-definite with the following properties
\begin{align*}
\alpha_1,\alpha_3>0,\qquad \alpha_1\alpha_3-\alpha_2^2=0.
\end{align*}
As explained before, the second matrix must not be negative definite
(if so, $M_d$ should be altered). Therefore, we have the following
scenarios
\begin{align}
&A_1: \beta_1,\beta_3\leq0,\qquad \beta_1\beta_3-\beta_2^2\leq0\nonumber\\
&A_2: \beta_1,\beta_3\geq0,\qquad \beta_1\beta_3-\beta_2^2\leq0\nonumber\\
&A_3: \beta_1\beta_3\leq0.\label{14}
\end{align}
One can easily verify that (\ref{13}) is satisfied with
\begin{align}
&A_1\& A_3: k>\max\biggl\{-\frac{\beta_1}{\alpha_1},-\frac{\beta_3}{\alpha_3},\frac{\beta_2^2-\beta_1\beta_3}{\rho}\biggr\},\nonumber\\
&A_2:
k>\frac{\beta_2^2-\beta_1\beta_3}{\rho}, \label{15}
\end{align}
with
$$\rho:=\alpha_1\beta_3+\alpha_3\beta_1-2\alpha_2\beta_2,$$ which must be positive. In the next
section, some examples are given, as representatives of benchmark
systems, to show the applicability of the proposed method in
practice. Readers are referred to \jgrv{[arxiv version of this
note]} for more examples.
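To make the gain selection in (\ref{14})--(\ref{15}) concrete, the following sketch (plain Python; the helper name \texttt{min\_gain\_k} and the small margin are our own choices, not part of any toolbox) returns a gain $k$ satisfying the $2\times2$ condition (\ref{13}), assuming $\alpha_1,\alpha_3>0$ and $\rho>0$.
\begin{verbatim}
# Sketch: smallest admissible gain k for the 2x2 condition (13),
# following the scenarios (14) and the bounds (15).
def min_gain_k(alpha, beta, margin=1e-9):
    a1, a2, a3 = alpha          # first matrix in (13), rank-one PSD
    b1, b2, b3 = beta           # second matrix in (13)
    rho = a1*b3 + a3*b1 - 2*a2*b2
    if rho <= 0:
        raise ValueError("rho must be positive (adjust M_d)")
    k_det = (b2**2 - b1*b3) / rho
    if b1 >= 0 and b3 >= 0:     # scenario A2
        k = k_det
    else:                       # scenarios A1 and A3
        k = max(-b1/a1, -b3/a3, k_det)
    return k + margin           # (13) is a strict inequality
\end{verbatim}
Scenario $A_2$ only needs the determinant bound, while $A_1$ and $A_3$ additionally enforce positivity of the diagonal entries, mirroring (\ref{15}).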
\section{Case Studies}\label{s4}
\subsection{Spatial Underactuated Cable-Driven Robot}
This system consists of a mass suspended from two cables whose
lengths are controlled by the actuators. The dynamic parameters of the
system, as well as the solution of the potential energy PDE
(\ref{5}) with $M_d=M$, are given as follows~\cite{harandi}
\begin{align*}
&q=[x,y,z]^T,\quad M=mI_3,\quad V=mgy,\quad q_d=[x_d,y_d,0]^T,\\ &l_1^2=x^2+y^2+z^2,\qquad l_2^2=(x-b)^2+y^2+z^2,\\
&G^T=\begin{bmatrix}
\frac{x}{l_1} & \frac{y}{l_1} & \frac{z}{l_1} \\
\frac{x-b}{l_2} & \frac{y}{l_2} & \frac{z}{l_2}
\end{bmatrix},\qquad V_d=mgy+\phi(x,y^2+z^2),
\end{align*}
in which $m$ denotes the mass of the end-effector, while $y$ is
always negative. It is clear that, by defining
$$V_{dh}=\frac{k_1}{2}(x-x_d)^2+\frac{k_2}{2}(y^2+z^2-y_d^2)^2,$$
it is not possible to satisfy $q_d=\arg\min V_d(q)$.
However, by omitting $mgy$ and designing $V_d=V_{dh}$, the condition
(\ref{8}) reduces to
$$\begin{bmatrix}
k_1 & 0 & 0 \\ 0 & 4k_2y_d^2 & 0 \\ 0 & 0 & 0
\end{bmatrix}+\begin{bmatrix}
0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -\frac{mgy_d}{y_d^2+z_d^2}
\end{bmatrix},$$
which shows that the condition (\ref{8}) of Theorem~\ref{th1} is satisfied with
$k_1,k_2>0$. This choice clearly simplifies the controller design.
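The positive definiteness claimed above is immediate, but it can also be confirmed numerically; the sketch below (assuming NumPy; the sample values $m=1$, $g=9.81$, $y_d=-1$, $k_1=k_2=1$ are hypothetical) evaluates the two matrices of the display at $q_d$.
\begin{verbatim}
# Sketch: numerical evaluation of condition (8) for the cable-driven
# robot at q_d = [x_d, y_d, 0], with hypothetical sample values.
import numpy as np

m, g, y_d = 1.0, 9.81, -1.0
k1, k2 = 1.0, 1.0

hess_Vdh = np.diag([k1, 4*k2*y_d**2, 0.0])
hess_eta = np.diag([0.0, 0.0, -m*g*y_d/(y_d**2 + 0.0**2)])  # z_d = 0
print(np.linalg.eigvalsh(hess_Vdh + hess_eta))  # all entries positive
\end{verbatim}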
\subsection{Acrobot}
The system is a 2R serial robot in which only the second joint is
actuated. The dynamic parameters of the system are given
as~\cite{donaire2017robust}
\begin{align}
&M=\begin{bmatrix}
c_1+c_2+2c_3\cos(q_2) & c_2+c_3\cos(q_2) \\ c_2+c_3\cos(q_2) & c_2
\end{bmatrix}\nonumber\\
&V=c_4g\cos(q_1)+c_5g\cos(q_1+q_2),\label{16}
\end{align}
and $G=[0,1]^T$. An IDA-PBC controller has been designed in
\cite{d2006further} with the following parameters for the controller
\begin{equation*}
\begin{array}{c}
M_d=\begin{bmatrix}
a_1 & a_2 \\ a_2 & a_3
\end{bmatrix},\\ V_d=b_0\cos(q_1-\mu q_2)+b_1\cos(q_1)+b_2\cos(q_1+q_2)\\+b_3\cos(q_1+2q_2)+b_4\cos(q_1-q_2)+\phi(q_1-\mu q_2)
\end{array}
\end{equation*}
where
\begin{align*}
a_3>\frac{a_2}{1-\sqrt{c_1/c_2}},\qquad
\mu=\frac{-1}{1+\sqrt{c_1/c_2}}.
\end{align*}
Furthermore, $b_0$ is a free constant and $b_i$s for
$i\in\{1,...,4\}$ are constant values defined in
\cite{donaire2017robust,d2006further}. To apply Theorem~\ref{th1},
consider the desired potential energy as follows
$$V_d=V_{dh}=\frac{k}{2}(q_1-\mu q_2)^2.$$
Condition (\ref{8}) is in the form of (\ref{13}) with the following parameters
\begin{align*}
&\alpha_1=1,\qquad\alpha_2=-\mu,\qquad\alpha_3=\mu^2,\\& \beta_1=\frac{-c_4g-c_5g}{a_1a_3-a_2^2}(a_3c_1+a_3c_2+2a_3c_3-a_2c_2-a_2c_3),\\& \beta_2=\frac{-c_5g}{2(a_1a_3-a_2^2)}(a_3c_1+a_3c_2+2a_3c_3-a_2c_2-a_2c_3)\\&+\frac{-c_4g-c_5g}{2(a_1a_3-a_2^2)}(a_3c_2+a_3c_3-a_2c_2), \\&\beta_3=\frac{-c_5g}{a_1a_3-a_2^2}(a_3c_2+a_3c_3-a_2c_2)
\end{align*}
Since $a_3>a_2$, the case $A_1$ in (\ref{14}) is applicable,
and the suitable value of $k$ is given in (\ref{15}). Note that,
since the $a_i$'s are free positive scalars, ensuring $\rho>0$ is
quite simple.
\subsection{Pendubot}\label{pen}
As explained before, Theorem~\ref{th1} integrates well with
the method proposed in~\cite{harandi2021matching}, in which a
systematic approach to solve or simplify (\ref{4}) has been
introduced. The reason is the particular structure of $M_d$, for
which the vector $\gamma$ defined as
\begin{align*}
\gamma:=\det M \det M_d^{-1}G^\bot M_dM^{-1},
\end{align*}
is independent of the configuration-dependent element of $M_d$. By
this means, $M_d$ and $V_{dh}$ may be derived easily. However,
derivation of $V_{dn}$ is very complicated in general. As an
example, consider the Pendubot, whose dynamic parameters are given as
(\ref{16}) with $G=[1,0]^T$.
By applying the procedure of \cite{harandi2021matching}, $M_d^{-1}$ is derived as follows
\begin{align*}
M_d^{-1}=\begin{bmatrix}
a_1 & b_1 \\ b_1 & \frac{\lambda e^{a(q_2)}+b^2}{a_1}
\end{bmatrix}
\end{align*}
in which
\begin{align*}
&a(q_2)=\frac{-a_1c_1c_2c_3+b_1c_1c_2c_3}{c_1c_2(b_1c_3+2a_1c_3)}\ln\big(c_1c_2-c_3^2\cos^2(q_2)\big)+\\&\frac{2a_1c_1c_2c_3-2b_1c_1c_2c_3}{c_1c_2(b_1c_3+2a_1c_3)}\cos(q_2)+\ln\bigg(\frac{1+\sqrt{\frac{c_3^2}{c_1c_2}}\cos(q_2)}{1-\sqrt{\frac{c_3^2}{c_1c_2}}\cos(q_2)}\bigg)\times\\&\frac{2a_1c_3^3+2a_1c_3^2(c_1+c_2)+2b_1c_2c_3^2(b_1-2)}{2a_1\sqrt{c_1c_2c_3^2}(b_1c_3+2a_1c_3)}\\&+\frac{b_1c_3^3}{c_3^2(b_1c_3+2a_1c_3)}\ln\big(c_3^2\cos^2(q_2)-c_1c_2\big),
\end{align*}
$\lambda$ is a free parameter,
and the constants $a_1$ and $b_1$ should be chosen such that
$a_1c_1+a_1c_2+b_1c_2=0$. The solution of the PDE (\ref{6}), which is independent of $a(q_2)$, is
\begin{align*}
V_{dh}=\phi\bigg(\frac{\ln\big(\frac{\delta_4+\delta_3\cos(q_2)+\sqrt{\delta_4^2-\delta_3^2}\sin(q_2)}{(\delta_3+\delta_4\cos(q_2))(\delta_1\delta_4-\delta_2\delta_3)}\big)}{\delta_4\sqrt{\delta_4^2-\delta_3^2}}+\frac{\delta_2}{\delta_4}q_2-q_1\bigg)
\end{align*}
with
\begin{align*}
&\delta_1=-b_1c_2-a_1c_2,\qquad\hspace{.8cm}\delta_2=-a_1c_3,\\
&\delta_3=b_1c_2+a_1c_1+a_1c_2,\qquad\delta_4=b_1c_3+2a_1c_3.
\end{align*}
However, derivation of the non-homogeneous solution of (\ref{5}) is
a prohibitive task. To tackle this problem, we may investigate whether
Theorem~\ref{th1} could be applied in this case. For this purpose,
by virtue of (\ref{7}), condition (\ref{8}) is in the following form
\begin{align*}
&\alpha_1=1,\qquad\alpha_2=-\frac{1}{\delta_4(\delta_3+\delta_4)},\qquad \alpha_3=\alpha_2^2,\\
&\beta_1=-b_1c_5g(c_1+c_2+2c_3)-\frac{\lambda a(0)+b^2}{a_1}c_5g(c_2+c_3),\\
&\beta_2=-b_1c_5g(c_2+c_3)/2-\frac{\lambda a(0)+b^2}{2a_1}c_2c_5g-b_1c_5g(c_1+c_2\\&+2c_3)/2-\frac{\lambda a(0)+b^2}{2a_1}c_5g(c_2+c_3),\\
&\beta_3=-b_1c_5g(c_2+c_3)-\frac{\lambda a(0)+b^2}{a_1}c_2c_5g.
\end{align*}
It seems that condition (\ref{8}) corresponds to scenario $A_1$ in (\ref{14}), so that the suitable value of $k$ is derived from (\ref{15}). Note that $\lambda$, $a_1$ and $b_1$ should be chosen such that $\rho>0$.
As a numerical example, similar to \cite{harandi2021matching}, presume that $c_1=4,c_2=1,c_3=1.5,c_5=2$.
Condition (\ref{8}) then reads
\begin{align*}
k\begin{bmatrix}
1 & 5/9 \\ 5/9 & 25/81
\end{bmatrix}+\begin{bmatrix}
-550 & -420 \\ -420 & -290
\end{bmatrix}>0,
\end{align*}
which matches scenario $A_1$ in (\ref{14}). The suitable value of
$k$ is derived from (\ref{15}) with $\rho=6.91$.
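As a quick numerical cross-check (a sketch assuming NumPy and the \texttt{min\_gain\_k} helper sketched at the end of the previous section; the numbers are exactly those of the display above), one can recompute $\rho$ and an admissible gain:
\begin{verbatim}
# Sketch: numerical check of the Pendubot example above (requires
# numpy and the min_gain_k helper sketched earlier).
import numpy as np

alpha = (1.0, 5.0/9.0, 25.0/81.0)
beta = (-550.0, -420.0, -290.0)

rho = alpha[0]*beta[2] + alpha[2]*beta[0] - 2*alpha[1]*beta[1]
print(round(rho, 2))                    # 6.91, as stated in the text

k = min_gain_k(alpha, beta, margin=1.0) # roughly 2.45e3
A = np.array([[alpha[0], alpha[1]], [alpha[1], alpha[2]]])
B = np.array([[beta[0], beta[1]], [beta[1], beta[2]]])
print(np.linalg.eigvalsh(k*A + B))      # both eigenvalues positive
\end{verbatim}
The gain bound obtained in this way is approximately $2.44\times10^3$ for these parameter values.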
\subsection{Cart-pole}
The system consists of a pendulum attached to a cart. The dynamic equations are as follows~\cite{acosta2005interconnection}
\begin{align*}
M&=\begin{bmatrix}
1 & b\cos(q_1) \\ b\cos(q_1) & m_3
\end{bmatrix},\quad V=c\cos(q_1),\quad G=[0,1]^T\\&
c=g/l,\quad b=1/l,\quad m_3=(m+M)/ml^2,\quad q=[\theta,x]^T.
\end{align*}
In the literature, IDA-PBC designs for this system are based on a partial feedback linearization that transforms the system into the so-called Spong normal form. Here, our goal is to stabilize the system without partial feedback linearization. For this purpose, we apply the method introduced in \cite{harandi2021matching}. By this means, $M_d^{-1}$ is of the following form
\begin{align*}
M_d^{-1}=\begin{bmatrix}
\frac{\lambda e^{a(q_1)}+b^2}{a_1} & b_1 \\ b_1 & a_1
\end{bmatrix}
\end{align*}
with
\begin{align*}
&a(q_1)=\frac{2\ln\big(bb_1+a_1m_3(a_1m_3-bb_1)\tan^2(q_1/2)\big)}{a_1(m_3a_1^2-b_1^2)}\times\\&(m_3a_1^2-2m_3a_1b_1+b_1^2)-\\&\frac{ln\big(b\tan^2(q_1/2)-b+\sqrt{m}+\sqrt{m}\tan^2(x/2)\big)}{a_1^3m_3+a_1^2b_1\sqrt{m_3}}\times\\&(m_3a_1^2-2m_3a_1b_1+b_1^2)-\\&\frac{\ln\big(b-b\tan^2(x/2)+\sqrt{m}+\sqrt{m}\tan^2(x/2)\big)}{a_1^3m_3+a_1^2b_1\sqrt{m_3}}\times \\&(m_3a_1^2-2m_3a_1b_1+b_1^2)
\end{align*}
and $V_{dh}$ is
\begin{align*}
&V_{dh}=\phi\bigg(-\frac{\ln\Big(\frac{bb_1+\sin(q_1)\sqrt{-a_1^2m_3^2+b^2b_1^2}+a_1m_3\cos(q_1)}{a_1m_3+bb_1\cos(q_1)}\Big)}{bb_1\sqrt{-a_1^2m_3^2+b^2b_1^2}}\times\\&(-bm_3a_1^2+bb_1^2)-\frac{a_1q_1}{b_1}-q_2\bigg).
\end{align*}
Condition (\ref{8}) is in the form of (\ref{13}) with the following parameters
\begin{align*}
&\alpha_1=\alpha_2^2,\alpha_2=\frac{-bm_3a_1^2+bb_1^2}{bb_1(a_1m_3+bb_1)+\frac{a_1}{b_1}},\alpha_3=1,\\&
\beta_1=-c\frac{\lambda e^{a(0)}+b^2}{a_1}-bb_1c,\beta_2=\frac{\beta_1+\beta_3}{2},\\&\beta_3=-bc\frac{\lambda e^{a(0)}+b^2}{a_1}-b_1m_3c
\end{align*}
This matches scenario $A_1$ in (\ref{14}), and the suitable value of $k$ is derived from (\ref{15}), where positivity of $\rho$ may be ensured by suitable values of $a_1$, $b_1$ and $\lambda$.
\subsection{VTOL Aircraft}
It is a strongly coupled system with the following modified dynamics~\cite{acosta2005interconnection}
\begin{align*}
M=I_3,\quad V=\frac{g}{\epsilon}\cos(q_3),\quad G=\begin{bmatrix}
1 & 0 \\ 0 & 1 \\ \frac{1}{\epsilon}\cos(q_3) & \frac{1}{\epsilon}\sin(q_3)
\end{bmatrix},\quad q=\begin{bmatrix}
x \\ y \\ \theta
\end{bmatrix}.
\end{align*}
Here, our aim is to modify the controller proposed in \cite{acosta2005interconnection}.
The parameters of the IDA-PBC controller are
\begin{align*}
&M_d^{-1}=\begin{bmatrix}
\lambda_1\epsilon\cos^2(q_3)+\lambda_3 & \lambda_1\epsilon\cos(q_3)\sin(q_3) & \lambda_1\cos(q_3) \\ \lambda_1\epsilon\cos(q_3)\sin(q_3) & -\lambda_1\epsilon\cos^2(q_3)+\lambda_3 & \lambda_1\sin(q_3) \\ \lambda_1\cos(q_3) & \lambda_1\sin(q_3) & \lambda_2
\end{bmatrix}\\&
V_{dh}=\phi(q_1-q_{1d}-\frac{\lambda_3}{\lambda_1-\lambda_2\epsilon}\sin(q_3),q_2-q_{2d}+\\&\frac{\lambda_3-\lambda_1\epsilon}{\lambda_1-\lambda_2\epsilon}(\cos(q_3)-1))
\end{align*}
in which the following inequalities should hold
$$\lambda_3>5\lambda_1\epsilon,\quad \lambda_1/\epsilon>\lambda_2>\lambda_1/(2\epsilon).$$
Condition (\ref{8}) in this case is as follows
\begin{align*}
&k_1\begin{bmatrix}
1 & 0 & -\frac{\lambda_3}{\lambda_1-\lambda_2\epsilon} \\ 0 & 0 & 0 \\ -\frac{\lambda_3}{\lambda_1-\lambda_2\epsilon} & 0 & \big(\frac{\lambda_3}{\lambda_1-\lambda_2\epsilon}\big)^2
\end{bmatrix}+k_2\begin{bmatrix}
0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0
\end{bmatrix}+\\&\begin{bmatrix}
0 & 0 & 0\\ 0 & 0 & 0 \\ \theta_1 & 0 & \theta_2
\end{bmatrix}>0
\end{align*}
with
\begin{align*}
\theta_1&=\frac{g\epsilon(-\lambda_1\lambda_2\epsilon+\lambda_2\lambda_3-\lambda_1^2\epsilon^2+\lambda_1\lambda_3\epsilon)}{(\lambda_1\epsilon+\lambda_3)(-\lambda_1\epsilon+\lambda_3)\lambda_2-\lambda_1(-\lambda_1\epsilon+\lambda_3)\lambda_1}\\
\theta_2&=\frac{g\epsilon(\lambda_1^2\epsilon-\lambda_1\lambda_3+\lambda_1^2\epsilon^3-\lambda_3^2\epsilon)}{(\lambda_1\epsilon+\lambda_3)(-\lambda_1\epsilon+\lambda_3)\lambda_2-\lambda_1(-\lambda_1\epsilon+\lambda_3)\lambda_1}
\end{align*}
that can be solved easily.
\section{Conclusion and Future Works}
In this paper, we concentrated on the reformulation of the potential energy PDE of the IDA-PBC approach. It was shown that, when a certain condition is satisfied, it is enough to derive only the homogeneous solution of this PDE. The condition was analyzed for three cases, including 2-DOF underactuated robots, and it was shown that it may be reformulated as an LMI problem. The proposed method was verified on an underactuated cable-driven robot, the Acrobot, the Pendubot, the cart-pole, and the VTOL aircraft. Generalization of the method to simplify the kinetic energy PDE is our future aim.
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sec_intro}
Rigorous information on the QCD phase diagram, schematically depicted
in Fig.~\ref{fig_phasedia}, is scarce. At finite temperature ($T$) and
vanishing quark chemical potential ($\mu_q=0$), lattice QCD (lQCD)
computations predict the existence of a rapid cross-over transition
from hadronic matter to a Quark-Gluon Plasma at a (pseudo-) critical
temperature of $T_c=(180\pm20)$~MeV~\cite{Cheng:2007jq,Aoki:2006br}.
\begin{figure}[!b]
\begin{center}
\includegraphics[width=0.60\textwidth,angle=-90]{phase2.eps}
\end{center}
\caption[]{Schematic diagram of the QCD phase structure in the $\mu_q$-$T$
plane, as characterized by the lowest dimension quark condensates, i.e.,
the chiral quark-antiquark and the diquark one (Cooper pairs). The former
is believed to prevail in the low-$T$ and -$\mu_q$ hadronic world while
the latter occurs at the Fermi surface of cold dense quark matter. Key to
investigating the phase structure is the identification of the relevant
excitations in the respective phases, as indicated in the figure. The
``data'' points are extracted from hadro-chemical analysis of the final
state observed in heavy-ion collisions~\cite{BraunMunzinger:2003zd,
Becattini:2003wp}.
}
\label{fig_phasedia}
\end{figure}
Heavy-ion collisions at SPS and RHIC energies have shown that the produced
medium exhibits a high degree of thermalization and that the achieved
temperatures are on the order of $T_c$ and above. This provides a firm
basis for studying QCD matter in the laboratory. At finite $\mu_q$ and
vanishing $T$, there are compelling arguments that the high-density limit
of QCD is in a color-superconducting phase (attractive quark-quark
interaction plus Cooper theorem). Since this ground state, characterized
by a non-vanishing diquark condensate ($\langle qq\rangle \ne 0$), is
qualitatively different from the chirally broken QCD vacuum
($\langle \bar{q}q\rangle \ne 0$), it is natural to expect a first-order
transition at some intermediate chemical potential, $\mu_q^c\simeq400$~MeV.
Finally, heavy atomic nuclei have long been used to quantify the
properties and excitations of the finite-density ground state at
$\mu_q = (m_N-E_B)/3 \simeq 310$~MeV, indicated by the square box in
Fig.~\ref{fig_phasedia}.
Heavy-ion experiments performed over a wide range of collision energies
can, in principle, cover a large region of the phase diagram, cf.~the
``data'' points in Fig.~\ref{fig_phasedia}. In particular, as indicated
above, one ought to be able to determine phase changes, down to
temperatures of about 100~MeV where the critical chemical potential may
be around $\mu_q^c\simeq400$~MeV (it is, however, questionable
whether heavy-ion experiments can reach into the color-superconducting
phases, unless the zero-temperature quark pairing gap is well above
100~MeV~\cite{Rapp:1999qa}; even if so, equilibration appears to be
unlikely. More likely could be the production of the so-called
quarkyonic phase~\cite{McLerran:2007qj} which may extend to higher $T$).
Of special interest is the occurrence of a critical second-order
endpoint, whose existence is suggested by the cross-over transition
found in finite-$T$ lQCD and the putative first-order transition at $T=0$.
Suitable observables to study the QCD phase structure in heavy-ion
collisions (HICs) may be roughly divided into two categories:
(1) bulk observables driven by the equation of state (EoS), including
collective flow patterns of the most abundant particles ($\pi$, $K$, $p$)
or fluctuation observables in $p_T$ spectra and hadrochemistry;
(2) ``microscopic'' probes associated with specific quantum-number
channels, e.g., the vector current coupling to photons and dileptons.
In some instances the physical origin of type (1) and (2) observables is
closely related. E.g., electric-charge fluctuations are governed by the
electromagnetic (EM) susceptibility, $\chi_{\rm em}$, which can be
expressed as the screening limit of the static EM correlation
function, $\langle Q^2 \rangle - \langle Q\rangle^2
= \chi_{\rm em} = \Pi_{\rm em}(q_0=0,q\to 0)$, while photon and
dilepton spectra are directly proportional to the imaginary part
of $\Pi_{\rm em}$, as discussed below.
In this paper we focus on microscopic probes as realized via dileptons,
charm and charmonia. Corresponding observables are often associated with
``hard probes'', due to a large momentum transfer associated with their
initial production (e.g., $|q^2| \ge 4m_c^2 \simeq6$~GeV$^2$). We will
argue, however, that all of the above 3 probes can provide valuable
information on relatively ``soft'' modes in the medium, at the
temperature scale or below. For dileptons (Sec.~\ref{sec_dilep}), this
relates to the in-medium $\rho$-meson spectral function as reflected in
thermal radiation at low invariant masses ($M\sim {\cal O}(T)\le m_\rho$).
In the open-charm sector (Sec.~\ref{sec_charm}), one can study mechanisms
of thermalization or, more generally, transport properties within a
diffusion equation via modifications of charmed hadron $p_T$ spectra and
elliptic flow (governed by elastic scattering at typical momentum
transfers $|q|\sim {\cal O}(gT)$). Finally, for charmonia
(Sec.~\ref{sec_charmonium}), the key to understanding their in-medium
bound-state properties lies in color-screening effects (at the Debye
scale $m_D\sim{\cal O}(gT)$) as well as inelastic dissociation
reactions (at the binding-energy scale). In each sector, based on current
insights, we will try to identify promising directions of future
investigations specific to the situation of finite $\mu_q$ and/or the
putative critical point.
Concluding remarks are collected in Sec.~\ref{sec_concl}.
\section{Low-Mass Dileptons: $\rho$-Meson Spectroscopy}
\label{sec_dilep}
The basic quantity to calculate thermal emission spectra of
EM radiation from hot and dense matter is the retarded correlation
function of the hadronic EM current, $j^\mu_{\rm em}$,
\begin{equation}
\Pi_{\rm em}^{\mu \nu}(M,q;\mu_B,T) = -i \int d^4x \ e^{iq\cdot x} \
\Theta(x_0) \ \langle[j_{\rm em}^\mu(x), j_{\rm em}^\nu(0)]\rangle_T \ .
\end{equation}
Its imaginary part (EM spectral function) directly figures into the
differential production rates of dileptons ($l^+l^-$) and photons
($\gamma$),
\begin{eqnarray}
\frac{dN_{ll}}{d^4xd^4q} &=& -\frac{\alpha_{\rm em}^2}{\pi^3} \
\frac{L(M)}{M^2} \ f^B(q_0;T) \ {\rm Im}~\Pi_{\rm em} (M,q;\mu_B,T)
\label{Rll}
\\
q_0\frac{dN_{\gamma}}{d^4xd^3q} &=& -\frac{\alpha_{\rm em}}{\pi^2} \
f^B(q_0;T) \ {\rm Im}~\Pi_{\rm em}(M=0,q;\mu_B,T) \ ,
\label{Rgam}
\end{eqnarray}
respectively ($f^B$ is the thermal Bose distribution and $L(M)$ a
final-state lepton phase-space factor relevant close to the dilepton
threshold). In the vacuum, the low-mass regime ($M\le 1$~GeV) of
$\Pi_{\rm em}$ is essentially saturated by the light vector mesons
$\rho$, $\omega$ and $\phi$. Within the vector-dominance model (VDM)
the EM spectral function is directly proportional to the vector-meson
spectral functions,
\begin{equation}
{\rm Im}~\Pi_{\rm em} \sim [ {\rm Im}~D_\rho +
\frac{1}{9} {\rm Im}~D_\omega + \frac{2}{9} {\rm Im}~D_\phi ] \ .
\label{vdm}
\end{equation}
Thus, if VDM remains valid in the medium (see, e.g.,
Ref.~\cite{Harada:2003jx} for an alternative scheme), low-mass dilepton
spectra mostly probe in-medium modifications of the $\rho$ meson, which
have been studied rather extensively in the literature, see, e.g.,
Refs.~\cite{Rapp:2009yu,Leupold:2009kz} for recent reviews.
It turns out that low-mass thermal EM radiation in HICs dominantly
emanates from the hadronic phases of the collisions, even at RHIC
energies~\cite{David:2006sr}. It is therefore in order to study
hadronic medium effects on the $\rho$ propagator,
\begin{equation}
D_\rho(M,q;\mu_B,T) =
[ M^2-{m_\rho^{(0)}}^2-\Sigma_{\rho\pi\pi}-\Sigma_{\rho B}-
\Sigma_{\rho M} ]^{-1} \ ,
\end{equation}
encoded in selfenergy insertions, $\Sigma_\rho$, induced by interactions
with particles in the heat bath. These may be classified as
(a) medium modifications of the pion cloud, $\Sigma_{\rho\pi\pi}$,
due to pion rescattering (most notably on baryons) and thermal Bose
enhancement~\cite{Urban:1999im};
(b) direct $\rho$-baryon couplings~\cite{Friman:1997tc}, e.g.,
$\rho+N\to \Delta, N(1520), N(1720)$, etc.;
(c) direct interactions of the $\rho$ with mesons, e.g.,
$\rho+\pi\to \omega,a_1,...$ or $\rho+K\to K_1,...$ etc.~\cite{Rapp:1999qu}.
The interactions are usually modeled by effective hadronic Lagrangians
which satisfy basic constraints from EM gauge invariance and (mostly
for pions) chiral symmetry. The free parameters (coupling constants and
formfactor cutoffs to account for finite-size effects) can be
constrained empirically by partial decay rates (e.g.,
$a_1\to \pi\rho, \pi\gamma$) or, more comprehensively,
scattering data (e.g., $\pi N\to \rho N$ or photo-absorption cross
sections).
\begin{figure}[!t]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.78\textwidth,angle=-90]{Arho69PbT300c.eps}
\end{minipage}
\hspace{0.3cm}
\begin{minipage}{0.5\linewidth}
\vspace{-0.1cm}
\includegraphics[width=0.95\textwidth]{drdm2-Inx160.eps}
\end{minipage}
\vspace{0.1cm}
\caption[]{Left panel: $\rho$-meson spectral function in hot and dense
hadronic matter at fixed $\mu_B=3\mu_q=330$~MeV (corresponding to
baryon densities $\varrho_B/\varrho_0=0.1,0.7,2.6$ at $T=120,150,180$~MeV,
respectively) and 3-momentum $q=0.3$~GeV~\cite{Rapp:1999us}.
Right panel: 3-momentum
integrated thermal dielectron rates in the isovector channel using the
vacuum (dotted line) and full in-medium (solid line) $\rho$ spectral
function; the long-dashed, dash-dotted and short-dashed curves only
include in-medium selfenergies due to either the in-medium pion cloud
($\Sigma_{\rho\pi\pi}$) or direct $\rho$-baryon interactions
($\Sigma_{\rho B}$) or direct $\rho$-meson interactions
($\Sigma_{\rho M}$), respectively (the latter 2 include the free pion
cloud as well).}
\label{fig_arho}
\end{figure}
The left panel of Fig.~\ref{fig_arho} shows the in-medium $\rho$ spectral
function~\cite{Rapp:1999us}, including all of the above components, under
conditions roughly resembling HICs at the SPS. A strong broadening with
increasing matter density and temperature occurs, melting the resonance
when extrapolated toward the phase boundary region.
The large low-mass enhancement becomes much more apparent in the
dilepton production rate, due to the Bose factor and photon propagator
in eq.~(\ref{Rll}), cf.~right panel of Fig.~\ref{fig_arho}. At
$M\simeq0.4$~GeV, an order of magnitude enhancement over the rate
from free $\pi\pi$ annihilation is predicted. Also note that the
divergence of the rate for $M\to 0$ is required to produce a finite
photon production rate, eq.~(\ref{Rgam}). Plotting the rate in terms
of the 3 individual selfenergy contributions as introduced above, one
clearly recognizes the prevalence of the baryon-driven medium effects.
This may seem surprising since at SPS energies the observed
pion-to-baryon ratio is about 5:1. However, in the interacting medium,
most of the pions are ``stored'' in excited (meson and baryon)
resonances; e.g., at ($\mu_B$,$T$)=(240,160)~MeV, the total baryon
density, $\varrho_B\simeq0.8\varrho_0$, is quite comparable to the direct
pion density, $\varrho_\pi\simeq0.9\varrho_0$ ($\varrho_0=0.16~$fm$^{-3}$).
An application of the $\rho$ spectral function to recent NA60 low-mass
dimuon data in In-In collisions at SPS~\cite{Arnaldi:2006jq} is shown
in the left panel of Fig.~\ref{fig_dndm}.
\begin{figure}[!b]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.95\textwidth]{dndm-semi2.eps}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.95\textwidth]{Gam-emsf-SPS.eps}
\end{minipage}
\caption[]{Left panel: dimuon invariant-mass spectra (including experimental
acceptance) in semicentral In-In collisions at SPS energies ($E_{\rm lab}=
158$~AGeV); NA60 data~\cite{Arnaldi:2006jq} are compared to theoretical
calculations based on the in-medium $\rho$ spectral function shown
in Fig.~\ref{fig_arho}.
Right panel: in-medium $\rho$ width as a function of temperature along
isentropic trajectories in the phase diagram starting from chemical
equilibrium at $(T,\mu_B)=(175,230)$~MeV (dots: fixed hadrochemistry
including chemical potentials for pions, kaons, etc.; squares: chemical
equilibrium; triangles: without baryonic medium effects).}
\label{fig_dndm}
\end{figure}
Upon convoluting the EM emission rates over an expanding fireball, the
excess radiation is well described by the predicted in-medium $\rho$
line shape (QGP and primordial contributions are
subleading)~\cite{vanHees:2007th}, see
Refs.~\cite{Dusling:2007kh,Ruppert:2007cr,Bratkovskaya:2008bf} for
alternative calculations. This is also true for the excess dielectron
data reported for central Pb-Au collisions by CERES/NA45~\cite{Adamova:2006nu}.
One can quantify the $\rho$ broadening by an approximate average width
which amounts to
$\bar{\Gamma}_\rho^{\rm med}\simeq$~(350-400)~MeV~$\simeq
3~\Gamma_\rho^{\rm vac}$, realized at a representative temperature
of $\bar{T}\simeq$~150-160~MeV, cf.~right panel of Fig.~\ref{fig_dndm}.
This inevitably implies that, toward $T_c$, the $\rho$'s in-medium width
becomes comparable to its mass, i.e., the resonance has indeed melted.
The absolute yield of the excess radiation is quite sensitive to the
total fireball lifetime, enabling a remarkably accurate measurement
of the fireball lifetime for semicentral In-In collisions,
$\tau_{\rm FB}\simeq (6.5\pm1)$~fm/$c$. This tool could become
invaluable for detecting significant lifetime changes when approaching
the critical point and/or moving into the first-order regime with an
extended mixed phase (of course, it only works if the in-medium
spectral shape is under sufficient theoretical control).
The NA60 collaboration has recently taken another step forward by
fully correcting their data for experimental
acceptance~\cite{Arnaldi:2008er,Arnaldi:2008fw}. Upon integrating over
transverse momentum, the resulting invariant-mass spectra,
$dN_{\mu\mu}/dMdy$, do justice to the notion of
Lorentz-{\em invariance}, i.e., transverse flow effects have been
eliminated. Thus, one is essentially looking at the (average) emission
rate from the medium, multiplied by the emitting 4-volume, cf.~left
panels in Fig.~\ref{fig_rate}.
\begin{figure}[!t]
\begin{minipage}{0.5\linewidth}
\vspace{-1.2cm}
\includegraphics[width=0.92\textwidth]{dndm-NA60acor-eosd.eps}
\vspace{-0.2cm}
\includegraphics[width=0.92\textwidth]{dndm-NA60acor02-eosd.eps}
\end{minipage}
\hspace{-0.4cm}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1.00\textwidth]{drdm-mu15-150-180.eps}
\end{minipage}
\caption[]{Left panel: acceptance-corrected dimuon invariant-mass spectra
in semicentral In(158~AGeV)-In~\cite{Arnaldi:2008er,Arnaldi:2008fw} for
transverse pair momenta $q_t=0.2$-2.4~GeV (upper left) and
$q_t=0$-0.2~GeV (lower left).
Right panel: 3-momentum integrated dimuon thermal emission rates in
the isovector ($\rho$) channel at a baryon chemical potential
representative for SPS energies ($\mu_B=330$~MeV)~\cite{Rapp:1999us}.}
\label{fig_rate}
\end{figure}
This provokes a direct comparison to the theoretical input rates based
on Ref.~\cite{Rapp:1999us}, augmented by the muon phase-space factor,
$L(M)$, shown in the right panel of Fig.~\ref{fig_rate} for two
temperatures. The resemblance of the in-medium hadronic rates and the
NA60 spectra is rather astonishing, both in slope and shape. The former
can, in principle, serve as a true thermometer, i.e. free from blue-shift
contamination due to transverse flow. Essential to these
arguments is the prevalence of thermal radiation in the excess spectra
which is borne out by (i) the theoretical calculations, and
(ii) the complete lack of polarization in the measured angular
distribution of the muon pairs~\cite{Arnaldi:2008gp}. The good overall
agreement of theory and data furthermore corroborates that VDM
stays intact even close to the phase boundary (the data also indicate
that the $\phi$ does not radiate dileptons in the hadronic
phase but decouples earlier~\cite{Adamova:2005jr,Arnaldi:2009wr}; this
is, in fact, consistent with its relatively soft $p_T$ spectra).
The importance of baryon-driven medium effects in the interpretation
of the SPS low-mass dilepton data naturally calls for studies at
lower collisions energies where even larger baryon compression
may be achieved.
\begin{figure}[!t]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.8\textwidth,angle=-90]{ArhoM69PbT140.ps}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.8\textwidth,angle=-90]{ArhoM69PbT170.ps}
\end{minipage}
\caption[]{$\rho$-meson spectral functions, weighted by a factor of
inverse mass as figuring into the 3-momentum integrated dilepton rate,
at two temperatures and for baryon densities representative for full SPS
energy (red lines) and CBM energies (blue lines).}
\label{fig_arhom}
\end{figure}
Hadronic many-body calculations identify the mass regime around
$M\simeq0.2$~GeV as the most sensitive one in the $\rho$ spectral
function, with up to a factor of $\sim$2 enhancement under conditions
expected at the Compressed Baryonic Matter (CBM) experiment relative
to full SPS energy. A first glimpse at such an effect may have been
seen by CERES/NA45 in a 40~AGeV Pb-Au run~\cite{Adamova:2002kf}.
A direct way to study baryon effects is provided by cold
nuclear matter, i.e., in atomic nuclei. The advantage over heavy-ion
experiments obviously lies in the essentially static matter environment,
which, however, is limited by nuclear saturation density ($\varrho_0$)
and also exhibits significant spatial gradients. Nevertheless, the
predicted medium effects on the $\rho$ spectral function are appreciable
even at half saturation density, at least at small 3-momentum relative
to the nuclear rest frame, see left panel of
Fig.~\ref{fig_cold}~\cite{Rapp:1999us,Riek:2008ct}.
\begin{figure}[!t]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1.0\textwidth]{Arhoc05q.eps}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\vspace{-0.3cm}
\hspace{1.4cm}
\includegraphics[width=0.7\textwidth]{dia-rho-phot-pro.eps}
\end{minipage}
\caption[]{Left panel: in-medium $\rho$ spectral function in cold
nuclear matter at half saturation density for various 3-momenta (for
$q$>0, transverse and longitudinal modes split).
Right panel: elementary amplitude for $\rho$ photo-production on the
nucleon~\cite{Riek:2008ct}.
}
\label{fig_cold}
\end{figure}
As in HICs, the dilepton final state is the cleanest way to probe medium
effects. The initial state in nuclear production experiments is, however,
rather different: the $\rho$ has to be created by an external
excitation (cf.~right panel of Fig.~\ref{fig_cold}), as compared to an
approximately thermal medium in HICs.
Thus, a good knowledge of the production process is mandatory, which
can be tested with proton targets. Two additional complications arise:
(a) an in-medium broadening leads to a reduction in the dilepton to
hadronic branching ratio, thus reducing the signal (in HICs the $\rho$
is ``continuously'' regenerated in the interacting medium);
(b) to provide the mass of the $\rho$ a rather energetic incoming
particle is needed which usually implies that the $\rho$ carries
significant 3-momentum; this enhances surface and/or escape effects
thus reducing the in-medium signal as well.
The latter point is presumably best dealt with using a photon beam
where all the incoming energy can be converted into mass.
Photo-production of dileptons has recently been studied by the CLAS
collaboration at Jefferson Lab (JLab) using a variety of nuclear
targets~\cite{Wood:2008ee}, with an incoming photon spectrum ranging
over $E_\gamma\simeq$~(1-3.5)~GeV.
\begin{figure}[!t]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.93\textwidth]{clas-fe-spec.eps}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1.0\textwidth]{dndm-clas-Fe.eps}
\end{minipage}
\caption[]{Dilepton spectra measured in nuclear photo-production
by CLAS at JLAB~\cite{Wood:2008ee}, including transport model
calculations~\cite{Muhlich:2002tu} for the decays of
light vector mesons $\rho$, $\omega$ and $\phi$. Right panel:
CLAS ``excess spectra'' compared to calculations~\cite{Riek:2008ct}
using the same $\rho$ spectral function as for the NA60 data in
Figs.~\ref{fig_dndm}, \ref{fig_rate}.}
\label{fig_clas}
\end{figure}
The left panel of Fig.~\ref{fig_clas} shows the dilepton signal from
iron targets, compared to Boltzmann transport
calculations~\cite{Muhlich:2002tu} for $\rho$, $\omega$ and $\phi$
decays. Since the $\omega$ and $\phi$ peaks are essentially unaffected
by the medium
(and well concentrated in mass), their contribution can be subtracted
from the spectrum (much like for the NA60 data) leading to an ``excess''
signal as shown by the data in the right panel of Fig.~\ref{fig_clas}.
Breit-Wigner fits have been applied resulting in a moderate $\rho$
broadening to $\Gamma_\rho^{\rm med}\simeq220$\,MeV~\cite{Wood:2008ee}.
The $\rho$ spectral function used in the NA60 context has been applied
to the CLAS experiment in combination with a realistic elementary
production amplitude and a somewhat schematic modeling of the spatial
propagation, accounting for the production kinematics and nuclear
density profile~\cite{Riek:2008ct}. The CLAS data are reasonably well
described; it turns out that for the Fe target the typical densities
and 3-momenta probed in the spectral
function are $\bar{\varrho}\simeq0.5\varrho_0$ and $\bar{q}\simeq2$~GeV.
The latter are the main reason for moderating the medium effects
in the current CLAS data (recall left panel of Fig.~\ref{fig_cold}).
However, at high momenta additional medium effects not included in the
spectral function of Ref.~\cite{Rapp:1999us} may occur, see, e.g.,
the discussion in Ref.~\cite{Rapp:1999ej}.
\section{Open Charm and Transport}
\label{sec_charm}
The masses of charm and bottom quarks (and hadrons) are much larger than
the typical temperatures realized in heavy-ion collisions at SPS and
RHIC, $m_{Q}\gg T$ ($Q$=$b$,$c$). Furthermore, a typical momentum transfer
from the heat bath, $|q^2|\simeq T^2$, is parametrically smaller than the
thermal momentum of $c$ and $b$ quarks, $p^2\sim 3m_{Q}T \gg |q^2|$.
Thus a diffusion approximation to the Boltzmann equation becomes
applicable leading to a Fokker-Planck
equation~\cite{Svetitsky:1987gq,vanHees:2004gq,Moore:2004tg,Mustafa:2004dr},
\begin{equation}
\frac{\partial f_Q}{\partial t} = \gamma \frac{\partial (pf_Q)}{\partial p}
+ D \frac{\partial^2 f_Q}{\partial p^2} \ ,
\end{equation}
for the heavy-quark (HQ) phase-space distribution, $f_Q$. The scattering
rate, $\gamma$, and momentum diffusion coefficient, $D$, are related
via the Einstein relation, $T=D/(\gamma m_Q)$.
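As a purely illustrative numerical sketch of how $\gamma$ and $D$ enter (assuming momentum-independent coefficients, one dimension and a static, non-relativistic heat bath, unlike the relativistic Langevin simulations discussed below), the corresponding Langevin update and the approach to the thermal momentum variance $m_QT$ can be written as follows; all numerical values are assumptions made for illustration only.
\begin{verbatim}
import numpy as np

# Langevin form of the Fokker-Planck equation above (1D, non-relativistic sketch):
#   dp = -gamma * p * dt + sqrt(2*D*dt) * xi ,  with Einstein relation D = gamma * m_Q * T
T, m_Q = 0.2, 1.5          # temperature and charm-quark mass [GeV] (illustrative)
gamma = 0.2                # drag coefficient [c/fm] (assumed value)
D = gamma * m_Q * T        # momentum diffusion coefficient from T = D/(gamma*m_Q)
dt, n_steps, n_quarks = 0.02, 2000, 20000

rng = np.random.default_rng(0)
p = rng.normal(0.0, 2.0, size=n_quarks)        # arbitrary initial momenta [GeV]
for _ in range(n_steps):
    p += -gamma * p * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_quarks)

print(p.var(), m_Q * T)    # after many relaxation times, var(p) -> m_Q * T
\end{verbatim}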
Applications to RHIC data revealed that perturbative QCD (pQCD) elastic
scattering is insufficient to generate the observed elliptic flow, even
with a strong coupling constant as large as $\alpha_s=0.4$, supporting
the notion of a strongly coupled QGP (sQGP). At strong coupling, diagrams
with large contributions have to be resummed, possibly leading to the
appearance of collective modes (bound states or resonances). In this spirit,
the effective resonance model for $c$- and $b$-quark scattering through
in-medium $D$ and $B$ meson has been introduced~\cite{vanHees:2004gq}.
The pertinent $s$-channel
diagrams are displayed in the left panels of Fig.~\ref{fig_graph}. An
approximately 4-fold decrease of the HQ thermalization time,
$\tau_Q=\gamma^{-1}$, has been found relative to pQCD.
\begin{figure}[!t]
\begin{minipage}{0.4\linewidth}
\begin{center}
\includegraphics[width=0.65\textwidth]{Dreso-graph.eps}
\includegraphics[width=0.9\textwidth]{Dself-graph.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.6\linewidth}
\includegraphics[width=0.95\textwidth]{brueckner.eps}
\end{minipage}
\caption[]{Left panels: $c$-quark scattering off light antiquarks via
$s$-channel $D$-mesons (upper panel) and pertinent $D$-meson
selfenergy (lower panel) within the effective resonance model in the
QGP~\cite{vanHees:2004gq}. Right panels: selfconsistent Brueckner scheme
for heavy quarks in the QGP based on an in-medium heavy-light $T$-matrix
with interaction potential $V$ (upper panel) and pertinent HQ
selfenergy (lower panel)~\cite{vanHees:2007me}.}
\label{fig_graph}
\end{figure}
When implemented into relativistic Langevin simulations within an
expanding fireball for Au-Au collisions at RHIC~\cite{vanHees:2005wb},
the recent data on suppression and elliptic flow of semileptonic decay
electrons are fairly well described~\cite{Adare:2006nq,Abelev:2006db},
cf.~left panel of Fig.~\ref{fig_elec} (see also
Refs.~\cite{Zhang:2005ni,Gossiaux:2008jv,Akamatsu:2008ge}).
Heavy-light quark coalescence in the hadronization
process at $T_c$ plays a significant role in increasing {\em both}
$R_{AA}$ and $v_2$.
\begin{figure}[!t]
\begin{minipage}{0.5\linewidth}
\vspace{-0.5cm}
\includegraphics[width=0.95\textwidth]{RAA-v2-elec-phenix.eps}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.95\textwidth]{RAA-v2-elec-tmat.eps}
\end{minipage}
\caption[]{RHIC data~\cite{Adare:2006nq,Abelev:2006db,Hornback:2008ur}
for the nuclear modification factor (upper panels) and elliptic flow (lower
panels) of HQ decay electrons in
Au-Au collisions, compared to theory.
Left panels: relativistic Langevin simulations using upscaled pQCD
elastic scattering cross sections (short-dashed and dash-dotted
line)~\cite{Moore:2004tg} or effective resonances+pQCD elastic
scattering (bands)~\cite{vanHees:2005wb}, and radiative energy-loss
calculations (long-dashed lines)~\cite{Armesto:2005mz}.
Right panels: heavy-light quark
$T$-matrix+pQCD elastic interactions~\cite{vanHees:2007me} using
internal energies from quenched (solid line)
and $N_f$=2 (dash-dotted line) thermal lQCD~\cite{Kaczmarek:2003ph};
the dashed lines are obtained if hadronization via heavy-light quark
coalescence at $T_c$ is switched off.
}
\label{fig_elec}
\end{figure}
One may ask whether resonant HQ interactions in the QGP can be
understood more microscopically. This question has been studied
using a heavy-light quark $T$-matrix equation~\cite{vanHees:2007me},
\begin{equation}
T_{qQ} = V_{qQ} + V_{qQ} \, G_{qQ} \, T_{qQ}
\end{equation}
diagrammatically depicted in the upper right panel of Fig.~\ref{fig_graph}.
The key input is the driving kernel (potential), $V_{qQ}$, which
has been assumed to coincide with the static heavy-quark internal
energy computed in thermal lQCD, augmented by relativistic corrections
due to color-magnetic current-current interactions. In addition
to the color-singlet channel, the $Q$-$\bar{q}$ and $Q$-$q$ interactions
in the color-octet, -antitriplet and -sextet channels have been estimated
(using Casimir scaling). It turns out that the interaction
strength is concentrated in the attractive singlet and antitriplet
channels, supporting Feshbach-like meson and diquark resonances close to
the $q$-$Q$ threshold up to temperatures of $\sim$1.7\,$T_c$ and
$\sim$1.4\,$T_c$, respectively. Compared to the resonance model, the HQ
interaction in the $T$-matrix approach of similar strength close
to $T_c$, but weakens at higher $T$ due to color-screening in the
potential which dissolves the resonances.
An open problem remains the nonperturbative treatment of HQ-gluon
interactions. The application of the $T$-matrix approach to RHIC
electron data looks promising (right panels in Fig.~\ref{fig_elec}),
but significant uncertainties remain which currently inhibit definite
conclusions about the microscopic origin of HQ diffusion.
What effects can be expected for charm-quark diffusion at finite
$\mu_q$, relevant for CBM energies and a RHIC energy scan?
The effective resonance model suggests that, in a quark-dominated
environment, anticharm quarks ($\bar{c}+q\to \bar{D}$) interact more
frequently than charm quarks ($c+\bar{q}\to D$). However, in the
$T$-matrix approach, scattering via (anti-) diquarks, $cq$ and
$\bar{c}\bar{q}$, is equally important, thus washing out the
asymmetry. Moreover, in the hadronic phase, the baryon excess
favors $D$-meson scattering via $\Lambda_c N^{-1}$ excitations
over its antiparticle conjugate.
A promising possibility could be the development of the $\sigma$ soft
mode close to the critical point, which is particularly pronounced
in the spacelike regime~\cite{Fujii:2003bz}. {\em If} charm quarks
couple to the $\sigma$, their $t$-channel exchange cross section with
light quarks (and hadrons) could be ``critically'' enhanced, leaving
observable traces in $D$-meson $p_T$ spectra and $v_2$ (see
Ref.~\cite{Zhuang:1995uf} for a related study in the light-quark
sector).
\section{Charmonia: Screening and Dissociation}
\label{sec_charmonium}
The dissolution temperature of charmonia in medium largely depends
on two mechanisms: color-screening of the inter-quark force and
inelastic dissociation reactions. While the
former largely governs the $Q$-$\bar{Q}$ binding energy, $\varepsilon_B$
(via spacelike gluon exchange), the latter determines the inelastic
width, $\Gamma_\psi^{\rm inel}$, of the bound-state (via dissociation
reactions with timelike (on-shell) partons in the heat bath).
Within a schematic pole ansatz, both mechanisms figure into the
charmonium spectral function as
\begin{equation}
\sigma_\psi(\omega) \sim {\rm Im}~D_\psi(\omega) \sim
{\rm Im}~[\omega-2m_c^* +\varepsilon_B + i\,\Gamma_\psi/2 ]^{-1}
\label{sig-psi}
\end{equation}
(a pole ansatz applies to a well-defined bound-state/resonance).
Note the dependence on the in-medium $c$-quark mass, $m_c^*$, and
that the total width, $\Gamma_\psi$, includes contributions from elastic
scattering as well. However, as we will see below, binding energies and
dissociation reactions mutually influence each other.
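Written out explicitly, the pole ansatz corresponds (up to overall sign and
normalization conventions) to a Lorentzian,
\[
\left|{\rm Im}\,[\omega-2m_c^*+\varepsilon_B+i\,\Gamma_\psi/2]^{-1}\right|
= \frac{\Gamma_\psi/2}{(\omega-2m_c^*+\varepsilon_B)^2+\Gamma_\psi^2/4} \ ,
\]
peaked at the in-medium bound-state mass $M_\psi=2m_c^*-\varepsilon_B$ with
full width $\Gamma_\psi$, which makes explicit how mass shifts (through $m_c^*$
and $\varepsilon_B$) and broadening (through $\Gamma_\psi$) enter separately.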
Recent years have seen a revival of potential models to evaluate
in-medium quarkonium
properties~\cite{Mocsy:2007yj,Cabrera:2006wh,Alberico:2006vw,Wong:2006bx,Laine:2007qy,Brambilla:2008cx}.
\begin{figure}[!t]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1.0\textwidth]{etac-MP08.eps}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.9\textwidth]{ImG-etac-CR.eps}
\includegraphics[width=0.9\textwidth]{RG-etac-CR.eps}
\end{minipage}
\caption[]{$S$-wave $c$-$\bar{c}$ spectral functions and Euclidean-time
correlators (normalized with a ``reconstructed'' correlator, $R_G(\tau)\equiv
G(\tau)/G_{\rm rec}(\tau)$) using a HQ potential close to the lQCD free
energy (left panels)~\cite{Mocsy:2007yj} or corresponding to the
lQCD internal energy (right panels)~\cite{Cabrera:2006wh}.}
\label{fig_corr}
\end{figure}
This was largely spurred by the hope that the input potential can be
taken in a model-independent way from the HQ free energy computed in
thermal lQCD, and that resulting spectral functions can be discriminated
via comparisons to Euclidean correlation functions independently
computed in lQCD. There is, however, an ongoing controversy as to
which quantity to identify with a potential (free vs. internal energy),
and concerning the gauge variance of the projection on the color-singlet
channel~\cite{Philipsen:2008qx}. More quantitatively, the current
situation is illustrated in Fig.~\ref{fig_corr}.
Roughly speaking, when the singlet free energy, $F_{Q\bar{Q}}^1(r,T)$,
is used as potential, ground-state charmonia dissolve at temperatures
below 1.5\,$T_c$ (left panel)~\cite{Mocsy:2007yj}. On the other hand,
with the internal energy,
$U_{Q\bar{Q}}^1 =F_{Q\bar{Q}}^1(r,T) +T S_{Q\bar{Q}}^1(r,T)$
($S_{Q\bar{Q}}$: entropy), a $J/\psi$ peak in the
spectral function can survive up to 2.5-3\,$T_c$ (upper right
panel)~\cite{Cabrera:2006wh}.
The spectral functions have been used to calculate temporal Euclidean
correlation functions,
\begin{equation}
G_\psi(\tau,p;T)=
\int\limits_0^\infty {d\omega} \ \sigma_\psi(\omega,p;T) \
\frac{\cosh[\omega(\tau-1/(2T))]}{\sinh[\omega/(2T)]} \ ,
\label{G-tau}
\end{equation}
which are usually normalized to a ``reconstructed'' correlator,
$G_{\rm rec}$, computed with a vacuum spectral function (but
identical finite-temperature kernel in eq.~(\ref{G-tau})).
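To make the connection between spectral functions and correlator ratios concrete, eq.~(\ref{G-tau}) can be evaluated numerically; the Lorentzian toy spectral functions in the following sketch are purely illustrative assumptions and not the actual model or lattice results.
\begin{verbatim}
import numpy as np

def correlator(sigma, omega, tau, T):
    # G(tau): integrate sigma(omega) against the thermal kernel
    # cosh[omega*(tau - 1/(2T))] / sinh[omega/(2T)]  (simple Riemann sum)
    kernel = np.cosh(omega * (tau - 1.0 / (2.0 * T))) / np.sinh(omega / (2.0 * T))
    return np.sum(sigma * kernel) * (omega[1] - omega[0])

def lorentzian(omega, M, Gamma):
    return (Gamma / 2.0) / ((omega - M) ** 2 + Gamma ** 2 / 4.0)

T = 0.3                                   # temperature [GeV]
omega = np.linspace(1.0, 8.0, 4000)       # energy grid [GeV] (toy support region)
sigma_vac = lorentzian(omega, 3.0, 0.02)  # narrow "vacuum-like" peak (assumption)
sigma_med = lorentzian(omega, 3.0, 0.20)  # broadened "in-medium" peak (assumption)

for tau in np.linspace(0.1, 0.9, 9) / T:  # tau in (0, 1/T)
    ratio = correlator(sigma_med, omega, tau, T) / correlator(sigma_vac, omega, tau, T)
    print(tau, ratio)                     # ratio of in-medium to vacuum-kernel correlator
\end{verbatim}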
Surprisingly, both weak and strong-binding scenarios for the
spectral function result in correlator ratios which are around
one and depend weakly on temperature (compatible with lQCD
results~\cite{Datta:2003ww,Jakovac:2006sf,Aarts:2007pk}), cf.~inset
in the left panel and lower right panel of Fig.~\ref{fig_corr}.
The reason for this ``redundancy" is the underlying effective quark
mass, which is calculated from the asymptotic value of the HQ
free or internal energy, $m_c^*=m_c^0+\Delta m_c$ with
$2\Delta m_c = F(r\to\infty,T)$ or $U(r\to\infty,T)$.
For the free energy, $\Delta m_c$ is substantially reduced from its
vacuum value, thus lowering the in-medium $c\bar{c}$ threshold
considerably; this compensates for the lack of a bound state in
providing low-energy strength in the spectral function to ensure
a stable correlator ratio. For the internal energy, $\Delta m_c$
is significantly larger which, together with a stronger binding,
leads to an essentially stable $J/\psi$ peak position and thus a
roughly stable $T$-dependence of the correlator ratio.
More work is needed to disentangle these two scenarios.
Finite-width effects on the correlators have received rather little
attention thus far, but they seem to further stabilize
the $T$-dependence~\cite{Cabrera:2006wh}.
If a reliable understanding of quarkonium correlators in the QGP
at $\mu_q=0$ in terms of potential models can be established,
it might serve as a benchmark to extrapolate, e.g., color-screening
effects to finite $\mu_q$.
\begin{figure}[!t]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.95\textwidth]{gam-psi-qgp.eps}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=0.95\textwidth]{gam-psi-muq.eps}
\end{minipage}
\caption[]{$J/\psi$ dissociation rate (=inelastic width) in a QGP as a
function of $T$ (at $\mu_q$=0, left panel) and $\mu_q$ (at $T$=180~MeV,
right panel) for gluo-dissociation ($g+J/\psi\to c+\bar{c}$) and
quasifree destruction ($p+J/\psi\to p+c+\bar{c}$ with $p=q,\bar{q},g$)
with vacuum and in-medium reduced $J/\psi$ binding energy.}
\label{fig_gam-psi}
\end{figure}
The dissociation width of heavy quarkonia in medium is pivotal
for a quantitative description of suppression (and
regeneration!) reactions in HICs. The prevalent dissociation
mechanism in the QGP depends on the binding energy of the bound
state~\cite{Kharzeev:1995ju,Grandchamp:2001pf,Park:2007zza,Zhao:2007hh},
cf.~left panel of Fig.~\ref{fig_gam-psi}.
For large binding, $\varepsilon_B\sim 3T$, thermal-gluon absorption is
most efficient, $g+J/\psi\to c+\bar{c}$ (and formally the process to
leading-order in $\alpha_s$). For small binding, $\varepsilon_B < T$,
the phase space for gluo-dissociation shrinks leading to a decrease
in its rate with $T$. Thus, ``quasi-free'' dissociation,
$p+J/\psi\to p+c+\bar{c}$, albeit formally of higher order in
$\alpha_s$, takes over. Note that the quasi-free process can be
induced by both gluons and anti-/quarks. This has consequences at
finite $\mu_q$, in that quasifree dissociation is additionally enhanced
over gluo-dissociation due to an increasing abundance of thermal
quarks, cf.~right panel of Fig.~\ref{fig_gam-psi}~\cite{Zhao:2009}.
\section{Conclusions}
\label{sec_concl}
Instead of a formal summary of this paper, let us reiterate what we
find the most promising perspectives regarding the finite-$\mu_q$
dependence of dileptons and charm/onia at this point.
For low-mass dileptons, we have identified the mass region around
$M\simeq0.2$~GeV as the most sensitive one for baryon-driven medium
effects. In addition, with a good knowledge of the in-medium spectral
shape of the EM correlator, the dilepton yield can finally be used for
an accurate determination of the fireball lifetime in HICs, which might
be useful in detecting (the onset of) an extended quark-hadron mixed
phase. For open charm, a ``critical'' enhancement of $\sigma$ exchange
in $c$-quark or $D$-meson scattering in the vicinity of the critical
point may occur, potentially affecting transport properties in an
observable way ($p_t$ spectra and elliptic flow). For charmonia, a
hope is to establish the validity of finite-$T$ potential models and
extrapolate them to finite $\mu_q$, augmented by microscopic
calculations of dissociation rates.
These developments are particularly exciting in view of future tests
in several heavy-ion programs around the world.
\section*{Acknowledgments}
It is a pleasure to thank D.~Cabrera, V.~Greco, M.~Mannarelli, F.~Riek,
H.~van Hees, J.~Wambach and X.~Zhao for their collaboration on various
aspects of the presented topics, and H.~van Hees for a careful reading
of the ms.
This work has been supported by a U.S. National Science Foundation
CAREER award under grant no. PHY-0449489 and by the A.-v.-Humboldt
Foundation (Germany) through a Bessel award.
\section{Introduction}
Magnetic resonance imaging (MRI) is an important non-invasive imaging technique, which enables excellent assessments of structural and functional conditions with no radiation in a reproducible manner. Basically, MRI is aimed to reconstruct the images from the observed signals whose degradation process can be formulated as follows:
\begin{align}\label{formula:mri_forword}
y = \mathcal{F}x + n,
\end{align}
\noindent where $x,y \in \mathbb{C}^{N}$ are the vectors denoting the latent image to reconstruct in the image domain and the observed measurements in \textit{k}-space, $\mathcal{F} \in \mathbb{C}^{N \times N}$ is the 2D discrete Fourier transform (DFT) and $n$ is the noise inevitably appearing in the signal acquisition process.
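For illustration, a minimal sketch of the degradation model in the equation above on a Cartesian grid is given below (orthonormal 2D FFT as $\mathcal{F}$, complex Gaussian noise; all numerical settings are illustrative assumptions).
\begin{verbatim}
import numpy as np

def mri_forward(x, noise_std=0.0, rng=None):
    # y = F x + n : orthonormal 2D DFT of the image plus complex Gaussian noise
    rng = rng or np.random.default_rng(0)
    y = np.fft.fft2(x, norm="ortho")
    n = noise_std * (rng.standard_normal(y.shape) + 1j * rng.standard_normal(y.shape))
    return y + n

def mri_inverse(y):
    # naive reconstruction for fully sampled data: inverse 2D DFT
    return np.fft.ifft2(y, norm="ortho")

x = np.random.rand(256, 256)           # stand-in for the latent image
y = mri_forward(x, noise_std=0.01)     # fully sampled, noisy k-space measurements
x_hat = mri_inverse(y)                 # recovers x up to the added noise
\end{verbatim}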
However, acquiring the full measurements of $y$ to construct a high-quality MR image $x$ is highly time-consuming. Moreover, the long scanning time will bring about the artefacts arising from the voluntary movements of the patients and involuntary physiological movements~\citep{Zbontar2018}.
In order to mitigate the long acquisition time of MRI as well as alleviate the aliasing artefacts, a range of methods has been developed for accelerating MRI to obtain accurate reconstructions.
Traditionally, gradient refocusing~\citep{Stehling1991} and a multiple-radio-frequency mediated approach~\citep{Hennig1986} were proposed. Under the constraints of the Nyquist-Shannon sampling theorem, they did reduce the scanning time, although only by a limited factor. With the development of parallel imaging (PI) and compressed sensing (CS) theory, fast MRI based on these two theories has attracted much research and many advancements.
Parallel imaging was introduced to take advantage of the spatial sensitivity distributions derived from an array of carefully distributed receiver surface coils, reducing the measurements required from each coil, alleviating the need to enhance gradient performance and hence reducing the acquisition time~\citep{Blaimer2004}.
The undersampled \textit{k}-space signal using PI MRI can be represented by a general model as:
\begin{align}\label{formula:pi_mri_forword}
y^q = \mathcal{F}_{u}(\mathcal{S}^q \otimes x) + n^q, \quad q = 1,\ldots,S,
\end{align}
\noindent where $\mathcal{S}^q$ is the sensitivity map and $\mathcal{F}_{u} \in \mathbb{C}^{M \times N}$ is the undersampled 2D DFT matrix with $M\ll N$ to reduce the measurements of each $y^q$. With $S$ coils applied in parallel, one can obtain $y^{1}, \ldots, y^{S}$ simultaneously to reconstruct the latent image $x$. To reconstruct these PI-acquired images, great progress has been made in developing PI reconstruction techniques, yielding popular methods such as the simultaneous acquisition of spatial harmonics (SMASH)~\citep{Sodickson1997}, sensitivity encoding (SENSE)~\citep{Pruessmann1999} and generalized auto-calibrating partially parallel acquisition (GRAPPA)~\citep{Griswold2002}.
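A corresponding sketch of the multi-coil model above is given below; the sensitivity maps, sampling mask and noise level would come from calibration data in practice and are simply assumed inputs here.
\begin{verbatim}
import numpy as np

def pi_forward(x, sens_maps, mask, noise_std=0.0, rng=None):
    # y^q = F_u (S^q * x) + n^q for q = 1..S
    # sens_maps: (S, H, W) complex coil sensitivities; mask: (H, W) sampling pattern
    rng = rng or np.random.default_rng(0)
    coil_images = sens_maps * x[None, ...]                      # S^q (pixel-wise) x
    k = np.fft.fft2(coil_images, axes=(-2, -1), norm="ortho")   # per-coil 2D DFT
    n = noise_std * (rng.standard_normal(k.shape) + 1j * rng.standard_normal(k.shape))
    return mask[None, ...] * (k + n)                            # keep sampled locations only

def rss_combine(coil_images):
    # root-sum-of-squares combination of per-coil images into one magnitude image
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))
\end{verbatim}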
The invention of CS theory~\citep{Donoho2006} further advanced the sampling efficiency of MRI.
CS-MRI utilises non-linear methods and sparse transformations to reconstruct the latent image from only a small portion of the \textit{k}-space measurements, acquired at a sampling rate far below the Nyquist rate. The general CS-MRI problem is to find the minimiser of the following problem:
\begin{align}\label{formula:cs_mri_1}
\argmin_{x} \mid\mid\Phi x\mid\mid_{1},
\text{ s.t. } y=\mathcal{F}_{u} x,
\end{align}
\noindent where $\Phi$ is the sparsifying transformation, $\mathcal{F}_{u} \in \mathbb{C}^{M \times N}$ is undersampled 2D DFT with $M\ll N$, and $y \in \mathbb{C}^{M}$ is the observed undersampled measurements in \textit{k}-space.
A range of non-linear reconstruction methods has demonstrated success in resolving this, including some fixed sparsifying methods such as total variance~\citep{Block2007}, curvelets~\citep{Beladgham2008} and double-density complex wavelet~\citep{Zhu2013}, and a few adaptive sparsifying models taking the advantage of dictionary learning~\citep{Ravishankar2011}.
While both CS-MRI and PI-MRI can significantly reduce the required number of measurements in \textit{k}-space, the iterative algorithms required to derive the image prolong the reconstruction time and hence raise concerns when translated to actual clinical use.
As a modern popular method for general image analysis, deep learning has been very successful by exploiting the non-linear and complex natures of the network with supervised or unsupervised learning.
Convolutional neural networks (CNNs) as a special type of deep learning networks enable enhanced latent feature extraction by their very deep hierarchical structure. CNN has demonstrated its superiority in multiple tasks, including detection~\citep{Girshick2014}, classification~\citep{Szegedy2015}, segmentation~\citep{Long2015} and super-resolution~\citep{Dong2016}.
Wang et al.~\citep{Wang2016} pioneered the use of CNNs by extracting latent correlations between undersampled and fully sampled \textit{k}-space data for MRI reconstruction.
Yang et al.~\citep{Yang2016} further improved the network structures by re-applying the alternating direction method of multipliers (ADMM), which was originally used for CS-based MR reconstruction methods.
A cascaded structure was developed by Schlemper et al.~\citep{Schlemper2018} for the more targeted reconstruction of dynamic sequences in cardiac MRI.
To enable further latent mapping in the reconstruction model, Zhu et al.~\citep{Zhu2018} developed a novel framework to provide more dense mapping through domains via its proposed automated transform by manifold approximation.
For a long time, CNNs have had a dominant position in the field of computer vision (CV) since convolutions are effective feature extractors. Most deep learning-based MRI reconstruction methods are based on CNNs, including the GAN-based models. As Figure~\ref{fig:FIG_Overview}(A) shows, the feature extraction of CNNs is based on convolution, which is locally sensitive and lacks long-range dependency. The receptive field of CNNs is limited by the convolutional kernel size and the network depth: an oversized convolutional kernel incurs a huge computational cost, while an overly deep network can suffer from vanishing gradients.
A novel structure, the transformer, taking advantage of even deeper mapping, a sequence-to-sequence model design~\citep{Sutskever2014} and an adaptive self-attention setting~\citep{Vaswani2017,Parikh2016,Cheng2016,Matsoukas2021} with expanding receptive fields (Figure~\ref{fig:FIG_Overview}(A))~\citep{Parmar2018,Salimans2017}, has been proposed recently and was initially popularised in natural language processing (NLP)~\citep{Qiu2020}.
Then it has been applied to object detection~\citep{Carion2020} and image recognition~\citep{Dosovitskiy2020} and extended to super-resolution~\citep{Parmar2018} for general image analysis.
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_INTRO/FIG_Overview.pdf}
\caption{
Overview of the proposed SwinMR.
(A) and (B) are the schematic diagrams of the receptive field for 2D convolution (Conv2D), multi-head self-attention (MSA) and shifted windows based multi-head self-attention (W-MSA/SW-MSA). Conv2D is locally sensitive and lacks long-range dependency. Compared with Conv2D, MSA and (S)W-MSA are globally sensitive and have larger receptive fields. MSA is performed in the whole image space, while (S)W-MSA is performed in shifted windows.
(Red box: the receptive field of the operation; green box: the pixel; blue box: the patch in self-attention.)
(C) is the overview of SwinMR.
(D) shows the results of the proposed SwinMR compared with GT, ZF and another method DAGAN~\citep{Yang2018}.
(IM: the input module; FEM: the feature extraction module; OM: the output module. ZF: undersampled zero-filled MR images; Recon: reconstructed MR images; Multi-Channel GT: multi-channel ground truth MR images; GT: single-channel ground truth MR images; MASK: the undersampling mask; SM: sensitivity maps.)
}
\label{fig:FIG_Overview}
\end{figure}
With its superior ability in image reconstruction and synthesis, as demonstrated on natural images, the transformer has been applied to MRI in many different ways.
For synthesis, it has greatly enhanced cross-modality image synthesis (PET-to-MR by directional encoder~\citep{Shin2020}, T1-to-T2 by a pyramid structure~\citep{Zhang2021}, and MR-to-CT and T1/T2/PD by a novel aggregated residual transformer block~\citep{Dalmaz2021}).
Variants of the transformer also enabled improved performance in reconstruction and super-resolution tasks.
It was first applied on the reconstruction of brain MR imaging~\citep{Korkmaz2021_1}.
Korkmaz et al.~\citep{Korkmaz2021_2} developed an unsupervised adversarial method to alleviate the scarcity of training samples.
To further improve the quality of imaging, Feng et al.~\citep{Feng2021_1} enabled an end-to-end joint reconstruction and super-resolution.
Feng et al.~\citep{Feng2021_2} further advanced the model for these dual tasks by incorporating the model with task-specific novel cross-attention modules.
However, the shift from NLP tasks to CV tasks leads to challenges:
(1) Difference in scale: visual elements (e.g., pixels) in CV tasks tend to vary substantially in scale unlike language elements (e.g., word tokens) in NLP tasks.
(2) Higher resolution: the resolution of pixels in images (or frames) tends to be much higher than that of words in sentences~\citep{Liu2021}.
Therefore, limiting the scale of self-attention to a local window is a trade-off made to reduce the computational complexity, as Figure~\ref{fig:FIG_Overview}(A) and (B) show. The \textbf{S}hifted \textbf{win}dows (Swin) transformer~\citep{Liu2021} replaced the traditional multi-head self-attention (MSA) by the shifted windows based multi-head self-attention (W-MSA/SW-MSA).
Based on the Swin transformer module, Liang et al.~\citep{Liang2021} proposed SwinIR for image restoration tasks.
In this work, we introduced the SwinMR, a novel parallel imaging coupled Swin transformer based model for fast CS MRI reconstruction, as Figure~\ref{fig:FIG_Overview}(C) shows. The main contributions can be summarised as follows:
\begin{itemize}
\item[$\bullet$]
A novel parallel imaging coupled Swin transformer-based model for fast MRI reconstruction was proposed, as Figure~\ref{fig:FIG_Overview}(C) shows.
\item[$\bullet$]
A novel multi-channel loss using the sensitivity maps was proposed, which was shown to preserve more texture and detail in the reconstruction results.
\item[$\bullet$]
A series of ablation studies and comparison experiments were conducted. Experimental studies using different undersampling trajectories with various noises were performed to validate the robustness of our proposed SwinMR.
\item[$\bullet$]
A downstream task experiment using a segmentation network was conducted. A pre-trained segmentation network was applied to test the segmentation score for reconstructed images.
\end{itemize}
\section{Method}
\subsection{Classic Model-Based CS MRI Reconstruction}
To recover better spatial information with fewer artefacts from the undersampled \textit{k}-space data, traditional CS-MRI methods usually consider solving the following optimisation problem:
\begin{align}\label{formula:cs_mri_2}
\min _{x} \frac{1}{2} \mid\mid \mathcal{F}_{u} x-y \mid\mid_{2}^{2}+ \lambda R(\Phi x),
\end{align}
\noindent where $\Phi$ is the sparsifying transform, e.g., discrete wavelet transform~\citep{qu2012undersampled}, gradient operator~\citep{Block2007, wu2017solving} and dictionary-based transform~\citep{ravishankar2010mr}. $R(\cdot)$ is the regularisation function imposed on the sparsity, e.g, $l_1$-norm and $l_0$-norm, and $\lambda$ is the weight parameter to balance the two terms.
The solution of the above problem can be derived by non-linear optimisation solvers such as gradient-based algorithms~\citep{lustig2007sparse} and variable splitting methods~\citep{wang2014compressed, yang2010fast}. Depending on the manually designed regularisation, some models may trade a long reconstruction time for better reconstruction quality. Additionally, the manually selected sparsifying transform $\Phi$ can also introduce undesirable artefacts: e.g., TV-based regularisation, which is well known for removing noise while preserving sharp edges, can introduce staircase artefacts~\citep{Beladgham2008}, and the tight wavelet frame transform increases the reconstruction efficiency but may lead to blocky artefacts~\citep{cai2020data}.
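As a deliberately simplified example of the gradient-based solvers mentioned above, the following sketch runs proximal-gradient (ISTA-type) iterations for the optimisation problem with an $l_1$ penalty and, for brevity, the identity as sparsifying transform $\Phi$; the step size and regularisation weight are illustrative assumptions.
\begin{verbatim}
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t*||.||_1 for complex arrays
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def ista_cs_mri(y, mask, lam=0.01, step=1.0, n_iter=100):
    # min_x 0.5*||F_u x - y||_2^2 + lam*||x||_1  (Phi = identity for brevity)
    x = np.fft.ifft2(y, norm="ortho")                          # zero-filled initialisation
    for _ in range(n_iter):
        residual = mask * np.fft.fft2(x, norm="ortho") - y     # F_u x - y
        grad = np.fft.ifft2(mask * residual, norm="ortho")     # F_u^H (F_u x - y)
        x = soft_threshold(x - step * grad, step * lam)        # gradient step + prox
    return x
\end{verbatim}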
\subsection{CNN-based Fast MRI Reconstruction}
To relieve the artefacts brought by hand-crafted regularisation and the long reconstruction time of classic models, deep neural networks, well known as powerful feature extractors, were first applied to CS-MRI in~\citep{Wang2016}. In that work, a deep convolutional network was applied to directly learn the mapping from down-sampled reconstruction images to fully sampled reconstruction images.
Following that, several networks have been proposed to further improve the reconstruction quality.
Some works attempted to bridge the classic models with deep CNNs by mimicking the iterative algorithm in their network architectures.
Deep ADMM Net~\citep{Yang2016} was first trained by unfolding the ADMM optimisation algorithm into network blocks that derive the solution of the general model in Equation~(\ref{formula:cs_mri_2}).
In~\citep{Schlemper2018}, the deep CNN reconstruction from the lower-quality image was adopted as prior information in a classic CS model as follows:
\begin{align}\label{formula:cnn_mri}
\min_{x} \frac{1}{2}
\mid\mid y - \mathcal{F}_u x\mid\mid^2_2
+ \lambda \mid\mid x - f_{\mathrm{CNN}}(x_u\mid\theta) \mid\mid^2_2,
\end{align}
\noindent where the solution of the above problem is incorporated into the network architecture iteratively to improve the reconstruction result of $f_{\mathrm{CNN}}$, which takes the zero-filled reconstruction $x_u$ as the input.
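One concrete way in which such a solution can be embedded in a network is a \textit{k}-space data-consistency step. The sketch below follows from minimising the objective above frequency by frequency (orthonormal FFT assumed); it is an illustration rather than the exact layer of~\citep{Schlemper2018}. For $\lambda\to0$ it reduces to hard replacement of the sampled \textit{k}-space locations by the measurements.
\begin{verbatim}
import numpy as np

def data_consistency(x_cnn, y, mask, lam):
    # Frequency-wise minimiser of 0.5*|y_k - xhat_k|^2 + lam*|xhat_k - (F x_cnn)_k|^2
    # at sampled locations; unsampled locations keep the CNN prediction.
    z = np.fft.fft2(x_cnn, norm="ortho")                           # k-space of CNN output
    k_dc = np.where(mask > 0, (y + 2.0 * lam * z) / (1.0 + 2.0 * lam), z)
    return np.fft.ifft2(k_dc, norm="ortho")
\end{verbatim}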
On top of the CNNs, conditional generative adversarial networks (cGANs) exploited the advantages of deep learning further and proved to enhance the quality of the MR image reconstruction to a large extent~\citep{Yang2021, Lv2021_1}.
Such a competitive network introduced a two-player generator-discriminator training mechanism to competitively improve reconstruction performance by alternately optimising $\theta_{G}$ and $\theta_{D}$ of the generator $G$ and the discriminator $D$, in a general form as:
\begin{align}\label{formula:gan1}
\min_{\theta_{G}} \max_{\theta_{D}}
{\mathbb{E}}_{x \sim p_{\text{gt}}}
\left[\log D_{\theta_{D}}(x)\right] +
{\mathbb{E}}_{{x_u \sim p_{\text{u}}}}\left[\log
\left(1-D_{\theta_{D}}\left(G_{\theta_{G}}(x_u)\right)\right)\right],
\end{align}
\noindent where $G_{\theta_{G}}$ and $D_{\theta_{D}}$ denote the generator and the discriminator with parameters $\theta_{G}$ and $\theta_{D}$, respectively. $x$ and $x_u$ denote the ground truth MR images and undersampled zero-filled MR images with aliasing artefacts. After the training, the generator can yield the corresponding reconstruction from $x_u$ to reconstructed images $G_{\theta_{G}}(x_u)$.
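A compact sketch of this alternating optimisation in PyTorch is given below; it uses the non-saturating generator loss that is common in practice, a minor departure from the min-max form of the equation above, and all module names are placeholders.
\begin{verbatim}
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, x, x_u):
    # one alternating update: D learns to separate x from G(x_u), then G tries to fool D
    # (D is assumed to output raw logits)
    opt_D.zero_grad()
    fake = G(x_u).detach()
    real_logits, fake_logits = D(x), D(fake)
    loss_D = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) \
           + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    loss_D.backward()
    opt_D.step()

    opt_G.zero_grad()
    pred = D(G(x_u))
    loss_G = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
\end{verbatim}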
Variants of generators and discriminators have been developed to cope with multiple flaws in the original architecture of GAN -- for improved generator~\citep{Shaul2020}, improved discriminator~\citep{Huang2021}, loss functions~\citep{Quan2018}, regularisation~\citep{Ma2021}, training stability by Wasserstein GAN~\citep{Arjovsky2017,Guo2020} and attention mechanism~\citep{Jiang2021}.
DAGAN~\citep{Yang2018}, by substituting the residual networks with U-Net~\citep{Ronneberger2015}, combined the advantage of U-Net in latent information extraction with competitive training and pre-trained VGG based transfer learning.
Furthermore, PIDDGAN~\citep{Huang2021} incorporated edge information into the model and further enhanced the edge information in the reconstruction, which is clinically important when interpreting MR images.
The utilisation of transfer learning improved the generalisability of a network trained with a small dataset~\citep{Lv2021_3}.
CNNs-based MR reconstruction methods showed their superiority both on reconstruction quality and efficiency compared to classical MR reconstruction methods.
However, the performance of those CNN-based methods was limited by the local sensitivity of the convolutional operation. Motivated by this limitation, we proposed a Swin transformer based MR reconstruction method SwinMR.
\subsection{SwinMR: Swin Transformer for MRI Reconstruction}
\subsubsection{Overall Architecture}
The overall architecture is shown in Figure~\ref{fig:FIG_Overview}(C) and the data flow of SwinMR is shown in Figure~\ref{fig:FIG_DataFlow}.
Root sum square (RSS) is applied to combine the multi-channel ground truth MR images $x^q$ to single-channel ground truth MR images $x$ ($q$ denotes the $q^{\text{th}}$ coil). Sensitivity maps $\mathcal{S}^q$ are estimated by ESPIRiT~\citep{Uecker2014} from multi-channel ground truth MR images $x^q$.
Undersampling and noise interruption are performed in \textit{k}-space using fast Fourier transform (FFT) and inverse fast Fourier transform (iFFT) (Gaussian noise is added in the noise experiments), which converts single-channel ground truth MR images $x$ to undersampled zero-filled MR images $x_u$.
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_INTRO/FIG_DataFlow.pdf}
\caption{The dataflow of proposed SwinMR.
Root sum square (RSS) is applied to combine the multi-channel ground truth MR images (Muti-Channel GT) to single-channel ground truth MR images (GT).
Undersampling and noise interruption are performed in \textit{k}-space using fast Fourier transform (FFT) and inverse fast Fourier transform (iFFT) to convert the GT to undersampled zero-filled MR images (ZF) as the input of our proposed SwinMR.
Multi-channel reconstructed MR images (Muti-Channel Recon) are calculated by the pixel-wise multiplication of single-channel reconstructed MR images (Recon), which are the output of the proposed SwinMR, and sensitivity maps, which are estimated by ESPIRiT from the Multi-Channel GT.
}
\label{fig:FIG_DataFlow}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_INTRO/FIG_Structure.pdf}
\caption{The structure of proposed SwinMR.
(A) shows the overall structure of SwinMR. In SwinMR architecture, two Conv2Ds are placed at the beginning and the ending. A cascade of RSTBs and a Conv2D with a residual connection are placed between the two Conv2Ds.
(B) shows the structure of RSTB. The RSTB consists of a patch embedding operator, $N$ cascaded STLs, a patch unembedding operator, a Conv2D, and a residual connection between the input and output of RSTB. An STL consists of an LN, an (S)W-MSA, an LN and an MLP respectively, with two residual connections.
(RSTB: the residual Swin transformer block; STL: the Swin transformer layer; Conv2D: the 2D convolutional layer; LN: the layer normalisation layer; (S)W-MSA: the (shifted) windows multi-head self-attention; MLP: the multi-layer perceptron.)
}
\label{fig:FIG_Structure}
\end{figure}
The proposed SwinMR model can produce reconstructed MR images $\hat x_u$ from undersampled zero-filled MR images $x_u$, where the residual connection is applied to accelerate the convergence and stabilise the training process.
It can be expressed by
\begin{align}\label{formula:5}
\hat x_u = \text{SwinMR}(x_u \mid \theta) + x_u,
\end{align}
\noindent where the SwinMR network is parameterised by $\theta$.
Figure~\ref{fig:FIG_Structure}(A) shows the structure of SwinMR, which is composed of an input module (IM), a feature extraction module (FEM) and an output module (OM). The IM and OM are at the beginning and the end of the whole structure, and the FEM is placed between the IM and OM with a residual connection.
The structure can be expressed by
\begin{align}\label{formula:6}
& F_{\text{IM}} = \text{H}_{\text{IM}}(x_u),\\
& F_{\text{FEM}} = \text{H}_{\text{FEM}}(F_{\text{IM}}),\\
& F_{\text{OM}} = \text{H}_{\text{OM}}(F_{\text{FEM}}+F_{\text{IM}}),
\end{align}
\noindent where the $\text{H}_{\text{IM}}(\cdot)$, $\text{H}_{\text{FEM}}(\cdot)$ and $\text{H}_{\text{OM}}(\cdot)$ denote the IM, FEM and OM respectively. $F_{\text{IM}}$, $F_{\text{FEM}}$ and $F_{\text{OM}}$ denote the output of the IM, FEM and OM respectively.
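A schematic PyTorch sketch of this IM/FEM/OM layout with its residual connections is given below; the RSTBs are replaced by plain convolutional placeholders purely to keep the sketch short, so this is a structural illustration rather than the actual SwinMR implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class SwinMRSketch(nn.Module):
    # IM (Conv2D) -> FEM (cascade of blocks + Conv2D, residual) -> OM (Conv2D),
    # plus the global residual connection to the zero-filled input x_u.
    def __init__(self, channels=180, n_blocks=6):
        super().__init__()
        self.im = nn.Conv2d(1, channels, 3, padding=1)                 # H_IM
        self.blocks = nn.ModuleList(                                   # stand-ins for RSTBs
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_blocks)])
        self.fem_conv = nn.Conv2d(channels, channels, 3, padding=1)    # Conv2D closing the FEM
        self.om = nn.Conv2d(channels, 1, 3, padding=1)                 # H_OM

    def forward(self, x_u):
        f_im = self.im(x_u)
        f = f_im
        for blk in self.blocks:
            f = blk(f)
        f_fem = self.fem_conv(f)
        return self.om(f_fem + f_im) + x_u      # residual to the input
\end{verbatim}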
\subsubsection{Input Module and Output Module}
The IM is used for early visual processing and mapping from the input image space to higher dimensional feature space for the following FEM.
The IM applies a 2D convolutional layer (Conv2D) mapping $x_u \in \mathbb{R}^{H \times W \times 1}$ to $F_{\text{IM}} \in \mathbb{R}^{H \times W \times C}$.
In contrast, the OM is used to map the higher dimensional feature space to the output image space by a Conv2D mapping $F_{\text{FEM}} \in \mathbb{R}^{H \times W \times C}$ to $F_{\text{OM}} \in \mathbb{R}^{H \times W \times 1}$.
In the training stage, the input image is randomly cropped to a fixed size $H \times W$ ($H=W$). In the inference stage, $H$, $W$ denote the height and width of the input image. Here we define $H$ (or $W$) as the patch number and $C$ as the channel number for the self-attention processing.
\subsubsection{Feature Extraction Module}
The FEM is composed of a cascade of residual Swin transformer blocks (RSTBs) and a Conv2D at the end. It can be expressed as
\begin{align}\label{formula:7}
& F_0 = F_{\text{IM}}, \\
& F_i = {\text{H}}_{\text{RSTB}_i} (F_{i-1}), \quad i = 1,2,...,P, \\
& F_{\text{FEM}} = {\text{H}}_{\text{CONV}}(F_P),
\end{align}
\noindent where $F_{\text{IM}}$ and $F_{\text{FEM}}$ are the input and output of the FEM. ${\text{H}}_{\text{RSTB}_i}(\cdot)$ denotes the $i^{\text{th}}$ RSTB ($P$ RSTBs in total) in the FEM. ${\text{H}}_{\text{CONV}}(\cdot)$ denotes the Conv2D after a series of RSTBs.
Figure~\ref{fig:FIG_Structure}(B) shows the structure of the RSTB. An RSTB consists of $Q$ Swin transformer layers (STLs) and a Conv2D, and a residual connection is linked between the input and output of the RSTB. It can be expressed as
\begin{align}\label{formula:8}
& F_{i,0} = {\text{H}}_{\text{Emb}_{i}}(F_{i-1}), \\
& F_{i,j} = {\text{H}}_{\text{STL}_{i,j}} (F_{i,j-1}), \quad j = 1,2,...,Q, \\
& F_{i} = {\text{H}}_{\text{CONV}_{i}}({\text{H}}_{\text{Unemb}_{i}}(F_{i,Q}) + F_{i-1}),
\end{align}
\noindent where ${\text{H}}_{\text{Emb}_{i}}(\cdot)$ is the patch embedding from $F_{i-1} \in \mathbb{R}^{H \times W \times C}$ to $F_{i,0} \in \mathbb{R}^{HW \times C}$, and ${\text{H}}_{\text{Unemb}_{i}}(\cdot)$ is the patch unembedding from $F_{i,Q} \in \mathbb{R}^{HW \times C}$ to $\mathbb{R}^{H \times W \times C}$.
${\text{H}}_{\text{STL}_{i,j}}(\cdot)$ and ${\text{H}}_{\text{CONV}_{i}}(\cdot)$ denote the $j^{\text{th}}$ STL and the Conv2D in the $i^{\text{th}}$ RSTB, respectively.
\subsubsection{Swin Transformer Layer}
The whole process of the STL can be expressed as
\begin{align}\label{formula:9}
&X^{\prime}={\text{H}}_{\text{(S)W-MSA}}({\text{H}}_{\text{LN}}(X))+X,\\
&X^{\prime\prime}={\text{H}}_{\text{MLP}}({\text{H}}_{\text{LN}}(X^{\prime}))+X^{\prime},
\end{align}
\noindent where $X$ and $X^{\prime\prime}$ are the input and output of the STL. ${\text{H}}_{\text{MLP}}(\cdot)$ and ${\text{H}}_{\text{LN}}(\cdot)$ denote the multilayer perceptron and the layer normalisation layer. Windows multi-head self-attention (W-MSA) and shifted windows multi-head self-attention (SW-MSA) ${\text{H}}_{\text{(S)W-MSA}}(\cdot)$ are alternately applied in the STL.
Spatial constraints are added in the Swin transformer layer compared to the original transformers. Figure~\ref{fig:FIG_Structure}(B) shows the W-MSA and the SW-MSA compared with the original MSA. Original MSA performs self-attention in the whole image space. Although the information of the entire picture is involved in each attention calculation, it aggravates computational costs and redundant connections. The computational complexity for the original MSA is as follows:
\begin{align}\label{formula:10}
\Omega({\text{H}}_{\text{MSA}})=4HWC^{2}+2(HW)^{2}C.
\end{align}
In Swin transformer layers, an $\mathbb{R}^{H \times W \times C}$ feature map is divided into $\frac{HW}{M^2}$ non-overlapping windows of size $M^2 \times C$. (S)W-MSA is calculated in each window, instead of over the whole image space. The computational complexity for (S)W-MSA is as follows:
\begin{align}\label{formula:11}
\Omega({\text{H}}_{\text{(S)W-MSA}})=4HWC^{2}+2M^{2}HWC,
\end{align}
\noindent which is significantly reduced compared to the original MSA. However, if the separation of windows is fixed between each STL, the network will lose the link between different windows. Normal windows and shifted windows are alternatingly utilised in each STL to enable information communication from different windows.
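The two complexity expressions above can be compared directly; using, for illustration, the patch, channel and window sizes quoted later in the implementation details (96, 180 and 8), a minimal sketch is:
\begin{verbatim}
def msa_cost(H, W, C):
    # full MSA over an H x W x C feature map: 4HWC^2 + 2(HW)^2 C
    return 4 * H * W * C**2 + 2 * (H * W) ** 2 * C

def wmsa_cost(H, W, C, M):
    # window-based (S)W-MSA with M x M windows: 4HWC^2 + 2 M^2 HW C
    return 4 * H * W * C**2 + 2 * M**2 * H * W * C

print(msa_cost(96, 96, 180) / wmsa_cost(96, 96, 180, 8))
# the quadratic (HW)^2 attention term is replaced by a term linear in HW
\end{verbatim}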
(S)W-MSA for each non-overlapping window $X$ can be expressed by
\begin{align}\label{formula:12}
Q=X P_{Q}, \quad K=X P_{K}, \quad V=X P_{V},
\end{align}
\noindent where the $P_{Q}$, $P_{K}$, $P_{V}$ are shared projection matrices over all the windows. The query $Q$, key $K$, value $V$ and learnable relative position encoding $B$ ($\mathbb{R}^{M^2 \times d}$) are used in the calculation of the self-attention mechanism in a local window, which can be expressed by
\begin{align}\label{formula:13}
\operatorname{Attention}(Q, K, V)=\operatorname{SoftMax}\left(Q K^{T} / \sqrt{d}+B\right) V.
\end{align}
Such self-attention calculations are performed $h$ times and concatenated for (S)W-MSA.
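A minimal PyTorch sketch of this per-window computation is given below; the cyclic shift and the attention masking that accompany SW-MSA are omitted, and the tensor shapes are assumptions of the sketch rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def window_attention(x_windows, P_q, P_k, P_v, B, n_heads):
    # x_windows: (num_windows, M*M, C) tokens per window
    # P_q, P_k, P_v: (C, C) shared projections; B: (n_heads, M*M, M*M) relative position bias
    nw, n_tok, C = x_windows.shape
    d = C // n_heads
    q = (x_windows @ P_q).view(nw, n_tok, n_heads, d).transpose(1, 2)
    k = (x_windows @ P_k).view(nw, n_tok, n_heads, d).transpose(1, 2)
    v = (x_windows @ P_v).view(nw, n_tok, n_heads, d).transpose(1, 2)
    attn = q @ k.transpose(-2, -1) / d ** 0.5 + B        # QK^T / sqrt(d) + B
    out = F.softmax(attn, dim=-1) @ v                    # SoftMax(...) V
    return out.transpose(1, 2).reshape(nw, n_tok, C)     # concatenate the h heads
\end{verbatim}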
\subsubsection{Loss Function}
A novel multi-channel loss using the sensitivity maps was introduced for better reconstruction quality and more textures and details.
Charbonnier loss~\citep{Lai2019} was utilised for the pixel-wise loss and the frequency loss since it is more robust and able to handle the outliers better.
The total loss $\mathcal{L}_{\mathrm{TOTAL}}(\theta)$ consists of the pixel-wise Charbonnier loss $\mathcal{L}_{\mathrm{pixel}}(\theta)$, the frequency Charbonnier loss $\mathcal{L}_{\mathrm{freq}}(\theta)$ and perceptual loss $\mathcal{L}_{\mathrm{VGG}}(\theta)$. The pixel-wise Charbonnier loss can be expressed by
\begin{align}\label{formula:14}
\mathop{\text{min}}\limits_{\theta}
\mathcal{L}_{\mathrm{pixel}}(\theta) =
\frac{1}{S} \sum_{q=1}^{S}
\sqrt{\mid\mid x^q - \mathcal{S}^q \hat x_u \mid\mid^2_2 + \epsilon^2},
\end{align}
\noindent where $\epsilon$ is a constant which is set to $10^{-9}$ empirically and $\mathcal{S}^q$ is the sensitivity map of the $q^{\text{th}}$ coil ($S$ coils in total). The frequency Charbonnier loss can be expressed by
\begin{align}\label{formula:15}
\mathop{\text{min}}\limits_{\theta}
\mathcal{L}_{\mathrm{freq}}(\theta) =
\frac{1}{S} \sum_{q=1}^{S}
\sqrt{\mid\mid y^q - \mathcal{F}\mathcal{S}^q \hat x_u \mid\mid^2_2 + \epsilon^2}.
\end{align}
The perceptual VGG loss can be expressed by
\begin{align}\label{formula:16}
\mathop{\text{min}}\limits_{\theta}
\mathcal{L}_{\mathrm{VGG}}(\theta) =
\mid\mid f_{\mathrm{VGG}}(x) - f_{\mathrm{VGG}}(\hat x_u) \mid\mid_1,
\end{align}
\noindent where $f_{\mathrm{VGG}}(\cdot)$ denotes the VGG network, and $\mid\mid \cdot \mid\mid_1$ denotes the $l_1$ norm. The utilisation of $\mathcal{L}_{\mathrm{VGG}}$ is able to optimise the perceptual quality of reconstructed results.
The total loss can be expressed by
\begin{align}\label{formula:17}
\mathcal{L}_{\mathrm{TOTAL}}(\theta)
= \alpha \mathcal{L}_{\mathrm{pixel}}(\theta)
+ \beta \mathcal{L}_{\mathrm{freq}}(\theta)
+ \gamma \mathcal{L}_{\mathrm{VGG}}(\theta),
\end{align}
\noindent where $\alpha$, $\beta$ and $\gamma$ are coefficients controlling the balance of each term in the loss function.
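A sketch of this multi-channel loss in PyTorch is given below; \texttt{vgg\_features} stands for a pretrained feature extractor and is an assumption of the sketch, and the tensor shapes (coil dimension first) are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def charbonnier(a, b, eps=1e-9):
    # sqrt(||a - b||_2^2 + eps^2)
    return torch.sqrt(torch.sum(torch.abs(a - b) ** 2) + eps ** 2)

def swinmr_loss(x_hat, x_gt, coil_gt, coil_kspace, sens_maps, vgg_features,
                alpha=15.0, beta=0.1, gamma=0.0025):
    # multi-channel pixel / frequency Charbonnier terms + VGG perceptual term
    S = sens_maps.shape[0]
    coil_recon = sens_maps * x_hat                        # S^q * x_hat for every coil
    l_pixel = sum(charbonnier(coil_gt[q], coil_recon[q]) for q in range(S)) / S
    recon_k = torch.fft.fft2(coil_recon, norm="ortho")    # F S^q x_hat
    l_freq = sum(charbonnier(coil_kspace[q], recon_k[q]) for q in range(S)) / S
    l_vgg = F.l1_loss(vgg_features(x_gt), vgg_features(x_hat), reduction="sum")
    return alpha * l_pixel + beta * l_freq + gamma * l_vgg
\end{verbatim}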
\section{Experiments and Results}
\subsection{Datasets}
In this work, the Calgary Campinas multi-channel (CC) dataset~\citep{Souza2018} and the Multi-modal Brain Tumour Segmentation Challenge 2017 (BraTS17)~\citep{BraTS17_1,BraTS17_2,BraTS17_3} dataset were used for the experiment sections.
For the CC dataset, 15360 slices of 12-channel T1-weighted brain 2D MR images were randomly divided into training, validation and testing sets (7680, 3072, and 4608 slices, respectively), according to the ratio of 5:2:3.
For the BraTS17 dataset, we used the brain data with reference segmentation results (280 3D brains in the BraTS17 official training dataset), including both higher- and lower-grade glioma. The 280 3D brain data were divided into training, validation and testing sets (235, 20, and 30 3D brains, respectively), and cropped to $152 \times 192 \times 144$ volumes (slices, height and width, respectively).
\subsection{Implementation Detail}
The proposed SwinMR was implemented using PyTorch, trained on two NVIDIA RTX 3090 GPUs with 24GB GPU memory, and tested on an NVIDIA RTX 3090 GPU or an Intel Core i9-10980XE CPU.
We set the RSTB number, the STL number, the window size number and the attention head number to 6, 6, 8 and 6 respectively, which are the default setting in the original SwinIR~\citep{Liang2021}. The patch number and channel number were empirically set to 96 and 180, according to our ablation studies.
For the parameter in the loss function, $\alpha$, $\beta$, $\gamma$ were set to 15, 0.1 and 0.0025 to balance each term, according to our ablation studies.
We used SwinMR (PI) to denote the proposed model trained with multi-channel data and sensitivity maps, and SwinMR (nPI) to indicate the proposed model trained with single-channel data without sensitivity maps.
\subsection{Evaluation Methods}
Structural similarity index (SSIM), Peak signal-to-noise ratio (PSNR) and Fr\'echet inception distance (FID)~\citep{Heusel2017} were utilised for evaluation.
SSIM quantifies the structural similarity between two images based on luminance, contrast, and structures.
PSNR is the ratio between the maximum signal power and the noise power, which measures the fidelity of the representation. Both metrics are based on simple and shallow functions and on direct comparisons between images, which do not necessarily reflect the visual quality perceived by human observers~\citep{Zhang2018}.
FID is calculated by computing the Fr\'echet distance between two multivariate Gaussians, which measures the similarity between two sets of images. FID correlates well with visual quality for human observers, and a lower FID indicates more perceptual results.
Both Intersection over Union (IoU) and Dice scores were applied to measure the segmentation quality in the brain tumour segmentation experiment.
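For reference, minimal implementations of PSNR and the Dice score as described above are sketched below (SSIM and FID rely on standard library implementations and are omitted); these are illustrative definitions rather than the exact evaluation code.
\begin{verbatim}
import numpy as np

def psnr(gt, recon):
    # 10 log10( MAX^2 / MSE ), with MAX taken as the peak intensity of the ground truth
    mse = np.mean((gt - recon) ** 2)
    return 10.0 * np.log10(gt.max() ** 2 / mse)

def dice(pred, target):
    # 2 |A n B| / (|A| + |B|) for binary segmentation masks
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())
\end{verbatim}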
\subsection{Comparisons with Other Methods}
In this experimental study, we compared our proposed SwinMR (nPI and PI) with other benchmarked MR reconstruction methods, including Deep ADMM Net~\citep{Yang2016}, DAGAN~\citep{Yang2018}, PIDDGAN~\citep{Huang2021}, as well as ground truth MR images (GT) and undersampled zero-filled MR images (ZF) using Gaussian 1D 30\% mask. Among them, PIDDGAN and SwinMR (PI) were parallel imaging-coupled, i.e., trained with multi-channel MR images. This experiment was conducted using the CC dataset.
The quantitative results of the comparison are shown in Table~\ref{tab:comparison}. Our proposed SwinMR (nPI) achieved the highest SSIM and PSNR, and SwinMR (PI) achieved the best FID score. The time in Table~\ref{tab:comparison} indicates the average time for one inference, averaged over ten runs on an Intel Core i9-10980XE CPU or an NVIDIA RTX 3090 GPU. The computational cost of SwinMR was higher than that of the other CNN-based models.
Figure~\ref{fig:FIG_EXP_IMAGE_Comparison} shows the reconstructed MR images, the edge information extracted by the Sobel operator and the absolute differences of standardised pixel intensities ($10 \times$) between reconstructed MR images and GT MR images, from top to bottom respectively. The proposed SwinMR shows superiority over the other methods in terms of overall reconstruction quality and edge information.
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_Comparison.pdf}
\caption{
Samples of the comparison experiment with ground truth images (GT), undersampled zero-filled images (ZF) and reconstructed images by other methods.
Row 1: GT, ZF and reconstructed images by different methods;
Row 2: Edge information extracted by Sobel operator;
Row 3: Gaussian 1D 30\% mask and the absolute differences between reconstructed (or ZF) images and GT images ($10 \times$).
}
\label{fig:FIG_EXP_IMAGE_Comparison}
\end{figure}
\begin{table}[H]
\centering
\caption{
Quantitative results of the comparison experiment with other methods using Gaussian 1D 30\% mask (mean (std)).
$^{\star}$: $p \textless 0.05$;
$^{\star\star}$: $p \textless 0.01$
(compared with SwinMR (PI) by paired t-Test).
$^{\dagger}$: $p \textless 0.05$;
$^{\dagger\dagger}$: $p \textless 0.01$
(compared with SwinMR (nPI) by paired t-Test).
PSNR: Peak signal-to-noise ratio;
SSIM: Structural similarity index;
FID: Fr\'echet inception distance;
Time: The average time for one inference in an Intel Core i9-10980XE CPU or an NVIDIA RTX 3090 GPU.\\
}
\scalebox{0.75}{
\begin{tabular}{cccccc}
\toprule
\multirow{2}[4]{*}{Methods} & \multirow{2}[4]{*}{PSNR} & \multirow{2}[4]{*}{SSIM} & \multirow{2}[4]{*}{FID} & \multicolumn{2}{c}{Time} \\
\cmidrule{5-6} & & & & CPU (s) & GPU (s) \\
\midrule
ZF & 27.81 (0.83)$^{\star\star\dagger\dagger}$ & 0.884 (0.012)$^{\star\star\dagger\dagger}$ & 156.39 & - & - \\
Deep ADMM Net & 29.24 (0.99)$^{\star\star\dagger\dagger}$ & 0.922 (0.012)$^{\star\star\dagger\dagger}$ & 54.56 & 0.459 (0.052) & - \\
DAGAN & 30.41 (0.83)$^{\star\star\dagger\dagger}$ & 0.924 (0.010)$^{\star\star\dagger\dagger}$ & 56.05 & 0.089 (0.003) & 0.003 (0.000) \\
PIDDGAN & 31.23 (0.93)$^{\star\star\dagger\dagger}$ & 0.936 (0.010)$^{\star\star\dagger\dagger}$ & 17.55 & 0.166 (0.007) & 0.006 (0.000) \\
SwinMR (nPI) & \textbf{32.83 (1.10)$^{\star\star}$} & \textbf{0.954 (0.009)$^{\star\star}$} & 27.67 & 19.341 (0.060) & 0.388 (0.001) \\
SwinMR (PI) & 31.88 (1.03)$^{\dagger\dagger}$ & 0.943 (0.010)$^{\dagger\dagger}$ & \textbf{13.17} & 19.779 (0.038) & 0.388 (0.001) \\
\bottomrule
\end{tabular}}%
\label{tab:comparison}%
\end{table}%
\subsection{Experiments on Masks}
This experimental study aimed to evaluate the performance of SwinMR using different undersampling trajectories.
Three 1D Cartesian undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%) and Gaussian 1D 50\% (G1D50\%), as well as two 2D non-Cartesian undersampling trajectories including radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment. This experiment compared the SSIM, PSNR and FID of SwinMR (PI), DAGAN and ZF, and was conducted using the CC dataset.
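For illustration, the sketch below shows one common way to construct a Gaussian 1D Cartesian mask with a given sampling fraction; the function name, the Gaussian width and the column-wise sampling scheme are assumptions and do not necessarily reproduce the masks used in our experiments.
\begin{verbatim}
# Sketch: Gaussian 1D Cartesian undersampling mask keeping a given fraction
# of phase-encoding lines (assumed construction).
import numpy as np

def gaussian_1d_mask(shape, keep_fraction=0.3, sigma_fraction=0.15, rng=None):
    """shape = (H, W); columns are sampled with a Gaussian density
    centred on the k-space origin."""
    rng = np.random.default_rng() if rng is None else rng
    H, W = shape
    k = np.arange(W) - W // 2
    prob = np.exp(-0.5 * (k / (sigma_fraction * W)) ** 2)
    prob /= prob.sum()
    n_keep = int(round(keep_fraction * W))
    cols = rng.choice(W, size=n_keep, replace=False, p=prob)
    mask = np.zeros(shape, dtype=np.float32)
    mask[:, cols] = 1.0
    return mask
\end{verbatim}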
The quantitative results of the experiment on masks are shown in Figure~\ref{fig:FIG_EXP_GRAPH_Mask} and Table~\ref{tab:mask_fid}.
The sample of reconstructed images, edge information and absolute differences of standardised pixel intensities ($10 \times$) between reconstructed images and GT images are shown in Figure~\ref{fig:FIG_EXP_IMAGE_Mask}, Figure~\ref{fig:FIG_EXP_EDGE_Mask} and Figure~\ref{fig:FIG_EXP_ERROR_Mask} respectively.
According to the results, the proposed SwinMR achieved a higher reconstruction quality than DAGAN across the different undersampling trajectories, especially when masks with a low sampling rate (10\%) were applied.
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Graph/FIG_EXP_GRAPH_Mask.pdf}
\caption{
Peak signal-to-noise ratio (PSNR) and Structural similarity index (SSIM) of the experiment on different masks.
Five undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment.
(Box range: interquartile range; $\times$:1\% and 99\% confidence interval; $-$: maximum and minimum; $\square$: mean; $\shortmid$: median.) The SwinMR (PI) outperforms the DAGAN using different undersampling masks with significantly higher PSNR, SSIM ($p < 0.05$ by paired t-Test).}
\label{fig:FIG_EXP_GRAPH_Mask}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_Mask.pdf}
\caption{
Samples of the experiment on different masks.
Five undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment.
Row 1: Undersampled zero-filled MR images (ZF) using different masks;
Row 2: Ground truth MR images (GT);
Row 3: Reconstructed MR images by DAGAN;
Row 4: Reconstructed MR images by SwinMR (PI);
Row 5: Undersampling masks.
The Peak signal-to-noise ratio (PSNR) and Structural similarity index (SSIM) of reconstructed and ZF images are shown in the top-left corner.
}
\label{fig:FIG_EXP_IMAGE_Mask}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_EDGE_Mask.pdf}
\caption{
Edge information of the experiment on different masks.
Five undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment.
Row 1: Edge information of undersampled zero-filled MR images (ZF) using different masks;
Row 2: Edge information of ground truth MR images (GT);
Row 3: Edge information of reconstructed MR images by DAGAN;
Row 4: Edge information of reconstructed MR images by SwinMR (PI).
The edge information was extracted by the Sobel operator.
}
\label{fig:FIG_EXP_EDGE_Mask}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_ERROR_Mask.pdf}
\caption{
Absolute differences of standardised pixel intensities ($10 \times$) of the experiment on different masks.
Five undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment.
Row 1: Absolute differences between undersampled zero-filled MR images (ZF) using different masks and ground truth MR images (GT);
Row 2: Absolute differences between reconstructed MR images by DAGAN and GT;
Row 3: Absolute differences between reconstructed MR images by SwinMR (PI) and GT.
}
\label{fig:FIG_EXP_ERROR_Mask}
\end{figure}
\begin{table}[H]
\centering
\caption{
Fr\'echet inception distance (FID) of the experiment on different masks.
Five undersampling masks including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment.\\
}
\setlength{\tabcolsep}{8mm}{
\begin{tabular}{cccc}
\toprule
Mask & SwinMR (PI) & DAGAN & ZF \\
\midrule
G1D10\% & \textbf{35.23} & 169.83 & 325.99 \\
G1D30\% & \textbf{13.17} & 56.04 & 156.38 \\
G1D50\% & \textbf{7.92} & 19.26 & 86.25 \\
R10\% & \textbf{43.86} & 132.58 & 319.45 \\
S10\% & \textbf{35.88} & 115.98 & 333.40 \\
\bottomrule
\end{tabular}}%
\label{tab:mask_fid}%
\end{table}%
\subsection{Experiments on Noise}
This experimental study aimed to evaluate the robustness of SwinMR under the influence of noise.
Noise in MRI arises in \textit{k}-space and can be assumed to follow a Gaussian distribution~\citep{Hansen2015}. In our experiments, different noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested after undersampling (Gaussian 1D 30\% mask) in \textit{k}-space. The noise level is defined as:
\begin{align}\label{formula:18}
\text{NL} = \frac{N^{\prime}}{S^{\prime}+N^{\prime}},
\end{align}
\noindent where $N^{\prime}$ and $S^{\prime}$ denote the power of noise and signal, respectively. This experiment compared the SSIM, PSNR and FID of SwinMR (PI), DAGAN and ZF, and was conducted using the CC dataset.
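A minimal sketch of this noise injection is given below; the definition of the signal power as the mean squared magnitude over sampled k-space locations and the equal split of the noise power between real and imaginary parts are assumptions.
\begin{verbatim}
# Sketch: adding complex Gaussian noise to undersampled k-space at a target
# noise level NL = N' / (S' + N').
import numpy as np

def add_kspace_noise(kspace, mask, noise_level, rng=None):
    """kspace: complex 2D array; mask: binary sampling mask; 0 < noise_level < 1."""
    rng = np.random.default_rng() if rng is None else rng
    sampled = kspace[mask > 0]
    signal_power = np.mean(np.abs(sampled) ** 2)          # S' (assumption)
    noise_power = signal_power * noise_level / (1.0 - noise_level)
    sigma = np.sqrt(noise_power / 2.0)        # per real/imaginary component
    noise = (rng.normal(0.0, sigma, kspace.shape)
             + 1j * rng.normal(0.0, sigma, kspace.shape))
    return kspace + noise * (mask > 0)        # noise added after undersampling
\end{verbatim}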
The quantitative results of the noise experiments are shown in Figure~\ref{fig:FIG_EXP_GRAPH_Noise} and Table~\ref{tab:noise_fid}.
The sample of reconstructed images, edge information and absolute differences of standardised pixel intensities ($10 \times$) between reconstructed images and GT images are shown in Figure~\ref{fig:FIG_EXP_IMAGE_Noise}, Figure~\ref{fig:FIG_EXP_EDGE_Noise} and Figure~\ref{fig:FIG_EXP_ERROR_Noise}, respectively.
According to the results, SwinMR maintains a better reconstruction quality than DAGAN under the influence of noise.
The quality improvement becomes more pronounced at higher noise levels.
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Graph/FIG_EXP_GRAPH_Noise.pdf}
\caption{
Peak signal-to-noise ratio (PSNR) and Structural similarity index (SSIM) of the experiment on different noise levels using the Gaussian 1D 30\% mask.
Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested in this experiment.
(Box range: interquartile range; $\times$:1\% and 99\% confidence interval; $-$: maximum and minimum; $\square$: mean; $\shortmid$: median.)
The SwinMR (PI) outperforms the DAGAN under different noise levels with significantly higher PSNR, SSIM ($p < 0.05$ by paired t-Test).
}
\label{fig:FIG_EXP_GRAPH_Noise}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_Noise.pdf}
\caption{
Samples of the experiment on different noise levels using the Gaussian 1D 30\% mask.
Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested in this experiment.
Row 1: Undersampled zero-filled MR images (ZF) with different noise levels;
Row 2: Ground truth MR images (GT);
Row 3: Reconstructed MR images by DAGAN;
Row 4: Reconstructed MR images by SwinMR (PI).
The Peak signal-to-noise ratio (PSNR) and Structural similarity index (SSIM) of reconstructed and ZF images are shown in the top-left corner.
}
\label{fig:FIG_EXP_IMAGE_Noise}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_EDGE_Noise.pdf}
\caption{
Edge information of the experiment on different noise levels using the Gaussian 1D 30\% mask.
Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested in this experiment.
Row 1: Edge information of undersampled zero-filled MR images (ZF) with different noise levels;
Row 2: Edge information of ground truth images (GT);
Row 3: Edge information of reconstructed MR images by DAGAN;
Row 4: Edge information of reconstructed MR images by SwinMR (PI).
The edge information was extracted by the Sobel operator.
}
\label{fig:FIG_EXP_EDGE_Noise}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_ERROR_Noise.pdf}
\caption{
Absolute differences of standardised pixel intensities ($10 \times$) of the experiment on different noise levels using the Gaussian 1D 30\% mask.
Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested in this experiment.
Row 1: Absolute differences between undersampled zero-filled MR images (ZF) with different noise levels and ground truth MR images (GT);
Row 2: Absolute differences between reconstructed MR images by DAGAN and GT;
Row 3: Absolute differences between reconstructed MR images by SwinMR (PI) and GT.
}
\label{fig:FIG_EXP_ERROR_Noise}
\end{figure}
\begin{table}[H]
\centering
\caption{
Fr\'echet inception distance (FID) of the experiment on different noise levels using the Gaussian 1D 30\% mask.
Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were applied in this experiment.\\
}
\setlength{\tabcolsep}{8mm}{
\begin{tabular}{cccc}
\toprule
Noise Level & SwinMR (PI) & DAGAN & ZF \\
\midrule
NL20\% & \textbf{21.11} & 66.89 & 156.55 \\
NL30\% & \textbf{21.84} & 71.44 & 168.57 \\
NL50\% & \textbf{29.66} & 75.77 & 203.46 \\
NL70\% & \textbf{35.92} & 78.24 & 225.41 \\
NL80\% & \textbf{40.81} & 73.12 & 250.99 \\
\bottomrule
\end{tabular}}%
\label{tab:noise_fid}%
\end{table}%
\subsection{Ablation Experiments on the Patch Number and Channel Number}
The patch number $H$ (or $W$) and the channel number $C$ determine the input size of the STL in SwinMR. Ablation studies on different patch numbers and channel numbers were conducted to study their impact on the reconstruction results.
Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (A) and Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (C) show the SSIM, PSNR and FID of SwinMR with different patch numbers.
Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (E) shows the loss function of SwinMR in the training process.
Figure~\ref{fig:FIG_AEXP_Patch} displays the sample of reconstructed images of SwinMR with different patch numbers.
Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (B) and Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (D) show the SSIM, PSNR and FID of SwinMR with different channel numbers.
Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (F) shows the loss function of SwinMR in the training process.
Figure~\ref{fig:FIG_AEXP_Channel} displays the sample of reconstructed images of SwinMR with the different channel numbers.
For the patch number, Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (A) and Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (C) demonstrate that the reconstruction quality improves as the patch number grows. According to Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (E), the training loss converges faster and to a lower value as the patch number grows. However, a larger patch number also increases the computational cost. Empirically, we applied a patch number of 96 for training.
For the channel number, the results in Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (B) and Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (D) did not resemble the trend observed in the ablation experiment on the patch number: there were no significant differences in the three indicators (SSIM, PSNR and FID) as the channel number changed. According to Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (F), the training loss converges faster and to a lower value as the channel number grows. Empirically, we applied a channel number of 180 for training.
For the comparison of multi-channel data (PI) and single-channel data (nPI), SwinMR (PI) tends to have a better (lower) FID, but a worse (lower) SSIM/PSNR than SwinMR (nPI).
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Graph/FIG_AEXP_GRAPH_ChannelPatch.pdf}
\caption{
Structural similarity index (SSIM), Peak signal-to-noise ratio (PSNR) and Fr\'echet inception distance (FID) of ablation experiments of the patch number and channel number.
(A), (C) and (E) are the SSIM/PSNR, FID and training loss of the ablation experiment of the patch number.
(B), (D) and (F) are the SSIM/PSNR, FID and training loss of the ablation experiment of the channel number.
}
\label{fig:FIG_AEXP_GRAPH_ChannelPatch}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_AEXP_Patch.pdf}
\caption{
Samples of the ablation experiment on the patch number using Gaussian 1D 30\% mask.
Row 1: Reconstructed MR images by SwinMR (nPI) with different patch numbers
and zero-filled MR images (ZF);
Row 2: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (nPI) and ground truth MR images (GT),
and absolute differences ($10 \times$) between ZF and GT;
Row 3: Reconstructed MR images by SwinMR (PI) with different patch numbers
and GT;
Row 4: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (PI) and GT,
and the Gaussian 1D 30\% mask.
}
\label{fig:FIG_AEXP_Patch}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_AEXP_Channel.pdf}
\caption{
Samples of the ablation experiment on the channel number using Gaussian 1D 30\% mask.
Row 1: Reconstructed MR images by SwinMR (nPI) with the different channel numbers
and zero-filled MR images (ZF);
Row 2: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (nPI) and ground truth MR images (GT),
and absolute differences ($10 \times$) between ZF and GT;
Row 3: Reconstructed MR images by SwinMR (PI) with different channel numbers
and GT;
Row 4: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (PI) and GT,
and the Gaussian 1D 30\% mask.
}
\label{fig:FIG_AEXP_Channel}
\end{figure}
\subsection{Ablation Experiments on the Loss Function}
This ablation study aimed to discover the effect of each term in the loss function. According to Equation (\ref{formula:17}), the loss function of SwinMR consists of pixel-wise loss, frequency loss and perceptual loss. Four experiments were performed in this ablation study:
(1) PFP: \textbf{P}ixel-wise, \textbf{F}requency and \textbf{P}erceptual loss;
(2) PP: \textbf{P}ixel-wise and \textbf{P}erceptual loss;
(3) PF: \textbf{P}ixel-wise and \textbf{F}requency loss;
(4) P: only \textbf{P}ixel-wise loss.
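For illustration, a schematic PyTorch sketch of such a composite objective is given below; the use of L1 distances, a truncated VGG16 feature extractor, single-channel inputs replicated to three channels and the weighting factors are assumptions and do not reproduce Equation (\ref{formula:17}) exactly.
\begin{verbatim}
# Sketch: composite loss with pixel-wise, frequency and perceptual (VGG)
# terms.  Distance functions and weights are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class CompositeLoss(torch.nn.Module):
    def __init__(self, w_pixel=1.0, w_freq=1.0, w_perc=0.01):
        super().__init__()
        self.w_pixel, self.w_freq, self.w_perc = w_pixel, w_freq, w_perc
        # Truncated VGG16 as a fixed perceptual feature extractor
        # (assumption; ImageNet normalisation omitted for brevity).
        self.vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):
        # pred, target: (B, 1, H, W) magnitude images in [0, 1] (assumption).
        l_pixel = F.l1_loss(pred, target)                 # pixel-wise term
        l_freq = F.l1_loss(torch.fft.fft2(pred).abs(),    # frequency term
                           torch.fft.fft2(target).abs())  # (magnitude only)
        f_pred = self.vgg(pred.repeat(1, 3, 1, 1))        # perceptual term
        f_target = self.vgg(target.repeat(1, 3, 1, 1))
        l_perc = F.l1_loss(f_pred, f_target)
        return (self.w_pixel * l_pixel + self.w_freq * l_freq
                + self.w_perc * l_perc)
\end{verbatim}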
Figure~\ref{fig:FIG_AEXP_GRAPH_Loss} shows the SSIM, PSNR and FID of SwinMR trained with different loss functions.
Figure~\ref{fig:FIG_AEXP_Loss_edge} displays the samples of reconstructed images of SwinMR trained with different loss functions.
According to Figure~\ref{fig:FIG_AEXP_GRAPH_Loss}, for SwinMR (PI), the utilisation of frequency loss tends to improve SSIM/PSNR and decrease the FID (PFP vs PP; PF vs P).
For SwinMR (nPI), the utilisation of frequency loss improves only SSIM and PSNR, with scarcely any effect on FID.
In most cases, the utilisation of the frequency loss has a positive impact on reconstruction quality indicators -- both SSIM/PSNR and FID.
For SwinMR (PI), the utilisation of perceptual loss tends to slightly decrease SSIM and PSNR, but substantially decrease the FID (PFP vs PF; PP vs P).
For SwinMR (nPI), the utilisation of perceptual loss tends to achieve a better FID but scarcely changes SSIM and PSNR (PFP vs PF; PP vs P).
In most cases, the utilisation of the perceptual loss has a positive impact on FID, but a negative impact on SSIM/PSNR when using multi-channel data.
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Graph/FIG_AEXP_GRAPH_Loss.pdf}
\caption{
Structural similarity index (SSIM), Peak signal-to-noise ratio (PSNR) and Fr\'echet inception distance (FID) of the ablation experiment on the loss function using Gaussian 1D 30\% mask.
PFP: pixel-wise, frequency and perceptual loss;
PP: pixel-wise and perceptual loss;
PF: pixel-wise and frequency loss;
P: only pixel-wise loss.
}
\label{fig:FIG_AEXP_GRAPH_Loss}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_AEXP_Loss_edge.pdf}
\caption{
Samples of the ablation experiment on the loss function using Gaussian 1D 30\% mask.
PFP: pixel-wise, frequency and perceptual loss;
PP: pixel-wise and perceptual loss;
PF: pixel-wise and frequency loss;
P: only pixel-wise loss.
Row 1: Reconstructed MR images by SwinMR (nPI)
and zero-filled MR images (ZF);
Row 2: Edge information of reconstructed MR images by SwinMR (nPI)
and edge information of ZF;
Row 3: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (nPI) and ground truth MR images (GT),
and absolute differences ($10 \times$) between ZF and GT;
Row 4: Reconstructed MR images by SwinMR (PI)
and GT;
Row 5: Edge information of reconstructed MR images by SwinMR (PI)
and edge information of GT;
Row 6: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (PI) and GT,
and the Gaussian 1D 30\% mask.
}
\label{fig:FIG_AEXP_Loss_edge}
\end{figure}
\subsection{Downstream Task Experiments: Brain Segmentation Experiments on BraTS17}
In this experiment, we performed a downstream task using reconstructed MR images, in order to assess the reconstruction quality.
Specifically, an open-access multi-modality brain tumour segmentation network~\footnote{https://github.com/Mehrdad-Noori/Brain-Tumor-Segmentation}~\citep{Noori2019} was trained on the BraTS17 dataset (four modalities are required, namely FLAIR, T1, T1CE and T2). Then, we trained four SwinMR models using BraTS17 FLAIR, T1, T1CE and T2 data, respectively. After that, segmentation was performed on GT MR images, SwinMR reconstructed MR images and ZF MR images directly using the pre-trained segmentation network. Ideally, the segmentation scores of the reconstructed images and the GT images should be as close as possible.
Table~\ref{tab:BraTS17_Recon} shows the result of SwinMR trained with BraTS17 FLAIR, T1, T1CE and T2 respectively.
Figure~\ref{fig:FIG_EXP_IMAGE_BraTS17_Reconstruction} displays the samples of the reconstruction of different modalities.
Table~\ref{tab:BraTS17_IoU} and Table~\ref{tab:BraTS17_Dice} show the IoU and Dice score of the segmentation task.
Figure~\ref{fig:FIG_EXP_IMAGE_BraTS17_Segmentation} displays the sample of the segmentation task.
According to Table~\ref{tab:BraTS17_IoU} and Table~\ref{tab:BraTS17_Dice}, the IoU and Dice scores of the reconstructed MR images are improved compared with those of the ZF MR images and are much closer to the scores of the GT MR images.
According to the Mann-Whitney Test, the IoU and Dice score distributions of the reconstructed MR images using the Gaussian 1D 30\% mask are not significantly different from the distributions of the GT MR images ($p > 0.05$).
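A minimal sketch of the per-case scoring and the statistical comparison is given below; the aggregation over sub-regions and cases is an assumption and the helper names are placeholders.
\begin{verbatim}
# Sketch: per-case IoU/Dice for one tumour sub-region and a Mann-Whitney
# test between the GT-based and reconstruction-based score distributions.
import numpy as np
from scipy.stats import mannwhitneyu

def iou_dice(pred, ref):
    """pred, ref: boolean masks of one sub-region (e.g. whole tumour)."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    total = pred.sum() + ref.sum()
    iou = inter / union if union > 0 else 1.0
    dice = 2.0 * inter / total if total > 0 else 1.0
    return iou, dice

def compare_distributions(scores_gt, scores_recon):
    """scores_*: per-case score lists; p > 0.05 means no significant difference."""
    return mannwhitneyu(scores_gt, scores_recon, alternative="two-sided").pvalue
\end{verbatim}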
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_BraTS17_Reconstruction.pdf}
\caption{
Samples of reconstruction results for SwinMR on BraTS17 dataset including FLAIR, T1, T1CE and T2 MR images.
Row 1: Ground truth MR images (GT);
Row 2: Zero-filled MR images (ZF) undersampled by Gaussian 1D 10\% mask (G1D10\%);
Row 3: Reconstructed MR images undersampled by G1D10\%;
Row 4: ZF undersampled by Gaussian 1D 30\% mask (G1D30\%);
Row 5: Reconstructed MR images undersampled by G1D30\%.
}
\label{fig:FIG_EXP_IMAGE_BraTS17_Reconstruction}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_BraTS17_Segmentation.pdf}
\caption{
Samples of segmentation results for SwinMR on the BraTS17 dataset.
Col 1: Segmentation reference;
Col 2: Segmentation prediction using GT images;
Col 3: Segmentation prediction using zero-filled MR images (ZF) undersampled by Gaussian 1D 10\% mask (G1D10\%);
Col 4: Segmentation prediction using reconstructed MR images undersampled by G1D10\%;
Col 5: Segmentation prediction using ZF undersampled by Gaussian 1D 30\% mask (G1D30\%);
Col 6: Segmentation prediction using reconstructed MR images undersampled by G1D30\%.
Blue area: Whole tumour (WT);
Red area: Enhancing tumour (ET);
Green area: Tumour core (TC).
}
\label{fig:FIG_EXP_IMAGE_BraTS17_Segmentation}
\end{figure}
\begin{table}[H]
\centering
\caption{
Quantitative results of reconstructed images by SwinMR (Recon) and zero-filled images (ZF) on BraTS17 dataset (mean (std)).
PSNR: Peak signal-to-noise ratio;
SSIM: Structural similarity index;
FID: Fr\'echet inception distance.
G1D10\%: Gaussian 1D 10\% mask;
G1D30\%: Gaussian 1D 30\% mask.\\
}
\scalebox{0.75}{
\begin{tabular}{cccccc}
\toprule
\multirow{2}[4]{*}{Mask} & \multirow{2}[4]{*}{Indicator} & \multicolumn{4}{c}{Recon} \\
\cmidrule{3-6} & & FLAIR & T1 & T1CE & T2 \\
\midrule
\multirow{3}[1]{*}{G1D10\%} & PSNR & 30.07 (1.99) & 33.80 (2.30) & 33.80 (1.84) & 32.20 (1.81) \\
& SSIM & 0.751 (0.043) & 0.760 (0.046) & 0.797 (0.049) & 0.745 (0.039) \\
& FID & 38.02 & 32.97 & 31.46 & 21.84 \\
\multirow{3}[1]{*}{G1D30\%} & PSNR & 37.97 (2.42) & 41.08 (3.36) & 42.29 (2.12) & 38.37 (2.02) \\
& SSIM & 0.942 (0.013) & 0.953 (0.012) & 0.953 (0.015) & 0.937 (0.016) \\
& FID & 5.94 & 4.80 & 4.39 & 8.95 \\
\midrule
\multirow{2}[4]{*}{Mask} & \multirow{2}[4]{*}{Indicator} & \multicolumn{4}{c}{ZF} \\
\cmidrule{3-6} & & FLAIR & T1 & T1CE & T2 \\
\midrule
\multirow{3}[1]{*}{G1D10\%} & PSNR & 23.87 (1.64) & 25.92 (1.48) & 25.92 (1.70) & 23.92 (1.79) \\
& SSIM & 0.388 (0.070) & 0.414 (0.061) & 0.414 (0.068) & 0.431 (0.057) \\
& FID & 225.70 & 234.52 & 227.51 & 219.09 \\
\multirow{3}[1]{*}{G1D30\%} & PSNR & 28.74 (1.78) & 28.82 (1.60) & 30.60 (1.82) & 29.46 (2.01) \\
& SSIM & 0.597 (0.046) & 0.602 (0.051) & 0.602 (0.051) & 0.632 (0.038) \\
& FID & 91.18 & 100.98 & 106.28 & 85.49 \\
\bottomrule
\end{tabular}}%
\label{tab:BraTS17_Recon}%
\end{table}%
\begin{table}[H]
\centering
\caption{
Intersection over union (IoU) of the segmentation experiment (median/mean [Q1,Q3]).
$^{\star}$: $p \textless 0.05$;
$^{\star\star}$: $p \textless 0.01$
(compared with GT by Mann-Whitney Test).
GT: ground truth MR images; Recon: reconstructed MR images by SwinMR; ZF: undersampled zero-filled MR images.
G1D10\%: Gaussian 1D 10\% mask; G1D30\%: Gaussian 1D 30\% mask.
WT: Whole tumour; TC: Tumour core; ET: Enhancing tumour.\\
}
\scalebox{0.75}{
\begin{tabular}{ccccc}
\toprule
\multicolumn{2}{c}{IoU} & GT & Recon & ZF \\
\midrule
\multirow{3}[2]{*}{G1D10\%} & WT & 0.930/0.924 [0.900,0.954] & 0.898/0.899 [0.868,0.940]$^{\star\star}$ & 0.838/0.836 [0.795,0.881]$^{\star\star}$ \\
& TC & 0.821/0.771 [0.726,0.903] & 0.758/0.722 [0.661,0.890]$^{\star\star}$ & 0.617/0.539 [0.393,0.733]$^{\star\star}$ \\
& ET & 0.772/0.735 [0.625,0.889] & 0.740/0.652 [0.471,0.846]$^{\star\star}$ & 0.570/0.527 [0.336,0.694]$^{\star\star}$ \\
\midrule
\multirow{3}[2]{*}{G1D30\%} & WT & 0.930/0.924 [0.900,0.954] & 0.924/0.921 [0.895,0.953] & 0.897/0.897 [0.862,0.945]$^{\star\star}$ \\
& TC & 0.821/0.771 [0.726,0.903] & 0.811/0.766 [0.719,0.904] & 0.763/0.728 [0.669,0.895]$^{\star\star}$ \\
& ET & 0.772/0.735 [0.625,0.889] & 0.770/0.725 [0.616,0.883] & 0.748/0.697 [0.573,0.859]$^{\star\star}$ \\
\bottomrule
\end{tabular}}%
\label{tab:BraTS17_IoU}%
\end{table}%
\begin{table}[H]
\centering
\caption{
Dice score of the segmentation experiment (median/mean [Q1,Q3]).
$^{\star}$: $p \textless 0.05$;
$^{\star\star}$: $p \textless 0.01$
(compared with GT by Mann-Whitney Test).
GT: ground truth MR images; Recon: reconstructed MR images by SwinMR; ZF: undersampled zero-filled MR images.
G1D10\%: Gaussian 1D 10\% mask; G1D30\%: Gaussian 1D 30\% mask.
WT: Whole tumour; TC: Tumour core; ET: Enhancing tumour.\\
}
\scalebox{0.75}{
\begin{tabular}{ccccc}
\toprule
\multicolumn{2}{c}{Dice} & GT & Recon & ZF \\
\midrule
\multirow{3}[2]{*}{G1D10\%} & WT & 0.968/0.965 [0.952,0.981] & 0.950/0.950 [0.933,0.974]$^{\star\star}$ & 0.916/0.914 [0.892,0.940]$^{\star\star}$ \\
& TC & 0.904/0.857 [0.845,0.951] & 0.863/0.819 [0.800,0.944]$^{\star\star}$ & 0.767/0.653 [0.566,0.847]$^{\star\star}$ \\
& ET & 0.874/0.835 [0.777,0.941] & 0.852/0.766 [0.640,0.917]$^{\star\star}$ & 0.725/0.665 [0.503,0.820]$^{\star\star}$ \\
\midrule
\multirow{3}[2]{*}{G1D30\%} & WT & 0.968/0.965 [0.952,0.981] & 0.964/0.963 [0.948,0.980] & 0.949/0.949 [0.930,0.975]$^{\star\star}$ \\
& TC & 0.904/0.857 [0.845,0.951] & 0.897/0.854 [0.838,0.951] & 0.868/0.826 [0.803,0.947]$^{\star\star}$ \\
& ET & 0.874/0.835 [0.777,0.941] & 0.871/0.827 [0.765,0.939] & 0.857/0.808 [0.729,0.925]$^{\star\star}$ \\
\bottomrule
\end{tabular}}%
\label{tab:BraTS17_Dice}%
\end{table}%
\section{Discussion}
In this work, a novel Swin transformer based model, i.e., SwinMR, for fast MRI reconstruction has been proposed.
Most existing deep learning based image restoration methods, including MRI reconstruction approaches, are based on CNNs. The convolution is a very effective feature extractor but cannot capture long-range dependencies, since the receptive field of CNNs is limited by the kernel size and the depth of the network.
To tackle this problem, researchers have developed transformer-based image restoration methods; transformers were originally designed for NLP tasks. The core of the transformer is MSA, which provides global sensitivity: in the MSA operation, each patch can attend to any other patch in the whole image space, but this also aggravates the computational burden.
However, we believe that for MRI reconstruction, MSA operated over the whole image space is redundant and unnecessary.
It is not difficult to understand that in NLP tasks the first and the last words may have a strong connection in a sentence. However, this may not be applicable in CV tasks.
Visual elements (e.g., pixels) in CV tasks can vary substantially in scale unlike language elements (e.g., word tokens) in NLP tasks~\citep{Liu2021}.
In most cases, for example, the top-left corner patch of an image has no relationship with the bottom-right corner patch. Moreover, for MRI reconstruction, the main difficulty is the recovery of detailed and texture information. Focusing too much on global information while ignoring detailed (local) information may make the image smoother and lose more details.
The Swin transformer achieves a trade-off for CV tasks: its attention operations are conducted within shifted windows instead of over whole images. It has a larger receptive field than CNNs but is not overly concerned with global information. This is the reason why we have adopted the Swin transformer for MRI reconstruction.
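As an illustration of how (S)W-MSA restricts attention to local neighbourhoods, the sketch below shows the standard window-partition step; the channel-last layout and the window size are placeholders.
\begin{verbatim}
# Sketch: partitioning a feature map into non-overlapping windows, so that
# self-attention is computed inside each ws x ws window rather than globally.
import torch

def window_partition(x, ws):
    """x: (B, H, W, C) feature map; ws: window size dividing H and W."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, ws * ws, C)
    return windows  # (num_windows * B, ws*ws, C): token set seen by one MSA

# A shifted-window layer applies torch.roll(x, shifts=(-ws//2, -ws//2),
# dims=(1, 2)) before partitioning, so that information propagates across
# window boundaries between consecutive layers.
\end{verbatim}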
To evaluate our proposed methods, several comparison experiments and ablation studies have been conducted. In this study, we have compared our proposed SwinMR with benchmark MRI reconstruction methods. The results in Table~\ref{tab:comparison} have demonstrated that our SwinMR has achieved the highest SSIM/PSNR and lowest FID compared to CNN-based and GAN-based models. From Figure~\ref{fig:FIG_EXP_IMAGE_Comparison}, we have shown clearly that our SwinMR has obtained better reconstruction quality, especially in the zoom-in area, where the details of the cerebellum have been well-preserved.
In this study, we have also compared SwinMR (PI), trained with multi-channel brain data, with SwinMR (nPI), trained with single-channel brain data. The results have led to a conclusion similar to our previous study~\citep{Huang2021}: the FID of the model trained with multi-channel data is better than that of the model trained with single-channel data, whereas the SSIM/PSNR shows the opposite trend (i.e., SSIM/PSNR: nPI \textgreater\ PI; FID: PI \textless\ nPI). This phenomenon can also be observed in the subsequent ablation experiments.
From Figure~\ref{fig:FIG_EXP_IMAGE_Comparison}, we can find that the reconstructed images of SwinMR (PI) show more details and texture information, whereas the reconstructed images of SwinMR (nPI) appear smoother.
The experimental results have demonstrated that the three metrics that compared PI and nPI gave different answers. We have speculated that this might be due to the different principles of these metrics.
PSNR is a classic metric based on per-pixel comparisons, which cannot reflect the structural information of images. SSIM is a perceptual metric that measures structural similarity. However, both of them are based on simple, shallow functions and direct comparisons between images, which is insufficient to account for many nuances of human perception~\citep{Zhang2018}.
For FID, the comparison is based on perception and performed on two sets of images. Images are mapped to high-dimension representations by a pre-trained InceptionV3 network, which is well-related to human visual perception.
The SwinMR (PI) reconstructed images have demonstrated more details and texture information. Even though these details and texture information may not be so \emph{accurate}, they make the reconstructed images more \emph{visually similar} to the ground truth images.
In contrast, the SwinMR (nPI) reconstructed images appear smoother at the pixel level, at the cost of less detail and texture information.
Therefore, SwinMR (PI) tends to have a better FID and a worse SSIM/PSNR compared to SwinMR (nPI), due to the differences in the principles of the evaluation methods.
From Table~\ref{tab:comparison}, we can find a common problem of transformer-based methods, namely the higher computational cost compared to CNN-based and GAN-based methods. Equation (\ref{formula:11}) shows that the computational complexity is proportional to the $HW$ of the input of (S)W-MSA. The time shown in Table~\ref{tab:comparison} is the inference time, where the original height and width are treated as $H$ and $W$ ($256 \times 256$ here). For training, random cropping has been applied to ease the long processing time.
Experiments using different undersampling masks and various noise levels have demonstrated that our proposed SwinMR shows superiority over DAGAN in all the tests. The evaluation indicators change as expected when the conditions (masks and noise levels) change.
Ablation studies on the patch number and the channel number have demonstrated that the reconstruction quality improves as the patch number increases and gradually saturates, according to Figures~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch}(A) and (C).
However, according to Equation (\ref{formula:11}), the computational complexity also increases with the patch number. As a trade-off, we have set the patch number to 96.
Contrary to our expectations, the channel number has not been positively correlated with the evaluation indicators in this experiment, according to Figures~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch}(B) and (D). We assume that the evaluation indicators are already saturated within the range of channel numbers tested. Empirically, we have set the channel number to 180 according to the default setting of SwinIR.
Ablation studies on different loss functions have been conducted. As expected, the pixel-wise loss and the frequency loss mainly constrain the fidelity of the reconstruction, whereas the perceptual VGG loss focuses on perception, which is well-related to the human visual system. Therefore, the frequency loss has a positive impact on SSIM and PSNR, which are more sensitive to reconstruction fidelity, while the perceptual loss has a positive impact on FID, which is based on perception.
There are still some limitations of our work.
First, in the (S)W-MSA operation, the size of the windows is fixed. Inspired by GoogLeNet, multi-scale windows could be incorporated and the results from different scales merged in the (S)W-MSA.
Second, the heavy computational cost is still an obstacle to the development of transformers. The improvement that transformers bring comes at the cost of increased computation. A lightweight transformer model could be a potential future research direction.
\section{Conclusion}
In this work, we have developed the SwinMR, a novel parallel imaging coupled Swin transformer-based model for fast multi-channel MRI reconstruction. The proposed method has outperformed other benchmark CNN-based and GAN-based MRI reconstruction methods. It has also shown excellent robustness using different undersampling trajectories with various noises.
\section*{Acknowledgement}
This work was supported in part by the UK Research and Innovation Future Leaders Fellowship [MR/V023799/1], in part by the Medical Research Council [MC/PC/21013], in part by the European Research Council Innovative Medicines Initiative [DRAGON, H2020-JTI-IMI2 101005122], in part by the AI for Health Imaging Award [CHAIMELEON, H2020-SC1-FA-DTS-2019-1 952172], in part by the British Heart Foundation [Project Number: TG/18/5/34111, PG/16/78/32402], in part by the Project of Shenzhen International Cooperation Foundation [GJHZ20180926165402083], in part by the Basque Government through the ELKARTEK funding program [KK-2020/00049], and in part by the consolidated research group MATHMODE [IT1294-19].
\newpage
\bibliographystyle{elsarticle-num}
\section{Introduction}
At present, all data are consistent with,
and in fact strongly support, the standard big-bang model
in which the time evolution of the Universe is described by the
Friedmann-Lema\^{\i}tre-Robertson-Walker metric.
Accordingly, the Universe possesses the space-time structure
$\mathbb{R}\times{\cal M}$
where $\mathbb{R}$ describes the ``space'' of cosmic time,
and ${\cal M}$ the three-dimensional comoving space section of constant
curvature $K=+1, 0$ and $-1$.
The Einstein field equations, or equivalently the Friedmann equations
for the cosmic scale factor, do not fix the curvature a priori.
Instead, the curvature parameter $K$ has to be inferred from a
determination of the total energy density
$\varepsilon_{\hbox{\scriptsize tot}}$
of the Universe via the relation $(c=1)\,$
$K = H_0^2 a_0^2(\Omega_{\hbox{\scriptsize tot}}-1)$,
$\Omega_{\hbox{\scriptsize tot}}:=\varepsilon_{\hbox{\scriptsize tot}}/
\varepsilon_{\hbox{\scriptsize crit}}$,
where $\varepsilon_{\hbox{\scriptsize crit}} := \frac{3H_0^2}{8\pi G}$
denotes the critical energy density,
$a_0$ the cosmic scale factor, and $H_0$ the Hubble constant
(all quantities at the present epoch).
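For orientation, the curvature radius corresponding to a given
$\Omega_{\hbox{\scriptsize tot}}$ can be evaluated directly from this
relation (with $c$ restored); in the following minimal sketch the input
values are placeholders only.
\begin{verbatim}
# Sketch: curvature radius a_0 from K = H_0^2 a_0^2 (Omega_tot - 1), K = +1,
# with c restored.  Input values below are placeholders, not fitted results.
import math

C_KM_S = 299792.458                     # speed of light in km/s

def curvature_radius_mpc(omega_tot, h):
    """Curvature radius in Mpc for a closed universe (omega_tot > 1)."""
    hubble_radius = C_KM_S / (100.0 * h)          # c/H_0 in Mpc
    return hubble_radius / math.sqrt(omega_tot - 1.0)

print(curvature_radius_mpc(omega_tot=1.02, h=0.71))   # ~3.0e4 Mpc
\end{verbatim}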
Furthermore, it is a mathematical fact,
although not always appreciated
(see, however, the remark on early works below),
that fixing the curvature $K$ does not determine uniquely
the global geometry of ${\cal M}$,
i.\,e.\ the topology and thus the shape of the Universe.
Only if it is {\it assumed} that the Universe is simply-connected,
the possible homogeneous 3-spaces ${\cal M}$ of constant curvature $K$
are given by the 3-sphere ${\cal S}^3 (K=+1)$,
Euclidean 3-space ${\cal E}^3 (K=0)$,
or hyperbolic 3-space ${\cal H}^3 (K=-1)$.
In this case, the Universe is finite for positive curvature
$(\Omega_{\hbox{\scriptsize tot}}>1)$
and infinite for vanishing $(\Omega_{\hbox{\scriptsize tot}}=1)$
or negative curvature $(\Omega_{\hbox{\scriptsize tot}}<1)$.
However, most 3-spaces ${\cal M}$ of constant curvature are
multi-connected and are given by the quotient of
${\cal S}^3$, ${\cal E}^3$, or ${\cal H}^3$ by a group $\Gamma$
of covering transformations,
i.\,e.\ ${\cal M}={\cal S}^3/\Gamma$, ${\cal E}^3/\Gamma$, or
${\cal H}^3/\Gamma$.
In this case, the Universe is again finite for positive curvature,
but can be finite too if it is flat or negatively curved.
Here, we would like to remark that the question whether the space of the
Universe is finite and possibly multi-connected has been discussed
during the last century by several cosmologists,
e.\,g.\ by Schwarzschild \cite{Schwarzschild_1900},
Einstein \cite{Einstein_1917},
Friedmann \cite{Friedmann_1922,Friedmann_1924},
Lema\^{\i}tre \cite{Lemaitre_1958},
Heckmann and Sch\"ucking \cite{Heckmann_Schuecking_1959a},
and Ellis \cite{Ellis_1971}, to mention only a few.
The concordance model of cosmology ($\Lambda$CDM model)
assumes a flat Universe with the topology of ${\cal E}^3$
with a positive cosmological constant $\Lambda$,
i.\,e.\ $\Omega_\Lambda := \frac{\Lambda}{3H_0^2} =
1 -\Omega_{\hbox{\scriptsize mat}}-\Omega_{\hbox{\scriptsize rad}}$
with $\Omega_{\hbox{\scriptsize mat}}=\Omega_{\hbox{\scriptsize bar}}
+ \Omega_{\hbox{\scriptsize cdm}}$,
where the various $\Omega$-parameters denote the present value
of the baryonic (bar), cold dark matter (cdm), matter (mat)
and radiation (rad) energy densities in units of
$\varepsilon_{\hbox{\scriptsize crit}}$.
Three variants of the concordance model have been presented
by the WMAP team \cite{Spergel_et_al_2003}
providing a good overall fit to the temperature fluctuations
$\delta T$ of the cosmic microwave background radiation (CMB)
on small and medium scales, but there remains a strange
discrepancy at large scales as first observed by COBE
\cite{Hinshaw_et_al_1996}
and later substantiated by WMAP \cite{Bennett_et_al_2003}.
The suppression of the CMB anisotropy at large scales,
i.\,e.\ at low multipoles, can be explained
if the Universe is finite.
Recent analyses concerning the suppression
at low multipoles in the WMAP data can be found in
\cite{Luminet_Weeks_Riazuelo_Lehoucq_Uzan_2003,%
Aurich_Lustig_Steiner_Then_2004a,Aurich_Lustig_Steiner_Then_2004b,%
Aurich_Lustig_Steiner_2004c,Gundermann_2005}.
As discussed before, a finite Universe is naturally obtained,
if the total energy density exceeds the critical value one,
i.\,e.\ $\Omega_{\hbox{\scriptsize tot}}>1$.
Interestingly enough, the WMAP team reported \cite{Bennett_et_al_2003}
$\Omega_{\hbox{\scriptsize tot}}=1.02\pm0.02$
together with $\Omega_{\hbox{\scriptsize bar}}=0.044\pm0.004$,
$\Omega_{\hbox{\scriptsize mat}}=0.27\pm0.04$,
and $h=0.71^{+0.04}_{-0.03}$ for the present day reduced Hubble constant
(the errors give the $1\sigma$-deviation uncertainties only).
Taken at face value, these parameters hint at a
positively curved Universe possessing the geometry of ${\cal S}^3$
or of one of the spherical space forms ${\cal S}^3/\Gamma$.
(One should keep in mind, however, that the WMAP values depend on certain
priors, and, furthermore, include the $1\sigma$-errors only.
Thus it would be too early to conclude that the data definitively
exclude a negatively curved Universe.
In fact, we have recently shown
\cite{Aurich_Lustig_Steiner_Then_2004a,Aurich_Lustig_Steiner_Then_2004b}
that the non-compact, but finite hyperbolic Picard universe
describes well the CMB anisotropy and the observed suppression
of power at large scales.)
In this paper, we present a systematic comparison of the predictions
with the CMB anisotropy for universes possessing homogeneous
spherical topology.
This comparison is made possible for two reasons.
First of all, the spherical spaces were classified already by 1932
\cite{Threlfall_Seifert_1930,Threlfall_Seifert_1932}
and thus their mathematical structure is known.
(This is in contrast to the case of hyperbolic manifolds
which are not yet completely classified; even the manifold with the
smallest volume is not yet known.)
Second, due to an efficient numerical algorithm described in
our recent paper \cite{Aurich_Lustig_Steiner_2004c},
we are able to take in the Sachs-Wolfe formula a large number of
vibrational modes into account
and thus to predict sufficiently many CMB multipoles for the
various spherical spaces which in turn allow a detailed
comparison with the WMAP data.
Since the CMB spectrum depends sensitively on the curvature radius,
the comparison is performed as a function of
$\Omega_{\hbox{\scriptsize tot}}$ in the large interval $[1.01,1.20]$
in order to determine for a given spherical space form the
best-fitting value of the total energy density -
under the condition, of course, that the space under consideration
is able to describe the data at all.
Recently, Luminet et al.\ \cite{Luminet_Weeks_Riazuelo_Lehoucq_Uzan_2003}
proposed the Poincar\'e dodecahedron, which is one of the well-known
spherical space forms
(see section \ref{Section_spherical_space_form} for details),
as a model for the geometry of the Universe.
In their preliminary study involving only the three lowest multipoles
$(l=2,3,4)$ they found, indeed,
for $\Omega_{\hbox{\scriptsize tot}} = 1.013$ a strong suppression
of the CMB power at $l=2$ and a weak suppression at $l=3$ in agreement
with the WMAP data.
However, in \cite{Luminet_Weeks_Riazuelo_Lehoucq_Uzan_2003}
only the first three vibrational modes of the dodecahedral space
with wave number $\beta=13, 21$ and 25
(comprising in total 59 eigenfunctions) have been used,
and there thus remained the question about how this extremely low
wave number cut-off affects the predictions of the multipoles,
since experience shows that increasing the cut-off usually enhances
the integrated Sachs-Wolfe and Doppler contributions.
In our recent paper \cite{Aurich_Lustig_Steiner_2004c}
we presented a thorough discussion of the CMB anisotropy for the
dodecahedral space topology using the first 10521 eigenfunctions
corresponding to the large wave number cut-off $\beta=155$.
The contributions of higher wave numbers up to $\beta=1501$ were
taken into account with respect to their mean behaviour.
Taking within the tight-coupling approximation not only the
ordinary, but also the integrated Sachs-Wolfe and also the
Doppler contribution into account, we were able to predict
sufficiently many multipole moments
such that a detailed comparison of the dodecahedral space model
with the WMAP data could be performed.
We found that the temperature correlation function for the
dodecahedral universe possesses very weak correlations at large
scales in nice agreement with the WMAP data for
$\Omega_{\hbox{\scriptsize tot}}$ in the range $1.016\dots1.020$.
There thus arises the interesting question whether the
dodecahedral space is the only spherical space form able to
describe the CMB data.
In \cite{Gundermann_2005} the CMB anisotropy is also studied
for the dodecahedron, the binary octahedral group, and
the binary tetrahedral group.
The main result of the present paper is that while almost all homogeneous
spherical spaces have to be excluded as possible geometries for the Universe,
there is one particular space form, defined by the binary octahedral group,
which agrees for $\Omega_{\hbox{\scriptsize tot}} \simeq 1.038$
with the WMAP data even better than the dodecahedron.
We observe that the best-fitting values obtained for
$\Omega_{\hbox{\scriptsize tot}}$ are different for the
binary octahedral space and the dodecahedron,
but both values lie well within the $1\sigma$-band
determined by WMAP.
It remains to be seen whether future data will enable us to
definitely eliminate one of the two space forms in favour of the
other as describing the true topology of the Universe.
Our paper is organised as follows.
In section \ref{Section_spherical_space_form},
we summarize the main properties of the existing homogeneous
spherical space forms and of their vibrational modes.
Our main results are presented in section \ref{Section_statistic}
which contains a detailed comparison with the WMAP data of the
CMB angular power spectrum for the various types of spherical spaces.
In addition to the power spectrum,
we study also the so-called $S(\rho)$ statistic \cite{Bennett_et_al_2003}
which measures the suppression at large angular scales directly
in terms of the temperature correlation function.
Section \ref{Section_Conclusion} contains our conclusions.
\section{The spherical space forms and their vibrational modes}
\label{Section_spherical_space_form}
In section 2 of \cite{Aurich_Lustig_Steiner_2004c} we have already
described the three-dimensional spaces ${\cal M}$ of constant
positive curvature $K=1$, and therefore we refer the reader to
this paper for details.
The spherical spaces were classified by 1932
\cite{Threlfall_Seifert_1930,Threlfall_Seifert_1932}
and are given by the quotient ${\cal M} = {\cal S}^3/\Gamma$
of the three-sphere ${\cal S}^3$ under the action of a discrete
fixed-point free subgroup $\Gamma \subset \hbox{SO}(4)$
of the isometries of ${\cal S}^3$.
All these manifolds are compact possessing the volume
$V({\cal S}^3/\Gamma) = V({\cal S}^3)/N$,
where $N$ is the order of the group $\Gamma$,
and are, apart from the universal covering space ${\cal S}^3$ with
volume $V({\cal S}^3)=2\pi^2$, multi-connected.
To define the discrete fixed-point free subgroups
$\Gamma \subset \hbox{SO}(4)$ of isometries of ${\cal S}^3$,
one makes use of the fact that the unit 3-sphere ${\cal S}^3$
can be identified with the multiplicative group of unit quaternions $\{q\}$.
The latter are defined by $q:=w+x\hbox{\rm i}+y\hbox{\rm j}+z\hbox{\rm i}\hbox{\rm j}$,
$(w,x,y,z)\in \mathbb{R}^4$, having unit norm, $|q|^2 = w^2+x^2+y^2+z^2=1$.
Here, the 4 basic quaternions $\{1,\hbox{\rm i},\hbox{\rm j},\hbox{\rm i}\hbox{\rm j}\}$ satisfy
the multiplication rules $\hbox{\rm i}^2=\hbox{\rm j}^2=-1$ and $\hbox{\rm i}\hbox{\rm j}=-\hbox{\rm j}\hbox{\rm i}$
plus the property that $\hbox{\rm i}$ and $\hbox{\rm j}$ commute with every real number.
The distance $d(q_1,q_2)$ between two points $q_1$ and $q_2$ on ${\cal S}^3$
is given by $\cos d(q_1,q_2) = w_1w_2+x_1x_2+y_1y_2+z_1z_2$.
The group $\hbox{SO}(4)$ is isomorphic to
${\cal S}^3 \times {\cal S}^3 / \{\pm(1,1)\}$,
the two factors corresponding to the left and right group actions.
In this paper, we are only interested in homogeneous manifolds
${\cal M} = {\cal S}^3/\Gamma$, in which case the group $\Gamma$
contains only right-handed Clifford translations $\gamma\in\Gamma$
that act on an arbitrary unit quaternion $q\in{\cal S}^3$ by
left-multiplication, $q \to \gamma q$, and translate all points
$q_1,q_2\in {\cal S}^3$ by the same distance $\chi$,
i.\,e.\ $d(q_1,\gamma q_1)=d(q_2,\gamma q_2)=\chi$.
The right-handed Clifford translations act as right-handed
cork screw fixed-point free rotations of ${\cal S}^3$.
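This constant translation length can be checked numerically from the distance
formula; the following minimal sketch is a verification only and plays no role
in the actual mode computation.
\begin{verbatim}
# Sketch: verify that left multiplication by a unit quaternion gamma moves
# every point of S^3 by the same distance chi, with cos(chi) = Re(gamma).
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def distance(q1, q2):
    return np.arccos(np.clip(np.dot(q1, q2), -1.0, 1.0))

rng = np.random.default_rng(0)
gamma = np.array([0.5, 0.5, 0.5, 0.5])    # example: (1 + i + j + ij)/2
for _ in range(5):
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    print(distance(q, hamilton(gamma, q)))  # same value (pi/3) for every q
\end{verbatim}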
The following groups lead to homogeneous manifolds
${\cal M} = {\cal S}^3/\Gamma$
\cite{Threlfall_Seifert_1930,Threlfall_Seifert_1932,Wolf_1974,Thurston_1997}:
\begin{itemize}
\item The cyclic groups $Z_m$ of order $m$ $(m\ge 1)$.
\item The binary dihedral groups $D_{4m}^\star$ of order $4m$ $(m\ge 2)$.
\item The binary tetrahedral group $T^\star$ of order $24$.
\item The binary octahedral group $O^\star$ of order $48$.
\item The binary icosahedral group $I^\star$ of order $120$.
\end{itemize}
In table \ref{Tab:Generators} we give the right-handed Clifford translations
which generate the above groups $\Gamma$.
\begin{table}
\hspace*{2cm}\begin{tabular}{|c|c|c|}
\hline
$\Gamma$ & $\gamma_1$ & $\gamma_2$ \\
\hline
$Z_m$ & $\cos\left(\frac{2\pi}m\right) +
\hbox{\rm i} \, \sin\left(\frac{2\pi}m\right)$ & $-$ \\
\hline
$D_{4m}^\star $ & $\cos\left(\frac{2\pi}m\right) +
\hbox{\rm i}\hbox{\rm j} \, \sin\left(\frac{2\pi}m\right)$ & $\hbox{\rm i}$ \\
\hline
$T^\star$ & $\hbox{\rm j}$ &
$\frac 12 + \frac 12 \hbox{\rm i} + \frac 12 \hbox{\rm j} + \frac 12 \hbox{\rm i}\hbox{\rm j}$ \\
\hline
$O^\star$ & $\frac 1{\sqrt 2} + \frac 1{\sqrt 2}\hbox{\rm i}$ &
$\frac 12 + \frac 12 \hbox{\rm i} + \frac 12 \hbox{\rm j} + \frac 12 \hbox{\rm i}\hbox{\rm j}$ \\
\hline
$I^\star$ & $\hbox{\rm j}$ & $\frac{\sigma}2 + \frac 1{2\sigma}\hbox{\rm i} + \frac 12\hbox{\rm j}$ \\
\hline
\end{tabular}
\caption{\label{Tab:Generators}
The generators $\gamma_1$ and $\gamma_2$ for the groups $\Gamma$
$(\sigma = (1 + \sqrt 5)/2)$.
}
\end{table}
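As a consistency check of table \ref{Tab:Generators}, the group order $N$ can
be recovered by closing the set of generators under the Hamilton product; in
the sketch below the floating-point rounding used to identify group elements
is a numerical shortcut.
\begin{verbatim}
# Sketch: generate Gamma from its quaternionic generators by closure under
# the Hamilton product and count its order N.
import numpy as np

def hamilton(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def group_order(generators, digits=8):
    gens = [tuple(round(c, digits) for c in g) for g in generators]
    elements = {(1.0, 0.0, 0.0, 0.0)}
    frontier = [(1.0, 0.0, 0.0, 0.0)]
    while frontier:
        new = []
        for e in frontier:
            for g in gens:
                p = tuple(round(c, digits) for c in hamilton(g, e))
                if p not in elements:
                    elements.add(p)
                    new.append(p)
        frontier = new
    return len(elements)

s = 1.0 / np.sqrt(2.0)
print(group_order([(0.0, 0.0, 1.0, 0.0), (0.5, 0.5, 0.5, 0.5)]))  # T*: 24
print(group_order([(s, s, 0.0, 0.0), (0.5, 0.5, 0.5, 0.5)]))      # O*: 48
\end{verbatim}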
The vibrations on the homogeneous spherical spaces
${\cal M} = {\cal S}^3/\Gamma$ are determined by the regular solutions
of the Helmholtz equation
\begin{equation}
\label{Eq:Helmholtz}
(\Delta + E_\beta^{\cal M}) \, \psi_\beta^{{\cal M},i}(q) \; = \; 0
\hspace{10pt} , \hspace{10pt}
q \in {\cal M} \; \; ,
\end{equation}
satisfying the fundamental periodicity conditions
\begin{equation}
\label{Eq:periodicity_condition}
\psi_\beta^{{\cal M},i}(\gamma_k q) \; = \; \psi_\beta^{{\cal M},i}(q)
\hspace{10pt} , \hspace{10pt}
\forall q \in {\cal M} \; \; , \; \; \forall \gamma_k \in \Gamma
\hspace{10pt} .
\end{equation}
Here $\Delta$ denotes the Laplace-Beltrami operator on ${\cal S}^3$.
The eigenfunctions on ${\cal M}$ satisfy the orthonormality relation
\begin{equation}
\label{Eq:orthonormality}
\int_{\cal M} d\mu \;
\psi_\beta^{{\cal M},i}(q) \; \psi_{\beta'}^{{\cal M},i'}(q) \; = \;
\frac 1N \, \int_{{\cal S}^3} d\mu \;
\psi_\beta^{{\cal M},i}(q) \; \psi_{\beta'}^{{\cal M},i'}(q) \; = \;
\delta_{\beta\beta'} \, \delta_{ii'}
\hspace{10pt} .
\end{equation}
The spectrum on ${\cal M}$ is discrete, and the eigenvalues can be
expressed in terms of the wave number $\beta\in\mathbb{N}$ as
$E_\beta = \beta^2-1$ and are independent
of the degeneracy index $i=1,\dots,r^{\cal M}(\beta)$,
where $r^{\cal M}(\beta)$ denotes the multiplicity of the mode $\beta$.
It should be noted that for a given manifold ${\cal M}$
the wave numbers $\beta$ do not take all values in $\mathbb{N}$.
The allowed $\beta$ values together with their multiplicities
$r^{\cal M}(\beta)$ are explicitly known \cite{Ikeda_1995,Weeks_2005},
see Table \ref{Tab:Spectrum}.
The eigenfunctions $\psi_\beta^{{\cal M},i}(q)$ on ${\cal M}$
can be expanded into the eigenfunctions
$\psi_{\beta lm}^{{\cal S}^3}(q)$ on ${\cal S}^3$
\begin{equation}
\label{Eq:expansion_in_S3}
\psi_\beta^{{\cal M},i}(q) \; = \;
\sum_{l=0}^{\beta-1} \sum_{m=-l}^l \xi_{\beta lm}^i({\cal M}) \,
\psi_{\beta lm}^{{\cal S}^3}(q)
\hspace{10pt} .
\end{equation}
Since the eigenfunctions on ${\cal S}^3$ are explicitly known
\cite{Schroedinger_1938,Schroedinger_1939,Schroedinger_1940a,%
Schroedinger_1940b,Harrison_1967,Abbott_Schaefer_1986},
it remains to determine the expansion coefficients
$\xi_{\beta lm}^i({\cal M})$
which satisfy as a consequence of eq.\,(\ref{Eq:orthonormality})
the normalization condition
\begin{equation}
\label{Eq:normalization_condition}
\sum_{l=0}^{\beta-1} \sum_{m=-l}^l \left(\xi_{\beta lm}^i({\cal M})\right)^2
\; = \; N
\hspace{10pt} .
\end{equation}
In \cite{Aurich_Lustig_Steiner_2004c} we have described our
numerical algorithm to compute the coefficients $\xi_{\beta lm}^i({\cal M})$.
It uses a collocation method by imposing the periodicity condition
(\ref{Eq:periodicity_condition}).
Using this method, we have computed in \cite{Aurich_Lustig_Steiner_2004c}
the expansion coefficients for $\Gamma=I^\star$,
i.\,e.\ for the Poincar\'e dodecahedral space
${\cal D} = {\cal S}^3/I^\star$,
for $\beta\leq 155$ comprising the first 10521 eigenfunctions.
In addition, we have computed in \cite{Aurich_Lustig_Steiner_2004c}
the coefficients $\xi_{\beta lm}^i({\cal M})$ for $\beta\leq 33$
for some cyclic groups, some binary dihedral groups,
the binary tetrahedral group, and the binary octahedral group.
Based on these numerical results, we stated for homogeneous space forms
the following relation as a conjecture $(0\leq l \leq \beta-1)$
\begin{equation}
\label{Eq:Conjecture}
\frac 1{2l+1} \sum_{m=-l}^l \sum_{i=1}^{r^{\cal M}(\beta)}
\left( \xi_{\beta l m}^i({\cal M}) \right)^2 \; = \;
N \, \frac{r^{\cal M}(\beta)}{r^{{\cal S}^3}(\beta)}
\hspace{10pt} ,
\end{equation}
where $r^{{\cal S}^3}(\beta)=\beta^2$ denotes the multiplicity of the
vibrational modes on ${\cal S}^3$.
The relation (\ref{Eq:Conjecture}) has been found to hold within a numerical
accuracy of 13 digits.
However, for the inhomogeneous lens spaces
\cite{Gausmann_Lehoucq_Luminet_Uzan_Weeks_2001}
L(12,5) and L(72,17) we have found that the relation (\ref{Eq:Conjecture})
does not hold.
We thus concluded in \cite{Aurich_Lustig_Steiner_2004c}
that the relation (\ref{Eq:Conjecture}) is only valid for
homogeneous 3-spaces.
Recently, Gundermann \cite{Gundermann_2005} provided a proof of our
conjecture (\ref{Eq:Conjecture}).
In the next section, we shall use the eigenfunctions
(\ref{Eq:expansion_in_S3}) to calculate CMB sky maps
and the variance of the CMB anisotropy
and relation (\ref{Eq:Conjecture}) to calculate the
mean value of the CMB anisotropy for a variety of spherical spaces.
\section{The angular power spectrum $\delta T_l^2$ and the
correlation function $C(\vartheta)$ for spherical spaces}
\label{Section_statistic}
The relative temperature fluctuations $\frac{\delta T(\hat n)}T$
of the CMB are caused by several effects
which we shall compute within the tight-coupling approximation
along the lines described in detail in Section 2 of
\cite{Aurich_Lustig_Steiner_Then_2004a}.
The dominant contribution at large scales is given by the
ordinary Sachs-Wolfe (SW) effect which is a combination of
the gravitational potential $\Phi(\eta,\tau,\theta,\phi)$
at the surface of last scattering (SLS), and the intrinsic temperature
fluctuation $\frac 14\delta_\gamma(\eta,\tau,\theta,\phi)$
due to the imposed entropic initial conditions,
where $\delta_\gamma$ denotes the relative perturbation in
the radiation component.
(Here $\eta$ denotes the conformal time and $(\tau(\eta),\theta,\phi)$
the spherical coordinates of the photon path in the direction
$\hat n = \hat n(\theta,\phi)$,
where we assume that the observer is at the origin of the coordinate system,
i.\,e.\ at $(\tau_{\hbox{\scriptsize obs}},\theta_{\hbox{\scriptsize obs}},
\phi_{\hbox{\scriptsize obs}}) = (0,0,0)$.)
The gravitational potential $\Phi$ is identified with the (scalar)
perturbation of the Friedmann-Lema\^{\i}tre-Robertson-Walker metric
which for an energy-momentum tensor $T_{\mu\nu}$ with
$T_{ij}=0$ for $i\neq j$ $(i,j=1,2,3)$ can in conformal Newtonian gauge
be written as \cite{Bardeen_1980}
\begin{equation}
\label{Eq:Metric}
ds^2 \; = \; a^2(\eta) \, \left[ \, (1+2\Phi) d\eta^2 -
(1-2\Phi) |d\vec x|^2 \, \right]
\hspace{10pt} .
\end{equation}
Here $a(\eta)$ denotes the cosmic scale factor as a function of
conformal time $\eta$ and $|d\vec x|^2$ the line element on ${\cal S}^3$
\begin{equation}
\label{Eq:Metric_on_S3}
|d\vec x|^2 \; = \; d\tau^2 \, + \,
\sin^2\tau \; \big( d\theta^2 + \sin^2\theta \, d\phi^2 \big)
\end{equation}
with $0\leq \tau \leq \pi$, $0\leq \theta \leq \pi$, $0\leq \phi \leq 2\pi$.
The ordinary Sachs-Wolfe (SW) contribution to the temperature fluctuation
is given by
\begin{equation}
\label{Eq:NSW}
\frac{\delta T^{\hbox{\scriptsize SW}}(\hat n)}T \; = \;
\Phi(\eta_{\hbox{\scriptsize SLS}},\tau_{\hbox{\scriptsize SLS}},\theta,\phi)
\, + \,
\frac 14 \delta_\gamma(\eta_{\hbox{\scriptsize SLS}},
\tau_{\hbox{\scriptsize SLS}},\theta,\phi)
\end{equation}
with $\tau_{\hbox{\scriptsize SLS}} := \eta_0 - \eta_{\hbox{\scriptsize SLS}}$,
where $\eta_0$ and $\eta_{\hbox{\scriptsize SLS}}$ denote the conformal time
at the present epoch and at the time of recombination corresponding
to a redshift $z_{\hbox{\scriptsize SLS}}=1089$, respectively.
For a given spherical space ${\cal M}$, the metric perturbation can be
written as an expansion in the eigenfunctions on ${\cal M}$
\begin{equation}
\label{Eq:grav_potential}
\Phi(\eta,\tau,\theta,\phi) \; = \;
{\sum_{\beta\ge 3}} '\; \sum_{i=1}^{r^{\cal M}(\beta)}
\Phi_\beta^i(\eta) \, \Psi_\beta^{{\cal M},i}(\tau,\theta,\phi)
\hspace{10pt} ,
\end{equation}
where in the mode summation only the modes with $\beta\ge 3$ have been
taken into account (if they exist),
since the wave numbers $\beta=1,2$ correspond to modes which are pure
gauge terms \cite{Bardeen_1980}.
The prime in the summation over the modes $\beta$ indicates
that the spectrum of a given manifold ${\cal M}$ does not
contain all $\beta\in\mathbb{N}$, see Table \ref{Tab:Spectrum}.
The functions $\Phi_\beta^i(\eta)$ determine the time evolution and
will be factorized
$\Phi_\beta^i(\eta) = \Phi_\beta^i(0) \, g_\beta(\eta)$
with $g_\beta(0)=1$.
The functions $g_\beta(\eta)$ do not depend on the degeneracy index $i$,
since the associated differential equation depends only on
the eigenvalue $E_\beta^{\cal M}$ which is independent of $i$.
The initial values $\Phi_\beta^i(0)$ are the primordial
fluctuation amplitudes and are assumed to be Gaussian random variables with
zero expectation value and covariance
\begin{equation}
\label{Eq:covariance}
\left< \Phi_\beta^i(0)\, \Phi_{\beta'}^{i'}(0) \right> \; = \;
\delta_{\beta\beta'} \, \delta_{i i'} \, P_\Phi(\beta)
\hspace{10pt} .
\end{equation}
Here $P_\Phi(\beta)$ denotes the primordial power spectrum
that determines the weight by which the primordial modes $\beta$
are excited, on average.
The average $\left<\dots\right>$ in (\ref{Eq:covariance}) denotes an
ensemble average over the primordial perturbations
which are supposed to arise from quantum fluctuations,
by which the Universe is ``created''.
In the following, we shall assume that the primordial power spectrum
is in good approximation described by the scale-invariant
Harrison-Zel'dovich spectrum
\begin{equation}
\label{Eq:Harrison_Zeldovich}
P_\Phi(\beta) \; = \; \frac \alpha{\beta(\beta^2-1)}
\hspace{10pt} .
\end{equation}
Here $\alpha$ is a normalization factor
which will be determined from the CMB data.
The {\it temperature fluctuations} $\delta T(\hat n)$
of the microwave sky can be expanded into real spherical harmonics
$\tilde Y_{lm}(\hat n)$ on ${\cal S}^2$,
\begin{equation}
\label{Eq:delta_T_expansion}
\delta T(\hat n) \; := \;
\sum_{l=2}^\infty \sum_{m=-l}^l \, a_{lm} \, \tilde Y_{lm}(\hat n)
\hspace{10pt} ,
\end{equation}
where the monopole and dipole terms, $l=0,1$, are not included
in the sum (\ref{Eq:delta_T_expansion}).
From the real expansion coefficients $a_{lm}$ one forms
the {\it multipole moments}
\begin{equation}
\label{Eq:C_l}
C_l \; := \;
\frac 1{2l+1} \, \Big< \sum_{m=-l}^l \, \big( a_{lm} \big)^2 \, \Big>
\end{equation}
and the {\it angular power spectrum}
\begin{equation}
\label{Eq:angular_power_spectrum}
\delta T_l^2 \; := \; \frac{l(l+1)}{2\pi} \, C_l
\hspace{10pt} .
\end{equation}
The average $\left< \dots \right>$ in (\ref{Eq:C_l}) denotes an
ensemble average over the primordial perturbations as in
eq.(\ref{Eq:covariance}),
respectively an ensemble average over the universal observers.
Inserting the expansions (\ref{Eq:grav_potential}) and
(\ref{Eq:expansion_in_S3}) into the approximation
$\frac{\delta T^{\hbox{\scriptsize SW}}(\hat n)}T \simeq
\frac 13 \Phi(\eta_{\hbox{\scriptsize SLS}},\tau_{\hbox{\scriptsize SLS}},
\theta,\phi)$
to the Sachs-Wolfe formula (\ref{Eq:NSW})
and using the explicit expression for the eigenfunctions
$\psi_{\beta lm}^{{\cal S}^3}$ on ${\cal S}^3$,
\begin{equation}
\label{Eq:modes_S3}
\psi_{\beta lm}^{{\cal S}^3}(\vec x\,) \; = \;
R_{\beta l}(\tau) \, \tilde Y_{lm}(\theta,\phi)
\hspace{10pt} ,
\end{equation}
one arrives \cite{Aurich_Lustig_Steiner_2004c}
with the help of relation (\ref{Eq:Conjecture}) at the following
expression for the ordinary Sachs-Wolfe contribution to the multipole
moments for a given spherical space ${\cal M} = {\cal S}^3/\Gamma$
\begin{equation}
\label{Eq:Cl_NSW}
C_l^{\hbox{\scriptsize SW}}({\cal M}) \; = \;
\frac{N}{9} {\sum_{\beta > l}}' \;
\frac{r^{\cal M}(\beta)}{\beta^2} \, P_\Phi(\beta) \,
g_\beta^2(\eta_{\hbox{\scriptsize SLS}}) \,
R_{\beta l}^2(\tau_{\hbox{\scriptsize SLS}})
\hspace{10pt} .
\end{equation}
Here $R_{\beta l}(\tau)$ denote the
``radial functions'' on ${\cal S}^3$
which can be expressed in terms of Gegenbauer polynomials,
see eq.(10) of \cite{Aurich_Lustig_Steiner_2004c}.
The expression (\ref{Eq:Cl_NSW}) shows in a transparent way
why the lowest multipoles are in general suppressed for the
multi-connected spherical spaces:
the suppression is the stronger, the more wave numbers $\beta$ are missing in
the vibrational spectrum.
Let us consider, for example, the quadrupole moment, $l=2$.
Then the summation over modes in (\ref{Eq:Cl_NSW}) runs for the
simply-connected manifold ${\cal S}^3$ over all natural numbers
with $\beta\ge 3$.
In contrast, for the Poincar\'e dodecahedral space ${\cal D}$
there is a large gap, since all modes with $3\le\beta\le 12$ are absent
and the lowest contributing mode occurs only at $\beta=13$;
thus the missing modes lead to a suppression.
The suppression on ${\cal D}$ gets even stronger,
because there exist only the three modes $\beta=21,25$ and 31
in the wave number interval $13<\beta<33$
(see table \ref{Tab:Spectrum}).
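This counting can be reproduced directly from Table \ref{Tab:Spectrum}.
The following short Python sketch (purely illustrative and not part of the actual analysis)
evaluates the tabulated multiplicity formula for $I^\star$ and lists the odd wave numbers
below $\beta=35$ with non-vanishing multiplicity; it assumes, as the table suggests,
that the formula vanishes identically for the odd $\beta$ missing from the spectrum.
\begin{verbatim}
# Multiplicity r(beta) for the binary icosahedral group I*,
# transcribed from the table above; integer division realizes the brackets [.]
def r_Istar(beta):
    if beta % 2 == 0:            # only odd wave numbers occur
        return 0
    return beta * ((beta - 1) // 10 + (beta - 1) // 6
                   + (beta - 1) // 4 - (beta - 3) // 2)

spectrum = [b for b in range(1, 35, 2) if r_Istar(b) > 0]
print(spectrum)                        # -> [1, 13, 21, 25, 31, 33]
print([r_Istar(b) for b in spectrum])  # multiplicities, e.g. r(13) = 13
\end{verbatim}
In particular, between $\beta=3$ and $\beta=33$ only the modes $\beta=13,21,25$ and 31
contribute, in accordance with the gaps quoted above.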
Now, we would like to discuss this suppression mechanism in more detail.
Table \ref{Tab:Spectrum} shows that for all homogeneous spherical spaces
the allowed wave numbers $\beta$ are odd natural numbers,
except for the homogeneous lens spaces ${\cal S}^3/Z_m$ with
$m$ odd $\ge 1$, for which $\beta$ runs through all natural numbers
with $\beta \ge m+1$ in addition to the lowest odd wave numbers
between 1 and $m$.
While for the homogeneous lens spaces $Z_m$ with $m$ even $\ge 2$
the $\beta$ spectrum consists of all odd natural numbers,
all other spherical spaces possess ``gaps'' in the spectrum
at low wave numbers below a given threshold
$\beta_{\hbox{\scriptsize th}}$.
As can be seen from Table \ref{Tab:Spectrum}, the threshold value
$\beta_{\hbox{\scriptsize th}}$ increases if the volume $V({\cal M})$
decreases,
i.\,e.\ $\beta_{\hbox{\scriptsize th}}=4[(m+1)/2]+1$, 13, 25 and 61 for the
spaces belonging to the groups $D^\star_{4m}$, $T^\star$, $O^\star$ and
$I^\star$, respectively.
This demonstrates clearly the important r\^ole played by the spatial volume,
i.\,e.\ the topology of the Universe.
\begin{table}
\hspace*{-2cm}\begin{tabular}{|c|c|c|}
\hline
$\Gamma$ & wave number spectrum $\{\beta\}$ of manifold ${\cal M} = {\cal S}^3/\Gamma$ &
multiplicity $r^{\cal M}(\beta)$ \\
\hline
$Z_1$ & $\mathbb{N}$ & $\beta^2$ \\
\hline
$Z_m$, $m$ odd $\ge 1$ & $\{1,3,5,\dots,m\} \cup \{n|n\ge m+1\}$ &
$\beta\sum_{\beta-1\equiv 2l(m); 0\le l \le \beta-1} 1$ \\
\hline
$Z_m$, $m$ even $\ge 2$ & $2\mathbb{N}+1$ &
$\beta\sum_{\beta-1\equiv 2l(m); 0\le l \le \beta-1} 1$ \\
\hline
$D_{4m}^\star$, $m\ge 2$ &
$\{1,5,9,\dots,4\left[\frac{m+1}2\right]+1\} \cup
\{2n+1| n\ge 2\left[\frac{m+1}2\right]+1\}$ &
$\beta \left(\left[\frac{\beta-1}{2m}\right]+1\right)$ for
$\beta \in \{4n+1|n\ge 0\}$
\\ & &
$\beta \left[\frac{\beta-1}{2m}\right]$ for
$\beta \in \{4n+3|n\ge\left[\frac{m+1}2\right] \}$
\\
\hline
$T^\star$ & $\{1,7,9\} \cup \{2n+1|n\ge 6\}$ &
$\beta\left( 2 \left[\frac{\beta-1}6\right] + \left[\frac{\beta-1}4\right] -
\frac{\beta-3}2\right)$ \\
\hline
$O^\star$ & $\{1,9,13,17,19,21\} \cup \{2n+1| n\ge 12\}$ &
$\beta\left( \left[\frac{\beta-1}8\right] + \left[\frac{\beta-1}6\right] +
\left[\frac{\beta-1}4\right] - \frac{\beta-3}2\right)$ \\
\hline
$I^\star$ & $\{1,13,21,25,31,33,37,41,43,45,49,51,53,55,57\}$ &
$\beta\left( \left[\frac{\beta-1}{10}\right] + \left[\frac{\beta-1}6\right] +
\left[\frac{\beta-1}4\right] - \frac{\beta-3}2 \right)$ \\
& $\cup \, \{2n+1, n \ge 30\}$ & \\
\hline
\end{tabular}
\caption{\label{Tab:Spectrum}
The wave number spectrum $\{\beta\}$ and the multiplicities $r^{\cal M}(\beta)$
for the spherical space forms ${\cal M}={\cal S}^3/\Gamma$.
}
\end{table}
In addition to the wave number gaps, there is another important imprint
of topology on the multipole moments $C_l$ due to the multiplicities
$r^{\cal M}(\beta)$, also given in Table \ref{Tab:Spectrum},
by which the vibrational modes are weighted in the mode sum (\ref{Eq:Cl_NSW}).
Again, this effect is significant for the low vibrational modes and
thus for the low multipole moments which can be considered as carrying the
fingerprints of the topology of the Universe.
On the other hand, for the large multipole moments,
$l \gg \beta_{\hbox{\scriptsize th}}$,
the details of the different topologies get washed out due to the
identical mean asymptotic behaviour of the multiplicities for all
multi-connected spaces ${\cal M}$ (except $Z_m$, $m$ odd) given by
\begin{equation}
\label{Eq:multiplicity_asymptotic}
r^{\cal M}(\beta) \; = \;
\frac{2}{N} \, \beta^2 \, + \, \dots \; = \;
\frac{V({\cal M})}{\pi^2} \, \beta^2 \, + \, \dots
\hspace{10pt} \hbox{ for } \hspace{10pt}
\beta \to \infty
\end{equation}
which leads to the universal formula ($l \gg \beta_{\hbox{\scriptsize th}}$)
\begin{equation}
\label{Eq:Cl_NSW_asymptotic}
C_l^{\hbox{\scriptsize SW}}({\cal M}) \; \simeq \;
\frac{2}{9} \sum_{\beta {\,\hbox{\scriptsize odd}}, \beta>l}^\infty \;
P_\Phi(\beta) \, g_\beta^2(\eta_{\hbox{\scriptsize SLS}}) \,
R_{\beta l}^2(\tau_{\hbox{\scriptsize SLS}})
\hspace{10pt} .
\end{equation}
For the homogeneous lens spaces $Z_m$, $m$ odd $\ge 1$,
one obtains instead
\begin{equation}
\label{Eq:multiplicity_asymptotic_Zm_odd}
r^{\cal M}(\beta) \; = \;
\frac{1}{N} \, \beta^2 \, + \, \dots
\hspace{10pt} \hbox{ for } \hspace{10pt}
\beta \to \infty
\hspace{10pt} ,
\end{equation}
and thus from (\ref{Eq:Cl_NSW}) for $l \gg \beta_{\hbox{\scriptsize th}}$
\begin{equation}
\label{Eq:Cl_NSW_asymptotic_Zm_odd}
C_l^{\hbox{\scriptsize SW}}(Z_m) \; \simeq \;
\frac{1}{9} \sum_{\beta=l+1}^\infty \;
P_\Phi(\beta) \, g_\beta^2(\eta_{\hbox{\scriptsize SLS}}) \,
R_{\beta l}^2(\tau_{\hbox{\scriptsize SLS}})
\end{equation}
which should not differ much numerically from
the expression (\ref{Eq:Cl_NSW_asymptotic}),
where the additional factor 2 compensates for the even
$\beta$-values missing in the sum (\ref{Eq:Cl_NSW_asymptotic}).
The asymptotic behaviour given in eqs.\,(\ref{Eq:multiplicity_asymptotic})
and (\ref{Eq:multiplicity_asymptotic_Zm_odd}) is in agreement with
Weyl's law, as can be seen as follows $(k\to\infty)$:
\begin{eqnarray} \nonumber
{\cal N}^{\cal M}(k) & := & \#\{\beta| \beta \leq k \} \; = \;
{\sum_{\beta\le k}} ' r^{\cal M}(\beta)
\\ & = &
\label{Eq:Weyl_gen}
\sum_{\beta {\,\hbox{\scriptsize odd}}, \beta\le k}
\left(\frac 2N \beta^2\right)
\, + \, \dots \; = \;
\frac{V({\cal M})}{6\pi^2} \, k^3 \, + \, \dots
\end{eqnarray}
for $\Gamma \neq Z_m, m$ odd $\ge 1$, and
\begin{equation}
\label{Eq:Weyl_Zm_odd}
{\cal N}^{\cal M}(k) \; = \;
\sum_{\beta=1}^k \left(\frac 1N \beta^2\right)
\, + \, \dots \; = \;
\frac{V({\cal M})}{6\pi^2} \, k^3 \, + \, \dots
\end{equation}
for $\Gamma = Z_m, m$ odd $\ge 1$.
One thus sees that the ordinary Sachs-Wolfe contribution
to the large multipoles is identical for all spherical spaces,
including ${\cal S}^3$, in agreement with the expectation
that the cosmic topology is most clearly seen at large scales.
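The asymptotic behaviour (\ref{Eq:multiplicity_asymptotic}) and the Weyl law
(\ref{Eq:Weyl_gen}) can be illustrated by the following sketch (again purely illustrative),
which sums the multiplicities of Table \ref{Tab:Spectrum} for $T^\star$, $O^\star$ and
$I^\star$ up to a cut-off $k$ and compares the counting function ${\cal N}^{\cal M}(k)$
with the leading Weyl term $V({\cal M})k^3/(6\pi^2)=k^3/(3N)$.
\begin{verbatim}
# Counting function N(k) = sum_{beta <= k} r(beta) versus the Weyl term
# k^3/(3N), using V(M) = 2 pi^2 / N on S^3/Gamma.
def r(beta, divisors):
    # common form of the T*, O*, I* entries of the table above; the
    # formula returns 0 for the odd wave numbers outside the spectrum
    if beta % 2 == 0:
        return 0
    return beta * (sum((beta - 1) // d for d in divisors) - (beta - 3) // 2)

groups = {"T*": ([6, 6, 4], 24),   # 2[(beta-1)/6] written as divisor 6 twice
          "O*": ([8, 6, 4], 48),
          "I*": ([10, 6, 4], 120)}

k = 501
for name, (divisors, N) in groups.items():
    Nk = sum(r(b, divisors) for b in range(1, k + 1))
    weyl = k**3 / (3 * N)
    print(name, Nk, weyl, Nk / weyl)  # the ratio approaches 1 for large k
\end{verbatim}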
The above discussion has shown how the topology of the Universe influences
the CMB anisotropy at large scales via the vibrational modes.
The whole story is, however, more subtle, since an additional
$l$-dependence comes in eq.(\ref{Eq:Cl_NSW}) from the radial functions
$R_{\beta l}(\tau_{\hbox{\scriptsize SLS}})$, and furthermore,
there is an important dependence of the multipoles on the
curvature radius respectively on $\Omega_{\hbox{\scriptsize tot}}$
which is determined by both the time evolution via
$g_\beta(\eta_{\hbox{\scriptsize SLS}})$ and the radial functions.
The interplay of all these effects is responsible for the rather
complicated structure displayed by the numerical computations
to be illustrated below.
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/cl_l_2_bis_10_tot_tetraeder_m28__h70.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/tetraeder_s_und_d_statistik_mat_028_bar_0046_h_70_S_statistik.ps}
\put(-175,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Statistic_Tetraeder}
The binary tetrahedral group $T^\star$.
Panel (a) shows the $\Omega_{\hbox{\scriptsize tot}}$ dependence
of the mean value of the first three angular power moments $\delta T_l^2$.
The horizontal lines indicate the corresponding WMAP values for $\delta T_l^2$.
Panel (b) displays the $S(\rho)$ statistics
($\rho=60^\circ$ full curve and $\rho=20^\circ$ dotted curve)
in units of $\mu\hbox{K}^4$
in dependence on $\Omega_{\hbox{\scriptsize tot}}$.
The corresponding WMAP values are indicated as horizontal lines.
}
\end{figure}
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/cl_l_2_bis_10_tot_oktaeder_m28__h70.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/oktaeder_s_und_d_statistik_mat_028_bar_0046_h_70_S_statistik.ps}
\put(-175,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Statistic_Oktaeder}
The same as in figure \ref{Fig:Statistic_Tetraeder}
for the binary octahedral group $O^\star$.
}
\end{figure}
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/cl_l_2_bis_10_tot_dodekaeder_m28__h70.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/dodekaeder_s_und_d_statistik_mat_028_bar_0046_h_70_S_statistik.ps}
\put(-175,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Statistic_Dodekaeder}
The same as in figure \ref{Fig:Statistic_Tetraeder}
for the binary icosahedral group $I^\star$.
}
\end{figure}
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/cl_l_2_bis_10_tot_dihedral_4x2_m28__h70.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/dihedral_4x2_s_und_d_statistik_mat_028_bar_0046_h_70_S_statistik.ps}
\put(-175,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Statistic_dihedral_4x2}
The same as in figure \ref{Fig:Statistic_Tetraeder}
for the binary dihedral group $D^\star_8$, i.\,e.\ $m=2$.
}
\end{figure}
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/cl_l_2_bis_10_tot_dihedral_4x5_m28__h70.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/dihedral_4x5_s_und_d_statistik_mat_028_bar_0046_h_70_S_statistik.ps}
\put(-175,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Statistic_dihedral_4x5}
The same as in figure \ref{Fig:Statistic_Tetraeder}
for the binary dihedral group $D^\star_{20}$, i.\,e.\ $m=5$.
}
\end{figure}
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/cl_l_2_bis_10_tot_zyklisch_1_s3_m28__h70.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/zyklisch_1_s3_s_und_d_statistik_mat_028_bar_0046_h_70_S_statistik.ps}
\put(-175,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Statistic_zyklisch_1}
The same as in figure \ref{Fig:Statistic_Tetraeder}
for the cyclic group $Z_1$, i.\,e.\ for the space ${\cal S}^3$.
}
\end{figure}
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/cl_l_2_bis_10_tot_zyklisch_2_p3_m28__h70.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/zyklisch_2_p3_s_und_d_statistik_mat_028_bar_0046_h_70_S_statistik.ps}
\put(-175,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Statistic_zyklisch_2}
The same as in figure \ref{Fig:Statistic_Tetraeder}
for the cyclic group $Z_2$, i.\,e.\ for the projective space ${\cal P}^3$.
}
\end{figure}
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/cl_l_2_bis_10_tot_zyklisch_6_m28__h70.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/zyklisch_6_s_und_d_statistik_mat_028_bar_0046_h_70_S_statistik.ps}
\put(-175,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Statistic_zyklisch_6}
The same as in figure \ref{Fig:Statistic_Tetraeder}
for the cyclic group $Z_6$.
}
\end{figure}
Let us now discuss the results for the different spherical space forms.
In all these computations the CMB anisotropy is obtained using the complete
Sachs-Wolfe formula $(\tau(\eta):=\eta_0 - \eta)$
\begin{eqnarray} \nonumber \hspace{-50pt}
\frac{\delta T}T(\hat n) & \hspace*{-15pt}= &
{\sum_{\beta\ge 3}} '\; \sum_{i=1}^{r^{\cal M}(\beta)} \;
\left[ \left( \Phi_\beta^i(\eta) +
\frac{\delta_{\gamma,\beta}^i(\eta)}4 +
\frac{a(\eta) V_{\gamma,\beta}^i(\eta)}{E_\beta}
\frac{\partial}{\partial \tau} \right)
\Psi_\beta^{{\cal M},i}(\tau(\eta),\theta,\phi)
\right]_{\eta=\eta_{\hbox{\scriptsize{SLS}}}}
\\ & & \hspace{10pt}
\label{Eq:Sachs_Wolfe_tight_coupling}
\, + \,
2 \; {\sum_{\beta\ge 3}} '\; \sum_{i=1}^{r^{\cal M}(\beta)} \;
\int_{\eta_{\hbox{\scriptsize{SLS}}}}^{\eta_0} d\eta \,
\frac{\partial\Phi_\beta^i(\eta)}{\partial\eta} \,
\Psi_\beta^{{\cal M},i}(\tau(\eta),\theta,\phi)
\hspace{10pt} .
\end{eqnarray}
The details of the computation of the quantities needed in
(\ref{Eq:Sachs_Wolfe_tight_coupling}) are described in
\cite{Aurich_Lustig_Steiner_Then_2004a}.
The first two terms in (\ref{Eq:Sachs_Wolfe_tight_coupling})
are the ordinary Sachs-Wolfe contribution (\ref{Eq:NSW})
discussed above.
The next term involving the spatial covariant divergence of
the velocity field is the Doppler contribution.
The integral over the photon path yields the
integrated Sachs-Wolfe contribution.
The $\beta_{\hbox{\scriptsize max}}$ cut-off is chosen sufficiently
high in order to obtain enough
multipoles which then enables us to normalise the $\delta T_l^2$ spectrum
in the range $l=20$ to 45 according to the WMAP first-year data
\cite{Bennett_et_al_2003}.
In our computations we use
$\beta_{\hbox{\scriptsize max}}=3001$ for
$\Omega_{\hbox{\scriptsize tot}} \leq 1.05$,
$\beta_{\hbox{\scriptsize max}}=2001$ for
$1.05 < \Omega_{\hbox{\scriptsize tot}} \leq 1.1$, and
$\beta_{\hbox{\scriptsize max}}=1501$ for
$1.1 < \Omega_{\hbox{\scriptsize tot}} \leq 1.2$.
In the following we fix the cosmological parameters as
$\Omega_{\hbox{\scriptsize bar}} = 0.046$,
$\Omega_{\hbox{\scriptsize mat}} = 0.28$, and $h=70$,
in agreement with the WMAP data.
The free parameter is
$\Omega_\Lambda = \Omega_{\hbox{\scriptsize tot}} -
\Omega_{\hbox{\scriptsize mat}}-\Omega_{\hbox{\scriptsize rad}}$,
i.\,e.\ the density corresponding to the cosmological constant.
Although $\Omega_\Lambda$ is varied, we show in the following
figures $\Omega_{\hbox{\scriptsize tot}}$ on the abscissa.
In figures \ref{Fig:Statistic_Tetraeder} to \ref{Fig:Statistic_zyklisch_6}
we show the dependence of the large scale CMB anisotropy
on $\Omega_{\hbox{\scriptsize tot}}$ for
the binary tetrahedral space, the binary octahedral space,
the dodecahedral space,
two binary dihedral spaces and three spaces belonging to cyclic groups.
In panel (a) the expectation values of the
angular power spectra $\delta T_l^2$ are shown
for the first three multipole moments $\delta T_2^2$ (solid curve),
$\delta T_3^2$ (dashed curve) and $\delta T_4^2$ (dotted curve).
(Panel (b) in figures \ref{Fig:Statistic_Tetraeder} to
\ref{Fig:Statistic_zyklisch_6} will be discussed below.)
The corresponding values measured by the WMAP team are indicated
as straight horizontal lines.
With respect to the large scale anisotropy,
the preferred values of $\Omega_{\hbox{\scriptsize tot}}$ are
those which yield the best agreement with the data.
For example, consider the binary tetrahedral space
shown in figure \ref{Fig:Statistic_Tetraeder}.
Since the density $\Omega_{\hbox{\scriptsize tot}}$ should be
as close to one as possible, one chooses the first minimum of $\delta T_2^2$,
i.\,e.\ the range $\Omega_{\hbox{\scriptsize tot}}=1.06\dots 1.07$.
In this range the value of $\delta T_3^2$ is also near to the observed one.
One should note that these values are only the expectation values such
that for a given realization one has also to take into account
the cosmic variance.
However, as the simulations for such realizations show,
the probability for a given spherical space increases significantly
when the expectation values are already near to the observed values.
In this way one gets for the binary octahedral space
(figure \ref{Fig:Statistic_Oktaeder})
the range $\Omega_{\hbox{\scriptsize tot}}=1.03\dots 1.04$
and for the dodecahedron (figure \ref{Fig:Statistic_Dodekaeder})
$\Omega_{\hbox{\scriptsize tot}}=1.015\dots 1.02$.
For none of the considered binary dihedral spaces is a comparably good
agreement found.
We have computed the CMB anisotropy for the binary dihedral groups
$D_{4m}^\star$ with $m=2$, 3, 4, 5, 10, 20, and 30.
In figures \ref{Fig:Statistic_dihedral_4x2} and
\ref{Fig:Statistic_dihedral_4x5} we show as two typical examples
the cases $m=2$ and $m=5$, respectively.
The first multipole moment $\delta T_2^2$ decreases for all
considered groups $D_{4m}^\star$ only for unrealistically high values of
$\Omega_{\hbox{\scriptsize tot}}>1.1$.
In this parameter range the values for $\delta T_4^2$ are very large.
Thus a binary dihedral topology seems to be very improbable as a model
for our Universe.
The cyclic groups $Z_m$ also fare much worse than
the binary tetrahedral space, the binary octahedral space,
or the dodecahedron.
We have computed all groups $Z_m$ with $m\le 20$ and
many further examples up to $m=500$.
As three examples we present in figures \ref{Fig:Statistic_zyklisch_1} to
\ref{Fig:Statistic_zyklisch_6} the models for $m=1$, 2, and 6, respectively.
The group $Z_1$ has only the identity as a group element and
thus leads to the usual spherical space ${\cal S}^3$.
One observes the known difficulty of the concordance model,
i.\,e.\ too large values for the two lowest multipole moments.
The next group shown is $Z_2$ leading to
the projective space ${\cal P}^3$, also known as elliptic space,
which has historically played a special role as an example of
an alternative to the spherical space ${\cal S}^3$.
For $\Omega_{\hbox{\scriptsize tot}}<1.1$ the first multipole moments
behave very similarly to the former case with $m=1$,
so that this topology does not provide a better match than the concordance model.
Models with larger groups $Z_m$ lead to a quadrupole moment
which increases with increasing $\Omega_{\hbox{\scriptsize tot}}$, in general.
As an example, figure \ref{Fig:Statistic_zyklisch_6} displays the
result for $Z_6$.
The quadrupole moment $\delta T_2^2$ shows a very small minimum around
$\Omega_{\hbox{\scriptsize tot}}=1.05$ and increases for higher
values of $\Omega_{\hbox{\scriptsize tot}}$.
Although there are minima in many cyclic models,
they never suppress the power of $\delta T_2^2$ even to the level of
the observed value for $\delta T_4^2$,
see figure \ref{Fig:Statistic_zyklisch_6}.
The fact that lens spaces do not fit the CMB anisotropy well
is already discussed in \cite{Uzan_Riazuelo_Lehoucq_Weeks_2003},
where this behaviour is ascribed to their not well-proportioned
fundamental cells.
Therefore, from all spherical spaces,
only the binary tetrahedral space, the binary octahedral space,
and the dodecahedron display for the given parameter ranges
the observed suppression in the large anisotropy power.
Up to now, we have only discussed the angular power spectrum $\delta T_l^2$.
Let us now come to the {\it temperature two-point correlation function}
$C(\vartheta)$, which is defined as
$C(\vartheta) := \left< \delta T(\hat n) \delta T(\hat n')\right>$
with $\hat n \cdot \hat n' = \cos\vartheta$.
It can be computed from the multipole moments (\ref{Eq:C_l})
under the assumption of statistical isotropy as
\begin{equation}
\label{Eq:C_theta}
C(\vartheta) \; \simeq \;
\frac 1{4\pi} \, \sum_{l=2}^\infty \, (2l+1) \, C_l \, P_l(\cos\vartheta)
\hspace{10pt} .
\end{equation}
This quantity is well suited to measure the large scale power,
as emphasised in \cite{Spergel_et_al_2003},
where the observations are compared with the theoretical models.
A comparison of $C(\vartheta)$ observed by WMAP with the concordance model
can also be found in figure 1 of reference \cite{Aurich_Lustig_Steiner_2004c}.
(In \cite{Aurich_Lustig_Steiner_2004c} we have derived an analytic
expression for $C^{\hbox{\scriptsize SW}}(\vartheta)$,
i.\,e.\ for the SW contribution in the case of the dodecahedron,
which after multiplication by $N/120$ holds for all homogeneous
spherical space forms.)
The correlation function $C(\vartheta)$ displays a surprisingly low
CMB anisotropy on large angular scales $\vartheta \ge \rho$,
which can be quantified by the $S(\rho)$ statistic \cite{Bennett_et_al_2003}
\begin{equation}
\label{Eq:S-Statistik}
S(\rho) \; = \;
\int_{-1}^{\cos\rho} \big| C(\vartheta) \big|^2 \; d\cos\vartheta
\end{equation}
which is discussed for the first-year WMAP data in \cite{Spergel_et_al_2003}
for $\rho=60^\circ$,
where it is found that only $0.3\%$ of the simulations based on the
concordance model have lower values of $S(60^\circ)$
than the observed value $S(60^\circ)=1644$.
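For completeness we indicate how $C(\vartheta)$ and $S(\rho)$ are evaluated in practice
from a given set of multipole moments. The sketch below (purely illustrative, assuming
NumPy and SciPy; the array \texttt{Cl} has to be filled with the $C_l$ of the model under
consideration, here replaced by a flat toy spectrum) implements
eqs.\,(\ref{Eq:C_theta}) and (\ref{Eq:S-Statistik}) with a simple trapezoidal rule in
$\cos\vartheta$.
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def C_theta(Cl, theta, lmin=2):
    # two-point correlation C(theta) from the multipole moments C_l
    ls = np.arange(lmin, lmin + len(Cl))
    x = np.cos(theta)
    return sum((2 * l + 1) / (4 * np.pi) * c * eval_legendre(l, x)
               for l, c in zip(ls, Cl))

def S_statistic(Cl, rho_deg, lmin=2, npts=2000):
    # S(rho) = int_{-1}^{cos rho} C(theta)^2 d cos(theta)
    x = np.linspace(-1.0, np.cos(np.radians(rho_deg)), npts)
    C = C_theta(Cl, np.arccos(x), lmin)
    return np.sum(0.5 * (C[1:]**2 + C[:-1]**2) * np.diff(x))

# toy example: flat band power delta T_l^2 = 1000 muK^2 for l = 2..30
ls = np.arange(2, 31)
Cl = 2 * np.pi * 1000.0 / (ls * (ls + 1))
print(S_statistic(Cl, 60.0), S_statistic(Cl, 20.0))
\end{verbatim}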
The $S(\rho)$ statistic is shown for the above discussed spherical spaces
in figures \ref{Fig:Statistic_Tetraeder}
to \ref{Fig:Statistic_zyklisch_6} in panel (b)
for $\rho=60^\circ$ (solid curves) and $\rho=20^\circ$ (dotted curves).
The corresponding WMAP values are indicated as straight horizontal lines.
The inspection of these figures leads to the same result as
the above discussion of the first three angular power moments $\delta T_l^2$.
The binary tetrahedral space possesses a sufficiently low power in the
range $\Omega_{\hbox{\scriptsize tot}}=1.06\dots 1.07$,
the binary octahedral space in the range
$\Omega_{\hbox{\scriptsize tot}}=1.03\dots 1.04$,
and the dodecahedral space in the range
$\Omega_{\hbox{\scriptsize tot}}=1.015\dots 1.02$.
For the other spherical spaces no comparable agreement is found
as can be seen in the case of the two binary dihedral spaces
(figures \ref{Fig:Statistic_dihedral_4x2}(b) and
\ref{Fig:Statistic_dihedral_4x5}(b))
and the three cyclic groups
(figures \ref{Fig:Statistic_zyklisch_1}(b) to
\ref{Fig:Statistic_zyklisch_6}(b)).
\begin{figure}[htb]
\vspace*{-2.5cm}
\begin{center}
\hspace*{-45pt}
\hspace*{-80pt}\begin{minipage}{14cm}
\begin{minipage}{6cm}
\includegraphics[width=9.0cm]{psplots/tau_sls_vs_Omega_tot.ps}
\put(-180,155){(a)}
\end{minipage}
\begin{minipage}{6cm}
\hspace*{50pt}\includegraphics[width=9.0cm]{psplots/radialfunction.ps}
\put(-150,155){(b)}
\end{minipage}
\end{minipage}
\end{center}
\vspace*{-1.3cm}
\caption{\label{Fig:Radialfunction}
Panel (a) shows the dependence of the conformal distance
$\tau_{\hbox{\scriptsize SLS}}$ to the surface of last scattering
on the density $\Omega_{\hbox{\scriptsize tot}}$.
Panel (b) shows the square of the radial function $R_{\beta l}^2(\tau)$
for the first eigenvalue occurring in (\ref{Eq:Sachs_Wolfe_tight_coupling})
for the quadrupole moment $l=2$
for the binary tetrahedral space $(\beta=7)$,
the binary octahedral space $(\beta=9)$,
and the dodecahedron $(\beta=13)$.
}
\end{figure}
Now, we would like to demonstrate the strong influence
of the radial function $R_{\beta l}^2(\tau)$
on the suppression of power in the case of the quadrupole moment $l=2$
already discussed above.
The quadrupole suppression gets stronger,
the higher the value of the first contributing eigenvalue is,
see equation (\ref{Eq:Cl_NSW}).
In addition, it is seen from equation (\ref{Eq:Cl_NSW})
that the term belonging to the first eigenvalue $\beta$ is multiplied
by $R_{\beta l}^2(\tau)$.
Thus the contribution of the first eigenvalue is eliminated
for those values of $\Omega_{\hbox{\scriptsize tot}}$
which belong to a value of $\tau_{\hbox{\scriptsize SLS}}$
at which $R_{\beta l}(\tau_{\hbox{\scriptsize SLS}})$ is zero.
The dependence of $\tau_{\hbox{\scriptsize SLS}}$ on
the density $\Omega_{\hbox{\scriptsize tot}}$ is shown
in figure \ref{Fig:Radialfunction}(a)
for our choice of cosmological parameters
and is well described by
$\tau_{\hbox{\scriptsize SLS}} =
3.32 \sqrt{\Omega_{\hbox{\scriptsize tot}} - 1}$
for $1< \Omega_{\hbox{\scriptsize tot}} < 1.1$.
In figure \ref{Fig:Radialfunction}(b) the square of the radial function
$R_{\beta l}^2(\tau)$ is shown for the first eigenvalue
of the binary tetrahedral space $(\beta=7)$,
the binary octahedral space $(\beta=9)$,
and the dodecahedron $(\beta=13)$ for $l=2$.
One observes that the quadrupole suppression due to the first
zero of the radial function is maximal in the case of the
binary tetrahedral space at $\Omega_{\hbox{\scriptsize tot}}\simeq 1.064$,
at $\Omega_{\hbox{\scriptsize tot}}\simeq 1.038$
for the binary octahedral space,
and at $\Omega_{\hbox{\scriptsize tot}}\simeq 1.018$
for the dodecahedral space.
This matches very well the previously found intervals
in which these models show a strong anisotropy suppression.
Thus, the radial function has an important influence on the
suppression for a given spherical topology.
Note that the zero of the radial function eliminates many eigenfunctions
due to the high degeneracy,
e.\,g.\ the first 13 eigenfunctions in the case of the dodecahedron.
This does not happen in such a dramatic way in models with negative curvature,
where one has no degeneracies at all,
i.\,e.\ all eigenvalues have multiplicity one, in general.
Then the radial function can only suppress a single eigenfunction
and not a ``cluster'' of them.
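The position of these zeros is easily reproduced numerically. The sketch below
(purely illustrative) assumes the standard representation of the radial functions on
${\cal S}^3$ in terms of Gegenbauer polynomials,
$R_{\beta l}(\tau)\propto\sin^l\!\tau\,C^{\,l+1}_{\beta-l-1}(\cos\tau)$,
quoted here only up to a $\tau$-independent normalization
(the explicit formula is eq.(10) of \cite{Aurich_Lustig_Steiner_2004c}),
together with the fit
$\tau_{\hbox{\scriptsize SLS}}\simeq 3.32\sqrt{\Omega_{\hbox{\scriptsize tot}}-1}$
given above.
\begin{verbatim}
import numpy as np
from scipy.special import eval_gegenbauer
from scipy.optimize import brentq

def R(beta, l, tau):
    # radial function on S^3 up to a tau-independent normalization
    return np.sin(tau)**l * eval_gegenbauer(beta - l - 1, l + 1, np.cos(tau))

for beta in (7, 9, 13):    # first contributing mode for l = 2: T*, O*, I*
    taus = np.linspace(1e-3, 1.5, 2000)
    vals = R(beta, 2, taus)
    i = np.argmax(vals[:-1] * vals[1:] < 0)          # first sign change
    tau0 = brentq(lambda t: R(beta, 2, t), taus[i], taus[i + 1])
    Omega = 1.0 + (tau0 / 3.32)**2                   # invert the fit above
    print(beta, round(tau0, 3), round(Omega, 3))
# expected: Omega_tot ~ 1.064 (beta=7), 1.038 (beta=9), 1.018 (beta=13)
\end{verbatim}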
\begin{figure}[htb]
\begin{center}
\begin{minipage}{6cm}
\hspace*{-145pt}
\includegraphics[width=8.0cm,angle=270]{psplots/CMB_Tetraeder_small.ps}
\end{minipage}
\end{center}
\vspace*{-0.0cm}
\caption{\label{Fig:CMB_Tetrahedron}
The temperature fluctuation $\delta T/T$ of one
realization for the binary tetrahedral group $T^\star$ is shown
($\beta_{\hbox{\scriptsize max}}=155$).
The cosmological parameters
$\Omega_{\hbox{\scriptsize tot}}=1.065$,
$\Omega_\Lambda=0.785$ and $h=70$
are used.
}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{minipage}{6cm}
\hspace*{-145pt}
\includegraphics[width=8.0cm,angle=270]{psplots/CMB_Oktaeder_small.ps}
\end{minipage}
\end{center}
\vspace*{-0.0cm}
\caption{\label{Fig:CMB_Octahedron}
The temperature fluctuation $\delta T/T$ of one
realization for the binary octahedral group $O^\star$ is shown
($\beta_{\hbox{\scriptsize max}}=161$).
The cosmological parameters
$\Omega_{\hbox{\scriptsize tot}}=1.038$,
$\Omega_\Lambda=0.758$ and $h=70$
are used.
}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{minipage}{6cm}
\hspace*{-145pt}
\includegraphics[width=8.0cm,angle=270]{psplots/CMB_Dodecahedron_small.ps}
\end{minipage}
\end{center}
\vspace*{-0.0cm}
\caption{\label{Fig:CMB_Dodecahedron}
The temperature fluctuation $\delta T/T$ of one
realization for the binary icosahedral group $I^\star$ is shown
($\beta_{\hbox{\scriptsize max}}=185$).
The cosmological parameters
$\Omega_{\hbox{\scriptsize tot}}=1.018$,
$\Omega_\Lambda=0.738$ and $h=70$
are used.
}
\end{figure}
The angular power spectrum $\delta T_l^2$ as well as the
$S(\rho)$ statistic lead to the conclusion
that there are three best candidates with respect to spherical spaces,
i.\,e.\ the binary polyhedral spaces
${\cal S}^3/T^\star$, ${\cal S}^3/O^\star$, and ${\cal S}^3/I^\star$.
In figures \ref{Fig:CMB_Tetrahedron} to \ref{Fig:CMB_Dodecahedron}
we show the temperature fluctuation $\delta T/T$ in
the Mollweide projection for these three topological spaces,
where exactly those values of $\Omega_{\hbox{\scriptsize tot}}$ are used
which lead to a strong suppression of large scale power.
In these calculations we have used for the three spaces all modes below
the wave number cut-off $\beta_{\hbox{\scriptsize max}} = 155$,
161 and 185, respectively.
\begin{figure}[htb]
\begin{center}
\vspace*{-90pt}
\begin{minipage}{11cm}
\hspace*{-30pt}\includegraphics[width=11.0cm]{psplots/C_l_comp_WMAP_Tstar.ps}
\end{minipage}
\end{center}
\vspace*{-50pt}
\caption{\label{Fig:Cl_Tetrahedron}
The angular power spectrum $\delta T_l^2$ is shown for the
binary tetrahedral group $T^\star$ (open circles) using the same
cosmological parameters as in figure \ref{Fig:CMB_Tetrahedron}.
The angular power spectrum is shifted by $\Delta l=0.25$ in order
to enable a comparison with the first-year WMAP data (full diamonds).
The $1\sigma$ errors are shown.
}
\end{figure}
\begin{figure}[htb]
\begin{center}
\vspace*{-90pt}
\begin{minipage}{11cm}
\hspace*{-30pt}\includegraphics[width=11.0cm]{psplots/C_l_comp_WMAP_Ostar.ps}
\end{minipage}
\end{center}
\vspace*{-50pt}
\caption{\label{Fig:Cl_Oktahedron}
The angular power spectrum $\delta T_l^2$ is shown for the
binary octahedral group $O^\star$ (open circles) using the same
cosmological parameters as in figure \ref{Fig:CMB_Octahedron}.
}
\end{figure}
\begin{figure}[htb]
\begin{center}
\vspace*{-90pt}
\begin{minipage}{11cm}
\hspace*{-30pt}\includegraphics[width=11.0cm]{psplots/C_l_comp_WMAP_Istar.ps}
\end{minipage}
\end{center}
\vspace*{-50pt}
\caption{\label{Fig:Cl_Dodecahedron}
The angular power spectrum $\delta T_l^2$ is shown for the
binary icosahedral group $I^\star$ (open circles) using the same
cosmological parameters as in figure \ref{Fig:CMB_Dodecahedron}.
}
\end{figure}
In figures \ref{Fig:Cl_Tetrahedron} to \ref{Fig:Cl_Dodecahedron}
we show the angular power spectrum $\delta T_l^2$
for the binary polyhedral spaces
${\cal S}^3/T^\star$, ${\cal S}^3/O^\star$, and ${\cal S}^3/I^\star$,
where the same cosmological parameters as in figures
\ref{Fig:CMB_Tetrahedron} to \ref{Fig:CMB_Dodecahedron} are used.
The $1\sigma$ deviations are computed along the lines of
\cite{Gundermann_2005}.
Since the distributions for the lowest multipole moments are asymmetric,
i.\,e.\ not Gaussian,
these error bars should only be considered as providing the order of
magnitude of fluctuations in individual realizations.
In order to facilitate a comparison with the WMAP data,
shown as full diamonds together with their $1\sigma$ errors
not including the cosmic variance,
the spectra of the binary polyhedral spaces are shifted by
$\Delta l=0.25$.
The angular power spectra $\delta T_l^2$ for these three
binary polyhedral spaces are very similar
such that one is faced with a {\it topological degeneracy}
with respect to $\delta T_l^2$.
All three spectra display a good agreement with the WMAP data.
\section{Conclusion}
\label{Section_Conclusion}
In this paper we analyse the CMB anisotropy of
homogeneous 3-spaces of constant positive curvature
which are multi-connected and are given by the quotient of
${\cal S}^3$ by a group $\Gamma$ of covering transformations,
i.\,e.\ ${\cal M}={\cal S}^3/\Gamma$.
The motivation is provided by the surprisingly low power
in the CMB anisotropy at the largest scales as measured by
COBE and WMAP, and by the fact
that the mean value of $\Omega_{\hbox{\scriptsize tot}}$ reported by
WMAP is 1.020, which hints at a positively curved Universe.
In order to explain this low power,
one could modify the primordial power spectrum $P_\Phi(\beta)$,
e.\,g.\ by carefully choosing the inflationary scalar potential,
or by resorting to multi-connected space forms
which give a low CMB anisotropy at the largest scales
due to missing modes compared to the simply-connected ${\cal S}^3$,
in general.
We study all types of homogeneous multi-connected spherical space forms
and find no agreement for the cyclic groups $Z_m$
which show an enhanced power at the largest scales
despite their small volumes.
Also the binary dihedral groups $D^\star_{4m}$ do not lead to models
with a suppression significantly stronger than the simply-connected
${\cal S}^3$ universe.
Thus these models do not seem to provide viable space forms
as a model for our Universe.
This contrasts to the remaining three space forms,
the binary tetrahedral, the binary octahedral as well as
the dodecahedral space forms
which show a sufficiently strong suppression of large scale power
compared to the simply-connected ${\cal S}^3$ universe.
The binary tetrahedral space requires a density
$\Omega_{\hbox{\scriptsize tot}}$
in the range $1.06\dots 1.07$.
Since the WMAP team reported $\Omega_{\hbox{\scriptsize tot}}=1.02\pm0.02$,
this model is probably in conflict with the observations.
For the two remaining models the density $\Omega_{\hbox{\scriptsize tot}}$
should be in the range
$\Omega_{\hbox{\scriptsize tot}}=1.03\dots 1.04$
for the binary octahedral space, and
$\Omega_{\hbox{\scriptsize tot}}=1.015\dots 1.02$ for the dodecahedral space.
These values are compatible with the current observations.
Furthermore, we would like to remark that the binary octahedral space
displays a slightly stronger suppression of power than
the dodecahedral space,
as a comparison of figures \ref{Fig:Statistic_Oktaeder} and
\ref{Fig:Statistic_Dodekaeder} reveals.
A unique signal for a particular topology is provided by the
so-called circles-in-the-sky-signature proposed in
\cite{Cornish_Spergel_Starkman_1998b}.
Along two circles on the sky which are mapped onto each other
by the group $\Gamma$,
the ordinary Sachs-Wolfe effect produces the same temperature signal.
If there were no Doppler and integrated Sachs-Wolfe contributions,
see equation (\ref{Eq:Sachs_Wolfe_tight_coupling}),
which disturb this signal,
one would expect a clear signature of a given topology, if present.
In \cite{Aurich_Lustig_Steiner_2004c} we study the influence
of the latter two contributions on the circles-in-the-sky-signature
and find that the degradation of the signal is strong enough
such that the topology signal can be swamped.
Therefore, the fact that no circles are found in the WMAP sky maps
in \cite{Cornish_Spergel_Starkman_Komatsu_2003}
does not necessarily exclude the binary octahedral or the dodecahedral space
as viable models for our Universe.
In a forthcoming paper we will study the circles-in-the-sky
for the three best multi-connected spherical space forms,
and in particular shall discuss whether a combined search
over all circles of a given topology simultaneously can overcome
the degradations.
\section*{Acknowledgment}
One of us (F.S.) would like to thank the Theoretical Physics Division
of CERN for hospitality.
\section*{References}
\section{\setcounter{equation}{0}\oldsection}
\renewcommand\thesection{\arabic{section}}
\renewcommand\theequation{\thesection.\arabic{equation}}
\allowdisplaybreaks
\def\pf{\it{Proof.}\rm\quad}
\newcommand\ww{\bm {w}}
\newcommand\bb{\bm b}
\newcommand\vv{\bm v}
\newcommand\UU{\mbox{\bfseries U}}
\newcommand\FF{\mbox{\bfseries \itshape F}}
\newcommand\h{\mbox{\bfseries \itshape h}}\newcommand\dd{\mbox{d}}
\newcommand\g{\mbox{\bfseries \itshape g}}
\newcommand\xx{\mbox{\bfseries \itshape x}}
\def\R{\mathbb{R}}\def\pa{\partial}
\def\N{\mathbb{N}}\def\Z{\mathbb{Z}}
\newcommand\divg{{\text{div}}}
\newtheorem{defn}{Definition}[section]
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{re}{Remark}[section]
\newtheorem{exa}{Example}[section]
\setlength{\arraycolsep}{0.5mm}
\begin{document}
\title {\bf Identities for the multiple zeta (star) values}
\author{
{Ce Xu\thanks{Corresponding author. Email: 15959259051@163.com}}\\[1mm]
\small School of Mathematical Sciences, Xiamen University\\
\small Xiamen
361005, P.R. China}
\date{}
\maketitle \noindent{\bf Abstract } In this paper we prove some new identities for multiple zeta values and multiple zeta star values of arbitrary depth by using integral computations involving the logarithm function and iterated integral representations of series. By applying the formulas obtained, we prove that the multiple zeta star values whose indices are the sequences $(\bar 1,\{1\}_m,\bar 1)$ and $(2,\{1\}_m,\bar 1)$ can be expressed polynomially in terms of zeta values, polylogarithms and $\ln(2)$. Finally, we also evaluate several restricted sum formulas involving multiple zeta values.
\\[2mm]
\noindent{\bf Keywords} Multiple zeta value; multiple zeta star value; restricted sum formula.
\\[2mm]
\noindent{\bf AMS Subject Classifications (2010):} 11M06; 11M40; 40B05; 33E20.
\tableofcontents
\section{Introduction}
The study of special values of multiple zeta functions and multiple zeta star functions deals with relations between values at non-zero integer vectors ${\bf s}:=(s_1,\ldots,s_k)$ of sums of the form
\begin{align}\label{1.1}
&\zeta \left( \bf s \right)\equiv\zeta \left( {{s_1}, \ldots ,{s_k}} \right): = \sum\limits_{{n_1} > \cdots > {n_k} > 0} {\prod\limits_{j = 1}^k {n_j^{ - \left| {{s_j}} \right|}}{\rm sgn}(s_j)^{n_j}} ,
\end{align}
\begin{align}\label{1.2}
\zeta^\star \left( \bf s \right)\equiv{\zeta ^ \star }\left( {{s_1}, \ldots ,{s_k}} \right): = \sum\limits_{{n_1} \ge \cdots \ge {n_k} \ge 1} {\prod\limits_{j = 1}^k {n_j^{ - \left| {{s_j}} \right|}{\rm sgn}(s_j)^{n_j}} } ,
\end{align}
commonly referred to as multiple zeta values and multiple zeta star values \cite{BBBL1997,BBBL2001,BB2003,O1999,Y2009}, respectively, where
\[{\mathop{\rm sgn}} \left( {{s_j}} \right): = \left\{ {\begin{array}{*{20}{c}}
{1,} & {{s_j} > 0,} \\
{ - 1,} & {{s_j} < 0.} \\
\end{array}} \right.\]
We are primarily interested in positive integer values of the arguments $s_1,\ldots,s_k$, in which case it is seen that the condition $s_1>1$ is necessary for both sums (\ref{1.1}) and (\ref{1.2}) to converge. Of course, if ${\rm sgn}(s_1) =-1$, then we can allow $\left|s_1\right|=1$. Throughout the paper we will use $\bar p$ to denote a negative entry $s_j=-p$. For example,
\[\zeta \left( {{{\bar s}_1},{s_2}} \right) = \zeta \left( { - {s_1},{s_2}} \right),\ {\zeta ^ \star }\left( {{{\bar s}_1},{s_2},{{\bar s}_3}} \right) = {\zeta ^ \star }\left( { - {s_1},{s_2}, - {s_3}} \right).\]
We call $l({\bf s}):=k$ the depth of (\ref{1.1}), (\ref{1.2}) and $\left| {\bf s} \right|: = \sum\limits_{j = 1}^k {\left| {{s_j}} \right|} $ the weight. For convenience we set $\zeta \left( \emptyset \right) = {\zeta ^ \star }\left( \emptyset \right) = 1$, and we write ${\left\{ {{s_1}, \ldots ,{s_j}} \right\}_d}$ for the sequence formed by repeating the composition $\left( {{s_1}, \ldots ,{s_j}} \right)$ $d$ times.
In this paper we let $\zeta_n \left( {{s_1}, \ldots ,{s_k}} \right)$ and $\zeta_n^\star \left( {{s_1}, \ldots ,{s_k}} \right)$ denote the multiple harmonic number (also called the partial sum of the multiple zeta value) and the multiple harmonic star number (also called the partial sum of the multiple zeta star value), respectively, which are defined by
\begin{align}\label{a1}
{\zeta _n}\left( {{s_1},{s_2}, \cdots ,{s_k}} \right): = \sum\limits_{n \ge {n_1} > {n_2} > \cdots > {n_k} \ge 1} {\prod\limits_{j = 1}^k {n_j^{ - \left| {{s_j}} \right|}} {\rm{sgn}}{{({s_j})}^{{n_j}}}} ,
\end{align}
\begin{align}\label{a2}
\zeta _n^ \star \left( {{s_1},{s_2}, \cdots ,{s_k}} \right): = \sum\limits_{n \ge {n_1} \ge {n_2} \ge \cdots \ge {n_k} \ge 1} {\prod\limits_{j = 1}^k {n_j^{ - \left| {{s_j}} \right|}} {\rm{sgn}}{{({s_j})}^{{n_j}}}} ,
\end{align}
where ${\zeta _n}\left( {{s_1},{s_2}, \cdots ,{s_k}} \right)=0$ whenever $n<k$, and ${\zeta _n}\left(\emptyset \right)={\zeta^\star _n}\left(\emptyset \right)=1$.
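To make the definitions (\ref{a1}) and (\ref{a2}) concrete, the following small Python sketch (purely illustrative) computes multiple harmonic (star) numbers directly from the definitions, a negative entry $s_j=-p$ encoding the sign factor ${\rm sgn}(s_j)^{n_j}=(-1)^{n_j}$ as above.
\begin{verbatim}
from fractions import Fraction

def zeta_n(n, s, star=False):
    # multiple harmonic (star) number zeta_n(s_1,...,s_k) as in (a1)/(a2)
    if not s:                        # zeta_n(emptyset) = 1
        return Fraction(1)
    s1, rest = s[0], s[1:]
    total = Fraction(0)
    for n1 in range(1, n + 1):
        term = Fraction((-1)**n1 if s1 < 0 else 1, n1**abs(s1))
        m = n1 if star else n1 - 1   # ">=" for the star sums, ">" otherwise
        total += term * zeta_n(m, rest, star)
    return total

# zeta_n(2,1) and zeta_n^*(2,1) slowly approach zeta(3) and 2*zeta(3)
print(float(zeta_n(100, (2, 1))), float(zeta_n(100, (2, 1), star=True)))
# the bar notation: zeta_n^*(bar 1, 2) is entered as (-1, 2)
print(float(zeta_n(100, (-1, 2), star=True)))
\end{verbatim}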
A good deal of work on multiple zeta (star) values has focused on the problem of determining when `complicated' sums can be expressed in terms of `simpler' sums. Thus, researchers are interested in determining which sums can be expressed in terms of other sums of smaller depth.
The origin of multiple zeta (star) values goes back to the correspondence of Euler with Goldbach in 1742-1743 (see \cite{H2007}). Euler studied double zeta values and established some important relation formulas for them. For example, he proved that
\begin{align}\label{1.3}
2{\zeta ^ \star }\left( {k,1} \right) = \left( {k + 2} \right)\zeta \left( {k + 1} \right) - \sum\limits_{i = 1}^{k - 2} {\zeta \left( {k - i} \right)\zeta \left( {i + 1} \right)},\ k\geq2,
\end{align}
which, in particular, implies the simplest but nontrivial relation:
${\zeta ^ \star }\left( {2,1} \right)\; = 2\zeta \left( 3 \right)$ or equivalently $\zeta \left( {2,1} \right)\; = \zeta \left( 3 \right)$.
Moreover, he conjectured that the double zeta values would be reducible whenever the weight is odd, and even gave what he hoped to be the general formula. The conjecture was first proved by the Borweins and Girgensohn \cite{BBG1995}. For some interesting results on generalized double zeta values (also called Euler sums), see \cite{BBG1994,FS1998}.
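Relation (\ref{1.3}) is easy to check numerically. Writing $\zeta^\star(k,1)=\sum_{n\ge 1}H_n/n^{k}$ with the harmonic numbers $H_n$, the following sketch (purely illustrative) compares both sides of (\ref{1.3}) for $k=3$, in which case the right-hand side reduces to $5\zeta(4)-\zeta(2)^2=\pi^4/36$.
\begin{verbatim}
import math

def zeta_star_k1(k, N=200000):
    # zeta^*(k,1) = sum_{n >= m >= 1} 1/(n^k m) = sum_{n >= 1} H_n / n^k
    total, H = 0.0, 0.0
    for n in range(1, N + 1):
        H += 1.0 / n
        total += H / n**k
    return total

k = 3
zeta = {2: math.pi**2 / 6, 3: 1.2020569031595943, 4: math.pi**4 / 90}
lhs = 2 * zeta_star_k1(k)
rhs = (k + 2) * zeta[k + 1] - sum(zeta[k - i] * zeta[i + 1]
                                  for i in range(1, k - 1))
print(lhs, rhs, math.pi**4 / 36)   # the three numbers agree closely
\end{verbatim}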
The systematic study of multiple zeta values began in the early 1990s with the works of Hoffman \cite{H1992}, Zagier \cite{DZ1994} and Borwein-Bradley-Broadhurst \cite{BBBL1997} and has continued with increasing attention in recent years (see \cite{CCE2016,CML2016,E2009,E2016}). The first systematic study of reductions up to depth 3 was carried out by Borwein and Girgensohn \cite{BG1996}, where the authors proved that if $p+q+r$ is even or less than or equal to 10 or $p+q+r=12$, then the triple zeta values $\zeta \left( {q,p,r} \right)$ (or $\zeta^\star \left( {q,p,r} \right)$) can be expressed as a rational linear combination of products of zeta values and double zeta values. The set of the multiple zeta values has a rich algebraic structure given by the shuffle and the stuffle relations \cite{H1992,M2009,DZ2012}.
Additionally, it has been discovered over the years that many multiple zeta (star) values admit expressions involving finitely many zeta values and polylogarithms, that is to say, values of the Riemann zeta function and the polylogarithm function,
\begin{align*}
&\zeta \left( s \right): = \sum\limits_{n = 1}^\infty {\frac{1}{{{n^s}}}} \;\left( {{\mathop{\Re}\nolimits} \left( s \right) > 1} \right),\\
&{\rm Li}_s\left( x \right): = \sum\limits_{n = 1}^\infty {\frac{{{x^n}}}{{{n^s}}}} \;\left( {{\mathop{\Re}\nolimits} \left( s \right) \ge 1,\;x \in \left[ { - 1,1} \right)} \right)
\end{align*}
at positive integer arguments and $x=1/2$. The relations between multiple zeta (star) values and the values of the Riemann zeta function and polylogarithm function have been studied by many authors, see \cite{BBBL1997,BBBL2001,BB2003,L2012,KO2010,O1999,KP2013,KPZ2016,X2017,Y2009,DZ2012}. For example, Zagier \cite{DZ2012} proved that the multiple zeta star values ${\zeta ^ \star \left( {{{\left\{ 2 \right\}}_a},3,{{\left\{ 2 \right\}}_b}} \right)} $ and multiple zeta values ${\zeta \left( {{{\left\{ 2 \right\}}_a},3,{{\left\{ 2 \right\}}_b}} \right)} $, where $a,b\in \N_0:=\N\cup \{0\}=\{0,1,2,\ldots\}$, are reducible to polynomials in zeta values, and gave explicit formulae. Hessami Pilehrood et al. \cite{KP2013} and Li \cite{L2012} provide two new proofs of Zagier's formula for ${\zeta ^ \star \left( {{{\left\{ 2 \right\}}_a},3,{{\left\{ 2 \right\}}_b}} \right)} $ based on a finite identity for partial sums of the zeta-star series and hypergeometric series computations, respectively.
The purpose of the present paper is to establish a new family of identities for multiple integral representations of series. We then apply them to obtain a family of identities relating multiple zeta (star) values to zeta values and polylogarithms. In particular, we present some new recurrence relations for the multiple zeta star values whose indices are the sequences $(\bar 1,\{1\}_m,\bar 1)$ and $(2,\{1\}_m,\bar 1)$, and prove that the multiple zeta values $\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m}},\bar 1} \right)$ and $\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m}},\bar 1,\bar 1,{{\left\{ 1 \right\}}_{k}}} \right)$
can be expressed as rational linear combinations
of zeta values, polylogarithms and $\ln(2)$. Moreover, we also evaluate several restricted sum formulas involving multiple zeta values.
\section{Evaluation of the multiple zeta star values $\zeta^\star(\bar 1,\{1\}_m,\bar 1)$ and $\zeta^\star(2,\{1\}_m,\bar 1)$}
In this section, we will show that the alternating multiple zeta star values $\zeta^\star(\bar 1,\{1\}_m,\bar 1)$ and $\zeta^\star(2,\{1\}_m,\bar 1)$ satisfy certain recurrence relations that allow us to write them in terms of zeta values, polylogarithms and $\ln(2)$.
\subsection{Four lemmas and proofs}
We first need four lemmas which will be useful in proving Theorems \ref{thm1} and \ref{thm2}.
\begin{lem}(\cite{Xu2017}) \label{lem2.1}
For $n\in \N,\ m\in \N_0$ and $x\in[-1,1)$, the following relation holds:
\begin{align}\label{2.1}
\int\limits_0^x {{t^{n - 1}}{{\ln }^m}\left( {1 - t} \right)} dt&=\frac{1}{n}\left( {{x^n} - 1} \right){\ln ^m}\left( {1 - x} \right)+ m!\frac{{{{\left( { - 1} \right)}^m}}}{n}\sum\limits_{1 \le {k_m} \le \cdots \le {k_1} \le n} {\frac{{{1}}}{{{k_1} \cdots {k_m}}}}\nonumber \\
&\quad - \frac{1}{n}\sum\limits_{i = 1}^{m } {{{\left( { - 1} \right)}^{i - 1}}i!\left( {\begin{array}{*{20}{c}}
m \\
i \\
\end{array}} \right){{\ln }^{m - i}}\left( {1 - x} \right)} \sum\limits_{1 \le {k_i} \le \cdots \le {k_1} \le n} {\frac{{{x^{{k_i}}} - 1}}{{{k_1} \cdots {k_i}}}},
\end{align}
where if $m=0$, then
\[\sum\limits_{_{1 \le {k_m} \le \cdots \le {k_1} \le n}} {\frac{{f\left( {{k_m}} \right)}}{{{k_1} \cdots {k_m}}}} = f\left( n \right).\]
\end{lem}
\pf The proof is by induction on $m$.
Define $J\left( {n,m;x} \right): = \int\limits_0^x {{t^{n - 1}}{{\ln }^m}\left( {1 - t} \right)} dt$. For $m=0$, a simple calculation gives
\[J\left( {n,0;x} \right) = \int\limits_0^x {{t^{n - 1}}dt} = \frac{{{x^n}}}{n}\]
and the formula is true. For $m\geq 1$ we proceed as follows.
By using integration by parts we have the following recurrence relation
\begin{align}\label{2.2}
J\left( {n,m;x} \right) = \frac{1}{n}\left( {{x^n} - 1} \right){\ln ^m}\left( {1 - x} \right) - \frac{m}{n}\sum\limits_{k = 1}^n {J\left( {k,m - 1;x} \right)}.
\end{align}
Suppose the lemma holds for $m-1$. Then the inductive hypothesis implies that the integral (\ref{2.2}) is equal to
\begin{align}\label{2.3}
J\left( {n,m;x} \right) =& \frac{1}{n}\left( {{x^n} - 1} \right) {\ln ^m}\left( {1 - x} \right)+ m!\frac{{{{\left( { - 1} \right)}^m}}}{n}\sum\limits_{j = 1}^n \frac 1{j}{\sum\limits_{1 \le {k_{m - 1}} \le \cdots \le {k_1} \le j} {\frac{{{1}}}{{{k_1} \cdots {k_{m - 1}}}}} }\nonumber \\
& + \frac{m}{n}\sum\limits_{i = 1}^{m - 1} {{{\left( { - 1} \right)}^{i - 1}}i!\left( {\begin{array}{*{20}{c}}
{m - 1} \\
i \\
\end{array}} \right){{\ln }^{m - 1 - i}}\left( {1 - x} \right)}\sum\limits_{j = 1}^n \frac 1{j} \sum\limits_{1 \le {k_i} \le \cdots \le {k_1} \le j} {\frac{{{x^{{k_i}}} - 1}}{{{k_1} \cdots {k_i}}}}\nonumber \\
& - \frac{m}{n}{\ln ^{m - 1}}\left( {1 - x} \right)\sum\limits_{j = 1}^n {\frac{{{x^j} - 1}}{j}} .
\end{align}
Thus, by a direct calculation, we may deduce the desired result. This completes the proof of Lemma 2.1. \hfill$\square$
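Identity (\ref{2.1}) can also be tested numerically. The following sketch (purely illustrative, assuming SciPy for the quadrature) compares both sides for a few choices of $n$, $m$ and $x$, computing the nested sums by the obvious layer-by-layer recursion.
\begin{verbatim}
import math
from scipy.integrate import quad

def nested(h, m, n):
    # sum over 1 <= k_m <= ... <= k_1 <= n of h(k_m)/(k_1*...*k_m);
    # for m = 0 the convention of the lemma gives h(n)
    if m == 0:
        return h(n)
    G = [0.0] + [h(k) / k for k in range(1, n + 1)]    # innermost layer
    for _ in range(m - 1):
        G = [0.0] + [sum(G[1:k + 1]) / k for k in range(1, n + 1)]
    return sum(G[1:])

def rhs(n, m, x):
    L = math.log(1 - x)
    val = (x**n - 1) / n * L**m
    val += math.factorial(m) * (-1)**m / n * nested(lambda k: 1.0, m, n)
    for i in range(1, m + 1):
        val -= ((-1)**(i - 1) * math.factorial(i) * math.comb(m, i) / n
                * L**(m - i) * nested(lambda k: x**k - 1, i, n))
    return val

for (n, m, x) in [(3, 2, 0.5), (5, 3, -0.7), (2, 1, 0.9)]:
    lhs = quad(lambda t: t**(n - 1) * math.log(1 - t)**m, 0, x)[0]
    print(n, m, x, lhs, rhs(n, m, x))   # the two values should agree
\end{verbatim}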
\begin{lem}\label{blem1}
For integers $m\geq 0, n\geq 0$ and $x\in (0,1)$, then the following integral identity holds:
\begin{align}\label{b1}
\int\limits_0^x {{t^{n}}{{\left( {\ln t} \right)}^m}} dt = \sum\limits_{l = 0}^m {l!\left( {\begin{array}{*{20}{c}}
m \\
l \\
\end{array}} \right)\frac{{{{\left( { - 1} \right)}^l}}}{{{(n+1)^{l + 1}}}}{x^{n+1}}{{{\ln^{m - l} (x)} }}}.
\end{align}
\end{lem}
\pf This follows by induction on $m$, using the formula
\[\int\limits_0^{{x}} {{t^{n }}{{\ln }^m}(t)} dt = \frac{{{x^{n+1}}}}{n+1}{\ln ^m}(x) - \frac{m}{n+1}\int\limits_0^{{x}} {{t^{n}}{{\ln }^{m - 1}}(t)} dt,\]
that comes from integration by parts.\hfill$\square$
\begin{lem} (\cite{X2017}) \label{lem2.2}
For positive integer $m$, the following identity holds:
\begin{align}\label{2.4}
\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 + t} \right)\ln \left( {1 - t} \right)}}{{1 + t}}} dt= &\frac{1}{{m + 1}}{\ln ^{m + 2}}(2) - \zeta \left( 2 \right){\ln ^m}(2)\nonumber\\
& - \sum\limits_{k = 1}^m {\left( {\begin{array}{*{20}{c}}
m \\
k \\
\end{array}} \right){{\left( { - 1} \right)}^{k + 1}}\left\{ \begin{array}{l}
\sum\limits_{l = 1}^k {l!\left( {\begin{array}{*{20}{c}}
k \\
l \\
\end{array}} \right){{\left( {\ln^{m - l} (2)} \right)}}{\rm{L}}{{\rm{i}}_{l + 2}}\left( {\frac{1}{2}} \right)} \\
- k!{\left( {\ln^{m - k} (2)} \right)}\zeta \left( {k + 2} \right) \\
\end{array} \right\}}.
\end{align}
\end{lem}
\pf This proof is based on the simple variable substitution $1+t=2x$ and the help of formula (\ref{b1}). We have
\begin{align*}
\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 + t} \right)\ln \left( {1 - t} \right)}}{{1 + t}}} dt\rm{ = }&\int\limits_{1/2}^1 {\frac{{{{\ln }^m}\left( {2x} \right)\ln \left( {2 - 2x} \right)}}{x}dx} \\
=& \sum\limits_{k = 0}^m {\left( {\begin{array}{*{20}{c}}
m \\
k \\
\end{array}} \right){{\ln }^{m - k + 1}}\left( 2 \right)\int\limits_{1/2}^1 {\frac{{{{\ln }^k}\left( x \right)}}{x}dx} } \\
& + \sum\limits_{k = 0}^m {\left( {\begin{array}{*{20}{c}}
m \\
k \\
\end{array}} \right){{\ln }^{m - k}}\left( 2 \right)\int\limits_{1/2}^1 {\frac{{{{\ln }^k}\left( x \right)\ln \left( {1 - x} \right)}}{x}dx} } .
\end{align*}
Then by using identity (\ref{b1}) and the power series expansion of the logarithm function
\[\ln \left( {1 - x} \right) = - \sum\limits_{n = 1}^\infty {\frac{{{x^n}}}{n}} ,\quad x\in [-1,1)\]
we easily obtain the formula (\ref{2.4}), which completes the proof of Lemma \ref{lem2.2}.\hfill$\square$
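Formula (\ref{2.4}) can likewise be confirmed numerically; the sketch below (purely illustrative) uses mpmath for the quadrature, the zeta values and the polylogarithms ${\rm Li}_s\left(\frac 12\right)$.
\begin{verbatim}
from math import comb, factorial
from mpmath import mp, polylog, zeta, quad, log

mp.dps = 30
L2 = log(2)

def rhs(m):
    val = L2**(m + 2) / (m + 1) - zeta(2) * L2**m
    for k in range(1, m + 1):
        inner = sum(factorial(l) * comb(k, l) * L2**(m - l)
                    * polylog(l + 2, mp.mpf(1) / 2) for l in range(1, k + 1))
        inner -= factorial(k) * L2**(m - k) * zeta(k + 2)
        val -= comb(m, k) * (-1)**(k + 1) * inner
    return val

for m in (1, 2, 3):
    lhs = quad(lambda t: log(1 + t)**m * log(1 - t) / (1 + t), [0, 1])
    print(m, lhs, rhs(m))   # left- and right-hand side of (2.4)
\end{verbatim}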
\begin{lem} (\cite{X2017}) \label{lem2.3}
For integer $m\geq1$ and $x \in \left[ {0,1} \right]$, the following identity holds:
\begin{align}\label{2.5}
\int\limits_0^x {\frac{{{{\ln }^m}\left( {1 + t} \right)}}{t}} dt = &\frac{1}{{m + 1}}{\ln ^{m + 1}}\left( {1 + x} \right) + m!\left( {\zeta \left( {m + 1} \right) - {\rm Li}{_{m + 1}}\left( {\frac{1}{{1 + x}}} \right)} \right)\nonumber \\&- m!\sum\limits_{j = 1}^m {\frac{{{{\ln }^{m - j + 1}}\left( {1 + x} \right)}}{{\left( {m - j + 1} \right)!}}} {\rm Li}{_j}\left( {\frac{1}{{1 + x}}} \right).
\end{align}
\end{lem}
\pf We note that, applying the change of variable $t\rightarrow w-1$, the left hand side of (\ref{2.5}) can be rewritten as
\begin{align*}
\int\limits_0^x {\frac{{{{\ln }^m}\left( {1 + t} \right)}}{t}} dt\mathop = \limits^{w = 1 +t} & \int\limits_1^{1 + x} {\frac{{{{\ln }^m}(w)}}{{w - 1}}} dw\mathop = \limits^{u = {w^{ - 1}}} {\left( { - 1} \right)^{m + 1}}\int\limits_1^{{{(1 + x)^{-1}}}} {\frac{{{{\ln }^m}(u)}}{{u - {u^2}}}} du\\
=& {\left( { - 1} \right)^{m + 1}}\left\{ {\int\limits_1^{{{(1 + x)^{-1}}}} {\frac{{{{\ln }^m}(u)}}{u}} du + \int\limits_1^{{{(1 + x)^{-1}}}} {\frac{{{{\ln }^m}(u)}}{{1 - u}}} du} \right\}\\
=&\frac{1}{{m + 1}}{\ln ^{m + 1}}\left( {1 + x} \right) + {\left( { - 1} \right)^{m + 1}}\int\limits_1^{{{(1 + x)^{-1}}}} {\frac{{{{\ln }^m}(u)}}{{1 - u}}} du .
\end{align*}
Then with the help of formula (\ref{b1}) we may deduce the evaluation (\ref{2.5}).\hfill$\square$
\subsection{Two theorems and proofs}
We now state the main results of this section, which are the following two theorems.
\begin{thm}\label{thm1}
For positive integer $m$, we have the recurrence relation
\begin{align}\label{2.6}
\zeta^\star \left( {\bar 1,{{\left\{ 1 \right\}}_{m }},\bar 1} \right)= &\frac{{{{\left( { - 1} \right)}^{m }}}}{{ {m }!}}\zeta \left( 2 \right){\ln ^{m}}(2)\nonumber\\
& - \frac{{{{\left( { - 1} \right)}^{m }}}}{{(m+1)!}}\sum\limits_{i = 1}^{m} {{{\left( { - 1} \right)}^{i + 1}}i!\left( {\begin{array}{*{20}{c}}
m+1 \\
i \\
\end{array}} \right){{\left( {\ln 2} \right)}^{m +1- i}}} \left\{ {\zeta^\star \left( {\bar 1,{{\left\{ 1 \right\}}_{i - 1}},\bar 1} \right) - \zeta^\star \left( {\bar 1,{{\left\{ 1 \right\}}_i}} \right)} \right\}\nonumber\\
&+ \frac{{{{\left( { - 1} \right)}^{m}}}}{{{m}!}}\sum\limits_{k = 1}^{m} {\left( {\begin{array}{*{20}{c}}
{m} \\
k \\
\end{array}} \right){{\left( { - 1} \right)}^{k + 1}}\left\{ \begin{array}{l}
\sum\limits_{l = 1}^k {l!\left( {\begin{array}{*{20}{c}}
k \\
l \\
\end{array}} \right){{\left( {\ln 2} \right)}^{m - l}}{\rm{L}}{{\rm{i}}_{l + 2}}\left( {\frac{1}{2}} \right)} \\
- k!{\left( {\ln 2} \right)^{m- k}}\zeta \left( {k + 2} \right) \\
\end{array} \right\}},
\end{align}
where \[{\zeta ^ \star }\left( {\bar 1,\bar 1} \right) = \frac{{\zeta \left( 2 \right) + {{\ln }^2}\left( 2 \right)}}{2}.\]
\end{thm}
\pf Multiplying (\ref{2.1}) by ${\left( { - 1} \right)^{n - 1}}$ and summing with respect to $n$, the result is
\begin{align}\label{2.7}
&\int\limits_0^x {\frac{{{{\ln }^m}\left( {1 - t} \right)}}{{1 + t}}} dt = \sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^{n - 1}}\int\limits_0^x {{t^{n - 1}}{{\ln }^m}\left( {1 - t} \right)} dt}\nonumber \\
&= {\ln ^m}\left( {1 - x} \right)\sum\limits_{n = 1}^\infty {\frac{{{x^n} - 1}}{n}{{\left( { - 1} \right)}^{n - 1}}} + m!{\left( { - 1} \right)^m}\sum\limits_{n = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{n - 1}}}}{n}\sum\limits_{1 \le {k_m} \le \cdots \le {k_1} \le n} {\frac{{{x^{{k_m}}}}}{{{k_1} \cdots {k_m}}}} }\nonumber \\
& \quad- \sum\limits_{i = 1}^{m - 1} {{{\left( { - 1} \right)}^{i - 1}}i!\left( {\begin{array}{*{20}{c}}
m \\
i \\
\end{array}} \right){{\ln }^{m - i}}\left( {1 - x} \right)} \sum\limits_{n = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{n - 1}}}}{n}\sum\limits_{1 \le {k_i} \le \cdots \le {k_1} \le n} {\frac{{{x^{{k_i}}} - 1}}{{{k_1} \cdots {k_i}}}} }.
\end{align}
On the other hand, by integration by parts, we obtain the formula
\begin{align*}
&\mathop {\lim }\limits_{x \to - 1} \left\{ {\int\limits_0^x {\frac{{{{\ln }^m}\left( {1 - t} \right)}}{{1 + t}}} dt - {{\ln }^m}\left( {1 - x} \right)\ln \left( {1 + x} \right)} \right\}\\
& = \mathop {\lim }\limits_{x \to - 1} \left\{ {m\int\limits_0^x {\frac{{{{\ln }^{m - 1}}\left( {1 - t} \right)\ln \left( {1 + t} \right)}}{{1 - t}}} dt} \right\}\\
& = m\int\limits_0^{ - 1} {\frac{{{{\ln }^{m - 1}}\left( {1 - t} \right)\ln \left( {1 + t} \right)}}{{1 - t}}} dt\\
& = - m\int\limits_0^1 {\frac{{{{\ln }^{m - 1}}\left( {1 + t} \right)\ln \left( {1 - t} \right)}}{{1 + t}}} dt.
\end{align*}
Hence, replacing $m-1$ by $m$ and letting $x$ approach $-1$ in (\ref{2.7}), and combining it with (\ref{2.4}), we deduce (\ref{2.6}). \hfill$\square$\\
Note that, letting $x\rightarrow 1$ in (\ref{2.7}), we obtain
\[\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 - t} \right)}}{{1 + t}}} dt = m!{\left( { - 1} \right)^{m - 1}}\zeta^\star \left( {\bar 1,{{\left\{ 1 \right\}}_m}} \right).\]
On the other hand, applying the change of variable $t\rightarrow 1-x$ to the integral above and using (\ref{b1}), we obtain
\[\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 - t} \right)}}{{1 + t}}} dt = {\left( { - 1} \right)^m}m!{\rm{L}}{{\rm{i}}_{m + 1}}\left( {\frac{1}{2}} \right).\]
Therefore, we conclude that
\begin{align}\label{2.8}
\zeta^\star \left( {\bar 1,{{\left\{ 1 \right\}}_m}} \right) = - {\rm{L}}{{\rm{i}}_{m + 1}}\left( {\frac{1}{2}} \right),\quad m\in\N_0.
\end{align}
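As a quick numerical sanity check of (\ref{2.8}), one may compare the integral computed above with the corresponding polylogarithmic value; a minimal Python sketch (again assuming the \texttt{mpmath} library) reads as follows.
\begin{verbatim}
# Check int_0^1 ln^m(1-t)/(1+t) dt = (-1)^m m! Li_{m+1}(1/2), which is
# the relation behind (2.8); assumes mpmath is installed.
from mpmath import mp, log, quad, polylog, factorial

mp.dps = 30

for m in range(1, 6):
    integral = quad(lambda t: log(1 - t)**m / (1 + t), [0, 1])
    closed = (-1)**m * factorial(m) * polylog(m + 1, 0.5)
    print(m, integral - closed)    # differences should be negligible
\end{verbatim}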
\begin{thm}\label{thm2}
For positive integer $m$, we have the recurrence relation
\begin{align}\label{2.9}
\zeta^\star \left( {2,{{\left\{ 1 \right\}}_{m}},\bar 1} \right) =& \frac{{m + 2}}{{\left( {m + 3} \right)!}}{\left( { - 1} \right)^{m }}{\ln ^{m + 3}}(2) + \left( {m + 2} \right){\left( { - 1} \right)^{m }}\left( {\zeta \left( {m + 3} \right) - {\rm{L}}{{\rm{i}}_{m + 3}}\left( {\frac{1}{2}} \right)} \right)\nonumber\\
& - \left( {m + 2} \right){\left( { - 1} \right)^{m }}\sum\limits_{j = 1}^{m + 2} {\frac{{{{ {\ln^{m + 3 - j} (2)}}}}}{{\left( {m + 3 - j} \right)!}}} {\rm{L}}{{\rm{i}}_j}\left( {\frac{1}{2}} \right) - \frac{3}{2}\frac{{{{\left( { - 1} \right)}^{m}}}}{{(m+1)!}}\zeta \left( 2 \right){ {\ln^{m+1} (2) }}\nonumber\\
& - \frac{{{{\left( { - 1} \right)}^{m}}}}{{(m+1)!}}\sum\limits_{i = 1}^{m} {{{\left( { - 1} \right)}^{i - 1}}i!\left( {\begin{array}{*{20}{c}}
m+1 \\
i \\
\end{array}} \right){{\left( {\ln^{m+1 - i}(2)} \right)}}}\nonumber\\
&\quad\quad\quad\quad\quad\quad\times \left\{ {\zeta^\star \left( {2,{{\left\{ 1 \right\}}_{i - 1}},\bar 1} \right) - \zeta^\star \left( {2,{{\left\{ 1 \right\}}_i}} \right)} \right\},
\end{align}
where \[{\zeta ^ \star }\left( {2,\bar 1} \right) = \frac{1}{4}\zeta \left( 3 \right) - \frac{3}{2}\zeta \left( 2 \right)\ln \left( 2 \right).\]
\end{thm}
\pf Proceeding as in the proof of Theorem \ref{thm1}, we consider the integral
\begin{align}\label{2.10}
\int\limits_0^{ - 1} {\frac{{{{\ln }^{m + 1}}\left( {1 - t} \right)}}{t}} dt = - \sum\limits_{n = 1}^\infty {\frac{1}{n}\int\limits_0^{ - 1} {{t^{n - 1}}{{\ln }^m}\left( {1 - t} \right)dt} }.
\end{align}
Putting $x\rightarrow -1$ in (\ref{2.1}), we deduce that
\begin{align}\label{2.11}
\int_0^{ - 1} {{t^{n - 1}}\ln^m \left( {1 - t} \right)} dt =& \frac{1}{n}({\ln ^m}(2))\left( {{{\left( { - 1} \right)}^n} - 1} \right) + m!\frac{{{{\left( { - 1} \right)}^m}}}{n}{\zeta^\star _n}\left( {{{\left\{ 1 \right\}}_{m-1}},\bar 1} \right)\nonumber\\
& - \frac{1}{n}\sum\limits_{i = 1}^{m - 1} {{{\left( { - 1} \right)}^{i - 1}}i!\left( {\begin{array}{*{20}{c}}
m \\
i \\
\end{array}} \right)} {(\ln ^{m - i}}(2))\left\{ {{\zeta^\star _n}\left( {{{\left\{ 1 \right\}}_{i - 1}},\bar 1} \right) - {\zeta^\star _n}\left( {{{\left\{ 1 \right\}}_i}} \right)} \right\}.
\end{align}
Setting $x\rightarrow 1$ in (\ref{2.5}) and combining (\ref{2.10}) with (\ref{2.11}), then replacing $m-1$ by $m$, we obtain the desired result (\ref{2.9}).\hfill$\square$\\
By considering the case $m=1$ in (\ref{2.6}) and (\ref{2.9}), we get
\[\zeta^\star \left( {\bar 1,1,\bar 1} \right){\rm{ = }}\frac{1}{8}\zeta \left( 3 \right){\rm{ + }}\frac{1}{2}\zeta \left( 2 \right)\ln(2) - \frac{1}{6}{\ln ^3}(2),\]
\[\zeta^\star \left( {2,1,\bar 1} \right) = \frac{1}{8}{\ln ^4}(2) + 3{\rm{L}}{{\rm{i}}_4}\left( {\frac{1}{2}} \right) - 3\zeta \left( 4 \right) - \frac{3}{2}\zeta \left( 2 \right){\ln ^2}(2) + \frac{7}{8}\zeta \left( 3 \right)\ln (2).\]
From \cite{Xu2017} we have the result
\[\zeta^\star \left( {2,{{\left\{ 1 \right\}}_m}} \right) = \left( {m + 1} \right)\zeta \left( {m + 2} \right),\quad m\in\N_0.\]
Therefore, from Theorem \ref{thm1} and Theorem \ref{thm2}, we know that the alternating multiple zeta star values $\zeta^\star(\bar 1,\{1\}_m,\bar 1)$ and $\zeta^\star(2,\{1\}_m,\bar 1)$ can be expressed as a rational linear combination of zeta values, polylogarithms and $\ln(2)$.
\section{Some results on multiple zeta values}
In this section, we use certain multiple integral representations to evaluate several multiple zeta values. We need the following lemma.
\begin{lem} (\cite{X2017,Xu2017})\label{lem3.1}
For integer $k>0$ and $x\in [-1,1)$, we have that
\begin{align}\label{3.1}
{\ln ^k}\left( {1 - x} \right) = {\left( { - 1} \right)^k}k!\sum\limits_{n = 1}^\infty {\frac{{{x^n}}}{n}{\zeta _{n - 1}}\left( {{{\left\{ 1 \right\}}_{k - 1}}} \right)},
\end{align}
\begin{align}\label{3.2}
s\left( {n,k} \right) = \left( {n - 1} \right)!{\zeta _{n - 1}}\left( {{{\left\{ 1 \right\}}_{k - 1}}} \right),
\end{align}
where ${s\left( {n,k} \right)}$ denotes the (unsigned) Stirling number of the first kind (see \cite{L1974}).
The Stirling numbers ${s\left( {n,k} \right)}$ of the first kind satisfy a recurrence relation in the form
\[s\left( {n,k} \right) = s\left( {n - 1,k - 1} \right) + \left( {n - 1} \right)s\left( {n - 1,k} \right),\;\;n,k \in \N,\]
with $s\left( {n,k} \right) = 0$ for $n < k$, $s\left( {n,0} \right) = s\left( {0,k} \right) = 0$ for $n,k\geq 1$, and $s\left( {0,0} \right) = 1$.
\end{lem}
\pf The proof is based on the two identities
\begin{align}\label{c1}
{\ln ^{k{\rm{ + }}1}}\left( {1 - x} \right){\rm{ = }} - \left( {k + 1} \right)\int\limits_0^x {\frac{{{{\ln }^k}\left( {1 - t} \right)}}{{1 - t}}dt},\quad k\in \N_0
\end{align}
and
\begin{align}\label{c2}
{\ln ^k}\left( {1 - x} \right) = {\left( { - 1} \right)^k}k!\sum\limits_{n = k}^\infty {\frac{{s\left( {n,k} \right)}}{{n!}}{x^n}} ,\: - 1 \le x < 1.
\end{align}
Applying the induction hypothesis and the Cauchy product formula, we arrive at
\begin{align*}
{\ln ^{k{\rm{ + }}1}}\left( {1 - x} \right){\rm{ = }}& - \left( {k + 1} \right)\int\limits_0^x {\frac{{{{\ln }^k}\left( {1 - t} \right)}}{{1 - t}}dt} \\
& = {\left( { - 1} \right)^{k + 1}}\left( {k + 1} \right)!\sum\limits_{n = 1}^\infty {\frac{1}{{n + 1}}\sum\limits_{i = 1}^n {\frac{{{\zeta _{i - 1}}\left( {{{\left\{ 1 \right\}}_{k - 1}}} \right)}}{i}} } {x^{n + 1}}\\
& = {\left( { - 1} \right)^{k + 1}}\left( {k + 1} \right)!\sum\limits_{n = 1}^\infty {\frac{{{\zeta _n}\left( {{{\left\{ 1 \right\}}_k}} \right)}}{{n + 1}}} {x^{n + 1}}.
\end{align*}
Noting that ${\zeta _n}\left( {{{\left\{ 1 \right\}}_k}} \right) = 0$ when $n<k$, we can deduce (\ref{3.1}). Then, comparing the coefficients of $x^n$ in (\ref{3.1}) and (\ref{c2}), we obtain formula (\ref{3.2}). The proof of Lemma \ref{lem3.1} is thus completed.\hfill$\square$
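Formula (\ref{3.2}) can also be confirmed by an exact computation for small parameters. The following Python sketch (standard library only, exact rational arithmetic; the function names are ours) verifies it for all $n\leq 7$.
\begin{verbatim}
# Exact check of (3.2): s(n,k) = (n-1)! * zeta_{n-1}({1}_{k-1}).
from fractions import Fraction
from math import factorial
from itertools import combinations

def stirling1(n, k):
    # unsigned Stirling numbers of the first kind, via the recurrence
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            S[i][j] = S[i - 1][j - 1] + (i - 1) * S[i - 1][j]
    return S[n][k]

def zeta_partial_ones(N, depth):
    # zeta_N({1}_depth) = sum_{N >= n_1 > ... > n_depth >= 1} 1/(n_1...n_depth)
    if depth == 0:
        return Fraction(1)
    total = Fraction(0)
    for c in combinations(range(1, N + 1), depth):
        term = Fraction(1)
        for v in c:
            term /= v
        total += term
    return total

for n in range(1, 8):
    for k in range(1, n + 1):
        assert stirling1(n, k) == factorial(n - 1) * zeta_partial_ones(n - 1, k - 1)
print("identity (3.2) verified for n <= 7")
\end{verbatim}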
It is clear that, using (\ref{3.1}) and applying the Cauchy product formula for power series, we deduce
\begin{align}\label{c3}
\frac{{{{\ln }^k}\left( {1 + x} \right)}}{{1 - x}} = {\left( { - 1} \right)^k}k!\sum\limits_{n = 1}^\infty {{\zeta _{n - 1}}\left( {\bar 1,{{\left\{ 1 \right\}}_{k - 1}}} \right){x^{n - 1}}} ,\quad x\in (-1,1),
\end{align}
\begin{align}\label{c4}
\frac{{{{\ln }^k}\left( {1 - x} \right)}}{{1 + x}} = {\left( { - 1} \right)^k}k!\sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^{n - 1}}{\zeta _{n - 1}}\left( {\bar 1,{{\left\{ 1 \right\}}_{k - 1}}} \right){x^{n - 1}}},\quad x\in(-1,1) .
\end{align}
The main results are the following theorems and corollary.
\begin{thm}\label{thm3.2} For integers $m,k\in\N_0$, we have
\begin{align}\label{3.3}
\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_m},\bar 1,{{\left\{ 1 \right\}}_{k}}} \right) = {\left( { - 1} \right)^{m + 1}}{\rm{L}}{{\rm{i}}_{k + 2,{{\left\{ 1 \right\}}_m}}}\left( {\frac{1}{2}} \right),
\end{align}
where the multiple polylogarithm function
\begin{align}\label{3.4}
{\rm{L}}{{\rm{i}}_{{s_1},{s_2}, \cdots {s_k}}}\left( x \right): = \sum\limits_{{n_1} > \cdots > {n_k} > 0} {\frac{{{x^{{n_1}}}}}{{n_1^{{s_1}} \cdots n_k^{{s_k}}}}}
\end{align}
is defined for positive integers $s_j$, and $x$ is a real number satisfying $0\leq x<1$. Of course, if $s_1>1$, then we can allow $x=1$.
\end{thm}
\pf To prove the identity (\ref{3.3}), we consider the following multiple integral
\begin{align}\label{3.5}
{M_m}\left( k \right): = \int\limits_0^1 {\frac{1}{{1 + {t_1}}}d{t_1}} \cdots \int\limits_0^{{t_{m - 1}}} {\frac{1}{{1 + {t_m}}}d{t_m}} \int\limits_0^{{t_m}} {\frac{{{{\ln }^{k+1}}\left( {1 - {t_{m + 1}}} \right)}}{{1 + {t_{m + 1}}}}} d{t_{m + 1}}.
\end{align}
By using formula (\ref{c4}) and the power series expansion
\[\frac{1}{{1 + x}} = \sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^{n - 1}}{x^{n - 1}}},\quad x\in (-1,1)\]
we can find that
\begin{align}\label{c5}
&\frac{{{{\ln }^{k + 1}}\left( {1 - {t_{m + 1}}} \right)}}{{\left( {1 + {t_1}} \right) \cdots \left( {1 + {t_m}} \right)\left( {1 + {t_{m + 1}}} \right)}} \nonumber\\
&= {\left( { - 1} \right)^{k + m}}\left( {k + 1} \right)!\sum\limits_{{n_1},{n_2}, \cdots ,{n_{m + 1}} = 1}^\infty {{{\left( { - 1} \right)}^{{n_1} + {n_2} + \cdots + {n_{m + 1}}}}{\zeta _{{n_1} - 1}}\left( {\bar 1,{{\left\{ 1 \right\}}_k}} \right)t_1^{{n_{m + 1}} - 1}t_2^{{n_m} - 1} \cdots t_{m + 1}^{{n_1} - 1}} .
\end{align}
Applying the iterated-integral symbol $\int\limits_0^1 {\int\limits_0^{{t_1}} { \cdots \int\limits_0^{{t_m}} {\left( \cdot \right)d{t_1}d{t_2} \cdots d{t_{m + 1}}} } } $ to (\ref{c5}), we obtain
\begin{align}\label{3.6}
{M_m}\left( k \right) =& {\left( { - 1} \right)^{k + m }}(k+1)!\sum\limits_{{n_1},{n_2}, \cdots ,{n_{m + 1}} = 1}^\infty {{{\left( { - 1} \right)}^{{n_1} + \cdots + {n_{m + 1}}}}\frac{{{\zeta _{{n_1} - 1}}\left( {\bar 1,{{\left\{ 1 \right\}}_{k }}} \right)}}{{{n_1}}}}\nonumber \\
&\quad \quad\quad\quad\quad\quad\quad\quad\quad\times \int\limits_0^1 {t_1^{{n_{m + 1}} - 1}d{t_1}} \cdots \int\limits_0^{{t_{m - 1}}} {t_m^{{n_1} + {n_2} - 1}d{t_m}}\nonumber \\
=& {\left( { - 1} \right)^{k + m }}(k+1)!\sum\limits_{{n_1},{n_2}, \cdots ,{n_{m + 1}} = 1}^\infty {{{\left( { - 1} \right)}^{{n_1} + \cdots + {n_{m + 1}}}}\frac{{{\zeta _{{n_1} - 1}}\left( {\bar 1,{{\left\{ 1 \right\}}_{k }}} \right)}}{{{n_1}\left( {{n_1} + {n_2}} \right) \cdots \left( {{n_1} + \cdots + {n_{m + 1}}} \right)}}}\nonumber \\
=& {\left( { - 1} \right)^{k + m }}(k+1)!\sum\limits_{{n_1} > \cdots > {n_{m + 1}} \ge 1}^\infty {\frac{{{\zeta _{{n_{m + 1}} - 1}}\left( {\bar 1,{{\left\{ 1 \right\}}_{k}}} \right)}}{{{n_1}{n_2} \cdots {n_{m + 1}}}}{{\left( { - 1} \right)}^{{n_1}}}}\nonumber \\
=& {\left( { - 1} \right)^{k + m }}(k+1)!\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_m},\bar 1,{{\left\{ 1 \right\}}_{k}}} \right).
\end{align}
Hence, ${M_{m}}\left( k \right) = {\left( { - 1} \right)^{k + m}}(k+1)!\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m }},\bar 1,{{\left\{ 1 \right\}}_{k}}} \right)$.
On the other hand, applying the change of variables
\[{t_i} \mapsto 1 - {t_{m + 2 - i}},\;i = 1,2, \cdots ,m + 1,\]
to the above multiple integral ${M_m}\left( k \right)$, and using the fact that
\[\frac{1}{{\left( {2 - {t_1}} \right) \cdots \left( {2 - {t_m}} \right)\left( {2 - {t_{m + 1}}} \right)}} = \sum\limits_{{n_1},{n_2}, \cdots ,{n_{m + 1}} = 1}^\infty {\frac{{t_1^{{n_{m + 1}} - 1}t_2^{{n_m} - 1} \cdots t_{m + 1}^{{n_1} - 1}}}{{{2^{{n_1} + {n_2} + \cdots + {n_{m + 1}}}}}}} \]
we have
\begin{align}\label{3.7}
&{M_m}\left( k \right) = \int\limits_0^1 {\frac{{{{\ln }^{k+1}}\left( {{t_1}} \right)}}{{2 - {t_1}}}d{t_1}\int\limits_0^{{t_1}} {\frac{1}{{2 - {t_2}}}d{t_2}} \cdots \int\limits_0^{{t_m}} {\frac{1}{{2 - {t_{m + 1}}}}d{t_{m + 1}}} } \nonumber \\
&{\rm{ = }}\sum\limits_{{n_1}, \cdots ,{n_{m + 1}} = 1}^\infty {\frac{1}{{{2^{{n_1} + \cdots + {n_{m + 1}}}}}}\int\limits_0^1 {t_1^{{n_{m + 1}} - 1}{{\ln }^{k+1}}\left( {{t_1}} \right)d{t_1}\int\limits_0^{{t_1}} {t_2^{{n_m} - 1}d{t_2}} \cdots \int\limits_0^{{t_m}} {t_{m + 1}^{{n_1} - 1}d{t_{m + 1}}} } }\nonumber \\
&{\rm{ = }}\sum\limits_{{n_1}, \cdots ,{n_{m + 1}} = 1}^\infty {\frac{1}{{{2^{{n_1} + \cdots + {n_{m + 1}}}}}}\cdot\frac{1}{{{n_1}\left( {{n_1} + {n_2}} \right) \cdots \left( {{n_1} + \cdots + {n_m}} \right)}}\int\limits_0^1 {t_1^{{n_1} + \cdots + {n_{m + 1}} - 1}{{\ln }^{k+1}}\left( {{t_1}} \right)d{t_1}} } \nonumber \\
& = (k+1)!{\left( { - 1} \right)^{k+1}}\sum\limits_{{n_1}, \cdots ,{n_{m + 1}} = 1}^\infty {\frac{1}{{{2^{{n_1} + \cdots + {n_{m + 1}}}}}}} \cdot \frac{1}{{{n_1}\left( {{n_1} + {n_2}} \right) \cdots \left( {{n_1} + \cdots +{n_m}} \right){{\left( {{n_1} + \cdots + {n_{m + 1}}} \right)}^{k + 2}}}}\nonumber \\
&= (k+1)!{\left( { - 1} \right)^{k+1}}\sum\limits_{1 \le {n_{m + 1}} < \cdots < {n_1}}^\infty {\frac{1}{{{n_{m + 1}} \cdots {n_2}n_1^{k + 2}{2^{{n_1}}}}}} \nonumber \\
& = (k+1)!{\left( { - 1} \right)^{k+1}}{\rm{L}}{{\rm{i}}_{k + 2,{{\left\{ 1 \right\}}_m}}}\left( {\frac{1}{2}} \right).
\end{align}
Therefore, combining (\ref{3.6}) and (\ref{3.7}) we may deduce the desired result.\hfill$\square$
\begin{thm}\label{thm3.3} For integers $m,k\in\N_0$, we have
\begin{align}\label{3.8}
\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_m},\bar 1,{{\left\{ 1 \right\}}_{k}}} \right) =& \frac{{{{\left( { - 1} \right)}^{m + k+1}}}}{{ {k} !}}\sum\limits_{j = 0}^{k} {{{\left( { - 1} \right)}^j}{{\left( {\ln 2} \right)}^{k- j}}j!\left( {\begin{array}{*{20}{c}}
{k} \\
j \\
\end{array}} \right)}\nonumber \\
&\quad\quad\quad\quad\times \left\{ {\zeta \left( {m + 2,{{\left\{ 1 \right\}}_j}} \right) - \sum\limits_{l = 0}^{m + 1} {\frac{{{{\left( {\ln 2} \right)}^{m + 1 - l}}}}{{\left( {m + 1 - l} \right)!}}{\rm{L}}{{\rm{i}}_{l + 1,{{\left\{ 1 \right\}}_j}}}\left( {\frac{1}{2}} \right)} } \right\}.
\end{align}
\end{thm}
\pf By using (\ref{3.1}), we can find that
\begin{align}\label{3.9}
\int\limits_0^1 {\frac{{{{\left( {\ln x} \right)}^k}{{\ln }^{m + 1}}\left( {1 - \frac{x}{2}} \right)}}{x}} dx &= {\left( { - 1} \right)^{m + 1}}\left( {m + 1} \right)!\sum\limits_{n = 1}^\infty {\frac{{{\zeta _{n - 1}}\left( {{{\left\{ 1 \right\}}_m}} \right)}}{{n{2^n}}}\int\limits_0^1 {{x^{n - 1}}{{\ln }^k}(x)dx} }\nonumber \\
&= {\left( { - 1} \right)^{m + k + 1}}\left( {m + 1} \right)!k!\sum\limits_{n = 1}^\infty {\frac{{{\zeta _{n - 1}}\left( {{{\left\{ 1 \right\}}_m}} \right)}}{{{n^{k + 2}}{2^n}}}} \nonumber \\
& = {\left( { - 1} \right)^{m + k + 1}}\left( {m + 1} \right)!k!{\rm{L}}{{\rm{i}}_{k + 2,{{\left\{ 1 \right\}}_m}}}\left( {\frac{1}{2}} \right).
\end{align}
Then it is readily seen that
\begin{align}\label{3.10}
{\rm{L}}{{\rm{i}}_{k + 2,{{\left\{ 1 \right\}}_m}}}\left( {\frac{1}{2}} \right) = \frac{{{{\left( { - 1} \right)}^{m + k+1}}}}{{\left( {m + 1} \right)!{k} !}}\int\limits_0^1 {\frac{{{{\left( {\ln x} \right)}^{k}}{{\ln }^{m + 1}}\left( {1 - \frac{x}{2}} \right)}}{x}} dx,
\end{align}
\begin{align}\label{3.11}
\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_m},\bar 1,{{\left\{ 1 \right\}}_{k}}} \right) = \frac{{{{\left( { - 1} \right)}^{k}}}}{{\left( {m + 1} \right)! {k}!}}\int\limits_0^1 {\frac{{{{\left( {\ln x} \right)}^{k}}{{\ln }^{m + 1}}\left( {1 - \frac{x}{2}} \right)}}{x}} dx.
\end{align}
On the other hand, applying the change of variable $x=2(1-u)$ to the above integral, we conclude that
\begin{align}\label{3.12}
&\int\limits_0^1 {\frac{{{{\left( {\ln x} \right)}^{k}}{{\ln }^{m + 1}}\left( {1 - \frac{x}{2}} \right)}}{x}} dx\nonumber \\&= \int\limits_{\frac{1}{2}}^1 {\frac{{{{\left( {\ln 2 + \ln \left( {1 - u} \right)} \right)}^{k}}{{\ln }^{m + 1}}(u)}}{{1 - u}}} du\nonumber\\
& = \sum\limits_{j = 0}^{k} {{{\left( {\ln 2} \right)}^{k- j}}\left( {\begin{array}{*{20}{c}}
{k} \\
j \\
\end{array}} \right)} \int\limits_{\frac{1}{2}}^1 {\frac{{{{\ln }^j}\left( {1 - u} \right){{\ln }^{m + 1}}(u)}}{{1 - u}}} du\nonumber\\
& = \sum\limits_{j = 0}^{k} {{{\left( { - 1} \right)}^j}{{\left( {\ln 2} \right)}^{k - j}}j!\left( {\begin{array}{*{20}{c}}
{k} \\
j \\
\end{array}} \right)\sum\limits_{n = 1}^\infty {{\zeta _{n - 1}}\left( {{{\left\{ 1 \right\}}_j}} \right)\int\limits_{\frac{1}{2}}^1 {{u^{n - 1}}{{\ln }^{m + 1}}(u)} du} }\nonumber \\
& = \left( {m + 1} \right)!{\left( { - 1} \right)^{m + 1}}\sum\limits_{j = 0}^{k} {{{\left( { - 1} \right)}^j}{{\left( {\ln 2} \right)}^{k - j}}j!\left( {\begin{array}{*{20}{c}}
{k} \\
j \\
\end{array}} \right)}\nonumber \\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\times \left\{ {\zeta \left( {m + 2,{{\left\{ 1 \right\}}_j}} \right) - \sum\limits_{l = 0}^{m + 1} {\frac{{{{\left( {\ln 2} \right)}^{m + 1 - l}}}}{{\left( {m + 1 - l} \right)!}}{\rm{L}}{{\rm{i}}_{l + 1,{{\left\{ 1 \right\}}_j}}}\left( {\frac{1}{2}} \right)} } \right\}.
\end{align}
Thus, substituting (\ref{3.12}) into (\ref{3.11}), we obtain the desired result. The proof of Theorem \ref{thm3.3} is finished. \hfill$\square$\\
From Theorem \ref{thm3.2} and Theorem \ref{thm3.3}, we immediately derive the following special case of the multiple zeta values.
\begin{cor} (Conjectured in \cite{BBBL1997})\label{cor3.4} For any integer $m\in\N_0$, the following identities hold:
\begin{align}\label{3.13}
&{\rm{L}}{{\rm{i}}_{2,{{\left\{ 1 \right\}}_m}}}\left( {\frac{1}{2}} \right) = \zeta \left( {m + 2} \right) - \sum\limits_{l = 0}^{m + 1} {\frac{{{{\left( {\ln 2} \right)}^{m + 1 - l}}}}{{\left( {m + 1 - l} \right)!}}{\rm{L}}{{\rm{i}}_{l + 1}}\left( {\frac{1}{2}} \right)} ,
\end{align}
\begin{align}\label{3.14}
\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_m},\bar 1} \right) = {\left( { - 1} \right)^{m + 1}}\left\{ {\zeta \left( {m + 2} \right) - \sum\limits_{l = 0}^{m + 1} {\frac{{{{\left( {\ln 2} \right)}^{m + 1 - l}}}}{{\left( {m + 1 - l} \right)!}}{\rm{L}}{{\rm{i}}_{l + 1}}\left( {\frac{1}{2}} \right)} } \right\}.
\end{align}
\end{cor}
\pf Setting $k=0$ in (\ref{3.3}) and (\ref{3.8}) we may easily deduce the results.\hfill$\square$\\
Note that Corollary \ref{cor3.4} is an immediate corollary of Zlobin's Theorem 9 (see \cite{SAZ2012}).
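Identity (\ref{3.13}) is also easy to test numerically, since the series defining the multiple polylogarithm at $\frac12$ converges geometrically. The following Python sketch (assuming the \texttt{mpmath} library; the helper names are ours) compares both sides of (\ref{3.13}) for $m=0,1,2,3$.
\begin{verbatim}
# Numerical check of (3.13); assumes mpmath is installed.
from mpmath import mp, mpf, log, zeta, polylog, factorial

mp.dps = 25
N = 400                      # truncation; 2^(-N) is negligible here

def Li_2_ones(m):
    # Li_{2,{1}_m}(1/2) = sum_{n_1>...>n_{m+1}>=1} 2^(-n_1)/(n_1^2 n_2...n_{m+1})
    T = [mpf(1)] * (N + 1)   # T[n] = zeta_{n-1}({1}_j), built for j = 0,...,m
    for _ in range(m):
        S, newT = mpf(0), [mpf(0)] * (N + 1)
        for n in range(1, N + 1):
            newT[n] = S      # equals sum_{r<n} T[r]/r
            S += T[n] / n
        T = newT
    return sum(mpf(2)**(-n) / n**2 * T[n] for n in range(1, N + 1))

for m in range(0, 4):
    rhs = zeta(m + 2) - sum(log(2)**(m + 1 - l) / factorial(m + 1 - l)
                            * polylog(l + 1, 0.5) for l in range(0, m + 2))
    print(m, Li_2_ones(m) - rhs)    # differences should be negligible
\end{verbatim}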
\begin{thm}\label{thm3.5} For integers $m,k\in \N_0$, we have
\begin{align}\label{3.15}
\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m}},\bar 1,\bar 1,{{\left\{ 1 \right\}}_{k}}} \right) =& \frac{{{{\left( { - 1} \right)}^{k}}}}{{(m+1)!(k+1)!}}\left\{ {(k+1){{\left( {\ln 2} \right)}^{m+1}}I\left( {k} \right) - \left( {m + k+2} \right)I\left( {m + k+ 1} \right)} \right\}\nonumber\\
& - \sum\limits_{i = 1}^{m} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}} \zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i}},\bar 1,\bar 1,{{\left\{ 1 \right\}}_{k}}} \right),
\end{align}
where $I\left( k \right)$ denotes the integral on the left hand side of (\ref{2.4}), namely
\[I\left( k \right): = \int\limits_0^1 {\frac{{{{\ln }^k}\left( {1 + x} \right)\ln \left( {1 - x} \right)}}{{1 + x}}dx}\quad (k\in \N),\]
with \[I\left( 0 \right) = \frac{{{{\ln }^2}\left( 2 \right) - \zeta \left( 2 \right)}}{2}.\]
\end{thm}
\pf By a similar argument as in the proof of Theorem \ref{thm3.2}, we consider the following multiple integral
\begin{align}\label{3.16}
{J_m}\left( k \right): = \int\limits_0^1 {\frac{1}{{1 + {t_1}}}d{t_1} \cdots } \int\limits_0^{{t_{m - 2}}} {\frac{1}{{1 + {t_{m - 1}}}}d{t_{m - 1}}} \int\limits_0^{{t_{m - 1}}} {\frac{1}{{1 + {t_m}}}d{t_m}} \int\limits_0^{{t_m}} {\frac{{{{\ln }^k}\left( {1 + {t_{m + 1}}} \right)}}{{1 - {t_{m + 1}}}}d{t_{m + 1}}}.
\end{align}
By using (\ref{c3}), we deduce that
\begin{align}\label{3.17}
\int\limits_0^x {\frac{1}{{1 + t}}dt\int\limits_0^t {\frac{{{{\ln }^k}\left( {1 + u} \right)}}{{1 - u}}du} } = {\left( { - 1} \right)^{k - 1}}k!\sum\limits_{n = 1}^\infty {\frac{{{\zeta _{n - 1}}\left( {\bar 1,\bar 1,{{\left\{ 1 \right\}}_{k - 1}}} \right)}}{n}{{\left( { - 1} \right)}^n}{x^n}} .
\end{align}
Hence, combining (\ref{3.16}) and (\ref{3.17}), it is easily shown that
\begin{align}\label{3.18}
{J_m}\left( k \right) = {\left( { - 1} \right)^{m + k}}k! \zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - 1}},\bar 1,\bar 1,{{\left\{ 1 \right\}}_{k - 1}}} \right).
\end{align}
On the other hand, using integration by parts, we can find that
\begin{align}\label{c6}
&{J_m}\left( k \right) = \int\limits_0^1 {\frac{1}{{1 + {t_1}}}\left( {\int\limits_{0 < {t_{m + 1}} < \cdots < {t_2} < {t_1}} {\frac{{{{\ln }^k}\left( {1 + {t_{m + 1}}} \right)}}{{\left( {1 + {t_2}} \right) \cdots \left( {1 + {t_m}} \right)\left( {1 - {t_{m + 1}}} \right)}}d{t_2} \cdots d{t_{m + 1}}} } \right)d{t_1}}\nonumber \\
=& \left. {\left( {\ln \left( {1 + {t_1}} \right)\int\limits_{0 < {t_{m + 1}} < \cdots < {t_2} < {t_1}} {\frac{{{{\ln }^k}\left( {1 + {t_{m + 1}}} \right)}}{{\left( {1 + {t_2}} \right) \cdots \left( {1 + {t_m}} \right)\left( {1 - {t_{m + 1}}} \right)}}d{t_2} \cdots d{t_{m + 1}}} } \right)} \right|_{{t_1} = 0}^{{t_1} = 1} \nonumber \\
& - \int\limits_0^1 {\frac{{\ln \left( {1 + {t_1}} \right)}}{{1 + {t_1}}}} \int\limits_{0 < {t_{m + 1}} < \cdots < {t_3} < {t_1}} {\frac{{{{\ln }^k}\left( {1 + {t_{m + 1}}} \right)}}{{\left( {1 + {t_3}} \right) \cdots \left( {1 + {t_m}} \right)\left( {1 - {t_{m + 1}}} \right)}}d{t_1}d{t_3} \cdots d{t_{m + 1}}} \nonumber \\
= &\left( {\ln 2} \right){J_{m - 1}}\left( k \right) - \int\limits_0^1 {\frac{{\ln \left( {1 + {t_1}} \right)}}{{1 + {t_1}}}d{t_1}\int\limits_0^{{t_1}} {\frac{1}{{1 + {t_2}}}d{t_2}} \cdots \int\limits_0^{{t_{m - 2}}} {\frac{1}{{1 + {t_{m - 1}}}}d{t_{m - 1}}\int\limits_0^{{t_{m - 1}}} {\frac{{{{\ln }^k}\left( {1 + {t_m}} \right)}}{{1 - {t_m}}}d{t_m}} } } \nonumber\\
=&\cdots\nonumber\\
=& \sum\limits_{i = 1}^{m - 1} {{{\left( { - 1} \right)}^{i - 1}}\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}{J_{m - i}}\left( k \right)} + \frac{{{{\left( { - 1} \right)}^{m - 1}}}}{{\left( {m - 1} \right)!}}\int\limits_0^1 {\frac{{{{\ln }^{m - 1}}\left( {1 + {t_1}} \right)}}{{1 + {t_1}}}d{t_1}\int\limits_0^{{t_1}} {\frac{{{{\ln }^k}\left( {1 + {t_2}} \right)}}{{1 - {t_2}}}d{t_2}} } .
\end{align}
Therefore, substituting (\ref{3.18}) into (\ref{c6}) yields
\begin{align}\label{3.19}
{J_m}\left( k \right) = &{\left( { - 1} \right)^{m + k - 1}}k!\sum\limits_{i = 1}^{m - 1} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}} \zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i - 1}},\bar 1,\bar 1,{{\left\{ 1 \right\}}_{k - 1}}} \right)\nonumber\\
& + \frac{{{{\left( { - 1} \right)}^{m - 1}}}}{{\left( {m - 1} \right)!}}\int\limits_0^1 {\frac{{{{\ln }^{m - 1}}\left( {1 + {t_1}} \right)}}{{1 + {t_1}}}d{t_1}\int\limits_0^{{t_1}} {\frac{{{{\ln }^k}\left( {1 + {t_2}} \right)}}{{1 - {t_2}}}d{t_2}} } .
\end{align}
Moreover, we note that the integral on the right-hand side of (\ref{3.19}) can be written as
\begin{align}\label{3.20}
&\int\limits_0^1 {\frac{{{{\ln }^{m - 1}}\left( {1 + {t_1}} \right)}}{{1 + {t_1}}}d{t_1}\int\limits_0^{{t_1}} {\frac{{{{\ln }^k}\left( {1 + {t_2}} \right)}}{{1 - {t_2}}}d{t_2}} }\nonumber \\
& = \mathop {\lim }\limits_{x \to 1} \left\{ {\int\limits_0^x {\frac{{{{\ln }^{m - 1}}\left( {1 + {t_1}} \right)}}{{1 + {t_1}}}d{t_1}\int\limits_0^{{t_1}} {\frac{{{{\ln }^k}\left( {1 + {t_2}} \right)}}{{1 - {t_2}}}d{t_2}} } } \right\}\nonumber\\
& = \frac{1}{m}\mathop {\lim }\limits_{x \to 1} \left\{ {\int\limits_0^x {\frac{{{{\ln }^m}\left( {1 + x} \right){{\ln }^k}\left( {1 + t} \right) - {{\ln }^{m + k}}\left( {1 + t} \right)}}{{1 - t}}dt} } \right\}\nonumber\\
& = \frac{1}{m}\left\{ {k{{\left( {\ln 2} \right)}^m}\int\limits_0^1 {\frac{{{{\ln }^{k - 1}}\left( {1 + t} \right)\ln \left( {1 - t} \right)}}{{1 + t}}dt} - \left( {m + k} \right)\int\limits_0^1 {\frac{{{{\ln }^{m + k - 1}}\left( {1 + t} \right)\ln \left( {1 - t} \right)}}{{1 + t}}dt} } \right\}.
\end{align}
Therefore, replacing $k-1$ by $k$ and $m-1$ by $m$, the relations (\ref{3.18}), (\ref{3.19}) and (\ref{3.20}) yield the desired result. The proof of Theorem \ref{thm3.5} is completed.\hfill$\square$
From Lemma \ref{lem2.2} and Theorem \ref{thm3.5}, we have the conclusion: if $m,k\in \N_0$, then the alternating multiple zeta values $\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m}},\bar 1,\bar 1,{{\left\{ 1 \right\}}_{k}}} \right)$
can be expressed as a rational linear combination of zeta values, polylogarithms and $\ln(2)$.
We now close this section with several examples.
\begin{exa} By (\ref{3.14}) and (\ref{3.15}), we have
\begin{align*}
&\zeta \left( {\bar 1,1,\bar 1} \right) = \frac{1}{8}\zeta \left( 3 \right) - \frac{1}{6}{\ln ^3}(2),\\
&\zeta \left( {\bar 1,\bar1,\bar 1} \right) = -\frac{1}{4}\zeta \left( 3 \right)+\frac{1}{2}\zeta \left( 2 \right)\ln(2) - \frac{1}{6}{\ln ^3}(2),\\
&\zeta \left( {\bar 1,1,1,\bar 1} \right) = {\rm{L}}{{\rm{i}}_4}\left( {\frac{1}{2}} \right) + \frac{1}{{12}}{\ln ^4}(2) + \frac{7}{8}\zeta \left( 3 \right)\ln(2) - \frac{1}{4}\zeta \left( 2 \right){\ln ^2}(2) - \zeta \left( 4 \right),\\
&\zeta \left( {\bar 1,\bar 1,\bar 1,1} \right) = 3{\rm{L}}{{\rm{i}}_4}\left( {\frac{1}{2}} \right) + \frac{1}{6}{\ln ^4}(2) + \frac{{23}}{8}\zeta \left( 3 \right)\ln (2) - \zeta \left( 2 \right){\ln ^2}(2) - 3\zeta \left( 4 \right),\\
&\zeta \left( {\bar 1,1,\bar 1,\bar 1} \right) = - 3{\rm{L}}{{\rm{i}}_4}\left( {\frac{1}{2}} \right) - \frac{1}{{12}}{\ln ^4}(2)- \frac{{11}}{4}\zeta \left( 3 \right)\ln (2) - \frac{3}{4}\zeta \left( 2 \right){\ln ^2}(2) + 3\zeta \left( 4 \right).
\end{align*}
\end{exa}
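The first and the third of these values can be re-checked numerically through Theorem \ref{thm3.2}, which identifies them with ${\rm{L}}{{\rm{i}}_{2,1}}\left( {\frac{1}{2}} \right)$ and $-{\rm{L}}{{\rm{i}}_{2,1,1}}\left( {\frac{1}{2}} \right)$, respectively; a minimal Python sketch (assuming the \texttt{mpmath} library) is the following.
\begin{verbatim}
# Check zeta(bar1,1,bar1) = Li_{2,1}(1/2) and zeta(bar1,1,1,bar1) = -Li_{2,1,1}(1/2)
# against the closed forms above; assumes mpmath is installed.
from mpmath import mp, mpf, log, zeta, polylog

mp.dps = 25
N = 90                       # truncation; 2^(-N) is negligible here

Li21 = sum(mpf(2)**(-n1) / n1**2 / n2
           for n1 in range(2, N + 1) for n2 in range(1, n1))
Li211 = sum(mpf(2)**(-n1) / n1**2 / n2 / n3
            for n1 in range(3, N + 1) for n2 in range(2, n1)
            for n3 in range(1, n2))

ex1 = zeta(3)/8 - log(2)**3/6
ex3 = (polylog(4, 0.5) + log(2)**4/12 + mpf(7)/8*zeta(3)*log(2)
       - mpf(1)/4*zeta(2)*log(2)**2 - zeta(4))
print(Li21 - ex1, -Li211 - ex3)    # both differences should be negligible
\end{verbatim}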
\section{Some evaluations of restricted sum formulas involving multiple zeta values}
In \cite{X2017}, we considered the following restricted sum formulas involving multiple zeta values
\[{\left( { - 1} \right)^{m+1}}\sum\limits_{i = 0}^{m} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i}},3,{{\left\{ 1 \right\}}_{k}}} \right)} + {\left( { - 1} \right)^{k+1}}\sum\limits_{i = 0}^{k} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{k - i}},3,{{\left\{ 1 \right\}}_{m }}} \right)} \]
and gave explicit reductions to multiple zeta values of depth less than ${\rm max}\{k+3,m+3\}$, where $m,k\in \N_0$. Moreover, we also proved that the restricted sum
\[\sum\limits_{i = 0}^{m} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i}},2,{{\left\{ 1 \right\}}_{k}}} \right)} \]
can be expressed in terms of zeta values and polylogarithms, which implies that for any $m,k\in \N_0$, the multiple zeta value $\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m}},2,{{\left\{ 1 \right\}}_{k}}} \right)$ can be represented as a polynomial in zeta values and polylogarithms with rational coefficients. In particular, one can find the following explicit formula in weight 4:
\begin{align*}
\zeta \left( {\bar 1,1,2} \right) = 3{\rm{L}}{{\rm{i}}_4}\left( {\frac{1}{2}} \right) + \frac{1}{8}{\ln ^4}(2) + \frac{{23}}{8}\zeta \left( 3 \right)\ln (2) - \zeta \left( 2 \right){\ln ^2}(2) - 3\zeta \left( 4 \right).
\end{align*}
In this section, we will consider the general restricted sum
\[{\left( { - 1} \right)^{m+1}}\sum\limits_{i = 0}^{m} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i}},p + 3,{{\left\{ 1 \right\}}_{k}}} \right)} + {\left( { - 1} \right)^{p + k+1}}\sum\limits_{i = 0}^{k} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{k - i}},p + 3,{{\left\{ 1 \right\}}_{m }}} \right)} \]
where $m,k,p\in\N_0$.
Now we are ready to state and prove our main results. Note that our proof of Theorem \ref{thm4.1} is based on Lemma \ref{lem3.1}.
\begin{thm}\label{thm4.1} For integers $m,k,p\in\N_0$, we have
\begin{align}\label{4.1}
&{\left( { - 1} \right)^{m+1}}\sum\limits_{i = 0}^{m} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i}},p + 3,{{\left\{ 1 \right\}}_{k}}} \right)}\nonumber \\
&\quad + {\left( { - 1} \right)^{p + k+1}}\sum\limits_{i = 0}^{k} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{k - i}},p + 3,{{\left\{ 1 \right\}}_{m}}} \right)} \nonumber\\
&= \frac{{{{\left( { - 1} \right)}^{m }}}}{{(m+1)!}}{\left( {\ln 2} \right)^{m+1}}\zeta \left( {{\overline {p + 3}},{{\left\{ 1 \right\}}_{k}}} \right)\nonumber\\
&\quad + \frac{{{{\left( { - 1} \right)}^{p + k }}}}{{(k+1)!}}{\left( {\ln 2} \right)^{k+1}}\zeta \left( {{\overline {p + 3}},{{\left\{ 1 \right\}}_{m}}} \right)\nonumber\\
&\quad + \sum\limits_{i = 0}^p {{{\left( { - 1} \right)}^i}\zeta \left( {{\overline {2 + i}},{{\left\{ 1 \right\}}_{m}}} \right)} \zeta \left( {{\overline {p + 2 - i}},{{\left\{ 1 \right\}}_{k }}} \right).
\end{align}
\end{thm}
\pf Proceeding as in the proof of Theorem \ref{thm3.5}, we consider the multiple integral
\[
{R_{m,k}}\left( p \right): = \int\limits_{0 < {t_{m + p + 2}} < \cdots < {t_1} < 1} {\frac{{{{\ln }^k}\left( {1 + {t_{m + p + 2}}} \right)}}{{\left( {1 + {t_1}} \right) \cdots \left( {1 + {t_m}} \right){t_{m + 1}} \cdots {t_{m + p + 1}}{t_{m + p + 2}}}}} d{t_1} \cdots d{t_{m + p + 2}}.
\]
Then with the help of formula (\ref{3.1}), we may easily deduce that
\begin{align}\label{4.2}
{R_{m,k}}\left( p \right) = {\left( { - 1} \right)^{m + k}}k!\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - 1}},p + 3,{{\left\{ 1 \right\}}_{k - 1}}} \right).
\end{align}
On the other hand, by a similar argument as in the proof of formula (\ref{c6}) (using integration by parts), it is easily shown that
\begin{align}\label{4.3}
&{\left( { - 1} \right)^{m + k}}k!\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - 1}},p + 3,{{\left\{ 1 \right\}}_{k - 1}}} \right)\nonumber\\
& = {\left( { - 1} \right)^{m + k + 1}}k!\sum\limits_{i = 1}^{m - 1} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - 1 - i}},p + 3,{{\left\{ 1 \right\}}_{k - 1}}} \right)} \nonumber\\
&\quad + \frac{{{{\left( { - 1} \right)}^{m - 1}}}}{{\left( {m - 1} \right)!}}\int\limits_0^1 {\frac{{{{\ln }^{m - 1}}\left( {1 + {t_1}} \right)}}{{1 + {t_1}}}d{t_1}} \int\limits_0^{{t_1}} {\frac{1}{{{t_2}}}d{t_2} \cdots \int\limits_0^{{t_{p + 1}}} {\frac{1}{{{t_{p + 2}}}}d{t_{p + 2}}} } \int\limits_0^{{t_{p + 2}}} {\frac{{{{\ln }^k}\left( {1 + {t_{p + 3}}} \right)}}{{{t_{p + 3}}}}d{t_{p + 3}}}\nonumber \\
& = {\left( { - 1} \right)^{m + k + 1}}k!\sum\limits_{i = 1}^{m - 1} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - 1 - i}},p + 3,{{\left\{ 1 \right\}}_{k - 1}}} \right)}\nonumber \\
&\quad + \frac{{{{\left( { - 1} \right)}^{m +k- 1}}}}{{m!}}{k!\left( {\ln 2} \right)^m}\zeta \left( {{\overline {p + 3}},{{\left\{ 1 \right\}}_{k - 1}}} \right)\nonumber\\
&\quad + \frac{{{{\left( { - 1} \right)}^m}}}{{m!}}\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 + {t_1}} \right)}}{{{t_1}}}d{t_1}} \int\limits_0^{{t_1}} {\frac{1}{{{t_2}}}d{t_2} \cdots \int\limits_0^{{t_p}} {\frac{1}{{{t_{p + 1}}}}d{t_{p + 1}}} } \int\limits_0^{{t_{p + 1}}} {\frac{{{{\ln }^k}\left( {1 + {t_{p + 2}}} \right)}}{{{t_{p + 2}}}}d{t_{p + 2}}} .
\end{align}
Hence, by a direct calculation, we can get the following identity
\begin{align}\label{4.4}
&\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 + {t_1}} \right)}}{{{t_1}}}d{t_1}} \int\limits_0^{{t_1}} {\frac{1}{{{t_2}}}d{t_2} \cdots \int\limits_0^{{t_p}} {\frac{1}{{{t_{p + 1}}}}d{t_{p + 1}}} } \int\limits_0^{{t_{p + 1}}} {\frac{{{{\ln }^k}\left( {1 + {t_{p + 2}}} \right)}}{{{t_{p + 2}}}}d{t_{p + 2}}}\nonumber \\
& = {\left( { - 1} \right)^k}m!k!\sum\limits_{i = 0}^{m - 1} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - 1 - i}},p + 3,{{\left\{ 1 \right\}}_{k - 1}}} \right)}\nonumber \\
&\quad + {\left( { - 1} \right)^k}k!{\left( {\ln 2} \right)^m}\zeta \left( {{\overline {p + 3}},{{\left\{ 1 \right\}}_{k - 1}}} \right).
\end{align}
Moreover, applying the same argument as above, we deduce that
\begin{align}\label{4.5}
&\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 + {t_1}} \right)}}{{{t_1}}}d{t_1}} \int\limits_0^{{t_1}} {\frac{1}{{{t_2}}}d{t_2} \cdots \int\limits_0^{{t_p}} {\frac{1}{{{t_{p + 1}}}}d{t_{p + 1}}} } \int\limits_0^{{t_{p + 1}}} {\frac{{{{\ln }^k}\left( {1 + {t_{p + 2}}} \right)}}{{{t_{p + 2}}}}d{t_{p + 2}}}\nonumber \\
& + {\left( { - 1} \right)^p}\int\limits_0^1 {\frac{{{{\ln }^k}\left( {1 + {t_1}} \right)}}{{{t_1}}}d{t_1}} \int\limits_0^{{t_1}} {\frac{1}{{{t_2}}}d{t_2} \cdots \int\limits_0^{{t_p}} {\frac{1}{{{t_{p + 1}}}}d{t_{p + 1}}} } \int\limits_0^{{t_{p + 1}}} {\frac{{{{\ln }^m}\left( {1 + {t_{p + 2}}} \right)}}{{{t_{p + 2}}}}d{t_{p + 2}}}\nonumber \\
&= {\left( { - 1} \right)^{m + k}}k!m!\sum\limits_{i = 0}^p {{{\left( { - 1} \right)}^i}\zeta \left( {{\overline {2 + i}},{{\left\{ 1 \right\}}_{m - 1}}} \right)} \zeta \left( {{\overline {p + 2 - i}},{{\left\{ 1 \right\}}_{k - 1}}} \right).
\end{align}
Thus, combining the formulas (\ref{4.4}) and (\ref{4.5}), then replacing $k-1$ by $k$ and $m-1$ by $m$, we obtain the desired result. This completes the proof of Theorem \ref{thm4.1}.\hfill$\square$\\
Taking $p=0$ in Theorem \ref{thm4.1}, we get the following corollary which was first proved in \cite{X2017}.
\begin{cor} For integers $m,k\in \N_0$, we have
\begin{align}\label{4.6}
&{\left( { - 1} \right)^{m+1}}\sum\limits_{i = 0}^{m } {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i}},3,{{\left\{ 1 \right\}}_{k}}} \right)}\nonumber \\
&+ {\left( { - 1} \right)^{k+1}}\sum\limits_{i = 0}^{k} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{k - i}},3,{{\left\{ 1 \right\}}_{m}}} \right)}\nonumber \\
& = \frac{{{{\left( { - 1} \right)}^{m}}}}{{(m+1)!}}{\left( {\ln 2} \right)^{m+1}}\zeta \left( {\bar 3,{{\left\{ 1 \right\}}_{k}}} \right) + \frac{{{{\left( { - 1} \right)}^{k }}}}{{(k+1)!}}{\left( {\ln 2} \right)^{k+1}}\zeta \left( {\bar 3,{{\left\{ 1 \right\}}_{m}}} \right)\nonumber\\
&\quad + \zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{m}}} \right)\zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{k}}} \right).
\end{align}
\end{cor}
\begin{thm}\label{thm4.3} For integers $m,k,p\in \N_0$, the following identity holds:
\begin{align}\label{4.7}
&{\left( { - 1} \right)^{k + p}}(m+1)!(k+1)!\sum\limits_{i = 0}^{m} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i}},2,{{\left\{ 1 \right\}}_{p}},2,{{\left\{ 1 \right\}}_{k}}} \right)}\nonumber \\
&+ {\left( { - 1} \right)^{m+1}}(k+1)!(m+1)!\sum\limits_{i = 0}^{k} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{k - i}},2,{{\left\{ 1 \right\}}_{p}},2,{{\left\{ 1 \right\}}_{m}}} \right)}\nonumber \\
&+ {\left( { - 1} \right)^{k + p}}(k+1)!{\left( {\ln 2} \right)^{m+1}}\zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{p}},2,{{\left\{ 1 \right\}}_{k}}} \right)\nonumber \\
& + {\left( { - 1} \right)^{m+1}}(m+1)!{\left( {\ln 2} \right)^{k+1}}\zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{p}},2,{{\left\{ 1 \right\}}_{m}}} \right)\nonumber \\
& = {\left( { - 1} \right)^{m + k + p+1}}{(m+1)}!{(k+1)}!\zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{m}}} \right)\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{p}},2,{{\left\{ 1 \right\}}_{k}}} \right)\nonumber \\
&\quad + {\left( { - 1} \right)^{m + k}}(k+1)!(m+1)!\zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{k}}} \right)\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{p}},2,{{\left\{ 1 \right\}}_{m }}} \right)\nonumber \\
&\quad + {\left( { - 1} \right)^{m + k + p+1}}(k+1)!(m+1)!\sum\limits_{i = 1}^{p} {{{\left( { - 1} \right)}^i}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{i - 1}},2,{{\left\{ 1 \right\}}_{m }}} \right)\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{p - i}},2,{{\left\{ 1 \right\}}_{k}}} \right)} .
\end{align}
\end{thm}
\pf The proof of Theorem \ref{thm4.3} is similar to the proof of Theorem \ref{thm4.1}. We consider the iterated integral
\[{Q_{m,k}}\left( p \right): = \int\limits_{0 < {t_{m + p + 2}} < \cdots < {t_1} < 1} {\frac{{{{\ln }^k}\left( {1 + {t_{m + p + 2}}} \right)d{t_1} \cdots d{t_{m + p + 2}}}}{{\left( {1 + {t_1}} \right) \cdots \left( {1 + {t_m}} \right){t_{m + 1}}\left( {1 + {t_{m + 2}}} \right) \cdots \left( {1 + {t_{m + p + 1}}} \right){t_{m + p + 2}}}}} .\]
By a similar argument as in the proof of formula (\ref{4.1}), we can obtain the following identities
\begin{align}\label{4.8}
&{Q_{m,k}}\left( p \right) = {\left( { - 1} \right)^{k + m + p}}k!\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - 1}},2,{{\left\{ 1 \right\}}_{p - 1}},2,{{\left\{ 1 \right\}}_{k - 1}}} \right),
\end{align}
\begin{align}\label{4.9}
&\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 + {t_1}} \right)}}{{{t_1}}}d{t_1}} \int\limits_0^{{t_1}} {\frac{1}{{1 + {t_2}}}d{t_2} \cdots \int\limits_0^{{t_p}} {\frac{1}{{1 + {t_{p + 1}}}}d{t_{p + 1}}} } \int\limits_0^{{t_{p + 1}}} {\frac{{{{\ln }^k}\left( {1 + {t_{p + 2}}} \right)}}{{{t_{p + 2}}}}d{t_{p + 2}}}\nonumber \\
& = {\left( { - 1} \right)^{k + p}}m!k!\sum\limits_{i = 0}^{m - 1} {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i - 1}},2,{{\left\{ 1 \right\}}_{p - 1}},2,{{\left\{ 1 \right\}}_{k - 1}}} \right)}\nonumber \\
&\quad+ {\left( { - 1} \right)^{p + k}}k!{\left( {\ln 2} \right)^m}\zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{p - 1}},2,{{\left\{ 1 \right\}}_{k - 1}}} \right),
\end{align}
\begin{align}\label{4.10}
&\int\limits_0^1 {\frac{{{{\ln }^m}\left( {1 + {t_1}} \right)}}{{{t_1}}}d{t_1}} \int\limits_0^{{t_1}} {\frac{1}{{1 + {t_2}}}d{t_2} \cdots \int\limits_0^{{t_p}} {\frac{1}{{1 + {t_{p + 1}}}}d{t_{p + 1}}} } \int\limits_0^{{t_{p + 1}}} {\frac{{{{\ln }^k}\left( {1 + {t_{p + 2}}} \right)}}{{{t_{p + 2}}}}d{t_{p + 2}}}\nonumber \\
&+ {\left( { - 1} \right)^p}\int\limits_0^1 {\frac{{{{\ln }^k}\left( {1 + {t_1}} \right)}}{{{t_1}}}d{t_1}} \int\limits_0^{{t_1}} {\frac{1}{{1 + {t_2}}}d{t_2} \cdots \int\limits_0^{{t_p}} {\frac{1}{{1 + {t_{p + 1}}}}d{t_{p + 1}}} } \int\limits_0^{{t_{p + 1}}} {\frac{{{{\ln }^m}\left( {1 + {t_{p + 2}}} \right)}}{{{t_{p + 2}}}}d{t_{p + 2}}} \nonumber \\
& = {\left( { - 1} \right)^{m + k + p}}m!k!\zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{m - 1}}} \right)\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{p - 1}},2,{{\left\{ 1 \right\}}_{k - 1}}} \right)\nonumber \\
&\quad + {\left( { - 1} \right)^{m + k}}k!m!\zeta \left( {\bar 2,{{\left\{ 1 \right\}}_{k - 1}}} \right)\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{p - 1}},2,{{\left\{ 1 \right\}}_{m - 1}}} \right)\nonumber \\
&\quad + {\left( { - 1} \right)^{m + k + p}}k!m!\sum\limits_{i = 1}^{p - 1} {{{\left( { - 1} \right)}^i}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{i - 1}},2,{{\left\{ 1 \right\}}_{m - 1}}} \right)\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{p - i - 1}},2,{{\left\{ 1 \right\}}_{k - 1}}} \right)} .
\end{align}
Hence, combining formulas (\ref{4.8})-(\ref{4.10}), then replacing $k-1$ by $k$, $m-1$ by $m$ and $p-1$ by $p$, we may easily deduce the desired result.\hfill$\square$\\
Setting $p=1,k=m=0$ in Theorem \ref{thm4.3}, we get
\[\zeta \left( {\bar 1,2,1,2} \right) + \zeta \left( {\bar 2,1,2} \right)\ln (2) + \zeta \left( {\bar 2} \right)\zeta \left( {\bar 1,1,2} \right) = \frac 1{2}\zeta^2\left({\bar 1},2 \right).\]
In fact, proceeding by a method similar to that used in the proofs of Theorems \ref{thm3.2}, \ref{thm4.1} and \ref{thm4.3}, it is possible to evaluate
other alternating multiple zeta values. For example, we have used our method to obtain the following explicit integral representations and closed-form evaluations of multiple zeta values:
\begin{align}
&\zeta \left( {{{\left\{ {\bar 1} \right\}}_{2p + 2}},{{\left\{ 1 \right\}}_{k}}} \right) = {\left( { - 1} \right)^{p + 1}}{\rm{L}}{{\rm{i}}_{k + 2,{{\left\{ 2 \right\}}_p}}}\left( {\frac{1}{2}} \right)\nonumber\\
& = \frac{{{{\left( { - 1} \right)}^{k + p}}}}{{(k+1)!}}\int\limits_{0 < {t_{2p + 1}} < \cdots < {t_1} < 1} {\frac{{{{\ln }^{k+1}}\left( {1 - {t_{2p + 1}}} \right)}}{{\prod\limits_{j = 1}^p {\left\{ {\left( {1 + {t_{2j - 1}}} \right)\left( {1 - {t_{2j}}} \right)} \right\}\left( {1 + {t_{2p + 1}}} \right)} }}} d{t_1} \cdots d{t_{2p + 1}},\\
&\zeta \left( {{{\left\{ {\bar 1} \right\}}_{2p + 1}},{{\left\{ 1 \right\}}_{k}}} \right) = \frac{{{{\left( { - 1} \right)}^{k + p+1}}}}{{(k+1)!}}\int\limits_{0 < {t_{2p}} < \cdots < {t_1} < 1} {\frac{{{{\ln }^{k+1}}\left( {1 + {t_{2p}}} \right)}}{{\prod\limits_{j = 1}^p {\left\{ {\left( {1 + {t_{2j - 1}}} \right)\left( {1 - {t_{2j}}} \right)} \right\}} }}} d{t_1} \cdots d{t_{2p}},\\
&\sum\limits_{i = 0}^m {\frac{{{{\left( {\ln 2} \right)}^i}}}{{i!}}\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_{m - i}},{{\left\{ {\bar 1} \right\}}_{2p}},{{\left\{ 1 \right\}}_{k}}} \right)} \nonumber \\
&= \frac{{{{\left( { - 1} \right)}^{k + p+1}}}}{{(k+1)!m!}}\int\limits_{0 < {t_{2p}} < \cdots < {t_1} < 1} {\frac{{{{\ln }^m}\left( {1 + {t_1}} \right){{\ln }^{k+1}}\left( {1 + {t_{2p}}} \right)}}{{\prod\limits_{j = 1}^p {\left\{ {\left( {1 + {t_{2j - 1}}} \right)\left( {1 - {t_{2j}}} \right)} \right\}} }}d{t_1} \cdots d{t_{2p}}}, \\
&\zeta \left( {\bar 1,{{\left\{ 1 \right\}}_m},{{\left\{ {\bar 1} \right\}}_{2p + 1}},{{\left\{ 1 \right\}}_{k}}} \right) = {\left( { - 1} \right)^{m + p + 1}}{\rm{L}}{{\rm{i}}_{k + 2,{{\left\{ 2 \right\}}_p},{{\left\{ 1 \right\}}_m}}}\left( {\frac{1}{2}} \right).
\end{align}
{\bf Acknowledgments.} We thank the anonymous referee for suggestions which led to improvements in the exposition.
\section{Introduction}
This paper is dedicated to L\'aszl\'o Z\'adori not only because of his birthday, but also because a nice construction from his very first mathematical paper is heavily used here.
Our starting point is that Strietz~\cite{strietz1,strietz2} proved in 1975 that
\begin{equation}\left.
\parbox{7.5cm}{the lattice $\Part n$ of all partitions of the (finite) set $\set{1,2,\dots,n}$ is a four-generated lattice.}
\,\,\right\}
\label{eqpbxstRszlT}
\end{equation}
A decade later, Z\'adori~\cite{zadori} gave a very elegant proof of this result (and proved even more, which is not used in the present paper). Z\'adori's construction has opened lots of perspectives; this is witnessed by Chajda and Cz\'edli~\cite{chcz},
Cz\'edli~\cite{czedlismallgen,czedlifourgen,czedlioneonetwo,czgfourgeneqatoms},
Cz\'edli and Kulin~\cite{czgkulin}, Kulin~\cite{kulin}, and Tak\'ach~\cite{takach}.
Our goal is to generalize \eqref{eqpbxstRszlT} from partition lattices to their direct powers; see Theorems~\ref{thmmain} and \ref{thmoot} later.
Passing from $\Part n$ to $\Part n^k$ is of interest for four reasons, which will be given in more detail later; here we only mention these reasons briefly.
First, even the direct square of a four-generated lattice need not be four-generated. Second, if some direct power of a lattice is four-generated, then so are the original lattice and all of its other direct powers with smaller exponents; see Corollaries \ref{coroLprd} and \ref{corodjTd}.
Third, for each non-singleton finite lattice $L$, there is a (large) positive integer $k_0=k_0(L)$ such that for every $k\geq k_0$, the direct power $L^k$ is \emph{not} four-generated; this explains why the exponent cannot be arbitrary in our theorems. We admit that we could not
determine the set $\set{k: \Part n^k\text{ is four-generated}}$, that is, we could not find
the least $k_0$; this task will probably remain unsolved for a long time. Fourth, a whole section of this paper is devoted to the applicability of complicated lattices with few generators in Information Theory.
Although this paper has some links to Information Theory, it is primarily a \emph{lattice theoretical} paper.
Note that only some elementary facts, regularly taught in graduate (and often in undergraduate) algebra, are needed about lattices. For those who know how to compute the join of two equivalence relations the paper is probably self-contained. If not, then a small part of each of the monographs Burris and Sankappanavar \cite{burrsankapp},
Gr\"atzer~\cite{ggeneral,ggglt}, and Nation~\cite{nationbook}
can be recommended; note that \cite{burrsankapp} and \cite{nationbook} are freely downloadable at the time of writing.
\subsection*{Outline}
The rest of the paper is structured as follows. Section~\ref{sectztrms}
gives the rudiments of partition lattices and recalls Z\'adori's construction in detail; these details will be used in the subsequent two sections. Section~\ref{sectprod} formulates and proves our first result, Theorem~\ref{thmmain}, which
asserts that $\Part n^k$ is four-generated for certain values of $k$. In Section~\ref{sectootwo}, we formulate and prove Theorem~\ref{thmoot} about the existence of a four-element generating set of order type $1+1+2$ in $\Part n^k$.
Finally, Section~\ref{sectauth} offers a protocol for authentication based on partition lattices and their direct powers; this protocol can also be used in secret key cryptography.
\section{Rudiments and Z\'adori's construction}\label{sectztrms}
Below, we are going to give some details in few lines for the sake of those not familiar with partition lattices and, in addition, we are going to fix the corresponding notation.
For a set $A$, a set of pairwise disjoint nonempty subsets of $A$ is a \emph{partition} of $A$ if the union of these subsets, called \emph{blocks}, is $A$. For example,
\begin{equation}
U=\set{\set{1,3},\set{2,4},\set{5}}
\label{eqUpPrrsB}
\end{equation}
is a partition of $A=\set{1,2,3,4,5}$. For pairwise distinct elements $a_1,\dots,a_k$ of $A$, the partition of $A$ with block $\set{a_1,\dots, a_k}$ such that all the other blocks are singletons will be denoted by
$\kequ{a_1,\dots a_k}$. Then, in our notation, $U$ from \eqref{eqUpPrrsB} is the same as
\begin{equation}
\equ13 + \equ24.
\label{eqczrggtbxmdGXP}
\end{equation}
For partitions $U$ and $V$ of $A$, we say that $U\leq V$ if and only if every block of $U$ is a subset of a (unique) block of $V$. With this ordering, the set of all partitions of $A$ turns into a lattice, which we denote by $\Part A$.
For brevity,
\begin{equation}
\text{$\Part n$ will stand for $\Part{\set{1,2,\dots,n}}$,}
\label{pbxPartnmM}
\end{equation}
and also for $\Part A$ when $A$ is a given set consisting of $n$ elements.
Associated with a partition $U$ of $A$, we define an \emph{equivalence relation} $\pi_U$ of $A$ as the collection of all pairs $(x,y)\in A^2$ such that $x$ and $y$ belong to the same block of $U$. As it is well known, the equivalence relations and the partitions of $A$ mutually determine each other, and $\pi_U\leq \pi_V$ (which is our notation for $\pi_U\subseteq \pi_V$) if and only if $U\leq V$. Hence, the \emph{lattice $\Equ A$ of all equivalence relations} of $A$ (in short, the \emph{equivalence lattice} of $A$) is isomorphic to $\Part A$. In what follows, we do not make a sharp distinction between a partition and the corresponding equivalence relation; no matter which of them is given, we can use the other one without warning. For example, \eqref{eqczrggtbxmdGXP} also denotes an equivalence relation associated with the partition given in \eqref{eqUpPrrsB}, provided the base set $\set{1,2,\dots,5}$ is understood. So we define and denote equivalences as the partitions above but we prefer to work in $\Equ A$ and $\Equ{n}=\Equ{\set{1,\dots,n}}$, because the lattice operations are easier to handle in $\Equ A$. For $\kappa,\lambda\in \Equ A$, the \emph{meet} and the \emph{join} of $\kappa$ and $\lambda$, denoted by $\kappa \lambda$ (or $\kappa\cdot\lambda$) and $\kappa+\lambda$, are the intersection and the transitive hull of the union of $\kappa$ and $\lambda$, respectively. The advantage of this notation is that the usual precedence rule allows us to write, say, $xy+xz$ instead of $(x\wedge y)\vee (x\wedge z)$.
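For readers who would like to experiment with these operations, the following minimal Python sketch (the function names are ours, not taken from the literature) computes the meet and the join of two partitions of a finite set; in accordance with the description above, the join is obtained as the transitive closure of the union. The example reproduces $U$ from \eqref{eqUpPrrsB}.
\begin{verbatim}
# Meet and join in Part(A), a minimal sketch in plain Python.
def meet(P, Q):
    # the blocks of the meet are the nonempty intersections of blocks
    return [b for b in (set(B) & set(C) for B in P for C in Q) if b]

def join(P, Q):
    # transitive closure of the union, via union-find on the base set
    parent = {x: x for B in P for x in B}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    for block in list(P) + list(Q):
        block = list(block)
        for y in block[1:]:
            parent[find(block[0])] = find(y)
    blocks = {}
    for x in parent:
        blocks.setdefault(find(x), set()).add(x)
    return list(blocks.values())

U = [{1, 3}, {2, 4}, {5}]          # the partition U from the text above
V = [{1, 2}, {3}, {4}, {5}]
print(meet(U, V))                  # five singleton blocks
print(join(U, V))                  # blocks {1,2,3,4} and {5}
\end{verbatim}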
\emph{Lattice terms} are composed from variables and join and meet operation signs in the usual way; for example, $f(x_1,x_2,x_3,x_4)=(x_1+x_2)(x_3+x_4)+(x_1+x_3)(x_2+x_4)$ is a quaternary lattice term.
Given a lattice $L$ and $a_1,\dots, a_k\in L$, the \emph{sublattice generated} by $\set{a_1,\dots,a_k}$ is denoted and defined by
\begin{equation}
\sublat{a_1,\dots,a_k}:=\set{f(a_1,\dots,a_k): a_1,\dots,a_k\in L,\,\,f\text{ is a lattice term}}.
\end{equation}
If there are pairwise distinct elements $a_1,\dots,a_k\in L$ such that $\sublat{a_1,\dots,a_k}=L$ then $L$ is said to be a \emph{$k$-generated lattice}.
Almost exclusively, we are going to define our equivalence relations by (undirected simple, edge-coloured) graphs. Every horizontal thin straight edge is $\alpha$-colored but its color, $\alpha$, is not always indicated in the figures. The thin straight edges of slope 1, that is the southwest-northeast edges, are $\beta$-colored while the thin straight edges with slope $-1$, that is the southeast-northwest edges, are $\gamma$-colored.
Finally, the thin \emph{solid} curved edges are $\delta$-colored.
(We should disregard the \emph{dashed} ovals at this moment. Note that except for Figure~\ref{figd4}, every edge is thin.)
Figure~\ref{figd1} helps to keep this convention in mind.
On the vertex set $A$, this figure and the other figures in the paper define an \emph{equivalence} (relation) $\alpha\in \Equ A$ in the following way: deleting all edges but the $\alpha$-colored ones, the components of the remaining graph are the blocks of the partition associated with $\alpha$. In other words, $\pair x y\in\alpha$ if and only if there is an $\alpha$-coloured path from vertex $x$ to vertex $y$ in the graph, that is, a path (of possibly zero length) all of whose edges are $\alpha$-colored. The equivalences $\beta$, $\gamma$, and $\delta$ are defined analogously. The success of Z\'adori's construction, to be discussed soon, lies in the fact of this visualization. Note that, to make our figures less crowded, the labels $\alpha,\dots,\delta$ are not always indicated but
\begin{equation}
\text{our convention, shown in Figure \ref{figd1}, defines the colour of the edges}
\label{eqtxtsdhClrsznzlD}
\end{equation}
even in this case.
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig1}}
\caption{Standard notation for this paper
\label{figd1}}
\end{figure}
Let us agree upon the following notation:
\begin{equation}\left.
\begin{aligned}
\sum_{\text{for all meaningful x}}&\equ{u_x}{v_x} \text{ will be denoted by }\cr
&\kern 2em\faequ{u_x}{v_x} \text{ or } \quad \faequ{u_y}{v_y};
\end{aligned}\,\right\}
\end{equation}
that is, each of $x$ and $y$ in subscript or superscript position will mean that a join is formed over all meaningful values of these subscripts or superscripts. If only some of the meaningful subscripts or superscripts are needed in a join, then the following notational convention will be in effect:
\begin{equation}
\xequ{u^{(i)}}{v^{(i)}}{i\in I}\quad\text{ stands for }\quad
\sum_{i\in I}\equ{u^{(i)}}{v^{(i)}}.
\end{equation}
For an integer $k\geq 2$ and the $(2k+1)$-element set
\[Z=Z(2k+1):=\set{a_0,a_1,\dots,a_k, b_0,b_1,\dots, b_{k-1}},
\]
we define
\begin{equation}
\begin{aligned}
&\alpha:=\kequ{a_0,a_1,\dots a_k} +\kequ{b_0,b_1,\dots b_{k-1}}=
\equ{a_x}{a_{x+1}}+\equ{b_y}{b_{y+1}}\cr
&\beta:=\faequ{a_x}{b_x}=\xequ{a_i}{b_i}{0\leq i\leq k-1},\cr
&\gamma:=\faequ{a_{x+1}}{b_x} = \xequ{a_{i+1}}{b_i}{0\leq i\leq k-1},\cr
&\delta:=\equ{a_0}{b_0}+\equ{a_k}{b_{k-1}};
\end{aligned}
\label{eqsBzTrGhxQ}
\end{equation}
see Figure~\ref{figd2}.
Then the system $\tuple{Z(2k+1);\alpha,\beta,\gamma,\delta}$ is called a $(2k+1)$-element \emph{Z\'adori configuration}. Its importance is revealed by the following lemma.
\begin{lemma}[Z\'adori~\cite{zadori}]\label{lemmazadori} For $k\geq 2$,
$\sublat{\alpha,\beta,\gamma,\delta}=\Equ{Z(2k+1)}$, that is, the four partitions in \eqref{eqsBzTrGhxQ} of the Z\'adori configuration generate the lattice of all equivalences of $Z(2k+1)$. Consequently,
\begin{equation}
\sublat{\alpha,\beta,\gamma, \equ{a_0}{b_0}, \equ{a_k}{b_{k-1}}}=\Equ{Z(2k+1)}.
\label{eqzLhetzB}
\end{equation}
\end{lemma}
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig2}}
\caption{The Z\'adori configuration of odd size $2k+1$ with $k=6$
\label{figd2}}
\end{figure}
We shall soon outline the proof of this lemma since we are going to use its details in the paper. But first, we formulate another lemma from Z\'adori \cite{zadori}, which has also been used in Cz\'edli \cite{czedlismallgen,czedlifourgen,czedlioneonetwo}
and in other papers like Kulin~\cite{kulin}. We are going to recall its proof only for later reference.
\begin{lemma}[``Circle Principle'']\label{lemmaHamilt}
If $d_0,d_1,\dots,d_{n-1}$ are pairwise distinct elements of a set $A$ and $0\leq u<v\leq n-1$, then
\begin{equation}\left.
\begin{aligned}
\equ {d_u}{d_v}=\bigl(\equ{d_{u}}{d_{u+1}} + \equ{d_{u+1}}{d_{u+2}}\dots + \equ{d_{v-1}}{d_{v}} \bigr) \cdot
\bigl( \equ{d_{v}}{d_{v+1}}
\cr
+ \dots + \equ{d_{n-2}}{d_{n-1}} +
\equ{d_{n-1}}{d_{0}}
+ \equ{d_{0}}{d_{1}}+ \dots + \equ{d_{u-1}}{d_{u}} \bigr)
\end{aligned}\,\right\}
\label{eqGbVrTslcdNssm}
\end{equation}
holds in $\Equ A$.
If, in addition, $A=\set{d_0,d_1,\dots,d_{n-1}}$, then
$\Equ A$ is generated by
\[\set{\equ{d_{n-1}}{d_0} }\cup\bigcup_{0\leq i\leq n-2} \set{\equ{d_{i}}{d_{i+1}} }.
\]
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemmaHamilt}] \eqref{eqGbVrTslcdNssm} is trivial. The second half of the lemma follows from the fact that for a finite $A$, the lattice $\Equ A$ is \emph{atomistic}, that is, each of its elements is the join of some atoms.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lemmazadori}]
On the set $\set{\oal,\obe,\oga,\ode}$ of variables, we are going to define several quaternary terms recursively. But first of all, we define the quadruple
\begin{equation}
\obmu:=\tuple{\oal,\obe,\oga,\ode}
\label{eqMnPkBhJsZfPmF}
\end{equation}
of four variables with the purpose of abbreviating our quaternary terms $t(\oal,\obe,\oga,\ode)$ by $t(\obmu)$.
We let
\begin{equation}\left.
\begin{aligned}
g_0(\obmu)&:= \obe\,\ode\text{ (i.e.,}=\obe\wedge\ode), \cr
h_{i+1}(\obmu)&:=((g_i(\obmu)+\oga)\oal+g_i(\obmu))\oga\text{ for }i\geq 0,\cr
g_{i+1}(\obmu)&:=((h_{i+1}(\obmu)+\obe)\oal+h_{i+1}(\obmu))\obe\text{ for }i\geq 0,\cr
H_0(\obmu)&:=\oga\ode,\cr
G_{i+1}(\obmu)&:=((H_i(\obmu)+\obe)\oal + H_i(\obmu))\obe \text{ for }i\geq 0,\cr
H_{i+1}(\obmu)&:=((G_{i+1}(\obmu)+\oga)\oal+G_{i+1}(\obmu))\oga \text{ for }i\geq 0.
\end{aligned}
\,\,\right\}
\label{eqZhgRsMks}
\end{equation}
For later reference, let us point out that
\begin{equation}\left.
\parbox{5.4cm}{in \eqref{eqZhgRsMks}, $\delta$ is used only twice: to define $g_0(\obmu)$ and to define $H_0(\obmu)$.}
\,\,\right\}
\label{eqpbxTwCzWn}
\end{equation}
Next, in harmony with \eqref{eqsBzTrGhxQ} and Figure~\ref{figd2}, we let
\begin{equation}
\bmu:=\tuple{\alpha,\beta,\gamma,\delta}.
\label{eqMdhzBfTnQslwvP}
\end{equation}
Clearly,
\begin{equation}
\beta\delta=\equ{a_0}{b_0}\,\,\text{ and }\,\,\gamma\delta=\equ{a_k}{b_{k-1}}.
\label{eqNTjgMzdD}
\end{equation}
An easy induction shows that
\begin{equation}\left.
\begin{aligned}
g_i(\bmu)&:=\xequ{a_j}{b_j}{0\leq j\leq i} \text{ for }0\leq i\leq k-1, \cr
h_{i}(\bmu)&:=\xequ{a_j}{b_{j-1}}{1\leq j\leq i}\text{ for }1\leq i\leq k,\cr
H_i(\bmu)&:=\xequ{a_{k-j}}{b_{k-1-j}}{0\leq j\leq i} \text{ for }0\leq i\leq k-1, \cr
G_{i}(\bmu)&:=\xequ{a_{k-j}}{b_{k-j}}{1\leq j\leq i}\text{ for }1\leq i\leq k.\cr
\end{aligned}
\,\,\right\}
\label{eqmlZrbQshPrk}
\end{equation}
Next, for certain edges $\pair u v$ of the graph given in Figure~\ref{figd2}, we define a corresponding lattice term $\eterm u v(\obmu)$ as follows.
\begin{equation}\left.
\begin{aligned}
\eterm{a_i}{b_i}(\obmu)&:=g_i(\obmu)\cdot G_{k-i}(\obmu),\quad
\text{for }0\leq i\leq k-1,\cr
\eterm{a_i}{b_{i-1}}(\obmu)&:=h_i(\obmu)\cdot H_{k-i}(\obmu),\quad \text{for }1\leq i\leq k\cr
\eterm{a_i}{a_{i+1}}(\obmu)&:=\oal\cdot (\eterm{a_i}{b_i}(\obmu) + \eterm{a_{i+1}}{b_{i}}(\obmu)),\quad \text{for }0\leq i\leq k-1,\cr
\eterm{b_i}{b_{i+1}}(\obmu)&:=\oal\cdot (\eterm{a_{i+1}}{b_i}(\obmu) + \eterm{a_{i+1}}{b_{i+1}}(\obmu)),\quad 0\leq i\leq k-2.
\end{aligned}
\,\right\}
\label{eqmlZsknNgRvTk}
\end{equation}
The first two equalities below follow from \eqref{eqmlZrbQshPrk}, while
the third and the fourth from the first two.
\begin{equation}\left.
\begin{aligned}
\eterm{a_i}{b_i}(\bmu)&=\equ{a_i}{b_i},\quad
\text{for }0\leq i\leq k-1,\cr
\eterm{a_i}{b_{i-1}}(\bmu)&=\equ{a_i}{b_{i-1}}\quad \text{for }1\leq i\leq k\cr
\eterm{a_i}{a_{i+1}}(\bmu)&=
\equ{a_i}{a_{i+1}}\quad \text{for }0\leq i\leq k-1,\cr
\eterm{b_i}{b_{i+1}}(\bmu)&=
\equ{b_i}{b_{i+1}}, \quad 0\leq i\leq k-2.
\end{aligned}
\,\right\}
\label{eqmlspnSzdRkMblk}
\end{equation}
Finally, let
\begin{equation}\tuple{d_0,d_1,\dots,d_{n-1}}:=\tuple{a_0,a_1,\dots,a_k,b_{k-1},b_{k-2},\dots, b_0}.
\label{eqdsrzLmJnszpkJvStT}
\end{equation}
In harmony with \eqref{eqGbVrTslcdNssm}, we define the following term
\begin{equation}\left.
\begin{aligned}
\eterm{d_u}{d_v}(\obmu):=\bigl(\eterm{d_{u}}{d_{u+1}}(\obmu) + \eterm{d_{u+1}}{d_{u+2}}(\obmu)+\dots + \eterm{d_{v-1}}{d_{v}}(\obmu) \bigr) \cdot
\bigl( \eterm{d_{v}}{d_{v+1}}(\obmu)
\cr
+ \dots + \eterm{d_{n-2}}{d_{n-1}}(\obmu) +
\eterm{d_{n-1}}{d_{0}}(\obmu)
+ \eterm{d_{0}}{d_{1}}(\obmu)+ \dots + \eterm{d_{u-1}}{d_{u}}(\obmu) \bigr)
\end{aligned}\,\right\}
\label{eqnbVrtzstVcX}
\end{equation}
for $0\leq u< v\leq n-1= 2k$. Combining \eqref{eqGbVrTslcdNssm}, \eqref{eqmlspnSzdRkMblk}, and \eqref{eqnbVrtzstVcX}, we obtain that
\begin{equation}
\eterm{d_u}{d_v}(\bmu)= \equ{d_u}{d_v}.
\label{eqczhhndPMtlvRdwdb}
\end{equation}
Based on \eqref{eqpbxTwCzWn}, note at this point that in \eqref{eqZhgRsMks}, \eqref{eqmlZsknNgRvTk}, and \eqref{eqnbVrtzstVcX}, $\delta$ is used only twice: to define $g_0(\obmu)$ and to define $H_0(\obmu)$. Consequently, taking \eqref{eqNTjgMzdD} also into account, we conclude that
\begin{equation}\left.
\parbox{9.0cm}{equality \eqref{eqczhhndPMtlvRdwdb} remains valid if $\delta$, the fourth component of $\bmu$, is replaced by any other partition whose meet with $\beta$ and that with $\gamma$ are $\equ {a_0}{b_0}$ and $\equ {a_k}{b_{k-1}}$, respectively.} \,\,\right\}
\label{eqpbxZbntGhSwD}
\end{equation}
Since every atom of $\Equ{Z(2k+1)}$ is of the form \eqref{eqczhhndPMtlvRdwdb} and $\Equ{Z(2k+1)}$ is an atomistic lattice, $\sublat{\alpha,\beta,\gamma,\delta}=\Equ{Z(n)}$.
In virtue of \eqref{eqpbxZbntGhSwD} and since $\ode$ has been used only twice, \eqref{eqzLhetzB} also holds, completing the proof of Lemma~\ref{lemmazadori}.
\end{proof}
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig3}}
\caption{A configuration for even size $2k+2$ with $k=3$
\label{figd3}}
\end{figure}
Next, for $k\geq 2$, we add a new vertex $c$, a $\beta$-colored edge $\pair{b_0}{c}$, and a $\gamma$-colored edge $\pair{b_{2}}{c}$
to $Z(2k+1)$ to obtain $Z(2k+2)$, see Figure~\ref{figd3}. This configuration is different from the one Z\'adori~\cite{zadori} used for the even case; our approach based on Figure~\ref{figd3} is simpler and fits our purposes better. Again, the dashed curved edges of Figure~\ref{figd3} should be disregarded until stated otherwise.
\begin{lemma}\label{lemmazeven}
For $n=2k+2\geq 6$, we have that $\Equ{Z(n)}=\sublat{\alpha,\beta,\gamma, \delta}$.
\end{lemma}
\begin{proof} With the short terms $\oal^\ast{}:=\oal$, $\obe^\ast{}:=\obe(\oal+\ode)$, $\oga^\ast{}:=\oga(\oal+\ode)$, and $\ode^\ast{}:=\ode$, we define
$\obmu^\ast{}:=\tuple{\oal^\ast{},\obe^\ast{},\oga^\ast{},\ode^\ast{}}$. For each term $t$ defined in \eqref{eqZhgRsMks} and \eqref{eqmlZsknNgRvTk}, we define a term $t^\ast{}$ as
$t^\ast{}(\obmu):=t(\obmu^\ast{})$.
We also need the corresponding partitions $\alpha^\ast{}:=\alpha$, $\beta^\ast{}:=\beta(\alpha+\delta)$, $\gamma^\ast{}:=\gamma(\alpha+\delta)$,
$\delta^\ast{}:=\delta$, and the quadruple $\bmu^\ast{}:=\tuple{\alpha^\ast{},\beta^\ast{},\gamma^\ast{},\delta^\ast{}}$.
Apart from the singleton block $\set c$,
they are the same as the partitions considered in Lemma~\ref{lemmazadori} for $Z(2k+1)$. Hence, it follows that
\eqref{eqmlZrbQshPrk}, \eqref{eqmlspnSzdRkMblk}, and \eqref{eqczhhndPMtlvRdwdb} hold with $\bmu^\ast{}$ instead of $\bmu$.
In other words, they hold with $\bmu$ if the terms $t$ are replaced by the corresponding terms $t^\ast$.
In particular, \eqref{eqczhhndPMtlvRdwdb} is reworded as follows:
\begin{equation}
\eterm x y^\ast(\bmu)=\equ x y\quad\text{ for all }x,y\in Z(n)\setminus\set c.
\label{eqmsSzvKlJjJrlcsPsm}
\end{equation}
So if we define (without defining their ``asterisk-free versions'' $\eterm{a_0}{c}$ and $\eterm{a_2}{c}$) the terms
\begin{equation}
\eterm{a_0}{c}^\ast(\obmu):=\obe\cdot\bigl(\oga+\eterm{a_0}{a_{2}}(\obmu^\ast{}) \bigr)\text{ and }
\eterm{a_{2}}{c}^\ast(\obmu):=\oga\cdot\bigl(\obe+\eterm{a_0}{a_{2}}(\obmu^\ast{}) \bigr),
\label{eqdzltGmrkszTnldpfGl}
\end{equation}
then it follows easily that
\begin{equation}
\eterm{a_0}{c}^\ast(\bmu)= \equ{a_0}{c}\,\,\text{ and }\,\,
\eterm{a_2}{c}^\ast(\bmu)= \equ{a_2}{c};
\label{eqhfnWhtjCxT}
\end{equation}
remark that in addition to \eqref{eqmsSzvKlJjJrlcsPsm}, \eqref{eqhfnWhtjCxT} also belongs to the scope of \eqref{eqpbxZbntGhSwD}.
Let
\begin{equation}
\tuple{d_0,d_1,\dots,d_{n-1}}
:=\tuple{a_0,c,a_2,a_3,\dots,a_{k},b_{k-1},b_{k-2},\dots,b_1,a_1,b_0}.
\label{eqchtFkJknFsgsN}
\end{equation}
Similarly to \eqref{eqnbVrtzstVcX} but now
based on \eqref{eqchtFkJknFsgsN} rather than \eqref{eqdsrzLmJnszpkJvStT}, we define the following term (without defining its ``non-asterisked'' $\fterm{d_u}{d_v}$ version)
\begin{equation}\left.
\begin{aligned}
\fterm{d_u}{d_v}^\ast (\obmu):=\bigl(\eterm{d_{u}}{d_{u+1}}^\ast (\obmu) + \eterm{d_{u+1}}{d_{u+2}}^\ast (\obmu)+\dots + \eterm{d_{v-1}}{d_{v}}^\ast (\obmu) \bigr) \cdot
\bigl( \eterm{d_{v}}{d_{v+1}}^\ast (\obmu)
\cr
+ \dots + \eterm{d_{n-2}}{d_{n-1}}^\ast (\obmu) +
\eterm{d_{n-1}}{d_{0}}^\ast (\obmu)
+ \eterm{d_{0}}{d_{1}}^\ast (\obmu)+ \dots + \eterm{d_{u-1}}{d_{u}}^\ast (\obmu) \bigr)
\end{aligned}\,\right\}
\label{eqnvlqLY}
\end{equation}
for $0\leq u<v < n$. By Lemma~\ref{lemmaHamilt}, \eqref{eqmsSzvKlJjJrlcsPsm}, \eqref{eqhfnWhtjCxT}, and \eqref{eqnvlqLY}, we obtain that
\begin{equation}
\fterm x y^\ast(\bmu)=\equ x y\quad\text{ for all }x\neq y\in Z(n).
\label{eqnmsdsulTVcX}
\end{equation}
The remark right after \eqref{eqhfnWhtjCxT} allows us to note that
\begin{equation}
\text
{\eqref{eqnmsdsulTVcX} also belongs to the scope of \eqref{eqpbxZbntGhSwD}.}
\label{eqnZhgjdlSrkhllRw}
\end{equation}
Finally, \eqref{eqnmsdsulTVcX} implies Lemma~\ref{lemmazeven} since $\Equ{Z(n)}$ is atomistic.
\end{proof}
\section{Generating direct powers of partition lattices}\label{sectprod}
Before formulating the main result of the paper, we recall some notations and concepts. The lower integer part of a real number $x$ will be denoted by $\lfloor x\rfloor$; for example,
$\lfloor \sqrt 2\rfloor=1$ and $\lfloor 2\rfloor=2$. The set of positive integer numbers will be denoted by $\NN$. For $n\in\NN$, the number of partitions of the $n$-element set $\set{1,2,\dots,n}$, that is, the size of $\Part n\cong \Equ n$ is the so-called $n$-th \emph{Bell number}; it will be denoted by $\Bell n$.
The number of partitions of $n$ objects with exactly $r$ blocks is denoted by $S(n,r)$; it is the \emph{Stirling number of the second kind} with parameters $n$ and $r$. Note that $S(n,r)\geq 1$ if and only if $1\leq r\leq n$; otherwise $S(n,r)$ is zero. Clearly, $\Bell n=S(n,1)+S(n,2)+\dots +S(n,n)$.
Let
\begin{equation}
\text{$\maxs(n)$ denote the maximal element of the set $\set{S(n,r): r\in\NN}$.}
\label{eqtxtmMxSdfLm}
\end{equation}
We know from
Rennie and Dobson~\cite[page 121]{renniedobson} that
\begin{equation}
\log \maxs(n)= n \log n - n\log \log n - n + O\left( n\cdot {\frac{\log\log n}{\log n}} \right).
\end{equation}
Hence, $\maxs(n)$ is quite large; see Tables~\eqref{tablerdDbsa}--\eqref{tablerdDbsc} and \eqref{tablerdDbsg} for some of its values; note that those given in exponential form are only rounded values.
Some rows occurring in these tables, computed by Maple V. Release 5 (1997) under Windows 10, will be explained later.
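For the reader who wishes to recompute $\maxs(n)$ and the exponents $m(n)$ defined in \eqref{eqmSrpRdgB} below for small $n$, the following Python-style sketch suffices; it is only an illustration with ad hoc function names, not the Maple and Pascal code mentioned in this paper.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, r):
    # Stirling number of the second kind S(n, r)
    if n == r:
        return 1
    if r < 1 or r > n:
        return 0
    return r * stirling2(n - 1, r) + stirling2(n - 1, r - 1)

def maxs(n):
    # the maximal value of S(n, r) over r
    return max(stirling2(n, r) for r in range(1, n + 1))

def m(n):
    # m(n) = maxs(k) * maxs(k - 1) with k = floor((n - 1) / 2)
    k = (n - 1) // 2
    return maxs(k) * maxs(k - 1)

# sanity checks against the tables below:
assert maxs(12) == 1379400 and m(12) == 175 and m(17) == 595350
\end{verbatim}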
\allowdisplaybreaks{
\begin{align}
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$\,1$&&$\,2$&&$\,3$&&$\,4$&&$5$&&$6$&&$7$&&$8$&&$9$&&$10$&&$11$&&$12$&
\cr\noalign{\hrule}\vonal
&&$\maxs(n)$&&$1$&&$1$&&$3$&&$7$&&$25$&&$90$&&$350$&&$1\,701$&&$7\,770$&&$42\,525$&&$246\,730$&&$1\,379\,400$&
\cr\noalign{\hrule}
&&\hfill$m(n)$&&$\phantom a$&&$\phantom b$&&$\phantom c$&&$\phantom d$&&$1$&&$1$&&$3$&&$3$&&$21$&&$21$&&$175$&&$175$&\cr
\noalign{\hrule}
&&\hfill$\csm(n)$&&$\phantom a$&&$\phantom b$&&$\phantom c$&&$\phantom d$&&$\phantom d$&&$\phantom e$&&$1$&&$1$&&$1$&&$1$&&$2$&&$2$&\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tablerdDbsa}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$13$&&$14$&&$15$&&$16$&&$17$&
\cr\noalign{\hrule}\vonal
&&$\maxs(n)$&&$9\,321\,312$&&$63\,436\,373$&&$420\,693\,273$&&$3\,281\,882\,604$&&$25\,708\,104\,786$&
\cr
\noalign{\hrule}
&&\hfill $m(n)$&&$2\,250$&&$2\,250$&&$31\,500$&&$31\,500$&&$595\,350$&
\cr
\noalign{\hrule}
&&\hfill $\csm(n)$&&$2$&&$2$&&$9$&&$9$&&$9$&
\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tablerdDbsb}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$18$&&$19$&&$20$&
\cr\noalign{\hrule}\vonal
&&$\maxs(n)$&&$197\,462\,483\,400$&&$1\,709\,751\,003\,480$&&$15\,170\,932\,662\,679$&
\cr\noalign{\hrule}
&&\hfill$m(n)$&&$595\,350$&&$13\,216\,770$&&$13\,216\,770$&
\cr\noalign{\hrule}
&&\hfill$\csm(n)$&&$9$&&$49$&&$49$&
\cr\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tablerdDbsc}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$21$&&$22$&&$23$&&$24$&&$25$&
\cr\noalign{\hrule}\vonal
&&\hfill$m(n)$&&$330\,419\,250$&&$330\,419\,250$&&$10\,492\,193\,250 $&&$10\,492\,193\,250 $&& $ 3.40\cdot 10^{11} $ &
\cr\noalign{\hrule}
&&\hfill$\csm(n)$&&$49$&&$49$&&$625$&&$625 $&& $625 $ &
\cr\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tablerdDbsd}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$26$&&$27$&&$28$&&$29$&&$30$&&$31$&
\cr\noalign{\hrule}\vonal
&&\hfill$m(n)$&&$3.40\cdot 10^{11}$&&$1.29\cdot 10^{13}$&&$1.29\cdot 10^{13}$&&$5.91\cdot 10^{14}$&& $5.91\cdot 10^{14}$ && $2.67\cdot 10^{16}$ &
\cr\noalign{\hrule}
&&\hfill$\csm(n)$&&$625$&&$8100$&&$8100$&&$8100$&& $8100$ && $122500 $ &
\cr\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tablerdDbse}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$32$&&$33$&&$34$&&$35$&&$36$&&$37$&
\cr\noalign{\hrule}\vonal
&&\hfill$m(n)$&&$2.67\cdot 10^{16}$&&$1.38\cdot 10^{18}$&&$1.38\cdot 10^{18}$&&$8.44\cdot 10^{19}$&& $8.44\cdot 10^{19}$ && $5.08\cdot 10^{21}$ &
\cr\noalign{\hrule}
&&\hfill$\csm(n)$&&$122500$&&$122500$&&$122500$&&$2893401$&& $2893401$ && $2893401$ &
\cr\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tablerdDbsf}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule
\cr\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$97$&&$98$&&$99$&&$100$&&$2020$&
\cr\noalign{\hrule}\vonal
&&\hfill$\maxs(n)$&&$3.22\cdot 10^{110}$&&$9.31\cdot 10^{111}$&&$2.69\cdot 10^{113}$&& $7.77\cdot 10^{114}$ &&$3.81\cdot10^{4398}$&
\cr\noalign{\hrule}
&&\hfill$m(n)$&&$1.08\cdot 10^{87}$&&$1.08\cdot 10^{87}$&&$3.09\cdot 10^{89}$&&$3.09\cdot 10^{89}$&&$5.52\cdot 10^{3893}$&
\cr\noalign{\hrule}
&&\hfill$\csm(n)$&&$1.52\cdot 10^{32}$&&$1.52\cdot 10^{32}$&&$1.45\cdot 10^{34}$&& $1.45\cdot 10^{34}$ &&$3.97\cdot 10^{1700}$&
\cr\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tablerdDbsg}
\end{align}
The aim of this section is to prove the following theorem; \eqref{pbxPartnmM} and \eqref{eqtxtmMxSdfLm} are still in effect.
\begin{theorem}\label{thmmain} Let $n\geq 5$ be an integer, let
$k:=\lfloor (n-1)/2 \rfloor$, and let
\begin{equation}
m=m(n):=\maxs(k)\cdot \maxs(k-1).
\label{eqmSrpRdgB}
\end{equation}
Then $\Part n^m$ or, equivalently, $\Equ n^m$ is four-generated. In other words, the $m$-th direct power of the lattice of all partitions of the set $\set{1,2,\dots, n}$ is generated by a four-element subset.
\end{theorem}
Some values of $m(n)$ are given in Tables~\eqref{tablerdDbsa}--\eqref{tablerdDbsg}. Before proving this theorem, we formulate some remarks and corollaries and we make some comments.
\begin{corollary}\label{coroLprd}
Let $n$ and $m$ be as in Theorem \ref{thmmain}. Then for every integer $t$ with $1\leq t\leq m$, the direct power $\Part n^t$ is four-generated. In particular, $\Part n$ itself is four-generated.
\end{corollary}
The second half of Corollary~\ref{coroLprd} shows that Theorem~\ref{thmmain} is a stronger statement than the Strietz--Z\'adori result; see \eqref{eqpbxstRszlT} in the Introduction. This corollary follows quite easily from Theorem~\ref{thmmain} as follows.
\begin{proof}[Proof of Corollary~\ref{coroLprd}]
Since the natural projection $\Part n^m \to \Part n^t$, defined by $\tuple{x_1,\dots, x_m}\mapsto \tuple{x_1,\dots, x_t}$, sends a 4-element generating set into an at most 4-element generating set, Theorem~\ref{thmmain} applies.
\end{proof}
\begin{remark} We cannot say that $m=m(n)$ in Theorem~\ref{thmmain} is the largest possible exponent. First, because the proof that we are going to present relies on a particular construction and we do not know whether there exist better constructions for this purpose.
Second, because we use Stirling numbers of the second kind to give a lower estimate of the size of a maximum-sized antichain in partition lattices, and we know from
Canfield~\cite{canfield} that this estimate is not sharp. However, this fact would not lead to a reasonably esthetic improvement of Theorem~\ref{thmmain}.
\end{remark}
\begin{remark}\label{remnVrbG} If $n$ and $t$ are positive integers such that $n\geq 4$ and
\begin{equation}
t > \Bell n \cdot \Bell{n-1}\cdot \Bell{n-2}\cdot \Bell{n-3} ,
\label{eqthmBsrhzTq}
\end{equation}
then $\Part n^t$ is not four-generated. Thus, the exponent in Theorem~\ref{thmmain} cannot be arbitrarily large.
\end{remark}
The product occurring in \eqref{eqthmBsrhzTq} is much larger than $m(n)$ in \eqref{eqmSrpRdgB}. Hence, there is a wide interval of integers $t$ such that we do not know whether $\Part n^t$ is four-generated or not.
\begin{proof}[Proof of Remark~\ref{remnVrbG}]
Let $p$ denote the product in \eqref{eqthmBsrhzTq}.
For the sake of contradiction, suppose that $t>p$ but $\Part n^t$ is generated by some $\set{\alpha,\beta,\gamma,\delta}$. Here $\alpha=\tuple{\alpha_1,\alpha_2,\dots,\alpha_t}$
with all the $\alpha_i\in\Part n$, and similarly for $\beta$, $\gamma$, and $\delta$. By the easy argument proving Corollary~\ref{coroLprd}, we know that $\set{\alpha_i,\beta_i,\gamma_i,\delta_i}$ generates $\Part n$ for all $i\in\set{1,\dots, t}$.
Since $\Part n$ is not 3-generated by Z\'adori~\cite{zadori}, the quadruple $\tuple{\alpha_i,\beta_i,\gamma_i,\delta_i}$ consists of pairwise distinct components. But there are only $p$ such quadruples, whereby the pigeonhole principle yields two distinct subscripts $i,j\in\set{1,\dots, t}$
such that $\tuple{\alpha_i,\beta_i,\gamma_i,\delta_i}=\tuple{\alpha_j,\beta_j,\gamma_j,\delta_j}$. Hence, for every quaternary lattice term $f$, we have that
$f(\alpha_i,\beta_i,\gamma_i,\delta_i)=f(\alpha_j,\beta_j,\gamma_j,\delta_j)$. This implies that for every
$\eta=\tuple{\eta_1,\dots,\eta_t}\in \sublat{\alpha,\beta,\gamma,\delta}$, we have that $\eta_i=\eta_j$. Thus, $\sublat{\alpha,\beta,\gamma,\delta}\neq \Part n^t$, which is a contradiction proving Remark~\ref{remnVrbG}.
\end{proof}
\begin{remark}\label{remngyznnGynrGcrsT} For a four-generated finite lattice $L$, the direct square $L^2$ of $L$ need not be four-generated. For example, if $L$ is the distributive lattice generated freely by four elements, then there exists no $t\geq 2$ such that $L^t$ is four-generated.
\end{remark}
\begin{proof}
Let $t\geq 2$, and let $L$ be the free distributive lattice on four generators. Observe that $L^t$ is distributive. So if $L^t$ were four-generated, then it would be a homomorphic image of $L$, and $|L|^t=|L^t|\leq |L|$ would be a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thmmain}]
Since the notation of the elements of the base set is irrelevant, it suffices to show that $\Equ{Z(n)}^m$ is four-generated. No matter if $n$ is odd or even, we
use the notation $k$, $a_i$ and $b_j$ as in Figures~\ref{figd2} and \ref{figd3}. We are going to define $\valpha=\tuple{\alpha_1,\dots, \alpha_m}$, $\vbeta=\tuple{\beta_1,\dots, \beta_m}$, $\vgamma=\tuple{\gamma_1,\dots, \gamma_m}$, and $\vshd=\tuple{\shd_1,\dots, \shd_m}$ so that $\set{\valpha,\vbeta,\vgamma,\vshd}$ generates $\Equ{Z(n)}^m$. For every $i\in\set{1,\dots,m}$, $\alpha_i$, $\beta_i$, and $\gamma_i$ are defined as in Figures~\ref{figd2} and \ref{figd3}, that is, as in the proofs of Lemmas~\ref{lemmazadori} and \ref{lemmazeven}.
However, the definition of the equivalences $\shd_i$ is going to be more tricky.
Let $\delta_i:=\equ{a_0}{b_0}+\equ{a_k}{b_{k-1}}$, as in Lemmas~\ref{lemmazadori} and \ref{lemmazeven}. Note that
\begin{equation}
\text{none of $\alpha_i:=\alpha$, $\beta_i:=\beta$, $\gamma_i:=\gamma$, and $\delta_i:=\delta$ depends on $i$.}
\label{eqtxtngkSmfmRstsRz}
\end{equation}
We know from Lemmas~\ref{lemmazadori} and \ref{lemmazeven} that $\set{\alpha_i,\beta_i,\gamma_i,\delta_i}$ generates $\Equ{Z(n)}$. Therefore, for any two distinct elements $u$ and $v$ of $Z(n)$, we can pick a quaternary lattice term $\fterm u v= \fterm u v(\oal,\obe,\oga,\ode)$ with variables $\obmu:=\tuple{\oal,\obe,\oga,\ode}$ such that, in virtue of \eqref{eqczhhndPMtlvRdwdb} and \eqref{eqnmsdsulTVcX},
\begin{equation}\left.
\parbox{8.7cm}{depending on the parity of $n$, $\fterm u v$ is $\eterm u v$ from the proof of Lemma~\ref{lemmazadori} or it is
$\fterm u v^\ast$ from that of Lemma~\ref{lemmazeven}, and
$\fterm u v(\alpha_i,\beta_i,\gamma_i,\delta_i)=\equ u v \in \Equ{Z(n)}$.}\,\,\right\}
\label{eqfTmrzGb}
\end{equation}
By defining $\fterm u u$ to be the meet of its four variables, the validity of \eqref{eqfTmrzGb} extends to the case $u=v$, where $\equ u u$ is understood as the least partition, that is, the partition with all of its blocks being singletons.
Next, let
\begin{equation}
U:=\set{a_1,a_2,\dots, a_{k-1}}\,\text{ and }\,W:=\set{b_0,b_1,\dots,b_{k-1}};
\label{eqwmghUV}
\end{equation}
these sets are indicated by dashed ovals in Figures~\ref{figd2}, \ref{figd3}, and \ref{figd4}.
By the definition of $\maxs(k-1)$, we can pick an integer
$r'\in\NN$ such that there are exactly $\maxs(k-1)$ equivalences of $U$ with exactly $r'$ blocks. (By a block of an equivalence we mean a block of the corresponding partition.) Let $\aGa$ denote the set of these ``$r'$-block equivalences'' of $U$. Clearly, $\aGa$ is an antichain in $\Equ U$ with size $|\aGa|=\maxs(k-1)$.
Similarly, $\maxs(k)$ is the number of $r''$-block equivalences for some $r''\in\NN$ and the $r''$-block equivalences of $W$ form an antichain $\aHa\subseteq \Equ W$ such that $|\aHa|=\maxs(k)$.
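To make the antichains $\aGa$ and $\aHa$ concrete, the partitions of a finite set with exactly $r$ blocks can be listed, for instance, by the following Python-style sketch (the function name is ad hoc); the number of partitions so obtained is $S(n,r)$ by the recurrence $S(n,r)=S(n-1,r-1)+r\cdot S(n-1,r)$, and any two of them are incomparable since they have the same number of blocks.
\begin{verbatim}
def partitions_with_r_blocks(elems, r):
    # all partitions of the list elems into exactly r nonempty blocks
    if r < 1 or r > len(elems):
        return
    if len(elems) == r:
        yield [[x] for x in elems]
        return
    if r == 1:
        yield [list(elems)]
        return
    first, rest = elems[0], elems[1:]
    for p in partitions_with_r_blocks(rest, r - 1):
        yield [[first]] + p            # first forms a singleton block
    for p in partitions_with_r_blocks(rest, r):
        for i in range(len(p)):        # first joins the i-th block
            yield p[:i] + [[first] + p[i]] + p[i+1:]

# e.g. the number of 2-block partitions of a 4-element set is S(4,2) = 7:
assert len(list(partitions_with_r_blocks([1, 2, 3, 4], 2))) == 7
\end{verbatim}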
Observe that, in the direct product $\Equ U\times\Equ W$,
\begin{equation}
\aGa\times \aHa \text{ is an antichain}.
\label{eqanTiChaIn}
\end{equation}
Since $|\aGa\times \aHa|=|\aGa|\cdot|\aHa|=\maxs(k-1)\cdot \maxs(k)=m$, see \eqref{eqmSrpRdgB},
we can enumerate $\aGa\times \aHa$ in the following repetition-free list of length $m$:
\begin{equation}
\aGa\times \aHa=\set{\pair{\kappa_1}{\lambda_1}, \pair{\kappa_2}{\lambda_2}, \dots, \pair{\kappa_m}{\lambda_m} }.
\label{eqchzTnFshRp}
\end{equation}
For each $i\in\set{1,\dots,m}$, we define $\shd_i$ as follows:
\begin{equation}
\shd_i:= \text{the equivalence generated by }\delta_i \cup \kappa_i \cup \lambda_i;
\label{eqctznBkPrDTrMp}
\end{equation}
this makes sense since each of $\delta_i$, $\kappa_i$ and $\lambda_i$ is a subset of $Z(n)\times Z(n)$.
Clearly, for any $x\neq y\in Z(n)$, $\pair x y\in \alpha\shd_i$ if and only if $\pair x y\in\kappa_i\cup\lambda_i\subseteq U^2\cup W^2$. This fact together with $\kappa_i\cap\lambda_i\subseteq U^2\cap W^2=\emptyset$ and \eqref{eqanTiChaIn} yields that
for any $i,j\in\set{1,2,\dots,m}$,
\begin{equation}
\text{if $i\neq j$, then $\alpha \shd_i$ and $\alpha \shd_j$ are incomparable.}
\label{pbxTfhhsslqkntLXn}
\end{equation}
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig4}}
\caption{``zigzagged circles''
\label{figd4}}
\end{figure}
Next, we define
\begin{equation}
\text{the ``\emph{zigzagged circle}'' }\,\,
\tuple{d_0,d_1,\dots, d_{n-1}}
\label{eqtxtZgZghsvTjs}
\end{equation}
as follows; see also the thick edges and curves in Figure~\ref{figd4}.
(Note that the earlier meaning of the notation $d_0, d_1,\dots$ is no longer valid.)
For $i\in\set{0,\dots,k-1}$, we let $d_{2i}:=a_i$ and $d_{2i+1}=b_i$. We let $d_{2k}=a_k$ and, if $n=2k+2$ is even, then we let $d_{n-1}=c$. Two consecutive vertices of the zigzagged circle
will always be denoted by $d_p$ and $d_{p+1}$ where $p, p+1\in\set{0,1,\dots,n-1}$ and the addition is understood modulo $n$. The zigzagged circle has one or two \emph{thick curved edges}; they are $\pair{a_0}{a_k}$ for $n=2k+1$ odd and they are $\pair{a_0}c$ and $\pair{c}{a_k}$ for $n=2k+2$ even; the rest of its edges are \emph{straight thick edges}. So the zigzagged circle consists of the thick (straight and curved) edges, whereby the adjective ``thick'' will often be dropped.
Next, we define some lattice terms associated with the edges of the zigzagged circle. Namely,
for $j\in\set{1,\dots,m}$ and for
$p\in \set{0,1,\dots,2k-1}$, we define the quaternary term
\begin{equation}
\begin{aligned}
\gterm j {d_p} {d_{p+1}}(\obmu):=
\fterm {d_p} {d_{p+1}}(\obmu)
&\cdot \prod_{\pair {d_p}x \in \alpha\shd_j} \bigl( \oal\ode + \fterm x{d_{p+1}}(\obmu) \bigr)
\cr
&\cdot
\prod_{\pair y{d_{p+1}}\in \alpha\shd_j} \bigl(\fterm {d_{p}}y(\obmu) + \oal\ode \bigr).
\end{aligned}
\label{eqcBmTzXQfFt}
\end{equation}
The assumption on $p$ means that \eqref{eqcBmTzXQfFt} defines $\gterm j {d_p} {d_{p+1}}(\obmu)$ for each straight edge of the zigzagged circle.
We claim that for all $j\in\set{1,\dots,m}$ and $p\in \set{0,1,\dots,2k-1}$,
\begin{equation}
\gterm j {d_p} {d_{p+1}}(\alpha_j,\beta_j,\gamma_j,\shd_j)=
\equ {d_p} {d_{p+1}}.
\label{eqCshTnGbW}
\end{equation}
In order to show \eqref{eqCshTnGbW}, observe that
$\beta_i\shd_i=\beta_i\delta_i=\equ{a_0}{b_0}$ and
$\gamma_i\shd_i=\gamma_i\delta_i=\equ{a_k}{b_{k-1}}$. These equalities, \eqref{eqpbxZbntGhSwD}, \eqref{eqnZhgjdlSrkhllRw}, and \eqref{eqfTmrzGb} yield that for any $u,v\in Z(n)$, $i\in\set{1,\dots,m}$, and $p\in\set{0,1,\dots,2k-1}$,
\begin{align}
\fterm {u} {v}(\alpha_i,\beta_i,\gamma_i,\shd_i)&=\equ {u} {v}\,\,\text{ and, in particular,} \label{eqalignZrtbvRsrst}
\\
\fterm {d_p} {d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)&=\equ {d_p} {d_{p+1}}.
\label{eqcHtRbkvWp}
\end{align}
Combining \eqref{eqcBmTzXQfFt} and \eqref{eqcHtRbkvWp},
we obtain the ``$\leq$'' part of \eqref{eqCshTnGbW}. In order to turn this inequality into an equality, we have to show that the pair
$\pair{d_p}{d_{p+1}}$ belongs to $\alpha\shd_j+\fterm x{d_{p+1}} (\alpha_j,\beta_j,\gamma_j,\shd_j ) $ for every $\pair {d_p}x \in \alpha\shd_j$, and it also belongs to
$\fterm {d_{p}}y(\alpha_j,\beta_j,\gamma_j,\shd_j) + \alpha\shd_j$ for every $\pair y{d_{p+1}}\in \alpha\shd_j$.
But this is trivial since $\pair x{d_{p+1}}\in \fterm x{d_{p+1}} (\alpha_j,\beta_j,\gamma_j,\shd_j )$ in the first case by \eqref{eqalignZrtbvRsrst}, and similarly trivial in the second case.
We have shown \eqref{eqCshTnGbW}.
Next, we claim that for any $i,j\in\set{1,\dots,m}$,
\begin{equation}\left.
\parbox{8cm}{if $i\neq j$, then there exists a $p\in
\set{0,1,\dots, 2k-1}$ such that
$\gterm j {d_p} {d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)=\enul:=0_{\Equ{Z(n)}}$.
}\,\,\right\}
\label{eqpbxZhgRblKnGgvnD}
\end{equation}
In order to prove \eqref{eqpbxZhgRblKnGgvnD}, assume that $i\neq j$. For an equivalence $\epsilon\in \Equ{Z(n)}$ and $x\in Z(n)$, the \emph{$\epsilon$-block} $\set{y\in Z(n): \pair x y\in \epsilon}$ of $x$ will be denoted by $x/\epsilon$.
We know from \eqref{pbxTfhhsslqkntLXn} that $\alpha\shd_j\not\leq\alpha\shd_i$. Hence, there is an element $x\in Z(n)$ such that $x/(\alpha\shd_j)\not\subseteq x/(\alpha\shd_i)$. Since $c/(\alpha\shd_j)=\set c= c/(\alpha\shd_i)$ for $n$ even, $x$ is distinct from $c$. Hence, $x$ is one of the endpoints of a straight edge $\pair{d_p}{d_{p+1}}$ of the zigzagged circle. This is how we can select a $p\in\set{0,1,\dots, 2k-1}$, that is,
a straight edge $\pair{d_p}{d_{p+1}}$ of the zigzagged circle \eqref{eqtxtZgZghsvTjs} such that
\begin{equation}
d_p/(\alpha\shd_j)\not\subseteq d_p/(\alpha\shd_i) \quad \text{ or }\quad
d_{p+1}/(\alpha\shd_j) \not\subseteq d_{p+1}/(\alpha\shd_i).
\label{eqchSdjgThRgRTrnL}
\end{equation}
Now, we are going to show that this $p$ satisfies the requirement of \eqref{eqpbxZhgRblKnGgvnD}. We can assume that the first part of the disjunction given in \eqref{eqchSdjgThRgRTrnL} holds, because the treatment for the second half is very similar. Pick an element
\begin{equation}
z\in d_p/(\alpha\shd_j)\text{ such that }z\notin d_p/(\alpha\shd_i).
\label{eqChClmTnsz}
\end{equation}
Because of \eqref{eqcHtRbkvWp} and the first meetand in \eqref{eqcBmTzXQfFt},
\begin{equation}
\gterm j {d_p} {d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)\leq \equ {d_p} {d_{p+1}}.
\label{eqnMsTsnWRStbHws}
\end{equation}
We claim that
\begin{equation}
\pair{d_p} {d_{p+1}}\notin \gterm j {d_p} {d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i).
\label{eqhzSzNtlNsrLlgz}
\end{equation}
Suppose the contrary. Then, using \eqref{eqcBmTzXQfFt} and that
$\pair {d_p}z\in\alpha\shd_j$ by \eqref{eqChClmTnsz}, we have that
\begin{equation}
\pair{d_p} {d_{p+1}} \in \alpha\shd_i + \fterm z{d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)
\eeqref{eqalignZrtbvRsrst}
\alpha\shd_i + \equ z{d_{p+1}}.
\label{eqghzrBsjTvnScsz}
\end{equation}
According to \eqref{eqghzrBsjTvnScsz}, there exists a \emph{shortest} sequence $u_0=d_{p+1}$, $u_1$, \dots, $u_{q-1}$, $u_q=d_p$ such that for every $\ell\in\set{0,1,\dots, q-1}$,
either $\pair{u_\ell}{u_{\ell+1}}\in \alpha\shd_i$, which is called a \emph{horizontal step}, or $\pair{u_\ell}{u_{\ell+1}}\in \equ z{d_{p+1}}$, which is a \emph{non-horizontal step}.
There is at least one non-horizontal step since $d_p$ and $d_{p+1}$ are in distinct $\alpha$-blocks.
A non-horizontal step means that $\set{u_\ell,u_{\ell+1}} =\set{z,d_{p+1}}$, so $\set{z,d_{p+1}}$ is the only ``passageway'' between the two nonsingleton $\alpha$-blocks. Hence, there exists exactly one non-horizontal step since our sequence is repetition-free. This step is the first step since we have taken a shortest sequence. Hence,
$u_1=z$ and all the subsequent steps are horizontal steps.
Hence, $\pair {z}{d_p}=\pair {u_1}{d_p}\in\alpha\shd_i$.
Thus, $z\in d_p/(\alpha\shd_i)$, contradicting the choice of $z$ in \eqref{eqChClmTnsz}. This contradiction yields \eqref{eqhzSzNtlNsrLlgz}. Finally, \eqref{eqhzSzNtlNsrLlgz} together with \eqref{eqnMsTsnWRStbHws} imply \eqref{eqpbxZhgRblKnGgvnD}.
Next, for $j\in\set{1,2,\dots,m}$ and $q\in\set{0,1,\dots, n-1}$, we define the following quaternary term
\begin{equation}\left.
\begin{aligned}
\hterm j {d_q}{d_{q+1}}(\obmu):=
\cr\fterm {d_q}{d_{q+1}}(\obmu)
&\cdot
\prod_{p=0}^{2k-1}\bigl(\fterm {d_q}{d_p}(\obmu) + \gterm j{d_p}{d_{p+1}}(\obmu) + \fterm {d_{p+1}}{d_{q+1}}(\obmu)\bigr)\cr
&\cdot
\prod_{p=0}^{2k-1}\bigl(\fterm {d_q}{d_{p+1}}(\obmu) + \gterm j{d_p}{d_{p+1}}(\obmu) + \fterm {d_{p}}{d_{q+1}}(\obmu)\bigr),
\end{aligned}
\,\,\,\right\}
\label{eqttzsnJsjPl}
\end{equation}
where $q+1$ in subscript position is understood modulo $n$.
We claim that, for $q\in\set{0,1,\dots, n-1}$ and $i,j\in\set{1,\dots, m}$,
\begin{equation}
\hterm j {d_q}{d_{q+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)=
\begin{cases}
\equ {d_q}{d_{q+1}},&\text{if }\,\, i=j,\cr
\enul=0_{\Equ{Z(n)}},&\text{if }\,\, i\neq j.
\end{cases}
\label{eqHgtqrmBnSkWrkK}
\end{equation}
In virtue of \eqref{eqCshTnGbW}, \eqref{eqalignZrtbvRsrst}, and \eqref{eqcHtRbkvWp}, the validity of \eqref{eqHgtqrmBnSkWrkK} is clear when $i=j$. So, to prove \eqref{eqHgtqrmBnSkWrkK}, we can assume that $i\neq j$.
Since
$\hterm j {d_q}{d_{q+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)\leq \equ {d_q}{d_{q+1}}$ by \eqref{eqalignZrtbvRsrst} and \eqref{eqcHtRbkvWp}, it suffices to show that
$\pair{d_q}{d_{q+1}} \notin \hterm j {d_q}{d_{q+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)$. Suppose the contrary. Then we obtain from \eqref{eqalignZrtbvRsrst} and \eqref{eqttzsnJsjPl} that for
all $p\in\set{0,\dots,2k-1}$,
\begin{align}
\pair{d_q}{d_{q+1}} &\in \equ{d_q}{d_p} + \gterm j{d_p}{d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)
+
\equ{d_{p+1}}{d_{q+1}}\text{ and}
\label{eqlgnbmTzskWrNztp}\\
\pair{d_q}{d_{q+1}} &\in \equ{d_q}{d_{p+1}} + \gterm j{d_p}{d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i) + \equ {d_{p}}{d_{q+1}}.
\label{fzhNtbmTzrTxT}
\end{align}
Now we choose $p$ according to \eqref{eqpbxZhgRblKnGgvnD}; then $\gterm j{d_p}{d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)$ can be omitted from \eqref{eqlgnbmTzskWrNztp} and \eqref{fzhNtbmTzrTxT}. Therefore, if $p=q$, then \eqref{eqlgnbmTzskWrNztp} asserts that $\pair{d_q}{d_{q+1}}\in\enul$, a contradiction. Note that, due to $n\geq 5$, $p=q$ is equivalent to $|\set{d_p,d_{p+1},d_q,d_{q+1}}|=2$, whence $|\set{d_p,d_{p+1},d_q,d_{q+1}}|=2$ has just been excluded.
If $|\set{d_p,d_{p+1},d_q,d_{q+1}}|=4$, then each of \eqref{eqlgnbmTzskWrNztp} and \eqref{fzhNtbmTzrTxT} gives a contradiction again. If $|\set{d_p,d_{p+1},d_q,d_{q+1}}|=3$, then exactly one of \eqref{eqlgnbmTzskWrNztp} and \eqref{fzhNtbmTzrTxT} gives a contradiction. Hence, no matter how $p$ and $q$ are related, we obtain a contradiction. This proves the $i\neq j$ part of \eqref{eqHgtqrmBnSkWrkK}. Thus, \eqref{eqHgtqrmBnSkWrkK} has been proved.
Finally, let $K:=\sublat{\valpha,\vbeta,\vgamma,\vshd}$; it is a sublattice of $\Equ{Z(n)}^m$ and we are going to show that
$K=\Equ{Z(n)}^m$. Let $j\in\set{1,\dots,m}$.
It follows from \eqref{eqHgtqrmBnSkWrkK} that
\begin{equation}
\tuple{
\enul,\dots,\enul,
\underbrace{\equ {d_q} {d_{q+1}}}
_{ j\text{-th entry} }
,\enul,\dots,\enul }
\in K,\text{ for all }q\in\set{0,\dots,n-1}.
\label{eqmsTlgYkZsWsG}
\end{equation}
Since the sublattice
\begin{equation*}
S_j:=\set{\enul}\times\dots\times \set{\enul}\times\Equ{Z(n)}\times \set{\enul}\times\dots\times \set{\enul}
\end{equation*}
with the non-singleton factor at the $j$-th place
is isomorphic to $Z(n)$, it follows from \eqref{eqmsTlgYkZsWsG} and Lemma~\ref{lemmaHamilt} that $S_j\subseteq K$, for all $j\in\set{1,\dots,m}$. Therefore, since every element of $\Equ{Z(n)}^m$ is of the form $s^{(1)}+ s^{(2)} + \dots + s^{(m)}$ with $s^{(1)}\in S_1$, \dots, $s^{(m)}\in S_m$, we obtain that $\Equ{Z(n)}^m \subseteq K$. Consequently,
$\Equ{Z(n)}^m =K = \sublat{\valpha,\vbeta,\vgamma,\vshd}$ is a four-generated lattice, as required.
The proof of Theorem~\ref{thmmain} is complete.
\end{proof}
\section{$(1+1+2)$-generation}\label{sectootwo}
By a $(1+1+2)$-generating set or, in other words, \emph{a generating subset of order type $1+1+2$}
we mean a four element generating set such that exactly two of the four elements are comparable.
Lattices having such a generating set are called \emph{$(1+1+2)$-generated.}
In his paper, Z\'adori~\cite{zadori} proved that for every integer $n\geq 7$, the partition lattice $\Part n$ is $(1+1+2)$-generated. In this way, he improved the result proved by Strietz~\cite{strietz2} from $\set{n: n\geq 10}$ to $\set{n: n\geq 7}$. In this section, we generalize this result to direct powers by the following theorem;
\eqref{pbxPartnmM} and \eqref{eqtxtmMxSdfLm} are still in effect.
\begin{theorem}\label{thmoot} Let $n\geq 7$ be an integer, let
$k:=\lfloor (n-1)/2 \rfloor$, and let
\begin{equation}
\csm=\csm(n):=
\max\left(\,\, \lfloor (k-1)/2 \rfloor,\,\, \maxs(\lfloor (k-1)/2 \rfloor)^2 \,\, \right).
\label{eqmSrmd nmrpRd}
\end{equation}
Then $\Part n^\csm$ or, equivalently, $\Equ n^\csm$ is $(1+1+2)$-generated. In other words, the $\csm$-th direct power of the lattice of all partitions of the set $\set{1,2,\dots, n}$ has a generating subset of order type $1+1+2$.
\end{theorem}
Note that $\csm$ above is at least $1$, and $\csm \geq 2$ if and only if $n\geq 11$.
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig5}}
\caption{$Z(2k+2)$ for $k=23$
\label{figd5}}
\end{figure}
\begin{proof}
With our earlier conventions, we define $\alpha$, $\beta$, and $\gamma$ as in Sections~\ref{sectztrms} and \ref{sectprod}, see also \eqref{eqtxtngkSmfmRstsRz}, but we let $\delta:=\equ{a_0}{a_k}+\equ{b_0}{b_{k-1}}\in\Equ{Z(n)}$. For $n=47$, this is illustrated by Figure~\ref{figd5} if we omit vertex $c$. For $n=48$, Figure~\ref{figd5} is a faithful illustration without omitting anything but taking \eqref{eqtxtsdhClrsznzlD} into account. Instead of working with
$U$ and $W$ from \eqref{eqwmghUV}, we define these two sets as follows.
\begin{align}
U&:=\set{a_i: 1\leq i\leq k-2\text{ and }i\text{ is odd}}\text{ and}\label{eqhrTdfzgu}
\\
W&:=\set{b_i: 1\leq i\leq k-2\text{ and }i\text{ is odd}}.
\label{eqhrTdfzgv}
\end{align}
In Figure~\ref{figd5}, $U\cup W$ is the set of black-filled elements. Let $r:=\lfloor (k-1)/2 \rfloor$; note that $r=|U|=|W|$. Since $n\geq 7$, we have that $k\geq 3$ and $r\geq 1$.
Let $\bdelta$ be the equivalence on $Z(n)$ generated by
$\delta\cup U^2\cup W^2$. In other words, $\bdelta$ is the equivalence with blocks $\set{a_0,a_k,b_0,b_{k-1}}$, $U$, and $W$, and all of its remaining blocks are singletons.
Let
\[t:=2\cdot\lfloor(k+1)/2\rfloor -3\,\,\text{ and }\,\,T:=\set{i: 1\leq i\leq t \text{ and }i\text{ is odd} }.
\]
Based on Figure~\ref{figd5}, we can think of $T$ as the set of subscripts of the black-filled elements.
The blocks of $\gamma+\bdelta$ are the following:
\begin{align*}
&\set{a_0,a_k,b_0,b_{k-1}}\cup \set{a_i: i\in T}\cup\set{b_{i-1}:i\in T},\cr
&\set{a_{i+1}:i\in T}\cup\set{b_{i}:i\in T},\text{ and, if }k\text{ is even, }\set{a_{k-1},b_{k-2}},
\end{align*}
and, for $n$ even, $\set{c}$.
Hence, we obtain that
\begin{equation}
\equ{a_0}{b_0} =\beta(\gamma+\bdelta).
\label{eqrzhGrnszGsFpL}
\end{equation}
Similarly, the blocks of $\beta+\bdelta$ are $\set{c}$, if $n$ is even, and the following:
\begin{align*}
&\set{a_{i}:i\in T}\cup\set{b_{i}:i\in T},\,\,\,\set{a_0,a_k,b_0,b_{k-1}, a_{k-1}},\cr
&\text{and, for }i\in\set{2,3,\dots,k-2}\setminus T,\,\,\,\,\set{a_i,b_i}.
\end{align*}
Hence, it follows that
\begin{equation}
\equ{a_k}{b_{k-1}} =\gamma(\beta+\bdelta).
\label{eqnhsptdkjrTSz}
\end{equation}
In the proof of Theorem~\ref{thmmain}, based on \eqref{eqwmghUV}, $\aGa$, $\aHa$, \eqref{eqanTiChaIn}, and \eqref{eqchzTnFshRp}, we defined the equivalences $\shd_1$,\dots,$\shd_m$ in \eqref{eqctznBkPrDTrMp}. Now we define $\shd_1$,\dots,$\shd_\csm$ exactly in the same way but we use \eqref{eqhrTdfzgu} and \eqref{eqhrTdfzgv} instead of \eqref{eqwmghUV}, and we take into account that $U$ and $W$ are now smaller and we obtain $\csm$ rather than $m$ from them. Observe that
\begin{equation}
\equ{a_0}{b_0} =\beta(\gamma+\delta)\quad\text{ and }\quad
\equ{a_k}{b_{k-1}} = \gamma(\beta+\delta).
\label{eqezrhBsmjRtMcLc}
\end{equation}
Since $\delta\leq \shd_j\leq \bdelta$ for $j\in\set{1,\dots,\csm}$, it follows from \eqref{eqrzhGrnszGsFpL}, \eqref{eqnhsptdkjrTSz}, and \eqref{eqezrhBsmjRtMcLc} that
\begin{equation}
\equ{a_0}{b_0} =\beta(\gamma+\shd_j)\quad\text{ and }\quad
\equ{a_k}{b_{k-1}} = \gamma(\beta+\shd_j)
\label{eqwzmbHhGRdffkT}
\end{equation}
for all $j\in\set{1,\dots,\csm}$.
Armed with \eqref{eqwzmbHhGRdffkT}
and all the previous preparations, the rest of the proof is the same as in the case of Theorem~\ref{thmmain} unless $r=2$; these details are not repeated here.
Observe that the only role of $m$ in the proof of Theorem~\ref{thmmain} is that we had to find an $m$-element antichain in
$\Equ U\times \Equ W$. Similarly, if $r\neq 2$, then all we have to do with $\csm$ is to find an $\csm$-element antichain in $\Equ U\times \Equ W$. If $r\neq 2$, then we obtain such an antichain as the Cartesian product of an antichain of $\Equ U$ and that of $\Equ W$. If $r=2$, then this method does not work since $\Equ U$ and $\Equ W$ are (two-element) chains but $\csm=2$. However, $\Equ U\times \Equ W$ has an $\csm=2$-element antichain even in this case. This completes the proof of Theorem~\ref{thmoot}.
\end{proof}
Obviously, Theorem~\ref{thmoot} implies the following counterpart of Corollary~\ref{coroLprd}.
\begin{corollary}\label{corodjTd}
Let $n$ and $\csm$ be as in Theorem \ref{thmoot}. Then for every integer $t$ with $1\leq t\leq \csm$, the direct power $\Part n^t$ is $(1+1+2)$-generated.
\end{corollary}
\section{Authentication and secret key cryptography with lattices}\label{sectauth}
While lattice theory is rich with involved constructs and proofs, it seems not to have many, if any, applications in information theory. The purpose of this section is to suggest a protocol primarily for \emph{authentication}; it is also good for \emph{secret key cryptography}, and it could be appropriate for a \emph{commitment protocol}.
Assume that during the authentication protocol that we are going to outline, \aph{}\footnote{\emph{\aph} is the Hungarian version of Andrew;
as a famous lattice theorist with this first name, I mention my scientific advisor, \aph{} P.\ Huhn (1947--1985).} intends to prove his identity to his Bank and conversely; online, of course. In order to do so, \aph{} and the Bank should find a lattice $L$ with the following properties:
\begin{itemize}
\item $|L|$ is large,
\item $L$ has a complicated structure,
\item the length of $L$ is small (that is, all maximal chains of $L$ are short),
\item every non-zero element of $L$ has lots of lower covers and dually,
\item $L$ can be given by and constructed easily from little data,
\item and $L$ is generated by few elements.
\end{itemize}
The first four properties are to make the Adversary's task difficult (and practically impossible) while the rest of these properties ensure that \aph{} and the Bank can handle $L$. It is not necessary that $L$ has anything to do with partitions, but partition lattices and their direct powers seem to be good choices. Partition lattices are quite complicated since every finite lattice can be embedded into a finite partition lattice by Pudl\'ak and T\r uma~\cite{pudlaktuma}. Also, they are large lattices described by very little data. For example, we can take
\begin{align}
L&=\Part{273},\text{ its size is }|\Part{273}|\approx 3.35\cdot 10^{404},\text{ or}\label{eqZhgRtBfktszg}
\\
L&=\Part{12}^{61},\text{ its size is }|\Part{12}^{61}|\approx 1.27\cdot 10^{404}.
\label{eqdmzmgrYjHBbsgnwsztRtnk}
\end{align}
Although these two lattices seem to be similar in several aspects, each of them has some advantage over the other.
As opposed to $\Part{273}$,
\begin{equation}
\parbox{7.7cm}{joins can easily be computed componentwise in $\Part{12}^{61}$ if parallel computation is allowed.}
\end{equation}
On the other hand, using that $\Part{273}$ is a semimodular lattice and so any two of its maximal chains have the same length, it is easy to see that the longest chain in $\Part{273}$
is only of length 272 (that is, this chain consists of 273 elements). Using semimodularity again, it follows easily that the longest chain in $\Part{12}^{61}$ is of length $61\cdot 11=671$, so $\Part{273}$ seems to be more advantageous in this aspect. Based on data obtained by computer, to be presented in Tables \eqref{tableD8gens} and \eqref{tablenFpgFns}, we guess that $\Part{273}$ has more $p$-element generating sets of an \emph{unknown pattern} than $\Part{12}^{61}$. If so, then this can also be an advantage of $\Part{273}$
since a greater variety of $p$-element generating sets of unknown patterns makes the Adversary's task even more hopeless. It is probably too early to weigh all the pros and cons of \eqref{eqZhgRtBfktszg}, \eqref{eqdmzmgrYjHBbsgnwsztRtnk} and, say, $\Part{113}^{3}$ with size $|\Part{113}|^{3}\approx 1.51\cdot 10^{405}$.
\aph{} and the Bank choose two small integer parameters $p,q\geq 4$, the suggested value is $p=q=8$ or larger; these numbers can be public.
Also, \aph{} and the Bank agree upon a $p$-tuple
\begin{equation}
\ves=\tuple{s_1,s_2,\dots,s_{p}} \in L^{p}.
\label{eqphndThbnhnSpLSr}
\end{equation}
This $\ves$ is the common authentication code for \aph{} and the Bank; only they know it and they keep it secret. So far, the role of $\ves$ is that of the PIN (personal identification number) of a bank card.
Every time \aph{} intends to send an authenticated message to the Bank, the Bank selects a vector $\vew=\tuple{w_1,w_2,\dots, w_q}$ of long and complicated $p$-ary lattice terms
randomly. (We are going to discuss after \eqref{eqmnTltjrm} how to select $\vew$.)
Then the Bank sends $\vew$ to \aph{}.
(If \aph{} thinks that $\vew$ is not complicated enough, then he is allowed to ask for a more complicated $\vew$ repeatedly until he is satisfied with $\vew$.) Then, to prove his identity, \aph{} sends
\begin{equation}
\vew(\ves):=\tuple{w_1(s_1,\dots,s_p),\dots, w_q(s_1,\dots, s_p) }
\label{eqszhnKhKypNxGqr}
\end{equation}
to the Bank. (Preferably, in the same message that instructs the Bank to do something like transferring money, etc.) The Bank also computes $\vew(\ves)$ and compares it with what \aph{} has sent; if they are equal then the Bank can be sure that he communicates with \aph{} rather than with an adversary. Note that it is easy and fast to compute $\vew(\ves)$ from $\vew$ and $\ves$.
Note also that, changing their roles, \aph{} can also verify (by another $q$-tuple $\vew'$ of terms) that he communicates with the Bank rather than with the Adversary.
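To illustrate how easily \eqref{eqszhnKhKypNxGqr} can be evaluated, here is a Python-style sketch with ad hoc names. For simplicity, it works in a single $\Part n$ with base set $\set{0,\dots,n-1}$, a partition being encoded by the tuple whose $x$-th entry is the smallest element of the block of $x$, and a term being encoded as a nested tuple; in a direct power, the same evaluation is applied componentwise.
\begin{verbatim}
def canon(labels):
    # relabel each block by its smallest element
    first = {}
    return tuple(first.setdefault(b, x) for x, b in enumerate(labels))

def meet(p, q):
    # blocks of the meet are the pairwise intersections of blocks
    return canon(tuple((p[x], q[x]) for x in range(len(p))))

def join(p, q):
    # the join is computed by a union-find over the union of the two relations
    parent = list(range(len(p)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for r in (p, q):
        for x in range(len(r)):
            parent[find(x)] = find(r[x])
    return canon(tuple(find(x) for x in range(len(p))))

def ev(term, s):
    # term: a variable index, ('+', t1, t2), or ('*', t1, t2); s: tuple of partitions
    if isinstance(term, int):
        return s[term]
    op, t1, t2 = term
    return (join if op == '+' else meet)(ev(t1, s), ev(t2, s))

def response(w, s):
    # the tuple w(s) that is sent to the other party
    return tuple(ev(w_i, s) for w_i in w)

# tiny example on a 3-element base set:
alpha, beta = (0, 0, 2), (0, 1, 1)      # the partitions {0,1}{2} and {0}{1,2}
assert join(alpha, beta) == (0, 0, 0) and meet(alpha, beta) == (0, 1, 2)
\end{verbatim}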
The point of the protocol is that while $\ves$ can be used many times,
a new $\vew$ is chosen at each occasion. So even if the Adversary intercepts the communication, he cannot use the old values of $\vew(\ves)$. Hence, the Adversary's only chance to interfere is to extract
the secret $\ves$ from $\ver:=\vew(\ves)$. However, extracting $\ves$ from $\ver$ and $\vew$ seems to be hard. (This problem is in NP and, hopefully, it is not in P.) The Adversary cannot test all possible $p$-tuples $\ves'\in L^p$ since there are astronomically many such tuples. The usual iteration technique to find a root of a function $\mathbb R^p\to \mathbb R$ is not applicable here since, in general,
\begin{equation}
\text{it is unlikely that two elements of $L$ are comparable,}
\label{eqtxtnmctKzSszbwhnT}
\end{equation}
simply because the length of $L$ is small but $|L|$ is large. It is also unlikely that two members of $L^q$ are comparable.
If the Adversary begins parsing, say, $r_1:=w_1(\ves)$, then even the first step splits into several directions since $r_1\in L$ has many lower and upper covers and so there are many possibilities to represent it as the join of two elements (in case the outermost operation sign in $w_1$ is $\vee$) or as the meet of two elements (in case the outermost operation sign is $\wedge$).
Each of these several possibilities splits into several cases at the next step, and this happens many times depending on the length of $w_1$. But $w_1$ is a long term, whence exponentially many sub-directions should be handled, which is not feasible.
Some caution is necessary when choosing the common secret authentication code $\ves$. This $\ves$ should be chosen so that $\sublat{s_1,s_2,\dots, s_p}=L$ or at least $\sublat{s_1,s_2,\dots, s_p}$ should be very large. One possibility to ensure that $\set{s_1,s_2,\dots, s_p}$ generates $L$ is to extend a four-element generating set from Sections~\ref{sectztrms}--\ref{sectootwo} to a $p$-element subset of $L$.
If $L=\Part{273}$, then one can pick a permutation $\tau$ of the set $\set{1,2,\dots,273}$; this $\tau$ induces an automorphism $\otau$ of $\Part{273}$ in the natural way, and
$\set{\otau(\alpha),\otau(\beta),\otau(\gamma),\otau(\delta)}$
with $\alpha,\dots, \delta$ from Section~\ref{sectztrms} is
a four-element generating set of $\Part{273}$.
If $L=\Part{12}^{61}$, then in addition to the permutations of $\set{1,2,\dots,12}$, allowing different permutations in the direct factors,
there are many ways to select a 61-element antichain as a subset of the 175-element maximum-sized antichain that occurs in \eqref{eqanTiChaIn}. (Note that we obtained this number, 175, when computing the last column of \eqref{tablerdDbsa}.) In both cases, \aph{} and the Bank can easily pick one of the astronomically many four-element generating sets described in the present paper. A four-element generating set can be extended to a $p$-element one in many ways. It would be even better to pick a $p$-element generating set of an unknown pattern, but it is not clear at this moment how this would be possible.
\aph{} and the Bank should also be careful when selecting a $q$-tuple $\vew=\tuple{w_1,\dots,w_q}$ of complicated $p$-ary lattice terms. They should avoid that, for $i\in\set{1,\dots,q}$, the outermost operation symbol in $w_i$ is $\wedge$ and $w_i(\ves)$ is meet irreducible (or it has only a few upper covers), and dually, and similarly for most of the subterms of $w_i$.
In particular, $w_i(\ves)\in\set{0,1}$ should not happen.
To exemplify our ideas that come below, consider the (short) lattice term
\begin{equation}
x_4\Bigl(x_5+\Bigl(\bigl((x_1x_8+x_2x_3)\cdot(x_4x_5+x_3x_6)\bigr)+\bigl(x_2x_8+(x_3x_4)x_7\bigr)\Bigr)\Bigr);
\label{eqmnTltjrm}
\end{equation}
there are 15 \emph{occurrences} of variables in this term.
That is, if we represented this term by a binary tree in the usual way, then this tree would have 15 leaves.
Now, to choose a random term $w_1$, we can begin with a randomly chosen variable. Then, we iterate the following, say, a thousand times: after picking an occurrence of a variable in the already constructed term randomly (we denote this occurrence by $x_i$), selecting two of the $p$ variables, and picking one of the two operation symbols, we replace $x_i$ by the meet or the join of the two variables selected, depending on which operation symbol has been picked.
When choosing the two variables and the operation symbol mentioned above, we can exclude that the replacement immediately ``cancels by the absorption laws''. (Or, at least, we have to be sure that this does not happen too often.)
For example, it seems to be reasonable to forbid that $x_6$ in \eqref{eqmnTltjrm} is replaced by $x_3+x_7$. Although we can choose the occurrence mentioned in the previous paragraph according to the uniform distribution, it can be advantageous to go after a distribution that takes the \emph{depths} of the occurrences into account somehow.
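The random construction of $w_1$ described above can be sketched as follows (Python-style, with ad hoc names; terms are encoded as in the previous sketch, and the absorption-law filter and the depth-sensitive choice of the occurrence are omitted for brevity).
\begin{verbatim}
import random

def random_term(p, steps=1000):
    # start from a random variable; in each step, replace a randomly chosen
    # occurrence of a variable by the meet or join of two random variables
    term = random.randrange(p)
    for _ in range(steps):
        replacement = (random.choice(('+', '*')),
                       random.randrange(p), random.randrange(p))
        term = replace_leaf(term, random.randrange(leaves(term)), replacement)
    return term

def leaves(term):
    # number of occurrences of variables in the term
    if isinstance(term, int):
        return 1
    return leaves(term[1]) + leaves(term[2])

def replace_leaf(term, k, replacement):
    # replace the k-th occurrence of a variable (counted from the left, from 0)
    if isinstance(term, int):
        return replacement if k == 0 else term
    op, t1, t2 = term
    n1 = leaves(t1)
    if k < n1:
        return (op, replace_leaf(t1, k, replacement), t2)
    return (op, t1, replace_leaf(t2, k - n1, replacement))
\end{verbatim}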
If $q=p$, which is recommended, then it is desirable that $\vew(\ves)$ should be far from $\ves$ and, in addition, each of the
$w_1(\ves)$, \dots, $w_q(\ves)$ should be far from each other, from $0=0_L$, $1$, and from $s_1$, \dots, $s_p$. By ``far'', we mean that the usual graph theoretical \emph{distance} in the Hasse diagram of $L$ or that of $L^q$ is larger than a constant. Hence, while developing $w_1$ randomly, one can monitor $w_1(\ves)$ and interfere into the random process from time to time if necessary.
If $L$ is from \eqref{eqZhgRtBfktszg} or \eqref{eqdmzmgrYjHBbsgnwsztRtnk}, then $L$ is a semimodular lattice, so any two maximal chains of $L$ consist of the same number of elements.
In this case, the above-mentioned distance of $x,y\in L$ can be computed quite easily; see for example Cz\'edli, Powers, and White~\cite[equation (1.8)]{czpw}. Namely, the distance of $x$ and $y$ is
\begin{equation}
\distance x y =\length([x,x+y])+\length([y,x+y]).
\label{eqdlanghshsRw}
\end{equation}
Since any two maximal chains of $\Equ n$ are of the same size, it follows easily that $\length([x,x+y])$ is the difference of the number of $x$-blocks and the number of $(x+y)$-blocks, and similarly for
$\length([y,x+y])$.
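For example, in $\Equ 5$, if $x=\equ 1 2$ and $y=\equ 2 3$, then $x$, $y$, and $x+y$ have $4$, $4$, and $3$ blocks, respectively, so \eqref{eqdlanghshsRw} gives $\distance x y=(4-3)+(4-3)=2$.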
Several questions about the strategy remain open, but future experiments with computer programs can lead to satisfactory answers. However, even after obtaining good answers, the reliability of the above-described protocol would still remain a question of belief to some extent. This is not unexpected, since many modern cryptographic and similar protocols rely on the belief that certain problems, like factoring an integer or computing discrete logarithms, are hard.
Besides authentication, our method is also good for \emph{cryptography}. Assume that \aph{} and the Bank have previously agreed on $\ves$; see \eqref{eqphndThbnhnSpLSr}.
Then one of them can send a random $\vew$ to the other.
They can both compute $\vew(\ves)$, see \eqref{eqszhnKhKypNxGqr}, but the Adversary cannot since even if he intercepts $\vew$, he does not know $\ves$. Hence, \aph{} and the Bank can use $\vew(\ves)$ as the secret key of a classical cryptosystem like Vernam's. Such a secret key cannot be used repeatedly many times but \aph{} and the Bank can select a new $\vew$ and can get a new key $\vew(\ves)$ as often as they wish.
Next, we conjecture that \aph{} can lock a \emph{commitment} $\ves$ by making $\vew(\ves)$ public.
To be more precise, the protocol is that there is a Verifier who chooses $\vew$, and then \aph{} computes $\ver=\vew(\ves)$
with the Verifier's $\vew$ and makes this $\ver$ public.
From that moment, \aph{} cannot change his commitment $\ves$, nobody knows what this $\ves$ is, but armed with $\vew$ and $\ver$, everybody can check \aph{} when he reveals $\ves$. Possibly, some stipulations should be tailored to $\ves$ and $\vew$ in this situation.
\begin{equation}
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$\,4$&&$\,5$&&$\,6$&&$\,7$&&$8$&&$9$&
\cr\noalign{\hrule}\vonal
&& \hfill $|\Part n|$ &&$15$&&$52$&&$203$&&$877$&&$4\,140$&&$21\,147$&
\cr\noalign{\hrule}
&&\hfill$|\forall$8-sets$|$&&$6435$&&$7.53 \cdot 10^8$&&$6.22\cdot 10^{13}$&&$8.41\cdot 10^{18}$&&$2.13\cdot 10^{24}$&&$9.91\cdot 10^{29}$&
\cr\noalign{\hrule}
&&\hfill$|$tested$|$&&$100\,000$&&$10\,000$&&$10\,000$&&$6000$&&$1000$&&$284$&
\cr\noalign{\hrule}
&&\hfill$|$found$|$&&$89\,780$&&$7\,690$&&$7913$&&$5044$&&$848$&&$248$&
\cr
\noalign{\hrule}
&&\hfill \% &&$89.78$&&$76.90$&&$79.13$&&$84.01$&&$84.80$&&$90.19$&
\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tableD8gens}
\end{equation}
Finally, we have developed and used a computer program to see if there are sufficiently many $8$-element generating subsets and $n$-element generating sets of $\Part n$.
This program, written in Bloodshed Dev-Pascal
v1.9.2 (Freepascal) under Windows 10 and partially in
Maple V. Release 5 (1997), is available from the author's website; see the list of publications there.
The results obtained with the help of this program are reported in Tables~\ref{tableD8gens} and \ref{tablenFpgFns}. The first, \dots, sixth rows of Table~\ref{tableD8gens} give
the size $n$ of the base set,
the size of $\Part n$,
the number of 8-element subsets of $\Part n$,
the number of randomly selected 8-element subsets,
the number of those selected 8-element subsets that generate $\Part n$,
and the percentage of these generating 8-element subsets with respect to the number of the selected 8-element subsets, respectively. These subsets were selected independently according to the uniform distribution; a subset could be selected more than once.
Table~\ref{tablenFpgFns} is practically the same but the $n$-element (rather than 8-element) subsets generating $\Part n$ are counted in it.
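A rough Python-style sketch of the test behind these tables is given below, with ad hoc names and with the partition encoding used in the earlier sketch of this section; it is feasible only for small $n$ and it is not the Pascal program mentioned above, but it shows the closure computation that decides whether a randomly selected subset generates $\Part n$.
\begin{verbatim}
import random
from itertools import combinations, product

def canon(labels):
    first = {}
    return tuple(first.setdefault(b, x) for x, b in enumerate(labels))

def meet(p, q):
    return canon(tuple((p[x], q[x]) for x in range(len(p))))

def join(p, q):
    parent = list(range(len(p)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for r in (p, q):
        for x in range(len(r)):
            parent[find(x)] = find(r[x])
    return canon(tuple(find(x) for x in range(len(p))))

def all_partitions(n):
    # all partitions of {0,...,n-1}; feasible only for small n
    return sorted({canon(f) for f in product(range(n), repeat=n)})

def generates(subset, size_of_lattice):
    # close the subset under binary meets and joins
    closure = set(subset)
    while True:
        new = {f(a, b) for a, b in combinations(closure, 2)
                       for f in (meet, join)} - closure
        if not new:
            return len(closure) == size_of_lattice
        closure |= new

parts = all_partitions(4)                 # |Part(4)| = 15
trials = 200
hits = sum(generates(random.sample(parts, 8), len(parts)) for _ in range(trials))
print(100.0 * hits / trials)              # compare with the percentage rows
\end{verbatim}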
\begin{equation}
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
&&\hfill$n$&&$\,4$&&$\,5$&&$\,6$&&$\,7$&&$8$&&$9$&
\cr\noalign{\hrule}\vonal
&& \hfill $|\Part n|$ &&$15$&&$52$&&$203$&&$877$&&$4\,140$&&$21\,147$&
\cr\noalign{\hrule}
&&\hfill$|\forall n$-sets$|$&&$1365$&&$2\,598\,960$&&$9.2\cdot 10^{10}$&&$7.73\cdot 10^{16}$&&$2.13\cdot 10^{24}$&&$2.33\cdot 10^{33}$&
\cr\noalign{\hrule}
&&\hfill$|$tested$|$&&$100\,000$&&$10\,000$&&$10\,000$&&$10000$&&$1000$&&$\phantom{f}$&
\cr\noalign{\hrule}
&&\hfill$|$found$|$&&$89\,780$&&$1430$&&$3918$&&$6811$&&$848$&&$\phantom{f}$&
\cr
\noalign{\hrule}
&&\hfill \% &&$89.78$&&$14.30$&&$39.18$&&$68.11$&&$84.80$&&$\phantom{f}$&
\cr
\noalign{\hrule}\vonal\noalign{\hrule}\vonal
}}
\label{tablenFpgFns}
\end{equation}
Computing the last column of Table~\ref{tableD8gens} took 73 hours on a desktop computer
with an AMD Ryzen 7 2700X Eight-Core Processor at 3.70 GHz; this explains why no more 8-element subsets have been tested for Table~\ref{tableD8gens} and why the last column of Table~\ref{tablenFpgFns} is partly missing. After computing the columns for $n=4$ and $n=5$ in Tables~\ref{tableD8gens} and \ref{tablenFpgFns}, we expected that the number in the percentage row (the last row) would decrease as $n$ grows. To our surprise, the opposite happened.
Based on these two tables, we guess that $p=n$ should be, and even $p=8$ could be, appropriate in the protocol if $n=273$ and
$L$ is taken from \eqref{eqZhgRtBfktszg}.
\subsection*{Chronology and comparison, added on \chronologydatum}
The first version of the present paper
was uploaded
to \texttt{https://arxiv.org/abs/2004.14509}
on April 29, 2020.
A related \emph{second paper} dealing with direct products rather than direct powers
was completed
and uploaded to
\texttt{http://arxiv.org/abs/2006.14139}
on June 25, 2020.
(This second paper pays no attention to authentication and cryptography.)
The present paper corrects a few typos and minor imperfections but it is not significantly different from its April 29, 2020 version. Although a particular case of the second paper also tells something on the four-generation of direct powers of finite partition lattices, the present paper, yielding larger exponents and paying attention to $(1+1+2)$-generation, tells more. For example, while the four-generability of $\Part {2020}^{10^{127}}$ is almost explicit in the second paper and the maximum we can extract from \emph{that} paper is approximately the four-generability
of $\Part {2020}^{10^{604}}$, the last column of Table \eqref{tablerdDbsg} in the \emph{present} paper guarantees a significantly larger exponent, $5.5194232\cdot10^{3893}$. (The corresponding value, $5.52\cdot10^{3893}$, from Table \eqref{tablerdDbsg} was obtained by rounding up.)
\section{Introduction}
We consider probability measures
$\mu=e^{-V} \ dx$, $\nu = e^{-W} \ dx$
on $\R^d$ and the optimal transportation mapping
$T $
pushing forward $\mu$ onto $\nu$
and minimizing the Monge--Kantorovich functional
$$
\int \| x - T(x) \|^2 \ d\mu.
$$
It is known (see, e.g., \cite{Vill1}, \cite{BoKo2012}) that $T$ has the form $T = \nabla \Phi$, where $\Phi$ is a convex function.
If $\Phi$ is smooth, it satisfies the following change of variables formula
\begin{equation}
\label{VW}
e^{-V} =
e^{-W(\nabla \Phi)} \det D^2 \Phi.
\end{equation}
This relation can be considered as a non-linear second order PDE with unknown $\Phi$, the so-called Monge--Amp{\`e}re equation.
The regularity problem for the Monge--Amp\`{e}re equation has a rather long history.
The pioneering results have been obtained by Alexandrov, Bakelman, Pogorelov, Calabi, Yau.
The classical theory can be found in \cite{Pog}, \cite{Bak}, and \cite{GT}.
See also an interesting survey \cite{Kryl} on nonlinear PDE's.
Despite the long history, the sharpest H{\"o}lder regularity results of classical type have been obtained only in the 1990s by
L.~Caffarelli \cite{Ca1} (see also \cite{CC, Guit, Vill1}).
In particular, Caffarelli proved that $D^2 \Phi$ is H{\"o}lder if $V$ and $W$ are H{\"o}lder on bounded sets $A$ and $B$, where $A$ and $B$ are supports of $\mu$ and $\nu$ respectively.
In addition, $B$ is supposed to be convex.
It seems that the latter assumption cannot be dropped as demonstrated by famous counterexamples.
A nice exposition with new simplified proofs and historical overview can be found in \cite{TW}.
Another result from \cite{Ca1} establishes sufficient conditions for $\Phi$ to belong to the second Sobolev class
$W^{2,p}_{loc}$ with $p>1$. More precisely, Caffarelli considered solutions of the Monge--Amp{\`e}re equation
$$\det D^2 \Phi =f$$ on a convex set $\Omega$ with $\Phi|_{\partial \Omega}=0$.
Assume that $\Omega$ is normalized:
$B_1 \subset \Omega \subset B_d$ (an arbitrary convex set $\Omega$ can be normalized using an affine transformation).
It is shown in \cite{Ca1} that for every $p>0$ there exists $\varepsilon(p)>0$ such that if
$|f-1|<\varepsilon(p)$ then $\|\Phi\|_{W^{2,p}(B_{1/2})} \le C(\varepsilon)$.
Wang \cite{Wang} proved that for a fixed $\varepsilon$ in $|f-1|<\varepsilon$
the value of $p$ in the inclusion $\Phi \in W^{2,p}_{loc}$ cannot be chosen arbitrarily large.
This Sobolev regularity result has been extended and generalized in different ways in the recent papers \cite{Savin}, \cite{DePhil-Fig}, \cite{DePhil-Fig-Sav}, and \cite{Schmidt}. See also \cite{Huang} for some results on the mean oscillation of $D^2 \Phi$.
It was shown in \cite{DePhil-Fig-Sav} that every $\Phi$ satisfying $\det D^2 \Phi =f$ on a normalized convex set $\Omega$ with $\Phi|_{\partial \Omega}=0$ belongs to $W^{2,1+\varepsilon}(\Omega')$, where
$\Omega' = \Big\{x: \Phi(x) \le - \frac{1}{2} \|\Phi\|_{L^{\infty}}\Big\}$ provided $0 < \lambda < f < \Lambda$.
The main purpose of this paper is to develop an alternative approach to the regularity problem of the Monge--Amp{\`e}re equation.
We prove that $\Phi$ belongs to a Besov space under the assumption that
$e^{-V}$ is Besov and $W$ is convex. We give a short proof which does not use previously known regularity results. Our estimates rely on a generalization of the so-called above-tangent formalism which
has been widely used in the applications of the optimal transport theory in probability and PDE's (see \cite{AGS}, \cite{BoKo2012}, \cite{Vill1}, and \cite{Vill2}). We also apply a result of McCann on the change of variables formula
and some classical results on equivalence of functional norms.
Some estimates of the type considered in this paper have been previously obtained in \cite{Kol2010} in the case of the Sobolev spaces.
Applications to the infinite-dimensional analysis and convex geometry can be found in \cite{BoKo2011} and \cite{Kol2011-12} respectively.
Hereafter $B_r$ denotes the ball of radius $r$ centered at $0$. We use notation $D^2 \Phi$ for the Hessian matrix of $\Phi$ and $\| \cdot \|$ for the standard operator norm. We will assume that the measures $\mu$ and $\nu$ satisfy the following assumptions:
\begin{description}
\item[{\bf Assumption A}] The potential $V: \R^d \to \R$ has the representation
$$V = V_0 + V_1,$$ where $|V_0|$ is globally bounded, $V_1 $ admits local Sobolev derivatives, and $|\nabla V_1| \in L^1(\mu)$.
\item[{\bf Assumption B}]
The support of $\nu$ is a compact convex set $B \subset B_R$.
\end{description}
\begin{definition} {\bf Besov space (fractional Sobolev space).}
The space $W^{s,p}(Q)$, where $Q$ is a cube in $\R^d$, consists of functions with the finite norm
$$
\|u\|_{W^{s,p}(Q)} = \|u\|_{L^p(Q)} + \Bigl( \int_Q \int_Q \frac{|u(x)-u(y)|^p}{|x-y|^{d+sp}} \, dx\,dy\Bigr)^{\frac{1}{p}}.
$$
The space $W^{s,p}_0(\R^{d})$ is the completion of $C^{\infty}_0(\R^d)$ with the norm
$$
\|u\|_{W^{s,p}_0(\R^d)} = \Bigl( \int_{\R^d} \int_{\R^d} \frac{|u(x)-u(y)|^p}{|x-y|^{d+sp}} \, dx\,dy\Bigr)^{\frac{1}{p}}.
$$
\end{definition}
\begin{definition} {\bf Log-concave measure.}
A probability measure $\nu$ is called log-concave if for all compact sets $A, B$ and any $0 \le \alpha \le 1$
$$
\nu( \alpha A + (1-\alpha) B) \ge \nu(A)^{\alpha} \nu(B)^{1-\alpha}.
$$
If $\nu$ has a density $\nu = e^{-W} \ dx$, then $W$ must be convex
(we assume that $W = + \infty$ outside of $\mbox{supp}(\nu)$). This is a classical result of C. Borell \cite{Bor}.
\end{definition}
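Typical examples are Gaussian measures, for which $W$ is a convex quadratic form (up to an additive constant), and the normalized restriction of Lebesgue measure to a convex body, for which $W$ is constant on the body and $+\infty$ outside of it.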
\begin{remark}
We note that the second derivative of the convex function $\Phi$ can be understood in different ways.
In the generalized (weak) sense this is a measure with absolutely continuous part $D^2_a \Phi \ dx$ and singular part $D^2_s \Phi$.
Throughout the paper we use the following agreement: the statement ``$D^2 \Phi$ belongs to a certain Besov or Sobolev class'' means that the measure $D^2 \Phi$
has no singular component and the corresponding Sobolev derivative $D^2 \Phi = D^2_a \Phi$ belongs to this class.
\end{remark}
\begin{theorem}
\label{main}
Let Assumptions {\bf A} and {\bf B} be fulfilled and, moreover,
\begin{itemize}
\item[(1)]
there exist $p \ge 1$ and $0<\gamma <1$ such that for every $r>0$
\begin{align*}
\label{main-ass}
\int_{B_r} \frac{ \| (\delta_y V)_{+} \|_{L^p(\mu)} }{|y|^{d+ \frac{1}{p} + \gamma} } \ dy < \infty,
\end{align*}
where $\delta_y V = V(x+y) + V(x-y) - 2V(x)$ and $(\delta_y V )_{+}$ is the non-negative part of $\delta_y V$;
\item[(2)]
$ e^{-V}$ is locally bounded from below;
\item[(3)]
$\nu$ is a compactly supported log-concave measure.
\end{itemize}
Then
$$
\| D^2 \Phi \|_{W^{\varepsilon, 1}(Q)} < \infty
$$
for every cube $Q \subset \R^d$ and $0<\varepsilon < \frac{\gamma}{2}$.
\end{theorem}
Taking $p=+\infty$ we obtain the following result.
\begin{corollary}
Let Assumptions {\bf A}, {\bf B} and conditions (2)-(3) be fulfilled.
Assume, in addition, that $ (\delta_y V)_{+} \le \omega(|y|)$ for some $\omega : \R^+ \to \R^+$ and
$$
\int_{0}^{r} \frac{ \omega(s)}{s^{1+ \gamma} } \ ds < \infty
$$
for some $\gamma>0$ and every $r>0$.
Then $
\| D^2 \Phi \|_{W^{\varepsilon, 1}(Q)} < \infty
$
with $0<\varepsilon < \frac{\gamma}{2}$.
\end{corollary}
In particular, applying the fractional Sobolev embedding theorem (see, e.g., \cite[Ch. V]{Adams}) we get
\begin{corollary} \label{lp}
Under assumptions of Theorem \ref{main}, for every $\varepsilon>0$,
$$\|D^2 \Phi\| \in L^{\frac{d}{d-\frac{\gamma}{2} + \varepsilon}}_{loc}.$$
\end{corollary}
\begin{remark}
Note that in Corollary \ref{lp} we do not assume that $V$ is bounded from below.
\end{remark}
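Let us briefly indicate how the exponent in Corollary \ref{lp} arises, assuming the fractional embedding $W^{s,1} \hookrightarrow L^{d/(d-s)}$ (a limiting case of the embedding theorem recalled below): applying it with $s = \frac{\gamma}{2} - \varepsilon$, which is admissible by Theorem \ref{main}, gives
$$
q = \frac{dp}{d-sp}\bigg|_{p=1,\ s=\frac{\gamma}{2}-\varepsilon} = \frac{d}{d-\frac{\gamma}{2}+\varepsilon}.
$$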
\vspace{5mm}
\section{Auxiliary results}
Below we will use the following function space (see, e.g., \cite{Stein}).
\\Let $\Lambda^{p,q}_{\alpha}$, $0<\alpha <2$, be the space of functions with the finite norm
$$
\|f\|_{\Lambda^{p,q}_{\alpha}} = \|f\|_p+ \Bigl[ \int_{\R^d} \frac{\big(\|f(x+t) + f(x-t) - 2f(x) \|_p\big)^q }{|t|^{d+ \alpha q}} \ dt \,\Bigr]^{\frac{1}{q}},
$$
where $\| \cdot \|_p$ is the $L^p$-norm with respect to the Lebesgue measure. We will apply the following equivalence result from
\cite[Ch.5, Sec. 5, Propos. 8']{Stein}.
\begin{lemma}
\label{stein-th}
For $\alpha>1$ the norm $\|f\|_{\Lambda^{p,q}_{\alpha}}$ is equivalent to
$$
\|f\|_p + \Bigl[ \int_{\R^d} \frac{(\|\nabla f(x+t) - \nabla f(x) \|_p)^q }{|t|^{d+ (\alpha-1) q}} \ dt \Bigr]^{\frac{1}{q}}.
$$
In particular, if $\alpha>1$ and $\|f\|_{\Lambda^{p,q}_{\alpha}} < \infty$, then $f$ admits the Sobolev derivatives.
\end{lemma}
Let us recall that every convex function $\Phi$ on $\R^d$ admits the following two types of second derivatives.
\begin{definition}
We say that the measure $\mu_{ev}$ is the distributional derivative of a convex function $\Phi$ along unit vectors $e, v$
if the following integration by parts formula holds for any test function $\xi$:
$$
\int \xi \ d \mu_{ev} = - \int \partial_e \xi \ \partial_{v} \Phi \ dx.
$$
We set
$$
\partial_{ev} \Phi := \mu_{ev}.
$$
\end{definition}
\begin{definition} The { absolutely continuous part} $(\partial_{ev} \Phi )_a$ of $\partial_{ev} \Phi $ is called the second Alexandrov derivative of $\Phi$ along $e,v$.
Clearly,
$$
\partial_{ee} \Phi \ge (\partial_{ee} \Phi)_a \ge 0
$$
in the sense of measures.
\end{definition}
Let us denote by $D^2_a \Phi$ the matrix consisting of these absolutely continuous parts.
We will apply the following result of R.~McCann from \cite{McCann}.
\begin{theorem}
For $\mu$-almost all $x$ the following change of variables formula holds
$$
e^{-V(x)} = \det D^2_a \Phi(x) \cdot e^{-W(\nabla \Phi(x))}.
$$
\end{theorem}
We will need the following lemma from \cite{Kol2010}.
In fact, this is a generalization of the well-known above-tangent lemma which has numerous applications in probability
and gradient flows of measures with respect to the Kantorovich metric (see \cite{Vill1}, \cite{Vill2}, \cite{AGS}, \cite{BoKo2012}).
The proof follows directly from the change of variables and integration by parts.
\begin{lemma}
\label{basiclemma}
Assume that $W$ is twice continuously differentiable with $$D^2 W \ge K \cdot \mbox{\rm{Id}}$$ for some $K \in \R$, and let $p \ge 0$. Then
\begin{align*}
\int & \delta_y V (\delta_{y} \Phi)^p d\mu
\\&
\ge
\frac{K}{2} \int |\nabla \Phi(x+y)- \nabla \Phi(x)|^2 (\delta_{y} \Phi)^p \ d\mu +
\frac{K}{2} \int |\nabla \Phi(x-y)- \nabla \Phi(x)|^2 (\delta_{y} \Phi)^p \ d\mu \\&
+ p \int \Big\langle \nabla \delta_{y} \Phi , (D^2_a \Phi)^{-1} \nabla \delta_{y} \Phi \Big\rangle (\delta_{y} \Phi)^{p-1} \ d\mu,
\end{align*}
where $\delta_y V = V(x+y) + V(x-y) - 2V(x)$.
\end{lemma}
\begin{corollary}
It follows easily from Lemma \ref{basiclemma} that inequality
$$
\int \delta_y V (\delta_{y} \Phi)^p d\mu
\ge
p \int \Big\langle \nabla \delta_{y} \Phi , (D^2_a \Phi)^{-1} \nabla \delta_{y} \Phi \Big\rangle (\delta_{y} \Phi)^{p-1} \ d\mu
$$
holds for any log-concave measure $\nu$. In particular, this holds for the restriction of Lebesgue measure
$\frac{1}{\lambda(A)} \lambda|_{A}$ on a convex subset $A$. In this case $W$ is a constant on $A$ and $W(x)=+\infty$ if $x \notin A$.
\end{corollary}
\begin{remark}
Let us assume that $V$ is twice differentiable and $y = te$ for some unit vector $e$.
Dividing by $t^{2p+2}$ and passing to the limit we obtain
\begin{eqnarray}
\label{lp-sd+}
\int V_{ee} \Phi_{ee}^p \ d \mu
&\ge& K
\int \| D^2 \Phi \cdot e\|^2 \Phi_{ee}^p \ d \mu
\\
\nonumber
&+& p \int \langle (D^2 \Phi)^{-1} \nabla \Phi_{ee}, \nabla \Phi_{ee} \rangle \Phi_{ee}^{p-1} \ d \mu.
\end{eqnarray}
Now it is easy to get some of the results of \cite{Kol2010} from (\ref{lp-sd+}). In particular, applying integration by parts for the left-hand side and H{\"o}lder inequalities, one can easily obtain that
$$
K \| \Phi^{2}_{ee}\|_{L^p(\mu)} \le \frac{p+1}{2} \| V^2_e\|_{L^p(\mu)}.
$$
It is worth mentioning that (\ref{lp-sd+}) gives, in fact, an a priori estimate for derivatives of $\Phi$ up to the {\it third} order
(due to the term $p \int \langle (D^2 \Phi)^{-1} \nabla \Phi_{ee}, \nabla \Phi_{ee} \rangle \Phi_{ee}^{p-1} \ d \mu$).
\end{remark}
We denote by $\Delta_a \Phi$ the absolutely continuous part of the distributional Laplacian of $\Phi$.
\begin{lemma}
\label{phil1}
Under Assumptions {\bf A} and {\bf B}, there exists $C$ such that
$$\int \Delta_a \Phi (x+y) \ d \mu \le \sum_{i=1}^d \int e^{-V(x)} \ d \bigl[ \partial_{x_i x_i} \Phi (x+y) \bigr] \le C$$ uniformly in $y \in \R^d$.
\end{lemma}
\begin{proof}
The inequality $\int \Delta_a \Phi (x+y) \ d \mu \le \sum_{i=1}^d \int e^{-V(x)} \ d \bigl[ \partial_{x_i x_i} \Phi (x+y)\bigr]$
is clear in view of the fact that the singular part of $\partial_{x_i x_i} \Phi$ is nonnegative.
Moreover, we get
\begin{align*}
& \sum_{i=1}^d \int e^{-V(x)} \ d \bigl[\partial_{x_i x_i} \Phi (x+y) \bigr]
\le
c_1 \sum_{i=1}^d \int e^{-V_1(x)} \ d \bigl[ \partial_{x_i x_i} \Phi (x+y) \bigr]
\\& = c_1 \int \langle \nabla \Phi(x+y), \nabla V_1(x) \rangle e^{-V_1(x)} \ dx
\le c_1 R \int |\nabla V_1(x)| e^{-V_1(x)} \ dx
\\&
\le c_2 \int | \nabla V_1(x) | \ d \mu.
\end{align*}
\end{proof}
Finally, we will use the fractional Sobolev embedding theorem (see \cite[Ch. V]{Adams}).
We formulate it in the following form given in \cite{MSh} (see also \cite{BBM}, \cite{KolLer}).
\begin{theorem}
Let $p>1$, $0 < s < 1$ and $sp<d$. Then for every $u \in {W^{s,p}_0(\R^d)}$ one has
$$
\|u\|^p_{L^q(\R^d)} \le c(d,p) \frac{s(1-s)}{(d-sp)^{p-1}}\|u\|^p_{W^{s,p}_0(\R^d)},
$$
where $q = dp/(d-sp)$.
\end{theorem}
\vspace{5mm}
\section{Proof of Theorem \ref{main}}
Let us apply Lemma \ref{basiclemma} with $p=1$. We have
$$
\int_{\R^d} \delta_y V \cdot \delta_{y} \Phi \ d\mu
\ge
\int_{\R^d} \Big\langle \nabla \delta_{y} \Phi , (D^2_a \Phi)^{-1} \nabla \delta_{y} \Phi \Big\rangle \ d\mu.
$$
Taking into account Lemma \ref{phil1}, we obtain $\int_{\R^d} \| D^2_a \Phi\| \ d\mu < \infty.$
Then Cauchy inequality yields
$$
\int_{\R^d} \| D^2_a \Phi\| \ d\mu \cdot \int _{\R^d} \delta_y V \cdot \delta_{y} \Phi \ d\mu
\ge
\Bigl( \int_{\R^d} | \nabla \delta_{y} \Phi | d\mu \Bigr)^2 .
$$
By the H{\"o}lder inequality, for every $p, q \ge 1$, $\frac{1}{p} + \frac{1}{q}=1$, we have
$$
\int _{\R^d} \delta_y V \cdot \delta_{y} \Phi \ d\mu
\le \int _{\R^d} (\delta_y V)_{+} \cdot \delta_{y} \Phi \ d\mu
\le \| (\delta_y V)_{+} \|_{L^p(\mu)} \cdot \| \delta_{y} \Phi \|_{L^q(\mu)}
$$
Let us now estimate $ \| \delta_{y} \Phi \|_{L^q(\mu)}$.
Note that $|\nabla \Phi| \le R$, and hence $ \delta_{y} \Phi \le 2 R |y|$.
Let us also mention that $t \to \Phi(x+ ty)$ is a one-dimensional convex function for a fixed $x$. Therefore,
$m_y=\partial^2_{tt} \Phi(x + ty)$ is a nonnegative measure on $\R^1$.
Moreover,
\begin{align*}
\delta_{y} \Phi &= \int_0^1 \langle \nabla \Phi(x+ sy) - \nabla \Phi(x-sy) , y \rangle \ ds
\\&= \int_0^1 \int_{-s}^s d m_y \ ds.
\end{align*}
Therefore, we have
\begin{align*}
\| \delta_{y} \Phi \|^q_{L^q(\mu)} &
= \int ( \delta_{y} \Phi )^{q} \ d\mu
\le (2R |y|)^{q-1} \int \delta_{y} \Phi \ d\mu
\\&
= (2R |y|)^{q-1} \int_0^1 \int_{-s}^s \int e^{-V(x)} d \bigl[ \partial^2_{tt} \Phi(x + ty) \bigr] dt \ ds.
\end{align*}
It follows from Lemma \ref{phil1} that
$$
\| \delta_{y} \Phi \|^q_{L^q(\mu)} \le C |y|^{1+q}.
$$
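Indeed, since $\partial^2_{tt} \Phi(x+ty) = \sum_{i,j} y_i y_j\, \partial_{x_i x_j} \Phi(x+ty) \le |y|^2 \sum_{i} \partial_{x_i x_i} \Phi(x+ty)$ in the sense of measures, Lemma \ref{phil1} bounds the inner integral in the previous estimate by $C|y|^2$ uniformly in the shift, and $\int_0^1 \int_{-s}^{s} dt\, ds = 1$, so the right-hand side does not exceed $(2R)^{q-1} C\, |y|^{q+1}$.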
Hence,
\begin{align*}
\Bigl( \int_{\R^d} | \nabla \delta_{y} \Phi | d\mu \Bigr)^2 \le \int _{\R^d} \delta_y V \cdot \delta_{y} \Phi \ d\mu \le
C \| (\delta_y V)_{+} \|_{L^p(\mu)} |y|^{1+\frac{1}{q}}.
\end{align*}
Now we divide this inequality by $|y|^{d+ 2+ \gamma}$ and integrate it over a bounded subset $Q \subset \R^d$.
By condition (1), we get
\begin{equation}
\label{frac-sob}
\int_{Q} \frac{1}{|y|^{d+2+\gamma}} \Bigl( \int_{\R^d} | \nabla \Phi(x+y) + \nabla \Phi(x-y) - 2\nabla \Phi(x) | \ d\mu \Bigr)^2 \ dy < \infty.
\end{equation}
Let us take a smooth compactly supported function $\xi \ge 0$. We will show that
\begin{equation}
\label{frac-sob2}
\int_{\R^d} \frac{1}{|y|^{d+2+\gamma}} \Bigl( \int_{\R^d} | \xi(x+y) \nabla \Phi(x+y) + \xi(x-y)\nabla \Phi(x-y) - 2 \xi(x) \nabla \Phi(x) | \ d x \Bigr)^2 \ dy < \infty.
\end{equation}
Let us split the integral in $y$ into two parts: $$\int_{\R^d} \cdots \ dy= \int_{B_1} \cdots \ dy + \int_{B^c_1} \cdots \ dy
= I_1 + I_2.$$
To estimate the second part, we note that
$$
\int_{\R^d} | \xi(x+y) \nabla \Phi(x+y) + \xi(x-y)\nabla \Phi(x-y) - 2 \xi(x) \nabla \Phi(x) | \ d x \le 4R \int_{\R^d} |\xi(x)| \ dx.
$$
Thus, $ I_2 < \infty$.
Let us estimate $I_1$. It follows from estimate (\ref{frac-sob}) and condition (2) of the theorem that
$$
\int_{B_1} \frac{1}{|y|^{d+2+\gamma}} \Bigl( \int_{\R^d} | \nabla \Phi(x+y) + \nabla \Phi(x-y) - 2 \nabla \Phi(x) | \xi(x) \ d x \Bigr)^2 \ dy < \infty.
$$
Therefore, it is enough to show that
\begin{multline*}
I_3=\int_{B_1} \frac{1}{|y|^{d+2+\gamma}} \Bigl( \int_{\R^d} | \nabla \Phi(x+y) \bigl( \xi(x+y) - \xi(x) \bigr) \\+ \nabla \Phi(x-y) \bigl( \xi(x-y) - \xi(x) \bigr) | \ d x \Bigr)^2 \ dy < \infty.
\end{multline*}
Since $|y| \le 1$ and $\xi$ is compactly supported, there exists $R_0>0$ such that
\begin{multline*}
I_3 \le \int_{B_{R_0}} \frac{1}{|y|^{d+2+\gamma}} \Bigl( \int _{B_{R_0}} \big| \nabla \Phi(x+y) \bigl( \xi(x+y) - \xi(x) \bigr) \\+ \nabla \Phi(x-y) \bigl( \xi(x-y) - \xi(x) \bigr) \big| \ d x \Bigr)^2 \ dy < \infty.
\end{multline*}
Using smoothness conditions on $\xi$, we obtain
\begin{align*}
\Big| \nabla \Phi(x+y) & \bigl( \xi(x+y) - \xi(x) \bigr) + \nabla \Phi(x-y) \bigl( \xi(x-y) - \xi(x) \bigr) \Big|
\\&
=
\Big|\bigl( \nabla \Phi(x+y) - \nabla \Phi(x-y) \bigr) \bigl( \xi(x+y) - \xi(x) \bigr) \\&\qquad + \nabla \Phi(x-y) \bigl( \xi(x+y) + \xi(x-y) - 2\xi(x) \bigr) \Big|
\\&
\le C \Bigl( R_0 |y|^2 + |y| | \nabla \Phi(x+y) - \nabla \Phi(x-y) | \Bigr ).
\end{align*}
Thus, it is sufficient to show that
\begin{equation}\label{delta}
\int_{B_R} \frac{1}{|y|^{d+\gamma}} \Bigl( \int _{B_R} | \nabla \Phi(x+y) - \nabla \Phi(x-y) | \ d x \Bigr)^2 \ dy < \infty.
\end{equation}
To prove (\ref{delta}), we use the representation
$$
\int _{B_R} | \nabla \Phi(x+y) - \nabla \Phi(x-y) | \ d x = \int_{B_R} \Bigl| \int_{-1}^{1} \sum_{i=1}^d d \bigl[ \partial_{s} \Phi_{e_i}(x + sy)\bigr] (s) \cdot y_i \Bigr| \ dx.
$$
The latter is bounded by $C |y| \int_{B_{2R}} \Delta \Phi.$
This immediately implies that $I_3< \infty$ and therefore (\ref{frac-sob2}) is proved.
This means that $|\xi \cdot \nabla \Phi|_{\Lambda^{1,2}_{1+ \gamma/2}} < \infty $ for every smooth compactly supported $\xi$.
Using smoothness conditions on $\xi$ and boundedness of $\nabla \Phi$, we get from Lemma \ref{stein-th} that $D^2 \Phi$ has no singular parts and
\begin{equation}
\label{besov-est}
\int_{B_r} \frac{1}{|y|^{d + \gamma}} \Bigl( \int_{B_r} | D^2 \Phi(x+y) - D^2 \Phi(x) | \ d x \Bigr)^2 \ dy < \infty
\end{equation}
for any $B_r$.
Finally, applying the Cauchy--Schwarz inequality, we obtain for every $\delta>0$
\begin{align*}
\Bigl( \int_{B_r} & \frac{1}{|y|^{d + \gamma/2 - \delta/2}} \int_{B_r} | D^2 \Phi(x+y) - D^2 \Phi(x) | \ d x \ dy \Bigr)^2
\\&
\le
\int_{B_r} \frac{1}{|y|^{d + \gamma}} \Bigl( \int_{B_r} | D^2 \Phi(x+y) - D^2 \Phi(x) | \ d x \Bigr)^2 \ dy \cdot \int_{B_r} \frac{|y|^{\delta}}{|y|^{d}} \ dy <\infty, \ \ \forall B_r
\end{align*}
Changing variables implies
$$
\int_{Q} \int_{Q} \frac{ | D^2 \Phi(z) - D^2 \Phi(x) | }{|z-x|^{d + \varepsilon}} \ d x \ dz < \infty
$$
for every $0<\varepsilon < \frac{\gamma}{2}$ and bounded $Q$. The proof is now complete.
\hfill $\square$
\vspace{5mm}
\section{Remarks on improved integrability}
By applying Theorem \ref{main} we get a better (local) integrability of $\|D^2 \Phi\|$ ($L^{\frac{d}{d-\frac{\gamma}{2} + \varepsilon}}$ instead of $L^1$); see Corollary \ref{lp}. This can be used to improve the estimates obtained in Theorem \ref{main}.
We assume for simplicity that we transport measures with periodical densities
by a periodical optimal mapping
$$
T(x) = x + \nabla \varphi(x),
$$
where $\varphi$ is periodical. Equivalently, one can consider optimal transportation of
probability measures on the flat torus $\mathbb{T}$.
Then one can repeat the above arguments and obtain the same estimates which become
{\it global}.
In general, it can be shown that the assumption $\|D^2 \varphi\| \in L^r(\mathbb{T})$ implies
$$
\int_\mathbb{T} \int_\mathbb{T}
\frac{|D^2 \varphi(x+y)-D^2 \varphi(x)|^{\frac{2r}{r+1}} }
{|y|^{d+ \frac{r}{r+1}(\gamma + \frac{r-1}{q}) -\varepsilon}} \ dx \ dy < \infty
$$
for every $\varepsilon>0$ and $\|D^2 \varphi \| \in L^{r'}(\mathbb{T})$
with any $r'$ satisfying
$$
r' < \frac{\frac{2 d r}{r+1}}{d - \frac{r}{r+1} (\gamma+\frac{r-1}{q})}.
$$
Starting with $r_0=1$ and iterating this process one can obtain
a sequence $r_n$ such that $\|D^2 \varphi\| \in L^{r_n-\varepsilon}$ for every $r_n$ and $\varepsilon>0$
$$
r_0=1, \ \ r_{n+1} = \frac{\frac{2 d r_n}{r_n+1}}{d - \frac{r_n}{r_n+1} (\gamma+\frac{r_n-1}{q})}.
$$
One has $\|D^2 \varphi\| \in L^{r-\varepsilon}$, where $r = \lim_n r_n$
solves the equation
$$
x^2 + x(q(\gamma -d) -1)+qd=0
$$
with $r>1$.
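Indeed, if $x = \lim_n r_n$ exists and is positive, then passing to the limit in the recursion and clearing denominators gives
$$
d(x+1) - x\Big(\gamma + \frac{x-1}{q}\Big) = 2d,
$$
and multiplying by $q$ and rearranging the terms yields $x^2 + x\big(q(\gamma-d)-1\big) + qd = 0$.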
\vspace{7mm}
\section{Introduction}
The two-photon coupling of axions or axion-like particles allows for
transitions between them and photons in the presence of external
electric or magnetic fields as shown in Figure~1~\cite{sikivie,
Raffelt:1987im}. This mechanism serves as the basis for the
experimental searches for galactic dark matter axions~\cite{sikivie,
Bradley:2003kg} and solar axions~\cite{sikivie, vanBibber:1988ge,
Moriyama:1998kd, Inoue:2002qy, Andriamonje:2004hi}. The
astrophysical implications of this mechanism have also been widely
investigated and reviewed~\cite{Raffeltbook, Raffelt:1999tx}. The
phenomenological consequences of an extremely light or massless
axion would be particularly interesting in several astrophysical
circumstances such as polarization of
radio-galaxies~\cite{Harari:1992ea} and
quasars~\cite{Hutsemekers:2005iz}, the diffuse x-ray
background~\cite{Krasnikov:1996bm}, or ultra-high energy cosmic
rays~\cite{Gorbunov:2001gc, Csaki:2003ef}.
One intriguing cosmological consequence of this mechanism is
photon-axion conversion caused by intergalactic magnetic fields,
leading to the dimming of distant light sources, notably of
supernovae (SNe) of type Ia that are used as cosmic standard
candles~\cite{Csaki:2001yk}. Observationally, SNe~Ia at redshifts
$0.3 \lesssim z \lesssim 1.7$ appear fainter than expected from the
luminosity-redshift relation in a decelerating
Universe~\cite{supnovR, supnovP, Riess:2004nr}, a finding usually
interpreted as evidence for acceleration of the cosmic expansion
rate and thus for a cosmic equation of state (EoS) that today is
dominated by a cosmological constant, a slowly evolving scalar
field, or something yet more exotic~\cite{Carroll:2003qq}. The
dimming caused by photon-axion conversion could mimic this behavior
and thus provide an alternative to the interpretation as cosmic
acceleration. Although still requiring some non-standard fluid to
fit the flatness of the universe, this model seemed capable to
explain the SN dimming through a completely different mechanism.
However, if the light from distant SNe~Ia reaches us partially
converted to axion-like particles, the same mechanism would affect
all distant sources of electromagnetic radiation. Therefore, it
appears useful to update the different arguments constraining
photon-axion conversion in intergalactic magnetic fields, in
particular the constraints arising from spectral distortions of the
cosmic microwave background (CMB) and dispersion of quasar (QSO)
spectra.
To this end we begin in Section~2 with a review of the formalism of
photon-axion conversion in magnetic fields. Some technical details are
deferred to Appendix~A. In Section~3 we describe how this mechanism
affects the SN~Ia luminosity-redshift relation and accounts for the
observed dimming. In Section~4 we turn to spectral CMB distortions and
in Section~\ref{QSO} combine these limits with those from the
dispersion of QSO spectra. In Section~6 we describe some additional
limits from a violation of the reciprocity relation between the
luminosity and angular diameter distances. We conclude in
Section~\ref{conclusions} and comment on the viability of the
photon-axion conversion mechanism.
\begin{figure}[b]
\centering
\includegraphics[height=3cm]{fig-01.eps}
\caption{ \footnotesize\baselineskip=4mm
Axion-photon transition in an external electric or magnetic
field.}
\label{fig:1}
\end{figure}
\section{Photon-axion conversion}
\label{sec:conversion}
To understand how photon-axion conversion could affect distant sources
we take a closer look at the phenomenon of photon-axion mixing. The
Lagrangian describing the photon-axion system is~\cite{Raffeltbook}
\begin{equation}
{\cal L} = {\cal L}_{\gamma} + {\cal L}_a + {\cal L}_{a \gamma} \,\ .
\end{equation}
The QED Lagrangian for photons is
\begin{equation}
{\cal L}_{\gamma} = - \frac{1}{4} F_{\mu\nu}{F}^{\mu\nu} +
\frac{\alpha^2}{90 m_e^4}\left[(F_{\mu\nu}{F}^{\mu\nu})^2
+ \frac{7}{4} (F_{\mu\nu}\tilde{F}^{\mu\nu})^2 \right] \,\ ,
\end{equation}
where $F_{\mu\nu}$ is the electromagnetic field-strength tensor,
$\tilde{F}_{\mu\nu}
=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}F^{\rho\sigma}$ its dual,
$\alpha$ the fine-structure constant, and $m_e$ the electron mass.
We always use natural units with $\hbar=c=k_{\rm B}=1$. The second
term on the r.h.s. is the Euler-Heisenberg effective Lagrangian,
describing the one-loop corrections to classical electrodynamics for
photon frequencies $\omega\ll m_e$.
The Lagrangian for the non-interacting axion field $a$ is
\begin{equation}
{\cal L}_{a} = \frac{1}{2} \partial^{\mu} a \partial_{\mu}a
-\frac{1}{2} m^2 a^2 \,\ .
\end{equation}
A generic feature of axion models is the CP-conserving two-photon
coupling, so that the axion-photon interaction is
\begin{equation}
{\cal L}_{a \gamma}= -\frac{1}{4}\,g_{a\gamma}
F_{\mu\nu}\tilde{F}^{\mu\nu}a = g_{a\gamma} \,\ {\bf E}\cdot{\bf B}\,a ,
\end{equation}
where $g_{a\gamma}$ is the axion-photon coupling with dimension of inverse
energy. A crucial consequence of ${\cal L}$ is that the propagation
eigenstates of the photon-axion system differ from the corresponding
interaction eigenstates. Hence interconversion takes place, much in
the same way as it happens for massive neutrinos of different
flavors. However, since the mixing term
$F_{\mu\nu}\tilde{F}^{\mu\nu}a$ involves two photons, one of them must
correspond to an external field~\cite{sikivie, Raffelt:1987im,
Raffeltbook, Anselm:1987vj}.
Axion-photon oscillations are described by the coupled Klein-Gordon
and Maxwell equations implied by these Lagrangians. For very
relativistic axions ($m_a \ll \omega$), the short-wavelength
approximation can be applied and the equations of motion reduce to a
first order propagation equation. More specifically, we
consider a monochromatic light beam travelling along the
$z$-direction in the presence of an arbitrary magnetic field
$\bf{B}$. Accordingly, the propagation equation takes the
form~\cite{Raffelt:1987im}
\begin{equation}\label{linsys1}
\left(\omega-i\partial_z +{\cal M}\right)
\left(
\begin{array}{ccc}
A_x\\
A_y\\
a
\end{array}
\right)=0\,,
\end{equation}
where $A_x$ and $A_y$ correspond to the two linear photon
polarization states, and $\omega$ is the photon or axion energy. The
mixing matrix is
\begin{equation} \label{mixmatxy}
{\cal M}=\left(
\begin{array}{ccc}
\Delta_{xx}&\Delta_{xy}&\frac{1}{2}g_{a\gamma} B_x\\
\Delta_{yx}&\Delta_{yy}&\frac{1}{2}g_{a\gamma} B_y\\
\frac{1}{2}g_{a\gamma} B_x&\frac{1}{2}g_{a\gamma} B_y& \Delta_a
\end{array}
\right)\,,
\end{equation}
where $\Delta_a = -m^2_{a}/2\omega$. The component of ${\bf B}$
parallel to the direction of motion does not induce photon-axion
mixing. While the terms appearing in the third row and column of
${\cal M}$ have an evident physical meaning, the
$\Delta_{ij}$-terms ($i,j=x,y$) require some explanations.
Generally speaking, they are determined
both by the properties of the medium and the QED vacuum polarization
effect. We ignore the latter, being sub-dominant for the problem at
hand~\cite{Deffayet:2001pc}.
For a homogeneous magnetic field we may choose the $y$-axis along the
projection of $\bf{B}$ perpendicular to the $z$-axis. Correspondingly
we have $B_x=0$, $B_y = |\bf{B_{\rm T}}| = B \sin \theta$, $A_x =
A_\perp$, $A_y = A_\parallel$. Equation~(\ref{linsys1}) then becomes
\begin{equation} \label{linsys}
\left(\omega-i\partial_z +{\cal M}\right)
\left(
\begin{array}{ccc}
A_{\perp}\\
A_{\parallel}\\
a
\end{array}
\right)=0\,,
\end{equation}
with the mixing matrix
\begin{equation} \label{mixmat}
{\cal M}=\left(
\begin{array}{ccc}
\Delta_{\perp} &\Delta_{\rm R} & 0\\
\Delta_R &\Delta_{\parallel} & \Delta_{a\gamma}\\
0 &\Delta_{a\gamma} & \Delta_a\\
\end{array}
\right)\,,
\end{equation}
where
\begin{eqnarray}
\Delta_{a \gamma}&=&g_{a\gamma}
|{\bf B}_{\rm T}|/2 \,\ ,\\
\Delta_{\parallel, \perp} &=& \Delta_{\rm{pl}} +
\Delta_{\parallel, \perp}^{\rm CM} \,\ .
\end{eqnarray}
In a plasma, the photons acquire an effective mass given by the
plasma frequency $\omega_{\rm pl}^2 = 4\pi\alpha\,n_e/m_e$ with $n_e$
the electron density, leading to
\begin{equation}
\Delta_{\rm pl} = - \frac{\omega_{\rm pl}^2}{2 \omega} \,\ .
\end{equation}
Furthermore, the $\Delta^{\rm CM}_{\parallel, \perp}$ terms describe
the Cotton-Mouton effect, i.e.\ the birefringence of fluids in the
presence of a transverse magnetic field where
$|\Delta_{\parallel}^{\rm CM}-\Delta_{\perp}^{\rm CM}|\propto B_{\rm
T}^2$. These terms are of little importance for the following
arguments and will thus be neglected. Finally, the Faraday rotation
term $\Delta_{\rm R}$, which depends on the energy and the
longitudinal component $B_z$, couples the modes $A_{\parallel}$ and
$A_{\perp}$. While Faraday rotation is important when analyzing
polarized sources of photons, it plays no role for the problem at
hand.
With this simplification the $A_\perp$ component decouples, and the
propagation equations reduce to a 2-dimensional mixing problem with a
purely transverse field ${\bf B}={\bf B}_{\rm T}$
\begin{equation}
\left(\omega-i\partial_z +{\cal M}_2\right)
\left(
\begin{array}{cc}
A_\parallel\\a
\end{array}
\right)=0,
\end{equation}
with a 2-dimensional mixing matrix
\begin{equation} \label{mixmat2}
{\cal M}_{2}=\left(
\begin{array}{cc}
\Delta_{\rm pl}&\Delta_{a \gamma}\\
\Delta_{a \gamma}&\Delta_a
\end{array}
\right).
\end{equation}
The solution follows from diagonalization by the rotation angle
\begin{equation}
\label{tan}
\vartheta = \frac{1}{2}\arctan
\left(\frac{2\Delta_{a \gamma}}{\Delta_{\rm pl}-\Delta_a}\right).
\end{equation}
In analogy to the neutrino case~\cite{Kuo:1989qe}, the probability for
a photon emitted in the state $A_{\parallel}$ to convert into an axion
after travelling a distance $s$ is
\begin{eqnarray}\label{p1ga}
P_0(\gamma\rightarrow a)&=&
\left|\langle A_\parallel(0)|a(s)\rangle\right|^2\nonumber\\
&=&\sin^2\left(2 \vartheta \right)
\sin^2(\Delta_{\rm osc}s/2)\nonumber\\
&=&\left(\Delta_{a \gamma} s\right)^2
\frac{\sin^2(\Delta_{\rm osc} s /2)}
{(\Delta_{\rm osc} s /2)^2} \;\ ,
\end{eqnarray}
where the oscillation wavenumber is given by
\begin{equation}\label{deltaosc}
\Delta_{\rm osc}^2=(\Delta_{\rm pl}-\Delta_a)^2 +
4 \Delta_{a \gamma}^2\,.
\end{equation}
The conversion probability is energy-independent when
$2|\Delta_{a\gamma}|\gg|\Delta_{\rm pl}-\Delta_{a}|$ or whenever the
oscillatory term in Eq.~(\ref{p1ga}) is small, i.e.\ $\Delta_{\rm osc}
s /2\ll1$, implying the limiting behavior $P_0=\left(\Delta_{a \gamma}
s\right)^2 \label{p1enind}$.
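Explicitly, in the first regime $\vartheta \to \pi/4$ and $\Delta_{\rm osc} \simeq 2\Delta_{a\gamma}$, so that $P_0 \simeq \sin^2(\Delta_{a\gamma} s)$, which involves only the energy-independent quantity $\Delta_{a\gamma} = g_{a\gamma}|{\bf B}_{\rm T}|/2$; in the second regime the last line of Eq.~(\ref{p1ga}) reduces directly to $P_0 \simeq (\Delta_{a\gamma} s)^2$.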
The propagation over many $B$-field domains is a truly 3-dimensional
problem, because different photon polarization states play the role of
$A_\parallel$ and $A_\perp$ in different domains. Averaging over the random
orientations of the domain fields guarantees that the conversion probability over many domains
is an incoherent average over magnetic field configurations and photon
polarization states. The probability after travelling over a distance
$r\gg s$, where $s$ is the domain size, is derived
in Appendix~A along the lines of Ref.~\cite{Grossman:2002by}
and is found to be
\begin{equation}\label{totpro}
P_{\gamma \to a}(r) = \frac{1}{3}
\left[1-\exp\left(-\frac{3P_0\,r}{2s}\right)\right]\,,
\end{equation}
with $P_0$ given by Eq.~(\ref{p1ga}). As expected one finds that
for $r/s\to\infty$ the conversion probability saturates, so that on
average one third of all photons converts to axions.
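In the opposite regime of weak conversion, $3P_0 r/(2s) \ll 1$, expanding the exponential in Eq.~(\ref{totpro}) gives
$$
P_{\gamma\to a}(r) \simeq \frac{r}{s}\,\frac{P_0}{2}\,,
$$
i.e.\ the $N = r/s$ domains contribute incoherently, each with average probability $P_0/2$, the factor $1/2$ coming from the average over the two photon polarization states.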
\section{Photon-axion conversion and supernova dimming}
\label{dimming}
\subsection{Observations}
In 1998, two groups using SNe~Ia as cosmic standard candles reported
first evidence for a luminosity-redshift relation that indicated that
the expansion of the universe is accelerating today~\cite{supnovR,
supnovP}. The quantity relevant for SN~Ia observations is the
luminosity distance $d_L$ at redshift $z$, defined by
\begin{equation}
\label{distance}
d^2_L(z) = \frac{\mathcal{L}}{4 \pi \mathcal{F}}\,,
\end{equation}
where $\mathcal{L}$ is the absolute luminosity of the source and
$\mathcal{F}$ is the energy flux arriving at
Earth~\cite{supnovR,supnovP}. In Friedmann-Robertson-Walker
cosmologies, the luminosity distance at a given redshift $z$ is a
function of the Hubble parameter $H_0$, the matter density
$\Omega_{\rm M}$, and the dark energy density $\Omega_\Lambda$.
Usually the data are expressed in terms of magnitudes
\begin{equation}
\label{magn}
m = M + 5 \log_{10} \left(\frac{d_L}{\rm Mpc}\right) + 25 \,,
\end{equation}
where $M$ is the absolute magnitude, equal to the value that $m$
would have at $d_L=10$ pc.
\begin{figure}[t]
\centering
\includegraphics[height=8 cm]{fig-02.eps}
\caption{\label{sn} \footnotesize\baselineskip=4mm SN~Ia Hubble diagram. {\em Upper panel:}\/
Hubble diagram for low- and high-redshift SN~Ia samples. Overplotted
are three cosmologies: ``low'' and ``high'' $\Omega_{\rm M}$ with
$\Omega_\Lambda=0$ and the best fit for a flat cosmology
$\Omega_{\rm M}=0.24$ and $\Omega_\Lambda=0.76$. {\em Lower
panel:}\/ Difference between data and models with $\Omega_{\rm
M}=0.20$ and $\Omega_\Lambda=0$. (Figure from Ref.~\cite{supnovR}
with permission.)}
\end{figure}
Figure~\ref{sn} shows the Hubble diagram for SN~Ia samples at low
and high $z$. The distances of high-redshift SNe are, on average,
$10\%$ to $15\%$ larger than expected in a low matter density ($\Omega_{\rm
M} =0.2$) Universe without dark energy ($\Omega_\Lambda =0$).
Therefore, objects of a fixed intrinsic brightness appear fainter if
the cosmic energy density budget is dominated by dark energy. The
best fit of these data supports a Universe composed of a fraction of
dark matter $\Omega_{\rm M}\simeq 0.3$ and a fraction of dark energy
$\Omega_\Lambda \simeq 0.7$.
Dark energy has been associated with vacuum energy or an Einstein
cosmological constant that produces a constant energy density at all
times. Defining the equation of state
\begin{equation}
w = \frac{p}{\rho} \,\ ,
\end{equation}
the cosmological constant is characterized by $p=-\rho$, i.e.~$w=-1$.
From the Friedmann equations any component of the density budget with
equation of state $w < - 1/3$ causes cosmic acceleration. SN~Ia data
imply that values $w\gtrsim -0.5$ are disfavoured, supporting the cosmic
acceleration of the Universe~\cite{supnovP}.
\subsection{Interpretation in terms of photon-axion conversion}
To explore the effect of photon-axion conversion on SN dimming we
recast the relevant physical quantities in terms of natural
parameters. The energy of optical photons is a few~eV. The strength
of widespread, all-pervading $B$-fields in the intergalactic medium
must be less than a few~$10^{-9}$~G over coherence lengths $s$ crudely
at the Mpc scale, according to the constraint from the Faraday
effect of distant radio sources~\cite{Kronberg:1993vk}. Along a given
line of sight, the number of such domains in our Hubble radius is
about $N \approx H_0^{-1}/s\approx 4 \times 10^3$ for $s\sim 1$~Mpc.
The mean diffuse intergalactic plasma density is bounded by $n_e
\lesssim 2.7 \times 10^{-7}$~cm$^{-3}$, corresponding to the recent
WMAP measurement of the baryon density~\cite{Spergel:2003cb}. Recent
results from the CAST experiment~\cite{Andriamonje:2004hi} give a
direct experimental bound on the axion-photon coupling of $g_{a\gamma}
\lesssim 1.16 \times 10^{-10}$~GeV$^{-1}$, comparable to the
long-standing globular-cluster limit~\cite{Raffeltbook}. Suitable
representations of the mixing parameters are
\begin{eqnarray} \label{eq12nm}
\frac{\Delta_{a \gamma}}{{\rm Mpc}^{-1}} &=&
0.15\;g_{10}\;B_{\rm nG}
\,,\nonumber\\
\frac{\Delta_a} {{\rm Mpc}^{-1}} &=&
-7.7 \times 10^{28} \left(\frac{m_a}{1\,\rm eV}\right)^2
\left(\frac{\omega}{1\,\rm eV}\right)^{-1}
\,,\nonumber\\
\frac{\Delta_{\rm pl}}{{\rm Mpc}^{-1}} &=&
-11.1 \left(\frac{\omega}{1\,\rm eV}\right)^{-1}
\left(\frac{n_e}{10^{-7}\,{\rm cm}^{-3}}\right)\,,
\end{eqnarray}
where we have introduced $g_{10}=g_{a\gamma}/10^{-10}$ GeV$^{-1}$ and
$B_{\rm nG}$ is the magnetic field strength in nano-Gauss.
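As a numerical cross-check of the first of these relations, note that in natural units $1\,{\rm nG} \simeq 1.95\times 10^{-11}\,{\rm eV}^2$ and $1\,{\rm Mpc}^{-1} \simeq 6.4\times 10^{-30}\,{\rm eV}$, so that for $g_{10} = B_{\rm nG} = 1$
$$
\Delta_{a\gamma} = \frac{g_{a\gamma} |{\bf B}_{\rm T}|}{2} \simeq
\frac{1}{2}\, \big(10^{-19}\,{\rm eV}^{-1}\big)\big(1.95\times 10^{-11}\,{\rm eV}^{2}\big)
\simeq 10^{-30}\,{\rm eV} \simeq 0.15\,{\rm Mpc}^{-1}.
$$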
The mixing angle defined in Eq.~(\ref{tan}) is too small to yield a
significant conversion effect for the allowed range of axion masses
because $|\Delta_a| \gg |\Delta_{a \gamma}|$, $|\Delta_{\rm pl}|$.
Therefore, to ensure a sufficiently large mixing angle one has to
require nearly massless pseudoscalars, sometimes referred to as
``arions''~\cite{{Anselm:1981aw},{Anselm:1982ip}}. For such
ultra-light axions a stringent limit from the absence of
$\gamma$-rays from SN~1987A gives $g_{a\gamma}\lesssim 1\times
10^{-11}$~GeV$^{-1}$~\cite{Brockway:1996yr} or even $g_{a\gamma} \lesssim
3\times 10^{-12}$~GeV$^{-1}$~\cite{Grifols:1996id}. Henceforth we
will consider the pseudoscalars to be effectively massless, so that
our remaining independent parameters are $g_{10}B_{\rm nG}$ and
$n_e$. Note that $m_a$ only enters the equations via the term $m_a^2
- \omega_{\rm pl}^2$, so that for tiny but non-vanishing values of
$m_a$, the electron density should be interpreted as $n_{e,{\rm
eff}}=|n_e - m_a^2 m_e/(4 \pi \alpha)|$.
Allowing for the possibility of photon-axion oscillations in
intergalactic magnetic fields, the number of
photons emitted by the source and thus the flux $\mathcal{F}$ is
reduced to the fraction $P_{\gamma\to\gamma}=1-P_{\gamma\to a}$.
Therefore, the luminosity distance [Eq.~(\ref{distance})] becomes
\begin{equation}
d_L \to d_L/P_{\gamma \to \gamma}^{1/2} \,\ ,
\end{equation}
and the brightness [Eq.~(\ref{magn})]
\begin{equation}\label{isodim}
m \to m - \frac{5}{2}\log_{10}(P_{\gamma \to \gamma})\,.
\end{equation}
Distant SNe~Ia would eventually saturate ($P_{\gamma \to
\gamma}=2/3$), and hence they would appear $(3/2)^{1/2}$ times
farther away than they really are. This corresponds to a maximum
dimming of approximately 0.4~mag. Cs\'aki, Kaloper and Terning
(CKT~I) showed that if photon-axion conversion takes place, this
mechanism can reproduce the SN Hubble diagram~\cite{Csaki:2001yk},
assuming, for example, a nonstandard dark energy component
$\Omega_{S}=0.7$ with equation of state $w=-1/3$, which does not
produce cosmic acceleration (Fig.~\ref{csaki}).
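In magnitudes, the saturated value $P_{\gamma\to\gamma} = 2/3$ corresponds to
$$
\Delta m = -\frac{5}{2}\,\log_{10}\frac{2}{3} \simeq 0.44~{\rm mag},
$$
while the inferred luminosity distance is enhanced by the factor $(3/2)^{1/2} \simeq 1.22$.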
\begin{figure}[t]
\centering
\includegraphics[height=5 cm]{fig-03.eps}
\caption{\label{csaki} \footnotesize\baselineskip=4mm Hubble diagram for SNe~Ia for different
cosmological models, relative to the curve with $\Omega_{\rm{tot}}
=0$ (dotted horizontal line). The dashed curve is a best fit to the
SN data assuming that the Universe is accelerating ($\Omega_{\rm M}=0.3$,
$\Omega_\Lambda=0.7$); the solid line is the photon-axion oscillation
model with $\Omega_{\rm M}=0.3$ and $\Omega_S=0.7$, the dot-dashed line is
$\Omega_{\rm M}=0.3$, $\Omega_S=0.7$ with no oscillation, the
dot-dot-dashed line is for $\Omega_{\rm M}=1$ and again no
oscillation. (Figure from Ref.~\cite{Csaki:2001yk} with permission.)}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[height=5 cm]{fig-04.eps}
\caption{\label{deffayet} \footnotesize\baselineskip=4mm Ratio of the probability of conversion of
photons to axions including the effects of the intergalactic plasma
($n_e\approx 10^{-7}{\rm cm}^{-3}$) and the probability of
oscillations when this effect is not considered, as a function of the
photon energy~$\omega$. The curves are drawn for different size $s$
of the magnetic domains: 0.5~Mpc (dashed line), 1~Mpc (solid line)
and 2~Mpc (dotted line). (Figure from Ref.~\cite{Deffayet:2001pc} with permission.)}
\end{figure}
However, in the model of CKT~I, plasma density effects were
neglected ($n_e=0$). Later it was recognized that the conclusions
of CKT~I can be significantly modified when the effects of the
intergalactic plasma on the photon-axion oscillations are taken into
account~\cite{Deffayet:2001pc}. In the presence of plasma effects,
the probability of oscillation is lower than before and it is no
longer achromatic (Fig.~\ref{deffayet}). SN observations not only
require dimming, but also that the dimming is achromatic. In fact,
SN observations put a constraint on the color excess between the $B$
and $V$ bands,
\begin{equation}
E[B-V]\equiv-2.5\ \ \log_{10}\left[{F^o(B) \over F^e(B)}\ \
{F^e(V) \over F^o(V)}\right],
\end{equation}
where $F^o$ or $F^e$ is the observed or emitted flux, respectively.
The $B$ and $V$ band correspond to $0.44~\mu {\rm m}$ and $0.55~\mu
{\rm m}$, respectively. Observations constrain $E[B-V]$ to be lower
than $0.03$~\cite{supnovP}. This can be translated to
\begin{equation}
\label{chrom}
P(\gamma \rightarrow a)_V \left[ {P(\gamma \rightarrow a)_B\over
P(\gamma
\rightarrow a)_V} -1 \right] < 0.03 \,\ .
\end{equation}
Therefore, assuming an electron density $n_e\approx n_{\rm baryons}=
n_{\gamma}\eta \sim 10^{-7}{\rm cm}^{-3}$, the model is ruled out in
most of the parameter space because of either an excessive photon
conversion or a chromaticity of the dimming~\cite{Deffayet:2001pc}.
Only fine-tuned parameters for the statistical properties of the
extragalactic magnetic fields would still allow this explanation.
On the other hand, Cs\'aki, Kaloper and Terning~\cite{Csaki:2001jk}
(CKT~II) criticized the assumed value of $n_e$ as being far too
large for most of the intergalactic space, invoking observational
hints for a value at least one order of magnitude smaller. As a
consequence, for values of $\omega_{\rm pl} \lesssim 6 \times
10^{-15}$~eV, corresponding to $n_e \lesssim 2.5 \times
10^{-8}$~cm$^{-3}$, one finds $|P_V - P_B| < 0.03$ so that the
chromaticity effect disappears very rapidly, and becomes
undetectable by present observations.
Figure~\ref{fig1} shows qualitatively the regions of $n_e$ and
$g_{10}B_{\rm nG}$ relevant for SN dimming at cosmological
distances. To this end we show iso-dimming contours obtained from
Eq.~(\ref{isodim}) for a photon energy 4.0~eV and a magnetic domain
size $s=1$~Mpc. For simplicity we neglect the redshift evolution of
the intergalactic magnetic field $B$, domain size $s$, plasma
density $n_e$, and photon frequency~$\omega$. Our iso-dimming curves
are intended to illustrate the regions where the photon-axion
conversion could be relevant. In reality, the dimming should be a
more complicated function since the intergalactic medium is expected
to be very irregular: there could be voids of low $n_e$ density, but
there will also be high density clumps, sheets and filaments and
these will typically have higher $B$ fields as well. However, the
simplifications used here are consistent with the ones adopted in
CKT~II and do not alter our main results.
The iso-dimming contours are horizontal in the low-$n_e$ and
low-$g_{10}B_{\rm nG}$ region. They are horizontal for any
$g_{10}B_{\rm nG}$ when $n_e$ is sufficiently low. From the
discussion in Sec.~\ref{sec:conversion} we know that the
single-domain probability $P_0$ of Eq.~(\ref{p1ga}) is indeed energy
independent when $|\Delta_{\rm osc} s| \ll 1$, i.e.\ for
$|\Delta_{\rm pl}| s/2\ll 1$ and $|\Delta_{a \gamma}| s \ll 1$. When
$n_e\lesssim {\rm few}~10^{-8}$~cm$^{-3}$ and $g_{10}B_{\rm nG}
\lesssim 4$, we do not expect an oscillatory behavior of the
probability. This feature is nicely reproduced by our iso-dimming
contours. From Fig.~\ref{fig1} we also deduce that a significant
amount of dimming is possible only for $g_{10}B_{\rm nG}\gtrsim
4\times 10^{-2}$.
In CKT~I, where the effect of $n_e$ was neglected, a value $m_a \sim
10^{-16}$ eV was used. In terms of our variables, this corresponds
to $n_{e,\rm eff} \approx 6 \times 10^{-12}$~cm$^{-3}$. As noted in
CKT~II, when plasma effects are taken into account, any value
$n_e\lesssim 2.5 \times 10^{-8}$~cm$^{-3}$ guarantees the required
achromaticity of the dimming below the 3\% level between the $B$ and
$V$~bands. The choice $B_{\rm nG}$ of a few and $g_{10}\approx 0.1$
in CKT~I and~II falls in the region where the observed SN dimming
could be explained while being marginally compatible with the bounds
on the intergalactic $B$ field and on the axion-photon
coupling~$g_{10}$.
\begin{figure}[h]
\centering
\includegraphics[height=7cm]{fig-05.eps}
\caption{\label{fig1} \footnotesize\baselineskip=4mm
Iso-dimming curves for an attenuation of 0.01, 0.1, and
0.4~magnitudes. The photon energy of $4.0$~eV is representative of
the B-band. The size of a magnetic domain is $s=1$~Mpc.
(Figure from Ref.~\cite{Mirizzi:2005ng}.)}
\end{figure}
\section{CMB Constraints} \label{CMBC}
If photon-axion conversion over cosmological distances is
responsible for the SN~Ia dimming, the same phenomenon should also
leave an imprint in the CMB. A similar argument was previously
considered for photon-graviton conversion~\cite{Chen:1994ch}.
Qualitatively, in the energy-dependent region of $P_{\gamma\to a}$
one expects a rather small effect due to the low energy of CMB
photons ($\omega \sim 10^{-4}$~eV). However, when accounting for the
incoherent integration over many domains crossed by the photon,
appreciable spectral distortions may arise in view of the accuracy
of the CMB data at the level of one part in $10^{4}$--$10^{5}$. For
the same reason, in the energy-independent region, at much lower
values of $n_e$ than for the SNe~Ia, the constraints on
$g_{10}B_{\rm nG}$ are expected to be quite severe. The depletion
of CMB photons in the patchy magnetic sky and its effect on the CMB
anisotropy pattern have been previously
considered~\cite{Csaki:2001yk}. However, more stringent limits come
from the distortion of the overall blackbody
spectrum~\cite{Mirizzi:2005ng}.
\begin{figure}[b]
\centering
\includegraphics[height=7cm, angle=90]{fig-06.eps}
\caption{\label{cmb} \footnotesize\baselineskip=4mm Uniform CMB spectrum and fit to the
blackbody spectrum. Uncertainties are a small fraction of the
line thickness.
(Figure from Ref.~\cite{Fixsen:1996nj} with permission.)}
\end{figure}
To this end the COBE/FIRAS data for the experimentally measured
spectrum were used, corrected for foregrounds~\cite{Fixsen:1996nj}.
Note that the new calibration of FIRAS~\cite{Mather:1998gm} is
within the old errors and would not change any of our conclusions.
The $N = 43$ data points $\Phi^{\rm exp}_i$ at different energies
$\omega_i$ are obtained by summing the best-fit blackbody spectrum
(Fig.~\ref{cmb}) to the residuals reported in
Ref.~\cite{Fixsen:1996nj}. The experimental errors $\sigma_i$ and
the correlation indices $\rho_{ij}$ between different energies are
also available. In the presence of photon-axion conversion, the
original intensity of the ``theoretical blackbody'' at temperature
$T$
\begin{equation}
\label{planck}
\Phi^0({\omega},T) = \frac{\omega^3}{ 2 \pi^2}
\big[ \exp (\omega/T )-1 \big]^{-1}
\end{equation}
would convert to a deformed spectrum that is given by
\begin{equation}
\Phi({\omega},T)=\Phi^0({\omega},T)P_{\gamma\to\gamma}({\omega}) \,\ .
\end{equation}
In Ref.~\cite{Mirizzi:2005ng}, we build the reduced chi-squared
function
\begin{equation}
\chi_\nu^2(T,\lambda)=\frac{1}{{N}-1}
\sum_{i,j=1}^{N} {\Delta \Phi_i} (\sigma^2)^{-1}_{ij}
{\Delta \Phi_j} \,,
\end{equation}
where
\begin{equation}
\Delta \Phi_i = \Phi^{\rm exp}_i-\Phi^0({\omega}_i,T)
P_{\gamma\to\gamma}({\omega_i},\lambda)
\end{equation}
is the $i$-th residual, and
\begin{equation}
\sigma^2_{ij}= \rho_{ij} \sigma_{i} \sigma_{j}
\end{equation}
is the covariance matrix. We minimize this function with respect to
$T$ for each point in the parameter space $\lambda=(n_e,g_{10}B_{\rm
nG})$, i.e.\ $T$ is an empirical parameter determined by the
$\chi_\nu^2$ minimization for each $\lambda$ rather than being fixed
at the standard value $T_0=2.725\pm0.002$~K \cite{Mather:1998gm}. In
principle, one should marginalize also over the galactic foreground
spectrum~\cite{Fixsen:1996nj}. However, this is a subleading effect
relative to the spectral deformation caused by the photon-axion
conversion.
\begin{figure}[t]
\centering
\includegraphics[height=7cm]{fig-07.eps}
\caption{\label{fig2} \footnotesize\baselineskip=4mm
Exclusion plot for axion-photon conversion based
on the COBE/FIRAS CMB spectral data. The region above the solid
curve is excluded at 95\% CL whereas the one above the dotted
curve is excluded at 99\% CL. The size of each magnetic domain is
fixed at $s=1$~Mpc. We also reproduce the iso-dimming contours from
Fig.~\ref{fig1}. (Figure from Ref.~\cite{Mirizzi:2005ng}.)}
\end{figure}
In Fig.~\ref{fig2} we show the exclusion contour in the plane of
$n_e$ and $g_{10}B_{\rm nG}$. The region above the continuous curve
is the excluded region at 95\% CL, i.e.\ in this region the chance
probability to get larger values of $\chi_\nu^2$ is lower than~5\%.
We also show the corresponding 99\% CL contour which is very close
to the 95\% contour so that another regression method and/or
exclusion criterion would not change the results very much. Within
a factor of a few, the same contours also hold if one varies the
domain size $s$ within a factor of~10. Comparing this exclusion plot
with the iso-dimming curves of Fig.~\ref{fig1} we conclude that the
entire region $n_e \lesssim 10^{-9}$~cm$^{-3}$ is excluded as a
leading explanation for SN~dimming.
A few comments are in order. Intergalactic magnetic fields probably
are a relatively recent phenomenon in the cosmic history, arising
only at redshifts of a few. As a first approximation we have then
considered the photon-axion conversion as happening for present
($z=0$) CMB photons. Since $P_{\gamma\to \gamma}$ is an increasing
function of the photon energy $\omega$, our approach leads to
conservative limits. Moreover, we assumed no correlation between
$n_e$ and the intergalactic magnetic field strength. It is however
physically expected that the fields are positively correlated with
the plasma density so that relatively high values of $g_{10}B_{\rm
nG}$ should be more likely when $n_e$ is larger. Our constraints in
the region of $n_e\gtrsim 10^{-10}$~cm$^{-3}$ are thus probably
tighter than what naively appears.
\section{QSO Constraints} \label{QSO}
\begin{figure}[t]
\centering
\includegraphics[height=7cm]{fig-08.eps}
\caption{\label{qso} \footnotesize\baselineskip=4mm
Simulated quasar spectra at $z=1$ for different
photon-axion oscillation scenarios. (Figure from
Ref.~\cite{Ostman:2004eh} with permission.)}
\end{figure}
CMB limits are nicely complementary to the ones obtained from the
effects of photon-axion conversion on quasar colors and
spectra~\cite{Ostman:2004eh}. One effect of photon-axion
oscillations is that a dispersion is added to the quasar spectra due
to the energy dependence of the effect. By comparing the dispersion
observed in quasar spectra with the dispersion in simulated ones,
one can find out if the model behind each simulation is allowed. The
SuperNova Observation Calculator (SNOC)~\cite{Goobar:2002vm} was
used~\cite{Ostman:2004eh} to simulate the effects of photon-axion
oscillations on quasar observations (Fig.~\ref{qso}). If the
simulated dispersion is smaller than the observed, one cannot
exclude the scenario since real quasars have an intrinsic
dispersion.
In Fig.~\ref{fig3} we superimpose the
CMB exclusion contours with the schematic region excluded by quasars
\footnote{We use the exclusion regions of astro-ph/0410501v1.
In the published version~\cite{Ostman:2004eh}, corresponding to
astro-ph/0410501v2, the iso-dimming curves were erroneously changed.
The difference is that in version~1 the angle $\alpha$ in Eq.~(3) of
Ref.~\cite{Ostman:2004eh} that characterizes the random magnetic
field direction was correctly taken in the interval
0--360$^\circ$ whereas in version 2 it was taken in the interval
0--90$^\circ$ (private communication by the authors).}.
The region to the right of the dot-dashed line is excluded by
requiring achromaticity of SN~Ia dimming~\cite{Csaki:2001jk}. The
region inside the dashed lines is excluded by the dispersion in QSO
spectra. Moreover, assuming an intrinsic dispersion of 5\% in these
spectra, the excluded region could be enlarged up to the dotted
lines. The CMB argument excludes the region above the solid curve at
95\%~CL.
A cautionary remark is in order when combining the two constraints.
As we have discussed in the previous section, CMB limits on
photon-axion conversion are model independent. On the other hand,
the limits placed by the QSO spectra may be subject to loopholes,
since they are based on a full correlation between the intergalactic
electron density and the magnetic field strength, which is
reasonable but not well established observationally.
\begin{figure}[t]
\centering
\includegraphics[height=7cm]{fig-09.eps}
\caption{\label{fig3} \footnotesize\baselineskip=4mm
Exclusion plot for photon-axion conversion. The
region to the right of the dot-dashed line is excluded by requiring
achromaticity of SN~Ia dimming. The region inside the dashed lines
is excluded by the dispersion in QSO spectra. Assuming an intrinsic
dispersion of 5\% in QSO spectra, the excluded region could be
extended up to the dotted curve. The CMB argument excludes the
entire region above the solid curve at 95\% CL. (Figure
from Ref.~\cite{Mirizzi:2005ng}.)}
\end{figure}
\section{Constraints from angular diameter distance}
We now turn briefly to two other types of constraint on the
photon-axion conversion mechanism. The first is based on angular
diameter distance measurements of radio-galaxies. For a source of
linear radius $r$ and angular diameter $\theta$, the angular
diameter distance is
\begin{equation}
d_A = \frac{2r}{\theta} \,\ .
\end{equation}
In metric theories where photons travel on null geodesics and their
number is conserved, the angular distance $d_A$ and the luminosity
distance $d_L$ are fundamentally related by the reciprocity
relation~\cite{SEF}
\begin{equation}
d_L(z) = (1+z)^2 d_A(z) \,\ .
\end{equation}
Photon-axion conversion in intergalactic magnetic fields would not
affect the angular-diameter distance~\cite{bakuI, bakuII} and hence
would cause a fundamental asymmetry between measurements of $d_L(z)$
and $d_A(z)$.
In a first search for a violation of the reciprocity relation, a
joint analysis of high-redshift SNe~Ia [$d_L(z)$] and radio galaxies
[$d_A(z)$] was undertaken~\cite{bakuI}. The results do not favour
the loss of photons and hence disfavour mixing. However, this
constraint is less robust than the QSO one because it is affected by
possibly large systematic errors that are difficult to
quantify~\cite{uam}.
Since angular-diameter distance is immune to the loss of photons,
the axion-conversion versus accelerating-universe ambiguity in the
interpretation can be resolved~\cite{Song:2005af} by combining CMB
acoustic peak measurements with the recent detection of baryon
oscillations in galaxy power spectra~\cite{Eisenstein:2005su}. This
combination excludes a non-accelerating dark-energy species at the
$4\sigma$ level regardless of the level of the axion coupling.
\section{Conclusions} \label{conclusions}
We have reviewed the intriguing and phenomenologically
rich~\cite{Das:2004qk} mechanism of conversion of photons into very
low-mass axion-like particles in the presence of intergalactic
magnetic fields. We have examined the existing astrophysical and
cosmological limits on this model, coming from the distortion of the
CMB spectrum, from the quasar dispersion, and from the angular
diameter distance, including the baryon oscillations detected in
large-scale structure surveys.
In particular, we have shown that the resulting CMB spectral
deformation excludes a previously allowed parameter region
corresponding to very low densities of the intergalactic medium
(IGM). These limits are complementary to the ones derived from QSO
dispersion which place serious constraints on the axion-photon
conversion mechanism, especially for relatively large densities of the
IGM. As a result, it appears that the photon-axion conversion will
not play a leading role in the apparent SN~Ia dimming.
It may still happen that ultra-light or massless axions play an
important cosmological role. For example, it was shown that by
adding a photon-axion conversion mechanism on top of a dark energy
model with $w\gtrsim -1$, one can mimic cosmic equations of state as
negative as $w\simeq- 1.5$~\cite{Csaki:2004ha}. Although at present
there is no need for such an extreme equation of state, it is an
interesting possibility to keep in mind, especially since
alternative explanations such as ghost/phantom fields usually pose a
threat to very fundamental concepts in general relativity and
quantum field theory.
\section*{Acknowledgments}
A.~M.\ and G.~R.\ thank the organizers of the Joint ILIAS-CAST-CERN
Axion Training at CERN for their kind hospitality. In Munich, this
work was supported, in part, by the Deutsche Forschungsgemeinschaft
under grant No.~SFB 375 and by the European Union under the ILIAS
project, contract No.~RII3-CT-2004-506222. A.M.\ is supported, in
part, by the Istituto Nazionale di Fisica Nucleare (INFN) and by the
Ministero dell'Istruzione, Universit\`a e Ricerca (MIUR) through the
``Astroparticle Physics'' research project.
\section{Introduction}
In reinforcement learning, it is generally held that model-based approaches
learn more quickly than model-free approaches, provided the model is
accurate enough \cite{Kaelbling+:1996:ArXiv}. However, it is well known that
learning a model is challenging in general. We show here that for systems with
underlying physics dynamics, such as robotic control, one can often learn an
accurate low-dimensional model quickly, using very little training data from the
actual system. Training with that model can result in asymptotic performance
comparable to model-free approaches, using models with many fewer parameters
and trained with less data than state of the art model-based
algorithms. Thus, for systems with physics-like dynamics, we bridge the gap
between model-free and model-based RL.
\newcommand{\vect}[1]{\overset{\rightharpoonup} #1}
\section{SINDy}
SINDy, which stands for Sparse Identification of Non-linear Dynamics, is an
approach developed in the physics community for extracting equational models
of physical systems from time series data. Specifically, SINDy can extract
differential equations (ODEs, PDEs) or difference equations from data given a
collection of possible equation terms (generally called \emph{features} in the
ML community) that are functions of the input values. Such terms might
include polynomial and trigonometric functions of the data values. The
``Sparse'' in SINDy indicates that it tries to extract, from a possibly large
space of terms, the minimum number necessary for an accurate model.
``Non-linear'' indicates that the terms (features) may be non-linear functions
of one or more input values. SINDy was introduced by
\citewithauthor{Brunton+:2016:PNAS}, who showed it is powerful enough to
extract the physics even of chaotic systems such as the Lorenz system
\cite{Brunton+:2016:PNAS}. Similar forms apply to discrete-time and noisy
systems \cite{Brunton+:2016:PNAS}. \citet{Brunton+:2016:IFAC} extends that
work by extending SINDy to deal with force-driven systems (control), and
\citet{Boninsegna+:2018:JCP} developed a stochastic version. To demonstrate
the power of the force-driven extension, they solved the Lotka-Volterra
predator-prey model and the Lorenz system with forcing and control.
SINDy extracts a dynamics model with applied actions by solving the equation
\begin{equation}
\dot{x}(t) = f(x(t);a(t))
\end{equation}
for $f$, where the vector
\begin{equation}
x(t) = [x_1(t), x_2(t), \dots, x_n(t)]^T \in R^n
\end{equation}
represents the observation of the system at time $t$, the vector
\begin{equation}
a(t) = [ a_1(t), a_2(t), \dots, a_k(t)]^T \in R^k
\end{equation}
the action (typically physical forces) applied to the system at time $t$, the
semicolon indicates vector concatenation, and the (possibly nonlinear)
function $f(x(t);a(t))$ represents the dynamic constraints that define the
equations of motion of the system. We write $f_i$ for the function that
defines $\dot{x_i}(t)$. SINDy uses a similar form for discrete-time
difference equations. SINDy can be extended to probabilistic models, but that
was not necessary in our application.
At the heart of SINDy lies a method for feature selection and sparse
regression, based on the principle that only a few terms in the regression
model will be important. By using intuition about the model, the user
proposes a collection of feature functions, which may include polynomials,
Fourier terms, etc., that the user \emph{thinks} might govern the dynamics.
Each feature is a possibly non-linear function of one or more of the input
variables, that is, of the $x_i$ and $a_i$. SINDy attempts to extract a model
where each $f_i$ is a \emph{linear} function of the features.
To that end, SINDy defines
\begin{equation}
\hat{\boldsymbol{\Theta}} = [\boldsymbol{\Theta}_1, \boldsymbol{\Theta}_2, \dots, \boldsymbol{\Theta}_F], \mbox{where}\,\,\boldsymbol{\Theta}_i \in
R^{n+k} \rightarrow R,
\end{equation}
a vector of functions of the $x_i$ and $a_i$ that we will call \emph{feature
functions}. While the $\boldsymbol{\Theta}_i$ take $n+k$ arguments, they typically depend
on only a few elements of $x(t);a(t)$. $F$ is the number of features. We
further define
\begin{equation}
\boldsymbol{\Theta}(x;a) = [\boldsymbol{\Theta}_1(x;a), \boldsymbol{\Theta}_2(x;a), \dots, \boldsymbol{\Theta}_F(x;a)].
\end{equation}
SINDy also defines an $n \times F$ matrix $\boldsymbol{\Xi}$ of real values $\boldsymbol{\xi}_{i,j}$
in order to define
\begin{equation}
f_i(x;a) = \boldsymbol{\Theta}(x;a) \cdot \boldsymbol{\Xi}_i.
\end{equation}
Thus the $f_i$ are indeed linear functions of the possibly non-linear
features, with $\boldsymbol{\Xi}$ giving the coefficients (weights) of those features for
each $f_i$. We can now write
\begin{equation}
f(x;a) = \boldsymbol{\Theta}(x;a) \cdot \boldsymbol{\Xi}
\end{equation}
for the overall definition of $f$.
We now proceed to define the optimization problem that SINDy solves. Given a
sequence of observations $x(1), x(2), \dots, x(N)$ and actions $a(1), a(2),
\dots, a(N)$, SINDy can compute actual derivatives $\dot{x}(t)$ according to
several methods, and can also take user-supplied values for those derivatives
or a user-supplied derivative calculating function. We used its built-in
smoothed finite differencing method. We can thus form pairs for training
$(\dot{x}(i), x(i);a(i))$. Note that the $x(t);a(t)$ inputs are readily
extracted from RL trajectories of the typical $(s,a,s')$ form.
We aggregate the $x$ and $a$ values into arrays $X$ and $A$, respectively:
\begin{eqnarray}
\mathbf{X} &= \overset{\text{\normalsize state}}{\left.\overrightarrow{\begin{bmatrix}
x_1(1) & x_2(1) & \cdots & x_n(1)\\
x_1(2) & x_2(2) & \cdots & x_n(2)\\
\vdots & \vdots & \ddots & \vdots \\
x_1(N) & x_2(N) & \cdots & x_n(N)
\end{bmatrix}}\right\downarrow}\begin{rotate}{270}\hspace{-.125in}time~~\end{rotate}\label{Eq:DataMatrix},
\end{eqnarray}
\begin{eqnarray}
\mathbf{A} &= \overset{\text{\normalsize action}}{\left.\overrightarrow{\begin{bmatrix}
a_1(1) & a_2(1) & \cdots & a_k(1)\\
a_1(2) & a_2(2) & \cdots & a_k(2)\\
\vdots & \vdots & \ddots & \vdots \\
a_1(N) & a_2(N) & \cdots & a_k(N)
\end{bmatrix}}\right\downarrow}\begin{rotate}{270}\hspace{-.125in}time~~\end{rotate}\label{Eq:DataMatrixA},
\end{eqnarray}
and we write $X;A$ for the $N \times (n+k)$ array formed by appending each
$a(i)$ to the corresponding $x(i)$. We extend our $\boldsymbol{\Theta}$ notation to
define
\begin{equation}
\boldsymbol{\Theta}(X;A) = [\boldsymbol{\Theta}(x(1);a(1)), \boldsymbol{\Theta}(x(2);a(2)), \dots, \boldsymbol{\Theta}(x(N);a(N))].
\end{equation}
The optimization problem to be solved is then:
\begin{equation}
\dot{X} = \boldsymbol{\Theta}(X;A) \cdot \boldsymbol{\Xi},
\end{equation}
and we desire a solution where $\boldsymbol{\Xi}$ is sparse. Note that this is now a
sparse \emph{linear} regression problem, in terms of the (possibly non-linear)
functions $\boldsymbol{\Theta}$ of the input data $X;A$. SINDy can apply any of a variety
of sparse regression methods. We use its Sequentially Thresholded Least
Squares (STLSQ) method, which uses Ridge regression, with a threshold of
$0.0009$. It iteratively solves the least squares regression problem with an
L2 (ridge) penalty on the weights $\boldsymbol{\Xi}$, masking out weights below
the threshold (setting them to $0$).
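For concreteness, the following is a minimal NumPy sketch of sequentially
thresholded least squares as just described; the function name \texttt{stlsq},
the ridge weight \texttt{alpha}, and the coefficient layout (one column per
state dimension) are our own illustrative choices rather than SINDy's actual
implementation.
\begin{verbatim}
import numpy as np

def stlsq(Theta, X_dot, threshold=0.0009, alpha=1e-5, n_iter=10):
    """Sequentially thresholded least squares (illustrative sketch).

    Theta : (N, F) feature-function values evaluated on the data.
    X_dot : (N, n) estimated time derivatives.
    Returns an (F, n) coefficient matrix; column i holds the weights of f_i.
    """
    F = Theta.shape[1]
    # Initial ridge-regression solve for all coefficients at once.
    Xi = np.linalg.solve(Theta.T @ Theta + alpha * np.eye(F), Theta.T @ X_dot)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold        # weights to mask out
        Xi[small] = 0.0
        for i in range(X_dot.shape[1]):       # re-fit each f_i on surviving features
            keep = ~small[:, i]
            if keep.any():
                A = Theta[:, keep]
                Xi[keep, i] = np.linalg.solve(
                    A.T @ A + alpha * np.eye(keep.sum()), A.T @ X_dot[:, i])
    return Xi
\end{verbatim}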
Let us consider the very small example of a mass $M$ moving in one dimension
$x$ under a time varying action force $g$. (We use $g$ to avoid confusion
with $f$.) Our observations are the position $x$ and velocity $v$. From
Newton's Law we know that $\dot{v} = g/M$, so the equations of motion are:
\begin{equation}
\begin{bmatrix}
\dot{x} \\
\dot{v}
\end{bmatrix} =
\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1/M \\
\end{bmatrix} \cdot
\begin{bmatrix}
x \\
v \\
g
\end{bmatrix}
\end{equation}
In this case, given suitable data, and a collection of feature functions
$\hat{\boldsymbol{\Theta}}$ that included $\lambda (x,v,g). v$ and $\lambda (x,v,g). g$ (or
more loosely, terms $v$ and $g$), SINDy should arrive at a solution $\boldsymbol{\Xi}$
whose non-zero elements are exactly the $1$ and $1/M$ in the equations of
motion, to within computational error. Notice that SINDy in effect discovers
the mass $M$; that is, we knew the \emph{form} of the equations, but not
necessarily the exact values of the coefficients. This is important with
real-world robots, each one of which will exhibit slight variations from a
desired specification, etc.
Here we knew the exact form in advance. If we were less certain, we might
include more functions in $\hat{\boldsymbol{\Theta}}$, such as terms of the form $1$, $x$,
$x^2$, $x\cdot v$, $\sin x$, etc., and SINDy would still arrive at the same
solution, since its sparse regression drives the coefficients of the
superfluous terms to zero.
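A minimal sketch of this toy example using the PySINDy package (introduced in
the Experiments section) might look as follows; the simulation details (Euler
integration, the sinusoidal forcing, $M=2$) are our own choices, and we assume
PySINDy's \texttt{SINDy}, \texttt{STLSQ}, and \texttt{PolynomialLibrary}
interfaces.
\begin{verbatim}
import numpy as np
import pysindy as ps

# Simulate the 1-D mass example x_dot = v, v_dot = g(t)/M with a simple Euler scheme.
M, dt, steps = 2.0, 0.001, 5000
t = np.arange(steps) * dt
g = np.sin(2 * np.pi * t)              # time-varying action force
x = np.zeros((steps, 2))               # columns: position, velocity
for i in range(steps - 1):
    x[i + 1, 0] = x[i, 0] + dt * x[i, 1]
    x[i + 1, 1] = x[i, 1] + dt * g[i] / M

# A linear candidate library suffices here; a larger library should give the same model.
model = ps.SINDy(optimizer=ps.STLSQ(threshold=0.0009),
                 feature_library=ps.PolynomialLibrary(degree=1))
model.fit(x, u=g, t=dt)
model.print()   # expected, up to numerical error: x0' = 1.0 x1, x1' = 0.5 u0
\end{verbatim}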
The key insights of SINDy are:
\begin{itemize}
\item deriving a model of physics-based dynamics using physically plausible
(but possibly nonlinear) features of the observations and actions; and
\item assuming a model that has a simple equational form rather than trying to
learn a model via ``brute force'' function approximation.
\end{itemize}
Together these insights allow learning of a highly accurate model with a
relatively small number of observations. Further, the models have only a
small number of parameters (typically much less than the number of elements of
$\boldsymbol{\Xi}$, which itself has orders of magnitude fewer weights than a typical
neural net model). One expects SINDy to do well if the problem is physics
based, that is, if the problem admits a solution as a relatively simple,
possibly nonlinear, differential (or difference) equation. It is not a
general solution for extracting a model from an arbitrary data set.
\section{Dyna-Style Learning with SINDy}
\begin{figure}[htb]
\vspace*{-12pt}
\begin{algorithm}[H]
\label{alg:dyna}
\caption{Dyna-Style Model-Based RL with SINDy}
\begin{algorithmic}
\STATE Hyper-parameters: Integers $N_e$ and $N$
\STATE Initialize $\mathcal{D}_{\mathrm{\emph{env}}}$ and $\mathcal{D}_{\mathrm{\emph{SINDy}}}$ as empty data sets
\STATE Initialize policy $\pi$ and SINDy parameters $\boldsymbol{\Xi}$
\STATE ~~~~to random values
\FOR{$N_e$ \emph{rollouts}}
\STATE Collect data ($s_i$, $a$, $s_{i+1}$) on real environment with
\STATE ~~~~random or pseudo-random policy
\STATE$\mathcal{D}_{\mathrm{\emph{SINDy}}} \gets \mathcal{D}_{\mathrm{\emph{SINDy}}} \bigcup$ ($s_i$, $a$, $s_{i+1}$)
\ENDFOR
\STATE Train model $\boldsymbol{\Xi}$ on $\mathcal{D}_{\mathrm{\emph{SINDy}}}$ using SINDy
\WHILE{$\pi$ \emph{is not optimal}}
\FOR{$N$ \emph{epochs}}
\STATE Collect rollout $roll_{\mathrm{\emph{sim}}}$ from model $\boldsymbol{\Xi}$
\STATE Train $\pi$ on simulated $roll_{\mathrm{\emph{sim}}}$ using
\STATE ~~~~model-free algorithm and known reward function
\ENDFOR
\STATE Collect a single rollout $r_{\mathrm{\emph{real}}}$ from real environment
\STATE Train $\pi$ on $r_{\mathrm{\emph{real}}}$ using an arbitrary model-free algorithm
\STATE $\mathcal{D}_{\mathrm{\emph{SINDy}}} \gets \mathcal{D}_{\mathrm{\emph{SINDy}}} \bigcup r_{\mathrm{\emph{real}}}$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\vspace*{-6pt}
\end{figure}
We propose a Dyna-style learning algorithm with SINDy at its base. In this
algorithm SINDy learns an accurate sparse model of the non-linear dynamics of
a physical RL system. We define two hyper-parameters that can be chosen
depending on the complexity of the physical system we are trying to learn:
\begin{itemize}
\item $N_e$: The number of rollouts for which the algorithm collects data
to train SINDy. The algorithm uses a random or pseudo-random policy for these
rollouts. SINDy does not generally need much data, so $N_e$ is typically
small, and in fact a value of 1 sufficed in our experiments.
\item $N$: The number of epochs to train using data generated by the
SINDy-induced model for each epoch of training using data from the actual
system, i.e., the actual system is used only one out of every $N+1$ epochs.
If the SINDy model is accurate over a wide enough part of the state space,
$N$ can be set to an arbitrarily large value, which worked for a number of
our experiments.
\end{itemize}
Notice that we assume $\hat{\boldsymbol{\Theta}}$ has been chosen in advance and thus
speak of the model as being $\boldsymbol{\Xi}$, which strictly speaking is the
coefficients of the model. The Dyna-style algorithm can readily be extended
to retrain the SINDy model periodically if there were benefit to doing so, but
it was not necessary in our experiments.
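The loop of Algorithm~\ref{alg:dyna} can be sketched in Python roughly as
below; \texttt{real\_env}, \texttt{SindyModel}, \texttt{collect\_rollout},
\texttt{sac\_update}, and \texttt{converged} are hypothetical placeholders for
the environment, the learned model, and the SAC machinery, not an actual API.
\begin{verbatim}
def dyna_sindy(real_env, policy, N_e=1, N=10000, rollout_len=30):
    # 1. Gather a small amount of real data with a (pseudo-)random policy.
    data = [collect_rollout(real_env, policy=None, length=rollout_len)
            for _ in range(N_e)]
    model = SindyModel().fit(data)           # train the sparse dynamics model once

    while not converged(policy):
        # 2. Train mostly on rollouts simulated by the SINDy-induced model.
        for _ in range(N):
            sim_roll = collect_rollout(model, policy, length=rollout_len)
            policy = sac_update(policy, sim_roll)
        # 3. Periodically ground the policy with a single real rollout.
        real_roll = collect_rollout(real_env, policy, length=rollout_len)
        policy = sac_update(policy, real_roll)
        data.append(real_roll)               # available if the model is ever re-fit
    return policy
\end{verbatim}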
\section{Experiments}
We conducted experiments using four environments with three levels of
difficulty in mind: discrete actions (Cart Pole), continuous actions (Mountain
Car and Pendulum Swing Up), and realistic Mujoco physics control problems with damping and
friction (Inverted Pendulum). We use the open-source Python package PySINDy by
\citewithauthor{Kaptanoglu+:2021:PySINDy}. We selected variants of SINDy from
among its continuous-time, discrete-time, and driven models appropriate to
each experiment. We use a state of the art model-free algorithm,
Soft Actor-Critic (SAC), introduced by \citewithauthor{Haarnoja+:2018:arXiv}
and extended by \citewithauthor{Christodoulou:2019:arXiv}, as our basis for
training on both the real system and the simulated rollouts described in
Algorithm~\ref{alg:dyna}. We also compare our results against Model-Based
Policy Optimization (MBPO), a state of the art model-based method introduced
by \citewithauthor{Janner+:2019:NIPS}. MBPO and our model differ primarily in
that MBPO learns a model represented by a neural net while SINDy learns a
model represented by a differential (or difference) equation.
Our method outperforms both
other methods in all experiments, giving us a $4$--$100\times$, $40\times$,
and $200\times$ speedup against MBPO, and $15$--$375\times$, $60\times$,
$500\times$, and $5\times$ speedup against SAC for the Inverted Pendulum,
Pendulum Swing Up, Mountain Car, and Cart Pole problems, respectively.
Figure~\ref{fig:results} shows our experimental results. Each experiment is
averaged over 10 different seeds, with further averaging performed by
evaluating the agent 10 times for each seed. All results involve running the
true robotic system once for a small designated number of time steps
(Table~\ref{tab:hyperparameters}) to obtain samples for training the SINDy model, and then
performing further policy improvement steps based only on the
SINDy derived model. The plots for SINDy have been shifted right by the number of time steps
gathered to train SINDy based on interactions with the real environment. We
now offer more details of each experiment.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Results.png}
\caption{The results of our SINDy method compared to state of the art
model-based (MBPO) and model-free (SAC) methods. Our method outperforms
current methods in discrete action, continuous action, and noisy/damped
environments. We found $4$--$100\times$, $40\times$, and $200\times$
speedup against MBPO and $15$--$375\times$, $60\times$, $500\times$, and
$5\times$ speedup against SAC for the Inverted Pendulum, Pendulum Swing
Up, Mountain Car, and Cart Pole, respectively.}
\label{fig:results}
\vspace{-10pt}
\end{figure*}
\textbf{Discrete Classic:} Our discrete action case is the Cart Pole
environment. Data from a single rollout of 30 steps was sufficient for SINDy to
identify a high accuracy dynamics model. To avoid rapid termination of an episode we
applied force in opposite directions at alternating time steps, with
occasional random actions taken for exploration.
We can summarize the dynamics of the Cart Pole system in these equations:
\begin{equation*}
\ddot\theta = \frac{(g\cdot\sin{\theta} - \cos{\theta}\cdot\mbox{\emph{C}})}{l\cdot(\frac{4}{3} - \frac{m_p}{m_p + m_c}\cdot(\cos{\theta})^2)},
\end{equation*}
\begin{equation}
\ddot x = \mbox{\emph{C}} - \frac{l}{m_p + m_c}\cdot\ddot\theta\cdot\cos{\theta}
\label{eq:cartpole}
\end{equation}
where
\begin{equation*}
\mbox{\emph{C}} = (F + l\cdot\dot\theta^2\sin{\theta})/(m_p+m_c),
\end{equation*}
where $l$ is the length of the pole, $m_p$ its mass, $x$ the position of the
cart, $m_c$ its mass, $\theta$ the vertical angle between the two, and $F$
the force. Using a small-$\theta$ assumption, justified because the pole falls
and the episode terminates if $\theta$ is not small, we can write $\sin{\theta}
\approx \theta$ and $\cos{\theta} \approx 1$, resulting in these equations:
\begin{equation*}
\ddot\theta = \frac{(g\cdot\theta - \mbox{\emph{C}})}{l\cdot(\frac{4}{3} - \frac{m_p}{m_p + m_c})},
\end{equation*}
\begin{equation}
\ddot x = \mbox{\emph{C}} - \frac{l}{m_p + m_c}\cdot\ddot\theta
\label{eq:cartpole_approx}
\end{equation}
where
\begin{equation*}
\mbox{\emph{C}} = (F + l\cdot\dot{\theta}^2\cdot\theta)/(m_p+m_c).
\end{equation*}
Thus, substituting the right hand side of the equation for $\ddot{\theta}$
into the one for $\ddot{x}$, the right hand sides of the system of dynamics
equations can be written in terms of $\theta$, $\dot{\theta}$, $x$, $\dot{x}$,
and constants. The $\hat{\boldsymbol{\Theta}}$ we provided to SINDy was $\begin{bmatrix}
1, & a, & a^2, & a\cdot b, & a^2\cdot b\end{bmatrix}$ where $a$ and $b$ can be any of
$\theta$, $\dot{\theta}$, $x$, and $\dot{x}$.
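For illustration, this candidate library can be materialized as a plain
feature matrix; the helper below (\texttt{cartpole\_features}) and its column
ordering are our own illustrative choices, not part of the PySINDy package.
\begin{verbatim}
import numpy as np
from itertools import combinations, permutations

def cartpole_features(obs):
    """Expand observations into the candidate terms 1, a, a^2, a*b, a^2*b.

    obs : (N, 4) array with columns theta, theta_dot, x, x_dot.
    Returns an (N, F) feature matrix; the column ordering is our own choice.
    """
    cols = [np.ones(len(obs))]                       # constant term
    for i in range(4):
        cols.append(obs[:, i])                       # a
        cols.append(obs[:, i] ** 2)                  # a^2
    for i, j in combinations(range(4), 2):
        cols.append(obs[:, i] * obs[:, j])           # a * b (unordered pairs)
    for i, j in permutations(range(4), 2):
        cols.append(obs[:, i] ** 2 * obs[:, j])      # a^2 * b (ordered pairs)
    return np.stack(cols, axis=1)
\end{verbatim}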
Figure~\ref{fig:results} shows that the approximation is accurate enough that
a policy trained to optimality under the SINDy model remains accurate in the
real environment, needing just two further episodes of training in the real
environment for fine tuning.
\textbf{Continuous Classic:} The equations governing the dynamics for the
continuous environments of
Mountain Car and Pendulum Swing Up are learned precisely by SINDy (to 4
decimal places), extracting appropriate features from among a larger set. For
example, for Mountain Car, the features we used were $1$, $x$, $x^2$, and
$\sin kx$ and $\cos kx$ for $k=1,2,3$. The results of
Figure~\ref{fig:results} show that the policy
that is optimal on the SINDy dynamics is also optimal in the real
environment. In fact, when we examine the equations learned
by SINDy, we see that they \textbf{\emph{match the true dynamics}} of these
environments. It is worth noting that the Mountain Car domain has inelastic
collisions, and our model is robust in the face of those discontinuities when learning
dynamics. Furthermore SINDy learned these dynamics using a single rollout of
length 100 for Mountain Car and 20 for Pendulum Swing Up.
\textbf{Mujoco:} We now consider the
Inverted Pendulum domain of the Mujoco physics simulator. This adds the
challenge of having damping on the controller and friction added to the
system. The dynamics that SINDy learned generalize well to the true environment,
as can be seen in Figure~\ref{fig:results}, and it needed
\textbf{\emph{at most one additional training episode}} for fine tuning the policy
learned using the model so that it is optimal with respect to the true physical
system. We used the same approximation here as for
Equation~\ref{eq:cartpole_approx} and used the same policy for collecting the
initial samples.
\vspace{-5pt}
\begin{table*}[htb]
\begin{center}
\medskip
\caption{Hyper-parameters used for training the SINDy model and Results}
\label{tab:hyperparameters}
\begin{tabular}{lcccccrr}\hline
& & & &
Dynamics & & & \\
Environment & $N_e$ & $R$ & $N$ &
learned & Generalizes & $P$ & $P'$ \\\hline\hline
Cart Pole & 1 & 30 & $\infty$ & Approximate & Yes & 164 &$\sim$70\\\hline
Mountain Car & 1 & 50 & $\infty$ & Exact & Yes & 50 & 7\\\hline
Pendulum & & & & & & & \\
Swing up & 1 & 20 & $\infty$ & Exact & Yes & 99 & 10\\\hline
Inverted & & & & & & & \\
Pendulum & 1 & 30 & $\infty$ & Approximate & Yes & 164 & 50\\\hline
\end{tabular}
\end{center}
\noindent
$N_e$ is the number of rollouts; $R$ is the length of each rollout; $N$ is the
number of episodes using just the model (vs.\ the actual system); $P$ is the
number of parameters (size of $\boldsymbol{\Xi}$); $P'$ is the number of non-zero
parameters (Cart Pole is stochastic so the number varied a little). The
MBPO model size was 613,036 parameters.
The SINDy model size is $n \cdot F$, where $n$ is the number of dimensions in the
state and $F$ the number of candidate feature functions in $\hat{\boldsymbol{\Theta}}$.
\end{table*}
\textbf{Discussion:} As Table~\ref{tab:hyperparameters} shows, our approach learns
models that have a very small number of parameters, particularly compared with
those learned by MBPO (613,036 parameters). Furthermore, since our models
represent physics dynamics equations explicitly, they are highly
interpretable, while function approximation neural nets generally are not. As
previously discussed, the models are very accurate and can be learned with
only a small amount of training data. The method works for a range of kinds
of dynamics and control, both continuous and discrete. We have further seen
that approximate dynamics, such as replacing $\sin(\theta)$ by $\theta$ when
angles tend to be small in the region of the state space that is of interest,
can lead to dynamics models accurate enough for training to result in optimal
behavior.
\section{Conclusions and Future Work}
We presented a method for learning the model of physics-based RL systems. It is
capable of learning either exact or high-accuracy approximate non-linear dynamics
from small numbers of samples. We showed results from four environments
demonstrating how training with these models results in asymptotic performance as good
as that achieved by state of the art model-free methods, while converging
significantly more rapidly and requiring less training data. Our
dynamics models are of low dimension, easy to extract, and highly
interpretable, advantages they have over state of the art model-based methods.
In summary, our contributions are:
\begin{enumerate}
\item Our algorithm matches or exceeds the asymptotic performance of existing
state of the art model-based and model-free learning methods on these
tasks while
requiring significantly fewer time steps of interaction with the real
system. In our experiments, we needed at most 50 time steps of
interaction with the real system to identify high accuracy models that
allow induction of near optimal policies. We reduced the time steps of
interaction with the real system
necessary to convergence by $4\times$--$100\times$, $40\times$, and
$200\times$ against MBPO, and $15\times$--$375\times$, $60\times$,
$500\times$, and $5\times$ against SAC for Inverted Pendulum, Pendulum
Swing Up, Mountain Car, and Cart Pole, respectively.
\item Our method requires significantly fewer parameters than state of the
art model-based methods. We need at most $n \cdot F$ parameters,
where $n$ is the dimensionality of the state space and $F$ the number of
features from which SINDy can choose.
In comparison, the MBPO network used 613,036 parameters.
\item Our dynamics models are more interpretable, working from intuitively
selected or approximated kernel functions, and extracting the governing
physics dynamics equations and their parameters (coefficients).
\end{enumerate}
A future direction for this work is exploring more complex robotic systems
supported by the Mujoco physical simulation framework.
\section*{Acknowledgements}
The authors would like to thank Lucas N. Alegre for his implementation of the
MBPO algorithm used in this paper. Rushiv Arora is supported by a Bay State
Fellowship.
\section{Introduction} \label{sec1}}
\IEEEPARstart{M}{ulti-view} multi-label learning is designed to predict the multiple labels of an object represented by multiple views. In reality, multi-view multi-label learning has wide applications ranging from image classification \cite {r1,r2} to video analysis \cite {r3,r4}, since multi-view multi-label data is ubiquitous. For example, an image can be described by Histogram of Oriented Gradients (HOG), Scale Invariant Feature Transform (SIFT), and color features, and meanwhile the image can also be labeled with "tree, water, sky"; a video includes diverse representations such as audio, text, and picture, and at the same time the video can be annotated by several labels like "Shakespeare, opera, King Lear". As two individual research fields, multi-view learning \cite {r5,r2017nie,r6,r7} and multi-label learning \cite {r8,r9,r10,r11} have each been extensively studied in the last two decades. Nonetheless, to date, their intersection (or marriage), multi-view multi-label learning \cite {r12,r13}, is still relatively under-studied. Note that there has been an active literature on "multi-modal learning" \cite {r14,r15,r16,r17,r18}, and the scope of "multi-view learning" is more extensive, since the latter covers not only multi-modal learning but also learning from the same modality observed from different perspectives.
In practice, a situation we often encounter is that multi-view multi-label data appears in three forms: missing labels, incomplete views, and non-aligned views. Reasons behind the appearance of these issues are: (1) insufficient resources or limited knowledge makes it expensive to obtain all the relevant labels of a sample; (2) malfunction of sensors or occlusion in some views causes the incompleteness of views; (3) completely aligned information can hardly be accessed owing to privacy protection, or the alignment of views is disturbed by human carelessness. All three issues could dramatically degrade the performance.
Existing multi-view learning methods, whether supervised \cite{r19,r20,xu2014}, semi-supervised \cite{r21,r22,nie2017}, or unsupervised \cite{r23,r24,nie2018}, commonly assume that the views involved are aligned; however, this does not necessarily always hold in reality. For example, in a social network, users may register multiple accounts as in Facebook, Twitter, and Instagram, but it is hard to align these social accounts with the same user owing to privacy protection; in disease diagnosis, we obtain different types of examination data of the patients from different hospitals, but for the same privacy reasons, we cannot align these data with the same patient; in a questionnaire survey, different organizations survey the same group of people, but the survey is generally anonymous, making the alignment information unavailable. In addition, non-aligned views are natural in recommendation systems \cite{r25}, video surveillance \cite{r26}, and so on.
It is worth emphasizing that non-aligned views add extra challenges to an already considerably challenging problem with missing labels and incomplete views. The reasons are as follows: (1) Non-aligned views make the interactions among different views no longer readily available, so their explicitly complementary information can hardly be exploited. (2) In conventional incomplete multi-view learning, the missing views of samples can be completed with the help of the paired ones from other observed views; however, in the non-aligned setting, no paired samples are available. As a result, the problem of multi-view incompleteness becomes more severe and harder to make tractable, if it is possible at all. (3) With non-aligned views, the information that can be utilized to establish correspondence among views most probably hides in the common or shared labels of samples from different views; unfortunately, in the missing multi-label setting, such beneficial label information is quite limited.
\noindent {\textbf {Remark 1.}} If the multiple labels are complete during training, samples of the non-aligned views can be simply aligned by directly using the given labels and then divided into the corresponding groups. We refer interested readers to \cite{r27} for the partially non-aligned two-view case. Consequently, the non-alignment of multiple views under the situation of complete multiple labels is formally relatively trivial. However, it is worth emphasizing that in the case of missing multiple labels, the complete ground truth of the samples is no longer available. Therefore, beneficial label information for aligning the non-aligned views is quite limited, which makes the problem of non-aligned multiple views with missing multiple labels more challenging and no longer trivial. To alleviate such insufficiency of multiple labels, extra and accurate structural information of the multiple labels accompanying the non-aligned incomplete multiple views needs to be mined, which is the main focus of this work.
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\textwidth]{Fig1}
\caption{The global and local structures of the multiple labels. The same sample of non-aligned views is represented by the same color, and "sky", "cloud", … , "fish" are the labels. "1" in the label matrix means that the sample is annotated with the corresponding label whereas "-1" means not. All the label matrices are vertically concatenated. The label matrix of all samples from the two views has full column rank or high rank, which refers to the global structure of the multiple labels. Meanwhile, the sub-label matrix comprised of samples that share the same individual label tends to be low-rank, e.g., the rank of the sub-label matrix of samples sharing the label "cloud" equals 2, which corresponds to the local structure of the multiple labels.}
\label{fig1}
\end{figure*}
To date, existing methods \cite{r28,r29} addressing the three challenges (\textit {missing labels, incomplete views, and non-aligned views}) usually suffer from two disadvantages. First, they only focus on the first two challenges and assume that the views involved must be aligned. However, as mentioned above, this assumption does not necessarily always hold in reality. When confronting non-aligned views, traditional multi-view learning methods designed for the aligned case are difficult to adopt directly due to the lack of straightforward interactions among views. Second, introducing too many hyper-parameters when dealing with incomplete views and missing labels makes these methods difficult to optimize and reproduce.
To alleviate the aforecited two disadvantages, two questions arise naturally. One is how to simultaneously address all the three challenges. The other is how to efficiently build a model with as few hyper-parameters as possible.
In this paper, we propose a \textbf{N}on-\textbf{A}ligned \textbf{I}ncomplete \textbf{M}ulti-view and \textbf{M}issing \textbf{M}ulti-label \textbf{L}earning method, abbreviated as NAIM$^3$L, to address the two questions raised above. Our starting point is that although samples among views are not aligned explicitly, they can still be bridged implicitly through the common or shared labels and thus can be learned in a complementary manner. Besides, intuitively, samples with similar labels are more prone to be strongly correlated with each other whereas those with dissimilar labels are weakly correlated, or even uncorrelated, which reflects the local and the global structural relations within the multiple labels, respectively. Mathematically, the locally correlated structure approximately corresponds to the low rankness of the label matrix of samples sharing the same individual label, while the globally weakly correlated or uncorrelated structure roughly corresponds to the high rankness of the label matrix of all samples. An intuitive description of this global-local structure is illustrated in Fig. \ref {fig1}. For conciseness and due to the limitation of space, we only use two views to illustrate the global and the local structures of multiple labels, but such an illustration can be directly generalized to the case of more than two views.
Note that in traditional multi-label learning methods, the label matrix is often assumed to be low-rank rather than high-rank as we do. We argue from the following two perspectives that the high-rank assumption is reasonable. First, intuitively, samples in real datasets with multiple labels are usually diverse and contain dissimilar labels. As samples with dissimilar labels are weakly correlated, or even uncorrelated, thus the entire multi-label matrix is often of a high rank. Second, mathematically, as entries of the label matrix take binary values, it is unlikely for this matrix to be low-rank. To further demonstrate the rationality of our assumptions, we show the high-rankness of the entire label matrix corresponding to all samples and the low-rankness of each sub-label matrix corresponding to samples that share a single label in Fig. \ref {Fig2a} and \ref {Fig2b}, respectively. It is well known that the rank of a matrix is equal to the number of its non-zero singular values. From Fig. \ref {Fig2a}, we can find that the singular values of the training and testing label matrices have a heavy tail, which indicates the high-rankness of the entire label matrix (the numbers of non-zero singular values of these two matrices are 259 and 249, i.e., the ranks are 259 and 249, either full-rank or almost full-rank). As the number of multiple labels is too big to show the low-rankness of each sub-label matrix in a single picture, alternatively, we show the mean value and the median value of the ranks of these sub-label matrices. From Fig. \ref {Fig2b}, we can observe that the ranks of the sub-label matrices are relatively low (the ranks are about 15 and 5). In brief, these observations are consistent with the assumptions of our model.
\begin{figure}[h]
\centering
\subfigure[High-rank]{
\includegraphics[width=0.227\textwidth]{Fig2_a}
\label{Fig2a}
}
\subfigure[Low-rank]{
\includegraphics[width=0.227\textwidth]{Fig2_b}
\label{Fig2b}
}
\caption{The high-rankness of the entire label matrix corresponding to all samples and the low-rankness of the sub-label matrices corresponding to samples that share a single label on Corel5k dataset, which has 260 labels.}
\end{figure}
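The rank statistics reported in Fig.~\ref{Fig2a} and \ref{Fig2b} can be
reproduced with a few lines of NumPy; the loader in the comment is a
hypothetical placeholder for the Corel5k label matrix, and the $\{-1,1\}$
encoding follows the convention used in this paper.
\begin{verbatim}
import numpy as np

def label_rank_stats(Y):
    """Y: (n, c) label matrix with entries in {-1, 1}."""
    full_rank = np.linalg.matrix_rank(Y)             # global structure: expected high
    sub_ranks = []
    for k in range(Y.shape[1]):
        rows = Y[:, k] == 1                          # samples annotated with label k
        if rows.any():
            sub_ranks.append(np.linalg.matrix_rank(Y[rows]))
    return full_rank, np.mean(sub_ranks), np.median(sub_ranks)

# Y = load_corel5k_labels()        # hypothetical loader, not provided here
# print(label_rank_stats(Y))       # the paper reports rank 259 overall and mean/median
#                                  # sub-label ranks of roughly 15 and 5
\end{verbatim}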
By formulating the above two structures as a single regularizer, we design the final optimization objective of our model. What makes our method most different from existing multi-label learning methods is that the latter almost completely neglect the global high-rank structure; our experiments will show that this global high rankness plays an indispensable role. In summary, our contributions are fourfold:
(1) To the best of our knowledge, this is the first multi-view multi-label learning work to jointly consider non-alignment of views, incompleteness of views, and missing labels, which is much more realistic and more challenging. Contrary to other methods, this is the first work that explicitly models the global structure of the multi-label matrix as high-rank, whose effectiveness has been validated in our experiments.
(2) We utilize the ConCave-Convex Procedure (CCCP) to reduce the objective function to a convex optimization problem, and then provide an efficient Alternating Direction Method of Multipliers (ADMM) algorithm by which a closed-form solution of each sub-problem is derived. Besides, the customized ADMM algorithm for NAIM$^3$L has linear computational complexity with respect to the number of samples, which makes it more efficient in handling large-scale data.
(3) An efficient algorithm enjoying linear time complexity regarding the number of samples is derived as a byproduct to compute the sub-gradient of the trace norm.
(4) Even without view-alignment, our method can still achieve better performance on five real datasets compared to state-of-the-arts with view-alignment.
Compared with other multi-view multi-label learning methods, our model exhibits the following four merits. Firstly, only one hyper-parameter corresponding to the regularization term is introduced in modeling, which makes its optimization much easier than that of existing methods. Secondly, our model is inductive, thus it can be directly applied to predict unseen samples. Thirdly, our model outperforms state-of-the-arts on five real datasets even without view-alignment. Fourthly, our model can be directly non-linearized to its kernelized version and can cooperate with deep neural networks in an end-to-end manner.
The rest of this paper is organized as follows. In Section \ref {secII}, we briefly overview some related work of multi-view multi-label learning. Section \ref {secIII} proposes our method NAIM$^3$L, and an efficient ADMM algorithm is presented in Section \ref {secIV} to solve it. Extensive experimental results and analyses are reported in Section \ref {secV}. Section \ref {secVI} concludes this paper with future research directions.
\section{Related Work} \label{secII}
To date, in the face of the aforesaid three challenges, existing work concerns either only one or two of them. According to the number of challenges addressed, we can roughly divide them into two major categories: methods of addressing one challenge and methods of addressing two challenges, where the former can be divided into three sub-categories: non-aligned multi-view learning, incomplete multi-view learning, and multi-label learning with missing labels. In this section, we overview recent research closely related to ours based on the above taxonomy.
\subsection{Methods of Addressing One Challenge}
\textbf{Non-aligned multi-view learning} deals with the problem that samples in all views are totally unpaired while accompanied by certain constraints from (weakly) supervised information such as must-link (ML) and cannot-link (CL). We will give a formal definition of non-aligned views in subsection \ref{III-A}. To our knowledge, UPMVKSC \cite {r30} is the first and only work that considers non-aligned multiple views. In UPMVKSC, the authors incorporated the ML and the CL constraints into kernel spectral clustering to increase the learning performance. However, this work assumes that the views involved are complete and that information about the constraints must be given a priori.
\textbf{Incomplete multi-view learning} handles the issue that samples in some views are missing. Recently, some incomplete multi-view learning methods have been proposed. For example, Xu \textit{et al.} \cite {r31} have proposed a method termed MVL-IV to accomplish multi-view learning with incomplete views by assuming that different views should be generated from a shared subspace. Afterwards, Du \textit{et al.} \cite {r32} have modeled the statistical relationships of multi-modality emotional data using multiple modality-specific generative networks with a shared latent space, in which a Gaussian mixture assumption of the shared latent variables is imposed. Lately, Xue \textit{et al.} \cite {r33} have integrated semi-supervised deep matrix factorization, correlated subspace learning, and multi-view label prediction into a unified framework to jointly learn the deep correlated predictive subspace and multi-view shared and private label predictors. Newly, Zhang \textit{et al.}\cite {r34} have presented CPM-Nets to learn the latent multi-view representation through mimicking data transmitting, such that the optimal trade-off between consistence and complementarity across different views can be achieved. In summary, all the above methods assume the views involved are aligned and focus on the task of multi-class learning, which is a special case of multi-label learning when each sample is annotated with only one label. In addition, there have been abundant literature about incomplete multi-view clustering \cite {r35,r36,r37,r38}, which is not the focus of this paper.
\textbf{Multi-label learning with missing labels} aims to predict the complete labels by giving part of multiple labels of an object. Zhang \textit{et al.} \cite {r39} have proposed a framework to sufficiently leverage the inter-label correlations and the optimal combination of heterogeneous features based on multi-graph Laplacian. Liu \textit{et al.} \cite {r40} have presented a model called lrMMC that first seeks a low-dimensional common representation of all the views by constraining their common subspace to be low-rank and then utilizes matrix completion for multi-label classification. Zhu \textit{et al.} \cite {r41} have put forward a method termed GLMVML, which extends the GLOCAL \cite {r42} model to its multi-view version by exploiting the global and the local label correlations of all the views and each view simultaneously.
To see the differences of the global and local structures between GLOCAL and our NAIM$^3$L, we briefly describe GLOCAL as follows. Given a dataset $\mathbf{X}$ and its corresponding multi-label matrix $\mathbf{Y}$, GLOCAL exploits the global and local label correlations by the manifold regularization,
\begin{equation} \label{eqa}
\begin{aligned}
\min _{\mathbf{U}, \mathbf{V}, \mathbf{W}}&\left\|\Pi_{\Omega}(\mathbf{Y}-\mathbf{U} \mathbf{V})\right\|_{F}^{2}+\lambda_{1}\left\|\mathbf{V}-\mathbf{W}^{T} \mathbf{X}\right\|_{F}^{2} \\
&+\lambda_{2}(\|\mathbf{U}\|_{F}^{2}+\|\mathbf{V}\|_{F}^{2}+\|\mathbf{W}\|_{F}^{2})+\lambda_{3} \operatorname{tr}\left(\mathbf{F}_{0}^{T} \mathbf{L}_{0} \mathbf{F}_{0}\right) \\
&+\sum_{m=1}^{g} \lambda_{4} \operatorname{tr}\left(\mathbf{F}_{m}^{T} \mathbf{L}_{m} \mathbf{F}_{m}\right)
\end{aligned},
\end{equation}
where $\Pi_{\Omega}$ is a projection operator, $\mathbf{U}\mathbf{V}$ is the low-rank decomposition of $\mathbf{Y}$, and $\mathbf{W}$ is a linear mapping matrix to be learned. $\mathbf{L}_{0}$ and $\mathbf{L}_{m}$ are the Laplacian matrices encoding the global and the local label correlations, respectively. $\mathbf{F}_{0}$ and $\mathbf{F}_{m}$ are the classifier output matrices for all samples and group $m$, respectively.
Although some of them also consider the global and the local structures, the main differences from ours lie in that almost all methods of this kind assume not only the global and the local manifold structures of the given data but also the low-rankness of the whole label matrix, whereas our method only needs an assumption about the rank of the (predictive) label matrix to formulate the global and the local structures; more importantly, we argue that the whole label matrix should be high-rank, as opposed to the popular low-rank assumption.
\subsection{Methods of Addressing Two Challenges}
As far as we have known, there are only two methods called iMVWL \cite {r28} and IMVL-IV \cite {r29} that take both incomplete multi-view and missing multi-label into consideration. iMVWL learns a shared subspace from incomplete views by exploiting weak labels and local label correlations, and then trains a predictor in this subspace such that it can capture not only cross-view relationships but also weak-label information of the training samples. We present the formulation of iMVWL to distinguish its low-rank assumption from our high-rank assumption. Specifically, given a multi-view dataset of $n_{v}$ views $\left\{\mathbf{X}_{v}\right\}_{v=1}^{n_{v}}$ and its corresponding multi-label matrix $\mathbf{Y}$, the objective function of iMVWL is formulated as follows:
\begin{equation} \label{eqb}
\begin{aligned}
\min _{\left\{\mathbf{U}_{v}, \mathbf{V}, \mathbf{W}, \mathbf{S}\right\}} \sum_{v=1}^{n_{v}}\left\|\mathbf{O}^{v} \odot\left(\mathbf{X}_{v}-\mathbf{V} \mathbf{U}_{v}^{T}\right)\right\|_{F}^{2} \\
+\alpha\|\mathbf{M} \odot(\mathbf{V} \mathbf{W} \mathbf{S}-\mathbf{Y})\|_{F}^{2}+\beta\|\mathbf{S}\|_{*}
\end{aligned},
\end{equation}
where $\odot$ denotes the Hadamard product, $\mathbf{O}^{v}$ and $\mathbf{M}$ are the indicator matrices for the missing views and labels, respectively. $\mathbf{V}\mathbf{U}_{v}^{T}$ is the non-negative decomposition of $\mathbf{X}_{v}$, $\mathbf{W}$ is the prediction coefficient matrix, and $\mathbf{S}$ is the label correlation matrix.
Differently, IMVL-IV provides a unified framework for characterizing multiple ingredients including label-specific features, global and local correlations among labels, low-rank assumption of the label matrix, and consistency among the representations of these views. However, these methods require views to be aligned, and involve at least two explicit hyper-parameters in their objectives. More dauntingly, IMVL-IV even contains ten hyper-parameters.
\subsection{Summary}
To summarize, the aforementioned methods have three weaknesses: (1) Most of them assume that the views involved must be aligned, which naturally limits their applicability in practice. (2) Except for MVL-IV, all the methods involve at least two explicit hyper-parameters in their modeling, and some of them even have additional implicit hyper-parameters, making model selection quite cumbersome. (3) Most of them are built on non-negative matrix factorization \cite {r43}, so they cannot directly obtain an inductive learner and are thus suboptimal for predicting unseen samples. In the next section, we will propose a concise yet effective model to overcome the above shortcomings.
\section{The Proposed Method} \label{secIII}
\subsection{Problem Settings} \label{III-A}
In this subsection, we first give the formal definition of non-aligned views and then present the problem settings of our model in detail.
\textbf{Definition 1.} Given a multi-view multi-label data set $\Omega$, suppose that $\Omega=\left\{\mathbf{X}^{(i)}\right\}_{i=1}^{V}$ contains $V$ different views, where $\mathbf{X}^{(i)}=\left[\mathbf{x}_{1}^{(i)}, \mathbf{x}_{2}^{(i)}, \cdots, \mathbf{x}_{n}^{(i)}\right] \in \mathbb{R}^{n \times d_{i}}$ is the feature matrix of the $i$-th view, and $n$ and $d_i$ are the number of samples and the dimension of features of the $i$-th view, respectively. If the samples across all views are totally unpaired, i.e., the $m$-th sample of the $i$-th view $\mathbf{x}_{m}^{(i)}$ and the $m$-th sample of the $j$-th view $\mathbf{x}_{m}^{(j)}$ are distinct samples for all $m \in\{1,2, \cdots, n\}$, $i, j \in\{1,2, \cdots, V\}$, and $i\ne j$, then these views are called non-aligned views.
In the traditional full-label setting, $\mathbf{Y}^{(i)}=\left[\mathbf{y}_{1}^{(i)}, \mathbf{y}_{2}^{(i)}, \cdots, \mathbf{y}_{n}^{(i)}\right]$ $\in\{-1,1\}^{n \times c}$ is the corresponding label matrix of the $i$-th view and $c$ is the number of multiple labels. $\mathbf{y}_{j k}^{(i)}=1$ $(k=1,2, \cdots, c)$ means the $k$-th label is relevant while $\mathbf{y}_{j k}^{(i)}=-1$ means irrelevant. In the missing-labels setting, some labels may not be observed; for example, when the $k$-th label of the $j$-th sample in the $i$-th view is missing, $\mathbf{y}_{j k}^{(i)}=0$, and it does not provide any information. Moreover, in the incomplete multi-view scenario, partial views of some samples are missing; correspondingly, the rows of these samples in the feature matrix $\mathbf{X}^{(i)}$ are missing.
\subsection{Problem Formulation}
In this subsection, we focus on the task of predicting the labels of unlabeled test data by learning from non-aligned incomplete multi-view and missing multi-label training data.
Predicting labels has attracted tremendous interest from researchers in the machine learning community and numerous works have been put forward. Among them, linear regression \cite {r44} might be the most widely used framework due to its simplicity and effectiveness. Thus, we formulate the prediction as a regression problem. Formally, the loss function can be written as follows,
\begin{equation}\label{Eq1}
\mathcal{L}=\frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right\|_{F}^{2},
\end{equation}
where $\mathbf{W}^{(i)} \in \mathbb{R}^{d_{i} \times c}$ is the coefficient matrix corresponding to the $i$-th view. Further, to deal with the challenge of missing labels, we introduce an indicator matrix $\mathbf{P}^{(i)}(i=1,2, \cdots, V)$ for each label matrix. Let $\Omega \subseteq\{1,2, \cdots, n\} \times\{1,2, \cdots, c\}$ be the set of indices observed in the label matrix $\mathbf{Y}^{(i)}$, then the definition of $\mathbf{P}^{(i)}$ is as follows:
\begin{equation}\label{Eq2}
\mathbf{P}_{j k}^{(i)}=\left\{\begin{array}{ll}
1 & \text { if }(j, k) \in \Omega \\
0 & \text {otherwise.}
\end{array}\right.
\end{equation}
Moreover, views are incomplete in our settings; to alleviate the negative impact arising from the incompleteness of multiple views, we set the rows of $\mathbf{P}^{(i)}$ to zero if the corresponding rows of $\mathbf{X}^{(i)}$ are missing, i.e., $\mathbf{P}_{j \bullet}^{(i)}=0$ if the $j$-th sample of the $i$-th view is missing, where $\mathbf{P}_{j \bullet}^{(i)}$ denotes the $j$-th row of the indicator matrix $\mathbf{P}^{(i)}$. By introducing $\mathbf{P}^{(i)}$, Eq. \eqref{Eq1} can be rewritten as:
\begin{equation}\label{Eq3}
\mathcal{L}=\frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{P}^{(i)} \odot\left(\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right)\right\|_{F}^{2},
\end{equation}
where $\odot$ denotes the Hadamard product. Simple as Eq. \eqref{Eq3} seems, it serves three purposes. First, it can be used to predict unlabeled data. Second, inference of the missing labels on the training data can be achieved as a byproduct. Third, it can handle both missing labels and incomplete views.
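A minimal NumPy sketch of Eqs.~\eqref{Eq2} and \eqref{Eq3} is given below; the function names and the list-of-views data layout are our own illustrative choices rather than part of the proposed method.
\begin{verbatim}
import numpy as np

def build_indicator(Y, missing_rows):
    """Eq. (2): 1 where a label is observed, 0 otherwise; zero rows for missing samples."""
    P = (Y != 0).astype(float)        # entries of Y are in {-1, 0, 1}, 0 = missing label
    P[missing_rows] = 0.0             # samples whose view is missing
    return P

def masked_loss(X_list, W_list, Y_list, P_list):
    """Eq. (3): squared loss accumulated over observed labels of observed samples only."""
    loss = 0.0
    for X, W, Y, P in zip(X_list, W_list, Y_list, P_list):
        R = P * (X @ W - Y)           # Hadamard product masks the unobserved entries
        loss += 0.5 * np.sum(R ** 2)
    return loss
\end{verbatim}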
However, the above loss function neither utilizes multi-view consistency nor exploits the multi-label structures. Thus, how to combine these two properties to make our model more discriminative is the main concern in the following.
Unfortunately, we are confronted with two obstacles when combining the aforementioned two properties. One is that the non-aligned views make the consensus of multiple views difficult to guarantee. The other is that, while dealing with non-aligned views, we also need to exploit the multi-label structures at the same time.
We observe that although samples among views are not aligned explicitly, they can implicitly be bridged through the common or shared labels. To mitigate the above two obstacles, we therefore align different views in a common label space, in which we characterize the global-local structures of the multiple labels. Our motivations are intuitive. First, although the views are not aligned, samples of different views that share the same label should be consistent; hence, views can be aligned by their labels. Second, in the real world, samples with similar labels are strongly correlated with each other whereas those with dissimilar labels are weakly correlated, or even uncorrelated. This implies the low rankness of the sub-label matrix of samples sharing the same label and the high rankness of the label matrix of all samples. Finally, the regularizer $\mathcal{R}$ is formulated as:
\begin{equation}\label{Eq4}
\begin{aligned}
\mathcal{R}=& \sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)}; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*} \\
&-\left\|[\mathbf{X}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}^{(V)} \mathbf{W}^{(V)}] \right\|_{*}
\end{aligned},
\end{equation}
where $\|\bullet \|_{*}$ denotes the trace norm, $[\mathbf{A} ; \mathbf{B}]$ is the vertical concatenation of matrices $\mathbf{A}$ and $\mathbf{B}$, and $\mathbf{X}_{k}^{(i)}$ is the sub-matrix of $\mathbf{X}^{(i)}$ which consists of samples corresponding to the $k$-th label observed in the $i$-th view. Note that, the intersection of $\mathbf{X}_{k}^{(i)}$ w.r.t. $k$ is non-empty due to the fact that a sample has multiple labels.
By concatenating the samples that share the same single label across all $V$ views (an early fusion strategy), the first term of Eq. \eqref{Eq4} serves two purposes. It not only aligns samples of different views in a common label space to ensure consistency but also characterizes the local low-rank structure of each predictive sub-label matrix corresponding to samples that share the same single label. Similarly, the second term aligns diverse views of all samples and depicts the global high-rank structure of the multiple labels corresponding to all samples. An intuitive illustration of this global-local structure is shown in Fig. \ref {fig1}. Combining Eq. \eqref{Eq3} and \eqref{Eq4}, the final objective function is formulated as:
\begin{equation}\label{Eq5}
\begin{aligned}
\min _{\mathbf{W}^{(i)}} & \frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{P}^{(i)} \odot\left(\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right)\right\|_{F}^{2} \\
&+\lambda\left(\sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}\right.\\
&\left.-\left\|[\mathbf{X}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}\right).
\end{aligned}
\end{equation}
Note that the two terms of the regularizer $\mathcal{R}$ are designed to jointly describe the global-local structure of the multiple labels. More importantly, these two terms work as a whole and each of them is indispensable, which will be validated by the ablation study in subsection \ref {V-D}. Thus, we only need one hyper-parameter in our objective function. However, the regularizer $\mathcal{R}$ is the difference of two convex functions; if this term were negative, we might get trivial solutions when $\lambda$ is large enough. In the following, we will rigorously prove a theorem claiming that $\mathcal{R}$ is non-negative, so that trivial solutions can be avoided.
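For reference, the regularizer $\mathcal{R}$ of Eq.~\eqref{Eq4} can be evaluated directly with NumPy as sketched below; this is only a forward evaluation for intuition, not the CCCP/ADMM optimization described later, and the data layout is our own illustrative choice.
\begin{verbatim}
import numpy as np

def naim3l_regularizer(X_list, W_list, Y_list):
    """Eq. (4): sum of per-label (local) trace norms minus the global trace norm.

    X_list[i]: (n_i, d_i) samples of view i, W_list[i]: (d_i, c) coefficients,
    Y_list[i]: (n_i, c) labels in {-1, 0, 1}, used only to select the sub-matrices.
    """
    E_list = [X @ W for X, W in zip(X_list, W_list)]     # predictive label matrices
    c = E_list[0].shape[1]

    local = 0.0
    for k in range(c):
        blocks = [E[Y[:, k] == 1] for E, Y in zip(E_list, Y_list)]
        blocks = [b for b in blocks if len(b)]           # drop views with no such samples
        if blocks:
            local += np.linalg.norm(np.vstack(blocks), ord='nuc')

    global_ = np.linalg.norm(np.vstack(E_list), ord='nuc')
    return local - global_                               # non-negative by Theorem 1
\end{verbatim}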
\begin{lemma} \label{lemma1} {\rm \cite{r45}}
Let $\mathbf{A}$ and $\mathbf{B}$ be matrices of the same row dimensions, and $[\mathbf{A}, \mathbf{B}]$ be the concatenation of $\mathbf{A}$ and $\mathbf{B}$, we have $\| [\mathbf{A}, \mathbf{B}]\left\|_{*} \leq\right\| \mathbf{A}\left\|_{*}+\right\| \mathbf{B} \|_{*}$.
\end{lemma}
\begin{theorem} \label{theo1}
Let $\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)}, \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)}, \cdots, \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}$ $(k = 1,2, \cdots,c)$ be matrices with the same column dimension, where $\mathbf{X}_{k}^{(i)}$ is a sub-matrix of $\mathbf{X}^{(i)}(i = 1,2, \cdots,V) $. If (a) $\forall i \in \{1,2,\cdots,V\}$, the vertical concatenation of $\mathbf{X}_{1}^{(i)} \mathbf{W}^{(i)}$ to $\mathbf{X}_{c}^{(i)} \mathbf{W}^{(i)}$ contains all rows of $\mathbf{X}^{(i)} \mathbf{W}^{(i)}$ and (b) $\forall k,h \in \{1,2,\cdots,c\},k\ne h$, at least one of the intersection between $\mathbf{X}_{k}^{(i)} \mathbf{W}^{(i)}$ and $\mathbf{X}_{h}^{(i)} \mathbf{W}^{(i)}$ is non-empty, then we have
\begin{equation}\label{Eq6}
\begin{aligned}
& \sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*} \\
& \geq\left\|[\mathbf{X}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}.
\end{aligned}
\end{equation}
\end{theorem}
At first glance, Theorem \ref{theo1} seems to be provable directly by extending Lemma \ref {lemma1}; however, the proof is in fact not that trivial. The detailed proof is shown below.
\renewcommand{\theequation}{P.\arabic{equation}}
\setcounter{equation}{0}
\begin{proof} Before proving Theorem \ref{theo1}, firstly, we present the following three propositions.
\begin{equation} \label{eqA1}
\begin{aligned}
& \left\|\mathbf{A}_{1}\right\|_{*}+\left\|\mathbf{A}_{2}\right\|_{*}+\cdots+\left\|\mathbf{A}_{n}\right\|_{*} \\
\geq & \left\|\left[\mathbf{A}_{1} ; \mathbf{A}_{2} ; \cdots ; \mathbf{A}_{n}\right]\right\|_{*}
\end{aligned}
\end{equation}
\begin{equation} \label{eqA2}
\begin{aligned}
& \left\| \left[ \mathbf{A}_{1} ; \mathbf{A}_{2} ; \mathbf{A}_{3} \right] \right\|_{*}=\left\| \left[\mathbf{A}_{1} ; \mathbf{A}_{3} ; \mathbf{A}_{2} \right]\right\|_{*} \\
= & \left\| \left[ \mathbf{A}_{2} ; \mathbf{A}_{1} ; \mathbf{A}_{3} \right] \right\|_{*}=\left\| \left[\mathbf{A}_{2} ; \mathbf{A}_{3} ; \mathbf{A}_{1}\right]\right\|_{*} \\
= & \left\| \left[ \mathbf{A}_{3} ; \mathbf{A}_{1} ; \mathbf{A}_{2} \right] \right\|_{*}=\left\| \left[\mathbf{A}_{3} ; \mathbf{A}_{2} ; \mathbf{A}_{1}\right]\right\|_{*}
\end{aligned}
\end{equation}
\begin{equation} \label{eqA3}
\| [\mathbf{A} ; \mathbf{B}] \|_{*} \geq \|\mathbf{A}\|_{*}.
\end{equation}
\eqref{eqA1} is a generalization of Lemma \ref{lemma1} and can be proved by the following derivation.
\begin{equation} \nonumber
\begin{aligned}
& \left\|\mathbf{A}_{1}\right\|_{*}+\left\|\mathbf{A}_{2}\right\|_{*}+\cdots+\left\|\mathbf{A}_{n}\right\|_{*} \\
= & \left\|\left[\mathbf{A}_{1} ; \mathbf{0} ; \cdots ; \mathbf{0}\right]\right\|_{*}+\left\|\left[\mathbf{0} ; \mathbf{A}_{2} ; \cdots ; \mathbf{0}\right]\right\|_{*}+\cdots+\left\|\left[\mathbf{0} ; \mathbf{0} ; \cdots ; \mathbf{A}_{n}\right]\right\|_{*} \\
& (\text {padding with zero rows does not change the singular values}) \\
\geq & \left\|\left[\mathbf{A}_{1} ; \mathbf{0} ; \cdots ; \mathbf{0}\right]+\left[\mathbf{0} ; \mathbf{A}_{2} ; \cdots ; \mathbf{0}\right]+\cdots+\left[\mathbf{0} ; \mathbf{0} ; \cdots ; \mathbf{A}_{n}\right]\right\|_{*} \ (\text {by the triangle inequality}) \\
= & \left\|\left[\mathbf{A}_{1} ; \mathbf{A}_{2} ; \cdots ; \mathbf{A}_{n}\right]\right\|_{*}
\end{aligned}
\end{equation}
\eqref{eqA2} is actually the commutative law of the trace norm with respect to the rows (columns). Without loss of generality, we only prove the case of three columns, and it can be directly extended to cases of any number (more than three) of columns.
\begin{equation} \nonumber
\begin{aligned}
& \left\|\mathbf{A}_{1} ; \mathbf{A}_{2} ; \mathbf{A}_{3}\right\|_{*}=t r \sqrt{\left[\mathbf{A}_{1}^{T}, \mathbf{A}_{2}^{T}, \mathbf{A}_{3}^{T}\right]\left[\mathbf{A}_{1} ; \mathbf{A}_{2} ; \mathbf{A}_{3}\right]} \\
= & tr \sqrt{\mathbf{A}_{1}^{T} \mathbf{A}_{1}+\mathbf{A}_{2}^{T} \mathbf{A}_{2}+\mathbf{A}_{3}^{T} \mathbf{A}_{3}} \\
= & tr \sqrt{\mathbf{A}_{1}^{T} \mathbf{A}_{1}+\mathbf{A}_{3}^{T} \mathbf{A}_{3}+\mathbf{A}_{2}^{T} \mathbf{A}_{2}} \\
= & tr \sqrt{\left[\mathbf{A}_{1}^{T}, \mathbf{A}_{3}^{T}, \mathbf{A}_{2}^{T}\right]\left[\mathbf{A}_{1} ; \mathbf{A}_{3} ; \mathbf{A}_{2}\right]}=\left\|\left[\mathbf{A}_{1} ; \mathbf{A}_{3} ; \mathbf{A}_{2}\right]\right\|_{*}
\end{aligned}
\end{equation}
The proof of remaining equations is similar.
\eqref{eqA3} can be easily proved by $\| [\mathbf{A} ; \mathbf{B}] \|_{*}=tr \sqrt{\mathbf{A}^{T} \mathbf{A}+\mathbf{B}^{T} \mathbf{B}} \geq tr \sqrt{\mathbf{A}^{T} \mathbf{A}}=\|\mathbf{A}\|_{*}$.
For simplicity of writing, let $\mathbf{X}_{k}^{(i)} \mathbf{W}^{(i)}=\mathbf{E}_{k}^{(i)}$ and $\mathbf{X}^{(i)} \mathbf{W}^{(i)}=\mathbf{E}^{(i)}$, where $i = 1,2, \cdots,V$ and $k = 1,2, \cdots,c$ .Then we have,
$\begin{aligned}
& \sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}\\
= &\sum_{k=1}^{c}\left\|[\mathbf{E}_{k}^{(1)} ; \mathbf{E}_{k}^{(2)} ; \cdots ; \mathbf{E}_{k}^{(V)}]\right\|_{*} \\
\geq &\left\|[\mathbf{E}_{1}^{(1)}; \cdots ; \mathbf{E}_{1}^{(V)} ; \mathbf{E}_{2}^{(1)}; \cdots ; \mathbf{E}_{2}^{(V)} ; \cdots ;\mathbf{E}_{c}^{(1)}; \cdots ; \mathbf{E}_{c}^{(V)}]\right\|_{*} \\
= &\left\|[\mathbf{E}_{1}^{(1)} ; \cdots ; \mathbf{E}_{c}^{(1)} ; \mathbf{E}_{1}^{(2)} ; \cdots ; \mathbf{E}_{c}^{(2)} ; \cdots ; \mathbf{E}_{1}^{(V)} ; \cdots ; \mathbf{E}_{c}^{(V)}]\right\|_{*} \\
\geq &\left\|[\mathbf{E}^{(1)} ; \mathbf{E}^{(2)} ; \cdots ; \mathbf{E}^{(V)}]\right\|_{*}\\
= &\left\|[\mathbf{X}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}.
\end{aligned}$
The first inequality holds by \eqref{eqA1}, and the second equality holds by \eqref{eqA2}. In the multi-label setting, the samples corresponding to all the individual labels contain those corresponding to the whole multiple labels, which implies that the condition (a) is satisfied. Condition (b) is likewise satisfied by the fact that a sample may have multiple labels. Thus, $[\mathbf{E}_{1}^{(i)} ; \cdots ; \mathbf{E}_{c}^{(i)}]$ can be rewritten as $[ \mathbf{E}_{1}^{(i)} ; \cdots ; \mathbf{E}_{c}^{(i)} ] = [\mathbf{E}^{(i)}; \mathbf{E}_{in}^{(i)}]$, where $\mathbf{E}_{in}^{(i)}$ is the matrix consisting of the intersections of $\mathbf{E}_{k}^{(i)}$ w.r.t. $k$. Then the last inequality holds by \eqref{eqA3}. At this point, the proof is complete.
\end{proof}
\noindent {\textbf {Remark 2.}}
A similar method termed as DM2L has been proposed in \cite{r46}. Major differences between this work and DM2L are as follows:
(1) This work generalizes DM2L to the multi-view setting and mainly focuses on the novel and unique challenge posed by non-aligned multiple views, which widely exist in reality while often being neglected. In brief, DM2L can be regarded as a special case of this work.
(2) Albeit the model of DM2L seems similar to ours, the problem addressed in this paper is much more challenging. More importantly, the motivations of these two are different. DM2L claims that the multi-label matrix is low-rank and the negative trace norm term in their model aims at making the DM2L more discriminative, whereas we argue that the multi-label matrix is NOT necessarily low-rank, conversely, high-rank as analyzed before. And it is the latter that we utilize the negative trace norm to directly characterize this property, making our motivations more intuitive and interpretable.
(3) We tailor an efficient optimization method to solve the proposed model. Specifically, we derive an ADMM algorithm with a closed form solution of each sub-problem. The linear computational complexity with respect to the number of samples allows our model to handle large-scale data. However, high efficiency, closed form solution, and the ability to handle large-scale data cannot be guaranteed in DM2L by using traditional convex optimization method. Besides, there are also other differences between NAIM$^3$L and DM2L, for example, we make a thorough computational complexity analysis; the non-negativity of the regularization term needed to be proved is more difficult than that of DM2L and we provide a more concise proof (see details above); we design a more efficient algorithm for computing the sub-gradient of the trace norm.
\section{Optimization} \label{secIV}
\subsection{ADMM Algorithm}
\renewcommand{\theequation}{\arabic{equation}}
\setcounter{equation}{8}
For the convenience of optimization, we first introduce some notations to simplify the formulas. Let
$
\mathbf{P}=\left[\begin{array}{c}
\mathbf{P}^{(1)} \\
\mathbf{P}^{(2)} \\
\vdots \\
\mathbf{P}^{(V)}
\end{array}\right]
$,
$
\mathbf{W}=\left[\begin{array}{c}
\mathbf{W}^{(1)} \\
\mathbf{W}^{(2)} \\
\vdots \\
\mathbf{W}^{(V)}
\end{array}\right]
$,
$
\mathbf{X}=\left[\begin{array}{cccc}
\mathbf{X}^{(1)} & \bf 0 &\cdots& \bf 0 \\
\bf 0 & \mathbf{X}^{(2)} &\cdots& \bf 0 \\
\vdots & \vdots & \ddots &\vdots\\
\bf 0 & \bf 0 &\cdots& \mathbf{X}^{(V)}
\end{array}\right]
$,
$
\mathbf{Y}=\left[\begin{array}{c}
\mathbf{Y}^{(1)} \\
\mathbf{Y}^{(2)} \\
\vdots \\
\mathbf{Y}^{(V)}
\end{array}\right]
$,
and
$
\mathbf{X}_k=\left[\begin{array}{cccc}
\mathbf{X}^{(1)}_k & \bf 0 & \cdots & \bf 0 \\
\bf 0 & \mathbf{X}^{(2)}_k & \cdots & \bf 0 \\
\vdots & \vdots & \ddots & \vdots \\
\bf 0 & \bf 0 & \cdots & \mathbf{X}^{(V)}_k
\end{array}\right]
$,
then Eq. \eqref{Eq5} can be simplified as:
\begin{equation}\label{Eq7}
\begin{aligned}
\min _{\mathbf{w}} &\frac{1}{2}\|\mathbf{P} \odot(\mathbf{X} \mathbf{W}-\mathbf{Y})\|_{F}^{2} \\
&+\lambda\left(\sum_{k=1}^{c}\left\|\mathbf{X}_{k} \mathbf{W}\right\|_{*}-\|\mathbf{X} \mathbf{W}\|_{*}\right),
\end{aligned}
\end{equation}
Eq. \eqref{Eq7} is a DC (Difference of Convex functions) programming, and it can be solved by the ConCave-Convex Procedure (CCCP). Let $ f = J_{cvx}+J_{cav } $ ,
\begin{equation}\label{Eq8}
J_{cvx}=\frac{1}{2}\|\mathbf{P} \odot(\mathbf{X} \mathbf{W}-\mathbf{Y})\|_{F}^{2}+\lambda \sum_{k=1}^{c}\left\|\mathbf{X}_{k} \mathbf{W}\right\|_{*},
\end{equation}
\begin{equation}\label{Eq9}
J_{cav}=-\lambda\|\mathbf{X} \mathbf{W}\|_{*},
\end{equation}
where $J_{cvx}$ is a convex function and $J_{cav}$ is a concave function.
Then by CCCP we have,
\begin{equation}\label{Eq10}
\partial J_{cvx}\left(\mathbf{W}_{t}\right)+\partial J_{cav}\left(\mathbf{W}_{t-1}\right)=0,
\end{equation}
where $\partial J_{cvx}\left(\mathbf{W}_{t}\right)$ is the sub-gradient of $J_{cvx}\left(\mathbf{W}_{t}\right)$ and $\mathbf{W}_{t}$ is the matrix of the $t$-th iteration. Afterwards, a surrogate objective function $J$ that satisfies Eq. \eqref{Eq10} can be derived,
\begin{equation}\label{Eq11}
\begin{aligned}
\min _{\mathbf{w}_{t}} J\left(\mathbf{W}_{t}\right)=& \min _{\mathbf{w}_{t}} \frac{1}{2}\left\|\mathbf{P} \odot\left(\mathbf{X} \mathbf{W}_{t}-\mathbf{Y}\right)\right\|_{F}^{2}+\lambda \sum_{k=1}^{c}\left\|\mathbf{X}_{k} \mathbf{W}_{t}\right\|_{*} \\
&-\lambda t r\left[\mathbf{W}_{t}^{T}\left(\partial\left\|\mathbf{X} \mathbf{W}_{t-1}\right\|_{*}\right)\right],
\end{aligned}
\end{equation}
Eq. \eqref{Eq11} is a convex function w.r.t. $\mathbf{W}_{t}$ and can be solved by off-the-shelf convex optimization toolkit.
However, traditional convex optimization methods often need to search the directions of the gradient, which makes it slow to obtain the optimum. Thus, we tailor an efficient ADMM algorithm and derive the closed form solution of each sub-problem. Specifically, let $\mathbf{Z}_{k}=\mathbf{X}_{k} \mathbf{W}_{t}$, we have the following augmented Lagrangian function,
\begin{equation}\label{Eq12}
\begin{aligned}
\Phi=& \frac{1}{2}\left\|\mathbf{P} \odot\left(\mathbf{X} \mathbf{W}_{t}-\mathbf{Y}\right)\right\|_{F}^{2}+\lambda \sum_{k=1}^{c}\left\|\mathbf{Z}_{k} \right\|_{*} \\
&-\lambda tr \left[\left(\mathbf{X} \mathbf{W}_{t}\right)^{T} \partial\left(\left\|\mathbf{X} \mathbf{W}_{t-1}\right\|_{*}\right)\right] \\
&+\sum_{k=1}^{c} tr \left[\mathbf{\Lambda}_{k}^{T}\left(\mathbf{X}_{k} \mathbf{W}_{t}-\mathbf{Z}_{k}\right)\right]+\frac{\mu}{2} \sum_{k=1}^{c}\left\|\mathbf{Z}_{k}-\mathbf{X}_{k} \mathbf{W}_{t}\right\|_{F}^{2}
\end{aligned},
\end{equation}
where $\mathbf{\Lambda}_{k}$ is the Lagrangian multiplier and $\mu$ is the penalty factor. Note that, $\mu$ is NOT a model hyper-parameter BUT a parameter of the ADMM algorithm that does not need to be adjusted, and it is introduced for the convenience of optimization. Specifically, in Eq. \eqref{Eq12}, with the equipment of the last term, each sub-problem of the ADMM algorithm becomes strongly convex, which guarantees a fast convergence. In the experiments, we will validate that the performance of our model remains unchanged under different $\mu$.
\subsubsection{Sub-problem of $\ \mathbf{W}_{t}$}
With $\mathbf{Z}_{k}$ and $\mathbf{\Lambda}_{k}$ fixed, $\mathbf{W}_{t}$ can be updated by
\begin{equation}\label{Eq13}
\begin{aligned}
\mathbf{W}_{t}&=(\mu \sum_{k=1}^{c} \mathbf{X}_{k}^{T} \mathbf{X}_{k})^{-1}\{\lambda \mathbf{X}^{T} \partial(\|\mathbf{X} \mathbf{W}_{t-1}\|_{*}) \\
&+\sum_{k=1}^{c}[\mathbf{X}_{k}^{T}(\mu \mathbf{Z}_{k}-\mathbf{\Lambda}_{k})]-\mathbf{X}^{T}[\mathbf{P} \odot(\mathbf{X} \mathbf{W}_{t-1}-\mathbf{Y})]\}
\end{aligned}.
\end{equation}
\subsubsection{Sub-problem of $\ \mathbf{Z}_{k}$}
With $\mathbf{W}_{t}$ and $\mathbf{\Lambda}_{k}$ fixed, the objective function of $\mathbf{Z}_{k}$ can be written as:
\begin{equation}\label{Eq14}
\Phi_{\mathbf{Z}_{k}}=\frac{\lambda}{\mu}\left\|\mathbf{Z}_{k}\right\|_{*}+\frac{1}{2}\left\|\mathbf{Z}_{k}-\left(\mathbf{X}_{k} \mathbf{W}_{t}+\frac{\mathbf{\Lambda}_{k}}{\mu}\right)\right\|_{F}^{2}.
\end{equation}
The above problem can be solved by the singular value thresholding algorithm \cite {r47}, and the update rule of $\mathbf{Z}_{k}$ is,
\begin{equation}\label{Eq15}
\begin{aligned}
\mathbf{Z}_{k} &=\Gamma\left(\mathbf{X}_{k} \mathbf{W}_{t}+\frac{\boldsymbol{\Lambda}_{k}}{\mu}\right) \\
&=\mathbf{U} \operatorname{Diag}\left[\left(\sigma_{i}-\frac{\lambda}{\mu}\right)_{+} \right]\mathbf{V}^{T},
\end{aligned}
\end{equation}
where $\Gamma$ is a singular value threshold operator, $\mathbf{U}$ and $\mathbf{V}$ are the matrices of left and right singular vectors of $\mathbf{X}_{k} \mathbf{W}_{t}+\frac{\mathbf\Lambda_{k}}{\mu}$, $\operatorname{Diag}$ stands for the diagonal matrix, $\sigma_{i}$ is the $i$-th largest singular value and $a_{+}=\max (0, a)$.
\subsubsection{Sub-problem of $\ \mathbf{\Lambda}_{k}$}
With $\mathbf{Z}_{k}$ and $\mathbf{W}_{t}$ fixed, $\mathbf{\Lambda}_{k}$ can be updated by
\begin{equation}\label{Eq16}
\boldsymbol{\Lambda}_{k} \leftarrow \boldsymbol{\Lambda}_{k}+\mu\left(\mathbf{X}_{k} \mathbf{W}_{t}-\mathbf{Z}_{k}\right).
\end{equation}
The entire optimization procedure is summarized in the Algorithm 1.
\begin{algorithm} \label{Alg1}
\caption{ADMM Algorithm for NAIM$^3$L}
\textbf{Input}: Feature matrix $\mathbf{X}$, observed label matrix $\mathbf{Y}$, indicator matrix $\mathbf{P}$\\
\textbf{Initialization}: Randomly initialize $\mathbf{W}_0$, $\mathbf{Z}_k = \bf 0$, and $\mathbf{\Lambda}_{k} = \bf 0$ $(k =1, 2, \cdots, c), \mu = 5. $\\
\textbf{Output}: $\mathbf{W}$
\begin{algorithmic}[1]
\STATE Let $t=0$.
\WHILE{not converge}
\STATE $t = t + 1$.
\STATE Update $\mathbf{W}_t$ by Eq. \eqref{Eq13}.
\STATE Update $\mathbf{Z}_k$ by Eq. \eqref{Eq15}.
\STATE Update $\mathbf{\Lambda}_{k}$ by Eq. \eqref{Eq16}.
\ENDWHILE
\STATE \textbf{return} $\mathbf{W}$
\end{algorithmic}
\end{algorithm}
\subsection{An Efficient Algorithm for Computing $\partial\|\bullet \|_{*}$}
Let $ \mathbf {A} \in \mathbb{R}^{n \times c}$ be an arbitrary matrix and $\mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T}$ be its singular value decomposition (SVD), where $\mathbf {U}$ and $\mathbf {V}$ are the matrices of left and right singular vectors, respectively. It is well known \cite {r47,r48} that the sub-gradient of the trace norm can be computed by
$\partial\|\mathbf{A}\|_{*}=\left\{\mathbf{U} \mathbf{V}^{T}+\mathbf{Q}| \quad \mathbf{Q} \in \mathbb{R}^{n \times c}, \mathbf{U}^{T} \mathbf{Q}=\mathbf{0},\mathbf{Q} \mathbf{V}=\mathbf{0}, \|\mathbf{Q}\|_{2} \leq 1\right\}$, where $\|\bullet \|_{2}$ is the spectral norm.
For simplicity, let $\mathbf{Q}= \mathbf{0}$ , then $\partial\|\mathbf{A}\|_{*}$ can be computed by $\partial\|\mathbf{A}\|_{*}=\mathbf{U} \mathbf{V}^{T}$.
In the following, we will derive a theorem and then design an efficient algorithm to compute $\partial\|\mathbf{A}\|_{*}$ based on the theorem.
\begin{theorem} \label{theo2}
Let $\mathbf {A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T}$ be the SVD of matrix $\mathbf{A}$, then $\mathbf{A}^{T} \mathbf{A}=\mathbf{V} \mathbf{\Sigma}^2 \mathbf{V}^{T} = \mathbf{V} \mathbf{S} \mathbf{V}^{T}$ is the eigenvalue decomposition of the matrix $\mathbf{A}^{T} \mathbf{A}$, and $\partial\|\mathbf{A}\|_{*}=\mathbf{U} \mathbf{V}^{T} = \mathbf{A} \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T} $.
\end{theorem}
\begin{proof}
$\mathbf{A}=\mathbf{U} \Sigma \mathbf{V}^{T}$, then $\mathbf{A}^{T} \mathbf{A}=\mathbf{V} \mathbf{\Sigma} \mathbf{U}^{T} \mathbf{U} \Sigma \mathbf{V}^{T} =\mathbf{V} \mathbf{\Sigma}^2 \mathbf{V}^{T}$.
Let $\mathbf{S} = \mathbf{\Sigma}^2$, then $\mathbf{A}^{T} \mathbf{A}=\mathbf{V} \mathbf{S} \mathbf{V}^{T}$ is the eigenvalue decomposition of the matrix $\mathbf{A}^{T} \mathbf{A}$.
Rewrite $\mathbf{A}^{T} \mathbf{A} =\mathbf{V} \mathbf{\Sigma}^2 \mathbf{V}^{T} =\mathbf{V} \mathbf{\Sigma} \mathbf{V}^{T} \mathbf{V} \mathbf{\Sigma} \mathbf{V}^{T}$,
then $\left(\mathbf{A}^{T} \mathbf{A}\right)^{-\frac{1}{2}}=\left(\mathbf{V} \mathbf{\Sigma} \mathbf{V}^{T}\right)^{-1}=\mathbf{V} \mathbf{\Sigma}^{-1} \mathbf{V}^{T} = \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T}$ and $ \mathbf{A}\left(\mathbf{A}^{T} \mathbf{A}\right)^{-\frac{1}{2}}=\mathbf{U} \Sigma \mathbf{V}^{T} \mathbf{V} \mathbf{\Sigma}^{-1} \mathbf{V}^{T} =\mathbf{U} \mathbf{V}^{T}$.
Finally, $\partial\|\mathbf{A}\|_{*}=\mathbf{U} \mathbf{V}^{T} = \mathbf{A} \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T} $.
\end{proof}
According to Theorem \ref{theo2}, we can now design an efficient algorithm for computing $\partial\|\mathbf{A} \|_{*}$ by transforming the SVD of an $n \times c$ matrix into the eigenvalue decomposition of a $c \times c$ matrix.
If the full SVD is adopted to compute $\partial\|\mathbf{A} \|_{*}$, the whole time complexity is $\mathcal{O}\left(n c^{2} + c n^{2} \right) + \mathcal{O}\left( mrc\right)$, where $r$ is the rank of matrix $\mathbf{A}$. However, the time complexity of Algorithm 2 is $\mathcal{O}\left(n c^{2} \right) + \mathcal{O}\left( c^{3} \right)+ \mathcal{O}\left( n c^{2} \right)$. Generally, in the multi-label setting, the number of samples is much larger than that of multiple labels, i.e., $n\gg c$, so the time complexity of adopting full SVD is $\mathcal{O}\left( c n^{2} \right)$ whereas the time complexity of Algorithm 2 is $\mathcal{O}\left( n c^{2} \right)$, which is much more efficient. Algorithm 2 can be more efficient if a more sophisticated algorithm is elaborately customized. However, it is not the main focus of this work but just a byproduct, which itself also has an interest of independence to great extent. We argue that Algorithm 2 can be implemented quite easily with a few lines of codes in Matlab. More importantly, it is effective enough for us to handle large-scale datasets by the fact that the computation complexity with respect to the number of samples reduces from quadratic to linear. Regarding the ''efficient algorithm'' mentioned here, what we aim to emphasize is that Algorithm 2 can handle large-scale datasets, instead of that it can defeat the state-of-the-arts SVD algorithms. Experiments in section \ref{5.9} will validate the efficiency of Algorithm 2.
\begin{algorithm} [t] \label{Alg2}
\caption{An Efficient Algorithm for Computing $\partial\|\bullet \|_{*}$}
\textbf{Input}: A matrix $ \mathbf {A} \in \mathbb{R}^{n \times c}$, \\
\textbf{Output}: $\partial\|\mathbf {A}||_{*}$.
\begin{algorithmic}[1]
\IF {$ n \ge c $}
\STATE $\mathbf{B}=\mathbf{A}^{T} \mathbf{A}$.
\ELSE
\STATE $\mathbf{B}=\mathbf{A} \mathbf{A}^{T}$.
\ENDIF
\STATE Eigenvalue decomposition of $\mathbf{B}$, $\mathbf{B}= \mathbf{V} \mathbf{S} \mathbf{V}^{T}$.
\STATE $\partial\|\mathbf {A}||_{*}= \mathbf{A} \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T}$.
\STATE \textbf{return} $\partial\|\mathbf {A}||_{*}$
\end{algorithmic}
\end{algorithm}
\subsection{Convergence and Complexity Analysis} \label{IV-C}
\emph{\textbf{Convergence Analysis}}. Before giving the convergence analysis of Algorithm 1, we introduce the following lemma.
\begin{lemma} \label{lemma2} {\rm \cite{r49}}
Consider an energy function $J(x)$ of form $J(x) = J_{cvx}(x) + J_{cav}(x)$, where $J_{cvx}(x)$, $J_{cav}(x)$ are convex and concave functions of $x$, respectively. Then the discrete iterative CCCP algorithm $ x^{t}\longmapsto x^{t+1}$ given by
\begin{equation} \nonumber
\nabla J_{cvx}(x^{t+1}) = - \nabla J_{cav}(x^{t})
\end{equation}
is guaranteed to monotonically decrease the energy $J(x)$ as a function of time and
hence to converge to a minimum or saddle point of $J(x)$.
\end{lemma}
According to Lemma \ref{lemma2} and Eq. \eqref{Eq10}, we can derive that $J_{cvx}\left(\mathbf{W}_{t}\right)+J_{cav}\left(\mathbf{W}_{t}\right) \leq J_{cvx}\left(\mathbf{W}_{t-1}\right)+J_{cav}\left(\mathbf{W}_{t-1}\right)$, which means the objective function $f$ is guaranteed to monotonically decrease. Moreover, according to Theorem \ref{theo1}, it is easy to validate that the objective function $f$ is lower bounded by 0. Thus, $f$ is guaranteed to converge by the above two facts. Besides, in the ADMM algorithm, the surrogate objective function $J$ is strongly convex, which guarantees a global optimum of each sub-problem. Therefore, Algorithm 1 is guaranteed to converge.
\emph{\textbf {Complexity Analysis}}. The time complexity of NAIM$^3$L is dominated by matrix multiplication and inverse operations. In each iteration, the complexity of updating $\mathbf{W}_t$ in Eq. \eqref{Eq13} is $\mathcal{O}\left[V(n d_{\max }^{2} c+d_{\max }^{3}+n d_{\max } c+n c^{2}+n d_{\max } c^{2}\right)]$ and the complexity of updating $\mathbf{Z}_k$ in Eq. \eqref{Eq15} is $\mathcal{O}\left[V(n c^{3} + n d_{\max } c^{2}\right)]$. The update of $\mathbf{\Lambda}_{k}$ in Eq. \eqref{Eq16} costs $\mathcal{O}\left(Vn d_{\mathrm{max}} c^{2}\right)$. Generally, $n>d_{\max }$, $n \gg c$ and $d_{\max } > c$, so the total complexity of NAIM$^3$L is $\mathcal{O}\left(t Vn d_{\max }^{2} c\right)$, where $t$ is number of iterations, $V$ is the number of views, $n$ is the number of samples, $d_{\max }$ is the maximum dimension of the features and $c$ is the number of multiple labels. Therefore, NAIM$^3$L has linear computational complexity with respect to the number of samples, which enables it more efficiently to handle large scale data.
\section{Experiments} \label{secV}
\subsection{Experimental Settings}
\textbf {Datasets:} Five real datasets Corel5k, Espgame, IAPRTC12, Mirflickr, and Pascal07 are used in our experiments. They are available at website\footnote{\url{ http://lear.inrialpes.fr/people/guillaumin/data.php}} \cite{r50}. For fairness, we use exactly the same settings as in iMVWL \cite {r28}. Specifically, each dataset involves six views: HUE, SIFT, GIST, HSV, RGB, and LAB. For each dataset, we randomly sample 70\% of the data for training and use the remaining 30\% data for testing (unlabeled data). Furthermore, in the incomplete multi-view setting, we randomly remove $\alpha$ samples in each view while ensuring each sample appears in at least one view; in the missing labels setting, for each label, we randomly remove $\beta$ positive and negative tags of the training samples; in the non-aligned multi-view setting, samples of all views are randomly arranged and totally unpaired as defined in subsection \ref{III-A}. During the process of training and learning, the alignment information of the samples (orders of the samples) is completely unknown to us. The statistics of the datasets are summarized in Table \ref{tab1}. From this table, we can find that the label matrices are of full column rank or high rank.
\begin{table}[h]
\centering
\caption{Statistics of the Datasets. {\upshape n} and {\upshape c} are the Numbers of Samples and Multiple Labels in Each Dataset, and \#{\upshape avg} is the Average Number of Relevant Labels in Each Sample. {\upshape train\_rank} and {\upshape test\_rank} are the Ranks of the Label Matrices of Training Set and Testing Set.}
\label{tab1}
\begin{tabular}{@{}cccccc@{}}
\toprule
datasets & n & c & \#avg & train\_rank & test\_rank \\ \midrule
Corel5k & 4999 & 260 & 3.396 & 259 & 249 \\
Espgame & 20770 & 268 & 4.686 & 268 & 268 \\
IAPRTC12 & 19627 & 291 & 5.719 & 291 & 291 \\
Mirflickr & 25000 & 38 & 4.716 & 38 & 38 \\
Pascal07 & 9963 & 20 & 1.465 & 20 & 20 \\ \bottomrule
\end{tabular}
\end{table}
\textbf {Compared methods:} In the experiments, NAIM$^3$L is compared with five state-of-the-art methods: iMSF \cite {r51}, LabelMe \cite {r39}, MVL-IV \cite {r31}, lrMMC \cite {r40}, and iMVWL \footnote{We sincerely thank the authors of iMVWL for providing the codes, however, their codes did not fix the random seeds, thus results in their original paper cannot be reproduced. We run their codes by using the optimal hyper-parameters suggested in their paper and fix the same random seeds as in our codes. Note that, half of the results of iMVWL are better than those reported in their original paper.} \cite {r28}. iMSF is a multi-class learning method, and we extend it for multi-label classification by training multiple classifiers (one for each label). IMVL-IV \cite {r29} is an incomplete multi-view and missing multi-label learning method, however, it contains ten hyper-parameters in its model, making it very difficult to tune for the optimum. For fairness, we omit it. Besides, there are also some deep neural network (DNN) based methods concerning the complete multi-view multi-label learning \cite{r3, r13}. In this work, we mainly focus on the novel problem of the non-aligned views with missing multiple labels and provide a simple yet effective solution. Considering that our model can directly cooperate with DNN, comparisons with these methods will be conducted in our future work. It is worth noting that all the compared methods cannot deal with non-aligned views, thus in the experiments, they are all implemented only with the missing labels and incomplete multi-view settings, whereas our NAIM$^3$L is conducted in the settings with all the three challenges. Optimal parameters for the competitive methods are selected as suggested in the corresponding papers. All experiments are repeated ten times, and both the mean and standard deviation are reported.
\textbf {Evaluation metrics:} Similar to iMVWL, four widely used multi-label evaluation metrics are adopted for performance evaluations, i.e., Ranking Loss (RL), Average Precision (AP), Hamming Loss (HL), and adapted Area Under ROC Curve (AUC). A formal definition of the first three metrics can be found in \cite {r9}. The adapted AUC is suggested in \cite {r52}. For consistency, in our experiments, we report 1-RL and 1-HL instead of RL and HL. Thus, the larger the values of all four metrics are, the better the performance is.
\subsection{Main Experimental Results} \label{5.2}
\subsubsection{Experiments under Incomplete Views, Missing Labels (and non-Aligned Views)}
In this subsection, we show the results under the settings of incomplete views, missing labels, and non-aligned views. To the best of our knowledge, existing methods cannot work under the non-aligned views setting. For this reason, the compared methods are all implemented only with the incomplete views and missing labels settings. However, our NAIM$^3$L is conducted in all three settings, which means that in the experiments, information available for NAIM$^3$L is much less than other methods.
Table \ref{tab2} shows the results compared with other methods under the setting of 50\% incomplete views and 50\% missing positive and negative labels. From this table, we can see that, even though without view alignment information, NAIM$^3$L still outperforms all the compared methods with view alignment information on five datasets. Note that, in these experiments, other methods are under the aligned-views setting, whereas ours are not. Nonetheless, with less available information and without view completion, NAIM$^3$L still achieves better performance. This can be attributed to the joint consideration of the local low-rank and global high-rank structures within multiple labels while the latter is almost completely neglected in other methods. In subsection \ref{V-D}, we will conduct extensive experiments to validate the importance of the global high-rank structure of multiple labels.
\begin{table*}[h]
\centering
\caption{Results on all Five Datasets with the Ratio of Incomplete Multi-view $\alpha = 50$\% and the Ratio of Missing Multi-label $\beta = 50$\%. Values in Parentheses Represent the Standard Deviation, and all the Values are Displayed as Percentages.}
\label{tab2}
\begin{tabular}{@{}cccccccc@{}}
\toprule
dataset & metrics & lrMMC & MVL-IV & LabelMe & iMSF & iMVWL & NAIM$^3$L \\ \midrule
\multirow{4}{*}{Corel5k} & 1-HL(\%) & 95.40(0.00) & 95.40(0.00) & 94.60(0.00) & 94.30(0.00) & 97.84(0.02) & \textbf{98.70}(0.01) \\
& 1-RL(\%) & 76.20(0.20) & 75.60(0.10) & 63.80(0.30) & 70.90(0.50) & 86.50(0.33) & \textbf{87.84}(0.21) \\
& AP(\%) & 24.00(0.20) & 24.00(0.10) & 20.40(0.20) & 18.90(0.20) & 28.31(0.72) & \textbf{30.88}(0.35) \\
& AUC(\%) & 76.30(0.20) & 76.20(0.10) & 71.50(0.10) & 66.30(0.50) & 86.82(0.32) & \textbf{88.13}(0.20) \\ \midrule
\multirow{4}{*}{Pascal07} & 1-HL(\%) & 88.20(0.00) & 88.30(0.00) & 83.70(0.00) & 83.60(0.00) & 88.23(0.38) & \textbf{92.84}(0.05) \\
& 1-RL(\%) & 69.80(0.30) & 70.20(0.10) & 64.30(0.40) & 56.80(0.00) & 73.66(0.93) & \textbf{78.30}(0.12) \\
& AP(\%) & 42.50(0.30) & 43.30(0.20) & 35.80(0.30) & 32.50(0.00) & 44.08(1.74) & \textbf{48.78}(0.32) \\
& AUC(\%) & 72.80(0.20) & 73.00(0.10) & 68.60(0.50) & 62.00(0.10) & 76.72(1.20) & \textbf{81.09}(0.12) \\ \midrule
\multirow{4}{*}{ESPGame} & 1-HL(\%) & 97.00(0.00) & 97.00(0.00) & 96.70(0.00) & 96.40(0.00) & 97.19(0.01) & \textbf{98.26}(0.01) \\
& 1-RL(\%) & 77.70(0.10) & 77.80(0.00) & 68.30(0.20) & 72.20(0.20) & 80.72(0.14) & \textbf{81.81}(0.16) \\
& AP(\%) & 18.80(0.00) & 18.90(0.00) & 13.20(0.00) & 10.80(0.00) & 24.19(0.34) & \textbf{24.57}(0.17) \\
& AUC(\%) & 78.30(0.10) & 78.40(0.00) & 73.40(0.10) & 67.40(0.30) & 81.29(0.15) & \textbf{82.36}(0.16) \\ \midrule
\multirow{4}{*}{IAPRTC12} & 1-HL(\%) & 96.70(0.00) & 96.70(0.00) & 96.30(0.00) & 96.00(0.00) & 96.85(0.02) & \textbf{98.05}(0.01) \\
& 1-RL(\%) & 80.10(0.00) & 79.90(0.10) & 72.50(0.00) & 63.10(0.00) & 83.30(0.27) & \textbf{84.78}(0.11) \\
& AP(\%) & 19.70(0.00) & 19.80(0.00) & 14.10(0.00) & 10.10(0.00) & 23.54(0.39) & \textbf{26.10}(0.13) \\
& AUC(\%) & 80.50(0.00) & 80.40(0.10) & 74.60(0.00) & 66.50(0.10) & 83.55(0.22) & \textbf{84.96}(0.11) \\ \midrule
\multirow{4}{*}{Mirflickr} & 1-HL(\%) & 83.90(0.00) & 83.90(0.00) & 77.80(0.00) & 77.50(0.00) & 83.98(0.28) & \textbf{88.15}(0.07) \\
& 1-RL(\%) & 80.20(0.10) & 80.80(0.00) & 77.10(0.10) & 64.10(0.00) & 80.60(1.11) & \textbf{84.40}(0.09) \\
& AP(\%) & 44.10(0.10) & 44.90(0.00) & 37.50(0.00) & 32.30(0.00) & 49.48(1.24) & \textbf{55.08}(0.18) \\
& AUC(\%) & 80.60(0.10) & 80.70(0.00) & 76.10(0.00) & 71.50(0.10) & 79.44(1.46) & \textbf{83.71}(0.06) \\
\bottomrule
\end{tabular}
\end{table*}
\subsubsection{Experiments under Incomplete Views}
To further validate the effectiveness of our NAIM$^3$L, we conduct experiments under the settings of incomplete views, full labels, and aligned views. iMVWL \cite {r28} is a method that can deal with both the incomplete views and missing labels. In this subsection, the settings are 50\% incomplete views, full labels, and aligned views. iMVWL-V and NAIM$^3$L-V are the corresponding methods dealing with incomplete views. As our method is customized for the non-aligned views, NAIM$^3$L-V is still conducted under the non-aligned views setting. From Table \ref{tab3}, we can see that NAIM$^3$L-V outperforms iMVWL-V. Besides, a weird phenomenon is found when comparing Tables \ref{tab2} and \ref{tab3}, i.e., the performance of iMVWL under the full labels setting is worse than that under the missing labels setting on the Pascal07, IAPRTC12, and Mirflickr datasets (the worse performance is denoted by down-arrows in Table \ref{tab3}). The reason for this counter-intuitive phenomenon will be explained and analyzed in subsection \ref {5.2.4}.
\begin{table}[h]
\centering
\caption{Results on all Five Datasets with Full Labels and the Ratio of Incomplete Multi-view $\alpha = 50$\%. Values in Parentheses Represent the Standard Deviation, and all the Values are Displayed as Percentages. Down-Arrows Denote that Performance of iMVWL under Full Label Setting is Worse than that under 50\% Missing Label Setting.} \label{tab3}
\begin{tabular}{cclc}
\toprule
datasets & metrics & iMVWL-V & NAIM$^3$L-V \\ \midrule
\multirow{4}{*}{Corel5k} & 1-HL(\%) & 97.85(0.03) & \textbf{98.70}(0.01) \\
& 1-RL(\%) & 87.00(0.27) & \textbf{88.52}(0.25) \\
& AP(\%) & 28.90(0.89) & \textbf{31.72}(0.32) \\
& AUC(\%) & 87.30(0.26) & \textbf{88.81}(0.23) \\ \midrule
\multirow{4}{*}{Pascal07} & 1-HL(\%) & 88.19(0.28)$\downarrow$ & \textbf{92.87}(0.05) \\
& 1-RL(\%) & 73.74(0.53) & \textbf{79.07}(0.08) \\
& AP(\%) & 43.54(0.87)$\downarrow$ & \textbf{49.25}(0.24) \\
& AUC(\%) & 76.80(0.59) & \textbf{81.81}(0.10) \\ \midrule
\multirow{4}{*}{ESPGame} & 1-HL(\%) & 97.19(0.01) & \textbf{98.27}(0.01) \\
& 1-RL(\%) & 80.96(0.15) & \textbf{82.22}(0.15) \\
& AP(\%) & 24.46(0.40) & \textbf{24.89}(0.17) \\
& AUC(\%) & 81.55(0.19) & \textbf{82.77}(0.16) \\ \midrule
\multirow{4}{*}{IAPRTC12} & 1-HL(\%) & 96.84(0.02)$\downarrow$ & \textbf{98.05}(0.01) \\
& 1-RL(\%) & 83.23(0.26)$\downarrow$ & \textbf{85.14}(0.09) \\
& AP(\%) & 23.40(0.46)$\downarrow$ & \textbf{26.45}(0.14) \\
& AUC(\%) & 83.50(0.22)$\downarrow$ & \textbf{85.29}(0.10) \\ \midrule
\multirow{4}{*}{Mirflickr} & 1-HL(\%) & 83.89(0.20)$\downarrow$ & \textbf{88.17}(0.07) \\
& 1-RL(\%) & 80.59(0.72)$\downarrow$ & \textbf{84.55}(0.08) \\
& AP(\%) & 48.95(0.97)$\downarrow$ & \textbf{55.30}(0.16) \\
& AUC(\%) & 79.69(1.15) & \textbf{83.84}(0.05) \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Experiments under Missing Labels}
Similarly, experimental settings in this subsection are 50\% of missing labels, complete views, and aligned views. iMVWL-L and NAIM$^3$L-L are the corresponding methods dealing with missing labels. Still, NAIM$^3$L-L is conducted under the non-aligned views. Not surprisingly, NAIM$^3$L-L once again outperforms iMVWL-L. When comparing Tables \ref{tab2} and \ref{tab4}, the weird phenomenon that appeared in Table \ref{tab3} disappeared, and we will analyze these results in the next subsection.
\begin{table}[h]
\centering
\caption{Results on all Five Datasets with Complete views and the Ratio of Missing Multi-Label $\beta = 50$\%. Values in Parentheses Represent the Standard Deviation, and all the Values are Displayed as Percentages.}\label{tab4}
\begin{tabular}{cccc}
\toprule
datasets & metrics & iMVWL-L & NAIM$^3$L-L \\ \midrule
\multirow{4}{*}{Corel5k} & 1-HL(\%) & 97.91(0.01) & \textbf{98.70}(0.01) \\
& 1-RL(\%) & 88.00(0.31) & \textbf{88.55}(0.28) \\
& AP(\%) & 30.77(0.58) & \textbf{31.93}(0.39) \\
& AUC(\%) & 88.32(0.32) & \textbf{88.84}(0.26) \\ \midrule
\multirow{4}{*}{Pascal07} & 1-HL(\%) & 88.79(0.12) & \textbf{92.87}(0.06) \\
& 1-RL(\%) & 76.30(0.59) & \textbf{79.04}(0.15) \\
& AP(\%) & 46.95(0.55) & \textbf{49.30}(0.31) \\
& AUC(\%) & 79.24(0.61) & \textbf{81.79}(0.13) \\ \midrule
\multirow{4}{*}{ESPGame} & 1-HL(\%) & 97.23(0.01) & \textbf{98.27}(0.01) \\
& 1-RL(\%) & 81.54(0.22) & \textbf{82.20}(0.14) \\
& AP(\%) & 25.80(0.34) & \textbf{24.90}(0.17) \\
& AUC(\%) & 82.03(0.19) & \textbf{82.74}(0.14) \\ \midrule
\multirow{4}{*}{IAPRTC12} & 1-HL(\%) & 96.90(0.01) & \textbf{98.05}(0.01) \\
& 1-RL(\%) & 84.14(0.17) & \textbf{85.11}(0.10) \\
& AP(\%) & 25.00(0.18) & \textbf{26.47}(0.13) \\
& AUC(\%) & 84.17(0.15) & \textbf{85.27}(0.10) \\ \midrule
\multirow{4}{*}{Mirflickr} & 1-HL(\%) & 84.25(0.58) & \textbf{88.18}(0.07) \\
& 1-RL(\%) & 81.22(1.97) & \textbf{84.57}(0.08) \\
& AP(\%) & 50.23(2.03) & \textbf{55.36}(0.14) \\
& AUC(\%) & 79.48(2.91) & \textbf{83.86}(0.06) \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Analysis and Summary} \label{5.2.4}
In this subsection, we make an explanation of the counter-intuitive phenomenon in Table \ref{tab3} and summarize the experimental results of subsection \ref{5.2}. By comparing Tables \ref{tab2} and \ref{tab4}, when the missing label ratio is fixed and the incomplete view ratio changes, results of iMVWL are intuitively consistent. However, from Tables \ref{tab2} and \ref{tab3}, when the incomplete view ratio is fixed and the missing label ratio changes, results of iMVWL are counter-intuitive. These results indicate that iMVWL deals with incomplete view relatively appropriate whereas inappropriate when dealing with missing multi-labels. It can be attributed to that iMVWL makes an improper low-rank assumption about the entire multi-label matrix (see Eq. (\ref{eqb})), which violates the reality. Differently, results of NAIM$^3$L are all intuitively consistent since the high-rank assumption about the entire multi-label matrix in NAIM$^3$L is supported by observations of real datasets and thus more appropriate than that of iMVWL.
To summarize, the results in Tables \ref{tab2}, \ref{tab3}, and \ref{tab4} indicate that our NAIM$^3$L consistently performs the best whether dealing with one challenge, two, or three, and the high-rank assumption indeed makes sense in all these settings.
\subsection{On the High/Low Rank Validation of the Predicted Multi-Label Matrices}
To justify the rationality of the assumptions in NAIM$^3$L, (i.e., the high-rankness of the entire label matrix and the low-rankness of the sub-label matrices), we conduct experiments on the predicted multi-label matrices by using the learned $\mathbf{W}$ in Algorithm 1. First, we validate the high-rankness of the predicted entire multi-label matrix by showing that the label matrices are of full column rank in Table \ref{tab5}. Second, we plot the nuclear norm value of the predicted entire multi-label matrix in Fig. \ref {Fig3}. As the number of multiple labels is too big to show the low-rankness of each sub-label matrix in a single figure, we show the mean value and the median value of the nuclear norm of the predicted sub-label matrices to validate the low-rank assumption. From Fig. \ref {Fig3}, we can find that the mean value and the median value are relatively small, which means the predicted sub-label matrices are of low-rank for the reason that the nuclear norm of a matrix is an upper bound of its rank. In a word, the ranks of the predicted multi-label matrices are consistent with the assumptions of our model.
\begin{table}[h]
\centering
\caption{Rank of the predicted entire multi-label matrix of each dataset.}\label{tab5}
\begin{tabular}{@{}cccccc@{}}
\toprule
dataset & Corel5k & Espgame & IAPRTC12 & Pascal07 & Mirflickr \\ \midrule
Rank & 260 & 268 & 291 & 20 & 38 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.48 \textwidth, height = 38.1mm]{Fig3}
\caption{The nuclear norm of the predicted entire label matrix, the mean value and the median value of the nuclear norm of the predicted sub-label matrices.}\label{Fig3}
\end{figure}
\subsection{Ablation Study} \label{V-D}
In this subsection, we study NAIM$^3$L-I (with only the loss function $\mathcal{L}$)
\begin{equation} \nonumber
\begin{aligned}
{\text {NAIM$^3$L-I : }} \min _{\mathbf{w}^{(i)}} \frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{P}^{(i)} \odot\left(\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right)\right\|_{F}^{2},
\end{aligned}
\end{equation}
and NAIM$^3$L-II (with $\mathcal{L}$ and the first low-rank term in the regularizer $\mathcal{R}$ )
\begin{equation} \nonumber
\begin{aligned}
{\text {NAIM$^3$L-II : }} \min _{\mathbf{W}^{(i)}} \frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{P}^{(i)} \odot\left(\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right)\right\|_{F}^{2} \\
+\lambda\left(\sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}\right)
\end{aligned}
\end{equation}
to validate the effectiveness of the proposed regularizer $\mathcal{R}$, especially the significance of the high-rank term. From Table \ref{tab6}, we can see that NAIM$^3$L-I performs the worst on five datasets while NAIM$^3$L has the best performance. Compared with NAIM$^3$L-I, the performance of NAIM$^3$L-II improves very little, but after adding the high-rank term, all the four metrics raise considerably. This demonstrates that the local and the global structures, especially the latter, are beneficial to characterize the relationships among multiple labels and the presented regularizer is effective in learning these relations. Additionally, an interesting result is found when comparing Tables \ref{tab2} and \ref{tab6}, that is, NAIM$^3$L-I has better performance than most of other compared methods in Table \ref{tab2}. The reasons may be that other methods make improper assumptions which violate reality and the indicator matrix $\mathbf {P}$ introduced by us can alleviate both the negative effects on missing labels and incomplete views.
\begin{table}[h]
\centering
\caption{Results of The Variants of NAIM$^3$L with the Ratio of Incomplete Multi-view $\alpha = 50$\% and the Ratio of Missing Multi-label $\beta = 50$\%. Values in Parentheses Represent the Standard Deviation, and all the Values are Displayed as Percentages. All the Three Methods are Implemented under the Non-aligned Views Setting.}
\label{tab6}
\begin{tabular}{ccccc}
\toprule
datasets & metrics & NAIM$^3$L-I & NAIM$^3$L-II & NAIM$^3$L \\ \midrule
\multirow{4}{*}{Corel5k} & 1-HL(\%) & \textbf{98.70}(0.00) & \textbf{98.70}(0.00) & \textbf{98.70}(0.01) \\
& 1-RL(\%) & 82.73(0.20) & 83.54(0.21) & \textbf{87.84}(0.21) \\
& AP(\%) & 30.20(0.40) & 30.47(0.36) & \textbf{30.88}(0.35) \\
& AUC(\%) & 82.99(0.20) & 83.80(0.21) & \textbf{88.13}(0.20) \\ \midrule
\multirow{4}{*}{Pascal07} & 1-HL(\%) & 92.83(0.00) & 92.83(0.00) & \textbf{92.84}(0.05) \\
& 1-RL(\%) & 77.29(0.18) & 77.35(0.17) & \textbf{78.30}(0.12) \\
& AP(\%) & 48.64(0.35) & 48.66(0.35) & \textbf{48.78}(0.32) \\
& AUC(\%) & 79.99(0.17) & 80.55(0.17) & \textbf{81.09}(0.12) \\ \midrule
\multirow{4}{*}{ESPGame} & 1-HL(\%) & \textbf{98.26}(0.00) & \textbf{98.26}(0.00) & \textbf{98.26}(0.01) \\
& 1-RL(\%) & 79.63(0.20) & 79.80(0.11) & \textbf{81.81}(0.16) \\
& AP(\%) & 24.28(0.20) & 24.34(0.16) & \textbf{24.57}(0.17) \\
& AUC(\%) & 80.04(0.20) & 80.24(0.13) & \textbf{82.36}(0.16) \\ \midrule
\multirow{4}{*}{IAPRTC12} & 1-HL(\%) & \textbf{98.05}(0.00) & \textbf{98.05}(0.00) & \textbf{98.05}(0.01) \\
& 1-RL(\%) & 82.52(0.00) & 82.70(0.00) & \textbf{84.78}(0.11) \\
& AP(\%) & 25.71(0.10) & 25.76(0.10) & \textbf{26.10}(0.13) \\
& AUC(\%) & 82.56(0.10) & 82.76(0.10) & \textbf{84.96}(0.11) \\ \midrule
\multirow{4}{*}{Mirflickr} & 1-HL(\%) & \textbf{88.15}(0.00) & \textbf{88.15}(0.00) & \textbf{88.15}(0.07) \\
& 1-RL(\%) & 84.05(0.00) & 84.10(0.00) & \textbf{84.40}(0.09) \\
& AP(\%) & 54.95(0.20) & 54.98(0.16) & \textbf{55.08}(0.18) \\
& AUC(\%) & 83.33(0.00) & 83.39(0.00) & \textbf{83.71}(0.06) \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Statistical Analysis} \label{V-E}
To further analyze the effectiveness of our proposed method, we conduct the significance test on the results reported in Tables 2 and 6. Specifically, for Table 2, we employ the Nemenyi test \cite {nemenyi1963,demvsar2006,zhang2019} on the 20 results (4 metrics on 5 datasets) of six methods and set the significance level $\alpha_{l}$ at 0.05. Then the critical value $q_{\alpha} = 2.850$ and the critical distance $CD = q_{\alpha} \sqrt{k(k+1) / N} (k = 6, N = 20)$. Similarly, for Table 6, $ q_{\alpha} = 2.344$ and $ CD = q_{\alpha} \sqrt{k(k+1) / N} (k = 3, N = 20)$. Results of the Nemenyi test are shown in Fig. \ref{Fig4a} and \ref{Fig4b}, respectively. As we can see from Fig. \ref{Fig4a}, NAIM$^3$L performs better than the other methods except for the iMVWL method at the significance level 0.05. Besides, from Fig. \ref{Fig4b}, we can conclude that NAIM$^3$L performs better than other two methods at the significance level 0.05, which indicates the rationality and importance of our regularization term, especially the high-rank term.
\begin{figure}[h]
\centering
\subfigure[Nemenyi test of Table 2]{
\includegraphics[width=0.231\textwidth, height = 25mm]{Fig4_a}
\label{Fig4a}
}
\subfigure[Nemenyi test of Table 6]{
\includegraphics[width=0.231\textwidth, height = 25mm]{Fig4_b}
\label{Fig4b}
}
\caption{Statistical analysis of NAIM$^3$L by the Nemenyi test, the significance level is 0.05 and methods not connected are significantly different.}
\end{figure}
\subsection{Different Ratios of Incomplete Multi-view and Missing Multi-label Study}
In this subsection, we conduct experiments on Core15k and Pascal07 to evaluate NAIM$^3$L under different ratios of incomplete multiple views and missing multiple labels. Specifically, to evaluate the impact of different ratios of incomplete multiple views, we fix the ratio of missing multiple labels $\beta = 50$\%, and the ratio of incomplete multiple views $\alpha$ varies within the set of $\{10\%, 30\%, 50\%, 70\%, 90\%\}$. Similarly, to evaluate the impact of different ratios of missing multiple labels, we fix $\alpha=50$\%, and $\beta$ changes within the set of $\{10\%, 30\%, 50\%, 70\%, 90\%\}$. Results of all four metrics on Core15k and Pascal07 are reported in Fig. \ref{Fig5a}-\ref{Fig5b} and \ref{Fig5c}-\ref{Fig5d}, respectively. The 'Full' means that the multiple views and multiple labels in the training set are complete. Note that, when the ratio of missing multiple labels $\beta = 50$\% and the ratio of incomplete multiple views $\alpha=90$\%, the multi-view learning principle that each sample in the training set must appear at least one view is violated. Thus, experiment of such incomplete ratio is omitted. We can see from Fig. \ref{Fig5a} and \ref{Fig5c} that only when the ratio of missing multiple labels $\alpha=90$\%, the performance of all four metrics degenerates considerably except for the HL metric. From these observations, we can draw two conclusions, one is that our NAIM$^3$L is relatively robust, the other is that the HL may not be an appropriate metric for multi-label learning. All in all, we can conclude that the lower the incomplete (missing) ratio, the better the performance, which is reasonable and intuitively consistent.
\begin{figure}[h]
\centering
\subfigure[Corel5k]{
\includegraphics[width=0.227\textwidth]{Fig5_a}
\label{Fig5a}
}
\subfigure[Corel5k]{
\includegraphics[width=0.227\textwidth]{Fig5_b}
\label{Fig5b}
}
\subfigure[Pascal07]{
\includegraphics[width=0.227\textwidth]{Fig5_c}
\label{Fig5c}
}
\subfigure[Pascal07]{
\includegraphics[width=0.227\textwidth]{Fig5_d}
\label{Fig5d}
}
\caption{The performance of four metrics on the Corel5k dataset and the Pascal07 dataset under different ratios of incomplete multiple views ($\alpha$) and missing multiple labels ($\beta$). The 'Full' means that the multiple views and multiple labels in the training set are complete. The average value and standard deviation are shown in each sub-figures.}
\end{figure}
\subsection{Hyper-parameter Study}
There is only one hyper-parameter in NAIM$^3$L, thus it is easy to choose the relatively optimal hyper-parameter. The grid search technique is adopted to choose the optimal hyper-parameter within the set of $\{10^{-3}, 10^{-2}, 0.1, 1, 10, 100\}$. For the sake of clarity, we scale them by $log10$ when showing in figures. Besides, we narrow the scope of Y-axis to make the trends of the performance look clearer. From Fig. \ref{Fig6a}-\ref{Fig6d}, we can see that NAIM$^3$L achieves relatively good performance when $\lambda$ in [0.1, 1], and the values of the four metrics vary slightly, which indicates that our method is NOT sensitive to the hyper-parameter $\lambda$.
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig6_a}
\label{Fig6a}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig6_b}
\label{Fig6b}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig6_c}
\label{Fig6c}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig6_d}
\label{Fig6d}
}
\caption{Results of the hyper-parameter selection on four metrics with different $\lambda$. The values of $\lambda$ are scaled by $log10$ for clarity.}
\end{figure}
\subsection{Impacts Study of $\mu$}
In this subsection, we conduct experiments to validate that the parameter $\mu$ introduced in the ADMM algorithm does not change the performance of our model. From Fig. \ref{Fig7a} we find that all the four metrics remain the same when $\mu$ changes, and from Fig. \ref{Fig7b} we can see that $\mu$ only impacts the convergence rate. The larger the $\mu$ is, the slower the objective function converges. Thus, $\mu$ is a parameter that does not need to be adjusted. Except for the experiments in this subsection, $\mu$ is fixed at 5 throughout all other experiments in this article.
\begin{figure}[h]
\centering
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig7_a}
\label{Fig7a}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig7_b}
\label{Fig7b}
}
\caption{The performance of all four metrics and the convergence rates under different $\mu$.}
\end{figure}
\subsection{Convergence Study}
In this subsection, we show the convergence curves of the objective function \eqref{Eq5} and the surrogate objective function \eqref{Eq11} to validate the convergence analysis given in subsection \ref{IV-C}. In Fig. \ref{Fig8a} and \ref{Fig8b}, we can see that both of the objective functions converge after about 20 iterations and the differences between them are small, which means the surrogate function is an appropriate alternate.
\begin{figure}[h]
\centering
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig8_a}
\label{Fig8a}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig8_b}
\label{Fig8b}
}
\caption{Convergence curves of the original objective function and the surrogate objective function.}
\end{figure}
\subsection{Efficiency Study of Algorithm 2} \label{5.9}
In this subsection, we conduct several experiments for computing the sub-gradient of the trace norm with matrices of different sizes to validate the high efficiency of Algorithm 2. Random matrices are used and the matrix size varies from $10000 \times 100$ to $100000 \times 400$. Algorithm 2 is compared with three widely used SVD methods, that is, the full SVD, the economical SVD, and the truncated SVD. The truncated SVD is implemented with parameter r (rank of the matrix), also known as reduced SVD in this case. All the experiments are repeated ten times and the average results of the run time (in seconds) are reported in Table \ref{tab7}. The SVD (full) fails when the matrix size increase to $40000 \times 100 $ because of that the computer is out of memory (16G RAM). From Table \ref{tab7}, we can find that Algorithm 2 is of orders of magnitude faster than the SVD( full). Besides, both the run time of SVD (econ) and SVD (trunc) is about twice of the Algorithm 2. All of these illustrate the efficiency of the Algorithm 2.
\begin{table*}[htbp]
\centering
\caption{The Average Run Time of Different Methods for Computing the sub-Gradient of the Trace Norm (in Seconds). '-' Denotes that the Computer is out of Memory.}
\label{tab7}
\begin{tabular}{@{}cccccccccc@{}}
\toprule
matrix size & 10000$\times$100 & 10000$\times$200 & 20000$\times$100 & 20000$\times$200 & 40000$\times$100 & 40000$\times$200 & 40000$\times$400 & 100000$\times$200 & 100000$\times$400 \\ \midrule
SVD (full) & 20.0031 & 1.3839 & 80.2122 & 4.9386 & - & - & - & - & - \\
SVD (econ) & 0.0571 & 0.1377 & 0.1459 & 0.2264 & 0.1696 & 0.4400 & 1.3460 & 1.3813 & 3.9416 \\
SVD (trunc) & 0.0475 & 0.1471 & 0.0955 & 0.2133 & 0.1838 & 0.4791 & 1.4396 & 1.4526 & 4.2048 \\
Algorithm 2 & \textbf{0.0334} & \textbf{0.0731} & \textbf{0.0590} & \textbf{0.1603} & \textbf{0.0864} & \textbf{0.2109} & \textbf{0.6736} & \textbf{0.5019} & \textbf{1.6042} \\
\bottomrule
\end{tabular}
\end{table*}
\section{Conclusion} \label{secVI}
In this paper, we propose a concise yet effective model called NAIM$^3$L to simultaneously tackle the missing labels, incomplete views and non-aligned views challenges with only one hyper-parameter in the objective function. An efficient ADMM algorithm with linear computational complexity regarding the number of samples is derived. Besides, NAIM$^3$L outperforms state-of-the-arts on five real data sets even without view-alignment. Note that, this framework can be directly non-linearized to its kernerlized version and can cooperate with deep neural network as our future research.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work is supported by the Key Program of NSFC under Grant No. 61732006 and NSFC under Grant No. 61672281. The authors would like to thank Dr. Huan Li and Zhenghao Tan for their generous help and beneficial discussions.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec1}}
\IEEEPARstart{M}{ulti-view} multi-label learning is designed to predict the multiple labels of an object represented by multiple views. In reality, multi-view multi-label learning has wide applications ranging from image classification \cite {r1,r2} to video analysis \cite {r3,r4} in that multi-view multi-label data is ubiquitous. For example, an image can be described by Histogram of Oriented Gradients (HOG), Scale Invariant Feature Transform (SIFT), and color features, meanwhile, the image can also be labeled with "tree, water, sky"; a video includes diverse representations such as audio, text, and picture, at the same time, the video can be annotated by several labels like "Shakespeare, opera, King Lear". As two individual research fields, multi-view learning \cite {r5,r2017nie,r6,r7} and multi-label learning \cite {r8,r9,r10,r11} are severally and extensively studied in last two decades. Nonetheless, to date, as their intersection or marry, multi-view multi-label learning \cite {r12,r13} is still relatively under-studied. Note that, there has been active literature motivated for "multi-modal learning" \cite {r14,r15,r16,r17,r18}, and the scope of "multi-view learning" is more extensive since the latter contains not only the multi-modal learning but also the learning paradigm of the same modal but different perspectives.
In practice, a situation we will often suffer is that multi-view multi-label data appears in three forms: missing labels, incomplete views, and non-aligned views. Reasons behind the appearance of these issues are: (1) insufficient resources or limited knowledge makes it expensive to obtain all the relevant labels of a sample; (2) malfunction of sensors or occlusion in some views causes the incompleteness of views; (3) the completely aligned information can hardly be accessed for privacy protection or aligned views are disturbed by carelessness of mankind. All of the three issues could dramatically degenerate the performance.
Existing multi-view learning methods, no matter whichever supervised \cite{r19,r20,xu2014}, semi-supervised \cite{r21,r22,nie2017}, or unsupervised \cite{r23,r24,nie2018}, commonly assume that the views involved are aligned, however, this does not necessarily always hold in reality. For example, in the social network, users may register multiple accounts as in Facebook, Twitter, and Instagram, but it is hard to align these social accounts with the same user owing to the privacy protection; in disease diagnosis, we obtain different types of examination data of the patients from different hospitals, but for the same sake of privacy protection, we cannot align these data with the same patient; in questionnaire survey, different organizations survey the same group of people, but the survey is generally anonymous, making the alignment information unavailable. In addition, non-aligned views are natural in recommendation system \cite{r25}, video surveillance \cite{r26} and so on.
It is worth emphasizing that the non-aligned views add extra challenges to the original considerably challenging problem with missing labels and incomplete views. The reasons are as follows: (1) The non-aligned views make the interactions among different views no longer easy to be available, thus their explicitly complementary information can hardly be exploited. (2) In conventional incomplete multi-view learning, those missing views of samples can be completed with the help of the paired ones from other observed views, however, in the non-aligned views setting, no paired sample can be available. As a result, the problem of multi-view incompleteness is more severe and hard to be tractable even if possible. (3) Under the situation of non-aligned views, information able to be utilized for the correspondence among views most probably hides in the common or shared labels of samples from different views, however, unfortunately, in the missing multi-label setting, such beneficial label information is quite limited.
\noindent {\textbf {Remark 1.}} If the multiple labels are complete during training, samples of the non-aligned views can be simply aligned by directly using the given labels and then divided into the corresponding groups. We refer interested readers to \cite{r27} for the partially non-aligned two views. Consequently, the non-alignment of multiple views under the situation of complete multiple labels is formally relatively trivial. However, it is worth emphasizing that in the case of missing multiple labels, the complete ground truth of the samples is no longer available. Therefore, beneficial label information for aligning the non-aligned views is quite limited, making the problem of non-aligned multiple views with the missing multiple labels more challenging and non-trivial any longer. To alleviate such insufficiency of multiple labels, extra and accurate structural information of multiple labels accompanied the non-aligned incomplete multiple views needs to be mined, which is the main focus of this work.
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\textwidth]{Fig1}
\caption{The global and local structures of the multiple labels. The same sample of non-aligned views is represented by the same color, and "sky", "cloud" , … ,"fish" are the labels. "1" in the label matrix means that the sample is annotated with the corresponding label whereas "-1" means not. All the label matrices are vertically concatenated. The label matrix of all samples from two views has full column rank or high rank, which refers to the global structure of multiple labels. Meanwhile, the sub-label matrix comprised of samples that share the same individual label tends to have low rank, e.g., the rank of sub-label matrix that share the same label "cloud" equals to 2, which corresponds to the local structure of multiple labels.}
\label{fig1}
\end{figure*}
To date, existing methods \cite{r28,r29} of addressing the three challenges (\textit {missing labels, incomplete views, and non-aligned views}) usually suffer from two disadvantages. First, they only focus on the first two challenges and assume that the views involved must be aligned. However, this assumption does not necessarily always hold in reality as above-mentioned. Confronting non-aligned views, traditional multi-view learning methods operating in the aligned cases are difficult to be directly adopted due to the lack of straightforward interactions among views. Second, introducing overmuch hyper-parameters in dealing with the problems of incomplete views and missing labels makes them difficult to optimize and reproduce.
To alleviate these two disadvantages, two questions arise naturally. One is how to address all three challenges simultaneously. The other is how to build an efficient model with as few hyper-parameters as possible.
In this paper, we propose a \textbf{N}on-\textbf{A}ligned \textbf{I}ncomplete \textbf{M}ulti-view and \textbf{M}issing \textbf{M}ulti-label \textbf{L}earning method, abbreviated as NAIM$^3$L, to address these two questions. Our starting point is that although samples among views are not aligned explicitly, they can still be bridged implicitly through the common or shared labels and thus be learned from complementarily. Besides, intuitively, samples with similar labels are more prone to be strongly correlated with each other, whereas those with dissimilar labels are weakly correlated or even uncorrelated, which reflects the local and the global structural relations within multiple labels, respectively. Mathematically, the locally correlated structure approximately corresponds to the low rankness of the label matrix of samples sharing the same individual label, while the globally weakly correlated or uncorrelated structure roughly corresponds to the high rankness of the label matrix of all samples. An intuitive description of this global-local structure is illustrated in Fig. \ref{fig1}. For conciseness and due to space limitations, we only use two views to illustrate the global and the local structures of multiple labels, but the illustration directly generalizes to more than two views.
Note that in traditional multi-label learning methods, the label matrix is often assumed to be low-rank rather than high-rank as we do. We argue from the following two perspectives that the high-rank assumption is reasonable. First, intuitively, samples in real multi-label datasets are usually diverse and contain dissimilar labels. As samples with dissimilar labels are weakly correlated or even uncorrelated, the entire multi-label matrix is often of high rank. Second, mathematically, as entries of the label matrix take binary values, it is unlikely for this matrix to be low-rank. To further demonstrate the rationality of our assumptions, we show the high-rankness of the entire label matrix corresponding to all samples and the low-rankness of each sub-label matrix corresponding to samples sharing a single label in Fig. \ref{Fig2a} and \ref{Fig2b}, respectively. It is well known that the rank of a matrix equals the number of its non-zero singular values. From Fig. \ref{Fig2a}, we can see that the singular values of the training and testing label matrices have a heavy tail, which indicates the high-rankness of the entire label matrix (the numbers of non-zero singular values of these two matrices are 259 and 249, i.e., the ranks are 259 and 249, either full rank or almost full rank). As the number of labels is too large to show the low-rankness of each sub-label matrix in a single picture, we instead show the mean and the median of the ranks of these sub-label matrices. From Fig. \ref{Fig2b}, we can observe that the ranks of the sub-label matrices are relatively low (about 15 and 5, respectively). In brief, these observations are consistent with the assumptions of our model.
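For concreteness, the following NumPy sketch (illustrative only; the toy matrix and variable names are ours, not part of our released implementation) reproduces this rank check on a small hand-made label matrix with entries in $\{-1,1\}$: the full matrix has full column rank, whereas the sub-matrix of samples sharing one label has much lower rank, mirroring the spirit of Fig. \ref{fig1}.
\begin{verbatim}
# Minimal rank check on a toy {-1, 1} label matrix (illustrative only).
import numpy as np

Y = np.array([[ 1,  1, -1, -1],   # each row: one sample, c = 4 labels
              [ 1,  1, -1, -1],
              [ 1, -1,  1, -1],
              [-1,  1,  1, -1],
              [-1, -1, -1,  1],
              [-1,  1, -1,  1]])

print("rank of full label matrix:", np.linalg.matrix_rank(Y))    # 4 (full)

k = 0                                 # samples annotated with the k-th label
Y_k = Y[Y[:, k] == 1]
print("rank of sub-label matrix :", np.linalg.matrix_rank(Y_k))  # 2

# On a real dataset such as Corel5k, the same two statistics give the
# numbers reported above: full rank 259 and mean sub-label rank about 15.
\end{verbatim}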
\begin{figure}[h]
\centering
\subfigure[High-rank]{
\includegraphics[width=0.227\textwidth]{Fig2_a}
\label{Fig2a}
}
\subfigure[Low-rank]{
\includegraphics[width=0.227\textwidth]{Fig2_b}
\label{Fig2b}
}
\caption{The high-rankness of the entire label matrix corresponding to all samples and the low-rankness of the sub-label matrices corresponding to samples that share a single label on the Corel5k dataset, which has 260 labels.}
\end{figure}
By formulating the above two structures as a single regularizer, we design the final optimization objective of our model. What most distinguishes our method from existing multi-label learning methods is that the latter almost completely neglect the global high-rank structure; our experiments will show that this global high-rankness plays an indispensable role. In summary, our contributions are fourfold:
(1) To the best of our knowledge, this is the first multi-view multi-label learning work that jointly considers the non-alignment of views, the incompleteness of views, and the missing of labels, which is much more realistic and challenging. Contrary to other methods, this is also the first work that explicitly models the global structure of the multi-label matrix as high-rank, whose effectiveness is validated in our experiments.
(2) We utilize the ConCave-Convex Procedure (CCCP) to reduce the objective function to a sequence of convex optimization problems, and then provide an efficient Alternating Direction Method of Multipliers (ADMM) algorithm in which a closed form solution of each sub-problem is derived. Moreover, the customized ADMM algorithm for NAIM$^3$L has linear computational complexity with respect to the number of samples, which makes it efficient enough to handle large-scale data.
(3) As a byproduct, an efficient algorithm with linear time complexity with respect to the number of samples is derived to compute the sub-gradient of the trace norm.
(4) Even without view alignment, our method still achieves better performance on five real datasets than state-of-the-art methods that exploit view alignment.
Compared with other multi-view multi-label learning methods, our model exhibits the following four merits. First, only one hyper-parameter, associated with the regularization term, is introduced in the modeling, which makes its optimization much easier than that of existing methods. Second, our model is inductive, so it can be directly applied to predict unseen samples. Third, our model outperforms state-of-the-art methods on five real datasets even without view alignment. Fourth, our model can be directly kernelized and can cooperate with deep neural networks in an end-to-end manner.
The rest of this paper is organized as follows. In Section \ref {secII}, we briefly overview some related work of multi-view multi-label learning. Section \ref {secIII} proposes our method NAIM$^3$L, and an efficient ADMM algorithm is presented in Section \ref {secIV} to solve it. Extensive experimental results and analyses are reported in Section \ref {secV}. Section \ref {secVI} concludes this paper with future research directions.
\section{Related Work} \label{secII}
To date, in the face of the aforesaid three challenges, existing work addresses only one or two of them. According to the number of challenges addressed, we roughly divide existing methods into two major categories: methods addressing one challenge and methods addressing two challenges, where the former is further divided into three sub-categories: non-aligned multi-view learning, incomplete multi-view learning, and multi-label learning with missing labels. In this section, we review recent research closely related to ours based on the above taxonomy.
\subsection{Methods of Addressing One Challenge}
\textbf{Non-aligned multi-view learning} deals with the problem that samples in all views are totally unpaired while being accompanied by certain constraints from (weakly) supervised information, such as must-link (ML) and cannot-link (CL) constraints. We give a formal definition of non-aligned views in subsection \ref{III-A}. To our knowledge, UPMVKSC \cite{r30} is the first and only work that considers non-aligned multiple views. In UPMVKSC, the authors incorporated the ML and CL constraints into kernel spectral clustering to improve the learning performance. However, this work assumes that the views involved are complete and that the constraint information is given a priori.
\textbf{Incomplete multi-view learning} handles the issue that samples in some views are missing. Recently, several incomplete multi-view learning methods have been proposed. For example, Xu \textit{et al.} \cite{r31} proposed a method termed MVL-IV that accomplishes multi-view learning with incomplete views by assuming that the different views are generated from a shared subspace. Afterwards, Du \textit{et al.} \cite{r32} modeled the statistical relationships of multi-modality emotional data using multiple modality-specific generative networks with a shared latent space, on which a Gaussian mixture assumption on the shared latent variables is imposed. Later, Xue \textit{et al.} \cite{r33} integrated semi-supervised deep matrix factorization, correlated subspace learning, and multi-view label prediction into a unified framework to jointly learn the deep correlated predictive subspace and the multi-view shared and private label predictors. More recently, Zhang \textit{et al.} \cite{r34} presented CPM-Nets to learn the latent multi-view representation by mimicking data transmission, such that the optimal trade-off between consistency and complementarity across different views can be achieved. In summary, all the above methods assume that the views involved are aligned and focus on multi-class learning, which is a special case of multi-label learning in which each sample is annotated with only one label. In addition, there is abundant literature on incomplete multi-view clustering \cite{r35,r36,r37,r38}, which is not the focus of this paper.
\textbf{Multi-label learning with missing labels} aims to predict the complete labels of an object given only part of its multiple labels. Zhang \textit{et al.} \cite{r39} proposed a framework that sufficiently leverages the inter-label correlations and the optimal combination of heterogeneous features based on a multi-graph Laplacian. Liu \textit{et al.} \cite{r40} presented a model called lrMMC that first seeks a low-dimensional common representation of all the views by constraining their common subspace to be low-rank and then utilizes matrix completion for multi-label classification. Zhu \textit{et al.} \cite{r41} put forward a method termed GLMVML, which extends the GLOCAL \cite{r42} model to its multi-view version by simultaneously exploiting the global and the local label correlations of all the views and of each individual view.
To see the differences in the global and local structures between GLOCAL and our NAIM$^3$L, we briefly describe GLOCAL as follows. Given a dataset $\mathbf{X}$ and its corresponding multi-label matrix $\mathbf{Y}$, GLOCAL exploits the global and local label correlations via manifold regularization,
\begin{equation} \label{eqa}
\begin{aligned}
\min _{\mathbf{U}, \mathbf{V}, \mathbf{W}}&\left\|\Pi_{\Omega}(\mathbf{Y}-\mathbf{U} \mathbf{V})\right\|_{F}^{2}+\lambda_{1}\left\|\mathbf{V}-\mathbf{W}^{T} \mathbf{X}\right\|_{F}^{2} \\
&+\lambda_{2}(\|\mathbf{U}\|_{F}^{2}+\|\mathbf{V}\|_{F}^{2}+\|\mathbf{W}\|_{F}^{2})+\lambda_{3} \operatorname{tr}\left(\mathbf{F}_{0}^{T} \mathbf{L}_{0} \mathbf{F}_{0}\right) \\
&+\sum_{m=1}^{g} \lambda_{4} \operatorname{tr}\left(\mathbf{F}_{m}^{T} \mathbf{L}_{m} \mathbf{F}_{m}\right)
\end{aligned},
\end{equation}
where $\Pi_{\Omega}$ is a projection operator, $\mathbf{U}\mathbf{V}$ is the low-rank decomposition of $\mathbf{Y}$, and $\mathbf{W}$ is a linear mapping matrix to be learned. $\mathbf{L}_{0}$ and $\mathbf{L}_{m}$ are the Laplacian matrices encoding the global and the local label correlations, respectively. $\mathbf{F}_{0}$ and $\mathbf{F}_{m}$ are the classifier output matrices for all samples and group $m$, respectively.
Although some of these methods also consider the global and the local structures, the main difference from ours is that almost all of them assume not only the global and the local manifold structures of the given data but also the low-rankness of the whole label matrix, whereas our method only needs an assumption on the rank of the (predicted) label matrix to formulate the global and the local structures. More importantly, we argue that the whole label matrix should be high-rank, opposing the popular low-rank assumption.
\subsection{Methods of Addressing Two Challenges}
As far as we know, there are only two methods, iMVWL \cite{r28} and IMVL-IV \cite{r29}, that take both incomplete multi-view and missing multi-label learning into consideration. iMVWL learns a shared subspace from incomplete views by exploiting weak labels and local label correlations, and then trains a predictor in this subspace such that it captures not only cross-view relationships but also the weak-label information of the training samples. We present the formulation of iMVWL to distinguish its low-rank assumption from our high-rank assumption. Specifically, given a multi-view dataset of $n_{v}$ views $\left\{\mathbf{X}_{v}\right\}_{v=1}^{n_{v}}$ and its corresponding multi-label matrix $\mathbf{Y}$, the objective function of iMVWL is formulated as follows:
\begin{equation} \label{eqb}
\begin{aligned}
\min _{\left\{\mathbf{U}_{v}, \mathbf{V}, \mathbf{W}, \mathbf{S}\right\}} \sum_{v=1}^{n_{v}}\left\|\mathbf{O}^{v} \odot\left(\mathbf{X}_{v}-\mathbf{V} \mathbf{U}_{v}^{T}\right)\right\|_{F}^{2} \\
+\alpha\|\mathbf{M} \odot(\mathbf{V} \mathbf{W} \mathbf{S}-\mathbf{Y})\|_{F}^{2}+\beta\|\mathbf{S}\|_{*}
\end{aligned},
\end{equation}
where $\odot$ denotes the Hadamard product, $\mathbf{O}^{v}$ and $\mathbf{M}$ are the indicator matrices for the missing views and labels, respectively. $\mathbf{V}\mathbf{U}_{v}^{T}$ is the non-negative decomposition of $\mathbf{X}_{v}$, $\mathbf{W}$ is the prediction coefficient matrix, and $\mathbf{S}$ is the label correlation matrix.
Differently, IMVL-IV provides a unified framework characterizing multiple ingredients, including label-specific features, global and local correlations among labels, a low-rank assumption on the label matrix, and consistency among the representations of the views. However, both methods require the views to be aligned and involve at least two explicit hyper-parameters in their objectives; IMVL-IV even contains ten hyper-parameters.
\subsection{Summary}
To summarize, the aforementioned methods have three weaknesses. (1) Most of them assume that the views involved must be aligned, which naturally limits their applicability in practice. (2) Except for MVL-IV, all of them involve at least two explicit hyper-parameters in their modeling, and some have additional implicit hyper-parameters, making model selection quite cumbersome. (3) Most of them are built on non-negative matrix factorization \cite{r43}, which prevents them from directly obtaining an inductive learner and thus makes them suboptimal for predicting unseen samples. In the next section, we propose a concise yet effective model to overcome the above shortcomings.
\section{The Proposed Method} \label{secIII}
\subsection{Problem Settings} \label{III-A}
In this subsection, we first give the formal definition of non-aligned views and then present the problem settings of our model in detail.
\textbf{Definition 1.} Given a multi-view multi-label dataset $\Omega=\left\{\mathbf{X}^{(i)}\right\}_{i=1}^{V}$ containing $V$ different views, where $\mathbf{X}^{(i)}=\left[\mathbf{x}_{1}^{(i)}, \mathbf{x}_{2}^{(i)}, \cdots, \mathbf{x}_{n}^{(i)}\right] \in \mathbb{R}^{n \times d_{i}}$ is the feature matrix of the $i$-th view, and $n$ and $d_i$ are the number of samples and the feature dimension of the $i$-th view, respectively. If the samples across all views are totally unpaired, i.e., the $m$-th sample of the $i$-th view $\mathbf{x}_{m}^{(i)}$ and the $m$-th sample of the $j$-th view $\mathbf{x}_{m}^{(j)}$ are distinct samples for all $m \in\{1,2, \cdots, n\}$ and all $i, j \in\{1,2, \cdots, V\}$ with $i\ne j$, then these views are called non-aligned views.
In the traditional full-label setting, $\mathbf{Y}^{(i)}=\left[\mathbf{y}_{1}^{(i)}, \mathbf{y}_{2}^{(i)}, \cdots, \mathbf{y}_{n}^{(i)}\right]$ $\in\{-1,1\}^{n \times c}$ is the label matrix of the $i$-th view and $c$ is the number of labels. $\mathbf{y}_{j k}^{(i)}=1$ $(k=1,2, \cdots, c)$ means that the $k$-th label is relevant, while $\mathbf{y}_{j k}^{(i)}=-1$ means it is irrelevant. In the missing-labels setting, some labels may not be observed; for example, when the $k$-th label of the $j$-th sample in the $i$-th view is missing, $\mathbf{y}_{j k}^{(i)}=0$ and it provides no information. Moreover, in the incomplete multi-view scenario, partial views of some samples are missing; correspondingly, the rows of these samples in the feature matrix $\mathbf{X}^{(i)}$ are missing.
\subsection{Problem Formulation}
In this subsection, we focus on the task of predicting the labels of unlabeled test data by learning from non-aligned incomplete multi-view and missing multi-label training data.
Predicting labels has attracted tremendous interest in the machine learning community, and numerous works have been put forward. Among them, linear regression \cite{r44} might be the most widely used framework due to its simplicity and effectiveness. Thus, we formulate the prediction as a regression problem. Formally, the loss function is written as follows,
\begin{equation}\label{Eq1}
\mathcal{L}=\frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right\|_{F}^{2},
\end{equation}
where $\mathbf{W}^{(i)} \in \mathbb{R}^{d_{i} \times c}$ is the coefficient matrix of the $i$-th view. Further, to deal with the challenge of missing labels, we introduce an indicator matrix $\mathbf{P}^{(i)}$ $(i=1,2, \cdots, V)$ for each label matrix. Let $\Omega \subseteq\{1,2, \cdots, n\} \times\{1,2, \cdots, c\}$ be the set of indices that are observed in the label matrix $\mathbf{Y}^{(i)}$; then $\mathbf{P}^{(i)}$ is defined as follows:
\begin{equation}\label{Eq2}
\mathbf{P}_{j k}^{(i)}=\left\{\begin{array}{ll}
1 & \text { if }(j, k) \in \Omega \\
0 & \text {otherwise.}
\end{array}\right.
\end{equation}
Moreover, since views are incomplete in our setting, to alleviate the negative impact arising from the incompleteness of multiple views, we set the rows of $\mathbf{P}^{(i)}$ to zero if the corresponding rows of $\mathbf{X}^{(i)}$ are missing, i.e., $\mathbf{P}_{j \bullet}^{(i)}=0$ if the $j$-th sample of the $i$-th view is missing, where $\mathbf{P}_{j \bullet}^{(i)}$ denotes the $j$-th row of the indicator matrix $\mathbf{P}^{(i)}$. By introducing $\mathbf{P}^{(i)}$, Eq. \eqref{Eq1} can be rewritten as:
\begin{equation}\label{Eq3}
\mathcal{L}=\frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{P}^{(i)} \odot\left(\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right)\right\|_{F}^{2},
\end{equation}
where $\odot$ denotes the Hadamard product. Simple as Eq. \eqref{Eq3} seems, it serves three purposes. First, it can be used to predict unlabeled data. Second, the inference of missing labels on the training data is obtained as a byproduct. Third, it handles both missing labels and incomplete views.
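For illustration, a minimal NumPy sketch of the masked loss in Eq. \eqref{Eq3} for a single view is given below; the function and variable names are ours, and it assumes that labels are coded as $1$/$-1$ with $0$ for missing entries and that the rows of missing samples are masked through $\mathbf{P}^{(i)}$.
\begin{verbatim}
# Hedged sketch of the masked regression loss of Eq. (3) for one view.
import numpy as np

def masked_view_loss(X, W, Y, missing_rows):
    P = (Y != 0).astype(float)      # Eq. (2): 1 where the label is observed
    P[missing_rows, :] = 0.0        # zero rows for missing samples of this view
    R = P * (X @ W - Y)             # Hadamard-masked residual
    return 0.5 * np.sum(R ** 2)

# toy usage with random data
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 5))
W = rng.standard_normal((5, 4))
Y = rng.choice([-1, 0, 1], size=(6, 4))
print(masked_view_loss(X, W, Y, missing_rows=[2]))
\end{verbatim}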
However, the above loss function neither utilizes multi-view consistency nor exploits multi-label structures. Thus, how to combine these two properties to make our model more discriminative is the main concern in the following.
Unfortunately, we are confronted with two obstacles when combining the aforementioned two properties. One is that non-aligned views make the consensus of multiple views difficult to guarantee. The other is that, while dealing with non-aligned views, we also need to exploit the multi-label structures at the same time.
Although samples among views are not aligned explicitly, they can be bridged implicitly through the common or shared labels. To mitigate the above two obstacles, we therefore align the different views in a common label space, in which we characterize the global-local structures of the multiple labels. Our motivations are intuitive. First, although the views are not aligned, samples of different views that share the same label should be consistent; hence, views can be aligned by their labels. Second, in the real world, samples with similar labels are strongly correlated with each other, whereas those with dissimilar labels are weakly correlated or even uncorrelated. This implies the low rankness of the sub-label matrix of samples sharing the same label and the high rankness of the label matrix of all samples. Accordingly, the regularizer $\mathcal{R}$ is formulated as:
\begin{equation}\label{Eq4}
\begin{aligned}
\mathcal{R}=& \sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)}; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*} \\
&-\left\|[\mathbf{X}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}^{(V)} \mathbf{W}^{(V)}] \right\|_{*}
\end{aligned},
\end{equation}
where $\|\bullet \|_{*}$ denotes the trace norm, $[\mathbf{A} ; \mathbf{B}]$ is the vertical concatenation of matrices $\mathbf{A}$ and $\mathbf{B}$, and $\mathbf{X}_{k}^{(i)}$ is the sub-matrix of $\mathbf{X}^{(i)}$ consisting of the samples annotated with the $k$-th label observed in the $i$-th view. Note that the intersections of the $\mathbf{X}_{k}^{(i)}$ over $k$ are non-empty because a sample has multiple labels.
By concatenating, across all $V$ views, the samples that share the same single label (an early fusion strategy), the first term of Eq. \eqref{Eq4} serves two purposes: it aligns samples of different views in a common label space to ensure consistency, and it characterizes the local low-rank structure of each predictive sub-label matrix corresponding to samples that share the same single label. Similarly, the second term aligns the different views of all samples and depicts the global high-rank structure of the multiple labels of all samples. An intuitive illustration of this global-local structure is shown in Fig. \ref{fig1}. Combining Eqs. \eqref{Eq3} and \eqref{Eq4}, the final objective function is formulated as:
\begin{equation}\label{Eq5}
\begin{aligned}
\min _{\mathbf{W}^{(i)}} & \frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{P}^{(i)} \odot\left(\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right)\right\|_{F}^{2} \\
&+\lambda\left(\sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}\right.\\
&\left.-\left\|[\mathbf{X}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}\right).
\end{aligned}
\end{equation}
Note that the two terms of the regularizer $\mathcal{R}$ are designed to jointly describe the global-local structure of the multiple labels. More importantly, these two terms work as a whole and neither of them is dispensable, which will be validated by the ablation study in subsection \ref{V-D}. Thus, we only need one hyper-parameter in our objective function. However, the regularizer $\mathcal{R}$ is the difference of two convex functions; if this term were negative, we might obtain trivial solutions when $\lambda$ is large enough. In the following, we rigorously prove that $\mathcal{R}$ is non-negative, so trivial solutions are avoided.
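For concreteness, the regularizer in Eq. \eqref{Eq4} can be evaluated as in the following NumPy sketch (the names and the toy usage are ours); it simply sums the trace norms of the vertically concatenated per-label predictions and subtracts the trace norm of all predictions.
\begin{verbatim}
# Hedged sketch of the global-local regularizer R of Eq. (4).
import numpy as np

def trace_norm(A):
    return np.linalg.svd(A, compute_uv=False).sum()

def regularizer(Xs, Ws, Ys):
    # Xs, Ws, Ys: lists over views; labels in Ys are 1 / -1 / 0 (missing)
    preds = [X @ W for X, W in zip(Xs, Ws)]
    full = trace_norm(np.vstack(preds))
    local = 0.0
    for k in range(Ys[0].shape[1]):
        blocks = [P[Y[:, k] == 1] for P, Y in zip(preds, Ys)]
        M = np.vstack(blocks)
        if M.shape[0]:
            local += trace_norm(M)
    return local - full              # non-negative by Theorem 1

# toy usage: every sample gets at least one positive label (condition (a))
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((6, 4)) for _ in range(2)]
Ws = [rng.standard_normal((4, 3)) for _ in range(2)]
Ys = []
for _ in range(2):
    Yv = rng.choice([-1, 1], size=(6, 3))
    Yv[np.arange(6), rng.integers(3, size=6)] = 1
    Ys.append(Yv)
print(regularizer(Xs, Ws, Ys))       # >= 0
\end{verbatim}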
\begin{lemma} \label{lemma1} {\rm \cite{r45}}
Let $\mathbf{A}$ and $\mathbf{B}$ be matrices of the same row dimensions, and $[\mathbf{A}, \mathbf{B}]$ be the concatenation of $\mathbf{A}$ and $\mathbf{B}$, we have $\| [\mathbf{A}, \mathbf{B}]\left\|_{*} \leq\right\| \mathbf{A}\left\|_{*}+\right\| \mathbf{B} \|_{*}$.
\end{lemma}
\begin{theorem} \label{theo1}
Let $\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)}, \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)}, \cdots, \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}$ $(k = 1,2, \cdots,c)$ be matrices with the same column dimension, where $\mathbf{X}_{k}^{(i)}$ is a sub-matrix of $\mathbf{X}^{(i)}$ $(i = 1,2, \cdots,V)$. If (a) $\forall i \in \{1,2,\cdots,V\}$, the vertical concatenation of $\mathbf{X}_{1}^{(i)} \mathbf{W}^{(i)}$ to $\mathbf{X}_{c}^{(i)} \mathbf{W}^{(i)}$ contains all rows of $\mathbf{X}^{(i)} \mathbf{W}^{(i)}$, and (b) $\forall k,h \in \{1,2,\cdots,c\}$ with $k\ne h$, at least one of the intersections between $\mathbf{X}_{k}^{(i)} \mathbf{W}^{(i)}$ and $\mathbf{X}_{h}^{(i)} \mathbf{W}^{(i)}$ is non-empty, then we have
\begin{equation}\label{Eq6}
\begin{aligned}
& \sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*} \\
& \geq\left\|[\mathbf{X}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}.
\end{aligned}
\end{equation}
\end{theorem}
At first glance, Theorem \ref{theo1} seems to be provable by directly extending Lemma \ref{lemma1}; in fact, however, the proof is not that trivial. The detailed proof is given below.
\renewcommand{\theequation}{P.\arabic{equation}}
\setcounter{equation}{0}
\begin{proof} Before proving Theorem \ref{theo1}, firstly, we present the following three propositions.
\begin{equation} \label{eqA1}
\begin{aligned}
& \left\|\mathbf{A}_{1}\right\|_{*}+\left\|\mathbf{A}_{2}\right\|_{*}+\cdots+\left\|\mathbf{A}_{n}\right\|_{*} \\
\geq & \left\|\left[\mathbf{A}_{1} ; \mathbf{A}_{2} ; \cdots ; \mathbf{A}_{n}\right]\right\|_{*}
\end{aligned}
\end{equation}
\begin{equation} \label{eqA2}
\begin{aligned}
& \left\| \left[ \mathbf{A}_{1} ; \mathbf{A}_{2} ; \mathbf{A}_{3} \right] \right\|_{*}=\left\| \left[\mathbf{A}_{1} ; \mathbf{A}_{3} ; \mathbf{A}_{2} \right]\right\|_{*} \\
= & \left\| \left[ \mathbf{A}_{2} ; \mathbf{A}_{1} ; \mathbf{A}_{3} \right] \right\|_{*}=\left\| \left[\mathbf{A}_{2} ; \mathbf{A}_{3} ; \mathbf{A}_{1}\right]\right\|_{*} \\
= & \left\| \left[ \mathbf{A}_{3} ; \mathbf{A}_{1} ; \mathbf{A}_{2} \right] \right\|_{*}=\left\| \left[\mathbf{A}_{3} ; \mathbf{A}_{2} ; \mathbf{A}_{1}\right]\right\|_{*}
\end{aligned}
\end{equation}
\begin{equation} \label{eqA3}
\| [\mathbf{A} ; \mathbf{B}] \|_{*} \geq \|\mathbf{A}\|_{*}.
\end{equation}
\eqref{eqA1} is a generalization of Lemma \ref{lemma1} and can be proved by the following derivation.
\begin{equation} \nonumber
\begin{aligned}
& \left\|\mathbf{A}_{1}\right\|_{*}+\left\|\mathbf{A}_{2}\right\|_{*}+\cdots+\left\|\mathbf{A}_{n}\right\|_{*} \\
= & \left\|\left[\mathbf{A}_{1} ; \mathbf{0} ; \cdots ; \mathbf{0}\right]\right\|_{*}+\left\|\left[\mathbf{0} ; \mathbf{A}_{2} ; \cdots ; \mathbf{0}\right]\right\|_{*}+\cdots+\left\|\left[\mathbf{0} ; \mathbf{0} ; \cdots ; \mathbf{A}_{n}\right]\right\|_{*} \\
\geq & \left\|\left[\mathbf{A}_{1} ; \mathbf{0} ; \cdots ; \mathbf{0}\right]+\left[\mathbf{0} ; \mathbf{A}_{2} ; \cdots ; \mathbf{0}\right]+\cdots+\left[\mathbf{0} ; \mathbf{0} ; \cdots ; \mathbf{A}_{n}\right]\right\|_{*} \ (\text{by the triangle inequality}) \\
= & \left\|\left[\mathbf{A}_{1} ; \mathbf{A}_{2} ; \cdots ; \mathbf{A}_{n}\right]\right\|_{*}
\end{aligned}
\end{equation}
where the first equality uses the fact that padding a matrix with zero rows does not change its singular values.
\eqref{eqA2} states the invariance of the trace norm under permutations of the (block) rows. Without loss of generality, we only prove the case of three row blocks; the proof directly extends to any number of blocks.
\begin{equation} \nonumber
\begin{aligned}
& \left\|\left[\mathbf{A}_{1} ; \mathbf{A}_{2} ; \mathbf{A}_{3}\right]\right\|_{*}=\operatorname{tr} \sqrt{\left[\mathbf{A}_{1}^{T}, \mathbf{A}_{2}^{T}, \mathbf{A}_{3}^{T}\right]\left[\mathbf{A}_{1} ; \mathbf{A}_{2} ; \mathbf{A}_{3}\right]} \\
= & \operatorname{tr} \sqrt{\mathbf{A}_{1}^{T} \mathbf{A}_{1}+\mathbf{A}_{2}^{T} \mathbf{A}_{2}+\mathbf{A}_{3}^{T} \mathbf{A}_{3}} \\
= & \operatorname{tr} \sqrt{\mathbf{A}_{1}^{T} \mathbf{A}_{1}+\mathbf{A}_{3}^{T} \mathbf{A}_{3}+\mathbf{A}_{2}^{T} \mathbf{A}_{2}} \\
= & \operatorname{tr} \sqrt{\left[\mathbf{A}_{1}^{T}, \mathbf{A}_{3}^{T}, \mathbf{A}_{2}^{T}\right]\left[\mathbf{A}_{1} ; \mathbf{A}_{3} ; \mathbf{A}_{2}\right]}=\left\|\left[\mathbf{A}_{1} ; \mathbf{A}_{3} ; \mathbf{A}_{2}\right]\right\|_{*}
\end{aligned}
\end{equation}
The proof of the remaining equalities is similar.
\eqref{eqA3} can be easily proved by $\| [\mathbf{A} ; \mathbf{B}] \|_{*}=\operatorname{tr} \sqrt{\mathbf{A}^{T} \mathbf{A}+\mathbf{B}^{T} \mathbf{B}} \geq \operatorname{tr} \sqrt{\mathbf{A}^{T} \mathbf{A}}=\|\mathbf{A}\|_{*}$.
For simplicity of writing, let $\mathbf{X}_{k}^{(i)} \mathbf{W}^{(i)}=\mathbf{E}_{k}^{(i)}$ and $\mathbf{X}^{(i)} \mathbf{W}^{(i)}=\mathbf{E}^{(i)}$, where $i = 1,2, \cdots,V$ and $k = 1,2, \cdots,c$. Then we have
$\begin{aligned}
& \sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}\\
= &\sum_{k=1}^{c}\left\|[\mathbf{E}_{k}^{(1)} ; \mathbf{E}_{k}^{(2)} ; \cdots ; \mathbf{E}_{k}^{(V)}]\right\|_{*} \\
\geq &\left\|[\mathbf{E}_{1}^{(1)}; \cdots ; \mathbf{E}_{1}^{(V)} ; \mathbf{E}_{2}^{(1)}; \cdots ; \mathbf{E}_{2}^{(V)} ; \cdots ;\mathbf{E}_{c}^{(1)}; \cdots ; \mathbf{E}_{c}^{(V)}]\right\|_{*} \\
= &\left\|[\mathbf{E}_{1}^{(1)} ; \cdots ; \mathbf{E}_{c}^{(1)} ; \mathbf{E}_{1}^{(2)} ; \cdots ; \mathbf{E}_{c}^{(2)} ; \cdots ; \mathbf{E}_{1}^{(V)} ; \cdots ; \mathbf{E}_{c}^{(V)}]\right\|_{*} \\
\geq &\left\|[\mathbf{E}^{(1)} ; \mathbf{E}^{(2)} ; \cdots ; \mathbf{E}^{(V)}]\right\|_{*}\\
= &\left\|[\mathbf{X}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}.
\end{aligned}$
The first inequality holds by \eqref{eqA1}, and the second equality holds by \eqref{eqA2}. In the multi-label setting, the samples corresponding to all the individual labels together contain all samples of the whole multi-label matrix, which implies that condition (a) is satisfied. Condition (b) is likewise satisfied by the fact that a sample may have multiple labels. Thus, up to a row permutation (using \eqref{eqA2} again), $[\mathbf{E}_{1}^{(i)} ; \cdots ; \mathbf{E}_{c}^{(i)}]$ can be rewritten as $[ \mathbf{E}_{1}^{(i)} ; \cdots ; \mathbf{E}_{c}^{(i)} ] = [\mathbf{E}^{(i)}; \mathbf{E}_{in}^{(i)}]$, where $\mathbf{E}_{in}^{(i)}$ consists of the duplicated rows arising from the intersections of the $\mathbf{E}_{k}^{(i)}$ over $k$. Then the last inequality holds by \eqref{eqA3}. This completes the proof.
\end{proof}
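The inequalities \eqref{eqA1} and \eqref{eqA3} used above can also be checked numerically; the following small NumPy sketch (ours, on random matrices) does exactly that.
\begin{verbatim}
# Numerical sanity check of ||[A;B]||_* <= ||A||_* + ||B||_*  and
# ||[A;B]||_* >= ||A||_*  on random matrices (illustrative only).
import numpy as np

def trace_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

rng = np.random.default_rng(0)
A, B = rng.standard_normal((5, 3)), rng.standard_normal((4, 3))
lhs = trace_norm(np.vstack([A, B]))
print(lhs <= trace_norm(A) + trace_norm(B))   # True
print(lhs >= trace_norm(A))                   # True
\end{verbatim}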
\noindent {\textbf {Remark 2.}}
A similar method, termed DM2L, has been proposed in \cite{r46}. The major differences between this work and DM2L are as follows:
(1) This work generalizes DM2L to the multi-view setting and mainly focuses on the novel and unique challenge posed by non-aligned multiple views, which widely exist in reality but are often neglected. In brief, DM2L can be regarded as a special case of this work.
(2) Although the model of DM2L looks similar to ours, the problem addressed in this paper is much more challenging. More importantly, the motivations of the two are different. DM2L assumes that the multi-label matrix is low-rank and uses the negative trace norm term merely to make the model more discriminative, whereas we argue that the multi-label matrix is NOT necessarily low-rank but, conversely, high-rank, as analyzed before, and we utilize the negative trace norm precisely to characterize this property, which makes our motivation more intuitive and interpretable.
(3) We tailor an efficient optimization method to solve the proposed model. Specifically, we derive an ADMM algorithm with a closed form solution for each sub-problem, whose linear computational complexity with respect to the number of samples allows our model to handle large-scale data; such high efficiency, closed form solutions, and scalability cannot be guaranteed in DM2L with a traditional convex optimization method. Besides, there are other differences between NAIM$^3$L and DM2L: we make a thorough computational complexity analysis; the non-negativity of our regularization term is harder to prove than that of DM2L, and we provide a concise proof (see above); and we design a more efficient algorithm for computing the sub-gradient of the trace norm.
\section{Optimization} \label{secIV}
\subsection{ADMM Algorithm}
\renewcommand{\theequation}{\arabic{equation}}
\setcounter{equation}{8}
For the convenience of optimization, we first introduce some notations to simplify the formulas. Let
$
\mathbf{P}=\left[\begin{array}{c}
\mathbf{P}^{(1)} \\
\mathbf{P}^{(2)} \\
\vdots \\
\mathbf{P}^{(V)}
\end{array}\right]
$,
$
\mathbf{W}=\left[\begin{array}{c}
\mathbf{W}^{(1)} \\
\mathbf{W}^{(2)} \\
\vdots \\
\mathbf{W}^{(V)}
\end{array}\right]
$,
$
\mathbf{X}=\left[\begin{array}{cccc}
\mathbf{X}^{(1)} & \bf 0 &\cdots& \bf 0 \\
\bf 0 & \mathbf{X}^{(2)} &\cdots& \bf 0 \\
\vdots & \vdots & \ddots &\vdots\\
\bf 0 & \bf 0 &\cdots& \mathbf{X}^{(V)}
\end{array}\right]
$,
$
\mathbf{Y}=\left[\begin{array}{c}
\mathbf{Y}^{(1)} \\
\mathbf{Y}^{(2)} \\
\vdots \\
\mathbf{Y}^{(V)}
\end{array}\right]
$,
and
$
\mathbf{X}_k=\left[\begin{array}{cccc}
\mathbf{X}^{(1)}_k & \bf 0 & \cdots & \bf 0 \\
\bf 0 & \mathbf{X}^{(2)}_k & \cdots & \bf 0 \\
\vdots & \vdots & \ddots & \vdots \\
\bf 0 & \bf 0 & \cdots & \mathbf{X}^{(V)}_k
\end{array}\right]
$,
then Eq. \eqref{Eq5} can be simplified as:
\begin{equation}\label{Eq7}
\begin{aligned}
\min _{\mathbf{w}} &\frac{1}{2}\|\mathbf{P} \odot(\mathbf{X} \mathbf{W}-\mathbf{Y})\|_{F}^{2} \\
&+\lambda\left(\sum_{k=1}^{c}\left\|\mathbf{X}_{k} \mathbf{W}\right\|_{*}-\|\mathbf{X} \mathbf{W}\|_{*}\right).
\end{aligned}
\end{equation}
Eq. \eqref{Eq7} is a DC (Difference of Convex functions) program, and it can be solved by the ConCave-Convex Procedure (CCCP). Let $f = J_{cvx}+J_{cav}$, with
\begin{equation}\label{Eq8}
J_{cvx}=\frac{1}{2}\|\mathbf{P} \odot(\mathbf{X} \mathbf{W}-\mathbf{Y})\|_{F}^{2}+\lambda \sum_{k=1}^{c}\left\|\mathbf{X}_{k} \mathbf{W}\right\|_{*},
\end{equation}
\begin{equation}\label{Eq9}
J_{cav}=-\lambda\|\mathbf{X} \mathbf{W}\|_{*},
\end{equation}
where $J_{cvx}$ is a convex function and $J_{cav}$ is a concave function.
Then by CCCP we have,
\begin{equation}\label{Eq10}
\partial J_{cvx}\left(\mathbf{W}_{t}\right)+\partial J_{cav}\left(\mathbf{W}_{t-1}\right)=0,
\end{equation}
where $\partial J_{cvx}\left(\mathbf{W}_{t}\right)$ is the sub-gradient of $J_{cvx}$ at $\mathbf{W}_{t}$ and $\mathbf{W}_{t}$ is the coefficient matrix at the $t$-th iteration. A surrogate objective function $J$ satisfying Eq. \eqref{Eq10} can then be derived,
\begin{equation}\label{Eq11}
\begin{aligned}
\min _{\mathbf{w}_{t}} J\left(\mathbf{W}_{t}\right)=& \min _{\mathbf{w}_{t}} \frac{1}{2}\left\|\mathbf{P} \odot\left(\mathbf{X} \mathbf{W}_{t}-\mathbf{Y}\right)\right\|_{F}^{2}+\lambda \sum_{k=1}^{c}\left\|\mathbf{X}_{k} \mathbf{W}_{t}\right\|_{*} \\
&-\lambda \operatorname{tr}\left[\mathbf{W}_{t}^{T}\left(\partial\left\|\mathbf{X} \mathbf{W}_{t-1}\right\|_{*}\right)\right].
\end{aligned}
\end{equation}
Eq. \eqref{Eq11} is convex w.r.t. $\mathbf{W}_{t}$ and can be solved by an off-the-shelf convex optimization toolkit.
However, traditional convex optimization methods often need to search along gradient directions, which makes reaching the optimum slow. Thus, we tailor an efficient ADMM algorithm and derive the closed form solution of each sub-problem. Specifically, letting $\mathbf{Z}_{k}=\mathbf{X}_{k} \mathbf{W}_{t}$, we have the following augmented Lagrangian function,
\begin{equation}\label{Eq12}
\begin{aligned}
\Phi=& \frac{1}{2}\left\|\mathbf{P} \odot\left(\mathbf{X} \mathbf{W}_{t}-\mathbf{Y}\right)\right\|_{F}^{2}+\lambda \sum_{k=1}^{c}\left\|\mathbf{Z}_{k} \right\|_{*} \\
&-\lambda \operatorname{tr}\left[\left(\mathbf{X} \mathbf{W}_{t}\right)^{T} \partial\left(\left\|\mathbf{X} \mathbf{W}_{t-1}\right\|_{*}\right)\right] \\
&+\sum_{k=1}^{c} \operatorname{tr}\left[\mathbf{\Lambda}_{k}^{T}\left(\mathbf{X}_{k} \mathbf{W}_{t}-\mathbf{Z}_{k}\right)\right]+\frac{\mu}{2} \sum_{k=1}^{c}\left\|\mathbf{Z}_{k}-\mathbf{X}_{k} \mathbf{W}_{t}\right\|_{F}^{2}
\end{aligned},
\end{equation}
where $\mathbf{\Lambda}_{k}$ is the Lagrangian multiplier and $\mu$ is the penalty factor. Note that $\mu$ is NOT a model hyper-parameter BUT a parameter of the ADMM algorithm that does not need to be tuned; it is introduced for the convenience of optimization. Specifically, with the last term of Eq. \eqref{Eq12}, each sub-problem of the ADMM algorithm becomes strongly convex, which guarantees fast convergence. In the experiments, we will validate that the performance of our model remains unchanged under different values of $\mu$.
\subsubsection{Sub-problem of $\ \mathbf{W}_{t}$}
With $\mathbf{Z}_{k}$ and $\mathbf{\Lambda}_{k}$ fixed, $\mathbf{W}_{t}$ can be updated by
\begin{equation}\label{Eq13}
\begin{aligned}
\mathbf{W}_{t}&=(\mu \sum_{k=1}^{c} \mathbf{X}_{k}^{T} \mathbf{X}_{k})^{-1}\{\lambda \mathbf{X}^{T} \partial(\|\mathbf{X} \mathbf{W}_{t-1}\|_{*}) \\
&+\sum_{k=1}^{c}[\mathbf{X}_{k}^{T}(\mu \mathbf{Z}_{k}-\mathbf{\Lambda}_{k})]-\mathbf{X}^{T}[\mathbf{P} \odot(\mathbf{X} \mathbf{W}_{t-1}-\mathbf{Y})]\}
\end{aligned}.
\end{equation}
\subsubsection{Sub-problem of $\ \mathbf{Z}_{k}$}
With $\mathbf{W}_{t}$ and $\mathbf{\Lambda}_{k}$ fixed, the objective function of $\mathbf{Z}_{k}$ can be written as:
\begin{equation}\label{Eq14}
\Phi_{\mathbf{Z}_{k}}=\frac{\lambda}{\mu}\left\|\mathbf{Z}_{k}\right\|_{*}+\frac{1}{2}\left\|\mathbf{Z}_{k}-\left(\mathbf{X}_{k} \mathbf{W}_{t}+\frac{\mathbf{\Lambda}_{k}}{\mu}\right)\right\|_{F}^{2}.
\end{equation}
The above problem can be solved by the singular value thresholding algorithm \cite {r47}, and the update rule of $\mathbf{Z}_{k}$ is,
\begin{equation}\label{Eq15}
\begin{aligned}
\mathbf{Z}_{k} &=\Gamma\left(\mathbf{X}_{k} \mathbf{W}_{t}+\frac{\boldsymbol{\Lambda}_{k}}{\mu}\right) \\
&=\mathbf{U} \operatorname{Diag}\left[\left(\sigma_{i}-\frac{\lambda}{\mu}\right)_{+} \right]\mathbf{V}^{T},
\end{aligned}
\end{equation}
where $\Gamma$ is the singular value thresholding operator, $\mathbf{U}$ and $\mathbf{V}$ are the matrices of left and right singular vectors of $\mathbf{X}_{k} \mathbf{W}_{t}+\frac{\mathbf{\Lambda}_{k}}{\mu}$, $\operatorname{Diag}$ denotes a diagonal matrix, $\sigma_{i}$ is the $i$-th largest singular value, and $a_{+}=\max (0, a)$.
\subsubsection{Sub-problem of $\ \mathbf{\Lambda}_{k}$}
With $\mathbf{Z}_{k}$ and $\mathbf{W}_{t}$ fixed, $\mathbf{\Lambda}_{k}$ can be updated by
\begin{equation}\label{Eq16}
\boldsymbol{\Lambda}_{k} \leftarrow \boldsymbol{\Lambda}_{k}+\mu\left(\mathbf{X}_{k} \mathbf{W}_{t}-\mathbf{Z}_{k}\right).
\end{equation}
The entire optimization procedure is summarized in Algorithm 1, and a compact numerical sketch of these updates is provided after the algorithm for illustration.
\begin{algorithm} \label{Alg1}
\caption{ADMM Algorithm for NAIM$^3$L}
\textbf{Input}: Feature matrix $\mathbf{X}$, observed label matrix $\mathbf{Y}$, indicator matrix $\mathbf{P}$\\
\textbf{Initialization}: Randomly initialize $\mathbf{W}_0$, $\mathbf{Z}_k = \bf 0$, and $\mathbf{\Lambda}_{k} = \bf 0$ $(k =1, 2, \cdots, c), \mu = 5. $\\
\textbf{Output}: $\mathbf{W}$
\begin{algorithmic}[1]
\STATE Let $t=0$.
\WHILE{not converge}
\STATE $t = t + 1$.
\STATE Update $\mathbf{W}_t$ by Eq. \eqref{Eq13}.
\STATE Update $\mathbf{Z}_k$ by Eq. \eqref{Eq15}.
\STATE Update $\mathbf{\Lambda}_{k}$ by Eq. \eqref{Eq16}.
\ENDWHILE
\STATE \textbf{return} $\mathbf{W}$
\end{algorithmic}
\end{algorithm}
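The following NumPy sketch (ours, with assumed function and variable names, a fixed number of iterations instead of a convergence test, and no ridge on the Gram matrix) implements the updates of Eqs. \eqref{Eq13}, \eqref{Eq15}, and \eqref{Eq16} on the stacked block matrices of Eq. \eqref{Eq7}; it is meant only to illustrate the update rules, not to reproduce our released implementation.
\begin{verbatim}
# Hedged sketch of Algorithm 1 on the stacked block matrices of Eq. (7).
import numpy as np

def subgrad_trace_norm(A):
    # any U V^T from a thin SVD is a valid sub-gradient of the trace norm
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

def svt(A, tau):
    # singular value thresholding operator used in Eq. (15)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def naim3l_admm(X, Y, P, Xk_list, lam=0.1, mu=5.0, n_iter=50, seed=0):
    # X, Y, P: stacked block matrices; Xk_list: per-label blocks X_k
    rng = np.random.default_rng(seed)
    d, c = X.shape[1], Y.shape[1]
    W = 0.01 * rng.standard_normal((d, c))
    Zs = [np.zeros((Xk.shape[0], c)) for Xk in Xk_list]
    Ls = [np.zeros_like(Z) for Z in Zs]
    # Gram matrix assumed invertible; add a small ridge otherwise
    G = np.linalg.inv(mu * sum(Xk.T @ Xk for Xk in Xk_list))
    for _ in range(n_iter):
        D = subgrad_trace_norm(X @ W)        # uses W_{t-1}, as in Eq. (13)
        rhs = (lam * X.T @ D
               + sum(Xk.T @ (mu * Z - L)
                     for Xk, Z, L in zip(Xk_list, Zs, Ls))
               - X.T @ (P * (X @ W - Y)))
        W = G @ rhs                                          # Eq. (13)
        Zs = [svt(Xk @ W + L / mu, lam / mu)
              for Xk, L in zip(Xk_list, Ls)]                 # Eq. (15)
        Ls = [L + mu * (Xk @ W - Z)
              for Xk, Z, L in zip(Xk_list, Zs, Ls)]          # Eq. (16)
    return W
\end{verbatim}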
\subsection{An Efficient Algorithm for Computing $\partial\|\bullet \|_{*}$}
Let $ \mathbf {A} \in \mathbb{R}^{n \times c}$ be an arbitrary matrix and $\mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T}$ be its singular value decomposition (SVD), where $\mathbf {U}$ and $\mathbf {V}$ are the matrices of left and right singular vectors, respectively. It is well known \cite {r47,r48} that the sub-gradient of the trace norm can be computed by
$\partial\|\mathbf{A}\|_{*}=\left\{\mathbf{U} \mathbf{V}^{T}+\mathbf{Q}| \quad \mathbf{Q} \in \mathbb{R}^{n \times c}, \mathbf{U}^{T} \mathbf{Q}=\mathbf{0},\mathbf{Q} \mathbf{V}=\mathbf{0}, \|\mathbf{Q}\|_{2} \leq 1\right\}$, where $\|\bullet \|_{2}$ is the spectral norm.
For simplicity, let $\mathbf{Q}= \mathbf{0}$ , then $\partial\|\mathbf{A}\|_{*}$ can be computed by $\partial\|\mathbf{A}\|_{*}=\mathbf{U} \mathbf{V}^{T}$.
In the following, we will derive a theorem and then design an efficient algorithm to compute $\partial\|\mathbf{A}\|_{*}$ based on the theorem.
\begin{theorem} \label{theo2}
Let $\mathbf {A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T}$ be the SVD of matrix $\mathbf{A}$, then $\mathbf{A}^{T} \mathbf{A}=\mathbf{V} \mathbf{\Sigma}^2 \mathbf{V}^{T} = \mathbf{V} \mathbf{S} \mathbf{V}^{T}$ is the eigenvalue decomposition of the matrix $\mathbf{A}^{T} \mathbf{A}$, and $\partial\|\mathbf{A}\|_{*}=\mathbf{U} \mathbf{V}^{T} = \mathbf{A} \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T} $.
\end{theorem}
\begin{proof}
Since $\mathbf{A}=\mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T}$, we have $\mathbf{A}^{T} \mathbf{A}=\mathbf{V} \boldsymbol{\Sigma} \mathbf{U}^{T} \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T} =\mathbf{V} \boldsymbol{\Sigma}^2 \mathbf{V}^{T}$.
Let $\mathbf{S} = \boldsymbol{\Sigma}^2$; then $\mathbf{A}^{T} \mathbf{A}=\mathbf{V} \mathbf{S} \mathbf{V}^{T}$ is the eigenvalue decomposition of the matrix $\mathbf{A}^{T} \mathbf{A}$.
Rewriting $\mathbf{A}^{T} \mathbf{A} =\mathbf{V} \boldsymbol{\Sigma}^2 \mathbf{V}^{T} =\mathbf{V} \boldsymbol{\Sigma} \mathbf{V}^{T} \mathbf{V} \boldsymbol{\Sigma} \mathbf{V}^{T}$,
we obtain $\left(\mathbf{A}^{T} \mathbf{A}\right)^{-\frac{1}{2}}=\left(\mathbf{V} \boldsymbol{\Sigma} \mathbf{V}^{T}\right)^{-1}=\mathbf{V} \boldsymbol{\Sigma}^{-1} \mathbf{V}^{T} = \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T}$ and $\mathbf{A}\left(\mathbf{A}^{T} \mathbf{A}\right)^{-\frac{1}{2}}=\mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T} \mathbf{V} \boldsymbol{\Sigma}^{-1} \mathbf{V}^{T} =\mathbf{U} \mathbf{V}^{T}$.
Finally, $\partial\|\mathbf{A}\|_{*}=\mathbf{U} \mathbf{V}^{T} = \mathbf{A} \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T} $.
\end{proof}
According to Theorem \ref{theo2}, we can now design an efficient algorithm for computing $\partial\|\mathbf{A} \|_{*}$ by transforming the SVD of an $n \times c$ matrix into the eigenvalue decomposition of a $c \times c$ matrix.
If the full SVD is adopted to compute $\partial\|\mathbf{A} \|_{*}$, the overall time complexity is $\mathcal{O}\left(n c^{2} + c n^{2} \right) + \mathcal{O}\left( nrc\right)$, where $r$ is the rank of matrix $\mathbf{A}$, whereas the time complexity of Algorithm 2 is $\mathcal{O}\left(n c^{2} \right) + \mathcal{O}\left( c^{3} \right)+ \mathcal{O}\left( n c^{2} \right)$. Generally, in the multi-label setting, the number of samples is much larger than the number of labels, i.e., $n\gg c$, so the time complexity of the full-SVD route is dominated by $\mathcal{O}\left( c n^{2} \right)$, whereas that of Algorithm 2 is $\mathcal{O}\left( n c^{2} \right)$, which is much more efficient. Algorithm 2 could be made even more efficient with a more sophisticated, elaborately customized implementation; however, this is not the main focus of this work but a byproduct, which is also of independent interest. Algorithm 2 can be implemented quite easily with a few lines of code in Matlab. More importantly, it is effective enough for us to handle large-scale datasets, since the computational complexity with respect to the number of samples reduces from quadratic to linear. Regarding the ``efficient algorithm'' mentioned here, what we aim to emphasize is that Algorithm 2 can handle large-scale datasets, not that it can beat state-of-the-art SVD algorithms. Experiments in section \ref{5.9} will validate the efficiency of Algorithm 2; a compact numerical sketch of it is also provided after the algorithm for illustration.
\begin{algorithm} [t] \label{Alg2}
\caption{An Efficient Algorithm for Computing $\partial\|\bullet \|_{*}$}
\textbf{Input}: A matrix $\mathbf{A} \in \mathbb{R}^{n \times c}$. \\
\textbf{Output}: $\partial\|\mathbf{A}\|_{*}$.
\begin{algorithmic}[1]
\IF {$ n \ge c $}
\STATE $\mathbf{B}=\mathbf{A}^{T} \mathbf{A}$.
\ELSE
\STATE $\mathbf{B}=\mathbf{A} \mathbf{A}^{T}$.
\ENDIF
\STATE Eigenvalue decomposition of $\mathbf{B}$: $\mathbf{B}= \mathbf{V} \mathbf{S} \mathbf{V}^{T}$.
\IF {$ n \ge c $}
\STATE $\partial\|\mathbf{A}\|_{*}= \mathbf{A} \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T}$.
\ELSE
\STATE $\partial\|\mathbf{A}\|_{*}= \mathbf{V} \mathbf{S}^{-\frac{1}{2}} \mathbf{V}^{T} \mathbf{A}$.
\ENDIF
\STATE \textbf{return} $\partial\|\mathbf{A}\|_{*}$
\end{algorithmic}
\end{algorithm}
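A minimal NumPy sketch of Algorithm 2 is given below (ours; it assumes the relevant Gram matrix is non-singular, i.e., that all singular values of $\mathbf{A}$ are non-zero), together with a sanity check against the SVD-based formula $\partial\|\mathbf{A}\|_{*}=\mathbf{U}\mathbf{V}^{T}$.
\begin{verbatim}
# Hedged sketch of Algorithm 2: sub-gradient of the trace norm via the
# eigen-decomposition of the smaller Gram matrix (assumed non-singular).
import numpy as np

def trace_norm_subgrad(A):
    n, c = A.shape
    B = A.T @ A if n >= c else A @ A.T
    S, V = np.linalg.eigh(B)                 # B = V diag(S) V^T
    G = V @ np.diag(S ** -0.5) @ V.T         # B^{-1/2}
    return A @ G if n >= c else G @ A        # equals U V^T

# sanity check against the SVD-based formula
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 30))
U, _, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(trace_norm_subgrad(A), U @ Vt))   # True
\end{verbatim}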
\subsection{Convergence and Complexity Analysis} \label{IV-C}
\emph{\textbf{Convergence Analysis}}. Before giving the convergence analysis of Algorithm 1, we introduce the following lemma.
\begin{lemma} \label{lemma2} {\rm \cite{r49}}
Consider an energy function $J(x)$ of form $J(x) = J_{cvx}(x) + J_{cav}(x)$, where $J_{cvx}(x)$, $J_{cav}(x)$ are convex and concave functions of $x$, respectively. Then the discrete iterative CCCP algorithm $ x^{t}\longmapsto x^{t+1}$ given by
\begin{equation} \nonumber
\nabla J_{cvx}(x^{t+1}) = - \nabla J_{cav}(x^{t})
\end{equation}
is guaranteed to monotonically decrease the energy $J(x)$ as a function of time and
hence to converge to a minimum or saddle point of $J(x)$.
\end{lemma}
According to Lemma \ref{lemma2} and Eq. \eqref{Eq10}, we can derive that $J_{cvx}\left(\mathbf{W}_{t}\right)+J_{cav}\left(\mathbf{W}_{t}\right) \leq J_{cvx}\left(\mathbf{W}_{t-1}\right)+J_{cav}\left(\mathbf{W}_{t-1}\right)$, which means that the objective function $f$ monotonically decreases. Moreover, according to Theorem \ref{theo1}, it is easy to verify that $f$ is lower bounded by 0. These two facts guarantee that $f$ converges. Besides, in the ADMM algorithm, the surrogate objective function $J$ is strongly convex, which guarantees the global optimum of each sub-problem. Therefore, Algorithm 1 is guaranteed to converge.
\emph{\textbf {Complexity Analysis}}. The time complexity of NAIM$^3$L is dominated by matrix multiplications and inversions. In each iteration, the complexity of updating $\mathbf{W}_t$ in Eq. \eqref{Eq13} is $\mathcal{O}\left[V\left(n d_{\max }^{2} c+d_{\max }^{3}+n d_{\max } c+n c^{2}+n d_{\max } c^{2}\right)\right]$ and the complexity of updating $\mathbf{Z}_k$ in Eq. \eqref{Eq15} is $\mathcal{O}\left[V\left(n c^{3} + n d_{\max } c^{2}\right)\right]$. The update of $\mathbf{\Lambda}_{k}$ in Eq. \eqref{Eq16} costs $\mathcal{O}\left(Vn d_{\mathrm{max}} c^{2}\right)$. Generally, $n>d_{\max }$, $n \gg c$, and $d_{\max } > c$, so the total complexity of NAIM$^3$L is $\mathcal{O}\left(t V n d_{\max }^{2} c\right)$, where $t$ is the number of iterations, $V$ is the number of views, $n$ is the number of samples, $d_{\max }$ is the maximum feature dimension, and $c$ is the number of labels. Therefore, NAIM$^3$L has linear computational complexity with respect to the number of samples, which enables it to handle large-scale data efficiently.
\section{Experiments} \label{secV}
\subsection{Experimental Settings}
\textbf {Datasets:} Five real datasets, Corel5k, Espgame, IAPRTC12, Mirflickr, and Pascal07, are used in our experiments. They are publicly available\footnote{\url{http://lear.inrialpes.fr/people/guillaumin/data.php}} \cite{r50}. For fairness, we use exactly the same settings as in iMVWL \cite{r28}. Specifically, each dataset involves six views: HUE, SIFT, GIST, HSV, RGB, and LAB. For each dataset, we randomly sample 70\% of the data for training and use the remaining 30\% for testing (unlabeled data). Furthermore, in the incomplete multi-view setting, we randomly remove a fraction $\alpha$ of the samples in each view while ensuring that each sample appears in at least one view; in the missing-labels setting, for each label, we randomly remove a fraction $\beta$ of the positive and negative tags of the training samples; in the non-aligned multi-view setting, the samples of all views are randomly shuffled and totally unpaired, as defined in subsection \ref{III-A}. During training, the alignment information of the samples (i.e., the order of the samples in each view) is completely unknown to us. The statistics of the datasets are summarized in Table \ref{tab1}, from which we can see that the label matrices are of full column rank or high rank.
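For reproducibility of the protocol described above, the following NumPy sketch (ours; the names and details are illustrative and may differ from the exact scripts used in iMVWL) simulates incomplete views, missing labels, and non-aligned views from complete data.
\begin{verbatim}
# Hedged sketch of the simulation protocol: mask a fraction alpha of the
# samples per view (each sample kept in at least one view), drop a fraction
# beta of the tags per label, and shuffle each view to break the alignment.
import numpy as np

def simulate(Xs, Y, alpha=0.5, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, V = Y.shape[0], len(Xs)
    # incomplete views: presence mask, each sample kept in >= 1 view
    present = rng.random((n, V)) > alpha
    for i in np.where(~present.any(axis=1))[0]:
        present[i, rng.integers(V)] = True
    # missing labels: per view, zero out a fraction beta of tags per label
    Ys = []
    for _ in range(V):
        Yv = Y.copy()
        for k in range(Y.shape[1]):
            Yv[rng.random(n) < beta, k] = 0
        Ys.append(Yv)
    # non-aligned views: an independent (unknown) permutation per view
    perms = [rng.permutation(n) for _ in range(V)]
    Xs = [X[p] for X, p in zip(Xs, perms)]
    Ys = [Yv[p] for Yv, p in zip(Ys, perms)]
    present = np.stack([present[p, v] for v, p in enumerate(perms)], axis=1)
    return Xs, Ys, present   # rows of P^{(i)} are later zeroed where absent
\end{verbatim}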
\begin{table}[h]
\centering
\caption{Statistics of the Datasets. {\upshape n} and {\upshape c} are the Numbers of Samples and Multiple Labels in Each Dataset, and \#{\upshape avg} is the Average Number of Relevant Labels in Each Sample. {\upshape train\_rank} and {\upshape test\_rank} are the Ranks of the Label Matrices of Training Set and Testing Set.}
\label{tab1}
\begin{tabular}{@{}cccccc@{}}
\toprule
datasets & n & c & \#avg & train\_rank & test\_rank \\ \midrule
Corel5k & 4999 & 260 & 3.396 & 259 & 249 \\
Espgame & 20770 & 268 & 4.686 & 268 & 268 \\
IAPRTC12 & 19627 & 291 & 5.719 & 291 & 291 \\
Mirflickr & 25000 & 38 & 4.716 & 38 & 38 \\
Pascal07 & 9963 & 20 & 1.465 & 20 & 20 \\ \bottomrule
\end{tabular}
\end{table}
\textbf {Compared methods:} In the experiments, NAIM$^3$L is compared with five state-of-the-art methods: iMSF \cite{r51}, LabelMe \cite{r39}, MVL-IV \cite{r31}, lrMMC \cite{r40}, and iMVWL\footnote{We sincerely thank the authors of iMVWL for providing their code; however, the code does not fix the random seeds, so the results in their original paper cannot be reproduced. We run their code with the optimal hyper-parameters suggested in their paper and fix the same random seeds as in our code. Note that half of the results of iMVWL are better than those reported in their original paper.} \cite{r28}. iMSF is a multi-class learning method, and we extend it to multi-label classification by training multiple classifiers (one for each label). IMVL-IV \cite{r29} is an incomplete multi-view and missing multi-label learning method; however, it contains ten hyper-parameters, which makes it very difficult to tune, so for fairness we omit it. Besides, there are also some deep neural network (DNN) based methods for complete multi-view multi-label learning \cite{r3, r13}. In this work, we mainly focus on the novel problem of non-aligned views with missing multiple labels and provide a simple yet effective solution; considering that our model can directly cooperate with DNNs, comparisons with these methods are left for future work. It is worth noting that none of the compared methods can deal with non-aligned views, so in the experiments they are all run only under the missing-labels and incomplete multi-view settings, whereas our NAIM$^3$L is run under all three challenges. The optimal parameters of the competing methods are selected as suggested in the corresponding papers. All experiments are repeated ten times, and both the mean and the standard deviation are reported.
\textbf {Evaluation metrics:} As in iMVWL, four widely used multi-label evaluation metrics are adopted for performance evaluation, i.e., Ranking Loss (RL), Average Precision (AP), Hamming Loss (HL), and the adapted Area Under the ROC Curve (AUC). Formal definitions of the first three metrics can be found in \cite{r9}; the adapted AUC is suggested in \cite{r52}. For consistency, we report 1-RL and 1-HL instead of RL and HL, so the larger the values of all four metrics, the better the performance.
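As a rough guide, the four metrics can be computed with scikit-learn stand-ins as in the sketch below (ours); these stand-ins may differ in detail from the exact definitions in \cite{r9} and the adapted AUC of \cite{r52}, and the macro AUC requires every label to have both positive and negative test samples.
\begin{verbatim}
# Hedged evaluation sketch with scikit-learn stand-ins for 1-HL, 1-RL,
# AP, and AUC (illustrative only; definitions may differ slightly).
import numpy as np
from sklearn.metrics import (hamming_loss, label_ranking_loss,
                             label_ranking_average_precision_score,
                             roc_auc_score)

def evaluate(Y_true_pm1, scores):
    Y01 = (Y_true_pm1 > 0).astype(int)      # map {-1, 1} -> {0, 1}
    pred = (scores > 0).astype(int)         # threshold X W at zero
    return {"1-HL": 1 - hamming_loss(Y01, pred),
            "1-RL": 1 - label_ranking_loss(Y01, scores),
            "AP": label_ranking_average_precision_score(Y01, scores),
            "AUC": roc_auc_score(Y01, scores, average="macro")}

# toy usage
Y = np.array([[1, -1, 1], [-1, 1, 1], [1, 1, -1], [-1, -1, 1]])
scores = np.array([[0.9, -0.2, 0.3], [-0.5, 0.8, 0.1],
                   [0.4, 0.6, -0.7], [-0.1, -0.3, 0.5]])
print(evaluate(Y, scores))
\end{verbatim}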
\subsection{Main Experimental Results} \label{5.2}
\subsubsection{Experiments under Incomplete Views, Missing Labels (and non-Aligned Views)}
In this subsection, we show the results under the settings of incomplete views, missing labels, and non-aligned views. To the best of our knowledge, existing methods cannot work under the non-aligned views setting; for this reason, the compared methods are all run only with the incomplete views and missing labels settings, whereas our NAIM$^3$L is run under all three settings, which means that in these experiments the information available to NAIM$^3$L is much less than that available to the other methods.
Table \ref{tab2} shows the comparison with the other methods under the setting of 50\% incomplete views and 50\% missing positive and negative labels. From this table, we can see that, even without view-alignment information, NAIM$^3$L still outperforms all the compared methods that use view-alignment information on the five datasets. Note that in these experiments the other methods are run under the aligned-views setting, whereas ours is not. Nonetheless, with less available information and without view completion, NAIM$^3$L still achieves better performance. This can be attributed to the joint consideration of the local low-rank and the global high-rank structures within the multiple labels, the latter being almost completely neglected by the other methods. In subsection \ref{V-D}, we conduct extensive experiments to validate the importance of the global high-rank structure of the multiple labels.
\begin{table*}[h]
\centering
\caption{Results on all Five Datasets with the Ratio of Incomplete Multi-view $\alpha = 50$\% and the Ratio of Missing Multi-label $\beta = 50$\%. Values in Parentheses Represent the Standard Deviation, and all the Values are Displayed as Percentages.}
\label{tab2}
\begin{tabular}{@{}cccccccc@{}}
\toprule
dataset & metrics & lrMMC & MVL-IV & LabelMe & iMSF & iMVWL & NAIM$^3$L \\ \midrule
\multirow{4}{*}{Corel5k} & 1-HL(\%) & 95.40(0.00) & 95.40(0.00) & 94.60(0.00) & 94.30(0.00) & 97.84(0.02) & \textbf{98.70}(0.01) \\
& 1-RL(\%) & 76.20(0.20) & 75.60(0.10) & 63.80(0.30) & 70.90(0.50) & 86.50(0.33) & \textbf{87.84}(0.21) \\
& AP(\%) & 24.00(0.20) & 24.00(0.10) & 20.40(0.20) & 18.90(0.20) & 28.31(0.72) & \textbf{30.88}(0.35) \\
& AUC(\%) & 76.30(0.20) & 76.20(0.10) & 71.50(0.10) & 66.30(0.50) & 86.82(0.32) & \textbf{88.13}(0.20) \\ \midrule
\multirow{4}{*}{Pascal07} & 1-HL(\%) & 88.20(0.00) & 88.30(0.00) & 83.70(0.00) & 83.60(0.00) & 88.23(0.38) & \textbf{92.84}(0.05) \\
& 1-RL(\%) & 69.80(0.30) & 70.20(0.10) & 64.30(0.40) & 56.80(0.00) & 73.66(0.93) & \textbf{78.30}(0.12) \\
& AP(\%) & 42.50(0.30) & 43.30(0.20) & 35.80(0.30) & 32.50(0.00) & 44.08(1.74) & \textbf{48.78}(0.32) \\
& AUC(\%) & 72.80(0.20) & 73.00(0.10) & 68.60(0.50) & 62.00(0.10) & 76.72(1.20) & \textbf{81.09}(0.12) \\ \midrule
\multirow{4}{*}{ESPGame} & 1-HL(\%) & 97.00(0.00) & 97.00(0.00) & 96.70(0.00) & 96.40(0.00) & 97.19(0.01) & \textbf{98.26}(0.01) \\
& 1-RL(\%) & 77.70(0.10) & 77.80(0.00) & 68.30(0.20) & 72.20(0.20) & 80.72(0.14) & \textbf{81.81}(0.16) \\
& AP(\%) & 18.80(0.00) & 18.90(0.00) & 13.20(0.00) & 10.80(0.00) & 24.19(0.34) & \textbf{24.57}(0.17) \\
& AUC(\%) & 78.30(0.10) & 78.40(0.00) & 73.40(0.10) & 67.40(0.30) & 81.29(0.15) & \textbf{82.36}(0.16) \\ \midrule
\multirow{4}{*}{IAPRTC12} & 1-HL(\%) & 96.70(0.00) & 96.70(0.00) & 96.30(0.00) & 96.00(0.00) & 96.85(0.02) & \textbf{98.05}(0.01) \\
& 1-RL(\%) & 80.10(0.00) & 79.90(0.10) & 72.50(0.00) & 63.10(0.00) & 83.30(0.27) & \textbf{84.78}(0.11) \\
& AP(\%) & 19.70(0.00) & 19.80(0.00) & 14.10(0.00) & 10.10(0.00) & 23.54(0.39) & \textbf{26.10}(0.13) \\
& AUC(\%) & 80.50(0.00) & 80.40(0.10) & 74.60(0.00) & 66.50(0.10) & 83.55(0.22) & \textbf{84.96}(0.11) \\ \midrule
\multirow{4}{*}{Mirflickr} & 1-HL(\%) & 83.90(0.00) & 83.90(0.00) & 77.80(0.00) & 77.50(0.00) & 83.98(0.28) & \textbf{88.15}(0.07) \\
& 1-RL(\%) & 80.20(0.10) & 80.80(0.00) & 77.10(0.10) & 64.10(0.00) & 80.60(1.11) & \textbf{84.40}(0.09) \\
& AP(\%) & 44.10(0.10) & 44.90(0.00) & 37.50(0.00) & 32.30(0.00) & 49.48(1.24) & \textbf{55.08}(0.18) \\
& AUC(\%) & 80.60(0.10) & 80.70(0.00) & 76.10(0.00) & 71.50(0.10) & 79.44(1.46) & \textbf{83.71}(0.06) \\
\bottomrule
\end{tabular}
\end{table*}
\subsubsection{Experiments under Incomplete Views}
To further validate the effectiveness of our NAIM$^3$L, we conduct experiments under the settings of incomplete views, full labels, and aligned views. iMVWL \cite{r28} is a method that can deal with both incomplete views and missing labels. In this subsection, the settings are 50\% incomplete views, full labels, and aligned views. iMVWL-V and NAIM$^3$L-V denote the corresponding variants dealing with incomplete views. As our method is customized for non-aligned views, NAIM$^3$L-V is still run under the non-aligned views setting. From Table \ref{tab3}, we can see that NAIM$^3$L-V outperforms iMVWL-V. Besides, a counter-intuitive phenomenon emerges when comparing Tables \ref{tab2} and \ref{tab3}: the performance of iMVWL under the full-labels setting is worse than that under the missing-labels setting on the Pascal07, IAPRTC12, and Mirflickr datasets (the worse performance is marked by down-arrows in Table \ref{tab3}). The reason for this counter-intuitive phenomenon is explained and analyzed in subsection \ref{5.2.4}.
\begin{table}[h]
\centering
\caption{Results on all Five Datasets with Full Labels and the Ratio of Incomplete Multi-view $\alpha = 50$\%. Values in Parentheses Represent the Standard Deviation, and all the Values are Displayed as Percentages. Down-Arrows Denote that Performance of iMVWL under Full Label Setting is Worse than that under 50\% Missing Label Setting.} \label{tab3}
\begin{tabular}{cclc}
\toprule
datasets & metrics & iMVWL-V & NAIM$^3$L-V \\ \midrule
\multirow{4}{*}{Corel5k} & 1-HL(\%) & 97.85(0.03) & \textbf{98.70}(0.01) \\
& 1-RL(\%) & 87.00(0.27) & \textbf{88.52}(0.25) \\
& AP(\%) & 28.90(0.89) & \textbf{31.72}(0.32) \\
& AUC(\%) & 87.30(0.26) & \textbf{88.81}(0.23) \\ \midrule
\multirow{4}{*}{Pascal07} & 1-HL(\%) & 88.19(0.28)$\downarrow$ & \textbf{92.87}(0.05) \\
& 1-RL(\%) & 73.74(0.53) & \textbf{79.07}(0.08) \\
& AP(\%) & 43.54(0.87)$\downarrow$ & \textbf{49.25}(0.24) \\
& AUC(\%) & 76.80(0.59) & \textbf{81.81}(0.10) \\ \midrule
\multirow{4}{*}{ESPGame} & 1-HL(\%) & 97.19(0.01) & \textbf{98.27}(0.01) \\
& 1-RL(\%) & 80.96(0.15) & \textbf{82.22}(0.15) \\
& AP(\%) & 24.46(0.40) & \textbf{24.89}(0.17) \\
& AUC(\%) & 81.55(0.19) & \textbf{82.77}(0.16) \\ \midrule
\multirow{4}{*}{IAPRTC12} & 1-HL(\%) & 96.84(0.02)$\downarrow$ & \textbf{98.05}(0.01) \\
& 1-RL(\%) & 83.23(0.26)$\downarrow$ & \textbf{85.14}(0.09) \\
& AP(\%) & 23.40(0.46)$\downarrow$ & \textbf{26.45}(0.14) \\
& AUC(\%) & 83.50(0.22)$\downarrow$ & \textbf{85.29}(0.10) \\ \midrule
\multirow{4}{*}{Mirflickr} & 1-HL(\%) & 83.89(0.20)$\downarrow$ & \textbf{88.17}(0.07) \\
& 1-RL(\%) & 80.59(0.72)$\downarrow$ & \textbf{84.55}(0.08) \\
& AP(\%) & 48.95(0.97)$\downarrow$ & \textbf{55.30}(0.16) \\
& AUC(\%) & 79.69(1.15) & \textbf{83.84}(0.05) \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Experiments under Missing Labels}
Similarly, the experimental settings in this subsection are 50\% missing labels, complete views, and aligned views. iMVWL-L and NAIM$^3$L-L denote the corresponding variants dealing with missing labels only; NAIM$^3$L-L is still evaluated under the non-aligned views setting. Not surprisingly, NAIM$^3$L-L once again outperforms iMVWL-L. When comparing Tables \ref{tab2} and \ref{tab4}, the counter-intuitive phenomenon observed in Table \ref{tab3} disappears; we analyze these results in the next subsection.
\begin{table}[h]
\centering
\caption{Results on all Five Datasets with Complete views and the Ratio of Missing Multi-Label $\beta = 50$\%. Values in Parentheses Represent the Standard Deviation, and all the Values are Displayed as Percentages.}\label{tab4}
\begin{tabular}{cccc}
\toprule
datasets & metrics & iMVWL-L & NAIM$^3$L-L \\ \midrule
\multirow{4}{*}{Corel5k} & 1-HL(\%) & 97.91(0.01) & \textbf{98.70}(0.01) \\
& 1-RL(\%) & 88.00(0.31) & \textbf{88.55}(0.28) \\
& AP(\%) & 30.77(0.58) & \textbf{31.93}(0.39) \\
& AUC(\%) & 88.32(0.32) & \textbf{88.84}(0.26) \\ \midrule
\multirow{4}{*}{Pascal07} & 1-HL(\%) & 88.79(0.12) & \textbf{92.87}(0.06) \\
& 1-RL(\%) & 76.30(0.59) & \textbf{79.04}(0.15) \\
& AP(\%) & 46.95(0.55) & \textbf{49.30}(0.31) \\
& AUC(\%) & 79.24(0.61) & \textbf{81.79}(0.13) \\ \midrule
\multirow{4}{*}{ESPGame} & 1-HL(\%) & 97.23(0.01) & \textbf{98.27}(0.01) \\
& 1-RL(\%) & 81.54(0.22) & \textbf{82.20}(0.14) \\
& AP(\%) & 25.80(0.34) & \textbf{24.90}(0.17) \\
& AUC(\%) & 82.03(0.19) & \textbf{82.74}(0.14) \\ \midrule
\multirow{4}{*}{IAPRTC12} & 1-HL(\%) & 96.90(0.01) & \textbf{98.05}(0.01) \\
& 1-RL(\%) & 84.14(0.17) & \textbf{85.11}(0.10) \\
& AP(\%) & 25.00(0.18) & \textbf{26.47}(0.13) \\
& AUC(\%) & 84.17(0.15) & \textbf{85.27}(0.10) \\ \midrule
\multirow{4}{*}{Mirflickr} & 1-HL(\%) & 84.25(0.58) & \textbf{88.18}(0.07) \\
& 1-RL(\%) & 81.22(1.97) & \textbf{84.57}(0.08) \\
& AP(\%) & 50.23(2.03) & \textbf{55.36}(0.14) \\
& AUC(\%) & 79.48(2.91) & \textbf{83.86}(0.06) \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Analysis and Summary} \label{5.2.4}
In this subsection, we explain the counter-intuitive phenomenon in Table \ref{tab3} and summarize the experimental results of subsection \ref{5.2}. Comparing Tables \ref{tab2} and \ref{tab4}, when the missing label ratio is fixed and the incomplete view ratio changes, the results of iMVWL are intuitively consistent. However, comparing Tables \ref{tab2} and \ref{tab3}, when the incomplete view ratio is fixed and the missing label ratio changes, the results of iMVWL are counter-intuitive. These results indicate that iMVWL handles incomplete views relatively well but handles missing multi-labels inappropriately. This can be attributed to the improper low-rank assumption that iMVWL makes about the entire multi-label matrix (see Eq. (\ref{eqb})), which is violated in reality. In contrast, the results of NAIM$^3$L are all intuitively consistent, since the high-rank assumption about the entire multi-label matrix in NAIM$^3$L is supported by observations of real datasets and is thus more appropriate than that of iMVWL.
To summarize, the results in Tables \ref{tab2}, \ref{tab3}, and \ref{tab4} indicate that NAIM$^3$L consistently performs the best whether it deals with one, two, or all three challenges, and that the high-rank assumption indeed makes sense in all these settings.
\subsection{On the High/Low Rank Validation of the Predicted Multi-Label Matrices}
To justify the rationality of the assumptions in NAIM$^3$L (i.e., the high-rankness of the entire label matrix and the low-rankness of the sub-label matrices), we conduct experiments on the predicted multi-label matrices obtained from the learned $\mathbf{W}$ in Algorithm 1. First, we validate the high-rankness of the predicted entire multi-label matrix by showing in Table \ref{tab5} that the predicted label matrices are of full column rank. Second, we plot the nuclear norm of the predicted entire multi-label matrix in Fig. \ref{Fig3}. As the number of labels is too large to show the low-rankness of each sub-label matrix in a single figure, we report the mean and median values of the nuclear norms of the predicted sub-label matrices to validate the low-rank assumption. From Fig. \ref{Fig3}, we find that the mean and median values are relatively small, which suggests that the predicted sub-label matrices are approximately low-rank, since the nuclear norm is a standard convex surrogate for the rank. In a word, the ranks of the predicted multi-label matrices are consistent with the assumptions of our model.
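For illustration, this check can be carried out with a minimal Python sketch such as the one below; the random matrix is only a stand-in for the predicted label matrix produced by Algorithm 1, and the sub-label matrices defined in the regularizer $\mathcal{R}$ can be processed in the same way.
\begin{verbatim}
# Minimal sketch of the rank / nuclear-norm check.
# The random matrix is a stand-in for the predicted
# label matrix of Algorithm 1 (Corel5k-like size).
import numpy as np

def rank_and_nuclear_norm(M):
    # rank and nuclear norm (sum of singular values)
    return (np.linalg.matrix_rank(M),
            np.linalg.norm(M, ord='nuc'))

predicted_Y = np.random.rand(4999, 260)
rank, nuc = rank_and_nuclear_norm(predicted_Y)
print("rank:", rank, "nuclear norm:", round(nuc, 2))
\end{verbatim}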
\begin{table}[h]
\centering
\caption{Rank of the predicted entire multi-label matrix of each dataset.}\label{tab5}
\begin{tabular}{@{}cccccc@{}}
\toprule
dataset & Corel5k & ESPGame & IAPRTC12 & Pascal07 & Mirflickr \\ \midrule
Rank & 260 & 268 & 291 & 20 & 38 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.48 \textwidth, height = 38.1mm]{Fig3}
\caption{The nuclear norm of the predicted entire label matrix, together with the mean and median values of the nuclear norms of the predicted sub-label matrices.}\label{Fig3}
\end{figure}
\subsection{Ablation Study} \label{V-D}
In this subsection, we study NAIM$^3$L-I (with only the loss function $\mathcal{L}$)
\begin{equation} \nonumber
\begin{aligned}
{\text {NAIM$^3$L-I : }} \min _{\mathbf{W}^{(i)}} \frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{P}^{(i)} \odot\left(\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right)\right\|_{F}^{2},
\end{aligned}
\end{equation}
and NAIM$^3$L-II (with $\mathcal{L}$ and the first low-rank term in the regularizer $\mathcal{R}$)
\begin{equation} \nonumber
\begin{aligned}
{\text {NAIM$^3$L-II : }} \min _{\mathbf{W}^{(i)}} \frac{1}{2} \sum_{i=1}^{V}\left\|\mathbf{P}^{(i)} \odot\left(\mathbf{X}^{(i)} \mathbf{W}^{(i)}-\mathbf{Y}^{(i)}\right)\right\|_{F}^{2} \\
+\lambda\left(\sum_{k=1}^{c}\left\|[\mathbf{X}_{k}^{(1)} \mathbf{W}^{(1)} ; \mathbf{X}_{k}^{(2)} \mathbf{W}^{(2)} ; \cdots ; \mathbf{X}_{k}^{(V)} \mathbf{W}^{(V)}]\right\|_{*}\right)
\end{aligned}
\end{equation}
to validate the effectiveness of the proposed regularizer $\mathcal{R}$, especially the significance of the high-rank term. From Table \ref{tab6}, we can see that NAIM$^3$L-I performs the worst on all five datasets while NAIM$^3$L has the best performance. Compared with NAIM$^3$L-I, NAIM$^3$L-II improves very little, but after adding the high-rank term, all four metrics improve considerably. This demonstrates that the local and the global structures, especially the latter, are beneficial for characterizing the relationships among multiple labels, and that the presented regularizer is effective in learning these relations. Additionally, an interesting result is found when comparing Tables \ref{tab2} and \ref{tab6}: NAIM$^3$L-I performs better than most of the other compared methods in Table \ref{tab2}. The reason may be that the other methods make improper assumptions that are violated in reality, whereas the indicator matrix $\mathbf{P}$ introduced by us alleviates the negative effects of both missing labels and incomplete views.
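For illustration, the masked loss defining NAIM$^3$L-I can be evaluated with the minimal Python sketch below; the shapes and the random data are stand-ins only, and the indicator matrices $\mathbf{P}^{(i)}$ simply zero out the unobserved entries.
\begin{verbatim}
# Minimal sketch of the masked loss of NAIM^3L-I:
# 0.5 * sum_i || P^(i) * (X^(i) W^(i) - Y^(i)) ||_F^2
import numpy as np

def naim3l_I_loss(Xs, Ws, Ys, Ps):
    return 0.5 * sum(
        np.linalg.norm(P * (X @ W - Y), 'fro') ** 2
        for X, W, Y, P in zip(Xs, Ws, Ys, Ps))

rng = np.random.default_rng(0)  # toy two-view example
Xs = [rng.normal(size=(6, 4)), rng.normal(size=(6, 5))]
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(5, 3))]
Ys = [rng.integers(0, 2, (6, 3)).astype(float)
      for _ in range(2)]
Ps = [rng.integers(0, 2, (6, 3)).astype(float)
      for _ in range(2)]
print(naim3l_I_loss(Xs, Ws, Ys, Ps))
\end{verbatim}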
\begin{table}[h]
\centering
\caption{Results of The Variants of NAIM$^3$L with the Ratio of Incomplete Multi-view $\alpha = 50$\% and the Ratio of Missing Multi-label $\beta = 50$\%. Values in Parentheses Represent the Standard Deviation, and all the Values are Displayed as Percentages. All the Three Methods are Implemented under the Non-aligned Views Setting.}
\label{tab6}
\begin{tabular}{ccccc}
\toprule
datasets & metrics & NAIM$^3$L-I & NAIM$^3$L-II & NAIM$^3$L \\ \midrule
\multirow{4}{*}{Corel5k} & 1-HL(\%) & \textbf{98.70}(0.00) & \textbf{98.70}(0.00) & \textbf{98.70}(0.01) \\
& 1-RL(\%) & 82.73(0.20) & 83.54(0.21) & \textbf{87.84}(0.21) \\
& AP(\%) & 30.20(0.40) & 30.47(0.36) & \textbf{30.88}(0.35) \\
& AUC(\%) & 82.99(0.20) & 83.80(0.21) & \textbf{88.13}(0.20) \\ \midrule
\multirow{4}{*}{Pascal07} & 1-HL(\%) & 92.83(0.00) & 92.83(0.00) & \textbf{92.84}(0.05) \\
& 1-RL(\%) & 77.29(0.18) & 77.35(0.17) & \textbf{78.30}(0.12) \\
& AP(\%) & 48.64(0.35) & 48.66(0.35) & \textbf{48.78}(0.32) \\
& AUC(\%) & 79.99(0.17) & 80.55(0.17) & \textbf{81.09}(0.12) \\ \midrule
\multirow{4}{*}{ESPGame} & 1-HL(\%) & \textbf{98.26}(0.00) & \textbf{98.26}(0.00) & \textbf{98.26}(0.01) \\
& 1-RL(\%) & 79.63(0.20) & 79.80(0.11) & \textbf{81.81}(0.16) \\
& AP(\%) & 24.28(0.20) & 24.34(0.16) & \textbf{24.57}(0.17) \\
& AUC(\%) & 80.04(0.20) & 80.24(0.13) & \textbf{82.36}(0.16) \\ \midrule
\multirow{4}{*}{IAPRTC12} & 1-HL(\%) & \textbf{98.05}(0.00) & \textbf{98.05}(0.00) & \textbf{98.05}(0.01) \\
& 1-RL(\%) & 82.52(0.00) & 82.70(0.00) & \textbf{84.78}(0.11) \\
& AP(\%) & 25.71(0.10) & 25.76(0.10) & \textbf{26.10}(0.13) \\
& AUC(\%) & 82.56(0.10) & 82.76(0.10) & \textbf{84.96}(0.11) \\ \midrule
\multirow{4}{*}{Mirflickr} & 1-HL(\%) & \textbf{88.15}(0.00) & \textbf{88.15}(0.00) & \textbf{88.15}(0.07) \\
& 1-RL(\%) & 84.05(0.00) & 84.10(0.00) & \textbf{84.40}(0.09) \\
& AP(\%) & 54.95(0.20) & 54.98(0.16) & \textbf{55.08}(0.18) \\
& AUC(\%) & 83.33(0.00) & 83.39(0.00) & \textbf{83.71}(0.06) \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Statistical Analysis} \label{V-E}
To further analyze the effectiveness of the proposed method, we conduct significance tests on the results reported in Tables \ref{tab2} and \ref{tab6}. Specifically, for Table \ref{tab2}, we employ the Nemenyi test \cite{nemenyi1963,demvsar2006,zhang2019} on the 20 results (4 metrics on 5 datasets) of the six methods and set the significance level $\alpha_{l}$ at 0.05. The critical value is then $q_{\alpha} = 2.850$ and the critical distance is $CD = q_{\alpha} \sqrt{k(k+1) / (6N)}$ with $k = 6$ and $N = 20$. Similarly, for Table \ref{tab6}, $q_{\alpha} = 2.344$ and $CD = q_{\alpha} \sqrt{k(k+1) / (6N)}$ with $k = 3$ and $N = 20$. The results of the Nemenyi tests are shown in Figs. \ref{Fig4a} and \ref{Fig4b}, respectively. As we can see from Fig. \ref{Fig4a}, NAIM$^3$L performs significantly better than all the other methods except iMVWL at the significance level 0.05. Besides, from Fig. \ref{Fig4b}, we can conclude that NAIM$^3$L performs significantly better than the other two variants at the significance level 0.05, which indicates the rationality and importance of our regularization term, especially the high-rank term.
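For reference, the critical distances quoted above can be reproduced with the minimal Python sketch below, which follows the Nemenyi-test convention of \cite{demvsar2006}.
\begin{verbatim}
# Critical distance of the Nemenyi test:
# CD = q_alpha * sqrt(k * (k + 1) / (6 * N))
import math

def nemenyi_cd(q_alpha, k, n):
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

print(nemenyi_cd(2.850, 6, 20))  # Table 2 comparison, CD ~ 1.69
print(nemenyi_cd(2.344, 3, 20))  # Table 6 comparison, CD ~ 0.74
\end{verbatim}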
\begin{figure}[h]
\centering
\subfigure[Nemenyi test of Table 2]{
\includegraphics[width=0.231\textwidth, height = 25mm]{Fig4_a}
\label{Fig4a}
}
\subfigure[Nemenyi test of Table 6]{
\includegraphics[width=0.231\textwidth, height = 25mm]{Fig4_b}
\label{Fig4b}
}
\caption{Statistical analysis of NAIM$^3$L by the Nemenyi test. The significance level is 0.05, and methods that are not connected are significantly different.}
\end{figure}
\subsection{Different Ratios of Incomplete Multi-view and Missing Multi-label Study}
In this subsection, we conduct experiments on Corel5k and Pascal07 to evaluate NAIM$^3$L under different ratios of incomplete multiple views and missing multiple labels. Specifically, to evaluate the impact of different ratios of incomplete multiple views, we fix the ratio of missing multiple labels at $\beta = 50$\% and vary the ratio of incomplete multiple views $\alpha$ within the set $\{10\%, 30\%, 50\%, 70\%, 90\%\}$. Similarly, to evaluate the impact of different ratios of missing multiple labels, we fix $\alpha=50$\% and vary $\beta$ within the set $\{10\%, 30\%, 50\%, 70\%, 90\%\}$. Results of all four metrics on Corel5k and Pascal07 are reported in Figs. \ref{Fig5a}-\ref{Fig5b} and \ref{Fig5c}-\ref{Fig5d}, respectively. 'Full' means that the multiple views and multiple labels in the training set are complete. Note that when the ratio of missing multiple labels is $\beta = 50$\% and the ratio of incomplete multiple views is $\alpha=90$\%, the multi-view learning principle that each sample in the training set must appear in at least one view is violated; thus, the experiment with this ratio is omitted. We can see from Figs. \ref{Fig5a} and \ref{Fig5c} that only when the ratio of missing multiple labels reaches $\beta=90$\% does the performance in terms of all metrics except HL degenerate considerably. From these observations, we can draw two conclusions: first, NAIM$^3$L is relatively robust; second, HL may not be an appropriate metric for multi-label learning. All in all, the lower the incomplete (missing) ratio, the better the performance, which is reasonable and intuitively consistent.
\begin{figure}[h]
\centering
\subfigure[Corel5k]{
\includegraphics[width=0.227\textwidth]{Fig5_a}
\label{Fig5a}
}
\subfigure[Corel5k]{
\includegraphics[width=0.227\textwidth]{Fig5_b}
\label{Fig5b}
}
\subfigure[Pascal07]{
\includegraphics[width=0.227\textwidth]{Fig5_c}
\label{Fig5c}
}
\subfigure[Pascal07]{
\includegraphics[width=0.227\textwidth]{Fig5_d}
\label{Fig5d}
}
\caption{The performance of four metrics on the Corel5k dataset and the Pascal07 dataset under different ratios of incomplete multiple views ($\alpha$) and missing multiple labels ($\beta$). 'Full' means that the multiple views and multiple labels in the training set are complete. The average value and standard deviation are shown in each sub-figure.}
\end{figure}
\subsection{Hyper-parameter Study}
There is only one hyper-parameter in NAIM$^3$L, so it is easy to choose a relatively optimal value. Grid search is adopted to select the hyper-parameter within the set $\{10^{-3}, 10^{-2}, 0.1, 1, 10, 100\}$. For the sake of clarity, the values are scaled by $\log_{10}$ in the figures, and the range of the Y-axis is narrowed to make the performance trends clearer. From Figs. \ref{Fig6a}-\ref{Fig6d}, we can see that NAIM$^3$L achieves relatively good performance when $\lambda$ is in $[0.1, 1]$, and the values of the four metrics vary only slightly, which indicates that our method is not sensitive to the hyper-parameter $\lambda$.
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig6_a}
\label{Fig6a}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig6_b}
\label{Fig6b}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig6_c}
\label{Fig6c}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig6_d}
\label{Fig6d}
}
\caption{Results of the hyper-parameter selection on four metrics with different $\lambda$. The values of $\lambda$ are scaled by $\log_{10}$ for clarity.}
\end{figure}
\subsection{Impacts Study of $\mu$}
In this subsection, we conduct experiments to validate that the parameter $\mu$ introduced in the ADMM algorithm does not change the performance of our model. From Fig. \ref{Fig7a}, we find that all four metrics remain the same when $\mu$ changes, and from Fig. \ref{Fig7b}, we can see that $\mu$ only impacts the convergence rate: the larger $\mu$ is, the more slowly the objective function converges. Thus, $\mu$ is a parameter that does not need to be adjusted. Except for the experiments in this subsection, $\mu$ is fixed at 5 throughout all other experiments in this article.
\begin{figure}[h]
\centering
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig7_a}
\label{Fig7a}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig7_b}
\label{Fig7b}
}
\caption{The performance of all four metrics and the convergence rates under different $\mu$.}
\end{figure}
\subsection{Convergence Study}
In this subsection, we show the convergence curves of the objective function \eqref{Eq5} and the surrogate objective function \eqref{Eq11} to validate the convergence analysis given in subsection \ref{IV-C}. From Figs. \ref{Fig8a} and \ref{Fig8b}, we can see that both objective functions converge after about 20 iterations and that the differences between them are small, which means the surrogate function is an appropriate alternative.
\begin{figure}[h]
\centering
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig8_a}
\label{Fig8a}
}
\subfigure[]{
\includegraphics[width=0.227\textwidth]{Fig8_b}
\label{Fig8b}
}
\caption{Convergence curves of the original objective function and the surrogate objective function.}
\end{figure}
\subsection{Efficiency Study of Algorithm 2} \label{5.9}
In this subsection, we conduct several experiments on computing the sub-gradient of the trace norm for matrices of different sizes to validate the high efficiency of Algorithm 2. Random matrices are used, and the matrix size varies from $10000 \times 100$ to $100000 \times 400$. Algorithm 2 is compared with three widely used SVD methods: the full SVD, the economy-size SVD, and the truncated SVD. The truncated SVD is implemented with parameter $r$ (the rank of the matrix), also known as the reduced SVD in this case. All the experiments are repeated ten times, and the average run times (in seconds) are reported in Table \ref{tab7}. The full SVD fails when the matrix size increases to $40000 \times 100$ because the computer runs out of memory (16~GB RAM). From Table \ref{tab7}, we can find that Algorithm 2 is orders of magnitude faster than the full SVD. Besides, the run times of SVD (econ) and SVD (trunc) are both about twice that of Algorithm 2. All of these results illustrate the efficiency of Algorithm 2.
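For illustration, a comparison of this kind can be sketched in a few lines of Python, covering only the three SVD baselines (not Algorithm 2) and assuming \texttt{scipy} is available for the truncated SVD; the matrix size below is illustrative.
\begin{verbatim}
# Timing sketch: full vs. economy-size vs. truncated SVD
# of a tall random matrix (the full SVD materializes a
# large square U and is therefore memory-hungry).
import time
import numpy as np
from scipy.sparse.linalg import svds

A = np.random.rand(10000, 100)

def timed(f):
    t0 = time.time(); f(); return time.time() - t0

t_full = timed(lambda: np.linalg.svd(A, full_matrices=True))
t_econ = timed(lambda: np.linalg.svd(A, full_matrices=False))
t_trun = timed(lambda: svds(A, k=99))  # k < min(A.shape)
print(t_full, t_econ, t_trun)
\end{verbatim}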
\begin{table*}[htbp]
\centering
\caption{The Average Run Time of Different Methods for Computing the sub-Gradient of the Trace Norm (in Seconds). '-' Denotes that the Computer is out of Memory.}
\label{tab7}
\begin{tabular}{@{}cccccccccc@{}}
\toprule
matrix size & 10000$\times$100 & 10000$\times$200 & 20000$\times$100 & 20000$\times$200 & 40000$\times$100 & 40000$\times$200 & 40000$\times$400 & 100000$\times$200 & 100000$\times$400 \\ \midrule
SVD (full) & 20.0031 & 1.3839 & 80.2122 & 4.9386 & - & - & - & - & - \\
SVD (econ) & 0.0571 & 0.1377 & 0.1459 & 0.2264 & 0.1696 & 0.4400 & 1.3460 & 1.3813 & 3.9416 \\
SVD (trunc) & 0.0475 & 0.1471 & 0.0955 & 0.2133 & 0.1838 & 0.4791 & 1.4396 & 1.4526 & 4.2048 \\
Algorithm 2 & \textbf{0.0334} & \textbf{0.0731} & \textbf{0.0590} & \textbf{0.1603} & \textbf{0.0864} & \textbf{0.2109} & \textbf{0.6736} & \textbf{0.5019} & \textbf{1.6042} \\
\bottomrule
\end{tabular}
\end{table*}
\section{Conclusion} \label{secVI}
In this paper, we propose a concise yet effective model called NAIM$^3$L to simultaneously tackle the missing labels, incomplete views, and non-aligned views challenges with only one hyper-parameter in the objective function. An efficient ADMM algorithm with computational complexity linear in the number of samples is derived. Moreover, NAIM$^3$L outperforms state-of-the-art methods on five real datasets even without view alignment. Note that this framework can be directly extended to a nonlinear, kernelized version and can be combined with deep neural networks, which we leave as future research.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work is supported by the Key Program of NSFC under Grant No. 61732006 and NSFC under Grant No. 61672281. The authors would like to thank Dr. Huan Li and Zhenghao Tan for their generous help and beneficial discussions.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
In a recent paper \cite{homa}, we studied a class of supersymmetric
solutions to string theory which contain regular event horizons
and depend on arbitrary functions. These solutions describe extremal
black strings
with traveling waves and have an inhomogeneous distribution of momentum
along the string. (Solutions of this type were first discussed
in \cite{lawi}.) It was shown that for each traveling wave,
the Bekenstein-Hawking entropy
agreed precisely with the number of BPS states at
weak string coupling having the same momentum distribution as the black string.
This extended recent work [3 - 11] in which the microstates corresponding
to black hole entropy were identified.
These earlier investigations reproduced the gravitational entropy of certain
black holes (or translationally invariant black strings) by counting
the number of bound states of D-branes \cite{pol,pcj} with fixed total momentum.
We found that this agreement
extends to the case where an (essentially) arbitrary momentum distribution
is fixed and the corresponding black string is not translationally invariant.
In \cite{homa}, we considered six dimensional black strings with a constant
internal four dimensional space (so that the total spacetime has ten
dimensions). We studied two different types
of traveling waves; one carrying momentum along the string, and
the other carrying momentum both along the string and
in the internal four dimensional space.
Here, we extend this work in two directions. In section II we
add waves carrying angular momentum to the six dimensional black string.
We compute the horizon area and compare the Bekenstein-Hawking entropy
with the number of D-brane states at weak coupling with a
given angular momentum distribution. Once again we find complete agreement.
In section III, we consider five dimensional black strings (which yield four
dimensional black holes upon dimensional reduction). To
generalize the treatment,
the size of the internal five torus is allowed to vary in
spacetime. We study waves
carrying linear\footnote{
Rotating black holes in four dimensions
are not supersymmetric and five dimensional black strings do not
support traveling waves with angular momentum.} momentum in all
spacelike directions
and examine the solutions near the horizon. We then
compare the horizon area and the
number of D-brane states with the given momentum
distributions.
As before, the Bekenstein-Hawking entropy of the black string agrees
with the counting of states.
It was also shown in \cite{homa} that the traveling waves do not affect the
local geometry of the event horizon; it remains a homogeneous surface.
Physically, this is because the waves become purely outgoing near the horizon.
We will see that the same remains true for all the waves studied here.
\section{Angular Momentum Waves for 6D Black Strings}
\label{rot}
As in \cite{homa}, we consider
six dimensional black string solutions to type IIB string theory
compactified on a four torus with volume ${\sf V} $. We assume one
additional spatial direction is compactified to form a circle of length $L$
and choose $ L \gg {\sf V}^{1/4}$ so that the solutions resemble strings
in six dimensions. We are interested in solutions with
nonzero electric and magnetic charges
associated with the RR three-form; in the limit of weak string coupling,
these charges are carried by D-onebranes and D-fivebranes.
Solutions with these charges
have a regular event horizon even in the extremal limit.
Furthermore, such extremal black strings have a null Killing field
${\partial /{\partial v}}$ so that one can use the
observations of \cite{GV,Garf} to add
traveling waves. Several types of
waves carrying linear momentum were considered in \cite{homa}.
We now study a different class of
waves traveling along the string; these waves will carry
angular momentum.
\subsection{Traveling Waves Carrying Angular Momentum}
In \cite{homa}, we investigated the metric
\begin{equation}
\label{old}
ds^2 = \left(1 + {{r_0^2} \over {r^2}}\right)^{-1} \left[-dudv +
\left( {{p(u) } \over {r^2}} - 2 \ddot{f}_i(u) y^i
- 2 \ddot{h}_i(u) x^i \right) du^2 \right]
+\left(1 + {{r_0^2} \over {r^2}}\right) dx_i dx^i +dy_i dy^i
\end{equation}
which describes a black string carrying a `longitudinal
wave' $p(u)$, an `internal wave' $f_i(u)$, and an `external wave'
$h_i(u)$. Each of these waves carries momentum in a different
direction \cite{cmp,DGHW}. In the metric (\ref{old}), dots denote $d/du$,
the coordinates $y^i$ label points on the $T^4$,
the $x^i$ label points in the four dimensional
asymptotically flat space, and $r^2 = x_i x^i$. In addition,
$u=t-z, \
v=t+z$ where $z$ is a
coordinate on the $S^1$.
For this solution, the dilaton is constant and
the (integer normalized) charges associated with the
RR three-form take the values
\begin{equation}
Q_1 = {Vr_0^2\over g}, \qquad Q_5 = {r_0^2\over g}
\end{equation}
where $g$ denotes the string coupling.
We may now add angular momentum to this black
string as described in \cite{tse1}. To preserve supersymmetry, one needs
equal amounts of angular momentum in two orthogonal planes \cite{bmpv}.
Although \cite{tse1} considered
only uniformly rotating
strings,
the angular momentum density
may again be taken to be a function of $u$ without otherwise changing
the metric or matter fields \cite{cvts}.
This is directly analogous to the situation
for longitudinal momentum.
Writing the metric on the three sphere in the form
$d\Omega^2_3 = d\theta^2 + \sin^2 \theta d \varphi^2
+ \cos^2 \theta d \psi^2$, we obtain
\begin{eqnarray}
\label{rotwave}
ds^2 &=& \left(1 + {{r_0^2} \over {r^2}}\right)^{-1} \left[-dudv +
{{p(u) } \over {r^2}} du^2
+ {2{\gamma(u)} \over {r^2}}
(\sin^2 \theta d \varphi - \cos^2 \theta d \psi ) du \right] \cr
&+&\left(1 + {{r_0^2} \over {r^2}}\right) (dr^2 + r^2
d\Omega^2_3) +dy_i dy^i,
\end{eqnarray}
which describes a black string with angular momentum density
$\gamma(u)/\kappa^2$. The constant
$\kappa^2$ is given by
\begin{equation}\label{defkap}
\kappa^2 \equiv {4G_{10} \over \pi{\sf V}} = {2\pi g^2 \over V},
\end{equation}
where we have used the fact that the ten dimensional Newton's constant is
related to the string coupling by $G_{10} = 8\pi^6 g^2$ in units with
$\alpha' =1$ and set ${\sf V} = (2\pi)^4 V$. This black string has total
angular momentum
\begin{equation}
J_\varphi = - J_\psi = \kappa^{-2} \int_0^L \gamma(u)\ du
\end{equation}
and longitudinal momentum
\begin{equation}
P= \kappa^{-2} \int_0^L p(u) \ du.
\end{equation}
For simplicity, we have set the internal wave $f_i(u)$ and
the external wave $h_i(u)$ to zero in (\ref{rotwave}). These waves could have
been retained without altering the conclusions below.
The longitudinal wave $p(u)$, on the other hand,
cannot be set to zero in the presence of angular momentum.
We will see that, in order for the horizon to have a finite area,
the longitudinal wave $p(u)$ must be at least $\gamma^2(u)/r_0^4$
at each point along the string. Of course, both $p(u)$
and $\gamma(u)$ must be periodic with period $L$.
It turns out that, near the horizon, this metric effectively
coincides with a metric studied in \cite{homa}. As a result, after
making the correspondence clear, we may simply read off the desired
features of the horizon. This is done by replacing $\phi, \psi$
with the new coordinates
\begin{eqnarray}
\label{angles}
\tilde{\varphi} &=& \varphi + {{\beta(u)} \over{(r^2 + r^2_0)^{2}}}, \cr
\tilde{\psi} &=& \psi - {{\beta(u)} \over {(r^2 + r^2_0)^{2}}}, \cr
{\rm with} \ \beta(u) &=& \int^u \gamma(u')\ du'.
\end{eqnarray}
The metric then takes the somewhat complicated form
\begin{eqnarray}
\label{long}
ds^2 &=& \left(1 + {{r_0^2} \over {r^2}}\right)^{-1} \left[-dudv +
\left({{p(u) } \over {r^2}} - {{\gamma^2(u) } \over
{r_0^4 r^2}}\right) du^2\right] \cr
&+& \left(1 + {{r_0^2} \over {r^2}}\right) \left(
dr^2
+ r^2 [d \theta^2
+ \sin^2 \theta d \tilde{\varphi}^2 + \cos^2 \theta d \tilde{\psi}^2]
\right)
+dy_i dy^i \cr
&+&
{{16 r^2\beta^2
} \over {(r^2 + r^2_0)^5} } dr^2
+ {8 \beta
r \over {(r^2 + r_0^2)^2}}dr (\sin^2 \theta d\tilde{\varphi}
- \cos^2 \theta d\tilde{\psi}) \cr
&+& {{r^2 \gamma^2} \over {r_0^4(r^2 + r_0^2)^3}} (r^2 + 2 r_0^2) du^2.
\end{eqnarray}
Note, however, that the first two lines of (\ref{long}) give just the
metric (\ref{old})
for a black string with the longitudinal wave $p - \gamma^2/r_0^4$
and no angular momentum.
The last two lines may be considered as correction terms; they
are all of sub-leading order near the horizon.
In fact, the terms on the third line of (\ref{long})
vanish on the horizon and require no further work. Because the
horizon lies at $u = \infty$, the
$du^2$ term (on the last line above) does not vanish
on the horizon; however,
this term turns out to be
of the same form as the subleading order terms discussed in
the appendix of \cite{homa}. As a result, the techniques used there
apply to this case as well and show both that the metric is at least
$C^0$ at the horizon and that the horizon is locally a homogeneous surface.
Again, the waves do not affect the local horizon geometry.
To compute the area of the horizon, we must study both
its local {\it and} its global structure. From (\ref{angles}),
we may deduce the effects of our coordinate transformation
on the global identifications. Note that the new angles
$(\tilde{\varphi},\tilde{\psi})$
remain periodic with period
$2\pi$ on the three-sphere, but that
under the identification $z \rightarrow z - L$
the new angular coordinates acquire a shift:
$(\tilde{\varphi},\tilde{\psi}) \rightarrow (\tilde{\varphi} +
(r^2 + r^2_0)^{-2} \int_0^L
\gamma du, \ \tilde{\psi} - (r^2 + r^2_0)^{-2} \int_0^L \gamma du)$.
This change in global structure
is analogous to changing the modular parameter of a
torus; the global structure is different, but the volume remains
unchanged. In this case, the new identifications imply
that a translation along the horizon
is accompanied by a rotation of the three-sphere in the two orthogonal
planes. It follows that the
horizon area is the same as for
a longitudinal traveling wave with profile
\begin{equation}
\tilde{p}(u) =p(u) - {\gamma^2(u)\over r_0^4}.
\end{equation}
As explained in \cite{homa}, to write this area in a
simple form we must introduce a function $\sigma$, periodic in
$u$, which is related to $\tilde{p}(u)$ by $\sigma^2 + \dot{\sigma} =
r_0^{-4} \tilde{p}$. The horizon area is then
\begin{equation}
A = 2\pi^2 r_0^4 {\sf V} \int_0^L \sigma(u)\ du.
\end{equation}
When $\tilde{p}$ satisfies the `slowly varying condition'
\begin{equation}
\label{slow5}
\tilde{p}^{3/2} \gg r_0^2 |\dot{\tilde{p}}|,
\end{equation}
we have $\sigma = \sqrt{\tilde{p}/r_0^4}$ and the area takes the form
$A = 2\pi^2 r_0^2 {\sf V} \int_0^L \sqrt{\tilde{p}} du $.
The corresponding Bekenstein-Hawking
entropy is then
\begin{equation}
\label{simple5}
S_{BH} = {A \over {4G_{10}}} =
\sqrt{2\pi Q_1Q_5} \int_0^L \sqrt{\tilde{p}(u)/\kappa^2}\ du.
\end{equation}
In the special case where $p$ and $\gamma$ are constant, the total longitudinal
momentum is $P = pL/\kappa^2$ and the total angular momentum is $J = \gamma L/\kappa^2$.
Setting $P= 2\pi N/L$, we obtain $S_{BH} = 2\pi \sqrt{Q_1 Q_5 N - J^2}$
as expected \cite{bmpv,tse1}.
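Explicitly, the relations above give $p/\kappa^2 = 2\pi N/L^2$, $\gamma = J \kappa^2/L$ and
$\kappa^2/r_0^4 = 2\pi/(Q_1 Q_5)$ in this case, so that
\begin{equation}
S_{BH} = \sqrt{2\pi Q_1 Q_5}\, L \sqrt{\tilde{p}/\kappa^2}
= \sqrt{2\pi Q_1 Q_5}\, \sqrt{2\pi N - {2\pi J^2 \over Q_1 Q_5}}
= 2\pi \sqrt{Q_1 Q_5 N - J^2}.
\end{equation}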
\subsection{Counting BPS States}
We now show that the exponential of the entropy (\ref{simple5}) yields the
number of BPS states at weak string coupling with the same distribution
of angular momentum and longitudinal momentum as the black string.
Recall that at weak coupling the black string corresponds to a collection
of D-fivebranes and D-onebranes. The low energy excitations are described by
a supersymmetric sigma model in $1+1$ dimensions (where the
spatial direction corresponds to our `string' direction $z$)
containing $4 Q_1 Q_5$
bosonic fields and an equal number of fermionic fields.
It was shown in \cite{bmpv} that the number of
D-brane configurations with total momentum $P$ and angular momentum $J$
agrees with the Bekenstein-Hawking entropy of an extreme five dimensional
black hole, which is equivalent to a homogeneous black string, i.e.
(\ref{rotwave}) with $p=P\kappa^2/L,\ \gamma = J\kappa^2/L$.
Our strategy will be to consider the
string to be made up of many short homogeneous segments
and then to
apply the results of \cite{bmpv} to each one in turn. The key point is
thus to show that the fields on different segments may be treated
independently. The argument is identical to the one
given in \cite{homa}, so we
will only sketch the derivation below.
Recall that one cannot fix the
{\it exact} value $j(u)$ of a current in a 1+1 quantum field theory.
The reason is simply that $j(u)$ and $j(u')$ do not commute;
for example, when $j$ is the momentum density, its Fourier
modes satisfy the Virasoro algebra.
We therefore
take a `mesoscopic' viewpoint for our discussion. That is, we imagine
that we use an apparatus which can resolve the system only down to a
`mesoscopic' length scale $l$ which is much larger than the
`microscopic' length scale (discussed below) on which quantum effects
are relevant. We will therefore divide the spacetime into
$ L/l$ intervals $\Delta_a$ ($a \in \{1,...,L/l\}$) of length
$l \ll L$. If our instruments find a momentum distribution $
p(u)/\kappa^2$ and an angular momentum distribution $
\gamma(u)/\kappa^2$,
this simply means that the interval $\Delta_a$ contains a momentum
$P_a =\kappa^{-2} \int_{\Delta_a}p(u) du$ and an angular momentum
$J_a = \kappa^{-2} \int_{\Delta_a} \gamma(u) du$;
we cannot resolve $p$ and $\gamma$
on
smaller scales. Of course, it would be meaningless for us to
assign a distribution $(p,\gamma)$ which has structure on scales of
size $l$ or smaller. As a result, $l$ should be much smaller
than $p/|\dot{p}|$ and $\gamma/|\dot{\gamma}|$ (and
therefore $\tilde{p}/|\dot{\tilde{p}}|$), the `macroscopic' length
scales set by the
variation of the wave profile $(p,\gamma)$.
We shall take the idea that $l$ is much larger than any microscopic
length scale to mean that the `level numbers'
$P_a l$ and
$(P_a l - J_a^2 \kappa^2 r_0^{-4})$ are both large
($\gg Q_1 Q_5$). These
conditions imply that the wavelength
of a typical excited mode with momentum $\tilde p l/\kappa^2$ is much less than $l$,
and they are just the
conditions imposed in \cite{bmpv} to enable a counting of states
on a string of length $l$. Thus we must choose $l$ to satisfy
$\tilde{p}/\dot{\tilde{p}}
\gg l \gg \sqrt{Q_1 Q_5\kappa^2 / \tilde{p}}$. Such an $l$ can exist only when
\begin{equation}
\tilde{p}^{3/2} \gg |\dot{\tilde{p}}| \sqrt{Q_1Q_5 \kappa^2}
= \sqrt{2 \pi} r_0^2 |\dot{\tilde{p}}|,
\end{equation}
which is equivalent to the slowly varying condition
(\ref{slow5}).
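Explicitly, the numerical factor follows from $Q_1 Q_5 = V r_0^4/g^2$ and
$\kappa^2 = 2\pi g^2/V$, which give $\sqrt{Q_1 Q_5 \kappa^2} = \sqrt{2\pi}\, r_0^2$.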
Under this condition, the arguments of \cite{homa}
show that the entropy is carried by modes of sufficiently high
frequency that each interval of length $l$ may
be treated as a separate sigma model.
The counting of BPS states then reduces to considering
states of longitudinal momentum $(P_a - {{J_a^2 \kappa^2}
\over {lr_0^{4}}})$
and {\it no} angular momentum; that is, the angular
momentum $J$ affects the entropy only by reducing the momentum
to be distributed among the entropy carrying modes. Each
segment then carries an entropy of $S_a = \sqrt{2 \pi Q_1Q_5 }
\sqrt{P_a l - {{J_a^2 \kappa^2} \over {r_0^4}}}$ and
the total entropy is
\begin{equation}
S = \sqrt{2 \pi Q_1 Q_5} \int_0^L \sqrt{\tilde{p}/\kappa^2}\ du.
\end{equation}
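For clarity, this follows segment by segment: with $P_a \simeq p(u_a)\, l/\kappa^2$ and
$J_a \simeq \gamma(u_a)\, l/\kappa^2$, each $S_a = \sqrt{2\pi Q_1 Q_5}\, l \sqrt{\tilde{p}(u_a)/\kappa^2}$,
and summing over the $L/l$ segments reproduces the integral.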
Thus the number of BPS states agrees with
the Bekenstein-Hawking entropy (\ref{simple5}) of a black string with the
corresponding distribution of momentum and angular momentum.
\section{Traveling Waves for 5D black strings}
\label{four}
We now turn to solutions of type IIA string theory which describe
black strings in five dimensions. These solutions are related to
{\it four} dimensional black holes. In close analogy with the 6D black
strings, one can add traveling waves to the extremal 5D black strings.
We will show that the Bekenstein-Hawking entropy of these solutions
again corresponds to the number of BPS
states at weak string coupling with the same distribution
of energy and momentum. This extends the results of \cite{mast,jkm}
where this correspondence was shown for the extremal
black strings without traveling
waves. In addition, we expand the discussion to
allow certain moduli of the internal torus
to vary across the spacetime.
We will consider solutions to type IIA string theory carrying magnetic
charge with respect to the RR two-form, electric charge with respect
to the RR four-form, and magnetic charge with respect to the NS-NS
three-form. At weak coupling, these charges are carried by a D-sixbrane,
D-twobrane, and solitonic fivebrane, respectively\footnote{One can
also form five dimensional black strings or four dimensional black holes
with other choices of charges \cite{jkm,klts,bala}. This choice was first
used in \cite{mast}. We follow the conventions of \cite{hlm}.}. We take six
dimensions to be compactified to form a torus and assume translational
symmetry in five of these dimensions. The sixth ($z$) direction
will have a length $L$ much
longer than the others, and will be the direction in which the waves propagate.
Hence these solutions describe black strings with traveling waves
in five dimensions. Four of the toroidal directions (labeled by
$y^a ,\ a = 1,2,3,4$) form a torus of
volume ${\sf V} = (2\pi)^4 V$,
and one ($w$) forms
a circle of length $\tilde{L}$.
The other
four dimensions ($t,x^i,\ i=1,2,3$) will be asymptotically
flat.
\subsection{Classical Black String Solutions}
\label{cs}
The solution of type IIA string theory describing five dimensional
black strings with the above charges
was found in \cite{tsyhar}. It is characterized by
three harmonic functions
\begin{equation}
H_2(r) = 1+ {r_2 \over r}, \qquad H_5(r) = 1+{r_5 \over r},
\qquad H_6(r) =1+{r_6\over r},
\end{equation}
where $r^2 = x_ix^i$.
The constants $r_2,\ r_5,\ r_6$ are related to the integer charges by
\begin{equation}\label{chargefd}
Q_2 = {2r_2 V\over g}, \qquad Q_5 = {r_5 \tilde L\over \pi},
\qquad Q_6 = {2r_6\over g}.
\end{equation}
Introducing the null coordinates $u = t - z$ and $v = t + z$
and using the flat metric $\delta_{ij}$ and $\delta_{ab}$ to
raise and lower the indices $i \in \{1,2,3\}$ and $a \in \{1,2,3,4\}$,
the Einstein metric for the black string takes the form
\begin{eqnarray}
\label{harm}
ds^2 &=& H_2^{3/8} H_5^{6/8} H_6^{7/8}\Big[-H_2^{-1} H_5^{-1} H_6^{-1}
du dv \cr &+& H_5^{-1}
H_6^{-1}dy_a dy^a
+H_2^{-1} H_6^{-1} dw^2 + dx_idx^i\Big]
\end{eqnarray}
and the dilaton is $e^{2\phi} = H_2^{1/2} H_5 H_6^{-3/2}$.
The horizon is at $r=0$, $u = \infty$.
Note that $\phi$ approaches a constant both at infinity and
at the horizon.
When all three harmonic functions are equal, $H_2 = H_5 = H_6\equiv H$,
the dilaton
vanishes and the metric reduces to
\begin{equation}
ds^2 = -H^{-1} du dv + dy_a dy^a + dw^2 + H^2 dx_idx^i
\end{equation}
which is just the product of a five torus and the
extremal black string solution of the five dimensional
Einstein-Maxwell theory \cite{ght}.
Since the solutions (\ref{harm}) all possess the null Killing vector
field ${\partial} / {\partial v}$, we may again add traveling
waves using the methods of \cite{GV,Garf}.
The result is a metric of the form
\begin{eqnarray}
\label{4waves}
ds^2 &=& H_2^{3/8} H_5^{6/8} H_6^{7/8}\Big[H_2^{-1} H_5^{-1} H_6^{-1}
du \left[-dv + K(u,x,y) du \right] \cr &+& H_5^{-1}
H_6^{-1}dy_a dy^a
+H_2^{-1} H_6^{-1} dw^2 + dx_idx^i\Big]
\end{eqnarray}
where $K$
satisfies
\begin{equation}
\label{K}
(\partial_i \partial^i + \partial_a \partial^a +
{\partial}_w
{\partial}_w
) K = 0.
\end{equation}
That is, $K$ is harmonic in the toroidal ($w, y^a$) and
asymptotically flat ($x^i$) coordinates, but has arbitrary $u$ dependence.
As a result,
$K$ contains free functions that describe traveling waves
along the `string direction' ($z$). Since the surface $r=0$
is a coordinate singularity in (\ref{harm}), we only require (\ref{K})
to hold for $r\ne 0$. Nonetheless, the
horizon will be a regular surface for all of the metrics we consider.
For the moment, we will
ignore the details of the compactifications; they will be
discussed below.
We wish to consider only waves
that are in some way `anchored' to the black string, i.e.,
waves that either become pure gauge or have unphysical singularities
when the black string is removed. Such waves were first discussed in
\cite{cmp,DGHW} in connection with a fundamental string and later in \cite{homa}
for a six dimensional black string. In the present context, the waves are
given by
\begin{equation}
\label{waves}
K = {{p(u)} \over {r} }
-2 \ddot{f}_a(u) y^a - 2 \ddot{b}(u)w - 2 \ddot{h}_i(u) x^i.
\end{equation}
As before, the $p$ term represents `longitudinal
waves,' the $f_a$ and $b$ terms represent `internal waves', and
the $h_i$ term represents `external waves.'
We will see that
this terminology corresponds to the various directions in which
the waves carry momentum. Since the internal and external waves are
clearly negligible compared to the longitudinal wave
near the horizon ($r=0$), one might expect that they do not contribute to
the horizon area. We will see that this is indeed the case.
With the waves (\ref{waves}), the metric (\ref{4waves}) is neither
asymptotically flat nor translationally invariant in the toroidal
directions. Both difficulties can be resolved by transforming
to coordinates $(u,v',w',x',y')$ which are related to those above
through
\begin{eqnarray}
v' &=& v + 2 \dot{f}_a y^a + 2 \dot{b} w + 2 \dot{h}_i x^i
+ \int^u (\dot{f}^2 + \dot{b}^2 + \dot{h}^2) du, \cr
w' &=& w + b, \cr
x'{}^i &=& x^i + h^i, \cr
y'{}^a &=& y^a + f^a,
\end{eqnarray}
where $\dot{f}^2 = \dot{f}_a \dot{f}^a$, $\dot{h}^2 = \dot{h}_i
\dot{h}^i$.
The metric then takes
the form
\begin{eqnarray}
\label{af}
ds^2 &=& H_2^{3/8} H_5^{6/8} H_6^{7/8}\Bigg[H_2^{-1} H_5^{-1} H_6^{-1} du
\Bigg(-dv' + \bigg[ {{p+r_2 \dot{f}^2 + r_5 \dot{b}^2}
\over r} + (H_2H_5H_6 -1) \dot{h}^2 \bigg] du \cr
&-& {2{r_2} \over {r}} \dot{f}_a dy'{}^a - {2{r_5} \over r} \dot{b} dw'
- 2 (H_2H_5H_6 -1 ) \dot{h}_i dx'{}^i
\Bigg) \cr &+& H_5^{-1}
H_6^{-1}dy'_a dy'{}^a
+H_2^{-1} H_6^{-1} dw' dw' + dx'_idx'{}^i\Bigg].
\end{eqnarray}
It is in terms of these coordinates that we make the periodic
identifications which compactify the spacetime. Setting $z'=(v'-u)/2$, the
large $S^1$ is defined by the identification $z' \rightarrow z' - L$,
or $(u,v',w',x',y') \rightarrow (u + L, v' -L,w', x',y')$, while the
small five-torus is defined by the identifications $w' \rightarrow w' +
\tilde L$, and $y' \rightarrow y' + a_I$
for an appropriate set of four vectors $a_I$.
Clearly, $p(u)$, $\dot{f}(u)$, $\dot{b}(u)$,
and $\dot{h}(u)$ must be periodic in $u$. In addition, we will
take $f(u)$, $b(u)$, and $h(u)$ to be periodic themselves.
In the ten-dimensional
space before compactification, this amounts to considering only
black strings with no net momentum in the $w'$, $x'$, and $y'$ directions.
The asymptotic charges can be read directly from the metric (\ref{af}).
As in \cite{homa}, the momentum is not
distributed uniformly along the string, and we will match this
momentum profile to a configuration of D-branes at weak coupling.
Defining\footnote{Although $\kappa^2$ will play the same role in this section
as it did in the previous one, its precise definition is different.
In section II, $\kappa^2$ was related to the six dimensional Newton's constant;
here it is related to Newton's constant in five dimensions.}
\begin{equation}\label{kappafd}
\kappa^2 \equiv {4 G_{10}\over {\sf V} \tilde L} =
{2\pi^2 g^2 \over V \tilde L},
\end{equation}
the black string with traveling waves (\ref{af}) has ADM momentum
\begin{eqnarray}
\label{momen}
P_{z'} &=& \kappa^{-2} \int_0^L du
\left[ p + r_2 \dot{f}^2 + r_5 \dot{b}^2
+ (r_2 + r_5 + r_6)\dot{h}^2 \right]\cr
P_a &=& \kappa^{-2} r_2 \int_0^L du \ \dot{f}_a \cr
P_{w'} &=& \kappa^{-2} r_5 \int_0^L du \ \dot{b} \cr
P_i &=& \kappa^{-2} (r_2 + r_5 + r_6)\int_0^L du\ \dot{h}_i \ ,
\end{eqnarray}
and ADM energy
\begin{equation}
E = \kappa^{-2} ( r_2 + r_5 + r_6) L + P_{z'} \ .
\end{equation}
It is clear from (\ref{momen}) that the black string is composed of
three different constituents:
Oscillations of the same amplitude in different directions
result in different amounts of momentum. This is consistent with the
weak coupling description in terms of branes wrapped around different
directions,
but is {\it not} one would expect from, say,
a large rubber band. The spacetime metric thus records the fact that
any `source' must have several components.
\subsection{The Event Horizon}
We would like to show
that $r=0$, $u = \infty$ is a horizon in the spacetime (\ref{4waves})
and to compute the horizon area.
The calculations are structurally identical to
those performed in \cite{homa} but are slightly more complicated
due to the presence of the different harmonic functions $H_2$, $H_5$,
and $H_6$. We will not present the full details here, but
the reader may reconstruct them by copying the steps
described in the appendix of \cite{homa}. Such
techniques suffice to show that the
horizon is at least $C^0$ and its area may
be computed using only the leading order behavior
of the metric (\ref{4waves}) near $r=0$. Setting $r= 4r_2 r_5 r_6/R^2$,
this leading order metric takes the form
\begin{equation}
ds^2 = (r_2^3 r_5^6 r_6^7)^{1/8} \left[4\left( R^{-2}[-du dv + dR^2]
+ {p(u) \over 4 r_2r_5 r_6} du^2\right) + d\Omega_2^2
+ {{dy_a dy^a} \over{r_5 r_6}} +
{{dw^2} \over {r_2 r_6}} \right].
\end{equation}
Using the results in \cite{homa} one finds that this spacetime has a homogeneous
horizon with area
\begin{equation}
A = 8 \pi r_2 r_5 r_6 \tilde{L} {\sf V} \int_0^L \sigma(u)\ du
\end{equation}
where $\sigma$ is a periodic function of $u$ satisfying
$\sigma^2 + \dot{\sigma} = p/(4r_2 r_5 r_6)$.
In analogy with the six dimensional
case, when
\begin{equation}
\label{slow4}
p^3 \gg (r_2r_5r_6) \dot{p}^2,
\end{equation}
the area becomes $A = 4\pi \sqrt{r_2r_5r_6}
\tilde{L} {\sf V} \int_0^L \sqrt {p(u)} \ du$
and the Bekenstein-Hawking entropy may be
written
\begin{equation}
\label{BH4}
S_{BH} = \sqrt{2 \pi Q_2 Q_5 Q_6} \int_0^L \sqrt{p(u)/\kappa^2}\ du.
\end{equation}
\subsection{Counting BPS States}
Consider the weak coupling limit of the IIA string theory with six dimensions
compactified to form a torus as above; one circle has length $L$ much larger
than the rest, another
has length $\tilde L$ and the remaining four have volume ${\sf V} = (2\pi)^4 V$.
The black strings (\ref{harm}) correspond to $Q_2$
D-twobranes wrapped around $L$ and $\tilde L$, $Q_5$ solitonic fivebranes
wrapped around $L$ and ${\sf V}$, and $Q_6$ D-sixbranes wrapped around the entire
six torus. Each brane contributes an effective string tension in the $L$
direction given by
\begin{equation}
T_2 = {\tilde L \over 4\pi^2 g}, \qquad T_5 = {V\over 2\pi g^2},
\qquad T_6 = {V\tilde L\over 4\pi^2 g}.
\end{equation}
It then follows from (\ref{chargefd}) and (\ref{kappafd}) that
\begin{equation}\label{qt}
{r_2\over \kappa^2} = Q_2 T_2, \qquad {r_5\over \kappa^2} = Q_5 T_5, \qquad
{r_6\over \kappa^2} = Q_6 T_6.
\end{equation}
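For instance, the first of these follows directly from (\ref{chargefd}) and (\ref{kappafd}):
$r_2/\kappa^2 = \left({g Q_2 \over 2V}\right)\left({V \tilde L \over 2\pi^2 g^2}\right)
= {Q_2 \tilde L \over 4\pi^2 g} = Q_2 T_2$; the relations for $r_5$ and $r_6$ work the same way.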
We begin by setting
$b = h_i = 0$, and proceed as in section III of \cite{homa}.
We again adopt a mesoscopic viewpoint and divide the string into
a number of small intervals. We will consider only the case
$Q_6 = 1$, in which the
degrees of freedom that contribute to the entropy
correspond to oscillations of the twobranes in the sixbranes.
There are actually $4Q_2Q_5$ (bosonic) degrees of freedom of this type
since the fivebranes `cut' each twobrane into $Q_5$ different
pieces \cite{mast}, each with an effective average tension of $T_2/Q_5$
in the string direction.
We interpret the {\it field} momentum of these bosonic fields as
carrying {\it spacetime} momentum in the internal $y^a$ directions since
both generate translations of the twobranes.
Recall that a field $\chi$ with tension $T$ has momentum density $T\dot{\chi}$,
and the black string has momentum density $r_2 \dot{f}_a/\kappa^2$ in the
$y^a$ direction (\ref{momen}). Thus
the condition for the $Q_2 Q_5$ fields $\chi_A$
associated with fluctuations in the $y_1$
direction to have the
same momentum as the black string is
\begin{equation}
\label{constraint}
{T_2 \over Q_5} \sum_{A=1}^{Q_2Q_5} \dot{\chi}_A =
{r_2{\dot{f}_1} \over
{\kappa^2}} = Q_2 T_2 \dot{f}_1.
\end{equation}
It follows that $f_1$ is just the average fluctuation of the D-branes.
We impose similar conditions for the other three internal directions
($y_2, y_3, y_4$) transverse to the twobranes.
The counting may now be performed just as in
\cite{homa}. Applying our `mesoscopic' viewpoint, we
divide the string into segments of length $l$ and restrict only the
average values of the oscillations $\chi_A$ to satisfy (\ref{constraint})
on each segment.
Thus the internal momentum is carried by the zero mode in each
segment. An
internal momentum distribution $Q_2 T_2\dot{f}^a$ among $Q_2Q_5$
twobranes of tension $T_2/Q_5$ requires a longitudinal momentum of at least
$Q_2 T_2\dot{f}^2 $. The remaining longitudinal momentum is just $p/\kappa^2$,
and the entropy arises from distributing this momentum arbitrarily.
When each interval is highly excited, this gives the result
\begin{equation}
\label{4BPSS}
S = \sqrt{2 \pi Q_2 Q_5} \int_0^L \sqrt{p(u)/\kappa^2}\ du.
\end{equation}
As usual, the intervals can be highly excited only when
$p^3 \gg (r_2 r_5 r_6) \dot{p}^2$ so that the slowly varying condition
(\ref{slow4}) holds. Since $Q_6=1$,
(\ref{4BPSS}) agrees with the Bekenstein-Hawking
entropy (\ref{BH4}) of the black string.
Now consider the waves $b(u)$ and $h_i(u)$.
In general, the oscillation $\chi$ of a string with tension $T$ has momentum
$T \dot \chi$ in the transverse direction and $T \dot{\chi}^2$
in the longitudinal direction. It thus follows from (\ref{momen}) and (\ref{qt})
that the wave $b(u)$ is
naturally interpreted as corresponding to macroscopic oscillations
of the fivebranes, and the wave $h_i(u)$ is interpreted as coordinated
oscillations of all the branes together in the macroscopic directions.
The contributions of $b(u)$ and $h_i(u)$ to both the longitudinal and transverse
momenta can be accounted for in this way.
At weak string coupling, there are a small number of fields which
describe these oscillations. They act just like the fields $\chi$
described above: By requiring that their field momenta agree
with the transverse momentum of the black string, we `use up' part of the
longitudinal momentum. The remaining longitudinal momentum is just
$p(u)/\kappa^2$, independent of $b(u)$ and $h_i(u)$. Since the number of
such fields is small compared to $Q_2 Q_5$,
they do not change the number of degrees of freedom
which can carry the longitudinal momentum to leading order. Thus the entropy is
again given by (\ref{4BPSS}) in agreement with the Bekenstein-Hawking
entropy of the black string.
\section{Discussion}
We have considered two new families of solutions
containing black strings with traveling waves. One describes six dimensional
black strings with waves carrying angular momentum, and the other
describes five dimensional black strings with waves carrying linear momentum
in various directions. We computed the horizon area,
and counted the number of
BPS states at weak string coupling with the same distribution
of momentum and angular momentum. In each case, the Bekenstein-Hawking
entropy agreed with the number of microscopic string states.
Combined with the results of \cite{homa}, this leads to the following
conclusion:
{\it In every case where the entropy
of an extreme
black hole has been understood in terms of string states (five dimensional
black holes including rotation, and four dimensional black holes without
rotation)
one can apply the same counting arguments in a quasilocal way
along the D-brane, to explain the entropy of an inhomogeneous black string.}
Note, however, that this quasilocal picture is lost near the horizon
of the black string.
Even though the horizon always adjusts itself so that its area agrees with
the counting of states, it does so globally - not locally. Locally, the
horizon remains homogeneous, and is largely unaffected by the
traveling waves. This is similar to what happens with other disturbances
in the spacetime. For example, one can add another sixbrane at a point
$x_0$ outside the black string by simply replacing $H_6$ in (\ref{harm})
with
\begin{equation}
H_6 = 1 + {r_6\over r} + {g\over 2 |x-x_0|}
\end{equation}
It is clear that the effect of this new sixbrane becomes negligible near
the horizon $r=0$; both the area and local geometry of
the horizon remain unchanged. This is expected
since the area of the horizon is a measure of the number of
internal states, and it should not be affected by the
presence of matter (such as the new sixbrane) with which the
string does not interact.
Let us now return to the six dimensional black string with angular momentum.
Recall that in
section \ref{rot} we imparted angular momentum to
our string by adding the wave $\gamma(u)$, which left
the linear momentum density unchanged. However, as
described in \cite{cmp,DGHW}, one may also give
the black string angular momentum by adding appropriate
external waves $h_i(u)$. Roughly speaking, the wave $\gamma(u)$
corresponds to spinning the string, while the external wave
$h_i(u)$ corresponds to a string gyrating so as to act like a
rotating helical coil. If we specify both the
angular and the linear momentum density of the string, the contribution
to the angular momentum from the `spin waves' $\gamma(u)$
and the `gyrating external waves' $h_i(u)$ are uniquely determined.
We have seen that the Bekenstein-Hawking
entropy of the resulting spacetime
agrees with the counting of weak coupling bound
states whenever our mesoscopic picture is well-defined,
for any combination of gyration and rotation.
In addition, it is illustrative to consider a different point of
view. Suppose one wanted to
study rotating black {\it holes}. Both
spinning and gyrating black strings reduce to rotating black holes
in five dimensions, at least after appropriate averaging
(see \cite{cmp,DGHW}). However, when considering the reduced
solution it does not seem appropriate to specify the
{\it distribution} of momentum along the string. As a result, we
wish to consider the collection of D-brane
states corresponding to fixed total momentum $P$
and angular momentum $J$, but without any further restriction.
Such states will in general include both
spin waves and gyrating waves; we would like to know which
contribution dominates (if any) and to see that the result
remains compatible with the entropy of the five dimensional black
hole.
Suppose that we consider the somewhat smaller collection of
states for which the spin waves contribute an angular momentum
$J_{spin}$ and the gyrating waves contribute an angular momentum
$J_{gyro}$, where $J = J_{spin}+ J_{gyro}$. The relative contributions
of such states
can be determined from the corresponding entropy. Recall
that both waves affect the entropy by reducing the longitudinal
momentum that may be freely distributed among the various modes.
{}From section \ref{rot}, the spin angular momentum reduces
the available longitudinal momentum by $J_{spin}^2 \kappa^2/Lr_0^4$.
On the other hand, the gyrating wave reduces the longitudinal
momentum by a term of the form
$J_{gyro}^2 \kappa^2/Lr_0^2A^2$ which depends on the amplitude $A$
of the wave. This is to be
expected, as an angular momentum $J$ typically contributes an
energy of the form $J^2/I$ where $I$ is a moment of inertia
($Lr_0^2/\kappa^2$ is the mass of the string).
The division of the angular momentum into spin and gyration
is determined by maximizing the entropy (i.e. the available longitudinal
momentum) subject to the constraint
$J = J_{spin} + J_{gyro}$.
Clearly, the result depends on the allowed radius $A$ of the gyrations.
Let us suppose that the gyrations can be arbitrarily large. In
this case, the associated black string is quite far from being
translationally invariant and appears to be shaking wildly.
The averaging process that gives the five dimensional spacetime
must then become ill-defined. It seems unlikely that any observer
could be `effectively five-dimensional' in such a setting; any
observer would find large discrepancies from the predictions of the
five dimensional black hole solution. As a result,
such states do not correspond to our picture of a five dimensional
black hole. The setting of the problem thus restricts
the amplitude $A$ of the gyrations to be much less than $r_0$, the
radius of the five dimensional black hole. With the condition
$A \ll r_0$, the entropy is maximized
for $J_{spin}=J$, $J_{gyro} = 0$. This value is then overwhelmingly
likely and the entropy of the entire collection of states with
$J_{spin} + J_{gyro} =J$ and $A \ll r_0$ is
\begin{equation}
S = \sqrt{2 \pi Q_1 Q_5} \sqrt{L P - J^2 \kappa^2/ r_0^4 }
= 2\pi \sqrt{Q_1 Q_5 N - J^2},
\end{equation}
in agreement with the entropy of the five dimensional black hole
\cite{bmpv}.
\acknowledgements
It is a pleasure to thank D. Lowe and A. Strominger
for useful discussions. This
work was supported in part by NSF grant PHY95-07065.
\section{Introduction} \label{sec:intro}
Clarifying the interstellar (IS) dust properties in external galaxies is crucial not only for inferring the intrinsic properties of astronomical objects but also for understanding star/planet formation in galaxies beyond the Milky Way (MW), where dust plays important roles in the radiative transfer and chemistry. Dust properties in the MW and the Large/Small Magellanic Clouds have been extensively studied through analysis of the extinction in stars and the dust radiation \citep[e.g.,][for a review]{Draine2003}, while in external galaxies they have been investigated in much less detail.
The existence of non-MW-like dust in external galaxies has been implied by the non-MW-like extinction observed in reddened Type~Ia supernovae (SNe), where smaller total-to-selective extinction ratios \citep[$R_{\rm{V}} \lesssim 2$; \textit{e.g.},][]{Tripp1998, Elias-Rosa2006, Elias-Rosa2008, Krisciunas2006, Kowalski2008, Wang2008, Nobili2008, Folatelli2010, Burns2014, Amanullah2014, Amanullah2015, Cikota2016} compared to the typical values for dust extinction in the MW \citep[$R_V\sim 3.1$; \textit{e.g.},][]{Fitzpatrick2007} have been found. Similarly, for some Type~IIP SNe, \citet[][]{Poznanski2009} reported a steep extinction law ($R_{\rm{V}} < 2$) by analyzing their light curves using an empirical standardization method.
The dust in the line of sight to SNe not only extinguishes, but also polarizes the SN light (interstellar polarization; ISP). This allows us to investigate the properties of the dust in the line of sight from polarimetric observations of SNe, particularly from the empirical relation between extinction and the wavelength of the polarization maximum, $\lambda_{\rm{max}}$, by \citet[][$R_{V} \sim 5.5\lambda_{\rm{max}}$ $\lbrack \mu$m$\rbrack$]{Serkowski1975}. The ISP of reddened Type~Ia SNe shows a polarization maximum at shorter wavelengths \citep[$\lambda_{\rm{max}} \lesssim 0.4$~$\mu$m; \textit{e.g.},][]{Patat2015, Zelaya2017} than the typical MW ISP \citep[$\lambda_{\rm{max}} \sim 0.545$~$\mu$m;][]{Serkowski1975}. A similar property has also been reported for the ISP toward other types of transients, including the Type~Ib/c SN~2005bf \citep[][]{Maund2007} and the optical transient NGC~300~OT2008-1 \citep[][]{Patat2010}. On the other hand, the Type~II SN~1999gi shows a MW-like ISP curve for its host galaxy, characterized by $\lambda_{\rm{max}}= 0.53$~$\mu$m \citep[][]{Leonard2001}. This demonstrates the existence of MW-like dust in its host galaxy.
It is important to investigate the universality of such non-MW-like dust in the universe. If such non-MW-like dust grains are common at least in some places in external galaxies, this might qualitatively affect the derivation of the intrinsic properties of astronomical objects and the picture of star/planet formation in galaxies. Given that the existence of non-MW-like dust has been inferred mainly for Type~Ia SN host galaxies and similar investigations for core-collapse SNe have been quite limited, it is important to increase the number of ISP measurements toward core-collapse SNe; by studying them, we may probe the properties of the IS dust in types of galaxies different from the MW and Type~Ia SN host galaxies, or in different environments within the same galaxies.
In this work, we study the ISP of two reddened Type~II SNe, \textit{i.e.}, SNe~2022aau and 2022ame. SN~2022aau was discovered on 20.60 January 2022 UT during the ongoing $D<40$ Mpc (DLT40) one-day cadence SN search \citep[][]{Tartaglia2018} in NGC~1672 \citep[][]{Bostroem2022a}, located at $z=0.004440$ \citep[][]{Allison2014}. About one day later, the object was classified as a Type~II SN \citep[][]{Siebert2022}. A non-detection of SN~2022aau on 19.56 January 2022 UT, which is about one day before the detection, was reported \citep[][]{Bostroem2022a}. SN~2022ame was discovered on 27.51 January 2022 UT in NGC~1255 \citep[][]{Itagaki2022}, located at $z=0.005624$ \citep[][]{Koribalski2004}. About one day later, the object was classified as a Type~II SN \citep[][]{Bostroem2022b}. The non-detection of SN~2022ame on 24.86 January 2022 UT, which is about three days before the detection, was obtained by the Asteroid Terrestrial-impact Last Alert System (ATLAS) \citep[][]{Tonry2018}.
In the following section, we present details of our observations.
In \S~\ref{sec:results}, we discuss the ISP of these SNe.
\section{Observations and data reduction} \label{sec:obs}
\begin{table*}
\caption{Observation log and the estimated Serkowski parameters.
}
\label{tab:obs_log}
$
\begin{tabular}{|c||c|c|c|c||c|c|c|} \hline
SN & Date (UT) & Phase$^{a}$ & Airmass & Exposure time & $P_{\rm{max}}$ (\%) & $\lambda_{\rm{max}}$ ({\AA}) & $K$ \\ \hline\hline
SN~2022aau & 2022-01-28.11 & +7.51 & 1.3 & 4$\times$300s & $13.72^{+4.23}_{-0.14}$ & $800^{+90}_{-110}$ & $0.5^{+0.1}_{-0.1}$ \\ \cline{2-5}
& 2022-03-24.10 & +62.50 & 2.3 & 4$\times$300s & & &\\ \hline
SN~2022ame & 2022-01-30.10 & +2.59 & 1.4 & 4$\times$600s & $1.44^{+0.01}_{-0.06}$ & $5300^{+190}_{-310}$ & $1.7^{+0.4}_{-0.4}$ \\ \cline{2-5}
& 2022-03-01.02 & +32.51 & 1.5 & 4$\times$450s & & &\\ \hline
\end{tabular}
$
%
\begin{minipage}{.88\hsize}
\smallskip
Notes. ${}^{a}$Days relative to the discovery. The observational data will be available in the ESO Science Archive Facility at \url{http://archive.eso.org}.
\end{minipage}
\end{table*}
We have conducted spectropolarimetric observations of SNe~2022aau and 2022ame, using the FOcal Reducer/low-dispersion Spectrograph 2 \citep[hereafter FORS2;][]{Appenzeller1998} instrument mounted on the Cassegrain focus of the Very Large Telescope (VLT) UT1 (Antu) unit telescope in Chile.
The log of the observations is shown in Table~\ref{tab:obs_log}. We used FORS2 as a dual-beam polarimeter. The spectrum produced by a grism is split by a Wollaston prism into two beams with orthogonal directions of polarization, the ordinary (o) and extraordinary (e) beams, after the light has passed through a rotatable half-wave retarder plate (HWP). We used the low-resolution G300V grism coupled to a $1.0$ arcsec slit, giving a spectral coverage of $3800-9200$ {\AA}, a dispersion of $\sim 3.2$ {\AA}~pixel$^{-1}$ and a resolution of $\sim 11.5$ {\AA} (FWHM) at $5580$ {\AA}. We adopted HWP angles of $0^{\circ}$, $22.5^{\circ}$, $45^{\circ}$ and $67.5^{\circ}$, which are measured between the acceptance axis of the ordinary beam of the Wollaston prism (which was aligned to the north-south direction) and the fast axis of the retarder plate.
The data were reduced by standard methods with IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.} following \citet[][]{Patat2006}. The ordinary and extraordinary beams were extracted by the PyRAF apextract.apall task with a fixed aperture size of $10$ pixels and then separately binned in $100$ {\AA} bins in order to improve the signal-to-noise ratio. The HWP zeropoint angle chromatism was corrected based on the data in the FORS2 user manual \footnote{\url{http://www.eso.org/sci/facilities/paranal/instruments/fors/doc/VLT-MAN-ESO-13100-1543_P07.pdf}}. The wavelength scale for the Stokes parameters was corrected to the rest-frame using the redshift of the galaxies.
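For reference, in this standard dual-beam scheme the Stokes parameters are obtained from the normalized flux differences between the ordinary and extraordinary beams at the $N=4$ HWP angles $\theta_i$,
\begin{equation}
F_i = \frac{f^{\rm{o}}_i - f^{\rm{e}}_i}{f^{\rm{o}}_i + f^{\rm{e}}_i}, \qquad
Q = \frac{2}{N}\sum_{i} F_i \cos 4\theta_i, \qquad
U = \frac{2}{N}\sum_{i} F_i \sin 4\theta_i,
\end{equation}
from which the polarization degree and angle shown in Figure~\ref{fig:specpol} follow as $P=\sqrt{Q^{2}+U^{2}}$ and $\frac{1}{2}\arctan(U/Q)$.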
\section{Results and discussion} \label{sec:results}
Figure~\ref{fig:specpol} presents the polarization spectra of SNe~2022aau and 2022ame. SN~2022aau shows a high degree of polarization, i.e., $P\gtrsim 3.0$ \% at $\lambda \sim 4500$ {\AA}, as well as a steep wavelength dependence, i.e., the polarization peaks are at a bluer wavelength than the MW ISP, similar to the ISP of reddened Type~Ia SNe. The spectra at the first and second epochs (Phases +7.51 and +62.50 days) are generally similar, showing a continuum polarization with a single polarization angle of $\sim 90$ degrees. At the same time, the second epoch shows a slight increase in the continuum polarization as well as the emergence of some line polarization, which corresponds to the line features in the SN spectrum (see Figures~\ref{fig:specpol} and \ref{fig:spec}). Here, we judge that the discrepancy at $\lambda \lesssim 4500$ {\AA} is likely due to the lack of signal, i.e., the incomplete extraction of the spectra (see Fig.~\ref{fig:spec}), and that at the other wavelengths might be due to the intrinsic SN polarization. We will discuss this additional component, probably from the aspherical structure in the SN ejecta, in a forthcoming paper. In the following discussion, we thus use the first-epoch spectrum as a pure ISP component of SN~2022aau. SN~2022ame also shows a high polarization degree of $P\sim 1.5$ \% at $\lambda \sim 4500$ {\AA}, as well as a smooth wavelength dependence
with a peak around $5300$ {\AA}, similar to the MW ISP. There is no noticeable time evolution between the two epochs, and in the following discussion, we therefore use the averaged spectrum from the first and second epochs for SN~2022ame.
Normally, Type~II SNe show low polarization ($\lesssim 0.1$\%) at early photospheric phases \citep[\textit{i.e.}, within a few months after the explosion; \textit{e.g.},][]{WangWheeler2008}, implying that the outermost layers of their progenitors are relatively spherical. In addition, the polarization that originates from the SN ejecta should have a constant continuum polarization degree through all wavelengths, because the scattering processes in the SN ejecta are dominated by electron scattering, whose opacity is gray \citep[see, e.g.,][]{Nagao2018}. Even in the extreme case of SN~2013ej with a large polarization degree (with no wavelength dependence) just after the explosion, which is interpreted to originate in an aspherical photosphere created by an aspherical circumstellar-material interaction, the continuum polarization was limited to a $\sim0.5$ \% level \citep[][]{Nagao2021}. Such a high intrinsic polarization ($P\gtrsim 1.5$ \%) has not been previously observed in any other Type~II SN at such early phases (within a few months after explosion). Since SNe~2022aau and 2022ame show both a high polarization and a significant wavelength dependence at early phases, their polarization is expected to be imposed externally.
A straightforward interpretation is the ISP, i.e., the polarization due to extinction by aspherical dust grains aligned in a magnetic field. The Galactic reddening along the lines of sight to SNe~2022aau and 2022ame is $E(B-V)=0.021$ and $0.012$ mag, respectively \citep[][]{Schlafly2011}. With these values, the empirical relation found by \citet[][$P_{\rm{max}}\leq 9 E(B-V)$]{Serkowski1975} suggests that the Galactic ISP for SNe~2022aau and 2022ame should be lower than $\sim 0.2$~\% and $\sim 0.1$~\%, respectively. Therefore, we conclude that the ISP toward SNe~2022aau and 2022ame originates mainly from the dust in their host galaxies.
Since we do not see any time evolution of the ISP in either SN, the dust that contributes to the ISP should be located not on a circumstellar (CS) scale ($\lesssim 0.1$ pc, where the dust would originate from the progenitor systems of these SNe) but on an IS scale ($\gg 0.1$ pc, where the dust is not directly related to the progenitor systems). The fact that the Na~I~D line and the reddening are also constant in time (see Appendix~\ref{sec:app2}) supports this conclusion.
The other possible external source of polarization with a blue peak is scattering by CS dust around SNe \citep[e.g.,][]{Patat2005,Nagao2017,Nagao2018}.
However, it is difficult for this scenario to explain the observed polarization of SNe~2022aau and 2022ame. The polarization degree in this scenario is determined by the relative strength of the scattered-echo light with respect to the SN light. Roughly speaking, to increase the scattered-echo flux relative to the SN light, we need CSM that subtends a larger solid angle, which creates a more spherical CSM distribution and thus reduces the polarization degree of each scattered photon. As a result, there is an upper limit on the polarization degree that can be created by this scenario for an assumed input light curve, although, in reality, the echo process is complicated, as it depends not only on the time evolution of the central source but also on many other factors, e.g., the dust optical properties, multiple-scattering effects, etc. It has been shown that the polarization level expected in the CSM echo scenario is limited to $\sim 0.1$ \% during the plateau phase of Type IIP SNe, even considering additional factors \citep[][]{Nagao2017,Nagao2018}. In addition, since the scattered echo has a delay time, it cannot contribute at the very early phases at which we took spectropolarimetry of SNe~2022aau and 2022ame (Phases +7.51 and +2.59 days, respectively). If we assume that the dust is located just outside the dust evaporation radius ($\sim 0.01$ pc for a typical brightness of Type IIP SNe), then the typical delay time is $t_{\rm{delay-time}}=r_{\rm{evp}}/c \sim 10$ days. In the dust scattering scenario, we should also see a time variation of the polarization degree between the early and later epochs, which we did not observe for our targets.
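For the numbers quoted above, $r_{\rm{evp}} \sim 0.01$~pc $\simeq 3\times10^{16}$~cm indeed gives $t_{\rm{delay-time}} = r_{\rm{evp}}/c \simeq 10^{6}$~s, i.e., roughly 12 days, longer than the phases of our first epochs.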
SNe~2022aau and 2022ame show substantial reddening in their photometric and spectroscopic properties, and are significantly redder than other Type~II SNe at similar phases (see Fig.~\ref{fig:spec} in Appendix~\ref{sec:app2}). The equivalent widths of the Na~I~D lines formed in the host galaxies of SNe~2022aau and 2022ame also indicate very high extinction: EW$_{\rm{Na I}}$ = 4.8 and 1.4 {\AA} for SNe~2022aau and 2022ame (see Fig.~\ref{fig:spec} in Appendix~\ref{sec:app2}), respectively, imply $E(B-V) \gtrsim 0.6$ mag based on the empirical relation derived by \citet[][]{Poznanski2012}. This value would be converted into $P_{\rm{max}} \gtrsim 5.4$\% if the above Galactic Serkowski relation is applicable also in these galaxies. The inferred high extinction is in agreement with the high polarization degrees, supporting the conclusion that the ISP within the host galaxies is very likely responsible for the polarization observed toward the two Type~II SNe.
The ISP angle traces the direction of the magnetic field in the region where the ISP is formed, since the polarization occurs through the differential absorption of the electromagnetic wave by aspherical dust grains aligned with the local magnetic field \citep[\textit{e.g.},][]{Davis1951}. In a spiral galaxy the direction of the magnetic field globally follows the direction of the spiral arms \citep[\textit{e.g.},][]{Beck2015}, even though the magnetic field and thus the ISP might suffer local perturbations, \textit{e.g.}, from supernovae \citep{Ntormousi2018}. The polarization angle in SN~2022aau arguably corresponds to the spiral structure at the location of the SN in its host galaxy, supporting the above interpretation of the origin of its polarization (see Fig.~\ref{fig:host} in Appendix~\ref{sec:app1}). The polarization angle in SN~2022ame, on the other hand, does not match any large-scale structure at the location of the SN.
Even though the origin of the alignment/misalignment between the global galaxy structure and the local magnetic field is not fully clear \citep[see, \textit{e.g.},][]{Beck2015, Beck2020}, this may be the result of, \textit{e.g.}, a local perturbation of the magnetic field.
Figure~\ref{fig:wavelength_dep} shows the wavelength dependence of the ISP toward SNe~2022aau and 2022ame, as compared with selected Type~Ia SNe and a Galactic star. The wavelength dependence of the ISP toward SN~2022aau deviates from that of the MW in a similar way to the ISP toward some Type~Ia SNe, i.e., the polarization peaks are at a shorter wavelength ($\lambda_{\rm{max}}\lesssim4000$ {\AA}) than the typical ISP in the MW ($\lambda_{\rm{max}}\sim5500$ {\AA}). This is evidence for the presence of non-MW-like dust in its host galaxy, implying a significantly enhanced abundance of small grains compared to MW dust, as suggested for dust in the host galaxies of reddened Type~Ia SNe \citep[e.g.,][]{Patat2015, Chu2022}. This finding implies that such non-MW-like dust might be more common than expected in certain regions of galaxies, which might affect the picture of star/planet formation in galaxies. On the other hand, the ISP of SN~2022ame is consistent with a MW-like ISP, implying the existence of MW-like dust in its host galaxy. The two examples presented in this paper thus confirm that dust properties in external galaxies are diverse.
We have derived the wavelength dependence of the ISP by fitting the polarization spectra with the Serkowski curve \citep[][]{Serkowski1975}:
\begin{equation}
P(\lambda) = P_{\rm{max}} \exp \left[ -K \ln^{2} \left( \frac{\lambda_{\rm{max}}}{\lambda} \right) \right].
\end{equation}
The derived best-fit values for the parameters are $P_{\rm{max}}=13.72^{+4.23}_{-0.14}$ \%, $\lambda_{\rm{max}}=800^{+90}_{-110}$ {\AA} and $K=0.5^{+0.1}_{-0.1}$ for SN~2022aau and $P_{\rm{max}}=1.44^{+0.01}_{-0.06}$ \%, $\lambda_{\rm{max}}=5300^{+190}_{-310}$ {\AA} and $K=1.7^{+0.4}_{-0.4}$ for SN~2022ame. Here, from the empirical relation of \citet{Serkowski1975}, these values of $\lambda_{\rm{max}}$ imply $R_{V}\sim0.4$ and $\sim 2.9$ for SNe~2022aau and 2022ame, respectively. In Figure~\ref{fig:k_l}, the best-fit values and the 1-sigma confidence levels for the fitting are shown on the $\lambda_{\rm{max}}-K$ plane. SN~2022aau is located far from the cloud of the MW stars and close to some of the Type~Ia SNe. SN~2022ame is close to the cloud of the MW stars, even though it still shows a slightly larger value of $K$ compared to the MW stars at more than the 1-sigma level. It is noted that, since the best-fit value of $\lambda_{\rm{max}}$ for SN~2022aau is outside the wavelength range of our observations (3800-9200 {\AA}), the estimated values for SN~2022aau are not as reliable as those for SN~2022ame and the MW stars. However, this does not affect the qualitative conclusion of a fundamental difference between SN~2022aau and the MW stars because the ISP peaks of the MW stars are covered by the observations.
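As a simple illustration of this fitting procedure (not our actual reduction pipeline), the Serkowski law can be fit with standard tools; in the sketch below the wavelength and polarization arrays are synthetic placeholders for the $100$~{\AA}-binned spectra.
\begin{verbatim}
# Minimal sketch of a Serkowski-law fit (illustrative only; the arrays
# below are synthetic placeholders, not the observed spectra).
import numpy as np
from scipy.optimize import curve_fit

def serkowski(wave, p_max, lam_max, k):
    # P(lambda) = P_max * exp(-K * ln^2(lambda_max / lambda))
    return p_max * np.exp(-k * np.log(lam_max / wave) ** 2)

wave = np.arange(3800.0, 9200.0, 100.0)        # wavelength bins [Angstrom]
pol = serkowski(wave, 1.44, 5300.0, 1.7)       # mock polarization [%]
pol += np.random.normal(0.0, 0.02, wave.size)  # mock noise

popt, pcov = curve_fit(serkowski, wave, pol, p0=[1.0, 5500.0, 1.15])
print("P_max = %.2f %%, lambda_max = %.0f A, K = %.2f" % tuple(popt))
\end{verbatim}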
Our results suggest that further investigation of IS dust properties using polarimetry of reddened SNe is highly promising as a way to clarify how universal such non-MW-like dust is in external galaxies. Furthermore, in order to identify the origin of such non-MW-like dust, it is important to study the dependence of the dust properties on environmental conditions such as the gas density, the strength of the IS radiation field, the strength of the magnetic field, metallicity, etc.
\begin{acknowledgments}
This paper is based on observations collected at the European Southern Observatory under ESO programme 108.228K.001.
We are grateful to ESO's Paranal staff for the execution of the observations in service mode and the support astronomer, Paola Popesso, for the help with the arrangement of the observations.
The transient alert system developed by Steven Williams was helpful for our target selections.
T.N. thanks Masaomi Tanaka, Jian Jiang, Santiago Gonz{\'a}lez-Gait{\'a}n, Claudia P. Guti{\'e}rrez and Antonia Morales-Garoffolo for useful discussions.
T.N. is funded by the Academy of Finland project 328898.
T.N. acknowledges the financial support by the mobility program of the Finnish Centre for Astronomy with ESO (FINCA).
K.M. acknowledges support from the Japan Society for the Promotion of Science (JSPS) KAKENHI grant JP18H05223, JP20H00174, and JP20H04737. The work is partly supported by the JSPS Open Partnership Bilateral Joint Research Project between Japan and Finland (JPJSBP120229923).
H.K. is funded by the Academy of Finland projects 324504 and 328898.
M.B. acknowledges support from the Swedish Research Council (Reg. no. 2020-03330).
\end{acknowledgments}
\vspace{5mm}
\facilities{VLT (ESO)}
\begin{figure}[ht!]
\plotone{fig1a.eps}
\plotone{fig1b.eps}
\caption{Top two panels: Polarization degree $P$ and angle $\theta$ for SN~2022aau at epochs 1 (Phase +8.55 days; black) and 2 (Phase +63.54 days; gray). The black solid line corresponds to the best-fit Serkowski curve. Bottom two panels: Same as upper panels but for SN~2022ame at epochs 1 (Phase +5.24 days; red) and 2 (Phase +35.16 days; blue) as well as the weighted average of the values (black).
}
\label{fig:specpol}
\end{figure}
\begin{figure}[ht!]
\plotone{fig2.eps}
\caption{Wavelength dependence of the polarization normalized at 4000 {\AA} toward SNe~2022aau (red) and 2022ame (blue) with their best-fit Serkowski curves. For comparison, the data of three Type~Ia SNe \citep[SNe~2014J, 2008fp and 2006X;][]{Patat2015} and a Galactic star \citep[HD~43384;][]{Cikota2018} are also plotted.}
\label{fig:wavelength_dep}
\end{figure}
\begin{figure}[ht!]
\plotone{fig3.eps}
\caption{The ISP $\lambda_{\rm{max}}$-$K$ diagram showing the Type~II SNe~2022aau (red) and 2022ame (blue) from this study. Several Type~Ia SNe \citep[black crosses;][]{Patat2015,Zelaya2017,Cikota2018} and a large number of MW stars \citep[gray crosses;][]{Whittet1992} are also included. The colored points show the best-fit values of the Type~II SNe, and the lines represent the 1-sigma confidence intervals for the fitting.}
\label{fig:k_l}
\end{figure}
\section{Introduction}
\label{sec:intro}
Given a base field $k$, an $n$-dimensional \emph{Severi-Brauer variety}
$X$ over $k$ is an (\'etale) $k$-form of the projective space $\mathbb{P}^n_k$; in other
words, there exists a finite separable field extension $L/k$ such that $X_L := X
\times_{\operatorname{Spec}(k)} \operatorname{Spec}(L)$ is isomorphic to $\mathbb{P}^n_L$.
The isomorphism classes of Severi-Brauer varieties of dimension $n$
are in bijective correspondence with central simple algebras $A$ of
degree $n+1$, which are forms of the algebra $M_{n+1}(k)$ of $(n+1)\times(n+1)$
matrices over $k$.
From another perspective, the isomorphism classes of
$n$-dimensional Severi-Brauer varieties over $k$ are classified by the
elements of the Galois cohomology set $H^1(k,\operatorname{PGL}_{n+1})$.
There is an injective function
\[
H^1(k,\operatorname{PGL}_n) \hookrightarrow \operatorname{Br}(k) = H^2(k,\mathbb{G}_{m}),
\]
functorial with respect to the field $k$, which associates a given
Severi-Brauer variety to the Brauer equivalence class of the
corresponding central simple algebra.
A \emph{separable algebra} $A$ over a field $k$ is a direct sum
\[
A = \bigoplus_{i=1}^r M_{n_i}(D_i)
\]
of matrix algebras $M_{n_i}(D_i)$ where each $D_i$ is a
finite-dimensional division
$k$-algebra whose center is a separable field extension of $k$.
Alternatively, a separable algebra is an \'etale $k$-form of a direct sum of
matrix algebras over $k$.
The del Pezzo surfaces of degree $6$ are all $k$-forms of one another.
Blunk~\cite{Blunk} demonstrated how to associate a form of a separable $k$-algebra
to each del Pezzo surface of degree $6$ in such a way that two surfaces
are isomorphic if and only if their corresponding algebras are
isomorphic. In this case, the split del Pezzo surface has the associated separable algebra $M_2(k)^{\oplus 3} \oplus M_3(k)^{\oplus 2}$.
Both Severi-Brauer varieties and del Pezzo surfaces are examples of
\emph{arithmetic toric varieties}: normal varieties which admit a faithful action of a torus (Definition~\ref{defn:torus}) with dense open orbit.
In \cite{Duncan}, it is shown that one can distinguish isomorphism
classes of $k$-forms of an arithmetic toric variety $X$ by separable $k$-algebras
whenever forms of $X$ with a rational point are retract rational.
In all these cases, the separable algebras are the direct sums of
endomorphism algebras of certain indecomposable vector bundles on the variety $X$.
It is natural to ask: can one distinguish $k$-forms for wider classes
of objects via separable algebras?
For example, can varieties be distinguished by endomorphism algebras of
exceptional objects in their derived categories?
Are there even more exotic constructions?
The purpose of this paper is to precisely describe a fundamental obstruction to
all such strategies.
Recall that, under mild technical conditions,
the isomorphism classes of $k$-forms of an algebraic object $X$
are in bijection with the Galois cohomology set $H^1(k,G)$, where $G$ is the
automorphism group scheme of $X$.
If $A$ is a separable $k$-algebra with an algebraic action of an
algebraic group $G$, then
there is an algebraic group homomorphism $G \to \operatorname{Aut}(A)$.
We define
\begin{equation} \label{eq:ZheDefine}
\Zhe(k,G) := \bigcap_A \ker
\left( H^1(k,G) \to H^1(k,\operatorname{Aut}(A)) \right)
\end{equation}
where the intersection runs over all separable $k$-algebras $A$
with a $G$-action.
Informally, $\Zhe(k,G)$ is the set of $k$-forms of $X$ that cannot
be distinguished using forms of separable $k$-algebras.
Our main result completely characterizes this invariant using the theory
of flasque and coflasque resolutions of reductive algebraic groups, pioneered by
Colliot-Th\'el\`ene~\cite{CT2004,CT2008}, which is reviewed in
Section~\ref{sec:coflasque} below.
\begin{theoremintro} \label{thm:main}
Let $G$ be a (connected) reductive algebraic group over $k$.
Then
\[
\Zhe(k,G) = \op{im} \left(H^1(k,C) \to H^1(k,G)\right),
\]
where
\[
1 \to P \to C \to G \to 1
\]
is an exact sequence of algebraic groups such that
\begin{enumerate}
\item $P$ is a quasitrivial torus and
\item $C$ is an extension of a coflasque torus by a semisimple simply
connected group.
\end{enumerate}
\end{theoremintro}
Theorem~\ref{thm:main} is a consequence of
Theorem~\ref{thm:big_equivalence} below.
\begin{remark}
The reductive hypothesis is harmless in
characteristic $0$, since in this case there is a canonical isomorphism
$H^1(k,G) \cong H^1(k,G/U)$ for a connected linear algebraic group
$G$ with unipotent radical $U$.
\end{remark}
\begin{remark}
For a finite constant group $G$, the invariant $\Zhe(k,G)$ is always trivial
for almost tautological reasons.
Indeed, $H^1(k,G)$ classifies $G$-Galois algebras over $k$,
which are, in particular, separable algebras with a $G$-action.
\end{remark}
\begin{remark}
The automorphism group scheme of a toric variety is not usually connected.
Indeed, this is not even true for del Pezzo surfaces of degree $6$
studied in \cite{Blunk}.
However, using non-abelian $H^2$ as in \cite{Duncan},
one sees that Theorem~\ref{thm:main} is sufficient for most purposes.
In particular, Theorem~\ref{thm:main} shows that
the strategy for distinguishing forms of toric varieties using separable
algebras in \cite{Duncan}
is essentially the best one can expect.
\end{remark}
\begin{remark}
Our initial motivation for introducing $\Zhe(k,G)$ was to understand
the behavior of the derived category under twisting by a torsor.
If the derived category of a $G$-variety $X$ has a
$G$-stable exceptional collection $\{ E_1, \ldots, E_n \}$,
then the direct sum $A_X := \bigoplus_{i=1}^n \operatorname{End}(E_i)$ is a separable algebra
with a $G$-action (see \cite{BDM}).
However, $B_X:=\operatorname{End}(\bigoplus_{i=1}^n E_i)$ is not a separable algebra in
general and $A_X$ is only its semisimplification.
In particular, we do not allow \emph{arbitrary} finite-dimensional associative
$k$-algebras $A$ in the definition \eqref{eq:ZheDefine}
of $\Zhe(k,G)$.
\end{remark}
\subsection{Cohomological invariants}
\label{sec:intro_CI}
Recall that the Galois cohomology pointed set $H^i(k,G)$ is functorial
in both $G$ and $k$. In particular, fixing $G$, we may view $H^i(-,G)$
as a functor from the category of field extensions
of $k$ to the category of pointed sets
(or to groups, or to abelian groups, appropriately).
Let $\op{Inv}^2_\ast(G,S)$ be the group of \emph{normalized cohomological
invariants}, i.e., natural transformations
\[
\alpha : H^1(-,G) \to H^2(-,S)
\]
where $G$ is a linear algebraic group, $S$ is a commutative linear algebraic
group, and $\alpha$ takes the distinguished point to zero.
Recall that a linear algebraic group $G$ is \emph{special} if $H^1(K,G_K)$ is trivial for
all field extensions $K/k$.
In Theorem~\ref{thm:big_equivalence} below, we will see that for
reductive algebraic groups $G$ we have many equivalent characterizations
of $\Zhe(k,G)$. In particular,
\[
\Zhe(k,G) = \bigcap_S \bigcap_\alpha \op{ker}
\left( \alpha(k) \colon H^1(k,G) \to H^2(k,S) \right)
\]
where the intersections run over all special tori $S$
and all normalized cohomological invariants $\alpha \in \op{Inv}^2_\ast(G,S)$.
Thus, not only is $\Zhe(k,G)$ an obstruction to differentiating
Brauer classes obtained from actions of $G$ on separable algebras,
but also to those obtained from completely arbitrary maps
(provided they behave well under field extensions).
\subsection{Applications}
\label{sec:intro_apps}
Theorem~\ref{thm:main} allows us to compute $\Zhe(k,G)$ in many cases of interest.
For example, the following consequences are discussed in
Section~\ref{sec:applications}:
\begin{itemize}
\item when $k$ is a finite field or
nonarchimedean local field $\Zhe(k,G)$ is trivial.
\item if $S$ is a torus, the functor $\Zhe(-,S)$ is
trivial if and only if $S$ is retract rational.
\item if $S$ is a torus over a number field $k$, then
$$\Zhe(k,S)=\Sha^1(k,S).$$
\item if $G$ is semisimple and simply-connected over a number field,
then $$\Zhe(k,G) = \prod_{v \text{ real}} \Zhe(k_v,G_v).$$
\item if $k$ is a totally imaginary number field, then
$$\Zhe(k,G)=\Sha^1(k,G).$$
\end{itemize}
Retract rationality will be recalled in Section~\ref{sec:coflasque} below (see Definition~\ref{defn:rationality})
and $\Sha^1(k,G)$ denotes the Tate-Shafarevich group,
which is discussed in Section~\ref{sec:applications}.
Indeed the notation $\Zhe$ was chosen to remind the reader of
$\Sha$. The connection is made explicit for number fields in the
following:
\begin{theoremintro} \label{thm:Zhe_number_field}
Let $G$ be a reductive algebraic group over a number field $k$.
Then there exists a canonical isomorphism
\[
\Zhe(k,G) = \Sha^1(k,G) \times \prod_{v \textrm{ real}} \Zhe(k_v, G_{k_v}).
\]
\end{theoremintro}
The structure of the remainder of the paper is as follows.
In Section~\ref{sec:coflasque}, we overview the theory of coflasque
and flasque resolutions, moving from lattices to tori and then treating general
reductive algebraic groups.
In Section~\ref{sec:big_thm}, we prove Theorem~\ref{thm:main} as well as
several other equivalent characterizations of $\Zhe(k,G)$.
Finally, in Section~\ref{sec:applications}, we compute $\Zhe(k,G)$ in
several special cases and establish Theorem~\ref{thm:Zhe_number_field}.
In an Appendix, we prove a generalization of a result of Blinstein and
Merkurjev used to prove the main theorem, which may be of independent
interest.
\subsection*{Acknowledgements}
\label{sec:hat_tip}
The authors would like to thank B.~Antieau, M.~Borovoi,
J.-L.~Colliot-Th\'el\`ene, P.~Gille,
and A.~Merkurjev for helpful comments.
Via the first author, this material is based upon work supported by the National
Science Foundation under Grant No.~NSF DMS-1501813.
Via the second author, this work was supported by a grant from the Simons Foundation
(638961, AD).
The third author was partially supported by a USC SPARC grant.
The fourth author was partially supported by an AMS-Simons travel grant.
\subsection*{Notation and Conventions}
\label{sec:notation}
Throughout, $k$ denotes an arbitrary field with separable closure $\overline{k}$.
Let $\Gamma_k$ denote the absolute Galois group $\op{Gal}(\overline{k}/k)$,
which is a profinite group.
A variety is an integral separated scheme of finite type over a field.
A linear algebraic group is a smooth affine group scheme of finite-type over $k$.
A reductive group is assumed to be connected.
Let $\pi : \operatorname{Spec}(L) \to \operatorname{Spec}(k)$ be the morphism associated to a
separable field extension $L/k$.
For a $k$-variety $X$, we write
$X_L := X \times_{\operatorname{Spec} k} \operatorname{Spec} L = \pi^\ast(X)$
and $\overline{X} : = X_{\overline{k}}$.
For an $L$-variety $Y$, we write
$R_{L/k}(Y) := \pi_\ast(Y)$ for the Weil restriction, which is a
$k$-variety.
Let $\operatorname{GL}_n$ denote the general linear group scheme and
$\mathbb{G}_{m} = \operatorname{Spec}({\mathbb Z}[t^{\pm 1}])=\operatorname{GL}_1$ as the multiplicative group over ${\mathbb Z}$.
We will simply write $\operatorname{GL}_n$ for $ \operatorname{GL}_{n,k}$ or $\mathbb{G}_{m}$ for $\mathbb{G}_{m,k}$
when there is no danger of confusion.
Unless otherwise specified, a $G$-torsor is a \emph{right} $G$-torsor.
We will reference the following categories:
\begin{itemize}
\item $\mathsf{Set}$ is the category of sets.
\item $\mathsf{Set}_\ast$ is the category of pointed sets.
\item $\mathsf{Grp}$ is the category of groups.
\item $\mathsf{Ab}$ is the category of abelian groups.
\item $\mathsf{Lat}$ is the category of finitely-generated free abelian
groups.
\end{itemize}
Given a base field $k$:
\begin{itemize}
\item $k\hyph\mathsf{Alg}$ is the category of associative $k$-algebras.
\item $k\hyph\mathsf{Fld}$ is the category of field extensions of $k$.
\item $k\hyph\mathsf{Grp}$ is the category of algebraic groups over $k$.
\end{itemize}
Given a profinite group $\Gamma$ and a concrete category $\mathsf{C}$
(in other words, $\mathsf{C}$ is equipped with a faithful functor to
category of sets),
we write $\Gamma\hyph\mathsf{C}$ to denote the category of objects
whose underlying sets are endowed with the discrete topology
and a continuous left action of $\Gamma$.
Objects in $\Gamma\hyph\mathsf{Set}$,
$\Gamma\hyph\mathsf{Ab}$,
and $\Gamma\hyph\mathsf{Lat}$
are called $\Gamma$-sets, $\Gamma$-modules, and $\Gamma$-lattices
respectively.
For $\Gamma$-modules $A,B$, we use the shorthand notation
$\operatorname{Hom}_\Gamma(A,B) := \operatorname{Hom}_{\Gamma\hyph\mathsf{Ab}}(A,B)$
and $\operatorname{Ext}^i_\Gamma(A,B) := \operatorname{Ext}^i_{\Gamma\hyph\mathsf{Ab}}(A,B)$.
For linear algebraic groups $A, B$ over $k$ with $B$ commutative,
we denote by $\operatorname{Ext}^1_k(A,B)$ the group of isomorphism classes of
central extensions of algebraic groups
\[
1 \to B \to G \to A \to 1
\]
under the usual Baer sum.
For $k$-algebras $A$ and $B$, we use the shorthand
$\operatorname{Hom}_k(A,B) := \operatorname{Hom}_{k\hyph\mathsf{Alg}}(A,B)$.
For algebraic groups $A$ and $B$ defined over $k$, we use the shorthand
$\operatorname{Hom}_k(A,B) := \operatorname{Hom}_{k\hyph\mathsf{Grp}}(A,B)$.
For a scheme $X$ and an \'etale sheaf $\mathcal{F}$ on $X$,
we write $H^n(X,\mathcal{F})$ to denote \'etale cohomology.
In particular, we write $\operatorname{Pic}(X)=H^1(X,\mathbb{G}_{m})$ and $\operatorname{Br}(X)=H^2(X,\mathbb{G}_{m})$.
For a field $k$, we write
$H^n(k,\mathcal{F}):=H^n(\operatorname{Spec}(k),\mathcal{F})$.
For a profinite group $\Gamma$ and a (continuous) $\Gamma$-set $A$,
we write $H^n(\Gamma,A)$ for the appropriate cohomology set,
assuming this makes sense given $n$ and $A$.
For readers unfamiliar with the Cyrillic letter $\Zhe$, ``Zhe'', it
is pronounced close to the ``s'' in ``treasure''.
\section{Coflasque resolutions}
\label{sec:coflasque}
\subsection{Preliminaries on lattices}
\label{sec:lattices}
We recall some facts about $\Gamma$-lattices following \cite{CTS77},
see also \cite{Vosky}.
\begin{definition}
Let $\Gamma$ be a profinite group and let $M$ be a $\Gamma$-lattice.
Note that the image of the $\Gamma$-action factors through a finite group $G$
called the \emph{decomposition group}, which acts faithfully on $M$.
\begin{enumerate}
\item $M$ is \emph{permutation} if there is a ${\mathbb Z}$-basis of $M$ permuted
by $\Gamma$.
\item $M$ is \emph{stably permutation} if there exist permutation
lattices $P_1$ and $P_2$ such that $M \oplus P_1 = P_2$.
\item $M$ is \emph{invertible} if it is a direct summand of a permutation lattice.
\item $M$ is \emph{quasi-permutation} if there
exists a short exact sequence
\[
0 \to M \to P_1 \to P_2 \to 0
\]
where $P_1$ and $P_2$ are permutation lattices.
\end{enumerate}
\end{definition}
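To illustrate these notions, let $\Gamma = {\mathbb Z}/2{\mathbb Z}$ with nontrivial
element $\sigma$, and let $M = {\mathbb Z}$ with $\sigma$ acting by $-1$.
The exact sequence
\[
0 \to M \to {\mathbb Z}[\Gamma] \to {\mathbb Z} \to 0,
\]
in which the first map sends $1$ to $e - \sigma$ (with $e$ the identity of
$\Gamma$) and the second map is the augmentation, shows that $M$ is
quasi-permutation.
On the other hand, $H^1(\Gamma,M) \cong {\mathbb Z}/2{\mathbb Z}$, while $H^1$ vanishes on
permutation lattices and their direct summands, so $M$ is not invertible
(and, in particular, not stably permutation).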
Given a $\Gamma$-lattice $M$, let $[M]$ denote its similarity class.
In other words, $[M_1]=[M_2]$ if and only if there exist permutation
$\Gamma$-lattices $P_1$ and $P_2$ such that
$M_1 \oplus P_1 \cong M_2 \oplus P_2$.
Observe that the set of similarity classes forms a monoid under direct
sum. Being stably permutation amounts to saying that $[M]=[0]$,
while being invertible amounts to saying there exists a lattice $L$ such
that $[M]+[L]=[0]$.
Given a $\Gamma$-lattice $M$, the \emph{dual lattice}
$M^\vee := \operatorname{Hom}_{\mathsf{Ab}}(M,{\mathbb Z})$ is the set of group homomorphisms from
$M$ to ${\mathbb Z}$ with the natural $\Gamma$-action
where ${\mathbb Z}$ has the trivial $\Gamma$-action.
Note that this duality induces an exact anti-equivalence of the category
of $\Gamma$-lattices with itself.
\begin{definition}
Let $M$ be a $\Gamma$-lattice.
\begin{enumerate}
\item $M$ is \emph{coflasque} if $H^1(\Gamma',M)=0$ for all
open subgroups $\Gamma' \subseteq \Gamma$.
\item $M$ is \emph{flasque} if $M^\vee$ is coflasque.
\item A \emph{flasque resolution of $M$ of the first type} is an exact sequence
\[
0 \to M \to P \to F \to 0
\]
while a \emph{flasque resolution of $M$ of the second type} is an exact sequence
\[
0 \to P \to F \to M \to 0
\]
where, in each case, $P$ is a permutation lattice and $F$ is a flasque lattice.
\item A \emph{coflasque resolution of $M$ of the first type} is an exact sequence
\[
0 \to C \to P \to M \to 0
\]
while a \emph{coflasque resolution of $M$ of the second type} is an exact sequence
\[
0 \to M \to C \to P \to 0
\]
where, in each case, $P$ is a permutation lattice and $C$ is a coflasque lattice.
\end{enumerate}
\end{definition}
The following alternative characterizations of flasque, coflasque, and
invertible will be useful:
\begin{lemma}
\label{lemma:many_faces_flasque}
Let $\Gamma$ be a profinite group.
\begin{enumerate}
\item The following are equivalent for a $\Gamma$-module $C$:
\begin{itemize}
\item $C$ is coflasque.
\item $\operatorname{Ext}_\Gamma^1(P,C)=0$ for every
permutation $\Gamma$-lattice $P$.
\item $\operatorname{Ext}_\Gamma^1(Q,C)=0$ for every
invertible $\Gamma$-lattice $Q$.\\
\end{itemize}
\item The following are equivalent for a $\Gamma$-module $F$:
\begin{itemize}
\item $F$ is flasque.
\item $\operatorname{Ext}_\Gamma^1(F,P)=0$
for every permutation $\Gamma$-lattice $P$.
\item $\operatorname{Ext}_\Gamma^1(F,Q)=0$
for every invertible $\Gamma$-lattice $Q$.\\
\end{itemize}
\item The following are equivalent for a $\Gamma$-module $M$:
\begin{itemize}
\item $M$ is invertible.
\item $\operatorname{Ext}_\Gamma^1(M,C)=0$
for every coflasque $\Gamma$-lattice $C$.
\item $\operatorname{Ext}_\Gamma^1(F,M)=0$
for every flasque $\Gamma$-lattice $F$.
\end{itemize}
\end{enumerate}
\end{lemma}
\begin{proof}
This is standard.
See, e.g., \cite[Lemme 9]{CTS77} and \cite[0.5]{ColSan87Principal}.
\end{proof}
Flasque/coflasque resolutions of both types always exist
but are never unique;
however, the similarity classes $[F]$ and $[C]$ are well-defined
\cite[Lemma 0.6]{ColSan87Principal}.
It is well known that flasque and coflasque resolutions of the first
type are
``versal'' in the following sense:
\begin{lemma} \label{lem:versal_first_type}
Let $M$ be a $\Gamma$-lattice.
If
\[
0 \to C \to P \xrightarrow{\alpha} M \to 0
\]
is a coflasque resolution of the first type,
then any morphism $P' \to M$ with $P'$ invertible factors through $\alpha$.
Dually, if
\[
0 \to M \xrightarrow{\beta} P \to F \to 0
\]
is a flasque resolution of the first type,
then any morphism $M \to P'$ with $P'$ invertible factors through $\beta$.
\end{lemma}
\begin{proof}
See \cite[Lemma 1.4]{CTS77}.
\end{proof}
Less well known is that resolutions of the second
type also satisfy a ``versality'' property.
\begin{lemma} \label{lem:versal_second_type}
Let $M$ be a $\Gamma$-lattice.
Suppose
\[
0 \to M \xrightarrow{\alpha} C \to P \to 0
\]
is a coflasque resolution of the second type and
\[
0 \to M \xrightarrow{\gamma} N \to Q \to 0
\]
is an extension of $\Gamma$-lattices with $Q$ invertible.
Then there is a morphism $\phi:N \to C$ such that
$\phi \circ \gamma = \alpha$.
Dually, suppose
\[
0 \to P \to F \xrightarrow{\alpha} M \to 0
\]
is a flasque resolution of the second type and
\[
0 \to Q \to N \xrightarrow{\gamma} M \to 0
\]
is an extension of $\Gamma$-lattices with $Q$ invertible.
Then there is a morphism $\phi:F \to N$ such that
$\gamma \circ \phi = \alpha$.
\end{lemma}
\begin{proof}
From Lemma~\ref{lemma:many_faces_flasque}, an equivalent condition for $C$ to be coflasque is that
$\operatorname{Ext}^1_{\Gamma}(Q,C)=0$ for all invertible modules $Q$.
Thus from the exact sequence
\[
\operatorname{Hom}_\Gamma(Q,P) \to \operatorname{Ext}^1_{\Gamma}(Q,M) \to \operatorname{Ext}^1_{\Gamma}(Q,C) = 0
\]
there exists some map $\beta: Q \to P$ such that the extension
\begin{displaymath}
0 \to M \to Q \oplus_P C \to Q \to 0
\end{displaymath}
is isomorphic to
\begin{displaymath}
0 \to M \to N \to Q \to 0.
\end{displaymath}
The desired homomorphism $\phi: N \to C$ is the composition
\begin{displaymath}
N \cong Q \oplus_P C \to C.
\end{displaymath}
The result for flasque resolutions follows by duality.
\end{proof}
\subsection{Preliminaries on algebraic tori}
\label{sec:tori}
\begin{definition}\label{defn:torus}
A $k$-\emph{torus} is an algebraic group $T$ over $k$ such that
$T _{\overline{k}} \cong \mathbb{G}_{m,\overline{k}} ^n$
for some non-negative integer $n$.
A torus is \emph{split} if $T \cong \mathbb{G}_{m,k}^n$. A field extension
$L/k$ satisfying $T_L \cong \mathbb{G}_{m,L} ^n$ is called a \emph{splitting field}
of the torus $T$.
Any torus admits a finite Galois splitting field.
\end{definition}
Recall that there is an anti-equivalence of categories between
$\Gamma_k$-lattices and $k$-tori, which we will call \emph{Cartier duality}
(see, e.g., \cite{Vosky}).
Given a torus $T$, the Cartier dual (or \emph{character lattice}) $\widehat{T}$ is the
$\Gamma$-lattice $\op{Hom}_{\bar{k}}(\overline{T},
\mathbb{G}_{m,\bar{k}})$.
Given a $\Gamma_k$-lattice $M$, we use $\cd M$ to denote the
Cartier dual torus.
\begin{definition}
Let $T$ be a torus with corresponding $\Gamma_k$-lattice $M := \widehat{T}$.
\begin{enumerate}
\item $T$ is \emph{quasi-trivial} if $M$ is permutation.
\item $T$ is \emph{flasque} if $M$ is flasque.
\item $T$ is \emph{coflasque} if $M$ is coflasque.
\end{enumerate}
Similarly, we may define flasque/coflasque resolutions
of both types via Cartier duality.
\end{definition}
\begin{proposition} \label{prop:invertible_H1_vanish}
A torus $T$ is special if and only if $\widehat{T}$ is an invertible
$\Gamma_k$-lattice.
\end{proposition}
\begin{proof}
This follows from the classification of
special tori due to Colliot-Th\'el\`ene
\cite[Theorem 13]{Huruguen}.
\end{proof}
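For example, every quasi-trivial torus is special, since permutation
lattices are in particular invertible; compare also
Proposition~\ref{prop:hilbert90_shapiro} below.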
As in the introduction, a \emph{separable algebra} $A$ over $k$ is a
finite direct sum of finite-dimensional matrix algebras
over finite-dimensional division $k$-algebras whose centers are separable
field extensions over $k$.
Given a separable algebra $A$ over $k$, we recall that $\operatorname{GL}_1(A)$ is the
group scheme of units of $A$, i.e.,
\[
\operatorname{GL}_1(A)(R) := (A \otimes_k R)^\times
\]
for any commutative $k$-algebra $R$.
An \'etale algebra over $k$ of degree $n$ is a commutative separable
algebra over $k$ of dimension $n$.
In other words, $E = F_1 \times \cdots \times F_r$
where $F_1, \ldots, F_r$ are separable field extensions of $k$.
There is an antiequivalence between finite $\Gamma_k$-sets $\Omega$ and
\'etale algebras $E$ via
\[
\Omega = \operatorname{Hom}_{k\hyph\mathsf{Alg}}(E,\bar{k})
\textrm{ and } E = \operatorname{Hom}_{\Gamma_k\hyph\mathsf{Set}}(\Omega,\bar{k})
\]
with the natural $\Gamma_k$-action and $k$-algebra structure
on $\bar{k}$ (see, e.g., \cite[\S{18}]{BOI}).
\begin{proposition} \label{prop:hilbert90_shapiro}
Let $E = F_1 \times \cdots \times F_r$
be an \'etale algebra over $k$ of degree $n$,
where $F_1, \ldots, F_r$ are separable field extensions of $k$.
Let $T = R_{E/k} \mathbb{G}_{m}$ be the Weil restriction
and let $\Omega := \operatorname{Hom}_k(E,\bar{k})$ be the corresponding $\Gamma$-set.
\begin{enumerate}
\item $T(k) = E^\times$.
\item $\widehat{T}$ is a permutation $\Gamma_k$-lattice with a canonical basis
isomorphic to $\Omega$.
\item $H^1(k,T) = 1$.
\item $H^2(k,T) = \prod_{i=1}^r \operatorname{Br}(F_i)$.
\end{enumerate}
\end{proposition}
\begin{proof}
These are standard consequences of Hilbert's Theorem 90 and
Shapiro's Lemma.
\end{proof}
Let us now recall some relevant rationality properties.
\begin{definition}\label{defn:rationality} A $k$-variety $X$ is \emph{rational} if $X$ is birationally
equivalent to $\mathbb{A}^n_k$ for some $n \ge 0$.
We say $X$ is \emph{stably rational} if $X \times \mathbb{A}^n_k$ is birational to
$\mathbb{A}^m_k$ for some $n,m \ge 0$.
We say $X$ is \emph{retract rational} if there is a dominant rational
map $f : \mathbb{A}^n_k \dasharrow X$ that has a rational section
$s : X \dasharrow \mathbb{A}^n_k$ such that $f \circ s$ is the identity on $X$.
\end{definition}
A complete characterization of rationality of tori is still an open
problem (it is not known if all stably rational tori are
rational).
However, stable rationality and retract rationality of a torus are
completely understood via its flasque resolutions.
\begin{theorem} \label{theorem:torus_rationality}
Let $T$ be a $k$-torus and
\[
1 \to F \to P \to T \to 1
\]
a flasque resolution of the first type.
\begin{itemize}
\item $T$ is stably rational if and only if $\widehat{F}$ is stably
permutation.
\item $T$ is retract rational if and only if $\widehat{F}$ is invertible.
\end{itemize}
\end{theorem}
\begin{proof}
The first item is \cite[Theorem 2]{VoskyStable}.
The second is \cite[Theorem 3.14]{SaltmanRR}.
\end{proof}
\subsection{Flasque and coflasque resolutions of algebraic groups}
\label{sec:FC_alg_grp}
We recall how one can define flasque and coflasque resolutions for
a wider class of linear algebraic groups, following \cite{CT2008}.
Let $G$ be a (connected) reductive algebraic group over a field $k$.
Note that since our main application will be understanding the first
Galois cohomology set of $G$, in characteristic $0$ the reductive
hypothesis is largely harmless.
Let $G^{ss}$ be the derived subgroup of $G$, which is semisimple,
and let $G^{tor}$ be the quotient $G/G^{ss}$, which is a torus.
\begin{definition}
Let $G$ be a reductive algebraic group.
\begin{itemize}
\item
The group $G$ is \emph{quasi-trivial} if $G^{tor}$ is a quasi-trivial
torus and $G^{ss}$ is simply-connected.
\item
The group $G$ is \emph{coflasque} if $G^{tor}$ is a coflasque
torus and $G^{ss}$ is simply-connected.
\item
A \emph{flasque resolution} of $G$ is a short exact sequence
\[
1 \to S \to H \to G \to 1
\]
where $S$ is a flasque torus and $H$ is quasi-trivial.
\item
A \emph{coflasque resolution} of $G$ is a short
exact sequence
\[
1 \to P \to C \to G \to 1
\]
where $P$ is a quasi-trivial torus and $C$ is coflasque.
\end{itemize}
\end{definition}
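To illustrate the definitions, take $G = \operatorname{PGL}_n$.
The central extension
\[
1 \to \mathbb{G}_{m} \to \operatorname{GL}_n \to \operatorname{PGL}_n \to 1
\]
is both a flasque and a coflasque resolution of $G$: the group
$\operatorname{GL}_n$ is quasi-trivial (hence coflasque), and $\mathbb{G}_{m}$ is a
quasi-trivial (hence flasque) torus.
Since $H^1(k,\operatorname{GL}_n)$ is trivial by Hilbert's Theorem 90,
Theorem~\ref{thm:main} shows that $\Zhe(k,\operatorname{PGL}_n)$ is trivial,
consistent with the classification of Severi-Brauer varieties by their
associated central simple algebras.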
The group extensions in a flasque or coflasque resolution are
automatically central since the group $G$ is connected and the
automorphism group scheme of a torus has trivial connected component.
Unlike the situation for $\Gamma$-lattices and tori,
the symmetry between flasque and coflasque is now broken.
In the case where $G$ is a torus, the resolutions above specialize to
flasque resolutions of the first type and coflasque resolutions of the
second type.
In this context, we do not refer to the ``type'' of a
flasque or coflasque resolution.
However, as for tori, the flasque and coflasque resolutions defined
above always exist:
\begin{theorem}[Colliot-Th\'el\`ene] \label{thm:CT_collected}
Let $G$ be a reductive algebraic group over $k$. Then there exists both a flasque resolution and coflasque resolution of $G$. Moreover, for any two coflasque resolutions
\begin{gather*}
1 \to P_1 \to C_1 \to G \to 1 \\
1 \to P_2 \to C_2 \to G \to 1
\end{gather*}
there is an isomorphism
\begin{displaymath}
P_1 \times C_2 \cong P_2 \times C_1.
\end{displaymath}
\end{theorem}
\begin{proof}
The existence statements are \cite[Proposition 3.1]{CT2008} and \cite[Proposition 4.1]{CT2008}. The isomorphism is \cite[Proposition 4.2(i)]{CT2008}.
\end{proof}
\begin{proposition} \label{prop:coflasque_general_versal}
Suppose $G$ is a reductive algebraic group and
consider a coflasque resolution
\[
1 \to P \to C \to G \to 1
\]
where $P$ is a quasi-trivial torus and $C$ is coflasque.
Suppose there exists an extension
\[
1 \to S \to H \to G \to 1
\]
where $S$ is a central special torus.
Then there exists a morphism $C \to H$ inducing
a morphism of the extensions above that is the identity on $G$.
\end{proposition}
\begin{proof}
This proof is a variation of that of Proposition~4.2~of~\cite{CT2008}.
Let $E$ be the fiber product of $H$ and $C$ over $G$.
We have a commutative diagram
\[
\xymatrix{
& & 1 \ar[d] & 1 \ar[d] \\
& & P \ar@{=}[r] \ar[d] & P \ar[d] \\
1 \ar[r] & S \ar[r] \ar@{=}[d] & E \ar[d] \ar[r] & C \ar[r] \ar[d] & 1 \\
1 \ar[r] & S \ar[r] & H \ar[r] \ar[d] & G \ar[r] \ar[d] & 1 \\
& & 1 & 1 & }
\]
with exact rows and columns.
From \cite[Proposition 1.10 and 2.6]{CT2008},
we know $H^1(C,Q)=0$ for $C$ coflasque and $Q$ a
quasi-trivial torus.
Since $S$ is special there is a factorization $S \to Q \to S$ of the
identity for some
quasi-trivial torus $Q$, and thus $H^1(C,S)=0$.
Arguing as in the proof of \cite[Proposition 3.2]{CT2008}
(or using Theorem~\ref{thm:CTtorsor_to_group} below),
we conclude the group extension
\[
1 \to S \to E \to C \to 1
\]
is split.
The composite morphism $C \to E \to H$ gives the desired result.
\end{proof}
\begin{proposition} \label{prop:coflasque_resolution_independence}
Given a reductive algebraic group $G$ and a coflasque resolution
\[
1 \to P \to C \to G \to 1 \ ,
\]
the natural morphism
\[
H^1(k,C) \to H^1(k,G)
\]
is injective and its image is independent of the choice of
coflasque resolution.
\end{proposition}
\begin{proof}
Since $P$ is central, the fibers of the natural morphism
$H^1(k,C) \to H^1(k,G)$ are either empty or are torsors under
$H^1(k,P)$. Since $P$ is quasi-trivial, $H^1(k,P)$ is trivial
and we conclude that $H^1(k,C) \to H^1(k,G)$ is injective.
Suppose
\[
1 \to P' \to C' \to G \to 1 \ ,
\]
is another coflasque resolution of $G$.
From \cite[Proposition 4.2(i)]{CT2008} and its proof,
there is an isomorphism $\alpha : P \times C' \cong P' \times C$
such that the diagram
\[
\xymatrix{
P \times C' \ar[d]^\alpha \ar[r] & C' \ar[r] & G \ar@{=}[d]\\
P' \times C \ar[r] & C \ar[r] & G}
\]
commutes.
As above, since $P$ is quasi-trivial, the projection $P \times C' \to C'$
induces an isomorphism $H^1(k,P \times C') \cong H^1(k,C')$.
Thus the composite
\[
H^1(k,C') \to H^1(k,P \times C') \xrightarrow{\alpha}
H^1(k,P' \times C) \to H^1(k,C)
\]
is an isomorphism and induces equality of the images in $H^1(k,G)$.
\end{proof}
Note that a flasque resolution of a reductive algebraic group $G$ does not in
general give rise to a flasque resolution of its abelianization.
However, this does occur if $G$ is coflasque:
\begin{proposition} \label{prop:flasqueOfCoflasque}
If $C$ is a coflasque reductive algebraic group, then any flasque resolution of
the first type gives rise to a commutative diagram
\[
\xymatrix{
1 \ar[r] &
S \ar[r] \ar@{=}[d] &
H \ar[r] \ar[d] &
C \ar[r] \ar[d] &
1 \\
1 \ar[r] &
S \ar[r] &
H^{tor} \ar[r] &
C^{tor} \ar[r] &
1 }
\]
with exact rows, where $H$ is a quasi-trivial algebraic group,
$S$ is a flasque torus, and the vertical maps are abelianizations.
Note that both rows are flasque resolutions.
\end{proposition}
\begin{proof}
The only potential problem is that abelianization is not left-exact in
general.
The morphism $\varphi: H \to C$ induces a surjective morphism $H' \to C'$ of
their derived subgroups with commutative kernel $H' \cap S$.
However, since $C$ is coflasque, the semisimple algebraic group
$C'$ is simply-connected by definition.
Thus $H' \to C'$ is an isomorphism and $H' \cap S = 1$.
Consider the map $S \to \varphi^{-1}(C')/H'$.
The kernel is $S \cap H' = 1$. Given $h \in \varphi^{-1}(C')$,
since $\varphi|_{H'}$ is an isomorphism, there is some $h' \in H'$ with
$\varphi(hh') = 1$; then $hh' \in S$, so $S \to \varphi^{-1}(C')/H'$ is surjective.
We conclude that
\begin{displaymath}
1 \to S \to H/H' \to C/C' \to 1
\end{displaymath}
is exact.
\end{proof}
\section{Cohomological Invariants}
\label{sec:big_thm}
We review the notion of a \emph{cohomological invariant}
following \cite{Skip}.
Fix a base field $k$ and recall our notation $k\hyph\mathsf{Fld}$
for the category of field extensions of $k$.
We consider two functors
\[
A : k\hyph\mathsf{Fld} \to \mathsf{Set}_\ast
\]
and
\[
H : k\hyph\mathsf{Fld} \to \mathsf{Ab} \ .
\]
A \emph{normalized $H$-invariant of $A$} is a morphism of functors $A \to H$.
The group of all such invariants will be denoted $\op{Inv}_\ast(A,H)$.
\begin{remark}
We demand a priori that $A$ is a functor into \emph{pointed} sets.
This explains the adjective ``normalized.''
This condition is harmless as a general $H$-invariant of $A$
can be written uniquely as the sum of a normalized invariant
and a ``constant'' invariant coming from $H(k)$.
\end{remark}
The two kinds of functors we will consider are as follows.
Given an algebraic group $G$ over $k$,
we may view Galois cohomology
\[
H^i(-,G) : k\hyph\mathsf{Fld} \to \mathsf{Set}_\ast
\]
as a functor (the codomain may be interpreted as $\mathsf{Grp}$ if $i=0$
or $\mathsf{Ab}$ if $G$ is commutative).
If the functor $A$ is $H^1(-,G)$ and the functor $H$ is $H^i(-,C)$
for $G$, $C$ algebraic groups, we let
\[
\op{Inv}^i_\ast(G,C)
\]
denote the group of normalized $H$-invariants of $A$.
Let $S$ be a torus.
Recall that $\operatorname{Ext}^1_k(G,S)$ is the group of central extensions of algebraic
groups
\begin{equation} \label{eq:elemOfExt}
1 \to S \to H \to G \to 1
\end{equation}
up to equivalence under the usual Baer sum.
At the risk of some ambiguity, we will use $[H]$ to denote the class of such an
extension.
Given such an extension, there is a connecting homomorphism
\[
\partial_H : H^1(k,G) \to H^2(k,S)
\]
from the long exact sequence in Galois cohomology.
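For example, applied to the extension
$1 \to \mathbb{G}_{m} \to \operatorname{GL}_n \to \operatorname{PGL}_n \to 1$, the connecting
homomorphism
$\partial_{\operatorname{GL}_n} : H^1(k,\operatorname{PGL}_n) \to H^2(k,\mathbb{G}_{m}) = \operatorname{Br}(k)$
recovers the map from the Introduction sending a Severi-Brauer variety to
the Brauer class of its corresponding central simple algebra.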
\begin{theorem} \label{thm:superBM}
Let $G$ be a reductive algebraic group over $k$
and $S$ a special torus over $k$.
Let $\operatorname{Ext}^1_k(G,S)$ be the group of isomorphism classes of central algebraic group
extensions of $G$ by $S$ under Baer sum.
Then the canonical map
\[
\operatorname{Ext}^1_k(G,S) \to \op{Inv}^2_\ast(G,S)
\]
that takes an extension $\xi$
to its connecting homomorphism $\partial_\xi$
is an isomorphism of groups.
\end{theorem}
\begin{proof}
For the proof, see the \hyperlink{proof:BM}{appendix}.
\end{proof}
\begin{remark}
The above theorem is a generalization of a result of Blinstein and
Merkurjev~\cite[Theorem 2.4]{Blinstein}, which shows
that there \emph{exists} an isomorphism when $S=\mathbb{G}_{m}$.
In their proof, the isomorphism comes from a map in an exact sequence of
Sansuc (see Proposition~6.10 of \cite{Sansuc1981}), which is somewhat
mysterious.
However, Borovoi and Demarche show in \cite[Theorem 2.4]{BorovoiDemarche}
that one can modify Sansuc's sequence so that it produces exactly the
isomorphism given in the above theorem.
Another proof that Blinstein and Merkurjev's result holds with the
desired isomorphism was proved by Lourdeaux in
\cite[\S{3.1.2}]{Lourdeaux}.
All of the aforementioned results are for the case when $S=\mathbb{G}_{m}$.
A short argument via Weil restrictions shows that the result holds for
all special tori.
The authors had proved Theorem~\ref{thm:superBM} before being made
aware of the work of Borovoi, Demarche and Lourdeaux.
This exposition is included as an appendix since the intermediate
results may be of independent interest.
\end{remark}
\subsection{Connecting Coflasque Resolutions and Cohomological Invariants}
The purpose of this section is to prove
Theorem~\ref{thm:main}.
We will actually prove a stronger theorem:
\begin{theorem} \label{thm:big_equivalence}
Let $G$ be a reductive algebraic group defined over a field $k$.
The following sets are equal:
\begin{enumerate}
\item
$ \displaystyle
\Zhe(k,G) = \bigcap_A \ker
\left( H^1(k,G) \to H^1(k,\operatorname{Aut}(A)) \right)
$
where the intersection runs over all separable $k$-algebras $A$
with a $G$-action.
\bigskip
\item
$ \displaystyle
\Zhe_{Br}(k,G) := \bigcap_E \bigcap_\alpha \op{ker}
\left( \alpha(k) \colon H^1(k,G) \to \operatorname{Br}(E) \right)
$
where the intersections run over all \'etale algebras $E$
and all normalized cohomological invariants $\alpha$.
\bigskip
\item
$ \displaystyle
\Zhe_{qt}(k,G) := \bigcap_S \bigcap_\alpha \op{ker}
\left( \alpha(k) \colon H^1(k,G) \to H^2(k,S) \right)
$
where the intersections run over all quasi-trivial tori $S$
and all normalized cohomological invariants $\alpha$.
\bigskip
\item
$ \displaystyle
\Zhe_{sp}(k,G) := \bigcap_S \bigcap_\alpha \op{ker}
\left( \alpha(k) \colon H^1(k,G) \to H^2(k,S) \right)
$
where the intersections run over all special tori $S$
and all normalized cohomological invariants $\alpha$.
\bigskip
\item
$ \displaystyle
\operatorname{Im}\left(H^1(k,C) \to H^1(k,G)\right)
$
where $1 \to P \to C \to G \to 1$ is a coflasque resolution
of $G$.
\end{enumerate}
Moreover, if $\cap_i X_i$ denotes any one of the intersections
from (a) through (d), then there exists an element $X_i$
such that $X_i = \cap_i X_i$.
\end{theorem}
\begin{remark}
We expand on the final statement of the theorem in case (a).
Here, there exists a single separable algebra $A$ with a $G$-action
such that $\Zhe(k,G) = \ker( H^1(k,G) \to H^1(k,\operatorname{Aut}(A)))$.
Note, however, that this algebra $A$ is never unique.
Similar interpretations exist for cases (b)--(d).
\end{remark}
We begin with the following lemma:
\begin{lemma} \label{lem:4flavors}
Let $G$ be a reductive algebraic group defined over a field $k$.
Let
\[
1 \to P \to C \to G \to 1
\]
be a coflasque resolution of $G$.
For any special torus $S$ and any normalized cohomological invariant
$\alpha \in \op{Inv}^2_\ast(G,S)$,
there exists a group homomorphism $f\colon P \to S$ such that
$\alpha$ is equal to the composite
\[
H^1(k,G) \xrightarrow{\partial_C} H^2(k,P) \xrightarrow{f_\ast} H^2(k,S)
\]
and $\ker(\alpha)$ contains the image of $H^1(k,C) \to H^1(k,G)$.
\end{lemma}
\begin{proof}
By Theorem~\ref{thm:superBM}, every $\alpha$ is obtained as a
connecting homomorphism from some extension
\[
1 \to S \to M \to G \to 1 \ .
\]
By Proposition~\ref{prop:coflasque_general_versal},
there exists a homomorphism $f : P \to S$ coming from a morphism of
extensions.
Applying Galois cohomology,
we obtain a commutative diagram with exact rows
\[
\SelectTips{cm}{10}\xymatrix{
H^1(k,C) \ar[r] \ar[d] & H^1(k,G) \ar[r] \ar@{=}[d] & H^2(k,P) \ar[d] \\
H^1(k,M) \ar[r] & H^1(k,G) \ar[r]^{\alpha(k)} & H^2(k,S) \\
}
\]
The right-hand square shows that $\alpha(k)$ is the composite
$f_\ast \circ \partial_C$, and exactness of the top row shows that the
image of $H^1(k,C) \to H^1(k,G)$ lies in the kernel of $\partial_C$,
hence in the kernel of $\alpha$, as desired.
\end{proof}
From \S{23}~of~\cite{BOI}, we recall some standard facts about
automorphisms of separable algebras.
Let $A$ be a separable algebra over $k$ with center $Z(A)$
(an \'etale algebra over $k$).
Recall that the connected component $\operatorname{Aut}_k(A)^\circ$ of the group
scheme $\operatorname{Aut}_k(A)$ of algebra automorphisms of $A$ is the kernel of the
restriction map $\operatorname{Aut}_k(A) \to \operatorname{Aut}_k(Z(A))$.
We have an exact sequence
\[
1 \to \operatorname{GL}_1(Z(A)) \to \operatorname{GL}_1(A) \to \operatorname{Aut}_k(A)^\circ \to 1
\]
where $\operatorname{GL}_1(B)$ is the group scheme of units of a $k$-algebra $B$
(this is a consequence of the Skolem-Noether theorem).
We define $\operatorname{PGL}_1(A)$ as the quotient
$\operatorname{GL}_1(A)/\operatorname{GL}_1(Z(A)) \cong \operatorname{Aut}_k(A)^\circ$.
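For concreteness, we record the special case of a central simple $k$-algebra $A$
(a standard fact, included only for orientation): then $Z(A)=k$, every automorphism
is inner by Skolem-Noether, $\operatorname{Aut}_k(A) = \operatorname{Aut}_k(A)^\circ = \operatorname{PGL}_1(A)$,
and the sequence above reads
\[
1 \to \mathbb{G}_{m} \to \operatorname{GL}_1(A) \to \operatorname{PGL}_1(A) \to 1 \ .
\]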
\begin{lemma} \label{lem:PGL_from_Brauer}
Suppose there is a central extension of algebraic groups
\[
1 \to S \to H \to G \to 1
\]
and a homomorphism $m : S \to P$ where $P$ is a quasi-trivial torus.
Then there exists a separable algebra $A$ such that $P \cong
\operatorname{GL}_1(Z(A))$
and there is a commutative diagram
\[
\SelectTips{cm}{10}\xymatrix{
1 \ar[r] & S \ar[r] \ar[d]^m & H \ar[r] \ar[d] & G \ar[r] \ar[d] & 1 \\
1 \ar[r] & P \ar[r] & \operatorname{GL}_1(A) \ar[r] & \operatorname{PGL}_1(A) \ar[r] & 1 \\
}
\]
with exact rows.
\end{lemma}
\begin{proof}
By taking the pushout of $H$ along $S \to P$,
we may assume that $S=P$ and the morphism $S \to P$ is the identity.
We begin by proving the lemma in the case where $S=\mathbb{G}_{m}$.
Let $\rho : H \to \operatorname{GL}(V)$ be a faithful algebraic representation of $H$
where $V$ is a $k$-vector space.
Recall that tori are linearly reductive over any field,
so the restricted representation $\rho|_S$
has a canonical decomposition $V=V_1 \oplus \cdots \oplus V_n$
into isotypic components where $\rho|_S$ acts on $V_i$ via
a direct sum of many copies of a single irreducible representation
$\sigma_i : S \to \mathbb{G}_{m}$.
Observe that $H$ cannot permute these components since $S$ is central,
thus each $V_i$ is $H$-stable.
Since the representation $\rho$ is faithful and $S$ is central,
at least one $\sigma_i$ must be a faithful representation of $S$.
Since $S=\mathbb{G}_{m}$, either $\sigma_i$ is the identity or the inversion.
In the latter case, $\sigma_i^\vee$ is the identity.
Thus there exists a representation of $H$ on $V_i$ which restricts to
scalar multiplication on $S=\mathbb{G}_{m}$.
Thus, the lemma follows when $S=P=\mathbb{G}_{m}$ if we set $A = \op{End}(V_i)$.
We now consider the general case where $S=P$ is quasi-trivial.
It suffices to assume that $P=R_{K/k}\mathbb{G}_{m}$
for a finite separable field extension $K/k$.
Indeed, quasi-trivial tori are products of such tori;
so the general result follows by taking the product of the constructions.
Let $\pi : \operatorname{Spec}(K) \to \operatorname{Spec}(k)$ be the morphism corresponding to the
field extension $K/k$.
For brevity and clarity we will write $L(X)=X_K$ for $k$-varieties $X$
and $R(Y)=R_{K/k}(Y)$ for $K$-varieties $Y$,
which emphasizes that scalar extension, $L$, is a left adjoint to
Weil restriction $R$.
We have an adjoint pair, so we denote the counit by
$\epsilon : L R \to \operatorname{id}$
and the unit by $\eta : \operatorname{id} \to R L$.
Let $f : RL(\mathbb{G}_{m})=R_{K/k}(\mathbb{G}_{m,K}) \to H$ be the inclusion of $S$ into $H$.
Define the $K$-group $J$ as the pushout
\[
\xymatrix{
LRL(\mathbb{G}_{m}) \ar[r]^{\epsilon_{L\mathbb{G}_{m}}} \ar[d]^{L f} &
L(\mathbb{G}_{m})\ar[d]^g \\
L(H) \ar[r]^h & J }
\]
with $g$ and $h$ the canonical maps.
Since the lemma has been proven for the case $S=\mathbb{G}_{m}$, we have an embedding
$\rho : J \to \operatorname{GL}_{n,K}$ for some $n$ such that $\rho \circ g$ is the identity
on scalar matrices.
We have the following commutative diagram:
\[
\xymatrix{
RL(\mathbb{G}_{m}) \ar[rr]^{\eta_{RL\mathbb{G}_{m}}} \ar[d]^f & &
RLRL(\mathbb{G}_{m}) \ar[rr]^{R\epsilon_{L\mathbb{G}_{m}}}
\ar[d]^{RL f} &&
RL(\mathbb{G}_{m}) \ar[d]^{R g} \\
H \ar[rr]^{\eta_H} &&
RL(H) \ar[rr]^{R h}
& & R(J) }
\]
where the left square commutes due to naturality of $\eta$.
The top row composes to be the identity since
$R\epsilon \circ \eta R = \operatorname{id}$ by standard facts regarding
adjunctions.
Let $A$ be the $k$-algebra of $n \times n$ matrices over $K$.
Since $R_{K/k}(\operatorname{GL}_{n,K})$ is canonically isomorphic to
$\operatorname{GL}_1(A)$,
the composition
\[
R(\rho \circ h) \circ \eta_H : H \to R_{K/k}(\operatorname{GL}_{n,K})
\]
gives the desired map.
The isomorphism $S \to Z(\operatorname{GL}_1(A))$ is given by the top
row of the diagram above.
\end{proof}
With this technical lemma in hand, we are finally able to prove
Theorem~\ref{thm:big_equivalence} (and thus Theorem~\ref{thm:main}).
\begin{proof}[Proof of Theorem~\ref{thm:big_equivalence}]
Since $\operatorname{Br}(E)=H^2(k,R_{E/k}\mathbb{G}_{m})$ for an \'etale $k$-algebra $E$,
we conclude immediately that (b) and (c) are equal.
Since a quasi-trivial torus is, in particular, special,
the equality of (c), (d) and (e) follows from
Lemma~\ref{lem:4flavors}.
Thus, the theorem is proven provided we can show
$\Zhe(k,G) = \Zhe_{qt}(k,G)$.
Suppose $x \in \Zhe_{qt}(k,G)$.
Let $A$ be a separable $k$-algebra with a $G$-action
$\alpha : G \to \operatorname{Aut}(A)$.
Recall that if $H$ is a linear algebraic group, then
$H^0(-,H) \to H^0(-,\pi_0(H))$ is surjective
by \cite[Theorem 6.5]{Waterhouse}.
Thus, $H^1(-,H^\circ) \to H^1(-,H)$ is injective.
Thus we may assume $\alpha : G \to \operatorname{Aut}(A)^\circ$ instead since
$G$ is connected.
We have a composition
\[
\beta \colon H^1(k,G) \xrightarrow{\partial_{\alpha}}
H^1(k,\operatorname{Aut}(A)^\circ) \hookrightarrow H^2(k,\operatorname{GL}_1(Z(A)))
\]
where the second arrow is injective by Hilbert 90.
In particular, this composition gives rise to a cohomological invariant
and thus $\beta(x)=0$ since $x \in \Zhe_{qt}(k,G)$;
thus $\partial_{\alpha}(x)=0$.
We conclude that $x \in \Zhe(k,G)$.
Suppose $x \in \Zhe(k,G)$.
Consider a quasi-trivial torus $P$ and a cohomological invariant
$\alpha \in \op{Inv}^2_\ast(G,P)$.
From Theorem~\ref{thm:superBM}, the functor $\alpha$ is the connecting
homomorphism induced from a central extension
\[
1 \to P \to H \to G \to 1 \ .
\]
From Lemma~\ref{lem:PGL_from_Brauer}, we may construct a separable
algebra $A$ and a commutative diagram
\[
\SelectTips{cm}{10}\xymatrix{
1 \ar[r] & P \ar[r] \ar@{=}[d] & H \ar[r] \ar[d] & G \ar[r] \ar[d] & 1 \\
1 \ar[r] & P \ar[r] & \operatorname{GL}_1(A) \ar[r] & \operatorname{PGL}_1(A) \ar[r] & 1 \\
}
\]
with exact rows.
Applying Galois cohomology we obtain a factorization
\[
H^1(-,G) \to H^1(-,\operatorname{PGL}_1(A)) \to H^2(-,P)
\]
of the functor $\alpha$.
We conclude that $x \in \Zhe_{qt}(k,G)$.
\end{proof}
\section{Coflasque algebraic groups over particular fields}
\label{sec:applications}
\subsection{General statements and low cohomological dimension}
An algebraic group $G$ over $k$ is \emph{special} if and only if
$H^1(K,G_K)=\ast$ for every field extension $K/k$
(see \cite{Huruguen}).
\begin{proposition} \label{prop:ICP_special}
If $G$ is a reductive group then $\Zhe(-,G)$ is trivial
if and only if $C$ is special,
where
\[
1 \to P \to C \to G \to 1
\]
is a coflasque resolution of $G$.
\end{proposition}
\begin{proof}
By Theorem~\ref{thm:main}, we have an isomorphism of functors
$\Zhe(-,G) \cong H^1(-,C)$; the latter is trivial if and only if $C$ is
special by definition.
\end{proof}
\begin{proposition}
Let $T$ be a torus over $k$. Then, $\Zhe(-,T)$
is a stable birational invariant of $T$. Moreover, $\Zhe(-,T)$ is trivial
if and only if $T$ is retract rational.
\end{proposition}
\begin{proof}
Let
\[
1 \to P \to C \to T \to 1
\]
be a coflasque resolution of $T$ of the second type.
Assume we have an exact sequence
\begin{displaymath}
1 \to Q \to E \to T \to 1
\end{displaymath}
with $Q$ invertible.
Taking the fiber product we obtain a commutative diagram
\[
\xymatrix{
& & 1 \ar[d] & 1 \ar[d] \\
& & P \ar@{=}[r] \ar[d] & P \ar[d] \\
1 \ar[r] & Q \ar[r] \ar@{=}[d] & H \ar[d] \ar[r] & C \ar[r] \ar[d] & 1 \\
1 \ar[r] & Q \ar[r] & E \ar[r] \ar[d] & T \ar[r] \ar[d] & 1 \\
& & 1 & 1 & }
\]
with exact rows and columns.
Taking duals in the row $1 \to Q \to H \to C \to 1$ we get a short exact sequence
\begin{displaymath}
1 \to \widehat{C} \to \widehat{H} \to \widehat{Q} \to 1
\end{displaymath}
whose associated long exact sequence includes
\begin{displaymath}
H^1(\Gamma', \widehat{C}) \to H^1(\Gamma', \widehat{H}) \to H^1(\Gamma', \widehat{Q})
\end{displaymath}
for any $\Gamma' \leq \Gamma$.
Since the outer two terms vanish, so does the middle.
Thus, $H$ is coflasque.
Additionally, since $C$ is coflasque and $Q$ is invertible, the extension
$1 \to Q \to H \to C \to 1$ splits, so $H \cong C \times Q$. Thus, the map
\begin{displaymath}
H^1(k,H) \to H^1(k,C)
\end{displaymath}
is an isomorphism. Applying Theorem~\ref{thm:main}, we see that
\begin{displaymath}
\Zhe(k,E) \cong \Zhe(k,T).
\end{displaymath}
Assume that $T$ and $T^\prime$ are stably birational tori.
Then their flasque invariants coincide
\cite[Proposition 2.6]{CTS77} and there exist short exact sequences
\begin{gather*}
1 \to P \to E \to T \to 1 \\
1 \to P^\prime \to E \to T^\prime \to 1
\end{gather*}
with both $P$ and $P^\prime$ quasi-trivial
\cite[Lemme 1.8]{CTS77}. From the above, we see that
\begin{displaymath}
\Zhe(k,T) \cong \Zhe(k,E) \cong \Zhe(k,T^\prime).
\end{displaymath}
Assume that $T$ is retract rational. Then appealing to
Theorem~\ref{theorem:torus_rationality} we have an exact sequence
\begin{displaymath}
1 \to Q \to P \to T \to 1
\end{displaymath}
where $Q$ is invertible and $P$ is quasi-trivial.
Thus, $\Zhe(-,T) \cong \Zhe(-,P)$. Since $P$ is quasi-trivial it is
coflasque, so $\Zhe(-,P) = H^1(-,P)$ by Theorem~\ref{thm:main}, and
Proposition~\ref{prop:hilbert90_shapiro} says the latter is trivial.
Assume $\Zhe(-,T)$ is trivial. From Proposition~\ref{prop:ICP_special},
$C$ is special. Then, from Corollary~\ref{prop:invertible_H1_vanish}
$C$ is invertible. Then there is a quasi-trivial torus
$P$ with $P = C \times D$ so
\begin{displaymath}
1 \to D \to P \to C \to 1
\end{displaymath}
is a flasque resolution with $D$ invertible. Thus,
Theorem~\ref{theorem:torus_rationality} shows $C$ is retract rational.
Since the quotient map $C \to T$ is a torsor under the quasi-trivial torus
$P$, it is generically trivial, so $C$ and $T$ are stably birational and
$T$ is retract rational as well.
\end{proof}
From Proposition~\ref{prop:ICP_special}, understanding when $\Zhe(k,G)$ is trivial amounts to understanding
when a coflasque algebraic group is special.
When $k$ is perfect and of cohomological dimension $\le 1$, \emph{all} torsors
of connected algebraic groups are trivial by Serre's Conjecture I
(now Steinberg's Theorem \cite[\S{III.2.3}]{SerreGC}).
Thus, we have:
\begin{proposition}
If $k$ is a field of cohomological dimension $\le 1$,
then $\Zhe(k,G)=\ast$ for all reductive algebraic groups $G$.
In particular, this holds for finite fields.
\end{proposition}
In a more subtle manner, we may also leverage Serre's
Conjecture II:
\begin{conjecture}[Serre's Conjecture II]
If $k$ is a perfect field of cohomological dimension $\le 2$,
then $H^1(k,G)=\ast$ for all simply-connected semisimple algebraic groups.
\end{conjecture}
Note that Serre's conjecture II is still open in general, although many
cases are known (see the survey \cite{GilleSurvey}).
In particular, the conjecture is proved for non-archimedean local fields
\cite{Kneser1,Kneser2}.
\begin{proposition} \label{prop:SerreIIapplication}
Suppose $k$ is a field for which the conclusion of Serre's Conjecture II holds.
Let $C$ be a coflasque reductive algebraic group over $k$ and consider the exact
sequence
\[
1 \to C^{sc} \to C \to C^{tor} \to 1
\]
where $C^{sc}$ is the derived subgroup of $C$ and $C^{tor}$ is the
abelianization.
Then the induced map $H^1(k,C) \to H^1(k,C^{tor})$ is injective.
\end{proposition}
\begin{proof}
By definition, the derived subgroup $C^{sc}$ of a coflasque
reductive algebraic group is semisimple simply-connected.
By twisting, each nonempty fiber of the map $H^1(k,C) \to H^1(k,C^{tor})$
is the image of $H^1(k,{}^\gamma C^{sc})$ for a suitable cocycle
$\gamma \in Z^1(k,C)$.
Since any form of a simply-connected semisimple algebraic group
is again simply-connected semisimple, these sets are trivial by hypothesis,
so all fibers are trivial or empty.
\end{proof}
In the remainder of this section, our goal is to
understand $\Zhe(k,G)$ over number fields.
We begin with characterizations of coflasque algebraic groups over local fields.
\begin{lemma} \label{lem:coflasque_local}
If $C$ is a coflasque algebraic group over a nonarchimedean local field $k$,
then $H^1(k,C)=\ast$.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:SerreIIapplication}, which applies since Serre's
Conjecture II is known for nonarchimedean local fields, $H^1(k,C)$ injects
into $H^1(k,C^{tor})$, so it suffices to assume $C$ is a coflasque torus.
Let $K/k$ be any Galois splitting field of $C$ with Galois group $\Gamma_{K/k}$.
From Tate-Nakayama duality, see e.g. \cite[Theorem 11.3.5]{Vosky}, we
have an isomorphism
\[
H^1(\Gamma_{K/k},C(K)) \cong H^1(\Gamma_{K/k}, \widehat{C}) = 0
\]
since $\widehat{C}$ is coflasque.
Thus $H^1(k,C)=0$ as desired.
\end{proof}
The archimedean case is more complicated.
For real tori, the notions of flasque, coflasque, and
quasi-trivial all coincide, so $H^1({\mathbb R},T) = \ast$ for a coflasque real torus $T$.
However, coflasque real algebraic groups can have non-trivial torsors. Thus $\Zhe({\mathbb R},G)$ may be non-trivial when $G$ is not a torus.
\begin{ex}
The group $\op{SL}_2(\mathbb{H}) \cong \op{Spin}(5,1)$ is simply-connected hence coflasque. However,
\begin{displaymath}
|H^1(\mathbb{R}, \op{SL}_2(\mathbb{H}))| = 2
\end{displaymath}
from \cite[Section 10.1]{Adams}. Similarly, for the compact form of $E_8$, which is also simply-connected, we have
\begin{displaymath}
|H^1(\mathbb{R},E_8)| = 3
\end{displaymath}
from \cite[Section 10.2]{Adams}.
\end{ex}
Nevertheless, from \cite{Borovoi}, the set $H^1({\mathbb R},G)$ has an explicit
combinatorial description for any reductive algebraic group $G$, so this
case can be explicitly computed.
\subsection{Number fields}
\label{sec:number_fields}
We recall the \emph{Tate-Shafarevich group} of a linear algebraic group
(see, e.g., \cite[\S{7}]{PlatonovRapinchuk}).
If $G$ is a reductive algebraic group and $k$ is a number field, then
\begin{displaymath}
\Sha^i(k,G) := \op{ker} \left( H^i(k, G) \to \prod_v
H^i(k_v,G_{k_v}) \right),
\end{displaymath}
where the product is over all places $v$ of $k$.
The \emph{Tate-Shafarevich group} is the case where $i=1$,
which is an abelian group even if $G$ is not commutative.
For simply-connected algebraic groups, the Tate-Shafarevich group is trivial.
In fact, we have the following even stronger
result~\cite[Theorem~6.6]{PlatonovRapinchuk}:
\begin{theorem}[Kneser, Harder, Chernousov] \label{thm:KHC}
If $G$ is a simply-connected semisimple algebraic group over a number field $k$,
then the natural map
\[
H^1(k, G) \to \prod_{v \mathrm{\ real}} H^1(k_v,G_{k_v})
\]
is a bijection.
\end{theorem}
\begin{lemma}[{\cite[Proposition 9.4(ii)]{CT2008}}] \label{lem:Sha1to2}
Let $G$ be a reductive algebraic group over a number field.
Suppose
\[
1 \to S \to H \to G \to 1
\]
is a flasque resolution of $G$.
Then the connecting homomorphism induces
a bijection $\Sha^1(G) \cong \Sha^2(S)$.
\end{lemma}
We are now in a position to prove our final result:
\begin{proof}[Proof of Theorem~\ref{thm:Zhe_number_field}]
Let
\begin{displaymath}
1 \to P \to C \to G \to 1
\end{displaymath}
be a coflasque resolution of $G$ and let
\begin{displaymath}
1 \to S \to H \to G \to 1
\end{displaymath}
be a flasque resolution of $G$. Setting $H' := C \times_G H$, we obtain
the following commutative diagram with exact rows
\begin{equation} \label{eq:both_resolutions}
\xymatrix{
& & 1 \ar[d] & 1 \ar[d] \\
& & P \ar@{=}[r] \ar[d] & P \ar[d] \\
1 \ar[r] & S \ar[r] \ar@{=}[d] & H' \ar[d] \ar[r] & C \ar[r] \ar[d] & 1 \\
1 \ar[r] & S \ar[r] & H \ar[r] \ar[d] & G \ar[r] \ar[d] & 1 \\
& & 1 & 1 & }
\end{equation}
where $P$ is a quasi-trivial torus, $S$ is a flasque torus,
$H'$ and $H$ are quasi-trivial algebraic groups and $C$ is a coflasque
algebraic group.
By Lemma~\ref{lem:Sha1to2}, the induced maps
\begin{displaymath}
\Sha^1(k,G) \to \Sha^2(k,S)
\end{displaymath}
and
\begin{displaymath}
\Sha^1(k,C) \to \Sha^2(k,S)
\end{displaymath}
are isomorphisms.
By Proposition~\ref{prop:flasqueOfCoflasque},
there is a flasque resolution of the first type
\begin{displaymath}
1 \to S \to \left(H^\prime\right)^{tor} \to C^{tor} \to 1.
\end{displaymath}
Using Lemma~\ref{lem:Sha1to2} again, the induced map
$\Sha^1(k,C^{tor}) \to \Sha^2(k,S)$ is an isomorphism.
Thus the morphism $\Sha^1(k,C) \to \Sha^1(k,C^{tor})$ is an isomorphism.
The task is to compute $\Zhe(k,G)$. Since $\Zhe(k,G) \cong H^1(k,C)$ by Theorem~\ref{thm:big_equivalence}, it suffices to compute $H^1(k,C)$.
We start with the short exact sequence
\[
1 \to C^{sc} \to C \to C^{tor} \to 1 \ .
\]
We get a commutative diagram
\[
\SelectTips{cm}{10}\xymatrix{
H^1(k,C^{sc}) \ar[r] \ar[d] & H^1(k,C) \ar[r] \ar[d] & H^1(k,C^{tor}) \ar[d] \\
{\displaystyle \prod_v} H^1(k_v,C^{sc}) \ar[r] & {\displaystyle \prod_v} H^1(k_v,C) \ar[r] & {\displaystyle \prod_v} H^1(k_v,C^{tor}) }
\]
From Lemma~\ref{lem:coflasque_local}, we have
\begin{displaymath}
H^1(k_v,C) = H^1(k_v,C^{sc}) = H^1(k_v,C^{tor}) = \ast
\end{displaymath}
for any finite $v$; the same holds for complex $v$.
Since coflasque tori are quasi-trivial over ${\mathbb R}$,
we know $H^1({\mathbb R},C^{tor})=\ast$.
Thus, we reduce to the commutative diagram
\begin{equation} \label{eq:abelianization_in_disguise}
\SelectTips{cm}{10}\xymatrix{
H^1(k,C^{sc}) \ar[r] \ar@{=}[d] & H^1(k,C) \ar[r] \ar[d] & H^1(k,C^{tor}) \ar[d] \\
{\displaystyle \prod_{v \textrm{ real}}} H^1(k_v,C^{sc}) \ar[r] & {\displaystyle
\prod_{v \textrm{ real}}} H^1(k_v,C) \ar[r] & \ast }
\end{equation}
with exact rows,
where the left vertical map is a bijection by Theorem~\ref{thm:KHC}.
In particular, the map
\begin{displaymath}
H^1(k,C) \to \prod_{v \textrm{ real}} H^1(k_v,C)
\end{displaymath}
is surjective.
We obtain the commutative diagram
\[
\SelectTips{cm}{10}\xymatrix{
1 \ar[r] &
\Sha^1(k,C) \ar[r] \ar@{=}[d] &
H^1(k,C) \ar[r] \ar[d] &
{\displaystyle \prod_{v \textrm{ real}}} H^1(k_v,C) \ar[r] \ar[d] &
\ast
\\
1 \ar[r] &
\Sha^1(k,C^{tor}) \ar@{=}[r] &
H^1(k,C^{tor}) \ar[r] &
1}
\]
with exact rows.
We have a surjective map $H^1(k,C) \to \Sha^1(k,C^{tor})$
that admits a canonical section, namely the inclusion
$\Sha^1(k,C) \subseteq H^1(k,C)$ precomposed with the inverse of the
isomorphism $\Sha^1(k,C) \cong \Sha^1(k,C^{tor})$.
For any cocycle $\gamma \in Z^1(k,C)$, the twisted group
${}^\gamma C^{tor}$ is isomorphic to $C^{tor}$.
Thus $\Sha^1(k,C) \cong \Sha^1(k,{}^\gamma C)$
and we conclude all fibers of the map
\[
H^1(k,C) \to {\displaystyle \prod_{v \textrm{ real}}} H^1(k_v,C)
\]
are isomorphic.
Using Theorem~\ref{thm:main} and the isomorphism $\Sha^1(k,G) \cong \Sha^1(k,C)$ established above, we can rewrite the resulting direct product
\begin{displaymath}
\Zhe(k,G) \cong H^1(k,C) \cong \Sha^1(k,C) \times \prod_{v \textrm{ real}} H^1(k_v,C) \cong \Sha^1(k,G) \times \prod_{v \textrm{ real}} \Zhe(k_v,G).
\end{displaymath}
\end{proof}
\begin{remark}
M.~Borovoi pointed out that Theorem~\ref{thm:Zhe_number_field} can also
be understood using the abelian Galois cohomology group described in
\cite{BorovoiMemoir}.
Indeed, from \cite[Theorem 5.11]{BorovoiMemoir}, the commutative
square
\begin{equation} \label{eq:Borovoi}
\SelectTips{cm}{10}\xymatrix{
H^1(k,C) \ar[r]^{\op{ab}^1} \ar[d] &
H^1_{\op{ab}}(k,C) \ar[d] \\
{\displaystyle \prod_{v \textrm{ real}} H^1(k_v,C)} \ar[r]^{\op{ab}^1} &
{\displaystyle \prod_{v \textrm{ real}} H^1_{\op{ab}}(k_v,C) }}
\end{equation}
is Cartesian for any reductive group $C$ over a number field.
In the case where $C$ is coflasque,
the maps
$\op{ab}^1 : H^1(-,C) \to H^1_{\op{ab}}(-,C)$ can be identified with the
maps $H^1(-,C) \to H^1(-,C^{tor})$ by \cite[Example 3.14]{BorovoiMemoir}.
One can then show that \eqref{eq:Borovoi} can be identified with the
right hand square from
\eqref{eq:abelianization_in_disguise}.
\end{remark}
\section{Introduction}
A finite phase space of dimension $M$, where coordinate and momentum
have $M$ possible values, is a frequent component of various physical
and mathematical problems. Fast Fourier Transform (FFT) \cite{Good fft,C-T},
Schwinger factorization of unitary operators \cite{Sch 1}, generation
of \textit{kq} bases and finite dimensional Harper-like Hamiltonians
\cite{MRZ} are the problems related to finite phase space that will
be considered in this paper. A recent review of various quantum systems
with finite Hilbert space can be found in ref. \cite{Vourdas}.
Originally studied by Weyl \cite{Weyl}, the finite dimensional Hilbert
space was systematized by Schwinger in terms of {}``Unitary Operator
Bases'' \cite{Sch 1}. Schwinger considered an $M$-dimensional physical
system. Such a Hilbert space can be achieved by application of the
following boundary conditions on the wave function $\psi(x)$ and
its Fourier transform $\Psi(p)$ (ref. \cite{Zak boundary}):
\begin{equation}
\psi(x)=\psi(x+Mc),\,\,\Psi(p)=\Psi(p+\frac{2\pi\hbar}{c}),\end{equation}
where $c$ is a length unit. In what follows, we will assume $c=1$.
As a consequence of the above boundary conditions $x$ and $p$ have
a finite discrete spectrum of eigenvalues:
\begin{equation}
x=0,1,...,M-1;\,\, p=\frac{2\pi\hbar}{M}\cdot\{0,1,...,M-1\}.\end{equation}
Using unitary operators $U$ and $V$ (ref. \cite{MRZ} with $c=1$):
\begin{equation}
\left\{ \begin{array}{l}
U=e^{i\hat{x}\frac{2\pi{}}{M}},\\
V=e^{\frac{i}{\hbar}\hat{p}},\end{array}\right.\label{eq:UV}\end{equation}
the complete orthogonal operator basis of $M^{2}$ operators can be
defined as \cite{Sch 1}:
\begin{equation}
U^{k}V^{n};\, k,n=0,1,...,M-1.\label{eq:Sch Bases}\end{equation}
The above operators have the commutation relation:
\begin{equation}
V^{n}U^{k}=U^{k}V^{n}e^{\frac{2\pi i}{M}nk}.\label{eq:U-V comm}\end{equation}
For a coprime decomposition of $M=M_{1}M_{2}$, using the Fermat-Euler
theorem, Schwinger showed how to factorize the unitary operators.
The Fermat-Euler theorem states that if $M_{1}$ and $M_{2}$ are
coprime, then there exist $N_{1}$ and $N_{2}$, unique modulo $M_{1}$ and
$M_{2}$ respectively, such that:
\begin{equation}
M_{1}N_{2}=1\,(\text{mod }M_{2}),\,\, M_{2}N_{1}=1\,(\text{mod }M_{1}).\end{equation}
Therefore, the two pairs of unitary operators defined as:
\begin{equation}
\left\{ \begin{array}{l}
U_{1}=U^{M_{2}},\\
V_{1}=V^{M_{2}N_{1}},\end{array}\right.\left\{ \begin{array}{l}
U_{2}=U^{M_{1}},\\
V_{2}=V^{M_{1}N_{2}},\end{array}\right.\label{eq:Sch U V}\end{equation}
behave as independent complementary operators of factorized dimensions
$M_{1}$ and $M_{2}$. The respective commutation relations are:
\begin{equation}
V_{i}^{n_{i}}U_{i}^{k_{i}}=U_{i}^{k_{i}}V_{i}^{n_{i}}e^{\frac{2\pi i}{M_{i}}n_{i}k_{i}},\, i=1,2;\label{eq:Schw U V 1}\end{equation}
\begin{equation}
V_{i}^{n_{i}}U_{j}^{k_{j}}=U_{j}^{k_{j}}V_{i}^{n_{i}},\, i\neq j,\, i,j=1,2.\label{eq:Schw U V 2}\end{equation}
Each unitary operator on the $M$ - dimensional space (Eq. \ref{eq:Sch Bases})
can be considered as a product of two operators from the factorized dimensions
$M_{1}$ and $M_{2}$. This is due to the one-to-one {}``Sino-Ruritanian''
correspondences \cite{Good 2FFT}:
\begin{equation}
\begin{split}\begin{array}{c}
n=n_{1}M_{2}N_{1}+n_{2}M_{1}N_{2}\,\,\,(mod\, M),\\
k=k_{1}M_{2}+k_{2}M_{1}\,\,\,(mod\, M).\end{array}\end{split}
\label{eq:Sino-Ru}\end{equation}
Therefore, for every power $n$ of the operator $V$ we can find the
unique representation by the factorized unitary operators $V_{1}$
and $V_{2}$. The appropriate powers $n_{1}$ and $n_{2}$ of the
factorized operators $V_{1}$ and $V_{2}$ are determined by the first
{}``Sino-Ruritanian'' correspondence (Eq. \eqref{eq:Sino-Ru}).
Similarly, the correspondence between $U$ and $(U_{1},U_{2})$ is
determined by the second {}``Sino-Ruritanian'' correspondence (Eq.
\eqref{eq:Sino-Ru}). Another recent factorization construction based
on the Chinese Remainder Theorem (CRT) can be found in ref. \cite{Vourdas}.
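As a purely illustrative numerical sketch (ours, not part of the constructions
above), the following Python fragment verifies the Fermat-Euler inverses and the
correspondences of Eq. \eqref{eq:Sino-Ru} for the arbitrary coprime choice
$M_{1}=3$, $M_{2}=4$:
\begin{verbatim}
# Sketch (not from the references): verify the "Sino-Ruritanian"
# correspondences of Eq. (Sino-Ru) for an assumed coprime pair M1, M2.
M1, M2 = 3, 4
M = M1 * M2

# Fermat-Euler: N2 = M1^(-1) (mod M2), N1 = M2^(-1) (mod M1).
N2 = pow(M1, -1, M2)
N1 = pow(M2, -1, M1)
assert (M1 * N2) % M2 == 1 and (M2 * N1) % M1 == 1

# Both index maps (n1, n2) -> n and (k1, k2) -> k are bijections mod M.
ns = {(n1 * M2 * N1 + n2 * M1 * N2) % M
      for n1 in range(M1) for n2 in range(M2)}
ks = {(k1 * M2 + k2 * M1) % M
      for k1 in range(M1) for k2 in range(M2)}
assert ns == ks == set(range(M))
\end{verbatim}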
After we have obtained factorization of the $M$ dimensional Hilbert
space into its coprime sub-dimensions $M_{1}$ and $M_{2}$, we can
apply it to the \textit{kq} bases generation and the Harper-like Hamiltonian
model. Let us first consider the \textit{kq} bases generation. The
factorized operators from Eq. \eqref{eq:Sch U V} can be used for
generation of the following two pairs of operators (note that $V_{2}^{M_{1}}=V^{M_{1}}$
and $V_{1}^{M_{2}}=V^{M_{2}}$):
\begin{equation}
\begin{split}(a)\left\{ \begin{array}{l}
\tau(\frac{2\pi{}}{a})=e^{i\hat{x}\frac{2\pi{}}{a}}=U^{M_{2}};\\
T(a)=e^{\frac{i}{\hbar}\hat{p}a}=V^{M_{1}};\end{array}\right.(b)\left\{ \begin{array}{l}
\tau(\frac{2\pi{}}{b})=e^{i\hat{x}\frac{2\pi{}}{b}}=U^{M_{1}};\\
T(b)=e^{\frac{i}{\hbar}\hat{p}b}=V^{M_{2}},\end{array}\right.\end{split}
\label{eq:Zak operators}\end{equation}
where the dimension $M=M_{1}M_{2}$ and $a=M_{1}$, $b=M_{2}$ (according
to the notation of ref. \cite{MRZ} with c=1). Hence, by employing
all possible powers, each pair of operators (a) and (b) forms a complete
set of M commuting operators and thus generates an alternative \textit{kq}
basis for treatment of the $M$-dimensional Hilbert space. We have
two such bases:
\begin{gather}
\begin{split}\begin{array}{c}
(a)\text{ }|k,q\rangle=\frac{1}{\sqrt{M_{2}}}\sum_{s=0}^{M_{2}-1}e^{iksa}|q+sa\rangle,\\
\\(a)\left\{ \begin{array}{l}
k=\frac{2\pi}{M}f,\,\text{ }f=0,...,M_{2}-1,\\
q=0,...,M_{1}-1,\end{array}\right.\end{array}\end{split}
\begin{split}\begin{array}{cc}
(b)\text{ }|K,Q\rangle=\frac{1}{\sqrt{M_{1}}}\sum_{t=0}^{M_{1}-1}e^{iKtb}|Q+tb\rangle\\
\\(b)\left\{ \begin{array}{l}
K=\frac{2\pi}{M}f',\,\text{ }f'=0,...,M_{1}-1,\\
Q=0,...,M_{2}-1.\end{array}\right.\end{array}\end{split}
\label{eq:kq}\end{gather}
The unique property of the \textit{kq} bases is that they are eigenfunctions
of both space and momentum displacement operators. These states carry
partial information about both position and momentum, whose precise
simultaneous knowledge is limited by the non-commutation of the corresponding
operators. In the case of dimension $M=M_{1}M_{2}$ factorizable to
coprime numbers $M_{1}$ and $M_{2}$, the two \textit{kq} bases (a)
and (b) are Mutually Unbiased Bases (MUB) \cite{MRZ}. The MUB property
of the bases means that if the physical system is found in one of
the states of one MUB (for example set (a)), then it has equal probabilities
to be in all the states of the other MUB (set (b) in our example).
Mathematically, the mutual unbiasedness of the two \textit{kq} bases
means the following equality: $|\langle k,q|K,Q\rangle|^{2}=\frac{1}{M}$.
For non-coprime $M_{1}$ and $M_{2}$ the MUB property is violated.
For example, if $M_{1}=m_{1}r$ and $M_{2}=m_{2}r$ have a common
factor $r>1$, we have:
\begin{equation}
\langle k,q|K,Q\rangle=\frac{1}{\sqrt{M}}\sum_{t,s}e^{-iksa}e^{iKtb}\langle sm_{1}r+q|tm_{2}r+Q\rangle.\end{equation}
The product $\langle sm_{1}r+q|tm_{2}r+Q\rangle$ equals unity for
the solution of the following modular equation:
\begin{equation}
sm_{1}r+q-tm_{2}r-Q=0\,(mod\, M).\end{equation}
Following ref. \cite{Vinogradov} (p. 45 theorem `d'), the above equation
can be taken modulo $r$:
\begin{equation}
q=Q\,(mod\, r).\end{equation}
Therefore, for $q_{0}=0$ and $Q_{0}=1$ (which is always possible
according to the ranges of values Eqs. (\ref{eq:kq})) we have $|\langle k,q_{0}|K,Q_{0}\rangle|^{2}=0\neq\frac{1}{M}$.
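The violation is easy to observe numerically; the following illustrative sketch
(ours, for the hypothetical non-coprime choice $M_{1}=M_{2}=2$, i.e. $r=2$)
builds the two bases of Eq. \eqref{eq:kq} and confirms that the overlap vanishes
for $q_{0}=0$, $Q_{0}=1$:
\begin{verbatim}
import numpy as np

# Sketch: build the two kq bases of Eq. (kq) for the assumed non-coprime
# case M1 = M2 = 2 (so r = 2) and check that states with q != Q (mod r)
# are orthogonal, which violates mutual unbiasedness.
M1 = M2 = 2
M, a, b = M1 * M2, M1, M2

def kq_a(f, q):   # |k,q> of set (a), k = 2*pi*f/M
    v = np.zeros(M, complex)
    for s in range(M2):
        v[(q + s * a) % M] += np.exp(2j * np.pi * f / M * s * a)
    return v / np.sqrt(M2)

def kq_b(fp, Q):  # |K,Q> of set (b), K = 2*pi*fp/M
    v = np.zeros(M, complex)
    for t in range(M1):
        v[(Q + t * b) % M] += np.exp(2j * np.pi * fp / M * t * b)
    return v / np.sqrt(M1)

# q0 = 0 and Q0 = 1 differ mod r = 2, hence zero overlap (not 1/M).
for f in range(M2):
    for fp in range(M1):
        assert abs(np.vdot(kq_a(f, 0), kq_b(fp, 1))) < 1e-12
\end{verbatim}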
To complete the introduction to \textit{kq} MUB we note their quasi-periodic
properties:
\begin{equation}
\begin{split}\begin{array}{cc}
(a)\,|k+\frac{2\pi}{M_{1}},q\rangle=|k,q\rangle,\, & |k,q+M_{1}\rangle=e^{-ika}|k,q\rangle,\\
(b)\,|K+\frac{2\pi}{M_{2}},Q\rangle=|K,Q\rangle,\, & |K,Q+M_{2}\rangle=e^{-iKb}|K,Q\rangle.\end{array}\end{split}
\label{eq:Quasi kq}\end{equation}
Now, let us consider Harper-like Hamiltonians. They are defined as
Hamiltonians of one degree of freedom periodic both in coordinate
and momentum \cite{Harper-like}. For our discussion we are interested
in the use of Harper-like Hamiltonians for the energy spectra design
considered in ref. \cite{MRZ}. The energy spectra design is a direct
consequence of the factorization of the $M=M_{1}M_{2}$ - dimensional
Hilbert space to coprime constituents $M_{1}$ and $M_{2}$. In the
original version (ref. \cite{MRZ}) one considered a Harper-like Hamiltonian
$H[T(b),\tau(\frac{2\pi}{a})]$ which is a function of the two operators
$T(b)$ and $\tau(\frac{2\pi}{a})$ from Eq. \eqref{eq:Zak operators}.
It is important that the Hamiltonian $H[T(b),\tau(\frac{2\pi}{a})]$
is a function of the operators $V_{1}$ and $U_{1}$ (due to $T(b)=V_{1}^{M_{2}}$
and $\tau(\frac{2\pi}{a})=U_{1}$); in such a case only the $M_{1}$
- dimensional subspace is affected by the Hamiltonian. The $M_{2}$
- dimensional subspace is untouched by the Hamiltonian. Hence, considering
$H[T(b),\tau(\frac{2\pi}{a})]$ we expect to obtain $M_{1}$ energy
levels (with a spectrum determined by the details of the Hamiltonian)
each of which is degenerate $M_{2}$ times.
The aim of this paper is first to extend the Schwinger factorization
to non-coprime $M_{1}$ and $M_{2}$. Then the other two applications,
the \textit{kq}-like MUB generation and the energy spectra design
by Harper-like Hamiltonian, are extended correspondingly. For that
purpose, in section 2, we define the permutation operator $A$, based
on the previous study by Cooley and Tukey of Fast Fourier Transform
(FFT) \cite{C-T}. Using the operator $A$ we obtain pairs of unitary
operators, which have commutation relations as in Eqs. (\ref{eq:Schw U V 1},
\ref{eq:Schw U V 2}). In section 3 we use the new factorized unitary
pairs to generate two \textit{kq}-like MUB. New quasi-periodicity
properties are obtained in one of the bases. In section 4 we apply
the new factorization to the energy spectra design using Harper-like
Hamiltonians without any restriction on the factors $M_{1}$ and $M_{2}$
of the dimension $M=M_{1}M_{2}$. Section 5 includes a discussion
and summary.
\section{Factorization of unitary operators using the permutation operator
A}
To define the permutation operator $A$ we start by recalling the
Division Algorithm Theorem (DAT) from number theory.
The theorem states (ref. \cite{Vinogradov} page 2 or ref. \cite{Natha}
page 3) that for any integer numbers $D$ and $d$ with $d>0$, there
exists a unique pair of integer numbers $q$ and $r$ satisfying the
following conditions: \begin{equation}
\begin{split}\begin{array}{l}
(a)\text{ }D=d\cdot{q}+r,\\
(b)\text{ }0\leq{r}<d.\end{array}\end{split}
\end{equation}
Consider the special case of a non-negative integer $D$ in the range $[0,1,...,M_{1}M_{2}-1]$
and positive $d=M_{2}$. In this case there is a unique pair of integers
$q$ and $r$ satisfying the following conditions: \begin{equation}
\begin{split}\begin{array}{l}
(a)\text{ }q\in{[0,1,...,M_{1}-1]},\\
(b)\text{ }r\in{[0,1,...,M_{2}-1]},\\
(c)\text{ }D=d\cdot{q}+r.\end{array}\end{split}
\end{equation}
For our discussion, this DAT based special representation of the numbers
modulo $M=M_{1}M_{2}$ is the crucial component.
In 1965 Cooley and Tukey \cite{C-T} introduced an FFT algorithm not
limited to coprime factorization of $M=M_{1}M_{2}$. They used two
complementary DAT based representations for the $x$ and $p$ variable
indices:
\begin{equation}
\begin{split}\left\{ \begin{array}{l}
n=n_{1}M_{2}+n_{2},\\
k=k_{1}+k_{2}M_{1},\end{array}\right.\end{split}
\label{eq:conj ENR}\end{equation}
which enabled them to simplify the Discrete Fourier Transform (DFT)
calculation ($w_{M}=e^{\frac{2\pi{}i}{M}}$):
\begin{equation}
\begin{split}\begin{array}{c}
p_{k}=\frac{1}{\sqrt{M}}\sum_{n=0}^{M-1}x_{n}w_{M}^{nk}=\frac{1}{\sqrt{M}}\sum_{n=0}^{M-1}x_{n}w_{M}^{(n_{1}M_{2}+n_{2})(k_{1}+k_{2}M_{1})}=\\
\\=\frac{1}{\sqrt{M_{2}}}\sum_{n_{2}=0}^{M_{2}-1}w_{M}^{n_{2}(k_{1}+k_{2}M_{1})}\frac{1}{\sqrt{M_{1}}}\sum_{n_{1}=0}^{M_{1}-1}x_{n}w_{M}^{n_{1}k_{1}M_{2}}.\end{array}\end{split}
\label{eq:C-T FFT}\end{equation}
The two summations in the last line of the above formula require $M\cdot\sum_{i=1}^{2}M_{i}$
operations instead of $M^{2}$ operations by direct calculation \cite{C-T}.
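For completeness, a small numerical sketch (ours; the choice $M_{1}=2$, $M_{2}=3$
is arbitrary) compares the nested sums of Eq. \eqref{eq:C-T FFT} with a direct
evaluation of the DFT:
\begin{verbatim}
import numpy as np

# Sketch: verify the Cooley-Tukey split of the DFT, Eq. (C-T FFT),
# for the assumed factorization M = M1*M2 with M1 = 2, M2 = 3.
M1, M2 = 2, 3
M = M1 * M2
w = np.exp(2j * np.pi / M)
rng = np.random.default_rng(0)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Direct DFT: p_k = (1/sqrt(M)) * sum_n x_n w^(n k).
p_direct = np.array([sum(x[n] * w**(n * k) for n in range(M))
                     for k in range(M)]) / np.sqrt(M)

# Cooley-Tukey form with n = n1*M2 + n2 and k = k1 + k2*M1.
p_ct = np.zeros(M, complex)
for k1 in range(M1):
    for k2 in range(M2):
        k = k1 + k2 * M1
        inner = [sum(x[n1 * M2 + n2] * w**(n1 * k1 * M2) for n1 in range(M1))
                 for n2 in range(M2)]
        p_ct[k] = sum(w**(n2 * k) * inner[n2]
                      for n2 in range(M2)) / np.sqrt(M1 * M2)

assert np.allclose(p_direct, p_ct)
\end{verbatim}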
Following Cooley and Tukey, we define a permutation operator $A$,
which acts in the finite $M$ - dimensional Hilbert space:
\begin{equation}
A=\sum_{x_{1}=0}^{M_{1}-1}\sum_{x_{2}=0}^{M_{2}-1}|x_{2}+M_{2}x_{1}\rangle\langle x_{1}+M_{1}x_{2}|.\label{eq:A operator}\end{equation}
For the construction of the operator $A$ we used two DAT based representations,
as in Eq. \eqref{eq:conj ENR}, applied to the coordinate states $|x\rangle$.
In the coordinate representation our operator is equal to the stride
permutation matrix widely used in signal processing \cite{Toli-An}.
For a simple illustration, let us consider the example of dimension
$M=6$, where $M_{1}=2$ and $M_{2}=3$. The table of correspondence
between the numbers $x=0,1,2,3,4,5$ and pairs of numbers $(x_{1}=0,1;\, x_{2}=0,1,2)$
according to the rule $x=x_{1}+M_{1}x_{2}$ is:
\begin{table}[H]
\begin{centering}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$x$ & 0 & 1 & 2 & 3 & 4 & 5\tabularnewline
\hline
\hline
$x_{1}$ & 0 & 1 & 0 & 1 & 0 & 1\tabularnewline
\hline
$x_{2}$ & 0 & 0 & 1 & 1 & 2 & 2\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{DAT representation of $x=(x_{1},x_{2})$}
\end{table}
With the rule $x=x_{1}^{'}M_{2}+x_{2}^{'}$ we have another table:
\begin{table}[H]
\begin{centering}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$x$ & 0 & 1 & 2 & 3 & 4 & 5\tabularnewline
\hline
\hline
$x_{1}^{'}$ & 0 & 0 & 0 & 1 & 1 & 1\tabularnewline
\hline
$x_{2}^{'}$ & 0 & 1 & 2 & 0 & 1 & 2\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{DAT representation of $x=(x_{1}^{'},x_{2}^{'})$}
\end{table}
Consequently, the permutation matrix $A$ corresponding to Eq. \eqref{eq:A operator}
(using the standard basis for $|x\rangle$) is:
\begin{equation}
A=\left(\begin{array}{cccccc}
1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1\end{array}\right).\end{equation}
Also, it can be written in a compact way as $A=(0)(1,3,4,2)(5)$.
This means that $A$ leaves the coordinate states $|0\rangle$ and
$|5\rangle$ unchanged, $|1\rangle$ goes into $|3\rangle$, $|3\rangle$
goes into $|4\rangle$, $|4\rangle$ goes into $|2\rangle$ and $|2\rangle$
goes into $|1\rangle$. We note that the operator $A$ is unitary:
\begin{equation}
AA^{\dagger}=I.\label{eq:perm pr}\end{equation}
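The construction of Eq. \eqref{eq:A operator} is easy to reproduce numerically;
the sketch below (ours, for the same example $M_{1}=2$, $M_{2}=3$) rebuilds the
matrix, its unitarity and its cycle structure:
\begin{verbatim}
import numpy as np

# Sketch: rebuild the permutation operator A of Eq. (A operator) for the
# example M1 = 2, M2 = 3 and check the properties quoted in the text.
M1, M2 = 2, 3
M = M1 * M2

A = np.zeros((M, M))
for x1 in range(M1):
    for x2 in range(M2):
        A[x2 + M2 * x1, x1 + M1 * x2] = 1   # |x2 + M2*x1><x1 + M1*x2|

# Unitarity, Eq. (perm pr); A is real, so A^dagger = A^T.
assert np.allclose(A @ A.T, np.eye(M))

# Cycle structure: column x holds the image A|x>.
image = {x: int(np.argmax(A[:, x])) for x in range(M)}
assert image == {0: 0, 1: 3, 2: 1, 3: 4, 4: 2, 5: 5}  # A = (0)(1,3,4,2)(5)
\end{verbatim}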
With the permutation operator $A$ at hand, we can define the new
factorization. To do this we use the pairs of operators from Eq. \eqref{eq:Zak operators},
whose definition captures a general factorization of $M=M_{1}M_{2}$.
We modify the (a) set of operators by the permutation operator $A$
of Eq. \eqref{eq:A operator}, and for convenience relabel all the
operators:\\
\begin{equation}
\begin{split}(a')\left\{ \begin{array}{l}
\tilde{U}_{1}=\tau'(\frac{2\pi{}}{a})=A\tau(\frac{2\pi{}}{a})A^{\dag}=AU^{M_{2}}A^{\dag};\\
\tilde{V}_{2}=T'(a)=AT(a)A^{\dag}=AV^{M_{1}}A^{\dag};\end{array}\right.(b)\left\{ \begin{array}{l}
\tilde{U}_{2}=\tau(\frac{2\pi{}}{b})=e^{i\hat{x}\frac{2\pi{}}{b}}=U^{M_{1}};\\
\tilde{V}_{1}=T(b)=e^{\frac{i}{\hbar}\hat{p}b}=V^{M_{2}}.\end{array}\right.\end{split}
\label{eq:New Fact}\end{equation}
The tilde denotes the new version of operators. The (a') and (b) sets
of operators replace the Schwinger operators of Eq. \eqref{eq:Sch U V}.
As we will show shortly, they obey all the commutation relations Eqs.
(\ref{eq:Schw U V 1}, \ref{eq:Schw U V 2}) of factorized operators.
Therefore, the (a') and (b) sets of operators define the new factorization,
not restricted to coprime decomposition. The commutation relation
Eq. \eqref{eq:Schw U V 2} is fulfilled by the tilde operators (Eq.
\eqref{eq:New Fact}) due to the unitarity property of $A$. For the
commutation relation Eq. \eqref{eq:Schw U V 1} we first calculate
the operation of $AU^{M_{2}}A^{\dag}$ and $A^{\dag}U^{M_{1}}A$ on
coordinate states:
\begin{equation}
\begin{split}\begin{split}\begin{array}{c}
AU^{M_{2}}A^{\dag}|x\rangle=AU^{M_{2}}A^{\dag}|x_{1}M_{2}+x_{2}\rangle=AU^{M_{2}}|x_{1}+M_{1}x_{2}\rangle=\\
=e^{\frac{2\pi i}{M_{1}}x_{1}}A|x_{1}+M_{1}x_{2}\rangle=e^{\frac{2\pi i}{M_{1}}x_{1}}|x_{1}M_{2}+x_{2}\rangle;\end{array}\end{split}
\end{split}
\end{equation}
\begin{equation}
\begin{split}\begin{array}{c}
A^{\dag}U^{M_{1}}A|x\rangle=A^{\dag}U^{M_{1}}A|x_{1}^{'}+M_{1}x_{2}^{'}\rangle=A^{\dag}U^{M_{1}}|x_{1}^{'}M_{2}+x_{2}^{'}\rangle=\\
=e^{\frac{2\pi i}{M_{2}}x_{2}^{'}}A^{\dag}|x_{1}^{'}M_{2}+x_{2}^{'}\rangle=e^{\frac{2\pi i}{M_{2}}x_{2}^{'}}|x_{1}^{'}+M_{1}x_{2}^{'}\rangle,\end{array}\end{split}
\end{equation}
where the only difference in the above calculations is that we used
different DAT based representations for the $x$ values. Using the
above expressions one can prove:
\begin{equation}
\begin{split}\begin{array}{c}
\tilde{V}_{1}^{n_{1}}\tilde{U}_{1}^{k_{1}}|x\rangle=V^{M_{2}n_{1}}AU^{M_{2}k_{1}}A^{\dag}|x_{1}M_{2}+x_{2}\rangle=e^{\frac{2\pi i}{M_{1}}k_{1}x_{1}}V^{M_{2}n_{1}}|x_{1}M_{2}+x_{2}\rangle=\\
=e^{\frac{2\pi i}{M_{1}}k_{1}x_{1}}|(x_{1}-n_{1})M_{2}+x_{2}\rangle,\end{array}\end{split}
\end{equation}
whereas applying $\tilde{U}_{1}^{k_{1}}\tilde{V}_{1}^{n_{1}}$ we
get:
\begin{equation}
\begin{split}\begin{array}{c}
\tilde{U}_{1}^{k_{1}}\tilde{V}_{1}^{n_{1}}|x\rangle=AU^{M_{2}k_{1}}A^{\dag}V^{M_{2}n_{1}}|x_{1}M_{2}+x_{2}\rangle=AU^{M_{2}k_{1}}A^{\dag}|(x_{1}-n_{1})M_{2}+x_{2}\rangle=\\
=e^{\frac{2\pi i}{M_{1}}k_{1}(x_{1}-n_{1})}|(x_{1}-n_{1})M_{2}+x_{2}\rangle.\end{array}\end{split}
\end{equation}
Similar results can be shown for the operators $\tilde{V}_{2}^{n_{2}}$
and $\tilde{U}_{2}^{k_{2}}$. Summarizing the results, the commutation
relation (Eq. \eqref{eq:Schw U V 1}) is fulfilled:
\begin{equation}
\tilde{V}_{i}^{n_{i}}\tilde{U}_{i}^{k_{i}}=\tilde{U}_{i}^{k_{i}}\tilde{V}_{i}^{n_{i}}e^{\frac{2\pi i}{M_{i}}n_{i}k_{i}},\, i=1,2.\end{equation}
Therefore, using the permutation operator $A$ we obtained the new
factorization of the unitary operators, which is not limited to coprime
decomposition of $M$. Here we used the permutation operator $A$
for the transformation of the (a) set of operators to the new (a')
set for the factorization. Obviously we could have applied the transformation
to the (b) set, which would have also enabled the factorization.
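A direct numerical check of the commutation relations Eqs. (\ref{eq:Schw U V 1},
\ref{eq:Schw U V 2}) for the operators of Eq. \eqref{eq:New Fact} is
straightforward; the sketch below (ours) uses the coordinate-representation
conventions $U|x\rangle=e^{\frac{2\pi i}{M}x}|x\rangle$ and
$V|x\rangle=|x-1\,(mod\, M)\rangle$ implicit in the calculations above, and the
arbitrary non-coprime choice $M_{1}=2$, $M_{2}=4$:
\begin{verbatim}
import numpy as np

# Sketch: check Eqs. (Schw U V 1) and (Schw U V 2) for the tilde operators
# of Eq. (New Fact), using the conventions U|x> = exp(2*pi*i*x/M)|x> and
# V|x> = |x-1 mod M>, and an assumed non-coprime split M1 = 2, M2 = 4.
M1, M2 = 2, 4
M = M1 * M2

U = np.diag(np.exp(2j * np.pi * np.arange(M) / M))
V = np.roll(np.eye(M), -1, axis=0)               # V|x> = |x-1 mod M>
A = np.zeros((M, M))
for x1 in range(M1):
    for x2 in range(M2):
        A[x2 + M2 * x1, x1 + M1 * x2] = 1

mp = np.linalg.matrix_power
U1t = A @ mp(U, M2) @ A.T                        # tilde U_1
V2t = A @ mp(V, M1) @ A.T                        # tilde V_2
U2t = mp(U, M1)                                  # tilde U_2
V1t = mp(V, M2)                                  # tilde V_1

# Eq. (Schw U V 1): V_i U_i = U_i V_i exp(2*pi*i/M_i).
assert np.allclose(V1t @ U1t, np.exp(2j * np.pi / M1) * U1t @ V1t)
assert np.allclose(V2t @ U2t, np.exp(2j * np.pi / M2) * U2t @ V2t)
# Eq. (Schw U V 2): operators belonging to different factors commute.
assert np.allclose(V1t @ U2t, U2t @ V1t)
assert np.allclose(V2t @ U1t, U1t @ V2t)
\end{verbatim}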
In the particular case of $M_{1}=M_{2}$ (a=b), the two \textit{kq}
bases Eq. \eqref{eq:kq} are identical, and so are the two (a) and
(b) sets of operators in Eq. \eqref{eq:Zak operators}, and the permutation
operator $A$ from Eq. \eqref{eq:A operator} satisfies $A^{2}=I$.
In this case we have only one set of \textit{kq} - operators and only
one $|k,q\rangle$ basis. Application of the permutation operator
$A$ to that set of operators defines the tilde set, which obeys the
proper commutation relations with the original set (Eqs. \ref{eq:Schw U V 1},
\ref{eq:Schw U V 2}). Their respective eigenstates (obtained by applying
A to the unique $|k,q\rangle$ basis) are MUB with respect to the
original $|k,q\rangle$ states. (See also the next example and the
treatment of Harper - like Hamiltonians for $M=4=2^{2}$ in section
4).
The permutation operator $A$, based on the analogy to the Cooley
and Tukey FFT, solves the unitary operator factorization. To acquire
some physical intuition about the operator $A$, let us consider the
example of dimension $M=4$, where $M_{1}=M_{2}=2$. In this case,
the operators from Eq. \eqref{eq:Zak operators} in the coordinate
representation may be presented as:
{\small \begin{equation}
\begin{split}(a)\left\{ \begin{array}{l}
U^{2}=\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & -1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & -1\end{array}\right),\,\, V^{2}=\left(\begin{array}{cccc}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\end{array}\right).\end{array}\right.\end{split}
\end{equation}
}The sets (a) and (b) of operators in Eq. \eqref{eq:Zak operators}
are identical in our example. To get the second set of factorized
operators we write the (a') set from Eq. \eqref{eq:New Fact}:
{\small \begin{equation}
\begin{split}(a')\left\{ \begin{array}{l}
AU^{2}A^{\dag}=\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & -1\end{array}\right),\,\, AV^{2}A^{\dag}=\left(\begin{array}{cccc}
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\end{array}\right),\end{array}\right.\end{split}
\end{equation}
}where the permutation operator in coordinate representation is $A=(0)(1,2)(3)$.
The operator $U^{2}$ has two eigenvalues (1 and -1), and the operator
$V^{2}$ permutes between the vectors with the same eigenvalue (1
or -1). This is why the operator $U^{2}$ commutes with the operator
$V^{2}$. Permutation by the operator $A$ turns the operator $U^{2}$
into the operator $AU^{2}A^{\dag}$, which anticommutes with $V^{2}$.
The operators $AU^{2}A^{\dag}$ and $V^{2}$ form a complementary
pair of operators for sub-dimension $M_{1}=2$, where their anticommutation
is consistent with Eq. \eqref{eq:Schw U V 1}. The operators $U^{2}$
and $AV^{2}A^{\dag}$ form another complementary pair of operators
for sub-dimension $M_{2}=2$. The operator $A$ permutes the eigenvalues
of $U^{2}$ in such a way as to make the operator $V^{2}$ anticommute
with $AU^{2}A^{\dag}$.
A more interesting example is dimension $M=12$, where both coprime
and non-coprime factorizations are possible. For the case of $M_{1}=2$
and $M_{2}=6$, using the coordinate representation, the permutation
operator is
\[
A=(0)(1,6,3,7,9,10,5,8,4,2)(11).\]
In the other case, where $M_{1}=3$ and $M_{2}=4$, the permutation
operator is
\[
A=(0)(1,4,5,9,3)(2,8,10,7,6)(11).\]
In both cases the construction of the operators from Eq. \eqref{eq:New Fact}
leads to the factorized pairs of operators. The generality of the new
factorization enables us to perform it for any factors $M_{1}$ and $M_{2}$
of the dimension $M=M_{1}M_{2}$. In the case where $M_{1}$ or $M_{2}$ is
itself composite, the factorization can be applied again, until only prime
factors remain.
Note that Schwinger's solution for non-coprime factorization in ref.\cite{Sch 1}
gives the factorized pairs of operators (see also ref.\cite{Sch-Eng}),
which obey the commutation relations of Eqs. (\ref{eq:Schw U V 1},
\ref{eq:Schw U V 2}). However, while for the coprime factorization
an explicit expression is given in ref.\cite{Sch 1} connecting the
factorized pairs with the original operators $U$ and $V$, no
such expression is given for the non-coprime case. In our paper this
explicit expression is given in Eq. \eqref{eq:New Fact}.
\section{New \textit{kq}-like bases}
Each set (a') and (b) of operators (Eq. \eqref{eq:New Fact}) generates
$M$ commuting operators and can be used for the definition of a basis
for the $M$ - dimensional Hilbert space. The set (b) of operators
has, as an eigenbasis, the $|K,Q\rangle$ basis. As a result of the
unitary transformation of the (a) set, the (a') set defines the \textit{kq}-like
basis $|\widetilde{k,q}\rangle$ as follows:
\begin{equation}
\begin{split}\begin{array}{c}
(a')\,\text{ }|\widetilde{k,q}\rangle=A|k,q\rangle=\frac{1}{\sqrt{M_{2}}}\sum_{s=0}^{M_{2}-1}e^{iksa}|s+qM_{2}\rangle,\\
\\(a')\left\{ \begin{array}{l}
k=\frac{2\pi}{M}f,\,\text{ }f=0,...,M_{2}-1,\\
q=0,...,M_{1}-1.\end{array}\right.\end{array}\end{split}
\label{eq:new kq}\end{equation}
As a result of the fact that the two sets of operators $(\tilde{V}_{1},\tilde{U}_{1})$
and $(\tilde{V}_{2},\tilde{U}_{2})$ describe $M_{1}$ and $M_{2}$
subspaces in the entire $M$ - dimensional Hilbert space, the bases
$|\widetilde{k,q}\rangle$ and $|K,Q\rangle$ are mutually unbiased.
Let us check the overlap between $|\widetilde{k,q}\rangle$ and $|K,Q\rangle$
states ($a=M_{1},\, b=M_{2})$:
\begin{equation}
\langle\widetilde{k,q}|K,Q\rangle=\frac{1}{\sqrt{M_{1}}}\frac{1}{\sqrt{M_{2}}}\sum_{s=0}^{M_{2}-1}\sum_{t=0}^{M_{1}-1}e^{-iksa}e^{iKtb}\langle s+qM_{2}|Q+tb\rangle.\end{equation}
Inserting ($a=M_{1},b=M_{2})$ we have:
\begin{equation}
\begin{split}\begin{array}{c}
=\frac{1}{\sqrt{M}}\sum_{s=0}^{M_{2}-1}\sum_{t=0}^{M_{1}-1}e^{-iksM_{1}}e^{iKtM_{2}}\langle s+qM_{2}|Q+tM_{2}\rangle=\\
\\=\frac{1}{\sqrt{M}}\sum_{s=0}^{M_{2}-1}\sum_{t=0}^{M_{1}-1}e^{-iksM_{1}}e^{iKtM_{2}}\delta^{M_{2}}(s-Q)\delta^{M_{1}}(q-t)=\\
\\=\frac{1}{\sqrt{M}}e^{-ikQM_{1}}e^{iKqM_{2}}.\end{array}\end{split}
\end{equation}
Here $\delta^{M_{i}}(x-x_{0})$ means that the argument of the delta
function is taken modulo $M_{i},$ $\delta^{M_{i}}(0)=1$ and elsewhere
is zero. Therefore, these bases are mutually unbiased: $|\langle\widetilde{k,q}|K,Q\rangle|^{2}=\frac{1}{M}$.
We call the basis $|\widetilde{k,q}\rangle$ a \textit{kq}-like basis
because it is not an eigenfunction of the same operators as the $|k,q\rangle$
basis, but of the operators related to them by the permutation transformation.
In addition, it has different periodicity properties. By its definition,
$|\widetilde{k,q}\rangle$ has the fully periodic property:
\begin{equation}
(a')\,|\widetilde{k+\frac{2\pi}{M_{1}},q}\rangle=|\widetilde{k,q+M_{1}}\rangle=|\widetilde{k,q}\rangle.\end{equation}
To show explicitly the difference and similarity between the bases
$|\widetilde{k,q}\rangle$ and $|k,q\rangle$ we consider an example
of dimension $M=6$ with $M_{1}=3$ and $M_{2}=2$. Using the $|x\rangle$
representation we list in three columns all basis members of $|k,q\rangle$
on the left hand side, all $|\widetilde{k,q}\rangle$ basis vectors
in the middle and $|K,Q\rangle$ on the right hand side:
{\footnotesize \begin{equation}
\begin{split}\left\{ \begin{array}{l}
|(0,0)\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle+|3\rangle\right),\\
|(0,1)\rangle=\frac{1}{\sqrt{2}}\left(|1\rangle+|4\rangle\right),\\
|(0,2)\rangle=\frac{1}{\sqrt{2}}\left(|2\rangle+|5\rangle\right),\\
|(1,0)\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle-|3\rangle\right),\\
|(1,1)\rangle=\frac{1}{\sqrt{2}}\left(|1\rangle-|4\rangle\right),\\
|(1,2)\rangle=\frac{1}{\sqrt{2}}\left(|2\rangle-|5\rangle\right),\end{array}\right.\left\{ \begin{array}{c}
|\widetilde{(0,0)}\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle+|1\rangle\right),\\
|\widetilde{(0,1)}\rangle=\frac{1}{\sqrt{2}}\left(|2\rangle+|3\rangle\right),\\
|\widetilde{(0,2)}\rangle=\frac{1}{\sqrt{2}}\left(|4\rangle+|5\rangle\right),\\
|\widetilde{(1,0)}\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle-|1\rangle\right),\\
|\widetilde{(1,1)}\rangle=\frac{1}{\sqrt{2}}\left(|2\rangle-|3\rangle\right),\\
|\widetilde{(1,2)}\rangle=\frac{1}{\sqrt{2}}\left(|4\rangle-|5\rangle\right),\end{array}\right.\left\{ \begin{array}{c}
|(0,0)\rangle=\frac{1}{\sqrt{3}}\left(|0\rangle+|2\rangle+|4\rangle\right),\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\\
|(0,1)\rangle=\frac{1}{\sqrt{3}}\left(|1\rangle+|3\rangle+|5\rangle\right),\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\\
|(1,0)\rangle=\frac{1}{\sqrt{3}}\left(|0\rangle+e^{\frac{2\pi i}{3}}|2\rangle+e^{\frac{4\pi i}{3}}|4\rangle\right),\\
|(1,1)\rangle=\frac{1}{\sqrt{3}}\left(|1\rangle+e^{\frac{2\pi i}{3}}|3\rangle+e^{\frac{4\pi i}{3}}|5\rangle\right),\\
|(2,0)\rangle=\frac{1}{\sqrt{3}}\left(|0\rangle+e^{\frac{4\pi i}{3}}|2\rangle+e^{\frac{2\pi i}{3}}|4\rangle\right),\\
|(2,1)\rangle=\frac{1}{\sqrt{3}}\left(|1\rangle+e^{\frac{4\pi i}{3}}|3\rangle+e^{\frac{2\pi i}{3}}|5\rangle\right).\end{array}\right.\end{split}
\end{equation}
}Hence, the $|\widetilde{k,q}\rangle$ and the $|k,q\rangle$ bases
are neither equal nor orthogonal to one another (they are eigenfunctions
of different sets of operators). Nevertheless, both these bases are
mutually unbiased to the $|K,Q\rangle$ basis in this coprime case.
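The mutual unbiasedness of $|\widetilde{k,q}\rangle$ and $|K,Q\rangle$ can also be
confirmed numerically; the following sketch (ours) checks
$|\langle\widetilde{k,q}|K,Q\rangle|^{2}=\frac{1}{M}$ both for the coprime example
above ($M_{1}=3$, $M_{2}=2$) and for the non-coprime choice $M_{1}=M_{2}=2$:
\begin{verbatim}
import numpy as np

# Sketch: numerical check that |<tilde{k,q}|K,Q>|^2 = 1/M, both for the
# coprime listing above (M1 = 3, M2 = 2) and for an assumed non-coprime
# case (M1 = M2 = 2), with |tilde{k,q}> = A|k,q> as in Eq. (new kq).
def bases(M1, M2):
    M, a, b = M1 * M2, M1, M2
    A = np.zeros((M, M))
    for x1 in range(M1):
        for x2 in range(M2):
            A[x2 + M2 * x1, x1 + M1 * x2] = 1
    akq, bKQ = [], []
    for f in range(M2):              # set (a) of Eq. (kq): k = 2*pi*f/M
        for q in range(M1):
            v = np.zeros(M, complex)
            for s in range(M2):
                v[(q + s * a) % M] += np.exp(2j * np.pi * f / M * s * a)
            akq.append(v / np.sqrt(M2))
    for fp in range(M1):             # set (b) of Eq. (kq): K = 2*pi*fp/M
        for Q in range(M2):
            v = np.zeros(M, complex)
            for t in range(M1):
                v[(Q + t * b) % M] += np.exp(2j * np.pi * fp / M * t * b)
            bKQ.append(v / np.sqrt(M1))
    return [A @ v for v in akq], bKQ  # (|tilde{k,q}>, |K,Q>)

for (M1, M2) in [(3, 2), (2, 2)]:
    M = M1 * M2
    tkq, bKQ = bases(M1, M2)
    overlaps = np.array([[abs(np.vdot(u, w)) ** 2 for w in bKQ] for u in tkq])
    assert np.allclose(overlaps, 1.0 / M)
\end{verbatim}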
\section{Engineering of the energy spectrum using the permutation operator
A in Harper-like Hamiltonians}
The new factorized pairs of operators $\left(T'(a),\tau(\frac{2\pi}{b})\right)$
and $\left(T(b),\tau'(\frac{2\pi}{a})\right)$ (Eq. \eqref{eq:New Fact})
describe $M_{2}$ and $M_{1}$ dimensional subspaces, respectively
\cite{Sch 1}. Therefore, replacing the Harper-like Hamiltonian $H[T(b),\tau(\frac{2\pi}{a})]$
of ref.\cite{MRZ} by $H[T(b),\tau'(\frac{2\pi}{a})]$ we should obtain
$M_{1}$ energy levels, each of which is degenerate $M_{2}$ times,
without restriction for $M_{2}$ and $M_{1}$ to be coprime.
To show the advantage of the new factorization, we compare it with
the energy spectra design method of ref. \cite{MRZ}. As a first example,
let us consider the dimension $M=6$. We choose the simple Harper-like
Hamiltonian proposed in ref. \cite{MRZ}:
\begin{equation}
\begin{split}\begin{array}{l}
H=H(T(b),\tau(\frac{2\pi}{a}))=V_{1}cos(\frac{b}{\hbar}\hat{p})+V_{2}cos(\frac{2\pi}{a}\hat{x}),\end{array}\end{split}
\label{eq:Harper H}\end{equation}
where $V_{1}$ and $V_{2}$ are constants. We solve this Hamiltonian
using the kq-representation:
\begin{equation}
|\psi\rangle=\sum_{k,q}|k,q\rangle\langle k,q|\psi\rangle=\sum_{k,q}C_{k,q}|k,q\rangle.\end{equation}
The resulting eigenvalue equation for our Hamiltonian is:
\begin{equation}
\begin{split}\begin{array}{l}
[V_{1}cos(\frac{b}{\hbar}\hat{p})+V_{2}cos(\frac{2\pi}{a}\hat{x})]\sum_{k,q}C_{k,q}|k,q\rangle=\varepsilon\sum_{k,q}C_{k,q}|k,q\rangle.\end{array}\end{split}
\end{equation}
After applying the operators we have:
\begin{equation}
\begin{split}\begin{array}{l}
\sum_{k,q}C_{k,q}[\frac{V_{1}}{2}(|k,q-b\rangle+|k,q+b\rangle)+V_{2}cos(\frac{2\pi}{a}q)|k,q\rangle]=\varepsilon\sum_{k,q}C_{k,q}|k,q\rangle.\end{array}\end{split}
\label{eq:Harper 6}\end{equation}
The above eigenvalue equation can be solved for each value of $k=\frac{2\pi}{M}f$
independently. So for our particular choice of dimension $M=6$ with
$M_{1}=a=2$ and $M_{2}=b=3$, performing the summation over $q$
values with the use of the quasi-periodicity property of the $|k,q\rangle$
states, we obtain the following equation (for some particular $k$
value): \begin{equation}
\begin{split}\begin{array}{l}
C_{k,0}[\frac{V_{1}}{2}(e^{4ki}|k,1\rangle+e^{-2ki}|k,1\rangle)+V_{2}cos(\frac{2\pi}{2}\cdot0)|k,0\rangle]+\\
\\C_{k,1}[\frac{V_{1}}{2}(e^{2ki}|k,0\rangle+e^{-4ki}|k,0\rangle)+V_{2}cos(\frac{2\pi}{2}\cdot1)|k,1\rangle]=\\
\\=\varepsilon[C_{k,0}|k,0\rangle+C_{k,1}|k,1\rangle].\end{array}\end{split}
\end{equation}
Using the orthogonality of the $|k,q\rangle$ states the above equation
is equivalent to the solution of the following $M_{1}=2$ coupled
equations: \begin{equation}
\begin{split}\begin{array}{l}
\left(\begin{array}{cc}
V_{2} & V_{1}e^{-\frac{2\pi i}{6}4f}\\
V_{1}e^{\frac{2\pi i}{6}4f} & -V_{2}\end{array}\right)\left(\begin{array}{c}
C_{k,0}\\
C_{k,1}\end{array}\right)=\varepsilon\left(\begin{array}{c}
C_{k,0}\\
C_{k,1}\end{array}\right).\end{array}\end{split}
\end{equation}
The energy spectrum $\varepsilon$ is: \begin{equation}
\begin{split}\begin{array}{l}
\varepsilon_{1,2}=\pm\sqrt{V_{1}^{2}+V_{2}^{2}},\end{array}\end{split}
\end{equation}
which is $f$ independent and therefore each energy level is 3-fold
degenerate (note that for the current example $f\in\{0,1,2\}$ and $k=\frac{2\pi}{6}f$).
The relation between the coefficients is:
\begin{equation}
C_{k,0}=\frac{V_{1}e^{-\frac{2\pi i}{6}4f}}{\varepsilon-V_{2}}C_{k,1}\,\text{ or equally }\, C_{k,1}=\frac{V_{1}e^{\frac{2\pi i}{6}4f}}{\varepsilon+V_{2}}C_{k,0}.\label{eq:C k-dep phase}\end{equation}
On the other hand, if instead of the coprime-factorized $M=6$ we choose
$M=4$ and substitute $M_{1}=a=2$ and $M_{2}=b=2$ into equation
\eqref{eq:Harper 6}, we get Eq. \eqref{eq:Non degen} with non-degenerate
energy spectrum:
\begin{equation}
\sum_{k,q}C_{k,q}[\frac{V_{1}}{2}(|k,q-2\rangle+|k,q+2\rangle)+V_{2}cos(\frac{2\pi}{2}q)|k,q\rangle]=\varepsilon\sum_{k,q}C_{k,q}|k,q\rangle.\label{eq:Non degen}\end{equation}
Using the quasi-periodicity properties of $|k,q\rangle$ states we
have:
\begin{equation}
\sum_{k,q}C_{k,q}[V_{1}cos(2k)|k,q\rangle+V_{2}cos(\frac{2\pi}{2}q)|k,q\rangle]=\varepsilon\sum_{k,q}C_{k,q}|k,q\rangle,\end{equation}
and the energy spectrum is:
\begin{equation}
\varepsilon_{1,2,3,4}=\pm V_{1}\pm V_{2}.\label{eq:E non-dege}\end{equation}
This result is expected, because of the absence of factorization into
sub-dimensions $M_{1}=2$ and $M_{2}=2$ using the operators of Eq.
\eqref{eq:Zak operators}.
Let us now follow the same procedure with the new operators of Eq.
\eqref{eq:New Fact}. Accordingly, the Harper-like Hamiltonian of
Eq. \eqref{eq:Harper H} changes to: \begin{equation}
\begin{split}\begin{array}{l}
H=H(T(b),\tau'(\frac{2\pi}{a}))=V_{1}cos(\frac{b}{\hbar}\hat{p})+V_{2}Acos(\frac{2\pi}{a}\hat{x})A^{\dag}.\end{array}\end{split}
\label{eq:new Harper H}\end{equation}
To compare the two schemes we solve the above Hamiltonian using the
$\widetilde{kq}$-representation:
\begin{equation}
|\psi\rangle=\sum_{k,q}|\widetilde{k,q}\rangle\langle\widetilde{k,q}|\psi\rangle=\sum_{k,q}\widetilde{C}_{k,q}|\widetilde{k,q}\rangle.\end{equation}
The eigenvalue equation for our Hamiltonian is: \begin{equation}
\begin{split}\begin{array}{l}
[V_{1}cos(\frac{b}{\hbar}\hat{p})+V_{2}Acos(\frac{2\pi}{a}\hat{x})A^{\dag}]\sum_{k,q}\widetilde{C}_{k,q}|\widetilde{k,q}\rangle=\varepsilon\sum_{k,q}\widetilde{C}_{k,q}|\widetilde{k,q}\rangle.\end{array}\end{split}
\end{equation}
After applying the operators we have:
\begin{equation}
\begin{split}\begin{array}{l}
\sum_{k,q}\widetilde{C}_{k,q}[\frac{V_{1}}{2}(|\widetilde{k,q-1}\rangle+|\widetilde{k,q+1}\rangle)+V_{2}cos(\frac{2\pi}{a}q)|\widetilde{k,q}\rangle]=\varepsilon\sum_{k,q}\widetilde{C}_{k,q}|\widetilde{k,q}\rangle,\end{array}\end{split}
\label{eq:new Harper 6}\end{equation}
where we have used the two relations:
\[
T(b)|\widetilde{k,q}\rangle=|\widetilde{k,q-1}\rangle\text{ and }\tau'(\frac{2\pi}{a})|\widetilde{k,q}\rangle=e^{\frac{2\pi i}{a}q}|\widetilde{k,q}\rangle.\]
As before, the eigenvalue equation \eqref{eq:new Harper 6} can be
solved for each value of $k$ independently, and using the complete
periodicity property of the $|\widetilde{k,q}\rangle$ we have (with
$M_{1}=a=2$ and $M_{2}=b=3$): \begin{equation}
\begin{split}\begin{array}{l}
\tilde{C}_{k,0}[\frac{V_{1}}{2}(|\widetilde{k,1}\rangle+|\widetilde{k,1}\rangle)+V_{2}cos(\frac{2\pi}{2}\cdot0)|\widetilde{k,0}\rangle]+\\
\\\tilde{C}_{k,1}[\frac{V_{1}}{2}(|\widetilde{k,0}\rangle+|\widetilde{k,0}\rangle)+V_{2}cos(\frac{2\pi}{2}\cdot1)|\widetilde{k,1}\rangle]=\\
\\=\varepsilon[\tilde{C}_{k,0}|\widetilde{k,0}\rangle+\tilde{C}_{k,1}|\widetilde{k,1}\rangle].\end{array}\end{split}
\end{equation}
In matrix form the above equation reads: \begin{equation}
\begin{split}\begin{array}{l}
\left(\begin{array}{cc}
V_{2} & V_{1}\\
V_{1} & -V_{2}\end{array}\right)\left(\begin{array}{c}
\tilde{C}_{k,0}\\
\tilde{C}_{k,1}\end{array}\right)=\varepsilon\left(\begin{array}{c}
\tilde{C}_{k,0}\\
\tilde{C}_{k,1}\end{array}\right).\end{array}\end{split}
\end{equation}
Hence, we get the same spectrum of energies as before, with each
level being 3-fold degenerate: \begin{equation}
\begin{split}\begin{array}{l}
\varepsilon_{1,2}=\pm\sqrt{V_{1}^{2}+V_{2}^{2}},\end{array}\end{split}
\end{equation}
and a new relation between the coefficients:
\begin{equation}
\tilde{C}_{k,0}=\frac{V_{1}}{\varepsilon-V_{2}}\tilde{C}_{k,1}\,\text{ or equally }\,\tilde{C}_{k,1}=\frac{V_{1}}{\varepsilon+V_{2}}\tilde{C}_{k,0}.\end{equation}
In comparison with the previous solution, Eq. \eqref{eq:C k-dep phase}, we now do not have a $k$-dependent phase in the relation between the coefficients $\tilde{C}_{k,0}$ and $\tilde{C}_{k,1}$.
In the case of $M=4$, solving equation \eqref{eq:new Harper 6} with
$M_{1}=a=2$ and $M_{2}=b=2$, we have:
\begin{equation}
\sum_{k,q}\widetilde{C}_{k,q}[\frac{V_{1}}{2}(|\widetilde{k,q-1}\rangle+|\widetilde{k,q+1}\rangle)+V_{2}cos(\frac{2\pi}{2}q)|\widetilde{k,q}\rangle]=\varepsilon\sum_{k,q}\widetilde{C}_{k,q}|\widetilde{k,q}\rangle.\end{equation}
Since the equation above is $k$-independent, it is equivalent to the eigenvalue equation considered for dimension $M=6$. Therefore (as one can easily check) we have to solve the matrix equation:
\begin{equation}
\begin{split}\begin{array}{l}
\left(\begin{array}{cc}
V_{2} & V_{1}\\
V_{1} & -V_{2}\end{array}\right)\left(\begin{array}{c}
\tilde{C}_{k,0}\\
\tilde{C}_{k,1}\end{array}\right)=\varepsilon\left(\begin{array}{c}
\tilde{C}_{k,0}\\
\tilde{C}_{k,1}\end{array}\right),\end{array}\end{split}
\end{equation}
and consequently the corresponding energy levels, with each level
being 2-fold degenerate, are: \begin{equation}
\begin{split}\begin{array}{c}
\varepsilon_{1,2}=\pm\sqrt{V_{1}^{2}+V_{2}^{2}}.\end{array}\end{split}
\end{equation}
Therefore, in spite of the non-degenerate spectrum of the Hamiltonian
$H(T(b),\tau(\frac{2\pi}{a}))$ for non-coprime $M_{1}$ and $M_{2}$,
for the Hamiltonian $H(T(b),\tau'(\frac{2\pi}{a}))$ the energy levels
preserve their degeneracies.
\section{Summary and discussion}
The main result of our work is a generalization of the Schwinger unitary
operator factorization to non-coprime factorizations. That is, for
a composite dimension $M=M_{1}M_{2}$, we factorize the $U$ and $V$
operators from Eq. \eqref{eq:UV} into two pairs of operators $(\tilde{U}_{1},\tilde{V}_{1})$
and $(\tilde{U}_{2},\tilde{V}_{2})$, Eq. \eqref{eq:New Fact}. Each
of the pairs generates a complete orthogonal operator basis for the
sub-dimensions $M_{1}$ and $M_{2}$, and operators from different
bases commute. The factorization enables us to consider any single physical system with dimension $M=M_{1}M_{2}$ as a pair of physical systems with $M_{1}$- and $M_{2}$-dimensional factorized degrees of freedom, where $M_{1}$ and $M_{2}$ are not restricted to be coprime. Considering factorized operators may simplify various $M$-dimensional phase-space problems in the same way as the Cooley-Tukey FFT simplifies the application of the DFT. Moreover, the new factorization deepens
our physical intuition. In particular, we applied the new factorization
to a Harper-like Hamiltonian model, and developed an algorithm for
energy spectrum design in this model. Using the algorithm, we can
construct a Hamiltonian, which is a function of the operators $(\tilde{U}_{1},\tilde{V}_{1})$.
Therefore, it is designed to obtain $M_{1}$ energy levels (with a spectrum determined by the Hamiltonian's details), each level being $M_{2}$-fold degenerate. The energy-spectrum-design algorithm can be of interest, for example, in solid-state physics for electrons in a strong magnetic field \cite{Zak Mag}.
The application of the permutation operator $A$ (which is the key to the solution for the non-coprime cases) to the \textit{kq}-basis problem generates the \textit{kq}-like basis, which is mutually unbiased with respect to the original $|K,Q\rangle$ basis. This \textit{kq}-like basis has a different periodicity property than the original \textit{kq} bases: it is completely periodic in the coordinate and momentum variables simultaneously.
\section*{Acknowledgments}
AM is grateful to Prof. Wei-Min Zhang for his very kind hospitality
in Tainan. The work of AM was partly supported by grant HUA97-12-02-161
at NCKU. The authors acknowledge numerous informative and helpful
discussions with Professor Michael Revzen.
\section{Introduction}~
Coherence underlies quantum phenomena. Familiar from waves, coherence gives rise to interference effects that power quantum systems to be different from and more useful than everyday objects. Quantum coherence enables computation \cite{Shor1999}, measurement \cite{Dowling1998}, teleportation \cite{Bouwmeesteretal1997}, and more, making it an important resource to quantify \cite{Aberg2006arxiv,Baumgratzetal2014,LeviMintert2014,WinterYang2016}. Our ability to generate and transform coherence is thus vital to the success of these ventures.
Here, we show how to ideally transfer arbitrary amounts of coherence from light to atomic systems.
Previous work found the ideal states of light for transferring maximal coherence to a single atom: \textit{transcoherent} states do this job and can be approximated by easier-to-generate squeezed light in the appropriate limits \cite{GoldbergSteinberg2020}. These are important to the plethora of applications requiring maximally coherent atomic states, such as quantum engines \cite{Korzekwaetal2016} and quantum state preparation with quantum logic gates \cite{Mulleretal2009}. In other applications, arbitrary superpositions of a two-level atom's ground and excited states $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ may be desired, with the most general state being
\eq{
\ket{\theta,\phi}\equiv \cos\frac{\theta}{2}\ket{\mathrm{g}}+\sin\frac{\theta}{2}\text{e}^{\text{i}\phi}\ket{\mathrm{e}};
\label{eq:theta phi atomic state}
} the atom may be a physical atom or any other physical system with two energy levels, known as a qubit. We find the ideal states for all of these applications and demonstrate how they avoid residual light-atom entanglement and other deleterious effects that ruin the quality of the atomic coherence.
Light is routinely used for controlling atomic states. Strong, classical light with a frequency close to the transition frequency of a two-level atom induces ``Rabi flopping'' that coherently drives the atom between $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ at the Rabi frequency $\Omega_0\sqrt{\bar{n}}$, with $\bar{n}$ equal to the intensity of the field in the appropriate units that amount to the single-photon intensity when the field is quantized \cite{AllenEberly1987}. Waiting an appropriate time $\Omega_0\sqrt{\bar{n}}t=\theta$, for example, will lead to an atom in state $\ket{\mathrm{g}}$ rotating to state $\ket{\theta,0}$. However, even quasiclassical light in a coherent state is fundamentally made from a superposition of different numbers of photons \cite{Sudarshan1963,Glauber1963}, which each drive oscillations in the atom at a different Rabi frequency; these give rise to famous effects such as the collapses and revivals of Rabi oscillations that help demonstrate the existence of quantized photons underlying quasiclassical light \cite{Eberlyetal1980,Rempeetal1987}.
When considering the quantized version of light's interaction with a single atom, the Jaynes-Cummings model (JCM) dictates that the light will generally become entangled with the atom \cite{GeaBanacloche1990,GeaBanacloche1991,PhoenixKnight1991a,GeaBanacloche2002,vanEnkKimble2002,SilberfarbDeutsch}. This prevents the atom from being in any pure state $\ket{\theta,\phi}$ and always tends to degrade the quality of the atomic state thus created. Such is the problem that transcoherent states surmount for $\theta=\tfrac{\pi}{2}$ and that we generalize here.
A natural, further generalization of these results is to field states interacting with a collection of atoms. While the dynamics between a single atom and a mode of light are straightforward to solve through the JCM, the same with a collection of atoms, known as the Tavis-Cummings model (TCM), cannot usually be done in closed form. This enriches our problem and allows us to incorporate strategies from semiclassical quantization in our investigations.
The above interactions are linear in the electromagnetic field operators. We lastly extend these results for arbitrary atomic control to interactions that involve nonlinear contributions from the electromagnetic field, such as $m$-photon absorption processes. These showcase the reach of our transcoherence idea well beyond the initial goal of transferring coherence from light to atoms.
\subsection{Jaynes-Cummings model}~
The Jaynes-Cummings Hamiltonian governs the resonant interaction between a single bosonic mode annihilated by ${a}$ and a two-level atom with ground and excited states $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$, respectively:
\eq{
H=\omega\left({a}^\dagger\vphantom{a}{a}+\ket{\mathrm{e}}\bra{\mathrm{e}}\right)+\frac{\Omega_0}{2}\left({a}\sigma_+ + {a}^\dagger\vphantom{a}\sigma_-\right),
} where $\omega$ is the resonance frequency, $\Omega_0$ is the single-excitation Rabi frequency (sometimes known as the vacuum Rabi frequency), and $\sigma_+=\sigma_-^\dagger=\ket{\mathrm{e}}\bra{\mathrm{g}}$ is the atomic raising operator. The JCM characterizes light-matter interactions in a variety of physical systems including circuit quantum electrodynamics (QED) \cite{Raimondetal2001}, cavity QED \cite{Finketal2008}, and parametric amplification \cite{GutierrezJaureguiAgarwal2021}. This interaction conserves total energy and total excitation number, as can be seen from its eigenstates
\eq{
\ket{\pm,n}=\frac{\ket{n}\otimes\ket{\mathrm{e}}\pm\ket{n+1}\otimes\ket{\mathrm{g}}}{\sqrt{2}},
\label{eq:JCM eigenstates}
} with eigenenergies $\pm\tfrac{\Omega_n}{2}$ for the quantized Rabi frequencies
\eq{
\Omega_n=\Omega_0\sqrt{n+1}.
} These are responsible for the field and the atom periodically exchanging an excitation with frequency $\tfrac{\Omega_n}{2}$ when the initial state is either $\ket{n}\otimes\ket{\mathrm{e}}$ or $\ket{n+1}\otimes\ket{\mathrm{g}}$.
We will work in the interaction picture with Hamiltonian
\eq{
H_{\mathrm{I}}=
\frac{\Omega_0}{2}\left({a}\sigma_+ + {a}^\dagger\vphantom{a}\sigma_-\right);
} the Schr\"odinger-picture results can thence be obtained with the substitutions $\ket{n}\to\text{e}^{-\text{i}\omega nt}\ket{n}$ and $\ket{\mathrm{e}}\to\text{e}^{-\text{i} \omega t}\ket{\mathrm{e}}$.
When the atom is initially in its ground state and the field in state $\sum_n \psi_n\ket{n}$, the evolved state takes the form
\eq{
\ket{\Psi(t)}=&\psi_0\ket{0}\otimes\ket{\mathrm{g}}+\sum_{n=0}^\infty \psi_{n+1}\left(\cos\frac{\Omega_n t}{2}\ket{n+1}\otimes\ket{\mathrm{g}}
\right.\\&\qquad\qquad\qquad\qquad\left.
-\text{i}\sin\frac{\Omega_n t}{2}\ket{n}\otimes \ket{\mathrm{e}}\right)\\
=&\sum_{n=0}^\infty \ket{n}\otimes\left(\psi_n\cos\frac{\Omega_{n-1}t}{2}\ket{\mathrm{g}}-\text{i} \psi_{n+1}\sin\frac{\Omega_n t}{2}\ket{\mathrm{e}}\right).
\label{eq:JCM from ground}
} Similarly,
when the atom is initially in its excited state and the field in state $\sum_n \psi_n\ket{n}$, the evolved state takes the form
\eq{
\ket{\Psi(t)}=&\sum_{n=0}^\infty \ket{n}\otimes \left(\psi_n\cos\frac{\Omega_n t}{2}\ket{\mathrm{e}}
-\text{i} \psi_{n-1}\sin\frac{\Omega_{n-1} t}{2}\ket{\mathrm{g}}\right).
\label{eq:JCM from excited}
}
To achieve an arbitrary final atomic state, the most intuitive procedure is to begin with the atom in its ground state and the field in the single-photon counterpart of the target atomic state, $\cos\frac{\theta}{2}\ket{0}+\text{i}\sin\frac{\theta}{2}\text{e}^{\text{i}\phi}\ket{1}$, and wait for the duration of a ``single-excitation $\pi$ pulse,'' $\Omega_0 t=\pi$, to enact the transformation
\eq{
&\left(\cos\frac{\theta}{2}\ket{0}+\text{i}\sin\frac{\theta}{2}\text{e}^{\text{i}\phi}\ket{1}\right)\otimes\ket{\mathrm{g}}
\\
&\qquad\qquad\to\ket{0}
\otimes \left(\cos\frac{\theta}{2}\ket{\mathrm{g}}+\sin\frac{\theta}{2}\text{e}^{\text{i}\phi}\ket{\mathrm{e}}\right)= \ket{0}\otimes\ket{\theta,\phi}.
} We exhaustively show in Section \ref{sec:perfectly generating arbitrary coherence} how to achieve this transformation with other field states, with no residual atom-field entanglement, at faster rates, and with more feasible pulses of light.
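As a minimal numerical illustration of this single-excitation $\pi$ pulse (a sketch, assuming the illustrative choices $\theta=\pi/3$, $\phi=0$, $\Omega_0=1$, and a five-level Fock truncation, none of which are specific to the analysis below), the joint state can be evolved with \texttt{QuTiP} and compared against $\ket{0}\otimes\ket{\theta,0}$:
\begin{verbatim}
import numpy as np
from qutip import basis, destroy, qeye, tensor, sesolve

Nfock, Omega0, theta = 5, 1.0, np.pi / 3    # illustrative choices

g, e = basis(2, 0), basis(2, 1)             # atomic ground / excited states
Sp = tensor(qeye(Nfock), e * g.dag())       # sigma_+ = |e><g|
a = tensor(destroy(Nfock), qeye(2))
H = 0.5 * Omega0 * (a * Sp + a.dag() * Sp.dag())

# field in cos(theta/2)|0> + i sin(theta/2)|1>, atom in |g>
field = np.cos(theta / 2) * basis(Nfock, 0) \
        + 1j * np.sin(theta / 2) * basis(Nfock, 1)
psi0 = tensor(field, g)

# single-excitation pi pulse: Omega_0 t = pi
state = sesolve(H, psi0, [0.0, np.pi / Omega0]).states[-1]
target = tensor(basis(Nfock, 0),
                np.cos(theta / 2) * g + np.sin(theta / 2) * e)
print("overlap with |0> x |theta>:", abs(state.overlap(target)) ** 2)
\end{verbatim}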
Since the free atomic evolution enacts $\phi\to\phi-\omega t$, we can generate the states $\ket{\theta,\phi}$ with any value of $\phi$ and simply allow free evolution to generate the same state with any other value of $\phi$, so in the following we set $\phi=0$ (alternatively, direct solutions with $\phi\neq 0$ can readily be obtained by adjusting the relative phases between the photon-number states).
\section{Optimal field states for generating arbitrary amounts of atomic coherence}~
\label{sec:perfectly generating arbitrary coherence}
What are the optimal field states that can generate arbitrary pulse areas? That is, which field states
\eq{
\ket{\psi}=\sum_n \psi_n\ket{n}
} can achieve the transformations
\eq{
\ket{\psi}\otimes\ket{\mathrm{g}}\to\ket{\psi^\prime}\otimes \ket{\theta}
} or
\eq{
\ket{\psi}\otimes\ket{\mathrm{e}}\to \ket{\psi^\prime}\otimes \ket{\theta},
} where the former correspond to ``$\theta$ pulses,'' the latter to ``$\theta+\pi$ pulses,'' and we have defined the atomic state $\ket{\theta}\equiv\ket{\theta,0}$ by allowing $\theta$ to extend to $2\pi$.
We specifically seek transformations for which the final state has zero residual entanglement between the atom and the light, such that the atomic state can be used in arbitrary quantum information protocols without degradation.
\subsection{Transcoherent states}~
In Ref. \cite{GoldbergSteinberg2020} we defined \textit{transcoherent states} as those enabling $\tfrac{\pi}{2}$ pulses. For atoms initially in their ground states, perfect $\tfrac{\pi}{2}$ pulses can be achieved by the transcoherent states whose coefficients in the photon-number basis satisfy the recurrence relation
\eq{
\psi_{n+1}=\text{i}\frac{\cos\frac{\Omega_{n-1} t}{2}}{\sin\frac{\Omega_n t}{2}}\psi_n
\label{eq:recurrence relation transcoherent ground}
} to ensure that the amplitudes of $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ in the evolved state in Eq. \eqref{eq:JCM from ground} are equal. This can be satisfied by field states with $\psi_n=0$ for $n> n_{\mathrm{max}}$ for \textit{any} chosen maximum photon number $n_{\mathrm{max}}\geq 1$, so long as the total interaction time satisfies
\eq{
\Omega_{n_{\mathrm{max}}-1}t=\pi,
\label{eq:interaction time transcoherent ground}
} which ensures that the highest-excitation subspace spanned by $\ket{\pm,n_{\mathrm{max}}-1}$ undergoes a $\pi$ pulse such that it transfers all of its excitation probability from $\ket{n_{\mathrm{max}}}\otimes\ket{\mathrm{g}}$ to $\ket{n_{\mathrm{max}}-1}\otimes\ket{\mathrm{e}}$.
In the large-$n_{\mathrm{max}}$ limit, these states strongly approximate Gaussian states with an average photon number $\bar{n}$ whose photon-number distributions are squeezed from that of a canonical coherent state, $\sigma^2_{\mathrm{coh}}=\bar{n}$, by a factor of $\tfrac{\pi}{2}$.
Similarly, another set of transcoherent states has its photon-number distribution satisfy the same recurrence relation as Eq. \eqref{eq:recurrence relation transcoherent ground}, with the lowest-excitation manifold undergoing a $(2k)\pi$ pulse and the highest a $(2k+1)\pi$ pulse for any $k\in\mathds{N}_0$. This pulse, in the large-$\bar{n}$ limit, corresponds to a $\frac{4k+1}{2}\pi$ pulse produced by a coherent state with its photon-number distribution squeezed by a factor of $\frac{4k+1}{2}\pi$.
Superpositions of such states with nonzero coefficients all satisfying $(2k)^2 n_{\mathrm{max}}\leq n\leq (2k+1)^2 n_{\mathrm{max}}$ will also enact perfect $\tfrac{\pi}{2}$ pulses in a time $\Omega_{n_{\mathrm{max}}-1}t=\pi$.
Another set of transcoherent states is found when the atom is initially in its excited state. This setup requires the initial field state's coefficients to satisfy a different recurrence relation to ensure equal coefficients of $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ in Eq. \eqref{eq:JCM from excited}:
\eq{
\psi_{n+1}=-\text{i}\frac{\sin\frac{\Omega_{n} t}{2}}{\cos\frac{\Omega_{n+1} t}{2}}\psi_{n}.
\label{eq:recurrence relation transcoherent excited}
} As well, the lowest-excitation sector must now undergo a $(2k+1)\pi$ pulse while the highest undergoes a $2(2k+1)\pi$ pulse. This happens in a time \eq{
\Omega_{n_{\mathrm{min}}}t=(2k+1)\pi,
\label{eq:interaction time transcoherent excited}
} corresponding in the large-$\bar{n}$ limit to a $\tfrac{4k+3}{2}\pi$ pulse generated by a coherent state that has been photon-number squeezed by $\tfrac{4k+3}{2}\pi$. Superpositions of commensurate states will again achieve perfect coherence transfer.
\subsection{Beyond transcoherent states}~
We can generalize the recurrence relations of Eqs. \eqref{eq:recurrence relation transcoherent ground} and \eqref{eq:recurrence relation transcoherent excited} to generate arbitrary states of the form of Eq. \eqref{eq:theta phi atomic state}.
When the atom is initially in its ground state, it will evolve to a state of the form of Eq. \eqref{eq:theta phi atomic state} if and only if the initial field state's coefficients obey the recurrence relation [again, c.f. Eq. \eqref{eq:JCM from ground}]
\eq{
\psi_{n+1}=\text{i}\tan{\frac{\theta}{2}}\frac{\cos\frac{\Omega_{n-1} t}{2}}{\sin\frac{\Omega_n t}{2}}\psi_n.
\label{eq:recurrence relation beyond transcoherent ground}
} The same boundary conditions as for transcoherent states hold, meaning that we require interaction times of the form of Eq. \eqref{eq:interaction time transcoherent ground} such that the lowest-excitation manifold undergoes a $0\pi$ pulse and the highest a $\pi$ pulse; extensions to other excitation manifolds are similarly permissible. We plot a number of such states in Fig. \ref{fig:beyond transcoherent from g}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{beyond_states_nmax_200_various_pulses_v2}
\caption{Photon-number probability distributions for field states that exactly generate arbitrary rotations $\theta$ on atoms initially in their ground states (various shapes correspond to different values of $\theta$). The field states are calculated using the recursion relation Eq. \eqref{eq:recurrence relation beyond transcoherent ground} with $n_{\mathrm{max}}=200$. Also plotted are the photon-number distributions for coherent states with the same average energies (solid curves). For the same $n_{\mathrm{max}}$ and thus the same value of $\Omega_0 t$, a higher-energy pulse generates a larger rotation angle $\theta$, with more photon-number squeezing being necessary for larger rotation angles.}
\label{fig:beyond transcoherent from g}
\end{figure}
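The recursion relation of Eq. \eqref{eq:recurrence relation beyond transcoherent ground} is straightforward to iterate numerically; the following sketch (assuming the $n_{\mathrm{max}}=200$ of Fig. \ref{fig:beyond transcoherent from g} and a representative angle $\theta=\pi/3$) imposes the boundary condition of Eq. \eqref{eq:interaction time transcoherent ground} and reports the mean and the variance-to-mean ratio of the resulting photon-number distribution, which quantifies the photon-number squeezing discussed in the remainder of this section:
\begin{verbatim}
import numpy as np

n_max, theta = 200, np.pi / 3          # parameters as in Fig. 1; theta is a choice
Omega0_t = np.pi / np.sqrt(n_max)      # boundary condition Omega_{n_max - 1} t = pi

psi = np.zeros(n_max + 1, dtype=complex)
psi[0] = 1.0
for n in range(n_max):
    half_prev = 0.5 * Omega0_t * np.sqrt(n)      # Omega_{n-1} t / 2
    half_curr = 0.5 * Omega0_t * np.sqrt(n + 1)  # Omega_n t / 2
    psi[n + 1] = 1j * np.tan(theta / 2) * np.cos(half_prev) \
                 / np.sin(half_curr) * psi[n]
psi /= np.linalg.norm(psi)

prob = np.abs(psi) ** 2
ns = np.arange(n_max + 1)
nbar = np.sum(ns * prob)
var = np.sum((ns - nbar) ** 2 * prob)
print(f"nbar = {nbar:.1f}, variance / nbar = {var / nbar:.3f}")
\end{verbatim}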
When the atom is initially in its excited state, it will evolve to a state of the form of Eq. \eqref{eq:theta phi atomic state} if and only if the initial field state's coefficients obey the recurrence relation [again, c.f. Eq. \eqref{eq:JCM from excited}]
\eq{
\psi_{n+1}=-\text{i}\tan{\frac{\theta}{2}}\frac{\sin\frac{\Omega_{n} t}{2}}{\cos\frac{\Omega_{n+1} t}{2}}\psi_{n}
.
\label{eq:recurrence relation beyond transcoherent excited}
} The same boundary conditions as for transcoherent states hold, meaning that we require interaction times of the form of Eq. \eqref{eq:interaction time transcoherent excited} such that the lowest-excitation manifold undergoes a $(2k+1)\pi$ pulse and the highest a $(4k+2)\pi$ pulse; extensions to superpositions of excitation manifolds are again permissible.
What are the properties of these extended transcoherent states whose coefficients are given respectively by Eqs. \eqref{eq:recurrence relation beyond transcoherent ground} and \eqref{eq:recurrence relation beyond transcoherent excited}? In Ref. \cite{GoldbergSteinberg2020}, we discussed how transcoherent states maximize the coherence \eq{
\mathcal{C}(t)&=\left|\bra{\Psi(t)}\sigma_+\ket{\Psi(t)}\right|+\left|\bra{\Psi(t)}\sigma_-\ket{\Psi(t)}\right|.
\label{eq:coherence definition}
} This is achieved through a careful balance between the narrowness of the distribution $|\psi_n|^2$, which favours nearly equivalent Rabi frequencies, and its broadness, which leads to larger overlap terms $|\psi_{n+1}\psi_n|$ in Eq.
\eqref{eq:coherence definition}.
By creating the atomic states of Eq. \eqref{eq:theta phi atomic state}, we are attaining arbitrary values of $\mathcal{C}(t)$. How do the conditions of Eqs. \eqref{eq:recurrence relation beyond transcoherent ground} and \eqref{eq:recurrence relation beyond transcoherent excited} achieve the optimal balance for the distribution $|\psi_n|^2$?
We begin with the case of atoms initially in their ground states: Eq. \eqref{eq:recurrence relation beyond transcoherent ground}. By choosing states that satisfy the condition of Eq. \eqref{eq:interaction time transcoherent ground}, we ensure that the recursion relation truncates. Then, since the ratio $|\psi_{n+1}/\psi_n|$ monotonically decreases until it reaches zero, where the series truncates, the photon-number distribution approaches a smooth, singly peaked distribution;
for sufficiently large $\bar{n}$, this distribution is Gaussian.
The recursion relation of Eq. \eqref{eq:recurrence relation beyond transcoherent ground} is stationary when \eq{
\cos\frac{\theta}{2}\sin\frac{\Omega_n t}{2}-\sin\frac{\theta}{2}\cos\frac{\Omega_{n-1}t}{2}=0.
} For large $\bar{n}$, this occurs at a sufficiently large value of $n$ such that $\Omega_n\approx \Omega_{n-1}$; we will refer to this value as $\tilde{n}$, which we will later see to be on the order of the average number of photons $\tilde{n}=\mathcal{O}(\bar{n})$. This leads to the condition
\eq{
\sin\left(\frac{\theta}{2}-\frac{\Omega_{\tilde{n}} t}{2}\right)\approx 0\qquad
\Rightarrow\qquad \Omega_{\tilde{n}} t\approx \theta ,
\label{eq:stationary point ground}
}
corresponding with the classical scenario in which a pulse area of $\theta$ is applied when the Rabi frequency and total interaction time satisfy $\Omega_{\tilde{n}} t=\theta$.
Substituting this condition into Eq. \eqref{eq:recurrence relation beyond transcoherent ground} leads to the expansion about large $\tilde{n}$:
\eq{
\tan{\frac{\theta}{2}}\frac{\cos\frac{\Omega_{{n}-1} t}{2}}{\sin\frac{\Omega_{{n}} t}{2}}=&\tan{\frac{\theta}{2}}\frac{\cos\frac{\theta\sqrt{n}}{2\sqrt{\tilde{n}+1}}}{\sin\frac{\theta\sqrt{n+1}}{2\sqrt{\tilde{n}+1}}}=\tan{\frac{\theta}{2}}\frac{\cos\frac{\theta\sqrt{\tilde{n}+\delta}}{2\sqrt{\tilde{n}+1}}}{\sin\frac{\theta\sqrt{\tilde{n}+\delta+1}}{2\sqrt{\tilde{n}+1}}}\\
&
\approx 1- \frac{\theta}{2\tilde{n}\sin\theta}\left(\delta-\sin^2\frac{\theta}{2}\right)
.} From this we find the approximate difference relation:
\eq{
\frac{\psi_{\tilde{n}+\delta}-\psi_{\tilde{n}}}{\delta}\approx -\frac{\theta}{2\tilde{n}\sin\theta}\left(\delta-\sin^2\frac{\theta}{2}\right)\psi_{\tilde{n}}.
} This describes a Gaussian distribution
\eq{
|\psi_{\tilde{n}+\delta}|^2\approx |\psi_{\tilde{n}}|^2 \exp\left[
-\frac{\theta}{2\tilde{n}\sin\theta}\left(\delta-\sin^2\frac{\theta}{2}\right)^2
\right]
\label{eq:Gaussian fround ground}
} with photon-number variance
\eq{
\sigma^2=\tilde{n}\sinc\theta.
\label{eq:sinc variance}
} The mean is slightly shifted to $\bar{n}=\tilde{n}+\sin^2\frac{\theta}{2}$, due to the discreteness of $n$; had we instead set the stationary point of the recursion relation to be at $\tilde{n}-1$, we would have accordingly found the argument of the Gaussian to be $\left(\delta+\cos^2\frac{\theta}{2}\right)^2$.
The same calculation can be performed for an atom initially in its excited state. Looking at the stationary point of the recursion relation of Eq. \eqref{eq:recurrence relation beyond transcoherent excited}, namely,
\eq{
\cos\frac{\theta}{2}\cos\frac{\Omega_{n+1}t}{2}-\sin\frac{\theta}{2}\sin\frac{\Omega_{n}t}{2}=0,
} we now arrive at the condition
\eq{
\cos\left(\frac{\theta}{2}+\frac{\Omega_{\tilde{n}}t}{2}\right)=0\qquad \Rightarrow \qquad \Omega_{\tilde{n}}t\approx \theta+\pi.
\label{eq:stationary point excited}
} This similarly corresponds to the classical scenario in which a pulse area of $\theta+\pi$ (i.e., rotating from $\ket{\mathrm{e}}$ to $\ket{\mathrm{g}}$ to $\ket{\theta}$) is achieved when the Rabi frequency and total interaction time satisfy $\Omega_{\bar{n}}t=\theta+\pi$.
Substituting this new condition into Eq. \eqref{eq:recurrence relation beyond transcoherent excited} leads to the expansion:
\eq{
-\tan\frac{\theta}{2}\frac{\sin\frac{\Omega_{n}t}{2}}{ \cos\frac{\Omega_{n+1}t}{2} }
\approx 1-\frac{\theta+\pi}{2\tilde{n}\sin\theta}\left(\delta+\cos^2\frac{\theta}{2}\right).
} Using this for the approximate difference equation leads to the Gaussian distribution
\eq{
|\psi_{\tilde{n}+\delta}|^2\approx |\psi_{\tilde{n}}|^2 \exp\left[
-\frac{\theta+\pi}{2\tilde{n}\sin\theta}\left(\delta+\cos^2\frac{\theta}{2}\right)^2
\right].
} This differs from Eq. \eqref{eq:Gaussian fround ground} by an innocuous-looking addition of $\pi$ that is responsible for a number of important properties (as well as the stationary point being shifted from $\tilde{n}$ by 1).
\subsection{Discussion}~
The states defined by Eqs. \eqref{eq:recurrence relation beyond transcoherent ground} and \eqref{eq:recurrence relation beyond transcoherent excited} directly generalize the transcoherent states of Ref. \cite{GoldbergSteinberg2020}. Transcoherent states, in the large-$\bar{n}$ limit, enact $\tfrac{4k+1}{2}\pi$ pulses on state $\ket{\mathrm{g}}$ because their photon-number distributions are squeezed by $\tfrac{4k+1}{2}\pi$ and $\tfrac{4k+3}{2}\pi$ pulses on state $\ket{\mathrm{e}}$ because their photon-number distributions are squeezed by $\tfrac{4k+3}{2}\pi$.
We can now understand from where these factors truly arise: number squeezing by $\sinc \theta$ leads to pulse areas of $\theta$ on states $\ket{\mathrm{g}}$ and by $\left(\theta+\pi\right)^{-1}\sin\theta=-\sinc\left(\theta+\pi\right)$ leads to pulse areas of $\theta+\pi$ on states $\ket{\mathrm{e}}$. The properties of the $\sinc$ function are responsible for the forms of the viable solutions to optimally delivering arbitrary pulse areas.
Variances cannot be negative. The $\sinc$ function, however, flips its sign periodically with period $\pi$. This means that the only pulses that can be delivered to state $\ket{\mathrm{g}}$ are those with
\eq{
(2k)\pi \leq \theta\leq (2k+1)\pi\quad \Leftarrow\quad \sigma^2=\bar{n}\sinc\theta
\label{eq:theta constraints from ground}
} and to $\ket{\mathrm{e}}$ are those with
\eq{
(2k+1)\pi \leq \theta+\pi\leq (2k+2)\pi\quad \Leftarrow\quad \sigma^2=-\bar{n}\sinc\left(\theta+\pi\right),
\label{eq:theta constraints from excited}
} where $k\in\mathds{N}_0$. These ranges, together with Eqs. \eqref{eq:stationary point ground} and \eqref{eq:stationary point excited}, cover all of the classically allowed possibilities for pulse areas, now in a fully quantized regime. The periodic maxima of these functions correspond to the transcoherent states, the negative regions explain why certain pulse areas are only accessible to atoms initially in their ground or excited states, and the property $\left|\sinc \theta\right|\leq 1$ implies that only photon-number squeezing, not photon-number broadening, is useful for gaining quantum advantages in generating atomic states with arbitrary coherence properties.
This also explains why we chose to retain the $-$ sign from Eq. \eqref{eq:recurrence relation beyond transcoherent excited} in Eq. \eqref{eq:stationary point excited} and similarly how we chose the signs of the terms in Eq. \eqref{eq:stationary point ground}: the solutions found from the alternate choices of signs lead to minima in the recurrence relations instead of maxima, corresponding to regions where the $\sinc$ functions are negative, which do not lead to valid solutions.
We can inspect a number of limits to ensure that these states behave sensibly. In the limit of a pulse area of $0$, where we desire no change in the state of the system, the best states have no number squeezing: $\sigma^2=\bar{n}$. Moreover, these states require an interaction time satisfying $\Omega_{\tilde{n}}t=0$, so $\bar{n}=0$, and the optimal solution is that the field is in its vacuum state, which is the trivial case of a coherent state with no photon-number squeezing. This solution works for atoms initially in either state $\ket{\mathrm{g}}$ or $\ket{\mathrm{e}}$; for the latter, the pulse area has $\theta+\pi=0$, which seems like it imparts a negative photon-number variance because $-\sinc 0=-1$, but this is not a problem because the product of this negative squeezing factor and $\bar{n}=0$ still vanishes.
To achieve a pulse area $l\pi$ for $l\in\mathds{N}$, a variance of zero is required, corresponding to the zeroes of the $\sinc$ function. Equivalently, the only field states that exactly generate $\pi$ pulses, $2\pi$ pulses, and so on are those that are ``infinitely'' photon-number-squeezed coherent states: number states. This directly accords with the eigenstates of the JCM Hamiltonian $\ket{\pm,n}$ found in Eq. \eqref{eq:JCM eigenstates}: when the joint system begins in either $\ket{n}\otimes\ket{\mathrm{e}}$ or $\ket{n+1}\otimes\ket{\mathrm{g}}$, an interaction time of $\Omega_n t=(2l+1)\pi$ swaps the excitation between the field and the atom, while an interaction time of $\Omega_n t=(2l)\pi$ returns the excitations to their original starting points.
Complementing the properties of the $\sinc$ function, there is another reason why the ideal field states only exist for the pulse areas described by Eqs. \eqref{eq:theta constraints from ground} and \eqref{eq:theta constraints from excited}. An ideal pulse acting on $\ket{\mathrm{g}}$ must undergo a $(2k)\pi$ pulse in the lowest-excitation manifold and $(2k+1)\pi$ pulse in the highest. The pulse area for the \textit{average} photon number $\bar{n}$ must therefore always be between $(2k)\pi$ and $(2k+1)\pi$, never between $(2k+1)\pi$ and $(2k+2)\pi$; the converse holds for atoms initially in state $\ket{\mathrm{e}}$. This is why, for example, a perfect $\tfrac{\pi}{2}$ pulse can never be applied to state $\ket{\mathrm{e}}$, which must instead experience a perfect $\tfrac{3}{2}\pi$ pulse. Controlling the pulse areas of the lowest- and highest-excitation sections is paramount for ideal coherence transfer.
The idea of tailoring pulses such that the highest-excitation manifold undergoes a $\pi$ pulse was recently explored in a different context in Ref. \cite{Liuetal2021constructing}.
There, this paradigm was used to create a universal set of quantum operations that could be used for quantum computation. We thus stress the importance of using the fully quantized JCM to surmount information leakage in light-matter-interaction protocols.
A useful property of transcoherent states and beyond is that the field states experience less backaction from the interaction with the atom than standard coherent states. This allows them to be repeatedly used as ``catalysts'' for transferring coherence to the atoms before eventually running out of energy and coherence to impart and degrading through repeated interactions \cite{GoldbergSteinberg2020}.\footnote{C.f. quantum catalysis as studied in the JCM in Ref. \cite{Messingeretal2020}.} Therefore, the cost associated with producing a transcoherent state should, in some sense, be reduced by a factor of the number of times that state can be used for practical coherence transfer.
\section{Optimal field states for generating $\Theta$ pulses on arbitrary atomic states}~
Transcoherent states and their generalizations in Section \ref{sec:perfectly generating arbitrary coherence} are the unique optimal field states that generate arbitrary atomic states $\ket{\theta}$ in arbitrarily short times from atoms initially in state $\ket{\mathrm{g}}$ or $\ket{\mathrm{e}}$. For quantum information protocols, one often seeks a transformation that transforms arbitrary initial atomic states in the same way. The transcoherent states offer a method for enacting the ideal rotation by $\tfrac{\pi}{2}$ about the $y$ axis on the Bloch sphere:
\eq{
\begin{pmatrix}
\ket{\mathrm{g}}\\\ket{\mathrm{e}}
\end{pmatrix}\to\frac{1}{\sqrt{2}}\begin{pmatrix}
1&1\\-1&1
\end{pmatrix}\begin{pmatrix}
\ket{\mathrm{g}}\\\ket{\mathrm{e}}
\end{pmatrix},
\label{eq:general pi/2 pulse}
} equivalent to a $\tfrac{\pi}{2}$ pulse, on either of the initial states $\ket{\mathrm{g}}$ or $\ket{\mathrm{e}}$. This is similar to the Hadamard transformation, which is also useful for generating coherence.
What is the optimal field state for enacting a $\tfrac{\pi}{2}$ transformation on arbitrary initial states $\ket{\theta,\phi}$ to the transformed states:
\eq{
\ket{\theta,\phi}_{\frac{\pi}{2}}\equiv
\frac{\cos\frac{\theta}{2}-\text{e}^{\text{i}\phi}\sin\frac{\theta}{2}}{\sqrt{2}}\ket{\mathrm{g}}+\frac{\cos\frac{\theta}{2}+\text{e}^{\text{i}\phi}\sin\frac{\theta}{2}}{\sqrt{2}}\ket{\mathrm{e}}?
\label{eq:theta phi state after Hadamard}
} It is impossible to do this perfectly, in contrast to the exact transformations enabled by the transcoherent states and their generalizations in Sec. \ref{sec:perfectly generating arbitrary coherence}, because the highest-excitation manifold must undergo a $(2k)\pi$ pulse for most initial states $\ket{\theta,\phi}$ but a $(2k+1)\pi$ pulse for the initial state $\ket{\mathrm{e}}$. We thus seek states that perform the best on average.
A straightforward way of determining the success of creating the state depicted in Eq. \eqref{eq:theta phi state after Hadamard} is as follows:
we begin with some state $\ket{\theta,\phi}$ and some initial field state, evolve the joint system using the JCM, measure the overlap of the evolved state $\ket{\Psi(t)}$ with $\ket{\theta,\phi}_{\frac{\pi}{2}}$, and average the result over all initial atomic state angles $\theta$ and $\phi$. The result should depend on the initial field state, so we can ask what field state maximizes the resulting averaged fidelity, equivalent to the averaged success probability.
By combining Eqs. \eqref{eq:JCM from ground} and \eqref{eq:JCM from excited}, we learn that a state $\sum_n \psi_n\ket{n}\otimes\ket{\theta,\phi}$ evolves to
\eq{
\ket{\Psi(t)}=\sum_{n=0}^\infty\ket{n}\otimes\left(\cos\frac{\theta}{2}\ket{G_n}+\sin\frac{\theta}{2}\text{e}^{\text{i}\phi}\ket{E_n}\right),
\label{eq:transformed joint state from arbitrary initial}
} where we have defined the atomic states
\eq{
\ket{G_n}&=\psi_n\cos\frac{\Omega_{n-1}t}{2}\ket{\mathrm{g}}-\text{i} \psi_{n+1}\sin\frac{\Omega_n t}{2}\ket{\mathrm{e}} ,\\
\ket{E_n}&=\psi_n\cos\frac{\Omega_n t}{2}\ket{\mathrm{e}}
-\text{i} \psi_{n-1}\sin\frac{\Omega_{n-1} t}{2}\ket{\mathrm{g}}.
}
Comparing this result to the desired state $\ket{\theta,\phi}_{\frac{\pi}{2}}$, we observe that a strongly peaked photon-number distribution around $\bar{n}$ satisfying \eq{
\psi_{\bar{n}}\cos\frac{\Omega_{\bar{n}-1}t}{2}\approx \psi_{\bar{n}}\cos\frac{\Omega_{\bar{n}}t}{2}
&
\approx \text{i} \psi_{{\bar{n}}-1}\sin\frac{\Omega_{\bar{n}-1}t}{2}\\&\approx
-\text{i} \psi_{{\bar{n}}+1}\sin\frac{\Omega_{\bar{n}}t}{2}
\label{eq:desired distribution properties}
} would be highly beneficial for performing the transformation of Eq. \eqref{eq:general pi/2 pulse}. This seems to suggest an optimal interaction time conforming to the classical relationship $\Omega_{\bar{n}}t=\tfrac{\pi}{2}$.
There are a few considerations to calculating the averaged fidelity. To be viable for arbitrary initial states, all angles of the Bloch sphere should be equally weighted in the average.
If the azimuthal angle of the initial atomic state were known, on the other hand, one could integrate only over the polar coordinate, after waiting an appropriate free evolution such that $\phi\to 0$.
It is not obvious that there will be a single unique solution: some points on the Bloch sphere are hardly rotated by a $\tfrac{\pi}{2}$ pulse because they are close to the $\pm y$-axis thereof.
Indeed, these processes yield different ideal states and will be considered in turn.
\subsection{Averaging fidelity over all initial atomic states}~
We calculate the squared overlap averaged over the entire surface of the Bloch sphere for $\tfrac{\pi}{2}$-pulses in Appendix \ref{app:averaging fidelity} to find, for an optimal phase relationship $\psi_n \psi_{n+1}^*=-\text{i}\left|\psi_n \psi_{n+1}\right|$,
\eq{
\mathcal{F}=&\frac{1}{2}+\frac{1}{6}\sum_n\left|\psi_n\right|^2 \cos\frac{\Omega_n t}{2}\cos\frac{\Omega_{n-1}t}{2}\\
&+\left|
\psi_{n-1}\psi_{n+1}\right|\sin\frac{\Omega_n t}{2}\sin\frac{\Omega_{n-1}t}{2}
\\
&+ \left|\psi_n \psi_{n+1}\right|\sin\frac{\Omega_{n}t}{2}\left(
2\cos\frac{\Omega_n t}{2}+
\cos\frac{\Omega_{n-1}t}{2}+
\cos\frac{\Omega_{n+1}t}{2}
\right).
\label{eq:averaged fidelity general}
} What photon-number distributions and what times optimize this averaged fidelity?
It is clear from counting the terms in Eq. \eqref{eq:averaged fidelity general} that achieving distributions resembling Eq. \eqref{eq:desired distribution properties} would allow for averaged fidelities approaching unity. But the relationships in Eq. \eqref{eq:desired distribution properties} compete with each other: a narrow photon-number distribution ensures that the Rabi frequencies coincide, while a broad distribution increases the overlap between adjacent photon-number coefficients. We are thus faced with the same problem of optimizing the width of the photon-number distribution that we faced in finding the transcoherent states.
Writing the photon-number distribution as
\eq{
|\psi_n|^2\propto\exp\left[-\frac{\left(n-\bar{n}\right)^2}{2\sigma^2}\right]
\label{eq:Gaussian photon number distribution}
} up until some manual cutoff $n_{\mathrm{max}}\gg \bar{n}+\sigma$, we can optimize over the variances $\sigma^2$ for various values of the average photon number $\bar{n}$. Intriguingly, we find that, for sufficiently large $\bar{n}$, the optimal variance is always \textit{slightly} number squeezed, approaching
\eq{
\sigma_{\mathrm{optimal}}^2\approx 0.9\bar{n}.
\label{eq:optimal variance for average pi over 2}
} This is depicted in Fig. \ref{fig:nbar20}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{nmax_400_nbar_20_ave_fids}
\caption{Average fidelity $\mathcal{F}$ for rotating atoms in arbitrary initial states by $\tfrac{\pi}{2}$ calculated from Eq. \eqref{eq:averaged fidelity general} using states of the form of Eq. \eqref{eq:Gaussian photon number distribution} with various variances. The optimal photon-number variances are scaled by that of coherent light with $\sigma^2=\bar{n}$. The cutoff point was chosen to be $n_{\mathrm{max}}=400$; here, $\bar{n}=20$.}
\label{fig:nbar20}
\end{figure}
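A minimal sketch of this optimization, assuming the parameters of Fig. \ref{fig:nbar20} ($\bar{n}=20$, $n_{\mathrm{max}}=400$), the optimal phase relationship quoted above, an interaction time fixed at its classical value $\Omega_{\bar{n}}t\approx\tfrac{\pi}{2}$, and a reading of Eq. \eqref{eq:averaged fidelity general} in which the prefactor of $\tfrac{1}{6}$ multiplies all three summed terms, is:
\begin{verbatim}
import numpy as np

nbar, n_max, Omega0 = 20.0, 400, 1.0
t = 0.5 * np.pi / (Omega0 * np.sqrt(nbar))   # classical pi/2-pulse time (assumption)

n = np.arange(n_max + 1)
half_n = 0.5 * t * Omega0 * np.sqrt(n + 1)   # Omega_n t / 2
half_m = 0.5 * t * Omega0 * np.sqrt(n)       # Omega_{n-1} t / 2
half_p = 0.5 * t * Omega0 * np.sqrt(n + 2)   # Omega_{n+1} t / 2

def averaged_fidelity(var):
    p = np.exp(-(n - nbar) ** 2 / (4.0 * var))   # Gaussian |psi_n|
    p /= np.linalg.norm(p)
    p_m = np.concatenate(([0.0], p[:-1]))        # |psi_{n-1}|
    p_p = np.concatenate((p[1:], [0.0]))         # |psi_{n+1}|
    terms = (p ** 2 * np.cos(half_n) * np.cos(half_m)
             + p_m * p_p * np.sin(half_n) * np.sin(half_m)
             + p * p_p * np.sin(half_n)
               * (2 * np.cos(half_n) + np.cos(half_m) + np.cos(half_p)))
    return 0.5 + np.sum(terms) / 6.0

ratios = np.linspace(0.5, 1.5, 101)              # candidate values of sigma^2 / nbar
fids = [averaged_fidelity(r * nbar) for r in ratios]
print("optimal sigma^2 / nbar ~", ratios[int(np.argmax(fids))])
\end{verbatim}
For these parameters the optimum should land near $0.9\bar{n}$, consistent with Eq. \eqref{eq:optimal variance for average pi over 2}; jointly optimizing over the interaction time as well may shift the value slightly.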
We can repeat this calculation to find the optimal squeezing to achieve any $\Theta$ pulse when averaged over all initial atomic states. The calculation yields a more cumbersome version of Eq. \eqref{eq:averaged fidelity general} and is given in Appendix \ref{app:averaging fidelity any pulse}. Optimizing this expression numerically over Gaussian field states, we find the intriguing relationship (Fig. \ref{fig:optimal pulses all situations})
\eq{
\sigma^2_{\mathrm{optimal}}= \bar{n}\sinc\frac{\Theta}{2},
\label{eq:sinc variance any atom}
} which explains the $0.9$ found in Eq. \eqref{eq:optimal variance for average pi over 2}. The maximum achievable fidelity decreases with $\Theta$ for a given $\bar{n}$ and is notably less than the perfect fidelities achievable when the atom is initially in its ground state. In fact, comparing this expression with Eq. \eqref{eq:sinc variance}, we see that the optimal field state for delivering a $\Theta$ pulse to an unknown atomic state has the same amount of squeezing as the optimal field state for delivering a $\Theta/2$ pulse to an atom in its ground state. While we only speculate on the origin of this conclusion, we are confident that it arises from some averaging of the distance that an atom must traverse on the Bloch sphere, which ranges from $0$ to $\Theta$; i.e., the average atom must rotate half as far as a ground-state atom during a $\Theta$ pulse, so the average atom requires a $\Theta/2$ pulse.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fidelity_and_variance_all_situations_nbar500_v2}
\caption{Optimal fidelity and squeezing for field states imparting $\Theta$ pulses on atoms with known (blue dots), unknown (orange triangles), and partially known (green $\times$s) initial states. Each point represents a different value of $\Theta$ for which the average fidelities were optimized over all possible variances and a given, fixed $\bar{n}=500$. \textbf{(a)} The average fidelities achieved are all large because we have used field states with large intensities. Known-state atoms can always be perfectly transformed, while it is easier to achieve shorter pulses with smaller $\Theta$ for atoms in unknown states. \textbf{(b)} The optimal variances are plotted in units of $\bar{n}$. Plotted on top are the curves $\sinc\Theta$ (blue), $\sinc\tfrac{\Theta}{2}$ (orange), and $\sqrt{1/2}\sinc\tfrac{\Theta}{2}$ (green). The blue and green curves intersect at $\Theta=\pi/2$ when $\sigma^2=2\bar{n}/\pi$ (red, dashed).}
\label{fig:optimal pulses all situations}
\end{figure}
\subsection{Averaging fidelity over initial states with known azimuth}~
We next take the case where the initial azimuthal angle of the atom is known; this is equivalent to taking $\phi=0$ instead of averaging over that coordinate. As calculated in Appendix \ref{app:averaging fidelity any pulse fixed phi}, the fidelity for a $\tfrac{\pi}{2}$ pulse, averaged over all initial values of the atom's polar coordinate $\theta$, is
\eq{
\mathcal{F}=&\frac{1}{2}
+\frac{1}{4}\sum_n \left|\psi_n \psi_{n+1}\right|\sin\frac{\Omega_n t}{2}\\
&\times\left(2\cos\frac{\Omega_n t}{2}+
\cos\frac{\Omega_{n-1} t}{2}+
\cos\frac{\Omega_{n+1} t}{2}\right)
.
}
By numerically maximizing this quantity over all states with Gaussian photon-number distributions, we find that the optimal field state is a coherent state that is number squeezed by $\tfrac{\pi}{2}$. This is exactly the same result as for transcoherent states, even though the corresponding quantity in Ref. \cite{GoldbergSteinberg2020} [Eq. (13) there] has a single cosine term instead of all three, so we proceed with the same method to justify our optimal solution. To maximize the fidelity, we need to maximize the inner product between vectors with components $\psi_{n+1} \sin\tfrac{\Omega_n t}{2}$ and $\psi_{n}\left(2\cos\frac{\Omega_n t}{2}+
\cos\frac{\Omega_{n-1} t}{2}+
\cos\frac{\Omega_{n+1} t}{2}\right)/4$. By the Cauchy-Schwarz inequality, this is achieved when the vectors are parallel, satisfying
\eq{
\psi_{n+1} \sin\tfrac{\Omega_n t}{2}=\frac{\psi_{n}}{4}\left(2\cos\frac{\Omega_n t}{2}+
\cos\frac{\Omega_{n-1} t}{2}+
\cos\frac{\Omega_{n+1} t}{2}\right).
} This generates a recursion relation for the ideal state coefficients that can be expanded about their peak at $\bar{n}$ for an evolution time $\Omega_{\bar{n}}t=\tfrac{\pi}{2}$:
\eq{
\frac{2\cos\frac{\Omega_{\bar{n}+\delta} t}{2}+
\cos\frac{\Omega_{\bar{n}+\delta-1} t}{2}+
\cos\frac{\Omega_{\bar{n}+\delta+1} t}{2}}{4\sin\tfrac{\Omega_{\bar{n}+\delta} t}{2}}\approx 1-\frac{\pi\delta}{4\bar{n}}.
}
We select a probability distribution satisfying the approximate difference equation
\eq{
\frac{\psi_{\bar{n}+\delta}-\psi_{\bar{n}}}{\delta}\approx -\frac{\pi\delta}{4\bar{n}}\psi_{\bar{n}}
,
} whose solution is the photon-number-squeezed Gaussian distribution
\eq{
\psi_{\bar{n}+\delta}\approx \psi_{\bar{n}}\exp\left(-\frac{\delta^2}{4\sigma^2}\right),\quad \sigma^2=\frac{2\bar{n}}{\pi}.
}
We thus observe that the best states for \textit{exactly} producing a $\tfrac{\pi}{2}$ pulse and for \textit{on average} producing a $\tfrac{\pi}{2}$ pulse are the same, so long as the azimuthal coordinate of the initial atomic state is known.
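A short numerical check of this result (a sketch under the same assumptions as above: Gaussian amplitudes, the classical interaction time, and the illustrative values $\bar{n}=20$ and $n_{\mathrm{max}}=400$) scans the variance in the averaged fidelity quoted at the start of this subsection and should return an optimum near $2/\pi\approx 0.64$ in units of $\bar{n}$:
\begin{verbatim}
import numpy as np

nbar, n_max, Omega0 = 20.0, 400, 1.0
t = 0.5 * np.pi / (Omega0 * np.sqrt(nbar))   # classical pi/2-pulse time (assumption)

n = np.arange(n_max + 1)
half_n = 0.5 * t * Omega0 * np.sqrt(n + 1)   # Omega_n t / 2
half_m = 0.5 * t * Omega0 * np.sqrt(n)       # Omega_{n-1} t / 2
half_p = 0.5 * t * Omega0 * np.sqrt(n + 2)   # Omega_{n+1} t / 2

def fidelity_known_phi(var):
    p = np.exp(-(n - nbar) ** 2 / (4.0 * var))   # Gaussian |psi_n|
    p /= np.linalg.norm(p)
    p_p = np.concatenate((p[1:], [0.0]))         # |psi_{n+1}|
    return 0.5 + 0.25 * np.sum(p * p_p * np.sin(half_n)
                               * (2 * np.cos(half_n) + np.cos(half_m)
                                  + np.cos(half_p)))

ratios = np.linspace(0.3, 1.2, 91)               # candidate sigma^2 / nbar
fids = [fidelity_known_phi(r * nbar) for r in ratios]
print("optimal sigma^2 / nbar ~", ratios[int(np.argmax(fids))])
\end{verbatim}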
The same calculation can be done for arbitrary pulse areas $\Theta$. Averaging the success probability for acting on atoms with $\phi=0$ and arbitrary $\theta$ (Appendix \ref{app:averaging fidelity any pulse fixed phi}), we find that the optimal variances obey (Fig. \ref{fig:optimal pulses all situations})
\eq{
\sigma^2=\frac{2\bar{n}}{\pi}\frac{\sinc \frac{\Theta}{2}}{\sinc \frac{\pi}{4}}=\bar{n}\frac{\sinc \frac{\Theta}{2}}{\sqrt{2}}.
\label{eq:sinc variance known phi}
} As usual, larger pulse areas require more photon-number squeezing, and we find that more squeezing is required than when averaging over the entire Bloch sphere.
\subsection{Discussion}~
The optimal field state for enacting a pulse area of $\Theta$ on an atomic state depends on the atomic state. We can collect some of our results: when the atom is initially in its ground state, the optimal photon-number variance is [Eq. \eqref{eq:sinc variance}] $\bar{n}\sinc\Theta$; when the atom is initially in a state with some known $\phi$ but unknown polar angle, the optimal variance is [Eq. \eqref{eq:sinc variance known phi}] $\bar{n}\sinc\tfrac{\Theta}{2}/\sqrt{2}$; and, when the atom is initially in an unknown state, the optimal variance is [Eq. \eqref{eq:sinc variance any atom}] $\bar{n}\sinc\tfrac{\Theta}{2}$. How do all of these compare with each other?
In terms of fidelity, not knowing the initial state leads to poorer performance. Surprisingly, averaging over a known azimuth leads to slightly smaller fidelities than averaging over the entire sphere. This discrepancy arises from the different Jacobian factors when integrating over a circle versus a sphere, implying that the ratio of the performances of states initially near the equator to states initially at the poles is what controls the overall success on average.
All of the scenarios require more photon-number squeezing for larger pulse areas. When the pulse area is $\tfrac{\pi}{2}$ or greater, an atom in its ground state requires the most squeezing because it has to travel the furthest, an atom oriented along a known meridian requires less squeezing on average, and an atom oriented in an unknown direction requires the least squeezing on average. That the completely unknown orientation requires the least squeezing makes sense: on average, such atoms need to traverse an angular distance of $\Theta/2$ for a rotation about some fixed axis on the Bloch sphere. That the polar-angle-unknown orientation requires less squeezing than ground-state atoms is more surprising: this implies that it requires more ``effort,'' in terms of greater squeezing, to travel from the poles of the Bloch sphere than from any other point. The cause of this discrepancy for angles other than $\Theta=\tfrac{\pi}{2}$ remains an open question for further study.
When the pulse area is less than $\tfrac{\pi}{2}$, the variances quoted above are more squeezed for the case with known $\phi$ than for atoms initially in their ground states. While this may be an empirical phenomenon, there is a fly in the ointment: it is not clear for $\Theta<\tfrac{\pi}{2}$ what the optimal relationship \eq{\varphi\equiv \arg \psi_{n+1}-\arg \psi_n} in Appendix \ref{app:averaging fidelity any pulse fixed phi}
should be for the initial field states. However, performing a multiparameter optimization over $\varphi$ and $\sigma^2$, we always find the optimal value to have $\varphi\approx\tfrac{\pi}{2}$ and thus maintain the variance relationship of Eq. \eqref{eq:sinc variance known phi}. We must then conclude that, somehow, ground-state atoms require less squeezing than others for rotations by $\Theta<\tfrac{\pi}{2}$ about a great circle and more squeezing than others for rotations by $\Theta>\tfrac{\pi}{2}$ about the same great circle. This is an intriguing phenomenon that surely deserves further research.
Typically, quantum computing algorithms require the same operation to be performed on arbitrary initial states. Given the optimal field state for this purpose that is squeezed by $\sinc\tfrac{\Theta}{2}$, what advantage can one acquire relative to standard quantum computing protocols that use coherent states to perform logic gates on atoms? We plot in Fig. \ref{fig:fidelity improvement} the improvement in the average fidelity [Eq. \eqref{eq:averaged fidelity general}] that one can attain using the optimally squeezed states relative to coherent states with no squeezing, comparing how this improvement changes with the energy of the field state. The fidelity improvement of squeezed states over coherent states increases quickly with the rotation angle and the improvement lessens with increasing $\bar{n}$, while the relative error decreases with rotation angle and is independent of $\bar{n}$. These imply that quantum computing applications that are limited in average photon number, that are using many $\pi$ gates, or that possess any significant error rates may benefit the most from using squeezed light to improve their logic gates.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fidelity_improvement_unknown_atom}
\caption{\textbf{(a)} Additive increase in the fidelity $\mathcal{F}$ of performing a $\Theta$-rotation on an atom averaged over all initial atomic states using light whose photon-number distribution is squeezed by an optimal amount $\sinc\tfrac{\Theta}{2}$ relative to coherent light with the same strength. The improvement is significant for larger rotation angles and smaller field-state strengths. \textbf{(b)} Multiplicative decrease in the error $1-\mathcal{F}$ of performing a $\Theta$-rotation on an atom averaged over all initial atomic states using light whose photon-number distribution is squeezed by an optimal amount $\sinc\tfrac{\Theta}{2}$ relative to coherent light with the same strength. The improvement is significant for larger rotation angles and is largely independent of the field-state strength.
}
\label{fig:fidelity improvement}
\end{figure}
\section{Generating $\tfrac{\pi}{2}$ pulses for collections of atoms}~
Can the transcoherent states be generalized to field states that impart optimal pulses on collections of atoms?
A set of atoms all in the maximally coherent state $\ket{\tfrac{\pi}{2}}$ is useful for applications such as creating lasers with noise-free amplification \cite{ScullyZubairy1988}.
We investigate the collective interaction governed by the Tavis-Cummings interaction Hamiltonian \cite{TavisCummings1968}
\eq{
H_{\mathrm{TC}}=\frac{\Omega_0}{2}( {a} J_+ + {a}^\dagger\vphantom{a} J_-),
} where we now employ the collective excitation operators from SU(2):
\eq{
J_i=\sum_{k=1}^{2J}\mathds{I}^{(1)}\otimes\cdots\otimes\mathds{I}^{(k-1)}\otimes\sigma_i^{(k)}\otimes\mathds{I}^{(k+1)}\otimes\cdots\otimes\mathds{I}^{(2J)}.
} Here, $2J$ is the total number of atoms, where the $k$th atom has its own Pauli operators $\sigma_i^{(k)}$ and the permutation-symmetric states of the $2J$ atoms are equivalent to a single spin-$J$ particle. The transcoherent state problem begins with all of the atomic states in their collective ground state $\ket{J,-J}=\ket{\mathrm{g}}^{\otimes 2J}$ that is annihilated by $J_-$ and is an eigenstate of $J_z$ with minimal eigenvalue $-J$. A $\tfrac{\pi}{2}$ pulse acting on all atoms simultaneously would enact the transformation to the spin-coherent state $\ket{\mathrm{g}}^{\otimes 2J}\to 2^{-J} (\ket{\mathrm{g}}+\ket{\mathrm{e}})^{\otimes 2J}$ that is an eigenstate of $J_x$ with maximal eigenvalue $J$, which can be expressed in the basis of $J_z$ eigenstates as
\eq{
\ket{J,-J}\to\frac{1}{2^J}\sum_{m=-J}^J \sqrt{\binom{2J}{m+J}}\ket{J,m}.
\label{eq:pi/2 pulse on 2J atoms}
} How can this best be performed?
\subsection{Optimal pulses for maximum coherence generation}~
We can investigate a series of field states to find which ones best impart a $\tfrac{\pi}{2}$ pulse on a collection of $2J$ atoms. Unlike in the case of the JCM, it is not convenient to write a closed-form expression for the fidelity as a function of the field-state coefficients. Instead, we choose a variety of representative field states, from which we evolve the TCM numerically using \texttt{QuTiP} \cite{Johanssonetal2012,Johanssonetal2013}. These can then be compared to the optimal final state from Eq. \eqref{eq:pi/2 pulse on 2J atoms} and optimized accordingly.
To make the optimization tractable, we choose to optimize over field states with Gaussian photon-number distributions with varying widths. This is motivated in part by the optimal states for the JCM always having Gaussian photon-number distributions, in part because Gaussian states are among the easiest to prepare experimentally, and in part because field states with sufficiently large average photon number and sufficiently localized photon-number distributions will convert $H_{\mathrm{TC}}$ into a rotation of the form
\eq{
\exp\left(-\text{i} H_{\mathrm{TC}}t\right)\underset{\bar{n}\gg 1}{\approx}\exp\left[-i\Omega_0 \sqrt{\bar{n}}t\left(J_x\cos\varphi+J_y\sin\varphi\right)\right],
\label{eq:strong field approx TCM rotation}
} where again $\varphi$ encodes the relative phases of the field-state coefficients. For a given fixed $\bar{n}$, we thus expect the fidelity to be optimized by an interaction time $\Omega_0 t\sqrt{\bar{n}}\approx \pi/2$.
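A minimal \texttt{QuTiP} sketch of this comparison, assuming the illustrative choices of $2J=4$ atoms, $\bar{n}=100$, a $200$-level Fock-space truncation, a photon-number variance of $2\bar{n}/\pi$, the classical interaction time, and relative phases $\text{i}^n$ chosen (an assumption about the optimal phases) so that the strong-field limit of Eq. \eqref{eq:strong field approx TCM rotation} rotates the atoms towards the target state of Eq. \eqref{eq:pi/2 pulse on 2J atoms}, is:
\begin{verbatim}
import numpy as np
from math import comb
from qutip import (Qobj, destroy, jmat, spin_state, tensor, sesolve, fidelity)

J, nbar, Nfock, Omega0 = 2.0, 100.0, 200, 1.0    # illustrative parameters
var = 2 * nbar / np.pi                           # pi/2-squeezed variance (ansatz)

# Gaussian photon-number amplitudes; the i^n phases are an assumption chosen so
# that the strong-field limit rotates the atoms towards the +J_x direction
ns = np.arange(Nfock)
amps = (1j) ** ns * np.exp(-(ns - nbar) ** 2 / (4 * var))
field = Qobj(amps.reshape(-1, 1)).unit()

psi0 = tensor(field, spin_state(J, -J))          # all atoms in the ground state
H = 0.5 * Omega0 * (tensor(destroy(Nfock), jmat(J, '+'))
                    + tensor(destroy(Nfock).dag(), jmat(J, '-')))

t_pulse = 0.5 * np.pi / (Omega0 * np.sqrt(nbar)) # classical pi/2-pulse time
rho_atoms = sesolve(H, psi0, [0.0, t_pulse]).states[-1].ptrace(1)

# target spin-coherent state, built from the binomial coefficients of the text
coeffs = np.array([np.sqrt(comb(int(2 * J), int(J + m))) / 2 ** J
                   for m in np.arange(J, -J - 1, -1)])
target = Qobj(coeffs.reshape(-1, 1))

# qutip's fidelity is the square root of the success probability for a pure target
print("success probability:", fidelity(rho_atoms, target) ** 2)
\end{verbatim}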
Figure \ref{fig:TCM optimals} plots the optimal field-state variances and interaction times for achieving $\tfrac{\pi}{2}$ pulses for various values of $J$ and $\bar{n}$. These parameters are the best ones found using the Nelder–Mead method implemented in \texttt{SciPy} with a variety of random seeds. As expected, the overall fidelities increase with increasing $\bar{n}$. It is perhaps unsurprising that they decrease with increasing $J$, as $J=1/2$ is the only situation in which perfect $\tfrac{\pi}{2}$ pulses can be implemented, and that the corresponding error grows quadratically in $J$ from this minimum. The optimal squeezing and time parameters follow opposite trends such that the optimal photon-number variance and interaction time obey the following relationship for a given average photon number: \eq{
\sigma_{\mathrm{optimal}}^2&\approx \frac{2\bar{n}}{\pi} ,\\
\Omega_0 t_{\mathrm{optimal}}\sqrt{\bar{n}}&\approx \frac{\pi}{2} ,\\
\Rightarrow\quad
\sigma_{\mathrm{optimal}}^2\Omega_0 t_{\mathrm{optimal}}&\approx \sqrt{\bar{n}}.
\label{eq:optimal params TCM}
} There is also some residual dependence on $J$ that may warrant the replacement $\bar{n}\to\bar{n}-J/2+1/2$ in Eq. \eqref{eq:optimal params TCM}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{infidelity_variance_time_nbar100_200_v2}
\caption{Optimal $\tfrac{\pi}{2}$ pulses on a collection of $2J$ atoms or another spin-$J$ system, for various average energies (blue dots and orange triangles have $\bar{n}=100$ and $\bar{n}=200$, respectively). \textbf{(a)} Error probability, i.e., unity minus the fidelity, of enacting a perfect $\tfrac{\pi}{2}$ pulse to achieve the transformation of Eq. \eqref{eq:pi/2 pulse on 2J atoms}. All of the fidelities are excellent, with larger $\bar{n}$ being more favourable and larger $J$ being less favourable. \textbf{(b)} Optimal variances for the initial field states. These all have their photon-number distributions squeezed by approximately $\tfrac{\pi}{2}$ relative to coherent states, with increasing squeezing required for increasing $J$. The scatter in the plot implies that not all of the results have converged to their optimal values, which may only be achievable with larger $\bar{n}$ and longer optimization times. \textbf{(c)} Optimal interaction times to achieve the desired transformation. These are all approximately the classical times for a $\tfrac{\pi}{2}$ pulse, with a slight increase in optimal time with increasing $J$. Of note, the products of the optimal variances and times are approximately unity in these units, implying that $\sigma^2\Omega_0 t/\sqrt{\bar{n}}\approx 1$.
}
\label{fig:TCM optimals}
\end{figure}
\subsection{Semiclassical limit}~
The semiclassical limit of the TCM with a highly energetic field has been studied in Refs. \cite{DrobnyJex1993,Chumakovetal1994,KlimovChumakov1995,Retamaletal1997}. Those showed that, like in the JCM \cite{GeaBanacloche1990,PhoenixKnight1991a,PhoenixKnight1991b,GeaBanacloche1991,GeaBanacloche1992,GeaBanacloche2002,vanEnkKimble2002,SilberfarbDeutsch}, one cannot simply employ the replacement ${a}\to\alpha$ in $H_{\mathrm{TC}}$ in the strong-field limit as per Eq. \eqref{eq:strong field approx TCM rotation}, as this neglects possible atom-field entanglement for any finite $\bar{n}$ and wrongly predicts the final atomic state to be pure. In Ref. \cite{Retamaletal1997}, for example, we find that the TCM Hamiltonian can be approximated in the presence of a strong field with the appropriate relative phase relationships as
\eq{
\tilde{H}_{\mathrm{TC}}=-\Omega_0\sqrt{{a}^\dagger\vphantom{a}{a}-J+1/2}{J}_y.
\label{eq:TCM approx H}
} This approximation is valid for $\bar{n}-J+1/2\gg 1$ and all of the interaction times we consider here $(\Omega_0 t\sim \bar{n}^{-1/2}\ll \bar{n})$. It serves to rotate the collective atomic state at a Rabi frequency
\eq{
\Omega(J,n)=\Omega_0\sqrt{n-J+1/2}
} for a given field-state energy level $\ket{n}$, which is smaller than $\Omega_n$ above and decreases with $J$, explaining why slightly longer interaction times are required with increasing $J$ to achieve the same $\tfrac{\pi}{2}$ pulses [Fig. \ref{fig:TCM optimals}\textbf{(c)}]. However, the actual functional dependence in Fig. \ref{fig:TCM optimals}\textbf{(c)} looks like it follows $\Omega(J/2,n)$ instead of $\Omega(J,n)$, so we will investigate further to elucidate whether this is simply a numerical artifact. That one cannot simply replace $\bar{n}\to\bar{n}-J+1/2$ in Eqs. \eqref{eq:optimal params TCM} to match the replacement $\Omega_n\to\Omega(J,n)$ may be justified by the competition between Rabi frequencies with a variety of values of $n$.
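For instance, at $\bar{n}=100$ and $J=5$ the collective rotation proceeds at $\Omega(J,\bar{n})=\Omega_0\sqrt{95.5}\approx 9.8\,\Omega_0$, slightly slower than the corresponding Jaynes-Cummings value $\Omega_{\bar{n}}=\Omega_0\sqrt{101}\approx 10.0\,\Omega_0$, so the $\tfrac{\pi}{2}$-pulse time is correspondingly longer.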
We can approximate the full evolution of our initial state using Eq. \eqref{eq:TCM approx H} and the unitary evolution
\eq{
U_{\mathrm{TC}}=Q\exp\left(-\text{i}\tilde{H}_{\mathrm{TC}}t\right)Q^\dagger.
} Here,
\eq{
Q=\sum_{m=-J}^J\text{e}^{\text{i}\hat{\phi}(J+m)}\otimes\ket{J,m}\bra{J,m}
} is an almost-unitary operator ($\left[Q,Q^\dagger\right]=|0\rangle\langle 0|$) and \eq{
\exp(\text{i} \hat{\phi})=\sum_{n=0}^\infty \ket{n}\bra{n+1}
} is a phase operator for the field (we have reserved the caret for the operator $\hat{\phi}$ in deference to the intricacies of phase operators \cite{BarnettVaccaro2007}). The first transformation leaves the state unchanged:
\eq{
Q^\dagger \sum_n \psi_n\ket{n}\otimes \ket{J,-J}=\sum_n \psi_n\ket{n}\otimes\ket{J,-J}.
}
Next, the effective Hamiltonian enacts a rotation of the atomic states by an angle that depends on the field's energy level in a manner reminiscent of Eq. \eqref{eq:pi/2 pulse on 2J atoms}:
\eq{
&\text{e}^{-\text{i}\tilde{H}_{\mathrm{TC}}t}\sum_n \psi_n\ket{n}\otimes\ket{J,-J}=\sum_n \psi_n\ket{n}\\
&\otimes \sum_{m=-J}^J\sqrt{\binom{2J}{m+J}}\cos^{J-m}\frac{\Omega(J,n)t}{2}\sin^{J+m}\frac{\Omega(J,n)t}{2}\ket{J,m}.
\label{eq:midway approximate psi TCM}
}
Note that the atomic states in the superposition are SU(2)-coherent states that are eigenstates of the spin operator $J_x \sin\left[\Omega(J,n)t\right]-J_z\cos\left[\Omega(J,n)t\right]$, which points at different angles for different field energy levels; if this were the end of the evolution, an initial field state with definite photon number $n$ would suffice to perfectly enact a rotation by $\Omega(J,n)t$ on all of the atoms.
Finally, using
\eq{
\exp(\text{i} \hat{\phi}k)=\sum_{n=0}^\infty \ket{n}\bra{n+k},
} the transformed state becomes
\eq{
\ket{\Psi(t)}=\sum_{n=2J+1}^\infty\sum_{m=-J}^J
\sqrt{\binom{2J}{m+J}}\psi_{n+J+m}\ket{n}\otimes\ket{J,m}\\
\times \cos^{J-m}\frac{\Omega(J,n+J+m)t}{2}\sin^{J+m}\frac{\Omega(J,n+J+m)t}{2},
\label{eq:approx psi(t) TCM}
} where we have restricted our attention to states with $\psi_n= 0$ for $n\leq 2J$ such that $Q$ is unitary.
We can then inspect the properties of this state to see how to optimally achieve the $\tfrac{\pi}{2}$ pulse of Eq. \eqref{eq:pi/2 pulse on 2J atoms}.
For Eq. \eqref{eq:approx psi(t) TCM} to best approximate a $\tfrac{\pi}{2}$ pulse, we would like
\eq{
\psi_{n+k} \cos^{2J-k}\frac{\Omega(J,n+k)t}{2}\sin^{k}\frac{\Omega(J,n+k)t}{2}\approx \frac{1}{2^J}
} for all values of $n$ and $k$ such that the final state is most separable and the atomic state is closest to that of Eq. \eqref{eq:pi/2 pulse on 2J atoms}. The width of the trigonometric terms' distribution changes with $n$ and $k$, so it is not obvious how to choose the appropriate optimal width for the photon-number distribution, although we note that all Gaussian distributions centred at $\bar{n}$ with interaction times $\Omega(J,\bar{n})t\approx\pi/2$ will converge to the proper limit with large $\bar{n}$. Instead, we can look at the overlap between $\ket{\Psi(t)}$ and a state rotated by $\Theta$:
\eq{
\left|\braket{J,\Theta}{\Psi(t)}\right|^2=
\sum_{n=2J+1}^\infty\left|\sum_{m=-J}^{J}
{\binom{2J}{m+J}}\psi_{n+J+m}\right.\\
\times \cos^{J-m}\frac{\Omega(J,n+J+m)t}{2}\sin^{J+m}\frac{\Omega(J,n+J+m)t}{2}\\
\left.\times \cos^{J-m}\frac{\Theta}{2}\sin^{J+m}\frac{\Theta}{2}\right|^2.
\label{eq:approx fidelity TCM}
} By selecting $\Theta=\pi/2$ and Gaussian photon-number distributions centred at $\bar{n}$, we can optimize this result over all interaction times $\Omega_0 t$ and photon-number variances $\sigma^2$. Exemplary results are plotted in
Fig. \ref{fig:approx infidelities}, with the data from Fig. \ref{fig:TCM optimals} overlain.
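As a rough numerical illustration of this procedure, the following Python sketch evaluates Eq. \eqref{eq:approx fidelity TCM} for a Gaussian photon-number distribution and refines the semiclassical guess of Eq. \eqref{eq:optimal params TCM} with the Nelder–Mead routine of \texttt{SciPy}; the function name, the Fock-space cutoff, and the starting guess are our own choices for this sketch and not necessarily those used to produce the figures.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb

def approx_fidelity(params, nbar, J, theta=np.pi / 2):
    """Evaluate the approximate fidelity for a Gaussian photon-number
    distribution (variance sigma2) and interaction time tau = Omega_0*t."""
    sigma2, tau = params
    if sigma2 <= 0 or tau <= 0:
        return 0.0
    twoJ = int(round(2 * J))
    n_max = int(nbar + 12 * np.sqrt(sigma2))        # Fock-space cutoff
    m = np.arange(-J, J + 1)                        # spin projections
    n = np.arange(twoJ + 1, n_max + 1)              # field labels, n > 2J
    k = np.arange(n_max + twoJ + 1)
    psi = np.exp(-(k - nbar) ** 2 / (4 * sigma2))   # Gaussian amplitudes
    psi /= np.linalg.norm(psi)
    idx = (n[:, None] + J + m[None, :]).astype(int)       # n + J + m
    Wt = tau * np.sqrt(idx - J + 0.5)               # Omega(J, n+J+m) * t
    terms = (comb(twoJ, np.rint(m + J).astype(int))[None, :]
             * psi[idx]
             * np.cos(Wt / 2) ** (J - m)[None, :]
             * np.sin(Wt / 2) ** (J + m)[None, :]
             * np.cos(theta / 2) ** (J - m)
             * np.sin(theta / 2) ** (J + m))
    return float(np.sum(np.abs(terms.sum(axis=1)) ** 2))

# Refine the semiclassical guess for nbar = 100, J = 5.
nbar, J = 100.0, 5.0
guess = [2 * nbar / np.pi, 0.5 * np.pi / np.sqrt(nbar)]
result = minimize(lambda p: 1.0 - approx_fidelity(p, nbar, J),
                  guess, method="Nelder-Mead")
print(result.x, 1.0 - result.fun)
\end{verbatim}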
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{approx_infidelity_variance_time_nbar100_200_500_v2}
\caption{Optimal $\tfrac{\pi}{2}$ pulses on a collection of $2J$ atoms as in Fig. \ref{fig:TCM optimals}, but now optimized using the approximate fidelity of Eq. \eqref{eq:approx fidelity TCM} to permit larger $\bar{n}$ and $J$ ($\bar{n}=100$, $200$, and $500$ are the blue dots, orange triangles, and green diamonds, respectively). The approximation breaks down when $\sqrt{\bar{n}}\gg J$ is not achieved. \textbf{(a)} The error probabilities $1-\mathcal{F}$ with this method are again quite low but are nonnegligible when the approximation fails. \textbf{(b)} The optimal variances best match the curve $\sigma^2=2(\bar{n}-1.2J+0.5)/\pi$. \textbf{(c)} The optimal interaction times best match the curve $\Omega_0 t\sqrt{\bar{n}-1.2J+0.5}=\pi/2$. In all cases, the solutions found using the exact evolution (red $\times$s and purple $+$s for $\bar{n}=100$ and $200$, respectively) best match the $\bar{n}=500$ solution (i.e., the largest energy considered) for all $J$.
}
\label{fig:approx infidelities}
\end{figure}
Comparing the results between the two optimization methods in Fig. \ref{fig:approx infidelities}, we see that they match in the case of large $\bar{n}$. Intriguingly, the replacement $\bar{n}\to \bar{n}-qJ+1/2$ in Eq. \eqref{eq:optimal params TCM} seems to always hold, but the optimal order-unity parameter $q$ does not seem consistent between $\sigma^2_{\mathrm{optimal}}$ and $t_{\mathrm{optimal}}$.
This may be partially explained by setting $|\psi_{\bar{n}+\delta}|\propto \exp(-\delta^2/4\sigma^2)$ for small $\delta$ in Eq. \eqref{eq:approx fidelity TCM}, expanding around $n=\bar{n}$, expanding the sinusoidal terms around large $\bar{n}$, and asking what photon-number variance $\sigma^2$ will cancel all of the $\mathcal{O}(\delta)$ terms; the result is
\eq{
\sigma^2
=\frac{2J+m+m^\prime}{m+m^\prime}\left[\frac{2(\bar{n}-J+1/2)}{\pi}+\frac{J (m+m^\prime)+m^2+m^{\prime 2}}{2}\right],
} which resembles Eq. \eqref{eq:optimal params TCM} with the replacement $\bar{n}\to\bar{n}-J+1/2$ but changes with different values of $m$ and $m^\prime$ and becomes unphysical (goes negative) when $m+m^\prime\leq 0$.
It is thus always a good approximation to use Eq. \eqref{eq:optimal params TCM} and then update $\bar{n}$ as a function of $J$ for the problem at hand.
A complementary strategy for optimizing the initial field states is to look at the evolution of the expectation values of $J_z$ and $J_x$. For a perfect $\tfrac{\pi}{2}$ pulse, the expectation value of $J_z$ should go from its minimal value $-J$ to $0$, while that of $J_x$ should go from $0$ to its maximal value $J$. In fact, given a fixed total spin, any of the collective operators $J_i$ attaining its maximal eigenvalue is a sufficient condition for the spin state to be in a pure state and thus for there to be no residual entanglement with the light. These goals can then give us constraints on our initial field states in the spirit of transcoherence.
The $J_z$ operator evolves in the Heisenberg picture as \cite{KlimovChumakov1995}
\eq{
U_{\mathrm{TC}}J_z U_{\mathrm{TC}}^\dagger=J_z \cos\hat{\tau}-J_y\sin\hat{\tau}
} for the field operator $\hat{\tau}=\Omega_0 t\sqrt{{a}^\dagger\vphantom{a}{a}-J+1/2}$. Since the initial atomic state has $\expct{(J_x,J_y,J_z)}=(0,0,-J)$, the final state has
\eq{
\langle \Psi(t)|J_z|\Psi(t)\rangle=-J\expct{\cos\hat{\tau}},
} where the final expectation value is taken with respect to the initial field state. For this quantity
\eq{
\expct{\cos\hat{\tau}}=\sum_n |\psi_n|^2\cos\left(\Omega_0 t\sqrt{n-J+1/2}\right)
}
to vanish, the photon-number distribution should be strongly peaked around $\expct{\hat{\tau}}=\pi/2$, again corresponding to a classical $\tfrac{\pi}{2}$ pulse with average field strength $\bar{n}$ and interaction time $\Omega(J,\bar{n})t=\pi/2$.
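As a quick standalone numerical check of this condition (the cutoff and parameter values below are arbitrary choices), one can verify that $\expct{\cos\hat{\tau}}$ is indeed negligible for a number-squeezed Gaussian distribution at the classical $\tfrac{\pi}{2}$-pulse time:
\begin{verbatim}
import numpy as np

nbar, J = 100.0, 5.0
sigma2 = 2 * (nbar - J + 0.5) / np.pi           # number-squeezed variance
n = np.arange(int(np.ceil(J)), 400)             # keep n - J + 1/2 positive
p = np.exp(-(n - nbar) ** 2 / (2 * sigma2))     # |psi_n|^2, Gaussian
p /= p.sum()
tau = 0.5 * np.pi / np.sqrt(nbar - J + 0.5)     # Omega_0 t for a pi/2 pulse
print(np.sum(p * np.cos(tau * np.sqrt(n - J + 0.5))))   # close to zero
\end{verbatim}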
We then look at the evolution of $J_x$. Since we already have an expression for the evolved state, we can directly calculate
\eq{
\langle \Psi(t)|J_x|\Psi(t)\rangle=\sum_{n=2J+1}^\infty\sum_{m=-J}^J\binom{2J}{J+m}(J-m)\psi_{n+J+m}\psi_{n+J+m+1}^*\\
\times \cos^{J-m}\frac{\Omega(J,n+J+m)t}{2}\sin^{J+m}\frac{\Omega(J,n+J+m)t}{2}\\
\times \cos^{J-m-1}\frac{\Omega(J,n+J+m+1)t}{2}\sin^{J+m+1}\frac{\Omega(J,n+J+m+1)t}{2}.
} Expanding around $n+J=\bar{n}+\delta$ for small $\delta$ and again using $|\psi_{\bar{n}+\delta}|\propto\exp(-\delta^2/4\sigma^2)$, we find
\eq{
\frac{\psi_{n+J+m}\psi_{n+J+m+1}^*}{|\psi_{\bar{n}}|^2}\approx 1-\frac{(\delta+m)^2+(\delta+m+1)^2}{4 \sigma^2}+\mathcal{O}(\sigma^{-4}).
} The sinusoidal terms expand to leading order in $\bar{n}-J+1/2$ as
\eq{
4^{-J}\left\{1+\frac{\pi \left[2 \delta m+\delta+2 m (m+1)+1\right]}{4\left(\bar{n}-J+1/2\right)}\right\}
\nonumber
} Multiplying these terms by the coefficients $\binom{2J}{J+m}(J-m)$ and summing from $m=-J$ to $m=J$ yields, to leading order in $\sigma^2$ and $\bar{n}-J+1/2$,
\eq{
\langle \Psi(t)|J_x|\Psi(t)\rangle\approx J\sum_{\delta} |\psi_{\bar{n}}|^2\left[ 1-\frac{ 2 \delta^2+J}{4 \sigma^2}+\frac{\pi J}{4 (\bar{n}-J+1/2)}\right].
} Performing the sum from $\delta=-l$ to $\delta=l$ for some $l$, the leading-order terms cancel each other when
\eq{
\sigma^2=\frac{\bar{n}-J+1/2}{\pi }\left(1+\frac{2l(l+1)}{3J}\right).
} This is exactly the result of Eq. \eqref{eq:optimal params TCM} with the replacement $\bar{n}\to\bar{n}-J+1/2$ and the number-squeezing factor being replaced as $\tfrac{2}{\pi}\to \left(1+\tfrac{2l(l+1)}{3J}\right)/\pi$; if the normalization is given by $|\psi_{\bar{n}}|^2\approx 1/\sqrt{2J+1}$, we find $l=(\sqrt{2J+1}-1)/2$ and the number-squeezing factor being exactly $\tfrac{2}{\pi}$ together yield the best result for $\langle \Psi(t)|J_x|\Psi(t)\rangle\approx J$. The overall dependence of the optimal variance $\sigma^2$ on $\bar{n}$ and $J$ is now apparent and considerations of different ranges of the sum over $\delta$ change the dependence of the optimal variance on $J$.
\subsection{Perfect pulses cannot be generated for the Tavis-Cummings interaction}~
The evolution of the TCM can always be solved exactly \cite{TavisCummings1968,Scharf1970,HeppLieb1973} but requires the intricate solution of the Bethe ansatz equations \cite{Bogoliubovetal1996,Luetal2021arxiv}. We solve this system of equations for the case of $N=2$ atoms, given the initial state $\sum_n \psi_n\ket{n}\otimes \ket{J,-J}$, to show that perfect $\tfrac{\pi}{2}$ pulses are not generally possible. Since total excitation number is conserved and the atomic subspace is spanned by only three states $\ket{1,\pm 1}$ and $\ket{1,0}$, we can analytically solve this model.
The evolved state at any time $t$ is (using $\ket{m,n}\equiv\ket{m}\otimes\ket{J,n}$ for brevity in this subsection alone):
\eq{
\ket{\Psi(t)}&=\psi_0\ket{0,-1}+\psi_1\left(\cos\frac{\Omega_0 t}{\sqrt{2}}\ket{1,-1}-\text{i}\sin\frac{\Omega_0 t}{\sqrt{2}}\ket{0,0}\right)\\
&+\sum_{n\geq 2} \psi_n\ket{n,-1}\left(\frac{n-1}{2n-1}+\frac{n}{2n-1}\cos\frac{\Omega_0 t\sqrt{2n-1}}{\sqrt{2}}\right)\\
&+ \psi_n\ket{n-2,1}\frac{\sqrt{n(n-1)}}{2n-1}\left(-1+\cos\frac{\Omega_0 t\sqrt{2n-1}}{\sqrt{2}}\right)\\
&-\text{i} \psi_n\ket{n-1,0}\sqrt{\frac{n}{2n-1}}\sin\frac{\Omega_0 t\sqrt{2n-1}}{\sqrt{2}}.
}
We found this using the eigenstates
\eq{
\ket{n\pm}=\frac{1}{\sqrt{2}}\left(\sqrt{\frac{n}{2n-1}}\ket{n,-1}\pm\ket{n-1,0}+\sqrt{\frac{n-1}{2n-1}}\ket{n-2,1}\right)
} with eigenvalues $\pm\sqrt{2n-1}\Omega_0/\sqrt{2}$ and the null eigenstate
\eq{
\ket{n0}=-\sqrt{\frac{n-1}{2n-1}}\ket{n,-1}+\sqrt{\frac{n}{2n-1}}\ket{n-2,1}.
} Can this ever perfectly create the desired pulse?
Projecting onto $\ket{0}$ for the field state, the first requirement for the atoms to be in the correct rotated state is that
\eq{
\psi_0\ket{-1}-\text{i} \psi_1\sin\frac{\Omega_0 t}{\sqrt{2}}\ket{0}+\psi_2\frac{\sqrt{2}}{3}\left(\cos\frac{\Omega_0 t\sqrt{3}}{\sqrt{2}}-1\right)\ket{1},\\
\propto \ket{-1}+\sqrt{2}\ket{0}+\ket{1}.
} This immediately yields two constraints for the three free parameters $\psi_1$, $\psi_2$, and $t$:
\eq{
\psi_0&=\psi_2\frac{\sqrt{2}}{3}\left(\cos\frac{\Omega_0 t \sqrt{3}}{\sqrt{2}}-1\right)\\
\psi_0\sqrt{2}&=-\text{i} \psi_1\sin\frac{\Omega_0 t}{\sqrt{2}}.
} If any of these three coefficients vanishes then they all must, unless the timing spares $\psi_1$ or $\psi_2$ from needing to vanish. The next requirement found by projecting the field onto state $\ket{1}$ is that
\eq{
&\psi_1\cos\frac{\Omega_0 t}{\sqrt{2}}\ket{-1}-\text{i} \psi_2\sqrt{\frac{2}{3}}\sin\frac{\Omega_0 t\sqrt{3}}{\sqrt{2}}\ket{0}
\\
&+\psi_3\frac{\sqrt{6}}{5}\left(\cos\frac{\Omega_0 t\sqrt{5}}{\sqrt{2}}-1\right)\ket{1}
\propto \ket{-1}+\sqrt{2}\ket{0}+\ket{1}.
} Again, if any of the coefficients vanishes then they all must, unless rescued by an exact timing prescription. This introduces another required relationship between $\psi_1$ and $\psi_2$,
\eq{
\psi_1\sqrt{2}\cos\frac{\Omega_0 t}{\sqrt{2}}=-\text{i} \psi_2\sqrt{\frac{2}{3}}\sin\frac{\Omega_0 t\sqrt{3}}{\sqrt{2}},
}
which together impose the timing requirement:
\eq{
\frac{\frac{2}{3}\left(\cos\frac{\Omega_0 t \sqrt{3}}{\sqrt{2}}-1\right)}{-\text{i}\sin\frac{\Omega_0 t}{\sqrt{2}}}=
-\text{i} \frac{\frac{1}{\sqrt{3}}\sin\frac{\Omega_0 t \sqrt{3}}{\sqrt{2}}}{\cos\frac{\Omega_0 t}{\sqrt{2}}}\\
\frac{2}{\sqrt{3}}\left(\cos\frac{\Omega_0 t \sqrt{3}}{\sqrt{2}}-1\right)=-\sin\frac{\Omega_0 t \sqrt{3}}{\sqrt{2}}
\tan\frac{\Omega_0 t}{\sqrt{2}}.
} This only holds when $\Omega_0 t \sqrt{3/2}=2k\pi,\,k\in\mathds{N}$. But then we find that $\psi_0=0$, $\psi_1=0$, and so on, and we have no solution. So it is impossible to perfectly solve this problem for two atoms.
Even considering the general case where the transformed state approximately follows Eq. \eqref{eq:approx psi(t) TCM}, the parameters cannot be chosen precisely enough such that no excitation escapes. Consider that there will always be some probability of finding the atoms in any state $\ket{J,m}$ for rotations with $0<\Theta<\pi$. Every field state $\ket{n}$ must be in a tensor product with the same superposition of atomic states, so, whenever any coefficient $\psi_{n+J+m}$ is nonzero, every other coefficient $\psi_{n+J+m^\prime}$ must also be nonzero for all $-J\leq m^\prime\leq J$. Since $n$ can vary by $1$ and the range of values of $m$ must vary by at least $2$ for any $J>1/2$ (i.e., for anything but the JCM case of a single atom), there can be no maximal coefficient beyond which all of the coefficients vanish. Some excitation will always leak out of the subspaces in which the atoms are in the proper rotated state. The sole alternative to setting coefficients to zero is that one of the sinusoidal terms vanishes for a particular $n$ and $m$ and $t$. This suffices in the case of the JCM because there is only a single maximal coefficient that must be constrained, but is insufficient in the $J>1/2$ case where entire ranges of coefficients must vanish at the field state's maximal photon number. This is why, although squeezing is beneficial regardless of $J$, the true transcoherent states that perfectly enact $\Theta$ rotations only exist for the case of $J=1/2$. We note incidentally that a true transcoherent state for arbitrary $J$ can be achieved in the trivial case of a rotation by $0$ (i.e., the field should be in the vacuum state) and for $\Theta=\pi$ with a Fock state. This is exact for the JCM and holds precisely within the approximations of the TCM that lead to Eq. \eqref{eq:approx psi(t) TCM}, because then the state in Eq. \eqref{eq:midway approximate psi TCM} is separable and remains separable following the application of the almost-unitary operator $Q^\dagger$ on the state.
\subsection{Discussion}~
Photon-number squeezing increases the probability of successfully imparting a $\tfrac{\pi}{2}$ pulse on a collection of ground-state atoms. Given the ubiquity of this result for other initial atomic states in the JCM, we expect photon-number squeezing to similarly enhance arbitrary pulse areas in the TCM for unknown initial atomic states. This problem is slightly more numerically cumbersome due to the lack of a closed-form expression for the fidelities averaged over all initial atomic states, so we leave this expectation as a conjecture that could be evidenced by numerical investigations of Eq. \eqref{eq:approx fidelity TCM} with various $\Theta$.
Earlier studies showed that a collection of partially excited atoms in the final state of Eq. \eqref{eq:pi/2 pulse on 2J atoms} will interact with a coherent field state to number squeeze the latter \cite{Retamaletal1997}. It thus comes as no surprise that photon-number-squeezed states are useful for enacting, in some sense, the reverse of this process. This idea of matching the squeezing to the interplay between different Rabi frequencies should be useful for a variety of light-matter-interaction protocols.
\section{Arbitrary coherence cannot be generated in the presence of nonzero detuning}~
All of the models thus far have dealt with resonant interactions between the field mode and the atoms. We show here that the same perfect transfer of coherence can \textit{never} be achieved \textit{exactly} for nonzero values of detuning.
One reason to consider nonzero detuning is to establish transformations like the Hadamard transform. This sends an atom in its ground state to an even superposition of its ground and excited state, but does so by a $\pi$ rotation of the spin vector about the $\tfrac{x+z}{\sqrt{2}}$-axis of the Bloch sphere, instead of the $\tfrac{\pi}{2}$ pulses we have been discussing here. For large field states, the interaction parts of the JCM and TCM Hamiltonian effectively rotate the average spin vector about some axis in the $xy$-plane, so the only method for truly imparting a Hadamard gate is by the introduction of a nonzero detuning that allows for effective rotations about other spin axes.
The Jaynes-Cummings interaction Hamiltonian at nonzero detuning $\delta$ takes the form
\eq{
H_{\mathrm{II}}=\frac{\delta}{2}\sigma_z+\frac{\Omega_0}{2}\left({a}\sigma_++{a}^\dagger\vphantom{a}\sigma_-\right).
} The new eigenstates are
\eq{
\ket{+,n}_\delta&=\cos\frac{\alpha_n}{2}\ket{n}\otimes\ket{\mathrm{e}}+\sin\frac{\alpha_n}{2}\ket{n+1}\otimes\ket{\mathrm{g}},\\
\ket{-,n}_\delta&=\sin\frac{\alpha_n}{2}\ket{n}\otimes\ket{\mathrm{e}}-\cos\frac{\alpha_n}{2}\ket{n+1}\otimes\ket{\mathrm{g}}
} and have interaction-picture energies
\eq{
E_\pm(n)=\pm\frac{\Omega(n)}{2},
}
where \eq{
\alpha_n=\tan^{-1}\frac{\Omega_0\sqrt{n+1}}{\delta}
} and we have defined the detuned Rabi frequencies
\eq{
\Omega(n)=\sqrt{\Omega_n^2+\delta^2}=\sqrt{\Omega_0^2(n+1)+\delta^2}.
} An atom initially in its ground state and the field initially in the general state $\sum_n \psi_n\ket{n}$ together evolve to
\eq{
\ket{\Psi(t)}=&\sum_{n=-1}^\infty \psi_{n+1}\left[
-2\text{i} \sin\frac{\Omega(n)t}{2}\sin\frac{\alpha_n}{2}\cos\frac{\alpha_n}{2}\ket{n}\otimes\ket{\mathrm{e}}\right.\\
&+\left.
\left(\text{e}^{-\text{i}\Omega(n)t/2}\sin^2\frac{\alpha_n}{2}+\text{e}^{\text{i}\Omega(n)t/2}\cos^2\frac{\alpha_n}{2}\right)\ket{n+1}\otimes\ket{\mathrm{g}}\right].
}
We can proceed as usual to find initial field states and interaction times that create separable states with the atom completely coherent. Here, there are more complicated conditions that need to be solved, but there is the extra degree of freedom in the detuning that could account for them.
Projecting the evolved state onto states with definite photon number and requiring the result to be proportional to $\cos\frac{\theta}{2}\ket{\mathrm{g}}+\sin\frac{\theta}{2}\ket{\mathrm{e}}$ leads to the recurrence relation:
\eq{
\frac{\psi_{n+1}}{\psi_n}=-\text{i}\tan\frac{\theta}{2}
\frac{\text{e}^{-\text{i}\Omega(n-1)t/2}\sin^2\frac{\alpha_{n-1}}{2}+\text{e}^{\text{i}\Omega(n-1)t/2}\cos^2\frac{\alpha_{n-1}}{2}}{2\sin\frac{\Omega(n) t}{2}\sin\frac{\alpha_n}{2}\cos\frac{\alpha_n}{2}}.
} For a given $t$, $\Omega_0$, and $\delta$, this series is uniquely determined. Can we force this series to truncate on both sides?
To not have any probability leak out of the lowest-excitation subspace with the smallest nonzero coefficient $\psi_{n_{\mathrm{min}}}$, we require
\eq{
\sin\frac{\Omega({n_{\mathrm{min}}-1}) t}{2}\sin\frac{\alpha_{n_{\mathrm{min}}-1}}{2}\cos\frac{\alpha_{n_{\mathrm{min}}-1}}{2}=0.
}
This constraint is readily satisfied when $n_{\mathrm{min}}=0$, because $\alpha_{-1}=0$.
For larger $n_{\mathrm{min}}$, this amounts to the requirement
\eq{
\Omega({n_{\mathrm{min}}-1}) t=\Omega_0 t\sqrt{ n_{\mathrm{min}}+\left(\frac{\delta}{\Omega_0}\right)^2}=2m\pi,\quad m\in\mathds{N},
}
which can readily be satisfied by an appropriate interaction time $\Omega_0 t$ and detuning $\delta$.
However, it is impossible to not have any probability leak out of the highest-excitation subspace with the largest nonzero coefficient $\psi_{n_{\mathrm{max}}}$. To do so, we would require both
\eq{
\cos\frac{\Omega({n_{\mathrm{max}}}-1)t}{2}\left(\sin^2\frac{\alpha_{{n_{\mathrm{max}}}-1}}{2}+\cos^2\frac{\alpha_{{n_{\mathrm{max}}}-1}}{2}\right)=0
} and
\eq{
\sin\frac{\Omega({n_{\mathrm{max}}}-1)t}{2}\left(\sin^2\frac{\alpha_{{n_{\mathrm{max}}}-1}}{2}-\cos^2\frac{\alpha_{{n_{\mathrm{max}}}-1}}{2}\right)=0.
} While the former is readily converted into the satisfiable condition
\eq{
\sqrt{\Omega_0^2 n_{\mathrm{max}}+\delta^2}t&=(2l+1)\pi,\quad l\in\mathds{N}
,
}
the latter can \textit{only} be satisfied by \eq{
\alpha_{n_{\mathrm{max}}-1}=\frac{\pi}{2} \quad\Rightarrow\quad \delta=0.
} Any presence of nonzero detuning allows excitations to leak beyond the highest-excitation subspace, so there can never be perfect transfer of coherence from a field state to an atom that leaves no residual entanglement when the interaction is nonresonant. This accords with complete transfer of probability between $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ being impossible with nonzero detuning in the Rabi model with a classical field.
\section{$m$-photon processes}~
\subsection{Beyond the Jaynes-Cummings model}~
What happens when it takes more than one photon to excite an atom? We can consider the nonlinear interaction that requires $m$-photon absorption to transform $\ket{\mathrm{g}}$ to $\ket{\mathrm{e}}$:
\eq{
H=\omega\left(\frac{1}{m}{a}^\dagger\vphantom{a}{a}+\ket{\mathrm{e}}\bra{\mathrm{e}}\right)+\frac{\Omega_0^{(m)}}{2}\left({a}^m\sigma_+ + {a}^\dagger\vphantom{a}^m\sigma_-\right),
} where $\omega$ is the resonance frequency but now each individual photon provides energy $\tfrac{\omega}{m}$ and $\Omega_0^{(m)}$ is the coupling strength that depends on the $m$th-order nonlinearity. This interaction conserves total energy and a form of the total excitation number, as can be seen from its eigenstates
\eq{
\ket{\pm,n}=\frac{\ket{n}\otimes\ket{\mathrm{e}}\pm\ket{n+m}\otimes\ket{\mathrm{g}}}{\sqrt{2}},
\label{eq:JCM eigenstates}
} which now have the quantized-Rabi-like frequencies
\eq{
\Omega_n^{(m)}=\Omega_0^{(m)}\sqrt{(n+m)(n+m-1)\cdots(n+1)}.
} We will work in the interaction picture with Hamiltonian
\eq{
H_{\mathrm{I}}=
\frac{\Omega_0^{(m)}}{2}\left({a}^m\sigma_+ + {a}^\dagger\vphantom{a}^m\sigma_-\right);
} the Schr\"odinger-picture results can thence be obtained with the substitutions $\ket{n}\to\text{e}^{-\text{i}\omega nt/m}\ket{n}$ and $\ket{\mathrm{e}}\to\text{e}^{-\text{i} \omega t}\ket{\mathrm{e}}$.
When the atom is initially in its ground state and the field in state $\sum_n \psi_n\ket{n}$, the evolved state takes the form [c.f. Eq. \eqref{eq:JCM from ground}]
\eq{
\ket{\Psi(t)}
=&\sum_{n=0}^\infty \ket{n}\otimes\left(\psi_n\cos\frac{\Omega_{n-m}^{(m)}t}{2}\ket{\mathrm{g}}-\text{i} \psi_{n+m}\sin\frac{\Omega_n^{(m)} t}{2}\ket{\mathrm{e}}\right).
} Similarly,
when the atom is initially in its excited state and the field in state $\sum_n \psi_n\ket{n}$, the evolved state takes the form [c.f. Eq. \eqref{eq:JCM from excited}]
\eq{
\ket{\Psi(t)}=&\sum_{n=0}^\infty \ket{n}\otimes \left(\psi_n\cos\frac{\Omega_n^{(m)} t}{2}\ket{\mathrm{e}}
-\text{i} \psi_{n-m}\sin\frac{\Omega_{n-m}^{(m)} t}{2}\ket{\mathrm{g}}\right).
}
\subsection{Transcoherent states and beyond}~
What are the optimal field states that can generate arbitrary pulse areas for this nonlinear interaction?
We again seek transformations for which the final state has zero residual entanglement between the atom and the light, such that the atomic state can be used in arbitrary quantum information protocols without degradation.
From the ground state, arbitrary transformations can be performed by field states whose photon-number coefficients satisfy
\eq{
\psi_{n+m}=\text{i}\tan\frac{\theta}{2}\frac{\cos\frac{\Omega_{n-m}^{(m)} t}{2}}{\sin\frac{\Omega_n^{(m)} t}{2}}\psi_n
} to ensure that the relative amplitudes of $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ in the evolved state in Eq. \eqref{eq:JCM from ground} are the same for every photon number. This can be satisfied by field states with $\psi_n=0$ for $n> n_{\mathrm{max}}$ for some chosen $n_{\mathrm{max}}\geq 1$, so long as the total interaction time satisfies
\eq{
\Omega_{n_{\mathrm{max}}-m}^{(m)}t=\pi,
} which ensures that the highest-excitation subspace spanned by $\ket{\pm,n_{\mathrm{max}}-m}$ undergoes a $\pi$ pulse. Now, in contrast to the $m=1$ scenario, there are $m$ independent recursion relations that must all truncate at the same time $t$, which cannot occur because the oscillation frequencies cannot have an integer ratio $\Omega_{n_k}^{(m)}/\Omega_{n}^{(m)}$ for any integer $k$. Therefore, in order to exactly produce the desired atomic state, one must use a state of light that sets to zero $m-1$ of the coefficients from $\psi_0$ to $\psi_{m-1}$ and thus that only has population in photon numbers spaced $m$ apart. The alternative, which can still outperform coherent states, is to use a squeezed state with a large average number of photons that approximately satisfies the recursion relation.
The same can be said for the atom initially in its excited state, where now the optimal recursion relation takes the form
\eq{
\psi_{n+m}=-\text{i}\tan\frac{\theta}{2}\frac{\sin\frac{\Omega_{n}^{(m)} t}{2}}{\cos\frac{\Omega_{n+m}^{(m)} t}{2}}\psi_{n}.
}
What are the properties of the field states that approximate these recursion relations in the limit of large numbers of photons? As usual, the optimal interaction time matches the semiclassical one:
\eq{
\Omega_0^{(m)} t&=\frac{\theta}{\sqrt{(\bar{n}+m)(\bar{n}+m-1)\cdots(\bar{n}+1)}}\\
&\approx \frac{\theta}{\sqrt{\bar{n}^m+\frac{m(m+1)}{2}\bar{n}^{m-1}}}
} from the ground state and
\eq{
\Omega_0^{(m)} t\approx\frac{\theta+\pi}{\sqrt{\bar{n}^m+\frac{m(m+1)}{2}\bar{n}^{m-1}}}
} from the excited state. The ratios in the recursion relations obey
\eq{
\tan\frac{\theta}{2}\frac{\cos\frac{\Omega_{\bar{n}+\delta-m}^{(m)} t}{2}}{\sin\frac{\Omega_{\bar{n}+\delta}^{(m)} t}{2}}\approx 1-\frac{m }{2 \bar{n}\sinc\theta}\left(\delta-m\sin^2\frac{\theta}{2}\right)
} and (recall that starting in $\ket{\mathrm{e}}$ requires $\sinc(\theta+\pi)$ to be negative in order to rotate by $\theta+\pi$ to $\ket{\theta}$)
\eq{
-\tan\frac{\theta}{2}\frac{\sin\frac{\Omega_{\bar{n}+\delta}^{(m)} t}{2}}{\cos\frac{\Omega_{\bar{n}+\delta+m}^{(m)} t}{2}}\approx 1+\frac{m}{2\bar{n}\sinc(\theta+\pi)}\left(\delta+m\sin^2\frac{\theta+\pi}{2}\right).
} For comparison, a Gaussian distribution for coefficients separated by $m$ obeys
\eq{
\left|\frac{\psi_{\bar{n}+\delta+m}}{\psi_{\bar{n}+\delta}}\right|=\exp\left[-\frac{(\delta+m)^2-\delta^2}{4\sigma^2}\right]\approx1-\frac{m}{2\sigma^2}\left(\delta+\frac{m}{2}\right).
} We see that \textit{no different number squeezing is needed} for $m$-photon processes relative to the JCM, with the mean photon numbers being shifted from the ideal classical ones by $\pm m\sin^2\frac{\theta}{2}$.
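For instance, a two-photon ($m=2$) $\tfrac{\pi}{2}$ pulse from the ground state at $\bar{n}=100$ requires $\Omega_0^{(2)}t\approx(\pi/2)/\sqrt{101\times 102}\approx 0.015$, while the optimal photon-number variance remains $\sigma^2\approx\bar{n}\sinc\tfrac{\pi}{2}=2\bar{n}/\pi\approx 64$, just as in the JCM.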
\color{black}
\section{Conclusions}~
We have performed a detailed investigation of the optimal field states for transferring arbitrary amounts of coherence to individual and collections of atoms. When a single atom is initially in its ground or excited state, there exists a field state to rotate it by an arbitrary amount in an arbitrarily short amount of time that generates no residual entanglement with the field. Since these unitary operations can be reversed and, therefore, composed, we have thus found field states for perfectly performing arbitrary rotations on arbitrary atomic states without resorting to any semiclassical approximations.
The perfect field states and interaction times depend on knowing the initial state of the atom. When this initial state is unknown, field states with their photon-number distributions squeezed relative to coherent states retain an advantage in their ability to perform arbitrary operations on some average atomic state. These squeezed field states can then be useful for tasks like creating logic gates for quantum computers.
We showed that squeezed light is also useful for transferring coherence to a collection of atoms or to any spin system. This cements squeezed light as a resource beyond traditional realms such as metrology \cite{LIGO2011} and computation \cite{Madsenetal2022}. More squeezing is required to perform larger rotations on atomic states, so the continuing improvements in squeezing capabilities motivated by said traditional applications provide increasing benefit to our light-matter-interaction scenarios.
Finally, we found that generalizations of the JCM to nonlinear processes responsible for high-harmonic generation and $m$-photon absorption can also have transcoherent states and beyond. The optimal field states for rotating atoms by $\theta$ through nonlinear interactions are \textit{also} squeezed in their photon-number variances by a factor of $\sinc\theta$. All of the results from linear interactions thus extend \textit{mutatis mutandis} to nonlinear ones.
Transcoherent states and beyond stimulate many questions that may be explored in future work. Are these states easiest to generate in a cavity; if so, does the number of atoms in the cavity affect how the field states may enter the cavity? Can they be generated in an optomechanical system using phonons instead of photons as the bosonic mode? If one instead uses a beam of light travelling through free space, for which the JCM is no longer the exact model \cite{KiilerichMolmer2019,KiilerichMolmer2020}, how does squeezing affect coherence transfer to atoms? Does transcoherence lead to better design of field states in the presence of nonzero detuning between the atomic energy gap and the field frequency; can there still be an advantage in coherence transfer due to squeezing? Are there other interactions beyond the JCM and the generalizations considered here for which squeezing can confer additional advantages relative to coherent light? These exciting questions are but a fraction of what can now be studied.
Our previous work explored quantum catalysis as a particularly useful application of transcoherent states that generate perfect $\tfrac{\pi}{2}$ pulses to individual atoms. Now that the toolbox has been expanded to arbitrary rotations and arbitrary numbers of atoms, we strongly believe that our transcoherent states and beyond will be important to applications in which light is used to precisely control atoms in any desired fashion.
\begin{acknowledgments}
AZG and KH acknowledge that the NRC headquarters is located on the traditional unceded territory of the Algonquin Anishinaabe and Mohawk people. This work was supported by NSERC. AMS acknowledges support as a CIFAR Fellow. AZG thanks Andrei Klimov for useful discussions.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Hyperspectral image (HSI) collects hundreds of contiguous spectral representations of objects, which demonstrates advantages over conventional multispectral image (MSI) or RGB image with much less spectral information ~\cite{chakrabarti2011statistics,bioucas2012hyperspectral}. Compared to conventional images, the rich spectral information of HSI can effectively distinguish visually similar objects that actually consist of different materials. Thus, HSI has been shown to enhance the performance of a wide range of computer vision tasks, such as, object recognition and classification~\cite{kwan2006novel, fauvel2013advances,zhang2016scene, maggiori2017recurrent,paoletti2018new,yokoya2017hyperspectral, haut2018active}, segmentation~\cite{aydav2018classification}, tracking~\cite{van2010tracking,Fu_2016_CVPR, Uzkent_2016_CVPR_Workshops,Uzkent_2017_CVPR_Workshops}, environmental monitoring~\cite{spangler2010shallow,plaza2011foreword}, and change detection~\cite{kwon2005kernel,borengasser2007hyperspectral,asaari2018close}.
\begin{figure}[t]
\begin{center}
\subfloat[]{\includegraphics[width=0.4\linewidth]{fig/img1_rotate/img1_lr.png}}
\subfloat[]{\includegraphics[width=0.4\linewidth]{fig/img1_rotate/img1_msi.jpg}}\\
\subfloat[]{\includegraphics[width=0.4\linewidth]{fig/img1_rotate/img1_hr.jpg}}
\subfloat[]{\includegraphics[width=0.4\linewidth]{fig/img1_rotate/img1_org.jpg}}\hfill
\end{center}
\caption{Unregistered hyperspectral image super-resolution. (a) First band of the 20-degree rotated and cropped LR HSI with 38\% information missing. (b) First band of the HR MSI. (c) First band of the reconstructed HR HSI by the proposed method. (d) First band of the reference HR HSI.}
\label{fig:sample}
\end{figure}
During the HSI acquisition process, the finer the spectral resolution, the smaller the radiation energy that can reach the sensor for a particular spectral band within a narrow wavelength range. Thus, the high spectral resolution of HSI can only be achieved at the cost of its spatial resolution due to hardware limitations~\cite{kawakami2011high,akhtar2015bayesian}. In contrast, we can obtain a conventional MSI or RGB image with much higher spatial resolution by integrating the radiation energy over broad spectral bands, which inevitably reduces their spectral resolution significantly~\cite{lanaras2015hyperspectral}. To improve the spatial resolution of HSI for better application performance, a natural way is to fuse the high spectral information extracted from HSI with the high-resolution spatial information extracted from conventional images to yield high-resolution images in both spatial and spectral domains~\cite{vivone2015critical,yokoya2017hyperspectral,qu2018unsupervised}. This procedure is referred to as \textit{hyperspectral image super-resolution (HSI-SR)}~\cite{akhtar2015bayesian,lanaras2015hyperspectral,dian2017hyperspectral}.
The problem of HSI-SR originates from multispectral image super-resolution (MSI-SR) in the remote sensing field, which has been intensively studied over the past several decades~\cite{thomas2008synthesis,chavez1991comparison,aiazzi2006mtf,aiazzi2007improving}. Traditional HSI-SR methods extended from MSI-SR mainly include component substitution (CS)~\cite{thomas2008synthesis,chavez1991comparison,aiazzi2007improving} and multi-resolution analysis (MRA) based methods~\cite{aiazzi2006mtf}. However, they suffer from spectral distortion. Existing approaches specifically designed for HSI-SR can be broadly divided into two categories, \emph{i.e.}, matrix factorization based and Bayesian based approaches~\cite{loncan2015hyperspectral,yokoya2017hyperspectral,dong2016hyperspectral}. Matrix factorization based methods~\cite{kawakami2011high,yokoya2012coupled,dong2016hyperspectral,lanaras2015hyperspectral,veganzones2016hyperspectral} generally assume that the LR HSI is down-sampled from the HR HSI, and such a down-sampling function is adopted as a prior in the fusion procedure. In practice, the down-sampling function may not exist due to complex environmental conditions. Bayesian based approaches reconstruct the HR HSI by extracting spectral information from LR HSI and spatial information from HR MSI separately~\cite{wei2015hyperspectral,akhtar2015bayesian,simoes2015convex}. However, the spectral information extracted from LR HSI may not be the optimal spectral bases for MSI, since MSI is not utilized at all during the optimization procedure. In summary, spectral distortion can be easily introduced during the optimization procedure of methods from both categories.
HSI-SR is also closely related to natural image super-resolution, which usually trains a mapping function between synthetic LR images and their corresponding HR images~\cite{dong2016image, Lu_2015_CVPR, Shi_2016_CVPR, Kim_2016_CVPR1,Kim_2016_CVPR2, ledig2016photo, lai2017deep, he2016deep,Bulat_2018_CVPR,Haefner_2018_CVPR,Wang_2018_CVPR,Hui_2018_CVPR,Han_2018_CVPR,Haris_2018_CVPR,Zhang_2018_CVPR,Chen_2018_CVPR} on large high-resolution image datasets in a supervised fashion. There have been several attempts to address the MSI-SR or HSI-SR problem with supervised deep learning where the mapping function is learned using different frameworks \cite{huang2015new,masi2016pansharpening,he2016deep,wei2017boosting,haut2018new,chang2018hsi}. However, these deep learning based methods are all supervised, making their adoption on HSI-SR a challenge for three reasons. First, the scale differences between LR HSI and HR MSI can reach as large as 10, \emph{i.e.}, one pixel in HSI covers 100 pixels in MSI. In some applications, the scale difference can even be 25~\cite{kwan2017blind} and 30~\cite{kwan2018mars}. But most existing super-resolution methods only work on up to $8$ times upscaling. Second, they are designed to find an end-to-end mapping function between the LR images and HR images under the assumption that the mapping function is the same for different images. However, the mapping function may not be the same for images acquired with different sensors. Even for the data collected from the same sensor, the mapping function for different spectral bands may not be the same. Thus the assumption may cause severe spectral distortion. Third, training a mapping function is a supervised problem which requires a large dataset, the down-sampling function, and the availability of the HR HSI, making supervised learning unrealistic for HSI.
Despite a plethora of works on HSI-SR, all current approaches require at least one prerequisite to solve the problem of HSI-SR, i.e., the two input modalities (HSI and MSI) must be well registered, and the quality of the reconstructed HR HSI relies heavily on the registration accuracy~\cite{yokoya2017hyperspectral,bioucas2012hyperspectral,asaari2018close,zhou2017nonrigid,qu2018unsupervised}. According to previous works, there are a few methods that introduce registration as a pre-step before data fusion~\cite{van2012multi,loncan2015hyperspectral,wei2015fast}. However, these pre-steps can only handle small scale differences, \emph{e.g.}, a two-pixel/eight-pixel offset in the LR HSI/HR MSI \cite{zhou2017nonrigid}. Moreover, even in the registration community, HSI and MSI registration is a challenging problem itself as one pixel in LR HSI may cover hundreds of pixels in the corresponding HR MSI. The spectral difference is also so large that both the spectral response function (SRF) and multi-band images have to be taken into consideration during registration~\cite{chui2003new,fan2010spatial,myronenko2010point,ma2015robust,zhou2017nonrigid}. Thus, most registration approaches can only handle small scale differences.
In this paper, an unsupervised network structure is proposed, aiming to solve the HSI-SR problem directly without multi-modality registration. An example is shown in Fig.~\ref{fig:sample}. We address the problem based on the assumption that the pixels in the overlapped region of the HR HSI and HR MSI can be approximated by a linear combination of the same spectral information (spectral bases) with the same corresponding spatial information (representations), which indicates how the spectral bases are combined for each pixel. Since the LR HSI is a down-sampled version of the HR HSI, ideally its representations should be correlated with those of the HR MSI and HR HSI, \emph{i.e.}, they should follow similar patterns and distributions although possessing different resolutions, as shown in Fig.~\ref{fig:repres}. Therefore, to reconstruct HR HSI with minimum spectral distortion, the network is designed to decouple both the LR HSI and HR MSI into spectral bases and representations, such that their spectral bases are shared and their representations are correlated with each other.
\begin{figure}[htb]
\begin{center}
\subfloat[]{\includegraphics[width=0.38\linewidth]{fig/img1_rotate/img1_h_lr.png}\label{fig:rgbresult:a}}\hspace{10mm}
\subfloat[]{\includegraphics[width=0.39\linewidth]{fig/img1_rotate/img1_h_hr.jpg}}\hfill
\end{center}
\caption{Learned hidden representations from unregistered (a) low resolution HSI and (b) high resolution MSI, respectively, as shown in Fig.~\ref{fig:sample}.}
\label{fig:repres}
\end{figure}
The novelty of this work is three-fold. First, the network extracts both the spectral and spatial information of two unregistered modalities through the same encoder-decoder structure, by projecting the LR HSI onto the same statistical space as HR MSI, as illustrated in Fig.~\ref{fig:flow}. The representations of the network are encouraged to follow a Dirichlet distribution to naturally meet the non-negative and sum-to-one physical constraints. Second, to prevent spectral distortion, we further adopt mutual information (MI) to extract optimal and correlated representations from multi-modalities. Since the two modalities are unregistered, the correlated representations are learned by maximizing the MI between the representations and their own inputs during the network optimization. Third, a collaborative $l_{21}$ norm is employed as the reconstruction error instead of the commonly used $l_2$ loss, so that the network is able to reconstruct individual pixels as accurately as possible. In this way, the network preserves the spectral information better. With the above design, the proposed network is able to work directly on unregistered images and the spectral distortion of the reconstructed HR HSI can be largely reduced. The proposed method is referred to as the unregistered and unsupervised mutual Dirichlet Net, or $u^2$-MDN for short.
This work is an extension of our previous work uSDN~\cite{qu2018unsupervised}. However, uSDN only works on the general hyperspectral image super-resolution (HSI-SR) problem with well-registered LR HSI and HR MSI. Here, we have made substantial extensions to address the challenges of HSI-SR with unregistered multi-modalities. The major improvements are three-fold. First, instead of adopting two deep learning networks as in uSDN, the proposed $u^2$-MDN is specifically designed to extract the representations of multi-modalities with one encoder-decoder structure, which largely stabilizes the network given unregistered multi-modalities. Second, uSDN minimizes spectral distortion of the reconstructed HR HSI by reducing the angular difference of the multi-modalities' representations, which fails to handle unregistered cases, while the proposed $u^2$-MDN is able to handle both well-registered and unregistered cases by extracting correlated representations with mutual information (MI). Third, instead of the commonly used $l_2$ loss adopted by uSDN, the collaborative $l_{21}$ norm is introduced by the proposed $u^2$-MDN to preserve the spectral information better.
\section{Related Work}
\subsection{Hyperspectral Image Super-Resolution}
The problem of HSI-SR originates from multispectral image super-resolution (MSI-SR) in the remote sensing field, where the spatial resolution of MSI is further improved by a high-resolution panchromatic image (PAN). Traditional widely utilized MSI-SR methods can be roughly categorized into two groups: the component substitution (CS) and the multi-resolution analysis (MRA) based approaches. Generally, CS--based approaches~\cite{thomas2008synthesis} project the given data onto a predefined space where the spectral information and spatial information are separated. Subsequently, the spatial component is substituted with the one extracted from PAN~\cite{chavez1991comparison,aiazzi2007improving}. MRA based approaches achieve the spatial details by first applying a spatial filter to the HR images. Then the spatial details are injected into the LR HSI~\cite{mallat1989theory,shensa1992discrete,liu2000smoothing, burt1983laplacian,aiazzi2006mtf,loncan2015hyperspectral}. Although these traditional pan-sharpening approaches can be extended to solve the HSI-SR problem, they usually suffer from severe spectral distortions ~\cite{loncan2015hyperspectral,akhtar2015bayesian,dian2017hyperspectral} as discussed in Sec.~\ref{sec:intro}.
Recent approaches consist of Bayesian based and matrix factorization based methods~\cite{loncan2015hyperspectral,yokoya2017hyperspectral}. Bayesian approaches estimate the posterior distribution of the HR HSI given LR HSI and HR MSI. The unique framework of Bayesian offers a convenient way to regularize the solution space of HR HSI by employing a proper prior distribution such as Gaussian. Different methods vary according to the different prior distributions adopted. Wei \emph{et al}\onedot proposed a Bayesian Naive method~\cite{wei2014bayesian} based on the assumption that the representation coefficients of HR HSI follow a Gaussian distribution. However, this assumption does not always hold especially when the ground truth HR HSI contains complex textures. Instead of using Gaussian prior, dictionary based approaches solve the problem under the assumption that HR HSI is a linear combination of properly chosen over-complete dictionary and sparse coefficients~\cite{wei2015hyperspectral}. Simoes \emph{et al}\onedot proposed HySure~\cite{simoes2015convex}, which takes into account both the spatial and spectral characteristics of the given data. This approach solves the problem through vector based total variation regularization. Akhtar \emph{et al}\onedot~\cite{akhtar2015bayesian} introduced a non-parametric Bayesian strategy to solve the HSI-SR problem. The method first learns a spectral dictionary from LR HSI under the Bayesian framework. Then it estimates the spatial coefficients of the HR MSI by Bayesian sparse coding. Eventually, the HR HSI is generated by combining the spatial dictionary with the spatial coefficients.
Matrix factorization based approaches have been actively studied recently~\cite{kawakami2011high,yokoya2012coupled,dong2016hyperspectral,lanaras2015hyperspectral,veganzones2016hyperspectral}, with Kawakami \emph{et al}\onedot~\cite{kawakami2011high} being the first to introduce matrix factorization to solve the HSI-SR problem. The method learns a spectral basis from LR HSI and then uses this basis to extract sparse coefficients from HR MSI with non-negative constraints. Similar to Bayesian based approaches, the HR HSI is generated by linearly combining the estimated bases with the coefficients. Yokoya \emph{et al}\onedot~\cite{yokoya2012coupled} decomposed both the LR HSI and HR MSI alternately to achieve the optimal non-negative bases and coefficients that are used to generate the HR HSI. Wycoff \emph{et al}\onedot~\cite{wycoff2013non} solved the problem with the alternating direction method of multipliers (ADMM). Lanaras \emph{et al}\onedot~\cite{lanaras2015hyperspectral} further improved the fusion results by introducing a sparse constraint. However, most methods~\cite{yokoya2012coupled,wycoff2013non,lanaras2015hyperspectral} are based on the same assumption that the down-sampling function between the spatial coefficients of HR HSI and LR HSI is known beforehand. This assumption is not always true.
\subsection{Deep learning based Super-Resolution}
Deep learning has attracted increasing attention for natural image super-resolution since 2014, when Dong \emph{et al}\onedot first introduced a convolutional neural network (CNN) to solve the problem of natural image super-resolution and demonstrated state-of-the-art restoration quality \cite{dong2014image}. Ledig \emph{et al}\onedot proposed a method based on a generative adversarial network and a residual network with skip connections \cite{he2016deep}. The method employed a perceptual loss through a VGG network~\cite{simonyan2014very,johnson2016perceptual}, which is able to recover photo-realistic textures from heavily down-sampled images~\cite{ledig2016photo}. Usually, natural image SR methods only work up to 8 times upscaling. There have been several attempts to address the MSI-SR or HSI-SR with deep learning in a supervised fashion. In 2015, a modified sparse tied-weights denoising autoencoder was proposed by Huang \textit{et al.}~\cite{huang2015new} to enhance the resolution of MSI. The method assumes that the mapping function between LR and HR PAN is the same as the one between LR and HR MSI. Masi \emph{et al}\onedot proposed a supervised three-layer SRCNN~\cite{masi2016pansharpening} to learn the mapping function between LR MSI and HR MSI. Similar to~\cite{masi2016pansharpening}, Wei \emph{et al}\onedot~\cite{wei2017boosting} learned the mapping function with deep residual network~\cite{he2016deep}. Li \emph{et al}\onedot~\cite{li2017hyperspectral} solved the HSI-SR problem by learning a mapping function with a spatial constraint strategy and a convolutional neural network. Dian \emph{et al}\onedot~\cite{dian2018deep} initialized the HR HSI from the fusion framework via the Sylvester equation. Then, the mapping function is trained between the initialized HR-HSI and the reference HR HSI through deep residual learning. However, these deep learning based methods cannot be readily adopted on HSI-SR due to the reasons elaborated in Sec.~\ref{sec:intro}.
Recently, we proposed an unsupervised method, uSDN~\cite{qu2018unsupervised}, which addressed the problem of HSI-SR with deep network models. Specifically, it extracts the spectral and spatial information through two encoder-decoder networks from the two modalities. The angular difference between the LR HSI and HR MSI representations is minimized every ten iterations to reduce the spectral distortion. However, the method only works for well-registered images.
\section{Problem Formulation}
\label{sec:formulate}
Given the LR HSI, $\bar{\mathbf{Y}}_h \in \mathbb{R}^{m\times n\times L}$, where $m$, $n$ and $L$ denote its width, height and number of spectral bands, respectively, and the unregistered HR MSI with overlapped region, $\bar{\mathbf{Y}}_m \in \mathbb{R}^{M \times N \times l}$, where $M$, $N$ and $l$ denote its width, height and number of spectral bands, respectively, the goal is to reconstruct the HR HSI $\bar{\mathbf{X}} \in \mathbb{R}^{M\times N \times L}$ based on the content of HR MSI. In general, MSI has much higher spatial resolution than HSI, and HSI has much higher spectral resolution than MSI, \emph{i.e.}, $M\gg m$, $N \gg n$ and $L \gg l$.
To facilitate the subsequent processing, we unfold the 3D images into 2D matrices, $\mathbf{Y}_h \in \mathbb{R}^{mn\times L}$, $\mathbf{Y}_m \in \mathbb{R}^{MN \times l}$ and $\mathbf{X}\in \mathbb{R}^{MN \times L}$, such that each row represents the spectral reflectance of a single pixel. Since each pixel in both LR HSI and HR MSI can be approximated by a linear combination of $c$ spectral bases $\mathbf{D}$~\cite{lanaras2015hyperspectral,akhtar2015bayesian,qu2018unsupervised}, the matrices can be further decomposed as
\begin{align}
\begin{split}\label{equ:blrhsi}
&\mathbf{Y}_h = \mathbf{S}_h\mathbf{D}_h
\end{split}\\
\begin{split}\label{equ:bmsi}
&\mathbf{Y}_m = \mathbf{S}_m\mathbf{D}_m
\end{split}\\
\begin{split}\label{equ:bhrhsi}
&\mathbf{X} = \mathbf{S}_m\mathbf{D}_h
\end{split}\end{align}
where $\mathbf{D}_h\in\mathbb{R}^{c \times L}$, $\mathbf{D}_m\in\mathbb{R}^{c \times l}$ denote the spectral bases of LR HSI and HR MSI, respectively, and $\mathbf{S}_h\in\mathbb{R}^{mn\times c}$, $\mathbf{S}_m\in\mathbb{R}^{MN\times c}$ denote the coefficients of LR HSI and HR MSI, respectively. Since $\mathbf{S}_h$ or $\mathbf{S}_m$ indicate how the spectral bases are combined for individual pixels at specific locations, they preserve the spatial structure of HSI. Note that the benefit of unfolding the data into 2D matrices is that the extraction procedure can decouple each pixel without changing the relationship between a pixel and its neighboring pixels, so the reconstructed image has fewer artifacts~\cite{lanaras2015hyperspectral,akhtar2015bayesian,qu2018unsupervised}.
In real applications, although the areas captured by LR HSI and HR MSI might not be registered well, they always have overlapping regions, and the LR HSI includes all the spectral bases of the HR MSI, \emph{i.e.}, they share the same types of materials carrying specific spectral signatures. The relationship between LR HSI and HR MSI can be expressed as
\begin{align}
\begin{split}\label{equ:chm}
&\mathcal{C}_h \neq \mathcal{C}_m, \quad \mathcal{C}_h \cap \mathcal{C}_m \neq \emptyset,\quad \mathbf{D}_m =\mathbf{D}_h\mathcal{R},
\end{split}
\end{align}
where $\mathcal{C}_h$ and $\mathcal{C}_m$ denote the contents of LR HSI and HR MSI, respectively. $\mathcal{R}\in\mathbb{R}^{L \times l}$ is the prior transformation matrix of sensor~\cite{kawakami2011high, yokoya2012coupled, wei2015hyperspectral,loncan2015hyperspectral,simoes2015convex,vivone2015critical,lanaras2015hyperspectral,dian2017hyperspectral,qu2018unsupervised}, which describes the relationship between HSI and MSI bases.
With $\mathbf{D}_h\in\mathbb{R}^{c \times L}$ carrying the high-resolution spectral information and $\mathbf{S}_m \in\mathbb{R}^{MN\times c}$ carrying the high-resolution spatial information, the desired HR HSI, $\mathbf{X}$, is generated by Eq. \eqref{equ:bhrhsi}.
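To make the dimensions concrete, the following NumPy sketch instantiates the decomposition of Eqs.~\eqref{equ:blrhsi}--\eqref{equ:bhrhsi} with random data; the sizes, the Dirichlet-sampled coefficients, and the random $\mathcal{R}$ are illustrative assumptions only and do not correspond to any specific sensor.
\begin{verbatim}
import numpy as np

# illustrative sizes: LR HSI (m x n x L), HR MSI (M x N x l), c bases
m, n, L = 8, 8, 31
M, N, l = 64, 64, 3
c = 10

rng = np.random.default_rng(0)
D_h = rng.random((c, L))                  # shared spectral bases
R = rng.random((L, l))                    # spectral response, D_m = D_h R
S_h = rng.dirichlet(np.ones(c), m * n)    # LR HSI coefficients
S_m = rng.dirichlet(np.ones(c), M * N)    # HR MSI coefficients

Y_h = S_h @ D_h                           # unfolded LR HSI, shape (mn, L)
Y_m = S_m @ (D_h @ R)                     # unfolded HR MSI, shape (MN, l)
X = S_m @ D_h                             # desired HR HSI, shape (MN, L)
\end{verbatim}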
The challenges in solving this problem are that 1) the ground truth $\mathbf{X}$ is not available, and 2) the LR HSI and HR MSI do not cover the same region. To solve this unsupervised and unregistered HSI-SR problem, the key is to take advantage of the shared spectral information $\mathbf{D}_h$ among different modalities. In addition, the representations of both modalities specifying the spatial information of the scene should meet the non-negative and sum-to-one physical constraints. Moreover, in the ideal case, for the pixels in the overlapped region between LR HSI and HR MSI, their spatial information should follow similar patterns, because they carry the information of how the reflectance of shared materials (spectral bases) are mixed in each location. Therefore, the network should have the ability to learn correlated spatial and spectral information from unregistered multi-modality images to maximize its ability to prevent spectral distortion.
\section{Proposed Approach}
We propose an unsupervised architecture for unregistered LR HSI and HR MSI as shown in Fig.~\ref{fig:flow}. Here, we highlight the unique structural features of the network. To extract correlated spectral and spatial information of unregistered multi-modalities, the network projects the LR HSI into the same statistical space as HR MSI, so that the two modalities can share the same encoder and decoder. The encoder enforces the representations (carrying spatial information) of both modalities to follow a Dirichlet distribution, to naturally meet the non-negative and sum-to-one physical properties. In order to prevent spectral distortion, mutual information is introduced during optimization to maximize the correlation between the representations of LR HSI and HR MSI. Finally, the collaborative $l_{21}$ loss is adopted to encourage the network to extract accurate spectral and spatial information from both modalities.
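For reference, the collaborative $l_{2,1}$ reconstruction error referred to above can be written as a per-pixel $l_2$ norm followed by a sum over pixels; a minimal PyTorch-style sketch is given below, assuming rows index pixels and columns index spectral bands.
\begin{verbatim}
import torch

def l21_loss(y, y_hat):
    """Collaborative l_{2,1} reconstruction error: an l_2 norm over each
    pixel's spectrum (rows), summed over all pixels."""
    return (y - y_hat).pow(2).sum(dim=1).sqrt().sum()
\end{verbatim}
Summing per-pixel $l_2$ errors, rather than pooling them into a single squared norm, keeps every pixel's spectrum individually accountable, which matches the motivation of reconstructing individual pixels as accurately as possible.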
\begin{figure}
{\includegraphics[width=1\linewidth]{fig/flow.jpg}}
\caption{Simplified architecture of $u^2$-MDN.}
\label{fig:flow}
\end{figure}
\subsection{Network Architecture}
\label{sec:arch}
As shown in Fig.~\ref{fig:flow}, the network reconstructs both the LR HSI $\mathbf{Y}_h$ and HR MSI $\mathbf{Y}_m$ by sharing the same encoder and decoder network structure. Since the number of spectral bands $L$ of the HSI $\mathbf{Y}_h$ is much larger than the number of spectral bands $l$ of the MSI $\mathbf{Y}_m$, we project $\mathbf{Y}_h$ into an $l$-dimensional space by $\tilde{\mathbf{Y}}_h = \mathbf{Y}_h\mathcal{R}$, such that $\tilde{\mathbf{Y}}_h$ represents the LR MSI lying in the same space as HR MSI. In this way, both modalities are linked to share the same encoder structure without additional parameters.
On the other hand, the spectral information $\mathbf{D}_m$ of the MSI is highly compressed compared with that of the HSI, \emph{i.e.}, $\mathbf{D}_m =\mathbf{D}_h\mathcal{R}$. Thus, it is unstable and difficult to directly extract $\mathbf{D}_h$, which carries the high spectral resolution, from the MSI with its low spectral resolution. However, the spectral basis of the HR MSI can be obtained by transforming that of the LR HSI, which possesses richer spectral information, \emph{i.e.}, $\hat{\mathbf{Y}}_m = \mathbf{S}_m \mathbf{D}_m = \mathbf{S}_m \mathbf{D}_h \mathcal{R} = \mathbf{X} \mathcal{R}$. Therefore, in the network design, both modalities share the same decoder structure $\mathbf{D}_h$, and the transformation matrix $\mathcal{R}$ is added as fixed weights to reconstruct the HR MSI $\hat{\mathbf{Y}}_m$. The output of the layer before the fixed weights is then exactly $\mathbf{X}$, according to Eq.~\eqref{equ:bhrhsi}.
Let us define the input domain as $\mathcal{Y} = \{\tilde{\mathbf{Y}}_h, \mathbf{Y}_m\}$, the output domain as $\hat{\mathcal{Y}} = \{\hat{\mathbf{Y}}_h, \mathbf{X}\}$, and the representation domain as $\mathcal{S} = \{\mathbf{S}_h, \mathbf{S}_m\}$. The encoder of the network, $\text{E}_{\phi}:\mathcal{Y}\rightarrow\mathcal{S}$, maps the input data to low-dimensional representations (latent variables on the bottleneck hidden layer), \emph{i.e.}, $p_{\phi}(\mathcal{S}\vert {\mathcal{Y}})$, and the decoder $\text{D}_{\psi}:\mathcal{S}\rightarrow\hat{\mathcal{Y}}$ reconstructs the data from the representations, \emph{i.e.}, $p_{\psi}(\hat{\mathcal{Y}} \vert \mathcal{S})$. Note that the bottleneck hidden layer $\mathcal{S}$ acts as the representation layer reflecting the spatial information, while the weights $\psi$ of the decoder $\text{D}_{\psi}$ serve as $\mathbf{D}_h$ in Eq.~\eqref{equ:blrhsi}. This correspondence is further elaborated below.
Take the procedure of training on the LR HSI as an example. The LR HSI is reconstructed by $\hat{\mathbf{Y}}_h = \mathbf{D}_{\psi}(\mathbf{S}_h)$, where $\mathbf{S}_h = \mathbf{E}_{\phi}(\mathbf{Y}_h)$. Since $\mathbf{Y}_h$ carries the high-resolution spectral information, to better extract the spectral basis, part of the network should simulate the prior relationship described in Eq.~\eqref{equ:blrhsi}. That is, the representation layer $\mathbf{S}_h$ acts as the proportional coefficients, and the weights $\psi$ of the decoder correspond to the spectral basis $\mathbf{D}_h$ in Eq.~\eqref{equ:blrhsi}. Therefore, in the network structure, we define $\psi = \mathbf{W}_1\mathbf{W}_2...\mathbf{W}_k = \mathbf{D}_h$ with identity activation functions and no bias, where $\mathbf{W}_k$ denotes the weights of the $k$th layer. In this way, $\mathbf{D}_h$ preserves the spectral information of the LR HSI, and the latent variables $\mathbf{S}_h$ effectively preserve the spatial information. More implementation details are given in Sec.~\ref{sec:diri}.
Eventually, the desired HR HSI is generated directly by $\mathbf{X} = \mathbf{S}_m\mathbf{D}_h$. Note that the dashed lines in Fig.~\ref{fig:flow} show the path of back-propagation which will be elaborated in Sec.~\ref{sec:opt}.
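A hedged PyTorch sketch of this shared encoder--decoder layout is given below; the layer sizes, the softmax stand-in for the Dirichlet layer of Sec.~\ref{sec:diri}, and all names are our own assumptions rather than the authors' code.
\begin{verbatim}
import torch
import torch.nn as nn

class SharedAutoencoder(nn.Module):
    def __init__(self, l_bands, L_bands, c, R):
        super().__init__()
        # shared encoder acting on l-band inputs (the LR HSI is pre-projected by R)
        self.encoder = nn.Sequential(nn.Linear(l_bands, 32), nn.ReLU(),
                                     nn.Linear(32, c))
        # linear decoder without bias: its weights play the role of D_h
        self.decoder = nn.Linear(c, L_bands, bias=False)
        self.register_buffer("R", R)        # fixed sensor transform, shape (L, l)

    def forward(self, y, is_msi):
        s = torch.softmax(self.encoder(y), dim=-1)  # stand-in for the Dirichlet layer
        x = self.decoder(s)                         # spectra in the L-band space
        out = x @ self.R if is_msi else x           # MSI branch goes through fixed R
        return out, s                               # reconstruction and representation
\end{verbatim}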
\subsection{Mutual Dirichlet Network with Collaborative Constraint}
\label{sec:diri}
To extract better spectral information and naturally incorporate the physical requirements on the spatial information, \emph{i.e.}, non-negativity and sum-to-one, the representations $\mathcal{S}$ are encouraged to follow a Dirichlet distribution. In addition, the network should be able to learn correlated and optimized representations from the encoder $\mathbf{E}_{\phi}$ for both modalities. Thus, in the network design, we maximize the mutual information (MI) between the representations of the LR HSI, $\mathbf{S}_h$, and of the HR MSI, $\mathbf{S}_m$, by maximizing the MI between the input images and their own representations. To further reduce the spectral distortion, a collaborative $l_{2,1}$ loss is incorporated into the network instead of the traditional $l_2$ reconstruction loss. The detailed encoder-decoder structure and the MI structure are shown in Fig.~\ref{fig:flow1} and Fig.~\ref{fig:flow2}, respectively.
\begin{figure}
\begin{center}
{\includegraphics[width=0.8\linewidth]{fig/flow1.jpg}}
\caption{Details of the encoder-decoder structure.}
\label{fig:flow1}
\end{center}
\end{figure}
\subsubsection{Dirichlet Structure}
To generate representations with a Dirichlet distribution, we incorporate the stick-breaking structure between the encoder and the representation layer. The stick-breaking process was first proposed by Sethuraman~\cite{sethuraman1994constructive} in 1994. It can be illustrated as breaking a unit-length stick into $c$ pieces, the lengths of which follow a Dirichlet distribution. Nalisnick and Smyth, and Qu \emph{et al.} successfully coupled the expressiveness of networks with this Bayesian non-parametric model through the stick-breaking process~\cite{nalisnick2016deep,qu2018unsupervised}. Here, we follow \cite{nalisnick2016deep,qu2018unsupervised} and draw the samples of $\mathcal{S}$ from the Kumaraswamy distribution \cite{kumaraswamy1980generalized}.
The stick-breaking process is integrated into the network between the encoder $\mathbf{E}_{\phi}$ and the decoder $\mathbf{D}_{\psi}$, as shown in Fig.~\ref{fig:flow}. Assuming that the generated representation row vector is denoted as $\mathbf{s}_i = \{s_{ij}\}_{1 \leq j \leq c}$, we have $0\leq s_{ij}\leq1$, and $\sum_{j=1}^{c}{s_{ij}}=1$. Each variable $s_{ij}$ can be defined as
\begin{equation}
s_{ij} =\left\{
\begin{array}{ll}
v_{i1} \quad & \text{for} \quad j = 1\\
v_{ij}\prod_{k<j}(1-v_{ik}) \quad &\text{for} \quad j>1 ,
\end{array}\right.
\label{equ:stick}
\end{equation}
where $v_{ik}\sim \text{Beta}(u, \alpha,\beta)$. Since it is difficult to draw samples directly from the Beta distribution, we draw them via the inverse transform of the Kumaraswamy distribution. The benefit of the Kumaraswamy distribution is that it has a closed-form CDF, and it is equivalent to the Beta distribution when $\alpha = 1$ or $\beta = 1$. Letting $\alpha=1$, we have
\begin{equation}
v_{ik}\sim 1-u_{ik}^{\frac{1}{\beta_{i}}}.
\label{equ:draw}
\end{equation}
Both parameters $u_{ik}$ and $\beta_{i}$ are learned through the network for each row vector, as illustrated in Fig.~\ref{fig:flow}. Because $\beta>0$, a softplus is adopted as the activation function \cite{dugas2001incorporating} at the ${\beta}$ layer. Similarly, a sigmoid \cite{han1995influence} is used to map ${u}$ into the $(0,1)$ range at the $\mathbf{u}$ layer. Since the spectral signatures differ from one image pair to another, the network is trained on a single group of data, \emph{i.e.}, one LR HSI $\mathbf{Y}_h$ and one HR MSI $\mathbf{Y}_m$, to reconstruct its own HR HSI $\mathbf{X}$. Therefore, to increase the representation power of the network, the encoder is densely connected, \emph{i.e.}, each layer is fully connected to all its subsequent layers \cite{huang2016densely}.
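The stick-breaking construction of Eqs.~\eqref{equ:stick}--\eqref{equ:draw} can be sketched in a few lines of NumPy; the batch size, the value of $\beta$ and the function name below are illustrative choices of ours, not the authors' implementation.
\begin{verbatim}
import numpy as np

def stick_break(u, beta):
    """u: (batch, c) uniforms in (0,1); beta: (batch, 1) > 0."""
    v = 1.0 - u ** (1.0 / beta)              # alpha = 1 Kumaraswamy sample, Eq. (draw)
    s = v.copy()
    s[:, 1:] *= np.cumprod(1.0 - v, axis=1)[:, :-1]   # v_j * prod_{k<j} (1 - v_k)
    return s

s = stick_break(np.random.rand(4, 15), np.full((4, 1), 2.0))
print(np.all(s >= 0), s.sum(axis=1))   # non-negative; row sums approach 1 as c grows
\end{verbatim}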
\subsubsection{Mutual Dirichlet Network}
\begin{figure}
\begin{center}
{\includegraphics[width=0.6\linewidth]{fig/flow2.jpg}}
\caption{Details of the MI structure.}
\label{fig:flow2}
\end{center}
\end{figure}
Before further describing the details of the network, we first explain the reasoning that motivates this design. Consider the unregistered multi-modalities, the LR HSI $\mathbf{Y}_h$ and the HR MSI $\mathbf{Y}_m$, together with the desired HR HSI $\mathbf{X}$; each pixel of these images indicates the mixed spectral reflection of the captured area. Let $\mathcal{C}$ denote the region where the three modalities overlap. Ideally, each pixel in this overlapping region should possess the same spectral signatures. In addition, the corresponding proportional coefficients of $\mathbf{X}$ and $\mathbf{Y}_m$ should be the same for a given pixel within $\mathcal{C}$. Since $\mathbf{Y}_h$ is a down-sampled and transformed version of $\mathbf{X}$, its proportional coefficients (representations) should follow the same pattern as those of $\mathbf{X}$ and $\mathbf{Y}_m$, \emph{i.e.}, $\mathbf{S}_h$ and $\mathbf{S}_m$ should be highly correlated, although at different resolutions. One example is shown in Fig.~\ref{fig:sample}. Therefore, to generate an HR HSI with low spectral distortion, it is necessary to encourage the representations $\mathbf{S}_h$ and $\mathbf{S}_m$ to follow similar patterns. However, traditional constraints such as correlation may not work properly, because the input LR HSI and HR MSI are not registered with each other and the mapping $\mathbf{E}_{\phi}$ between the input $\mathcal{Y}$ and the representations $\mathcal{S}$ is non-linear. Therefore, we introduce mutual information (MI), which captures non-linear statistical dependencies between variables~\cite{kinney2014equitability}, to reinforce the representations of the LR HSI and HR MSI to follow statistically similar patterns.
Mutual information has been widely used for multi-modality registration~\cite{zitova2003image,woo2015multimodal}. It is a Shannon-entropy based measurement of the mutual dependence between two random variables, \emph{e.g.}, $\mathbf{S}_h$ and $\mathbf{S}_m$. The mutual information $\mathcal{I}( \mathbf{S}_h; \mathbf{S}_m)$ measures how much the uncertainty of one variable ($\mathbf{S}_h$ or $\mathbf{S}_m$) is reduced given the other variable ($\mathbf{S}_m$ or $\mathbf{S}_h$). Mathematically, it is defined as
\begin{equation}
\begin{array}{ll}
\mathcal{I}(\mathbf{S}_h; \mathbf{S}_m) &= H(\mathbf{S}_h) - H(\mathbf{S}_h \vert \mathbf{S}_m)\\
&= \int_{\mathcal{S}_h\times\mathcal{S}_m}\log\frac{d\mathbb{P}_{\mathbf{S}_h\mathbf{S}_m}}
{d\mathbb{P}_{\mathbf{S}_h}\otimes d\mathbb{P}_{\mathbf{S}_m}}d\mathbb{P}_{\mathbf{S}_h\mathbf{S}_m}
\end{array}
\end{equation}
where $H$ indicates the Shannon entropy and $H(\mathbf{S}_h\vert \mathbf{S}_m)$ is the conditional entropy of $\mathbf{S}_h$ given $\mathbf{S}_m$. $\mathbb{P}_{\mathbf{S}_h\mathbf{S}_m}$ is the joint probability distribution, and $\mathbb{P}_{\mathbf{S}_h}$, $\mathbb{P}_{\mathbf{S}_m}$ denote the marginals. Belghazi \emph{et al.}~\cite{belghazi2018mine} introduced an MI estimator which allows a neural network to estimate MI through back-propagation, by adopting the Donsker-Varadhan representation~\cite{donsker1983asymptotic}.
In order to maximally preserve the spectral information of the reconstructed HR HSI, our goal is to encourage the two representations $\mathbf{S}_h$ and $\mathbf{S}_m$ to follow similar patterns by maximizing their mutual information, $\mathcal{I}(\mathbf{S}_h; \mathbf{S}_m)$, during the optimization procedure. Since $\mathbf{S}_h = \mathbf{E}_{\phi}(\mathbf{Y}_h)$ and $\mathbf{S}_m = \mathbf{E}_{\phi}(\mathbf{Y}_m)$, the MI can also be expressed as $\mathcal{I}(\mathbf{E}_{\phi}(\mathbf{Y}_h); \mathbf{E}_{\phi}(\mathbf{Y}_m))$. However, it is difficult to maximize this MI directly with a neural network, because the two modalities do not match each other in our scenario. Therefore, we maximize the average MI between the representations and their own inputs, \emph{i.e.}, $\mathcal{I}(\mathbf{Y}_h, \mathbf{E}_{\phi}(\mathbf{Y}_h))$ and $\mathcal{I}(\mathbf{Y}_m, \mathbf{E}_{\phi}(\mathbf{Y}_m))$. The benefit of doing this is two-fold. First, optimizing the encoder weights $\phi$ in this way greatly improves the quality of the individual representations~\cite{hjelm2018learning}, which helps the network better preserve the spectral and spatial information. Second, since the multi-modalities are correlated and the dependencies (MI) between the representations and the multi-modalities are maximized, the MI between the different modalities, $\mathcal{I}(\mathbf{S}_h; \mathbf{S}_m)$, is also maximized, such that $\mathbf{S}_h$ and $\mathbf{S}_m$ are encouraged to follow similar patterns.
Take $\mathcal{I}(\mathbf{Y}_h, \mathbf{E}_{\phi}(\mathbf{Y}_h))$ as an example. It is equivalent to the Kullback-Leibler (KL) divergence~\cite{belghazi2018mine} between the joint distribution $\mathbb{P}_{\mathbf{Y}_h\mathbf{E}_{\phi}(\mathbf{Y}_h)}$ and the product of the marginals $\mathbb{P}_{\mathbf{Y}_h}\otimes \mathbb{P}_{\mathbf{E}_{\phi}(\mathbf{Y}_h)}$. Letting $\mathbb{P} = \mathbb{P}_{\mathbf{Y}_h\mathbf{E}_{\phi}(\mathbf{Y}_h)}$
and $\mathbb{Q} = \mathbb{P}_{\mathbf{Y}_h}\otimes \mathbb{P}_{\mathbf{E}_{\phi}(\mathbf{Y}_h)}$, we can further express the MI as
\begin{equation}
\mathcal{I}(\mathbf{Y}_h, \mathbf{E}_{\phi}(\mathbf{Y}_h)) = \mathbb{E}_{\mathbb{P}}
[\log\frac{d\mathbb{P}}{d\mathbb{Q}}] = D_{KL}(\mathbb{P}\Vert\mathbb{Q})
\label{equ:kl}
\end{equation}
This MI can be maximized by maximizing the lower bound of the KL divergence given by the Donsker-Varadhan (DV) representation~\cite{donsker1983asymptotic}. Since we do not need to calculate the exact MI, we instead adopt an alternative lower bound based on the Jensen-Shannon divergence, which works better than the DV-based objective function~\cite{hjelm2018learning}.
In the network design, an additional network $\mathcal{T}_w:\mathcal{Y}\times\mathcal{S}\rightarrow\mathbb{R}$ is built with parameters $w$. The estimator can then be defined as
\begin{equation}
\begin{array}{ll}
\mathcal{I}_{\phi,w}(\mathbf{Y}_h, \mathbf{E}_{\phi}(\mathbf{Y}_h)) :&=
\mathbb{E}_{\mathbb{P}}[-sp(-\mathcal{T}_{w,\phi}(\mathbf{Y}_h,\mathbf{E}_{\phi}(\mathbf{Y}_h)))]
\end{array}
\label{equ:js}
\end{equation}
where $sp(x) =\log(1+e^x)$. Note that we ignore the negative samples of the DV-based objective function~\cite{hjelm2018learning}, which are usually generated by shuffling the input data, because training the network with randomly shuffled inputs is unstable when only two input data pairs are available. Since both $\mathbf{E}_\phi$ and $\mathcal{T}_w$ are used to find the optimal representations, they are updated together. Combined with the MI term of the MSI, the objective function is defined as
\begin{equation}
\label{optmiall}
\begin{array}{ll}
\mathcal{L}_{\mathcal{I}}(\phi,w) &= \mathcal{I}_{\phi,w}(\mathbf{Y}_h, \mathbf{E}_{\phi}(\mathbf{Y}_h))\\
&+ \mathcal{I}_{\phi,w}(\mathbf{Y}_m, \mathbf{E}_{\phi}(\mathbf{Y}_m))
\end{array}
\end{equation}
Since the encoder $\mathbf{E}_{\phi}$ and the MI estimation network $\mathcal{T}_w$ share the same weights $\phi$ and $w$ for both the LR HSI and the HR MSI, their optimized representations follow similar patterns. More optimization details are described in Sec.~\ref{sec:opt}.
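A possible PyTorch sketch of this MI term is shown below; the statistics network loosely follows the sizes of Table~\ref{tab:layers}, but all names and details are assumptions on our part, not the authors' code.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MINetwork(nn.Module):
    """Statistics network T_w scoring (input, representation) pairs."""
    def __init__(self, in_dim, rep_dim, hidden=18):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + rep_dim, hidden), nn.Sigmoid(),
                                 nn.Linear(hidden, 1))

    def forward(self, y, s):
        return self.net(torch.cat([y, s], dim=-1))

def mi_lower_bound(T, y, s):
    # Jensen-Shannon-style bound of Eq. (js): joint samples only, no shuffled negatives
    return (-F.softplus(-T(y, s))).mean()
\end{verbatim}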
In order to extract better spectral information, we adopt a collaborative reconstruction loss based on the $l_{2,1}$ norm \cite{nie2010efficient} instead of the traditional $l_2$ norm for both the LR HSI and the HR MSI. The objective function for the $l_{2,1}$ loss is defined as
\begin{equation}
\begin{array}{ll}
\mathcal{L}_{2,1}(\phi,w) &= \Vert D_\psi(E_\phi(\mathbf{Y}_h))- \mathbf{Y}_h\Vert_{2,1}\\
&+\Vert D_\psi(E_\phi(\mathbf{Y}_m))- \mathbf{Y}_m\Vert_{2,1}
\end{array}
\end{equation}
where $\Vert X \Vert_{2,1} = \sum_{i=1}^{m}\sqrt{\sum_{j=1}^{n}X_{i,j}^2}$. The $l_{2,1}$ norm encourages the rows of the reconstruction error to be sparse; that is, the network is driven to reconstruct individual pixels as accurately as possible. In this way, it extracts better spectral information and further reduces the spectral distortion.
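The collaborative $l_{2,1}$ loss amounts to a per-pixel (row-wise) $l_2$ norm summed over pixels, as in the short sketch below (function and variable names are illustrative):
\begin{verbatim}
import torch

def l21_loss(recon, target):
    # recon, target: (num_pixels, num_bands); l2 per pixel row, then summed over pixels
    return torch.sqrt(((recon - target) ** 2).sum(dim=1) + 1e-12).sum()
\end{verbatim}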
\subsection{Optimization and Implementation Details}
\label{sec:opt}
The objective functions of the proposed network architecture can then be expressed as:
\begin{equation}
\label{equ:optall}
\mathcal{L}(\phi,w) = \mathcal{L}_{2,1}(\phi,w) - \lambda\mathcal{L}_{\mathcal{I}}(\phi,w) + \mu\Vert\psi\Vert_F^2
\end{equation}
where an $l_2$ norm is applied to the decoder weights $\psi$ to prevent over-fitting, and $\lambda$ and $\mu$ are the parameters that balance the trade-off among the reconstruction error, the negative mutual information, and the weight penalty.
Before being fed into the network, the spectral vectors of the LR HSI and HR MSI are transformed into zero-mean vectors by subtracting the mean vector of their own image. Since the spectral information of the MSI is heavily compressed (\emph{e.g.}, the HSI has 31 bands, but the MSI has only 3 bands), the decoder of the network is updated only with the LR HSI data to stabilize the training. The number of input nodes is equal to the number of bands $l$ of the HR MSI. The LR HSI $\mathbf{Y}_h$ is projected into an $l$-dimensional space by $\tilde{\mathbf{Y}}_h = \mathbf{Y}_h\mathcal{R}$ before being fed into the network, while the HR MSI is fed in directly. The number of output nodes is chosen according to the number of bands $L$ of the LR HSI. When the input of the network is ${\mathbf{Y}}_h$, the output of the decoder is $\hat{\mathbf{Y}}_h$; when the input is ${\mathbf{Y}}_m$, the reconstruction $\hat{\mathbf{Y}}_m$ is generated by multiplying the output of the decoder with the fixed weights $\mathcal{R}$.
The vector $\mathbf{v}$ is drawn with Eq.~\eqref{equ:draw} given $\mathbf{u}$ and $\beta$, which are learned by back-propagation. The $\beta$ layer has a single node, learned by a two-layer densely-connected neural network; it denotes the distribution parameter of each pixel. The $\mathbf{u}$ layer has 15 nodes, learned by a four-layer densely-connected neural network. The representation layer $\mathcal{S}$, with 15 nodes, is constructed from $\mathbf{v}$ and $\beta$ according to Eq.~\eqref{equ:stick}. The decoder and the MI network $\mathcal{T}_w$ each have two fully-connected layers. The number of nodes and the activation functions of the different layers are listed in Table~\ref{tab:layers}.
The training is done in an unsupervised fashion without the ground-truth HR HSI. Given the multi-modality LR HSI and HR MSI, the network is optimized with back-propagation to extract their correlated spectral bases and representations, as illustrated in Fig.~\ref{fig:flow} with red dashed lines. Training stops when the reconstruction error of the network no longer decreases. We then feed the HR MSI into the network and obtain the reconstructed HR HSI $\mathbf{X}$ from the output of the decoder.
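One way to organize a training iteration consistent with the description above is sketched below, reusing the hypothetical \texttt{SharedAutoencoder}, \texttt{MINetwork} and \texttt{l21\_loss} sketches from the previous subsections; the optimizer split (decoder parameters excluded from the MSI update) and all names are our assumptions, not the authors' implementation.
\begin{verbatim}
import torch

lam, mu = 1e-5, 1e-4     # weights used in the parameter study below

def train_step(model, T, opt_full, opt_enc, Yh_tilde, Yh, Ym):
    # LR HSI branch: l21 reconstruction + MI + decoder weight decay (updates all params)
    Yh_hat, s_h = model(Yh_tilde, is_msi=False)
    loss_h = (l21_loss(Yh_hat, Yh)
              - lam * mi_lower_bound(T, Yh_tilde, s_h)
              + mu * model.decoder.weight.pow(2).sum())
    opt_full.zero_grad(); loss_h.backward(); opt_full.step()

    # HR MSI branch: updates encoder and T_w only, so D_h stays anchored to the LR HSI
    Ym_hat, s_m = model(Ym, is_msi=True)
    loss_m = l21_loss(Ym_hat, Ym) - lam * mi_lower_bound(T, Ym, s_m)
    opt_enc.zero_grad(); loss_m.backward(); opt_enc.step()
    return loss_h.item(), loss_m.item()
\end{verbatim}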
\begin{table}[htb]
\caption{The number of layers and nodes in the proposed network.}
\label{tab:layers}
\begin{center}
\begin{tabular}{c|cccc}
\hline
&encoder of $\mathbf{u}$/$\mathbf{\beta}$&$\mathbf{u}$/${\beta}$/$\mathbf{v}$&$\mathcal{T}_w$&decoder\\
\hline
$\#$layers &4/2&1/1/1&2&2\\
$\#$nodes &[3,3,3,3]/[3,3]& 15/1/15&[18,1]&[15,15]\\
activation&linear&sigmoid/softplus/linear&sigmoid&linear\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Experiments and Results}
\subsection{Datasets and Experimental Setup}
The proposed $u^2$-MDN has been extensively evaluated on two widely used benchmark datasets, CAVE \cite{yasuma2010generalized} and Harvard \cite{chakrabarti2011statistics}. The CAVE dataset consists of 32 HR HSI images, each of dimension $512\times 512$ with 31 spectral bands taken within the wavelength range 400--700 nm at an interval of 10 nm. The Harvard dataset includes 50 HR HSI images of both indoor and outdoor scenes. The images are cropped to $1024\times 1024$, with 31 bands taken at an interval of 10 nm within the wavelength range 420--720 nm.
In real applications, the two modalities may be unregistered and the scale difference between the real LR HSI and the HR HSI may be 10 or even more~\cite{kwan2017blind,kwan2018mars}. However, not all approaches can reconstruct the HR HSI from unregistered LR HSI and HR MSI. Thus, for a fair comparison, we first evaluate the performance on individual well-registered image pairs with an extreme super-resolution (sr) factor of 32, where the LR HSI $\mathbf{Y}_h$ is obtained by averaging the HR HSI over $32\times 32$ disjoint blocks. The HR MSI with 3 bands is generated by multiplying the HR HSI with the given spectral response matrix $\mathcal{R}$ of a Nikon D700~\cite{lanaras2015hyperspectral,akhtar2015bayesian,qu2018unsupervised}.
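The construction of these registered test pairs can be written compactly; the following NumPy sketch (array names are ours) averages $32\times 32$ disjoint blocks for the LR HSI and applies the spectral response $\mathcal{R}$ for the HR MSI.
\begin{verbatim}
import numpy as np

def make_pair(X, R, sr=32):
    """X: ground-truth HR HSI, shape (H, W, L); R: spectral response, shape (L, l)."""
    H, W, L = X.shape
    lr_hsi = X.reshape(H // sr, sr, W // sr, sr, L).mean(axis=(1, 3))  # block average
    hr_msi = (X.reshape(-1, L) @ R).reshape(H, W, -1)                  # apply response
    return lr_hsi, hr_msi
\end{verbatim}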
The results of the proposed method on the individual images in Fig.~\ref{fig:individual} are compared with eight state-of-the-art methods belonging to the different categories of HSI-SR: the CS-based method~\cite{aiazzi2007improving}; the MRA-based methods SFIM~\cite{liu2000smoothing} and GLP~\cite{aiazzi2006mtf}; the matrix-factorization-based methods CNMF~\cite{yokoya2012coupled} and Lanaras's CSU~\cite{lanaras2015hyperspectral}; the Bayesian-based methods HySure~\cite{simoes2015convex} and Akhtar's BSR~\cite{akhtar2015bayesian}; and the deep-learning-based uSDN~\cite{qu2018unsupervised}. These methods reported the best performance in the literature \cite{loncan2015hyperspectral,akhtar2015bayesian,lanaras2015hyperspectral,qu2018unsupervised}, and their original code is made available by the authors. The average results on the complete datasets are also reported to evaluate the robustness of the proposed method.
\begin{figure}[htb]
\begin{center}
\subfloat[balloon]{\includegraphics[width=0.25\linewidth]{fig/individual/balloons_RGB.jpg}\label{fig:individual:a}}
\subfloat[cloth]{\includegraphics[width=0.25\linewidth]{fig/individual/cloth_RGB.jpg}\label{fig:individual:b}}
\subfloat[pompoms]{\includegraphics[width=0.25\linewidth]{fig/individual/pompoms_RGB.jpg}\label{fig:individual:c}}
\subfloat[spool]{\includegraphics[width=0.25\linewidth]{fig/individual/thread_spools_RGB.jpg}\label{fig:individual:d}}\hfill\\
\subfloat[img1]{\includegraphics[width=0.3\linewidth]{fig/individual/img1.jpg}\label{fig:individual:e}}
\subfloat[imgb5]{\includegraphics[width=0.3\linewidth]{fig/individual/imgb5.jpg}\label{fig:individual:f}}
\subfloat[imgc5]{\includegraphics[width=0.3\linewidth]{fig/individual/imgc5.jpg}\label{fig:individual:g}}\hfill
\end{center}
\caption{The HR MSI of individual test images from the CAVE~\cite{yasuma2010generalized} (top row) and Harvard~\cite{chakrabarti2011statistics} (bottom row) datasets.}
\label{fig:individual}
\end{figure}
To further validate the effectiveness of the proposed approach, the performance on unregistered image pairs is reported. The unregistered image pairs are generated by rotating the LR HSI by $5^{\circ}$ and cropping 15\% of its surrounding pixels; \emph{e.g.}, for images in the CAVE dataset, 39322 pixels of the MSI are not covered by the LR HSI, and for images in the Harvard dataset, 157290 pixels of the MSI are not covered by the LR HSI. Since only four methods that extract spectral and spatial information independently can reconstruct the HR HSI from unregistered images, the proposed method is compared with these four state-of-the-art methods, \emph{i.e.}, CS~\cite{aiazzi2007improving}, SFIM~\cite{liu2000smoothing}, CNMF~\cite{yokoya2012coupled} and Akhtar's BSR~\cite{akhtar2015bayesian}. Note that, to work on unregistered image pairs, the LR HSI should include all the spectral bases of the HR MSI. Since not all images meet this requirement after rotation and cropping, we choose seven commonly benchmarked image pairs for comparison, as shown in Fig.~\ref{fig:individual}.
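A possible way to produce such an unregistered LR HSI is sketched below; the SciPy rotation, bilinear interpolation and symmetric border crop are our own choices, and the authors' exact pipeline may differ.
\begin{verbatim}
import numpy as np
from scipy.ndimage import rotate

def unregister(lr_hsi, angle=5.0, crop_frac=0.15):
    """Rotate the LR cube in the spatial plane and crop ~crop_frac of the border."""
    rot = rotate(lr_hsi, angle, axes=(0, 1), reshape=False, order=1)
    h, w = rot.shape[:2]
    dh, dw = int(h * crop_frac / 2), int(w * crop_frac / 2)
    return rot[dh:h - dh, dw:w - dw]
\end{verbatim}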
\subsection{Evaluation Metrics}
For quantitative comparison, the erreur relative globale adimensionnelle de synthèse (ERGAS), the peak signal-to-noise ratio (PSNR), and the spectral angle mapper (SAM) are applied to evaluate the quality of the reconstructed HSI.
ERGAS provides a measurement of the band-wise normalized root-mean-square error (RMSE) between the reference HSI, $\mathbf{X}$, and the reconstructed HSI, $\hat{\mathbf{X}}$, with the best value at 0~\cite{wald1997fusion}. It is defined as
\begin{equation}
\text{ERGAS}(\mathbf{X},\hat{\mathbf{X}}) = \frac{100}{\text{sr}}\sqrt{{\frac{1}{L}}\sum_{i=1}^{L}\frac{\text{mean}\Vert\mathbf{X}_i-\hat{\mathbf{X}_i}\Vert_2^2}{(\text{mean}\mathbf{X}_i)^2}},
\end{equation}
where $\text{sr}$ denotes the sr factor between the HR MSI and LR HSI, and $L$ denotes the number of spectral bands of the reconstructed $\hat{\mathbf{X}}$.
PSNR is the average ratio between the maximum power of the image and the power of the residual errors in all the spectral bands. A larger PSNR indicates a higher spatial quality of the reconstructed HSI. For each image band of HSI, the PSNR is defined as
\begin{equation}
\text{PSNR}(\mathbf{X}_i,\hat{\mathbf{X}}_i)=10\cdot\log_{10}\bigg(\frac{\max(\mathbf{X}_i)^2}{\text{mean}\Vert\mathbf{X}_i-\hat{\mathbf{X}}_i\Vert_2^2}\bigg)
\end{equation}
SAM~\cite{kruse1993spectral} is commonly used to quantify the spectral distortion of the reconstructed HSI. The larger the SAM, the worse the spectral distortion of the reconstructed HSI. For each HSI pixel $\hat{\mathbf{X}_j}$, the SAM is defined as
\begin{equation}
\text{SAM}(\mathbf{X}_j,\hat{\mathbf{X}_j}) = \arccos\left(\frac{\mathbf{X}_j^T\hat{\mathbf{X}_j}}{\Vert\mathbf{X}_j\Vert_2\Vert\hat{\mathbf{X}}_j\Vert_2}\right)
\end{equation}
The global SAM is obtained by averaging the SAM over all the pixels of the image.
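Reference-style implementations of the three metrics, directly transcribing the definitions above, are given below (NumPy, written for clarity rather than speed; \texttt{X} and \texttt{Xhat} are $(H, W, L)$ arrays).
\begin{verbatim}
import numpy as np

def ergas(X, Xhat, sr):
    mse = ((X - Xhat) ** 2).mean(axis=(0, 1))                  # per-band MSE
    return 100.0 / sr * np.sqrt(np.mean(mse / X.mean(axis=(0, 1)) ** 2))

def psnr(X, Xhat):
    mse = ((X - Xhat) ** 2).mean(axis=(0, 1))
    return np.mean(10.0 * np.log10(X.max(axis=(0, 1)) ** 2 / mse))  # averaged over bands

def sam(X, Xhat):
    x, y = X.reshape(-1, X.shape[-1]), Xhat.reshape(-1, X.shape[-1])
    cos = (x * y).sum(1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()    # global SAM in degrees
\end{verbatim}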
\subsection{Experimental Results on Registered Image Pairs}
For a fair comparison, we first perform experiments on the general case where the LR HSI and HR MSI are well registered. Tables~\ref{tab:ergas},~\ref{tab:psnr} and~\ref{tab:sam} show the experimental results for 7 groups of commonly benchmarked images from the CAVE and Harvard datasets \cite{kawakami2011high,akhtar2015bayesian,akhtar2016hierarchical}. Note that, to show how the method behaves in different scenarios, the data is not normalized. Since the intensities of the Harvard dataset are small, the ERGAS values of all methods are generally smaller than those on the CAVE dataset.
We observe that the traditional methods suffer from spectral distortion and thus cannot achieve competitive performance. The matrix-factorization-based approach CSU \cite{lanaras2015hyperspectral} works better than CNMF \cite{yokoya2012coupled} on both the CAVE and Harvard datasets, and both CSU and CNMF preserve spectral information better than the Bayesian-based methods. The reason is that they can extract more correlated spatial coefficients from the LR HSI and HR MSI with the given predefined down-sampling function. However, for applications with an unknown down-sampling function, this prior would limit their ability to extract accurate spatial information from data with arbitrary distributions.
The Bayesian non-parametric method BSR \cite{akhtar2015bayesian} outperforms HySure \cite{simoes2015convex} by estimating the spectra through non-parametric learning. However, although it achieves results with relatively low reconstruction error, it does not preserve the spectral information well. This is because the spectral and spatial information are extracted separately during the optimization of BSR: the spectral bases extracted from the LR HSI may not be the optimal spectral bases for the HR MSI, the representations extracted from the two modalities are therefore not correlated, and the reconstructed HR HSI suffers from large spectral distortion.
The deep-learning-based uSDN preserves spectral information better than BSR. However, it can only work on well-registered images due to its network design with angular-difference regularization. Based on the experiments, the proposed $u^2$-MDN network, powered by the mutual information and the collaborative $l_{2,1}$ loss, outperforms all the other approaches in terms of ERGAS, PSNR and SAM, and it is quite stable for different types of input images.
The averages of ERGAS, PSNR and SAM over the complete CAVE and Harvard datasets are reported in Table~\ref{tab:average} to further demonstrate the robustness of the proposed $u^2$-MDN. We only compare against the matrix-factorization-based CSU, the Bayesian-based BSR, and the deep-learning-based uSDN, since they demonstrated better performance on well-registered images. We observe that CSU achieves better results than BSR due to the predefined down-sampling function, which is given as a prior. Although BSR achieves relatively good ERGAS and PSNR scores, its SAM scores are not promising: because it estimates the spectral and spatial information independently, it may not find the optimal representations to recover the HR HSI. The deep-learning-based uSDN is comparable to CSU and reduces the spectral distortion better than BSR; however, its optimization strategy limits its ability to further reduce the spectral distortion. The proposed approach consistently outperforms the other methods and achieves state-of-the-art performance in terms of ERGAS, PSNR and SAM, as reported in Table~\ref{tab:average}. It is very effective in preserving the spectral signature of the reconstructed HR HSI, showing much improved performance especially in terms of SAM on the CAVE data. To demonstrate the reconstruction performance in different spectral bands, the averaged PSNR at each wavelength is shown in Fig.~\ref{fig:avg_psnr}. We can observe that the proposed method consistently outperforms the other methods over all spectral bands.
\begin{table}[htb]
\centering
\caption{Benchmarked results in terms of ERGAS.}
\label{tab:ergas}
\begin{tabular}{p{1.3cm}|p{0.65cm} p{0.4cm} p{0.75cm} p{0.55cm}| p{0.4cm}p{0.4cm}p{0.4cm}}
\hline
\multirow{2}{*}{Methods}&\multicolumn{4}{|c|}{CAVE}&\multicolumn{3}{|c}{Harvard} \\
\cline{2-8}
{}&balloon&cloth&pompoms&spool&img1&imgb5&imgc5\\
\hline
CS &0.33&0.51&0.47&0.67 &0.16&0.25&0.19\\
\hline
SFIM &0.59& 0.54&3.76&2.93&0.23&0.29&0.23\\
GLP &0.39&0.52&0.49&0.71&0.22&0.27&0.21\\
\hline
CNMF &0.26&0.54&0.31&0.54&0.15&0.17&0.13\\
CSU &0.19&0.40&0.28&0.45&0.12&0.18&0.12\\
\hline
HySure&0.34&0.53&0.46&0.66& 0.17&0.35&0.19\\
BSR &0.30&0.48&0.48&0.62&0.14&0.30&0.18\\
\hline
uSDN &0.20&0.35&0.25&0.40&0.12&0.16&0.11\\
$u^2$-MDN &\textbf{0.16}&\textbf{0.30}&\textbf{0.19}&\textbf{0.37}&\textbf{0.11}&\textbf{0.15}&\textbf{0.11}\\
\hline
\end{tabular}
\end{table}
\begin{table}[htb]
\centering
\caption{Benchmarked results in terms of PSNR.}
\label{tab:psnr}
\begin{tabular}{p{1.3cm}|p{0.65cm} p{0.4cm} p{0.75cm} p{0.55cm}| p{0.4cm}p{0.4cm}p{0.4cm}}
\hline
\multirow{2}{*}{Methods}&\multicolumn{4}{|c|}{CAVE}&\multicolumn{3}{|c}{Harvard} \\
\cline{2-8}
{}&balloon&cloth&pompoms&spool&img1&imgb5&imgc5\\
\hline
CS &36.12& 30.51&31.78&36.61 & 37.41&36.06&36.02\\
\hline
SFIM &33.52&30.59&25.39&28.63&32.62&33.15&35.62\\
GLP &36.01&32.04&30.69&35.89&33.26&33.88&35.98\\
\hline
CNMF &39.27&30.52&35.45&37.28&37.25&39.06&38.49\\
CSU &41.52&33.47&36.81&39.64&39.12&39.01&39.05\\
\hline
HySure& 37.12&30.24&32.56&35.79&36.43&36.44&36.99\\
BSR & 39.37&31.70&33.05&37.45&38.88&36.95&36.46\\
\hline
uSDN &41.54&33.48&37.84&38.49&39.30&39.72&39.12\\
$u^2$-MDN &\textbf{43.59}&\textbf{34.85}&\textbf{39.12}&\textbf{40.08}&\textbf{40.97}&\textbf{39.76}&\textbf{39.19}\\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Benchmarked results in terms of SAM.}
\label{tab:sam}
\begin{tabular}{p{1.3cm}|p{0.65cm} p{0.4cm} p{0.75cm} p{0.55cm}| p{0.4cm}p{0.4cm}p{0.4cm}}
\hline
\multirow{2}{*}{Methods}&\multicolumn{4}{|c|}{CAVE}&\multicolumn{3}{|c}{Harvard} \\
\cline{2-8}
{}&balloon&cloth&pompoms&spool&img1&imgb5&imgc5\\
\hline
CS &10.07&7.95&10.39&13.54&2.79&3.18&2.77\\
\hline
SFIM &11.45&7.25&12.89&19.71&2.89&3.52&2.84\\
GLP &10.25&7.39&11.09&13.51&2.99&3.29&2.80\\
\hline
CNMF &9.71&6.55&6.32&16.77&2.86&2.14&2.64\\
CSU &4.68&5.52&6.01&6.84&2.30&2.37& 2.38\\
\hline
HySure &10.21&7.09&10.65&17.19&4.06&3.43&2.93\\
BSR & 9.63&7.02&10.13&14.52&2.78&3.27&2.67\\
\hline
uSDN &4.56&4.16&5.43&13.01&2.27&2.10&2.58\\
$u^2$-MDN&\textbf{1.93}&\textbf{4.31}&\textbf{3.46}&\textbf{4.47}&\textbf{2.06}&\textbf{2.08}&\textbf{1.77}\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{The average ERGAS, PSNR and SAM scores over complete benchmarked datasets.}
\label{tab:average}
\begin{center}
\begin{tabular}{l|ccc|ccc}
\hline
\multirow{2}{*}{Methods}&\multicolumn{3}{|c|}{CAVE}&\multicolumn{3}{|c}{Harvard} \\
\cline{2-4}\cline{5-7}
&ERGAS&PSNR&SAM&ERGAS&PSNR&SAM\\
\hline
CSU\cite{lanaras2015hyperspectral}&0.40&38.50&7.14& 0.29&40.01&4.38\\
BSR\cite{akhtar2015bayesian}& 0.53 &36.24&13.23& 0.30&38.43&4.49\\
uSDN\cite{qu2018unsupervised} &0.36&38.96&6.95&0.28&40.15&4.28\\
$u^2$-MDN &\textbf{0.31}&\textbf{40.65}&\textbf{4.72}&\textbf{0.23}&\textbf{41.37}&\textbf{3.28}\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[htb]
\begin{center}
\subfloat[]{\includegraphics[width=0.48\linewidth]{fig/avg/avg_cave.jpg}\label{fig:avg_psnr:a}}
\subfloat[]{\includegraphics[width=0.48\linewidth]{fig/avg/avg_harvard.jpg}\label{fig:avg_psnr:b}}\hfill
\end{center}
\caption{The average PSNR of different wavelengths of the reconstructed HSI in the (a) CAVE dataset and (b) Harvard dataset, respectively.}
\label{fig:avg_psnr}
\end{figure}
\subsection{Experimental Results on Unregistered Image Pairs}
To solve the problem of unregistered hyperspectral image super-resolution, the LR HSI should contain all the spectral bases present in the HR MSI. Since one pixel of the LR HSI covers 1024 pixels of the HR MSI, when the LR HSI is rotated and cropped, certain spectral information contained in the HR MSI is corrupted or missing; thus, the reconstruction error is expected to increase.
The performance of the different methods on unregistered image pairs is reported in Tables~\ref{tab:unergas},~\ref{tab:unpsnr} and~\ref{tab:unsam}. Note that only methods able to extract the spectral and spatial information separately can work on unregistered images; thus, four methods are compared in the tables. The traditional CS-based and MRA-based methods fail in this scenario, because when the two modalities are unregistered, the spatial details cannot be directly added to improve the spatial resolution of the LR HSI. The performance of CNMF drops considerably for both datasets: the adopted predefined down-sampling function introduces significant spectral distortion when the LR HSI and HR MSI are unregistered. BSR works better than the other three approaches because it extracts spectral and spatial information separately with sparse constraints. However, it does not preserve the spectral information well, since the spectral bases extracted from the LR HSI may not be the optimal bases for extracting correlated spatial information from the HR MSI. The proposed $u^2$-MDN handles these challenging scenarios much better than the state-of-the-art. The main reason for this success is that the network is able to extract optimal and correlated spatial representations from the two modalities through the mutual information and the collaborative loss; in this way, both the spatial and especially the spectral information are effectively preserved. This demonstrates the representation capacity of the proposed structure.
\begin{table}[htb]
\centering
\caption{Results on unregistered images in terms of ERGAS.}
\label{tab:unergas}
\begin{tabular}{p{1.3cm}|p{0.65cm} p{0.4cm} p{0.75cm} p{0.55cm}| p{0.4cm}p{0.4cm}p{0.4cm}}
\hline
\multirow{2}{*}{Methods}&\multicolumn{4}{|c|}{CAVE}&\multicolumn{3}{|c}{Harvard} \\
\cline{2-8}
{}&balloon&cloth&pompoms&spool&img1&imgb5&imgc5\\
\hline
CS& 0.82&0.76&1.18&1.07&1.65&0.40&0.69\\
SFIM&1.51&1.01&1.82&1.88& 1.33&0.68&0.89\\
CNMF&0.71&0.69&0.83&0.63&0.74&0.34&0.48\\
BSR&0.32&0.54&0.58&0.65&0.15&0.33&0.29\\
$u^2$-MDN&\textbf{0.30}&\textbf{0.40}&\textbf{0.37}&\textbf{0.56}&\textbf{0.13}&\textbf{0.25}&\textbf{0.14}\\
\hline
\end{tabular}
\end{table}
\begin{table}[htb]
\centering
\caption{Results on unregistered images in terms of PSNR.}
\label{tab:unpsnr}
\begin{tabular}{p{1.3cm}|p{0.65cm} p{0.4cm} p{0.75cm} p{0.55cm}| p{0.4cm}p{0.4cm}p{0.4cm}}
\hline
\multirow{2}{*}{Methods}&\multicolumn{4}{|c|}{CAVE}&\multicolumn{3}{|c}{Harvard} \\
\cline{2-8}
{}&balloon&cloth&pompoms&spool&img1&imgb5&imgc5\\
\hline
CS& 27.71&27.59&23.54&29.92&23.60&20.07&22.18\\
SFIM&22.47&24.74&19.50&25.30&17.41&25.38&19.93\\
CNMF&29.18&27.84&26.67&24.62&22.29&31.39&25.34\\
BSR&35.61&30.39&32.44&31.58&37.58&33.91&34.18\\
$u^2$-MDN&\textbf{38.61}&\textbf{32.89}&\textbf{33.64}&\textbf{36.25}&\textbf{39.42}&\textbf{36.90}&\textbf{36.29}\\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Results on unregistered images in terms of SAM.}
\label{tab:unsam}
\begin{tabular}{p{1.3cm}|p{0.65cm} p{0.4cm} p{0.75cm} p{0.55cm}| p{0.4cm}p{0.4cm}p{0.4cm}}
\hline
\multirow{2}{*}{Methods}&\multicolumn{4}{|c|}{CAVE}&\multicolumn{3}{|c}{Harvard} \\
\cline{2-8}
{}&balloon&cloth&pompoms&spool&img1&imgb5&imgc5\\
\hline
CS& 14.35&9.75&22.13&17.16&10.66&4.90&5.32\\
SFIM&12.69&9.65&14.89&21.02&3.28&4.44&3.96\\
CNMF&10.63&8.12&11.88&17.03&3.85&3.97&3.16\\
BSR&10.56&7.85&10.22&18.09&2.93&4.27&2.92\\
$u^2$-MDN&\textbf{3.48}&\textbf{6.08}&\textbf{4.87}&\textbf{6.78}&\textbf{2.32}&\textbf{2.73}&\textbf{2.26}\\
\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\begin{center}
\begin{minipage}{0.9\linewidth}
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_2_lr.png}
\label{fig:ball:a}}
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_2_org.jpg}
\label{fig:ball:b}}
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_2_hr.jpg}
\label{fig:ball:c}}
{\includegraphics[width=0.252\linewidth]{fig/balloon/diff_2.jpg}
\label{fig:ball:d}}\\
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_14_lr.png}
\label{fig:ball:e}}
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_14_org.jpg}
\label{fig:ball:f}}
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_14_hr.jpg}
\label{fig:ball:g}}
{\includegraphics[width=0.252\linewidth]{fig/balloon/diff_14.jpg}
\label{fig:ball:h}}\\
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_29_lr.png}
\label{fig:ball:i}}
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_29_org.jpg}
\label{fig:ball:j}}
{\includegraphics[width=0.22\linewidth]{fig/balloon/ball_29_hr.jpg}
\label{fig:ball:k}}
{\includegraphics[width=0.252\linewidth]{fig/balloon/diff_29.jpg}
\label{fig:ball:l}}
\end{minipage}\vspace{3mm}
\begin{minipage}{0.9\linewidth}
{\includegraphics[width=0.22\linewidth]{fig/imgc5/1_lr.png}
\label{fig:imgc5:a}}
{\includegraphics[width=0.22\linewidth]{fig/imgc5/imgc5_1_org.jpg}
\label{fig:imgc5:b}}
{\includegraphics[width=0.22\linewidth]{fig/imgc5/imgc5_1_hr.jpg}
\label{fig:imgc5:c}}
{\includegraphics[width=0.26\linewidth]{fig/imgc5/imgc5_1_diff.jpg}
\label{fig:imgc5:d}}\\
{\includegraphics[width=0.22\linewidth]{fig/imgc5/imgc5_12_lr.png}
\label{fig:imgc5:e}}
{\includegraphics[width=0.22\linewidth]{fig/imgc5/imgc5_12_org.jpg}
\label{fig:imgc5:f}}
{\includegraphics[width=0.22\linewidth]{fig/imgc5/imgc5_12_hr.jpg}
\label{fig:imgc5:g}}
{\includegraphics[width=0.26\linewidth]{fig/imgc5/diff_12.jpg}
\label{fig:imgc5:h}}\\
{\includegraphics[width=0.22\linewidth]{fig/imgc5/imgc5_27_lr.png}
\label{fig:imgc5:i}}
{\includegraphics[width=0.22\linewidth]{fig/imgc5/imgc5_27_org.jpg}
\label{fig:imgc5:j}}
{\includegraphics[width=0.22\linewidth]{fig/imgc5/imgc5_27_hr.jpg}
\label{fig:imgc5:k}}
{\includegraphics[width=0.26\linewidth]{fig/imgc5/diff_27.jpg}
\label{fig:imgc5:l}}
\end{minipage}
\end{center}
\caption{Reconstructed images given unregistered LR HSI from the CAVE (top) and Harvard dataset (bottom) at wavelength 420, 540 and 690 nm. First column: LR images. Second: estimated images. Third: ground truth images. Fourth: absolute difference.}
\label{fig:imgc5}
\end{figure}
To visualize the results, we show the reconstructed samples given unregistered image pairs of CAVE and Harvard datasets taken at wavelengths 420, 540, and 690 nm in Fig.~\ref{fig:imgc5}. The first through fourth columns show the LR images, reconstructed images from our method, ground truth images, and the absolute difference between the images at the second and third columns, respectively.
We also compare the proposed method with the other methods on the challenging images from the CAVE and Harvard datasets. The results are shown in Figs.~\ref{fig:pom}-\ref{fig:img1}. We can observe from the absolute difference and SAM maps that the results of CS and SFIM suffer from large displacements, since the LR HSI and HR MSI are not registered. CNMF works slightly better, but it is not effective due to the predefined down-sampling function. The proposed method is comparable to BSR in terms of PSNR, but it exhibits much less spectral distortion than BSR. In summary, the effectiveness of the proposed method can be readily observed from the difference images, SAM and PSNR, where the proposed approach preserves both the spectral and spatial information.
\begin{figure*}[htb]
\begin{center}
\begin{minipage}{0.8\linewidth}
\subfloat[Reference HR HSI]{\includegraphics[width=0.16\linewidth]{fig/pom/pom_14_hr.jpg}
\label{fig:pom:a}}
\subfloat[CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/gsa.jpg}
\label{fig:pom:b}}
\subfloat[SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/sfimhs.jpg}
\label{fig:pom:c}}
\subfloat[CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/cnmf.jpg}
\label{fig:pom:d}}
\subfloat[BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/bsr.jpg}
\label{fig:pom:e}}
\subfloat[$u^2$-MDN]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/u2mdn.jpg}
\label{fig:pom:f}}\hfill\\
\subfloat[LR HSI]{\includegraphics[width=0.16\linewidth]{fig/pom/pom_14_lr.png}
\label{fig:pom:g}}
\subfloat[Difference of CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/gsa_rmse.jpg}
\label{fig:pom:h}}
\subfloat[Difference of SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/sfimhs_rmse.jpg}
\label{fig:pom:i}}
\subfloat[Difference of CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/cnmf_rmse.jpg}
\label{fig:pom:j}}
\subfloat[Difference of BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/bsr_rmse.jpg}
\label{fig:pom:k}}
\subfloat[Difference of $u^2$-MDN]{\includegraphics[width=0.187\linewidth]{fig/rotate_pom/u2mdn_rmse.jpg}
\label{fig:pom:l}}\hfill\\
\subfloat[PSNR]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/rotate_pom_psnr.png}
\label{fig:pom:m}}
\subfloat[SAM of CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/gsa_sam.jpg}
\label{fig:pom:n}}
\subfloat[SAM of SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/sfimhs_sam.jpg}
\label{fig:pom:o}}
\subfloat[SAM of CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/cnmf_sam.jpg}
\label{fig:pom:p}}
\subfloat[SAM of BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_pom/bsr_sam.jpg}
\label{fig:pom:q}}
\subfloat[SAM of $u^2$-MDN]{\includegraphics[width=0.1825\linewidth]{fig/rotate_pom/u2mdn_sam.jpg}
\label{fig:pom:r}}\hfill\\
\end{minipage}
\end{center}
\caption{Reconstructed results given unregistered image pairs from the CAVE dataset. (a) Reference HR HSI at wavelength 540 nm. (g) LR HSI at wavelength 540 nm. (b)-(f): reconstructed results at wavelength 540 nm from different methods. (h)-(l): average absolute difference between the reconstructed HSI and reference HSI over different spectral bands, from different methods. (n)-(r) SAM of each pixel between the reconstructed HSI and reference HSI from different methods. (m) PSNR of different methods.}
\label{fig:pom}
\end{figure*}
\begin{figure*}[!h]
\begin{center}
\begin{minipage}{0.8\linewidth}
\subfloat[Reference HR HSI]{\includegraphics[width=0.16\linewidth]{fig/spools/spool_14_hr.jpg}
\label{fig:spool:a}}
\subfloat[CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/gsa.jpg}
\label{fig:spool:b}}
\subfloat[SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/sfimhs.jpg}
\label{fig:spool:c}}
\subfloat[CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/cnmf.jpg}
\label{fig:spool:d}}
\subfloat[BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/bsr.jpg}
\label{fig:spool:e}}
\subfloat[$u^2$-MDN]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/u2mdn.jpg}
\label{fig:spool:f}}\hfill\\
\subfloat[LR HSI]{\includegraphics[width=0.16\linewidth]{fig/spools/spool_14_lr.png}
\label{fig:spool:g}}
\subfloat[Difference of CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/gsa_rmse.jpg}
\label{fig:spool:h}}
\subfloat[Difference of SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/sfimhs_rmse.jpg}
\label{fig:spool:i}}
\subfloat[Difference of CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/cnmf_rmse.jpg}
\label{fig:spool:j}}
\subfloat[Difference of BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/bsr_rmse.jpg}
\label{fig:spool:k}}
\subfloat[Difference of $u^2$-MDN]{\includegraphics[width=0.187\linewidth]{fig/rotate_spool/u2mdn_rmse.jpg}
\label{fig:spool:l}}\hfill\\
\subfloat[PSNR]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/rotate_spool_psnr.jpg}
\label{fig:spool:m}}
\subfloat[SAM of CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/gsa_sam.jpg}
\label{fig:spool:n}}
\subfloat[SAM of SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/sfimhs_sam.jpg}
\label{fig:spool:o}}
\subfloat[SAM of CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/cnmf_sam.jpg}
\label{fig:spool:p}}
\subfloat[SAM of BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_spool/bsr_sam.jpg}
\label{fig:spool:q}}
\subfloat[SAM of $u^2$-MDN]{\includegraphics[width=0.1825\linewidth]{fig/rotate_spool/u2mdn_sam.jpg}
\label{fig:spool:r}}\hfill\\
\end{minipage}
\end{center}
\caption{Reconstructed results given unregistered image pairs from the CAVE dataset. (a) Reference HR HSI at wavelength 540 nm. (g) LR HSI at wavelength 540 nm. (b)-(f): reconstructed results at wavelength 540 nm from different methods. (h)-(l): average absolute difference between the reconstructed HSI and reference HSI over different spectral bands, from different methods. (n)-(r) SAM of each pixel between the reconstructed HSI and reference HSI from different methods. (m) PSNR of different methods.}
\label{fig:spool}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\begin{minipage}{0.8\linewidth}
\subfloat[Reference HR HSI]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/crop_img1_hr.jpg}
\label{fig:img1:a}}
\subfloat[CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/gsa.jpg}
\label{fig:img1:b}}
\subfloat[SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/sfimhs.jpg}
\label{fig:img1:c}}
\subfloat[CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/cnmf.jpg}
\label{fig:img1:d}}
\subfloat[BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/bsr.jpg}
\label{fig:img1:e}}
\subfloat[$u^2$-MDN]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/u2mdn.jpg}
\label{fig:img1:f}}\hfill\\
\subfloat[LR HSI]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/crop_img1_lr.png}
\label{fig:img1:g}}
\subfloat[Difference of CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/gsa_rmse.jpg}
\label{fig:img1:h}}
\subfloat[Difference of SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/sfimhs_rmse.jpg}
\label{fig:img1:i}}
\subfloat[Difference of CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/cnmf_rmse.jpg}
\label{fig:img1:j}}
\subfloat[Difference of BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/bsr_rmse.jpg}
\label{fig:img1:k}}
\subfloat[Difference of $u^2$-MDN]{\includegraphics[width=0.187\linewidth]{fig/rotate_img1/u2mdn_rmse.jpg}
\label{fig:img1:l}}\hfill\\
\subfloat[PSNR]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/img1_psnr.jpg}
\label{fig:img1:m}}
\subfloat[SAM of CS]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/gsa_sam.jpg}
\label{fig:img1:n}}
\subfloat[SAM of SFIM]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/sfimhs_sam.jpg}
\label{fig:img1:o}}
\subfloat[SAM of CNMF]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/cnmf_sam.jpg}
\label{fig:img1:p}}
\subfloat[SAM of BSR]{\includegraphics[width=0.16\linewidth]{fig/rotate_img1/bsr_sam.jpg}
\label{fig:img1:q}}
\subfloat[SAM of $u^2$-MDN]{\includegraphics[width=0.1825\linewidth]{fig/rotate_img1/u2mdn_sam.jpg}
\label{fig:img1:r}}\hfill\\
\end{minipage}
\end{center}
\caption{Reconstructed results given unregistered image pairs from the Harvard dataset. (a) Reference HR HSI at wavelength 540 nm. (g) LR HSI at wavelength 540 nm. (b)-(f): reconstructed results at wavelength 540 nm from different methods. (h)-(l): average absolute difference between the reconstructed HSI and reference HSI over different spectral bands, from different methods. (n)-(r) SAM of each pixel between the reconstructed HSI and reference HSI from different methods. (m) PSNR of different methods.}
\label{fig:img1}
\end{figure*}
\subsection{Ablation and Parameter Study}
Taking the challenging rotated `pompoms' image from the CAVE dataset as an example, we further evaluate 1) the necessity of maximizing the mutual information (MI) between the representations and the input images and 2) the usage of the collaborative $l_{2,1}$ loss. Since both are designed to reduce the spectral distortion of the reconstructed image, we use SAM as the evaluation metric.
Fig.~\ref{fig:mutual} illustrates the SAM of the reconstructed HR HSI when increasing the mutual-information weight $\lambda$ in Eq.~\eqref{equ:optall}. We observe that without mutual-information maximization, \emph{i.e.}, $\lambda=0$, the spectral information is not preserved well. When we gradually increase $\lambda$, the reconstructed HR HSI preserves the spectral information better, \emph{i.e.}, the SAM is largely reduced. The reason is that maximizing the MI between the representations and their own inputs effectively maximizes the mutual information between the representations of the two modalities; the network can therefore correlate the spectral and spatial information extracted from the unregistered HR MSI and LR HSI and largely reduce the spectral distortion. However, when the parameter is too large, it hinders the reconstruction procedure of the network, so a proper $\lambda$ must be chosen. In our experiments, we set $\lambda=1\times10^{-5}$ and keep $\mu=1\times 10^{-4}$ throughout to reduce over-fitting.
The effectiveness of the collaborative $l_{2,1}$ norm is demonstrated in Fig.~\ref{fig:l21}. Because the $l_{2,1}$ norm encourages the network to reconstruct individual pixels as accurately as possible, the network preserves the spectral information better and significantly reduces the spectral distortion of the restored HR HSI.
\begin{figure*}[htb]
\begin{center}
\begin{minipage}{0.28\linewidth}
{\includegraphics[width=1\linewidth]{fig/pmi.jpg}}
\caption{Influence of MI}
\label{fig:mutual}
\end{minipage}
\begin{minipage}{0.28\linewidth}
{\includegraphics[width=1\linewidth]{fig/l21.jpg}}\hfill
\caption{The effect of $l_{21}$}
\label{fig:l21}
\end{minipage}
\begin{minipage}{0.28\linewidth}
{\includegraphics[width=1\linewidth]{fig/tolerence.jpg}}\hfill
\caption{Tolerance study}
\label{fig:tol}
\end{minipage}\hfill
\end{center}
\end{figure*}
\subsection{Tolerance Study}
Finally, we examine how much spectral information can be preserved when the network deals with unregistered images. To preserve the spectral information, the input LR HSI should cover all the spectral signatures of the HR MSI. We therefore choose the image of Fig.~\ref{fig:sample} from the Harvard dataset, which has most of its spectral signatures concentrated at the center of the image. The results are shown in Fig.~\ref{fig:tol}. The image is rotated from 5 to 30 degrees, with 15\% to 48\% of its information missing. We observe that as long as the spectral bases are included in the LR HSI, no matter how small the overlapping region between the LR HSI and HR MSI is, the reconstructed image exhibits small spectral distortion even for unregistered inputs.
\section{Conclusion}
We proposed an unsupervised encoder-decoder network, $u^2$-MDN, to solve the problem of hyperspectral image super-resolution (HSI-SR) without multi-modality registration. The unique structure stabilizes the network training by projecting both modalities into the same space and by simultaneously extracting the spectral basis from the LR HSI, which is rich in spectral information, and the spatial representations from the HR MSI, which carries the high-resolution spatial information. The network learns correlated spatial information from the two unregistered modalities by maximizing the mutual information (MI) between the representations and their own raw inputs; in this way, it maximizes the MI between the two representations, which largely reduces the spectral distortion. In addition, the collaborative $l_{2,1}$ norm is adopted to encourage the network to further preserve the spectral information. Extensive experiments on two benchmark datasets demonstrated the superiority of the proposed approach over the state-of-the-art.
\section*{Acknowledgment}
The authors would like to thank all the developers of the evaluated methods who kindly made their code available. This publication was made possible in part by NASA grants NNX12CB05C and NNX16CP38P. The statements made herein are solely the responsibility of the authors.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section*{Introduction}
The modern theory of Hamiltonian dynamical systems was pioneered by Henri Poincaré in his celebrated \textit{New Methods of Celestial Mechanics}\footnote{to which the title of this article humbly pays tribute.} \cite{Poin}. In that work and the following ``Mémoires''\cite{Ch.12}, Poincaré proposed and explored the revolutionary idea of using geometrical and topological techniques to determine the qualitative behaviour and global properties of solutions to differential systems, breaking with the time-honoured methods devised to find exact solutions. His ideas were then developed throughout the twentieth century, with applications that have proved useful for solving problems well beyond the scope of mathematics and theoretical physics \cite{Arn}. Perhaps the most ambitious problem that Poincaré's methods were able to tackle was the qualitative resolution of the classical $N$-body problem, which ultimately gave birth to KAM theory \cite{Se.03} and to the stability analysis of quasi-integrable Hamiltonian systems.\\
At the core of Hamiltonian mechanics lies the notion of periodic solutions. The two classical examples are the two-body (or Keplerian) problem and the harmonic oscillator. The former is at the basis of all celestial mechanics, and the latter encodes the periodic nature of essentially all integrable Hamiltonian systems. There is, however, another important feature that these two fundamental problems have in common: they are \textit{isochrone} systems, in the sense that \textit{all periodic orbits in the Kepler and harmonic potentials have a period that is only a function of the energy} of the system. In particular, this period is independent of the angular momentum of the particles orbiting these potentials, whence the name \textit{iso}chrone. This particular notion of isochrony was introduced in galactic dynamics by Michel Hénon \cite{HeI.59}, in a very specific context: finding a potential that could account for the harmonic, central regions of galaxies and their Keplerian outskirts. In his study, Hénon found a third isochrone potential, now called \textit{the} isochrone model \cite{BiTr}.\\
In a recent series of papers \cite{SPD,RP.20}, new light was shed on these isochrone potentials and orbits therein. In this paper, we propose to continue (and somewhat complete) this program, by examining isochrone potentials and isochrone orbits within the realm of Hamiltonian mechanics. One of our goals is to bring to light the central role and universal properties of isochrone potentials, in place of (and as a generalisation of) the well-established Kepler and harmonic potentials. As a matter of fact, fundamental properties and symmetries of these two academic potentials, for example Kepler's laws of motion, the Bertrand Theorem, the Kepler Equation, eccentric orbital elements and other fundamentals of classical gravitational mechanics, are all but special cases of the central properties of isochrone systems. In particular, they can all be understood and derived from geometrical reasoning. In \cite{SPD,RP.20} it was already shown that these results follow from \textit{Euclidean} geometry, as the set of isochrone potentials is in a one-to-one correspondence with the set of parabolae. In this article, we show that isochrony can be naturally explained and understood in the context of \textit{symplectic} geometry, the isochrony of a system being encoded into the Birkhoff invariants of its Hamiltonian formulation. To present these ideas, we have organised this paper in four sections, as follows: \\
\textbullet \, In section \ref{sec:un}, we provide a brief reminder of the general theory of test particles in radial potentials (\ref{sec:gen}), as well as a quick introduction and summary of classical results on isochrone potentials (\ref{sec:gauge}) and isochrone parabolae (\ref{sec:one}). These reminders all follow from \cite{SPD,RP.20}.\\
\textbullet \, The aim of section \ref{sec:deux} is twofold: first, we solve the general problem of isochrone dynamics in the context of Hamiltonian mechanics, by constructing a well-suited set of action-angle variables (\ref{sec:hamiso}). Then, we show that the Kepler equation of celestial mechanics, relating the orbital time to the eccentric anomaly, actually holds for the whole class of isochrone orbits (\ref{sec:ecc}). Lastly, we use this generalised Kepler equation to derive a parametric solution for the orbital polar coordinates, in terms of a generalised eccentric anomaly (\ref{sec:para}).\\
\textbullet \, Sections \ref{sec:trois} and \ref{sec:quatre} are rather independent from the first two, and dedicated to the proof of three fundamental results within the theory of isochrone potentials. The main tool we use is the Birkhoff normal form (and Birkhoff invariants), which arises in the context of Hamiltonian mechanics. In section \ref{sec:trois}, we briefly introduce (\ref{sec:Binv}) and construct this normal form and these invariants (\ref{sec:Biso} and \ref{sec:Bgen}). We use them in section \ref{sec:quatre} to prove the fundamental theorem of isochrony (\ref{sec:fti}), the Bertrand theorem (\ref{sec:Ber}) and a generalisation of Kepler's third law to all isochrone potentials (\ref{sec:K3}). \\
For convenience, a summary of the notations used in this paper and in the previous ones \cite{SPD,RP.20} is provided in Table \ref{Table}. Several appendices contain mathematical details as well as secondary remarks worthy of interest, but that would otherwise break the natural flow of the arguments.
\section{Isochrone potentials, isochrone orbits} \label{sec:un}
\subsection{Generalities} \label{sec:gen}
Consider, in the 3-dimensional Euclidean space of classical mechanics, a spherically symmetric body of mass density $\rho(r)$, with $r$ a radial coordinate. This body generates a radial potential $\psi(r)$ through the Poisson equation $\Delta \psi=4\pi G \rho$. We are interested in the motion of a test, unit-mass particle orbiting in this radial potential. As is well-known, the spherical symmetry implies that the motion is confined to a plane orthogonal to the angular momentum vector $\vec{L}=\vec{r}\times\vec{v}$. In particular, the norm $\Lambda:=|\vec{L}|$ of the latter is conserved, as is the mechanical energy $\xi=\tfrac{1}{2} |\vec{v}|^2+\psi(r)$. In general, i.e., for a generic potential $\psi(r)$, and when viewed in the 2-dimensional orbital plane, the quantities $(\xi,\Lambda)$ are the only two constants of motion.
If we consider polar coordinates $(r,\theta)$ on the orbital plane, the explicit formulae for $\xi,\Lambda$ can be turned into two ordinary differential equations with respect to time for the coordinate position $(r(t),\theta(t))$ of the particle. These read
\begin{equation} \label{eom}
\frac{1}{2} \biggl(\frac{\mathrm{d} r}{\mathrm{d} t} \biggr)^2 = \xi - \frac{\Lambda^2}{2r^2} - \psi(r)\,, \quad \frac{\mathrm{d} \theta}{\mathrm{d} t} = \frac{\Lambda}{r^2} \, .
\end{equation}
If the motion is bounded, then the function $r(t)$ solving this system must be periodic. Let us call \textit{radial period} the smallest value $T\in\mathbb{R}_+$ such that $r(t+T)=r(t)$ for all $t\geq 0$. The minimum (resp. maximum) values $r_p$ (resp. $r_a$) of the function $r(t)$ then correspond, physically, to the radius at periastron (resp. apoastron). They can be obtained in terms of $(\xi,\Lambda)$ by solving the algebraic equation obtained by setting $\dot{r}=0$ in \eqref{eom}. Without loss of generality, we assume that, at initial time $t=0$, the particle is at periastron and the polar angle is $0$, so that $(r(0),\theta(0))=(r_p,0)$. The initial conditions for the coordinate velocities $(\dot{r}(0),\dot{\theta}(0))$ are then uniquely specified once $\xi,\Lambda$ are fixed, using equations \eqref{eom}.
An explicit formula can be obtained for $T$ by isolating a time element $\mathrm{d} t$ from equation \eqref{eom} and integrating it over one radial period, giving
\begin{equation} \label{defT}
T(\xi,\Lambda) := \sqrt{2}\int_{r_p}^{r_a} \biggl( \xi-\frac{\Lambda^2}{2r^2} - \psi(r)\biggr)^{-1/2} \mathrm{d} r \, .
\end{equation}
With the radial period $T$, another quantity of interest is the \textit{apsidal angle} $\Theta$, defined as the variation of polar angle $\theta$ during one radial period, namely $\Theta := \theta(t+T) - \theta(t)$. From equation \eqref{eom}, $\Theta$ is a constant. Integrating the equation $\dot{\theta}(t)=\Lambda/r^2$ over one radial period then gives an explicit, integral expression for $\Theta$:
\begin{equation} \label{defTheta}
\Theta(\xi,\Lambda) := \sqrt{2}\Lambda \int_{r_p}^{r_a} \biggl( \xi-\frac{\Lambda^2}{2r^2} - \psi(r)\biggr)^{-1/2} \frac{\mathrm{d} r}{r^2} \, .
\end{equation}
In equations \eqref{defT} and \eqref{defTheta}, both quantities $T$ and $\Theta$ have a functional dependence on the potential $\psi$, and also depend on both constants of motion $(\xi,\Lambda)$. Now we are ready to state what makes a potential $\psi(r)$ isochrone.
\subsection{Isochrone potentials}\label{sec:gauge}
A potential $\psi$ being fixed, both $T$ and $\Theta$ now depend on the two constants of motion $(\xi,\Lambda)$. A radial potential $\psi(r)$ is said to be \textit{isochrone} when all bounded orbits within this potential are such that $T$ is independent of the angular momentum $\Lambda$:
\begin{equation} \label{defiso}
\psi(r) \text{ is isochrone } \Leftrightarrow \text{ } T(\xi,\Lambda) \text{ is independent of } \Lambda.
\end{equation}
Fundamentally, the definition of isochrony is thus encoded in the radial period $T$. However, as noticed initially in \cite{SPD}, the isochrone property of $\psi$ can be equivalently encoded in the angular part of the dynamics, via a condition on the apsidal angle $\Theta$. Indeed, we will crucially rely on the following, equivalent characterisation:
\begin{equation} \label{defisoT}
\psi(r) \text{ is isochrone } \Leftrightarrow \text{ } \Theta(\xi,\Lambda) \text{ is independent of } \xi.
\end{equation}
Notice the duality between \eqref{defiso} and \eqref{defisoT}. The equivalence between the two directly follows from considerations about a special set of action-angle variables, which we will detail in section \ref{sec:hamiso}.
Regarding both academic purposes and physical applications, the two most important radial potentials are without doubt the Kepler potential $\psi_{\rm{Ke}}$ and the harmonic potential $\psi_{\rm{Ha}}$, defined by
\begin{equation} \label{KeHa}
\psi_{\rm{Ke}}(r)=-\frac{\mu}{r} \quad \text{and} \quad \psi_{\rm{Ha}}(r)=\frac{1}{8}\omega^2 r^2 \,,
\end{equation}
with $\mu$ and $\omega$ two constants characterising the mass sourcing these potentials. These two potentials \textit{are} isochrone. This can be readily seen from the radial period of a particle orbiting in these potentials. They read, respectively,
\begin{equation} \label{periods}
T_{\rm{Ke}} = \frac{2\pi\mu}{(-2\xi)^{3/2}} \quad \text{and} \quad T_{\rm{Ha}} = \frac{2\pi}{\omega} \,.
\end{equation}
The first equation in \eqref{periods} is Kepler's celebrated third law of motion, and the second explains our choice of normalisation for the harmonic potential (so that $\omega$ coincides with the radial angular frequency). Physically, all orbits in either of these two potentials are perfectly closed ellipses. This remarkable property holds for, and only for, these two potentials, a result known as Bertrand's theorem \cite{Arn,BiTr}. We shall come back to this and provide a proof of this fundamental theorem within the context of isochrony, in section \ref{sec:quatre}. As a consequence of this, it is not surprising that the apsidal angle for these two potentials is a rational multiple of $\pi$, which reads explicitly:
\begin{equation} \label{Thetas}
\Theta_{\rm{Ke}} = 2\pi \,, \quad \text{and} \quad \Theta_{\rm{Ha}} = \pi \,.
\end{equation}
As we can see, the isochrone properties $T=T(\xi)$ and $\Theta=\Theta(\Lambda)$ are indeed verified in equations \eqref{periods} and \eqref{Thetas}, demonstrating the isochrone character of the Kepler and harmonic potentials.
There exist, however, other isochrone potentials. The most well-known is probably the one discovered by Michel Hénon \cite{HeI.59,HeII.59}, which is usually called \textit{the} isochrone. It is given by
\begin{equation}\label{He}
\psi_{\rm{He}}(r)=-\frac{\mu}{\beta+\sqrt{\beta^2+r^2}} \,,
\end{equation}
where $\beta$ is a positive constant. Notice that when $\beta=0$, $\psi_{\rm{He}}$ reduces to the Kepler potential $\psi_{\rm{Ke}}$. It is preferable to refer to \eqref{He} as the \textit{Hénon} potential, reserving the qualifier \textit{isochrone} for any potential with the defining property ``$T$ independent of $\Lambda$''. Two other classes of isochrone potentials exist, called the \textit{Bounded} potentials and the \textit{Hollowed} potentials. They were put forward and discussed in \cite{SPD} and \cite{RP.20}, respectively. They are given by
\begin{equation}\label{BoHo}
\psi_{\rm{Bo}}(r)=\frac{\mu}{\beta+\sqrt{\beta^2-r^2}} \,, \quad \text{and} \quad \psi_{\rm{Ho}} = -\frac{\mu}{r^2}\sqrt{r^2-\beta^2} \,.
\end{equation}
The most important feature of these potentials is that, contrary to $\psi_{\rm{Ha}}$ and $\psi_{\rm{He}}$, they are not defined for all $r\in\mathbb{R}_+$. Indeed, $\psi_{\rm{Bo}}(r)$ is only defined for $0\leq r\leq\beta$, whence the name \textit{Bounded} potential. Whatever the initial conditions of motion, a Bounded potential confines the motion of the particle within the 3D ball defined by $r\leq \beta$, and could thus provide an effective toy model for physically confined systems such as quarks in baryons \cite{Mu.93}.
On the other hand, $\psi_{\rm{Ho}}(r)$ is defined only when $r\geq\beta$
and is thus, in a sense, complementary to the Bounded class $\psi_{\rm{Bo}}$: where a particle in the Bounded potential cannot cross the $r=\beta$ sphere from within, a particle in the Hollowed potential $\psi_{\text{Ho}}$ cannot cross it from the outside. This class of potentials could therefore be used to model classical, self-gravitating systems with central singularities, like dark matter halos \cite{Me.06}.\\
Finally, let us mention that it is possible to add to any isochrone potential a term of the form $\varepsilon+\frac{\lambda}{2r^2}$ where $(\varepsilon,\lambda)\in\mathbb{R}^2$. In other words, if $\psi(r)$ is isochrone, then $\psi(r)+\varepsilon+\frac{\lambda}{2r^2}$ is also isochrone. This additional term can always be interpreted as a shift in angular momentum and energy of the particle, and has therefore been coined an $(\varepsilon,\lambda)$-gauge \cite{SPD}. This classification of isochrone potentials into four families $(\psi_{\rm{Ha}},\psi_{\rm{He}},\psi_{\rm{Bo}},\psi_{\rm{Ho}})$ up to a gauge term was presented thoroughly in \cite{SPD} and enjoys a remarkable group structure. We will not make additional comments on it in the remainder of the article. In particular, the formalism used here (explained in the next section) will encompass all isochrone potentials at once, and we will not need to distinguish between the different classes, or the presence of a gauge.
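Before moving on, we note that the defining property \eqref{defiso} is easy to probe numerically. The following short Python sketch (our own illustration, not part of the original analysis; it assumes NumPy and SciPy are available, and the values of $\mu$, $\beta$, $\xi$ and $\Lambda$ are arbitrary) evaluates the integral \eqref{defT} for the Hénon potential \eqref{He} at fixed energy and several angular momenta; the resulting periods agree to numerical precision, as expected for an isochrone potential. The substitution $r = r_p + (r_a-r_p)\sin^2 u$ is used only to regularise the inverse-square-root behaviour of the integrand at the turning points.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Radial period T(xi, Lambda) of eq. (defT) for the Henon potential (He).
mu, beta = 1.0, 0.5
psi = lambda r: -mu/(beta + np.sqrt(beta**2 + r**2))

def radial_period(xi, Lam):
    f = lambda r: xi - Lam**2/(2.0*r**2) - psi(r)   # vanishes at r_p and r_a
    grid = np.linspace(1e-3, 50.0, 20000)
    allowed = grid[f(grid) > 0.0]                   # radii accessible to the orbit
    rp = brentq(f, 1e-4, allowed[0])                # periastron
    ra = brentq(f, allowed[-1], 100.0)              # apoastron
    # r = rp + (ra - rp)*sin(u)^2 regularises the end-point singularities
    def integrand(u):
        r = rp + (ra - rp)*np.sin(u)**2
        return 2.0*np.sqrt(2.0)*(ra - rp)*np.sin(u)*np.cos(u) \
               / np.sqrt(max(f(r), 1e-15))
    return quad(integrand, 0.0, 0.5*np.pi)[0]

xi = -0.3
for Lam in (0.2, 0.5, 0.8):
    print(Lam, radial_period(xi, Lam))   # the three values of T should coincide
\end{verbatim}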
\subsection{Isochrone parabolae}\label{sec:one}
What do all isochrone potentials have in common, besides the $T=T(\xi)$ property that defines them? The answer is that they are characterised by a central geometric property when viewed in other variables than $r$. This fundamental result was covered in \cite{SPD,RP.20}, but for the sake of consistency and to introduce our notations, we review it briefly in this section. \\
Introducing the so-called Hénon variable $x=2r^2$ and the potential $Y(x)$ via $Y(2r^2)=2r^2\psi(r)$, the radial equation of motion \eqref{eom} can be rewritten as
\begin{equation} \label{eomx}
\frac{1}{16} \left(\frac{\mathrm{d} x}{\mathrm{d} t}\right)^2 = \xi x-\Lambda^2 - Y(x) \,.
\end{equation}
The radial motion $r(t)$ of an orbit corresponds to a solution $x(t)$ of equation \eqref{eomx}, through $x(t)=2r(t)^2$. As readily seen from equation \eqref{eomx}, the condition $\dot{x}=0$ corresponds to solutions $x$ of the algebraic equation $\xi x-\Lambda^2=Y(x)$, i.e., to intersections between the line $y=\xi x-\Lambda^2$ and the curve $y=Y(x)$, in the $(x,y)$-plane. These turning points are nothing but the periastron $x_p:=2r_p^2$ and apoastron $x_a:=2r_a^2$ of the orbit, in the $x$ variable. When there is only one intersection, at some abscissa $x_c=2r_c^2$, the associated orbit is circular. From this point of view, the variables $(x,Y(x))$ are particularly useful compared to $(r,\psi(r))$, as they allow one to make a one-to-one geometric correspondence between a particle, described by $(\xi,\Lambda)$, and a given potential $\psi(r)$, described by $Y(x)$. This procedure allows one to construct orbits very easily in the $(x,y)$-plane, as depicted in figure \ref{fig:Henon}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.65\linewidth]{Henon.png}
\caption{Construction of two orbits in the $(x,y)$-plane. The periastron and apoastron $r_p$ and $r_a$ of the physical orbit are such that $x_p=2r_p^2$ and $x_a=2r_a^2$, given by the intersections between the curve $y=Y(x)$ (potential) and the line $y=\xi x-\Lambda^2$ (particle). For a given $\Lambda$, there is a value $\xi_c(\Lambda)$ such that the orbit is circular, of radius $r_c$ such that $x_c=2r_c^2$. \label{fig:Henon}}
\end{figure}
In this article, one of the main goals is to prove the \textit{fundamental theorem of isochrony}, which lies at the core of each and every result regarding isochrone potentials \cite{SPD,RP.20}. In terms of the Hénon variable $x=2r^2$ and the associated potential $Y(x)=x\psi(r(x))$, it is simply stated as:
\begin{equation} \label{thm}
\psi(r) \text{ is isochrone } \Leftrightarrow \text{ } Y(x) \text{ is a convex arc of parabola.}
\end{equation}
Although this fundamental result was somewhat proven by Michel Hénon in \cite{HeI.59}, it is only in \cite{SPD} that a rigorous proof was provided, using complex analysis. In \cite{RP.20}, it was linked to an intrinsic geometric property verified by parabolae that dates back to Archimedes. These proofs may be somewhat unsatisfying, since they rely on tools outside the scope of classical mechanics. It is our aim in sections \ref{sec:trois} and \ref{sec:quatre} to show that the result \eqref{thm} is actually naturally encoded in the Hamiltonian formulation of the problem. As a consistency check, one may readily verify that the Kepler and harmonic potentials \eqref{KeHa} are expressed in the $x$ variable as:
\begin{equation} \label{KeHax}
Y_{\rm{Ke}}(x)=-\mu\sqrt{2x} \,,\quad \text{and} \quad Y_{\rm{Ha}}(x)=\frac{1}{16}\omega^2 x^2 \,.
\end{equation}
Both curves in \eqref{KeHax} are, indeed, arcs of parabolae. The fundamental result \eqref{thm} allows for an easy and complete classification of isochrone potentials, simply by classifying the parabolae in the plane. This was developed thoroughly in \cite{RP.20}; we review it briefly here.
The general definition for a parabola in the $(x,y)$ plane is an implicit, second-order equation of the form
\begin{equation} \label{parabolaimplicit}
(ax + by)^2 + cx + dy + e=0 \, ,
\end{equation}
with coefficients $(a,b,c,d,e)\in\mathbb{R}^5$. For any parabola, these five coefficients can always be chosen such that the discriminant $\delta:=ad-bc$ of the parabola is strictly positive. When $b=0$, solving equation \eqref{parabolaimplicit} for $y$ gives a quadratic polynomial in $x$, the convex part\footnote{Isochrone potentials must correspond to a convex arc of parabola since $y=\xi x-\Lambda^2$ intersects $y=Y(x)$ twice (this is therefore a chord of $Y(x)$), and from \eqref{eomx} it follows that $\xi x-\Lambda^2\geq Y(x)$, i.e., the chord is above the curve, whence the convexity.} of which reads
\begin{equation} \label{har}
Y(x)=-\frac{c}{d} x - \frac{e}{d} - \frac{a^2}{d}\, x^2 \,.
\end{equation}
The affine part (first two terms on the right) in equation \eqref{har} corresponds to the addition of a constant and a centrifugal-like term in the potential $\psi(r)$, i.e., to the gauge mentioned at the end of section \ref{sec:gauge}. The quadratic term corresponds to the usual harmonic potential, as in \eqref{KeHax}. The class of upright parabolae ($d < 0$ in \eqref{har}) corresponds to the harmonic class of isochrone potentials, all defined up to an affine term (in $x$) or a gauge (in $r$) (recall the second paragraph below \eqref{BoHo}).
When $b\neq 0$, the same procedure, namely solving equation \eqref{parabolaimplicit} for $y$ and keeping the convex part, yields
\begin{equation} \label{otr}
Y(x)=-\frac{a}{b} x - \frac{d}{2b^2} - \frac{\sqrt{b\delta(x-x_v)}}{b^2}\,, \quad x_v:=\frac{4b^2e-d^2}{4b\delta} \,.
\end{equation}
Since $\delta>0$ and the argument of the square root must be positive, the different combinations of signs for $(b,x_v)$ determine three classes of parabolae: $(b>0,x_v<0)$, $(b<0,x_v>0)$ and $(b>0,x_v>0)$ for the Hénon, Bounded and Hollowed classes of parabolae, respectively. In particular, for each of these three cases, we can combine the definition $Y(2r^2)=2r^2\psi(r)$ with equation \eqref{otr} to find the three classes of potentials mentioned in \eqref{He} and \eqref{BoHo}; the parameters $\mu,\beta$ appearing there being given explicitly in terms of the Latin parameters $(a,b,c,d,e)$. For example, the Kepler potential $\psi_{\text{Ke}}$ corresponds to
\begin{equation} \label{Keplimit}
\phantom{i\quad\quad\quad} (a,b,c,d,e) \,\, = \,\, (0,1,-2\mu^2,0,0) \,, \quad \quad \text{(Kepler)}
\end{equation}
For the other classes of potentials, the relation between the Greek parameters $(\omega,\mu,\beta)$ of \eqref{He} and \eqref{BoHo} and the Latin ones $(a,b,c,d,e)$ may be found, e.g., in section III.B.\textit{3} of \cite{RP.20}. A complete summary of the different types of isochrone potentials can be found in \cite{RP.20} (see in particular figure 7 there). Geometrically, the sign of $b$ determines whether the parabola opens right or left, and $x_v$ is the abscissa of the point where the tangent to the parabola is vertical, as summarised in figure \ref{fig:parabolas}. In passing, we note again that each parabola in \eqref{otr} is defined up to an affine term, much like the harmonic class \eqref{har}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.85\linewidth]{parabolas.png}
\caption{Three classes of (non-harmonic) isochrone parabolae. Left-orientation ($b<0$) corresponds to the Bounded class. Right-orientation ($b>0$) corresponds to the H\'{e}non class ($x_v<0$) or the Hollowed class ($x_v>0$). For each parabola, the physical part (i.e., where the associated $\psi(r)$ is well-defined) is highlighted in red. \label{fig:parabolas}}
\end{figure}
In the remainder of the paper (sections \ref{sec:deux} to \ref{sec:quatre}), we shall exclusively be using the Latin parameters $(a,b,c,d,e)$ that define a parabola implicitly through \eqref{parabolaimplicit}, so as to state our results for any isochrone potential. However, since they are very well-known and easily derived, we will relegate results about the harmonic class \eqref{har} to appendix \ref{app:KeHo} and only consider the three other families of isochrone potentials (with $b\neq 0$) given in equation \eqref{otr}. \\
We end this first section with a summary. Isochrone potentials are radial potentials in which test particles orbit with a radial period $T$ independent of their angular momentum $\Lambda$. Any (and every) isochrone potential is of the form $r\mapsto \epsilon + \frac{\lambda}{2r^2} + \psi(r)$, where $(\epsilon,\lambda)\in\mathbb{R}^2$ and $\psi$ belongs to one of the four classes $(\psi_{\rm{Ha}},\psi_{\rm{He}},\psi_{\rm{Bo}},\psi_{\rm{Ho}})$, given in the above equations. The Harmonic class $\psi_{\rm{Ha}}$ depends on one parameter $\omega\in\mathbb{R}$, whereas the three other classes $(\psi_{\rm{He}},\psi_{\rm{Bo}},\psi_{\rm{Ho}})$ depend on two $(\mu,\beta)\in\mathbb{R}_+^2$. They all have in common the property that the curve $y=Y(x)$ describes a parabola in the $(x,y)$-plane, where $x=2r^2$ is called the Hénon variable and $Y(x)=x\psi(r(x))$. This fundamental result (isochrone $\Leftrightarrow$ parabola) is referred to as the \textit{fundamental theorem of isochrony}, a proof of which will be given in section \ref{sec:quatre}. Therefore, all isochrone potentials can be represented by an implicit, 5-parameter curve \eqref{parabolaimplicit} depicting a parabola in the $(x,y)$-plane. These are the Latin parameters $(a,b,c,d,e)$ that will be used in the next sections to solve analytically the problem of motion in each and every isochrone potential.
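For readers who wish to experiment with these classes numerically, the following helper (a minimal sketch of ours, assuming NumPy; the function name \texttt{Y\_of\_x} is merely illustrative) implements the convex branches \eqref{har} and \eqref{otr} directly from the Latin parameters, and checks that the Kepler parameters \eqref{Keplimit} reproduce $Y_{\rm{Ke}}(x)=-\mu\sqrt{2x}$ of equation \eqref{KeHax}.
\begin{verbatim}
import numpy as np

# Convex branch Y(x) of the parabola (a*x + b*y)^2 + c*x + d*y + e = 0,
# following eqs. (har) (b = 0) and (otr) (b != 0).
def Y_of_x(x, a, b, c, d, e):
    if b == 0.0:                                   # harmonic class, eq. (har)
        return -(c/d)*x - e/d - (a**2/d)*x**2
    delta = a*d - b*c                              # discriminant, assumed > 0
    xv = (4*b**2*e - d**2)/(4*b*delta)             # abscissa of the vertical tangent
    return -(a/b)*x - d/(2*b**2) - np.sqrt(b*delta*(x - xv))/b**2

# The Kepler parameters (Keplimit) should give Y_Ke(x) = -mu*sqrt(2x), eq. (KeHax).
mu, x = 2.0, 3.7
print(Y_of_x(x, 0.0, 1.0, -2*mu**2, 0.0, 0.0), -mu*np.sqrt(2*x))
\end{verbatim}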
\section{Hamiltonian solutions to the problem of motion} \label{sec:deux}
In the gravitational two-body problem of classical mechanics (see chapter 2 in \cite{BoPuV1} for a nice exposition), the orbit is a perfect ellipse. Therefore, an explicit, analytic polar equation $r(\theta)$ can be found. No analytic solution can be found in the form $(r(t),\theta(t))$, where $t$ is the time; it is possible, however, to find a parametric solution for all three, namely $(r(E),\theta(E),t(E))$, in terms of the so-called eccentric anomaly $E$ (these classical results are recalled in appendix \ref{app:KeHo}). In this section, we derive a series of formulae that are closely related to this parametric solution of the Keplerian problem, but that are actually valid for any isochrone orbit (meaning any orbit in any isochrone potential). Quite remarkably, all these formulae can be derived analytically in terms of (1) the properties of the particle $(\xi,\Lambda)$ and (2) the properties of the isochrone potential $(a,b,c,d,e)$. We derive these formulae, and compare them to the Keplerian case to motivate generalised definitions. It should be noted that some of the following results were proposed in slightly different forms as ``useful formula for numerical methods'' in appendix A of \cite{McGBi.90}, and in section 5.3 of \cite{BoPuV1}. In both cases, this concerns only the (non-gauged) Hénon potential, and not the whole class of isochrone potentials.
\subsection{Hamiltonian and action-angle variables}\label{sec:hamiso}
From now on, we consider the isochrone problem from the point of view of Hamiltonian mechanics. But first, let us be more general and let $H$ be the Hamiltonian of the system made of a particle in a generic radial potential $\psi(r)$, not necessarily isochrone. In terms of the polar coordinates adapted to the orbital plane $(r,\theta)$, the canonical momenta $(p_r,p_\theta)$ simply read $(\dot{r},\Lambda)$, as is well-known (we set the mass of the particle to $1$). The constancy of the angular momentum $\Lambda$ then follows from the fact that $\theta$ is a cyclic variable. Indeed, in these variables the Hamiltonian reads
\begin{equation} \label{H}
H(r,\theta,p_r,p_\theta)=\frac{1}{2} \biggl( p_r^2 + \frac{p_\theta^2}{r^2}\biggr) + \psi(r) \, .
\end{equation}
Now we consider the problem in terms of action-angle variables. Actions may be constructed in a systematic manner by using the Poincaré invariants $J_i:=\tfrac{1}{2\pi}\oint p_i \mathrm{d} q_i$, where $i\in\{r,\theta\}$ and the integral is performed over any closed curve in phase space followed during one orbital transfer (e.g., from one periastron to the following) \cite{BiTr,Arn}. For the angular part, this is almost tautological: $J_\theta:=\tfrac{1}{2\pi}\oint p_\theta \mathrm{d} \theta = \Lambda$, which is a constant of motion. For the radial part, a quick computation provides
\begin{equation} \label{j}
J_r:=\frac{1}{2\pi} \oint p_r \mathrm{d} r \quad \Rightarrow \quad J_r=\frac{\sqrt{2}}{\pi} \int_{r_p}^{r_a} \biggl( \xi-\frac{\Lambda^2}{2r^2} - \psi(r)\biggr)^{1/2} \mathrm{d} r \,,
\end{equation}
where $r_p$ and $r_a$ are the periastron and apoastron radii, respectively, and to get the second identity we simply integrated $p_r=\dot{r}$ as given in \eqref{eom}. We now have a set of actions which we will denote $(J_r,J_\theta)=(J,\Lambda)$ from now on, for simplicity and without risk of confusion.
By definition of action-angle variables, the Hamiltonian $H$ of the system is independent of the angles. We will now show that, under the assumption of isochrony, an explicit expression for $H=H(J,\Lambda)$ can be obtained.\\
First, notice that, as provided in \eqref{j}, the radial action $J$ generally depends on both constants of motion $(\xi,\Lambda)$. Taking the partial derivatives of the rightmost equation in \eqref{j}, and comparing the result with the definitions \eqref{defT} and \eqref{defTheta} reveals\footnote{Formulae \eqref{derJ} are true in general, and an explicit formula such as the r.h.s.\ of \eqref{j} is not necessary to derive \eqref{derJ} from $J=\tfrac{1}{2\pi}\oint p_r \mathrm{d} r$. Fundamentally, this can be understood from the fact that, locally around the equilibrium (circular orbit), the pair $(H,t)$ itself defines symplectic coordinates (see \cite{Fe.13} for more details).} that \cite{SPD}
\begin{equation} \label{derJ}
\frac{T}{2\pi} = \frac{\partial J}{\partial \xi} \quad \text{and} \quad \frac{\Theta}{2\pi} = - \frac{\partial J}{\partial \Lambda} \, .
\end{equation}
The identities \eqref{derJ} are true for any radial potential, not necessarily isochrone. However, in the case of isochrony, by definition \eqref{defiso} we have $\partial_\Lambda T=0$, while \eqref{derJ} implies that $\partial_{\Lambda}T =- \partial_{\xi}\Theta$ (by swapping the order of derivatives using Schwarz's theorem). Therefore, we see that $\partial_\Lambda T=0\Rightarrow\partial_{\xi}\Theta=0$ and conversely, thus recovering the equivalence between \eqref{defiso} and \eqref{defisoT} mentioned in section \ref{sec:gauge}.
From now on, we assume that the potential is isochrone. Consequently, we have $T=T(\xi)$ and $\Theta=\Theta(\Lambda)$. Therefore, the PDEs in \eqref{derJ} are easily integrated and combined to give
\begin{equation} \label{RA}
J(\xi,\Lambda) = \frac{1}{2\pi}\int \!T(\xi)\mathrm{d} \xi - \frac{1}{2\pi}\int \!\Theta(\Lambda)\mathrm{d} \Lambda \,,
\end{equation}
where any antiderivative can be considered at this stage. We emphasise that whereas \eqref{derJ} holds for any $\psi$, equation \eqref{RA} only holds for isochrone $\psi$. To make more progress towards the explicit expression of the isochrone Hamiltonian, we need to refer to \cite{RP.20} where it was shown (based on geometric arguments on parabolae) that for \textit{any} particle of energy and angular momentum $(\xi,\Lambda)$ orbiting in \textit{any} isochrone potential, parametrized by $(a,b,c,d,e)$ as explained in section \ref{sec:one}, the radial period $T$ admits a closed-form expression, given by
\begin{equation} \label{T}
T^2 = - \frac{\pi^2}{4} \frac{\delta}{(a+b\xi)^3} \,,
\end{equation}
where we recall that $\delta:=ad-bc>0$. Of course, $T$ as given by equation \eqref{T} does not depend on the angular momentum $\Lambda$ of the particle, by definition of an isochrone potential (cf \eqref{defiso}). Also derived in \cite{RP.20} was a similar formula for the apsidal angle $\Theta$, which reads
\begin{equation} \label{Theta}
\frac{\Theta^2}{\pi^2\Lambda^2} = \frac{2b^2\Lambda^2-d}{b^2\Lambda^4-d\Lambda^2+e} + \frac{2b}{\sqrt{b^2\Lambda^4-d\Lambda^2+e}} \,,
\end{equation}
and which is indeed independent of the energy $\xi$ (cf \eqref{defisoT}). With the help of formulae \eqref{T} and \eqref{Theta}, we can explicitly integrate\footnote{Although \eqref{T} and \eqref{Theta} hold for any isochrone, including the harmonic class, starting from equation \eqref{action} most expressions differ in the harmonic case (cf appendix \ref{app:KeHo}), because of the condition $b=0$.} equation \eqref{RA} and obtain
\begin{equation} \label{action}
J(\xi,\Lambda) = \frac{1}{2b} \sqrt{\frac{-\delta}{a+b\xi}} - \frac{R(\Lambda)}{2b} \,,
\end{equation}
where for convenience we introduced the function $R(\Lambda)$ independent of $\xi$ and given by
\begin{equation} \label{R}
R(\Lambda) := \sqrt{2b^2\Lambda^2 - d + 2b \sqrt{b^2\Lambda^4 - d\Lambda^2 + e}} \,.
\end{equation}
It should be noted that while performing the integrals from \eqref{RA} to \eqref{action}, a constant of integration should be included in the latter expression. However, that constant can be shown to vanish since $J$ should reduce to the well-known \cite{BiTr} radial action $J_{\text{Ke}} = \mu/\sqrt{-2\xi}-\Lambda$ in the Kepler potential $\psi(r)=-\mu/r$ (which is isochrone), corresponding to the limit \eqref{Keplimit}. Equation \eqref{action} gives an exact formula for the radial action of all non-harmonic isochrone potentials. For the harmonic class ($b=0$), the computation is given in appendix \ref{app:KeHo} (see equation \eqref{Jhar} there).\\
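As a cross-check (ours, not part of the derivation), the closed form \eqref{action} can be compared to a direct quadrature of \eqref{j}. The Python sketch below (assuming NumPy and SciPy, with arbitrary values of $\mu$, $\xi$ and $\Lambda$) does so in the Keplerian case \eqref{Keplimit}, where both should also agree with the quoted value $J_{\text{Ke}}=\mu/\sqrt{-2\xi}-\Lambda$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Closed-form radial action, eq. (action), versus a quadrature of eq. (j),
# in the Kepler case (Keplimit) where psi(r) = -mu/r.
mu, xi, Lam = 1.0, -0.4, 0.6
a, b, c, d, e = 0.0, 1.0, -2.0*mu**2, 0.0, 0.0
delta = a*d - b*c

R = np.sqrt(2*b**2*Lam**2 - d + 2*b*np.sqrt(b**2*Lam**4 - d*Lam**2 + e))  # eq. (R)
J_closed = np.sqrt(-delta/(a + b*xi))/(2*b) - R/(2*b)                     # eq. (action)

f = lambda r: xi - Lam**2/(2*r**2) + mu/r       # radicand of eq. (j)
rp = brentq(f, 1e-6, -mu/(2*xi))                # the turning points bracket the
ra = brentq(f, -mu/(2*xi), 1e3)                 # Keplerian semi-major axis -mu/(2 xi)
J_quad = np.sqrt(2)/np.pi*quad(lambda r: np.sqrt(max(f(r), 0.0)), rp, ra)[0]

print(J_closed, J_quad, mu/np.sqrt(-2*xi) - Lam)   # all three should agree
\end{verbatim}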
Going back to the Hamiltonian $H(J,\Lambda)$, for any pair $(J,\Lambda)$ corresponding to a well-defined orbit, the numerical value of $H$ is actually the energy $\xi$ of the particle. Therefore, we may solve equation \eqref{action} for $\xi$ in terms of $(J,\Lambda)$, to obtain the expression of $H(J,\Lambda)$. This readily gives
\begin{equation} \label{ham}
H(J,\Lambda) = -\frac{a}{b}-\frac{\delta}{b\bigl(2bJ + R(\Lambda) \bigr)^2} \,,
\end{equation}
where $R(\Lambda)$ is given in \eqref{R}. Equation \eqref{ham} provides the general expression for the Hamiltonian of a particle in any non-harmonic isochrone potential in action-angle variables (see equation \eqref{Hhar} of appendix \ref{app:KeHo} for the harmonic class). In the Keplerian limit \eqref{Keplimit}, we recover the Hamiltonian of the classical two-body problem in terms of the Delaunay variables (see e.g. equation (E.1) of \cite{BiTr}). It coincides with (and generalises) the Hamiltonian of the Hénon potential as discussed in section 3.5.2 of \cite{BiTr}. In action-angle variables, the equations of motion for the isochrone orbit are in their simplest form, given by the constancy of $(J,\Lambda)$ and the linear-in-time evolution of the associated angles, namely
\begin{equation} \label{angles}
z_J(t) = \omega_J t + z_J(0) \quad \text{and} \quad z_{\Lambda}(t) = \omega_{\Lambda} t + z_{\Lambda}(0) \,,
\end{equation}
where the Hamiltonian frequencies $\omega_i$ read, by definition,
\begin{equation}
\omega_J := \frac{\partial H}{\partial J} \quad \text{and} \quad \omega_\Lambda := \frac{\partial H}{\partial \Lambda} \,.
\end{equation}
In action-angle variables, the four-dimensional phase space can be represented by embedding a torus of radii $J,\Lambda$ in $\mathbb{R}^3$, allowing for a particularly nice representation (see figure \ref{fig:torus}). However, relating the angle variables to the polar coordinates $(r,\theta)$ remains to be done. The easiest way is to express the time $t$ that appears in \eqref{angles} in terms of $(r,\theta)$. This will enable us to derive a generalisation of the Kepler equation and Kepler's third law, as well as true/eccentric anomaly relations, which we explore in the next subsection. A straightforward computation from equation \eqref{ham} reveals that the frequencies read
\begin{equation} \label{freq}
\omega_J = \frac{4\delta}{\bigl(2bJ+R(\Lambda)\bigr)^3} \quad \text{and} \quad \omega_\Lambda = \frac{2\delta R'(\Lambda)}{b\bigl(2bJ+R(\Lambda)\bigr)^3} \,.
\end{equation}
Whenever the orbit is closed in real space, it is also closed in phase space; consequently, the ratio of the two frequencies must then be a rational number. Indeed, computing the ratio $\omega_\Lambda/\omega_J$ using equations \eqref{freq} and comparing the result to equation \eqref{Theta} readily gives
\begin{equation} \label{ratiofreq}
\frac{\omega_\Lambda}{\omega_J} = \frac{\Theta(\Lambda)}{2\pi} \,.
\end{equation}
It is quite remarkable that all isochrone potentials admit a universal, closed-form expression for the Hamiltonian in action-angle variables. Coupled with the large variety of properties (recall section \ref{sec:gauge}) that these potentials offer, this provides a remarkable pedagogical tool for teaching Hamiltonian mechanics, with many more applications than the usual harmonic oscillator and two-body problem. From the fundamental point of view, it is very tempting to build toy models for different types of systems in terms of isochrone potentials, as it is rather rare to have analytic expressions available for entire systems. In fact, analyticity has probably been the main reason for the success of the Hénon potential in this context, as a generalisation of the Kepler potential with closed-form expressions. We emphasise that this characteristic (availability of closed-form formulae) holds for all isochrones.
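The consistency between \eqref{ham}, \eqref{freq} and \eqref{ratiofreq} can also be verified symbolically. The following sketch (ours; it assumes SymPy, treats $\delta$ as an independent positive symbol, and the substituted numbers are arbitrary positive test values) differentiates the Hamiltonian \eqref{ham} and checks that the frequency ratio reproduces $\Theta/2\pi$ as given by \eqref{Theta}.
\begin{verbatim}
import sympy as sp

# Differentiate the Hamiltonian (ham) and check the frequency ratio (ratiofreq)
# against the apsidal angle (Theta). delta is treated as an independent symbol.
a, b, d, e, J, L, delta = sp.symbols('a b d e J Lambda delta', positive=True)

R = sp.sqrt(2*b**2*L**2 - d + 2*b*sp.sqrt(b**2*L**4 - d*L**2 + e))   # eq. (R)
H = -a/b - delta/(b*(2*b*J + R)**2)                                  # eq. (ham)
omega_J, omega_L = sp.diff(H, J), sp.diff(H, L)

P = b**2*L**4 - d*L**2 + e
Theta = sp.pi*L*sp.sqrt((2*b**2*L**2 - d)/P + 2*b/sp.sqrt(P))        # eq. (Theta)

vals = {a: 1.0, b: 0.7, d: 1.0, e: 3.0, delta: 2.5, J: 0.4, L: 1.3}  # test values
print(sp.N((omega_L/omega_J - Theta/(2*sp.pi)).subs(vals)))          # ~ 0
\end{verbatim}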
\begin{figure}[!htbp]
\includegraphics[width=0.7\linewidth]{tore.jpg}
\caption{One torus $(J,\Lambda)$ in the phase space, depicting a generic (non-circular) orbit (black curve). The circular orbit ($J=0$) with the same angular momentum $\Lambda$ is depicted in red. \label{fig:torus}}
\end{figure}
\subsection{Kepler equation and eccentric anomaly} \label{sec:ecc}
In \cite{RP.20}, it was shown that the equations of motion for a subclass of isochrone orbits (namely those associated with parabolae crossing the origin of the $(x,y)$-plane) could be integrated analytically in a parametric expression of the type $(r(s),\theta(s))$ for some parameter $s\in\mathbb{R}$. The parameter used in these expressions was then a purely mathematical quantity, bearing, \textit{a priori}, no physical meaning. In particular, these parametric equations were obtained by relating any isochrone orbit to a Keplerian one, through a linear transformation acting on their respective arcs of parabolae, in the $(x,y)$-plane. In this section, we show that these formulae can be obtained (1) by direct integration, (2) without making any assumption as to the subclass of isochrone potentials, (3) such that the parameter $s$ admits a clear, physical interpretation. \\
We suppose that a particle of energy and angular momentum $(\xi,\Lambda)$ orbits an isochrone potential $\psi(r)$. As argued before, $\psi(r)$ is in a one-to-one correspondence with a convex arc of parabola $y=Y(x)$. We start by deriving an explicit formula for the time $t$ elapsed along the orbit. Let $T$ be the radial period and $t\in [0;T/2]$ be an instant between the initial-time periastron $r(0)=r_p$ and apoastron $r(T/2)=r_a$. By isolating $\mathrm{d} t$ in the radial equation of motion \eqref{eomx} and integrating, we readily obtain
\begin{equation} \label{temp}
t = \frac{1}{4}\int_{x_p}^{x} \bigl(a_0 + a_1 x+\sqrt{a_2 x + a_3}\,\bigr)^{-1/2} \mathrm{d} x \,,
\end{equation}
where, for the sake of simplicity, we temporarily introduced the following coefficients that depend on the particle $(\xi,\Lambda)$ and the potential $(a,b,c,d,e)$:
\begin{equation} \label{coefficients}
(a_0,a_1,a_2,a_3)
=\biggl(\frac{d}{2b^2}-\Lambda^2,\xi+\frac{a}{b},\frac{\delta}{b^3},\frac{d^2-4b^2e}{4b^4}\biggr)\,.
\end{equation}
Equation \eqref{temp} is nothing but the time integral \eqref{defT}, written in terms of $x=2r^2$ and for an isochrone $Y(x)$. The change of variables $u=\sqrt{a_2x +a_3}$ then turns the term in parentheses in \eqref{temp} into a pure quadratic, namely
\begin{equation} \label{temp2}
t = \frac{\sqrt{u_0}}{\sqrt{2}a_2}\int_{u_p}^{u} \frac{u\,\mathrm{d} u}{\sqrt{v_0-(u-u_0)^2}} \,,
\end{equation}
where $(u_0,v_0)$ are the coordinates of the apex of that quadratic, given by $u_0 = -a_2/2a_1$ and $v_0 = u_0^2 + 2a_0u_0 + a_3$. Note that in the $u$ variable, the periastron (lower bound of the integral \eqref{temp2}) corresponds to the smallest root of the quadratic in the denominator, namely $u_p=u_0-\sqrt{v_0}$. To integrate equation \eqref{temp2}, we start by turning the quadratic $v_0-(u-u_0)^2$ into canonical form by performing the linear transformation $s=(u-u_0)/\sqrt{v_0}$, so that $s=-1$ when $u=u_p$. This turns \eqref{temp2} into
\begin{equation}\label{intint}
\Omega\,t = \int_{-1}^{s} \frac{1+\epsilon s}{\sqrt{1-s^2}}\,\mathrm{d} s \,, \quad \text{where }\,\,\epsilon=\frac{\sqrt{v_0}}{u_0}\,,\quad\Omega = \frac{\sqrt{2}a_2}{u_0^{3/2}} \,.
\end{equation}
Notice that by construction, $0<\epsilon<1$, since $u_p=u_0-\sqrt{v_0}>0$. The integral in \eqref{intint} may then be finally integrated by defining an angle $E\in[0,\pi]$ such that $s=-\cos E$, with $E=0$ at periastron $(t=0\Leftrightarrow s=-1)$. Integrating in this fashion, we obtain the sought-after expression for $t$, which takes the form of a generalised Kepler equation
\begin{equation} \label{KE}
\Omega \, t = E -\epsilon \sin E \,.
\end{equation}
Equation \eqref{KE} looks exactly like the Kepler equation \eqref{Keplereq} found in the classical two-body problem. The expression of the constants $(\Omega,\epsilon)$ can be given in terms of the generic Latin parameters $(a,b,c,d,e)$ and $(\xi,\Lambda)$ that characterise the potential and the particle, respectively. These expressions read:
\begin{subequations} \label{Omegepsi}
\begin{align}
\Omega^2 &= -16\,\delta^{-1}(a+b\xi)^3 \label{Omega}\,, \\
\epsilon^2 &= 1+2\delta^{-1}(2b^2\Lambda^2-d)(a+b\xi)+\delta^{-2}(d^2-4b^2e)(a+b\xi)^2 \,,
\end{align}
\end{subequations}
where $\delta=ad-bc$. Naturally, we may identify $\epsilon$ and $E$ as the \textit{isochrone eccentricity} and \textit{isochrone eccentric anomaly}, respectively. The isochrone eccentricity verifies $0\leq\epsilon<1$, vanishes only for circular orbits and coincides with the Keplerian eccentricity in the appropriate limit \eqref{Keplimit}. The isochrone eccentric anomaly is a well-defined angle and coincides with its Keplerian counterpart as well (as we will see in the next subsection). It should be stressed that the frequency $\Omega$ also coincides with $2\pi/T$, as we see by comparing \eqref{Omega} and \eqref{T}. In other words, the left-hand side of the generalised Kepler equation \eqref{KE} involves the frequency $\Omega$ of the radial motion $r(t)$. In particular, combining the angle coordinates \eqref{ratiofreq} and the Kepler equation \eqref{KE} provides
\begin{equation} \label{zKep}
z_J = E - \epsilon \sin E \,, \quad z_{\Lambda} = \frac{\Theta}{2\pi} (E - \epsilon \sin E) \,,
\end{equation}
where we have set $(z_J,z_{\Lambda})=(0,0)$ at $t=0$. It is clear from \eqref{zKep} that $z_J$, the angle variable associated with the radial action $J$, in fact generalises the Keplerian mean anomaly.
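In practice, the generalised Kepler equation \eqref{KE} is transcendental in $E$ and must be inverted numerically, exactly as in the two-body problem. A standard Newton iteration suffices; the following sketch (ours, assuming NumPy, with arbitrary values of the mean anomaly and eccentricity) illustrates it. The starting guess switch at large eccentricity is the usual safeguard against slow convergence near periastron.
\begin{verbatim}
import numpy as np

# Newton iteration for the generalised Kepler equation (KE):
# given M = Omega*t and the eccentricity eps, solve M = E - eps*sin(E) for E.
def eccentric_anomaly(M, eps, tol=1e-12, itmax=50):
    E = M if eps < 0.8 else np.pi        # usual starting guesses
    for _ in range(itmax):
        dE = (E - eps*np.sin(E) - M)/(1.0 - eps*np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

M, eps = 1.3, 0.4
E = eccentric_anomaly(M, eps)
print(E, E - eps*np.sin(E) - M)          # the residual should be ~ 0
\end{verbatim}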
\subsection{Parametric polar solution} \label{sec:para}
Now that an eccentric anomaly $E$ has been introduced via Kepler's equation \eqref{KE}, we derive its relation to the orbital radius $r$ (or equivalently $x=2r^2$) and the polar angle $\theta$.
\subsubsection{Radial motion}
For the radial part $r(E)$, we may simply go through the different changes of variables used to compute the integral \eqref{temp} in the last subsection, but in reverse, i.e., $E\mapsto s\mapsto u\mapsto x$. After some easy algebra, we find that\footnote{There is a subtlety in the case $a_2<0$, since then the function $u(x)=\sqrt{a_2 x+a_3}$ is decreasing. This is resolved by keeping track of $\text{sign}(a_2)=\text{sign}(b)$, which results in the $\pm |b|$ in \eqref{xE}.}
\begin{equation} \label{xE}
x(E) = \frac{4b^2e-d^2}{4b\delta} \pm \frac{\delta}{4|b|(a+b\xi)^2}\,(1-\epsilon \cos E)^2\,,
\end{equation}
where $\pm$ corresponds to the sign of $b$. We note that the first term on the right-hand side is actually $x_v$, the abscissa of the point with vertical tangent on the parabola introduced in equation \eqref{otr}. Whenever the potential is Keplerian, then $x_v=0$ (and $b>0$) and we recover the classical link $r=\alpha_{\text{Ke}}(1-\epsilon \cos E)$, where $\alpha_{\text{Ke}}$ is the semi-major axis of the Keplerian ellipse. This motivates the following definition for an \textit{isochrone semi-major axis} $\alpha$ such that the orbital radius $r(E)$ reads
\begin{equation} \label{smaxis}
r(E) = \sqrt{\frac{x_v}{2} \pm \alpha^2(1-\epsilon \cos E)^2}\,, \quad \text{where} \quad \alpha^2 := \frac{\delta}{8|b|(a+b\xi)^2} \,.
\end{equation}
This isochrone semi-major axis is, in general, not related to an ellipse axis, as isochrone orbits are not, in general, ellipses \cite{RP.20}. However, it coincides with its Keplerian counterpart in the proper limit, and comparing equations \eqref{smaxis} with \eqref{Omegepsi} reveals the equality
\begin{equation} \label{third}
\Omega^2 \alpha^3 = \sqrt{\frac{\delta}{2|b|^3}}\,.
\end{equation}
We recognise in \eqref{third} a generalisation of Kepler's third law of motion, relating the square of the orbital frequency to the cube of the semi-major axis. Indeed, the Keplerian limit \eqref{Keplimit} of equation \eqref{third} gives $\Omega^2\alpha^3=\mu$, a well-known formulation of Kepler's third law \cite{BiTr}. In fact, the quantity appearing on the right-hand side has the dimension of a mass and is exactly the mass parameter $\mu$ appearing in equations \eqref{He} and \eqref{BoHo}, see equation (3.12) of \cite{RP.20}.
\subsubsection{Angular motion}
For the angular motion, we adapt the strategy developed in \cite{RP.20} and first construct a differential equation of which $\theta(E)$ is a solution. We can do this with the Leibniz rule as follows
\begin{equation} \label{bup}
\frac{ \mathrm{d} \theta}{\mathrm{d} E} = \frac{ \mathrm{d} \theta}{\mathrm{d} t}\, \frac{\mathrm{d} t}{\mathrm{d} E} = \frac{\Lambda}{\Omega} \frac{1-\epsilon \cos E}{r(E)^2} \,,
\end{equation}
where in the second equality we used the angular equation of motion $\dot{\theta}=\Lambda/r^2$ and the Kepler equation \eqref{KE}. To obtain an expression $\theta(E)$ from \eqref{bup}, we simply need to inject the expression of $r(E)$ given in \eqref{smaxis} and integrate the result. By doing so, we readily obtain
\begin{equation} \label{int0}
\theta(E) = \frac{2\Lambda}{\Omega} \int_0^E \frac{1-\epsilon \cos \phi}{x_v + 2\alpha^2 (1-\epsilon \cos \phi)^2} \mathrm{d} \phi \,,
\end{equation}
where $\alpha$ is given by \eqref{smaxis} and $x_v$ depends only on the potential, cf. \eqref{otr}. When $x_v\leq 0$, which corresponds to the Hénon class of potentials, the integral can be computed explicitly. Indeed, a partial fraction decomposition gives
\begin{equation} \label{int}
\theta(E) = \frac{\Lambda }{2\Omega\alpha^2} \sum_{\pm} \frac{1}{1\pm\zeta} \int_0^E \frac{\mathrm{d} \phi} {1 - \epsilon_{\pm} \cos \phi} \,,
\end{equation}
where $\epsilon_{\pm}:=\epsilon/(1\pm\zeta)$, with $\zeta^2:=-x_v/2\alpha^2$ satisfying $0\leq \zeta \leq \epsilon < 1$, so that the integrals are well-defined. Equation \eqref{int} can be further simplified by computing explicitly the integral, and thus provides the final formula for $\theta$ in terms of $E$, namely
\begin{equation} \label{thetaE}
\theta(E) = \frac{\Lambda }{\Omega\alpha^2} \sum_{\pm} \frac{1}{(1\pm\zeta)\sqrt{1-\epsilon_{\pm}^2}} \arctan \biggl( \sqrt{\frac{1+\epsilon_{\pm}}{1-\epsilon_{\pm}}} \tan \frac{E}{2} \biggr) \,.
\end{equation}
The Keplerian limit of equation \eqref{thetaE} consists in taking $\zeta\propto x_v=0$ such that $\epsilon_{\pm}=\epsilon$, and thus provides a sum of two identical terms, recovering the well-known Keplerian result \eqref{elemts}. For consistency, one can check that when $E=\pi$, which should correspond to the apoastron of the orbit, equation \eqref{thetaE} coincides with the general expression \eqref{Theta} of the apsidal angle $\Theta$, which satisfies $\theta(\pi)=\Theta/2$ by definition. \\
As a final word, let us mention that the derivation of formula \eqref{thetaE} relies on the crucial assumption that $x_v\leq 0$, and thus only holds for the Hénon class of isochrone potentials (recall figure \ref{fig:parabolas}). When $x_v\geq 0$, the auxiliary quantity $\zeta$, defined by $\zeta^2=-x_v/2\alpha^2$, becomes imaginary and the $\epsilon_{\pm}$ are now complex. It turns out that formula \eqref{thetaE} also holds in this case. The reason is that, although both terms $+$ and $-$ in the $\sum_{\pm}$ sum are complex numbers, they are conjugate to one another. Their sum is therefore twice their real part, and is thus a real quantity. We provide the details of this in appendix \ref{app:complex}. In particular, formula \eqref{thetaE} for imaginary $\zeta$ is mathematically well-defined and the resulting $(r(E),\theta(E))$-orbit does coincide with the true isochrone dynamics. \\
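The consistency check mentioned above is easily carried out numerically. The sketch below (ours, assuming NumPy; the Hénon-class parameters $(a,b,c,d,e)$ and the pair $(\xi,\Lambda)$ are arbitrary illustrative values) builds $(\Omega,\epsilon,\alpha,\zeta)$ from \eqref{Omegepsi}, \eqref{smaxis} and \eqref{otr}, evaluates $\theta(E)$ of \eqref{thetaE} as $E\to\pi$, and compares it with $\Theta/2$ obtained from \eqref{Theta}.
\begin{verbatim}
import numpy as np

# Check that theta(pi) from eq. (thetaE) equals Theta/2 from eq. (Theta)
# for an (arbitrary) Henon-class parameter set: b > 0 and x_v < 0.
a, b, c, d, e = 1.0, 1.0, -3.0, -2.0, 0.5
xi, Lam = -1.1, 1.0
delta = a*d - b*c

P = b**2*Lam**4 - d*Lam**2 + e
Theta = np.pi*Lam*np.sqrt((2*b**2*Lam**2 - d)/P + 2*b/np.sqrt(P))      # eq. (Theta)

Omega = np.sqrt(-16*(a + b*xi)**3/delta)                               # eq. (Omega)
alpha2 = delta/(8*abs(b)*(a + b*xi)**2)                                # eq. (smaxis)
eps = np.sqrt(1 + 2*(2*b**2*Lam**2 - d)*(a + b*xi)/delta
              + (d**2 - 4*b**2*e)*(a + b*xi)**2/delta**2)              # eq. (Omegepsi)
xv = (4*b**2*e - d**2)/(4*b*delta)                                     # eq. (otr)
zeta = np.sqrt(-xv/(2*alpha2))

def theta(E):                                                          # eq. (thetaE)
    out = 0.0
    for sgn in (+1.0, -1.0):
        ep = eps/(1 + sgn*zeta)
        out += np.arctan(np.sqrt((1 + ep)/(1 - ep))*np.tan(E/2)) \
               / ((1 + sgn*zeta)*np.sqrt(1 - ep**2))
    return Lam*out/(Omega*alpha2)

E = np.pi - 1e-9                  # E -> pi, i.e. tan(E/2) -> +infinity
print(theta(E), Theta/2)          # the two values should agree
\end{verbatim}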
Once again, we end this section with a summary of the results. The equations of motion for a test particle in any isochrone potential can be solved analytically in the parametric form $(r(E),\theta(E))$, where $E$ is a parameter that reduces to the Keplerian eccentric anomaly. These equations are given in \eqref{smaxis} and \eqref{thetaE}. The polar coordinates $(r,\theta)$ along the orbit can also be related to the orbital time $t$ through a generalisation of the Kepler equation \eqref{KE}, that holds for any isochrone orbit. Finally, we have shown that the radial action variable $J$ is particularly well-adapted to the isochrone problem, as it (1) splits into a sum of $\xi$- and $\Lambda$-dependent terms and (2) can be used to derive the general Hamiltonian of the dynamics in action-angle variables $(J,\Lambda)$.
\section{Birkhoff normal forms and invariants} \label{sec:trois}
The fundamental theorem of isochrony \eqref{thm} is what allows one to derive all the analytical results for isochrone potentials and orbits therein, as we did in section \ref{sec:deux}. This theorem was first proven by Michel Hénon in his seminal paper \cite{HeI.59}, although not without some (minor) mistakes. It was then discussed in \cite{SPD} by borrowing techniques from complex analysis, and in \cite{RP.20} using Euclidean geometry but necessitating an abstract mathematical result. In any case, at present, a self-consistent and natural proof of this central theorem relying only on classical mechanics is, to our knowledge, nowhere to be found. It is our goal, in the present and following sections, to introduce and exploit a powerful tool of Hamiltonian mechanics: the \textit{Birkhoff normal form}. In a nutshell, the Birkhoff normal form allows one to probe (quantitatively and rigorously) the neighbourhood of equilibrium points in phase space, to obtain information on their stability, and thus on the integrability of the underlying Hamiltonian. \\
There exists a lot of specialised literature on this topic. Yet, introductory material on normal forms may be hard to find for non-specialists. Among the most accessible, we found that Arnold's classical textbook (\cite{Arn}, appendix 7) and Hofer \& Zehnder's lectures (\cite{HoZe.12}, sections 1.7 and 1.8) are particularly relevant (see also section (8.5) of \cite{BoPuV2} and \cite{Pi.13}). Other examples of accessible expositions (with applications) may be found in \cite{BoPuV1} (for the stability of the Lagrange points), and \cite{Gr.07} (for solving PDE's). Other notable references, namely \cite{FeKa.04,Fe04,ChPi.11}, present explicit computations of normal forms and use them to study the stability of the (restricted) $N$-body problem. The latter have largely motivated the present derivation.\\
Applications of our method go beyond the sole isochrone theorem \eqref{thm}, as it allows us to prove the Bertrand theorem \cite{Arn}, as well as the generalised Kepler's third laws \eqref{T} and \eqref{Theta}. We relegate these applications to section \ref{sec:quatre}, and only focus on the derivation of the normal form in the present section, which we organise as follows:
\begin{itemize}
\item in subsection \ref{sec:Binv} we introduce the notion of Birkhoff normal form and Birkhoff invariants in a very simple case, sufficient for our purpose;\vspace{-2mm}
\item in subsection \ref{sec:Bgen} we write the Birkhoff normal form $N_1$ for the Hamiltonian of a particle in a \textit{generic} potential, which encodes information on the potential $Y(x)$;\vspace{-2mm}
\item in subsection \ref{sec:Biso} we write the Birkhoff normal form $N_2$ for the Hamiltonian of a particle in an \textit{isochrone} potential, using the radial action \eqref{j} well-adapted to isochrony.\vspace{-2mm}
\end{itemize}
\subsection{Normal form and Birkhoff invariants}\label{sec:Binv}
For the sake of simplicity we will only cover the very basics of normal forms and refer to the above literature for the details. In particular, we consider a 1-dimensional problem (2-dimensional phase space), but all can be generalised to any $2n$-dimensional phase space (see, e.g., section 1.8 of \cite{HoZe.12}). Let $H(q,p)$ be a Hamiltonian defined in terms of coordinates $(q,p)\in\mathbb{R}^2$. We assume, without any loss of generality, that the origin $(q,p)=(0,0)$ is an elliptic\footnote{We focus on elliptic equilibria since we want to describe a periodic motion of the particle.} equilibrium point. A classical theorem (due initially to Birkhoff \cite{Bi.27} and then refined/generalised since then \cite{HoZe.12}) then says that \textit{there exists a local coordinate transformation $(q,p)\mapsto (\rho,\varphi)$ such that the Hamiltonian takes the form}
\begin{equation} \label{HBirk}
H(\rho,\varphi) = \mathfrak{l} + \mathfrak{b} \rho + \frac{1}{2} \mathfrak{B} \rho^2 + o(\rho^2) \,.
\end{equation}
The real numbers $(\mathfrak{l}, \mathfrak{b}, \mathfrak{B})$ are called Birkhoff invariants of zeroth, first and second order, respectively\footnote{In the general case $(q,p)\in \mathbb{R}^n \times \mathbb{R}^n$, then $\rho\in \mathbb{R}^n$ and, accordingly, the Birkhoff invariants $\mathfrak{b}$ and $\mathfrak{B}$ are linear and bilinear forms, respectively.}. They depend exclusively on $H$ (not on the mapping $(q,p)\mapsto (\rho,\varphi)$). They encode the information on the geometry of phase space around the equilibrium point. In full generality, Birkhoff's theorem gives much stronger results than \eqref{HBirk}. In particular, it holds in any dimension, explains how to extend \eqref{HBirk} to any order in the powers of $\rho$, and distinguishes between resonant and non-resonant orbits. However, quadratic order will be sufficient for our purposes, and we will refer the reader to section (1.8) of \cite{HoZe.12} (and references therein) for a more detailed and rigorous exposition. \\
The main feature of the variables $(\rho,\varphi)$ in \eqref{HBirk} is that, up to $o(\rho^2)$ corrections, they are action-angle coordinates, as $H$ does not depend on the angle $\varphi$. In this work, we call \textit{normal form} of $H(\rho,\varphi)$, and denote by $N(\rho)$, the quadratic part of \eqref{HBirk}, namely
\begin{equation} \label{defNorm}
N(\rho) = \mathfrak{l} + \mathfrak{b} \rho + \frac{1}{2} \mathfrak{B} \rho^2 \,.
\end{equation}
Heuristically, the quantity $N(\rho)$ in \eqref{defNorm} can be seen as a Hamiltonian that is (1) completely integrable and (2) describes the same dynamics as $H$ in \eqref{HBirk} in the $O(\rho^2)$-neighbourhood of the equilibrium $(0,0)$. Owing to the unicity of the Birkhoff invariants, the normal form \eqref{defNorm} is itself unique, in the sense that a change of action-angle coordinates that leaves the equilibrium point at the origin must be the identity (see \cite{An.81} for a detailed exposition, as well as appendix \ref{app:B}). A natural method to construct a normal form is crystallised in figure \ref{fig:PStrans}, which shows the successive steps one may use to transform the geometry of the phase space $(q,p)$ around $(0,0)$, so as to introduce polar-symplectic coordinates $(\rho,\varphi)$ (cf subsection \ref{sec:polar}). In fact, in section \ref{sec:Bgen} we will explicitly provide a constructive example of such a $(q,p)\mapsto(\rho,\varphi)$ mapping, bringing $H(q,p)$ into the normal form \eqref{HBirk}.
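As an elementary illustration of these notions (an aside of ours, not needed in what follows), consider the one-dimensional harmonic oscillator $H(q,p)=\tfrac{1}{2}(p^2+\omega^2q^2)$. The polar-type coordinates $(\rho,\varphi)$ defined by $q=\sqrt{2\rho/\omega}\,\sin\varphi$ and $p=\sqrt{2\rho\omega}\,\cos\varphi$ are symplectic, and in terms of them $H=\omega\rho$ exactly. The expansion \eqref{HBirk} therefore terminates, and the Birkhoff invariants read $(\mathfrak{l},\mathfrak{b},\mathfrak{B})=(0,\omega,0)$. In particular, the first invariant $\mathfrak{b}$ is nothing but the frequency of the linearised oscillations around the elliptic equilibrium, which is a general feature of the first Birkhoff invariant.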
\subsection{Birkhoff normal form: generic radial potential}\label{sec:Bgen}
Let us start with the Hamiltonian of a particle in a radial potential $\psi(r)$, as given by \eqref{H}, rewritten here for convenience as:
\begin{equation} \label{HH}
H(r,R,\theta,\Lambda)= \frac{R^2}{2} + \frac{\Lambda^2}{2r^2} + \psi(r) \, ,
\end{equation}
where $(r,\theta)$ are the coordinates and $(R,\Lambda)$ their conjugated momenta. The complete, 4-dimensional phase space of the dynamics is a subset of $\mathbb{R}_+ \times \mathbb{R} \times [0;2\pi[ \times \mathbb{R}_+\ni(r,R,\theta,\Lambda)$. However, since $\theta$ does not appear explicitly in \eqref{HH} and its conjugated momentum $\Lambda$ is constant, $(\theta,\Lambda)$ is already a pair of angle-action coordinates. Therefore, it can be practical to think of \eqref{HH} as a 1-dimensional family of Hamiltonians parametrised by $\Lambda$. In this way, we just need to focus on the radial part $(r,R)$ of the dynamics, and perform successive symplectic transformations to reach a normal form, the coefficients of which will thus be $\Lambda$-dependent. With this \textit{2-dimensional phase space} point of view, we will write $H(r,R)$ instead of $H(r,R,\theta,\Lambda)$ to stick with the notations of section \ref{sec:Binv}, with no risk of confusion. Moreover, while performing successive symplectic changes of coordinates on the phase space $(r,R)\in\mathbb{R}_+\times\mathbb{R}$, we will keep the (lower case/upper case) notation for a coordinate ($r,x,z,\ldots$) and its conjugated momentum ($R,X,Z,\ldots$). \\
Although we have tried to be as pedagogical as possible (and we believe these computations are interesting in themselves), the following subsections are rather technical. On a first reading (or for the reader in a hurry), it is possible to skip the following steps and simply assume that there exists a pair of variables $(\rho,\varphi)$ such that the Hamiltonian \eqref{HH} admits a normal form $N_1(\rho)$, given by equation \eqref{N1} below, before proceeding directly to subsection \ref{sec:Biso}.
\subsubsection{Hénon variable and circular orbits}
The Hamiltonian \eqref{HH} describes the same system as the Hamiltonian in \eqref{HJL} only if the potential is isochrone. Since the following computations are true for any radial potential (not just isochrones) we stay general and relegate the isochrone assumption to the next subsection. Still, as we have the isochrone theorem \eqref{thm} in mind, we would prefer to speak in terms of $(x,Y(x))$ instead of $(r,\psi(r))$. Therefore, the first step is the change of variables $(r,R)\mapsto(x,X)$, where $x=2r^2$ and $X$ is the canonical momentum associated with $x$. The transformation is easily seen to be symplectic if and only if $R=\sqrt{8x}X$. In these variables, the Hamiltonian \eqref{HH} now reads
\begin{equation} \label{Hnew}
H(x,X) = 4xX^2+\frac{\Lambda^2}{x}+\frac{Y(x)}{x} \,,
\end{equation}
where we recall that $Y(x)=x\psi(r(x))$. The derivation of a normal form starts with a choice of equilibrium point around which to write it.
Using \eqref{Hnew} for a given $\Lambda$, these points are simply given by $(x,X)=(x_c,0)$, where $x_c=x_c(\Lambda)$ is a solution to the algebraic equation
\begin{equation} \label{xc}
x_c Y'(x_c)-Y(x_c)=\Lambda^2 \,.
\end{equation}
At this equilibrium, we have $X=0$, i.e., $\dot{r}=0$, thus corresponding to circular orbits, the radius $r_c$ of which is such that $x_c=2r_c^2$, with $x_c$ solving \eqref{xc}. In the complete 4-dimensional phase space, there exists a family of such circular orbits, parametrised by $\Lambda$.
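As a quick illustration of equation \eqref{xc} (an aside of ours), consider the Keplerian case \eqref{KeHax}, for which $Y(x)=-\mu\sqrt{2x}$ and therefore $x_cY'(x_c)-Y(x_c)=\tfrac{1}{2}\mu\sqrt{2x_c}$. Equation \eqref{xc} then gives $\sqrt{2x_c}=2\Lambda^2/\mu$, i.e., $r_c=\Lambda^2/\mu$, the classical radius of Keplerian circular orbits; the associated energy, which is the slope $Y'(x_c)$ of the tangent line in the construction of figure \ref{fig:Henon}, is $-\mu^2/(2\Lambda^2)$, as expected.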
\begin{figure}[!htbp]
\centering
\includegraphics[width=.55\linewidth]{PStrans.png}
\caption{The circular orbit (red point) and three non-circular orbits (red curves) in the 2-dimensional phase space under each transformation. The map $(x,X)\mapsto(\hat{x},\hat{X})$ translates $x_c$ to the origin, and $(\hat{x},\hat{X})\mapsto(z,Z)$ circularises the orbits only in the close vicinity of the origin (the outer curves are not circular). Then $(z,Z)\mapsto(\bar{z},\bar{Z})$ circularises a larger neighbourhood of the origin (all curves circular up to $O(\rho^2)$), allowing one to construct polar action-angle variables $(\rho,\varphi)$ in that region. \label{fig:PStrans}}
\end{figure}
\subsubsection{Translating the equilibrium to the origin}
Now we are going to write the Birkhoff normal form of $H$ around a given circular orbit $(x,X)=(x_c,0)$. The main goal is to fix a $\Lambda$ and to circularise the phase space around the equilibrium $(x_c,0)$, in order to introduce symplectic polar coordinates, following the discussion in section \ref{sec:Binv}. \\
We start by translating the equilibrium $(x,X)=(x_c,0)$ to the origin, by setting $(\hat{x},\hat{X})=(x-x_c,X)$ and then Taylor-expanding in the $\hat{x}$ variable, small by assumption. We obtain\footnote{\label{fn1}In the 4D phase space, this change of variable is rendered symplectic by changing the angle accordingly. For example, the mapping $(x,\theta,X,\Lambda) \mapsto (x-x_c(\Lambda) ,\hat{\theta},\hat{X},\Lambda)$ is symplectic if we take $\hat{\theta}=\theta-x_c'(\Lambda)X$.} the following expression
\begin{equation} \label{Hhat}
H(\hat{x},\hat{X}) = Y_1 + 4 x_c \hat{X}^2 + \frac{Y_2}{2 x_c} \hat{x}^2 + 4\hat{X}^2\hat{x} + c_3 \hat{x}^3 + c_4 \hat{x}^4 + o(\hat{x}^4) \,,
\end{equation}
where we introduced the convenient notation $Y_n:=Y^{(n)}(x_c)$, and defined the following coefficients that depend on the derivatives of $Y(x)$ at $x=x_c$, namely
\begin{equation} \label{coeff}
c_3 = \frac{x_c Y_3 - 3 Y_2}{6 x_c^2} \,, \quad \text{and} \quad c_4 = \frac{12 Y_2 - 4x_c Y_3 + x_c^2 Y_4}{24 x_c^3} \,.
\end{equation}
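The expansion \eqref{Hhat} and the coefficients \eqref{coeff} can be checked symbolically for a generic potential $Y$ (a short sketch added here, assuming only \eqref{Hnew} and \eqref{xc}):
\begin{verbatim}
# Sketch: Taylor expansion of (Hnew) around (x, X) = (x_c, 0) for a generic Y,
# after eliminating Lambda^2 with the circular-orbit condition (xc).
import sympy as sp

xh, X, xc = sp.symbols('xhat X x_c', positive=True)
Y = sp.Function('Y')

Lam2 = xc*sp.Derivative(Y(xc), xc) - Y(xc)
H = 4*(xc + xh)*X**2 + (Lam2 + Y(xc + xh))/(xc + xh)

expansion = sp.series(H, xh, 0, 5).removeO().doit()
print(sp.simplify(expansion.coeff(xh, 3)))   # compare with c_3 in (coeff)
print(sp.simplify(expansion.coeff(xh, 4)))   # compare with c_4 in (coeff)
\end{verbatim}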
In these variables, the circular orbit is at the origin $(\hat{x},\hat{X})=(0,0)$, and in the 4D phase space each coefficient depends on $\Lambda$ through $x_c = x_c(\Lambda)$ (recall equation \eqref{xc}). Notice that the energy of the circular orbit is $H(0,0)=Y_1=Y'(x_c(\Lambda))$. This is in agreement with the way orbits are constructed in the Hénon variables, as explained around figure \ref{fig:Henon}. Lastly, for the sake of completeness, let us deduce from \eqref{Hhat} the nature of the equilibrium $(\hat{x},\hat{X})=(0,0)$. Writing the Hamilton equations and linearising around $(0,0)$ readily gives
\begin{equation} \label{HamEq}
\frac{\mathrm{d} \hat{x}}{\mathrm{d} t} = \frac{\partial H}{\partial \hat{X}} = 8 x_c \hat{X} + o(\hat{x},\hat{X}) \,, \quad \frac{\mathrm{d} \hat{X}}{\mathrm{d} t} = -\frac{\partial H}{\partial \hat{x}} = - \frac{Y_2}{x_c} \hat{x} + o(\hat{x},\hat{X}) \,.
\end{equation}
Now, since the potential $x\mapsto Y(x)$ must be convex (recall the construction of an orbit in figure \ref{fig:Henon}), it is clear that we must have
$Y_2\geq 0$. Consequently, the eigenvalues $(\ell_1,\ell_2)$ of the linearised system \eqref{HamEq} are
\begin{equation}
\ell_1 = \mathrm{i}\sqrt{8Y_2} \quad \text{and} \quad \ell_2 = - \mathrm{i}\sqrt{8Y_2} \,.
\end{equation}
These eigenvalues are purely imaginary and complex conjugates of each other, allowing us to conclude that the equilibrium $(\hat{x},\hat{X})=(0,0)$ is, indeed, elliptic, as our use of the Birkhoff normal form requires.
\subsubsection{Circularising the equilibrium neighbourhood}
Next, notice that the quadratic part of $H$ in \eqref{Hhat} describes ellipses in the $(\hat{x},\hat{X})$-plane. As we aim, eventually, towards a polar-like system of action-angle coordinates, we would like to \textit{circularise} these ellipses; that is, have the same coefficients in front of $\hat{x}^2$ and $\hat{X}^2$ in equation \eqref{Hhat}. This can be done easily by yet another change of variables. Explicitly, we set $(\hat{x},\hat{X})=(\eta z,\gamma Z)$ (a homothety for fixed $\Lambda$) and choose $(\eta,\gamma)$ such that: (i) the transformation is symplectic, and (ii) the coefficients in front of $z^2$ and $Z^2$ are equal in the new variables. A calculation reveals that condition (i) holds if $\gamma=1/\eta$, while condition (ii) holds if we set $\eta^4=8x_c^2/Y_2$. Expressing the Hamiltonian with the new $(z,Z)$-variables\footnote{\label{fn2}We would also need to change the angle $\hat{\theta}\mapsto\hat{\theta} + \tfrac{\eta'(\Lambda)}{\eta(\Lambda)}\hat{x}$, to ensure symplecticity in the 4D phase space.}, we find
\begin{equation} \label{Hz}
H_{\Lambda}(z,Z) = Y_1 + \sqrt{2Y_2}\bigl(z^2 + Z^2 + c_0 z Z^2+ c_1 z^3 + c_2 z^4\bigr) + o(z^4)\,,
\end{equation}
where we see that our phase-space ellipses have indeed been circularised, and where we defined new coefficients $(c_0,c_1,c_2)$ by
\begin{equation} \label{abc}
c_0 = \frac{8^{1/4}}{Y_2^{1/4}x_c^{1/2}}\,, \quad c_1= \frac{8^{1/4}}{3}\frac{x_c Y_3-3Y_2}{x_c^{1/2}Y_2^{5/4}} \quad \text{and}\quad c_2 = \frac{8^{1/2}}{12}\frac{12 Y_2 - 4 x_c Y_3 + x_c^2 Y_4}{x_c Y_2^{3/2}} \,.
\end{equation}
One more time, we emphasise that, in the complete, 4-dimensional phase space, these coefficients all depend on $\Lambda$, through $x_c(\Lambda)$ and $Y_n(x_c(\Lambda))$. Next, we simplify the $(z,Z)$-dependent part in the parentheses of \eqref{Hz}.
\subsubsection{Flowing towards the normal form}
For the moment, let us rewrite \eqref{Hz} in the form $H=Y_1+\sqrt{2Y_2}\tilde{H}+o(z^4)$, where
\begin{equation} \label{Hti}
\tilde{H}(z,Z) = z^2 + Z^2 +c_0 zZ^2 + c_1 z^3 + c_2 z^4 \,.
\end{equation}
As our final aim is to introduce polar-type coordinates, we would like $\tilde{H}$, a polynomial in $(z,Z)$, to be written solely as powers of $\rho=z^2+Z^2$, which would then correspond to the radial part of the polar-type coordinates. The best way to massage $\tilde{H}$ into this form is to make a transformation derived from the flow of another (polynomial) Hamiltonian $\Phi$. Let us take a moment to explain this method more clearly. \\
Using the flow of a secondary Hamiltonian can be viewed as a very general procedure to produce symplectic transformations $(z,Z) \mapsto (\bar{z},\bar{Z})$. Let $\Phi(z,Z)$ be some arbitrary Hamiltonian, and let $\phi_t$ be the \textit{flow} associated to $\Phi$, such that $\phi_t:(z,Z)\mapsto(\bar{z},\bar{Z})=(z(t),Z(t))$, where $(z(t),Z(t))$ is the solution to Hamilton's equations for $\Phi$. For $t=1$, the map $\phi:=\phi_1$ is appropriately called the \textit{time-one flow}, as it sends a point $(z,Z)=(z(0),Z(0))$ (corresponding to some initial condition $t=0$) to some other point $(\bar{z},\bar{Z})=(z(1),Z(1))$ (corresponding to its updated value at $t=1$). Choosing $\Phi$ in the right way allows one to determine the dynamics between $t=0$ and $t=1$, and thus select the image $(\bar{z},\bar{Z})$ of each point $(z,Z)$. By construction, this mapping $\phi$ defines a symplectic transformation on the phase space, because it derives from a Hamiltonian system. \\
Returning to our problem, an explicit computation adapted from \cite{FeKa.04} (with a slight adjustment for the cross term $z Z^2$ in \eqref{Hti} absent there) shows that if $\tilde{H}(z,Z)$ is of the form \eqref{Hti}, then the time-one flow $\phi$ of a well-chosen\footnote{Explicitly, $\Phi(z,Z)=b_1 Zz^2 + b_2 Z^3 + b_3 Z z^3 + b_4 z Z^3$, where $(b_1,b_2,b_3,b_4)$ are combinations of $(c_0,c_1,c_2)$.
} $\Phi(z,Z)$ defines a set of coordinates $(\bar{z},\bar{Z})$ precisely such that the polynomial Hamiltonian \eqref{Hti} now reads, in the ``bar'' variables:
\begin{equation} \label{Hbar}
\bar{H}(\bar{z},\bar{Z}) = \bar{z}^2+\bar{Z}^2 + C\,(\bar{z}^2+\bar{Z}^2)^2 + O(5) \,,
\end{equation}
where $O(5)$ contains terms of order 5 or more in $(\bar{z},\bar{Z})$, and $C$ is expressed in terms of the constants appearing in \eqref{Hti}, by
\begin{equation} \label{C}
C=-\frac{3}{32} (5 c_1^2-4c_2+2c_1c_0 + c_0^2) \,.
\end{equation}
We can now go back to the original Hamiltonian \eqref{Hz} and express it in the new variables $(\bar{z},\bar{Z})$. To this end, we replace $\tilde{H}(z,Z)$ in \eqref{Hz} (recall that $H=Y_1+\sqrt{2Y_2}\tilde{H}+o(z^4)$) by $\bar{H}(\bar{z},\bar{Z})$ as given in \eqref{Hbar}. We eventually find
\begin{equation} \label{Hend}
H(\bar{z},\bar{Z}) = Y_1 + \sqrt{2Y_2}(\bar{z}^2+\bar{Z}^2) + C \sqrt{2Y_2}\,(\bar{z}^2+\bar{Z}^2)^2 +O(5)\,.
\end{equation}
where $C$ is a function of $x_c$ given by combining equations \eqref{C} and \eqref{abc}. Expression \eqref{Hend} is then directly amenable to a normal form, as we show in the next paragraph.
\subsubsection{Polar action-angle coordinates}\label{sec:polar}
The final step to extract the normal form of \eqref{Hend} is to promote $\bar{z}^2+\bar{Z}^2$ to an action variable. A classical technique \cite{Arn} is to think of $(\bar{z},\bar{Z})$ as a kind of cartesian-type coordinates and pass to (symplectic) polar coordinates, by setting
\begin{equation}
\bar{z} = \sqrt{2\rho} \cos \varphi \quad \text{and} \quad \bar{Z} = -\sqrt{2\rho} \sin \varphi \,,
\end{equation}
where the signs are chosen so that the transformation is symplectic. Inserting the new coordinates $(\rho,\varphi)$ into equation \eqref{Hend} gives us our final expression for the Hamiltonian
\begin{equation} \label{Hfinal}
H(\rho,\varphi) = Y_1 + \sqrt{8Y_2}\rho + C \sqrt{32Y_2} \,\rho^2 +o(\rho^2)\,.
\end{equation}
Now we can compute $C=C(\Lambda)$ in terms of $x_c$ and the $Y_n$'s from equations \eqref{C} and \eqref{abc}. The normal form $N_1(\rho)$ of \eqref{Hfinal} is therefore
\begin{equation} \label{N1}
N_1(\rho) = Y_1 + \sqrt{8Y_2}\,\rho +\frac{1}{2} \biggl( \frac{4 Y_3}{Y_2} + \frac{x_c}{3Y_2^2} \bigl( 3Y_2 Y_4 - 5 Y_3^2 \bigr) \biggr) \,\rho^2 \,.
\end{equation}
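Before moving on, the algebra leading from \eqref{C} and \eqref{abc} to the $\rho^2$ coefficient of \eqref{N1} can be verified symbolically (a minimal sketch, added here for convenience):
\begin{verbatim}
# Sketch: check that C*sqrt(32*Y_2), with C from (C) and (c_0,c_1,c_2) from
# (abc), equals the rho^2 coefficient of the normal form (N1).
import sympy as sp

xc, Y2 = sp.symbols('x_c Y_2', positive=True)
Y3, Y4 = sp.symbols('Y_3 Y_4', real=True)

c0 = 8**sp.Rational(1, 4)/(Y2**sp.Rational(1, 4)*sp.sqrt(xc))
c1 = 8**sp.Rational(1, 4)/3*(xc*Y3 - 3*Y2)/(sp.sqrt(xc)*Y2**sp.Rational(5, 4))
c2 = sp.sqrt(8)/12*(12*Y2 - 4*xc*Y3 + xc**2*Y4)/(xc*Y2**sp.Rational(3, 2))

C   = -sp.Rational(3, 32)*(5*c1**2 - 4*c2 + 2*c1*c0 + c0**2)
lhs = C*sp.sqrt(32*Y2)
rhs = sp.Rational(1, 2)*(4*Y3/Y2 + xc/(3*Y2**2)*(3*Y2*Y4 - 5*Y3**2))

print(sp.simplify(lhs - rhs))    # 0
\end{verbatim}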
It should be noted that no assumption about the radial potential $Y(x)$ has been made to derive this normal form. In particular, $Y(x)$ is not required to be isochrone, and, much like in \cite{FeKa.04}, this normal form is valid for any radial potential.
We now turn to the derivation of the normal form for an isochrone potential.
\subsection{Birkhoff normal form: isochrone potential}\label{sec:Biso}
In general, the strength and simplicity of a normal form is usually balanced by the (analytic) complexity involved in its derivation (see e.g. the normal form of the restricted $N$-body problem, \cite{Fe04,ChPi.11}). However, in the case of an isochrone potential, things are much simpler thanks to the symmetry at play. We explain, in this subsection, how to construct the normal form of the Hamiltonian in that particular, isochrone case. The simplicity of the argument should then be compared to the previous subsection \ref{sec:Bgen}, where without the isochrone assumption the calculation was much more involved, and very close to that of \cite{FeKa.04}.\\
We will follow the same steps used in section \ref{sec:hamiso}. In particular, we start from the following result: if the potential is isochrone, the radial action $J$ decomposes into a sum of two terms, one $\xi$-dependent and one $\Lambda$-dependent, as was shown in \eqref{RA}. For convenience we rewrite this as
\begin{equation} \label{genJ}
J(\xi,\Lambda)=F(\xi)-G(\Lambda)\,,
\end{equation}
where $F,G$ are two functions\footnote{We assume that $F,G$ behave nicely: they can be differentiated several times and inverted on their domain of definition. We know this will be the case as we know their explicit form \eqref{T}, \eqref{Theta}.} such that $F'(\xi)=T(\xi)/2\pi$ and $G'(\Lambda) = \Theta(\Lambda)/2\pi$, since for any radial potential, \eqref{derJ} must hold. In section \ref{sec:hamiso} we had the explicit expressions of $F$ and $G$ (recall \eqref{action}) but these have been obtained in \cite{RP.20} assuming what we are attempting to prove, namely the isochrone theorem. As we shall see, these explicit forms are not required to make the computation.
Now let us fix a value of $\Lambda$, and solve equation \eqref{genJ} for the energy $\xi$ in terms of the radial action $J$. Since that expression holds for any $\xi$, i.e. any numerical value of the Hamiltonian $H=\xi$, we have just obtained $H$ expressed in terms of the action $J$, at fixed $\Lambda$. This expression reads
\begin{equation} \label{HJL}
H(J,z_J) = F^{-1}(G(\Lambda)+J)\,,
\end{equation}
where we have re-introduced the dependence on the radial angle variable $z_J$ associated to the radial action $J$, for completeness. Let us emphasise one more time that, at this stage, we do not know the expressions of $F,G$. They can only be computed once the isochrone theorem is demonstrated. When this is done equations \eqref{genJ} and \eqref{HJL} will become \eqref{action} and \eqref{ham}, respectively.
As emphasised in the last section, for a given value of $\Lambda$, circular orbits are relative equilibria of $H$ and correspond to $J=0$. Let us then Taylor-expand \eqref{HJL} around $J=0$ and set $H(0,z_J):=\xi_c(\Lambda)$ as the energy of that circular orbit. We readily get
\begin{equation}
H(J,z_J) = \xi_c + \frac{1}{F'(\xi_c )} J + \frac{1}{2} \biggl(-\frac{F''(\xi_c)}{F'(\xi_c )^3} \biggr) J^2 +o(J^2) \,.
\end{equation}
We can now extract the normal form of the above Hamiltonian $H(J,z_J)$, which we denote by $N_2(J)$, such that $H(J,z_J)=N_2(J)+o(J^2)$. Using the property $F'(\xi)=T(\xi)/2\pi$ one more time, we find
\begin{equation} \label{N2}
N_2(J) = \xi_c + \frac{2\pi}{T(\xi_c )} J + \frac{1}{2} \biggl(-\frac{4\pi^2 T'(\xi_c)}{T(\xi_c )^3} \biggr) J^2 \,,
\end{equation}
where $T(\xi_c)$ is understood as the limit of $T(\xi)$ when $\xi\rightarrow\xi_c(\Lambda)$ for fixed $\Lambda$, since the radial period of a circular orbit can be ambiguous to define. Equation \eqref{N2} is, for a given $\Lambda$, a normal form for $H$, but we emphasise that it holds \textit{only if the potential is isochrone}, otherwise equation \eqref{genJ} (from which \eqref{N2} follows) does not hold in the first place. With this second normal form at hand, we can finally turn to the applications, in the next and last section.
\section{Three applications of the normal form} \label{sec:quatre}
In this fourth and last section, we use the two Birkhoff normal forms \eqref{N2} and \eqref{N1} of the Hamiltonian describing a particle in an isochrone potential. By exploiting the equality between their respective Birkhoff invariants, we provide: (1) a proof of the fundamental theorem of isochrony \eqref{thm}; (2) a proof of the Bertrand theorem; and (3) a proof of the generalised Kepler's third law \eqref{T}. These three items are presented in each of the three following subsections.
\subsection{Fundamental theorem of isochrony} \label{sec:fti}
The two Birkhoff normal forms $N_1$ and $N_2$ derived in the previous section define two sets of three Birkhoff invariants (according to \eqref{defNorm}), one for each normal form. They must be equal, by unicity of the normal form. From the first normal form \eqref{N1}, derived in the $\rho$ action coordinate, their expression is
\begin{equation} \label{Binv2}
\mathfrak{l}_1 = Y_1\,, \quad
\mathfrak{b}_1 = \sqrt{8Y_2} \,, \quad \text{and} \quad
\mathfrak{B}_1 =\frac{4 Y_3}{Y_2} + \frac{x_c}{3Y_2^2} \bigl( 3Y_2 Y_4 - 5 Y_3^2 \bigr) \,,
\end{equation}
where we emphasise that each of these invariants is $\Lambda$-dependent, through the derivatives of the potential $Y_n=Y^{(n)}(x_c(\Lambda))$ and (twice the square of) the radius of the circular orbit $x_c=x_c(\Lambda)$. The second normal form \eqref{N2} then provides an alternative expression
\begin{equation} \label{Binv1}
\mathfrak{l}_2 = \xi_c(\Lambda) \,, \quad
\mathfrak{b}_2 = \frac{2\pi}{T(\xi_c)} \,, \quad \text{and} \quad
\mathfrak{B}_2 = -4\pi^2\frac{ T'(\xi_c)}{T(\xi_c)^3} \,,
\end{equation}
where, once again, they are $\Lambda$-dependent through the energy of the circular orbit $\xi_c = \xi_c(\Lambda)$. The invariants \eqref{Binv1} are computed under the assumption that $Y(x)$ (or equivalently $\psi(r)$) is isochrone, while the invariants \eqref{Binv2} are valid for any $Y(x)$ (not necessarily isochrone). However, if we assume $Y(x)$ isochrone, then \eqref{Binv2} and \eqref{Binv1} are the Birkhoff invariants of the same system (a particle of angular momentum $\Lambda$ and energy $H=\xi$ in an isochrone potential). Therefore, from now on we assume that $Y(x)$ is isochrone, and derive the isochrone theorem \eqref{thm} by exploring the consequences of the three equalities $ (\mathfrak{l}_1,\mathfrak{b}_1,\mathfrak{B}_1)=(\mathfrak{l}_2,\mathfrak{b}_2,\mathfrak{B}_2)$ in three steps, one for each order of invariants.
\subsubsection{Zeroth order invariant}
The first equality $\mathfrak{l}_1=\mathfrak{l}_2$ provides a link between the energy of the circular orbit of angular momentum $\Lambda$ and the first derivative of $Y$, namely:
\begin{equation} \label{Birk1}
Y'(x_c(\Lambda)) = \xi_c(\Lambda) \,.
\end{equation}
This equation is consistent with the construction of an orbit in the $x=2r^2$ variable, as we explained in figure \ref{fig:Henon}. Indeed, a circular orbit of energy $\xi_c$ corresponds to the line $y=\xi_c x-\Lambda^2$ being tangent to the curve $Y(x)$. Therefore, their respective slopes must be equal at the tangency point $x_c$, hence $\xi_c=Y'(x_c)$. The other consequence of that equation is how $\xi_c$ varies with respect to $\Lambda$. Indeed, we have
\begin{equation} \label{prime}
\frac{\mathrm{d} \xi_c}{\mathrm{d} \Lambda} = \frac{\mathrm{d} Y_1}{\mathrm{d} \Lambda} = x_c'(\Lambda)Y_2 \,,
\end{equation}
where a prime denotes $\mathrm{d} /\mathrm{d} \Lambda$, and the chain rule must be used to compute the derivative of $Y_1=Y'(x_c(\Lambda))$. Equation \eqref{prime} will be useful below.
\subsubsection{First order invariant}
The second equality $\mathfrak{b}_1=\mathfrak{b}_2$ implies a relation between the radial period and the second derivative of $Y$ at $x_c$, namely
\begin{equation} \label{Birk2}
Y''(x_c(\Lambda))=\frac{\pi^2}{2}\frac{1}{T(\xi_c(\Lambda))^2}\,.
\end{equation}
Once we know $Y(x)$, this equation allows us to derive easily the generalisation of Kepler's third law of motion, which we saw back in \eqref{T}. The other consequence of \eqref{Birk2} is an equation for $\mathfrak{B}_2$. Indeed, differentiating \eqref{Birk2} with respect to $\Lambda$ readily gives
\begin{equation} \label{primeprime}
x_c'Y'''(x_c)=-\pi^2 \frac{\mathrm{d}\xi_c}{\mathrm{d} \Lambda}\frac{T'(\xi_c)}{T(\xi_c)^3}
\,.
\end{equation}
We see that \eqref{primeprime} is very similar to the expression of $\mathfrak{B}_2$ in \eqref{Binv1}. In fact, inserting \eqref{prime} in \eqref{primeprime} and comparing the result with \eqref{Binv1} readily gives the relation
\begin{equation}\label{primeprimeprime}
\mathfrak{B}_2=\frac{4Y_3}{Y_2} \,,
\end{equation}
where we used the fact that $x_c'(\Lambda)\neq 0$, which follows by differentiating \eqref{xc} with respect to $\Lambda$ to obtain $x_c' x_c Y_2=2\Lambda$. With equation \eqref{primeprimeprime} at hand we may finally complete the proof of the fundamental theorem of isochrony.
\subsubsection{Second-order invariant}
Lastly, we insert in the equality $\mathfrak{B}_1=\mathfrak{B}_2$ the expression \eqref{primeprimeprime} for $\mathfrak{B}_2$, and the expression \eqref{Binv2} for $\mathfrak{B}_1$, to conclude that
\begin{equation} \label{fun}
\frac{x_c}{3Y_2^2} \bigl( 3Y_2 Y_4 - 5 Y_3^2 \bigr) = 0 \,.
\end{equation}
Since $x_c=2r_c^2\neq 0$, the parenthesis must vanish. Recalling the notation $Y_n=Y^{(n)}(x_c(\Lambda))$, and since \eqref{fun} should hold for any $\Lambda$, we may now let $\Lambda$ vary continuously. By continuity of $\Lambda\mapsto x_c(\Lambda)$, the equation $3Y_2 Y_4 = 5 Y_3^2$ is nothing but an ODE for the function $x_c\mapsto Y(x_c)$, i.e., the function $Y$. Therefore, at least on some open interval of $\mathbb{R}_+$, we must have
\begin{equation} \label{edo}
3 Y^{(2)}Y^{(4)} = 5\bigl(Y^{(3)}\bigr)^2 \,.
\end{equation}
It turns out that equation \eqref{edo} is \textit{the universal differential equation for parabolae}, in the sense that its solutions cover all and only functions $Y$ whose curves $y=Y(x)$ are parabolae in the $(x,y)$ plane. A short proof of this statement is included in appendix \ref{app:para}. This concludes the proof of the isochrone theorem \eqref{thm}. Before moving on, let us mention that \eqref{edo} can simply be written as an ODE for the $\Lambda$-dependent Birkhoff invariants $(\mathfrak{l},\mathfrak{b},\mathfrak{B})$ themselves, namely
\begin{equation} \label{edoBirk}
\mathfrak{B} \, \frac{\mathrm{d} \mathfrak{l}}{\mathrm{d} \Lambda} = \mathfrak{b} \, \frac{\mathrm{d} \mathfrak{b}}{\mathrm{d} \Lambda} \,.
\end{equation}
In fact we could have obtained \eqref{edoBirk} readily from the fact that $\mathfrak{B}_2 \mathfrak{l}_2'=\mathfrak{b}_2 \mathfrak{b}_2'$ (here a prime denotes $\mathrm{d}/\mathrm{d}\Lambda$), which can be seen easily from \eqref{Binv1}. That the isochrone theorem follows from such a simple differential relation between the Birkhoff invariants constitutes a very nice and fundamental characterisation of isochrony. More insight on \eqref{edoBirk} is provided in appendix \ref{app:B}.
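As an illustration of \eqref{edo} (a sketch added here, assuming the standard H\'enon isochrone potential $\psi(r)=-\mu/(b+\sqrt{b^2+r^2})$), one can check that the corresponding $Y(x)=x\,\psi(r(x))=2\mu\bigl(b-\sqrt{b^2+x/2}\bigr)$ does satisfy the parabola ODE:
\begin{verbatim}
# Sketch: the Henon isochrone potential satisfies the parabola ODE (edo).
import sympy as sp

x, mu, b = sp.symbols('x mu b', positive=True)
Y = 2*mu*(b - sp.sqrt(b**2 + x/2))   # = x*psi(r(x)) with x = 2 r^2

ode = 3*Y.diff(x, 2)*Y.diff(x, 4) - 5*Y.diff(x, 3)**2
print(sp.simplify(ode))              # 0: the graph y = Y(x) is an arc of parabola
\end{verbatim}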
\subsection{Bertrand theorem} \label{sec:Ber}
There is another fundamental result that we can derive from this formalism: the Bertrand theorem. As mentioned before, this was actually done in \cite{FeKa.04} and was the main motivation behind our exposition here. However, we would like to present it in the light of isochrony. Indeed: as we mentioned back in section \ref{sec:gauge}, the Bertrand theorem states that only the Harmonic and Kepler potentials generate closed and only closed orbits. But notice that both of these potentials are isochrone. Therefore, we expect the Bertrand theorem to be a corollary of the isochrone theorem (as was argued already in \cite{SPD}). We prove the Bertrand theorem in two steps: first we show that a Bertrand potential $Y(x)$ must be a power law (up to a linear term); and second, that it must be isochrone.\\
Let us consider the normal form \eqref{N1}, which holds for any radial potential $Y(x)$, including isochrone and Bertrand potentials. Let us write the corresponding Hamiltonian $H(\rho,\varphi,\Lambda,\vartheta)$ in the complete, 4D phase space with the two pairs $(\rho,\varphi)$, $(\Lambda,\vartheta)$ of action-angle variables. We have seen that it reads
\begin{equation}
H(\rho,\Lambda)= \mathfrak{l}_1(\Lambda) + \mathfrak{b}_1(\Lambda) \rho + \frac{1}{
2} \mathfrak{B}_1(\Lambda) \rho^2 + o(\rho^2) \,,
\end{equation}
where $(\mathfrak{l}_1,\mathfrak{b}_1,\mathfrak{B}_1)$ are given in terms of $Y(x_c(\Lambda))$ in \eqref{N1}. Associated to the action variables $(\rho,\Lambda)$, the corresponding frequencies $(\omega_{\rho},\omega_{\Lambda})$ of this Hamiltonian thus read
\begin{equation} \label{ombirk}
\omega_{\Lambda} := \frac{\partial H}{\partial \Lambda} = \frac{\mathrm{d}\mathfrak{l}_1}{\mathrm{d}\Lambda} + o(1) \,, \quad \text{and} \quad \omega_{\rho} := \frac{\partial H}{\partial \rho} = \mathfrak{b}_1(\Lambda) + o(1) \,.
\end{equation}
If $Y(x)$ satisfies the Bertrand theorem, then all the orbits are closed in real space. In phase space, a closed orbit corresponds to a pair of actions $(\rho,\Lambda)$ (recall figure \ref{fig:torus}) that defines a torus, on which the associated curve wraps around, but ultimately closes on itself. This is called a resonant orbit, i.e., an orbit for which there exist integers $(k_{\Lambda},k_{\rho})\in\mathbb{Z}^2$ such that $k_{\Lambda}\omega_{\Lambda}+k_{\rho}\omega_{\rho}=0$. Now, since \textit{each and every} orbit must be closed for a Bertrand potential, this means that these integers $(k_{\Lambda},k_{\rho})$ are actually independent of the pair $(\rho,\Lambda)$. In other words, there exists a $Q\in\mathbb{Q}$ such that for all $(\rho,\Lambda)$,
\begin{equation} \label{Q}
\omega_{\Lambda} (\rho,\Lambda) = Q \, \omega_{\rho} (\rho,\Lambda) \,.
\end{equation}
We emphasise that equation \eqref{Q} should hold for any pair of actions $(\rho,\Lambda)$. In particular, \eqref{Q} should hold for a given $\Lambda$ in the limit $\rho\rightarrow 0$ (quasi-circular orbits). According to \eqref{ombirk}, this means that
\begin{equation} \label{ind}
\frac{\mathrm{d}\mathfrak{l}_1}{\mathrm{d}\Lambda} = Q \, \mathfrak{b}_1 \,.
\end{equation}
It is rather remarkable that the Bertrand theorem is equivalent to such a simple condition, namely a differential equation for the Birkhoff invariants. We can solve this equation easily. First we insert the definitions \eqref{Binv2} of $\mathfrak{l}_1(\Lambda)$ and $\mathfrak{b}_1(\Lambda)$ in terms of $Y_1$ and $Y_2$. Then the calculation reads
\begin{equation}
\label{ind+}
\eqref{ind}
\,\,\Rightarrow\,\, x_c' Y_2 = Q \sqrt{8Y_2}
\,\,\Rightarrow\,\, \Lambda^2 = 2x_c^2Q^2 Y_2
\,\,\Rightarrow\,\, x_c Y_1 - Y = 2x_c^2Q^2 Y_2 \,,
\end{equation}
where in the first step we differentiated using the chain rule (much like in \eqref{prime}), in the second step we squared and used $x_c'x_cY_2=2\Lambda$, which we obtain by differentiating \eqref{xc} with respect to $\Lambda$, and in the last step we used \eqref{xc} once more to remove $\Lambda$.
Much like equation \eqref{fun} can be seen as an ODE for $x_c\mapsto Y(x_c)$, the rightmost equation in \eqref{ind+} is an ODE too, in which $Q\in\mathbb{Q}$ is a parameter. The solution to this ODE is simply found as
\begin{equation} \label{Ber}
Y(x) = C_1 x + C_2 x^K \,, \quad \text{with} \quad K:=\frac{1}{2Q^2} \,,
\end{equation}
with two integration constants $(C_1,C_2)\in\mathbb{R}^2$. The linear term $C_1 x$ corresponds to the addition of a constant in the potential $\psi(r)$ (recall $Y(2r^2)=2r^2\psi(r)$). As it does not affect the dynamics, we leave it aside and set $C_1=0$.\\
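One can check by direct substitution (a short sketch, assuming only the rightmost ODE of \eqref{ind+}) that the family \eqref{Ber} indeed solves it:
\begin{verbatim}
# Sketch: Y = C1*x + C2*x**K with K = 1/(2*Q^2) solves x*Y' - Y = 2*Q^2*x^2*Y''.
import sympy as sp

x, Q, C1, C2 = sp.symbols('x Q C_1 C_2', positive=True)
K = 1/(2*Q**2)
Y = C1*x + C2*x**K

ode = x*Y.diff(x) - Y - 2*Q**2*x**2*Y.diff(x, 2)
print(sp.simplify(ode))   # 0
\end{verbatim}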
On the one hand, we have shown that if $Y(x)$ is a Bertrand potential, then according to equation \eqref{Ber} it must be a power law. On the other hand, it is clear that a Bertrand potential must be isochrone: if all bounded orbits are closed, then the apsidal angle $\Theta(\xi,\Lambda)$ must be a constant, rational multiple of $2\pi$. In particular, as a constant function it is independent of the energy $\xi$ of the particle. But this characterises isochrony according to \eqref{defisoT}. The conclusion is thus that a Bertrand potential must, at once, have the form of a power law and that of a parabola. The only parabolae that verify this property are either the square root $Y\propto\sqrt{x}$ or the quadratic $Y\propto x^2$. In terms of the variable $r$, this means that either $\psi(r)\propto 1/r$ (the Kepler potential), or $\psi\propto r^2$ (the Harmonic potential). Moreover, according to \eqref{Ber}, these two cases correspond to $K=1/2$ and $K=2$, i.e. to $Q=1$ or $Q=1/2$, respectively. In light of the link between the apsidal angle $\Theta$ and the ratio of Hamiltonian frequencies \eqref{freq}, we recover the classical formulae \eqref{Thetas}.
\subsection{Generalisation of Kepler's Third Law} \label{sec:K3}
As a final application of the Birkhoff normal forms, let us consider once more the equality $\mathfrak{b}_2=\mathfrak{b}_1$, which was written explicitly in terms of $Y$ in \eqref{Birk2}. Re-arranging this equation provides, for any $\Lambda$,
\begin{equation} \label{T2}
T(\xi_c(\Lambda))^2 = \frac{\pi^2}{2} \frac{1}{Y''(x_c(\Lambda))} \,.
\end{equation}
But now, recall the initial definition of isochrone potentials \eqref{defiso}: the radial period should be independent of $\Lambda$. Although here the equation holds for the circular orbit of energy $\xi_c$, there exist other, non-circular orbits with the same energy. Geometrically, they can be constructed by translating the line $y=\xi_c x-\Lambda^2$ upward on figure \ref{fig:Henon}. By construction, all these orbits (defined by the translation) only see their angular momentum change, not their energy (a translation preserves the slope). Consequently, their radial period (squared) is numerically equal to \eqref{T2}. Summarising, we can now write that an orbit of energy and angular momentum $(\xi,\Lambda)$ has a radial period $T(\xi)$ given by
\begin{equation} \label{T3}
T(\xi) = \frac{\pi}{\sqrt{2}} \frac{1}{\sqrt{Y''(x_c(\xi))}} \,,
\end{equation}
where now $x_c(\xi)$ denotes the abscissa of the circular orbit with energy $\xi$, obtained by a downward translation (cf figure \ref{fig:Henon}). Equation \eqref{T3} is in complete agreement with formula (B2) derived in the appendix of \cite{RP.20}, where $T$ was expressed in terms of the radius of curvature $R_c$ of the parabola at the point of abscissa $x_c$. Recalling the link between curvature and the second derivative for explicit curves, equality between the two formulae follows. To obtain the general expression \eqref{T} in terms of $\xi$ and the parabola parameters $(a,b,c,d,e)$, one simply needs to compute the second derivative of a given parabola $Y(x)$, evaluate it at $x_c$ and insert the result in \eqref{T3} (this is explained in detail in section IV.4.\textit{1} of \cite{RP.20}). The result \eqref{T} follows immediately.
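As a closing illustration (a sketch added here, not taken from \cite{RP.20}), applying \eqref{T3} to the Kepler potential $Y(x)=-\mu\sqrt{2x}$ reproduces the classical third law $T=2\pi\mu/(-2\xi)^{3/2}$:
\begin{verbatim}
# Sketch: Kepler's third law from (T3); here xi > 0 denotes minus the energy.
import sympy as sp

x, mu, xi = sp.symbols('x mu xi', positive=True)
Y = -mu*sp.sqrt(2*x)

xc = sp.solve(sp.Eq(Y.diff(x), -xi), x)[0]       # circular orbit of energy -xi
T  = sp.pi/sp.sqrt(2)/sp.sqrt(Y.diff(x, 2).subs(x, xc))

print(sp.simplify(T - 2*sp.pi*mu/(2*xi)**sp.Rational(3, 2)))   # 0
\end{verbatim}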
\section*{Conclusions}
In this article, we have explained how the notion of isochrony for radial potentials, as first introduced by Michel Hénon \cite{HeI.59,HeII.59,HeIII.59} and explored in depth in \cite{SPD,RP.20}, was most naturally expressed in the context of Hamiltonian mechanics. Most of our results provide a thorough and self-consistent answer to some questions that were left open in our previous work \cite{RP.20}. \\
In particular, in section \ref{sec:deux} we used the remarkable property of the radial action $J$ \eqref{genJ}, which, along with the angular momentum action $\Lambda$, provides a system of angle-action coordinates particularly well-suited for isochrone potentials, due to its energy and angular momentum splitting \eqref{RA}. Using these variables, we have: (1) solved at once all the dynamics of test particles in any isochrone potential in terms of a generalised eccentric anomaly \eqref{smaxis},\eqref{thetaE}; and (2) shown how the Kepler equation \eqref{Keplereq} and Kepler's third law \eqref{T} -- which are classically (and rightfully) associated to the two-body problem only -- are actually universal properties of all isochrone orbits. \\
Along with these generalisations and explicit solutions, the Hamiltonian point of view used in this paper allowed us to provide, in sections \ref{sec:trois} and \ref{sec:quatre}, a natural and self-consistent proof of the fundamental theorem of isochrony \eqref{thm} that relates isochrone potentials to parabolae in the plane. With the help of the Birkhoff normal form written around circular orbits, we have provided a proof that does not rely on abstract and unrelated mathematical results (as is the case both in \cite{SPD} and \cite{RP.20}), but only uses fundamentals of Hamiltonian mechanics -- in our case, the unicity of the Birkhoff invariants \eqref{edoBirk}. Additionally, we showed how this normal form formalism gives, as elementary by-products, the Bertrand theorem \eqref{ind} of classical mechanics and Kepler's third law \eqref{T} generalised to all isochrone potentials (initially derived in \cite{RP.20}).\\
From a more global point of view, the results derived in this paper show in a definitive manner that the century-old, intricate symmetries associated with the harmonic and Kepler potentials [e.g., Kepler's laws of motion and Kepler's equation (cf chapter 2 of \cite{BoPuV1}), the Bertrand theorem (cf \cite{SPD} and references therein), the Bohlin-Levi-Civita transform (cf \cite{Bo.11,LyJi.08}), etc] are actually sub-cases of a much larger, isochrone paradigm, as was already emphasised in \cite{SPD,RP.20}. However, whether these results can be applied, and be useful, to actual physical problems is still unclear (except, of course, for the Kepler and harmonic potentials). It is worth mentioning, though, that the Hénon potential has been known to model particularly well some clusters of stars (see \cite{SPD} and references therein), and seems to be related to cluster formation in the first place \cite{SPal.19}. Moreover, toy-models for dark matter halos \cite{Me.06} using the Hollowed class as well as quark confinement \cite{Mu.93} using the Bounded class could also prove to be useful and should be explored. Lastly, stability analysis of perturbed isochrone potentials may be a way to make progress towards the problem of dark matter halo collapse and virialisation \cite{SPal.19}. \\
Another thing that is missing in our analysis, but would be interesting to investigate, is an explicit geometrical construction of this generalised eccentric anomaly. In the Kepler case, this is well-known (see, e.g., section 2.1.2 of \cite{Arn}), and much like everything else regarding isochrony, there may be a ``straightedge and compass'' construction for the eccentric anomaly of a general isochrone orbit. Another line of research worth mentioning is a formula that encompasses both the non-harmonic and harmonic case (compare \eqref{smaxis} and \eqref{thetaE} with \eqref{Hasol} and \eqref{thetaharm}, respectively). We leave the resolution of these two geometric problems for future work. \\
As a final word, we would like to stress the remarkable fact that all and any isochrone question can be answered with explicit and analytical equations, and that this is of particular interest for pedagogical and academic purposes. As we already mentioned in \cite{RP.20}, we think that mathematical physics problems (such as isochrony) may be hard to find in the evermore specialising literature. This being said, we encourage the interested reader to use all these isochrone results to illustrate the power of Poincaré's ``geometrical thinking'': to simplify and solve a differential problem that is complex at first sight, using nothing but symmetries and geometry, be it Euclidean as in \cite{RP.20}, or symplectic, as we proposed in the present paper.
\acknowledgments
PR is grateful and indebted to J.~F{\'e}joz for many helpful discussions on the theory of Birkhoff normal forms.
\clearpage
\begin{table}[!ht]
\caption{List of frequently used symbols in \cite{SPD,RP.20} and this paper.}
\vspace{0.2cm}
\begin{tabular}{ccc}
\toprule
\textbf{Symbol} & \textbf{Description} & \textbf{Definition} \\
\midrule
\textbf{Classical mechanics} & & \\
$\xi$ & (mechanical) energy & \\
$\Lambda$ & angular momentum & \\
$T$ & radial period & \eqref{defT}\\
$\Theta$ & apsidal angle & \eqref{defTheta}\\
$\psi(r)$ & radial potential & \\
$(r,\theta)$ & polar coordinates & \\
\midrule
\textbf{Isochrone potentials} & & \\
$x=2r^2$ & Hénon variable & \\
$Y(x)=x\psi(r(x))$ & radial potential in Hénon's variable & \\
$(a,b,c,d,e)$ & Latin parameters & ~\eqref{parabolaimplicit}\\
$\delta=ad-bc$ & parabola discriminant & \\
$x_v$ & abscissa of vertical tangent & \eqref{otr} \\
$\mu,\beta$ & mass and length parameter & \cite{RP.20}, (3.12)\\
\midrule
\textbf{Isochrone orbits} & & \\
$E$ & eccentric anomaly & ~\eqref{KE}\\
$\Omega=2\pi/T$ & radial frequency & ~\eqref{Omegepsi}\\
$\epsilon$ & eccentricity & ~\eqref{Omegepsi}\\
$\alpha$ & semi-major axis & ~\eqref{smaxis}\\
\midrule
\textbf{Hamiltonian mechanics} & & \\
$H$ & Hamiltonian\footnote{or perhaps should one say, the ``huygensian''. Indeed, from his own writing in the second edition of his masterwork ``Mécanique Analytique'' (1811), Lagrange introduced the letter $H$ for the ``vis viva constant'', i.e., what we nowadays call the total mechanical energy. At that time, Hamilton was only 5 years old, and it is probable that Lagrange chose $H$ for Huygens, making clear mention of him throughout his work, in particular of the insight he must have had to understand that the ``vis viva'' was conserved. More on this fascinating story can be found in \cite{image_des_maths} or appendix B of \cite{piz}, and references therein.} & \\
$J$ & radial action & \eqref{j}\\
$N_1,N_2$ & normal forms of $H$ & \eqref{defNorm}\\
$(J,\Lambda,z_J,z_\Lambda)$ & action-angle coordinates & \\
$(\rho,\Lambda,\varphi,\vartheta)$ & action-angle coordinates & \eqref{Hfinal}\\
$\mathfrak{l},\mathfrak{b},\mathfrak{B}$ & Birkhoff invariants & \eqref{defNorm}\\
\bottomrule
\vspace{5mm}
\end{tabular}
\label{Table}
\end{table}
\section{Introduction\label{intro}}
Propagation of electromagnetic waves in dispersive homogeneous media has been originally
investigated by Brillouin and Sommerfeld \cite{Bri1960}.
More recently, the introduction of negative index materials
\cite{Ves68}, metamaterials \cite{Pendry00,Notomi00,Gralak00} and
transformation optics \cite{Pendry06, Leonhardt06, Smith06} caused a renewed
interest for the dispersion phenomenon. Indeed, according to the causality
principle and passivity, effective index with values below unity or negative
requires to introduce frequency dispersion \cite{Ves68,Landau,Jackson}.
Hence the effect of dispersion in metamaterials has been recently investigated
in the cases of negative index, flat lens \cite{Collin10,GT10,PRL-Pen11,GM12,
PRL-Gref12} and invisibility systems \cite{Gra16}.
The original work on propagation of electromagnetic waves in homogeneous dispersive
media led to the description of the forerunners (also known as Brillouin and Sommerfeld precursors)
which are transients that precede the main propagating signal \cite{Bri1960}. They were later observed experimentally
\cite{Aav91,Jeo06,Ni06}. Since these pioneering works, the advances reported in the literature concern
asymptotic descriptions of precursors \cite{Oug89,MS12}, and the definition of energy density
and related quantities in dispersive and absorptive media \cite{Rup02,Dia11}.
The book of Oughstun \cite{Oughstun2009} can be consulted for a more complete bibliography.
A new situation is proposed to analyze the propagation of electromagnetic
waves in dispersive media.
The considered electromagnetic source has the sinusoidal time dependence $\sin[\omega_s t]$
after it has been switched on at an initial time $t=0$, as introduced in
\cite{Bri1960}, and used more recently in \cite{Collin10,GT10,GM12,Gra16}.
When such a source radiates in a dispersive homogeneous medium of
relative permittivity $\varepsilon(\omega)$, the time dependence of the field at a
distance $x$ from the source is given by the integral \cite{Bri1960}
\begin{equation}
E(x,t)=\frac{1}{2\pi}\displaystyle\int_\Gamma d\omega\, \dfrac{\om_s}{\omega^2 - \om_s^2} \,
e^{- i \omega t} \; e^{i \omega \sqrt{\varepsilon(\omega)} x / c_0 } \, ,
\label{intBri}
\end{equation}
where $c_0$ is the light velocity in vacuum and $\sqrt{\varepsilon}$ is the ``index'' of the medium.
The integration path
$\Gamma$ is the line parallel to the real axis made of the complex frequencies
$\omega = \nu + i \eta$ with imaginary part $\eta>0$. The main difficulty
in computing this integral is the presence of the square root
$\sqrt{\varepsilon(\omega)}$, which induces branch points and branch cuts.
In this article, it is proposed to consider a one-dimensional problem in the $x$-direction,
where the source is located in vacuum in the vicinity of a homogeneous dispersive
slab of thickness $d$, as illustrated in Fig. \ref{Fig1}. The objective is to evaluate
the transmitted field at the output of the slab.
\begin{figure}[h]
\includegraphics[width=\linewidth, keepaspectratio]{Fig1.pdf}
\caption[Optional caption]{A dispersive homogeneous slab illuminated by a source $S_0$ located at a distance
$|x_0|$.}
\label{Fig1}
\end{figure}
The difference between the refractive indices of the slab and the surrounding media introduces
multiple reflections inside the slab that interfere with each other depending on their relative
phases. This can be modeled as a passive Fabry-P\'erot resonator, as shown in Fig. \ref{fig:slab}.
In this case, the time dependence of the transmitted field at the output is
\begin{equation}
E(x=d,t) = \frac{1}{2\pi}\displaystyle\int_\Gamma d\omega\, \dfrac{\om_s}{\omega^2 - \om_s^2} \,
e^{- i \omega (t - x_0 /c_0)} \, T(\omega) \, ,
\label{intslab}
\end{equation}
where $T(\omega)$ is the transfer function of the slab. The advantage of this method is that
the coefficient $T(\omega)$ contains no square root, so that the integral expression is
free from branch points and branch cuts. Hence the value of the integral reduces to the overall
contributions of the poles of the integrand, which can be calculated using the residue theorem.
\begin{figure}
\includegraphics[ width=0.7\linewidth, keepaspectratio]{Fig2.pdf}
\caption{Internal multiple reflections inside a slab of a medium with a different refractive
index than the surroundings. Normal incidence is considered while rays are vertically shifted to
illustrate the multiple roundtrips inside.}\label{fig:slab}
\end{figure}
The purpose is to revisit the analysis of the propagation of waves in dispersive media
made by Brillouin and Sommerfeld \cite{Bri1960} in the new situation of Fig. \ref{Fig1},
where the dispersive medium has a finite extent and the multiple-reflection effect is present
(Fig. \ref{fig:slab}). In addition, the effect of the resonance frequency in the Drude-Lorentz
model is exhibited.
This article is organized as follows. After the introduction, section II describes the methodology
used in the model leading to a general equation for the temporal response of the considered system
of Fig. \ref{Fig1}.
Section III deals with the non-dispersive case, which is straightforward to analyze.
It paves the way to understand the more complicated dispersive case.
In section IV, the Drude-Lorentz model of a dispersive medium is considered. Section V
demonstrates the temporal response of a dispersive slab, with the transient time regime including
the precursors of Sommerfeld and Brillouin, and with the steady-state solution.
Finally, in section VI, the case of a weakly absorbing dispersive medium is briefly approached.
\section{Methodology \label{method}}
This section presents the method considered to address the propagation through a dispersive homogeneous slab.
First, the electromagnetic source is introduced and treated as a causal excitation switched on at a particular
time, which allows us to investigate the transient behavior of the system. Next, the transfer
function of the slab is given: it takes into account the internal multiple reflections, known as the Fabry-P\'erot
resonator model. Finally, a general equation is derived for the temporal response of the dispersive
slab, which describes the transmitted field. This expression is analyzed in detail in the next sections.
\subsection{The causal source} \label{sec:source}
The incident field is a monochromatic plane wave propagating in the $x$-direction and generated at a distance $x_0$ from
the dispersive slab, as shown in Fig. \ref{Fig1}. The source is causal: it is ``OFF'' for negative times
and then switched ``ON'' at time $t=0$. Let $\delta$ be the Dirac function and
$\theta(t)$ the step function: $\theta(t) = 0$ if $t<0$ and $\theta(t) = 1$ otherwise.
The considered point current source $J(x,t)$ is then given by
\begin{equation} \label{source}
J(x,t) = \dfrac{2 E_0 }{c_0 \mu_0} \, \delta(x - x_0) \, \sin[\omega_s t] \, \theta(t) \, ,
\end{equation}
where $E_0$ is a constant electric field amplitude, and $c_0 = 1 / \sqrt{\varepsilon_0 \mu_0}$, $\varepsilon_0$ and
$\mu_0$ are respectively the light velocity, the permittivity and the permeability in the vacuum.
The idea is to mimic a plane wave, at the limit $t \to \infty$, with the requirement
of causality. The spectral representation of the source components $\widehat{J}(x,\omega)$
is obtained using the Laplace transform
\begin{equation} \label{eq:LT}
\widehat{f}(\omega)= \int_{0}^{\infty} dt\, e^{i \omega t} f(t) \, .
\end{equation}
Notice that the imaginary part of $\omega$ must be positive to ensure the convergence of the integral.
After this Laplace decomposition, the source becomes
\begin{equation}
\widehat{J}(x,\omega) = - \dfrac{2 E_0 }{c_0 \mu_0} \, \dfrac{\omega_s}{\omega^2-\omega_s^2} \, \delta(x - x_0) \, .
\end{equation}
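This sign can be checked quickly (a minimal SymPy sketch, using the identification $s=-i\omega$ implied by the convention \eqref{eq:LT}):
\begin{verbatim}
# Sketch: Laplace transform of sin(omega_s t), with s -> -i*omega, cf. (eq:LT).
import sympy as sp

t, s, w, ws = sp.symbols('t s omega omega_s', positive=True)
L = sp.laplace_transform(sp.sin(ws*t), t, s, noconds=True)
F = L.subs(s, -sp.I*w)

print(sp.simplify(F + ws/(w**2 - ws**2)))   # 0: transform = -omega_s/(omega^2 - omega_s^2)
\end{verbatim}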
The spectral representation of the causal monochromatic source has a maximum centered at the
excitation frequency $\omega_s$ with a nonzero broadening, in contrast to a non-causal monochromatic
source, which would give a delta function at that frequency (a pure sinusoidal wave).
The Laplace transform is then applied to the one-dimensional Helmholtz equation to obtain the
incident electric field radiated in vacuum:
\begin{equation}
\dfrac{d^2 \widehat{E}}{dx^2} (x,\omega) + \dfrac{\omega^2}{c_0^2} \, \widehat{E} (x,\omega) =
- i \omega \mu_0 \widehat{J}(x,\omega) \, .
\end{equation}
Taking the spatial Fourier transform into the $k$-domain, this equation becomes
\begin{equation}
\tilde{E}(k,\omega) = \dfrac{2 E_0 }{c_0} \, \dfrac{\omega_s}{\omega^2-\omega_s^2} \,
\dfrac{i \omega}{\omega^2/c_0^2 -k^2} \, e^{- i k x_0} \, .
\end{equation}
In order to retrieve the field in real space, the inverse spatial
Fourier transform is performed, leading to
\begin{equation}
\widehat{E}(x,\omega) = E_0 \, \dfrac{\omega_s}{\omega^2-\omega_s^2} \, e^{ i \omega | x - x_0 | / c_0} \, .
\end{equation}
Here, the absolute value ensures that the waves propagate away from the source, i.e. that they are
outgoing waves. Then, applying the inverse Laplace transform leads to the final
expression of the illuminating field, normalized to the input amplitude $E_0$:
\begin{equation}\label{illumin-z}
\begin{array}{ll}
E(x,t) & = \dfrac{1}{2\pi}\; \displaystyle{\lim_{\eta \to 0}} \displaystyle\int_{\Gamma} d\omega \, e^{-i \omega t } \,
\dfrac{\omega_s}{\omega^2-\omega_s^2} \, e^{ i \omega | x - x_0 | / c_0} \, .
\end{array}
\end{equation}
The integration is performed along the line $\Gamma = \mathbb{R}+i\eta$ in the complex frequency
domain in order to prevent the divergence of the integral, as shown in Fig. \ref{fig:ComplexPlane}.
This satisfies the causality principle, which requires the field to vanish for
$t < | x - x_0 | / c_0$; for $t > | x - x_0 | / c_0$ the integral has a non-zero value due to the presence of the poles,
which lie below the integration line, on the real axis.
\begin{figure}
\includegraphics[width=1\linewidth, keepaspectratio]{Fig3.pdf}
\caption[Optional caption]{The plane of complex frequencies $\omega$ that shows the
poles of Eq. (\ref{illumin-z}). (Left) For $t < | x - x_0 | / c_0$, there are no poles in the
upper half plane and the field vanishes. (Right) For $t > | x - x_0 | / c_0$
the integral has a non-vanishing value.}
\label{fig:ComplexPlane}
\end{figure}
\subsection{The resonator model of the slab} \label{sec:slab}
A plane wave encountering an interface between two different media is subject to reflection. Assuming
the surrounding medium of the slab is vacuum, the portion of the reflected field from a single interface,
i.e. the reflection coefficient $\rho$, is given by
\begin{equation}\label{eq:r}
\rho(\omega) = \dfrac{1 - \sqrt{\varepsilon(\omega)}}{1 + \sqrt{\varepsilon(\omega)}} \, .
\end{equation}
The wave propagating through the slab experiences multiple reflections which create other copies
of the main signal with amplitudes depending on the relative refractive index of the slab to the surrounding
medium. These multiple copies interfere according to the phase difference between them, which is determined
by the propagation length inside the slab and the frequency of the excitation. Consequently, the slab can be
modeled as a frequency-dependent element, i.e. a resonator.
A block diagram of the resonator is shown in Fig. \ref{fig:resonator}, where $\theta$
is the
propagation phase during a single trip and $\Lambda$ is the transmission coefficient of a single interface,
related to the reflection coefficient through $\Lambda = 1 - \rho$. The loop represents the round-trip
due to the internal reflections between the slab interfaces.
Normalized quantities are used to describe the system in a general way, with the frequency and the time
normalized with respect to the slab thickness $d$ and the light velocity in vacuum:
\begin{equation}
\hat\omega = \dfrac{\omega d}{c_0} \longrightarrow \omega \, , \quad
\hat{t}= \frac{c_0 \, t}{d} \longrightarrow t \, .
\label{hom}
\end{equation}
The circumflex is omitted in the rest of this article.
The slab transfer function $T(\omega)$ can be obtained by applying the feedback theory \cite{sedra}
\begin{equation} \label{eqSlabTF}
T(\omega) = \dfrac{[ 1 - \rho(\omega)^2 ] \, e^{i \omega \sqrt{\varepsilon(\omega)} }}{1-\rho(\omega)^2 \,
e^{2 i \omega \sqrt{\varepsilon(\omega)}}} \, ,
\end{equation}
which provides the same result as the electromagnetic calculation.
\begin{figure}
\includegraphics[ width=1\linewidth, keepaspectratio]{Fig4.pdf}
\caption[Optional caption]{The resonator block diagram that models the propagation inside
a slab, taking into account the internal multiple reflections.}
\label{fig:resonator}
\end{figure}
It is important to remark that this transfer function can be written as
\begin{equation}\label{eqT}
\dfrac{1}{T(\omega)} = \cos[\omega \sqrt{\varepsilon(\omega)}] + i \, \dfrac{1 + \varepsilon(\omega)}{2} \,
\dfrac{\sin[\omega \sqrt{\varepsilon(\omega)}]}{\sqrt{\varepsilon(\omega)}} \, .
\end{equation}
Indeed, this expression shows that $T(\omega)$ is an even function of $\sqrt{\epsilon(\omega)}$.
This implies the absence of square roots of permittivity in $T(\omega)$, hence the absence
of branch cuts in Eq. (\ref{intslab}). This is an advantage for the resonator model in
comparison with the method generally used \cite{Bri1960}, since dealing with branch cuts in Eq. (\ref{intBri})
is a challenging step in the study of propagation inside a dispersive medium \cite{Bri1960}.
\subsection{Temporal response equation} \label{sec:tempeq}
Replacing the slab transfer function by its expression (\ref{eqSlabTF}) in the integral
(\ref{intslab}), and taking the source location at the edge of the slab $x_0=0$, the
transmitted field at the output of the slab ($x=d$) is
\begin{equation}\label{eq:ResponseInt}
E_d(t) =\frac{1}{2\pi} \int{d\omega\;e^{-i \omega t } \; \frac{\omega_s}{\omega^2-\omega_s^2}
\;\frac{ [1-\rho^2(\omega)]\; e^{i \omega \sqrt{\epsilon(\omega)}}}{1- \rho^2(\omega)\;e^{2i \omega \sqrt{\epsilon(\omega)}}} } \, ,
\end{equation}
which is the general equation to describe the response of a slab of a dispersive medium
to a causal excitation. Since no branch cut exists, the integral reduces to the
contributions of discrete poles, given by the residue theorem. The poles of the integrand are
defined by
\begin{equation} \label{eq:poleseq}
\pm \omega_s \, , \quad \omega_q: \rho(\omega_q) = \pm e^{i\; \sqrt{\epsilon(\omega_q)} \omega_q} \, .
\end{equation}
Here, the pair $\pm\omega_s$ corresponds to the source poles, which provide the steady-state part of
the solution ($t\to\infty$),
while the infinite set of $\omega_q$ contains the poles of the slab transfer function $T(\omega)$,
which give the transient part of the solution. Notice that the transient behavior
exists because the source is causal.
The output field can then be written as
\begin{equation} \label{twoparts}
\begin{array}{rl}
E_d(t) = & \theta(t - \tau_0)\; \, E_{\text{stst}}(t) \\[4mm]
+ & \theta(t - \tau_0)\; \, E_{\text{trans}}(t) \, ,
\end{array}
\end{equation}
where $\tau_0= 1$ is the normalized single-trip time needed for the wave front
(the fastest part, which experiences unit permittivity) to travel from the input side of the slab
to the output side. This preserves causality, since no signal can travel from one side to the
other faster than $c_0$.
The steady-state part can be calculated using the fact that $T(-\omega)$ is the complex conjugate of $T(\omega)$ for real $\omega$.
It gives a causal function oscillating at the excitation frequency $\omega_s$, multiplied by a scaling factor
depending on the slab characteristics:
\begin{equation} \label{eq:stst}
E_{\text{stst}}(t)= - \text{Im}
\big[ e^{- i \om_s t }\; T(\om_s) \big] ,
\end{equation}
where ``Im'' means the imaginary part.
As to the transient part, it is given by the residues of the set of poles $\omega_q$ of the
transmission coefficient $T(\omega)$. Since $T(\omega)$ is not polynomial,
then the transient part can expressed as
\begin{equation} \label{eq:MainTransient}
E_{\text{trans}}(t)= i \;\sum_{q} e^{-i \omega_q t } \; \frac{\omega_s}{\omega_q^2-\omega_s^2}\;
\left[ \dfrac{\partial T^{-1}} {\partial \omega} \, (\omega_q) \right]^{-1}\, ,
\end{equation}
where $T^{-1}(\omega)=1/T(\omega)$ plays the role of the denominator of the transfer function $T(\omega)$. The
expression (\ref{eqT}) leads to
\begin{equation} \label{eq:Qdiff}
\begin{array}{rl}
\dfrac{\partial T^{-1}} {\partial \omega} \, (\omega_q) = &
i \, \dfrac{1 + \epsilon(\omega_q)}{2} \left[ 1 + \dfrac{\omega_q}{2 \epsilon(\omega_q)}
\dfrac{\partial \varepsilon}{\partial \omega} (\omega_q) \right] \\[4mm]
& \times \cos[\omega_q \sqrt{\epsilon(\omega_q)}] \\[4mm]
- & \left[ \epsilon(\omega_q) + \dfrac{2 \omega_q \epsilon(\omega_q) + i (\epsilon(\omega_q) - 1)}{4 \epsilon(\omega_q)}
\dfrac{\partial \varepsilon}{\partial \omega} (\omega_q) \right] \\[4mm]
& \times
\dfrac{\sin[\omega_q \sqrt{\epsilon(\omega_q)}]}{\sqrt{\epsilon(\omega_q)}}
\end{array}
\end{equation}
Each term in the sum (\ref{eq:MainTransient}) represents the contribution of a pole $q$
to the total transient response of the system and has a scaling coefficient depending on the
distance to the source pole.
It is stressed that this method presents the crucial advantage of the resonator model,
when compared with the usual integral Eq. (\ref{intBri}): it avoids any branch cut
and hence reduces the computation to the use of the residue theorem to obtain the expression of the field propagating
in a dispersive medium.
\section{Non dispersive slab\label{nondispersive}}
A non dispersive dielectric slab with permittivity fixed to $\varepsilon(\omega)=\varepsilon_s$ has
source poles $\pm\omega_s$ and the slab poles
\begin{equation} \label{eq:nondisppoles}
\omega_q = \frac{q \pi}{\sqrt{\varepsilon_s}} - i \frac{\ln (1/\rho_s^2)}{2\sqrt{\varepsilon_s}},
\end{equation}
where $q$ is an integer. Figure \ref{fig:nondispoles} shows both types of poles in
the frequency domain, where it can be checked that $\omega_{-q}= - \overline{\omega_q}$.
The imaginary part represents the losses
of the resonator (reflection losses and, if present, absorption). For a lossless medium,
the wave propagating inside suffers only
from the reflection (outcoupling) loss at the interfaces. The larger the outcoupling, the
fewer round trips the wave makes inside the slab, and the shorter the transient regime.
The single-trip time value determines the initial time at which the transmitted field starts to appear
at the output,
\begin{equation} \label{eq:Taues}
\tau_s= \tau_0\; \sqrt{\varepsilon_s} \, .
\end{equation}
Since the contribution of each pole depends on its distance to the source poles, it is
assumed that the two slab poles ($\pm q'$) closest to the source poles are dominant in the
transient summation, as $|\omega_{\pm q'}| \approx |\omega_s|$ and $\text{Re}({\omega_{\pm q'}})=\pm\omega_s$,
leading to
\begin{equation} \label{eq:nondisresp}
\begin{array}{ll}
E_{\text{d}}(t) \approx & - \text{Im} \big[ e^{-i\omega_s t}\; T({\om_s}) \big] \; \theta(t - \tau_s) \\[4mm]
& + \text{Im} \big[ 2\,e^{-i \om_s t } \, e^{-t/\tau_{\text{tr}}} F(\om_s) \big] \;
\theta(t - \tau_s),
\end{array}
\end{equation}
where the first term represents the steady-state solution and the second term the transient part,
with
\begin{equation}
F(\om_s) = \frac{ (1-\rho_s^2)\; e^{i \om_s \sqrt{\epsilon_s} } } {2\; \ln(1/\rho_s^2)} \, , \quad
\tau_{\text{tr}} = \dfrac{2\sqrt{\varepsilon_s}}{\ln (1/\rho_s^2)} \, .
\end{equation}
The time-constant $\tau_{\text{tr}}$ which characterizes the decay of the transient region is
the reciprocal of the imaginary part of the poles given in Eq. (\ref{eq:nondisppoles}).
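For illustration, the approximate response can be evaluated directly (a minimal numerical sketch, using the expressions above as written, for the test case considered below):
\begin{verbatim}
# Sketch: approximate response (eq:nondisresp) for eps_s = 100 and
# omega_s = 0.05*2*pi, i.e. omega_s*sqrt(eps_s) = pi (a transmission maximum).
import numpy as np

eps_s, ws = 100.0, 0.05*2*np.pi
n = np.sqrt(eps_s)
rho2 = ((1 - n)/(1 + n))**2

T_ws = (1 - rho2)*np.exp(1j*ws*n)/(1 - rho2*np.exp(2j*ws*n))   # T(omega_s)
F_ws = (1 - rho2)*np.exp(1j*ws*n)/(2*np.log(1/rho2))           # F(omega_s)
tau_s, tau_tr = n, 2*n/np.log(1/rho2)

t = np.linspace(0.0, 30*tau_s, 4001)
step = (t > tau_s)
E_d = (-np.imag(np.exp(-1j*ws*t)*T_ws)
       + np.imag(2*np.exp(-1j*ws*t)*np.exp(-t/tau_tr)*F_ws))*step

print(abs(T_ws))       # 1.0: the excitation matches a resonance
print(tau_tr/tau_s)    # ~5 single-trip times for the transient decay
\end{verbatim}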
\begin{figure}
\includegraphics[width=0.75\linewidth, keepaspectratio]{Fig5.pdf}
\caption[Optional caption]{Equally spaced poles of a non dispersive slab, all with the same imaginary part.
The curve sketches the relative contributions of the slab poles as a function of their distance to the source poles.}
\label{fig:nondispoles}
\end{figure}
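The pole formula can also be checked numerically (a short sketch for the test case $\varepsilon_s=100$, assuming only Eqs. \eqref{eq:r} and \eqref{eq:nondisppoles}):
\begin{verbatim}
# Sketch: the poles (eq:nondisppoles) satisfy rho_s^2*exp(2i*omega_q*sqrt(eps_s)) = 1.
import numpy as np

eps_s = 100.0
n = np.sqrt(eps_s)
rho2 = ((1 - n)/(1 + n))**2                 # rho_s^2, from (eq:r)

q = np.arange(1, 6)
omega_q = q*np.pi/n - 0.5j*np.log(1/rho2)/n # normalized slab poles

residual = rho2*np.exp(2j*omega_q*n) - 1
print(np.abs(residual).max())               # ~1e-16 (machine precision)
\end{verbatim}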
When the source frequency matches the real part of one of the poles, the transmission
is maximized (unity for a symmetric resonator). The interference between the multiple
reflections inside is constructive in this case as the phase difference is always a
multiple of $2\pi$. The frequencies at which minimum transmission occurs are called
anti-resonant frequencies, and the minimum transmission value depends on the single-interface
reflectivity,
\begin{equation} \label{eq:Tmin}
T_{\text{min}} = \frac{1-\rho^2}{1+\rho^2}.
\end{equation}
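A direct numerical evaluation of \eqref{eqSlabTF} recovers both extrema (a minimal sketch for the non dispersive test case $\varepsilon_s=100$ used below):
\begin{verbatim}
# Sketch: |T(omega)| for eps_s = 100; maxima ~1 and minima given by (eq:Tmin).
import numpy as np

eps_s = 100.0
n = np.sqrt(eps_s)
rho = (1 - n)/(1 + n)

omega = np.linspace(1e-3, 0.7, 20001)   # normalized frequency omega*d/c0
T = (1 - rho**2)*np.exp(1j*omega*n)/(1 - rho**2*np.exp(2j*omega*n))

print(np.abs(T).max())                              # ~1 at the resonances
print(np.abs(T).min(), (1 - rho**2)/(1 + rho**2))   # both ~0.198
\end{verbatim}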
For a test case of a dielectric constant $\varepsilon_s = 100$, Fig. \ref{fig:nondispws} shows the
slab transfer function modulus $|T(\omega_s)|$ as a function of the excitation frequency, which
demonstrates the effect of the internal multiple reflections. Figure \ref{fig:nondisw005}
shows the temporal behavior of a dielectric slab for an excitation frequency
corresponding to a maximum transmission at the steady state regime.
The blue curve in the figure shows the temporal response using Eq. (\ref{twoparts})
that includes the contributions of all poles, while the red curve uses the
approximate Eq. (\ref{eq:nondisresp}), which only includes the two slab poles nearest
to the source poles.
\begin{figure}
\includegraphics[width=0.85\linewidth, keepaspectratio]{Fig6.pdf}
\caption[Optional caption]{
The transfer
function of a non dispersive slab with a dielectric constant $\varepsilon_s=100$. }
\label{fig:nondispws}
\end{figure}
\begin{figure}
\includegraphics[width=0.85\linewidth, keepaspectratio]{Fig7.pdf}
\caption[Optional caption]{Temporal response
(time normalized to $\tau_s$)
of the non dispersive slab for an excitation
frequency $\omega_s=0.05 \times 2 \pi$ showing a maximum transmission.
The red curve shows the approximate expression.
[The step function $\theta(t - \tau_s)$ has not been implemented in
this curve, which leads to a non vanishing contribution from $\tau = 0$
to $\tau = \tau_s$.]
}
\label{fig:nondisw005}
\end{figure}
\section{Dispersive medium Model} \label{sec:dismat}
In this article, the Drude-Lorentz model is used for the dispersive medium \cite{Ckittle},
similarly to Brillouin and Sommerfeld \cite{Bri1960}. The frequency dependency
of the permittivity is then given by
\begin{equation}
\varepsilon(\omega) = 1 - \frac{\Omega^2}{\omega^2-\om_0^2+i\omega \gamma} \,
\label{epDL}
\end{equation}
where $\Omega$ is a constant of the medium (related to the electron density \cite{Jackson}),
$\om_0$ is the resonance frequency of the dispersive medium, and $\gamma$ is
the absorption constant.
We are interested in the case of a lossless dispersive medium,
i.e. $\gamma = 0$. It is stressed that the lossless case
should be modeled by taking the limit of Eq. (\ref{epDL}) as $\gamma \downarrow 0$.
According to the causality principle and the Kramers-Kronig relations \cite{Jackson},
the dispersion leads to an imaginary part in the permittivity. Indeed, in the limit
$\gamma \downarrow 0$, the imaginary part of the permittivity turns into a Dirac delta function
at $\pm \om_0$. Nevertheless, this Dirac contribution can be ignored because the transmission
of the input interface of the slab vanishes as $\varepsilon \to \infty$ at $\pm \om_0$. We will show
later that setting the absorption term to 0 gives the same results as taking the limit,
see Section \ref{sec:NumValid}. In this way, we can confidently use the Lorentz model
for the lossless case by setting $\gamma$ to zero.
Under this condition, the plasma frequency $\om_p$, defined by the permittivity set to zero,
is
\begin{equation}
\om_p^2 = \om_0^2+\Omega^2.
\end{equation}
Figure \ref{fig:epsilon} shows the permittivity of a dispersive medium with $\om_0=\Omega=10$
and $\omega_p=10\sqrt{2}$ as a test case. Determining the poles of the dispersive slab exactly
in analytical form is unattainable. However, it is possible to obtain estimates if the problem is
analyzed in different frequency zones where one can approximate the permittivity.
Five important zones, that have distinct features where an analytical solution
can be found, are considered separately. As to the poles in a region in between these zones,
they can be interpolated from the two adjacent ones.
\begin{figure}
\includegraphics[width=0.85\linewidth, keepaspectratio]{Fig8.pdf}
\caption[Optional caption]{The Drude-Lorentz model of the permittivity of a dispersive medium with $\om_0=\Omega=10$. }
\label{fig:epsilon}
\end{figure}
It is relevant to investigate the group velocity of a wave in a dispersive
medium \cite{Bri1960}, which corresponds to the velocity of the pulse envelope:
\begin{equation}
\dfrac{v_g(\omega)}{c_0} = \frac{1}{n_g(\omega)} = \dfrac{1 / n(\omega)}{1+ \frac{\omega}{n(\omega)} \frac{dn(\omega)}{d\omega}}.
\end{equation}
where $n_g$ is the group refractive index of the medium that defines the propagation of a
narrow-band pulse in a dispersive medium. It can be associated with the velocity of the energy
propagation of the pulse which cannot exceed the speed of light: whenever $\varepsilon(\omega) > 0$,
which corresponds to a propagating solution with a real refractive index, $v_g(\omega)$ is always below $c_0$.
Figure \ref{fig:vg} shows the group velocity as a function of the frequency for the same dispersive
medium with $\om_0=\Omega=10$. This information can be used to visualize the temporal response
of a dispersive system. The single-trip time for a narrow-band pulse propagating in a dispersive medium is then
\begin{equation} \label{eq:transtime}
\tau_g = \tau_0 \; n_g,
\end{equation}
where $n_g$ is evaluated at the central frequency of the pulse.
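For the lossless Drude-Lorentz medium of Eq. (\ref{epDL}) with $\gamma=0$, the group index can be written
explicitly (a direct differentiation, added here as a convenience):
\begin{equation}
n_g(\omega) = \sqrt{\varepsilon(\omega)} + \frac{\omega^2 \Omega^2}{\sqrt{\varepsilon(\omega)}\left(\omega^2-\om_0^2\right)^2} \, ,
\end{equation}
valid wherever $\varepsilon(\omega)>0$; for the test case $\om_0=\Omega=10$ it reproduces the values used below,
namely $n_g \to \sqrt{\varepsilon_s}=\sqrt{2}$ as $\omega \to 0$ and $n_g(16) \approx 2.36$.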
\begin{figure}
\includegraphics[width=0.85\linewidth, keepaspectratio]{Fig9.pdf}
\caption[Optional caption]{The group velocity of the dispersive medium with $\om_0=\Omega=10$.}
\label{fig:vg}
\end{figure}
\subsection{Zones of Dispersive medium} \label{sec:poles}
The analytical estimates of the poles of the dispersive slab are provided for the different
zones represented in Fig. \ref{fig:epsilon}. The detailed calculations are reported in the appendix.
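As a numerical orientation before detailing each zone, Eq. (\ref{epDL}) with $\gamma=0$ gives, for the test case
$\om_0=\Omega=10$ ($\varepsilon_s=2$ and $\om_p=10\sqrt{2}\approx 14.1$), $\varepsilon(1)\approx 2.0$ in the
low-frequency zone, $\varepsilon(9.9)\approx 51$ close to the resonance, $\varepsilon(12)\approx -1.3$ in the
damping zone, $\varepsilon(14.5)\approx 0.09$ just above the plasma frequency, and $\varepsilon(20)=2/3$ in the
high-frequency zone.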
\subsubsection{{Low-frequency zone}: $\omega \ll \om_0$ and $\varepsilon \simeq \varepsilon_s > 1$}
In this zone, the permittivity can be approximated as
\begin{equation} \label{eq:zone1eps}
\varepsilon(\omega) \underset{\omega \ll \om_0}{\approx} \varepsilon_s + (\varepsilon_s - 1 ) \, \dfrac{\omega^2}{\om_0^2} \, ,
\end{equation}
where $\varepsilon_s = 1 + \Omega^2 / \om_0^2$ is the static permittivity at $\omega = 0 $.
The medium can be considered nondispersive as the group velocity is almost
constant in this zone. The resulting expression for the poles is
\begin{equation}
\label{eq:polesnearZ}
\omega_q = \frac{q\pi }{\sqrt{\varepsilon_s}} - i \dfrac{\ln \big| 1/\rho_{\omega_s} \big|}{\sqrt{\varepsilon_s}}
+ \dfrac{q^2 \pi^2}{8 \om_0^2 \varepsilon_s \sqrt{\varepsilon_s}} \, ,
\end{equation}
and remains valid as long as $q \pi \ll \sqrt{\varepsilon_s} \om_0$.
\subsubsection{Near-resonance zone: $\omega \lesssim \om_0$ and $\varepsilon \to +\infty$}
In this zone, the permittivity diverges, $\varepsilon \to +\infty$, and it can be approximated by
\begin{equation} \label{epr}
\varepsilon(\omega) \underset{\omega \to \om_0}{\approx} \dfrac{\Omega^2}{2 \om_0} \, \dfrac{1}{\om_0 - \omega} \, .
\end{equation}
This zone is highly dispersive and the group velocity tends to zero.
The poles are found using an iterative method (see appendix):
\begin{equation} \label{eq:NearPoles}
\omega_{q} = \om_0 - \frac{\om_0 \Omega^2 } {2 q^2 \pi^2 } \left[ 1 +i \frac{4\om_0}{q^2\pi^2} \right].
\end{equation}
This expression is limited to $q \gg \om_0, \Omega$, and no poles exist for $\omega \gtrsim \om_0$.
\subsubsection{Damping zone: $\om_0<\omega<\om_p$ and $\varepsilon < 0$}
Here, the permittivity $\varepsilon(\omega)$ is negative and the refractive index is purely imaginary.
This leads to a decay of the signal inside the medium, so there is no wave propagation
and no poles in the transfer function.
This zone can thus be ignored in the calculation of the time-dependent field.
\subsubsection{Near-plasma zone: $\omega \gtrsim \om_p$ and $0 < \varepsilon \ll 1$}
The dielectric becomes transparent again; its permittivity can be approximated by
\begin{equation}
\varepsilon(\omega) \underset{\omega \to \om_p}{\approx}
\dfrac{\omega^2 - \om_p^2}{\om_p^2 - \om_0^2},
\label{epp}
\end{equation}
and the set of poles is given by
\begin{equation}\label{eq:PlasPoles}
\omega_{q}=\omega_p \; \left[ 1+ \frac{q^2 \pi^2 \Omega^2}{8 \omega_p^4} \;(1 - 4 i / \omega_p) \right].
\end{equation}
This expression remains valid as long as $q \ll \frac{\sqrt{8} \omega_p^2}{ \Omega}$.
\subsubsection{{High-frequency zone}: $\omega \gg \om_0$ and $\varepsilon \simeq 1$}
The medium cannot follow the excitation of the source and behaves like the vacuum,
and $v_g \to c_0$. The permittivity is close to unity and the poles are given by
\begin{equation}\label{eq:zone5poles}
\omega_{q} = {q\pi} - i \ln{\frac{q^2 \pi^2 -\om_0^2}{\Omega^2/2}} \, .
\end{equation}
This expression for high frequency poles remains valid as
long as $\pi^2 q^2 \gg \om_0^2 + \Omega^2 /2 $.
\subsection{Numerical Validation and discussion} \label{sec:NumValid}
Using Muller's method \cite{Mullernumerical}, one can numerically check the derived expressions
for the poles of a test case of a dispersive slab with $\om_0=\Omega=10$. Figure
\ref{fig:allzones} compares the analytical and numerical results for the poles.
Figure \ref{fig:PolesDCRes} focuses on the region below resonance and shows an example of how the
derived expressions can be used asymptotically to evaluate the poles of a dispersive slab
anywhere in the frequency domain.
For a dispersive system, the imaginary part of the poles is frequency dependent, in contrast
to the nondispersive case. This imaginary part defines the transient time of the system
since it represents its decay: it is related to the losses due to reflections and absorption.
In the near-resonance zone, approaching $\om_0$ leads to unit
reflectivity [since $\rho \to -1$, see Eq. (\ref{eq:r})]. Thus, the imaginary part of the
poles tends to zero. This means that the wave at this frequency undergoes low losses and therefore
stays longer inside the resonator. Similarly, approaching $\omega_p$ leads to the imaginary part
of the poles tending to zero. As the frequency increases (zone 5), the imaginary part increases as well
and $\rho \to 0$ as $\varepsilon \to 1$, meaning that most of the wave escapes the
resonator without being reflected. All these cases are discussed
in detail in the next section.
Figure \ref{fig:LorentzCONV} confirms the validity of the Drude-Lorentz model for studying the
lossless dispersive medium excited by a causal source, as discussed at the beginning of
this section: taking the limit of the absorption term $\gamma \to 0$ gives the same
results as setting $\gamma = 0$ in the Drude-Lorentz model (blue curve in Fig. \ref{fig:LorentzCONV}).
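As an illustration of this numerical check, a minimal sketch of Muller's method applied to the pole
condition is given below. It assumes that the poles are the zeros of the resonator denominator
$D(\omega)=1-\rho(\omega)^2 e^{2i\omega\sqrt{\varepsilon(\omega)}}$, with the one-way vacuum transit
time normalized to unity and the Fresnel form for $\rho(\omega)$; this form is consistent with the
nondispersive poles discussed above, while the exact transfer function is that of Eq. (\ref{eqSlabTF}).
\begin{verbatim}
#include <complex>
#include <cstdio>

using cd = std::complex<double>;

// Lossless Drude-Lorentz permittivity, omega_0 = Omega = 10.
cd eps(cd w) { return 1.0 - 100.0 / (w * w - 100.0); }

// Fresnel single-interface reflectivity (assumed form).
cd rho(cd w) {
  cd n = std::sqrt(eps(w));
  return (1.0 - n) / (1.0 + n);
}

// Resonator denominator whose zeros are the slab poles
// (one-way vacuum transit time normalized to 1).
cd D(cd w) {
  const cd I(0.0, 1.0);
  cd r = rho(w);
  return 1.0 - r * r * std::exp(2.0 * I * w * std::sqrt(eps(w)));
}

// One complex root of D(w) by Muller's method.
cd muller(cd x0, cd x1, cd x2, int iters = 60) {
  for (int k = 0; k < iters; ++k) {
    cd f0 = D(x0), f1 = D(x1), f2 = D(x2);
    cd q = (x2 - x1) / (x1 - x0);
    cd A = q * f2 - q * (1.0 + q) * f1 + q * q * f0;
    cd B = (2.0 * q + 1.0) * f2
           - (1.0 + q) * (1.0 + q) * f1 + q * q * f0;
    cd C = (1.0 + q) * f2;
    cd s = std::sqrt(B * B - 4.0 * A * C);
    cd den = std::abs(B + s) > std::abs(B - s) ? B + s : B - s;
    cd x3 = x2 - (x2 - x1) * 2.0 * C / den;
    x0 = x1; x1 = x2; x2 = x3;
    if (std::abs(D(x3)) < 1e-12) break;
  }
  return x2;
}

int main() {
  // Seeds near the zone-1 estimate for q = 1 (about pi/sqrt(2)).
  cd w = muller(cd(2.0, -0.5), cd(2.2, -0.8), cd(2.3, -1.2));
  std::printf("pole: %.6f %+.6fi  |D| = %.2e\n",
              w.real(), w.imag(), std::abs(D(w)));
}
\end{verbatim}
Different seed points converge to different roots, so scanning the seeds along the real axis
recovers the pole families of the various zones.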
\begin{figure}
\includegraphics[width=0.9\linewidth, keepaspectratio]{Fig10.pdf}
\caption[Optional caption]{Analytical vs. Numerical results for the poles of the test case. }
\label{fig:allzones}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\linewidth, keepaspectratio]{Fig11.pdf}
\caption[Optional caption]{The poles below resonance of the test case.}
\label{fig:PolesDCRes}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\linewidth]{Fig12.pdf}
\caption[Optional caption]{Validation of the Lorentz model of the dispersive medium in the lossless case. }
\label{fig:LorentzCONV}
\end{figure}
\section{The dispersive slab response } \label{sec:response}
We turn now to the analysis of the field transmitted through the dispersive slab.
According to Fig. \ref{fig:vg}, a causal pulse (spectrally broadened) is expected to
split into different temporal regions while propagating in a dispersive medium, as each
frequency zone of the pulse has its own group velocity. This section presents the response of a
dispersive slab, i.e. the transmitted field given by Eqs. (\ref{twoparts}-\ref{eq:MainTransient}).
We present the temporal solutions of the different frequency zones in chronological
order according to the value of the group velocity. The poles of zone 5, having a group velocity
close to unity, appear first at the output: the corresponding contribution to the field
is known as the Sommerfeld precursor (or forerunner). It is followed
by the Brillouin precursor, which is the response of the poles of zone 1. The poles of zones 2
and 4 appear later as they have smaller group velocities. The transient and steady-state
solutions add together so that the main signal starts to appear at the output after
a certain time that depends on the group velocity at the source frequency.
We consider the same test case of $\om_0 = \Omega = 10$, excited by a source frequency
$\omega_s = 16 $ that falls in the plasma zone. For this case $n_g(\omega \approx 0)=\sqrt{2}$
and $n_g(\omega_s)=2.36$. These values determine the beginning of the Brillouin
precursor and the main pulse, respectively.
\subsection{Sommerfeld precursor}
We can approximate the medium permittivity and the slab reflectivity for zone 5 as
$\sqrt{\epsilon_{\omega}} \simeq 1$ and $\rho_{\omega} \simeq0$. Assuming
$|\omega_q| \gg |\omega_s|$,
the transient solution for the poles of zone 5 given by Eq. (\ref{eq:MainTransient})
becomes
\begin{equation} \label{eq:FarTransient}
E_{\text{trans}}^{(5)}(t) = \frac{-\omega_s}{4\pi} \;\sum_{q}
\frac{ e^{-i \omega_q \big[ t-\sqrt{\epsilon(\omega_q)} \big]}}{\omega_q^2} \, .
\end{equation}
It is shown in the appendix that this expression can be reduced to the formula
of the Sommerfeld precursor for a wave
propagating in a dispersive medium \cite{Bri1960}:
\begin{equation} \label{eq:Sommer}
E_{\text{trans}}^{(5)}(t) = \dfrac{\omega_s}{4 \pi} \, \dfrac{\sqrt{t - \tau_0}}{\Omega \sqrt{2 \tau_0}}
\: J_1\big( \Omega \sqrt{2 \tau_0} \sqrt{t - \tau_0} \, \big) \, ,
\end{equation}
where $J_1$ is the Bessel function of the first kind. This formula is valid
when $(t-\tau_0) / \tau_0 \ll \Omega^2 / 2$. It predicts an initial periodicity of the output
independent of the main signal frequency $\omega_s$. See Appendix \ref{AppB} for details of
the derivation of this expression. Compared to expression (\ref{eq:Sommer}), our
method has the advantage of not being limited to short times $t-\tau_0$:
it gives the complete temporal response of the high-frequency components of the propagating pulse.
For instance, Figure \ref{fig:SommerComp} shows the Sommerfeld precursor for the given test case.
It rises at the beginning and then gradually decays. The figure compares the derived expression
for the poles of zone 5 with the Sommerfeld precursor formula, which is only valid at short
times.
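As an illustration, Eq. (\ref{eq:Sommer}) can be evaluated directly; the short sketch below assumes
$\tau_0=1$ (normalized one-way vacuum transit time) and relies on the C++17 special mathematical
functions for $J_1$.
\begin{verbatim}
#include <cmath>
#include <cstdio>

// Sommerfeld precursor for the test case omega_s = 16, Omega = 10,
// assuming tau_0 = 1 (normalized one-way vacuum transit time).
// Uses the C++17 special math function std::cyl_bessel_j.
double sommerfeld(double t, double ws = 16.0,
                  double Omega = 10.0, double tau0 = 1.0) {
  const double pi = 3.141592653589793;
  if (t <= tau0) return 0.0;   // nothing arrives before tau_0
  double a = Omega * std::sqrt(2.0 * tau0);
  double s = std::sqrt(t - tau0);
  return ws / (4.0 * pi) * (s / a) * std::cyl_bessel_j(1.0, a * s);
}

int main() {
  for (double t = 1.0; t <= 1.5; t += 0.05)
    std::printf("t = %.2f   E = %+.5f\n", t, sommerfeld(t));
}
\end{verbatim}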
\begin{figure}
\includegraphics[width=0.9\linewidth]{Fig13.pdf}
\caption[Optional caption]{A comparison of the Sommerfeld precursor expression
(red) with the response of the high-frequency poles of zone 5 (blue).}
\label{fig:SommerComp}
\end{figure}
\subsection{Brillouin precursor}
The response of the poles of zone 1 corresponds to the Brillouin precursor that follows the Sommerfeld one.
We can use assumptions similar to those of zone 5, except that the permittivity is given by Eq.
(\ref{eq:zone1eps}). Figure \ref{fig:Precursors} presents the combined results of zones 1
and 5, which clearly show the rapidly oscillating Sommerfeld precursor followed by the
slowly oscillating Brillouin precursor. The beginning of the Brillouin precursor is determined
by the value of $n_g(\omega\approx0)= \sqrt{2}$, according to Eq. (\ref{eq:transtime}).
\begin{figure}
\includegraphics[width=0.9\linewidth]{Fig14.pdf}
\caption[Optional caption]{The precursors of the signal: the rapidly oscillating
Sommerfeld precursor followed by the Brillouin precursor, which begins at the dotted line. }
\label{fig:Precursors}
\end{figure}
\subsection{Contribution of the resonance zone}
Figure \ref{fig:ResPoles} shows the response of the near-resonance poles of zone 2
for the given test case. Since the excitation frequency is far from the resonance
region and because the resonance poles have small imaginary parts, the amplitude
is not significant.
The effective group velocity of these poles is very small; therefore, the contribution
of the resonance poles appears later than that of the other poles. On the other hand, this contribution
lasts longer because the small imaginary part means that the outcoupling is weak,
and hence the wave in this zone
can survive for a longer time inside the resonator.
\begin{figure}
\includegraphics[width=0.9\linewidth]{Fig15.pdf}
\caption[Optional caption]{The temporal response of the near resonance poles.}
\label{fig:ResPoles}
\end{figure}
\subsection{The total response}
The field transmitted by the dispersive slab is given by Eqs. (\ref{twoparts}-\ref{eq:MainTransient}).
Figure \ref{fig:totaltemporalresponse} shows the total temporal response, which includes both
the steady-state and the transient regimes. The steady-state value (defined as $t \to \infty$)
is given by the slab transfer function in Eq. (\ref{eqSlabTF}).
Figure \ref{fig:Stst} displays the transfer function for the test case. The unity maxima of the
transmission indicate the positions of the poles which, contrary to the nondispersive case,
are not equally spaced. As we get closer to the plasma
and resonance zones, the poles get denser.
The frequency-dependent minimum transmission is also highlighted in the figure by the red lines.
Its variation is due to the frequency-dependent single-interface reflectivity $\rho(\omega)$.
Similarly to Eq. (\ref{eq:Tmin}), this minimum transmission is given by
\begin{equation} \label{eq:Tmindis}
T_{\text{min}} (\omega_s) = \frac{1-\rho(\omega_s)^2}{1+\rho(\omega_s)^2} \, .
\end{equation}
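For instance, at the excitation frequency of the test case, $\omega_s=16$, one has
$\varepsilon(\omega_s)\approx 0.36$ and hence, assuming again the Fresnel form
$\rho(\omega)=\big(1-\sqrt{\varepsilon(\omega)}\big)/\big(1+\sqrt{\varepsilon(\omega)}\big)$,
$\rho(\omega_s)\approx 0.25$ and $T_{\text{min}}(\omega_s)\approx 0.88$.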
Since the excitation frequency of the test case lies in the plasma zone, the temporal
response of the poles of zone 4, combined with the source poles, forms the main pulse
at the output. This part of the field starts at a time that depends on the group velocity
at the excitation frequency, obtained from Eq. (\ref{eq:transtime}) with $n_g(\omega_s)=2.36$.
\begin{figure}
\includegraphics[width=0.9\linewidth]{Fig16.pdf}
\caption[Optional caption]{The total temporal response of the dispersive slab. The dotted line
indicates the onset of the main pulse at the output.}
\label{fig:totaltemporalresponse}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\linewidth]{Fig17.pdf}
\caption[Optional caption]{The transfer function of the dispersive slab (Steady state solution).}
\label{fig:Stst}
\end{figure}
\section{Effect of absorption\label{absorp}}
The effect of absorption on the response of a dispersive slab is now briefly discussed. Absorption
corresponds to an energy transfer from the incoming wave to the medium. Assuming a small absorption in the
slab medium ($\gamma \ll \om_0$), significant absorption is expected only in the
resonance region.
The absorption leads to a change in the imaginary part of the poles of the slab.
The modified expression for the poles can be obtained near
the resonance when $\omega \approx \om_0$ using the lossy expression of the
permittivity in Eq. (\ref{epDL}) by replacing $\om_0$ by $\om_0 - i \om_0 \gamma/2$: in zone 2,
the poles are then given by
\begin{equation} \label{eq:PolesLossy}
\omega_{q} \approx \om_0 - \frac{ \om_0 \Omega^2 } { 2 q^2 \pi^2 } \left[ 1+i \frac{4\om_0}{q^2\pi^2}+
i \gamma \frac{q^2}{4\Omega^2} \right] .
\end{equation}
In this case, the imaginary part of the poles is affected by both the medium
absorption and the reflectivity of the interfaces of the slab. Therefore, for the case of a
lossy medium, even if the near-resonance excitation frequency matches one of the poles
of the system, the transfer function is less than unity, see Fig. \ref{fig:TFloss}.
Depending on the value of the absorption coefficient, the damping of the transient domain
(imaginary part of the pole) is either governed by the slab outcoupling or by the medium
absorption. It is stressed that Figure \ref{fig:TFloss} confirms our assumption
that the effect of absorption mainly occurs in the vicinity of the resonance frequency $\om_0$.
\begin{figure}
\includegraphics[width=0.9\linewidth]{Fig18.pdf}
\caption[Optional caption]{The transfer function of a lossy slab with $\gamma=0.01$. }
\label{fig:TFloss}
\end{figure}
\section{Conclusion\label{conclu}}
A general equation has been established for the response of a slab of dispersive medium
illuminated by a causal sinusoidal electromagnetic plane wave in normal incidence.
The slab is modeled as a resonator, taking into account the internal reflections at
the interfaces between the slab and the surrounding medium due to the mismatch of
their refractive indices. The advantage of using the resonator model is the
elimination of the branch cut in the complex frequency domain that is usually needed
to investigate propagation in a dispersive medium.
The Drude-Lorentz model is then used for the dispersive medium. The poles of the dispersive slab
have been expressed analytically and then the residue theorem has been used to evaluate the
temporal response, including both the steady state and transient regimes. A causal pulse
propagating inside a dispersive medium is shown to split into different temporal regimes
as each frequency zone of the pulse has its own group velocity. The temporal solutions of
the different frequency zones have been highlighted in a chronological order. The original
results of Sommerfeld and Brillouin have been retrieved, with the Sommerfeld and
Brillouin precursors.
The method proposed in this article appears promising for making progress in the understanding
of wave propagation in dispersive media. In particular, the present analysis could be extended
to the case of non-normal incidence, to the absorptive case, and to dispersion models more general
than the Drude-Lorentz model.
\section{Conclusion And Future Work}\label{sec:conclusion}
This paper presents execution templates, a novel abstraction for cloud
computing runtime systems that allows them to support extremely high task
rates. The need for high task rates is driven by the observation that many
modern workloads are CPU-bound, and rewriting them in high performance code can
easily lead to task rates that overwhelm modern schedulers. Long-running
applications with high task rates, however, usually consist of many executions
of a loop. Rather than reschedule each iteration of the loop from scratch,
execution templates allow a controller to cache its scheduling decisions and
invoke large, complex sets of tasks on worker nodes with a single message.
Using execution templates, the paper shows that some benchmark applications
reimplemented in C++ can run up to 40 times faster; without templates, their
speedup is limited to only a factor of 5. Finally, execution templates enable
whole new classes of applications to run in the cloud, such as high performance
simulations used in computer graphics.
\section{Evaluation}\label{sec:evaluation}
\begin{figure*}
\centering
\setlength\tabcolsep{-3pt}
\begin{tabular}{cc}
\subfigure[Logistic regression]
{
\includegraphics[width=3in]{figs/lr-strong.pdf}
\label{fig:lr-strong}
}
&
\subfigure[K-means clustering]
{
\includegraphics[width=3in]{figs/k-means-strong.pdf}
\label{fig:kmeans-strong}
}
\end{tabular}
\caption{Iteration time of logistic regression and k-means for a data set of
size 100GB. Spark, Naiad, and Nimbus run Scala, C\#, and C++ code, respectively.
Spark-opt and Naiad-opt show the performance when the computations are replaced
with spin-waits as fast as the C++ tasks. Execution templates help Nimbus scale
out almost linearly.}
\label{fig:data-analytics}
\end{figure*}
This section evaluates how execution templates can support fast,
optimized data analytics jobs at scale. It compares the performance of
k-means and logistic regression benchmarks implemented in Nimbus with
implementations in Spark and Naiad. It measures the costs of computing
and installing templates as well as the performance effect of needing
to recompute worker templates due to load re-balancing. Finally, it
evaluates how far execution templates can scale by measuring their
effect on a distributed graphics workload whose median task length is
13ms and 10th percentile task length is 3ms.
In summary, our findings show:
\begin{itemize}[leftmargin=*]
\setlength{\itemsep}{.5pt}
\item Execution templates support orders of magnitude more tasks per
second than existing centralized (Spark) and decentralized (Naiad)
designs. Task throughput scales almost linearly with the number of
workers.
\item Using execution templates, Nimbus is able to run logistic
regression and k-means benchmarks 16-43 times faster than Spark and
Naiad implementations.
\item Half of this performance benefit is from optimized tasks, the
other half is from execution templates scheduling optimized
tasks at scale. If Spark and Naiad use optimized tasks, they cannot
scale out past 20 nodes; execution templates allow Nimbus to scale
out to at least 100 nodes and cut completion times by a factor of
4-8.
\item Using execution templates, Nimbus is able to run a complex
graphical simulation with tasks as short as $100\mu$s within
15\% of the performance of a hand-tuned MPI implementation. Without
templates, completion time increases by 520\% as the
controller cannot schedule tasks quickly enough.
\end{itemize}
All experiments use Amazon EC2 compute-optimized instances since they
are the most cost effective for compute-bound workloads. Worker nodes
are {\tt c3.2xlarge} instances, which have 8 virtual cores and 15GB of
RAM. Because we wish to evaluate how the controller can become a
bottleneck, we run it on a more powerful instance than the workers, a
{\tt c3.4xlarge} instance, with 16 cores and 30GB of RAM. This shows
the performance of the controller even when it has more resources than
the workers. We measure completion time of different jobs on 20--100
worker nodes. Nodes are allocated within a placement group and so have
full bisection bandwidth.
Iteration time is averaged over 30 iterations and excludes the first
iteration due to its overhead of data loading and JIT compilation. We
observed negligible variance in iteration times. For Nimbus, the first
iteration includes the cost of template installation. We therefore
quantify this cost separately from overall performance.
\subsection{Data Analytics Benchmarks}\label{sec:data-analytics}
Figure~\ref{fig:data-analytics} shows the completion time for logistic
regression and k-means when run in Spark, Naiad and Nimbus. In
addition to a Scala implementation in Spark and a C\# implementation
in Naiad, we also measure performance if these frameworks could
execute tasks as quickly as Nimbus. We consider the {\it best case}
performance of no overhead for invoking native code by having them run
a busy loop.
For logistic regression, Naiad's C\# runs 6 times faster than Spark's
Scala. The fastest Spark configuration is 100 nodes, while for Naiad
it is 50 nodes. This is because Naiad's faster task execution means
its control plane overhead overwhelms the benefits of running on more
workers. Naiad's control overhead grows quickly because it requires
O($n^2$) communication among Naiad nodes, where $n$ is the number of nodes.
C++ tasks run 51 times faster than Scala and 9 times faster than C\#.
When Spark and Naiad's tasks are replaced by tasks running as quickly
as C++ code, neither scale out past 20 nodes. We ran them on fewer than 20
nodes: 20 is the fastest configuration. For example, running on 100
nodes, Naiad-opt runs almost 3 times slower than on 50 nodes, as its
$n^2$ coordination overhead grows.
Nimbus runs 43 times faster than Spark and almost 16 times faster than
Naiad. Its control overhead is almost negligible, even when scaled out
to 100 nodes. This allows it to come very close to the expected
performance benefits of C++. Even if Spark and Naiad were to run
optimized tasks, execution templates lead Nimbus to run 4-8 times
faster.
K-means shows similar results to logistic regression: Nimbus runs
almost 30 times faster than Spark with Scala and 23 times faster than
Naiad with C\#. It runs 5 times faster than Spark or Naiad even when
they use optimized tasks.
\subsection{Task Throughput}\label{sec:throughput}
\begin{figure}
\centering
\includegraphics[width=3.0in]{figs/weak-throughput.pdf}
\caption{Task throughput of cloud frameworks as the number of workers
increases. Spark and Naiad saturate at about 8,000 tasks per second, while Nimbus
grows almost linearly as the number of workers increases.}
\label{fig:weak-throughput}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=6.0in]{figs/multi-tenant-annotated-cropped.pdf}
\caption{Adaptive behavior of execution templates as resources change.
If the number of available workers changes, a controller can recompute
new templates or fall back to templates it has already computed.}
\label{fig:multi-tenant-annotated}
\end{figure*}
The results in Figure~\ref{fig:data-analytics} show that neither Naiad
nor Spark can scale out to handle optimized tasks at scale. Since
progress bottlenecks at the controller, workers spend a
larger fraction of time idle. Figure~\ref{fig:weak-throughput} shows
the {\it task throughput} (the number of tasks per second that workers
execute) each system sustains for logistic regression. Both Spark and
Naiad saturate at about 8,000 tasks per second. Using execution
templates, Nimbus is able to scale almost linearly, supporting almost
200,000 tasks/second for 100 nodes.
Execution templates scale slightly sub-linearly because the scheduling
cost at the controller increases linearly with the number of workers.
If these benchmarks were run on 800 workers with 1 core each (rather
than 100 workers with 8 cores each), each worker template would be
1/8th the size and the controller would have to process 8 times as
many template instantiation messages. If $T$ is the number of tasks to
execute in a block and $W$ is the number of workers, Spark's
controller cost is $O(T)$, Naiad's is $O(W^2)$ and execution templates
are $O(W)$.
\subsection{Template Overhead and Gains}
\label{sec:gain}
\begin{table}
\centering
{\small
\begin{tabular}{lrr} \toprule[2pt]
& {\bf Per-task cost} & {\bf Iter. overhead}\\ \midrule[1pt]
Controller template & $25\mu s$ & 20\%\\
Worker template (controller side) & $15\mu s$ & 12\%\\
Worker template (worker side) & $9\mu s$ & 7\%\\
\bottomrule[2pt]
\end{tabular}
}
\caption{Costs of installing templates on
the first iteration of logistic regression running on 100
nodes. The cost is predominantly at the controller.
Nonetheless, the one-time cost of installing templates on
the first iteration causes the iteration to run 39\% slower.}
\label{tab:costs}
\end{table}
\begin{table}
\centering
{\small
\begin{tabular}{lrr} \toprule[2pt]
& {\bf Completion time} \\ \midrule[1pt]
No templates & 1.07s \\
Controller template only & 0.49s \\
Worker \& controller template & 0.07s \\ \bottomrule[2pt]
\end{tabular}
}
\caption{Execution time of logistic regression iterations (100 nodes)
with and without templates.}
\label{tab:cost-gain}
\end{table}
To filter out the startup cost of the JVM and CLR loading object files
and just-in-time compilation, the results in
Figure~\ref{fig:data-analytics} do not include the first iteration of
either computation. This also excludes the cost of generating and
installing templates. Table~\ref{tab:costs} shows the costs of
installing templates in logistic regression with 100 workers.
Installing templates increases the execution time of the first
iteration by 39\%. This cost is predominantly at the controller, as it
must generate both the controller template as well as the controller
half of the worker template. Processing each task at the controller
takes 40$\mu$s. A controller is therefore limited to processing at
most 25,000 tasks/second on the first iteration: this is approximately
3 times what available controllers can handle.
Table~\ref{tab:cost-gain} shows how controller and worker templates
reduce control plane costs. Both controller and worker templates cut
the overhead significantly as they transform thousands of tasks into a
single message. Their benefits are roughly equal. A controller
template transforms tens of thousands of messages from the driver to
the controller to a single message. Worker templates transform tens of
thousands of messages from the controller to the workers to one
message per worker. Together, they reduce control plane overhead from
93\% to negligible.
\subsection{Template Adaptation}\label{sec:adaptive}
If a controller decides to re-balance a job across workers, remove workers, or
add workers, it must recompute new worker task graphs and install the
corresponding templates on workers whose responsibilities have changed.
Figure~\ref{fig:multi-tenant-annotated} shows the time it takes for each
iteration of logistic regression as a cluster manager adjusts the available
workers. The run begins with templates disabled: iterations take 1.07s. On
iteration 10, templates are turned on. This iteration takes 1.2s due to
controller template installation (20\% overhead). On iteration 11, the
controller's half of worker templates is installed. On iteration 12, the
worker's half of the worker templates is installed. We intentionally separated
each phase of the template installation on progressive iterations to show the
cost and gain from each. However, all phases could overlap on a single
iteration (39\% overhead). Once all templates are installed, the iteration
time drops to 0.07s.
On the 20th iteration, the controller receives a command from a cluster manager
to stop using half of its workers. This does not change the controller
template, but forces the controller to recompute worker templates. It then
executes at half speed (0.14s/iteration), until iteration 30. At iteration 30,
the 50 workers are restored to the controller. It is then able to go back to
using its first set of templates, which are still cached.
\subsection{Complex Applications}
\label{sec:physbam}
\begin{figure}
\centering
\includegraphics[width=3in]{figs/task-length.pdf}
\caption{CDF of task durations in a PhysBAM simulation. The
median task is 13ms, the 10th percentile is 3ms, and some tasks are
as short as 100$\mu$s.}
\label{fig:physbam-tasks}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=2in]{figs/glass.jpg}
\caption{Still of a PhysBAM simulation of water being poured into a glass.}
\label{fig:glass}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3in]{figs/physbam-speedup-single.pdf}
\caption{Iteration time of a PhysBAM water simulation in Nimbus with
and without templates as well as its standard MPI implementation.}
\label{fig:physbam}
\end{figure}
This final set of experiments examines how templates scale to support
complex applications. PhysBAM is an open-source library for simulating
many phenomena in computer graphics~\cite{physbam}. It is the result
of over 50 developer-years of work and has won two Academy Awards. We
ported PhysBAM to Nimbus, wrapping PhysBAM functions inside tasks and
interfacing PhysBAM data objects into Nimbus so they can be copied and
transferred.
We wrote a driver program for a canonical fluid simulation benchmark,
water being poured into a vessel (e.g., Figure~\ref{fig:glass}). This
simulation uses the particle-levelset method~\cite{particle-levelset},
maintaining the simulation as a volume of fixed grid cells but using
particles along the surface of the water to simulate it in much higher
detail. The simulation is the same core simulation used in films
such as The Perfect Storm and Brave and has a triply-nested loop with
26 different computational stages that access over 40 different variables.
We ran a $1024^{3}$ cell simulation (512GB-1TB of RAM) on 64 workers,
comparing the performance of Nimbus with PhysBAM's hand-tuned MPI
implementation. The MPI implementation cannot re-balance load, and in
practice developers rarely use it due to its brittle behavior and lack
of fault tolerance.
Figure~\ref{fig:physbam-tasks} shows the CDF of task duration in PhysBAM. While
the main computational tasks are 60-70ms, some tasks run for only 100$\mu$s.
These tasks compute minimum and maximum values over small sets.
Figure~\ref{fig:physbam} shows PhysBAM's performance using Nimbus and MPI.
Without templates, the simulation generates tasks 8 times faster than a
controller can handle: Nimbus takes 520\% longer than MPI, because the controller
becomes a bottleneck. With templates, it runs within 15\% of the MPI
implementation.
\section{Implementation}
\label{sec:implementation}
\begin{figure*}
\centering
\setlength\tabcolsep{-3pt}
\begin{tabular}{cc}
\subfigure[Simple task graph example with three tasks and three data objects.
The data flow among tasks forms a DAG. For example, task C reads the updated
data objects 2 and 3 after the execution of tasks A and B.]
{
\includegraphics[width=3.0in]{figs/simple-example.pdf}
\label{fig:simple-example}
}
&
\subfigure[ Mapping of the task graph in Figure~\ref{fig:simple-example} onto
two workers. Each per-worker task graph embeds the task dependencies and the data copies
among workers. These dependencies allow workers to proceed without
the controller's mediation.]
{
\includegraphics[width=3.0in]{figs/simple-binding.pdf}
\label{fig:simple-binding}
}
\end{tabular}
\caption{Simple task graph example (a) and how it maps onto per-worker task graphs
in Nimbus (b).}
\end{figure*}
This section describes the design and implementation of execution
templates in a C++ analytics framework we have implemented, called
Nimbus. We chose to implement execution templates in a new framework
in order to explore their tradeoffs when not limited by prior design
decisions that might conflict with their goals. We also discuss how
execution templates can be introduced into existing frameworks (Section~\ref{sec:others}).
\subsection{Nimbus} \label{sec:nimbus}
Because execution templates are tightly entwined with a framework's
data and execution model, we first explain the relevant details of
Nimbus. The core Nimbus implementation is 15,000 semicolons of C++
code.
\subsubsection{Data Model}
Nimbus has a data flow model similar to DryadLINQ~\cite{dryadlinq},
Naiad~\cite{naiad},
and Spark~\cite{spark}. A job is decomposed into {\it stages}. Each
stage is a computation over a set of input data and produces a set of
output data. Each data set is partitioned into many {\it data objects}
so that stages can be parallelized. Each stage typically executes as
many {\it tasks}, one per object, that operate in parallel. In
addition to the identifiers specifying the data objects it accesses,
each task can be passed parameters, such as a time-step value or
constants.
Unlike Spark's RDDs, and to avoid the cost of data copying noted in
Section~\ref{sec:speedups}, Nimbus allows tasks to mutate data in
place. Mutable data has the additional benefit that multiple
iterations of a loop can access the same objects and reuse their
identifiers. This makes templates more efficient to parameterize, as
the object identifiers can be cached rather than recomputed on each
iteration. There can be multiple copies of a data object. However,
since objects are mutable they are not always consistent. If one worker
writes to its copy of an object, other workers who later read it
will need to receive the latest update.
Data flow between tasks forms a directed acyclic graph (DAG), called
{\it task graph}, whose vertices are tasks and edges are data
dependencies. Figure~\ref{fig:simple-example} shows a simple task
graph with three tasks that operate over three data objects. The
rest of this section uses this example task graph to explain how
templates are generated and instantiated.
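To make the data model concrete, the sketch below shows one possible C++ representation of this task
metadata (hypothetical types chosen for illustration, not the actual Nimbus classes); the read and
write sets of tasks A and B are only guesses from the figure.
\begin{verbatim}
#include <cstdint>
#include <string>
#include <vector>

using TaskId = uint64_t;
using DataId = uint64_t;

struct TaskSpec {
  TaskId id;
  std::string stage;            // e.g. "A", "B", "C"
  std::vector<DataId> reads;    // objects read by the task
  std::vector<DataId> writes;   // objects mutated in place
  std::vector<TaskId> before;   // explicit dependencies
  std::vector<uint8_t> params;  // e.g. a time step or constants
};

// The example task graph above: C depends on A and B and reads
// the objects they updated (read/write sets of A and B guessed).
std::vector<TaskSpec> example_graph() {
  return {
      {1, "A", {1, 2}, {2}, {},     {}},
      {2, "B", {3},    {3}, {},     {}},
      {3, "C", {2, 3}, {3}, {1, 2}, {}},
  };
}
\end{verbatim}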
\subsubsection{Dependencies and Data Exchange}
\label{sec:explicit}
The goal of execution templates is to allow workers to generate
and correctly schedule large batches of tasks. Not all of these tasks
are immediately runnable. For example, when a worker instantiates the
template in Figure~\ref{fig:simple-example}, it cannot run task C
until both A and B have completed. The ability to locally determine
when it is safe to run a task is critical for reducing load
on a controller; otherwise, the controller would need to publish when
every task has completed. Workers need to be able to know this both when
the dependent tasks are local as well as when they run on another node.
To enforce the correct execution order, each task includes a set of
dependencies. These dependencies are identifiers for tasks that must complete
before the worker schedules the task. As shown in
Figure~\ref{fig:simple-binding}, these dependencies can also be across workers:
task B on worker 2 must complete before task C can run on worker 1. Nimbus
represents this dependency by introducing a pair of control tasks, a send task
on worker 2 and a receive task on worker 1, and inserting the receive task as a
dependency in C. These explicit dependencies allow workers to know when a task
is ready to run without involving the controller.
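A minimal sketch of the worker-side readiness check implied by these explicit dependencies is shown
below (hypothetical helper types, not the Nimbus code); a receive task is marked done when its data
arrives, so the worker decides locally when a task such as C may run.
\begin{verbatim}
#include <cstdint>
#include <unordered_set>
#include <vector>

using TaskId = uint64_t;

struct PendingTask {
  TaskId id;
  std::vector<TaskId> before;  // local tasks and receive tasks
};

struct WorkerScheduler {
  std::unordered_set<TaskId> done;  // completed dependencies

  // A task is runnable once every dependency has completed.
  bool runnable(const PendingTask& t) const {
    for (TaskId d : t.before)
      if (done.find(d) == done.end()) return false;
    return true;
  }

  // Called when a local task finishes or a receive's data arrives.
  void mark_done(TaskId id) { done.insert(id); }
};
\end{verbatim}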
\begin{figure*}
\centering
\setlength\tabcolsep{-3pt}
\begin{tabular}{cc}
\subfigure[ A controller template represents the common structure of a task
graph's metadata. It stores task dependencies and data access patterns. It is
invoked by filling in task identifiers and parameters for each task.]
{
\includegraphics[width=3.0in]{figs/simple-ct.pdf}
\label{fig:controller-template}
}
&
\subfigure[Each worker template stores the common structure of a task graph for
execution, including the data copies among workers. It is invoked by passing the
task identifiers and parameters for each task.]
{
\includegraphics[width=3.0in]{figs/simple-wt.pdf}
\label{fig:worker-template}
}
\end{tabular}
\caption{Controller template (a) and worker templates (b) for the task
graph in Figure~\ref{fig:simple-example}.}
\end{figure*}
\subsection{Dynamic Template Generation} \label{sec:dynamic}
Nimbus generates execution templates on the granularity of basic
blocks. A {\it basic block} is a code sequence in the driver program
with only one entry point and no branches except at the exit. For
example, in Figure~\ref{fig:driver-code}, there are two basic blocks,
the optimizer and the estimator. Each iteration of the algorithm
executes the optimizer block (inner loop) multiple times and the
estimator block (outer loop) once.
There are two types of execution templates. {\it Controller templates}
are installed on a controller by the driver program; they encode the
task graph of a basic block across all of the workers. Controller
templates reduce the control overhead between the driver and the
controller. {\it Worker templates} are installed on workers by the
controller; they encode the task graph of a basic block for that
worker. Once a template is installed, it can be invoked with a single
message. When the driver program invokes a controller template, the
controller typically invokes the corresponding worker template on each
worker.
The two types of templates are decoupled to enable dynamic scheduling
decisions. If a worker fails or the controller re-balances load across
workers, two invocations of the same controller template can result in
two different partitionings of tasks among workers. For every
partitioning strategy, a separate worker template is installed on the
workers.
\subsubsection{Controller Templates}
A controller template stores the control dependencies between tasks as
well as which data objects the tasks access. To create a controller template, the
driver program sends a start message to the controller, which
instructs it to start building a template. As the controller receives each subsequent
task, it both schedules it normally and adds it to the
template. At the end of the basic block, the driver sends a finish
message to the controller. At this point the controller processes the
template it has built and generates worker templates from it. For
successive instances of the same basic block, the driver only invokes the
installed template, passing new task identifiers and parameters for
each task. Figure~\ref{fig:controller-template} shows how the
controller templates for the graph in Figure~\ref{fig:simple-example}
are installed and invoked.
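The sketch below illustrates this record-then-instantiate cycle (hypothetical helper types and names,
not the actual Nimbus implementation): the first execution records the tasks of a basic block,
{\tt finish()} rewrites intra-block dependencies as positions, and each later invocation re-binds the
cached graph to fresh task identifiers and parameters.
\begin{verbatim}
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

using TaskId = uint64_t;

struct RecordedTask {
  TaskId id;
  std::vector<TaskId> before;   // dependencies as first submitted
  std::vector<uint8_t> params;  // per-task parameters
};

struct ControllerTemplate {
  std::vector<RecordedTask> tasks;
  std::vector<std::vector<size_t>> deps;  // deps by position
};

class TemplateStore {
 public:
  void start(const std::string& block) { rec_ = &store_[block]; }
  void record(const RecordedTask& t) {
    if (rec_) rec_->tasks.push_back(t);
  }

  // Freeze: rewrite intra-block dependencies as positions so they
  // can be re-bound to fresh task ids on every instantiation.
  void finish() {
    std::unordered_map<TaskId, size_t> pos;
    for (size_t i = 0; i < rec_->tasks.size(); ++i)
      pos[rec_->tasks[i].id] = i;
    rec_->deps.assign(rec_->tasks.size(), {});
    for (size_t i = 0; i < rec_->tasks.size(); ++i)
      for (TaskId d : rec_->tasks[i].before)
        if (pos.count(d)) rec_->deps[i].push_back(pos[d]);
    rec_ = nullptr;
  }

  // Invocation: a single message with new ids and parameters
  // re-binds the whole cached graph.
  std::vector<RecordedTask> instantiate(
      const std::string& block,
      const std::vector<TaskId>& ids,
      const std::vector<std::vector<uint8_t>>& params) const {
    const ControllerTemplate& ct = store_.at(block);
    std::vector<RecordedTask> out = ct.tasks;
    for (size_t i = 0; i < out.size(); ++i) {
      out[i].id = ids[i];
      out[i].params = params[i];
      out[i].before.clear();
      for (size_t j : ct.deps[i])
        out[i].before.push_back(ids[j]);
    }
    return out;
  }

 private:
  std::unordered_map<std::string, ControllerTemplate> store_;
  ControllerTemplate* rec_ = nullptr;
};
\end{verbatim}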
\subsubsection{Worker Templates}
A worker template has two halves. The first half exists at the
controller and represents the entire computation across all of the
workers. This centralized half allows the controller to cache how the
template's tasks are distributed across workers and which data objects
the tasks access. The second half is distributed among the workers and
caches the per-worker local task graph with dependencies and data
exchange directives.
As with controller templates, to generate a worker template the controller
sends all tasks to the workers explicitly. Workers execute these tasks
and simultaneously build a template. The next time the driver program
invokes the controller template, the controller invokes the template
at each worker by passing new task identifiers and parameters.
Figure~\ref{fig:worker-template} shows how the worker templates
for the scheduling strategy in Figure~\ref{fig:simple-binding} are
installed and invoked.
\subsection{Patching Worker Templates} \label{sec:patching}
Templates, when generated, assume that the latest updates to data objects are
distributed in a certain way. For example, in Figure~\ref{fig:simple-binding},
the template on worker 2 assumes that data objects 1 and 3 contain the latest
update. Since templates are installed dynamically, the runtime does not know
the complete control structure of the program. It can be that there are code
paths which do not leave the latest update in every object. Put
in other words, the driver may have issued tasks which invalidate the
template's assumptions. At the same time, the driver program does not know
where data objects are placed or tasks execute, so cannot correct for these
cases.
Because this problem is subtle, we provide an analogy based on JIT
compilers. JIT generated blocks of native instructions assume that
variables are in particular registers. If the registers do not hold
the correct variables when the block of native instructions is
invoked, then move, store, and load instructions must be added so the
registers do hold the correct variables.\footnote{This is one reason
why JITs often operate on function boundaries, since function calling
conventions specify how variables must be laid out in registers.}
Whenever a template is invoked, the controller needs to first {\it
validate} whether the corresponding worker templates will operate on
the latest updates. If validation fails, the controller patches the
template by inserting control tasks that copy the latest updates
to the objects. For example, if the same template is invoked immediately
after the invocation shown in Figure~\ref{fig:worker-template}, then the
controller needs to transfer the latest update of the first object
(updated by task $t_3$) to worker 2 to satisfy the preconditions. Only
after patching is it safe to invoke the template again.
Validating and patching must be fast, especially when there are many
workers, data objects, and nodes. For example, the complex graphics
application in Section~\ref{sec:physbam} has almost
one million data objects. Nimbus uses two optimizations to make validation
and patching fast.
First, for the common case of a template executing twice back to back,
the controller ensures that the input objects to a template hold the
latest updates when the template completes. This is especially
important for when there are small, tight loops: the controller can
bypass both validation and patching. Second, for basic blocks that can
be entered from multiple places in the program (e.g., the block after
an if/else clause), the controller generates a separate template for
each possible control flow.
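A minimal sketch of this validation and patching step is given below (hypothetical helper, not the
Nimbus implementation): the controller compares where a template expects each of its input objects
to hold the latest update with where that update currently lives, and emits copy directives for the
mismatches.
\begin{verbatim}
#include <cstdint>
#include <unordered_map>
#include <vector>

using DataId = uint64_t;
using WorkerId = uint32_t;

struct CopyDirective {
  DataId object;
  WorkerId from;
  WorkerId to;
};

// Compare the placement a template expects for its input objects
// with where the latest updates currently live; every mismatch
// becomes a copy directive issued before the template is invoked.
std::vector<CopyDirective> patch(
    const std::unordered_map<DataId, WorkerId>& expected,
    const std::unordered_map<DataId, WorkerId>& latest) {
  std::vector<CopyDirective> copies;
  for (const auto& [obj, want] : expected) {
    auto it = latest.find(obj);
    if (it != latest.end() && it->second != want)
      copies.push_back({obj, it->second, want});
  }
  return copies;  // empty: the template validates as-is
}
\end{verbatim}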
\subsection{Load Balancing and Fault Tolerance}\label{sec:lb-ft}
Nimbus balances load across workers by periodically collecting
performance statistics at the controller. When the controller
detects that certain workers are busier than others, it redistributes
tasks across the workers, regenerating templates for any workers whose
load has changed.
To recover from worker failures, the Nimbus controller periodically
checkpoints system state. To create a checkpoint, the controller
inserts tasks that commit data objects to durable storage as well as
metadata on where in program execution this checkpoint is. If a worker
fails and the system loses the latest update to an object, the controller
halts all tasks on the workers. It restores the lost objects to their
last checkpoint as well as any other objects which have been modified
since that checkpoint. It then restarts execution, regenerating
any worker templates as needed. If the controller fails, it can restart
and restore the entire system from the checkpoint.
\subsection{Templates in Other Frameworks}\label{sec:others}
Templates are a general abstraction that can be applied to many
frameworks. However, the requirements in Section~\ref{sec:templates}
can be simpler to incorporate in some systems than others. For
example, incorporating execution templates into Spark would require
three significant changes to its data model and execution model,
particularly its lazy evaluation and scheduling. First, it would need
to support mutable data objects. When data is immutable, each
execution of a template is on new data object identifiers. Second, the
Spark controller needs to be able to proactively push updates to each
worker's block manager. Otherwise, every access of a new data object
requires a lookup at the controller. Third, in Spark the controller is
completely responsible for ensuring tasks run in the correct order,
and so tasks sent to workers do not contain any dependency
information. Adding execution templates would require adding this
metadata to tasks as well as worker control logic. While these changes
are all quite tractable, together they involve a significant change
to Spark's core execution model and so we are beginning to discuss
this with its developers.
We have not yet considered adding templates to Naiad since it is no
longer actively supported (the last code commit was Nov 9, 2014).
\section{Introduction}
The CPU has become the new bottleneck for analytics benchmarks and
applications. One recent study found that the big data benchmark
(BDBench), TCP decision support benchmark (TCP-DS), and production
workloads from Databricks were all CPU-bound. Improving network I/O
would reduce their median completion time by at most 2\% and improving
disk I/O would reduce their median completion time by at most
19\%~\cite{ousterhout15}.
At the same time, systems such as DMLL~\cite{dmll} and
DimmWitted~\cite{dimwitted} have shown it is possible to achieve
orders-of-magnitude improvements in CPU performance over frameworks
such as Spark~\cite{spark}. Comparing the performance of C++ and
Spark implementations of two standard machine learning
benchmarks, we find that the C++ implementations run up to {\it 51
times faster}. Modern analytics frameworks are CPU-bound, but most of
these cycles are wasted.
One straw man solution to improve performance is to have a framework
call into C++ implementations of computational kernels, e.g., through
the Java Native Interface (JNI). In Section~\ref{sec:implications}, we
show that this only sees modest speedups (5x rather than 50x): worker
nodes spend 90\% of their cycles idle. The central Spark {\it
controller}, which is responsible for telling to workers to execute
tasks, cannot schedule tasks quickly enough. The framework's control
plane becomes a bottleneck and workers fall idle. In
Section~\ref{sec:evaluation} we show that Naiad~\cite{naiad}, another
framework, has similar control plane bottlenecks.
Current frameworks do not scale to run optimized tasks on many
nodes. They can either run on many nodes or run optimized tasks, but
not both, because the control plane cannot schedule tasks fast
enough. Prior scalable scheduling systems such as
Sparrow~\cite{sparrow}, Omega~\cite{omega}, Apollo~\cite{apollo},
Mercury~\cite{mercury}, Hawk~\cite{hawk} and Tarcil~\cite{tarcil} all
propose ways to distribute the scheduling of many jobs which together
overwhelm a single controller. Scheduling a job requires
centralized state, and so for all these systems, tasks from a single
job still go through a single scheduler. Optimized tasks, however,
mean that a {\it single} job can saturate a controller.
Section~\ref{sec:templates} presents {\it execution templates}, a
control plane abstraction that scales to schedule optimized tasks on
many nodes. The key insight behind execution templates is that
long-running CPU-bound computations are repetitive: they run the same
computation (e.g., a loop body) many times. Rather than reschedule
each repetition from scratch, a runtime caches scheduling decisions as
an execution template of tasks. A program invokes a template,
potentially creating thousands of tasks, with a single message. We
call this abstraction a template because it can cache some decisions
(e.g., dependencies) but fully instantiating it requires parameters
(e.g., task identifiers).
Section~\ref{sec:implementation} describes an implementation of
execution templates in Nimbus, a C++ analytics framework that
incorporates execution templates. Compared to Spark and Naiad,
benchmarks in Nimbus run 16-43 times faster. Rewriting benchmarks in
Spark and Naiad to use optimized tasks reduces their completion time
by a factor of 3.7-5. However, Section~\ref{sec:implications} shows
that neither can scale out past 20 worker nodes
because the control plane becomes a bottleneck: running on more than
20 nodes {\it increases} completion time. Using execution templates,
implementations of these benchmarks in Nimbus scale out to 100 nodes
(800 cores), seeing nearly linear speedups.
Execution templates allow a centralized controller to handle tasks
shorter than 1ms, or 100 times shorter than what prior systems
support~\cite{sparrow}. This makes whole new applications possible. We
have ported PhysBAM, a graphical simulation library~\cite{physbam}
used in many feature films\footnote{PhysBAM is a cornerstone of
special effects at Industrial Light and Magic and is also used at
Pixar.} to Nimbus. PhysBAM has tasks as short as 100$\mu$s, yet
execution templates can execute extremely large simulations within 15\%
of the speed of PhysBAM's hand-tuned MPI libraries.
This paper makes five contributions:
\begin{enumerate}[leftmargin=*]
\setlength{\itemsep}{.5pt}
\item A detailed analysis of how Spark
spends CPU cycles, finding that C++ implementations run 51 times faster and
most of Spark's cycles are wasted due to runtime and programming
language overheads (Section~\ref{sec:speedups}).
\item Results showing Spark and Naiad's control planes are a
bottleneck when running optimized (C++) tasks and so they can only
provide modest speedups (Section~\ref{sec:implications}).
\item Execution templates, a novel control plane abstraction that
allows optimized tasks to run at scale
(Section~\ref{sec:templates}).
\item The design of Nimbus, an analytics framework that incorporates
execution templates and a data model based on mutable data objects
which permit in-place modifications (Section~\ref{sec:implementation}).
\item An evaluation of execution templates, finding they allow Nimbus
to run optimized tasks with almost no overhead, scaling out to 100
nodes (800 cores) while running 30-43 times faster than Spark and
16-23 times faster than Naiad. Execution templates also allow Nimbus to
support large, complex applications with tasks as short as 100$\mu$s
(Section~\ref{sec:evaluation}).
\end{enumerate}
Section~\ref{sec:implementation} provides details on the Nimbus
implementation of execution templates, including the dynamic program
analysis that ensures they execute properly despite variations in
control and data flow. Section~\ref{sec:related} presents related
work and Section~\ref{sec:conclusion} concludes.
\section{Motivation}
A recent study found that Spark analytics applications are
CPU-bound~\cite{ousterhout15}. Increasing server RAM and easy
parallelization means that many applications can keep their entire
working set in RAM and completion time is limited by CPU performance.
This section motivates the need for a new control plane in cloud data
analytics frameworks. It starts by examining where Spark's CPU cycles
go: 98\% of them are wasted. Re-implementations in C++ run up to 51 times
faster. However, if a Spark job uses these faster re-implementations,
it only sees modest (5x) speedups because the control plane
(messages to schedule and dispatch tasks)
becomes the bottleneck. The section concludes by observing an important property
of CPU-bound applications: their control flow and execution
exhibit very regular patterns, which can be calculated, cached and
reused.
\subsection{Where the Cycles Go}\label{sec:speedups}
\begin{figure}
\centering
\includegraphics[width=3.0in]{figs/scala-slow-down.pdf}
\caption{Execution time of logistic regression implemented in Scala and C++. C++
is 51 times faster than Scala. These results are averaged over 30
iterations and discard the first iteration to allow the Java Virtual Machine (JVM)
to warm up and just-in-time compile.}
\label{fig:gradient}
\end{figure}
Frameworks such as Spark~\cite{spark} and Naiad~\cite{naiad} focus on
applications whose data sets can fit in memory when spread across
many nodes. At the same time, a push for greater programmer
productivity has led them to support higher-level languages: 70\% of
Spark applications are written in Scala~\cite{nvl-platformlab}.
\begin{table}
\centering
\setlength\tabcolsep{5pt}
\begin{tabular}{lrrr} \toprule[2pt]
{\bf Code} & {\bf Nodes} & {\bf Task Length} & {\bf Completion Time} \\ \midrule[1pt]
{\bf Scala} & 100 nodes & 206ms & 2.86s \\
{\bf C++} & 100 nodes & 4ms & 1.00s \\
{\bf C++} & 20 nodes & 20ms & 0.53s \\ \bottomrule[2pt]
\end{tabular}
\caption{Effect of running optimized logistic regression tasks
in Spark. Although C++ tasks can run 51 times faster, a job using
C++ tasks on 100 nodes only runs 2.8x faster. It
runs 5x faster when run on 20 nodes. Both are much slower than the
expected speedups, and 20 nodes is faster than 100 because the control
plane cannot schedule tasks fast enough.}
\label{tab:melting}
\end{table}
These two trends (in-memory datasets and higher-level languages)
conflict: for applications that operate on in-memory data, higher-level
language overheads become significant. Figure~\ref{fig:gradient}
shows the execution time of logistic regression, a common analytics
benchmark, implemented in Spark using Scala and implemented in C++.
The C++ implementation runs {\it 51 times faster} than the Spark one.
This poor performance has three major causes.\footnote{To determine the cause
of this slowdown, we configured the JVM to output the JIT assembly and
inspected it. We inserted performance counters in the Scala code re-inspected
the assembly to verify they captured the correct operations. To separate the
cost of Scala from JVM bytecode interpretation, we decompiled the JVM bytecodes
Scala generated into Java, rewrote this code to remove its overheads,
recompiled it, and verified that the bytecodes for the computational operations
remained unchanged.} First, since Scala's generic methods cannot use primitive
types (e.g., they must use the {\tt Double} class rather than a {\tt double}),
every generic method call allocates a new object for the value, boxes the value
in it, un-boxes it for the operation, and deallocates the object. In addition to
the cost of a {\tt malloc} and {\tt free}, this results in millions of tiny objects
for the garbage collector to process. 85\% of logistic regression's CPU cycles
are spent boxing/un-boxing.
Second, Spark's resilient distributed datasets (RDDs) force methods
to allocate new arrays, write into them, and discard the source array.
For example, a {\tt map} method that increments a field in a dataset
cannot perform the increment in-place and must instead create a whole
new dataset. This data duplication adds an additional factor of
$\approx2$x slowdown.
Third, using the
Java Virtual Machine adds an additional factor of $\approx3$x
slowdown over C++. This result is in line with prior studies, which have
reported 1.9x-3.7x for computationally dense
codes~\cite{google-loops,gheradi3d}. In total, this results in Spark
code running 51 times slower than C++.
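The second cause, data duplication, is easy to see in isolation. The
sketch below is hypothetical Python pseudocode (not Spark or Nimbus
code) that contrasts an in-place increment with the copy-per-pass style
an immutable dataset abstraction forces; the second version pays an
allocation and a full copy on every pass over the data.
\begin{verbatim}
# Hypothetical sketch of the data-duplication overhead; illustrative only.
def increment_in_place(values, delta):
    # Mutable data model: update each element where it lives.
    for i in range(len(values)):
        values[i] += delta
    return values

def increment_with_copy(values, delta):
    # Immutable data model: every pass allocates and fills a new
    # array, then discards the source array.
    return [v + delta for v in values]
\end{verbatim}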
\subsection{Implications of Optimized Tasks}\label{sec:implications}
To determine how much tasks running at C++ speeds could improve
performance, we replaced the logistic regression benchmark's Spark
Scala code with loops that take as long as the C++ implementations.
This represents the best-case performance of Spark calling into a
native method (there is no overhead).
Table~\ref{tab:melting} shows the results. While the computational
tasks run 51 times faster, on 100 nodes the overall computation
only runs 2.8 times faster. Worker nodes spend most of the time idle
because the central Spark controller
cannot schedule tasks fast enough. Each core can execute 250 tasks
per second (each task is 4ms), and 100 nodes (800 cores) can execute
200,000 tasks per second. We measured Spark's controller to be able to
issue $\approx$8,000 tasks per second.
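Put as a single comparison, the task rate the cluster can consume
versus the rate one controller can supply is
\[
100~\mbox{nodes} \times 8~\mbox{cores/node} \times
\frac{1~\mbox{task}}{4~\mbox{ms}} \approx 200{,}000~\mbox{tasks/s}
\;\gg\; 8{,}000~\mbox{tasks/s},
\]
so in steady state workers can be executing tasks at most
$8{,}000/200{,}000 = 4\%$ of the time.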
This control plane bottleneck is not unique to Spark.
Naiad~\cite{naiad} is the best available \textit{distributed} cloud
framework. In Naiad, worker nodes directly coordinate with one another
rather than acting through a central controller. While Naiad code is
in C\# rather than Scala and so sees overall better performance than
Spark, its all-to-all synchronization also becomes a bottleneck above
20 nodes. We defer detailed experimental results on Naiad to
Section~\ref{sec:data-analytics}.
Scheduling techniques such as Sparrow~\cite{sparrow},
Omega~\cite{omega}, Apollo~\cite{apollo}, Mercury~\cite{mercury}, Hawk~\cite{hawk} and
Tarcil~\cite{tarcil} address the scheduling bottleneck that occurs
when there are many concurrent jobs. In aggregate,
many jobs can execute more tasks per second than a single
controller can schedule. But since these jobs share underlying
computing resources, they need to be scheduled cooperatively to prevent
overloading workers or contention. Each of these systems proposes ways
for many separate, per-job controllers to coordinate their
resource allocation and scheduling decisions. These systems all solve
the problem of when the aggregate task rate of {\it many} jobs is
greater than what one controller can handle. Optimized tasks,
however, mean that a {\it single} job can saturate a controller.
None of these systems can distribute a single job's scheduling.
\subsection{Observation: Repetition}\label{sec:repetitive}
Cloud computing applications are increasingly advanced data analytics,
including machine learning, graph processing, natural language
processing, speech/image recognition, and deep learning. These
applications are usually implemented on top of frameworks such as
Spark~\cite{spark} or Naiad~\cite{naiad}, for seamless parallelization
and elastic scalability. A recent survey~\cite{spark-survey} of Spark
users, for example, shows 59\% of them use the Spark machine learning
library~\cite{spark-ml}. Efforts such as Apache Mahout~\cite{mahout}
and Oryx~\cite{oryx} provide machine learning libraries on top of
Spark. Cloud providers, in response to this need, now offer special
services for machine learning models~\cite{azure-ml, amazon-ml}.
One important property of analytics jobs is that their computations have
repetitive patterns: they execute a loop (or set of nested loops)
until a convergence condition is met. The Ernest system~\cite{ernest}, for
example, leveraged this observation to predict performance
and manage resources. Logistic regression, for instance, often
executes until its parameters have converged and are no longer changing or
until a fixed maximum number of iterations is reached (whichever happens first).
For example, Figure~\ref{fig:cross-validation} shows the execution
graph of the hold-out cross validation method, a common machine
learning technique used for training regression
algorithms~\cite{ml-book}. It has two stages, training and estimation,
which form a nested loop. The training stage uses an iterative
algorithm, such as gradient descent, to tune coefficients. The
estimation stage calculates the error of the coefficients and feeds
this back into the next iteration of the training phase.
Each iteration generates the same tasks and schedules them to the same
nodes (those that have the data resident in memory). Re-scheduling
each iteration repeats this work. This suggests that a control plane
that cached these decisions and reused them would schedule tasks much
faster and scale to support fast tasks running on more nodes. The next
section describes execution templates, a control plane abstraction
that achieves this goal.
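To make this repetition concrete, the driver-style pseudocode below is a
Python sketch of the pattern only; the helper names ({\tt Task},
{\tt controller.run}, {\tt update}, {\tt converged}) are hypothetical
and do not correspond to any framework's API. It regenerates an
identical batch of tasks on every iteration; only the parameters passed
to the tasks change.
\begin{verbatim}
# Illustrative sketch of a repetitive driver loop; not framework code.
def train(controller, partitions, max_iters):
    coeff = initial_coefficients()
    for it in range(max_iters):
        # Every iteration submits the same task graph over the same
        # partitions; only `coeff` (a task parameter) differs.
        tasks = [Task("find_gradient", part, params=coeff)
                 for part in partitions]
        gradients = controller.run(tasks)   # schedule and dispatch
        coeff = update(coeff, gradients)    # reduce step
        if converged(coeff):
            break
    return coeff
\end{verbatim}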
\begin{figure}
\centering
\includegraphics[width=2.5in]{figs/cross-validation.pdf}
\caption{Execution graph of training a regression algorithm. It is
iterative with an outer loop for updating model parameters based on the
estimation error, and an inner loop for optimizing the feature
coefficients.}
\label{fig:cross-validation}
\end{figure}
\section{Related Work}\label{sec:related}
This paper builds on a long history of related work from several disparate
fields: cloud computing, high performance computing, and programming languages.
\medskip\noindent\textbf{Fast data analytics:}
Within the database and parallel computing communities, prior work has explored the
computational inefficiency of Spark code, proposing new programming models and
frameworks to replace it~\cite{dimwitted,dmll}. Facebook's AI research group
has open-sourced GPU modules for the Torch machine learning
framework~\cite{fair}. There is also ongoing research on a common intermediate
language for Spark that provides a glossary of data-parallel optimizations
(including vectorization, branch flattening, and prediction), suggesting
performance in some cases even faster than hand-written C~\cite{nvl-mit,
nvl-platformlab}.
The trend shows that the next generation of
cloud computing frameworks will execute tasks that run orders of
magnitude faster than today's.
\medskip\noindent\textbf{Cloud programming frameworks:}
MapReduce~\cite{mapreduce} is a widely used programming model for processing
large data sets. Open source MapReduce frameworks such as Hadoop and
Hive~\cite{hadoop, hive} are I/O bound: they fetch stable input data from disks, and
save intermediate and final results to disks. Spark~\cite{spark} uses resilient
distributed datasets (RDDs) to perform computations on in-memory data, while
providing the reliability that data on disk provides. For optimized data analytics with
short tasks, however, Spark's centralized runtime system becomes a bottleneck.
While Nimbus also uses a centralized controller, execution templates enable
Nimbus to handle orders of magnitude higher task rates.
Naiad~\cite{naiad} is another framework for in-memory computations.
While the distributed
event-based runtime helps scalability without creating a centralized
bottleneck, the cost of synchronization dominates as the number of workers
grows. Logical-to-physical graph translation on Naiad nodes resembles the worker
templates in Nimbus; however, the lack of a centralized controller to resolve
inter-worker dependencies leaves the burden of synchronization on the runtime system.
Dataflow frameworks such as Dryad~\cite{dryad}, DryadLINQ~\cite{dryadlinq},
CIEL~\cite{ciel} and FlumeJava~\cite{flumejava} focus on abstractions for
parallel computations that enable optimizations and high performance. This
paper examines a different but complementary question: how the runtime scales
out to support very fast computations. In fact, our framework implementation
that incorporates execution templates, Nimbus, resembles the data flow model in
DryadLINQ~\cite{dryadlinq}.
\medskip\noindent\textbf{Distributed scheduling systems:}
There is a large body of work on distributed scheduling. These systems deploy various
mechanisms to provide efficient scheduling decisions with high throughput. For
example, Sparrow~\cite{sparrow} uses a stateless scheduling model based on
batch sampling and late binding. Omega~\cite{omega}, on the other hand,
leverages a shared global state through atomic transactions to improve the
allocation decisions. Apollo~\cite{apollo} benefits from a similar model, and
adds task completion time estimations to optimize the scheduling decisions.
Tarcil~\cite{tarcil} is a hybrid model based on both sampling and performance
estimation. Hawk~\cite{hawk} and Mercury~\cite{mercury} suggest a hybrid
distributed/centralized solution to realize better efficiency in the cluster.
At a very high level, all these systems solve the same problem as execution
templates do: providing higher task throughput in the runtime system. However,
there is an important and subtle difference: these systems distribute
scheduling across job boundaries. For a single job with a high task rate, the
scheduling still goes through a single node. The distributed solution only
solves the problem of multiple jobs producing a high aggregate task rate in the
cluster by directing the scheduling of each job to a different node. In a way,
execution templates are orthogonal to these systems. Every node in the
distributed implementation could benefit from execution templates to support
jobs with orders of magnitude higher task rate.
\medskip\noindent\textbf{High performance computing (HPC):}
MPI~\cite{mpi} provides an interface to exchange messages between parallel
computations, and is the most commonly used framework to write distributed
computations in the HPC domain. MPI does not include any support for
load-balancing or fault recovery. Frameworks such as Charm++~\cite{charmpp}
and Legion~\cite{legion} provide abstractions to decouple control flow,
computation and communication, similar to cloud data flow frameworks. Their
fundamental difference, however, is that they provide mechanisms and very
little policy; applications are expected to decide how their data is placed
as well as where tasks run. The scale and cost of the machines they are
designed for (supercomputers) is such that they demand more programmer effort
in order to achieve more fine-tuned and optimized use of hardware resources.
\medskip\noindent\textbf{Just-in-time (JIT) compilation:}
Finally, the idea of memoizing control flow and dynamic decisions in an
execution path closely resembles the approach taken in just-in-time (JIT)
compilers~\cite{dynamo} as well as the Synthesis kernel~\cite{synthesis}. Both
of these approaches note that particular decisions, while dynamic in the
general case, might lead to deterministic results in any particular case.
Therefore, optimizing that deterministic result can remove all of the
surrounding overhead. While a JIT compiler and the Synthesis kernel generate
optimized native code for particular functions, execution templates generate
optimized structures for scheduling distributed computations.
\section{Execution Templates}
\label{sec:templates}
We describe execution templates, a control plane abstraction for cloud
computing. Execution templates make it possible for workers to
inexpensively generate and execute large batches of tasks.
If a program has a loop in it, rather than resend all of the tasks for that
loop to the controller on every iteration, it should instead send them
once. For subsequent iterations, it can tell the controller ``execute
those tasks again.'' The same is true for the controller; rather than
resend all of the tasks to the workers, it can tell each worker to
``execute those tasks again.''
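A minimal sketch of this interaction in Python-style pseudocode is shown
below; the message handlers and helper functions ({\tt schedule},
{\tt compile\_worker\_templates}, {\tt instantiate}) are hypothetical
names rather than the Nimbus API, and the sketch only illustrates the
install-once, invoke-many-times idea.
\begin{verbatim}
# Illustrative sketch of template install/invoke; not the Nimbus API.
class Controller:
    def __init__(self):
        self.templates = {}              # template id -> recorded tasks

    def handle_start(self, tid):
        self.templates[tid] = []         # begin recording a basic block

    def handle_task(self, tid, task):
        self.templates[tid].append(task) # record the task ...
        self.schedule(task)              # ... and schedule it normally

    def handle_finish(self, tid):
        self.compile_worker_templates(self.templates[tid])

    def handle_invoke(self, tid, params):
        # Later iterations: a single message instantiates the block.
        for task in self.instantiate(tid, params):
            self.schedule(task)
\end{verbatim}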
The execution and control structure of cloud frameworks places
requirements on how templates operate. Figure~\ref{fig:architecture}
shows the architecture of a cloud computing framework. A {\it driver}
program generates tasks, which it sends to a centralized {\it
controller}. The driver and controller may or may not reside on the
same node. The controller processes these tasks and dispatches them to
a cluster of {\it workers}. The controller balances load across
workers and recovers execution when one fails.
\begin{table}
\begin{tabular}{ll} \toprule[2pt]
{\bf Execution template} & {\bf JIT compiler}\\ \midrule[1pt]
Template & Function \\
Task (Driver$\rightarrow$Controller) & Bytecode instruction \\
Task (Controller$\rightarrow$Worker) & Native instruction \\
Data object & Register \\
\bottomrule[2pt]
\end{tabular}
\caption{Execution templates are analogous to a just-in-time
compiler for a data analytics control plane.}
\label{tab:analogy}
\end{table}
Templates optimize repeated control decisions. In this way, they are
similar to a just-in-time (JIT) compiler for the control plane. A JIT
compiler transforms blocks of bytecodes into native instructions;
execution templates transform blocks of tasks into dependency graphs
and other runtime scheduling structures. Table~\ref{tab:analogy}
shows the correspondences in this analogy: an execution template is
a function (the granularity JIT compilers typically operate on), a
task from the driver to the controller is a bytecode instruction, and
a task executing on the worker is a native instruction.
The rest of this section describes six requirements for how templates
operate. While the analogy to JIT compilation fits well and many of
these requirements follow from it, the driver-controller-worker
execution model adds an additional requirement, the need to validate and patch
templates before executing them.
\medskip\noindent\textbf{1. Templates must be dynamically generated.}
Controllers and workers do not have the driver program. They receive a
stream of tasks, which they dynamically schedule and execute. They
therefore need to generate templates in response to this dynamic
stream of information. Furthermore, because a controller can
dynamically shift how it schedules tasks to workers (e.g., in response
to load imbalance or changing resources), it needs to be able to
correspondingly dynamically create new templates. Put another way,
templates cannot be statically compiled: they must instead be created
just-in-time.
\medskip\noindent\textbf{2. Templates must be parameterizable.}
Similarly to how a program must be able to pass parameters to
just-in-time compiled functions, a driver must be able to pass
parameters to execution templates. Analytics jobs involve many
repetitions of the same loop or computation, but the repetitions are
not identical. The cross-validation job in
Figure~\ref{fig:cross-validation}, for example, updates {\tt
parameters}, which are then passed to the optimizer block. Each
instantiation of the optimizer block must fill in {\tt parameters} to
the {\tt find\_gradient} tasks. In addition to data parameters,
templates also require control parameters, such as which task
identifiers to use, to ensure that two workers do not use the same
globally unique identifier.
\medskip\noindent\textbf{3. Workers must locally resolve
dependencies.} Large blocks of tasks often have data dependencies
between them. For example, the line {\tt coeff += gradient} in
Figure~\ref{fig:driver-code} cannot run until the previous line
computing {\tt gradient} completes. For a worker to be able to execute
the tasks for both lines of code locally, without coordinating with
the controller, it must know this dependency and correctly determine
when that line of code can run. This is similar to how a CPU uses data
flow to know when it can execute an instruction that depends on the
output of other instructions.
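A worker can meet this requirement with a simple ready-counter scheme,
sketched below in hypothetical Python pseudocode ({\tt execute} and the
{\tt deps} encoding are assumptions, not the actual worker template
format).
\begin{verbatim}
# Illustrative sketch of local dependency resolution on a worker.
def run_template_block(tasks, deps):
    # deps[t] = set of tasks in this block that must finish before t.
    remaining = {t: set(deps[t]) for t in tasks}
    ready = [t for t in tasks if not remaining[t]]
    while ready:
        t = ready.pop()
        execute(t)                    # run the optimized task
        for u in tasks:               # release dependents locally,
            if t in remaining[u]:     # without contacting the controller
                remaining[u].discard(t)
                if not remaining[u]:
                    ready.append(u)
\end{verbatim}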
\medskip\noindent\textbf{4. Workers must directly exchange
data.} Optimized tasks read and write in-memory data objects on
workers. Often, within a single template, the output of a task on one
worker is needed as the input for a task on another. As part of
executing the template, the two workers need to directly exchange this
data. This is similar to how two cores accessing the same memory need
to be able to update each other's caches rather than always write
through to main memory.
\medskip\noindent\textbf{5. Controllers must be able to quickly validate
and patch templates.} The driver-controller-worker execution model
adds additional complexities that JIT compilers do not need to handle.
Just as function calls assume that arguments are in certain registers
or stack positions, when a controller generates execution templates
for workers, it must assume certain preconditions on where data is
located. However, a driver can insert new tasks at any time, which
might violate these preconditions. For example, it might insert
instructions that increment a variable and store it in a different
register. When a controller instantiates a template, it must validate
whether a template's preconditions hold, and if not, insert tasks to
patch it. In the above example, the controller needs to detect the
variable is now in a new register and issue a move instruction to put
it back in the register the function call expects.
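In pseudocode, instantiation therefore becomes a check-then-patch step;
the sketch below uses assumed helper names ({\tt holds},
{\tt patch\_tasks\_for}, {\tt invoke}) and is not the actual controller
logic.
\begin{verbatim}
# Illustrative sketch of template validation and patching.
def instantiate_template(controller, template, params):
    for pre in template.preconditions:  # e.g., object X resides on worker 3
        if not controller.holds(pre):
            # Insert corrective tasks (e.g., a copy) so the cached
            # scheduling decisions remain valid.
            for task in patch_tasks_for(pre):
                controller.schedule(task)
    controller.invoke(template, params)
\end{verbatim}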
\medskip\noindent\textbf{6. Templates must be fast.} Finally, as the
overall goal of templates is to allow the control plane to support
optimized tasks at scale, the performance gains of instantiating them
must be greater than their cost to generate and instantiate.
Execution templates are tightly entwined with a framework's data model
and execution. The next section describes a concrete implementation of
them in the context of an analytics framework designed to
execute optimized tasks at scale.
\iffalse
{\bf Execution templates correspond to basic blocks; there are two types of
them: control templates and worker templates.}
Execution templates operate on the granularity of basic blocks. There are two
types of execution templates. {\it Controller templates} are installed on a
controller by the driver program; they encode the task graph of a basic block
across all of the workers. {\it Worker templates} are installed on workers by
the controller; they encode the task graph of a basic block for that worker.
{\bf controller/worker templates are seamlessly installed on-the-fly.}
The first time, explicit tasks are sent normally while being collected into a
template in parallel. For subsequent iterations, templates are invoked.
{\bf They are saved as parameterized templates because some parts change.}
For example task identifiers from one instance of a basic block to another
instance should change, otherwise there is no way for task replication or
dealing with failures.
{\bf Template generation has to be dynamic; a static solution does not work.}
The driver program, when written, is agnostic to cluster size and partitioning, so the
controller template needs to be generated on the fly. There are
stragglers and failures, so worker templates need to be generated dynamically, too.
One compiled worker template does not work; they need to be adaptable.
\subsection{Discussion}
{\bf The goal is to minimize the instantiation cost of the templates.}
We leverage a mutable data model: one only fills in the input data objects
for the template, not every instance where they are referenced or mutated. The data
flow is encoded within the template itself.
{\bf The goal is for the worker to compute a batch of tasks without controller intervention.}
Worker templates need to encode the order in which tasks execute. Also, there
are cases where workers need to exchange data. The controller encodes the data
communication within the worker template as well, as proactive synchronization:
it tells the workers in advance where the data exists and how to fetch it.
Execution templates operate on the granularity of basic blocks. There
are two types of execution templates. {\it Controller templates} are
installed on a controller by the driver program; they encode the task
graph of a basic block across all of the workers. {\it Worker
templates} are installed on workers by the controller; they encode
the task graph of a basic block for that worker. Once a template is
installed, it can be invoked with a single message. When the driver
program invokes a controller template, the controller typically
invokes the corresponding worker template on each worker. For the sake of
exposition, in the rest of this section we focus on how a driver
creates controller templates, describing how worker templates differ
at the end.
To create a template, the driver program sends a start message to the
controller, which instructs it to start building a template. As it
receives each subsequent task, the controller both schedules it
normally and adds it to the template. The template stores control
dependencies between tasks as well as which data tasks access. At the
end of the basic block, the driver sends a finish message to the
controller. At this point the controller processes the template it has
built and generates worker templates from it.
Nested templates occur when there is an inner loop inside an outer
loop. Nested templates imply that there are at least three basic
blocks: the outer loop code before the inner loop, the inner loop, and
the outer loop following the inner loop.
There are two approaches to support nested templates. In the first,
the outer loop is broken into two templates, one before the inner loop
and one after. The inner loop is given its own template. When the
beginning outer template executes, it finishes by instantiating the
inner template. The inner template instantiates itself, or, when it
completes, it instantiates the ending outer template. The ending outer
template, depending on whether the outer loop has completed, either
instantiates the beginning outer template or the stage after the loop.
In the second approach, a template has a second parameter, a task
identifier. This task identifier is the task that should execute to
indicate that the inner loop is complete. The driver code within the
outer loop, that creates the inner loop, can create new tasks that are
dependent on this task identifier. The advantage of this approach over
the first is that the scheduler can perform analysis and issue
commands for the later tasks immediately, without waiting for the
inner loop to complete. This can reduce the idle time at workers. For
small loops with a small number of iterations, this optimization can
be helpful.
A single basic block may require multiple worker templates because of
different entry conditions. For example, consider a loop whose
first iteration initializes variables, while subsequent
iterations do not. If the loop is an inner loop, the initialization in
the first iteration may execute many times so it is worth encoding it
in a template. The controller will generate two templates, one for the
first iteration and one for subsequent iterations.
The controller cannot generate worker templates until the controller
template is complete because a given basic block
Processing and executing multiple instances of the same basic block results in
common patterns both on the controller and workers. Execution templates are an
abstraction to memoize the common computations and expedite the processing of
recurring patterns.
For example, consider executing the driver program in
Figure~\ref{fig:driver-code}. Each time the driver executes the optimizer block it
submits a parallel set of tasks for computing the gradient over each partition of
the training data. For parallelism, data is partitioned into at least as many
partitions as there are cores on the worker nodes. Usually there is more than one
partition per core for the sake of straggler mitigation. So on a cluster with
100 workers, 8 cores each, and 10 partitions per core, there are as many as
8,000 tasks for computing the gradient stage.
The first time that the controller receives the tasks in a basic block it installs
a {\bf controller template} as a parameterized DAG of the tasks in the block. It is
parameterized since two iterations of the same basic block, while very similar
in structure, have differences too. For example, task identifiers from one
instance of a basic block to another should change; otherwise there
is no way to replicate tasks or deal with failures. For subsequent
executions of the same basic block, the driver does not submit the tasks
explicitly. Instead it only invokes the controller template at the controller
with new parameters.
The controller partitions and sends the tasks within a basic block to the workers for
execution. Again, for parallelism and straggler mitigation there are usually
tens of tasks that a worker receives for each basic block. The first time
controller schedules the tasks in a basic block, it sends every task to each
worker explicitly. It also installs a {\bf worker template} on each worker as
a block of tasks. For the next iterations, controller invokes the template with
new parameters on the workers without explicitly sending each task.
The two types of templates are decoupled to enable dynamic scheduling decisions.
If a worker fails or the controller re-balances load across workers, two
invocations of the same controller template can result in two different
partitioning of tasks among workers. For every partitioning strategy a separate
worker template is installed on the workers. We argue that since the tasks are
on the order of sub-milliseconds, the same partitioning strategy is reused
multiple times, so worker templates are helpful.
With templates, executing the driver program in Figure~\ref{fig:driver-code}
becomes a series of template invocations at the controller and workers. For
each iteration of a basic block, the driver only sends one message to the controller
and the controller only sends one message per worker.
Specific details of execution templates depend on the underlying framework.
For example, which parts of the template remain the same and which parts are
parameterized depends on how the driver specifies the DAG; or it depends on the data
model that the controller uses to drive workers (e.g. mutable or immutable
objects). The next subsection suggests a few goals for a runtime design that benefits
the most from execution templates. Section~\ref{sec:design} describes our
framework design, Nimbus, as one possible choice and gives details on how
controller and worker templates look under Nimbus in \S\ref{sec:ct} and
\S\ref{sec:wt}.
\subsection{System Design Goals}
There are two main goals in designing the framework and its API to
benefit the most from execution templates:
\begin{enumerate}[leftmargin=*]
\item Detecting the basic blocks with the largest number of tasks in the
driver program: this lets the runtime system amortize the cost of
template instantiation over a larger number of tasks. Note that in the
extreme case, every task is a basic block. To achieve this goal we
suggest that the driver program should define the {\bf task
dependencies explicitly}. When it is possible, the data flow among
tasks should not depend on task execution itself. For example, if
there is a reduce operation, driver should explicitly specify the
output of the reduce operation and how it is fed in to other tasks,
rather than leaving the control flow dependent on executing the
reduce operation. In Spark's terminology, the {\it stage} boundaries
are marked by any reduce or group operation, i.e. a {\it wide
dependency}~\cite{spark}. The Spark controller cannot execute an
application beyond a single stage: the task graph depends on the
outcome of the wide dependency.
\item Minimizing the variable parameters of execution templates: this
results in less computation per template instantiation. We suggest a
{\bf mutable data model} to reduce the data references in the
templates. This way, if the output of one task is the input of
another task, the template can keep a single parameter for both
references. In Spark, by contrast, resilient distributed datasets
(RDDs) are immutable: upon any change to an RDD, a new RDD is created
and referenced in the task graph.
\end{enumerate}
\section{Execution Templates Details}
\label{sec:templates-details}
\begin{figure*}
\centering
\setlength\tabcolsep{-3pt}
\begin{tabular}{cc}
\subfigure[ A controller template represents the common structure of a task
graph's metadata. It stores dependencies as indices into an array of task
identifiers, and versions as offsets from the version context at the beginning
of the task graph.]
{
\includegraphics[width=3.0in]{figs/controller-template.pdf}
\label{fig:controller-template}
}
&
\subfigure[ A worker template represents the common structure of a task graph's
execution. Each worker stores the task graph for execution, including copy tasks
and bindings, with task and instance identifiers as indices into arrays.]
{
\includegraphics[width=3.0in]{figs/worker-template.pdf}
\label{fig:worker-template}
}
\end{tabular}
\caption{Controller template (left) and worker templates (right) for the task
graph in Figure~\ref{fig:lr-example}.}
\end{figure*}
This section describes execution templates, an abstraction for memoizing
control plane computations and caching commands issued to the workers. For
each basic block in the execution DAG, execution templates allow caching the
similar segments among all the instances and parameterizing the minor variable
segments specific to each instance. To this end, execution templates break the
generation and execution of a basic block into two parts:
\begin{enumerate}
\item \textbf{Controller templates} are installed on the controller. They
allow the driver to instantiate the entire task graph for an iteration of a
loop with a single command. They help the controller avoid rebuilding the task graph
for each iteration from scratch, and memoize the common versioning
computations.
\item \textbf{Worker templates} are installed on workers. They allow a
controller to instantiate a batch of commands at each worker per iteration with
a single command. They help the controller avoid from-scratch computations for
assignment, adding copy tasks, and binding. In addition, workers can cache
fixed portions of the commands and only receive the minor variable parts for
instantiation.
\end{enumerate}
The two types of templates are decoupled to enable dynamic control decisions.
If a worker fails or the controller rebalances load across them, two
invocations of the same controller template can result in different worker
templates. A controller template represents the complete execution across all
workers, while worker templates are the per-worker partitions of the execution.
The rest of this section describes each segment of execution templates in
detail.
\subsection{Controller Templates} \label{sec:ct}
A controller template memoizes versioning computations as well as construction
of a task graph. For an iterative block the relative task dependencies remain
the same. Thus, a controller template is stored as a task graph where tasks
and dependencies are indices into an array of identifiers. It takes an array
of task identifiers as input, which it uses to populate the task graph for each
iteration. In Figure~\ref{fig:controller-template}, for example, the task
identifier for A and the first element of C's dependencies is the first
element, $t_1$, of the array passed in.
The data versions that tasks use within a loop iteration are deterministic
computations from the version context at the beginning of the iteration. Rather
than recompute the version of each object for each task, object versions are
stored as offsets from the version context at the start of the template.
Computing the version of an object requires taking the maximum version of all
of a task's dependencies. For applications that have hundreds of thousands of
data objects and many wide dependencies, computing this can be extremely
expensive: it requires $O(TDW)$ comparisons, where $T$ is the number of tasks,
$D$ is the number of objects, and $W$ is the dependency width. Analyses of
highly regular task graphs can be much simpler, but computations on irregular
structures, such as graph partitions and geometric volumes with boundaries,
require this generality. The controller therefore memoizes the version offset
to avoid the expensive computation.
When the controller receives a request from the driver to instantiate a
template, it creates or reuses an obsolete copy of the graph structure, stored
in a compact format. It then iterates over the input array of task ids,
filling in the identifiers and dependencies from the template. Also, given the
version context at the instantiation, all data access versions are resolved
with a single offset addition. Figure~\ref{fig:controller-template} shows how a
controller template of the running task graph example is instantiated.
Nested templates occur when there is an inner loop inside an outer loop. In
this case the outer loop is broken in to two templates, one before the inner
loop and one after. The inner loop is given its own template. When the
beginning outer template executes, it finishes by instantiating the inner
template. The inner template instantiates itself, or, when it completes, it
instantiates the ending outer template.
\subsection{Worker Templates} \label{sec:wt}
A worker template has two halves. The first half is centralized at the
controller and allows the controller to memoize the assignment, adding copy
tasks, and binding decisions. The second half is distributed among the workers
and allows the workers to cache common segments of controller commands locally
such that controller can instantiate a batch of commands per worker by sending
only variable parameters specific to each iteration. The batch of commands
generates a local task graph of copy and compute tasks per worker. An example
of a worker template is depicted in Figure~\ref{fig:worker-template}.
The task assignment strategy among workers does not
change as frequently as each iteration of a code block. Especially, this holds
for applications with short tasks; even blocks with multiple stages take only a few
hundred milliseconds to complete. This is insignificant compared to the
time it takes to migrate data partitions over the network for potential load
balancing, or the time to detect possible stragglers. Hence, the controller caches the
assignment decisions for tasks within a code block and reuses the
decisions for a few iterations until there is load imbalance or a worker failure.
For a given code block and task partitioning among workers, the required copy
tasks for data replication (to realize parallel execution) or data exchange
among workers (to gather data, for example, for a reduction) are deterministic.
The controller only determines the copy tasks once, and reuses them for later
iterations. Also, how an input physical instance evolves as a result of
executing the tasks remains deterministic. The controller only needs to bind the
physical instance the first time it is accessed by one of the template tasks. The later
bindings of the same instance are deterministic for the remaining accesses. For
example, in Figure~\ref{fig:worker-template}, once one instance is picked as
$d_2$ for $t_1$ the same instance is passed to $t_3$ as one of the scratch
pieces.
The controller generates a task graph per worker with copy and compute task
identifiers and dependencies as indices into an array. Bindings of
physical instances are also encoded within the graph as indices into an array. For
each iteration of the loop, the controller fills in the task identifiers given by the
driver and adds new identifiers for copy tasks. The controller also fills in the
instance binding for the data objects the first time they are accessed, and the
rest of the bindings are resolved by following the binding indices.
Figure~\ref{fig:worker-template} shows how worker templates are instantiated.
For the first iteration, the controller installs a per-worker task graph with
parameterized task ids and instance ids. For later iterations, the controller only
sends new task and instance ids to instantiate a per-worker task graph.
As an optimization, the controller could put extra effort into bindings such that the
same physical instances are reusable for back-to-back executions of the same
block. This way the binding steps could be completely bypassed for iterative
loops. We explain this optimization in detail in \S~\ref{sec:cascaded}.
\fi
\section*{Introduction}
\diff{Marriage and descent form families, the basic units of society. Kinship relationships stipulate the alliance of families and organise social structures.
Kinship is considered one of the oldest and most common forms of human social organisation \cite{levi1969elementary, service1962primitive}.}
In various indigenous societies, people constitute a cultural association, or clan, in which they are culturally (but not necessarily biologically) related \cite{fox1983kinship, levi1969elementary, maddock1969alliance}. Marriage and descent relationships are, thus, determined by clan attributions \diff{(i.e. the clan to which individuals belong)}. Specifically, marriage within a clan is often prohibited by the symbolic incest taboo \cite{levi1969elementary, leach1954political, malinowski1963sex, murdock1949social, hopkins1980brother, Hill2011}. The rule can further specify the clan from which one must select a mate and that to which children must belong \cite{levi1969elementary}.
\diff{Kinship relationships also regulate social relationships, such as cooperation or rivalry \cite{fox1983kinship}.
The elucidation of kinship systems has been a core theme in cultural and evolutionary anthropology \cite{fox1983kinship, shenk2011rebirth}. Anthropologists have characterised kinship systems by focusing on the affinal network of clans, namely, kinship structure \cite{levi1969elementary}; or by
the categorisation of relatives by ego, namely, kinship terminology \cite{murdock1949social, passmore2021kin}. Here, we consider kinship structures.}
\diff{Kinship structures are diverse yet patterned. They can be classified into several types, according to the length of cycles composed by the marriage and descent relationships of clans \cite{levi1969elementary, white1963anatomy}.
For example, if a rule exists for women in clan X to marry men in clan Y, the marriage relationship is represented by $X \Rightarrow Y$.
If everyone can potentially have mates, the relationships of clans should be $X \Rightarrow Y \Rightarrow \cdots \Rightarrow X$. Here, marriage relationships of clans constitute a cycle, the length of which is termed the marriage cycle $C_m$ (e.g., $C_m = 3$ if $X \Rightarrow Y \Rightarrow Z \Rightarrow X$). Similarly, if the children belong to clan B, and their father to clan A, it represents the descent relationship $A \rightarrow B$. This relationship also constitutes a cycle, and its length is termed the descent cycle $C_d$.
Notably, a clan is not always a residence group. Family members of different generations can have different clan attributions \cite{romney1958simplified}.
For example, when children inherit their father's surname but live in their mother's location, the children's attribution, determined by both surname and location, differs from those of both their father and mother.
(This can also be regarded as children belonging to several associations following each parent simultaneously \cite{service1962primitive}.)}
Kinship structures are characterised by marriage cycle $C_m$ and descent cycle $C_d$.
The classes include the incest structure -- conducting endogamy without the symbolic incest taboo ($C_m = C_d = 1$, i.e. without division of clans); dual organisation -- a direct exchange of brides between two clans ($C_m = 2, C_d = 1$); generalised exchange -- an indirect exchange of brides among more than two clans ($C_m \ge 3, C_d = 1$); and restricted exchange -- a direct exchange of brides with the flow of children to different clans ($C_m = C_d = 2$). Structures with $C_m \ge 3$ and $C_d \ge 2$ are rarely observed.
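These cycle lengths can be read directly off the clan-level marriage and descent maps. The following Python sketch is purely illustrative (the clan labels and the dictionary encoding are our own, not part of the model): the cycle length is simply the number of steps needed to return to the starting clan.
\begin{verbatim}
# Illustrative sketch: cycle length of a clan-level relation map.
def cycle_length(relation, start):
    # `relation` maps each clan to the clan it points to
    # (bride flow for C_m, child flow for C_d).
    clan, steps = relation[start], 1
    while clan != start:
        clan, steps = relation[clan], steps + 1
    return steps

# Generalised exchange among three clans: X => Y => Z => X,
# with children staying in their father's clan.
marriage = {"X": "Y", "Y": "Z", "Z": "X"}
descent  = {"X": "X", "Y": "Y", "Z": "Z"}
C_m = cycle_length(marriage, "X")   # 3
C_d = cycle_length(descent, "X")    # 1
\end{verbatim}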
\diff{In this paper, we discuss the evolution of three types of descent systems.
When children belong to the same clan as their father (or mother), the descent system is classified as patrilineal (or matrilineal) descent, respectively. In these cases, $C_d = 1$, and the paternally (or maternally) inherited trait is significant for characterising clans.
Conversely, when $C_d > 1$, children inherit cultural traits from both parents independently and have clan attributions different from either parent. When both paternally and maternally inherited traits are significant for characterising clans, the system is termed double descent. In the above cases, traits are assumed to be independently inherited through paternal and maternal lines. (In some societies, however, people can choose either their father's or mother's traits to inherit in each generation (ambilineal descent), or they are concerned only with genealogical distance (bilateral descent); these cases exceed the scope of our model \cite{levi1965future, service1962primitive}.)}
Ethnographic reports provide examples of various descent systems \cite{levi1969elementary, double_descent, Goody1961}. Global data indicate that patrilineal descent is dominant over matrilineal or double descent \cite{murdock1969standard}.
Evolutionary anthropologists attribute this imbalance to the higher investment efficiency of reproductive resources for sons than for daughters \cite{hartung1981paternity, holden2003matriliny, shenk2011rebirth}.
However, this perspective ignores the distinction of cultural associations within societies regarding symbolic traits. Indeed, the identity of the categorical descent group is more emphasised than genetic relatedness in some cooperative actions \cite{alvard2003kinship, alvard2011genetic}.
\diff{Moreover, cultural traits of families and kinship are slow to change, because they are inherited in families and regulated by social norms \cite{cavalli1981cultural}.
Empirical studies confirm such slow changes \cite{guglielmino1995cultural, mulder2001study, minocher2019explaining}.
To consider the inheritance of family traits from parents or their relatives with slight changes, it is appropriate to model their long-term evolution through the accumulation of small variations, as represented by mutations.
Notably, families constitute society, whereas society provides the environment for families.
Consequently, we adopted a framework involving the multi-level evolution of families and societies.
Multi-level evolution is a framework generally applied for discussing the evolution of group-level structures in hierarchical systems \cite{traulsen2006evolution, takeuchi2017origin, spencer2001multilevel, turchin2009evolution}.
In this study, we aimed to reveal the emergence of various kinship structures and descent systems from family interactions depending on environmental conditions.}
\diff{We thus modelled the family behaviour in indigenous societies. In the model, evolution is considered at two levels: that of the family, which is an individual agent of the model; and that of society, which is a group of families.
We assigned each family a trait $t$ and a mate preference $p$.
Social relationships of families -- including cooperation, competition, and marriage -- are determined by their traits and preferences.
Families grow through interactions with other families, which subsequently leads to the growth of societies.
As a result of this multi-level evolutionary simulation, $t, p$ values of families diverge and form clusters within each society. These clusters are exogamous groups of families, which can be regarded as clans. By tracing the marriage and descent relationships of the emergent clans, we demonstrate the evolution of kinship structures and descent systems.
Previously, we have constructed an intricate model to illustrate the evolution of kinship structures \cite{itao2020evolution}.
Here, we introduce a simplified model suitable for studying the evolution of both kinship structures and descent systems, together with analytical estimates and empirical tests on a cross-cultural database.}
For data analysis, we used the global ethnographic database of premodern societies, the Standard Cross-Cultural Sample (SCCS) \cite{murdock1969standard, kirby2016d}. The SCCS contains 186 societies, considered culturally and linguistically independent of each other \diff{(even if, strictly speaking, some correlation exists due to shared ancestry \cite{minocher2019explaining}).} The data allowed us to quantitatively analyse cultural adaptations to environments \cite{marsh1967comparative, bernard2017research}.
Previous studies have investigated conditions that generally favour cousin marriages \cite{hoben2016factors} and polygamy \cite{white1988causes}. However, it is difficult to further explain the diversity in cousin marriage and kinship structures solely from correlation analyses \cite{racz2020social}.
Thus, we demonstrate that the collaboration between theoretical simulation and statistical analysis can enable us to unveil the origins of, and conditions for, each kinship structure.
The remainder of this paper is organised as follows. In the next section, we introduce a simplified model. Then, using evolutionary simulations, we demonstrate the emergence of kinship structures and descent systems, and uncover the conditions for their emergence. We also estimate these conditions analytically. Next, by analysing the SCCS, the theoretical results are verified. Finally, we discuss how the present method, which combines theoretical models and empirical data analysis, is relevant to exploring anthropological phenomena.
\section*{Model}
\begin{figure}[tb]
\centering
\includegraphics[width= 1.0\linewidth]{fig_scheme2.eps}
\caption{Schematic of the model.
(A) Life cycle of the model. Societies (green) consist of families (blue), whose population (black) grows. The grey frame represents a single generation. Families grow through interactions with other families in the same society.
When the population of a family (society) exceeds a given threshold, the family (society) splits. Subsequently, another society is removed from the system at random to keep the number of societies fixed.
(B) Families cooperate with kin and mates (blue and orange solid lines), and conflict with rivals (red dashed line) depending on their traits $\bm{t}$ and mate preferences $\bm{p}$.
Families $i$ and $j$ are kin (blue) when $|\bm{t}^i - \bm{t}^j| / \tau$ is sufficiently small, mates (orange) when $|\bm{t}^i - \bm{p}^j| / \tau$ or $|\bm{p}^i - \bm{t}^j| / \tau$ is small, and rivals (red) when $|\bm{p}^i - \bm{p}^j| / \tau$ is small. Only the relationships with the upper-left family are shown. In the figure, we plotted the relationships in a two-dimensional space for simplicity. In the model, however, $\bm{t}$ and $\bm{p}$ are both two-dimensional. Thus, the relationships are considered in four-dimensional space.
}
\label{fig:scheme}
\end{figure}
The model is described below in general terms (see the Methods section for further details). Fig. \ref{fig:scheme} shows a schematic of our model. Families grow by interacting with other families in the same society (Fig. \ref{fig:scheme}(A)).
\diff{Here, we ignore the explicit interaction between societies including migration, for simplicity. However, the following results are robust against slight migrations, as shown in Fig. S1.}
At the time of marriage, family members independently build new families of their own. The society splits in half when the number of families therein doubles its initial value $N_f$. At this time, another society is removed at random; thus, the number of societies in the entire system remains fixed at $N_s$. However, the number of families fluctuates between $0$ and $2 N_f$. This process introduces society-level selection, such that societies that grow at a faster rate replace others. This can be interpreted as invasion, imitation, or the coarse-grained description of a growing system. This framework, known as the multi-level selection, has been widely adopted in biological and social evolution studies to explain group-level structures \cite{itao2020evolution, itao2021evolution, Traulsen2006, spencer2001multilevel, takeuchi2017origin, Wilson2003, turchin2009evolution}.
\diff{Previously, we considered a model with three layers, including the intermediate layer of ``lineages'' between families and societies. Here, we simplify the model by eliminating it, to explore the generality of the results and to be suitable for analytical calculations \cite{itao2020evolution}.}
\diff{Moreover, each family has a pair of cultural traits and mate preferences that are culturally transmitted to the next generation. The traits can represent any social features by which people can measure their cultural similarity, for instance, surnames, occupations, or totems \cite{levi1962pensee}.
In the following section, we demonstrate that initially uniform traits gradually diverge into discrete clusters that distinguish family groups.}
Marriage occurs when men's traits are close to women's preferences. In our model, this point is the sole asymmetry between men and women, consistent with
anthropological studies stating that in most societies, brides' families determine whether grooms are suitable for marriage \cite{levi1969elementary, levi1962pensee}.
There are two pathways for cultural transmission: paternal and maternal. Hence, we require the two-dimensional trait $\bm{t} = (t_1, t_2)$ and preference $\bm{p} = (p_1, p_2)$. Thus, when a man in family $i$ and a woman in family $j$ are married, their children will have the trait $\bm{t} = (t_1^i, t_2^j)$ and preference $ \bm{p} = (p_1^i, p_2^j)$.
At the time of cultural transmission, we add noise $\bm{\eta} = (\eta_1, \eta_2)$ to $\bm{t}$ and $\bm{p}$, independently sampled from a normal distribution with mean $0$ and variance $\mu^2$. Similar to genetic mutations in evolutionary biology, cultural traits are slightly modified when they are transmitted \cite{cavalli1981cultural}. Such cultural traits are used to categorise social groups, even without genetic relatedness \cite{sperber2004cognitive}.
\diff{Previously, we assumed that $t_1$ and $p_1$ are inherited from the father, and $t_2$ and $p_2$ are inherited either from the father or the mother, depending on the families' strategies \cite{itao2020evolution}. However, this assumption limits the evolution of descent systems, as the matrilineal descent system is set to be harder to evolve.
Here, we revised this to enable discussion on the evolution of various descent systems. (Notably, paternal and maternal traits are still supposed to be inherited independently. Hence, those descent systems in which both parents' traits are multiply referred to, exceed the scope of our model.)}
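A minimal sketch of this transmission step is given below in Python pseudocode; the tuple layout and the function name are our own illustrative choices, but the parameters follow the model: the child combines the groom's paternal components with the bride's maternal components, each perturbed by independent Gaussian noise with mean $0$ and variance $\mu^2$.
\begin{verbatim}
# Illustrative sketch of cultural transmission at marriage.
import random

def child_culture(groom, bride, mu):
    # groom = (t1, t2, p1, p2) of the husband's family,
    # bride = (t1, t2, p1, p2) of the wife's family.
    t = (groom[0] + random.gauss(0, mu), bride[1] + random.gauss(0, mu))
    p = (groom[2] + random.gauss(0, mu), bride[3] + random.gauss(0, mu))
    return t, p
\end{verbatim}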
First, we introduced cooperative relationships with cultural kin and mates (blue and orange solid lines in Fig. \ref{fig:scheme}(B)). Families cooperate with those who have traits similar to their own, and those who prefer (or are preferred by) them.
\diff{In the model, the degree of cooperation between family $i$ and $j$ is given by $\exp(-\min(|\bm{t}^i-\bm{t}^j|, |\bm{t}^i-\bm{p}^j|, |\bm{p}^i-\bm{t}^j|) ^2/\tau^2)$, where $|\bm{t}^i-\bm{t}^j| = \sqrt{(t_1^i - t_1^j)^2 + (t_2^i - t_2^j)^2}$ represents Euclidean distance and $\tau$ represents the tolerance for similar traits and preferences.
By averaging this degree for families in the same society, we calculated the density of cooperative families \emph{friend}$_i$ for each family $i$.}
A smaller \emph{friend} value implies that the family obtains less cooperation, which lowers its growth rate; the parameter $d_c$ sets the increment in the death rate due to this lack of cooperation.
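For illustration, the cooperation term can be computed as in the following Python sketch (a minimal vectorised illustration of the definitions above, not the authors' implementation; the array layout is an assumption):
\begin{verbatim}
import numpy as np

def friend_density(T, P, tau=1.0):
    """Density of cooperators for each family: the cooperation degree
    exp(-min(|t_i-t_j|, |t_i-p_j|, |p_i-t_j|)**2 / tau**2) averaged over
    all families in the society.  T and P are (N, 2) arrays of traits
    and preferences."""
    d_tt = np.linalg.norm(T[:, None, :] - T[None, :, :], axis=-1)
    d_tp = np.linalg.norm(T[:, None, :] - P[None, :, :], axis=-1)
    d_pt = np.linalg.norm(P[:, None, :] - T[None, :, :], axis=-1)
    d = np.minimum(np.minimum(d_tt, d_tp), d_pt)
    return np.exp(-d**2 / tau**2).mean(axis=1)
\end{verbatim}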
\begin{table}[tb]
\caption{Parameters used in the model. In the results described below, the values of $b, \mu, \tau, N_f,$ and $N_s$ are fixed to those shown in the table, unless the value is described explicitly.}
\label{table:param}
\centering
\begin{tabular}{l|l|c}
Sign & Explanation & Value \\ \hline
$b$ & Intrinsic growth rate & 5.0 \\
$\mu$ & Mutation rate for $\bm{t}, \bm{p}$ & 0.1\\
$\tau$ & Tolerance for similarity & 1.0\\
$N_f$ & Initial number of families in society& 50\\
$N_s$& Number of societies in a system& 50\\
$d_c$ & Decline in mortality with cooperation& Variable\\
$d_m$ & Increase in mortality with competition& Variable\\
$\bm{t}$&Cultural traits of family& Evolve\\
$\bm{p}$&Preferences for groom traits& Evolve
\end{tabular}
\end{table}
Next, we introduced competitive relationships with mating rivals (red dashed line in Fig. \ref{fig:scheme}(B)). Families compete with those who have similar preferences.
\diff{The degree of competition between families $i$ and $j$ is given by $\exp(-|\bm{p}^i-\bm{p}^j|^2/\tau^2)$.
We calculated the density of competitive families \emph{rival}$_i$ for each family $i$.} A larger \emph{rival} value implies that the family has many rivals, resulting in a decline in the growth rate, where $d_m$ represents an increase in the death rate owing to competition. Here, the strength of competition depends only on the number of families with close preferences. It is independent of the number of preferred families, because competition occurs even when there are sufficient grooms and brides \cite{Chagnon1988}.
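The competition term admits an analogous sketch (again an illustration of the definition above, mirroring the \emph{friend} computation):
\begin{verbatim}
import numpy as np

def rival_density(P, tau=1.0):
    """Density of mating rivals for each family: competition depends only
    on preference similarity, exp(-|p_i - p_j|**2 / tau**2), averaged over
    all families in the society (P is an (N, 2) array of preferences)."""
    d_pp = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    return np.exp(-d_pp**2 / tau**2).mean(axis=1)
\end{verbatim}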
\begin{figure*}[tb]
\centering
\includegraphics[width= 0.9\linewidth]{fig_emergent_structure.eps}
\caption{Examples of the evolution of kinship structures. ($\bm{t}, \bm{p}$) values of families in society after 500 simulation steps. The figures show the temporal evolution of ($\bm{t}, \bm{p}$) values (upper-left), a schematic representation of the emergent structure (upper-right), and the final state (bottom). The temporal evolution of the trait and preference values of families in a society are represented in blue and red, respectively. The final states are shown as a $t_1-p_1$ map, a $t_2-p_2$ map, and a $t_1-t_2$ map, from left to right (The scales of axes differ, depending on the variance of values.)
The structures are categorised by calculating the marriage ($C_m$) and descent ($C_d$) cycles as the lengths of the cycles of the flow of women and children, respectively.
(A) Incest structure without the division of clans. Marriage occurs within clan A (yellow). $d_c = 5.0, d_m = 0.1.$
(B) Dual organisation with a matrilineal descent system. Clans A (yellow) and B (green) diverge concerning the maternally inherited trait $t_2$ and prefer each other. $d_c = 0.3, d_m = 0.2.$
(C) Generalised exchange with a patrilineal descent system. Clans A (green), B (yellow), and C (orange) diverge concerning the paternally inherited trait $t_1$ and prefer others cyclically. $d_c = 0.5, d_m = 1.0.$
(D) Restricted exchange with a double descent system. Clans A$_1$ (orange), A$_2$ (pink), B$_1$ (green), and B$_2$ (yellow) exhibit pairwise marriage and descent relationships. Here, clans diverge regarding both maternally and paternally inherited traits. $d_c = 0.2, d_m = 1.0.$
}
\label{fig:emergence}
\end{figure*}
Then, we calculated the population growth as determined by the interactions of families.
The numbers of men and women in family $i$ who survive until marriageable age are drawn from a Poisson distribution with mean $b\exp(-d_c(1 - \emph{friend}_i) - d_m \emph{rival}_i)$, where $b$ determines the intrinsic growth rate. We adopted this form, as it is more suitable for analytical calculations.
The presented results are qualitatively independent of this specific form. For example, $b - d_c(1 - \emph{friend}) - d_m \emph{rival}$ or $b/((1 + d_c(1-\emph{friend}))(1 + d_m\emph{rival}))$ (the latter was adopted in the previous model \cite{itao2020evolution}) produces essentially identical results, as long as cooperation enhances, and conflict suppresses, the population.
\diff{Finally, people get married according to their traits and preferences. The probability of marriage between men in family $i$ and women in family $j$ is proportional to $\exp(-|\bm{t}^i-\bm{p}^j|^2/\tau^2)$. After marriage, couples create their own families, bear children who inherit their traits and preferences, and then die.}
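These reproduction and mate-choice steps can be combined into the following sketch (a hedged illustration: the \texttt{friend} and \texttt{rival} arrays are those computed above, $b$ and $\tau$ follow Table \ref{table:param}, and the default $d_c$ and $d_m$ values are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def reproduce_and_match(T, P, friend, rival, b=5.0, d_c=0.5, d_m=1.0, tau=1.0):
    """Draw the numbers of marriageable men and women of each family from a
    Poisson law with mean b*exp(-d_c*(1-friend) - d_m*rival), and compute the
    row-normalised probabilities that men of family i marry into family j,
    proportional to exp(-|t_i - p_j|**2 / tau**2)."""
    r = b * np.exp(-d_c * (1.0 - friend) - d_m * rival)
    men, women = rng.poisson(r), rng.poisson(r)
    d_tp = np.linalg.norm(T[:, None, :] - P[None, :, :], axis=-1)
    w = np.exp(-d_tp**2 / tau**2)
    return men, women, w / w.sum(axis=1, keepdims=True)
\end{verbatim}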
The initial values of $\bm{t}, \bm{p}$ are $(0, 0)$ in this model. Thus, at first, no rules concerning marriage or descent exist. Initially, any couple can marry, even within a nuclear family.
\diff{This assumption is made to demonstrate that society-level structures determining the marriage rules of families can evolve, even without introducing any rules initially.
However, the results after sufficient generations are independent of the initial conditions.}
The notations and parameter values adopted in the simulations are summarised in Table \ref{table:param}.
\section*{Evolution of Kinship Structures}
The model was simulated iteratively for various parameter values listed in Table \ref{table:param}. In a simulation of 500 steps, the $(\bm{t}, \bm{p})$ values of families within a society diverged and finally formed several clusters in $(\bm{t}, \bm{p})$ space, as shown in Fig. \ref{fig:emergence}.
\diff{Under the pressure to increase the number of cooperators (kin and mates), isolated families without sufficient \emph{friend} values are removed, and families cluster. Under the pressure to decrease the number of mating rivals, families' preferences diverge. Accordingly, when both pressures are sufficiently strong, that is, for sufficiently large $d_c$ and $d_m$ values, families form several discrete clusters united by marital relationships in $(\bm{t}, \bm{p})$ space.}
Siblings belonged to the same cluster. Families within the same cluster, including those who were genetically unrelated, had similar traits and recognised each other as cultural kin. They avoided marriage within their cluster and preferred mates from other clusters, that is, $\bm{t}^i\not\simeq \bm{p}^i$, so as to increase the number of cooperators by acquiring mates beyond their cultural kin.
Consequently, the emergent clusters were culturally united groups with the symbolic incest taboo, preferring exogamy. They can, therefore, be interpreted as clans.
Clan membership was attributed based on parental traits. Here, the different clans are characterised by discretised trait values. Discretisation of $t_1$, $t_2$, or both leads to the evolution of various descent systems.
In this model, clans' descent relationships, as well as their marriage relationships, emerged. Here, we used the $X$-means method for clustering to optimise the number of clusters by adopting the Bayesian information criterion \cite{pelleg2000x}. The relationships between clans were determined by tracing the marriage and descent relationships of the cluster centres. The emergent structures were classified according to the cycles of marriage and descent relationships, that is, $C_m$ and $C_d$, respectively.
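As a hedged sketch of this classification step (the paper uses the $X$-means method; below we approximate the BIC-based choice of the number of clusters with a Gaussian-mixture scan, which is our own substitution, and trace only the marriage cycle $C_m$):
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def find_clans(X, k_max=8):
    """Cluster families in (t, p) space, choosing the number of clusters
    by the Bayesian information criterion.  X is an (N, 4) array of
    (t1, t2, p1, p2); this approximates the X-means step."""
    models = [GaussianMixture(n_components=k, random_state=0).fit(X)
              for k in range(1, k_max + 1)]
    best = min(models, key=lambda m: m.bic(X))
    return best.predict(X), best.means_

def marriage_cycle(means):
    """Length of the cycle of the flow of women (C_m): from each clan
    centre, follow the clan whose trait centre is closest to its
    preference centre.  The descent cycle C_d is traced analogously
    from the inheritance relations between clans."""
    t, p = means[:, :2], means[:, 2:]
    nxt = np.argmin(np.linalg.norm(p[:, None, :] - t[None, :, :], axis=-1),
                    axis=1)
    seen, c = {}, 0
    while c not in seen:
        seen[c] = len(seen)
        c = nxt[c]
    return len(seen) - seen[c]
\end{verbatim}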
Various kinship structures and descent systems have evolved, as shown in Fig. \ref{fig:emergence}.
In Fig. \ref{fig:emergence}(A), only one clan, namely, A (yellow) exists and marriage occurs within it, representing an incest structure. Here, traits and preferences do not diverge.
In Fig. \ref{fig:emergence}(B), two clans, namely, A (yellow) and B (green) prefer each other (A $\Leftrightarrow$ B), representing dual organisation. In this case, traits and preferences diverge in $(t_2, p_2)$ space only. One can interpret this as a system in which maternally inherited traits $t_2$ are solely referred to for marriage and descent. Hence, a matrilineal descent system evolves.
In Fig. \ref{fig:emergence}(C), three clans, namely, A (green), B (yellow), and C (orange) prefer other clans cyclically (A $\Rightarrow$ B $\Rightarrow$ C $\Rightarrow$ A), representing generalised exchange.
Here, traits and preferences diverge only in $(t_1, p_1)$ space. One can interpret this as a system in which paternally inherited traits $t_1$ are solely referred to for marriage and descent. Hence, a patrilineal descent system evolves.
\diff{Notably, in this paper, we use the term generalised exchange for the system in which families choose mates from one specific clan, as observed in some regions \cite{leach1954political, levi1969elementary}. However, systems that prohibit only within-clan marriage are not included in our model.}
In Fig. \ref{fig:emergence}(D), four clans, namely, A$_1$ (orange), A$_2$ (pink), B$_1$ (yellow), and B$_2$ (green) exhibit pairwise mating preferences (A$_1$ $\Leftrightarrow$ B$_2$ and A$_2$ $\Leftrightarrow$ B$_1$) and descent relationships (A$_1$ $\leftrightarrow$ A$_2$ and B$_1$ $\leftrightarrow$ B$_2$). Specifically, restricted exchange has evolved. Here, both maternally and paternally inherited traits significantly diverge. Hence, a double descent system evolves.
\begin{figure}[tb]
\centering
\includegraphics[width= 1.0\linewidth]{fig_phase_boundary2.eps}
\caption{Phase diagrams on kinship structures. The figures show the classes of kinship structures that evolve for each environmental parameter $d_c$ and $d_m$, both theoretically and empirically.
The incest structure is plotted in yellow, dual organisation in green, generalised exchange in orange, and restricted exchange in pink. Conditions leading to the extinction of all societies are plotted in blue.
The dashed lines represent the rough phase boundaries of the structures. Boundaries approach $d_m / d_c = C$ asymptotically when $d_c$ is large and $d_m = C'$ when $d_c$ is small, according to the analytical calculations in the supplementary text.
(A) Theoretical phase diagram of kinship structures. \diff{We calculated the frequencies of each kinship structure using 100 trials for each pair of $d_c$ and $d_m$ values.} The figure illustrates the structure that evolved most frequently under each condition. Here, $N_s = N_f = 50$, and $\mu = 0.1$.
(B) Empirical phase diagram of kinship structures, except for the incest structure. By analysing the Standard Cross-Cultural Sample (SCCS), we estimated the parameters for each society and plotted the dependencies of kinship structures on them. The estimated $\widetilde{d_c}$ and $\widetilde{d_m}$ are relative values, compared to $d_c$ and $d_m$.
(See Fig. S5 for the empirical phase diagram of kinship structures, including the incest structure.)
}
\label{fig:structure_phase}
\end{figure}
\diff{Evolved kinship structures and descent systems depend on environmental parameter values $d_c$ and $d_m$. We conducted an evolutionary simulation 100 times for each condition and counted the frequencies with which each structure evolved.}
Fig. \ref{fig:structure_phase}(A) shows, as a phase diagram, the kinship structure that evolved most frequently under each condition. When $d_c$ far exceeded $d_m$, the incest structure (yellow) evolved most frequently. As $d_m$ increased relative to $d_c$, the emergent structure changed to dual organisation (green), generalised exchange (orange), and then to restricted exchange (pink).
\diff{When $d_c$ is small and $d_m$ is large, societies can be composed of several endogamous clans, that is, incest structures, as shown in Fig. S2. However, it rarely occurs within the current parameter regions.}
Note that the diagram is qualitatively robust to the choice of initial conditions.
\diff{These successive transitions were accompanied by an increased number of clans within societies and a decreased probability of sustaining structures against population fluctuations.
To estimate the phase boundary, we analytically calculated conditions for each structure to evolve. We explain it below briefly (see the supplementary text for further details).
We assume that the centres of the groom and bride clans deviate from each other by an amount of the order of the mutation rate $\mu$ owing to fluctuations. Because of this deviation, the degree of cooperation with a mate is reduced by the factor $\exp(-\alpha\mu^2)$ relative to that with kin (where $\alpha \sim \mathcal{O}(1)$).
Hence, for example, in the incest structure every family is simultaneously kin and rival, whereas in dual organisation half of the families are kin and rivals and the other half are mates. Then, recalling the above reduction, the condition under which dual organisation is more adaptive than the incest structure is given by}
\diff{
\begin{align}
& p_I \exp(-d_c\cdot 0 - d_m \cdot 1) < \nonumber\\
&\ \ \ p_D \exp\left(-d_c\left(\frac{1}{2} - \frac{1}{2}\exp(-\alpha\mu^2)\right) - d_m \cdot \frac{1}{2}\right), \label{eq:full} \\
\Leftrightarrow\ & d_m / d_c > 1 - \exp(-\alpha\mu^2) + \frac{2}{d_c}\log (p_I / p_D), \label{eq:incest_dual}
\end{align}
where $p_I$ and $p_D$ denote the sustenance probabilities of the incest structure and dual organisation, respectively (see the supplementary text for their estimation). The transition to generalised or restricted exchange is estimated similarly.
In short, the transitions occur if the pressure for segmentation caused by large $d_m$ values exceeds that for clustering by $d_c$ and the relative probability for sustaining structures. Then, we derived the phase boundaries of $d_m / d_c = C$ asymptotically when $d_c$ was large and $d_m = C'$ when $d_c$ was small, as shown in Fig. \ref{fig:structure_phase}.}
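The resulting boundary can be evaluated numerically, as in the short sketch below (the values of $\alpha$ and of the ratio $p_I/p_D$ are illustrative assumptions):
\begin{verbatim}
import numpy as np

def dual_vs_incest_boundary(d_c, mu=0.1, alpha=1.0, p_ratio=1.0):
    """Threshold on d_m/d_c above which dual organisation out-reproduces
    the incest structure, following the inequality above.  alpha ~ O(1)
    and the sustenance-probability ratio p_I/p_D are illustrative guesses."""
    return 1.0 - np.exp(-alpha * mu**2) + (2.0 / d_c) * np.log(p_ratio)

# For large d_c (and p_I ~ p_D) the boundary tends to the line
# d_m = C * d_c with C = 1 - exp(-alpha * mu**2).
print(dual_vs_incest_boundary(d_c=0.5), dual_vs_incest_boundary(d_c=5.0))
\end{verbatim}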
\begin{figure}[tb]
\centering
\includegraphics[width= 1.0\linewidth]{fig_phase_descent.eps}
\caption{Frequency of descent systems for each kinship structure. Figures show the frequency of each descent system, both theoretically and empirically. The frequencies of matrilineal (yellow), patrilineal (orange), and double descent (green) systems are shown.
(A) Theoretical phase diagram. The model was simulated by changing the $d_c$ and $d_m$ values. We calculated the frequencies of each descent system for each kinship structure. Here, $N_s = 50$, $N_f = 30$, and $\mu = 0.1$.
(B) Empirical phase diagram. By analysing the SCCS, we identified the descent systems and kinship structures of each society. We counted the frequencies of each descent system for each kinship structure.}
\label{fig:descent_phase}
\end{figure}
Fig. \ref{fig:descent_phase}(A) shows the dependency of descent systems on kinship structures. Double descent is dominant in restricted exchange.
Patrilineal descent is dominant over matrilineal descent in both dual organisation and generalised exchange, although matrilineal descent is relatively more frequent in dual organisation than in generalised exchange.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\linewidth]{phase_phase_kinship_for_stat.eps}
\caption{
Dependence of the phase diagrams of kinship structures on other parameters: (A) the number of societies $N_s$, (B) the number of families within a society $N_f$, and (C) the mutation rate $\mu$. The incest structure is plotted in yellow, dual organisation in green, generalised exchange in orange, and restricted exchange in pink. Conditions leading to extinction are plotted in blue. Each diagram is obtained in the same way as Fig. \ref{fig:structure_phase} (A). Unless shown on the axis, the parameter values are fixed to those in Table \ref{table:param}.}
\label{fig:kinship_phase_phase}
\end{figure}
Fig. \ref{fig:kinship_phase_phase} shows the dependence of phase diagrams of kinship structures on the number of societies in the system $N_s$, the number of families within a society $N_f$, and the mutation rate $\mu$.
Fig. \ref{fig:kinship_phase_phase} (A) suggests that, as $N_s$ increases, restricted exchange evolves across broader parameter regions, whereas the region of dual organisation narrows. Generally, as the number of groups increases, group-level selection is strengthened in multi-level evolution \cite{itao2020evolution, takeuchi2017origin, traulsen2006evolution}. As restricted exchange requires divergence in both traits, its formation is more difficult, even for large $d_m / d_c$. Hence, group-level pressure is necessary for the evolution of such sophisticated structures.
Next, Fig. \ref{fig:kinship_phase_phase} (B) suggests that, as $N_f$ increases, generalised and restricted exchanges evolve across broader parameter regions. Meanwhile, the incest structure and dual organisation evolve in narrower regions.
As $N_f$ increases, fluctuations in the population of clans decrease, and thus, sophisticated structures can be easily sustained.
Realistically, however, as the population increases, interactions among families will diversify, other social organisations will evolve, and kinship structures may be destroyed. This exceeds the scope of our model.
Finally, Fig. \ref{fig:kinship_phase_phase} (C) suggests that sophisticated structures, such as restricted or generalised exchanges, disappear as $\mu$ increases. A larger $\mu$ causes larger fluctuations in traits and preferences, which can destroy more sophisticated structures.
\section*{Empirical Data Analyses}
We then verified our results on the phase diagrams of kinship structures and descent systems using the SCCS database \cite{murdock1969standard, kirby2016d}.
\diff{We classified the kinship structures of each society by identifying the composition of clans and marriage and descent rules between them. See Table S2 for further details.
Of the 186 societies in SCCS, we identified 87 as incest structures, 14 as dual organisation, 33 as generalised exchange, and 12 as restricted exchange.
Forty societies were excluded from the analyses, as their marriage rules prohibit only within-clan (or within-family) marriage. Fig. S3 and S4 show the geographic distributions of kinship structures and descent systems, respectively. Each structure is distributed globally, without a clear spatial pattern, suggesting that kinship structures in each region were achieved by cultural adaptation, rather than cultural transmission; this must be further investigated by phylogenetic comparative analysis.}
\begin{table}[tb]
\caption{Correlations between SCCS variables and kinship structures (excerpt).
For pairs of kinship structures, the Spearman's rank correlation between the SCCS variables and the structures was calculated. Then, the absolute values of the correlation were averaged for each pair.
We list the variables that exhibited high correlations and were relevant to $d_c$ and $d_m$, along with the average value of the correlation, and the corresponding parameters in the model.
See Table S3 for further information.
}
\label{table:corr}
\centering
\begin{tabular}{lll}
Variable & Corr. & Model \\\hline
Tributary Payments or Taxation & 0.58 & $d_c$ \\
Violence against Other Ethnic Groups & 0.57 & $d_c$ \\
External Warfare & 0.54 & $d_c$ \\
Hostility towards Other Ethnic Groups & 0.48 & $d_c$ \\
Cross-cutting Ties & 0.45 & $d_c$ \\\hline
Conflict within the Society & 0.59 & $d_m$ \\
Violence within the Society & 0.55 & $d_m$ \\
Disapproval of Rape & 0.53 & $d_m$ \\
Disapproval of Premarital Sex & 0.51 & $d_m$ \\
Disapproval of Incest & 0.41 & $d_m$
\end{tabular}
\end{table}
\diff{Next, we conducted Spearman's rank correlation analyses and calculated the correlation between SCCS variables and kinship structures.
The database contains various variables of socio-ecological factors.
Whereas there are no variables in SCCS that exactly correspond to $d_c$ and $d_m$, $d_c$ can be related to the extent of social unity and external warfare, and $d_m$ to the attitude towards adultery and the extent of internal warfare. Notably, marriage conflict over mates arises at the family or kin group level, whereas inter-society conflict requires cooperation across different kin groups. Thus, violence within a society is related to $d_m$, and that involving other societies to $d_c$.
We calculated the correlation for each variable and listed the variables in descending order of the absolute value of the correlation. We then found that the variables related to $d_c$ and $d_m$ were located at the top of the list (rather than the middle or bottom).} The variables that were highly correlated with kinship structures are listed in Table S3. Among them, we show the variables that can be related to $d_c$ and $d_m$ in Table \ref{table:corr}.
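A hedged sketch of this ranking step is given below (the column names are placeholders, not the actual SCCS variable codes, and the pairwise binary encoding of structures is our reading of the table caption):
\begin{verbatim}
import pandas as pd
from scipy.stats import spearmanr
from itertools import combinations

def rank_variables(df, structure_col="structure"):
    """For every pair of kinship structures, compute Spearman's rank
    correlation between each SCCS variable and a binary indicator of the
    structure, then average the absolute correlations over all pairs."""
    variables = [c for c in df.columns if c != structure_col]
    pairs = list(combinations(df[structure_col].unique(), 2))
    scores = {}
    for var in variables:
        vals = []
        for a, b in pairs:
            sub = df[df[structure_col].isin([a, b])].dropna(subset=[var])
            rho, _ = spearmanr(sub[var], (sub[structure_col] == a).astype(int))
            vals.append(abs(rho))
        scores[var] = sum(vals) / len(vals)
    return pd.Series(scores).sort_values(ascending=False)
\end{verbatim}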
We estimated $d_c$ using the variables pertaining to social unity (\textit{tributary payments or taxation} and \textit{cross-cutting ties}) and society-level conflict that requires immense cooperation within society (\textit{violence against other ethnic groups}, \textit{external warfare} and \textit{hostility towards other ethnic groups}). We estimated $d_m$ using the variables pertaining to attitudes towards adultery (\textit{disapproval of rape}, \textit{disapproval of premarital sex} and \textit{disapproval of incest}) and intra-society conflict (\textit{conflict within the society} and \textit{violence within the society}).
Next, we normalised the values of each variable to set the mean $0$ and variance $1$. We changed the sign if necessary, so that larger values corresponded to larger $d_c$ or $d_m$. For some societies, the data for some variables were lacking; however, we averaged the available values to estimate $\widetilde{d_c}$ and $\widetilde{d_m}$ (hereafter, values with tilde represent those estimated by empirical data analyses). We added a constant to set the minimum values of $\widetilde{d_c}$ and $\widetilde{d_m}$ to $0$, because $d_c$ and $d_m$ were positive values in our model.
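The estimation of $\widetilde{d_c}$ and $\widetilde{d_m}$ described here can be sketched as follows (a minimal illustration; the column names in the commented usage are placeholders for the SCCS variables in Table \ref{table:corr}):
\begin{verbatim}
import pandas as pd

def estimate_pressure(df, cols, flip=()):
    """Estimate d_c~ (or d_m~) from a set of SCCS variables: z-score each
    variable, flip signs where larger raw values correspond to smaller d,
    average the available (non-missing) values per society, and shift the
    result so that its minimum is zero."""
    z = (df[cols] - df[cols].mean()) / df[cols].std()
    for c in flip:
        z[c] *= -1.0
    est = z.mean(axis=1, skipna=True)
    return est - est.min()

# e.g. (placeholder column names, not the actual SCCS codes):
# dc_tilde = estimate_pressure(sccs, ["external_warfare", "tributary_payments"])
# dm_tilde = estimate_pressure(sccs, ["internal_violence", "disapproval_of_rape"])
\end{verbatim}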
Although the absolute magnitudes were not comparable, $\widetilde{d_c}$ and $\widetilde{d_m}$ would be positively correlated with $d_c$ and $d_m$, respectively. The empirical dependence of the kinship structures on $\widetilde{d_c}$ and $\widetilde{d_m}$ is shown in Fig. \ref{fig:structure_phase}(B). The results were qualitatively consistent with the theoretical phase diagrams for $d_c$ and $d_m$. As $\widetilde{d_m} / \widetilde{d_c} $ increased, kinship structures changed from dual organisation to generalised exchange, and then to restricted exchange. The consistency between data and model results was worse for the incest structure, as observed in Fig. S5. This may be because societies with such structures can have social systems other than kinship, regulating social unity and suppressing marital competition.
The frequency of each descent system in each kinship structure was also calculated and shown in Fig. \ref{fig:descent_phase}(B).
The dominance of patrilineal over matrilineal descent was observed in generalised exchange. The fraction of matrilineal descent was larger for dual organisation. These are comparable with the model results, although the correspondence was much weaker than that of the kinship structures.
\section*{Discussion}
By considering cooperation among kin and mates, as well as competition among rivals in our model, we demonstrated that families formed some clusters in traits and preferences. Families within a cluster are recognised as cultural kin, and marriage occurs only among families from different clusters. Hence, the clusters of families that emerged in our model can be interpreted as clans.
Initially uniform traits are discretised into several clusters corresponding to distinct clans. Furthermore, by tracing marriage and descent relationships between clans, the evolution of various kinship structures was observed. The traits and preferences differentiated in either the paternally or the maternally inherited components only, or in both. This demonstrates the evolution of patrilineal, matrilineal, and double descent systems, respectively.
Additionally, we revealed that the parameters related to $d_c$ and $d_m$ in our model can be considered as significant explanatory variables for different kinship structures, by analysing the ethnographic data of 146 societies. By estimating $d_c$ and $d_m$ from the data, we demonstrated consistency between the theoretical and empirical results of the parameter dependencies of the kinship structures and descent systems.
In cultural anthropology, ``descent theory'' and ``alliance theory'' have been proposed to explain kinship structures. They emphasise cooperation fostered by shared descent and marriage, respectively \cite{levi1969elementary, leach1982social}.
Here, we added the effect of marital competition.
\diff{By introducing the evolutionary pressure to increase cooperation among kin and mates, and to decrease competition among rivals, we illustrated that diverse kinship structures evolve depending on the pressures.
Generally, it is difficult to reconstruct the historical course of the formation of kinship structures, since chronological records are rarely available.
Nevertheless, we can explain how each structure was sustained for a specific condition.
Indeed, L{\'e}vi-Strauss demonstrated several examples of the sustenance of kinship structures. Cultural groups become divergent owing to population growth and internal conflict. Simultaneously, however, they are united by marital relationships.
Even if some of the population is damaged, structures eventually recover within several generations \cite{levi1969elementary}.}
Furthermore, we can compare our theoretical results with empirical data, and their consistency supports the plausibility of our scenario.
According to the simulations, kinship structures evolve depending on the two pressures parameterised by $d_c$ and $d_m$, that is, on the importance of cooperation among kin and mates and on that of avoiding marital competition, both determined by environmental conditions. For example, $d_c$ is related to the frequency and importance of public works or massive violence in societies, whereas $d_m$ is related to the scarcity of mates.
When the pressure for cooperation dominates the avoidance of competition, societies comprise one or several clans and most families are united as kin or mates. By contrast, as the importance of avoiding competition increases, dividing societies into more clans becomes more adaptive. Hence, as $d_m / d_c$ increases, the emergent structures change from incest structures to dual organisation, generalised exchange, and finally, to restricted exchange.
In cultural anthropology, dual organisation is categorised as the simplest form of restricted exchange, by focusing on $C_m = 2$ \cite{levi1969elementary}. Our results, however, suggest that it is closer to generalised exchange concerning environmental dependencies, as would be expected by focusing on $C_d = 1$ instead.
Furthermore, diverse descent systems evolved in our model.
\diff{Under moderate values of $d_m$, either the paternally or the maternally inherited traits and preferences solely diverge, owing to symmetry breaking. This leads to patrilineal or matrilineal descent, respectively.} In our simulation, patrilineal descent evolved more frequently than did matrilineal descent. As mentioned above, the sole asymmetry of sex lies in the process of choosing a mate; that is, women (or their families) prefer certain men's traits. Thus, the selection pressure favouring men with preferable traits leads to the divergence of paternally inherited traits.
In real-world data too, patrilineal descent is more frequent than matrilineal descent.
However, in our model, the dominance of patrilineal descent was stronger than that observed empirically.
In some societies, grooms' families choose brides.
Furthermore, other aspects cannot be neglected.
For example, with paternal uncertainty, matrilineal descent will likely evolve \cite{holden2003matriliny}. The necessity for cooperation generally differs for men and women, depending on subsistence patterns or frequency of warfare \cite{service1962primitive}. For further discussion on the evolution of descent systems, these biases should be considered.
Nonetheless, our study shows the emergence of significant traits that are frequently inherited paternally.
In the empirical data analysis, we found correlations between kinship structures and the status of wives, in addition to those with $d_c$ and $d_m$ (see Table S3). Specifically, gender inequality concerning the wives' status increased in the following order: restricted exchange, dual organisation, and generalised exchange (though this cannot be directly related to the gender balance in general).
\diff{Thus, in this aspect, it may be reasonable to assume that dual organisation is more similar to restricted exchange than to generalised exchange. As the empirical data and our model exhibit, the descent system is more biased towards patrilineal descent in generalised exchange and less so in dual organisation, whereas double descent is adopted in restricted exchange. In societies with patrilineal descent systems, wives join husbands' groups after marriage \cite{service1962primitive}.
If male dominance is more frequently observed therein, we can explain the above trend. The inequality is the largest in generalised exchange, which lies between restricted exchange and dual organisation regarding environmental dependence. This suggests the benefit of analysing kinship structures to elucidate other cultural aspects of society.}
Apart from the parameters $d_c$ and $d_m$, the mutation rate $\mu$, the number of competing societies $N_s$, and the initial number of families within a society $N_f$ are also relevant in determining kinship structures (see Fig. \ref{fig:kinship_phase_phase} and Table S3). Fig. \ref{fig:kinship_phase_phase} as well as our analytical calculations suggest that sophisticated structures, such as generalized and restricted exchanges, are more fragile due to the larger fluctuation under smaller $N_s$ or $N_f$, or larger $\mu$.
Table S3 suggests that such sophisticated structures are correlated with large $N_s$ and small $N_f$.
Hence, the theoretical trend was empirically verified for $N_s$, but not for $N_f$. In reality, if $N_f$ is larger, incest structures become dominant.
This may be due to the development of social organisations other than kinship, such as political organisations, which would regulate the social relationships of families in societies with larger populations \cite{service1962primitive}. Such structurisation can be interpreted as a cultural evolutionary phenomenon; however, it is beyond the scope of our model.
\diff{Regarding $\mu$, its value will be determined by how traits and preferences are inherited, and by the social norms that regulate precise inheritance \cite{cavalli1981cultural}.}
Thus far, however, we have been unable to estimate it from the data, and this remains a task for the future.
\diff{Our model shares some similarities with the mating preference model for sympatric speciation in biology. The evolution of several endogamous groups is shown to be a result of mating competition \cite{dieckmann1999origin} or niche construction \cite{kaneko2000sympatric}.
Conversely, humans developed the ability of kin recognition, which leads to the organisation of affinal networks of groups through exogamy \cite{chapais2009primeval, planer2020towards}. Our model includes cooperation among mates and thus exhibits the emergence of diverse kinship structures, rather than the mere divergence of groups.}
The present study has some limitations. In the model, we only focused on societies in which marriage and descent rules were strictly determined by customs. Our model concerns the elementary structures of kinship, where paternal and maternal traits are referred to independently \cite{levi1969elementary}. As population size expands, the unity of kin groups weakens, and marriage rules are relaxed, such that only marriage within the clan or nuclear family is prohibited \cite{harrell1997human}.
\diff{Such rules to exclude unpreferable mates cannot evolve in our model. To cover observed rules comprehensively, a new model needs to be developed to consider positive and negative preferences for mates.} The descent rules also change, such that they can refer either to the father or the mother in each generation by choice, or genealogical distance only. This occurs in complex structures of kinship \cite{murdock1969standard, kirby2016d}.
Moreover, we could only analyse the correlations between ethnographic variables and kinship structures. \diff{We could not assign the variables to $d_c$ and $d_m$ a priori. Therefore, our estimation of $\widetilde{d_c}$ and $\widetilde{d_m}$ may seem arbitrary. To measure these variables directly, it is, thus, necessary to collaborate with field studies.}
It is also desirable to conduct further analyses, such as classification learning; however, it was unfeasible in the current study owing to data insufficiency. \diff{Phylogenetic comparative analysis is also necessary to control statistical non-independence owing to shared ancestry \cite{minocher2019explaining}.} Furthermore, because of the lack of chronological data, we could not analyse the causal relationships between social structures and cultural conditions related to $d_c, d_m$, and other parameters.
Social structures, such as kinship, are formed through interactions among people over many generations. \diff{In this paper, we theoretically demonstrate such formation of macroscopic social structures through microscopic family behaviours.
It is considered difficult to explain such complex systems from basic conditions solely using simple correlation analyses \cite{racz2020social}.}
Combined with theoretical simulations of a simple constructive model and empirical data analyses, we have demonstrated that various kinship structures emerge depending on the degrees of cooperation and avoidance of competition.
Theoretical studies, as shown here, produce explanatory scenarios by referring to empirical studies and propose relevant variables to be measured in the field. Empirical studies in the field describe notable anthropological phenomena and enable the measurement of variables to test theories. Such collaboration of theoretical and empirical studies will contribute to discussing the emergence of complex social structures and unveiling universal features in anthropology.
\section*{Method}
\subsection*{Algorithm}
To simulate population growth considering social interactions of families, the degrees of cooperation and competition were calculated by comparing trait and preference values with a tolerance parameter $\tau$. Hence, families $i$ and $j$ cooperate if $|\bm{t}^i - \bm{t}^j| / \tau$,
$|\bm{t}^i - \bm{p}^j| / \tau$, or
$|\bm{p}^i - \bm{t}^j| / \tau$ is sufficiently small.
These conditions correspond to $i$ and $j$ being cultural kin, the women in $j$ preferring men in $i$, and the women in $i$ preferring men in $j$, respectively. Families $i$ and $j$ compete if they prefer similar families, that is, if $|\bm{p}^i-\bm{p}^j| / \tau$ is sufficiently small. Then, the possibility of marriage and the degrees of cooperation and competition were measured using a Gaussian function. For example, the degree of cooperation between cultural kin is given by $\exp(-|\bm{t}^i - \bm{t}^j|^2/\tau^2)$.
We adopted the following algorithm for population changes in families: For family $i$ and time step $n$, the numbers of unmarried men and unmarried women are denoted by $M^i(n)$ and $F^i(n)$, respectively. The intrinsic growth rate is denoted by $b$. We represent the set of families in the society as $\Phi$ and denote by $i'$ a family that accepts men from family $i$ as grooms.
The population change in family $i$ is given by
{\small
\begin{align}
d_{i, j} &= \min(|\bm{t}^i(n)-\bm{t}^j(n)|,\\
&\ \ \ \ \ \ |\bm{p}^i(n)-\bm{t}^j(n)|, |\bm{t}^i(n) - \bm{p}^j(n)|), \label{clan_eq:distance}\\
\emph{friend$^{ i}(n)$}&=\sum_{j \in \Phi} \frac{\exp(-d_{i, j}^2/\tau^2)}{\#\Phi}, \label{clan_eq:friend}\\
\emph{rival$^{ i}(n)$}&= \sum_{j \in \Phi} \frac{\exp(- |\bm{p}^i(n)-\bm{p}^j(n)|^2/\tau^2)}{\#\Phi}, \label{clan_eq:rival}\\
r &= b \exp(-d_c(1 - \emph{friend$^{ i}(n)$}) - d_m\emph{rival$^{ i}(n)$}), \label{clan_eq:fitness} \\
M^i(n) &= \text{Poisson}(r),\ F^i(n) = \text{Poisson}(r), \label{clan_eq:birth}\\
\intertext{Here, the probability $P(i')$ that family $i$ offers a marriage to family $i'$ is}
P(i') &= \frac{\exp(- |\bm{t}^i(n)-\bm{p}^{i'}(n)|^2/\tau^2)}{\sum_{j \in \Phi} \exp(- |\bm{t}^i(n)-\bm{p}^j(n)|^2/\tau^2)}, \label{clan_eq:mate_choice}\\
\bm{t}^{i^*}(n + 1) &= (t^i_1, t^{i'}_2) + \bm{\eta},\ \bm{p}^{i^*}(n + 1) = (p^i_1, p^{i'}_2) + \bm{\eta}. \label{clan_eq:marriage}
\end{align}
}
The population growth of each family depends on \emph{friend} and \emph{rival}, as given by Eqs. (\ref{clan_eq:friend}) and (\ref{clan_eq:rival}), respectively.
The numbers of unmarried men and women in each family follow a Poisson distribution, as given by Eqs. (\ref{clan_eq:fitness}) and (\ref{clan_eq:birth}).
People are married according to the traits and preferences of their families, as shown in Eq. (\ref{clan_eq:mate_choice}). After marriage, couples give birth to children and die. At this time, children inherit the traits and preferences of parents by adding the noise component $\bm{\eta}$ to them. This comprises two independent normal variates with mean $0$ and variance $\mu^2$ as shown in Eq. (\ref{clan_eq:marriage}).
Unmarried people can join the mating pool in the next step. However, those who cannot find mates within two steps die without having children.
Here, we assumed monogamy; however, the result was qualitatively independent of such a marriage system.
\subsubsection*{Data accessibility.}
Source codes for the model can be found here: \url{https://github.com/KenjiItao/clan.git}
\subsubsection*{Author contributions.}
K.I. designed the model, conducted the simulations, analysed the data, and wrote the paper. K.K. designed the model, analysed the data, and wrote the paper.
\subsubsection*{Competing interests.}
The authors declare no conflict of interest.
\subsubsection*{Funding.}
This study was partially supported by a Grant-in-Aid for Scientific Research on Innovative Areas (17H06386) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) of Japan.
\subsubsection*{Acknowledgements.}
The authors thank Tetsuhiro S. Hatakeyama, Yuma Fujimoto, and Kenji Okubo for a stimulating discussion, and Takumi Moriyama, Koji Hukushima, Yusuke Kato, and Yasuo Ihara for illuminating comments.
\section{Introduction}
Bars are common structures found in approximately two thirds of disc galaxies in the local Universe \citep{Eskridgeetal2000,Menendezetal2007,Aguerrietal2009,Gadotti2009,Mastersetal2011}, with this fraction decreasing towards higher redshifts, and reaching $\sim20\%$ at $z=1$ \citep{Shethetal2008,Melvinetal2014}, although a number of studies find evidence for the existence of bars at redshifts as high as $z\sim1.5-2$ (e.g. \citealt{Simmonsetal2014,Gadottietal2015}).
They are known to affect their host galaxy in a variety of ways e.g. by pushing gas to the central regions, where it can form nuclear structures such as nuclear discs and rings (e.g. \citealt{Athanassoula1992b, Knapenetal2002, Comeronetal2010, Ellisonetal2011, Fragkoudietal2016, Sormanietal2018, deLorenzoCaceresetal2019,MendezAbreuetal2019,Leamanetal2019}; see reviews by \citealt{KormendyKennicutt2004} and \citealt{Athanassoula2013}).
Bars also re-shape the central regions of their host galaxy via the formation of a vertically extended bulge, often referred to as an X-shaped or boxy/peanut (b/p) bulge \citep{CombesSanders1981, Combesetal1990,Rahaetal1991,Patsisetal2002,Athanassoula2005,MartinezValpuestaetal2006,Quillenetal2014,Fragkoudietal2015}.
In N-body simulations, b/p bulges form soon after the bar forms, either rapidly after a buckling instability (e.g. \citealt{Mihosetal1995,MartinezValpuestaetal2006}) or more slowly through resonant heating at the vertical inner Lindblad resonance of the bar \citep{PfennigerFriedli1991,Friedlietal1996,CeverinoKlypin2007,Quillenetal2014,Portailetal2015}.
These and the aforementioned nuclear discs are sometimes collectively referred to as `pseudo-bulges', to differentiate them from dispersion-dominated, so-called `classical' bulges \citep{KormendyKennicutt2004}. To avoid confusion, we differentiate between b/p bulges -- which are formed by vertically extended orbits, and thus `puff-out' of the plane of the disc -- and nuclear discs or rings, which form out of gas pushed to the central regions by bars and which are flattened (disc-like) structures.
Classical bulges are thought to form via violent processes such as dissipationless collapse, mergers or clump migration at high redshifts (e.g. \citealt{Eggenetal1962,Toomre1977, vanAlbada1982,Bournaudetal2007,NaabBurket2003,Hopkinsetal2009,Perezetal2013}; and the recent review by \citealt{BrooksChristensen2016}). The secular formation of b/p bulges and the violent formation mechanisms responsible for classical bulges leave different chemodynamical imprints, which can thus be used to decipher the formation history of their host galaxy.
The Milky Way (MW) is our closest barred galaxy, and therefore the bar's effects on its central regions and on its stellar disc can be explored in exceptional detail.
There has been ample debate over the origin of the MW bulge -- whether a dispersion-dominated component formed from the dissipational collapse of gas or mergers, or a b/p bulge, formed via secular processes. Observations
in the near- and mid-infrared reveal that the MW bulge has a boxy or X-shape, pointing to its secular origin (e.g. \citealt{Dweketal1995,McWilliamandZoccali2010,Natafetal2010,Weggetal2013,NessLang2016}). However, a number of studies find the MW bulge to be an exclusively old population (e.g. \citealt{Zoccalietal2003,Clarksonetal2008,Valentietal2013}), with a negative radial metallicity gradient (e.g. \citealt{Zoccalietal2008}), which points to properties closer to those of a classical bulge.
Further intensifying the debate, recent observational studies find that the MW bulge might not be exclusively old, with a significant fraction of stars younger than 8\,Gyrs \citep{Bensbyetal2013,Haywoodetal2016b,Bensbyetal2017}.
These seemingly contradictory properties have lent support to a hybrid scenario for the MW bulge, in which the metal-rich stellar populations are part of the b/p, formed from disc material, while the metal-poor populations constitute a separate dispersion-dominated, spheroidal, classical bulge component (e.g. see \citealt{Babusiauxetal2010,RojasArriagadaetal2014,Barbuy2016, Barbuyetal2018} and references therein).
On the other hand, our understanding of the disc of the Milky Way has also undergone a revolution of sorts, thanks to the recent second Gaia data release (Gaia DR2; \citealt{GaiaCollaboration2018}). Gaia DR2 has allowed for a detailed exploration of phase-space of the Milky Way's disc, revealing a number of previously unknown substructures, such as the Gaia snail or spiral \citep{Antojaetal2018}. Some of the most striking features the data have revealed are the prominent ridges in $V_{\phi}-r$ space \citep{Kawataetal2018,Antojaetal2018}, which have undulations in $V_r$ associated to them \citep{Fragkoudietal2019}. The bar has been proposed as a culprit for a number of these features including the observed ridges in the $V_{\phi}-r$ plane \citep{Fragkoudietal2019} and the Gaia spiral via the buckling instability (\citealt{Khoperskovetal2019}; but see \citealt{Laporteetal2019} for an alternative explanation). Furthermore, as shown recently by \citet{Khannaetal2019}, the ridges exhibit different abundance trends compared to phase space around them, which could perhaps give clues as to their origin.
Additionally, recent studies have probed the age and abundance structure of the inner disc of the Milky Way, finding seemingly contradictory results. On the one hand, \cite{Leungetal2019a,Leungetal2019b,Bovyetal2019} -- using APOGEE data in combination with machine learning techniques -- find that the bar of the Milky Way is metal-poor, while other studies such as \cite{Weggetal2019} -- using FLAMES \citep{Pasquinietal2000} spectra of red clump giant stars -- find that metal-rich stars in the inner disc tend to be on more elongated orbits, suggesting that the bar of the Milky Way is metal-rich. Furthermore, recent studies of local barred galaxies find that some bars tend to be more metal-rich than their surrounding disc, while others have similar or lower metallicities as compared to their surrounding disc population \citep{Neumannetal2020}. The aforementioned studies highlight the varied properties of barred galaxies, both for the Milky Way and external barred galaxies, as well as the tight interplay between the central regions of galaxies, the bar and the disc, all of which need to be explored in a unified framework within the global context of galaxy formation and evolution.
On the theoretical side, recent studies using tailored, isolated simulations of Milky Way-type galaxies, have shown that the metal-poor populations in the bulge of the Milky Way are in fact consistent with being composed of the thick disc seen at the Solar neighbourhood, with no need for an additional `classical' bulge component \citep{DiMatteo2016,Fragkoudietal2017b,Debattistaetal2017,Portailetal2017b,Haywoodetal2018,Fragkoudietal2018}. These models are able to explain the chemo-morphological and chemo-kinematic relations of stellar populations in the bulge \citep{Fragkoudietal2018,Gomezetal2018}, as well as its vertical and radial metallicity gradients \citep{Fragkoudietal2017c}.
While isolated simulations can be tailored to study specific galaxies in detail, such as the Milky Way, one would also like to be able to study the formation of bulges of MW-like galaxies in the full cosmological context.
Advances in resolution and physical fidelity (through sub-grid models) in recent cosmological zoom-in simulations have led to the formation of realistic disc galaxies, with smaller bulges (\citealt{Governatoetal2010,Bonolietal2016,BrooksChristensen2016}), which have thus started being used to study the properties of bars and b/p bulges in the context of the MW bulge (e.g. \citealt{Tisseraetal2018,Bucketal2018,Bucketal2019,Debattistaetal2019}).
In general, however, these studies have explored single galaxies and therefore do not capture the diversity of formation histories that Milky Way mass galaxies can undergo. Also, while they reproduce a number of trends similar to the Milky Way bulge (such as e.g. morphology and global kinematic properties) they do not reproduce some of the key chemodynamical features of the Milky Way bar and bulge, such as the kinematical properties of the metal-poor (-1<[Fe/H]<-0.5) populations in the bulge (e.g. \citealt{Bucketal2019}), around which most of the debate about the origin of the MW bulge is centred.
We now have at our disposal for the first time a large sample of high resolution zoom-in cosmological simulations of Milky Way mass galaxies, the Auriga suite \citep{Grandetal2017,Grandetal2019}. These simulations develop realistic discs from diverse formation histories \citep{Gomezetal2017}, contain mostly bulges with low Sersic indices (\citealt{Gargiuloetal2019}, from now on G19) and develop bars and b/p bulges which at $z=0$ have structural properties in agreement with observations (\citealt{BlazquezCaleroetal2020}, from now B20). We can therefore now study the formation of bars and b/p's in the full cosmological context, exploring the chemodynamical imprints left by their formation history. This allows us to constrain the merger history of the Milky Way (see also \citealt{Monachesietal2019}), and to explore consistency with the recently proposed Gaia Sausage/Enceladus merger \citep{Belokurovetal2018,Haywoodetal2018b,Helmietal2018}. As we will show, these models are able to reproduce a number of chemodynamical properties of the MW bulge, thus shedding light on its formation history, while also allowing us to explore the effects of the bar on the disc, not only in terms of kinematics but also by taking into account the chemical enrichment and ages of stellar populations in the disc.
This paper is the first in a series exploring the properties of bars in the Auriga cosmological simulations. Here we explore the chemodynamical properties of Auriga galaxies with prominent b/p bulges, comparing them to the MW bulge and connecting them to their assembly history. We also explore the effects that bars have on the discs of MW-type galaxies.
The paper is structured as follows: in Section \ref{sec:auriga} we describe the Auriga simulations, focusing on the sample studied here, and show some statistical properties of barred galaxies in Auriga. In Section \ref{sec:ageabund} we describe the age and abundance distributions in our sample, focusing on the bar-b/p region. In Sections \ref{sec:chemorph} and \ref{sec:chemokinem} we describe the chemo-morphological and chemo-kinematic relations of stellar populations in the central regions and then compare them to the Milky Way bulge, while in Section \ref{sec:SFH} we relate these to the galaxies' merger history and fraction of ex-situ stars. In Section \ref{sec:bareffectdisc} we explore the effects of the bar on the inner and outer disc. In Section \ref{sec:discussion} we discuss some of the implications of our findings in terms of the inner disc of the Milky Way, and in Section \ref{sec:summary} we conclude and summarise our results.
\section{The Auriga Simulations}
\label{sec:auriga}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{plots/rgb_rotcurve_halos_2_withbs_new}
\caption{Properties of the five Auriga galaxies explored in this study. \emph{Top row:} RGB images -- synthesized from a projection of the K-, B- and U-band luminosity of stars -- of the subsample of Auriga galaxies which we explore in this study. The size of the face-on panels is $50\times50\,\rm kpc$ and of the edge-on panels $50\times 25 \, \rm kpc$ (for edge-on projections the line-of-sight is along the bar minor axis). The halo number is denoted at the top of each plot and the inner and outer dashed circles show the corotation and Outer Lindblad Resonance radii, respectively. \emph{Second row:} Circular velocity profiles for the sample of galaxies: total (solid lines), stellar component (dashed lines), dark matter component (dot-dashed lines) and gaseous component (dotted lines). \emph{Third row:} Angular frequency plots showing the angular frequency $\Omega$ (solid) and $\Omega - \kappa/2$ and $\Omega + \kappa/2$ in dashed and dot-dashed lines respectively, where $\kappa$ indicates the radial frequency of stars. The horizontal red line denotes the pattern speed of the bar. \emph{Fourth row:} Bar strength $A_2$ as a function of lookback time. The vertical dot-dashed line marks the formation of a strong bar and the thick dashed line the formation of the b/p.}
\label{fig:rgball}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{plots/barbpfrac_withztlook_withSheth08_Eskridge_w9bps}
\includegraphics[width=0.42\textwidth]{plots/bp_stuff_cut}
\caption{Statistical properties of the galaxies in the entire Auriga sample and in the subsample examined here. \emph{Top:} Bar fraction as a function of redshift for the entire suite of Auriga simulations, compared to observations \citep{Shethetal2008,Eskridgeetal2000}. The blue points indicate the fraction of b/p bulges in barred galaxies in Auriga, comparing to observations from \citet{Luttickeetal2000} and \citet{ErwinDebattista2017} (right $y$-axis). \emph{Bottom Left:} Disc scale-length vs bar length for SDSS galaxies from Gadotti (2011) (blue circles) and for the five Auriga galaxies explored in this study (symbols). \emph{Bottom Right:} Corrected bar ellipticity (see text) vs bar length over disc scale-length.}
\label{fig:barfrac}
\end{figure}
The Auriga simulations \citep{Grandetal2017,Grandetal2019} are a suite of cosmological magneto-hydrodynamical zoom simulations of haloes with masses in the range of $0.5 \times 10^{12}-2 \times 10^{12}\rm M_{\odot}$ which run from redshift $z=127$ to $z=0$ with cosmological parameters: $\Omega_{\rm m}=0.307$, $\Omega_{\rm b}=0.048$ and $\Omega_{\rm \Lambda}=0.693$, and a Hubble constant of $H_{\rm 0} =100h\, \rm km\,s^{-1}\,\rm Mpc^{-1}$, where $h=0.6777$ \citep{Planck2014XVI}.
The simulations are performed with the magnetohydrodynamic code {\sc AREPO} \citep{Springel2010,Pakmoretal2016}, with a comprehensive galaxy formation model (see \citealt{Vogelsbergeretal2013, Marinaccietal2014a,Grandetal2017}, for more details) which includes primordial and metal line cooling, a prescription for a uniform background ultraviolet field for reionization (completed at $z=6$), a subgrid model for star formation, stellar evolution and feedback, magnetic fields, and black hole seeding, accretion and feedback.
The dark matter particles have a mass of $\sim 4\times10^5 \rm M_{\odot}$ and the stars and gas have a mass resolution $\sim 5\times10^4 \rm M_{\odot}$. The physical softening of collisionless particles grows with time and corresponds to a fixed comoving softening length of 500\,$h^{-1}$pc, while the maximum physical softening allowed is 369\,pc (see \citealt{Poweretal2003} for reasonable softening parameters). The physical softening for the gas cells is scaled by the gas cell radius with a minimum limit of the softening equal to that of the collisionless particles.
Star formation and stellar feedback is modelled as follows: if a given gas cell is eligible for star formation, it is converted (according to the \citealt{Chabrier03} initial mass function) either into a star particle -- in which case it represents a single stellar population of a given mass, age and metallicity -- or into a site for SNII feedback.
In the latter case, this particle is launched in a random direction as a wind particle with a velocity that scales with the 1-D local dark matter velocity dispersion (see \citealt{Grandetal2017} for more details). Its metal content is determined by the initial metallicity of the gas cell from which the wind particle originated, i.e. it is loaded with $\eta=0.6$ of the total metals of the parent gas cell. For the stellar particles, we model the mass loss and metal enrichment from SNIa and AGB stars by calculating the mass moving off the main sequence for each star particle at each timestep. The mass and metals are then distributed among nearby gas cells with a top-hat kernel.
We track a total of nine elements: H, He, C, O, N, Ne, Mg, Si, and Fe; in what follows we use (Mg+Si+O)/3 to study the $\alpha$-abundances.
The simulations form disc-dominated star-forming galaxies with flat rotation curves that reproduce a range of observed scaling relations such as the Tully-Fisher relation \citep{Grandetal2017} and the size-mass relation of HI gas discs \citep{Marinaccietal2017}. They also form instabilities in the discs such as bars and boxy/peanuts which have structural properties similar to those of observed bars \citepalias{BlazquezCaleroetal2020} and mainly consist of so-called pseudo-bulges \citepalias{Gargiuloetal2019}, reproducing what is found for disc galaxies in the local Universe \citep{KormendyKennicutt2004,Gadotti2009}.
Four of the haloes used in this study (Au13, Au17, Au23 and Au26) are the original haloes presented in \cite{Grandetal2017}, while Au18 is a re-run of the original halo 18 from \citet{Grandetal2017}, for which we have high cadence snapshot outputs, saved every 5\,Myr\footnote{While the initial conditions of the halo are the same as those of the original halo in \cite{Grandetal2017}, the final galaxy is not identical due to differences in the integration time-step. However the overall properties of the galaxy and its bar are broadly similar as a function of redshift, which gives confidence that the properties of strongly barred galaxies are to some extent robust.}.
\subsection{Analysis}
\label{sec:analysis}
To obtain the bar strength in our sample of simulated galaxies we select the stellar particles in the disc and calculate the Fourier modes of the surface density as,
\begin{equation}
a_m(R) = \sum^{N}_{i=1} {\rm m_i} \cos(m\theta_i), \,\,\,\,\, \,\,\,\,\, m=0,1,2,...,
\end{equation}
\begin{equation}
b_m(R) = \sum^{N}_{i=1} {\rm m_i} \sin(m\theta_i), \,\,\,\,\,\,\,\,\,\, m=0,1,2,...
\end{equation}
\noindent where $\rm m_i$ is the mass of particle $i$, $R$ is the cylindrical radius, $N$ is the total number of particles in that radius and $\theta$ is the azimuthal angle.
To obtain a single value for the bar strength we take the maximum of the relative $m=2$ component within the inner 10\,kpc as,
\begin{equation}
A_2 = \max_{R\,<\,10\,\mathrm{kpc}} \frac{\sqrt{ a^2_2(R) + b^2_2(R) }}{a_0(R)}.
\end{equation}
Depending on the analysis, this can be calculated for all stars in the disc, or for each mono-age or mono-abundance population separately. In what follows we define a (strong) bar as having formed when $A_2>0.3$. We always also visually inspect the bars to be sure that large values of $A_2$ are not due to transient effects, such as an off-centering due to a merger etc.
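For reference, a minimal Python sketch of this measurement (our illustration; the radial binning is an assumption not specified in the text) is:
\begin{verbatim}
import numpy as np

def bar_strength_A2(x, y, mass, r_max=10.0, n_bins=40):
    """m=2 Fourier bar strength: in radial bins, a_2 = sum_i m_i cos(2*theta_i),
    b_2 = sum_i m_i sin(2*theta_i), a_0 = sum_i m_i, and
    A_2 = max over R < r_max of sqrt(a_2**2 + b_2**2) / a_0."""
    R, theta = np.hypot(x, y), np.arctan2(y, x)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    A2 = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (R >= lo) & (R < hi)
        if not sel.any():
            continue
        a0 = mass[sel].sum()
        a2 = np.sum(mass[sel] * np.cos(2.0 * theta[sel]))
        b2 = np.sum(mass[sel] * np.sin(2.0 * theta[sel]))
        A2 = max(A2, np.hypot(a2, b2) / a0)
    return A2
\end{verbatim}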
The bar pattern speed in our fiducial model, Au18, is obtained by measuring the phase of the $m=2$ mode in each snapshot and computing its rate of change between consecutive outputs, which gives the pattern speed $\Omega_p$,
\begin{equation}
\Omega_p = \frac{\Delta \theta}{\Delta t}.
\end{equation}
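Schematically, and assuming the coordinates of two consecutive snapshots are already centred and aligned with the disc plane, this direct measurement could be implemented as below; the function names and the inner radius used to isolate the bar are illustrative choices rather than those of the actual analysis. The 5\,Myr output spacing guarantees that the bar rotates by much less than 90 degrees between outputs, so the $\pi$-ambiguity of the $m=2$ phase can be handled with a simple wrap.
\begin{verbatim}
import numpy as np

def bar_phase(x, y, mass, r_max=5.0):
    """Phase of the m=2 Fourier mode (the bar angle, defined modulo pi) [rad]."""
    R = np.hypot(x, y)
    sel = R < r_max
    theta = np.arctan2(y[sel], x[sel])
    a2 = np.sum(mass[sel] * np.cos(2.0 * theta))
    b2 = np.sum(mass[sel] * np.sin(2.0 * theta))
    return 0.5 * np.arctan2(b2, a2)

def pattern_speed(phase1, phase2, dt_gyr):
    """Finite-difference pattern speed between two snapshots separated by dt_gyr.

    The phase difference is wrapped into (-pi/2, pi/2], which assumes the bar
    rotates by less than 90 degrees between the two outputs.
    """
    dphi = (phase2 - phase1 + 0.5 * np.pi) % np.pi - 0.5 * np.pi
    omega = dphi / dt_gyr            # rad / Gyr
    return omega * 0.9778            # 1 rad/Gyr ~ 0.978 km/s/kpc
\end{verbatim}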
For the other four haloes investigated in this study, for which we do not have sufficiently high cadence outputs to calculate the bar pattern speed directly from the temporal evolution of the simulations, we calculate $\Omega_{\rm p}$ using the Tremaine-Weinberg method (TW; \citealt{TremaineWeinberg1984}). The method relies on the continuity equation and on the disc having a well defined pattern speed, such as the bar pattern speed $\Omega_{\rm p}$, and can be readily applied using pseudo-slits placed parallel to the line of nodes of the galaxy (see \citealt{TremaineWeinberg1984} for more details). We first tested the TW method on Au18, the fiducial model, and determined the parameters (such as bar orientation, disc inclination, and the number of slits and their extent along the bar major axis) that give the most accurate results (see \citealt{Debattista2003,GarmaOehmichenetal2019} and references therein for tests of the TW method). To calculate the pattern speeds in the other four galaxies in our sample we employ an inclination angle of the disc of $i=45$\,degrees and rotate the bar such that it has an angle of 60\,deg with respect to the line of nodes. We tested our implementation of the TW method on other reruns of the Auriga sample for which we have high cadence outputs (and which will be presented in future work; Fragkoudi et al. in prep.) and found that we can recover the true pattern speed with an accuracy of 5\%.
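A schematic, mass-weighted version of the TW integrals is sketched below (applications to observations use luminosity weighting instead); the slit offsets and widths are placeholders rather than the values adopted for the published measurements, and the sky-plane coordinates are assumed to already have $x$ along the line of nodes.
\begin{verbatim}
import numpy as np

def tw_pattern_speed(x, y, v_los, weight, incl_deg=45.0,
                     slit_offsets=np.arange(-4.0, 4.01, 0.5), slit_width=0.5):
    """Tremaine-Weinberg estimate of the pattern speed [km/s/kpc].

    x, y   : sky-plane coordinates [kpc], with x along the line of nodes
    v_los  : line-of-sight velocities [km/s]
    weight : particle masses (used here as a proxy for luminosity)
    Each pseudo-slit runs parallel to the line of nodes at offset y0.
    """
    X, V = [], []
    for y0 in slit_offsets:
        sel = np.abs(y - y0) < 0.5 * slit_width
        if np.sum(weight[sel]) == 0.0:
            continue
        X.append(np.average(x[sel], weights=weight[sel]))      # <X> integral
        V.append(np.average(v_los[sel], weights=weight[sel]))  # <V> integral
    # Omega_p * sin(i) is the slope of <V> versus <X> across the slits
    slope = np.polyfit(X, V, 1)[0]
    return slope / np.sin(np.radians(incl_deg))
\end{verbatim}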
\subsection{Barred-boxy/peanut sample}
\label{sec:bpsample}
In this study, we focus on five haloes from the Auriga suite, shown in Figure \ref{fig:rgball}, which have bars and prominent b/p bulges that are readily identified in their edge-on\footnote{In what follows, unless explicitly stated, when referring to edge-on projections we project the galaxy along the $y$-axis, with the bar's semi-major axis aligned with the $x$-axis.} surface density projections. We consider a galaxy to have a b/p bulge when the X-shape of the bar is visible in the edge-on projection, along with a `bump' in the mean height of stars as a function of radius (see Figure \ref{fig:peanut_strength}).
Four other Auriga barred galaxies show hints of a b/p bulge in the process of forming, which we term `weak b/p's' (these b/p's are identified by a `bump' in the mean height of \emph{young} stars). However, as these are too weak to be seen in the edge-on projection of \emph{all} stars, they would likely not be identified as peanut galaxies observationally; we therefore do not include them in this study\footnote{We note that there is no strict definition of how to classify a peanut (see also \citealt{CiamburGraham2016}). In \citetalias{BlazquezCaleroetal2020}, we use un-sharp masking of edge-on projections to identify b/p's, with which we identify 6 b/p's, five of which are the prominent ones presented in this study; the sixth one has a very weak b/p which we include in our sample of `weak b/p's'.}.
We note that the fraction of barred galaxies in the entire Auriga sample -- 40 haloes presented in \citet{Grandetal2017,Grandetal2019} -- as a function of redshift is consistent with observations (e.g. \citealt{Shethetal2008}): i.e. we find that $\sim70\%$ of the Auriga galaxies have bars at $z=0$ with the fraction steadily decreasing towards higher redshifts, and reaching a plateau of $\sim20\%$ at $z=1.5$ -- see Figure \ref{fig:barfrac}. We also compare the fraction of b/p's in Auriga (including weak b/p's) to observed fractions of b/p's in the local Universe \citep{Luttickeetal2000,ErwinDebattista2017}; the fraction of b/p's in Auriga is low (30\%) -- and is even lower if we exclude the weak b/p's which would be hard to detect observationally -- compared to the observed fraction of $\sim70\%$ (and see also \citetalias{BlazquezCaleroetal2020}). This could be due to the slightly too hot discs in our simulations (e.g. see \citealt{Grandetal2016}); we will discuss this and the formation of barred/peanut galaxies in the cosmological context in more detail in upcoming work (Fragkoudi et al. in prep.).
In the false-colour face-on and edge-on RGB images in Figure \ref{fig:rgball}, we see the overall morphology of the Auriga galaxies in our sample, with the corotation radius (CR) and Outer Lindblad Resonance (OLR) of the bar marked with the inner and outer dashed lines, respectively. The galaxies have interesting morphological features, similar to many barred galaxies in the local Universe; for example haloes Au17, Au18 and Au23 have a red and quenched region inside the bar radius (often referred to as the `star formation desert', e.g. \citealt{Jamesetal2009}), and a blue star forming disc. On the other hand, haloes Au13 and Au26 have ongoing star formation in the central regions. We also note the presence of a star forming inner ring inside the CR, which surrounds the bar, in haloes Au18 and Au23. These and other ring-like structures are a common feature of barred galaxies, and have typically been thought to form due to gas piling up at bar-induced resonances (\citealt{ButaCombes1996}; and see Section \ref{sec:effectring} for more discussion on these rings).
In the third row of Figure \ref{fig:rgball} we plot the rotation curves for these galaxies\footnote{Obtained by approximating the mass distribution as spherical and using $V_c(r) = \sqrt{GM(<r)/r}$.}. Haloes Au18 and Au23 exhibit almost flat outer profiles, while Au13, Au17 and Au26 have rather peaked profiles in the central regions due to a more concentrated stellar distribution (we note that all the galaxies in our sample have slightly declining rotation curves, as expected for massive spiral galaxies -- e.g. \citealt{SofueRubin2001}). In the same row we also show the angular frequency curves, as well as the $\Omega \pm \kappa/2$ curves, where $\kappa$ is the radial frequency of stars on near circular orbits\footnote{$\kappa(R_{\rm g}) = \sqrt{ R\,\frac{{\rm d}\Omega^2}{{\rm d} R} + 4\Omega^2}$ in the epicyclic approximation; see \citet{BT2008}.}. The bar pattern speed $\Omega_p$ is indicated by the horizontal red line, while its intersection with the $\Omega$ and $\Omega \pm \kappa/2$ curves gives the approximate locations of the CR and of the Outer and Inner Lindblad Resonances, respectively\footnote{These are the locations of the resonances strictly only for mildly non-axisymmetric systems.}.
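Given the spherical approximation for the rotation curve quoted in the footnote above, the resonance radii can be located numerically as sketched below; the value of the gravitational constant is in kpc\,(km/s)$^2$/M$_{\odot}$, and the simple interpolation assumes the frequency curves decrease monotonically with radius, which is an illustrative simplification rather than a description of our actual procedure.
\begin{verbatim}
import numpy as np

G = 4.30091e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def resonance_radii(r, m_enclosed, omega_p):
    """Approximate corotation and OLR radii for a given pattern speed.

    r          : radii [kpc], ascending
    m_enclosed : mass enclosed within each radius [Msun]
    omega_p    : bar pattern speed [km/s/kpc]
    """
    v_c = np.sqrt(G * m_enclosed / r)        # circular velocity [km/s]
    omega = v_c / r                          # angular frequency [km/s/kpc]
    # epicyclic frequency: kappa^2 = R dOmega^2/dR + 4 Omega^2
    kappa = np.sqrt(r * np.gradient(omega**2, r) + 4.0 * omega**2)

    def crossing(curve):
        # first radius where the (decreasing) curve drops below omega_p
        i = int(np.argmax(curve < omega_p))
        f = (curve[i - 1] - omega_p) / (curve[i - 1] - curve[i])
        return r[i - 1] + f * (r[i] - r[i - 1])

    r_cr = crossing(omega)                   # Omega = Omega_p
    r_olr = crossing(omega + 0.5 * kappa)    # Omega + kappa/2 = Omega_p
    return r_cr, r_olr
\end{verbatim}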
In the fourth row of Figure \ref{fig:rgball} we show the evolution of bar strength as a function of time; we mark the bar formation time (i.e. for $A_2>0.3$) with a thin dot-dashed vertical line and the formation of the b/p with a thick dashed line.
The properties of the bars in the Auriga simulations at $z=0$ are in general in good agreement with those of observed galaxies, as can be seen in the bottom panels of Figure \ref{fig:barfrac}, where we show the relation between disc scale-length, $\rm h_r$, and bar length, $\rm r_{bar}$, as well as bar ellipticity, $\epsilon$, vs $\rm r_{bar}/\rm h_r$ (see also \citetalias{BlazquezCaleroetal2020} who carry out a detailed comparison of structural properties of bars and b/p's at $z=0$ in Auriga with observations). Here we derive the disc scale-lengths by fitting the 1D surface density with a disc and bulge component. The bar lengths are obtained from ellipse fits to the surface density images, where the bar length is taken as the smaller of two radii: that of the first minimum in the ellipticity profile, and that at which the position angle of the ellipses changes by more than 5 degrees (see \citealt{Erwin2005}). The bar ellipticity is obtained as the maximum ellipticity of our fits in the bar region\footnote{\cite{Gadotti2008} showed that the ellipticity obtained using ellipse fits is 20\% lower than that obtained using 2D image decompositions (see their Section 3.4); we therefore correct our ellipticities accordingly in order to be able to compare with the observed sample.}. We see that the bars in our simulations match the properties of observed barred galaxies in \cite{Gadotti2011} well, in terms of bar length, ellipticity and disc scalelength.
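For reference, the bar-length criterion described above could be applied to pre-computed ellipse-fit profiles as in the sketch below; the input profiles are assumed to come from a separate isophote-fitting step (e.g. with a tool such as photutils), and the function and variable names are ours rather than those of our actual pipeline.
\begin{verbatim}
import numpy as np

def bar_length(sma, ellipticity, pa_deg, dpa_max=5.0):
    """Bar length from ellipse-fit profiles (sketch).

    sma         : semi-major axes of the fitted ellipses [kpc], ascending
    ellipticity : ellipticity profile of the fits
    pa_deg      : position-angle profile of the fits [deg]
    Returns the smaller of (i) the radius of the first ellipticity minimum
    beyond the ellipticity maximum and (ii) the first radius at which the PA
    deviates by more than dpa_max degrees from the PA at maximum ellipticity.
    """
    i_max = int(np.argmax(ellipticity))
    # (i) first local minimum of the ellipticity profile beyond the maximum
    e = ellipticity[i_max:]
    minima = np.where((e[1:-1] < e[:-2]) & (e[1:-1] < e[2:]))[0]
    r_eps = sma[i_max + 1 + minima[0]] if minima.size else sma[-1]
    # (ii) first radius where the PA drifts by more than dpa_max degrees
    drift = np.where(np.abs(pa_deg[i_max:] - pa_deg[i_max]) > dpa_max)[0]
    r_pa = sma[i_max + drift[0]] if drift.size else sma[-1]
    return min(r_eps, r_pa)
\end{verbatim}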
In what follows we refer to Au18 as our fiducial model; this run has high cadence outputs and is therefore used for tests in much of the analysis that follows. Furthermore, as we will show in the following Sections, the model has chemodynamical properties similar to those of the Milky Way bulge.
\section{Ages \& abundances in bars and b/p bulges}
\label{sec:ageabund}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{plots/xy_age_FeH_alpha_only_all_zcut05_forpap_cmap_10_compressed.pdf}
\caption{Face-on projection of the ages and abundances of stars within $|z|<0.5\,\rm kpc$ in our sample of galaxies. \emph{Top row:} Mass-weighted mean age. \emph{Second row:} Mass-weighted mean [Fe/H]. \emph{Third row:} Mass-weighted mean [$\alpha$/Fe] distribution. In all panels the inner and outer dashed circles denote the CR and OLR radius. The black curves indicate iso-density contours.}
\label{fig:xyall_ages}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{plots/xz_age_FeH_alpha_only_all_forpap_cmap2_compressed.pdf}
\caption{Ages and abundances of the boxy/peanut bulges in our sample: \emph{First row:} Mass-weighted mean age in the boxy/peanut. \emph{Second row:} Mass-weighted mean [Fe/H]. \emph{Third row:} Mass-weighted mean [$\alpha$/Fe]. The vertical dashed lines indicate the corotation radius (for the cases where it falls inside the plotted region). We see that the edges of the b/p bulges are traced by younger, more metal-rich and more $\alpha$-poor populations. }
\label{fig:xy_edgeon_feh_all}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.23\textwidth]{plots/bulgestars_Au18_agehist.pdf}
\includegraphics[width=0.23\textwidth]{plots/agebulge_cululative_all}
\caption{Age distributions in the inner regions of the Auriga galaxies explored in this study. \emph{Left panel:} Distribution of ages inside the boxy/peanut bulge of halo Au18. In the top left corner we denote the fraction of stars younger than a certain age (<8, 5, 3\,Gyrs). \emph{Right panel:} Cumulative ages inside the boxy/peanut bulges of all the haloes in our sample.}
\label{fig:cumul}
\end{figure}
\subsection{Face-on projection}
In Figure \ref{fig:xyall_ages} we show face-on maps of mass-weighted mean age, metallicity and $\alpha$-abundances for all galaxies in our sample, selecting stars within $|z|<0.5\,\rm kpc$ from the plane of the galaxy. We see that there is a large variety in the mean ages and abundances of the bars and discs in the sample. In haloes Au17, Au18 and Au23 the bar region is overall old (mean age $>8\,\rm Gyrs$), while haloes Au13 and Au26 have more recent episodes of star formation (as we will see below in Section \ref{sec:SFH}) and therefore have younger ages ($\sim4\,\rm Gyr$) in the bar region.
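The maps in Figure \ref{fig:xyall_ages} are mass-weighted means on a spatial grid; a minimal sketch of how such a map could be produced (here for age, but identical for [Fe/H] or [$\alpha$/Fe]) is given below, with the grid size and pixel scale chosen purely for illustration.
\begin{verbatim}
import numpy as np

def mass_weighted_map(x, y, z, mass, quantity, extent=10.0, n_pix=200, z_cut=0.5):
    """Face-on map of a mass-weighted mean quantity (e.g. age or [Fe/H]).

    Stars with |z| < z_cut [kpc] are binned onto an (n_pix x n_pix) grid of
    half-size `extent` [kpc]; empty pixels are returned as NaN.
    """
    sel = np.abs(z) < z_cut
    bins = [np.linspace(-extent, extent, n_pix + 1)] * 2
    m_map, _, _ = np.histogram2d(x[sel], y[sel], bins=bins, weights=mass[sel])
    q_map, _, _ = np.histogram2d(x[sel], y[sel], bins=bins,
                                 weights=mass[sel] * quantity[sel])
    with np.errstate(invalid='ignore', divide='ignore'):
        return q_map / m_map   # mass-weighted mean per pixel
\end{verbatim}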
We see that in all cases the ends of the bar tend to have younger ages than stars found perpendicular to the bar. If we were to trace the mean age of stars in cuts perpendicular to the bar, we would therefore find a decreasing age gradient towards the centre of the bar, with younger ages clustering along the bar semi-major axis (see also \citealt{Wozniak2007}). This is likely a consequence of the kinematic differentiation of stars in the bar (which we discuss in more detail in the next Section), where younger populations in the bar have more elongated shapes than older populations, which are rounder (see \citealt{Debattistaetal2017,Fragkoudietal2017b}). The variation in morphology for populations of different ages naturally leads to such an age gradient perpendicular to the bar. This behaviour has recently been observed in local barred galaxies using spectroscopic data from the MUSE-TIMER survey -- see \cite{Neumannetal2020} for more details.
In the second row of Figure \ref{fig:xyall_ages} we show the metallicity distribution in our sample. We see that in all haloes, the bar is more metal-rich than the surrounding disc, with the exception of haloes Au18 and Au23 where a prominent inner ring is formed, which is star-forming and metal-rich (see Section \ref{sec:effectring} for a more detailed discussion on these inner rings). In these two cases (Au18 and Au23) only the inner 1.5\,kpc of the galaxy is more metal-rich than the inner ring, while along the bar the metallicity is lower than that of the inner ring. The metal-rich innermost regions of the bars in our sample, i.e. inside $\sim$1.5\,kpc, perhaps indicate the formation of nuclear discs in the Auriga galaxies. However, these would be larger than those found in observed galaxies (here of the order of 1-2\,kpc, while nuclear discs tend to have sizes of the order of a few hundred parsec, e.g. \citealt{Comeronetal2010}), which could be due to resolution issues in the central-most kiloparsec or possibly due to the AGN feedback implementation; this will be the subject of future investigations. We also see that, especially in the region of the bar, there are clear azimuthal variations in the metallicity maps, with metallicity gradients being flatter along the bar than perpendicular to it.
In the third row of Figure \ref{fig:xyall_ages} we show the $\alpha$-abundances in the discs of our haloes. We see that the inner regions are on average more $\alpha$-enhanced, with a similar gradient as for the age, i.e. $\alpha$-poor stars are concentrated along the bar major axis. We also see that for haloes Au18 and Au23, the aforementioned possible metal-rich nuclear discs correspond to regions of low $\alpha$-enhancement, as is expected for these types of inner structures which form through secular processes.
\subsection{Edge-on projection}
The MW bulge has long been thought to be exclusively old, as found by studies using colour-magnitude diagrams in fields towards the bulge (e.g. \citealt{Zoccalietal2003,Clarksonetal2008,Valentietal2013}). On the other hand, recent studies such as those of \cite{Bensbyetal2013,Bensbyetal2017}, which derive the ages of microlensed stars in the bulge, find that there is in fact a wide distribution of ages in the bulge, with up to 50\% of metal-rich ([Fe/H]>0) stars younger than 8\,Gyrs. Furthermore, \cite{Haywoodetal2016} recently re-analysed the CMD which was used in \cite{Clarksonetal2008} and found that when allowing for an evolving Age-[Fe/H] relation for stars in the bulge, the bulge CMD is better fit by isochrones with a spread of ages. They furthermore found that all stars with [Fe/H]>0 can be younger than 8\,Gyrs (which would make up $\sim50\%$ of all the stars in the bulge).
In what follows we analyse the age distributions for the b/p's in our sample of simulated galaxies.
In Figure \ref{fig:xy_edgeon_feh_all} we show edge-on maps of mean ages, metallicities and abundances in our sample (to remove contamination from disc stars we exclude stars outside galactocentric radius $R=6\,\rm kpc$). We see that, as in the face-on distribution of ages in the galaxies in Figure \ref{fig:xyall_ages}, haloes Au17, Au18 and Au23 have on average older b/p bulges, while Au13 and Au26 have younger b/p's, since they have recent ongoing star formation in the central regions. In all haloes we see an X-shaped distribution of ages, with the younger populations dominating the X-shape of the peanut -- i.e. the relative fraction of young to old stars will depend on which region of the bulge is explored.
In the second and third rows of the figure we show the edge-on metallicity and $\alpha$-abundance distributions. We see that all the b/p bulges exhibit pinched, X-shaped metallicity and $\alpha$-abundance distributions (see also e.g. \citealt{Gonzalezetal2017,Debattistaetal2017,Fragkoudietal2018}).
We examine the age distribution of stars in the entire b/p bulge region (here we restrict this cut to $R<4\, \rm kpc$ and $|z|<2\, \rm kpc$ to take only stars within the central-most regions) of our fiducial model, Au18, in the left panel of Figure \ref{fig:cumul}. We see that there is a significant fraction of young stars in the boxy/peanut bulge region, with 52\% of stars younger than 8\,Gyr, 30\% younger than 5\,Gyr and 13\% younger than 3\,Gyr. In the right panel of Figure \ref{fig:cumul} we show the cumulative age distribution in all five haloes in our sample. We see that all haloes show significant fractions of young stars inside the boxy/peanut bulge, with the fraction of stars younger than 5\,Gyr ranging between 25-60\% depending on the halo and its star formation history\footnote{We note that, as we discuss in later sections, Au18 has tentatively a similar star formation history as the MW.}. It is also worth noting that none of our b/p bulges is exclusively old (i.e. with all stars older than 10\,Gyr).
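The quoted fractions correspond to a simple spatial cut in the central region; a sketch of the calculation (mass-weighted here, with the region limits taken from the text and everything else illustrative) is:
\begin{verbatim}
import numpy as np

def young_fractions(age_gyr, mass, R, z, thresholds=(8.0, 5.0, 3.0),
                    r_max=4.0, z_max=2.0):
    """Mass fractions of stars younger than the given ages in the b/p region."""
    sel = (R < r_max) & (np.abs(z) < z_max)
    m_tot = np.sum(mass[sel])
    return {t: np.sum(mass[sel & (age_gyr < t)]) / m_tot for t in thresholds}
\end{verbatim}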
Our models therefore suggest that there is a spread of ages in b/p's that are formed in the full cosmological setting, with a non-negligible fraction of young stars in the central regions of Milky Way-mass galaxies.
\section{Morphological properties of stellar populations in bars and b/p bulges}
\label{sec:chemorph}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{plots/xy_halo18_example_faceon_hor.pdf}
\caption{Chemo- and chrono-morphological properties of Au18: \emph{Top:} Face-on surface density projection of Au18 for all stars (left) and for stars with different ages (subsequent columns; age is denoted in the upper left corner of each panel) as well as accreted stars (right panel). \emph{Bottom:} Face-on surface density projection for stars with different metallicities, as denoted in the top left corner of each panel.}
\label{fig:xy_faceon_edgeon}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{plots/barstrength_vs_fe_wsigma_wage2.pdf}
\caption{The dependence of bar strength on metallicity, velocity dispersion and age: The black curves and left $y$-axis indicate the bar strength $A_2$ for each mono-abundance population. The blue curves and right $y$-axis denote the radial (solid) and vertical (dashed) velocity dispersion of the underlying population, while the colour-coding of the circles indicates their mean age (see colourbar -- in Gyr -- of each panel). Bar strength increases for increasing [Fe/H], decreasing velocity dispersion and younger age of the underlying population.}
\label{fig:bs_feh_all}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{plots/Angmom_halo18_specific_new2.pdf}
\includegraphics[width=0.47\textwidth]{plots/Angmom_halo23_specific_new.pdf}
\caption{\emph{Left:} Evolution of the specific angular momentum of mono-age populations (denoted in Gyrs) in Au18 (first row) and Au23 (second row), inside the bar region. The vertical dashed line marks the time of bar formation, while the vertical dot-dashed line marks the onset of the bar instability. \emph{Right:} Change in specific angular momentum.}
\label{fig:angmom2318}
\end{figure}
In this section we examine the chemo-morphological properties of stellar populations in the bars and b/p bulges of the galaxies in our sample.
In Figure \ref{fig:xy_faceon_edgeon} we show mass-weighted face-on surface density maps for different age and metallicity bins for our fiducial model Au18. In the columns of the top row we show the morphology of different mono-age populations where age is indicated in the top left corner of each panel, and in the rightmost panel we show the face-on surface density of the accreted component in the galaxy. In the second row we show the face-on surface density as a function of metallicity, where the metallicity intervals are indicated in the bottom left corner of each panel. The younger and more metal-rich populations have a more elongated, bar-like morphology than the older populations which are on average rounder -- however even these oldest populations show signs of bar-like morphology (see also Figures \ref{fig:xy_faceon_ages_all}-\ref{fig:xy_faceon_feh_all} for the face-on and edge-on surface density maps as a function of age and metallicity for all haloes in our sample). This behaviour has also been found in Made-to-Measure chemodynamical models of the Milky Way bar and bulge (see \citealt{Portailetal2017b}) suggesting that the Milky Way has a similar chemo-morphological dependency in the bar region.
In Figure \ref{fig:bs_feh_all} we quantify this by calculating the bar strength $A_2$ as a function of metallicity for Au18, and the other four galaxies in our sample. In all the haloes in our sample, the bar strength increases as a function of the metallicity of the stellar populations. This behaviour is a consequence of the kinematic properties of the underlying stellar population \citep{Fragkoudietal2017b}, as shown by the blue lines in the same Figure, which denote the in-plane (solid) and vertical (dashed) velocity dispersion, $\sigma_r$ and $\sigma_z$. The coloured circles denote the mean age of each mono-abundance population, as shown by the colourbar in each panel, where the minimum and maximum age are denoted in Gyrs. To calculate the velocity dispersion and age of each population we select stars from the disc in an annulus outside the bar region (so that the velocity dispersion of the stars is minimally affected by the bar)\footnote{By selecting stars at larger radii we bias the mean age towards younger ages, however we are mainly interested in the trend by which younger populations are colder, and therefore have stronger bar-like morphologies.}. We find that the velocity dispersions of the mono-abundance populations decrease for more metal-rich stars while the bar strength increases, signalling the fact that colder, and therefore younger, populations can participate more strongly in the bar instability. We see therefore that there is a relation between the bar (and b/p) morphology and the kinematics of the underlying mono-age or mono-abundance population (for more details see \citealt{Fragkoudietal2017b}; \citealt{Debattistaetal2017} termed this behaviour `kinematic fractionation').
This kinematic differentiation of mono-age populations in the bar and b/p occurs due to the amount of angular momentum that these populations are able to exchange (see also \citealt{Fragkoudietal2017b}). This is further explored in Figure \ref{fig:angmom2318} for our fiducial model, Au18 (top row), and for Au23 (bottom row), where we show the specific angular momentum ($l_z$) evolution of mono-age populations inside the bar region (i.e. $R<6\,\rm kpc$). In the left panels the dot-dashed line marks the beginning of the bar instability phase, and in all panels the dashed line marks the time at which a strong bar has formed (i.e. when $A_2>0.3$).
In the top left panel of the Figure we see that populations are born with progressively more angular momentum, until the bar forms, as denoted by the dot-dashed line. The second column shows the change in specific angular momentum for each population from the onset of the bar instability (or from the time of birth for those which are born after the bar forms). We see that the populations born before the bar lose the most angular momentum (which is redistributed to the outer disc and halo). Of these, the oldest population (10-12\,Gyr; dark brown curve) loses less angular momentum than younger populations (8-10\,Gyr; light brown curve) which are formed just before the bar forms. The same behaviour can be seen for halo Au23 in the bottom panels of Figure \ref{fig:angmom2318}. This occurs because the older populations are hotter and therefore lose less angular momentum than the colder populations which can get trapped on more elongated bar-like orbits (see \citealt{Fragkoudietal2017b}).
On the other hand, the populations born after the bar forms have an already decreased specific angular momentum, compared to that which they would have were the bar not present. This can be verified by examining the specific angular momentum of mono-age populations in Au23, where the bar forms at a later time as compared to Au18 ($t_{\rm lookback}$=4\,Gyr vs 8\,Gyr). We see that the specific angular momentum of younger populations increases until the bar forms, as stars are forming on more settled circular orbits. However, once the bar forms, stars born inside the bar region are born on more elongated orbits, and thus with less angular momentum to begin with.
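The angular momentum evolution discussed above amounts to tracking the mean specific $l_z$ of each mono-age population, snapshot by snapshot, inside the bar region; a sketch of the per-snapshot measurement is given below. The age bin edges and the bar-region radius are illustrative, and in practice the same birth-selected stars are followed through time.
\begin{verbatim}
import numpy as np

def lz_mono_age(x, y, vx, vy, mass, age_gyr,
                age_edges=(0, 2, 4, 6, 8, 10, 12), r_max=6.0):
    """Mass-weighted mean specific angular momentum l_z of mono-age populations.

    Returns a dict mapping (age_min, age_max) -> mean l_z [kpc km/s] for stars
    with cylindrical radius R < r_max [kpc].
    """
    R = np.hypot(x, y)
    lz = x * vy - y * vx                 # specific z-angular momentum per star
    inside = R < r_max
    out = {}
    for lo, hi in zip(age_edges[:-1], age_edges[1:]):
        sel = inside & (age_gyr >= lo) & (age_gyr < hi)
        if np.any(sel):
            out[(lo, hi)] = np.average(lz[sel], weights=mass[sel])
    return out
\end{verbatim}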
We show the edge-on surface density distributions in the top panels of Figure \ref{fig:morph_edgeon_abund} in three metallicity bins -- from top to bottom, [Fe/H]$>$0, -0.5<[Fe/H]<0 and -1<[Fe/H]<-0.5 respectively -- for the galaxies in our sample. As in the case of the face-on projection and the bar, the b/p morphology is more pronounced for more metal-rich populations. However, the morphology of the most metal-poor component (i.e. whether a flattened or spheroidal distribution) is different for each of the five haloes. In Sections \ref{sec:kin2} and \ref{sec:SFH} we will explore how the morphology of the metal-poor population depends on its kinematic properties and on the galaxy's assembly history.
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{plots/xy_vels_only_all_zcut05_2_forpap_10}
\caption{Global kinematics in the face-on projection of bars and b/p bulges: Mass-weighted tangential velocity $V_{\phi}$ (first row), radial velocity $V_r$ (second row) and vertical velocity $V_z$ (third row) for stars with $|z|<0.5\,\rm kpc$. In all panels we denote the corotation and OLR radius (when they fall within 10\,kpc) by the inner and outer dashed circles. We see a clear kinematic signature for the asymmetric b/p's (Au17, Au18 and Au26) as a butterfly pattern in $V_z$ (see Appendix \ref{sec:Appendixvels} for more details).}
\label{fig:xyall}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.86\textwidth]{plots/xz_vels_sigma_only_all_forpap_compressed}
\caption{Global kinematics in the edge-on projection of bars and b/p bulges: \emph{First row:} Mass-weighted line of sight velocity where iso-velocity contours are denoted with white lines with a spacing of $25\, {\rm km\,s^{-1}}$. We see that all models show cylindrical rotation. \emph{Second row:} Mass-weighted line of sight velocity dispersion. We see that the velocity dispersion profile has an X-shape. In all panels the vertical dashed line indicates the CR radius of the model (if it is inside 6\,kpc), the bar is along the $x$-axis and the black lines show iso-density contours.}
\label{fig:xzvelsall}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{plots/composite_chemodynamics2.pdf}
\caption{{\textbf{The relation between chemodynamical properties and formation history:}} \emph{Top panels:} Edge-on morphology of b/p's in three metallicity bins: From top to bottom, [Fe/H]>0, -0.5 <[Fe/H]<0 and -1 <[Fe/H]<-0.5.
\emph{Middle panels:} Line of sight velocities (top) and velocity dispersion (bottom) for stars in the three aforementioned metallicity ranges as a function of longitude. We compare these to the kinematic properties of the Milky Way bulge (circles) from \citet{Nessetal2013b}.
\emph{Bottom panels:}
Star formation history of the disc for the five galaxies in our sample (black line), and separated in the three different metallicity bins as indicated by the coloured lines in the legend. The vertical coloured lines at the top of the panels indicate the merger time of all galaxies with $M_{\star,s} > 10^7\rm M_{\odot}$; the colour of the lines indicates the stellar mass ratio of the merger, $f_{\star} = M_{\star,s}/M_{\star,m}$ (see the colourbar below the figure). The vertical black dotted line marks the formation time of the bar. The galaxies in which the metal-poor component has a thick-disc-like morphology (e.g. Au17 and Au18) are also the ones in which these stars rotate almost as fast as the metal-rich population. These galaxies are also the ones in which the merger history of the galaxy is very quiescent, indicating the intimate relation between the chemodynamical properties of the bulge and formation history.
}
\label{fig:morph_edgeon_abund}
\end{figure*}
\section{Kinematic properties of stellar populations in bars and b/p bulges}
\label{sec:chemokinem}
In this section we examine the global kinematic properties of the bars and b/p bulges in Auriga (Section \ref{sec:kin1}), and then explore the kinematic properties of mono-abundance populations inside the bar-b/p region, comparing them to the properties of the Milky Way bulge (Section \ref{sec:kin2}).
\subsection{Global Kinematic properties}
\label{sec:kin1}
In Figure \ref{fig:xyall} we show face-on kinematic maps (mass-weighted mean $V_{\phi}$, $V_r$ and $V_z$) of the Auriga haloes in our sample; we focus on the kinematics close to the plane by taking stars within $|z|<0.5\, \rm kpc$. The corotation and OLR radii are marked with the inner and outer dashed circles respectively, while the black lines are iso-density contours of the face-on surface density distribution.
In the top row of Figure \ref{fig:xyall} we see that all galaxies show an elongated shape of low $V_{\phi}$, which follows the shape of the bar, due to the slower rotation of stars on bar-like orbits. In some haloes (e.g. Au18 and Au23) we also see an X-shape in the $V_{\phi}$ pattern, indicating the X-shaped morphology of orbits in the peanut region.
The face-on distribution of $V_r$ shows a butterfly pattern, characteristic of barred galaxies, i.e. of inward and outward moving velocities in the bar region.
When examining the mean vertical velocity, $V_z$, we see that there is a butterfly pattern in some of the haloes, in particular in Au17, Au18 and Au26. Upon closer examination of the edge-on morphology of these haloes (top row of Figures \ref{fig:morph_edgeon_abund} \& \ref{fig:xy_edgeon_ages_all}) we see that their boxy/peanuts are asymmetric, i.e. the peanut is currently undergoing a buckling instability. This signature of buckling bars in velocity was first explored in an isolated N-body simulation of a Milky Way-like galaxy in \citet{Lokas2019}, and we confirm this signature here using the Auriga cosmological simulations, as well as verifying it with an isolated disc galaxy simulation in Appendix \ref{sec:Appendixvels}. Interestingly, Au17 and Au18, which are currently undergoing a buckling phase, have bars which formed at $z>1$ and have hosted a b/p for the last 8 and 5\,Gyr, respectively (see the thick dashed line in the bottom panel of Figure \ref{fig:rgball}). This indicates that they are undergoing a renewed buckling instability at $z=0$, which has been shown to occur for strong bars (e.g. \citealt{MartinezValpuestaetal2006}). This butterfly pattern in $V_z$, which is present for asymmetric b/p's, can therefore be used to identify buckling bars in almost face-on projections.
In Figure \ref{fig:xzvelsall} we show edge-on kinematic maps of the galaxies in our sample (where the bar is aligned with the $x$-axis), focusing on the inner region, i.e. on the boxy/peanut bulges. To reduce contamination from outer disc particles, we select stars within $R<6\,\rm kpc$ of the galactic centre. In the cases where corotation falls inside the panel, it is marked with a curved white dashed line. In the first row we show the mass-weighted line-of-sight velocity $V_{\rm los}$ for the galaxies in our sample, with white contours denoting iso-velocity lines with a spacing of 25\,km/s. As expected for boxy/peanut bulges, these galaxies exhibit cylindrical rotation inside the boxy/peanut bulge region, i.e. the line of sight velocity is independent of height above the plane. In \citetalias{Gargiuloetal2019} we also showed that, overall, the bulges in Auriga exhibit rather large amounts of rotation. In the second row of Figure \ref{fig:xzvelsall} we show the mass-weighted line of sight velocity dispersion, $\sigma_{\rm los}$, which for all haloes has a distinctive X-shape, with low velocity dispersion tracing the tips of the peanut.
\subsection{Chemo-kinematic properties}
\label{sec:kin2}
We now explore the kinematic properties of stars in the bar-b/p of the Auriga galaxies as a function of metallicity, and compare these to the chemo-kinematic properties of the Milky Way bulge.
In the middle panels of Figure \ref{fig:morph_edgeon_abund} we show the line of sight velocity, $V_{\rm GC}$, and velocity dispersion, $\sigma_{\rm GC}$, as a function of longitude for stars in the b/p bulges of the Auriga galaxies (solid lines). We separate the stars into three metallicity bins -- [Fe/H]>0 (red), -0.5<[Fe/H]<0 (blue) and -1<[Fe/H]<-0.5 (green) -- and compare these to the velocity and velocity dispersion of stars with corresponding metallicities in the Milky Way bulge (circles), using data from the ARGOS survey \citep{Nessetal2013b}. In order to compare the models to the Milky Way, we rescale the masses of the Auriga galaxies to match the stellar mass of the Milky Way, and rotate the bar to have an angle of 30 degrees with respect to the galactocentric line of sight, placing the observer at 8.3\,kpc from the centre of the galaxy \citep{BlandHawthornGerhard2016}. For this plot we select stars close to the plane ($|b|<1\,\rm deg$ for the model and $b=5\,\rm deg$ for the ARGOS data).
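The projection onto the observer's frame can be sketched as follows, using only the geometric assumptions quoted above (bar angle of 30 degrees, observer at 8.3\,kpc); the latitude cut, the mass rescaling and the longitude binning used for the published comparison are omitted or replaced by illustrative values here, and the function name is ours.
\begin{verbatim}
import numpy as np

def kinematics_vs_longitude(x, y, vx, vy, bar_angle_deg=30.0, r0=8.3,
                            l_edges=np.arange(-15.0, 15.1, 2.0)):
    """Galactocentric line-of-sight velocity and dispersion vs longitude (sketch).

    x, y (vx, vy) : in-plane positions [kpc] (velocities [km/s]) with the bar
                    initially along the x-axis; the observer sits in the plane
                    at distance r0 [kpc] from the galactic centre.
    """
    phi = np.radians(bar_angle_deg)
    # rotate so the bar makes an angle bar_angle_deg with the Sun-centre line
    xr = x * np.cos(phi) - y * np.sin(phi)
    yr = x * np.sin(phi) + y * np.cos(phi)
    vxr = vx * np.cos(phi) - vy * np.sin(phi)
    vyr = vx * np.sin(phi) + vy * np.cos(phi)

    dx, dy = xr + r0, yr                    # vector from observer to each star
    dist = np.hypot(dx, dy)
    lon = np.degrees(np.arctan2(dy, dx))    # in-plane longitude [deg]
    v_los = (vxr * dx + vyr * dy) / dist    # galactocentric line-of-sight velocity

    l_cen, v_mean, v_disp = [], [], []
    for lo, hi in zip(l_edges[:-1], l_edges[1:]):
        sel = (lon >= lo) & (lon < hi)
        if np.count_nonzero(sel) < 10:
            continue
        l_cen.append(0.5 * (lo + hi))
        v_mean.append(np.mean(v_los[sel]))
        v_disp.append(np.std(v_los[sel]))
    return np.array(l_cen), np.array(v_mean), np.array(v_disp)
\end{verbatim}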
In the Milky Way bulge the line-of-sight velocity $V_{\rm GC}$ of stars is comparable for the three different metallicity bins, i.e. even the most metal-poor stellar population with -1<[Fe/H]<-0.5 has significant rotation, similar to the metal-rich and intermediate populations, even though it is a hotter component with higher velocity dispersion. For the models explored here, this behaviour approximately holds for Au17 and Au18, i.e. all three metallicity components have similar rotation. On the other hand, we see that the most metal-poor component in models such as Au26 has little net rotation, differing significantly from the rotation of the metal-rich and intermediate components. We also note the correlation between morphology and rotation for the metal-poor component: Au17 and Au18, which have highly rotating metal-poor populations, also have flattened, thick disc-like morphologies for these populations, while Au26, whose metal-poor component is hardly rotating, has a spheroidal morphology. As we discuss below, in Section \ref{sec:SFH}, the merger history of these galaxies is imprinted on the morphology and kinematics of the metal-poor stellar populations.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots/deltaV_merger_all_efrt2.pdf}
\caption{{\textbf{$\delta V$ vs merger impact:}} The normalised difference in velocity between the metal-rich, intermediate and metal-poor populations at the effective radius ($\delta V$) of all Auriga galaxies versus the impact of mergers, given by the sum over all mergers of the stellar mass ratio normalised by the merger redshift (see text for more details). The colour-coding shows the sum of the dot product of the angular momentum of the main disc and that of the orbital angular momentum of the merging subhalo, weighted by the merger impact, indicating how prograde or retrograde all the mergers in a given galaxy are. Disc galaxies with higher impact from mergers will have larger $\delta V$, as discussed in Figure \ref{fig:morph_edgeon_abund}; Haloes Au17 and Au18 which are most similar to the Milky Way in terms of bulge chemodynamics have the lowest merger impact and overall number of mergers with stellar mass above $10^7\rm M_{\odot}$. Some of the scatter in the relation is due to the orbital configuration of mergers: galaxies with cos$\alpha$ $\sim 1$ have had mostly prograde mergers which lead to lower $\delta V$ for a higher merger impact.}
\label{fig:formrelation}
\end{figure}
\section{Formation histories of barred-b/p galaxies}
\label{sec:SFH}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots/SFH_fracs_4}
\caption{Comparison of in-situ vs accreted population in the boxy/peanut bulges of our sample. \emph{Top left:} Star formation histories for the in-situ (blue) and accreted (red) populations in the haloes in our sample. We see that for all haloes the accreted material in the b/p region is older than 7\,Gyrs. \emph{Top right:} Fraction of accreted vs in-situ stars inside the b/p bulge region for the five haloes for stars of all metallicities. \emph{Bottom panels:} Fraction of in-situ vs accreted stars for stars with metallicities -1$<$[Fe/H]$<$-0.5 (left) and [Fe/H]<-1 (right).}
\label{fig:fr_accr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{plots/fracinsitu_bar_nobar_bp_nl_mean}
\caption{Average fraction of accreted vs in-situ stars in the central regions ($r<4\,\rm kpc$, $|z|<2\,\rm kpc$) of galaxies in Auriga, separating into those in this study, i.e. with a boxy/peanut (bp), barred Auriga galaxies without a prominent b/p (nobp) and Auriga galaxies without a bar (nobar). We see that galaxies which form a b/p have on average little accreted material in their central region, with the ex-situ fraction being of the order of a few percent.}
\label{fig:fr_withwithoutbp}
\end{figure}
We explore the link between the formation history of galaxies in our sample and the chemo-morphological and chemo-kinematic properties of stellar populations in their bars and b/p bulges (for an exploration of the formation histories of \emph{all} bulges in Auriga see \citetalias{Gargiuloetal2019}).
In the bottom panels of Figure \ref{fig:morph_edgeon_abund} we show the star formation rate (SFR) in the disc (solid black curve) of the five galaxies in our sample, i.e. within $R<20\,\rm kpc$ and $|z|<2\,\rm kpc$. The red, blue and green curves indicate the SFR of the metal-rich ([Fe/H]>0), intermediate (-0.5<[Fe/H]<0) and metal-poor (-1<[Fe/H]<-0.5) populations respectively. In all panels the vertical black dotted line marks the formation time of the bar. The upper thick vertical coloured lines mark the merger times of all satellites with stellar mass $M_{\star,s}>10^7 \rm M_{\odot}$; the lines are coloured according to the stellar mass ratio of the merger $f_{\star}=M_{\star ,s}/M_{\star , m}$, where $M_{\star ,s}$ and $M_{\star,m}$ are the stellar mass of the satellite and the main galaxy respectively, at the time of satellite infall\footnote{We note that the bars in our sample tend to form soon after a merger, since mergers increase the disc mass, and can also remove angular momentum from the disc thus kick-starting the formation of the bar (we will discuss in more detail the mechanisms responsible for bar formation in Auriga galaxies in Fragkoudi et al. in prep.).}.
By comparing the middle and bottom panels of Figure \ref{fig:morph_edgeon_abund}, we see that the two haloes that have the most similar kinematic properties to the bulge of the Milky Way -- in terms of the high rotation of their metal-poor component -- i.e. Au17 and Au18, have no major mergers (stellar mass ratios $>$1:4) in the last 12\,Gyrs of their evolution. The most massive mergers these galaxies experience since $z\sim3.5$ are $<$1:20 mergers, which is broadly consistent, although on the low-mass end, with recent estimates of a Gaia Sausage/Enceladus-type merger for the Milky Way\footnote{As discussed in \citealt{Monachesietal2019}, we caution that the Auriga haloes are in general more massive than the Milky Way halo, so the limit on the most massive accreted system in the Milky Way will likely be lower. We also point out that, as shown in \citealt{Monachesietal2019}, 40\% of the halo mass comes from the Gaia Enceladus-like progenitor, however a total of 14 satellites make up the entire halo of this model.} (e.g. \citealt{Haywoodetal2018,Helmietal2018,Belokurovetal2019,Deasonetal2019}; see also \citealt{Bignoneetal2019}). These Enceladus-like mergers in Au17 and Au18 also result in stars in the inner halo being on radial orbits \citep{Fattahietal2019}.
On the other hand, Au13, Au23 and Au26 have significant ($f_{\star}>0.1$) mergers occurring in the last 12\,Gyrs. Halo Au26 undergoes a massive 1:2 merger at $t_{\rm lookback}\sim9\, \rm Gyrs$ which creates a dispersion-dominated, spheroidal bulge for the metal-poor, old populations (see Figures \ref{fig:morph_edgeon_abund} \& \ref{fig:xy_edgeon_ages_all}). It is worth noting that the metal-poor population was already mostly formed at the time of the merger (see the green line in the bottom right panel of Figure \ref{fig:morph_edgeon_abund}) and thus the merger disrupts this in-situ, old and metal-poor component, creating a dispersion-dominated spheroid, while the new stars born during this merger are also on non-circular orbits (see also \citealt{Grandetal2020} for a discussion on the effects of an Enceladus-like merger on the disc and halo populations). Haloes Au13 and Au23 also have metal-poor populations with lower net rotation at $z=0$ (first and fourth columns of the middle panels of Figure \ref{fig:morph_edgeon_abund}), as they undergo significant mergers in their recent past -- Au13 undergoes a $f_{\star}=0.15$ merger at $t_{\rm lookback}=7\,\rm Gyrs$, while Au23 undergoes a $f_{\star}=0.25$ merger at $t_{\rm lookback}=10\,\rm Gyrs$.
We therefore find that there is an upper limit on how massive mergers can be (since $t_{\rm lookback}\sim12\,\rm Gyrs$) while still maintaining a rotationally supported metal-poor component in the inner regions of disc galaxies. This behaviour is summarised in Figure \ref{fig:formrelation}, where we plot the normalised difference in rotation velocity at $z=0$ between the metal-rich, intermediate and metal-poor populations at the effective radius, $\delta V = (V_{\rm MR} - V_{\rm INT})/V_{\rm MR} + (V_{\rm MR} - V_{\rm MP})/V_{\rm MR}$, versus the impact of mergers, which we here define as the sum of the stellar mass ratios of mergers (for $M_{\star,s}\geq10^7\rm M_{\odot}$), each normalised by $(1+z)^2$ of the merger, i.e. the merger impact is given by $\sum_{\rm i} f_{\rm i}/(1 + z_{\rm i})^2$. For the five galaxies explored in this study, denoted by the large symbols (see the legend), there is a trend whereby the larger the merger impact, the larger the difference in rotation between the stellar populations, $\delta V$. We also include in the figure the rest of the Auriga galaxies (small circles) for the entire parent sample of 30 (\citealt{Grandetal2017}; excluding two Auriga haloes which are undergoing a merger at $z=0$). These follow a similar trend, albeit with considerable scatter. We colour-code the symbols by the sum of the cosines of the angles between the angular momentum vector of the disc of the main galaxy and that of the orbital angular momentum of each merging subhalo, weighted by the impact of each merger; this indicates how prograde or retrograde the mergers are. We see that some of the scatter in the relation comes from the fact that galaxies which undergo more prograde mergers can have a relatively high merger impact while having low $\delta V$, while galaxies undergoing retrograde mergers will have higher $\delta V$ for a smaller merger impact. We note that haloes Au17 and Au18, which are the most Milky Way-like haloes in terms of the overall morphology and chemodynamics of their bulge stellar populations, have the most quiescent merger histories in the entire Auriga sample. This highlights the importance of a quiet merger history for forming b/p bulges which have chemo-kinematic properties similar to the Milky Way bulge.
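The two quantities plotted in Figure \ref{fig:formrelation} follow directly from the definitions above; a sketch of their computation (function names ours) is:
\begin{verbatim}
import numpy as np

def delta_v(v_mr, v_int, v_mp):
    """Normalised rotation difference between the metallicity components at R_eff."""
    return (v_mr - v_int) / v_mr + (v_mr - v_mp) / v_mr

def merger_impact(f_star, z_merger, m_star_sat, m_min=1e7):
    """Sum of satellite-to-host stellar mass ratios, each weighted by (1+z)^-2.

    f_star     : stellar mass ratios M_sat / M_host at infall
    z_merger   : merger redshifts
    m_star_sat : satellite stellar masses [Msun]; only those above m_min count
    """
    f_star, z_merger, m_star_sat = map(np.asarray, (f_star, z_merger, m_star_sat))
    keep = m_star_sat >= m_min
    return np.sum(f_star[keep] / (1.0 + z_merger[keep]) ** 2)
\end{verbatim}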
\subsection{Ex-situ fraction of stars}
We now explore the amount of ex-situ stars in the b/p bulges in Auriga, and how this relates to their chemodynamical properties.
As shown in \citetalias{Gargiuloetal2019}, Auriga bulges have a range of ex-situ fractions, from $<1\%$ to 42\%, with many of the bulges forming mostly in-situ: 21\% of the Auriga galaxies have ex-situ fractions below 1\%.
In the top left panel of Figure \ref{fig:fr_accr} we show the star formation histories in the b/p bulge region (i.e. $R<4\,\rm kpc$ and $|z|<2\,\rm kpc$) for the accreted (red) and in-situ (blue) populations, and in the top right panel we list the fraction of accreted to in-situ stars inside the b/p's. Most stars present in the central regions of our sample of galaxies are formed in-situ, with almost all ex-situ material being accreted at early times, before $z\sim1$ (see also \citealt{Bucketal2019} who similarly found low ex-situ fractions in the central regions of their cosmological model). Therefore the accreted stars in the b/p bulges of our models are subdominant in all haloes, less than 1\% for Au17 and Au18, of the order of a few percent for Au13 and Au23, with the highest fraction of ex-situ stars being found in Au26 which has 9\% of ex-situ stars.
We therefore find that the fraction of ex-situ stars is also linked to the kinematics of the metal-poor populations of the bulge; Au17 and Au18 (which have the smallest ex-situ fractions) have metal-poor populations which rotate fast, contrary to Au13, Au23 and Au26, which have higher ex-situ fractions and more slowly rotating metal-poor populations.
As expected of course, the fraction of ex-situ stars increases as we consider lower metallicity ranges, as shown in the bottom left panel of Figure \ref{fig:fr_accr}. Haloes Au17 and Au18 -- which have similar kinematics to the MW -- have only a few percent (1.7 and 6.4\% respectively) of ex-situ stars even in the metal-poor population of -1<[Fe/H]<-0.5, while Au26 has almost 60\% of the metal-poor population in the ex-situ component (and see also \citealt{Monachesietal2019} for the fraction of accreted material in the haloes of these galaxies). If we consider stars with [Fe/H]$<$-1 (bottom right panel of Figure \ref{fig:fr_accr}), the fraction of accreted stars increases substantially, with a maximum of 80\% (for Au26) while it can still be quite low, for example 13\% for Au17, which has a very quiescent merger history, and $\sim$38\% for our fiducial Milky Way model, Au18. Therefore, in order to detect the ex-situ population of stars in the MW b/p bulge we will likely have to probe the extremely metal-poor tail of the inner regions (see e.g. \citealt{Starkenburgetal2017,Arentsenetal2019} and Figure \ref{fig:vlos_sigmalos_all_append} where we show the kinematics of the [Fe/H]<-1 population in Auriga).
We also find that the low fraction of ex-situ stars is a generic property of b/p bulges, compared to other bulges\footnote{For a discussion of all bulges in Auriga see also \citetalias{Gargiuloetal2019}.}. This is shown in Figure \ref{fig:fr_withwithoutbp}, where we consider the fraction of ex-situ stars in this sample (bp), compared to haloes in Auriga which have bars but do not form b/ps (nobp), and those that do not form bars at all by $z=0$ (nobar). As previously, we consider as inner regions those inside $r<4\,\rm kpc$ and $|z|<2\,\rm kpc$. We find that haloes with bars that form b/p's tend to have the smallest fraction of ex-situ stars, compared to those which do not form b/p's, while haloes without bars have the highest fraction of ex-situ stars in their bulge region. This is due to the intimate connection between the presence of a dispersion dominated component in the central regions of disc galaxies, and the formation of bars and b/p's (e.g. \citealt{Athanassoula2005}) and highlights the importance of relatively quiescent merger histories for the formation of b/p bulges.
\section{Effect of the bar on the disc}
\label{sec:bareffectdisc}
In this Section we explore the effects of bars on their host discs, focusing in Section \ref{sec:effectring} on the effects of the bar on the inner disc, via the formation of so-called `inner rings', and in Section \ref{sec:phasespace} on their effects on the outer disc, via the formation of ridges in phase space.
\subsection{Effect of the bar on the inner disc: Formation of inner rings}
\label{sec:effectring}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{plots/SFH_all_withratio_forpap_new_nodisc.pdf}
\caption{\emph{Top row:} Star formation history for the five haloes in our sample, inside the corotation radius (blue), inside the bar (light blue) and the difference between the two (i.e. CR-bar; light green). When there is no inner ring (i.e. Au13, Au17 and Au26), the (CR-bar) region corresponds to the so-called star formation desert, while in the presence of an inner ring it will correspond to the star formation inside the inner ring. \emph{Bottom row:} The SFR of the (CR-bar) region over the SFR of the bar region. In all panels the vertical dot-dashed line corresponds to the formation time of the bar. We see that in the cases without an inner ring the ratio stays below 1, while in the case where an inner ring forms, the ratio increases once the bar and inner ring are in place. For Au18 this corresponds to $t_{\rm lb}\sim7\, \rm Gyr$ and for Au23 to $t_{\rm lb}\sim9\, \rm Gyr$. This method could be used to age date the formation of the bar in the presence of an inner ring.}
\label{fig:sfhring}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=0.2\textwidth]{plots/vphir_onlyrho_zcut05_forpap.pdf}
\caption{Logarithmic density in the $V_{\phi}-r$ plane for all haloes in the study. The dashed line indicates the constant angular momentum of a circular orbit at the OLR radius. We see that in all cases where the disc extends beyond the OLR (i.e. all apart from Au13) the longest and most prominent ridge corresponds to the OLR.}
\label{fig:vphir_all}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{plots/vphir_onlyAu18_agefehalpha_zcut05_forpap.pdf}
\caption{The $V_{\phi}-r$ plane for fiducial model Au18 with mean mass-weighted age (left), metallicity (middle) and $\alpha$-abundance (right) colour-coded. The dashed line indicates the angular momentum of a circular orbit at the OLR radius. We see that the ridges have younger, more metal-rich and more $\alpha$-poor stars on average than surrounding regions of phase-space.}
\label{fig:vphir_au18_agefe}
\end{figure*}
By examining the age and abundance maps of the galaxies in our sample in Section \ref{sec:ageabund}, we see that in Au18 and Au23 the bar affects the age, metallicity and $\alpha$-abundance distribution of the disc with the formation of a prominent inner ring.
The formation mechanism of inner rings is still under debate; however, they have classically been associated with the locations of resonances in barred galaxies (e.g. \citealt{Schwarz1981,ButaCombes1996}). A more recent theory proposed to explain the presence and morphology of inner and outer rings in barred galaxies is the invariant manifold theory (\citealt{RomeroGomezetal2006,AthanassoulaRomeroGomez2009}). Invariant manifolds can be thought of as `tubes' which emanate from the ends of the bar, guiding stars and gas along particular orbits and transporting material from outside to inside corotation and vice versa.
Such metal-rich and star-forming inner rings could partially explain the metal-rich ``inner disc'' of the Milky Way recently reported using APOGEE and Gaia DR2 data \citep{Bovyetal2019}. Firstly, the metal-poor inner regions of the bar and the positive metallicity gradient with radius reported in \citet{Bovyetal2019} can be partially explained by the interplay between the metal-poor thick disc and the metal-rich thin disc of the Milky Way: in the innermost regions the metal-poor thick disc dominates, as it has a shorter scale-length, while at larger radii the metal-rich thin disc starts to dominate. This was explored in the model presented in \citet{Fragkoudietal2018}, where it was shown (see their Figure 13) that the face-on metallicity maps of a composite thin+thick disc model produce a low metallicity innermost region, with metallicity increasing as a function of radius (note that this model was constructed to reproduce observations of the bulge of the Milky Way, and so serves as a verified prediction for the relation shown in \citealt{Bovyetal2019}). However, in the thin+thick disc model of \citet{Fragkoudietal2018}, the metallicity of the inner disc does not reach the highest values (0.2\,dex) found in \citet{Bovyetal2019} around the bar. Reaching these high metallicities could, however, be naturally achieved with the addition of a star forming inner ring due to the bar. As we show below, ongoing star formation in these inner rings can drive them to higher metallicities. Indeed, our Galaxy is thought to host a gaseous inner ring, observed in both HI and CO as the near and far 3\,kpc arms \citep{vanWoerdenetal1957,DameThaddeus2008}.
In Figure \ref{fig:sfhring} we show the star formation histories inside corotation for the five haloes in different regions: the blue line indicates the star formation rate inside the entire corotation region, excluding the inner kpc where the nuclear disc (or pseudo-bulge) is. In the light blue line we show the star formation rate in the bar (excluding the inner 1\,kpc) and in light green we show the star formation rate inside corotation minus that inside the bar. In the absence of an inner ring this region (CR - bar) will correspond to the so-called star formation desert (see e.g. \citealt{Jamesetal2009,DonohoeKeyesetal2019}), while in the presence of an inner ring this region will correspond to star formation inside the ring.
The fact that star formation in the inner ring (which is inside corotation) continues, while star formation in the bar is quenched (see the green lines in the top panels of Figure \ref{fig:sfhring} for Au18 and Au23), suggests that gas inside corotation is constantly being replenished. This implies that gas is being transported from outside corotation, a mechanism that can be explained in the framework of invariant manifolds, but not in the framework of resonance-built rings. This constantly renewed supply of gas sustains star formation in the ring for extended periods of time, even after the bar pushes all the gas to the centre and quenches star formation in the bar region.
As the inner ring forms due to the bar, its star formation history could help reveal the formation time of the bar (similarly to how nuclear rings and discs can serve to age-date the bar -- see \citealt{Gadottietal2015,Gadottietal2019} and \citealt{BabaKawata2019}).
We see that, as expected, in the two cases with a prominent inner ring, the residual star formation in CR-bar is much higher than in those without an inner ring. Since the inner rings form after the bar, the star formation history of the ring could provide a method for determining the formation time of the bar. We explore this in the bottom panels of Figure \ref{fig:sfhring}, where we show the SFR inside corotation minus that inside the bar (i.e. CR-bar) divided by the SFR in the bar. In these panels the formation of the bar is marked by the vertical dot-dashed line. In the galaxies without a ring this ratio is low, since almost all star formation which happens inside corotation takes place inside the bar region. However, when an inner ring forms, this ratio increases above one and continues to rise: while star formation quenches in the bar due to gas depletion, it is still ongoing in the ring (because, as discussed above, manifolds can transport gas from outside corotation into the ring). While in the bottom panels of Figure \ref{fig:sfhring} the sudden increase in this ratio does not mark the exact time of bar formation, it does provide a lower limit to the age of the bar.
\subsection{Effect of the bar on disc phase-space}
\label{sec:phasespace}
The second Gaia data release \citep{GaiaCollaboration2018} recently revealed a plethora of complex substructure in phase-space in the disc of the Milky Way, with one particularly prominent feature being the ridges observed in the space of tangential velocity, $V_{\phi}$ versus galactocentric radius $r$ \citep{Kawataetal2018,Antojaetal2018} and the correlation of these ridges with undulations in radial velocity $V_r$ \citep{Fragkoudietal2019}.
In \cite{Fragkoudietal2019} we showed, using an N-body simulation of a Milky Way-type galaxy, that these ridges and undulations can be the product of bar-induced resonances, and specifically that the OLR will create the largest ridge observed in this plane, with a prominent inwards and outwards moving $V_r$ component associated with it. This occurs due to the underlying resonant orbital structure at the OLR, where there are overlapping anti-aligned $x_1(1)$ and $x_1(2)$ orbits (see \citealt{Dehnen2000} and Figure \ref{fig:orbsAu18} for examples of these types of orbits in our fiducial model Au18).
Here we explore the effects of the bar on the disc kinematics in our sample of Auriga galaxies, specifically on the $V_{\phi}-r$ plane, as well as the relation between the kinematic signatures of the OLR and ages and chemical abundances.
In Figure \ref{fig:vphir_all} we show logarithmic density plots of the $V_{\phi}-r$ plane in the discs of the five models explored in this study. To construct the plots we select all stars within 0.5\,kpc from the plane of the galaxy. We see that there are a number of ridges present in this plane in all cases, with one particularly prominent ridge present in most cases. The dashed line in each panel corresponds to the angular momentum of a circular orbit at the OLR radius. This line is associated to OLR resonant stars, as shown in Figure \ref{fig:resonancesAu18} where we carry out a spectral orbital analysis of stars in our fiducial model, Au18. We see that in all cases the OLR resonance is associated with the largest ridge in the $V_{\phi}-r$ plane, with the exception of Au13; in this case, the disc of the galaxy ends at around the OLR radius (this can be seen also by examining the top row of Figure \ref{fig:rgball}). We therefore see that the longest ridge in the $V_{\phi}-r$ plane being associated to the OLR is a generic feature apparent in a number of models, and can therefore be used as an independent method for estimating the location of the bar OLR, both for the Milky Way, as well as for external galaxies.
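For reference, the locus of this dashed line can be written down explicitly: stars which share the angular momentum of a circular orbit at the OLR radius lie on the curve (here $V_{\rm c}$ denotes the circular velocity, a symbol used only in this expression)
\[
V_{\phi}(r)=\frac{L_{\rm OLR}}{r},\qquad L_{\rm OLR}=R_{\rm OLR}\,V_{\rm c}(R_{\rm OLR}),
\]
i.e. a hyperbola in the $V_{\phi}-r$ plane.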
In Figure \ref{fig:vphir_au18_agefe} we show the $V_{\phi}-r$ plane of Au18 with mass-weighted mean age (left), metallicity (middle) and $\alpha$-abundance (right) colour-coded (see Figure \ref{fig:vphiall} for all models). As can be seen in the Figure, all ridges, and especially the OLR ridge which is the most prominent, have on average younger stars associated with them. This could be due to the fact that colder populations are most affected by the resonances caused by the bar, or due to preferential ongoing star formation in these regions. Correspondingly, the ridge region also has on average higher metallicity [Fe/H] and lower mean alpha-abundances [$\alpha$/Fe]. Therefore, we find that the ridges in density are also apparent as ridges in age, metallicity and $\alpha$-abundance space. This has been shown to be the case also for the ridge structure of the Milky Way (see \citealt{Khannaetal2019} who combined the kinematic information on the ridges from Gaia DR2 with information on chemistry from the GALAH survey). There is therefore a plethora of information to be distilled by combining kinematics with chemistry in order to disentangle the origin of the different ridges seen in phase-space in the Milky Way, and we will explore this in more detail in upcoming work.
\section{Discussion}
\label{sec:discussion}
\subsection{The metal-poor vs metal-rich Milky Way bar/bulge}
In the Auriga simulations we find that bars and b/p's are predominantly metal-rich (see Figures \ref{fig:xyall_ages} and \ref{fig:xy_edgeon_feh_all}). This is in agreement with IFU spectroscopic studies of the inner regions of external galaxies, which find that a number of bars and b/p's are rather metal-rich, or as metal-rich as their surrounding disc (see e.g. \citealt{Pinnaetal2019,Gonzalezetal2017,Gadottietal2019,Neumannetal2020}).
In this section we discuss these findings in the context of the inner few kpc of the Milky Way, which, as we will discuss below, appear to be rather metal-poor (including the bar/bulge region).
A number of surveys (e.g. GIBS, GES, APOGEE) which have explored the bulge of the Milky Way (i.e. inside $|l,b|<10\,\rm deg$) find that it has a significant metal-poor component at all latitudes, which leads to the Milky Way bulge being on average metal-poor, i.e. [Fe/H]<0 (e.g. see \citealt{Nessetal2013a,NessFreeman2016,Zoccalietal2017,RojasArriagadaetal2017,Fragkoudietal2018}). Furthermore, as discussed in previous sections, recently \citet{Bovyetal2019}, combining APOGEE DR16 with Gaia DR2 data with a machine-learning algorithm (see \citealt{Leungetal2019a,Leungetal2019b}) constructed `face-on' metallicity maps of the Milky Way. They find that the innermost regions of the Galaxy, i.e. the bar/bulge region, are metal-poor while the surrounding disc is more metal-rich. It is worth noting that the work of \citet{Bovyetal2019} is therefore consistent with the aforementioned surveys on the bulge of the Milky Way -- i.e. they all find that the inner regions of the Milky Way are metal-poor (and see also the discussion in Section \ref{sec:effectring}). Therefore, it seems that the Milky Way might be somewhat of an outlier with respect to other nearby barred galaxies, which tend to have metal-rich bar/bulge regions.
On the other hand, \citet{Weggetal2019} recently found that metal-rich stars in the bar region of the MW are on more elongated orbits, compared to metal-poor stars which are on rounder, more disc-like orbits. This behaviour was predicted in studies of discs with multiple stellar populations, in which `kinematic fractionation' occurs, i.e. metal-rich stars should be on more elongated orbits (e.g. \citealt{Debattistaetal2017,Fragkoudietal2017b}). This behaviour can also be explained with star formation occurring after the bar forms, which will preferentially place new stars on bar-like orbits. Based on this finding, and on a spatial separation of stars in the bar vs the inner disc, \citet{Weggetal2019} conclude that their findings are in tension with those of \citet{Bovyetal2019}.
It is worth noting that kinematic `fractionation' in itself is not in tension with a metal-poor bar-b/p region. As shown in \citet{Fragkoudietal2018}, in Section 5 (see specifically Figure 13), the overall metallicity of the inner bar/bulge region can be low if the metal-poor thick disc dominates at these radii (i.e. there will be a higher density of metal-poor stars in that region). This will occur if the thick disc is relatively massive, and has a shorter scale-length than the metal-rich thin disc, as is thought to be the case for the Milky Way (e.g. \citealt{Bensbyetal2011}). Since the model of \citet{Fragkoudietal2018} contains the effects of `kinematic fractionation', we see that this is not inconsistent with a metal-poor inner region inside the bar. Furthermore, as seen in Figure 13 of \citet{Fragkoudietal2018}, the end of the bar will be more metal-rich than the innermost region, a trend which is also found in \citet{Bovyetal2019} and consistent with the findings of \citet{Weggetal2019}.
Therefore, there seem to be two scenarios that can explain the apparently metal-poor bar/bulge of the Milky Way: the first one is that the Galaxy has a metal-poor inner region on average\footnote{Although see also \citealt{Schultheisetal2019} who find that the innermost degree of the MW -- which contains the nuclear star cluster -- is metal-rich.}, and therefore is perhaps an outlier compared to other local barred spiral galaxies (but see also \citealt{Zhuangetal2019}, who find that late-type spirals in the CALIFA survey can have positive metallicity gradients). In this case, the metal-rich inner disc of the galaxy can be explained via a thin+thick disc scenario with different scale-lengths \citep{Fragkoudietal2018} in combination with a metal-rich inner ring formed due to the presence of the bar (as shown in Section \ref{sec:effectring}). The second scenario is that the Milky Way has a bar/bulge region which is more metal-rich than the inner disc, but, due to some selection effects, a significant fraction of metal-rich stars are missing from the aforementioned surveys. In either case, we reiterate that having a metal-poor bar/bulge region is in fact consistent with having metal-rich stars on more elongated orbits compared to the metal-poor ones, since what sets the \emph{overall} metallicity of a region is the local density of metal-rich vs metal-poor stars, which is set by a combination of the mass and scale-lengths of these stellar populations.
\section{Conclusions}
\label{sec:summary}
In this paper, the first in a series exploring the properties of barred galaxies in the Auriga magneto-hydrodynamical cosmological zoom-in simulations, we focus on the Auriga galaxies which form prominent boxy/peanut (b/p) bulges by $z=0$. We explore their chemodynamical properties, comparing these to the properties of the Milky Way bar and bulge, thus allowing us to place constraints on the formation history of the Galaxy. We also examine the effects of bars on the inner and outer disc of their host galaxy, exploring how they redistribute stars and gas in inner rings and phase-space ridges. Our results are as follows:
\begin{itemize}
\item {\bf{\emph{Statistical properties:}}} We find that the Auriga suite of simulations reproduces well the fraction of barred galaxies as a function of redshift, as well as the properties of bars at $z=0$ as compared to observations (see also \citetalias{BlazquezCaleroetal2020}). The b/p's have a range of formation times, from 1 to 8\,Gyrs ago and can undergo multiple buckling phases, however their fraction at $z=0$ is lower than that found in observations (see Section \ref{sec:bpsample}).
\item {\textbf{\emph{Ages and abundances:}}} The face-on and edge-on distributions of ages and abundances are significantly affected by the bar and b/p, which redistribute stars according to the kinematic properties of the underlying stellar population (see e.g. \citealt{Fragkoudietal2017b,Athanassoulaetal2017} -- this process was dubbed `kinematic fractionation' in \citealt{Debattistaetal2017}.) This leads to age and abundance gradients along the bar minor axis, in which younger stars cluster at the ends of the bar and along its major axis (see also \citealt{Neumannetal2020}). Also, the b/p's show an X-shaped age and abundance distribution, in which younger and more metal-rich stars trace the shape of the peanut. All the b/p's in our sample contain a significant fraction of stars younger than 5\,Gyrs, $\sim$30\% for our fiducial Milky Way model, Au18 (see Section \ref{sec:ageabund}).
\item {\textbf{\emph{Chemo-morphological properties:}}} Stellar populations in the bar and b/p show signs of `kinematic fractionation', i.e. younger and more metal-rich populations are trapped on more elongated bar-like orbits in the face-on projection, and have more prominent peanut shapes in their edge-on projection. This is a consequence of the amount of angular momentum lost by stellar populations with different kinematic properties, i.e. younger/colder stellar populations which are present in the disc before the bar forms lose more angular momentum compared to older/hotter populations. Populations born inside the bar region after bar formation do not have as much angular momentum to lose to begin with, because stars are born on elongated bar-like orbits (see Section \ref{sec:chemorph}).
\item {\textbf{\emph{Global kinematic properties:}}} When viewed edge-on the b/p's in our sample exhibit cylindrical rotation as well as X-shaped dispersion profiles (i.e. the peanut region has lower velocity dispersion). When viewed face-on, asymmetric b/p's (i.e. bars which are currently buckling) display a butterfly pattern in $V_z$, confirming the results of \citealt{Lokas2019}. This kinematic signature of buckling bars can help identify asymmetric b/p's which are viewed face-on (see Section \ref{sec:kin1} and Appendix \ref{sec:Appendixvels}).
\item {\textbf{\emph{Chemo-kinematics \& formation history:}}} We compare the chemo-kinematic properties of stellar populations in different metallicity bins in our b/p's to those of the Milky Way bulge (with the ARGOS survey -- \citealt{Nessetal2013b}). The haloes which best reproduce the kinematics of the Milky Way bulge, i.e. which have a rotating metal-poor component with a flat velocity dispersion profile, are Au17 \& Au18. These galaxies have the most quiescent merger histories of the entire Auriga sample of galaxies; their last major merger, i.e. with stellar mass ratio $f_{\star}>0.25$, occurs $> 12\, \rm Gyrs$ ago, and all subsequent mergers are $f_{\star}<0.05$ prograde or radial mergers. This suggests a stellar mass ratio of $\sim 1:20$ for the recently proposed Gaia Sausage/Enceladus merger (see e.g. \citealt{Haywoodetal2018,Helmietal2018,DiMatteoetal2018,Belokurovetal2019} and see Sections \ref{sec:kin2} and \ref{sec:SFH}).
\item{\textbf{\emph{Relation between morphology and kinematics:}}} The models which best reproduce the chemo-kinematics of the Milky Way bulge (Au17 \& Au18), i.e. which have fast rotating metal-poor components, also have metal-poor components with flattened density distributions in their edge-on projection, compatible with a thick disc distribution (see Sections \ref{sec:chemorph} and \ref{sec:kin2}).
\item {\textbf{\emph{Formation histories \& ex-situ fractions:}}} While the galaxies in our sample have diverse formation histories they all have low fractions of ex-situ material in their central regions (and see also \citetalias{Gargiuloetal2019}). The haloes with the most violent merger histories have larger ex-situ fractions (10\% for Au26) while those with the most quiescent merger histories (Au17 \& Au18) have ex-situ fractions of less than 1\%. Therefore, the two most MW-like b/p's, Au17 \& Au18, are essentially entirely made of in-situ stars. When considering only stellar populations with [Fe/H]<-1, the ex-situ fraction increases, but is still rather low for the two MW-like b/p's -- 13\% for Au17 and 37\% for Au18. The mean ex-situ fraction of stars for all Auriga galaxies with b/p's is 3\%, while for Auriga galaxies which do not form a b/p or a bar it is 11\% and 18\% respectively -- i.e. galaxies with b/p bulges tend to have lower ex-situ fractions overall, compared to galaxies without b/p's and without bars (see Section \ref{sec:SFH}).
\item {\textbf{\emph{Inner rings:}}} Au18 and Au23 form an inner ring around the bar, which is star-forming and metal-rich. Such an inner ring, in combination with a thin+thick disc scenario, could explain the very metal-rich `inner disc' recently reported for the Milky Way \citep{Bovyetal2019}. Indeed the Milky Way is thought to harbour a gaseous inner ring, identified as the near and far 3\,kpc arms \citep{vanWoerdenetal1957,DameThaddeus2008}. The inner rings in our models show indications of being formed due to invariant manifolds, where gas is transported from outside to inside corotation. As inner rings form after bars, their star formation histories can help obtain a lower limit on the age of the bar (see Sections \ref{sec:bareffectdisc} and \ref{sec:discussion}).
\item {\textbf{\emph{Effect of the bar on phase-space in the disc:}}} In all the haloes in our sample the longest ridge in the $V_{\phi}-r$ plane is related to the bar OLR resonance (confirming the results of \citealt{Fragkoudietal2019}). This could provide an independent method for determining the bar pattern speed both in the Milky Way and in external galaxies. We also find that the ridges in this plane are associated with higher metallicities, lower alpha-abundances and younger ages, compared to the surrounding disc phase-space (see Sections \ref{sec:phasespace} and Appendix \ref{sec:appendixB}).
\end{itemize}
To summarise, we find that the models in our sample which best match the properties of the Milky Way bulge have an in-situ origin (with <1\% of stars in the bulge formed ex-situ). Their metal-poor (-1<[Fe/H]<-0.5) populations rotate almost as fast as the more metal-rich populations ([Fe/H]>-0.5), and have flattened morphologies, compatible with a thick disc. This is in agreement with recent chemodynamical studies carried out using tailored, isolated N-body simulations, in which the bulge of the Milky Way is composed of thin and thick disc populations (see e.g. \citealt{DiMatteo2016,Debattistaetal2017,Fragkoudietal2017b,Fragkoudietal2017c,Fragkoudietal2018}).
Furthermore, contrasting the chemodynamical properties of the b/p's in our models with those of the Milky Way allows us to place constraints on the merger history of the Galaxy, including on the recently proposed Gaia Sausage/Enceladus merger \citep{Belokurovetal2018,Haywoodetal2018b,Helmietal2018}.
One of our best-fitting models, Au18, experienced its last significant merger at $t_{\rm lookback}=9\,\rm Gyrs$, with the merging progenitor having a stellar mass ratio of 1:20 -- broadly in agreement with recent estimates of the Gaia Sausage/Enceladus merger (e.g. \citealt{Helmietal2018,DiMatteoetal2018,Belokurovetal2019}).
While our study does not involve an exploration of the full parameter space of merger times, mass ratios and orbital configurations, our results point to the Galaxy's largely quiescent merger history, where the last major merger ($f_{\star}\geq0.25$) took place at $z\geq3.5$, with only prograde or radial mergers with stellar mass ratio $f_{\star}\leq0.05$ occurring since $t_{\rm lookback}\sim12\,\rm Gyr$; more recent massive mergers would disturb the rotationally supported kinematics of the metal-poor populations in the bulge, thus preventing the Milky Way bulge's chemodynamical properties from being reproduced.
We therefore see that with a diverse sample of Milky Way-type galaxies formed in the full cosmological context we can disentangle the effects of different formation mechanisms on the chemodynamical properties of bars and b/p bulges, shedding light on the formation history of the Galaxy and the origin of its bulge.
\section*{Acknowledgements}
FF thanks Wilma Trick, Paola Di Matteo, Dimitri Gadotti and Misha Haywood for comments on earlier versions of the manuscript which greatly improved its clarity, and for many interesting discussions. The authors thank the anonymous referee for a constructive report. The authors thank David Campbell and Adrian Jenkins for generating the initial conditions and selecting the sample of the Auriga galaxies, and Paola Di Matteo for the isolated N-body simulations used in Appendix B.
FM acknowledges support through the Program ``Rita Levi Montalcini'' of the Italian MIUR.
I.G. acknowledges financial support from CONICYT Programa Astronom\'{i}a, Fondo ALMA-CONICYT 2017 31170048.
AM acknowledges support from CONICYT FONDECYT Regular grant 1181797.
FAG acknowledges financial support from CONICYT through the project FONDECYT Regular Nr. 1181264. FAG, AM and IG acknowledge funding from the Max Planck Society through a Partner Group grant.
This project was developed in part at the 2019 Santa Barbara Gaia Sprint, hosted by the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. This research was supported in part at KITP by the Heising-Simons Foundation and the National Science Foundation under Grant No. NSF PHY-1748958.
\bibliographystyle{mnras}
\section{Introduction and statement of results.}
\ \ \ \ Fractal properties of subsets of the integer grid in $\mathbb{R}^{d}$ have been studied earlier and notions of dimensions of such subsets have been introduced in different contexts by Fisher \cite{Fisher}, Bedford and Fisher \cite{Bedford}, Lima and Moreira \cite{Moreira}, Naudts \cite{Naudts1, Naudts2}, Furstenberg \cite{Furstenberg}, Barlow and Taylor \cite{Barlow1,Barlow2}, Iosevich, Rudnev and Uriarte-Tuero \cite{Iosevich}, and Glasscock \cite{DG}. The analogies have been drawn from the continuous theory of dimension, for which Chapter 4 of \cite{Mattila} or Chapter 1 of \cite{Bishop} are standard references.
The mass and counting dimensions of any $1$-separated set $E\subset \mathbb{R}^2$ are respectively defined as:
\begin{equation}
\overline{D}(E)=\limsup\limits_{l\to \infty}\frac{\log|E\cap [-l,l]^{2}|}{\log(2l)}, \ D(E)=\limsup\limits_{||C||\to \infty}\frac{\log|E\cap C|}{\log||C||}.
\end{equation}
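The two notions can differ markedly; as a simple illustration, consider the $1$-separated set
\[
E=\bigcup_{k\geq 1}\Big(\{2^{2^{k}},2^{2^{k}}+1,\dots,2^{2^{k}}+k\}\times\{0\}\Big)\subset \mathbb{Z}^{2}.
\]
A square of side $k$ covering the $k$'th block contains $k+1$ points of $E$, so $D(E)=1$, while $|E\cap [-l,l]^{2}|$ grows only like a power of $\log\log l$, so $\overline{D}(E)=0$: the counting dimension registers growth of points at very sparse locations, which the mass dimension does not.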
In \cite{Moreira}, the counting dimension was used in $\mathbb{Z}\subset \mathbb{R}$ to study the growth of certain subsets of $\mathbb{Z}$ with zero upper Banach density. A natural Marstrand type projection theorem is proven there, with the counting dimension behaving like the Hausdorff dimension in the statement of the classical Marstrand projection theorem; see Theorem 1.2 in \cite{Moreira}. Later this was extended by Glasscock \cite{DG} who used the more general notion of the mass and counting dimension, in $\mathbb{Z}^{d}\subset \mathbb{R}^{d}$, and proved analogous projection theorems with the mass dimension as well.
The natural dual slicing statement with the mass dimension was recently shown to be true by the author \cite{Aritro}. When dealing with the slicing question with a $1$ separated set in $\mathbb{R}^2$, it is natural to work with a width 1 tube $t_{u,v}$ which is explicitly described as:
\begin{equation}
t_{u,v} = \left\{(x,y) \in \mathbb{R}^2 \ \middle| \ -\frac 1u x + v \sqrt{ 1 + \frac 1{u^2}} < y \leq -\frac 1u x + (v+1) \sqrt{ 1 + \frac 1{u^2}} \right\}.
\end{equation}
This is a tube of width 1 whose perpendicular projecting line has slope $u$, and the displacement of the right edge of the tube along this projecting line is $v$. Later, when working with lines, we will revert to the coordinates $(\tilde{u},\tilde{v})$, namely the slopes and $y$-intercepts; which parametrisation is in use will be clear from the context.
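As a quick check of the width: the two boundary lines are parallel with slope $-1/u$, and the distance between parallel lines $y=mx+c_{1}$ and $y=mx+c_{2}$ is $|c_{1}-c_{2}|/\sqrt{1+m^{2}}$, so that
\[
\frac{\Big|(v+1)\sqrt{1+\frac{1}{u^{2}}}-v\sqrt{1+\frac{1}{u^{2}}}\Big|}{\sqrt{1+\frac{1}{u^{2}}}}=1,
\]
confirming that $t_{u,v}$ has width exactly 1, with the projecting line perpendicular to its boundary having slope $u$.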
In the continuous case, the Marstrand slicing theorem is a standard result that talks of the dimension of a typical slice in the sense of the Lebesgue measure. When we consider Cartesian grids, for two multiplicatively independent numbers $p,q$ (i.e. with $\frac{\log p}{\log q}$ irrational), it was conjectured by Furstenberg that every slice of the set $A_{p}\times A_{q} \subset [0,1]^{2}$, where $A_{p},A_{q}$ are respectively $p$- and $q$-invariant, has Hausdorff dimension less than or equal to $\dim(A_{p})+\dim(A_{q})-1$, where $\dim$ refers to the Hausdorff dimension. This was resolved recently by Shmerkin \cite{Shmerkin} and Wu \cite{Wu}.
In \cite{Aritro}, for the mass dimension in our setting of $1$ separated subsets in $\mathbb{R}^{2}$, with a Tchebysheff and Fubini type argument the slicing statement was first shown to be true in a weak asymptotic sense, and then it was also shown to be true for Lebesgue almost every slice. One then specializes to sets $A,B \subset \mathbb{N}$ and considers the dimension of the intersection of the broken line $\{(x,y): y=\lfloor \tilde{u}x +\tilde{v} \rfloor, \tilde{u}>0 \}$ with the Cartesian product set $A \times B$. Such a broken line is a tube with vertical cross section of length 1, and thus the width of the tube is less than 1 and so the result follows for such tubes.
We state the main slicing result with the mass dimension that was obtained earlier in \cite{Aritro}.
\begin{theorem}\label{thm:strongerslicing3}
Let $E \subseteq \mathbb{R}^2$ be a $1$ separated set of mass dimension $\overline{D}(E)$. Then for all $v \in \mathbb{R}$, for Lebesgue-a.e. $u \in \mathbb{R}_+$,
\[\overline{D}(E \cap t_{u,v}) \leq \text{max} (0, \overline{D}(E)-1).\]
\end{theorem}
As a corollary to this, we obtain the main slicing result stated below, upon integrating over the $v$ coordinate.
\begin{theorem}\label{thm:slicing}
Let $E \subseteq \mathbb{R}^2$ be a $1$ separated set of mass dimension $\overline{D}(E)$. Then in the Lebesgue sense, for almost every tube $t_{u,v}$ of width $1$, slope $u$, and displacement $v$ along the projecting line, we have that $\overline{D}(E \cap t_{u,v}) \leq \text{max} (0, \overline{D}(E)-1)$.
\end{theorem}
In this paper we show that the corresponding results with the counting dimension are false. This is what one might expect, since the counting dimension of every slice can be high if there is a growth of points in every slice at very sparse locations. This can happen even if the actual set $E$ has low counting dimension.
We state our results for the counting dimension below:
\begin{theorem}\label{thm:strongerslicing1}
For any $\epsilon>0, u_{0}\in \mathbb{R}$, there exists $E \subseteq \mathbb{Z}^2$ of counting dimension $D(E)$, such that for all $v \in (-\epsilon, \epsilon)$,
\[D(E \cap t_{u_0,v})= D(E)=1.\]
\end{theorem}
\begin{theorem}\label{thm:strongerslicing2}
For any $\epsilon>0, v_{0}\in \mathbb{R}$, there exists $E \subseteq \mathbb{Z}^2$ of counting dimension $D(E)$, such that for all $u \in (-\epsilon, \epsilon)$,
\[D(E \cap t_{u,v_0})= D(E)=1.\]
\end{theorem}
Building on the construction in \cref{thm:strongerslicing2}, we construct the counterexample to the statement of the Marstrand slicing theorem with the counting dimension:
\begin{theorem}\label{thm:strongslicing}
For any $\epsilon>0, u_{0}\in \mathbb{R}$, there exists $E \subseteq \mathbb{Z}^2$ of counting dimension $D(E)$, such that for all $(u,v) \in (-\epsilon, \epsilon)\times (-\epsilon,\epsilon)$,
\[D(E \cap t_{u,v})= D(E)=1.\]
\end{theorem}
In other words, we can construct a set $E\subset \mathbb{Z}^{2}$ such that the slices of $E$ by the tubes parametrized by a positive Lebesgue measure set have exceptionally high counting dimension.
In fact, for any fixed $\epsilon$, the construction used in the proof of \cref{thm:strongslicing} also gives us the following stronger result:
\begin{theorem}\label{thm:weakreal}
\emph{There exists $E \subseteq \mathbb{N}^2$ such that the set $U$ of parameters $(u,v)$, with $u,v\in \mathbb{R}$, for which}
\[D \big(E \cap t_{(u,v)} \big) = D(E)=1 \] \emph{satisfies $\lim\limits_{M\to \infty} \frac{|U \cap [-M,M]^{2}|}{(2M)^{2}}>0$. }
\end{theorem}
An analogous version of \cref{thm:strongerslicing1} is also true for the mass dimension, cf. Example 2 in Section 3 of \cite{Aritro}.
In the next section, as an illustration we begin with a standard example that shows that the slicing result is sharp with the mass dimension, where we have a set so that every slice in a cone has mass dimension $\frac{1}{2}$ while the set so constructed has mass dimension $\frac{3}{2}$. This is a set where every slice in a cone has counting dimension 1, while the set itself has a growing subsequence of two dimensional grid of points and hence is itself of counting dimension 2. After that we construct a set where we have a growth of points in a diagonal sense, and while every slice in a cone still has counting dimension 1, we are able to reduce the counting dimension of the set itself to 1.
After this we show that the proof of \cref{thm:weakreal} essentially follows from the construction used in \cref{thm:strongslicing}. This is the strongest slicing statement we can make with the counting dimension. In \cite{Aritro} we show that the corresponding statement is false for the mass dimension, i.e. in a weak asymptotic sense the Marstrand type slicing theorem holds true with the mass dimension.
We remark that the analysis here again remains the same if in place of $\mathbb{Z}^{2} \subset \mathbb{R}^{2}$, we considered the grid $\delta\mathbb{Z}^{2}$ with separation $\delta$, and considered tubes of width $\delta$.
\section{Slicing results with the counting dimension.}
Consider a cone of arbitrarily small angular width $\theta$, centered around the $y-$axis, with vertex at the origin and pointing up. In this case, for some large $ k_0 >0$, \footnote{In order to construct a set with mass dimension 3/2, we chose the scale $2^{2^{k}}$ instead of $2^k$ since if we have a range of $k$ values from 1 to some $k_0$, the levels are so sparse that only the last level $k_{0}$ is relevant when counting the points for the mass dimension till height $2^{2^{k_0}}$. With the scaling $2^{k_0}$ we would need to add up all the points in all the lower levels as well.} for each $k\geq k_0$, we fill the annular region inside the cone between the heights $2^{2^{k+1}}$ and $2^{2^{k}}+2^{2^{k+1}}$ with all the points belonging to $\mathbb{Z}^{2}$ within the annular region. This example was already considered in \cite{Aritro}. This is a set of mass dimension $3/2$ with each of the tubes within the cone having mass dimension $1/2$. However this is also a set that has counting dimension 2: for all $k\geq k_0$, above the height $2^{2^{k+1}}$ we have a square of size $2^{2^{k}} \times 2^{2^{k}}$ that is filled with points of the integer grid. As $k$ is taken to infinity, this implies that we have a set of counting dimension 2. Moreover, each of the tubes has an intersection with $\sim 2^{2^{k}}$ many points just above the height $2^{2^{k+1}}$ and so, as $k$ is taken to infinity, each of the tubes is shown to have counting dimension 1.
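The arithmetic behind these mass dimension values, spelled out up to constants depending only on $\theta$, is the following:
\[
|E\cap[-l,l]^{2}|\sim \theta\, 2^{2^{k+1}}\cdot 2^{2^{k}}\sim\big(2^{2^{k}}\big)^{3},\qquad l\sim 2^{2^{k+1}}+2^{2^{k}}\sim\big(2^{2^{k}}\big)^{2},
\]
so that $\frac{\log|E\cap[-l,l]^{2}|}{\log(2l)}\to\frac{3}{2}$, while a width 1 tube inside the cone meets only $\sim 2^{2^{k}}$ of these points below height $l$, giving mass dimension $\frac{1}{2}$ for each slice.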
In order to construct a set $E$ whose slices by a cone of tubes have exceptionally high counting dimension, i.e. greater than $D(E)-1$, we have to ensure that the points of the set $E$ do not cluster together in growing two dimensional grids. In the examples below, we show how to construct a set $E$ where the growth of the points is along a diagonal in a cone, so that we cannot locate within this set any growing two dimensional grid of points. This is done while ensuring that each of the tubes in the cone has a growing sub-sequence of points, which ensures that it still has counting dimension 1. That would prove \cref{thm:strongerslicing2}.
\bigskip
Now we construct the set that proves \cref{thm:strongerslicing1}.
\begin{proof} [Proof of \cref{thm:strongerslicing1}]
Let $u_{0}$ be a specific slope of the projecting line, and consider a set $V$ of values of $v$, with $\mu(V)=wk$, where $w>1$ is a positive number and $k$ is any arbitrary positive integer and $\mu$ is the Lebesgue measure \footnote{We can clearly make this work for any $\epsilon$ so that the statement of the theorem is satisfied}. In this case, the set $E$ is contained within the semi-infinite tube of width $wk$ with one other side on the projecting line with values in the set $V$.
The idea here is similar to the one used in the previous problem, and we start with a chunk of the integer grid of width $w$, height $n_{1}$ placed at the bottom left corner of this semi-infinite tube. We place the next chunk of the same width and height on the top right corner of the previous chunk, and do this all the way till we put the last chunk adjacent to the right edge of this tube, at the height $H_{1}:=kn_{1}$.
Now we consider the height $h_{2}:=e^{H_{1}}$ and repeat the same process at this level with the chunks placed diagonally as before from the bottom left to the top right corner, with now the chunks of height $n_{2}:=n_1 +1$ and width $w$. Thus at the end we reach the height $H_{2}:=h_{2}+kn_{2}$. Inductively we repeat the process where at each step $H_{m}:=h_{m}+kn_{m}$, where $h_{m}:=e^{H_{m-1}}$ and $n_{m}:=n_{m-1}+1$. Like before, it is clear that this is a set where between the heights $h_{m}$ and $H_{m}$ we have, for the purpose of the counting dimension, effectively a straight chunk of width $w$ and length $kn_{m}\to \infty$ as $m\to \infty$. It is clear this set $E$ has counting dimension exactly 1, when taking boxes of lengths $kn_{m}\to \infty$ as $m\to \infty$ at the $m$'th level of the construction of the set $E$. Moreover, every single tube with $v$ parameter within $V$ has between $n_{m}$ and $2n_{m}$ many points at the $m$'th level, and thus this is also clearly a tube of counting dimension exactly equal to 1.
\end{proof}
\bigskip
In this case, it is clear that $H_{m}> \underbrace{e^{e^{\iddots }}}_{m \ \text{times}}$ and till that height we have $k(n_1 + (n_1 +1)+(n_1+2)+\dots+(n_1 +m))=\big(km(n_1 + \frac{m+1}{2}) \big)$ many points in E. Thus very clearly this set $E$ has mass dimension 0. It's similarly also clear that every width 1 tube with parameter $(u_{0},v)$, $v\in V$, has mass dimension 0.
\bigskip
Now we prove \cref{thm:strongerslicing2}.
\begin{proof}[Proof of \cref{thm:strongerslicing2}]
Without loss of generality, consider a cone with the vertex at the origin, and pointing up, symmetric about the y axis, and with total angle $\theta$. We parametrize the angles within the cone by the coordinate $u$ and the coordinate along the projecting line by the coordinate $v$. Here, all the tubes within the cone have the coordinate $v=0$, and we have an interval of $u$ values of width $\theta$.
We fix an integer width $w\geq1$, some initial height $h_1$, and some initial length $n_{1}\geq 1$, say. On the left edge of the cone, we put a chunk of the integer grid (or more generally any maximal $1$ separated set) of width $w$ and length $n_{1}$,\footnote{Since $n_1\geq 1, w\geq 1$ this chunk would contain at least one point.} just above the arc at height $h_{1}$. This subtends the angle $(-\frac{\theta}{2},-\frac{\theta}{2}+\frac{w}{h_{1}})$ at the origin, of width $\frac{w}{h_1}$.
At the next step, we put a chunk of width $w$ just above the arc at height $h_1 +n_1$ so this subtends the angle $(-\frac{\theta}{2}+\frac{w}{h_{1}},-\frac{\theta}{2}+ \frac{w}{h_1}+ \frac{w}{h_1+n_1})$ of width $\frac{w}{h_1+n_1}$. At the $k$'th step, we put a chunk of width $w$ at the height $h_1 +(k-1)n_1$ so that it subtends the angle $(-\frac{\theta}{2}+\sum\limits_{i=0}^{k-2} \frac{w}{h_{1}+i\cdot n_{1}},-\frac{\theta}{2}+\sum\limits_{i=0}^{k-1}\frac{w}{h_{1}+i\cdot n_1})$ of width $\frac{w}{h_1 +(k-1)\cdot n_1}$. Clearly the growth of the total angle is akin to the growth of the harmonic series, and eventually after some step $K_1$ we reach the other end of the cone at the angle $\theta/2$. At the very last end, the width of the integer grid put at the height $H_{1}:=h_1 +(K_1-1)n_1$ could be less than $w$.
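To make `eventually' quantitative (the same estimate is used again in the mass dimension computation after the proof), approximating the sum of the subtended angles by an integral gives
\[
\theta\approx\sum_{i=0}^{K_{1}-1}\frac{w}{h_{1}+i\, n_{1}}\approx\frac{w}{n_{1}}\log\Big(\frac{h_{1}+K_{1}n_{1}}{h_{1}}\Big) \quad\Longrightarrow\quad K_{1}\approx\frac{h_{1}}{n_{1}}\big(e^{\frac{\theta n_{1}}{w}}-1\big),
\]
so that $H_{1}=h_1+(K_1-1)n_1\approx h_{1}e^{\frac{\theta n_{1}}{w}}$.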
Consider the next height $h_2:=e^{H_{1}}$, and $n_{2}:=n_1 +1$ while the width $w$ remains fixed. Now the growth of points here is in a manner similar to the growth in the first level, and we begin the growth again from the left edge of the cone starting at the height $h_{2}$. At the $k$'th stage in this level, we put a chunk of width $w$ at the height $h_2 +(k-1)n_{2}$ which subtends the angle $(-\frac{\theta}{2}+\sum\limits_{i=0}^{k-2}\frac{w}{h_2 +i\cdot n_{2}}, -\frac{\theta}{2}+\sum\limits_{i=0}^{k-1}\frac{w}{h_{2}+i\cdot n_{2}})$ of width $\frac{w}{h_{2}+(k-1)\cdot n_{2}}$. Again, after a finite number $K_2$ of steps, we would hit the right edge of the cone, where again at the last end, the width of the integer grid just above the height $H_2:=h_2 +(K_{2}-1)n_2$ is less than or equal to $w$.
For each $m\geq 3$, we would inductively define $h_{m}:=e^{H_{m-1}}$ and $n_{m}:=n_{m-1}+1$ \big(Thus $n_{m}=n_{1}+(m-1)$\big) and thus have a growth of points beginning at the left edge at height $h_m$ and continuing on to height $H_{m}$ at the last $K_{m}$'th step of the iteration in this $m$'th level, where this last chunk of the integer grid is put adjacent to the right edge of the tube.
This is a set where every tube with parameter $u\in (-\frac{\theta}{2},\frac{\theta}{2})$ has counting dimension exactly 1, since for each of these tubes, at the $m$'th level, there is a strip with between $n_{m}$ and $2n_{m}$ points that lies in this tube. But the set $E$ so constructed is itself also of counting dimension exactly 1, since at each particular $m$'th level, we have a set of strips each of length $n_1 + (m-1)$, the $k$'th one lying just above and to the right of the $(k-1)$'th strip. If we had a square of length $K_{m}\cdot (n_{1}+(m-1))$ intersect the set $E$ at this appropriate height, then for the purpose of the counting dimension this is equivalent to having a vertical straight line of length $\approx K_{m}\cdot(n_1 +(m-1))$ within this square. As $m\to \infty$, the lengths of these boxes go to infinity, and we have a set of counting dimension exactly 1.
\end{proof}
\bigskip
Note that the mass dimension of this set is 1, and the mass dimension of all the tubes is exactly 0. To see this, note that the height $H_{m}\sim h_{m}\cdot e^{\frac{m\theta}{w}}$ since the total angle covered is $\theta \sim \int_{0}^{K_{m}} \frac{w dx}{h_{m}+x n_{m}}$, where $H_{m}=h_{m}+ (K_{m}-1)n_{m}$, and thus $K_{m}\sim \frac{h_{m}}{n_{m}}\big(e^{\frac{\theta n_{m}}{w}} -1\big)$ and thus $H_{m}=h_{m}+(K_{m}-1)n_{m}\sim h_{m}e^{\frac{\theta m}{w}}$. Thus the growth of the set is equivalent to having a straight line of width $w$, starting at height $h_{m}$ and ending at about the height $h_{m}e^{\frac{m\theta}{w}}$, and thus of length about $h_{m}(e^{\frac{\theta m }{w}}-1)$. By construction, we also have that $h_{m}=e^{H_{m-1}}$ for all $m\geq 2$, so that this height range is from $e^{H_{m-1}}$ to $e^{(H_{m-1}+\frac{\theta \cdot m}{w})}$, where $H_{m-1}\sim h_{m-1}e^{\frac{\theta\cdot (m-1)}{w}}\gg \frac{\theta\cdot m}{w}$ when $m$ is sufficiently large. Thus for the purpose of the mass dimension, we consider the appropriate box of length $e^{(H_{m-1}+\frac{\theta \cdot m}{w})}$, and we have a `line' of width $w$ of points from the height $e^{H_{m-1}}$ to $e^{(H_{m-1}+\frac{\theta\cdot m}{w})}$ within this box, while below the height $H_{m-1}$ there are some $o(w\cdot H_{m-1})$ many points. Thus the mass dimension at this height is:
\[ \frac{\log(o(w\cdot H_{m-1})+ w(e^{(H_{m-1}+\frac{\theta\cdot m}{w})}-e^{H_{m-1}}))}{\log(e^{H_{m-1}+\frac{\theta\cdot m}{w}})} \to \frac{\log(w)+(H_{m-1}+\frac{\theta\cdot m}{w})-\log(1-e^{-\frac{\theta\cdot m}{w}})}{H_{m-1}+\frac{\theta\cdot m}{w}}\to 1 \]
as $m\to \infty$. Also note that each of the tubes has $w\cdot n_{m}\sim wm$ points between the heights $h_{m}$ and $H_{m}$, where $h_{m}=e^{H_{m-1}}>e^{m-1}$ for $m\geq 2$, and thus clearly each of the tubes has mass dimension 0.
\bigskip
Now we come to the example that builds on the previous example, where we have a set $\tilde{E}$ of counting dimension 1, whereas every tube in a parameter set $\{(u,v): -\beta \leq u\leq \beta, -\epsilon \leq v \leq \epsilon \}$ also has counting dimension exactly 1. This shows that the natural Marstrand slicing statement with the counting dimension is false.
\begin{proof}[Proof of \cref{thm:strongslicing}]
The construction follows from the previous example. Without loss of generality we can consider the cones to be symmetric about the y-axis \footnote{Upon a clockwise rotation by $\pi/2$ we would get a cone where the $u$ parameters of the tubes are centered around $0$}. Each of the width 1 tubes is such that the right edge is parametrized by the line $\frac{y}{x-\tilde{v}}=\tilde{u}$ that passes through the point $(\tilde{v},0)$ on the x axis with $\tilde{v} \in (-\epsilon,\epsilon)$, and $\tilde{u}\in (-\beta,\beta)$, which corresponds to an angular range $(-\theta/2,\theta/2)$ centered about the $y$ axis, with $\cot (\theta/2) =\beta$. So for each value of $\tilde{v}\neq 0$, $\tilde{v}\in (-\epsilon,\epsilon)$, we find a translate of the cone $E$ with total cone angle $\theta$ that was considered in the previous example. Consider the construction of the set $E$ outlined in the previous example for the cone with vertex $(0,0)$, and for each translated cone with vertex $(\tilde{v},0)$, construct the set in the exact same manner as for the cone with vertex at the origin. In the process, with the obvious overlaps, we would have chunks of width $w+2\epsilon$ and length $n_{k}$ at the $k$'th level, growing diagonally as before. In the process, any two chunks, one almost on top of the other, are horizontally separated by the width $w$. Arguing identically as in the previous example, we conclude that the set so constructed is such that every tube with parameters $(\tilde{u},\tilde{v}) \in (-\beta ,\beta)\times (-\epsilon,\epsilon)$ has counting dimension exactly 1. The set $\tilde{E}$ itself is such that it grows diagonally with each chunk at the $k$'th level having height $n_{k}$ and width $w+2\epsilon$ \footnote{These are approximately rectangular, but not exactly rectangular chunks. If we placed rectangular chunks in place of the chunks as defined for our set, it is clear that when $n_{k}$ becomes sufficiently large, the diagonal line of these chunks will itself have a steeper slope than the right edge of the rightmost cone, and we will never reach that edge.}. This is a set with a fixed width $w+2\epsilon$ that is growing from the right edge of the `leftmost' tube to the right edge of the `rightmost' tube at a particular level, and the total number of points in each level increases to infinity as in the earlier example. Thus $\tilde{E}$ is a set of counting dimension exactly 1.
\end{proof}
Following this we are now ready to prove \cref{thm:weakreal}.
\begin{proof}[Proof of \cref{thm:weakreal}]
Consider all the tubes whose right edges are lines that intersect the set $\tilde{E}$ and have slopes greater than $\cot \theta$. In this case, after a specific level, these tubes intersect all the `levels' so constructed in this set $\tilde{E}$. In fact, it is clear that except for at most the initial level it intersects, such a tube intersects both the top and bottom boundary layers of all the other levels. Thus these tubes have increasingly large intersections with the set $\tilde{E}$, and the cardinality of these intersections goes to infinity as the levels go to infinity. Thus all of these tubes have counting dimension exactly 1. The set of such tubes is parametrized by the set $S=\{(\tilde{u},\tilde{v})\in \Big( (\cot \theta,\infty)\times (-\infty, \epsilon)\Big) \cup \Big((-\infty,-\cot \theta) \times (\epsilon,\infty)\Big) \} $ and thus $\lim_{M\to \infty} \frac{ |S \cap [-M,M]^{2}|}{(2M)^{2}} >0$ and so the weak asymptotic form of the slicing theorem is also false for the counting dimension.
\end{proof}
\section{Further questions.}
\begin{enumerate}
\item While we show that a positive Lebesgue measure set of tubes of width 1 have exceptionally large dimension, the question of constructing sets such that a set of broken lines $\{(x,y): y=\lfloor \tilde{u}x+\tilde{v} \rfloor \}$ with positive Lebesgue measure in the parameter space $(\tilde{u},\tilde{v})$ all have exceptionally large counting dimension, is not resolved here. The broken lines are sets that have vertical cross section 1, but the actual width is smaller than unity, and so there is no guarantee that a large number of broken lines have exceptionally high dimension, let alone a set of broken lines parametrized by a positive Lebesgue measure set in the $(\tilde{u},\tilde{v})$ space. Such a question will likely require a much finer study of the intersection patterns of tubes of arbitrarily small width with the integer grid $\mathbb{Z}^{2}\subset \mathbb{R}^{2}$.
\item While we have constructed our sets in $\mathbb{R}^2$, there should be a natural way to extend these results in higher dimensions, and while the construction in \cref{thm:strongerslicing1} would be extended to higher dimension in an obvious way, we would likely have to be a bit more careful in constructing the cones in dimensions 3 and higher while proving the results analogous to \cref{thm:strongerslicing2}.
\item While we could construct a set $E$ in \cref{thm:strongerslicing2} with counting dimension 1 and a cone of tubes each with counting dimension 1, we can ask whether it would be possible to construct a set $E$ with $D(E)=\eta<1$ and a positive Lebesgue measure set of tubes that also have counting dimension $\eta$. In our cone construction, we would have to increase the spacing between successive diagonal chunks within the same level in order to decrease the counting dimension of the set, but then it is possible that the angular growth within the cone as outlined in the proof becomes slower than harmonic, and thus we may not be able to reach from one end of the cone to the other with the diagonal growth, when starting at sufficiently large heights.
\end{enumerate}
\section{Acknowledgements:} The author is extremely grateful to Daniel Glasscock for introducing the author to the slicing problem in the integer lattice grid in $\mathbb{R}^{2}$. The author is supported as a PhD student in Brandeis University at the time of writing this manuscript.
\def\sect#1{\section{#1}}
\def\ssect#1{\subsection{#1}}
\def\sssect#1{\subsubsection{#1}}
\def\lt#1{\left#1}
\def\rt#1{\right#1}
\def\t#1{\widetilde{#1}}
\def\h#1{\hat{#1}}
\def\b#1{\bar{#1}}
\def\frc#1#2{\frac{#1}{#2}}
\newcommand{\underline{\mathrm{P}}}{\underline{\mathrm{P}}}
\newcommand{\partial}{\partial}
\newcommand{{\cal P}\exp}{{\cal P}\exp}
\newcommand{{\cal P}}{{\cal P}}
\newcommand{{\rm vac}}{{\rm vac}}
\newcommand{\langle}{\langle}
\newcommand{\rangle}{\rangle}
\newcommand{{\mathbb{Z}}}{{\mathbb{Z}}}
\newcommand{{\mathbb{N}}}{{\mathbb{N}}}
\newcommand{{\mathbb{R}}}{{\mathbb{R}}}
\newcommand{{\cal T}}{{\cal T}}
\newcommand{:{\cal T}\phi:}{:{\cal T}\phi:}
\newcommand{:\widetilde{\cal T}\phi:}{:\widetilde{\cal T}\phi:}
\newcommand{{\rm x}}{{\rm x}}
\newcommand{{\rm y}}{{\rm y}}
\newcommand{ {\rm i} }{ {\rm i} }
\usepackage{authblk}
\newcommand{\uparrow}{\uparrow}
\newcommand{\downarrow}{\downarrow}
\newcommand{\epsilon}{\epsilon}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\alpha}{\alpha}
\newcommand{\beta}{\beta}
\newcommand{\gamma}{\gamma}
\newcommand{\lambda}{\lambda}
\newcommand{\Lambda}{\Lambda}
\def\Res#1{\mbox{Res}_{#1}}
\newcommand{{\rm End}}{{\rm End}}
\newcommand{{\int \hspace{-4mm} \backslash}}{{\int \hspace{-4mm} \backslash}}
\newcommand{\boldsymbol{w}}{\boldsymbol{w}}
\newcommand{\boldsymbol{\partial w}}{\boldsymbol{\partial w}}
\newcommand{\boldsymbol{ \beta}}{\boldsymbol{ \beta}}
\newcommand{\partial_x\boldsymbol{ \beta}}{\partial_x\boldsymbol{ \beta}}
\newcommand{{\cal J}}{{\cal J}}
\newcommand{\alpha^\dag}{\alpha^\dag}
\newcommand{\t\alpha^\dag}{\t\alpha^\dag}
\newcommand{\Delta}{\Delta}
\newcommand{{\rm i}}{{\rm i}}
\newcommand{{\rm d}}{{\rm d}}
\title{Diffusive Hydrodynamics of Inhomogeneous Hamiltonians }
\author[1]{Joseph Durnin}
\author[2]{Andrea De Luca}
\author[2]{Jacopo De Nardis}
\author[1]{Benjamin Doyon}
\affil[1]{ Department of Mathematics, King's College London, Strand WC2R 2LS, London, U.K.}
\affil[2]{ Laboratoire de Physique Théorique et Modélisation, CNRS UMR 8089,CY Cergy Paris Université, 95302 Cergy-Pontoise Cedex, France.}
\date{}
\setcounter{Maxaffil}{0}
\renewcommand\Affilfont{\itshape\small}
\begin{document}
\maketitle
\begin{abstract}
We derive a large-scale hydrodynamic equation, including diffusive and dissipative effects, for systems with generic static position-dependent driving forces coupling to local conserved quantities. We show that this equation predicts entropy increase and thermal states as the only stationary states. The equation applies to any hydrodynamic system with any number of local, PT-symmetric conserved quantities, in arbitrary dimension. {It is fully expressed in terms of elements of an extended Onsager matrix.} {In integrable systems, this matrix admits an expansion in the density of excitations. We evaluate exactly its 2-particle-hole contribution, which dominates at low density, in terms of the scattering phase and dispersion of the quasiparticles, giving a lower bound for the extended Onsager matrix and entropy production.} We conclude with a molecular dynamics simulation, demonstrating thermalisation over diffusive time scales in the Toda interacting particle model with an inhomogeneous energy field.
\end{abstract}
\tableofcontents
\section{Introduction}
\label{sec:intro}
Interacting systems comprised of many elementary constituents are notoriously hard to describe. The theory of hydrodynamics provides one significant approach to reducing the apparent complexity of the dynamics of such systems, by restricting the descriptors of the dynamics to a smaller set, whose evolution is given in terms of non-linear partial differential equations. In the past few years there has been a resurgence in the use of hydrodynamic techniques. Prominent examples can be found in the study of gravity and relativistic fluids \cite{Rangamani2009,deBoer2015}, strongly coupled field theories \cite{Lucas2015} and electron gases \cite{Lucas2018,Ku2020,PhysRevLett.118.226601}. Specialising to many-body theory in one spatial dimension, the theory of generalised hydrodynamics (GHD) \cite{PhysRevX.6.041065,PhysRevLett.117.207201}, together with numerous subsequent works \cite{SciPostPhys.2.2.014,PhysRevLett.125.240604,vir1,PhysRevB.101.180302,2005.13546,Gopalakrishnan2019,PhysRevB.102.115121,PhysRevB.96.081118,PhysRevLett.125.070601,PhysRevLett.124.210605,PhysRevLett.124.140603,Bulchandani_2019,SciPostPhys.3.6.039,Doyon_2017}, has represented a major breakthrough in attempts to describe the non-equilibrium dynamics of real-life strongly correlated gases of bosons and low-dimensional magnets \cite{PhysRevLett.122.090601,2009.06651,2006.08577,2009.13535}.
The conceptual approach underpinning the theory of hydrodynamics is the separation of scales \cite{ldlandau2013}, whereby the microscopic system is coarse-grained into mesoscopic fluid cells, wherein local relaxation is assumed to occur over mesoscopic timescales. Hydrodynamics then describes the evolution, on macroscopic space-time scales, of the parameters characterising the local states that the system has relaxed to. According to the hydrodynamic principle, these parameters are the chemical potentials associated to the local and quasi-local conserved quantities of the dynamics, and the hydrodynamic equations follow by imposing the conservation laws associated to these conserved quantities. The states of most apparent relevance are the maximal entropy states, states defined by the properties of being steady, homogeneous, and ergodic under the dynamics. Taking these as the local states, we obtain the Euler-scale equations: the equations of motion for the convective flows of all conserved quantities. By adding more detailed information on the spatial modulation of the local states, it is possible to include the effect of dissipation and viscosity by adding diffusive terms to the Euler-scale equations. Inclusion of these terms is usually associated with entropy production, where information is transferred from macroscopic scales to microscopic scales due to interactions, an effect which is absent in the Euler-scale description. In conventional fluids, the Euler equation with the addition of viscosity effects is known as the Navier-Stokes equation.
Other terms in the Euler and Navier-Stokes equations are often included to describe the effect of external fields, such as gravitational fields in the original formulation of the Euler and Navier-Stokes equations. These are so-called force or acceleration terms, often resulting from coupling to the density of particles or energy in the fluid. For Galilean (relativistic) invariant systems it is straightforward to add a term representing a coupling to the number (energy) to the Navier-Stokes equation. This is because a special simplification occurs: the current of particles (energy) is the momentum density, which is itself a conserved quantity. However, the diffusive hydrodynamics in the presence of generic forces, as in magneto- and thermo- hydrodynamics \cite{Davidson2001} and electron gases in magnetic fields \cite{PhysRevLett.118.226601,PhysRevX.10.011019}, does not contain such a simplification.
In this paper we derive the general, multi-component diffusive hydrodynamic equations accounting for generic force fields. This generalises the Euler-scale results of \cite{SciPostPhys.2.2.014,Doyon2021} to the diffusive order. We express all terms using appropriate Onsager coefficients, written as time-integrated correlation functions of generalised currents as in the Green-Kubo formula. We use the quantum microscopic dynamics, involving the Kubo-Mori-Bogoliubov inner products coming from perturbation theory and the Kubo-Martin-Schwinger relations. However, by standard arguments, the final results apply equally well to quantum and classical systems. The rate of entropy production which we obtain is shown to be non-negative, and generalises the known Onsager expression for the entropy rate under charge gradients \cite{PhysRev.37.405,PhysRev.38.2265}. { We show that thermal states of the inhomogeneous evolution Hamiltonian, accounting for the force fields and with arbitrary chemical potentials for ultra-local quantities (such as the total mass in Galilean systems), are stationary under our hydrodynamic equation.} We mostly focus on systems in one spatial dimension; however, the derivation is easily extended to higher dimensions and we give the final fluid equation in any dimension.
Our results have notable implications for the dynamics of integrable one-dimensional systems subjected to external fields. Integrable systems are characterised by the presence of a large number of local and quasi-local conserved quantities, which provide dynamical constraints prohibiting conventional thermalisation. Many experimental and theoretical results have emerged which have confirmed the validity of the following simple principle: isolated purely integrable Hamiltonian systems, both classical and quantum, relax to statistical ensembles which are obtained by maximising entropy under the constraints provided by the full set of conserved quantities present in the system. These are the so-called generalised Gibbs ensembles (GGEs) \cite{Eisert2015,rigol2007relaxation,PhysRevLett.106.227203,Ilievski_2016_str,PhysRevLett.115.157201,Vernier_2017,Bastianello_2017,Essler_2017}. The correct hydrodynamical description of such systems is GHD, where all such conserved quantities are taken into account.
It is generally expected that the addition of terms in the Hamiltonian that break all but a few of the conserved charges should restore canonical thermalisation. However, the old numerical experiment of Fermi-Pasta-Ulam-Tsingou \cite{Dauxois2008} and the more recent cold-atom experiment of the quantum Newton's cradle \cite{Kinoshita2006}, left no doubts that one-dimensional interacting systems of particles whose Hamiltonian dynamics are almost integrable can fail to thermalise on very large time-scales, prompting many theoretical and experimental investigations of the topic \cite{Langen2013,Kitagawa_2011,Gring2012,PhysRevLett.115.180601,Gring1318,Mallayya2019,PhysRevLett.119.010601,2007.01286,PhysRevResearch.2.022034,Langen2016,PhysRevX.8.021030,2103.11997}.
Thus for integrable systems, as potentials coupling to the local and quasi-local conserved charges of the system generically break integrability, we expect that a hydrodynamic description should describe the approach to thermal stationary states. This can reproduce for example the effect of external trapping potentials in cold atomic systems, different interactions with external fields, or spatial inhomogeneities in the system's Hamiltonian. While Euler hydrodynamics has thermal states among its stationary states, the equations do not allow for the entropy production required to reach this state from a generic initial condition \cite{SciPostPhys.2.2.014,Doyon2021}. The inclusion of viscosity terms is therefore fundamental to provide the required entropy production and the approach to thermalisation. This was already shown in the simplest case of a coupling to the density \cite{PhysRevLett.125.240604} in a Galilean invariant system, where the analysis simplifies considerably as mentioned above. Here we consider generic classical and quantum integrable systems and generic forces, { and derive consistent hydrodynamic equations describing the approach to thermalisation, which fully justify the simplified equation used in \cite{PhysRevLett.125.240604} and give an explicit lower bound for entropy production.}
\subsection{Presentation of the problem and main result}
We consider an isolated quantum system with $n$ (which is infinite in integrable systems) extensive conserved quantities in involution,
\begin{equation}
Q_i = \int dx \ q_i(x)
\end{equation}
with $[Q_i,Q_j]=0$ for all $i,j$. These are conserved in the sense that they are invariant with respect to any Hamiltonian formed of linear combinations of $Q_i$'s. The Hamiltonian we choose below is inhomogeneous, and under it they are not necessarily conserved; however, as inhomogeneity length scales are large, these conserved quantities play a role within the emergent hydrodynamic description.
The charge densities $q_i(x)$ either act non-trivially on finite regions surrounding $x$ (local) or have norms that decay quickly enough away from $x$ (quasi-local). The charge densities are assumed to be hermitian operators $q_i^\dagger = q_i$. We further assume that there exists a parity and time (PT) inversion symmetry under which densities transform simply: an anti-unitary algebra involution such that
\begin{equation}
\mathcal{PT} (q_i(x)) = q_i(-x) .
\end{equation}
As a consequence, the charges $Q_i$ are PT invariant. We also introduce the current operators $j_{k,i}(x)$ defined by the flow induced by each conserved quantity, as \cite{SciPostPhys.2.2.014}
\begin{equation}
{\rm i} [Q_k, q_i(x)] + \partial_x j_{k,i}(x)=0.
\end{equation}
The currents can be chosen hermitian, and by the assumed PT invariance, they can be chosen to transform simply under the PT symmetry, $\mathcal{PT}(j_{k,i}(x)) = j_{k,i}(-x)$. By anti-unitarity $\mathcal{PT}(q_i(x,t)) = q_i(-x,-t)$ and $\mathcal{PT}(j_{k,i}(x,t)) = j_{k,i}(-x,-t)$.
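As a consistency check (this is only a rewriting of the definitions above), consider a homogeneous Hamiltonian $H_{0}=\sum_{k}a^{k}Q_{k}$ with constant coefficients $a^{k}$: the Heisenberg equation of motion then reproduces the usual set of continuity equations,
\[
\partial_{t}q_{i}(x,t)={\rm i}\Big[\sum_{k}a^{k}Q_{k},q_{i}(x,t)\Big]=\sum_{k}a^{k}\,{\rm i}[Q_{k},q_{i}(x,t)]=-\partial_{x}\Big(\sum_{k}a^{k}j_{k,i}(x,t)\Big).
\]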
Our results are valid for a generic number $n$ of conserved quantities and we shall later apply them to the integrable case where $n$ is infinite. We shall first restrict ourselves to one spatial dimension and later generalise the result to higher dimensions.
Applying external fields coupling to the densities, the most generic inhomogeneous Hamiltonian reads
\begin{equation}\label{eq:Hamiltonian}
H = \sum_{i=1}^n \int dx \ w^i(x) q_i(x),
\end{equation}
where $w^i(x)$ are generic functions of $x$. The time evolved observables $o(x,t)$ are denoted as usual
\begin{equation}
o(x,t) = e^{ {\rm i} t H}o(x)e^{- {\rm i} tH}.
\end{equation}
We assume that the $w^i(x)$ vary slowly in space such that we can expand them around each point $x_0$ as
\begin{equation}
w^i(x) = w^i(x_0) + (x-x_0)\partial_{x_0} w^i(x_0) + \ldots,
\end{equation}
with small higher order corrections. The accuracy of the resulting hydrodynamic equations is determined by how small such derivatives are with respect to the microscopic scales of the model; we leave a precise analysis for future works, but the example of the Toda gas in section \ref{sec:toda} will give some intuition. Effectively, the system is subject to external inhomogeneous forces
\begin{equation}
\mathfrak{f}^i(x) = - \frac{\partial w^i(x) }{\partial x}
\end{equation}
which locally break the conserved quantities. In the following we will use the repeated indices convention to denote sums over indices.
Our main result is a hydrodynamic equation for the space-time evolution of the local expectation values of the charge densities
\begin{equation}
{\tt q}_i(x,t) = \langle q_i(x,t)\rangle_{\rm ini},
\end{equation}
with respect to some initial state $\langle \cdots\rangle_{\rm ini}$, up to second order in spatial derivatives. This includes diffusive terms which are responsible for the entropy increase and thermalisation. We define $\ell = {\rm min}_i(|\partial_x {\tt q}_i(x,t)|^{-1})$, the spatial scale of variation of the local densities which, in the hydrodynamic approximation, determines the scale of variation of other local observables, and $\ell_{\mathfrak f} = {\rm min}_i(|\mathfrak f^i|^{-1})$. The resulting hydrodynamic equations, around the point $x,t$, are correct up to and including terms of order $1/\ell^2,\, 1/\ell_{\mathfrak f}^2, 1/(\ell\ell_{\mathfrak f})$. With increasing time, $\ell$ is expected to increase at almost all points, up to the scale $\ell_{\mathfrak f}$ determined by the external fields, with the possible exception of isolated points where shocks or other singular structures may develop. Therefore, at large enough times, we assume $\ell\approx \ell_{\mathfrak f}$.
It is convenient to characterise the state at $x,t$ by a local maximal entropy state (a local GGE) which reproduces the averages ${\tt q}_i(x,t)$. For this purpose, we define the thermodynamic potentials $\beta^i(x,t)$ by inverting the defining relation
\begin{equation}\label{eq:hydrostate}
{\tt q}_i(x,t) = \frac{{\rm tr}[ q_i e^{- Q_l \beta^l(x,t)}]}{{\rm tr}[ e^{- Q_l \beta^l(x,t)}]},
\end{equation}
at each position $x,t$. We denote by $\langle\cdots\rangle$ the GGE with thermodynamic potentials $\beta^i(x,t)$, where here and throughout we keep the $x,t$ dependence implicit. This forms a family, parametrised by $x,t$, of homogeneous and stationary states on the infinite line.
Let us consider a single fluid cell at position $x,t$, and introduce several quantities associated to it.
Given any $n$-dimensional vectors $\boldsymbol{a},\;\boldsymbol{b}$ (which may depend on $x,t$), we shall use the following compact notation for the contraction of indices with the external fields or forces
\begin{equation}
j_{ \boldsymbol{ a},k}(y)= a^i j_{i,k}(y), \quad j_{i,\boldsymbol{b}}(y)= j_{i,k}(y)b^k,\quad j_{ \boldsymbol{ a},\boldsymbol{ b}}(y)= a^ij_{i,k}(y)b^k.
\end{equation}
We denote the time evolution of a generic operator $o(y)$ in complex time as
\begin{equation}\label{eq:evolwithw}
o(y, s \boldsymbol{w} - {\rm i} \tau \boldsymbol{\beta} ) = e^{ {\rm i} s w^i Q_i + \tau \beta^l Q_l} \ o(y) \ e^{- {\rm i} s w^i Q_i - \tau \beta^l Q_l}
\end{equation}
where $w^i=w^i(x)$ and $\beta^i=\beta^i(x,t)$. Within the formulae below, this is to be interpreted as microscopic time evolution, occurring within the mesoscopic cell at $x,t$. In particular, the real part of the time evolution is with respect to the Hamiltonian $H_{x} = w^i(x) Q_i$, which is \eqref{eq:Hamiltonian} taken with the fields constant, at their local value at $x$. We also define the generalised Kubo-Mori-Bogoliubov (KMB) inner product for two local operators in terms of the connected correlation function
\begin{equation}\label{genKMB}
(o_1(y,s\boldsymbol{w}),o_2(0)) =
\int_0^1
d\lambda\,
\langle{o_1(y,s \boldsymbol w - {\rm i} \lambda\boldsymbol{\beta} )o_2(0)}\rangle^{\rm c}.
\end{equation}
Throughout we shall require that the inner product is sufficiently clustering, specifically that $(o_1(y,s\boldsymbol{w}),o_2(0,0))$ decays faster than $1/|y|$ at large $|y|$. We can then introduce the following extended Onsager coefficients for a generic local operator $o(x)$,
\begin{align}\label{eq:def_Onsager}
& \mathfrak{L}[\boldsymbol{ a}, \boldsymbol{ b} ;o] = \int_{-\infty}^{\infty} ds \, ( J_{ \boldsymbol{ a},\boldsymbol{ b}}(s \boldsymbol{w} ), o )^C ,
\end{align}
with $J_{i,k}(s\boldsymbol{w}) = \int_{-\infty}^\infty dy \ j_{i,k}(y,s\boldsymbol{w})$, the spatially integrated current, and $o=o(0)$. Here we have introduced the connected inner product, denoted $^C$, which is based on clustering at large times:
\begin{align}\label{eq:connected}
& ( O_1(s\boldsymbol{w}), o_2 )^C = ( O_1(s\boldsymbol{w}) , o_2 ) - \lim_{s \to \infty} ( O_1(s\boldsymbol{w}) , o_2 ).
\end{align}
By the method of hydrodynamic projections, we can write the temporally disconnected component as
\begin{equation}\label{eq:projection}
\lim_{s \to \infty} ( O_1(s\boldsymbol{w}) , o_2 )
= ( \mathbb P O_1 , o_2 )= \sum_{ij} (O_1,q_i)\mathsf C^{ij}(Q_j,o_2),
\end{equation}
where $\mathsf C^{ij}$ is the inverse of the susceptibility matrix ${\mathsf C}_{ij} = \langle Q_i q_j \rangle^c$, and $\mathbb P$ is the projector onto the space of conserved quantities $Q_i$. In terms of explicit double-indices $(i,j)$ and $(k,l)$, the extended Onsager matrix $\mathfrak L$ has matrix elements $\mathfrak L_{i,j;k,l}\equiv \mathfrak L[i,j;j_{k,l}]$ defined by
\begin{equation}\label{OnsagerMatrix}
a^ib^jc^kd^l \mathfrak L_{i,j;k,l}
= \mathfrak L[\boldsymbol a,\boldsymbol b;j_{\boldsymbol c,\boldsymbol d}].
\end{equation}
We now write the hydrodynamic equation governing the evolution of the local expectation values ${\tt q}_i(x,t)$. Suppressing the $x,t$ dependence of each term, our main result reads:
\begin{align}\label{main_result}
&\partial_t {\tt q}_i + \partial_x {\tt j}_{\boldsymbol{w},i} + \frac{1}{2}\partial_x\left(\mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; j_{\boldsymbol{w},i}] + \mathfrak{L}[\boldsymbol{\beta}, \boldsymbol{\mathfrak{f}} ; j_{\boldsymbol{w},i}]\right) \nonumber \\=&\ {\tt j}_{i,\boldsymbol{\mathfrak{f}} } + \frac{1}{2}\left( \mathfrak{L}[\boldsymbol{w}, \partial_x \boldsymbol{ \beta} ; j_{ i,\boldsymbol{\mathfrak{f}} }] + \mathfrak{L}[\boldsymbol{\beta}, \boldsymbol{\mathfrak{f}} ; j_{i,\boldsymbol{\mathfrak{f}} }] \right)
\end{align}
where we denoted the Euler expectation values of currents on local macroscopic states as
\begin{equation}\label{eq:jGGE}
{\tt j}_{i,k}(x,t) = \frac{{\rm tr}[ j_{i,k} e^{- Q_l \beta^l(x,t)}]}{{\rm tr}[ e^{- Q_l \beta^l(x,t)}]}.
\end{equation}
Notice that the spatial derivatives of the thermodynamic potentials are related to the spatial derivatives of the expectation values of the charges. In terms of the susceptibility matrix ${\mathsf C}_{ij}$, this reads
\begin{equation}
{\mathsf C}_{ij} \partial_x {\beta}^j = - \partial_x {\tt q}_i .
\end{equation}
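Explicitly, this follows by differentiating the defining relation \eqref{eq:hydrostate} with respect to $x$ at fixed operators,
\begin{equation}
\partial_x {\tt q}_i = \frac{\partial {\tt q}_i}{\partial \beta^j}\,\partial_x \beta^j = - \langle Q_j q_i\rangle^{\rm c}\,\partial_x \beta^j = - {\mathsf C}_{ij}\,\partial_x \beta^j ,
\end{equation}
where the last equality uses the symmetry of the susceptibility matrix, which holds as the charges commute.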
{ Eq.~\eqref{main_result} gives an unbroken continuity equation for the inhomogeneous energy density $w^i(x){\tt q}_i(x)$, which follows by linearity of the current operator $j_{i,\boldsymbol a}$ as a function of ${\boldsymbol a}$. Using the relation $w^i(x) \partial_x\mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; j_{\boldsymbol{w},i}] = \partial_x\mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; j_{\boldsymbol{w},\boldsymbol{w}}] + \mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; j_{\boldsymbol{w},\boldsymbol{\mathfrak{f}} }]$, and the analogous relations for other terms, we see
\begin{equation}
\partial_t (w^i{\tt q}_i) + \partial_x[ {\tt j}_{\boldsymbol{w},\boldsymbol w} + \frac{1}{2} \left(\mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; j_{\boldsymbol{w},\boldsymbol w}] + \mathfrak{L}[\boldsymbol{\beta}, \boldsymbol{\mathfrak{f}} ; j_{\boldsymbol{w},\boldsymbol w}]\right)] =0.
\end{equation}
Eq.~\eqref{main_result} also returns conservation laws for any {\em ultra-local} density. This is defined as a density $u^i q_i(x)$, for some constants $u^i$, whose charge does not generate any current, $u^i j_{i,k}(x)=0$. An equivalent definition of an ultra-local density is $u^i[Q_i, q_k(x)]=0$, for all $k$. In this case we have again the continuity equation
\begin{equation}
\partial_t (u^i {\tt q}_i) + \partial_x[ {\tt j}_{\boldsymbol{w},\boldsymbol u} + \frac{1}{2} \left(\mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; j_{\boldsymbol{w},\boldsymbol u}] + \mathfrak{L}[\boldsymbol{\beta}, \boldsymbol{\mathfrak{f}} ; j_{\boldsymbol{w},\boldsymbol u}]\right) ] =0.
\end{equation}
Examples of ultra-local densities are the particle density in Galilean models, and the magnetic field in spin and Fermi-Hubbard chains.
}
The usual case, with a dynamics induced by a homogeneous Hamiltonian, is recovered by setting $\mathfrak{f}=0$, hence $ {\tt j}_{i,\boldsymbol{\mathfrak{f}} }=0$ and therefore
\begin{equation}\label{eq:simplify}
\mathfrak{L}[\boldsymbol{a}, \boldsymbol{ b} ; j_{i,\boldsymbol{\mathfrak{f}}}]=\mathfrak{L}[ \boldsymbol{ a}, \boldsymbol{\mathfrak{f}} ; o] =0.
\end{equation}
The simplification \eqref{eq:simplify} also holds whenever the force only couples to a conserved density, such as $q_0(x)$ (that is, $\mathfrak f^k = \delta^k_0$), whose currents with respect to all flows, i.e. $j_{k,0}(x)$ for all $k$, are themselves conserved densities. We will refer to such a $q_0(x)$ as \textit{self-conserved}. In this case, as conserved quantities are projected out in \eqref{eq:connected}, the resulting Onsager coefficients vanish. This phenomenon is observed, for instance, for the local particle density in integrable or non-integrable Galilean systems, where we have $j_{k,0} = q_{k-1}$ for all $k$ by Galilean invariance, in a suitable basis for the charges. Similar statements hold for relativistic models, and for the XXZ chain, where in both cases the energy density is self-conserved. Equalities of this type, for averages in a GGE, arise from the thermodynamic Bethe ansatz and GHD expressions, as first noted in \cite{SciPostPhys.2.2.014}; at the level of operators, they are related to the observation that the boost operator preserves the integrable hierarchy \cite{10.21468/SciPostPhys.8.2.016}.
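This mechanism can be made explicit in the Galilean example: in the basis where $j_{i,0}=q_{i-1}$, the spatially integrated current entering \eqref{eq:def_Onsager} reduces to a fixed, $s$-independent combination of the conserved charges,
\begin{equation}
J_{\boldsymbol a,\boldsymbol{\mathfrak f}}(s\boldsymbol w) = \mathfrak f^0\, a^i \int dy\, j_{i,0}(y, s\boldsymbol w) = \mathfrak f^0\, a^i Q_{i-1} ,
\end{equation}
which is entirely removed by the subtraction in \eqref{eq:connected}; hence $\mathfrak L[\boldsymbol a,\boldsymbol{\mathfrak f};o]=0$ for any $\boldsymbol a$ and any local observable $o$.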
Another simplification occurs if the external field $\mathfrak f^kq_k(x)$ is ultra-local. In this case the Onsager coefficients $\mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; o]$ become those taken with respect to the homogeneous background Hamiltonian with $\boldsymbol w$ constant. The case of coupling to the particle density in Galilean systems, considered in \cite{PhysRevLett.125.240604}, admits both the aforementioned simplifications. Note that the stronger requirement that all $j_{k,0}$ be themselves conserved densities, for the simplification \eqref{eq:simplify}, was missed in \cite{PhysRevLett.125.240604}.
\subsection{Entropy increase and stationarity}
The equation \eqref{main_result} guarantees positive thermodynamic entropy increase and thermal states as the only stationary states of the evolution. The definition of the entropy density leads to the time-evolution
\begin{equation}
\partial_t s(x,t) = \beta^i \partial_t {\tt q}_i(x,t) .
\end{equation}
Using that the Euler part of the hydrodynamic equation does not lead to entropy increase \cite{Doyon2021},
\begin{equation}
\int dx \,\beta^i ( \partial_x {\tt j}_{\boldsymbol{w},i} - {\tt j}_{i,\boldsymbol{\mathfrak{f}} })=0,
\end{equation}
and spatially integrating \eqref{main_result} by parts, the increase in the total entropy $S(t)= \int dx \, s(x,t)$ is given by the following combination of extended Onsager coefficients:
\begin{align}\label{eq:entropyIncrease}
&\partial_t S = \frac{1}{2} \int_{-\infty}^{\infty} dx\,\Big( \mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; j_{\boldsymbol{w},\partial_x\boldsymbol{ \beta}}] + \mathfrak{L}[ \boldsymbol{ \beta}, \boldsymbol{\mathfrak{f}} ; j_{\boldsymbol{w},\partial_x\boldsymbol{ \beta}}] + \mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{ \beta} ; j_{\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}}}] + \mathfrak{L}[\boldsymbol{ \beta}, \boldsymbol{\mathfrak{f}} ; j_{\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}}}]\Big).
\end{align}
Using the definition of the extended Onsager coefficients \eqref{eq:def_Onsager}, the entropy increase takes the quadratic form
\begin{align}\label{eq:Squadratic}
&\partial_t S = \frac{1}{2} \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} ds\, (J_{\boldsymbol{w},\partial_x\boldsymbol{ \beta}}(s \boldsymbol{w} ) + J_{\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}}}(s \boldsymbol{w} ),j_{\boldsymbol{w},\partial_x\boldsymbol{ \beta} }+ j_{\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}}})^C .
\end{align}
The right-hand side is always non-negative,
\begin{equation}\label{eq:positiventropy}
\partial_t S \geq 0,
\end{equation}
as the KMB inner product $(\cdot,\cdot)$ is positive semi-definite\footnote{In fact, it is in general a pre-inner product, as it is not necessarily positive-definite.}. Indeed, $(O,o)$ is non-negative, as by translation invariance and clustering it can be written as the limit
\begin{equation}\label{eq:positiveOo}
(O,o) = \lim_{L\to\infty}\frc1{2L}\left(\int_{-L}^L dy\, o(y)\;,\; \int_{-L}^L dz\,o(z)\right)\geq 0.
\end{equation}
Then, $(O,o)^C = ((1-\mathbb P)O,(1-\mathbb P)o)\geq 0$, and finally, by stationarity and clustering at large times,
\begin{equation}
\int_{-\infty}^\infty ds\,( O(s \boldsymbol{w} ) , o )^C = \lim_{T\to\infty}
\frc1{2T} \left( \int_{-T}^T ds\, O(s \boldsymbol{w}),\int_{-T}^T du\, o(0,u \boldsymbol{w})\right)^C\geq 0,
\end{equation}
showing \eqref{eq:positiventropy}.
It should be stressed that equation \eqref{eq:entropyIncrease} generalises the entropy production rate induced by thermodynamic forces found by Onsager \cite{PhysRev.37.405,PhysRev.38.2265} (see also more recent works \cite{PhysRevResearch.2.022009,PhysRevLett.115.090601,PhysRevE.101.012132}) to a system with $n$ conserved quantities and external inhomogeneous force fields. Analogously to the work of Kubo \cite{Kubo1957}, we here prove positivity of the entropy increase by the definition of the extended Onsager coefficients. { In particular, by the above discussion, the extended Onsager matrix defined in \eqref{OnsagerMatrix} is positive semi-definite,
\begin{equation}
\mathfrak L \geq 0
\end{equation}
and the entropy production formula is
\begin{equation}
\partial_t S = \frc12 \int dx\, m^{i,j} \mathfrak L_{i,j;k,l}\, m^{k,l} ,\quad
m^{i,j} = w^i \partial_x \beta^j + \beta^i \mathfrak f^j.
\end{equation}
}
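As a basic consistency check (assuming the standard Green--Kubo identification of the thermal conductivity in this normalisation, $\kappa=\beta^2\mathfrak L/2$), consider a single conserved quantity, the energy itself, with $w=1$ constant so that $\boldsymbol{\mathfrak f}=0$: then $m=\partial_x\beta$ and
\begin{equation}
\partial_t S = \frac{1}{2} \int dx\, \mathfrak L\, (\partial_x \beta)^2 = \int dx\, \frac{\kappa\,(\partial_x T)^2}{T^2} ,
\end{equation}
the familiar entropy production rate of heat conduction.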
As entropy can only increase, stationarity should be reached when the entropy of local states is maximal. The condition of stationary entropy is obtained from \eqref{eq:Squadratic} by considering the non-negativity result \eqref{eq:positiveOo}. It implies that we must have
\begin{equation}
\int_{-\infty}^\infty ds\,(J_{\boldsymbol{w},\partial_x\boldsymbol{ \beta}}(s \boldsymbol{w} ) + J_{\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}}}(s \boldsymbol{w} ),j_{\boldsymbol{w},\partial_x\boldsymbol{ \beta} }+ j_{\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}}})^C = 0
\end{equation}
at every point $x$. Suppose that there is some conserved density $q_0$ which is ultra-local and self-conserved, and $q_1$, which is ultra-local but not necessarily self-conserved. Then a family of stationary entropy solutions is given by
\begin{equation}\label{eq:conditionSmax}
\boldsymbol\beta(x) = \beta (\boldsymbol w(x) - \mu_0(x) \boldsymbol e_0 - \mu_1 \boldsymbol e_1),
\end{equation}
where $\beta$ and $\mu_1$ are constants, $\mu_0(x)$ is an arbitrary function, and $\boldsymbol e_{0,1}$ are the associated basis vectors.
However, a stationary entropy does not necessarily guarantee that a stationary solution has been reached, as it only accounts for stationarity under the diffusive terms in the hydrodynamic equation. The condition \eqref{eq:conditionSmax} makes all Onsager terms in \eqref{main_result} cancel; however, there generally remains a nontrivial Euler-scale evolution, which preserves entropy. The fully stationary solutions are those where $\partial_x {\tt j}_{\boldsymbol{w},i} = {\tt j}_{i,\boldsymbol{\mathfrak{f}} }$, which imposes the constraint that $\mu_0(x)$ is a constant in \eqref{eq:conditionSmax} \cite{SciPostPhys.2.2.014,CauxCradle,Doyon2021}. If it is not, the resulting Euler-scale evolution exits the space of states \eqref{eq:conditionSmax}, and diffusion then further increases entropy. At large times, the solution reached is then for $\mu_0(x)=$ constant, a mechanism described in \cite{PhysRevLett.125.240604}. This shows that the expected thermal states
\begin{equation}\label{LDA_state}
\beta^i(x) = \beta_0 (w^i(x) - \mu_0\delta_{0}^i - \mu_1\delta_1^i),
\end{equation}
where the thermodynamic and external forces cancel out, $\partial_x \beta^i(x) + \beta_0 \mathfrak{f}^i(x)=0$, are stationary states with vanishing entropy production. We note that both the temperature and the chemical potential associated with any ultra-local conserved charge are arbitrary parameters of the stationary state. Ultra-local charges are still conserved with an inhomogeneous Hamiltonian, and the temperatures and chemical potentials are determined by the initial values of the total energy and of the total charges.
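For the trapped Galilean gas used as an illustration above, \eqref{LDA_state} is simply the local density approximation: $\beta^2(x)=\beta_0$ and $\beta^0(x)=\beta_0\,(V(x)-\mu_0)$ (with $\mu_1=0$ for a gas at rest), i.e. a thermal state at uniform temperature $1/\beta_0$ with the space-dependent chemical potential
\begin{equation}
\mu(x) = \mu_0 - V(x) .
\end{equation}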
\subsection{Formulation in $D$ spatial dimensions}
It is a simple matter to extend our calculations and results to higher dimensions. The only technical requirement is that the KMB product satisfies a slightly stronger clustering condition: in $D$ spatial dimensions, $(o_1(\vec{x},t),o_2(0,0))$ decays faster than $1/|\vec{x}|^D$ at large $|\vec{x}|$. We obtain
\begin{align}\label{main_result_highD}
\partial_t {\tt q}_i + \nabla\cdot \vec{{\tt j}}_{\boldsymbol{w},i} + & \frac{1}{2} \nabla \cdot\left(\mathfrak{L}\left[\boldsymbol{w}, \nabla \boldsymbol{ \beta} ; \vec{j}_{\boldsymbol{w},i}\right] + \mathfrak{L}\left[\boldsymbol{\beta}, \vec{\boldsymbol{\mathfrak{f}}} ; \vec{j}_{\boldsymbol{w},i}\right]\right) \nonumber \\& = {\tt j}_{i,\vec{\boldsymbol{\mathfrak{f}} }} + \frac{1}{2}\left( \mathfrak{L}\left[\boldsymbol{w}, \nabla\boldsymbol{ \beta} ; j_{ i,\vec{\boldsymbol{\mathfrak{f}} }}\right] + \mathfrak{L}\left[\boldsymbol{\beta}, \vec{\boldsymbol{\mathfrak{f}}} ; j_{i,\vec{\boldsymbol{\mathfrak{f}}} }\right] \right),
\end{align}
where $j_{i,\vec{\boldsymbol{a}} }=\vec{j}_{i,k}\cdot \vec{a}^{\,k}$, and the extended Onsager coefficients in higher dimensions read
\begin{align}\label{eq:def_Onsager_highD}
& \mathfrak{L}[\boldsymbol{ a}, \vec{\boldsymbol{ b }};o] = \lim_{t \to \infty} \int_{-t}^{t} ds \left( J_{ \boldsymbol{ a},\vec{\boldsymbol b}}(s \boldsymbol{w} ), o(0,0) \right)^C .
\end{align}
Note that $\mathfrak{L}[\boldsymbol{ a}, \vec{\boldsymbol{ b }};o]$ inherits the vectorial type of $o$. For the entropy increase we have similarly:
\begin{align}
&\partial_t S = \frac{1}{2} \int d^D x \int_{-\infty}^{+\infty} ds \left(J_{\boldsymbol{w},\nabla\boldsymbol{ \beta}}(s \boldsymbol{w} ) + J_{\boldsymbol{ \beta},\vec{\boldsymbol{\mathfrak{f}}}}(s \boldsymbol{w} ),j_{\boldsymbol{w},\nabla\boldsymbol{ \beta} }(0,0)+ j_{\boldsymbol{ \beta},\vec{\boldsymbol{\mathfrak{f}}}}(0,0)\right)^C \ge0.
\end{align}
\section{Diffusive hydrodynamics with inhomogeneous fields }
The purpose of this section is to derive the main result \eqref{main_result}. The derivation is based on evaluating, from microscopic calculations, an expression for the currents at large times, from an initial condition where thermodynamic potentials are linear in space (constant thermodynamic forces), and under a dynamics where the fields vary linearly in space (constant external forces). The thermodynamic and external forces are taken to be small, and the calculation is perturbative in these forces, with the application of standard perturbation theory. At microscopic scales, there is local relaxation to a state which is close to a maximal entropy state, and which spatially extends to the larger mesoscopic scales. The expression for the expectation values of currents on these mesoscopic scales, expressed in terms of the charge densities and their derivatives at the same mesoscopic scales, and combined with the conservation laws, is the basis for the hydrodynamic equation governing the macroscopic evolution. The Euler contribution to these current expectation values gives the ``leading order'' of the mesoscopic scales, and diffusive contributions are obtained at sub-leading order.
Similar ideas are used in order to obtain the slow dynamics under integrability breaking in homogeneous situations \cite{2103.11997,Durnin2020,PhysRevB.101.180302,2005.13546}.
\subsection{Diffusive hydrodynamics: constant fields }
In order to illustrate the method, we start with a simpler problem where the external fields are constant in space, for which we obtain the hydrodynamic equation including diffusive effects to second order in spatial derivatives. Under unitary Heisenberg time evolution and with constant fields $w^i(x)=w^i$ in the Hamiltonian, eq. \eqref{eq:Hamiltonian}, the conserved quantities $Q_i$ of the system are all exactly conserved. Their densities therefore satisfy a set of continuity equations (denoting $o(x,t) = e^{{\rm i} H t } o(x) e^{-{\rm i} H t } $, with $H=w^i Q_i$)
\begin{equation}\label{eq:continuity0}
\partial_t q_i(x,t) + \partial_x j_{\boldsymbol{w} ,i}(x,t) = 0.
\end{equation}
The basic ingredient of the hydrodynamic approach is the hydrodynamic state, which is
\begin{equation}\label{eq:density_mat}
\tilde{\rho}_{t} \propto e^{- \int dx\, \beta^l(x,t) q_l(x) }.
\end{equation}
The state at a given time is thus equivalent to its potentials $\beta^l(x,t)$, viewed as functions of $x$, and hydrodynamics prescribes how to calculate the time-evolution of these potentials. Note however that the potentials appearing in \eqref{eq:density_mat} are not exactly those appearing in \eqref{eq:hydrostate} and \eqref{eq:jGGE}, with the two differing by first-order spatial derivative terms, see the discussion at the end of this subsection.
Crucial to these results is the hydrodynamic approximation, which accounts for two separate effects:
\medskip
\noindent I. Separation into local fluid cells: at every mesoscopic time $t_0$ and position $x_0$, the fluid can be represented by the following state:
\begin{equation}\label{eq:hydro1}
\tilde{\rho}_{x_0,t_0} \propto e^{ -\beta^l(x_0,t_0) Q_l - \partial_{x_0} \beta^l(x_0,t_0)\, \int dy \: (y-x_0) \: {q}_l(y) },
\end{equation}
where the functions $\beta(x,t)$ are exactly those appearing in \eqref{eq:density_mat}.
This expression is justified if the state is sufficiently slowly varying in space, and correlations decay sufficiently rapidly in space, as then an expectation value evaluated at some position $x_0$ in the state \eqref{eq:density_mat} will be approximately equal to that in the state \eqref{eq:hydro1}.
\medskip
\noindent II. Local relaxation: hydrodynamic averages ${\tt o}(x_0,t_0)$ in the fluid cell located at $x_0,t_0$ are not defined by equating them to $\langle o\rangle_{\t{\rho}_{x_0,t_0}}\equiv\mathrm{Tr}(o\t\rho_{x_0,t_0})$, but instead, they are defined as the value of this observable after relaxation has occurred within the state $\t\rho_{x_0,t_0}$ describing the fluid cell. Relaxation occurs at mesoscopic time scales $t_{\rm meso}$; micro-, meso- and macroscopic timescales, in the cell $\t\rho_{x_0,t_0}$, are informally separated as
\begin{equation}\label{times}
t_{\rm micro} \ll t_{\rm meso} \ll {\rm min}_{l} ([v_{\rm micro} |\partial_{x_0}\beta^l(x_0,t_0)|]^{-1}) \equiv t_{\rm macro}.
\end{equation}
Here the quantities $t_{\rm micro}, \, v_{\rm micro}$ are some microscopic time and velocity depending on the model. Averages of observables obtained after relaxation -- ``mesoscopic averages'' -- are to be expressed as functions of the mesoscopic averages of conserved densities, which are then identified with the hydrodynamical variables ${\tt q}_i$. In practice, the mesoscopic averages are obtained by taking limits in the correct order: infinite macroscopic times $|\partial_{x_0}\beta^l(x_0,t_0)|\sim 0$, followed by infinite microscopic evolution time $t\to\infty$.
\medskip
Utilising the hydrodynamic approximation, we can trivially write the conservation equations as
\begin{equation}\label{eq:hydroeqderivation}
\partial_{t_0} {\tt q}_i(x_0,t_0) + \partial_{x_0} \lim_{{\rm meso}}\langle j_{\boldsymbol w, i}(x_0,t_{\rm meso})\rangle_{\t{\rho}_{x_0,t_0}} = 0,
\end{equation}
which is the basis of our hydrodynamic equation. Here $\lim_{{\rm meso}}$ refers to the mesoscopic time window defined in eq. \eqref{times}, and we have defined ${\tt q}_i(x_0,t_0)=\lim_{{\rm meso}}\langle q_i(x_0,t_{\rm meso})\rangle_{\t{\rho}_{x_0,t_0}}$. The non-trivial task is to express the mesoscopic average of the currents in terms of ${\tt q}_i(x_0,t_0)$, which is the subject of the remainder of this section. Note that point II is crucial in establishing the irreversibility of the hydrodynamic equations based on \eqref{eq:hydroeqderivation}. In practice, the evolution over mesoscopic times at $x_0,t_0$ is obtained by linear response from the ramp state \eqref{eq:hydro1}: first performing a perturbation theory in $\partial_{x_0} \beta^l(x_0,t_0)$, and then taking the infinite (microscopic) time limit. Thus the general procedure for obtaining the hydrodynamic equations is the following:
\begin{eqnarray}
&&\langle o(x_0,t_0)\rangle_{\rm ini} \nonumber\\&\approx& \lim_{t\to\infty} \Big[\langle o(x_0,t) \rangle_{\t{\rho}_{x_0,t_0}}\nonumber\\ && \hspace{1cm} \mbox{expressing $\beta^l(x_0,t_0)$ as functions of ${\tt q}_i^{(t)}(x_0,t_0) := \langle q_i(x_0,t)\rangle_{\t{\rho}_{x_0,t_0}}$,}\nonumber\\ && \hspace{1cm} \mbox{expanding to first order in $\partial_{x_0}\beta^l(x_0,t_0)$}
\Big]\label{eq:method}
\end{eqnarray}
Under linear response from a ramp initial state, many observables grow linearly in time (such as currents of ballistically transported quantities). The fact that the long-time limit in the second line of \eqref{eq:method} exists and is finite as a function of ${\tt q}_i(x_0,t_0)$ is nontrivial, and must be ascertained by the explicit calculation.
Without loss of generality, it is sufficient to consider $x_0=t_0=0$. As written, \eqref{eq:hydro1} is sufficient to ascertain the diffusive hydrodynamics; neglecting the derivative term yields the Euler hydrodynamics, and including further terms in the series expansion of the argument of the exponential would yield higher derivative corrections to the hydrodynamic equation, which may or may not be physically sensible.
We denote $\t\rho = \t \rho_{0,0}$ and compute expectation values of charges $q_i(0,t)$ and their currents $j_{\boldsymbol{w},i}(0,t)$ in the state $\t \rho$; throughout we expand to first order in $\partial_{x_0} \beta^l(x_0,0)|_{x_0=0} \equiv \partial_x \beta^l$. In the following, expectation values denoted $\langle \cdots \rangle$ are taken with respect to the homogeneous stationary GGE state $ {\rho}_{\rm GGE} \propto e^{ -\beta^l(0,0) Q_l} $, in contrast with the case when they are taken with respect to the inhomogeneous ramp-state $\tilde{\rho}$ as $\langle \cdots \rangle_{\tilde{\rho}}$.
We use the following relation, valid for generic operators $A$ and $B$
\begin{equation}
e^{A + \epsilon B } = e^A + \epsilon \int_0^1 d\tau e^{\tau A}B e^{(1-\tau) A} + O(\epsilon^2)
\end{equation}
in order to expand the expectation values of charges and currents to first order in the derivatives, in terms of the KMB inner product \eqref{genKMB}. This gives, using the notation introduced in eq. \eqref{eq:evolwithw},
\begin{align}\label{eq:charges01}
\langle q_i(0, t \boldsymbol w ) \rangle_{\tilde{\rho}} &= \langle q_i \rangle - \partial_x \beta^k \int dy \,y\, ( q_k(y ) , q_i(0, t \boldsymbol w ) ),\\
\label{eq:currents01}
\langle j_{\boldsymbol{w}, i}(0, t \boldsymbol w ) \rangle_{\tilde{\rho}} &= \langle j_{\boldsymbol{w}, i} \rangle - \partial_x \beta^k \int dy\, y \, ( q_k(y ) , j_{ \boldsymbol{w},i }(0, t \boldsymbol w) ).
\end{align}
We have the following expression for integrated correlation functions on a homogeneous, stationary state:
\begin{eqnarray}
K[q_k;o] &=& \int dy\, y \,( q_k(y ) , o(0, t \boldsymbol{w} ) )\nonumber \\
&=& \frac{1}{2} \int dy\, y \int_{-t}^t ds\,\partial_s ( q_k(y ) , o(0, s \boldsymbol{w} ) ) \nonumber \\
&=& -\frac{1}{2} \int dy \int_{-t}^t ds\, ( j_{\boldsymbol{w}, k}(y ,s\boldsymbol{w}) , o(0,0 ) ),
\label{eq:K}
\end{eqnarray}
where we used PT symmetry to introduce the $s$-integral, and stationarity of the state together with the conservation equation to introduce the current $j_{\boldsymbol{w},k}$.
As per \eqref{eq:method}, we now need to express $\beta^k$ in terms of ${\tt q}_i^{(t)} = \langle q_i(0,t)\rangle_{\t\rho}$ and take the limit $t\to\infty$. This is obtained from \eqref{eq:charges01}, where the right hand side is considered as a function of $\beta^k$ and $\partial_x\beta^k$,
and using \eqref{eq:K} we have:
\begin{equation}\label{eq:qiqi}
\langle q_i\rangle = {\tt q}^{(t)}_i +
\partial_x \beta^k K[q_k;q_i]
= {\tt q}^{(t)}_i -
t \,\partial_x \beta^k (J_{\boldsymbol{w},k},q_i).
\end{equation}
Note that the second term on the right-hand side is small, as $t|\partial_x \beta^k| \ll v_{\rm micro}^{-1}$. Consider Eq.~\eqref{eq:currents01}: the Euler current $\langle j_{{\boldsymbol{w},i}}\rangle[\langle q\rangle]$ is known from the thermodynamics as a function of the charge averages, and we change variables from $\langle q \rangle$ to ${\tt q}_i^{(t)}$ within this function. Eq.~\eqref{eq:qiqi} then implies:
\begin{equation}
\langle j_{\boldsymbol{w},i} \rangle \equiv \langle j_{\boldsymbol{w},i} \rangle [\langle q \rangle] = \langle j_{\boldsymbol{w},i} \rangle[{\tt q}^{(t)}] - t\,\partial_x \beta^k \frac{\delta \langle j_{\boldsymbol{w},i} \rangle}{\delta \langle q_l \rangle} (J_{\boldsymbol{w},k},q_l) + \ldots,
\end{equation}
and therefore \eqref{eq:currents01} can be expressed as:
\begin{eqnarray}\label{eq:jwitw}
\langle j_{\boldsymbol{w},i}(0,t \boldsymbol w )\rangle_{\t\rho} &=&
{\tt j}_{\boldsymbol{w},i} +
\frc{\partial_x \beta^k}2 \int dy \int_{-t}^t ds\,
(j_{\boldsymbol{w},k}(y,s\boldsymbol{w}),j_{\boldsymbol{w}, i}(0,0) -
\frac{\delta \langle j_{\boldsymbol{w},i} \rangle}{\delta \langle q_l \rangle}q_l(0,0)) \nonumber\\ &=&
{\tt j}_{\boldsymbol{w},i} +
\frc{\partial_x \beta^k}2 \int dy \int_{-t}^t ds\,
(j_{\boldsymbol{w},k}(y,s\boldsymbol{w}),j_{\boldsymbol{w}, i})^C.
\end{eqnarray}
where in the first line we used the definition \eqref{eq:jGGE}, in the second, the chain rule for differentiation and the projection formula \eqref{eq:projection}. Taking the limit $t\to\infty$, this returns the hydrodynamic equation in the absence of external forces:
\begin{equation}
\partial_t{\tt q}_i+\partial_x\left({\tt j}_{\boldsymbol{w},i} + \frac{1}{2} \mathfrak L[\boldsymbol{w},\partial_x \boldsymbol\beta;j_{\boldsymbol{w},i}]\right)=0
\end{equation}
Two subtleties need to be clarified in obtaining this hydrodynamic equation in the final step.
First, in \eqref{eq:jwitw}, the Euler-scale current ${\tt j}_{\boldsymbol w,i}$ is evaluated within a GGE characterised by the ${\tt q}_i$'s, while the KMB inner product $(\cdot,\cdot)$ is evaluated within the GGE determined by $\boldsymbol\beta(0,0)$. These are {\em different GGEs}: expectation values of charge densities within the GGE with potentials $\boldsymbol\beta(0,0)$ are $\langle q_i\rangle\ne{\tt q}_i$. The correction term on the right-hand side of \eqref{eq:qiqi} means that ${\tt q}_i$ can still be written as a GGE average of charge densities, by the expected bijectivity between potentials and charge densities, but with different associated thermodynamic potentials as per \eqref{eq:hydrostate}, say $\boldsymbol\beta^{\rm hydro}(0,0)$. However, the difference is first-order in derivatives, and hence in \eqref{eq:jwitw} we can use $\boldsymbol\beta^{\rm hydro}(0,0)$ for the KMB inner product, the error being second-order in derivatives.
Second, in \eqref{eq:jwitw}, $\partial_x\boldsymbol\beta$ is the slope of the ramp in the density matrix $\t\rho$. We must equate the slope $\partial_x\boldsymbol\beta$ with the corresponding macroscopic spatial derivative $\partial_{x_0}\boldsymbol\beta^{\rm hydro}(x_0,0)|_{x_0=0}$, connecting neighbouring fluid cells. This can be done using \eqref{eq:qiqi}, written for arbitrary $x_0$, and taking the derivative $\partial_{x_0}$. As $t\partial_x \beta^k\ll v_{\rm micro}^{-1}$, the correction term is one derivative order smaller, and thus again to leading order $\partial_x\boldsymbol\beta = \partial_{x_0}\boldsymbol\beta^{\rm hydro}(x_0,0)|_{x_0=0}$, which is sufficient in \eqref{eq:jwitw}. With this identification, the results of section \ref{sec:intro} are technically expressed in terms of $\boldsymbol\beta^{\rm hydro}$ rather than the potentials introduced in \eqref{eq:density_mat}, as these form the most convenient basis to describe the evolution.
\subsection{Diffusive hydrodynamics: inhomogeneous fields }
We now turn our attention again to spatially modulated fields $w^i(x)$ in the Hamiltonian \eqref{eq:Hamiltonian}. As we did for the state above, we can also expand the Hamiltonian locally around any point $x_0$ as
\begin{equation}\label{eq:pert}
H= w^i(x_0) Q_i - \mathfrak{f}^i(x_0) \int dx\, (x-x_0) q_i(x) + \ldots,
\end{equation}
provided similar conditions hold. As we are interested in a hydrodynamic theory up to second derivatives, it is in principle necessary to also include the second-order term in this expression. However, imposing PT symmetry for all densities $q_i(x)$ makes such contributions to the hydrodynamic equations vanish, see appendix \ref{app:PT}.
The continuity equation for the charge and current density operators now has an extra contribution, due to the new term in the Hamiltonian, and reads
\begin{equation}\label{eq:newCE}
\partial_t q_i( x,t) = - \partial_x j_{\boldsymbol{w},i} (x,t ) + j_{i,\boldsymbol{\mathfrak{f}}}(x,t) + \ldots,
\end{equation}
where we have used the result \cite{SciPostPhys.2.2.014} (a higher-dimensional generalisation is shown in \cite{Doyon2021}):
\begin{equation}
{\rm i} \left[q_k(y),q_i(x)\right]=\partial_xj_{k,i}(x)\delta(y-x)+(j_{k,i}(x)+j_{i,k}(x))\delta'(y-x)+\left(\mathrm{higher}\;\mathrm{derivatives}\right).
\end{equation}
It should be noted that now $[ H, Q_i] \ne 0$, but, as mentioned, in the hydrodynamic framework, the full set of charges for the homogeneous part of \eqref{eq:pert} is to be considered, as the correction is most aptly treated through a separation of scales. This means that we assume that the typical scale of spatial modulation of the external fields, $\ell_{\mathfrak f} = {\rm min}_i (|\mathfrak f^i|^{-1})$, is large enough. As mentioned, $\ell_{\mathfrak f}$ naturally determines the scale of spatial modulation of the local fluid state; hence we expect to have, at all large enough times, $\ell\approx\ell_{\mathfrak f}$, and the result of the hydrodynamic framework is an equation valid up to, and including, order $\ell^{-2}\approx \ell_{\mathfrak f}^{-2}$. The physical interpretation is that the correction generates a fast relaxation towards a local hydrodynamic state as in eq. \eqref{eq:hydrostate} on mesoscopic times, and the effect on the state due to the perturbations can be evaluated by perturbation theory in the interaction picture. Again, as in our discussion of the case without external forces, taking the long-time limit in the perturbation theory amounts to taking a mesoscopic time scale and accounting for local relaxation, $t_{\rm micro}\ll t_{\rm meso} \ll {\rm min}_{l,x_0} ((v_{\rm micro}|\partial_{x_0}\beta^l(x_0,t_0)|)^{-1},\,(v_{\rm micro}|\mathfrak f^l(x_0)|)^{-1})$. Once the average currents have been obtained in fluid cells after mesoscopic times, we insert them into \eqref{eq:newCE} in order to obtain the final hydrodynamic equation.
We employ perturbation theory to first order in the perturbation strength, $ \mathfrak{f}^i(0)$, as higher orders contain higher spatial derivatives and powers thereof. Under the full time evolution generated by $H$ in eq. \eqref{eq:pert}, we find, for a generic local operator $o(x,t) = e^{i H t} o(x) e^{-i H t}$ at position $x=0$,
\begin{equation}
o(0,t) = o(0,t \boldsymbol{w}) - {\rm i} \int_0^t d s\, \int dx \ x \ \mathfrak{f}^i(0) \ [q_i(x,s \boldsymbol{w}),o(0,t \boldsymbol{w})] +\ldots,
\end{equation}
where, as before, we denoted operators evolved with respect to the portion of the Hamiltonian with flat fields in eq. \eqref{eq:pert} by means of the time arguments $t \boldsymbol{w}$, as in eq. \eqref{eq:evolwithw}. A single time argument still refers to the real time under the full evolution. The hydrodynamic expansion of the previous section in the presence of these new terms now reads
\begin{align}\label{eq:q_pretherm}
& \langle q_i(0,t) \rangle_{\tilde{\rho}} = \langle q_i \rangle - (\partial_x \beta^k) K[q_k;q_i] - \mathfrak{f}^k \overline{K}[q_k; q_i],
\end{align}
\begin{align}\label{eq:j1_pretherm}
& \langle j_{\boldsymbol{w},i}(0,t) \rangle_{\tilde{\rho}} = \langle j_{\boldsymbol{w},i} \rangle - (\partial_x \beta^k) K[q_k ; j_{\boldsymbol{w},i}] - \mathfrak{f}^k \overline{K}[q_k; j_{\boldsymbol{w},i}],
\end{align}
\begin{align}\label{eq:j2_pretherm}
& \langle j_{i,\boldsymbol{\mathfrak{f}}}(0,t) \rangle_{\tilde{\rho}} = \langle j_{i,\boldsymbol{\mathfrak{f}}} \rangle - (\partial_x \beta^k) K[q_k ;j_{i,\boldsymbol{\mathfrak{f}}}] - \mathfrak{f}^k \overline{K}[q_k;j_{i,\boldsymbol{\mathfrak{f}}}],
\end{align}
where $K[a;b]$ is defined in \eqref{eq:K}, and we define the integrated correlation functions of the commutators as
\begin{equation}\label{eq:KKs}
\overline{K}[a;b] = {\rm i} \int_0^t d s\, \int dy \,y \,\langle [a(y,s \boldsymbol{w}),b(0,t \boldsymbol{w})] \rangle.
\end{equation}
We now invoke hydrodynamic separation of scales as in the previous section with homogeneous Hamiltonians, assuming that the space-time dependence of the system is contained entirely in the potentials $\beta^k(x,t)$, which vary slowly in space and time. Here, this consists of taking the infinite time limit of Eqs.~\eqref{eq:q_pretherm}--\eqref{eq:j2_pretherm}, which describe the slow drift in the values of the charges after relaxation under the homogeneity-breaking perturbation has occurred, where we again assume that the limit is approximately reached at timescales much shorter than the timescales of hydrodynamic evolution. Proceeding analogously to the previous section, and taking the infinite time limit in eq. \eqref{eq:KKs}, we obtain the following diffusive equation
\begin{align}\label{eq:finalperturb}
\partial_t {\tt q}_i &+\partial_x \Big[
{\tt j}_{\boldsymbol{w},i} + \frac{1}{2} \mathfrak{L}[ \boldsymbol{w} , \partial_x\boldsymbol{\beta}; j_{\boldsymbol{w},i}]
- \frac{1}{2} \mathfrak{F}[ \boldsymbol{\mathfrak{f}} , j_{\boldsymbol{w},i}] \Big]
= { \tt j}_{i, \boldsymbol{\mathfrak{f}}} + \frac{1}{2} \mathfrak{L}[ \boldsymbol{w} , \partial_x\boldsymbol{\beta} ;j_{i,\boldsymbol{\mathfrak{f}}}]
- \frac{1}{2} \mathfrak{F}[ \boldsymbol{\mathfrak{f}} ,j_{i,\boldsymbol{\mathfrak{f}}}] ,
\end{align}
with the new integrated correlator defined by
\begin{align}\label{deffrakF}
& \mathfrak{F}[ \boldsymbol{\mathfrak{f}} ,o] = { {\rm i} } \int_{-\infty}^{\infty} ds \int dy \ y \ \mathfrak{f}^k \langle [q_{k}(y, \boldsymbol{w} s ) , o(0,0) - \frc{\partial \langle o\rangle}{\partial \langle q_i\rangle} q_i(0,0)] \rangle ,
\end{align}
where again we have used PT symmetry in order to symmetrise the integral over time. Here, the state $\langle\cdots\rangle$ is the GGE at the fluid cell $(x,t)$.
The latter quantity can be rewritten in terms of the usual Onsager coefficients by employing the Kubo–Martin–Schwinger (KMS) relation \cite{PhysRev.115.1342,Doyon2021}. This allows us to rewrite the expectation value of the commutator in any homogeneous, stationary GGE as
\begin{align}
\langle [q_k(y,s \boldsymbol{w}), o(0,0)] \rangle =& - {\rm i} \int_0^1 d\lambda \,\beta^l \partial_y \langle j_{l,k}(y,s \boldsymbol{w}- {\rm i} \lambda \boldsymbol{\beta}) o(0,0) \rangle
\nonumber \\=& - {\rm i} \, \partial_y (j_{\boldsymbol\beta,k}(y,s \boldsymbol{w}),o(0,0)).
\end{align}
Note that the spatial derivative along with homogeneity of the state allows us to introduce the connected correlation function.
Therefore we have, integrating by parts over $y$ in \eqref{deffrakF},
\begin{align}
\mathfrak{F} [ \boldsymbol{\mathfrak{f}} ,o] =
-\mathfrak{L}[\boldsymbol{\beta}, \boldsymbol{\mathfrak{f}} ; o],
\end{align}
which finally gives our main result \eqref{main_result}.
\section{Integrable systems: quasiparticle expression and lower bounds for Onsager coefficient}
In integrable systems, the local GGE state is most conveniently characterised by the Bethe rapidities $\theta$, whose distribution function (root density) in the fluid cell located at $(x,t)$ is denoted $\rho_{\rm p }(\theta; x,t)$. This object specifies the density of quasiparticles with rapidity $\theta$ at position $(x,t)$, and is related to the hydrodynamic variables by
\begin{equation}
{\tt q}_i(x,t) = \int \ d \theta \rho_{\rm p }(\theta;x,t) h_i(\theta),
\end{equation}
where $h_i(\theta)$ are the single-particle eigenvalues of the charges $Q_i$. With a sufficiently complete set of charges, it is possible to invert this expression, so there is a bijection between the root density and the expectation values of the charges. We now introduce the dressing operation on functions defined over $\mathbb{R}$. The action of a linear integral operator is:
\begin{equation}
A\cdot h:=\int d\alpha A(\theta,\alpha)h(\alpha), \quad
\quad h \cdot A:=\int d\alpha h(\alpha) A(\alpha,\theta),
\end{equation}
with which we can define the dressed function
\begin{equation}
h^{\rm dr} = (1- T n)^{-1} \cdot h,
\end{equation}
where $[Tn](\theta,\alpha)=T(\theta,\alpha)n(\alpha)$. Here $T$ is the scattering shift of the model, independent of the state, and the filling function is $n = 2\pi \rho_{\rm p }/(p')^{\rm dr}$, where $p(\theta)$ is the eigenvalue of the momentum operator. In the following we shall denote $(p')^{\rm dr} = k'$.
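In practice, the dressing operation is easily evaluated numerically by discretising the rapidity line. The following minimal sketch (an illustration only, not code used for the results of this paper) assumes the Lieb-Liniger scattering kernel in one common convention and an arbitrary smooth filling function; the linear system $(1-Tn)\cdot h^{\rm dr}=h$ is solved directly on the grid.
\begin{verbatim}
import numpy as np

c = 1.0                                  # interaction strength (assumed value)
theta = np.linspace(-10.0, 10.0, 801)    # rapidity grid
dth = theta[1] - theta[0]

# Scattering kernel T(theta, alpha): Lieb-Liniger form in one common convention
T = (c / np.pi) / ((theta[:, None] - theta[None, :])**2 + c**2)
# An arbitrary smooth filling function 0 < n(theta) < 1, for illustration
n = 1.0 / (1.0 + np.exp(0.5 * theta**2 - 1.0))

def dress(h):
    # Solve (1 - T n) h_dr = h, discretising the alpha-integral by the rectangle rule
    M = np.eye(theta.size) - T * n[None, :] * dth
    return np.linalg.solve(M, h)

kprime = dress(np.ones_like(theta))      # k' = (p')^dr, using p(theta) = theta
rho_p = n * kprime / (2.0 * np.pi)       # root density from n = 2*pi*rho_p / k'
\end{verbatim}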
The main task is to compute the hitherto unknown expressions for the extended Onsager matrices $\mathfrak{L}[\boldsymbol{a}, \boldsymbol{b} ; j_{\boldsymbol{c},\boldsymbol{d}}]$ defined by \eqref{eq:def_Onsager}. In integrable models these can be computed exactly whenever { at least one of the currents $j_{\boldsymbol a,\boldsymbol b}$ or $j_{\boldsymbol c,\boldsymbol d}$ projects entirely on the space spanned by linear and quadratic fluctuations of the local conserved densities \cite{10.21468/SciPostPhys.9.5.075,1912.01551}. This is argued to be the case if $\boldsymbol a = \boldsymbol w$ or $\boldsymbol c = \boldsymbol w$, as, after hydrodynamic reduction, currents of the type $j_{\boldsymbol w,\boldsymbol b}$ lie in the hydrodynamic subspace that is invariant under higher flows, which is argued to be spanned by quadratic charges \cite{Durnin2020,1912.01551}.}
In this case, the only contributions to diffusion are given by two-body scattering amongst quasiparticles: the two particle-hole contribution in the expansion over intermediate states $\mathfrak{L}[\boldsymbol{a}, \boldsymbol{b} ; j_{\boldsymbol{c},\boldsymbol{d}}]= \mathfrak{L}_{2-\rm ph}[\boldsymbol{a}, \boldsymbol{b} ; j_{\boldsymbol{c},\boldsymbol{d}}] $ \cite{PhysRevLett.121.160603,1911.01995,PhysRevB.98.220303,RevCorrelations}. The idea that such two-body processes are responsible for diffusion in integrable models first appeared before the advent of GHD in \cite{PhysRevLett.83.2293}. We compute the two particle-hole contribution analogously to Ref.~\cite{10.21468/SciPostPhys.6.4.049} via a form-factor expansion, see appendix \ref{sec:FFComputation}.
{ However, currents of the type $j_{\boldsymbol a,\boldsymbol b}$ for $\boldsymbol a\neq \boldsymbol w$ do not entirely overlap with the space of linear and quadratic fluctuations. Therefore, while three of the four Onsager coefficients in \eqref{main_result} are fully given by their two particle-hole contributions, the coefficient $\mathfrak{L}[\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}} ; j_{i,\boldsymbol{\mathfrak{f}}}]$ is not, see appendix \ref{sec:FFComputation} for more details. Nevertheless, the lower bound $\mathfrak L\geq \mathfrak{L}_{2-\rm ph}$ for the extended Onsager matrix \eqref{OnsagerMatrix}
follows from the hydrodynamic projection mechanism and the identification of the two particle-hole contribution with the projection onto the quadratic space \cite{10.21468/SciPostPhys.9.5.075,1912.01551}.}
Our results read
\begin{eqnarray}\label{Lintegrable1}
\mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{\beta} ; j_{\boldsymbol{w},i}] &=& h_i \cdot \mathfrak{D} \mathsf C \cdot \partial_x( \beta^k h_k ),\\
\mathfrak{L}[\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}} ; j_{\boldsymbol{w},i}] &=& h_i \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}}} \mathsf C \cdot \partial_\theta( \beta^k h_k),\\
\mathfrak{L}[\boldsymbol{w}, \partial_x\boldsymbol{\beta} ; j_{i,\boldsymbol{\mathfrak{f}}}] &=& \partial_\theta h_i \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}}} \mathsf C \cdot \partial_x (\beta^k h_k) ,\\
\mathfrak{L}[\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}} ; j_{i,\boldsymbol{\mathfrak{f}}}] & = & \partial_\theta h_i \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}} \mathsf C \cdot \partial_\theta( \beta^kh_k ) .
\label{Lintegrable4}
\end{eqnarray}
The kernels in these equations are defined below. All kernels can be written in terms of the full effective velocity given by the dynamics of the system Hamiltonian \eqref{eq:Hamiltonian} at position $x$, namely
\begin{equation}
v^{\rm eff}_{\boldsymbol{w}}(\theta; x, t )= \frac{ w^i(x) (h_i')^{\rm dr}(\theta; x ,t )}{k'(\theta; x , t )},
\end{equation}
and of the effective acceleration \cite{SciPostPhys.2.2.014},
\begin{equation}
a^{\rm eff}_{\boldsymbol{\mathfrak f}}(\theta; x ,t ) = \frac{ \mathfrak f^i(x) (h_i)^{\rm dr}(\theta; x ,t )}{k'(\theta; x , t )}.
\end{equation}
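As a simple check in the non-interacting limit (vanishing scattering shift, so that the dressing is trivial), take the trapped Galilean gas of the earlier illustration, with $p(\theta)=\theta$, $h_0(\theta)=1$ and $h_2(\theta)=\theta^2/2$: then $k'=1$ and
\begin{equation}
v^{\rm eff}_{\boldsymbol w}(\theta;x,t) = \theta , \qquad a^{\rm eff}_{\boldsymbol{\mathfrak f}}(\theta;x,t) = -\partial_x V(x) ,
\end{equation}
i.e. the free Newtonian drift and acceleration in the trap.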
The kernels in rapidity space are given by
\begin{align}\label{eq:DD}
\mathfrak{D} \mathsf C = (1- n T )^{-1}\cdot \frac{\delta_{\theta_1,\theta_2} \int d\alpha {\kappa_{\mathfrak{D}}(\theta_1,\alpha)}{ } - {\kappa_{\mathfrak{D}}(\theta_1,\theta_2)}{} }{k'(\theta_1) k'(\theta_2)} \cdot (1- T n )^{-1},
\end{align}
and similarly for $\mathfrak D_{\boldsymbol{\mathfrak f}},\,\mathfrak D_{\boldsymbol{\mathfrak f}^2}$ with different functions $\kappa_{\mathfrak D_{\boldsymbol{\mathfrak f}}},\,\kappa_{\mathfrak D_{\boldsymbol{\mathfrak f}^2}}$,
where the susceptibility kernel is given by
\begin{equation}
\mathsf C= (1- n T )^{-1} \cdot \rho_{\rm p} f \cdot (1- T n )^{-1},
\end{equation}
with the function $f(\theta)$ incorporating the statistics of quasiparticles (see for example the review \cite{RevCorrelations}). Notice that $ \rho_{\rm p} f$ denotes the diagonal operator $\rho_{\rm p}(\theta) f(\theta) \delta(\theta - \alpha)$, and similarly for its inverse.
The function $\kappa_{\mathfrak{D}}$ is defined as (neglecting the $x,t$ dependence)
\begin{align}\label{eq:ffun}
\kappa_{\mathfrak{D}}(\theta_1,\theta_2) & = k'(\theta_1) k'(\theta_2) n(\theta_1) f(\theta_1) n(\theta_2) f(\theta_2) (T^{\rm dr}(\theta_1,\theta_2))^2 |v^{\rm eff}_{\boldsymbol{w}}(\theta_1) - v^{\rm eff}_{\boldsymbol{w}}(\theta_2)| ,
\end{align}
and the others are defined by including ratios of effective velocity and acceleration differences
\begin{eqnarray}
\kappa_{\mathfrak D_{\mathfrak f}}(\theta_1,\theta_2) &=& \frac{a^{\rm eff}_{\boldsymbol{\mathfrak f}}(\theta_1)-a^{\rm eff}_{\boldsymbol{\mathfrak f}}(\theta_2)}{v^{\rm eff}_{\boldsymbol{w}}(\theta_1) - v^{\rm eff}_{\boldsymbol{w}}(\theta_2)}\times \kappa_{\mathfrak{D}}(\theta_1,\theta_2) \label{eq:kdf}\\
\kappa_{\mathfrak D_{\mathfrak{ f}^2}}(\theta_1,\theta_2) &\stackrel{{2-\rm ph}}=& \lt(\frac{a^{\rm eff}_{\boldsymbol{\mathfrak f}}(\theta_1)-a^{\rm eff}_{\boldsymbol{\mathfrak f}}(\theta_2)}{v^{\rm eff}_{\boldsymbol{w}}(\theta_1) - v^{\rm eff}_{\boldsymbol{w}}(\theta_2)}\rt)^2\times \kappa_{\mathfrak{D}}(\theta_1,\theta_2) \label{eq:kdf2}.
\end{eqnarray}
Here we assumed a symmetric scattering shift $T(\theta,\alpha)= T(\alpha, \theta)$.
The general multi-linear expression \eqref{Labcd} for the Onsager coefficient makes it clear that in thermal states \eqref{LDA_state}, stationarity occurs, with the following cancellation:
\begin{align}
& \mathfrak{L}^{\rm thermal}[\boldsymbol{w}, \partial_x\boldsymbol{\beta} ; j_{\boldsymbol{w},i}]+ \mathfrak{L}^{\rm thermal}[\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}} ; j_{\boldsymbol{w},i}]=0, \\
& \mathfrak{L}^{\rm thermal}[\boldsymbol{w}, \partial_x\boldsymbol{\beta} ; j_{i,\boldsymbol{\mathfrak{f}}}] + \mathfrak{L}_{2-\rm ph}^{\rm thermal}[\boldsymbol{ \beta},\boldsymbol{\mathfrak{f}} ; j_{i,\boldsymbol{\mathfrak{f}}}] =0.
\end{align}
Ultra-local quantities, which have an arbitrary chemical potential in the thermal states, correspond in the quasi-particle basis to quantities $Q_0$ for which $h_0(\theta)$ is constant. It is possible to use the following relations
\begin{equation}\label{eq:thermal1}
\partial_x( \beta^i h_i) = - (1- T n) \cdot \frac{\partial_x n}{ n f(n)} ,
\end{equation}
\begin{equation}\label{eq:thermal2}
\partial_\theta( \beta^i h_i) = - (1- T n) \cdot\frac{\partial_\theta n}{ n f(n)} .
\end{equation}
to verify that expressions \eqref{Lintegrable1}-\eqref{Lintegrable4} do satisfy the above cancellations in a thermal state. In thermal states, the occupation number satisfies the following relations:
\begin{align}\label{eq:thermalpp}
& \partial_\theta n^{\rm thermal} = - \beta_0 n f v^{\rm eff}_{\boldsymbol{w}} k' , \quad \quad
\partial_x n^{\rm thermal} = \beta_0 n f a^{\rm eff}_{\boldsymbol{\mathfrak{f}}} k'
\end{align}
which, together with $\partial_x \beta^i = -\beta_0 \mathfrak{f}^i$ in the thermal state, implies the above cancellations between the Onsager coefficients.
We are now in a position to write Eq.~\eqref{main_result} for quasiparticles, using the relation $\delta n = \frac{n}{\rho_{\rm p}} (1- n T) \delta \rho_{\rm p}$, where $\delta$ is a variation with respect to the rapidity or the spatial argument. We use this to obtain the following hydrodynamic equation
\begin{align}\label{main_particle_result}
\partial_t \rho_{\rm p } + \partial_x& \left( v^{\rm eff}_{\boldsymbol{w}} \rho_{\rm p }\right) + \partial_\theta \left( a_{\boldsymbol{\mathfrak{f}}}^{\rm eff} \rho_{\rm p } \right) = \nonumber \\& \frac{1}{2} \partial_x (\mathfrak{D} \cdot \partial_x \rho_{\rm p }) + \frac{1}{2} \partial_x ( \mathfrak{D}_{\boldsymbol{\mathfrak{f}}} \cdot \partial_\theta \rho_{\rm p } )
+ \frac{1}{2} \partial_\theta (\mathfrak{D}_{\boldsymbol{\mathfrak{f}}} \cdot \partial_x \rho_{\rm p }) + \frac{1}{2} \partial_\theta ( \mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}} \cdot \partial_\theta \rho_{\rm p} ),
\end{align}
where we have used the known results for the Euler currents
\cite{Bertini16,PhysRevX.6.041065,SciPostPhys.2.2.014,PhysRevX.10.011054,10.21468/SciPostPhys.8.2.016,2004.07113}
\begin{align}
& { \tt j}_{\boldsymbol{w}, i} (x,t )= \int d\theta \ \rho_{\rm p}(\theta; x, t) v_{\boldsymbol{w}}^{\rm eff}(\theta; x ,t ) h_i(\theta), \\
& { \tt j}_{i, \boldsymbol{\mathfrak{f}}}(x,t) = \int d\theta \ \rho_{\rm p}(\theta; x ,t ) a_{\boldsymbol{\mathfrak{f}}}^{\rm eff}(\theta; x ,t ) h_i'(\theta).
\end{align}
A notable consequence of eq. \eqref{main_particle_result} is that motion of the fluid due to inhomogeneities in the state always corresponds to spatial derivatives, while the effect of the external forces is contained in rapidity derivatives. Therefore eq. \eqref{main_particle_result} describes convective and diffusive motions for the quasiparticles, equally in the physical and quasi-momentum coordinates of the fluid. In particular the kernel $ \mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}}$ can be understood as the effective diffusion constant of the quasiparticles in momentum space. The presence of the latter is directly an effect of breaking the underlying integrability of the system by means of the forces $\mathfrak{f}^i$, and it is proportional to the square of the force strength, similarly to Fermi's golden rule terms in homogeneous settings \cite{Durnin2020,2103.11997,2005.13546}. We stress that our analytical expression for $\mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}}$, given in terms of the function \eqref{eq:kdf2}, generically constitutes only a lower bound for this effective diffusion constant, where only two-body scattering processes are taken into account. Higher particle scattering processes can indeed non-trivially contribute to this kernel, { nevertheless the form \eqref{Lintegrable4} is expected to hold at all orders.}
As the analysis here is conducted for integrable systems with only one quasiparticle type, the extension to systems with multiple species is easily done by replacing the rapidity $\theta$ with the global index $(\theta,s)$, and inserting a sum over particle species to accompany each rapidity integral.
It is easy to check that the total density of quasiparticles $N$ and the total energy $E = \langle H \rangle$ are constants of motion. We have
\begin{equation}
\partial_t N = \int d\theta \int dx \ \partial_t \rho_{\rm p}(\theta;x,t) =0
\end{equation}
by the structure of derivatives in the equation. Regarding the total energy
\begin{equation}
\partial_t E = \int d\theta \int dx \ h_i(\theta) w^i(x)\ \partial_t \rho_{\rm p}(\theta;x,t)
\end{equation}
it is already established that the convective terms $\partial_x \left( v^{\rm eff}_{\boldsymbol{w}} \rho_{\rm p }\right) + \partial_\theta \left( a_{\boldsymbol{\mathfrak{f}}}^{\rm eff} \rho_{\rm p } \right)$ in eq. \eqref{main_particle_result} conserve total energy \cite{SciPostPhys.2.2.014}, while for the diffusive terms we have, after integration by parts
\begin{align}
\partial_t E & =\frac{1}{2} \int dx \int d\theta \ \Big[ \mathfrak{f}^i h_i \cdot \mathfrak{D} \cdot \partial_x \rho_{\rm p } + \mathfrak{f}^i h_i \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}}} \cdot \partial_\theta \rho_{\rm p }
\nonumber \\& - w^i h_i' \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}}} \cdot \partial_x \rho_{\rm p } - w^i h'_i \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}} \cdot \partial_\theta \rho_{\rm p} \Big]=0,
\end{align}
as, by their definitions (eq. \eqref{eq:DD} together with eqs. \eqref{eq:kdf} and \eqref{eq:kdf2}), we have
\begin{align}
& \mathfrak{f}^i h_i \cdot \mathfrak{D} = w^i h_i' \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}}} , \\
& \mathfrak{f}^i h_i \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}}} = w^i h'_i \cdot \mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}}. \label{eq:energyconsconstraint}
\end{align}
The positive entropy increase \eqref{eq:entropyIncrease} can also be written explicitly in the quasiparticle basis by projecting it into the two particle-hole sector. Denoting $\Delta_{x} n= (1- T n) \cdot \frac{\partial_x n}{ n f(n)} $ and $\Delta_{\theta} n= (1- T n) \cdot \frac{\partial_\theta n}{ n f(n)}$, the entropy increase can be written as a $2$-by-$2$ block quadratic form:
\begin{align}\label{eq:entropyLower}
\partial_t S & = \frac{1}{2} \begin{pmatrix} \Delta_x n & \Delta_\theta n \end{pmatrix} \cdot \begin{pmatrix} \mathfrak{D}\mathsf C & \mathfrak{D}_{\boldsymbol{\mathfrak{f}}}\mathsf C \\ \mathfrak{D}_{\boldsymbol{\mathfrak{f}}} \mathsf C & \mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}} \mathsf C \end{pmatrix} \cdot \begin{pmatrix} \Delta_x n \\ \Delta_\theta n \end{pmatrix} .
\end{align}
{ Note that both $ \mathfrak{D}\mathsf C$ and $\mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}} \mathsf C$ are positive semi-definite operators, as is the full matrix of operators on the right-hand side of \eqref{eq:entropyLower}.} As our explicit expression for $\mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}}\mathsf C$ is a lower bound, substituting it into eq. \eqref{eq:entropyLower} provides a lower bound for entropy production. However, such a lower bound still ensures positive entropy production and the vanishing of entropy generation on thermal states, as can be easily verified using eq. \eqref{eq:thermalpp}.
\section{Thermalisation in the Toda gas}\label{sec:toda}
\subsection{Thermodynamics}
The most immediate consequence of the formalism derived in the previous sections is that integrable systems perturbed by couplings to the local charge densities thermalise in the diffusive regime. This is a prediction accessible by molecular dynamics simulations of classical systems. We choose as an example the integrable Toda system, whose Hamiltonian in the presence of an external field coupling to the energy can be written
\begin{equation}\label{Toda_Ham}
H=\sum_{i=1}^N\frac{V(x_i)}{2}\left(p_i^2+e^{-(x_{i+1}-x_i)}+e^{-(x_i-x_{i-1})}\right),
\end{equation}
where we have absorbed the effect of the external potential into a single prefactor $V(x_i)$. In fact, in the integrable Toda system there are two complementary interpretations, whose notions of space differ. In the first case, which we consider here, the Hamiltonian \eqref{Toda_Ham} is viewed as describing a gas of particles with position $x_i$, which is however not invariant under permutations of these coordinates. In this case the physical space is parameterised by the $x_i$, and the choice of external field in \eqref{Toda_Ham} reflects our consideration of the gas picture. In the complementary chain picture, the physical space is parameterized by the index $i$, and we would have instead a potential $V_i$. Nevertheless, the two systems are expected to be thermodynamically equivalent, and while the dynamics of the two systems will differ in the presence of external fields, we expect that our general results will apply to both, with the suitable definition of physical space. See \cite{Doyon2019} for a detailed discussion of the relationship between the gas and chain pictures.
We will take periodic boundary conditions $x_{N+1}=x_1+L$ and $x_{0}=x_N-L$; in the absence of the external field, such that $V(x_i)=1$, the system with these boundary conditions is integrable. The full thermodynamics of the integrable system was recently elucidated in \cite{doyon_generalised_2019,spohn_generalized_2019} for the open system, following earlier analyses of thermal states (see the review \cite{cuccoli_thermodynamics_1994} and references therein). The dynamics of the open system are unbounded; however, one can bound them by introducing a pressure term, which, under the equivalence of ensembles, is expected to provide results equivalent to those of the periodic system. We define the variable $R=x_N-x_1$, which is approximately $L$ in the periodic case due to energetic considerations, as long as $L\sim N$, which we shall impose. The dynamics of the integrable Toda system in various contexts have recently been studied both analytically within the context of Euler-GHD and numerically \cite{Bulchandani_2019,spohn_ballistic_2020,spohn_hydrodynamic_2021,mendl_high-low_2021}.
In order to make the connection between the numerical results and the analytical predictions, we define numerical `fluid cells' of width $w\gg L/N$, and compare the average values of observables within these cells in the stationary state to those in the state defined by the maximal entropy condition \eqref{eq:conditionSmax}. This process is simplified by the fact that, in the thermal state, the thermodynamics of the Toda gas can be solved explicitly. In the Toda gas, there are two convenient ensembles which can be used, the Gibbs ensemble in which $N$ is fixed and $R$ is fluctuating, and the Landau ensemble in which $R$ is fixed and $N$ fluctuates. In the simulation procedure, the bins are of fixed width, and therefore the latter ensemble is the relevant one, although it is convenient to first evaluate the former, in order to express the thermodynamics in the latter. We have
\begin{align}
\mathcal{Z}_\mathrm{Gibbs}=&\int \prod_{i=1}^Ndx_idp_i\exp\left(\frac{-\beta p_i^2}{2}\right)\exp\left(-\beta e^{-(x_{i+1}-x_i)}-P(x_{i+1}-x_i)\right)\sim e^{-Ng}
\end{align}
\begin{align}
\mathcal{Z}_\mathrm{Landau}=&\sum_{N=1}^\infty e^{\mu N}\int \prod_{i=1}^Ndx_idp_i\exp\left(\frac{-\beta p_i^2}{2}\right)\exp\left(-\beta e^{-(x_{i+1}-x_i)}\right)\delta(x_N-x_1-R)\sim e^{-Rf}.
\end{align}
The thermodynamics in the Gibbs ensemble is found to be
\begin{equation}\label{toda_therm}
\varepsilon(P,\beta)=\frac{\exv{H}}{N}=\frac{2P+1}{2\beta}\;,\;\;\nu(P,\beta)=\frac{\exv{R}}{N}=\log(\beta)-\partial_P\log\Gamma(P).
\end{equation}
We relate the two ensembles using \cite{Doyon2019}:
\begin{equation}
f(\mu,\beta)+P=\frac{g(P,\beta)-\mu}{\nu},
\end{equation}
where the ensembles are chosen such that the density $\nu^{-1}$ is the same in both; in the Landau ensemble $\nu=R/\exv{N}$. This allows the calculation of expectation values in the Landau ensemble, and thus a comparison of the LDA state predictions against the numerical results. There is a subtlety when comparing with the numerics, arising from the fact that $R$ is not necessarily positive under the dynamics, or in the thermodynamics. If this occurs, then matching the thermodynamics as calculated through bins of fixed width becomes ill-defined. Therefore the initial state is chosen such that $R>0$ for every sufficiently large subset of particles in the gas for all times.
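As an illustration of how these relations are evaluated in practice, the following minimal sketch (Python; not the code used for the simulations) computes the Gibbs-ensemble quantities of eq.~\eqref{toda_therm} from the single-bond partition function $Z_1=\sqrt{2\pi/\beta}\,\beta^{-P}\Gamma(P)$, and scans the pressure $P$ to match a target density $\nu^{-1}=1$; the scanning range and resolution are choices made here purely for illustration.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln, digamma

def toda_gibbs(P, beta):
    """Free energy g, energy eps and specific volume nu per particle
    in the Gibbs ensemble of the classical Toda gas."""
    log_Z1 = 0.5 * np.log(2.0 * np.pi / beta) - P * np.log(beta) + gammaln(P)
    g = -log_Z1                           # Z_Gibbs ~ exp(-N g)
    eps = (2.0 * P + 1.0) / (2.0 * beta)  # <H>/N
    nu = np.log(beta) - digamma(P)        # <R>/N = dg/dP
    return g, eps, nu

# scan the pressure at fixed temperature to match a target density nu^-1 = 1
beta = 1.0
Ps = np.linspace(0.05, 5.0, 500)
nus = np.array([toda_gibbs(P, beta)[2] for P in Ps])
P_star = Ps[np.argmin(np.abs(nus - 1.0))]
print(P_star, toda_gibbs(P_star, beta))
\end{verbatim}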
\subsection{Numerical Results}
\begin{figure}
\centering
\includegraphics[width=0.485\textwidth]{n.pdf}
\includegraphics[width=0.485\textwidth]{en_R.pdf}
\caption{Numerical value of and LDA predictions for the energy density and number density of the stationary state. In this figure $N=1000$.}
\label{fig:LDA}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{temp_profile_1k.pdf}
\caption{Convergence of the global temperature $\beta_0^{-1}=N^{-1}\sum_{i=1}^NV(x_i)p_i^2$, which is equal to the thermodynamic quantity by equipartition, to the LDA value. In this figure $N=1000$.}
\label{fig:temp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.485\textwidth]{density_profile.pdf}
\includegraphics[width=0.485\textwidth]{energy_density_profile.pdf}
\caption{Convergence of the density and energy density near the trough of the potential to the LDA prediction. In this figure $N=500$.}
\label{fig:density_convergence}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.485\textwidth]{temp_profile_L.pdf}
\includegraphics[width=0.485\textwidth]{temp_profile_R.pdf}
\caption{Position independence of the temperature $\beta_0^{-1}$, defined within a region as $\beta_0^{-1}=N^{-1}\sum_{i=1}^NV(x_i)p_i^2$. Shown is (L) the convergence to the LDA value for $0< x\le L/2$ and (R) for $L/2<x\le L$, with the same average values being obtained at late times in both regions. In this figure $N=500$.}
\label{fig:temp_convergence}
\end{figure}
By implementing a molecular dynamics solver for the Toda system with Hamiltonian \eqref{Toda_Ham}, we can show that the thermal LDA state \eqref{LDA_state} is indeed approximately reached at diffusive timescales. We take the periodic potential $V(x)=2.5+\sin(2\pi x/L)$, where the periodicity of the potential is equal to the periodicity of the Toda gas. We take $N=L$ throughout, with $N=1000$ in Figs.~\ref{fig:LDA} and \ref{fig:temp}, and $N=500$ in Figs.~\ref{fig:density_convergence} and \ref{fig:temp_convergence}, where more realisations are needed to obtain relatively smooth plots\footnote{The required simulation cost scales as $\sim N^3$: a factor of $N$ from the number of particles, and a further factor of $N^2$ from the simulation time required to access diffusive timescales, assuming the system is held at constant density.}. Finally, we take 50 fluid cells in both cases. The energy density $\exv{H}/N$ and density $\nu^{-1}$ are constant across initial states, with values of $1.26$ and $1$ respectively.
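For concreteness, a schematic version of such a molecular-dynamics solver is sketched below (Python); it is not the solver used to produce the figures. The equations of motion follow from eq.~\eqref{Toda_Ham}, $\dot x_i = V(x_i)\, p_i$ and $\dot p_i = -\partial_{x_i} H$, with periodic closure of the gaps; since $V(x)$ multiplies $p_i^2$ the Hamiltonian is non-separable, so a plain leapfrog scheme is not symplectic, and a fixed-step Runge-Kutta integrator with arbitrarily chosen time step and initial conditions is used here purely for illustration.
\begin{verbatim}
import numpy as np

def V(x, L):
    return 2.5 + np.sin(2.0 * np.pi * x / L)

def dV(x, L):
    return (2.0 * np.pi / L) * np.cos(2.0 * np.pi * x / L)

def rhs(x, p, L):
    N = len(x)
    wrap = np.zeros(N); wrap[-1] = L           # x_{N+1} = x_1 + L
    A = np.exp(-(np.roll(x, -1) - x + wrap))   # e^{-(x_{i+1}-x_i)}
    B = np.roll(A, 1)                          # e^{-(x_i-x_{i-1})}
    Vx = V(x, L)
    dHdx = 0.5 * dV(x, L) * (p**2 + A + B) \
         + 0.5 * (Vx + np.roll(Vx, -1)) * A \
         - 0.5 * (Vx + np.roll(Vx, 1)) * B
    return Vx * p, -dHdx                       # (dx/dt, dp/dt)

def rk4_step(x, p, dt, L):
    k1x, k1p = rhs(x, p, L)
    k2x, k2p = rhs(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p, L)
    k3x, k3p = rhs(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p, L)
    k4x, k4p = rhs(x + dt * k3x, p + dt * k3p, L)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0)

N = L = 500
x = np.sort(np.random.uniform(0.0, L, N))  # density ~ 1
p = np.random.normal(0.0, 1.0, N)          # thermal-like initial momenta
for _ in range(1000):
    x, p = rk4_step(x, p, dt=1e-3, L=L)
\end{verbatim}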
In all the figures, there are periodic oscillations of the observables. By running the simulations without the external energy field, that is, for the usual Toda gas, we verified that this period is related to the periodic boundary conditions. In Fig.~\ref{fig:temp_convergence}, the two graphs have distinct frequencies: around the trough of the potential the dominant period is $\tau\sim 300$, while around the peak of the potential the dominant period is, to a good approximation, $\tau/2$. Comparing the predictions of the LDA state with the averages of the numerical data, we see good agreement, with errors below $5\%$, for all measured quantities.
\section{Conclusion}
We have introduced a fully general formalism for the hydrodynamics of a Hamiltonian system with generic inhomogeneous force fields, including terms to second order in spatial derivatives. The resulting hydrodynamic equation is fully expressed in terms of Euler currents and extended Onsager coefficients, which provide the net entropy increase, shown to be always non-negative. While our hydrodynamic equation applies to any inhomogeneous deformed Hamiltonian constructed from PT-symmetric densities of conserved quantities in involution, in the particular case of integrable systems we showed that expressions for the extended Onsager coefficients may be obtained by a form factor expansion, and we obtained explicit expressions at the two particle-hole order, which are valid at low density of excitations. We have thereby extended the equations of generalised hydrodynamics to include generic force fields and dissipative terms. The equation has thermal states as the only stationary states and positive entropy increase, showing how second-order terms in the hydrodynamics, namely diffusive processes in real and quasi-momentum space, are responsible for the thermalisation of the system. This confirms and generalises our previous result for the one-dimensional Bose gas in a generic trapping potential \cite{PhysRevLett.125.240604}. We have confirmed the final thermalisation of an integrable classical system, a Toda system under an inhomogeneous energy field representing the effect of an inhomogeneous Hamiltonian.
Several extensions and open questions are within reach. First, the lower bound we found for the diffusion constant $\mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}}\mathsf C$ should be tested against numerical predictions to check its validity. It is reasonable to expect that its corrections are subleading at low density of excitations and in weakly interacting limits \cite{Durnin2020,10.21468/SciPostPhys.9.6.082}. In these regimes the final equation \eqref{main_particle_result} can be applied straightforwardly to describe the effect of external magnetic and electric fields in lattice systems such as the XXZ spin-$1/2$ chain and the Fermi-Hubbard chain, where interesting non-equilibrium phenomena can be observed \cite{2010.12965,2102.01675}. Moreover, it will be important to clarify the role of diffusion in quasi-momentum space for the quasiparticles. Recent years have brought to light different classes of non-diffusive transport in real space for integrable or quasi-integrable spin chains, related to the KPZ universality class, see \cite{Scheie2021,PhysRevB.101.041411,PhysRevLett.122.210602,2009.08425,Gopalakrishnan2019,2103.01976}. Similar forms of super-diffusion could be found in the quasi-momentum $\theta$ space, in particular in those cases where the kernel $ \mathfrak{D}_{\boldsymbol{\mathfrak{f}^2}}$ can diverge. Finally, the techniques we have used are clearly generalisable to higher-order hydrodynamics, which is thought to be meaningful at least in integrable systems. We shall investigate these exciting questions in the near future.
\subsection*{Acknowledgements}
We acknowledge relevant discussions with Romain Vasseur, Sarang Gopalakrishnan, Jerome Dubail, and Herbert Spohn. JD acknowledges funding from the EPSRC Centre for Doctoral Training in Cross-Disciplinary Approaches to Non-Equilibrium Systems (CANES) under grant EP/L015854/1.
\begin{appendix}
\section{Computation of the Onsager coefficients }\label{sec:FFComputation}
Onsager coefficients in integrable models can be analytically computed by expanding over intermediate quasiparticle excitations, written in terms of particle-hole intermediate states. This corresponds to a hydrodynamic projection of each current onto the space of normal modes and their quadratic fluctuations, which has been shown to generically provide a lower bound for the Onsager coefficients \cite{1912.01551}. However, whenever at least one of the two currents in the Onsager coefficient is generated by the flow of the homogeneous Hamiltonian $w^i Q_i$, the lower bound is saturated and the two particle-hole contribution provides the full Onsager coefficient (while the one particle-hole contribution gives the subtracted ballistic part \cite{RevCorrelations}). We shall here present the calculation of this contribution for generic currents
\begin{align} \label{eq:fullOnsager}
\mathfrak L [i,j;j_{k,l}] & = \lim_{t \to \infty} \int_{-t}^{t} ds\, ( J_{i,j }(y,s \boldsymbol{w} ), j_{k,l}(0,0) )^C = \sum_{n=2}^\infty \mathfrak L_{n-\rm ph}[i,j;j_{k,l}] \nonumber \\
& = \frac{ (2\pi)^2}{2!^2} \lim_{t \to \infty} \int {\rm d} \theta_1^- {\rm d} \theta_2^- \rho_{\text{p}}(\theta_1^-) \rho_{\text{p}}(\theta_2^-) \fint {\rm d} \theta_1^+ {\rm d} \theta_2^+ \rho_{\text{h}}(\theta_1^+) \rho_{\text{h}}(\theta_2^+) \nonumber \\
& \quad \times \delta(k) \delta_t(\varepsilon_{\boldsymbol{w}}) \langle \rho_{\rm p} | j_{i,j} | \theta^+_1, \theta^+_2, \theta^-_1, \theta^-_2 \rangle \langle \theta^+_1, \theta^+_2, \theta^-_1, \theta^-_2 | j_{k,l} | \rho_{\text{p}} \rangle + \ldots,
\end{align}
where the integration $\fint$ denotes Hadamard regularisation, see for example \cite{Panfil2021}, due to the singularities in the integrand.
In particular we shall use the matrix elements of the generalised current operators
\begin{align}
& \langle \rho_{\rm p}| j_{k,i } | \{ \theta_{\rm p}^\bullet, \theta_{\rm h}^\bullet \}\rangle = h^{\rm Dr}_k(\{ \theta_{\rm p}^\bullet, \theta_{\rm h}^\bullet \}) f_i(\{ \theta_{\rm p}^\bullet, \theta_{\rm h}^\bullet \}) .
\end{align}
The function $h^{\rm Dr}_k$ is the eigenvalue of the $k$-th charge dressed by the shift function on the background given by $\rho_{\text{p}}$, see for example \cite{10.21468/SciPostPhys.6.4.049}, and the function $f_i$ is known and given by \cite{10.21468/SciPostPhys.6.4.049}
\begin{align}
f_i(\{ \theta_{\rm p}^\bullet, \theta_{\rm h}^\bullet \}) &= \left(\frac{T^{\rm dr}(\theta^-_2, \theta^-_1) h_i^{\rm dr}(\theta^-_2) }{k'(\theta^-_1) k'(\theta^-_2) (\theta^+_1 - \theta^-_1)}+ \frac{T^{\rm dr}(\theta^-_1, \theta^-_2) h_i^{\rm dr}(\theta^-_1) }{k'(\theta^-_2) k'(\theta^-_1) (\theta^+_2 - \theta^-_2)} \right. \nonumber\\
&\left. + \frac{T^{\rm dr}(\theta^-_2, \theta^-_1) h_i^{\rm dr}(\theta^-_2) }{k'(\theta^-_1) k'(\theta^-_2) (\theta^+_2 - \theta^-_1)} + \frac{T^{\rm dr}(\theta^-_1, \theta^-_2) h_i^{\rm dr}(\theta^-_1) }{k'(\theta^-_2) k'(\theta^-_1) (\theta^+_1 - \theta^-_2)} + (\dots) \right),
\end{align}
with corrections $(\dots)$ given by finite elements in the limit of one of the particles $\theta_i^+$ approaching the value of one of the holes $\theta_j^-$.
Notice that the energy constraint needs particular care. We regularise it as follows
\begin{equation}
\int_{-t}^t ds \, e^{i s \varepsilon } = \frac{2\sin(t \varepsilon)}{\varepsilon} = 2\pi\, \delta_t (\varepsilon), \qquad \delta_t(\varepsilon) \equiv \frac{\sin(t \varepsilon)}{\pi \varepsilon}.
\end{equation}
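As a quick numerical sanity check of this regularisation (not part of the derivation), one can verify that $\delta_t(\varepsilon)$ integrates to approximately one and sharpens around $\varepsilon=0$ as $t$ grows; the grid and values of $t$ below are arbitrary choices for illustration.
\begin{verbatim}
import numpy as np

def delta_t(eps, t):
    # sin(t*eps)/(pi*eps), written via np.sinc to handle eps = 0
    return np.sinc(t * eps / np.pi) * t / np.pi

eps = np.linspace(-50.0, 50.0, 200001)
for t in (1.0, 10.0, 100.0):
    print(t, np.trapz(delta_t(eps, t), eps))   # approaches 1 as t grows
\end{verbatim}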
We proceed analogously to the standard case. Due to the two delta functions, the integral is dominated by configurations in which particles and holes take nearly the same values. We therefore expand over small $\Delta_i = \theta_i^+ - \theta_i^-$ (and its permutation, accounting for an additional factor of 2 in the integrated correlator), obtaining
\begin{equation}
k =k'(\theta_1) \Delta_1 +k'(\theta_2) \Delta_2 + \ldots,
\end{equation}
\begin{equation}
\varepsilon_{\boldsymbol{w}} = v_{\boldsymbol{w}}^{\rm eff}(\theta_1) k'(\theta_1) \Delta_1 + v_{\boldsymbol{w}}^{\rm eff}(\theta_2) k'(\theta_2) \Delta_2 + \ldots,
\end{equation}
\begin{equation}
h^{\rm Dr}_k = (h_{k}')^{\rm dr}(\theta_1) \Delta_1 + (h'_{k})^{\rm dr}(\theta_2) \Delta_2 + \ldots.
\end{equation}
The integration over $\Delta_2$ can be done using $\delta(k)$, which sets $\Delta_2 = - \Delta_1 k'(\theta_1)/k'(\theta_2)$, giving
\begin{align}
\mathfrak L_{2-\rm ph}[i,j;j_{k,l}] & = \frac{1}{2} \lim_{t \to \infty} \int d\theta_1 \int d \theta_2 \fint d\Delta_1 \nonumber \\
& \quad \times k'(\theta_1) n(\theta_1 ) f(\theta_1+ \Delta_1)\, k'(\theta_2) n(\theta_2) f(\theta_2- k'(\theta_1)/k'(\theta_2)\, \Delta_1)\, (T^{\rm dr}(\theta_1,\theta_2))^2 \nonumber \\
& \quad \times \delta_t \big(\Delta_1 k'(\theta_1) (v_{\boldsymbol{w}}^{\rm eff}(\theta_1) - v_{\boldsymbol{w}}^{\rm eff}(\theta_2)) \big)\, k'(\theta_1)\, (v^{\rm eff}_i(\theta_1) - v^{\rm eff}_i(\theta_2)) (v^{\rm eff}_k(\theta_1) - v^{\rm eff}_k(\theta_2)) \nonumber \\
& \quad \times \left( \frac{h_j^{\rm dr}(\theta_1)}{k'(\theta_1)} - \frac{h_j^{\rm dr}(\theta_2)}{k'(\theta_2)} \right)\left( \frac{h_l^{\rm dr}(\theta_1)}{k'(\theta_1)} - \frac{h_l^{\rm dr}(\theta_2)}{k'(\theta_2)} \right),
\end{align}
where $v^{\rm eff}_i(\theta) = (h_i' )^{\rm dr}(\theta)/k'(\theta)$.
The limit $t \to \infty$ can now be taken, excluding the zero-measure set of points where $(v_{\boldsymbol{w}}^{\rm eff}(\theta_1) - v_{\boldsymbol{w}}^{\rm eff}(\theta_2))=0$.
Then $\Delta_1$ can be integrated with the $\delta(\varepsilon_{\boldsymbol{w}})$, which produces the Jacobian factor $|k'(\theta_1) (v_{\boldsymbol{w}}^{\rm eff}(\theta_1) - v_{\boldsymbol{w}}^{\rm eff}(\theta_2))|$. We then obtain
\begin{align}
\mathfrak L_{2-\rm ph}[i,j;j_{k,l}] & = \frac{1}{2} \int d\theta_1 \int d \theta_2\, k'(\theta_1) n(\theta_1) f(\theta_1)\, k'(\theta_2) n(\theta_2) f(\theta_2)\, (T^{\rm dr}(\theta_1,\theta_2))^2 \nonumber \\
& \quad \times \frac{(v^{\rm eff}_i(\theta_1) - v^{\rm eff}_i(\theta_2)) (v^{\rm eff}_k(\theta_1) - v^{\rm eff}_k(\theta_2)) }{|v^{\rm eff}_{\boldsymbol{w}}(\theta_1) - v^{\rm eff}_{\boldsymbol{w}}(\theta_2) |} \nonumber \\
& \quad \times \left( \frac{h_j^{\rm dr}(\theta_1)}{k'(\theta_1)} - \frac{h_j^{\rm dr}(\theta_2)}{k'(\theta_2)} \right)\left( \frac{h_l^{\rm dr}(\theta_1)}{k'(\theta_1)} - \frac{h_l^{\rm dr}(\theta_2)}{k'(\theta_2)} \right).
\end{align}
To recover the results in the main text, one should also use
\begin{equation}
\beta^i v^{\rm eff}_i(\theta) = - n'(\theta)/(f(\theta) n(\theta) k'(\theta)),
\end{equation}
and
\begin{equation}
w^i v^{\rm eff}_i(\theta) = v^{\rm eff}_{\boldsymbol{w}}(\theta).
\end{equation}
One may also write an expression for the two particle-hole contribution to the generalised Onsager coefficient, in terms of the kernel \eqref{eq:ffun}, in the explicitly multilinear form
\begin{align}\label{Labcd}
\mathfrak L_{2-\rm ph} [\boldsymbol a,\boldsymbol b;j_{\boldsymbol c,\boldsymbol d}]
= \frac12 \int d\theta_1\int d \theta_2\,
\kappa_{\mathfrak D}(\theta_1,\theta_2)\,
\frac{\Delta v^{\rm eff}_{\boldsymbol a}(\theta_1,\theta_2)\,
\Delta v^{\rm eff}_{\boldsymbol c}(\theta_1,\theta_2)\,
\Delta a^{\rm eff}_{\boldsymbol b}(\theta_1,\theta_2)\,
\Delta a^{\rm eff}_{\boldsymbol d}(\theta_1,\theta_2)}{(\Delta v^{\rm eff}_{\boldsymbol w}(\theta_1,\theta_2))^2},
\end{align}
where
\begin{equation}
\Delta v^{\rm eff}_{\boldsymbol a}(\theta_1,\theta_2) = v^{\rm eff}_{\boldsymbol a}(\theta_1) - v^{\rm eff}_{\boldsymbol a}(\theta_2),\quad
\Delta a^{\rm eff}_{\boldsymbol a}(\theta_1,\theta_2) = a^{\rm eff}_{\boldsymbol a}(\theta_1) - a^{\rm eff}_{\boldsymbol a}(\theta_2).
\end{equation}
By appropriately separating the integral kernel from the external functions, one immediately obtains \eqref{Lintegrable1}-\eqref{Lintegrable4}. Higher particle-hole contributions $\Big[ \mathfrak L[\boldsymbol a,\boldsymbol b;j_{\boldsymbol c,\boldsymbol d}] \Big]_{n>2-\rm ph}$ vanish whenever at least one of the form factors is proportional to the Hamiltonian energy $\varepsilon_{\boldsymbol{w}}$, namely for currents induced by the Hamiltonian flow, since the energy conservation $\delta(\varepsilon_{\boldsymbol{w}})$, together with momentum conservation, gives a finite contribution only at the two particle-hole level. However, the case $\boldsymbol{a} = \boldsymbol{\beta}$ and $\boldsymbol{c} = i$ evades this restriction. Therefore, in this case, the full sum over a generic number of particle-hole excitations remains
\begin{equation}
\mathfrak L[\boldsymbol{ \beta},\boldsymbol {\mathfrak{f}};j_{i,\boldsymbol{ \mathfrak{f}}}] = \mathfrak L_{2-\rm ph}[\boldsymbol{ \beta},\boldsymbol {\mathfrak{f}};j_{i,\boldsymbol{ \mathfrak{f}}}] + \sum_{n \geq 3} \mathfrak L_{n-\rm ph}[\boldsymbol{ \beta},\boldsymbol {\mathfrak{f}};j_{i,\boldsymbol{ \mathfrak{f}}}] .
\end{equation}
As the sum over $n>2$ particle-hole contributions is currently out of reach, we shall here only provide the $n=2$ contribution, which constitutes a finite lower bound to the Onsager coefficient $\mathfrak L[\boldsymbol{ \beta},\boldsymbol {\mathfrak{f}};j_{i,\boldsymbol{ \mathfrak{f}}}]$. Such higher particle-hole contributions affect neither total energy conservation, see eq. \eqref{eq:energyconsconstraint}, nor the positivity of the entropy increase.
\section{PT symmetry and perturbation theory}\label{app:PT}
To include all terms up to second derivative order in the hydrodynamic equation in the presence of inhomogeneous external fields, we must in principle also include the following term in the Hamiltonian \eqref{eq:pert}
\begin{equation}
H^{(2)}=-\frac{\partial_{x_0} \mathfrak{f}^i(x_0)}{2}\int dx\,(x-x_0)^2 \ q_i(x).
\end{equation}
However, in systems with PT symmetry the contribution to the hydrodynamics from this term vanishes. Taking expectation values with respect to the local homogeneous, PT invariant states (neither perturbation theory nor inhomogeneity of the state need be considered as we are already at highest order in derivatives), we find
\begin{align}
\exv{[H^{(2)},q_i(x_0)]}=&-\frac{\partial_{x_0} \mathfrak{f}^k(x_0)}{2}\int dx\,(x-x_0)^2\exv{[q_k(x),q_i(x_0)]}\nonumber \\
=&\frac{\partial_{x_0} \mathfrak{f}^k(x_0)}{2}\int dx\,x^2\exv{[q_k(x),q_i]}^*
\nonumber \\
=&\frac{\partial_{x_0} \mathfrak{f}^k(x_0)}{2}\int dx\,x^2\exv{\mathcal{PT}([q_k(x),q_i])}
\nonumber \\
=&\frac{\partial_{x_0} \mathfrak{f}^k(x_0)}{2}\int dx\,x^2\exv{[q_k(-x),q_i]}\nonumber \\
=&\frac{\partial_{x_0} \mathfrak{f}^k(x_0)}{2}\int dx\,x^2\exv{[q_k(x),q_i]},
\end{align}
which implies
\begin{equation}
\exv{[H^{(2)},q_i(x_0)]}=0.
\end{equation}
We therefore conclude that terms proportional to $\partial_{x_0} \mathfrak{f}^i(x_0)$ can only appear at third order in spatial derivatives within the hydrodynamic gradient expansion.
\end{appendix}
\bibliographystyle{ieeetr}
\subsection{Dataset overview}
The field dataset used in this example was acquired in the Gulf of Mexico by Shell Exploration and Production Company in 2010. The area sits in the Garden Banks region, about 362 km southwest of New Orleans, Louisiana (Figure~\ref{fig:CardamomOverview}). The survey aims to illuminate prospects belonging to a producing field and to improve the subsurface images around the diffuse salt bodies by leveraging the full wide-azimuth capability of an OBN geometry. The area has been subjected to diffuse diapirism and presents multiple salt bodies, making its exploration potentially challenging from a seismic perspective \cite[]{murray1966salt,thompson2011salt}.
From the entire dataset, we select $255$ nodes that recorded multi-component seismic data generated by $41000$ airgun sources covering an area of $100$ $\text{km}^2$. Figure~\ref{fig:CardamoGeo} shows the sources' and nodes' x-y positions overlaid on a depth slice of the initial velocity model depicting a salt diapir (i.e., the high-velocity circular region). The sources were fired using a flip-flop acquisition geometry with a source interval of approximately $25$ m and a source depth of $9.8$ m. The sail lines are aligned with the x-axis, and their interval on the cross-line or y-axis is approximately $100$ m. From Figure~\ref{fig:CardamoSouGeo}, it is clear where the source vessel had to divert its trajectory to abide by the acquisition restrictions in the proximity of production platforms. The multi-component nodes are placed at the seabed, and their depth varies between $0.83$ and $1.0$ km. Their spatial x-y interval is approximately $250$ m in both directions.
Besides the observed data, Shell also provided the elastic stiffness components $C_{11}$, $C_{33}$, $C_{13}$, $C_{44}$, and $C_{66}$ and a constant density value for the sediment layers. In addition to this information, they included the salt-body edges' positions obtained by interpreting subsurface images of the area. From the stiffness components, we construct an initial P-wave velocity model assuming an isotropic medium~\cite[]{mah2003determination}. Figure~\ref{fig:CardamomInit} shows representative sections of the 3D volume of the velocity model. The initial sediment velocity does not present any distinguishable geological feature. A salt diapir, reaching the seabed depth, is located close to the center of the area of interest; its P-wave velocity is set to $4.5$ km/s and assumed to be homogeneous, a common assumption based on field observations~\cite[]{zong2015elastic}.
\subsection{Data preprocessing}
We describe in detail the pre-processing steps we followed to apply our target-oriented workflow. Since we rely on FWI approaches to estimate the migration velocity model and the elastic parameters of the subsurface, we first identify the minimum frequency for which the active source signal has a sufficient signal-to-noise ratio (SNR). To this end, we apply different high-cut filters to a representative shot-binned common-receiver gather and visually identify the frequency for which the direct arrival signal is distinguishable from the natural background noise (Figure~\ref{fig:CardamoMinFreq}). The top two panels clearly show that no usable active energy is recorded below $2$ Hz. Conversely, from the bottom two panels, we notice that the SNR increases between $3$ and $4$ Hz. Thus, we set the lowest frequency used in this field example to $3$ Hz. We observe a good SNR up to $40$ Hz on the higher end of the spectrum. However, to reduce the modeling computational cost for this study, we apply a $30$ Hz high-cut filter to the dataset.
The estimation of the source signature is one of the fundamental steps for the successful application of any FWI workflow~\cite[]{rickett2013variable,sun2014source,skopintseva2016importance}. Theoretically, it is possible to compute such a signature from first principles by knowing the airgun experimental setup~\cite[]{ziolkowski1982signature}. However, all the necessary parameters are unknown, or other first-order experiment effects could prevent the theory from retrieving the correct source impulse response (e.g., faulty airguns in the battery, incorrect source parameters). For this reason, we follow a data-driven approach. To compute a proxy of the direct arrival waveform for each node, we apply a hyperbolic moveout (HMO) correction to all the nodes using a constant velocity of $1.5$ km/s~\cite[]{yilmaz2001seismic}. By stacking all the traces belonging to a representative common-receiver gather, we obtain the signal depicted in Figure~\ref{fig:CardamomDirectTime}. A strong peak is present on the signal's onset, which is then followed by the typical bubble response commonly generated by airgun sources~\cite[]{watson2019controls}. The frequency spectrum presents multiple notches due to the source-side ghost and the bubble response (Figure~\ref{fig:CardamomDirectFreq}). Additionally, the frequency content rapidly decreases after $40$ Hz, and minimal energy is present above $160$ Hz.
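A minimal sketch of this HMO-correction-and-stack construction of the direct-arrival proxy is given below (Python); it is an assumed implementation written for illustration, with array shapes, the linear-interpolation choice, and edge handling chosen here rather than taken from the actual processing code.
\begin{verbatim}
import numpy as np

def hmo_stack(gather, offsets, dt, v=1.5):
    """Direct-arrival proxy for one common-receiver gather.
    gather: (ntraces, nt) array, offsets in km, dt in s, v in km/s."""
    ntr, nt = gather.shape
    tau = np.arange(nt) * dt                   # corrected (zero-offset) time axis
    stack = np.zeros(nt)
    for trace, h in zip(gather, offsets):
        t_src = np.sqrt(tau**2 + (h / v)**2)   # event time before correction
        stack += np.interp(t_src, tau, trace, left=0.0, right=0.0)
    return stack / ntr
\end{verbatim}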
Besides retrieving the P-wave speed using an FWI workflow, we also use the recorded data to produce a reverse time migration (RTM) image volume of the subsurface. This volume allows us to understand the area's geological setting and imaging challenges (i.e., position and shape of the salt dome). The usage of the time signature displayed in Figure~\ref{fig:CardamomDirectTime} for imaging would result in an image volume presenting multiple side lobes due to the bubble reverberations unless specific imaging conditions or procedures are used to form the subsurface image~\cite[]{valenciano2002deconvolution}. One standard practice to reduce the bubble reverberations is to apply prediction-error filters to the observed data to dampen these oscillations~\cite[]{wood1978debubbling}. However, in this application, we follow a different approach by combining acoustic wave-equation modeling and shaping filters.
The initial model of Figure~\ref{fig:CardamomInit} is used to compute an estimate of the observed data using an acoustic isotropic 3D approximation. We use the predicted pressure to calibrate the modeling operator on the datasets' observed amplitudes and reshape the data response. For each common-receiver gather, we compute the direct arrival proxy following the HMO-stacking procedure and shape the observed signal into the predicted one using a frequency-domain Wiener filtering operation. The proxy event is then used to estimate the filter by solving the following equation in the frequency domain,
\begin{eqnarray}
\label{eqn:wiener-filter}
d^{pre}_i(t) &=& \big ( f_i * d^{obs}_i \big) (t),
\end{eqnarray}
where $*$ denotes the time-convolution operator, $d^{obs}_i$ and $d^{pre}_i$ are the observed and predicted direct-arrival proxies for the $i$-th node, respectively, and $f_i$ is the unknown filter for the $i$-th node. The panels of Figure~\ref{fig:CardamomShaping} show plots of the observed, initially predicted, and shaped data. The shaping procedure removes the bubble response from the observed data and makes the direct arrival waveforms consistent between the initial predicted pressure and the observed data. Compared to other techniques, this shaping procedure automatically removes the instrument response, which can be different for each node, makes the observed data consistent with the acoustic modeling operator, and gives complete freedom in choosing the source signature employed within the modeling and imaging operators.
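The per-node filter of equation~\ref{eqn:wiener-filter} can be estimated by a damped spectral division, as in the following sketch (Python); the water-level stabilisation and its value are assumptions made here for illustration and may differ from the actual implementation.
\begin{verbatim}
import numpy as np

def shaping_filter(d_obs_proxy, d_pre_proxy, eps=1e-3):
    """Frequency-domain filter F such that F * D_obs ~ D_pre (water-level damped)."""
    D_obs = np.fft.rfft(d_obs_proxy)
    D_pre = np.fft.rfft(d_pre_proxy)
    damp = eps * np.max(np.abs(D_obs))**2
    return D_pre * np.conj(D_obs) / (np.abs(D_obs)**2 + damp)

def apply_filter(F, traces):
    """Convolve every trace of the node with the estimated filter.
    Traces are assumed to share the proxies' time axis and length."""
    return np.fft.irfft(F * np.fft.rfft(traces, axis=-1),
                        n=traces.shape[-1], axis=-1)
\end{verbatim}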
\subsection{Initial RTM images and geological scenario description}
The shaped data are employed to compute RTM images using the initial P-wave velocity model. These images provide information on the geological subsurface structures present in this area. To simultaneously utilize the up- and down-going energy recorded by the nodes, we perform the acoustic Green's function computation using a free-surface condition at the water surface during the imaging procedure~\cite[]{robertsson1996numerical}. This choice avoids the necessity of performing an up-down separation step~\cite[]{schalkwijk1999application}. Moreover, we apply a source-side illumination compensation to diminish any acquisition artifacts within the subsurface image volume~\cite[]{kaelin2006imaging}.
Figure~\ref{fig:CardamomInitRTM} shows depth slices extracted from the RTM image obtained using the initial velocity model. From the depth section extracted at $z=0.9$ km (Figure~\ref{fig:CardamomInitRTMZ1}), the top edges of the diapir are recognizable, as well as radially distributed features around it. These structures are commonly observed in areas where diapirism is present and are related to the presence of radially distributed faults caused by the rising of the salt diapir~\cite[]{stewart2006implications,coleman2018and}. Furthermore, within deeper sections, multiple structures associated with turbidite sequences can be observed~\cite[]{berg1982seismic}. The panel of Figure~\ref{fig:CardamomInitRTMZ2} shows an example of such structures on the bottom-right corner of the depth slice. Mild acquisition-footprint artifacts are present despite the application of source-side compensation during the migration process. Finally, high-amplitude features are visible close to the diapir flanks in even deeper portions of the image volume. For instance, in the depth section extracted at $z=2.865$ km a noticeable faulted structure is present (Figure~\ref{fig:CardamomInitRTMZ3}). Such image features are potentially associated with hydrocarbon prospects~\cite[]{harding1979structural,tiapkina2008imaging}.
\subsection{Acoustic FWI}
The redatuming process we employ relies on the knowledge of an accurate overburden velocity model. Thus, we apply an acoustic isotropic constant-density FWI process to the shaped pressure data to improve the initial velocity model. The acoustic wave-equation operators employ a free-surface boundary condition at the top edge and absorbing boundaries on the other ones. To mitigate the inaccuracy of the modeling operator in correctly predicting the observed event amplitudes, we employ the objective function proposed by \cite{shen2010near}, where a trace-by-trace normalization is applied to both modeled and observed data vectors before computing their difference. It can be easily shown that the minimization of such an objective function corresponds to the maximization of the zero-lag cross-correlation between the predicted and observed data~\cite[]{shen2014early}. Furthermore, we invert the data using a data-space multiscale approach~\cite[]{bunks1995multiscale}, where progressively wider frequency bands undergo the inversion procedure. The chosen frequency bands are the following: $3-6$, $3-9$, $3-12$, and $3-18$ Hz. For the first three bands, the modeling is performed with an FD grid size of $35$ m in the three dimensions, while, for the last band, the modeling is performed with a grid size of $25$ m. To mitigate the introduction of any inversion artifacts, we parameterize the model on spline grids with an x-y sampling of $175$, $105$, and $50$ m for each band, respectively. The spline grid in the z-axis is as fine as the FD sampling. The usage of a spline parameterization during the inversion also has the additional advantage of limiting the convergence to local minima~\cite[]{barnier2019waveform}. Finally, we apply the acoustic reciprocity theorem so that the $255$ nodes act as sources and the $41000$ sources as receivers~\cite[]{aki2002quantitative}.
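As a reference for the misfit used in this acoustic FWI step, the sketch below (Python, an illustrative implementation rather than the production code) evaluates the trace-by-trace normalised difference; since for unit-energy traces $\|\hat{p}-\hat{d}\|^2 = 2 - 2\,\hat{p}\cdot\hat{d}$, minimising it is equivalent to maximising the zero-lag cross-correlation, as stated above.
\begin{verbatim}
import numpy as np

def normalized_misfit(pred, obs, eps=1e-12):
    """Trace-by-trace normalised L2 misfit for one shot/node.
    pred, obs: (ntraces, nt) arrays."""
    pn = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + eps)
    on = obs / (np.linalg.norm(obs, axis=-1, keepdims=True) + eps)
    return 0.5 * np.sum((pn - on)**2)
\end{verbatim}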
The acoustic FWI objective function is minimized using a BFGS algorithm~\cite[]{liu1989limited}. Overall, $216$ iterations are performed to invert the data up to $18$ Hz. Figure~\ref{fig:CardamomAcoFWIObj} displays the convergence curve of the acoustic FWI problem. The discontinuities in the curve correspond to the changes in the frequency band during the inversion. For the two central bands, a decrease of approximately 70\% is achieved by the minimization algorithm.
From the final P-wave velocity we extract the same cross- and in-line sections of the 3D volume as the ones in Figure~\ref{fig:CardamomInit} (Figure~\ref{fig:CardamomFinalCrossIn}). The model inverted by the FWI scheme shows geologically consistent features. For instance, the panel in Figure~\ref{fig:CardamomFinalX1} presents a discontinuity potentially indicating the presence of a fault at $y=50.2$ km and $z=3$ km. Moreover, a clear low-velocity anomaly is placed on the salt flank at $z=2.8$ km (Figure~\ref{fig:CardamomFinalY2}). This decrease in velocity could be related to gas accumulation at the top of a hydrocarbon reservoir sealed by the salt body. Finally, the same inclusion reported by~\cite{dahlke2020applied} is retrieved by the acoustic FWI workflow (Figure~\ref{fig:CardamomFinalX2}). In addition, other velocity variations are present at the top of the diapir.
The inversion is affected by artifacts due to the limited acquisition geometry (left sides of Figures~\ref{fig:CardamomFinalX1} and~\ref{fig:CardamomFinalY3}). Moreover, low-velocity anomalies are placed at depths greater than $3$ km. These features are probably due to the convergence of the optimization algorithm to a local minimum. Thus, we limit our search for potential targets to depths shallower than $3$ km. In addition to the inclusion shown in Figure~\ref{fig:CardamomFinalX2}, the FWI workflow placed a low-velocity anomaly close to the top of the diapir (Figure~\ref{fig:CardamomInitComp}). This velocity decrease could be associated with salt-encased sediment packages included during the diapir formation~\cite[]{fernandez2017origin}.
As a quality control (QC) step, we compare the phase matching between the predicted and the observed pressure data before and after the inversion process is applied. Figure~\ref{fig:CardamomDataCompY} shows the phase matching when one of the source-spatial positions is fixed. The phase matching between modeled and observed data for both long- and short-offset traces improves after applying the FWI workflow. When a time slice is extracted from the modeled and the observed data, and we compare the phase agreement between the two (Figure~\ref{fig:CardamomDataCompT}), an excellent match is found using the final FWI acoustic model to generate the pressure data. On the other hand, from Figure~\ref{fig:CardamomDataCompY}, we notice that the accuracy of the matching diminishes for recording time greater than $5$ s for the mid- and short-offset ranges. This mismatch could explain the spurious low-velocity anomalies in the FWI acoustic model previously described.
Besides the satisfactory phase matching between the predicted and observed data, the quality of the RTM image greatly improves thanks to the more accurate velocity retrieved by the FWI process (Figure~\ref{fig:CardamomRTMcompX}). In fact, by comparing Figures~\ref{fig:CardamomRTMInitx1} and~\ref{fig:CardamomRTMFWIx1}, the fault planes between $y=49.8$ and $y=51.5$ km are more visible within the RTM image obtained on the FWI velocity model. For the sections passing through the salt diapir (Figures~\ref{fig:CardamomRTMInitx2} and~\ref{fig:CardamomRTMFWIx2}), the overall reflectors' continuity is improved for the RTM image generated on the FWI model, especially for the reflectors close to the top of the salt body. Moreover, the high-amplitude reflectors present a more consistent contact point with the salt flanks within the FWI-related RTM image. One interesting geological feature present on the left side of both sections of Figures~\ref{fig:CardamomRTMInitx2} and~\ref{fig:CardamomRTMFWIx2} is the sigmoidal-shaped reflectors at $z=2.0$ km. These events are due to the presence of the turbidite deposits previously described.
\subsection{Target-oriented elastic FWI of a potential prospect}
By analyzing the RTM image volume obtained using the final FWI velocity model, we identify a clear high-amplitude reflector in the proximity of the salt flank (Figure~\ref{fig:CardamomRTMFWICube}). This amplitude response could be related to gas accumulation at the top of a hydrocarbon reservoir~\cite[]{mazzotti1990prestack}. Therefore, we apply the redatuming technique, followed by an elastic FWI workflow to retrieve this potential prospect's elastic properties.
The first step is to solve an extended acoustic linearized inversion of the observed data to obtain an extended 3D image volume. We limit the observed data's maximum frequency to $12$ Hz to make the least-squares process feasible with the available computational resources. Higher frequencies can be employed as long as computer memory or disks can hold the migration wavefield and the extended images. Random-boundary conditions could be used to alleviate the memory requirements during the imaging step~\cite[]{clapp2009reverse}. The FD grid is set to $35$ m in each direction. Moreover, given the acquisition's full-azimuth nature, we employ $h_x$ and $h_y$ subsurface-offset extensions of $9$ points in each direction, resulting in a maximum absolute subsurface offset of $140$ m. Finally, a differential-semblance-optimization (DSO) regularization term is added to improve the image focusing~\cite{symes1994inversion}, and its weight is chosen on a heuristic basis. We focus the iterative process on inverting the data component stemming from the target area by employing a mask tailored for the potential prospect. After 30 iterations of a linear CG algorithm, an acceptable numerical minimum of the objective function is reached given the selected parameters (Figure~\ref{fig:CardamomExtLSRTMObj}). Each linear iteration took approximately 1 hour and 15 minutes on the same machine used for the Marmousi2 experiment. When extracting the zero-subsurface offset image within the target, an evident high-amplitude reflector is visible (Figure~\ref{fig:CardamomExtLSRTMTarg}). Moreover, fault planes are clearly affecting the horizon of interest.
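The DSO regularisation weight mentioned above penalises energy at non-zero subsurface offsets; the sketch below (Python) shows one common way such a penalty can be written, where the $|\mathbf{h}|$ weighting convention, the array layout, and the scalar weight are assumptions made for illustration rather than the implementation actually used.
\begin{verbatim}
import numpy as np

def dso_penalty(ext_image, hx, hy, weight):
    """ext_image: (nhx, nhy, nz, ny, nx) extended image.
    hx, hy: subsurface-offset axes (km)."""
    w = np.sqrt(hx[:, None]**2 + hy[None, :]**2)   # |h| weight per offset pair
    return 0.5 * weight * np.sum((w[:, :, None, None, None] * ext_image)**2)
\end{verbatim}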
The focusing of the offset-domain common-image gathers (ODCIGs) of the target area provides an additional QC step for assessing the migration velocity model's accuracy during the linear inversion process. A representative ODCIG extracted from the target volume is displayed in Figure~\ref{fig:CardamomExtODCIG}, which presents a clear focus around the zero-subsurface offset axes. Furthermore, we convert this ODCIG into the angle-azimuth domain~\cite[]{biondi20043d}, and extract the angle-domain common-image gather (ADCIG) (Figure~\ref{fig:CardamomExtADCIG}). This angle gather presents a flat response across reflection angles. These two observations suggest that the acoustic FWI process can retrieve an accurate overburden velocity. Finally, Figure~\ref{fig:CardamomExtADCIG-AVA} shows the amplitude response of the ADCIG at $z=2.7$ km and $\varphi=45^{\circ}$, which follows a class-3 AVO signature~\cite[]{castagna1993avo}. This amplitude response, in addition to the bright reflectivity displayed in Figure~\ref{fig:CardamomRTMFWICube}, is compelling evidence of the presence of a gas-bearing sand reservoir in that subsurface area.
We employ this extended image volume to synthesize the elastic pressure data with a new redatumed acquisition geometry placed at $z=2.1$ km. The sources' and receivers' x-y positions are shown in the panels of Figure~\ref{fig:CardamomTargetGeo}. We employ $150$ sources and $8444$ receivers, with the latter regularly sampled and spaced by $25$ m in each direction. This new acquisition is chosen based on how the original OBN geometry has illuminated the target. We purposely avoid placing acquisition devices on the salt body, given the limited illumination of the target by the original OBN geometry present in that section of the model. Figure~\ref{fig:CardamomDatumData} shows a representative shot gather where an increase in amplitude for the first reflected event is noticeable for receivers at a further distance from the source position. This behavior is a potential indication of an AVO signature from the chosen prospect.
The entire model domain is approximately $10\times10\times4$ $\text{km}^3$, while the target domain size is approximately $1.5\times3\times1$ $\text{km}^3$, making the target computational domain approximately $67$ times smaller compared to the original one.
The target area's initial P-wave velocity model is obtained by mildly smoothing the acoustically inverted FWI P-wave velocity. The initial density parameter is simply computed using Gardner's equation~\cite[]{gardner1974formation}. Finally, the starting guess for the S-wave velocity is obtained using the provided stiffness tensor components. Figure~\ref{fig:CardamomTargetInitEla} shows different panels extracted from the initial elastic parameters of the target area.
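For completeness, Gardner's relation used for the initial density takes the form $\rho = a V_p^{b}$; the sketch below uses the commonly quoted constants ($a=0.31$, $b=0.25$, with $V_p$ in m/s and $\rho$ in g/cm$^3$), which are assumptions and may differ from the values actually employed in this study.
\begin{verbatim}
def gardner_density(vp_m_per_s, a=0.31, b=0.25):
    """Gardner's relation rho = a * Vp**b (rho in g/cm^3, Vp in m/s)."""
    return a * vp_m_per_s**b
\end{verbatim}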
We apply an elastic FWI workflow to the redatumed dataset to estimate the target area's elastic parameters. The entire bandwidth of the reconstructed data is simultaneously injected (i.e., $3-12$ Hz), and the three elastic parameters are jointly inverted. The total recording time is $4.5$ s, which is almost half of the original $8$ s data. The elastic FD operator is based on a $20$ m grid to abide by the dispersion and stability conditions. However, the inverted model is parameterized using a spline grid of $100$ m in the x and y axes, while the z-axis has the same sampling as the FD grid. As in the acoustic FWI step, the spline parameterization effectively acts as regularization and avoids the introduction of spurious features during the inversion. By assuming the same scattering regime considered for the target-oriented inversion applied to the synthetic case, all the wave-equation operators are constructed using absorbing boundary conditions around the entire simulation domain.
We minimize the L2-norm difference between the predicted and the synthesized elastic pressure data with a BFGS optimizer for $10$ iterations. The inversion achieves an accurate fit of the redatumed data, as shown by the data-residual panels in Figure~\ref{fig:CardamomTargetRes}, and the retrieved subsurface parameter cubes are shown in Figure~\ref{fig:CardamomTargetFinalEla}. The inversion procedure introduces most of the changes within the P-wave and density parameters. A noticeable decrease in both is observed at the same position as the high-amplitude anomaly observed in the RTM image of Figure~\ref{fig:CardamomRTMFWICube}. On the contrary, no significant updates are placed within the S-wave parameter, although similar geometrical features are present within the inverted parameter.
To highlight how the elastic FWI process updates the three parameters, we plot the difference between the inverted and the initial models in Figure~\ref{fig:CardamomTargetDiffEla}. As expected, an evident decrease at the target's position is observed within the P-wave and density parameters. On the other hand, the S-wave model does not present such a reduction in the same position and displays slightly different structures than the other two parameters. Moreover, the updates in the S-wave parameter are an order of magnitude smaller than those in the P-wave velocity. This behavior could be due to the limited surface offsets considered in this field-data example. However, the different structures and sensitivity provide evidence that the process does not introduce cross-talk artifacts during the elastic FWI.
Using the elastic parameters obtained by the target-oriented inversion, we compute two standard rock physics attributes, namely, the Vp/Vs ratio and the acoustic impedance (AI) (Figure~\ref{fig:CardamomTargetPropEla}). The average Vp/Vs ratio and AI of the low-velocity and low-density anomaly are approximately $1.7$ and $4.8$ $\text{g}/\text{cm}^3 \cdot \text{km}/\text{s}$, respectively (Figure~\ref{fig:CardamomTargetPropElaProfiles}). These values are consistent with a potential gas-charged sand, whose Vp/Vs ratio is commonly below 1.8 and whose AI ranges from 2.5 to 7 $\text{g}/\text{cm}^3 \cdot \text{km}/\text{s}$ \cite[]{gardner1968velocity,odegaard2003interpretation}. In addition, the rock-physical parameters above the gas-bearing sand present values in agreement with a high-shale-content formation, corresponding to the necessary cap rock of this reservoir.
This field application of the proposed target-oriented elastic FWI workflow demonstrates its ability to estimate a potential prospect's elastic properties correctly. The applied workflow retrieves the elastic parameters of a possibly gas-charged reservoir located on the flank of the salt diapir. Moreover, the method's ability to limit the computational domain to only the target area allows applying a wave-equation estimation method such as FWI. Using an elastic FWI on the entire $100$ $\text{km}^2$ domain is a challenge given the computational cost of solving the elastic wave equation. Using the same resources described within the Marmousi2 test, the elapsed time for performing a single iteration is $133$ minutes. We estimate a computational speed-up factor ranging from $500$ to $1000$ between the original and the target-oriented inversions for this specific example. This estimate does not consider the possibility of limiting the surface offsets during the elastic inversion process of the surface data. However, even considering this offset limitation, a considerable speed-up factor would be achieved by the target-oriented workflow.
\subsection{Redatuming of elastic pressure waves through extended linearized waveform inversion}
Figure~\ref{fig:flatVp2D} shows the P-wave velocity model for the flat interface test. The change in the elastic parameters is depicted in the three vertical profiles shown in Figure~\ref{fig:flatProfiles}, where all the elastic parameters increase across the interface. We generate elastic pressure data using an explosive source, whose time signature and spectrum are displayed in Figure~\ref{fig:flatWavelet}. The goal is to use the elastic pressure data recorded at $z=0$ km to reconstruct a new dataset as if the sources and the receivers had been placed $400$ m below the surface.
We record the pressure using 81 sources and 401 receivers placed at the surface and spaced by 50 and 10 m, respectively. A single reflected event is recorded by the receivers for each experiment, where a clear phase rotation is present as the offset between the source and receiver pair increases (Figure~\ref{fig:flatData}).
To perform the imaging step, we solve the linearized waveform inversion problem using an acoustic extended Born modeling operator and minimize the objective function in equation~\ref{eqn:ext-lsrtm} using 500 iterations of the LCG algorithm. The migration velocity model is a constant speed set to 2.5 km/s, corresponding to the correct overburden wave speed. The extended-space inversion achieves a relative objective-function decrease close to the numerical accuracy of single-precision operators (i.e., $10^{-6}$), showing the ability of the extended-image space to fully preserve all the elastic amplitude variations present in the recorded data.
The extended linearized waveform inversion problem's solution is a function of the spatial coordinates $x$ and $z$ and of the extended subsurface offset axis $h$. The shape of the image highly depends on the recorded events and the acquisition geometry employed. Figure~\ref{fig:flatODCIGs} shows the offset-domain common image gathers (ODCIGs) for two different $x$ coordinates. For $x=2.0$ km, the ODCIG appears to be focused around the zero-offset axis. This behavior is expected since the correct migration velocity has been used during the inversion process~\cite[]{biondi20043d}. The two linear features below $z=0.8$ km represent the head waves recorded in the longer offset shot gathers mapped into the image space. The other two faint linear features above $z=0.8$ km are caused by the limited acquisition aperture (i.e., the maximum source-receiver offset of $4$ km). On the other hand, when an ODCIG is extracted at $x=0.0$ km (Figure~\ref{fig:flatODCIGleft}), the image does not appear as focused as for the central-model position because fewer reflection angles have been illuminated from the surface acquisition.
Figure~\ref{fig:redatumFlatRefl} displays two representative shot gathers obtained when the acquisition geometry is placed at $z=400$ m. The same amplitude-versus-offset behavior is observed as in the surface pressure data (Figure~\ref{fig:flatData}).
The schematic of Figure~\ref{fig:redatuming} shows that a scattering point can be used to generate the surface and the sunk acquisition datasets. This observation also implies that the images obtained from two acquisitions, assuming infinite source-receiver extent, are identical. To demonstrate this statement, we compare the ODCIGs obtained by inverting the surface and sunk-acquisition data, respectively (Figures~\ref{fig:redatumFlatSurfOdcig} and~\ref{fig:redatumFlatDatOdcig}). Indeed, the only difference between the two ODCIGs is due to the limited acquisition aperture (Figure~\ref{fig:redatumFlatDiffOdcig}).
Since the data from the two acquisition geometries maps into the same extended image, we can use the ODCIGs obtained from the surface pressure to synthesize the events recorded by the sunk sources and receivers. Figure~\ref{fig:redatumFlatRecMid} shows the shot gather at $x=2.0$ km obtained by demigrating the ODCIGs of Figure~\ref{fig:redatumFlatSurfOdcig}. A similar amplitude behavior is present compared to the shot gather of Figure~\ref{fig:redatumFlatReflMid} up to an offset of $1$ km. The artifacts above the apex of the reflected event are due to the truncation of the surface acquisition geometry. In fact, when we demigrate the image where those truncation artifacts are masked (Figure~\ref{fig:redatumFlatRecMaskOdcig}), the reconstructed reflection does not present any spurious events (Figure~\ref{fig:redatumFlatRecMidMask}).
As we described using the schematic of Figure~\ref{fig:redatumingGeom}, not all the events associated with the new source-receiver pairs can be reconstructed from an image obtained with surface data. In this case, the maximum illuminated reflection angle from the surface geometry is approximately $64^{\circ}$, which corresponds to a maximum half offset of $1$ km for the sunk acquisition geometry. Figure~\ref{fig:redatumFlatRecMidMaskDiff} shows the difference between the reference and the reconstructed data of Figures~\ref{fig:redatumFlatReflMid} and~\ref{fig:redatumFlatRecMidMask}, where only energy at offsets greater than $1$ km is present, as expected.
\subsubsection{Sensitivity to assumed source wavelet}
Since the redatuming technique is based on an imaging step, it is necessary to provide a source wavelet signature. However, the data reconstruction is invariant under the choice of the source signature. To numerically verify this statement, we reconstruct the same events as in Figure~\ref{fig:redatumFlatRecMidMask}, but employ different waveforms during the linearized waveform inversion and demigration steps. Figure~\ref{fig:redatumWav90rot} displays the same wavelet as in Figure~\ref{fig:flatWaveletTime}, to which a 90-degree phase rotation has been applied. The right panel in Figure~\ref{fig:redatumWavRick} shows a Ricker wavelet with a dominant frequency of 15 Hz. These two waveforms are independently used to solve the extended linearized waveform inversion problem defined on the flat-interface model (Figure~\ref{fig:flatVp2D}). The extended gathers generated by this process are then used to reconstruct the elastic pressure events at the new datum (i.e., $400$ m).
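A constant phase rotation of a wavelet can be applied through its analytic signal; the short sketch below (Python) illustrates the operation used to build the 90-degree rotated waveform, and is an assumed implementation rather than the one employed here (scipy's \texttt{hilbert} returns $w + i\,\mathcal{H}\{w\}$, so a 90-degree rotation reduces to minus the Hilbert transform of the wavelet).
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def phase_rotate(wavelet, phi_deg):
    """Constant phase rotation: Re{exp(i*phi) * (w + i*H{w})}."""
    phi = np.deg2rad(phi_deg)
    analytic = hilbert(wavelet)
    return np.cos(phi) * wavelet - np.sin(phi) * np.imag(analytic)
\end{verbatim}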
Figure~\ref{fig:redatumWavRec} shows the redatumed pressure when the 90-degree rotated waveform and the Ricker wavelet are employed during the demigration process, respectively. No evident difference is visible when these two panels are compared to the one displayed in Figure~\ref{fig:redatumFlatRecMidMask}.
This invariance can also be seen by analyzing the amplitude behavior of the ADCIGs generated by the extended linearized waveform inversion when different wavelets are employed. The same AVA pattern is visible in the three panels of Figure~\ref{fig:redatumWavADCIG}, showing the ADCIGs generated with the three source signatures described in this section.
\subsubsection{Sensitivity to migration velocity}
Finally, we analyze the sensitivity of the reconstruction process with respect to the migration velocity map used during the linearized waveform inversion step. To this end, we perform the same redatuming steps previously described for the flat-interface case, but using a 5\% slower velocity than the correct one (i.e., $2375$ m/s). As expected, the ODCIG obtained during the migration process is not as focused as when the correct velocity is employed (compare Figures~\ref{fig:redatumRecWrongOdcig} and~\ref{fig:redatumFlatSurfOdcig}). Moreover, the typical curvature within the angle gather is observed when analyzing Figure~\ref{fig:redatumRecWrongAdcig} \cite[]{biondi2004angle}. The successful application of any target-oriented method, whether it is based on local solvers or on a redatuming step, depends on the accuracy of the overburden velocity model. Here, we show how our redatuming method is affected by overburden inaccuracies.
When the ODCIGs obtained using the slower migration velocity are demigrated to reconstruct the datumed elastic pressure, the AVO pattern is reconstructed, but the kinematics of the events are incorrect (Figure~\ref{fig:redatumRecWrong}). However, when the data are reconstructed at the original acquisition depth, both kinematics and amplitude effects are perfectly recovered.
This test demonstrates the importance of obtaining an accurate migration velocity model of the overburden before performing the redatuming step. This observation is generally true for any other redatuming technique. However, since the proposed technique is based on an imaging step, it provides a quality control through the kinematic behavior of the generated ODCIGs and ADCIGs with respect to the migration velocity model.
\subsection{Elastic target-oriented inversion applied to the Marmousi2 model}
We apply the described redatuming and elastic inversion techniques to the Marmousi2 model to estimate the elastic parameters associated with a gas-bearing reservoir located within a faulted anticline structure. The true subsurface elastic parameters are displayed in Figure~\ref{fig:MarmElaTrue}. This gas reservoir is located at a depth of $1.1$ km and spans approximately $500$ m in the horizontal direction, starting from $x=10$ km.
First, we apply an elastic FWI workflow to a surface dataset to retrieve the entire model's subsurface parameters starting from a smoothed version of the true model. Then, we solve an extended linearized waveform inversion to synthesize the reflected events generated by the gas reservoir recorded with an acquisition located in its vicinity. The redatumed dataset is then used within the same elastic FWI workflow to estimate the reservoir's elastic properties. Finally, we compare the target-oriented results with the elastic FWI applied to the entire surface dataset.
The observed elastic pressure data are generated from a surface acquisition composed of 140 sources and 567 receivers spaced by $120$ m and $30$ m along the x-axis, respectively. The modeling is performed using absorbing boundaries on all four edges of the simulation domain~\cite[]{israeli1981approximation}. Figure~\ref{fig:MarmWaveletTime} shows the time signature of the explosive source employed in this synthetic experiment. This wavelet's frequency content is effectively contained between $4$ and $13$ Hz, with a flat response between $6$ and $10$ Hz (Figure~\ref{fig:MarmWaveletSpectrum}). The choice of the lowest frequency is intended to simulate a field scenario in which the low-frequency content is commonly removed given its low signal-to-noise ratio (SNR).
Given the amplitude response of the reflected events from the subsurface interfaces, the dataset is dominated by reflections. In fact, in the two representative shot gathers displayed in Figure~\ref{fig:MarmShots}, these reflections present greater amplitudes than the transmitted waves. The presence of these reflected events represents the ideal application scenario for the redatuming technique, since the linearized waveform inversion process can map the AVO of the reflected events within the extended image space.
The initial elastic parameters are obtained by applying a moving average filter to the true model (Figure~\ref{fig:MarmElaInit}). This process produces an accurate initial elastic model and mitigates the possibility of falling into a non-useful local minimum given the chosen frequency content. Additionally, all the short-wavelength features of all the reservoirs present within the subsurface are entirely missing from the initial guess.
The full bandwidth of the data is simultaneously injected, and the three elastic parameters are jointly inverted within an elastic FWI procedure. Moreover, we apply the model-space multi-scale approach described in the previous chapter to mitigate the presence of local minima and to suppress any spatial artifacts that may arise during the inversion process. Three sequentially refined spline grids are employed, namely, $100$ m, $50$ m, and $25$ m spacing, while the propagation is performed with a $5$ m sampling in both directions. For each spline grid, the elastic FWI process employs the L-BFGS optimization method, and the inversion is stopped when an appropriate step-length value cannot be found. The convergence curve obtained using the described elastic FWI workflow is shown in Figure~\ref{fig:MarmElaObj}. The first spline grid reaches the closest local minimum after 90 iterations and achieves a relative objective function decrease of more than 80\%. The final elastic model is then projected onto a finer spline grid, and another 35 iterations are employed to further decrease the objective function. The spline refinement is performed again to obtain an additional objective function decrease.
The panels in Figure~\ref{fig:MarmElaInv} show the final elastic parameters obtained at the end of the described elastic FWI workflow. The P-wave velocity is the most accurately retrieved parameter and does not present any evident artifacts. On the other hand, the S-wave velocity is affected by some inversion artifacts and potential cross-talk at $x=3.0$ km and $z=1.0$ km. However, overall this parameter is in agreement with the true one shown in Figure~\ref{fig:MarmElaVsTrue}. The density parameter is also in good agreement with the true one, and the anomaly associated with the gas reservoir is correctly retrieved.
To evaluate the quality of the inverted elastic model, we display the predicted and the observed data within the same plot to compare the amplitude and timing of the reflected events. The representative shot gather is located at $x=4.190$ km, and only the positive offsets are compared. Figure~\ref{fig:MarmElaModObsDataInit} shows this comparison when the observed data are plotted along with the predicted pressure obtained using the initial elastic model. None of the reflected events are modeled by the initial guess, and a clear mismatch in the long-offset events is evident. On the contrary, after applying the FWI workflow (Figure~\ref{fig:MarmElaModObsDataInv}), the predicted data using the inverted elastic parameters are in excellent agreement with the observed events. This observation is also evident when comparing the initial and final data residuals for a representative shot gather (Figure~\ref{fig:MarmElaRes}).
Despite the elastic FWI workflow's ability to retrieve accurate elastic subsurface parameters from surface data, the overall computational cost makes the method hardly applicable to 3D field datasets. In fact, in this 2D synthetic example, each model point evaluation, which comprises an objective function and gradient evaluation, took approximately 3 hours on an Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz connected to 4 Nvidia Tesla V100-PCIe-16GB graphics processing units (GPUs). In the reported example, 180 model points have been tested, making the total elapsed time approximately 540 hours, corresponding to almost 23 days of computation.
This test shows the high computational cost associated with solving an elastic FWI problem. In field applications, higher frequencies than those used in this test may contain valuable information on the subsurface. However, the computational cost grows approximately as the fourth power of the maximum frequency, which severely limits the applicability of elastic FWI methodologies at high frequencies. The proposed target-oriented technique has the potential to overcome this limitation. High-resolution elastic parameters usually need to be estimated only within potential areas of interest or hazard (e.g., over-pressured zones, gas pockets, and natural resource reservoirs). These subsurface targets are recognizable from images generated from surface data, making the image-space redatuming and target-oriented elastic FWI subsequent steps of an exploration project.
As previously mentioned, the goal of this test is to characterize the elastic properties associated with a gas-bearing reservoir placed within the faulted anticline structure. The panels in the left column of Figure~\ref{fig:MarmTargInit} display the elastic parameters of the target structure located at $z = 1100$ m and $x = 10300$ m. A clear decrease in the P-wave velocity and density parameters is noticeable, while the S-wave velocity does not present such variation. The initial elastic model is obtained by extracting the same window from the parameters shown in Figure~\ref{fig:MarmElaInit} and is displayed in the right panels of Figure~\ref{fig:MarmTargInit}.
The smoothed P-wave velocity parameter of Figure~\ref{fig:MarmElaVpInit} is employed to solve an acoustic extended linearized waveform inversion of the surface elastic pressure data. The same absorbing boundary conditions have been used during the imaging step as those employed in the elastic surface-data computation and inversion. We apply 500 iterations of the linear conjugate-gradient method to reach the numerical minimum of the problem (Figure~\ref{fig:MarmTargLSRTMObjLog}). Within the zero-subsurface offset image of the target area, a high-amplitude response is associated with the reservoir (Figure~\ref{fig:MarmTargLSRTMZeroOff}). Additionally, the subsurface structures are correctly imaged since an accurate velocity model has been employed. This observation is also supported by the flat response of the ADCIG extracted at $x=10.3$ km (Figure~\ref{fig:MarmTargLSRTMADCIG}).
The extended image of the target area is demigrated to synthesize the elastic data as if the acquisition geometry were placed in the reservoir's proximity. The elastic pressure is reconstructed assuming 33 sources and 67 fixed receivers spaced every $60$ m and $30$ m, respectively, and recorded for $4$ s. We employ absorbing boundary conditions on all four edges to reconstruct and invert the redatumed data because we assume that most of the energy scattered from the target leaves the area of interest and is not reflected back from the top interfaces. To retrieve the target's elastic parameters, we employ a similar elastic FWI workflow as for the surface data (Figure~\ref{fig:MarmTargInv}). In this case, we use only two spline grids, namely $50$ m and $25$ m sampling. Overall, 20 iterations of BFGS have been applied to invert the elastic pressure on each spline grid (Figure~\ref{fig:MarmElaObjTarget}). A decrease of 98\% is reached after only $40$ iterations, instead of the $145$ needed for the surface-data inversion to achieve the same data-fitting level (Figure~\ref{fig:MarmElaResTarget}). The P-wave and density parameters of the reservoir gas anomaly are correctly retrieved, and no leakage of the gas anomaly is observed in the inverted S-wave parameter. An increase in all three parameters is present right below the reservoir. This artifact is related to the limited frequency range and the regularization parameter used during the imaging step; different regularization weights and image masks applied during the linearized waveform inversion step can diminish its impact. Since the artifact seems to affect only the first reconstructed event, a simple mask as described here can solve this issue. This test demonstrates the target-oriented approach's ability to retrieve the gas anomaly's elastic parameters and their spatial extent.
Compared to the elastic FWI applied to the surface data, the target-oriented inversion, including the migration process, is approximately 200 times computationally cheaper and reduces memory usage by a factor of 25. The main speed-up is due to the target-oriented workflow's ability to significantly reduce the simulation domain's size compared to the one in which the data have been acquired. The imaging step is not as computationally intensive as the elastic inversion: in the 2D case, computing elastic Green's functions is approximately $12$ times more expensive than computing acoustic wavefields, and in 3D this factor can reach approximately $30$. Moreover, the decreased domain size greatly simplifies the implementation of inversion methods because the elastic wavefields can be stored within the computer memory, avoiding the need for checkpointing techniques~\cite[]{anderson2012time}. Finally, the computational and memory savings can allow the application of elastic FWI methodologies to high-frequency data within a reasonable processing time.
\subsection{Extended imaging}
The formation of a subsurface-offset extended image $\tilde{m}$ is performed using a source and a receiver wavefield, $p_0$ and $q$, as follows:
\begin{align}\label{eqn:extended-imaging}
\tilde{m}(\mathbf{x},\mathbf{h}) = \int \ddot{p}_0(\mathbf{x}-\mathbf{h},t) q(\mathbf{x}+\mathbf{h},t)dt,
\end{align}
where $\mathbf{x}$ represents the spatial coordinates, $\mathbf{h}$ is the subsurface-offset vector, $t$ is the time axis, and the double-dot symbol denotes the second-order time derivative of a function.
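For illustration, a minimal NumPy sketch of the discretized form of equation~\ref{eqn:extended-imaging} for a single shot and a horizontal-offset-only extension is given below; the array layout, the offset measured in grid points, and the finite-difference time derivative are assumptions of this sketch rather than the actual implementation used in this work.
\begin{verbatim}
import numpy as np

def extended_image(p0, q, nh, dt):
    # p0, q: source and receiver wavefields of shape (nt, nz, nx), assumed to be
    #        precomputed on the migration grid for a single shot.
    # nh   : number of horizontal subsurface-offset samples on each side of h = 0
    #        (offset measured in grid points here).
    # dt   : time sampling, used for the second time derivative and the integral.
    nt, nz, nx = p0.shape
    p0_tt = np.gradient(np.gradient(p0, dt, axis=0), dt, axis=0)  # \ddot{p}_0
    image = np.zeros((2 * nh + 1, nz, nx))
    for ih, h in enumerate(range(-nh, nh + 1)):
        a = abs(h)
        sl_img = slice(a, nx - a)          # image positions x
        sl_src = slice(a - h, nx - a - h)  # source wavefield at x - h
        sl_rec = slice(a + h, nx - a + h)  # receiver wavefield at x + h
        image[ih, :, sl_img] = dt * np.einsum(
            'tzx,tzx->zx', p0_tt[:, :, sl_src], q[:, :, sl_rec])
    return image
\end{verbatim}
In practice, the contributions of all shots are accumulated into the same extended image volume.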
The source wavefield is computed in an acoustic isotropic medium and thus satisfies the following partial-differential equation (PDE):
\begin{align}\label{eqn:acoustic-fwd-wave-eq}
\left[\frac{1}{v^2(\mathbf{x})}\frac{\partial^2}{\partial t^2 } - \nabla^2\right]p_0(\mathbf{x},t) = s(\mathbf{x},t),
\end{align}
where $v$ represents the seismic P-wave speed or velocity and $s$ is the known seismic source. The receiver wavefield is obtained by solving the adjoint wave equation~\cite[]{fichtner2010full}:
\begin{align}\label{eqn:acoustic-adj-wave-eq}
\left[\frac{1}{v^2(\mathbf{x})}\frac{\partial^2}{\partial t^2 } - \nabla^2\right]^{*}q(\mathbf{x},t) = d(\mathbf{x},t),
\end{align}
where $^*$ denotes the adjoint operator and $d$ represents the recorded seismic data for a given shot. By discretizing the previous equations, we can define the adjoint extended Born operator $\tilde{\mathbf{B}}^{*}$ as follows:
\begin{align}\label{eqn:ext-born-adj}
\tilde{\mathbf{m}}= \tilde{\mathbf{B}}(\mathbf{v})^{*} \mathbf{d},
\end{align}
where $\tilde{\mathbf{m}}$ represents the extended image, $\mathbf{d}$ is the data vector, and the extended Born operator depends non-linearly on the velocity vector $\mathbf{v}$.
In the forward extended Born operator we employ the following scattering condition:
\begin{align}\label{eqn:extended-scattering}
s'(\mathbf{x},t) = \int \ddot{p}_0(\mathbf{x}-2\mathbf{h},t) \tilde{m}(\mathbf{x}-\mathbf{h},\mathbf{h})\mathbf{dh}.
\end{align}
The secondary source $s'$ is then propagated by solving the following PDE:
\begin{align}\label{eqn:acoustic-fwd-scat-wave-eq}
\left[\frac{1}{v^2(\mathbf{x})}\frac{\partial^2}{\partial t^2 } - \nabla^2\right]q'(\mathbf{x},t) = s'(\mathbf{x},t),
\end{align}
from which the Born-modeled data can be extracted by sampling the scattered wavefield $q'$ at the receiver positions. The discretization of equations~\ref{eqn:acoustic-fwd-wave-eq},~\ref{eqn:extended-scattering}, and~\ref{eqn:acoustic-fwd-scat-wave-eq} is used to define the forward extended Born modeling operator as follows:
\begin{align}\label{eqn:ext-born-fwd}
\mathbf{d}'= \tilde{\mathbf{B}}(\mathbf{v})\tilde{\mathbf{m}},
\end{align}
where $\mathbf{d}'$ represents the scattered data vector. The extended Born forward and adjoint operators are employed within the extended linearized waveform inversion step.
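Because these two operators are used as a forward-adjoint pair inside an iterative solver, a numerical dot-product test is a useful sanity check before any inversion is attempted. The sketch below is generic: the callables \texttt{fwd} and \texttt{adj} are placeholders standing in for discretized versions of equations~\ref{eqn:ext-born-fwd} and~\ref{eqn:ext-born-adj}, not the operators implemented in this work.
\begin{verbatim}
import numpy as np

def dot_product_test(fwd, adj, model_shape, data_shape, rtol=1e-4, seed=0):
    # Check that <B m, d> ~= <m, B* d> for random m and d.
    # fwd(model) -> data and adj(data) -> model are assumed callables.
    rng = np.random.default_rng(seed)
    m = rng.standard_normal(model_shape)
    d = rng.standard_normal(data_shape)
    lhs = np.vdot(fwd(m), d)   # <B m, d>
    rhs = np.vdot(m, adj(d))   # <m, B* d>
    assert abs(lhs - rhs) <= rtol * max(abs(lhs), abs(rhs)), (lhs, rhs)
    return lhs, rhs
\end{verbatim}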
\subsection{Extended linearized waveform inversion}
To form an optimal extended subsurface image, which is then used to synthesize data for a new acquisition, we minimize the following quadratic objective function:
\begin{eqnarray} \label{eqn:ext-lsrtm}
\phi(\tilde{\mathbf{m}})=\frac{1}{2}\left \|\tilde{\mathbf{B}}\tilde{\mathbf{m}} - \mathbf{d} \right \|_2^2,
\end{eqnarray}
in which we dropped the velocity dependency of the extended Born operator for brevity. Given the dimensions of the extended Born operator and its computational cost when applied to a given vector, especially in the 3D case, we solve the problem in equation~\ref{eqn:ext-lsrtm} using an iterative method, such as the linear conjugate-gradient (LCG) algorithm~\cite[]{Aster}. In certain scenarios, such as a sparse acquisition geometry or a complex overburden (e.g., subsalt targets), we instead solve the following regularized quadratic problem:
\begin{eqnarray} \label{eqn:ext-lsrtm-dso}
\phi(\tilde{\mathbf{m}})=\frac{1}{2}\left \|\tilde{\mathbf{B}}\tilde{\mathbf{m}} - \mathbf{d} \right \|_2^2 + \frac{\epsilon}{2} \left \|\mathbf{D}\tilde{\mathbf{m}} \right \|_2^2,
\end{eqnarray}
where $\mathbf{D}$ represents the regularization operator, and $\epsilon$ is a scalar weight associated with the regularization term. In this work, a differential semblance optimization (DSO) operator is employed within the regularization term \cite[]{symes1991velocity}. The effect of the added regularization term on the optimal image is to enhance its focusing, which in turn corresponds to an enhancement of the coherency of the image across reflection angles~\cite[]{shen2005wave,biondi2021target}. Effectively, this process interpolates across reflection angles that have not been illuminated by the surface acquisition in the extended-image space~\cite[]{prucha2002subsalt}.
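A commonly used discrete choice for $\mathbf{D}$ is a simple multiplication of each subsurface-offset slice by $|\mathbf{h}|$, which penalizes energy away from zero offset. The sketch below shows this weighting together with one standard conjugate-gradient variant (CGLS) applied to the augmented system $[\tilde{\mathbf{B}};\,\sqrt{\epsilon}\,\mathbf{D}]$; it is a simplified illustration under the stated assumptions (a self-adjoint $\mathbf{D}$ and placeholder operator callables), not the solver used to produce the results reported here.
\begin{verbatim}
import numpy as np

def dso_apply(m_ext, h_axis):
    # Differential semblance weighting: scale each offset slice by |h|.
    # m_ext: extended image of shape (nh, nz, nx); h_axis: offsets, length nh.
    return np.abs(h_axis)[:, None, None] * m_ext

def cgls_regularized(fwd, adj, dso, data, m0, eps, niter):
    # CGLS on the augmented system [B; sqrt(eps)*D] m = [d; 0], i.e., the
    # objective 0.5*||B m - d||^2 + 0.5*eps*||D m||^2.
    # fwd/adj are placeholder callables for the extended Born pair;
    # dso applies the (assumed self-adjoint) DSO weighting, e.g.,
    # dso = lambda m: dso_apply(m, h_axis).
    se = np.sqrt(eps)
    m = m0.astype(float).copy()
    r_d = data - fwd(m)            # data-space residual
    r_r = -se * dso(m)             # regularization-space residual
    s = adj(r_d) + se * dso(r_r)   # negative gradient direction
    p = s.copy()
    gamma = np.vdot(s, s).real
    for _ in range(niter):
        q_d = fwd(p)
        q_r = se * dso(p)
        alpha = gamma / (np.vdot(q_d, q_d).real + np.vdot(q_r, q_r).real)
        m += alpha * p
        r_d -= alpha * q_d
        r_r -= alpha * q_r
        s = adj(r_d) + se * dso(r_r)
        gamma_new = np.vdot(s, s).real
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m
\end{verbatim}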
\subsection{Redatuming through extended least-squares migration}
The goal of any redatuming method is to transform the observed data acquired at a certain location (e.g., at the surface) into a new dataset as if they had been acquired at a different location in the subsurface \cite[]{wapenaar1992elastic,mulder2005rigorous}. Here, we seek to reconstruct the data generated from a target area as if they were recorded with sources and receivers placed directly above the target. This process enables the application of an FWI algorithm only within the target area.
In our redatuming step, the optimal solution to the imaging problems of equations~\ref{eqn:ext-lsrtm} and~\ref{eqn:ext-lsrtm-dso}, $\tilde{\mathbf{m}}_{opt}$, is used to reconstruct the data $\mathbf{d}'$ corresponding to sources and receivers placed at a new subsurface acquisition level. The reconstruction is performed by the following demigration process:
\begin{eqnarray} \label{eqn:demig-datum}
\mathbf{d}'=\tilde{\mathbf{B}}'\tilde{\mathbf{M}}\tilde{\mathbf{m}}_{opt},
\end{eqnarray}
where $\tilde{\mathbf{M}}$ is a restriction operator that limits the extended image to only the target area. The symbol $'$ denotes quantities related to the new acquisition geometry. The success of this reconstruction method depends on the knowledge of an accurate overburden, which is a common assumption within any redatuming technique. The advantage of this redatuming process compared to other methods resides in the usage of the image space to reconstruct the subsurface data. In fact, the regularization term employed during this step relaxes the common strict constraint of having a dense source-receiver surface sampling.
To intuitively understand how this redatuming step works, let $d_{z_0}$ represent the subset of the observed data of interest (e.g., reflected events). Furthermore, we assume that $d_{z_0}$ is given by the following relation:
\begin{align}\label{eqn:surface-data-cont}
d_{z_0}(\mathbf{x}_r,\mathbf{x}_s,t) = \int g(\mathbf{x}_r, \mathbf{x},t) * g(\mathbf{x},\mathbf{x}_s,t)p(\mathbf{x})\mathbf{dx},
\end{align}
where $*$ denotes the time convolution, $p$ represents a scattering potential (also referred to as subsurface image), and $g$ is the Green's function for the acoustic wave equation (equation~\ref{eqn:acoustic-fwd-wave-eq}). The usage of wave-equation operators in this step allows us to take into account all the finite-frequency effects (e.g., Fresnel's zone, scale sensitivity as a function of frequency). The usage of an extended image space should not be confused with a Born approximation since the extended Born approximation can be used to fit any kind of wave arrivals~\cite[]{barnier2022full1}.
Figure~\ref{fig:redatuming1} schematically illustrates the process of computing the data $d_{z_0}$ for a single point in the scattering potential and one source-receiver pair. The source wavefield $G(\vec{x}_p,\vec{x}_s)$ propagates from the source position $\vec{x}_s$ at the surface $z_0$ and is then scattered by an image point placed at $\vec{x}_p$. This secondary source is then propagated by the receiver-side Green's function $G(\vec{x}_r,\vec{x}_p)$ and recorded by the device placed at $\vec{x}_r$. To obtain the same data but for a source-receiver pair placed at $z_d$, deeper than the surface level $z_0$, the same scattering potential can be used to generate the data for a source-receiver pair placed at $\vec{x}_s'$ and $\vec{x}_r'$ (Figure~\ref{fig:redatuming2}). Hence, the knowledge of the scattering potential $p$ enables the computation of the same event for a given source-receiver pair placed at two different depth levels.
Since this redatuming procedure is based on the formation of an image, the source-receiver distribution at the new datum depends on the maximum extent of the surface acquisition. Figure~\ref{fig:redatumingGeom} shows how the surface acquisition geometry extent changes when mapped to a deeper subsurface position $z_d$, assuming a constant velocity. The surface and datum acquisition extents $\bar{x}$ and $\bar{x}'$ identically illuminate the image point $\vec{x}_p$. Therefore, an image formed using the data acquired at $z_0$ with a source-receiver extent $\bar{x}$ can be employed to synthesize the data with an acquisition extent $\bar{x}'$ at $z_d$. The datumed acquisition geometry is reduced compared to the surface one. Thus, when generating the datumed dataset, a reduced source-receiver distribution must be used to avoid the introduction of data artifacts due to the limited illumination of the surface acquisition. Finally, the selection of the virtual geometry in complex geological scenarios can be performed using a ray-tracing algorithm for reflected events~\cite[]{vcerveny1987ray}.
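As a rough aid for choosing the datumed aperture in this constant-velocity, straight-ray picture, similar triangles give the extent reduction directly; the helper below is a simplified illustration with hypothetical numbers and is not how the geometry is selected in complex media, where the ray-tracing approach mentioned above is preferable.
\begin{verbatim}
def datum_extent(surface_extent, z_image, z_datum):
    # Aperture at the datum that spans the same reflection angles at an image
    # point of depth z_image, assuming straight rays in a constant velocity.
    assert 0.0 <= z_datum < z_image
    return surface_extent * (z_image - z_datum) / z_image

# Illustrative numbers only: a 4 km surface aperture over an image point at
# 0.8 km depth shrinks to 2 km when the geometry is sunk to a 0.4 km datum.
print(datum_extent(4.0, 0.8, 0.4))  # -> 2.0
\end{verbatim}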
In this simplified discussion, we assume that the recorded events are generated by a scattering or reflection process. Therefore, the virtual datumed geometry is kept horizontal and moved only along the vertical direction. However, as shown by~\cite{biondi2014simultaneous}, transmitted events, such as diving and head waves, can be reconstructed using an extended image, thus potentially allowing the virtual geometry to be positioned around the target area. This possibility, however, is not explored in this work, which focuses on reflected events. We employ this method only to redatum single-component pressure data, but the process can be modified to compute the particle velocities associated with P-wave reflected events. In the shown examples, we employ an acoustic wave-equation operator during the image formation since transmission effects can be approximated by using an acoustic engine. When strong mode conversions or elastic transmission effects are present (e.g., basalt layers), elastic wavefields should be considered during the extended migration step. This change would also allow a proper reconstruction of converted waves when an accurate overburden model is employed in this process. Finally, one could potentially employ the extended-image gathers to estimate the elastic properties of the reflectors using a Zoeppritz approximation~\cite[]{aki2002quantitative}. However, this approach would be hampered by the limitations of this approximation (e.g., locally flat interface, ray-based approximation), and thus would limit the application of an elastic inversion to simple geological scenarios.
\subsection{Elastic FWI}
Once the redatumed data have been reconstructed, they can be used within any elastic FWI framework. We assume the subsurface to be an elastic isotropic medium. Thus, to predict the observed data we employ the elastic isotropic wave equation in the velocity-stress form~\cite[]{virieux1986p}:
\begin{align}\label{eqn:elastic-wave-cont}
\rho(\mathbf{x})\frac{\partial v_i(\mathbf{x},t)}{\partial t} &= \frac{\partial \sigma_{ik}(\mathbf{x},t)}{\partial x_k}+f_i(\mathbf{x},t),\\\nonumber
\frac{\partial \sigma_{ij}(\mathbf{x},t)}{\partial t} &=\lambda(\mathbf{x})\frac{\partial v_k(\mathbf{x},t)}{\partial x_k}\delta_{ij}+\mu(\mathbf{x})\left[\frac{\partial v_i(\mathbf{x},t)}{\partial x_j}+\frac{\partial v_j(\mathbf{x},t)}{\partial x_i}\right]+M_{ij}(\mathbf{x},t),
\end{align}
where we employ Einstein notation and $\delta_{ij}$ represents the Kronecker delta. The subsurface is fully characterized by the three elastic parameters: density $\rho$, first Lam\'e parameter $\lambda$, and shear modulus $\mu$. The wavefield variables in this equation are given by the particle velocities $v_i$ and the stress tensor components $\sigma_{ij}$. The wave propagation is due to the presence of the source terms $f_i$ and $M_{ij}$ that represent a volumetric force field and the time derivative of the moment tensor, respectively \cite[]{aki2002quantitative}. The pressure data can be obtained by averaging the normal-stress components. In our applications, we consider single-component pressure data generated by explosive sources (i.e., $M_{ij} \neq 0$ for $i=j$).
By discretizing the PDEs in equation~\ref{eqn:elastic-wave-cont}, we can define the elastic wave-equation modeling operator $\mathbf{f}$. We define the elastic FWI objective function as follows:
\begin{eqnarray} \label{eqn:elastic-fwi}
\psi(\mathbf{m}_{ela})=\frac{1}{2}\left\| \mathbf{f}(\mathbf{m}_{ela}) - \mathbf{d}\right\|_2^2,
\end{eqnarray}
where $\mathbf{m}_{ela}$ represents the elastic parameters of the subsurface. In our elastic FWI workflow, we parameterize the elastic isotropic medium using the P- and S-wave velocities $V_p$ and $V_s$, and density $\rho$.
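Since the propagation operator in equation~\ref{eqn:elastic-wave-cont} is written in terms of ($\rho$, $\lambda$, $\mu$) while the inversion is parameterized in ($V_p$, $V_s$, $\rho$), a conversion between the two parameterizations is needed before each modeling call. The sketch below implements the standard isotropic relations; it reflects textbook definitions rather than the specific chain-rule machinery used inside our gradient computation.
\begin{verbatim}
import numpy as np

def vel_to_lame(vp, vs, rho):
    # (Vp, Vs, rho) -> (lambda, mu, rho) for an isotropic medium.
    mu = rho * vs ** 2
    lam = rho * vp ** 2 - 2.0 * mu
    return lam, mu, rho

def lame_to_vel(lam, mu, rho):
    # (lambda, mu, rho) -> (Vp, Vs, rho).
    vp = np.sqrt((lam + 2.0 * mu) / rho)
    vs = np.sqrt(mu / rho)
    return vp, vs, rho
\end{verbatim}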
\section{Introduction}
\input{intro}
\section*{Theory}
\input{theory}
\section*{Synthetic tests}
\input{numerical}
\section*{Field-data application}
\subsection*{Workflow of the methodology for field data}
\input{field-method}
\subsection*{OBN field-data application}
\input{field}
\section*{Conclusions}
\input{conclusions}
\section{ACKNOWLEDGMENTS}
We would like to thank the Stanford Exploration Project affiliate companies for their financial support. We also thank Mark Meadows for the useful suggestions made during the development stages of the method and Thomas Cullison for the help in the implementation of the 3D GPU-based finite-difference operators used in the reported field application. Finally, we would like to acknowledge Shell Exploration and Production Company for permission to show the obtained results and providing the described field dataset.
\newpage
\bibliographystyle{seg}
\subsection{Dataset overview}
The field dataset used in this example was acquired within the Gulf of Mexico by Shell Exploration and Production Company in 2010. The area sits in the Garden Banks region, about $362$ km southwest of New Orleans, Louisiana (Figure~\ref{fig:CardamomOverview}). This dataset was acquired to illuminate prospects belonging to a producing field and to improve the subsurface images around the diffuse salt bodies by leveraging the full wide-azimuth capability of an OBN geometry. The area has been subjected to diffuse diapirism and presents multiple salt bodies, which can make its seismic exploration challenging \cite[]{murray1966salt,thompson2011salt}.
From the entire dataset, we select $255$ nodes that recorded multi-component seismic data generated by $41000$ airgun sources covering an area of $100$ $\text{km}^2$. Figure~\ref{fig:CardamoGeo} shows the sources' and nodes' x-y positions overlaid on a depth slice of the initial velocity model depicting a salt diapir (i.e., the high-velocity circular portion). The sources were fired using a flip-flop acquisition geometry with a shot interval of approximately $25$ m and a source depth of $9.8$ m. The sail lines are aligned with the x-axis, and their interval along the cross-line or y-axis is approximately $100$ m. From Figure~\ref{fig:CardamoSouGeo}, it is clear where the source vessel had to divert its trajectory to abide by the acquisition restrictions in the proximity of production platforms. The multi-component nodes are placed at the seabed, and their depth varies between $0.83$ and $1.0$ km. Their spatial x-y interval is approximately $250$ m in both directions.
Besides the observed data, Shell also provided the elastic stiffness components $C_{11}$, $C_{33}$, $C_{13}$, $C_{44}$, and $C_{66}$ and a constant density value of the sediment layers. In addition to this information, they included the salt-body edges' positions obtained by interpreting subsurface images of the area. From the stiffness components, we construct an initial P-wave velocity model assuming an isotropic medium~\cite[]{mah2003determination}. Figure~\ref{fig:CardamomInit} shows representative sections of the 3D velocity model volume. The initial sediment velocity does not present any distinguishable geological feature. A salt diapir, which reaches the seabed depth, is located close to the center of the area of interest; its P-wave velocity is set to a homogeneous $4.5$ km/s, a common assumption based on field observations~\cite[]{zong2015elastic}.
\subsection{Data preprocessing}
We describe in detail the pre-processing steps we followed to apply our target-oriented workflow. Since we rely on FWI approaches to estimate the migration velocity model and the elastic parameters of the subsurface, we first identify the minimum frequency for which the active source signal has a sufficient signal-to-noise ratio (SNR). To this end, we apply different high-cut filters to a representative shot-binned common-receiver gather and visually identify the frequency for which the direct arrival signal is distinguishable from the natural background noise (Figure~\ref{fig:CardamoMinFreq}). The top two panels clearly show that no usable active energy is recorded below $2$ Hz. Conversely, from the bottom two panels, we notice that the SNR increases between $3$ and $4$ Hz. Thus, we set the lowest frequency used in this field example to $3$ Hz. On the higher end of the spectrum, we observe a good SNR up to $40$ Hz. However, to reduce the modeling computational cost for this study, we apply a $30$ Hz high-cut filter to the dataset.
The estimation of the source signature is one of the fundamental steps for the successful application of any FWI workflow~\cite[]{rickett2013variable,sun2014source,skopintseva2016importance}. Theoretically, it is possible to compute such a signature from first principles by knowing the airgun experimental setup~\cite[]{ziolkowski1982signature}. However, not all the necessary parameters are known, and other first-order experimental effects could prevent the theory from retrieving the correct source impulse response (e.g., faulty airguns in the battery, incorrect source parameters). For this reason, we follow a data-driven approach. To compute a proxy of the direct arrival waveform for each node, we apply a hyperbolic moveout (HMO) correction to all the nodes using a constant velocity of $1.5$ km/s~\cite[]{yilmaz2001seismic}. By stacking all the traces belonging to a representative common-receiver gather, we obtain the signal depicted in Figure~\ref{fig:CardamomDirectTime}. A strong peak is present at the signal's onset, followed by the typical bubble response commonly generated by airgun sources~\cite[]{watson2019controls}. The frequency spectrum presents multiple notches due to the source-side ghost and the bubble response (Figure~\ref{fig:CardamomDirectFreq}). Additionally, the frequency content rapidly decreases after $40$ Hz, and minimal energy is present above $160$ Hz.
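The proxy construction described above amounts to flattening the water-borne direct arrival with a constant-velocity moveout correction and stacking the aligned traces. The sketch below illustrates this idea for one common-receiver gather; the linear-interpolation time shift, the simple mean stack, and the neglect of the shallow source depth are assumptions of this illustration, not a description of the exact procedure applied to the dataset.
\begin{verbatim}
import numpy as np

def direct_arrival_proxy(gather, offsets, node_depth, dt, v_water=1.5):
    # gather     : array (ntraces, nt) of recorded pressure traces.
    # offsets    : source-node horizontal offsets in km (length ntraces).
    # node_depth : node depth below the sources in km (source depth neglected).
    # dt         : time sampling in s; v_water in km/s.
    ntraces, nt = gather.shape
    t = np.arange(nt) * dt
    stacked = np.zeros(nt)
    for i in range(ntraces):
        t_direct = np.hypot(offsets[i], node_depth) / v_water
        # shift the trace so that the predicted direct arrival lands at t = 0
        stacked += np.interp(t + t_direct, t, gather[i], left=0.0, right=0.0)
    return stacked / ntraces
\end{verbatim}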
Besides retrieving the P-wave speed using an FWI workflow, we also use the recorded data to produce a reverse time migration (RTM) image volume of the subsurface. This volume allows us to understand the area's geological setting and imaging challenges (i.e., position and shape of the salt dome). The usage of the time signature displayed in Figure~\ref{fig:CardamomDirectTime} for imaging would result in an image volume presenting multiple side lobes due to the bubble reverberations unless specific imaging conditions or procedures are used to form the subsurface image~\cite[]{valenciano2002deconvolution}. One standard practice to reduce the bubble reverberations is to apply prediction-error filters to the observed data to dampen these oscillations~\cite[]{wood1978debubbling}. However, in this application, we follow a different approach by combining acoustic wave-equation modeling and shaping filters.
The initial model of Figure~\ref{fig:CardamomInit} is used to compute an estimate of the observed data using an acoustic isotropic 3D approximation. We use the predicted pressure to calibrate the modeling operator on the dataset's observed amplitudes and to reshape the data response. For each common-receiver gather, we compute the direct-arrival proxy following the HMO-stacking procedure and shape the observed signal into the predicted one using a frequency-domain Wiener filtering operation. The proxy event is used to estimate the filter by solving the following equation in the frequency domain,
\begin{eqnarray}
\label{eqn:wiener-filter}
d^{pre}_i(t) &=& \big ( f_i * d^{obs}_i \big) (t),
\end{eqnarray}
where $*$ denotes the time-convolution operator, $d^{obs}_i$ and $d^{pre}_i$ are the observed and predicted direct-arrival proxies for the $i$-th node, respectively, and $f_i$ is the unknown filter for the $i$-th node. The panels of Figure~\ref{fig:CardamomShaping} show plots of the observed, initially predicted, and shaped data. The shaping procedure removes the bubble response from the observed data and makes the direct-arrival waveforms consistent between the initial predicted pressure and the observed data. Compared to other techniques, this shaping procedure automatically removes the instrument response, which can be different for each node, makes the observed data consistent with the acoustic modeling operator, and gives complete freedom in choosing the source signature employed within the modeling and imaging operators.
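In practice, equation~\ref{eqn:wiener-filter} is solved for each node as a stabilized spectral division. The sketch below shows one common way of doing so with a water-level term; the stabilization constant and the plain least-squares form are assumptions of this illustration, since the exact scheme applied to this dataset is not detailed here.
\begin{verbatim}
import numpy as np

def estimate_shaping_filter(d_obs, d_pre, eps=1e-3):
    # Frequency-domain Wiener filter F such that F * D_obs ~ D_pre.
    # d_obs, d_pre: 1D direct-arrival proxies of equal length;
    # eps: water-level fraction of the peak observed power (assumed value).
    D_obs = np.fft.rfft(d_obs)
    D_pre = np.fft.rfft(d_pre)
    power = np.abs(D_obs) ** 2
    return D_pre * np.conj(D_obs) / (power + eps * power.max())

def apply_shaping_filter(trace, F):
    # Apply the estimated filter to a recorded trace of the same node;
    # the trace must have the same number of samples as the proxies.
    return np.fft.irfft(np.fft.rfft(trace) * F, n=trace.size)
\end{verbatim}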
\subsection{Initial RTM images and geological scenario description}
The shaped data are employed to compute RTM images using the initial P-wave velocity model. These images provide information on the geological subsurface structures present in this area. To simultaneously utilize the up- and down-going energy recorded by the nodes, we perform the acoustic Green's function computation using a free-surface condition at the water surface during the imaging procedure~\cite[]{robertsson1996numerical}. This choice avoids the necessity of performing an up-down separation step~\cite[]{schalkwijk1999application}. Moreover, we apply a source-side illumination compensation to diminish any acquisition artifacts within the subsurface image volume~\cite[]{kaelin2006imaging}.
Figure~\ref{fig:CardamomInitRTM} shows depth slices extracted from the RTM image obtained using the initial velocity model. From the depth section extracted at $z=0.9$ km (Figure~\ref{fig:CardamomInitRTMZ1}), the top edges of the diapir are recognizable, as well as radially distributed features around it. These structures are commonly observed in areas where diapirism is present and are related to the presence of radially distributed faults caused by the rising of the salt diapir~\cite[]{stewart2006implications,coleman2018and}. Furthermore, within deeper sections, multiple structures associated with turbidite sequences can be observed~\cite[]{berg1982seismic}. The panel of Figure~\ref{fig:CardamomInitRTMZ2} shows an example of such structures on the bottom-right corner of the depth slice. Mild acquisition-footprint artifacts are present despite the application of source-side compensation during the migration process. Finally, high-amplitude features are visible close to the diapir flanks in even deeper portions of the image volume. For instance, in the depth section extracted at $z=2.865$ km a noticeable faulted structure is present (Figure~\ref{fig:CardamomInitRTMZ3}). Such image features are potentially associated with hydrocarbon prospects~\cite[]{harding1979structural,tiapkina2008imaging}.
\subsection{Acoustic FWI}
The redatuming process we employ relies on the knowledge of an accurate overburden velocity model. Thus, we apply an acoustic isotropic constant-density FWI process to the shaped pressure data to improve the initial velocity model. The acoustic wave-equation operators employ a free-surface boundary condition at the top edge and absorbing boundaries on the other ones. To mitigate the inaccuracy of the modeling operator in correctly predicting the observed event amplitudes, we employ the objective function proposed by \cite{shen2010near}, where a trace-by-trace normalization is applied to both modeled and observed data vectors before computing their difference. It can be easily shown that the minimization of such an objective function corresponds to the maximization of the zero-lag cross-correlation between the predicted and observed data~\cite[]{shen2014early}. Furthermore, we invert the data using a data-space multiscale approach~\cite[]{bunks1995multiscale}, where progressively wider frequency bands undergo the inversion procedure. The chosen frequency bands are the following: $3-6$, $3-9$, $3-12$, and $3-18$ Hz. For the first three bands, the modeling is performed with an FD grid size of $35$ m in all three dimensions, while, for the last band, the modeling is performed with a grid size of $25$ m. To mitigate the introduction of any inversion artifacts, we parameterize the model on spline grids with an x-y sampling of $175$, $105$, and $50$ m for each band, respectively. The spline grid in the z-axis is as fine as the FD sampling. The usage of a spline parameterization during the inversion has the additional advantage of limiting the convergence to local minima~\cite[]{barnier2019waveform}. Finally, we apply the acoustic reciprocity theorem so that the $255$ nodes act as sources and the $41000$ sources as receivers~\cite[]{aki2002quantitative}.
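For completeness, the equivalence between the trace-normalized objective and the zero-lag cross-correlation can be sketched for a single pair of predicted and observed traces, $\mathbf{d}^{pre}$ and $\mathbf{d}^{obs}$ (the per-trace notation here is illustrative):
\begin{align*}
\frac{1}{2}\left\| \frac{\mathbf{d}^{pre}}{\left\|\mathbf{d}^{pre}\right\|_2} - \frac{\mathbf{d}^{obs}}{\left\|\mathbf{d}^{obs}\right\|_2} \right\|_2^2
= 1 - \frac{\left\langle \mathbf{d}^{pre},\mathbf{d}^{obs}\right\rangle}{\left\|\mathbf{d}^{pre}\right\|_2\,\left\|\mathbf{d}^{obs}\right\|_2},
\end{align*}
so minimizing the left-hand side over the model parameters maximizes the normalized zero-lag cross-correlation on the right, making the objective insensitive to trace-by-trace amplitude errors.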
The acoustic FWI objective function is minimized using a BFGS algorithm~\cite[]{liu1989limited}. Overall, $216$ iterations are performed to invert the data up to $18$ Hz. Figure~\ref{fig:CardamomAcoFWIObj} displays the convergence curve of the acoustic FWI problem. The discontinuities in the curve correspond to the changes in the frequency band during the inversion. For the two central bands, a decrease of approximately 70\% is achieved by the minimization algorithm.
From the final P-wave velocity, we extract the same cross- and in-line sections of the 3D volume as the ones in Figure~\ref{fig:CardamomInit} (Figure~\ref{fig:CardamomFinalCrossIn}). The model inverted by the FWI scheme shows geologically consistent features. For instance, the panel in Figure~\ref{fig:CardamomFinalX1} presents a discontinuity potentially indicating the presence of a fault at $y=50.2$ km and $z=3$ km. Moreover, a clear low-velocity anomaly is placed on the salt flank at $z=2.8$ km (Figure~\ref{fig:CardamomFinalY2}). This decrease in velocity could be related to gas accumulation at the top of a hydrocarbon reservoir sealed by the salt body. Finally, the same inclusion reported by~\cite{dahlke2020applied} is retrieved by the acoustic FWI workflow (Figure~\ref{fig:CardamomFinalX2}). In addition, other velocity variations are present at the top of the diapir.
The inversion is affected by artifacts due to the limited acquisition geometry (left sides of Figures~\ref{fig:CardamomFinalX1} and~\ref{fig:CardamomFinalY3}). Moreover, low-velocity anomalies are placed at depths greater than $3$ km. These features are probably due to the convergence of the optimization algorithm to a local minimum. Thus, we limit our search for potential targets to depths shallower than $3$ km. In addition to the inclusion shown in Figure~\ref{fig:CardamomFinalX2}, the FWI workflow placed a low-velocity anomaly close to the top of the diapir (Figure~\ref{fig:CardamomInitComp}). This velocity decrease could be associated with salt-encased sediment packages included during the diapir formation~\cite[]{fernandez2017origin}.
As a quality control (QC) step, we compare the phase matching between the predicted and the observed pressure data before and after the inversion process is applied. Figure~\ref{fig:CardamomDataCompY} shows the phase matching when one of the source-spatial positions is fixed. The phase matching between modeled and observed data for both long- and short-offset traces improves after applying the FWI workflow. When a time slice is extracted from the modeled and the observed data, and we compare the phase agreement between the two (Figure~\ref{fig:CardamomDataCompT}), an excellent match is found using the final FWI acoustic model to generate the pressure data. On the other hand, from Figure~\ref{fig:CardamomDataCompY}, we notice that the accuracy of the matching diminishes for recording time greater than $5$ s for the mid- and short-offset ranges. This mismatch could explain the spurious low-velocity anomalies in the FWI acoustic model previously described.
Besides the satisfactory phase matching between the predicted and observed data, the quality of the RTM image greatly improves thanks to the more accurate velocity retrieved by the FWI process (Figure~\ref{fig:CardamomRTMcompX}). In fact, by comparing Figures~\ref{fig:CardamomRTMInitx1} and~\ref{fig:CardamomRTMFWIx1}, the fault planes between $y=49.8$ and $y=51.5$ km are more visible within the RTM image obtained on the FWI velocity model. For the sections passing through the salt diapir (Figures~\ref{fig:CardamomRTMInitx2} and~\ref{fig:CardamomRTMFWIx2}), the overall reflector continuity is improved in the RTM image generated on the FWI model, especially for the reflectors close to the top of the salt body. Moreover, the high-amplitude reflectors present a more consistent contact point with the salt flanks within the FWI-related RTM image. One interesting geological feature present on the left side of both sections in Figures~\ref{fig:CardamomRTMInitx2} and~\ref{fig:CardamomRTMFWIx2} is the sigmoidal-shaped reflectors at $z=2.0$ km. These events are due to the presence of the turbidite deposits previously described.
\subsection{Target-oriented elastic FWI of a potential prospect}
By analyzing the RTM image volume obtained using the final FWI velocity model, we identify a clear high-amplitude reflector in the proximity of the salt flank (Figure~\ref{fig:CardamomRTMFWICube}). This amplitude response could be related to gas accumulation at the top of a hydrocarbon reservoir~\cite[]{mazzotti1990prestack}. Therefore, we apply the redatuming technique, followed by an elastic FWI workflow to retrieve this potential prospect's elastic properties.
The first step is to solve an extended acoustic linearized inversion of the observed data to obtain an extended 3D image volume. We limit the observed data's maximum frequency to $12$ Hz to make the least-squares process feasible with the available computational resources. Higher frequencies can be employed as long as computer memory or disks can hold the migration wavefield and the extended images. Random-boundary conditions could be used to alleviate the memory requirements during the imaging step~\cite[]{clapp2009reverse}. The FD grid is set to $35$ m in each direction. Moreover, given the acquisition's full-azimuth nature, we employ $h_x$ and $h_y$ subsurface-offset extensions of $9$ points in each direction, resulting in a maximum absolute subsurface offset of $140$ m. Finally, a differential-semblance-optimization (DSO) regularization term is added to improve the image focusing~\cite[]{symes1994inversion}, and its weight is chosen on a heuristic basis. We focus the iterative process on inverting the data component stemming from the target area by employing a mask tailored for the potential prospect. After 30 iterations of a linear CG algorithm, an acceptable numerical minimum of the objective function is reached given the selected parameters (Figure~\ref{fig:CardamomExtLSRTMObj}). Each linear iteration took approximately 1 hour and 15 minutes on the same machine used for the Marmousi2 experiment. When extracting the zero-subsurface offset image within the target, an evident high-amplitude reflector is visible (Figure~\ref{fig:CardamomExtLSRTMTarg}). Moreover, fault planes are clearly affecting the horizon of interest.
The focusing of the offset-domain common-image gathers (ODCIGs) of the target area provides an additional QC step for assessing the migration velocity model's accuracy during the linear inversion process. A representative ODCIG extracted from the target volume is displayed in Figure~\ref{fig:CardamomExtODCIG}, which presents a clear focus around the zero-subsurface-offset axes. Furthermore, we convert this ODCIG into the angle-azimuth domain~\cite[]{biondi20043d} and extract the angle-domain common-image gather (ADCIG) (Figure~\ref{fig:CardamomExtADCIG}). This angle gather presents a flat response across reflection angles. These two observations suggest that the acoustic FWI process can retrieve an accurate overburden velocity. Finally, Figure~\ref{fig:CardamomExtADCIG-AVA} shows the amplitude response of the ADCIG at $z=2.7$ km and $\varphi=45^{\circ}$, which follows a class-3 AVO signature~\cite[]{castagna1993avo}. This amplitude response, in addition to the bright reflectivity displayed in Figure~\ref{fig:CardamomRTMFWICube}, is compelling evidence of the presence of a gas-bearing sand reservoir in that subsurface area.
We employ this extended image volume to synthesize the elastic pressure data with a new redatumed acquisition geometry placed at $z=2.1$ km. The sources' and receivers' x-y positions are shown in the panels of Figure~\ref{fig:CardamomTargetGeo}. We employ $150$ sources and $8444$ receivers, with the latter regularly sampled and spaced by $25$ m in each direction. This new acquisition is chosen based on how the original OBN geometry has illuminated the target. We purposely avoid placing acquisition devices on the salt body, given the limited illumination of the target by the original OBN geometry present in that section of the model. Figure~\ref{fig:CardamomDatumData} shows a representative shot gather where an increase in amplitude for the first reflected event is noticeable for receivers at a further distance from the source position. This behavior is a potential indication of an AVO signature from the chosen prospect.
The entire model domain is approximately $10\times10\times4$ $\text{km}^3$, while the target domain size is approximately $1.5\times3\times1$ $\text{km}^3$, making the target computational domain approximately $67$ times smaller compared to the original one.
The target area's initial P-wave velocity model is obtained by mildly smoothing the acoustically inverted FWI P-wave velocity. The initial density parameter is simply computed using Gardner's equation~\cite[]{gardner1974formation}. Finally, the starting guess for the S-wave velocity is obtained using the provided stiffness tensor components. Figure~\ref{fig:CardamomTargetInitEla} shows different panels extracted from the initial elastic parameters of the target area.
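Gardner's relation used for the initial density can be written compactly as below; the coefficients are the commonly quoted defaults for brine-saturated clastic rocks and are assumptions of this sketch, since the values adopted for this dataset are not specified here.
\begin{verbatim}
def gardner_density(vp_kms, a=1.74, b=0.25):
    # Gardner's relation rho = a * Vp**b, with Vp in km/s and rho in g/cm^3.
    # a = 1.74 and b = 0.25 are the commonly used default coefficients (assumed).
    return a * vp_kms ** b

# e.g., a 2.5 km/s sediment maps to roughly 2.19 g/cm^3
\end{verbatim}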
We apply an elastic FWI workflow to the redatumed dataset to estimate the target area's elastic parameters. The entire bandwidth of the reconstructed data is simultaneously injected (i.e., $3-12$ Hz), and the three elastic parameters are jointly inverted. The total recording time is $4.5$ s, slightly more than half of the original $8$ s records. The elastic FD operator is based on a $20$ m grid to abide by the dispersion and stability conditions. However, the inverted model is parameterized using a spline grid of $100$ m in the x and y axes, while the z-axis has the same sampling as the FD grid. As in the acoustic FWI step, the spline parameterization effectively acts as a regularization and avoids the introduction of spurious features during the inversion. By assuming the same scattering regime considered for the target-oriented inversion applied to the synthetic case, all the wave-equation operators are constructed using absorbing boundary conditions around the entire simulation domain.
We minimize the L2-norm difference between the predicted and the synthesized elastic pressure data with a BFGS optimizer for $10$ iterations. The described inversion process achieves an accurate fit of the redatumed data, as shown by the data-residual panels in Figure~\ref{fig:CardamomTargetRes}, and the retrieved subsurface parameter cubes are shown in Figure~\ref{fig:CardamomTargetFinalEla}. The inversion procedure introduces most of the changes within the P-wave and density parameters. A noticeable decrease in both is observed at the same position as the high-amplitude anomaly observed in the RTM image of Figure~\ref{fig:CardamomRTMFWICube}. On the contrary, no significant updates are placed within the S-wave parameter, although similar geometrical features are present within the inverted parameter.
To highlight how the elastic FWI process updates the three parameters, we plot the difference between the inverted and the initial models in Figure~\ref{fig:CardamomTargetDiffEla}. As expected, an evident decrease at the target's position is observed within the P-wave and density parameters. On the other hand, the S-wave model does not present such a reduction at the same position and displays slightly different structures than the other two parameters. Moreover, the updates in the S-wave parameter are an order of magnitude smaller compared with those in the P-wave velocity. This behavior could be due to the limited surface offsets considered in this field-data example. However, the different structures and sensitivities provide evidence that the elastic FWI process does not introduce cross-talk artifacts.
Using the elastic parameters obtained by the target-oriented inversion, we compute two standard rock-physics attributes, namely, the Vp/Vs ratio and the acoustic impedance (AI) (Figure~\ref{fig:CardamomTargetPropEla}). The average Vp/Vs ratio and AI of the low-velocity and low-density anomaly are approximately $1.7$ and $4.8$ $\text{g}/\text{cm}^3 \cdot \text{km}/\text{s}$, respectively (Figure~\ref{fig:CardamomTargetPropElaProfiles}). These values are consistent with a potential gas-charged sand, whose Vp/Vs ratio is commonly below $1.8$ and whose AI ranges from $2.5$ to $7$ $\text{g}/\text{cm}^3 \cdot \text{km}/\text{s}$ \cite[]{gardner1968velocity,odegaard2003interpretation}. In addition, the rock-physics parameters above the gas-bearing sand present values in agreement with a high-shale-content formation, corresponding to the necessary cap rock of this reservoir.
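Both attributes are direct functions of the inverted cubes; a trivial sketch, with the unit convention (km/s and g/cm$^3$) matching the values quoted above, is given below, where the anomaly mask is a hypothetical boolean volume.
\begin{verbatim}
def vp_vs_ratio(vp, vs):
    # element-wise P-to-S velocity ratio (dimensionless)
    return vp / vs

def acoustic_impedance(vp_kms, rho_gcc):
    # acoustic impedance in (g/cm^3)*(km/s)
    return rho_gcc * vp_kms

# e.g., average attributes over a boolean mask covering the anomaly:
# vp_vs_ratio(vp, vs)[mask].mean(), acoustic_impedance(vp, rho)[mask].mean()
\end{verbatim}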
This field application of the proposed target-oriented elastic FWI workflow demonstrates its ability to correctly estimate a potential prospect's elastic properties. The applied workflow retrieves the elastic parameters of a possibly gas-charged reservoir located on the flank of the salt diapir. Moreover, the method's ability to limit the computational domain to only the target area allows the application of a wave-equation estimation method such as FWI. Applying elastic FWI to the entire $100$ $\text{km}^2$ domain would be challenging given the computational cost of solving the elastic wave equation: using the same resources described within the Marmousi2 test, the elapsed time for performing a single iteration is $133$ minutes. We estimate a computational speed-up factor ranging from $500$ to $1000$ between the original and the target-oriented inversions for this specific example. This estimate does not consider the possibility of limiting the surface offsets during the elastic inversion process of the surface data. However, even considering this offset limitation, a considerable speed-up factor would be achieved by the target-oriented workflow.
\subsection{Redatuming of elastic pressure waves through extended linearized waveform inversion}
Figure~\ref{fig:flatVp2D} shows the P-wave velocity model for the flat-interface test. The change in the elastic parameters is depicted in the three vertical profiles shown in Figure~\ref{fig:flatProfiles}, where an increase of all the elastic parameters occurs across the interface. We generate elastic pressure data using an explosive source, whose time signature and spectrum are displayed in Figure~\ref{fig:flatWavelet}. The goal is to use the elastic pressure data recorded at $z=0$ km to reconstruct a new dataset as if the sources and the receivers had been placed $400$ m below the surface.
We record the pressure using 81 sources and 401 receivers placed at the surface and spaced by 50 and 10 m, respectively. A single reflected event is recorded by the receivers for each experiment, where a clear phase rotation is present as the offset between the source and receiver pair increases (Figure~\ref{fig:flatData}).
To perform the imaging step, we solve the linearized waveform inversion problem using an acoustic extended Born modeling operator and minimize the objective function in equation~\ref{eqn:ext-lsrtm} using 500 iterations of the LCG algorithm. The migration velocity model is a constant speed of 2.5 km/s, corresponding to the correct overburden wave speed. The extended-space inversion achieves a relative objective-function decrease close to the numerical accuracy of single-precision operators (i.e., $10^{-6}$), showing the ability of the extended-image space to fully preserve all the elastic amplitude variations present in the recorded data.
The extended linearized waveform inversion problem's solution is a function of the spatial coordinates $x$ and $z$ and of the extended subsurface offset axis $h$. The shape of the image highly depends on the recorded events and the acquisition geometry employed. Figure~\ref{fig:flatODCIGs} shows the offset-domain common image gathers (ODCIGs) for two different $x$ coordinates. For $x=2.0$ km, the ODCIG appears to be focused around the zero-offset axis. This behavior is expected since the correct migration velocity has been used during the inversion process~\cite[]{biondi20043d}. The two linear features below $z=0.8$ km represent the head waves recorded in the longer offset shot gathers mapped into the image space. The other two faint linear features above $z=0.8$ km are caused by the limited acquisition aperture (i.e., the maximum source-receiver offset of $4$ km). On the other hand, when an ODCIG is extracted at $x=0.0$ km (Figure~\ref{fig:flatODCIGleft}), the image does not appear as focused as for the central-model position because fewer reflection angles have been illuminated from the surface acquisition.
Figure~\ref{fig:redatumFlatRefl} displays two representative shot gathers obtained when the acquisition geometry is placed at $z=400$ m. The same amplitude-versus-offset behavior is observed as in the surface pressure data (Figure~\ref{fig:flatData}).
The schematic of Figure~\ref{fig:redatuming} shows that a scattering point can be used to generate the surface and the sunk acquisition datasets. This observation also implies that the images obtained from two acquisitions, assuming infinite source-receiver extent, are identical. To demonstrate this statement, we compare the ODCIGs obtained by inverting the surface and sunk-acquisition data, respectively (Figures~\ref{fig:redatumFlatSurfOdcig} and~\ref{fig:redatumFlatDatOdcig}). Indeed, the only difference between the two ODCIGs is due to the limited acquisition aperture (Figure~\ref{fig:redatumFlatDiffOdcig}).
Since the data from the two acquisition geometries map into the same extended image, we can use the ODCIGs obtained from the surface pressure to synthesize the events recorded by the sunk sources and receivers. Figure~\ref{fig:redatumFlatRecMid} shows the shot gather at $x=2.0$ km obtained by demigrating the ODCIGs of Figure~\ref{fig:redatumFlatSurfOdcig}. A similar amplitude behavior is present compared to the shot gather of Figure~\ref{fig:redatumFlatReflMid} up to an offset of $1$ km. The artifacts above the apex of the reflected event are due to the truncation of the surface acquisition geometry. In fact, when we demigrate the image where those truncation artifacts are masked (Figure~\ref{fig:redatumFlatRecMaskOdcig}), the reconstructed reflection does not present any spurious events (Figure~\ref{fig:redatumFlatRecMidMask}).
As we described using the schematic of Figure~\ref{fig:redatumingGeom}, not all the events associated with the new source-receiver pairs can be reconstructed from an image obtained with surface data. In this case, the maximum illuminated reflection angle from the surface geometry is approximately $64^{\circ}$, which corresponds to a maximum half offset of $1$ km for the sunk acquisition geometry. Figure~\ref{fig:redatumFlatRecMidMaskDiff} shows the difference between the reference and the reconstructed data of Figures~\ref{fig:redatumFlatReflMid} and~\ref{fig:redatumFlatRecMidMask}, where, as expected, only energy at offsets greater than $1$ km is present.
\subsubsection{Sensitivity to assumed source wavelet}
Since the redatuming technique is based on an imaging step, it is necessary to assume a source wavelet signature. However, the data reconstruction is invariant to the choice of this signature. To numerically verify this statement, we reconstruct the same events of Figure~\ref{fig:redatumFlatRecMidMask} but employ different waveforms during the linearized waveform inversion and demigration steps. Figure~\ref{fig:redatumWav90rot} displays the same wavelet of Figure~\ref{fig:flatWaveletTime} after a 90-degree phase rotation has been applied. The right panel in Figure~\ref{fig:redatumWavRick} shows a Ricker wavelet with a dominant frequency of 15 Hz. These two waveforms are independently used to solve the extended linearized waveform inversion problem defined on the flat-interface model (Figure~\ref{fig:flatVp2D}). The extended gathers generated by this process are then used to reconstruct the elastic pressure events at the new datum (i.e., $400$ m).
Figure~\ref{fig:redatumWavRec} shows the redatumed pressure when the 90-degree rotated waveform and the Ricker wavelet are employed during the demigration process, respectively. No evident difference is visible when these two panels are compared to the one displayed in Figure~\ref{fig:redatumFlatRecMidMask}.
This invariance can also be seen by analyzing the amplitude behavior of the ADCIGs generated by the extended linearized waveform inversion when different wavelets are employed. The same AVA pattern is visible in the three panels of Figure~\ref{fig:redatumWavADCIG}, showing the ADCIGs generated with the three source signatures described in this section.
\subsubsection{Sensitivity to migration velocity}
Finally, we analyze the sensitivity of the reconstruction process with respect to the migration velocity model used during the linearized waveform inversion step. To this end, we perform the same redatuming steps previously described for the flat-interface case, but using a velocity 5\% slower than the correct one (i.e., $2375$ m/s). As expected, the ODCIG obtained during the migration process is not as focused as when the correct velocity is employed (compare Figures~\ref{fig:redatumRecWrongOdcig} and~\ref{fig:redatumFlatSurfOdcig}). Moreover, the typical curving effect within the angle gather is observed when analyzing Figure~\ref{fig:redatumRecWrongAdcig} \cite[]{biondi2004angle}. The successful application of any target-oriented method, whether it is based on local solvers or on a redatuming step, depends on the accuracy of the overburden velocity model. Here, we show how our redatuming method is affected by overburden inaccuracies.
When the ODCIGs obtained using the slower migration velocity are demigrated to reconstruct the datumed elastic pressure, the AVO pattern is reconstructed but the kinematics of the events are incorrect (Figure~\ref{fig:redatumRecWrong}). However, when the data are reconstructed at the original acquisition depth, both kinematics and amplitude effects are perfectly reconstructed.
This test demonstrates the importance of obtaining an accurate migration velocity model of the overburden before performing the redatuming step. This observation is generally true for any other redatuming technique. However, since the proposed technique is based on an imaging step, it provides a quality-control tool through the kinematic behavior of the generated ODCIGs and ADCIGs with respect to the migration velocity model.
\subsection{Elastic target-oriented inversion applied to the Marmousi2 model}
We apply the described redatuming and elastic inversion techniques to the Marmousi2 model to estimate the elastic parameters associated with a gas-bearing reservoir located within a faulted anticline structure. The true subsurface elastic parameters are displayed in Figure~\ref{fig:MarmElaTrue}. This gas reservoir is located at a depth of $1.1$ km and spans approximately $500$ m in the horizontal direction, starting from $x=10$ km.
First, we apply an elastic FWI workflow to a surface dataset to retrieve the entire model's subsurface parameters starting from a smoothed version of the true model. Then, we solve an extended linearized waveform inversion to synthesize the reflected events generated by the gas reservoir recorded with an acquisition located in its vicinity. The redatumed dataset is then used within the same elastic FWI workflow to estimate the reservoir's elastic properties. Finally, we compare the target-oriented results with the elastic FWI applied to the entire surface dataset.
The observed elastic pressure data are generated from a surface acquisition composed of 140 sources and 567 receivers spaced by $120$ m and $30$ m along the x-axis, respectively. The modeling is performed using absorbing boundaries on all four edges of the simulation domain~\cite[]{israeli1981approximation}. Figure~\ref{fig:MarmWaveletTime} shows the time signature of the explosive source employed in this synthetic experiment. This wavelet's frequency content is effectively contained between $4$ and $13$ Hz with a flat response between $6$ and $10$ Hz (Figure~\ref{fig:MarmWaveletSpectrum}). The choice of the lowest frequency is intended to simulate a field scenario in which the low-frequency content is commonly removed given its low signal-to-noise ratio (SNR).
Given the amplitude response of the reflected events from the subsurface interfaces, the dataset is dominated by reflections. In fact, in the two representative shot gathers displayed in Figure~\ref{fig:MarmShots}, the reflections present greater amplitudes than the transmitted waves. The presence of these reflected events represents the ideal application scenario for the redatuming technique, since the linearized waveform inversion process can map the AVO of the reflected events into the extended image space.
The initial elastic parameters are obtained by applying a moving-average filter to the true model (Figure~\ref{fig:MarmElaInit}). This process produces an accurate initial elastic model and mitigates the possibility of falling into an uninformative local minimum given the chosen frequency content. Additionally, the short-wavelength features of all the reservoirs present within the subsurface are entirely missing from the initial guess.
The full bandwidth of the data is injected simultaneously, and the three elastic parameters are jointly inverted within an elastic FWI procedure. Moreover, we apply the model-space multi-scale approach described in the previous chapter to mitigate the presence of local minima and suppress spatial artifacts that may arise during the inversion process. Three sequentially refined spline grids are employed, namely $100$ m, $50$ m, and $25$ m spacing, while the propagation is performed with a $5$ m sampling in both directions. For each spline grid, the elastic FWI process employs the L-BFGS optimization method, and the inversion is stopped when an appropriate step-length value cannot be found. The convergence curve obtained using the described elastic FWI workflow is shown in Figure~\ref{fig:MarmElaObj}. The first spline grid reaches the closest local minimum after 90 iterations and achieves a relative objective-function decrease of more than 80\%. The final elastic model is then projected onto a finer spline grid, and another 35 iterations are employed to further decrease the objective function. The spline refinement is performed again to obtain an additional objective-function decrease.
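For illustration, a minimal Python sketch of this multi-scale loop is given below; the callables \texttt{objective\_and\_grad}, \texttt{project\_to\_spline}, and \texttt{project\_to\_fine} are placeholders for the actual modeling and spline-projection machinery, which is not reproduced here, so the sketch only conveys the structure of the workflow.
\begin{verbatim}
from scipy.optimize import minimize

def multiscale_fwi(d_obs, m0, spline_spacings, objective_and_grad,
                   project_to_spline, project_to_fine):
    """Minimal sketch of the model-space multi-scale elastic FWI loop.

    objective_and_grad(m, d_obs, spacing) must return (phi, grad);
    the projection callables move the model between the fine
    propagation grid and the current spline grid.
    """
    m_fine = m0
    for spacing in spline_spacings:            # e.g., [100.0, 50.0, 25.0] m
        m_spline = project_to_spline(m_fine, spacing)
        result = minimize(objective_and_grad, m_spline, jac=True,
                          args=(d_obs, spacing), method="L-BFGS-B")
        # the solver stops when no acceptable step length is found
        m_fine = project_to_fine(result.x, spacing)
    return m_fine
\end{verbatim}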
The panels in Figure~\ref{fig:MarmElaInv} show the final elastic parameters obtained at the end of the described elastic FWI workflow. The P-wave velocity is accurately retrieved and does not present any evident artifacts. On the other hand, the S-wave velocity is affected by some inversion artifacts and a potential cross-talk artifact positioned at $x=3.0$ km and $z=1.0$ km. However, overall this parameter is in agreement with the true one shown in Figure~\ref{fig:MarmElaVsTrue}. The density parameter is also in good agreement with the true one, and the anomaly associated with the gas reservoir is correctly retrieved.
To evaluate the quality of the inverted elastic model, we display the predicted and the observed data within the same plot to compare the amplitude and timing of the reflected events. The representative shot gather is located at $x=4.190$ km, and only the positive offsets are compared. Figure~\ref{fig:MarmElaModObsDataInit} shows this comparison when the observed data are plotted along with the predicted pressure obtained using the initial elastic model. The reflected events are not modeled by the initial guess, and a clear mismatch in the long-offset events is evident. On the contrary, after applying the FWI workflow (Figure~\ref{fig:MarmElaModObsDataInv}), the predicted data using the inverted elastic parameters are in excellent agreement with the observed events. This observation is also evident when comparing the initial and final data residuals for a representative shot gather (Figure~\ref{fig:MarmElaRes}).
Despite the elastic FWI workflow's ability to retrieve accurate elastic subsurface parameters from surface data, the overall computational cost makes the method hardly applicable to 3D field datasets. In fact, in this 2D synthetic example, each model-point evaluation, which comprises one objective-function and one gradient evaluation, took approximately 3 hours on an Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz connected to 4 Nvidia Tesla V100-PCIe-16GB graphics processing units (GPUs). In the reported example, 180 model points have been tested, making the total elapsed time approximately 540 hours, corresponding to almost 23 days of computation.
This test shows the high computational cost associated with solving an elastic FWI problem. In field applications, higher frequencies than those used in this test may contain valuable information on the subsurface. However, the growth of the computational cost as the fourth power of the maximum frequency severely limits the applicability of elastic FWI methodologies at high frequencies. The proposed target-oriented technique has the potential of overcoming this limitation. High-resolution elastic parameters usually need to be estimated only within potential areas of interest or hazard (e.g., over-pressured zones, gas pockets, and natural resource reservoirs). These subsurface targets are recognizable from images generated from surface data, making the image-space redatuming and the target-oriented elastic FWI subsequent steps of an exploration project.
As previously mentioned, the goal of this test is to characterize the elastic properties associated with a gas-bearing reservoir placed within the faulted anticline structure. The panels in the left column of Figure~\ref{fig:MarmTargInit} display the elastic parameters of the target structure located at $z = 1100$ m and $x = 10300$ m. A clear decrease in the P-wave velocity and density parameters is noticeable, while the S-wave velocity does not present such a variation. The initial elastic model is obtained by extracting the same target area from the parameters shown in Figure~\ref{fig:MarmElaInit} and is displayed in the right panels of Figure~\ref{fig:MarmTargInit}.
The smoothed P-wave velocity parameter of Figure~\ref{fig:MarmElaVpInit} is employed to solve an acoustic extended linearized waveform inversion of the surface elastic pressure data. The same absorbing boundary conditions have been used during the imaging step as those employed in the elastic surface-data computation and inversion. We apply 500 iterations of the linear conjugate-gradient method to reach the numerical minimum of the problem (Figure~\ref{fig:MarmTargLSRTMObjLog}). Within the zero-subsurface-offset image of the target area, a high-amplitude response is associated with the reservoir (Figure~\ref{fig:MarmTargLSRTMZeroOff}). Additionally, the subsurface structures are correctly imaged since an accurate velocity model has been employed. This observation is also supported by the flat response of the ADCIG extracted at $x=10.3$ km (Figure~\ref{fig:MarmTargLSRTMADCIG}).
The extended image of the target area is demigrated to synthesize the elastic data as if the acquisition geometry were placed in the reservoir's proximity. The elastic pressure is reconstructed assuming 33 sources and 67 fixed receivers spaced every $60$ m and $30$ m, respectively, and recorded for $4$ s. We employ absorbing boundary conditions on all four edges to reconstruct and invert the redatumed data because we assume that most of the energy scattered from the target leaves the area of interest and is not reflected back from the top interfaces. To retrieve the target's elastic parameters, we employ a similar elastic FWI workflow as for the surface data (Figure~\ref{fig:MarmTargInv}). In this case, we only use two spline grids, namely $50$ m and $25$ m sampling. Overall, 20 L-BFGS iterations have been applied to invert the elastic pressure on each spline grid (Figure~\ref{fig:MarmElaObjTarget}). A decrease of 98\% is reached after only $40$ iterations instead of the $145$ needed for the surface-data inversion to achieve the same data-fitting level (Figure~\ref{fig:MarmElaResTarget}). The P-wave velocity and density parameters of the reservoir gas anomaly are correctly retrieved. No leakage of the gas anomaly is observed in the inverted S-wave parameter. An increase in all three parameters is present right below the reservoir. This artifact is related to the limited frequency range and the regularization parameter used during the imaging step. Different regularization weights and image masks applied during the linearized waveform inversion step can diminish this artifact's impact. Since the artifact seems to affect only the first reconstructed event, a simple mask, as described here, can mitigate this issue. This test demonstrates the target-oriented approach's ability to retrieve the gas anomaly's elastic parameters and their spatial extent.
Compared to the elastic FWI applied to the surface data, the target-oriented inversion is approximately 200 times computationally cheaper, including the migration process, and leads to a 25-fold decrease in memory usage. The main computational speed-up is due to the target-oriented inversion workflow's ability to significantly reduce the size of the simulation domain compared to the one where the data have been acquired. The imaging step is not as intensive as the elastic inversion: in the 2D case, the computational cost of elastic Green's functions is approximately $12$ times higher than that of acoustic wavefields. This observation is also valid for the 3D case, where this factor can be as high as $30$. Moreover, the decreased domain size greatly simplifies the implementation of inversion methods because the elastic wavefields can be stored within the computer memory, avoiding the need for checkpointing techniques~\cite[]{anderson2012time}. Finally, the computational and memory cost-saving factors can allow the application of elastic FWI methodologies to high-frequency data with reasonable processing time.
\subsection{Extended imaging}
The formation of a subsurface-offset extended image $\tilde{m}$ is performed using a source and a receiver wavefield, $p_0$ and $q$, as follows:
\begin{align}\label{eqn:extended-imaging}
\tilde{m}(\mathbf{x},\mathbf{h}) = \int \ddot{p}_0(\mathbf{x}-\mathbf{h},t) q(\mathbf{x}+\mathbf{h},t)dt,
\end{align}
where $\mathbf{x}$ represents the spatial coordinates, $\mathbf{h}$ is the subsurface-offset vector, $t$ is the time axis, and the double dot denotes the second-order time derivative of a function.
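For concreteness, a minimal numerical sketch of the discretized imaging condition in equation~\ref{eqn:extended-imaging} is given below; the wavefield array layout, the restriction to a horizontal subsurface offset, and the simple finite-difference time derivative are assumptions made for this example only and do not reflect the actual implementation used in this work.
\begin{verbatim}
import numpy as np

def extended_image(p0, q, dt, nh):
    """Subsurface-offset extended imaging condition (horizontal offset only).

    p0, q : source and receiver wavefields with shape (nt, nz, nx)
    dt    : time-sampling interval
    nh    : number of offset samples on each side of zero offset
    Returns m_ext with shape (2*nh + 1, nz, nx).
    """
    nt, nz, nx = p0.shape
    # second-order time derivative of the source wavefield
    p0_tt = np.gradient(np.gradient(p0, dt, axis=0), dt, axis=0)
    m_ext = np.zeros((2 * nh + 1, nz, nx))
    for ih, h in enumerate(range(-nh, nh + 1)):
        # zero-lag cross-correlation of p0(x - h) with q(x + h)
        lo, hi = abs(h), nx - abs(h)
        m_ext[ih, :, lo:hi] = np.sum(
            p0_tt[:, :, lo - h:hi - h] * q[:, :, lo + h:hi + h], axis=0) * dt
    return m_ext
\end{verbatim}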
The source wavefield is computed using an acoustic isotropic medium and thus satisfies the following partial-differential equation (PDE):
\begin{align}\label{eqn:acoustic-fwd-wave-eq}
\left[\frac{1}{v^2(\mathbf{x})}\frac{\partial^2}{\partial t^2 } - \nabla^2\right]p_0(\mathbf{x},t) = s(\mathbf{x},t),
\end{align}
where $v$ represents the seismic P-wave speed or velocity and $s$ is the known seismic source. The receiver wavefield is obtained by solving the adjoint wave equation~\cite[]{fichtner2010full}:
\begin{align}\label{eqn:acoustic-adj-wave-eq}
\left[\frac{1}{v^2(\mathbf{x})}\frac{\partial^2}{\partial t^2 } - \nabla^2\right]^{*}q(\mathbf{x},t) = d(\mathbf{x},t),
\end{align}
where $^*$ denotes the adjoint operation and $d$ represents the recorded seismic data for a given shot. By discretizing the previous equations we can define the adjoint extended Born operator $\tilde{\mathbf{B}}^{*}$ as follows:
\begin{align}\label{eqn:ext-born-adj}
\tilde{\mathbf{m}}= \tilde{\mathbf{B}}(\mathbf{v})^{*} \mathbf{d},
\end{align}
where $\tilde{\mathbf{m}}$ represents the extended image, $\mathbf{d}$ is the data vector, and the extended Born operator depends non-linearly on the velocity vector $\mathbf{v}$.
In the forward extended Born operator we employ the following scattering condition:
\begin{align}\label{eqn:extended-scattering}
s'(\mathbf{x},t) = \int \ddot{p}_0(\mathbf{x}-2\mathbf{h},t) \tilde{m}(\mathbf{x}-\mathbf{h},\mathbf{h})\,d\mathbf{h}.
\end{align}
The secondary source $s'$ is then propagated by solving the following PDE:
\begin{align}\label{eqn:acoustic-fwd-scat-wave-eq}
\left[\frac{1}{v^2(\mathbf{x})}\frac{\partial^2}{\partial t^2 } - \nabla^2\right]q'(\mathbf{x},t) = s'(\mathbf{x},t),
\end{align}
from which the Born-modeled data are extracted by sampling the scattered wavefield $q'$ at the receiver positions. The discretization of equations~\ref{eqn:acoustic-fwd-wave-eq},~\ref{eqn:extended-scattering}, and~\ref{eqn:acoustic-fwd-scat-wave-eq} is used to define the forward extended Born modeling operator as follows:
\begin{align}\label{eqn:ext-born-fwd}
\mathbf{d}'= \tilde{\mathbf{B}}(\mathbf{v})\tilde{\mathbf{m}},
\end{align}
where $\mathbf{d}'$ represents the scattered data vector. The extended Born forward and adjoint operators are employed within the extended linearized waveform inversion step.
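A corresponding sketch of the extended scattering condition in equation~\ref{eqn:extended-scattering} is shown below; as before, a single horizontal offset axis and simple array slicing are assumed for illustration and do not correspond to the actual operator implementation.
\begin{verbatim}
import numpy as np

def extended_secondary_source(p0_tt, m_ext):
    """Extended Born scattering condition for the secondary source s'.

    p0_tt : twice time-differentiated source wavefield, shape (nt, nz, nx)
    m_ext : extended image, shape (2*nh + 1, nz, nx)
    Returns s_prime with shape (nt, nz, nx).
    """
    nt, nz, nx = p0_tt.shape
    nh = (m_ext.shape[0] - 1) // 2
    s_prime = np.zeros_like(p0_tt)
    for ih, h in enumerate(range(-nh, nh + 1)):
        lo, hi = 2 * abs(h), nx - 2 * abs(h)
        # s'(x) += p0_tt(x - 2h) * m(x - h, h)
        s_prime[:, :, lo:hi] += (
            p0_tt[:, :, lo - 2 * h:hi - 2 * h] * m_ext[ih, :, lo - h:hi - h])
    return s_prime
\end{verbatim}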
\subsection{Extended linearized waveform inversion}
To form an optimal extended subsurface image, which is then used to synthesize data for a new acquisition, we minimize the following quadratic objective function:
\begin{eqnarray} \label{eqn:ext-lsrtm}
\phi(\tilde{\mathbf{m}})=\frac{1}{2}\left \|\tilde{\mathbf{B}}\tilde{\mathbf{m}} - \mathbf{d} \right \|_2^2,
\end{eqnarray}
in which we dropped the velocity dependence of the extended Born operator for brevity. Given the dimensions of the extended Born operator and its computational cost when applied to a given vector, especially for the 3D case, we solve the problem in equation~\ref{eqn:ext-lsrtm} using an iterative method, such as linear conjugate gradient (LCG)~\cite[]{Aster}. In certain scenarios, such as sparse acquisition geometry or complex overburden (e.g., subsalt targets), we instead solve the following regularized quadratic problem:
\begin{eqnarray} \label{eqn:ext-lsrtm-dso}
\phi(\tilde{\mathbf{m}})=\frac{1}{2}\left \|\tilde{\mathbf{B}}\tilde{\mathbf{m}} - \mathbf{d} \right \|_2^2 + \frac{\epsilon}{2} \left \|\mathbf{D}\tilde{\mathbf{m}} \right \|_2^2,
\end{eqnarray}
where $\mathbf{D}$ represents the regularization operator, and $\epsilon$ is a scalar weight associated with the regularization term. In this work, a differential semblance optimization (DSO) operator is employed within the regularization term \cite[]{symes1991velocity}. The effect of the added regularization term on the optimal image is to enhance its focusing, which in turn corresponds to an enhancement of the coherency of the image across reflection angles~\cite[]{shen2005wave,biondi2021target}. Effectively, this process interpolates across reflection angles that have not been illuminated by the surface acquisition in the extended-image space~\cite[]{prucha2002subsalt}.
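The sketch below shows one simple way to realize the regularized objective function of equation~\ref{eqn:ext-lsrtm-dso}, with the DSO operator implemented as a multiplication of each offset slice by $|h|$; the callable \texttt{born\_fwd} stands in for the extended Born modeling operator and is an assumption of this example rather than an actual interface of our codebase.
\begin{verbatim}
import numpy as np

def dso_weight(m_ext, dh):
    """Apply the DSO operator: scale each subsurface-offset slice by |h|."""
    nh = (m_ext.shape[0] - 1) // 2
    h = np.arange(-nh, nh + 1) * dh
    return np.abs(h)[:, None, None] * m_ext

def regularized_objective(m_ext, born_fwd, d_obs, eps, dh):
    """phi(m) = 0.5*||B m - d||^2 + 0.5*eps*||D m||^2 (a sketch)."""
    residual = born_fwd(m_ext) - d_obs
    return (0.5 * np.sum(residual ** 2)
            + 0.5 * eps * np.sum(dso_weight(m_ext, dh) ** 2))
\end{verbatim}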
\subsection{Redatuming through extended least-squares migration}
The goal of any redatuming method is to transform the observed data acquired at a certain location (e.g., at the surface) into a new dataset as if they had been acquired at a different location in the subsurface \cite[]{wapenaar1992elastic,mulder2005rigorous}. Here, we seek to reconstruct the data generated from a target area that is recorded with sources and receivers placed directly above the target. This process enables the application of an FWI algorithm only within the target area.
In our redatuming step, the optimal solution to the imaging problems of equations~\ref{eqn:ext-lsrtm} and~\ref{eqn:ext-lsrtm-dso}, $\tilde{\mathbf{m}}_{opt}$, is used to reconstruct the data $\mathbf{d}'$ corresponding to sources and receivers placed at a new subsurface acquisition level. The reconstruction is performed by the following demigration process:
\begin{eqnarray} \label{eqn:demig-datum}
\mathbf{d}'=\tilde{\mathbf{B}}'\tilde{\mathbf{M}}\tilde{\mathbf{m}}_{opt},
\end{eqnarray}
where $\tilde{\mathbf{M}}$ is a restriction operator that limits the extended image to only the target area. The symbol $'$ denotes quantities related to the new acquisition geometry. The success of this reconstruction method depends on the knowledge of an accurate overburden model, which is a common assumption within any redatuming technique. The advantage of this redatuming process compared to other methods resides in the usage of the image space to reconstruct the subsurface data. In fact, the regularization term employed during this step relaxes the common strict constraint of having dense source-receiver surface sampling.
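Conceptually, the redatuming step of equation~\ref{eqn:demig-datum} reduces to masking the optimal extended image and demigrating it with an operator built for the new geometry, as in the short sketch below; \texttt{born\_fwd\_new} is a placeholder for the operator $\tilde{\mathbf{B}}'$ and is assumed to be available.
\begin{verbatim}
def redatum(m_ext_opt, target_mask, born_fwd_new):
    """Demigration-based redatuming: d' = B' M m_opt (a sketch).

    target_mask  : array of ones inside the target area, zeros elsewhere
    born_fwd_new : callable implementing the extended Born modeling
                   operator for the sunk acquisition geometry
    """
    return born_fwd_new(target_mask * m_ext_opt)
\end{verbatim}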
To intuitively understand how this redatuming step works, let $d_{z_0}$ represent the subset of the observed data of interest (e.g., reflected events). Furthermore, we assume that $d_{z_0}$ is given by the following relation:
\begin{align}\label{eqn:surface-data-cont}
d_{z_0}(\mathbf{x}_r,\mathbf{x}_s,t) = \int g(\mathbf{x}_r, \mathbf{x},t) * g(\mathbf{x},\mathbf{x}_s,t)p(\mathbf{x})\mathbf{dx},
\end{align}
where $*$ denotes the time convolution, $p$ represents a scattering potential (also referred to as subsurface image), and $g$ is the Green's function for the acoustic wave equation (equation~\ref{eqn:acoustic-fwd-wave-eq}). The usage of wave-equation operators in this step allows us to take into account all the finite-frequency effects (e.g., Fresnel's zone, scale sensitivity as a function of frequency). The usage of an extended image space should not be confused with a Born approximation since the extended Born approximation can be used to fit any kind of wave arrivals~\cite[]{barnier2022full1}.
Figure~\ref{fig:redatuming1} schematically illustrates the process of computing the data $d_{z_0}$ for a single point in the scattering potential and one source-receiver pair. The source wavefield $G(\vec{x}_p,\vec{x}_s)$ propagates from the source position $\vec{x}_s$ at the surface $z_0$ and is then scattered by an image point placed at $\vec{x}_p$. This secondary source is then propagated by the receiver-side Green's function $G(\vec{x}_r,\vec{x}_p)$ and recorded by the device placed at $\vec{x}_r$. To obtain the same data but for a source-receiver pair placed at $z_d$, deeper than the surface level $z_0$, the same scattering potential can be used to generate the data for a source-receiver pair placed at $\vec{x}_s'$ and $\vec{x}_r'$ (Figure~\ref{fig:redatuming2}). Hence, the knowledge of the scattering potential $p$ enables the computation of the same event for a given source-receiver pair placed at two different depth levels.
Since this redatuming procedure is based on the formation of an image, the source-receiver distribution at the new datum depends on the maximum extent of the surface acquisition. Figure~\ref{fig:redatumingGeom} shows how the surface acquisition extent changes when mapped to a deeper subsurface position $z_d$, assuming a constant velocity. The surface and datum acquisition extents $\bar{x}$ and $\bar{x}'$ identically illuminate the image point $\vec{x}_p$. Therefore, an image formed using the data acquired at $z_0$ with a source-receiver extent $\bar{x}$ can be employed to synthesize the data with an acquisition extent $\bar{x}'$ at $z_d$. The datumed acquisition geometry is reduced compared to the surface one. Thus, when generating the datumed dataset, a reduced source-receiver distribution must be used to avoid the introduction of data artifacts due to the limited illumination of the surface acquisition. Finally, the selection of the virtual geometry in complex geological scenarios can be performed using a ray-tracing algorithm for reflected events~\cite[]{vcerveny1987ray}.
In this simplified discussion, we assume that the recorded events are generated by a scattering or reflection process. Therefore, the virtual datumed geometry is kept horizontal and moved only along the vertical direction. However, as shown by~\cite{biondi2014simultaneous}, transmitted events, such as diving and head waves, can be reconstructed using an extended image, which would in principle allow the virtual geometry to be positioned around the target area. This possibility is not explored in this work, which is focused on reflected events. We employ this method only to redatum single-component pressure data, but the process can be modified to compute the particle velocities associated with P-wave reflected events. In the shown examples, we employ an acoustic wave-equation operator during the image formation since transmission effects can be approximated using an acoustic engine. When strong mode conversion or elastic transmission effects are present (e.g., basalt layers), elastic wavefields should be considered during the extended migration step. This change would also allow a proper reconstruction of converted waves when an accurate overburden model is employed in this process. Finally, one could potentially employ the extended-image gathers to estimate the elastic properties of the reflectors using a Zoeppritz approximation~\cite[]{aki2002quantitative}. However, this approach would be hampered by the limitations of this approximation (e.g., locally flat interfaces, ray-based assumptions), and thus would restrict the application of an elastic inversion to simple geological scenarios.
\subsection{Elastic FWI}
Once the redatumed data have been reconstructed, they can be used within any elastic FWI framework. We assume the subsurface to be an elastic isotropic medium. Thus, to predict the observed data we employ the elastic isotropic wave equation in the velocity-stress form~\cite[]{virieux1986p}:
\begin{align}\label{eqn:elastic-wave-cont}
\rho(\mathbf{x})\frac{\partial v_i(\mathbf{x},t)}{\partial t} &= \frac{\partial \sigma_{ik}(\mathbf{x},t)}{\partial x_k}+f_i(\mathbf{x},t),\\\nonumber
\frac{\partial \sigma_{ij}(\mathbf{x},t)}{\partial t} &=\lambda(\mathbf{x})\frac{\partial v_k(\mathbf{x},t)}{\partial x_k}\delta_{ij}+\mu(\mathbf{x})\left[\frac{\partial v_i(\mathbf{x},t)}{\partial x_j}+\frac{\partial v_j(\mathbf{x},t)}{\partial x_i}\right]+M_{ij}(\mathbf{x},t),
\end{align}
where we employ Einstein notation and $\delta_{ij}$ represents the Kronecker delta. The subsurface is fully characterized by the three elastic parameters: density $\rho$, first Lam\'e parameter $\lambda$, and shear modulus $\mu$. The wavefield variables in this equation are given by the particle velocities $v_i$ and the stress-tensor components $\sigma_{ij}$. The wave propagation is due to the presence of the source terms $f_i$ and $M_{ij}$, which represent a volumetric force field and the time derivative of the moment tensor, respectively \cite[]{aki2002quantitative}. The pressure data can be obtained by averaging the normal-stress components. In our applications, we consider single-component pressure data generated by explosive sources (i.e., $M_{ij} \neq 0$ only for $i=j$).
By discretizing the PDEs in equation~\ref{eqn:elastic-wave-cont}, we can define the elastic wave-equation modeling operator $\mathbf{f}$. We define the elastic FWI objective function as follows:
\begin{eqnarray} \label{eqn:elastic-fwi}
\psi(\mathbf{m}_{ela})=\frac{1}{2}\left\| \mathbf{f}(\mathbf{m}_{ela}) - \mathbf{d}\right\|_2^2,
\end{eqnarray}
where $\mathbf{m}_{ela}$ represents the elastic parameters of the subsurface. In our elastic FWI workflow, we parameterize the elastic isotropic medium using the P- and S-wave velocities $V_p$ and $V_s$, and density $\rho$.
"redpajama_set_name": "RedPajamaArXiv"
} |
\section{Introduction}
Within the last decade, spectral line intensity mapping has been proposed as an additional, complementary probe of the large-scale structure (LSS) of star-forming galaxies during the epoch of reionization (EoR)~\cite{1999ApJ...512..547S,2008A&A...489..489R,2010JCAP...11..016V}. Glimpses into this era have been limited to observations of individual massive galaxies and quasars at high redshifts, provided by the Hubble Space Telescope (HST) and ground-based telescopes. While in the future we anticipate new instruments like the Atacama Large Millimeter Array (ALMA) and the James Webb Space Telescope (JWST) providing more detailed views of this ``cosmic dawn'', they will be restricted by relatively small fields of view and an inability to observe galaxies that are simply too faint to detect individually. Intensity mapping offers a complementary glimpse of the three-dimensional structure of the high-redshift universe by imaging aggregate line emissions from thousands of unresolved objects and studying the large-scale fluctuations in the given line intensity due to the clustering of unresolved sources.
Intensity mapping can be performed using many different spectral lines, the 21 cm neutral hydrogen line being among the most common, given its unique insight into the evolution of the neutral IGM during the EoR~\cite{1997ApJ...475..429M,2004PhRvL..92u1301L,2004ApJ...608..622Z,2006PhR...433..181F,2010ARA&A..48..127M,2011AAS...21710703P}. In this paper, we focus on the millimeter to far-infrared rotational transitions of carbon monoxide (CO), a molecule that forms primarily in star-forming regions and whose intensity maps promise a wealth of information on the spatial distribution of star formation in the universe~\cite{2014MNRAS.443.3506B}. While CO intensity fluctuations have already been studied, initially as foreground contaminants to cosmic microwave background (CMB) measurements~\cite{2008A&A...489..489R} and then as probes of LSS~\cite{2010JCAP...11..016V,2011ApJ...730L..30C,2011ApJ...728L..46G,2011ApJ...741...70L,2013ApJ...768...15P}, these studies have been limited to the lowest order transitions of the molecule, mainly, CO(1-0) and CO(2-1). These lines are often considered because they are typically among the brightest and have redshifted frequencies that can potentially be observed from the ground. However, with the advent of ALMA\footnote{http://almascience.nrao.edu} and its frequency coverage (84 - 950 GHz), fluctuations in the line emission of many high-J CO transitions will potentially be available and can be measured and translated into a 3D map of the early universe. Having access to multiple CO rotational lines will further facilitate redshift identification and mitigate line confusion, making it possible to statistically isolate the fluctuations from a particular redshift by cross-correlating the emission from different sets of lines~\cite{2010JCAP...11..016V,2013fgu..book.....L}.
Attempts in the recent literature to obtain a theoretical estimate of the CO emission signal from high-redshift galaxies have relied heavily on empirical relations calibrated from local observations and are limited almost exclusively to the $^{12}$CO J=1$\rightarrow$0 transition line. To calculate this mean CO brightness, a simple model is often adopted that connects the strength of the CO emission to the abundance of the dark matter halos that host CO-luminous galaxies.~\cite{2010JCAP...11..016V} construct this model by first approximating a galaxy's star formation rate (SFR) as a linear function of the mass of the galaxy's host halo. They then further assume a linear relationship between the line luminosity and SFR and adopt the $L_{CO}$ to SFR ratio from M\,82 to calibrate the proportionality constant.~\cite{2011ApJ...741...70L} embrace a similar approach, adopting the $SFR-M$ relation proposed by~\cite{2010JCAP...11..016V}, but using a set of empirical scaling relations between a galaxy's SFR, far-infrared luminosity, and CO(1-0) luminosity that have been measured for galaxies at $z \lesssim$ 3. Both studies lead to a simple empirical estimate of the CO luminosity that is linear in halo mass ($L_{CO} \propto SFR \propto M$) and that relies on the extrapolation of low-redshift calibrations to higher redshifts corresponding to the EoR.~\cite{2011ApJ...728L..46G} arrives at the relation between CO(1-0) luminosity and the halo mass via a different route, making use of the Millennium numerical simulation results of~\cite{2009ApJ...698.1467O}, a study which, although it incorporates many physical processes to model the CO emission from high-redshift galaxies, still invokes low-redshift measurements to calibrate the normalization factor of the CO luminosity for a given halo.
These various models, among others, have led to estimates of the CO power spectra amplitude signal that vary over a range spanning two orders of magnitude, illustrating the lack of theoretical understanding of the physics of CO transitions in a high-redshift context~\cite{2014MNRAS.443.3506B}. In~\cite{2013MNRAS.435.2676M}, this problem is addressed and a computation of CO fluxes is presented within an analytic framework that incorporates both global modes of star formation and the physics of molecular rotational lines in $z \gtrsim$ 6 Lyman-break galaxies. Our paper follows the general direction taken by~\cite{2013MNRAS.435.2676M} and introduces a simpler approach that captures and ties the physics of molecular emission lines to high-redshift ($z \geq$ 4) observations of star-forming galaxies~\cite{2015arXiv150700999M}. Our approach is based on large velocity gradient (LVG) modeling, a radiative transfer modeling technique that generates the full CO spectral line energy distribution (SLED) for a specified set of physical parameters. Typically, LVG modeling is employed to quantitatively analyze an observed set of emission lines and determine the set of parameters that best reproduce the observed SED~\cite{2015ApJ...802...81M}. In this paper, we consider applying the LVG methodology in the reverse direction: given a halo of mass $M$ with CO-emitting molecular clouds characterized by a kinetic temperature $T_{kin}$, velocity gradient $dv/dr$, gas volume density $n$, molecular abundance $\chi$, and column density $N$, we will derive the full CO SED and compute the mean surface brightness of any CO rotational line emitted by halos with that mass at any given redshift.
Our paper is organized as follows. In Section 2, we introduce our LVG model for the specific intensity of CO emission and outline the formalism that relates the set of LVG parameters driving the physics of CO transitions in molecular clouds to the global properties of the host galaxy, mainly, the SFR. Given the empirically determined high-redshift $SFR-M$ relation~\cite{2015arXiv150700999M}, these parameters, and thus, the specific CO intensity, can ultimately be expressed as functions of the host halo mass $M$ and redshift $z$. In Section 3 we compute the spatially averaged CO surface brightness from star-forming galaxies, as well as the power spectrum of spatial fluctuations in the CO emission at any given redshift, with a focus on redshifts corresponding to the EoR. We conclude in Section 4 with a summary of our results and a brief comparison with other related calculations of the CO intensity mapping signal. Throughout we consider a $\Lambda$CDM cosmology parametrized by $n_s$ = 1, $\sigma_8$ = 0.8, $\delta_c$ = 1.69, $\Omega_m$ = 0.31, $\Omega_\Lambda$ = 0.69, $\Omega_b$ = 0.05, and $h$ = 0.7, consistent with the latest measurements from Planck~\cite{2015arXiv150201589P}.
\section{Modeling the CO Emission}
\subsection{CO Brightness Temperature}
To calculate the average CO brightness temperature, we follow the formalism presented by~\cite{2011ApJ...741...70L} and consider the specific intensity of a CO line observed at frequency $\nu_{obs}$ at redshift $z$ = 0,
\begin{equation}
I(\nu_{obs})=\frac{c}{4\pi}\int_0^\infty dz'\,\,\frac{\epsilon\left[\nu_{obs}(1+z')\right]}{H(z')(1+z')^4}
\end{equation}
as determined by solving the cosmological radiative transfer equation, where $H(z)$ is the Hubble parameter and $\epsilon\left[\nu_{obs}(1+z')\right]$ is the proper volume emissivity of the given line. Since CO is emitted from within halos hosting star-forming galaxies, we take the CO luminosity, $L_{CO}$, to be some function of the halo mass $M$ and redshift, and assume the profile of each CO line is a delta function in frequency,
\begin{equation}
L_{CO}=L(M,z)\delta_D(\nu-\nu_J)
\end{equation}
where $\nu_J$ is the rest frame frequency of the transition of interest. If we then further assume that at any given time, a fraction $f_{duty}$ of halos with mass larger than $M_{min,CO}$ actively emit CO lines, then for a given halo mass function $dn/dM$, the volume emissivity is,
\begin{equation}
\epsilon(\nu,z)=\delta_D(\nu-\nu_J)(1+z)^3f_{duty}\int_{M_{min,CO}}^\infty\hspace{-0.7cm} dM \,\,\frac{dn}{dM}(M,z)L(M,z)
\end{equation}
The specific intensity of a line with rest frame frequency $\nu_J$, emitted by gas at redshift $z_J$ thus simplifies to
\begin{equation}
I_{\nu_{obs}}=\frac{c}{4\pi}\frac{1}{\nu_J H(z_J)}f_{duty}\int_{M_{min,CO}}^\infty\hspace{-0.7cm} dM\,\,\frac{dn}{dM}(M,z_J)L(M,z_J)
\end{equation}
or, written as the brightness temperature,
\begin{equation}
\langle T_{CO} \rangle = \frac{c^3}{8\pi}\frac{(1+z_J)^2}{k_B \nu_J^3 H(z_J)}f_{duty}\int_{M_{min,CO}}^\infty\hspace{-0.7cm} dM\,\,\frac{dn}{dM}(M,z_J)L(M,z_J)
\end{equation}
where $k_{B}$ is the Boltzmann constant.
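The brightness-temperature integral of eq. (2.5) can be evaluated numerically as sketched below; the callables supplying the line luminosity, the halo mass function, and the Hubble parameter are assumptions of this example, and all quantities are taken in SI units.
\begin{verbatim}
import numpy as np
from scipy.constants import c, k as k_B

def mean_brightness_temp(z, nu_J, L_of_M, dndM_of_M, H_of_z,
                         M_min=1e10, M_max=1e15, f_duty=1.0, n_M=256):
    """Mean CO brightness temperature of eq. (2.5), in kelvin (SI units).

    nu_J      : rest-frame line frequency [Hz]
    L_of_M    : callable L(M, z) returning the line luminosity [W]
    dndM_of_M : callable dn/dM(M, z) [halos m^-3 M_sun^-1]
    H_of_z    : callable H(z) [s^-1]
    """
    M = np.logspace(np.log10(M_min), np.log10(M_max), n_M)
    integrand = dndM_of_M(M, z) * L_of_M(M, z)
    mass_integral = np.trapz(integrand, M)   # trapezoid rule on the M grid
    return (c ** 3 / (8.0 * np.pi) * (1.0 + z) ** 2
            / (k_B * nu_J ** 3 * H_of_z(z)) * f_duty * mass_integral)
\end{verbatim}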
To determine $L(M,z_J)$, the specific luminosity of a given CO line, we employ large velocity gradient (LVG) modeling, a method of radiative transfer in which the excitation and opacity of CO lines are determined by the kinetic temperature $T_{kin}$, velocity gradient $dv/dr$, gas density $n$, CO-to-H$_2$ abundance ratio $\chi_{CO}$, and the CO column density of the emitting source. We adopt the escape probability formalism~\cite{1970MNRAS.149..111C,1974ApJ...189..441G} derived for a spherical cloud undergoing uniform collapse where
\begin{equation}
\beta_J=\frac{1-e^{-\tau_{J}}}{\tau_{J}}
\end{equation}
is the probability, for a line optical depth $\tau_J$, that a photon emitted in the transition $J\rightarrow J-1$ escapes the cloud (see~\cite{2013MNRAS.435.2407M} for a more detailed presentation of the LVG formalism.) Assuming that each emitting source consists of a large number of these unresolved homogeneous collapsing clouds, the corresponding emergent intensity of an emission line integrated along a line of sight can be expressed as
\begin{equation}
I_J=\frac{h\nu_J}{4\pi} A_J x_J\beta_J(\tau_J)N_{CO}
\end{equation}
where $x_J$ is the population fraction in the $J^{th}$ level, $h\nu_J$ is the transition energy, $A_J$ is the Einstein radiative coefficient, and $N_{CO}$ is the beam-averaged CO column density.
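A minimal numerical sketch of the escape probability of eq. (2.6) and of the emergent line intensity of eq. (2.7) is given below; the level population $x_J$ and the optical depth $\tau_J$ are assumed to be supplied by an LVG solver, SI units are used, and the small-$\tau$ series branch is an implementation detail added here to avoid the 0/0 limit.
\begin{verbatim}
import numpy as np

H_PLANCK = 6.62607015e-34      # Planck constant [J s]

def escape_probability(tau):
    """Escape probability of eq. (2.6) for a uniformly collapsing sphere."""
    tau = np.asarray(tau, dtype=float)
    # series expansion near tau = 0 avoids the 0/0 indeterminacy
    return np.where(np.abs(tau) < 1e-6, 1.0 - 0.5 * tau,
                    (1.0 - np.exp(-tau)) / tau)

def line_intensity(nu_J, A_J, x_J, tau_J, N_CO):
    """Emergent intensity of the J -> J-1 line, eq. (2.7) [W m^-2 sr^-1],
    for a beam-averaged CO column density N_CO in m^-2."""
    return (H_PLANCK * nu_J / (4.0 * np.pi)
            * A_J * x_J * escape_probability(tau_J) * N_CO)
\end{verbatim}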
The LVG-modeled specific luminosity of a line emitted by a host halo with disk radius $R_{d}$, therefore takes the form
\begin{equation}
L_{CO}(M,z_J) = 4\pi^2R_{d}^2I_{J,LVG}=\pi h\nu_J R_{d}^2 A_J x_J\beta_J(\tau_J)N_{CO} \,\,.
\end{equation}
We set
\begin{eqnarray}
R_d(M,z) &=& \frac{\lambda}{\sqrt{2}}\frac{j_d}{m_d} r_{vir}\nonumber\\
&=& \frac{\lambda}{\sqrt{2}}\frac{j_d}{m_d}\times 1.5 \left[\frac{\Omega_m}{\Omega_m(z)}\frac{\Delta_c}{18\pi^2}\right]^{-1/3}\left(\frac{M}{10^8 M_\odot}\right)^{1/3}\left(\frac{1+z}{10}\right)^{-1} \textrm{kpc}
\end{eqnarray}
where $\Delta_c = 18\pi^2+82d-39d^2$, $d = \Omega_m(z)-1$, and $\Omega_m(z) = \Omega_m(1+z)^3/(\Omega_m(1+z)^3+\Omega_\Lambda)$~\cite{2013fgu..book.....L}. We assume that the specific angular momentum of the material that forms the disk is the same as that of the halo, i.e. $j_d/m_d = 1$, and adopt a spin parameter of $\lambda\approx$ 0.05, corresponding to an isolated exponential disk~\cite{1998MNRAS.295..319M}.
Since the excitation state and optical depth of a given line, $x_J$ and $\tau_J$ respectively, are determined by the set of physical parameters \{$T_{kin}$, $dv/dr$, $n$, $\chi_{CO}$\} that characterize the emitting molecular clouds, the task remains to express these parameters as functions of $M$ and $z$, global properties of the host halo.
\subsection{The Star Formation Model}
\FloatBarrier
\begin{figure*}[t!]
\begin{minipage}{1\linewidth}
\hspace{-1.7cm}\includegraphics[width=575pt,height=325pt]{{SFR_SigPlot-eps-converted-to}.pdf}
\vspace{-3cm}
\caption{\textit{Left panel:} The mean $SFR-M$ relation in the high redshift universe, i.e. $z \gtrsim$ 4, derived empirically via abundance matching and fitted by the double power law given in eq. (2.11) where \{$a_1, a_2, b_1, b_2$\} = \{2.4$\times$10$^{-17}$, 1.1$\times$10$^{-5}$, 1.6, 0.6\} for $f_{UV}$ = 1. \textit{Right panel:} The corresponding SFR surface density, $\Sigma_{SFR}(M,z)$, derived by dividing the SFR by the halo disk area, $\pi R_d(M,z)^2$, at redshifts $z$ = 6 (red) and $z$ = 10 (blue). For more details and comparison to data, see~\cite{2015arXiv150700999M}.}
\end{minipage}
\end{figure*}
As will be physically motivated below, the LVG parameters that dictate the shape of CO SLEDs in galaxies are well correlated with the galaxy's global star formation rate surface density. The first ingredient of our model is therefore a $SFR-M$ relation that connects the SFR and host halo mass at the high redshifts we are concerned with in this paper. In~\cite{2015arXiv150700999M}, such an empirical relation is derived by mapping the shape of the observed ultraviolet luminosity functions (UV LFs) at $z \sim$ 4-8 to that of the halo mass function at the respective redshifts. In this abundance-matching method, each dark-matter halo is assumed to host a single galaxy and the number of galaxies with star formation rates greater than $SFR$ are equated to the number of halos with mass greater than $M$,
\begin{equation}
f_{UV}\int_M^\infty \,\,dM\,\,\frac{dn}{dM}(M,z) = \int_{SFR}^\infty \,\, dSFR\,\,\phi(SFR,z)
\end{equation}
where $\phi(SFR,z)$ is the observed, dust-corrected UV LF at redshift $z$ and $f_{UV}$ is the starburst duty cycle, i.e.\,the fraction of halos with galaxies emitting UV luminosity at any given time.~\cite{2015arXiv150700999M} find that the $SFR-M$ scaling law remains roughly constant across this redshift range and thus, an average relation can be obtained and applied to even higher redshifts, $z \gtrsim$ 8, where it faithfully reproduces the observed $z \sim$ 9 and 10 LFs. This mean scaling law, $SFR_{av}(M)$, is fairly well parameterized by a double power law of the form,
\begin{equation}
SFR_{av}(M) =
\begin{cases}
a_1M^{b_1}\,, & M\leq M_c\\
a_2M^{b_2}\,, & M\geq M_c
\end{cases}
\end{equation}
with a turnover at a characteristic halo mass $M_c \approx$ 10$^{11.6}$ M$_\odot$. Fitting the average relations in the observed SFR range $\simeq$ 0.1 - 500 M$_\odot$/yr, we obtain \{$a_1, a_2, b_1, b_2$\} = \{2.4$\times$10$^{-17}$, 1.1$\times$10$^{-5}$, 1.6, 0.6\} for $f_{UV}$ = 1. We find it reasonable to assume a UV duty cycle of unity throughout our calculations given that the time between mergers grows shorter than the Hubble time at the high redshifts we are considering and the fact that typical hydrodynamical simulations, where star-formation is driven not just by mergers, find a star-forming galaxy in effectively every halo at these redshifts~\cite{2010MNRAS.406.2267F}.
The corresponding mean SFR surface density, $\Sigma_{SFR}$ (units M$_\odot$ yr$^{-1}$kpc$^{-2}$), is computed by dividing the SFR from eq. (2.11) by the area of the active star-forming halo disk with radius given by eq. (2.9). Figure 1 depicts the average $SFR-M$ relation and the resulting SFR surface densities at redshifts $z$ = 6 and 10.
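For reference, the double power law of eq. (2.11) with the quoted best-fit coefficients, and the corresponding SFR surface density, can be sketched as follows; the disk radius is assumed to be supplied externally from eq. (2.9).
\begin{verbatim}
import numpy as np

# Double power-law fit of eq. (2.11); coefficients assume f_UV = 1,
# SFR in M_sun/yr, and halo mass M in M_sun.
A1, A2, B1, B2 = 2.4e-17, 1.1e-5, 1.6, 0.6
M_C = 10.0 ** 11.6             # characteristic halo mass [M_sun]

def sfr_av(M):
    """Mean star-formation rate for a halo of mass M [M_sun/yr]."""
    M = np.asarray(M, dtype=float)
    return np.where(M <= M_C, A1 * M ** B1, A2 * M ** B2)

def sigma_sfr(M, R_d_kpc):
    """SFR surface density [M_sun/yr/kpc^2] for a disk of radius R_d [kpc]."""
    return sfr_av(M) / (np.pi * R_d_kpc ** 2)
\end{verbatim}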
\subsection{Theoretical Models for LVG Parameters}
The next key step in our LVG-motivated approach to predicting the high redshift CO emission signal is to model the LVG parameters dictating the shape of the CO SLED as functions of the global properties of the host halo. As pointed out in~\cite{2014MNRAS.442.1411N}, quantities, such as the gas temperature and density, which characterize the CO-emitting molecular interstellar medium (ISM) are well-correlated with the star formation rate surface densities of galaxies. Qualitatively, this makes sense since regions of high SFR density typically arise from denser gas concentrations and have large UV radiation fields, with increased efficiency for thermal coupling between gas and dust. In the following sections, we will outline how each LVG parameter can be expressed in terms of the SFR surface density, $\Sigma_{SFR}$, and thus ultimately as a function of just the halo mass and redshift.
\subsubsection{Gas Kinetic Temperature}
To determine the effective kinetic temperature of the CO-emitting molecular gas, we assume that the gas and dust in the star-forming disk are thermally well-coupled, i.e. $T_{gas} \approx T_{dust}$. The temperature of a dust grain is set by the balance of radiative heating and cooling processes taking place in molecular clouds. The rate of heating due to absorption of optical or UV radiation and the incident cosmic microwave background (CMB) can be written as
\begin{equation}
\left(\frac{dE}{dt}\right)_{abs}=\pi a^2\left[\int_0^\infty d\nu \,\,Q_{abs,UV}(\nu)F_{\nu}(\nu)\,+\,\sigma_{SB}T_{CMB}^4\right]
\end{equation}
where $\sigma_{SB}$ is the Stefan-Boltzmann constant, $a$ is the grain radius, $Q_{abs,UV}(\nu)$ is the emissivity in the UV-optical regime, which we set equal to unity~\cite{2007A&A...462...81C}, $T_{CMB}$ = 2.73(1+$z$) is the CMB temperature and $F_\nu(\nu)$ is the flux of energy radiated by the central starburst in the disk. Assuming optically thick conditions requires this starlight energy to be totally reemitted in the infrared; the integral on the right-hand side can thus be expressed as $L_{IR}/4\pi R_d^2$ and, adopting the conversion between $L_{IR}$ and $SFR$ presented in~\cite{1998ApJ...498..541K},
\begin{equation}
\left(\frac{L_{IR}}{\text{erg s$^{-1}$}}\right)=2.2\times10^{43}\left(\frac{SFR}{\text{M$_\odot$ yr$^{-1}$}}\right) \,,
\end{equation}
eq. (2.12) takes the final form:
\begin{equation}
\left(\frac{dE}{dt}\right)_{abs}=\pi a^2 \left[\frac{2.2\times10^{43}}{4\pi}\frac{\Sigma_{SFR}}{\text{(M$_\odot\, $yr$^{-1}$kpc$^{-2}$)}}\,+\,\sigma_{SB}(2.73(1+z))^4\right] \,\,.
\end{equation}
The rate of cooling of dust grains by infrared emission is given by
\begin{equation}
\left(\frac{dE}{dt}\right)_{em}=4\pi a^2\int_0^\infty d\nu\,\,Q_{abs,IR}(\nu)\pi B_\nu(\nu,T_d)
\end{equation}
where $Q_{abs,IR}(\nu) \simeq Q_{abs}(\nu_0)(\nu/\nu_0)^\beta$ is the emissivity in the infrared regime, $\nu_0$ is the reference frequency, and $\beta$ is the dust emissivity index.
Substituting in for $Q_{abs,IR}(\nu)$ and $\pi B_\nu(\nu,T_d)$ (the flux emitted by a black-body), the integral simplifies to
\begin{equation}
\left(\frac{dE}{dt}\right)_{em}=\pi a^2\left[\frac{60\sigma_{SB}}{\pi^4}\left(\frac{k_B}{h}\right)^\beta\frac{Q_{abs}(\nu_0)}{\nu_0^\beta}\Gamma(\beta+4)\zeta(\beta+4)T_d^{\beta+4}\right] \,\,.
\end{equation}
\noindent At thermal equilibrium, $(dE/dt)_{abs}=(dE/dt)_{em}$, and the steady-state grain temperature in units of kelvin is
\begin{dmath}
T_d(\Sigma_{SFR}(M,z),z) = \left(\frac{\pi^4}{60}\left(\frac{h}{k_B}\right)^\beta\frac{\nu_0^\beta}{Q_{abs}(\nu_0)}\frac{1}{\Gamma(\beta+4)\zeta(\beta+4)}\left[\frac{2.3\times10^{-3}}{4\pi\sigma_{SB}}\frac{\Sigma_{SFR}(M,z)}{\text{(M$_\odot$\,yr$^{-1}$kpc$^{-2}$)}}+(2.73(1+z))^4\right]\right)^{1/(\beta+4)}\hspace{-1cm} .
\end{dmath}
In the analysis below, we take the fiducial value of $\beta$ = 1.3, consistent with the mean emissivity index derived from the SCUBA Local Universe Galaxy Survey~\cite{2000MNRAS.315..115D}, and the standard value $Q_{abs}$(125 $\mu$m) = 7.5$\times$10$^{-4}$~\cite{1983QJRAS..24..267H,2004A&A...425..109A,2007A&A...462...81C}. A plot of the kinetic temperature as a function of halo mass at different redshifts can be found in the left panel of figure 2.
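A direct numerical transcription of eq. (2.17) is sketched below, using the same fiducial $\beta$ and $Q_{abs}$(125 $\mu$m) as in the text; the SI constants and the leading numerical factor simply follow the form of eq. (2.17) and are not an independent derivation.
\begin{verbatim}
import numpy as np
from scipy.special import gamma, zeta

K_B      = 1.380649e-23        # Boltzmann constant [J/K]
H_PLANCK = 6.62607015e-34      # Planck constant [J s]
SIGMA_SB = 5.670374419e-8      # Stefan-Boltzmann constant [W m^-2 K^-4]

def dust_temperature(sigma_sfr, z, beta=1.3, Q_abs0=7.5e-4,
                     nu0=2.998e8 / 125.0e-6):
    """Steady-state grain (= gas kinetic) temperature of eq. (2.17) [K].

    sigma_sfr : SFR surface density [M_sun yr^-1 kpc^-2]
    beta, Q_abs0, nu0 : emissivity index and Q_abs at the 125-micron
                        reference frequency, as adopted in the text
    """
    heating = (2.3e-3 / (4.0 * np.pi * SIGMA_SB) * sigma_sfr
               + (2.73 * (1.0 + z)) ** 4)
    prefactor = (np.pi ** 4 / 60.0 * (H_PLANCK / K_B) ** beta
                 * nu0 ** beta / Q_abs0
                 / (gamma(beta + 4.0) * zeta(beta + 4.0)))
    return (prefactor * heating) ** (1.0 / (beta + 4.0))
\end{verbatim}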
\subsubsection{Cloud volume density}
Another key parameter in determining the shape of the CO SLED is $n$, the cloud volume density (cm$^{-3}$) of the dominant collision partner for CO rotational excitation. Since we are assuming CO-emitting molecular clouds, $n$ in this case is the cloud volume density of H$_2$, given by
\begin{equation}
n_{H_2}=\frac{3\Sigma_{cl}}{4\mu m_{H_2}r_{cl}}
\end{equation}
where $\mu$ = 1.36 takes into account the helium contribution to the molecular weight (assuming cold, neutral gas), $m_{H_2}$ = 3.34$\times$10$^{-27}$ kg, and $r_{cl}$ is the cloud radius assuming a uniform sphere. At these high redshifts where the ISM is dominated by molecular gas, the surface density of H$_2$ in a given molecular cloud, $\Sigma_{cl}$, can be related to the beam-averaged gas surface density in the galactic disk, $\Sigma_{gas}$.
Empirical studies have found that a correlation exists between the surface density of molecular gas and the surface density of the SFR, a discovery that is consistent with observations that stars form predominantly in the molecular component of the ISM. The Kennicutt-Schmidt (KS) relation~\cite{1959ApJ...129..243S,1989ApJ...344..685K} formulates this correlation in terms of a power-law, $\langle\Sigma_{SFR}\rangle \propto \langle\Sigma_{gas}\rangle^N$, where estimates of the index $N$ range from super-linear~\cite{1989ApJ...344..685K,1998ApJ...498..541K,2007ApJ...671..303B,2011ApJ...735...63L,2013ApJ...772L..13M}, to linear~\cite{2008AJ....136.2846B,2008AJ....136.2782L,2013AJ....146...19L}, to sublinear ~\cite{2014MNRAS.437L..61S}.
Given our paper's focus on high redshift star-forming sources with CO-emitting molecular clouds, we deemed it most appropriate to adopt the KS relation presented in~\cite{2010MNRAS.407.2091G},
\begin{equation}
\Sigma_{SFR} =(3.3\pm0.6)\times10^{-4} \left(\frac{\Sigma_{gas}}{1 \text{ M$_\odot$ pc$^{-2}$}}\right)^{1.2\pm0.1} \,\,\,\text{M$_\odot$ yr$^{-1}$ kpc$^{-2}$} \,\, ,
\end{equation}
a relation that was derived from data sets of CO molecular emission in $z \sim$ 1-3 normal star-forming galaxies and restricted to the regime where molecular gas dominates the ISM at these redshifts, i.e. $\Sigma_{gas} \gtrsim$ 3 M$_\odot$ pc$^{-2}$. Since no evidence of any redshift-dependence of this relation has been found thus far, applying eq. (2.19) at the higher redshifts we consider in this paper, $z \geq$ 4, is a reasonable extrapolation.
We therefore set $\Sigma_{cl}$ equal to $\Sigma_{gas}$, as defined by eq. (2.19), except in cases where $\Sigma_{gas}$ drops below the threshold surface density for which the cloud is predominantly molecular. This ``star-formation'' threshold, defined by a molecular gas fraction of $f_{H_2}$ = 0.5, is derived in~\cite{2014ApJ...790...10S} for a plane-parallel slab (including H$_2$ dust) as a function of metallicity $Z'$,
\begin{equation}
\Sigma_{gas,*}(Z')=\frac{2m}{\sigma_g(Z')}\left(1.6\ln{\left[\frac{\alpha G(Z')}{3.2}+1\right]}\right)
\end{equation}
where $m$ = 2.34$\times$10$^{-27}$ kg is the mean particle mass per hydrogen nucleus, $\sigma_g(Z') = 1.9\times10^{-21}Z'$ cm$^{2}$ is the dust-grain Lyman-Werner-photon absorption cross section per hydrogen nucleon, and $\alpha G$ is the dimensionless parameter that defines the LW-band optical depth in the cloud due to HI dust,
\begin{equation}
\alpha G(Z') = \left(\frac{1+3.1Z^{'0.365}}{4.1}\right)\frac{6.78}{1+\sqrt{2.64Z'}}\,\,.
\end{equation}
\FloatBarrier
\begin{figure*}[t!]
\begin{minipage}{1\linewidth}
\hspace{-2cm}\includegraphics[width=575pt,height=325pt]{{T_N-eps-converted-to}.pdf}\\
\vspace{-3cm}\caption{\textit{Left panel:} Gas kinetic temperature assuming the fiducial values $\beta$ = 1.3 and $Q_{abs}$(125 $\mu$m) = 7.5$\times$10$^{-4}$ in eq. (2.17). \textit{Right panel:} Cloud volume density of molecular hydrogen as defined by equations (2.18)-(2.27) plotted at redshifts $z$ = 6 (red) and $z$ = 10 (blue).}
\end{minipage}
\end{figure*}
We adopt the fundamental metallicity relation (FMR), a tight relation between the gas-phase metallicity $Z'$, stellar mass $M_*$, and SFR, to ultimately express $Z'$ as a function solely of the halo mass and redshift. The FMR was initially observed and formulated in~\cite{2010MNRAS.408.2115M} for local galaxies in the mass range 9.2 $\leq \log{M_*/M_\odot} \leq$ 11.4. Since then, the FMR has been confirmed to hold for star-forming galaxies at redshifts as high as $z \sim$ 3~\cite{2013ApJ...772..141B}, and to extend smoothly at lower masses~\cite{2011MNRAS.414.1263M}, taking the final form,
\begin{equation}
12+\log{(O/H)}=
\begin{cases}
8.90+0.37m-0.14s-0.19m^2+0.12ms-0.054s^2\, & \text{for}\,\,\,\, \mu_{0.32}\geq9.5\\
8.93+0.51(\mu_{0.32}-10)\, & \text{for}\,\,\,\, \mu_{0.32} < 9.5
\end{cases}
\end{equation}
where $\mu_\alpha = \log{(M_*)} - \alpha\log{(SFR)}$, $m = \log{(M_*)}-10$, and $s=\log{(SFR)}$.
To further parametrize the metallicity as a function of the halo mass and redshift, we rely on observations and models that support the conclusion that the SFR in galaxies at redshifts $z \gtrsim$ 4 scales nearly linearly with increasing stellar mass and does not vary by more than a factor of order 2~\cite{2015ApJ...799..183S}. This behavior is consistent with a crude estimation of the stellar mass of a galaxy with a star formation rate $SFR$:
\begin{equation}
M_* \sim \int_0^{t_H(z)}dt\,SFR(t)\sim SFR(z)\times t_H(z)
\end{equation}
where $t_H(z)$ is the age of the universe at a given redshift $z$. Since each halo of interest formed at some fraction of the age of the universe, the right-hand side should be multiplied by a factor $\lesssim$ 1. We therefore calibrate the above expression using the SFR-$M_*$ best-fit parameters presented in~\cite{2015ApJ...799..183S} for $z \sim$ 4 - 6 and obtain the following relation
\begin{equation}
M_*(M,z)=(0.28\pm0.02)\,SFR_{av}(M)\,t_H(z)
\end{equation}
where $SFR_{av}(M)$ is the average SFR for a halo of mass $M$. We assume this relation continues to apply at redshifts $z >$ 6 in the following calculations.
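A compact sketch of this stellar-mass and metallicity chain, eqs. (2.22) and (2.24), is given below; the solar reference value 12 + log(O/H) = 8.69 used to convert to $Z'$ is an assumption of this example and is not specified in the text, and the age of the universe $t_H(z)$ is taken as an input.
\begin{verbatim}
import numpy as np

def stellar_mass(sfr, t_H_yr):
    """Stellar-mass estimate of eq. (2.24): M_* = 0.28 * SFR * t_H(z)."""
    return 0.28 * sfr * t_H_yr

def fmr_metallicity(m_star, sfr):
    """Gas-phase metallicity 12 + log(O/H) from the FMR of eq. (2.22)."""
    m = np.log10(m_star) - 10.0
    s = np.log10(sfr)
    mu032 = np.log10(m_star) - 0.32 * s
    high = (8.90 + 0.37 * m - 0.14 * s - 0.19 * m ** 2
            + 0.12 * m * s - 0.054 * s ** 2)
    low = 8.93 + 0.51 * (mu032 - 10.0)
    return np.where(mu032 >= 9.5, high, low)

def z_prime(oh12, oh12_solar=8.69):
    """Metallicity in solar units (solar 12 + log(O/H) = 8.69 assumed)."""
    return 10.0 ** (oh12 - oh12_solar)
\end{verbatim}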
Armed with the parametrization of $SFR$ introduced in \S2.2, and thus a metallicity $Z'$ expressed solely as a function of halo mass and redshift, we can now return to our model for the cloud surface density, $\Sigma_{cl}$. To ensure the molecular state of each individual cloud, i.e. $f_{H_2} \geq$ 0.5, we set $\Sigma_{cl}$ equal to the beam-averaged gas surface density $\Sigma_{gas}(M,z)$ (derived by inverting eq. (2.19)) for all halo masses $M > \tilde{M}$ and floor it to the value $\Sigma_{gas,*}(\tilde{M},z,f_{H_2}=0.5)$ for all $M <\tilde{M}$,
\begin{equation}
\Sigma_{cl}(M,z)=
\begin{cases}
\Sigma_{gas}(M,z)\, & \text{if}\,\,\,\, M>\tilde{M}\\
\Sigma_{gas,*}(\tilde{M},z,f_{H_2}=0.5) \, &\text{otherwise}
\end{cases}
\end{equation}
where $\tilde{M}$ is the halo mass at which $\Sigma_{gas}(z)$ drops below $\Sigma_{gas,*}(f_{H_2}=0.5,z)$ at a given redshift $z$.
The other variable that appears in our definition of the H$_2$ volume density is $r_{cl}$, the molecular cloud radius. Assuming that the molecular gas within the clump is in hydrostatic equilibrium, the radius of the cloud can be related to its surface mass density through the relation,
\begin{equation}
r_{cl}= \frac{c_{s,eff}^2}{\pi G \Sigma_{cl}}
\end{equation}
where $G$ is the gravitational constant and $c_{s,eff}$ is the effective sound speed in the gas (taking into account turbulence, $c_{s,eff}^2$ = $c_s^2 + \sigma_{turbulence}^2$), which we set to 10 km/s. The final expression for the molecular cloud volume density then simplifies to
\begin{equation}
n_{H_2}(M,z)=\frac{3\pi G}{4\mu m_{H_2}c_{s,eff}^2}\Sigma_{cl}^2(M,z) \, ,
\end{equation}
a plot of which can be found in the right panel of figure 2. The steep decline in number density at low halo masses, as depicted in figure 2, mirrors the steeply declining $SFR-M$ relation at the low-mass end (see left panel in figure 1); as the star-formation rate diminishes by several orders of magnitude with decreasing halo mass, the SFR surface density drops accordingly and ultimately translates into reduced gas column and number densities at these low masses.
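The mapping from the cloud surface density to the H$_2$ volume density, eqs. (2.26)-(2.27), can be sketched as follows; the unit conversions and the 10 km/s effective sound speed follow the assumptions stated in the text, while the choice of input units (M$_\odot$ pc$^{-2}$) is an assumption of this example.
\begin{verbatim}
import numpy as np

G_SI   = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_H2   = 3.34e-27           # H2 molecular mass [kg]
MU     = 1.36               # helium correction to the molecular weight
CS_EFF = 1.0e4              # effective sound speed [m/s] (10 km/s)
MSUN_PER_PC2_TO_SI = 1.989e30 / (3.086e16) ** 2   # M_sun/pc^2 -> kg/m^2

def n_h2(sigma_cl):
    """Cloud H2 volume density of eq. (2.27), returned in cm^-3.

    sigma_cl : cloud surface density [M_sun/pc^2], i.e. Sigma_gas from the
    inverted KS relation (eq. 2.19) floored at the threshold of eq. (2.20).
    """
    sigma_si = np.asarray(sigma_cl, dtype=float) * MSUN_PER_PC2_TO_SI
    n_si = 3.0 * np.pi * G_SI / (4.0 * MU * M_H2 * CS_EFF ** 2) * sigma_si ** 2
    return n_si * 1.0e-6    # m^-3 -> cm^-3
\end{verbatim}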
\subsubsection{Velocity Gradient}
For the sake of simplicity, we assume self-gravitating, virialized molecular clouds, in which case, the velocity gradient $dv/dr$ and cloud volume density $n_{H_2}$ are related in the following way~\cite{2001ApJ...557..736G}
\begin{equation}
\frac{dv}{dr}\simeq3.1\sqrt{\frac{n_{H_2}}{10^4\,\,\text{cm$^{-3}$}}}\,\,\,\,\,\text{km\,s$^{-1}$pc$^{-1}$}
\end{equation}
where $n_{H_2}$ is defined in eq. (2.27). A plot of $dv/dr$ as a function of halo mass at different redshifts can be found in the left panel of figure 3.
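For completeness, eq. (2.28) translates directly into the following helper, which takes the H$_2$ density of the previous sketch in cm$^{-3}$:
\begin{verbatim}
def velocity_gradient(n_h2_cm3):
    """Velocity gradient of eq. (2.28) [km s^-1 pc^-1] for self-gravitating,
    virialized clouds, given the H2 volume density in cm^-3."""
    return 3.1 * (n_h2_cm3 / 1.0e4) ** 0.5
\end{verbatim}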
\FloatBarrier
\begin{figure*}[t!]
\begin{minipage}{1\linewidth}
\hspace{-2cm}\includegraphics[width=575pt,height=325pt]{{dvdr_chi-eps-converted-to}.pdf}
\vspace{-3cm}\caption{\textit{Left panel:} Velocity gradient $dv/dr$ as a function of halo mass $M$ assuming self-gravitating, virialized clouds as defined by eq. (2.28). \textit{Right panel:} CO-to-H$_2$ abundance ratio via eq. (2.29), plotted at redshifts $z$ = 6 (red) and $z$ = 10 (blue).}
\end{minipage}
\end{figure*}
\subsubsection{CO-to-H$_2$ Abundance Ratio}
Studies have shown that at high metallicities, i.e. $Z' \gtrsim$ 10$^{-2}$, the dominant metal-bearing molecule in the ISM is CO. Furthermore,~\cite{2015MNRAS.450.4424B} find that even at low metallicities, 30-100\% of the available carbon is always locked in CO if the CO is shielded, provided that the hydrogen gas is in molecular form. In this limit, the relative CO-to-H$_2$ abundance varies approximately linearly with metallicity~\cite{2015MNRAS.450.4424B}, and assuming most of the carbon is in fact locked up in CO, $\chi_{CO}$ is given by,
\begin{equation}
\chi_{CO}(Z)\simeq3\times10^{-4}Z'
\end{equation}
where an expression for the relevant metallicity $Z'$ can be found in eq. (2.22). A plot of $\chi_{CO}$ is shown in the right panel of figure 3.
\subsubsection{CO Column Density, including photodissociation}
\FloatBarrier
\begin{figure*}[t!]
\begin{minipage}{1\linewidth}
\hspace{1.5cm}\includegraphics[width=350pt,height=270pt]{{NCO-eps-converted-to}.pdf}
\vspace{-.5cm}\caption{Beam-averaged CO column density derived at redshifts z=6 (red) and z=10 (blue) under conditions where CO photodissociation is accounted for (solid curves, eq. (2.32)) and neglected (dashed curves, eq. (2.30)). In the case where the effects of CO photodissociation are included, the CO column density drops to zero for halos smaller than $M\lesssim$ 10$^{10}$ M$_\odot$, indicating that the CO in the gas has fully dissociated.}
\end{minipage}
\end{figure*}
The CO column density, which sets the overall scaling and amplitude of the CO SLED, can be expressed most simply as the product of the CO-to-H$_2$ abundance ratio and the beam-averaged molecular column density,
\begin{equation}
N_{CO}=\chi_{CO}N_{H_2}
\end{equation}
where $N_{H_2}$ is a power-law function of the SFR surface density, derived by inverting the KS relation in eq. (2.19),
\begin{equation}
N_{H_2}(\Sigma_{SFR}(M,z))=\frac{2\times10^{-4}}{\mu m_{H_2}}\left(\frac{\Sigma_{SFR}}{1 \text{ M$_\odot$ yr$^{-1}$ kpc$^{-2}$}}\right)^{1/1.2} \,\,\,\text{cm$^{-2}$} \,\,\,\,\, .
\end{equation}
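Before turning to photodissociation, we note that this inverted Kennicutt-Schmidt column can be transcribed directly as follows; the mean molecular weight $\mu$ is an assumed placeholder, so the printed values only illustrate the scaling with $\Sigma_{SFR}$.
\begin{verbatim}
def N_H2(sigma_sfr, mu=1.36, m_H2=2.0*1.6726e-24):
    # Beam-averaged H2 column density [cm^-2] from the inverted KS
    # relation above; sigma_sfr is in [Msun yr^-1 kpc^-2] and mu is
    # an assumed parameter.
    return 2.0e-4 / (mu * m_H2) * sigma_sfr**(1.0 / 1.2)

print(N_H2(1.0))    # column for Sigma_SFR = 1 Msun/yr/kpc^2
print(N_H2(20.0))   # column at the upper end of the Sigma_SFR range quoted
\end{verbatim}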
The above expression for $N_{CO}$ does not account for conditions under which CO has photodissociated into C and C$^+$ while the gas continues to remain molecular due to either H$_2$ self-shielding or dust-shielding. Such conditions, which may exist on the surfaces of molecular clouds or the clumps contained within such clouds, ultimately result in a fraction of H$_2$ gas that is ``dark'' in CO transitions. Observations indicate that the column density of this ``dark gas'' can be as high as 30\% ($\sim$ 3$\times$10$^{21}$ cm$^{-2}$) of the total molecular column density in the local Galaxy ($\sim$10$^{22}$ cm$^{-2}$)~\cite{2010ApJ...716.1191W}. A first-order approximation of the CO column density that accounts for the effects of CO photodissociation is then
\begin{equation}
N_{CO}=\chi_{CO}\left(N_{H_2} -\frac{3\times10^{21}}{Z'}\right) \,\,\,\text{cm$^{-2}$}
\end{equation}
where, as before, $Z'$ is the metallicity in solar units. The second term in the above expression is essentially the H$_2$ column required to enable CO dust-shielding under the assumption that the threshold for CO survival is set by a universal dust opacity. This approximation captures the effects of decreasing metallicity on the abundance of CO, reducing the column density as shielding dust grains grow sparse. In the limiting case where $Z'$ drops so low that there is not enough dust to effectively shield CO (corresponding to the parenthesized portion of eq. (2.32) becoming negative), the CO is fully dissociated and $N_{CO}$ is set to zero. We note that eq. (2.32) is only a first-order approximation and that the constant, 3$\times$10$^{21}$ cm$^{-2}$, can be altered due to variations in UV field intensity or gas clumping factors. The beam-averaged CO column density at different redshifts is plotted in figure 4, both for the case where CO photodissociation is accounted for (solid curves) and neglected (dashed curves).
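The following sketch combines eqs. (2.29), (2.30) and (2.32) into a single helper; clipping the column at zero encodes the limiting case of full dissociation described above, and the example numbers are purely illustrative.
\begin{verbatim}
import numpy as np

def chi_CO(Zp):
    # CO-to-H2 abundance ratio of eq. (2.29); Zp in solar units
    return 3.0e-4 * Zp

def N_CO(N_H2, Zp, photodissociation=True):
    # Beam-averaged CO column density [cm^-2]: eq. (2.30) without
    # photodissociation, eq. (2.32) with it (clipped at zero once the
    # dust column is too small to shield CO).
    if not photodissociation:
        return chi_CO(Zp) * N_H2
    return chi_CO(Zp) * np.maximum(N_H2 - 3.0e21 / Zp, 0.0)

# For N_H2 ~ 1e22 cm^-2 and solar metallicity the correction removes
# ~30% of the column, matching the local "dark gas" fraction quoted above.
print(N_CO(1.0e22, 1.0, photodissociation=False))   # 3.0e18 cm^-2
print(N_CO(1.0e22, 1.0, photodissociation=True))    # 2.1e18 cm^-2
\end{verbatim}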
\subsection{Model CO SLEDs}
Equipped with analytic expressions for the LVG parameters that dictate the shape and magnitude of the CO SLED, we can now compute the intensity of each CO line and the resulting SED generated by the molecular clouds in a halo with mass $M$ at redshift $z$. To carry out the computations, we use the Mark \& Sternberg LVG radiative transfer code described in~\cite{2012A&A...537A.133D}, with CO-H$_2$ collisional coefficients taken from~\cite{2010ApJ...718.1062Y} and energy levels, line frequencies, and Einstein $A$ coefficients taken from the Cologne Database for Molecular Spectroscopy (CDMS). For a given set of parameters, \{$T_{kin}$, $n_{H_2}$, $dv/dr$, $\chi_{CO}$, $N_{H_2}$\}, the code determines the level populations by iteratively solving the equations of statistical equilibrium which balance radiative absorptions, stimulated emission, spontaneous emission, and collisions with H$_2$ using the escape probability formalism discussed in \S2.1. Once the level populations are computed, the full CO rotational ladder and line intensities follow from eq. (2.7).
Figure 5 shows the CO SLEDs generated by halos at redshift $z$ = 10 with masses in the range $M$ = 10$^8$-10$^{13}$ M$_\odot$ when the effects of CO photodissociation are excluded. In the case where photodissociation is considered, the line intensities emitted by low-mass halos with $M <$ 10$^{10}$ M$_\odot$ (red and orange curves) entirely disappear, reflecting the full dissociation of CO in these molecular clouds where dust-shielding has grown inefficient. However, the other curves, corresponding to line emission from higher-mass halos that have been normalized to the ground state, remain unchanged. This is due to the fact that ``turning on'' photodissociation merely reduces the beam-averaged CO column density according to eq. (2.32); since $N_{CO}$ controls the overall amplitude of the SLED (and not the shape), adjusting this quantity simply amplifies or reduces the intensity of all the CO lines by the same amount, leaving the ratio between lines unchanged.
As expected, we find that as the physical conditions in the emitting molecular clouds grow more extreme, the CO rotational levels become increasingly populated and the SLED rises accordingly. Consequently, the line intensities not only grow in magnitude, but the peak of the CO SLED also shifts to higher $J$ values, reflecting the excitation of the more energetic states of the molecule.
\begin{figure*}[h!]
\begin{minipage}{1\linewidth}
\hspace{-1.5cm} \includegraphics[width=570pt,height=270pt]{{COsed_z10-eps-converted-to}.pdf}
\vspace{-.5cm}\caption{CO SLEDs generated by a halo at redshift $z$ = 6 (dashed curves) and $z$ = 10 (solid curves) with halo mass ranging from 10$^8$ to 10$^{13}$ M$_\odot$, neglecting the effects of photodissociation. If CO photodissociation is taken into account, the red and orange curves, corresponding to line emission from halos with $M <$ 10$^{10}$ M$_\odot$, would disappear while the other curves would remain unchanged. The brown curve denotes the expected SLED in the optically-thick case when the population levels are in local thermal equilibrium (LTE); in this limiting case, the intensity at a given level follows the Planck function, $B(\nu,T)d\nu = \frac{2h\nu^3}{c^2}\frac{1}{e^{h\nu/kT}-1}d\nu$, where $\langle T \rangle \sim$ 91 K.}
\end{minipage}
\end{figure*}
The curves plotted in figure 5 demonstrate this trend in the case where a UV duty cycle of unity is assumed. In this scenario, the SFR and SFR surface density range from 10$^{-4}$-1000 M$_\odot$\,yr$^{-1}$ and 10$^{-2}$-20 M$_\odot$\,yr$^{-1}$\,kpc$^{-2}$ respectively, when the halo mass varies from 10$^8$ to 10$^{13}$ M$_\odot$ (see figure 1). While the corresponding kinetic temperature remains relatively constant for this $\Sigma_{SFR}$ range ($T_{kin} \sim$ 90 K), the H$_2$ number density varies significantly, 10$^2$ cm$^{-3}$ $\lesssim n_{H_2} \lesssim 3\times10^5$ cm$^{-3}$. The variance in the shape of the CO SLEDs computed for $f_{UV}$ = 1 reflects this range of physical conditions parameterized by the halo mass at $z$ = 10; the SLEDs produced by low-mass halos, $M <$ 10$^{10}$ M$_\odot$, peak at $J \simeq$ 4 before turning over and plummeting (neglecting CO photodissociation). In these low-density clouds, with correspondingly small velocity gradients ($dv/dr \sim$ 0.5 km\,s$^{-1}$\,pc$^{-1}$) and CO abundances ($\chi_{CO}\sim$ 5$\times$10$^{-6}$), only the low-lying transitions such as CO $J$ = 1\,$\rightarrow$\,0 are optically thick. In contrast, in more extreme star-forming galaxies, i.e. $M >$ 10$^{10}$ M$_\odot$, the H$_2$ number densities reach $\sim$ 10$^5$ cm$^{-3}$, the typical critical density value for high-$J$ CO emission.
Consequently, the high-$J$ lines grow optically thick and the line ratios approach thermalization for transitions as high as $J \sim$ 13 in these high-mass halos.
\FloatBarrier
\begin{figure*}[h!]
\begin{minipage}{1\linewidth}
\vspace{-1cm}
\hspace{-1.5cm}\includegraphics[width=275pt,height=300pt]{{Fco1-eps-converted-to}.pdf}
\hspace{-.5cm}\includegraphics[width=275pt,height=300pt]{{Fco2-eps-converted-to}.pdf}
\hspace{-1cm}\vspace{-.5cm}\caption{The CO rotational line flux as a function of halo mass at $z$ = 6 (left panel) and $z$ = 10 (right panel) for $J_{upper}$ = 1-8. Solid and dashed curves denote results obtained by neglecting (eq. (2.30)) and including (eq. (2.32)) the effects of CO photodissociation.}
\end{minipage}
\end{figure*}
\section{Results}
\subsection{Predicted CO Fluxes and Spatially Averaged CO Brightness Temperature}
Given our LVG model of the full CO SLED, we can now predict the CO line fluxes generated by a set of molecular clouds residing in a host halo of mass $M$ at redshift $z$, as well as compute the spatially averaged brightness temperature of any CO rotational transition. The former is obtained by converting LVG-derived CO luminosities (eq. (2.8)) into observed, velocity-integrated fluxes, $F_{CO}$, using typical observer units,
\begin{equation}
\frac{L_{CO}}{L_\odot} = 1.040\times10^{-3}\left(\frac{D_L}{\text{Mpc}}\right)^2\frac{\nu_{obs}}{\text{GHz}}\,\,\frac{F_{CO}}{\text{Jy km s$^{-1}$}}
\end{equation}
where $D_L$ is the luminosity distance. The results are shown in figure 6 for a range of CO rotational lines emitted as a function of host halo mass at $z$ = 6 (left panel) and $z$ = 10 (right panel). The dashed curves represent the fluxes obtained by including the effects of CO photodissociation in our computations; as expected, the change in flux when accounting for this phenomenon is most pronounced at the low-mass end where $F_{CO}$ drops to zero once the CO is fully photodissociated.
At the higher-mass end, we predict that a $\sim$10$^{11}$ M$_\odot$ halo at redshift $z$ = 6 will emit strongest in the J = 7 $\rightarrow$ 6 transition with a flux of $\sim$ 25 mJy while a halo with the same mass at $z$ = 10 will emit strongest in a higher energy state, J = 10 $\rightarrow$ 9, but with nearly half the flux, $\sim$ 14 mJy.
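A hedged sketch of this conversion is given below; the observed frequency follows from the CO(1-0) rest frequency, while the luminosity distance must be supplied by the reader's preferred cosmology (the value in the example is only a rough placeholder).
\begin{verbatim}
def F_CO_Jy_km_s(L_CO_Lsun, nu_obs_GHz, D_L_Mpc):
    # Invert the luminosity-flux relation above: velocity-integrated
    # flux in [Jy km/s] given L_CO in solar luminosities.
    return L_CO_Lsun / (1.040e-3 * D_L_Mpc**2 * nu_obs_GHz)

nu_rest_CO10 = 115.27              # CO(1-0) rest frequency [GHz]
z = 6.0
nu_obs = nu_rest_CO10 / (1.0 + z)  # ~16.5 GHz
D_L = 5.9e4                        # placeholder luminosity distance [Mpc]
print(F_CO_Jy_km_s(1.0e7, nu_obs, D_L))  # flux of a hypothetical 1e7 Lsun line
\end{verbatim}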
\FloatBarrier
\begin{figure*}[h!]
\begin{minipage}{1\linewidth}
\vspace{-1cm}
\hspace{-2cm}\includegraphics[width=530pt,height=270pt]{{Tvsz1-eps-converted-to}.pdf}\\
\hspace*{-2cm}\includegraphics[width=530pt,height=270pt]{{Tvsz2-eps-converted-to}.pdf}
\hspace{-1cm}\vspace{-1cm}\caption{Volume-averaged CO brightness temperature as a function of redshift for a minimum host halo mass of CO luminous galaxies of M$_{min,CO}$ = 10$^8$ (red), 10$^9$ (green), and 10$^{10}$ M$_\odot$ (blue curves), neglecting the effects of CO photodissociation. The dashed curves denote the signals obtained when photodissociation is taken into account (M$_{min,CO}$ = 10$^{10}$). Each panel shows $\langle T_{CO} \rangle$ for a different line in the CO rotational ladder with 1 $\leq J_{upper} \leq$ 8.}
\end{minipage}
\end{figure*}
In order to determine the spatially-averaged CO brightness temperature emitted by halos across the mass range, we compute the following integral,
\begin{equation}
\langle T_{CO}(\nu_J) \rangle = \frac{c^3}{8\pi}\frac{(1+z_J)^2}{k_B \nu_J^3 H(z_J)}f_{duty}\int_{M_{min,CO}}^\infty\hspace{-0.7cm} dM\,\,\frac{dn}{dM}(M,z_J)L(M,z_J)
\end{equation}
where $\nu_J$ is the rest-frame frequency of the specified line. This equation is parameterized by $M_{min,CO}$, the minimum host halo mass for CO luminous halos, and $f_{duty}$, the duty cycle for CO activity. Since CO lines are excited by starburst activity, we generally expect that the duty cycle for CO luminous activity is comparable to the starburst duty cycle. We therefore assume $f_{duty} = f_{UV}$ = 1 in our fiducial models. Furthermore, in computing the volume-averaged CO brightness temperature, we vary $M_{min,CO}$ widely between $M_{min,CO}$ = 10$^8$, 10$^9$, and 10$^{10}$ M$_\odot$ to illustrate the sensitivity of the results to this parameter. In our model, these halos host CO luminous galaxies with SFRs of $\sim$ 10$^{-4}$, 3$\times$10$^{-3}$, and 0.1 M$_\odot$\,yr$^{-1}$ (left panel of figure 1).
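In practice the mass integral above is evaluated numerically; a schematic version is sketched below, in which the Sheth-Tormen mass function, the Hubble rate and the LVG line luminosities enter only as user-supplied arrays (the names, units and quadrature choice are our own).
\begin{verbatim}
import numpy as np

C_SI = 2.998e8      # speed of light [m/s]
K_B  = 1.381e-23    # Boltzmann constant [J/K]

def T_CO_mean(nu_J_Hz, z, H_z_SI, M, dndM, L_W, f_duty=1.0):
    # Trapezoidal version of the brightness-temperature integral above.
    # M [kg], dndM [m^-3 kg^-1] (e.g. Sheth-Tormen), L_W [W] are arrays
    # tabulated on the same halo-mass grid; H_z_SI is H(z) in [1/s].
    emissivity = np.trapz(dndM * L_W, M)              # [W m^-3]
    prefactor  = C_SI**3 * (1.0 + z)**2 / (8.0 * np.pi * K_B
                                           * nu_J_Hz**3 * H_z_SI)
    return prefactor * f_duty * emissivity            # [K]
\end{verbatim}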
We plot the results in figure 7, where the mean brightness temperature is shown as a function of redshift for different CO rotational lines. As expected, we find that $\langle T_{CO} \rangle$ is a steeply declining function of $z$, a direct consequence of the decreasing number of host halos (per volume) at these high redshifts predicted by the Sheth-Tormen halo mass function~\cite{1999MNRAS.308..119S}. When CO photodissociation is neglected (solid curves), the redshift evolution of $\langle T_{CO} \rangle$ for the low-$J$ CO lines steepens at the high-$z$ end when larger values are assumed for $M_{min,CO}$.
While halos with masses $M >$ 10$^{10}$ M$_\odot$ emit the most flux at all redshifts (figure 6), the flux emitted by halos with $M \sim$ 10$^9$-10$^{10}$ M$_\odot$ makes a non-negligible contribution to the total CO brightness temperature at higher redshifts. An illustration of this is presented in figure 8 which plots $dT/d\ln{M}$, the contribution to the mean brightness temperature per logarithmic halo mass for different CO lines as a function of halo mass at $z$ = 6 (dashed curves) and $z$ = 10 (solid curves). It is clear that at $z$ = 10, although the primary component of the CO signal originates from halos with masses in the range $M \sim$ 10$^{10}$-10$^{11}$ M$_\odot$, the CO emission from $ \sim 10^9$ M$_\odot$ halos makes up nearly 15\% of the total signal for the low-J lines. Thus, raising the minimum host halo mass for CO luminous halos from 10$^8$ M$_\odot$ to 10$^{10}$ M$_\odot$ excludes a population of CO-emitting sources and reduces the amplitude of $\langle T_{CO} \rangle$ for the low-J lines by a factor of $\sim$ 2. The effects of raising $M_{min,CO}$ vanish in the higher energy states since these low-mass halos emit less than 1\% of the total signal in these high-J lines; hence, the red, green, and blue solid curves, denoting the mean brightness temperature for models where $M_{min,CO}$ = 10$^8$, 10$^9$, and 10$^{10}$ M$_\odot$, respectively, are barely distinguishable from one another for the higher energy rotational transitions.
In the case where CO photodissociation is taken into account, we found that the CO becomes fully photodissociated in halos with $M <$ 10$^{10}$ M$_\odot$ (figure 4); since these low-mass halos do not emit CO flux in this model, the minimum host halo mass of a CO luminous galaxy is effectively set to $M_{min,CO}$ = 10$^{10}$ M$_\odot$. Accounting for CO destruction in molecular clouds results in an overall reduction in the amplitude of $\langle T_{CO} \rangle$, with the low- and high-J lines weakening by $\sim$ 2-20\% and 10-45\%, respectively, over the range of redshifts shown in figure 7 (dashed curves).
\FloatBarrier
\begin{figure*}[t!]
\begin{minipage}{1\linewidth}
\hspace{2cm}\includegraphics[width=350pt,height=270pt]{{TofM-eps-converted-to}.pdf}
\vspace{-.5cm}\caption{Contribution to the mean brightness temperature per logarithmic mass of different CO lines at redshift $z$ = 6 (dashed) and $z$ = 10 (solid), in the case where the effects of CO photodissociation are neglected.}
\end{minipage}
\end{figure*}
The spatially averaged brightness temperature of the CO J = 1$\rightarrow$\,0 line predicted by our model assuming $M_{min,CO}$ = 10$^8$ M$_\odot$ ranges from $\sim$ 0.6 $\mu$K at $z$ = 6 to $\sim$ 0.03 $\mu$K at $z$ = 10 when CO photodissociation is neglected. The strength of the CO signal dwindles for higher-J lines, with $\langle T_{CO}(z=6) \rangle \sim$ 0.3 $\mu$K and $\langle T_{CO}(z=10) \rangle \sim$ 0.1 $\mu$K for the CO J = 6$\rightarrow$\,5 transition.
``Turning on'' photodissociation does not significantly affect the mean brightness temperature of the high-J lines, but it does reduce the CO(1-0) signal to $\langle T_{CO} \rangle$ $\sim$ 0.4 and $\sim$ 0.01 $\mu$K at $z$ = 6 and 10, respectively.
Without delving into any particular instrumental design, we briefly consider the plausibility of detecting such signals. We use the radiometer equation for the signal-to-noise ratio, $S/N = (T_{CO}/T_{sys})\sqrt{\Delta\nu \,t_{int}}$ where $T_{sys} \sim$ 20 K is the system temperature of the detector, $\Delta\nu$ is the observed bandwidth, and $t_{int}$ is the integration time, which we take to be 1000 hours. We find that at $z$ = 6, the predicted brightness temperatures of the 4 lowest-lying CO transitions (neglecting CO photodissociation) can be detected at 5$\sigma$ confidence with a bandwidth of $\Delta\nu\sim$ 10 GHz, while a bandwidth of $\Delta\nu \sim$ 100 GHz is required for detection with 1$\sigma$ confidence at $z$ = 10. Higher J lines in this model will require even longer integration times to achieve the desired brightness sensitivity.
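This estimate can be reproduced with a few lines of code; the system temperature, bandwidth and integration time below simply restate the assumptions of the text.
\begin{verbatim}
import math

def snr(T_CO_K, T_sys_K=20.0, bandwidth_Hz=10.0e9, t_int_hr=1000.0):
    # Radiometer equation: S/N = (T_CO / T_sys) * sqrt(bandwidth * t_int)
    return (T_CO_K / T_sys_K) * math.sqrt(bandwidth_Hz * t_int_hr * 3600.0)

print(snr(0.6e-6))   # ~5.7 for the ~0.6 muK CO(1-0) signal at z = 6
\end{verbatim}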
\subsection{CO Power Spectrum}
Given the existence of brighter foreground sources of emission at the relevant frequencies, the mean redshifted CO signal will be difficult, if not impossible, to directly observe. We therefore consider spatial fluctuations in the surface brightness and compute the CO power spectrum predicted by different variations of our model. In contrast to the spectrally smooth foreground sources, the CO signal is expected to have structure in frequency space which can be used to isolate spatial fluctuations in its brightness temperature. Since the power spectrum captures the underlying matter distribution and structure, a map of the CO brightness temperature fluctuations at $z \geq$ 6 can be used to probe the spatial distribution of star-forming galaxies during the EoR.
Following the formalism in~\cite{2011ApJ...728L..46G} and~\cite{2011ApJ...741...70L}, the three-dimensional power spectrum of the CO brightness temperature fluctuations is expected to take the form
\begin{equation}
P_{CO}(k,z) = \langle T_{CO} \rangle^2(z) \left[ b_{CO}(z)^2 P_{lin}(k,z)+P_{shot}(z)\right] \,\,.
\end{equation}
Since CO is emitted from within halos, the first term in this expression represents spatial variations due to correlations with the underlying dark matter density field where $P_{lin}$ is the linear theory density power spectrum and the bias $b_{CO}$ is given by
\begin{equation}
b_{CO}(z) = \frac{\int_{M_{min,CO}}^\infty\hspace{-.5cm} dM \frac{dn}{dM}L_{CO}(M,z)b(M,z)}{\int_{M_{min,CO}}^\infty \hspace{-.5cm}dM \frac{dn}{dM}L_{CO}(M,z)}
\end{equation}
where $b(M,z) = 1 + (\nu^2(M,z)-1)/\delta_c$, $\nu(M,z)=\delta_c/\sigma(M,z)$, and $\sigma(M,z)$ is the RMS density fluctuation in a spherical region containing mass $M$~\cite{2002MNRAS.336..112M}. The second term of eq. (3.3), the shot noise contribution due to Poisson fluctuations in the number of halos on the sky, can be expressed as
\begin{equation}
P_{shot}(z)=\frac{1}{f_{duty}}\frac{\int_{M_{min,CO}}^\infty\hspace{-.5cm}dM \frac{dn}{dM}L_{CO}(M,z)^2}{\left(\int_{M_{min,CO}}^\infty\hspace{-.5cm}dM \frac{dn}{dM}L_{CO}(M,z)\right)^2}
\end{equation}
where, in all of our calculations, we adopt the Sheth-Tormen halo mass function for $dn/dM$.
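A compact numerical transcription of the clustering and shot-noise pieces is sketched below; the halo-mass grid, the mass function, the halo bias $b(M,z)$ and the linear power spectrum are placeholders to be supplied externally (for instance from a Sheth-Tormen fit and a Boltzmann code).
\begin{verbatim}
import numpy as np

def b_CO(M, dndM, L_CO, b_halo):
    # Luminosity-weighted halo bias (ratio of the two mass integrals above)
    return np.trapz(dndM * L_CO * b_halo, M) / np.trapz(dndM * L_CO, M)

def P_shot(M, dndM, L_CO, f_duty=1.0):
    # Shot noise from Poisson fluctuations in the halo counts
    return np.trapz(dndM * L_CO**2, M) / (f_duty * np.trapz(dndM * L_CO, M)**2)

def P_CO(T_mean, bias, P_lin, P_sn):
    # Clustering plus shot-noise form of the CO power spectrum
    return T_mean**2 * (bias**2 * P_lin + P_sn)

def Delta2_CO(k, P_co):
    # Contribution to the variance per logarithmic k bin
    return k**3 * P_co / (2.0 * np.pi**2)
\end{verbatim}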
\FloatBarrier
\begin{figure*}[t!]
\begin{minipage}{1\linewidth}
\vspace{-2cm}
\hspace{2cm}\includegraphics[width=300pt,height=200pt]{{delta_1-eps-converted-to}.pdf}
\end{minipage}
\begin{minipage}{1\linewidth}
\hspace{-3cm}\includegraphics[width=600pt,height=300pt]{{delta_2-eps-converted-to}.pdf}
\vspace{-0.5cm}\caption{Auto power spectrum of CO brightness temperature fluctuations for lines all the way up to $J_{upper}$ = 10, at $z$ = 6 (red) and $z$ = 10 (blue). In each panel, the solid curves represent results for the models in which CO photodissociation was neglected and a minimum host halo mass of $M_{CO,min}$ = 10$^8$ M$_\odot$ was assumed; dashed curves denote results for the case where the effects of CO photodissociation were included, effectively setting the minimum host halo for CO luminous galaxies to $M_{CO,min}$ = 10$^{10}$ M$_\odot$.}
\end{minipage}
\end{figure*}
The power spectra of the CO rotational lines from $J_{upper}$ = 1 to $J_{upper}$ = 10 are plotted in figure 9 for models which include (dashed) and exclude (solid) the effects of CO photodissociation. The y-axis shows $\Delta^2_{CO}(k,z) = k^3 P_{CO}(k,z)/(2\pi^2)$, the contribution to the variance of $\langle T_{CO} \rangle$ per logarithmic bin, in units of [$\mu$K$^2$].
In general, the fluctuations depend strongly on the wavenumber, with the overall shape of the predicted power spectra reflecting the form of eq. (3.3). On large scales, small $k$, the clustering term dominates and the fluctuations in CO brightness temperature mirror the underlying dark matter density field. Conversely, on small scales (large $k$), the shot noise takes over and the fluctuations reach $\Delta^2_{CO} \sim$ 300 - 1300 $\mu$K$^2$ on scales of $k$ = 10 Mpc$^{-1}$ at $z$ = 6, and $\Delta^2_{CO} \sim$ 4 - 14 $\mu$K$^2$ at $z$ = 10 (when photodissociation is neglected). The $J$ = 1$\rightarrow$\,0 transition is predicted to produce the strongest signal, with an amplitude that drops by a factor of $\sim$5-50 across the wavenumber range $k \sim$ 10$^{-2}$-10 Mpc$^{-1}$ when the higher energy state, $J$ = 10$\rightarrow$\,9, is considered instead.
The redshift evolution of $\Delta^2_{CO}$ is ultimately dictated by the behavior of $\langle T_{CO} \rangle(z)$; although $b_{CO}^2$ increases as the host halos become more clustered at higher redshifts, this effect is not enough to compensate for the declining brightness temperature with $z$. The auto-correlation signal of all the CO lines therefore weakens as the redshift varies from $z$ = 6 to 10, dropping by $\sim$ 2 orders of magnitude over this redshift range. On the other hand, our results for $\Delta^2_{CO}$ depend very weakly on our choice of the minimum host halo mass for CO luminous galaxies; therefore, in the fiducial models where photodissociation is neglected, we only plot results for the case where $M_{min,CO}$ = 10$^8$ M$_\odot$, thereby including the contribution of the low-mass halos to the overall emission signal.
The effects of including CO photodissociation are most prominent for the low-$J$ lines, weakening the signal by, at most, a factor of $\sim$ 6 at $z$ = 10 on large scales. At $k$ = 0.1 Mpc$^{-1}$, we find CO(1-0) brightness temperature fluctuations of amplitude $\Delta_{CO}^2 \sim$ 0.1 $\mu$K$^2$ at $z$ = 6, whether or not CO photodissociation is taken into account. At $z$ = 10, $\Delta_{CO}^2(k=0.1$ Mpc$^{-1})$ $\sim$ 5$\times$10$^{-4}$ and 8$\times$10$^{-5}$ $\mu$K$^2$ for the $J$ = 1$\rightarrow$\,0 line in the models where photodissociation is turned ``off" (assuming fixed $M_{min,CO}$ = 10$^8$ M$_\odot$) and ``on", respectively.
\section{Discussion}
We have presented a new approach to estimating the mean CO emission signal from the epoch of reionization (EoR) that links the atomic physics of molecular emission lines to high-redshift observations of star-forming galaxies. This method is based on LVG modeling, a radiative transfer modeling technique that generates the full CO SLED for a specified set of characterizing parameters, namely, the kinetic temperature, number density, velocity gradient, CO abundance, and column density of the emitting source. We showed that these LVG parameters, which dictate both the shape and amplitude of the CO SLED, can be expressed in terms of the emitting galaxy's global star formation rate, $SFR$, and the star formation rate surface density, $\Sigma_{SFR}$. Employing the $SFR$-$M$ relation empirically derived for high-redshift galaxies, i.e. $z \geq$ 4, we can then ultimately express the LVG parameters, and thus, the specific intensity of any CO rotational line, as functions of the host halo mass $M$ and redshift $z$.
Adopting a starburst duty cycle of $f_{UV}$ = 1, the average $SFR-M$ relation derived via abundance-matching for 4 $< z <$ 8 is characterized by a steeply declining slope at the low-mass end, where the star formation rate goes as $SFR \propto M^{1.6}$. With the SFR dropping to values below 0.1 M$_\odot$\,yr$^{-1}$ for $M <$ 10$^{10}$ M$_\odot$, the physical conditions in these low-mass halos are not ``extreme'' enough to substantially excite the high-$J$ CO rotational states. The resulting CO SLEDs correspondingly peak around $J \simeq$ 4 before turning over, and the overall contribution of CO emission from this halo population grows negligible at lower redshifts. On the other hand, the H$_2$ number density and CO-to-H$_2$ abundance in halos with masses $M \geq$ 10$^{10}$ M$_\odot$ are large enough to keep the high-$J$ population levels thermalized; the CO SLEDs generated by these massive halos therefore have peaks shifted to $J \geq$ 10 and overall higher amplitudes, reflecting the excitation of the more energetic states of the molecule.
We also consider the effects of CO photodissociation on the CO line intensities generated by our fiducial models. Assuming that the threshold for CO survival is set by a universal dust opacity, we adopt a first-order approximation of $N_{CO}$ which effectively reduces the CO column density with decreasing metallicity. In the limiting case where the metallicity drops so low that there is not enough dust to shield CO, the CO is considered fully dissociated and $N_{CO}$ is set to zero. We find that such conditions occur in halos with $ M \lesssim$ 10$^{10}$ M$_\odot$, causing the line intensities emitted by these low-mass halos to entirely disappear in models where CO photodissociation is accounted for.
Given our LVG model of the full CO SLEDs, we can predict both $F_{CO}$, the CO line flux generated by a set of molecular clouds in a host halo of mass $M$ at redshift $z$, as well as $\langle T_{CO} \rangle$, the spatially averaged brightness temperature of any CO rotational transition. We find that the flux emitted in the $J$ = 1 $\rightarrow$ 0 transition by a halo at redshift $z$ = 6 with mass $M \sim$ 10$^{11}$ M$_\odot$ is $F_{CO(1-0)} \sim$ 0.8 mJy km/s. The higher rotational lines are expected to be even brighter, with a 10$^{11}$ M$_\odot$ halo at $z$ = 6 emitting a CO(6-5) flux of $\sim$ 23 mJy km/s. These fluxes drop by 25-30\% when considering a 10$^{11}$ M$_\odot$ halo emitting at $z$ = 10, with $F_{CO(1-0)} \sim$ 0.2 mJy km/s and $F_{CO(6-5)} \sim$ 7 mJy km/s.
The CO line fluxes emitted by individual host halos of mass $M$ at redshift $z$, as estimated in this paper, are found to be generally higher than previous estimates obtained in the literature~\cite{2009ApJ...698.1467O,2013MNRAS.435.2676M}.
Building an analytic formalism within a paradigm where star formation is a function of gas supply and stellar feedback, the model presented in~\cite{2013MNRAS.435.2676M} provides radial distributions of SFR, $\Sigma_{gas}$, and $T$ which are then used to calculate the masses, sizes, and number of GMCs, and the corresponding total CO line luminosities emitted by these clouds.
Consequently, the CO(1-0) flux emitted by a 10$^{11}$ M$_\odot$ halo at $z$ = 6 as predicted by this model is $\sim$ 0.01-0.03 mJy km/s, an order of magnitude smaller than the flux estimates derived with our methodology. The flux in the $J$ = 6 $\rightarrow$ 5 transition is found to be highly sensitive to the inclusion of turbulent clumps within the GMCs, given that the high densities in such inhomogeneities can affect the thermalization of the higher-$J$ rotational lines; the resulting CO(6-5) flux in the models presented in~\cite{2013MNRAS.435.2676M} thus varies from 10 mJy km/s to 10$^{-3}$ mJy km/s when turbulent clumps are included and excluded, respectively.
While these results are comparable to those derived using the semi-analytic methods of~\cite{2009ApJ...698.1467O}, they are consistently smaller than the values presented in this paper. The large number of parameters used to characterize the GMC properties in~\cite{2013MNRAS.435.2676M} makes it difficult to ascertain the source of divergence between the two sets of results, even though both approaches rely on radiative transfer techniques. However, we suspect these differences can be traced back to the different prescriptions used to set the star formation rate surface density, $\Sigma_{SFR}$, and gas surface density, $\Sigma_{gas}$, the two ingredients which, once related to one another via the Kennicutt-Schmidt relation in our model, determine the shape and amplitude of the resulting CO SLED.
Furthermore,~\cite{2013MNRAS.435.2676M} parameterizes the properties of MCs as a function of their position relative to the disk center, and after considering the level populations and optical depth of CO in each cloud directly, computes the resulting flux from the entire galaxy by counting up the number of clouds at each galactic radius. The molecular clouds in our model, on the other hand, are characterized by disk-averaged properties of the host halo disk; to derive the signal from a CO-emitting galaxy, we assume a large number of these identical homogeneous collapsing clouds and scale the line intensities with the disk-averaged CO column density.
Based on the sensitivity limits quoted in~\cite{2013MNRAS.435.2676M}, our approach predicts that the $J_{upper}$ = 1 line in $z$ = 6 halos with mass $M \geq$ 10$^{12}$ M$_\odot$ will be observable by JVLA\footnote{http://www.vla.nrao.edu/} (Jansky Very Large Array) after ten hours of observation; at this redshift, the CO(2-1) and CO(3-2) line fluxes emitted by halos with $M >$ 10$^{11}$ M$_\odot$ are also expected to be observed by JVLA. Similarly, we expect ALMA to detect the CO(6-5) flux emitted by $z$ = 6 halos with mass $M \geq$ 5$\times$10$^{10}$ M$_\odot$ after ten hours of observation, and higher rotational lines $J_{upper} \geq$ 7 emitted by halos with $M \gtrsim$ 10$^{11}$ M$_\odot$.
To obtain an estimate of the spatially averaged brightness temperature of a given line at a particular redshift, we simply integrate $L_{CO}(M,z)$ over the range of halo masses that are expected to host CO-luminous galaxies. In the case where CO photodissociation is included, the minimum host halo mass of CO-emitting galaxies is set to $M_{CO,min} \sim$ 10$^{10}$ M$_\odot$ by the model itself, since the CO is found to be fully dissociated in halos with $M <$ 10$^{10}$ M$_\odot$. When CO photodissociation is ignored, varying $M_{CO,min}$ from 10$^8$ to 10$^{10}$ M$_\odot$ reduces the low-$J$ line signals at high redshifts, where the CO emission from $\sim$ 10$^9$ M$_\odot$ halos makes up a non-negligible percentage of the total emission.
In our fiducial model where CO photodissociation is neglected and $M_{min,CO}$ = 10$^8$ M$_\odot$, we predict a spatially averaged brightness temperature of $\langle T_{CO} \rangle \sim$ 0.5 $\mu$K at $z$ = 6 and 0.03 $\mu$K at $z$ = 10 for the low-$J$ CO rotational lines, with brightness temperature fluctuations of amplitude $\Delta^2_{CO} \sim$ 0.1 and 0.005 $\mu$K$^2$ respectively, at $k$ = 0.1 Mpc$^{-1}$. These CO emission signals are further reduced to $\langle T_{CO} \rangle \sim$ 0.4 and 0.01 $\mu$K at $z$ = 6 and 10, respectively, for the low-lying states when the effects of CO photodissociation are included in the calculations.
(Note that, since CO lines are typically excited by starburst activity, the choice of $M_{min,CO}$ = 10$^8$ M$_\odot$ is favored by theoretical and numerical investigations which indicate that 10$^8$ M$_\odot$ is the minimum mass required for a halo to cool and form stars at these high redshifts~\cite{1996ApJ...464..523H,1997ApJ...474....1T,2010ApJ...714L.202T,2013fgu..book.....L}).
Our estimates of $\langle T_{CO} \rangle$ for the low-$J$ CO transitions are comparable to the values obtained in previous work by~\cite{2011ApJ...730L..30C},~\cite{2011ApJ...728L..46G},~\cite{2011ApJ...741...70L}, and~\cite{2013ApJ...768...15P}. Constructing a model based on the required cosmic star formation rate density to reionize the universe,~\cite{2011ApJ...730L..30C} obtains an order-of-magnitude estimate of $\langle T_{CO} \rangle(z=8) \sim$ 1 $\mu$K for the $J$ = 1$\rightarrow$\,0 and $J$ = 2$\rightarrow$\,1 transitions.~\cite{2011ApJ...728L..46G} arrives at a slightly smaller estimate of the CO(1-0) brightness temperature, $\langle T_{CO} \rangle \sim$ 0.5 $\mu$K for $z$ = 6 and 0.1 $\mu$K for $z$ = 10, by using the Millennium numerical simulation results of~\cite{2009ApJ...698.1467O} to model the CO emission from high-redshift galaxies.~\cite{2011ApJ...741...70L} assumes a linear $SFR-M$ relation and a set of low-$z$ empirical scaling relations between a galaxy's SFR, $L_{FIR}$, and $L_{CO(1-0)}$ to estimate $\langle T_{CO} \rangle$ for these low-$J$ lines; they find a mean brightness temperature of $\sim$ 2 $\mu$K at $z \sim$ 6 and 0.5 $\mu$K at $z \sim$ 10 with fluctuations $\Delta^2_{CO} \sim$ 0.2 and 0.02 $\mu$K$^2$, respectively, on scales of $k \sim$ 0.1 Mpc$^{-1}$. In a more recent paper,~\cite{2013ApJ...768...15P} follows the approach taken in~\cite{2011ApJ...741...70L} with a few adjustments to the $SFR-M$ prescription to arrive at brightness temperatures of $\langle T_{CO} \rangle \sim$ 0.7 and 0.2 $\mu$K at redshifts $z$ = 6 and 10 respectively, assuming $M_{min,CO}$ = 10$^9$ M$_\odot$ for the CO(1-0) and CO(2-1) lines.
While these previous works are limited almost exclusively to predicting the signals of the $^{12}$CO J=1$\rightarrow$0 and 2$\rightarrow$1 transition lines, our LVG-based approach generates the full CO SLED and thus allows us to compute the signal strength of the higher-J energy states as well. For example, we predict a CO(10-9) brightness temperature of $\langle T_{CO} \rangle \sim$ 0.05 $\mu$K at $z$ = 6 and 0.003 $\mu$K at $z$ = 10 with $\Delta^2_{CO}(k = 0.1) \sim$ 0.003 and 10$^{-5}$ $\mu$K$^2$ respectively. We look forward to future experiments, such as the Carbon MonOxide Mapping Array (COMA)\footnote{http://www.stanford.edu/group/church_group/cgi-bin/wordpress/?page_id=515} currently under development, which promise to provide spectral-spatial intensity mapping of CO at the high redshifts characterizing the epoch of reionization.
\acknowledgments
We thank Reinhard Genzel for helpful discussions and suggestions. This work was supported by the Raymond and Beverly Sackler Tel Aviv University-Harvard/ITC Astronomy Program. A.L. acknowledges support from the Sackler Professorship by Special Appointment at Tel Aviv University. This work was also supported in part by a PBC Israel Science Foundation I-CORE Program grant 1829/12, and in part by NSF grant AST-1312034. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE1144152. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.
\section{I\lowercase{ntroduction}}
\label{01}
From a mathematical and philosophical point of view, the vacuum can be
compared to a region of absolutely empty space or, equivalently, to a
region of space in which there are no fields and no massive particles (see for
example \cite{Saund}).
One can mention the Lamb shift \cite{Lamb}, the Casimir effect \cite{Casimir},
the Unruh effect \cite{Unru}, the anomalous magnetic
moment of the electron \cite{Proh}, Van der Waals forces \cite{Van}, Delbr\"{u}ck
scattering \cite{Del}, Hawking radiation \cite{Haw}, the cosmological constant
problem \cite{Weinb,Wein}, and vacuum polarization in weak electromagnetic fields
\cite{Ash1,Ash2,Ash3}. This incomplete list of phenomena, some of which
have been discovered experimentally, is conditioned by the
physical vacuum or, more accurately, by the quantum vacuum (QV).
In reference \cite{Wein} the author discusses the issue of cosmic acceleration,
while in \cite{Steinh} dark energy-quintessence is studied in the framework
of QV theories, which necessarily include scalar fields. In reference \cite{ENov}
the authors evaluate the mass and electric dipole moment of the graviton, identifying
it as a dark-matter particle, which radically changes our understanding of
dark matter and possibly of dark energy. The properties of the QV can be studied
within the scope of quantum field theory (QFT), i.e. quantum electrodynamics (QED) and
quantum chromodynamics (QCD). Note that QFT could accurately describe the QV only if
one could sum the infinite series of perturbation-theory terms typical of field
theories. However, it is well known that perturbation theory for
QFT does not converge at low energies and therefore fails to describe, for example,
nonzero vacuum expectation values, called condensates in QCD or in the BCS theory of
superconductivity. In particular, as shown in \cite{Savi}, as well as in \cite{SCHARF}, the radiative
corrections of the massless Yang-Mills theory lead to instability of the vacuum state, which
corresponds to the asymptotic freedom of gauge theories and is due to infrared features.
According to the Standard Model (SM), it is precisely the non-zero vacuum expectation value
of the Higgs field \cite{Higgs1,Higgs2}, arising from spontaneous symmetry breaking,
that provides the principal mechanism for generating masses.
To overcome these difficulties and conduct a consistent and comprehensive QV
study, we propose using complex Langevin-type stochastic differential equations
(SDE) as the basic equations of motion, for which the Yang-Mills equations serve as the
principle of local correspondence. Note that such a mathematical representation
makes it possible to describe massless quantum fields with multi-scale fluctuations in
a Hilbert space containing subspaces of single-particle states with zero mass and
spin 1.
In this study, our goal is to give an unambiguous answer to a number of
important questions of the QV theory, many of which are still not well understood
or remain open problems. In particular, we believe that it is crucial to answer
the following questions:
\begin{itemize}
\item Could random fluctuations of massless vector quantum fields lead to the
formation of stable massless Bose particles with spin 1, the \emph{hions}?
\item What are the space-time features of \emph{hion} and how does its quantum
state change when the multi-scale nature of random fluctuations of massless
fields are taken into account?
\item What entangled states are created by the two \emph{hions}? How do
these quantum states evolve on the second phase (scale) of relaxation?
\item What are the properties of the \emph{scalar quantum field-quintessence-dark energy}
consisting of massless spin-0 bosons?
\item What is the structure of empty space-time when the quantum vacuum is taken into account?
\item Is it possible to implement space-time engineering and, accordingly, change
the fundamental properties of subatomic particles?
\end{itemize}
The manuscript is organized as follows.
In the section \Rom 2, we briefly present some well-known facts about the quantum
motion of a photon, described by the wave function in the coordinate representation.
In the section \Rom 3, we justify the multi-scale random nature of the free
Yang-Mills fields. First, we obtain the explicit form of SDE of QVF with the
gauge group symmetry $SU(2)\otimes U(1)$, assuming that all the
self-action terms are identically zero. Secondly, in the limit of statistical
equilibrium of complex probabilistic processes, the necessary conditions for
quantization are formulated. Further, the equations of motion for the QVF are derived.
By solving these equations, we obtain a discrete set of orthonormal wave functions
describing the stationary states of a massless Bose particle with spin-1
(\emph{hion}), which are localized on a two-dimensional topological manifold.
In section \Rom 4 we study in detail the probability distribution in different
quantum states of a \emph{hion}.
In section \Rom 5, we investigate the evolution of the \emph{hion} wave state
on the second relaxation scale. We show that as a result of a new relaxation,
the \emph{hion} turns into a quasi-particle, which leads to its spontaneous
transitions to various massive and massless states.
In section \Rom 6, we construct the singlet and triplet states of two entangled
vector bosons (\emph{hions}). The properties of the scalar bosons (singlet states) and
the possibility of the formation of the Bose-Einstein condensate of these particles
are studied in detail.
In section \Rom 7, we thoroughly discuss the obtained results.
In the section \Rom 8, we present an important proof confirming the
convergence of the developed theory.
\section{Q\lowercase{uantum motion of a photon in empty space}}
The question of the correspondence between the Maxwell equations and
the equations of quantum mechanics was the focus of attention of many
researchers at the dawn of the development of quantum theory \cite{Opp,Mol,Wein1}.
As shown (see, for example in \cite{BIALYNICKI}), the quantum motion of a
photon in a vacuum can be considered within the framework of a wave function
representation, writing it in vector form similarly to the Weyl equation for
a neutrino:
\begin{eqnarray}
\partial_{t}\bm{\Psi^{\pm}}(\textbf{r},t)\mp c_0\bigl(\textbf{S}\cdot
\bm\nabla\bigr) \bm{\Psi^{\pm}}(\textbf{r},t)=0,\qquad \partial_{t}\equiv\partial/\partial t,
\label{1.01a}
\end{eqnarray}
where $c_0$ denotes the speed of light in the empty Minkowski space-time
$(\textbf{r},t)\in \mathbb{R}^4$, while
$\bm{\Psi^{+}}(\textbf{r},t)$ and $\bm{\Psi^{-}}(\textbf{r},t)$ are the
photon wave functions of the two helicities, corresponding to left-handed and
right-handed circular polarizations, respectively. In addition, the set of matrices $\textbf{S}
=(S_x,S_y,S_z)$ in the first-order vector equation (\ref{1.01a})
describes infinitesimal rotations of particles with spin projections $+1$
and $-1$. These three matrices can be represented in the form:
\begin{widetext}
\begin{eqnarray}
S_x=\left(
\begin{array}{ccc}
0 & 0 & 0\\
0 & 0 & -i \\
0 & i & 0 \\
\end{array}
\right),
\qquad
S_y=\left(
\begin{array}{ccc}
0 & 0 & i\\
0 & 0 & 0 \\
-i & 0 & 0 \\
\end{array}
\right),\qquad
S_z=\left(
\begin{array}{ccc}
0 & -i & 0\\
i & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right).
\label{1.01cy}
\end{eqnarray}
\end{widetext}
Recall that these three matrices are a natural generalization of the Pauli matrices
for $SU(2)$ to $SU(3)$, which formed the basis of Gell-Mann's quark model \cite{Gell-Mann}.
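As a quick consistency check (our own addition), the matrices (\ref{1.01cy}) can be verified numerically to close the spin-1 algebra; with $T_a=S_a/2$ the same computation reproduces the structure constants quoted later for the electroweak symmetry group.
\begin{verbatim}
import numpy as np

Sx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Sy = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
Sz = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

def comm(a, b):
    return a @ b - b @ a

# Spin-1 algebra: [S_x, S_y] = i S_z and cyclic permutations
assert np.allclose(comm(Sx, Sy), 1j * Sz)
assert np.allclose(comm(Sy, Sz), 1j * Sx)
assert np.allclose(comm(Sz, Sx), 1j * Sy)
# With T_a = S_a / 2 one finds [T_1, T_2] = (i/2) T_3, i.e. f_123 = 1/2
\end{verbatim}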
The absence of electrical and magnetic charges in the equation
(\ref{1.01a}) ensures the following conditions:
\begin{equation}
\nabla \cdot \bm{\Psi^{\pm}}(\textbf{r},t)=0.
\label{1.01}
\end{equation}
If we present the wave function in the form of a Riemann-Silberstein vector \cite{RieSil}:
\begin{eqnarray}
\bm{\Psi^{\pm}}(\textbf{r},t)=\frac{1}{\sqrt{2}}\biggl\{\frac{\textbf{D}(\textbf{r},t)}{\sqrt{\epsilon_0}}
\pm i\frac{\textbf{B}(\textbf{r},t)}{\sqrt{\mu_0}}\biggr\},\qquad c_0=\frac{1}{\sqrt{\mu_0\epsilon_0}},
\label{1.01b}
\end{eqnarray}
then from the Eqs. (\ref{1.01a}) and (\ref{1.01}) it is easy to find Maxwell's equations
in an ordinary vacuum or in empty space:
\begin{eqnarray}
\partial_t \textbf{D} -{\bm\nabla}\times\textbf{H} =0,\qquad
\nabla\cdot\textbf{E}=0,\nonumber\\
\partial_t \textbf{B} +\,{\bm\nabla}\times\textbf{E}=0,\qquad
\nabla\cdot\textbf{H}=0,
\label{1.01c}
\end{eqnarray}
where $\epsilon_0$ and $\mu_0$ describe the electric and magnetic constants of
the vacuum, respectively. It is important to note that the electric and magnetic
constants relate the field vectors through:
$$\textbf{D}=\epsilon_0\textbf{E}, \qquad \textbf{B}=\mu_0\textbf{H}.$$
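A small numerical illustration of the construction (\ref{1.01b}), using SI values for $\epsilon_0$ and $\mu_0$, is given below; the field arrays passed to the helper are arbitrary placeholders.
\begin{verbatim}
import numpy as np

EPS0 = 8.854e-12        # vacuum permittivity [F/m]
MU0  = 4.0e-7 * np.pi   # vacuum permeability [H/m]

def psi(D, B, helicity=+1):
    # Riemann-Silberstein combination of the D and B field vectors
    return (D / np.sqrt(EPS0) + helicity * 1j * B / np.sqrt(MU0)) / np.sqrt(2.0)

print(1.0 / np.sqrt(MU0 * EPS0))   # ~2.998e8 m/s, i.e. c_0
\end{verbatim}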
Recall that the only difference between the equations (\ref{1.01a}) and (\ref{1.01c})
is that the system of Maxwell equations does not explicitly take into account the spin
of the photon, which is important for further theoretical constructions.
Since $\epsilon_0$ and $\mu_0$ are constants that are independent of external
fields and characterize the state of a free or \emph{unperturbed vacuum},
the idea arises of considering the vacuum or, more precisely, the QV, as an
energy environment with \emph{unusual properties and structures}.
\section{V\lowercase{ector field and its fundamental particle-\underline{\emph{hion}}}}
\subsection{Yang-Mills theory for free fields}
The Yang-Mills theory is a special example of gauge field theory
with a non-Abelian gauge symmetry group, whose Lagrangian for the free-fields
case has the following form (see \cite{Yang}, as well as \cite{Caprini}):
\begin{eqnarray}
\mathcal{L}_{gf}=-\frac{1}{2}\mathrm{Tr}(\mathcal{F}^2)=-
\frac{1}{4}\mathcal{F}^{\mu\nu}_{a}\mathcal{F}^a_{\mu\nu},
\label{2.l1}
\end{eqnarray}
where $\mathcal{F}$ is the 2-form of the Yang-Mills field strength.\\
Note that the Lagrangian remains invariant under gauge transformations of the potential
$\mathcal{A}_{\mu}^a$, in terms of which the field-strength tensor is given by:
\begin{eqnarray}
\mathcal{F}^a_{\mu\nu}=\partial_\mu \mathcal{A}^a_\nu-\partial_\nu\mathcal{A}^a_\mu
+\mathfrak{g}\mathfrak{f}^{abc}\mathcal{A}^b_\mu \mathcal{A}^c_\nu,\qquad
\mu,\nu=0,1,2,3,
\label{2.l2}
\end{eqnarray}
where $\partial_\mu=(ic_0^{-1}\partial_t,\partial_x,\partial_y,\partial_z)=
(ic_0^{-1}\partial_t,{\bm \nabla})$ denotes the covariant derivative in the
four-dimensional Minkowski space-time, which
in Galilean coordinates is reduced to the usual partial derivative. In addition,
$\mathfrak{f}^{abc}=\mathfrak{f}_{abc}$ are the structure constants of the
group (Lie algebra), $\mathfrak{g}$ is the interaction constant and, finally,
for the group $SU(N)$, the number of isospin generators is $N^2-1$, so that the
indices run over $a,b,c = 1,\dots,N^2-1.$
From the Lagrangian (\ref{2.l1}) one can derive the equations of motion
for the classical free Yang-Mills fields:
\begin{eqnarray}
\partial^\mu \mathcal{F}^a_{\mu\nu}+\mathfrak{g}\mathfrak{f}^{abc}
\mathcal{A}^{\mu b}\mathcal{F}^c_{\mu\nu}=0,
\label{2.l3}
\end{eqnarray}
which are obviously characterized by self-action. Note that in the case of a small
coupling constant $\mathfrak{g} <1$, the perturbation theory is applicable for
solving these equations. However, as shown by numerous studies, in the case
$\mathfrak{g}<1$, massless vector bosons with spin 1 are not formed in the
free Yang-Mills fields. In other words, it remains to assume that the coupling
constant must satisfy $\mathfrak{g}>1$, but then it is not clear how
to solve the equations (\ref{2.l3}) and, accordingly, the problem remains open \cite{Cly}.
Note that, as shown by numerous studies, even with relatively weak nonlinearity,
the behavior of Yang-Mills fields is chaotic in large regions of the phase space
(see, for example, \cite{Matinyan}). However, the chaotic behavior of the Yang-Mills
fields, in our opinion, may have a different, no less fundamental nature associated
with the global behavior of quintessence. In particular, as follows from various
experiments, on very small space-time scales, continuous fluctuations of vacuum
fields become so significant that the inclusion of these contributions to the basic
equations of motion becomes a fundamentally necessary task. In other words, it
would be quite natural to assume that the Lagrangian (\ref{2.l1}), or more precisely
the vacuum fields $\mathcal{F}^a_{\mu\nu}$, are random. The latter is obviously
equivalent to the requirement that the basic equations of motion (\ref{2.l3}) be
\emph{stochastic differential equations} (SDE). In this regard, the natural
question arises: how can these fields be quantized? It is obvious that canonical
quantization, i.e. the functional-integral methods for a generating functional of
the $n$-point functions, cannot be used in this case.
It would seem that the Parisi-Wu stochastic quantization method \cite{Dam}, which is based on
an important analogy between Euclidean quantum field theory and classical statistical
mechanics, is better suited to the problem of field quantization. Without going into
details, we note that the Parisi-Wu approach, despite
a number of serious advantages over other approaches, in particular the absence
of problems with Faddeev-Popov ghost fields \cite{Fad-Pop}, is problematic to
apply to such a critical substance as the quantum vacuum.
\subsection{Q\lowercase{uantization of stochastic vacuum fields}}
Let us consider the simplest case, where space-time is described by
the Lorentz metric $\chi_{\mu\nu}=\mathrm{diag}(+\,-\,-\,-)$, the coupling
constant $\mathfrak{g}=0$, and when the fields satisfy the symmetry group
$SU(2)\otimes U(1)$. The latter means that we consider the unified electroweak
interaction within the framework of the Abelian gauge group, but using stochastic
field equations. It should, however, be noted that in the case when
$\mathfrak{g}=0$, the isospins do not interact with each other.
We determine the covariant antisymmetric tensor of the quantum vacuum fields
(QVF) in the form:
\begin{equation}
\mathcal{F}^a_{\mu\nu}=
\left(
\begin{array}{cccc}
0 & \psi_x^\pm & \psi_y^\pm & \psi_z^\pm\\
\psi_x^\pm & 0 &\pm\psi_z^\pm &\mp \psi_y^\pm\\
\psi_y^\pm & \mp \psi_z^\pm & 0&\mp \psi_x^\pm\\
\psi_z^\pm &\pm \psi_y^\pm &\mp \psi_x^\pm & 0\\
\end{array}
\right).
\label{2.l0}
\end{equation}
In the case when $\mathfrak{g} = 0 $, it seems logical to refine the equation
(\ref{2.l3}) for short distances (see for example \cite{Faizal}), which leads to the equation:
$$\partial^\mu \mathcal{F}^a_{\mu\nu}-\frac{\alpha}{15\pi m^2}\square\partial^\mu
\mathcal{F}^a_{\mu\nu}=0,$$ where $\square=\triangle-c^{-2}\partial_t^2$ denotes the D'Alembert operator,
$\bigtriangleup$ is the Laplace operator, $\alpha$ is the fine structure constant
and $m$ is the mass of the electron. However, if we make the substitution
$\tilde{\mathcal{F}}^a_{\mu\nu}=\mathcal{F}^a_{\mu\nu}-\frac{\alpha}{15\pi m^2}\square
\mathcal{F}^a_{\mu\nu}$, then we get the equation $\partial^\mu\tilde{\mathcal{F}}^a_{\mu\nu}=0.$
Below we will study the properties of the generalized QVF $\tilde{\mathcal{F}}^a_{\mu\nu}$.
Since for $\mathfrak{g}=0$ the equations (\ref{2.l3})
of all isospins are symmetric, below, where it does not cause confusion, the index
$a=(1,2,3)$ will be omitted.
Now we will consider the equation (\ref{2.l3}), taking into account the fact that
$\mathfrak{g}=0$.\\ Substituting (\ref{2.l0}) into the equation (\ref{2.l3}), one can
obtain the following three independent SDEs:
\begin{eqnarray}
ic^{-1}\dot{\psi}^{\pm}_x=\pm\partial_y \psi^{\pm}_z\mp\partial_z \psi^{\pm}_y,
\nonumber\\
ic^{-1}\dot{\psi}^{\pm}_y=\pm\partial_z\psi^{\pm}_x\mp\partial_x \psi^{\pm}_z,
\nonumber\\
ic^{-1}\dot{\psi}^{\pm}_z=\pm\partial_x\psi^{\pm}_y\mp\partial_y \psi^{\pm}_x,
\label{1.02tab}
\end{eqnarray}
and the following relation between the derivatives of the field components:
\begin{eqnarray}
\partial_x{\psi}^{\pm}_x+ \partial_y{\psi}^{\pm}_y+\partial_z{\psi}^{\pm}_z=0,
\label{1.02ta}
\end{eqnarray}
where $c$ is the velocity of the field propagation, which may differ
from the speed of light in \emph{ordinary vacuum}, in addition,
$\dot{{\psi}}^{\pm}_\sigma= \partial{\psi}^{\pm}_\sigma/\partial t$
and $\sigma=x,y,z$.
Combining SDE (\ref{1.02tab}), we can write the following stochastic vector equation:
\begin{eqnarray}
\dot{\bm{\psi}}^{\pm}\bigl(\textbf{r},t;\textbf{f}(t)\bigr) \mp
c\bigl(\textbf{S}\cdot \bm\nabla\bigr)
\bm{\psi^{\pm}}\bigl(\textbf{r},t;\textbf{f}(t)\bigr) =0,\qquad t\in(-\infty,+\infty),
\label{2.0a1}
\end{eqnarray}
where $\textbf{f}(t)$ is a random function, which is clearly defined below (see Eqs.
(\ref{1.0k2a}) and (\ref{1.0t2a})), and, accordingly, the function
$\bm{\psi^{\pm}}\bigl(\textbf{r},t;\textbf{f}(t)\bigr)$ in this case has the
mathematical meaning of a complex probabilistic process (see for example
\cite{AshG,Ashg1}), which can be represented as a three-component vector in
a Hilbert space:
\begin{eqnarray}
\label{1.01cz}
\bm\psi^{\pm}\bigl(\textbf{r},t;\textbf{f}(t)\bigr)=\left[
\begin{array}{ccc}
\psi^{\pm}_x\bigl(\textbf{r},t;\textbf{f}(t)\bigr)\\
\psi^{\pm}_y\bigl(\textbf{r},t;\textbf{f}(t)\bigr)\\
\psi^{\pm}_z\bigl(\textbf{r},t;\textbf{f}(t)\bigr)\\
\end{array}
\right].
\end{eqnarray}
Note that $\bm{\psi^{\pm}}\bigl(\textbf{r},t;\textbf{f}(t)\bigr)\in
L^2(\mathbb{R}^4\otimes \textbf{R}_{\{\textbf{f}\}}),$ where
$\textbf{R}_{\{\textbf{f}\}}$ denotes the functional space. Recall that
in the case when $\textbf{f}(t)\equiv0$, the equation (\ref{2.0a1}) goes into the
usual Weyl equation of type (\ref{1.01a})-(\ref{1.01cy}).
As for the symmetry of the electroweak fields, they satisfy the
following commutation relations:
\begin{equation}
[T_a,T_b]=i\sum_{c=1}^3\mathfrak{f}_{abc}T_c,\qquad T_a=\frac{1}{2}S_a,
\label{1.l2k}
\end{equation}
where $S_1=S_x,\,S_2=S_y$ and $S_3=S_z$ (see expressions (\ref{1.01cy})).\\
Note that the structure constants satisfy the following relations:
$\mathfrak{f}_{123}=-\mathfrak{f}_{132}=\mathfrak{f}_{312}
=-\mathfrak{f}_{321}=\mathfrak{f}_{231}=1/2$ and ${\emph{Tr}}\,(T_a)=0$.
It is well known that quantum vacuum fields are characterized by random multi-scale
fluctuations in four-dimensional Minkowski space-time.
The latter, as a result, leads to a multi-scale evolution of these fields.
Note that, in the simplest case, the multi-scale evolution of the system can be
characterized by two sets of parameters $\{\tau,\bm\varepsilon\}$, namely the
relaxation times $\{\tau\}=(\tau_0,\tau_{1}, ...)$ and the fluctuation
powers $\{\bm\varepsilon\}=(\bm{\varepsilon}_0,\bm{\varepsilon}_1, ...)$,
respectively. It is also assumed that at small time intervals $\delta{t}\ll\tau_0$,
where $\tau_0=\min\{\tau\}$ is the relaxation time of minimum duration, the
complex SDE (\ref{2.0a1}) reduces to the Weyl-type equation (\ref{1.01a})-(\ref{1.01cy}).
In other words, the equation (\ref{1.01a})-(\ref{1.01cy}) in this case plays the role of
the principle of local correspondence, and therefore further the stochastic equation
(\ref{2.0a1}) will conventionally be called the \emph{Langevin-Weyl} equation.
Note that our main goal will be to study the equation (\ref{2.0a1}) for the
symmetry group $SU(2)\otimes U(1)$ on the main relaxation scale, characterized by
parameters $(\tau_0,\bm{\varepsilon}_0)$. Recall that for each symmetry group these
parameters are different. In particular, for the symmetry under consideration,
multi-scale fluctuations should obviously be characterized by three constants
$({\bm\varepsilon}_{0}^1,{\bm\varepsilon}_{0}^2,{\bm\varepsilon}_{0}^3)$
(see the definition of the number of isospin generators after the equation
(\ref{2.l2}) and, respectively, below after the equation (\ref{1.0t2a})).
\textbf{Theorem.} \emph{If QVF obeys the Langevin-Weyl SDE (\ref{2.0a1}), then
for the symmetry group $SU(2)\otimes U(1)$ on the main relaxation scale
$(\tau_0,\bm{\varepsilon}_0^a)$, in the limit of statistical equilibrium, a massless
Bose particle with spin 1 is formed as a 2D topological structure in 3D space.}
Obviously, in the case of a localized quantum state, the four-dimensional
interval of the propagating signal will be equal to zero and, accordingly,
the points of Minkowski space (events) must be connected by a
light-cone-type relation:
\begin{equation}
s^2 =c^2t^2-r^2=0, \qquad r^2=x^2+y^2+z^2.
\label{1.02k}
\end{equation}
Bearing in mind that particles with projections of spins $+1$ and $-1$ are
symmetric, below we will investigate only the wave function of a particle
with spin projection +1.
Taking into account (\ref{1.02tab}) and (\ref{1.02ta}), we obtain the following
second-order partial differential equations:
\begin{eqnarray}
\square \psi^{+}_x={c}^{-1}{c_{,y}}\bigl(\partial_x \psi^{+}_y-\partial_y \psi^{+}_x\bigr)-
{c}^{-1} {c_{,z}} \bigl(\partial_z \psi^{+}_x-\partial_x \psi^{+}_z \bigr)- c_{,t}c^{-3}\dot{\psi}^{+}_x,
\nonumber\\
\square\psi^{+}_y={c}^{-1}{c_{,z}}\bigl(\partial_y \psi^{+}_z-\partial_z \psi^{+}_y\bigr)-
{c}^{-1}{c_{,x}}\bigl(\partial_x \psi^{+}_y-\partial_y \psi^{+}_x\bigr)-c_{,t}c^{-3}\dot{\psi}_y,
\nonumber\\
\square\psi^{+}_z= {c}^{-1}{c_{,x}}\bigl(\partial_z \psi^{+}_x-\partial_x \psi^{+}_z\bigr)-
{c}^{-1}{c_{,y}}\bigl(\partial_y \psi^{+}_z-\partial_z \psi^{+}_y\bigr)-c_{,t}c^{-3}\dot{\psi}_z,
\label{1.02t}
\end{eqnarray}
where $c_{,\breve{\sigma}}=\partial c/\partial\breve{\sigma}$ and $\breve{\sigma}=(x,y,z,t)$.
Now we need to determine the explicit form of the equations; for this it is
necessary to calculate the derivatives $c_{,\breve {\sigma}}$.
Using the relation (\ref{1.02k}), it is easy to calculate:
\begin{equation}
c_{,t}=- c^2/r,\qquad c_{,x}= cx/r^{2}, \qquad c_{,y}= cy/r^{2}, \qquad
c_{,z}= cz/r^{2}.
\label{1.02zt}
\end{equation}
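These derivatives follow directly from the light-cone condition (\ref{1.02k}) with $c=r/t$; a short symbolic check (our own addition) is given below.
\begin{verbatim}
import sympy as sp

x, y, z, t = sp.symbols('x y z t', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
c = r / t          # from s^2 = c^2 t^2 - r^2 = 0

# c_,t = -c^2/r and c_,sigma = c*sigma/r^2 for sigma = x, y, z
assert sp.simplify(sp.diff(c, t) + c**2 / r) == 0
assert sp.simplify(sp.diff(c, x) - c * x / r**2) == 0
assert sp.simplify(sp.diff(c, y) - c * y / r**2) == 0
assert sp.simplify(sp.diff(c, z) - c * z / r**2) == 0
\end{verbatim}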
For further analytic study of the problem, it is useful to bring the system of equations
(\ref{1.02t}) into canonical form, when all components of the field are separated
and each of them is described by a separate equation.
In particular, taking into account the fact that in the problem under consideration all fields
are symmetric, the following additional conditions can be imposed on the field components:
\begin{eqnarray}
\bigl(c_{,z}- c_{,y}\bigr)\dot{\psi}^+_x=c_{,z}\dot{\psi}_y^+-c_{,y}\dot{\psi}_z^+,
\nonumber\\
\bigl(c_{,x}- c_{,z}\bigr)\dot{\psi}^+_y=c_{,x}\dot{\psi}_z^+-c_{,z}\dot{\psi}_x^+,
\nonumber\\
\bigl(c_{,y}- c_{,x}\bigr)\dot{\psi}^+_z=c_{,y}\dot{\psi}_x^+-c_{,x}\dot{\psi}_y^+.
\label{1.02ak}
\end{eqnarray}
It is easy to verify that these conditions are symmetric with respect to
the components of the field and are given on the hyper-surface of four-dimensional
events. Using the conditions (\ref{1.02ak}), the system of equations (\ref{1.02t})
can be easily reduced to the canonical form:
\begin{eqnarray}
\Bigl\{\square+\bigl[i(c_{,z}-c_{,y})+c_{,t}c^{-1}\bigr]c^{-2}\partial_t\Bigr\}\psi^{+}_x=0,
\nonumber\\
\Bigl\{\square+\bigl[i(c_{,x}-c_{,z})+c_{,t}c^{-1}\bigr]c^{-2}\partial_t\Bigr\}\psi^{+}_y=0,
\nonumber\\
\Bigl\{\square+\bigl[i(c_{,y}-c_{,x})+c_{,t}c^{-1}\bigr]c^{-2}\partial_t\Bigr\}\psi^{+}_z=0.
\label{1.02zkl}
\end{eqnarray}
For further investigations, it is convenient to represent the wave function
component in the form:
\begin{equation}
\psi_\sigma^+\bigl(\textbf{r},t;f_\sigma(t)\bigr)=\exp\biggl
\{\int^t_{-\infty} \zeta_\sigma(t')dt'\biggr\}\phi_\sigma^+(\textbf{r}),
\label{1.02a}
\end{equation}
where $\zeta_\sigma(t)$ denotes the random function, and $f_\sigma(t)$ is
the corresponding projection of the random vector $\textbf{f}(t)$.
Substituting (\ref{1.02a}) into (\ref{1.02zkl}) and taking into account (\ref{1.02zt}),
we get the following system of differential equations:
\begin{eqnarray}
\biggl\{\triangle-\Bigl[\Bigl(\frac{\xi_x(t)}{c}\Bigr)^2+\frac{r-i(z-y)}{cr^2}
\zeta_x(t)\Bigr]\biggr\}\phi_x^+(\textbf{r})=0,\,\,
\nonumber\\
\biggl\{\triangle-\Bigl[\Bigl(\frac{\xi_y(t)}{c}\Bigr)^2+\frac{r-i(x-z)}{c r^2}
\zeta_y(t)\Bigr]\biggr\}\phi_y^+(\textbf{r})
=0,\,\,
\nonumber\\
\biggl\{\triangle-\Bigl[\Bigl(\frac{\xi_z(t)}{c}\Bigr)^2+\frac{r-i(y-x)}{cr^2}
\zeta_z(t)\Bigr]\biggr\}\phi_z^+(\textbf{r})=0.\,\,
\label{1.02b}
\end{eqnarray}
In the equations (\ref{1.02b}) the following notation is made:
$$\xi^2_\sigma(t)=\dot{\zeta}_\sigma(t)+\zeta^2_\sigma(t),$$
where $\dot{\zeta}_\sigma(t)=\partial\zeta_\sigma(t)/\partial t$.
It is easy to verify that the coefficients in the equations (\ref{1.02b})
are random functions of time. It is therefore natural to average these
equations over the relaxation time scale $\tau_0$.
Averaging the equations (\ref{1.02b}), we get the following system of
second-order stationary differential equations:
\begin{eqnarray}
\biggl\{\triangle-\Bigl[\Bigl(\frac{\omega_x}{c}\Bigr)^2+\frac{r-i(z-y)}{cr^2}
\varrho(\omega_x)\Bigr]\biggr\}\phi_x^+(\textbf{r})=0,
\nonumber\\
\biggl\{\triangle-\Bigl[\Bigl(\frac{\omega_y}{c}\Bigr)^2+\frac{r-i(x-z)}{c r^2}
\varrho(\omega_y)\Bigr]\biggr\}\phi_y^+(\textbf{r})=0,
\nonumber\\
\biggl\{\triangle-\Bigl[\Bigl(\frac{\omega_z}{c}\Bigr)^2+\frac{r-i(y-x)}{cr^2}
\varrho(\omega_z)\Bigr]\biggr\}\phi_z^+(\textbf{r})=0,
\label{2.0k2b}
\end{eqnarray}
where $\omega_\sigma$ and $\varrho(\omega_\sigma)$ are regular parameters of the
problem, which are defined as follows:
\begin{equation}
\omega^2_\sigma=\langle\xi^2_\sigma(t)\rangle=\langle\dot{\zeta}_\sigma(t)+
\zeta^2_\sigma(t)\rangle,\qquad \varrho(\omega_\sigma)=\langle\zeta_\sigma(t)\rangle.
\label{2.0k2a}
\end{equation}
In (\ref{2.0k2a}) the bracket $\langle...\rangle$ denotes averaging
over the relaxation time $\tau_0$.
Now the main question is whether statistical equilibrium can emerge in the
system under consideration, leading to a stable distribution of the
parameter $\varrho(\omega_\sigma)$.
Note that the latter circumstance, for obvious reasons, removes the nontrivial
question connected with the unitary transformation of the state vector, since
the quantum system in this problem is not isolated. Obviously, in this case
it is necessary to require the conservation of the norm of the average
value of the state vector:
\begin{eqnarray}
\bigl\langle\bm\psi^{\pm}\bigl(\textbf{r},t;\textbf{f}(t)\bigr)
\bigr\rangle_{\textbf{R}_{\{\textbf{f}\}}}= \bm\phi^{\pm}(\textbf{r})=\left[
\begin{array}{ccc}
\phi^{\pm}_x(\textbf{r})\\
\phi^{\pm}_y(\textbf{r})\\
\phi^{\pm}_z(\textbf{r})\\
\end{array}
\right],
\label{2.07c}
\end{eqnarray}
where
$$
\phi^{\pm}_\sigma(\textbf{r})=\bigl\langle\psi^{\pm}_\sigma\bigl(\textbf{r},t;
f_\sigma(t)\bigr)\bigr\rangle_{R^\sigma_{\{f_\sigma\}}},\qquad\sigma=x,y,z,
$$
the bracket $\langle...\rangle_{R^\sigma_{\{f\}}}$ denotes the functional integration:
$$
\textbf{R}_{\{\textbf{f}\}}=R^x_{\{f_\sigma\}}\otimes R^y_{\{f_\sigma\}}\otimes
R^z_{\{f_\sigma\}}.
$$
Now the key question for this representation is the proof of the
existence of the average value of the state vector $\bm\phi^{\pm}(\textbf{r})$
in the limit of statistical equilibrium, i.e. formally at $t\to\infty$.
Using the first relation in (\ref{2.0k2a}), we can define the following Langevin
equation:
\begin{eqnarray}
\dot{\zeta}_\sigma=-(\zeta^2_\sigma-\omega^2_\sigma)+U_\sigma(t),
\label{1.0k2a}
\end{eqnarray}
where $U_\sigma(t)=U_{0 \sigma}+f_\sigma(t),$ in addition,
$U_{0 \sigma}=\langle{U_\sigma(t)}\rangle<0$ is an unknown constant. As for the
term $f_\sigma(t)$, it denotes a random force that satisfies the \emph{white noise}
correlation relations:
\begin{equation}
\langle f_\sigma(t)\rangle=0,\qquad \langle{f_\sigma(t)f_\sigma(t')}\rangle
=\varepsilon_{0\sigma}^a\delta(t-t').
\label{1.0t2a}
\end{equation}
Recall that in (\ref{1.0t2a}) the set of constants
${\bm\varepsilon}_{0}^a=(\varepsilon_{0x}^a,\varepsilon_{0y}^a,\varepsilon_{0z}^a)$
denote the oscillation powers of isospin $a$ along different axes.
It is natural to assume that for each isospin ${\varepsilon}_{0}^a=\varepsilon_{0x}^a=
\varepsilon_{0y}^a=\varepsilon_{0z}^a$, whereas
${\varepsilon}_{0}^a\neq{\varepsilon}_{0}^b$ when $a\neq b$.
Recall that in the $SU(2)\otimes U(1)$ gauge group there are three $W$ bosons
of weak isospin from $SU(2)$, namely $W_1,W_2$ and $W_3$, and the $B$
boson of weak hypercharge from $U(1)$, all of which are massless.
These bosons obviously must be characterized by a set of constants
$({\varepsilon}_0^1,{\varepsilon}_0^2,{\varepsilon}_0^3)$.
Using SDE (\ref{1.0k2a}) and relations (\ref{1.0t2a}), as well as assuming that
$U_{0\sigma}=-2\,\omega^2_\sigma$, one can obtain the equation for the probability distribution
(see \cite{Klyat,Lif}):
\begin{equation}
\frac{\partial\mathcal{P}^0}{\partial t}=\biggl\{\frac{\partial}{\partial \zeta}
\bigl(\zeta^2+\omega^2\bigl)
+\frac{\varepsilon_{0}}{2}\frac{\partial^2}{\partial\zeta^2}\biggr\}\mathcal{P}^0.
\label{2.0t2a}
\end{equation}
Recall that in (\ref{2.0t2a}) and below, to simplify writing, we will omit both
the isospin index $a$ and the index $\sigma$, which denotes the coordinate.
Solving the equation (\ref{2.0t2a}), it is easy to find \cite{Frish}:
\begin{equation}
\mathcal{P}^0(\bar{\zeta};\bar{\omega})=2\varepsilon^{-1}_{0}\mathcal{J}
(\bar{\omega})e^{-2\Phi(\bar{\zeta})}
\int^{\bar{\zeta}}_{-\infty}e^{2\Phi(\bar{\zeta}')}d\bar{\zeta}',
\label{2.32a}
\end{equation}
where $\bar{\zeta}=\zeta/\varepsilon^{1/3}_0$ and $\Phi(\bar{\zeta})=
(\bar{\zeta}^3+3\bar{\omega}^2\bar{\zeta})/3$.\\
As for the coefficient $\mathcal{J}(\bar{\omega})$, it is determined from
the normalization condition of the distribution (\ref{2.32a}) to the unity:
$$
\mathcal{J}^{-1}(\bar{\omega})=\sqrt{\pi}\biggl(\frac{2}{\varepsilon_0}\biggr)^{1/3}
\int_{0}^{\infty}
\exp\Bigl[-\frac{x^3}{6}-2\,\bar{\omega}^{2}x\Bigr]\frac{dx}{\sqrt{x}},
$$
where $\bar{\omega}=\omega/\varepsilon^{1/3}_0$ is the dimensionless frequency.
The coefficient $\mathcal{J}(\bar{\omega})$, which is a function of frequency,
has the sense of the probability density of states.
Finally, using the von Neumann mean ergodic theorem \cite{Neu} and also the Birkhoff
pointwise ergodic theorem \cite{Birk}, we can calculate the explicit form of the
function $\varrho(\bar{\omega})$:
\begin{equation}
\varrho(\bar{\omega})=\langle\zeta(t)\rangle=\int_{-\infty}^{+\infty}
\mathcal{P}(\bar{\zeta};\bar{\omega})\bar{\zeta} d\bar{\zeta}=
\sqrt{\pi}\,\breve{\mathcal{J}}(\bar{\omega})\int_0^{\infty}\sqrt{x}
\exp\Bigl[-\frac{x^3}{6}-2\,\bar{\omega}^{2}x\Bigr]dx,
\label{2.32b}
\end{equation}
where $\breve{\mathcal{J}}(\bar{\omega})=\mathcal{J}(\bar{\omega})/\varepsilon^{1/3}_0.$\\
Note that the function $\varrho(\bar{\omega})$ has the dimension of frequency.
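As an illustration, the quadratures defining $\mathcal{J}(\bar{\omega})$ and
$\varrho(\bar{\omega})$ are easy to evaluate numerically. The following sketch
(Python/SciPy), written in units where the fluctuation power $\varepsilon_0=1$
so that all quantities become dimensionless, tabulates both functions for a few
frequencies; it is illustrative only and not part of the proof:
\begin{verbatim}
# Sketch: numerical evaluation of J(w) and rho(w) in units eps_0 = 1.
import numpy as np
from scipy.integrate import quad

def J(w):
    # normalization coefficient J(w); the substitution x = u^2 removes
    # the integrable 1/sqrt(x) singularity at the origin
    f0 = lambda u: 2.0*np.exp(-u**6/6.0 - 2.0*w**2*u**2)
    f1 = lambda x: np.exp(-x**3/6.0 - 2.0*w**2*x)/np.sqrt(x)
    I = quad(f0, 0.0, 1.0)[0] + quad(f1, 1.0, np.inf)[0]
    return 1.0/(np.sqrt(np.pi)*2.0**(1.0/3.0)*I)

def rho(w):
    # the averaged value <zeta>, eq. (2.32b)
    g = lambda x: np.sqrt(x)*np.exp(-x**3/6.0 - 2.0*w**2*x)
    K = quad(g, 0.0, np.inf)[0]
    return np.sqrt(np.pi)*J(w)*K

for w in (0.05, 0.10, 0.20, 0.34):
    print(f"w = {w:4.2f}   J(w) = {J(w):.4f}   rho(w) = {rho(w):.4f}")
\end{verbatim}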
Following the standard procedures (see \cite{Ashg1} for details), we can construct a
measure of the functional space $R^\sigma_{\{f\}}$ and, accordingly, calculate
the functional integral entering the expression (\ref{2.07c}):
\begin{equation}
\label{2.w3a}
I_1(t)=\biggl\langle \exp\biggl\{\int^t_{-\infty}
\zeta_\sigma(t')dt'\biggr\}\biggr\rangle_{R^\sigma_{\{f\}}}=
\int_{-\infty}^{+\infty}\mathcal{P}^1(\zeta_\sigma,t)d\zeta_\sigma,
\end{equation}
where $\mathcal{P}^1(\zeta_\sigma,t)$ is a function satisfying the following
second-order partial differential equation:
\begin{equation}
\frac{\partial\mathcal{P}^1}{\partial t}=\biggl\{3\zeta+
\bigl(\zeta^2+\omega^2\bigl)\frac{\partial}{\partial \zeta}
+\frac{\varepsilon_0}{2}\frac{\partial^2}{\partial\zeta^2}\biggr\}\mathcal{P}^1.
\label{2.3wv}
\end{equation}
Now it is important to show that the integral (\ref{2.w3a}) converges.
As proven in Appendix A, in the limit of statistical equilibrium
$\lim_{t\to\infty}I_1(t)\leq M =\mathrm{const}$, which is the same as saying that
the integral (\ref{2.w3a}) converges. The latter means that the function
$\mathcal{P}^1(\zeta_\sigma,t)$ can be given the meaning of a probability
density and normalized to unity.
Thus, we have proved that on the scale of the relaxation time $\tau_0$ the
system goes to a statistical equilibrium state described by the stationary wave
function (\ref{2.07c}). Obviously, in this case the parameter $\varrho(\bar{\omega})$
is a regular function of the frequency.
\subsection{T\lowercase{he wave function of a massless particle with spin $1$}}
Since the equations in the system (\ref{2.0k2b}) are independent, we can investigate
them separately. For definiteness, consider the first equation of the system
(\ref{2.0k2b}), which describes the $x$ component of QVF.
Representing the wave function in the form:
\begin{equation}
\phi_x^+(\textbf{r})=\phi_x^{+(r)}(\textbf{r})+i\phi_x^{+(i)}(\textbf{r}),
\label{3.020}
\end{equation}
from the first equation of the system (\ref{2.0k2b}), we can get the following two equations:
\begin{eqnarray}
\biggl\{\triangle-\Bigl[\Bigl(\frac{\omega}{c}\Bigr)^2+\frac{\lambda}{r}\Bigr]\biggr\}
\phi_x^{+(r)}(\textbf{r})-\lambda\frac{z-y}{r^2}\phi_x^{+(i)}(\textbf{r})
=0,\,\,\,\,\,
\nonumber\\
\biggl\{\triangle-\Bigl[\Bigl(\frac{\omega}{c}\Bigr)^2+\frac{\lambda}{r}\Bigr]\biggr\}
\phi_x^{+(i)}(\textbf{r})+\lambda\frac{z-y}{r^2}\phi_x^{+(r)}(\textbf{r})=0,\,\,\,\,\,
\label{3.02a}
\end{eqnarray}
where the parameter:
\begin{equation}
\lambda =-\varrho(\bar{\omega})/c<0,
\label{2.12}
\end{equation}
has the dimension of the inverse distance.
It is easy to show that the equations (\ref{3.02a}) are invariant with
respect to permutations:
$$\phi_x^{+(r)}(\textbf{r})\mapsto \phi_x^{+(i)}(\textbf{r}),\qquad
\phi_x^{+(i)}(\textbf{r})
\mapsto -\phi_x^{+(r)}(\textbf{r}).$$
From this it follows that the solutions $\phi_x^{+(r)}(\textbf{r})$ and
$\phi_x^{+(i)}(\textbf{r})$ are globally equivalent and differ only by sign.
In other words, the symmetry properties mentioned above make it possible
to obtain two independent equations of the form:
\begin{eqnarray}
\biggl\{\triangle+\Bigl[-\Bigl(\frac{\omega}{c}\Bigr)^2+|\lambda|\frac{r-(y-z)}{r^2}
\Bigr]\biggr\}\phi_x^{+(r)}(\textbf{r})=0,
\nonumber\\
\biggl\{ \triangle+\Bigl[-\Bigl(\frac{\omega}{c}\Bigr)^2+|\lambda|\frac{r+(y-z)}{r^2}
\Bigr]\biggr\}\phi_x^{+(i)}(\textbf{r})=0.
\label{3.02b}
\end{eqnarray}
Now let us analyze the possibility of obtaining a discrete set of
solutions for wave functions, which can describe a localized state.
For definiteness, we consider the solution of the equation for the
wave function $\phi_x^{+(r)}(\textbf{r})$ on the plane:
\begin{equation}
\label{3.02ab}
r-y+z=\mu r,
\end{equation}
where $\mu$ is a parameter; its range of variation will be
defined below.
Given the equation (\ref{3.02ab}), the first equation in (\ref{3.02b}) can be written as:
\begin{equation}
\label{3.03k}
\Bigl\{\triangle+\Bigl[-\Bigl(\frac{\omega}{c}\Bigr)^2+\frac{|\lambda|\mu}{r}
\Bigr]\Bigr\}\phi_x^{+(r)}(\textbf{r})=0.
\end{equation}
It is convenient to carry out further investigation of the problem in spherical
coordinates. Rewriting (\ref{3.03k}) in the spherical coordinate
system $(x,y,z)\mapsto(r,\theta,\varphi)$, we obtain:
\begin{equation}
\biggl\{\frac{1}{r^{2}}\Bigl[\frac{\partial}{\partial r}\Bigl(r^2\frac{\partial}{\partial r}\Bigr)
+\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}+\frac{1}{ \sin\theta}
\frac{\partial}{\partial\theta}\Bigl(\sin\theta
\frac{\partial}{\partial\theta}\Bigr)\Bigr]
+\Bigl[-\Bigl(\frac{\omega}{c}\Bigr)^2+ \frac{|\lambda| \mu(\theta,\varphi)}{r}\Bigr]
\biggr\}\phi_x^{+(r)}=0.
\label{3.03al}
\end{equation}
Representing the wave function in the form:
\begin{equation}
\phi_x^{+(r)}(\textbf{r})=\Lambda(r)Y(\theta,\varphi),
\label{3.03b}
\end{equation}
we can conditionally divide the variables in the equation (\ref{3.03al}) by writing
it in the form:
\begin{eqnarray}
r^2\Lambda ^{''}+2r\Lambda ^{'}+\bigl[-(\omega/c)^2 r^2+|\lambda|\mu(\theta,\varphi)r
-\nu\bigr]\Lambda =0,\quad
\label{3.02bt}
\end{eqnarray}
\vspace {-2mm}
and, respectively;
\begin{eqnarray}
\frac{1}{\sin\theta}\Bigl\{
\frac{1}{\sin\theta}\frac{\partial^2}{\partial\varphi^2}+\frac{\partial}{\partial\theta}
\Bigl(\sin\theta\frac{\partial}{\partial\theta}\Bigr)\Bigr\}Y+\nu Y=0,
\label{3.02bt'}
\end{eqnarray}
where $\Lambda^{'}=d\Lambda/dr$ and $\nu$ is a constant, which can be represented in
the form $\nu=l(l+1)$, where $l=0,1,2,\ldots$
Note that the conditional separation of variables implies imposing the
additional condition $\mu(\theta,\varphi)=\mathrm{const}$ on the function $\mu$.
Writing equation (\ref{3.02ab}) in spherical coordinates, we obtain the
following trigonometric equation:
\begin{equation}
\label{3.02abc}
\mu(\theta,\varphi)=1-\sin\theta\sin\varphi +\cos\theta.
\end{equation}
Analysis of the equation (\ref{3.02abc}) shows that $\mu\in[(1-\sqrt{2}), (1+\sqrt{2})]$.
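This range is easy to confirm numerically; a short sketch (Python/NumPy, a grid
scan used purely as an illustration) is:
\begin{verbatim}
# Sketch: range of mu(theta, phi) = 1 - sin(theta) sin(phi) + cos(theta)
import numpy as np

theta = np.linspace(0.0, np.pi, 1001)
phi = np.linspace(0.0, 2.0*np.pi, 1001)
T, P = np.meshgrid(theta, phi, sparse=True)
mu = 1.0 - np.sin(T)*np.sin(P) + np.cos(T)

print(mu.min(), mu.max())               # approx 1 - sqrt(2), 1 + sqrt(2)
print(1.0 - np.sqrt(2.0), 1.0 + np.sqrt(2.0))
\end{verbatim}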
The solution of the equation (\ref{3.02bt'}) is well known, these are spherical Laplace
functions $Y_{l,m}(\theta,\varphi)$, where $m=0,\pm1,...,\pm l$.
As for the equation (\ref{3.02bt}), we will solve it for a fixed value
of $\mu$, which is equivalent to a plane cut of the three-dimensional solution.
In particular, we will seek a solution $\Lambda(r)$ tending to a finite value
as $r\to 0$ and to zero as $r\to\infty$.
For a given parameter $\mu_0>0$, we can write the equation (\ref{3.02bt})
in the form:
\begin{eqnarray}
\frac{d^2\Lambda}{d\rho^2}+\frac{2}{\rho}\frac{d\Lambda}{d\rho}+\Bigl[-\beta^2 +\frac{2}{\rho}-
\frac{l(l+1)}{\rho^2}\Bigr]\Lambda=0,
\label{3.02btz}
\end{eqnarray}
where $\rho=r/a_p$ and $a_p=2/(|\lambda|\mu_0)$ denotes the characteristic
spatial dimension of a hypothetical massless Bose particle with spin projection $+1$;
in addition, the parameter $\beta=\omega a_p/c$ will further play a key role
in finding a discrete set of solutions.
It is important to note that from the symmetry and non-coincidence of the components
$\phi^{+(r)}_x$ and $\phi^{+(i)}_x$ it follows that $\mu_0 = 2$. This fact will be taken
into account in further calculations.
As is well known, the solution of the equation (\ref{3.02btz}) describes the radial
wave function of a hydrogen-like system, which is written in the form \cite{Land}:
\begin{eqnarray}
\label{3.03tz}
\Lambda_{nl}(r)=\frac{(b)^{3/2}
(br)^le^{-br/2}}{\sqrt{2n(n-l-1)!(n+l)!}}L_{n-l-1}^{2l+1}(br),\quad
\end{eqnarray}
where $b=2/(na_p)$. In addition, the functions $L_{n}^{k}(x)$ are associated
Laguerre polynomials, orthogonal on $[0,\infty)$ with respect to the weight function
$x^ke^{-x}$, which satisfy the normalization condition:
$$
\int^\infty_0 x^ke^{-x}L_{n}^{k}(x)L_{m}^{k}(x)dx=\frac{(n+k)!}{n!}\delta_{mn},
$$
where $\delta_{mn}$ is the Kronecker delta (for more detail see \cite{Abram}).
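The orthogonality relation quoted above can be verified symbolically; a small
sketch (Python/SymPy, checking a few low orders for an assumed value $k=1$) is:
\begin{verbatim}
# Sketch: orthogonality of the associated Laguerre polynomials used in (3.03tz)
import sympy as sp

x = sp.symbols('x', positive=True)
k = 1                               # assumed order, for illustration
for n in range(3):
    for m in range(3):
        I = sp.integrate(x**k * sp.exp(-x)
                         * sp.assoc_laguerre(n, k, x)
                         * sp.assoc_laguerre(m, k, x), (x, 0, sp.oo))
        expected = sp.factorial(n + k)/sp.factorial(n) if n == m else 0
        assert sp.simplify(I - expected) == 0
\end{verbatim}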
Note that the solution (\ref{3.03tz}) holds only if the following condition is satisfied:
\begin{equation}
\label{3.03}
n_r+l+1=n+l=\beta^{-1},\qquad n_r=0,1,2...,
\end{equation}
where $n_r$ is the radial quantum number, $n$ is the principal quantum number and
$l$ denotes the angular momentum quantum number, which is bounded by $l\leq n-1$.
In other words, the quantization condition is that $\beta^{-1}$ takes integer
values, which implies the following conditions:
\begin{eqnarray}
\label{3.04ak}
\bigl[\beta^{-1}\bigr]=\Bigl[\frac{\breve{\varrho}(\bar{\omega}) }{\bar{\omega}}\Bigr]=n,\qquad
\bigl\{\beta^{-1}\bigr\}=\Bigl\{ \frac{\breve{\varrho}(\bar{\omega})}{\bar{\omega}}\Bigr\}=0,
\end{eqnarray}
where $\breve{\varrho}(\bar{\omega})=\varrho(\bar{\omega})/\varepsilon^{1/3}_0$ is a
dimensionless function, and the brackets $[...]$ and $\{...\}$ denote the integer and
fractional parts of the function, respectively.
As follows from the calculations (see Fig. 1),
in the frequency range $\bar{\omega}\in[0.05,0.34]$ there are 8 points, highlighted
in red, that satisfy the quantization conditions (\ref{3.04ak}). The latter means that in
the specified frequency range there are only eight quantum states; however, the number of these
states grows as $\bar{\omega}\to 0.$
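The quantization conditions (\ref{3.04ak}) can be explored numerically in the
spirit of Fig.~1. The following sketch (Python/SciPy, again in units
$\varepsilon_0=1$ so that $\breve{\varrho}=\varrho$; the grid, the tolerance and
the printed values are purely illustrative) scans
$\beta^{-1}(\bar{\omega})=\breve{\varrho}(\bar{\omega})/\bar{\omega}$ and reports
frequencies at which its fractional part nearly vanishes:
\begin{verbatim}
# Sketch: scan of beta^{-1}(w) = rho(w)/w in units eps_0 = 1
import numpy as np
from scipy.integrate import quad

def beta_inv(w):
    # rho(w) = K/(2^{1/3} I); the sqrt(pi) factors cancel
    f0 = lambda u: 2.0*np.exp(-u**6/6.0 - 2.0*w**2*u**2)     # x = u^2
    f1 = lambda x: np.exp(-x**3/6.0 - 2.0*w**2*x)/np.sqrt(x)
    g  = lambda x: np.sqrt(x)*np.exp(-x**3/6.0 - 2.0*w**2*x)
    I = quad(f0, 0.0, 1.0)[0] + quad(f1, 1.0, np.inf)[0]
    K = quad(g, 0.0, np.inf)[0]
    return K/(2.0**(1.0/3.0)*I)/w

ws = np.linspace(0.05, 0.34, 600)
for w in ws:
    b = beta_inv(w)
    if abs(b - round(b)) < 5e-3:          # nearly integer beta^{-1}
        print(f"w = {w:.4f}   beta^(-1) = {b:.3f}")
\end{verbatim}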
\textbf{Table 1}. \emph{The statistically averaged dimensionless frequency of the system
in different quantum states (see condition (\ref{3.03})).}
$$
\begin{array}{ccccccccc}
\beta^{-1}=n+l= & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\,\ldots \\
10^{-2}\times\bar{\omega}= & 34 & 20 & 14 & 10 & 9 & 7 & 6 & 5\,\ldots
\end{array}
$$
\begin{figure}
\includegraphics[width=110mm]{fig1.eps}
\caption{\emph{The dependence of a quantity $\beta^{-1}(\bar{\omega})$ on the
dimensionless frequency. As calculations show, in the frequency range under
consideration (see table) there are only eight values of $\beta^{-1}$ (red points),
for which the quantization conditions (\ref{3.03})-(\ref{3.04ak}) are satisfied.
The blue dots denote such states for which the quantization conditions are not satisfied.}}
\label{Fig.1}
\end{figure}
Now we consider the problem of localization of the solution $\phi^{+(r)}_x$. Taking into
account the fact that $\mu_0=2$, the equation (\ref{3.02abc}) can be written in the form:
\begin{equation}
\label{3.03by}
1+\sin\theta\sin\varphi -\cos\theta=0.
\end{equation}
In particular, as follows from the equation (\ref{3.03by}), all solutions (\ref{3.03tz})
are localized on the manifold $S^r_x(\theta,\varphi)\cong(-Y,Z)$, i.e. on one-quarter of the
plane $\{Y,Z\}$, where the bracket $\{.\,,.\}$ denotes the entire plane (see Fig. 2).
The imaginary part of the wave function $\phi^{+(i)}_x$ is calculated similarly
and has the same form, but in this case the solution must satisfy the following
trigonometric equation:
\begin{equation}
\label{3.03ba}
1-\sin\theta\sin\varphi+\cos\theta=0.
\end{equation}
Obviously, the equation (\ref{3.03ba}) defines another quarter of the plane
$S^i_x(\theta,\varphi)\cong(Y,-Z)\subset\{Y,Z\}$, on which the solution
$\phi^{+(i)}_x$ is localized. As for the wave function
$\phi^{+}_x\bigl(\phi^{+(i)}_x,\phi^{+(r)}_x\bigr)$,
then it is localized on the manifold $S^r_x\cup S^i_x\subset\{Y,Z\}$.
A similar investigation for the projections of the wave function
$\bigl\{\phi^{+(r)}_y,\phi^{+(i)}_y\bigr\}$ and $\bigl\{\phi^{+(r)}_z,\phi^{+(i)}_z\bigr\}$
shows that the separation of variables in corresponding equations is possible taking into
account the following algebraic equations:
\begin{eqnarray}
\label{3.07k}
\bigl\{\phi^{+(r)}_y,\phi^{+(i)}_y\bigr\}:\quad 1\pm\cos\theta\sin\varphi\mp\cos\theta=0,\qquad
\nonumber\\
\bigl\{\phi^{+(r)}_z,\phi^{+(i)}_z\bigr\}:\quad 1\pm\sin\theta\sin\varphi\mp\cos\theta\sin\varphi=0.
\end{eqnarray}
Analysis of the equations (\ref{3.07k}) shows that the projections of the wave
function $\bm\phi^{+}(\textbf{r}) $ are localized on the following manifolds;
$\bigl\{\phi^{+(r)}_y[S^{r}_y\cong(-X,Z)],
\,\phi^{+(i)}_y[S^{i}_y\cong(X,-Z)\bigr]\bigr\}$ and $\bigl\{\phi^{+(r)}_z[S^{r}_z\cong(-X,Y)],\,
\phi^{+(i)}_z[S^{i}_z\cong(X,-Y)]\bigr\}$.
\begin{figure}
\includegraphics[width=90mm]{fig2.eps}
\caption{\label{fig:epsart} \emph{The coordinate system $\{X,Y,Z\}$ divides
the three-dimensional space into eight spatial regions using three planes. The
boson of a vector field (hion) with projection of spin +1 is a two-dimensional
structure consisting of six components localized on the following manifolds
$\phi^+_x[(-Y,Z)\cup(Y,-Z)],$ \,$\phi^+_y[(-X,Z)\cup(X,-Z)]$ and
$\phi^+_z[(-X,Y)\cup(X,-Y)]$, respectively.}}
\end{figure}
\\
\emph{\textbf{The theorem is proved.}}
\\
Thus, we have proved the possibility of the formation of a stable massless Bose
particle with a spin of 1 as a result of random fluctuations of the QVF.
As can be seen, the obtained solutions (\ref{3.03tz}) combine the properties of quantum
mechanics and the theory of relativity and, accordingly, closely reflect the ideas
of \emph{string theory}. It is interesting to note that the \emph{ground state} of
the vector boson is characterized by the highest frequency. In what follows we will call
the particle of the vector field a \emph{hion}.
\section{Q\lowercase{uantum distribution in different \emph{hion} states}}
Let us consider the solution of the equation (\ref{3.02btz}) in the \emph{ground state}.
Taking into account (\ref{3.03b}) and (\ref{3.03tz}), we can get the following
solution:
$$\phi^{+(r)}_x(\textbf{r})=
\phi^{+(r)}_{x(1,0,0)} (\textbf{r})=\Lambda_{10}(r)Y_{0,0}(\theta,\varphi)
$$
where
\begin{equation}
\label{3.03a}
\Lambda_{10}(r)=Ca_p^{-3/2}e^{-r/a_p},\qquad
Y_{0,0}(\theta,\varphi)=\frac{1}{2\sqrt{\pi}},
\end{equation}
in addition, the indices $(1,0,0)$ of the wave function denote the quantum numbers $(n,l,m)$.
The constant $C$ is determined below from the normalization
condition of the wave function, while $a_p$ is the characteristic spatial
dimension of the vector boson in the \emph{ground state}, which can be calculated
taking into account the equations (\ref{2.32b}) and (\ref{2.12}):
\begin{equation}
a_p= |\lambda|^{-1}=c/\varrho(\bar{\omega})= 2^{1/3} c\,\omega^{-1}.
\label{3.05at}
\end{equation}
Recall that in (\ref{3.05at}) the frequency $\omega=\varepsilon^{1/3}_0\bar{\omega}$,
where the dimensionless frequency of the \emph{ground state} equals $\bar{\omega}=0.34$
(see Table 1 and Fig. 1). In the framework of the developed approach it is impossible
to determine the constant $a_p$, since the speed $c$ and the fluctuation power
$\varepsilon_0$ remain free parameters of the theory. Apparently, these
parameters will have to be determined experimentally and introduced
into the theory as fundamental constants.
As for the wave function $\phi^{+(i)}_{x(1,0,0)}(\textbf{r})$, it is also described by the
expressions (\ref{3.03a}), but with the only difference that in this case the wave function
is localized on the manifold $S^i_x$. In a similar way one can obtain solutions for the
wave functions $\phi^+_{y(1,0,0)}$ and $\phi^+_{z(1,0,0)} $ localized on the corresponding
manifold.
Now we can write down the normalization condition for the full wave function:
\begin{equation}
\label{3.04b}
\int\bm\phi^+\bigl(\bar{\bm\phi}^+\bigr)^TdV=1, \qquad dV=dxdydz,
\end{equation}
where $\bigl(\bar{\bm\phi}^+\bigr)^T=
\bigl(\bar{\bm\phi}^+_x,\bar{\bm\phi}^+_y,\bar{\bm\phi}^+_z\bigr)$.
Considering that the projections of the full wave function
$\bm\phi^{+}(\textbf{r})$ are localized on different non-intersecting manifolds
and the definition (\ref{3.04b}), we can write:
$$
\int\phi^+_x\bar{\phi}^+_xdV=\int\phi^+_y\bar{\phi}^+_ydV=\int \phi^+_z\bar{\phi}^+_zdV= {1}/{3}.
$$
Below, as an example, we will calculate the first of these integrals, considering the case of the
\emph{ground state}. Taking into account that the wave function $\phi^{+}_x$ can be
represented in the form $\phi^{+}_x=\phi^{+(r)}_x+i\phi^{+(i)}_x$, we can write:
\begin{eqnarray}
\label{3.05}
\int\phi^+_x\bar{\phi}^+_xdV=\int\bigl(|\phi^{+(r)}_x|^2+|\phi^{+(i)}_x|^2\bigr)dV
=2\int|\phi^{+(r)}_x|^2dV=
\nonumber\\
2\int|\phi^{+(i)}_x|^2dV=\frac{a_p}{2\pi}\int_{(-Y,Z)}|
\Lambda_{10}(\rho)|^2 dydz=\frac{1}{3},
\end{eqnarray}
where $\rho=r(0,y,z)= \sqrt{y^2+z^2}$ denotes the radius-vector on the plane $S^r_x=(-Y,Z)$;
in addition, in calculating the integral we assume that the wave function in the
direction $x$, perpendicular to the plane $S^r_x$, is the Dirac delta function.
Taking into account (\ref{3.03a}), we can calculate the integral in the expression (\ref{3.05}):
\begin{equation}
\label{3.05a}
\int_{(-Y, Z)}|\Lambda_{10}(\rho)|^2dydz=\frac{\pi}{8a_p}.
\end{equation}
Considering (\ref{3.05}) and (\ref{3.05a}), we can determine the normalization constant
of the wave function (\ref{3.03a}), which is equal to $C=4/\sqrt{3}$. Note that in
a similar way one can obtain the \emph{hion} wave function with the spin projection
-1 (see Appendix B).
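The normalization chain (\ref{3.05})--(\ref{3.05a}) can be reproduced
symbolically. A minimal sketch (Python/SymPy), keeping the constant $C$ explicit
in the quarter-plane integral, is:
\begin{verbatim}
# Sketch: normalization constant C of Lambda_10 = C a^{-3/2} exp(-r/a)
import sympy as sp

rho, a, C = sp.symbols('rho a C', positive=True)

# |Lambda_10|^2 integrated over one quarter of the (y,z) plane (polar coords)
I_quarter = sp.pi/2 * sp.integrate(C**2/a**3 * sp.exp(-2*rho/a) * rho,
                                   (rho, 0, sp.oo))
print(sp.simplify(I_quarter))                 # pi*C**2/(8*a)

# condition (3.05): (a_p/2 pi) * I_quarter = 1/3
sol = sp.solve(sp.Eq(a/(2*sp.pi)*I_quarter, sp.Rational(1, 3)), C)
print(sol)                                    # [4*sqrt(3)/3], i.e. C = 4/sqrt(3)
\end{verbatim}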
Now we can calculate the probability distribution of the \emph{hion's} $x$-projection
in the \emph{ground state}. Using (\ref{3.03a}), we obtain the following
expression for the probability distribution on the surface element $dS$:
\begin{equation}
W(\rho)dS=\frac{1}{4\pi}{C'}^2e^{-2\rho}{\rho}d{\rho}d\vartheta,
\label{3.06t}
\end{equation}
where $C'=C/a_p^{3/2}$ and $dS =\rho d\rho d\vartheta$.
Recall that the angle $\vartheta$ coincides with the angle $\theta$ on the fixed plane.
\begin{figure}
\includegraphics[width=100mm]{fig3.eps}
\caption{\emph{The probability distribution of the hion in the
\emph{ground state} depending on the radius. The distance $\rho_0=1/2$, or more
precisely $r_0=a_p/2$, is the distance at which the maximum of the
hion probability distribution is reached.}}
\end{figure}
Integrating the expression (\ref{3.06t}) by the angle $\vartheta\in[0,\pi/2]$, we
obtain the probability distribution of the \emph{ground state} depending on radius:
\begin{equation}
W_{1,0,0}(\rho)=\frac{2}{3}\rho e^{-2\rho}.
\label{3.a06t}
\end{equation}
Finally, analyzing the expression (\ref{3.a06t}), we find that at the value $\rho_0=1/2$
and, respectively, at $r(0,y,z)=a_p/2$, the probability distribution has a
maximum (see Fig. 3).
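This maximum, as well as the weight $1/6$ carried by the real component (one half
of the $1/3$ in (\ref{3.05})), follows directly from (\ref{3.a06t}); a short
symbolic check (Python/SymPy, for illustration only) is:
\begin{verbatim}
# Sketch: W(rho) = (2/3) rho exp(-2 rho) peaks at rho_0 = 1/2
import sympy as sp

rho = sp.symbols('rho', positive=True)
W = sp.Rational(2, 3) * rho * sp.exp(-2*rho)   # eq. (3.a06t)

print(sp.solve(sp.diff(W, rho), rho))          # [1/2] : maximum at rho_0 = 1/2
print(sp.integrate(W, (rho, 0, sp.oo)))        # 1/6  : weight of the real component
\end{verbatim}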
Now we consider the first three excited quantum states, which are characterized by the principal
quantum number $n=2$. Using the solution (\ref{3.03tz}), we can write the explicit
form of these wave functions:
\begin{eqnarray}
\phi^{+(r)}_{x(2,0,0)}\,\,=\,\frac{1}{3\sqrt{2}}\frac{2-\rho}{a_p^{3/2}}e^{-\rho/2}Y_{0,0},
\nonumber\\
\phi^{+(r)}_{x(2,1,0)}\,\,=\,\frac{2}{3\sqrt{3}}\frac{\rho}{a_p^{3/2}}\,\,e^{-\rho/2}Y_{1,0},
\nonumber\\
\phi^{+(r)}_{x(2,1,\pm1)}=\frac{1}{3\sqrt{3}}\frac{\rho}{a_p^{3/2}}e^{-\rho/2}Y_{1,\pm1},
\label{3.07}
\end{eqnarray}
where
$$
Y_{0,0}=\frac{1}{2\sqrt{\pi}},\qquad Y_{1,0}=\sqrt{\frac{3}{4\pi}}\cos\vartheta,
\qquad
Y_{1,\pm1}=\mp e^{\pm i\varphi}\sqrt{\frac{3}{8\pi}}\sin\vartheta.
$$
\begin{figure}
\center\includegraphics[width=100mm]{fig4.eps}
\caption{\emph{The probability distributions of the first four excited states of
the \emph{hion} depending on the radius. The orange curve in the graph shows the
coinciding probability distributions of the three excited states
$W_{2,1,0}=W_{2,1,\pm1}$.}}
\end{figure}
Taking into account expressions (\ref{3.07}), we can construct a radial probability distribution
for the first four excited states of the \emph{hion}:
$$
W_{2,0,0}=\frac{1}{6^2}(2-\rho)^2\rho e^{-\rho}, \qquad
W_{2,1,0}=W_{2,1,\pm1}=\frac{1}{6^2}\rho^3e^{-\rho}.
$$
Recall that in deriving the expressions for the probability distributions $W_{2,1,0}(\rho)$,
$W_{2,1,+1}(\rho)$ and $W_{2,1,-1}(\rho)$, averaging over the angle $\vartheta$ is performed.
Note that the probability distributions (see Fig. 4), as well as the energies, of the
three considered states coincide. In particular, the quantum state described by the wave function
$\phi^{+(r)}_{x(2,0,0)}$ has the energy $\mathcal{E}_{2,0,0}=-0.2\hbar \varepsilon^{1/3}_0$,
whereas the three different quantum states $\phi^{+(r)}_{x(2,1,0)}$, $\phi^{+(r)}_{x(2,1,+1)}$ and
$\phi^{+(r)}_{x(2,1,-1)}$ are characterized by the same energy $\mathcal{E}_{2,1,0}=
\mathcal{E}_{2,1,\pm1}=-0.14\hbar \varepsilon^{1/3}_0$.
\section{T\lowercase{he state of the \emph{hion} on the next scale of relaxation }}
So far we have investigated the possibility of the formation of the \emph{hion}
as a result of continuous fluctuations of QVF during the first phase of relaxation,
whereas there can be several such phases. In this connection a natural question
arises: how does the state of the \emph{hion} change if we take into account random
fluctuations of QVF of the next order, i.e., consider the evolution of the particle on
the next evolutionary scale $\{\tau_1,{\bm\varepsilon}_1\}$?
Let us consider the evolution of \emph{hion} with the spin projection +1 taking
into account the influence of the random environment in the framework of SDE of the type:
\begin{equation}
\partial_{t} {\breve{\bm{\psi}}^{+}}(\textbf{r},t)-c\bigl(\textbf{S}\cdot \bm\nabla\bigr)
\bm{\breve{{\psi}}^{+}}(\textbf{r},t)=\bm\eta^+(s),
\label{4.s0}
\end{equation}
and also the equation:
\begin{equation}
\nabla\bm{\breve{{\psi}}^{+}}(\textbf{r},t)=0,\qquad t\in(-\infty,+\infty),
\label{4.s1}
\end{equation}
where $\bm\eta^+(s)=(\eta_x^+,\eta_y^+,\eta_z^+)$ denotes the generator of random forces,
and $ds^2=c^2dt^2-dx^2-dy^2-dz^2$ is the 4$D$-interval within which these random influences act.
The equation (\ref{4.s0}) can be represented in matrix form:
\begin{equation}
\left[
\begin{array}{ccc}
ict & -z & y\\
z& ict & -x \\
-y& x & ict \\
\end{array}
\right]\cdot
\left[
\begin{array}{ccc}
\dot{\breve{{\psi}}}_x^+\\
\dot{\breve{{\psi}}}_y^+ \\
\dot{\breve{{\psi}}}_z^+\\
\end{array}
\right]=s\left[
\begin{array}{ccc}
\eta_x^+ \\
\eta_y^+ \\
\eta_z^+\\
\end{array}
\right],\quad
\label{4.s2}
\end{equation}
and the equation (\ref{4.s1}), respectively, in the form:
\begin{equation}
x \dot{\breve{{\psi}}}^{+}_x+ y \dot{\breve{{\psi}}}^{+}_y+ z\dot{\breve{{\psi}}}^{+}_z=0,
\qquad \dot{\breve{{\psi}}}^{+}_\sigma=\partial{\breve{{\psi}}}^{+}_\sigma/\partial{s}.
\label{4.s03}
\end{equation}
For further constructions, the system of equations (\ref{4.s2})-(\ref{4.s03}) must
be reduced to the canonical form:
\begin{equation}
\dot{\breve{{\psi}}}^{+}_\sigma(\bar{s};\textbf{r},t)=\bigl\{b^+_\sigma(\textbf{r},t)+
d^+_\sigma(\textbf{r},t)\bigr\}\bar{s}^{-1}\eta(\bar{s}),
\label{4.s04}
\end{equation}
where $\bar{s}=s/a_p$ and $\eta(\bar{s})=\bar{s}^{-1}\eta_x=\bar{s}^{-1}\eta_y=\bar{s}^{-1}\eta_z$,
in addition:
\begin{eqnarray}
b^+_x=\frac{z-y}{a_p},\qquad d_x^+=\frac{c^2t^2-x^2-xy-xz}{a_pct},
\nonumber\\
b^+_y=\frac{x-z}{a_p},\qquad d^+_y=\frac{c^2t^2-y^2-xy-yz}{a_pct},
\nonumber\\
b^+_z=\frac{y-x}{a_p},\qquad d^+_z=\frac{c^2t^2-z^2-xz-yz}{a_pct}.
\label{4.s05}
\end{eqnarray}
For simplicity, we will use a random generator that satisfies the conditions of \emph{white noise}:
\begin{equation}
\langle\eta(\bar{s})\rangle=0,\qquad \langle\eta(\bar{s})\eta(\bar{s}')\rangle=
2\varepsilon_1\delta(\bar{s}-\bar{s}'),
\label{4.s06}
\end{equation}
where $\varepsilon_1=\varepsilon^r_1+i\varepsilon^i_1$ and $\varepsilon^r_1=\varepsilon^i_1,$
in addition, it is assumed that the bracket $\langle...\rangle$ means averaging over the
relaxation time $\tau_1$.
The joint probability distribution of QVF can be represented in the form
(see \cite{AshG}):
\begin{eqnarray}
\mathcal{P}(\{\breve{\bm\psi}^+\},\bar{s};\textbf{r},t)=\prod_{\sigma}
\bigl\langle\delta\bigl(\breve{\bm\psi}_\sigma^+(\bar{s};\textbf{r},t)-
{\bm\breve{\phi}}_{\sigma}^+\bigr)\bigr\rangle,
\label{4.s07}
\end{eqnarray}
where the set of wave functions $\{\breve{\bm\psi}^+\}=({\breve{\psi}}^+_x,
{\breve{\psi}}^+_y,{\breve{\psi}}^+_z)$ denotes vacuum fields, $\{{\bm\breve{\phi}}^+\}=
({\breve{\phi}}^+_x,{\breve{\phi}}^+_y,{\breve{\phi}}^+_z)$ and
${\breve{\phi}}_{\sigma}^+(\textbf{r})={\breve{\psi}}_{\sigma}^+(\bar{s};\textbf{r},t)|_{s\to0}$.
In addition, in (\ref{4.s07}) the function $\delta(\breve{\bm\psi}_\sigma^+(\bar{s};\textbf{r},t)-
\breve{\bm{\phi}}_{\sigma}^+)$ denotes the Dirac delta function in the three-dimensional
Hilbert space; by default we will assume that the wave function is dimensionless,
i.e. it is multiplied by the constant value
$a_p^{3/2}$ (see (\ref{3.03a})).
Now using the system of SDE (\ref{4.s04}) and (\ref{4.s05})-(\ref{4.s06}), for the
conditional probability (\ref{4.s07}) the following second order partial differential
equation can be obtained \cite{Ashg1}:
\begin{eqnarray}
\biggl\{\frac{\partial }{\partial\bar{s}}-\frac{1}{2} \sum_{\sigma}
\varepsilon_{1\sigma}^+(\textbf{r},t) \frac{\partial^2 }{\partial{\breve{\psi}}_\sigma^+
\partial\bar{\breve{\psi}}_\sigma^+}\biggr\}\mathcal{P}=0,
\qquad
\frac{\partial^2 }{\partial{\breve{\psi}}_\sigma^+\partial\bar{\breve{\psi}}_\sigma^+}
=\frac{\partial^2 }{\partial\bigl[{\breve{\psi}}_\sigma^{+(r)}\bigr]^2}
+\frac{\partial^2 }{\partial\bigl[{\breve{\psi}}_\sigma^{+(i)}\bigr]^2},
\label{4.s08}
\end{eqnarray}
where $\bar{\breve{\psi}}_\sigma^+$ denotes the complex conjugate of the wave function
$\breve{\psi}_\sigma^+$ and $\varepsilon_{1\sigma}^+(\textbf{r},t)=
\varepsilon_1[b_\sigma^+(\textbf{r},t)+
d_\sigma^+(\textbf{r},t)]^{2}$, which is a dimensionless quantity denoting the fluctuation
power. In the equation (\ref{4.s08}) the following
notations are also made: $\breve{\psi}_\sigma^{+(r)}=\mathrm{Re}(\breve{\psi}_\sigma^+)$ and
$\breve{\psi}_\sigma^{+(i)}=\mathrm{Im}(\breve{\psi}_\sigma^+)$.
It is convenient to represent the general solution of the equation (\ref{4.s08}) in
the integral form:
\begin{widetext}
\begin{eqnarray}
\mathcal{P}(\{\breve{\bm\psi}^+\},\bar{s}\,;\textbf{r},t)=\int_{\Xi^3}
R(\{\breve{\bm\phi}^+\})\prod_{\sigma}
\exp\biggl\{-\frac{(\breve{\phi}_{\sigma}^+-{\breve{\psi}}_\sigma^+)(\breve{\bar{\phi}}_{\sigma}^+
-\bar{{\breve{\psi}}}_\sigma^+)}{2\bar{s}\varepsilon_{1\sigma}^+}\biggr\}
\frac{d\breve{\phi}_{\sigma}^+}{\sqrt{2\pi\bar{s}\varepsilon_{1\sigma}^+}},
\label{4.s09}
\end{eqnarray}
\end{widetext}
where $d\breve{\phi}_{\sigma}^+=d\breve{\phi}_{\sigma}^{+(r)}
d\breve{\phi}_{\sigma}^{+(i)}$ and the function $R(\{\breve{\bm\phi}^+\})$ denotes the initial condition of
the equation (\ref{4.s08}) at $s=0$, before switching on the interaction with the
random environment. Since before switching on the interaction the \emph{hion}
(the vector boson) is in a pure quantum state, i.e. in the Hilbert space it is determined
by a fixed vector $\bm\phi^+$, we can put $R(\{\breve{\bm\phi}^+\})=
\mathcal{{P}}^0(\{{\bm\phi}^+\})$, where $\mathcal{{P}}^0(\{{\bm\phi}^+\})$ has
the sense of the \emph{hion} distribution, which is defined as follows:
\begin{equation}
\mathcal{{P}}^0(\{\bm\phi^+\})=||\bm{\phi}^+||^2=\left[
\begin{array}{ccc}
\mathcal{{P}}^0_x(\phi^+_x) \\
\mathcal{{P}}^0_y(\phi^+_y)\\
\mathcal{{P}}^0_z(\phi^+_z)\\
\end{array}
\right],
\label{4.s11}
\end{equation}
where
\begin{equation}
\mathcal{{P}}^0_\sigma(\phi^+_\sigma)=\frac{1}{2}||\phi^+_\sigma||^2=
\frac{1}{2}\sum_{\varpi=r,i}||\phi^{+(\varpi)}_\sigma||^2,\qquad
||\phi^{+(\varpi)}_\sigma||^2=\phi^{+(\varpi)}_\sigma
\bar{\phi}^{+(\varpi)}_\sigma.
\label{4.s12}
\end{equation}
Substituting the expressions (\ref{4.s11})-(\ref{4.s12}) into (\ref{4.s09}) and
integrating over the variables $ \bm{\breve{\phi}}^+$ within
$[{\breve{\psi}}^{+(\varpi)}_\sigma,\phi^{+(\varpi)}_\sigma]$,
we obtain the expression for the deformation of the initial quantum distribution
$\mathcal{{P}}^0(\{{\bm\phi}^+\})$, taking into account the evolution of \emph{hion}
in a random environment:
\begin{equation}
\mathcal{P}(\{\bm\phi^+\},\{\breve{\bm\psi}^+\},\bar{s}\,;\textbf{r},t)
=\left[\begin{array}{ccc}
\mathcal{{P}}_x(\phi^+_x,\breve{\psi}^+_x;\bar{s},t) \\
\mathcal{{P}}_y(\phi^+_y,\breve{\psi}^+_y;\bar{s},t)\\
\mathcal{{P}}_z(\phi^+_z,\breve{\psi}^+_z;\bar{s},t)\\
\end{array}
\right],
\label{4.s13}
\end{equation}
where
\begin{eqnarray}
\mathcal{{P}}_\sigma(\phi^+_\sigma,\breve{\psi}^+_\sigma;\bar{s},t)=\frac{1}{2}
\sum_{\varpi=r,i}||\phi^{+(\varpi)}_\sigma||^2
F_\sigma^{+(\varpi)}.
\label{4.s14}
\end{eqnarray}
Note that the function $F_\sigma^{+(\varpi)}$ characterizes the deformation
of the initial distribution:
$$
F_\sigma^{+(\varpi)}\bigl(\phi^{+(\varpi)}_\sigma,
\breve{\psi}^{+(\varpi)}\bigr)=\frac{1}{2}
\Biggl\{1+\mathrm{erf}\biggl[\frac{\phi^{+(\varpi)}_\sigma-
\breve{\psi}^{+(\varpi)}_\sigma}{\sqrt{2\bar{s}\varepsilon_{1\sigma}^+}}\biggr]\Biggr\}.
$$
Integrating (\ref{4.s09}) taking into account (\ref{4.s13})-(\ref{4.s14}), we obtain
the quantum distribution of the \emph{hion} with allowance for the
random influence of the environment. It is easy to see that before the relaxation
the 4$D$-interval is zero, i.e. $s=0$ and, accordingly, the deformation coefficient
is $F=1$, as expected.
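The limiting behaviour of the deformation factor can be illustrated numerically.
In the sketch below (Python/SciPy) the difference
$\phi^{+(\varpi)}_\sigma-\breve{\psi}^{+(\varpi)}_\sigma=0.3$ and
$\varepsilon^+_{1\sigma}=1$ are assumed purely for illustration:
\begin{verbatim}
# Sketch: deformation factor F = (1/2){1 + erf[(phi - psi)/sqrt(2 s eps)]}
import numpy as np
from scipy.special import erf

def F(dphi, s, eps=1.0):
    return 0.5*(1.0 + erf(dphi/np.sqrt(2.0*s*eps)))

for s in (1e-6, 1e-2, 1.0, 1e2, 1e6):
    print(f"s = {s:8.1e}   F = {F(0.3, s):.4f}")
# F -> 1 as s -> 0 (no deformation) and F -> 1/2 as s -> infinity
\end{verbatim}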
By similar reasoning, we can calculate the deformation of the
\emph{hion} state vector:
\begin{eqnarray}
\mathcal{D} \phi^{+}_\sigma=
\bigl\{\phi^{+(r)}_\sigma F_\sigma^{+(r)}+i\phi^{+(i)}_\sigma F_\sigma^{+(i)}\bigr\}.
\label{4.s15}
\end{eqnarray}
Thus, it is obvious that the deformation of the \emph{hion} quantum state leads
to a breaking of symmetry, which in turn leads to spontaneous transitions from the
\emph{ground state} to other massless, as well as massive, states.
It is important to note that although the \emph{hion} is deformed
under the random influence of the environment, the full
probability is nevertheless conserved. In particular, integrating the representation (\ref{4.s09})
over the fields $\{\breve{\bm\psi}^+\}$, we obviously get:
$$
\int_{\Xi^3}\mathcal{P}(\{\breve{\bm\psi}^+\},\bar{s}\,;\textbf{r},t)d\{{\breve{\bm\psi}^+}\}=
\int_{\Xi^3} \mathcal{P}^0(\{\bm\phi\}) d\{{\bm\phi}\}=
\left[\begin{array}{ccc}
1/3\\
1/3\\
1/3\\
\end{array}
\right],
$$
where $d\{{\breve{\bm\psi}^+}\}=d\breve{\psi}^+_xd\breve{\psi}^+_y d\breve{\psi}^+_z$.
Recall that integration over the Hilbert space $\Xi^3$ is equivalent
to integration over the configuration space $\mathbb{R}^3$.
This obviously proves that the probability is conserved.
\section{F\lowercase{ormation of singlet and triplet pairs of \emph{hions} }}
At the second stage of relaxation in the ensemble of \emph{hions}, the formation
of singlet and triplet states is possible by entangling them \cite{Einst}.
As is well known \cite{Bell}, there are four possible entangled states,
the so-called Bell states, which can be represented as:
\begin{eqnarray}
\bm\phi^{\updownarrow}_\mp(\textbf{r}_+,\textbf{r}_-)= \frac{1}{\sqrt{2}}
\bigl\{|\uparrow\rangle_1\otimes|\downarrow\rangle_2\mp|\downarrow\rangle_1
\otimes|\uparrow\rangle_2\bigr\},
\nonumber\\
\bm\phi^{\upuparrows}_\mp(\textbf{r}_+,\textbf{r}_-)= \frac{1}{\sqrt{2}}
\bigl\{|\uparrow\rangle_1\otimes|\uparrow\rangle_2\mp|\downarrow\rangle_1
\otimes|\downarrow\rangle_2\bigr\},
\label{4.03b}
\end{eqnarray}
where the radius vectors $\textbf{r}_+$ and $\textbf{r}_-$ determine the positions
of the first and second \emph{hions}, respectively. Note that the first
equation denotes the two possible singlet states, and the second the two triplet
states. In (\ref{4.03b}) the following notations are also made:
\begin{eqnarray}
|\uparrow\rangle_1=\bm\phi^+(\textbf{r}_+),\qquad |\downarrow\rangle_2=
\bigl[\bm\phi^-(\textbf{r}_-)\bigr]^T,
\nonumber\\
|\downarrow \rangle_1=\bar{\bm\phi}^+(\textbf{r}_+), \qquad |\uparrow\rangle_2=
\bigl[\bar{\bm\phi}^-(\textbf{r}_-)\bigr]^T,
\label{4.z03b}
\end{eqnarray}
where we recall that the wave functions $|\uparrow\rangle_1$ and $|\uparrow\rangle_2$ denote
the pure states of \emph{hions} with the spin projections +1 and -1, respectively.
In (\ref{4.03b}) the bar over a wave function denotes complex conjugation, $[...]^T$
denotes the transposed vector and the symbol $\otimes$, respectively, denotes the tensor
product between the vectors.
The explicit form of a direct tensor product between vectors with opposite
spins has the following form:
$$
\textbf{A}^\updownarrow=|\uparrow\rangle_1\otimes| \downarrow\rangle_2 =
\left[
\begin{array}{ccc}
{\phi}_x^+\\
{\phi}_y^+\\
{\phi}_z^+\\
\end{array}
\right]
\left[
\begin{array}{ccc}
{\phi}_x^-\,\,
{\phi}_y^- \,\,
{\phi}_z^-
\end{array}
\right]=\left[
\begin{array}{ccc}
{\phi}_x^+{\phi}_x^- \,\,\, {\phi}_x^+{\phi}_y^-\,\,\,{\phi}_x^+{\phi}_z^-\\
{\phi}_y^+{\phi}_x^- \,\,\, {\phi}_y^+{\phi}_y^-\,\,\,{\phi}_y^+{\phi}_z^- \\
{\phi}_z^+{\phi}_x^- \,\,\, {\phi}_z^+{\phi}_y^-\,\,\,{\phi}_z^+{\phi}_z^-\\
\end{array}
\right]=\left[
\begin{array}{ccc}
A_{11}^\updownarrow \,\,\, A_{12}^\updownarrow\,\,\,A_{13}^\updownarrow\\
A_{21}^\updownarrow \,\,\, A_{22}^\updownarrow\,\,\,A_{23}^\updownarrow\\
A_{31}^\updownarrow \,\,\,A_{32}^\updownarrow\,\,\,A_{33}^\updownarrow\\
\end{array}
\right],\quad\,
$$
\begin{eqnarray}
\bar{\textbf{A}}^\updownarrow=|\downarrow \rangle_1\otimes|\uparrow \rangle_2 =
\left[
\begin{array}{ccc}
{\bar{\phi}}_x^+\\
{\bar{\phi}}_y^+\\
{\bar{\phi}}_z^+\\
\end{array}
\right]
\left[
\begin{array}{ccc}
{\bar{\phi}}_x^-\,\,
{\bar{\phi}}_y^- \,\,
{\bar{\phi}}_z^-
\end{array}
\right]=\left[
\begin{array}{ccc}
\bar{\phi}_x^+\bar{\phi}_x^- \,\,\,\bar{\phi}_x^+\bar{\phi}_y^-\,\,\,\bar{\phi}_x^+\bar{\phi}_z^-\\
\bar{\phi}_y^+\bar{\phi}_x^- \,\,\,\bar{\phi}_y^+\bar{\phi}_y^-\,\,\,\bar{\phi}_y^+\bar{\phi}_z^- \\
\bar{\phi}_z^+\bar{\phi}_x^- \,\,\,\bar{\phi}_z^+\bar{\phi}_y^-\,\,\,\bar{\phi}_z^+\bar{\phi}_z^-\\
\end{array}
\right]=\left[
\begin{array}{ccc}
\bar{A}_{11}^\updownarrow\,\,\,\bar{A}_{12}^\updownarrow\,\,\,\bar{A}_{13}^\updownarrow\\
\bar{A}_{21}^\updownarrow\,\,\,\bar{A}_{22}^\updownarrow\,\,\,\bar{A}_{23}^\updownarrow\\
\bar{A}_{31}^\updownarrow\,\,\,\bar{A}_{32}^\updownarrow\,\,\,\bar{A}_{33}^\updownarrow\\
\end{array}
\right],\nonumber\\
\label{A.01ab}
\end{eqnarray}
whereas the direct tensor product between vectors with parallel spins can be represented as:
\begin{eqnarray}
\textbf{A}^\upuparrows=|\uparrow\rangle_1\otimes|\uparrow\rangle_2 =
\left[
\begin{array}{ccc}
{\phi}_x^+\\
{\phi}_y^+\\
{\phi}_z^+\\
\end{array}
\right]
\left[
\begin{array}{ccc}
\bar{{\phi}}_x^-\,\,
\bar{{\phi}}_y^- \,\,
\bar{{\phi}}_z^-
\end{array}
\right]=\left[
\begin{array}{ccc}
{\phi}_x^+\bar{{\phi}}_x^- \,\,\, {\phi}_x^+\bar{{\phi}}_y^-\,\,\,{\phi}_x^+\bar{{\phi}}_z^-\\
{\phi}_y^+\bar{{\phi}}_x^- \,\,\, {\phi}_y^+\bar{{\phi}}_y^-\,\,\,{\phi}_y^+\bar{{\phi}}_z^-\\
{\phi}_z^+\bar{{\phi}}_x^- \,\,\, {\phi}_z^+\bar{{\phi}}_y^-\,\,\,{\phi}_z^+\bar{{\phi}}_z^-\\
\end{array}
\right]=\left[
\begin{array}{ccc}
A_{11}^\upuparrows \,\,\, A_{12}^\upuparrows\,\,\,A_{13}^\upuparrows\\
A_{21}^\upuparrows\,\,\, A_{22}^\upuparrows\,\,\,A_{23}^\upuparrows\\
A_{31}^\upuparrows\,\,\,A_{32}^\upuparrows\,\,\,A_{33}^\upuparrows\\
\end{array}
\right],
\nonumber\\
\textbf{A}^\downdownarrows=|\downarrow\rangle_1\otimes|\downarrow\rangle_2 =
\left[
\begin{array}{ccc}
\bar{{\phi}}_x^+\\
\bar{{\phi}}_y^+\\
\bar{{\phi}}_z^+\\
\end{array}
\right]
\left[
\begin{array}{ccc}
{\phi}_x^-\,\,
{\phi}_y^- \,\,
{\phi}_z^-
\end{array}
\right]=\left[
\begin{array}{ccc}
\bar{{\phi}}_x^+{\phi}_x^- \,\,\,\bar{\phi}_x^+{\phi}_y^-\,\,\,\bar{\phi}_x^+{\phi}_z^-\\
\bar{\phi}_y^+{\phi}_x^- \,\,\,\bar{\phi}_y^+{\phi}_y^-\,\,\,\bar{\phi}_y^+{\phi}_z^- \\
\bar{\phi}_z^+{\phi}_x^- \,\,\,\bar{\phi}_z^+{\phi}_y^-\,\,\,\bar{\phi}_z^+{\phi}_z^-\\
\end{array}
\right]=\left[
\begin{array}{ccc}
A_{11}^\downdownarrows\,\,\, A_{12}^\downdownarrows\,\,\,A_{13}^\downdownarrows\\
A_{21}^\downdownarrows\,\,\, A_{22}^\downdownarrows\,\,\,A_{23}^\downdownarrows\\
A_{31}^\downdownarrows\,\,\,A_{32}^\downdownarrows\,\,\,A_{33}^\downdownarrows\\
\end{array}
\right],\nonumber\\
\label{A.01b}
\end{eqnarray}
where $\textbf{A}^\updownarrow$ and $\textbf{A}^\upuparrows$
denote the third-rank matrices, while $\bar{\textbf{A}}^\updownarrow$ and
$\bar{\textbf{A}}^\upuparrows=\textbf{A}^\downdownarrows$
are their complex conjugate matrices.
\subsection{T\lowercase{he zero-spin particles and the scalar field}}
Now the main question is the question of the so-called \emph{quintessence}:
is it possible to form particle-like excitations in the form of some dynamical
scalar field \cite{Steinh}?
Using the first equation of the system (\ref{4.03b}), we can construct the
wave function of a boson with zero spin by \textbf{entangling two hions with opposite spin
projections}, presenting it in the form (see Appendix B):
\begin{eqnarray}
\bm\phi^{\updownarrow}_\mp(\textbf{r}_+,\textbf{r}_-)=\frac{1}{\sqrt{2}}
\bigl\{\textbf{A}^\updownarrow\mp\bar{\textbf{A}}^\updownarrow\}=\frac{1}{\sqrt{2}}\,
\textbf{B}_\mp=
\frac{1}{\sqrt{2}}\left[
\begin{array}{ccc}
B_{11}^\mp\,\,\,B_{12}^\mp\,\,\,B_{13}^\mp\\
B_{21}^\mp\,\,\,B_{22}^\mp\,\,\,B_{23}^\mp\\
B_{31}^\mp\,\,\,B_{32}^\mp\,\,\,B_{33}^\mp\\
\end{array}
\right]=
\frac{1}{\sqrt{2}}
\left[
\begin{array}{ccc}
B_{11}^\mp\quad 0\quad 0\,\,\,\\
0\quad B_{22}^\mp\,\,\,\, 0\,\\
\,\,0\quad\, 0\,\,\,\,\, B_{33}^\mp\\
\end{array}
\right],\,\,\nonumber\\
\label{4.03bz}
\end{eqnarray}
where the matrix elements $B^\mp_{ij}=A_{ij}\mp\bar{A}_{ij}$ are calculated explicitly.
Taking into account the rule of localization of the wave function components on the
corresponding planes (see Fig. 2), we obtain:
\begin{widetext}
\begin{eqnarray}
B^-_{11}=2i\bigl[\phi^{+(i)}_x\phi^{-(r)}_x-\phi^{+(r)}_x\phi^{-(i)}_x\bigr],\qquad
B^+_{11}=2\bigl[\phi^{+(r)}_x\phi^{-(r)}_x+\phi^{+(i)}_x\phi^{-(i)}_x\bigr],
\nonumber\\
B^-_{22}=2i\bigl[\phi^{+(i)}_y\phi^{-(r)}_y-\phi^{+(r)}_y\phi^{-(i)}_y\bigr],\qquad
B^+_{22}=2\bigl[\phi^{+(r)}_y\phi^{-(r)}_y+\phi^{+(i)}_y\phi^{-(i)}_y\bigr],
\nonumber\\
B^-_{33}=2i\bigl[\phi^{+(i)}_z\phi^{-(r)}_z-\phi^{+(r)}_z\phi^{-(i)}_z\bigr],\qquad
B^+_{33}=2\bigl[\phi^{+(r)}_z\phi^{-(r)}_z+\phi^{+(i)}_z\phi^{-(i)}_z\bigr].
\label{4.04}
\end{eqnarray}
\end{widetext}
Note that the matrix elements with the plus sign are zero, $B^+_{ij}=0$, since
it is easy to show that the components of the corresponding wave functions
are localized on disjoint manifolds. The latter means that the wave state
$\bm\phi^{\updownarrow}_+(\textbf{r}_+,\textbf{r}_-)$ does not exist. In other words,
in the case of the Minkowski space-time there is only one singlet state for the zero-spin boson.
The quantum distribution of the scalar boson in the singlet state before the onset of the
relaxation process, i.e. for $s=0$, can be represented as:
\begin{eqnarray}
\mathcal{{P}}^0(\{\bm\phi^+\},\{\bm\phi^-\})=
\bigl|\bigr|\bm\phi_-^\updownarrow(\textbf{r}_+,\textbf{r}_-)
\bigr|\bigr|^2 = \frac{1}{2}\, \textbf{C},
\label{4.03bz}
\end{eqnarray}
where
$$\textbf{C}=\textbf{B}\cdot\bar{\textbf{B}}=
\left[
\begin{array}{ccc}
C_{11}\quad 0\quad 0 \\
0\quad C_{22}\quad0\\
0\quad 0\quad C_{33}\\
\end{array}
\right],
$$
is a diagonal third rank matrix, the elements of which have the following form:
\begin{eqnarray}
C_{11}=B_{11}\bar{B}_{11},\quad C_{22}=B_{22}\bar{B}_{22},\quad
C_{33}=B_{33}\bar{B}_{33},
\quad
C_{13}=C_{12}=C_{23}=C_{32}=0.
\label{4.04}
\end{eqnarray}
Taking into account the fact that the spins of the two vector states $\bm\phi^+$ and
$\bm\phi^-$ are directed oppositely, and also considering the features of the spatial
localization of these quasiparticles, we obtain the following expressions for the matrix elements:
\begin{eqnarray}
C_{11}=4||\phi^{+(r)}_x||^2||\phi^{-(i)}_x||^2+4||\phi^{+(i)}_x||^2||\phi^{-(r)}_x||^2,
\nonumber\\
C_{22}=4||\phi^{+(r)}_y||^2||\phi^{-(i)}_y||^2+4||\phi^{+(i)}_y||^2||\phi^{-(r)}_y||^2,
\nonumber\\
C_{33}=4||\phi^{+(r)}_z||^2||\phi^{-(i)}_z||^2+4||\phi^{+(i)}_z||^2||\phi^{-(r)}_z||^2.
\label{4.04bzt}
\end{eqnarray}
Now let us consider how the density of the quantum distribution of a scalar boson
changes taking into account the random influence of the environment.
To study this problem, we will use the following system of complex stochastic matrix equations:
\begin{eqnarray}
\partial_{t}\bm{\breve{{\psi}}^{\pm}}(\textbf{r}_\pm,t)\mp \bigl(\textbf{S}\cdot \bm\nabla\bigr) \bm{\breve{{\psi}}^{\pm}}(\textbf{r}_\pm,t)=\bm\eta^\pm(s_\pm),
\label{4.02t}
\end{eqnarray}
and also the equations:
\begin{eqnarray}
\nabla\bm{\breve{{\psi}}^{\pm}}(\textbf{r}_\pm,t)=0,
\label{4.02k}
\end{eqnarray}
where $\textbf{r}_\pm$ denotes the radius-vector of the corresponding \emph{hion};
in addition, the complex generators $\bm\eta^\pm(s_\pm)=
(\eta_x^\pm,\eta_y^\pm,\eta_z^\pm)$ describe random fluctuations of charges and
currents, which continuously arise within the 4$D$-intervals $ds^2_\pm=c^2dt^2-dx^2_\pm-dy^2_\pm-dz^2_\pm$.
For further study, it is useful to write the system of equations (\ref{4.02t})
in the matrix form:
$$
\left[
\begin{array}{ccc}
ict & -z_+ & y_+\\
z_+ & ict & -x_+ \\
-y_+& x_+ & ict \\
\end{array}
\right]\cdot
\left[
\begin{array}{ccc}
\dot{\breve{{\psi}}}_x^+\\
\dot{\breve{{\psi}}}_y^+ \\
\dot{\breve{{\psi}}}_z^+\\
\end{array}
\right]=s_+\left[
\begin{array}{ccc}
\eta_x^+ \\
\eta_y^+ \\
\eta_z^+\\
\end{array}
\right],\quad
$$
and, respectively,
\begin{eqnarray}
\label{4.01a}
\left[
\begin{array}{ccc}
ict & z_- &- y_-\\
-z_- & ict & x_- \\
y_-& -x_- & ict \\
\end{array}
\right]\cdot
\left[
\begin{array}{ccc}
\dot{\breve{{\psi}}}_x^-\\
\dot{\breve{{\psi}}}_y^- \\
\dot{\breve{{\psi}}}_z^-\\
\end{array}
\right]=s_-\left[
\begin{array}{ccc}
\eta_x^- \\
\eta_y^- \\
\eta_z^-\\
\end{array}
\right],
\end{eqnarray}
where $ \dot{\breve{{\psi}}}_\sigma^\varsigma=\partial
\breve{{\psi}}_\sigma^\varsigma/\partial s_\varsigma$ and $\varsigma=\pm.$
As in the case of one \emph{hion} (see (\ref{4.s04})-(\ref{4.s05})),
the system of SDE (\ref{4.01a}) can be reduced to the canonical form:
\begin{eqnarray}
\dot{\breve{{\psi}}}^{\pm}_\sigma(s_\pm;\textbf{r}_\pm,t)=\bigl\{b^\pm_\sigma(\textbf{r}_\pm,t)+
d^\pm_\sigma(\textbf{r}_\pm,t)\bigr\}\bar{s}^{-1}_\pm\eta^-(s_\pm), \quad
\label{4.02a}
\end{eqnarray}
where $\bar{s}_+=s_+/a_p$ and $\bar{s}_-=s_-/a_p$, in addition, the following notations are made:
\begin{eqnarray}
b^\varsigma_x=\frac{z_\varsigma-y_\varsigma}{a_p},\qquad
d_x^{\,\varsigma}=\frac{c^2t^2-x^2_\varsigma-x_\varsigma y_\varsigma-x_\varsigma z_\varsigma}{a_pct},
\nonumber\\
b^\varsigma_y=\frac{x_\varsigma-z_\varsigma}{a_p},\qquad
d^{\,\varsigma}_y=\frac{c^2t^2-y^2_\varsigma-x_\varsigma y_\varsigma-y_\varsigma z_\varsigma}{a_pct},
\nonumber\\
b^\varsigma_z=\frac{y_\varsigma-x_\varsigma}{a_p},
\qquad
d^{\,\varsigma}_z=\frac{c^2t^2-z^2_\varsigma-x_\varsigma z_\varsigma-y_\varsigma z_\varsigma}{a_pct}.
\label{4.02b}
\end{eqnarray}
Below, in the equations (\ref{4.02a})-(\ref{4.02b}), we will assume that
the following relations are satisfied:
$$
\eta_x^\varsigma=\eta_y^\varsigma=\eta_z^\varsigma=\eta,\qquad d\bar{s}_+=d\bar{s}_-=d\bar{s},
$$
which is quite natural.
As in the case of a single \emph{hion}, we will assume that the random
generator $\eta(s)$ satisfies the correlation properties of white noise (see Eqs. (\ref{4.s06})).
The joint probability distribution for a scalar boson can be represented as (see \cite{AshG}):
\begin{eqnarray}
\mathcal{P}(\{\breve{\psi}\},\bar{s};\textbf{r}_+,\textbf{r}_-,t)=\prod_{\varsigma,
\sigma }\bigl\langle\delta\bigl(\breve{\psi}_\sigma^\varsigma(\bar{s};\textbf{r}_\varsigma,t)-
\breve{\phi}_{\sigma}^\varsigma\bigr)\bigr\rangle,
\label{4.02abc}
\end{eqnarray}
where $\{{\breve{\psi}}\}=({\breve{\psi}}^+_x,...,{\breve{\psi}}^-_z)$ denotes a set of
fluctuating vacuum fields and $\breve{\phi}_{\sigma}^\varsigma=
\breve{\psi}_{\sigma}^\varsigma(\bar{s};\textbf{r},t)|_{s\to0}$.
In the representation (\ref{4.02abc}) the function
$\delta(\breve{\psi}_\sigma^\varsigma(\bar{s};\textbf{r}_\varsigma,t)-\breve{\phi}_{\sigma}^\varsigma)$
denotes the Dirac delta function generalized to the 6$D$ Hilbert space.
Using the SDE system (\ref{4.02a}), for the conditional probability
describing the relaxation of the singlet state, we can obtain the
following partial differential equation of the second order (see \cite{Ashg1}):
\begin{eqnarray}
\Bigl\{\frac{\partial }{\partial\bar{s}}-\frac{1}{2} \sum_{\varsigma,\sigma }
\varepsilon_{1\sigma}^\varsigma(\textbf{r}_\varsigma,t) \frac{\partial^2 }{\partial{\breve{\psi}}_\sigma^\varsigma\partial\bar{\breve{\psi}}_\sigma^\varsigma}
\Bigr\}\mathcal{P}=0,
\label{4.03}
\end{eqnarray}
where $\bar{\breve{\psi}}_\sigma^\varsigma$ denotes the complex conjugate of the function
$\breve{\psi}_\sigma^\varsigma$ and $\varepsilon_{1\sigma}^\varsigma(\textbf{r}_\varsigma,t)=
\varepsilon_1 [b_\sigma^\varsigma(\textbf{r}_\varsigma,t)+
d_\sigma^\varsigma(\textbf{r}_\varsigma,t)]^{2}$.
For further analytical calculations, it is convenient to represent the general
solution of the equation (\ref{4.03}) in the integral form:
\begin{eqnarray}
\mathcal{P}(\{\breve{\bm\psi}\},\bar{s}\,;\textbf{r}_+,\textbf{r}_-,t)=\int_{\Xi^6}
R(\{\breve{\bm\phi}^+\},\{\breve{\bm\phi}^-\})\prod_{\varsigma,\sigma}
\exp\biggl\{-\frac{(\breve{\phi}_{\sigma}^\varsigma-{\breve{\psi}}_\sigma^\varsigma)
(\bar{\breve{\phi}}_{\sigma}^\varsigma
-\bar{{\breve{\psi}}}_\sigma^\varsigma) }{2\bar{s}\varepsilon_{1\sigma}^\varsigma}\biggr\}
\frac{d \breve{\bm\phi}_{\sigma}^\varsigma}{ \sqrt{2\pi \bar{s}\varepsilon_{1\sigma}^\varsigma}},\quad
\label{4.03a}
\end{eqnarray}
where $d\breve{\bm\phi}_{\sigma}^\varsigma=d\breve{\phi}_{\sigma}^{\varsigma(r)}
d\breve{\phi}_{\sigma}^{\varsigma(i)}$, in addition, as in the case of a single \emph{hion}, we assume that $R(\{\breve{\bm\phi}^+\},\{\breve{\bm\phi}^-\})=\mathcal{{P}}^0(\{\bm\phi^+\},\{\bm\phi^-\})$
is the initial distribution of the scalar boson before the relaxation begins.
It is obvious that integration over the space-time, i.e. over the spectrum, in
accordance with the ergodic hypothesis, is equivalent to integration
over the full 12$D$ space.
Substituting $(\ref{4.03bz})-(\ref{4.04bzt})$ into $(\ref{4.03a})$ and integrating over the
variables $ \breve{\bm\phi}_{\sigma}^\varsigma$ within $[{\breve{\psi}}_\sigma^{\varsigma(\varpi)},
\phi_{\sigma}^{\varsigma(\varpi)}]$, we get:
\begin{eqnarray}
\mathcal{P}(\{\breve{\bm\psi}\},\bar{s}\,;\textbf{r}_+,\textbf{r}_-,t)
= \frac{1}{2}\left[
\begin{array}{ccc}
\overline{C}_{11}\,\,\,\,0\quad 0 \\
0\quad \overline{C}_{22}\,\,\, 0 \\
\,\,0\quad\, 0\,\,\,\, \overline{C}_{33}\\
\end{array}
\right].
\label{4.05w}
\end{eqnarray}
Recall that $\overline{C}_{\sigma\sigma}$ denotes the mean value:
$$
\overline{C}_{\sigma\sigma}=4\Bigl\{\bigl\langle||\phi^{+(r)}_\sigma||^2\bigr\rangle
\bigl\langle||\phi^{-(i)}_\sigma||^2\bigr\rangle +
\bigl\langle||\phi^{+(i)}_\sigma||^2\bigr\rangle
\bigl\langle||\phi^{-(r)}_\sigma||^2\bigr\rangle\Bigr\},
$$
where
$$
\bigl\langle||\phi^{\varsigma(\varpi)}_\sigma||^2\bigr\rangle=
||\phi^{\varsigma(\varpi)}_\sigma||^2F_\sigma^{\varsigma(\varpi)}
\bigl(\phi^{\varsigma(\varpi)}_\sigma,\breve{\psi}^{\varsigma(\varpi)}_\sigma\bigr),
$$
in addition, performing calculations similar to the case of a single \emph{hion}
(see (\ref{4.s09})-(\ref{4.s14})), we get:
\begin{eqnarray}
F_\sigma^{\varsigma(\varpi)}= \frac{1}{2}
\biggl\{1+\mathrm{erf}\biggl[\frac{\phi^{\varsigma(\varpi)}_\sigma-
\breve{\psi}^{\varsigma(\varpi)}_\sigma}{\sqrt{2\bar{s}\epsilon_{1\sigma}^\varsigma}}\biggr]\biggr\}.
\label{4.t01}
\end{eqnarray}
Note that, using analogous arguments, we can construct the wave function of two entangled
\emph{hions} with allowance for its relaxation in a random environment. Also,
performing calculations similar to the case of one \emph{hion}, one can
see that the deformation of the wave state of the scalar boson under the influence of
a random environment does not lead to a violation of the law of conservation of full probability.
Thus, we have shown that, as a result of the multi-scale evolution of QVF, a scalar
field is formed as a sort of Bose-Einstein condensate of massless scalar bosons.
However, such a condensate differs significantly from a conventional substance,
since it consists of massless particles with a large Compton wavelength, which
cannot condense without limit and form large-scale structures such as stars, planets, etc.
In other words, the described substance meets all the requirements of the
\emph{quintessence} and, accordingly, it can be asserted that the
\emph{quintessence hypothesis} is theoretically proved.
\subsection{T\lowercase{riplet state of two \emph{hions} and the vector field}}
The wave function of the triplet state formed by entangling two \emph{hions}
with \textbf{parallel} projections of the spin can be represented as:
\begin{eqnarray}
\bm\phi^{\upuparrows}_\mp(\textbf{r}_+,\textbf{r}_-)=\frac{1}{\sqrt{2}}
\bigl\{\textbf{A}^\upuparrows\mp \textbf{A}^\downdownarrows\}=
\frac{1}{\sqrt{2}}\,\textbf{D}_\mp=
\frac{1}{\sqrt{2}}\left[
\begin{array}{ccc}
D_{11}^\mp & D_{12}^\mp & D_{13}^\mp\\
D_{21}^\mp & D_{22}^\mp & D_{23}^\mp\\
D_{31}^\mp & D_{32}^\mp & D_{33}^\mp\\
\end{array}
\right]=
\frac{1}{\sqrt{2}}
\left[
\begin{array}{ccc}
D_{11}^\mp & 0 & 0\\
0 & D_{22}^\mp & 0\\
0 & 0 & D_{33}^\mp\\
\end{array}
\right],\,\,\nonumber\\
\label{4.04bz}
\end{eqnarray}
where, as shown by simple calculations, the following equalities hold:
\begin{equation}
D^\mp_{11}=B^\mp_{11},\qquad D^\mp_{22}=B^\mp_{22},\qquad D^\mp_{33}=B^\mp_{33}.
\label{4.04z}
\end{equation}
From these equalities it follows that there is only one triplet
state, which is described by the third-rank matrix
$\bm\phi^{\upuparrows}_-(\textbf{r}_+,\textbf{r}_-)$.
The relaxation of the triplet state (\ref{4.04bz}) can be taken into account
using a construction similar to that used for the singlet state (\ref{4.03a}).
In this case, however, the substitutions
$(x_-\to -x_-,\quad y_-\to -y_-,\quad z_-\to -z_-)$ must be made in the expressions
(\ref{4.02a})-(\ref{4.02b}), which is equivalent to transforming the singlet
state into the triplet state. Note that these replacements in the expression
(\ref{4.03a}) change the power of the fluctuations $\varepsilon_\sigma^\varsigma$.
\section{C\lowercase{onclusion}}
Although a fundamental scalar field has not yet been observed experimentally,
it is generally accepted that such fields play a key role in the construction
of modern theoretical physics of elementary particles. There are a few important
hypothetical scalar fields, for example the Higgs field for the Standard Model,
the \emph{dark energy}-\emph{quintessence} for a theory of the quantum vacuum, etc.
Note that the presence of each of them is necessary for a complete classification
of the theory of fundamental fields, including new physical theories such as,
for example, String Theory.
Recall that, despite the great progress of modern particle theory
within the framework of the SM, it does not give a clear answer to a number of
fundamental questions of modern physics, such as \emph{What is dark energy and dark matter}?
or \emph{What happened to the antimatter after the big bang}? and so on.
As modern astrophysical observations show, no less than 74 percent of the energy of
the universe is associated with a substance called \emph{dark energy}, which has no
mass and whose properties are not sufficiently studied and understood. Based on
many considerations, it is natural to assume that this substance must be related
to the quantum vacuum or simply be the QV itself.
As is known, in the modern understanding, what is called the vacuum state or
the quantum vacuum is by no means a \emph{simple empty space}. Recall that
in the vacuum state electromagnetic waves and particles continuously appear and
disappear, so that on average their value is zero. It is reasonable to
think that these fluctuating or flickering fields are born as a result of spontaneous
decays of quasi-stable massless scalar particles that are very inert to any external influences.
The main purpose of this work was the theoretical justification for the possibility
of forming a scalar field consisting of uncharged massless zero-spin particles.
The developed approach is formally similar to the Parisi-Wu \cite{Parisi-Wu, Dam}
stochastic quantization, however it also has significant differences.
In particular, as in the case of the Parisi-Wu stochastic quantization, when considering
Euclidean quantum field theory, we consider the stochastic Yang-Mills equations as the
basic equations. However, unlike the Parisi-Wu concept, we believe that the nature of
stochasticity is multi-scale and, therefore, the equilibrium limit of the statistical
system associated with a heat reservoir is not one, but many. In other words, there
are many quasi-equilibrium states between which spontaneous transitions occur,
but at the same time the dynamical equilibrium between these states is conserved.
Another significant difference between the developed representation and the
Parisi-Wu theory is that the analogy with classical statistical physics is
not used for determination the stationary distribution of a random process.
The latter circumstance allows us to avoid a number of inaccuracies inherent
in standard representations, which, in our opinion, makes it difficult to
study such a specific substance as QV.
In this article, we have considered the simplest case when the self-action
terms are absent in the Yang-Mills stochastic equations, that is, $\mathfrak{g}=0$.
This means that the considered QVF are Abelian fields, which
satisfy the gauge group symmetry $SU(2)\times U(1)$. We quantized
the classical stochastic vector field (see quantization conditions (\ref{3.04ak}))
and proved that on the main relaxation scale $(\tau_0,\bm\varepsilon_0^a)$, in the
Hilbert space, in the limit of statistical equilibrium, a discrete set
of stationary solutions (\ref{3.03tz}) arises, describing a massless spin-1 Bose
particle (named \emph{hion}). It is important to note that these solutions,
combining relativity and quantum mechanics (see Eqs (\ref{2.0k2b})),
are as close as possible to the concept of 2$D$ quantum string theory.
It is shown that the \emph{hion} is characterized by three quantum numbers, with
the \emph{ground state} corresponding to the highest frequency. As shown in our study,
the \emph{hion} in the $3D$ space is localized on the complex 2$D$
surface consisting of three perpendicular planes (see Fig. 2).
We have proved that on the second relaxation scale $(\tau_1,\bm\varepsilon_1)$,
the \emph{hion} becomes a quasi-particle, which can make spontaneous transitions
to other massive and massless states. Note that during spontaneous
transitions of the \emph{hion}, the bosons $W_3$ and $B$ combine into two different
bosons, namely the photon $\gamma$ (electromagnetic interaction) and the massive
boson $Z^0$ (weak interaction). It is also shown that on the second relaxation scale,
two \emph{hions} with spin projections +1 and -1 can form a boson with zero spin. The
ensemble of spin-0 bosons forms a Bose-Einstein condensate, which is a scalar
field with all the necessary properties. In other words, the work is a theoretical
proof of the \emph{dark energy}-\emph{quintessence} hypothesis and, accordingly,
of the stability of the QED vacuum in the infrared limit. Recall that the infrared
limit of vacuum diagrams does not exist in theories with self-interacting massless
fields (QCD) or with massless interacting particles (massless QED), if the theory is
renormalizable \cite{Savi,SCHARF}.
Note that a small part of the energy of the quantum vacuum is concentrated in
vector fields consisting of \emph{hions} and spin-2 vector bosons.
On large scales of $3D$ space, these fields can be represented as a Heisenberg
spin glass whose total polarization is zero.
A very important question related to the value of the parameter $a_p$
(see (\ref{3.05at})), which characterizes the spatial size of the \emph{hion},
remains open within the framework of the developed representation. Apparently,
a clear answer to this question can only be obtained by conducting a series of experiments.
In particular, if the value of the constant $a_p$ turns out to be substantially
different from the Planck length $l_P$, then it will be necessary to introduce
a new fundamental constant defining the spatial size of the \emph{hion}.
We are convinced that if direct measurements of the constants
$({\varepsilon}_0^1,{\varepsilon}_0^2,{\varepsilon}_0^3)$,
which characterize the fluctuations of the quintessence and, accordingly, the formation
of \emph{hions}, turn out to be impossible, then this can be done using indirect measurements. In
particular, we believe that the interesting experiments on the detection of an optical
force between controlled light waves, known as ``optical binding strength'' (see
for example \cite{Mo}), are not related to the Casimir vacuum properties, as some
researchers try to explain. The magnitude of the optical force and its
diverse behavior do not allow us to hope for such an explanation. To explain these forces, most
likely, there must be a more ``fundamental vacuum'' with more non-trivial properties,
such as a scalar field or dark energy. Various experiments aimed at non-invasive
measurements of characteristics of biological organisms and of systems with
fluctuating entropies also push us toward the idea of the existence of quintessence,
with its still unknown properties \cite{Ash3}.
Thus, studies carried out within the framework of the symmetry group
$SU(2)\times U(1)$ allow us to speak of the properties of the QVF and,
accordingly, of the structure and properties of empty space-time down to the
distances of $\sim10^{-15}\,m$, where strong interactions begin to dominate.
As preliminary studies show, the properties of space-time can change as a
result of the polarization of the vector component of the QV and, accordingly,
of changes in the refractive index of the vacuum due to orientational
effects of \emph{hion}s in external, even weak, electromagnetic fields.
Obviously, in this case the photon-photon interaction and, accordingly,
the interaction of two light beams occur according to a different physical
mechanism from the one described by the fourth-order
Feynman diagrams.
In other words, there is every reason to speak for the first time about
the real, rather than theoretical, possibility of implementing space-time
engineering.
Finally, as many researchers point out, beginning in the middle of the twentieth
century a new scientific-technological revolution started, based not on energy but
on information. In this regard, some researchers have rightly identified the
Universe with a giant quantum computer, which can explain previously unexplained
features, most importantly the coexistence in the universe of randomness and order,
and of simplicity and complexity (see, for example, \cite{Lloyd}). In the light
of the above theoretical proofs, the quantum vacuum-quintessence, or scalar field,
is nothing more than a natural quantum computer with a complex logic
different from that currently being realized in practice.
\section{A\lowercase{ppendix}}
\subsection{}
Let us consider the limit of statistical equilibrium
$\lim_{\,t \to \infty}\mathcal{P}^1(\zeta,t) =\bar{\mathcal{P}}^1(\zeta) $.
In this case, the partial differential equation (\ref{2.3wv}) is transformed
into the ordinary differential equation of the form:
\begin{equation}
\biggl\{3\zeta+
\bigl(\zeta^2+\omega^2\bigl)\frac{d}{d\zeta}
+\frac{\varepsilon_0}{2}\frac{d^2}{d\zeta^2}\biggr\}\bar{\mathcal{P}}^1(\zeta)=0.
\label{1.a0}
\end{equation}
\textbf{Proposition.} \emph{If the function $\bar{\mathcal{P}}^1(\zeta)$ is a solution of the
equation (\ref{1.a0}), then the integral is bounded:
\begin{equation}
\int_{-\infty}^{+\infty}\bar{\mathcal{P}}^1(\zeta)d\zeta<M,
\label{1.ta1}
\end{equation}
where $M=const>0$.}
We represent the solution of the equation (\ref{1.a0}) in the form:
\begin{equation}
\bar{\mathcal{P}}^1(\zeta)=Z(\zeta){\mathcal{P}}^+(\zeta).
\label{1.a1}
\end{equation}
Let the function ${\mathcal{P}}^+(\zeta)$ satisfy the equation:
\begin{equation}
\biggl\{\bigl(\zeta^2+\omega^2\bigl)\frac{d}{d\zeta}
+\frac{\varepsilon_0}{2}\frac{d^2}{d\zeta^2}\biggr\}{\mathcal{P}}^+(\zeta)=0,
\label{1.a2}
\end{equation}
then it has the form:
$$
{\mathcal{P}}^+(\zeta)=C^+\int^{\zeta}_{-\infty} e^{-2\varepsilon_0^{-1}
(\frac{1}{3}z^3+\omega^2z)}dz,
$$
where $C^+$ is an arbitrary constant.\\
If we choose $C^+>0$, then the function
${\mathcal{P}}^+(\zeta)$ is positive on the whole axis $\zeta\in(-\infty,+\infty)$.
Substituting (\ref{1.a1}) into (\ref{1.a0}), we get:
\begin{equation}
\biggl\{3\zeta+
\Bigl[\bigl(\zeta^2+\omega^2\bigl)+\varepsilon_0K(\zeta)\Bigr]
\frac{d}{d\zeta}+\frac{\varepsilon_0}{2}\frac{d^2}{d\zeta^2}\biggr\}Z(\zeta)=0,
\label{1.a3}
\end{equation}
where
$$
K(\zeta)=\frac{d{\ln\mathcal{P}}^+(\zeta)}{d\zeta}=
e^{-2\varepsilon_0^{-1}(\frac{1}{3}\zeta^3+\omega^2\zeta)}\biggl/\int^{\zeta}_{-\infty}
e^{-2\varepsilon_0^{-1}(\frac{1}{3}z^3+\omega^2z)}dz.
$$
As the analysis of the coefficient $K(\zeta)$ shows,
on the axis $\zeta\in(-\infty,+\infty)$ it exhibits indeterminate forms of the type $\frac{0}{0}$
or $\frac{\infty}{\infty}$. Applying L'H\^{o}pital's rule to the coefficient
near the critical points $\zeta_i$, where these indeterminacies appear, we obtain the expression:
\begin{equation}
\lim_{\zeta\to \zeta_i} K(\zeta)=\frac{\frac{d}{d\zeta}\Bigl(e^{-2\varepsilon_0^{-1} (\frac{1}{3}\zeta^3+\omega^2\zeta)}\Bigr)}{\frac{d}{d\zeta}\Bigl(\int^{\zeta}_{-\infty} e^{-2\varepsilon_0^{-1}
(\frac{1}{3}z^3+\omega^2z)}dz\Bigr)}=-2\frac{\zeta^2+\omega^2}{\varepsilon_0}.
\label{1.0a4}
\end{equation}
Taking into account (\ref{1.0a4}), the equation (\ref{1.a3}) can be written in the form:
\begin{equation}
\biggl\{3\zeta-
\bigl(\zeta^2+\omega^2\bigl)\frac{d}{d\zeta}+\frac{\varepsilon_0}{2}
\frac{d^2}{d\zeta^2}\biggr\}Z(\zeta)=0.
\label{1.a4}
\end{equation}
In the asymptotic domains, i.e., when $|\zeta|\gg1$, we can neglect the term
$3\zeta$ in the equation (\ref{1.a4}) and obtain the following solution:
\begin{equation}
Z(\zeta)\sim
\int^\zeta_{-\infty}e^{2\varepsilon_0^{-1}(\frac{1}{3}z^3+\omega^2z)}dz>0,
\qquad |\zeta|\gg1,
\label{1.a5}
\end{equation}
whereas the asymptotic solution of the equation (\ref{1.a0}), respectively, is:
\begin{eqnarray}
\bar{\mathcal{P}}^1(\zeta)\sim C^+\biggl(\int^{\zeta}_{-\infty} e^{-2\varepsilon_0^{-1}
(\frac{1}{3}z^3+\omega^2z)}dz\biggr)
\biggl(\int^\zeta_{-\infty} e^{2\varepsilon_0^{-1}(\frac{1}{3}y^3+\omega^2y)}dy\biggr)>0,
\qquad |\zeta|\gg1.
\label{1.a6}
\end{eqnarray}
Again using L'H\^{o}pital's rule, it can be shown that:
$$
\lim_{|\zeta|\to \infty}\biggl|\frac{\acute{Z}(\zeta)}{Z(\zeta)}\biggr|=\zeta^2+\omega^2\gg1,
\qquad \acute{Z}(\zeta)=\frac{d}{d\zeta} Z(\zeta),
$$
from which the following estimate follows:
$$
e^{- 2 \varepsilon_0^{-1}
(\frac{1}{3}\zeta^3+\omega^2\zeta)}\gg \int^{\zeta}_{-\infty}e^{-2\varepsilon_0^{-1}
(\frac{1}{3}z^3+\omega^2z)}dz,\qquad |\zeta|\gg1,
$$
and, respectively:
\begin{equation}
\bar{\mathcal{P}}^1(\zeta)<C^+e^{-2\varepsilon_{0}^{-1}
(\frac{1}{3}\zeta^3+\omega^2\zeta)} \int^\zeta_{-\infty}
e^{2 \varepsilon_{0}^{-1}(\frac{1}{3}z^3+\omega^2z)}dz.
\label{1.a7}
\end{equation}
Now let us represent the integral (\ref{1.ta1}) as a sum of three terms:
\begin{equation}
\int_{-\infty}^{+\infty}\bar{\mathcal{P}}^1(\zeta)d\zeta=
\biggl\{\,\int_{-\infty}^{-L} + \int_{-L}^{+L}
+\int^{+\infty}_{+L}\,\biggr\}\bar{\mathcal{P}}^1(\zeta)d\zeta,
\label{1.a8}
\end{equation}
where $L\gg1$.
For the first and third integrals, we can write down the following obvious estimates
(see (\ref{2.32a})):
\begin{equation}
\int_{-\infty}^{-L}\bar{\mathcal{P}}^1(\zeta)d\zeta<
\int_{-\infty}^{-L}{\mathcal{P}}^0(\zeta)d\zeta<
\int_{-\infty}^{+\infty}{\mathcal{P}}^0(\zeta)d\zeta=1,
\label{1.a9}
\end{equation}
and, respectively:
\begin{equation}
\int^{+\infty}_{+L}\bar{\mathcal{P}}^1(\zeta)d\zeta<
\int^{+\infty}_{+L}{\mathcal{P}}^0(\zeta)d\zeta<
\int_{-\infty}^{+\infty}{\mathcal{P}}^0(\zeta)d\zeta=1.
\label{1.a10}
\end{equation}
As for the second term in (\ref{1.a8}):
\begin{equation}
\int_{-L}^{+L}\bar{\mathcal{P}}^1(\zeta)d\zeta<M^\ast,
\label{1.a11}
\end{equation}
it obviously converges, with $M^\ast<\infty$, since the function
$\bar{\mathcal{P}}^1(\zeta)$ is bounded and the integration is performed
over a finite interval.
Thus, taking into account the estimates (\ref{1.a9})-(\ref{1.a11}),
we can assert that the integral (\ref{1.ta1}) converges, i.e. the estimate:
$$
\int_{-\infty}^{+\infty}\bar{\mathcal{P}}^1(\zeta)d\zeta<M,
$$
where $M=const<\infty$, is \textbf{correct}.\\
\emph{ \textbf{The proposition is proved.}}
\subsection{}
Using the systems of equations (\ref{1.02k})-(\ref{1.02ta}) and carrying out
similar calculations for vacuum fields consisting of particles with spin projection
$-1$, we can obtain the following system of equations:
\begin{eqnarray}
\bigl\{\square+ [i(c_{,y}-c_{,z})+c_{,t}c^{-1}]c^{-2}\partial_t\bigr\}\psi^{-}_x=0,
\nonumber\\
\bigl\{\square+[i(c_{,z}-c_{,x})+c_{,t}c^{-1}]c^{-2}\partial_t\bigr\}\psi^{-}_y=0,
\nonumber\\
\bigl\{\square+[i(c_{,x}-c_{,y})+c_{,t}c^{-1}]c^{-2}\partial_t\bigr\}\psi^{-}_z=0.
\label{A.l}
\end{eqnarray}
Substituting into (\ref{A.l}) solutions of the form:
\begin{equation}
\psi_\sigma^-(\textbf{r},t)=\exp{\biggl\{\int_{-\infty}^t \zeta_\sigma(t')dt'
\biggr\}}\phi_\sigma^-(\textbf{r}),\qquad
\sigma=x,y,z,
\label{A.2}
\end{equation}
we obtain the following system of stationary equations:
\begin{eqnarray}
\Bigl\{\triangle-\Bigl[\Bigl(\frac{\xi(t)}{c}\Bigr)^2+\frac{r-i(y-z)}{cr^2}\,
\zeta(t)\Bigr]\Bigr\}\phi_x^-(\textbf{r})=0,
\nonumber\\
\Bigl\{\triangle-\Bigl[\Bigl(\frac{\xi(t)}{c}\Bigr)^2+\frac{r-i(z-x)}{c r^2}\,
\zeta(t)\Bigr]\Bigr\}\phi_y^-(\textbf{r})=0,
\nonumber\\
\Bigl\{\triangle-\Bigl[\Bigl(\frac{\xi(t)}{c}\Bigr)^2+\frac{r-i(x-y)}{cr^2}\,
\zeta(t)\Bigr]\Bigr\}\phi_z^-(\textbf{r})=0.
\label{A.2a}
\end{eqnarray}
Further, carrying out standard arguments, we find the following stationary equations:
\begin{eqnarray}
\Bigl\{\triangle -\Bigl[\Bigl(\frac{\omega}{c}\Bigr)^2+ \frac{r-i(y-z)}{r^2}\varrho(\omega)\Bigr]
\Bigr\}\phi_x^-(\textbf{r})=0,
\nonumber\\
\Bigl\{\triangle -\Bigl[\Bigl(\frac{\omega}{c}\Bigr)^2+\frac{r-i(z-x)}{r^2}\varrho(\omega)\Bigr]
\Bigr\}\phi_y^-(\textbf{r})=0,
\nonumber\\
\Bigl\{\triangle-\Bigl[\Bigl(\frac{\omega}{c}\Bigr)^2+\frac{r-i(x-y)}{r^2}\varrho(\omega)\Bigr]\Bigr\}
\phi_z^-(\textbf{r})=0.
\label{A.2b}
\end{eqnarray}
The system of equations is solved in a similar way as for the fields
$\phi^+(\textbf{r})$ (see Eqs. (\ref{3.020})-(\ref{3.02b})). In particular,
calculations show that the components of the vector boson with spin projection
$-1$ are localized on a manifold consisting of the following set of planes:
$\bigl[\phi_x^{-(r)}(Y,-Z),\,\,
\phi_x^{-(i)}(-Y,Z)\bigr],\quad\bigl[\phi_y^{-(r)}(X,-Z),$ $\,\,\phi_y^{-(i)}(-X,Z)\bigr]$ and
$\bigl[\phi_z^{-(r)}(X,-Y),\,\,\phi_z^{-(i)}(-X,Y)\bigr]$, respectively.
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{T}{he} goal of this paper is to develop a methodology that automatically provides numerically tight performance bounds for first-order decentralized methods on convex functions and to demonstrate its usefulness on different existing methods.
Decentralized optimization has received increasing attention due to its useful applications in large-scale machine learning and sensor networks, see e.g. \cite{DGD} for a survey. In decentralized methods for separable objective functions, we consider a set of agents $\{1,\dots,N\}$, working together to solve the following optimization problem: \vspace{-2mm}
\begin{equation} \label{opt:dec_prob}
\underset{\text{\normalsize $x \in \Rvec{d}$}}{\mathrm{minimize}} \quad f(x) = \frac{1}{N}\sum_{i=1}^N f_i(x), \vspace{-1mm}
\end{equation}
where $f_i: \Rvec{d}\to\mathbb{R}$ is the private function locally held by agent $i$.
To achieve this goal, each agent $i$ holds its own version $x_i$ of the decision variable $x \in \Rvec{d}$. Agents perform local computations and exchange local information with their neighbors to come to an agreement on the minimizer $x^*$ of the global function $f$. Exchanges of information often take the form of an average consensus step on some quantity, e.g., on the $x_i$. This corresponds to a multiplication by an averaging matrix $W \in \Rmat{N}{N}$, typically assumed symmetric and \emph{doubly stochastic}, i.e., a nonnegative matrix whose rows and columns sum to one. This matrix $W$ indicates both the topology of the network of agents and the weights they use during the average consensus. Therefore, we call it the network or averaging matrix, without distinction.
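As a simple illustration (an example of our own; the Metropolis rule used here is one common weight choice among others), consider three agents on a line, with agent 2 connected to agents 1 and 3, and the Metropolis weights $w_{ij} = 1/(1+\max(d_i,d_j))$ for neighbors $i\neq j$ and $w_{ii}=1-\sum_{j\neq i}w_{ij}$, where $d_i$ is the degree of agent $i$. This yields the symmetric doubly stochastic averaging matrix
$$ W = \begin{bmatrix} 2/3 & 1/3 & 0 \\ 1/3 & 1/3 & 1/3 \\ 0 & 1/3 & 2/3 \end{bmatrix}, $$
whose zero entries reflect the fact that agents 1 and 3 are not directly connected.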
One of the simplest decentralized optimization methods is the distributed (sub)gradient descent (DGD) \cite{DsubGD}, where agents successively perform an average consensus step \eqref{eq:DGD_cons} and a local gradient step \eqref{eq:DGD_comp}:
\vspace*{-4mm}
\begin{align}
y_i^k &= \sum_{j=1}^N w_{ij} x_j^k, \hspace*{15mm} \label{eq:DGD_cons} \\[-0.5mm]
x_i^{k+1} &= y_i^k - \alpha^k \nabla f_i(x_i^k), \hspace*{15mm} \label{eq:DGD_comp} \vspace{-2mm}
\end{align}
for step-sizes $\alpha^k > 0$.
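For illustration, the following minimal Python sketch implements iterations \eqref{eq:DGD_cons} and \eqref{eq:DGD_comp} (a toy implementation of our own, unrelated to any existing toolbox):
\begin{verbatim}
import numpy as np

def dgd(grads, W, x0, alpha, K):
    """Run K iterations of DGD: consensus step, then local
    (sub)gradient step. grads[i](x) returns a subgradient of f_i
    at x; W is the N x N averaging matrix; x0 is an N x d array
    of starting points; alpha is a scalar or a callable alpha(k)."""
    x = np.array(x0, dtype=float)
    for k in range(K):
        a = alpha(k) if callable(alpha) else alpha
        y = W @ x                                  # consensus step
        g = np.stack([grads[i](x[i]) for i in range(len(grads))])
        x = y - a * g                              # local gradient step
    return x
\end{verbatim}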
Numerous other methods rely on the interplay of gradient and average consensus steps,
such as EXTRA \cite{EXTRA}, DIGing \cite{DIGing}, and NIDS \cite{NIDS}. The convergence results of DIGing hold for time-varying network matrices, and the algorithm can also be extended to handle directed communications \cite{DIGing}. Several other algorithms can also handle directed communications \cite{directed,directed2}.
There are also accelerated decentralized methods, such as the accelerated distributed Nesterov gradient descent (Acc-DNGD) \cite{AccDNGD}. \\
Furthermore, average consensus is used in various distributed primal-dual methods such as DDA \cite{DDA}, MSDA \cite{MSDA} or MSPD \cite{MSPD}, though not directly in all of them, e.g., D-ADMM \cite{ADMM_1,ADMM}.
We note in particular that recent results have reached optimal performance for certain specific performance criteria and classes of functions \cite{optimal_gradient_tracking}, \cite{MSPD}. However, many questions and challenges remain open in the design and analysis of decentralized optimization methods.
For further information on the topic, we refer the reader to the introduction of the recent thesis \cite{H_Hendrikx_thesis} in the field.
The quality of an optimization method is often evaluated \textit{via} a worst-case guarantee. Having accurate performance bounds for a decentralized algorithm is important to correctly understand the impact of its parameters and of the network topology on its performance, and to compare it fairly with other algorithms. However, obtaining these guarantees through theoretical proofs can be a challenging task, as it requires combining the impact of the optimization component and of the interconnection network, sometimes resulting in bounds that are conservative or overly complex.
For example, we will show in Section \ref{sec:NumRes} that the existing performance bounds for DGD or DIGing are significantly worse than the actual worst-cases.
\subsection{Contributions}
We propose a new analysis methodology that allows computing numerically tight worst-case performance bounds for decentralized optimization methods. This methodology is based on an alternative computational approach that finds a worst-case performance guarantee of an algorithm by solving an optimization problem, known as the performance estimation problem (PEP), see Section \ref{sec:PEPcen} for more details.
The PEP approach has led to many results in centralized optimization, see e.g. \cite{PEP_Smooth,PEP_composite}, but it has never been exploited in decentralized optimization.
The current PEP framework lacks ways of representing the communications between the agents. Therefore, we propose two formulations of the average consensus steps that can be embedded in a solvable PEP. Both formulations are presented in Section \ref{sec:consensusPEP}.
The first one uses a given averaging matrix $W$ and is exact. It can be used in PEP with any matrix $W$ and leads to performance bounds that are tight, but specific to the given matrix. However, these bounds are too specific to understand the general behavior of the algorithm. Our second formulation thus considers entire spectral classes of symmetric matrices, as often found in the literature.
This allows the PEP problem to obtain spectral upper bounds on the performance that are valid over an entire spectral class of network matrices and to look for the worst matrix in the given class. Although this formulation is a relaxation, we observe tight results in most cases, see Section \ref{sec:NumRes}.
This spectral formulation is our main methodological contribution.
Using these two new formulations, the PEP approach can be applied directly to a large class of first-order decentralized algorithms, in a wide range of settings and problems. We demonstrate these new formulations by analyzing the worst-case performance of DGD \cite{DGD}, DIGing \cite{DIGing} and Acc-DNGD \cite{AccDNGD} in Section \ref{sec:NumRes}. For all three algorithms, the spectral formulation leads to tight spectral performance bounds and we observe that these bounds are actually independent of the number of agents $N$ (when $N \ge 2$).
For DGD and DIGing, these spectral bounds significantly improve on the existing theoretical ones. For DIGing, we also show how to use the PEP approach to obtain linear convergence rate guarantees on an infinite horizon. For Acc-DNGD, we use our tool to analyze the impact of the parameters and nuance the conjecture from \cite{AccDNGD} about the asymptotic convergence rate $\bigO(\frac{1}{K^2})$, where $K$ is the total number of iterations.
One of the advantages of this new methodology is its great flexibility, which allows answering many different questions, for example, by changing the class of functions or the performance criterion. In future works, this tool can easily be used to analyze the effect of inexact communications or gradient computations. This methodology could also consider performance criteria that account not only for efficiency but also for robustness against errors in gradient computations or communications, in order to help designing algorithms with the best efficiency-robustness trade-off.
A preliminary version of these results was published in \cite{PEP_dec}.
Our main new contributions with respect to \cite{PEP_dec} are (i) the proof of the necessary constraints used in our spectral formulation for any dimension $d$ of the variables $x_i \in \Rvec{d}$; (ii) the technique to find the worst averaging matrix based on the worst-case solutions of the spectral formulation; (iii) a wider demonstration of the methodology by analyzing the DIGing and Acc-DNGD algorithms in addition to DGD.
\subsection{Related work}
An alternative approach with similar motivations for automated performance evaluation, inspired by dynamical systems concepts, is proposed in \cite{IQC}. Integral quadratic constraints (IQC), usually used to obtain stability guarantees on complex dynamical systems, are adapted to obtain sufficient conditions for the convergence of optimization algorithms. This approach provides infinite-horizon linear rates of convergence, based on relatively small problems, but it only applies when the convergence is geometric. In comparison, the PEP approach computes the worst-case performance on a finite horizon, and therefore the size of the problem grows with the horizon. However, this allows PEP to analyze non-geometric convergence and the impact of time-varying properties.
Unlike PEP, the IQC approach offers no a priori guarantee of tightness, though it turns out to be tight in certain situations.
An application of the IQC methodology to decentralized optimization is presented in \cite{IQC_dec} and is also exploited for designing a new decentralized algorithm that achieves a faster worst-case linear convergence rate in the smooth strongly convex case. This IQC formulation uses problems whose size is independent of the number of agents $N$ and it considers a fixed framework of decentralized algorithms that embeds a lot of explicit first-order methods such as DIGing \cite{DIGing}, EXTRA \cite{EXTRA} and NIDS \cite{NIDS}.
This methodology cannot be directly applied to DGD, nor to smooth convex functions or any other situation that does not have a geometric convergence.
Our PEP approach applies directly to any decentralized method (implicit or explicit) that involves consensus steps in the form of a matrix product, without the need to cast the method into a specific framework. This makes PEP very modular and user-friendly, in particular with the PESTO toolbox \cite{PESTO}.
\section{General PEP approach} \label{sec:PEPcen}
In principle, a tight performance bound on an algorithm could be obtained by running it on every single instance (function and initial condition) allowed by the considered setting and selecting the worst performance obtained. This would also directly provide an example of a ``worst'' instance, if it exists.
The performance estimation problem (PEP) formulates this abstract idea as a real optimization problem that maximizes the error measure of the algorithm result, over all possible functions and initial conditions allowed \cite{PEP_Drori}.
This optimization problem is inherently infinite-dimensional, as it contains a continuous function among its variables. Nevertheless, Taylor et al. have shown \cite{PEP_Smooth,PEP_composite} that PEP can be solved exactly using an SDP formulation, for a wide class of centralized first-order algorithms and different classes of functions.
The main ingredients of a centralized PEP are: (i) a performance measure $\mc{P}$, e.g. $f(x^k)-f(x^*)$; (ii) a class of functions $\mc{F}$, e.g. the class $\mc{F}_{\mu,L}$ of $\mu$-strongly convex and $L$-smooth functions; (iii) an optimization method $\mc{M}$ on class $\mc{F}$; (iv) a set $\mc{I}^0$ of initial conditions, e.g. $\|x^0-x^*\|^2 \le 1$.
These ingredients are organized into an optimization problem as \vspace{-1mm}
\begin{align}
\underset{f, x^0,\dots, x^K, x^*}{\sup} \quad & \mathrm{\mc{P}}\qty(f, x^0,\dots,x^K, x^*) \hspace*{-4mm} \label{eq:gen_PEP}\\[-0.3mm]
\text{s.t.} \hspace{8mm} & \hspace{-5mm} f \in \mc{F}, \hspace{8mm}
x^* \in \mathrm{argmin }~ f, \\[-0.6mm]
& \hspace{-5mm} x^k \text{ are iterates from method $\mc{M}$ applied on $f$,} \\[-0.6mm]
& \hspace{-5mm} \mc{I}^0 ~\text{ holds.} \\[-6mm]
\end{align}
To overcome the infinite dimension of variable $f$, the problem can be discretized into $\{\qty(x^k,g^k,f^k)\}_{k\in I}$, where $g^k$ and $f^k$ are respectively the (sub)gradient and the function value of $f$ at point $k$ and $I = \{0,\dots,K,*\}$. Then, the constraint $f \in \mc{F}$ is replaced by interpolation conditions ensuring that there exists a function of class $\mc{F}$ which interpolates those data points $\{\qty(x^k,g^k,f^k)\}_{k \in I}$, i.e. these values are consistent with an actual function.
Such constraints are provided for many different classes of functions in \cite[Section 3]{PEP_composite}. They are generally quadratic and potentially non-convex in the iterate and gradient vectors, but they are linear in the scalar products of these vectors and in the function values. The same holds true for most classical performance criteria and initial conditions.
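For instance, for the class $\mc{F}_{\mu,L}$ of $\mu$-strongly convex and $L$-smooth functions with $0 \le \mu < L$, the interpolation conditions of \cite{PEP_composite} require, for all pairs $i,j \in I$ (we recall them here only for illustration),
$$
f^i \ge f^j + \langle g^j, x^i - x^j\rangle + \frac{1}{2L}\|g^i-g^j\|^2 + \frac{\mu L}{2(L-\mu)}\Big\|x^i - x^j - \frac{1}{L}\big(g^i - g^j\big)\Big\|^2,
$$
which is indeed linear in the function values and in the scalar products of the iterates and gradients.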
We can then consider these scalar products directly as decision variables of the PEP. For this purpose, we define a Gram matrix $G$ that contains scalar products between all vectors, e.g. the iterates $x^k \in \Rvec{d}$ and subgradients $g^k \in \Rvec{d}$. \vspace{-0.25mm}
\begin{align}
G = P^TP, \text{ with } P = \qty[g^0\dots g^K g^* x^0\dots x^K x^*]. \\[-5.5mm]
\end{align}
By definition, $G$ is symmetric and positive semidefinite.
Moreover, to every matrix $G \succeq 0$ corresponds a matrix $P$ whose number of rows is equal to $\text{rank}~G$. We can show (see \cite{PEP_composite}) that the reformulation of \eqref{eq:gen_PEP} with $G$ is lossless provided that we do not impose the dimension $d$ in \eqref{eq:gen_PEP}, and indeed look for the worst-case over all possible dimensions (imposing the dimension would correspond to adding a typically less tractable rank constraint on $G$).
The idea is therefore to formulate a PEP of the form of \eqref{eq:gen_PEP} as an equivalent positive semidefinite program (SDP) using the Gram matrix $G$ and the vector of functional values $f_v = [f^k]_{k \in I}$ as variables.
This SDP formulation is convenient because it can be solved numerically to global optimality, leading to a tight worst-case bound, and it also provides the worst-case solution over all possible problem dimensions.
We refer the reader to \cite{PEP_composite} for more details about the SDP formulation of PEP, including ways of reducing the size of matrix $G$. However, the dimension of $G$ always depends on the number of iterations $K$.
From a solution $G$, $f_v$ of the SDP formulation, we can construct a solution for the discretized variables $\{\qty(x^k,g^k,f^k)\}_{k\in I}$, e.g. using the Cholesky decomposition of $G$. Since these points satisfy sufficient interpolation constraints, we can also construct a function from $\mc{F}$ interpolating these points.
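As a toy illustration of this pipeline (a sketch of our own, independent of the PESTO toolbox; the parameter values and variable names below are arbitrary), the following Python/CVXPY code computes the worst-case of one step of centralized gradient descent with step-size $1/L$ on $L$-smooth convex functions, using a Gram matrix and the corresponding interpolation constraints:
\begin{verbatim}
import numpy as np
import cvxpy as cp

L, R2 = 1.0, 1.0                  # smoothness, bound on ||x0 - x*||^2
G = cp.Variable((3, 3), PSD=True) # Gram matrix of the basis (x0, g0, g1)
f = cp.Variable(2)                # f(x0), f(x1); we set f(x*) = 0

# coordinates in the basis (x* = 0 and g* = 0 w.l.o.g.); index 2 = optimum
x = {0: np.array([1., 0., 0.]), 1: np.array([1., -1. / L, 0.]),
     2: np.zeros(3)}              # x1 = x0 - (1/L) g0
g = {0: np.array([0., 1., 0.]), 1: np.array([0., 0., 1.]), 2: np.zeros(3)}
fv = {0: f[0], 1: f[1], 2: 0}
ip = lambda a, b: a @ G @ b       # scalar products expressed through G

cons = [ip(x[0] - x[2], x[0] - x[2]) <= R2]   # initial condition
for i in (0, 1, 2):               # interpolation of L-smooth convex f
    for j in (0, 1, 2):
        if i != j:
            cons.append(fv[i] >= fv[j] + ip(g[j], x[i] - x[j])
                        + ip(g[i] - g[j], g[i] - g[j]) / (2 * L))

prob = cp.Problem(cp.Maximize(fv[1]), cons)   # worst-case f(x1) - f(x*)
prob.solve()
print(prob.value)                 # expected close to L*R2/6
\end{verbatim}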
Proposition \ref{prop:GramPEP} states sufficient conditions under which a PEP in the form of \eqref{eq:gen_PEP} can be formulated as an SDP. These conditions are satisfied in most PEP settings, which allows the SDP PEP formulation to be written and solved for a large class of (implicit and explicit) first-order methods, with different classes of functions and performance criteria, see \cite{PEP_composite}. The proposition uses the following definition.
\begin{definition}[Gram-representable \cite{PEP_composite}] \label{def:Gram}
Consider a Gram matrix $G$ and a vector $f_v$, as defined above. We say that
a constraint or an objective is linearly (resp. LMI) Gram-representable if it can be expressed using a finite set of linear (resp. LMI) constraints involving (part of) $G$ and $f_v$.
\end{definition} \smallskip
\begin{proposition}[\hspace{-0.5pt}{{\cite[Proposition 2.6]{PEP_composite}}}] \label{prop:GramPEP}
If the interpolation constraints of the class of functions $\mc{F}$, the satisfaction of the method $\mc{M}$, the performance measure $\mc{P}$ and the set of constraints $\mc{I}$, which includes the initial conditions, are linearly (or LMI) Gram-representable, then, computing the worst-case for criterion $\mc{P}$ of method $\mc{M}$ after $K$ iterations on objective functions in class $\mc{F}$ with constraints $\mc{I}$ can be formulated as an SDP, with $G \succeq 0$ and $f_v$ as variables. \\
This remains valid when the objective function is the sum of $N$ sub-functions, each belonging to a class of functions with linearly (or LMI) Gram-representable interpolation constraints.
\end{proposition} \smallskip
\emph{Remark:}
In \cite{PEP_composite}, Definition \ref{def:Gram} and Proposition \ref{prop:GramPEP} were only formulated for linearly Gram-representable constraints, but their extension to LMI Gram-representable constraints is direct. Such constraints appear in the analysis of consensus steps with spectral classes of network matrices.
PEP techniques have allowed answering several important questions in optimization, see e.g. the list in \cite{Taylor_thesis}, and making important progress in the tuning of certain algorithms, including the well-known centralized gradient descent.
It was further exploited to design optimal first-order methods: OGM for smooth convex optimization \cite{OGM} and its extension ITEM for smooth strongly convex optimization \cite{ITEM}.
It can also be used to deduce proofs about the performance of the algorithms \cite{PEP_compo}. It has been made widely accessible \textit{via} a Matlab \cite{PESTO} and Python \cite{pepit2022} toolbox.
However, PEPs have never been used to study the performance of decentralized methods.
\section{Representing consensus steps in PEP} \label{sec:consensusPEP}
The main missing block to develop a PEP formulation for decentralized methods is to find a proper way of representing the consensus steps of the methods in the SDP PEP.
In PEP for decentralized methods, we must find the worst-case for each of the $N$ local functions $f_i$ and the $N$ sequences of local iterates $x_i^0 \dots x_i^K$ ($i=1,\dots,N$). The same techniques as in Section \ref{sec:PEPcen} can be applied to discretize the problem, using proper interpolation conditions on each local function. We also want to use a Gram matrix of scalar products to reformulate it as an SDP that can be solved efficiently. Therefore, according to Proposition \ref{prop:GramPEP}, we need to use linearly Gram representable performance criteria, classes of functions and initial conditions. Moreover, we also need to find a linearly or LMI Gram-representable way of expressing the updates of the decentralized method to be analyzed.
There is currently no representation of the consensus steps that can be embedded in the SDP formulation. Therefore, the rest of this section proposes ways to represent agent interactions in the SDP PEP formulation and thus provides the missing block for analyzing decentralized first-order optimization methods with PEP.
Agent interactions are part of any decentralized method and often take place \textit{via} weighted averaging, which can be described as a consensus step of the following form, similar to that used in DGD \eqref{eq:DGD_cons}: \vspace{-1.5mm}
\begin{equation} \label{eq:consensus}
y_i = \sum_{j=1}^N w_{ij} x_j, \qquad \text{for all $i\in\{1,\dots,N\}$,} \vspace{-1.5mm}
\end{equation}
where $x_j$ can represent any vector in $\Rvec{d}$ held by agent $j$, e.g., its local iterates in the case of DGD or something else in more advanced methods. Vector $y_i \in \Rvec{d}$ is an auxiliary variable that represents the result of the interaction and $W \in \Rmat{N}{N}$ is the averaging matrix.
This form of communication is used in many decentralized methods such as DGD \cite{DsubGD}, DIGing \cite{DIGing}, EXTRA \cite{EXTRA}, NIDS \cite{NIDS} and the results presented in this section can be exploited for all these methods.
When $K$ different consensus steps are involved in the algorithm, we observe that the Gram matrix $G$ of all scalar products contains the submatrix $G_c$:
\begin{align}
G_c &= P_c^TP_c, \label{eq:Gc} \\[1mm]
\small\text{ with } P_c &= \begin{bmatrix} x_1^0 \dots x_N^0 \dots x_1^K \dots x_N^K~y_1^0 \dots y_N^0 \dots y_1^K \dots y_N^K \end{bmatrix}. \\[-8mm]
\end{align}
We will see that the new constraints we propose to represent the consensus steps are linearly or LMI Gram-representable in $G_c$, and then also linearly or LMI Gram-representable in $G$.
\subsection{Averaging matrix given a priori}
When the averaging matrix is given \emph{a priori}, the consensus step \eqref{eq:consensus} corresponds to a set of linear equality constraints on $y_i$ and $x_j$, which are equivalent to the constraints \vspace{-1mm}
$$\Big(y_i - \sum_{j=1}^N w_{ij} x_j\Big)^T\Big(y_i - \sum_{j=1}^N w_{ij} x_j\Big) = 0, \text{ for $i=1,\dots,N$.} \vspace{-1mm}$$
These constraints only involve scalar products from $G_c$ \eqref{eq:Gc} and are thus linearly Gram-representable. They can therefore be used in the SDP formulation of a PEP, see Proposition \ref{prop:GramPEP}. Alternatively, the linear equality constraints \eqref{eq:consensus} can also be used to substitute the values of $y_i$ in the problem, which reduces its size. This solution is also preferable for numerical reasons.
In any case, this allows writing PEPs that provide exact worst-case performances for the given decentralized method and the specific averaging matrix given \emph{a priori}. We call this the \textit{exact PEP formulation}. It can be applied to any matrix $W$, and not only for symmetric or doubly stochastic ones.
This can be useful for trying different network matrices and observing their impact on the worst-case performance of the algorithm. It will serve as an exact comparison baseline in the numerical experiments of Section \ref{sec:NumRes}.
The next section presents a way of representing communications in PEP that allows obtaining more general performance guarantees, valid over entire classes of network matrices and not only for a specific one.
\subsection{Averaging matrix as a variable}
We now consider that the matrix $W$ is not given \emph{a priori}, but is \emph{one of the decision variables} of the performance estimation problem, with bounds on its possible eigenvalues. Hence, the PEP also looks for the worst averaging matrix among all the possible ones. Typically, the literature considers matrices that are symmetric, doubly stochastic and all of whose eigenvalues (except $1$) lie in a given range. We were unable to represent this class of matrices in the SDP PEP formulation, in particular the non-negativity assumption embedded in the double stochasticity. Therefore, we will consider a slightly more general class of matrices, where we relax this non-negativity assumption.
\begin{definition}[Generalized doubly stochastic matrix \cite{gds}] \label{def:gds} A matrix $W \in \Rmat{N}{N}$ is \emph{generalized doubly stochastic} if its rows and columns sum to one, i.e., if \vspace{-1.5mm}
$$\sum_{i=1}^N w_{ij} = 1, \qquad \sum_{i=1}^N w_{ji} = 1, \quad \text{ for $j=1,\dots,N$.} \vspace{1.5mm}$$
\end{definition}
The resulting worst matrix of the PEP may thus have negative elements. However, the provided worst-case guarantees are also valid for non-negative matrices. Moreover, we note that most results from the literature exploiting spectral information of stochastic matrices do not in fact use the non-negativity of $W$ and are thus really about generalized stochastic matrices, see e.g. \cite{DGD, EXTRA, NIDS}. We analyze the impact of this relaxation in the case of DGD in Section \ref{sec:analysis_DGD}.
Formally, the search space for $W$ is restricted by the following constraints, for each agent $i\in\{1,\dots,N\}$ and each consensus step $k\in\{1,\dots,K\}$, \vspace{-1mm}
\begin{align}
y_i^k &= \sum_{j=1}^N w_{ij} x_j^k, \label{eq:cons_2} \\
W &\in \Wcl{\lm}{\lp}, \label{eq:cons_1}\\[-5.5mm]
\end{align}
where $\Wcl{\lm}{\lp}$ is the set of real, symmetric and generalized doubly stochastic $N\times N$ matrices that have their eigenvalues between $\lm$ and $\lp$, except for $\lam_1 = 1$:
$$ \lm \le \lam_N\le\cdots\le\lam_2\le\lp \quad \text{where $\lm, \lp \in \qty(-1,1)$ }. $$
We do not have a direct way of representing constraints \eqref{eq:cons_2} and \eqref{eq:cons_1} in an LMI Gram-representable manner that can be embedded in an SDP PEP, hence we will use a relaxation of these constraints which, as we will see in Section \ref{sec:NumRes}, is often close to tight. From constraints \eqref{eq:cons_2} and \eqref{eq:cons_1}, we derive new necessary conditions involving only the variables $y_i^k$ and $x_i^k$, which allows eliminating $W$ from the problem. We also show that these new constraints can be expressed in terms of $G_c$ \eqref{eq:Gc} and are therefore Gram-representable.
\subsubsection*{Notations} Let $x_i^k, y_i^k \in \Rvec{d}$ be the local variables of agent $i$ at iteration $k$ of an algorithm.
Let $X^j$, $Y^j \in \Rmat{N}{K}$ be matrices containing the $j^{\mathrm{th}}$ component of each of these local variables:
$$ X^j_{i,k} = (x_i^k)_j \quad \text{and}\quad Y^j_{i,k} = (y_i^k)_j \qquad \text{for ~~$\substack{j=1,\dots,d \\ i=1,\dots,N \\ k=1,\dots,K}$ }$$
Each column thus corresponds to a different consensus step $k$, and each row to a different agent $i$. Using this notation, the consensus step constraints \eqref{eq:cons_2} can simply be written as $d$ matrix equations: $Y^j = W X^j$ for each $j=1,\dots,d$.
Moreover, we can stack each $X^j$ and $Y^j$ vertically in matrices $X,Y \in \Rmat{Nd}{K}$, and write all the consensus steps \eqref{eq:cons_2} with one matrix equation:
$$\underbrace{\begin{bmatrix} Y^{1} \\ \vdots \\ Y^{d} \end{bmatrix}}_{Y} = \underbrace{\begin{bmatrix} W &&0 \\& \ddots &\\ 0&&W \end{bmatrix}}_{\Wt} \underbrace{\begin{bmatrix} X^{1} \\ \vdots \\ X^{d} \end{bmatrix}}_{X}. $$
The matrix $\Wt \in \Rmat{Nd}{Nd}$ is block-diagonal repeating $d$ times $W$ and can be written as $\Wt = (I_d \otimes W)$, where $I_d \in \Rmat{d}{d}$ is the identity matrix and $\otimes$ denotes the Kronecker product. We decompose matrices $X$ and $Y$ in average and centered parts: \vspace{-1mm}
$$ X = (\Xb \otimes \mathbf{1}_N ) + \Xc, \qquad Y = (\Yb \otimes \mathbf{1}_N) + \Yc,$$
where $\Xb, \Yb \in \Rmat{d}{K}$ contain the agent averages, defined as $\Xb_{\cdot k} = \frac{1}{N} \sum_{i=1}^N x_i^k$, $\Yb_{\cdot k} = \frac{1}{N} \sum_{i=1}^N y_i^k$ for $k=1,\dots,K$, and $\mathbf{1}_N = \qty[1\dots 1]^T \in \Rvec{N}$.
The centered matrices $\Xc, \Yc \in \Rmat{Nd}{K}$ have by definition an agent average of zero for each component and iteration: $(I_d \otimes \mathbf{1}_N)^T \Xc = \mathbf{0}_{d\times K}$.
\subsubsection*{Gram-representable relaxation of \eqref{eq:cons_2} and \eqref{eq:cons_1}}
\begin{theorem}[Consensus Constraints] \label{thm:conscons}
If $Y^j = WX^j$, for every $j=1,\dots,d$ and for a same matrix $W\in \Wcl{\lm}{\lp}$,
i.e., if $Y = (I_d \otimes W) X$, then
\begin{enumerate}[(i)]
\item The matrices $X^TY$ and $\Xc^T\Yc$ are symmetric,
\item The following constraints are satisfied \vspace{-0.5mm}
\begin{align}
\Xb &= \Yb, \label{eq:eq_mean} \\
\lm \Xc^T \Xc ~ \preceq ~\Xc^T \Yc~ &\preceq ~\lp \Xc^T \Xc, \label{eq:scal_cons} \\
(\Yc - \lm \Xc)^T(\Yc - \lp \Xc) ~&\preceq ~0, \hspace{20mm} \label{eq:var_red} \\[-6.5mm]
\end{align}
where the notations $\succeq$ and $\preceq$ denote respectively positive and negative semi-definiteness.
\item Constraints \eqref{eq:eq_mean}, \eqref{eq:scal_cons}, \eqref{eq:var_red} are LMI Gram-representable.
\end{enumerate}
\end{theorem}
\begin{proof}
First, we average elements from both sides of the assumption $Y^j = WX^j$ to obtain constraint \eqref{eq:eq_mean}:
$$ \Yb_{j\cdot} = \frac{\mathbf{1}^T Y^j}{N} = \frac{\mathbf{1}^T WX^j}{N} = \frac{\mathbf{1}^T X^j}{N} = \Xb_{j\cdot} \text{ for $j=1,\dots,d$} $$
where $\mathbf{1}^TW = \mathbf{1}^T$ follows from $W$ being generalized doubly stochastic, i.e., its rows and columns sum to one (see Definition \ref{def:gds}).
In the sequel, we will consider all the components $j$ at once, and then we use the notation $Y=\Wt X$, where $\Wt = I_d \otimes W$ is in $\Rmat{Nd}{Nd}$. Since $\Wt$ is a block-diagonal matrix repeating $d$ times $W$, if $W\in \Wcl{\lm}{\lp}$, then $\Wt\in \Wcl{\lm}{\lp}$. \\
The symmetry of the matrix $X^TY$ follows from the assumption $Y = \Wt X$, with $\Wt$ symmetric. The same argument shows the symmetry of $\Xc^T\Yc$, because $Y = \Wt X$ and $\Xb = \Yb$ imply $\Yc = \Wt \Xc$. \\
Since the averaging matrix $\Wt$ is real and symmetric, we can take an orthonormal basis $\mathbf{v_1},\dots,\mathbf{v_{Nd}}$ of eigenvectors, corresponding to real eigenvalues $ \lam_{1}\ge\cdots \ge \lam_{Nd}$.
Since $\Wt$ is composed of $d$ diagonal blocks of the generalized doubly stochastic matrix $W$, it has the same eigenvalues as $W$ but with a multiplicity $d$ times larger for each of them.
Therefore, its largest eigenvalues are $\lam_j = 1$ ($j=1,\dots,d$) and correspond to the eigenvectors $\mathbf{v_j} = [0\dots \mathbf{1}_N^T\dots 0]^T$, where the position of $\mathbf{1}_N$ corresponds to the position of the block $j$ in $\Wt$. These eigenvectors can be written in matrix form as $V_{1,\dots,d} = I_d \otimes \mathbf{1}_N $. By assumption, the other eigenvalues are such that
$$\lm \le \lam_i \le \lp \text{ for $i=d+1,\dots,Nd$, with $\lm, \lp \in (-1,1)$.} $$
Let us now consider a combination $\Xc z$ of the columns of the matrix $\Xc$, for an arbitrary $z \in \Rvec{K}$. It can be decomposed in the eigenvector basis of $\Wt$, and used to express the combination $\Yc z$ as well: \\[-8mm]
\small
\begin{align} \label{eq:ev_basis}
\hspace{-2mm} \Xc z = \sum_{i = d+1}^{Nd} \gamma_i \mathbf{v}_i, \text{ and }
\Yc z = \Wt \Xc z = \sum_{i = d+1}^{Nd} \gamma_i \lambda_i \mathbf{v}_i, \hspace{4mm}
\end{align}
\normalsize
where $\gamma_i$ are real coefficients. The coefficients associated with $\mathbf{v_1}, \mydots, \mathbf{v_d}$ are zero because $\Xc z$ is orthogonal to these eigenvectors, which are associated with the eigenvalue $ \lam_j = 1$. Indeed, $\Xc$ is centered with respect to the agents for every component $j$ and iteration $k$, and thus we have
$$ (I_d \otimes \mathbf{1}_N)^T \Xc = V_{1,\dots,d}^T \Xc = \mathbf{0}_{d\times K}.$$
Using the decomposition \eqref{eq:ev_basis} to compute the scalar product $z^T \Xc^T \Yc z$ for any $z \in \Rvec{K}$ leads to the following scalar inequalities
\begin{align}
z^T \Xc^T \Yc z = \sum_{i = d+1}^{Nd} \gamma_i^2 \lambda_i &\ge \lm z^T \Xc^T \Xc z, \\[-3mm]
&\le \lp z^T \Xc^T \Xc z.
\end{align}
Having these inequalities satisfied for all $z \in \Rvec{K}$ is equivalent to \eqref{eq:scal_cons}.
In the same way, \eqref{eq:var_red} is obtained by verifying that the following inequality holds for all $z \in \Rvec{K}$:
$$ (\Yc z - \lm \Xc z)^T(\Yc z - \lp \Xc z) \le 0. $$
This can be done by substituting $\Xc z$ and $\Yc z$ using equation \eqref{eq:ev_basis}, and by using the bounds on $\lam_i$ ($i=d+1,\dots,Nd$). \\
Finally, we prove part (iii) of the theorem. Constraint \eqref{eq:eq_mean} is linearly (and thus also LMI) Gram-representable because it can be expressed using only elements of $G_c$ \eqref{eq:Gc}, i.e., scalar products between the $x_i^k$ and $y_j^k$:
$$\footnotesize \Biggl(\frac{1}{N} \sum_i \qty(x_i^k-y_i^k) \Biggr)^T\Biggl(\frac{1}{N} \sum_j \qty(x_j^k-y_j^k) \Biggr) = 0 ~ \text{ for $k=1,...,K$.}$$
Constraints \eqref{eq:scal_cons} and \eqref{eq:var_red} are LMI Gram-representable because they are LMIs whose entries can be defined using only the entries of $G_c$, which is a submatrix of the full Gram matrix $G$ of scalar products. For example, the entry $k,l$ of $\Xc^T\Xc$ can be expressed as the scalar product of columns $k$ and $l$ of $\Xc$
$$ \qty(\Xc^T)_{k \cdot} \qty(\Xc)_{\cdot l} = \sum_{i=1}^N \Bigl(x_i^k - \frac{1}{N} \sum_j x_j^k \Bigr)^T \Bigl(x_i^l - \frac{1}{N} \sum_j x_j^l \Bigr). $$
\end{proof}
\smallskip
Using Theorem \ref{thm:conscons}, we can relax constraints \eqref{eq:cons_2} and \eqref{eq:cons_1} and replace them by \eqref{eq:eq_mean}, \eqref{eq:scal_cons} and \eqref{eq:var_red}, which are LMI Gram-representable.
Then, Proposition \ref{prop:GramPEP} allows to write a relaxed SDP formulation of a PEP providing worst-case results valid for the entire spectral class of matrices $\Wcl{\lm}{\lp}$. We call this formulation the \textit{spectral PEP formulation} and its results the \textit{spectral worst-case}.
This SDP formulation has matrix $G\succeq0$ and vectors $f_{i,v}$ ($i=1,...,N$) as decision variables. The vector $f_{i,v}$ contains the function values of $f_i$ at the different iterates $x_i^k$ ($k=1,...,K$). The values in $G$ correspond to the scalar products of the iterates ($x_i^k, y_i^k$) and the gradients ($g_i^k$) of the different agents.
When different averaging matrices are used for different sets of consensus steps, the constraints from Theorem \ref{thm:conscons} can be applied independently to each set of consensus steps.
Constraint \eqref{eq:eq_mean} is related to the stochasticity of the averaging matrix and imposes that the variable $x$ has the same agent average as $y$, for each consensus step and for each component.
Linear matrix inequality constraints \eqref{eq:scal_cons} and \eqref{eq:var_red} imply in particular scalar constraints for the diagonal elements.
They correspond to independent constraints for each consensus step, i.e., for each column $\xc$ and $\yc$ of matrices $\Xc$, $\Yc$: \vspace{-0.5mm}
\begin{align}
\lm \xc^T \xc \le \xc^T\yc &\le \lp \xc^T \xc, \label{eq:scal_cons_1}\\
(\yc - \lm \xc)^T(\yc - \lp \xc) & \le 0. \label{eq:scal_cons_2} \\[-5.5mm]
\end{align}
These constraints imply in particular that \vspace{-0.5mm} $$\yc^T \yc \le \lmax^2~ \xc^T\xc,\quad \text{where $\lmax = \max(|\lm|,|\lp|)$,} \vspace{-0.5mm}$$
meaning that the disagreement between the agents, measured by $\yc^T \yc$ for $y$ and $\xc^T \xc$ for $x$, is reduced by a factor $\lmax^2 \in [0,1)$ after a consensus.
But constraints \eqref{eq:scal_cons} and \eqref{eq:var_red} also allow linking different consensus steps to each other, \textit{via} the impact of off-diagonal terms, in order to exploit the fact that these steps use the same averaging matrix. \\
We can also interpret constraints \eqref{eq:scal_cons} and \eqref{eq:var_red} as a sum over all the dimensions $j=1,\dots,d$. Each term of this sum corresponds to the same constraint expression as \eqref{eq:scal_cons} and \eqref{eq:var_red}, but applied only to $\Xc^j$ and $\Yc^j \in \Rvec{N \times K}$.
Indeed, any product involving $\Xc$ or $\Yc \in \Rvec{Nd \times K}$ can be written as a sum over the dimensions $d$. For instance, for $\Xc^T \Xc$, we have \vspace{-1.5mm}
$$\Xc^T \Xc = \sum_{j=1}^d (\Xc^j)^T \Xc^j.$$
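As a quick numerical sanity check of Theorem \ref{thm:conscons} (an illustrative sketch of our own; the matrix construction and all parameter values are arbitrary), one can draw a random $W \in \Wcl{-\lam}{\lam}$, generate consensus steps $Y = (I_d \otimes W) X$ and verify that constraints \eqref{eq:eq_mean}, \eqref{eq:scal_cons} and \eqref{eq:var_red} hold:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, d, K, lam = 5, 2, 3, 0.7      # agents, dimension, steps, bound

# random symmetric generalized doubly stochastic W with
# eigenvalues (except 1) drawn in [-lam, lam]
Q, _ = np.linalg.qr(np.hstack([np.ones((N, 1)),
                               rng.standard_normal((N, N - 1))]))
W = (np.ones((N, N)) / N
     + Q[:, 1:] @ np.diag(rng.uniform(-lam, lam, N - 1)) @ Q[:, 1:].T)

X = rng.standard_normal((N * d, K))       # stacked components X^1,...,X^d
Y = np.kron(np.eye(d), W) @ X             # Y = (I_d kron W) X
V1 = np.kron(np.eye(d), np.ones((N, 1)))  # agent-averaging directions
Xc = X - V1 @ (V1.T @ X) / N              # centered parts
Yc = Y - V1 @ (V1.T @ Y) / N

eigs = lambda M: np.linalg.eigvalsh((M + M.T) / 2)
print(np.allclose(V1.T @ X, V1.T @ Y))                     # averages kept
print(np.allclose(X.T @ Y, (X.T @ Y).T))                   # X^T Y symmetric
print(eigs(Xc.T @ Yc + lam * Xc.T @ Xc).min() >= -1e-9)    # lower LMI
print(eigs(lam * Xc.T @ Xc - Xc.T @ Yc).min() >= -1e-9)    # upper LMI
print(eigs((Yc + lam*Xc).T @ (Yc - lam*Xc)).max() <= 1e-9) # product LMI
\end{verbatim}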
The following proposition shows that constraint \eqref{eq:scal_cons} is redundant when we consider a symmetric range of eigenvalues, i.e., when $\lp = -\lm$.
\begin{proposition}[Symmetric range of eigenvalues] \label{prop:sym_ev}
If $\lp = - \lm = \lam \in \qty[0,1]$, then LMI constraints \eqref{eq:scal_cons} and \eqref{eq:var_red} from Theorem \ref{thm:conscons} are equivalent to
\begin{equation}
\Yc^T\Yc \preceq \lambda^2 \Xc^T\Xc. \label{eq:var_red_sym}
\end{equation}
\end{proposition}
\begin{proof} Constraint \eqref{eq:var_red} is equivalent to \eqref{eq:var_red_sym} when $\lp = - \lm = \lam$.
Constraint \eqref{eq:scal_cons} with $\lp = - \lm = \lam$ becomes
\begin{equation}
-\lam \Xc^T \Xc ~ \preceq ~\Xc^T \Yc~ \preceq ~\lam \Xc^T \Xc, \label{eq:scal_cons_sym}
\end{equation}
We now show that constraint \eqref{eq:scal_cons_sym} is implied by \eqref{eq:var_red_sym}, which achieves the proof.
When $\lam \ge 0$, constraint \eqref{eq:var_red_sym} is equivalent to the following bound on $\|\Yc z\|$, for any $z\in \Rvec{K}$,
\begin{equation} \label{eq:var_red_sym_eq}
\|\Yc z\| \le \lam \|\Xc z\| \qquad \text{for any $z\in \Rvec{K}$.}
\end{equation}
Moreover, the scalar product between $\Xc z$ and $\Yc z$ can be bounded, for any $z\in \Rvec{K}$, using the Cauchy-Schwarz inequality
\begin{equation} \label{eq:scal_cons_sym_eq}
- \|\Xc z\| ~\|\Yc z\| \le (\Xc z)^T(\Yc z) \le \|\Xc z\| ~\|\Yc z\|.
\end{equation}
We can combine inequality \eqref{eq:scal_cons_sym_eq} with \eqref{eq:var_red_sym_eq} and obtain
$$ - \lam z^T\Xc^T\Xc z \le z^T\Xc^T\Yc z \le \lam z^T \Xc^T\Xc z,~ \text{for any $z\in \Rvec{K}$,}$$
which is equivalent to \eqref{eq:scal_cons_sym}.
\end{proof}
Proposition \ref{prop:sym_ev} means that when we consider a class of matrices with a symmetric range of eigenvalues, i.e., $\Wcl{-\lam}{\lam}$, we can remove constraint \eqref{eq:scal_cons} from the spectral PEP formulation, without modifying its result. Other constraints from Theorem \ref{thm:conscons}, including the symmetry of $X^TY$ and $\Xc^T\Yc$, should still be imposed.
\subsubsection*{Recovering the worst averaging matrix}
Theorem \ref{thm:conscons} provides necessary constraints for describing a set of consensus steps $Y^j=WX^j$ that use an unknown averaging matrix $W$ from a spectral class of symmetric generalized doubly-stochastic matrices $\Wcl{\lm}{\lp}$.
But these constraints are not known to be sufficient; so even if matrices $X$ and $Y$ satisfy constraints \eqref{eq:eq_mean}, \eqref{eq:scal_cons}, \eqref{eq:var_red}, we do not know if there is a matrix $W \in \Wcl{\lm}{\lp}$ such that $Y^j=WX^j$ for all $j=1,\dots,d$.
The question of the existence of such a matrix can be expressed as follows: is the optimal cost of the following problem equal to 0?
\begin{mini}{\Wh \in \Wcl{\lm}{\lp}}{\mynorm{\Yr-\Wh\Xr}_F\label{eq:SDP_Wh}}{}{}
\end{mini}
where the matrices $\Xr, \Yr \in \Rmat{N}{Kd}$ stack the $X^j, Y^j \in \Rmat{N}{K}$ horizontally and thus reshape the matrices $X$ and $Y$. This reshaping is needed to recover a matrix $\Wh$ that is identical for every dimension $j$ and has the appropriate size ($N \times N$).
Note that problem \eqref{eq:SDP_Wh} can easily be solved since it is an SDP, as the constraint $\Wh \in \Wcl{\lm}{\lp}$ can be formulated as $ {\lm I \preceq ( \Wh - \frac{\mathbf{11}^T}{N}) \preceq \lp I}$ together with $\Wh^T = \Wh$ and $\Wh \mathbf{1} = \mathbf{1}$.
If the optimal cost of \eqref{eq:SDP_Wh} is zero, the optimal value of $\Wh$ is a valid worst averaging matrix $W$.
Alternatively, problem \eqref{eq:SDP_Wh} without any constraints is a least-squares problem whose solution is cheaper to compute and is given by ${\Whp = \Yr\Xr^{\dag}}$, where $\Xr^{\dag}$ is the pseudo-inverse of $\Xr$. If its remainder $\mynorm{\Yr-\Whp\Xr}_F$ is zero, then we check a posteriori that the constraint $\Whp \in \Wcl{\lm}{\lp}$ is satisfied. This was often the case in our experiments from Section \ref{sec:NumRes} and allows the worst averaging matrix to be recovered rapidly. Note, though, that when $\Whp$ does not satisfy the required conditions, a valid $W$ might still exist, and so one must solve \eqref{eq:SDP_Wh} to find it.
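The procedure just described can be summarized by the following Python/CVXPY sketch (illustrative code of our own; the function name and tolerance are arbitrary). It first tries the cheap least-squares candidate $\Whp = \Yr\Xr^{\dag}$ and, if the a posteriori check fails, falls back on solving \eqref{eq:SDP_Wh}:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def recover_worst_matrix(Xr, Yr, lm, lp, tol=1e-6):
    """Try to recover W in W(lm, lp) such that Yr = W Xr, where
    Xr, Yr are the N x (K d) reshaped worst-case iterates."""
    N = Xr.shape[0]
    ones = np.ones((N, 1))
    J = ones @ ones.T / N
    W = Yr @ np.linalg.pinv(Xr)          # unconstrained least squares
    ev = np.linalg.eigvalsh((W + W.T) / 2 - J)
    if (np.linalg.norm(Yr - W @ Xr) < tol
            and np.allclose(W, W.T, atol=tol)
            and np.allclose(W @ ones, ones, atol=tol)
            and lm - tol <= ev.min() and ev.max() <= lp + tol):
        return W                         # a posteriori check succeeded
    Wh = cp.Variable((N, N), symmetric=True)   # otherwise solve the SDP
    cons = [Wh @ ones == ones,
            Wh - J >> lm * np.eye(N),
            Wh - J << lp * np.eye(N)]
    prob = cp.Problem(cp.Minimize(cp.norm(Yr - Wh @ Xr, 'fro')), cons)
    prob.solve()
    return Wh.value if prob.value is not None and prob.value < tol else None
\end{verbatim}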
We now have all the elements needed to write and solve PEPs for decentralized optimization methods, including ways of representing the consensus steps in a Gram-representable manner, which allows the PEP to be formulated as an SDP. In the next section, we demonstrate the methodology by analyzing three decentralized methods.
\section{Demonstration of our methodology} \label{sec:NumRes}
Using results from previous sections, we can build two PEP formulations for analyzing the worst-case performance of a large class of decentralized optimization methods: the exact and the spectral formulations. These formulations make it possible to obtain accurate numerical performance bounds rapidly and automatically.
We demonstrate the power of the methodology by focusing on the analysis of 3 selected algorithms.
For each algorithm, we use the same settings (performance criterion, initial condition, class of functions,...) as its theoretical bound, in order to obtain comparable results. These particular settings are not a limitation of our PEP formulations, which can represent a much larger diversity of situations.
\subsection{Distributed (sub)gradient descent (DGD)} \label{sec:analysis_DGD}
We consider $K$ iterations of DGD described by \eqref{eq:DGD_cons} and \eqref{eq:DGD_comp}, with constant step-size $\alpha$, in order to solve problem \eqref{opt:dec_prob}, i.e., minimizing $f(x) = \frac{1}{N}\sum_{i=1}^N f_i(x)$, with $x^*$ as minimizer of $f$.
There are different studies on DGD; e.g., \cite{DGD1} shows that its iterates converge to a neighborhood of the optimal solution $x^*$ when the step-size is constant. In the sequel, we take as baseline the results of a recent survey \cite{DGD} providing a theoretical bound for the functional error at the average of all the iterates, valid when subgradients are bounded. \smallskip
\begin{theorem}[Performance of DGD {{\cite[Theorem 8]{DGD}}}] \label{thm:bound}
~\\ Let $f_1,\dots,f_N \in \mc{F}_R$, i.e. convex local functions with subgradients bounded by $R$. Let $x^0$ be an identical starting point for all agents such that $\|x^0 - x^*\|^2 \le D^2$. Finally, let $W \in \Wcl{-\lam}{\lam}$ for some $\lam \in [0,1)$, i.e. $W$ is a symmetric and generalized doubly stochastic matrix with eigenvalues $\lam_2,\dots,\lam_N \in \qty[-\lam,\lam]$. \\
If we run DGD for $K$ steps with a constant step-size $\alpha = \frac{1}{\sqrt{K}}$, then there holds\footnote{Note that the factor 2 in the second term of the bound \eqref{eq:th_bound} was missing in \cite{DGD}, but its presence was confirmed by the authors of \cite{DGD}.}
\vspace{-3pt}
\begin{equation} \label{eq:th_bound}
f(\xmoy ) - f(x^*) \le \frac{D^2 + R^2}{2 \sqrt{K}} + \frac{2 R^2}{\sqrt{K}(1-\lam)}, \vspace{-3pt}
\end{equation}
where $\xmoy = \frac{1}{N(K+1)} \sum_{i=1}^N \sum_{k=0}^K x_i^k $ is the average over all the iterations and all the agents.
\end{theorem}
Theorem \ref{thm:bound} was stated in \cite{DGD} for a doubly stochastic matrix $W$, but the proof never uses the non-negativity assumption; it therefore also holds for generalized doubly stochastic matrices.
For comparison purposes, we will analyze the performance of DGD using our PEP formulations in exactly the same settings as Theorem \ref{thm:bound}.
Our first PEP formulation searches for solutions to the following maximization problem: \vspace{-2.2mm}
\begin{align}
\hspace{-1cm}\max_{\substack{\footnotesize{~x^*, f_i, x^0_i,y_i^0\dots x^K_i,y_i^K} \\ \footnotesize{\text{for } i=1,\dots,N}}} ~ & \frac{1}{N}\sum_{i=1}^N \qty(f_i(\xmoy)- f_i(x^*)) &\tag{DGD$(W)$-PEP} \label{prob:DGD_PEP}\\
\text{s.t.} \hspace{12mm} & \hspace*{-10mm} f_i \in \mc{F}_R &\small{\forall i} \\[-0.7mm]
& \hspace*{-10mm} x^* = \underset{x}{\mathrm{argmin}}~ \frac{1}{N}\sum_{i=1}^N f_i(x),& \\[-0.8mm]
& \hspace*{-10mm} \|x^0_i - x^*\|^2 \le D^2 \quad \text{and} \quad x_i^0 = x_j^0 & \small{\forall i, j }\\[-0.8mm]
& \hspace*{-10mm} y_i^k = \sum_{j=1}^N w_{ij} x_j^k, & \small{\forall i, k} \label{eq:DGD-PEP-cons}\\[-0.7mm]
& \hspace*{-10mm} x_i^{k+1} = y_i^k - \alpha^k \nabla f_i(x_i^k), & \small{\forall i, k} \\[-5mm]
\end{align}
The objective function and the constraints of \eqref{prob:DGD_PEP} are all linearly Gram-representable and the problem can then be formulated as an SDP, according to Proposition \ref{prop:GramPEP}.
This is referred to as the \emph{exact formulation} because it finds the exact worst-case performance of the algorithm for a specific given matrix $W$.
The second formulation relaxes the consensus constraints imposed by equation \eqref{eq:DGD-PEP-cons} and replaces them with the constraints from Theorem \ref{thm:conscons}, with $-\lm=\lp = \lam$. Those are LMI Gram-representable (see Theorem \ref{thm:conscons}) and can then be used in the SDP formulation of PEP, according to Proposition \ref{prop:GramPEP}. This formulation is referred to as the \emph{spectral formulation} and provides \emph{spectral worst-cases},
i.e., upper bounds on the worst-case performances of the algorithm, valid for any matrix $W \in \Wcl{-\lam}{\lam}$. These spectral bounds can thus be compared with the bound from Theorem \ref{thm:bound}.
In our experiments, we focus on the situation where $D = 1$ and $R = 1$, but the results obtained can be scaled up to general values using changes of variables, see Appendix \ref{annexe:scaling}. \smallskip
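For reference, the iterates constrained in \eqref{prob:DGD_PEP} correspond to running DGD itself. The following Python sketch of these updates (with placeholder subgradient oracles and a user-supplied averaging matrix; it is only illustrative and is not the PEP) also returns the averaged point $\xmoy$ evaluated in Theorem \ref{thm:bound}.
\begin{verbatim}
import numpy as np

def dgd(subgradients, W, x0, K, alpha):
    """K iterations of DGD: y_i^k = sum_j w_ij x_j^k, x_i^{k+1} = y_i^k - alpha g_i(x_i^k).

    subgradients: list of N callables returning a subgradient of f_i.
    W: (N, N) averaging matrix, x0: (N, d) identical starting points.
    """
    N, d = x0.shape
    x = x0.copy()
    X = [x0.copy()]
    for k in range(K):
        y = W @ x                                                # consensus step
        g = np.array([subgradients[i](x[i]) for i in range(N)])  # local subgradients
        x = y - alpha * g                                        # local (sub)gradient step
        X.append(x.copy())
    # average over all agents and all iterations k = 0,...,K
    return sum(X).mean(axis=0) / (K + 1)
\end{verbatim}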
\paragraph{Impact of the number of agents $N$}
In Fig. \ref{fig:wc_Nevol}, we observe that the results of the spectral formulation are \emph{independent} of the number of agents $N \ge 2$ in the problem. This is shown for $K$ = 5 iterations and different spectral ranges.
This observation has been confirmed for other values of $K$ (10, 15, and 20).
The theoretical performance bound from Theorem \ref{thm:bound} is also independent of $N$. Therefore, in the sequel, we analyze the spectral formulation for $N=3$, which keeps the computational complexity low while still allowing non-trivial network matrices.
\begin{figure}[h!]
\vspace{-1mm}
\centering
\includegraphics[width=0.5\textwidth]{img/wc_Nevol_K5_multi_B}
\caption{Independence of $N$ for the spectral worst-case performance of $5$ iterations of DGD in the setting of Theorem \ref{thm:bound}. \vspace{-4mm}}
\label{fig:wc_Nevol}
\end{figure}
\paragraph{Comparison with Theorem \ref{thm:bound}}
We compare the spectral bound with the theoretical bound from Theorem \ref{thm:bound} for different ranges of eigenvalues $\qty[-\lam,\lam]$ for the network matrix. The value of $1-\lam$ corresponds to the algebraic connectivity of the network.
Fig. \ref{fig:wc_lamevol_N3} shows the evolution of both bounds with $\lam$ for $K = 10$ iterations of DGD with $N = 3$ agents. We observe that the spectral worst-case performance bound (in blue) largely improves on the theoretical one (in red), especially when $\lam$ approaches 1, in which case the theoretical bound grows unbounded.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{img/DGD_wc_lamevol_final}
\vspace{-3mm}
\caption{Evolution with $\lam$ of the worst-case performance of $K = 10$ iterations of DGD in the setting of Theorem \ref{thm:bound} with $N = 3$ agents. The plot shows (i) the theoretical bound from equation \eqref{eq:th_bound} (in red), largely above (ii) the spectral worst-case performance (in blue), (iii) the exact worst-case performance for the symmetric generalized doubly stochastic matrix $W^{(1)}$ from equation \eqref{eq:mat} (in green) and (iv) the exact worst-case performance for symmetric doubly stochastic matrices found based on an exhaustive exploration of such matrices used in the exact PEP formulation (in pink).
This indicates the tightness of the spectral formulation of PEP for DGD with symmetric generalized doubly stochastic matrices, within numerical errors.
\vspace{-2mm}}
\label{fig:wc_lamevol_N3}
\end{figure}
The improvement of the bound when $\lam$ is close to 1 is particularly relevant since large values of $\lam$ are frequent for averaging matrices of large networks of agents \cite{eigenBound}.
For example, for a 5 by 5 grid of agents with Metropolis weights \cite{DGD}, the range of eigenvalues of the resulting averaging matrix is $\qty[-0.92,0.92]$.
In that case, after $K=10$ iterations, our spectral bound guarantees that the performance measure is below $0.85$, compared to 8.2 for the theoretical bound from Theorem \ref{thm:bound}.
This accuracy of $0.85$ would only be guaranteed using Theorem \ref{thm:bound} with $K = 936$. \smallskip
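These numbers follow directly from evaluating the right-hand side of \eqref{eq:th_bound} with $D=R=1$ and $\lam=0.92$; a small Python check (a sketch, with an illustrative helper name):
\begin{verbatim}
import math

def dgd_theoretical_bound(K, lam, D=1.0, R=1.0):
    """Right-hand side of (eq:th_bound)."""
    return (D**2 + R**2) / (2 * math.sqrt(K)) + 2 * R**2 / (math.sqrt(K) * (1 - lam))

print(round(dgd_theoretical_bound(K=10, lam=0.92), 2))   # 8.22
# smallest K for which the theoretical bound drops below 0.85
print(next(K for K in range(1, 2000) if dgd_theoretical_bound(K, 0.92) <= 0.85))  # 936
\end{verbatim}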
\paragraph{Worst averaging matrix and tightness analysis}
When considering the spectral formulation with a symmetric spectral range $-\lm=\lp=\lam$, we observe that the worst averaging matrices are matrices of the following form \vspace{-0.3mm}
\begin{equation} \label{eq:mat}
W^{(1)} = J - \lam (I - J), \vspace{-0.3mm}
\end{equation}
where $J = \frac{\mathbf{11^T}}{N}$, i.e. the matrix with all entries equal to $\frac{1}{N}$, and $I$ is the identity matrix. Matrix $W^{(1)}$ is symmetric and generalized doubly stochastic, so that $1$ is one of its eigenvalues. All its other eigenvalues are equal to $-\lam$.
Matrix $W^{(1)}$ always produces a remainder $\|\Yr - W^{(1)}\Xr\|_F$ close to zero for DGD, but it may not be the only such matrix. The bound obtained using the exact PEP formulation with this specific matrix $W^{(1)}$ for $K=10$ is plotted in green in Fig. \ref{fig:wc_lamevol_N3} and \emph{exactly} matches the spectral bound in blue, within numerical errors. This means that the spectral formulation provides a \emph{tight performance bound} for DGD with symmetric generalized doubly stochastic matrices, even though it is a relaxation.
This observation has been confirmed for other values of $K$ (5, 15, and 20). \smallskip
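The structure of $W^{(1)}$ is easy to verify numerically; the short Python sketch below (for illustrative values of $N$ and $\lam$) checks its symmetry, its row sums, and its spectrum $\{1,-\lam,\dots,-\lam\}$.
\begin{verbatim}
import numpy as np

N, lam = 3, 0.8                            # illustrative values
J = np.ones((N, N)) / N
W1 = J - lam * (np.eye(N) - J)             # worst-case matrix W^(1) from (eq:mat)

print(np.allclose(W1, W1.T))                      # True (symmetric)
print(np.allclose(W1 @ np.ones(N), np.ones(N)))   # True (generalized doubly stochastic)
print(np.round(np.linalg.eigvalsh(W1), 6))        # [-0.8 -0.8  1.]
\end{verbatim}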
\paragraph{Doubly stochastic versus generalized doubly stochastic}
Since every doubly stochastic matrix is also generalized doubly stochastic, the spectral bound also provides an upper bound on the performance of DGD with symmetric doubly stochastic matrices. This bound remains tight for $\lam \le \frac{1}{N-1}$ because the worst-case matrix $W^{(1)}$ \eqref{eq:mat} we have obtained is non-negative and is therefore doubly stochastic. For $\lam > \frac{1}{N-1}$, this is no longer the case and the analysis is performed by empirically looking for symmetric doubly stochastic averaging matrices leading to the worst performance.
In Fig. \ref{fig:wc_lamevol_N3}, for $N=3$ and $\lam>0.5$, we have generated more than 6000 random symmetric doubly stochastic 3 by 3 matrices. We have analyzed their associated DGD performance using the exact PEP formulation and have only kept those leading to the worst performances.
The resulting pink curve deviates no more than 20\% below the spectral bound. In that case, the spectral bound is thus no longer tight for DGD with doubly stochastic matrices but remains very relevant.
This observation has been confirmed for other values of $K$ and $N$ ($N = 3,5,7$, and $K=10,15$). \smallskip
\paragraph{Evolution with the total number of iterations $K$}
Fig. \ref{fig:wc_Kevol} shows the evolution of the spectral worst-case performance for DGD multiplied by $\sqrt{K}$, for different values of $\lam$. Except when $\lam = 1$, all lines tend to a constant value, meaning that the spectral bound behaves in $\bigO\qty(\frac{1}{\sqrt{K}})$, as the theoretical bound \eqref{eq:th_bound}, but with a much smaller multiplicative constant.
When $\lam = 1$, the line grows linearly and never reaches a constant value. In that case, the worst averaging matrices lead to counterproductive interactions, preventing DGD from working in the worst case. \smallskip
\paragraph{Tuning the step-size $\alpha$}
The PEP methodology allows us to easily tune the parameters of a method.
For example, Fig. \ref{fig:wc_alphevol} shows the evolution of the spectral worst-case performance of DGD with the constant step-size it uses,
\begin{figure}[h!]
\vspace{-1mm}
\centering
\includegraphics[width=0.48\textwidth]{img/wc_Kevol_N3_multi_lam06_final}
\caption{Evolution with $K$ of the \emph{normalized} spectral worst-case performance of $K$ iterations of DGD in the setting of Theorem \ref{thm:bound} with $N = 3$. The shown spectral worst-cases are normalized by $\frac{1}{\sqrt{K}}$ to show that they evolve at this rate.
\vspace{-4mm}}
\label{fig:wc_Kevol}
\end{figure}
\begin{figure}[b]
\vspace{-3mm}
\centering
\includegraphics[width=0.48\textwidth]{img/wc_alphevol_N3_K10_multi_lam08}
\vspace{-1mm}
\caption{Evolution with $\alpha$ of the spectral worst-case performance of $K = 10$ iterations of DGD in the setting of Theorem \ref{thm:bound} with $N$ = 3 agents and $\lam = 0.8$ (except for $\alpha$).}
\label{fig:wc_alphevol}
\end{figure}
in the setting of Theorem \ref{thm:bound} with $N=3$, $K=10$ and $\lam = 0.8$.
In that case, we observe that the value $\alpha = \frac{1}{\sqrt{K}}$ used in Theorem \ref{thm:bound} for deriving the theoretical performance bound is not the best possible choice for $\alpha$ and should be divided by two to improve the performance guarantees by 30\%.
The optimal value for $\alpha$, regarding our spectral bound, is the one that provides the best worst-case guarantee,
whichever averaging matrix from $\Wcl{-\lam}{\lam}$ is used.
The impact of the step-size on the other experiments and observations can be studied by setting $\alpha = \frac{h}{\sqrt{K}}$, for some $h > 0$. We focused on $h=1$ for comparison with the theoretical bound from Theorem \ref{thm:bound}. Nevertheless, all our other observations have been confirmed\footnote{For step-sizes that are too small, such as $h \le 0.1$, the observed worst averaging matrix is no longer $W^{(1)}$.} for $h = 0.1, 0.5, 2, 10$. \vspace{-5mm}
\subsection{DIGing}
The DIGing algorithm \cite{DIGing}, described in Algorithm \ref{algo:DIGing}, combines DGD with a \emph{gradient tracking} technique. Each agent $i$ holds an estimation $\yDIG_i$ of the average of all the local gradients and uses it in its update instead of its own local gradient.
The DIGing algorithm allows using a different network matrix $W^k$ at each iteration $k$.
\begin{algorithm}
\caption{DIGing}
\begin{algorithmic}
\STATE Choose step-size $\alpha > 0$ and pick any $x_i^0 \in \Rvec{d}$;
\STATE Initialize $\yDIG^0_i = \nabla f_i(x_i^0)$ for all $i=1,\dots,N$;
\FOR{$k = 0, 1,\dots$}
\FOR{$i=1,\dots,N$} \vspace{1mm}
\STATE $x_i^{k+1} = \sum_{j=1}^N w_{ij}^k x_j^k - \alpha \yDIG_i^k$;\vspace{1mm}
\STATE $\yDIG_i^{k+1} = \sum_{j=1}^N w_{ij}^k \yDIG_j^k + \nabla f_i(x_i^{k+1}) - \nabla f_i(x_i^k)$; \vspace{1mm}
\ENDFOR
\ENDFOR
\end{algorithmic}
\label{algo:DIGing}
\end{algorithm}
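For reference, Algorithm \ref{algo:DIGing} translates directly into code. The following Python sketch uses a constant averaging matrix and placeholder gradient oracles (the time-varying case simply indexes $W$ by $k$); it is only illustrative.
\begin{verbatim}
import numpy as np

def diging(gradients, W, x0, K, alpha):
    """Run K iterations of DIGing with a constant averaging matrix W.

    gradients: list of N callables returning the local gradients of f_i.
    W: (N, N) averaging matrix, x0: (N, d) initial points, alpha: step-size.
    """
    x = x0.copy()
    s = np.array([g(xi) for g, xi in zip(gradients, x)])   # s_i^0 = grad f_i(x_i^0)
    grad_old = s.copy()
    for _ in range(K):
        x = W @ x - alpha * s                               # mixing + tracked-gradient step
        grad_new = np.array([g(xi) for g, xi in zip(gradients, x)])
        s = W @ s + grad_new - grad_old                     # gradient-tracking update
        grad_old = grad_new
    return x
\end{verbatim}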
The linear convergence of DIGing has been established in \cite[Theorem 3.14]{DIGing}
provided that the local functions $f_i$ are $L$-smooth and $\mu$-strongly convex (with $L \ge \mu > 0$); the network matrices are symmetric, doubly stochastic, and have their second largest eigenvalue $\lam$ below 1 (in absolute value); and the step-size is within the interval \vspace{-5mm}
$$\alpha \in \Bigg(0, \frac{(1-\lam)^2}{2L\qty(1+4\sqrt{N}\sqrt{L/\mu})} \Bigg]. \vspace{-5mm} $$
The largest accepted step-size thus decreases as $\bigO(\frac{1}{\sqrt{N}})$ and also has a dependence in $\bigO(\frac{1}{L\sqrt{L/\mu}})$, which is less favorable than the usual $\bigO(\frac{1}{L})$ in optimization, often leading to very small accepted values of $\alpha$.
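For instance, for the representative values used below ($N=2$, $L=1$, $\mu=0.1$, $\lam=0.9$), this interval is already very small; a quick Python check (the helper name is ours):
\begin{verbatim}
import math

def diging_alpha_max(N, L, mu, lam):
    """Upper end of the admissible step-size interval from [DIGing, Theorem 3.14]."""
    return (1 - lam)**2 / (2 * L * (1 + 4 * math.sqrt(N) * math.sqrt(L / mu)))

print(diging_alpha_max(N=2,  L=1.0, mu=0.1, lam=0.9))   # ~2.6e-4
print(diging_alpha_max(N=20, L=1.0, mu=0.1, lam=0.9))   # ~8.7e-5, already below 1e-4
\end{verbatim}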
The spectral condition on the network matrices ($|\lam| < 1$) guarantees the network connectivity at each iteration. Actually, \cite{DIGing} imposes a weaker spectral condition which only requires that the union of all the networks over $B$ steps is connected. In this section, we consider the case $B=1$. Under all these conditions, \cite[Theorem 3.14]{DIGing} guarantees the following \mbox{R-linear}\footnote{\label{note:defConv} We recall the definition of the R-linear convergence and its differences with the Q-linear convergence, based on the definitions provided in \cite{DIGing}. \\ Suppose that a sequence $\{x^k\}$ converges to $x^*$ in some norm $\|\cdot\|$.
We say that the convergence is: (i) R-linear if there exists $ \rho \in (0, 1)$ and some positive constant $C$ such that $\|x^k - x^*\| \le C \rho^k$ for all $k$; (ii) Q-linear if there exists $\rho \in (0, 1)$ such that $ \frac{\|x^{k+1} - x^*\|}{\|x^{k} - x^*\|} \le \rho$ for all $k$.
Both convergences are geometric but the Q-linear convergence is stronger since it implies monotonic decrease of $\|x^k - x^*\|$, while R-linear convergence does not. By definition, Q-linear convergence implies R-linear convergence with the same rate, but the converse implication does not hold in general.} convergence:
\begin{equation} \label{eq:DIGing_conv}
\sqrt{\sum_{i=1}^N \|x_i^K-x^*\|^2} \le C \rho_{\mathrm{theo}}^K, \quad \text{ for any $K \in \mathbb{N}$,}
\end{equation}
where $C$ is a positive constant and $\rho_{\mathrm{theo}} \in \qty(0,1)$ is the convergence rate depending on $N$, $\lam$, $L$ and $\mu$ (see \cite[Theorem 3.14]{DIGing} for details about its expression).
This section analyzes the worst-case performance of DIGing via the exact and spectral formulations, in the same settings as \cite[Theorem 3.14]{DIGing} to get a fair comparison. Therefore, we consider the set of $L$-smooth and $\mu$-strongly convex functions for local functions.
As performance criterion, we consider the same as in \eqref{eq:DIGing_conv} but squared and scaled by $N$: \vspace{-2mm}
\begin{equation} \label{eq:DIGing_crit}
\mc{P}^K = \frac{1}{N}\sum_{i=1}^N \|x_i^K-x^*\|^2. \vspace{-1mm}
\end{equation}
The corresponding theoretical convergence rate for this criterion is given by $ \rho_{\mathrm{theo}}^2$.
We also consider initial conditions similar to those used implicitly in \cite[Theorem 3.14]{DIGing}:
\begin{align}
\frac{1}{N}\sum_{i=1}^N\|x_i^0 - x^*\|^2 \le D^2, \label{eq:init_DIGing1} \\
\frac{1}{N}\sum_{i=1}^N \|\yDIG_i^0 - \frac{1}{N}\sum_{j=1}^N\nabla f_j(x_j^0)\|^2 \le E^2. \label{eq:init_DIGing2}
\end{align}
Condition \eqref{eq:init_DIGing1} bounds the initial performance criterion $\mc{P}^0$, which measures the average error of the initial iterates $x_i^0$. Condition \eqref{eq:init_DIGing2} bounds the average error made by the agents on the initial average gradient estimates $\yDIG_i^0$. \\
For the spectral formulation, to have the same setting as \cite[Theorem 3.14]{DIGing}, we consider time-varying averaging matrices that are symmetric, generalized doubly stochastic, and with a symmetric range of eigenvalues $[-\lam, \lam]$, i.e. in $\Wcl{-\lam}{\lam}$. In a second step, we will also consider constant network matrices.
The problem depends on 6 parameters: $L$, $\mu$, $D$, $E$, $\lam$, $\alpha$. We fix $L=1$ and $D=1$, as the results can then be scaled up to general values using appropriate changes of variables. The value of $E$ is arbitrarily fixed to $E = 1$, but all the observations have been confirmed for other values of $E$ ($E = 0.1, 10$). Different values of the step-size $\alpha$ will be analyzed to understand its impact on the worst-case performance of the DIGing algorithm. We show the results for representative values of $\mu$ and $\lam$ ($\mu = 0.1$ and $\lam = 0.9$). \smallskip
\paragraph{Impact of the number of agents $N$}
As was the case for DGD (see Section \ref{sec:analysis_DGD}), we have observed that the spectral worst-case of DIGing is independent of the number of agents $N$ (for $N \ge 2$). This differs from the theoretical analysis of DIGing from \cite[Theorem 3.14]{DIGing}, for which the range of accepted step-sizes, as well as the convergence rate, depend on $N$. It appears that the worst-case performance of DIGing does not get worse when $N$ increases, or can at least be bounded uniformly over all values of $N$, for example with our spectral bound. Such a uniform bound will allow us to better choose the step-size $\alpha$ of DIGing, identically for all $N$.
For the subsequent analysis of the results of our spectral PEP formulation for DIGing, we fix $N=2$ to keep the computational complexity low. This also corresponds to the most favorable situation for the theoretical results \cite[Theorem 3.14]{DIGing}, which get worse as $N$ increases. \smallskip
\begin{figure}[b]
\vspace{-3mm}
\includegraphics[width=0.5\textwidth]{img/wc_DIGing_Kevol_N2_tvar_diffstart_v4.pdf}
\caption{Evolution with K of the spectral worst-case of K iterations of DIGing with $N=2$, $\lam = 0.9$, $\mu = 0.1$, and different values of step-size $\alpha$. The corresponding theoretical rates from \cite[Theorem 3.14]{DIGing} are also shown in comparison. Logarithmic y-axis.}
\label{fig:DIGing_Kevol}
\end{figure}
\paragraph{Comparison between the spectral and theoretical bounds}
We compare the spectral bounds obtained with PEP with the corresponding guarantees obtained using $\rho_{\mathrm{theo}}^2$, i.e. the square of the theoretical convergence rate bound \cite[Theorem 3.14]{DIGing}. Fig. \ref{fig:DIGing_Kevol} shows the evolution of both guarantees (spectral and theoretical) with the total number of iterations $K$ of the algorithm, for different values of the step-size $\alpha$ and for $N=2$, $\lam = 0.9$ and $\mu = 0.1$. The spectral bounds are always smaller than the corresponding theoretical ones.
For the three values of $\alpha$, we observe a linear decrease of the spectral worst-cases, which strongly suggests a linear convergence rate. The observed rates are listed in Table \ref{tab:rates_DIG} below and can be compared with the theoretical convergence rates $\rho_{\mathrm{theo}}^2$, which are all larger. The step-size $\alpha = 2.6 \times 10^{-4}$ is the one that optimizes the theoretical convergence guarantee $\rho_{\mathrm{theo}}$ from \cite[Theorem 3.14]{DIGing} for $N=2$.
For some step-sizes, such as $\alpha = 10^{-3}$, convergence is not guaranteed by \cite[Theorem 3.14]{DIGing}, even when $N=2$, while our observations suggest that it does occur. This gets worse as $N$ becomes larger, since the theoretical step-sizes would then need to be even smaller, i.e. $\alpha_{\max} \approx \frac{4 \times 10^{-4}}{\sqrt{N}}$.
Therefore, the spectral bound for DIGing can help to greatly improve the choice of the step-size.
The same observations can be made for other values of $\mu$ and $\lam$. In particular, other values of $\mu$ lead to the same graphs, with a different scale for the vertical axis. We tested this for $\mu = 0.01$ and $\mu = 0.001$.
\begin{table}[h]
\vspace{1mm}
\centering
\captionsetup{width=.9\linewidth}
\begin{tabular}{|r|c|c|}
\hline && \\[-2mm]
step-size $\alpha$ & $1- \text{observed rate}$ & $1-\text{theoretical rate}$ \\[1mm] \hline && \\[-2.5mm]
$\hspace{2.2mm} 10^{-4}$ & $2 \times 10^{-5}$ & $7\times 10^{-6}$ \\
$2.6 \times 10^{-4}$ & $5 \times 10^{-5}$ & $2\times 10^{-5}$ \\
$\hspace{2.2mm} 10^{-3}$ & $2 \times 10^{-4}$ & / \\ \hline
\end{tabular}
\caption{Theoretical \cite{DIGing} and observed spectral rates for $N=2$, $\lam = 0.9$, $\mu = 0.1$ and for different step-sizes $\alpha$.\vspace{-4mm}}
\label{tab:rates_DIG}
\end{table}
\paragraph{Convergence rate analysis with PEP}
Fig. \ref{fig:DIGing_Kevol} strongly suggests a linear convergence rate but the performance guarantees only hold for the values of $K$ tested and we cannot extrapolate them with certainty, e.g. the performance could explode after a larger number of iterations.
We now show how to use our PEP formulations to obtain a \emph{guaranteed convergence rate} valid for any number of iterations. The idea is to use the same metric as both the initial condition and the performance measure, so that the problem can be considered over a single general DIGing iteration. We also need to ensure that the DIGing update preserves the assumptions made in the initial conditions.
The metric we use is the weighted combination of the two error measures \eqref{eq:init_DIGing1} and \eqref{eq:init_DIGing2} previously used separately in the initialization: \vspace{-4mm}
{\small
\begin{align}
\mc{P}_\betac^K = \frac{1}{N}\sum_{i=1}^N \|x_i^K-x^*\|^2 + \frac{\betac}{N}\sum_{i=1}^N \|\yDIG_i^K- \frac{1}{N} \sum_{j=1}^N \nabla f_j(x_j^K)\|^2, \label{eq:rate_metric}\\[-10mm]
\end{align}
\normalsize}
where $\betac$ is a positive weighting coefficient. \smallskip
\begin{proposition}[Convergence rate of DIGing with PEP] \label{thm:dig_conv}
Consider the one iteration spectral PEP formulation of DIGing with $\mc{P}_\betac^1$ as performance criterion and with the following initialization: pick any $x_i^0, s_i^0 \in \Rvec{d}$ such that \vspace{-1mm}
\begin{align}
\sum_{i=1}^N \yDIG_i^0 &= \sum_{i=1}^N \nabla f_i(x_i^0) \qquad \text{ and } & \mc{P}_\betac^0 &= 1. \label{eq:init_rate} \vspace{-1mm}
\end{align}
Let $\theta_\betac$ be the optimal value of this PEP, then
\begin{equation}
\mc{P}^k \le \mc{P}_\betac^k \le \theta_\betac^k \mc{P}_\betac^0 \qquad \text{for any $k$, $\betac \ge 0$.} \vspace{-1mm}
\end{equation}
Convergence is Q-linear\footnote{See definition of Q-linear and R-linear convergence in footnote \ref{note:defConv}} for $\mc{P}_\betac^k$ \eqref{eq:rate_metric} and R-linear for $\mc{P}^k$\,\eqref{eq:DIGing_crit}, both with convergence rate $\theta_\betac$ depending on coefficient $\betac$.
\end{proposition}
\begin{proof}
One can verify that the following changes of variables, using a coefficient $M \ge 0$,
$$\tilde{x}_i = \sqrt{M} x_i, \quad \tilde{s}_i = \sqrt{M} s_i \quad \text{ and } \quad \tilde{f}_i(\tilde{x}_i) = M f_i(x_i),$$
do not affect the behavior of DIGing and scale both $\mc{P}_\betac^0$ and $\mc{P}_\betac^1$ by a factor $M$:
$$\tilde{\mc{P}}_\betac^0 = M \mc{P}_\betac^0, \qquad \tilde{\mc{P}}_\betac^1 = M \mc{P}_\betac^1.$$
Since $\theta_\betac$ is the optimal value of $\mc{P}_\betac^1$ and $M = \tilde{\mc{P}}_\betac^0$ (for $\mc{P}_\betac^0 = 1$), we have that \vspace{-1mm}
\begin{equation} \label{eq:scaling_ineq}
\tilde{\mc{P}}_\betac^1 \le \theta_\betac \tilde{\mc{P}}_\betac^0.
\end{equation}
Equation \eqref{eq:scaling_ineq} holds for any value of $\tilde{\mc{P}}_\betac^0 \ge 0$ (e.g. for $\mc{P}_\betac^k$). Moreover, the iterations of DIGing are independent of $k$, and thus, inequality \eqref{eq:scaling_ineq} is valid for any iteration $k$
\begin{equation} \label{eq:scaling_ineqk}
\mc{P}_\betac^{k+1} \le \theta_\betac \mc{P}_\betac^k,
\end{equation}
provided that iterates $x^k_i, s_i^k$ also satisfy the initial condition
\begin{align}
\sum_{i=1}^N \yDIG_i^k &= \sum_{i=1}^N \nabla f_i(x_i^k), & \text{for any $k$}. \label{eq:dig_proof_rec2}
\end{align}
This condition \eqref{eq:dig_proof_rec2} holds by assumption for $k=0$ and is preserved by a DIGing update with a generalized doubly stochastic matrix $W^k$, whose columns sum to one (see Algorithm \ref{algo:DIGing}), as \vspace{-1mm}
\small
\begin{align}
\sum_{i=1}^N \yDIG_i^{k+1} &= \sum_{i=1}^N \sum_{j=1}^N w_{ij}^k \yDIG_j^k + \sum_{i=1}^N \nabla f_i(x_i^{k+1}) - \sum_{i=1}^N \nabla f_i(x_i^k) \\
& =
\sum_{j=1}^N \yDIG_j^k + \sum_{i=1}^N \nabla f_i(x_i^{k+1}) - \sum_{i=1}^N \nabla f_i(x_i^k) \\
&= \sum_{i=1}^N \nabla f_i(x_i^{k+1}), \vspace{-1mm}
\end{align}
\normalsize
where the second equality uses the fact that the columns of $W^k$ sum to one. Finally, by definition of $\mc{P}^k$ (see \eqref{eq:DIGing_crit}) and $\mc{P}_\betac^k$, we indeed have $\mc{P}^k \le \mc{P}_\betac^k$ for any $k$ and any $\betac \ge 0$.
\end{proof}
Using Proposition \ref{thm:dig_conv}, we can obtain guaranteed convergence rates for DIGing, which depend on the weighting coefficient $\betac$, for the metric $\mc{P}_\betac^K$.
These convergence rates are also valid for $\mc{P}^K$, for all $\betac \ge 0$, and can thus be compared with the observed rates from Fig. \ref{fig:DIGing_Kevol} and the theoretical convergence rates \eqref{eq:DIGing_conv} \mbox{(\cite[Theorem 3.14]{DIGing})}.
An exploration of different values of the weighting coefficient $\betac \ge 0$ suggests that the best rates are obtained for $\betac = \frac{\alpha}{L}$, although the rates obtained with other values of $\betac$ are also valid.
With $\betac=\frac{\alpha}{L}$, we recover exactly the same rates as those observed in Fig. \ref{fig:DIGing_Kevol}, but they are guaranteed with certainty for any number of iterations $K$. The PEP problem from Proposition \ref{thm:dig_conv} has a small size since it only considers one iteration, and is thus rapidly solved. The size of the problem still increases with the number of agents $N$ but once again, we observe that the results are independent of $N$.
The approach above can be applied to other algorithms, provided that their update rule is identical at every iteration.
It presents some parallels with the approach used in the automatic analysis with IQC \cite{IQC_dec}. Both approaches analyze the decrease of a particular function over only one iteration. We design the decreasing criterion $\mc{P}_\betac$ by hand and optimize the value of $\betac$ to find the smallest rate. The IQC approach makes it easy to optimize the rate over a wider class of Lyapunov functions and may therefore give smaller rates. On the other hand, our PEP approach provides the worst-case functions and communication networks resulting from the worst-case solutions. It also allows comparing what happens over one iteration and over several, and with network matrices that are time-varying or constant. \smallskip
\begin{figure}[t]
\vspace{-3mm}
\centering
\includegraphics[width=0.5\textwidth]{img/wc_DIGing_alphaevol3_TAC_final.pdf}
\caption{Convergence rate evolution with the step-size $\alpha$ for DIGing with $\lam=0.9$ and $\mu=0.1$. Theoretical rates from \cite{DIGing} are shown for different values of $N$. PEP rates $\theta_\betac$, obtained with Proposition \ref{thm:dig_conv} for $\betac = \frac{\alpha}{L}$, are lower and allow for larger step-sizes. These rates are computed with $N=2$ but the results are identical for any value of $N$. \vspace{-4mm}}
\label{fig:DIGing_alpha_evol}
\end{figure}
\paragraph{Impact of the step-size $\alpha$}
Fig. \ref{fig:DIGing_alpha_evol} compares the spectral rates, obtained with the spectral PEP formulation from Proposition \ref{thm:dig_conv} with the theoretical ones from \cite[Theorem 3.14]{DIGing} for a wide range of values for the step-size $\alpha$. The spectral rates are identical for any value of $N$ and present a first regime where they decrease as $\alpha$ increases until a certain threshold step-size $\alpha_t$. This decrease is numerically close to $1-2\mu\alpha$. After $\alpha_t$, we observe a sharp increase in the spectral rates.
Since the theoretical results of \cite{DIGing} depend on the value of $N$, Fig. \ref{fig:DIGing_alpha_evol} shows different curves, corresponding to theoretical bounds on the convergence rates for different numbers of agents $N$. All these curves also present two regimes; however, the decrease in the first regime is slower and the sharp increase takes place at a much lower threshold step-size. Moreover, both the sharp increase and the threshold step-size worsen as $N$ increases. \\
Therefore, the spectral rates obtained with PEP allow for significant improvements in the tuning of the step-size $\alpha$ by choosing a larger value ($\approx \alpha_t$), which is independent of $N$. This leads to better convergence guarantees and therefore to better use of DIGing.
For example, in the setting of Fig. \ref{fig:DIGing_alpha_evol}, when $N=20$, the theoretical bound requires a step-size below $10^{-4}$, while the optimal step-size according to our spectral rate is around $4\times 10^{-3}$. This choice of step-size improves the convergence guarantee by at least two orders of magnitude. \\
We observed the same two regimes for all the other values of $\mu$ and $\lambda$ we tested. However, the smaller the value of $\lambda$, the larger the gap between the theoretical and spectral convergence rates at the threshold step-size. \smallskip
\paragraph{Impact of time-varying averaging matrices} \label{par:var_comp}
In the spectral formulation, we can choose to link different iterations together and to analyze the worst-case when we use the same constant averaging matrix $W$ at each iteration. We can also choose to consider each consensus step independently with potentially time-varying averaging matrices. For DIGing, we have chosen the second option to allow time-varying averaging matrices and to be in the same conditions as
\cite{DIGing}. However, we observe that both choices lead to the same worst-case values, even though the solution achieving these worst-cases may be different. One worst-case solution obtained with a constant matrix is therefore also a worst-case solution of the situation with time-varying matrices. We can thus analyze what is the worst constant matrix for DIGing and it will also be valid for time-varying settings. \smallskip
\paragraph{Worst averaging matrix and tightness analysis}
When the step-size $\alpha$ is optimized, i.e., equal to the threshold value ($\alpha_t$), we observe that the worst matrix for DIGing is the same as for DGD, and is thus given by $W^{(1)}$ in \eqref{eq:mat}. This matrix is determined only by the values of $\lambda$ and $N$, and the remainder it produces, $\|\Yr - W^{(1)}\Xr\|$, is always close to zero in that case. This matrix $W^{(1)}$ is symmetric and generalized doubly stochastic. The same worst matrix is also recovered for larger step-sizes $\alpha$.
The bounds obtained using the exact PEP formulation with this specific matrix $W^{(1)}$ \emph{exactly} match the corresponding spectral bounds, within numerical errors. This means that the spectral formulation, even though it is a relaxation, provides again a \emph{tight performance bound} for DIGing with symmetric generalized doubly stochastic matrices and sufficiently large step-size. \smallskip
In summary, our spectral PEP formulation provides numerically tight convergence rates for DIGing that are independent of the number of agents $N$, and allows for better tuning of the constant step-size $\alpha$, leading to more efficient use of the DIGing algorithm.
\subsection{Accelerated Distributed Nesterov Gradient Descent}
As third use case, we analyze the accelerated distributed Nesterov
gradient descent (Acc-DNGD) algorithm proposed in \cite{AccDNGD}. We focus on the version designed for convex (not necessarily strongly convex) and $L$-smooth functions, described in Algorithm \ref{algo:ADNGD}.
\begin{algorithm}[b]
\caption{Acc-DNGD}
\begin{algorithmic}
\STATE Initialize $x_i^0 = v_i^0 = y_i^0 = 0$ and $s_i^0 = \nabla f_i(0)$ for all $i$;
\FOR{$k = 0, 1,\dots$}
\FOR{$i = 1,\dots,N$} \vspace{1mm}
\STATE $x_i^{k+1} = \sum_{j=1}^N w_{ij} y_j^k - \etaADNGD_k s_i^k$; \vspace{1mm}
\STATE $v_i^{k+1} = \sum_{j=1}^N w_{ij} v_j^k - \frac{\etaADNGD_k}{\alphaADNGD_k} s_i^k$; \vspace{1mm}
\STATE $y_i^{k+1} = \alphaADNGD_{k+1} x_i^{k+1} + (1-\alphaADNGD_{k+1})v_i^{k+1}$; \vspace{1mm}
\STATE $s_i^{k+1} = \sum_{j=1}^N w_{ij} s_j^k + \nabla f_i(y_i^{k+1}) - \nabla f_i(y_i^k)$; \vspace{1mm}
\ENDFOR
\ENDFOR
\end{algorithmic}
\label{algo:ADNGD}
\end{algorithm}
It achieves one of the best proven convergence rates in such a setting, $\bigO\qty(\frac{1}{K^{1.4-\epsilon}})$ for any $\epsilon \in \qty(0,1.4)$, but several open questions remain about the choice of parameters and the actual performance. We show how our technique sheds light on these questions.
We use the notations of \cite{AccDNGD}, which are slightly different from the rest of this paper. Here, $\eta_k$ denotes the diminishing step-size and $\alpha_k$ denotes a weighting factor.
Each agent $i$ keeps variables $x_i$, $v_i$, $y_i$, and $s_i$. The variables $s_i$ are local gradient tracking variables allowing each agent to estimate the average gradient $\frac{1}{N} \sum_{i=1}^N \nabla f_i(y_i)$.
The step-size $\etaADNGD_k$ is diminishing as
\begin{equation} \label{eq:accDNGD-sz}
\etaADNGD_k = \frac{\etaADNGD}{(k+k_0)^\beta},
\end{equation}
where $\etaADNGD \in \qty(0,\frac{1}{L})$, $\beta \in \qty(0,2)$ and $k_0 \ge 1$.
The sequence of $\alphaADNGD_k$ starts with
$\alphaADNGD_0 = \sqrt{\etaADNGD_0 L} \in \qty(0,1)$ and the next element of the sequence is each time computed as the unique solution in $(0,1)$ of
$$\alphaADNGD_{k+1}^2 = \frac{\etaADNGD_{k+1}}{\etaADNGD_k}(1-\alphaADNGD_{k+1})\alphaADNGD_k^2.$$
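The parameter sequences are straightforward to generate numerically: at each step, $\alphaADNGD_{k+1}$ is the root in $(0,1)$ of the quadratic $\alpha^2 + c\,\alpha - c = 0$ with $c = \frac{\etaADNGD_{k+1}}{\etaADNGD_k}\alphaADNGD_k^2$. A minimal Python sketch (illustrative helper name; the parameter values are those used in the experiments of \cite{AccDNGD} with $L=1$):
\begin{verbatim}
import math

def accdngd_parameters(eta, beta, k0, L, K):
    """Generate the Acc-DNGD sequences eta_k and alpha_k for k = 0, ..., K."""
    etas = [eta / (k + k0)**beta for k in range(K + 1)]
    alphas = [math.sqrt(etas[0] * L)]                    # alpha_0 = sqrt(eta_0 L), in (0,1)
    for k in range(K):
        c = (etas[k + 1] / etas[k]) * alphas[k]**2
        alphas.append((-c + math.sqrt(c**2 + 4*c)) / 2)  # root of alpha^2 + c alpha - c = 0 in (0,1)
    return etas, alphas

etas, alphas = accdngd_parameters(eta=0.5, beta=0.61, k0=1, L=1.0, K=10)
\end{verbatim}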
The convergence result \cite[Theorem 4]{AccDNGD} guarantees that the algorithm achieves an average functional error bounded as
$$f(\xb^k) - f(x^*) \le \bigO(\frac{1}{k^{2-\beta}}) \qquad \text{ for $\beta \in (0.6,2)$}.$$ Recall that $f(x) = \frac{1}{N} \sum_{i=1}^N f_i(x)$, $\xb^k = \frac{1}{N} \sum_{i=1}^N x_i^k$
and $x^*$ is a minimizer of $f$.
This convergence guarantee for Acc-DNGD only holds under specific conditions concerning the values of $\etaADNGD$ and $k_0$ \cite[Theorem 4]{AccDNGD}. These assumptions seem strong since they impose, in particular, that $\etaADNGD$ tends to 0 and $k_0$ tends to $\infty$ both when $\lam$ tends to 1 (disconnected graph) and to 0 (fully connected graph).
The authors of \cite{AccDNGD} conjecture that
\begin{enumerate}[(i)]
\item The exact values of the parameters $\etaADNGD$ and $k_0$ do not actually matter, and we can choose values that do not satisfy the assumptions of \cite[Theorem 4]{AccDNGD} and still obtain a rate of $\bigO(\frac{1}{k^{2-\beta}})$. For example, the authors of \cite{AccDNGD} used $\etaADNGD = \frac{1}{2L}$ and $k_0 = 1$ in their numerical experiments with $\beta = 0.61$.
\item Choosing $\beta \in \qty[0,0.6]$ leads to the same rate $\bigO(\frac{1}{k^{2-\beta}})$. The case $\beta = 0$ uses a constant step-size $\eta$ and is important because it would lead to a rate $\bigO(\frac{1}{k^{2}})$, i.e. the best possible rate in the centralized framework for similar settings \cite[Theorem 2.1.7]{Nesterov}.
\end{enumerate}
\begin{figure*}[h]
\centering \hspace{-3mm}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{img/ADNGD_eta5_lam0.pdf}
\caption{$\etaADNGD = 0.05$, $\lam = 0$}
\label{fig:ADNGD_05_0}
\end{subfigure} \hspace{-3mm}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{img/ADNGD_eta5_lam75.pdf}
\caption{$\etaADNGD = 0.05$, $\lam = 0.75$}
\label{fig:ADNGD_05_75}
\end{subfigure}
\\ \hspace{-3mm}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{img/ADNGD_eta50_lam0.pdf}
\caption{$\etaADNGD = 0.5$, $\lam = 0$}
\label{fig:ADNGD_5_0}
\end{subfigure} \hspace{-3mm}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{img/ADNGD_eta50_lam75.pdf}
\caption{$\etaADNGD = 0.5$, $\lam = 0.75$}
\label{fig:ADNGD_5_75}
\end{subfigure}
\caption{Evolution with $K$ of the worst-case average functional error $f(\xb^K) - f(x^*)$ for Acc-DNGD, obtained with the spectral PEP formulation, with $N=2$, $ L=1$ and $k_0 = 1$. Different values of $\beta$, $\etaADNGD$ and $\lam$ are shown. Dashed lines show curves evolving in $\bigO(\frac{1}{K^2})$, which corresponds to the rate conjectured in \cite{AccDNGD} when $\beta = 0$. This asymptotic rate is reached for $\beta = 0$ only when $\etaADNGD$ is sufficiently small.}
\label{fig:ADNGD}
\end{figure*}
In this section, we use our spectral PEP formulation to analyze the Acc-DNGD algorithm, which gives a better idea of the impact of the parameters on its performance and allows us to nuance conjectures (i) and (ii) from \cite{AccDNGD}.
Fig. \ref{fig:ADNGD} shows the evolution of the worst-case average functional error $f(\xb^K) - f(x^*)$ with the total number of iterations $K$. We consider a symmetric range of eigenvalues $-\lm = \lp = \lam$, and thus we obtain spectral bounds valid over the entire class of matrices $\Wcl{-\lam}{\lam}$. The results are shown for different values of $\beta$, $\etaADNGD$ and $\lam$, while $L$, $N$ and $k_0$ are fixed.
We aim at testing the conjectures (i) and (ii).
To test (ii), about the validity range of $\beta$, conjectured to extend below 0.6, we choose $\beta = \{0,0.1,0.3,0.5\}$. To compare with proven valid values of $\beta$ and test conjecture (i), we also choose $\beta = \{0.61,1\}$.
We choose $\lam = \{0,0.75\}$ and $\eta = \{0.05, 0.5\}$ to have two representative values for each parameter (small and large).
We fix $L=1$ because the results for other values of $L$ can be recovered by scaling. We fix $N=2$ because the value of $N$ does not impact the worst-case value obtained with the spectral formulation, as for the two previous algorithms.
We fix $k_0 = 1$ for simplicity and because it does not affect the long term evolution of the performance. In particular, the value of $k_0$ has no impact when $\beta = 0$, see \eqref{eq:accDNGD-sz}.
We first observe that the value of $\etaADNGD$ influences the performance of the algorithm. Indeed, when $\etaADNGD$ becomes larger (e.g., from Figs. \ref{fig:ADNGD_05_0} and \ref{fig:ADNGD_05_75} to Figs. \ref{fig:ADNGD_5_0} and \ref{fig:ADNGD_5_75}), the worst-case functional error decreases faster in a first phase but sometimes explodes afterwards, preventing the algorithm from converging.
This occurs for example when $\beta$ is too small (e.g. $\beta = 0$ and $\beta = 0.1$ in Fig. \ref{fig:ADNGD_5_0}) or when $\lam$ is too large (e.g. in Fig. \ref{fig:ADNGD_5_75} and \ref{fig:ADNGD_05_75} where $\lam = 0.75$).
In Fig. \ref{fig:ADNGD_5_75}, this sharp increase even occurs in the experimental setting of \cite{AccDNGD}, i.e. $\etaADNGD = \frac{1}{2L} = 0.5$ and $\beta = 0.61$ (green line). These observations contradict conjecture (i) as currently stated.
Secondly, we observe in Fig. \ref{fig:ADNGD} that the curves for $\beta=0.5$ and $\beta=0.61$ behave similarly in all the different plots. This suggests that the worst-case values present no phase transition at $\beta = 0.6$ and therefore supports conjecture (ii), which claims that values of $\beta$ lower than 0.6 also provide rates $\bigO(\frac{1}{K^{2-\beta}})$, including the case $\beta = 0$.
The curves for $\beta = 0$ do indeed appear to approach a decrease in $\bigO(\frac{1}{K^2})$, as conjectured in \cite{AccDNGD}, but only in Fig. \ref{fig:ADNGD_05_0}, where the values of $\etaADNGD$ and $\lam$ are sufficiently small.
Indeed, the rate $\bigO(\frac{1}{K^2})$ cannot be observed for larger values of $\etaADNGD$ or $\lam$ where PEP problems may become unbounded (see Figures \ref{fig:ADNGD_05_75}, \ref{fig:ADNGD_5_0} and \ref{fig:ADNGD_5_75}).
Therefore, the $\etaADNGD$ parameter should be tuned according to the value of $\lam$ and the choice of $\beta$. Qualitatively, we can see that it must decrease when $\lam$ increases or when $\beta$ decreases. Further analysis should be performed to tune the value of $\etaADNGD$ in general. Having sufficiently small step-sizes $\eta$ thus seems to be the key for conjecture (ii) to hold. Since smaller values of $\beta$ appear to require a smaller step-size, the performance would approach the convergence rate $\bigO(\frac{1}{K^{2-\beta}})$ more slowly in that case. However, even with a well-tuned $\eta$, we cannot exclude that the worst-case performance of Acc-DNGD explodes after a large number of iterations in some cases, as happens in an early phase for some settings in Fig. \ref{fig:ADNGD}.
In addition, it is interesting to note that the worst matrix that is recovered from the spectral PEP formulation for Acc-DNGD is again $W^{(1)}$ from \eqref{eq:mat}, as it was the case for DGD and DIGing.
For Acc-DNGD also, the bounds obtained using the exact PEP formulation with this specific matrix $W^{(1)}$ \emph{exactly} match the corresponding spectral bounds, within numerical errors. This means that the spectral formulation, even though it is a relaxation, provides a \emph{tight performance bound} for Acc-DNGD with symmetric generalized doubly stochastic matrices.
\subsection{Code and Toolbox}
PEP problems have been written and solved using the PESTO Matlab toolbox \cite{PESTO} with the Mosek solver, in at most 200 seconds. For example, for $N=3$, the time needed on a regular laptop to solve the spectral formulation for DGD (from Section \ref{sec:analysis_DGD}) is about 3, 12, 48, and 192 seconds for $K=5, 10, 15, 20$, respectively. The PESTO toolbox is available on \textsc{Github} (\url{https://github.com/PerformanceEstimation/Performance-Estimation-Toolbox}).
We have updated PESTO to allow easy and intuitive PEP formulation of gradient-based decentralized optimization methods, and we have added a code example for DGD, DIGing, and Acc-DNGD.
\section{Conclusion}
We have developed a methodology that automatically computes numerical worst-case performance bounds for any decentralized optimization method that combines first-order oracles with average consensus steps. This opens the way for computer-aided analysis of many other decentralized algorithms, which could lead to improvements in their performance guarantees and parameter tuning, and could allow rapid exploration of new algorithms. Moreover, the guarantees computed with our tool appear tight in many cases.
Our methodology is based on the performance estimation problem (PEP) for which we have developed two representations of average consensus steps.
Our first formulation provides the exact worst-case performance of the method for a specific network matrix.
The second formulation provides upper performance bounds that are valid for an entire spectral class of matrices. This spectral formulation often allows recovering the worst possible network matrix based on the PEP solutions.
We demonstrate the use of our automatic performance methodology on three algorithms and we discover, among other things, that the worst-case performance of DGD is much better than its theoretical guarantee; that the performance of DIGing is independent of the number of agents and that it accepts much larger step-sizes than predicted by the theory; and that Acc-DNGD appears to exhibit a rate $\bigO(\frac{1}{K^{2-\beta}})$, for $\beta \in \qty(0,2)$, only if the step-sizes are sufficiently small.
Further developments of our methodology may include a spectral formulation that is independent of the number of agents \cite{PEP_dec_Nindep} or that considers other classes of network matrices, e.g. non-symmetric matrices or $B$-connected networks.
\appendices
\section*{Acknowledgment}
The authors wish to thank Adrien Taylor for his helpful advice concerning the PESTO toolbox.
\section{Note on scaling of DGD}
\label{annexe:scaling}
In the DGD analysis, we have one constant $R$ to bound the subgradients
$\|\nabla f_i(x_i^k)\|^2 \le R^2$ (for $k = 0,\dots,K$), and another one $D$ to bound the initial distance to the optimum $\|x^0 - x^*\|^2 \le D^2$.
In our performance estimation problem, we consider general positive values for these parameters $D > 0$ and $R > 0$ and we parametrize the step-size by $\alpha = \frac{Dh}{R\sqrt{K}}$, for some $h > 0$. To pass from this general problem to the specific case where $D=1$ and $R=1$, that we actually solve, we consider the following changes of variables:
$$\tilde{x}_i = \frac{x_i}{D}, \quad \tilde{f}_i(\tilde{x}_i) = \frac{1}{DR} f_i(x_i) \quad \text{ and } \quad \tilde{\alpha} = \frac{R \alpha}{D}.$$
These changes of variables do not alter the updates of the algorithm or the nature of the problem. They allow expressing the worst-case guarantee obtained for $f(\xmoy ) - f(x^*)$ with general values of $D$, $R$, and $h$, denoted $w(D,R,h)$, in terms of the worst-case guarantee obtained for $\tilde{f}(\tilde{x}_{\mathrm{av}}) - \tilde{f}(\tilde{x}^*)$ with $D=R=1$, denoted $\tilde{w}(1,1,h)$:
\begin{equation} \label{eq:scal_wc}
w(D,R,h) = DR~ \tilde{w}(1,1,h).
\end{equation}
The same kind of scaling can be applied to Theorem \ref{thm:bound}. The theorem is valid for general values of $D$ and $R$ but is specific to $\alpha = \frac{1}{\sqrt{K}}$, which is equivalent to picking $h = \frac{R}{D}$. After the scaling, we obtain the following bound, valid for $D = 1$, $R = 1$ and any value of $\alpha = \frac{h}{\sqrt{K}}$ with $h>0$:
\begin{equation} \label{eq:th_bound_scal}
\tilde{f}(\tilde{x}_{\mathrm{av}} ) - \tilde{f}(\tilde{x}^*) \le \frac{h^{-1} + h}{2 \sqrt{K}} + \frac{2 h}{\sqrt{K}(1-\lam)}.
\end{equation}
This scaled theoretical bound with $h=1$ is equivalent to the bound from Theorem \ref{thm:bound} with $D=R=1$, which was the focus of the numerical analysis in Section \ref{sec:NumRes}. \\
This bound \eqref{eq:th_bound_scal} can be extended to any value of $D > 0$ and $R > 0$, using the relation from equation \eqref{eq:scal_wc}:
\begin{equation} \label{eq:th_bound_scal_2}
f(\xmoy ) - f(x^*) \le DR \qty(\frac{h^{-1} + h}{2 \sqrt{K}} + \frac{2 h}{\sqrt{K}(1-\lam)}).
\end{equation}
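As a quick consistency check, choosing $h = R/D$ (i.e. $\alpha = \frac{1}{\sqrt{K}}$) in \eqref{eq:th_bound_scal_2} gives
\begin{equation*}
DR \qty(\frac{D/R + R/D}{2 \sqrt{K}} + \frac{2 R/D}{\sqrt{K}(1-\lam)}) = \frac{D^2 + R^2}{2 \sqrt{K}} + \frac{2 R^2}{\sqrt{K}(1-\lam)},
\end{equation*}
which is exactly the bound \eqref{eq:th_bound} of Theorem \ref{thm:bound}.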
\bibliographystyle{IEEEtran}
\section{Introduction}
Astrophysically realistic black holes rotate about their polar axis and it is therefore
widely believed that they belong to the two-dimensional family \cite{Notetwo} of spinning Kerr spacetimes which
are described by the curved line element \cite{ThWe,Chan,Notebl,Noteun}
\begin{eqnarray}\label{Eq1}
ds^2=-{{\Delta}\over{\rho^2}}(dt-a\sin^2\theta
d\phi)^2+{{\rho^2}\over{\Delta}}dr^2+\rho^2
d\theta^2+{{\sin^2\theta}\over{\rho^2}}\big[a
dt-(r^2+a^2)d\phi\big]^2\ .
\end{eqnarray}
The metric functions in (\ref{Eq1}) are given by the functional expressions
\begin{equation}\label{Eq2}
\Delta\equiv r^2-2Mr+a^2\ \ \ \ ; \ \ \ \ \rho^2\equiv r^2+a^2\cos^2\theta\ .
\end{equation}
The physical parameters $\{M,J=Ma\}$ in (\ref{Eq2}) are respectively the mass and angular momentum \cite{Noteaa} of the spinning
Kerr black-hole spacetime.
The roots of the metric function $\Delta(r)$ determine the characteristic spin-dependent horizon radii
\begin{equation}\label{Eq3}
r_{\pm}=M\pm\sqrt{M^2-a^2}\
\end{equation}
of the black hole.
Interestingly, the spinning and curved black-hole spacetime (\ref{Eq1}) is characterized by a non-trivial (non-zero)
spatially-dependent invariant
\begin{equation}\label{Eq4}
{\cal G}\equiv R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2\ ,
\end{equation}
known as the Gauss-Bonnet curvature invariant.
From Eq. (\ref{Eq4}) one finds that the Gauss-Bonnet curvature invariant of the
spinning Kerr black-hole spacetime (\ref{Eq1}) is given by the two-dimensional functional expression \cite{Donnw1,Donnw2}
\begin{equation}\label{Eq5}
{\cal G}_{\text{Kerr}}(r,\theta)={{48M^2}\over{(r^2+a^2\cos^2\theta)^6}}
\cdot\big({r^6-15r^4a^2\cos^2\theta+15r^2a^4\cos^4\theta-a^6\cos^6\theta}\big)\ .
\end{equation}
For a given value of the black-hole rotation parameter $a$, the expression (\ref{Eq5})
is a two-dimensional function of the radial coordinate
\begin{equation}\label{Eq6}
r\in[M+\sqrt{M^2-a^2},\infty]\
\end{equation}
and the polar coordinate
\begin{equation}\label{Eq7}
\theta\in[0,\pi]\ \Rightarrow \ \cos^2\theta\in[0,1]\ .
\end{equation}
The Gauss-Bonnet curvature invariant (\ref{Eq4}) has recently attracted the attention of many physicists and mathematicians.
In particular, it has been revealed (see \cite{Donnw1,Donnw2,Dons,ChunHer,Hodca,Herkn} and references therein) that the influential
no-hair conjecture \cite{NHC,JDB} can be
violated in composed Einstein-Gauss-Bonnet-scalar field theories whose action
\begin{equation}\label{Eq8}
S={1\over2}\int
d^4x\sqrt{-g}\Big[R-{1\over2}\nabla_{\alpha}\phi\nabla^{\alpha}\phi+f(\phi){\cal
G}\Big]\
\end{equation}
contains a direct (non-minimal) coupling of the scalar field to the spatially-dependent Gauss-Bonnet invariant.
Interestingly, it has been proved \cite{Dons,Donnw1,Donnw2,ChunHer,Hodca,Herkn} that the boundary between bald spinning
Kerr black holes and hairy black-hole configurations in the Einstein-Gauss-Bonnet field theory (\ref{Eq8}) is marked
by the presence of `cloudy' Kerr black holes that support linearized scalar fields with a non-minimal coupling
to the Gauss-Bonnet invariant (\ref{Eq5}) of the curved spacetime (\ref{Eq1}).
Since the Gauss-Bonnet curvature invariant ${\cal G}_{\text{Kerr}}(r,\cos\theta;a/M)$ plays
the role of an effective {\it spatially-dependent} mass term in the Klein-Gordon wave equation of the
Einstein-Gauss-Bonnet-scalar field theory (\ref{Eq8}) \cite{Donnw1,Donnw2,Dons,ChunHer,Hodca,Herkn},
it is of physical interest to explore its highly non-trivial two-dimensional spatial functional behavior
in the exterior region of the black-hole spacetime.
The main goal of the present paper is to explore, using {\it analytical} techniques,
the physical and mathematical properties of the two-dimensional
Gauss-Bonnet curvature invariant ${\cal G}_{\text{Kerr}}(r,\cos\theta;a/M)$ of astrophysically realistic
spinning Kerr black holes.
In particular, below we shall derive remarkably compact analytical formulas for the
characteristic spin-dependent global extremum points of the Gauss-Bonnet invariant in the
exterior region (\ref{Eq6}) of the rotating Kerr spacetime (\ref{Eq1}).
Interestingly, we shall reveal the existence of two critical black-hole rotation parameters,
$(a/M)^{-}_{\text{crit}}=1/2$ and $(a/M)^{+}_{\text{crit}}\simeq0.7818$
[see below the exact analytically derived dimensionless expression (\ref{Eq46})], which determine three qualitatively
different global functional behaviors of the spin-dependent Kerr
Gauss-Bonnet curvature invariant (\ref{Eq5}).
\section{Spatial behavior of the external two-dimensional Gauss-Bonnet
curvature invariant ${\cal G}_{\text{Kerr}}(r,\theta)$ within its domain of existence}
In the present section we shall
search for local extremum points and saddle points of the two-dimensional Kerr
Gauss-Bonnet invariant (\ref{Eq5}) {\it within} the
physically allowed domain [see Eqs. (\ref{Eq6}) and (\ref{Eq7})]
\begin{equation}\label{Eq9}
r\in(M+\sqrt{M^2-a^2},\infty)\ \ \ \ \text{with}\ \ \ \ \cos^2\theta\in(0,1)\ .
\end{equation}
The condition
\begin{equation}\label{Eq10}
{{\partial {\cal G}_{\text{Kerr}}(r,\theta)}\over{\partial r}}=0\
\end{equation}
yields the effectively cubic equation
\begin{equation}\label{Eq11}
r^6-21r^4a^2\cos^2\theta+35r^2a^4\cos^4\theta-7a^6\cos^6\theta=0\ ,
\end{equation}
whereas the condition
\begin{equation}\label{Eq12}
{{\partial {\cal G}_{\text{Kerr}}(r,\theta)}\over{\partial\theta}}=0\
\end{equation}
yields the effectively cubic equation \cite{Notecos0}
\begin{equation}\label{Eq13}
7r^6-35r^4a^2\cos^2\theta+21r^2a^4\cos^4\theta-a^6\cos^6\theta=0\ .
\end{equation}
From Eqs. (\ref{Eq11}) and (\ref{Eq13}) one finds the extremum condition
\begin{equation}\label{Eq14}
7r^4-14r^2a^2\cos^2\theta+3a^4\cos^4\theta=0\ .
\end{equation}
The solution of (\ref{Eq14}) that can respect the condition (\ref{Eq9}) is
given by the dimensionless relation
\begin{equation}\label{Eq15}
\cos\theta=\pm\sqrt{{{7-2\sqrt{7}}\over{3}}}\cdot {{r}\over{a}}\ .
\end{equation}
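For completeness, we note that the extremum condition (\ref{Eq14}) follows from the linear combination $7\times(\ref{Eq11})-(\ref{Eq13})=-16a^2\cos^2\theta\cdot\big(7r^4-14r^2a^2\cos^2\theta+3a^4\cos^4\theta\big)$ with $\cos^2\theta\neq0$, and that solving (\ref{Eq14}) as a quadratic equation in $a^2\cos^2\theta$ yields the two roots $a^2\cos^2\theta={{(7\mp2\sqrt{7})r^2}\over{3}}$, of which only the lower one is compatible with the requirement $\cos^2\theta\leq1$ in the exterior region $r\geq a$, thus yielding (\ref{Eq15}).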
Substituting (\ref{Eq15}) into the relation
\begin{equation}\label{Eq16}
d\equiv {{\partial^2 {\cal G}_{\text{Kerr}}(r,\theta)}\over{\partial r^2}}\cdot
{{\partial^2 {\cal G}_{\text{Kerr}}(r,\theta)}\over{\partial\theta^2}}-
\Big[{{\partial^2 {\cal G}_{\text{Kerr}}(r,\theta)}\over{\partial r\partial\theta}}\Big]^2\ ,
\end{equation}
one finds \cite{Notedn}
\begin{equation}\label{Eq17}
d<0\ ,
\end{equation}
which implies that the two-dimensional Gauss-Bonnet curvature invariant (\ref{Eq5}) has no local extremum points within
the domain (\ref{Eq9}).
\section{Functional behavior of the external Gauss-Bonnet curvature invariant ${\cal G}_{\text{Kerr}}(r,\theta)$
along its angular boundaries}
In the present section we shall analyze the functional behavior of the Kerr Gauss-Bonnet curvature invariant (\ref{Eq5})
along the angular boundaries [see Eq. (\ref{Eq7})] of the black-hole spacetime (\ref{Eq1}).
\subsection{Analysis of the Gauss-Bonnet curvature invariant along the equatorial boundary $\cos^2\theta=0$}
Substituting the equatorial boundary relation
\begin{equation}\label{Eq18}
\cos\theta=0\
\end{equation}
into Eq. (\ref{Eq5}), one obtains the remarkably simple functional expression
\begin{equation}\label{Eq19}
{\cal G}_{\text{Kerr}}(r,\cos\theta=0)={{48M^2}\over{r^6}}\ ,
\end{equation}
which is a monotonically decreasing function of the radial coordinate $r$ whose maximum value is obtained at
the black-hole outer horizon:
\begin{equation}\label{Eq20}
{\cal G}_{\text{Kerr}}(r=M+\sqrt{M^2-a^2},\cos\theta=0)={{48M^2}\over{(M+\sqrt{M^2-a^2})^6}}\ .
\end{equation}
\subsection{Analysis of the Gauss-Bonnet curvature invariant along the polar boundary $\cos^2\theta=1$}
Substituting the polar boundary relation
\begin{equation}\label{Eq21}
\cos^2\theta=1\
\end{equation}
into Eq. (\ref{Eq5}), one obtains the spin-dependent curvature expression
\begin{equation}\label{Eq22}
{\cal G}_{\text{Kerr}}(r,\cos^2\theta=1)={{48M^2}\over{(r^2+a^2)^6}}
\cdot(r^2-a^2)(r^4-14a^2r^2+a^4)\ .
\end{equation}
Interestingly, one finds that the radial expression (\ref{Eq22}) has a non-trivial radial functional behavior.
In particular, the Gauss-Bonnet function (\ref{Eq22}) has three radial extremum points which
are determined by the effectively cubic equation
\begin{equation}\label{Eq23}
r^6-21a^2\cdot r^4+35a^4\cdot r^2-7a^6=0\ .
\end{equation}
Defining the dimensionless variable
\begin{equation}\label{Eq24}
x\equiv \Big({{r}\over{a}}\Big)^2\ ,
\end{equation}
the characteristic equation (\ref{Eq23}) can be expressed in the form
\begin{equation}\label{Eq25}
7-35x+21x^2-x^3=0\ .
\end{equation}
The cubic radial equation (\ref{Eq25}) can be solved analytically to yield the spin-dependent radial
extremum points of the Gauss-Bonnet invariant (\ref{Eq22}).
In particular, one finds that the Gauss-Bonnet function (\ref{Eq22}) has one local radial minimum point and one
local radial maximum point that, in principle, can satisfy the radial requirement (\ref{Eq6}) \cite{Notex3}:
\begin{equation}\label{Eq26}
x_{\text{min}}=7+4\sqrt{7}\sin\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]-
4\sqrt{{{7}\over{3}}}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]
\end{equation}
and
\begin{equation}\label{Eq27}
x_{\text{max}}=7+8\sqrt{{{7}\over{3}}}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]
\end{equation}
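The closed-form expressions (\ref{Eq26}) and (\ref{Eq27}) can be checked against a direct numerical solution of the cubic equation (\ref{Eq25}); a minimal Python/NumPy sketch (the printed spin boundaries reappear in Eqs. (\ref{Eq28}) and (\ref{Eq30}) below) reads:
\begin{verbatim}
import numpy as np

phi = np.arctan(1.0/(3.0*np.sqrt(3.0)))/3.0
x_min = 7.0 + 4.0*np.sqrt(7.0)*np.sin(phi) - 4.0*np.sqrt(7.0/3.0)*np.cos(phi)  # Eq. (26)
x_max = 7.0 + 8.0*np.sqrt(7.0/3.0)*np.cos(phi)                                 # Eq. (27)
print(x_min, x_max)                       # ~1.5724 and ~19.1957

# roots of the cubic 7 - 35x + 21x^2 - x^3 = 0 [Eq. (25)]
print(np.sort(np.roots([-1.0, 21.0, -35.0, 7.0]).real))  # ~[0.2319, 1.5724, 19.1957]

# corresponding spin boundaries a/M = 2*sqrt(x)/(1+x), cf. Eqs. (28) and (30) below
print(2.0*np.sqrt(x_max)/(1.0 + x_max))   # ~0.434
print(2.0*np.sqrt(x_min)/(1.0 + x_min))   # ~0.9749
\end{verbatim}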
Taking cognizance of Eqs. (\ref{Eq6}), (\ref{Eq24}), (\ref{Eq26}), and (\ref{Eq27}) one finds that,
depending on the magnitude of the dimensionless black-hole rotation parameter $a/M$,
the Gauss-Bonnet invariant (\ref{Eq22})
has three qualitatively different radial functional behaviors:
\newline
{Case I:} From Eqs. (\ref{Eq6}), (\ref{Eq24}), and (\ref{Eq27}) one finds that, in the dimensionless spin regime \cite{Notecranmax}
\begin{equation}\label{Eq28}
{{a}\over{M}}<{{2\sqrt{x_{\text{max}}}}\over{1+x_{\text{max}}}}
\ ,
\end{equation}
the expression (\ref{Eq22}) is a monotonically decreasing function in the external radial region (\ref{Eq6}) of the black-hole
spacetime (\ref{Eq1}), whose maximum value
\begin{equation}\label{Eq29}
{\cal G}_{\text{Kerr}}(r=M+\sqrt{M^2-a^2},\cos^2\theta=1)=
{{3(M^2-4a^2)[(M+\sqrt{M^2-a^2})^2-a^2]}\over{M^4(M+\sqrt{M^2-a^2})^4}}\
\end{equation}
is located on the black-hole outer horizon.
Note that the curvature value of the Gauss-Bonnet invariant is larger at the maximum point (\ref{Eq20})
than at the maximum point (\ref{Eq29}).
\newline
{Case II:} From Eqs. (\ref{Eq6}), (\ref{Eq24}), (\ref{Eq26}), and (\ref{Eq27}) one finds that, in the dimensionless spin regime \cite{Notecranmin}
\begin{equation}\label{Eq30}
0.4338\simeq{{2\sqrt{x_{\text{max}}}}\over{1+x_{\text{max}}}}\leq{{a}\over{M}}<
{{2\sqrt{x_{\text{min}}}}\over{1+x_{\text{min}}}}\simeq0.9749\ ,
\end{equation}
the Gauss-Bonnet invariant ${\cal G}_{\text{Kerr}}(r,\cos^2\theta=1)$ has a local maximum radial point at
\begin{equation}\label{Eq31}
r_{\text{max}}=\sqrt{7+8\sqrt{{{7}\over{3}}}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]}\cdot a
\geq r_+\
\end{equation}
with
\begin{eqnarray}\label{Eq32}
{\cal G}_{\text{Kerr}}(r=r_{\text{max}},\cos^2\theta=1)&=&
{{48M^2}\over{a^6}}
\cdot{{(x_{\text{max}}-1)(x^2_{\text{max}}-14x_{\text{max}}+1)}\over{(x_{\text{max}}+1)^6}}
\ .
\end{eqnarray}
One finds that, in the dimensionless spin regime (\ref{Eq30}), the maximum point (\ref{Eq32})
has a Gauss-Bonnet curvature value which is smaller than the corresponding curvature at the
maximum point (\ref{Eq20}).
\newline
{Case III:} From Eqs. (\ref{Eq6}), (\ref{Eq24}), and (\ref{Eq26})
one finds that, for highly spinning Kerr black holes in the dimensionless regime
\begin{equation}\label{Eq33}
{{a}\over{M}}\geq{{2\sqrt{x_{\text{min}}}}\over{1+x_{\text{min}}}}\simeq0.9749\ ,
\end{equation}
the Gauss-Bonnet invariant ${\cal G}_{\text{Kerr}}(r,\cos^2\theta=1)$ has a local minimum radial point at
\begin{eqnarray}\label{Eq34}
r_{\text{min}}&=&\sqrt{7+4\sqrt{7}\sin\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]-
4\sqrt{{{7}\over{3}}}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]}\cdot a
\geq r_+\
\end{eqnarray}
with
\begin{eqnarray}\label{Eq35}
{\cal G}_{\text{Kerr}}(r=r_{\text{min}},\cos^2\theta=1)&=&
{{48M^2}\over{a^6}}
\cdot{{(x_{\text{min}}-1)(x^2_{\text{min}}-14x_{\text{min}}+1)}\over{(x_{\text{min}}+1)^6}}\nonumber \\
&\simeq&
-1.7581\cdot{{M^2}\over{a^6}}\ ,
\end{eqnarray}
and a local maximum radial point which is characterized by the properties (\ref{Eq31}) and (\ref{Eq32}).
Interestingly, one finds that, in the dimensionless spin regime (\ref{Eq33}), the maximum point (\ref{Eq32})
has a curvature value which is smaller than the corresponding Gauss-Bonnet curvature at the
maximum point (\ref{Eq20}).
\section{Functional behavior of the Gauss-Bonnet curvature invariant ${\cal G}_{\text{Kerr}}(r,\theta)$
along its radial boundaries}
In the present section we shall analyze the functional behavior of the Gauss-Bonnet invariant (\ref{Eq5}) along the radial
boundaries [see Eq. (\ref{Eq6})] of the external black-hole spacetime (\ref{Eq1}).
We first point out that asymptotically flat Kerr black-hole spacetimes are characterized
by the trivial asymptotic functional behavior
\begin{equation}\label{Eq36}
{\cal G}_{\text{Kerr}}(r\to\infty,\cos\theta)\to0^+\ .
\end{equation}
Substituting the spin-dependent horizon boundary relation
\begin{equation}\label{Eq37}
r=r_+=M+\sqrt{M^2-a^2}\
\end{equation}
into Eq. (\ref{Eq5}), one obtains the functional expression
\begin{eqnarray}\label{Eq38}
&{\cal G}_{\text{Kerr}}(r=r_+,\cos\theta)= \nonumber \\ &
{{48M^2[(M+\sqrt{M^2-a^2})^2-a^2\cos^2\theta][(M+\sqrt{M^2-a^2})^4-14a^2\cos^2\theta (M+\sqrt{M^2-a^2})^2+a^4\cos^4\theta]}\over{[(M+\sqrt{M^2-a^2})^2+a^2\cos^2\theta]^6}}\ .
\end{eqnarray}
The Gauss-Bonnet curvature invariant (\ref{Eq38}) on the outer horizon of the spinning Kerr black hole is
characterized by the relations [see Eqs. (\ref{Eq20}) and (\ref{Eq29})]
\begin{equation}\label{Eq39}
{\cal G}_{\text{Kerr}}(r=r_+,\cos^2\theta=0)=
{{48M^2}\over{(M+\sqrt{M^2-a^2})^6}}\
\end{equation}
and
\begin{equation}\label{Eq40}
{\cal G}_{\text{Kerr}}(r=r_+,\cos^2\theta=1)=
{{3(M^2-4a^2)[(M+\sqrt{M^2-a^2})^2-a^2]}\over{M^4(M+\sqrt{M^2-a^2})^4}}\ .
\end{equation}
Inspection of the (rather cumbersome) curvature expression (\ref{Eq38}) reveals that, as a function of the polar
variable $\cos^2\theta$, it has three extremum angular points.
In particular, from Eq. (\ref{Eq38}) one obtains the effectively cubic equation
\begin{equation}\label{Eq41}
7r^6_+-35r^4_+a^2\cos^2\theta+21r^2_+a^4\cos^4\theta-a^6\cos^6\theta=0\
\end{equation}
for the locations of the spin-dependent polar extremum points of the Gauss-Bonnet curvature invariant (\ref{Eq38})
along the black-hole horizon.
Defining the dimensionless variable
\begin{equation}\label{Eq42}
x\equiv \Big[{{a}\over{r_+(a/M)}}\Big]^2\cos^2\theta\ ,
\end{equation}
one finds that Eq. (\ref{Eq41}) can be written in the form
\begin{equation}\label{Eq43}
7-35x+21x^2-x^3=0\ .
\end{equation}
The cubic polar equation (\ref{Eq43}) can be solved analytically to yield the spin-dependent
extremum angular points of the Gauss-Bonnet curvature invariant (\ref{Eq38}) along the polar angular direction of the black-hole horizon.
In particular, one finds that
the only solution of (\ref{Eq43}) that can respect the angular condition (\ref{Eq7}) is a minimum point of the curvature
function (\ref{Eq38}) which is given by the dimensionless relation \cite{Notex2x3,Notexap}
\begin{equation}\label{Eq44}
x_{\text{min}}=7-4\sqrt{7}\sin\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]-
4\sqrt{{{7}\over{3}}}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]
\ .
\end{equation}
Taking cognizance of Eqs. (\ref{Eq7}), (\ref{Eq42}), and (\ref{Eq44}) one finds that, for spinning
Kerr black holes in the dimensionless sub-critical regime \cite{Notecran}
\begin{equation}\label{Eq45}
{{a}\over{M}}<\Big({{a}\over{M}}\Big)_{\text{crit}}=
\sqrt{{{7+\sqrt{7}\cos\Big[{1\over3}\arctan\big(3\sqrt{3}\big)\Big]-
\sqrt{21}\sin\Big[{1\over3}\arctan\big(3\sqrt{3}\big)\Big]}\over{12}}}\ ,
\end{equation}
the extremum points of the curvature function (\ref{Eq38}) [as obtained from the cubic equation (\ref{Eq43})]
are characterized by the non-physical relation $\cos^2\theta>1$ [see Eq. (\ref{Eq7})], in which
case the Gauss-Bonnet curvature invariant (\ref{Eq38}) monotonically decreases
in the polar angular regime (\ref{Eq7}) from the value (\ref{Eq39}) to the value (\ref{Eq40}).
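The numerical value of the critical rotation parameter defined in (\ref{Eq45}) can be cross-checked by demanding that the polar minimum point (\ref{Eq44}) reaches the pole $\cos^2\theta=1$; a short Python sketch (in units with $M=1$) reads:
\begin{verbatim}
import numpy as np

psi = np.arctan(3.0*np.sqrt(3.0))/3.0
a_crit = np.sqrt((7.0 + np.sqrt(7.0)*np.cos(psi) - np.sqrt(21.0)*np.sin(psi))/12.0)
print(a_crit)                                    # ~0.782, the critical spin of Eq. (45)

# cross-check: the polar minimum (44)/(47) reaches cos^2(theta) = 1 exactly when
# (a/r_+)^2 = x_min, i.e. a = sqrt(x_min)*(1 + sqrt(1 - a^2)) in units with M = 1
phi = np.arctan(1.0/(3.0*np.sqrt(3.0)))/3.0
x_min = 7.0 - 4.0*np.sqrt(7.0)*np.sin(phi) - 4.0*np.sqrt(7.0/3.0)*np.cos(phi)  # Eq. (44)
a = 0.5
for _ in range(200):
    a = np.sqrt(x_min)*(1.0 + np.sqrt(1.0 - a*a))   # simple fixed-point iteration
print(a)                                            # converges to the same ~0.782
\end{verbatim}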
It is important to point out that, in the dimensionless spin regime $a/M\geq{1\over 2}$, the minimum point (\ref{Eq40}) at
the poles of the black-hole surface is characterized by a non-positive value of the Gauss-Bonnet curvature invariant
which is smaller than the asymptotic limit (\ref{Eq36}).
Intriguingly, one finds that, for rapidly spinning Kerr black holes
in the complementary super-critical regime
\begin{equation}\label{Eq46}
{{a}\over{M}}\geq\Big({{a}\over{M}}\Big)_{\text{crit}}=
\sqrt{{{7+\sqrt{7}\cos\Big[{1\over3}\arctan\big(3\sqrt{3}\big)\Big]-
\sqrt{21}\sin\Big[{1\over3}\arctan\big(3\sqrt{3}\big)\Big]}\over{12}}}
\ ,
\end{equation}
one of the extremum angular points [the polar minimum point (\ref{Eq44})] of the curvature function (\ref{Eq38})
lies within the physically allowed angular region (\ref{Eq7}), in which case the Gauss-Bonnet curvature invariant (\ref{Eq38})
has a non-trivial (non-monotonic) angular functional behavior.
In particular, one obtains from Eqs. (\ref{Eq3}), (\ref{Eq42}), and (\ref{Eq44}) the polar minimum point
\begin{eqnarray}\label{Eq47}
(\cos^2\theta)_{\text{min}}=
\big({{r_+}\over{a}}\big)^2\cdot
\Big\{7-4\sqrt{7}\sin\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]-
4\sqrt{{{7}\over{3}}}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]\Big\}
\ .
\end{eqnarray}
It is interesting to point out that the analytically derived formula (\ref{Eq47}) implies that the
polar angle $\theta_{\text{min}}=\theta_{\text{min}}(a/M)$ (with $\theta_{\text{min}}\leq90^{\circ}$), which characterizes
the minimum angular point of the Gauss-Bonnet curvature invariant (\ref{Eq38}), is
a monotonically increasing function of the dimensionless black-hole rotation parameter $a/M$.
Taking cognizance of Eqs. (\ref{Eq38}), and (\ref{Eq47}), one obtains the functional expression
\begin{eqnarray}\label{Eq48}
{\cal G}_{\text{Kerr}}[r=r_+(a/M),(\cos^2\theta)_{\text{min}}]=
-{{M^2}\over{r^6_+}}\cdot
{{57+28\sqrt{21}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]}\over{8}}
\
\end{eqnarray}
for the spin-dependent minimal value of the Gauss-Bonnet curvature invariant (\ref{Eq38}) that characterizes the spinning Kerr
spacetime (\ref{Eq1}) along the black-hole horizon.
Interestingly, one finds that the absolute value of the analytically derived functional relation (\ref{Eq48}) is
a monotonically increasing function of the dimensionless black-hole rotation parameter $a/M$.
Taking cognizance of Eqs. (\ref{Eq35}) and (\ref{Eq48}),
one finds that for rapidly-rotating Kerr black holes in the dimensionless spin
regime (\ref{Eq46}), the value (\ref{Eq48}) of the
curvature invariant ${\cal G}_{\text{Kerr}}[r=r_+(a/M),(\cos^2\theta)_{\text{min}}]$
is smaller (that is, more negative)
than the curvature value ${\cal G}_{\text{Kerr}}(r=r_{\text{min}},\cos^2\theta=1)$ given by Eq. (\ref{Eq35}).
In table \ref{Table1} we present, using the analytically derived formulas (\ref{Eq47}) and (\ref{Eq48}), the angular values of the
polar minimum points $(\cos^2\theta)_{\text{min}}(a/M)$ and the corresponding values of the Gauss-Bonnet curvature
invariant ${\cal G}_{\text{Kerr}}[r=r_+(a/M),(\cos^2\theta)_{\text{min}}]$ for various values of the dimensionless black-hole rotation parameter $a/M$ in the super-critical regime (\ref{Eq46}).
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|}
\hline $a/M$ & \ $(\cos^2\theta)_{\text{min}}$\ \ & \
$M^4{\cal G}_{\text{Kerr}}[r=r_+(a/M),(\cos^2\theta)_{\text{min}}]$\ \ \\
\hline \ $0.80$\ \ \ &\ \ 0.9277\ \ \ &\ \ -1.3788\ \ \\
\hline \ $0.85$\ \ \ &\ \ 0.7482\ \ \ &\ \ -1.8262\ \ \\
\hline \ $0.90$\ \ \ &\ \ 0.5903\ \ \ &\ \ -2.6393\ \ \\
\hline \ $0.95$\ \ \ &\ \ 0.4425\ \ \ &\ \ -4.5301\ \ \\
\hline \ $0.975$\ \ \ &\ \ 0.3644\ \ \ &\ \ -6.9398\ \ \\
\hline \ $0.999$\ \ \ &\ \ 0.2536\ \ \ &\ \ -17.7924\ \ \\
\hline \ $1.000$\ \ \ &\ \ 0.2319\ \ \ &\ \ -23.1318\ \ \\
\hline
\end{tabular}
\caption{The Gauss-Bonnet curvature invariant of spinning Kerr black holes.
We present, for various super-critical values of the dimensionless black-hole
rotation parameter $a/M$ [see Eq. (\ref{Eq46})], the values of $(\cos^2\theta)_{\text{min}}(a/M)$ which characterize the
minimum angular points of the Gauss-Bonnet invariant (\ref{Eq38}) along the black-hole horizon as obtained
from the analytically derived formula (\ref{Eq47}).
We also present the corresponding spin-dependent values of the dimensionless Gauss-Bonnet curvature
invariant ${\cal G}_{\text{Kerr}}[r=r_+(a/M),(\cos^2\theta)_{\text{min}}]$ as obtained
from the analytically derived formula (\ref{Eq48}).}\label{Table1}
\end{table}
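The entries of Table \ref{Table1} can be reproduced directly from the analytically derived formulas (\ref{Eq47}) and (\ref{Eq48}); a minimal Python sketch (in units with $M=1$) reads:
\begin{verbatim}
import numpy as np

phi = np.arctan(1.0/(3.0*np.sqrt(3.0)))/3.0
x_min = 7.0 - 4.0*np.sqrt(7.0)*np.sin(phi) - 4.0*np.sqrt(7.0/3.0)*np.cos(phi)  # Eq. (44)
g_min_coef = -(57.0 + 28.0*np.sqrt(21.0)*np.cos(phi))/8.0                      # Eq. (48)

for a in (0.80, 0.85, 0.90, 0.95, 0.975, 0.999, 1.000):   # a/M
    rp = 1.0 + np.sqrt(1.0 - a*a)                         # outer horizon radius r_+
    cos2_min = (rp/a)**2 * x_min                          # Eq. (47)
    G_min = g_min_coef/rp**6                              # Eq. (48), in units of M^{-4}
    print(f"{a:6.3f}  {cos2_min:7.4f}  {G_min:9.4f}")
\end{verbatim}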
\section{Kerr black holes supporting infinitesimally thin massive scalar rings}
In the present section we shall reveal the fact that spinning Kerr black holes can support thin matter rings which
are made of {\it massive} scalar fields with a non-minimal coupling to the Gauss-Bonnet invariant (\ref{Eq5})
of the curved spacetime. As we shall now show, this intriguing physical observation is a direct outcome of our
analytically derived results.
The composed Einstein-Gauss-Bonnet-nonminimally-coupled-massive-scalar field
theory is characterized by the action \cite{Donnw2}
\begin{equation}\label{Eq49}
S=\int
d^4x\sqrt{-g}\Big[{1\over4}R-{1\over2}\nabla_{\alpha}\phi\nabla^{\alpha}\phi
-{1\over2}\mu^2\phi^2+f(\phi){\cal G}\Big]\ .
\end{equation}
Here the physical parameter $\mu$ is the mass of the non-minimally coupled scalar field \cite{Notemuu}.
It has been proved \cite{Dons,Donnw1,Donnw2,ChunHer,Hodca,Herkn} that the boundary between bald spinning
Kerr black holes and hairy (scalarized) black-hole configurations in the Einstein-Gauss-Bonnet-scalar field
theory (\ref{Eq49}) is marked by the presence of marginally stable Kerr black holes (`cloudy' Kerr black holes)
that support linearized configurations of the non-minimally coupled massive scalar fields.
Intriguingly, as we shall now show, the supported spatially regular external field configurations owe their existence to the
non-minimal direct coupling term $f(\phi){\cal G}$
between the massive scalar field $\phi$ and the Gauss-Bonnet invariant (\ref{Eq5}) of the
curved spacetime [see the action (\ref{Eq49})].
The action (\ref{Eq49}) yields the Klein-Gordon differential equation \cite{Donnw2}
\begin{equation}\label{Eq50}
\nabla^\nu\nabla_{\nu}\phi=\mu^2_{\text{eff}}\phi\
\end{equation}
for the non-minimally coupled massive scalar field configurations, where the spatially-dependent
effective mass term in Eq. (\ref{Eq50}),
\begin{equation}\label{Eq51}
\mu^2_{\text{eff}}(r,\theta;M,a)=\mu^2-\eta\cdot{\cal G}_{\text{Kerr}}(r,\theta)\ ,
\end{equation}
reflects the direct massive-scalar-field-Kerr-Gauss-Bonnet coupling in the
composed Einstein-Gauss-Bonnet-nonminimally-coupled-massive-scalar field theory (\ref{Eq49}).
Here the physical parameter $\eta$ \cite{Noteetaa}, which appears in the weak-field expansion \cite{Donnw2}
\begin{equation}\label{Eq52}
f(\phi)={1\over2}\eta\phi^2\
\end{equation}
of the scalar coupling function, controls the strength of the direct (non-minimal) coupling
between the massive scalar field configurations
and the Gauss-Bonnet curvature invariant (\ref{Eq5}).
Interestingly, taking cognizance of Eq. (\ref{Eq5}), one finds that, depending on the relative magnitudes of the physical
parameters $\eta$ and $\mu$ of the composed Einstein-Gauss-Bonnet-nonminimally-coupled-massive-scalar field
theory (\ref{Eq49}), the spatially-dependent effective mass term (\ref{Eq51}) of the
non-minimally coupled scalar field may become negative in the vicinity of the outer horizon of the central
supporting Kerr black hole.
The presence of an effective binding ({\it negative}) potential well outside the
outer horizon of the supporting black hole provides a necessary condition for the existence of spatially regular
bound-state scalar configurations (scalar clouds) that are supported in the asymptotically flat
curved black-hole spacetime (\ref{Eq1}) \cite{Dons,Donnw1,Donnw2,ChunHer,Hodca,Herkn}.
In particular, for a given mass $\mu$ of the non-minimally coupled scalar field in the dimensionless
large-mass regime
[or equivalently, in the dimensionless large-coupling $\eta/M^2\gg1$ regime, see Eq. (\ref{Eq55}) below]
\begin{equation}\label{Eq53}
M\mu\gg1\ ,
\end{equation}
the {\it onset} of the spontaneous scalarization phenomena is marked by the critical relation \cite{Hodca,Herkn,Hodjp}
\begin{equation}\label{Eq54}
\text{min}\{\mu^2_{\text{eff}}(r,\theta;M,a)\}\to 0^{-}\
\end{equation}
of the effective mass term in the characteristic Klein-Gordon differential equation (\ref{Eq50}).
Taking cognizance of Eqs. (\ref{Eq20}), (\ref{Eq51}), and (\ref{Eq54}) one finds that, in the
large-mass regime (\ref{Eq53}) with
\begin{equation}\label{Eq55}
{{48M^2}\over{(M+\sqrt{M^2-a^2})^6}}\cdot{{\eta}\over{\mu^2}}\to 1^{+}\ ,
\end{equation}
the effective mass term (\ref{Eq51}) of the composed Kerr-black-hole-nonminimally-coupled-massive-scalar-field system
becomes {\it negative} (attractive) in the narrow equatorial
{\it ring} which is characterized by the relations $r\to r_+(a,M)$ with $\theta\to\pi/2$.
This physically interesting fact implies that, in the dimensionless large-mass regime (\ref{Eq53}),
spinning Kerr black holes can support infinitesimally thin non-minimally
coupled massive scalar configurations (scalar rings) which are characterized by the dimensionless
ratio (\ref{Eq55}) and are located on the equator of the black-hole surface.
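For orientation, at the onset the critical relation (\ref{Eq55}) can be rewritten in the dimensionless form $\eta/M^2=(\mu M)^2\cdot(r_+/M)^6/48$; the spin dependence of the factor $(r_+/M)^6/48$ is illustrated by the short Python sketch:
\begin{verbatim}
import numpy as np

# Eq. (55) at the onset:  eta/M^2 = (mu*M)^2 * (r_+/M)^6 / 48
for a in (0.0, 0.5, 0.9, 1.0):          # a/M
    rp = 1.0 + np.sqrt(1.0 - a*a)       # r_+/M
    print(a, rp**6/48.0)                # spin-dependent factor (r_+/M)^6/48
\end{verbatim}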
\section{Summary and discussion}
Motivated by the recent growing interest in composed Einstein-Gauss-Bonnet field theories and the presence
of cloudy Kerr black holes \cite{Notekb} that support external matter configurations with a direct non-minimal coupling
to the Gauss-Bonnet invariant (see \cite{Donnw1,Donnw2,Dons,ChunHer,Hodca,Herkn} and references therein),
we have explored the physical and mathematical properties of the two-dimensional
Gauss-Bonnet curvature invariant ${\cal G}_{\text{Kerr}}(r,\cos\theta;a/M)$ of astrophysically realistic
spinning Kerr black holes.
The main {\it analytical} results derived in this paper are as follows:
(1) We have proved that the global maximum point \cite{Notering} of the Kerr Gauss-Bonnet curvature invariant in the
exterior region $r\in[r_+(a/M),\infty]$ of the black-hole spacetime is always
(that is, for all Kerr black holes in the physically allowed regime $a/M\in[0,1]$)
located on the black-hole horizon.
In particular, taking cognizance of Eqs. (\ref{Eq20}), (\ref{Eq29}), and (\ref{Eq32}),
one finds that the global maximum value
\begin{equation}\label{Eq56}
\text{max}\Big\{M^4\cdot{\cal G}_{\text{Kerr}}(r\in[r_+,\infty],\cos^2\theta;a/M)\Big\}=
{{48M^6}\over{(M+\sqrt{M^2-a^2})^6}}\
\end{equation}
of the Gauss-Bonnet curvature invariant is
located on the equator ($\theta_{\text{max}}=90^{\circ}$) of the black-hole surface.
(2) The analytically derived formula (\ref{Eq56}) implies that the global maximum value
of the Gauss-Bonnet curvature invariant is a monotonically increasing function
of the dimensionless black-hole rotation parameter $a/M$.
In particular, the expression (\ref{Eq56}) yields the dimensionless curvature value
\begin{equation}\label{Eq57}
\text{max}\Big\{M^4\cdot{\cal G}_{\text{Kerr}}(r\in[r_+,\infty],\cos^2\theta;a/M=1)\Big\}=48\
\end{equation}
for the maximally-spinning extremal Kerr black hole.
This is the largest curvature value that characterizes the spin-dependent external Gauss-Bonnet
invariant (\ref{Eq5}) of rotating Kerr black-hole spacetimes.
(3) Intriguingly, we have proved that the location of the global minimum point which characterizes
the Gauss-Bonnet curvature invariant of spinning Kerr black holes has a
non-trivial functional dependence on the black-hole rotation parameter.
In particular, we have revealed the existence of two critical black-hole rotation parameters:
\begin{equation}\label{Eq58}
\Big({{a}\over{M}}\Big)^{-}_{\text{crit}}={1\over2}\
\end{equation}
and
\begin{equation}\label{Eq59}
\Big({{a}\over{M}}\Big)^{+}_{\text{crit}}=
\sqrt{{{7+\sqrt{7}\cos\Big[{1\over3}\arctan\big(3\sqrt{3}\big)\Big]-
\sqrt{21}\sin\Big[{1\over3}\arctan\big(3\sqrt{3}\big)\Big]}\over{12}}}
\ ,
\end{equation}
which mark the boundaries between three qualitatively different functional behaviors of the
Gauss-Bonnet curvature invariant:
\newline
(i) For Kerr black holes in the sub-critical regime $a/M<(a/M)^{-}_{\text{crit}}$, the
Gauss-Bonnet curvature invariant attains its global minimum asymptotically
at spatial infinity [see Eq. (\ref{Eq36})].
\newline
(ii) Kerr black holes in the intermediate regime $(a/M)^{-}_{\text{crit}}\leq a/M\leq(a/M)^{+}_{\text{crit}}$
are characterized by Gauss-Bonnet curvature invariants whose
global minima are located at the black-hole poles
\newline
\begin{eqnarray}\label{Eq60}
(\cos^2\theta)_{\text{min}}=1\ \ \ \ \ \text{for}\ \ \ \ \ (a/M)^{-}_{\text{crit}}\leq a/M\leq(a/M)^{+}_{\text{crit}}\ .
\end{eqnarray}
\newline
(iii) Rapidly-spinning Kerr black holes in the
super-critical regime
$a/M>(a/M)^{+}_{\text{crit}}$
are characterized by Gauss-Bonnet curvature invariants with non-monotonic functional behaviors along
the polar angular direction of the black-hole surface. In particular,
the spin-dependent global minima of the Gauss-Bonnet curvature invariants of these rapidly-rotating Kerr
black holes are determined by the analytically derived dimensionless scaling relation [see Eq. (\ref{Eq47})]
\begin{eqnarray}\label{Eq61}
\Big({{a}\over{r_+}}\Big)^2\cdot(\cos^2\theta)_{\text{min}}&=&
7-4\sqrt{7}\sin\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]-
4\sqrt{{{7}\over{3}}}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]\nonumber \\ &&
\text{for}\ \ \ \ a/M\geq(a/M)^{+}_{\text{crit}}\ .
\end{eqnarray}
(4) From the analytically derived formula (\ref{Eq61}) one learns that the polar angle $\theta_{\text{min}}$
(with $\theta_{\text{min}}\leq90^{\circ}$), which characterizes the minimum angular point of the Kerr Gauss-Bonnet
curvature invariant in the super-critical regime $a/M\geq(a/M)^{+}_{\text{crit}}$, is
a monotonically increasing function of the dimensionless black-hole spin parameter $a/M$.
In particular, one finds from Eq. (\ref{Eq61}) the relation \cite{Noteth2}
\begin{equation}\label{Eq62}
\theta^{-}_{\text{min}}(a/M=1)\simeq61.212^{\circ}\
\end{equation}
for maximally-spinning (extremal) Kerr black holes.
It is interesting to emphasize the fact that the
value $\theta^{-}_{\text{min}}\simeq61.212^{\circ}$ is the {\it largest} polar angle (with $\theta_{\text{min}}\leq90^{\circ}$) that characterizes the spin-dependent global minimum points of the Gauss-Bonnet invariants
of curved Kerr black-hole spacetimes.
(5) Taking cognizance of Eqs. (\ref{Eq35}), (\ref{Eq36}), (\ref{Eq40}), and (\ref{Eq48}) one concludes that,
in the exterior region (\ref{Eq6}) of the spinning Kerr black-hole spacetime, the
Gauss-Bonnet curvature invariant is characterized by the global dimensionless minimum
\begin{eqnarray}\label{Eq63}
&{\text{min}\Big\{M^4\cdot{\cal G}_{\text{Kerr}}(r\in[r_+,\infty],\cos^2\theta;a/M)\Big\}}=
\nonumber\\
&
\begin{cases}
0^{+} & \ \ \ \text{for}\ \ \ \ \ \ a/M<(a/M)^{-}_{\text{crit}}\ \\
{{3(M^2-4a^2)[(M+\sqrt{M^2-a^2})^2-a^2]}\over{(M+\sqrt{M^2-a^2})^4}} & \ \ \ \text{for}\ \ \ \ \
\ (a/M)^{-}_{\text{crit}}\leq a/M\leq(a/M)^{+}_{\text{crit}}\ \\
-{{57+28\sqrt{21}\cos\big[{1\over3}\arctan\big({{1}\over{3\sqrt{3}}}\big)\big]}\over{8}}\cdot
{{M^6}\over{(M+\sqrt{M^2-a^2})^6}} & \ \ \ \text{for}\ \ \ \ \ \ a/M\geq(a/M)^{+}_{\text{crit}}\ .
\end{cases}
\end{eqnarray}
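As a simple numerical check, the piecewise expression (\ref{Eq63}) is continuous at the two critical rotation parameters (\ref{Eq58}) and (\ref{Eq59}); a short Python sketch (in units with $M=1$) reads:
\begin{verbatim}
import numpy as np

phi = np.arctan(1.0/(3.0*np.sqrt(3.0)))/3.0
psi = np.arctan(3.0*np.sqrt(3.0))/3.0
a_plus = np.sqrt((7.0 + np.sqrt(7.0)*np.cos(psi) - np.sqrt(21.0)*np.sin(psi))/12.0)  # Eq. (59)

def g_min(a):                               # Eq. (63), in units with M = 1
    rp = 1.0 + np.sqrt(1.0 - a*a)
    if a < 0.5:                             # sub-critical regime: asymptotic value 0^+
        return 0.0
    if a <= a_plus:                         # intermediate regime: minimum at the poles
        return 3.0*(1.0 - 4.0*a*a)*(rp*rp - a*a)/rp**4
    return -(57.0 + 28.0*np.sqrt(21.0)*np.cos(phi))/8.0/rp**6   # super-critical regime

for a in (0.5 - 1e-6, 0.5 + 1e-6, a_plus - 1e-6, a_plus + 1e-6, 1.0):
    print(f"{a:.6f}  {g_min(a):+.6f}")      # continuous at both critical spins; ~-23.13 at a = M
\end{verbatim}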
(6) Interestingly, one learns from the analytically derived formula (\ref{Eq63}) that, in
the dimensionless spin regime $a/M\geq(a/M)^{+}_{\text{crit}}$, the minimum value
of the Gauss-Bonnet curvature invariant is a monotonically decreasing function
of the dimensionless black-hole rotation parameter $a/M$.
In particular, the expression (\ref{Eq63}) yields the dimensionless relation
\begin{eqnarray}\label{Eq64}
\text{min}\Big\{M^4\cdot{\cal G}_{\text{Kerr}}(r\in[r_+,\infty],\cos^2\theta;a/M=1)\Big\}&=&
-{{57+28\sqrt{21}\cos\Big[{1\over3}\arctan\Big({{1}\over{3\sqrt{3}}}\Big)\Big]}\over{8}}
\
\end{eqnarray}
for the maximally-spinning extremal Kerr black hole.
This is the most negative curvature value that characterizes the spin-dependent external Gauss-Bonnet
invariant of rotating Kerr black-hole spacetimes.
(7) We have proved that, in the large-mass regime (\ref{Eq53}) of
the composed Einstein-Gauss-Bonnet-nonminimally-coupled-massive-scalar field theory (\ref{Eq49}),
spinning Kerr black holes can support infinitesimally thin configurations of the non-minimally
coupled massive scalar fields. These supported scalar clouds (massive scalar rings)
are located on the equator of the black-hole surface and
are characterized by the dimensionless large-mass (or equivalently, large-coupling) critical relation [see Eqs. (\ref{Eq3})
and (\ref{Eq55})]
\begin{equation}\label{Eq65}
{{48M^2}\over{r^6_+}}\cdot{{\eta}\over{\mu^2}}\to 1^{+}\ .
\end{equation}
\bigskip
\noindent
{\bf ACKNOWLEDGMENTS}
\bigskip
This research is supported by the Carmel Science Foundation. I would
like to thank Yael Oren, Arbel M. Ongo, Ayelet B. Lata, and Alona B.
Tea for helpful discussions.
\section{Introduction}
3D reconstruction is an elementary problem of image processing and computer vision thanks to its many potentially useful real-world applications such as robotics, autonomous driving and augmented reality. Recently, 3D reconstruction based on deep learning has achieved remarkable results in many directions, such as 3D shape reconstruction from single view or multiple views \cite{wu2016learning,choy20163d,tatarchenko2019single} and shape completion \cite{xu2019disn}. Generally, most of the 3D reconstruction work is based on a CNN encoder-decoder architecture \cite{choy20163d,richter2018matryoshka}. Specifically, single-view image reconstruction tasks usually take a 2D CNN to encode 2D images and use different decoders to produce different final representations according to what representation the task needs. For example, if voxels \cite{wu2018learning,zhang2018learning} are expected to be the final representation, a 3D CNN will be selected as the decoder.
\begin{figure}[htbp]
\centerline{\includegraphics[width=8.5cm]{1.jpg}}
\caption{Single image reconstruction using a state-of-the-art method DISN\cite{xu2019disn}, and our method on real images.}
\label{fig}
\end{figure}
\begin{figure*}[htbp]
\centerline{\includegraphics[width=18cm]{2_abcd.jpg}}
\caption{The workflow of the proposed DmifNet framework. Our model has a main branch and three side branches: (a) The main branch uses the autoencoder to process the sample data and get the prediction results. (b), (c) Branches I and II process data by exploiting sub-branches from different intermediate layers of the main branch. (d) Branch III first uses the DoG to process the samples to obtain the Gaussian difference map; then we concatenate the original
input image and the Gaussian difference map as input information to predict the result. Finally, we dynamically fuse the prediction results of the main branch and the side branches to get the final prediction results. Best viewed in color.
}
\label{fig}
\end{figure*}
Historically, single-view image 3D reconstruction was achieved via shape-from-shading \cite{zhang1999shape,horn1970shape}. These works inferred the depth information of the visible surface through multiple clues (e.g. texture, defocus) and structural information from a single image. Recently, according to the output representation, the existing work on learning-based 3D reconstruction can be categorized into mesh-based, point-based and voxel-based. Since there is no clear way to generate a valid mesh, mesh-based representation is facing great challenges. Then, Wang ${et}$ ${al.}$ \cite{wang2018pixel2mesh} used a graph convolutional neural network \cite{scarselli2008graph} to make an ellipsoid template gradually become the target object, but the result was usually limited to the spherical topology. Fan ${et}$ ${al.}$ \cite{fan2017point} introduced point clouds as an output representation for 3D reconstruction. However, the point-based representation requires many complex post-processing steps to generate a 3D mesh. Some works \cite{wu2016learning,choy20163d,wu20153d} have considered voxel-based representations and used a 3D CNN to reconstruct the 3D shape from a single image. But, this is only possible with shallow architectures and small batch sizes due to memory limitations, which leads to slow training. To address the impact of memory limitations, Tatarchenko ${et}$ ${al.}$ \cite{tatarchenko2017octree} performed hierarchical partitioning of the output space for computational and storage efficiency, which helped predict higher resolution 3D shapes. Previous work chose to train their model on synthetic data with ground truth 3D information, but suffered from domain adaptation issues when tested on real data. To address the problem of domain adaptation, Wu ${et}$ ${al.}$ \cite{wu2017marrnet} proposed an end-to-end trainable model that sequentially estimated 2.5D sketches and 3D object shape. Groueix ${et}$ ${al.}$ \cite{groueix2018papier} used small patches to splice together 3D shapes. However, this easily causes overlapping between the small patches. Recently, some works \cite{xu2019disn,mescheder2019occupancy,chen2019learning} implicitly represented the 3D surface as the continuous decision boundary of a deep neural network classifier. In other words, these works learned a classifier to predict whether a point is inside or outside of the boundary, using this classifier as the shape representation. However, since the shape is represented by the weights of the classifier or regression model, these methods ignore some low-level shape information.
In this paper, we propose a multi-branch information fusion network for 3D reconstruction. We first strengthen the model's ability to capture the complex topology. To be more specific, as the difference between two different low-pass filtered images, the DoG is actually a band-pass filter, which removes high frequency components representing noise, and also some low frequency components representing the homogeneous areas in the image. The frequency components in the passing band are assumed to be associated with the edges in the images. So, we utilize DoG to extract edge geometry and corner information, and use a separate branch network to infer the 3D shape. In addition, previous work generally used synthetic data for training, but suffered from domain adaptation issues when tested on real data. So, inspired by Li ${et}$ ${al.}$ \cite{li2020dynamic} and Xu ${et}$ ${al.}$ \cite{xu20203d}, we design several side branches at different locations of the main branch network to improve the generalization ability. Finally, we dynamically fuse the multi-branch probability distribution information as the final prediction result. \textbf{Fig.1} shows the reconstruction results of our method on real images. Our main contributions lie in the following aspects:
\begin{itemize}
\item We use DoG to process input images to extract edge geometry and corner information, because we realize that the object's edge geometry and corner information are important for a neural network to capture complex topology.
\item We design side branches from the intermediate layers of our neural network, so each side branch produces more diverse representations along its own pathway.
\item Unlike previous methods that compute the average or a fixed weighting of all branches' predicted probabilities, we dynamically fuse the predicted probabilities of all branches to obtain the final predicted probability.
\item Extensive evaluation on the large-scale publicly available ShapeNet dataset demonstrates that our method achieves higher evaluation results than state-of-the-art methods.
\end{itemize}
\section{Method}
\subsection{Overview}
Our goal is to use a 2D image $\mathcal{X}$ and points $\mathcal{P}\in \mathcal{R}^3$ to infer the occupancy probability of the corresponding points $\mathcal{P}$, i.e., to learn a mapping function $f_\theta: \mathcal{R}^3 \times \mathcal{X} \to [0,1]$. For a closed shape $\mathcal{S}$, the binary neural network is equivalent to giving an occupancy probability between 0 and 1 for each point $\mathcal{P}_i$ to determine whether the point $\mathcal{P}_i$ is within the closed shape:
\begin{equation} \label{eqn1}
\mathcal{S}(\mathcal{P}_i) =
\begin{cases}
0 & \mbox{if }\mathcal{P}_i\notin shape \\
1 & \mbox{if }\mathcal{P}_i\in shape
\end{cases}
\end{equation}
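For concreteness, a deliberately simplified PyTorch sketch of such a mapping function $f_\theta$ is given below (the actual decoder described later uses ResNet blocks with conditional batch normalization; the layer sizes and the additive conditioning here are illustrative assumptions only):
\begin{verbatim}
import torch
import torch.nn as nn

class OccupancyHead(nn.Module):
    """Sketch of f_theta: (p, x) -> [0, 1]; feat_dim is the size of the image
    embedding produced by an encoder (e.g. a ResNet18)."""
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.fc_p = nn.Linear(3, hidden)         # embed the 3D query point
        self.fc_x = nn.Linear(feat_dim, hidden)  # embed the image feature
        self.out = nn.Sequential(nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, p, x_feat):
        # p: (B, K, 3) query points; x_feat: (B, feat_dim) image embedding
        h = self.fc_p(p) + self.fc_x(x_feat).unsqueeze(1)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # occupancy probability in [0, 1]

# usage: probs = OccupancyHead()(torch.rand(2, 64, 3), torch.rand(2, 256))
\end{verbatim}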
However, most of the existing work is trained only on synthetic data, and it is difficult to predict accurately when the model is tested on real data, i.e., the model lacks generalization ability. Secondly, it is difficult to recover the 3D shape accurately when the object has a complex topological structure. To this end, we delicately design branches I, II, III from the intermediate layers of the main branch to assist inference. Finally, we dynamically fuse the prediction results of the main branch and the side branches. Therefore, first, we not only retain the representation of the main branch but also generate more diverse representations along the pathways of the side branches. Second, we also use the DoG map to enhance the ability of our model to capture and learn the edge geometry and corner feature information.
In order to better recover the complex topology and detailed information of the object and improve the generalization ability of the model, we design a multi-branch network to implement this idea. The workflow of DmifNet is shown in \textbf{Fig.2}. Specifically, we first utilize DoG to process the input image to obtain a Gaussian difference map. Second, the main branch infers the occupancy probability from the input 2D image. Third, the side branches I and II process the input information along their own pathways to infer the occupancy probability. Then, the side branch III uses the Gaussian difference map and the 2D image as input information to reason about the occupancy probability. Finally, according to the different input images, we dynamically fuse the multi-branch probability information as the final prediction result.
\subsection{Main Branch and Side Branch I,II}
In this part, we make the main branch focus on learning the shape prior that explains the input well. However, we observe that it is hard to learn the shape prior very well with a single mapping function. So, we employ multiple mapping functions to learn the shape prior more effectively. As shown in \textbf{Fig.2}, we not only retain the representative representation of the main branch but also generate more diverse representations along the pathways of the side branches.
\subsection{Side Branch III}
In this part, we use the DoG to process the input image, which removes high frequency components representing noise, and also some low frequency components representing the homogeneous areas in the image. So, we use DoG maps as input information to enhance the model's ability to learn and capture the edge geometry and corner feature information. Then, our model can better retrieve the complex topology and detailed information of the object. In the first step, we convert the input RGB image into a gray image. Then, we convolve the gray image with different Gaussian kernels to obtain blurred images at different Gaussian scales:
\begin{align}
F_i(x,y)&=G_{\sigma_i}(x,y)*f(x,y) \\
& =f(x,y)*\frac{1}{{\sigma_i}^d{(2\pi)}^{d/2}}\exp\Big(-\frac{x^2+y^2}{2\sigma_i^2}\Big)\notag
\end{align}
where $f(x,y)$ represents the input gray image, $*$ denotes convolution, $G_{\sigma_i}$ is the Gaussian kernel with standard deviation $\sigma_i$, $F_i(x,y)$ is the output of convolving the input image with the $i_{th}$ Gaussian kernel, and $d$ is the dimension of the output. In the second step, we use the subtraction of two adjacent Gaussian scale feature maps to obtain a Gaussian difference map:
\begin{align}
Diff&=F_{i+1}(x,y)-F_{i}(x,y) \\
&=(G_{\sigma_{i+1}}(x,y)-G_{\sigma_{i}}(x,y))*f(x,y) \notag \\
&=\frac{1}{{(2\pi)}^{\frac{d}{2}}}\Big(\frac{1}{\sigma_{i+1}^d}\exp\big(-\frac{r^2}{2\sigma_{i+1}^2}\big)-\frac{1}{\sigma_{i}^d}\exp\big(-\frac{r^2}{2\sigma_{i}^2}\big)\Big)*f(x,y)\notag
\end{align}
Note that the map keeps the spatial information contained in the frequency band held between the two blurred images, where $r^2=x^2+y^2$ represents the squared blur radius. In addition, we find that the high-frequency random noise is removed by the DoG algorithm when tested on real data. In the third step, we concatenate the original input image and the Gaussian difference map as input information to reason about the 3D shape. The whole process is shown in \textbf{Fig.3}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=8.5cm]{3_dog.jpg}}
\caption{The process of side branch III preprocessing the input image.}
\label{fig}
\end{figure}
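A minimal Python sketch of this preprocessing step is given below (it uses SciPy's Gaussian filtering; the Gaussian scales $\sigma_1$, $\sigma_2$ and the simple gray conversion are illustrative choices rather than the exact settings of our experiments):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_map(rgb, sigma1=1.0, sigma2=1.6):
    """Difference-of-Gaussians map of an RGB image of shape (H, W, 3) in [0, 1]."""
    gray = rgb.mean(axis=-1)                  # step 1: simple gray conversion
    f1 = gaussian_filter(gray, sigma=sigma1)  # step 2: blur at two Gaussian scales
    f2 = gaussian_filter(gray, sigma=sigma2)
    return f2 - f1                            # Eq. (3): band-pass edge/corner response

rgb = np.random.rand(224, 224, 3)
diff = dog_map(rgb)
net_input = np.concatenate([rgb, diff[..., None]], axis=-1)  # step 3: (H, W, 4) input
\end{verbatim}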
\subsection{Probability Mixture}
In this part, we model a linear combination of multiple branch networks to get a strong regressor. Specifically, when processing samples, the probability fusion module mixes the prediction results of the branches according to the contribution of each branch. Therefore, for each sample, all the branches will be integrated to infer the most accurate prediction probability:
\begin{align}
p(x)&=\sum_{i=1}^4 \alpha_i*\phi_i(x)
\end{align}
\[{s.t. \quad \alpha_i=f_\theta(\phi_i(x))}\]
where $\alpha_i$ represents the weight of each branch in the current sample prediction, $\phi_i(x)$ is the prediction probability of each branch, $p(x)$ represents the output of the mixed prediction probability, and $f_\theta(.)$ is the learned mapping function.
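A minimal PyTorch sketch of such a dynamic fusion module is given below; the small gating network used to produce the weights $\alpha_i$ is an illustrative assumption rather than the exact architecture:
\begin{verbatim}
import torch
import torch.nn as nn

class ProbabilityMixture(nn.Module):
    """Sketch of Eq. (4): dynamically weight the per-branch occupancy predictions."""
    def __init__(self, n_branches=4):
        super().__init__()
        self.gate = nn.Linear(n_branches, n_branches)  # assumed gating network f_theta

    def forward(self, branch_probs):
        # branch_probs: (B, K, n_branches) occupancy predicted by each branch
        alpha = torch.softmax(self.gate(branch_probs), dim=-1)  # sample-dependent weights
        return (alpha * branch_probs).sum(dim=-1)               # p(x) = sum_i alpha_i * phi_i(x)

# usage: p = ProbabilityMixture()(torch.rand(2, 64, 4))
\end{verbatim}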
\subsection{Loss Function}
We train our network on the fully-annotated dataset ShapeNet\cite{chang2015shapenet}:
\begin{align}
Loss&=\frac{1}{|B|}\sum_{i=1}^{|B|}\sum_{j=1}^{K}L_{M_{CE}}(f_\theta(p_{ij},x_i),o_{ij})\\&+\frac{1}{|B|}\sum_{n=1}^{N}\sum_{i=1}^{|B|}\sum_{j=1}^{K}L_{S_{CE}}(f_\Theta{}_n(p_{ij},x_i),o_{ij})\notag
\end{align}
where $Loss$ represents the total training loss of a small batch $B$. $L_{M_{CE}}$ represents the main branch cross-entropy classification loss. $L_{S_{CE}}$ represents the side branch cross-entropy classification loss. $n=1,...,N$ represents the $n_{th}$ side branch. $f_\theta$ represents the main branch network parameters. $f_\Theta{}_n$ represents the $n_{th}$ side branch network parameters. ${x_i}$ is the $i_{th}$ observation of batch $B$. $p_{ij}$ represents the $j_{th}$ point of the $i_{th}$ observation. $o_{ij}$ is the ground truth of $p_{ij}$.
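A minimal PyTorch sketch of this training objective (the binary cross-entropy of the main branch plus that of every side branch, averaged over the batch and the $K$ sampled points) is given below; the tensor shapes are illustrative:
\begin{verbatim}
import torch
import torch.nn.functional as F

def dmif_loss(main_logits, side_logits_list, occ_gt):
    """Sketch of Eq. (5): cross-entropy of the main branch plus every side branch.
    main_logits and each entry of side_logits_list: (B, K); occ_gt: (B, K) in {0, 1}."""
    loss = F.binary_cross_entropy_with_logits(main_logits, occ_gt)
    for side_logits in side_logits_list:
        loss = loss + F.binary_cross_entropy_with_logits(side_logits, occ_gt)
    return loss

# usage:
# loss = dmif_loss(torch.randn(8, 2048),
#                  [torch.randn(8, 2048) for _ in range(3)],
#                  torch.randint(0, 2, (8, 2048)).float())
\end{verbatim}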
\subsection{Multi-branch Consistent Optimization}
In general, the optimization direction of the objective function of each branch in a multi-branch network is different, and these different optimization directions will negatively affect the accuracy of the model. So, we use a consistent optimization goal to optimize the multi-branch network. Through the objective function in equation (5), we not only directly collect the classification loss of each branch to optimize the network, but also pay attention to the different representations of each branch in its pathway. Once knowledge is generated in the side branches or the main branch, the network achieves real-time knowledge interaction and information sharing through the common path between branches:
\begin{align}
Loss&=\frac{1}{|B|}\sum_{i=1}^{|B|}\sum_{j=1}^{K}L_{M_{CE}}(f_{\theta;I_{\Theta_n}}(p_{ij},x_i),o_{ij})\\&+\frac{1}{|B|}\sum_{n=1}^{N}\sum_{i=1}^{|B|}\sum_{j=1}^{K}L_{S_{CE}}(f_{_\Theta{}_n;I_{\theta}}(p_{ij},x_i),o_{ij})\notag
\end{align}
where $I_{\theta},I_{\Theta{}_n}$ represent the common path network parameters of each branch network.
\begin{table*}[t]
\begin{center}
\caption{ \textbf{Single Image 3D Reconstruction Results on ShapeNet}. Quantitative evaluations on ShapeNet under IoU, Normal consistency and Chamfer distance. We observe that our method outperforms other state-of-the-art learning based methods in Normal consistency and IoU.}\label{tab:cap}
\setlength{\tabcolsep}{0.80mm}{
\begin{tabular}{ccccccccccccccccc}
\hline
\textbf{IoU $\uparrow$} & Airplane & Bench & Cabinet & Car & Chair & Display & Lamp & Loudspeaker & Rifle & Sofa & Table & Telephone & Vessel & Mean
\\
\hline
3D-R2N2 \cite{choy20163d} ECCV'16 & 0.426 & 0.373 & 0.667 & 0.661 & 0.439 & 0.440 & 0.281 & 0.611 & 0.375 & 0.626 &0.420 &0.6118 &0.482 & 0.493 \\
Pix2Mesh \cite{wang2018pixel2mesh} ECCV'18 & 0.420 & 0.323 & 0.664 & 0.552 & 0.396 & 0.490 & 0.323 & 0.599 & 0.402& 0.613 & 0.395 & 0.661 &0.397 & 0.480 \\
AtlasNet \cite{groueix2018papier} CVPR'18 & - & - & - & - & - & - & -& - & - & - & - & - &- &- \\
ONet \cite{mescheder2019occupancy} CVPR'19 & 0.571 & 0.485 & 0.733 & 0.737 & 0.501 & 0.471 & 0.371& 0.647 & 0.474 & 0.680 & 0.506 & 0.720 &0.530 &0.571 \\
\textbf{Our} & \textbf{0.603} & \textbf{0.512}& \textbf{0.753}&\textbf{0.758}& \textbf{0.542}& \textbf{0.560} & \textbf{0.416}& \textbf{0.675} & \textbf{0.493} & \textbf{0.701}& \textbf{0.550} & \textbf{0.750}& \textbf{0.574} &\textbf{0.607} \\
\hline
\textbf{Normal Consistency $\uparrow$} & Airplane & Bench & Cabinet & Car & Chair & Display & Lamp & Loudspeaker & Rifle & Sofa & Table & Telephone & Vessel & Mean\\
\hline
3D-R2N2 \cite{choy20163d} ECCV'16 & 0.629 &0.678& 0.782 & 0.714 & 0.663 & 0.720 & 0.560 & 0.711 & 0.670 &0.731&0.732 &0.817 &0.629 & 0.695 \\
Pix2Mesh \cite{wang2018pixel2mesh} ECCV'18 & 0.759 & 0.732 & 0.834 & 0.756 & 0.746 & 0.830 & 0.666 & 0.782 & 0.718& 0.820 & 0.784 & 0.907 &0.699& 0.772 \\
AtlasNet \cite{groueix2018papier} CVPR'18 & 0.836 & 0.779 & 0.850 & 0.836 & 0.791 & 0.858 & 0.694& 0.825 & 0.725 & 0.840 & 0.832 & 0.923 &0.756 &0.811 \\
ONet \cite{mescheder2019occupancy} CVPR'19 & 0.840 & 0.813 & 0.879 & 0.852 & 0.823 & 0.854 & 0.731& 0.832 & 0.766 & 0.863 & 0.858 & 0.935 &0.794 &0.834 \\
\textbf{Our} & \textbf{0.853} & \textbf{0.821}& \textbf{0.885}&\textbf{0.857}& \textbf{0.835}& \textbf{0.872} & \textbf{0.758}& \textbf{0.847} & \textbf{0.781} & \textbf{0.873}& \textbf{0.868} & \textbf{0.936}& \textbf{0.808} &\textbf{0.846} \\
\hline
\textbf{Chamfer-$L_1$ $\downarrow$} & Airplane & Bench & Cabinet & Car & Chair & Display & Lamp & Loudspeaker & Rifle & Sofa & Table & Telephone & Vessel & Mean\\
\hline
3D-R2N2 \cite{choy20163d} ECCV'16 & 0.227 &0.194& 0.217 & 0.213 & 0.270 & 0.314 & 0.778 & 0.318 & 0.183 &0.229&0.239 &0.195 &0.238 & 0.278 \\
Pix2Mesh \cite{wang2018pixel2mesh} ECCV'18 & 0.187 & 0.201 & 0.196 & 0.180 & 0.265 & 0.239 & 0.308 & 0.285 & 0.164& 0.212 & 0.218 & 0.149 &0.212& 0.216 \\
AtlasNet \cite{groueix2018papier} CVPR'18 & \textbf{0.104} & \textbf{0.138}& 0.175&\textbf{0.141}& 0.209& \textbf{0.198} & \textbf{0.305}& \textbf{0.245} & \textbf{0.115} & \textbf{0.177}& 0.190 & 0.128& \textbf{0.151} &\textbf{0.175} \\
ONet \cite{mescheder2019occupancy} CVPR'19 & 0.147 & 0.155 & 0.167 & 0.159 & 0.228 & 0.278 & 0.479& 0.300 & 0.141 & 0.194 & 0.189 & 0.140 &0.218 &0.215 \\
\textbf{Our} & 0.131 & 0.141 &\textbf{0.149} & 0.142 &\textbf{0.203} & 0.220 & 0.351& 0.263 & 0.135 & 0.181 & \textbf{0.173} &\textbf{0.124} &0.189 &0.185 \\
\hline
\multicolumn{4}{l}{$^{\mathrm{a}}$ The Bold-faced numbers represent the best results.}
\end{tabular}}
\end{center}
\end{table*}
\section{Experiment}
In this section, we first introduce settings and implementation details for training and evaluation. Second, we report our results which are evaluated on 13 categories from the ShapeNet repository and compare the performance of our method to several state-of-the-art methods. Finally, we use ablation study to validate each module in our model.
\subsection{Dataset}
ShapeNet\cite{chang2015shapenet}: ShapeNet is a richly-annotated, large-scale dataset consisting of 50000 models and 13 major categories. We split the dataset into training and testing sets, with 4/5 for training and the remaining 1/5 for testing.
Online Products\cite{oh2016deep}: The dataset contains images of 23,000 items sold online. Since the dataset does not have the ground-truth, we only use the dataset for qualitative evaluation.
\subsection{Metrics}
Following the experimental setup of \cite{michalkiewicz2020simple}, we use the volumetric Intersection over Union (IoU), the Normal consistency score (NC), and the Chamfer-$L_1$ distance (CD) to evaluate our method.
The first one is Intersection over Union (IoU) between prediction $\mathcal{R}$ and ground truth shape $\mathcal{G}$:
\begin{align}
IoU(\mathcal{R},\mathcal{G})&=\frac{|\mathcal{R} \bigcap \mathcal{G}|}{|\mathcal{R} \bigcup \mathcal{G}|}
\end{align}
The normal consistency (NC) between the normals in prediction generated mesh $\mathcal{R}$ and the normals at the corresponding nearest neighbors in the ground truth generated mesh $\mathcal{G}$ is defined as:
\begin{align}
NC(\mathcal{R},\mathcal{G})&=\frac{1}{|\mathcal{R}|} \sum_{r\in \mathcal{R}}^{g \in \mathcal{G}}|r\cdot g|+\frac{1}{|\mathcal{G}|} \sum_{g\in \mathcal{G}}^{r \in \mathcal{R}}|g\cdot r|
\end{align}
The Chamfer distance (CD) between the ground truth $\mathcal{G}$ and the predicted shape $\mathcal{R}$ is defined as:
\begin{align}
CD(\mathcal{R},\mathcal{G})&=\frac{1}{|\mathcal{R}|} \sum_{r\in \mathcal{R}} min_{g \in \mathcal{G}}||r-g||_2\\&+\frac{1}{|\mathcal{G}|} \sum_{g\in \mathcal{G}} min_{r \in \mathcal{R}}||g-r||_2\notag
\end{align}
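Minimal Python implementations of the IoU and Chamfer distance metrics defined above are sketched below (using SciPy's k-d tree for the nearest-neighbour queries; the normal consistency score additionally requires surface normals and is omitted here):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def iou(occ_pred, occ_gt):
    """Volumetric IoU between two boolean occupancy arrays, Eq. (7)."""
    inter = np.logical_and(occ_pred, occ_gt).sum()
    union = np.logical_or(occ_pred, occ_gt).sum()
    return inter / union

def chamfer_l1(points_r, points_g):
    """Two-sided Chamfer distance between sampled surface points, Eq. (9)."""
    d_rg, _ = cKDTree(points_g).query(points_r)  # nearest ground-truth point per prediction
    d_gr, _ = cKDTree(points_r).query(points_g)  # nearest predicted point per ground truth
    return d_rg.mean() + d_gr.mean()

# usage with random stand-in data:
# print(iou(np.random.rand(32**3) > 0.5, np.random.rand(32**3) > 0.5))
# print(chamfer_l1(np.random.rand(1000, 3), np.random.rand(1000, 3)))
\end{verbatim}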
\subsection{Implementation Detail}
We train our network using Adam \cite{kingma2014adam}, adopting a starting learning rate of 0.004. We implement our code using Python 3.6 on PyTorch 1.0.0, and train on the ShapeNet dataset on one Nvidia Titan Xp GPU with CUDA 9.0 and cudnn7. Following the work \cite{wu20153d}, we employ a ResNet18 architecture as the encoder; then we use a fully-connected neural network with 5 ResNet blocks \cite{he2016deep} as the decoder and condition it on the input using conditional batch normalization (CBN) \cite{de2017modulating,dumoulin2016adversarially}. The architecture is shown in \textbf{Fig.4}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=8.5cm]{4.jpg}}
\caption{The architecture of encoder and decoder network. Best viewed in color.}
\label{fig}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[width=8.5cm]{5_small.jpg}}
\caption{Single Image 3D Reconstruction on ShapeNet. The first column is the input 2D images, the last two columns are the results of our method. The other columns show the results for various methods. Best viewed on screen with zooming in.}
\label{fig}
\end{figure}
\subsection{Quantitative Results on ShapeNet}
Quantitative results on ShapeNet are given in \textbf{TABLE I}. For the IoU metric, we observe that our method outperforms other state-of-the-art methods in all categories of ShapeNet. Compared to other methods, our method gives a significant improvement in the Normal consistency score metric over all other works. For the Chamfer-$L_1$ distance metric, while our method is not trained with regard to the Chamfer distance as Pixel2Mesh and AtlasNet are, we also observe that it surpasses the other methods except Groueix ${et}$ ${al.}$ \cite{groueix2018papier}.
\subsection{Qualitative Results on ShapeNet}
In \textbf{Fig.5}, we evaluate the performance on single-view image reconstruction qualitatively. We observe that all methods can capture the basic geometric information of the input image. However, 3D-R2N2 lacks lots of details on complex topology objects. In contrast, Pix2Mesh is able to better capture geometric information, but objects with more complex topologies have deformations and holes. Similarly, AtlasNet can capture geometric information well, but it easily produces self-intersections and overlaps since it assembles surfaces from small patches. Moreover, ONet is able to capture geometric information quite well and produce a high-fidelity output, but lacks some details in edges and corners. Finally, our method is able to capture complex topologies, produce high-fidelity 3D shapes and preserve most of the edge and corner details.
\begin{table}[htbp]
\caption{ \textbf{Ablation Study.} When we ablate our model step by step, we observe that the performance of the model degrades. First, we use B0 as the final model to test on ShapeNet. Second, we use B0 with B1 and B2 as the final model to test on ShapeNet. Third, we use B0 with B1, B2, and PMM as the final model. Finally, we use B0 with B1, B2, B3, and PMM as the final model.}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{ Test }&\multicolumn{3}{|c|}{\textbf{Metrics }} \\
\cline{2-4}
\textbf{Model} & \textbf{\textit{IoU $\uparrow$}}& \textbf{\textit{ NC $\uparrow$ }}& \textbf{\textit{ Chamfer $\downarrow$}} \\
\hline
Model w/o $B_3$, PMM, $B_2$ and $B_1$ & 0.593 & 0.840 & 0.194 \\
Model w/o $B_3$, PMM & 0.602 & 0.842 & 0.191 \\
Model w/o $B_3$& 0.604&0.842 & 0.185 \\
Full model& \textbf{0.607}&\textbf{0.846} & \textbf{0.185} \\
\hline
\multicolumn{4}{l}{$^{\mathrm{a}}$ The Bold-faced numbers represent the best results.}
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\subsection{Ablation Study}
We conduct an ablation study on the proposed method, mainly focusing on the effects of the side branches and the probability mixture module. We denote the main branch as $B_0$, and side branches I, II are denoted as $B_1$ and $B_2$ respectively. Then, side branch III and the probability mixture module are denoted as $B_3$ and PMM respectively. The results are shown in \textbf{TABLE II}.
We first examine how $B_1$ and $B_2$ affect the performance of our model. Since $B_1$ and $B_2$ are designed with the generalization ability of the model in mind, we utilize them to produce more diverse representations along their own pathways. We observe that adding the $B_1$ and $B_2$ branches improves the IoU and NC metrics.
To test the effect of the PMM component, we utilize PMM to dynamically fuse the prediction probabilities. Similarly, we observe that the PMM effectively improves the metrics of IoU and Chamfer-$L_1$.
Finally, we discuss the effect of $B_3$ on model performance. We employ $B_3$ to enhance the edge geometry and corner information. As shown in \textbf{TABLE II}, we observe that $B_3$ significantly improves the metrics of IoU and NC.
\begin{figure}[htbp]
\centerline{\includegraphics[width=8.5cm]{6_small.jpg}}
\caption{Single Image 3D Reconstruction on Real Data. The first column is the input 2D images, the other columns are the reconstructed results of our method in different viewpoints. Best viewed on screen with zooming in.}
\label{fig}
\end{figure}
\subsection{Qualitative Results on Real Data}
In order to test the generalization ability of our network on real data, we apply our model to the Online Products dataset \cite{oh2016deep} for qualitative evaluation. Note that our network is not trained on this dataset.
Several qualitative results are shown in \textbf{Fig.6}. We carefully select some representative images from the dataset to display the qualitative results, and give the reconstruction results under two viewpoints. Through the results, we observe that although our model is trained only on synthetic data, it also generalizes well to real data.
\section{Conclusions}
In this paper, we propose a novel approach based on dynamic multi-branch information fusion for 3D reconstruction. Our method addresses the problem that complex topology and detailed information are difficult to recover accurately. We introduce side branches from the intermediate layers of our neural network to make it produce more diverse representations. By utilizing DoG to enhance edge geometry information and mixing multiple branches reasonably, our method is competent at handling the single-view image reconstruction task. Extensive experiments demonstrate that our method can boost the model's generalization ability and recover complex topology and detail-rich 3D shapes from a 2D image. Since a large number of 3D datasets do not have ground truth, previous work only used these datasets for qualitative evaluation. Inspired by some of the latest work \cite{xie2020self,kingma2014semi,zhu2009introduction}, in future work we plan to introduce semi-supervised learning in order to take advantage of these 3D datasets reasonably.
\bibliographystyle{IEEEbib}
\section{Introduction}
The modern understanding of $\gamma$-ray bursts (GRBs) suggests that they originate from the internal dissipation of the kinetic and/or magnetic energy of a newly born and short-lived relativistic jet. The observed variability, energetics and spectral shapes of the prompt emission drive our theoretical understanding of the dissipation and radiation processes in the extreme jets of GRBs. The ultimate goal of any scrupulous analysis of GRB data is to make one more step towards our understanding of the jet formation mechanisms, the jet composition, the energy transport, and the efficiency and character of the particle acceleration.
The most explored scenario for GRBs is the hot fireball model \citep{1986ApJ...308L..43P,1986ApJ...308L..47G}. For the typical observed prompt emission luminosity $L \sim 10^{52}$~erg/s and the variability time-scale of $10^{-2}$~s, one gets an estimate of the initial temperature of the ejecta as high as $T \sim 10^{10}$~K, guaranteeing that photons, leptons and baryons are coupled. Jets with a small amount of baryons will undergo an adiabatic expansion, reaching bulk Lorentz factors of the order of $\sim 100$ (see \citet{1990ApJ...365L..55S}). Due to the uncertainty in the composition and the energy transfer throughout the jet (the jets can also be dominated by the Poynting flux, see \citet{1992Natur.357..472U}), we do not know the location of the emitting region (below or above the transparency radius) or the dominant mechanism for the jet's internal dissipation (shocks versus magnetic reconnection; for a wide range of references see \citet{2004RvMP...76.1143P,2015PhR...561....1K}). To reconstruct the physics of GRB jets, we can refer to the radiative processes that shape the observed spectra. The relative importance of the thermal and non-thermal components and their characteristics are then one of the main subjects of our interest.
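For illustration, the quoted temperature scale follows from the standard black-body estimate $T_0\sim[L/(4\pi r_0^2 c\, a_{\rm rad})]^{1/4}$ with $r_0\sim c\,\delta t$ (the exact prefactor depends on conventions); the short Python sketch below gives $T_0\approx 2.5\times10^{9}$~K, consistent at the order-of-magnitude level with the value quoted above:
\begin{verbatim}
import numpy as np

L = 1e52            # erg/s, typical prompt luminosity
dt = 1e-2           # s, variability time-scale
c = 3e10            # cm/s
a_rad = 7.566e-15   # erg cm^-3 K^-4, radiation constant

r0 = c * dt
T0 = (L / (4.0 * np.pi * r0**2 * c * a_rad))**0.25
print(f"T_0 ~ {T0:.1e} K")   # ~2.5e9 K, i.e. of order 10^9-10^10 K
\end{verbatim}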
The observed spectra of GRBs in the $\sim10$~keV$-$10~MeV range are typically modelled as two power laws smoothly connected at a characteristic energy of hundreds of keV which corresponds to the peak energy in the $\nu F_{\nu}$ spectrum (e.g., \citet{1993ApJ...413..281B}). The presence of the well-established power-law tail above the peak energy and the overall inconsistency with a single black body spectrum indicate at first that the observed radiation is produced by a non-thermal population of charged particles rather than simply released from the photosphere of the pair-dominated fireball, nor by the synchrotron radiation of thermalized particles. The most straightforward model is then synchrotron radiation from a non-thermal population of electrons \citep{1994ApJ...430L..93R}. It was shown that the most efficient way to produce the prompt emission spectra in the keV-MeV range by synchrotron radiation from accelerated electrons requires the fast cooling regime, i.e., the electrons should cool at timescales much shorter than the dynamical time (e.g., see \citet{2000MNRAS.313L...1G}). The spectral shape of the fast cooling synchrotron emission below the peak energy has a photon index of -1.5, independently of the injected electron spectrum. However, most of the measured spectral indices are larger than -1.5, with a typical value of -1 (e.g., see \citet{1998ApJ...506L..23P}). In other words, the GRB spectra are harder than expected in a synchrotron scenario.
The failure of the simple synchrotron model to account for the GRB spectra has caused very intense theoretical and observational efforts in the literature to resolve the puzzle of the radiation process(es) responsible for GRBs. Some models have proposed modifications within the synchrotron model, exploring the possibilities of the marginally fast cooling regime, asymmetry of the pitch angle distributions, cross-section dependent inverse Compton effects, and inhomogeneous magnetic fields. Other models have suggested photospheric radiation, sub-photospheric dissipation processes, Comptonisation, etc. (see \citet{2015PhR...561....1K} for a review). While different models are capable of explaining the typical shapes or even the entire spectra, they correspond to quite different physical models for the GRB jets. Therefore, we are obliged to look for non-trivial ways of confronting the prompt emission models with multi-wavelength and detailed spectral data. In the following, we briefly discuss the recent progress in the spectral characterization of GRBs by studying their low-energy tails in the soft X-ray and optical bands.
\section{Low-energy extension of the GRB spectra}
The inconsistency between the observed and the predicted spectral shapes in the standard fast cooling synchrotron radiation model lies in the low-energy part (below the peak energy) of the GRB spectra. Therefore, the characterization of the GRB spectra below the usual low-energy boundary of $\sim 10$~keV is a very promising tool for distinguishing between the proposed radiation models.
\subsection{Prompt emission in soft X-rays}
The X-ray Telescope (XRT, 0.3-10 keV) on board the Neil Gehrels Swift Observatory (hereafter, Swift; \citet{2004ApJ...611.1005G}) is a unique instrument allowing partial coverage of the prompt emission of some long GRBs thanks to its rapid response and relatively fast slewing time to the GRB ($\sim 90$ s). The recent systematic study of the broad-band ($\sim$ 0.5 keV - 1 MeV) spectra of GRBs with the inclusion of XRT data has discovered a common feature: a low-energy spectral break at $\sim$ 2-20 keV \citep{2017ApJ...846..137O,2018A&A...616A.138O}. The spectra were shown to have a hardening below the break energy. Once the break is included in the analysis, the spectral indices below and above the break energy are, on average, consistent with the fast cooling synchrotron model if the break is associated with the synchrotron cooling frequency. These findings have motivated the search for similar spectral breaks at higher energies. \citet{2018A&A...613A..16R,2019A&A...625A..60R} have discovered spectral breaks within the energy range of the Fermi/GBM instrument (8 keV - 40 MeV) in the time-resolved spectra of the brightest GRBs. The spectral shapes below the peak energy, also in Fermi/GBM GRBs, are found to be consistent with the synchrotron model in the marginally fast cooling regime.
\subsection{Early optical emission}
The independent confirmation of the presence of synchrotron-like breaks in the soft and hard X-ray ranges, obtained with different instruments, has strengthened the motivation to test the synchrotron model with early optical data. In the past, the synchrotron model alone has rarely been applied to the GRB spectra. \citet{1996ApJ...466..768T} fitted the time-resolved spectra of a few GRBs with the slow-cooling synchrotron model, while \citet{2000ApJ...543..722L} fitted two GRB spectra with the self-absorbed synchrotron model. Time-resolved spectra of GRB 130606B and several time-resolved spectra of GRB 160625B have been fitted satisfactorily by a single synchrotron model assuming the decay of the magnetic field within the emitting region \citep{2016ApJ...816...72Z,2018NatAs...2...69Z}. \citet{2019A&A...628A..59O} performed the first systematic modelling of the GRB spectra with single synchrotron radiation from a non-thermal population of electrons, taking into account their cooling\footnote{The first appearance of this result can be found in the PhD thesis, which can be downloaded from the SISSA website: {http://hdl.handle.net/20.500.11767/84065}}. The results of this analysis established that synchrotron radiation alone is capable of accounting for the shapes of the time-resolved spectra of the considered GRBs (21 long GRBs, 52 spectra) when the cooling of the electrons is taken into account. Moreover, the proper modelling of the GRB spectra by the physically derived synchrotron model has confirmed the marginally fast cooling regime indicated by the discovery of the low-energy spectral breaks. Additionally, \citet{2019A&A...628A..59O} proposed a novel approach to confront the single synchrotron model with the widely applied two-component (thermal plus non-thermal) spectral model. They used the early-time optical data as a tool to distinguish between the two competing models. The basic idea relies on the extrapolation of the GRB spectral model from the keV-MeV range down to the optical bands. An acceptable model is required to predict the optical flux, or at least not to overproduce it. They show that the synchrotron model successfully passes this simple test, while the two-component model systematically overpredicts the optical flux, contradicting the existence of the afterglow component. Moreover, the flux predicted by the synchrotron model is shown to always match the observed optical flux when the keV-MeV and the optical light curves are temporally correlated. Conversely, when the optical light curves are in agreement with radiation from an external shock, the synchrotron model safely underpredicts the level of the optical prompt emission. This series of tests gives a strong preference to the single synchrotron radiation model for the production of the prompt emission.
\section{Discussion}
The recent studies of the GRB spectra with the inclusion of soft X-ray and optical data have produced a series of quite convincing arguments in support of the synchrotron radiation model for the production of the prompt emission. The marginally fast cooling regime suggested in these studies corresponds to a non-trivial physical scenario. In a single-shot acceleration of electrons (a natural case in the internal shocks scenario), the marginally fast cooling regime returns large radii of the emitting regions ($R_\gamma \ge 10^{16}$\,cm), rather weak magnetic fields (of order unity in the comoving frame), large bulk Lorentz factors (a few hundred) and extreme energies of the accelerated electrons (Lorentz factors $\ge 10^{4}$) \citep{2008MNRAS.384...33K,2013ApJ...769...69B,2019A&A...628A..59O}. All of these constraints are at odds with the naive expectations for the GRB emission site: compact and highly magnetized regions located above the photosphere. Moreover, the requirement of large radii is in tension with the observed fast variability of the prompt emission. Recently, these difficulties were discussed in detail by \citet{2019arXiv191202185G}. They have shown that it is quite challenging to produce the observed marginally fast cooling regime with a non-thermal population of electrons. They suggest a model with synchrotron radiation from protons as a possibility to overcome the basic difficulties that the usual electron-based models face in explaining the observed prompt emission spectra.
While we are still far from a complete understanding of the physics of the prompt emission, we can certainly conclude that the optical and X-ray domains play a critical role in discriminating between and constraining the GRB models. Future wide-field X-ray missions such as THESEUS \citep{2018AdSpR..62..191A} have a great potential to detect and characterise in more detail the prompt emission in the soft X-rays \citep{2018MmSAI..89..245N}.
\newcommand{\vir}{\raisebox{0.75mm}{,}}
\begin{document}
\enlargethispage{3cm}
\thispagestyle{empty}
\begin{center}
{\large\bf MULTILINEAR FORMS}
\end{center}
\begin{center}
{\large\bf AND GRADED ALGEBRAS}
\end{center}
\vspace{0.3cm}
\begin{center} Michel DUBOIS-VIOLETTE
\footnote{Laboratoire de Physique Th\'eorique, UMR 8627, Universit\'e Paris XI,
B\^atiment 210, F-91 405 Orsay Cedex, France,
Michel.Dubois-Violette$@$th.u-psud.fr\\
}
\end{center} \vspace{0,5cm}
\begin{abstract}
In this paper we investigate the class of the connected graded algebras which are finitely generated in degree 1, which are finitely presented with relations of degrees greater than or equal to 2 and which are of finite global dimension $D$ and Gorenstein. For $D$ greater than or equal to 4 we add the condition that these algebras are homogeneous and Koszul. It is shown that each such algebra is completely characterized by a multilinear form satisfying a twisted cyclicity condition and some other nondegeneracy conditions depending on the global dimension $D$. This multilinear form plays the role of a volume form and, in the quadratic case, canonically identifies with a nontrivial Hochschild cycle of maximal degree. Several examples including the Yang-Mills algebra and the extended 4-dimensional Sklyanin algebra are analyzed in this context. Actions of quantum groups are also investigated.
\end{abstract}
\newpage
\tableofcontents
\section{Introduction}
An important task which is at the beginning of noncommutative algebraic geometry is to provide good descriptions of the connected graded algebras which are finitely generated in degree 1, which are finitely presented with homogeneous relations of degrees $\geq 2$, which are of finite global dimension and which are Gorenstein (a generalization of the Poincar\'e duality property). These algebras play the role of homogeneous coordinate rings for the noncommutative versions of the projective spaces and, more generally, of algebraic varieties. It is usual to add the property of polynomial growth for these graded algebras \cite{art-sch:1987} but we refrain here from imposing this property since it eliminates various interesting examples and plays no role in our arguments.\\
Some remarks are in order concerning the above class of algebras. The class of connected graded algebras which are finitely generated in degree 1 and finitely presented in degrees $\geq 2$ is a natural one for a generalization of the polynomial algebras. Concerning the global dimension it is an important fact which is well known \cite{art-tat-vdb:1990} that for this class of algebras it coincides with the projective dimension of the trivial module (the ground field) and it has been shown recently \cite{ber:2005}
that it also coincides with the Hochschild dimension. Thus for these algebras it is {\sl the dimension} from the homological point of view and the requirement of finite dimensionality is clear. That the Gorenstein property is a generalization of the Poincar\'e duality property is already visible if one thinks of the minimal projective resolution of the trivial module as an analog of differential forms and this has been made precise at the Hochschild homological level in \cite{ber-mar:2006} (see also \cite{vdb:1998}, \cite{vdb:2002}). \\
In this paper we shall restrict attention to the smaller class of the algebras which are also homogeneous and Koszul. Homogeneous means that all the relations are of the same degree, say $N\geq 2$, and we speak then of homogeneous algebras of degree $N$ or of $N$-homogeneous algebras. For homogeneous algebras the notion of Koszulity has been introduced in \cite{ber:2001a} and various notions such as the Koszul duality, etc. generalizing the ones occurring in the quadratic case \cite{pri:1970}, \cite{man:1988} have been introduced in \cite{ber-mdv-wam:2003}. It should be stressed that the Koszul property is really a desired property \cite{man:1988} and this is the very reason why we restrict attention to homogeneous algebras since it is only for these algebras that we know how to formulate this property for the moment. It is worth noticing here that these restrictions are immaterial in the case of global dimension $D=2$ and $D=3$. Indeed, as pointed out in \cite{ber-mar:2006}, any connected graded algebra which is finitely generated in degree 1, finitely presented with relations of degree $\geq 2$ and which is of global dimension $D=2$ or $D=3$ and Gorenstein, is $N$-homogeneous and Koszul with $N=2$ for $D=2$ and $N\geq 2$ for $D=3$. However this is no longer the case in global dimension $D=4$ \cite{art-sch:1987}, \cite{lu-pal-wu-zha:2007}.\\
In the following we shall give detailed proofs of results announced in
\cite{mdv:2005} which allow us to identify the moduli space of the algebras $\cala$ as above with the moduli space of multilinear forms $w$ with specific properties. For each $\cala$, the multilinear form $w$ or more precisely $\mbox{\rm 1\hspace {-.6em} l}\otimes w$ ($\mbox{\rm 1\hspace {-.6em} l} \in \cala$) plays the role of a volume element in the Koszul resolution of the trivial $\cala$-module. It turns out that this is also true from the point of view of the Hochschild homology of $\cala$, at least in the quadratic case, as will be shown. This gives another bridge, besides the deep ones described in \cite{ac-mdv:2003} and
\cite{ac-mdv:2008}, between noncommutative differential geometry (\cite{ac:1986a},\cite{ac:1994}) and noncommutative algebraic geometry (\cite{staf:2002} and references therein). We shall analyse several examples in order to illustrate the concepts introduced throughout the paper. Finally we shall introduce related Hopf algebras.\\
It is worth noticing here that it has already been shown in \cite{bon-pol:1994} that the quadratic algebras which are Koszul of finite global dimension and Gorenstein are determined by multilinear forms. This is of course directly related to the results of the present paper and we shall come back to this point later (Section 9).\\
The plan of the paper is the following. In Section 2, we investigate the case of the global dimension $D=2$ and we show that the algebras of the relevant class are associated with the nondegenerate bilinear forms and correspond to the natural quantum spaces for the action of the quantum groups of the associated nondegenerate bilinear forms \cite{mdv-lau:1990}. In Section 3, we introduce and discuss the concept of preregular multilinear form. It turns out that, as shown in this paper (in Section 5), all homogeneous Koszul-Gorenstein algebras of finite global dimension are associated with preregular multilinear forms satisfying some other regularity conditions depending on the global dimension $D$. The case $D=3$ is analysed in Section 4. In Section 5, we define and study the homogeneous algebras associated with multilinear forms. Section 6 consists of the analysis of several examples which illustrate the different items of the paper. In Section 7 the semi-cross product is investigated for the above class of algebras (introduced in Section 5). In Section 8 we define quantum groups preserving the multilinear forms which act on the quantum spaces corresponding to the homogeneous algebras associated with these multilinear forms, generalizing thereby the situation for $D=2$ described in Section 2. In Section 9 we discuss several important points connected with the present formulation. For the reader's convenience we have added an appendix on homogeneous algebras and an appendix on the quantum group of a nondegenerate bilinear form at the end of the paper.\\
Throughout the paper $\mathbb K$ denotes a field which we assume to be algebraically closed of characteristic zero (though most of our results are independent of this assumption) and all algebras and vector spaces are over $\mathbb K$. The symbol $\otimes$ denotes the tensor product over $\mathbb K$ and if $E$ is a $\mathbb K$-vector space, $E^\ast$ denotes its dual vector space. We use the Einstein summation convention of repeated up down indices in the formulas.
\section{Bilinear forms and global dimension $D=2$}
Let $b$ be a nondegenerate bilinear form on $\mathbb K^{s+1}$ ($s\geq 1$) with components $B_{\mu\nu}=b(e_\mu,e_\nu)$ in the canonical basis $(e_\lambda)_{\lambda\in \{0,\dots,s\}}$ of $\mathbb K^{s+1}$ and let $\cala=\cala(b,2)$ be the quadratic algebra generated by the elements $x^\lambda$ ($\lambda\in \{0,\dots,s\}$) with the relation
\begin{equation}
B_{\mu\nu}x^\mu x^\nu=0
\end{equation}
that is, using the notations of \cite{ber-mdv-wam:2003} (see Appendix 1), one has $\cala=A(E,R)$ with $E=\oplus_\lambda\mathbb K x^\lambda=\cala_1$ and $R=\mathbb K\ B_{\mu\nu}x^\mu\otimes x^\nu\subset E^{\otimes^2}$.
\begin{lemma}\label{reg}
Let $a$ be an element of degree 1 of $\cala$ with $a\not= 0$ $($i.e. $a\in E\backslash \{0\})$. Then $ya=0$ or $ay=0$ for $y\in \cala$ implies $y=0$.
\end{lemma}
\noindent \underbar{Proof}. One has
\begin{equation}
a=a_\lambda x^\lambda
\end{equation}
with $(a_\lambda)\not= 0$ in $\mathbb K^{s+1}$. On the other hand, by the very definition of $\cala$ by generators and relation, $ya=0$ (resp. $ay=0$) is equivalent to
\begin{equation}
ya_\lambda=zx^\mu\ B_{\mu\lambda}\ \ \ (\mbox{resp.} ya_\lambda=x^\mu z\ B_{\lambda\mu})
\label{eq2.3}
\end{equation}
$(\lambda\in \{0,\dots,s\})$ for some $z\in \cala$. Let $y$ be in $\oplus^n_{k=0}\cala_k$, we shall prove the statement by induction on $n$. For $n=0$ (i.e. $y$ of degree 0) the statement is clear ($\cala$ is connected). Assume that the statement is true for $n\leq p$ and let $y\in \oplus^{p+1}_{k=0}\cala_k$ be such that $ya=0$ (resp. $ay=0$). Then $z\in \oplus^p_{k=0}\cala_k$ in (\ref{eq2.3}) and one has $zx^\mu B_{\mu\lambda} v^\lambda=0$ (resp. $B_{\lambda\mu} v^\lambda x^\mu z=0)$ for $(v^\lambda)\not=0$ in $\mathbb K^{s+1}$ such that $a_\lambda v^\lambda=0$ in $\mathbb K$. So, replacing $a$ by $x^\mu B_{\mu\lambda} v^\lambda$ (resp. $B_{\lambda \mu} v^\lambda x^\mu$), one has $z=0$ by the induction hypothesis. This implies $y=0$ by (\ref{eq2.3}) since $(a_\lambda)\not=0$. $\square$
The matrix $B=(B_{\mu\nu})$ of the components of $b$ is invertible and we denote by $B^{\mu\nu}$ the matrix elements of its inverse that is one has
\begin{equation}
B^{\lambda\rho}B_{\rho\mu}=\delta^\lambda_\mu
\end{equation}
$\lambda,\mu\in \{0,\dots,s\}$. The $B^{\mu\nu}$ are the components of a bilinear form on the dual of $\mathbb K^{s+1}$ in the dual basis of the canonical basis $(e_\lambda)$. Notice that with the definitions above the vector space $E$ identifies canonically with the dual of $\mathbb K^{s+1}$ while the $x^\lambda$ identify with the elements of the dual basis of the canonical basis $(e_\lambda)$ of $\mathbb K^{s+1}$. These identifications allow one, for instance, to write $b\in E^{\otimes^2}$ since the involved vector spaces are finite-dimensional.\\
Let us investigate the structure of the dual quadratic algebra $\cala^!$ of $\cala$ \cite{man:1988}, \cite{ber-mdv-wam:2003}. Letting $E^\ast=\mathbb K^{s+1}$ be the dual vector space of $E$, one has $\cala^!=A(E^\ast,R^\perp)$ where $R^\perp \subset (E^\ast)^{\otimes^2}=(E^{\otimes^2})^\ast$ is the orthogonal of $R=\mathbb K B_{\mu\nu}x^\mu\otimes x^\nu\subset E^{\otimes^2}$.
\begin{lemma}\label{d2}
The dual quadratic algebra of $\cala$ is the quadratic algebra $\cala^!$ generated by the elements $e_\lambda$ $(\lambda\in \{0,\dots,s\})$ with the relations
\begin{equation}
e_\mu e_\nu=\frac{1}{s+1}B_{\mu\nu} B^{\tau\rho}e_\rho e_\tau
\end{equation}
for $\mu,\nu\in \{0,\dots,s\}$. One has $\cala^!_0=\mathbb K\mbox{\rm 1\hspace {-.6em} l} \simeq \mathbb K$, $\cala^!_1=E^\ast=\oplus_\lambda\mathbb K e_\lambda\simeq \mathbb K^{s+1}$, $\cala^!_2=\mathbb K B^{\mu\nu}e_\nu e_\mu\simeq \mathbb K$ and $\cala^!_n=0$ for $n\geq 3$.
\end{lemma}
\noindent\underbar{Proof}. One has $\langle e_\mu\otimes e_\nu-\frac{1}{s+1} B_{\mu\nu} B^{\tau \rho}e_\rho \otimes e_\tau,\ \ B_{\lambda\sigma}x^\lambda\otimes x^\sigma\rangle=0$ so the $e_\mu \otimes e_\nu-\frac{1}{s+1}B_{\mu\nu}B^{\tau\rho}e_\rho \otimes e_\tau$ are in $R^\perp$ and it is not difficult to see that they span $R^\perp$ which proves the first part of the lemma including the identifications of $\cala^!_0,\cala^!_1$ and $\cala^!_2$. It remains to show that $\cala^!_3=0$. Setting $\xi=B^{\alpha\beta}e_\beta e_\alpha$ for the generator of $\cala^!_2$ one has $(e_\lambda e_\mu)e_\nu=\frac{1}{s+1}B_{\lambda\mu}\xi e_\nu$ which (by contraction with $B^{\nu\mu}$) implies $e_\lambda\xi=\frac{1}{s+1} B^{\nu\mu} B_{\lambda\mu}\xi e_\nu$ while one also has $e_\lambda(e_\mu e_\nu)=e_\lambda\frac{1}{s+1}B_{\mu\nu}\xi$ which (by contraction with $B^{\mu\lambda}$) implies $\xi e_\nu=\frac{1}{s+1} B^{\mu\lambda}B_{\mu\nu} e_\lambda\xi$. It follows that one has $e_\lambda\xi=\left(\frac{1}{s+1}\right)^2 e_\lambda\xi$ that is $e_\lambda\xi=0$ since $s\geq 1$ and thus $e_\lambda e_\mu e_\nu=0$ for $\lambda,\mu,\nu \in \{0,\dots,s\}$ which means $\cala^!_3=0$. $\square$
In view of this lemma the Koszul complex of $\cala$
\[
\dots \rightarrow \cala \otimes \cala^{!\ast}_{n+1} \stackrel{d}{\rightarrow} \cala \otimes \cala^{!\ast}_n\rightarrow \dots
\]
reads here
\begin{equation}
0\rightarrow \cala \stackrel{x^tB}{\rightarrow} \cala^{s+1} \stackrel{x}{\rightarrow} \cala \rightarrow 0
\end{equation}
where $\cala^{s+1}=(\cala,\dots,\cala)$, $x$ means right multiplication by the column $(x^\lambda)$ and $x^tB$ means right multiplication by the row $(x^\mu B_{\mu\lambda})$. Lemma \ref{reg} implies that $\cala \stackrel{x^tB}{\rightarrow} \cala^{s+1}$ is injective while the definition of $\cala$ by generators and relation means that the sequence $\cala\stackrel{x^tB}{\rightarrow}\cala^{s+1}\stackrel{x}{\rightarrow}\cala\stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0$ is exact ($\varepsilon$ being the projection on degree 0). Therefore $\cala$ is Koszul of global dimension 2 and the exact sequence of left $\cala$-modules
\begin{equation}
0\rightarrow \cala \stackrel{x^tB}{\rightarrow} \cala^{s+1} \stackrel{x}{\rightarrow}\cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\label{eq2.7}
\end{equation}
is the Koszul (free) resolution of the trivial left $\cala$-module $\mathbb K$. By transposition and by using the invertibility of $B$, it follows that $\cala$ is also Gorenstein.\\
Conversely let $\cala$ be a connected graded algebra generated by $s+1$ elements $x^\lambda$ of degree 1 with (a finite number of) relations of degrees $\geq 2$ which is of global dimension 2 and Gorenstein. Then, as pointed out in the introduction, it is known (and easy to show) that $\cala$ is quadratic and Koszul. The Gorenstein property implies that the space of relations $R$ is 1-dimensional so $\cala$ is generated by the $x^\lambda$ with relation $B_{\mu\nu}x^\mu x^\nu=0$ ($B_{\mu\nu}\in \mathbb K$, $\mu,\nu\in \{0,\dots,s\}$) and the Koszul resolution of $\mathbb K$ is of the above form (\ref{eq2.7}). Furthermore the Gorenstein property also implies that $B$ is invertible so the corresponding bilinear form $b$ on $\mathbb K^{s+1}$ is nondegenerate and $\cala$ is of the above type (i.e. $\cala=\cala(b,2)$). This is summarized by the following theorem.
\begin{theorem}\label{KGD2}
Let $b$ be a nondegenerate bilinear form on $\mathbb K^{s+1}$ $(s\geq 1)$ with components $B_{\mu\nu}=b(e_\mu,e_\nu)$ in the canonical basis $(e_\lambda)$ of $\mathbb K^{s+1}$. Then the quadratic algebra $\cala$ generated by the elements $x^\lambda$ $(\lambda\in \{ 0,\dots,s\})$ with the relation $B_{\mu\nu}x^\mu x^\nu=0$ is Koszul of global dimension 2 and Gorenstein. Furthermore any connected graded algebra generated by $s+1$ elements $x^\lambda$ of degree 1 with relations of degree $\geq 2$ which is of global dimension 2 and Gorenstein is of the above kind for some nondegenerate bilinear form $b$ on $\mathbb K^{s+1}$.
\end{theorem}
There is a canonical right action $b\mapsto b\circ L$ ($L\in GL(s+1,\mathbb K)$) of the linear group on bilinear forms, where
\begin{equation}
(b\circ L)(X,Y)=b(LX,LY)
\end{equation}
for $X,Y\in \mathbb K^{s+1}$, which preserves the set of nondegenerate bilinear forms, and one has the following straightforward result, which is worth noticing in comparison with the similar but less obvious one in global dimension $D=3$ (see Section 4).
\begin{proposition} \label{M2}
Two nondegenerate bilinear forms $b$ and $b'$ on $\mathbb K^{s+1}$ correspond to isomorphic graded algebras $\cala(b,2)$ and $\cala(b',2)$ if and only if they belong to the same $GL(s+1,\mathbb K)$-orbit, i.e. if $b'=b\circ L$ for some $L\in GL(s+1,\mathbb K).$\end{proposition}
In view of Theorem \ref{KGD2} and Proposition \ref{M2} it is natural to define the {\sl moduli space} $\calm_s(2)$ of the quadratic algebras with $s+1$ generators which are Koszul of global dimension 2 and Gorenstein to be the space of $GL(s+1,\mathbb K)$-orbits of nondegenerate bilinear forms on $\mathbb K^{s+1}$. The {\sl moduli space} $\calm(2)$ of the connected graded algebras which are finitely generated in degree 1 and finitely presented with relations of degrees $\geq 2$ and which are of global dimension 2 and Gorenstein is then the (disjoint) union $\calm(2)=\cup_{s\geq 1} \calm_s(2)$.\\
The Poincar\'e series $P_\cala(t)=\sum_n {\mbox{dim}}(\cala_n)t^n$ of a graded algebra $\cala$ as above is given by \cite{pri:1970}, \cite{man:1988}
\begin{equation}
P_\cala(t)=\frac{1}{1-(s+1)t+t^2}
\end{equation}
which implies exponential growth for $s\geq 2$. For $s=1\ \ (s+1=2)$ the algebra has polynomial growth so it is regular in the sense of \cite{art-sch:1987}. In the latter case it is easy to classify the $GL(2,\mathbb K)$-orbits of nondegenerate bilinear forms on $\mathbb K^2$ according to the rank $\mathbf{rk}$ of their symmetric part \cite{mdv-lau:1990}:
\begin{itemize}
\item
$\mathbf{rk}=0$ - there is only one orbit which is the orbit of $b=\varepsilon$ with matrix of components $B=\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)$
which corresponds to the relation\linebreak[4] $x^1x^2-x^2x^1=0$, i.e. to the polynomial algebra $\mathbb K[x^1,x^2]$,
\item
$\mathbf{rk}=1$ - there is only one orbit which is the orbit of $b$ with matrix of components $B=\left(\begin{array}{cc}
0 & -1\\
1 & 1
\end{array}\right)$ which corresponds to the relation \linebreak[4] $x^1x^2-x^2x^1-(x^2)^2=0$,
\item
$\mathbf{rk}=2$ - the orbits are the orbits of $b=\varepsilon_q$ with matrix of components $B=\left(\begin{array}{cc}
0 & -1\\
q & 0
\end{array}\right)$ for $q\not=0$ and $q\not=1$ ($q^2\not=q$) modulo $q\sim q^{-1}$ which corresponds to the relation $x^1x^2-qx^2x^1=0$.
\end{itemize}
Thus for $s+1=2$ one recovers the usual description of regular algebras of global dimension 2 \cite{irv:1979} \cite{art-sch:1987}.\\
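As an independent numerical illustration (ours, not part of the original discussion), one can expand the Poincar\'e series above with a computer algebra system and recover the dimensions $\dim\cala_n$: for $s+1=2$ one finds $\dim\cala_n=n+1$, as for the commutative polynomial algebra in two variables, while $s+1=3$ already exhibits exponential growth.

\begin{verbatim}
# Illustrative sketch (not from the paper): expand P(t) = 1/(1-(s+1)t+t^2)
import sympy as sp

t = sp.symbols('t')
for splus1 in (2, 3):
    P = 1 / (1 - splus1*t + t**2)
    series = sp.series(P, t, 0, 8).removeO()
    dims = series.as_poly(t).all_coeffs()[::-1]   # [dim A_0, dim A_1, ...]
    print(splus1, dims)
# s+1 = 2: [1, 2, 3, 4, 5, 6, 7, 8]        polynomial growth
# s+1 = 3: [1, 3, 8, 21, 55, 144, 377, 987] exponential growth
\end{verbatim}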
The algebra $\cala_q$ of the latter case $\mathbf{rk}=2$ corresponds to the Manin plane which is the natural quantum space for the action of the quantum group $SL_q(2, \mathbb K)$ \cite{man:1988}. More generally, given $s\geq 1$ and a nondegenerate bilinear form $b$ on $\mathbb K^{s+1}$ (with matrix of components $B$), the algebra $\cala$ of Theorem \ref{KGD2} corresponds to the natural quantum space for the action of the quantum group of the nondegenerate bilinear form $b$ \cite{mdv-lau:1990} (see Appendix 2). The complete analysis of the category of representations of this quantum group has been done in \cite{bic:2003b}.\\
A lot of general results on regular rings of dimension 2 are contained in Reference \cite{zha:1998}. In particular, Theorem 3 above is covered by Theorem 0.1 of \cite{zha:1998} while Theorem 0.2 (2) of \cite{zha:1998}, stating that $\cala$ is a domain, clearly implies Lemma 1 above.\\
In Reference \cite{ber:2009}, the case where the bilinear form $b$ is degenerate and, more generally, the case of the graded algebras $\cala$ having a single homogeneous relation are analysed.
\section{Multilinear forms}
In this section $V$ is a vector space with ${\mbox{dim}}(V)\geq 2$ and $m$ is an integer with $m\geq 2$.
\begin{definition}\label{tc}
Let $Q$ be an element of the linear group $GL(V)$. An $m$-linear form $w$ on $V$ will be said to satisfy the $Q$-twisted cyclicity condition or simply to be $Q$-cyclic if one has
\begin{equation}
w(X_1,\dots,X_m)=w(QX_m,X_1,\dots,X_{m-1})
\end{equation}
for any $X_1,\dots,X_m \in V$.
\end{definition}
Let $w$ be $Q$-cyclic then one has
\begin{equation}
w(X_1,\dots,X_m)=w(QX_k,\dots,QX_m,X_1,\dots,X_{k-1})
\end{equation}
for any $2\leq k\leq m$ and finally
\begin{equation}
w(X_1,\dots,X_m)=w(QX_1,\dots, QX_m)
\end{equation}
for any $X_1,\dots, X_m\in V$ which means that $w$ is invariant by $Q$, $w=w\circ Q$.\\
Let now $w$ be an arbitrary $Q$-invariant ($w=w\circ Q$) $m$-linear form on $V$, then $\pi_Q(w)$ defined by
\begin{equation}
m\pi_Q(w)(X_1,\dots, X_m)=w(X_1,\dots,X_m)+\sum^m_{k=2}w(QX_k,\dots, QX_m,X_1,\dots, X_{k-1})
\end{equation}
is a $Q$-cyclic $m$-linear form on $V$ and the linear mapping $\pi_Q$ is a projection of the space of $Q$-invariant $m$-linear forms onto the subspace of the $Q$-cyclic ones ($(\pi_Q)^2=\pi_Q$).\\
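The following small numerical sketch (ours, added for illustration and not part of the original text) checks on a random example that $\pi_Q$ sends a $Q$-invariant form to a $Q$-cyclic one and is indeed a projection; $Q$ is taken of order 2 so that $Q$-invariant forms are easily produced by averaging.

\begin{verbatim}
# Illustrative sketch: pi_Q projects Q-invariant forms onto Q-cyclic ones
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 3
Q = np.diag([1.0, -1.0, -1.0])            # Q^2 = identity

def compose(w, Q):                         # (w o Q)(X_1,...,X_m) = w(QX_1,...,QX_m)
    for ax in range(w.ndim):
        w = np.moveaxis(np.tensordot(Q, w, axes=(0, ax)), 0, ax)
    return w

def cyc(w, Q):                             # w(Q X_m, X_1, ..., X_{m-1}) in components
    return np.moveaxis(np.tensordot(Q, w, axes=(0, 0)), 0, -1)

def pi_Q(w, Q):                            # average over the m twisted cyclic shifts
    terms, t = [], w
    for _ in range(w.ndim):
        terms.append(t)
        t = cyc(t, Q)
    return sum(terms) / w.ndim

w0 = rng.normal(size=(n,) * m)
w_inv = 0.5 * (w0 + compose(w0, Q))        # a Q-invariant m-linear form
p = pi_Q(w_inv, Q)
print(np.allclose(cyc(p, Q), p), np.allclose(pi_Q(p, Q), p))   # True True
\end{verbatim}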
Notice that to admit a nonzero $Q$-invariant multilinear form is a nontrivial condition on $Q\in GL(V)$ since it means that the operator $w\mapsto w\circ Q$ has an eigenvalue equal to 1. For instance there is no nonzero $(-\mbox{\rm 1\hspace {-.6em} l})$-invariant $m$-linear form if $m$ is odd.\\
Let us consider the right action $w\mapsto w\circ L$ ($L\in GL(V)$) of the linear group on the space of $m$-linear forms on $V$. If $w$ is $Q$-invariant then $w\circ L$ is $L^{-1}QL$-invariant and if $w$ is $Q$-cyclic then $w\circ L$ is $L^{-1}QL$-cyclic.
\begin{definition} \label{PR}
An $m$-linear form $w$ on $V$ will be said to be preregular if it satisfies the conditions $(i)$ and $(ii)$ below.\\
$(i)$ If $X\in V$ satisfies $w(X,X_1,\dots,X_{m-1})=0$ for any $X_1,\dots,X_{m-1}\in V$, then $X=0$.\\
$(ii)$ There is a $Q_w\in GL(V)$ such that $w$ is $Q_w$-cyclic.
\end{definition}
The condition $(i)$ implies that the element $Q_w$ of $GL(V)$ such that $(ii)$ is satisfied is unique for a preregular $m$-linear form $w$ on $V$. In view of $(ii)$ a preregular $w$ is such that if $X\in V$ satisfies
\[
w(X_1,\dots, X_k,X,X_{k+1},\dots,X_{m-1})=0
\]
for any $X_1,\dots, X_{m-1}\in V$ then $X=0$. An $m$-linear form $w$ satisfying this latter condition for any $k$ $(0\leq k\leq m-1)$ will be said to be 1-{\sl site nondegenerate}. Thus a preregular $m$-linear form is a 1-site nondegenerate twisted cyclic $m$-linear form.\\
The set of preregular $m$-linear forms on $V$ is invariant by the action of the linear group $GL(V)$ and one has
\begin{equation}
Q_{w\circ L}=L^{-1}Q_wL,\ \ \ \forall L\in GL(V)
\end{equation}
for a preregular $m$-linear form $w$ on $V$.\\
A bilinear form $b$ on $\mathbb K^{s+1}$ ($s\geq 1$) is preregular if and only if it is nondegenerate. Indeed if $b$ is preregular, it is nondegenerate in view of $(i)$. Conversely if $b$ is nondegenerate with matrix of components $B$ then one has $b(X,Y)=b(Q_bY,X)$ with $Q_b=(B^{-1})^tB$.\\
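As a quick numerical sanity check (ours, purely illustrative), one can verify the identity $b(X,Y)=b(Q_bY,X)$ with $Q_b=(B^{-1})^tB$ for a randomly chosen nondegenerate $B$:

\begin{verbatim}
# Illustrative check of Q_b = (B^{-1})^t B for a random nondegenerate bilinear form
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))               # generically invertible
Q = np.linalg.inv(B).T @ B
X, Y = rng.normal(size=4), rng.normal(size=4)
b = lambda u, v: u @ B @ v                # b(X, Y) = X^t B Y in the canonical basis
print(np.isclose(b(X, Y), b(Q @ Y, X)))   # True: b(X, Y) = b(Q_b Y, X)
\end{verbatim}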
As pointed out in the introduction and as will be shown later all homogeneous Koszul algebras of finite global dimension which are also Gorenstein are associated with preregular multilinear forms satisfying some other conditions depending on the global dimension $D$. Let us spell out a condition for the case $D=3$ which will be the object of the next section.
\begin{definition}\label{3R}
Let $N$ be an integer with $N\geq 2$. An $(N+1)$-linear form $w$ on $V$ will be said to be 3-regular if it is preregular and satisfies the following condition $(iii)$.\\
$(iii)$ If $L_0$ and $L_1$ are endomorphisms of $V$ satisfying
\[
w(L_0X_0,X_1,X_2,\dots,X_N)=w(X_0,L_1X_1,X_2,\dots,X_N)
\]
for any $X_0,\dots,X_N\in V$, then $L_0=L_1=\lambda\mbox{\rm 1\hspace {-.6em} l}$ for some $\lambda\in \mathbb K$.
\end{definition}
The set of all 3-regular $(N+1)$-linear forms on $V$ is also invariant by the right action of $GL(V)$.\\
Notice that condition $(iii)$ is a sort of two-site (consecutive) nondegeneracy condition. Consider the following more natural two-site condition $(iii)'$ for an $(N+1)$-linear form $w$ on $V$ ($N\geq 2$).\\
\noindent $(iii)'$ {\sl If} $\sum_iY_i\otimes Z_i \in V\otimes V$ {\sl satisfies}
\[
\sum_i w(Y_i,Z_i,X_1,\dots,X_{N-1})=0
\]
{\sl for any} $X_1,\dots,X_{N-1} \in V$, {\sl then} $\sum_iY_i\otimes Z_i=0$.\\
It is clear that the condition $(iii)'$ implies the condition $(iii)$, but it is a strictly stronger condition. For instance take $V=\mathbb K^{N+1}$ and let $\varepsilon$ be the completely antisymmetric $(N+1)$-linear form on $\mathbb K^{N+1}$ such that $\varepsilon(e_0,\dots,e_N)=1$ in the canonical basis $(e_\alpha)$ of $\mathbb K^{N+1}$. Then $\varepsilon$ is 3-regular but does not satisfy $(iii)'$ since for $Y\otimes Z + Z \otimes Y \in V\otimes V$ one has
\[
\varepsilon (Y,Z,X_1,\dots,X_{N-1})+\varepsilon (Z,Y,X_1,\dots,X_{N-1})=0
\]
identically.
\section{ Global dimension $D=3$}
Let $w$ be a preregular $(N+1)$-linear form on $\mathbb K^{s+1}$ (with $N\geq 2$ and $s\geq 1$) with components $W_{\lambda_0\dots \lambda_N}=w(e_{\lambda_0},\dots,e_{\lambda_N})$ in the canonical basis $(e_\lambda)$ of $\mathbb K^{s+1}$ and let $\cala=\cala(w,N)$ be the $N$-homogeneous algebra generated by the $s+1$ elements $x^\lambda$ ($\lambda\in \{0,\dots,s\}$) with the $s+1$ relations
\begin{equation}
W_{\lambda\lambda_1\dots \lambda_N}x^{\lambda_1} \dots x^{\lambda_N}=0\ \ \ (\lambda\in \{0,\dots,s\})\label{4.1}
\end{equation}
that is, again with the notations of \cite{ber-mdv-wam:2003}, one has $\cala=A(E,R)$ with $E=\oplus_\lambda\mathbb K x^\lambda=\cala_1$ and $R=\sum_\lambda\mathbb K W_{\lambda\lambda_1\dots \lambda_N}x^{\lambda_1}\otimes \dots \otimes x^{\lambda_N} \subset E^{\otimes^N}$. Notice that the condition $(i)$ implies that the latter sum is a direct sum i.e. ${\mbox{dim}}(R)=s+1$. \\
\noindent \underbar{Remark}. Since $w$ is 1-site nondegenerate, there is a (non unique) $(N+1)$-linear form $\tilde w$ on the dual vector space of $\mathbb K^{s+1}$ (i.e. on $E$) with components $\tilde W^{\lambda_0\dots \lambda_N}$ in the dual basis of $(e_\lambda)$ such that $\tilde W^{\mu \lambda_1\dots\lambda_N}W_{\lambda_1\dots \lambda_N\nu}=\delta^\mu_\nu$. Let $\theta_\lambda$ ($\lambda\in \{0,\dots,s\}$) be the generators of $\cala^!=\cala(w,N)^!$ corresponding to the $x^\lambda$ (dual basis). Then the $\Theta^\lambda =\tilde W^{\lambda \lambda_1\dots \lambda_N}\theta_{\lambda_1}\dots \theta_{\lambda_N}$ ($\lambda\in \{0,\dots,s\}$) form a basis of $\cala^!_N$, the relations of $\cala^!$ read
\[
\theta_{\mu_1}\dots \theta_{\mu_N}=W_{\mu_1\dots \mu_N \lambda} \Theta^\lambda, \ \ \ (\mu_k\in \{0,\dots,s\}).
\]
and the $\Theta^\lambda$ are independent of the choice of $\tilde w$ as above. If furthermore $w$ is 3-regular then Proposition \ref{CK3} in Section 9 implies that $\cala^!_{N+1}$ is 1-dimensional generated by $\Theta^\lambda\theta_\lambda$ and that $\cala^!_n=0$ for $n\geq N+2$.\\
Notice that, in view of the $Q_w$-cyclicity, the relations (\ref{4.1}) of $\cala$ read as well $W_{\mu_1\dots \mu_N\lambda} x^{\mu_1}\dots x^{\mu_N}=0$, ($\lambda\in \{0,\dots,s\}$).
\begin{theorem}\label{G3} Let $\cala$ be a connected graded algebra which is finitely generated in degree 1, finitely presented with relations of degree $\geq 2$ and which is of global dimension $D=3$ and Gorenstein. Then $\cala=\cala(w,N)$ for some 3-regular $(N+1)$-linear form $w$ on $\mathbb K^{s+1}$.
\end{theorem}
\noindent \underbar{Proof}. As pointed out in \cite{ber-mar:2006} (see also the introduction) $\cala$ is $N$-homogeneous with $N\geq 2$ and is Koszul. It follows then from the general Theorem \ref{KGD} of the next section that $\cala=\cala(w,N)$ for some preregular $(N+1)$-linear form on $\mathbb K^{s+1}$. Let us show that $w$ is in fact 3-regular.\\
The Koszul resolution of the trivial left $\cala$-module $\mathbb K$ reads
\begin{equation}
0\rightarrow \cala\otimes w \stackrel{d}{\rightarrow} \cala\otimes R \stackrel{d^{N-1}}{\rightarrow} \cala\otimes E \stackrel{d}{\rightarrow} \cala \rightarrow \mathbb K \rightarrow 0
\label{KR3}
\end{equation} where $E=\oplus_\mu \mathbb K x^\mu$, $R=\oplus_\mu \mathbb K W_{\mu \mu_1\dots \mu_N} x^{\mu_1}\otimes\dots \otimes x^{\mu_N}\subset E^{\otimes^N}$ and where $w$ is identified with the element $W_{\mu_0 \dots \mu_N}x^{\mu_0}\otimes \dots \otimes x^{\mu_N}$ of $E^{\otimes^{N+1}}$; the $N$-differential $d$ being induced by the mapping $a\otimes (x^0\otimes x^1\otimes \dots \otimes x^n)\mapsto ax^0\otimes (x^1\otimes \dots \otimes x^n)$ of $\cala\otimes E^{\otimes^{n+1}}$ into $\cala\otimes E^{\otimes^n}$, \cite{ber-mdv-wam:2003} (see Appendix 1).\\
Assume that the matrices $L_0,L_1\in M_{s+1}(\mathbb K)$ are such that one has
\begin{equation}
L_{0\ \mu_0}^\mu W_{\mu\mu_1\dots \mu_N}=L_{1\ \mu_1}^\mu W_{\mu_0\mu\mu_2\dots \mu_N} \label{4.2}
\end{equation}
and let $a^\mu\in \cala_1$ be the elements $a^\mu=L^\mu_{1\ \nu} x^\nu$. Equation (\ref{4.2}) implies $W_{\mu_0\mu\mu_2\dots\mu_N} a^\mu x^{\mu_2}\dots x^{\mu_N}=0$ or equivalently in view of Property $(ii)$ of Definition \ref{PR}
\[
W_{\mu\mu_1\dots \mu_N}a^\mu x^{\mu_1}\dots x^{\mu_{N-1}}\otimes x^{\mu_N}=0
\]
which also reads
\begin{equation}
d^{N-1}(a^\mu\otimes W_{\mu\mu_1\dots \mu_N}x^{\mu_1}\otimes \dots \otimes x^{\mu_N})=0
\end{equation}
for the element $a^\mu\otimes W_{\mu\mu_1\dots \mu_N}x^{\mu_1}\otimes \dots \otimes x^{\mu_N}$ of $\cala_1\otimes R$. Exactness of the sequence (\ref{KR3}) at $\cala\otimes R$ implies that one has
\begin{equation}
a^\mu\otimes W_{\mu\mu_1\dots \mu_N}x^{\mu_1}\otimes \dots \otimes x^{\mu_N}=d(\lambda \mbox{\rm 1\hspace {-.6em} l}\otimes w)
\end{equation}
for some $\lambda\in \mathbb K$. This implies
\[
a^\mu=L^\mu_{1\ \nu}x^\nu=\lambda x^\mu
\]
in view of Property $(i)$ of Definition \ref{PR}. Using again Property $(i)$, one finally obtains
\[
L_0=L_1=\lambda\mbox{\rm 1\hspace {-.6em} l} \ \ (\in M_{s+1}(\mathbb K))
\]
as consequence of (\ref{4.2}) which means that $w$ is 3-regular.$\square$\\
The Poincar\'e series of such an $N$-homogeneous algebra $\cala$ which is Koszul of global dimension 3 and which is Gorenstein is given by \cite{art-tat-vdb:1991}, \cite{mdv-pop:2002}
\begin{equation}
P_\cala(t)=\frac{1}{1-(s+1)t+(s+1)t^N-t^{N+1}}
\label{S3}
\end{equation}
where $s+1={\mbox{dim}} (\cala_1)$ is as before the number of independent generators (of degree 1). It follows from this formula that $\cala$ has exponential growth if $s+1+N>5$, while the case $s+1=2$ and $N=2$ is impossible since then the coefficient of $t^4$ vanishes while the coefficient of $t^6$ does not ($(\cala_1)^4=0$ and $(\cala_1)^6\not=0$ is impossible). Thus there remain the cases $s+1=3,N=2$ and $s+1=2,N=3$ for which one has polynomial growth \cite{art-sch:1987}. These latter cases are the object of \cite{art-sch:1987} and we shall describe examples with exponential growth in Section \ref{EX}.
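These growth statements can be checked directly by expanding the series (\ref{S3}); the short sketch below (ours, purely illustrative) prints the first dimensions for the two polynomial-growth cases, for the impossible case $s+1=2$, $N=2$, and for an exponential-growth example.

\begin{verbatim}
# Illustrative expansion of P(t) = 1/(1-(s+1)t+(s+1)t^N-t^(N+1))
import sympy as sp

t = sp.symbols('t')
def dims(splus1, N, order=9):
    P = 1 / (1 - splus1*t + splus1*t**N - t**(N + 1))
    return sp.series(P, t, 0, order).removeO().as_poly(t).all_coeffs()[::-1]

print(dims(3, 2))   # 1, 3, 6, 10, ...          polynomial growth (binomial coefficients)
print(dims(2, 3))   # 1, 2, 4, 6, 9, 12, ...    polynomial (quadratic) growth
print(dims(2, 2))   # 1, 2, 2, 1, 0, 0, 1, ...  dim A_4 = 0 but dim A_6 = 1: impossible
print(dims(4, 3))   # 1, 4, 16, 60, 225, ...    exponential growth since s+1+N > 5
\end{verbatim}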
\begin{proposition}\label{M3}
Two 3-regular $(N+1)$-linear forms $w$ and $w'$ on $\mathbb K^{s+1}$ correspond to isomorphic graded algebras $\cala(w,N)$ and $\cala(w',N)$ if and only if they belong to the same $GL(s+1,\mathbb K)$-orbit.
\end{proposition}
\noindent \underbar{Proof}. If $w'=w\circ L$ for $L\in GL(s+1,\mathbb K)$, the fact that the corresponding algebras are isomorphic is immediate since then $L$ is just a linear change of generators.\\
Assume now that the graded algebras are isomorphic. Then in degree 1 this isomorphism gives an element $L$ of $GL(s+1,\mathbb K)$ such that in components one has
\begin{equation}
W'_{\alpha_0\alpha_1\dots \alpha_N}=K^\alpha_{\alpha_0} L^{\beta_0}_\alpha L^{\beta_1}_{\alpha_1} \dots L^{\beta_N}_{\alpha_N} W_{\beta_0\beta_1\dots \beta_N}
\end{equation}
for some $K\in GL(s+1,\mathbb K)$ since in view of $(i)$ the relations are linearly independent. Using the property $(ii)$ for $w'$ and for $w$, one gets
\begin{equation}
(Q_{w'})^\alpha_{\alpha_N} W'_{\alpha\alpha_0\dots \alpha_{N-1}}=(K^{-1}L^{-1}Q_wL)^\alpha_{\alpha_N} K^\beta_{\alpha_0}W'_{\alpha\beta\alpha_1\dots \alpha_{N-1}}
\end{equation}
from which it follows by using the property $(iii)$ for $w'$
\begin{equation}
Q_{w'}(L^{-1}Q_wL)^{-1}K=K=\lambda \mbox{\rm 1\hspace {-.6em} l}
\end{equation}
for some $\lambda \in \mathbb K$. Since $K$ is invertible, one has $Q_{w'}=Q_{w\circ L}$ and $\lambda\not= 0$, and thus $w'=\lambda w\circ L$, i.e. $w'=w\circ L$ after replacing $L$ by $\lambda^{-\frac{1}{N+1}}L$.$\square$\\
\section{Homogeneous algebras associated with multilinear forms}
In this section $m$ and $N$ are two integers such that $m\geq N\geq 2$ and $w$ is a preregular $m$-linear form on $\mathbb K^{s+1}$ with components $W_{\lambda_1\dots \lambda_m}=w(e_{\lambda_1},\dots,e_{\lambda_m})$ in the canonical basis $(e_\lambda)$ of $\mathbb K^{s+1}$. Let $\cala=\cala(w,N)$ be the $N$-homogeneous algebra generated by the $s+1$ elements $x^\lambda$ ($\lambda \in \{0,\dots,s\}$) with the relations
\begin{equation}
W_{\lambda_1\dots \lambda_{m-N}\mu_1\dots \mu_N} x^{\mu_1}\dots x^{\mu_N}=0
\end{equation}
$\lambda_i\in\{0,\dots,s\}$, that is one has $\cala=A(E,R)$ with $E=\oplus_\lambda\mathbb K x^\lambda=\cala_1$ and $R=\sum_{\lambda_i}\mathbb K W_{\lambda_1\dots \lambda_{m-N}\mu_1\dots \mu_N} x^{\mu_1}\otimes \dots \otimes x^{\mu_N}\subset E^{\otimes^N}$. Define ${\cal W}_n\subset E^{\otimes^n}$ for $m\geq n\geq 0$ by
\begin{equation}
{\cal W}_n=\left\{ \begin{array}{l}
\sum_{\lambda_i} \mathbb K W_{\lambda_1\dots\lambda_{m-n}\mu_1\dots \mu_n} x^{\mu_1}\otimes \dots \otimes x^{\mu_n}\ \ \mbox{if}\ \ m\geq n\geq N\\
\\
E^{\otimes^n}\ \ \ \mbox{if} \ \ \ N-1\geq n\geq 0
\end{array}
\right.
\end{equation}
and consider the sequence
\begin{equation}
\label{Sm}
0\rightarrow \cala \otimes {\cal W}_m \stackrel{d}{\rightarrow} \cala \otimes {\cal W}_{m-1} \stackrel{d}{\rightarrow} \dots \stackrel{d}{\rightarrow} \cala\otimes E \stackrel{d}{\rightarrow} \cala\rightarrow 0
\end{equation}
of free left $\cala$-modules where the homomorphisms $d$ are induced by the homomorphisms of $\cala\otimes E^{\otimes^{n+1}}$ into $\cala\otimes E^{\otimes^n}$ defined by $a\otimes (v_0\otimes v_1\otimes \dots \otimes v_n)\mapsto av_0\otimes (v_1\otimes \dots \otimes v_n)$ for $n\geq 0$, $a\in \cala$ and $v_i\in E=\cala_1$.
\begin{proposition}\label{KSm}
Sequence $(\ref{Sm})$ is a sub-$N$-complex of $K(\cala)$ $($the Koszul $N$-complex of $\cala)$.
\end{proposition}
\noindent \underbar{Proof.} By the property $(ii)$ of $w$ the relations $W_{\lambda_1\dots \lambda_{m-N}\mu_1\dots \mu_N}x^{\mu_1}\dots x^{\mu_N}=0$ are equivalent to $W_{\lambda_{r+1}\dots \lambda_{m-N}\mu_1\dots \mu_N \lambda_1 \dots \lambda_r} x^{\mu_1}\dots x^{\mu_N}=0$ for $m-N\geq r\geq 0$. It follows that ${\cal W}_n\subset E^{\otimes^{n-N-r}}\otimes R\otimes E^{\otimes^r}$ for any $r$ such that $n-N\geq r\geq 0$ so ${\cal W}_n\subset \cap_r E^{\otimes^{n-N-r}}\otimes R\otimes E^{\otimes^r}=\left (\cala^!_n\right)^\ast$ and therefore $\cala\otimes {\cal W}_n\subset K_n(\cala)$ for $n\geq N$. The equalities $\cala\otimes {\cal W}_n=K_n(\cala)$ for $N-1\geq n\geq 0$ are obvious. This implies the result. $\square$\\
In the proof of the above proposition we have shown in particular that one has ${\cal W}_m\subset (\cala^!_m)^\ast$ so that $w\in (\cala^!_m)^\ast$. It follows that $w$ composed with the canonical projection $\cala^!\rightarrow \cala^!_m$ onto degree $m$ defines a linear form $\omega_w$ on $\cala^!$. On the other hand one can write $Q_w\in GL(s+1,\mathbb K)=GL(E^\ast)=GL(\cala^!_1)$. With these notations one has the following proposition.
\begin{proposition} \label{pF}
The element $Q_w$ of ${\mbox{GL}}(\cala^!_1) (={\mbox{GL}} (s+1,\mathbb K))$ induces an automorphism $\sigma_w$ of $\cala^!$ and one has $\omega_w(xy)=\omega_w(\sigma_w(y)x)$ for any $x,y \in \cala^!$. Considered as an element $Q^w$ of $GL(\cala_1)=GL(E)$, the transpose $Q^t_w=Q^w$ of $Q_w$ induces an automorphism $\sigma^w$ of $\cala$.
\end{proposition}
\noindent \underbar{Proof}. $Q_w$ induces an automorphism $\tilde \sigma_w$ of degree 0 of the tensor algebra $T(E^\ast)$. Let $\tilde x\in E^{\ast\otimes^N}$ be in $R^\perp$ i.e. $\tilde x=\rho^{\mu_1\dots \mu_N}e_{\mu_1} \otimes \dots \otimes e_{\mu_N}$ with $W_{\lambda_1\dots \lambda_{m-N} \mu_1\dots\mu_N}\rho^{\mu_1\dots \mu_N}=0$, $\forall \lambda_i$. The invariance of $w$ by $Q_w$ implies
\[
Q^{\rho_1}_{\lambda_1}\dots Q^{\rho_{m-N}}_{\lambda_{m-N}} W_{\rho_1\dots \rho_{m-N} \nu_1\dots \nu_N} Q^{\nu_1}_{\mu_1}\dots Q^{\nu_N}_{\mu_N}\rho^{\mu_1\dots \mu_N}=0
\]
i.e. $W_{\lambda_1\dots \lambda_{m-N} \nu_1\dots \nu_N} Q^{\nu_1}_{\mu_1} \dots Q^{\nu_N}_{\mu_N} \rho^{\mu_1\dots\mu_N}=0$, $\forall \lambda_i$, which means $\tilde \sigma_w(\tilde x)\in R^\perp$. Thus one has $\tilde\sigma_w(R^\perp)=R^\perp$ which implies that $\tilde \sigma_w$ induces an automorphism $\sigma_w$ of the $N$-homogeneous algebra $\cala^!$. The property $\omega_w(xy)=\omega_w(\sigma_w(y)x)$ for $x,y\in \cala^!$ is then just a rewriting of the property $(ii)$ of $w$ (i.e. the $Q_w$-twisted cyclicity and its consequences given by (3.2)). The last statement of the proposition follows again from the invariance of $w$ by $Q_w$ which implies that one has $(Q^w)^{\otimes^N}(R)\subset R$. $\square$\\
\begin{theorem} \label{Fr}
The subset ${\cal I}$ of $\cala^!$ defined by
\[
{\cal I}=\{y\in \cala^!\vert \omega_w(xy)=0,\ \ \ \forall x\in \cala^!\}
\]
is a graded two-sided ideal of $\cala^!$ and the quotient algebra ${\cal F}(w,N)=\cala^!/{\cal I}$ equipped with the linear form induced by $\omega_w$ is a graded Frobenius algebra.\end{theorem}
\noindent \underbar{Proof}. By its very definition, ${\cal I}$ is a left ideal. It follows from $\omega_w(xy)=\omega_w(\sigma_w(y)x)$ that ${\cal I}$ is also a right ideal, so it is a two-sided ideal. By construction one has ${\cal F}(w,N)={\cal F}=\oplus^m_{p=0}{\cal F}_p$ with ${\mbox{dim}}({\cal F}_m)=1$ and the pairing induced by $(x,y)\mapsto \omega_w(xy)$ is nondegenerate and is a Frobenius pairing on ${\cal F}$.$\square$\\
One has ${\mbox{dim}}({\cal F}_0)={\mbox{dim}}({\cal F}_m)=1,\ {\mbox{dim}}({\cal F}_1)={\mbox{dim}}({\cal F}_{m-1})=s+1$ and of course ${\mbox{dim}}({\cal F}_p)={\mbox{dim}}({\cal F}_{m-p})$ for $p\in \{0,\dots,m\}$. The automorphism $\sigma_w$ induces an automorphism $\sigma$ of ${\cal F}$ and one has $xy=\sigma(y)x$ for $x\in {\cal F}_p$ and $y\in{\cal F}_{m-p}$, $m\geq p\geq 0$.\\
Notice that if $L\in GL(s+1,\mathbb K)$ then $\cala(w,N)$ and $\cala(w\circ L,N)$ are isomorphic $N$-homogeneous algebras.\\
In the following we let $^w\cala$ denote the $(\cala,\cala)$-bimodule which coincides with $\cala$ as right $\cala$-module and is such that the structure of left $\cala$-module is given by left multiplication by $(-1)^{(m-1)n}(\sigma^w)^{-1}(a)$ for $a\in \cala_n$. One has the following result in the quadratic case $\cala=\cala(w,2)$.
\begin{proposition}\label{Vol}
Assume that $N=2$, that is that $\cala$ is the quadratic algebra $\cala=\cala(w,2)$. Then $\mbox{\rm 1\hspace {-.6em} l} \otimes w$ is canonically a nontrivial $^w\cala$-valued Hochschild $m$-cycle on $\cala$, that is one has $\mbox{\rm 1\hspace {-.6em} l} \otimes w \in Z_m(\cala,^w\cala)$ with $\mbox{\rm 1\hspace {-.6em} l} \otimes w \not\in B_m(\cala,^w\cala)$.
\end{proposition}
\noindent \underbar{Proof}. The $m$-linear form $w$ on $\mathbb K^{s+1}$ identifies canonically with an element of $E^{\otimes^m}=\cala_1^{\otimes^m}
\subset \cala^{\otimes^m}$, i.e. one can write $w\in \cala^{\otimes^m}$. By
interpreting $\mbox{\rm 1\hspace {-.6em} l} \in \cala$ as an element of $^w\cala$ one can consider that $\mbox{\rm 1\hspace {-.6em} l}\otimes w$ is an $^w\cala$-valued Hochschild $m$-chain. The Hochschild boundary of $\mbox{\rm 1\hspace {-.6em} l}\otimes w$ reads
\[
\begin{array}{lll}
b(\mbox{\rm 1\hspace {-.6em} l}\otimes w) & = & W_{\lambda_1\dots \lambda_m}x^{\lambda_1}\otimes \dots \otimes x^{\lambda_m}\\
& + & \sum^{m-1}_{k=1} (-1)^k \mbox{\rm 1\hspace {-.6em} l} \otimes W_{\lambda_1\dots \lambda_m} x^{\lambda_1} \otimes \dots \otimes x^{\lambda_k}x^{\lambda_{k+1}}\otimes\dots \otimes x^{\lambda_{m}}\\
& - & (Q^{-1}_w)^{\lambda_m}_\lambda W_{\lambda_1\dots \lambda_m} x^\lambda \otimes x^{\lambda_1}\otimes\dots \otimes x^{\lambda_{m-1}}
\end{array}
\]
The sum of the first and of the last term vanishes by $Q_w$-cyclicity while each of the other terms vanishes since the relations $W_{\lambda_1\dots \lambda_{m-2}\mu\nu} x^\mu x^\nu=0$ are equivalent to $W_{\lambda_1\dots \lambda_r \mu\nu \lambda_{r+1}\dots \lambda_{m-2}}x^\mu x^\nu=0$ again by $Q_w$-cyclicity. Therefore one has $b(\mbox{\rm 1\hspace {-.6em} l}\otimes w)=0$. By using the fact that the Hochschild boundary preserves the total $\cala$-degree it is easy to see that $\mbox{\rm 1\hspace {-.6em} l}\otimes w$ cannot be a boundary.$\square$\\
Thus in the quadratic case $N=2$, if $Q_w=(-1)^{m-1}\mbox{\rm 1\hspace {-.6em} l}$ then $\mbox{\rm 1\hspace {-.6em} l}\otimes w$ represents the analog of a differential $m$-form, i.e. an element of $H_m(\cala,\cala)=HH_m(\cala)$; if $Q_w$ is different from $(-1)^{m-1}\mbox{\rm 1\hspace {-.6em} l}$, this is a twisted version of a differential $m$-form. We shall come back later to this interpretation in the Koszul-Gorenstein case where $\mbox{\rm 1\hspace {-.6em} l}\otimes w$ plays the role of a volume element.
\begin{theorem}\label{KGD}
Let $\cala$ be a $N$-homogeneous algebra generated by $s+1$ elements $x^\lambda$ $(\lambda \in \{0,\dots,s\})$ which is Koszul of finite global dimension $D$ and which is Gorenstein. Then $\cala=\cala(w,N)$ for some preregular $m$-linear form $w$ on $\mathbb K^{s+1}$ which is such that if $N\geq 3$ then one has $m=Np+1$ and $D=2p+1$ for some integer $p\geq 1$ while for $N=2$ one has $m=D$. Under these conditions $w$ is unique up to a multiplicative factor in $\mathbb K\backslash \{0\}$.
\end{theorem}
\noindent \underbar {Proof}. The Koszul resolution of the trivial left $\cala$-module $\mathbb K$ ends as (
\cite{ber-mdv-wam:2003}, see also Appendix 1)
\begin{equation}
\dots \stackrel{d}{\rightarrow} \cala \otimes \cala^{!\ast}_N \stackrel{d^{N-1}}{\rightarrow} \cala\otimes \cala_1 \stackrel{d}{\rightarrow} \cala \rightarrow \mathbb K \rightarrow 0
\end{equation}
so the Gorenstein property implies that it starts as
\begin{equation}
0\rightarrow \cala \otimes \cala^{!\ast}_m \stackrel{d}{\rightarrow} \cala \otimes \cala^{!\ast}_{m-1} \stackrel{d^{N-1}}{\rightarrow} \dots
\end{equation}
with
\begin{equation}
{\mbox{dim}} (\cala^{!\ast}_m)=1
\end{equation}
and
\begin{equation}
{\mbox{dim}} (\cala^{!\ast}_{m-1})={\mbox{dim}}(\cala_1)=s+1
\end{equation}
for some $m\geq N$ which corresponds to the $D$-th term. This implies that $m=D$ for $N=2$ and that for $N\geq 3$ one has $m=Np+1$ and $D=2p+1$ for some integer $p\geq 1$. Let $w$ be a generator of the 1-dimensional subspace $\cala^{!\ast}_m$ of $\cala^{\otimes^m}_1$. Since $\cala_1$ identifies canonically with the dual vector space of $\mathbb K^{s+1}$, $w$ is (canonically) a $m$-linear form on $\mathbb K^{s+1}$. For $0\leq k\leq m$ and $\theta \in \cala^!_k$, one defines $\theta w\in \cala^{!\ast}_{m-k}$ by setting
\begin{equation}
(\theta w)(\alpha)=\langle w,\alpha\theta\rangle
\end{equation}
for $\alpha\in \cala^!_{m-k}$. The mapping $\theta\mapsto \theta w$ defines a left $\cala^!$-module homomorphism $\bar \Phi$ of $\cala^!$ into $\cala^{!\ast}$ with $\bar\Phi(\cala^!_k)\subset \cala^{!\ast}_{m-k}$ for $k\in\{0,\dots,m\}$ and the Gorenstein property implies that $\bar\Phi$ induces the linear isomorphisms
\begin{equation}\label{Iso}
\bar\Phi:\cala^!_{\nu_N(p)}\simeq \cala^{!\ast}_{\nu_N(D-p)}
\end{equation}
for $p\in \{0,\dots,D\}$ where $\nu_N(2\ell)=N\ell$ and $\nu_N(2\ell+1)=N\ell+1$ for $\ell\in \mathbb N$, \cite{ber-mar:2006} (Theorem 5.4 of \cite{ber-mar:2006}). The isomorphisms (\ref{Iso}) for $p=1$ and $p=D-1$ imply that $w$ is a preregular $m$-linear form on $\mathbb K^{s+1}$ while the isomorphism (\ref{Iso}) for $p=D-2$ implies that the relations of $\cala$ read
\[
W_{\lambda_1\dots \lambda_{m-N}\mu_1\dots \mu_N} x^{\mu_1}\dots x^{\mu_N}=0
\]
with $W_{\lambda_1\dots \lambda_m}=w(e_{\lambda_1},\dots,e_{\lambda_m})=\langle w,e_{\lambda_1}\dots e_{\lambda_m}\rangle$ and hence that one has $\cala=\cala(w,N)$. $\square$\\
Notice that under the assumptions of Theorem \ref{KGD}, the Koszul resolution of the trivial left $\cala$-module $\mathbb K$ reads
\[
0\rightarrow \cala\otimes {\cal W}_m\stackrel{d}{\rightarrow} \cala\otimes {\cal W}_{m-1} \stackrel{d^{N-1}}{\rightarrow} \dots \stackrel{d}{\rightarrow} \cala\otimes {\cal W}_N \stackrel{d^{N-1}}{\rightarrow} \cala\otimes E \stackrel{d}{\rightarrow} \cala \rightarrow \mathbb K \rightarrow 0
\]
i.e.
\begin{equation}
0\rightarrow \cala \otimes {\cal W}_{\nu_N(D)} \stackrel{d'}{\rightarrow} \dots \stackrel{d'}{\rightarrow} \cala \otimes {\cal W}_{\nu_N(k)} \stackrel{d'}{\rightarrow} \cala \otimes {\cal W}_{\nu_N(k-1)} \stackrel{d'}{\rightarrow} \dots \stackrel{d'}{\rightarrow} \cala\rightarrow \mathbb K \rightarrow 0
\end{equation}
where $d'=d^{N-1}:\cala\otimes {\cal W}_{\nu_N(2\ell)}\rightarrow \cala \otimes {\cal W}_{\nu_N(2\ell-1)}$ and $d'=d:\cala\otimes {\cal W}_{\nu_N(2\ell+1)}\rightarrow \cala\otimes {\cal W}_{\nu_N(2\ell)}$ and that one has
\begin{equation}
{\mbox{dim}}({\cal W}_{\nu_N(k)})={\mbox{dim}}({\cal W}_{\nu_N(D-k)})
\end{equation}
for any $0\leq k\leq D$. Thus $\cala\otimes {\cal W}_m=\cala\otimes {\cal W}_{\nu_N(D)}=\cala\otimes w$ so one sees that $\mbox{\rm 1\hspace {-.6em} l} \otimes w$ is the generator of the top module of the Koszul resolution which also leads to an interpretation of $\mbox{\rm 1\hspace {-.6em} l} \otimes w$ as volume form.
\section{Examples}\label{EX}
\subsection{Yang-Mills algebra} \label{subsec1}
Let $g_{\mu\nu}$ be the components of a symmetric nondegenerate bilinear form on $\mathbb K^{s+1}$. The Yang-Mills algebra \cite{ac-mdv:2002b}, \cite{nek:2003} is the cubic algebra $\cala$ generated by the $s+1$ elements $x^\lambda$ ($\lambda \in \{0,\dots s\}$) with the $s+1$ relations
\begin{equation}\label{YM}
g_{\lambda\mu}[x^\lambda,[x^\mu,x^\nu]]=0
\end{equation}
for $\nu\in \{0,\dots, s\}$.\\
It was claimed in \cite{ac-mdv:2002b} that this algebra is Koszul of global dimension 3 and is Gorenstein. The relations (\ref{YM}) can be rewritten as
\begin{equation}
(g_{\rho\lambda} g_{\mu\nu}+g_{\rho\nu}g_{\lambda\mu}-2g_{\rho\mu}g_{\lambda\nu})x^\lambda x^\mu x^\nu=0
\end{equation}
(for $\rho\in \{0,\dots, s\}$) and one verifies that the 4-linear form $w$ on $\mathbb K^{s+1}$ with components
\begin{equation}
W_{\rho\lambda \mu \nu}=g_{\rho\lambda} g_{\mu\nu}+g_{\rho\nu} g_{\lambda\mu}-2g_{\rho\mu}g_{\lambda\nu}
\end{equation}
is 3-regular with $Q_w=\mbox{\rm 1\hspace {-.6em} l}$. So it is invariant by cyclic permutations and one has $\cala=\cala(w,3)$ with the notations of the previous sections. It is easy to see that $w$ does not only satisfy the condition $(iii)$ but satisfies the stronger condition $(iii)'$. This implies (and is equivalent to the fact) that the dual cubic algebra $\cala^!$ is Frobenius.
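As a small numerical illustration (ours, not part of the original text), one can check the cyclicity $W_{\rho\lambda\mu\nu}=W_{\nu\rho\lambda\mu}$ (i.e. $Q_w=\mbox{\rm 1\hspace {-.6em} l}$) and the 1-site nondegeneracy (condition $(i)$) of this 4-linear form for a randomly chosen symmetric nondegenerate $g$:

\begin{verbatim}
# Illustrative check for the Yang-Mills 4-linear form of Subsection 6.1
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.normal(size=(n, n))
g = A + A.T + n * np.eye(n)                # a generic symmetric nondegenerate matrix
W = (np.einsum('rl,mn->rlmn', g, g) + np.einsum('rn,lm->rlmn', g, g)
     - 2 * np.einsum('rm,ln->rlmn', g, g))
cyclic = np.allclose(W, np.transpose(W, (1, 2, 3, 0)))   # W[r,l,m,n] == W[n,r,l,m]
rank = np.linalg.matrix_rank(W.reshape(n, n**3))         # condition (i): full rank n
print(cyclic, rank == n)                                 # True True
\end{verbatim}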
\subsection{Super Yang-Mills algebra}
With the same conventions as in \ref{subsec1}, the super Yang-Mills algebra
\cite{ac-mdv:2007} is the cubic algebra $\tilde \cala$ generated by the $s+1$ elements $x^\lambda$ with the $s+1$ relations
\begin{equation}\label{SYM}
g_{\lambda\mu}[x^\lambda,\{x^\mu,x^\nu\}]=0
\end{equation}
for $\nu\in \{0,\dots, s\}$ and where $\{A,B\}=AB+BA$. As pointed out in
\cite{ac-mdv:2007} the relations (\ref{SYM}) can be equivalently written as
\begin{equation}
[g_{\lambda\mu}x^\lambda x^\mu, x^\nu]=0
\end{equation}
which means that the quadratic element $g_{\lambda\mu}x^\lambda x^\mu$ is central.\\
It was claimed in \cite{ac-mdv:2007} that this algebra is Koszul of global dimension 3 and is Gorenstein.\\
The relations (\ref{SYM}) can be rewritten as
\begin{equation}
(g_{\rho\lambda} g_{\mu\nu} -g_{\rho\nu} g_{\lambda\mu})x^\lambda x^\mu x^\nu=0
\end{equation}
and one verifies that the 4-linear form $\tilde w$ on $\mathbb K^{s+1}$ with components
\begin{equation}
\tilde W_{\rho\lambda\mu\nu}=g_{\rho\lambda}g_{\mu\nu} - g_{\rho\nu} g_{\lambda\mu}
\end{equation}
is 3-regular with $Q_{\tilde w}=-\mbox{\rm 1\hspace {-.6em} l}$ and satisfies the stronger condition $(iii)'$. Thus one has $\tilde \cala=\cala(\tilde w,3)$ and the dual cubic algebra $\tilde\cala^!$ is Frobenius.\\
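Similarly (again an illustrative numerical check of ours), the $Q_{\tilde w}=-\mbox{\rm 1\hspace {-.6em} l}$ twisted cyclicity amounts to $\tilde W_{\rho\lambda\mu\nu}=-\tilde W_{\nu\rho\lambda\mu}$, which can be verified componentwise:

\begin{verbatim}
# Illustrative check for the super Yang-Mills 4-linear form
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.normal(size=(n, n))
g = A + A.T + n * np.eye(n)
Wt = np.einsum('rl,mn->rlmn', g, g) - np.einsum('rn,lm->rlmn', g, g)
print(np.allclose(Wt, -np.transpose(Wt, (1, 2, 3, 0))))   # True: Q = -1 twisted cyclicity
\end{verbatim}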
The Poincar\'e series of the Yang-Mills algebra and of the super Yang-Mills algebra coincide and are given by
\begin{equation}
\frac{1}{1-(s+1)t+(s+1)t^3-t^4}=\left(\frac{1}{1-t^2}\right)\left(\frac{1}{1-(s+1)t+t^2}\right)
\end{equation}
which is a particular case of the formula (\ref{S3}). It follows that these algebras have exponential growth for $s\geq 2$. For $s+1=2$ these are particular cubic Artin-Schelter algebras \cite{art-sch:1987} (see also \cite{mdv-pop:2002}).\\
An elegant powerful proof of the Koszulity of the Yang-Mills and the super Yang-Mills algebras is given in \cite{kri-vdb:2010}.
\subsection{The algebras $\cala(\varepsilon, N)$ for $s+1\geq N \geq 2$}
Assume that $s+1\geq N\geq 2$ and let $\varepsilon$ be the completely antisymmetric $(s+1)$-linear form on $\mathbb K^{s+1}$ such that $\varepsilon(e_0,\dots,e_s)=1$, where $(e_\lambda)$ is the canonical basis of $\mathbb K^{s+1}$. It is clear that $\varepsilon$ is preregular with $Q_\varepsilon=(-1)^s\mbox{\rm 1\hspace {-.6em} l}$ and that furthermore it satisfies $(iii)$ whenever $s+1\geq 3$.\\
The $N$-homogeneous algebra $\cala(\varepsilon, N)$ generated by the $s+1$ elements $x^\lambda$ with the relations
\begin{equation}
\varepsilon_{\alpha_0\dots \alpha_{s-N}\lambda_1\dots \lambda_N} x^{\lambda_1}\dots x^{\lambda_N}=0
\end{equation}
(for $\alpha_i\in \{0,\dots,s\}$) was introduced in \cite{ber:2001a} where it was shown that it is Koszul of finite global dimension. It was then shown in \cite{ber-mar:2006} that $\cala(\varepsilon,N)$ is Gorenstein if and only if either $N=2$ or $N>2$ and $s=Nq$ for some integer $q\geq 1$. The case $N=2$ corresponds to the polynomial algebra with $s+1$ indeterminates, which is of global dimension $s+1$, while in the case $N>2$ and $s=Nq$ the $N$-homogeneous algebra $\cala(\varepsilon, N)$, which is then Koszul and Gorenstein, has global dimension $D=2q+1$.\\
In the general case if $N>2$ the dual $N$-homogeneous algebra $\cala(\varepsilon, N)^!$ cannot be Frobenius since the ideal $I_\varepsilon$ always contains the nontrivial quadratic elements
\begin{equation}
e_\lambda e_\mu +e_\mu e_\lambda
\end{equation}
and is in fact generated by these elements in the Koszul-Gorenstein case i.e. when $s=Nq$ with $q\geq 1$. In this latter case $\cala(\varepsilon,N)^!/I_\varepsilon$ is the exterior algebra $\wedge\mathbb K^{s+1}$ over $\mathbb K^{s+1}$ which is the dual $\cala(\varepsilon,2)^!$ ($=\wedge\mathbb K^{s+1}$) of the quadratic algebra $\cala(\varepsilon,2)$ generated by the $x^\lambda$ ($\lambda\in \{0,\dots,s\}$) with the relations
\begin{equation}
x^\lambda x^\mu=x^\mu x^\lambda
\end{equation}
(which coincides with the polynomial algebra with $s+1$ indeterminates). \\
One thus recovers by this process the quadratic relations implying the original $N$-homogeneous ones with $N>2$ (for $s=Nq$ with $q\geq 1$). One may wonder whether there is a lesson to extract from this example: namely, starting from a Koszul-Gorenstein $N$-homogeneous algebra $\cala$ with $N$-homogeneous dual $\cala^!$ which is not Frobenius, is there an $N_0$-homogeneous algebra $\cala_0$ with $\cala^!_0$ Frobenius such that the relations of $\cala$ are implied by the relations of $\cala_0$? Notice that the Koszul-Gorenstein algebras $\cala(\varepsilon,N)$ (for $s=Nq,N\geq 3, q\geq 1$) have exponential growth.
\subsection{The algebra $\cala_{\mathbf u}$}
Let us now discuss in the present context the algebra $\cala_{\mathbf u}$ introduced in \cite{ac-mdv:2002a} and analyzed in detail in
\cite{ac-mdv:2003}
and \cite{ac-mdv:2008}. The algebra $\cala_{\mathbf u}$, which corresponds to a noncommutative 4-plane, is the quadratic algebra generated by the 4 generators $x^\lambda$ ($\lambda\in \{0,1,2,3\}$) with the relations
\begin{equation}
\cos (\varphi_0-\varphi_k)[x^0,x^k]=i\sin (\varphi_\ell - \varphi_m) \{x^\ell,x^m\}
\end{equation}
\begin{equation}
\cos(\varphi_\ell-\varphi_m)[x^\ell,x^m]=i\sin (\varphi_0-\varphi_k)\{x^0,x^k\}
\end{equation}
for any cyclic permutation $(k,\ell,m)$ of $(1,2,3)$ and where $\{A,B\}=AB+BA$ as before. The parameter ${\mathbf u}$ is the element
\begin{equation}
{\mathbf u} = (e^{i(\varphi_1-\varphi_0)},e^{i(\varphi_2-\varphi_0)}, e^{i(\varphi_3-\varphi_0)})
\end{equation}
of $T^3$. This algebra is Koszul of global dimension 4 and is Gorenstein (so an example with $N=2$ and $s+1=D=4$) whenever none of these six relations becomes trivial and one has the non trivial Hochschild cycle \cite{ac-mdv:2002a}
\begin{eqnarray}
w_{\mathbf u} = \tilde{\mbox{ch}}_{3/2}(U_{\mathbf u}) & = &-\sum_{\alpha, \beta, \gamma, \delta} \varepsilon_{\alpha\beta\gamma\delta} \cos (\varphi_\alpha-\varphi_\beta + \varphi_\gamma-\varphi_\delta) x^\alpha \otimes x^\beta \otimes x^\gamma \otimes x^\delta\nonumber \\
& + & i\sum_{\mu,\nu} \sin (2(\varphi_\mu-\varphi_\nu)) x^\mu \otimes x^\nu \otimes x^\mu \otimes x^\nu
\end{eqnarray}
which may be considered as a 4-linear form on $\mathbb K^4$ which is preregular. Notice that the components $W_{\rho\lambda\mu\nu}$ of $w_{\mathbf u}=W_{\rho\lambda\mu\nu} x^\rho\otimes x^\lambda\otimes x^\mu\otimes x^\nu$ can be written as
\begin{equation}
W_{\rho\lambda\mu\nu}=-\cos(\varphi_\rho-\varphi_\lambda + \varphi_\mu-\varphi_\nu)\varepsilon_{\rho\lambda\mu\nu}+i\sin (\varphi_\rho-\varphi_\lambda+\varphi_\mu -\varphi_\nu)\delta_{\rho\mu}\delta_{\lambda\nu}
\end{equation}
for $ \rho, \lambda, \mu, \nu \in \{0, 1, 2, 3\}$.\\
One can check that one has $\cala_{\mathbf u} =\cala(w_{\mathbf u},2)$ and, as explained in \cite{ac-mdv:2002a}, one has $Q_{w_{\mathbf u}}=-\mbox{\rm 1\hspace {-.6em} l}$ and $\mbox{\rm 1\hspace {-.6em} l} \otimes w_{\mathbf u}$ is a Hochschild 4-cycle ($\in Z_4(\cala_{\mathbf u},\cala_{\mathbf u})$) which is a particular case of the corresponding more general result of Section 5 in the quadratic case. \\
As pointed out in \cite{ac-mdv:2002a}, for generic values of the parameter ${\mathbf u}$ the algebra $\cala_{\mathbf u}$ is isomorphic to the Sklyanin algebra \cite{skl:1982} which has been studied in detail in \cite{smi-sta:1992} from the point of view of regularity.
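It is perhaps worth recording the degenerate case as an elementary consistency check (it is not used in the sequel): when all the $\varphi_\lambda$ are equal, the six relations above reduce to the vanishing of the commutators $[x^\mu,x^\nu]$ and the components reduce to $W_{\rho\lambda\mu\nu}=-\varepsilon_{\rho\lambda\mu\nu}$, so that $\cala_{\mathbf u}$ degenerates into the polynomial algebra with 4 indeterminates, i.e. into $\cala(\varepsilon,2)$ as considered above.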
\section {Semi-cross product (twist)}
In this section we investigate semi-cross products of algebras of type $\cala(w,N)$ by homogeneous automorphisms of degree 0 and we describe the corresponding transformations of the $Q_w$. We first recall the definition and the properties of the semi-cross product using the notations of \cite{ac-mdv:2008}. This notion has been introduced and analyzed in \cite{art-tat-vdb:1991} where it is referred to as twisting and used to reduce and complete the classification of regular algebras of dimension 3 of \cite{art-sch:1987}. In \cite{ac-mdv:2008} this notion was used to reduce the moduli space of noncommutative 3-spheres.\\
Let $\cala=\oplus_{n\in \mathbb N}\cala_n$ be a graded algebra and let $\alpha$ be an automorphism of $\cala$ which is homogeneous of degree 0. The {\sl semi-cross product $\cala(\alpha)$ of $\cala$ by $\alpha$} is the graded vector space $\cala$ equipped with the bilinear product $\bullet_\alpha=\bullet$ defined by
\[
a\bullet b = a \alpha^n(b)
\]
for $a\in \cala_n$ and $b\in \cala$. This product is associative and satisfies $\cala_m\bullet \cala_n\subset \cala_{m+n}$ so $\cala(\alpha)$ is a graded algebra which is unital whenever $\cala$ is unital. The following result is extracted from \cite{art-tat-vdb:1991}.
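Before stating it, notice that the associativity of $\bullet$ claimed above is a direct one-line verification (recorded here for convenience): for $a\in \cala_m$, $b\in \cala_n$ and $c\in \cala$ one has
\[
(a\bullet b)\bullet c=a\,\alpha^m(b)\,\alpha^{m+n}(c)=a\,\alpha^m(b\,\alpha^n(c))=a\bullet (b\bullet c)
\]
since $\alpha$ is a homogeneous algebra automorphism of degree 0 and $a\,\alpha^m(b)\in \cala_{m+n}$.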
\begin{proposition}
Let $\cala$, $\alpha$ and $\cala(\alpha)$ be as above.\\
$(i)$ The global dimensions of $\cala$ and $\cala(\alpha)$ coincide.\\
$(ii)$ Let $\beta$ be an automorphism of $\cala$ which is homogeneous of degree 0 and which commutes with $\alpha$. Then $\beta$ is also canonically an automorphism of $\cala(\alpha)$ and one has
\[
\cala(\alpha)(\beta)=\cala(\alpha\circ \beta)
\]
which implies in particular that $\cala(\alpha)(\alpha^{-1})=\cala$.
\end{proposition}
In fact the category of graded right $\cala$-modules and the category of graded right $\cala(\alpha)$-modules are canonically isomorphic which implies $(i)$. For $(ii)$ we refer to \cite{art-tat-vdb:1991} (see also in
\cite{ac-mdv:2008}). \\
If $\cala$ is $N$-homogeneous then $\cala(\alpha)$ is also $N$-homogeneous and if $R\subset \cala^{\otimes^N}_1$ denotes the space of relations of $\cala$, then the space of relations of $\cala(\alpha)$ is given by \cite{ac-mdv:2008}
\begin{equation}
R(\alpha)=(Id\otimes \alpha^{-1}\otimes \dots \otimes \alpha^{-(N-1)})R
\label{A1}
\end{equation}
with obvious notations. Concerning the stability of the Koszul property and of the Gorenstein property, the following result was proved in \cite{pot:2006}.
\begin{proposition}
Let $\cala$ be an $N$-homogeneous algebra and $\alpha$ be a homogeneous automorphism of degree 0 of $\cala$.\\
$(i)$ $\cala(\alpha)$ is Koszul if and only if $\cala$ is Koszul.\\
$(ii)$ $\cala(\alpha)$ is Koszul of (finite) global dimension $D$ and Gorenstein if and only if $\cala$ is Koszul of global dimension $D$ and Gorenstein.
\end{proposition}
If $\cala$ is an $N$-homogeneous algebra, an automorphism $\alpha$ of degree 0 of $\cala$ is completely specified by its restriction $\alpha\restriction \cala_1$ to $\cala_1$.\\
Let $w$ be a preregular $m$-linear form on $\mathbb K^{s+1}$ with $m\geq N\geq 2$ and let us consider the $N$-homogeneous algebra $\cala=\cala(w,N)$. We denote by $GL_w$ the subgroup of $GL(s+1,\mathbb K)$ of the elements $L\in GL(s+1,\mathbb K)$ which preserve $w$, i.e. such that
\begin{equation}
w\circ L =w
\label{A2}
\end{equation}
It is clear that each $L\in GL_w$ determines an automorphism $\alpha^{(L)}$ of degree 0 of $\cala$ for which $\alpha^{(L)}\restriction \cala_1$ is the transposed $L^t$ of $L$. Furthermore, it follows from (\ref{A1}) and (\ref{A2}) that the semi-cross product of $\cala$ by $\alpha^{(L)}$ is given by
\begin{equation}
\cala(\alpha^{(L)})=\cala(w^{(L)},N)
\label{A3}
\end{equation}
where the components $W^{(L)}_{\lambda_1\dots \lambda_m}$ of the $m$-linear form $w^{(L)}$ are given by
\begin{equation}
W^{(L)}_{\lambda_1\dots\lambda_m}=W_{\lambda_1\lambda'_2\dots\lambda'_m}(L^{-1})^{\lambda'_2}_{\lambda_2} (L^{-2})^{\lambda'_3}_{\lambda_3}\dots
(L^{-(m-1)})^{\lambda'_m}_{\lambda_m}
\label{A4}
\end{equation}
in terms of the components $W_{\mu_1\dots\mu_m}$ of $w$. The $m$-linear form $w^{(L)}$ is again preregular with $Q_{w^{(L)}}$ given by
\begin{equation}
Q_{w^{(L)}}=L^{-1}Q_w L^{-(m-1)}
\label{A5}
\end{equation}
as verified by using (\ref{A4}) and (\ref{A2}).\\
The fact that $\cala(w,N)$ and $\cala(w\circ L,N)$ are isomorphic $N$-homogeneous algebras for $L\in GL(s+1,\mathbb K)$ implies in view of Theorem 11 that, in the study of Koszul-Gorenstein algebras of finite global dimension, one can simplify $Q_w$ by using $Q_w\mapsto L^{-1}Q_wL=Q_{w\circ L}$ ($L\in GL(s+1,\mathbb K)$); e.g. one can assume that $Q_w$ is in Jordan normal form. On the other hand, since the construction of the semi-cross product is very explicit and since it preserves the global dimension (Proposition 12) and the Koszul-Gorenstein property (Proposition 13), it is natural to simplify further $Q_w$ via $Q_w\mapsto L^{-1}Q_wL^{-(m-1)}$ with $L\in GL_w$ (formula (\ref{A5})). In many cases one can find an $m$-th root of $\pm Q_w$ in $GL_w$, that is an element $L\in GL(s+1,\mathbb K)$ such that $w\circ L=w,\ [Q_w,L]=0$ and $L^m=\pm Q_w$. In such a case one can restrict attention to $Q_w=\pm \mbox{\rm 1\hspace {-.6em} l}$, i.e. to $w$ which is $\pm$ cyclic, by semi-cross product (twist). There are however some cases where this is not possible (in fact there are cases where $GL_w$ is a small discrete group).\\
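To spell out the last reduction (a one-line computation included here for convenience): if $L\in GL_w$ satisfies $[Q_w,L]=0$ and $L^m=\pm Q_w$, then formula (\ref{A5}) gives
\[
Q_{w^{(L)}}=L^{-1}Q_wL^{-(m-1)}=Q_wL^{-m}=\pm Q_wQ_w^{-1}=\pm \mbox{\rm 1\hspace {-.6em} l}
\]
so the twisted algebra $\cala(\alpha^{(L)})=\cala(w^{(L)},N)$ is indeed associated with a $\pm$ cyclic multilinear form.\\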
It is worth noticing here that a more general notion of twisted graded algebras which is quite optimal has been introduced and analyzed in \cite{zha:1996}. Several results given here are valid for this more general twisting as pointed out there.
\section{Actions of quantum groups}
Although it is evident that in this section we only need 1-site nondegenerate multilinear forms (see Section 3), we shall stay in the context of Section 5. That is, we let $m$ and $N$ be two integers with $m\geq N\geq 2$ and $w$ be a preregular $m$-linear form on $\mathbb K^{s+1}$ with components $W_{\lambda_1\dots \lambda_m}=w(e_{\lambda_1},\dots,e_{\lambda_m})$ in the canonical basis $(e_\lambda)$ of $\mathbb K^{s+1}$. As pointed out in Section 2 and in more detail in Appendix 2, in the case $m=N=2$, that is when $w$ is a nondegenerate bilinear form $b$, there is a Hopf algebra ${\cal H}(b)$ and a natural coaction of ${\cal H}(b)$ on $\cala(b,2)$ or, in dual terms, there is a quantum group acting on the quantum space corresponding to $\cala(b,2)$. Our aim in this section is to generalize this and to define a Hopf algebra ${\cal H}$ (in fact several generically) which coacts on $\cala(w,N)$ for general $m\geq N\geq 2$. For the cases where the $\cala(w,N)$ are Artin-Schelter regular algebras, there are the closely related works \cite{ewe-ogi:1994} and \cite{pop:2006}. Here however we merely concentrate on the ``$SL$-like'' aspect (instead of the ``$GL$-like'' one). This is also closely related to the quantum $SU$ of \cite{wor:1988} and the quantum $SL$ of \cite{bic:2001}.\\
By the 1-site nondegeneracy property of $w$, there is at least one $m$-linear form $\tilde w$ on the dual of $\mathbb K^{s+1}$ with components $\tilde W^{\lambda_1\dots \lambda_m}$ in the dual basis of $(e_\lambda)$ such that one has
\begin{equation}
\tilde W^{\alpha\gamma_1\dots\gamma_{m-1}}W_{\gamma_1\dots \gamma_{m-1}\beta}= \delta^\alpha_\beta\label{7.1}
\end{equation}
for $\alpha, \beta\in \{0,\dots,s\}$. Let ${\cal H}(w,\tilde w)$ be the unital associative algebra generated by the $(s+1)^2$ elements $u^\alpha_\beta$ ($\alpha,\beta\in \{0,\dots,s\}$) with the relations
\begin{equation}
W_{\alpha_1\dots\alpha_m} u^{\alpha_1}_{\beta_1}\dots u^{\alpha_m}_{\beta_m}=W_{\beta_1\dots \beta_m}\mbox{\rm 1\hspace {-.6em} l}\label{7.2}
\end{equation}
and
\begin{equation}
\tilde W^{\beta_1\dots\beta_m}u^{\alpha_1}_{\beta_1}\dots u^{\alpha_m}_{\beta_m}=\tilde W^{\alpha_1\dots\alpha_m}\mbox{\rm 1\hspace {-.6em} l}\label{7.3}
\end{equation}
where $\mbox{\rm 1\hspace {-.6em} l}$ is the unit of ${\cal H}(w,\tilde w)$. One has the following result.
\begin{theorem}
There is a unique structure of Hopf algebra on ${\cal H}(w,\tilde w)$ with coproduct $\Delta$, counit $\varepsilon$ and antipode $S$ such that
\begin{eqnarray}
\Delta(u^\mu_\nu) & = & u^\mu_\lambda \otimes u^\lambda_\nu\label{7.4}\\
\varepsilon(u^\mu_\nu) & = & \delta^\mu_\nu\label{7.5}\\
S(u^\mu_\nu) & = & \tilde W^{\mu\lambda_1\dots \lambda_{m-1}}u^{\rho_1}_{\lambda_1}\dots u^{\rho_{m-1}}_{\lambda_{m-1}} W_{\rho_1\dots \rho_{m-1}\nu}\label{7.6}
\end{eqnarray}
for $\mu,\nu\in \{0,\dots,s\}$. The product and the unit are the ones of ${\cal H}(w,\tilde w)$.
\end{theorem}
\noindent \underbar{Proof}. The structure of bialgebra with (\ref{7.4}) and (\ref{7.5}) is more or less classical. The fact that $S$ defines an antipode follows from $S(u^\mu_\lambda)u^\lambda_\nu=\delta^\mu_\nu$ and $u^\mu_\lambda S(u^\lambda_\nu)=\delta^\mu_\nu$ which are immediate consequences of (\ref{7.1}), (\ref{7.2}), (\ref{7.3}) and (\ref{7.6}).$\square$
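For the reader's convenience, here is the first of these identities written out (the second one is analogous, using (\ref{7.3}) in place of (\ref{7.2})): by (\ref{7.6}) and (\ref{7.2}),
\[
S(u^\mu_\lambda)u^\lambda_\nu=\tilde W^{\mu\lambda_1\dots \lambda_{m-1}}\,W_{\rho_1\dots \rho_{m-1}\lambda}\,u^{\rho_1}_{\lambda_1}\dots u^{\rho_{m-1}}_{\lambda_{m-1}}u^{\lambda}_{\nu}=\tilde W^{\mu\lambda_1\dots \lambda_{m-1}}W_{\lambda_1\dots \lambda_{m-1}\nu}\,\mbox{\rm 1\hspace {-.6em} l}=\delta^\mu_\nu\,\mbox{\rm 1\hspace {-.6em} l}
\]
where the last equality is (\ref{7.1}).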
\begin{proposition}
There is a unique algebra-homomorphism
\[
\Delta_L:\cala(w,N)\rightarrow {\cal H}(w,\tilde w)\otimes \cala(w,N)
\]
such that
\begin{equation}
\Delta_L(x^\mu)=u^\mu_\nu \otimes x^\nu\label{7.7}
\end{equation}
for $\mu\in \{0,\dots,s\}$. This endows $\cala(w,N)$ with a structure of ${\cal H}(w,\tilde w)$-comodule.
\end{proposition}
\noindent \underbar{Proof}. One has $W_{\alpha_1\dots \alpha_{m-N} \mu_1\dots\mu_N}(u^{\alpha_1}_{\lambda_1}\otimes \mbox{\rm 1\hspace {-.6em} l})\dots (u^{\alpha_{m-N}}_{\lambda_{m-N}}\otimes \mbox{\rm 1\hspace {-.6em} l})\Delta_L x^{\mu_1}\dots \Delta_L x^{\mu_N}= \mbox{\rm 1\hspace {-.6em} l} \otimes W_{\lambda_1\dots \lambda_{m-N}\mu_1\dots \mu_N} x^{\mu_1} \dots x^{\mu_N}=0$. By contracting on the right-hand side with $(S(u^{\lambda_{m-N}}_{\nu_{m-N}})\otimes \mbox{\rm 1\hspace {-.6em} l})\dots (S(u^{\lambda_1}_{\nu_1})\otimes \mbox{\rm 1\hspace {-.6em} l})$, this is equivalent to
\[
W_{\nu_1\dots \nu_{m-N}\mu_1\dots \mu_N}\Delta_L x^{\mu_1}\dots \Delta_Lx^{\mu_N}=0
\]
for $\nu_i\in \{0,\dots,s\}$. The fact that $\Delta_L$ induces a structure of ${\cal H}(w,\tilde w)$-comodule is straightforward.$\square$\\
It is worth noticing that the preregularity of $w$ implies (in view of the $Q_w$-cyclicity) that one has
\begin{equation}
\tilde W^{\lambda\gamma_2\dots \gamma_m}W_{\mu\gamma_2\dots \gamma_m}=(Q^{-1}_w)^\lambda_\mu
\end{equation}
which generalizes a formula valid for $m=N=2$.\\
The dual object of the Hopf algebra ${\cal H}(w,\tilde w)$ is a quantum group which acts on the quantum space corresponding (dual object) to the algebra $\cala(w,N)$.\\
The above theorem and the above proposition correspond to the theorem and the proposition of Appendix 2 for the case $m=N=2$. There is however a notable difference in the cases $m\geq N\geq 2$: for $m=N=2$, that is when $w=b$ is a nondegenerate bilinear form, $\tilde w$ is unique under condition (\ref{7.1}) and coincides with $b^{-1}$ (see Appendix 2), $\tilde w=b^{-1}$. In the general case, given $w$ there are several $\tilde w$ satisfying (\ref{7.1}) and thus several Hopf algebras ${\cal H}(w,\tilde w)$. Some choices of $\tilde w$ can be better than others in the sense that ${\cal H}(w,\tilde w)$ can be bigger or can have a bigger commutative quotient. For instance, in the case of Example 6.3 of Section 6 where $w=\varepsilon$, the natural choice for $\tilde w$ is $\tilde \varepsilon$ with components $\frac{(-1)^s}{s!}\varepsilon^{\lambda_0\dots \lambda_s}$ where $\varepsilon^{\lambda_0\dots \lambda_s}$ is completely antisymmetric with $\varepsilon^{0\ 1\dots s}=1$; in this case, ${\cal H}(\varepsilon, \tilde \varepsilon)$ is commutative \cite{wor:1988} and coincides with the Hopf algebra of representative functions on $SL(s+1,\mathbb K)$.\\
In fact the right quantum group relevant here corresponds to the universal Hopf algebra ${\cal H}(w)$ preserving $w$ which has been recently studied in \cite{bic-mdv:2013} where an explicit finite presentation by generators and relations is given for it. It turns out that for $w=\varepsilon$, ${\cal H}(\varepsilon)$ is not commutative whenever $m\geq 3$ and therefore the canonical Hopf algebra map ${\cal H}(\varepsilon)\rightarrow {\cal H}(\varepsilon, \tilde\varepsilon)$ is not injective for $m\geq 3$ (see \cite{bic-mdv:2013}).
\section{Further prospect}
As pointed out in the introduction it was already shown in \cite{bon-pol:1994} that the quadratic algebras which are Koszul of finite global dimension $D$ and which are Gorenstein are determined by multilinear forms ($D$-linear forms). Furthermore the connection with a generalization of volume forms is also apparent in \cite{bon-pol:1994}. This corresponds to the case $N=2$ of Theorem 11. The argument of \cite{bon-pol:1994} is that the Koszul dual algebra $\cala^!$ of a quadratic algebra $\cala$ which is Koszul of finite global dimension and Gorenstein is a graded Frobenius algebra which is generated in degree 1 and that such an algebra is completely characterized by a multilinear form on the ($D$-dimensional) space of generators $\cala^!_1$ of $\cala^!$. Here, the argument is slightly different and works the other way round (in the quadratic case). Indeed, Theorem 9 combined with Theorem 11 implies that in the quadratic case the Koszul dual $\cala^!$ of a Koszul Gorenstein algebra $\cala$ of finite global dimension is Frobenius. This points however to the interesting observation that for an $N$-homogeneous algebra $\cala$ which is Koszul of finite global dimension and Gorenstein there are two graded Frobenius algebras which can be extracted from $\cala^!$. These two graded Frobenius algebras coincide with $\cala^!$ in the quadratic case but are distinct whenever $N\geq 3$. The first one is the Yoneda algebra $E(\cala)={\mbox{Ext}}^\bullet_\cala(\mathbb K, \mathbb K)$ which can be obtained by truncation from $\cala^!$ as explained in \cite{ber-mar:2006} while the second one is the quotient $\cala^!/{\cal I}$ of Theorem 9. The Yoneda algebra $E(\cala)$ has been considered by many authors as the generalization of the Koszul dual for nonquadratic graded algebras. However the quotient $\cala^!/{\cal I}$ is also of great interest for homogeneous algebras and deserves further attention (see e.g. the discussion at the end of \S 6.3).\\
The converse of Theorem 5, which was stated as a conjecture in the last version of this paper (v3), is unfortunately wrong. One has the following counterexample, which can be found in \cite{art-sch:1987}. Let $\cala$ be the algebra generated by 3 elements $x,y,z$ with relations $x^2+yz=0,\> y^2+zx=0,\> xy=0$. Then $\cala=\cala(w,2)$ with $w$ given by
\[
w=x^{\otimes^3}+y^{\otimes^3}+x\otimes y\otimes z + y\otimes z\otimes x + z\otimes x\otimes y
\]
which is a 3-regular 3-linear form on $\mathbb K^3$. It turns out that $\cala$ is nevertheless not Koszul, see e.g. \cite{mdv:2010}. However, the following results are worth noticing.
\begin{proposition}\label{CK3}
Let $w$ be a preregular $(N+1)$-linear form on $\mathbb K^{s+1}$ and $\cala=\cala(w,N)$. Then the following conditions are equivalent.\\
$(i)$ $w$ is 3-regular\\
$(ii)$ The dual vector space $(\cala^!_{N+1})^\ast$ of $\cala^!_{N+1}$ is given by $(\cala^!_{N+1})^\ast=\mathbb Kw$\\
$(iii)$ The Koszul complex $\calk(\cala,\mathbb K)$ of $\cala$ is the sequence
\[
0\rightarrow \cala\otimes w \stackrel{d}{\longrightarrow} \cala\otimes R \stackrel{d^{N-1}}{\longrightarrow} \cala\otimes E \stackrel{d}{\longrightarrow} \cala \rightarrow 0
\]
$(iiii)$ The Koszul $N$-complex $K(\cala)$ of $\cala$ is the sequence
\[
0\rightarrow \cala\otimes w \stackrel{d}{\longrightarrow} \cala\otimes R \stackrel{d}{\longrightarrow} \cala\otimes E^{\otimes^{N-1}} \stackrel{d}{\longrightarrow} \dots \stackrel{d}{\longrightarrow} \cala\otimes E \stackrel{d}{\longrightarrow}\cala\rightarrow 0
\]
where $E=\oplus_\lambda \mathbb K x^\lambda=\cala_1$, $R=\oplus_\mu\mathbb K W_{\mu\mu_1\dots \mu_N}x^{\mu_1}\otimes \dots \otimes x^{\mu_N}\subset E^{\otimes^N}$ and where, in $(i)$, $(ii)$ and $(iii)$, $w$ is identified with $W_{\mu_0\dots \mu_N} x^{\mu_0}\otimes \dots \otimes x^{\mu_N}\in E^{\otimes^{N+1}}$.
\end{proposition}
\noindent \underbar{Proof}.
\begin{itemize}
\item
$(iii)\Leftrightarrow (iiii)$. This follows from the definitions and from the fact that $\cala^!_{N+2}=0$ implies $\cala^!_n=0$ for $n\geq N+2$ in view of the associativity of the product of $\cala^!$.
\item
$(i)\Leftrightarrow (ii)$. Let $a\in (\cala^!_{N+1})^\ast=(R\otimes E) \cap (E\otimes R)$; then
\[
a=W_{\lambda_0\dots \lambda_{N-1}\rho} L^\rho_{\lambda_N} x^{\lambda_0}\otimes \dots \otimes x^{\lambda_N}=M^\sigma_{\lambda_0}W_{\sigma\lambda_1\dots\lambda_N}x^{\lambda_0}\otimes \dots \otimes x^{\lambda_N}
\]
so one has
\begin{equation}
W_{\lambda_0\dots \lambda_{N-1}\rho} L^\rho_{\lambda_N}=M^\sigma_{\lambda_0}W_{\sigma\lambda_1\dots \lambda_N}, \> \> \> \forall \lambda_i
\label{a}
\end{equation}
and conversely any solution of (\ref{a}) defines an element $a$ of $(\cala^!_{N+1})^\ast$.
By the preregularity property (twisted cyclicity) of $w$, (\ref{a}) is equivalent to
\begin{equation}
(Q^{-1}_w)_{\lambda_0}^\alpha L^\beta_\alpha (Q_w)^\tau_\beta W_{\tau\lambda_1\dots \lambda_N}=M^\sigma_{\lambda_1} W_{\lambda_0\sigma \lambda_2\dots \lambda_N},\>\>\> \forall \lambda_i
\label{b}
\end{equation}
and $a=kw$ $(k\in \mathbb K)$ is then equivalent (1-site nondegeneracy) to $L=M=k\mbox{\rm 1\hspace {-.6em} l}$ (or equivalently $Q_wLQ_w^{-1}=M=k\mbox{\rm 1\hspace {-.6em} l}$). Since $a\in (R\otimes E)\cap (E\otimes R)$ is arbitrary, this implies $(i)\Leftrightarrow (ii)$.
\item
$(iiii)\Rightarrow (ii)$. This follows from $K_{N+1}(\cala)=\cala\otimes (\cala^!_{N+1})^\ast$.
\item
$(i)\Rightarrow (iii)$. In order to complete the proof it is sufficient to show that if $w$ is 3-regular then $(\cala^!_{N+2})^\ast=0$.
\end{itemize}
So assume now that $w$ is 3-regular and let $a\in (\cala^!_{N+2})^\ast=(E^{\otimes^2}\otimes R)\cap (E\otimes R\otimes E)\cap (R\otimes E^{\otimes^2})$. One has $a=A^\lambda_{\lambda_0\lambda_1}W_{\lambda\lambda_2\dots \lambda_{N+1}}x^{\lambda_0}\otimes \dots \otimes x^{\lambda_{N+1}}$ with
\begin{equation}
A^\lambda_{\lambda_0 \lambda_1} W_{\lambda\lambda_2\dots \lambda_{N+1}}=B^\rho_{\lambda_0\lambda_{N+1}}W_{\rho\lambda_1\dots \lambda_N}
\label{c}
\end{equation}
\begin{equation}
B^\rho_{\lambda_0\lambda_{N+1}}W_{\rho\lambda_1\dots \lambda_N}=C^\sigma_{\lambda_N\lambda_{N+1}}W_{\sigma\lambda_0\dots \lambda_{N-1}}
\label{d}
\end{equation}
for any $\lambda_i$. By the 3-regularity of $w$, equation (\ref{c}) implies
\[
A^\lambda_{\nu\mu}=\left(Q^{-1}_w\right)^\tau_\mu B^\lambda_{\nu\tau}=K_\nu \delta^\lambda_\mu
\]
while equation (\ref{d}) implies
\[
B^\lambda_{\nu\mu} = \left(Q^{-1}_w\right)^\tau_\nu C^\lambda_{\tau\mu}=L_\mu \delta^\lambda_\nu
\]
and therefore one has
\begin{equation}
K_{\lambda_0}W_{\lambda_1\dots \lambda_{N+1}}=W_{\lambda_0\dots \lambda_N} L_{\lambda_{N+1}}
\label{(e)}
\end{equation}
for any $\lambda_i$. Since $w$ is 1-site nondegenerate this implies $K=L=0$ so $A=B=C=0$ and therefore $a=0$. Thus if $w$ is 3-regular then $(\cala^!_{N+2})^\ast=0$.$\square$
\begin{corollary}
Let $w$ be a 3-regular $(N+1)$-linear form on $\mathbb K^{s+1}$ and assume that the $N$-homogeneous algebra $\cala=\cala(w,N)$ is Koszul. Then $\cala$ is Koszul of global dimension 3 and is Gorenstein.
\end{corollary}
\noindent \underbar{Proof}. From the last proposition it follows that one has the (Koszul) minimal projective resolution
\[
0\rightarrow \cala\otimes w \stackrel{d}{\rightarrow} \cala\otimes R \stackrel{d^{N-1}}{\longrightarrow} \cala\otimes E \stackrel{d}{\rightarrow} \cala \rightarrow \mathbb K \rightarrow 0
\]
of the trivial left $\cala$-module $\mathbb K$. The Gorenstein property is then equivalent to the twisted cyclicity of $w$ (property $(ii)$ of Definition 2); this is the same argument as the one used in \cite{art-sch:1987}. Another way to prove this result is to use Corollary 5.12 of \cite{ber-mar:2006} since it is clear that the 1-site nondegeneracy property of $w$ implies here that the Yoneda algebra $E(\cala)$ is Frobenius.~$\square$\\
In fact, assuming here that $\cala(w,N)$ is of global dimension 3 is the same as assuming that it is Koszul, and one has the following result.
\begin{theorem}
Let $\cala$ be a connected graded algebra which is finitely generated in degree 1 and finitely presented with relations of degree $\geq 2$. Then $\cala$ has global dimension $D=3$ and is Gorenstein if and only if it is Koszul of the form $\cala=\cala(w,N)$ for some 3-regular $(N+1)$-linear form $w$ on $\mathbb K^{s+1}$ $(s+1={\mbox{dim}} \cala_1)$.
\end{theorem}
It is possible to give higher-dimensional generalizations of the 3-regularity, namely $D$-regularity for (preregular) $D$-linear forms ($D\geq N=2$) and\linebreak[4] $(2q+1)$-regularity for (preregular) $(Nq+1)$-linear forms ($N\geq 2$). However, the cases $D=4$ ($N=2$) and $D=5$ are already very cumbersome.
\subsubsection*{Acknowledgements}
It is a pleasure to thank Roland Berger and Alain Connes for their kind interest and for their suggestions as well as, for this version, Andrea Solotar, Paul Smith and Michel Van den Bergh for their critical comments and advice.
\newpage
\section*{Appendix 1: Homogeneous algebras}
\setcounter{section}{10}
\setcounter{equation}{0}
A {\sl homogeneous algebra of degree $N$ or $N$-homogeneous algebra} is an algebra of the form
\[
\cala = A(E,R)=T(E)/(R)
\]
where $E$ is a finite-dimensional vector space, $R$ is a linear subspace of $E^{\otimes^N}$ and where $(R)$ denotes the two-sided ideal of the tensor algebra $T(E)$ of $E$ generated by $R$. The algebra $\cala$ is naturally a connected graded algebra with graduation induced by the one of $T(E)$. To $\cala$ is associated another $N$-homogeneous algebra, {\sl its dual} $\cala^!=A(E^\ast, R^\perp)$ with $E^\ast$ denoting the dual vector space of $E$ and $R^\perp\subset E^{\otimes^N\ast}=E^{\ast \otimes^N}$ being the annihilator of $R$, \cite{ber-mdv-wam:2003}. The $N$-complex $K(\cala)$ of left $\cala$-modules is then defined to be
\begin{equation}
\dots \stackrel{d}{\rightarrow} \cala\otimes \cala^{!\ast}_{n+1} \stackrel{d}{\rightarrow} \cala\otimes \cala^{!\ast}_{n}\stackrel{d}{\rightarrow} \dots \stackrel{d}{\rightarrow}\cala \rightarrow 0
\label{eq7.1}
\end{equation}
where $\cala^{!\ast}_n$ is the dual vector space of the finite-dimensional vector space $\cala^!_n$ of the elements of degree $n$ of $\cala^!$ and where $d:\cala\otimes \cala^{!\ast}_{n+1}\rightarrow \cala\otimes \cala^{!\ast}_n$ is induced by the map $a\otimes (e_1\otimes \dots \otimes e_{n+1})\mapsto ae_1 \otimes (e_2\otimes \dots \otimes e_{n+1})$ of $\cala\otimes E^{\otimes^{n+1}}$ into $\cala\otimes E^{\otimes^n}$, remembering that $\cala^{!\ast}_n\subset E^{\otimes^n}$ (see \cite{ber-mdv-wam:2003}). This $N$-complex will be referred to as the {\sl Koszul $N$-complex of} $\cala$. In (\ref{eq7.1}) the factors $\cala$ are considered as left $\cala$-modules. By considering $\cala$ as right $\cala$-module and by exchanging the factors one obtains the $N$-complex $\tilde K(\cala)$ of right $\cala$-modules
\begin{equation}
\dots \stackrel{\tilde d}{\rightarrow} \cala^{!\ast}_{n+1} \otimes \cala \stackrel{\tilde d}{\rightarrow} \cala^{!\ast}_n\otimes \cala \stackrel{\tilde d}{\rightarrow} \dots \stackrel{\tilde d}{\rightarrow} \cala \rightarrow 0
\label{eq7.2}
\end{equation}
where now $\tilde d$ is induced by $(e_1\otimes \dots \otimes e_{n+1})\otimes a \mapsto (e_1\otimes \dots \otimes e_n)\otimes e_{n+1}a$. Finally one defines two $N$-differentials $d_{{\mathbf L}}$ and $d_{{\mathbf R}}$ on the sequence of $(\cala,\cala)$-bimodules, i.e. of left $\cala\otimes \cala^{opp}$-modules, $(\cala\otimes \cala^{!\ast}_n \otimes \cala)_{n\geq 0}$ by setting $d_{\mathbf L} =d\otimes I_\cala$ and $d_{\mathbf R} = I_\cala\otimes \tilde d$ where $I_\cala$ is the identity mapping of $\cala$ onto itself. For each of these $N$-differentials $d_{\mathbf L}$ and $d_{\mathbf R}$ the sequences
\begin{equation}
\dots \stackrel{d_{\mathbf L},d_{\mathbf R}}{\rightarrow} \cala \otimes \cala^{!\ast}_{n+1}\otimes \cala \stackrel{d_{\mathbf L},d_{\mathbf R}}{\rightarrow} \cala\otimes \cala^{!\ast}_n \otimes \cala \stackrel{d_{\mathbf L}, d_{\mathbf R}}{\rightarrow}\dots
\label{eq7.3}
\end{equation}
are $N$-complexes of left $\cala\otimes \cala^{opp}$-modules and one has
\begin{equation}
d_{\mathbf L} d_{\mathbf R} = d_{\mathbf R} d_{\mathbf L}
\label{eq7.4}
\end{equation}
which implies that
\begin{equation}
d^N_{\mathbf L} -d^N_{\mathbf R} = (d_{\mathbf L} -d_{\mathbf R})\left ( \sum^{N-1}_{p=0} d^p_{\mathbf L} d^{N-p-1}_{\mathbf R}\right) = \left( \sum^{N-1}_{p=0} d^p_{\mathbf L} d^{N-p-1}_{\mathbf R}\right) (d_{\mathbf L} -d_{\mathbf R}) =0
\label{eq7.5}
\end{equation}
in view of $d^N_{\mathbf L}=d^N_{\mathbf R}=0$.\\
As for any $N$-complex \cite{mdv:1998a} one obtains from $K(\cala)$ ordinary complexes $C_{p,r}(K(\cala))$, {\sl the contractions of} $K(\cala)$, by putting together alternately $p$ and $N-p$ arrows $d$ of $K(\cala)$. Explicitly, $C_{p,r}(K(\cala))$ is given by
\begin{equation}
\dots \stackrel{d^{N-p}}{\rightarrow} \cala \otimes \cala^{!\ast}_{Nk+r}\stackrel{d^p}{\rightarrow} \cala\otimes \cala^{!\ast}_{Nk-p+r}\stackrel{d^{N-p}}{\rightarrow}\cala\otimes \cala^{!\ast}_{N(k-1)+r}\stackrel{d^p}{\rightarrow}\dots
\label{eq7.6}
\end{equation}
for $0\leq r< p\leq N-1$ \cite{ber-mdv-wam:2003}. These are here chain complexes of free left $\cala$-modules. As shown in \cite{ber-mdv-wam:2003} the complex $C_{N-1,0}(K(\cala))$ coincides with the {\sl Koszul complex} of \cite{ber:2001a}; this complex will be denoted by $\calk(\cala,\mathbb K)$ in the sequel. That is, one has
\begin{equation}
\calk_{2m}(\cala,\mathbb K)=\cala\otimes\cala^{!\ast}_{Nm},\ \ \ \calk_{2m+1}(\cala,\mathbb K)=\cala\otimes \cala^{!\ast}_{Nm+1}
\label{eq7.7}
\end{equation}
for $m\geq 0$, and the differential is $d^{N-1}$ on $\calk_{2m}(\cala,\mathbb K)$ and $d$ on $\calk_{2m+1}(\cala,\mathbb K)$. If $\calk(\cala, \mathbb K)$ is acyclic in positive degrees then $\cala$ will be said to be a {\sl Koszul algebra}. It was shown in \cite{ber:2001a} and this was confirmed by the analysis of \cite{ber-mdv-wam:2003} that this is the right generalization for $N$-homogeneous algebras of the usual notion of Koszulity for quadratic algebras \cite{man:1987}, \cite{lod:1999}. One always has $H_0(\calk(\cala,\mathbb K))\simeq \mathbb K$ and therefore if $\cala$ is Koszul, then one has a free resolution $\calk(\cala,\mathbb K)\rightarrow \mathbb K \rightarrow 0$ of the trivial left $\cala$-module $\mathbb K$, that is the exact sequence
\begin{equation}
\dots \stackrel{d^{N-1}}{\rightarrow} \cala \otimes \cala^{!\ast}_{N+1}\stackrel{d}{\rightarrow}\cala\otimes R\stackrel{d^{N-1}}{\rightarrow}\cala\otimes E\stackrel{d}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\label{eq7.8}
\end{equation}
of left $\cala$-modules where $\varepsilon$ is the projection on degree zero. This resolution is a minimal projective resolution of the $\cala$-module $\mathbb K$ in the graded category \cite{ber:2005} which will be referred to as the {\sl Koszul resolution of $\mathbb K$}.\\
One defines now the chain complex of free $\cala\otimes\cala^{opp}$-modules $\calk(\cala,\cala)$ by setting
\begin{equation}
\calk_{2m}(\cala,\cala)=\cala\otimes \cala^{!\ast}_{Nm}\otimes \cala,\ \ \ \calk_{2m+1}(\cala,\cala)=\cala\otimes \cala^{!\ast}_{Nm+1}\otimes \cala
\label{eq7.9}
\end{equation}
for $m\in \mathbb N$ with differential $\delta'$ defined by
\begin{equation}
\delta'=d_{\mathbf L} -d_{\mathbf R} :\calk_{2m+1}(\cala,\cala)\rightarrow \calk_{2m}(\cala,\cala)
\label{eq7.10}
\end{equation}
\begin{equation}
\delta'=\sum^{N-1}_{p=0} d^p_{\mathbf L} d^{N-p-1}_{\mathbf R}:\calk_{2(m+1)}(\cala,\cala)\rightarrow \calk_{2m+1}(\cala,\cala)
\label{eq7.11}
\end{equation}
the property $\delta^{\prime 2}=0$ following from (\ref{eq7.5}). {\sl This complex is acyclic in positive degrees if and only if $\cala$ is Koszul}, that is if and only if $\calk(\cala,\mathbb K)$ is acyclic in positive degrees, \cite{ber:2001a} and \cite{ber-mdv-wam:2003}. One always has the obvious exact sequence
\begin{equation}
\cala\otimes E\otimes \cala \stackrel{\delta'}{\rightarrow} \cala\otimes \cala \stackrel{\mu}{\rightarrow}\cala\rightarrow 0
\label{eq7.12}
\end{equation}
of left $\cala\otimes \cala^{opp}$-modules where $\mu$ denotes the product of $\cala$. It follows that if $\cala$ is a Koszul algebra then $\calk(\cala,\cala)\stackrel{\mu}{\rightarrow}\cala\rightarrow 0$ is a free resolution of the $\cala\otimes \cala^{opp}$-module $\cala$ which will be referred to as {\sl the Koszul resolution of} $\cala$. This is a minimal projective resolution of the $\cala\otimes \cala^{opp}$-module $\cala$ in the graded category \cite{ber:2005}.\\
Let $\cala$ be a Koszul algebra and let $\calm$ be a $(\cala,\cala)$-bimodule considered as a right $\cala\otimes \cala^{opp}$-module. Then, by interpreting the $\calm$-valued Hochschild homology $H(\cala,\calm)$ as $H_n(\cala,\calm)={\mbox{Tor}}_n^{\cala\otimes \cala^{opp}}(\calm,\cala)$ \cite{car-eil:1973}, the complex $\calm \otimes_{\cala\otimes \cala^{opp}}\calk(\cala,\cala)$ computes the $\calm$-valued Hochschild homology of $\cala$ (i.e. its homology is the ordinary $\calm$-valued Hochschild homology of $\cala$). We shall refer to this complex as {\sl the small Hochschild complex of} $\cala$ with coefficients in $\calm$ and denote it by ${\cal S}(\cala, \calm)$. It reads
\begin{equation}
\dots \stackrel{\delta}{\rightarrow}\calm\otimes \cala^{!\ast}_{N(m+1)}\stackrel{\delta}{\rightarrow} \calm \otimes \cala^{!\ast}_{Nm+1}\stackrel{\delta}{\rightarrow}\calm\otimes \cala^{!\ast}_{Nm} \stackrel{\delta}{\rightarrow}\dots
\label{eq7.13}
\end{equation}
where $\delta$ is obtained from $\delta'$ by applying the factors $d_{\mathbf L}$ to the right of $\calm$ and the factors $d_{\mathbf R}$ to the left of $\calm$.\\
Assume that $\cala$ is a Koszul algebra of finite global dimension $D$. Then the Koszul resolution of $\mathbb K$ has length $D$, i.e. $D$ is the largest integer such that $\calk_D(\cala,\mathbb K)\not=0$. By construction, $D$ is also the greatest integer such that $\calk_D(\cala,\cala)\not= 0$ so the free $\cala\otimes \cala^{opp}$-module resolution of $\cala$ has also length $D$. Thus one verifies in this case the general statement of \cite{ber:2005} namely that the global dimension is equal to the Hochschild dimension. Applying then the functor ${\mbox{Hom}}_\cala(\bullet, \cala)$ to $\calk(\cala,\mathbb K)$ one obtains the cochain complex ${\cal L}(\cala,\mathbb K)$ of free right $\cala$-modules
\[
0\rightarrow {\cal L}^0(\cala,\mathbb K)\rightarrow \dots \rightarrow {\cal L}^D(\cala,\mathbb K) \rightarrow 0
\]
where ${\cal L}^n(\cala,\mathbb K)={\mbox{Hom}}_\cala(\calk_n(\cala,\mathbb K),\cala)$. The Koszul algebra $\cala$ is {\sl Gorenstein} iff $H^n({\cal L}(\cala,\mathbb K))=0$ for $n<D$ and $H^D({\cal L}(\cala,\mathbb K))=\mathbb K$ (= the trivial right $\cala$-module). This is clearly a generalization of the classical Poincar\'e duality and this implies a precise form of Poincar\'e duality between Hochschild homology and Hochschild cohomology \cite{ber-mar:2006}, \cite{vdb:1998}, \cite{vdb:2002}. In the case of the Yang-Mills algebra and its deformations which are Koszul Gorenstein cubic algebras of global dimension 3, this Poincar\'e duality gives isomorphisms
\begin{equation}
H_k(\cala,\calm)= H^{3-k}(\cala,\calm),\>\>\> k\in \{0,1,2,3\}
\label{Pd}
\end{equation}
between the Hochschild homology and the Hochschild cohomology with coefficients in a bimodule $\calm$. This follows from the fact that in these cases one has $Q_w=\mbox{\rm 1\hspace {-.6em} l}$.
\section*{Appendix 2: The quantum group of a nondegenerate bilinear form}
Let $b$ be a nondegenerate bilinear form on $\mathbb K^{s+1}$ with components $B_{\mu\nu}=b(e_\mu,e_\nu)$ in the canonical basis $(e_\lambda)_{\lambda\in\{0,\dots,s\}}$. The matrix elements $B^{\mu\nu}$ of the inverse $B^{-1}$ of the matrix $B=(B_{\mu\nu})$ of components of $b$ are the components of a nondegenerate bilinear form $b^{-1}$ on the dual vector space of $\mathbb K^{s+1}$ in the dual basis of $(e_\lambda)$. Let ${\cal H}(b)$ be the unital associative algebra generated by the $(s+1)^2$ elements $u^\mu_\nu$ ($\mu,\nu\in \{0,\dots,s\}$) with the relations
\begin{equation}
B_{\alpha\beta} u^\alpha_\mu u^\beta_\nu=B_{\mu\nu}\mbox{\rm 1\hspace {-.6em} l}\ \ \ (\mu,\nu\in \{0,\dots,s\})
\end{equation}
and
\begin{equation}
B^{\mu\nu} u^\alpha_\mu u^\beta_\nu = B^{\alpha\beta}\mbox{\rm 1\hspace {-.6em} l}\ \ \ (\alpha,\beta \in \{0,\dots,s\})
\end{equation}
where $\mbox{\rm 1\hspace {-.6em} l}$ denotes the unit of ${\cal H}(b)$. One has the following \cite{mdv-lau:1990}.
\begin{theorem}
There is a unique structure of Hopf algebra on ${\cal H}(b)$ with coproduct $\Delta$, counit $\varepsilon$ and antipode $S$ such that
\begin{eqnarray}
\Delta (u^\mu_\nu) & = & u^\mu_\lambda \otimes u^\lambda_\nu\\
\varepsilon (u^\mu_\nu) & = & \delta^\mu_\nu\\
S(u^\mu_\nu) & = & B^{\mu\alpha}B_{\beta\nu} u^\beta_\alpha
\end{eqnarray}
for $\mu,\nu\in \{0,\dots,s\}$.
The product and the unit being the ones of ${\cal H}(b)$.
\end{theorem}
The proof is straightforward and the dual object of the Hopf algebra ${\cal H}(b)$ is called {\sl the quantum group of the nondegenerate bilinear form $b$}; ${\cal H}(b)$ corresponds to the Hopf algebra of ``representative functions'' on this quantum group.
\begin{proposition}
Let $\cala=\cala(b,2)$ be the $($quadratic$)$ algebra of Section 2 and ${\cal H}={\cal H}(b)$ be the above Hopf algebra. There is a unique algebra-homomorphism $\Delta_L:\cala\rightarrow {\cal H}\otimes \cala$ such that
\begin{equation}
\Delta_L(x^\lambda)=u^\lambda_\mu \otimes x^\mu
\end{equation}
for $\lambda\in\{0,\dots,s\}$. This endows $\cala$ with a structure of ${\cal H}$-comodule.
\end{proposition}
Thus the quantum group of $b$ ``acts'' on the quantum space corresponding to $\cala$.\\
Let $q\in \mathbb K$ with $q\not=0$ be such that
\begin{equation}
B^{\alpha\beta} B_{\alpha\beta} + q + q^{-1}=0
\label{eq8.7}
\end{equation}
then the linear endomorphisms $R_\pm$ of $\mathbb K^{s+1}\otimes \mathbb K^{s+1}$ defined by
\begin{equation}
(R_+)^{\alpha\beta}_{\mu\nu}=\delta^\alpha_\mu \delta^\beta_\nu+qB^{\alpha\beta}B_{\mu\nu},\ \ (R_-)^{\alpha\beta}_{\mu\nu}=\delta^\alpha_\mu \delta^\beta_\nu+q^{-1} B^{\alpha\beta} B_{\mu\nu}
\end{equation}
satisfy the Yang-Baxter relation
\begin{equation}
(R_\pm \otimes \mbox{\rm 1\hspace {-.6em} l})(\mbox{\rm 1\hspace {-.6em} l} \otimes R_\pm)(R_\pm\otimes \mbox{\rm 1\hspace {-.6em} l})=(\mbox{\rm 1\hspace {-.6em} l} \otimes R_\pm) (R_\pm\otimes \mbox{\rm 1\hspace {-.6em} l})(\mbox{\rm 1\hspace {-.6em} l} \otimes R_\pm): (\mathbb K^{s+1})^{\otimes^3}\rightarrow (\mathbb K^{s+1})^{\otimes^3}
\end{equation} and $(R_+-1)(R_++q^2)=0$, $(R_--1)(R_-+q^{-2})=0$.\\
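A quick way to see these last two relations (a sketch; the notation $P$ is introduced here only for this verification): writing $P^{\alpha\beta}_{\mu\nu}=B^{\alpha\beta}B_{\mu\nu}$, so that $R_\pm=\mbox{\rm 1\hspace {-.6em} l}+q^{\pm 1}P$, relation (\ref{eq8.7}) gives $P^2=(B^{\alpha\beta}B_{\alpha\beta})P=-(q+q^{-1})P$ and therefore
\[
(R_+-1)(R_++q^2)=qP\bigl((1+q^2)+qP\bigr)=\bigl(q(1+q^2)-q^2(q+q^{-1})\bigr)P=0
\]
and similarly for $R_-$.\\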
Let $\varepsilon_q$ ($q\not=0$) be the nondegenerate bilinear form on $\mathbb K^2$ with matrix of components $\left ( \begin{array}{cc}
0 & -1\\
q & 0
\end{array} \right)$. Then $\cala(\varepsilon_q,2)=\cala_q$ corresponds to the Manin plane (see in Section 2) whereas ${\cal H}(\varepsilon_q)={\cal H}_q$ corresponds to the quantum group $SL_q(2,\mathbb K)$.\\
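As a small consistency check (included only as an illustration), for $b=\varepsilon_q$ one has $B^{-1}=\left(\begin{array}{cc} 0 & q^{-1}\\ -1 & 0 \end{array}\right)$, so that
\[
B^{\alpha\beta}B_{\alpha\beta}=q^{-1}\cdot(-1)+(-1)\cdot q=-(q+q^{-1})
\]
and $(\ref{eq8.7})$ is satisfied by $q$ itself (as well as by $q^{-1}$, since $(\ref{eq8.7})$ is quadratic).\\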
One has the following result of \cite{bic:2003b} concerning the representations of the quantum group of the nondegenerate bilinear form $b$ on $\mathbb K^{s+1}$.
\begin{theorem}
Let $b$ be a nondegenerate bilinear form on $\mathbb K^{s+1}$ and let $q\in \mathbb K\backslash \{0\}$ be defined by $(\ref{eq8.7})$. Then the category of comodules on ${\cal H}(b)$ is equivalent to the category of comodules on ${\cal H}(\varepsilon_q)={\cal H}_q$.
\end{theorem}
In other words, in the dual picture, the category of representations of the quantum group of the nondegenerate bilinear form $b$ is equivalent to the category of representations of the quantum group $SL_q(2,\mathbb K)$ with $q$ given by (\ref{eq8.7}).
\section{Introduction}
\label{sec:intro}
Correlation and interference distinguish quantum from classical physics. The former is manifest in the measurement of many-body coincidences predicted by a quantum joint probability density function (PDF). Some observable correlations cannot be realized classically \cite{bell}. Quantum interference is most familiar as a one-body PDF for an outcome that can be achieved in at least two indistinguishable ways. However, it can also be generated by superposing many-body states in indistinguishable ways \cite{Gottfried}.
A many-body interferometer is more difficult to treat since it is possible to measure correlations in the probability of finding each particle at different positions {\em and} times. It differs from a one-body interferometer primarily in its beamsplitting mechanism. Here, correlation and interference are combined in asynchronous correlation interferometry of a bipartite system, using measurement of one substate as the beamsplitter for the other.
Experimental confirmation of quantum correlation has involved photons \cite{Salart,Gröblacher}, atoms \cite{Wineland}, and Josephson phase qubits \cite{Ansmann}. However, there is little experimental evidence of correlated interference between massive particles.
Analysis of correlated systems usually begins with an expression for the many-body state after it has been prepared by an interaction. For bipartite systems this is routinely done with photon pairs from parametric downconversion. The extremes of either a few or a large number of particles are most often treated. In the former case the analysis typically involves a simultaneous correlation between the positions \cite{Gottfried} or angular momenta of two particles \cite{aspect,bohm}. The formalism often deals only with identical particles. Here a mechanism is described which generates interferometric correlations between distinguishable particles, which can be of disparate masses.
The issue of non-simultaneous correlations is explored here in the context of perhaps the simplest quantum correlation possible: a particle reflecting elastically from a mirror, both of which are distinguishable. The mirror and particle have non-zero rest mass and motion is in free space along one dimension, with all states unbound. Measurements of particle reflection, but not associated with correlated interference, have involved mirrors that reflect atoms \cite{kouznetsov} and Bose-Einstein condensates \cite{pasquini}, atoms reflecting from a solid surface \cite{shimizu}, neutrons \cite{hils} and atoms \cite{colombe} reflecting from vibrating mirrors, and atoms reflecting from a switchable mirror \cite{szriftgiser}.
Consider first the particle and mirror in uncorrelated eigenstates of energy before reflection, referred to as the `incident harmonic state'. The energy eigenstate after reflection, referred to as the `reflected harmonic state', results in correlation between the particle and mirror via conservation of energy and momentum: an energy or momentum measurement of the reflected particle yields a correlated energy or momentum measurement of the mirror when given the incident harmonic state. Superposing such states yields the incident and reflected particle-mirror wavegroups. These differ from the harmonic states in that a measurement of the energy (or position) of the particle does not uniquely constrain the energy (or position) of the mirror since the reflected particle-mirror wavegroup is not in an eigenstate of the energy (or position) operator.
Such incident and reflected wavegroup particle-mirror states interfere when they overlap. This is similar to the transient one-body interference of an electromagnetic wavegroup reflecting from a stationary mirror \cite{Wiener}. However, classically the mirror experiences only a continuous force due to radiation pressure.
Quantum mechanically, interference occurs since the incident and reflected states are indistinguishable for a measurement of position (but not for a momentum measurement). Interference is expected between the incident and reflected particle substates {\em along with} interference of the mirror substates which have and have not reflected the particle. Their correlation is perhaps not expected, being a consequence of the solution to the Schr\"odinger equation from which a joint PDF is constructed. The correlations in the two-body interference are manifest as coincidence rates, e.g. a correlation in the {\em simultaneous} measurement of particle and mirror positions.
This two-body wavefunction is then modified to incorporate predictions for measurement of the particle first and then later that of the mirror, using the Copenhagen interpretation of measurement in quantum mechanics. The resulting PDF is a function both of the different particle positions and different times at which each is measured. An assumption used is that between the times of the two measurements, there is neither interaction between the particle and mirror nor with the environment.
The focus of the discussion is on asynchronous correlated interference. In this case, a measurement of only the particle in the correlated interference region splits the mirror substate into ones which have and have not reflected the particle. Later measurement of the mirror reveals this interference.
Fig. \ref{fig:overview} is a schematic representation of this state splitting process. Actual two-body PDFs are shown in later sections. The particle PDF before interaction is the Gaussian at $t=0$, moving to the right at speed $v$, while the mirror is represented as the black rectangle, moving to the right at speed $V<v$. At $t=t_{1}=\tau$ interference between the incident and reflected particle substates is shown as oscillations in its PDF while interference between the mirror substates which have and have not reflected the particle is represented by the rectangular checkerboard pattern rather than the solid rectangle. This then is a region of correlated interference, although in this schematic the correlation cannot be illustrated. For simplicity, speeds are not shown in this and the next snapshot.
Measurement of the particle but not the mirror is represented by the `photon' coming from below, interacting with the particle, and then being measured by the detector above at time $t_{1}$. More details on this aspect of the process are given in section \ref{sec:measurementtheory}.
The lower two schematics illustrate the states with and without this measurement, using the lower and upper halves of each schematic, respectively. Without measurement, the particle wavegroup reflects from the mirror and continues to move to the right with reduced speed while the speed of the mirror increases. Without measurement, the correlated interference disappears when the wavegroups no longer overlap as illustrated when $t=2\tau$. In addition, only the mirror state which reflects the particle survives (as is illustrated in the upper half of the $t=2\tau$ schematic). This is a consequence of the complete particle wavegroup having reflected.
A noise-free and non-destructive measurement of only the particle, however, collapses the particle substate as shown in the lower portion of the $t=\tau+\Delta$ schematic, with $\Delta\ll\tau$. Yet the mirror remains in a superposition state. At time $t=2\tau$ the two mirror states have separated a distance greater than the wavegroup size due to their different speeds and are then represented by solid rectangles. Although there is then no interference, the mirror remains in a superposition of having both reflected and not reflected the particle for all later times. This splitting is similar to a beamsplitter producing photon states which traverse spatially separate and therefore distinguishable paths. Finally, it should be noted that the particle state after measurement spreads, as shown in the lower portion of the $t=2\tau$ schematic. It is assumed that this spreading wavegroup does not interact with the split mirror states for times $t>\tau$.
\begin{center}
\begin{figure}
\includegraphics[scale=0.29]{overview.pdf}
\caption{Asynchronous measurement schematic. A particle wavegroup reflects from a moving mirror. The `mirror' is represented as the black rectangle, moving to the right. The effects of measuring the particle in the interference region are contrasted with no such measurement in the bottom two schematics.}
\label{fig:overview}
\end{figure}
\end{center}
To determine the consequences of such measurements, a two-body solution of the Schr\"odinger equation for simultaneous measurement is first obtained for the particle-mirror system by applying standard techniques to derive the energy eigenstates. This is used to construct the asynchronous two-body wavefunction. Wavegroups are formed from a superposition of these solutions. Particular emphasis is given to correlated interference, or the overlap region of the incident and reflected particle-mirror wavegroups. A marginal PDF is illustrated, using an example of interference for the particle when the mirror is not measured, which yields an analog of the Doppler effect for the interference of light in retro-reflection from a moving mirror. The remainder of the paper deals predominantly with issues which illustrate the unique properties of asynchronous correlation interferometry, which are most easily understood using the example of a microscopic particle reflecting from a mesoscopic/macroscopic mirror. This example is not intended as a practical experimental proposal. Rather, observation of asynchronous correlations will more likely first occur with a microscopic particle reflecting from a ``microscopic'' mirror.
\section{Particle reflecting from a mirror}
\label{sec:Theory}
\subsection{Simultaneous measurement}
\label{sec:theoryparticlemirror}
The particle-mirror interaction is modeled as a moving delta function potential where reflection is assumed to occur at the center of mass of the mirror with the Schr\"odinger equation given by
\begin{equation}
\left(\hbar^{2} \partial_{x_{1}}^{2}/2m+\hbar^{2} \partial_{x_{2}}^{2}/2M-\beta \delta[x_{1}-x_{2}]+i\hbar\partial_{t}\right)\Psi=0,
\label{eq:Schreqn}
\end{equation}
where square brackets are used to indicate the argument of a function and $x_{1}$ and $x_{2}$ are the particle and mirror positions along the x-axis. The mirror reflectivity, related to $\beta$, goes to infinity for a lossless mirror.
The standard separable solution to this equation results from a center of mass (cm) and relative (rel) transformation of the particle-mirror system (not to be confused with the cm of the particle or mirror). This does not change the total energy $E=(\hbar K)^{2}/(2M)+(\hbar k)^{2}/(2m)=E_{rel}+ E_{cm}$, where $k$ and $K$ are the wavevectors for the particle and mirror with $k=m v/\hbar$, $K=M V/\hbar$, masses $m$, $M$, and initial velocities $v$ and $V$, respectively. The transformed Schr\"odinger equation becomes
\begin{equation}
\left(\frac{\hbar^{2} \partial_{x_{cm}}^{2}}{2M_{tot}}+\frac{\hbar^{2} \partial_{x_{rel}}^{2}}{2 \mu}-\beta \delta[x_{rel}]+i\hbar\partial_{t}\right)\Psi[x_{cm},x_{rel},t]=0
\label{eq:Scheqncm}
\end{equation}
where $M_{tot}=m+M$, $\mu=mM/(m+M)$, $x_{cm}=(mx_{1}+Mx_{2})/M_{tot}$, and $x_{rel}=x_{1}-x_{2}$.
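For completeness, the chain-rule computation behind this transformation is written out here (a standard step, not specific to the present problem): with the definitions above, $\partial_{x_{1}}=\frac{m}{M_{tot}}\partial_{x_{cm}}+\partial_{x_{rel}}$ and $\partial_{x_{2}}=\frac{M}{M_{tot}}\partial_{x_{cm}}-\partial_{x_{rel}}$, so that the mixed derivatives cancel and
\[
\frac{\partial_{x_{1}}^{2}}{2m}+\frac{\partial_{x_{2}}^{2}}{2M}=\frac{m+M}{2M_{tot}^{2}}\,\partial_{x_{cm}}^{2}+\Bigl(\frac{1}{2m}+\frac{1}{2M}\Bigr)\partial_{x_{rel}}^{2}=\frac{\partial_{x_{cm}}^{2}}{2M_{tot}}+\frac{\partial_{x_{rel}}^{2}}{2\mu}.
\]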
The next step in the derivation requires parsing the energy into $E=E_{cm}+E_{rel}$ and assuming the separable solution
\begin{eqnarray}
\Psi[x_{cm},x_{rel},t]=\psi_{cm}[x_{cm},t] \psi_{rel}[x_{rel}, t] \notag \\ =e^{-i E_{cm} t/\hbar} U[x_{cm}] e^{-i E_{rel} t/\hbar}u[x_{rel}],
\label{eq:Schtot}
\end{eqnarray}
which reduces the Schr\"odinger equation into two ordinary differential equations,
\begin{equation}
-\frac{\hbar^{2}}{2M_{tot}} \frac{d^{2}U[x_{cm}]}{dx_{cm}^{2}} = E_{cm} U[x_{cm}]
\label{eq:ScheqODE1}
\end{equation}
\begin{equation}
-\frac{\hbar^{2}}{2 \mu} \frac{d^{2}u[x_{rel}]}{dx_{rel}^{2}}+\beta \delta[x_{rel}]\, u[x_{rel}] = E_{rel} u[x_{rel}].
\label{eq:ScheqODE2}
\end{equation}
The solution to these equations is then obtained from the initial values and boundary conditions. Finally, this solution is transformed back to the particle-mirror system yielding $\Psi[x_{1},x_{2},t]$.
An example of this procedure is found in the solution to the hydrogen atom where the Schr\"odinger equation is first transformed from the laboratory to the cm-relative coordinates yielding {\em uncorrelated} substates. Transforming back to the electron-proton system yields {\em correlated} electron and proton substates \cite{Tommasini}.
In subsection \ref{sec:non-simultaneous}, the infinite potential energy boundary condition at the mirror surface is used to obtain energy eigenstate solutions to eqns. \ref{eq:ScheqODE1} and \ref{eq:ScheqODE2}. The full solution is then transformed to the particle-mirror system. In subsection \ref{sec:wavegroupsimultaneous}, wavegroups are formed via a superposition of these states.
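For orientation, a minimal sketch of these eigenstates is recorded here (assuming, as in Fig. \ref{fig:overview}, that the particle approaches the mirror from the left, i.e. $x_{rel}\leq 0$, and taking the lossless limit of infinite $\beta$; the symbols $K_{cm}$ and $k_{rel}$ are used here only as labels for the cm and relative wavevectors):
\[
U[x_{cm}]\propto e^{i K_{cm} x_{cm}},\ \ \ u[x_{rel}]\propto \sin[k_{rel} x_{rel}]\ \ (x_{rel}\leq 0),\ \ \ u[x_{rel}]=0\ \ (x_{rel}>0),
\]
with $E_{cm}=(\hbar K_{cm})^{2}/(2M_{tot})$ and $E_{rel}=(\hbar k_{rel})^{2}/(2\mu)$, so that $u$ vanishes at the mirror surface $x_{rel}=0$ and is a superposition of incident and reflected relative plane waves.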
\subsection{Overview of the asynchronous method}
\label{sec:overview}
This solution, $\Psi[x_{1},x_{2},t]$, is next used to construct the wavefunction which predicts the outcome of asynchronous measurements. For illustrative purposes, let the particle be measured first. Experimental realizations of such a measurement procedure are discussed in subsection \ref{sec:measurementtheory}.
To account for different temporal measurements of the particle and mirror, respective time parameters $t_{1}$ and $t_{2}$ are used instead of $t$ in $\Psi[x_{1},x_{2},t]$. Coefficients of these time parameters in the phase are then the energies of the particle and mirror, respectively.
This is a parsing of the energy in the solution to the Schr\"odinger equation for the particle-mirror system, similar to that applied after eqn. \ref{eq:Schreqn} to transform it into the cm and rel system. Since that transformation yielded a simultaneous PDF there was no need for the two-time notation. To modify it for asynchronous predictions, relative and cm times are introduced in subsection \ref{sec:non-simultaneous}, which then both account for different temporal measurements of the cm and relative positions and isolate the cm and rel energy terms.
However, transformation to the cm-rel system is simply a mathematical technique to facilitate a solution to the Schr\"odinger equation. For example, there exists no observable particle with a reduced mass as described in the cm-rel system. Once a solution, which satisfies the initial conditions and boundary values, is obtained in the cm-rel system the inverse transformation is applied to describe the system that can be measured.
These time parameters provide labels for the particle and mirror energies in the wavefunction's phase and also label the different times at which the particle and mirror are measured. They assist in isolating the energy of the particle from that of the mirror, just as $x_{1}$ and $x_{2}$ isolate the effect of the particle and mirror wavevectors in the phase of the wavefunction. This then allows all particle parameters to be fixed in the two-body wavefunction when the particle is measured. Such labeling, while simple in a separable system, is not trivial in the correlated particle-mirror system.
The different time labels do not indicate evolution of the subsystems via different Hamiltonians as do the different time variables used by McGuire \cite{Mcguire}, nor are they used to generate a manifestly Lorentz invariant theory as done by Petrat \cite{petrat}. They neither tick at different rates nor are they out of phase, but rather act only as a label, just as $x_{1}$ and $x_{2}$ label the particle and mirror spatial coordinates along the $x$ axis. Such notation then results in the wavefunction derived in subsection \ref{sec:theoryparticlemirror} being expressed as $\Psi[x_{1},x_{2},t] \rightarrow \Psi[x_{1},t_{1},x_{2},t_{2}]$.
Now let the particle be measured first, at position $x_{1}=x_{10}$ at time $t_{10}$. This then collapses the particle substate into an eigenstate of that operator, $\Psi[x_{1},t_{10}]=\delta[x_{1}-x_{10}]$. The first measurement forces the two-body correlated wavefunction into an uncorrelated state which is, at the time that the particle is measured, a product of the particle substate with the mirror substate, given by $\Psi[x_{1},t_{10}] \Psi[x_{10},t_{10},x_{2},t_{10}]$.
The particle state after measurement is irrelevant to the subsequent time evolution of the mirror since they are no longer correlated and no longer interact. The mirror state is determined by fixing all the particle coordinates in the two-body wavefunction at the time of the measurement of the particle. The two time formalism facilitates this procedure by fixing the particle parameters while allowing only those of the mirror to time evolve. The coordinates of this measurement of the particle, $(x_{10},t_{10})$, then determine the initial state of the mirror from the two-body wavefunction as $\Psi[x_{10},t_{10},x_{2},t_{10}]$. This one-body wavefunction for the mirror then continues to time evolve with $\Psi[x_{10},t_{10},x_{2},t_{2}]$ for $t_{2}>t_{10}$.
For example, the phase of the mirror substate evolves in time only with the mirror's energy times the time $t_{2}$. The particle's energy times the time $t_{1}$ term in the phase should not influence the time evolution of the mirror substate after the particle has been measured. It does not in this formalism since all of the energy terms associated with the particle are fixed at $t_{1}=t_{10}$. Similarly, the particle's wavevector times the position $x_{1}$ is fixed by setting $x_{1}=x_{10}$ while the mirror's wavevector times its position $x_{2}$ is allowed to vary.
The probability of {\em simultaneously} measuring the particle and mirror within a small region $\Delta x_{1} \Delta x_{2}$ around $(x_{10},x_{20})$, over which the PDF is essentially constant, is $\textmd{Pr}[x_{10}-\Delta x_{1}<x_{1}<x_{10}+\Delta x_{1},x_{20}-\Delta x_{2}<x_{2}<x_{20}+\Delta x_{2}]=\textmd{PDF}\Delta x_{1} \Delta x_{2}$ with $\textmd{PDF}=\Psi[x_{10},x_{20},t] \Psi^{*}[x_{10},x_{20},t]$.
The probability of measuring the particle first and the mirror later is given by a product of two probabilities for one-body measurements. The first, the probability of measuring the particle around $ x_{10}$ at time $t_{10}$ without knowledge of the mirror's position, is determined from the average of the two-body PDF over the mirror coordinate $x_{2}$, $\textmd{Pr}_{I}[x_{10}-\Delta x_{1}<x_{1}<x_{10}+\Delta x_{1}]=\textmd{PDF}_{1}\Delta x_{1}$ with $\textmd{PDF}_{1}=\int \Psi[x_{10},t_{10},x_{2},t_{10}] \Psi^{*}[x_{10},t_{10},x_{2},t_{10}]dx_{2}$.
The next, the probability of measuring the mirror at $x_{2}$ and $t_{2}$ after having measured the particle at $x_{10}$ within $\Delta x_{1}$ at $t_{10}$, is given by $\textmd{Pr}_{II}[x_{20}-\Delta x_{2}<x_{2}<x_{20}+\Delta x_{2}]=\textmd{PDF}_{2}\Delta x_{2}$ with $\textmd{PDF}_{2}= \Psi[x_{10},t_{10},x_{2},t_{2}] \Psi^{*}[x_{10},t_{10},x_{2},t_{2}]\Delta x_{1}$. The probability of first measuring the particle at $x_{10}$ and time $t_{10}$ and then the mirror at $x_{2}$ and time $t_{2}$ is therefore $\textmd{Pr}_{I}\textmd{Pr}_{II}$.
The predictions for synchronous measurement presented below are plots of the two-body PDF as a function of $x_{1}$ and $x_{2}$ for snapshots at $t_{1}=t_{2}=t$. The probability of measuring both the particle and mirror at the same time in a small region of these plots is $\Psi[x_{1},x_{2},t] \Psi^{*}[x_{1},x_{2},t] \Delta x_{1} \Delta x_{2}$.
In the asynchronous predictions presented, it is assumed that the particle has been measured and therefore the plots are of the one-body PDF for the mirror as a function of both $x_{2}$ and the position $x_{10}$ and time $t_{10}$ at which the particle was measured. These are snapshots at $t_{2}$. The probability of measuring the mirror in a small region at time $t_{2}$, having already measured the particle at $x_{10}$ and $t_{10}$, is then $\Psi[x_{10},t_{10},x_{2},t_{2}] \Psi^{*}[x_{10},t_{10},x_{2},t_{2}] \Delta x_{2}$. From these figures, only the probability of measuring the mirror after the particle has been measured (not the probability of first measuring the particle {\em and} then the mirror) can be determined. Conservation of probability for this one-body mirror wavefunction is addressed in the appendix.
While the validity of this asynchronous model is limited by the assumption of no interaction after the first measurement, it also makes subtle assumptions about collapse of a two-body wavefunction: a measurement of the particle substate influences neither the mirror substate at $t_{10}$ nor the subsequent evolution of the mirror substate, apart from fixing the parameters of the particle substate in the two-body wavefunction. A more detailed discussion is found in subsection \ref{sec:measurementtheory}.
\subsection{Details of the asynchronous method}
\label{sec:non-simultaneous}
Before reflection an uncorrelated solution to the Schr\"odinger equation is given by
\begin{eqnarray}
\Psi_{0} \propto \exp[i (k x_{1}-\frac{\hbar k^{2}}{2m}t_{1}+K x_{2}-\frac{\hbar K^{2}}{2M}t_{2})].
\label{eq:ScheqnUnentangled}
\end{eqnarray}
An incident wavegroup constructed from this then leads to uncorrelated predictions about the probability of finding the particle at $(x_{1},t_{1})$ and mirror at $(x_{2},t_{2})$.
For the reflected wavefunction, the solution to eqn. \ref{eq:ScheqODE2} must vanish at $x_{1}=x_{2}$ to satisfy the boundary condition at the mirror and not exist for $x_{rel}<0$ (or $x_{1}>x_{2}$) since the particle cannot move through the mirror (for the uncorrelated incident state, however, there is no interaction and the particle does move past the mirror).
In this transformed system, a solution to eqn. \ref{eq:ScheqODE2} can be constructed from the superposition of incident and ``reflected'' wavefunctions in the cm-rel system (in much the same way as the solution for a wave traveling along a string toward a rigidly clamped boundary is constructed from free string solutions traveling in opposite directions),
\begin{equation}
\psi_{rel}=(e ^{i \phi_{in}}-e ^{i \phi_{ref}}) \theta [x_{rel}],
\label{eq:PsiSeparable}
\end{equation}
where $\theta [x_{rel}]$ is the unit step function. The only difference between the arguments of the two exponentials is the sign of the relative wavevector $K_{rel}$ corresponding to reflection in the relative coordinate. That is,
\begin{eqnarray}
\phi_{in/ref}= {\bm \pm}~ K_{rel} x_{rel}-\frac{\hbar K_{rel}^{2}}{2 \mu}t_{rel},
\label{eq:ScheqPhase}
\end{eqnarray}
where the initial velocities must allow reflection to occur. Relative and center of mass times $t_{rel}$ and $t_{cm}$ are introduced and associated with the relative and center of mass energies. These time variables satisfy the same properties as do $t_{1}$ and $t_{2}$ but in this case provide the notation needed in separating the energies associated with the relative and center of mass subsystems.
The solution to eqn. \ref{eq:ScheqODE1} for reflection is given by
\begin{equation}
\psi_{cm}=e ^{i (K_{cm} x_{cm}-E_{cm} t_{cm}/\hbar)}.
\label{eq:Psicm}
\end{equation}
The complete solution for an eigenstate of energy in reflection is then $\Psi[x_{cm},t_{cm},x_{rel},t_{rel}]=\psi_{cm}\psi_{rel}$.
The particle-mirror system has now been partitioned into separable center of mass and relative coordinate subsystems. This separable solution, for the two uncorrelated substates, can be used to construct a wavegroup whose cm and relative substates are themselves uncorrelated wavegroups satisfying the initial and/or boundary conditions. A measurement of the cm position neither affects the time evolution of the wavegroup associated with the relative motion nor introduces any correlation between the cm and relative positions.
While the energy and momentum of the center of mass subsystem are unaffected by reflection, that is not the case for the relative subsystem where the relative wavevector changes sign upon reflection. Therefore, in this separable system interference of incident and reflected wavefunctions, determined by the PDF
\begin{eqnarray}
\Psi \Psi^{*}=4 \sin^{2}[K_{rel}x_{rel}],
\label{eq:SeparableInterference}
\end{eqnarray}
is associated only with the relative coordinate subsystem.
The change from cm-rel to particle-mirror systems is accomplished by using the following relations in the separable solutions given by equations \ref{eq:Schtot}, \ref{eq:PsiSeparable}, \ref{eq:ScheqPhase}, and \ref{eq:Psicm}: $K_{cm}=k+K$, $K_{rel}=(Mk-mK)/M_{tot}$, $x_{rel}=x_{1}-x_{2}$, $x_{cm}=(mx_{1}+Mx_{2})/M_{tot}$, $E_{rel}=\hbar^2K_{rel}^{2}/2\mu$, and $E_{cm}=\hbar^2K_{cm}^{2}/2(m+M)$.
These relations, however, do not address the two-time labeling issue in the particle-mirror system. To do so, note that the energy of the reflected particle, given by $p_{ref}^{2}/2m$ with $p_{ref}=\hbar \partial \phi_{ref}/\partial x_{1}$, is associated with the temporal coordinate $t_{1}$. Similarly, the energy for the mirror is $P_{ref}^{2}/2M$ with $P_{ref}=\hbar \partial \phi_{ref}/\partial x_{2}$ and is associated with the temporal coordinate $t_{2}$. Both of these energies and momenta are consistent with those of a classical particle reflecting from a moving mirror. These are manifest in the two-body wavefunctions, however, as a Doppler shift.
Application of these transformation relations then changes $e ^{i \phi_{in}}$ in equation \ref{eq:PsiSeparable} into equation \ref{eq:ScheqnUnentangled}. The two-time expression for $e ^{i \phi_{ref}}$ in the particle-mirror system, although simple to calculate using the procedure just outlined, is too large to present here. This complexity is a consequence of the correlations generated in reflection.
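Explicitly, the reflected wavevectors $k_{ref}=p_{ref}/\hbar$ and $K_{ref}=P_{ref}/\hbar$ take the classical elastic-reflection values
\begin{eqnarray}
k_{ref}=\frac{(m-M)k+2mK}{m+M},~~~~K_{ref}=\frac{(M-m)K+2Mk}{m+M}, \nonumber
\end{eqnarray}
which conserve the total momentum, $k_{ref}+K_{ref}=k+K$, and the total kinetic energy.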
Correlated interference of these incident and reflected wavefunctions in the particle and mirror subsystems is then given by (transforming eqn. \ref{eq:SeparableInterference})
\begin{eqnarray}
\Psi \Psi^{*}=4 \sin^{2}[(mK-Mk)\{(m+M)(x_{1}-x_{2})\notag \\
-\hbar (k+K)(t_{1}-t_{2})\}/(m+M)^2].
\label{eq:EntangledInterference}
\end{eqnarray}
For $t_{1}=t_{2}$ this is similar to Gottfried's joint or simultaneous PDF for the interference obtained in the correlation between two particles produced in a momentum-conserving decay after each has traversed separate double slits \cite{Gottfried}. Note also that the interference of the mirror and particle is coupled in eqn. \ref{eq:EntangledInterference}, illustrating how many-body systems interfere with `themselves' rather than the particle and mirror each interfering only with itself (in which case there would be no correlation) \cite{silverman}.
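Eqn. \ref{eq:EntangledInterference} can also be checked symbolically by comparing the phase difference between the incident two-body wavefunction of eqn. \ref{eq:ScheqnUnentangled} and the reflected wavefunction built from the classical wavevectors $k_{ref}$ and $K_{ref}$; a minimal sympy sketch of such a check is
\begin{verbatim}
import sympy as sp

x1, x2, t1, t2, k, K = sp.symbols('x1 x2 t1 t2 k K', real=True)
m, M, hbar = sp.symbols('m M hbar', positive=True)

# incident two-body phase, eqn (ScheqnUnentangled)
phi_in = k*x1 - hbar*k**2/(2*m)*t1 + K*x2 - hbar*K**2/(2*M)*t2

# reflected phase, built from the classical (Doppler-shifted) wavevectors
k_ref = ((m - M)*k + 2*m*K)/(m + M)
K_ref = ((M - m)*K + 2*M*k)/(m + M)
phi_ref = k_ref*x1 - hbar*k_ref**2/(2*m)*t1 \
        + K_ref*x2 - hbar*K_ref**2/(2*M)*t2

# |exp(i phi_in) - exp(i phi_ref)|^2 = 4 sin^2[(phi_in - phi_ref)/2]
half_phase = sp.simplify((phi_in - phi_ref)/2)

# argument of eqn (EntangledInterference); sin^2 is even,
# so an overall sign difference is immaterial
target = (m*K - M*k)*((m + M)*(x1 - x2)
                      - hbar*(k + K)*(t1 - t2))/(m + M)**2

print(sp.simplify(half_phase + target))   # prints 0
\end{verbatim}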
To gain familiarity with this result, consider a simultaneous measurement. For fixed $x_{1}$, the approximation $m/M<<1$ leads to mirror interference fringes that repeat (maximum to maximum) over a distance
\begin{eqnarray}
\Delta x_{2} \approx \pi \hbar/(m(v-V)).
\label{eq:fringespacing}
\end{eqnarray}
Similarly, for fixed $x_{2}$ this approximation leads to particle interference fringes with spacing $\Delta x_{1}=\Delta x_{2}$. For $V=0$ both the mirror and particle fringes are spaced at half the deBroglie wavelength of the {\em particle}, which can be up to $10^{-6}$ m for ultracold atoms \cite{cronin}.
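As a rough numerical illustration of eqn. \ref{eq:fringespacing} (the atom speed below is an assumed, representative value rather than one taken from the figures), a rubidium atom released from a Bose-Einstein condensate and reflecting from a static mirror gives a micron-scale fringe spacing:
\begin{verbatim}
import numpy as np

hbar = 1.054e-34      # J s
m    = 1.4e-25        # kg, rubidium atom
v, V = 2.4e-3, 0.0    # m/s; v is an assumed few-mm/s release speed

print(np.pi*hbar/(m*(v - V)))   # ~1e-6 m, eqn (fringespacing)
\end{verbatim}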
The time dependence in equation \ref{eq:EntangledInterference} is determined by the time components of the phase of the incident wavefunction, $\Phi_{in}[t_{1},t_{2}]=p_{in}^{2}t_{1}/2m + P_{in}^{2} t_{2}/2M$, and the reflected wavefunction, $\Phi_{ref}[t_{1},t_{2}]=p_{ref}^{2}t_{1}/2m + P_{ref}^{2} t_{2}/2M$. The temporal part of the joint PDF depends on $\Phi_{in}[t_{1},t_{2}]-\Phi_{ref}[t_{1},t_{2}]$. For simultaneous measurements ($t_{1}=t_{2}=t$) this phase difference is zero since the time variable factors from all energy terms and the total energy before and after reflection does not change: $\Phi_{in}[t]-\Phi_{ref}[t]=(p_{in}^{2}/2m + P_{in}^{2}/2M-p_{ref}^{2}/2m - P_{ref}^{2}/2M) t=0$. That, however, is not the case for non-simultaneous measurements since the times of measurement of the particle and mirror differ and are therefore no longer a common factor of all energy terms in the phase. This leads to the Doppler effect which is described next.
Consider an ensemble of identically prepared particle-mirror systems. Let a measurement on every member of the ensemble be made at both fixed particle-mirror positions, $x_{1}$, $x_{2}$, and time $t_{2}$ while the particle is measured at different times $t_{1}$ for different members of the ensemble. The time dependent interference pattern from eqn. \ref{eq:EntangledInterference} emerges from these ensemble measurements as the expected ``beat frequency" $\Omega$ \cite{beat} associated with interference of the incident and reflected particle substates (the superposition of states with different energies commensurate with the energy exchanged in reflection). If instead, $x_{1}$, $x_{2}$, and $t_{1}$ are fixed while the mirror position is measured at different times $t_{2}$, the ``beat frequency" is that associated with superposing mirror substates differing in energy. These particle and mirror beat frequencies are identical due to the same energy being exchanged between the particle and mirror in reflection and are given by
\begin{eqnarray}
\Omega=\frac{mM(v-V)(mv+MV)}{\hbar(m+M)^{2}}.
\label{eq:beat}
\end{eqnarray}
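For orientation, evaluating eqn. \ref{eq:beat} with the rubidium-atom and mesoscopic-mirror parameters used later in subsection \ref{subsec:macroscopic} gives a value of order $3 \times 10^{5}$ s$^{-1}$, the magnitude quoted there; a short numerical sketch is
\begin{verbatim}
hbar = 1.054e-34          # J s
m, M = 1.4e-25, 1.0e-8    # kg: rubidium atom and mesoscopic mirror
v, V = 3.0e-2, 1.0e-2     # m/s: initial particle and mirror velocities

Omega = m*M*(v - V)*(m*v + M*V)/(hbar*(m + M)**2)
print(Omega)              # ~2.7e5 s^-1, i.e. of order 3e5
\end{verbatim}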
\subsection{Wavegroups: Simultaneous measurement}
\label{sec:wavegroupsimultaneous}
To better understand the experimental consequences of these results, wavegroups are next formed from a superposition of the incident and reflected `energy eigenstates' (given by eqn. \ref{eq:PsiSeparable}) expressed in terms of the correlated particle and mirror substates rather than the cm and relative substates. It is assumed that the initial particle and mirror Gaussian substates are sufficiently spatially separated that any probability of the particle initially being on the ``wrong'' side of the mirror is negligible. An analytic expression for such wavegroups can be obtained for a Gaussian distribution in wavevector components $k$ and $K$ (or velocities $v$ and $V$). For the mirror this is proportional to $\exp [-(K-K_{0})^{2}/(2 \Delta K^{2})]$ where the peak of the distribution is at $K_{0}$ and $\Delta K$ is its width while for the particle this is proportional to $\exp [-(k-k_{0})^{2}/(2 \Delta k^{2})]$ where the peak of the distribution is at $k_{0}$ and $\Delta k$ is its width.
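A minimal numerical sketch of this construction (with illustrative parameter values; it makes no attempt to reproduce the figures exactly) superposes the incident and reflected harmonic solutions with these Gaussian weights and evaluates the joint PDF on an $(x_{1},x_{2})$ grid at $t_{1}=t_{2}=t$:
\begin{verbatim}
import numpy as np

hbar, m, M = 1.0, 1.0, 100.0      # natural units, M/m = 100
k0, K0     = 10.0, 600.0          # K/k = 60
dk         = 0.5
dK         = 2.0*dk               # Delta K / Delta k = 2

def reflected(k, K):
    # classical (Doppler-shifted) wavevectors after elastic reflection
    kf = ((m - M)*k + 2.0*m*K)/(m + M)
    Kf = ((M - m)*K + 2.0*M*k)/(m + M)
    return kf, Kf

def psi(x1, x2, t, n=31):
    # Gaussian superposition of incident-minus-reflected harmonic states
    out = np.zeros(np.broadcast(x1, x2).shape, dtype=complex)
    for k in k0 + dk*np.linspace(-3.0, 3.0, n):
        for K in K0 + dK*np.linspace(-3.0, 3.0, n):
            w = np.exp(-(k - k0)**2/(2*dk**2) - (K - K0)**2/(2*dK**2))
            kf, Kf = reflected(k, K)
            ph_in  = k*x1  + K*x2  - (hbar*k**2/(2*m)  + hbar*K**2/(2*M))*t
            ph_ref = kf*x1 + Kf*x2 - (hbar*kf**2/(2*m) + hbar*Kf**2/(2*M))*t
            out += w*(np.exp(1j*ph_in) - np.exp(1j*ph_ref))
    return out*np.heaviside(x1 - x2, 0.5)   # step function of eqn (PsiSeparable)

x1, x2 = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200),
                     indexing='ij')
pdf = np.abs(psi(x1, x2, 0.0))**2            # joint PDF snapshot at t = 0
\end{verbatim}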
\begin{center}
\begin{figure}
\includegraphics[scale=0.29]{simultaneous1.pdf}
\caption{Two-body simultaneous joint probability density snapshots for three sequential times vs coordinates $(x_{2},x_{1})$ for a particle reflecting from a mirror. The lower PDF waveform moves toward the diagonal white line, corresponding to $x_{1}=x_{2}$, then reflects in the middle snapshot where the incident and reflected two-body wavefunctions `overlap', and finally it moves away from the diagonal in the upper snapshot. The correlated interference fringes are spaced by about half the deBroglie wavelength of the {\em particle}. The upper left inset is a schematic of the `classical' analog before reflection while the upper right inset is that after reflection with initial and final particle and mirror velocities $v$, $V$, $v_{f}$, and $V_{f}$ respectively. There is no classical analog for the middle snapshot.}
\label{fig:interference}
\end{figure}
\end{center}
Consider first {\em simultaneous} measurement of the particle and mirror. In fig. \ref{fig:interference} snapshots of such a two-body joint PDF are shown at three times, $t_{1}=t_{2}=(-\tau, 0, \tau)$, for $M/m=100$, $\Delta K/\Delta k=2$, and $K/k=60$. The incident wavegroup propagates in the $(x_{1},x_{2})$ plane along a line whose slope is determined by a ratio of the group velocities of each substate and spreads due to dispersion independently in each direction. The particle and mirror initially move in the positive direction for the parameters chosen. During reflection the incident and reflected wavegroups overlap resulting in interference. After reflection the speed of the mirror increases while the particle continues moving in the positive direction with decreased speed (see the insets for the classical analog). Careful inspection of the lower and middle snapshots just to the left of the diagonal white line confirms the prediction that the spatial locations of the interference maxima and minima do not depend on time as given in eqn. \ref{eq:EntangledInterference} when $t_{1}=t_{2}$. Wavegroup distortion shown in the upper reflected snapshot is discussed below. In the cm-rel system (not shown) the ``fringes" are aligned parallel to the cm coordinate illustrating the result given in eqn. \ref{eq:SeparableInterference}.
A slice of fig. \ref{fig:interference} for $x_{1}=0$ along the $x_{2}$ coordinate is shown in fig. \ref{fig:fringespacing} (the solid line) along with a slice of this figure for $x_{2}=0$ along $x_{1}$ (the dashed line) for different bandwidth wavegroups. This demonstrates essentially the same fringe spacing for the particle and mirror substates with narrow bandwidth wavegroups, as discussed for the approximation $M/m>>1$ following eqn. \ref{eq:fringespacing}.
The minima of the interference shown in fig. \ref{fig:interference} correspond to positions where the particle and mirror can never simultaneously be found. Verification of this result requires simultaneous cm measurement of the particle and mirror with instruments which have a spatial resolution that is smaller than this fringe spacing along both coordinates.
\begin{center}
\begin{figure}
\includegraphics[scale=0.3]{FringeSpacing.pdf}
\caption{Slices of a snapshot similar to the middle one from fig. \ref{fig:interference} which show the fringe spacing along the $x_{2}$ axis for $x_{1}=0$ (dashed lines) and along the $x_{1}$ axis for $x_{2}=0$ (solid lines). The $x_{1}$ axis has been inverted to display both the dashed and solid lines together. Although each graph has $\Delta K/\Delta k=2$ the value of $\Delta K$ increases sequentially by a factor of $2$ from the front to the back of the figure. }
\label{fig:fringespacing}
\end{figure}
\end{center}
\subsection{Wavegroups: fixed $\bf{t_{10}}$ while $\bf{t_{2}}$ varies}
\label{sec:non-simultaneous limit}
Next consider non-simultaneous measurements of the particle at position $x_{10}$ and time $t_{10}$ while allowing $t_{2}$ to vary until a measurement is made of the mirror's cm position. This can be categorized into two regimes. Measurement of the particle either occurs in the region where the incident and reflected wavegroups do not overlap, regime A, or in the region where the interference similar to that shown in fig. \ref{fig:interference} occurs, regime B.
To illustrate such asynchronous measurements, the particle is measured at a particular $x_{10}$ and $t_{10}$ while the mirror substate then evolves with $\Psi[x_{10},t_{10},x_{2},t_{2}]$. This is shown as one-body PDF plots of snapshots at different times $t_{2}$ using a 3-D graph of $x_{2}$ vs fixed values of $x_{10}$ and $t_{10}$.
\subsubsection{Regime A: the particle is measured when there is no incident and reflected wavefunction overlap}
\label{regime A}
All three one-body PDF snapshots for regime A shown in fig. \ref{fig:regimeA} are for $t_{10}=\tau$. The one farthest to the right is similar to the upper PDF waveform snapshot in fig. \ref{fig:interference} since it occurs at $t_{2}=t_{10}=\tau$. The other two snapshots are for times $t_{2}= 2\tau$ and $3\tau$, with $M/m=3$, $\Delta K/\Delta k=2$, and $K/k=1.8$. Note that this mirror wavegroup moves only along the $x_{2}$ axis and disperses.
It is useful to compare the physical interpretation of the joint PDF shown in fig. \ref{fig:interference} with that of the one-body PDF of fig. \ref{fig:regimeA}. In the former case the probability of finding the particle and cm of the mirror in a region centered around $x_{1}=a$ and $x_{2}=b$ simultaneously at time $t$ is given by $\int^{b+\delta b}_{b-\delta b}\int^{a+\delta a}_{a-\delta a} PDF[x_{1},t,x_{2},t] dx_{1} dx_{2}$. In fig. \ref{fig:regimeA} the particle is measured at $t_{10}=\tau$ while the snapshots are given for different values of when and where the cm of the mirror is measured. For the leftmost PDF in this figure the probability of measuring just the mirror, once the particle has been measured, is given by $\int^{b+\delta b}_{b-\delta b} PDF[x_{10},\tau,x_{2},3\tau] dx_{2}$.
\begin{center}
\begin{figure}
\includegraphics[scale=0.25]{regimeA.pdf}
\caption{Regime A one-body PDF plots for the mirror at $t_{2}=\tau, 2\tau, 3\tau$ when the particle has been measured at $t_{10}=\tau$ and $x_{10}$. Since $t_{2}=t_{10}=\tau$ for the rightmost PDF, it is similar to the upper snapshot in fig. \ref{fig:interference}. The diagonal white line is the same as that in fig. \ref{fig:interference}.}
\label{fig:regimeA}
\end{figure}
\end{center}
\subsubsection{Regime B: the particle measured in the overlap region}
\label{regime B}
All three one-body mirror PDF snapshots for regime B, shown in fig. \ref{fig:regimeB}, are for $t_{10}=0$. The one farthest to the right is similar to the middle PDF waveform snapshot in fig. \ref{fig:interference} since it occurs at $t_{2}=t_{10}=0$. The middle snapshot of fig. \ref{fig:regimeB} lies predominantly between $x_{2}=4$ and $9$ for $t_{2}= \tau$, while the leftmost snapshot consists of two ``bumps" encompassing the region between $x_{2}=9$ and $18$ for $t_{2}= 2 \tau$. As in the previous figure $M/m=3$, $\Delta K/\Delta k=2$, and $K/k=1.8$.
The complexity of the joint PDF in fig. \ref{fig:regimeB}, in comparison with that of regime A, is a consequence of measuring the particle in the interference region. That is, there are two indistinguishable ways that the particle could have reached the interference region. It could have come from the incident {\em or} reflected particle wavegroup substates. This lack of knowledge about the particle is manifest in the subsequent mirror wavegroup which then consists of a superposition in which the mirror has yet to reflect {\em and} has already reflected the particle.
These two mirror states have different speeds due to the mirror recoil in one but not the other. To see this one need only change the speed of the incident particle wavegroup. The speed of the mirror state which has not reflected the particle does not change while the speed of the mirror state which reflected the particle increases. Overlap of these two mirror wavegroup states is shown in fig. \ref{fig:regimeB} from complete overlap (right hand snapshot) to partial overlap (middle snapshot) and finally to virtually complete separation (left hand snapshot) due to the differing speeds of the two mirror states.
\begin{center}
\begin{figure}
\includegraphics[scale=0.28]{regimeB1.pdf}
\caption{Regime B one-body mirror PDF plots for $t_{10}=0$ and 3 sequential times, $t_{2}=0, \tau, 2\tau$. The rightmost PDF waveform is similar to the middle PDF waveform in fig. \ref{fig:interference}. The diagonal white line is the same as that in fig. \ref{fig:interference}. This is a PDF illustration of the splitting of the mirror states shown schematically in fig. \ref{fig:overview}.}
\label{fig:regimeB}
\end{figure}
\end{center}
Although these results illustrate important issues in non-simultaneous measurement, they have explored only a limited set of parameters and interferometric geometries. Rather than present a comprehensive treatment, we use three examples to further probe the consequences of asynchronous measurements. First, the beamsplitting effect that a measurement on the particle has on the mirror is explored in more depth using a particle whose mass is equal to that of the mirror. Second, a manifestation of the Doppler effect is illustrated, using a time scale shorter than that used in the figures above. Finally, a specific example of reflection of a microscopic particle from a macroscopic mirror is described.
\subsubsection{Regime B: measurement functioning as a beamsplitter using a coherence transfer example}
\label{subsec:beamsplitter}
After reflection, the spatial width of the mirror wavegroup substate is exchanged with that of the particle wavegroup substate when $M=m$. This is most easily seen by constructing a particle-mirror wavefunction with different bandwidths for the particle and mirror wavegroup substates and is shown in fig. \ref{fig:coherencetransfer}, which is a contour plot of joint PDFs similar to figure \ref{fig:interference}, but without the snapshot in the interference region. The solid and dashed contours correspond to $M/m=1$ and $M/m=20$ respectively with the spread in velocities given by $\Delta V/\Delta v=10$.
\begin{center}
\begin{figure}
\includegraphics[scale=0.30]{contourCoherTrans.pdf}
\caption{Contour plots of the joint probability density snapshot for simultaneous measurement of a particle reflecting from a mirror, similar to fig. \ref{fig:interference} but without the interference or ``overlap'' region snapshot, illustrating coherence transfer for two different particle-mirror mass ratios. The solid and dashed lines are for a spread in velocities given by $\Delta V/\Delta v=10$ with $M/m=1$ and $M/m=20$, respectively. Note the exchange of wavegroup widths between particle and mirror substates for $M/m=1$. The diagonal white line is the same as that in fig. \ref{fig:interference}.}
\label{fig:coherencetransfer}
\end{figure}
\end{center}
This result can be understood by comparing classical and quantum reflection. In a one-dimensional classical collision, conservation of energy and momentum result in the exchange of particle-mirror velocities independent of either velocity for $m=M$. This is manifest quantum mechanically in the exchange of commensurate substate parameters $k$ and $K$ between the incident and reflected two-body wavefunctions. If an incident particle substate, consisting of only one harmonic component (corresponding to speed $v$) reflects from a mirror substate with many velocity components, then each harmonic component of the mirror substate (corresponding to different values of $V$) reflects the particle substate and therefore acquires velocity $v$, while the reflected particle substate acquires different velocity values for each reflected component of the mirror wavegroup. This results in the reduction of the mirror bandwidth and an increase in the particle bandwidth, which is manifest in fig. \ref{fig:coherencetransfer} as the exchange of incident and reflected wavegroup shapes. It also is responsible for the distortion of the reflected wavegroup shape in fig. \ref{fig:interference}.
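Setting $M=m$ in the reflected wavevectors of subsection \ref{sec:non-simultaneous} makes this exchange explicit,
\begin{eqnarray}
k_{ref}=K,~~~~K_{ref}=k, \nonumber
\end{eqnarray}
so each incident harmonic pair $(k,K)$ maps to the reflected pair $(K,k)$, exchanging the particle and mirror bandwidths.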
\begin{center}
\begin{figure}
\includegraphics[scale=0.3]{2TequalMass.pdf}
\caption{One-body mirror PDFs for the same equal mass parameters as in fig. \ref{fig:coherencetransfer} except that the particle was measured at $x_{10}$ and $t_{10}=0$ while the mirror is measured at $x_{2}$ with incremental increases in the time $t_{2}$. Note the difference in scale along the axes. The diagonal white line is the same as that in fig. \ref{fig:coherencetransfer}. The distinct mirror substates in part (d) correspond to those which either have or have not reflected the particle. This is another PDF illustration of the splitting of the mirror states shown schematically in fig. \ref{fig:overview}.}
\label{fig:2TequalMass}
\end{figure}
\end{center}
Consider next non-simultaneous joint PDFs for equal particle and mirror masses. However, rather than choosing measurement of the particle at $t_{10}=\pm \tau$, as shown in fig. \ref{fig:coherencetransfer}, the particle is measured at $t_{10}=0$ to illustrate asynchronous measurement in the interference region. The results, shown in fig. \ref{fig:2TequalMass}, again illustrate splitting of the mirror. However, these states differ dramatically in their shapes and in their splitting ratios. That is, depending on the position that the particle is measured at, a slice of the joint PDF along the $x_{2}$ axis can reveal one or two peaks of unequal height. The long PDF waveform parallel to the $x_{10}$ axis is that of the mirror substate associated with no reflection, while the narrow peak corresponds to the mirror substate which reflected the particle. The differing speeds of these two waveforms, due to mirror recoil in one but not the other, are evident in the figure.
The time order of measuring the particle and then the mirror can be reversed. In regime B, the splitting then generates two states consisting of a superposition in which the particle has yet to reflect {\em and} has already reflected from the mirror.
\subsubsection{Regime B: measurement of the particle resulting in a beat frequency for the mirror}
\label{subsec:beat}
Variation of $t_{2}$, on a temporal scale smaller than that shown in fig. \ref{fig:regimeB}, reveals the energy differences between the superposition of these two mirror substates, in regime B, as a ``beat frequency'' in the joint PDF. This is related to the result for harmonic states given in eqn. \ref{eq:beat} and is illustrated for wavegroups in fig. \ref{fig:doppler}. A slice, taken from fig. \ref{fig:interference}, along the $x_{2}$ axis for $x_{10}=0$ and $t_{10}=t_{2}=0$, is shown in fig. \ref{fig:doppler} as the leftmost plot. Plots to the right of this are shown at $x_{10}=0$ and $t_{10}=0$ for $t_{2}= 0.04 \tau,~0.08 \tau$, and $0.12 \tau$, respectively. All other parameters are the same as used in fig. \ref{fig:interference}. This illustrates the expected beat frequency from the ``Doppler shift'' in reflection, although it is shrouded in two-body interference. Equation \ref{eq:EntangledInterference} predicts no such beat frequency for simultaneous measurements ($t_{1}=t_{2}$), which is discussed in more detail in subsection \ref{sec:asynchDoppler}.
\begin{center}
\begin{figure}
\includegraphics[scale=0.28]{Doppler.pdf}
\caption{One-body PDF snapshots of the mirror after the particle was measured at $x_{10}=0$ and $t_{10}=0$, starting at time $t_{2}=0$ (which is a slice of fig. \ref{fig:interference}) as the leftmost plot. The remaining plots to the right progressively increase the time $t_{2}$ when the mirror is measured by $0.04 \tau$.}
\label{fig:doppler}
\end{figure}
\end{center}
\subsubsection{Regime B: interference for a mirror of mesoscopic/macroscopic mass}
\label{subsec:macroscopic}
One constraint for regime B is that the fringe visibility function must be non-zero and the incident and reflected particle-mirror wavegroups must `overlap' in the $(x_{1},x_{2})$ plane. The interference fringes are then determined predominately by a superposition of `energy eigenstates' \cite{Hamilton}. For example, the interference shown in fig. \ref{fig:interference} is determined predominately by eqn. \ref{eq:EntangledInterference}, when the wavegroups `overlap' in the center snapshot, where the longitudinal coherence lengths for both the particle and mirror are greater than the fringe spacing. In the upper snapshot of fig. \ref{fig:interference} there is neither `overlap' nor such interference.
The fringe ``visibility function'' is non-zero if each wavegroup substate `overlaps' within approximately a coherence length \cite{coherencelength}, which is given by $l_{c} \approx \lambda^{2}/\Delta \lambda = \lambda V/\Delta V$ \cite{hasselbach}. For particle substates, this can be $l_{c}^{particle}=10000$ \AA~for ultracold atoms \cite{cronin} or $l_{c}^{particle}=790$ \AA~for slow neutrons \cite{pushin}.
The longitudinal coherence length of the mirror can be estimated from the uncertainty in the mirror velocity. If the mirror is in thermal equilibrium with the environment then $\Delta V_{thermal} \approx \sqrt{2k_{B}T/M}$, yielding $l_{c}^{thermal} \approx h/\sqrt{2Mk_{B}T}$, which is consistent with results for atoms in a Bose-Einstein condensate \cite{cronin}.
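As a numerical illustration with the masses and temperatures of the example below, this estimate gives roughly $10^{-6}$ m for the ultracold atom and roughly $10^{-18}$ m for the mirror (a rough sketch assuming thermal equilibrium as above):
\begin{verbatim}
import numpy as np

h, kB = 6.626e-34, 1.381e-23        # J s, J/K

def l_thermal(M, T):
    # l_c ~ h / sqrt(2 M kB T)
    return h/np.sqrt(2.0*M*kB*T)

print(l_thermal(1.4e-25, 1.0e-7))   # atom:   ~1e-6 m
print(l_thermal(1.0e-8,  1.0))      # mirror: ~1e-18 m
\end{verbatim}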
Fig. \ref{fig:spreadvelocity} illustrates how variation in the longitudinal coherence length of the mirror substate affects the particle-mirror interference, for fixed particle coherence length and {\em simultaneous} measurements. Part (a) shows a longer mirror coherence length than is used in fig. \ref{fig:interference}, while parts (b) through (d) progressively reduce the coherence length of only the mirror substate. In fig. \ref{fig:spreadvelocity} (d), the coherence length of the mirror is so small that overlap is prevented, over a range of $x_{1}$ values where it was present before. Nevertheless, a slice along the $x_{2}$ axis for measurement of the particle at a particular $x_{1}$, indicates a splitting of the mirror substate into two states which do not overlap and are therefore distinguishable. This is a consequence of two ways that the particle could have reached $x_{1}$: it could have come from the incident {\em or} reflected particle wavegroup substates, due to the large particle coherence length. As the mirror's coherence length increases, these two ways overlap and generate correlated interference as shown in fig. \ref{fig:spreadvelocity} (a). As the mirror's coherence length decreases, the position of the mirror before reflection is distinguishable from that after reflection, resulting in no interference, as shown in part of fig. \ref{fig:spreadvelocity} (d).
\begin{center}
\begin{figure}
\includegraphics[scale=0.28]{BandwidthVariations.pdf}
\caption{Two-body PDF plots for simultaneous measurement similar to the center snapshot of fig. \ref{fig:interference}, but with different spreads in mirror velocities, for a fixed spread in particle velocity while all other parameters are the same. $\Delta V/\Delta v=80, 20, 5$, and $0.4$ for $M/m=200$ in parts (a), (b), (c), and (d) respectively. Note the change of scales in graph (d).}
\label{fig:spreadvelocity}
\end{figure}
\end{center}
One might expect that the small coherence length associated with a mesoscopic/macroscopic mirror mass would not allow the interference shown in fig. \ref{fig:spreadvelocity} (a). This is modeled in fig. \ref{fig:largemass} where one-body mirror PDF snapshots are shown at three times for a rubidium atom with $m=1.4 \times 10^{-25}$ kg reflecting from a mirror with $M=10^{-8}$ kg. The spread in wavevectors is determined by thermal equilibrium using $T=10^{-7}$ K for the atom (which is released from a Bose-Einstein condensate) and $T=1$ K for the mirror. The snapshots are for $t_{10}=0$ with $t_{2}= 10^{-16},~2 \times 10^{-15},$ and $ 4 \times 10^{-15}$ s.
This mirror longitudinal coherence length, determined by thermal equilibrium at $T=1$ K, corresponds to the interference in reflection shown in fig. \ref{fig:spreadvelocity} (c), for simultaneous measurement. For non-simultaneous measurements, Fig. \ref{fig:largemass} shows only snapshots over a small range of times $t_{2}$, when the mirror is measured, while the particle has been measured at $t_{10}=0$.
However, varying $t_{2}$ over a much larger range, for a mirror initially at rest, does not change the character of the joint PDF shown in this figure. Initial mirror and particle velocities of $V=1/100$ and $v=3/100$ m/s are used in fig. \ref{fig:largemass}, which results in a beat frequency of $3 \times 10^{5}$ Hz and changes the joint PDF in a manner similar to that shown in fig. \ref{fig:doppler2} along the $x_{2}$ axis. Although these two interfering mirror states (which either reflect or do not reflect the particle) have different speeds, they do not completely separate, as do those shown in the leftmost plot of fig. \ref{fig:regimeB}. This is a consequence of the small difference in mirror speeds due to the large $M/m$ ratio, resulting in a mirror offset which is small compared with the size of the wavegroup.
\begin{center}
\begin{figure}
\includegraphics[scale=0.3]{LargeMass.pdf}
\caption{One-body mirror PDF snapshot predictions for {\em non-simultaneous} measurement of a rubidium atom reflecting from a mirror of mass $10^{-8}$ kg. The longitudinal coherence lengths of the atom and mirror are determined by thermal equilibrium at $T=10^{-7}$ K (released from a Bose-Einstein condensate) and $T=1$ K, respectively. Each snapshot progressively increases the time $t_{2}$.}
\label{fig:largemass}
\end{figure}
\end{center}
These results illustrate that a measurement of the particle at one of the minima of the interference pattern, in the joint PDF of fig. \ref{fig:largemass}, yields no possibility of measuring the cm of the mirror for any value of $x_{2}$ and subsequent time $t_{2}$ (within a duration limited by the beat frequency).
This is to be contrasted with the result shown in fig. \ref{fig:interference}, where simultaneous measurement of the particle and mirror at an interference minimum {\em is} constrained temporally and spatially as follows. First, the duration of the simultaneous interference is finite. For a static mirror, it is essentially determined by the coherence length of the particle divided by its speed, $\approx l_{c}^{particle}/v$ when $l_{c}^{particle} >> l_{c}^{mirror}$. Second, the spatial extent of the interference along the $x_{2}$ axis is not infinite but severely limited by $l_{c}^{mirror}$. Experimental confirmation of the interference in fig. \ref{fig:interference} therefore requires both a small spatial and a short temporal resolving power of the measuring instrument.
Destructive interference, on the other hand, is not similarly constrained. The interference minimum in fig. \ref{fig:largemass} occurs over a range of $x_{2}$ much larger than the mirror coherence length. It is also robust over a wide range of mirror masses.
\subsubsection{Regime B: Doppler effect from a measurement of the particle but not the mirror}
\label{subsec:trace}
Correlated measurements are more difficult to perform than only a measurement on the particle, while making none on the mirror. Predictions of such effects are given by an average of the PDF (or trace) over the mirror coordinate, converting it into a marginal PDF, or for this two-body system a `one-body' PDF \cite{Gottfried}.
In the context of this work, the probability for detecting the particle without measuring the mirror for incident and reflected energy eigenstates is then manifest as an integral of eqn. \ref{eq:EntangledInterference} over the mirror coordinate (or given by the reduced density matrix obtained as a trace over the mirror coordinates \cite{Tommasini}). Such a procedure, applied to eqn. \ref{eq:EntangledInterference}, ``washes out'' the one-body interference. Gottfried carries out a similar averaging on the two-body PDF for simultaneous measurement in a momentum-conserving decay after each particle traverses separate double slits with a similar result \cite{Gottfried}.
However, a one-body standing wave interference pattern is certainly expected for the particle reflecting from a static mirror. For a moving mirror, the Doppler shift introduces a time dependence in this pattern. Therefore, the prediction that there is no such interference between incident and reflected particle states, when the mirror is not measured, seems surprising.
This prediction, however, must be re-evaluated for wavegroups. Consider the averaging procedure when applied to the four PDFs shown in fig. \ref{fig:spreadvelocity}. It is apparent that the one-body interference `washes out' in graph \ref{fig:spreadvelocity}(a) since many cycles in the mirror coordinate of the joint PDF are averaged over, while that is not the case for fig. \ref{fig:spreadvelocity}(c). As shown in fig. \ref{fig:largemass}, the mesoscopic/macroscopic mirror has a coherence length smaller than the fringe spacing determined by an ultracold atom reflecting from it and therefore most closely resembles fig. \ref{fig:spreadvelocity}(c). It is also apparent from this figure that averaging over $x_{2}$ does {\em not} `wash out' the one-body interference. However, this averaging of fig. \ref{fig:spreadvelocity}(c) is only for simultaneous measurement and does not change with time even if the mirror moves (see subsection \ref{sec:asynchDoppler} for more details).
For an asynchronous measurement of the particle at $x_{10}$ and $t_{10}$, the joint PDF must be integrated over $x_{2}$ to obtain the marginal PDF for only the particle, if the mirror is not measured. Yet at what time $t_{2}$ must this integration occur for asynchronous measurements? This is addressed in fig. \ref{fig:doppler2}, where four sequential plots are shown for different $t_{10}$ values, while within each plot snapshots at different times $t_{2}$ are shown. The bandwidth parameters used in fig. \ref{fig:doppler2} have been adjusted to model those of a mesoscopic/macroscopic mirror mass. It is apparent from this figure that averaging over $x_{2}$ does not `wash out' the one-body interference for any value of $t_{2}$. That is, the result of averaging over $x_{2}$ is independent of $t_{2}$. Yet variation of the one-body PDF with $t_{10}$ at fixed $x_{10}$ (the Doppler shift) is evident in fig. \ref{fig:doppler2}. As mentioned above, the initial mirror and particle velocities of $V=1/100$ and $v=3/100$ m/s that are used in fig. \ref{fig:largemass} result in a beat frequency for the particle of $3 \times 10^{5}$ Hz.
This effect of the Doppler shift on the particle PDF (shown along the $x_{1}$ axis) differs from that in fig. \ref{fig:doppler}, which illustrates the effect of the Doppler shift in the mirror's PDF (shown along the $x_{2}$ axis). There is no measurable Doppler interference effect for the mirror in fig. \ref{fig:doppler2}, due to its coherence length being much shorter than the fringe spacing.
The choice of a short coherence length for the mesoscopic/macroscopic mirror substate and a long coherence length for the particle substate yields the expected interference for a particle reflecting from a mirror when the mirror is not measured, but requires the use of the two-time formalism (again see subsection \ref{sec:asynchDoppler} for more details). However, such a prediction is limited to a special case of a more general result. For example, this averaging procedure applied to fig. \ref{fig:spreadvelocity}(d), for simultaneous measurement, does not yield the expected particle interference in reflection when the mirror is not measured, since there is no interference over a large range of $x_{1}$ values where it exists in fig. \ref{fig:spreadvelocity}(c).
\begin{center}
\begin{figure}
\includegraphics[scale=0.3]{doppler2.pdf}
\caption{Four sequential one-body mirror PDF snapshots of a particle reflecting from a mirror, (a)$\rightarrow$(d), with different $t_{10}$ values while within each plot $t_{2}$ is varied. This illustrates how the result of averaging the PDF over $x_{2}$ is independent of $t_{2}$, leading to the expected one-body interference (derived from the two-body joint PDF) when the mirror is {\em not} measured.}
\label{fig:doppler2}
\end{figure}
\end{center}
Consider the opposite case of measurement of the mirror without a measurement of the particle, for the assumptions used in fig. \ref{fig:doppler2}. In this case, the averaging is over $x_{1}$, which washes out the fringes. Therefore, measurement of just the mirror results in no interference. This interference for the particle, but not the mirror, is that expected for a classical wave reflecting from a mirror. Yet, it is a special case rather than a general result since it depends on the coherence length assumptions.
\section{Discussion}
\label{sec:discussion}
\subsection{Prolonging interference with a perceptible fringe spacing for a mesoscopic/macroscopic mirror}
\label{sec:prolong}
Synchronous correlated interference in reflection is temporally and spatially limited by the particle-mirror coherence lengths as the incident and reflected two-body wavegroups separate. We have presented a simple model illustrating the magnitude of these effects.
Measurement of the particle first can prolong interference of the mirror, which is measured later. The resulting two one-body mirror states travel in the same direction but with momenta and energies differing by that either exchanged or not exchanged with the particle. This superposition state can either interfere or not depending on the coherence length of the mirror and the time between measurements.
For an asynchronous measurement with a small mirror coherence length, interference persists only for a small momentum exchange. Since these states have different energies their interference results in a `beat' or time dependent fringe pattern for the mirror PDF. For longer times between measurements, these two mirror wavegroups then separate beyond their coherence lengths, due to their different average momenta. This then results in a spatially separated superposition state without interference.
A microscopic particle reflecting from a mesoscopic/macroscopic mirror at rest is well suited to prolong this one-particle interference of the two mirror states for two main reasons. First the two mirror wavegroups maintain overlap longer since their difference in speeds is $\Delta V \approx 4mv/M$, while their coherence length is $l_{c}^{thermal}$. This results in an overlap time $\approx h \sqrt{M}/(4mv\sqrt{2k_{B}T})$. Second their energy difference is negligible for a stationary mirror. That is, the ``beat'' frequency goes to zero in the limit of large mirror mass while the difference in momenta of these mirror states remains small but finite.
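Evaluated with the rubidium-mirror parameters used above ($m=1.4\times 10^{-25}$ kg, $M=10^{-8}$ kg, $v=3\times 10^{-2}$ m/s, $T=1$ K), this overlap time is of order one second; a short numerical sketch is
\begin{verbatim}
import numpy as np

h, kB = 6.626e-34, 1.381e-23     # J s, J/K
m, M  = 1.4e-25, 1.0e-8          # kg: atom and mirror
v, T  = 3.0e-2, 1.0              # m/s, K

print(h*np.sqrt(M)/(4*m*v*np.sqrt(2*kB*T)))   # ~0.8 s
\end{verbatim}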
It is this difference in mirror momenta which leads to the phase difference $\Delta K~x_{2}$ when superposing these two mirror wavefunctions. That is, the mirror fringe spacing is only determined by the change in particle momentum since conservation of momentum in reflection requires $\Delta K=-\Delta k$. The fringe structure of the mesoscopic/macroscopic mirror is then commensurate with that of the microscopic particle.
Double slit interference with a massive particle, on the other hand, superposes two one-body states with the same momentum whose difference in phase, $K \Delta x$, is due to the difference in path lengths $\Delta x$ from each slit to the measurement point times an extremely large wavevector (rather than the small $\Delta K$ for the mirror reflecting a microscopic particle). This results in an imperceptible fringe spacing on the mesoscopic/macroscopic object traversing the double slits.
The interference of the mesoscopic/macroscopic mirror states requires neither standard division of amplitude nor division of wavefront interferometric methods. It does, however, use measurement acting as a beamsplitter to generate two mirror states which interfere for times greater than that for synchronous measurement. In addition, path lengths need not be carefully matched for interference to be manifest. These are distinct experimental advantages over division of amplitude or wavefront interferometers, particularly for a mesoscopic/macroscopic mirror mass.
\subsection{Measurement theory}
\label{sec:measurementtheory}
Asynchronous measurement contains some subtle issues. Foremost among them is what constitutes a measurement of the particle, after which no interaction with the mirror occurs. An experimental realization of such a measurement might consist of the particle-mirror reflection occurring only along the x-axis while another particle called the ``probe'' moves along the y-axis, as illustrated in fig. \ref{fig:overview}. A measurement then involves the probe absorbing, being absorbed by, being scattered by, or scattering from the particle. Any back action of this measurement on the particle has no effect on the mirror substate, by the no-interaction assumption.
The operational definition of such a measurement, when the particle is in the correlated interference region, is then that which causes splitting of the mirror state. This could result in the observation of prolonged interference of the mirror states or observation of these states when they no longer overlap (this is discussed in more detail in section \ref{sec:measuring}). Note that the probe need not interact with the particle at any particular location apart from it occurring within this correlated interference region (and not at a PDF minimum).
Related issues involve modeling this sequential measurement. In the Copenhagen interpretation, a measurement of the particle's position collapses its substate into an eigenstate of that position operator. The assumption of no interaction after the first measurement then requires that the particle does not reflect a second time from the mirror's substate. The state of the particle after measurement is therefore irrelevant to the subsequent evolution of the mirror substate and is treated as such in the formalism presented above by fixing the particle coordinates in the two-body function at its measurement position and time. A more realistic model would account for a measurement which occurs over a distance $\Delta x_{1}$ rather than at a point in space. Such a treatment is beyond the scope of this work.
This assumption of the lack of interaction can be relaxed to further study the implications of the collapse postulate. To illustrate this let $\Psi_{1}[x_{1},t_{1},x_{2},t_{2}]$ be the solution to the Schr\"odinger equation for the incident uncorrelated plus the reflected correlated wavegroups. Also let the first measurement be of the particle in the correlated interference regime at $x_{1}=x_{10}$ and time $t_{10}$. The particle substate then collapses into an eigenstate of the position operator and therefore generates the two-body uncorrelated wavefunction $\Psi_{20}=\Psi[x_{1},t_{10}] \Psi_{1}[x_{10},t_{10},x_{2},t_{10}]$ where $\Psi[x_{1},t_{10}]=\delta[x_{1}-x_{10}]$. To determine the effects of allowing interaction (a second reflection) before a later measurement of the mirror, the solution to the Schr\"odinger equation $\Psi_{2}[x_{1},t_{1},x_{2},t_{2}]$ for $t_{1}>t_{10}$ and $t_{2}>t_{10}$ is needed. This is determined from the initial condition $\Psi_{20}$ along with the boundary condition at the mirror surface. However, the particle must not be absorbed in the measurement process since it is now allowed to interact again with the mirror after its initial measurement.
As a particular example, consider measuring the particle at a position in the correlated interference region given by the middle snapshot in fig. \ref{fig:interference}. The particle-mirror state just after this measurement is a cut along the $x_{2}$ axis at $x_{1}=x_{10}$ of the particle-mirror wavefunction associated with this PDF, due to the delta function nature of this collapse. This wavefunction in the $(x_{1},x_{2})$ plane is then the initial condition used in solving the two-body Schr\"odinger equation for the subsequent particle-mirror state. The reflection boundary condition at $x_{1}=x_{2}$ must also be satisfied. The narrow particle substate along the $x_{1}$ axis will lead to its ``diffusion'' and subsequent ``re-reflection.''
Note that if the particle is measured a second time, before a measurement of the mirror, then the procedure just mentioned must be repeated. Measurement-induced multiple reflections then occur and are, in themselves, of fundamental interest.
\subsection{Asynchronous measurement: prediction of the Doppler effect}
\label{sec:asynchDoppler}
A time dependent interference pattern is expected classically when a harmonic wave reflects from a moving mirror. For a stationary mirror the incident and reflected waves form a standing wave which is measured by placing a detector at a fixed position where the incident and reflected waves overlap \cite{Wiener}. Motion of the mirror introduces a Doppler shift in the reflected wave which generates time dependence in this interference. The detector fixed in space then measures a time dependent intensity variation.
Consider the analogous process for the particle-mirror system. The incident and reflected two-body wavefunctions have the same total energy in reflection from a moving mirror. Interference in the synchronous formalism involves superposing these two wavefunctions, each with the same frequency. The result, given by eqn. \ref{eq:EntangledInterference} with $t_{1}=t_{2}$, is then independent of time.
To illustrate this, consider the middle snapshot of fig. \ref{fig:interference}. For different snapshots (not shown), all of which still partially overlap spatially with this middle snapshot, the envelope of the PDF waveform moves through the $(x_{1},x_{2})$ plane, yet the maxima and minima of the fringes for these different snapshots remain in the same location.
The Doppler shift for the particle-mirror system, analogous to a classical wave reflecting from a moving mirror, involves only measuring the particle flux at a fixed position with no measurement of the mirror. Not measuring the mirror involves averaging the two-body PDF (trace) over the mirror coordinates. The mirror coherence length determines if this averaging ``washes-out'' the interference.
An example, for the synchronous case is shown in fig. \ref{fig:spreadvelocity} (a). However, a smaller mirror coherence length results in a PDF which, when averaged over $x_{2}$, maintains the interference along $x_{1}$, as is illustrated in fig. \ref{fig:spreadvelocity} (c). The PDF for simultaneous correlation, with a coherence length associated with that shown in fig. \ref{fig:spreadvelocity} (c), then generates only static interference and not the expected time dependent interference effect associated with the Doppler shift.
However, as shown in subsections \ref{subsec:beat} and \ref{subsec:trace}, the two-time formalism predicts a Doppler interference effect similar to that classically when measuring only the particle, but only as a special case (for certain coherence lengths) of a more general result. This is a consequence of using the two-time formalism to parse the total energy and therefore the frequency of the wavefunction into terms associated with the particle and mirror rather than only using the expression for the total energy.
\subsection{Decoherence}
\label{sec:decoher}
Interference in a typical one-body interferometer `washes-out' as the fringe spacing becomes imperceptible with increasing mass. Another mechanism that eliminates interference is a measurement to determine along which path the object traveled through the interferometer. The former issue is mitigated using two-body reflection of a microscopic particle from a mesoscopic/macroscopic object, as described above, while the latter is the first topic of this section.
\subsubsection{Path information}
\label{sec:path}
A method to determine path information and therefore destroy interference utilizes a `Heisenberg microscope,' where an additional particle scatters from the object traversing the interferometer. If the wavelength of this particle, $\lambda$, is smaller than the spatial separation along the two paths, $\Delta x$, then path information can be obtained in principle from the state of this scattered probe particle and interference is destroyed. This was verified using a three grating Mach-Zehnder interferometer traversed by an atom and using a photon as the probe to determine path information \cite{chapman}. The contrast of the interference, as a function of $\Delta x$, was shown to drop dramatically for $\Delta x>\lambda/2$.
Generating such decoherence for particle-mirror interference involves using a probe particle to determine the position of the mirror during interference (after measurement of the particle). There are two distances that the mirror could have moved: that associated with and that without reflection. If the probe particle can resolve the difference in these distances then the `path' associated with reflection is distinguishable from that without reflection.
For a synchronous measurement of a near stationary mirror with $m/M<<1$ and $V/v<<1$, the mean difference in these mirror distances is $\Delta x\approx 2l_{c}^{particle}m/M$, which when equal to the wavelength of a probe particle of mass $m^{*}$ then requires that the probe have velocity $v_{probe}\approx hM/(4l_{c}^{particle}mm^{*})$. For asynchronous measurement $\Delta x\approx 2v\Delta tm/M$ requiring a probe velocity $v_{probe}\approx hM/(2mm^{*}v\Delta t)$, where $\Delta t=t_{2}-t_{1}$.
This calculation can also be used to approximate the decoherence of the mirror due to collisions with the gas atoms that form the surrounding thermal bath, if these atoms are treated as probes which determine path information by scattering from only one of the superposed mirror states. The interference in reflection described above is then maintained if there is a small probability of these atoms having the velocity $v_{probe}$. For a mesoscopic mirror, $M\approx 10^{-10}$ kg, and both microscopic particle and probe atoms of mass $\approx 10^{-25}$ kg, $v_{probe} \approx 10^{6}$ m/s. Atoms with such thermal speeds are not likely to be found in a low temperature gas.
This decoherence mechanism can also be applied to photon emission by the mirror during interference. If the photon's wavelength is small enough to resolve the mirror location associated with one of these positions, interference is destroyed since it can then be determined at which position the reflection occurred \cite{cronin}. For an ultra cold atom reflecting from a mirror of mass $M \approx 10^{-10}$ kg, $\Delta x\approx 2v\Delta tm/M$, requiring a wavelength of $\approx 10^{-15}$ m for $\Delta t=1$ s. The probability of such a thermal photon emission is small.
However, for such a probe to destroy mirror interference it must interact with only one of the mirror states (how this could be accomplished with closely spaced mirror states is not obvious). If not, then the probe (with a coherence length longer than the separation of the mirror states) itself is placed into a superposition state of having interacted with both mesoscopic/macroscopic mirror states. This results in probe interference, which can be used as a method to determine that the mirror is in a superposition state, as discussed in section \ref{sec:measuring}.
These calculations indicate that decoherence associated with obtaining path information can be mitigated with a judicious choice of parameters. This is due in part to the displacements associated with the interfering mirror substates being proportional to $m/M$. To measure such small displacements requires probe particles with proportionally smaller wavelengths which are typically not found in the environment.
\subsubsection{Environmental decoherence}
\label{sec:environ}
Another decoherence mechanism is through coupling with the many degrees of freedom of the surrounding environment. Zurek \cite{zurek} predicted exponential decay of the off-diagonal density matrix element terms for a mesoscopic/macroscopic object in a superposition state, $\exp[-t/t_{D}]$, with time constant $t_{D}=t_{R}(\lambda_{T}/\Delta x)^{2}$, where $t_{R}$ is the thermal relaxation time, $\Delta x$ is the difference in distance between these two states, and $\lambda_{T}$ is the thermal de Broglie wavelength of the mesoscopic/macroscopic object. The decay is into a mixed rather than a pure state.
Applying this decoherence model to the mesoscopic/macroscopic mirror states after measurement of the particle, as described above, yields $t_{D}/t_{R}\approx M h^{2}/(8k_{B}T(mv \Delta t)^{2})$. Reflection of an atom from a mirror of mass $M\approx 10^{-8}$ kg at $T=1$ K yields $t_{D}\approx 10^{5}\,t_{R}$ for $\Delta t=1$ s. However, $t_{R}$ can vary dramatically with the environmental and material properties and with the dimensions of the mirror; macroscopic yet small masses can have short thermal relaxation times. Nevertheless, in this model of decoherence, the environmental decoherence of the mesoscopic/macroscopic mirror states generated by asynchronous measurement can be made either negligible or non-negligible, depending on these parameters.
This decoherence time was derived to show that mesoscopic/macroscopic bodies in superposition states quickly decohere, so that any interference due to such superpositions should not be observed. On the other hand, asynchronous measurement generates a superposition of macroscopic mirror states whose small spatial separation mitigates this environmental decoherence.
\subsubsection{Decoherence overview}
\label{sec:overview2}
It has been shown that correlated interference for a microscopic particle reflecting from a mesoscopic/macroscopic mirror does not disappear with increasing mirror mass, is difficult to eliminate via a path measurement, and can be made insensitive to environmental decoherence. It should also be pointed out that the robust character of interference for objects with many degrees of freedom is reinforced by measurements which demonstrate that even if the size of the object is larger than both the coherence length and de Broglie wavelength, interference can still be observed \cite{cronin}.
\subsection{Measuring the mirror in a superposition state}
\label{sec:measuring}
A third particle, reflecting from the mirror after asynchronous measurement, can act as a probe to confirm that the mirror is in a superposition state. Such reflection is again described by a two-body wavefunction, provided the measurement of the initial particle has removed it from any further interaction with the split mirror states and with the probe. The result is then a sum of two-body solutions to the Schr\"odinger equation, one for the probe reflecting from each mirror state. Wavegroups are then formed from these harmonic states as described above.
The interference in reflection of these two-body wavegroups is robust since overlap is maintained as they both travel in the same direction. Measuring the probe, without any measurement of the mirror, does not destroy the probe interference for sufficiently small mirror coherence lengths, as described in subsection \ref{sec:asynchDoppler}. Therefore, the more difficult correlation measurement of both the probe and mirror need not be performed.
The coherence length of the probe need only be larger than the very small displacement of the mirror states for this probe interference to be exhibited, while the phase difference between these probe states is determined by the mass of the probe. With such interference, the probe cannot distinguish between the mirror states as discussed in section \ref{sec:path}. Verification of interference from only the reflected probe is then a method to measure the mesoscopic/macroscopic mirror in a superposition state, without any `direct' measurement of the mirror. Details of such a calculation are beyond the scope of this foundational work.
\section{Summary}
\label{sec:summary}
Asynchronous measurement in quantum mechanics has been treated for a particle reflecting from a mirror. The results presented assume measurement of the particle first, collapse of the particle substate, and a later measurement of the mirror while no particle-mirror interaction occurs between the times of these measurements. The two-body Schr\"odinger equation, solved with standard techniques, is parsed to completely separate the variables associated with the particle from those of the mirror, so that when the particle is measured only its parameters are fixed while those of the mirror can continue to evolve in time until the mirror is measured.
The subsequent interference of the mirror states which have and have not reflected the particle is in many ways familiar from examples in quantum mechanics in which an outcome can be achieved in indistinguishable ways. Yet this process of splitting differs by using measurement as its mechanism. This is not possible in a one-body system. In addition, the example used here of a microscopic particle reflecting from a mesoscopic or macroscopic mirror illustrates why these interference effects do not vanish with increasing mirror mass.
{\em Without} such measurement, the two-body wavegroup evolves in time into one which has completely reflected and no longer overlaps with the incident wavegroup. Correlated interference then disappears, since there is no longer any uncertainty about the particle and mirror belonging to the incident or reflected states.
Correlated interference in reflection is an example of one of the simplest interferometers, utilizing neither division of amplitude nor division of wavefront methods to generate interference. In addition, path lengths need not be carefully matched for interference to be manifest.
The complexity of {\em asynchronous} correlated interferometry is revealed as the coherence lengths of the particle and mirror substates are varied. Examples are: (1) When both are small then essentially classical reflection results. Measurement of the particle either occurs after {\em or} before reflection, resulting in a later measurement of the mirror either having recoiled {\em or} not recoiled. (2) When both are large, compared with the particle's wavelength, then measurement of the particle in the correlated interference region acts as a beamsplitter generating mirror states which have {\em and} have not reflected the particle. (3) Measurement of the particle only, for short mirror and long particle coherence lengths, yields the expected interference due to the Doppler effect but only as a special case of a more general result.
Other issues of asynchronous measurements presented include: transfer of coherence between the particle and mirror, the effects of substate coherence on asynchronous interference, distortion of the two-body wavegroups due to reflection, interference effects on mesoscopic/macroscopic mirrors, and the prolonging of interference of the mirror substate when the particle is measured first in the region of correlated interference. Finally, the superposition of mesoscopic/macroscopic mirror states was shown to be insensitive to environmental decoherence.
Some consequences of asynchronous measurements not presented include: multiple sequential particle measurements before a measurement of the mirror (two-body quantum Zeno effect), treatment of asynchronous measurement in other interpretations of quantum mechanics, two-body effects for spatially varying potentials, reflection of a massless particle, dealing with an inelastic collision, and both synchronous and asynchronous two-body treatment of quantum tunneling. Perhaps the one modification which would result in a more familiar interferometer is to incorporate a partially transmitting, rather than totally reflecting, mirror. One example is a thin aluminum slab whose two surfaces partially retro-reflect a neutron. This then leads to a two-body interferometer analogous to the one-body thin-film interference of light. Finally, the more likely domain in which these predictions will be tested is in the reflection of a microscopic particle from a microscopic ``mirror,'' the details of which have also not been discussed.
There is little experimental support for asynchronous correlation interferometry. Yet, it involves perhaps the most elementary of only a few exact solutions to the two-body Schr\"odinger equation. It is surprising that such theoretical simplicity is not reflected experimentally.
Measurement in quantum mechanics plays a role different from that in any other physical theory and its interpretation remains controversial. It is shown here that measurement can act as a beamsplitter, which, under the appropriate conditions, can prolong the transient interference associated with reflection in a two-body quantum system. Although far from being comprehensive, these results also indicate a direction, heretofore unexplored, for further research in understanding asynchronous quantum correlation, probing decoherence, and extending quantum measurements to larger masses.
\begin{acknowledgments}
The authors would like to thank Professors C. Durfee and J. Scales for useful comments, along with referee guidance.
\end{acknowledgments}
\section{Appendix}
\label{sec:appendix}
Without collapse the two-body wavefunction, expressed in terms of the two times, evolves and conserves probability. The probability of measuring the particle at $(x_{1},t_{1})$ and the mirror at $(x_{2},t_{2})$ is given by $\iint PDF[x_{1},t_{1},x_{2},t_{2}] dx_{1} dx_{2}$ with the joint PDF determined by the solution of equation \ref{eq:Schreqn} as $\Psi \Psi^{*}$ but expressed in terms of the two time variables in the particle-mirror system. Using this equation, conservation of probability can then be expressed locally as,
\begin{eqnarray}
\partial_{t_{1}}PDF[x_{1},t_{1},x_{2},t_{2}] +\partial_{t_{2}} PDF[x_{1},t_{1},x_{2},t_{2}] +\notag \\
\partial_{x_{1}} j_{1}[x_{1},t_{1},x_{2},t_{2}] +\partial_{x_{2}} j_{2}[x_{1},t_{1},x_{2},t_{2}]=0,
\label{eq:consProb}
\end{eqnarray}
where $j_{1}[x_{1},t_{1},x_{2},t_{2}]=\hbar (\Psi^{*} \partial_{x_{1}} \Psi-\Psi \partial_{x_{1}} \Psi^{*})/(2 i m)$ and $j_{2}[x_{1},t_{1},x_{2},t_{2}]=\hbar (\Psi^{*} \partial_{x_{2}} \Psi-\Psi \partial_{x_{2}} \Psi^{*})/(2 i M)$. While the expressions for these current densities appear similar to that for one particle systems there are subtle but important differences for a two body system \cite{currentdensity}.
Multiplying equation \ref{eq:consProb} by $dx_{1} dx_{2}$, integrating over the segment from $a$ to $b$ along the x-axis ($a \leq x_{1} \leq b$ and $a \leq x_{2} \leq b$), and then rearranging terms yields a solution to equation \ref{eq:consProb} if
\begin{eqnarray}
\partial_{t_{1}} \int_{a}^{b} PDF[x_{1},t_{1},x_{2},t_{2}] dx_{1} +\notag \\ j_{1}[b,t_{1},x_{2},t_{2}]-j_{1}[a,t_{1},x_{2},t_{2}]=0 \notag
\label{eq:consProb2a}
\end{eqnarray}
and
\begin{eqnarray}
\partial_{t_{2}} \int_{a}^{b} PDF[x_{1},t_{1},x_{2},t_{2}] dx_{2} +\notag \\ j_{2}[x_{1},t_{1},b,t_{2}]-j_{2}[x_{1},t_{1},a,t_{2}]=0 \notag.
\label{eq:consProb2b}
\end{eqnarray}
These equations indicate that the time rate of change of probability within the $\overline{ab}$ segment on the x-axis is determined separately by a net change in particle and mirror probability fluxes in that region, which is similar to conservation of probability in a one-body system. Now, however, the probability of the two-body system is conserved for the mirror even when $x_{1}$ and $t_{1}$ are fixed. The asynchronous procedure described here therefore maintains conservation of probability for the mirror after the particle has been measured.
\section{Introduction}
In simple quantum systems, such as collections of qubits, entanglement structure has been well-studied. The quintessential example of a two qubit entangled state is a Bell pair\footnote{The Hilbert space corresponding to a single qubit is identified with $\mathbb{C}^2$, with an orthonormal basis typically labelled by $|0\rangle $ and $|1\rangle $. The Hilbert space corresponding to two qubits is $\mathbb{C}^2\otimes \mathbb{C}^2$, for three qubits is $\mathbb{C}^2\otimes \mathbb{C}^2 \otimes \mathbb{C}^2$ and so on.}
\begin{equation}
|\psi_{\mathrm{Bell}}\rangle = \frac{1}{\sqrt{2}} \Big(|0\rangle \otimes |0\rangle + |1\rangle \otimes |1\rangle \Big).
\end{equation}
If we trace out one of the qubits, then we are left with a \emph{mixed} state
\begin{equation}\label{rdm}
\rho_1 = \mathrm{Tr}_{2}\,|\psi_{\mathrm{Bell}}\rangle\langle \psi_{\mathrm{Bell}}| = \frac{1}{2} |0\rangle \langle 0| + \frac{1}{2} |1\rangle \langle 1|,
\end{equation}
which we should think of as an ensemble, or a probability distribution over pure quantum states in the one qubit Hilbert space. A good measure of the entanglement between the original two qubits is the von Neumann entropy of the reduced density matrix \eqref{rdm}, also known as the \emph{entanglement entropy}
\begin{equation}
S_{EE} = -\mathrm{Tr}_1\,\rho_1\ln\rho_1 = \ln\,(2).
\end{equation}
The Bell state should be contrasted with states of the form $|0\rangle \otimes |0\rangle, |0\rangle \otimes |1\rangle$, etc., which are completely factorized and have no entanglement. The entanglement entropy thus measures the non-factorizability of a state.
For larger systems, one can construct states with more intricate patterns of entanglement. For instance, with three qubits one can construct the following two types of multi-party entangled states \cite{Dur:2000zz}:
\begin{equation}
|\psi_{\mathrm{GHZ}} \rangle = \frac{1}{\sqrt{2}} \Big(|000\rangle + |111\rangle \Big),
\label{GHZstate}
\end{equation}
\begin{equation}
|\psi_{\mathrm{W}} \rangle = \frac{1}{\sqrt{3}} \Big(|001\rangle + |010\rangle+|100\rangle \Big),
\label{Wstate}
\end{equation}
where we have neglected to write the tensor product symbols in favor of simpler notation. As we shall see, these two states carry different types of entanglement.
If we trace over one of the factors in the GHZ state, we get the reduced density matrix
\begin{equation}
\rho_{12} = \mathrm{Tr}_3 \,|\psi_{\mathrm{GHZ}}\rangle\langle \psi_{\mathrm{GHZ}}| = \frac{1}{2} |00\rangle \langle 0 0| + \frac{1}{2} |1 1\rangle \langle 11|.
\end{equation}
Thought of as a (mixed) two-qubit state on the first two qubits, $\rho_{12}$ is a classical probabilistic mixture over \emph{product} states, namely $|00\rangle $ and $|11\rangle$. In quantum information theory, such a state $\rho_{12}$ is called \emph{separable}. In other words, the reduced density matrix $\rho_{12}$ contains no quantum entanglement --
all the entanglement between qubit 1 and qubit 2 came from their mutual relationship with qubit 3 which was traced out.
On the other hand, if we trace over one of the factors in the W state, we obtain the reduced density matrix
\begin{equation}
\tilde{\rho}_{12} = \mathrm{Tr}_3 \,|\psi_{\mathrm{W}}\rangle\langle \psi_{\mathrm{W}}| = \frac{1}{3} |00\rangle \langle 0 0| + \frac{2}{3} |\Psi_+\rangle \langle \Psi_+|, \quad |\Psi_+\rangle = \frac{|01\rangle + |10\rangle}{\sqrt{2}}.
\end{equation}
In this case, $\tilde{\rho}_{12}$ is once again a probabilistic mixture over two-qubit states, namely $|00\rangle$ and $|\Psi_+\rangle$, but importantly $|\Psi_+\rangle$ is \emph{not} a product state. In other words, the state $\tilde{\rho}_{12}$ contains quantum entanglement between qubit 1 and qubit 2; in this case we say that $\tilde{\rho}_{12} $ is not separable. In this sense, the quantum entanglement structure of the W-state is different from that of the GHZ state.
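These statements are easy to verify numerically. The sketch below (in Python, using only \texttt{numpy}) traces out the third qubit of the GHZ and W states and computes the von Neumann entropy of the reduced density matrix together with the negativity of its partial transpose, which for two qubits vanishes if and only if the state is separable.
\begin{verbatim}
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

ghz = (kron(ket0, ket0, ket0) + kron(ket1, ket1, ket1)) / np.sqrt(2)
w = (kron(ket0, ket0, ket1) + kron(ket0, ket1, ket0)
     + kron(ket1, ket0, ket0)) / np.sqrt(3)

def rho12(psi):
    """Reduced density matrix on qubits 1,2 after tracing out qubit 3."""
    t = psi.reshape(2, 2, 2)
    return np.einsum('abk,cdk->abcd', t, t.conj()).reshape(4, 4)

def entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum())

def negativity(rho):
    """Negativity of the partial transpose on the second qubit."""
    rt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    evals = np.linalg.eigvalsh(rt)
    return float(-evals[evals < 0].sum())

for name, psi in [("GHZ", ghz), ("W", w)]:
    r = rho12(psi)
    print(name, entropy(r), negativity(r))
# GHZ: entropy ln2 ~ 0.693, negativity 0      -> rho_12 separable
# W  : entropy    ~ 0.637, negativity ~ 0.206 -> rho_12 not separable
\end{verbatim}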
Increasing the number of qubits increases the possible patterns of entanglement very quickly. In fact, for four or more qubits the SLOCC\footnote{SLOCC stands for stochastic local operations and classical communication. This classification effectively amounts to studying the equivalence classes of states in the full Hilbert space under a quotient by local actions of $SL(2,\mathbb{C})$, namely $\frac{\mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \cdots \mathbb{C}^2}{SL(2,\mathbb{C}) \times SL(2,\mathbb{C}) \times \cdots SL(2,\mathbb{C})}$. } classification gives classes of states some of which contain continuous families with fundamentally different patterns of entanglement \cite{2002PhRvA..65e2112V}. The situation is going to be even richer for quantum field theories.
The typical setup for considering entanglement entropy in relativistic quantum field theories is as follows: one starts with a \emph{connected}, codimension-one, spacelike hypersurface, i.e., a Cauchy surface $\Sigma$. In quantum field theory, one associates a Hilbert space $\mathcal{H}(\Sigma)$ to such a surface. We pick some pure state $|\psi\rangle \in \mathcal{H}(\Sigma)$. Now let us imagine partitioning $\Sigma$ into two regions $A$ and its complement $\bar{A}$ (see Fig.~\ref{fig1}a).
\begin{figure}[t]
\centering
\includegraphics[height=3cm]{Fig1.pdf}\label{fig1}
\caption{\small{\textsf{(a) The typical setup for studying entanglement entropy in quantum field theory involves choosing a connected spatial slice $\Sigma$ and partitioning it into two subregions $A$ (the shaded disc) and its complement $\bar{A}$. (b) In the present paper, we are interested in considering disconnected Cauchy surfaces $\Sigma = \Sigma_1 \cup \Sigma_2 \cup \cdots$ and studying the entanglement between these various disconnected components. }}}
\end{figure}
If the Hilbert space on $\Sigma$ factorizes as $\mathcal{H}(\Sigma) = \mathcal{H}(A) \otimes \mathcal{H}(\bar{A})$, then one can trace over one of the factors and obtain the reduced density matrix corresponding to $\psi$ on the subregion $A$:
\begin{equation}
\rho_A = \mathrm{Tr}_{\bar{A}} |\psi\rangle \langle \psi |.
\end{equation}
Generically, the density matrix $\rho_A$ so obtained is mixed and the von Neumann entropy $S(A) = -\mathrm{Tr}_A\,(\rho_A\ln\rho_A)$ measures the entanglement entropy between the region $A$ and its complement. The entropy computed this way is typically divergent in the continuum limit, owing to the short-distance entanglement near the boundary between $A$ and $\bar{A}$, but these divergences are by now well-understood.
In the present paper we will consider a different setup. Instead of considering a connected spatial slice, we will be interested in disconnected spatial slices of the form
\begin{equation}
\Sigma = \Sigma_1 \cup \Sigma_2 \cup \cdots \cup \Sigma_n,
\end{equation}
such as the one shown in Fig.~\ref{fig1}(b). As a consequence, the Hilbert space $\mathcal{H}(\Sigma)$ naturally factorizes
\begin{equation}
\mathcal{H}(\Sigma) = \mathcal{H}(\Sigma_1) \otimes \mathcal{H}(\Sigma_2) \otimes\cdots \otimes \mathcal{H}(\Sigma_n) .
\end{equation}
We can then ask for the entanglement structure of states in $\mathcal{H}(\Sigma)$ with respect to this factorization. We will sometimes refer to this type of entanglement as \emph{multi-boundary entanglement}, in order to distinguish it from the other more conventional setting involving connected spatial slices.
Multi-boundary entanglement was considered recently in the context of the AdS/CFT correspondence in \cite{Balasubramanian:2014hda, Marolf:2015vma} (see also \cite{Peach:2017npp}). In these papers, the conformal field theory (CFT) is 1+1 dimensional, and the Cauchy surface $\Sigma$ is a union of $n$ circles. Further, the states of interest are those dual to classical asymptotically-AdS multi-boundary wormhole geometries. The holographic entropies of entanglement between the various boundary circles can be studied using the Ryu-Takayanagi formula \cite{Ryu:2006bv}. Ideally, one would also like to perform similar entanglement computations entirely using field theory methods (i.e., without using the AdS dual); this was partly accomplished in \cite{Balasubramanian:2014hda, Marolf:2015vma} in certain special limits. Crucially, the CFT states could be obtained by performing the Euclidean field theory path integral on certain Riemann surfaces with $n$ circle boundaries. At special points on the moduli space of these Riemann surfaces, the field theory computation became tractable. However, at a generic point on the moduli space, the computation is too difficult to perform explicitly. It is thus natural to look for a ``simpler'' class of quantum field theories (as compared to CFTs), where we might be able to study multi-boundary entanglement using field theory techniques. A natural candidate is the class of topological quantum field theories (TQFTs) \cite{Witten:1988ze, Atiyah1988}.
Motivated by this, some of the authors of the present paper explored multi-boundary entanglement in Chern-Simons theory, in \cite{Balasubramanian:2016sro} (see also \cite{Salton:2016qpp}).\footnote{Chern Simons theory is also holographic, in the sense that it can be realized as the worldvolume theory of A-branes in topological string theory, and is dual to topological closed strings on 6d resolved conifold geometries.} The Cauchy surface $\Sigma$ was taken to be $n$ copies of a torus, and the states of interest were created by performing the path integral of Chern-Simons theory on {\it link complements} with $n$ torus boundaries. A link complement is a manifold obtained by removing a link from the 3-sphere (see Sec.~\ref{sec2} for details). In fact, with a particular choice of basis for the torus Hilbert space, the wavefunctions of these states are precisely the expectation values of Wilson loop operators in Chern-Simons theory, often called \emph{colored link invariants}. For the gauge group $SU(2)$, these are precisely the \emph{colored Jones polynomials}, as was famously shown by Witten \cite{Witten:1988hf}. The central observation in \cite{Balasubramanian:2016sro} was that these states live in the $n$-fold tensor product of the torus Hilbert space, and as such it is natural to study the entanglement between the various factors (i.e., multi-boundary entanglement) in these states. In other words, the colored Jones polynomial assigns a \emph{quantum entanglement structure} to a link in the 3-sphere.
Recently, the R\'{e}nyi entropies for a class of torus links called $T(2,2n)$ were also studied in detail in \cite{1711.06474} for general gauge groups.
In the present paper, we will further explore this quantum information theoretic approach to link topology. In Sec.~\ref{sec2}, we will review the construction of \cite{Balasubramanian:2016sro}, and show how previous results on multi-boundary entanglement in $U(1)$ Chern-Simons theory may be rewritten from the point of view of stabilizer groups. In Sec.~\ref{mgb}, we will prove that the entanglement entropy between any two sub-links of an arbitrary link gives a lower bound on the minimal-genus Heegaard splitting which separates the two sub-links. In Sec.~\ref{sec4}, we show that in $U(1)$ and $SU(2)$ Chern-Simons theory all {\it torus links} (which can be drawn on the surface of a torus), have a GHZ-like entanglement structure, in that partial traces lead to a separable state. This provides a sharp quantum-information theoretic characterization of the colored Jones polynomial for torus links. By explicit computation, we also show that many hyperbolic links (whose link complements admit a hyperbolic structure) have W-like entanglement, in that partial traces do not lead to separable states. In Sec.~\ref{sec5}, we further study hyperbolic links in the complexified $SL(2,\mathbb{C})$ Chern-Simons theory, which is of interest because of its close connection to Einstein gravity with a negative cosmological constant. In an asymptotic limit (where one of the levels $\sigma \to \infty$, corresponding to small Newton constant) we discuss how the entanglement structure is controlled by the \emph{Neumann-Zagier} potential on the moduli space of (generically incomplete) hyperbolic structures on the link complement.
\section{Setup} \label{sec2}
\subsection{Link Complements and the Colored Jones Polynomial}
In this section, we briefly review the construction of \cite{Balasubramanian:2016sro}. Consider Chern Simons theory with gauge group $G$ at level $k$. The action of the theory on a 3-manifold $M$ is given by
\begin{equation}
S_{CS}[A] = \frac{k}{4\pi} \int_{M} \mathrm{Tr}\,\left(A\wedge dA + \frac{2}{3} A \wedge A \wedge A\right),
\end{equation}
where $A= A_{\mu}dx^{\mu}$ is a gauge field (or equivalently, a connection on a principal $G$-bundle over $M$). Recall from our discussion in the previous section, that we are interested in considering disconnected spatial slices and the entanglement structure of the corresponding states. For simplicity, we consider states defined on $n$ copies of $T^2$, namely on the spatial slice (Fig.~\ref{fig2})
\begin{equation}
\Sigma_n = \cup_{i=1}^n T^2.
\end{equation}
The corresponding Hilbert space is the $n$-fold tensor product $\mathcal{H}^{\otimes n}$, where $\mathcal{H}=\mathcal{H}(T^2;G,k)$ is the Hilbert space of Chern Simons theory on a torus (for the group $G$ at level $k$). A natural way to construct states in a quantum field theory is by performing the Euclidean path integral of the theory on a 3-manifold $M_n$ whose boundary is $\partial M_n = \Sigma_n$. In a general field theory the state constructed in this way will depend on the detailed geometry of $M_n$, for instance the choice of metric on $M_n$; in our situation (i.e., for a TQFT) only the topology of $M_n$ matters. However, there are many topologically distinct Euclidean 3-manifolds with the same boundary, and the path integrals on these manifolds will construct different states on $\Sigma_n$. Following \cite{Balasubramanian:2016sro}, we will focus on a class of such 3-manifolds called link complements, which we now briefly describe.
\begin{figure}
\centering
\includegraphics[height=4cm]{Fig2.pdf}
\caption{\small{\textsf{The spatial manifold $\Sigma_n$ for $n=3$ is the disjoint union of three tori. $M_n$ is a 3-manifold such that $\partial M_n = \Sigma_n$.}}\label{fig2}}
\end{figure}
We start by considering an $n$-component \emph{link} in the 3-sphere $S^3$ (more generally, any connected, closed 3-manifold would do). An $n$-component link in $S^3$ is an embedding of $n$ (non-intersecting) circles in $S^3$. (Note that 1-component links are conventionally called \emph{knots}.) We will often denote a generic $n$-component link as $\mathcal{L}^n$, when we do not need to choose a particular link. We will label the $n$ circles which constitute the link as $L_1, \ldots, L_n$, so $\mathcal{L}^n = L_1 \cup L_2 \cup \cdots \cup L_n$. Now in order to construct the desired 3-manifold $M_n$, we remove a tubular neighbourhood $N(\mathcal{L}^n)$ of the link from inside $S^3$. In other words, we take $M_n$ to be $S^3 - N(\mathcal{L}^n)$, i.e., the complement of $\mathcal{L}^n$ in $S^3$ (Fig.~\ref{fig:fig3}). Since $\mathcal{L}^n$ is an $n$-component link, its link complement $M_n$ is a manifold with $n$ torus boundaries,\begin{equation}
\partial M_n = \cup_{i=1}^n T^2,
\end{equation}
which is precisely what we desired. We can therefore perform the path integral of Chern Simons theory on $M_n$, and obtain a state on $\Sigma_n$. In other words, for any given link $\mathcal{L}^n$ in $S^3$, the path integral of Chern Simons theory on the link complement $M_n = S^3-N( \mathcal{L}^n)$ produces a state $|\mathcal{L}^n\rangle $ in the $n$-fold tensor product of the torus Hilbert space $\mathcal{H}^{\otimes n}$.
\begin{figure}
\centering
\includegraphics[height=5.5cm]{Fig3.pdf}
\caption{\small{\textsf{The link complement (the shaded region) of a 3-component link (bold lines) inside the three-sphere. The white region indicates a tubular neighbourhood of the link which has been drilled out of the 3-sphere.}}\label{fig:fig3}}
\end{figure}
The discussion above was a bit abstract, but we can give a much more concrete expression for these states in terms of a particular basis for the torus Hilbert space, which we will denote $\{ |j\rangle \}$. In order to construct the basis state $|j\rangle $, think of the torus as the boundary of a solid torus, and insert a Wilson line in the core of the solid torus along its non-contractible cycle in the representation $R_j$. For compact gauge groups, we need only consider a finite number of \emph{integrable} representations,\footnote{For instance if $G= SU(2)$, the integrable representations are labelled by the spin $j = 0, 1/2, 1, \cdots , k/2$. } and so the Hilbert space on the torus obtained as the span of $\{|j\rangle \}$ is finite dimensional. We will not need to know further details for our present discussion, but more details can be found in \cite{Witten:1988hf}. We can write the state $|\mathcal{L}^n\rangle $ obtained by performing the path integral of Chern Simons theory on the link complement of $\mathcal{L}^n$ in terms of the above basis vectors:
\begin{equation}
|\mathcal{L}^n\rangle = \sum_{j_1,\cdots, j_n} C_{\mathcal{L}^n}(j_1, j_2, \cdots j_n) |j_1,j_2, \cdots, j_n\rangle,\qquad |j_1,j_2, \cdots, j_n\rangle\equiv |j_1\rangle \otimes |j_2\rangle \otimes \cdots \otimes |j_n\rangle
\end{equation}
where $C_{\mathcal{L}^n}(j_1,\cdots, j_n)$ are complex coefficients, which we can write explicitly as
\begin{equation}
C_{\mathcal{L}^n}(j_1, j_2, \cdots j_n)= \langle j_1,j_2,\cdots j_n | \mathcal{L}^n\rangle \, .
\end{equation}
Operationally, this corresponds to gluing in solid tori along the boundary of the link complement $S^3-N(\mathcal{L}^n)$, but with Wilson lines in the conjugate representation $R^*_{j_i}$ placed in the bulk of the $i^{th}$ torus. Thus, the coefficients $C_{\mathcal{L}^n}(j_1,\cdots j_n)$ are precisely the \emph{colored link invariants}\footnote{For the gauge group $SU(2)$, these are often called the colored Jones polynomials, after dividing by the $S^3$ partition function, which is an overall color-independent constant. } of Chern Simons theory with the representation $R^*_{j_i}$ placed along the $i^{th}$ component of the link:
\begin{equation}
C_{\mathcal{L}^n}(j_1, \cdots, j_n) = \left\langle W_{R^*_{j_1}}(L_1) \cdots W_{R^*_{j_n}}(L_n)\right\rangle_{S^3} \label{cli} \,~~ ; ~~ W_{R}(L) = \mathrm{Tr}_R\left(e^{\oint_L A} \right) \, ,
\end{equation}
where we recall that $L_i$ are the individual circles which constitute the link, namely $\mathcal{L}^n = L_1 \cup \cdots \cup L_n$. Thus, the link state $|\mathcal{L}^n\rangle$ encodes all the coloured link invariants corresponding to the link $\mathcal{L}^n$ at level $k$.
\begin{figure}
\centering
\includegraphics[height=5cm]{Fig4.pdf}
\caption{\small{\textsf{We can compute the entanglement between the two sublinks $\mathcal{L}^m_A$ (blue) and $\mathcal{L}^{n-m}_{\bar{A}}$ (orange) of $\mathcal{L}^n$ by tracing out the factor corresponding to $A$ in the full state $|\mathcal{L}^n\rangle$ and computing the von Neumann entropy of the resulting reduced density matrix.}}\label{fig:fig4}}
\end{figure}
The important point emphasized in \cite{Balasubramanian:2016sro} is that the above construction assigns a \emph{quantum entanglement structure}\footnote{By entanglement structure, we mean the pattern of quantum entanglement inherent in the state $|\mathcal{L}^n\rangle$.} to a link in the 3-sphere. In this paper, we will probe this entanglement structure by using standard quantum information theoretic quantities, namely entanglement entropy and separability (discussed in the previous section) upon tracing out various factors in the state. For instance, we can compute the entanglement entropy corresponding to partitioning the $n$-component link into an $m$-component sub-link $\mathcal{L}^m_A = L_1 \cup L_2 \cup \cdots \cup L_m$ and its complement $\mathcal{L}^{n-m}_{\bar{A}} = L_{m+1} \cup \cdots \cup L_n$ (see figure~\ref{fig:fig4})
\begin{eqnarray}
& &S_{EE}(\mathcal{L}^m_{A}| \mathcal{L}^{n-m}_{\bar{A}}) = -\mathrm{Tr}_{L_{m+1}, \cdots , L_n}(\rho\,\mathrm{ln}\,\rho),\nonumber\\
& & \rho = \frac{1}{\langle \mathcal{L}^n |\mathcal{L}^n\rangle }\mathrm{Tr}_{L_1,\cdots, L_m}|\mathcal{L}^n\rangle\langle \mathcal{L}^n| ,
\end{eqnarray}
where by tracing over $L_i$ we mean tracing over the Hilbert space of the torus boundary corresponding to the circle $L_i$. Further, we can also ask about the separability properties of the reduced density matrix $\rho$ obtained by tracing out $\mathcal{L}^m_A$. We will demonstrate these ideas in the simple example of $U(1)$ Chern-Simons theory below.
Before we proceed, we point out two important facts. First, take the link $\mathcal{L}^n$ to be $n$ unlinked knots. In this case, it is well-known that the coloured link-invariant in Eq.~\eqref{cli} factorizes as
\begin{equation}
\frac{C_{\mathrm{unlink}}(j_1, \cdots, j_n)}{C_0} = \prod_{i=1}^n \frac{C_{L_i}(j_{i})}{C_0} \, ,
\end{equation}
where $C_0 = \mathcal{S}^0_0$ is the partition function of Chern-Simons theory on $S^3$. It is then clear that the state $|\mathcal{L}^n\rangle$ is a product state
\begin{equation}
|\mathcal{L}^n\rangle \propto |L_1\rangle \otimes |L_2 \rangle \otimes \cdots \otimes |L_n\rangle,
\end{equation}
and hence $|\mathcal{L}^n\rangle$ is completely unentangled. This suggests that the quantum entanglement of link states captures aspects of the topology of the corresponding links. More generally, if a link \emph{splits} into two sub-links $\mathcal{L}^m_A$ and $\mathcal{L}^{n-m}_{\bar{A}}$, where by split we mean that there exists a 2-sphere separating one sub-link from the other, then
\begin{equation}
|\mathcal{L}^n\rangle \propto |\mathcal{L}^m_{A} \rangle \otimes |\mathcal{L}^{n-m}_{\bar{A}}\rangle,
\end{equation}
and the entanglement entropy between the two sub-links vanishes.\footnote{It is tempting to speculate that there must be a sense in which the converse statement is true as well for non-Abelian gauge groups, that is, if the entanglement entropy between two sub-links of a link vanishes, then the link splits into the two sub-links. A similar conjecture was put forth in \cite{Chun:2017hja}. } We will return to a generalized notion of separating surfaces in section \ref{mgb}, where we will use them to give an upper bound on the entanglement between sublinks.
Secondly, above, we ignored the issue of {\it framing} \cite{Witten:1988hf} of the individual knots comprising the link $\mathcal{L}^n$. Intuitively, if we replace each of the circles in the link with a ribbon, then the relative linking number between the two edges of the ribbon, or \emph{self-linking}, is ambiguous. In general, to fix this ambiguity we must pick a framing for each circle, and consequently the coloured link invariants are really defined for framed links. However a different choice of framing of, let's say, the $i^{th}$ circle $L_i$ by $t$ units is equivalent to performing a $t$-fold Dehn twist on the corresponding torus. This corresponds to a local unitary transformation on the corresponding link state. Local unitary transformations of this type do not affect the entanglement entropies (or more general information-theoretic quantities) we are interested in. Hence, {\it the entanglement structure is framing-independent.}
\subsection{$U(1)$ Chern Simons Theory and Stabilizer Groups}\label{sect:AbelianCS+SG}
As a first example, let us consider $U(1)$ Chern Simons theory at level $k$. Using the expression for the link invariant from \cite{Witten:1988hf}, the wavefunction of a given $n$-component link complement state can be written as
\begin{align}
|\mathcal{L}^n\rangle = \frac{1}{k^{n/2}} \sum_{j_1, \ldots, j_n} \exp \left( \frac{2\pi i}{k} \sum_{a<b} j_a j_b \ell_{ab} \right) |j_1, \ldots, j_n\rangle, \label{eq:linkstates}
\end{align}
where the summation over the basis states $|j_1, \ldots, j_n\rangle$ is taken mod $k$ (i.e., $j_i \in \mathbb{Z}_k,\,\forall i$), and $\ell_{ab}$ is the linking number between $L_a$ and $L_b$ mod $k$. Generically, there are self-linking terms in this formula; however, they can be set to zero by a sequence of local unitary transformations (i.e., changes of framing of the links). Therefore self-linking does not affect the entanglement between sublinks, and has been omitted.
For any bipartition of the link $\mathcal{L}^n$ into two sublinks $\mathcal{L}^m_{A} = L_1 \cup L_2\cdots \cup L_m$ and $\mathcal{L}^{n-m}_{\bar{A}} = L_{m+1} \cup \cdots \cup L_n $, let $\boldsymbol\ell$ be the \emph{linking matrix} across the partition,
\begin{align}
\boldsymbol\ell= \begin{pmatrix} \ell_{1,m+1} & \ell_{2,m+1} & \ldots & \ell_{m,m+1} \\ \ell_{1,m+2} & \ell_{2,m+2} & \ldots &\ell_{m,m+2} \\
\vdots & \vdots & \ddots & \vdots \\ \ell_{1,n} & \ell_{2,n} & \hdots & \ell_{m,n} \end{pmatrix}, \label{eq:linkmat}
\end{align}
and let $|\ker \boldsymbol\ell |$ be the cardinality of the kernel of $\boldsymbol\ell$, where the kernel is taken over the field $\mathbb{Z}_k$ of integers mod $k$. The entanglement entropy across the bipartition is given by:\footnote{For a bipartition of $\mathcal{L}^n$ into $1$-component and $(n-1)$-component sublinks, this formula reduces to
$$S_{EE} = \ln \left(\frac{k}{\text{gcd} (k, \ell_{12}, \ldots, \ell_{1n})}\right).$$}
\begin{align}
S_m = \ln \left(\frac{k^m}{|\ker \boldsymbol\ell|}\right). \label{eq:entropy}
\end{align}
This formula was derived in \cite{Balasubramanian:2016sro} by using the replica trick. A corollary of this formula is that the entanglement entropy across a bipartition vanishes if and only if the linking matrix vanishes (mod $k$). Below, we will give a different derivation of equation \eqref{eq:entropy}.
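As a concrete illustration of equation \eqref{eq:entropy}, the following sketch (in Python; the level $k=3$ and the pairwise linking numbers are illustrative choices) builds the wavefunction \eqref{eq:linkstates} for a three-component link, computes the entanglement entropy of a single component directly, and compares it with $\ln(k^m/|\ker \boldsymbol\ell|)$ obtained by brute-force counting of the kernel over $\mathbb{Z}_k$.
\begin{verbatim}
import itertools
import numpy as np

def link_state(k, n, linking):
    """U(1)_k link-complement state for an n-component link,
    with linking[a][b] the linking number between components a and b."""
    dim = k ** n
    psi = np.zeros(dim, dtype=complex)
    for idx, js in enumerate(itertools.product(range(k), repeat=n)):
        phase = sum(js[a] * js[b] * linking[a][b]
                    for a in range(n) for b in range(a + 1, n))
        psi[idx] = np.exp(2j * np.pi * phase / k)
    return psi / np.sqrt(dim)

def entropy_first_m(psi, k, n, m):
    """von Neumann entropy of the first m components."""
    rho = np.outer(psi, psi.conj()).reshape(k**m, k**(n-m), k**m, k**(n-m))
    rho_A = np.einsum('ajbj->ab', rho)
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum())

def kernel_size_mod_k(ell, k):
    """Brute-force |ker l| over Z_k for the (n-m) x m linking matrix ell."""
    rows, cols = len(ell), len(ell[0])
    return sum(all(sum(ell[r][c] * alpha[c] for c in range(cols)) % k == 0
                   for r in range(rows))
               for alpha in itertools.product(range(k), repeat=cols))

k, n, m = 3, 3, 1
linking = [[0, 1, 2],
           [1, 0, 1],
           [2, 1, 0]]          # illustrative pairwise linking numbers
psi = link_state(k, n, linking)
ell = [[linking[0][1]], [linking[0][2]]]   # linking matrix across {L1} | {L2, L3}
print(entropy_first_m(psi, k, n, m),
      np.log(k**m / kernel_size_mod_k(ell, k)))   # both equal ln 3 here
\end{verbatim}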
The link complement states in Abelian Chern Simons theory fall into a special class of states in quantum information theory known as \emph{stabilizer states} \cite{Salton:2016qpp} which find important application in the theory of quantum computing and quantum error correction \cite{gottesman}. In fact any wavefunction of the form in (\ref{eq:linkstates}) is known to be a stabilizer state \cite{gross}. Stabilizer states have the property that they are simultaneous eigenstates of unit eigenvalue of an associated Abelian group of operators called the \emph{stabilizer group} \cite{stabcodes1, stabcodes2, stabcodes3, linden}. We will explicitly construct below the stabilizer group corresponding to a given link complement state. The entanglement entropy of a sub-factor in such states is known in terms of properties of the stabilizer group \cite{linden, stabentropy}. We will show that this formulation precisely reproduces equation (\ref{eq:entropy}) in terms of the linking matrix.
The $U(1)$ Chern-Simons states obtained from link complements in $S^3$ in fact correspond to a subclass of stabilizer states called \emph{weighted graph states} (see \cite{graphreview, graphentangle, schlingemann} for graph states on qubits, and \cite{quditgraphs1, quditgraphs2} for graph states on k-bits).
To construct such states one starts with a weighted graph, which consists of a set of vertices $\mathcal{V}$ joined by edges $\mathcal{E}$. Each edge carries a number called a \emph{weight}; one may equivalently consider an edge of weight $w$ to correspond to $w$ edges between the same two vertices. To each vertex $a \in \mathcal{V}$, one associates the uniform superposition of states
\begin{align}
|+\rangle_a = \frac{1}{\sqrt{k}} \sum_{j \in \mathbb{Z}_k} |j\rangle_a.
\end{align}
The graph state is then built by acting with unitaries on the initial state $ |+\rangle^{\otimes n}= \otimes_{a \in \mathcal{V}} |+\rangle_a$. The unitary $U_{ab}$ creating an edge of weight $\ell_{ab}$ between vertex $a$ and vertex $b$ is specified by the following action on the basis of states for an $n$-vertex graph
\begin{align}
U_{ab} |j_1, \ldots, j_n\rangle = \exp \left( \frac{2\pi i}{k} j_a j_b \ell_{ab} \right) |j_1 , \ldots, j_n\rangle .
\end{align}
The graph state $|\psi\rangle$ is then given by acting with the product of all unitaries corresponding to all choices of pairs of vertices. That is, for an $n$-vertex graph, the graph state is
\begin{align}
|\psi\rangle = \prod_{a,b \,\in \, \mathcal{V}} U_{ab} |+\rangle^{\otimes n}.
\end{align}
The states thus prepared are exactly the link complement states (\ref{eq:linkstates}). One obtains the weighted graph corresponding to a given link by replacing each knot with a vertex and connecting vertices with the number of edges corresponding to the linking number between the respective knots, as in Fig.~\ref{fig:graphlinks}. The linking matrix $\boldsymbol\ell$ for any bipartition then maps to a sub-block of the adjacency matrix of the corresponding graph.
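The equivalence between this graph-state preparation and the explicit wavefunction \eqref{eq:linkstates} is also easy to check numerically; the sketch below (in Python, with illustrative values of the level $k$ and of the edge weights) acts with the diagonal unitaries $U_{ab}$ on $|+\rangle^{\otimes n}$ and compares the result with \eqref{eq:linkstates}.
\begin{verbatim}
import itertools
import numpy as np

k, n = 3, 3
weights = {(0, 1): 1, (0, 2): 2, (1, 2): 1}   # illustrative edge weights l_ab

# |+> on each vertex
plus = np.ones(k) / np.sqrt(k)
psi = plus
for _ in range(n - 1):
    psi = np.kron(psi, plus)

# Act with the diagonal entangling unitaries U_ab
basis = list(itertools.product(range(k), repeat=n))
for (a, b), l in weights.items():
    phases = np.array([np.exp(2j * np.pi * js[a] * js[b] * l / k) for js in basis])
    psi = phases * psi

# Compare with the explicit link-state wavefunction
target = np.array([np.exp(2j * np.pi / k *
                          sum(js[a] * js[b] * l for (a, b), l in weights.items()))
                   for js in basis]) / k ** (n / 2)
print(np.allclose(psi, target))   # True
\end{verbatim}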
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{Fig5.pdf}
\end{center}
\caption{\small{\textsf{A four-component link and its associated weighted graph. Each knot corresponds to one vertex in the graph. The weight of an edge (depicted here by the number of edges connecting two vertices) is the linking number between the circles corresponding to the vertices.}} \label{fig:graphlinks}}
\end{figure}
The stabilizer group of an $n$-vertex weighted graph state for arbitrary $k$ is known \cite{graphentangle} and is constructed from the discrete Heisenberg group generated by ``shift" and ``clock" operators $X$ and $Z$. In terms of the orthonormal basis $|j\rangle$ on the single-torus Hilbert space, we define $X$ and $Z$ by
\begin{align}
X|j\rangle = |j+1\rangle, \qquad Z|j\rangle = e^{\frac{2\pi i j }{k}} |j\rangle,
\end{align}
where as before $j$ is an integer mod $k$. The operators $X$ and $Z$ almost commute, except for a complex phase, $XZ = e^{-\frac{2\pi i}{k}} ZX$, and the center of the group generated by $X$ and $Z$ consists only of the $k$ complex phases $C = \{e^{\frac{2\pi i j }{k}}, j\in \mathbb{Z}_k\}$. The stabilizer group for weighted graph states is generated by the center of the discrete Heisenberg groups acting on the vertices of the graph (the different tori in our link complement states), and all elements of the form
\begin{align}
\left\{K_i = X_i \prod_{j\neq i} Z_j^{\ell_{ij}} \:\biggr| \: i \in \{1\ldots n\}\right\}, \label{eq:genstab}
\end{align}
where
\begin{align}
\mathcal{O}_i = \underbrace{I \otimes I \otimes \ldots }_{i-1 \text{ operators}} \otimes \mathcal{O} \otimes \underbrace{\ldots \otimes I \otimes I}_{n-i \text{ operators}}
\end{align}
is shorthand for an operator that acts as $\mathcal{O}$ on the $i$th vertex and is otherwise the identity, so that
\begin{align}
X_i\prod_{j\neq i} Z_j^{\ell_{ij}} = \underbrace{Z^{\ell_{i1}} \otimes Z^{\ell_{i2}} \ldots}_{i-1 \text{ operators}} \otimes X \otimes \underbrace{\ldots \otimes Z^{\ell_{i(n-1)}} \otimes Z^{\ell_{in}}}_{n-i \text{ operators}}
\end{align}
is the operator which acts as $X$ on the $i$th vertex and otherwise as $Z^{\ell_{ij}}$ on the $j$th vertex.
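As a small consistency check of this description, the sketch below (in Python, with an illustrative level $k=5$ and a two-component link of linking number $2$) verifies numerically that the generators $K_i$ of equation \eqref{eq:genstab} leave the corresponding link state invariant.
\begin{verbatim}
import numpy as np

k, l = 5, 2          # illustrative level and linking number
omega = np.exp(2j * np.pi / k)

# Shift and clock operators on the k-dimensional torus Hilbert space
X = np.roll(np.eye(k), 1, axis=0)      # X|j> = |j+1 mod k>
Z = np.diag(omega ** np.arange(k))     # Z|j> = omega^j |j>

# Two-component link state with linking number l (zero self-linking)
j = np.arange(k)
psi = (omega ** np.outer(j, j * l)).reshape(-1) / k

K1 = np.kron(X, np.linalg.matrix_power(Z, l))
K2 = np.kron(np.linalg.matrix_power(Z, l), X)
print(np.allclose(K1 @ psi, psi), np.allclose(K2 @ psi, psi))   # True True
\end{verbatim}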
Suppose we have a multipartite system with $n$ components such that the Hilbert space factorizes across any bipartition of the components into two sets $A$ and $\bar{A}$, $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_{\bar A}$. The entanglement entropy for such a bipartition of this system, for any stabilizer state $|\Psi\rangle$, can be found purely in terms of the stabilizer group $G$ of $|\Psi\rangle$. To this end, define $d_A = \prod_{x \in A} |\mathcal{H}_x|$ to be the size of the Hilbert space associated with the subset $A$, and define $G_A$ to be the set of elements in $G$ so that $G_A / C_A$ acts as the identity on $\bar A$, where $C_A = C \cap G_A$. The subgroup $G_A$ is sometimes called the \emph{local subgroup} \cite{stabentropy} as it consists of exactly the stabilizer group elements which act nontrivially only on $A$. Then the entanglement entropy is given by \cite{linden}
\begin{align}
S_A = \ln \left( \frac{d_A}{|G_A / C_A|} \right). \label{eq:stabentropy}
\end{align}
The stabilizer entropy formula (\ref{eq:stabentropy}) is very similar in appearance to the link entropy formula (\ref{eq:entropy}). For any bipartition of a link $\mathcal{L}^n$ into an $m$-component sublink $\mathcal{L}^m_A$ and its complement $\mathcal{L}^{n-m}_{\bar A}$, it is immediate that $d_A = k^m$, since $A$ consists of $m$ $k$-dimensional torus factors. We now show how the link complement states can be reinterpreted from the perspective of the stabilizer formalism so that $|G_A / C_A| = |\ker \boldsymbol\ell|$.
We can now explicitly compute the local subgroups of the stabilizer to obtain the entropy formula, for the general case of an arbitrary $n$-component link. Consider a general partition of the $n$ components into sets $A$ and $\bar A$. Without loss of generality we may permute the components so that $A$ consists of the first $m$ components, while $\bar A$ consists of the remaining $n-m$ components, with $m<n-m$. All elements of the stabilizer containing an $X_i$ with $i > m$ will not be in the local subgroup $G_A$, as the only way for such an element to generate elements acting trivially in the $i$th vertex is to exponentiate to the $k$th power, yielding the identity. Since the elements of $G_A$ correspond to the different ways we can multiply together generators $K_i$ of the stabilizer group to obtain the identity on $\bar A$, each unique element of $G_A$ is specified by the number of times each generator appears in a product over all generators. That is, to each element of $G_A$ we associate a set of exponents $\alpha_i$ where each $\alpha_i$ counts the multiplicity of $K_i$ in such a product. Therefore, an arbitrary element of $G_A$ can be represented as
\begin{align}
\prod_{i=1}^m \left(X_i \prod_{j\neq i} Z_j^{\ell_{ij}} \right)^{\alpha_i} = \mathcal{O}^{(m)} \otimes I^{\otimes (n-m)},
\end{align}
where $\mathcal{O}^{(m)}$ is some combination of various powers of $X$ and $Z$ operators acting on the first $m$ vertices and $I^{\otimes (n-m)}$ is the identity on $\bar A$. This is true exactly when:
\begin{align}
\prod_{i=1}^m Z^{\ell_{ij}\alpha_i} = I \label{eq:expcondition}
\end{align}
on every vertex $m +1 \leq j \leq n$. The condition (\ref{eq:expcondition}) is satisfied if and only if for each fixed $j$ the exponents vanish:
\begin{align}
\sum_{i=1}^m \ell_{ij} \alpha_i \equiv 0 \mod k.
\end{align}
The above relation on the exponents can be rewritten as the matrix system:
\begin{align}
\begin{pmatrix} \ell_{1,m+1} & \ell_{2,m+1} & \ldots & \ell_{m,m+1} \\ \ell_{1,m+2} & \ell_{2,m+2} & \ldots &\ell_{m,m+2} \\
\vdots & \vdots & \ddots & \vdots \\ \ell_{1,n} & \ell_{2,n} & \hdots & \ell_{m,n} \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_m \end{pmatrix} \equiv 0 \mod k.
\end{align}
Therefore, $|G_A/C_A| = |\ker \boldsymbol\ell |$, so we find from (\ref{eq:stabentropy})
\begin{align}
S_A = \ln \left(\frac{k^m}{|\ker \boldsymbol\ell|} \right),
\end{align}
i.e., the stabilizer entropy formula is generally equivalent to the formula (\ref{eq:entropy}). Although the linking number is a simple link invariant, the existence of closed-form formulas for the entropy as well as the stabilizer group formalism makes $U(1)$ link states a useful arena to study entanglement structures.
\section{The Minimal Genus Separating Surface Bounds Entanglement}\label{mgb}
\newcommand{\Sigma}{\Sigma}
In the previous section, we defined the notion of entanglement entropy between a sub-link and its complement inside any arbitrary link as a tool for characterizing entanglement structure of link complement states. The first question to ask is whether topology guarantees any general bounds on this entanglement, or vice versa.
In this section we will prove that for the gauge group $SU(2)$, the entanglement entropy across a link bi-partition gives a lower bound on the genus of surfaces in the ambient $S^3$ which ``separate'' the two sublinks. Reversing this, the minimal genus of surfaces separating sub-links upper bounds their entanglement entropy. In order to explain this bound, we first define the notion of a \emph{separating surface}:
\noindent\textbf{Definition}: Given an $n$-component link $\mathcal{L}^n\subset S^3$ and two sublinks $\mathcal{L}^m_A$ and $\mathcal{L}^{n-m}_{\bar{A}}$ such that
$$ \mathcal{L}^n = \mathcal{L}^m_{A} \cup \mathcal{L}^{n-m}_{\bar{A}},$$
a \emph{separating surface} $\Sigma_{A|\bar{A}} \subset S^3$ is a connected, compact, oriented two-dimensional surface-without-boundary such that: (1) $\mathcal{L}^m_{A}$ is contained in the handlebody inside $\Sigma_{A|\bar{A}}$, (2) $\mathcal{L}^{n-m}_{\bar{A}}$ is contained in the handlebody outside $\Sigma_{A|\bar{A}}$, and (3) $\Sigma_{A|\bar{A}}$ does not intersect any of the components of $\mathcal{L}^n$.
\begin{figure}[t]
\centering
\includegraphics[height=4cm]{Fig6.pdf}
\caption{\small{\textsf{Three examples of minimal-genus surfaces separating two subsets of links indicated in orange and blue: (a) $\mathrm{min}(g_{\Sigma})=0$ for the unlink, (b) $\mathrm{min}(g_{\Sigma})=1$ for the Hopf link and (c) $ \mathrm{min}(g_{\Sigma}) =2 $ for the indicated separation into two sublinks.}}\label{fig:ss} }
\end{figure}
In other words, the separating surface gives what is known as a Heegaard splitting of the ambient $S^3$ such that the two sublinks $\mathcal{L}^m_A$ and $\mathcal{L}^{n-m}_{\bar{A}}$ are separately contained in the two resulting handlebody-pieces. In order to avoid cluttering notation, we will drop the subscripts and simply write $\Sigma$ for the separating surface corresponding to a given bi-partition. The separating surface is not unique; given $ \mathcal{L}^n = \mathcal{L}^m_{A} \cup \mathcal{L}^{n-m}_{\bar{A}},$ there are multiple topologically distinct surfaces which separate $\mathcal{L}^n$ into the two sublinks. For example, in figure \ref{fig:ss}(a) we have shown the 2-sphere as a separating surface for the unlink. Of course, we could equally well draw a torus around one of the circles, and that would be an acceptable separating surface. However, there is clearly a (topologically) unique separating surface of \emph{minimal genus}; for example, the sphere is the minimal-genus separating surface for the two-component unlink. On the other hand, for the Hopf-link the minimal-genus separating surface is a torus; see figure \ref{fig:ss}(b). Similarly, figure \ref{fig:ss}(c) shows a link where the minimal-genus separating surface has genus two. Now we claim that:\vspace{0.5cm}
\noindent\textbf{Proposition 1}: \emph{Given a bi-partition $\mathcal{L}^n = \mathcal{L}^m_A \cup \mathcal{L}^{n-m}_{\bar{A}}$, let $\mathrm{min}\,(g_\Sigma)$ be the genus of the minimal-genus separating surface. Then, the entanglement entropy between $\mathcal{L}_A$ and $\mathcal{L}_{\bar{A}}$ provides a lower-bound on $\mathrm{min}\,(g_\Sigma)$:
\begin{equation} \label{ineq}
\mathrm{min}\,(g_\Sigma) \geq \frac{1}{C_{k}}\,S_{EE}(\mathcal{L}_A^m |\mathcal{L}^{n-m}_{\bar{A}}),
\end{equation}
where $C_{k} = \ln\,\left(\mathcal{S}_{00}^{-2}\right)$ is a positive constant which depends on the level $k$. }
\vspace{0.5cm}
Here $\mathcal{S}_{j_1j_2} = \sqrt{\frac{2}{k+2} }\sin\left(\frac{\pi (2j_1+1)(2j_2+1)}{k+2}\right)$ is the matrix which implements the large diffeomorphism $\tau \to -\frac{1}{\tau}$ on the torus Hilbert space. We may interpret the inequality \eqref{ineq} as saying that the entanglement entropy between two sublinks gives a measure of the topological obstruction to the splitting of a link between the two sublinks. Of course, we can also flip equation \eqref{ineq} around and use it as an upper-bound on the entanglement entropy, but we will actually prove the following tighter bound below:
\begin{equation} \label{ineq2}
S_{EE}(\mathcal{L}_A^m |\mathcal{L}^{n-m}_{\bar{A}}) \leq \ln \left(\sum_{j=0} ^{k/2} \frac{1}{\mathcal{S}_{0j}^{2 \mathrm{min}(g_{\Sigma})-2}} \right) .
\end{equation}
For instance in the example of the unlink, $\mathrm{min}\,(g_\Sigma) = 0$, and the bound implies that the entropy is zero (which is indeed true). For the Hopf link, the bound is saturated, as the Hopf link is maximally entangled \cite{Balasubramanian:2016sro}. There is in fact a trivial upper-bound on the entanglement entropy, namely
\begin{equation}
S_{EE}(\mathcal{L}_A^m |\mathcal{L}^{n-m}_{\bar{A}}) \leq \ln(k+1)\, \mathrm{min}(m, n-m) ,
\end{equation}
because the dimension of the Hilbert space of an m-component link is $ \left( \mathrm{dim} \mathcal{H}_{T^2} \right)^m$, but the inequality \eqref{ineq2} is a non-trivial, tighter upper-bound in general, as can be checked in the example in figure \ref{fig:ss}(c). A similar bound can be derived in $U(1)$ Chern Simons theory where we have a general closed form expression, equation (\ref{eq:entropy}), for the entanglement entropy in terms of linking numbers. The bound in this case then implies\footnote{This is of course true for an arbitrary positive integer $k$, but we can get the tightest bound by maximizing the left hand side with respect to $k$.}
\begin{equation}
m - \frac{\mathrm{ln} \,|\mathrm{ker}(\boldsymbol{\ell})|}{\mathrm{ln}\,k} \leq \mathrm{min}\,(g_\Sigma) \leq \mathrm{min}(m,n-m) .
\end{equation}
In order to prove the bound in equation \eqref{ineq}, we use the fact that the state corresponding to $\mathcal{L}^n$ is prepared by performing the Euclidean path integral on the link complement. Now given a bi-partition of the link, let $\Sigma$ be a separating surface with genus $g_\Sigma$. The trick is to cut open the path integral on the link complement along $\Sigma$ by inserting a complete set of states $\sum_J |J\rangle \langle J | $, where $J$ runs over a basis for the Hilbert space corresponding to $\Sigma$. Thus, the state corresponding to $\mathcal{L}^n$ takes the form
\begin{equation} \label{mgb1}
|\mathcal{L}^n \rangle = \sum_{j_1\cdots j_m} \sum_{j_{m+1},\cdots, j_n} \sum_J \psi_A(j_1,\cdots, j_m; J) \psi_{\bar{A}}(j_{m+1},\cdots, j_n; J) |j_1,\cdots j_m\rangle \otimes |j_{m+1},\cdots, j_n\rangle,
\end{equation}
where $\psi_A$ is the path integral over the handlebody ``inside'' $\Sigma$ contracted with $\langle J |$ on $\Sigma$, and $\psi_{\bar{A}}$ is the path integral over the handlebody ``outside'' $\Sigma$ contracted with $| J \rangle$ on $\Sigma$. We can now rewrite equation \eqref{mgb1} in the more accessible form
\begin{equation} \label{mgb2}
|\mathcal{L}^n \rangle = \sum_J |\psi_A(J)\rangle \otimes |\psi_{\bar{A}}(J) \rangle,
\end{equation}
where the first factor is a state in the Hilbert space corresponding to $\mathcal{L}^m_{A}$ and the second factor corresponding to its complement. From this expression, it is clear the reduced density matrix on $A$ takes the form
\begin{equation} \label{mgb3}
\rho_A = \sum_{J,J'} c_{J,J'}\;|\psi_A(J)\rangle \langle \psi_{A}(J') |,
\end{equation}
namely that its rank is at most the dimension of the Hilbert space on $\Sigma$. The dimension of the Hilbert space on a Riemann surface of genus $g_{\Sigma}$ is given by \cite{VERLINDE1988360}
\begin{equation}
\mathrm{dim}\, \mathcal{H}_{\Sigma} = \left(\sum_{j=0} ^{k/2} \frac{1}{\mathcal{S}_{0j}^{2 g_{\Sigma}-2}} \right),
\end{equation}
where the sum is over the integrable representations $ j =0,\frac{1}{2}, 1,\cdots, \frac{k}{2}$. The entanglement entropy is bounded by the logarithm of the rank of $\rho_A$, and thus satisfies the upper bound $S_{EE} \leq \ln\,\mathrm{dim}\,\mathcal{H}_{\Sigma}$. The tightest bound is obtained by choosing $\Sigma$ to be the minimal-genus separating surface, in which case we obtain:
\begin{equation}
S_{EE}(\mathcal{L}_A^m |\mathcal{L}^{n-m}_{\bar{A}}) \leq \ln \left(\sum_{j=0} ^{k/2} \frac{1}{\mathcal{S}_{0j}^{2 \mathrm{min}(g_{\Sigma})-2}} \right) .
\end{equation}
For $\mathrm{min}(g_{\Sigma}) \geq 1$, we can obtain a simpler inequality by noting that $\mathcal{S}_{0j} \geq \mathcal{S}_{00}$ for all $j$, so we can make the replacement $\mathcal{S}_{0j} \to \mathcal{S}_{00}$ in each term above. Using $\ln(k+1) \leq \ln\left(\mathcal{S}_{00}^{-2}\right)$ to further simplify, we finally obtain the advertised result:
\begin{equation}
S_{EE}(\mathcal{L}_A^m |\mathcal{L}^{n-m}_{\bar{A}}) \leq \mathrm{min}(g_{\Sigma}) \ln \left(\mathcal{S}_{00}^{-2} \right) .
\end{equation}
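The quantities appearing in this bound are straightforward to evaluate; the sketch below (in Python, for the illustrative value $k=3$) computes the Verlinde dimension and the bound $\mathrm{min}(g_{\Sigma})\ln(\mathcal{S}_{00}^{-2})$ for the first few genera.
\begin{verbatim}
import numpy as np

def S0j(k, j):
    """Modular S-matrix element S_{0j} of SU(2)_k, j = 0, 1/2, ..., k/2."""
    return np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * (2 * j + 1) / (k + 2))

def dim_H_sigma(k, g):
    """Verlinde dimension of the genus-g Hilbert space."""
    js = np.arange(0, k / 2 + 0.5, 0.5)
    return sum(S0j(k, j) ** (2 - 2 * g) for j in js)

k = 3
for g in range(4):
    print(g, np.log(dim_H_sigma(k, g)), g * np.log(S0j(k, 0) ** (-2)))
# For g >= 1, ln(dim H_Sigma) <= g ln(S_00^{-2}); g = 0 gives dim H_Sigma = 1,
# i.e. vanishing entropy whenever the sublinks can be split by a 2-sphere.
\end{verbatim}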
The minimal-genus bound we have proven above is similar in spirit to the Ryu-Takayanagi formula for the entanglement entropy of a subregion in a holographic conformal field theory. In that case one is instructed to find a minimal area surface which hangs into the AdS-bulk and is homologous to the CFT subregion, while in the present case we are instructed to minimize the genus of a surface which separates the two sublinks. However, the Ryu-Takayanagi formula is an equality (as opposed to a bound); in this sense, our bound is more closely analogous to the minimal-area bound on the entropy of subregions in the MERA tensor network construction of states in conformal field theory \cite{Nozaki:2012zj, Pastawski:2015qua}. In our case, we arrived at the minimal-genus bound by cutting open the Euclidean path integral along the minimal-genus separating surface, while the minimal-area bound in MERA is proved by cutting open the tensor network along the minimal-area cut through the network. This suggests that our path integral arguments might have natural generalizations to more non-trivial quantum field theories (i.e., beyond topological theories), although we expect the argument would have to deal with the standard ultraviolet divergences of quantum field theory as soon as we move away from the TQFT limit.
\section{Entanglement Structure of Torus and Hyperbolic Links } \label{sec4}
In the previous section we demonstrated a general topological bound on the entanglement entropy between sublinks. This bound shows that if the sublinks can be split, i.e., separated by a 2-sphere, then they must have vanishing entanglement. In this section we consider {\it non-split links} for which there is no bipartition separated by a 2-sphere. Such links can have inherently multi-partite entanglement, because there is no sublink that must disentangle from the remainder. Here, inspired by the two classes of intrinsically 3-qubit entanglement patterns (GHZ and W, see Introduction), we will focus on a limited issue, i.e., whether partial traces over some link components produce a separable state on the remainder. This leads to the following definition:
\noindent\textbf{Definition}: {A state with three or more sub-factors will be said to have \emph{GHZ-like} entanglement if the reduced density matrix obtained by tracing out any sub-factor is mixed (i.e., has a non-trivial von Neumann entropy) but is separable on all the remaining sub-factors. A state with three or more sub-factors will be said to have \emph{W-like} entanglement if the reduced density matrix obtained by tracing out any sub-factor is mixed but not always separable on the remaining sub-factors.}
Two important topological classes of non-split links are the \emph{torus links} (i.e., links which can be drawn on the surface of a torus) and the \emph{hyperbolic links} (i.e., links whose link complement supports a hyperbolic structure). In fact, every non-split, alternating, prime link is either a torus link or a hyperbolic link \cite{MENASCO198437}.\footnote{Here ``alternating'' means that crossings along any circle alternate above and below, and ``prime'' means that the link is not a connected sum of other links.} We will study the entanglement structure in these two classes of links.
\subsection{Torus links}
Torus links, namely links which can be embedded on the surface of a two dimensional torus (without self intersection), are an important topological class. Some examples include $2^2_1$ (the Hopf link), $4^2_1$, and $6^3_3$ (see Fig.~\ref{fig:tl}). In fact the entanglement structures of these examples were already studied in \cite{Balasubramanian:2016sro}, where it was shown that in $SU(2)$ Chern-Simons theory the Hopf link is maximally entangled and the three-component link $6^3_3$ is GHZ-like. We will prove the following general result:
\noindent\textbf{Proposition 2}: \emph{All torus links with three or more components have a GHZ-like entanglement structure.}
The proof will show that the state corresponding to any torus link always takes the form
\begin{equation} \label{tl0b}
|\mathcal{L}^n \rangle= \sum_{\ell} \lambda_{\ell} (\mathcal{L}^n)\, |\widetilde{\ell}\rangle \otimes |\widetilde{\ell}\rangle \otimes \cdots \otimes |\widetilde{\ell}\rangle ,
\end{equation}
where $\{|\widetilde{\ell}\rangle \}$ is a particular basis for the torus Hilbert space to be defined below (compare with Eq.~\ref{GHZstate} for the GHZ state on three qubits). It is clear from \eqref{tl0b} that tracing out any sublink leaves us with a separable density matrix on the remainder. This result establishes a direct connection between a topological property of links and a quantum information-theoretic property of the corresponding states. We now give a short proof of Proposition 2.
\begin{figure}[t]
\centering
\includegraphics[height=3.5cm]{Fig7.pdf}
\caption{\small{\textsf{ Some examples of torus links labeled using Rolfsen notation. \label{fig:tl} }}}
\end{figure}
Torus links are characterized by two integers $P$ and $Q$. Given two integers $(P,Q)$, the $(P,Q)$ torus link (often referred to as $T(P,Q)$) can be constructed as the closure of the braid $(\sigma_1\sigma_2\ldots\sigma_{P-1})^Q$ acting on $P$ strands. Here, $\sigma_i$ denotes the crossing of strand $i$ over $i+1$. This is illustrated in Fig. \ref{fig:tlbraid} for $P=2$. We may take $0 < P \leq Q$ without loss of generality. It is easy to see that when $P$ and $Q$ are relatively prime, the closure of the braid results in a 1-component link (a knot) which wraps around the longitude of the torus $P$ times, and around the meridian $Q$ times. However, when $\text{gcd}(P,Q)=n$ the closure of the braid will result in an $n$-component link, each component of which wraps around the torus longitude and meridian $P/n$ and $Q/n$ times, respectively.
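For concreteness, this elementary counting can be checked with a few lines of Python (an illustration only; the $(P,Q)$ values chosen below are arbitrary):
\begin{verbatim}
# Component count of the (P,Q) torus link: n = gcd(P,Q) components,
# each wrapping the longitude P/n times and the meridian Q/n times.
from math import gcd

for P, Q in [(2, 3), (2, 2), (3, 3), (2, 4), (3, 6)]:
    n = gcd(P, Q)
    print(f"T({P},{Q}): {n} component(s) of type ({P // n},{Q // n})")
\end{verbatim}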
\begin{figure}[h]
\centering
\includegraphics[width=.2\textwidth]{Fig8a.pdf}\includegraphics[width=.3\textwidth]{Fig8b.pdf}\includegraphics[width=.2\textwidth]{Fig8c.pdf}\includegraphics[width=.3\textwidth]{Fig8d.pdf}
\caption{\small{\textsf{(Left) The trefoil knot as a (2,3) torus knot braid and drawn on the surface of a torus. (Right) The Hopf link as a (2,2) torus link braid and drawn on the surface of a torus.}}}
\label{fig:tlbraid}
\end{figure}
Let us warm up by examining torus links in $U(1)$ Chern-Simons theory. This is a useful exercise since, as described in section \ref{sect:AbelianCS+SG}, we possess an exact closed-form formula for the link state of generic $U(1)$ link that depends only on the mutual linking numbers. In fact, for a $(P,Q)$ torus link, examination of the braid word closure shows that the mutual linking numbers are \emph{homogeneous}: i.e., $\ell_{ab}=\ell,\;\;\forall a\neq b$ (for a particular choice of orientation of the individual knots). A counting of the crossings\footnote{That is, let $\mathcal C$ be the total number of crossings excluding self crossings: $\mathcal C=2\sum_{i<j}\ell=\ell\,n(n-1)$. In the braid word $(\sigma_1\sigma_2\ldots\sigma_{P-1})$ there are $P-1$ crossings, $P/n-1$ of which are self crossings. Repeating the braid word $Q$ times yields $\mathcal C=Q(P-1-P/n+1)$. Equating the two gives the stated result.} in the braid diagram reveals $\ell=\frac{PQ}{n^2}$. As such the Abelian link state for a $(P,Q)$ torus link is given by
\begin{equation}
|\mathcal L(P,Q)\rangle=\frac{1}{k^{n/2}}\sum_{j_1,\ldots,j_n}\exp\left(\frac{2\pi i}{k}\ell\sum_{a<b}j_a\,j_b\right)|j_1,\ldots, j_n\rangle \label{eq:toruscomplete}
\end{equation}
Up to a phase $e^{\frac{\pi i}{k}\ell\sum_a j_a^2}$ acting on each tensor factor (which can be removed by a local unitary), this state can be written as
\begin{equation}
|\mathcal L(P,Q)\rangle=\frac{1}{k^{n/2}}\sum_{j_1,\ldots,j_n}\exp\left(\frac{\pi i}{k}\ell\left(\sum_{a}j_a\right)^2\right)|j_1,\ldots, j_n\rangle
\end{equation}
Let us denote the total charge (mod $k$) of the basis element $|j_1,\ldots, j_n\rangle$ by $\hat j=\sum_{a=1}^n j_a$. We can rewrite $|\mathcal L(P,Q)\rangle$ in terms of $\hat j$ by imposing a periodic delta function:
\begin{equation}
|\mathcal L(P,Q)\rangle=\frac{1}{k^{n/2+1}}\sum_{q=1}^k\sum_{\hat j=1}^k\sum_{j_1,\ldots,j_n}\exp\left(\frac{\pi i}{k}\ell\hat j^2\right)\exp\left(\frac{2\pi i}{k}q\left(\hat j-\sum_{a}j_a\right)\right)|j_1,\ldots, j_n\rangle
\end{equation}
(The sum on $q$ imposes the delta function.) We now see that $j_a$-dependent coefficients can be removed by the local unitary change of basis
$|q\rangle = \frac{1}{\sqrt{k}}\sum_{j} \exp(-{2\pi i \over k} q j) | j \rangle$.
The state is then unitarily equivalent to
\begin{equation}
|\mathcal L(P,Q)\rangle=\frac{1}{k}\sum_{q=1}^k\sum_{\hat j=1}^k\exp\left(\frac{2\pi i}{k}q\,\hat j+\frac{\pi i\ell}{k}\hat j^2\right)|q,q,\ldots, q\rangle\equiv \sum_{q=1}^k\lambda_q(P,Q)|q,q,\ldots, q\rangle.
\end{equation}
This proves (\ref{tl0b}) for the Abelian theory. Thus we see that torus links in $U(1)$ Chern-Simons theory are GHZ-like. An alternate proof of this result can also be given using the fact that the wavefunction (\ref{eq:toruscomplete}) describes a \emph{complete} graph state where all edges have weight $\ell$ \cite{graphentangle, graphreview, PhysRevA.95.052340}.\footnote{The proof works by showing that the GHZ state is unitarily equivalent to the state corresponding to the \emph{star graph} by a sequence of discrete Fourier transforms (Hadamard transforms, when $k=2$). Then, a unitary graph operation called \emph{local complementation} takes the star graph to the complete graph and vice versa. }
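The diagonal (GHZ-like) form can also be verified numerically. The Python sketch below is an illustration, not part of the derivation: it builds the $U(1)$ state with homogeneous linking number $\ell$, applies the local phases and local Fourier transforms described above, and checks that all remaining weight sits on the diagonal. The values $k=5$, $n=3$, $\ell=2$ (e.g.\ the $(3,6)$ torus link, for which $\ell = PQ/n^2 = 2$) are illustrative and are chosen so that $k\ell$ is even, for which the periodic-delta-function step above goes through without extra phases.
\begin{verbatim}
import numpy as np
from itertools import product

k, n, ell = 5, 3, 2
js = np.arange(k)

# psi[j1,...,jn] ~ exp(2*pi*i*(ell/k)*sum_{a<b} j_a j_b)
grids = np.meshgrid(*([js] * n), indexing="ij")
pair_sum = sum(grids[a] * grids[b] for a in range(n) for b in range(a + 1, n))
psi = np.exp(2j * np.pi * ell * pair_sum / k) / k ** (n / 2)

# remove the local phase exp(i*pi*ell*j_a^2/k) on each factor
phase = np.exp(1j * np.pi * ell * js ** 2 / k)
for a in range(n):
    psi = psi * phase.reshape([-1 if b == a else 1 for b in range(n)])

# local Fourier transform to the basis |q> = k^{-1/2} sum_j e^{-2 pi i q j/k}|j>
F = np.exp(2j * np.pi * np.outer(js, js) / k) / np.sqrt(k)      # <q|j>
for a in range(n):
    psi = np.moveaxis(np.tensordot(F, psi, axes=([1], [a])), 0, a)

# all weight should sit on the diagonal q1 = q2 = ... = qn
off_diag = sum(abs(psi[idx]) for idx in product(range(k), repeat=n)
               if len(set(idx)) > 1)
print(off_diag)   # numerically zero
\end{verbatim}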
We now move on to $SU(2)$ Chern-Simons theory. In particular, given an $n$-component link $\mathcal{L}^n \subset S^3$, the corresponding state (in the canonical basis introduced previously) is given by
\begin{equation} \label{tl0}
|\mathcal{L}^n \rangle = C_0 \sum_{j_1\ldots j_n}J_{j_1,\cdots, j_n}(\mathcal{L}^n) |j_1\ldots j_n\rangle,
\end{equation}
where for $SU(2)$, the colors $j_i$ run over $0, \frac{1}{2}, 1, \cdots, \frac{k}{2}$, $C_0$ is an overall constant (more precisely it is the $S^3$ partition function) and the wavefunction $J_{j_1,\cdots j_n}(\mathcal{L}^n)$ is the colored Jones polynomial. Proceeding generally, we note that a systematic way to evaluate the colored Jones polynomials of torus links is to take a $(P,Q)$ $n$-component link with representations $j_1,\ldots, j_n$ and to fuse them sequentially using the Chern-Simons fusion rules into a $(P/n,Q/n)$ torus knot summed over representations with the appropriate fusion coefficients \cite{Isidro:1992fz, Labastida:2000yw, Brini:2011wi}.\footnote{This fusion is possible because all the components of torus links are simply braiding along one of the cycles of the defining torus.}
We refer the reader to the above references for further details, and merely state here the result for the colored Jones polynomial:\footnote{We are omitting an overall phase proportional to the central charge. Additionally \cite{Brini:2011wi} writes the final link invariant in terms of the quantum dimension which differs from (\ref{tl1}) by a factor of $\mathcal S^0_0$, a matter of normalization.}
\begin{equation} \label{tl01}
J_{j_1, \cdots, j_n} (P,Q) = \sum_{\ell_1, \ell_2,\cdots} N_{j_1j_2\ell_1}N_{\ell_1j_3\ell_2} \cdots N_{\ell_{n-2}j_n\ell_{n-1}} J_{\ell_{n-1}}(P/n, Q/n),
\end{equation}
where $N_{ijk}$ are the fusion coefficients. Further using the Verlinde formula \cite{VERLINDE1988360}
\begin{equation}
N_{ijk} = \sum_{\ell} \frac{\mathcal{S}_{i\ell} \mathcal{S}_{j\ell} \mathcal{S}_{k\ell}}{\mathcal{S}_{0\ell}},
\end{equation}
where, as before, $\mathcal{S}_{j_1j_2} = \sqrt{\frac{2}{k+2} }\sin\left(\frac{\pi (2j_1+1)(2j_2+1)}{k+2}\right)$ is the \emph{unitary} matrix which implements the large diffeomorphism $\tau \to -\frac{1}{\tau}$ on the torus Hilbert space, we can rewrite the colored Jones polynomial in the form
\begin{equation}\label{tl1}
J_{j_1\ldots j_n} (P,Q)=\sum_{\ell}\sum_{j_s}\frac{1}{\left(\mathcal S_{0\ell}\right)^{n-1}}\mathcal S_{\ell j_1}\mathcal S_{\ell j_2}\ldots \mathcal S_{\ell j_n} \mathcal S_{\ell j_s} J_{j_s}(P/n,Q/n).
\end{equation}
Here
\begin{equation}
J_{j_s}(P/n, Q/n) =\sum_{j_p}C^{j_p}_{j_s}(P/n)\,\mathcal S_{0j_p}\, e^{i2\pi \frac{Q}{P} h_p}
\end{equation}
is the colored Jones polynomial for the $(P/n, Q/n)$-torus knot, $h_p$ is the conformal primary weight of the representation $j_p$ and the coefficients $C_{j_s}^{j_p}$ are defined as
\begin{equation}
\mathrm{Tr}_{j_s}\left(\hat U^m\right)=\sum_{j_p}C^{j_p}_{j_s}(m)\, \mathrm{Tr}_{j_p}\left(\hat U\right),
\end{equation}
for any holonomy $\hat{U}$. (For instance, $C^{j_p}_{j_s}(1)=\delta^{j_p}_{j_s}$.) For our purposes, these details are not too important; what is important however is the structure of the colored Jones polynomial in equation \eqref{tl1}, which we can rewrite as
\begin{equation}\label{tl2}
J_{j_1\ldots j_n} (P,Q)=\sum_{\ell} \frac{1}{\left(\mathcal S_{0\ell}\right)^{n-1}}\mathcal S_{\ell j_1}\mathcal S_{\ell j_2}\ldots \mathcal S_{\ell j_n} f_{\ell}(P,Q)
\end{equation}
where
\begin{equation}
f_{\ell}(P,Q) = \sum_{j_s}\mathcal S_{\ell j_s} J_{j_s}(P/n,Q/n).
\end{equation}
Using equations \eqref{tl0} and \eqref{tl2}, we then find that the state corresponding to a generic $(P,Q)$-torus link takes the form
\begin{eqnarray} \label{tl3}
|\mathcal{L}_{(P,Q)}\rangle &=& C_0\sum_{\ell} \frac{1}{\left(\mathcal S_{0\ell}\right)^{n-1}}f_{\ell}(P,Q)\, |\widetilde{\ell}\rangle \otimes |\widetilde{\ell} \rangle \otimes \cdots \otimes |\widetilde{\ell}\rangle \nonumber\\
&\equiv & \sum_{\ell} \lambda_{\ell}(P,Q)\, |\widetilde{\ell}\rangle \otimes |\widetilde{\ell} \rangle \otimes \cdots \otimes |\widetilde{\ell}\rangle
\end{eqnarray}
where we have defined the new basis $|\widetilde{j} \rangle = \sum_{j'} \mathcal{S}_{jj'} |j'\rangle$, which is related to the old basis by a local unitary transformation ($\mathcal{S}\cdot \mathcal{S}^{\dagger} = \mathcal{S}^{\dagger}\cdot \mathcal{S} = 1$). This is convenient because we are interested here in understanding the entanglement structure, which remains invariant under such a local (i.e., acting on each local tensor factor) change of basis. We have thus arrived at our desired result, equation \eqref{tl0b}.
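The only inputs used in this derivation are the $SU(2)_k$ modular $\mathcal{S}$-matrix, its unitarity, and the Verlinde formula; as a sanity check (purely illustrative, with $k=3$ an arbitrary choice), these ingredients can be assembled in a few lines of Python:
\begin{verbatim}
import numpy as np

k = 3
spins = np.arange(k + 1) / 2.0        # j = 0, 1/2, ..., k/2

S = np.array([[np.sqrt(2.0 / (k + 2))
               * np.sin(np.pi * (2 * a + 1) * (2 * b + 1) / (k + 2))
               for b in spins] for a in spins])

print(np.allclose(S @ S.T, np.eye(len(spins))))   # S is real, symmetric, unitary

# Verlinde formula: N_{ijk} = sum_l S_{il} S_{jl} S_{kl} / S_{0l}
N = np.einsum("il,jl,kl->ijk", S, S, S / S[0])
print(np.allclose(N, np.round(N)), N.min() > -1e-9)   # nonnegative integers
\end{verbatim}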
Now let us investigate what happens when we trace over some subset of links. Since it is obvious from \eqref{tl3} that the state is invariant under permutations of the ordering of the components, without loss of generality we can trace over the final $n-r$ links, leaving a reduced density matrix on the remaining $r$ links. It is easy to see that in doing so the reduced density matrix remains diagonal. The normalized reduced density matrix for any subset of $r$ links can be written as
\begin{equation} \label{tlrdm}
\hat\rho_{r|n-r}(P,Q)=\sum_{\ell}\Lambda_{\ell}(P,Q)|\widetilde{\ell},\cdots, \widetilde{\ell}\rangle\langle \widetilde{\ell},\cdots, \widetilde{\ell}|
\end{equation}
with the normalized eigenvalues
\begin{equation}
\Lambda_l(P,Q)=\frac{\left|\lambda_l(P,Q)\right|^2}{\sum_l \left|\lambda_l(P,Q)\right|^2}
\end{equation}
This is a completely separable density matrix on the remaining sub-links indicating that the entanglement in the full link had a GHZ-like structure.
Note that the eigenvalues $\Lambda_l(P,Q)$ encode the specifics of the underlying torus link. However, these eigenvalues \emph{are independent of how many factors have been traced out}, as long as $0<r<n$. Therefore the multi-boundary entanglement entropy for torus links takes the particularly simple form
\begin{equation}
S_{r|n-r}(P,Q)=-\sum_l\Lambda_l(P,Q)\log\Lambda_l(P,Q)
\end{equation}
for all $0<r<n$. In addition, it is clear that the reduced density matrix \eqref{tlrdm} is separable for any choice of bi-partition. In other words, the reduced density matrix does not contain \emph{any quantum entanglement}; all the quantum entanglement in the original state was genuinely multi-partite and GHZ in character.
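The $r$-independence of the spectrum noted above is easy to confirm numerically; the toy Python check below (with arbitrary random coefficients $\lambda_\ell$, $k=5$ and $n=4$) matricizes a GHZ-like state along different bipartitions and finds identical entanglement spectra:
\begin{verbatim}
import numpy as np

k, n = 5, 4
lam = np.random.randn(k) + 1j * np.random.randn(k)
lam /= np.linalg.norm(lam)

psi = np.zeros((k,) * n, dtype=complex)
for l in range(k):
    psi[(l,) * n] = lam[l]            # sum_l lambda_l |l,l,...,l>

def entropy(state, r):
    m = state.reshape(k ** r, k ** (n - r))          # bipartition r | n-r
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p = p[p > 1e-14]
    return float(-(p * np.log(p)).sum())

print([round(entropy(psi, r), 6) for r in (1, 2, 3)])   # identical values
\end{verbatim}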
While the arguments above were presented in the case of the gauge group $SU(2)$, we expect these arguments to generalize to arbitrary compact gauge groups. This is because the crux of the derivation (equations \eqref{tl01}, \eqref{tl1} and \eqref{tl2}) merely used the fusion rules for Chern-Simons theory (i.e., the Verlinde formula) together with the unitarity of $\mathcal{S}$. Since these are general properties of Chern-Simons theory with compact gauge groups, our arguments will be valid for general compact groups. This concludes our derivation of the result that the entanglement structure of all torus links is GHZ-like.
\subsection{Hyperbolic links}
\begin{figure}
\centering
\includegraphics[height=4cm]{Fig9.pdf}
\caption{\small{\textsf{Two examples of hyperbolic links: Whitehead link (left) and Borromean rings (right).}}\label{fig:HL}}
\end{figure}
Next we consider \emph{hyperbolic links}, whose link complements admit a complete hyperbolic structure, namely a geodesically complete metric with constant negative curvature. Some examples of hyperbolic links, the Whitehead link and the Borromean rings (Fig.~\ref{fig:HL}), were already studied in the $SU(2)$ theory in \cite{Balasubramanian:2016sro}. It was shown there that the Borromean rings have a W-like entanglement structure.
(The Whitehead link has only two components and thus does not have multi-party entanglement.)
In this section, we will present further evidence suggesting that hyperbolic links are generically W-like.
In order to proceed, on the knot theory side we need to compute the colored Jones polynomials of hyperbolic links. Unfortunately, to the best of our knowledge, there is not much known about the general structure of these polynomials for hyperbolic links (as compared to torus links for instance), so we proceeded case-by-case by looking at several three-component hyperbolic links. Our strategy was to compute the colored Jones polynomials by writing the link in terms of a braid representation. We then used the monodromy properties of chiral conformal blocks in $SU(2)_k$ Wess-Zumino-Witten theory. This method was explained in detail in \cite{Kaul:1993hb} and reviewed in the appendix A of \cite{Balasubramanian:2016sro}, so we will not repeat the details here. Actually, we found it convenient to use a slight variant of this technique, where we first expressed the link as a braid in $S^2\times S^1$ (with an extra circle which does not braid with the original link), and then used surgery to obtain the colored Jones polynomial in $S^3$ (as explained in \cite{Witten:1988hf}).\footnote{This procedure was numerically implemented using \emph{Mathematica}.}
On the quantum information theory side, we need an efficient way to detect whether the reduced density matrix obtained after tracing out one of the factors is separable. A useful information theoretic quantity along these lines is the \emph{entanglement negativity} \cite{PhysRevLett.77.1413,PhysRevA.65.032314, Rangamani:2015qwa}. For a given (possibly mixed) density matrix $\rho$ on a bi-partite system, let us start by defining the partial transpose $\rho^{\Gamma}$:
\begin{equation}
\langle j_1,j_2 | \rho^{\Gamma} | \tilde{j}_1, \tilde{j}_2 \rangle = \langle \tilde{j}_1,j_2 | \rho | j_1, \tilde{j}_2 \rangle,
\end{equation}
which also satisfies $\mathrm{Tr}(\rho^\Gamma) = 1$ just like $\rho$. If $\rho^{\Gamma}$ has any negative eigenvalues, then this necessarily implies that the density matrix $\rho$ is not separable \cite{PhysRevLett.77.1413}. The sum of the negative eigenvalues can be captured by the {\it entanglement negativity} $\mathcal{N}$, which is defined as
\begin{equation}
\mathcal{N} =\frac{|| \rho^{\Gamma} || - 1}{2},
\end{equation}
where $|| A || = \mathrm{Tr}\left(\sqrt{A^{\dagger}A} \right)$ is the trace norm. A non-zero value of $\mathcal{N}$ therefore necessarily implies that the reduced density matrix is non-separable. In our context, the results in the previous section (Proposition 2) together with the fact that all alternating, prime, non-split links are either torus or hyperbolic \cite{MENASCO198437}, imply the following corollary:
\noindent \textbf{Corollary 3}: \emph{If a prime, alternating, non-split link has entanglement negativity $\mathcal{N} > 0$ for some bipartition of some proper sublink,\footnote{A proper sublink of $\mathcal{L}$ is a sublink which is not equal to $\mathcal{L}$.} then the link is hyperbolic.}
This provides a quantum information theoretic sufficient-but-not-necessary condition for a link to be hyperbolic. Importantly, the negativity can be computed directly from the colored Jones polynomial. In table \ref{tab1} we present entanglement negativities for twenty three 3-component non-split links in SU(2) Chern-Simons theory, eighteen of which are hyperbolic (i.e., have non-zero hyperbolic volumes). More precisely, we traced out one of the tensor factors in the link, and then computed the entanglement negativity of the reduced density matrix on the remaining two factors. We see that all the hyperbolic links in the table have a non-zero entanglement negativity, showing that the corresponding reduced density matrices are \emph{not} separable. Therefore, these links have a W-like entanglement structure. Furthermore, all the non-hyperbolic links in table \ref{tab1} have zero negativity, which is (at the very least) consistent with our discussion in the previous section. The results presented in table \ref{tab1} suggest the conjecture that hyperbolic links in Chern-Simons theories with a compact non-Abelian gauge group for generic\footnote{It can happen that at special values of $k$, certain hyperbolic links degenerate to a product structure. This happens for instance at $k=1$ for the Borromean rings, but for $k\geq 2$ the Borromean rings are W-like. We will encounter another example of this in $SL(2,\mathbb{C})$ Chern Simons theory in the limit $G_N \to 0$.} values of the level $k$ always have a W-like entanglement structure. It would be interesting to prove this statement.
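Since the negativity computation is central to Table~\ref{tab1}, we illustrate the definition with a standalone Python sketch (not the code used to produce the table): it computes $\mathcal{N}$ for the two-party reduced density matrices of the three-qubit GHZ and W states, giving zero and a positive value respectively.
\begin{verbatim}
import numpy as np

def negativity(rho, dA, dB):
    # partial transpose on subsystem A, then N = (||rho^Gamma||_1 - 1)/2
    rho = rho.reshape(dA, dB, dA, dB)
    rho_pt = rho.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)
    return (np.abs(np.linalg.eigvalsh(rho_pt)).sum() - 1) / 2

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8);   w[1] = w[2] = w[4] = 1 / np.sqrt(3)

for name, psi in [("GHZ", ghz), ("W", w)]:
    rho = np.outer(psi, psi).reshape((2,) * 6)
    rho12 = np.einsum("abkcdk->abcd", rho).reshape(4, 4)   # trace out qubit 3
    print(name, negativity(rho12, 2, 2))   # GHZ: 0 (separable), W: > 0
\end{verbatim}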
\begin{table}[t]
\centering
\begin{tabular}{| l |c | r|}
\hline
\textbf{Link Name} & \textbf{Negativity} $\mathcal{N}$ at $k=3$ & \textbf{Hyperbolic volume} \\
\hline
L6a4 & 0.18547 & 7.32772 \\
L6n1 & 0 & 0 \\
L8a16 & 0.097683 & 9.802 \\
L8a18 & 0.189744 & 6.55174 \\
L8a19 & 0.158937 & 10.667 \\
L8n3 & 0 & 0 \\
L8n4 & 0.11423 & 5.33349 \\
L8n5 & 0.18547 & 7.32772 \\
L10a138 & 0.097683 & 10.4486 \\
L10a140 & 0.0758142 & 12.2763 \\
L10a145 & 0.11423 & 6.92738 \\
L10a148 & 0.119345 & 11.8852 \\
L10a156 & 0.0911946 & 15.8637 \\
L10a161 & 0.0354207 & 7.94058 \\
L10a162 & 0.0913699 & 13.464 \\
L10a163 & 0.0150735 & 15.5509 \\
L10n77 & 0 & 0 \\
L10n78 & 0.189744 & 6.55174 \\
L10n79 & 0.097683 & 9.802 \\
L10n81 & 0.15947 & 10.667 \\
L10n92 & 0.11423 & 6.35459 \\
L10n93 & 0 & 0 \\
L10n94 & 0 & 0 \\
\hline
\end{tabular}
\caption{\small{\textsf{Negativity in $SU(2)$ Chern Simons at level $k=3$ for various three-component links alongside the hyperbolic volume of the complement manifold. The hyperbolic volumes were computed using the SnapPy program \cite{SnapPy} (where zero volume implies that the given link is not hyperbolic). The colored Jones polynomials were computed using braiding representations for these links together with monodromy properties of conformal blocks in the $SU(2)$ WZW theory. In order to compute the negativity, we first trace over one of the tensor factors, and then compute the negativity of the reduced density matrix on the remaining two factors.
}}
\label{tab1}}
\end{table}
\section{Hyperbolic Links in $SL(2,\mathbb{C})$ Chern Simons Theory} \label{sec5}
More can be done with hyperbolic links if we complexify the gauge group to $SL(2,\mathbb{C})$.
In this case, in a certain asymptotic limit we can use the known behavior of the colored Jones polynomial of a hyperbolic link in terms of the hyperbolic geometry of its link complement. In this section, we present some results in this direction.
We begin with a brief review of $SL(2,\mathbb{C})$ Chern Simons theory (see \cite{Witten:1989ip, Gukov:2003na,Dimofte:2009yn, Witten:2010cx, Dimofte:2016pua} for detailed expositions on the subject). The fundamental field in the theory is the gauge field $\mathcal{A}$ which takes values in the Lie algebra $\mathfrak{sl}(2,\mathbb{C})$. The path integral for the $SL(2,\mathbb{C})$ Chern Simons theory is given by
\begin{equation}
Z = \int D\mathcal{A} \, D\bar{\mathcal{A}} \;e^{iS[\mathcal{A}, \bar{\mathcal{A}}]},
\end{equation}
\begin{equation} \label{act0}
S= \frac{t}{8\pi} \int \mathrm{Tr}\left(\mathcal{A}\wedge d\mathcal{A} + \frac{2}{3}\mathcal{A}\wedge \mathcal{A} \wedge \mathcal{A}\right)+\frac{\bar{t}}{8\pi} \int \mathrm{Tr}\left(\bar{\mathcal{A}}\wedge d\bar{\mathcal{A}} + \frac{2}{3}\bar{\mathcal{A}}\wedge \bar{\mathcal{A}} \wedge \bar{\mathcal{A}}\right),
\end{equation}
where $\bar{\mathcal{A}}$ is the complex conjugate of $\mathcal{A}$. If we write $t = k+is$ and $\bar{t} = k - is$, then $k$ must be an integer, and $s$ has to be either purely real or purely imaginary, results which follow from unitarity \cite{Witten:1989ip, Gukov:2003na}. The case $s\in \mathbb{R}$ corresponds to gravity in Lorentzian signature with a positive cosmological constant, while $s = -i\sigma,\,\sigma \in \mathbb{R}$ corresponds to Euclidean gravity with a negative cosmological constant. We are interested here in this latter case. To be a bit more explicit, we pick $SU(2)$ as a real form of $SL(2,\mathbb{C})$, and write $\mathcal{A} = \omega + \frac{i}{\ell}e$, where both $\omega$ and $e$ are $\mathfrak{su}(2)$-valued connections. It is natural to interpret $\omega$ as the \emph{spin-connection} and $e$ as the \emph{vielbein} of general relativity. Then the action \eqref{act0} becomes (setting $\ell =1 $ for simplicity)
\begin{eqnarray}
S &=& \frac{k}{4\pi} \int \mathrm{Tr}\left(\omega\wedge d\omega + \frac{2}{3}\omega\wedge \omega\wedge \omega- e\wedge de - 2\omega \wedge e\wedge e \right)\nonumber\\
&-& \frac{s}{2\pi}\int \mathrm{Tr}\left(e\wedge d\omega + e\wedge \omega\wedge \omega - \frac{1}{3} e\wedge e\wedge e\right),
\end{eqnarray}
up to a total derivative term. Since the integrand of the path integral is $e^{iS}$, if we are interested in Euclidean signature we must take $s = -i\sigma$ with $\sigma \in \mathbb{R}$. In this case, the exponent in the path integral is of the form
$$\exp\left( - \frac{\sigma}{4\pi} \int \sqrt{g} \left(-R +2\Lambda \right)+\frac{ik}{4\pi}I_{grav\,CS}\right),$$
where the first term above is precisely the Einstein-Hilbert action with negative cosmological constant, while the second term is the gravitational Chern Simons term. We can then regard $\sigma$ as being proportional to the inverse of the Newton constant, $\sigma = \frac{1}{4G_N}$. In this paper, we will be interested in the asymptotic limit $\sigma \to \infty$. For simplicity, we will also set $k=0$.
An important aspect of Chern-Simons theories with non-compact gauge groups such as $SL(2,\mathbb{C})$ is that the Hilbert space on $T^2$ is infinite-dimensional (see discussion below). In the case of compact gauge groups the multi-boundary entanglement was finite for two reasons: (1) the Hilbert space on $T^2$ is finite dimensional, and (2) the multi-boundary entanglement does not involve spatial cuts across which the entanglement can diverge. In the case of $SL(2,\mathbb{C})$ the second property still holds, so the only potential source of divergence is the infinite size of the Hilbert space. However, as we will see below, at least for hyperbolic links and in the asymptotic limit $\sigma \to \infty$, the multi-boundary entanglement in $SL(2,\mathbb{C})$ Chern-Simons remains finite because of the Gaussian structure of the wavefunctions.
\subsection*{Multi-Boundary States}
Let us consider an $n$-component hyperbolic link $\mathcal{L}^n$ inside $S^3$. As before, the link complement $S^3- N(\mathcal{L}^n)$ is a 3-manifold with $n$ torus boundaries. The path integral of $SL(2,\mathbb{C})$ Chern-Simons theory on the link complement then produces a state in the $n$-fold tensor product of the torus Hilbert space, which as before we label $|\mathcal{L}^n\rangle$. In order to proceed, we need a basis for the torus Hilbert space in the $SL(2,\mathbb{C})$ theory. Following \cite{Gukov:2003na} let us denote (the conjugation classes of) the holonomies of $\mathcal{A}$ around the meridian and longitude of the torus by $\rho(\gamma_m)$ and $\rho(\gamma_\ell)$ respectively. (The holonomies will play the role of the Wilson lines that provided a nice basis for the torus Hilbert space when the gauge group was compact.) It is possible to write $\rho(\gamma_m) $ and $\rho(\gamma_\ell)$ in the form
$$\rho(\gamma_m)=\left(\begin{matrix} m & \star \\ 0 & m^{-1}\end{matrix}\right),\;\;\;\; \rho(\gamma_{\ell})=\left(\begin{matrix} \ell & \star \\ 0 & \ell^{-1}\end{matrix}\right),$$
where $m,\ell \in \mathbb{C}^*$ and $\star$ is one if $m = \ell$, and zero otherwise. Let us also introduce the notation $m = e^u$ and $\ell = e^v$ for convenience. Classically, $m$ takes values in $\mathbb{C}^*$ (namely the complex plane minus the origin), so $\mathrm{Re}\,u \in \mathbb{R}$ while $\mathrm{Im}\,u $ is $2\pi$-periodic (i.e., $u$ coordinatizes a cylinder); the same holds for $\ell$ and $v$. Together $(m,\ell)$ or equivalently $(u,v)$ parametrize the classical phase space.\footnote{Typically, one also quotients by the Weyl group, but following \cite{Gukov:2003na} we will suppress this quotient.} Clearly, the phase space is non-compact, indicating that the Hilbert space upon quantization will be infinite-dimensional. At $ k =0$ and $\sigma \to \infty$, we can choose a polarization such that wavefunctions are $L^2$ functions of $u$, and independent of $v$ (i.e., in quantum mechanics we take the wavefunctions to be functions of half of the phase space coordinates, in this case $u$). In other words, the Hilbert space is spanned by the basis $\{ |u \rangle\}$, with $e^u \in \mathbb{C}^*$ as in the classical case above, with the standard norm $\langle u | u'\rangle = \delta^{(2)}(u-u')$. Consequently, a basis for the $n$-fold tensor product of the torus Hilbert spaces takes the form $|u_1,\cdots, u_n\rangle = |u_1\rangle \otimes |u_2\rangle \otimes \cdots |u_n\rangle$.
We can now write the state $|\mathcal{L}^n\rangle$ as
\begin{equation} \label{sp0}
|\mathcal{L}^n\rangle = \int d^2u_1 \cdots \int d^2u_n \langle u_1, \cdots, u_n |\mathcal{L}^n \rangle |u_1,\cdots, u_n\rangle,
\end{equation}
where the integration regions are over cylinders as explained above. The wavefunction $\langle u_1, \cdots, u_n |\mathcal{L}^n \rangle$ is given by the path integral of Chern Simons theory on the link complement $S^3 - N(\mathcal{L}^n)$, with boundary conditions which fix the boundary meridional holonomies to be $m_i$'s. In the $\sigma \to \infty$ limit, we can use the saddle point approximation to the path integral to write
\begin{equation} \label{sp1}
\langle u_1, \cdots, u_n |\mathcal{L}^n \rangle = \sum_{\a} e^{-\frac{\sigma}{\pi} V^{(\a)}(u_1,\cdots, u_n)+ \cdots }
\end{equation}
where $\a$ labels the various saddle points which contribute to the path integral in the $\sigma \to \infty$ limit. These naturally correspond to locally hyperbolic ``geometries'' on $S^3 - N(\mathcal{L}^n)$ (loosely speaking, solutions to Einstein's equations with negative cosmological constant, but more precisely flat $SL(2,\mathbb{C})$ connections). The function $V^{(\a)}$ is the corresponding oriented \emph{volume} of the link complement, while $\cdots$ denote higher quantum invariants which will not be relevant for us in this work. While it is not easy to write down the metrics explicitly, these geometries can nevertheless be constructed fairly explicitly by gluing together \emph{ideal tetrahedra} in hyperbolic space, following the seminal work of Thurston \cite{Thurston} (see also \cite{purcell2017hyperbolic, ratcliffe2006foundations, Dimofte:2009yn}). Details of this construction and an explicitly worked example are given in Appendix A.
On a general branch $\a$, the geometry associated to the flat connection labelled by the holonomies $(u_1,\cdots, u_n)$ is not geodesically complete \cite{Thurston}. However, there always exists one branch, often called the \emph{geometric branch} denoted by $\a = \mathrm{geom}$, which at the point $u_i=0 \, \forall i$ gives rise to a complete hyperbolic structure.\footnote{Recall that hyperbolic links are defined by the existence of at least one complete hyperbolic structure on the link complement.} In fact, by the Mostow rigidity theorem, such a complete hyperbolic structure is unique. The corresponding volume $V^{(\mathrm{geom})}(0)$ is therefore a topological invariant. This invariant famously appears in a certain asymptotic (double-scaling) limit of the colored Jones polynomial, a statement which goes by the name of the \emph{volume conjecture} \cite{Kashaev:1996kc, 1999math......5075M, Gukov:2003na, Dimofte:2010ep}. Away from $u_i = 0$, the hyperbolic structure on the link complement (at a generic point $u_i$) is not complete; it is nevertheless a legitimate $SL(2,\mathbb{C})$ flat connection that we must sum over in the path integral.
\begin{figure}[t]
\centering
\includegraphics[height=4.5cm]{Fig10.pdf}
\caption{\small{\textsf{The volume function on the geometric (black) and conjugate (blue) branches for a two component link L6a1. The coordinate $u$ here is the real part of one of the parameters on moduli space.} \label{l6a1} }}
\end{figure}
For our purposes however, a different branch will be relevant. Note that in the $\sigma \to \infty$ limit, the dominant contribution in \eqref{sp1} comes from the branch with the most negative volume (see Fig. \ref{l6a1}).\footnote{Recall that these volumes are oriented and thus can have either sign, as explained in Appendix A.} In other words, the branch most relevant for our purposes is the one which contains the global minimum of the volume function $V^{(\a)}(u_i)$, if one exists. There is indeed one such branch, which turns out to be the \emph{conjugate} of the geometric branch $\a = \overline{\mathrm{geom}}$ \cite{1999InMat.136..623D}, which then dominates the sum over saddle points. (Appendix A explains the sense in which this branch is ``conjugate'' to the geometric one.) On this branch the volume is minimized (most negative) at $u_i = 0$. Then from equation \eqref{sp0}, we find that in the $\sigma \to \infty $ limit,
\begin{equation} \label{sp2}
|\mathcal{L}^n\rangle \sim \mathcal{C} \int d^2u_1 \cdots \int d^2u_n e^{-\frac{\sigma}{\pi} V^{(\overline{\mathrm{geom}})}(u_1,\cdots,u_n)} |u_1,\cdots, u_n\rangle,
\end{equation}
where $\mathcal{C}$ is the normalization constant, and we use the $\sim$ symbol to indicate that we are only focussing on the conjugate-geometric branch; we will drop the superscript $\overline{\mathrm{geom}}$ from now on to prevent cluttering notation. Exploiting the $\sigma \to \infty $ limit further, we can expand the volume function around $u_i = u_i^* + \frac{1}{\sqrt{\sigma}} \delta u_i$, where $u_i^*=0$ is the location of the global minimum of the volume function and $\delta u_i \in \mathbb{C}$. Since we are expanding around $u_i=0$, we may as well drop the $\delta$s (with the understanding that now the $u_i$s are general complex numbers) and write
\begin{equation}
V(u_1,\cdots,u_n) = V(0) + \frac{1}{2\sigma} H_{ij; ab} u^a_i u^b_j + \cdots,
\end{equation}
where $a,b$ run over the real and imaginary parts of $ u_i$. This expansion was first studied in the seminal work of Neumann and Zagier \cite{NEUMANN1985307}; we now briefly review some of their results. The expansion is conveniently formulated in terms of a holomorphic function $\Phi(u_i)$ called the \emph{Neumann-Zagier potential}. Importantly, $\Phi$ is an even function of all of the $u_i$'s, and therefore takes the form
\begin{equation}
\Phi(u_i) = \sum_i \frac{\tau^{(0)}_i}{\sigma} u_i^2 +\frac{1}{2\sigma^2} \sum_{i,j} A_{ij}u_i^2 u_j^2 + \cdots
\end{equation}
where $\tau^{(0)}_i$ is the modular parameter of the $i$th torus boundary metric induced from the complete hyperbolic structure at $u_i=0$. In terms of the Neumann-Zagier potential, we can write the volume of the link complement as
\begin{equation}
V(u_i) = V_0 -\frac{1}{4} \sum_i \mathrm{Im} \left(u_i\overline{v_i} \right) +\frac{1}{8}\sum_{k=0}^{\infty} (k-2) \mathrm{Im}\left( \Phi_{(k)} (u_i) \right),
\end{equation}
where
\begin{equation}
v_i = \frac{1}{2}\frac{\partial \Phi}{\partial u_i},
\end{equation}
and $\Phi_{(k)}$ is the degree $k$ part of $\Phi$. Therefore, the volume function takes the form
\begin{equation}\label{eq:Vexpand}
V(u_i) = V_0 +\frac{1}{4\sigma}\sum_i \mathrm{Im}\left(\tau_i^{(0)}\right) u_i\bar{u_i}- \frac{1}{4\sigma^2} \sum_{i,j} \mathrm{Im}\left(u_i\overline{A_{ij}u_iu_j^2}-\frac{1}{2} A_{ij}u_i^2u_j^2\right) + \cdots.
\end{equation}
The state \eqref{sp2} then takes the form
\begin{equation} \label{sp3}
|\mathcal{L}^n\rangle \sim \frac{\mathcal{C}e^{-\frac\sigma\pi V_0} }{\sigma^n} \int d^2 u_1 \cdots \int d^2 u_n e^{-\frac{1}{4\pi}\sum_i \mathrm{Im}\left(\tau_i^{(0)}\right) u_i\bar{u_i}+ \frac{1}{4\pi\sigma} \sum_{i,j} \mathrm{Im}\left(u_i\overline{A_{ij}u_iu_j^2}-\frac{1}{2} A_{ij}u_i^2u_j^2\right) + \cdots} |\frac{1}{\sqrt{\sigma}}u_1,\cdots, \frac{1}{\sqrt{\sigma}} u_n\rangle,
\end{equation}
where the normalization $\mathcal{C}$ can be systematically determined in terms of $\sigma, \tau_i^{(0)}$ etc. Note that at leading order in $\sigma$, the wavefunction we have obtained is a Gaussian wavepacket centered at the global minimum. Importantly, the quadratic part of the exponential is diagonal in the various torus boundaries. This is a direct consequence of the fact that the Neumann-Zagier potential is an even function of the $u_i$'s. Thus, we conclude:
\noindent\textbf{Proposition 4}: \emph{In the limit $\sigma \to \infty$, the state corresponding to any hyperbolic link in $SL(2,\mathbb{C})$ Chern Simons theory is a completely product state, i.e., the entanglement entropy for any sub-link vanishes.}
However, this is really a somewhat trivial manifestation of the fact that the volume is an even function of the $u_i$s. In order to study the entanglement structure, we must then back off from the $\sigma \to \infty$ limit and look at the $1/\sigma$ terms in the exponential. These indeed introduce entanglement between the various torus boundaries. \emph{The off-diagonal elements of the matrix $A_{ij}$ therefore control the entanglement structure of the state at leading order\footnote{Note that the leading order correction to the entropy appears at order $\frac{1}{\sigma^2}$; the same is true of the entanglement negativity. Another subtlety to keep in mind while computing such corrections is that away from $\sigma = \infty$, some of the moduli might take on discrete values.} in $\frac{1}{\sigma}$} or equivalently at leading order in the Newton constant $G_N$. The reader might worry that since we are expanding the volume to $O(1/\sigma)$, we must also include \emph{quantum corrections} to the path integral at this order. This is indeed correct; however, the quantum corrections are themselves even functions of $u_i$ \cite{Dimofte:2009yn}, and therefore at the order we are working only shift the \emph{diagonal} quadratic terms
$$\sum_i \mathrm{Im}\,(\tau_i^{(0)})u_i\bar{u}_i \to \sum_i \mathrm{Im}\,(\tau_i^{(0)} ) u_i\bar{u}_i+ \frac{1}{\sigma} \sum_i \left(\alpha u_i u_i + \beta u_i \bar{u}_i + \gamma \bar{u}_i\bar{u}_i\right)$$ for some $u$-independent constants $\alpha$, $\beta$ and $\gamma$.
This shift in the quadratic part is diagonal in the torus boundaries, and therefore does not introduce any entanglement. Therefore, we may safely focus on the matrix $A_{ij}$ coming from the Neumann-Zagier potential. This matrix is computable, case-by-case, from SnapPy data. In Appendix A, we perform this calculation for the Borromean rings (L6a4) and find
\begin{equation}\label{eq:ABorr}
A_{ij}^{Borr.}=i\,64\left(\begin{array}{ccc}-1/3&1&1\\1&-1/3&1\\1&1&-1/3\end{array}\right).
\end{equation}
The off-diagonal components indicate that, at quartic order, this link state is not a product state over the individual components.
Unfortunately, beyond doing this link-by-link, this is as far as we can go for now; apart from examples of explicit computation (see \cite{aaber2010closed} for one such example), to our knowledge there has been no systematic study of the matrix $A_{ij}$ in the mathematics literature. An interesting question is whether it is possible to show in generality (from the properties of $A_{ij}$) that hyperbolic links have a W-like entanglement structure. We leave this for future work. We end here with a couple of remarks: first, it is important to note that while the detailed computation uses specific geometric structures on the link complement, the entanglement entropy is a \emph{topological invariant} (by construction)! This is exactly analogous to the fact that the hyperbolic volume of the link complement is a topological invariant -- the explanation lies in the Mostow-Prasad rigidity theorem about the uniqueness of the complete hyperbolic structure. Second, we have seen above that the entanglement structure in the $\sigma \to \infty$ limit is essentially controlled by the matrix $A_{ij}$. This is very reminiscent of Abelian Chern Simons theory, where the entanglement structure is controlled entirely by the linking matrix. Indeed, the $\sigma \to \infty$ limit is in some sense a classical limit, albeit a subtle one.\footnote{For instance, it is well known that taking the $k\to \infty$ limit (while keeping the colors fixed) of colored link invariants in non-Abelian Chern Simons theory reduces these colored link invariants to the Abelian ones (which are only sensitive to linking numbers). However, if one takes the double scaling limit $j\to \infty,\; k \to \infty$ with $2j/k$ fixed, then the asymptotic behaviour is very different. Note that the entanglement entropy is indeed sensitive to such a double-scaling limit.} Nevertheless, we have discovered that in this limit, a new matrix appears to control the entanglement structure.
\section{Discussion}
In this paper, we have presented various results on the information theoretic properties of the colored Jones polynomial of multi-component links. We first reviewed the simple case of $U(1)$ Chern Simons theory, where we recast and clarified previous results from \cite{Balasubramanian:2016sro} in terms of the theory of stabilizer groups. Then we presented several new results for non-Abelian Chern-Simons theory: (i) We proved that the entanglement entropy between two sublinks of an arbitrary link provides a lower bound on the minimum genus Heegaard splitting which separates the two sublinks, and thus gives a measure of the topological obstruction for a link to be split, (ii) We then studied the entanglement structures of two topological classes of links, namely torus and hyperbolic links, in $SU(2)$ Chern-Simons theory. We showed that all torus links have a GHZ-like entanglement structure, and provided evidence to suggest that hyperbolic links tend to have a W-like entanglement structure, (iii) In order to get a better handle on hyperbolic links, we complexified the gauge group to $SL(2,\mathbb{C})$, where in the $\sigma \to \infty$ limit we were able to make partial analytical progress using known results from hyperbolic geometry on link complements. In particular, we showed that in the limit $\sigma \to \infty$, all hyperbolic links correspond to product states with no entanglement. Backing off from this limit, we observed that a certain matrix which appears in the Neumann-Zagier potential on the moduli-space of hyperbolic structures on the link complements controls the entanglement structure at leading order in $1/\sigma$. It would be interesting to use this last observation more fully.
There are several natural questions which present themselves at this stage. Does the SLOCC classification of entanglement structures from quantum information theory have a natural adaptation in knot theory to a classification of links? We saw a baby version of this idea manifest itself in the results of this paper, namely that all torus links have GHZ-like entanglement structures, while hyperbolic links seemingly have W-like entanglement structures. In other words, the GHZ/W-classification based on the robustness of the multi-party quantum entanglement seemingly translates to the torus/hyperbolic classification of links (although we should emphasize that we have not yet proved that all hyperbolic links are W-like). Further exploration is required to clarify whether SLOCC classification gives a useful way of characterizing links. A step in this direction would be to explore more detailed aspects of the entanglement structure of links. For instance, given an $n$-component link, we can assign to it a $(2^{n-1}-1)$-vector whose entries are the entanglement entropies of various bi-partitions of the link, a $3\times (\frac{1}{2}(3^{n-1}+1)-2^{n-1})$ matrix corresponding to the entanglement negativities of various tri-partitions, and so on. All these numbers can be computed directly from the colored Jones polynomial, and give a much more refined characterization of the entanglement structure of links.
A second question is whether one can make useful progress in $SL(2,\mathbb{C})$ Chern-Simons theory by using the geometry of hyperbolic link complements. We have shown here that a certain matrix of coefficients in the Neumann-Zagier potential plays an important role. From a mathematical point of view then, it might be useful to study the properties of these coefficients in more detail for hyperbolic links. There is also a naive analogy one can make in this setup with the ``complexity = volume'' conjecture \cite{Stanford:2014jda}. There exists a state-integral model (see \cite{2007JGP....57.1895H, Dimofte:2009yn} for details), or in other words a tensor-network model, for constructing precisely the type of states we studied in the present paper for $SL(2,\mathbb{C})$. In these tensor-network models, one begins with the ideal-tetrahedral decomposition of the link complement (discussed in Appendix \ref{appHypStr}) and inserts one tensor per tetrahedron. The complexity $\mathcal{C}$ of such a network (i.e., the number of tensors in the full network) is naturally lower bounded by a constant times the hyperbolic volume of the link complement\footnote{This just follows from the trivial observation that the volume of an ideal hyperbolic tetrahedron is upper bounded by $\alpha^{-1} = 3 \Lambda(\pi/3)$, where $\Lambda(x)$ is the Lobachevsky function. } :
\begin{equation}
\mathcal{C} \geq \alpha\, V_{\mathrm{hyp}}.
\end{equation}
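As a rough illustration of the numbers involved (our own estimate, not a statement taken from the references), one can evaluate this bound for the Borromean rings using the hyperbolic volume quoted in Table~\ref{tab1}:
\begin{verbatim}
import math

def lobachevsky(theta, terms=100000):
    # Lambda(theta) = (1/2) sum_{m>=1} sin(2 m theta)/m^2
    return 0.5 * sum(math.sin(2 * m * theta) / m ** 2 for m in range(1, terms))

alpha = 1.0 / (3 * lobachevsky(math.pi / 3))   # ~ 0.985
V_hyp = 7.32772                                # Borromean rings (L6a4), Table 1
print(alpha * V_hyp)                           # ~ 7.2, so at least 8 tetrahedra
\end{verbatim}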
It would be interesting to see if one can carefully define the circuit complexity for these tensor networks and show that the ``optimal'' circuit (suitably defined) saturates this inequality.
From a holographic perspective, Chern-Simons theory is known to be dual to closed topological strings on resolved conifold geometries \cite{Gopakumar:1998ki}. It is clearly interesting to ask whether the entanglement entropy we have studied in this work has a suitable Ryu-Takayanagi interpretation from the closed string point of view. The bound on the minimal genus separating surfaces proved in this paper resembles the Ryu-Takayanagi minimal-area prescription (or more precisely the minimal-area bound which appears in MERA tensor networks), and might point to a deeper story underlying this resemblance.
Finally, from a more practical viewpoint it is also an interesting question whether the entanglement we have studied in the present work has any applications to real materials. In particular, one wonders whether the states we have described can be constructed in the lab.
\subsection*{Acknowledgements}
We would like to thank Pawel Caputa, Ron Donagi, Nathan Dunfield, Sergei Gukov, Taylor Hughes, Mark Mezei, Eric Sharpe and Tadashi Takayanagi for useful conversations or email communications. We are particularly grateful to Tudor Dimofte and Alex Maloney for several useful conversations and communications,
and to Alex Maloney for brief initial collaboration. Research funded by the Simons Foundation (\#385592, VB)
through
the It From Qubit Simons Collaboration, the US Department of Energy contract \#FG02-05ER-41367 and the US Department of Energy contract \#DE-SC0015655 (RGL).
\section{Introduction}
Fabry-P\'{e}rot cavities (see, e.g., Ref.~\cite{metzger04,gigan06,kleckner06})
can trap incident light and
have a fixed mirror at one end and a movable mirror \cite{mirror}
at the other end. This movable mirror is allowed to oscillate
harmonically around a fixed position. The oscillating mirror
allows infinitesimal contractions and dilations of the cavity
length, resulting in a radiation pressure on the mirror which is
proportional to the intensity of the trapped cavity field. This
mechanism facilitates an optical-mechanical coupling between the
cavity field and the mirror and is now generating considerable
interest. In recent years, for example, a high-precision
spectrometer for detecting gravitational
waves~\cite{meers89,aguirre87} and an interferometric measurement
apparatus~\cite{tittonen99,arcizet06} have used movable cavity
mirrors as sensing devices. For detecting weak signals, a number
of experiments have reduced the thermal fluctuations in the
mirrors, effectively lowering the temperature of the
mirror~\cite{metzger04,gigan06,kleckner06}.
A key variable in previous designs is the number of photons
trapped inside the cavity. Since the radiation pressure on the
mirror is proportional to the photon number, it is desirable to
increase this photon number in order to increase the magnitude of
the radiation pressure and hence to control or cool down the
mirror more efficiently. Moreover, the cooling of a nanomechanical
resonator or an oscillating mirror has been studied extensively in recent years
(e.g., in Refs.~\cite{fxue07-1,jqyou08}). This then naturally leads us to
conceive of a cavity filled with a dielectric medium to achieve this
purpose. Specifically, following our previous idea in
Ref.~\cite{he07}, we now propose that this medium can be made of a
gas of two-level atoms.
Recently, Ref.~\cite{meiser06} has proposed a similar scheme to
target an interesting optical effect: the cavity mode forms an
optical lattice inside the cavity and arranges the free atoms that
were deposited into the cavity to form a Mott-insulator-like
medium with atoms trapped at the lattice sites. It was shown that,
with the atoms assuming an initial Bose-Einstein condensate
distribution, such an atomic condensate would act effectively as a
semi-transparent mirror itself and shift the cavity to function in
its ``superstrong coupling regime''. Nonetheless, based on
Monte-Carlo simulations~\cite{asboth07,meiser07}, the realizability
of this proposal has been disputed.
In this paper, we analyze the dynamical effect that occurs when
placing an atomic medium into a Fabry-P\'{e}rot cavity, but assuming
that the atoms have been placed inside a transparent gas chamber.
Due to the strengthened coupling, now enhanced by the mediating
atoms between the cavity field and the mirror, the resulting
three-component system (the gas of atoms, the cavity field, and
the oscillating mirror) induces interesting phenomena worth
investigating. We point out here that, in contrast with the BEC
atoms in Ref.~\cite{meiser06} that can only be realized at very
low temperatures, our gas of atoms makes use of low-energy
collective excitations, which avoids the stringent low-temperature
requirement.
To better extract the physical features of each part of this three-component
system, we assume adiabatic processes over different time scales.
We employ the Born-Oppenheimer approximation to study the dynamic
behavior of a micro-mirror by assuming it is a slow-varying part.
We also study the dynamic behavior of the atomic excitations as a
fast-varying process, while the reflected radiation from the mirror
stays relatively constant. The complex interactions between the system
components lead us to expect many interesting physical phenomena
including: (i) realizing an adiabatic entanglement process~\cite{suncp00-adia},
(ii) producing squeezed modes as in optical parametric oscillators,
(iii) detecting polaritons through the mechanical mode of the mirror,
and (iv) detecting the mechanical mode of the mirror through the polariton
spectrum.
We first describe the model in Sec.~\ref{sec:model}. The resulting
entanglement process is then described in Sec.~\ref{sec:entanglement}
and its quantification follows in Sec.~\ref{sec:quan_entanglement}.
The squeezed variance is derived in Sec.~\ref{sec:quad_variance}
and conclusions are presented in Sec.~\ref{sec:conclusion}.
\section{\label{sec:model}Atomic Optomechanics}
\subsection{The Exciton Model}
As shown in Fig.~\ref{fig:model_setup}, the system we study here
consists of a gas of two-level atoms, each with the same eigen-frequency
$\Omega_{0}$, a Fabry-P\'{e}rot cavity carrying a photonic field with
mode frequency $\Omega_{\mathrm{C}}$, as well as a harmonically bounded
micro-mirror with coordinate $x$, momentum $p$, mass $m$ and oscillating
frequency $\Omega_{\mathrm{M}}$. The system Hamiltonian, with units
normalized according to $\hbar=1$ to simplify the notation, is
\begin{eqnarray}
H & = & \Omega_{0}\sum_{j}\sigma_{j}^{z}+\Omega_{\mathrm{C}}a^{\dagger}a+\sum_{j}(g_{j}\sigma_{j}^{+}a+g_{j}^{\ast}\sigma_{j}^{-}a^{\dagger})\nonumber \\
& & +\frac{p^{2}}{2m}+\frac{1}{2}m\Omega_{\mathrm{M}}^{2}x^{2}+\eta a^{\dagger}ax.\label{eq:orig_tot_ham}
\end{eqnarray}
In Eq.~(\ref{eq:orig_tot_ham}), the excited-state projector
$\sigma_{j}^{z}=\left|e_j\right\rangle \left\langle e_j\right|$
accounts for the internal energy of each two-level atom, while
$\sigma_{j}^{+}=\left|e_j\right\rangle \left\langle g_j\right|$ and
$\sigma_{j}^{-}=\left|g_j\right\rangle \left\langle e_j\right|$ in the
last term of the first line denote the flip-up and flip-down
operators of the $j$-th atom. Here, $a$ and $a^{\dagger}$ denote,
respectively, the annihilation and creation operators of the
cavity field. The last term of the second line is a
radiation-pressure-type interaction on the mirror, which is
proportional to the incident photon number. We assume that no
direct interaction exists between the atoms and the mirror; the
indirect interaction between them solely relies on the cavity
field.
\begin{figure}
\includegraphics[bb=100bp 220bp 640bp 740bp,clip,width=2.8in]{cavity_atom_ensemble}
\caption{(color online) Schematic diagram illustrating the system
with three main components: (i) a gas of two-level atoms, (ii) a
movable mirror at one end, and (iii) a cavity field mediating the
interaction between the atoms and the oscillating
mirror.\label{fig:model_setup}}
\end{figure}
Since all the atoms have the same frequency $\Omega_{0}$, we can
consider the gas of atoms as a whole to be a
Hopfield-dielectric~\cite{hopfield58} filling the cavity, with its
behavior described by collective low-energy excitations using the
exciton annihilation operator~\cite{he07}
\begin{equation}
b=\lim_{N\to\infty}\sum_{j=1}^{N}\frac{g_{j}^{\ast}}{G}\sigma_{j}^{-}\end{equation}
and its Hermitian conjugate $b^{\dagger}$, where \[
G=\sqrt{\sum_{j=1}^{N}|g_{j}|^{2}}\]
can be understood as the total coupling strength. The exciton operators
$b$ and $b^{\dagger}$ are bosonic and consistent with the Dicke
model; a similar spin-bosonization technique has been used to study
nuclear spins~\cite{song05}. The resulting Hamiltonian for the
system can then be written as \begin{eqnarray}
H & = & \Omega_{0}b^{\dagger}b+(\Omega_{\mathrm{C}}+\eta x)a^{\dagger}a+G(b^{\dagger}a+ba^{\dagger})\nonumber \\
& & +\frac{p^{2}}{2m}+\frac{1}{2}m\Omega_{\mathrm{M}}^{2}x^{2}.\label{eq:tot_ham}\end{eqnarray}
\subsection{Interaction between the oscillating mirror and the polaritons}
The coupling between the excitons and the cavity field can lead to
the emergence of \emph{dressed excitons}, here denoted as
polaritons. In the adiabatic limit of the oscillating mirror, that
is, when the mirror coordinate $x$ stays unchanged with respect to
the fast-varying field occupation number $a^{\dagger}a$, we can
diagonalize the interaction between the excitons and the cavity
field by rotating the Hilbert space of these two components
through an angle\begin{equation}
\theta=\arctan\left(\frac{2G}{\Omega_{0}-\Omega_{\mathrm{C}}-\eta
x}\right)\end{equation}
for which we define a unitary transformation\begin{eqnarray}
A & = & a\,\cos\negmedspace\left(\frac{\theta}{2}\right)-b\,\sin\negmedspace\left(\frac{\theta}{2}\right),\\
B & = & a\,\sin\negmedspace\left(\frac{\theta}{2}\right)+b\,\cos\negmedspace\left(\frac{\theta}{2}\right).\end{eqnarray}
The $A$ and $B$ operators above still obey bosonic commutation
relations and can be understood as ``dressed exciton modes'' that
mix atomic excitations $b$ with the cavity field $a$. In other
words, these dressed exciton modes are polaritons~\cite{he07} of a
phonon mode $A$ and an optical mode $B$.
Under this view, the Hamiltonian of our system in Eq.~(\ref{eq:tot_ham}) can be divided
into two portions, the Hamiltonian $H_{\mathrm{M}}$ of the
mirror's oscillation and the potential $V$ from the polaritons
acting on the mirror, i.e.\begin{eqnarray}
H & = & H_{\mathrm{M}}+V,\\
H_{\mathrm{M}} & = & H_{\mathrm{mirror\ oscillations}}\\
& = & \frac{p^{2}}{2m}+\frac{1}{2}m\Omega_{\mathrm{M}}^{2}x^{2},\\
V & = & V_{\mathrm{polaritons\ on\ mirror}}\\
& = & \frac{1}{2}(\Omega_{0}+\Omega_{\mathrm{C}}+\eta x)(A^{\dagger}A+B^{\dagger}B)\label{eq:polariton_pot}\\
& & -\frac{1}{2}\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}}-\eta x)^{2}+4G^{2}}(A^{\dagger}A-B^{\dagger}B).\nonumber \end{eqnarray}
The potential $V$ in Eq.~(\ref{eq:polariton_pot}) quantifies the
interaction between the mechanical mirror and the modes of the cavity.
Without the {}``filling'' atoms, the cavity mode is simply the photon
field $a$, and this potential $V$ reduces to a linear
radiation pressure impinging on the mirror, if we do not consider
the nonlinear Kerr effect that could be induced by the wave detuning
due to the flexible length of the cavity~\cite{mancini94,zrgong}.
With the atoms, the linear radiation pressure becomes proportional to the
total number ($A^{\dagger}A+B^{\dagger}B$) of polaritons [$\eta
x(A^{\dagger}A+B^{\dagger}B)$ in Eq.~(\ref{eq:polariton_pot})]
rather than the number ($a^{\dagger}a$) of photons [$\eta
a^{\dagger}ax$ in Eq.~(\ref{eq:orig_tot_ham})]. Moreover, the
atoms also impose an additional nonlinear term (the second term in
Eq.~(\ref{eq:polariton_pot})) for non-zero coupling constant $G$.
Note that this nonlinear effect, in the second term of
Eq.~(\ref{eq:polariton_pot}), increases with the number
$N$ of filling atoms because $G$ grows with $N$. Thus, the gas of
atoms enhances the coupling between the cavity field and the mirror.
This enhanced coupling would produce squeezed states of the mirror
mode and also entanglement between the mirror and the polaritons,
which will be discussed in Sec.~\ref{sec:entanglement}. Without
the intervening atoms, the potential $V$ simply introduces a
displacement to the mirror, producing neither squeezing nor
entanglement.
\section{\label{sec:entanglement}Adiabatic Entanglement and Evolution under
Squeezing}
\subsection{\label{sub:geo_entanglement}Entanglement Using the Born-Oppenheimer
Approximation}
By considering fast-varying polariton modes and slow-varying mirror
modes, we can write the wave vector at time $t$ for our system under
the Born-Oppenheimer approximation \begin{equation}
\left|\psi(t)\right\rangle =\sum_{\mathbf{n}}\left|\mathbf{n}\right\rangle \otimes\left|\phi(\mathbf{n},t)\right\rangle \label{eq:state_vec}\end{equation}
where $\mathbf{n}=\left\{ n_{A},n_{B}\right\} $ denotes the collective
index of energy levels of the polariton modes $A$ and $B$. Thus,
$A^{\dagger}A\left|\mathbf{n}\right\rangle =n_{A}\left|\mathbf{n}\right\rangle $
and $B^{\dagger}B\left|\mathbf{n}\right\rangle =n_{B}\left|\mathbf{n}\right\rangle $.
Here, $\left|\mathbf{n}\right\rangle $ describes the time-independent
wave vector for the polariton space in its adiabatic limit and $\left|\phi(\mathbf{n},t)\right\rangle $
the time-dependent wave vector for the mirror. The potential $V$
in Eq.~(\ref{eq:polariton_pot}) then becomes an effective c-number
according to the eigenspectrum $\mathbf{n}$,\begin{eqnarray}
V_{\mathbf{n}} & = & \frac{1}{2}(\Omega_{0}+\Omega_{\mathrm{C}}+\eta x)(n_{B}+n_{A})\nonumber \\
& & +\frac{1}{2}\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}}-\eta x)^{2}+4G^{2}}(n_{B}-n_{A}).\label{eq:BO_pot}\end{eqnarray}
We consider the displacement of the mirror $x$ to be small around
its equilibrium position $x=0$ and thus expand
Eq.~(\ref{eq:BO_pot}) up to second order in $x$
\begin{multline}
V_{\mathbf{n}}=\frac{1}{2}(\Omega_{0}+\Omega_{\mathrm{C}})(n_{B}+n_{A})\\
+\frac{1}{2}\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}}(n_{B}-n_{A})\\
+\frac{\eta}{2}\left[(n_{B}+n_{A})-\frac{(\Omega_{0}-\Omega_{\mathrm{C}})(n_{B}-n_{A})}{\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}}}\right]x\\
+\frac{N|g|^{2}\eta^{2}(n_{B}-n_{A})}{\left((\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}\right)^{\frac{3}{2}}}x^{2}.\label{eq:eff_pot}
\end{multline}
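For completeness, the linear and quadratic coefficients above follow from the
Taylor expansion of the square root around $x=0$ (a short intermediate step,
stated here only to ease verification):
\begin{align*}
\frac{d}{dx}\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}}-\eta x)^{2}+4G^{2}}\,\bigg|_{x=0}
&=-\,\frac{\eta\,(\Omega_{0}-\Omega_{\mathrm{C}})}{\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}}}\,,\\
\frac{d^{2}}{dx^{2}}\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}}-\eta x)^{2}+4G^{2}}\,\bigg|_{x=0}
&=\frac{4G^{2}\eta^{2}}{\left((\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}\right)^{\frac{3}{2}}}\,,
\end{align*}
so that the $x^{2}$ coefficient in Eq.~(\ref{eq:eff_pot}) equals
$\frac{1}{2}(n_{B}-n_{A})$ times one half of the second derivative,
consistent with the identification $G^{2}=N|g|^{2}$ for the collective
coupling implied by Eq.~(\ref{eq:eff_pot}).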
Using Eq.~(\ref{eq:eff_pot}), when the polariton modes are in
state $\left|\mathbf{n}\right\rangle $, the effective Hamiltonian
operating on the mirror is
\begin{equation} H_{\mathbf{n}}=H_{\mathrm{M}}
+V_{\mathbf{n}}.\label{eq:eff_ham}
\end{equation}
If we prepare an initial state of the system
\begin{equation}
\left|\psi(0)\right\rangle
=\sum_{\mathbf{n}}\lambda_{\mathbf{n}}\left|\mathbf{n}\right\rangle
\otimes\left|\phi(0)\right\rangle
\end{equation}
where $\lambda_{\mathbf{n}}$ is the expansion coefficient, then
the mirror wave subvector will evolve along the path generated by
the effective Hamiltonian $H_{\mathbf{n}}$ \begin{eqnarray}
\left|\psi(t)\right\rangle & = & \sum_{\mathbf{n}}\lambda_{\mathbf{n}}\left|\mathbf{n}\right\rangle \otimes\left|\phi_{\mathbf{n}}(t)\right\rangle ,\\
\left|\phi_{\mathbf{n}}(t)\right\rangle & = & e^{-iH_{\mathbf{n}}t}\left|\phi(0)\right\rangle .\label{eq:mir_vec}\end{eqnarray}
In other words, the final state of the mirror is determined by
the state of the polaritons in their adiabatic limit;
specifically, the number distribution of the polaritons $\left\{ n_{A},n_{B}\right\} $
decides the evolution of the mirror.
Geometrically, if the initial state $\left|\phi(0)\right\rangle $
was conceived to be represented by a point on a manifold over the
Hilbert space of the mirror, then the effective Hamiltonians
$H_{\mathbf{n}}$ and $H_{\mathbf{m}}$ for
$\mathbf{n}\neq\mathbf{m}$ can be regarded as generators of the
motion of the same vector $\left|\phi(0)\right\rangle $ towards
different directions over the manifold. The evolution over time
due to different generators will leave trajectories of different
branches of paths on the manifold. The end points
$\left|\phi_{\mathbf{n}}(t)\right\rangle $ and
$\left|\phi_{\mathbf{m}}(t)\right\rangle $ of the paths are
separated and the separation depends on the discrepancy between
$H_{\mathbf{n}}$ and $H_{\mathbf{m}}$ induced by different
polariton distributions. The nonzero separation reflects
geometrically the adiabatic entanglement of the mirror and the
polaritons.
The original concept of adiabatic entanglement proposed in Ref.~\cite{suncp00-adia}
concerns a two-component model of a fast-varying main system to be
measured and a slow-varying detector apparatus. The system and the
detector become entangled over time as described by Eq.~(\ref{eq:state_vec}).
We hence regard the tripartite system discussed above as a practical
realization of the adiabatic entanglement model, identifying the polaritons
with the system and the mirror with the detector.
\subsection{Evolution of squeezed coherent states of the mirror}
Before quantifying the entanglement described above, we first study
the dynamics of the mirror via the effective Hamiltonian $H_{\mathbf{n}}$.
If we write the coordinate operator $x$ and the momentum operator
$p$ of the mirror in their creation and annihilation operator form
\begin{eqnarray}
x & = & \frac{1}{\sqrt{2m\Omega_{\mathrm{M}}}}(c+c^{\dagger}),\label{eq:cordinate}\\
p & = & -i\sqrt{\frac{m\Omega_{\mathrm{M}}}{2}}(c-c^{\dagger}),
\end{eqnarray}
the effective Hamiltonian, i.e. Eq.~(\ref{eq:eff_ham}), then
reads
\begin{equation}
H_{\mathbf{n}}=(\Omega_{\mathrm{M}}+2\alpha_{\mathbf{n}})c^{\dagger}c+\alpha_{\mathbf{n}}(c^{2}+c^{\dagger2})+\beta_{\mathbf{n}}(c+c^{\dagger})+\gamma_{\mathbf{n}},\label{eq:quantized_ham}
\end{equation}
where the coefficients depend on the polariton modes
\begin{align}
\alpha_{\mathbf{n}}= & \frac{G^{2}\eta^{2}(n_{B}-n_{A})}{2m\Omega_{\mathrm{M}}\left((\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}\right)^{\frac{3}{2}}},\label{eq:alpha}\\
\beta_{\mathbf{n}}= & \frac{\eta}{\sqrt{8m\Omega_{\mathrm{M}}}}\left[(n_{B}+n_{A})-\frac{(\Omega_{0}-\Omega_{\mathrm{C}})(n_{B}-n_{A})}{\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}}}\right],\label{eq:beta}\\
\gamma_{\mathbf{n}}= & \frac{1}{2}(\Omega_{0}+\Omega_{\mathrm{C}})(n_{B}+n_{A})\\
& +\frac{1}{2}\sqrt{(\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}}\;(n_{B}-n_{A})\nonumber \\
& +\frac{N|g|^{2}\eta^{2}(n_{B}-n_{A})}{m\Omega_{\mathrm{M}}\left((\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2}\right)^{\frac{3}{2}}}.\nonumber
\end{align}
The first- and second-order terms of $c$ and $c^{\dagger}$ in Eq.~(\ref{eq:quantized_ham})
can be recognized as the polaritons inducing a squeezed coherent state
in the mirror. The amount of displacement can be found by writing
Eq.~(\ref{eq:quantized_ham}) as
\begin{align}
H_{\mathbf{n}}= & D^{\dagger}\negmedspace\left(\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)H'_{\mathbf{n}}\; D\negmedspace\left(\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)
\end{align}
where $D(\alpha)=\exp\left\{ \alpha^{\ast}c-\alpha c^{\dagger}\right\} $
is the displacement operator for the mirror mode. The resulting Hamiltonian in the displaced
space is \begin{equation}
H'_{\mathbf{n}}=(\Omega_{\mathrm{M}}+2\alpha_{\mathbf{n}})c^{\dagger}c+\alpha_{\mathbf{n}}(c^{2}+c^{\dagger2})-\frac{\beta_{\mathbf{n}}^{2}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}+\gamma_{\mathbf{n}}.\label{eq:dis_ham}\end{equation}
The amount of squeezing can be found by further diagonalizing Eq.~(\ref{eq:dis_ham})
through a Bogoliubov transformation\begin{eqnarray}
C_{\mathbf{n}} & = & \mu_{\mathbf{n}}\,c-\nu_{\mathbf{n}}\,c^{\dagger},\label{eq:bogo_tfm}\\
\mu_{\mathbf{n}} & = & \frac{1}{2}\left[\left(\frac{\Omega_{\mathrm{M}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)^{\frac{1}{4}}+\left(\frac{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}{\Omega_{\mathrm{M}}}\right)^{\frac{1}{4}}\right],\\
\nu_{\mathbf{n}} & = & \frac{1}{2}\left[\left(\frac{\Omega_{\mathrm{M}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)^{\frac{1}{4}}-\left(\frac{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}{\Omega_{\mathrm{M}}}\right)^{\frac{1}{4}}\right],\end{eqnarray}
for which the resulting Hamiltonian becomes\begin{equation}
H'_{\mathbf{n}}=\Omega_{\mathrm{M},\mathbf{n}}\:
C_{\mathbf{n}}^{\dagger}C_{\mathbf{n}}+\zeta_{\mathbf{n}}\label{eq:linear_ham}\end{equation}
where $\Omega_{\mathrm{M},\mathbf{n}}$ denotes the modified
pseudo-energy splitting of the transformed mirror excitations
according to $C_{\mathbf{n}}$ and $C_{\mathbf{n}}^{\dagger}$
\begin{equation}
\Omega_{\mathrm{M},\mathbf{n}}=\sqrt{\Omega_{\mathrm{M}}(\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}})},
\end{equation}
and $\zeta_{\mathbf{n}}$ denotes the non-operator terms
\begin{equation}
\zeta_{\mathbf{n}}=-\,\frac{\left(\sqrt{\Omega_{\mathrm{M}}}-\sqrt{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)^{2}}{4}-\frac{\beta_{\mathbf{n}}^{2}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}+\gamma_{\mathbf{n}}.
\end{equation}
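As a consistency check (a one-line computation), the Bogoliubov coefficients
above satisfy
\[
\mu_{\mathbf{n}}^{2}-\nu_{\mathbf{n}}^{2}
=\left(\frac{\Omega_{\mathrm{M}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)^{\frac{1}{4}}
\left(\frac{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}{\Omega_{\mathrm{M}}}\right)^{\frac{1}{4}}=1\,,
\]
so that $C_{\mathbf{n}}$ and $C_{\mathbf{n}}^{\dagger}$ obey the same bosonic
commutation relation as $c$ and $c^{\dagger}$.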
Here, $\Omega_{\mathrm{M},\mathbf{n}}$ is called a
pseudo-frequency because it might become imaginary for some values
of the index $\mathbf{n}$. This reflects the fact that the
distribution of the polaritons has a strong influence on the
time evolution of the mirror, as we pointed out above. In the
next subsection, we shall consider more concretely when
$\Omega_{\mathrm{M},\mathbf{n}}$ is real or imaginary, in the context of
the Loschmidt echo.
The transformation Eq.~(\ref{eq:bogo_tfm}) is physically equivalent
to squeezing the operator $c$. To simplify the derivation that follows,
we define this squeezing process inversely through the operator $S_{\mathbf{n}}$
\begin{eqnarray}
c & = & S_{\mathbf{n}}^{\dagger}\; C_{\mathbf{n}}\; S_{\mathbf{n}}\\
S_{\mathbf{n}} & = & \exp\left\{ \frac{r_{\mathbf{n}}}{2}\, C_{\mathbf{n}}^{2}-\frac{r_{\mathbf{n}}}{2}\, C_{\mathbf{n}}^{\dagger2}\right\} \label{eq:squeeze_opt}
\end{eqnarray}
where $\cosh r_{\mathbf{n}}=\mu_{\mathbf{n}}$. Over an initial
coherent state $\left|\alpha\right\rangle $ with
$c\left|\alpha\right\rangle =\alpha\left|\alpha\right\rangle $, we
can define a special {}``coherent state''
\begin{equation}
\left|\alpha\right\rangle
_{\mathbf{n}}=S_{\mathbf{n}}\left|\alpha\right\rangle
\end{equation}
according to the operator $C_{\mathbf{n}}$, i.e. \[
C_{\mathbf{n}}\left|\alpha\right\rangle
_{\mathbf{n}}=\alpha\left|\alpha\right\rangle _{\mathbf{n}}.\] The
time evolution of the mirror, starting from an initial vacuum
state, can then be computed as:
\begin{eqnarray}
\left|\phi_{\mathbf{n}}(t)\right\rangle & = & e^{-iH_{\mathbf{n}}t}\left|0\right\rangle \nonumber \\
& = & D^{\dagger}\negmedspace\left(\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)e^{-iH'_{\mathbf{n}}t}\left|\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right\rangle \nonumber \\
& = & D^{\dagger}\negmedspace\left(\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)S_{\mathbf{n}}^{\dagger}(t)\left|\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right\rangle _{\mathbf{n}}\nonumber \\
& = & \left|\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}(e^{-i\Omega_{\mathrm{M},\mathbf{n}}t}-1)\right\rangle \label{eq:final_state}
\end{eqnarray}
where $S_{\mathbf{n}}^{\dagger}$ is the Hermitian conjugate of the
squeezing operator $S_{\mathbf{n}}$ in Eq.~(\ref{eq:squeeze_opt})
in the Heisenberg picture; more explicitly,\begin{equation}
S_{\mathbf{n}}^{\dagger}(t)=\exp\left\{
\frac{r_{\mathbf{n}}(t)}{2}C_{\mathbf{n}}^{\dagger2}-\frac{r_{\mathbf{n}}(t)}{2}C_{\mathbf{n}}^{2}\right\}
\end{equation}
with $r_{\mathbf{n}}(t)=r_{\mathbf{n}}\exp\left\{ -i\Omega_{\mathrm{M},
\mathbf{n}}t\right\} $. This derivation is similar to the
technique used in Ref.~\cite{ydwang04} for computing the evolution
of squeezed states. The difference is that the nature of the coupling in
the system we consider permits the entanglement of the oscillating
mirror even when it is initialized in a vacuum state, which avoids the
difficulty of preparing a coherent superposition. Squeezed
states of polaritons have been studied in Ref.~\cite{xdhu96-1},
and phonon squeezed states in Ref.~\cite{xdhu96-2}.
\section{\label{sec:quan_entanglement}Quantification of Decoherence and Entanglement}
\subsection{Loschmidt Echo}
At the end of Sec.~\ref{sub:geo_entanglement}, we interpreted the
adiabatic entanglement as two distinct end points of the evolution
over a manifold. The metric distance between the two points naturally
becomes an appropriate measure of the degree of coherence or correlation
between the two quantum states. The Loschmidt echo, which has been
known to characterize the decoherence of a perturbed system~\cite{cucchietti03},
plays the role of metric. Originally this echo was defined as the
wave function overlap between the states with and without the presence
of the perturbing potential. This echo exactly describes the dynamic
sensitivity of the system in the context of quantum chaos.
In our case, the perturbation potential can be understood as $(H_{\mathbf{n}}-H_{\mathbf{m}})$
and the echo as \[
L_{\mathbf{n,m}}(t)=|\left\langle \phi_{\mathbf{n}}(t)|\phi_{\mathbf{m}}(t)\right\rangle |.\]
Using Eq.~(\ref{eq:final_state}), we find
\begin{multline}
L_{\mathbf{n,m}}(t) =
\exp\left\{-\sum_{i=\mathbf{m},\mathbf{n}}\frac{2\beta_{i}^{2}}
{(\Omega_{\mathrm{M}}+4\alpha_{i})^{2}}\sin^{2}
\left(\frac{\Omega_{\mathrm{M},i}}{2}t\right)\right.\\
+\left[\sum_{i=\mathbf{m},\mathbf{n}}\sin^{2}\left(\frac{\Omega_{\mathrm{M},i}}{2}t\right)
-\sin^{2}\left(\Omega_{\mathrm{M},\mathbf{n-m}}\,t\right)\right]
\times\\
\left.\frac{\beta_{\mathbf{n}}\beta_{\mathbf{m}}}{(\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}})
(\Omega_{\mathrm{M}}+4\alpha_{\mathbf{m}})}\right\}\,,\label{eq:Loschmidt}
\end{multline}
with
\begin{equation}
\Omega_{\mathrm{M},\mathbf{n-m}}=\frac{1}{2}(\Omega_{\mathrm{M},\mathbf{n}}-\Omega_{\mathrm{M},\mathbf{m}}).
\end{equation}
Note that when $\Omega_{\mathrm{M},\mathbf{n}}$ and
$\Omega_{\mathrm{M},\mathbf{m}}$ are real, the echo exhibits a
cyclic collapse and revival, similar to the decoherence effect
shown in Ref.~\cite{ydwang04}, except that the oscillation is not
simply sinusoidal. When the two pseudo-frequencies become
imaginary, which occurs when
\begin{equation}
\alpha_{\mathbf{n}}<-\,\frac{\Omega_{\mathrm{M}}}{4}
\end{equation}
for some $\mathbf{n}$ or equivalently\begin{equation}
\left(n_{A}-n_{B}\right)\;>\frac{m\Omega_{\mathrm{M}}^{2}((\Omega_{0}-\Omega_{\mathrm{C}})^{2}+4G^{2})^{\frac{3}{2}}}{2G^{2}\eta^{2}},\label{eq:critical_cond}\end{equation}
the sinusoidal functions become hyperbolic and the echo damps
exponentially with time. Whether the latter can happen depends
on the difference between the excitation numbers of the polaritons.
How large this difference needs to be depends on the coupling
constant $G$, which in turn increases with the number $N$ of atoms
in the cavity. In other words, by choosing the number $N$ of atoms we can
operate our system in two regimes, with either periodic or hyperbolic
Loschmidt echoes.
Figure~\ref{fig:loschmidt_echo} plots the echo $L_{\mathbf{n,m}}$
between two mirror states over the same period of time for the two
regimes. Without loss of generality, the parameters are all set to
orders of magnitude accessible to current experiments: $\Omega_{\mathrm{M}}/2\pi=10$
MHz, $\alpha_{\mathbf{n}}/2\pi=10^{11}$ Hz and $\beta_{\mathbf{n}}/2\pi=10^{7}$
Hz. The periodic Loschmidt echo in Fig.~\ref{fig:loschmidt_echo}(a)
demonstrates the collapse and revival of decoherence between two mirror
states whereas the hyperbolic type of echo in Fig.~\ref{fig:loschmidt_echo}(b)
shows a straight one-way decoherence. In the language of Ref.~\cite{cucchietti03},
\[
\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}=0\]
is a critical point of dynamic sensitivity. As long as Eq.~(\ref{eq:critical_cond})
is not satisfied, the evolution paths of the two states on the manifold always
remain close to each other, giving an almost perfect echo. Once the
parameters cross to the side where Eq.~(\ref{eq:critical_cond}) holds,
the echo is lost almost instantly with no revival, as shown in
Fig.~\ref{fig:loschmidt_echo}(b).
\begin{figure}
\includegraphics[width=3.2in]{loschmidt_echo}
\caption{Plots of the Loschmidt echo $L_{\mathbf{n,m}}$ for two mirror states
under the adiabatic entanglement for the operating regimes of (a)
circular functions and (b) hyperbolic functions. \label{fig:loschmidt_echo}}
\end{figure}
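To make the two operating regimes easy to reproduce numerically, the following
minimal Python sketch evaluates the echo directly from the displaced-state
amplitudes of Eq.~(\ref{eq:final_state}), using the standard coherent-state
overlap $|\langle\alpha|\beta\rangle|=\exp(-|\alpha-\beta|^{2}/2)$. The
parameter values are hypothetical and chosen only for illustration; they are
not the values behind Fig.~\ref{fig:loschmidt_echo}.
\begin{verbatim}
import numpy as np

def pseudo_freq(Omega_M, alpha):
    # Omega_{M,n} = sqrt(Omega_M (Omega_M + 4 alpha)); complex dtype so that
    # the hyperbolic regime (Omega_M + 4 alpha < 0) is covered as well.
    return np.sqrt(complex(Omega_M * (Omega_M + 4.0 * alpha)))

def coherent_amplitude(t, Omega_M, alpha, beta):
    # Displaced-state amplitude of Eq. (final_state):
    #   beta / (Omega_M + 4 alpha) * (exp(-i Omega_{M,n} t) - 1)
    B = beta / (Omega_M + 4.0 * alpha)
    return B * (np.exp(-1j * pseudo_freq(Omega_M, alpha) * t) - 1.0)

def loschmidt_echo(t, Omega_M, alpha_n, beta_n, alpha_m, beta_m):
    # |<a|b>| = exp(-|a - b|^2 / 2) for coherent states |a>, |b>.
    a = coherent_amplitude(t, Omega_M, alpha_n, beta_n)
    b = coherent_amplitude(t, Omega_M, alpha_m, beta_m)
    return np.exp(-0.5 * np.abs(a - b) ** 2)

twopi = 2.0 * np.pi
Omega_M = twopi * 1.0e7                      # 10 MHz mirror frequency
t = np.linspace(0.0, 2.0e-6, 2000)
# Oscillatory regime: Omega_M + 4 alpha > 0 for both polariton indices.
echo_osc = loschmidt_echo(t, Omega_M, twopi * 2.0e5, twopi * 1.0e7,
                          twopi * 5.0e5, twopi * 1.0e7)
# Hyperbolic regime: Omega_M + 4 alpha_n < 0, so the echo decays monotonically.
echo_hyp = loschmidt_echo(t, Omega_M, -twopi * 1.0e7, twopi * 1.0e7,
                          twopi * 5.0e5, twopi * 1.0e7)
\end{verbatim}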
\subsection{Fidelity}
Fidelity serves as another metric for measuring the correlation between
two quantum states. When seen in the coordinate space, the fidelity
roughly represents the overlap of the spatial wave packets of the
two states (illustrated in Fig.~\ref{fig:fidelity} as the shaded
region). Defined as the inner product of the ground states of two
Hamiltonians, its physical meaning differs from that of the Loschmidt
echo in that it is not a time-dependent measure of the distance between
two evolving states, but a static estimate of the differentiating
effects of two dynamic evolution generators. The fidelity has recently
seen extensive applications in characterizing quantum phase transitions
in strongly correlated systems~\cite{zanardi07}.
\begin{figure}
\includegraphics[width=3in]{fidelity}
\caption{Illustration of the fidelity between two quantum state vectors represented
in coordinate space as Gaussian functions.\label{fig:fidelity}}
\end{figure}
For our model, we use the fidelity to estimate the effects between
different polariton distributions on the mirror, under the Born-Oppenheimer
approximation. From Eqs.~(\ref{eq:dis_ham}), (\ref{eq:linear_ham})
and (\ref{eq:squeeze_opt}), the mirror ground state of the effective
Hamiltonian in the adiabatic limit is the squeezed coherent vacuum
state $\left|0\right\rangle _{\mathbf{n}}$ displaced by the amount
$\beta_{\mathbf{n}}/(\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}})$.
The fidelity, as the overlap of the ground states of two branching
Hamiltonians $H_{\mathbf{n}}$ and $H_{\mathbf{m}}$ ($\mathbf{n}\neq\mathbf{m}$),
can then be computed as the inner product of two coherent states\begin{align}
F_{\mathbf{n,m}} & =\left|\left\langle 0\right|S_{\mathbf{n}}^{\dagger}D^{\dagger}\negthickspace\left(\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}\right)\negmedspace D\negthickspace\left(\frac{\beta_{\mathbf{m}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{m}}}\right)\negthickspace S_{\mathbf{m}}\left|0\right\rangle \right|\nonumber \\
& =\exp\left\{ -\frac{1}{2}\left(\frac{\beta_{\mathbf{n}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}}-\frac{\beta_{\mathbf{m}}}{\Omega_{\mathrm{M}}+4\alpha_{\mathbf{m}}}\right)^{2}\right\} .\end{align}
\begin{figure}
\includegraphics[width=3.2in]{fidelity_plot}
\caption{(Color online) Semi-log plots of the fidelity $F_{\mathbf{n,m}}$
between two mirror states as a function of the mirror oscillating
frequency $\Omega_{\mathrm{M}}$ for two typical operating regimes.
The solid line represents the case for $\alpha_{\mathbf{n}},\alpha_{\mathbf{m}}>0$
while the dashed line for $\alpha_{\mathbf{n}},\alpha_{\mathbf{m}}<0$.
\label{fig:fidelity_plot}}
\end{figure}
We hence see that the wave-packet overlap $F_{\mathbf{n,m}}$ depends
on various parameters and generally, based on the relations of $\alpha_{\mathbf{n}}$
and $\beta_{\mathbf{n}}$ with $\Omega_{\mathrm{M}}$ (Cf. Eqs.~(\ref{eq:alpha})
and (\ref{eq:beta})), the overlap decreases for increasing $\Omega_{\mathrm{M}}$
over a normal mechanical oscillating frequency range. Figure \ref{fig:fidelity_plot}
shows the fidelity with the abscissa being the mirror
frequency over the range 100 kHz to 100 MHz on a logarithmic scale,
for two typical parameter settings: $\alpha_{\mathbf{n}},\alpha_{\mathbf{m}}>0$
and $\alpha_{\mathbf{n}},\alpha_{\mathbf{m}}<0$. The orders of magnitude
of $\alpha_{\mathbf{n}}$ and $\beta_{\mathbf{n}}$ are set to ranges
consistent with the values given in the previous subsection.
The behavior in the low-frequency range agrees with our expectation that a mirror
with a higher oscillation frequency is more susceptible
to the effect of the polaritons and becomes entangled with the other
system components faster. When $\Omega_{\mathrm{M}}$ further increases,
the different operating regimes studied using the Loschmidt echo manifest
themselves more apparently. For $\alpha_{\mathbf{n}},\alpha_{\mathbf{m}}>0$,
the two ground states of the mirror always remain close to each other,
corresponding to the periodic collapse and revival region for the
Loschmidt echo, and hence the fidelity retains its value close to
$1$. For $\alpha_{\mathbf{n}},\alpha_{\mathbf{m}}<0$, on the other hand, the system
may cross into the hyperbolic operating region, where $\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}<0$.
In the latter case, the fidelity drops to $0$ near the critical point
$\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}}=0$, mimicking the behavior of a
phase transition.
\section{\label{sec:quad_variance}Squeezed Quadrature Variance of the Mirror}
The gas of atoms inside the cavity also acts like an optical parametric
oscillator when regarded as a cavity dielectric. The original photon
field traveling in the cavity vacuum is dressed by the atoms into
two polariton modes. These two modes in their adiabatic limit act
on the mirror as if confining the mirror oscillation in a nonlinear
medium (Cf. Eqs.~(\ref{eq:polariton_pot}) and (\ref{eq:quantized_ham})
in which the polariton-mirror mode coupling is nonlinear). This case
occurs in traditional nonlinear optics when the signal beam and the
idler beam have the same frequency and the process of optical interference
is then denoted as {}``degenerate parametric oscillation''. A mechanical
version of the process was suggested in Ref.~\citep{fxue07-2}, where
the interference took place between two nanomechanical resonators
and it was shown to be the analog of a two-mode parametric down-conversion.
For our case, the procedure is half-optical (the polariton excitations)
and half-mechanical (the mirror excitations). To show the similar
squeezing effect in quadrature variance, we write the equations of
motion of the mirror operators from Eq.~(\ref{eq:quantized_ham})
\begin{eqnarray}
\dot{c} & = & -i(\Omega_{\mathrm{M}}+2\alpha_{\mathbf{n}})c-i2\alpha_{\mathbf{n}}c^{\dagger}-i\beta_{\mathbf{n}},\label{eq:eom}\\
\dot{c}^{\dagger} & = & i(\Omega_{\mathrm{M}}+2\alpha_{\mathbf{n}})c^{\dagger}+i2\alpha_{\mathbf{n}}c+i\beta_{\mathbf{n}}.\label{eq:eom_h.c.}
\end{eqnarray}
The solution of the above equations, through Laplace transformation,
reads\begin{align}
c(t)= & \left[\cos\left(\Omega_{\mathrm{M},\mathbf{n}}t\right)-i\frac{\Omega_\mathrm{M}+2\alpha_{\mathbf{n}}}{\Omega_{\mathrm{M},\mathbf{n}}}\sin\left(\Omega_{\mathrm{M},\mathbf{n}}t\right)\right]c(0)\nonumber \\
& -\left[\frac{i2\alpha_{\mathbf{n}}}{\Omega_{\mathrm{M},\mathbf{n}}}\sin\left(\Omega_{\mathrm{M},\mathbf{n}}t\right)\right]c^{\dagger}(0)\label{eq:soln_c}\\
& +\frac{2\beta_{\mathbf{n}}}{\Omega_\mathrm{M}+4\alpha_{\mathbf{n}}}\sin^{2}\left(\frac{\Omega_{\mathrm{M},\mathbf{n}}t}{2}\right)-\frac{i\beta_{\mathbf{n}}}{\Omega_{\mathrm{M},\mathbf{n}}}\sin\left(\Omega_{\mathrm{M},\mathbf{n}}t\right).\nonumber \end{align}
We recognize that, unlike a typical optical parametric oscillator,
even when the mirror is set initially to a vacuum state $\left|0\right\rangle $,
the expectation value $\left\langle 0|c(t)|0\right\rangle $ will
be non-zero over time because of the perturbation from the polaritons.
As long as the numbers $n_{A}$ and $n_{B}$ of the polaritons are
not both zero at the same time, the inhomogeneous term $i\beta_{\mathbf{n}}$
on the right-hand-side of Eqs.~(\ref{eq:eom}-\ref{eq:eom_h.c.})
would become nonzero and the motion of the mirror would be initiated
by the incident polaritons, which is consistent with the vacuum state
entanglement we discussed in the last section. Compared to Ref.~\cite{fxue07-1}
for generating a squeezed entangled state of a mechanical resonator,
the requirement of preparing different initial Fock and coherent states
is lifted.
When the criterion Eq.~(\ref{eq:critical_cond}) is met, the variance
$\left\langle \Delta x^{2}\right\rangle $ in the coordinate quadrature
Eq.~(\ref{eq:cordinate}) demonstrates a squeezing effect
\begin{equation}
\left\langle \Delta x^{2}\right\rangle =\frac{2\cosh^{2}\left(\Omega_{\mathrm{M},\mathbf{n}}t\right)}{m\Omega_{\mathrm{M}}}+\frac{2\sinh^{2}\left(\Omega_{\mathrm{M},\mathbf{n}}t\right)}{m(\Omega_{\mathrm{M}}+4\alpha_{\mathbf{n}})}
\end{equation}
where $\Omega_{\mathrm{M},\mathbf{n}}$ in the equation above denotes the
real magnitude of the pseudo-frequency $\Omega_{\mathrm{M},\mathbf{n}}$.
\section{\label{sec:conclusion}Conclusion and Remarks}
We have studied a cavity system composed of atoms, a cavity field
and a movable mirror and showed that the collective excitations of
the atoms are dressed by the cavity field and transformed into polaritons,
causing their entanglement with the cavity mirror. In the adiabatic limit
of the polaritons, the mirror state is squeezed by an amount
that depends on the number distribution of the two polariton modes, and its
variance in coordinate space is squeezed accordingly.
Before we conclude this paper, we remark on a recent article by Paz and
Roncaglia~\citep{paz08} in which the entanglement dynamics between
two resonators at finite temperatures is classified into {}``sudden
death''~\citep{yu04}, {}``sudden death and revival'' and {}``no-sudden
death'' regions according to the amount of fluctuations the resonators
experience compared to their squeezing rate. Note that the squeezing
rate in our model Eq.~(\ref{eq:squeeze_opt}), defined through Eq.~(\ref{eq:bogo_tfm}),
is also related to the choice of operating regimes determined by Eq.~(\ref{eq:critical_cond}).
Therefore, we conclude that entanglement operates in regions of different
characteristics not only in finite temperature environments but also
in zero temperature settings, as shown by our model.
\begin{acknowledgments}
C.P.S. acknowledges support from the NSFC with Grant No. 90503003
and the NFRPC with Grant Nos. 2006CB921205 and 2005CB724508. F.N.
gratefully acknowledges partial support from the National Security
Agency (NSA), Laboratory for Physical Sciences (LPS), Army Research Office
(ARO), National Science Foundation (NSF) grant No. EIA-0130383, JSPS-RFBR
06-02-91200, and the Core-to-Core (CTC) program supported by the Japan
Society for the Promotion of Science (JSPS).
\end{acknowledgments}
\subsection{Proof of Correctness of Top-$K$ Max-Product}
We now consider the top-$K$ max-product algorithm, shown in full generality
in Algo.~\ref{algo:dp:topK:main}. The following proposition proves its correctness.
\begin{proposition} \label{eq:prop:dp:topK_guarantee}
Consider as inputs to Algo.~\ref{algo:dp:topK:main} an augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on
tree structured graph $\mcG$, and an integer $K > 0$. Then, the outputs
of Algo.~\ref{algo:dp:topK:main} satisfy $\psi\pow{k} = \psi(\yv\pow{k}) = \maxK{k}{\yv \in \mcY} \psi(\yv)$.
Moreover, Algo.~\ref{algo:dp:topK:main} runs in time $\bigO(pK\log K \max_{v\in\mcV} \abs{\mcY_v}^2)$
and uses space $\bigO(p K \max_{v\in\mcV} \abs{\mcY_v})$.
\end{proposition}
\begin{proof}
For a node $v \in \mcV$, let $\tau(v)$ denote the sub-tree of $\mcG$ rooted at $v$. Let $\yv_{\tau(v)}$ denote
$\big( y_{v'} \text{ for } v' \in \tau(v) \big)$. Define $\psi_{\tau(v)}$ as follows:
if $v$ is a leaf, $\yv_{\tau(v)} = (y_v)$ and $\psi_{\tau(v)}(\yv_{\tau(v)}) := \psi_v(y_v)$.
For a non-leaf $v$, define recursively
\begin{align} \label{eq:dp:topk_proof:phi_subtree}
\psi_{\tau(v)}(\yv_{\tau(v)}) := \psi_v(y_v) + \sum_{v' \in C(v)} \left[ \psi_{v, v'}(y_v, y_{v'})
+ \psi_{\tau(v')}(\yv_{\tau(v')}) \right] \,.
\end{align}
We will need some identities about choosing the $k$th largest element from a finite collection.
For finite sets $S_1, \cdots, S_n$ and
functions $f_j: S_j \to \reals$, $h: S_1 \times S_2 \to \reals$, we have,
\begin{gather}
\label{eq:dp:topk_proof:bellman1}
\maxK{k}{u_1 \in S_1, \cdots, u_n \in S_n} \left\{ \sum_{j=1}^n f_j(u_j) \right\}
= \quad \maxK{k}{l_1, \cdots, l_n \in [k]} \left\{
\sum_{j=1}^n \maxK{l_j}{u_j \in S_j} f_j(u_j)
\right\} \,, \\
\label{eq:dp:topk_proof:bellman2}
\maxK{k}{u_1 \in S_1, u_2 \in S_2} \{ f_1(u_1) + h(u_1, u_2) \}
= \quad \maxK{k}{u_1 \in S_1, l \in [k]} \left\{
f_1(u_1) + \maxK{l}{u_2 \in S_2} h(u_1, u_2)
\right\} \,.
\end{gather}
The identities above state that for a sum to take its $k$th largest value,
each component of the sum must take one of its $k$ largest values. Indeed, if one of the components
of the sum took its $l$th largest value for $l > k$, replacing it with any of the $k$ largest values
of that component cannot decrease the value of the sum; this yields $k$ configurations whose sums are
at least as large, so such a configuration is not needed to attain the $k$th largest value of the sum.
Eq.~\eqref{eq:dp:topk_proof:bellman2} is a generalized version of Bellman's principle of
optimality (see \citet[Chap. III.3.]{bellman1957dynamic} or \citet[Vol. I, Chap. 1]{bertsekas1995dynamic}).
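As a quick sanity check of \eqref{eq:dp:topk_proof:bellman1} with two
components, the following Python snippet compares both sides by brute force on
arbitrary toy values (the names below are ours):
\begin{verbatim}
def top_k(values, k):
    # k largest values, sorted in decreasing order.
    return sorted(values, reverse=True)[:k]

f1 = {"a": 3.0, "b": 1.5, "c": -2.0}   # f_1 on S_1
f2 = {"x": 0.5, "y": 4.0, "z": 2.5}    # f_2 on S_2
k = 2

lhs = top_k([f1[u1] + f2[u2] for u1 in f1 for u2 in f2], k)
best1, best2 = top_k(f1.values(), k), top_k(f2.values(), k)
rhs = top_k([best1[l1] + best2[l2] for l1 in range(k) for l2 in range(k)], k)
assert lhs == rhs      # both equal [7.0, 5.5]
\end{verbatim}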
For the rest of the proof, $\yv_{\tau(v)} \backslash y_v$ is used as
shorthand for $\{y_{v'} \, | \, v' \in \tau(v) \backslash \{v\} \}$.
Moreover, $\max_{\yv_{\tau(v)}}$ represents maximization over
$\yv_{\tau(v)} \in \bigtimes_{v' \in \tau(v)} \mcY_{v'}$. Likewise for $\max_{\yv_{\tau(v)} \backslash y_v}$.
Now, we shall show by induction that for all $v \in \mcV$, $y_v \in \mcY_v$ and $k = 1,\cdots, K$,
\begin{align} \label{eq:dp:topk_proof:ind_hyp}
\maxK{k}{\yv_{\tau(v)}\backslash y_v} \psi_{\tau(v)}(\yv_{\tau(v)}) = \psi_v(y_v) +
\maxK{k}{} \bigg\{ \sum_{v' \in C(v)} m_{v'}\pow{l_{v'}}(y_{v'})
\bigg| l_{v'} \in [K] \text{ for } v' \in C(v)
\bigg\} \,.
\end{align}
The induction is based on the height of a node. The statement is clearly true for a leaf $v$ since $C(v) = \varnothing$.
Suppose \eqref{eq:dp:topk_proof:ind_hyp} holds for all nodes of height $\le h$. For
a node $v$ of height $h+1$, we observe that $\tau(v) \backslash v$ can be partitioned
into $\{\tau(v') \text{ for } v' \in C(v)\}$ to get,
\begin{gather} \nonumber
\maxK{k}{\yv_{\tau(v)}\backslash y_v} \psi_{\tau(v)}(\yv_{\tau(v)})
- \psi_v(y_v)
\stackrel{\eqref{eq:dp:topk_proof:phi_subtree}}{=} \maxK{k}{\yv_{\tau(v)}\backslash y_v}
\bigg\{ \sum_{v' \in C(v)} \psi_{v, v'}(y_v, y_{v'}) + \psi_{\tau(v')}(\yv_{\tau(v')}) \bigg\} \\
\label{eq:eq:topk_proof:ind_hyp_todo}
\stackrel{\eqref{eq:dp:topk_proof:bellman1}}{=}
\maxK{k}{} \bigg\{
\sum_{v' \in C(v)}
\underbrace{
\maxK{l_{v'}}{\yv_{\tau(v')}}
\{ \psi_{v, v'}(y_v, y_{v'}) + \psi_{\tau(v')}(\yv_{\tau(v')}) \} }_{=:\mcT_{v'}(y_v)}
\, \bigg| \,
l_{v'} \in [K] \text{ for } v' \in C(v)
\bigg\} \,.
\end{gather}
Let us analyze the term in the underbrace, $\mcT_{v'}(y_v)$. We successively deduce,
with the argument $l$ in the maximization below taking values in $\{1, \cdots, K\}$,
\begin{align*}
\mcT_{v'}(y_v)
&\stackrel{\eqref{eq:dp:topk_proof:bellman2}}{=}
\maxK{l_{v'}}{y_{v'}, l} \bigg\{
\psi_{v, v'}(y_v, y_{v'}) + \maxK{l}{\yv_{\tau(v')}\backslash y_{v'}} \psi_{\tau(v')}(\yv_{\tau(v')})
\bigg\} \\
&\stackrel{\eqref{eq:dp:topk_proof:ind_hyp}}{=}
\maxK{l_{v'}}{y_{v'}, l} \bigg\{
\begin{matrix}
\psi_{v'}(y_{v'}) + \psi_{v, v'}(y_v, y_{v'}) + \\
\maxK{l}{} \big\{ \sum_{v'' \in C(v')} m_{v''}\pow{l_{v''}}(y_{v'})
\, | \, l_{v''} \in [K] \text{ for } v'' \in C(v')
\big\}
\end{matrix}
\bigg\} \\
&\stackrel{\eqref{eq:dp:topk_proof:bellman2}}{=}
\maxK{l_{v'}}{} \bigg\{
\begin{matrix}
\psi_{v'}(y_{v'}) + \psi_{v', v}(y_{v'}, y_v) \\
+ \sum_{v'' \in C(v')} m\pow{l_{v''}}_{v''}(y_{v'})
\end{matrix} \,
\bigg| \,
\begin{matrix}
y_{v'} \in \mcY_{v'} \text{ and } \\ l_{v''} \in [K] \text{ for } v'' \in C(v)
\end{matrix}
\bigg\} \\
&\stackrel{\eqref{eq:dp:topk:algo:update}}{=}
m\pow{l_{v'}}_{v'}(y_v) \, .
\end{align*}
Here, the penultimate step followed from applying in reverse the identity \eqref{eq:dp:topk_proof:bellman2}
with $u_1, u_2$ being $y_{v'}$ and $\{ l_{v''} \text{ for } v'' \in C(v')\}$ respectively,
and $f_1$ and $h$ respectively being $ \psi_{v'}(y_{v'}) + \psi_{v', v}(y_{v'}, y_v)$
and $ \sum_{v''} m\pow{l_{v''}}_{v''}(y_{v'})$.
Plugging this into \eqref{eq:eq:topk_proof:ind_hyp_todo} completes the induction argument.
To complete the proof, we repeat the same argument over the root as follows. We note that
$\tau(r)$ is the entire tree $\mcG$. Therefore, $\yv_{\tau(r)} = \yv$ and $\psi_{\tau(r)} = \psi$.
We now apply the identity \eqref{eq:dp:topk_proof:bellman2} with $u_1$ and $u_2$ being
$y_r$ and $\yv_{\tau(r) \backslash r}$ respectively and $f_1 \equiv 0$ to get
\begin{align*}
\maxK{k}{\yv \in \mcY} \psi(\yv)
& \stackrel{\eqref{eq:dp:topk_proof:bellman2}}{=}
\maxK{k}{y_r, l} \left\{ \maxK{l}{\yv\backslash y_r} \psi(\yv) \right\}
= \maxK{k}{y_r, l} \left\{ \maxK{l}{\yv_{\tau(r)} \backslash y_r} \psi_{\tau(r)}(\yv_{\tau(r)}) \right\} \\
&\stackrel{\eqref{eq:dp:topk_proof:ind_hyp}}{=}
\maxK{k}{y_r, l} \bigg\{
\begin{matrix}
\psi_{r}(y_{r}) +
\maxK{l}{} \big\{ \sum_{v \in C(r)} m_{v}\pow{l_{v}}(y_r)
\, | \, l_{v} \in [K] \text{ for } v \in C(r)
\big\}
\end{matrix}
\bigg\} \\
&\stackrel{\eqref{eq:dp:topk_proof:bellman2}}{=}
\maxK{k}{} \bigg\{
\begin{matrix}
\psi_r(y_r) + \\
\sum_{v \in C(r)} m\pow{l_{v}}_{v}(y_r)
\end{matrix} \,
\bigg| \,
\begin{matrix}
y_{r} \in \mcY_{r} \text{ and } \\ l_{v} \in [K] \text{ for } v \in C(r)
\end{matrix}
\bigg\} \\
&\,= \psi\pow{k} \,,
\end{align*}
where the last equality follows from Line~\ref{line:algo:dp:topk:final_score_k} of Algo.~\ref{algo:dp:topK:main}.
The algorithm requires storage of $m_v\pow{k}$, an array of size $\max_{v \in \mcV} \abs{\mcY_v}$
for each $k = 1,\cdots, K$, and $v \in \mcV$. The backpointers $\delta, \kappa$ are of the same size.
This adds up to a total storage of $\bigO(pK \max_{v} \abs{\mcY_v})$.
To bound the running time, consider Line~\ref{line:algo:dp:topk:message_k} of Algo.~\ref{algo:dp:topK:main}.
For a fixed $v' \in C(v)$, the computation
\begin{align*}
\maxK{k}{y_v, l_{v'} } \left\{
\psi_v(y_v) + \psi_{v, \rho(v)}(y_v, y_{\rho(v)})
+ m_{v'}^{(l_{v'})}(y_v)
\right\}
\end{align*}
for $k= 1, \cdots, K$ takes time $\bigO(K \log K \max_v \abs{\mcY_v})$.
This operation is repeated for each $y_v \in \mcY_v$ and once for every $(v, v')\in \mcE$. Since
$\abs\mcE = p-1$, the total running time is $\bigO(p K \log K \max_v \abs{\mcY_v}^2)$.
\end{proof}
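To complement the proof, here is a minimal Python sketch of the same top-$K$
recursion for the special case of a chain (a tree in which every node has a
single child). It returns only the $K$ best scores, omits the backpointers
$\delta,\kappa$, and is checked against brute force on a toy instance; the
array layout and names are ours rather than those of Algo.~\ref{algo:dp:topK:main}.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
p, n_states, K = 4, 3, 5
psi_node = rng.normal(size=(p, n_states))                # psi_v(y_v)
psi_edge = rng.normal(size=(p - 1, n_states, n_states))  # psi_{v,v+1}(y_v, y_{v+1})

def topk(values, k):
    return sorted(values, reverse=True)[:k]

# M[y] holds the top-K scores over prefixes y_0 ... y_v that end with y_v = y.
M = [[psi_node[0][y]] for y in range(n_states)]
for v in range(1, p):
    M = [topk([psi_node[v][y] + psi_edge[v - 1][yp, y] + s
               for yp in range(n_states) for s in M[yp]], K)
         for y in range(n_states)]
top_scores = topk([s for y in range(n_states) for s in M[y]], K)

# Brute force over all |Y|^p assignments for comparison.
brute = topk([sum(psi_node[v][y[v]] for v in range(p))
              + sum(psi_edge[v][y[v], y[v + 1]] for v in range(p - 1))
              for y in itertools.product(range(n_states), repeat=p)], K)
assert np.allclose(top_scores, brute)
\end{verbatim}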
\subsection{Proof of Correctness of Entropy Smoothing of Max-Product}
Next, we consider entropy smoothing.
\begin{proposition}\label{eq:prop:dp:ent_guarantee}
Given an augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on
tree structured graph $\mcG$ and $\mu > 0$
as input, Algo.~\ref{algo:dp:supp_exp} correctly computes
$f_{-\mu H}(\wv)$
and $\grad f_{-\mu H}(\wv)$.
Furthermore, Algo.~\ref{algo:dp:supp_exp} runs in time $\bigO(p \max_{v\in\mcV} \abs{\mcY_v}^2)$
and requires space $\bigO(p \max_{v\in\mcV} \abs{\mcY_v})$.
\end{proposition}
\begin{proof}
The correctness of the function value $f_{- \mu H}$ follows from the identity
$f_{- \mu H}(\wv) = \mu \, A_{\psi/\mu}(\wv)$ (cf. Prop.~\ref{prop:smoothing:exp-crf}),
where Thm.~\ref{thm:pgm:sum-product} shows correctness of $A_{\psi / \mu}$.
To show the correctness of the gradient, define $P_{\psi, \mu}$
as the probability distribution from Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:exp}
and $P_{\psi, \mu, v}, P_{\psi, \mu, v, v'}$ as its node and edge marginal probabilities respectively, i.e.,
\begin{align*}
P_{\psi, \mu}(\yv ; \wv)
&= \frac{
\exp\left(\tfrac{1}{\mu}\psi(\yv ; \wv)\right)}
{\sum_{\yv' \in \mcY }\exp\left(\tfrac{1}{\mu}\psi(\yv' ; \wv)\right)} \,, \\
P_{\psi, \mu, v}(\overline y_v ; \wv)
&= \sum_{\substack{ \yv \in \mcY \, : \\ y_v = \overline y_v} }P_{\psi, \mu}(\yv ; \wv) \quad
\text{for } \overline y_v \in \mcY_v, v \in \mcV\,, \text{ and, } \\
P_{\psi, \mu, v, v'}(\overline y_v, \overline y_{v'} ; \wv)
&= \sum_{\substack{ \yv \in \mcY : \\ y_v = \overline y_v, \\ y_{v'} = \overline y_{v'} } }P_{\psi, \mu}(\yv ; \wv) \quad \text{for } \overline y_v \in \mcY_v, \overline y_{v'} \in \mcY_{v'}, (v,v') \in \mcE \,.
\end{align*}
Thm.~\ref{thm:pgm:sum-product} again shows that Algo.~\ref{algo:dp:supp_sum-prod} correctly produces
marginals $P_{\psi, \mu, v}$ and $P_{\psi, \mu, v, v'}$.
We now start with Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:exp} and invoke \eqref{eq:smoothing:aug_score_decomp}
to get
\begin{align*}
\grad f_{-\mu H}(\wv) =& \sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv) \grad \psi(\yv ; \wv) \\
{=}& \sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv)
\left(
\sum_{v\in \mcV} \grad \psi_v(y_v ; \wv)
+ \sum_{(v, v')\in \mcE} \grad \psi_{v, v'}(y_v, y_{v'} ; \wv)
\right) \,, \\
=& \sum_{v \in \mcV}
\sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv) \grad \psi_v( y_v ; \wv)
+
\sum_{(v,v') \in \mcE} \sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv) \grad \psi_{v, v'}(y_v, y_{v'} ; \wv) \\
=& \sum_{v \in \mcV} \sum_{\overline y_v \in \mcY_v}
\sum_{\yv \in \mcY \, : \,y_v = \overline y_v} P_{\psi, \mu}(\yv ; \wv) \grad \psi_v(\overline y_v ; \wv)
\\ &\qquad+
\sum_{(v,v') \in \mcE} \sum_{\overline y_v \in \mcY_v} \sum_{\overline y_{v'} \in \mcY_{v'}}
\sum_{\yv \in \mcY \, :\, \substack{ y_v = \overline y_v \\ y_{v'} = \overline y_{v'} } }
P_{\psi, \mu}(\yv ; \wv) \grad \psi_{v, v'}(\overline y_v, \overline y_{v'} ; \wv) \\
=& \sum_{v \in \mcV} \sum_{\overline y_v \in \mcY_v} P_{\psi, \mu, v}(\overline y_v ; \wv)
\grad \psi_{v}(\overline y_v ; \wv)
\\ &\qquad+
\sum_{(v,v') \in \mcE} \sum_{\overline y_v \in \mcY_v} \sum_{\overline y_{v'} \in \mcY_{v'}}
P_{\psi, \mu, v, v'}(\overline y_v, \overline y_{v'} ; \wv)
\grad \psi_{v, v'}(\overline y_v, \overline y_{v'} ; \wv) \,.
\end{align*}
Here, the penultimate equality followed from breaking the sum over $\yv \in \mcY$ into an outer sum that sums over every
$\overline y_v \in \mcY_v$ and an inner sum over $\yv \in \mcY : y_v = \overline y_v$, and likewise for the edges.
The last equality above followed from the definitions of the marginals.
Therefore, Line~\ref{line:algo:dp:exp:gradient} of Algo.~\ref{algo:dp:supp_exp} correctly computes the gradient.
The storage complexity of the algorithm is $\bigO(p \max_v \abs{\mcY_v})$ provided that the edge marginals $P_{\psi, \mu, v, v'}$
are computed on the fly as needed. The time overhead of Algo.~\ref{algo:dp:supp_exp} after Algo.~\ref{algo:dp:supp_sum-prod}
is $\bigO(p \max_v \abs{\mcY_v}^2)$, by noting that each edge marginal can be computed in constant time
(Remark~\ref{remark:pgm:sum-prod:fast-impl}).
\end{proof}
\begin{algorithm}[ptbh]
\caption{Sum-product algorithm}
\label{algo:dp:supp_sum-prod}
\begin{algorithmic}[1]
\STATE {\bfseries Procedure:} \textsc{SumProduct}
\STATE {\bfseries Input:} Augmented score function $\psi$ defined on
tree structured graph $\mcG$ with root $r \in \mcV$.
\STATE {\bfseries Notation:} Let $N(v) = C(v) \cup \{\rho(v)\}$ denote all the neighbors of
$v \in \mcV$ if the orientation of the edges were ignored.
\STATE {\bfseries Initialize:}
Let $V$ be a list of nodes from $\mcV$ arranged in increasing order of height.
\FOR{$v$ in $V \backslash \{r\}$}
\STATE Set for each $y_{\rho(v)} \in \mcY_{\rho(v)}$: \label{line:dp:exp_dp:update}
\[
m_{v \to \rho(v)}(y_{\rho(v)}) \leftarrow \sum_{y_v \in \mcY_v} \left[ \exp \left(
\psi_v(y_v) + \psi_{v, \rho(v)}(y_v, y_{\rho(v)}) \right) \prod_{v' \in C(v)} m_{v' \to v}(y_v) \right]
\,.
\]
\ENDFOR
\STATE $A \leftarrow \log \sum_{y_r \in \mcY_r} \left[ \exp\left(
\psi_r(y_r) \right) \prod_{v' \in C(r)} m_{v' \to r}(y_r) \right] $.
\label{line:dp:exp_dp:log_part}
\FOR{$v$ in $\mathrm{reverse}(V)$}
\FOR{$v' \in C(v)$}
\STATE Set for each $y_{v'} \in \mcY_{v'}$:
\[
m_{v\to v'}(y_{v'}) = \sum_{y_v \in \mcY_v}\left[
\exp\left(
\psi_v(y_v) + \psi_{v', v}(y_{v'}, y_v)
\right)
\prod_{v'' \in N(v)\backslash\{v'\}} m_{v'' \to v}(y_v)
\right] \,.
\]
\ENDFOR
\ENDFOR
\FOR{ $v$ in $\mcV$}
\STATE Set $P_v(y_v) \leftarrow \exp\left( \psi_v(y_v) - A \right) \prod_{v'' \in N(v)} m_{v''\to v}(y_v)$
for every $y_v \in \mcY_v$.
\ENDFOR
\FOR{$(v, v')$ in $\mcE$}
\STATE For every pair $(y_v, y_{v'}) \in \mcY_v \times \mcY_{v'}$, set \label{line:algo:sum-prod:pair}
\begin{align*}
P_{v, v'}(y_v, y_{v'}) \leftarrow &
\exp\left(\psi_v(y_v) + \psi_{v'}(y_{v'}) + \psi_{v, v'}(y_v, y_{v'}) - A \right) \\&
\prod_{v'' \in N(v) \backslash \{v'\}} m_{v''\to v}(y_v) \prod_{v'' \in N(v') \backslash \{v\}} m_{v''\to v'}(y_{v'}) \,.
\end{align*}
\ENDFOR
\RETURN $A, \{P_v \text{ for } v \in \mcV \}, \{P_{v, v'} \text{ for } (v, v') \in \mcE \}$.
\end{algorithmic}
\end{algorithm}
Given below is the guarantee of the sum-product algorithm (Algo.~\ref{algo:dp:supp_sum-prod}).
See, for instance, \citet[Ch. 10]{koller2009probabilistic} for a proof.
\begin{theorem} \label{thm:pgm:sum-product}
Consider an augmented score function $\psi$ defined over a tree structured graphical model $\mcG$.
Then, the output of Algo.~\ref{algo:dp:supp_sum-prod} satisfies
\begin{align*}
A &= \log \sum_{\yv \in \mcY} \exp(\psi(\yv)) \,, \\
P_v(\overline y_v) &= \sum_{\yv \in \mcY\, :\, y_v = \overline y_v} \exp(\psi(\yv) - A) \,
\quad \text{for all $\overline y_v \in \mcY_v, v \in \mcV$, and, }\, \\
P_{v, v'}(\overline y_v, \overline y_{v'}) &=
\sum_{\yv \in \mcY\, : \, \substack{ y_v = \overline y_v, \\ y_{v'} = \overline y_{v'} } }
\exp(\psi(\yv) - A) \,
\quad \text{for all $\overline y_v \in \mcY_v, \overline y_{v'} \in \mcY_{v'}, (v,v') \in \mcE$.}
\end{align*}
Furthermore, Algo.~\ref{algo:dp:supp_sum-prod} runs in time $\bigO(p \max_{v\in\mcV} \abs{\mcY_v}^2)$
and requires an intermediate storage of $\bigO(p \max_{v\in\mcV} \abs{\mcY_v})$.
\end{theorem}
\begin{remark} \label{remark:pgm:sum-prod:fast-impl}
Line~\ref{line:algo:sum-prod:pair} of Algo.~\ref{algo:dp:supp_sum-prod} can be
implemented in constant time by reusing the node marginals $P_v$ and messages $m_{v \to v'},m_{v' \to v}$ as
\begin{align*}
P_{v, v'}(y_v, y_{v'}) =
\frac{P_v(y_v) P_{v'}(y_{v'}) \exp(\psi_{v, v'}(y_v, y_{v'}) + A)}{m_{v'\to v}(y_v) m_{v\to v'}(y_{v'}) } \,.
\end{align*}
\end{remark}
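In a direct implementation, the products of exponentials in
Lines~\ref{line:dp:exp_dp:update} and~\ref{line:dp:exp_dp:log_part} can
overflow for large scores, so one typically passes messages in log space. The
following minimal Python sketch (our own, restricted to a chain-structured
graph and checked by brute force on a toy instance; the array names are not
from the paper) illustrates the log-space upward pass and the computation of $A$:
\begin{verbatim}
import itertools
import numpy as np

def logsumexp(a, axis):
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

rng = np.random.default_rng(0)
p, n_states = 4, 3
psi_node = rng.normal(size=(p, n_states))                # psi_v(y_v)
psi_edge = rng.normal(size=(p - 1, n_states, n_states))  # psi_{v,v+1}(y_v, y_{v+1})

# Upward (leaf-to-root) pass on a chain rooted at the last node; `log_msg`
# is the log of the message m_{v -> v+1}(y_{v+1}).
log_msg = np.zeros(n_states)   # no children below node 0
for v in range(p - 1):
    scores = psi_node[v][:, None] + psi_edge[v] + log_msg[:, None]  # (y_v, y_{v+1})
    log_msg = logsumexp(scores, axis=0)
A = logsumexp(psi_node[p - 1] + log_msg, axis=0)         # log-partition function

# Brute-force check of A = log sum_y exp(psi(y)) on this tiny instance.
def total_score(y):
    return (sum(psi_node[v][y[v]] for v in range(p))
            + sum(psi_edge[v][y[v], y[v + 1]] for v in range(p - 1)))

A_brute = logsumexp(np.array([total_score(y) for y in
                              itertools.product(range(n_states), repeat=p)]), axis=0)
assert np.isclose(A, A_brute)
\end{verbatim}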
\subsection{Review of Best Max-Marginal First} \label{sec:a:bmmf}
If one has access to an algorithm $\mcM$ that can compute max-marginals,
the top-$K$ oracle is easily implemented via the Best Max Marginal First (BMMF) algorithm of \citet{yanover2004finding},
which is recalled in Algo.~\ref{algo:top_k_map:general}.
This algorithm requires computations of
two sets of max-marginals
per iteration, where a {\em set} of max-marginals refers to max-marginals for all variables $y_v$ in $\yv$.
\paragraph{Details}
The algorithm runs by maintaining a partitioning of the search space $\mcY$
and a table $\varphi\pow{k}(v, j)$ that stores the best score
in partition $k$ (defined by constraints $\mcC\pow{k}$)
subject to the additional constraint that $y_v = j$.
In iteration $k$, the algorithm looks at the $k-1$ existing partitions and picks the best
partition $s_k$ (Line~\ref{alg:bmmf:best_part}).
This partition is further divided into two parts:
the max-marginals in the promising partition (corresponding to $y_{v_k} = j_k$)
are computed (Line~\ref{alg:bmmf:line:max-marg})
and decoded (Line~\ref{alg:bmmf:line:decoding}) to yield the $k$th best scoring $\yv\pow{k}$.
The scores of the less promising partition are updated via a second round of
max-marginal computations (Line~\ref{alg:bmmf:line:update_score}).
\begin{algorithm}[tb]
\caption{Best Max Marginal First (BMMF)}
\label{algo:top_k_map:general}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi$, parameters $\wv$, non-negative integer $K$,
algorithm $\mcM$ to compute max-marginals of $\psi$.
\STATE {\bfseries Initialization:} $\mcC\pow{1} = \varnothing$ and $\mcU\pow{2} = \varnothing$.
\FOR{$v \in [p]$}
\STATE For $j \in \mcY_v$, set
$\varphi\pow{1}(v, j) = \max \{ \psi(\yv ; \wv) \, | \, \yv \in \mcY \text{ s.t. } y_v = j \}$ using $\mcM$.
\STATE Set $y\pow{1}_v = \argmax_{j \in \mcY_v} \varphi\pow{1}(v, j)$.
\ENDFOR
\FOR{$k = 2, \cdots, K$}
\STATE Define search space
$\mcS\pow{k} = \left\{ (v, j, s) \in [p] \times \mcY_v \times [k-1] \, \big| \,
y\pow{s}_v \neq j, \text{ and } (v, j, s) \notin \mcU\pow{k} \right\}$.
\label{alg:bmmf:part_search}
\STATE Find indices $(v_k, j_k, s_k) = \argmax_{(v, j, s) \in \mcS\pow{k}} \varphi\pow{s}(v,j)$
and set constraints
$\mcC\pow{k} = \mcC\pow{s_k} \cup \{ y_{v_k} = j_k \}$. \label{alg:bmmf:best_part}
\FOR{$v\in[p]$}
\STATE For each $j \in \mcY_v$, use $\mcM$ to set $\varphi\pow{k}(v, j) = \max \left\{
\psi(\yv ; \wv) \, | \, \yv \in \mcY \text{ s.t. constraints }
\mcC\pow{k} \text{ hold and } y_v = j \right\}$. \label{alg:bmmf:line:max-marg}
\STATE Set $y\pow{k}_v = \argmax_{j \in \mcY_v} \varphi\pow{k}(v, j)$. \label{alg:bmmf:line:decoding}
\ENDFOR
\STATE Update $\mcU\pow{k+1} = \mcU\pow{k} \cup \left\{ (v_k, j_k, s_k) \right\}$ and
$\mcC\pow{s_k} = \mcC\pow{s_k} \cup \{ y_{v_k} \neq j_k \}$ and the max-marginal table
$\varphi\pow{s_k}(v, j) = \max_{\yv \in \mcY, \mcC\pow{s_k}, y_v = j} \psi(\yv ; \wv)$ using $\mcM$.
\label{alg:bmmf:line:update_score}
\ENDFOR
\RETURN $\left\{ \left(\psi(\yv\pow{k} ; \wv), \yv\pow{k} \right) \right\}_{k=1}^K$.
\end{algorithmic}
\end{algorithm}
\paragraph{Guarantee}
The following theorem shows that Algo.~\ref{algo:top_k_map:general} provably
implements the top-$K$ oracle as long as the max-marginals can be computed exactly
under the assumption of unambiguity. With approximate max-marginals however,
Algo.~\ref{algo:top_k_map:general} comes with no guarantees.
\begin{theorem}[\citet{yanover2004finding}] \label{thm:inference:topKmm}
Suppose the score function $\psi$ is unambiguous, that is,
$\psi(\yv' ; \wv) \neq \psi(\yv'' ;\wv)$ for all distinct $\yv', \yv'' \in \mcY$.
Given an algorithm $\mcM$ that can compute the max-marginals of $\psi$ exactly,
Algo.~\ref{algo:top_k_map:general} makes at most $2K$ calls to $\mcM$ and its output
satisfies $\psi(\yv\pow{k} ; \wv) = \maxK{k}{\yv \in \mcY} \psi(\yv ; \wv)$.
Thus, the BMMF algorithm followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing})
is a correct implementation of the top-$K$ oracle.
It makes $2K$ calls to $\mcM$.
\end{theorem}
\paragraph{Constrained Max-Marginals}
The algorithm requires computation of max-marginals subject to constraints of the form
$y_v \in Y_v$ for some set $Y_v \subseteq \mcY_v$. This is accomplished by
redefining for a constraint $y_v \in Y_v$:
\[
\overline \psi(\yv) =
\begin{cases}
\psi(\yv), \, \text{ if } y_v \in Y_v \\
-\infty, \, \text{ otherwise}
\end{cases} \,.
\]
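For testing a fast max-marginal oracle $\mcM$ on tiny instances, the constrained
max-marginals can also be computed by brute force, with excluded labels handled
exactly as in the redefinition above (they simply keep the value $-\infty$).
The sketch below is our own reference implementation and is not part of BMMF:
\begin{verbatim}
import itertools
import numpy as np

def max_marginals_bruteforce(score, spaces, constraints=None):
    """phi(v, j) = max{ score(y) : y in prod(spaces), y_v = j, constraints hold }.

    `score` maps a tuple y to a float, `spaces[v]` is the label set Y_v and
    `constraints[v]` (optional) is an allowed subset of Y_v.  Labels ruled out
    by a constraint keep the value -inf, matching the redefinition above.
    """
    constraints = constraints or {}
    allowed = [list(constraints.get(v, spaces[v])) for v in range(len(spaces))]
    phi = {v: {j: -np.inf for j in spaces[v]} for v in range(len(spaces))}
    for y in itertools.product(*allowed):
        s = score(y)
        for v, j in enumerate(y):
            phi[v][j] = max(phi[v][j], s)
    return phi

# Tiny usage example with an arbitrary score and the constraint y_0 in {1}.
score = lambda y: sum(y) - 0.5 * y[0] * y[-1]
phi = max_marginals_bruteforce(score, spaces=[[0, 1], [0, 1], [0, 1]],
                               constraints={0: [1]})
\end{verbatim}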
\subsection{Max-Marginals Using Graph Cuts} \label{sec:a:graph_cuts}
\begin{algorithm}[tb]
\caption{Max-marginal computation via Graph Cuts}
\label{algo:top_k_map:graph_cuts}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot ; \wv)$ with $\mcY = \{0, 1\}^p$,
constraints $\mcC$ of the form $y_v = b$ for $b \in \{0,1\}$.
\STATE Using artificial source $s$ and sink $t$, set $V' = \mcV \cup \{s, t\}$ and $E' = \varnothing$.
\FOR{$v \in [p]$}
\STATE Add to $E'$ the (edge, cost) pairs $(s \to y_v, \theta_{v; 0})$ and
$(y_v \to t, \theta_{v; 1})$.
\ENDFOR
\FOR{$v,v' \in \mcR$ such that $v < v'$}
\STATE Add to $E'$ the (edge, cost) pairs
$(s \to y_v , \theta_{vv'; 00})$,
$(y_{v'} \to t , \theta_{vv'; 11})$,
$(y_v \to y_{v'} , \theta_{vv'; 10})$,
$(y_{v'} \to y_v , \theta_{vv'; 01} - \theta_{vv' ;00} - \theta_{vv' ; 11})$.
\ENDFOR
\FOR{constraint $y_v = b$ in $\mcC$}
\STATE Add to $E'$ the edge $y_v \to t$ if $b=0$ or edge $s \to y_v$ if $b=1$ with cost $+\infty$.
\label{line:algo:mincut:constr}
\ENDFOR
\STATE Create graph $G'=(V', E')$, where parallel edges are merged by adding weights.
\STATE Compute minimum cost $s, t$-cut of $G'$. Let $C$ be its cost.
\STATE Create $\widehat \yv \in \{0, 1\}^p$ as follows: for each $v \in \mcV$,
set $\widehat y_v = 0$ if the edge $s\to v$ is cut.
Else $\widehat y_v = 1$.
\RETURN $-C, \widehat \yv$.
\end{algorithmic}
\end{algorithm}
This section recalls a simple procedure to compute max-marginals using graph cuts.
Such a construction was used, for instance, by \citet{kolmogorov2004energy}.
\paragraph{Notation}
In the literature on graph cut inference, it is customary to work with the energy function, which is defined as
the negative of the augmented score $-\psi$.
For this section, we also assume that the labels are binary, i.e., $\mcY_v = \{0,1\}$ for each $v\in[p]$.
Recall the decomposition~\eqref{eq:smoothing:aug_score_decomp} of the augmented score function over nodes and edges.
Define a reparameterization
\begin{gather*}
\theta_{v;z}(\wv) = -\psi_v(z ; \wv) \quad \text{ for } v\in \mcV,\ z \in \{0,1\}\,, \\
\theta_{vv';zz'}(\wv) = -\psi_{v,v'}(z, z' ;\wv) \quad
\text{ for } (v, v')\in \mcE,\ (z,z') \in \{0,1\}^2\,.
\end{gather*}
We then get
\begin{gather*}
-\psi(\yv) = \sum_{v=1}^p \sum_{z \in \{0,1\}} \theta_{v; z} \ind(y_v=z)
+ \sum_{v=1}^p \sum_{v'=v+1}^p \sum_{z, z' \in \{0,1\}} \theta_{vv'; z z'} \ind(y_v=z) \ind(y_{v'}=z') \ind((v,v') \in \mcE) \,,
\end{gather*}
where we dropped the dependence on $\wv$ for simplicity.
We require the energies to be submodular, i.e., for every $v, v' \in [p]$, we have that
\begin{align} \label{eq:top_k_map:submodular}
\theta_{vv' ; 00} + \theta_{vv' ; 11} \le \theta_{vv' ; 01} + \theta_{vv' ; 10} \,.
\end{align}
Also, assume without loss of generality that $\theta_{v;z}, \theta_{vv';zz'}$ are non-negative
\citep{kolmogorov2004energy}.
\paragraph{Algorithm and Correctness}
Algo.~\ref{algo:top_k_map:graph_cuts} shows how to compute the max-marginal relative
to a single variable $y_v$. The next theorem shows its correctness.
\begin{theorem}[\citet{kolmogorov2004energy}] \label{thm:top_k_map:graph_cuts}
Given a binary pairwise graphical model with augmented score function $\psi$
which satisfies \eqref{eq:top_k_map:submodular},
and a set of constraints $\mcC$, Algo.~\ref{algo:top_k_map:graph_cuts}
returns $\max_{\yv \in \mcY_\mcC} \psi(\yv ; \wv)$, where $\mcY_\mcC$ denotes the
subset of $\mcY$ that satisfies constraints $\mcC$.
Moreover, Algo.~\ref{algo:top_k_map:graph_cuts} requires one maximum flow computation.
\end{theorem}
\iffalse
\begin{proof}
Suppose first that there are no constraints. We shall use the following two facts
about the graph $G'$ in Algo.~\ref{algo:top_k_map:graph_cuts}.
\begin{fact} \label{fact:graphcuts:fact1}
Every label $\yv \in \mcY$ corresponds to a minimal cut in the graph $G'$,
where the energy of the labeling $-\psi(\yv ; \wv)$ equals the
cost of the cut.
\end{fact}
This shows that there exists a cut that corresponds to the minimum energy label
$\yv^* = \argmax_{\yv \in \mcY} \psi(\yv ; \wv)$.
However, the minimum cut might not correspond to any $\yv \in \mcY$.
\begin{fact} \label{fact:graphcuts:fact2}
The minimum cut of graph $G'$ corresponds to a valid labeling $\widehat \yv \in \mcY$.
This labeling can be decoded as follows: $\widehat y_v = 0$ if the edge $s\to v$ is cut.
Else $\widehat y_v = 1$.
\end{fact}
From Facts~\ref{fact:graphcuts:fact1} and~\ref{fact:graphcuts:fact2}, we deduce
that the minimum cut of $G'$ corresponds to the minimum energy assignment $\yv^*$
and that the cost of the cut equals $\psi(\yv^* ; \wv)$.
When we have constraints, note that the infinite weight edges added in
Line~\ref{line:algo:mincut:constr} of Algo.~\ref{algo:top_k_map:graph_cuts} are never cut,
else the cost of the cut would be $+\infty$.
This enforces the constraints, and the cost of the returned cut is therefore
$\max_{\yv \in \mcY_\mcC} \psi(\yv ; \wv)$.
\end{proof}
Let us now prove Facts~\ref{fact:graphcuts:fact1} and~\ref{fact:graphcuts:fact2}.
\begin{proof}[Fact~\ref{fact:graphcuts:fact1}]
Suppose we have labeling $\yv \in \mcY$.
For each $v \in [p]$, cut $s \to y_v$ if $y_v = 0$, else cut $y_v \to t$.
For edges $vv'$, if $y_v = 0$ and $y_{v'} = 1$, also cut the edge $v' \to v$. Else,
if $y_v = 1$ and $y_{v'} = 0$, cut instead the edge $v \to v'$.
Clearly, this is a $s,t$-cut because every path from $s$ to $t$
has a cut edge.
Moreover, the cut is {\em minimal} in that adding any one cut edge back to the graph
results in a valid $s-t$ path.
The cost of this cut $C$ is now, (each summation on $v, v'$ below runs from $1$ to $p$ unless mentioned otherwise)
\begin{align*}
\mathrm{cost}(C) =&
\sum_{v} \theta_{v ; y_v} +
\sum_{v}\sum_{v' > v}\theta_{vv' ; 00} \ind(y_v = 0) +
\sum_{v}\sum_{v' < v} \theta_{v'v ; 11} \ind(y_v = 1) \\ &+
\sum_{v}\sum_{v' > v} \theta_{vv' ; 10} \ind(y_v = 1) \ind(y_{v'} = 0) +
\sum_{v}\sum_{v' > v} \left( \theta_{vv' ; 01} - \theta_{vv' ; 00} - \theta_{vv' ; 11} \right) \ind(y_v = 0) \ind(y_{v'} = 1) \\
=& \sum_{v} \theta_{v ; y_v} +
\sum_{v} \sum_{v' > v} \big[
\theta_{vv' ; 00} \ind(y_v = 0) \left( 1 - \ind(y_{v'} = 1) \right) +
\theta_{vv' ; 11} \ind(y_{v'} = 1) \left( 1 - \ind(y_v = 0) \right) \\ &+
\theta_{vv' ; 10} \ind(y_v = 1) \ind(y_{v'} = 0) +
\theta_{vv' ; 01} \ind(y_v = 0) \ind(y_{v'} = 1)
\big] \\
=& \sum_{v} \theta_{v ; y_v} +
\sum_{v} \sum_{v' > v} \big[
\theta_{vv' ; 00} \ind(y_v = 0) \ind(y_{v'} = 0) +
\theta_{vv' ; 11} \ind(y_{v'} = 1) \ind(y_v = 1) \\ &+
\theta_{vv' ; 10} \ind(y_v = 1) \ind(y_{v'} = 0) +
\theta_{vv' ; 01} \ind(y_v = 0) \ind(y_{v'} = 1)
\big] \\
=& \sum_{v} \theta_{v ; y_v} + \sum_{v} \sum_{v' > v} \theta_{vv' ; y_v y_{v'}}
= - \psi(\yv ; \wv) \,.
\end{align*}
\end{proof}
Next, let us prove Fact~\ref{fact:graphcuts:fact2}.
\begin{proof}[Fact~\ref{fact:graphcuts:fact2}]
Call the minimum cut $C$.
For every vertex $v$, there exists a path $s \to v \to t$. Therefore,
one of $s \to v$ or $v \to t$ must be cut.
Since each $\theta_{v ; z}$ is assumed to be non-negative,
{\em exactly one} of $s \to v$ or $v \to t$ is cut in the min cost cut.
If $s \to v$ is cut, set $\widehat y_v = 0$. Else set $\widehat y_v = 1$. The
labeling $\widehat \yv$ so obtained corresponds to a minimal cut, say $C'$, by Fact~\ref{fact:graphcuts:fact1}.
By construction, $C$ and $C'$ agree on which $s \to v$ and $v \to t$ edges to cut.
Minimality of $C$ and $C'$ ensure that $C = C'$, up to zero weight edges.
But zero weight edges do not impact the cost of the cut and therefore,
$\mathrm{cost}(C) = \mathrm{cost}(C') = -\psi(\widehat \yv ; \wv)$.
Therefore, the min cut $C$ corresponds to the labeling $\widehat \yv$.
\end{proof}
\fi
\subsection{Max-Marginals Using Graph Matchings} \label{sec:a:graph_matchings}
The alignment problem that we consider in this section is as follows:
given two sets $V, V'$, both of equal size (for simplicity), and a weight function
$\varphi: V \times V' \to \reals$, the task is to find a map $\sigma : V \to V'$
so that each $v \in V$ is mapped to a unique $z \in V'$ and the total weight $\sum_{v \in V} \varphi(v, \sigma(v))$
is maximized.
For example, $V$ and $V'$ might represent two natural language sentences, and the task is then to align the two sentences.
\paragraph{Graphical Model}
This problem is framed as a graphical model as follows.
Suppose $V$ and $V'$ are of size $p$. Define $\yv = (y_1, \cdots, y_p)$ so that $y_v$ denotes $\sigma(v)$.
The graph $\mcG = (\mcV, \mcE)$ is constructed as the fully connected graph over $\mcV = \{1, \cdots, p\}$.
The range $\mcY_v$ of each $y_v$ is simply $V'$ in the unconstrained case.
Note that when considering constrained max-marginal computations, $\mcY_v$ might be a subset of $V'$.
The score function $\psi$ is defined via node and edge potentials as in
Eq.~\eqref{eq:smoothing:aug_score_decomp}. Again, we suppress
dependence of $\psi$ on $\wv$ for simplicity.
Define unary and pairwise scores as
\begin{align*}
\psi_v(y_v) = \varphi(v, y_v) \quad \text{and} \quad
\psi_{v, v'}(y_v, y_{v'}) =
\begin{cases}
0, \text{ if } y_v \neq y_{v'} \\
-\infty, \text{ otherwise }
\end{cases}
\, .
\end{align*}
\paragraph{Max Oracle}
The max oracle with $\psi$ defined as above, or equivalently, the inference problem \eqref{eq:pgm:inference}
(cf. Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:max}) can be cast as a maximum weight bipartite matching,
see e.g., \citet{taskar2005discriminative}.
Define a fully connected bipartite graph $G = (V \cup V', E)$ with partitions $V, V'$, and directed edges from
each $v \in V$ to each vertex $z \in V'$ with weight $\varphi(v, z)$.
The maximum weight bipartite matching in this graph $G$ gives the mapping $\sigma$, and thus implements the max oracle.
It can be written as the following linear program:
\begin{align*}
\max_{\{\theta_{v,z} \text{ for } (v,z) \in E\}} \, & \sum_{(v,z) \in E} \varphi(v, z) \theta_{v, z} \,, \\
\mathrm{s.t.} \quad & 0 \le \theta_{v, z} \le 1 \, \forall (v, z) \in V \times V' \\
& \sum_{v \in V} \theta_{v, z} \le 1 \, \forall z \in V' \\
& \sum_{z \in V'} \theta_{v, z} \le 1 \, \forall v \in V \, .
\end{align*}
\paragraph{Max-Marginal}
For the graphical model defined above, the max-marginal $\psi_{\bar v ; \bar z}$ is the constrained maximum weight matching in the
graph $G$ defined above subject to the constraint that $\bar v$ is mapped to $\bar z$. The linear program above can be
modified to include the constraint $\theta_{\bar v, \bar z} = 1$:
\begin{align} \label{eq:top_k_map:graph_matchings:max-marg:def}
\begin{aligned}
\max_{\{\theta_{v,z} \text{ for } (v,z) \in E\}} \, & \sum_{(v,z) \in E} \varphi(v, z) \theta_{v, z} \,, \\
\mathrm{s.t.} \quad & 0 \le \theta_{v, z} \le 1 \, \forall (v, z) \in V \times V' \\
& \sum_{v \in V} \theta_{v, z} \le 1 \, \forall z \in V' \\
& \sum_{z \in V'} \theta_{v, z} \le 1 \, \forall v \in V \\
& \theta_{\bar v, \bar z} = 1 \, .
\end{aligned}
\end{align}
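As an illustration, both the max oracle and a single constrained max-marginal for this alignment
model can be computed with an off-the-shelf assignment solver. The sketch below is only illustrative
and is not Algo.~\ref{algo:top_k_map:graph_matchings}: recomputing a matching for every pair
$(\bar v, \bar z)$ as below costs $\bigO(p^5)$ overall, whereas Algo.~\ref{algo:top_k_map:graph_matchings}
obtains all $p^2$ max-marginals in $\bigO(p^3)$ time by reusing a single matching.
\begin{verbatim}
# Illustrative sketch: max oracle and one constrained max-marginal for the
# alignment model, via SciPy's assignment solver. phi is a (p x p) array
# with phi[v, z] = varphi(v, z); all names are for illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_oracle(phi):
    """Maximum weight bipartite matching: returns sigma and its total weight."""
    rows, cols = linear_sum_assignment(-phi)   # the solver minimizes, so negate
    return dict(zip(rows, cols)), phi[rows, cols].sum()

def max_marginal(phi, v_bar, z_bar):
    """Maximum weight matching subject to v_bar being mapped to z_bar."""
    keep_rows = [v for v in range(phi.shape[0]) if v != v_bar]
    keep_cols = [z for z in range(phi.shape[1]) if z != z_bar]
    sub = phi[np.ix_(keep_rows, keep_cols)]
    rows, cols = linear_sum_assignment(-sub)
    return sub[rows, cols].sum() + phi[v_bar, z_bar]

if __name__ == "__main__":
    phi = np.random.default_rng(0).standard_normal((4, 4))
    sigma, psi_star = max_oracle(phi)
    # The unconstrained optimum is attained by the constrained problem that
    # forces an edge already used by the optimal matching.
    assert np.isclose(psi_star, max_marginal(phi, 0, sigma[0]))
\end{verbatim}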
\paragraph{Algorithm to Compute Max-Marginals}
Algo.~\ref{algo:top_k_map:graph_matchings}, which shows how to compute max-marginals,
is due to \citet{duchi2007using}.
Its running time complexity is as follows: the initial
maximum weight matching computation takes $\bigO(p^3)$ via computation of a maximum flow~\citep[Ch.~10]{schrijver-book}.
Line~\ref{line:top_k_map:graph_matching:all-pairs} of Algo.~\ref{algo:top_k_map:graph_matchings}
can be performed by the all-pairs shortest paths algorithm \citep[Ch.~8.4]{schrijver-book} in time $\bigO(p^3)$.
Its correctness is shown by the following theorem:
\begin{theorem}[\citet{duchi2007using}] \label{thm:top_k_map:graph_matching}
Given a directed bipartite graph $G$ and weights $\varphi: V \times V' \to \reals$,
the outputs $\psi_{v ; z}$ from Algo.~\ref{algo:top_k_map:graph_matchings}
are valid max-marginals, i.e., each $\psi_{v ; z}$ coincides with the optimal value of the corresponding linear program
\eqref{eq:top_k_map:graph_matchings:max-marg:def}. Moreover, Algo.~\ref{algo:top_k_map:graph_matchings}
runs in time $\bigO(p^3)$ where $p = \abs{V} = \abs{V'}$.
\end{theorem}
\begin{algorithm}[tb]
\caption{Max marginal computation via Graph matchings}
\label{algo:top_k_map:graph_matchings}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Directed bipartite graph $G=(V \cup V', E)$,
weights $\varphi: V \times V' \to \reals$.
\STATE Find a maximum weight bipartite matching $\sigma^*$ in the graph $G$. Let the maximum weight be $\psi^*$.
\STATE Define a weighted residual bipartite graph $\widehat G = (V \cup V', \widehat E)$,
where the set $\widehat E$ is populated as follows:
for $(v,z) \in E$, add an edge $(v,z)$ to $\widehat E$ with weight $1 - \ind(\sigma^*(v) = z)$,
add $(z, v)$ to $\widehat E$ with weight $- \ind(\sigma^*(v) = z)$.
\STATE Find the maximum weight path from every vertex $z\in V'$ to every vertex $v \in V$
and denote this by $\Delta(z, v)$. \label{line:top_k_map:graph_matching:all-pairs}
\STATE Assign the max-marginals $\psi_{v ; z} = \psi^* + \ind(\sigma^*(v) \neq z) \, \left( \Delta(z, v) + \varphi(v, z) \right)$
for all $(v, z) \in V \times V'$.
\RETURN Max-marginals $\psi_{v;z}$ for all $(v, z) \in V \times V'$.
\end{algorithmic}
\end{algorithm}
\subsection{Proof of Proposition~\ref{prop:smoothing:max-marg:all}} \label{sec:a:proof-prop}
\begin{proposition_unnumbered}[\ref{prop:smoothing:max-marg:all}]
Consider as inputs an augmented score function $\psi(\cdot, \cdot ; \wv)$,
an integer $K>0$ and a smoothing parameter $\mu > 0$.
Further, suppose that $\psi$ is unambiguous, that is,
$\psi(\yv' ; \wv) \neq \psi(\yv'' ;\wv)$ for all distinct $\yv', \yv'' \in \mcY$.
Consider one of the two settings:
\begin{enumerate}[label={\upshape(\Alph*)}, align=left, leftmargin=*]
\item the output space $\mcY_v = \{0,1\}$ for each $v \in \mcV$, and the function
$-\psi$ is submodular (see Appendix~\ref{sec:a:graph_cuts} and, in particular, \eqref{eq:top_k_map:submodular}
for the precise definition), or,
\item the augmented score corresponds to an alignment task where the
inference problem~\eqref{eq:pgm:inference} corresponds to a
maximum weight bipartite matching (see Appendix~\ref{sec:a:graph_matchings} for a precise definition).
\end{enumerate}
In these cases, we have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item The max oracle can be implemented at a
computational complexity of $\bigO(p)$ minimum cut computations in Case~\ref{part:prop:max-marg:cuts},
and in time $\bigO(p^3)$ in Case~\ref{part:prop:max-marg:matching}.
\item The top-$K$ oracle can be implemented at a
computational complexity of $\bigO(pK)$ minimum cut computations in Case~\ref{part:prop:max-marg:cuts},
and in time $\bigO(p^3K)$ in Case~\ref{part:prop:max-marg:matching}.
\item The exp oracle is \#P-complete in both cases.
\end{enumerate}
\end{proposition_unnumbered}
\begin{proof}
A set of max-marginals can be computed by an algorithm $\mcM$ defined as follows:
\begin{itemize}
\item In Case~\ref{part:prop:max-marg:cuts}, invoke Algo.~\ref{algo:top_k_map:graph_cuts} a total of $2p$ times,
with $y_v =0$, and $y_v = 1$ for each $v \in \mcV$. This takes a total of $2p$ min-cut computations.
\item In Case~\ref{part:prop:max-marg:matching}, $\mcM$ is simply Algo.~\ref{algo:top_k_map:graph_matchings}, which takes time
$\bigO(p^3)$.
\end{itemize}
The max oracle can then be implemented by the decoding in Eq.~\eqref{eq:max-marg:defn}, whose correctness is
guaranteed by Thm.~\ref{thm:a:loopy:decoding}.
The top-$K$ oracle is implemented by invoking the BMMF algorithm with $\mcM$ defined above, followed by
a projection onto the simplex (Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing})
and its correctness is guaranteed by Thm.~\ref{thm:inference:topKmm}.
Lastly, the result for the exp oracle follows from \citet[Thm. 15]{jerrum1993polynomial} in conjunction with
Prop.~\ref{prop:smoothing:exp-crf}.
\end{proof}
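The projection step used in the top-$K$ oracle above (Algo.~\ref{algo:smoothing:top_K_oracle} in
Appendix~\ref{sec:a:smoothing}) reduces to the Euclidean projection onto the probability simplex.
For reference, a sketch of the standard sort-based projection routine is given below; it only
illustrates this subroutine, and the exact scaling applied to the top-$K$ scores is as specified
in Algo.~\ref{algo:smoothing:top_K_oracle}.
\begin{verbatim}
# Standard sort-based Euclidean projection onto the probability simplex.
import numpy as np

def project_to_simplex(s):
    u = np.sort(s)[::-1]                     # sort in decreasing order
    css = np.cumsum(u)
    ks = np.arange(1, len(s) + 1)
    rho = np.nonzero(u * ks > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(s - theta, 0.0)
\end{verbatim}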
\subsection{Inference using branch and bound search} \label{sec:a:bb_search}
Algo.~\ref{algo:top_k:bb} with the input $K=1$ is the standard best-first branch and bound
search algorithm.
Effectively, the top-$K$ oracle is implemented by simply
continuing the search procedure until $K$ outputs have been produced; compare
Algo.~\ref{algo:top_k:bb} with inputs $K=1$ and $K > 1$. We now prove the correctness guarantee.
\begin{algorithm}[tb]
\caption{Top-$K$ best-first branch and bound search}
\label{algo:top_k:bb}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot ; \wv)$, integer $K > 0$,
search space $\mcY$, upper bound $\widehat \psi$, split strategy.
\STATE {\bfseries Initialization:} Initialize priority queue with
single entry $\mcY$ with priority $\widehat \psi(\mcY ; \wv)$,
and solution set $\mcS$ as the empty list.
\WHILE{$\abs{\mcS} < K$}
\STATE Pop $\widehat \mcY$ from the priority queue. \label{line:algo:bbtopk:pq}
\IF{${\widehat \mcY} = \{\widehat \yv\}$ is a singleton} \label{line:algo:bbtopk:1}
\STATE Append $( \widehat \yv, \psi(\widehat \yv ; \wv) )$ to $\mcS$.
\ELSE
\STATE $\mcY_1, \mcY_2 \leftarrow \mathrm{split}(\widehat \mcY)$.
\STATE Add $\mcY_1$ with priority $\widehat \psi(\mcY_1 ; \wv)$
and $\mcY_2$ with priority $\widehat \psi(\mcY_2 ; \wv)$ to the priority queue.
\ENDIF
\ENDWHILE
\RETURN $\mcS$.
\end{algorithmic}
\end{algorithm}
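To make the search procedure concrete, a minimal sketch of Algo.~\ref{algo:top_k:bb} for a toy
search space is given below. The score, upper bound and split strategy are illustrative stand-ins
for $\psi$, $\widehat\psi$ and the split strategy supplied as inputs; in particular, the exhaustive
upper bound used in the example satisfies properties (a)--(c) of
Prop.~\ref{prop:smoothing:bb-search} but is only for demonstration, since a practical bound must be
cheap to evaluate.
\begin{verbatim}
# Illustrative sketch of top-K best-first branch and bound search.
import heapq
import itertools

def topk_branch_and_bound(score, upper_bound, split, root, K):
    counter = itertools.count()        # tie-breaker for equal priorities
    # heapq is a min-heap, so priorities are negated.
    heap = [(-upper_bound(root), next(counter), root)]
    solutions = []
    while heap and len(solutions) < K:
        _, _, region = heapq.heappop(heap)
        if len(region) == 1:           # singleton region: accept it
            (y,) = region
            solutions.append((y, score(y)))
        else:
            for child in split(region):
                heapq.heappush(heap, (-upper_bound(child), next(counter), child))
    return solutions

if __name__ == "__main__":
    score = lambda y: -(y - 6.3) ** 2                    # toy score
    upper_bound = lambda region: max(score(y) for y in region)
    split = lambda region: (region[:len(region) // 2], region[len(region) // 2:])
    print(topk_branch_and_bound(score, upper_bound, split, tuple(range(10)), K=3))
\end{verbatim}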
\begin{proposition_unnumbered}[\ref{prop:smoothing:bb-search}]
Consider an augmented score function $\psi(\cdot, \cdot ; \wv)$,
an integer $K > 0$ and a smoothing parameter $\mu > 0$.
Suppose the upper bound function $\widehat \psi(\cdot, \cdot ; \wv): \mcX \times 2^{\mcY} \to \reals$
satisfies the following properties:
\begin{enumerate}[label=(\alph*), align=left, widest=a, leftmargin=*]
\item $\widehat \psi(\widehat \mcY ; \wv)$ is finite for every $\widehat \mcY \subseteq \mcY$,
\item $\widehat \psi(\widehat \mcY ; \wv) \ge \max_{\yv \in \widehat \mcY} \psi(\yv ; \wv)$
for all $\widehat \mcY \subseteq \mcY$, and,
\item $\widehat \psi(\{\yv\} ; \wv) = \psi(\yv ; \wv)$ for every $\yv \in \mcY$.
\end{enumerate}
Then, we have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=ii, leftmargin=*]
\item Algo.~\ref{algo:top_k:bb} with $K=1$ is a valid implementation of the max oracle.
\item Algo.~\ref{algo:top_k:bb} followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing}) is a valid implementation of the top-$K$ oracle.
\end{enumerate}
\end{proposition_unnumbered}
\begin{proof}
Suppose at some point during the execution of the algorithm,
we have a $\widehat \mcY = \{\widehat \yv\}$ on Line~\ref{line:algo:bbtopk:1}
and that $\abs{\mcS} = k$ for some $0 \le k < K$.
From the properties of the upper bound $\widehat \psi$,
and using the fact that $\{\widehat \yv\}$ had the highest priority among all entries
$Y$ in the priority queue $\mcP$ (this step is marked $(*)$ below), we get,
\begin{align*}
\psi(\widehat\yv ; \wv) &= \widehat \psi(\{ \widehat \yv\} ; \wv) \\
&\stackrel{(*)}{\ge} \max_{Y \in \mcP} \widehat \psi(Y ; \wv) \\
&\ge \max_{Y \in \mcP} \max_{\yv \in Y} \psi(\yv ; \wv) \\
&\stackrel{(\#)}{=} \max_{\yv \in \mcY - \mcS} \psi(\yv ; \wv) \,,
\end{align*}
where the equality $(\#)$ followed from the fact that
any $\yv \in \mcY$ exits the priority queue only if it is added to $\mcS$.
This shows that if a $\widehat \yv$ is added to $\mcS$, it has a score that is no less than
that of any $\yv \in \mcY - \mcS$. In other words, Algo.~\ref{algo:top_k:bb} returns
the top-$K$ highest scoring $\yv$'s.
\end{proof}
\subsection{Behavior of the Sequence $(\alpha_k)_{k \ge 0}$} \label{sec:a:c_alpha_k}
\begin{lemma_unnumbered}[\ref{lem:c:alpha_k}]
Given a positive, non-decreasing sequence $(\kappa_k)_{k\ge 1}$ and $\lambda \ge 0$,
consider the sequence $(\alpha_k)_{k \ge 0}$ defined by \eqref{eq:c:update_alpha}, where
$\alpha_0 \in (0, 1)$ such that $\alpha_0^2 \ge \lambda / (\lambda + \kappa_1)$.
Then, we have for every $k \ge 1$ that $0< \alpha_k \le \alpha_{k-1}$ and,
$
\alpha_k^2 \ge {\lambda}/({\lambda + \kappa_{k+1}}) \,.
$
\end{lemma_unnumbered}
\begin{proof}
It is clear that \eqref{eq:c:update_alpha} always has a positive root, so the update is well defined.
Define sequences $(c_k)_{k \ge 1}, (d_k)_{k \ge 0}$ as
\begin{align*}
c_k = \frac{\lambda + \kappa_k}{\lambda + \kappa_{k+1}}\,, \quad \mbox{and} \quad
d_k = \frac{\lambda}{\lambda + \kappa_{k+1}} \,.
\end{align*}
Therefore, we have that $c_k d_{k-1} = d_k$, $0 < c_k \le 1$ and $0 \le d_k < 1$.
With these in hand, the rule for $\alpha_k$ can be written as
\begin{align} \label{eq:lem:c:alpha_k}
\alpha_k = \frac{ -(c_k \alpha_{k-1}^2 - d_k ) + \sqrt{ (c_k \alpha_{k-1}^2 - d_k )^2 + 4 c_k \alpha_{k-1}^2 }}{2} \,.
\end{align}
We show by induction that $d_k \le \alpha_k^2 < 1$.
The base case holds by assumption. Suppose that $\alpha_{k-1}$ satisfies
the hypothesis for some $k \ge 1$.
Noting that $\alpha_{k-1}^2 \ge d_{k-1}$ is equivalent to $c_k \alpha_{k-1}^2 - d_k \ge 0$, we get that
\begin{align}
\nonumber
\sqrt{ (c_k \alpha_{k-1}^2 - d_k )^2 + 4 c_k \alpha_{k-1}^2 }
&\le
\sqrt{ (c_k \alpha_{k-1}^2 - d_k )^2 + 4 c_k \alpha_{k-1}^2
+ 2 (c_k \alpha_{k-1}^2 - d_k) (2\sqrt{c_k} \alpha_{k-1}) } \\
&= c_k \alpha_{k-1}^2 - d_k + 2\sqrt{c_k} \alpha_{k-1} \,.
\label{eq:lem:c:alpha_k_helper}
\end{align}
We now conclude from \eqref{eq:lem:c:alpha_k} and \eqref{eq:lem:c:alpha_k_helper} that
\begin{align}
\nonumber
\alpha_k &\le \frac{ -(c_k \alpha_{k-1}^2 - d_k ) + (c_k \alpha_{k-1}^2 - d_k + 2\sqrt{c_k} \alpha_{k-1}) }{2} \\
&= \sqrt{c_k}{\alpha_{k-1}} \le \alpha_{k-1} < 1\,,
\label{eq:lem:c:alpha_k_dec}
\end{align}
since $c_k \le 1$ and $\alpha_{k-1} < 1$. To show the other side, we expand out \eqref{eq:lem:c:alpha_k}
and apply \eqref{eq:lem:c:alpha_k_helper} again to get
\begin{align*}
\alpha_k^2 - d_k
&= \frac{1}{2}(c_k \alpha_{k-1}^2 - d_k)^2 + (c_k \alpha_{k-1}^2 - d_k)
- \frac{1}{2}(c_k \alpha_{k-1}^2 - d_k) \sqrt{(c_k \alpha_{k-1}^2 - d_k)^2 + 4 c_k \alpha_{k-1}^2 } \\
&= \frac{1}{2}(c_k \alpha_{k-1}^2 - d_k) \left(2 + (c_k \alpha_{k-1}^2 - d_k)
- \sqrt{(c_k \alpha_{k-1}^2 - d_k)^2 + 4 c_k \alpha_{k-1}^2 }
\right) \\
&\ge \frac{1}{2}(c_k \alpha_{k-1}^2 - d_k) \left(2 + (c_k \alpha_{k-1}^2 - d_k)
- (c_k \alpha_{k-1}^2 - d_k+ 2\sqrt{c_k} \alpha_{k-1})
\right) \\
&= (c_k \alpha_{k-1}^2 - d_k) ( 1- \sqrt{c_k}\alpha_{k-1}) \ge 0 \,.
\end{align*}
The fact that $(\alpha_{k})_{k\ge 0}$ is a non-increasing sequence follows from~\eqref{eq:lem:c:alpha_k_dec}.
\end{proof}
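As a quick numerical illustration of the lemma (not part of the proof), one can iterate the
equivalent closed-form update \eqref{eq:lem:c:alpha_k} for an arbitrary positive, non-decreasing
sequence $(\kappa_k)$ and check both conclusions; the particular values of $\lambda$, $\alpha_0$
and $(\kappa_k)$ below are arbitrary choices satisfying the assumptions.
\begin{verbatim}
# Numerical sanity check of the lemma via the closed-form update.
import math

lam = 0.5
kappa = [0.0] + [1.0 + 0.1 * k for k in range(1, 51)]  # kappa[k], non-decreasing
alpha = [0.9]   # alpha_0 in (0, 1) with alpha_0^2 >= lam / (lam + kappa[1])
for k in range(1, 50):
    c = (lam + kappa[k]) / (lam + kappa[k + 1])
    d = lam / (lam + kappa[k + 1])
    a = c * alpha[k - 1] ** 2 - d
    alpha.append((-a + math.sqrt(a * a + 4 * c * alpha[k - 1] ** 2)) / 2)
    assert 0.0 < alpha[k] <= alpha[k - 1] + 1e-12
    assert alpha[k] ** 2 >= lam / (lam + kappa[k + 1]) - 1e-12
print("all checks passed")
\end{verbatim}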
\subsection{Proofs of Corollaries to Theorem~\ref{thm:catalyst:outer}} \label{subsec:c:proofs_missing_cor}
We rewrite \eqref{thm:c:main:main} from Theorem~\ref{thm:catalyst:outer} as follows:
\begin{align} \label{eq:c:app:main}
F&(\wv_k) - F^* \le
\left( \prod_{j=1}^k \frac{1-\alpha_{j-1}}{1-\delta_j} \right)
\left( F(\wv_0) - F^* + \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right) + \mu_k D_\omega \\
&+
\frac{1}{1-\alpha_k} \left[
\left( \prod_{j=1}^k \frac{1-\alpha_j}{1-\delta_j} \right) (1 + \delta_1) \mu_1 D_\omega +
\sum_{j=2}^k \left( \prod_{i=j}^k \frac{1-\alpha_i}{1-\delta_i} \right)
\left( \mu_{j-1} - (1-\delta_j)\mu_j \right)D_\omega
\right]
\,. \nonumber
\end{align}
Next, we give the proofs of Corollaries~\ref{cor:c:outer_sc} to~\ref{cor:c:outer_smooth_dec_smoothing}.
\begin{corollary_unnumbered}[\ref{cor:c:outer_sc}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Let $q = \frac{\lambda}{\lambda + \kappa}$.
Suppose $\lambda > 0$ and $\mu_k = \mu$, $\kappa_k = \kappa$, for all $k \ge 1$. Choose $\alpha_0 = \sqrt{q}$ and,
$\delta_k = \frac{\sqrt{q}}{2 - \sqrt{q}} \,.$
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \frac{3 - \sqrt{q}}{1 - \sqrt{q}} \mu D +
2 \left( 1- \frac{\sqrt q}{2} \right)^k \left( F(\wv_0) - F^* \right) \,.
\end{align*}
\end{corollary_unnumbered}
\begin{proof}
Notice that when $\alpha_0 = \sqrt{q}$, we have, $\alpha_k = \sqrt{q}$ for all $k$. Moreover, for our choice of $\delta_k$,
we get, for all $k, j$, $\frac{1-\alpha_k}{1-\delta_j} = 1 - \frac{\sqrt q}{2}$.
Under this choice of $\alpha_0$, we have, $\gamma_0 = \lambda$. So, we get the dependence on initial conditions as
\begin{align*}
\Delta_0 = F(\wv_0) - F^* + \frac{\lambda}{2} \normsq{\wv_0 - \wv^*} \le 2( F(\wv_0) - F^*) \,,
\end{align*}
by $\lambda$-strong convexity of $F$. The last term of \eqref{eq:c:app:main} is now,
\begin{align*}
\frac{\mu D}{1-\sqrt {q}} \left[ \underbrace{\left( 1 - \frac{\sqrt q}{2} \right)^{k-1} }_{\le 1}
+ \underbrace{\frac{\sqrt q}{2} \sum_{j=2}^k \left( 1 - \frac{\sqrt q}{2} \right)^{k-j}}_{\stackrel{(*)}{\le} 1 }
\right] \le \frac{2 \mu D}{1 - \sqrt q} \, ,
\end{align*}
where $(*)$ holds since
\begin{align*}
\sum_{j=2}^k \left( 1 - \frac{\sqrt q}{2} \right)^{k-j} \le \sum_{j=0}^\infty \left( 1 - \frac{\sqrt q}{2} \right)^{j}
= \frac{2}{\sqrt{q}} \,.
\end{align*}
Combining the three bounds above, the right-hand side of \eqref{eq:c:app:main} is at most
$2 \left( 1 - \tfrac{\sqrt q}{2} \right)^k \left( F(\wv_0) - F^* \right) + \mu D + \tfrac{2\mu D}{1-\sqrt q}$,
and since $\mu D + \tfrac{2 \mu D}{1 - \sqrt q} = \tfrac{(3 - \sqrt q) \mu D}{1 - \sqrt q}$, the claimed bound follows.
\end{proof}
\begin{corollary_unnumbered}[\ref{cor:c:outer_sc:decreasing_mu_const_kappa}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Let $q = \frac{\lambda}{\lambda + \kappa}, \eta = 1 - \frac{\sqrt q}{2}$.
Suppose $\lambda > 0$ and
$\kappa_k = \kappa$, for all $k \ge 1$. Choose $\alpha_0 = \sqrt{q}$ and,
the sequences $(\mu_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$ as
\begin{align*}
\mu_k = \mu \eta^{{k}/{2}} \,, \qquad \text{and,} \qquad
\delta_k = \frac{\sqrt{q}}{2 - \sqrt{q}} \,,
\end{align*}
where $\mu > 0$ is any constant.
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \eta^{{k}/{2}} \left[
2 \left( F(\wv_0) - F^* \right)
+ \frac{\mu D_\omega}{1-\sqrt{q}} \left(2-\sqrt{q} + \frac{\sqrt{q}}{1 - \sqrt \eta} \right)
\right] \, .
\end{align*}
\end{corollary_unnumbered}
\begin{proof}
As in Corollary~\ref{cor:c:outer_sc}, notice that under the specific parameter choices here, we have
$\gamma_0 = \lambda$, $\alpha_k = \sqrt{q}$ for each $k$, and $\frac{1 - \alpha_k}{1 - \delta_j} = 1 - \frac{\sqrt q}{2} = \eta$ for all $k, j$.
By $\lambda$-strong convexity of $F$ and the fact that $\gamma_0 = \lambda$, the contribution of $\wv_0$ can be upper
bounded by $2(F(\wv_0) - F^*)$. Now, plugging these into \eqref{eq:c:app:main}
and collecting the terms dependent on
$\delta_k$ separately, we get,
\begin{align} \label{eq:c:outer:sc:dec_smoothing}
\nonumber
F(\wv_k) - F^* \le & \underbrace{2 \eta^k (F(\wv_0) - F^*)}_{=: \mcT_1} +
\underbrace{\mu_k D}_{=: \mcT_2} \\ &+
\frac{1}{1 - \sqrt{q}} \left(
\underbrace{\eta^k \mu_1 D}_{=: \mcT_3} +
\underbrace{\sum_{j=2}^k \eta^{k-j+1} (\mu_{j-1} - \mu_j)D}_{=:\mcT_4} +
\underbrace{\sum_{j=1}^k \eta^{k-j+1} \mu_j \delta_j D}_{=: \mcT_5}
\right) \,.
\end{align}
We shall consider each of these terms. Since $\eta^k \le \eta^{k/2}$, we get
$\mcT_1 \le 2\eta^{k/2}(F(\wv_0) - F^*)$ and $\mcT_3 = \eta^k \mu_1 D \le \eta^k \mu D \le \eta^{k/2} \mu D$.
Moreover, $\mcT_2 = \mu_k D = \eta^{k/2} \mu D$.
Next, using $ 1- \sqrt \eta \le 1 - \eta = \frac{\sqrt q}{2}$,
\begin{align*}
\mcT_4 &= \sum_{j=2}^k \eta^{k-j+1}(\mu_{j-1} - \mu_j) D
= \sum_{j=2}^k \eta^{k-j+1} \mu \eta^{\nicefrac{(j-1)}{2}} (1 - \sqrt\eta) D \\
&\le \frac{\sqrt{q}}{2} \mu D \sum_{j=2}^k \eta^{k - \frac{j-1}{2}}
= \frac{\sqrt{q}}{2} \mu D \eta^{\nicefrac{(k+1)}{2}} \sum_{j=0}^{k-2} \eta^{j/2}
\le \frac{\sqrt{q}}{2} \mu D \frac{\eta^{\nicefrac{(k+1)}{2}} }{1- \sqrt\eta} \\
&\le \frac{\sqrt{q}}{2} \mu D \frac{\eta^{\nicefrac{k}{2}} }{1- \sqrt\eta} \, .
\end{align*}
Similarly, using $\delta_j = \nicefrac{\sqrt q}{2\eta}$, we have,
\begin{align*}
\mcT_5 &= \sum_{j=1}^k \eta^{k-j+1} \mu \eta^{j/2} D \frac{\sqrt q}{2\eta}
= \frac{\sqrt{q}}{2} \mu D\sum_{j=1}^k \eta^{k - \nicefrac{j}{2}}
\le \frac{\sqrt{q}}{2} \mu D \frac{\eta^{\nicefrac{k}{2}} }{1- \sqrt\eta} \, .
\end{align*}
Plugging these into \eqref{eq:c:outer:sc:dec_smoothing} completes the proof.
\end{proof}
\begin{corollary_unnumbered}[\ref{cor:c:outer_smooth}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}. Suppose $\mu_k = \mu$, $\kappa_k = \kappa$, for all $k \ge 1$
and $\lambda = 0$. Choose $\alpha_0 = \frac{\sqrt{5}-1}{2}$ and
$\delta_k = \frac{1}{(1 + k)^2} \,.$
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \frac{8}{(k+2)^2} \left( F(\wv_0) - F^* + \frac{\kappa}{2} \normasq{2}{\wv_0 - \wv^*} \right)
+ \mu D_\omega\left( 1 + \frac{12}{k+2} + \frac{30}{(k+2)^2} \right) \, .
\end{align*}
\end{corollary_unnumbered}
\begin{proof}
Firstly, note that $\gamma_0 = \kappa \frac{\alpha_0^2}{1-\alpha_0} = \kappa$. Now, define
\begin{align*}
\mcA_k &= \prod_{i=0}^k (1- \alpha_i) \text{, and, }
\mcB_k = \prod_{i=1}^k (1-\delta_i) \, .
\end{align*}
We have,
\begin{align} \label{lem:c:b_k_1}
\mcB_k = \prod_{i=1}^k \left( 1 - \frac{1}{(i+1)^2} \right) = \prod_{i=1}^k \frac{i(i+2)}{(i+1)^2} = \frac{1}{2} + \frac{1}{2(k+1)}\,.
\end{align}
Therefore,
\begin{align*}
F(\wv_k) - F^* \le& \frac{\mcA_{k-1}}{\mcB_k} \left( F(\wv_0) - F^*
+ \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right)
+ \mu D \\ &+ \frac{\mu D}{1-\alpha_0} \left( \prod_{j=1}^k \frac{1-\alpha_{j-1}}{1-\delta_j} \right) (1 + \delta_1) +
\mu D \sum_{j=2}^k \left( \prod_{i=j}^k \frac{1- \alpha_{i-1}}{1-\delta_i} \right) \frac{\delta_j}{1-\alpha_{j-1}}
\\
\le& \underbrace{\frac{\mcA_{k-1}}{\mcB_k} \left( F(\wv_0) - F^* + \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right)}_{=:\mcT_1}
+ \mu D \\
&+
\underbrace{ \frac{\tfrac{5}{4}\mu D}{1-\alpha_0} \frac{\mcA_{k-1}}{\mcB_k} }_{=:\mcT_2}+
\underbrace{\mu D \sum_{j=2}^k \frac{ \nicefrac{\mcA_{k-1}} {\mcA_{j-2}}} { \nicefrac{\mcB_k}{\mcB_{j-1}}}
\frac{\delta_j}{1-\alpha_{j-1}}}_{=:\mcT_3} \,.
\end{align*}
From Lemma~\ref{lem:c:A_k:const_kappa}, which analyzes the evolution of $(\alpha_k)$ and $(\mcA_k)$,
we get that $\frac{2}{(k+2)^2} \le \mcA_{k-1} \le \frac{4}{(k+2)^2}$ and $\alpha_k \le \frac{2}{k+3}$ for $k \ge 0$.
Since $\mcB_k \ge \frac{1}{2}$,
\begin{align*}
\mcT_1 \le \frac{8}{(k+2)^2} \left( F(\wv_0) - F^* + \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right) \,.
\end{align*}
Moreover, since $\alpha_0 \le 2/3$,
\begin{align*}
\mcT_2 \le \frac{30 \mu D}{(k+2)^2} \,.
\end{align*}
Lastly, we have,
\begin{align*}
\mcT_3 &\le \mu D \sum_{j=2}^k \frac{4}{(k+2)^2} \times \frac{(j+1)^2}{2} \times
2\left( \frac{1}{2}+ \frac{1}{2j} \right) \times \frac{1}{(j+1)^2} \times \frac{1}{1 - \nicefrac{2}{j+2}} \\
&\le \frac{4 \mu D}{(k+2)^2} \sum_{j=2}^k \frac{j+2}{j} \le \frac{4 \mu D}{(k+2)^2} \left(k -1 + 2 \log k \right)
\le \frac{12 \mu D}{k+2} \, ,
\end{align*}
where we have used the simplifications $\sum_{j=2}^k 1/j \le \log k$ and $ k-1+2\log k \le 3k$.
\end{proof}
\begin{corollary_unnumbered}[\ref{cor:c:outer_smooth_dec_smoothing}]
Consider the setting of Thm.~\ref{thm:catalyst:outer} with $\lambda = 0$.
Choose $\alpha_0 = \frac{\sqrt{5}-1}{2}$, and for some non-negative constants $\kappa, \mu$,
define sequences $(\kappa_k)_{k \ge 1}, (\mu_k)_{k \ge 1}, (\delta_k)_{k \ge 1}$ as
\begin{align*}
\kappa_k = \kappa \, k\,, \quad
\mu_k = \frac{\mu}{k} \quad \text{and,} \quad
\delta_k = \frac{1}{(k + 1)^2} \,.
\end{align*}
Then, for $k \ge 2$, we have,
\begin{align}
F(\wv_k) - F^* \le
\frac{\log(k+1)}{k+1} \left(
2(F(\wv_0) - F^*) + \kappa \normasq{2}{\wv_0 - \wv^*} + 27 \mu D_\omega
\right) \,.
\end{align}
For the first iteration (i.e., $k = 1$), this bound is off by a constant factor $1 / \log2$.
\end{corollary_unnumbered}
\begin{proof}
Notice that $\gamma_0 = \kappa_1 \frac{\alpha_0^2}{1- \alpha_0} = \kappa$.
As in Corollary~\ref{cor:c:outer_smooth}, define
\begin{align*}
\mcA_k &= \prod_{i=0}^k (1- \alpha_i)\,, \quad \text{and,} \quad
\mcB_k = \prod_{i=1}^k (1-\delta_i) \, .
\end{align*}
From Lemma~\ref{lem:c:A_k:inc_kappa} and \eqref{lem:c:b_k_1} respectively, we have for $k \ge 1$,
\begin{align*}
\frac{1- \frac{1}{\sqrt 2}}{k+1} &\le \mcA_{k} \le \frac{1}{k+2}\,, \quad \text{and,} \quad
\frac{1}{2} \le \mcB_{k} \le 1\, .
\end{align*}
Now, invoking Theorem~\ref{thm:catalyst:outer}, we get,
\begin{align} \label{eq:cor:c:nsc:dec_smoothing_eq}
F(\wv_k) - F^* \le& \nonumber
\underbrace{\frac{\mcA_{k-1}}{\mcB_k} \left( F(\wv_0) - F^*
+ \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right)}_{=:\mcT_1} +
\underbrace{\mu_k D}_{=:\mcT_2} +
\underbrace{\frac{1}{1 - \alpha_0} \frac{\mcA_{k-1}}{\mcB_k} \mu_1 D(1 + \delta_1)}_{=:\mcT_3} + \\
&\underbrace{\sum_{j=2}^k \frac{\mcA_{k-1}/\mcA_{j-1}}{\mcB_k / \mcB_{j-1}} (\mu_{j-1} - \mu_j) D}_{=:\mcT_4} +
\underbrace{\sum_{j=2}^k \frac{\mcA_{k-1}/\mcA_{j-1}}{\mcB_k / \mcB_{j-1}} \delta_j \mu_j D }_{=:\mcT_5} \,.
\end{align}
We shall bound each of these terms as follows.
\begin{gather*}
\mcT_1 = \frac{\mcA_{k-1}}{\mcB_k} \left( F(\wv_0) - F^* + \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right)
\le \frac{2}{k+1} \left( F(\wv_0) - F^* + \frac{\kappa}{2} \normsq{\wv_0 - \wv^*} \right) \,, \\
\mcT_2 = \mu_k D = \frac{\mu D}{k} \le \frac{2\mu D}{k+1} \, , \\
\mcT_3 = \frac{1}{1 - \alpha_0} \frac{\mcA_{k-1}}{\mcB_k} \mu_1 D(1 + \delta_1)
\le 3 \times \frac{2}{k+1} \times {\mu} \times \frac{5}{4}D = \frac{15}{2} \frac{\mu D}{k+1} \,,
\end{gather*}
where we used the fact that $\alpha_0 \le 2/3$. Next,
using $\sum_{j=2}^k {1}/({j-1}) = 1 + \sum_{j=2}^{k-1} {1}/{j} \le 1 + \int_{1}^{k-1}{dx}/{x} = 1 + \log(k-1)$,
we get,
\begin{align*}
\nonumber
{\mcT_4}
&\le \sum_{j=2}^k \frac{2}{k+1} \cdot \frac{j}{1- \frac{1}{\sqrt 2}} \left(\frac{\mu}{j-1} - \frac{\mu}{j}\right) D
= 2\sqrt2(\sqrt2 + 1) \frac{\mu D}{k+1} \sum_{j=2}^k \frac{1}{j-1} \nonumber \\
= 2\sqrt2(\sqrt2 + 1) \frac{\mu D}{k+1} \sum_{j=2}^k \frac{1}{j-1} \nonumber \\
&\le 2\sqrt2(\sqrt2 + 1) \mu D \left( \frac{1 + \log(k+1)}{k+1} \right) \,.
\end{align*}
Moreover, from $\sum_{j=2}^k {1}/{(j+1)^2} \le \int_{2}^{k+1} {dx}/{x^2} \le 1/2$, it follows that
\begin{align*}
\mcT_5
\le \sum_{j=2}^k \frac{2}{k+1} \cdot \frac{j}{1- \frac{1}{\sqrt 2}} \frac{\mu}{j} \cdot \frac{1}{(j+1)^2} D
= 2\sqrt2(\sqrt2+1) \frac{\mu D}{k+1} \sum_{j=2}^k \frac{1}{(j+1)^2}
\le \sqrt2(\sqrt2 + 1) \frac{\mu D}{k+1} \, .
\end{align*}
Plugging these back into \eqref{eq:cor:c:nsc:dec_smoothing_eq}, we get
\begin{align*}
F(\wv_k) - F^* \le& \frac{2}{k+1} \left( F(\wv_0) - F^* + \frac{\kappa}{2} \normsq{\wv_0 - \wv^*} \right)+ \\
&\frac{\mu D}{k+1} \left(2 + \frac{15}{2} + \sqrt2(1 + \sqrt2) \right) +
2\sqrt2(1 + \sqrt2)\mu D \frac{1 + \log(k+1)}{k+1} \,.
\end{align*}
To complete the proof, note that $\log(k+1) \ge 1$ for $k\ge 2$ and numerically verify that the coefficient of $\mu D$ is
smaller than 27.
\end{proof}
\subsection{Inner Loop Complexity Analysis for Casimir} \label{sec:c:proofs:inner_compl}
Before proving Prop.~\ref{prop:c:inner_loop_final}, the following lemmas will be helpful.
First, we present a lemma from \citet[Lemma 11]{lin2017catalyst}
about the expected number of iterations that a randomized linearly convergent first-order method requires
to achieve a certain target accuracy.
\begin{lemma}
\label{lem:c:inner_loop}
Let $\mcM$ be a linearly convergent algorithm and $f \in \mcF_{L, \lambda}$.
Define $f^* = \min_{\wv \in \reals^d} f(\wv)$.
Given a starting point $\wv_0$ and a target accuracy $\eps$,
let $(\wv_k)_{k \ge 0}$ be the sequence of iterates generated by $\mcM$.
Define
$T(\eps) = \inf \left\{ k \ge 0 \, | \, f(\wv_k) - f^* \le \eps \right\} \,.$
We then have,
\begin{align}
\expect[T(\eps)] \le \frac{1}{\tau(L, \lambda)} \log \left( \frac{2C(L, \lambda)}
{\tau(L,\lambda)\eps} (f(\wv_0) - f^*) \right) + 1 \,.
\end{align}
\end{lemma}
This next lemma is due to \citet[Lemma 14, Prop.~15]{lin2017catalyst}.
\begin{lemma}
\label{lem:c:inner_loop_restart}
Consider $F_{\mu\omega, \kappa}(\cdot \, ;\zv)$ defined in Eq.~\eqref{eq:prox_point_algo}
and let $\delta \in [0,1)$. Let $\widehat F^* = \min_{\wv \in \reals^d} F_{\mu\omega, \kappa}(\wv ;\zv)$
and $\widehat \wv^* = \argmin_{\wv \in \reals^d} F_{\mu\omega, \kappa}(\wv ;\zv)$.
Further let $F_{\mu\omega}(\cdot \, ;\zv)$ be $L_{\mu\omega}$-smooth.
We then have the following:
\begin{gather*}
F_{\mu\omega, \kappa}(\zv ;\zv) - \widehat F^* \le \frac{L_{\mu\omega} + \kappa}{2} \normasq{2}{\zv - \widehat \wv^*} \,,
\quad \text{and,} \\
F_{\mu\omega, \kappa}(\widehat\wv ;\zv) - \widehat F^* \le \frac{\delta\kappa}{8} \normasq{2}{\zv - \widehat \wv^*}
\, \implies \,
F_{\mu\omega, \kappa}(\widehat\wv ;\zv) - \widehat F^* \le \frac{\delta\kappa}{2} \normasq{2}{\widehat \wv - \zv} \,.
\end{gather*}
\end{lemma}
We now restate and prove Prop.~\ref{prop:c:inner_loop_final}.
\begin{proposition_unnumbered}[\ref{prop:c:inner_loop_final}]
Consider $F_{\mu\omega, \kappa}(\cdot \, ;\zv)$ defined in Eq.~\eqref{eq:prox_point_algo},
and a linearly convergent algorithm $\mcM$ with parameters $C$, $\tau$.
Let $\delta \in [0,1)$. Suppose $F_{\mu\omega}$ is $L_{\mu\omega}$-smooth and
$\lambda$-strongly convex.
Then the expected number of iterations $\expect[\widehat T]$ of $\mcM$ when started at $\zv$
in order to obtain $\widehat \wv \in \reals^d$ that satisfies
\begin{align}\label{eq:inner_stopping_criterion}
F_{\mu\omega, \kappa}(\widehat\wv;\zv) - \min_\wv F_{\mu\omega, \kappa}(\wv;\zv)\leq \tfrac{\delta\kappa}{2} \normasq{2}{\widehat\wv - \zv}
\end{align}
is upper bounded by
\begin{align*}
\expect[\widehat T] \le \frac{1}{\tau(L_{\mu\omega} + \kappa, \lambda + \kappa)} \log\left(
\frac{8 C(L_{\mu\omega} + \kappa, \lambda + \kappa)}{\tau(L_{\mu\omega} + \kappa, \lambda + \kappa)} \cdot
\frac{L_{\mu\omega} + \kappa}{\kappa \delta} \right) + 1 \,.
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
In order to invoke
Lemma~\ref{lem:c:inner_loop}, we must appropriately set $\eps$ for
$\widehat\wv$ to satisfy \eqref{eq:inner_stopping_criterion} and then bound the ratio
$(F_{\mu\omega, \kappa}(\zv ;\zv) - \widehat F^*) / \eps$.
Firstly, Lemma~\ref{lem:c:inner_loop_restart} tells us that choosing
$\eps = \frac{\delta \kappa}{8} \normasq{2}{\zv - \widehat \wv^*}$ guarantees
that the $\widehat \wv$ so obtained satisfies \eqref{eq:inner_stopping_criterion},
where $\widehat \wv^* := \argmin_{\wv \in \reals^d} F_{\mu\omega, \kappa}(\wv ;\zv)$.
By the first part of Lemma~\ref{lem:c:inner_loop_restart}, the ratio $(F_{\mu\omega, \kappa}(\zv ;\zv) - \widehat F^*) / \eps$
is then bounded from above by ${4(L_{\mu\omega} + \kappa)}/{(\kappa \delta)}$.
Plugging this bound into Lemma~\ref{lem:c:inner_loop} completes the proof.
\end{proof}
\subsection{Information Based Complexity of {Casimir-SVRG}} \label{sec:c:proofs:total_compl}
Presented below are the proofs of Propositions~\ref{prop:c:total_compl_svrg_sc} to
\ref{prop:c:total_compl_nsc:dec_smoothing} from Section~\ref{sec:catalyst:total_compl}.
We use the following values of $C, \tau$, see e.g., \citet{hofmann2015variance}.
\begin{align*}
\tau(L, \lambda) &= \frac{1}{8 \tfrac{L}{\lambda} + n} \ge \frac{1}{8 \left( \tfrac{L}{\lambda} + n \right)}\\
C(L, \lambda) &= \frac{L}{\lambda} \left( 1 + \frac{n \tfrac{L}{\lambda}}{8 \tfrac{L}{\lambda} + n} \right)\,.
\end{align*}
\begin{proposition_unnumbered}[\ref{prop:c:total_compl_svrg_sc}]
Consider the setting of Thm.~\ref{thm:catalyst:outer} with $\lambda > 0$ and
fix $\eps > 0$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with parameters:
$\mu_k = \mu = {\eps} / ({10 D_\omega})$, $\kappa_k = \kappa$ for all $k \ge 1$, with $\kappa$ chosen as
\begin{align*}
\kappa =
\begin{cases}
\frac{A}{\mu n} - \lambda \,, \text{ if } \frac{A}{\mu n} > 4 \lambda \\
\lambda \,, \text{ otherwise}
\end{cases} \,,
\end{align*}
$q = {\lambda}/{(\lambda + \kappa)}$, $\alpha_0 = \sqrt{q}$, and
$\delta = {\sqrt{q}}/{(2 - \sqrt{q})}$.
Then, the number of iterations $N$ to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left(
n + \sqrt{\frac{A_\omega D_\omega n}{\lambda \eps}}
\right) \,.
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
We use shorthand $A:=A_\omega$, $D := D_\omega$, $L_\mu = \lambda + \nicefrac{A}{\mu}$ and
$\Delta F_0 = F(\wv_0) - F^*$.
Let $C, \tau$ be the linear convergence parameters of SVRG.
From Cor.~\ref{cor:c:outer_sc}, the number of outer iterations $K$ required to obtain
$F(\wv_K) - F^* \le \eps$ is
\begin{align*}
K \le \frac{2}{\sqrt{q}} \log\left(\frac{ 2 \Delta F_0}{\eps - c_q \mu D} \right)\, ,
\end{align*}
where $c_q = (3 - \sqrt q)/(1 - \sqrt q)$.
From Prop.~\ref{prop:c:inner_loop_final}, the number $T_k$ of inner iterations
for inner loop $k$ is, from $\delta_k = {\sqrt q}/({2 - \sqrt{q}})$,
\begin{align*}
\expect[T_k] &\le \frac{1}{\tau(L_\mu + \kappa, \lambda + \kappa)} \log\left(
\frac{8 C(L_\mu + \kappa, \lambda + \kappa)}{\tau(L_\mu + \kappa, \lambda + \kappa)} \cdot
\frac{L_\mu + \kappa}{\kappa} \cdot \frac{2 - \sqrt{q}}{\sqrt{q}} \right) + 1 \\
&\le \frac{2}{\tau(L_\mu + \kappa, \lambda + \kappa)} \log\left(
\frac{8 C(L_\mu + \kappa, \lambda + \kappa)}{\tau(L_\mu + \kappa, \lambda + \kappa)} \cdot
\frac{L_\mu + \kappa}{\kappa} \cdot \frac{2 - \sqrt{q}}{\sqrt{q}} \right) \,.
\end{align*}
Let $N$ be the total number of iterations of SVRG required to obtain an iterate $\wv$ that satisfies $F(\wv) - F^* \le \eps$.
Next, we upper bound $\expect[N] \le \sum_{k=1}^K \expect[T_k]$ as
\begin{align} \label{eq:c:total_compl_sc}
\expect[N] \le \frac{4}{\sqrt{q} \tau(L_\mu + \kappa, \lambda +\kappa)} \log \left(
\frac{8 C(L_\mu + \kappa, \lambda + \kappa)}{\tau(L_\mu + \kappa, \lambda + \kappa)}
\frac{L_\mu + \kappa}{\kappa} \frac{2 - \sqrt{q}}{\sqrt q} \right)
\log\left( \frac{2(F(\wv_0) - F^*)}{\eps - c_q \mu D} \right)\,.
\end{align}
Next, we shall plug in $C, \tau$ for SVRG in two different cases:
\begin{itemize}
\item Case 1: $A > 4\mu \lambda n$, in which case $\kappa + \lambda = A / (\mu n)$ and $q < 1/4$.
\item Case 2: $A \le 4 \mu \lambda n$, in which case, $\kappa = \lambda$ and $q = 1/2$.
\end{itemize}
We first consider the term outside the logarithm. It is, up to constants,
\begin{align*}
\frac{1}{\sqrt{q}} \left( n + \frac{A}{\mu(\lambda + \kappa)} \right)
= n \sqrt{\frac{\lambda + \kappa}{\lambda}} + \frac{A}{\mu \sqrt{\lambda(\lambda + \kappa)}} \,.
\end{align*}
For Case 1, plug in $\kappa + \lambda = A / (\mu n)$ and $\mu = \eps/(10 D)$, so this term is, up to constants, $\sqrt{{ADn}/({\lambda \eps})}$.
For Case 2, we use the fact that $A \le 4 \mu \lambda n$ so that this term can be upper bounded by,
\[
n\left( \sqrt{\frac{\lambda + \kappa}{\lambda}} + 4 \sqrt{ \frac{\lambda}{\lambda + \kappa}} \right) = 3\sqrt{2}n \,,
\]
since we chose $\kappa= \lambda$.
It remains to consider the logarithmic terms. Noting that $\kappa \ge \lambda$ always,
it follows that the first log term of \eqref{eq:c:total_compl_sc} is clearly
logarithmic in the problem parameters.
As for the second logarithmic term, we must evaluate $c_q$. For Case 1, we have that $q < 1/4$ so that $c_q < 5$
and $c_q \mu D < \eps / 2$. For Case 2, we get that $q = 1/2$ and $c_q < 8$ so that $c_q \mu D < 4\eps/5$. Thus, the
second log term of \eqref{eq:c:total_compl_sc} is also logarithmic in problem parameters.
\end{proof}
\begin{proposition_unnumbered} [\ref{prop:c:total_compl_sc:dec_smoothing_main}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Suppose $\lambda > 0$ and $\kappa_k = \kappa$, for all $k \ge 1$ and
that $\alpha_0$, $(\mu_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$
are chosen as in Cor.~\ref{cor:c:outer_sc:decreasing_mu_const_kappa},
with $q = \lambda/(\lambda + \kappa)$ and $\eta = 1- {\sqrt q}/{2}$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with these parameters,
the number of iterations $N$ of SVRG required to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left( n
+ \frac{A_\omega}{\mu(\lambda + \kappa)\eps} \left( F(\wv_0) - F^* + \frac{\mu D_\omega}{1-\sqrt{q}} \right)
\right) \,.
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
{
We continue to use shorthand $A:=A_\omega$, $D := D_\omega$.
First, let us consider the minimum number of outer iterations $K$ required to achieve $F(\wv_K) - F^* \le \eps$.
From Cor.~\ref{cor:c:outer_sc:decreasing_mu_const_kappa}, with $\Delta_0$ denoting the bracketed quantity there, it suffices to have $\eta^{K/2} \Delta_0 \le \eps$, or,
\[
K \ge K_{\min} := \frac{\log\left( {\Delta_0}/{\eps} \right)}{\log\left({1}/{\sqrt\eta}\right)} \,.
\]
For this smallest value, we have,
\begin{align} \label{eq:c:min_smoother}
\mu_{K_{\min}} = \mu \eta^{K_{\min}/2} = \frac{\mu \eps}{\Delta_0} \,.
\end{align}
Let $C, \tau$ be the linear convergence parameters of SVRG, and
define $L_k := \lambda + {A}/{\mu_k}$ for each $k\ge 1$.
Further, let $\mcT'$ be such that
\[
\mcT' \ge \max_{k\in\{1, \cdots, K_{\min}\}} \log\left( 8
\frac{C(L_k + \kappa, \lambda + \kappa)}{\tau(L_k + \kappa, \lambda+\kappa)} \frac{L_k + \kappa}{\kappa\delta} \right) \,.
\]
Then, the total complexity is, from Prop.~\ref{prop:c:inner_loop_final}, (ignoring absolute constants)
\begin{align}
\nonumber
\expect[N] &\le \sum_{k=1}^{K_{\min}} \left( n + \frac{\lambda + \kappa + \frac{A}{\mu_k}}{\lambda + \kappa} \right) \mcT' \\
\nonumber
&= \sum_{k=1}^{K_{\min}} \left( n+1 + \frac{\nicefrac{A}{\mu}}{\lambda + \kappa} \eta^{-k/2} \right) \mcT' \\
\nonumber
&= \left( K_{\min}(n+1) + \frac{\nicefrac{A}{\mu}}{\lambda + \kappa} \sum_{k=1}^{K_{\min}} \eta^{-k/2} \right) \mcT' \\
\nonumber
&\le \left( K_{\min}(n+1) + \frac{\nicefrac{A}{\mu}}{\lambda + \kappa}
\frac{\eta^{-K_{\min}/2}}{1 - \eta^{1/2} } \right) \mcT' \\
&= \left( (n+1)\frac{\log\left( \frac{\Delta_0}{\eps} \right)}{\log(\nicefrac{1}{\sqrt\eta})}
+ \frac{\nicefrac{A}{\mu}}{\lambda + \kappa} \frac{1}{1 - \sqrt\eta} \frac{\Delta_0}{\eps} \right) \mcT' \,.
\end{align}
It remains to bound $\mcT'$. Here, we use $\lambda + \frac{A}{\mu} \le L_k \le \lambda + \frac{A}{\mu_{K_{\min}}}$ for all $k \le K_{\min}$
together with \eqref{eq:c:min_smoother} to
note that $\mcT'$ is logarithmic in $\Delta_0/\eps, n, AD, \mu, \kappa, \lambda\inv$.
}
\end{proof}
\begin{proposition_unnumbered}[\ref{prop:c:total_compl_svrg_smooth}]
Consider the setting of Thm.~\ref{thm:catalyst:outer} and fix $\eps > 0$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with parameters:
$\mu_k = \mu ={\eps}/({20 D_\omega})$, $\alpha_0 = \tfrac{\sqrt{5} - 1}{2}$,
$\delta_k = {1}/{(k+1)^2}$, and $\kappa_k = \kappa = {A_\omega}/{\mu(n+1)}$.
Then, the number of iterations $N$ to get a point $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left( n\sqrt{\frac{F(\wv_0) - F^*}{\eps}} +
\sqrt{A_\omega D_\omega n} \frac{\norma{2}{\wv_0 - \wv^*}}{\eps} \right) \, .
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
We use shorthand $A:=A_\omega$, $D := D_\omega$, $L_\mu = \nicefrac{A}{\mu}$ and
$\Delta F_0 = F(\wv_0) - F^* + \frac{\kappa}{2} \normsq{\wv_0 -\wv^*}$.
Further, let $C, \tau$ be the linear convergence parameters of SVRG.
In Cor.~\ref{cor:c:outer_smooth}, the fact that $K \ge 1$ allows us to bound the contribution of the
smoothing as $10 \mu D$. So, we get that the number of outer iterations $K$ required to get
$F(\wv_K) - F^* \le \eps$ can be bounded as
\begin{align*}
K+1 \le \sqrt{\frac{8\Delta F_0}{\eps - 10 \mu D}} \,.
\end{align*}
Moreover, from our choice $\delta_k = 1 / (k+1)^2$, the number of inner iterations $T_k$
for inner loop $k$ is, from Prop.~\ref{prop:c:inner_loop_final},
\begin{align*}
\expect[T_k] &\le \frac{1}{\tau(L_\mu + \kappa, \kappa)} \log\left(
\frac{8 C(L_\mu + \kappa, \kappa)}{\tau(L_\mu + \kappa, \kappa)} \cdot
\frac{L_\mu + \kappa}{\kappa} \cdot (k+1)^2 \right) + 1\\
&\le \frac{2}{\tau(L_\mu + \kappa, \kappa)} \log\left(
\frac{8 C(L_\mu + \kappa, \kappa)}{\tau(L_\mu + \kappa, \kappa)} \cdot
\frac{L_\mu + \kappa}{\kappa} \cdot {\frac{8\Delta F_0}{\eps - 10 \mu D}} \right) \,.
\end{align*}
Next, we consider the total number $N$ of iterations of SVRG to obtain an iterate $\wv$ such that
$F(\wv) - F^* \le \eps$. Using the fact that $\expect[N] \le \sum_{k=1}^K \expect[T_k]$, we
bound it as
\begin{align} \label{eq:c:total_compl_smooth}
\expect[N] \le \frac{1}{\tau(L_\mu + \kappa, \kappa)}
\sqrt{\frac{8 \Delta F_0}{\eps - 10\mu D}}
\log \left(
\frac{64 C(L_\mu + \kappa, \kappa)}{\tau(L_\mu + \kappa, \kappa)}
\frac{L_\mu + \kappa}{\kappa} \frac{\Delta F_0}{\eps - 10\mu D} \right)\,.
\end{align}
Now, we plug into \eqref{eq:c:total_compl_smooth} the values of $C, \tau$ for SVRG.
Note that $\kappa = {L_\mu}/({n+1})$. So we have,
\begin{align*}
\frac{1}{\tau(L_\mu + \kappa, \kappa)} &= 8 \left( \frac{L_\mu + \kappa }{\kappa} + n \right) = 16(n+1) \,, \text{ and, } \\
C(L_\mu+\kappa, \kappa) &= \frac{L_\mu + \kappa}{\kappa} \left( 1 + \frac{n \tfrac{L_\mu+\kappa}{\kappa}}{8\tfrac{L_\mu+\kappa}{\kappa} + n} \right)
\le (n+2) \left(1 + \tfrac{n}{8} \right)\, .
\end{align*}
It now remains to assign $\mu = {\eps}/({20D})$ and plug $C, \tau$ from above into \eqref{eq:c:total_compl_smooth},
noting that $\kappa = {20A D}/({\eps(n+1)})$.
\end{proof}
\begin{proposition_unnumbered}[\ref{prop:c:total_compl_nsc:dec_smoothing}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Suppose $\lambda = 0$ and that $\alpha_0$, $(\mu_k)_{k\ge 1}$,$ (\kappa_k)_{k\ge 1}$ and $(\delta_k)_{k \ge 1}$
are chosen as in Cor.~\ref{cor:c:outer_smooth_dec_smoothing}.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with these parameters,
the number of iterations $N$ of SVRG required to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde\bigO \left( \frac{1}{\eps}
\left( F(\wv_0) - F^* + \kappa \normasq{2}{\wv_0 - \wv^*} + \mu D \right)
\left( n + \frac{A_\omega}{\mu \kappa} \right)
\right) \,.
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
Define shorthand $A:= A_\omega$, $D := D_\omega$ and
\begin{gather}
\label{eq:c:nsc:dec_smoothing_1}
\Delta_0 := 2(F(\wv_0) - F^*) + \kappa \normsq{\wv_0 - \wv^*} + 27 \mu D \, .
\end{gather}
From Cor.~\ref{cor:c:outer_smooth_dec_smoothing},
the number of iterations $K$ required to obtain $F(\wv_K) - F^* \le \frac{\log(K+1)}{K+1} \Delta_0 \le \eps$ is
(see Lemma~\ref{lem:c:helper_logx}),
\begin{align} \label{eq:c:nsc:dec_smoothing}
K + 1 = \frac{2\Delta_0}{\eps} \log \frac{2\Delta_0}{\eps} \,.
\end{align}
Let $C, \tau$ be such that SVRG is linearly convergent with parameters $C, \tau$, and
define $L_k := {A}/{\mu_k}$ for each $k \ge 1$.
Further, let $\mcT'$ be such that
\[
\mcT' \ge \max_{k\in\{1, \cdots, K\}} \log\left( 8
\frac{C(L_k + \kappa, \kappa)}{\tau(L_k + \kappa, \kappa)} \frac{L_k + \kappa}{\kappa \delta_k} \right) \, .
\]
Clearly, $\mcT'$ is logarithmic in $K, n, AD, \mu, \kappa $.
From Prop.~\ref{prop:c:inner_loop_final}, the total complexity is bounded as (ignoring absolute constants)
\begin{align}
\expect[N] &\le \sum_{k=1}^K \left( n + \frac{\nicefrac{A}{\mu_k} + \kappa_k}{\kappa_k} \right) \mcT' \nonumber \\
&= \sum_{k=1}^K \left( n + 1 + \frac{A}{\mu_k\kappa_k} \right) \mcT' \nonumber \\
&= \sum_{k=1}^K \left( n + 1 + \frac{A}{\mu\kappa} \right) \mcT' \nonumber \\
&\le \left( n+ 1 + \frac{A}{\mu \kappa} \right)K \mcT' \,,
\end{align}
and plugging in $K$ from~\eqref{eq:c:nsc:dec_smoothing} completes the proof.
\end{proof}
\subsection{Prox-Linear Convergence Analysis} \label{sec:c:pl_struct_pred}
We first prove Lemma~\ref{lem:pl:struct_pred}, which specifies the assumption required by the prox-linear algorithm in the case of structured prediction.
\begin{lemma_unnumbered}[\ref{lem:pl:struct_pred}]
Consider the structural hinge loss $f(\wv) = \max_{\yv \in \mcY} \psi(\yv ; \wv) = h\circ \gv(\wv)$
where $h, \gv$ are as defined in \eqref{eq:mapping_def}.
If the mapping $\wv \mapsto \psi(\yv ; \wv)$ is $L$-smooth with respect to $\norma{2}{\cdot}$ for all
$\yv \in \mcY$, then it holds for all $\wv, \zv \in \reals^d$ that
\begin{align*}
|h(\gv(\wv+\zv)) - h(\gv(\wv) + \grad\gv(\wv) \zv)| \le \frac{L}{2}\normasq{2}{\zv}\,.
\end{align*}
\end{lemma_unnumbered}
\begin{proof}
For any $\Am \in \reals^{m \times d}$ and $\wv \in \reals^d$, and $\norma{2,1}{\Am}$ defined in~\eqref{eq:matrix_norm_defn},
notice that
\begin{align} \label{eq:pl-struc-pred-pf:norm}
\norma{\infty}{\Am\wv} \le \norma{2, 1}{\Am} \norma{2}{\wv} \,.
\end{align}
Now using the fact that the max function $h$ satisfies $|h(\uv') - h(\uv)| \le \norma{\infty}{\uv' - \uv}$
and the fundamental theorem of calculus $(*)$, we deduce
\begin{align}
|h(\gv(\wv+\zv)) - h(\gv(\wv) + \grad\gv(\wv) \zv)|
&\le \norma{\infty}{\gv(\wv+\zv)- \left( \gv(\wv) + \grad\gv(\wv) \zv \right) } \nonumber \\
&\stackrel{(*)}{\le} \norm*{\int_0^1 (\grad\gv(\wv + t\zv) - \grad\gv(\wv) )\zv \, dt }_{\infty}
\nonumber \\
&\stackrel{\eqref{eq:pl-struc-pred-pf:norm}}{\le}
\int_0^1 \norma{2,1}{\grad\gv(\wv + t\zv) - \grad\gv(\wv) } \norma{2}{\zv} \, dt \,.
\label{eq:pl-struc-pred-pf:1}
\end{align}
Note that the definition \eqref{eq:matrix_norm_defn} can equivalently be stated as
$\norma{2,1}{\Am} = \max_{\norma{1}{\uv}\le 1} \norma{2}{\Am\T \uv}$.
Given $\uv \in \reals^m$, we index its entries $u_\yv$ by $\yv \in \mcY$. Then, the matrix norm
in \eqref{eq:pl-struc-pred-pf:1} can be simplified as
\begin{align*}
\norma{2,1}{\grad\gv(\wv + t\zv) - \grad\gv(\wv) }
&= \max_{\norma{1}{\uv} \le 1} \bigg\|{\sum_{\yv \in \mcY} u_\yv ( \grad \psi( \yv ; \wv + t\zv)
- \grad \psi( \yv ; \wv)) } \bigg\|_2 \\
&\le \max_{\norma{1}{\uv} \le 1} \sum_{\yv \in \mcY} |u_\yv| \norma{2}{\grad \psi( \yv ; \wv + t\zv)
- \grad \psi( \yv ; \wv)} \\
&\le L t \norma{2}{\zv} \,,
\end{align*}
from the $L$-smoothness of $\psi$.
Plugging this back into \eqref{eq:pl-struc-pred-pf:1} completes the proof. The bound for the smoothed approximation holds similarly by noticing that if $h$ is $1$-Lipschitz, then so is $h_{\mu \omega}$, since $\nabla h_{\mu\omega}(\uv) \in \dom h^*$ for any $\uv \in \dom h$.
\end{proof}
\subsection{Information Based Complexity of the Prox-Linear Algorithm with {Casimir-SVRG}} \label{sec:c:pl_proofs}
\begin{proposition_unnumbered}[\ref{prop:pl:total_compl}]
Consider the setting of Thm.~\ref{thm:pl:outer-loop}. Suppose the sequence $\{\eps_k\}_{k\ge 1}$
satisfies $\eps_k = \eps_0 / k$ for some $\eps_0 > 0$ and that
the subproblem of Line~\ref{line:pl:algo:subprob} of Algo.~\ref{algo:prox-linear} is solved using
{Casimir-SVRG}{} with the settings of Prop.~\ref{prop:c:total_compl_svrg_sc}.
Then, the total number of SVRG iterations $N$ required to produce a $\wv$ such that
$\norma{2}{\bm{\varrho}_\eta(\wv)} \le \eps$ is bounded as
\begin{align*}
\expect[N] \le \widetilde\bigO\left(
\frac{n}{\eta \eps^2} \left(F(\wv_0) - F^* + \eps_0 \right) +
\frac{\sqrt{A_\omega D_\omega n \eps_0\inv}}{\eta \eps^3} \left( F(\wv_0) - F^* + \eps_0 \right)^{3/2}
\right) \, .
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
First note that $\sum_{k=1}^{K} \eps_k \le \eps_0 \sum_{k=1}^K k\inv \le 4 \eps_0 \log K$
for $K \ge 2$. Let $\Delta F_0 := F(\wv_0) - F^*$ and use shorthand $A, D$ for $A_\omega, D_\omega$ respectively.
From Thm.~\ref{thm:pl:outer-loop}, the number $K$ of prox-linear iterations required to find a $\wv$ such that
$\norma{2}{\bm{\varrho}_\eta(\wv)} \le \eps$ must satisfy
\begin{align*}
\frac{2}{\eta K} \left( \Delta F_0 + 4\eps_0 \log K \right) \le \eps \,.
\end{align*}
For this, it suffices to have (see e.g., Lemma~\ref{lem:c:helper_logx})
\begin{align*}
K \ge \frac{4(\Delta F_0 + 4 \eps_0)}{\eta\eps^2} \log\left( \frac{4(\Delta F_0 + 4 \eps_0)}{\eta\eps^2} \right) \,.
\end{align*}
Before we can invoke Prop.~\ref{prop:c:total_compl_svrg_sc},
we need to bound the dependence of each inner loop on its warm start:
$F_\eta(\wv_{k-1} ; \wv_{k-1}) - F_\eta(\wv_{k}^* ; \wv_{k-1})$ in terms of problem parameters,
where $\wv_k^* = \argmin_{\wv} F_\eta(\wv ; \wv_{k-1})$
is the result of an exact prox-linear step.
We note that $F_\eta(\wv_{k-1} ; \wv_{k-1}) = F(\wv_{k-1}) \le F(\wv_0)$, by Line~\ref{line:pl:algo:accept}
of Algo.~\ref{algo:prox-linear}.
Moreover, from $\eta \le 1/L$ and Asmp.~\ref{asmp:pl:upper-bound}, we have,
\begin{align*}
F_\eta(\wv_k^* ; \wv_{k-1}) &= \frac{1}{n} \sum_{i=1}^n h \big( \gv\pow{i}(\wv_{k-1}) + \grad \gv\pow{i}(\wv_{k-1})(\wv_k^* - \wv_{k-1}) \big)
+ \frac{\lambda}{2}\normasq{2}{\wv_k^*} + \frac{1}{2\eta} \normasq{2}{\wv_k^* - \wv_{k-1}} \\
&\ge \frac{1}{n} \sum_{i=1}^n h \big( \gv\pow{i}(\wv_{k-1}) + \grad \gv\pow{i}(\wv_{k-1})(\wv_k^* - \wv_{k-1}) \big)
+ \frac{\lambda}{2}\normasq{2}{\wv_k^*} + \frac{L}{2} \normasq{2}{\wv_k^* - \wv_{k-1}} \\
&\ge \frac{1}{n} \sum_{i=1}^n h \big( \gv\pow{i}(\wv_k^*) \big)
+ \frac{\lambda}{2}\normasq{2}{\wv_k^*} \\
&= F(\wv_k^*) \ge F^* \,.
\end{align*}
Thus, we bound $F_\eta(\wv_{k-1} ; \wv_{k-1}) - F_\eta(\wv_{k}^* ; \wv_{k-1}) \le \Delta F_0$.
We now invoke Prop.~\ref{prop:c:total_compl_svrg_sc} and collect all constants and terms logarithmic in
$n$, $\eps\inv, \eps_0 \inv$, $\Delta F_0$, $\eta\inv$, $A_\omega D_\omega$ in $\mcT, \mcT', \mcT''$.
We note that all terms in the logarithm in Prop.~\ref{prop:c:total_compl_svrg_sc} are logarithmic in the problem parameters here.
Letting $N_k$ be the number of
SVRG iterations required for iteration $k$, we get,
\begin{align*}
\expect[N] &= \sum_{k=1}^K \expect[N_k]
\le \sum_{k=1}^K \left( n + \sqrt{\frac{\eta A D n}{\eps_k}} \right) \, \mcT \\
&\le \left[ nK + \sqrt{\frac{\eta A D n}{\eps_0}} \left( \sum_{k=1}^K \sqrt{k} \right) \right] \, \mcT \\
&\le \left[ nK + \sqrt{\frac{\eta A D n}{\eps_0}}\, K^{3/2} \right] \, \mcT' \\
&\le \left[ \frac{n}{\eta \eps^2} (\Delta F_0 + \eps_0)
+ \sqrt{\frac{\eta A D n}{\eps_0}}\, \left( \frac{\Delta F_0 + \eps_0}{\eta \eps^2} \right)^{3/2}
\right] \, \mcT'' \\
&= \left[ \frac{n}{\eta \eps^2} (\Delta F_0 + \eps_0)
+ \frac{\sqrt{ADn}}{\eta \eps^3} \frac{(\Delta F_0 + \eps_0)^{3/2}}{\sqrt{\eps_0}}
\right] \, \mcT'' \,.
\end{align*}
\end{proof}
\subsection{Some Helper Lemmas} \label{subsec:a:catalyst:helper}
The first lemma is a property of the squared Euclidean norm from \citet[Lemma 5]{lin2017catalyst},
which we restate here.
\begin{lemma}\label{lem:c:helper:quadratic}
For any vectors, $\wv, \zv, \rv \in \reals^d$, we have, for any $\theta > 0$,
\begin{align*}
\normsq{\wv - \zv} \ge (1-\theta) \normsq{\wv - \rv} + \left( 1 - \frac{1}{\theta} \right) \normsq{\rv - \zv} \,.
\end{align*}
\end{lemma}
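For completeness, we note that the inequality follows in one line by expanding the square and
lower bounding the cross term with Young's inequality,
$2 \langle \wv - \rv, \rv - \zv \rangle \ge -\theta \normsq{\wv - \rv} - \theta\inv \normsq{\rv - \zv}$:
\begin{align*}
\normsq{\wv - \zv} = \normsq{\wv - \rv} + \normsq{\rv - \zv} + 2 \langle \wv - \rv, \rv - \zv \rangle
\ge (1-\theta) \normsq{\wv - \rv} + \left( 1 - \frac{1}{\theta} \right) \normsq{\rv - \zv} \,.
\end{align*}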
The next lemmas consider rates of the sequences $(\alpha_k)$ and $(A_k)$ under different recursions.
\begin{lemma} \label{lem:c:A_k:const_kappa}
Define a sequence $(\alpha_k)_{k \ge 0}$ as
\begin{align*}
\alpha_0 &= \frac{\sqrt 5 - 1}{2} \\
\alpha_k^2 &= (1 - \alpha_k) \alpha_{k-1}^2 \,.
\end{align*}
Then this sequence satisfies
\begin{align*}
\frac{\sqrt 2}{k+3} \le \alpha_k \le \frac{2}{k+3} \,.
\end{align*}
Moreover, $A_k := \prod_{j=0}^k (1-\alpha_j)$ satisfies
\begin{align*}
\frac{2}{(k+3)^2} \le A_k \le \frac{4}{(k+3)^2} \,.
\end{align*}
\end{lemma}
\begin{proof}
Notice that $\alpha_0$ satisfies $\alpha_0^2 = 1 - \alpha_0$.
Further, it is clear from definition that $\alpha_k \in (0, 1)\, \forall k \ge 0$.
Hence, we can define a sequence $(b_k)_{k\ge 0}$ such that $b_k := 1/\alpha_k$.
It satisfies the recurrence, $b_k^2 - b_k = b_{k-1}^2$ for $k \ge 1$,
or in other words, $b_k = \tfrac{1}{2}\left( 1 + \sqrt{1 + 4 b_{k-1}^2} \right)$.
From this we get,
\begin{align*}
b_k &\ge b_{k-1} + \frac{1}{2} \ge b_0 + \frac{k}{2} \ge \frac{3}{2} + \frac{k}{2} \,.
\end{align*}
since $b_0 = \frac{\sqrt 5 + 1}{2}$. This gives us the upper bound on $\alpha_k$.
Moreover, unrolling the recursion,
\begin{align} \label{eq:c:helper:2_}
\alpha_k^2 = (1- \alpha_k) \alpha_{k-1}^2 = A_k \frac{\alpha_0^2}{1 - \alpha_0} = A_k \, .
\end{align}
Since $\alpha_k \le 2/(k+3)$, \eqref{eq:c:helper:2_} yields the upper bound on $A_k$.
The upper bound on $\alpha_k$ again gives us,
\begin{align*}
A_k \ge \prod_{i=0}^k \left( 1 - \frac{2}{i+3} \right) = \frac{2}{(k+2)(k+3)} \ge \frac{2}{(k+3)^2} \,,
\end{align*}
to get the lower bound on $A_k$. Invoking \eqref{eq:c:helper:2_} again to obtain the lower bound on $\alpha_k$
completes the proof.
\end{proof}
The next lemma considers the evolution of the sequences $(\alpha_k)$ and $(A_k)$ with a different recursion.
\begin{lemma} \label{lem:c:A_k:inc_kappa}
Consider a sequence $(\alpha_k)_{k\ge 0}$ defined by $\alpha_0 = \frac{\sqrt{5}- 1}{2}$, and
$\alpha_k$, for $k \ge 1$, as the non-negative root of
\begin{align*}
\frac{\alpha_k^2}{1 - \alpha_k} = \alpha_{k-1}^2 \frac{k}{k+1} \,.
\end{align*}
Further, define
\begin{align*}
A_k = \prod_{i=0}^k ( 1- \alpha_i) \, .
\end{align*}
Then, we have for all $k\ge 0$,
\begin{align}
\frac{1}{k+1} \left(1 - \frac{1}{\sqrt2} \right) \le A_k \le \frac{1}{k+2} \, .
\end{align}
\end{lemma}
\begin{proof}
Define a sequence $(b_k)_{k\ge 0}$ such that $b_k = 1/\alpha_k$, for each $k$. This is well-defined because
$\alpha_k \neq 0$, which may be verified by induction. This sequence satisfies the recursion for $k\ge 1$:
$b_k (b_k -1) = \left( \frac{k+1}{k} \right)b_{k-1}^2$.
From this recursion, we get,
\begin{align}
\nonumber
b_k &= \frac{1}{2} \left( 1 + \sqrt{1 + 4 b_{k-1}^2 \left( \frac{k+1}{k} \right)} \right) \\
\nonumber
&\ge \frac{1}{2} + b_{k-1} \sqrt\frac{k+1}{k} \\
\nonumber
&\ge \frac{1}{2}\left( 1 + \sqrt{\frac{k+1}{k}} + \cdots + \sqrt{\frac{k+1}{2}} \right) + b_0\sqrt{k+1} \\
\nonumber
&= \frac{\sqrt{k+1}}{2} \left( 1/\sqrt 2 + \cdots + 1/\sqrt{k+1} \right) + b_0 \sqrt{k+1} \\
&\stackrel{(*)}{\ge} \sqrt{k+1}\left( \sqrt{k+2} + b_0 - \sqrt 2 \right) \,,
\end{align}
where $(*)$ followed from noting that $1/\sqrt{2}+\cdots+1/\sqrt{k+1} \ge \int_2^{k+2} \frac{dx}{\sqrt x}
= 2(\sqrt{k+2}-\sqrt 2)$\,.
Since $b_0 = 1/\alpha_0 = \frac{\sqrt{5} + 1}{2} > \sqrt 2$, we have, for $k \ge 1$,
\begin{align}
\alpha_k \le \frac{1}{\sqrt{k+1}(\sqrt{k+2}+ b_0 - \sqrt{2})} \le \frac{1}{\sqrt{k+1}\sqrt{k+2}} \,.
\end{align}
This relation also clearly holds for $k=0$. Next, we claim that
\begin{align}
A_k = (k+1) \alpha_k^2 \le \frac{k+1}{(\sqrt{k+1}\sqrt{k+2})^2} = \frac{1}{k+2}\, .
\end{align}
Indeed, this is true because
\begin{align*}
\alpha_k^2 = (1 - \alpha_k) \alpha_{k-1}^2 \frac{k}{k+1} = A_k \frac{\alpha_0^2}{1 - \alpha_0} \frac{1}{k+1}
= \frac{A_k}{k+1} \,.
\end{align*}
For the lower bound, we have,
\begin{align*}
A_k = \prod_{i=0}^k (1 - \alpha_i)
\ge \prod_{i=0}^k \left(1 - \frac{1}{\sqrt{i+1}\sqrt{i+2}} \right)
\ge \left( 1 - \frac{1}{\sqrt2} \right) \prod_{i=1}^k \left(1 - \frac{1}{i+1} \right)
= \frac{1 - \frac{1}{\sqrt{2}}}{k+1} \, .
\end{align*}
\end{proof}
\begin{lemma} \label{lem:c:helper_logx}
Fix some $\eps > 0$.
If $k \ge \frac{2}{\eps} \log \frac{2}{\eps}$, then we have that
$\frac{\log k}{k} \le \eps$.
\end{lemma}
\begin{proof}
We have, since $\log x \le x$ for $x > 0$,
\begin{align*}
\frac{\log k }{k} \le \frac{\log\frac{2}{\eps} + \log\log\frac{2}{\eps}}{\frac{2}{\eps} \log\frac{2}{\eps}}
= \frac{\eps}{2} \left( 1 + \frac{\log\log\frac{2}{\eps}}{\log\frac{2}{\eps}} \right) \le \eps \,.
\end{align*}
\end{proof}
\subsection{Related Work} \label{sec:related_work}
\begin{table*}[t!]
\caption{\small{Convergence rates given in terms of the number of calls to various oracles for different optimization algorithms
on the learning problem~\eqref{eq:c:main:prob} in
case of structural support vector machines~\eqref{eq:pgm:struc_hinge}.
The rates are specified in terms of the target accuracy $\eps$,
the number of training examples $n$, the
regularization $\lambda$, the
size of the label space~$\scriptsize{\abs\mcY}$, the
max feature norm $R=\max_i \norma{2}{\Phi(\xv\pow{i},\yv) - \Phi(\xv\pow{i}, \yv\pow{i})}$ and $\widetilde R \ge R$
(see Remark~\ref{remark:smoothing:l2vsEnt} for explicit form).
The rates are specified up to constants and factors logarithmic in the
problem parameters. The dependence on the initial error is ignored.
* denotes algorithms that make $\bigO(1)$ oracle calls per iteration.
\vspace{2mm}
}}
\label{tab:rates}
\footnotesize\setlength{\tabcolsep}{2pt}
\begin{minipage}{.32\linewidth}
\centering
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|c|c|}
\hline
\rule{0pt}{10pt}
\textbf{Algo.} (\textit{exp} oracle) & \textbf{\# Oracle calls} \\[0.45ex] \hline\hline
\rule{0pt}{15pt}
\begin{tabular}{c} Exponentiated \\ gradient* \\ \citep{collins2008exponentiated}\end{tabular} &
$\dfrac{(n + \log |\mcY|) R^2 }{\lambda \eps}$ \\[2.54ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Excessive gap \\ reduction \\ \citep{zhang2014accelerated} \end{tabular} &
$n R \sqrt{\dfrac{\log |\mcY|}{\lambda \eps}}$ \\[2.54ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Prop.~\ref{prop:c:total_compl_svrg_main}*, \\ entropy smoother \end{tabular}
& $\sqrt{\dfrac{nR^2 \log\abs\mcY}{\lambda \eps}}$ \\[2.54ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Prop.~\ref{prop:c:total_compl_sc:dec_smoothing_main}*, \\ entropy smoother \end{tabular}
& $n + {\dfrac{R^2 \log\abs\mcY}{\lambda \eps}}$ \\[2.54ex] \hline
\end{tabular}
\end{adjustbox}
\end{minipage} \hspace{2.6mm}%
\begin{minipage}{.35\linewidth}
\centering
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|c|c|}
\hline
\rule{0pt}{10pt}
\textbf{Algo.} (\textit{max} oracle) & \textbf{\# Oracle calls} \\[0.45ex] \hline\hline
\rule{0pt}{15pt}
\begin{tabular}{c} BMRM \\ \citep{teo2009bundle}\end{tabular} &
$\dfrac{n R^2}{\lambda \eps}$ \\[2ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} QP 1-slack \\ \citep{joachims2009cutting} \end{tabular}&
$\dfrac{n R^2}{\lambda \eps}$ \\[2ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Stochastic \\ subgradient* \\ \citep{shalev2011pegasos} \end{tabular}&
$\dfrac{R^2}{\lambda \eps}$ \\[2ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Block-Coordinate \\ Frank-Wolfe* \\ \citep{lacoste2012block} \end{tabular} &
$n + \dfrac{R^2}{\lambda \eps}$ \\[2ex] \hline
\end{tabular}
\end{adjustbox}
\end{minipage} \hspace{2.2mm}
\begin{minipage}{.25\linewidth}
\centering
\medskip
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|c|c|}
\hline
\rule{0pt}{0pt}
\begin{tabular}{c} \textbf{Algo.} \\ (\textit{top-$K$} oracle) \end{tabular}
& \textbf{\# Oracle calls} \\[0.45pt] \hline\hline
\rule{0pt}{12pt}
\begin{tabular}{c} Prop.~\ref{prop:c:total_compl_svrg_main}*, \\ $\ell_2^2$ smoother \end{tabular}
& $\sqrt{\dfrac{n{\widetilde R}^2}{\lambda \eps}}$ \\[2.54ex] \hline
\rule{0pt}{12pt}
\begin{tabular}{c} Prop.~\ref{prop:c:total_compl_sc:dec_smoothing_main}*, \\ $\ell_2^2$ smoother \end{tabular}
& $n + {\dfrac{{\widetilde R}^2}{\lambda \eps}}$ \\[2.45ex] \hline
\end{tabular}
\end{adjustbox}
\end{minipage}%
\end{table*}
\paragraph{Optimization for Structural Support Vector Machines}
Table~\ref{tab:rates} gives an overview of different optimization algorithms designed for structural
support vector machines.
Early works~\citep{taskar2004max,tsochantaridis2004support,joachims2009cutting,teo2009bundle}
considered batch dual quadratic optimization (QP) algorithms.
The stochastic subgradient method operated directly
on the non-smooth primal formulation~\citep{ratliff2007approximate,shalev2011pegasos}.
More recently,~\citet{lacoste2012block} proposed a block coordinate Frank-Wolfe (BCFW) algorithm to
optimize the dual formulation of structural support vector machines; see also~\citet{osokin2016minding} for variants and extensions.
Saddle-point or primal-dual approaches include
the mirror-prox algorithm~\citep{taskar2006structured,cox2014dual,he2015semi}.~\citet{palaniappan2016stochastic} propose an incremental optimization algorithm for saddle-point problems. However, it is unclear how to extend it to the structured prediction problems considered here. Incremental optimization algorithms for conditional random fields were proposed by~\citet{schmidt2015non}.
We focus here on primal optimization algorithms in order to be able to
train structured prediction models with affine or nonlinear mappings
with a unified approach, and on incremental optimization algorithms which can scale to large datasets.
\paragraph{Inference}
The ideas of dynamic programming inference in tree structured graphical models have been around
since the pioneering works of \citet{pearl1988probabilistic} and \citet{dawid1992applications}.
Other techniques emerged based on graph cuts \citep{greig1989exact,ishikawa1998segmentation},
bipartite matchings \citep{cheng1996maximum,taskar2005discriminative} and search
algorithms \citep{daume2005learning,lampert2008beyond,lewis2014ccg,he2017deep}.
For graphical models that admit no such discrete structure,
techniques based on loopy belief propagation~\citep{mceliece1998turbo,murphy1999loopy},
linear programming (LP)~\citep{schlesinger1976syntactic}, dual decomposition \citep{johnson2008convex}
and variational inference \citep{wainwright2005map,wainwright2008graphical}
gained popularity.
\paragraph{Top-$K$ Inference}
Smooth inference oracles with $\ell_2^2$ smoothing echo older heuristics
in speech and language processing~\citep{jurafsky2014speech}.
Combinatorial algorithms for top-$K$ inference have been studied extensively by the graphical models community under the name
``$M$-best MAP''.
\citet{seroussi1994algorithm} and \citet{nilsson1998efficient}
first considered the problem of finding the $K$ most probable configurations
in a tree structured graphical model.
Later, \citet{yanover2004finding} presented the Best Max-Marginal First algorithm which solves this problem with access only to
an oracle that computes max-marginals.
We also use this algorithm in Sec.~\ref{subsec:smooth_inference_loopy}.
\citet{fromer2009lp} study top-$K$ inference for LP relaxation, while
\citet{batra2012efficient} considers the dual problem to exploit graph structure.
\citet{flerova2016searching} study top-$K$ extensions of the popular $\text{A}^\star$
and branch and bound search algorithms in the context of graphical models.
Other related approaches include diverse $K$-best solutions \citep{batra2012diverse} and
finding $K$-most probable modes \citep{chen2013computing}.
\paragraph{Smoothing Inference}
Smoothing for inference was used to speed up iterative algorithms for continuous relaxations.
\citet{johnson2008convex} considered smoothing dual decomposition inference using the entropy smoother,
followed by \citet{jojic2010accelerated} and \citet{savchynskyy2011study} who studied its theoretical properties.
\citet{meshi2012convergence} expand on this study to include $\ell_2^2$ smoothing.
Explicitly smoothing discrete inference algorithms in order to smooth the learning problem was considered by
\citet{zhang2014accelerated} and \citet{song2014learning} using the entropy and $\ell_2^2$ smoothers respectively.
The $\ell_2^2$ smoother was also used by \citet{martins2016softmax}.
\citet{hazan2016blending} consider the approach of blending learning and inference, instead of using inference
algorithms as black-box procedures.
Ideas related to ours appear in the independent works~\citep{mensch2018differentiable,niculae2018sparsemap}.
These works partially overlap with ours, but adopt different perspectives,
making them complementary to each other. For instance,~\citet{mensch2018differentiable} proceed
differently when smoothing inference procedures based on dynamic programming.
Moreover, they do not establish complexity bounds
for optimization algorithms making calls to the resulting smooth inference oracles.
We define smooth inference oracles in the context
of black-box first-order optimization and
establish worst-case complexity bounds for incremental optimization algorithms making calls to these oracles.
Indeed, we relate the amount of smoothing, controlled by $\mu$, to the resulting complexity of the optimization algorithms relying on smooth inference oracles.
\paragraph{End-to-end Training of Structured Prediction}
The general framework for global training of structured prediction models was introduced by~\citet{bottou1990framework}
and applied to handwriting recognition by~\citet{bengio1995lerec} and to document processing by~\citet{bottou1997global}.
This approach, now called ``deep structured prediction'', was used, e.g.,
by \citet{collobert2011natural} and \citet{belanger2016structured}.
\subsection{Notation}
Vectors are denoted by bold lowercase characters as $\wv \in \reals^d$ while matrices are denoted by bold uppercase characters as $\Am \in \reals^{d \times n}$.
For a matrix $\Am \in \reals^{m \times n}$, define the norm for $\alpha,\beta \in \{1, 2, \infty\}$,
\begin{align} \label{eq:matrix_norm_defn}
\norma{\beta, \alpha}{\Am} = \max\{ \inp{\yv}{\Am\xv} \, | \, \norma{\alpha}{\yv} \le 1 \, , \, \norma{\beta}{\xv} \le 1 \}
\,.
\end{align}
For any function $f: \reals^d \to \reals \cup \{ +\infty \}$, its convex conjugate $f^*:\reals^d \to \reals \cup \{+\infty\}$ is defined as
\begin{align*}
f^*(\zv) = \sup_{\wv \in \reals^d} \left\{ \inp{\zv}{\wv} - f(\wv) \right\} \, .
\end{align*}
A function $f : \reals^d \to \reals$ is said to be $L$-smooth with respect to an arbitrary norm $\norm{\cdot}$
if it is continuously differentiable and its gradient $\grad f$
is $L$-Lipschitz with respect to $\norm{\cdot}$.
When left unspecified, $\norm{\cdot}$ refers to $\norma{2}{\cdot}$.
Given a continuously differentiable map $\gv : \reals^d \to \reals^m$,
its Jacobian $\grad \gv(\wv) \in \reals^{m \times d}$ at $\wv \in \reals^d$
is defined so that its $ij$th entry is $[\grad \gv(\wv)]_{ij} = \partial g_i(\wv) / \partial w_j$
where $g_i$ is the $i$th element of $\gv$ and $w_j$ is the $j$th element of $\wv$.
The vector valued function $\gv : \reals^d \to \reals^m$ is said to be $L$-smooth with respect to $\norm{\cdot}$
if it is continuously differentiable and
its Jacobian $\grad \gv$ is $L$-Lipschitz with respect to $\norm{\cdot}$.
For a vector $\zv \in \reals^m$, $z_{(1)} \ge \cdots \ge z_{(m)}$ refer to its components enumerated in non-increasing order
where ties are broken arbitrarily.
Further, we let $\zv_{[k]} = (z_{(1)}, \cdots, z_{(k)}) \in \reals^k$ denote the vector of the $k$ largest components of $\zv$.
We denote by $\Delta^{m-1}$ the standard probability simplex in $\reals^{m}$.
When the dimension is clear from the context, we shall simply denote it by $\Delta$.
Moreover, for a positive integer $p$, $[p]$ refers to the set $\{1, \ldots,p\}$.
Lastly, $\widetilde \bigO$ in the big-$\bigO$ notation hides factors logarithmic
in problem parameters.
\subsection{Structural Hinge Loss}
On a given input-output pair $(\xv, \yv)$, the error made by the inference procedure with
score function $\phi(\cdot, \cdot; \wv)$ is measured by a task loss $\ell \big( \yv, \yv^*(\xv; \wv) \big)$ such as the Hamming loss.
The learning procedure would then aim to find the best parameter $\wv$ that minimizes
the loss on a given dataset of input-output training examples.
However, the resulting problem is piecewise constant and hard to optimize.
Instead, \citet{altun2003hidden,taskar2004max,tsochantaridis2004support} propose to minimize a majorizing surrogate of the task loss,
called the structural hinge loss defined on an input-output pair $(\xv\pow{i}, \yv\pow{i})$ as
\begin{align}\label{eq:pgm:struc_hinge}
f\pow{i}(\wv) = \max_{\yv \in \mcY}
\left\{ \phi(\xv\pow{i}, \yv ; \wv) + \ell(\yv\pow{i}, \yv) \right\}
- \phi(\xv\pow{i}, \yv\pow{i} ; \wv) = \max_{\yv \in \mcY} \psi\pow{i}(\yv ; \wv) \,,
\end{align}
where $\psi\pow{i}(\yv ; \wv) = \phi(\xv\pow{i}, \yv ; \wv) + \ell(\yv\pow{i}, \yv) - \phi(\xv\pow{i}, \yv\pow{i} ; \wv)$ is the augmented score function.
This approach, known as {\em max-margin structured prediction},
builds upon binary and multi-class support vector machines~\citep{crammer2001algorithmic}, where the term $\ell(\yv\pow{i}, \yv)$ inside the maximization in \eqref{eq:pgm:struc_hinge}
generalizes the notion of margin.
The task loss $\ell$ is assumed to possess appropriate structure
so that the maximization inside \eqref{eq:pgm:struc_hinge}, known as {\em loss augmented inference},
is no harder than the inference problem in \eqref{eq:pgm:inference}.
When considering a fixed input-output pair $(\xv\pow{i}, \yv\pow{i})$,
we drop the index with respect to the sample $i$ and consider the
structural hinge loss as
\begin{equation}\label{eq:struct_hinge}
f(\wv) = \max_{\yv \in \mcY} \psi(\yv;\wv) \,.
\end{equation}
When the map $\wv \mapsto \psi(\yv ; \wv)$ is affine,
the structural hinge loss $f$ and the objective $F$ from \eqref{eq:c:main:prob} are both convex;
we refer to this case as the structural support vector machine. When $\wv \mapsto \psi(\yv ; \wv)$
is a nonlinear but smooth map, then the structural hinge loss $f$ and the objective $F$ are nonconvex.
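For concreteness, the short Python snippet below evaluates the structural hinge loss~\eqref{eq:pgm:struc_hinge} on a toy multiclass problem where $\mcY$ is small enough to enumerate; the linear score, the 0-1 task loss and all numerical values are illustrative assumptions rather than part of the framework above.
\begin{verbatim}
import numpy as np

# Toy multiclass example: Y = {0, 1, 2}, linear scores phi(x, y; w) = <w[y], x>.
# The 0-1 task loss and the random data are illustrative assumptions.
def structural_hinge(w, x, y_true):
    scores = w @ x                                   # phi(x, y; w) for every y
    task_loss = np.ones_like(scores)
    task_loss[y_true] = 0.0
    augmented = scores + task_loss - scores[y_true]  # psi(y; w) for every y
    return augmented.max()                           # f(w) = max_y psi(y; w)

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 5))   # one weight vector per label
x = rng.standard_normal(5)
print(structural_hinge(w, x, y_true=1))
\end{verbatim}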
\subsection{Smoothing Strategy}
A convex, non-smooth function $h$ can be smoothed by taking its infimal convolution with a smooth function~\citep{beck2012smoothing}.
We now recall its dual representation, which \citet{nesterov2005smooth}
first used to relate the amount of smoothing to optimal complexity bounds.
\begin{definition} \label{defn:smoothing:inf-conv}
For a given convex function $h:\reals^m \to \reals$, a smoothing function $\omega: \dom h^* \to \reals$ which is
1-strongly convex with respect to $\norma{\alpha}{\cdot}$ (for $\alpha \in \{1,2\}$),
and a parameter $\mu > 0$, define
\begin{align*}
h_{\mu \omega}(\zv) = \max_{\uv \in \dom h^*} \left\{ \inp{\uv}{\zv} - h^*(\uv) - \mu \omega(\uv) \right\}
\end{align*}
as the smoothing of $h$ by $\mu \omega$.
\end{definition}
\noindent
We now state a classical result showing how the parameter $\mu$
controls both the approximation error and the level of the smoothing.
For a proof, see \citet[Thm. 4.1, Lemma 4.2]{beck2012smoothing} or Prop.~\ref{prop:smoothing:difference_of_smoothing}
of Appendix~\ref{sec:a:smoothing}.
\begin{proposition} \label{thm:setting:beck-teboulle}
Consider the setting of Def.~\ref{defn:smoothing:inf-conv}.
The smoothing $h_{\mu \omega}$ is continuously differentiable and its gradient, given by
\[
\grad h_{\mu \omega}(\zv) = \argmax_{\uv \in \dom h^*} \left\{ \inp{\uv}{\zv} - h^*(\uv) - \mu \omega(\uv) \right\}
\]
is $1/\mu$-Lipschitz with respect to $\normad{\alpha}{\cdot}$.
Moreover, letting $h_{\mu \omega} \equiv h$ for $\mu = 0$, the smoothing satisfies, for all $\mu_1 \ge \mu_2 \ge 0$,
\begin{align*}
(\mu_1 - \mu_2) \inf_{\uv \in \dom h^*} \omega(\uv)
\le
h_{\mu_2 \omega}(\zv) - h_{\mu_1 \omega}(\zv)
\le
(\mu_1 - \mu_2) \sup_{\uv \in \dom h^*} \omega(\uv) \,.
\end{align*}
\end{proposition}
\paragraph{Smoothing the Structural Hinge Loss}
We rewrite the structural hinge loss as a composition
\begin{equation}\label{eq:mapping_def}
\gv:\
\begin{cases}
\reals^d &\to \reals^m \\
\wv &\mapsto (\psi(\yv;\wv))_{\yv \in \mathcal{Y}},
\end{cases} \, \qquad h: \begin{cases}
\reals^{m} &\to \reals \\
\zv &\mapsto \max_{i \in [m]} z_i,
\end{cases}
\end{equation}
where $m= |\mcY|$ so that the structural hinge loss reads
\begin{align} \label{eq:pgm:struc_hinge_vec}
f(\wv) = h \circ \gv(\wv)\,.
\end{align}
We smooth the structural hinge loss~\eqref{eq:pgm:struc_hinge_vec} by simply smoothing the
non-smooth max function $h$ as
\begin{align*}
f_{\mu \omega} = h_{\mu \omega} \circ \gv.
\end{align*}
When $\gv$ is smooth and Lipschitz continuous,
$f_{\mu \omega}$ is a smooth approximation of the structural hinge loss, whose gradient is readily given by the chain rule.
In particular, when $\gv$ is an affine map $\gv(\wv) = \Am\wv + \bv$,
it follows that
$f_{\mu \omega}$ is $(\normasq{\beta,\alpha}{\Am} / \mu)$-smooth with respect to $\norma{\beta}{\cdot}$
(cf. Lemma~\ref{lemma:smoothing:composition} in Appendix~\ref{sec:a:smoothing}).
Furthermore, for $\mu_1 \ge \mu_2 \ge 0$, we have,
\[
(\mu_1 - \mu_2) \min_{\uv \in \Delta^{m-1}} \omega(\uv) \le f_{\mu_2\omega}(\wv) - f_{\mu_1 \omega}(\wv)
\le (\mu_1 - \mu_2) \max_{\uv \in \Delta^{m-1}} \omega(\uv) \,.
\]
\subsection{Smoothing Variants}
In the context of smoothing the max function, we now describe two popular choices for the smoothing function $\omega$,
followed by computational considerations.
\subsubsection{Entropy and $\ell_2^2$ smoothing}
When $h$ is the max function, the smoothing operation can be computed analytically for
the \emph{entropy} smoother and the $\ell_2^2$ smoother, denoted respectively as
\begin{align*}
-H(\uv) := \inp{\uv}{\log \uv} \qquad \mbox{and} \qquad \ell_2^2(\uv) := \tfrac{1}{2}(\normasq{2}{\uv} - 1) \,.
\end{align*}
These lead respectively to the log-sum-exp function~\citep[Lemma 4]{nesterov2005smooth}
\[
h_{-\mu H}(\zv) = \mu \log\left(\sum_{i=1}^{m}e^{z_i/\mu}\right), \quad \nabla h_{-\mu H}(\zv) = \left[\frac{e^{z_i/\mu}}{\sum_{j=1}^{m}e^{z_j/\mu}}\right]_{i=1,\ldots,m} \,,
\]
and an orthogonal projection onto the simplex,
\[
h_{\mu \ell_2^2}(\zv) = \langle \zv, \operatorname{proj}_{\Delta^{m-1}}(\zv/\mu) \rangle
- \tfrac{\mu}{2}\|\operatorname{proj}_{\Delta^{m-1}}(\zv/\mu)\|^2 + \tfrac{\mu}{2},
\quad \nabla h_{\mu \ell_2^2}(\zv) = \operatorname{proj}_{\Delta^{m-1}}(\zv/\mu) \,.
\]
Furthermore, the following holds for all $\mu_1 \ge \mu_2 \ge 0$ from Prop.~\ref{thm:setting:beck-teboulle}:
\[
0 \le h_{-\mu_1 H}(\zv) - h_{-\mu_2 H}(\zv) \le (\mu_1 - \mu_2) \log m, \quad \text{and,} \quad
0 \le h_{\mu_1 \ell_2^2}(\zv) - h_{\mu_2 \ell_2^2}(\zv) \le \tfrac{1}{2}(\mu_1-\mu_2) \,.
\]
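The following NumPy sketch gives a numerical illustration of the two smoothers applied to the max function; the helper \texttt{project\_simplex} implements the standard sort-based Euclidean projection, and the snippet is an illustration of the formulas above rather than the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def project_simplex(v):
    # Sort-based Euclidean projection of v onto the probability simplex.
    u = np.sort(v)[::-1]
    cssv = np.cumsum(u) - 1.0
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - cssv / ks > 0)[0][-1]   # last index with a positive gap
    theta = cssv[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def entropy_smoothed_max(z, mu):
    # h_{-mu H}(z) = mu * log-sum-exp(z / mu); its gradient is a softmax.
    s = z / mu
    m = s.max()                                  # shift for numerical stability
    w = np.exp(s - m)
    return mu * (np.log(w.sum()) + m), w / w.sum()

def l2_smoothed_max(z, mu):
    # h_{mu l2^2}(z) and its gradient, the projection of z / mu onto the simplex.
    p = project_simplex(z / mu)
    return z @ p - 0.5 * mu * (p @ p) + 0.5 * mu, p

z = np.array([1.0, 0.9, 0.2, -0.5])
print(entropy_smoothed_max(z, mu=0.1))
print(l2_smoothed_max(z, mu=0.1))   # note the sparse gradient
\end{verbatim}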
\subsubsection{Top-$K$ Strategy}
Though the gradient of the composition $f_{\mu \omega} = h_{\mu \omega} \circ \gv$
can be written using the chain rule, its actual computation for structured prediction problems
involves computing $\grad \gv$ over all $m = \abs{\mcY}$ of its components, which may be intractable.
However, in the case of $\ell_2^2$ smoothing, projections onto the simplex are sparse, as pointed out by the following proposition.
\begin{proposition} \label{prop:smoothing:proj-simplex-1}
Consider the Euclidean projection
$\uv^* = \argmin_{\uv \in \Delta^{m-1}}\normasq{2}{\uv -{\zv}/{\mu}} $ of $\zv/\mu \in \reals^m$ onto the simplex,
where $\mu > 0$.
The projection $\uv^*$ has exactly $k \in [m]$ non-zeros if and only if
\begin{align} \label{eq:smooth:proj:simplex_1_statement}
\sum_{i=1}^k \left(z_{(i)} - z_{(k)} \right) < \mu
\le \sum_{i=1}^k \left(z_{(i)} - z_{(k+1)} \right) \,,
\end{align}
where $z_{(1)}\ge \cdots \ge z_{(m)}$ are the components of $\zv$ in non-increasing order
and $z_{(m+1)} := -\infty$.
In this case, $\uv^*$ is given by
\begin{align*}
u_i^* = \max \bigg\{0, \, \tfrac{1}{k\mu }\sum_{j=1}^k \big( z_i - z_{(j)} \big) + \tfrac{1}{k} \bigg\} \,.
\end{align*}
\end{proposition}
\begin{proof}
The projection $\uv^*$ satisfies $u^*_i = (z_i/\mu + \rho^*)_+$,
where $\rho^*$ is the unique solution of $\rho$ in the equation
\begin{align} \label{eq:smooth:proj:simplex_1}
\sum_{i=1}^m \left( \frac{z_i}{\mu} + \rho \right)_+ = 1 \,,
\end{align}
where $\alpha_+ = \max\{0, \alpha\}$. See, e.g.,
\citet{held1974validation} for a proof of this fact.
Note that $z_{(i)}/\mu + \rho^* \le 0$
implies that $z_{(j)}/\mu + \rho^* \le 0$ for all $j \ge i$. Therefore
$\uv^*$ has $k$ non-zeros if and only if $z_{(k)}/\mu + \rho^* > 0$ and $z_{(k+1)}/\mu + \rho^* \le 0$.
Now suppose that $\uv^*$ has exactly $k$ non-zeros. We can then solve \eqref{eq:smooth:proj:simplex_1} to obtain $\rho^* = \varphi_k(\zv/\mu)$, which is defined as
\begin{align} \label{eq:smooth:proj:simplex_1b}
\varphi_k\left( \frac \zv \mu \right) := \frac{1}{k} - \frac{1}{k} \sum_{i=1}^k \frac{z_{(i)}}{\mu} \,.
\end{align}
Plugging in the value of $\rho^*$ in $z_{(k)}/\mu + \rho^* > 0$ gives $\mu > \sum_{i=1}^k \left(z_{(i)} - z_{(k)} \right)$.
Likewise, $z_{(k+1)}/\mu + \rho^* \le 0$ gives $\mu \le \sum_{i=1}^k \left(z_{(i)} - z_{(k+1)} \right)$.
Conversely assume~\eqref{eq:smooth:proj:simplex_1_statement} and let $\widehat \rho = \varphi_k(\zv/\mu)$.
Eq. \eqref{eq:smooth:proj:simplex_1_statement} can be written as
$z_{(k)}/\mu + \widehat\rho > 0$ and $z_{(k+1)}/\mu + \widehat\rho \le 0$. Furthermore, we verify that
$\widehat\rho$ satisfies Eq.~\eqref{eq:smooth:proj:simplex_1}, and so $\widehat \rho = \rho^*$ is its unique root.
It follows, therefore, that the sparsity of $\uv^*$ is $k$.
\end{proof}
Thus, the projection of $\zv /\mu$ onto the simplex picks out some number $K_{\zv/\mu}$
of the largest entries of $\zv / \mu$; we refer to this number as the sparsity of
$\operatorname{proj}_{\Delta^{m-1}}(\zv/\mu)$.
This fact motivates the {\em top-$K$ strategy}: given $\mu>0$, fix an integer $K$ {\em a priori} and
consider as surrogates for $h_{\mu\ell_2^2}$ and $\grad h_{\mu\ell_2^2}$ respectively
\[
h_{\mu, K}(\zv) := \max_{\uv \in \Delta^{K-1}} \left\{ \inp*{\zv_{[K]}}{\uv} - \mu \ell_2^2(\uv) \right\}\,,
\quad \text{and,} \quad
\widetilde \grad h_{\mu, K}(\zv) := \Omega_K(\zv)\T\operatorname{proj}_{\Delta^{K-1}}\left( \frac{\zv_{[K]}}{\mu} \right) \,,
\]
where $\zv_{[K]}$ denotes the vector composed of the $K$ largest entries of $\zv$ and
$
\Omega_K : \reals^m \to \{0,1\}^{K \times m}
$
defines their extraction, i.e.,
$\Omega_K(\zv) = (\ev_{j_1}\T, \ldots, \ev_{j_K} \T)^\top\in \{0, 1\}^{K \times m}$
where $j_1, \cdots, j_K$ satisfy $z_{j_1} \ge \cdots \ge z_{j_K}$
such that
$ \zv_{[K]} = \Omega_K(\zv) \zv$ \,.
A surrogate of the $\ell_2^2$ smoothing is then given by
\begin{align} \label{eq:smoothing:fmuK_defn}
f_{\mu, K} := h_{\mu, K} \circ \gv \,,
\quad\text{and,}\quad
\widetilde \grad f_{\mu, K}(\wv) := \grad \gv(\wv)\T \widetilde \grad h_{\mu, K}(\gv(\wv)) \,.
\end{align}
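The top-$K$ strategy can be illustrated by the short NumPy sketch below, which extracts the $K$ largest entries of $\zv$, projects them onto $\Delta^{K-1}$ and scatters the result back into $\reals^m$; the helper names are ours (the projection routine from the previous sketch is reproduced for self-containedness), and the snippet is an illustration rather than an optimized implementation.
\begin{verbatim}
import numpy as np

def project_simplex(v):
    # Sort-based Euclidean projection onto the probability simplex.
    u = np.sort(v)[::-1]
    cssv = np.cumsum(u) - 1.0
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - cssv / ks > 0)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def top_k_surrogate(z, mu, K):
    # h_{mu,K}(z) and the surrogate gradient of the l2^2-smoothed max.
    idx = np.argsort(z)[::-1][:K]          # indices of the K largest entries
    zK = z[idx]                            # z_[K]
    u = project_simplex(zK / mu)           # projection onto the (K-1)-simplex
    val = zK @ u - 0.5 * mu * (u @ u) + 0.5 * mu
    grad = np.zeros_like(z)
    grad[idx] = u                          # scatter back: Omega_K(z)^T u
    return val, grad

z = np.array([2.0, 1.95, 0.3, -1.0, 0.1])
print(top_k_surrogate(z, mu=0.2, K=2))
\end{verbatim}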
\paragraph{Exactness of Top-$K$ Strategy}
We say that the top-$K$ strategy is {\em exact} at $\zv$ for $\mu>0$ when it recovers the first order information
of $h_{\mu \ell_2^2}$, i.e. when $ h_{\mu \ell_2^2}(\zv) = h_{\mu, K}(\zv)$ and
$\grad h_{\mu \ell_2^2}(\zv) = \widetilde \grad h_{\mu, K}(\zv)$.
The next proposition outlines when this is the case. Note that if the top-$K$ strategy is exact at $\zv$ for
a smoothing parameter $\mu>0$ then it will be exact at $\zv$ for any $\mu'<\mu$.
\begin{proposition} \label{prop:smoothing:proj-simplex-2}
The top-$K$ strategy is exact at $\zv$
for $\mu>0$ if
\begin{align} \label{eq:smooth:proj:simplex_2}
\mu \le \sum_{i=1}^K \left(z_{(i)} - z_{( {\scriptscriptstyle K}+1)} \right) \,.
\end{align}
Moreover, for any fixed $\zv \in \reals^m$ such that the vector $\zv_{[\scriptscriptstyle K+1]} = \Omega_{K+1}(\zv)\zv$
has at least two unique elements, the top-$K$ strategy is exact at $\zv$ for
all $\mu$ satisfying $0 < \mu \le z_{(1)} - z_{({\scriptscriptstyle K}+1)}$.
\end{proposition}
\begin{proof}
First, we note that the top-$K$ strategy is exact when the sparsity $K_{\zv/\mu}$ of the projection
$\operatorname{proj}_{\Delta^{m-1}}(\zv/\mu)$ satisfies $K_{\zv/\mu} \le K$.
From Prop.~\ref{prop:smoothing:proj-simplex-1}, the condition that
$K_{\zv/\mu} \in \{1, 2, \cdots, K\}$ happens when
\begin{align*}
\mu \in
\bigcup_{k=1}^K \left( \sum_{i=1}^k \left(z_{(i)} - z_{(k)} \right), \,
\sum_{i=1}^k \left(z_{(i)} - z_{(k+1)} \right) \right] =
\left( 0 , \sum_{i=1}^K \left(z_{(i)} - z_{({\scriptscriptstyle K}+1)} \right) \right] \,,
\end{align*}
since the intervals in the union are contiguous.
This establishes \eqref{eq:smooth:proj:simplex_2}.
The only case when \eqref{eq:smooth:proj:simplex_2} cannot hold for any value of $\mu > 0$ is when the right hand side
of \eqref{eq:smooth:proj:simplex_2} is zero. In the opposite case when $\zv_{[{\scriptscriptstyle K} + 1]}$ has at least
two unique components, or equivalently, $z_{(1)} - z_{({\scriptscriptstyle K}+1)} > 0$, the condition
$0 < \mu \le z_{(1)} - z_{({\scriptscriptstyle K}+1)}$ implies \eqref{eq:smooth:proj:simplex_2}.
\end{proof}
If the top-$K$ strategy is exact at $\gv(\wv)$ for $\mu$, then
\[
f_{\mu, K}(\wv) = f_{\mu \ell_2^2}(\wv)
\quad \text{and} \quad
\widetilde \grad f_{\mu, K}(\wv) = \grad f_{\mu \ell_2^2}(\wv) \,,
\]
where the latter follows from the chain rule.
When used instead of $\ell_2^2$ smoothing in the algorithms presented in Sec.~\ref{sec:cvx_opt},
the top-$K$ strategy provides a computationally efficient heuristic to smooth the structural hinge loss.
Though we do not have theoretical guarantees using this surrogate,
experiments presented in Sec.~\ref{sec:expt} show its efficiency and its robustness to the choice of $K$.
\subsection{Score Functions} \label{sec:inf_oracles:score_func}
Structured prediction is defined by the structure of the output $\yv$, while the input $\xv \in \mcX$ can be arbitrary.
Each output $\yv \in \mcY$ is composed of $p$ components $y_1, \ldots, y_p$ that are linked through a graphical model
$\mathcal{G} = (\mathcal{V}, \mathcal{E})$:
the nodes $\mcV=\{1,\cdots,p\}$ represent the components of the output $\yv$, while the edges $\mcE$ define the
dependencies between various components.
The value of each component $y_v$ for $v \in \mcV$ represents the state of the node $v$ and takes values from a finite set $\mcY_v$.
The set of all output structures $\mcY = \mcY_1 \times \cdots \times \mcY_p$
is then finite yet potentially intractably large.
The structure of the graph (i.e., its edge structure) depends on the task.
For the task of sequence labeling, the graph is a chain,
while for the task of parsing, the graph is a tree. On the other hand, the graph
used in image segmentation is a grid.
For a given input $\xv$ and a score function $\phi(\cdot, \cdot ; \wv)$,
the value $\phi(\xv, \yv; \wv)$ measures the compatibility of the
output $\yv$ for the input $\xv$.
The essential characteristic of the score function is that it decomposes over the nodes and edges of the graph as
\begin{align} \label{eq:setting:score:decomp}
\phi(\xv, \yv ; \wv) = \sum_{v \in \mcV} \phi_v(\xv, y_v; \wv)
+ \sum_{(v,v') \in \mcE} \phi_{v,v'}(\xv, y_v, y_{v'} ; \wv) \,.
\end{align}
For a fixed $\wv$, each input $\xv$ defines a specific compatibility function $\phi(\xv, \cdot\, ; \wv)$.
The nature of the problem and the optimization algorithms we consider hinge upon whether
$\phi$ is an affine function of $\wv$ or not. The two settings studied here are the following:
\begin{description}
\item{\bfseries Pre-defined Feature Map.}
In this structured prediction framework, a pre-specified feature map
$\Phi: \mcX \times \mcY \to \reals^d$ is employed and the score $\phi$ is then defined as the linear function
\begin{equation}\label{eq:pre_spec_feature_map}
\phi(\xv, \yv ; \wv) = \inp{\Phi(\xv, \yv)}{\wv} = \sum_{v \in \mcV} \inp{\Phi_v(\xv, y_v)}{\wv}
+ \sum_{(v,v') \in \mcE} \inp{\Phi_{v,v'}(\xv, y_v, y_{v'})}{\wv}\,.
\end{equation}
\item{\bfseries Learning the Feature Map.}
We also consider the setting where the feature map $\Phi$ is parameterized by $\wv_0$,
for example, using a neural network, and is learned from the data. The score function can then be written as
\begin{equation}\label{eq:deep_setting}
\phi(\xv, \yv ; \wv) = \inp{\Phi(\xv, \yv; \wv_0)}{\wv_1}
\end{equation}
where $\wv = (\wv_0, \wv_1)$ and the scalar product decomposes into nodes and edges as above.
\end{description}
Note that we only need the decomposition of the score function over the nodes and edges of $\mcG$ as in
Eq.~\eqref{eq:setting:score:decomp}.
In particular, while Eq.~\eqref{eq:deep_setting} is helpful
to understand the use of neural networks in structured prediction,
the optimization algorithms developed in Sec.~\ref{sec:ncvx_opt}
apply to general nonlinear but smooth score functions.
This framework captures both generative probabilistic models such as Hidden Markov Models (HMMs)
that model the joint distribution between $\xv$ and $\yv$
and discriminative probabilistic models,
such as conditional random fields~\citep{lafferty2001conditional}
where dependencies among the input variables $\xv$ do not need to be explicitly represented.
In these cases, the log joint and conditional probabilities respectively
play the role of the score $\phi$.
\begin{example}[Sequence Tagging]
\label{example:inf_oracles:viterbi_example}
Consider the task of sequence tagging in natural language processing
where each $\xv = (x_1, \cdots, x_p) \in \mcX$ is a sequence of words and
$\yv = (y_1, \cdots, y_p) \in \mcY$ is a sequence of labels,
both of length $p$. Common examples include part of speech tagging and named entity recognition.
Each word $x_v$ in the sequence $\xv$ comes from a finite dictionary $\mcD$,
and each tag $y_v$ in $\yv$ takes values from a finite set $\mcY_v = \mcY_{\mathrm{tag}}$.
The corresponding graph is simply a linear chain.
The score function measures the compatibility of a sequence $\yv\in\mcY$ for the input $\xv\in\mcX$ using parameters
$\wv = (\wv_{\mathrm{unary}}, \wv_{\mathrm{pair}})$ as, for instance,
\[
\phi(\xv, \yv; \wv) = \sum_{v =1}^p \inp{\Phi_{\mathrm{unary}}(x_v, y_v)}{\wv_{\mathrm{unary}}}
+ \sum_{v=0}^p \inp{\Phi_{\mathrm{pair}}(y_v, y_{v+1})}{\wv_{\mathrm{pair}}}\,,
\]
where, using $\wv_{\mathrm{unary}} \in \reals^{\abs\mcD\abs{\mcY_{\mathrm{tag}}}}$
and $\wv_{\mathrm{pair}} \in \reals^{\abs{\mcY_{\mathrm{tag}}}^2}$ as node and edge weights respectively,
we define for each $v \in [p]$,
\[
\inp{\Phi_{\mathrm{unary}}(x_v, y_v)}{\wv_{\mathrm{unary}}} = \sum_{x \in \mathcal{D},\, j \in \mcY_{\mathrm{tag}}}
w_{\mathrm{unary},\, x, j} \ind(x = x_v) \ind(j = y_v) \,.
\]
The pairwise term $\inp{\Phi_{\mathrm{pair}}(y_v, y_{v+1})}{\wv_{\mathrm{pair}}}$ is analogously defined.
Here, $y_0, y_{p+1}$ are special ``start'' and ``stop'' symbols respectively.
This can be written as a dot product of $\wv$ with a pre-specified feature map as in~\eqref{eq:pre_spec_feature_map},
by defining
\[
\Phi(\xv, \yv) = \big(\sum_{v=1}^p \ev_{x_v} \otimes \ev_{y_v} \big)
\oplus \big(\sum_{v=0}^p \ev_{y_v} \otimes \ev_{y_{v+1}} \big) \,,
\]
where $\ev_{x_v}$ is the unit vector $(\ind(x = x_v))_{x \in \mcD} \in \reals^{\abs\mcD}$,
$ \ev_{y_v}$ is the unit vector $(\ind(j=y_v))_{j \in \mcY_{\mathrm{tag}}} \in \reals^{\abs{\mcY_{\mathrm{tag}}}}$,
$\otimes$ denotes the Kronecker product between vectors and $\oplus$ denotes vector concatenation.
\end{example}
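As a small illustration of Example~\ref{example:inf_oracles:viterbi_example}, the Python sketch below builds the feature map $\Phi(\xv, \yv)$ for integer-coded words and tags; it omits the start and stop symbols for brevity, so it is a simplified variant of the map above rather than an exact transcription.
\begin{verbatim}
import numpy as np

def feature_map(x, y, D, T):
    # Phi(x, y): unary (word, tag) counts and pairwise (tag, tag) transition
    # counts, flattened and concatenated (Kronecker products of unit vectors).
    unary = np.zeros((D, T))
    pair = np.zeros((T, T))
    for v in range(len(x)):
        unary[x[v], y[v]] += 1.0
    for v in range(len(x) - 1):
        pair[y[v], y[v + 1]] += 1.0
    return np.concatenate([unary.ravel(), pair.ravel()])

def score(w, x, y, D, T):
    return w @ feature_map(x, y, D, T)    # phi(x, y; w) = <Phi(x, y), w>

D, T = 5, 3                               # dictionary and tag set sizes
x = [0, 3, 3, 1]                          # a length-4 "sentence"
y = [2, 0, 0, 1]                          # a candidate tag sequence
w = np.random.default_rng(0).standard_normal(D * T + T * T)
print(score(w, x, y, D, T))
\end{verbatim}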
\subsection{Inference Oracles}
We now define inference oracles as first order oracles in structured prediction.
These are used later
to understand the information-based complexity of optimization algorithms.
\subsubsection{First Order Oracles in Structured Prediction}
A first order oracle for a function $f :\reals^d \to \reals$ is a routine which,
given a point $\wv \in \reals^d$, returns on output a value $f(\wv)$ and
a (sub)gradient $\vv \in \partial f(\wv)$, where $\partial f$ is the
Fr\'echet (or regular) subdifferential
\citep[Def. 8.3]{rockafellar2009variational}.
We now define inference oracles as first order oracles for the structural hinge loss
$f$ and its smoothed variants $f_{\mu \omega}$.
Note that these definitions are independent of the graphical structure.
However, as we shall see, the graphical structure plays a crucial role in the implementation of
the inference oracles.
\begin{definition} \label{defn:inf-oracles-all}
Consider an augmented score function $\psi$,
a level of smoothing $\mu > 0$
and the structural hinge loss $f(\wv) = \max_{\yv \in \mcY} \psi(\yv;\wv)$. For a given $\wv \in \reals^d$,
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item the {\em max oracle}
returns $f(\wv)$ and $\vv \in \partial f(\wv)$.
\item the {\em exp oracle}
returns $f_{-\mu H}(\wv)$ and $\grad f_{-\mu H}(\wv)$.
\item the {\em top-$K$ oracle}
returns $f_{\mu, K}(\wv)$ and $\widetilde \grad f_{\mu, K}(\wv)$ as surrogates for
$f_{\mu \ell_2^2}(\wv)$ and $\grad f_{\mu \ell_2^2}(\wv)$ respectively.
\end{enumerate}
\end{definition}
\noindent
Note that the exp oracle gets its name since its gradient can be written as an expectation
over all $\yv \in \mcY$, as revealed by the next lemma,
which gives analytical expressions for the gradients returned by the oracles.
\begin{lemma} \label{lemma:smoothing:first-order-oracle}
Consider the setting of Def.~\ref{defn:inf-oracles-all}. We have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item \label{lem:foo:max}
For any $\yv^* \in \argmax_{\yv \in \mcY} \psi(\yv;\wv)$, we have that
$\grad_\wv \psi(\yv^* ; \wv) \in \partial f(\wv)$. That is, the max oracle can be implemented
by inference.
\item
The output of the exp oracle satisfies
$\grad f_{-\mu H}(\wv) = \sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv) \grad \psi(\yv ; \wv)$,
where
\[P_{\psi, \mu}(\yv ; \wv)
= \frac{
\exp\left(\tfrac{1}{\mu}\psi(\yv ; \wv)\right)}
{\sum_{\yv' \in \mcY }\exp\left(\tfrac{1}{\mu}\psi(\yv' ; \wv)\right)} \,.
\]
\label{lem:foo:exp}
\item \label{lem:foo:l2}
The output of the top-$K$ oracle satisfies
$
\widetilde \grad f_{\mu, K}(\wv) = \sum_{i=1}^K u_{\psi, \mu, i}^*(\wv) \grad \psi(\yv_{(i)}; \wv) \,,
$
where $Y_K = \left\{\yv_{(1)}, \cdots, \yv_{(K)} \right\}$ is the set of $K$ largest scoring outputs
satisfying
\[
\psi(\yv_{(1)} ; \wv) \ge \cdots \ge \psi(\yv_{(K)} ; \wv) \ge \max_{\yv \in \mcY \setminus Y_K} \psi(\yv ; \wv)\,,
\]
and $
\uv^*_{\psi, \mu} = \operatorname{proj}_{\Delta^{K-1}} \left( \tfrac{1}{\mu} \left[\psi(\yv_{(1)} ; \wv), \cdots,
\psi(\yv_{(K)} ; \wv) \right]\T \right)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part~\ref{lem:foo:exp} deals with the composition of differentiable
functions, and follows from the chain rule. Part~\ref{lem:foo:l2} follows from the definition in Eq.~\eqref{eq:smoothing:fmuK_defn}.
The proof of Part~\ref{lem:foo:max} follows from the chain rule for Fr\'echet subdifferentials of compositions
\citep[Theorem 10.6]{rockafellar2009variational}
together with the fact that by convexity and
Danskin's theorem \citep[Proposition B.25]{bertsekas1999nonlinear},
the subdifferential of the max function is given by
$\partial h(\zv) = \conv \{ \ev_i \, | \, i \in [m] \text{ such that } z_i = h(\zv) \}$.
\end{proof}
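When $\mcY$ is small enough to enumerate, the max and exp oracles of Lemma~\ref{lemma:smoothing:first-order-oracle} can be implemented by brute force, as in the illustrative NumPy sketch below; realistic output spaces require the dynamic programming routines of Sec.~\ref{subsec:smooth_inference_trees} instead, and the random inputs are placeholders.
\begin{verbatim}
import numpy as np

def max_and_exp_oracles(psi_vals, psi_grads, mu):
    # psi_vals : psi(y; w) for every y in Y,          shape (|Y|,)
    # psi_grads: grad_w psi(y; w) for every y in Y,   shape (|Y|, d)
    # Max oracle: f(w) and the subgradient grad psi(y*; w).
    i_star = int(np.argmax(psi_vals))
    max_oracle = (psi_vals[i_star], psi_grads[i_star])
    # Exp oracle: f_{-mu H}(w) = mu * log-sum-exp(psi / mu); its gradient is
    # the expectation of grad psi(y; w) under the distribution P_{psi, mu}.
    s = psi_vals / mu
    w = np.exp(s - s.max())
    p = w / w.sum()
    exp_oracle = (mu * (np.log(w.sum()) + s.max()), p @ psi_grads)
    return max_oracle, exp_oracle

rng = np.random.default_rng(0)
vals, grads = rng.standard_normal(8), rng.standard_normal((8, 4))
print(max_and_exp_oracles(vals, grads, mu=0.5))
\end{verbatim}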
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.28\linewidth}
\centering
\adjincludegraphics[width=\textwidth,trim={0.11\width 0.2\height 0.11\width 0.2\height},clip]{fig/viterbi/viterbi-max.pdf}
\caption{\small{Non-smooth.}}
\label{subfig:viterbi:1:max}
\end{subfigure}
\hspace{5mm}%
\begin{subfigure}[b]{0.28\linewidth}
\centering
\adjincludegraphics[width=\textwidth,trim={0.11\width 0.2\height 0.11\width 0.2\height},clip]{fig/viterbi/viterbi-K.pdf}
\caption{\small{$\ell_2^2$ smoothing.}}
\label{subfig:viterbi:1:l2}
\end{subfigure}
\hspace{5mm}%
\begin{subfigure}[b]{0.28\linewidth}
\centering
\adjincludegraphics[width=\textwidth,trim={0.11\width 0.2\height 0.11\width 0.2\height},clip]{fig/viterbi/viterbi-exp.pdf}
\caption{\small{Entropy smoothing.}}
\label{subfig:viterbi:1:ent}
\end{subfigure}
\caption{\small{Viterbi trellis for a chain graph with $p=4$ nodes and 3 labels.
}}
\label{fig:viterbi:1}
\end{figure*}
\begin{example} \label{example:inf_oracles:viterbi_example_2}
Consider the task of sequence tagging from Example~\ref{example:inf_oracles:viterbi_example}.
The inference problem~\eqref{eq:pgm:inference} is a search over all
$\abs{\mcY} = \abs{\mcY_{\mathrm{tag}}}^p$ label sequences. For chain graphs, this is equivalent
to searching for the shortest path in the associated trellis, shown in Fig.~\ref{fig:viterbi:1}.
An efficient dynamic programming approach called the Viterbi algorithm \citep{viterbi1967error}
can solve this problem in space and time polynomial in $p$ and $\abs{\mcY_{\mathrm{tag}}}$.
The structural hinge loss is non-smooth because a small change in $\wv$ might lead to a radical change
in the best scoring path shown in Fig.~\ref{fig:viterbi:1}.
When smoothing $f$ with $\omega = \ell_2^2$,
the smoothed function $f_{\mu \ell_2^2}$ is given by a projection onto the simplex,
which picks out some number $K_{\psi/\mu}$ of the highest scoring outputs $\yv \in \mcY$ or equivalently,
$K_{\psi/\mu}$ shortest paths in the Viterbi trellis (Fig.~\ref{subfig:viterbi:1:l2}).
The top-$K$ oracle then uses the top-$K$ strategy to approximate $f_{\mu \ell_2^2}$ with $f_{\mu, K}$.
On the other hand, with entropy smoothing $\omega = -H$, we get the log-sum-exp function and
its gradient is obtained by averaging
over paths with weights such that
shorter paths have a larger weight (cf. Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:exp}).
This is visualized in Fig.~\ref{subfig:viterbi:1:ent}.
\end{example}
\subsubsection{Exp Oracles and Conditional Random Fields}
Recall that a {\em Conditional Random Field (CRF)}~\citep{lafferty2001conditional}
with augmented score function $\psi$
and parameters $\wv \in \reals^d$ is a probabilistic model that assigns
to output $\yv \in \mcY$ the probability
\begin{align} \label{eq:smoothing:crf:def}
\prob(\yv \mid \psi ; \wv) = \exp\left(\psi(\yv ; \wv) - A_\psi(\wv) \right) \,,
\end{align}
where $A_\psi(\wv)$ is known as the log-partition function,
a normalizer so that the probabilities sum to one.
Gradient-based maximum likelihood learning algorithms for CRFs require computation
of the log-partition function $A_\psi(\wv)$ and its gradient $\grad A_\psi(\wv)$.
The next proposition relates the computational costs of the exp oracle and the log-partition function.
\begin{proposition} \label{prop:smoothing:exp-crf}
The exp oracle for an augmented score function $\psi$ with parameters $\wv \in \reals^d$ is
equivalent in hardness to computing the log-partition function $A_\psi(\wv)$
and its gradient $\grad A_\psi(\wv)$ for a conditional
random field with augmented score function $\psi$.
\end{proposition}
\begin{proof}
Fix a smoothing parameter $\mu > 0$.
Consider a CRF with augmented score function
$\psi'(\yv ; \wv) = \mu\inv \psi(\yv ; \wv)$. Its log-partition function
$A_{\psi'}(\wv)$ satisfies
$\exp(A_{\psi'}(\wv)) = \sum_{\yv \in \mcY} \exp \left( \mu\inv \psi(\yv ; \wv) \right)$.
The claim now follows from the bijection $f_{- \mu H}(\wv) = \mu \, A_{\psi'}(\wv)$
between $f_{-\mu H}$ and $A_{\psi'}$.
\end{proof}
\subsection{Inference Oracles in Trees} \label{subsec:smooth_inference_trees}
We first consider algorithms implementing the inference oracles in trees and examine their computational complexity.
\subsubsection{Implementation of Inference Oracles}
\paragraph{Max Oracle}
In tree structured graphical models, the inference problem~\eqref{eq:pgm:inference}, and thus the max oracle
(cf. Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:max})
can always be solved exactly in polynomial time by the max-product algorithm~\citep{pearl1988probabilistic},
which uses the technique of dynamic programming~\citep{bellman1957dynamic}.
The Viterbi algorithm (Algo.~\ref{algo:dp:max:chain}) for chain graphs from Example~\ref{example:inf_oracles:viterbi_example_2}
is a special case. See Algo.~\ref{algo:dp:supp} in Appendix~\ref{sec:a:dp} for
the max-product algorithm in full generality.
\paragraph{Top-$K$ Oracle}
The top-$K$ oracle uses a generalization of the max-product algorithm that we name top-$K$ max-product algorithm.
Following the work of \citet{seroussi1994algorithm}, it keeps track of the $K$-best intermediate structures while
the max-product algorithm just tracks the single best intermediate structure.
Formally, the $k$th largest value of a function $f$ over a discrete set $S$ is defined as
\begin{align*}
\maxK{k}{x \in S} f(x) =
\begin{cases}
\text{$k$th largest element of $\{f(y)\, |\, y \in S\} $} & k \le |S| \\
-\infty, & k > |S| \,.
\end{cases}
\end{align*}
We present the algorithm in the simple case of chain structured graphical models in
Algo.~\ref{algo:dp:topK:chain}.
The top-$K$ max-product algorithm for general trees is given in
Algo.~\ref{algo:dp:topK:main} in Appendix~\ref{sec:a:dp}.
Note that it requires $\widetilde\bigO(K)$
times the time and space of the max oracle.
\paragraph{Exp oracle}
The relationship of the exp oracle with CRFs (Prop.~\ref{prop:smoothing:exp-crf})
leads directly to
Algo.~\ref{algo:dp:supp_exp}, which is
based on marginal computations from the sum-product algorithm.
\begin{algorithm}[tb]
\caption{Max-product (Viterbi) algorithm for chain graphs
}
\label{algo:dp:max:chain}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot; \wv)$ defined on a chain graph $\mcG$.
\STATE Set $\pi_1(y_1) \leftarrow \psi_1(y_1)$ for all $y_1 \in \mcY_1$.
\FOR{$v= 2, \cdots p$}
\STATE For all $y_v \in \mcY_v$, set
\begin{align} \label{eq:dp:viterbi:update}
\pi_{v}(y_v) \leftarrow \psi_v(y_v) + \max_{y_{v-1} \in \mcY_{v-1}}
\left\{ \pi_{v-1}(y_{v-1}) + \psi_{v, v-1}(y_v, y_{v-1}) \right\} \,.
\end{align}
\STATE Assign to $\delta_v(y_v)$ the $y_{v-1}$
that attains the $\max$ above for each $y_v \in \mcY_v$.
\ENDFOR
\STATE Set $\psi^* \leftarrow \max_{y_p \in \mcY_p} \pi_p(y_p)$
and store the maximizing assignment of $y_p$ in $y_p^*$.
\FOR{$v= p-1, \cdots, 1$}
\STATE Set $y_v^* \leftarrow \delta_{v+1}( y_{v+1})$.
\ENDFOR
\RETURN $\psi^*, \yv^*:=(y_1^*, \cdots, y_p^*) $.
\end{algorithmic}
\end{algorithm}
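For reference, a compact NumPy transcription of Algo.~\ref{algo:dp:max:chain} is given below; it assumes, purely for brevity, that the pairwise scores are shared across positions, so it is an illustrative sketch rather than the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def viterbi(unary, pairwise):
    # unary[v, y] = psi_v(y); pairwise[y, y_prev] = psi_{v, v-1}(y, y_prev),
    # assumed identical for all positions v.
    p, L = unary.shape
    pi = unary[0].copy()                       # pi_1(y_1) = psi_1(y_1)
    back = np.zeros((p, L), dtype=int)
    for v in range(1, p):
        cand = pi[None, :] + pairwise          # cand[y, y_prev]
        back[v] = np.argmax(cand, axis=1)      # backpointers delta_v(y)
        pi = unary[v] + np.max(cand, axis=1)   # recursion for pi_v(y)
    y = [int(np.argmax(pi))]                   # maximizing y_p
    for v in range(p - 1, 0, -1):
        y.append(int(back[v][y[-1]]))          # backtrack
    return float(np.max(pi)), y[::-1]          # psi^*, y^*

rng = np.random.default_rng(0)
print(viterbi(rng.standard_normal((4, 3)), rng.standard_normal((3, 3))))
\end{verbatim}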
\begin{algorithm}[tb]
\caption{Top-$K$ max-product (top-$K$ Viterbi) algorithm for chain graphs}
\label{algo:dp:topK:chain}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on chain graph $\mcG$,
integer $K>0$.
\STATE For $k=1,\cdots, K$, set $\pi_1\pow{k}(y_1) \leftarrow \psi_1(y_1)$ if $k=1$ and $-\infty$ otherwise for all $y_1 \in \mcY_1$.
\FOR{$v= 2, \cdots p$ and $k=1,\cdots, K$}
\STATE For all $y_v \in \mcY_v$, set
\begin{align} \label{eq:dp:viterbi:topk:update}
\pi_{v}\pow{k}(y_v) \leftarrow \psi_v(y_v) + \maxK{k}{y_{v-1} \in \mcY_{v-1}, \ell \in [K]}
\left\{ \pi_{v-1}\pow{\ell}(y_{v-1}) + \psi_{v, v-1}(y_v, y_{v-1}) \right\} \,.
\end{align}
\STATE Assign to $\delta_v\pow{k}(y_v), \kappa_v\pow{k}(y_v)$ the $y_{v-1}, \ell$
that attain the $\max\pow{k}$ above for each $y_v \in \mcY_v$.
\ENDFOR
\STATE For $k=1,\cdots, K$, set $\psi\pow{k} \leftarrow \max\pow{k}_{y_p \in \mcY_p, \ell \in [K]} \pi_p\pow{\ell}(y_p)$
and store in $y_p\pow{k}, \ell\pow{k}$ respectively the maximizing assignments of $y_p, \ell$.
\FOR{$v= p-1, \cdots, 1$ and $k=1,\cdots, K$}
\STATE Set $y_v\pow{k} \leftarrow \delta_{v+1}\pow{\ell\pow{k}} \big( y_{v+1}\pow{k} \big)$ and
$\ell\pow{k} \leftarrow \kappa_{v+1}\pow{\ell\pow{k}} \big( y_{v+1}\pow{k} \big)$.
\ENDFOR
\RETURN $\left\{ \psi\pow{k}, \yv\pow{k}:=(y_1\pow{k}, \cdots, y_p\pow{k}) \right\}_{k=1}^K$.
\end{algorithmic}
\end{algorithm}
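A direct Python transcription of the idea behind Algo.~\ref{algo:dp:topK:chain} is sketched below; for brevity it stores explicit prefixes instead of the backpointers $\delta_v\pow{k}, \kappa_v\pow{k}$ and again assumes position-independent pairwise scores, so it is meant as an illustration only.
\begin{verbatim}
import heapq
import numpy as np

def top_k_viterbi(unary, pairwise, K):
    # unary[v, y] = psi_v(y); pairwise[y, y_prev] = psi_{v, v-1}(y, y_prev).
    # pi[y] holds up to K (score, prefix) pairs ending in label y, best first.
    p, L = unary.shape
    pi = [[(unary[0, y], [y])] for y in range(L)]
    for v in range(1, p):
        new_pi = []
        for y in range(L):
            cand = [(s + pairwise[y, y_prev] + unary[v, y], seq + [y])
                    for y_prev in range(L) for (s, seq) in pi[y_prev]]
            new_pi.append(heapq.nlargest(K, cand, key=lambda t: t[0]))
        pi = new_pi
    best = heapq.nlargest(K, [t for lst in pi for t in lst], key=lambda t: t[0])
    return [(float(s), seq) for s, seq in best]   # K best scores and sequences

rng = np.random.default_rng(0)
print(top_k_viterbi(rng.standard_normal((4, 3)),
                    rng.standard_normal((3, 3)), K=3))
\end{verbatim}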
\begin{algorithm}[tb]
\caption{Entropy smoothed max-product algorithm}
\label{algo:dp:supp_exp}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on
tree structured graph $\mcG$,
$\mu > 0$.
\STATE Compute the log-partition function and marginals using the sum-product algorithm
(Algo.~\ref{algo:dp:supp_sum-prod} in Appendix~\ref{sec:a:dp})
\[
A_{\psi/\mu}, \{P_v \text{ for } v \in \mcV\}, \{ P_{v, v'} \text{ for } (v, v') \in \mcE \}
\leftarrow \textsc{SumProduct}\left( \tfrac{1}{\mu} \psi(\cdot \, ; \wv), \mcG \right) \,.
\]
\STATE Set $f_{-\mu H}(\wv) \leftarrow \mu A_{\psi /\mu}$ and
\[
\grad f_{-\mu H}(\wv) \leftarrow \sum_{v \in \mcV} \sum_{y_v \in \mcY_v} P_v(y_v) \grad \psi_v(y_v ; \wv)
+ \sum_{(v, v') \in \mcE} \sum_{y_v \in \mcY_v} \sum_{y_{v'} \in \mcY_{v'}} P_{v,v'}(y_v, y_{v'})\grad \psi_{v, v'}(y_v, y_{v'} ; \wv) \,.
\]
\label{line:algo:dp:exp:gradient}
\RETURN $f_{-\mu H}(\wv), \grad f_{-\mu H}(\wv)$.
\end{algorithmic}
\end{algorithm}
\begin{remark}
We note that clique trees allow the generalization of the
algorithms of this section to general graphs with cycles.
However, the construction of a clique tree requires time and space
exponential in the {\em treewidth} of the graph.
\end{remark}
\begin{example}
Consider the task of sequence tagging from Example~\ref{example:inf_oracles:viterbi_example}.
The Viterbi algorithm (Algo.~\ref{algo:dp:max:chain}) maintains
a table $\pi_v(y_v)$, which stores the score of the best length-$v$ prefix ending in label $y_v$.
On the other hand, the top-$K$ Viterbi algorithm (Algo.~\ref{algo:dp:topK:chain})
must store in $\pi_v\pow{k}(y_v)$ the score of the $k$th best length-$v$ prefix that ends in $y_v$ for each $k \in [K]$.
In the vanilla Viterbi algorithm, the entry $\pi_v(y_v)$ is updated by looking at the previous column
$\pi_{v-1}$
following~\eqref{eq:dp:viterbi:update}.
Compare this to update \eqref{eq:dp:viterbi:topk:update}
of the top-$K$ Viterbi algorithm.
In this case, the exp oracle is implemented by the forward-backward algorithm, a specialization of the
sum-product algorithm to chain graphs.
\end{example}
\subsubsection{Complexity of Inference Oracles}
The next proposition presents the correctness guarantee and complexity of
each of the aforementioned algorithms. Its proof has been placed in Appendix~\ref{sec:a:dp}.
\begin{proposition} \label{prop:dp:main}
Consider as inputs an augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on a tree structured graph $\mcG$,
an integer $K>0$ and a smoothing parameter $\mu > 0$.
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item The output $(\psi^*, \yv^*)$ of the max-product algorithm
(Algo.~\ref{algo:dp:max:chain} for the special case when $\mcG$ is chain structured or
Algo.~\ref{algo:dp:supp} from Appendix~\ref{sec:a:dp} in general) satisfies
$\psi^* = \psi(\yv^* ; \wv) = \max_{\yv \in \mcY} \psi(\yv ; \wv)$.
Thus, the pair $\big(\psi^*, \grad \psi(\yv^* ; \wv)\big)$ is a correct implementation of the max oracle.
It requires time $\bigO(p \max_{v\in\mcV} \abs{\mcY_v}^2)$
and space $\bigO(p \max_{v\in\mcV} \abs{\mcY_v})$.
\item The output $\{ \psi\pow{k}, \yv\pow{k} \}_{k=1}^K$
of the top-$K$ max-product algorithm
(Algo.~\ref{algo:dp:topK:chain} for the special case when $\mcG$ is chain structured
or Algo.~\ref{algo:dp:topK:main} from Appendix~\ref{sec:a:dp} in general)
satisfies $\psi\pow{k} = \psi(\yv\pow{k} ; \wv) = \max\pow{k}_{\yv \in \mcY} \psi(\yv ; \wv)$.
Thus, the top-$K$ max-product algorithm followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing})
is a correct implementation of the top-$K$ oracle.
It requires time $\bigO(pK\log K \max_{v\in\mcV} \abs{\mcY_v}^2)$
and space $\bigO(p K \max_{v\in\mcV} \abs{\mcY_v})$.
\label{prop:dp:main:part:topk}
\item Algo.~\ref{algo:dp:supp_exp}
returns $\big(f_{-\mu H}(\wv), \grad f_{-\mu H}(\wv)\big)$.
Thus, Algo.~\ref{algo:dp:supp_exp} is a correct implementation of the exp oracle.
It requires time $\bigO(p \max_{v\in\mcV} \abs{\mcY_v}^2)$
and space $\bigO(p \max_{v\in\mcV} \abs{\mcY_v})$.
\end{enumerate}
\end{proposition}
\subsection{Inference Oracles in Loopy Graphs} \label{subsec:smooth_inference_loopy}
For general loopy graphs with high tree-width,
the inference problem \eqref{eq:pgm:inference} is NP-hard \citep{cooper1990computational}.
In particular cases, graph cut, matching or search algorithms
can be used for exact inference in dense loopy graphs, and therefore,
to implement the max oracle as well (cf. Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:max}).
In each of these cases,
we find that the top-$K$ oracle can be implemented, but the exp oracle is intractable.
Appendix~\ref{sec:a:smooth:loopy} contains a review of the algorithms and guarantees referenced
in this section.
\subsubsection{Inference Oracles using Max-Marginals}
We now define a {\em max-marginal},
which is a constrained maximum of the augmented score $\psi$.
\begin{definition}
The max-marginal of $\psi$ relative to a variable $y_v$ is defined,
for $j \in \mcY_v$ as
\begin{align}
\psi_{v; j}(\wv) := \max_{\substack{\yv \in \mcY \,: \, y_v = j}} \psi(\yv ; \wv)\, .
\end{align}
\end{definition}
\noindent
In cases where exact inference is tractable using graph cut or matching algorithms,
it is possible to extract max-marginals as well.
This, as we shall see next, allows the implementation of the max and top-$K$ oracles.
When the augmented score function $\psi$ is {\em unambiguous}, i.e.,
no two distinct $\yv_1, \yv_2 \in \mcY$ have the same augmented score,
the output $\yv^*(\wv)$ is unique and can be decoded from the max-marginals as
(see \citet{pearl1988probabilistic,dawid1992applications} or Thm.~\ref{thm:a:loopy:decoding}
in Appendix~\ref{sec:a:smooth:loopy})
\begin{align} \label{eq:max-marg:defn}
y_v^*(\wv) = \argmax_{j \in \mcY_v} \psi_{v ; j}(\wv) \,.
\end{align}
If one has access to an algorithm $\mcM$ that can compute max-marginals,
the top-$K$ oracle is also easily implemented via the {\em Best Max-Marginal First (BMMF)}
algorithm of \citet{yanover2004finding}.
This algorithm requires the computation of $2K$ sets of max-marginals,
where a {\em set} of max-marginals refers to the max-marginals of all variables $y_v$ in $\yv$.
Therefore,
the BMMF algorithm followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing})
is a correct implementation of the top-$K$ oracle at a computational cost of
$2K$ sets of max-marginals.
The BMMF algorithm and its guarantee are recalled in Appendix~\ref{sec:a:bmmf} for completeness.
\paragraph{Graph Cut and Matching Inference}
\citet{kolmogorov2004energy} showed that submodular energy functions \citep{lovasz1983submodular}
over binary variables can be efficiently minimized exactly via a minimum cut algorithm.
For a class of alignment problems, e.g., \citet{taskar2005discriminative},
inference amounts to finding the best bipartite matching.
In both these cases, max-marginals can be computed exactly and efficiently
by combinatorial algorithms.
This gives us a way to implement the max and top-$K$ oracles.
However, in both settings,
computing the log-partition function $A_\psi(\wv)$ of a CRF with score $\psi$
is known to be \#P-complete~\citep{jerrum1993polynomial}.
Prop.~\ref{prop:smoothing:exp-crf} immediately extends this result to the exp oracle.
This discussion is summarized by the following proposition, whose proof is provided in Appendix~\ref{sec:a:proof-prop}.
\begin{proposition} \label{prop:smoothing:max-marg:all}
Consider as inputs an augmented score function $\psi(\cdot, \cdot ; \wv)$,
an integer $K>0$ and a smoothing parameter $\mu > 0$.
Further, suppose that $\psi$ is unambiguous, that is,
$\psi(\yv' ; \wv) \neq \psi(\yv'' ;\wv)$ for all distinct $\yv', \yv'' \in \mcY$.
Consider one of the two settings:
\begin{enumerate}[label={\upshape(\Alph*)}, align=left, leftmargin=*]
\item the output space $\mcY_v = \{0,1\}$ for each $v \in \mcV$, and the function
$-\psi$ is submodular (see Appendix~\ref{sec:a:graph_cuts} and, in particular, \eqref{eq:top_k_map:submodular}
for the precise definition), or,
\label{part:prop:max-marg:cuts}
\item the augmented score corresponds to an alignment task where the
inference problem~\eqref{eq:pgm:inference} corresponds to a
maximum weight bipartite matching (see Appendix~\ref{sec:a:graph_matchings} for a precise definition).
\label{part:prop:max-marg:matching}
\end{enumerate}
In these cases, we have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item The max oracle can be implemented at a
computational complexity of $\bigO(p)$ minimum cut computations in Case~\ref{part:prop:max-marg:cuts},
and in time $\bigO(p^3)$ in Case~\ref{part:prop:max-marg:matching}.
\item The top-$K$ oracle can be implemented at a
computational complexity of $\bigO(pK)$ minimum cut computations in Case~\ref{part:prop:max-marg:cuts},
and in time $\bigO(p^3K)$ in Case~\ref{part:prop:max-marg:matching}.
\item The exp oracle is \#P-complete in both cases.
\end{enumerate}
\end{proposition}
Prop.~\ref{prop:smoothing:max-marg:all} is loose in that the max oracle can be implemented with just one
minimum cut computation instead of $p$ in Case~\ref{part:prop:max-marg:cuts}~\citep{kolmogorov2004energy}.
\subsubsection{Branch and Bound Search}
Max oracles implemented via search algorithms can often be extended to implement the top-$K$ oracle.
We restrict our attention to best-first branch and bound search such as the
celebrated Efficient Subwindow Search \citep{lampert2008beyond}.
Branch and bound methods partition the search space into disjoint subsets,
while keeping an upper bound
$\widehat \psi: \mcX \times 2^{\mcY} \to \reals$
on the maximal augmented score for each of the subsets $\widehat \mcY \subseteq \mcY$.
Using a best-first strategy, promising parts of the search space are explored first.
Parts of the search space whose upper bound indicates that they cannot contain the maximum
do not have to be examined further.
The top-$K$ oracle is implemented by simply
continuing the search procedure until $K$ outputs have been produced; see
Algo.~\ref{algo:top_k:bb} in Appendix~\ref{sec:a:bb_search}.
Both the max oracle and the top-$K$ oracle
can degenerate to an exhaustive search in the worst case, so we do not have sharp running time
guarantees. However, we have the following correctness guarantee.
\begin{proposition} \label{prop:smoothing:bb-search}
Consider an augmented score function $\psi(\cdot, \cdot ; \wv)$,
an integer $K > 0$ and a smoothing parameter $\mu > 0$.
Suppose the upper bound function $\widehat \psi(\cdot, \cdot ; \wv): \mcX \times 2^{\mcY} \to \reals$
satisfies the following properties:
\begin{enumerate}[label=(\alph*), align=left, widest=a, leftmargin=*]
\item $\widehat \psi(\widehat \mcY ; \wv)$ is finite for every $\widehat \mcY \subseteq \mcY$,
\item $\widehat \psi(\widehat \mcY ; \wv) \ge \max_{\yv \in \widehat \mcY} \psi(\yv ; \wv)$
for all $\widehat \mcY \subseteq \mcY$, and,
\item $\widehat \psi(\{\yv\} ; \wv) = \psi(\yv ; \wv)$ for every $\yv \in \mcY$.
\end{enumerate}
Then, we have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=ii, leftmargin=*]
\item Algo.~\ref{algo:top_k:bb} with $K=1$ is a correct implementation of the max oracle.
\item Algo.~\ref{algo:top_k:bb} followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing}) is a correct implementation of the top-$K$ oracle.
\end{enumerate}
\end{proposition}
\noindent
See Appendix~\ref{sec:a:bb_search} for a proof.
The discrete structure that allows inference via branch and bound search cannot be
leveraged to implement the exp oracle.
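The best-first search underlying Algo.~\ref{algo:top_k:bb} can be sketched generically as follows; the functions \texttt{upper\_bound}, \texttt{split} and \texttt{is\_singleton} are user-supplied abstractions matching the requirements of Prop.~\ref{prop:smoothing:bb-search}, and the separable score in the usage example is purely illustrative.
\begin{verbatim}
import heapq
import numpy as np

def branch_and_bound_top_k(root, upper_bound, split, is_singleton, K):
    # Best-first branch and bound; upper_bound must dominate the true score on
    # every subset and coincide with it on singletons.
    heap = [(-upper_bound(root), root)]        # max-heap via sign flip
    results = []
    while heap and len(results) < K:
        bound, node = heapq.heappop(heap)
        if is_singleton(node):
            results.append((-bound, node))     # the bound is exact here
        else:
            for child in split(node):
                heapq.heappush(heap, (-upper_bound(child), child))
    return results                             # K best (score, output) pairs

# Usage with an illustrative separable score over binary outputs of length 4;
# a partial assignment is a prefix, and its bound scores the fixed prefix and
# takes the best completion coordinate-wise.
a = np.random.default_rng(0).standard_normal((4, 2))
ub = lambda node: sum(a[v, node[v]] if v < len(node) else a[v].max()
                      for v in range(len(a)))
print(branch_and_bound_top_k((), ub,
                             lambda node: [node + (0,), node + (1,)],
                             lambda node: len(node) == len(a), K=3))
\end{verbatim}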
\subsection{{Casimir}: Catalyst with Smoothing}
The Catalyst~\citep{lin2017catalyst} approach minimizes regularized objectives centered around the current iterate.
The algorithm proceeds by computing approximate proximal point steps instead of the classical (sub)-gradient steps.
A proximal point step from a point $\wv$ with step-size $\kappa^{-1}$ is defined as the solution of
\begin{equation}\label{eq:prox_point}
\min_{\zv \in \reals^d} F(\zv) + \frac{\kappa}{2}\normasq{2}{\zv-\wv},
\end{equation}
which can also be seen as a gradient step on the Moreau envelope of $F$; see \citet{lin2017catalyst} for a detailed discussion.
While solving the subproblem~\eqref{eq:prox_point} might be as hard as the original problem, we only
require an approximate solution returned by a given optimization method $\mcM$.
The Catalyst approach is then an inexact accelerated proximal point algorithm
that carefully mixes approximate proximal point steps with the extrapolation scheme of \citet{nesterov1983method}.
The {Casimir}{} scheme extends this approach to non-smooth optimization.
For the overall method to be efficient, the subproblems~\eqref{eq:prox_point} must have a low complexity.
That is, there must exist an optimization algorithm $\mcM$ that solves them at a linear rate of convergence.
For the {Casimir}{} approach to handle non-smooth objectives, we need not only to regularize the objective
but also to smooth it. To this end, we define
\[
F_{\mu \omega}(\wv) := \frac{1}{n}\sum_{i=1}^n h_{\mu \omega}(\Am\pow{i} \wv + {\bm b}\pow{i}) + \frac{\lambda}{2} \normasq{2}{\wv}
\]
as a smooth approximation of the objective $F$, and,
\[
F_{\mu \omega, \kappa}(\wv; \zv) := \frac{1}{n}\sum_{i=1}^n h_{\mu \omega}(\Am\pow{i} \wv + {\bm b}\pow{i}) + \frac{\lambda}{2} \normasq{2}{\wv} + \frac{\kappa}{2}\normasq{2}{\wv-\zv}
\]
as a smooth and regularized approximation of the objective centered around a given point $\zv \in \reals^d$.
While the original Catalyst algorithm considered a fixed regularization term $\kappa$,
we vary $\kappa$ and $\mu$ along the iterations.
This enables us to get adaptive smoothing strategies.
The overall method is presented in Algo.~\ref{algo:catalyst}. We first analyze in Sec.~\ref{sec:catalyst:analysis} its complexity
for a generic linearly convergent algorithm $\mcM$.
Thereafter, in Sec.~\ref{sec:catalyst:total_compl}, we compute the total complexity with SVRG~\citep{johnson2013accelerating}
as $\mcM$.
Before that, we specify two practical aspects of the implementation: a proper stopping criterion~\eqref{eq:stopping_criterion}
and a good initialization of subproblems (Line~\ref{line:algo:c:prox_point}).
\paragraph{Stopping Criterion}
Following~\citet{lin2017catalyst}, we
solve subproblem $k$ in Line~\ref{line:algo:c:prox_point} to a degree of relative accuracy specified by
$\delta_k \in [0, 1)$.
In view of the $(\lambda+\kappa_k)$-strong convexity of $F_{\mu_k\omega, \kappa_k}(\cdot\,; \zv_{k-1})$, the functional gap can be controlled by the norm of the gradient; precisely,
it can be seen that $\normasq{2}{\grad F_{\mu_k\omega, \kappa_k}(\widehat\wv; \zv_{k-1})}
\le (\lambda+\kappa_k)\delta_k \kappa_k \normasq{2}{\widehat \wv - \zv_{k-1}}$
is a sufficient condition for
the stopping criterion \eqref{eq:stopping_criterion}.
A practical alternate stopping criterion proposed by \citet{lin2017catalyst} is to fix an iteration budget $T_{\mathrm{budget}}$
and run the inner solver $\mcM$ for exactly $T_{\mathrm{budget}}$ steps.
We do not have a theoretical analysis for this scheme but find that it works well in experiments.
\paragraph{Warm Start of Subproblems}
The rate of convergence of first-order optimization algorithms depends on the initialization,
so we must warm start $\mcM$ at an appropriate initial point in order to obtain
the best convergence on the subproblem~\eqref{eq:prox_point_algo} in Line~\ref{line:algo:c:prox_point} of Algo.~\ref{algo:catalyst}.
We advocate using the prox center $\zv_{k-1}$ as the warm start in iteration $k$.
We also experiment with other warm start strategies in Section~\ref{sec:expt}.
\begin{algorithm}[tb]
\caption{The {Casimir}{} algorithm}
\label{algo:catalyst}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Smoothable objective $F$ of the form \eqref{eq:cvx_pb} with $h$ simple,
smoothing function $\omega$,
linearly convergent algorithm $\mcM$,
non-negative and non-increasing sequence of smoothing parameters $(\mu_k)_{k \ge 1}$,
positive and non-decreasing sequence of regularization parameters $(\kappa_k)_{k\ge1}$,
non-negative sequence of relative target accuracies $(\delta_k)_{k\ge 1}$ and,
initial point $\wv_0$, $\alpha_0 \in (0, 1)$,
time horizon $K$.
\STATE {\bfseries Initialize:} $\zv_0 = \wv_0$.
\FOR{$k=1$ \TO $K$}
\STATE Using $\mcM$ with $\zv_{k-1}$ as the starting point, find \label{line:algo:c:prox_point}
$\wv_{k} \approx \argmin_{\wv\in\reals^d} F_{\mu_k \omega, \kappa_k}(\wv; \zv_{k-1})$ where
\begin{equation}\label{eq:prox_point_algo}
F_{\mu_k \omega, \kappa_k}(\wv; \zv_{k-1}) := \frac{1}{n}\sum_{i=1}^n h_{\mu_k \omega}(\Am\pow{i} \wv + {\bm b}\pow{i}) + \frac{\lambda}{2} \normasq{2}{\wv} + \frac{\kappa_k}{2}\normasq{2}{\wv- \zv_{k-1}}
\end{equation}
such that
\begin{align}\label{eq:stopping_criterion}
F_{\mu_k\omega, \kappa_k}(\wv_k;\zv_{k-1}) - \min_\wv F_{\mu_k\omega, \kappa_k}(\wv;\zv_{k-1})\leq \tfrac{\delta_k\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}}
\end{align}
\STATE Solve for $\alpha_k \geq 0$
\begin{align} \label{eq:c:update_alpha}
\alpha_k^2 (\kappa_{k+1} + \lambda) = (1-\alpha_k) \alpha_{k-1}^2 (\kappa_k + \lambda) + \alpha_k \lambda.
\end{align}
\STATE Set
\begin{align} \label{eq:c:update_support}
\zv_k = \wv_k + \beta_k (\wv_k - \wv_{k-1}),
\end{align}
where
\begin{align} \label{eq:c:update_beta}
\beta_k = \frac{ \alpha_{k-1}(1-\alpha_{k-1}) (\kappa_k + \lambda) }
{ \alpha_{k-1}^2 (\kappa_k + \lambda) + \alpha_k(\kappa_{k+1} + \lambda) }.
\end{align}
\ENDFOR
\RETURN $\wv_K$.
\end{algorithmic}
\end{algorithm}
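The outer loop of Algo.~\ref{algo:catalyst} can be sketched in a few lines of NumPy; the inner solver below is a placeholder for any linearly convergent method $\mcM$, run for a fixed iteration budget on a toy quadratic, so the snippet illustrates the bookkeeping of the updates \eqref{eq:c:update_alpha}--\eqref{eq:c:update_beta} rather than our actual implementation.
\begin{verbatim}
import numpy as np

def casimir(w0, inner_solver, lam, kappas, mus, K, alpha0=0.5):
    # inner_solver(mu, kappa, center, w_init) approximately minimizes the
    # smoothed, regularized subproblem F_{mu omega, kappa}( . ; center),
    # warm-started at w_init (here: the prox center itself).
    w_prev, w, z, alpha_prev = w0.copy(), w0.copy(), w0.copy(), alpha0
    for k in range(1, K + 1):
        kappa = kappas[k - 1]
        kappa_next = kappas[min(k, K - 1)]     # kappa_{k+1}, clamped at the end
        w_prev, w = w, inner_solver(mus[k - 1], kappa, center=z, w_init=z)
        # Nonnegative root of a^2 (kappa_{k+1} + lam)
        #   = (1 - a) alpha_{k-1}^2 (kappa_k + lam) + a lam.
        b = lam - alpha_prev ** 2 * (kappa + lam)
        c = alpha_prev ** 2 * (kappa + lam)
        disc = b ** 2 + 4 * (kappa_next + lam) * c
        alpha = (b + np.sqrt(disc)) / (2 * (kappa_next + lam))
        beta = (alpha_prev * (1 - alpha_prev) * (kappa + lam)
                / (alpha_prev ** 2 * (kappa + lam) + alpha * (kappa_next + lam)))
        z = w + beta * (w - w_prev)            # extrapolation step
        alpha_prev = alpha
    return w

# Toy usage: the "objective" is already a smooth quadratic, so mu is ignored
# and the inner solver is a few plain gradient steps; both are placeholders.
lam = 0.1
Q = np.array([[1.0, 0.3], [0.3, 2.0]])
r = np.array([1.0, -1.0])
def inner_solver(mu, kappa, center, w_init, steps=50, lr=0.1):
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * (Q @ w - r + lam * w + kappa * (w - center))
    return w
print(casimir(np.zeros(2), inner_solver, lam,
              kappas=[1.0] * 10, mus=[0.1] * 10, K=10))
\end{verbatim}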
\subsection{Convergence Analysis of Casimir} \label{sec:catalyst:analysis}
We first state the outer loop complexity results of Algo.~\ref{algo:catalyst} for any generic
linearly convergent algorithm $\mcM$ in Sec.~\ref{sec:catalyst:outer_compl} and prove them in Sec.~\ref{subsec:c:proof}.
Then, we consider the complexity of each inner optimization problem~\eqref{eq:prox_point_algo} in Sec.~\ref{sec:catalyst:inner_compl}
based on properties of $\mcM$.
\subsubsection{Outer Loop Complexity Results} \label{sec:catalyst:outer_compl}
The following theorem states the convergence of the algorithm for a general choice of parameters, where we denote $\wv^* \in \argmin_{\wv\in\reals^d} F(\wv)$ and $F^* = F(\wv^*)$.
\begin{theorem} \label{thm:catalyst:outer}
Consider Problem~\eqref{eq:cvx_pb_finite_sum}.
Suppose
$\delta_k \in [0, 1)$ for all $k \ge 1$, the sequence $(\mu_k)_{k\ge 1}$ is non-negative and non-increasing,
and the sequence $(\kappa_k)_{k \ge 1}$ is strictly positive and non-decreasing.
Further, suppose the smoothing function $\omega: \dom h^* \to \reals$ satisfies
$-D_\omega \le \omega(\uv) \le 0$ for all $\uv \in \dom h^*$ and that
$\alpha_0^2 \ge \lambda / (\lambda + \kappa_1)$.
Then, the sequence $(\alpha_k)_{k \ge 0}$ generated by Algo.~\ref{algo:catalyst}
satisfies $0 < \alpha_k \le \alpha_{k-1} < 1$ for all $k \ge 1$.
Furthermore, the sequence $(\wv_{k})_{k \ge 0}$
of iterates generated by Algo.~\ref{algo:catalyst} satisfies
\begin{align} \label{thm:c:main:main}
F(\wv_k) - F^* \le
\frac{\mcA_0^{k-1}}{\mcB_1^k} \Delta_0 + \mu_k D_\omega
+ \sum_{j=1}^k \frac{\mcA_j^{k-1}}{\mcB_j^k} \left( \mu_{j-1} - (1-\delta_j)\mu_j \right) D_\omega
\,,
\end{align}
where $\mcA_i^j := \prod_{r=i}^j (1-\alpha_r)$,
$\mcB_i^j := \prod_{r=i}^j (1-\delta_r)$,
$\Delta_0 := F(\wv_0) - F^* + \frac{(\kappa_1 + \lambda) \alpha_0^2 - \lambda \alpha_0 } {2(1 - \alpha_0)} \normasq{2}{\wv_0 - \wv^*}$ and
$\mu_0 := 2\mu_1$.
\end{theorem}
Before giving its proof, we present various parameter strategies as corollaries.
Table~\ref{tab:catalyst_corollaries_summary} summarizes the parameter settings and the rates obtained for each setting.
Overall, the target accuracies $\delta_k$ are chosen such that $\mcB_j^k$ is a constant and
the parameters $\mu_k$
and $\kappa_k$ are then carefully chosen
for an almost parameter-free algorithm with the right rate of convergence.
Proofs of these corollaries are provided in Appendix~\ref{subsec:c:proofs_missing_cor}.
The first corollary considers the strongly convex case ($\lambda > 0$) with constant smoothing $\mu_k=\mu$,
assuming that $\eps$ is known {\em a priori}. We note that this is, up to constants, the same complexity obtained by
the original Catalyst scheme on a fixed smooth approximation $F_{\mu\omega}$ with $\mu = \bigO(\eps / D_\omega)$.
\begin{corollary} \label{cor:c:outer_sc}
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Let $q = {\lambda}/(\lambda + \kappa)$.
Suppose $\lambda > 0$ and $\mu_k = \mu$, $\kappa_k = \kappa$, for all $k \ge 1$. Choose $\alpha_0 = \sqrt{q}$ and,
$\delta_k = {\sqrt{q}}/({2 - \sqrt{q}}) \,.$
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \frac{3 - \sqrt{q}}{1 - \sqrt{q}} \mu D_\omega +
2 \left( 1- \frac{\sqrt q}{2} \right)^k \left( F(\wv_0) - F^* \right) \,.
\end{align*}
\end{corollary}
\noindent
Next, we consider the strongly convex case where the target accuracy $\eps$ is not known in advance.
We let smoothing parameters $( \mu_k )_{k \ge 0}$ decrease over time to obtain an adaptive smoothing scheme
that gives progressively better surrogates of the original objective.
\begin{corollary} \label{cor:c:outer_sc:decreasing_mu_const_kappa}
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Let $q = {\lambda}/(\lambda + \kappa)$ and $\eta = 1 - {\sqrt q}/{2}$.
Suppose $\lambda > 0$ and
$\kappa_k = \kappa$, for all $k \ge 1$. Choose $\alpha_0 = \sqrt{q}$ and,
the sequences $(\mu_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$ as
\begin{align*}
\mu_k = \mu \eta^{{k}/{2}} \,, \qquad \text{and,} \qquad
\delta_k = \frac{\sqrt{q}}{2 - \sqrt{q}} \,,
\end{align*}
where $\mu > 0$ is any constant.
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \eta^{{k}/{2}} \left[
2 \left( F(\wv_0) - F^* \right)
+ \frac{\mu D_\omega}{1-\sqrt{q}} \left(2-\sqrt{q} + \frac{\sqrt{q}}{1 - \sqrt \eta} \right)
\right] \, .
\end{align*}
\end{corollary}
\noindent
The next two corollaries consider the unregularized problem, i.e., $\lambda = 0$ with constant and adaptive smoothing respectively.
\begin{corollary} \label{cor:c:outer_smooth}
Consider the setting of Thm.~\ref{thm:catalyst:outer}. Suppose $\mu_k = \mu$, $\kappa_k = \kappa$, for all $k \ge 1$
and $\lambda = 0$. Choose $\alpha_0 = (\sqrt{5}-1)/{2}$ and
$\delta_k = (k+1)^{-2} \,.$
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \frac{8}{(k+2)^2} \left( F(\wv_0) - F^* + \frac{\kappa}{2} \normasq{2}{\wv_0 - \wv^*} \right)
+ \mu D_\omega\left( 1 + \frac{12}{k+2} + \frac{30}{(k+2)^2} \right) \, .
\end{align*}
\end{corollary}
\begin{corollary} \label{cor:c:outer_smooth_dec_smoothing}
Consider the setting of Thm.~\ref{thm:catalyst:outer} with $\lambda = 0$.
Choose $\alpha_0 = (\sqrt{5}-1)/{2}$, and for some non-negative constants $\kappa, \mu$,
define sequences $(\kappa_k)_{k \ge 1}, (\mu_k)_{k \ge 1}, (\delta_k)_{k \ge 1}$ as
\begin{align*}
\kappa_k = \kappa \, k\,, \quad
\mu_k = \frac{\mu}{k} \quad \text{and,} \quad
\delta_k = \frac{1}{(k + 1)^2} \,.
\end{align*}
Then, for $k \ge 2$, we have,
\begin{align*}
F(\wv_k) - F^* \le
\frac{\log(k+1)}{k+1} \left(
2(F(\wv_0) - F^*) + \kappa \normasq{2}{\wv_0 - \wv^*} + 27 \mu D_\omega
\right) \,.
\end{align*}
For the first iteration (i.e., $k = 1$), this bound is off by a constant factor $1 / \log2$.
\end{corollary}
\begin{table*}[t!]
\caption{\small{Summary of outer iteration complexity for Algorithm~\ref{algo:catalyst}
for different parameter settings. We use shorthand
$\Delta F_0 := F(\wv_0) - F^*$ and {$\Delta_0 = \norma{2}{\wv_0 - \wv^*}$}.
Absolute constants are omitted from the rates.
\vspace{2mm}
}}
\begin{adjustbox}{width=\textwidth}
\label{tab:catalyst_corollaries_summary}
\centering
\begin{tabular}{|c||ccccc|c|c|}
\hline
{Cor.} & $\lambda>0$ & $\kappa_k$ & $\mu_k$ & $\delta_k$ & $\alpha_0$ & $F(\wv_k)-F^*$ & Remark \\ \hline\hline
\ref{cor:c:outer_sc} & Yes &
$\kappa$ & $\mu$ & $\frac{\sqrt{q}}{2-\sqrt{q}}$ & $\sqrt{q}$ &
$\left(1- \frac{\sqrt{q}}{2}\right)^k \Delta F_0 + \frac{\mu D}{1-\sqrt{q}}$
& $q = \frac{\lambda}{\lambda+\kappa}$
\\ \hline
\ref{cor:c:outer_sc:decreasing_mu_const_kappa} & Yes & $\kappa$ &
$\mu \left( 1 - \frac{\sqrt{q}}{2} \right)^{k/2}$ &
$\frac{\sqrt{q}}{2-\sqrt{q}}$ & $\sqrt{q}$ &
$\left(1- \frac{\sqrt{q}}{2}\right)^{k/2} \left( \Delta F_0 + \frac{\mu D}{1-\sqrt{q}} \right)$
& $q = \frac{\lambda}{\lambda+\kappa}$
\\ \hline
\rule{0pt}{12pt}
\ref{cor:c:outer_smooth} & No &
$\kappa$ & $\mu$ & $k^{-2}$ & $c$ &
$\frac{1}{k^2} \left(\Delta F_0 + \kappa \Delta_0^2 \right) + \mu D$
& $c = (\sqrt 5 - 1)/ 2$
\\[3pt]
\hline
\rule{0pt}{12pt}
\ref{cor:c:outer_smooth_dec_smoothing} & No &
$\kappa \, k$ & $\mu /k$ & $k^{-2}$ & $c$ &
$\frac{\log k}{k} (\Delta F_0 + \kappa \Delta_0^2 + \mu D )$
& $c = (\sqrt 5 - 1)/ 2$
\\[3pt]
\hline
\end{tabular}
\end{adjustbox}
\end{table*}
\subsubsection{Outer Loop Convergence Analysis}\label{subsec:c:proof}
We now prove Thm.~\ref{thm:catalyst:outer}.
The proof technique largely follows that of \citet{lin2017catalyst}, with the added challenges of accounting
for smoothing and varying Moreau-Yosida regularization.
We first analyze the sequence $(\alpha_k)_{k \ge 0}$. The proof follows from
the algebra of Eq.~\eqref{eq:c:update_alpha}
and has been given in Appendix~\ref{sec:a:c_alpha_k}.
\begin{lemma} \label{lem:c:alpha_k}
Given a positive, non-decreasing sequence $(\kappa_k)_{k\ge 1}$ and $\lambda \ge 0$,
consider the sequence $(\alpha_k)_{k \ge 0}$ defined by \eqref{eq:c:update_alpha}, where
$\alpha_0 \in (0, 1)$ such that $\alpha_0^2 \ge \lambda / (\lambda + \kappa_1)$.
Then, we have for every $k \ge 1$ that $0< \alpha_k \le \alpha_{k-1}$ and,
$
\alpha_k^2 \ge {\lambda}/({\lambda + \kappa_{k+1}}) \,.
$
\end{lemma}
\noindent
We now characterize the effect of an approximate proximal point step
on $F_{\mu\omega}$.
\begin{lemma} \label{lem:c:approx_descent}
Suppose $\widehat \wv \in \reals^d$ satisfies
$F_{\mu\omega, \kappa}(\widehat \wv ;\zv) - \min_{\wv \in \reals^d} F_{\mu\omega, \kappa}( \wv ;\zv) \le \widehat\eps$
for some $\widehat \eps > 0$.
Then, for all $0 < \theta < 1$ and all $\wv \in \reals^d$, we have,
\begin{align} \label{eq:c:approx_descent}
F_{\mu\omega}(\widehat\wv) + \frac{\kappa}{2} \normasq{2}{\widehat\wv - \zv}
+ \frac{\kappa + \lambda}{2}(1-\theta) \normasq{2}{\wv - \widehat\wv}
\le F_{\mu\omega}(\wv) + \frac{\kappa}{2} \normasq{2}{\wv - \zv} + \frac{\widehat\eps}{\theta} \,.
\end{align}
\end{lemma}
\begin{proof}
Let $\widehat F^* = \min_{\wv \in \reals^d} F_{\mu\omega,\kappa}(\wv ; \zv)$.
Let $\widehat \wv^*$ be the unique minimizer of $F_{\mu\omega,\kappa}(\cdot \,; \zv)$.
We have, from $(\kappa + \lambda)$-strong convexity of $F_{\mu\omega,\kappa}(\cdot \,; \zv)$,
\begin{align*}
F_{\mu\omega,\kappa}(\wv ; \zv) &\ge \widehat F^* +\frac{\kappa +\lambda}{2} \normasq{2}{\wv - \widehat \wv^*} \\
&\ge \left( F_{\mu\omega,\kappa}(\widehat\wv ; \zv) - \widehat\eps \right)
+ \frac{\kappa + \lambda}{2}(1-\theta) \normasq{2}{\wv - \widehat\wv}
- \frac{\kappa + \lambda}{2} \left( \frac{1}{\theta} - 1 \right) \normasq{2}{\widehat\wv - \widehat \wv^*} \,,
\end{align*}
where we used that $\widehat\wv$ is $\widehat\eps$-suboptimal, together with Lemma~\ref{lem:c:helper:quadratic}
from Appendix~\ref{subsec:a:catalyst:helper}.
From $(\kappa + \lambda)$-strong convexity of $F_{\mu\omega, \kappa}(\cdot ; \zv)$,
we have,
\begin{align*}
\frac{\kappa + \lambda }{2} \normasq{2}{\widehat\wv - \widehat \wv^*} \le
F_{\mu\omega,\kappa}(\widehat\wv ; \zv) - \widehat F^* \le \widehat\eps\,.
\end{align*}
Since $(1/\theta - 1)$ is non-negative,
we can plug this into the previous statement to get,
\begin{align*}
F_{\mu\omega,\kappa}(\wv ; \zv) \ge F_{\mu\omega,\kappa}(\widehat\wv ; \zv)
+ \frac{\kappa + \lambda}{2} (1-\theta) \normasq{2}{\wv - \widehat\wv} - \frac{\widehat\eps}{\theta}\,.
\end{align*}
Substituting the definition of $F_{\mu\omega,\kappa}(\cdot \,; \zv)$
from \eqref{eq:prox_point_algo} completes the proof.
\end{proof}
We now define a few auxiliary sequences integral to the proof.
Define sequences $(\vv_k)_{k \ge 0}$, $(\gamma_k)_{k \ge 0}$, $(\eta_k)_{k \ge 0}$, and $(\rv_k)_{k \ge 1}$ as
\begin{align}
\label{eq:c:v_defn_base}
\vv_0 &= \wv_0 \,, \\
\label{eq:c:v_defn}
\vv_k &= \wv_{k-1} + \frac{1}{\alpha_{k-1}} (\wv_k - \wv_{k-1}) \,, \, k \ge 1 \,, \\
\label{eq:c:gamma_defn_base}
\gamma_0 &= \frac{(\kappa_1 + \lambda) \alpha_0^2 - \lambda \alpha_0 } {1 - \alpha_0} \,, \\
\label{eq:c:gamma_defn}
\gamma_k &= (\kappa_k + \lambda) \alpha_{k-1}^2 \, , \, k \ge 1 \,, \\
\label{eq:c:eta_defn}
\eta_k &= \frac{\alpha_k \gamma_k}{\gamma_{k+1} + \alpha_k \gamma_k} \,, \, k\ge 0 \,, \\
\label{eq:c:ly_vec_defn}
\rv_k &= \alpha_{k-1} \wv^* + ( 1- \alpha_{k-1}) \wv_{k-1} \, , \, k \ge 1\,.
\end{align}
One might recognize $\gamma_k$ and $\vv_k$ from their resemblance to
counterparts from the proof of \citet{nesterov2013introductory}.
Now, we claim some properties of these sequences.
\begin{claim} \label{claim:c:sequences}
For the sequences defined in \eqref{eq:c:v_defn_base}-\eqref{eq:c:ly_vec_defn}, we have,
\begin{align}
\label{eq:c:gamma_defn_2}
\gamma_k &= \frac{(\kappa_{k+1} + \lambda) \alpha_k^2 - \lambda \alpha_k } {1 - \alpha_k} \, , \, k \ge 0\,, \\
\label{eq:c:gamma_defn_3}
\gamma_{k+1} &= (1- \alpha_k) \gamma_k + \lambda \alpha_k \, , \, k \ge 0\,, \\
\label{eq:c:eta_defn_2}
\eta_k &= \frac{\alpha_k \gamma_k}{\gamma_k + \alpha_k \lambda} \, , \, k \ge 0 \\
\label{eq:c:v_defn_2}
\zv_k &= \eta_k \vv_k + (1- \eta_k) \wv_k \, , \, k \ge 0\,.
\end{align}
\end{claim}
\begin{proof}
Eq.~\eqref{eq:c:gamma_defn_2}
follows from plugging in \eqref{eq:c:update_alpha} in \eqref{eq:c:gamma_defn} for $k\ge 1$,
while for $k=0$, it is true by definition.
Eq.~\eqref{eq:c:gamma_defn_3} follows from plugging \eqref{eq:c:gamma_defn} in \eqref{eq:c:gamma_defn_2}.
Eq.~\eqref{eq:c:eta_defn_2} follows from \eqref{eq:c:gamma_defn_3} and \eqref{eq:c:eta_defn}.
Lastly, to show \eqref{eq:c:v_defn_2}, we shall show instead that \eqref{eq:c:v_defn_2} is equivalent
to the update \eqref{eq:c:update_support} for $\zv_k$. We have,
\begin{align*}
\zv_k &\,= \eta_k \vv_k + (1-\eta_k) \wv_k \\
&\stackrel{\eqref{eq:c:v_defn}}{=} \eta_k \left( \wv_{k-1}
+ \frac{1}{\alpha_{k-1}} (\wv_k - \wv_{k-1}) \right) + (1-\eta_k) \wv_k \\
&\,= \wv_k + \eta_k \left(\frac{1}{\alpha_{k-1}} - 1 \right) (\wv_{k} - \wv_{k-1}) \,.
\end{align*}
Now,
\begin{align*}
\eta_k \left(\frac{1}{\alpha_{k-1}} - 1 \right)
&\stackrel{\eqref{eq:c:eta_defn}}{=} \frac{\alpha_k \gamma_k}{\gamma_{k+1} + \alpha_k \gamma_k } \cdot \frac{1-\alpha_{k-1}}{\alpha_{k-1}}
\\& \stackrel{\eqref{eq:c:gamma_defn}}{=} \frac{\alpha_k (\kappa_k + \lambda) \alpha_{k-1}^2 }
{ \alpha_k^2 (\kappa_{k+1} + \lambda) +\alpha_k (\kappa_k + \lambda) \alpha_{k-1}^2 }
\cdot \frac{1-\alpha_{k-1}}{\alpha_{k-1}}
\stackrel{\eqref{eq:c:update_beta}}{=} \beta_k \, ,
\end{align*}
completing the proof.
\end{proof}
\begin{claim} \label{claim:c:ly_sequence}
The sequence $(\rv_k)_{k \ge 1}$ from \eqref{eq:c:ly_vec_defn} satisfies
\begin{align} \label{eq:c:norm_ly_sequence}
\normasq{2}{\rv_k - \zv_{k-1}} \le \alpha_{k-1} (\alpha_{k-1} - \eta_{k-1}) \normasq{2}{\wv_{k-1} - \wv^*}
+ \alpha_{k-1} \eta_{k-1} \normasq{2}{\vv_{k-1} - \wv^*} \,.
\end{align}
\end{claim}
\begin{proof}
Notice that $\eta_k \stackrel{\eqref{eq:c:eta_defn_2}}{=} \alpha_k \cdot \frac{\gamma_k}{\gamma_k + \alpha_k \lambda} \le \alpha_k$.
Hence, using convexity of the squared Euclidean norm, we get,
\begin{align*}
\normasq{2}{\rv_k - \zv_{k-1}} & \stackrel{\eqref{eq:c:v_defn_2}}{=}
\normasq{2}{ (\alpha_{k-1} - \eta_{k-1})(\wv^* - \wv_{k-1}) + \eta_{k-1}(\wv^* - \vv_{k-1}) } \\
&\,= \alpha_{k-1}^2 \normsq*{ \left(1 - \frac{\eta_{k-1}}{\alpha_{k-1}} \right) (\wv^* - \wv_{k-1})
+ \frac{\eta_{k-1}}{\alpha_{k-1}} (\wv^* - \vv_{k-1}) }_2 \\
&\stackrel{(*)}{\le} \alpha_{k-1}^2 \left(1 - \frac{\eta_{k-1}}{\alpha_{k-1}} \right) \normasq{2}{\wv_{k-1} - \wv^*}
+ \alpha_{k-1}^2 \frac{\eta_{k-1}}{\alpha_{k-1}} \normasq{2}{\vv_{k-1} - \wv^*} \\
&\,= \alpha_{k-1} (\alpha_{k-1} - \eta_{k-1}) \normasq{2}{\wv_{k-1} - \wv^*}
+ \alpha_{k-1} \eta_{k-1} \normasq{2}{\vv_{k-1} - \wv^*} \,.
\end{align*}
\end{proof}
For all $\mu \ge \mu' \ge 0$, we know from Prop.~\ref{thm:setting:beck-teboulle} that
\begin{align}
0 \le F_{\mu \omega}(\wv) - F_{\mu'\omega}(\wv) \le (\mu - \mu') D_\omega \,.
\label{asmp:c:smoothing:1}
\end{align}
We now define the sequence $( S_k )_{k\ge0}$ to play the role of a
potential function here.
\begin{align}
\label{eq:c:ly_fn_defn}
\begin{split}
S_0 &= (1 - \alpha_0) (F(\wv_0) - F(\wv^*)) + \frac{\alpha_0 \kappa_1 \eta_0}{2} \normasq{2}{\wv_0 - \wv^*}\,, \\
S_k &= (1-\alpha_k) ( F_{\mu_k \omega}(\wv_k) - F_{\mu_k \omega}(\wv^*)) + \frac{\alpha_k \kappa_{k+1} \eta_k}{2} \normasq{2}{\vv_k - \wv^*}\,, \, k\ge 1 \,.
\end{split}
\end{align}
We are now ready to analyze the effect of one outer loop. This lemma is the crux of the analysis.
\begin{lemma} \label{lem:c:one_step_ly}
Suppose $F_{\mu_k\omega, \kappa_k}(\wv_k ;\zv) - \min_{\wv\in\reals^d} F_{\mu_k\omega, \kappa_k}(\wv ; \zv) \le \eps_k$
for some $\eps_k > 0$. The following statement holds for all $0 < \theta_k < 1$:
\begin{align} \label{eq:c:one_step_ly}
\frac{S_k}{1-\alpha_k} \le S_{k-1} + (\mu_{k-1} - \mu_k) D_\omega + \frac{\eps_k}{\theta_k}
- \frac{\kappa_k}{2}\normasq{2}{\wv_k - \zv_{k-1}} + \frac{\kappa_{k+1}\eta_k \alpha_k \theta_k}{2(1-\alpha_k)} \normasq{2}{\vv_k - \wv^*} \,,
\end{align}
where we set $\mu_0 := 2 \mu_1$.
\end{lemma}
\begin{proof}
For ease of notation, let $F_k := F_{\mu_k \omega}$, and $D := D_\omega$.
By $\lambda$-strong convexity of $F_{\mu_k\omega}$, we have,
\begin{align} \label{eq:c:proof:step_sc_r_k}
F_k(\rv_k) \le \alpha_{k-1} F_k(\wv^*) + (1-\alpha_{k-1}) F_k(\wv_{k-1}) -
\frac{\lambda \alpha_{k-1} (1-\alpha_{k-1})}{2} \normasq{2}{\wv_{k-1} - \wv^*} \,.
\end{align}
We now invoke Lemma~\ref{lem:c:approx_descent} on the function $F_{\mu_k \omega, \kappa_k}(\cdot ; \zv_{k-1})$ with
$\widehat \eps = \eps_k$ and $\wv = \rv_k$ to get,
\begin{align} \label{eq:c:proof:main_eq_unsimplified}
F_k(\wv_k) + \frac{\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}} + \frac{\kappa_k + \lambda}{2}(1-\theta_k) \normasq{2}{\rv_k - \wv_k}
\le F_k(\rv_k) + \frac{\kappa_k}{2} \normasq{2}{\rv_k - \zv_{k-1}} + \frac{\eps_k}{\theta_k} \, .
\end{align}
We shall separately manipulate the left and right hand sides of \eqref{eq:c:proof:main_eq_unsimplified},
starting with the right hand side, which we call $\mcR$.
We have, using \eqref{eq:c:proof:step_sc_r_k} and~\eqref{eq:c:norm_ly_sequence},
\begin{align*}
\mcR
\le& \,
(1- \alpha_{k-1}) F_k(\wv_{k-1}) + \alpha_{k-1} F_k(\wv^*)
- \frac{\lambda \alpha_{k-1} (1-\alpha_{k-1})}{2} \normasq{2}{\wv_{k-1} - \wv^*}
\\
&+ \frac{\kappa_k}{2} \alpha_{k-1}(\alpha_{k-1} - \eta_{k-1}) \normasq{2}{\wv_{k-1} - \wv^*}
+ \frac{\kappa_k \alpha_{k-1} \eta_{k-1}}{2} \normasq{2}{\vv_{k-1} - \wv^*} + \frac{\eps_k}{\theta_k} \,.
\end{align*}
We notice now that
\begin{align}
\alpha_{k-1} - \eta_{k-1}
&\stackrel{\eqref{eq:c:eta_defn_2}}{=} \alpha_{k-1} - \frac{\alpha_{k-1}\gamma_{k-1}}{\gamma_k + \alpha_{k-1} \gamma_{k-1}} \nonumber\\
&\,= \alpha_{k-1} \left( \frac{\gamma_k - \gamma_{k-1} ( 1- \alpha_{k-1})}{\gamma_k + \alpha_{k-1} \gamma_{k-1}} \right)
\nonumber\\
&\stackrel{\eqref{eq:c:gamma_defn_3}}{=} \frac{\alpha_{k-1}^2 \lambda}{\gamma_{k-1} + \alpha_{k-1}\lambda}
\nonumber\\
&\stackrel{\eqref{eq:c:gamma_defn_2}}{=} \frac{\alpha_{k-1}^2 \lambda (1-\alpha_{k-1})}
{(\kappa_k + \lambda) \alpha_{k-1}^2 - \lambda \alpha_{k-1} + (1-\alpha_{k-1})\alpha_{k-1}\lambda}
\nonumber\\
&\,= \frac{\lambda}{\kappa_k}(1-\alpha_{k-1}) \,, \label{eq:c:one_step_ly_pf_1}
\end{align}
and hence the terms containing $\normasq{2}{\wv_{k-1} - \wv^*}$ cancel out.
Therefore, we get,
\begin{align} \label{eq:c:proof:main_eq:rhs:simplified}
\mcR
\le
(1 - \alpha_{k-1}) F_k(\wv_{k-1}) + \alpha_{k-1} F_k(\wv^*)
+ \frac{\kappa_k \alpha_{k-1} \eta_{k-1}}{2} \normasq{2}{\vv_{k-1} - \wv^*} + \frac{\eps_k}{\theta_k} \,.
\end{align}
To move on to the left hand side, we note that
\begin{align} \label{eq:c:one_step_ly_proof_prod}
\alpha_k \eta_k \nonumber
&\stackrel{\eqref{eq:c:eta_defn_2}}{=} \frac{\alpha_k^2 \gamma_k}{\gamma_k + \alpha_k \lambda}
\stackrel{\eqref{eq:c:gamma_defn},\eqref{eq:c:gamma_defn_2}}{=} \frac{\alpha_k^2 \alpha_{k-1}^2 (\kappa_k + \lambda)}
{\frac{(\kappa_{k+1} + \lambda) \alpha_k^2 - \lambda \alpha_k}{1-\alpha_k} + \alpha_k \lambda } \\
&\,=\frac{ (1-\alpha_k)(\kappa_k + \lambda) \alpha_{k-1}^2 \alpha_k^2}{(\kappa_{k+1} + \lambda)\alpha_k^2 - \lambda \alpha_k^2}
= (1-\alpha_k) \alpha_{k-1}^2 \frac{\kappa_k + \lambda}{\kappa_{k+1}} \,.
\end{align}
Therefore,
\begin{align} \label{eq:c:one_step_ly_pf_2}
F_k(\wv_k) - F_k(\wv^*) + \frac{\kappa_k + \lambda}{2} \alpha_{k-1}^2 \normasq{2}{\vv_k - \wv^*}
\stackrel{\eqref{eq:c:ly_fn_defn},\eqref{eq:c:one_step_ly_proof_prod}}{=}
\frac{S_k}{1 - \alpha_{k}} \,.
\end{align}
Using
$\rv_k - \wv_k \stackrel{\eqref{eq:c:v_defn}}{=} \alpha_{k-1}(\wv^* - \vv_{k})$,
we simplify the left hand side of \eqref{eq:c:proof:main_eq_unsimplified}, which we call $\mcL$, as
\begin{align} \label{eq:c:proof:main_eq:lhs:simplified}
\nonumber
\mcL &= F_k(\wv_k) + \frac{\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}} +
\frac{\kappa_k + \lambda}{2}(1-\theta_k) \alpha_{k-1}^2 \normasq{2}{\vv_k - \wv^*} \\
&\stackrel{\eqref{eq:c:one_step_ly_pf_2}}{=}
\frac{S_k}{1-\alpha_k} + F_k(\wv^*) + \frac{\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}}
- \frac{\kappa_{k+1} \alpha_k \eta_k \theta_k}{2 (1-\alpha_k)} \normasq{2}{\vv_k - \wv^*} \,.
\end{align}
In view of \eqref{eq:c:proof:main_eq:rhs:simplified} and \eqref{eq:c:proof:main_eq:lhs:simplified},
we can simplify \eqref{eq:c:proof:main_eq_unsimplified} as
\begin{align} \label{eq:c:one_step_ly_pf_2_int}
\begin{aligned}
\frac{S_k}{1-\alpha_k} & + \frac{\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}}
- \frac{\kappa_{k+1}\alpha_k\eta_k\theta_k}{2(1-\alpha_k)} \normasq{2}{\vv_k - \wv^*}
\\&\le (1 - \alpha_{k-1})\left( F_k(\wv_{k-1}) - F_k(\wv^*) \right)
+ \frac{\kappa_k \alpha_{k-1} \eta_{k-1}}{2}\normasq{2}{\vv_{k-1} - \wv^*} + \frac{\eps_k}{\theta_k} \,.
\end{aligned}
\end{align}
We make a distinction for $k \ge 2$ and $k=1$ here. For $k \ge 2$,
the condition that $\mu_{k-1} \ge \mu_k$ gives us,
\begin{align} \label{eq:c:one_step_ly_pf_3}
F_k(\wv_{k-1}) - F_k(\wv^*) \stackrel{\eqref{asmp:c:smoothing:1}}{\le}
F_{k-1}(\wv_{k-1}) - F_{k-1}(\wv^*) + (\mu_{k-1} - \mu_{k}) D \, .
\end{align}
The right hand side of \eqref{eq:c:one_step_ly_pf_2_int} can now be upper bounded by
\begin{align*}
(1 - \alpha_{k-1})(\mu_{k-1} - \mu_k) D + S_{k-1} + \frac{\eps_k}{\theta_k} \,,
\end{align*}
and noting that $1-\alpha_{k-1} \le 1$ yields \eqref{eq:c:one_step_ly} for $k \ge 2$.
For $k=1$, we note that $S_{k-1} (= S_0)$ is defined in terms of $F(\wv)$. So we have,
\begin{align*}
F_1(\wv_0) - F_1(\wv^*) \le F(\wv_0) - F(\wv^*) + \mu_1 D = F(\wv_0) - F(\wv^*) + (\mu_0 - \mu_1) D\,,
\end{align*}
where the last equality uses $\mu_0 = 2\mu_1$. This is of the same form as \eqref{eq:c:one_step_ly_pf_3}. Therefore,
\eqref{eq:c:one_step_ly} holds for $k=1$ as well.
%
\end{proof}
\noindent
We now prove Thm.~\ref{thm:catalyst:outer}.
\begin{proof} [Proof of Thm.~\ref{thm:catalyst:outer}]
We continue to use shorthand $F_k := F_{\mu_k \omega}$, and $D := D_\omega$.
We now apply Lemma~\ref{lem:c:one_step_ly}.
In order to satisfy the supposition of Lemma~\ref{lem:c:one_step_ly} that $\wv_k$ is $\eps_k$-suboptimal,
we make the choice $\eps_k = \frac{\delta_k \kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}}$ (cf.~\eqref{eq:stopping_criterion}).
Plugging this in and setting $\theta_k = \delta_k < 1$, we get from~\eqref{eq:c:one_step_ly},
\begin{align*}
\frac{S_k}{1-\alpha_k} - \frac{\kappa_{k+1} \eta_k \alpha_k \delta_k}{2(1 - \alpha_k)} \normasq{2}{\vv_k - \wv^*}
\le S_{k-1}
+ (\mu_{k-1} - \mu_k) D \,.
\end{align*}
The left hand side simplifies to $ S_k \, ({1- \delta_k})/({1-\alpha_k}) + \delta_k ( F_k(\wv_k) - F_k(\wv^*))$.
Note that $F_k(\wv_k) - F_k(\wv^*) \stackrel{\eqref{asmp:c:smoothing:1}}{\ge} F(\wv_k) - F(\wv^*) - \mu_k D \ge -\mu_k D$.
From this, noting that $\alpha_k \in (0, 1)$ for all $k$, we get,
\begin{align*}
S_k \left(\frac{1-\delta_k}{1-\alpha_k} \right) \le S_{k-1} + \delta_k \mu_k D + (\mu_{k-1} - \mu_k) D\,,
\end{align*}
or equivalently,
\begin{align*}
S_k \le \left( \frac{1-\alpha_k}{1-\delta_k} \right) S_{k-1} +
\left( \frac{1-\alpha_k}{1-\delta_k} \right) (\mu_{k-1} - (1-\delta_k) \mu_k) D\,.
\end{align*}
Unrolling the recursion for $S_k$, we now have,
\begin{align} \label{eq:c:pf_thm_main_1}
S_k \le \left( \prod_{j=1}^k \frac{1-\alpha_j}{1-\delta_j} \right) S_0
+ \sum_{j=1}^k \left( \prod_{i=j}^k \frac{1-\alpha_i}{1-\delta_i} \right) (\mu_{j-1} - (1- \delta_j) \mu_j) D\,.
\end{align}
Now, we need to reason about $S_0$ and $S_k$ to complete the proof. To this end, consider $\eta_0$:
\begin{align}
\eta_0 &\stackrel{\eqref{eq:c:eta_defn}}{=} \frac{\alpha_0 \gamma_0}{\gamma_1 + \alpha_0 \gamma_0}
\nonumber\\
&\stackrel{\eqref{eq:c:gamma_defn_base}}{=} \frac{\alpha_0 \gamma_0}
{(\kappa_1 + \lambda)\alpha_0^2 + \tfrac{\alpha_0}{1-\alpha_0}\left( (\kappa_1 + \lambda)\alpha_0^2 - \lambda \alpha_0 \right)}
\nonumber\\
&\, = \frac{\alpha_0 \gamma_0 (1-\alpha_0)}{(\kappa_1 + \lambda)\alpha_0^2 - \lambda \alpha_0^2}
= (1-\alpha_0) \frac{\gamma_0}{\kappa_1 \alpha_0} \,. \label{eq:c:thm_pf_1}
\end{align}
With this, we can expand out $S_0$ to get
\begin{align*}
S_0 &\stackrel{\eqref{eq:c:ly_fn_defn}}{=}
(1- \alpha_0) \left(F(\wv_0) - F(\wv^*)\right) + \frac{\alpha_0 \kappa_1 \eta_0}{2} \normasq{2}{\wv_0 - \wv^*} \\
&\stackrel{\eqref{eq:c:thm_pf_1}}{=}
(1- \alpha_0) \left( F(\wv_0) - F^* + \frac{\gamma_0}{2}\normasq{2}{\wv_0 - \wv^*} \right) \,.
\end{align*}
Lastly, we reason about $S_k$ for $k \ge 1$ as,
\begin{align*}
S_k \stackrel{\eqref{eq:c:ly_fn_defn}}{\ge}
(1-\alpha_k) \left(F_k(\wv_k) - F_k(\wv^*) \right)
\stackrel{\eqref{asmp:c:smoothing:1}}{\ge}
(1-\alpha_k) \left( F(\wv_k) - F(\wv^*) - \mu_k D \right) \,.
\end{align*}
Plugging this into the left hand side of \eqref{eq:c:pf_thm_main_1} completes the proof.
\end{proof}
\subsubsection{Inner Loop Complexity} \label{sec:catalyst:inner_compl}
Consider a class $\mcF_{L, \lambda}$ of functions defined as
\[
\mcF_{L, \lambda} = \left\{
f : \reals^d \to \reals \text{ such that $f$ is $L$-smooth and $\lambda$-strongly convex}
\right\} \,.
\]
We now formally define a linearly convergent algorithm on this class of functions.
\begin{definition} \label{defn:c:linearly_convergent}
A first order algorithm $\mcM$ is said to be linearly convergent with parameters
$C : \reals_+ \times \reals_+ \to \reals_+$ and $\tau : \reals_+ \times \reals_+ \to (0, 1)$
if the following holds: for all $L \ge \lambda > 0$, and every $f \in \mcF_{L, \lambda}$ and $\wv_0 \in \reals^d$,
$\mcM$ started at $\wv_0$ generates a sequence $(\wv_k)_{k \ge 0}$ that satisfies:
\begin{align} \label{eq:def:linearly_convergent_2}
\expect f(\wv_k) - f^* \le C(L, \lambda) \left( 1 - \tau(L, \lambda) \right)^k \left( f(\wv_0) - f^* \right)\, ,
\end{align}
where $f^* := \min_{\wv\in\reals^d} f(\wv)$ and the expectation is over the randomness of $\mcM$.
\end{definition}
The parameter $\tau$ determines the rate of convergence of the algorithm.
For instance, batch gradient descent is a deterministic linearly convergent algorithm with $\tau(L, \lambda)\inv = L/\lambda$ and
incremental algorithms such as SVRG and SAGA satisfy requirement~\eqref{eq:def:linearly_convergent_2} with
$\tau(L,\lambda)\inv = c(n + \nicefrac{L}{\lambda})$ for some universal constant $c$.
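
\noindent
As a quick numerical illustration of Definition~\ref{defn:c:linearly_convergent}, the following self-contained Python sketch runs batch gradient descent with step size $1/L$ on a random strongly convex quadratic and checks the $(1-\lambda/L)^k$ decay of the suboptimality; the random instance is purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, lam = 20, 0.1
M = rng.standard_normal((d, d))
H = M.T @ M + lam * np.eye(d)            # at least lam-strongly convex
L = np.linalg.eigvalsh(H).max()          # smoothness constant
b = rng.standard_normal(d)

f = lambda w: 0.5 * w @ H @ w - b @ w
w_star = np.linalg.solve(H, b)
f_star = f(w_star)

w = np.zeros(d)
gap0 = f(w) - f_star
for k in range(1, 51):
    w = w - (H @ w - b) / L              # gradient step with step size 1/L
    assert f(w) - f_star <= (1 - lam / L) ** k * gap0 + 1e-12
print("gradient descent satisfies the (1 - lam/L)^k rate on this instance")
\end{verbatim}
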
The warm start strategy in
step $k$ of Algo.~\ref{algo:catalyst} is to initialize $\mcM$ at the prox center $\zv_{k-1}$.
The next proposition, due to \citet[Cor. 16]{lin2017catalyst} bounds the expected number of iterations of $\mcM$ required to
ensure that $\wv_k$ satisfies \eqref{eq:stopping_criterion}. Its proof has been given in Appendix~\ref{sec:c:proofs:inner_compl}
for completeness.
\begin{proposition} \label{prop:c:inner_loop_final}
Consider $F_{\mu\omega, \kappa}(\cdot \, ;\zv)$ defined in Eq.~\eqref{eq:prox_point_algo},
and a linearly convergent algorithm $\mcM$ with parameters $C$, $\tau$.
Let $\delta \in [0,1)$. Suppose $F_{\mu\omega}$ is $L_{\mu\omega}$-smooth and
$\lambda$-strongly convex.
Then the expected number of iterations $\expect[\widehat T]$ of $\mcM$ when started at $\zv$
in order to obtain $\widehat \wv \in \reals^d$ that satisfies
\begin{align*}
F_{\mu\omega, \kappa}(\widehat\wv;\zv) - \min_\wv F_{\mu\omega, \kappa}(\wv;\zv)\leq \tfrac{\delta\kappa}{2} \normasq{2}{\widehat\wv - \zv}
\end{align*}
is upper bounded by
\begin{align*}
\expect[\widehat T] \le \frac{1}{\tau(L_{\mu\omega} + \kappa, \lambda + \kappa)} \log\left(
\frac{8 C(L_{\mu\omega} + \kappa, \lambda + \kappa)}{\tau(L_{\mu\omega} + \kappa, \lambda + \kappa)} \cdot
\frac{L_{\mu\omega} + \kappa}{\kappa \delta} \right) + 1 \,.
\end{align*}
\end{proposition}
\begin{table*}[t!]
\caption{\small{Summary of global complexity of {Casimir-SVRG}, i.e., Algorithm~\ref{algo:catalyst}
with SVRG as the inner solver for various parameter settings.
We show $\expect[N]$, the expected total number of SVRG iterations required to obtain an accuracy $\eps$,
up to constants and factors logarithmic in problem parameters.
We denote
$\Delta F_0 := F(\wv_0) - F^*$ and {$\Delta_0 = \norma{2}{\wv_0 - \wv^*}$}.
Constants $D, A$ are short for $D_\omega, A_\omega$ (see \eqref{eq:c:A_defn}).
\vspace{2mm}
}}
\begin{adjustbox}{width=\textwidth}
\label{tab:sc-svrg_rates_summary}
\centering
\begin{tabular}{|c||cccc|c|c|}
\hline
\rule{0pt}{12pt}
{Prop.} & $\lambda>0$ & $\mu_k$ & $\kappa_k$ & $\delta_k$ & $\expect[N]$ & Remark \\[3pt] \hline\hline
\rule{0pt}{15pt}
\ref{prop:c:total_compl_svrg_main}
& Yes & $\sfrac{\eps}{D}$ &
$\sfrac{A D}{\eps n} - \lambda$ & $\sqrt\frac{\lambda\eps n}{A D}$ &
$n + \sqrt{\frac{A D n}{\lambda \eps}} $ & fix $\eps$ in advance
\\[5pt] \hline
\rule{0pt}{15pt}
\ref{prop:c:total_compl_sc:dec_smoothing_main}
& Yes & $\mu c^k $ & $\lambda$ & $c'$ &
$ n + \frac{A}{\lambda\eps} \frac{\Delta F_0 + \mu D}{\mu}$ &
$c,c'<1$ are universal constants
\\[5pt] \hline
\rule{0pt}{15pt}
\ref{prop:c:total_compl_svrg_smooth_main}
& No & $\sfrac{\eps}{D}$ &
$\sfrac{A D}{\eps n}$ & $1/k^2$ &
$n\sqrt{\frac{\Delta F_0}{\eps}} +
\frac{ \sqrt{A D n} \Delta_0}{\eps} $ & fix $\eps$ in advance
\\[5pt] \hline
\rule{0pt}{15pt}
\ref{prop:c:total_compl_nsc:dec_smoothing_main} & No & $\sfrac{\mu}{k}$ &
$\kappa_0 \, k$ & $1/k^2$ &
$\frac{\widehat\Delta_0}{\eps} \left( n + \frac{A}{\mu \kappa_0} \right) $ &
$\widehat\Delta_0 = \Delta F_0 + \frac{\kappa_0}{2} \Delta_0^2 + \mu D$
\\[5pt] \hline
\end{tabular}
\end{adjustbox}
\end{table*}
\subsection{Casimir with SVRG} \label{sec:catalyst:total_compl}
We now choose SVRG \citep{johnson2013accelerating} to be the linearly convergent algorithm $\mcM$,
resulting in an algorithm called {Casimir-SVRG}{}.
The rest of this section analyzes the total iteration complexity of
{Casimir-SVRG}{} to solve Problem~\eqref{eq:cvx_pb_finite_sum}.
The proofs of the results from this section are calculations
stemming from combining the outer loop complexity from
Cor.~\ref{cor:c:outer_sc} to~\ref{cor:c:outer_smooth_dec_smoothing} with
the inner loop complexity from Prop.~\ref{prop:c:inner_loop_final},
and are relegated to Appendix~\ref{sec:c:proofs:total_compl}.
Table~\ref{tab:sc-svrg_rates_summary} summarizes the results of this section.
Recall that if $\omega$ is 1-strongly convex with respect to $\norma{\alpha}{\cdot}$, then
$h_{\mu\omega}(\Am \wv + \bv)$ is $L_{\mu\omega}$-smooth with respect to $\norma{2}{\cdot}$,
where $L_{\mu\omega} = \normasq{2,\alpha}{\Am} / \mu$.
Therefore, the complexity of solving problem~\eqref{eq:cvx_pb_finite_sum} will depend on
\begin{align} \label{eq:c:A_defn}
A_\omega := \max_{i=1,\cdots,n} \normasq{2, \alpha}{\Am\pow{i}} \,.
\end{align}
\begin{remark} \label{remark:smoothing:l2vsEnt}
We have that $\norma{2,2}{\Am} = \norma{2}{\Am}$ is the spectral norm of $\Am$ and
$\norma{2,1}{\Am} = \max_j \norma{2}{\av_j}$ is the largest row norm, where $\av_j$ is the $j$th row of $\Am$.
Moreover, we have that $\norma{2,2}{\Am} \ge \norma{2,1}{\Am}$.
\end{remark}
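
\noindent
The two norms in Remark~\ref{remark:smoothing:l2vsEnt}, and hence the constant $A_\omega$ of \eqref{eq:c:A_defn}, can be computed directly; a minimal NumPy sketch on toy matrices (the random data is only for illustration):
\begin{verbatim}
import numpy as np

def squared_norm_2_alpha(A, alpha):
    """||A||_{2,alpha}^2: squared spectral norm (alpha = 2) or squared
    largest row 2-norm (alpha = 1), as in the remark above."""
    if alpha == 2:
        return np.linalg.norm(A, ord=2) ** 2
    if alpha == 1:
        return (np.linalg.norm(A, axis=1) ** 2).max()
    raise ValueError("alpha must be 1 or 2")

rng = np.random.default_rng(0)
A_list = [rng.standard_normal((30, 10)) for _ in range(5)]   # toy A^{(i)}
A_l2 = max(squared_norm_2_alpha(A, 2) for A in A_list)   # l2^2 smoothing
A_ent = max(squared_norm_2_alpha(A, 1) for A in A_list)  # entropy smoothing
assert A_l2 >= A_ent   # consistent with ||A||_{2,2} >= ||A||_{2,1}
\end{verbatim}
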
\noindent
We start with the strongly convex case with constant smoothing.
\begin{proposition} \label{prop:c:total_compl_svrg_sc} \label{prop:c:total_compl_svrg_main}
Consider the setting of Thm.~\ref{thm:catalyst:outer} and
fix $\eps > 0$.
Suppose we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver and parameters
$\mu_k = \mu = \eps / (10 D_\omega)$, and $\kappa_k = \kappa$ chosen as
\begin{align*}
\kappa =
\begin{cases}
\frac{A}{\mu n} - \lambda \,, \text{ if } \frac{A}{\mu n} > 4 \lambda \\
\lambda \,, \text{ otherwise}
\end{cases} \,,
\end{align*}
$q = {\lambda}/{(\lambda + \kappa)}$, $\alpha_0 = \sqrt{q}$, and
$\delta = {\sqrt{q}}/{(2 - \sqrt{q})}$.
Then, the number of iterations $N$ to obtain $\wv$ such that $F(\wv) - F(\wv^*) \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left(
n + \sqrt{\frac{A_\omega D_\omega n}{\lambda \eps}}
\right) \,.
\end{align*}
\end{proposition}
\noindent
Here, we note that $\kappa$ was chosen to minimize the total complexity (cf. \citet{lin2017catalyst}).
This bound is known to be tight, up to logarithmic factors \citep{woodworth2016tight}.
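
\noindent
For concreteness, the small helper below assembles the parameter choices stated in Prop.~\ref{prop:c:total_compl_svrg_main} from the problem constants; it is merely a convenience sketch of those settings, and the numerical values in the usage example are arbitrary.
\begin{verbatim}
import math

def casimir_svrg_const_params(eps, lam, n, A_omega, D_omega):
    """Parameter choices of the constant-smoothing, strongly convex case."""
    mu = eps / (10.0 * D_omega)
    if A_omega / (mu * n) > 4.0 * lam:
        kappa = A_omega / (mu * n) - lam
    else:
        kappa = lam
    q = lam / (lam + kappa)
    return dict(mu=mu, kappa=kappa, q=q,
                alpha0=math.sqrt(q), delta=math.sqrt(q) / (2.0 - math.sqrt(q)))

# Illustrative values only: eps = 1e-3, lam = 1/n with n = 10**4 examples.
print(casimir_svrg_const_params(eps=1e-3, lam=1e-4, n=10**4,
                                A_omega=1.0, D_omega=0.5))
\end{verbatim}
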
\noindent
Next, we turn to the strongly convex case with decreasing smoothing.
\begin{proposition} \label{prop:c:total_compl_sc:dec_smoothing} \label{prop:c:total_compl_sc:dec_smoothing_main}
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Suppose $\lambda > 0$ and $\kappa_k = \kappa$, for all $k \ge 1$ and
that $\alpha_0$, $(\mu_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$
are chosen as in Cor.~\ref{cor:c:outer_sc:decreasing_mu_const_kappa},
with $q = \lambda/(\lambda + \kappa)$ and $\eta = 1- {\sqrt q}/{2}$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with these parameters,
the number of iterations $N$ of SVRG required to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left( n
+ \frac{A_\omega}{\mu(\lambda + \kappa)\eps} \left( F(\wv_0) - F^* + \frac{\mu D_\omega}{1-\sqrt{q}} \right)
\right) \,.
\end{align*}
\end{proposition}
\noindent
Unlike the previous case, there is no obvious choice of $\kappa$ that minimizes the global complexity.
Notice that we do not get the accelerated rate of Prop.~\ref{prop:c:total_compl_svrg_sc}.
We now turn to the case when $\lambda = 0$ and $\mu_k = \mu$ for all $k$.
\begin{proposition} \label{prop:c:total_compl_svrg_smooth} \label{prop:c:total_compl_svrg_smooth_main}
Consider the setting of Thm.~\ref{thm:catalyst:outer} and fix $\eps > 0$.
Suppose we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver and parameters
$\mu_k = \mu = \eps/(20 D_\omega)$, $\alpha_0 = (\sqrt{5} - 1)/{2}$,
$\delta_k = {1}/{(k+1)^2}$, and $\kappa_k = \kappa = A_\omega/(\mu(n+1))$.
Then, the number of iterations $N$ to get a point $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left( n\sqrt{\frac{F(\wv_0) - F^*}{\eps}} +
\sqrt{A_\omega D_\omega n} \frac{\norma{2}{\wv_0 - \wv^*}}{\eps} \right) \, .
\end{align*}
\end{proposition}
\noindent
This rate is tight up to log factors~\citep{woodworth2016tight}.
Lastly, we consider the non-strongly convex case ($\lambda = 0$) together with decreasing smoothing.
As with Prop.~\ref{prop:c:total_compl_sc:dec_smoothing}, we do not obtain an accelerated rate here.
\begin{proposition} \label{prop:c:total_compl_nsc:dec_smoothing} \label{prop:c:total_compl_nsc:dec_smoothing_main}
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Suppose $\lambda = 0$ and that $\alpha_0$, $(\mu_k)_{k\ge 1}$,$ (\kappa_k)_{k\ge 1}$ and $(\delta_k)_{k \ge 1}$
are chosen as in Cor.~\ref{cor:c:outer_smooth_dec_smoothing}.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with these parameters,
the number of iterations $N$ of SVRG required to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde\bigO \left( \frac{1}{\eps}
\left( F(\wv_0) - F^* + \kappa \normasq{2}{\wv_0 - \wv^*} + \mu D_\omega \right)
\left( n + \frac{A_\omega}{\mu \kappa} \right)
\right) \,.
\end{align*}
\end{proposition}
\subsection{The Prox-Linear Algorithm} \label{sec:pl:pl-algo}
The exact prox-linear algorithm of \citet{burke1985descent} generalizes the
proximal gradient algorithm (see e.g., \citet{nesterov2013introductory})
to compositions of convex functions with smooth mappings such as~\eqref{eq:n-cvx_pb}.
Given a function $f=h\circ \gv$, the prox-linear algorithm defines a local convex approximation
$f(\cdot \, ; \wv_k)$ of $f$ about a point $\wv_k \in \reals^d$ by linearizing the smooth map $\gv$ as
$
f(\wv; \wv_k) := h(\gv(\wv_k) + \grad\gv(\wv_k)(\wv - \wv_k)) \, .
$
With this, it builds a convex model $F(\cdot \, ; \wv_k)$ of $F$ about $\wv_k$ as
\[
F(\wv ; \wv_k) := \frac{1}{n}\sum_{i=1}^n h(\gv\pow{i}(\wv_k) + \grad\gv\pow{i}(\wv_k)(\wv - \wv_k))
+ \frac{\lambda}{2} \normasq{2}{\wv}\,.
\]
Given a step length $\eta > 0$, each iteration of the exact prox-linear algorithm
then minimizes the local convex model plus a proximal term as
\begin{align} \label{eq:pl:exact_pl}
\wv_{k+1} = \argmin_{\wv \in \reals^d} \left[ F_{\eta}( \wv ; \wv_k) := F(\wv ; \wv_k) + \frac{1}{2\eta}\normasq{2}{\wv-\wv_k} \right] \,.
\end{align}
\begin{algorithm}[tb]
\caption{(Inexact) Prox-linear algorithm: outer loop}
\label{algo:prox-linear}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Smoothable objective $F$ of the form \eqref{eq:n-cvx_pb} with $h$ simple,
step length $\eta$,
tolerances $( \epsilon_k )_{k\ge1}$,
initial point $\wv_0$,
non-smooth convex optimization algorithm, $\mcM$,
time horizon $K$
\FOR{$k=1$ \TO $K$}
\STATE Using $\mcM$ with $\wv_{k-1}$ as the starting point, find
\begin{align} \label{eq:pl:algo:update}
\nonumber
\widehat \wv_k \approx \argmin_{\wv} \bigg[
F_\eta(\wv ; \wv_{k-1}) :=
\frac{1}{n} \sum_{i=1}^n & h\big(\gv\pow{i}(\wv_{k-1}) + \grad\gv\pow{i}(\wv_{k-1})(\wv - \wv_{k-1}) \big) \\ &+ \frac{\lambda}{2}\normasq{2}{\wv}
+ \frac{1}{2\eta}\normasq{2}{\wv-\wv_{k-1}}
\bigg]
\end{align}
such that
\begin{align} \label{eq:pl:algo:stop}
F_\eta(\widehat \wv_k ; \wv_{k-1}) - \min_{ \wv \in \reals^d} F_\eta(\wv ; \wv_{k-1}) \le \eps_k \,.
\end{align}
\label{line:pl:algo:subprob}
\STATE Set $\wv_k = \widehat \wv_k$ if $F(\widehat \wv_k) \le F(\wv_{k-1})$, else set
$\wv_k = \wv_{k-1}$.
\label{line:pl:algo:accept}
\ENDFOR
\RETURN $\wv_K$.
\end{algorithmic}
\end{algorithm}
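
\noindent
The following Python sketch mirrors the outer loop of Algo.~\ref{algo:prox-linear}; the callables \texttt{linearized\_objective} and \texttt{convex\_solver} are hypothetical placeholders for the convex model \eqref{eq:pl:algo:update} and the inner solver $\mcM$.
\begin{verbatim}
def prox_linear(F, linearized_objective, convex_solver, w0, eta, K):
    """Sketch of the inexact prox-linear outer loop (Algo. 2).

    F(w): value of the nonconvex objective.
    linearized_objective(w_center, eta): the convex model F_eta(. ; w_center),
        i.e. h composed with the linearization of each g^{(i)} at w_center,
        plus the l2 regularizer and the proximal term.
    convex_solver(obj, w_init): approximately minimizes obj from w_init."""
    w = w0
    for k in range(1, K + 1):
        obj = linearized_objective(w, eta)
        w_hat = convex_solver(obj, w)   # warm start at the prox center w_{k-1}
        if F(w_hat) <= F(w):            # accept only if the objective improves
            w = w_hat
    return w
\end{verbatim}
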
Following \citet{drusvyatskiy2016efficiency},
we consider an inexact prox-linear algorithm, which approximately solves \eqref{eq:pl:exact_pl}
using an iterative algorithm. In particular, since the function to be minimized in \eqref{eq:pl:exact_pl}
is precisely of the form~\eqref{eq:cvx_pb}, we employ the fast convex solvers developed in the previous section
as subroutines. Concretely, the prox-linear outer loop is displayed in Algo.~\ref{algo:prox-linear}.
We now delve into details about the algorithm and convergence guarantees.
\subsubsection{Inexactness Criterion}
As in Section~\ref{sec:cvx_opt}, we must be prudent in choosing when to terminate the inner optimization
(Line~\ref{line:pl:algo:subprob} of Algo.~\ref{algo:prox-linear}).
Function value suboptimality is used as the inexactness criterion here. In particular, for some specified tolerance
$\eps_k > 0$, iteration $k$ of the prox-linear algorithm accepts a solution $\widehat \wv$ that satisfies
$F_\eta(\widehat \wv_k ; \wv_{k-1}) - \min_{ \wv} F_\eta(\wv ; \wv_{k-1}) \le \eps_k$.
\paragraph{Implementation}
In view of the $(\lambda + \eta\inv)$-strong convexity of $F_\eta(\cdot \, ; \wv_{k-1})$,
which gives $F_\eta(\widehat \wv_k ; \wv_{k-1}) - \min_{\wv} F_\eta(\wv ; \wv_{k-1}) \le \normasq{2}{\vv} / \big(2(\lambda + \eta\inv)\big)$ for any subgradient
$\vv \in \partial F_\eta(\widehat \wv_k ; \wv_{k-1})$, it suffices to ensure that $\normasq{2}{\vv} \le 2(\lambda + \eta\inv) \eps_k$ for some such subgradient.
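
\noindent
A minimal sketch of this check, assuming access to some subgradient \texttt{v} of the subproblem objective at the candidate point:
\begin{verbatim}
import numpy as np

def subproblem_solved(v, lam, eta, eps_k):
    """Sufficient condition for eps_k-suboptimality of the prox-linear
    subproblem: by (lam + 1/eta)-strong convexity, the suboptimality at a
    point with subgradient v is at most ||v||^2 / (2 (lam + 1/eta))."""
    return float(np.dot(v, v)) <= 2.0 * (lam + 1.0 / eta) * eps_k
\end{verbatim}
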
\paragraph{Fixed Iteration Budget}
As in the convex case, we consider as a practical alternative a fixed iteration budget $T_{\mathrm{budget}}$
and optimize $F_\eta(\cdot\, ; \wv_k)$ for exactly $T_{\mathrm{budget}}$ iterations of $\mcM$.
Again, we do not have a theoretical analysis for this scheme but find it to be effective in practice.
\subsubsection{Warm Start of Subproblems}
As in the convex case, we advocate the use of
the prox center $\wv_{k-1}$ to warm start the inner optimization problem in iteration $k$
(Line~\ref{line:pl:algo:subprob} of Algo.~\ref{algo:prox-linear}).
\subsection{Convergence analysis of the prox-linear algorithm} \label{sec:pl:convergence}
We now state the assumptions and the convergence guarantee of the prox-linear algorithm.
\subsubsection{Assumptions}
For the prox-linear algorithm to work, the only requirement is that we minimize an upper model.
The assumption below makes this concrete.
\begin{assumption} \label{asmp:pl:upper-bound}
The map $\gv\pow{i}$ is continuously differentiable everywhere for each $i \in [n]$.
Moreover, there exists a constant $L > 0$ such that for all $\wv, \wv' \in \reals^d$ and $i\in [n]$, it holds that
\begin{align*}
h\big(\gv\pow{i}(\wv') \big) \le
h\big(\gv\pow{i}(\wv) + \grad\gv\pow{i}(\wv) (\wv'-\wv) \big) + \frac{L}{2}\normasq{2}{\wv'-\wv} \,.
\end{align*}
\end{assumption}
\noindent
If $h$ is $G$-Lipschitz and each $\gv\pow{i}$ is $\widetilde L$-smooth, both with respect to
$\norma{2}{\cdot}$, then Assumption~\ref{asmp:pl:upper-bound} holds with $L = G\widetilde L$
\citep{drusvyatskiy2016efficiency}.
In the case of structured prediction,
Assumption~\ref{asmp:pl:upper-bound} holds when
the augmented score $\psi$ as a function of $\wv$ is $L$-smooth.
The next lemma makes this precise and its proof is in Appendix~\ref{sec:c:pl_struct_pred}.
\begin{lemma} \label{lem:pl:struct_pred}
Consider the structural hinge loss $f(\wv) = \max_{\yv \in \mcY} \psi(\yv ; \wv) = h\circ \gv(\wv)$
where $h, \gv$ are as defined in \eqref{eq:mapping_def}.
If the mapping $\wv \mapsto \psi(\yv ; \wv)$ is $L$-smooth with respect to $\norma{2}{\cdot}$ for all
$\yv \in \mcY$, then it holds for all $\wv, \zv \in \reals^d$ that
\begin{align*}
|h(\gv(\wv+\zv)) - h(\gv(\wv) + \grad\gv(\wv) \zv)| \le \frac{L}{2}\normasq{2}{\zv}\,.
\end{align*}
\end{lemma}
\subsubsection{Convergence Guarantee}
Convergence is measured via the norm of the {\em prox-gradient} $\bm{\varrho}_\eta(\cdot)$,
also known as the {\em gradient mapping}, defined as
\begin{align}
\bm{\varrho}_\eta(\wv) = \frac{1}{\eta} \left( \wv - \argmin_{\zv \in \reals^d} F_\eta(\zv ; \wv) \right) \,.
\end{align}
The measure of stationarity $\norm{\bm{\varrho}_\eta(\wv)}$ turns out to be related
to the norm of the gradient of the Moreau envelope of $F$ under certain conditions; see
\citet[Section 4]{drusvyatskiy2016efficiency} for a discussion.
In particular, a point $\wv$ with small $\norm{\bm{\varrho}_\eta(\wv)}$ means that $\wv$ is close to
$\wv' = \argmin_{\zv \in \reals^d} F_\eta(\zv ; \wv)$, which is nearly stationary for $F$.
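
\noindent
When the subproblem can be solved (approximately), this stationarity measure can be estimated numerically; a hypothetical sketch, reusing the placeholders from the prox-linear sketch above:
\begin{verbatim}
import numpy as np

def prox_gradient_norm(w, eta, linearized_objective, convex_solver):
    """Estimate of ||rho_eta(w)||_2. The exact prox-gradient needs the exact
    minimizer of F_eta(. ; w); here an approximate minimizer from
    convex_solver is substituted, so this is only an estimate."""
    w_plus = convex_solver(linearized_objective(w, eta), w)
    return np.linalg.norm(w - w_plus) / eta
\end{verbatim}
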
The prox-linear outer loop shown in Algo.~\ref{algo:prox-linear} has the following convergence guarantee
\citep[Thm.~5.2]{drusvyatskiy2016efficiency}.
\begin{theorem} \label{thm:pl:outer-loop}
Consider $F$ of the form~\eqref{eq:n-cvx_pb} that satisfies Assumption~\ref{asmp:pl:upper-bound},
a step length $0 < \eta \le 1/L$ and a non-negative sequence $(\eps_k)_{k\ge1}$.
With these inputs, Algo.~\ref{algo:prox-linear} produces a sequence $(\wv_k)_{k \ge 0}$ that satisfies
\begin{align*}
\min_{k=0, \cdots, K-1} \normasq{2}{\bm{\varrho}_\eta(\wv_k)} \le \frac{2}{\eta K} \left( F(\wv_0) - F^* + \sum_{k=1}^{K} \eps_k \right) \,,
\end{align*}
where $F^* = \inf_{\wv \in \reals^d} F(\wv)$.
In addition, we have that the sequence $(F(\wv_k))_{k\ge0}$ is non-increasing.
\end{theorem}
\begin{remark}
Algo.~\ref{algo:prox-linear} accepts an update only if it improves the function value (Line~\ref{line:pl:algo:accept}).
A variant of Algo.~\ref{algo:prox-linear} which always accepts the update has a guarantee identical to
that of Thm.~\ref{thm:pl:outer-loop},
but the sequence $(F(\wv_k))_{k\ge0}$ would not be guaranteed to be non-increasing.
\end{remark}
\subsection{Prox-Linear with {Casimir-SVRG}{}} \label{sec:pl:total-compl}
We now analyze the total complexity of minimizing the finite sum problem~\eqref{eq:n-cvx_pb}
with {Casimir-SVRG}{} to approximately solve the subproblems of Algo.~\ref{algo:prox-linear}.
For the algorithm to converge, the map
$\wv \mapsto \gv\pow{i}(\wv_k) + \grad \gv\pow{i}(\wv_k)(\wv - \wv_k)$ must be Lipschitz for each $i$ and each iterate $\wv_k$.
To be precise, we assume that
\begin{align}
A_\omega := \max_{i=1,\cdots,n} \sup_{\wv \in \reals^d} \normasq{2, \alpha}{\grad \gv\pow{i}(\wv)}
\end{align}
is finite, where $\omega$, the smoothing function, is 1-strongly convex
with respect to $\norma{\alpha}{\cdot}$.
When $\gv\pow{i}$ is the linear map $\wv \mapsto \Am\pow{i}\wv$, this reduces to \eqref{eq:c:A_defn}.
We choose the tolerance $\eps_k$ to decrease as $1/k$.
When the {Casimir-SVRG}{} algorithm with constant smoothing (Prop.~\ref{prop:c:total_compl_svrg_sc})
is used as the inner solver, this choice effectively makes the smoothing parameter of the $k$th prox-linear subproblem scale as $1/k$.
We have the following rate of convergence for this method, which is proved in Appendix~\ref{sec:c:pl_proofs}.
\begin{proposition} \label{prop:pl:total_compl}
Consider the setting of Thm.~\ref{thm:pl:outer-loop}. Suppose the sequence $(\eps_k)_{k\ge 1}$
satisfies $\eps_k = \eps_0 / k$ for some $\eps_0 > 0$ and that
the subproblem of Line~\ref{line:pl:algo:subprob} of Algo.~\ref{algo:prox-linear} is solved using
{Casimir-SVRG}{} with the settings of Prop.~\ref{prop:c:total_compl_svrg_sc}.
Then, the total number of SVRG iterations $N$ required to produce a $\wv$ such that
$\norma{2}{\bm{\varrho}_\eta(\wv)} \le \eps$ is bounded as
\begin{align*}
\expect[N] \le \widetilde\bigO\left(
\frac{n}{\eta \eps^2} \left(F(\wv_0) - F^* + \eps_0 \right) +
\frac{\sqrt{A_\omega D_\omega n \eps_0\inv}}{\eta \eps^3} \left( F(\wv_0) - F^* + \eps_0 \right)^{3/2}
\right) \, .
\end{align*}
\end{proposition}
\begin{remark} \label{remark:pl:choosing_eps0}
When an estimate or an upper bound $B$ on $F(\wv_0) - F^*$ is available, one could set
$\eps_0 = \bigO(B)$. This is true, for instance, in the structured prediction task where
$F^* \ge 0$ whenever the task loss $\ell$ is non-negative (cf.~\eqref{eq:pgm:struc_hinge}).
\end{remark}
\subsection{Dataset and Task Description} \label{subsec:expt:task_description}
For each of the tasks, we specify below the following:
(a) the dataset $\{ (\xv\pow{i}, \yv\pow{i})\}_{i=1}^n$,
(b) the output structure $\mcY$,
(c) the loss function $\ell$,
(d) the score function $\phi(\xv, \yv ; \wv)$,
(e) implementation of inference oracles, and lastly,
(f) the evaluation metric used to assess the quality of predictions.
\subsubsection{CoNLL 2003: Named Entity Recognition}
Named entities are phrases that contain the names of persons, organizations, locations, etc.,
and the task is to predict the label (tag) of each entity.
Named entity recognition can be formulated as a sequence tagging problem where the set
$\mcY_{\mathrm{tag}}$ of individual tags is of size 7.
Each datapoint $\xv$ is a sequence of words $\xv = (x_1, \cdots, x_p)$,
and the label $\yv = (y_1, \cdots, y_p) \in \mcY(\xv)$ is a sequence of the same length,
where each $y_i \in \mcY_{\mathrm{tag}}$ is a tag.
\paragraph{Loss Function}
The loss function is the Hamming Loss $\ell(\yv, \yv') = \sum_i \ind(y_i \neq y_i')$.
\paragraph{Score Function}
We use a chain graph to represent this task. In other words,
the observation-label dependencies are encoded as a Markov chain of order 1 to enable efficient
inference using the Viterbi algorithm.
We only consider the case of linear score $\phi(\xv, \yv ; \wv) = \inp{\wv}{\Phi(\xv, \yv)}$
for this task. The feature map $\Phi$ here is very similar to that given in
Example~\ref{example:inf_oracles:viterbi_example}.
Following \citet{tkachenko2012named}, we use a local context $\Psi_i(\xv)$ around the $i$\textsuperscript{th} word $x_i$ of $\xv$.
In particular, define $\Psi_i(\xv) = \ev_{x_{i-2}} \otimes \cdots \otimes \ev_{x_{i+2}}$,
where $\otimes$ denotes the Kronecker product between column vectors,
and $\ev_{x_i}$ denotes a one-hot encoding of word $x_i$, concatenated with one-hot encodings of its
part-of-speech tag and syntactic chunk tag, which are provided with the input.
Now, we can define the feature map $\Phi$ as
\begin{align*}
\Phi(\xv, \yv) = \left[ \sum_{v=1}^p \Psi_v(\xv) \otimes \ev_{y_v} \right] \oplus
\left[ \sum_{v=0}^p \ev_{y_{v}} \otimes \ev_{y_{v+1}} \right] \,,
\end{align*}
where $\ev_y \in \reals^{\abs{\mcY_{\mathrm{tag}}}}$ is a one hot-encoding of $y \in \mcY_{\mathrm{tag}}$,
and $\oplus$ denotes vector concatenation.
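
\noindent
A simplified NumPy sketch of such a feature map is given below; it ignores the context window, the part-of-speech and chunk features, boundary padding, and the feature hashing described later, and treats words and tags as integer indices. It is meant only to illustrate the unary/pairwise structure of $\Phi$.
\begin{verbatim}
import numpy as np

def unary_binary_features(word_ids, tag_ids, vocab_size, num_tags):
    """Simplified chain feature map: summed (word one-hot) x (tag one-hot)
    unary features concatenated with summed (tag, next tag) indicator
    features. Context windows, POS/chunk encodings, boundary padding and
    feature hashing from the text are omitted."""
    unary = np.zeros((vocab_size, num_tags))
    pairwise = np.zeros((num_tags, num_tags))
    for v, (w, y) in enumerate(zip(word_ids, tag_ids)):
        unary[w, y] += 1.0
        if v + 1 < len(tag_ids):
            pairwise[y, tag_ids[v + 1]] += 1.0
    return np.concatenate([unary.ravel(), pairwise.ravel()])

# Toy usage: a 4-word sentence over a 6-word vocabulary with 3 tags.
phi = unary_binary_features([0, 3, 2, 5], [1, 0, 2, 1], vocab_size=6, num_tags=3)
\end{verbatim}
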
\paragraph{Inference}
We use the Viterbi algorithm as the max oracle (Algo.~\ref{algo:dp:max:chain})
and top-$K$ Viterbi algorithm (Algo.~\ref{algo:dp:topK:chain}) for the top-$K$ oracle.
\paragraph{Dataset}
The dataset used was CoNLL 2003 \citep{tjong2003introduction},
which contains about 20K sentences.
\paragraph{Evaluation Metric}
We follow the official CoNLL metric: the $F_1$ measure excluding the `O' tags.
In addition, we report the objective function value measured on the training set (``train loss'').
\paragraph{Other Implementation Details}
The sparse feature vectors obtained above are hashed onto $2^{16} - 1$ dimensions for efficiency.
\subsubsection{PASCAL VOC 2007: Visual Object Localization}
Given an image and an object of interest, the task is to localize the object in the given image,
i.e., determine the best bounding box around the object. A related, but harder task is object detection,
which requires identifying and localizing any number of objects of interest, if any, in the image.
Here, we restrict ourselves to pure localization with a single instance of each object.
Given an image $\xv \in \mcX$ of size $n_1 \times n_2$, the label $\yv \in \mcY(\xv)$
is a bounding box, where $\mcY(\xv)$ is the set of all bounding boxes in an image of size $n_1 \times n_2$.
Note that $\abs{\mcY(\xv)} = \bigO(n_1^2n_2^2)$.
\paragraph{Loss Function}
The PASCAL IoU metric \citep{everingham2010pascal} is used to measure the quality of localization.
Given bounding boxes $\yv, \yv'$, the IoU is defined as the ratio of the intersection of the
bounding boxes to the union:
\begin{align*}
\mathrm{IoU}(\yv, \yv') = \frac{\mathrm{Area}(\yv \cap \yv')}{\mathrm{Area}(\yv \cup \yv')} \,.
\end{align*}
We then use the $1 - \mathrm{IoU}$ loss defined as $\ell(\yv, \yv') = 1 - \mathrm{IoU}(\yv, \yv')$.
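
\noindent
A small sketch of this loss for axis-aligned boxes represented as $(x_{\min}, y_{\min}, x_{\max}, y_{\max})$ tuples (this coordinate convention is our assumption, not part of the dataset format):
\begin{verbatim}
def iou_loss(box_a, box_b):
    """1 - IoU for two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    inter_w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    inter_h = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = inter_w * inter_h
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return 1.0 - inter / union if union > 0 else 1.0

assert iou_loss((0, 0, 2, 2), (0, 0, 2, 2)) == 0.0   # identical boxes
assert iou_loss((0, 0, 1, 1), (2, 2, 3, 3)) == 1.0   # disjoint boxes
\end{verbatim}
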
\paragraph{Score Function}
The formulation we use is based on the popular R-CNN approach \citep{girshick2014rich}.
We consider two cases: linear score and non-linear score $\phi$, both of which are based on the following
definition of the feature map $\Phi(\xv, \yv)$.
\begin{itemize}
\item Consider a patch $\xv|_\yv$ of image $\xv$ cropped to box $\yv$,
and rescale it to $64\times 64$.
Call this $\Pi(\xv|_\yv)$.
\item Consider a convolutional neural network known as AlexNet \citep{krizhevsky2012imagenet}
pre-trained on ImageNet \citep{ILSVRC15} and
pass $\Pi(\xv|_\yv)$ through it.
Take the output of {\tt conv4}, the penultimate convolutional layer, as the feature map $\Phi(\xv, \yv)$.
It is of size $ 3 \times 3\times 256$.
\end{itemize}
In the case of linear score functions, we take $\phi(\xv, \yv ; \wv) = \inp{\wv}{\Phi(\xv, \yv)}$.
In the case of non-linear score functions, we define the score $\phi$ as the result of
a convolution composed with a non-linearity and followed by a linear map. Concretely,
for $\thetav \in \reals^{H \times W \times C_1}$ and $\wv \in \reals^{C_1 \times C_2}$
let the map $\thetav \mapsto \thetav \star \wv \in \reals^{H \times W \times C_2}$ denote a
two dimensional convolution with stride $1$ and kernel size $1$,
and $\sigma: \reals \to \reals$ denote the exponential linear unit, defined respectively as
\begin{align*}
[\thetav \star \wv]_{ij} = \wv\T [\thetav]_{ij} \quad \text{and}
\quad \sigma(x) = x \, \ind(x \ge 0) + (\exp(x) - 1) \, \ind(x < 0) \,,
\end{align*}
where $[\thetav]_{ij} \in \reals^{C_1}$ is such that its $l$th entry is $\thetav_{ijl}$
and likewise for $[\thetav \star \wv]_{ij}$.
We overload notation to let $\sigma:\reals^d\to \reals^d$ denote the exponential linear unit applied element-wise.
Notice that $\sigma$ is smooth.
The non-linear score function $\phi$ is now defined, with
$\wv_1 \in \reals^{256\times16}, \wv_2 \in \reals^{16\times3\times3}$ and $\wv=(\wv_1, \wv_2)$, as,
\begin{align*}
\phi(\xv, \yv ; \wv) = \inp{\sigma(\Phi(\xv, \yv) \star \wv_1)}{\wv_2} \,.
\end{align*}
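
\noindent
A NumPy sketch of this non-linear score is given below; the layout of $\wv_2$ is transposed to match the $3\times3\times16$ activation, which is an implementation choice on our part, and the random inputs are purely illustrative.
\begin{verbatim}
import numpy as np

def elu(x):
    """Exponential linear unit, applied elementwise."""
    return np.where(x >= 0, x, np.exp(x) - 1.0)

def nonlinear_score(features, w1, w2):
    """phi(x, y; w) = <elu(Phi(x, y) * w1), w2>, where * is a 1x1 convolution.

    features: (3, 3, 256) array, the conv4 feature map Phi(x, y).
    w1: (256, 16) kernel of the 1x1 convolution.
    w2: (16, 3, 3) linear map; transposed below to align with the activation."""
    hidden = elu(features @ w1)       # (3, 3, 16): 1x1 conv = per-cell matmul
    return float(np.sum(hidden * np.transpose(w2, (1, 2, 0))))

rng = np.random.default_rng(0)
score = nonlinear_score(rng.standard_normal((3, 3, 256)),
                        0.01 * rng.standard_normal((256, 16)),
                        0.01 * rng.standard_normal((16, 3, 3)))
\end{verbatim}
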
\paragraph{Inference}
For a given input image $\xv$, we follow the R-CNN approach~\citep{girshick2014rich} and use
selective search~\citep{van2011segmentation} to prune the search space.
In particular, for an image $\xv$, we use the selective search implementation provided by OpenCV~\citep{opencv_library}
and take the top 1000 candidates returned to be the set $\widehat{\mcY}(\xv)$,
which we use as a proxy for $\mcY(\xv)$.
The max oracle and the top-$K$ oracle are then implemented as exhaustive searches over
this reduced set $\widehat\mcY(\xv)$.
\paragraph{Dataset}
We use the PASCAL VOC 2007 dataset \citep{everingham2010pascal}, which contains
about 5K annotated consumer (real-world) images shared on the photo-sharing site Flickr,
spanning 20 different object categories.
For each class, we consider all images with only a single occurrence of the object, and train
an independent model for each class.
\paragraph{Evaluation Metric}
We keep track of two metrics.
The first is the localization accuracy, also known as CorLoc (for correct localization),
following \citet{deselaers2010localizing}. A bounding box with IoU $> 0.5$
with the ground truth is considered correct and the localization accuracy is the fraction
of images labeled correctly.
The second metric is average precision (AP), which
requires a confidence score for each prediction.
We use $\phi(\xv, \yv' ; \wv)$ as the confidence score of $\yv'$.
As previously, we also plot the objective function value measured on the training examples.
\paragraph{Other Implementation Details}
For a given input-output pair $(\xv, \yv)$ in the dataset,
we instead use $(\xv, \widehat \yv)$ as a training example, where
$\widehat\yv = \argmax_{\yv' \in \widehat\mcY(\xv)} \mathrm{IoU}(\yv, \yv')$
is the element of $\widehat\mcY(\xv)$ which overlaps the most with the true output $\yv$.
\subsection{Methods Compared} \label{subsec:expt:competing_methods}
The experiments compare various convex stochastic and incremental
optimization methods for structured prediction.
\begin{itemize}
\item {\bfseries SGD}: The stochastic subgradient method with a learning rate $\gamma_t = \gamma_0 / (1 + \lfloor t / t_0 \rfloor)$,
where $\gamma_0, t_0$ are tuning parameters. Note that this scheme of learning rates does not have
a theoretical analysis. However, the weighted averaged iterate $\overline \wv_t = {2}/(t^2+t)\sum_{\tau=1}^t \tau \wv_\tau$
obtained from the related scheme
$\gamma_t = 1/(\lambda t)$ was shown to have a convergence rate of $\bigO((\lambda \eps)\inv)$
\citep{shalev2011pegasos,lacoste2012simpler}; an online implementation of this weighted average is sketched after this list. It works on the non-smooth formulation directly.
\item {\bfseries {BCFW}}: The block coordinate Frank-Wolfe algorithm of \citet{lacoste2012block}.
We use the version that was found to work best in practice, namely,
one that uses the weighted averaged iterate $\overline \wv_t = {2}/(t^2+t)\sum_{\tau=1}^t \tau \wv_\tau$
(called {\tt bcfw-wavg} by the authors)
with the optimal step size computed in closed form at each iteration. This algorithm also works on the non-smooth formulation
and does not require any tuning.
\item {\bfseries {SVRG}}: The SVRG algorithm proposed by \citet{johnson2013accelerating},
with each epoch making one pass through the dataset and using the averaged iterate
to compute the full gradient and restart the next epoch.
This algorithm requires smoothing.
\item {\bfseries {Casimir-SVRG-const}}: Algo.~\ref{algo:catalyst} with SVRG as the inner optimization algorithm.
The parameters $\mu_k$ and $\kappa_k$ are chosen as in
Prop.~\ref{prop:c:total_compl_svrg_sc}, where $\mu$ and $\kappa$ are hyperparameters.
This algorithm requires smoothing.
\item {\bfseries {Casimir-SVRG-adapt}}: Algo.~\ref{algo:catalyst} with SVRG as the inner optimization algorithm.
The parameters $\mu_k$ and $\kappa_k$ are chosen as in
Prop.~\ref{prop:c:total_compl_sc:dec_smoothing}, where $\mu$ and $\kappa$ are hyperparameters.
This algorithm requires smoothing.
\end{itemize}
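
\noindent
The weighted averaged iterate $\overline \wv_t = {2}/(t^2+t)\sum_{\tau=1}^t \tau \wv_\tau$ used by SGD and {BCFW}{} above can be maintained online without storing past iterates; a minimal sketch:
\begin{verbatim}
import numpy as np

class WeightedAverage:
    """Maintains w_bar_t = 2/(t(t+1)) * sum_{tau <= t} tau * w_tau online,
    via the recursion w_bar_t = (1 - c_t) w_bar_{t-1} + c_t w_t
    with c_t = 2/(t + 1)."""
    def __init__(self, d):
        self.avg = np.zeros(d)
        self.t = 0

    def update(self, w):
        self.t += 1
        c = 2.0 / (self.t + 1)
        self.avg = (1.0 - c) * self.avg + c * w
        return self.avg
\end{verbatim}
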
On the other hand, for non-convex structured prediction, we only have two methods:
\begin{itemize}
\item {\bfseries SGD}: The stochastic subgradient method \citep{davis2018stochastic}, which we refer to as SGD. This
algorithm works directly on the non-smooth formulation. We try learning rates
$\gamma_t = \gamma_0$, $\gamma_t = \gamma_0 /\sqrt{t}$
and $\gamma_t = \gamma_0 / t$, where $\gamma_0$ is found by grid search in each of these cases.
We use the names SGD-const, SGD-$t^{-1/2}$ and SGD-$t^{-1}$ respectively for these variants.
We note that SGD-$t^{-1}$ does not have any theoretical analysis in the non-convex case.
\item {\bfseries {PL-Casimir-SVRG}}: Algo.~\ref{algo:prox-linear} with {Casimir-SVRG-const}{} as the inner solver using the settings
of Prop.~\ref{prop:pl:total_compl}. This algorithm requires smoothing the inner subproblem.
\end{itemize}
\subsection{Hyperparameters and Variants} \label{subsec:expt:hyperparam}
\paragraph{Smoothing}
In light of the discussion of Sec.~\ref{sec:smooth_oracle_impl},
we use the $\ell_2^2$ smoother $\omega(\uv) = \normasq{2}{\uv} / 2$ and use the top-$K$ strategy
for efficient computation.
We then have $D_\omega = 1/2$.
\paragraph{Regularization}
The regularization coefficient $\lambda$ is chosen as $\nicefrac{c}{n}$,
where $c$ is varied in $\{ 0.01, 0.1, 1, 10\}$.
\paragraph{Choice of $K$}
The experiments use $K = 5$ for named entity recognition, where the top-$K$
oracle is about $K$ times slower than the max oracle,
and $K=10$ for visual object localization, where the running time of the top-$K$ oracle is independent of $K$.
We also present results for other values of $K$ in Fig.~\ref{fig:plot_ner_K} and find that
the performance of the tested algorithms is robust to the value of $K$.
\paragraph{Tuning Criteria}
Some algorithms require tuning one or more hyperparameters such as the learning rate.
We use grid search to find the best choice of the hyperparameters using the following criteria:
For the named entity recognition experiments, the train function value and the validation $F_1$ metric
were only weakly correlated. For instance, among the 3 best learning rates in the grid in terms of validation $F_1$ score,
the one with the best $F_1$ score attained the worst train function value and vice versa.
Therefore, we choose the value of the tuning parameter that attains the best objective function value while remaining
within 1\% of the best validation $F_1$ score; this measures the optimization performance while still remaining relevant
to the named entity recognition task.
For the visual object localization task,
a wide range of hyperparameter values achieved nearly equal performance in terms of
the best CorLoc over the given time horizon, so we choose
the value of the hyperparameter that achieves the best objective function value within
a given iteration budget.
\subsubsection{Hyperparameters for Convex Optimization}
This corresponds to the setting of Section~\ref{sec:cvx_opt}.
\paragraph{Learning Rate}
The algorithms {SVRG}{} and {Casimir-SVRG-adapt}{} require tuning of a learning rate,
while SGD requires $\gamma_0, t_0$ and
{Casimir-SVRG-const}{} requires tuning of the Lipschitz constant $L$ of $\grad F_{\mu\omega}$,
which determines the learning rate $\gamma = 1/(L + \lambda + \kappa)$.
Therefore, tuning the Lipschitz parameter is similar to tuning the learning rate.
For both the learning rate and Lipschitz parameter, we use grid search on a logarithmic grid,
with consecutive entries chosen a factor of two apart.
\paragraph{Choice of $\kappa$}
For {Casimir-SVRG-const}{}, with the Lipschitz constant in hand, the parameter $\kappa$
is chosen to minimize the overall complexity as in Prop.~\ref{prop:c:total_compl_svrg_sc}.
For {Casimir-SVRG-adapt}{}, we use $\kappa = \lambda$.
\paragraph{Stopping Criteria}
Following the discussion of Sec.~\ref{sec:cvx_opt}, we use
an iteration budget of $T_{\mathrm{budget}} = n$.
\paragraph{Warm Start}
The warm start criterion determines the starting iterate of an epoch of the inner optimization algorithm.
Recall that we solve the following subproblem using SVRG for the $k$th iterate (cf. \eqref{eq:prox_point_algo}):
\begin{align*}
\wv_k \approx \argmin_{\wv \in \reals^d} F_{\mu_k\omega, \kappa_k}(\wv;\zv_{k-1}) \,.
\end{align*}
Here, we consider the following warm start strategy to choose the initial iterate $\widehat \wv_0$ for this subproblem:
\begin{itemize}
\item {\tt Prox-center}: $\widehat \wv_0 = \zv_{k-1}$.
\end{itemize}
In addition, we also try out the following warm start strategies of \citet{lin2017catalyst}:
\begin{itemize}
\item {\tt Extrapolation}: $\widehat \wv_0 = \wv_{k-1} + c(\zv_{k-1} - \zv_{k-2})$ where $c = \frac{\kappa}{\kappa + \lambda}$.
\item {\tt Prev-iterate}: $\widehat \wv_0 = \wv_{k-1}$.
\end{itemize}
We use the {\tt Prox-center} strategy unless mentioned otherwise.
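The three strategies amount to the following simple rule, written here as an illustrative Python sketch (the function and argument names are ours):
\begin{verbatim}
def warm_start(strategy, w_prev, z_prev, z_prev2, kappa, lam):
    """Initial iterate for the k-th inner SVRG subproblem."""
    if strategy == "prox-center":      # default: \hat w_0 = z_{k-1}
        return z_prev
    if strategy == "prev-iterate":     # \hat w_0 = w_{k-1}
        return w_prev
    if strategy == "extrapolation":    # \hat w_0 = w_{k-1} + c (z_{k-1} - z_{k-2})
        c = kappa / (kappa + lam)
        return w_prev + c * (z_prev - z_prev2)
    raise ValueError("unknown warm start strategy: " + strategy)
\end{verbatim}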
\paragraph{Level of Smoothing and Decay Strategy}
For {SVRG}{} and {Casimir-SVRG-const}{} with constant smoothing, we try various values of the smoothing
parameter in a logarithmic grid. On the other hand, {Casimir-SVRG-adapt}{} is more robust to the choice of
the smoothing parameter (Fig.~\ref{fig:plot_ner_smoothing}).
We use the defaults of $\mu = 2$ for named entity recognition and $\mu = 10$ for
visual object localization.
\subsubsection{Hyperparameters for Non-Convex Optimization}
This corresponds to the setting of Section~\ref{sec:ncvx_opt}.
\paragraph{Prox-Linear Learning Rate $\eta$}
We perform grid search in powers of 10 to find the best prox-linear learning rate $\eta$.
We find that the performance of the algorithm is robust to the choice of $\eta$ (Fig.~\ref{fig:ncvx:pl_lr}).
\paragraph{Stopping Criteria}
We used a fixed budget of 5 iterations of {Casimir-SVRG-const}{}.
In Fig.~\ref{fig:ncvx:inner-iter},
we experiment with different iteration budgets.
\paragraph{Level of Smoothing and Decay Strategy}
In order to solve the $k$th prox-linear subproblem with {Casimir-SVRG-const}{},
we must specify the level of smoothing $\mu_k$. We experiment with two schemes,
(a) constant smoothing $\mu_k = \mu$, and (b) adaptive smoothing $\mu_k = \mu / k$.
Here, $\mu$ is a tuning parameter, and the adaptive smoothing scheme is designed
based on Prop.~\ref{prop:pl:total_compl} and Remark~\ref{remark:pl:choosing_eps0}.
We use the adaptive smoothing strategy as a default, but compare the two in Fig.~\ref{fig:ncvx_smoothing}.
\paragraph{Gradient Lipschitz Parameter for Inner Optimization}
The inner optimization algorithm {Casimir-SVRG-const}{} still requires a hyperparameter
$L_k$ to serve as an estimate to the Lipschitz parameter of the gradient
$\grad F_{\eta, \mu_k\omega}(\cdot\,; \wv_k)$. We set this parameter as follows,
based on the smoothing strategy:
(a) $L_k = L_0$ with the constant smoothing strategy, and
(b) $L_k = k\, L_0$ with the adaptive smoothing strategy (cf. Prop.~\ref{thm:setting:beck-teboulle}).
We note that the latter choice has the effect of decaying the learning rate roughly as $1/k$
in the $k$th outer iteration.
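The interplay between the smoothing schedule, the Lipschitz estimate and the resulting step size can be summarized by the following sketch; it is illustrative only and assumes that the step size rule $\gamma = 1/(L + \lambda + \kappa)$ used in the convex setting carries over to the inner solver.
\begin{verbatim}
def inner_subproblem_parameters(k, mu0, L0, lam, kappa, adaptive=True):
    """Smoothing level, Lipschitz estimate and step size for the k-th
    prox-linear subproblem (k >= 1) under the two schemes above."""
    if adaptive:
        mu_k, L_k = mu0 / k, k * L0    # adaptive smoothing, growing L_k
    else:
        mu_k, L_k = mu0, L0            # constant smoothing
    gamma_k = 1.0 / (L_k + lam + kappa)  # decays roughly as 1/k when adaptive
    return mu_k, L_k, gamma_k
\end{verbatim}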
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/ner_best.pdf}
\caption{Comparison of convex optimization algorithms for the task of Named Entity Recognition on CoNLL 2003.}\label{fig:plot_all_ner}
\end{figure}
\begin{figure}[!thb]
\centering
\includegraphics[width=0.93\textwidth]{plots/loc_cvx/loc_all_best_1E+01_0.pdf}
\caption{Comparison of convex optimization algorithms
for the task of visual object localization on PASCAL VOC 2007 for $\lambda=10/n$.
Plots for all other classes are in
Appendix~\ref{sec:a:expt}.}\label{fig:plot_all_loc}
\end{figure}
\begin{figure}[!thb]
\centering
\includegraphics[width=0.93\textwidth]{plots/loc_ncvx/loc_best_all_1E+00_0.pdf}
\caption{Comparison of non-convex optimization algorithms
for the task of visual object localization on PASCAL VOC 2007 for $\lambda=1/n$.
Plots for all other classes are in
Appendix~\ref{sec:a:expt}.}\label{fig:plot_ncvx_loc}
\end{figure}
\subsection{Experimental study of different methods} \label{subsec:expt:competing_results}
\paragraph{Convex Optimization}
For the named entity recognition task,
Fig.~\ref{fig:plot_all_ner} plots the performance of various methods on CoNLL 2003.
On the other hand, Fig.~\ref{fig:plot_all_loc} presents
plots for various classes of PASCAL VOC 2007 for visual object localization.
The plots reveal that smoothing-based methods converge faster in terms of training error
while achieving a competitive performance in terms of the performance metric on a held-out set.
Furthermore, BCFW and SGD make twice as many actual passes as SVRG based algorithms.
\paragraph{Non-Convex Optimization}
Fig.~\ref{fig:plot_ncvx_loc} plots the performance of various algorithms on the task of visual object localization
on PASCAL VOC.
\subsection{Experimental Study of Effect of Hyperparameters: Convex Optimization}
We now study the effects of various hyperparameter choices.
\paragraph{Effect of Smoothing}
Fig.~\ref{fig:plot_ner_smoothing} plots the effect of the level of smoothing for {Casimir-SVRG-const}{}
and {Casimir-SVRG-adapt}{}. The plots reveal that, in general, small values of the smoothing parameter lead
to better optimization performance for {Casimir-SVRG-const}. {Casimir-SVRG-adapt}{} is robust to the choice
of $\mu$.
Fig.~\ref{fig:plot_ner_smoothing-2} shows how the smooth optimization algorithms work when used heuristically on the
non-smooth problem.
\begin{figure}[!thb]
\centering
\begin{subfigure}[b]{0.88\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/plot_smoother_main.pdf}
\caption{\small{Effect of level of smoothing.}}
\label{fig:plot_ner_smoothing}
\end{subfigure}
\begin{subfigure}[b]{0.88\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/plot_nonsmooth_svrg_main.pdf}
\caption{\small{Effect of smoothing: use of smooth optimization with smoothing (labeled ``smooth'')
versus the heuristic use of these
algorithms without smoothing (labeled ``non-smooth'') for $\lambda = 0.01/n$.}}
\label{fig:plot_ner_smoothing-2}
\end{subfigure}
\begin{subfigure}[b]{0.88\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/plot_warm_start.pdf}
\caption{\small{Effect of warm start strategies for $\lambda=0.01/n$ (first row) and $\lambda = 1/n$ (second row).}}
\label{fig:plot_ner_warm-start}
\end{subfigure}
\begin{subfigure}[b]{0.88\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/plot_K_main.pdf}
\caption{\small{Effect of $K$ in the top-$K$ oracle ($\lambda = 0.01/n$).}}
\label{fig:plot_ner_K}
\end{subfigure}
\caption{Effect of hyperparameters for the task of Named Entity Recognition on CoNLL 2003.
C-SVRG stands for Casimir-SVRG in these plots.}\label{fig:plot_cvx_hyperparam}
\end{figure}
\paragraph{Effect of Warm Start Strategies}
Fig.~\ref{fig:plot_ner_warm-start} plots different warm start strategies for {Casimir-SVRG-const}{}
and {Casimir-SVRG-adapt}.
We find that {Casimir-SVRG-adapt}{} is robust to the choice of the warm start strategy while {Casimir-SVRG-const}{} is not.
For the latter, we observe that {\tt Extrapolation} is less stable (i.e., tends to diverge more) than {\tt Prox-center},
which is in turn less stable than {\tt Prev-iterate}, which always works (cf. Fig.~\ref{fig:plot_ner_warm-start}).
However, when they do work, {\tt Extrapolation} and {\tt Prox-center} provide greater acceleration than {\tt Prev-iterate}.
We use {\tt Prox-center} as the default choice to trade-off between acceleration and applicability.
\paragraph{Effect of $K$}
Fig.~\ref{fig:plot_ner_K} illustrates the robustness of the method to the choice of $K$: we observe that
the results are all within one standard deviation of each other.
\subsection{Experimental Study of Effect of Hyperparameters: Non-Convex Optimization}
We now study the effect of various hyperparameters for the non-convex optimization algorithms.
All of these comparisons have been made for $\lambda = 1/n$.
\paragraph{Effect of Smoothing}
Fig.~\ref{fig:ncvx_smoothing:1} compares the adaptive and constant smoothing strategies.
Fig.~\ref{fig:ncvx_smoothing:2} and Fig.~\ref{fig:ncvx_smoothing:3} compare the effect of the level of smoothing on
each of these strategies.
As previously, the adaptive smoothing strategy is more robust to the choice of the smoothing parameter.
\begin{figure}[!thb]
\centering
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_fixed-vs-adapt-smth_sheep_1E+00.pdf}
\caption{Comparison of adaptive and constant smoothing strategies.}
\label{fig:ncvx_smoothing:1}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_smoother-decay_sheep_1E+00.pdf}
\caption{Effect of $\mu$ of the adaptive smoothing strategy.}
\label{fig:ncvx_smoothing:2}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_smoother-const_sheep_1E+00.pdf}
\caption{Effect of $\mu$ of the constant smoothing strategy.}
\label{fig:ncvx_smoothing:3}
\end{subfigure}
\caption{Effect of smoothing on {PL-Casimir-SVRG}{} for the task of visual object localization on PASCAL VOC 2007.}\label{fig:ncvx_smoothing}
\end{figure}
\begin{figure}[!thb]
\centering
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_pl-lr_sheep_1E+00.pdf}
\caption{\small{Effect of the hyperparameter $\eta$.}}\label{fig:ncvx:pl_lr}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_inner-iter_sheep_1E+00.pdf}
\caption{\small{Effect of the iteration budget of the inner solver.}}\label{fig:ncvx:inner-iter}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_warmstart_sheep_1E+00.pdf}
\caption{\small{Effect of the warm start strategy of the inner {Casimir-SVRG-const}{} algorithm.}}
\label{fig:ncvx:warm-start}
\end{subfigure}
\caption{Effect of hyperparameters on {PL-Casimir-SVRG}{} for the task of visual object localization on PASCAL VOC 2007.}\label{fig:ncvx_hyperparam}
\end{figure}
\paragraph{Effect of Prox-Linear Learning Rate $\eta$}
Fig.~\ref{fig:ncvx:pl_lr} shows the robustness of the proposed method to the choice of $\eta$.
\paragraph{Effect of Iteration Budget}
Fig.~\ref{fig:ncvx:inner-iter} also shows the robustness of the proposed method to the choice of iteration budget of the inner solver, {Casimir-SVRG-const}.
\paragraph{Effect of Warm Start of the Inner Solver}
Fig.~\ref{fig:ncvx:warm-start} studies the effect of the
warm start strategy used within the inner solver {Casimir-SVRG-const}{} in each prox-linear iteration. The results are similar to
those obtained in the convex case, with the {\tt Prox-center} choice being the best compromise between acceleration and applicability.
\section{Introduction}
\input{sections/01_intro}
\section{Smooth Structured Prediction} \label{sec:setting}
\input{sections/02_struct_pred}
\section{Inference Oracles} \label{sec:inference_oracles}
\input{sections/03_inf_oracles}
\section{Implementation of Inference Oracles} \label{sec:smooth_oracle_impl}
\input{sections/04_smoothing}
\section{The Casimir Algorithm} \label{sec:cvx_opt}
\input{sections/05_Casimir}
\section{Extension to Non-Convex Optimization} \label{sec:ncvx_opt}
\input{sections/06_proxlin}
\section{Experiments} \label{sec:expt}
\input{sections/08_expt}
\section{Future Directions}
\input{sections/09_conclusion}
\paragraph{Acknowledgments}
This work was supported by NSF Award CCF-1740551, the Washington Research Foundation
for innovation in Data-intensive Discovery, and the program ``Learning in Machines and Brains'' of CIFAR.
\clearpage
\newpage
\section{Introduction}
\label{sec:intro}
The solutions of the Euler equation for fluid dynamics are not unique.
An additional law of physics, in the form of an entropy principle, is
needed to ensure a physically meaningful solution. Wild and manifestly
nonphysical solutions have been studied extensively \cite{DelSze09,DelSze10} and
offer counter examples to studies of the Euler equation as a model for
fully developed turbulence. This paper is concerned with the nonuniqueness
for Euler equation solutions that are the limit of Navier-Stokes solutions as
the viscosity tends to zero. We address common practice in the
construction of numerical solutions for turbulent flows.
Applications to type Ia supernova are discussed.
Nonuniqueness (both mathematical and numerical)
of solutions to the Euler equation is well known in the study of shock waves
and its resolution is also well known: a maximum rate of
entropy production is imposed
as a selection criterion to yield a unique and physically
relevant solution.
But nonuniqueness persists
in solutions of the incompressible Euler equation, where shock waves do not
occur. Again, a physical principle must be added to select the physically
meaningful solution.
This paper poses a challenge to existing standards of verification and
validation (V\&V).
We propose that if turbulence is present in the
problem solved, standards of V\&V
should ensure the physical relevance of the solutions.
As with the shock wave example, inadmissible numerical solutions of
turbulent phenomena are also possible. We identify three broad classes
of numerical solutions to the problems of Rayleigh-Taylor (RT) turbulent mixing
and compare them to experimental data \cite{SmeYou87}.
one of these agrees with the data, while two do not.
The second main result of this
paper is to identify these other two, solutions that include no subgrid
terms and those for which the subgrid terms are limited, i.e., the
Implicit Large Eddy Simulation (ILES), as physically inadmissible
solutions of the turbulent RT mixing data \cite{SmeYou87}.
ILES and solutions which report a DNS status and lack subgrid terms.
These latter two solutions do not agree with each other,
further indicating nonuniqueness issues.
To account for observed discrepancies between ILES predictions and
experimental data, it is common to add ``noise'' to the physics model.
As noise increases the entropy, some discrepancies between simulation
and measured data are removed.
The solution with noise is, however, not predictive. Not only
can it be missing in the required amounts, but it is only a qualitative cure,
with no defined noise level or noise frequency spectrum specified.
The maximum entropy rate is a clearly defined physics principle. We
propose it as a solution to the Euler equation nonuniqueness problem.
Reynolds averaged Navier Stokes (RANS) simulations resolve all length
scales needed to specify the problem geometry.
Large eddy simulations (LES) not only
resolve these scales, but in addition they resolve some, but not
all, of the generic turbulent flow. The mesh scale, i.e., the finest of the
resolved scales,
occurs within the turbulent flow. As this is a strongly coupled flow
regime, problems occur at the mesh cutoff. Resolution of all relevant
length scales, known as Direct Numerical Simulation (DNS) is
computationally infeasible for many problems of scientific and
technological interest. As a consequence, an understanding of the
problems and opportunities of LES is an important issue.
The subgrid scale
(SGS) flow exerts an influence on the flow at the resolved level.
Because this SGS effect
is not part of the Navier-Stokes equations,
additional modeling terms are needed in the equations. These
SGS terms added to the right hand side (RHS) of the
momentum and species concentration equations
generally have the form
\begin{equation}
\label{eq:sgs}
\nabla\nu_t \nabla \quad {\mathrm{and}} \quad \nabla D_t \nabla \ .
\end{equation}
The coefficients $\nu_t$ and $D_t$ are called eddy viscosity and eddy
diffusivity.
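For concreteness, a minimal one-dimensional finite-difference sketch of how a term of the form (\ref{eq:sgs}) acts on a resolved field is given below; it is purely illustrative, assumes a uniform mesh, and takes the eddy viscosity field as given.
\begin{verbatim}
import numpy as np

def sgs_diffusion_1d(v, nu_t, dx):
    """Evaluate d/dx ( nu_t * dv/dx ) on a uniform 1D mesh; nu_t may vary in
    space.  Fluxes are formed at cell faces with face-averaged eddy viscosity."""
    nu_face = 0.5 * (nu_t[1:] + nu_t[:-1])
    flux = nu_face * (v[1:] - v[:-1]) / dx
    out = np.zeros_like(v)
    out[1:-1] = (flux[1:] - flux[:-1]) / dx   # interior cells only
    return out
\end{verbatim}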
According to ideas of Kolmogorov \cite{Kol41}, the energy in a turbulent
flow is conserved as it is passed in a cascade from larger vortices to smaller ones.
This idea leads to the scaling law \cite{Kol41}
\begin{equation}
\label{eq:K41}
\langle |v(k)|^2 \rangle = C_K \epsilon^{2/3} |k|^{-5/3}
\end{equation}
for the Fourier coefficient $v(k)$ of the velocity $v$. Here
$C_K$ is a numerical coefficient and
$\epsilon$, the energy dissipation rate, denotes the rate at which the energy
is transferred within the cascade.
It is a measure of the intensity of the turbulence.
At the grid level, the numerically modeled
cascade is broken. The role of the
SGS terms is to dissipate this excess grid level energy so that the
resolved scales see a diminished effect from the grid cutoff.
This analysis motivates the SGS coefficient $\nu_t$, while a conservation
law for species concentration
similarly motivates the coefficient $D_t$.
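As a minimal illustration of how a spectral decay exponent such as the $-5/3$ in (\ref{eq:K41}) is estimated in practice, the following Python sketch fits a power law to the energy spectrum of a one-dimensional velocity sample; it is our own simplification, whereas the plot in Sec.~\ref{sec:scaling} is based on the two point function in log variables.
\begin{verbatim}
import numpy as np

def spectral_slope(v, dx):
    """Least-squares estimate of p in <|v(k)|^2> ~ |k|^p from a 1D velocity
    sample (an inertial range without intermittency would give p near -5/3).
    Real diagnostics use shell-averaged 3D spectra; this is illustrative only."""
    vk = np.fft.rfft(v - np.mean(v))
    k = np.fft.rfftfreq(len(v), d=dx)[1:]      # drop the zero mode
    ek = np.abs(vk[1:]) ** 2
    slope, _ = np.polyfit(np.log(k), np.log(ek + 1e-300), 1)
    return slope
\end{verbatim}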
Higher order compact schemes may omit any subgrid model
in their study of RT mixing. As an example,
\cite{CabCoo06} present a nominally DNS solution,
which, however, is not validated by comparison to experiment.
Moreover, the DNS characterization
of the simulation is not documented, with $D$ and $\nu$ not specified.
It appears from the text that DNS refers to globally defined solution
parameters such as the globally defined Kolmogorov scale $\eta$ in relation
to the mesh spacing, with $\nu$ and $D$ defined on this basis.
Such resolution misses local fluctuations in
the turbulent intensity, which require dynamically defined SGS terms
added to the equation. As \cite{CabCoo06} is focused on applications to
supernova Ia, additional comments are placed in our SN Ia discussion.
ILES is the computational model
in which the minimum value of $\nu_t$ is chosen so that a minimum of grid level
excess energy is removed to retain the $|k|^{-5/3}$ scaling law, while
the prefactor $C_K\epsilon^{2/3}$ is not guaranteed. It thus depends on
limited and not full use of the subgrid terms that correspond to the
local values of the energy dissipation cascade.
An ILES version of Miranda, a modern higher order compact scheme, is given in
\cite{MorOlsWhi17},
which details the construction of the ILES version of this code and analyzes
a number of scaling related properties of the RT solutions the
algorithm generates. The subgrid terms are chosen
not proportional to the Laplacian as in (\ref{eq:sgs}),
but as higher power dissipation rates,
so that large wave numbers are more strongly suppressed.
The SGS modeling coefficients $\nu_t$ and $D_t$ are chosen as global
constants. The basis for the choice is to regard the accumulation of
energy at the grid level as a Gibbs phenomena to be minimized
\cite{MorOlsWhi17}. Miranda
achieves the ILES goal of an exact $-5/3$ spectral decay,
see Fig. 3 right frame in Ref. \cite{MorOlsWhi17}.
FronTier uses dynamic
SGS models \cite{GerPioMoi91,MoiSquCab91},
and additionally uses a sharp interface model to reduce numerical
diffusion. In this method,
SGS coefficients $\nu_t$ and $D_t$ are defined in terms
based on local flow conditions,
using turbulent scaling laws, extrapolated from an analysis
of the flow at one scale coarser, where the subgrid flow is known.
The philosophy and choices of the SGS terms are completely different among
the compact schemes, ILES and
FronTier, a fact which leads to differences in the obtained
solutions. Solution differences between FronTier and ILES were reviewed in
\cite{ZhaKamShe18}, with FronTier but not ILES showing agreement with the
data \cite{SmeYou87}. The schemes totally lacking SGS terms are even
further from the experimental data \cite{SmeYou87}.
As shown in \cite{ZhaKamShe18},
long wave length noise in the initial conditions was eliminated as
a possible explanation of the discrepancies between ILES simulations and
experimental data for the RT instability growth
rate constant $\alpha_b$.
We also note that the mixedness parameter measured in \cite{MueSch1_09} is furthest
from experiment in \cite{CabCoo06}, is improved in the Miranda simulation
code \cite{MueSch1_09}, which lacks subgrid terms but has improved modeling of
experimental parameters, and is further improved by the FronTier simulation
\cite{GliPloLim15}.
\section{Scaling laws compared}
\label{sec:scaling}
Here we focus on differences in the spectral scaling exponents. As
\cite{CabCoo06,MorOlsWhi17} employ a thinly diffused initial layers
separating two fluids of distinct densities,
the immiscible experiments of \cite{SmeYou87} are the most appropriate for
comparison. \cite{CabCoo06} does not report velocity spectral
scaling properties,
but this reference does report the very large growth of the interfacial mixing
area, \cite{CabCoo06} Fig. 6,
a phenomena which we have also observed \cite{LeeJinYu07,LimYuJin07}.
The scaling rate we observe (Fig.~\ref{fig:spectral}),
from the late time
FronTier simulations reported in \cite{ZhaKamShe18}, shows
a strong decay rate in the velocity spectrum,
resulting from a combination of the turbulent fractal decay
and a separate cascading process we refer to as stirring.
Stirring is the mixing of distinct regions in a two phase flow. It occurs in the
concentration equation and is driven by velocity fluctuations. For stirring,
the concentration equation describes the
(tracked) front between the phases. Stirring fractal behavior is
less well studied than turbulent velocity. It
accounts for the very steep velocity spectral decay seen in
Fig.~\ref{fig:spectral}. In contrast, ILES \cite{MorOlsWhi17} captures
neither the expected turbulent intermittency correction to the decay rate nor
any stirring correction beyond this.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{fft105.jpg}
\end{center}
\caption{
\label{fig:spectral}
Plot of the spectral decay rate, in log log variables, from the
two point function in log variables (as studied in \cite{Mah17}).
Numerical data from the final time step
RT simulations of experiment 105 reported in
\cite{ZhaKamShe18}.
The immiscible decay rate -3.17 reflects a combination of
turbulent intermittency and the effects of a stirring cascade.
}
\end{figure}
We summarize in Table~\ref{table:compare}
the major code comparisons of this paper,
based on the RT instability growth rate $\alpha_b$.
A compact, higher order scheme \cite{CabCoo06} has the
smallest value $\alpha_b$. ILES is larger, and the FronTier
scheme using dynamic SGS is the largest of the three,
and in agreement with experiment.
\begin{table}
\caption{
\label{table:compare}
Three types of RT simulation algorithms according to their treatment of
SGS terms and their value for $\alpha_b$, compared
to the data of \cite{SmeYou87}.
}
\begin{centering}
\begin{tabular}{|l|l|l|c|}
\hline
Code & SGS terms & solution & evaluation \\
& & properties & relative to \cite{SmeYou87} \\
\hline
\hline
compact high & No SGS & $\alpha_b \sim 0.02$ & Inconsistent\\
order \cite{CabCoo06} & & & \\
\hline
Miranda & Limited SGS & $\alpha_b \sim 0.03$ & Inconsistent\\
ILES \cite{MorOlsWhi17} & & & \\
\hline
FronTier & Dynamic & $\alpha_b \sim 0.06$ & Consistent \\
\cite{ZhaKamShe18} & SGS & & \\
\hline
\end{tabular}
\end{centering}
\end{table}
\section{Maximum entropy production rate}
\label{sec:max-entropy}
Our first main result is to establish a plausible argument
for the validity of the maximum entropy
production rate for Euler equation turbulence.
The admissibility condition is an extension of the second law of
thermodynamics, in the sense that under this extension,
the physically admissible dynamic processes are constrained
more tightly than those allowed by the second law itself.
It has
been applied successfully to many natural processes
\cite{MarSel06,MihFarPai17}
including problems in climate science (terrestrial and other planets) \cite{OzaOhmLor03}, in
astrophysics, and the clustering of galaxies. As
noted in \cite{KleDyk10}, it does not have the status of an
accepted law of physics.
A fundamental obstacle to validation of this
principle can be seen in the lack of a variational principle which combines
conservative and dissipative processes.
We avoid this
fundamental question, and more narrowly outline a possible validation of the
maximum entropy principle
in the context of Euler equation turbulence.
The variational principle we find, in this context,
specifies an extreme value for the entropy production.
As this is applied at each infinitesimal increment
of time, the maximum entropy production
principle actually guarantees a maximal rate of
entropy production. For thermal processes, such a law is well validated,
and leads to the phenomenological Fourier law for thermal conductivity.
According to multifractal theories of turbulence \cite{Fri95}, turbulence
is intermittent, with intense regions of turbulence occurring in
clusters. There is a further clustering of clusters, a process which
continues to all orders. These higher order clusters are defined
in terms of structure functions, to be introduced in Sec.~\ref{sec:vel-ent}.
Before getting into technical details, we emphasize the
central modeling assumptions that make the maximum entropy principle
valid. For each order $p$ of clustering, a fractal set is
defined. Given a length scale $l$, the fractal set at this length scale has
a measure which is exponentially small in $l$. The central physics modeling
assumption for fractal turbulence is
\begin{itemize}
\item
(Fractal)
All the energy for the $p$ level of clustering
is contained in a small fractal set $x_p$,
realized at the length scale $l$ as the set $x_{p,l}$.
The energy on the set, $E_{p,l}$, (defined at the scale $l$) is a constant.
\end{itemize}
This modeling assumption is used in the analysis of power laws
and Poisson processes describing the beta model of
Euler equation turbulence \cite{Fri95}.
It follows that the steady state energy dissipation and entropy production
of the order $p$ clustering
at length scale $l$ are given by
\begin{equation}
\label{eq:entropy-def}
E_{p,l} \int_{x_{p,l}} x dx \quad {\mathrm{and}} \quad
E_{p,l} \int_{x_{p,l}} x \ln x dx \ ,
\end{equation}
We observe that the energy occurs
outside of the integrals,
and that the term $(1-x) \ln (1-x)$ is missing from the entropy.
To model a time
dependent state which has not yet achieved equilibrium, and is still
evolving in time, the only change to
(\ref{eq:entropy-def}) is that the equations are
multiplied by the fractional equilibrium part of the state.
The log Poisson model \cite{SheLev94} selects a fractal set
to describe each order $p$ of clustering. The choice, conditional on
prior choices for smaller $p$ values, is not defined by an exponential, i.e., a
pure fractal, but a mixture of exponentials in the energy dissipation rates.
As the mixture is not
narrowly concentrated about its peak value, the applicability of
hypothesis
(Fractal) cannot be assumed. In the limit of large $p$, however,
the mixture of exponentials is narrow, so that (Fractal) is justified
for physically realizable solutions of Euler equation turbulence.
The peak values for finite $p$ are not identified in the log Poisson
analysis, which finds the mean of the mixture exponentials on the
basis of a universality hypothesis. This multifractal model,
evaluated for large $p$, is
applied uniformly to all $p$. From the excellent agreement of these predictions
with multiple experiments and simulations (1\% accuracy) \cite{SheLev94},
the log Poisson model is validated.
A plausible principle to select the physically relevant solutions from among
the multiple nonunique solutions of the Euler equation,
suggested by this analysis, is the principle of
maximal rate of energy dissipation. The analysis of \cite{SheLev94} maximizes
the mean value of competing exponentials rather than their peak value.
The mean and peak coincide in the limit of large $p$, but the distinction
between them for finite $p$ is a gap remaining in any validation argument.
The maximum energy dissipation rate is a viable candidate for the
required selection principle among nonunique solutions of the Euler equation.
Accepting this, our analysis will be complete with solutions lacking
subgrid terms and ILES seen to be invalid physically.
To the extent that some maximal entropy likelihood reasoning is applicable,
for example such as (Fractal), a maximum entropy production rate principle
for the selection of physically relevant solutions of the Euler equation
for fully developed turbulence would follow.
We refer to the highly
developed extensions \cite{StJ05,DubGra96,DubGra96a,SheWay95}
of \cite{SheLev94}. The references \cite{DubGra96,DubGra96a,SheWay95}
extend the log Poisson model to continuous $p > 0$. These references
do not resolve the issue
of either a maximum entropy production rate or a maximum energy dissipation
rate for fully developed turbulence, but they appear to offer a plausible
route for possible validation of either of these.
The dynamic equations are of Fokker-Plank type.
The dissipation operator is a sum of a conventional Laplacian, for the
thermal diffusion and an integral over $p > 0$ of the order $p$ clustering
contribution, which is a fractal, or power law dissipation, expressed in
powers of the length scale $l$.
\section{The second main result}
\label{sec:results}
In view of these observations, we note three
independent reasons for concluding that the absence of subgrid
terms or their limited presence in ILES is problematic
on physical grounds.
\begin{enumerate}
\item The two limited subgrid schemes do not satisfy
the maximum entropy production rate principle.
\item The two limited subgrid schemes are
in violation of incontrovertible experimental and
simulation evidence that the true total spectral
decay rate is more negative than $-5/3$ \cite{Fri95}.
\item These two schemes understate the dissipated energy
and are thus unphysical.
\end{enumerate}
These are logically independent statements. The order is decreasing in the
fundamental nature of the statement and increasing in the simplicity of the
assumptions. Any one of these points is sufficient to invalidate
schemes lacking subgrid terms or ILES, with limited subgrid terms.
Point 1 is the most fundamental in nature, and it is the
subject of the remainder of this paper. Point 2
rests on established laws of physics and assumes the relevance of
Kolmogorov scaling laws with their intermittency corrections to RT mixing.
Point 3 assumes nothing. Simulations, even ILES simulations, show a transfer
of energy from large to small scales. Point 3 accepts this as a physical
fact. The energy transfer
is not a numerical feature to be minimized, but a property of the
solutions to be modeled correctly. The grid level cutoff terminates this
transfer, and point 3 notes that the transfer, from the grid level to the
subgrid level is incorrectly modeled in both types of limited dissipation
schemes.
For the reader satisfied with
either points 2 or 3, the remainder of the paper can be ignored, and the
discussion has been completed,
independent of the remainder of the paper.
The problems with current computational paradigms are well summarized by
Zhou \cite{Zho17a}, Sec. 6
regarding evaluation of the RT
instability growth rate
$\alpha_b$, ``agreement between simulations and experiment are worse today
than it was several decades ago because of the availability of more
powerful computers.''
As our computational method depends on front tracking in addition to
dynamic subgrid modes (which address items 1-3 above), we additionally
quote from Zhou \cite{Zho17a}, Sec. 5.2, in discussing \cite{GeoGliLi05}:
``it was clear that accurate numerical tracking to control numerical mass
diffusion and accurate modeling of physical scale-breaking phenomena
surface tension were the critical steps for the simulations to agree with
the experiments of Read and Smeeton and Youngs''.
We raise the possibility of ILES related errors in an
analysis of the deflagration to detonation transition in type
Ia supernova. In that these simulations depend on ILES,
their predictive value may be questioned.
We propose a simple simulation search method for
rare events, in which a physics simulation code drives the turbulence
modeling. In agreement with \cite{CabCoo06}, we recommend a new
class of turbulent combustion subgrid models. See Sec.~\ref{sec:Ia}.
\subsection{Rayleigh-Taylor turbulent mixing}
\label{sec:RT}
We assess ILES in terms of the RT instability of
acceleration driven instabilities, and the prediction of the growth
rate $\alpha_b$ of this instability.
We identify situations in which ILES is in near agreement with
experimental measurement of the growth rate
in this measure \cite{Mue08,MueSch1_09}, and ones
\cite{DimYou04} where its predictions differ
by a factor of about 2 from experiments \cite{SmeYou87}.
The first case is characterized by
\begin{itemize}
\item{(a)} low levels of turbulence
\item{(b)} high levels of long wave length perturbations
(``noise'') in the initial
conditions and \item{(c)} diffusive parameters in the physics model.
\end{itemize}
Regarding item (c), we observe that the successful ILES simulations
referenced above concerned hot-cold water, with a moderate Schmidt
number of 7, whereas no results are reported for the very low diffusive
fresh-salt water channel with a Schmidt number of 600.
\subsection{Noise as an adjustable parameter}
The postulate \cite{You03} of noise in the initial data
\cite{SmeYou87}
was shown to lead to agreement of the ILES predictions with experiment.
In previous studies \cite{GliShaKam11,ZhaKamShe18}, we have shown that this
postulate is not valid.
The long wave length noise is present,
but with a sufficiently small amplitude that its influence on the
instability growth rate is about $5\%$. Thus long wave length
initial ``noise'' in the initial conditions for \cite{SmeYou87}
is not sufficient to account
for the factor of 2 discrepancy between ILES and this data.
We regard ``noise'' as a palliative, and not a fundamental principle.
The noise level is not specified, nor is its frequency spectrum,
so that standards of predictive
science are not met. As noted, ``noise'', of the required intensity,
is missing in some instances. We propose the maximum entropy production
rate as a more satisfactory solution to the problem of Euler equation
turbulence nonuniqueness.
ILES simulations have been used in the study of incompressible turbulence,
a problem with ample experimental data reviewed in \cite{SheLev94}.
In such simulations,
``noise'' is added to the initial conditions. In this case the high
frequency component of the noise is important. Agreement with experiment
is obtained. Pure ILES, with no added noise would not meet this test.
\subsection{Outline of derivation}
\label{sec:outline}
Our reasoning is based on three fundamental laws of physics:
\begin{itemize}
\item Conservation of energy, the first law of thermodynamics
\item Maximum entropy production rate, an extension of the second law of
thermodynamics
\item Universality in the clustering and compound clustering of intermittency
in fully developed turbulence.
\end{itemize}
The third item is formulated in \cite{SheLev94}.
In the multifractal description of turbulence, universality
states that the compound clusters, that is the multiple fractals
in the description of turbulent intermittency, must all obey a common
law. There can be no new physical law or parameter in passing
from one level of clustering to the next. The law is evaluated
in closed form \cite{SheLev94} in the limit that the order of the clustering
becomes infinite.
It is a power scaling law. By universality, this
law is then applied to clustering at all orders.
The universality theories are developed for single constant density
incompressible turbulence. Our use in a variable
density context is an extrapolation of these theories
beyond their domain of strict validated applicability. Scaling
laws are similarly extrapolated. Such extrapolations are widely
used (and verified) in simulation studies. For convenience,
the Reynolds stress analysis uses this approximation.
In shock wave modeling, the Euler equation shock
wave introduces a Gibbs phenomena of overshoot. The instability resulting
is removed by dissipation (artificial viscosity, and its modern
variants) of the minimum amount to just prevent the overshoot.
The turbulent cascade of energy is not a Gibbs phenomena. It is
an observable fact and not a numerical artifact.
Minimizing its magnitude is an error, as opposed to
an accurate model of the mesh dissipated error in the dynamic SGS models.
We proceed in the following steps. Using the Reynolds stress, we
express the SGS terms to be modeled as a truncated two point function.
In this formulation, we identify the minimum (ILES) and maximum
(dynamic SGS) alternatives.
We then proceed
from velocity fluctuations
to the energy dissipation rate $\epsilon$ and from the latter to
the entropy production rate. At each step we are looking at truncated
two point functions. At the end, we are looking at the
entropy production rate and must choose the
solution with maximum entropy production rate.
Each step is monotone and preserves the minimum-maximum choice.
Reasoning backwards, we see that the maximum
choice is needed at the outset, and so ILES is inadmissible.
The transition, from velocities to energy truncated two point functions,
has two components. The first is a scaling analysis to show equivalence,
but in the process the order of clustering changes. The second component
in this transition
is to apply universality: all orders of clustering must obey a common
minimum-maximum choice.
\subsection{From velocities to entropy}
\label{sec:vel-ent}
\subsubsection{Reynolds stress}
The Reynolds stress results from regarding the mesh values as
cell averaged quantities. This creates an obvious problem for nonlinear
terms of the Euler equation. From the momentum equation, the quadratic
nonlinearity is replaced by the product of the cell mean values. The
resulting error, transferred to the RHS of the momentum equation is the
negative of the gradient of the Reynolds stress, defined as
\begin{equation}
\label{eq:rey}
R = \overline{ v^2} - \overline{v} ~ \overline{v}
\end{equation}
in the case of constant density, with a more complex expression involving
density weighted (Favre) averages in the variable density case.
The added force term $-\nabla R$ on the right hand side (RHS)
of the momentum equation is modeled
as $\nu_t \Delta v$. Thus we see that the minimum and maximum values for
the energy dissipation rate $\nu_t$ correspond to
minimum and maximum values for models of $-\nabla R$. $R$, as a truncated
two point function, vanishes as its argument becomes infinite and is peaked
at the origin. Thus minimum and maximum values for $-\nabla R$
correspond to minimum and maximum values for $R$ itself.
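A minimal constant-density illustration of (\ref{eq:rey}) is the following sketch, which coarsens a finely resolved one-dimensional velocity field into cells and evaluates the cell-wise Reynolds stress; the block size and variable names are ours.
\begin{verbatim}
import numpy as np

def reynolds_stress_1d(v, cells):
    """Cell-averaged Reynolds stress R = bar(v^2) - bar(v)*bar(v) on a 1D fine
    grid coarsened into blocks of `cells` points (constant-density case)."""
    v = v[: (len(v) // cells) * cells].reshape(-1, cells)
    vbar = v.mean(axis=1)
    v2bar = (v ** 2).mean(axis=1)
    return v2bar - vbar ** 2  # nonnegative, peaked where subgrid fluctuations are large
\end{verbatim}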
\subsubsection{Velocities to energy}
As technical preparation for the analysis of this section, we define the
structure functions. They make precise the intuitive picture
of multiple orders of clustering for intermittency.
There are two families of structure functions, one for
velocity fluctuations and the other for the energy
dissipation rate $\epsilon$. The structure functions are the expectation
value of the $p^{th}$ power of the variable. For each value of $p$,
they define a fractal and satisfy a power law in their decay in a scaling
variable $l$. The structure functions and the associated scaling exponents
$\zeta_p$ and $\tau_p$ are defined as
\begin{equation}
\label{eq:zeta-tau}
\langle \delta v_l^p \rangle \sim l^{\zeta_p}
\quad {\mathrm{and}} \quad
\langle \epsilon_l^p \rangle \sim l^{\tau_p}
\end{equation}
where $\delta v_l$ and $\epsilon_l$ are respectively
the averages of velocity differences and
of $\epsilon$ over a ball of size $l$.
The two families of exponents are related by a simple scaling law
\begin{equation}
\label{eq:zeta-tau2}
\zeta_p = p/3 + \tau_{p/3}
\end{equation}
derived on the basis of scaling laws and dimensional analysis \cite{Kol62}.
This would seem to accomplish the velocity fluctuation
to energy dissipation rate
step, preserving the minimum vs. maximum choice,
but it does not, because
the value of $p$ to which it applies has changed.
To fill this gap, we turn to the assumption of universality formulated
in terms of the $\tau_p$ \cite{SheLev94}, and as explained
with mathematical formalisms replacing some of the reasoning of a theoretical
modeling nature, \cite{SheWay95,DubGra96,DubGra96a,SheZha09,LiuShe03}.
As a function of $p$,
$\tau_p$ is a fractional order cubic, defined in terms of a fractional
order dissipative operator with a fractional order exponent $\beta$.
This relation is derived exactly in the limit as $p \rightarrow \infty$,
and in the name of universality, then applied to all values of $p$.
As a monotone fractional order cubic, it follows that the minimum-maximum choice
for any $p$ is reflected in the same choice for all $p$. We have thereby
completed the velocity to energy dissipation rate step, and
preserved the minimum vs. maximum choice.
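To make the structure functions in (\ref{eq:zeta-tau}) concrete, the following illustrative sketch estimates the velocity exponents $\zeta_p$ from a one-dimensional velocity sample by a log-log fit; the separations and orders used here are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

def velocity_structure_exponents(v, dx, orders=(1, 2, 3, 4),
                                 seps=(2, 4, 8, 16, 32)):
    """Empirical S_p(l) = < |v(x+l) - v(x)|^p > and log-log fit of zeta_p in
    S_p(l) ~ l^{zeta_p}.  K41 without intermittency corrections would give
    zeta_p close to p/3."""
    ls = np.array(seps, dtype=float) * dx
    zetas = {}
    for p in orders:
        Sp = [np.mean(np.abs(v[s:] - v[:-s]) ** p) for s in seps]
        zetas[p], _ = np.polyfit(np.log(ls), np.log(Sp), 1)
    return zetas
\end{verbatim}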
From the modeling principle (Fractal), the
energy dissipation rate is maximized exactly when the
entropy production rate is maximized.
The maximum choice for the entropy production rate is required
and the minimum choice is inadmissible. Reasoning backwards to the
original energy dissipation choices, the minimum rate of energy
dissipation (ILES) is inadmissible.
\section{Significance: an example}
\label{sec:Ia}
For simulation modeling of turbulent flow nonlinearly coupled to other
physics (combustion and reactive flows, particles embedded in turbulent flow,
radiation), the method of dynamic SGS turbulent flow models, which only
deals with average subgrid effects, may be insufficient. In such cases,
the turbulent fluctuations or the full two point correlation function
is a helpful component of SGS modeling. Such a goal is only partially
realized in the simplest of cases, single density incompressible
turbulence. For highly complex physical processes, the knowledge of the
domain scientist must still be retained, and it appears to be
more feasible to bring
multifractal modeling ideas into the domain science communities.
In this spirit, we propose here a simple method
for the identification of (turbulence related) extreme events
through a modification of adaptive mesh refinement (AMR), which we
call Fractal Mesh Refinement (FMR). We propose FMR to seek a
deflagration to detonation transition (DDT)
in type Ia supernova.
FMR allows high levels of strongly focused resolution.
The method is proposed to assess the extreme events generated
by multifractal turbulent nuclear deflagration. Such events, in a white
dwarf type Ia supernova progenitor, are assumed to lead to DDT,
which produces the observed type Ia supernova.
See \cite{ZinAlmBar17,CalKruJac12} and references cited there.
FMR refines the mesh not adaptively where needed,
but only in the most highly critical regions where most important, and
thereby may detect
DDT trigger events within large volumes at a feasible computational cost.
The detailed mechanism for DDT is presumed to be diffused
radiative energy
arising from some local combustion event of extreme
intensity, in the form of a convoluted flame front, embedded in
a nearby volume of unburnt stellar material close to
ignition.
Consistent with the Zeldovich theory \cite{Lee08}, a wide spread
ignition and explosion may result.
FMR refinement criteria will search for such
events. In this plan, the FMR search should avoid ILES.
See \cite{Gli18}.
There is a minimum length scale for wrinkling of a turbulent
combustion front, called the Gibson scale. Mixing can proceed in the
absence of turbulence, via stirring. Thus the Gibson scale is not the
correct limiting scale for a DDT event. Stirring, for a flame front,
terminates at a smaller scale, the width of the flame itself. The analysis of
length scales must also include correctly modeled transport for charged
ions \cite{MelLimRan14},
which can be orders of magnitude larger than those inferred
from hydro considerations. The
microstructure of mixing for a flame front could be thin flame regions
surrounded by larger regions of burned and unburned stellar material
(as with a foam of soap bubbles, with a soap film between the bubbles).
Here again multifractal and entropy issues appear to be relevant,
although not subject to theoretical analysis comparable to that
of multifractal for turbulent flow.
A multifractal clustering of smaller bubbles separated by flame fronts
can be anticipated, and
where a sufficient fraction of these bubbles are unburnt stellar material,
a trigger for DDT could occur.
FMR, with its narrow focus on extreme events, will come closer to discovering
such DDT triggers than will an AMR algorithm design.
For this purpose, the astrophysics code should be based on dynamic subgrid
SGS, not on ILES.
We return to the discussion of \cite{CabCoo06}. Our FronTier computations
of a 2D interface surface length are in qualitative agreement with those of
\cite{CabCoo06} for the surface area.
Such models of interface area should be the basis for subgrid scale modeling
of the turbulent flame intensity. Work is currently in progress to
construct an experimentally validated subgrid scale microstructure to
complement models of turbulent flame surface area. These subgrid
models may play a role in reaching beyond length scales reachable by FMR.
\section{Conclusions }
\label{sec:conc}
We have shown that the ILES algorithm for the solution of Euler equation
turbulence is inadmissible physically. It is in violation of the
physical principle of maximum rate of entropy production.
We have explained observations of experimental flows for which this
error in ILES has only a minor effect. They are associated with high levels
of noise in the initial conditions, low levels of turbulent intensity
and diffusive flow parameters.
Prior work, e.g., \cite{GliShaKam11,GeoGliLi05,ZhaKamShe18},
pertains to simulation validation studies of RT instability
experiments with a stronger intensity of turbulence
and for which such significant long wave length perturbations to the
initial data are missing. In these experiments, the present analysis
provides a partial explanation for the factor of about 2 discrepancy between
observed and ILES predicted instability growth rates.
We have noted the potential for ILES related errors to influence
ongoing scientific investigations, including the search for
DDT in type Ia supernova.
We believe V\&V standards should include an analysis of the physical
relevance of proposed solutions to flow problems, specifically turbulent
and stirring problems.
The ILES simulations of the experiments of \cite{SmeYou87} fail this
test by a factor of 2 in the RT growth rate $\alpha_b$, and
on this basis we judge them to be physically inadmissible.
We recognize that the conclusions of this paper will be controversial within
the ILES and high order compact turbulent simulation communities.
A deeper consideration of the
issues raised here is a possible outcome.
The issues to be analyzed are clear:
\begin{itemize}
\item
Is the transport of energy and concentration, blocked at the
grid level, to be ignored entirely \cite{CabCoo06}?
\item
Is it to be regarded as a Gibbs phenomena \cite{MorOlsWhi17},
and thus to be minimized?
\item
Is it a physical phenomena,
to be modeled accurately \cite{GerPioMoi91,MoiSquCab91}?
\end{itemize}
If the response to this paper is an appeal to
consensus (everyone else is doing it),
the argument fails. Consensus is of course a weak argument, and
one that flies in the face of standards of V\&V. More significantly,
there is a far larger engineering community
using dynamic SGS models in the design of engineering structures
tested in actual practice.
This choice is backed by nearly three decades of
extensive experimental validation. It is further used
to extend the calibration range of RANS simulations beyond available
experimental data. The resulting RANS, calibrated to dynamic SGS LES data,
are widely used in the design and optimization
of engineering structures; these are also tested in real applications.
Consensus in this larger community overwhelms the ILES consensus
by its sheer magnitude, and ILES loses the consensus argument.
\section{Acknowledgements}
\label{ack}
Use of computational support by the Swiss National Supercomputing Centre is gratefully acknowledged.
Los Alamos National Laboratory Preprint LA-UR-18-30837.
\section{Introduction}
\input{sections/Introduction}
\section{Background and Related Work}
\input{sections/RelatedWork}
\section{The Solution Framework}
\input{sections/NeuralNetworkDesign}
\section{The Gas Transport Model} \label{sec:The Gas Transport Model}
\input{sections/GasTransportModel}
\section{Computational Experiments} \label{sec:Computational Experiments}
\input{sections/ComputationalExperiments}
\section{Computational Results} \label{sec:Computation Results}
\input{sections/Results}
\section{Conclusion}
\input{sections/Conclusion}
\section*{Acknowledgements}
\input{sections/Funding}
\bibliographystyle{abbrv}
\subsection{Data Generation}
As mentioned previously, acquiring gas network data is notoriously difficult \cite{yueksel2020lessons, kunz2017electricity}. Perhaps because of this difficulty, there exists no standard method for generating valid states for a fixed gas network. Below we outline our methods for generating synthetic transient gas instances for training purposes, i.e. generating $\ensuremath{\pi}\xspace \in \ensuremath{\Pi}\xspace$ and artificial \ensuremath{z_{1}}\xspace values. For our application of transient gas instances, \ensuremath{\pi}\xspace is a tuple of a boundary forecast and an initial state.
\subsubsection{Boundary Forecast Generation}
\label{subsection:Boundary Forecast Generation}
We consider network stations as our gas network topology. They contain all heavy machinery and at most only short segments of large scale transport pipelines. As such, our gas networks cannot be used to store large amounts of gas. We thus aim to generate balanced demand scenarios, with the requirement described as follows:
\begin{align}
\sum_{v \in \ensuremath{\setVertices^\text{b}}\xspace} \demandInflow{v}{t} = 0 \quad \forall t \in \ensuremath{\mathcal{T}}\xspace \label{eq:forecast_1}
\end{align}
The distribution of gas demand scenarios is not well known. Hence we naively assume a uniform distribution, and using the largest absolute flow value found over any node and time step in our real-world data, create an interval as follows:
\begin{align}
\begin{split}
M_{\text{q}} &= \max_{v \in \ensuremath{\setVertices^\text{b}}\xspace, t \in \ensuremath{\mathcal{T}}\xspace} | \demandInflow{v}{t} | \\
\demandInflow{v}{t} &\in \interval{-1.05M_{\text{q}}}{1.05M_{\text{q}}}
\end{split}\label{eq:forecast_2}
\end{align}
In addition to the above, we impose three MILP-formulation-specific requirements. The first is that the absolute difference between the flow values of a node at adjacent time steps is not too large. Secondly, the sign of the generated flow values must match the attribute of the boundary node, i.e., entry (+), exit (-). Thirdly, the flow values must not differ too greatly between boundary nodes of the same \textit{fence group} within the same time step. A fence group is denoted by $g\in \ensuremath{\mathcal{G}}\xspace$, and enforces the sign of all nodes in the group to be identical. These constraints are described below:
\begin{align}
\begin{split}
| \demandInflow{v}{t} - \demandInflow{v}{t-1} | &\leq 200 \quad \forall t \in \ensuremath{\mathcal{T}}\xspace, \quad v \in \ensuremath{\setVertices^\text{b}}\xspace \\
\text{sign}(\demandInflow{v}{t}) &=
\begin{cases}
1 \quad \text{if} \quad v \in \ensuremath{\setVertices^{+}}\xspace \\
-1 \quad \text{if} \quad v \in \ensuremath{\setVertices^{-}}\xspace
\end{cases} \forall t \in \ensuremath{\mathcal{T}}\xspace, \quad v \in \ensuremath{\setVertices^\text{b}}\xspace \\
| \demandInflow{v_{1}}{t} - \demandInflow{v_{2}}{t} | &\leq 200 \quad \forall t \in \ensuremath{\mathcal{T}}\xspace, \quad v_{1},v_{2} \in g, \hspace{0.5em} g\in \ensuremath{\mathcal{G}}\xspace, \hspace{0.5em} v_{1},v_{2} \in \ensuremath{\setVertices^\text{b}}\xspace
\end{split} \label{eq:forecast_3}
\end{align}
To generate demand scenarios that satisfy constraints \eqref{eq:forecast_1} and \eqref{eq:forecast_2}, we use the method proposed in \cite{rubin1981bayesian}. Its original purpose was to generate samples from the Dirichlet distribution, but it can be used for a special case of the Dirichlet distribution that is equivalent to a uniform distribution over a simplex in 3-dimensions. Such a simplex is exactly described by \eqref{eq:forecast_1} and \eqref{eq:forecast_2} for each time step. Hence we can apply it for all time-steps and reject all samples that do not satisfy constraints \eqref{eq:forecast_3}. Note that this method is insufficient for network stations with more than three boundary nodes.
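The construction can be summarized by the following Python sketch, which generates a balanced three-node flow forecast with per-step bounds and the smoothness constraint between adjacent time steps; it is illustrative only, replaces the Dirichlet-based sampler of \cite{rubin1981bayesian} with a simple box-plus-rejection construction, and omits the sign and fence-group requirements of \eqref{eq:forecast_3} for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_balanced_flows(M_q, T, max_step=200.0, max_tries=100_000):
    """Flow forecast for 3 boundary nodes: each time step sums to zero, every
    flow lies in [-1.05*M_q, 1.05*M_q], and consecutive steps of a node differ
    by at most max_step."""
    b = 1.05 * M_q
    forecast, prev = [], None
    for _ in range(T):
        for _ in range(max_tries):
            q12 = rng.uniform(-b, b, size=2)
            q = np.append(q12, -q12.sum())       # balance the time step
            if abs(q[2]) > b:
                continue                         # respect the flow bounds
            if prev is not None and np.any(np.abs(q - prev) > max_step):
                continue                         # smoothness between steps
            forecast.append(q)
            prev = q
            break
        else:
            raise RuntimeError("rejection sampling failed")
    return np.array(forecast)                    # shape (T, 3)
\end{verbatim}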
In addition to flow demands, we require a pressure forecast for all boundary nodes. Our only requirements here are that the pressures of a single node do not fluctuate heavily between adjacent time steps and that the bounds are respected. We create a bound on the range of pressure values by finding maximum and minimum values over all nodes and time steps in our test set. We once again assume our samples to be uniformly distributed and sample appropriately over \eqref{eq:forecast_4}, rejecting samples that do not respect constraint \eqref{eq:forecast_5}. Note that many scenarios generated by this approach are unlikely to happen in practice, as the pressure and flow profiles may not match.
\begin{align}
\begin{split}
M_{\text{p}}^{+} &= \max_{v \in \ensuremath{\setVertices^\text{b}}\xspace, t \in \ensuremath{\mathcal{T}}\xspace} \demandPressure{v}{t} \quad \quad
M_{\text{p}}^{-} = \min_{v \in \ensuremath{\setVertices^\text{b}}\xspace, t \in \ensuremath{\mathcal{T}}\xspace} \demandPressure{v}{t} \\
\demandPressure{v}{t} &\in \interval{M_{\text{p}}^{-} - 0.05(M_{\text{p}}^{+} - M_{\text{p}}^{-})}{M_{\text{p}}^{+} + 0.05(M_{\text{p}}^{+} - M_{\text{p}}^{-})}
\end{split} \label{eq:forecast_4}
\end{align}
\begin{align}
| \demandPressure{v}{t} - \demandPressure{v}{t-1} | &\leq 5 \quad \forall t \in \ensuremath{\mathcal{T}}\xspace, \quad v \in \ensuremath{\setVertices^\text{b}}\xspace \label{eq:forecast_5}
\end{align}
Combining the two procedures from above yields the artificial forecast data generation method described in Algorithm \ref{alg:boundary_data_generator}.
\begin{algorithm}[H]
\SetAlgoLined
\KwResult{A forecast of pressure and flow values over the time horizon}
flow\_forecast = Sample simplex \eqref{eq:forecast_1}\eqref{eq:forecast_2} uniformly, rejecting via \eqref{eq:forecast_3} \;
pressure\_forecast = Sample \eqref{eq:forecast_4} uniformly, rejecting via \eqref{eq:forecast_5} \;
\Return (flow\_forecast, pressure\_forecast)
\caption{Boundary Value Forecast Generator}
\label{alg:boundary_data_generator}
\end{algorithm}
\subsubsection{Operation Mode Sequence Generation}
\label{subsection:Operation Mode Sequence Generation}
During offline training, \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace requires optimal solutions for a fixed $z_{1}$. In Algorithm \ref{alg:op_mode_generator} we outline a naive yet effective approach of generating reasonable $z_{1}$ values, i.e., operation mode sequences:
\begin{algorithm}
\SetAlgoLined
\KwResult{An Operation Mode per time step}
operation\_modes = $\lbrack$ $\rbrack$ \;
\For{$t = 1;\ t < | \ensuremath{\mathcal{T}}\xspace |;\ t = t + 1$}{
\uIf{$t == 1$}{
new\_operation\_mode = rand($\ensuremath{\mathcal{O}}\xspace$) \;
}
\ElseIf{rand(0,1) $\geq$ 0.9}{
new\_operation\_mode = rand($\ensuremath{\mathcal{O}}\xspace \setminus$ new\_operation\_mode) \;
}
operation\_modes.append(new\_operation\_mode) \;
}
\Return operation\_modes
\caption{Operation Mode Sequence Generator}
\label{alg:op_mode_generator}
\end{algorithm}
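A direct Python rendering of Algorithm~\ref{alg:op_mode_generator} reads as follows (illustrative; the seed, the function name and the convention of returning one mode per time step are ours):
\begin{verbatim}
import random

def operation_mode_sequence(modes, T, switch_prob=0.1, seed=0):
    """Start from a random operation mode and, at each later time step,
    switch to a different random mode with probability switch_prob
    (the rand(0,1) >= 0.9 test in Algorithm 2 corresponds to 0.1)."""
    rng = random.Random(seed)
    current = rng.choice(list(modes))
    seq = [current]
    for _ in range(1, T):
        if rng.random() < switch_prob:
            current = rng.choice([m for m in modes if m != current])
        seq.append(current)
    return seq
\end{verbatim}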
\subsubsection{Initial State Generation}
\label{subsection:Initial State Generation}
Many coefficients of $A_{\pi}$ are invariant due to the static network topology. Many others, however, are found by substituting multiple parameters into an equation describing gas properties. This information is contained in the initial state, and we generate these values similarly to the boundary forecasts:
\begin{align}
\begin{split}
c_{\text{state}} &\in \text{$\{$Temperature, Inflow Norm Density, Molar Mass,} \\
\text{Pseudo } & \text{Critical Temperature, Pseudo Critical Pressure$\}$}
\end{split} \\
\begin{split}
M_{\text{c}}^{+} &= \max_{\text{state} \in \text{initial states}} c_{\text{state}} \quad \quad M_{\text{c}}^{-} = \min_{\text{state} \in \text{initial states}} c_{\text{state}} \\
c_{\text{state}} &\in \interval{M_{\text{c}}^{-} - 0.05(M_{\text{c}}^{+} - M_{\text{c}}^{-})}{M_{\text{c}}^{+} + 0.05(M_{\text{c}}^{+} - M_{\text{c}}^{-})}
\end{split}\label{eq:generated_constants}
\end{align}
We now have the tools to generate synthetic initial states, see Algorithm \ref{alg:initial_state_generator}.
Algorithm \ref{alg:initial_state_generator} is designed to output varied and valid initial states w.r.t.\ our MILP formulation. However, it comes with some drawbacks. Firstly, the underlying distributions of demand scenarios for both flow and pressure are probably neither uniform nor conditionally independent. Moreover, the sampling range we use is significantly larger than that of our test set, as we take single maximum and minimum values over all nodes. Secondly, the choice of operation modes that occur in reality is also not uniform: some operation modes occur with a much greater frequency than others. Our data is thus more dynamic than reality, and likely to contain operation mode choices that do not match the demand scenarios. Finally, we rely on a MILP solver to generate new initial states in our final step. Hence, we cannot rule out the possibility of a slight bias; one example would be a repeated scenario with multiple optimal solutions for which the MILP solver always returns the identical solution.
\begin{algorithm}[H]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\Input{Desired time-step distance $j \in [1, \cdots, k]$}
\KwResult{An initial state to the transient gas optimisation problem}
flow\_forecast, pressure\_forecast = Boundary Prognosis Generator() \footnote{See Algorithm \ref{alg:boundary_data_generator}} \;
gas\_constants = Sample
\eqref{eq:generated_constants} uniformly \;
initial\_state = Select random state from real-world data \;
\ensuremath{\pi}\xspace = (flow\_forecast, pressure\_forecast, gas\_constants, initial\_state) \footnote{Note that in general our \ensuremath{\pi}\xspace does not include gas\_constants. This is because the information is generally encoded in initial\_state. Our gas\_constants in this context are randomly generated however, and may not match the initial\_state. This does not affect solving as these values are simply taken as truths.} \;
\ensuremath{z_{1}}\xspace = Operation Mode Sequence Generator() \footnote{See Algorithm \ref{alg:op_mode_generator}} \;
\ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace = generate from \ensuremath{\pi}\xspace and \ensuremath{z_{1}}\xspace \;
$($ state\_1, $\cdots$, state\_k $)$ = Optimal solution states from solving \ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace \;
\Return state\_j
\caption{Initial State Generator}
\label{alg:initial_state_generator}
\end{algorithm}
\begin{algorithm}[H]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\Input{num\_states, num\_scenarios, time\_step\_difference}
\KwResult{num\_scenarios many gas instances and their optimal solutions}
initial\_states = [] \;
\For{$i = 0;\ i < \text{num\_states};\ i = i + 1$}{
initial\_states.append(Initial State Generator(time\_step\_difference))\footnote{See Algorithm \ref{alg:initial_state_generator}}\;
}
forecasts = [] \;
\For{$i = 0;\ i < \text{num\_scenarios};\ i = i + 1$}{
flow\_forecast, pressure\_forecast = Boundary Prognosis Generator()\footnote{See Algorithm \ref{alg:boundary_data_generator}}\;
forecasts.append((flow\_forecast, pressure\_forecast)) \;
}
solve\_data = [] \;
\For{$i = 0;\ i < \text{num\_scenarios};\ i = i + 1$}{
\ensuremath{z_{1}}\xspace = Operation Mode Sequence Generator() \footnote{See Algorithm \ref{alg:op_mode_generator}} \;
initial\_state = Uniformly select from initial\_states \;
\ensuremath{\pi}\xspace = (forecasts[i], initial\_state) \;
\ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace = Create MILP from \ensuremath{\pi}\xspace and \ensuremath{z_{1}}\xspace \;
solution = Solve \ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace \;
solve\_data.append((\ensuremath{z_{1}}\xspace, \ensuremath{\pi}\xspace, solution)) \;
}
\Return solve\_data
\caption{Synthetic Gas Data Generator}
\label{alg:data_generator}
\end{algorithm}
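A compact Python sketch of Algorithm \ref{alg:data_generator} is given below; the four callables passed in are placeholders standing in for Algorithms \ref{alg:boundary_data_generator}, \ref{alg:op_mode_generator}, and \ref{alg:initial_state_generator} and for building and solving \ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace, so the snippet only shows how the pieces are wired together.
\begin{verbatim}
import random

def synthetic_gas_data(num_states, num_scenarios, time_step_difference,
                       initial_state_generator, forecast_generator,
                       mode_generator, build_and_solve_milp):
    """Sketch of the synthetic gas data generator; the callables are placeholders."""
    initial_states = [initial_state_generator(time_step_difference)
                      for _ in range(num_states)]
    forecasts = [forecast_generator() for _ in range(num_scenarios)]
    solve_data = []
    for forecast in forecasts:
        z1 = mode_generator()
        pi = (forecast, random.choice(initial_states))
        solution = build_and_solve_milp(pi, z1)
        solve_data.append((z1, pi, solution))
    return solve_data

# Toy usage with trivial placeholder callables:
data = synthetic_gas_data(
    num_states=2, num_scenarios=3, time_step_difference=8,
    initial_state_generator=lambda j: {"state": j},
    forecast_generator=lambda: {"flow": [], "pressure": []},
    mode_generator=lambda: ["mode_0"] * 12,
    build_and_solve_milp=lambda pi, z1: {"objective": 0.0})
\end{verbatim}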
In the case of initial state generation, we believe that further research needs to be performed. Our method is effective in the context of machine learning where we aim for a diverse set of data, but it is naive and incapable of ensuring that generated boundary scenarios are realistic.
\subsubsection{Complete Transient Gas Instance Generation}
To train \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace and \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace, we need both the transient gas transportation scenario, and an optimal solution for it. Combining the generation methods for synthetic data in subsections \ref{subsection:Boundary Forecast Generation}, \ref{subsection:Operation Mode Sequence Generation}, \ref{subsection:Initial State Generation}, and the solving process of the created instances, we derive Algorithm \ref{alg:data_generator}.
\subsection{Experimental Design}
We generated our initial training and validation sets offline. To do so we use Algorithm \ref{alg:data_generator} with inputs:
num\_states = $10^{4}$, num\_scenarios = $4\times 10^{6}$, and time\_step\_difference = 8.
This initial training data is exclusively used for training \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace, and is split into a training set of size $3.2\times 10^{6}$, a test set of $4\times 10^{5}$, and a validation set of $4\times 10^{5}$.
The test set is evaluated at every epoch, while the validation set is only consulted at the end of the initial training. Following this initial training, we begin to train \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace as a whole, alternating between \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace and \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace. The exact algorithm is given in Algorithm \ref{alg:train}, which references functions provided in Appendix \ref{sec:appendix}. For training, we used the Adam algorithm \cite{kingma2014adam} as our descent method. The parameters of this algorithm and a complete set of other training parameters are listed in Table \ref{tab:initial_discriminator}; for parameters not listed, the default values were used. The intention behind our training method is to ensure that \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace receives no real-world data prior to its final evaluation. With this method we hope to show that synthetic data is sufficient for training purposes and that \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace successfully generalises to additional data sets. However, we should note that Algorithm \ref{alg:initial_state_generator} does use real-world data as a starting point from which to generate artificial data.
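As a sketch of this setup, the data split and optimiser construction could look as follows in PyTorch; the feature dimension, the placeholder model, and the learning rate are illustrative assumptions and not the values of Table \ref{tab:initial_discriminator}.
\begin{verbatim}
import torch
from torch.utils.data import TensorDataset, random_split

# Dummy stand-in data; in our setting the samples come from Algorithm (data_generator).
features = torch.randn(4_000_000, 10)   # 10 input features is an illustrative assumption
targets = torch.randn(4_000_000, 1)
dataset = TensorDataset(features, targets)

train_set, test_set, val_set = random_split(dataset, [3_200_000, 400_000, 400_000])

model = torch.nn.Linear(10, 1)          # placeholder for the discriminator
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr is illustrative
\end{verbatim}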
\newpage
\input{sections/TrainingAlgo}
We consider the solution of \ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace as a primal heuristic for the original problem \ensuremath{\mathbb{P}_{\pi}}\xspace. Due to our usage of slack, i.e., the application of variables $x_{2}$, any valid solution of \ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace is a valid solution of \ensuremath{\mathbb{P}_{\pi}}\xspace. We aim to incorporate \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace in a global MIP context and do this by using a partial solution of \ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace as a warm-start suggestion for \ensuremath{\mathbb{P}_{\pi}}\xspace. The partial solution consists of \ensuremath{\hat{z_{1}}}\xspace, an additional set of binary variables called the flow directions, which are a subset of $z_{2}$ in \eqref{eq:MIP}, and \varPressure{v}{t} $ \forall v \in \ensuremath{\setVertices^\text{b}}\xspace, t \in \ensuremath{\mathcal{T}}\xspace$, which are a subset of $x_{1}$ in \eqref{eq:MIP}. We use partial rather than complete solutions because the instances are numerically difficult. In doing so, we hope to generate valid solutions quickly and to speed up the global solution process. The primal heuristic and warm-start algorithm can be seen in Algorithms \ref{alg:primal_heuristic} and \ref{alg:warm_start} respectively. \\
\begin{algorithm}[H]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\Input{\ensuremath{\mathbb{P}_{\pi}}\xspace}
\ensuremath{\hat{z_{1}}}\xspace = \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace(\ensuremath{\pi}\xspace) \;
\ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace = Create MILP from \ensuremath{\pi}\xspace and \ensuremath{\hat{z_{1}}}\xspace \;
solution = Solve \ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace \;
\Return solution \;
\KwResult{Optimal solution of \ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace, primal solution of \ensuremath{\mathbb{P}_{\pi}}\xspace.}
\caption{Primal Heuristic}
\label{alg:primal_heuristic}
\end{algorithm}
\begin{algorithm}[H]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\Input{\ensuremath{\mathbb{P}_{\pi}}\xspace}
primal\_solution = Primal Heuristic(\ensuremath{\mathbb{P}_{\pi}}\xspace) \footnote{See Algorithm \ref{alg:primal_heuristic}} \;
optimum = Solve \ensuremath{\mathbb{P}_{\pi}}\xspace with primal\_solution as a warm-start suggestion \;
\KwResult{Optimal solution of \ensuremath{\mathbb{P}_{\pi}}\xspace}
\caption{Warm Start Algorithm}
\label{alg:warm_start}
\end{algorithm}
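As an illustration of how such a warm start can be passed to the solver, the following gurobipy sketch sets the \texttt{Start} attribute of the variables contained in a partial solution and lets Gurobi complete the remaining values; the toy model, the variable names, and the dictionary format of the partial solution are assumptions for illustration only.
\begin{verbatim}
import gurobipy as gp

def warm_start(model, partial_solution):
    """Pass a (possibly partial) solution to Gurobi as a MIP start."""
    model.update()
    for name, value in partial_solution.items():
        var = model.getVarByName(name)
        if var is not None:
            var.Start = value   # Gurobi completes the remaining variables itself
    model.optimize()
    return model

# Toy usage with a hypothetical model and variable names.
m = gp.Model()
x = m.addVar(vtype=gp.GRB.BINARY, name="x")
y = m.addVar(lb=0.0, name="y")
m.setObjective(x + y, gp.GRB.MINIMIZE)
m.addConstr(x + y >= 1)
warm_start(m, {"x": 1.0})
\end{verbatim}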
For our experiments we used PyTorch 1.4.0 \cite{paszke2019pytorch} as our ML modelling framework, Pyomo v5.5.1 \cite{hart2017pyomo, hart2011pyomo} as our MILP modelling framework, and Gurobi v9.02 \cite{gurobi} as our MILP solver. The MILP solver settings are available in Table \ref{tab:mip_params} in Appendix \ref{sec:appendix}. \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace was trained on a machine running Ubuntu 18, with 384\,GB of RAM, composed of 2x \emph{Intel(R) Xeon(R) Gold 6132} running $@$ 2.60GHz, and 4x \emph{NVIDIA Tesla V100 GPU-NVTV100-16}. The final evaluation times were performed on a cluster using 4 cores and 16\,GB of RAM of a machine composed of 2x \emph{Intel Xeon CPU E5-2680} running $@$ 2.70\,GHz.
Our validation set for the final evaluation of \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace consists of 15 weeks of live real-world data from our project partner OGE. Instances are on average 15 minutes apart for this period and total 9291.
All instances, both in training and test, contain 12 time steps (excluding the initial state) with 30 minutes between each step. Additionally, we focus on Station D from \cite{hennings2020controlling} and present only results for this station. The statistics for Station D can be seen in Table \ref{tab:stationStatistics}, and its topology in Figure \ref{fig:porz_topology}. Station D can be thought of as a T intersection and is of average complexity compared to the stations presented in \cite{hennings2020controlling}. The station contains 6 boundary nodes, but they are paired such that for each pair only one can be active, i.e., have non-zero flow. Due to this, our sampling method in subsection \ref{subsection:Boundary Forecast Generation} operates in 3 dimensions and is uniform $\forall t \in \ensuremath{\mathcal{T}}\xspace$.
\begin{table}[ht]
\centering
\begin{tabular}{*{8}{c}}
Name & $|\ensuremath{\mathcal{V}}\xspace|$ & $|\ensuremath{\mathcal{A}}\xspace|$ & $\frac{\sum\limits_{a\in\ensuremath{\setArcs^{\text{pi}}}\xspace}\pipeLength{a}}{|\ensuremath{\setArcs^{\text{pi}}}\xspace|}$ &
$|\setCompressorConfigurations{a}| \enskip \forall a\in\ensuremath{\setArcs^{\text{cs}}}\xspace$ &
$|\ensuremath{\mathcal{O}}\xspace|$ & $|\ensuremath{\setVertices^\text{b}}\xspace|$ & $|\ensuremath{\setArcs^{\text{va}}}\xspace|$ \\
\midrule
D & 31 & 37 & 0.404 km & 2, 6 & 56 & 3x2 & 11 \\
\end{tabular}
\caption{Overview of different properties of station D.}
\label{tab:stationStatistics}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2, angle=0]{sections/Pictures/porz_1.png}
\caption{Topology of Station D.}
\label{fig:porz_topology}
\end{figure}
\subsection{Exact Network Designs}
As a large portion of our input data into both \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace and \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace is time-expanded, we originally believed that the ideal design would be a series of LSTMs \cite{hochreiter1997long}. Preliminary results, however, showed that convolutional neural networks (CNNs) were more effective for our problem, in particular when using Inception Blocks, see \cite{szegedy2017inception}.
The exact block design used in \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace can be seen in Figure \ref{fig:block_design}, and the general layout in Figure \ref{fig:forward_pass}. For the complete network design we refer readers to Figure \ref{fig:graph_neural_network} and Table \ref{tab:nn_architecture} in the Appendix.
\subsection{Pipe Equations}
Pipes constitute the majority of elements in any gas transmission network. The dynamics of flow through pipes are governed by the Euler Equations, a set of nonlinear hyperbolic partial differential equations, see \cite{Osi1996}. We consider the isothermal case and discretise as in \cite{hennings2018benefits}. Consider the pipe $a=(u,v)$, $a \in \ensuremath{\setArcs^{\text{pi}}}\xspace$, where $u, v \in \ensuremath{\mathcal{V}}\xspace$ are the two incident nodes. We attach a flow-in $\varPipeFlow{u}{a}{t}$ and flow-out $\varPipeFlow{v}{a}{t}$ variable to each pipe. Additionally, each incident node has an attached pressure variable, namely $(\varPressure{u}{t})$ and $(\varPressure{v}{t})$. Moreover, these flow-in, flow-out, and pressure values also appear for each time step.
$R_{s}$, $z_{a}$, and \ensuremath{T}\xspace are assumed to be constant, and \pipeDiameter{a}, \pipeLength{a}, \pipeSlope{a}, \pipeArea{a}, \ensuremath{g}\xspace, and \frictionFactor{a} are themselves constant. The above constant assumptions are quite common in practice \cite{rios2015optimization}.
All non-linearities are removed, however, only after fixing the velocity of gas within each individual pipe, $\paramAbsoluteVelocity{w}{a}$, to a constant value. We do this via a method developed in \cite{hennings2018benefits} and also seen in \cite{fang2017dynamic}. The resulting pipe equations are:
\begin{align}
\varPressure{u}{t_2} + \varPressure{v}{t_2} - \varPressure{u}{t_1} - \varPressure{v}{t_1}
+ \frac{2\ensuremath{\gasConst_s}\xspace\ensuremath{T}\xspace\compressibilityFactor{a}(\granularity{t_2} - \granularity{t_1})}{\pipeLength{a}\pipeArea{a}}
\left(\varPipeFlow{v}{a}{t_2} - \varPipeFlow{u}{a}{t_2}\right) &=0 \label{eq:pipes_constVelo_continuity}\\
\varPressure{v}{t_2} - \varPressure{u}{t_2}
+ \frac{\frictionFactor{a}\pipeLength{a}}{4\pipeDiameter{a}\pipeArea{a}}
\left(\paramAbsoluteVelocity{u}{a}\varPipeFlow{u}{a}{t_2} + \paramAbsoluteVelocity{v}{a}\varPipeFlow{v}{a}{t_2}\right) & \notag\\
+ \frac{\ensuremath{g}\xspace\pipeSlope{a}\pipeLength{a}}{2\ensuremath{\gasConst_s}\xspace\ensuremath{T}\xspace\compressibilityFactor{a}}
\left(\varPressure{u}{t_2} + \varPressure{v}{t_2}\right) &=0 \label{eq:pipes_constVelo_momentum}
\end{align}
As nodes represent junctions between network elements and thus have no volume in which to store any gas, the flow conservation constraints \eqref{eq:flowConservation_boundaryNodes} and \eqref{eq:flowConservation_innerNodes} are required. In the equations below, $\varInflowValue{v}{t}$ represents the inflow resp.\ outflow of entry and exit nodes in the network at time $t \in \ensuremath{\mathcal{T}_0}\xspace$. Note that network elements that are not pipes have only one associated flow variable, instead of the in- and out-flow exhibited by pipes. This is because they have no volume and thus no ability to store gas over time, i.e., no line-pack.
\begin{align}
& \sum_{(u,w)=a\in\ensuremath{\setArcs^{\text{pi}}}\xspace} \varPipeFlow{w}{a}{t} - \sum_{(w,v)=a\in\ensuremath{\setArcs^{\text{pi}}}\xspace} \varPipeFlow{w}{a}{t} \notag\\
+& \sum_{(u,w)=a\in\ensuremath{\mathcal{A}}\xspace\setminus\ensuremath{\setArcs^{\text{pi}}}\xspace} \varArcFlow{a}{t} - \sum_{(w,v)=a\in\ensuremath{\mathcal{A}}\xspace\setminus\ensuremath{\setArcs^{\text{pi}}}\xspace} \varArcFlow{a}{t} + \varInflowValue{w}{t} = 0 \qquad \forall w\in\ensuremath{\setVertices^\text{b}}\xspace \label{eq:flowConservation_boundaryNodes}\\
& \sum_{(u,w)=a\in\ensuremath{\setArcs^{\text{pi}}}\xspace} \varPipeFlow{w}{a}{t} - \sum_{(w,v)=a\in\ensuremath{\setArcs^{\text{pi}}}\xspace} \varPipeFlow{w}{a}{t} \notag\\
+& \sum_{(u,w)=a\in\ensuremath{\mathcal{A}}\xspace\setminus\ensuremath{\setArcs^{\text{pi}}}\xspace} \varArcFlow{a}{t} - \sum_{(w,v)=a\in\ensuremath{\mathcal{A}}\xspace\setminus\ensuremath{\setArcs^{\text{pi}}}\xspace} \varArcFlow{a}{t} = 0 \qquad \forall w\in\ensuremath{\setVertices^{0}}\xspace \label{eq:flowConservation_innerNodes}
\end{align}
\subsection{Operation Modes}
\textit{Operation modes} represent binary decisions in our gas network. We identify the corresponding binary variables with the \ensuremath{z_{1}}\xspace variables from our MILP formulation \eqref{eq:MIP}.
Let $\ensuremath{\mathcal{O}}\xspace$ represent the set of operation modes, and $\varNSmode{o}{t}$ the associated variables. Operation modes are central to our modelling context, as they describe every allowable combination of discrete decisions associated with \textit{valves} and \textit{compressors}.
\subsubsection{Compressors}
Compressors are typically set up as a compressor station consisting of multiple compressor units, each of which combines a single compressor machine and its associated drive. These compressor units are dynamically switched on or off and used in different sequences to meet the current needs in terms of compression ratios and flow rates. Out of the theoretically possible arrangements of compressor units, the set of technically feasible arrangements is known as the \textit{configurations} of a compressor station.
Selecting an operation mode results in fixed configurations for all compressor stations. The binary variables associated with a compressor station $a = (u,v) \in \ensuremath{\setArcs^{\text{cs}}}\xspace$ at time $t \in \ensuremath{\mathcal{T}_0}\xspace$ are $\varModeBypass{a}{t}$ (bypass), $\varModeClosed{a}{t}$ (closed), and $\varModeConfiguration{c}{a}{t}$ $\forall c \in \setCompressorConfigurations{a}$ (active). $\setCompressorConfigurations{a}$ denotes the set of configurations associated to compressor station $a$ available in active mode, where the configuration's operating range is a polytope in space $(\varPressure{u}{t}, \varPressure{v}{t}, \varPipeFlow{u}{a}{t})$. The polytope of configuration $c$ is represented by the intersection of half-spaces, $\setConfigurationFacets{c} = \{(\alpha_0,\alpha_1,\alpha_2,\alpha_3) \in \mathbb{R}^4\}$.
\begin{align}
1 &= \sum_{c\in\setCompressorConfigurations{a}} \varModeConfiguration{c}{a}{t} + \varModeBypass{a}{t} + \varModeClosed{a}{t} \label{eq:compressorStation_OneModeOrConfig} \\
\begin{split}
\alpha_0\varConfigPressureL{c}{a}{t} + \alpha_1\varConfigPressureR{c}{a}{t} + \alpha_2\varConfigFlow{c}{a}{t} + \alpha_3\varModeConfiguration{c}{a}{t} &\leq 0 \\
& \forall (\alpha_0,\alpha_1,\alpha_2,\alpha_3) \in\setConfigurationFacets{c} \quad \forall c\in\setCompressorConfigurations{a} \label{eq:compressorStation_cfg_facets}
\end{split}
\end{align}
Note that the variables in \eqref{eq:compressorStation_cfg_facets} have an extra subscript and superscript compared to those in \eqref{eq:pipes_constVelo_continuity} and \eqref{eq:pipes_constVelo_momentum}. This is due to our use of the convex-hull reformulation, see \cite{Bal2018}. The additional subscript refers to the configuration in question, and the superscript the mode, with the pressure variables having an additional node identifier. It should also be noted that the continuous variables attached to a compressor station are not fixed by a choice in operation mode or configuration, but rather the operation mode restricts the variables to some polytope.
\subsubsection{Valves}
\textit{Valves} determine the allowable paths through a network and can separate areas, decoupling their pressure levels. They are modelled as arcs $a=(u,v)$ whose discrete decisions are determined by the operation mode choice. Valves have two modes, namely open and closed. When a valve is open, similar to a compressor station in bypass, flow is unrestricted and there is no pressure difference between the valve's start- and endpoint. In the closed mode, a valve allows no flow to pass and decouples the pressures of the start- and endpoint of the arc. The variable $\varModeOpen{a}{t}$ takes value 1 if the valve is open and 0 if it is closed. The general notation \lb{x} and \ub{x} refers to the lower and upper bounds of a variable $x$. The constraints describing valves are then as follows:
\begin{align}
\varPressure{u}{t} - \varPressure{v}{t} &\leq ( 1 - \varModeOpen{a}{t} )(\paramPressureUB{u}{t}-\paramPressureLB{v}{t}) \label{eq:valves_first}\\
\varPressure{u}{t} - \varPressure{v}{t} &\geq ( 1 - \varModeOpen{a}{t} )(\paramPressureLB{u}{t}-\paramPressureUB{v}{t}) \\
\varArcFlow{a}{t} &\leq ( \varModeOpen{a}{t} )\paramArcFlowUB{a}{t} \\
\varArcFlow{a}{t} &\geq ( \varModeOpen{a}{t} )\paramArcFlowLB{a}{t}. \label{eq:valves_last}
\end{align}
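A minimal Pyomo sketch of \eqref{eq:valves_first}--\eqref{eq:valves_last} for a single valve and a single time step is given below; the pressure and flow bounds are illustrative assumptions.
\begin{verbatim}
import pyomo.environ as pyo

# Single-valve, single-time-step sketch of the valve constraints;
# all pressure and flow bounds are illustrative.
p_lb = {"u": 40.0, "v": 40.0}
p_ub = {"u": 80.0, "v": 80.0}
q_lb, q_ub = -500.0, 500.0

m = pyo.ConcreteModel()
m.p_u = pyo.Var(bounds=(p_lb["u"], p_ub["u"]))
m.p_v = pyo.Var(bounds=(p_lb["v"], p_ub["v"]))
m.q = pyo.Var(bounds=(q_lb, q_ub))        # flow through the valve
m.is_open = pyo.Var(domain=pyo.Binary)    # 1 = open, 0 = closed

m.c1 = pyo.Constraint(expr=m.p_u - m.p_v <= (1 - m.is_open) * (p_ub["u"] - p_lb["v"]))
m.c2 = pyo.Constraint(expr=m.p_u - m.p_v >= (1 - m.is_open) * (p_lb["u"] - p_ub["v"]))
m.c3 = pyo.Constraint(expr=m.q <= m.is_open * q_ub)
m.c4 = pyo.Constraint(expr=m.q >= m.is_open * q_lb)
\end{verbatim}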
\subsubsection{Valid Operation Modes}
As mentioned earlier, not all combinations of compressor station configurations and valve states are possible. We thus define a mapping $M(o,a)$ from operation mode $o \in \ensuremath{\mathcal{O}}\xspace$ to the discrete states of all $a \in \ensuremath{\setArcs^{\text{va}}}\xspace \cup \ensuremath{\setArcs^{\text{cs}}}\xspace$:
\begin{align*}
M(o,a) := m &\text{ where }m\text{ is the mode or configuration of arc }a\\
& \text{ in operation mode }o \quad \forall o\in\ensuremath{\mathcal{O}}\xspace\quad\forall a\in\ensuremath{\setArcs^{\text{va}}}\xspace\cup\ensuremath{\setArcs^{\text{cs}}}\xspace\\
\text{with}\qquad & m\in \{\text{op}, \text{cl}\} \text{ if }a\in\ensuremath{\setArcs^{\text{va}}}\xspace \\
& m\in \{\text{by}, \text{cl}\} \cup \setCompressorConfigurations{a} \text{ if }a\in\ensuremath{\setArcs^{\text{cs}}}\xspace
\end{align*}
Using this mapping we can then define a set of constraints for all valid combinations of compressor station and valve discrete states for each $t \in \ensuremath{\mathcal{T}}\xspace$. The variable \varNSmode{o}{t}, $o \in \ensuremath{\mathcal{O}}\xspace$, $t \in \ensuremath{\mathcal{T}}\xspace$, is a binary variable, where the value 1 represents the selection of $o$ at time step $t$.
\begin{align}
\sum_{o \in \ensuremath{\mathcal{O}}\xspace} \varNSmode{o}{t} &= 1 \label{eq:opMode_choice} \\
\varModeOpen{a}{t} &= \sum_{o\in\ensuremath{\mathcal{O}}\xspace : M(o,a)=\text{op}} \varNSmode{o}{t} \quad\forall a\in\ensuremath{\setArcs^{\text{va}}}\xspace \label{eq:valve_opMode_coupling}\\
\varModeBypass{a}{t} &= \sum_{o\in\ensuremath{\mathcal{O}}\xspace : M(o,a)=\text{by}} \varNSmode{o}{t} \quad\forall a\in\ensuremath{\setArcs^{\text{cs}}}\xspace \label{eq:cs_bypass_opMode_coupling} \\
\varModeClosed{a}{t} &= \sum_{o\in\ensuremath{\mathcal{O}}\xspace : M(o,a)=\text{cl}} \varNSmode{o}{t} \quad\forall a\in\ensuremath{\setArcs^{\text{cs}}}\xspace \label{eq:cs_closed_opMode_coupling} \\
\varModeConfiguration{c}{a}{t} &= \sum_{o\in\ensuremath{\mathcal{O}}\xspace : M(o,a)=c} \varNSmode{o}{t} \quad\forall c\in\setCompressorConfigurations{a}\quad\forall a\in\ensuremath{\setArcs^{\text{cs}}}\xspace \label{eq:cs_cfg_opMode_coupling} \\
\varNSmode{o}{t} &\in \{0,1\} \quad \forall o\in\ensuremath{\mathcal{O}}\xspace. \notag
\end{align}
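The coupling constraints \eqref{eq:opMode_choice} and \eqref{eq:valve_opMode_coupling} can be written in Pyomo as in the following toy sketch; the operation modes, the single valve, and the mapping $M$ are illustrative assumptions, and the compressor couplings \eqref{eq:cs_bypass_opMode_coupling}--\eqref{eq:cs_cfg_opMode_coupling} follow the same pattern.
\begin{verbatim}
import pyomo.environ as pyo

# Toy coupling of the operation-mode choice to a single valve; modes, valve,
# and the mapping M are illustrative.
OPERATION_MODES = ["o1", "o2", "o3"]
VALVES = ["va1"]
M = {("o1", "va1"): "op", ("o2", "va1"): "cl", ("o3", "va1"): "op"}

m = pyo.ConcreteModel()
m.mode = pyo.Var(OPERATION_MODES, domain=pyo.Binary)
m.valve_open = pyo.Var(VALVES, domain=pyo.Binary)

m.one_mode = pyo.Constraint(expr=sum(m.mode[o] for o in OPERATION_MODES) == 1)
m.couple = pyo.Constraint(
    VALVES,
    rule=lambda m, a: m.valve_open[a]
    == sum(m.mode[o] for o in OPERATION_MODES if M[o, a] == "op"))
\end{verbatim}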
\subsection{Flow Directions}
\textit{Flow Directions} define the sign of flow values over the boundary nodes of a network station. With regard to our MILP, they are a further set of decision variables. We avoid generating these decisions with our deep learning framework, as not all combinations of operation modes and flow directions are feasible. These variables thus remain integer variables in \ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace, namely as a subset of $z_{2}$, see \eqref{eq:MIP}. They are few in number, however, due to the limited number of combinations once the operation modes are fixed.
\subsection{Boundary Nodes and Slack} \label{sec:Boundary Nodes and Slaack}
Boundary nodes, unlike inner nodes, have prescribed flow and pressure values for all future time steps. For each boundary node $v \in \ensuremath{\setVertices^\text{b}}\xspace$ and $t \in \ensuremath{\mathcal{T}}\xspace$, we have \varSlackPressurePos{v}{t} and \varSlackPressureNeg{v}{t}, which capture the positive and negative difference between the prescribed
and realised pressure. In addition to these pressure slack variables, we have the inflow slack variables \varSlackFlowPos{v}{t} and \varSlackFlowNeg{v}{t} which act in a similar manner but for inflow. The relationships between the slack values, prescribed values, and realised values can be modelled for each $v \in \ensuremath{\setVertices^\text{b}}\xspace$ and $t \in \ensuremath{\mathcal{T}}\xspace$ as:
\begin{align}
\demandPressure{v}{t} &= \varPressure{v}{t} - \varSlackPressurePos{v}{t} + \varSlackPressureNeg{v}{t} \quad \forall v\in\ensuremath{\setVertices^\text{b}}\xspace \label{eq:slack_pressure}\\
\demandInflow{v}{t} &= \varInflowValue{v}{t} - \varSlackFlowPos{v}{t} + \varSlackFlowNeg{v}{t} \quad \forall v \in \ensuremath{\setVertices^\text{b}}\xspace \label{eq:slack_flow}
\end{align}
Note that unlike the model from \cite{hennings2020controlling}, we do not allow the inflow over a set of boundary nodes to be freely distributed according to which group they belong to. This is an important distinction, as each single node has a complete forecast.
\subsection{Initial State}
In addition to the forecast mentioned in subsection \ref{sec:Boundary Nodes and Slaack}, we also start our optimisation problem with an initial state. This initial state contains complete information of all discrete states and continuous values for all network elements at $t=0$.
\subsection{Objective function}
The objective of our formulation is to both minimise slack usage, and changes in network operation. Specifically, it is a weighted sum of changes in the active element modes, changes in the continuous active points of operation, and the deviations from given pressure and flow demands. For the exact objective function we refer readers to \cite{hennings2020controlling}.
\subsection{Generator and Discriminator Design}
\ensuremath{\mathbb{G}_{\theta_{1}}}\xspace and \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace are NNs whose structure is inspired by
\cite{goodfellow2016nips}, as well as both inception blocks and residual NNs, which have greatly increased large scale model performance \cite{szegedy2017inception}.
We use the block design from Resnet-v2 \cite{szegedy2017inception}, see Figure \ref{fig:block_design}, albeit with slight modifications for the case of transient gas-network optimisation. Namely, we primarily use 1-D convolutions, with the convolved dimension being time. Additionally, we separate the initial input streams by their characteristics and, when joining two streams, use 2-D convolutions, where the second dimension is of size 2 and quickly becomes one-dimensional again. See Figure \ref{fig:2d1d_convolution} for an example of this process.
The final layer of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace contains a softmax activation function with temperature. As the softmax temperature increases, this activation function's output approaches a one-hot vector encoding. The final layer of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace contains a softplus activation function. All other intermediate layers of \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace use the ReLU activation function.
We refer readers to \cite{goodfellow2016deep} for a thorough overview of deep learning, and to Figure \ref{fig:graph_neural_network} in Appendix \ref{sec:appendix} for our complete design.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{sections/Pictures/2d1d_convolution.pdf}
\caption{Method of merging two 1-D input streams}
\label{fig:2d1d_convolution}
\end{figure}
For a vector $x=(x_{1}, \cdots, x_{n})$, the softmax function with temperature $T \in \mathbb{R}$ \eqref{eq:softmax}, the ReLU function \eqref{eq:relu}, and the softplus function with parameter $\beta \in \mathbb{R}$ \eqref{eq:softplus} are defined component-wise as:
\begin{align}
\sigma_{1}(x,T)_{i} &:= \frac{\exp(Tx_{i})}{\sum_{j=1}^{n}\exp(Tx_{j})} \label{eq:softmax} \\
\sigma_{2}(x_{i}) &:= \max(0,x_{i}) \label{eq:relu} \\
\sigma_{3}(x_{i},\beta) &:= \frac{1}{\beta}\log(1 + \exp(\beta x_{i})) \label{eq:softplus}
\end{align}
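In PyTorch, the three activation functions and the final rounding to a one-hot encoding can be written as follows; the batch size, the number of operation modes, and the temperature value are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

x = torch.randn(8, 56)   # e.g. a batch of logits over |O| = 56 operation modes
T = 10.0                 # softmax temperature; the value is illustrative

soft_onehot = F.softmax(T * x, dim=-1)   # eq. (softmax): approaches one-hot as T grows
relu_out = F.relu(x)                     # eq. (relu)
softplus_out = F.softplus(x, beta=1.0)   # eq. (softplus)

# Rounding the near-binary generator output to an exact one-hot encoding:
hard_onehot = F.one_hot(soft_onehot.argmax(dim=-1), num_classes=x.shape[-1])
\end{verbatim}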
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth, height=0.5\textwidth]{sections/Pictures/NN_inception_A.pdf}
\caption{1-D Resnet-v2 Block Design}
\label{fig:block_design}
\end{figure}
We can compose \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace with \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace, as in Figure
\ref{fig:forward_pass}, so that the combined resulting NN is defined as:
\begin{align}
\ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace(\pi) := \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace(\ensuremath{\mathbb{G}_{\theta_{1}}}\xspace(\pi),\pi)
\end{align}
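A minimal PyTorch sketch of this composition is given below; the small fully connected bodies stand in for the Inception/ResNet blocks of our actual architecture, and all dimensions as well as the temperature are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class ComposedNet(nn.Module):
    """Minimal sketch of N(pi) = D(G(pi), pi); the MLP bodies stand in for the
    Inception/ResNet blocks, and all dimensions are illustrative."""
    def __init__(self, pi_dim=64, z_dim=56, temperature=10.0):
        super().__init__()
        self.temperature = temperature
        self.generator = nn.Sequential(
            nn.Linear(pi_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
        self.discriminator = nn.Sequential(
            nn.Linear(pi_dim + z_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Softplus())

    def forward(self, pi):
        z_hat = torch.softmax(self.temperature * self.generator(pi), dim=-1)
        prediction = self.discriminator(torch.cat([z_hat, pi], dim=-1))
        return prediction, z_hat

net = ComposedNet()
pred, z_hat = net(torch.randn(4, 64))   # batch of 4 scenario encodings
\end{verbatim}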
\subsection{Interpretations}
In a similar manner to GANs and actor-critic algorithms, see \cite{pfau2016connecting}, the design of \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace has a bi-level optimisation interpretation; see \cite{dempe2002foundations} for an overview of bi-level optimisation. Here we list the explicit objectives of both \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace and \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace, and how their loss functions represent these objectives.
The objective of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace is to predict \ensuremath{f(\fixedapproxmip)}\xspace, the optimal induced objective values of \ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace. Its loss function is thus:
\begin{align}
L(\theta_{2}, \pi) := \big| \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace(\ensuremath{\mathbb{G}_{\theta_{1}}}\xspace(\pi), \pi) - f(\ensuremath{\mathbb{P}_{\pi}^{\generator(\pi)}}\xspace) \big| \label{eq:discriminator_approx_loss}
\end{align}
The objective of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace is to minimise the induced prediction of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace. Its loss function is thus:
\begin{align}
L'(\theta_{1}, \pi) := \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace(\ensuremath{\mathbb{G}_{\theta_{1}}}\xspace(\pi), \pi) \label{eq:generator_loss}
\end{align}
The corresponding bi-level optimisation problem can then be viewed as:
\begin{align}
\begin{split}
&\min\limits_{\theta_{1}} \quad \mathbb{E}_{\pi \sim \Pi} [ \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace(\ensuremath{\mathbb{G}_{\theta_{1}}}\xspace(\pi), \pi) ] \\
& \text{s.t.} \quad \min\limits_{\theta_{2}} \quad \mathbb{E}_{\pi \sim \Pi} \big[\, \big| \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace(\ensuremath{\mathbb{G}_{\theta_{1}}}\xspace(\pi), \pi) - f(\ensuremath{\mathbb{P}_{\pi}^{\generator(\pi)}}\xspace) \big| \,\big]
\end{split}
\label{eq:bi-level}
\end{align}
\subsection{Training Method}
For effective training of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace, a capable \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace is needed. We therefore pre-train \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace. The following loss function, which replaces $\ensuremath{\mathbb{G}_{\theta_{1}}}\xspace(\ensuremath{\pi}\xspace)$ with prior generated \ensuremath{z_{1}}\xspace values in \eqref{eq:discriminator_approx_loss}, is used for this pre-training:
\begin{align}
L''(\theta_{2}, \pi) := \big| \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace(z_{1}, \pi) - \ensuremath{f(\fixedmip)}\xspace \big|
\label{eq:discriminator_loss}
\end{align}
However, performing this initial training requires generating instances of \ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace. Here we do supervised training in an offline manner on prior generated data.
After the initial training of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace, we train \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace as a part of \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace, using samples $\pi \in \Pi$, the loss function \eqref{eq:generator_loss}, and fixed $\theta_{2}$. The issue of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace outputting continuous values for $\hat{z_{1}}$ is overcome by the activation function of the final layer of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace: the softmax with temperature \eqref{eq:softmax} ensures that adequate gradient information still exists to update $\theta_{1}$ and that the results are near binary. When using these results to explicitly solve \ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace, we round our result to a one-hot vector encoding along the appropriate dimension.
After the completion of both initial training phases, we alternately train both NNs using the updated loss functions in the following way (a sketch of these alternating steps follows the list below):
\begin{itemize}
\item \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace training:
\begin{itemize}
\item As in the initial training, using loss function \eqref{eq:discriminator_loss}.
\item In an online fashion, using predictions from \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace and loss function \eqref{eq:discriminator_approx_loss}.
\end{itemize}
\item \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace training:
\begin{itemize}
\item As explained above with loss function \eqref{eq:generator_loss}.
\end{itemize}
\end{itemize}
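The alternating steps can be sketched in PyTorch as follows, assuming the composed network sketch from above, an optimiser \texttt{opt\_g} built over the generator parameters only, an optimiser \texttt{opt\_d} built over the discriminator parameters only, and MILP objective values \texttt{true\_obj} obtained by solving \ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace for the generated \ensuremath{\hat{z_{1}}}\xspace; all numerical values are illustrative.
\begin{verbatim}
import torch

def generator_step(net, pi_batch, opt_g):
    """One G update: minimise the discriminator's prediction, eq. (generator_loss)."""
    opt_g.zero_grad()
    prediction, _ = net(pi_batch)
    loss = prediction.mean()
    loss.backward()
    opt_g.step()
    return loss.item()

def discriminator_step(net, pi_batch, true_obj, opt_d):
    """One D update on the induced prediction error, eq. (discriminator_approx_loss);
    true_obj holds the MILP objective values for the generated z1."""
    opt_d.zero_grad()
    prediction, _ = net(pi_batch)
    loss = (prediction.squeeze(-1) - true_obj).abs().mean()
    loss.backward()
    opt_d.step()
    return loss.item()

# Illustrative usage with the ComposedNet sketch from above:
net = ComposedNet()
opt_g = torch.optim.Adam(net.generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(net.discriminator.parameters(), lr=1e-4)
pi_batch = torch.randn(4, 64)
true_obj = torch.rand(4) * 1e6   # stands in for the solver-computed objective values
discriminator_step(net, pi_batch, true_obj, opt_d)
generator_step(net, pi_batch, opt_g)
\end{verbatim}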
Our design allows the loss to be back-propagated through \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace and distributed to the individual nodes of the final layer of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace, i.e., the layer representing \ensuremath{z_{1}}\xspace. This is largely different to other methods, many of which rely on a binary cross entropy loss against optimal solutions of $\ensuremath{\mathbb{P}_{\pi}}\xspace$. Our advantage over these is that we can calculate the contribution of each variable decision in $z_{1}$ to the objective function we are trying to minimise. An added benefit is that generated suboptimal solutions are much more likely to be near-optimal, as they are trained to minimise the objective rather than to copy previously observed optimal solutions.
For our application, transient gas network optimisation, methods for sampling instances currently do not exist. In fact, even gathering data is notoriously difficult, see \cite{kunz2017electricity} and \cite{yueksel2020lessons}. For this reason, we introduce a new method for generating training data in Section \ref{sec:Computational Experiments}.
\subsection{Training Results}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{sections/Pictures/discriminator_training.png}
\caption{The loss per epoch of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace during the initial training of Algorithm \ref{alg:pretrain-discriminator}. The dashed lines show the performance of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace after \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace has been completely trained.}
\label{fig:training_discriminator}
\end{figure}
Figure \ref{fig:training_discriminator} shows the training loss throughout the initial offline training. We see that \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace learns how to accurately predict \ensuremath{f(\fixedmip)}\xspace as the loss decreases. This is a required result, as without a trained discriminator we cannot expect to train a generator. Both the training and test loss converge to approximately 1000, which is excellent considering that the generated \ensuremath{f(\fixedmip)}\xspace values range well into the millions. As visible from both the test loss and the final validation loss, \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace generalises to \ensuremath{\mathbb{P}_{\pi}^{z_{1}}}\xspace instances of our validation set that it has not seen. This generalisation ability does not translate perfectly to real-world data, however, because the underlying distributions of the real-world data and our generated data are substantially different. Despite this, we believe that an L1 loss of 10000, in this case simply the average distance between \ensuremath{\hat{f}(\fixedmip)}\xspace and \ensuremath{f(\fixedmip)}\xspace, is still very good. We discuss the issues of different distributions in subsection \ref{sec:data_generation_results}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.65]{sections/Pictures/discriminator_retraining.png}
\caption{The loss per epoch of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace as it is trained using Algorithm \ref{alg:train}}
\label{fig:retraining_discriminator}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.60]{sections/Pictures/full_generator_training_combined.png}
\caption{The loss per epoch of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace as it is trained using Algorithm \ref{alg:train}. (Left) The loss over all epochs. (Right) A magnified view of the loss starting from epoch 20.}
\label{fig:full_training_generator}
\end{figure}
The loss during training using Algorithm~\ref{alg:train} is shown in Figure \ref{fig:retraining_discriminator} for \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace and in Figure \ref{fig:full_training_generator} for \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace. The cyclical nature of the \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace loss is caused by the re-training of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace, which learns how to induce sub-optimal predictions from the then static \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace. These sub-optimal predictions are quickly re-learned, but they highlight that learning to perfectly predict \ensuremath{f(\fixedapproxmip)}\xspace over all possibilities, potentially due to the rounded nature of \ensuremath{\hat{z_{1}}}\xspace, is unlikely without some error. Figure \ref{fig:full_training_generator} (left) shows the loss over time of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace as it is trained, with Figure \ref{fig:full_training_generator} (right) displaying magnified losses for the final epochs. We observe that \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace quickly learns important \ensuremath{z_{1}}\xspace decision values. We hypothesise that this quick descent is helped by \ensuremath{\hat{z_{1}}}\xspace values that are unlikely under our generation method in Algorithm \ref{alg:op_mode_generator}. In the case of \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace, the loss increases following this initial decrease, showing the ability of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace to further improve. It should also be noted that significant step-like decreases in loss are absent in both the left and right plots of Figure \ref{fig:full_training_generator}. Such steps would indicate \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace discovering new important \ensuremath{z_{1}}\xspace values (operation modes). The diversity of produced operation modes, however, see Figure \ref{fig:generated_opmodes}, implies that a complete spanning set of operation modes is found early in training, and that the ratios of their usage are then learned and improved.
\subsection{Data Generation Results}
\label{sec:data_generation_results}
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{sections/Pictures/boxplot_combined.png}
\caption{Comparison of generated flow (Left) / pressure (Right) value distributions per node vs. the distribution seen in real-world data.}
\label{fig:generated_boundary_values}
\end{figure}
As an interlude between results from \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace, we outline the performance of our synthetic gas network data generation methods. Figure \ref{fig:generated_boundary_values} (left) shows how our generated flow prognosis compares to that of historic real-world data. We see that Nodes A, B, and C are not technically entries or exits, but over the historical data each of them is dominated by a single orientation. Specifically, Node C is the general entry, and Nodes A / B are the exits. In addition to the general orientation, we see that each node has significantly different ranges and distributions. These observations highlight the simplicity of our data generation methods, as we see near identical distributions for all nodes over the artificial data. We believe this calls for further research in prognosis generation methods. Figure \ref{fig:generated_boundary_values} (right) shows our pressure prognosis compared to that of historic values. Unlike historic flow values, we observe little difference between the historic pressure values of different nodes. This is supported by the optimal choices \ensuremath{z_{1}^{*}}\xspace over the historic data, see Figure \ref{fig:generated_opmodes}, as in a large number of cases compression is not needed and the network station is in bypass. Note that each corresponding entry (+) and exit (-) have identical pressure distributions due to the way they are constructed.
A further comparison of how our generated data compares to historic data can be seen in Figure \ref{fig:pred_obj_trained}. Here one can see the distribution of \ensuremath{\hat{f}(\fixedapproxmip)}\xspace and \ensuremath{f(\fixedapproxmip)}\xspace for the generated validation set, and \ensuremath{\hat{f}(\optmip)}\xspace and \ensuremath{f(\mip)}\xspace for the real-world data. As expected, the distributions are different depending on whether the data is artificial or not. Our data generation was intended to be simplistic and as independent as possible from the historic data. As such, the average generated scenario has an optimal objective value larger than that of any real-world data point. The performance of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace is again clearly visible here, with \ensuremath{\hat{f}(\fixedapproxmip)}\xspace and \ensuremath{f(\fixedapproxmip)}\xspace being near identical over the artificial data, keeping in mind that these data points were never used in training. We see that this ability to generalise is relatively much worse on real-world data, mainly due to the lower values of \ensuremath{f(\mip)}\xspace over this data. Figure \ref{fig:pred_obj_trained} (right) shows the results with log-scale axes to better highlight this disparity. It should be noted that the real-world instances with larger \ensuremath{f(\mip)}\xspace are predicted quite well, and all real-world instances have an L1 distance between \ensuremath{\hat{f}(\optmip)}\xspace and \ensuremath{f(\mip)}\xspace that is small in absolute terms.
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{sections/Pictures/pred_obj_vs_mip_obj_combined.png}
\caption{\ensuremath{\hat{f}(\fixedapproxmip)}\xspace for the validation set, and \ensuremath{\hat{f}(\optmip)}\xspace for real-world data, compared to \ensuremath{f(\fixedapproxmip)}\xspace and \ensuremath{f(\mip)}\xspace respectively. Linear scale (Left) and log-scale (Right). }
\label{fig:pred_obj_trained}
\end{figure}
\subsection{Real-World Results}
\begin{figure}
\centering
\includegraphics[scale=0.6]{sections/Pictures/fixed_obj_vs_mip.png}
\caption{A comparison of \ensuremath{f(\fixedapproxmip)}\xspace and \ensuremath{f(\mip)}\xspace for all real-world data instances.}
\label{fig:fixed_obj_vs_true_obj}
\end{figure}
We now present results of our fully trained \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace applied to the 15 weeks of real-world data. Note that we had to remove 651 of our 9291 instances, as the warm-start resulted in an optimal solution value deviating by more than the optimality tolerances we set. These instances have been kept in the graphics, but they are marked and no conclusions will be drawn from them. We believe the problems with reproducibility are caused by the numerical difficulties in managing the pipe equality constraints.
Figure \ref{fig:fixed_obj_vs_true_obj} shows the comparison of \ensuremath{f(\fixedapproxmip)}\xspace and \ensuremath{f(\mip)}\xspace. In a similar manner to \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace, we see that \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace struggles with instances where \ensuremath{f(\mip)}\xspace is small. This is visible in the bottom left, where we see \ensuremath{f(\fixedapproxmip)}\xspace values much larger than \ensuremath{f(\mip)}\xspace for the same \ensuremath{\pi}\xspace. This comes as little surprise given the struggle of \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace with small \ensuremath{f(\mip)}\xspace values. Drawing conclusions becomes more complicated for instances with larger \ensuremath{f(\mip)}\xspace values, because the majority hit the time limit. We can clearly see, however, the value of our primal heuristic. There are many cases, namely those below the line \ensuremath{f(\fixedapproxmip)}\xspace = \ensuremath{f(\mip)}\xspace, where our primal heuristic retrieves a better solution than the MILP solver does in one hour. Additionally, no unsolved point above the line is very far from the line, showing that our primal heuristic produced a comparable, sometimes equivalent, solution in a much shorter time frame. For a comparison of solve times, see Table \ref{tab:solve_times}.
\begin{table}[!ht]
\centering
\begin{small}
\begin{tabular}{lrrrrr}
\toprule
{} & Mean & Median & STD & Min & Max \\
\midrule
\ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace Inference Time (s) & 0.009 & 0.008 & 0.001 & 0.008 & 0.017 \\
Warmstarted \ensuremath{\mathbb{P}_{\pi}}\xspace Time (s) & 100.830 & 9.380 & 421.084 & 0.130 & 3600.770 \\
\ensuremath{\mathbb{P}_{\pi}}\xspace Time (s) & 147.893 & 24.380 & 463.279 & 3.600 & 3601.280 \\
\ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace + Warmstarted \ensuremath{\mathbb{P}_{\pi}}\xspace Time (s) & 103.329 & 12.130 & 424.543 & 0.190 & 3726.110 \\
\ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace Time (s) & 2.499 & 1.380 & 12.714 & 0.060 & 889.380 \\
\bottomrule
\end{tabular}
\end{small}
\caption{Solve time statistics for different solving strategies.}
\label{tab:solve_times}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{sections/Pictures/pred_obj_vs_fixed_obj.png}
\caption{A comparison of \ensuremath{\hat{f}(\fixedapproxmip)}\xspace and \ensuremath{f(\fixedapproxmip)}\xspace for all real-world data instances.}
\label{fig:fixed_obj_vs_pred_obj}
\end{figure}
\newpage
Figure \ref{fig:fixed_obj_vs_pred_obj} shows the performance of the predictions \ensuremath{\hat{f}(\fixedapproxmip)}\xspace compared to \ensuremath{f(\fixedapproxmip)}\xspace. Interestingly, \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace generally predicts \ensuremath{\hat{f}(\fixedapproxmip)}\xspace values slightly larger than \ensuremath{f(\fixedapproxmip)}\xspace. We expect this for the smaller valued instances, as we know that \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace struggles with \ensuremath{f(\fixedapproxmip)}\xspace instances near 0, but the trend is evident for larger valued instances too. The closeness of the data points to the line \ensuremath{\hat{f}(\fixedapproxmip)}\xspace = \ensuremath{f(\fixedapproxmip)}\xspace shows that \ensuremath{\mathbb{D}_{\theta_{2}}}\xspace can adequately predict the objective values induced by the \ensuremath{\hat{z_{1}}}\xspace solutions from \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace despite the change in data sets. Figure \ref{fig:fixed_obj_vs_true_obj} showed that \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace successfully generalised to a new data set, albeit with difficulties around instances with \ensuremath{f(\mip)}\xspace values near 0. From Figures \ref{fig:fixed_obj_vs_true_obj} and \ref{fig:fixed_obj_vs_pred_obj}, we can see that the entire \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace generalises to unseen real-world instances, despite some generalisation loss.
\begin{figure}
\centering
\includegraphics[scale=0.6]{sections/Pictures/opmodes_combined.png}
\caption{Frequency of operation mode choice by \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace compared to MILP solver for all real-world instances. (Left) Linear scale, and (Right) log scale.}
\label{fig:generated_opmodes}
\end{figure}
\input{sections/Pictures/operation_modes_matrix}
We now compare the operation modes \ensuremath{\hat{z_{1}}}\xspace, which are generated by \ensuremath{\mathbb{G}_{\theta_{1}}}\xspace, with the \ensuremath{z_{1}^{*}}\xspace, which are produced by our MILP solver. To do so we use the following naming convention: we name the three pairs of boundary nodes N (north), S (south), and W (west). Using W\_NS\_C\_2 as an example, we know that flow comes from W and goes to N and S. The C in the name stands for active compression, and the final index differentiates between duplicate names. As seen in Figure \ref{fig:generated_opmodes}, which plots the frequency of specific \ensuremath{z_{1}}\xspace if they occurred more than 50 times, a single choice dominates \ensuremath{z_{1}^{*}}\xspace. This is interesting, because we expected a lot of symmetry between \ensuremath{z_{1}}\xspace, with the MILP solver selecting symmetric solutions with equal probability. For instance, take W\_NS\_C\_1 and W\_NS\_C\_2. \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace only ever predicts W\_NS\_C\_2, while the MILP solver selects each of the two with half that frequency. This indicates that from the MILP's point of view they are symmetric and either can be chosen, while \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace has recognised this and converged to a single choice. We can support this by analysing the data, where the difference between W\_NS\_C\_1 and W\_NS\_C\_2 is which compressor machine is used, with both machines being identical. This duplicate choice apparently does not exist in bypass modes, however, where the uniqueness of \ensuremath{z_{1}}\xspace, determined by valve states, results in different \ensuremath{f(\fixedmip)}\xspace values. It is then observable that for the majority of instances NS\_NSW\_1 is the optimal choice, and that \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace has failed to identify its central importance. We believe this is due to the training method, where over-generalisation to a single choice is strongly punished. For a comprehensive overview of the selection of operation modes and the correlation between \ensuremath{\hat{z_{1}}}\xspace and \ensuremath{z_{1}^{*}}\xspace, we refer interested readers to Table \ref{tab:operation_mode_correlation}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{sections/Pictures/combined_warm_start_vs_mip_log.png}
\caption{The combined running time of solving \ensuremath{\mathbb{P}_{\pi}^{\hat{z_{1}}}}\xspace, and solving a warm-started \ensuremath{\mathbb{P}_{\pi}}\xspace, compared to solving \ensuremath{\mathbb{P}_{\pi}}\xspace directly.}
\label{fig:warm_start_plus_lp}
\end{figure}
As discussed above, \ensuremath{\mathbb{N}_{\{\theta_{1},\theta_{2}\}}}\xspace cannot reliably produce \ensuremath{z_{1}^{*}}\xspace. Nevertheless, it produces near-optimal \ensuremath{\hat{z_{1}}}\xspace suggestions, which are still useful in a warm-start context, see Algorithm \ref{alg:warm_start}. The results of our warm-start algorithm are displayed in Figure \ref{fig:warm_start_plus_lp}. Our warm-start suggestion was successful 72\% of the time, and the algorithm resulted in an average speed up of 60.5\%. We use the shifted geometric mean with a shift of 1 for this measurement to avoid distortion by relative variations of the smaller valued instances. Especially surprising is that some instances that were previously unsolvable within the time-limit were easily solvable given the warm-start suggestion. In addition, many of the solvable but complicated instances are also solved near instantly with the warm-start suggestion. As such, we have created an effective primal heuristic that is both quick to run and beneficial in the context of locating a globally optimal solution.
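For reference, the shifted geometric mean used for the reported speed-up can be computed as in the following Python sketch; the example timing values are illustrative.
\begin{verbatim}
import numpy as np

def shifted_geometric_mean(values, shift=1.0):
    """Shifted geometric mean with shift 1, as used for the speed-up figure."""
    values = np.asarray(values, dtype=float)
    return np.exp(np.mean(np.log(values + shift))) - shift

times = [0.19, 12.13, 103.3]   # illustrative solve times in seconds
print(shifted_geometric_mean(times))
\end{verbatim}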
\section{Appendix} \label{sec:appendix}
\input{sections/appendix/DataPrepareAlgo}
\input{sections/appendix/PretrainDiscriminator}
\input{sections/appendix/DiscriminatorTraining}
\input{sections/appendix/GeneratorTraining}
\input{sections/appendix/TableInitialTraining}
\input{sections/appendix/MIPParams}
\begin{figure}
\centering
\includegraphics[scale=0.55]{sections/Pictures/nn_graph.pdf}
\caption{Neural Network Architecture}
\label{fig:graph_neural_network}
\end{figure}
\input{sections/appendix/parameterTable}
\section{Introduction} All varieties in this paper are algebraic and are defined over $\mathbb{C}$.
A smooth projective variety $V$
is called {\em cylindrical} if it contains a cylinder, i.e.,
a principal Zariski open subset $U$ isomorphic to a product $Z\times{\mathbb A}^1$,
where $Z$ is a variety and ${\mathbb A}^1$ stands for the affine line
(\cite{Kishimoto-Prokhorov-Zaidenberg,
Kishimoto-Prokhorov-Zaidenberg-criterion}).
Assuming that $ \operatorname{rk} \operatorname{Pic}(V)=1$,
by a criterion of
\cite[Cor.\ 3.2]{Kishimoto-Prokhorov-Zaidenberg-criterion},
the affine cone over $V$ admits an effective action of the additive group
$\mathbb{G}_{\operatorname{a}}$
if and only if $V$ is cylindrical. In the latter case $V$ is a Fano variety. Indeed, the existence of a
$\mathbb{G}_{\operatorname{a}}$-action on the affine cone over
$V$ implies that $V$ is uniruled. Since $ \operatorname{rk} \operatorname{Pic}(V)=1$, $V$ must be a Fano variety.
In \cite{Kishimoto-Prokhorov-Zaidenberg, Kishimoto-Prokhorov-Zaidenberg-Fano, Prokhorov-Zaidenberg-2014}
several families of smooth cylindrical Fano threefolds and fourfolds with Picard number 1 were constructed.
Here we provide further examples of such fourfolds. Let us recall the standard terminology and notation.
\subsection{Notation}
Given a smooth Fano fourfold $V$ with Picard rank 1, the \textit{index} of $V$
is the integer $r$ such that $-K_V=rH$, where $H$ is the ample divisor generating
the Picard group: $\operatorname{Pic} (V)=\mathbb{Z}\cdot H$ (by abuse of notation, we denote by the same letter a divisor
and its class in the Picard group).
The \textit{degree} $d=\deg V$ is the degree with respect to $H$.
It is known that $1\le r\le 5$. Moreover,
if $r=5$ then
$V\cong \mathbb{P}^4$, and if $r=4$ then $V$ is a quadric in
$\mathbb{P}^5$. Smooth Fano fourfolds of index $r=3$ are called {\em del
Pezzo fourfolds};
their degrees vary in the range $1\le d\le 5$ (\cite{Fujita1980}-\cite{Fujita-620281}).
Smooth Fano fourfolds of index $r=2$ are called {\em Mukai fourfolds};
their degrees are even and can be written as $d=2g-2$, where $g$ is called the \textit{genus} of $V$.
The genera of Mukai fourfolds satisfy
$2\le g\le 10$ (\cite{Mukai-1989}). The classification of Fano fourfolds of
index $r=1$ is not known.
According to \cite[Thm.\ 0.1]{Prokhorov-Zaidenberg-2014} a smooth intersection of two quadrics in $\mathbb{P}^6$
is a cylindrical del Pezzo fourfold of degree $4$.
A smooth del Pezzo fourfold $W=W_5\subset \mathbb{P}^7$ of degree $5$ is also cylindrical ({\it ibid.}).
\subsection{On the content.} Starting with the del Pezzo quintic fourfold $W$ and performing suitable Sarkisov links
we constructed in \cite{Prokhorov-Zaidenberg-2014}
two families of cylindrical Mukai fourfolds $V_{12}$ of genus $7$ and $V_{14}$ of genus $8$. Proceeding in a similar
fashion, in the present paper
we construct two more families of cylindrical Mukai fourfolds $V_{16}$ of genus $9$ and $V_{18}$ of genus $10$, see
Theorem \ref{theorem-1} and Corollary \ref{cor:cylinder}. These are the main results of the paper.
The paper is divided into 6 sections. After formulating in Section \ref{sec:Sarkisov} our principal results, we give
in Section \ref{sec:preliminaries} necessary preliminaries. In particular, we recall some useful facts from
\cite{Prokhorov-Zaidenberg-2014}. In Section \ref{sec:proof-of-theorem-1} we prove Theorem \ref{theorem-1}
about the existence of suitable Sarkisov links. This theorem depends on the existence of certain specific
surfaces in the quintic fourfold $W$. Section \ref{sec:constructions} is devoted to constructions of such
surfaces, see Proposition \ref{lemma-existence-surfaces}. The resulting Mukai fourfolds $V_{16}$ and $V_{18}$
turn out to be cylindrical, with a cylinder coming from one on $W$ via the corresponding Sarkisov link, see
Corollary \ref{cor:cylinder}. Section 6 contains concluding remarks and some open problems.
\section{Main results} \label{sec:Sarkisov}
The following theorem describes the Sarkisov links used in our constructions.
\begin{theorem}
\label{theorem-1}
Let $W=W_5\subset \mathbb{P}^7$ be a del Pezzo fourfold of degree $5$, and let
$F\subset W\cap \mathbb{P}^6$ be a smooth surface of one of the following types:
\begin{enumerate}
\renewcommand\labelenumi{\alph{enumi})}
\renewcommand\theenumi{\textup{\alph{enumi})}}
\item\label{case-g=10}
$F\subset \mathbb{P}^6$ is a rational normal quintic scroll, $F\cong \mathbb{F}_1$,
and
\item\label{case-g=9}
$F\subset \mathbb{P}^6$ is an anticanonically embedded sextic del Pezzo surface such that $c_2(W)\cdot F=26$
\textup(see Lemma \textup{\ref{lem:enumerative}}\textup).
\end{enumerate}
Suppose that $F$ does not intersect any plane in $W$ along a \textup(possibly, degenerate\textup) conic.
Then there is a commutative diagram
\begin{equation}\label{eq:diagram}
\vcenter{
\xymatrix{
&D\ar[dl]&\hookrightarrow&\widetilde W\ar[dr]^{\varphi}\ar[dl]_{\rho}&\hookleftarrow& \tilde E\ar[dr]
\\
F&\hookrightarrow&W\ar@{-->}[rr]^{\phi} &&V&\hookleftarrow& S
}
}
\end{equation}
where
\begin{itemize}
\item
$V=V_{2g-2}\subset \mathbb{P}^{g+2}$ is a Mukai fourfold of genus $g=10$ in case \ref{case-g=10}
and
$g=9$ in case \ref{case-g=9};
\item
the map $\phi: W\dashrightarrow V\subset \mathbb{P}^{g+2}$ is given by
the linear system of quadrics passing through $F$, while $\phi^{-1}: V \dashrightarrow W$ is the
projection from the linear span $\langle S\rangle$ of $S$.
\end{itemize}
Furthermore,
\begin{enumerate}
\item
$\rho: \widetilde W\longrightarrow W$ is the blowup of $F$
with exceptional divisor $D$, and
$\varphi: \widetilde W\longrightarrow V$ is
the blowup of a smooth surface $S\subset V$ with exceptional divisor
$\tilde E$, where
\begin{itemize}
\item
in case \ref{case-g=10}
$S\subset \mathbb{P}^4\subset \mathbb{P}^{12}$ is a normal cubic scroll with
$c_2(V)\cdot S=7$, and
\item
in case \ref{case-g=9}
$S\subset \mathbb{P}^3\subset \mathbb{P}^{11}$ is a quadric with
$c_2(V)\cdot S=5$;
\end{itemize}
\item
if $H$ is an ample generator of $\operatorname{Pic}(W)$
and $L$ is an ample generator of $\operatorname{Pic}(V)$, then on $\widetilde W$ we have
\begin{equation}\label{equation-intersections}
\begin{array}{ll}
\rho^*H\equiv\varphi^* L-\tilde E,&
D\equiv \varphi^* L-2\tilde E,
\\[7pt]
\varphi^* L\equiv 2\rho^*H-D,&
\tilde E\equiv \rho^*H-D.
\end{array}
\end{equation}
\end{enumerate}
\end{theorem}
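For the reader's convenience we record a quick consistency check of the relations \eqref{equation-intersections} (a routine verification, not needed in the sequel): the canonical class formulas for the two blowups in \eqref{eq:diagram} read
\begin{equation*}
K_{\widetilde W}=\rho^*K_W+D=-3\rho^*H+D
\quad\mbox{and}\quad
K_{\widetilde W}=\varphi^*K_V+\tilde E=-2\varphi^* L+\tilde E\,,
\end{equation*}
which, together with $\varphi^* L\equiv 2\rho^*H-D$, give back $\tilde E\equiv \rho^*H-D$.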
The proof is done in Section \ref{sec:proof-of-theorem-1}. In Section \ref{sec:constructions}
we establish the existence
of surfaces $F$ as in Theorem \ref{theorem-1}.
Using this theorem, we deduce our main results.
\begin{theorem}\label{main-theorem}
Under the assumptions of Theorem \textup{\ref{theorem-1}},
there is an isomorphism
\begin{equation*}
V\setminus\varphi(D)\cong W\setminus\rho(\tilde E)\,,
\end{equation*}
where $\varphi(D)$ is a hyperplane section of $V=V_{2g-2}\subset \mathbb{P}^{g+2}$ singular along
$S=\varphi(\tilde E)$, and
$\rho(\tilde E)=W\cap \langle F\rangle$ is a singular hyperplane section of
$W=W_5\subset \mathbb{P}^7$ by the linear span of $F$.
\end{theorem}
Recall the following fact (\cite[Thm.\ 4.1]{Prokhorov-Zaidenberg-2014}).
\begin{theorem}\label{thm:W-cyl}
For any hyperplane section $M$ of $W$, the complement $W\setminus M$ contains a cylinder.
\end{theorem}
\begin{corollary}\label{cor:cylinder}
Any Fano fourfold $V$ as in Theorem \textup{\ref{theorem-1}} is cylindrical.
\end{corollary}
\begin{proof} Since $M=\rho(\tilde E)$ is a hyperplane section of $W$, the complement
$W\setminus \rho(\tilde E)$ contains a cylinder. Hence also
$V\setminus\varphi(D)\cong W\setminus\rho(\tilde E)$ does, and so,
$V$ is cylindrical.
\end{proof}
\section{Preliminaries}\label{sec:preliminaries}
\begin{sit}
Recall the following notation, see e.g. \cite[\S 3]{Prokhorov-Zaidenberg-2014}.
There are two types of planes in the Grassmannian
$\operatorname{Gr}(2, 5)$, namely, the Schubert varieties $\sigma_{3,1}$ and
$\sigma_{2,2}$ (\cite[Ch.\ 1, \S 5]{Griffiths-Harris-1978}), where
\begin{itemize}
\item
$\sigma_{3,1}=\{l\in \operatorname{Gr}(2, 5)\mid p\in l \subset h\}$ with $h\subset \mathbb{P}^4$ a fixed
hyperplane and $p\in h$ a fixed point;
\item
$\sigma_{2,2}=\{l\in \operatorname{Gr}(2, 5)\mid l \subset e\}$ with $e\subset \mathbb{P}^4$ a fixed
plane.
\end{itemize}
In the terminology of \cite[\S 10]{Fujita-620281}, the $\sigma_{3,1}$-planes (the $\sigma_{2,2}$-planes,
respectively) are called planes
of {\em vertex type} (of {\em non-vertex type}, respectively).
\end{sit}
\begin{sit}
Let $W=W_5\subset \mathbb{P}^7 $ be a del Pezzo fourfold of index $3$ and degree $5$. Due to \cite{Fujita-620281}
such a variety is
unique up to isomorphism and can be realized as
a section of $\operatorname{Gr}(2,5)\subset \mathbb{P}^9$
by two general hyperplanes.
By the Lefschetz hyperplane section theorem $\operatorname{Pic}(W)\cong \mathbb{Z}$. We have $-K_W=3H$, where
$H$ is the ample generator of $\operatorname{Pic}(W)$.
The variety $W$ is an intersection of quadrics (see
\cite[Ch.\ 1, \S 5]{Griffiths-Harris-1978}).
\end{sit}
The following proposition proven in \cite{Todd1930} (see also \cite[Sect.\ 2]{Fujita-1986-1})
deals with the planes in the fourfold $W=W_5$.
\begin{proposition}
\label{Proposition-2.2.}
Let $W=W_5\subset \mathbb{P}^7 $ be a Fano fourfold of index $3$ and degree $5$.
Then the following hold.
\begin{enumerate}
\item
$W$ contains a unique $\sigma_{2,2}$-plane $\Xi$, a one-parameter
family $(\Pi_t)$ of $\sigma_{3,1}$-planes, and no further plane.
\item
Any $\sigma_{3,1}$-plane $\Pi$ meets $\Xi$ along a tangent line to a fixed conic
$\delta\subset \Xi$.
\item
Any two $\sigma_{3,1}$-planes $\Pi'$ and $\Pi''$ meet at a point $p\in \Xi \setminus \delta$.
\item
Let $R$ be the union of all $\sigma_{3,1}$-planes on $W$.
Then $R$ is a hyperplane section of $W$ and $\operatorname{Sing} R = \Xi$.
\item
There is a 1-parameter family of lines in $W$ through each point in $W$. A line $l\subset W$ meets the plane $\Xi$ if and
only if $l\subset R$, and then $l$ is contained in a plane in $R$.
\end{enumerate}
\end{proposition}
By abuse of notation, the cohomology class associated with
an algebraic subvariety will be denoted by the same letter as the subvariety itself.
By the Lefschetz hyperplane section theorem, the group $H^4(W,\mathbb{Z})$ is torsion free, since the group $H^4(\operatorname{Gr}(2,5),\mathbb{Z})$ is.
In the next lemma we describe a natural basis in $H^4(W,\mathbb{Z})$,
see \cite[Cor.\ 4.2 and 4.7]{Prokhorov-Zaidenberg-2014}.
\begin{lemma}\label{lem:H4} The group $H^4(W,\mathbb{Z})$
is freely generated by the classes of the planes $\Xi$ and $\Pi$, where
\begin{equation}\label{eq:ci}
\Pi^2 = 1,\quad \Xi^2 = 2, \quad \Pi\cdot\Xi = -1,\quad\mbox{and}\quad c_2(W)=9\Xi+13\Pi\,.
\end{equation}
\end{lemma}
\begin{lemma}\label{lem:enumerative}
\begin{enumerate}
\renewcommand\labelenumi{\alph{enumi})}
\renewcommand\theenumi{\textup{\labelenumi}}
\item\label{case-g=10-a}
Let $F\subset W\cap \mathbb{P}^6$ be a smooth rational quintic scroll.
Then
\begin{equation*}
F\equiv 2 \Xi+3\Pi\quad\mbox{and}
\quad F\cdot \Xi= 1,\,\, F\cdot\Pi= 1,\,\, c_2(W)\cdot F=22\,.
\end{equation*}
\item\label{case-g=9-a}
Let $F\subset W\cap \mathbb{P}^6$ be a smooth anticanonically embedded sextic del Pezzo surface. Then
either
\vspace{10pt}
\begin{enumerate}
\itemsep=10pt
\renewcommand\labelenumii{\alph{enumi}\arabic{enumii})}
\renewcommand\theenumii{\textup{-\alph{enumi}\arabic{enumii})}}
\item\label{case-g=9-a-26}
$F\equiv 2 \Xi+4\Pi$ and
$F\cdot \Xi= 0$, $F\cdot\Pi= 2$, $c_2(W)\cdot F=26$, or
\item\label{case-g=9-a-27}
$F\equiv 3 \Xi+3\Pi$ and
$F\cdot \Xi= 3$, $F\cdot\Pi= 0$, $c_2(W)\cdot F=27$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
By Lemma \ref{lem:H4} one can write $F\equiv a \Xi+b\Pi$, where
\begin{equation}\label{eq:a-b}
a+b=\deg F,\qquad c_2(W)\cdot F=5a+4b.
\end{equation}
From the exact sequence
\begin{equation*}
0\longrightarrow \mathscr T_F \longrightarrow \mathscr T_W \longrightarrow \mathscr N_{F/W}
\longrightarrow 0\,
\end{equation*}
we deduce
\begin{equation}\label{eq:c1}
c_1(W)|_F= c_1(F)+ c_1( \mathscr N_{F/W})\,
\end{equation}
and
\begin{equation}
\begin{aligned}\label{eq:c2}
{\quad} c_2(W)\cdot F &= c_2(F)+ c_1(F)\cdot c_1( \mathscr N_{F/W}) +c_2(\mathscr N_{F/W})
\\[7pt]
&= c_2(F)- c_1(F)^2+ c_1(F)\cdot c_1(W)|_F +c_2(\mathscr N_{F/W})\,.\\
\end{aligned}
\end{equation}
The Noether formula for the rational surface $F$ can be written as follows:
\begin{equation*}
c_2(F)- c_1(F)^2=2c_2(F) -12\,.
\end{equation*}
Note that
\begin{equation*}
c_2(\mathscr N_{F/W})= F^2= 2a^2+b^2-2ab\,.
\end{equation*}
Since $c_1(W)=\mathcal{O}_W(3)$, from \eqref{eq:a-b} and \eqref{eq:c2} we obtain
\begin{multline*}
c_2(W)\cdot F=5a+4b=2c_2(F) -12 + c_1(F)\cdot \mathcal{O}_W(3)|_F + 2a^2+b^2-2ab\,.
\end{multline*}
In case \ref{case-g=10-a} using \eqref{eq:a-b} and the latter equality we get $b=5-a$ and
\begin{eqnarray*}
c_2(W)\cdot F&=&20+a=
5a^2 - 20a + 42\,,\quad\mbox{hence}\quad a=2\,.
\end{eqnarray*}
Similarly, in case \ref{case-g=9-a} we have $b=6-a$ and
\begin{eqnarray*}
c_2(W)\cdot F=24+a=5a^2-24a+54
\,,\quad\mbox{hence}\quad a\in\{ 2, 3\}\,.
\end{eqnarray*}
Now the assertions follow.
\end{proof}
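For the reader's convenience, let us record the routine arithmetic behind the last step. In case \ref{case-g=10-a} one has $c_2(F)=4$ and, by adjunction (a general hyperplane section of the quintic scroll is a rational curve of degree $5$), $(-K_F)\cdot H|_F=\deg F+2=7$, so that $c_1(F)\cdot\mathcal{O}_W(3)|_F=21$. With $b=5-a$ this gives
\begin{equation*}
20+a=(2\cdot 4-12)+21+2a^2+(5-a)^2-2a(5-a)=5a^2-20a+42\,,
\end{equation*}
i.e., $5a^2-21a+22=(a-2)(5a-11)=0$, whose only integer root is $a=2$. In case \ref{case-g=9-a} one has $c_2(F)=6$ and $H|_F=-K_F$, hence $c_1(F)\cdot\mathcal{O}_W(3)|_F=3K_F^2=18$; with $b=6-a$ the equality $a+24=18+5a^2-24a+36$ amounts to $a^2-5a+6=(a-2)(a-3)=0$.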
\begin{remark}\label{lem:unique-hyperplane} For a surface
$F$ as in Lemma \ref{lem:enumerative} we have $\dim \langle F\rangle=6$. Hence $F$ is contained in a
unique hyperplane section $\langle F\rangle\cap W\subset \mathbb{P}^7$.
\end{remark}
\section{Construction of quintic and sextic surfaces $F\subset W$}\label{sec:constructions}
In this section we prove the existence of surfaces $F$ satisfying the assumptions of Theorem \ref{theorem-1}.
Our main results can be stated as follows.
\begin{proposition}\label{lemma-existence-surfaces}
The quintic fourfold $W\subset\mathbb{P}^7$ admits hyperplane sections which contain
\begin{enumerate}
\item[a)]
a rational quintic scroll $F=F_5\subset \mathbb{P}^6$,
\end{enumerate}
and other ones which contain
\begin{enumerate}
\item[b)]
an anticanonically embedded sextic del Pezzo surface $F=F_6\subset \mathbb{P}^6$
of type \ref{case-g=9-a-26}.
\end{enumerate}
In both cases, the surface $F$ can be chosen so that none of the planes in $W$ meets $F$ along a
(possibly, degenerate) conic.
\end{proposition}
\begin{proof}[Proof of Proposition \textup{\ref{lemma-existence-surfaces}} {\rm b).}]
We start with a smooth sextic del Pezzo threefold $X=X_6\subset \mathbb{P}^7$. Up to isomorphism,
there is a unique such threefold $X$ with $\operatorname{rk}\operatorname{Pic} X=2$ (\cite{Fujita1980}, \cite{Iskovskikh-Prokhorov-1999}).
In fact, the latter is the threefold which parametrizes the complete flags in $\mathbb{P}^2$.
Consider the following diagram (\cite[\S 8]{Prokhorov-GFano-1}):
\begin{equation*}
\xymatrix{
&\tilde X\ar[dr]\ar[dl]&
\\
X\ar@{-->}[rr]&& U\subset \mathbb{P}^{6}
}
\end{equation*}
where $U=U_5\subset \mathbb{P}^{6}$ is a quintic del Pezzo threefold
with two nodes (ordinary double points), $X \dashrightarrow U=U_5\subset \mathbb{P}^{6}$
is the projection from a general point $P\in X$, and $\tilde X\to X$ is the blowup of $P$.
Recall that $X$ can be realized as a smooth divisor
of bidegree $(1,1)$ in $\mathbb{P}^2\times \mathbb{P}^2$ (see, e.g., \cite{Fujita1980},
\cite{Iskovskikh-Prokhorov-1999}).
The natural projections ${\rm pr}_1,{\rm pr}_2: X\to \mathbb{P}^2$ define
$\mathbb{P}^1$-bundles with total space $X$.
Let $l_i$, $i=1,2$, be the corresponding fibers passing through $P$.
Then $l_1, l_2$ are contracted to the nodes $P_1,P_2\in U$.
The threefold $U$ contains a unique
plane $\mathcal{P}$, and this plane is
the image of the exceptional divisor of $\tilde X\to X$ (\cite[\S 8]{Prokhorov-GFano-1}).
The intersection $Z$ of $X$ with a general divisor
of bidegree $(1,1)$ in $\mathbb{P}^2\times \mathbb{P}^2$ is a smooth sextic del Pezzo
surface $Z\cong Z_6\subset \mathbb{P}^6$.
We can choose $Z$ so that $P\not\in Z$. Let $F\subset U$ be the image of $Z$.
Then $F=F_6\subset W\cap \mathbb{P}^6$ is an anticanonically embedded smooth sextic del Pezzo surface,
and $F\cap \mathcal{P}=\{P_1,P_2\}$.
Note that the del Pezzo quintic threefold $U=U_5\subset \mathbb{P}^{6}$ with two nodes as above
is unique up to isomorphism.
On the other hand, such a variety can be obtained as a section of $\operatorname{Gr}(2,5)\subset \mathbb{P}^9$
by a general hyperplane $\Lambda$ and two general Schubert subvarieties $\Sigma_1, \Sigma_2$ of codimension one in $\operatorname{Gr}(2,5)$
(see \cite{Todd1930}, \cite{Fujita-1986-1}).
Letting $\Sigma'$
be a general linear combination of $\Sigma_1$ and $\Sigma_2$, the section of $\operatorname{Gr}(2,5)\subset \mathbb{P}^9$
by $\Lambda$ and $\Sigma'$ is smooth. Therefore, this section is
a del Pezzo fourfold $W=W_5\subset \mathbb{P}^7$. By construction, $W$ contains $F$ and $\mathcal{P}$.
Since $F\cdot \mathcal{P}=2$ in $W$, it follows that $F$ is of type \ref{case-g=9-a-26}, see \eqref{eq:ci} in Lemma \ref{lem:H4}
and Lemma \ref{lem:enumerative}.
Since $U$ contains a unique
plane $\mathcal{P}$, and $F$ meets $\mathcal{P}$ just in two points and not along a conic,
$F$ satisfies the last condition of Proposition \ref{lemma-existence-surfaces}. Indeed, it is easily seen that
$U=W\cap\langle F\rangle$. If $\mathcal{T}$ is a plane, which meets $F$ along a conic, then $\mathcal{T}$ is contained in $U$.
So, $\mathcal{T}=\mathcal{P}$ due to the uniqueness of $\mathcal{P}$. The latter equality leads to a contradiction,
since $\mathcal{P}\cap F$ is not a conic.
\end{proof}
To show
Proposition \ref{lemma-existence-surfaces} {\rm a)} we need to recall Proposition 4.11 in \cite{Prokhorov-Zaidenberg-2014}.
It describes a construction (borrowed from \cite[Sect.\ 10]{Fujita-620281} and
\cite{Prokhorov1994}) which allows one to recover the fourfold $W$ via a Sarkisov link starting with a certain 2-dimensional
cubic scroll $S$ in $\mathbb{P}^5$ contained in a smooth quadric $Q^4$.
\begin{proposition}\label{prop:PZ4.11}
Let as before $W=W_5\subset \mathbb{P}^7 $ be a del Pezzo quintic fourfold, and let $l\subset W$ be a line, which is
not contained
in any plane in $W$, that is, $l\not\subset R$.
Then there is a commutative diagram
\begin{equation}\label{eq:diagram-Q4}
\vcenter{
\xymatrix{
&\hat D\ar[dl]&\hookrightarrow&\widehat W\ar[dr]^{\hat\varphi}\ar[dl]_{\hat\rho}&\hookleftarrow& \hat E\ar[dr]
\\
l&\hookrightarrow&W\ar@{-->}[rr]^{\hat\phi} &&\quad Q^4 &\hookleftarrow& S
}}
\end{equation}
where
\begin{enumerate}
\item
$\hat\rho: \widehat W\longrightarrow W$ is the blowup of $l$,
$\hat\phi: W\dashrightarrow \mathbb{P}^5$ is the projection from $l$,
$Q^4=\hat\phi(\widehat W)\subset \mathbb{P}^5$ is a smooth quadric, and
$\hat\varphi:\widehat W\longrightarrow Q^4$ is
the blowup of a cubic scroll $S\subset Q^4\subset \mathbb{P}^5$ with exceptional divisor $\hat E$;
\item
the morphism $\hat\varphi: \widehat W \longrightarrow Q^4\subset \mathbb{P}^5$ is defined by the linear system
$|\hat\rho^* H - \hat D|$,
where $H\subset W$ is a hyperplane section and $\hat D =\hat\rho^{-1}(l)$
is the exceptional divisor of $\hat\rho$;
\item
$\hat\varphi(\hat D) = Q^4\cap \langle S\rangle$ is a quadric cone, where $\langle S\rangle\cong\mathbb{P}^4$
is the linear span of $S$ in $\mathbb{P}^5$;
\item
the image $\hat\rho(\hat E)\subset W$
is a hyperplane section of $W$ singular along $l$ and swept out by lines in $W$ meeting $l$;
\item
for a hyperplane section $\mathcal{L}$ of $Q^4$ we have on $\widehat W$
\begin{equation*}
\hat\varphi^* \mathcal{L}\sim \hat\rho^*H-\hat D\quad\mbox{and}\quad \hat\rho^*H\sim 2\hat D+\hat E\sim \hat\varphi^*
\mathcal{L}+\hat D\sim 2 \hat\varphi^* \mathcal{L}- \hat E\,.
\end{equation*}
\end{enumerate}
Conversely, given a pair $(Q^4,S)$, where $Q^4\subset \mathbb{P}^5$ is a smooth quadric fourfold and $S\subset Q^4$ is a cubic
scroll in $ \mathbb{P}^5$ such that the hyperplane section $Q^4\cap \langle S\rangle$ is a quadric cone, one can recover the
quintic fourfold $W$ together with the diagram \eqref{eq:diagram-Q4} satisfying {\rm (i)-(v)}.
\end{proposition}
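Let us also indicate, for the reader's convenience, a short way to recover the relations in (v): since $\hat\phi$ is the projection from $l$, the composition $\widehat W\stackrel{\hat\varphi}{\longrightarrow} Q^4\subset\mathbb{P}^5$ is given by $|\hat\rho^*H-\hat D|$, i.e., $\hat\varphi^*\mathcal{L}\sim\hat\rho^*H-\hat D$, cf.\ (ii). On the other hand,
\begin{equation*}
K_{\widehat W}=\hat\rho^*K_W+2\hat D=-3\hat\rho^*H+2\hat D
\quad\mbox{and}\quad
K_{\widehat W}=\hat\varphi^*K_{Q^4}+\hat E=-4\hat\varphi^*\mathcal{L}+\hat E\,,
\end{equation*}
which yields $\hat E\sim\hat\rho^*H-2\hat D$, and hence $\hat\rho^*H\sim 2\hat\varphi^*\mathcal{L}-\hat E$ and $\hat D\sim\hat\varphi^*\mathcal{L}-\hat E$.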
To construct surfaces $F\subset W$ as in Proposition \ref{lemma-existence-surfaces} {\rm a)} we use the following Lemmas
\ref{lem:cubic scrolls-1}-\ref{lem:cubic scrolls-2}.
\begin{lemma}\label{lem:cubic scrolls-1}
Consider a quadric cone threefold $Q^3\subset\mathbb{P}^4$ with a zero-dimensional vertex $P$, a smooth hyperplane section
$Q^2=Q^3\cap \mathcal{H}$, where $Q^2\cong\mathbb{P}^1\times\mathbb{P}^1$, and a smooth
conic $C\subset Q^2$. Consider also a plane $\mathcal{T}\subset Q^3$, $\mathcal{T}\cong\mathbb{P}^2$, and a general quadric $Q^{\bullet 3}
\subset\mathbb{P}^4$ which contains $\mathcal{T}\cup C$. Then $Q^3\cap Q^{\bullet 3}=\mathcal{T}\cup S$, where
$S\cong\mathbb{F}_1$ is a smooth rational normal cubic scroll in $\mathbb{P}^4$ passing through $P$ and $C$.
\end{lemma}
\begin{proof}
The exact sequence
\begin{equation*}
0\longrightarrow \mathcal{O}_{Q^3}(1)\longrightarrow \mathcal{O}_{Q^3}(2)
\longrightarrow \mathcal{O}_{Q^2}(2)\longrightarrow 0\,
\end{equation*}
yields the exact cohomology sequence
\begin{equation}\label{eq-cohom}
0\to H^0(\mathcal{O}_{Q^3}(1))\longrightarrow H^0(\mathcal{O}_{Q^3}(2))\stackrel{\psi}{\longrightarrow} H^0(\mathcal{O}_{Q^2}(2))\to 0\,.
\end{equation}
Let $l_1$ and $l_2$ be general horizontal and vertical generators of the quadric $Q^2$, and let $s\in H^0(\mathcal{O}_{Q^2}(2))$ be a
section vanishing along the $(2,2)$-divisor $C+l_1+l_2$. By virtue of \eqref{eq-cohom} the affine subspace $\psi^{-1}(s)\subset H^0(\mathcal{O}_{Q^3}(2))$
has dimension $5$. It projects into a 5-dimensional family of divisors
$D\in |\mathcal{O}_{Q^3}(2)|$ such that $D\cap Q^2= C+l_1+l_2 $.
The plane $\mathcal{T}\subset Q^3$ is spanned by $l_1$ and $P$. It defines a
2-dimensional subfamily $\mathcal{Q}$ of divisors $D$ containing
$\mathcal{T}$ and such that $D\cap Q^2=C+l_1+l_2$.
Write $D=\mathcal{T}\cup S$, where $S$ is the residual cubic surface. Then $S\cap Q^2=C+l_2$.
Suppose that $S$ is reducible: $S=\mathcal{T}_2\cup S'$,
where $\mathcal{T}_2\cap Q^2=l_2$ and $S'\cap Q^2=C$. Then $D=\mathcal T_1\cup \mathcal T_2\cup S'$, where $\mathcal T_1=\mathcal T$,
$\mathcal{T}_2={\rm span} (l_2,P)$ is a plane, and $S'$ is a hyperplane section of $Q^3$.
Here $\mathcal T_1 \cup \mathcal T_2$
is uniquely determined by $l_1\cup l_2$, and $S'$ runs over a 1-parameter family.
Since $\dim\mathcal{Q}=2$, one can conclude that a general divisor $D\in \mathcal Q$ has the form
$D=\mathcal{T}\cup S$, where $S\subset \mathbb{P}^4$ is an irreducible cubic surface.
The cubic surface $S$ is linearly nondegenerate, because a hyperplane section of $Q^3$ is a quadric surface.
Thus, $S$ is a linearly nondegenerate surface of minimal degree $3$ in $\mathbb{P}^4$. Such a surface is either a cone over
a twisted cubic $\Gamma\subset\mathbb{P}^3$, or a rational normal scroll $S=S_{2,1}\cong\mathbb{F}_1$
(see \cite[Ch.\ 4, Prop.\ on p.\ 525]{Griffiths-Harris-1978}).
If $S$ were a cone over $\Gamma\subset\mathbb{P}^3$ with vertex $P'$, then the twisted cubic $\Gamma$ would
be dominated by the conic $C$ under the projection from $P'$, which is impossible. Thus $S\cong\mathbb{F}_1$ is smooth.
Finally, $P\in S$ since otherwise $S$ would be a Cartier divisor on $Q^3$ linearly proportional to a hyperplane section.
\end{proof}
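Concerning the exactness of \eqref{eq-cohom} on global sections, one can also argue by a direct dimension count (added here for convenience):
\begin{equation*}
h^0(\mathcal{O}_{Q^3}(1))=5,\qquad h^0(\mathcal{O}_{Q^3}(2))=h^0(\mathcal{O}_{\mathbb{P}^4}(2))-1=14,\qquad h^0(\mathcal{O}_{Q^2}(2))=9\,,
\end{equation*}
in agreement with $14=5+9$.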
\begin{lemma}\label{lem:cubic scrolls-2}
Let $Q^4\subset\mathbb{P}^5$ be a smooth quadric. There exist two smooth cubic scrolls $S$ and
$S'$ in $Q^4\subset \mathbb{P}^5$ such that
\begin{itemize}
\item
$S\cong\mathbb{F}_1\cong S'$;
\item
$S$ and $S'$ span hyperplanes $L$ and $L'$ in $\mathbb{P}^5$, respectively, where $L\neq L'$;
\item
$L\cap Q^4=Q^3$ and $L'\cap Q^4={Q'}^3$
are quadric cones with zero-dimensional vertices $P$ and $P'$, respectively, where $P\neq P'$;
\item
the scheme theoretical intersection $C=S\cdot S'$ is a smooth conic.
\end{itemize}
\end{lemma}
\begin{proof}
A general pencil $(Q^3_{\lambda})$ of hyperplane sections of $Q^4$ contains exactly two degenerate members, because the dual variety of the smooth quadric $Q^4\subset\mathbb{P}^5$ is again a quadric. These are of the form $Q^3=Q\cap T_{P}Q$ and ${Q'}^3=Q\cap T_{P'}Q$, where $P,P'\in Q$. Then $Q^3$
and ${Q'}^3$ are quadric cones with zero-dimensional vertices $P$ and $P'$, respectively. The base locus
of the pencil $(Q^3_{\lambda})$ is the smooth quadric $Q^2=Q^3\cap {Q'}^3\cong\mathbb{P}^1\times\mathbb{P}^1$.
Applying Lemma \ref{lem:cubic scrolls-1} to $Q^3$ and ${Q'}^3$, the assertions follow.
\end{proof}
Using Lemma \ref{lem:cubic scrolls-2} and Proposition \ref{prop:PZ4.11} we proceed now with construction of surfaces $F$
as in Proposition \ref{lemma-existence-surfaces} a).
\begin{construction}\label{sit} {\rm Consider the smooth cubic scrolls $S$ and $S'$ in $\mathbb{P}^5$ as in Lemma
\ref{lem:cubic scrolls-2}. The embedding $\mathbb{F}_1\stackrel{\cong}{\longrightarrow} S'{\hookrightarrow}\mathbb{P}^4$
is given by the linear system $|\sigma+2f|$ on $\mathbb{F}_1$, where $\sigma$ is the exceptional section of
$\mathbb{F}_1\to\mathbb{P}^1$ and $f$ is a fiber. On $S'$ we have $C=S\cdot S'\sim\sigma+f$, where the images of
$\sigma$ and $f$ on $S'$ are denoted by the same letters.
In what follows we employ the notation of Proposition \ref{prop:PZ4.11}. Let $\hat S'$ be the proper transform of
$S'$ in $\widehat W$ (see diagram \eqref{eq:diagram-Q4}). Then, clearly, $\hat S'\cong S'\cong \mathbb{F}_1$.
By Proposition \ref{prop:PZ4.11}(v), the morphism $\hat\rho: \widehat W\to W\subset\mathbb{P}^7$ is defined by
the linear system $|\hat\rho^*H|=|2\hat\varphi^*\mathcal{L}-\hat E|$, where $\mathcal{L}$ is a hyperplane section
of $Q^4\subset\mathbb{P}^5$ and $\hat E=\hat\varphi^{-1}(S)$ is the exceptional divisor of $\hat\varphi$. Identifying $S'$
with $\hat S'$ one can write
\begin{equation}\label{eq:morphism}
(2\hat\varphi^*\mathcal{L}-\hat E)|_{\hat S'}
=2\mathcal{L}|_{S'}-S|_{S'}
\sim 2(\sigma+2f)-C
\sim \sigma+3f\,.
\end{equation}
We let $F=\hat\rho (\hat S')\subset W$. Since $\hat S'\not\subset \hat D$, the map
$\hat\rho|_{\hat S'}: \hat S' \to F$ is a birational morphism, and the surface $F$ is a quintic scroll.}
\end{construction}
\begin{remark}\label{rem:l}
Since $S'\cap \langle S\rangle =C+f_0$, where $f_0$ is a fiber of $S'$, we have $\hat S'\cap \hat D\supset \hat f_0$.
Therefore, $\hat \rho(\hat f_0)=l\subset F$ (because $l=\hat\rho(\hat D)$ and $F=\hat\rho(\hat S')$).
Moreover, $l$ is a ruling of $F$.
\end{remark}
\begin{lemma}\label{lem:quintic scroll} The morphism $\hat\rho|_{\hat S'}: \hat S'\to F$
is an isomorphism onto a smooth rational normal quintic scroll $F\supset l$ contained in a hyperplane in
$\mathbb{P}^7$.
\end{lemma}
\begin{proof} It suffices to show that the morphism
$\hat\rho|_{\hat S'}: \hat S'\to \mathbb{P}^6\subset\mathbb{P}^7$ is given by the (very ample)
{\em complete} linear system $|\sigma+3f|$ on $\hat S'\cong\mathbb{F}_1$ (cf.\ \eqref{eq:morphism}), or, in other words, that the induced morphism $\mathbb{F}_1\to F$ is an isomorphism, see
\cite[Ch.\ 4, p.\ 523]{Griffiths-Harris-1978} or \cite{Harris1992}.
Suppose to the
contrary that $\langle F\rangle\cong\mathbb{P}^5$, that is, $F$ is cut out in $W$ by two hyperplanes.
Then the quintic scroll $F$
cannot be normal. Indeed, for a general hyperplane section $\gamma$ on $F$ we have by adjunction
$\omega_\gamma=(K_W+3H)|_\gamma\sim 0$. Hence the arithmetic genus of $\gamma$ equals 1. The genus of
the proper transform of $\gamma$ on the normalization of $F$ equals 0, hence $\gamma$ is a rational
curve with one double point.
Such double points of hyperplane sections of $F$ sweep out a line in $F$, and $F$ is singular along this line. In particular, $F$ is not normal.
This leads to the following claim.
\begin{claim}\label{claim1}
If $\langle F\rangle\cong\mathbb{P}^5$ then $\operatorname{Sing} F=l$ is a ruling of $F$.
\end{claim}
\begin{proof}
We know that $l\subset F$ is a ruling, see Remark \ref{rem:l}.
Since $\hat W\to W$ is an isomorphism over $W\setminus l$, its restriction $\hat S'\to F$ is an isomorphism over $F\setminus l$.
Since $F$ is not normal, the claim follows.
\end{proof}
On the other hand, we have
\begin{claim}\label{claim2}
Let as before $\langle F\rangle\cong\mathbb{P}^5$, and let $\nu:\mathbb{F}_1\to F$ be
the normalization. Then on $\mathbb{F}_1$ we have $K_{\mathbb{F}_1}\sim \nu^*\omega_F-B$,
where $B\sim \sigma$ is an effective divisor supported by the proper transform in $\mathbb{F}_1$
of the non-normal locus of $F$.
\end{claim}
\begin{proof}
Under our assumption, $F$ is a complete intersection in a smooth variety $W$.
Hence $F$ is Cohen-Macaulay, and so, the standard formula $K_{\mathbb{F}_1}\sim \nu^*\omega_F-B$
holds with $B$ supported by the proper transform in $\mathbb{F}_1$ of the non-normal locus of $F$.
Using this formula and adjunction one gets on $\mathbb{F}_1$:
\begin{eqnarray*}
B\sim \nu^*\omega_F-K_{\mathbb{F}_1}& \sim & (K_W+2H)|_F+(2\sigma+3f) \sim -H|_F+(2\sigma+3f)
\\
& \sim & -(\sigma+3f)+(2\sigma+3f) \quad\,\, \sim \sigma\,,
\end{eqnarray*}
as stated.
\end{proof}
Due to Claim \ref{claim1} we have supp$(B)=f$, and so, $B\cdot f=0$. This yields a contradiction,
since by Claim \ref{claim2}, $B\cdot f=\sigma\cdot f=1$ on $\mathbb{F}_1$.
\end{proof}
\begin{lemma}\label{lem:no plane conic}
None of the planes in $W$ meets the quintic scroll
$F\subset W$ along a \textup(possibly, degenerate\textup) conic.
\end{lemma}
\begin{proof}
Recall that $R$ stands for the hyperplane section of $W$ swept out by
the 1-parameter family of planes $(\Pi_t)$ in $W$. It is singular along the plane $\Xi$,
see Proposition \ref{Proposition-2.2.}(iv). Since $l\subset F$ and $l\not\subset R$, we
have $F\not\subset R$ and $l\cap\Xi=\emptyset$, see Proposition \ref{Proposition-2.2.}(v).
Suppose to the contrary that $F$ meets a plane $\mathcal{P}\subset W$ along a conic, say, $\eta$.
\begin{claim*}
The conic $\eta$ coincides with the exceptional section $\sigma_F$
of the scroll $F\cong\mathbb{F}_1$.
\end{claim*}
\begin{proof}
Suppose that the conic $\eta$ is degenerate. Since any two lines on
$F$ are disjoint, $\eta\subset \mathcal{P}$ cannot be a bouquet of two distinct lines. Hence $\eta$ is a double line $2f$.
For any line $f'\neq f$ in $F$ there exists an automorphism $\alpha\in\operatorname{Aut} F\cong\operatorname{Aut}\mathbb{F}_1$
such that $\alpha(f)=f'$. Since the embedding
\begin{equation*}
\mathbb{F}_1\stackrel{\cong}{\longrightarrow} F\hookrightarrow \mathbb{P}^6\subset\mathbb{P}^7
\end{equation*}
is given by an $(\operatorname{Aut} F)$-invariant linear system $|\sigma+ 3f|$, $\alpha$ can be extended to an
automorphism $\bar\alpha\in\operatorname{Aut}\mathbb{P}^7$, which leaves $\langle F\rangle\cong\mathbb{P}^6$ invariant. Hence
there exists a second plane $\mathcal{P}'=\bar\alpha(\mathcal{P})$, which meets $F$ along a double
line $2f'$ (this plane $\mathcal{P}'$ does not need to be contained in $W$).
\footnote{Alternatively, the further proof can proceed as follows. The plane $\mathcal{P}'$ is tangent
to $F$ along the ruling $f'$. Thus, the Gauss map of $F$ is degenerate, and so, $F$ is a developable
surface. Such a surface, which is not a plane, is a cone or the tangential developable of a curve,
see, e.g., \cite{Cayley1864} or more general results in \cite[(2.29)]{Griffiths-Harris-1978},
\cite[Cor.\ 5]{Zak1987}, or \cite[\S 2.3.3]{Fischer-Piontkowski2001}. Hence $F$ cannot
be smooth, a contradiction. Our argument in the text is more elementary.}
The planes $\mathcal{P}$ and $\mathcal{P}'$ span a subspace $\mathcal{N}\subset\mathbb{P}^7$ with $\dim\mathcal{N} \le 5$.
Thus, there exists a hyperplane $\mathcal{M}\supset \mathcal{N}$ in $\mathbb{P}^7$ different from $\langle F\rangle$. We have
$\mathcal{M}\cdot F=2f+2f'+f''$, where $f''\subset F$ is an extra line. However, this divisor $\mathcal{M}\cdot F$ on $F$ is not ample,
which is a contradiction.
Thus, the conic $\eta=F\cap\mathcal{P}$ is smooth. Since the image $\sigma$ of the exceptional section $\sigma_{\hat S'}\subset\hat S'$
is the unique smooth conic in
the quintic scroll $F\cong\mathbb{F}_1$, we obtain that $\eta=\sigma_F$.
\end{proof}
The line $l\subset F$ meets the section $\sigma=\sigma_F$ in a point $p\in\sigma$. Hence it meets also the plane $\mathcal{P}$ in $p$.
The projection $\hat\phi: W\dashrightarrow\mathbb{P}^5$ with center $l$ sends $\sigma_F$ to the exceptional section $\sigma_{S'}\subset S'$, and
$\mathcal{P}$ to a line on $S'\cong\mathbb{F}_1$, which should coincide with $\sigma_{S'}$. Recall that by our construction
$S\cap S'=C\sim\sigma_{S'}+f_{S'}$ is a smooth conic on $S'$. Since
$\sigma_{S'}\cap C=\emptyset$, the exceptional divisor $\hat E\subset\widehat W$ does not meet the section $\sigma_{\hat S'}$ of the scroll
$\hat S'\subset\widehat W$. Thus $\hat\varphi:\widehat W\to Q^4$ is an isomorphism near $\sigma_{\hat S'}$.
On the other hand, let $\hat{\mathcal{P}}$ be the proper transform of $\mathcal{P}$ in $\widehat W$. Then $\hat{\mathcal{P}}\to\mathcal{P}$
is the blowup of the point $p=\mathcal{P}\cap l$, and $\hat{\mathcal{P}}\cap\hat S'\supset\sigma_{\hat S'}$. Thus the image
$\hat\varphi(\hat{\mathcal{P}})\subset Q^4$ should be a surface, and not a line.
This again yields a contradiction.
\end{proof}
Examples show that the last assumption in Theorem \ref{theorem-1} cannot be omitted.
Without this assumption one arrives at a singular fourfold $V$ in diagram \eqref{eq:diagram},
or else $\varphi$ is the blowup of a singular surface. According to Proposition \ref{lemma-existence-surfaces},
this does not happen for our choice of $F$.
\section{Proof of Theorem \ref{theorem-1}.}\label{sec:proof-of-theorem-1}
Let us start with the following well known lemmas.
\begin{lemma}\label{lem:intersec-quadrics}
Any surface $F$ as in Theorem \textup{\textup{\ref{theorem-1}}} is a scheme theoretical intersection of quadrics.
\end{lemma}
\begin{proof} In case \ref{case-g=10} the assertion follows from
\cite[Thm.\ 8.4.1]{Dolgachev-ClassicalAlgGeom}, and in case \ref{case-g=9} from \cite[Lect.\ 9, Exs.\ 9.10--9.11]{Harris1992}.
\end{proof}
The next well known lemma is immediate.
\begin{lemma}\label{lem:morphism}
Let a smooth surface $F\subset \mathbb{P}^n$, $n\ge 4$, be a scheme theoretical
intersection of quadrics. Let $\widetilde{\mathbb{P}}{}^n\to \mathbb{P}^n$ be the blowup of $F$
with exceptional divisor $T$, and let $H^*$ denote the pullback of the hyperplane class. Then the linear system $|2H^* - T |$ defines a
morphism $\widetilde{\mathbb{P}}{}^n\to \mathbb{P}^N$, which contracts the proper transform of
any $2$-secant line of $F$.
\end{lemma}
\begin{sit}\label{sit:pf-2.1} In what follows we keep the notation as in
Theorem \textup{\ref{theorem-1}}. In particular, we let $g=10$ in case \ref{case-g=10}
and $g=9$ in case \ref{case-g=9}.
A surface $F\subset W$ as in Theorem \textup{\ref{theorem-1}} is contained in a unique
hyperplane section $E=\langle F\rangle\cap W$ of $W$, see Remark \ref{lem:unique-hyperplane}. We let
\begin{itemize}
\item
$\rho:\widetilde W\longrightarrow W$ be the blowup of $F$
with exceptional divisor $D$,
\item
$\tilde E\subset\widetilde W$ be the proper transform of $E$,
\item
$H\subset W$ be a general hyperplane section, and
\item
$H^*=\rho^*H\in {\rm Div}\,\widetilde W$.
\end{itemize}
Clearly, one has $\operatorname{rk}\operatorname{Pic}\widetilde W=2$ and $\tilde E\sim H^*-D$ on $\widetilde W$.
\end{sit}
\begin{lemma}\label{lem:Fano}
The variety $\widetilde W$ is a smooth Fano fourfold.
\end{lemma}
\begin{proof}
We have
\begin{equation*}
-K_{\widetilde W}=3H^*-D=2H^*-D+H^*\,,
\end{equation*}
where both $2H^*-D$ and $H^*$ are nef, because the linear systems
$|2H^*-D|$ and $|H^*|$ are free. Since $\operatorname{rk}\operatorname{Pic}\widetilde W=2$ and the
nef divisors $2H^*-D$ and $H^*$ are not proportional,
their sum is an ample divisor by the Kleiman ampleness criterion.
\end{proof}
The nef and non-ample linear systems $|H^*|$ and $|2H^*-D|$ on $\widetilde W$ define
the two extremal Mori contractions on $\widetilde W$. The first one is $\rho:\widetilde W\to W$;
the second one, $\varphi:\widetilde W\to V$, is the subject of the considerations below. We need the next lemma.
\begin{lemma}\label{lemma-equation-intersection}
On $\widetilde W$ one has $({H^*})^4=5$,\quad $(H^*)^3\cdot D=0$,
\begin{equation*}
(H^*)^2\cdot D^2=
\begin{cases}
-5
\\
-6
\end{cases},
\
H^*\cdot D^3=\begin{cases}
-8
\\
-12
\end{cases},
\
D^4=\begin{cases}
-6&\text{in case \ref{case-g=10}}
\\
-16&\text {in case \ref{case-g=9}\,.}
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof} The lemma follows easily from the equalities (see \cite[Lem.\ 1.4]{Prokhorov-Zaidenberg-2014})
\begin{equation*}
(H^*)^2\cdot D^2=-F\cdot H^2\,,
\end{equation*}
\begin{equation*}
H^*\cdot D^3=-H|_F\cdot K_F-3H\cdot H\cdot F\,,
\end{equation*}
and
\begin{equation*}
D^4= c_2(W) \cdot F+K_W|_F\cdot K_F-c_2(F)-K_W^2\cdot F\,.
\end{equation*}
\end{proof}
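To illustrate how these formulas produce the values above (a routine check added for convenience), consider case \ref{case-g=10}: there $H^2\cdot F=\deg F=5$, $H|_F\cdot K_F=-(\deg F+2)=-7$ by adjunction, $c_2(W)\cdot F=22$, $c_2(F)=4$ and $K_W=-3H$, so that
\begin{equation*}
(H^*)^2\cdot D^2=-5,\qquad
H^*\cdot D^3=7-3\cdot 5=-8,\qquad
D^4=22+(-3)\cdot(-7)-4-9\cdot 5=-6\,.
\end{equation*}
Case \ref{case-g=9} is analogous, with $\deg F=6$, $H|_F\cdot K_F=-K_F^2=-6$, $c_2(W)\cdot F=26$ and $c_2(F)=6$.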
\begin{lemma}\label{lem:5.6} Let $U$ be a Mukai fourfold of genus
$g(U)\ge 4$ with at worst terminal Gorenstein singularities and with $\operatorname{rk} \operatorname{Pic} U=1$.
Assume that the linear system $|-\frac12 K_U|$ is base point free.
Then the divisor $-\frac 12 K_U$ is very ample and defines an embedding $U\hookrightarrow\mathbb{P}^{g+2}$.
\end{lemma}
\begin{proof}
This follows from the corresponding result in the three-dimensional case, see \cite[Prop.\ 1]{Mukai-1989}, \cite{Iskovskikh-Prokhorov-1999}, and
\cite{Przhiyalkovskij-Cheltsov-Shramov-2005en}, by induction on the dimension, similarly to \cite[Lem.\ (2.8)]{Iskovskih1977a}.
\end{proof}
\begin{sit}\label{sit:Stein factorization} Using Lemma \ref{lemma-equation-intersection} we obtain
\begin{equation}\label{eq:4th power}
\deg V=(2H^*-D)^4=2g-2=\begin{cases} 18 & \mbox{in case \ref{case-g=10}}\\
16& \mbox{in case \ref{case-g=9}}\,
\end{cases}\end{equation}
and
\begin{equation}\label{eq:LE=0}
\tilde E\cdot (2H^*-D)^3=(H^*-D)\cdot (2H^*-D)^3=0\,.
\end{equation}
Therefore,
the linear system $|2H^*-D|$ defines a generically finite morphism
\begin{equation*}
\Phi_{|2H^*-D|}:\widetilde W\to V\subset\mathbb{P}^{g+2}\,
\end{equation*}
onto a fourfold $V$, where $\Phi_{|2H^*-D|}$ contracts the divisor $\tilde E\sim H^*-D$.
Consider the Stein factorization
\begin{equation*}
\Phi_{|2H^*-D|}:\widetilde W\stackrel{\varphi}{\longrightarrow} U\to V\subset\mathbb{P}^{g+2}\,.
\end{equation*}
Here $\varphi$ is a divisorial Mori contraction, and $\operatorname{Pic} U=\mathbb{Z}\cdot L$, where $L$ is an ample
Cartier divisor with $\varphi^*L=2H^*-D$.
Once again, the exceptional divisor of $\varphi$ is $\tilde E\sim H^*-D$.
Hence $D\sim\varphi^* L-2\tilde E$.
\end{sit}
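For the reader's convenience, the two intersection numbers in \eqref{eq:4th power} and \eqref{eq:LE=0} are obtained by expanding and substituting the values of Lemma \ref{lemma-equation-intersection}; for instance, in case \ref{case-g=10},
\begin{equation*}
(2H^*-D)^4=16(H^*)^4+24(H^*)^2D^2-8H^*D^3+D^4=80-120+64-6=18
\end{equation*}
and
\begin{equation*}
(H^*-D)\cdot(2H^*-D)^3=8(H^*)^4+18(H^*)^2D^2-7H^*D^3+D^4=40-90+56-6=0\,,
\end{equation*}
the terms containing $(H^*)^3\cdot D$ vanishing; case \ref{case-g=9} is computed in the same way.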
\begin{lemma}\label{lem:U-Mukai}
The variety $U$ as in \textup{\ref{sit:Stein factorization}} is a Mukai fourfold with at worst terminal
Gorenstein singularities and $\operatorname{rk}\operatorname{Pic} U=1$.
\end{lemma}
\begin{proof}
Since $\varphi$ is a divisorial Mori contraction,
$U$ has at worst terminal singularities. We have $\operatorname{rk}\operatorname{Pic} U=1$ because $\operatorname{rk} \operatorname{Pic} \widetilde W=2$.
Since
\begin{equation*}
-K_{\tilde W}= 3H^*-D= 2(2H^*-D) -\tilde E\,
\end{equation*}
we also have
$ -K_U = 2L$. Hence $-K_U$ is an ample Cartier divisor divisible by $2$ in $\operatorname{Pic} U$.
So $U$ is a Mukai fourfold.
\end{proof}
\begin{convention}
\label{nota:U-V}
The morphism $U\to V\subset \mathbb{P}^{g+2}$ is
given by the linear system $|L|=|-\frac12 K_U|$.
As follows from Lemma \ref{lem:5.6}, this is an isomorphism.
In the sequel we identify $V$ with $U$ and $\Phi_{|2H^*-D|}$ with $\varphi$.
\end{convention}
\begin{lemma}\label{cor:5.4}
For the image $V= \varphi (\widetilde W)\subset \mathbb{P}^{g+2}$
the following hold.
\begin{enumerate}
\item
The morphism
$\varphi: \widetilde W\to V$ is birational and $\deg V=2g-2$;
\item
the morphism $\varphi$ contracts the divisor $\tilde E$
to an irreducible surface $S\subset V$;
\item
$\deg S= g-7=\begin{cases} 3 & \mbox{in case}\quad \ref{case-g=10}\\
2 & \mbox{in case}\quad \ref{case-g=9}\,;
\end{cases}$
\item
$S$ can have only isolated singularities.
\end{enumerate}
\end{lemma}
\begin{proof}
By Convention \ref{nota:U-V}, $\varphi$ is birational. By \eqref{eq:4th power}
we have $\deg V=2g-2$. By virtue of \eqref{eq:LE=0}, $\tilde E$ is the exceptional
divisor of $\varphi$.
Using Lemma \ref{lemma-equation-intersection} we deduce the equalities
\begin{eqnarray*}
(2H^*-D)^2\cdot \tilde E^2= (2H^*-D)^2(H^*-D)^2=\begin{cases}
-3 & \mbox{in case}\quad \ref{case-g=10}\\
-2 & \mbox{in case}\quad \ref{case-g=9}\,.
\end{cases}
\end{eqnarray*}
Since the latter number is nonzero, $S$ is a surface of degree
\[
\deg S=- (2H^*-D)^2\cdot \tilde E^2
\]
satisfying (iii).
Since $\operatorname{rk}\operatorname{Pic}\widetilde W=2$,
the exceptional locus of $\varphi$ coincides with $\tilde E$, and $\tilde E$ is a prime divisor.
Therefore, $\varphi$ has at most a finite number of 2-dimensional fibers. By the Andreatta--Wisniewski
Theorem (\cite{AndreattaWisniewski1998}) $S$ has at most isolated singularities.
\end{proof}
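The displayed intersection number is again a direct expansion (included for convenience): in case \ref{case-g=10},
\begin{equation*}
(2H^*-D)^2\cdot(H^*-D)^2=4(H^*)^4+13(H^*)^2D^2-6H^*D^3+D^4=20-65+48-6=-3\,,
\end{equation*}
where the terms containing $(H^*)^3\cdot D$ vanish by Lemma \ref{lemma-equation-intersection}; case \ref{case-g=9} gives $20-78+72-16=-2$.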
\begin{corollary}\label{lem:normal}
The surface $S$ is normal.
\end{corollary}
\begin{proof}
The assertion is certainly true if $\deg S=2$. If $\deg S=3$ and the cubic surface $S\subset\mathbb{P}^4$ is not normal,
then $S$ is contained in a 3-dimensional subspace and the singular locus of $S$ is 1-dimensional, which
contradicts (iv).
\end{proof}
\begin{lemma}\label{lem:replacement of 2.6 and 2.7}
In the notation of \textup{\ref{sit:Stein factorization}}
the morphism $\varphi:\widetilde W\to V$ is the blowup of the surface $S$, where both $S$ and $V$ are smooth.
\end{lemma}
\begin{proof}
If to the contrary $S$ or $V$ were singular, then by \cite[Thm.\ 2.3]{Ando1985} the extremal
$K_{\widetilde W}$-negative contraction $\varphi: \widetilde W\to V$ would have a 2-dimensional
fiber, say, $\widetilde Y\subset\widetilde W$.
Since $S$ is normal (see Corollary \ref{lem:normal}), by the main theorem and Prop.\ 4.11 in
\cite{AndreattaWisniewski1998} one has
$\widetilde Y\cong \mathbb{P}^2$ and
\begin{equation*}
(3H^*-D)|_{\widetilde Y}=-K_{\widetilde W}|_{\tilde Y}=\mathcal{O}_{\mathbb{P}^2}(1)\,.
\end{equation*}
Since $\tilde Y$ is contracted to a point under $\varphi$, we have $(2H^*-D)|_{\widetilde Y}\sim 0$. Thus
$H^*|_{\widetilde Y}=\mathcal{O}_{\mathbb{P}^2}(1)$ and $D|_{\widetilde Y}=\mathcal{O}_{\mathbb{P}^2}(2)$.
It follows that the image $Y=\rho(\widetilde Y)\subset W$, where $\rho=\Phi_{|H^*|}$,
is a plane, $Y\neq F$, and $Y\cap F\cong \widetilde Y\cap D$ is a conic in $Y\cong\mathbb{P}^2$.
However, the latter contradicts our assumption in Theorem \textup{\ref{theorem-1}} that $F$
does not meet any plane in $W$ along a conic.
Therefore, $\varphi$ has no 2-dimensional fiber. Hence the surface $S$ and the fourfold $V$
are smooth, and $\varphi$ is the blowup of $S$ by \cite[Thm.\ 2.3]{Ando1985}.
\end{proof}
\begin{corollary}\label{cor:new}
The surface $S\subset V\subset\mathbb{P}^{g+2}$ is a smooth normal cubic scroll in case \ref{case-g=10}
and a smooth quadric in case \ref{case-g=9}.
\end{corollary}
\begin{proof}
By Lemmas \ref{cor:5.4}(iii) and \ref{lem:replacement of 2.6 and 2.7}, $S$ is a
smooth surface of degree 3 in case \ref{case-g=10} and of degree 2 in case \ref{case-g=9}.
It remains to show that in case \ref{case-g=10}, $S$ is a normal scroll in $\mathbb{P}^4$ and not a
smooth cubic surface in $\mathbb{P}^3$. Using \eqref{equation-intersections} and Lemma
\ref{lemma-equation-intersection} one can compute
\begin{equation*}
L^*\cdot \tilde E^3=(2H^*-D)\cdot (H^*-D)^3= -1,
\end{equation*}
where $L^*:=\varphi^*L$.
On the other hand,
\begin{equation*}
L^*\cdot \tilde E^3= -L|_{S}\cdot K_S+ K_V\cdot L\cdot S
\end{equation*}
(see e.g. \cite[Lem.\ 1.4]{Prokhorov-Zaidenberg-2014}), and so, due to \ref{nota:U-V},
\begin{equation*}
L|_{S}\cdot K_S= -L^*\cdot \tilde E^3-2 L^2\cdot S=1-6=-5.
\end{equation*}
If $\dim \langle S\rangle <4$, then $S$ is a cubic surface in $\mathbb{P}^3$
and we have $L|_{S}\cdot K_S=-K_S^2=-3$, a contradiction.
Therefore, $\dim \langle S\rangle=4$, and so, $S\subset \mathbb{P}^4$ is a linearly nondegenerate
surface of degree $3$, i.e., a normal cubic scroll.
\end{proof}
\begin{lemma}\label{lem:U=V}
Under the setting as before, the following hold.
\begin{itemize}
\item
$\varphi(D)$ is a hyperplane section of $V$ singular along $S=\varphi(\tilde E)$,
\item
there is an isomorphism $V\setminus\varphi(D)\cong W\setminus\rho(\tilde E)$.
\end{itemize}
\end{lemma}
\begin{proof}
We have $D\sim\varphi^*L-2\tilde E$ in $\widetilde W$ and $S\subset\varphi(D)$,
because any fiber $\varphi^{-1}(s)$, $s\in S$, meets $D$. Thus, $\varphi(D)\sim L$ is
a hyperplane section of $V\subset\mathbb{P}^{g+2}$ singular along $S=\varphi(\tilde E)$.
Finally, since $F\subset E=\rho(\tilde E)$ we have isomorphisms
\begin{equation*}
W\setminus\rho(\tilde E)\cong \widetilde W\setminus (\tilde E\cup D)\cong
V\setminus (S\cup\varphi(D))=V\setminus \varphi(D)\,.
\end{equation*}
\end{proof}
The following corollary is immediate from \eqref{eq:4th power}
and Lemma \ref{lem:U-Mukai}. It ends the proof of Theorem \ref{theorem-1}.
\begin{corollary}\label{main-corollary}
Under the assumptions of Theorem \textup{\ref{theorem-1}},
\begin{itemize}
\item
in case \ref{case-g=10}
$V\cong V_{18}\subset\mathbb{P}^{12}$ is a smooth Mukai fourfold of genus $g=10$, and
\item
in case \ref{case-g=9} $V\cong V_{16}\subset\mathbb{P}^{11}$ is a smooth Mukai fourfold of genus $g=9$.
\end{itemize}
\end{corollary}
\section{Concluding remarks.}
\subsection{Cylindricity in families}
Our Theorem \ref{main-theorem} and the results in \cite{Prokhorov-Zaidenberg-2014}
show that for any $g\ge 7$ the family of all
Mukai fourfolds of genus $g$ contains subfamilies consisting of cylindrical fourfolds.
The question about cylindricity of all Mukai fourfolds of genus $g\ge 7$ remains open,
as does the question about cylindricity of Mukai fourfolds
of lower genera. We expect that the answers to both questions are negative in general.
However, at the moment we do not have suitable tools at our disposal to prove this.
\subsection{Rationality questions}
The question about cylindricity is intimately related to the rationality problem.
For instance, in dimension $3$ cylindricity of a Fano variety implies its rationality.
Note that for any $g=5,\ldots, 8$ there exist
rational Mukai fourfolds $V=V_{2g-2}\subset \mathbb{P}^{g+2}$ of genus $g$. We also have the following fact.
\begin{proposition}\label{prop:rationality}
Any Mukai fourfold $V=V_{2g-2}\subset \mathbb{P}^{g+2}$ of genus $g\in\{7, 9, 10\}$ is rational.
\end{proposition}
\begin{proof} By Shokurov's theorem (\cite{Shokurov-1979}) applied to a hyperplane section,
there exists a line $\lambda$ on $V$.
By an easy parameter count (see \cite[Lem.\ 2.4]{Prokhorov-Zaidenberg-2014}) a general hyperplane
section of $V$ passing through $\lambda$ is smooth. Hence one can take a pencil $\mathcal{H}$
of hyperplane sections of $V$ passing through $\lambda$ whose general member $U=H_{2g-2}\in \mathcal H$ is
a smooth anticanonically embedded
Fano threefold of genus $g$ with $\operatorname{Pic} U=\mathbb{Z}\cdot K_U$.
Blowing up the base locus of $\mathcal H$ yields a family $\mathfrak V\to \mathbb{P}^1$,
whose fibers are the members of $\mathcal H$ and the total space $\mathfrak V$ is birational to $V$.
Consider the generic fiber
$X=\mathfrak V\times_{\mathbb{P}^1} \operatorname{Spec}\,\mathbb{C}(\mathbb{P}^1)$, where
$\mathbb{P}^1$ is the parameter space of
the pencil $\mathcal H$.
As before, $X$ is a Fano threefold of genus $g$ over the non-closed field
$\mathbb{C}(\mathbb{P}^1)$ with $\operatorname{Pic} X=\mathbb{Z}\cdot K_X$.
It suffices to show the $\mathbb{C}(\mathbb{P}^1)$-rationality of $X$.
By construction, the line $ \lambda\subset V$ gives a
line $\Lambda \subset X$ defined over $\mathbb{C}(\mathbb{P}^1)$.
Then we can apply the Fano-Iskovskikh double projection $\Psi: X \dashrightarrow Y$
from $\Lambda$, see \cite{Iskovskikh-Prokhorov-1999}. For $g=9$ ($g=10$, respectively)
the map $\Psi$ is birational and $Y$ is a form of $\mathbb{P}^3$, i.e., a Brauer-Severi scheme
(a smooth quadric $Q\subset \mathbb{P}^4$, respectively).
Since $\mathbb{C}(\mathbb{P}^1)$ is a $C_1$-field by Tsen's theorem, $Y$ is
$\mathbb{C}(\mathbb{P}^1)$-rational, and so, $X$ is as well. In the case $g=7$ we have $Y\cong \mathbb{P}^1$
and $\Psi$ is a birational map to a del Pezzo fibration of degree $5$.
Thus, the original variety $V$ has a birational structure of a del Pezzo fibration
of degree $5$ over a surface. Then $V$ is rational by the Enriques--Manin--Swinnerton-Dyer
theorem (see, e.g.,
\cite{Shepherd-Barron-1992}).
\end{proof}
We do not know whether the rationality as in Proposition \ref{prop:rationality} holds also for the Mukai fourfolds $V_{2g-2}$ of genera $g=5,6,8$.
\subsection{Compactifications of $\mathbb{C}^4$}
The Hirzebruch problem about compactifications of the affine space $\mathbb A^n$ (\cite{Hirzebruch-1954})
is also closely related to our cylindricity problem. One can ask the following natural question:
\begin{quote}
Which Mukai fourfolds can serve as compactifications of $\mathbb A^4$?
\end{quote}
We hope that the corresponding examples can be constructed via Sarkisov links,
similarly to what is done in the present paper for cylindricity.
For the del Pezzo fourfolds, a similar problem was completely solved in \cite{Prokhorov1994}.
\section{Introduction}
\PARstart{T}{he} \emph{Josephson junction} (JJ) is considered the workhorse of superconducting electronics. JJs are based on two superconducting electrodes weakly coupled via a constriction, e.g., formed by a normal metal or a tunnel barrier, and are routinely applied in ultra-sensitive SQUID (Superconducting Quantum Interference Device) magnetometers or in the voltage standard \cite{BuckelKleiner2004Superconductivity}. Especially $\ensuremath{\mathrm{Nb}}/\ensuremath{\mathrm{Al}}_2\O_3/\ensuremath{\mathrm{Nb}}$ tunnel junctions attract considerable interest, as the fabrication of high-density $\ensuremath{\mathrm{Nb}}$-based Josephson circuits with promisingly small parameter spreads is possible. With the advent of high-quality magnetic tunnel junctions one decade ago, new, so far unexplored devices are now under development which combine both fabrication techniques in advanced multilayers of superconducting (S), insulating (I) and magnetic (F) materials. These superconducting spintronic devices have been the focus of recent research activities, for example so-called $0$--$\pi$ Josephson junctions \cite{WeidesFractVortex,PfeifferPRB08}, where the type of coupling ($0$ or $\pi$) is related to the local thickness of the stepped F-layer barrier in the junction. In this work stepped JJs with a variation of the critical current density, but the same type of coupling, are discussed.\par Generally, for a variety of JJs a non-uniform critical current density $j_c$ is desirable, e.g., for tunable superconducting resonators, toy systems for magnetic flux pinning or magnetic-field driven electronic switches similar to SQUIDs.\par
The first considerations \cite{Russo1978NonuniformJc} of non-uniform $j_c$'s were motivated by technological drawbacks leading to a variation of the effective barrier thickness, caused either by fabrication \cite{Schwidtal1969} or by illumination of light-sensitive barriers \cite{Barone1977PhysStat}. JJs with periodic spatial modulations of $j_c$ were intensively studied regarding the pinning of fluxons \cite{McLaughlinScottPRB1978,Vystavkin1988,MaloUstinovJAP1990}, the spectrum of electromagnetic waves \cite{Lazarides2005,FistulPRB1999} or their magnetic field dependencies \cite{LazaridesPeriodicDefects2003}. Experimentally, a modulation of $j_c$ was achieved by lithographic insertion of defects such as i) insulation stripes ($j_c=0$) \cite{GolubovUstinovPLA1988,ItzlerTinkhamPRB95}, ii) microshorts ($j_c$ increased) or iii) microresistors ($j_c$ decreased).\\The properties of JJs depend on geometrical (width, length, thickness) and physical (dielectric constant $\epsilon$, specific resistance $\rho$, magnetic thickness $\Lambda$ and $j_c$) parameters. When tailoring $j_c$, all other parameters should remain unchanged to facilitate calculations and to avoid further inhomogeneities in the system. These conventional methods for changing $j_c$ necessarily modify either $\epsilon$ or $\rho$.\par
In this paper a new method is used to gradually modify $j_c$ in one half of the junction. The fabrication technology presented here (see Fig. \ref{SFpatterning}) permits the controlled change of only the interlayer thicknesses $d_1$ and $d_2=d_1+\Delta d_F$, i.e., of the local $j_c$, while keeping both $\epsilon$ and $\rho$ constant. The magnetic field dependence of the critical current is measured for several stepped junctions and compared to simulations.
\section{Experiment}
The deposition and patterning of the stepped junctions was performed by a four-level photolithographic mask procedure \cite{WeidesFabricationJJPhysicaC,WeidesSteppedJJ}. The SIFS stacks were deposited by a magnetron sputter system. $\ensuremath{\mathrm{Nb}}$ and $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ were statically deposited, whereas $\ensuremath{\mathrm{Al}}$ was deposited during sample rotation and at much lower deposition rates to obtain very homogeneous and uniform films.\\
After the deposition of the $\ensuremath{\mathrm{Nb}}$ cap layer and subsequent lift-off, the complete SIFS stack, not yet containing any step in the F-layer, was obtained.
The part of the JJ that was supposed to have a larger thickness $d_2\approx5\textrm{-}7\:\rm{nm}$ was protected by photoresist, see Fig.~\ref{SFpatterning}.
It was shown that $\S\ensuremath{\mathrm{F}}_6$ reactive ion etching provides an excellent chemistry for low-voltage anisotropic etching of $\ensuremath{\mathrm{Nb}}$ with high selectivity towards other materials \cite{SF6EtchingLichtenberger1993} and the photoresist. The inert $\S\ensuremath{\mathrm{F}}_6$ dissociated in an RF plasma and the fluorine reacted with niobium according to
$5\ensuremath{\mathrm{F}}+\ensuremath{\mathrm{Nb}}\rightarrow\ensuremath{\mathrm{Nb}}\ensuremath{\mathrm{F}}_5$. The volatile $\ensuremath{\mathrm{Nb}}\ensuremath{\mathrm{F}}_5$ was pumped out of the etching chamber.
When $\S\ensuremath{\mathrm{F}}_6$ was used as process gas, all non-metallic etching products, such as fluorides and sulfides on top of the $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ layer, had to be removed by subsequent argon etching. \par
The patterning process of the step is depicted in Fig.~\ref{SFpatterning} (a)--(c). The key points were a) \emph{selective reactive etching} of $\ensuremath{\mathrm{Nb}}$, b) \emph{argon etching} of $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ to define $d_1=d_2-\Delta d_F$ and c) subsequent \emph{in situ} deposition of $\ensuremath{\mathrm{Nb}}$.\par
The $\ensuremath{\mathrm{Nb}}$ cap layer was removed by reactive dry etching using $\S\ensuremath{\mathrm{F}}_6$. A few tenths of a nanometer of $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ were $\ensuremath{\mathrm{Ar}}$ ion etched at a very low rate to avoid damaging the $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ film below the surface and to keep good control over the step height, see the etching rates in table \ref{sputter}. The etching was stopped when the F-layer thickness was reduced down to $d_1$, and subsequently $\ensuremath{\mathrm{Nb}}$ was deposited as cap layer. The complete etching and the subsequent $\ensuremath{\mathrm{Nb}}$ deposition were done in situ. The chip contained stacks with the new $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ thickness $d_1$ (uniformly etched), with the thickness $d_2$ (non-etched) and with a step $\Delta d_F$ in the F-layer thickness, i.e., with the thickness changing from $d_1$ to $d_2$, as depicted in Fig. \ref{SFpatterning} d).\\
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8.6cm]{fabrication_SF}
\caption{\label{SFpatterning} The complete SIFS stack was protected in part by photoresist. (a) reactive etching of
$\ensuremath{\mathrm{Nb}}$ with $\S\ensuremath{\mathrm{F}}_6$ down to $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ layer, (b) ion-etching of $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ to increase the local $j_c$, (c) in situ deposition of cap $\ensuremath{\mathrm{Nb}}$ layer and (d) the cross-section of the two reference junctions (having fully and not-etched interlayer) and the stepped (half-etched) junction.}
\end{center}
\end{figure}
\begin{table}[tb]
\caption{Etching parameters of $\ensuremath{\mathrm{Nb}}$ and $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$. The rates were determined by profiler measurements.\label{sputter}}
\begin{tabular}{cccc}
\hline
\hline
 & partial pressure & power density & etching rate \\
 & [$\rm{mbar}$] & [$\rm{W/cm^2}$] & [$\rm{nm/s}$] \\
\hline
$\S\ensuremath{\mathrm{F}}_6$ on \ensuremath{\mathrm{Nb}} & $15\cdot10^{-3}$ & $0.6$ & $\sim1$ \\
$\S\ensuremath{\mathrm{F}}_6$ on \ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}} & $15\cdot10^{-3}$ & $0.6$ & $<0.001$ \\
$\ensuremath{\mathrm{Ar}}$ on \ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}} & $5\cdot10^{-3}$ & $0.6$ & $\sim0.01$ \\
\hline
\hline
\end{tabular}
\end{table}
The junction mesas were defined by aligning the photo mask on the optically visible step terraces, followed by $\ensuremath{\mathrm{Ar}}$ ion-beam etching of the upper $\ensuremath{\mathrm{Nb}}$, $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ and $\ensuremath{\mathrm{Al}}$ layers. The lengths $L_1$ and $L_{2}$ of the two junction halves coincide to within the lithographic alignment accuracy of $1\;\rm{\mu m}$. The etching was stopped after the complete removal of the $\ensuremath{\mathrm{Al}}_2\O_3$ tunnel barrier. Afterwards the mesas were insulated by SNEAP (Selective Niobium Etching and Anodization Process) \cite{Gurvitch82NbAlONb}. In the last photolithographic step the wiring layer was defined. After a short argon etch to remove the contact resistance, the thick $\ensuremath{\mathrm{Nb}}$ wiring was deposited.
\section{Results and Discussion}
Three different types of junctions are discussed: fully etched and not-etched junctions (so-called reference JJs) and half-etched junctions (stepped JJs), see Fig. \ref{SFpatterning} d). From the current-voltage characteristics (IVCs) and the magnetic field dependence of the critical current $I_c(H)$ of the reference JJs one can estimate parameters of the stepped junction, such as the asymmetry ratio $\Delta=j_2/j_1$, where $j_1=j_c(d_1)$ and $j_2=j_c(d_2)$, and the quality of the etched and non-etched parts.\par
The uniformity of the supercurrent transport in a Josephson junction can be judged qualitatively from the $I_c(H)$ pattern. The magnetic field $H$ was applied in-plane and along one junction axis. The magnetic diffraction pattern depends in a complex way on the current distribution over the junction area \cite{BaronePaterno} and the effective junction length. The ideal pattern of a short ($L_1+L_2\leq\lambda_J$) JJ with Josephson penetration depth $\lambda_J$ is symmetric with respect to both polarities of the critical
current and the magnetic field and has completely vanishing $I_c$ at the minima. Asymmetry, irregularity or current offsets in $I_c(H)$ indicate a non-uniform current transport over the interlayers. If the JJ is flux-free, this non-uniformity can be located in the insulating and ferromagnetic layers as well as at the interfaces.\\Transport measurements were made in a liquid He dip probe using low-noise home-made electronics and a room-temperature voltage amplifier. The critical current was determined by a voltage criterion of $3\:\rm{\mu V}$.\par
\subsection{Uniformly etched junction}
In this subsection the properties of reference JJs are discussed.
The $I_c(H)$ dependencies for a non-etched junction (triangle) with the F-layer thickness $d_2$ and a uniformly etched junction
(circle) with thickness $d_1=d_2-\Delta d_F$ are depicted in Fig.~\ref{Fig:IcHIVtogether} a). Their $I_c(H)$ patterns are normalized to the maximum value $j_1\cdot A$ or $j_2 \cdot A$, respectively, with junction area $A$. The larger offset of the non-etched JJ is due to its lower absolute critical current. Fig.~\ref{Fig:IcHIVtogether} b) depicts the IVCs for large and small (inset) ranges of bias current. As the electric transport is in the dirty limit \cite{VasenkoPRB}, $j_c$ scales exponentially with the variation of $d_F$. The polycrystalline structure of the room-temperature sputtered layers and the very low etching rate of $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ led to a good control over $\Delta d_F$. However, one has to keep in mind that the local variation of the F-layer thickness might exceed this value, and $d_1$, $d_2$ and $\Delta d_F$ are just the mean thicknesses seen by the transport current. Besides the difference in $I_c$, the $I_c(H)$ dependences and the IVCs showed no evidence for an inhomogeneous current transport in either sample. The resistance $R$ is nearly independent of $d_F$, as the voltage drop over the tunnel barrier is much larger than the serial resistance of a few nanometer thick metal \cite{WeidesHighQualityJJ}. However, an etching-induced change of transparency at the F/S interface might modify $R$. No change of $R$ is visible in the IVCs of both JJs in Fig.~\ref{Fig:IcHIVtogether} b), apart from the change in $I_c$. A change of capacitance $C$ would require a change of $R$. Both $R$ and $C$, i.e., $\rho$ and $\epsilon$, are determined by the dielectric tunnel barrier, thus the only difference between both junctions is the local $j_c$. The larger $I_c$ at unchanged resistance $R$ and capacitance $C$ led to a slightly hysteretic IVC of the etched sample, as the width of the hysteresis is determined by the McCumber parameter $\beta_c \propto I_c R^2 C$.\par
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8.6cm]{IcHIVtogether}
\caption{(color online) a) $I_c(H)$ of etched (circle) and non-etched (triangle) JJs. $I_c$ was normalized to the maximum value $j_1\cdot A$ or $j_2 \cdot A$. b) IVCs for large and small (inset) bias current ranges, measured at zero magnetic field. In b) the IVC of the non-etched JJ is nearly completely covered by that of the fully etched JJ. The junction length is $100\:\rm{\mu m}$; both JJs are in the short JJ limit. Measurements were done at $4.2\;\rm{K}$.}
\label{Fig:IcHIVtogether}
\end{center}
\end{figure}
\subsection{Half-etched (stepped) junctions}
Stepped JJs with a centered step and the same ground state in both halves are treated in this subsection. A theoretical review of the magnetic diffraction pattern for junctions with different ratios of $j_1$ and $j_2$ is given and compared with recent measurements.
\subsubsection{Calculated $I_c(h)$ of stepped JJ}
The magnetic diffraction pattern $I_c(H)$ of a JJ depends on its $j_c$ profile, see Ref.~\cite{BaronePaterno}. The analytic solution for a short stepped junction with centered step ($L_1=L_2=L/2$) and different critical current density $j_1$ and $j_2$ in both halves is given by
\begin{align*}&I_c(h)=w\left[\int\limits_{-L/2}^0\!{j_1\sin{(\phi_0+\frac{hx}{L})}dx}+\int\limits^{L/2}_0\!{j_2\sin{(\phi_0+\frac{hx}{L})}dx}\right]\\
&=A\cdot\frac{j_1\cos{(\phi_0-\frac{h}{2})}-j_{2}\cos{(\phi_0+\frac{h}{2})}+(j_2-j_1)\cos{\phi_0}}{h},
\end{align*}
where $\phi_0$ is an arbitrary initial phase, $h=2\pi\Lambda\mu_0LH/\Phi_0$ the normalized magnetic flux through the junction cross section, $\Lambda$ the magnetic thickness of the junction and $w$ the junction width. The maximum critical current is reached for the initial phase
\[\phi_0=\arctan{\left[\frac{\sin{(\frac{h}{2})}\cdot(j_{1}+j_2)}{2\sin^2{\left(\frac{h}{4}\right)}\cdot \left(j_2-j_1\right)}\right].}\] The general analytical form of $I_c(h)$ for multi-step junctions can be found in Ref.\cite{Lazarides2004IcH0PI}.\\
The calculated $I_c(h)$ curves for various ratios $\Delta=j_2/j_1$ are depicted in Fig.~\ref{Fig:simuSteJJIcH} a). Characteristic features are the centered maximum peak and the appearance of periodic minima of $I_c(h)$. The depths of the odd-order minima depend on the asymmetry ratio $\Delta$ and decrease for smaller values of $j_2$. $I_c(h)$ vanishes completely at the even-order minima. The maximum critical current $I_c(0)$ decreases linearly down to $I_c=0.5\cdot j_1A$ for $j_2=0$. The corresponding $I_c(h)$ pattern becomes that of a junction with half the length and uniform $j_c=j_1$.
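To make the discussion concrete, the short Python sketch below evaluates the supercurrent integral above numerically and maximizes it over the initial phase $\phi_0$. It is only an illustration in normalized units ($L=w=j_1=1$), not the code used for Fig.~\ref{Fig:simuSteJJIcH}; evaluating it on a grid of $h$ values reproduces the qualitative behavior described above, e.g. $I_c(0)=(1+\Delta)/2$ and lifted odd-order minima for $\Delta<1$.
\begin{verbatim}
import numpy as np

# Normalized units: L = w = j_1 = 1, hence j_1*A = 1.
x = np.linspace(-0.5, 0.5, 2001)             # coordinate along the junction
phi0 = np.linspace(0.0, 2.0 * np.pi, 721)    # trial initial phases

def ic_of_h(h, delta):
    """I_c(h) for a centered step: j_c = 1 for x < 0, j_c = delta for x > 0."""
    jc = np.where(x < 0.0, 1.0, delta)
    phase = phi0[:, None] + h * x[None, :]
    i_s = (jc * np.sin(phase)).mean(axis=1)  # ~ integral over x (length 1)
    return np.abs(i_s).max()                 # maximum over phi0

for delta in (1.0, 0.45, 0.25, 0.0):
    # central maximum and value at the first (odd-order) minimum h = 2*pi
    print(delta, ic_of_h(0.0, delta), ic_of_h(2.0 * np.pi, delta))
\end{verbatim}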
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8.6cm]{simuSteJJIcH}
\caption{(color online) a) Calculated $I_c(h)$ dependence for various ratios of $\Delta=j_2/j_1$ and centered step in $j_c$ profile. b) Measured $I_c(H)$ of stepped JJs plus calculated $I_c(h)$ (grey line). $I_c$ was normalized to the maximum value $j_1\cdot A$. The junctions are in the short JJ limit and measurements were done at $4.2\;\rm{K}$. Magnetic field is applied in-plane of the junction. The data are shifted along $I_c$-axis.}
\label{Fig:simuSteJJIcH}
\end{center}
\end{figure}
\subsubsection{Measured $I_c(H)$ of stepped JJ}
The measured JJs are in the short junction limit, i.e., $L=50\:\rm{\mu m}<\lambda_J$.
In Fig.~\ref{Fig:simuSteJJIcH} b) the measured magnetic diffraction patterns $I_c(H)$ of stepped JJs with different ratios $\Delta=j_2/j_1$ are depicted along with the calculated $I_c(h)$ curves (grey lines). The step in the interlayer, $\Delta d_F$, was carefully varied to trace out the various regimes of $I_c(H)$ as a function of $\Delta$. $j_1$ and $j_2$ were determined from reference junctions, either fully etched or non-etched, located close to the stepped junction. Both junction halves have a phase difference of $\pi$ in the ground state, i.e., the stepped junction was $\pi$ coupled, too. The magnetic field axis $h$ was scaled to fit the first measured minima of $I_c(H)$.\\It can be seen that for smaller $\Delta$ (i) the oscillation period doubles (compare the $I_c(H)$ for $\Delta=1$ and $\Delta=0.25$), (ii) the depth of the odd-order minima decreases and (iii) the maximum critical current $I_c(0)$ is reduced.\\ The slight asymmetry of some $I_c(H)$ patterns, for example at the first side-maxima of the $\Delta=0.45$ sample, and in consequence the deviation from the simulation, can be explained by a modification of the magnetic flux penetration due to different magnetic states in the two halves \cite{KemmlerPRB0Pi}.\\Recently, it was shown by the author that the remanent magnetization of the F-layer can lead to strong deviations from the expected $I_c(H)$ pattern \cite{WeidesAnisotropySIFS}. Here, the weak magnet $\ensuremath{\mathrm{Ni}}\ensuremath{\mathrm{Cu}}$ was used as interlayer. However, as both halves are in the $\pi$ coupled state, the magnetic properties of the F-layer cannot be neglected. The difference in F-layer thickness between the two halves even facilitates some variation of the local magnetic configuration. Nevertheless, the fair agreement between measurement and simulation in Fig.~\ref{Fig:simuSteJJIcH} b) shows the good reliability of the step formation procedure.
\section{Conclusions and Outlook}
In conclusion, SIFS Josephson junctions were fabricated with a well-defined step by local etching of the ferromagnetic interlayer. The etched and non-etched SIFS junctions differ only by the F-layer thickness. No inhomogeneities can be seen in the current transport characteristics of the etched junctions. Magnetic field transport measurements on stepped junctions show good agreement with the simple analytical model.\par As an outlook, the use of a non-magnetic stepped layer would avoid the modification of the magnetic cross-section by intrinsic magnetic remanence, and may help to further improve the consistency of the measured $I_c(H)$ with simulation. Replacing the optical lithography with electron beam lithography may enhance the lateral accuracy of the step down to the resolution of the e-beam resist. The patterning of stepped JJs allows free lateral placement of well-defined $j_c$'s and/or local coupling regimes within a single junction. JJs with varying $j_c$ and planar phase could be used for devices with specially shaped $I_c(H)$ patterns \cite{BaronePaterno}, toy systems for flux pinning or tunable superconducting resonators. The stepped junctions can be realized in $\ensuremath{\mathrm{Nb}}$ based JJs with any interlayer material that is chemically stable against the reactive etching gas.\\The patterning process could be adapted to any thin-film multilayer structure provided that the reactive etching rates of the layer materials differ, e.g., it can be applied to other metallic multilayer systems such as magneto-resistance devices (GMR/TMR elements), where a local variation of the magnetic properties may enhance their functionality.
\section*{Acknowledgement}
The author thanks H. Kohlstedt, U. Peralagu and E. Goldobin for fruitful discussions.
\section{Introduction}
On Saturday, August 16th 2008, Usain Bolt shattered the world record
of the 100 meter dash in the Bird's Nest at the 2008 Beijing Olympics. In
a spectacular run dubbed ``the greatest 100 meter performance in the
history of the event'' by Michael Johnson, Bolt finished at 9.69
seconds, improving his own previous world record from earlier this
year by 0.03 seconds. However, the most impressive fact about this run
was the way in which he did it: After accelerating away from the rest
of the field, he looked to his sides when two seconds and 20 meters
remained, and upon noting that he was completely alone, he started
celebrating! He extended his arms, and appeared to almost dance along
the track.
\begin{figure*}
\begin{center}
\mbox{\epsfig{file=screenshot2.eps,width=0.8\linewidth,clip=}}
\end{center}
\caption{Example screen shot used to estimate the runners' position as
a function of time.}
\label{fig:screenshot}
\end{figure*}
Despite this, he broke the world record by 0.03 seconds. But, needless
to say, this celebration left spectators and commentators all over the
world wondering about one big question: What would the world record
have been if he had \emph{not} celebrated the last 20 meters? Bolt's
coach, Glen Mills, recently suggested at a press conference of the
Golden League tournament in Z{\"u}rich, that the record could have
been 9.52 seconds, or even better.
We wanted to check this for ourselves, by attempting to measure Bolt's
position as a function of time, and extrapolate from the dynamics
before the celebration began, into the last two seconds of the race.
Based on (hopefully) reasonable assumptions, we could then obtain an
estimate of the new world record.
In this paper we analyze footage of the run obtained from various web
sites and the Norwegian Broadcasting Corporation (NRK), with the goal
of estimating this ``hypothetical'' world record. The main technical
difficulty in performing this analysis lies in obtaining accurate
distance measurements as a function of time for each
runner. Fortunately, this task is made considerably easier by the
presence of a moving camera mounted to a rail along the track. This
rail is bolted to the ground at regular intervals, and thereby
provides the required standard ruler. Using the methods detailed in
the following sections, and properly taking into account all major
sources of statistical uncertainty, we believe that our measurements
are sufficiently accurate and robust to support interesting
conclusions.
\section{Method}
\label{sec:method}
Our analysis is based on the following simple steps:
\begin{enumerate}
\item We first obtained several different videos of the race from the
Internet (NBC and BBC) and the Norwegian Broadcasting Corporation (NRK),
and printed out $\sim30$ screen shots at different times from these.
\item We then constructed a standard ruler by counting the total
number of bolts (called ``ticks'' in the following) on the rail of
the moving camera along the 100 meter track (see Figure
\ref{fig:screenshot}). We assumed the distance between these to be
constant.
\item Next, we drew lines orthogonal to the track, using whatever
means were most accurate for a given screen shot. For early and late
frames, lines in the actual track itself (e.g., starting and
finishing lines) were most useful, while for intermediate frames,
the lower right edge of the camera mount was utilized (Figure
\ref{fig:screenshot}).
\item For a given frame, we then read off the positions of Usain Bolt
and Richard Thompson, the runner-up, with the ruler, and recorded
these together with the time from the screen clock.
\item Next, we assigned an uncertainty to each distance measurement,
by estimating how many ticks we believed we were off in a given
frame. For later frames, when the camera angle is almost orthogonal
to the track, this uncertainty is smaller than in the beginning of
the race because of the camera perspective.
\item Based on these uncertainties, we fitted a smooth spline with
inverse variance weights to the data. This provided us with a smooth
approximation to the runners' positions as a function of time, and
also with the first and second derivatives, i.e., their speeds and
accelerations.
\item To make the projections, we consider two cases: First, we
conservatively assume that Bolt would have been able to keep up with
Thompson's acceleration profile at the end of the race, after 8 seconds of
elapsed time, and project a new finishing time. Second, given his
clearly stronger acceleration around 6 seconds, we also consider the
case in which he is able to maintain a $\Delta a =$0.5
m/$\textrm{s}^2$ higher acceleration than Thompson through to the
end.
\end{enumerate}
The final goal is the new projected world record, which is found by
extrapolating the resulting motion profile to 100 meters. We also
estimate the uncertainty in this number by repeating the above
analysis 10\,000 times, each time adding a random fluctuation with
specified uncertainties to each time and tick count.
\begin{deluxetable*}{ccccccl}
\tablewidth{0pt}
\tablecaption{Position as a function of time for Bolt and Thompson\label{tab:data}}
\tablecomments{Compilation of distance-vs-time observations for Usain
Bolt and Richard Thompson in the 100 meter dash in Beijing 2008, obtained
from screen shot prints of the race. }
\tablecolumns{7}
\tablehead{Uncalibrated & \multicolumn{2}{c}{{\bf Usain Bolt}} &
\multicolumn{2}{c}{{\bf Richard Thompson}} &
& \\
elapsed time & Ticks & Distance & Ticks & Distance & Uncertainty & Data set \\
(s) & (\#) & (m)& (\#) & (m) & (m)&
}
\startdata
0.0\tablenotemark{*} & -7.0 & 0.0 & -7.0 & 0.0 &
0.0 &
None \\
(0.01 & -7.0 & 0.0 & -7.0 & 0.0 & 0.0 &None)\tablenotemark{$\dagger$}\\
1.1 & -2.0 & 5.0 & -2.1 & 4.9 & 0.5 & NRK\\
3.0 & 15.5 & 22.5 & 15.6 & 22.6 & 0.5 & NRK\\
4.0 & 27.0 & 34.0 & 27.0 & 34.0 & 0.4& NRK\\
4.5 & 34.3 & 41.3 & 34.1 & 41.1 & 0.5& NRK\\
5.4 & 45.1 & 52.1 & 44.3 & 51.3 & 0.5& NBC\\
5.8 & 48.9 & 55.9 & 48.3 & 55.3 & 0.5& BBC\\
6.2 & 54.5 & 61.5 & 53.8 & 60.8 & 0.5& NBC\\
6.5 & 57.8 & 64.8 & 56.9 & 63.9 & 0.4& BBC\\
6.9 & 62.6 & 69.6 & 61.5 & 68.5 & 0.2& NBC\\
7.3 & 66.3 & 73.3 & 65.1 & 72.1 & 0.2& NBC\\
7.7 & 71.5 & 78.5 & 70.1 & 77.1 & 0.2& NBC\\
8.0 & 74.7 & 81.7 & 72.9 & 79.9 & 0.2& NBC\\
8.3 & 78.6 & 85.6 & 76.8 & 83.8 & 0.2& NBC\\
8.6 & 82.2 & 89.2 & 80.5 & 87.5 & 0.2& NBC\\
8.8 & 84.3 & 91.3 & 82.4 & 89.4 & 0.2& NBC \\
9.4 & 91.6 & 98.6 & 89.4 & 96.4 & 0.2& NBC\\
9.69\tablenotemark{*} & 93.0 & 100. & \nodata & \nodata & 0.0& NRK\\
9.89\tablenotemark{*} & \nodata & \nodata & 93.0 & 100. & 0.0& NRK\\
(13 & 105 & 112. & 105. & 112 & 5.0 &NRK)\tablenotemark{$\dagger$}
\enddata
\tablenotetext{*}{The first point is taken from the known starting
position, and the last is taken from a high-resolution picture of
the finishing line. The times for these are not read from the screen
clock, but are adopted from official sources. Zero uncertainties are
assigned to these points.}
\tablenotetext{$\dagger$}{These two points are not real observations, but
auxiliary points to ensure sensible boundary conditions for the
smooth spline; the first ensures zero starting velocity, and the
last gives a smooth acceleration at the finishing line.}
\end{deluxetable*}
\begin{figure}
\mbox{\epsfig{file=time_evolution_corrected.eps,width=\linewidth,clip=}}
\caption{Estimated position (top), speed (middle) and acceleration
(bottom) for Bolt (red curves) and Thompson (blue curves) as a
function of time. Actual distance measurements are indicated in the
top panel with $5\sigma$ error bars.}
\label{fig:evolution}
\end{figure}
\section{Data and observations}
\label{sec:data}
\subsection{Data sets}
The data used for this analysis consist of three clips filmed by three
cameras located along the finishing line at slightly different
positions. Specifically, the clips were obtained from NRK, NBC, and
BBC.
Unfortunately, the NRK and BBC clips were filmed with cameras
positioned fairly close to the track, and the rail of the moving
camera therefore disappears outside the field-of-view after about 6
seconds. This is not the case for the NBC clip, which was filmed from
further away. Even though the quality of this version is rather poor,
it is possible to count the number of ticks to the end.
\begin{figure*}
\mbox{\subfigure{\label{fig:end_run_a}\epsfig{figure=projected_end_run_corrected.eps,width=0.45\textwidth,clip=}}
\quad
\subfigure{\label{fig:end_run_b}\epsfig{figure=projected_end_run_zoom_corrected.eps,width=0.45\textwidth,clip=}}}
\caption{Comparison of real and projected distance profiles at the
end of the race. The point where the profiles cross the
horizontal 100 meter line is the new world record for a given
scenario. The right panel is only a zoomed version of the left
panel.}
\label{fig:end_run}
\end{figure*}
Using these data sets, we measured the position of Usain Bolt and
Richard Thompson at 16 different times in units of ticks. These are
all listed in Table \ref{tab:data}.
\subsection{Calibration of measurements}
There are three issues that must be addressed before the tick counts
listed in Table \ref{tab:data} can be translated into proper distance
measurements. First, the camera rail is not visible entirely to the
starting line, as the very first part is obscured by a camera man. The
tick counts in Table \ref{tab:data} are therefore counted relative to
the first visible tick. Fortunately, it is not very problematic to
extrapolate into the obscured region by using the distance between the
visible ticks, and knowing that the distance between the starting
lines for the 100 meter dash and 110 meter hurdles is precisely 10
meters. We estimate the number of obscured ticks to be $7 \pm 1$.
Second, the precision of the screen clock is only a tenth of a second,
and the clock also appears to truncate the time, not round off. We
therefore add 0.05 seconds to each time measurement, and define our
uncertainty in time to be uniform between $-0.05$ and $0.05$ seconds.
Finally, the screen clock is not calibrated perfectly with the stadium
clock. (See Figure 1 for an example frame.) A little more than half of
all frames appear to be synchronized, while in the rest the screen
clock is lagging behind by 0.1 seconds. We assume that the stadium
clock is the correct one, and re-calibrate the screen clock by adding
an additional 0.04 seconds to each time measurement.
With these assumptions, it is straightforward to calibrate both the
clock and distance measurements, and this is done in the corresponding
columns in Table \ref{tab:data}.
\section{Estimation of motion profiles}
\label{sec:analysis}
With calibrated distance information ready at hand, it is
straightforward to make the desired predictions. First, we compute a
smooth spline \citep{green:1994}, $s(t)$, through each of the two
runners' measured positions. A nice bonus of using splines is that we
automatically obtain the second derivatives of $s$ (i.e., acceleration)
at each time step, and also the first derivatives (i.e., speed),
\begin{equation}
v(t) = \frac{ds}{dt}; \quad a(t) = \frac{dv}{dt} = \frac{d^2 s}{dt^2}.
\label{eq:v_and_a}
\end{equation}
To obtain a well-behaved spline, we impose three constraints. The
first two are auxiliary data points added at $t=0.01$ and $t=13.0$
seconds. These are not measurements, but are included only in order to
guarantee sensible boundary conditions at each end: the first one
implies that the starting velocity is zero, while the last one leads
to a smooth acceleration at the finishing line. Third, we adopt a
smooth spline stiffness parameter of $\alpha=0.5$ \citep{green:1994}
to minimize unphysical fluctuations. The results are fairly
insensitive to the specific value of this parameter.
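As a concrete illustration of this step, the short Python sketch below fits a weighted smoothing spline to Bolt's calibrated data points from Table~\ref{tab:data} and differentiates it twice. It is only a sketch: it omits the two auxiliary boundary points, floors the zero uncertainties of the two fixed endpoints at a small value to keep the weights finite, and relies on the \texttt{UnivariateSpline} class of \texttt{scipy}, whose smoothing parametrization differs from the stiffness parameter $\alpha$ used above, so it only approximates our actual procedure.
\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

# Bolt's (time, distance, sigma) values from Table 1.  Screen-clock times
# carry the +0.05 s and +0.04 s calibration corrections described earlier;
# the two starred endpoints keep their official times and a floored sigma.
t = np.array([0.00, 1.19, 3.09, 4.09, 4.59, 5.49, 5.89, 6.29, 6.59,
              6.99, 7.39, 7.79, 8.09, 8.39, 8.69, 8.89, 9.49, 9.69])
d = np.array([0.0, 5.0, 22.5, 34.0, 41.3, 52.1, 55.9, 61.5, 64.8,
              69.6, 73.3, 78.5, 81.7, 85.6, 89.2, 91.3, 98.6, 100.0])
sig = np.array([0.05, 0.5, 0.5, 0.4, 0.5, 0.5, 0.5, 0.5, 0.4,
                0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.05])

s_fit = UnivariateSpline(t, d, w=1.0 / sig, k=4, s=len(t))  # position s(t)
v_fit = s_fit.derivative(1)                                 # speed v(t)
a_fit = s_fit.derivative(2)                                 # acceleration a(t)

print(s_fit(8.0), v_fit(8.0), a_fit(8.0))  # Bolt's state at t = 8 s
\end{verbatim}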
The resulting functions are plotted in Figure
\ref{fig:evolution}. Some interesting points to notice are the
following:
\begin{itemize}
\item Bolt and Thompson are virtually neck and neck up to four seconds,
corresponding to a distance of 35 meters.
\item Bolt's Olympic gold medal is essentially won between 4 and 8
seconds.
\item At 8 seconds Bolt decelerates noticeably, and Thompson matches
and then surpasses Bolt's speed. Note, however, that Thompson is also not
able to maintain his speed to the very end, but runs out of power
after about 8.5 seconds. Still, his acceleration is consistently
higher than Bolt's after 8 seconds.
\end{itemize}
\section{World record projections}
\label{sec:projections}
We are now in a position to quantitatively answer the original
question: How fast would Bolt really have run, if he hadn't celebrated
the last 2 seconds? To make this projection, we consider the following
two scenarios:
\begin{enumerate}
\item Bolt matches Thompson's acceleration profile after 8 seconds.
\item Bolt maintains a 0.5 m/s$^2$ higher acceleration than Thompson
after 8 seconds.
\end{enumerate}
The justification of scenario 1 is obvious, as Bolt outran Thompson
between 4 and 8 seconds. The justification of scenario 2 is more
speculative, as it is difficult to quantify exactly \emph{how much}
stronger Bolt was. Still, looking at the acceleration profiles in
Figure \ref{fig:evolution}, and noting that Bolt traditionally was
considered a 200 meter specialist, a value of 0.5 m/s$^2$ seems fairly
realistic.
Then, for each scenario we compute a new trajectory for Bolt by
choosing initial conditions, $s_0 = s(8 \,\textrm{sec})$ and $v_0 =
v(8\,\textrm{sec})$, and an acceleration profile as described
above. The computation of these trajectories is performed by simply
integrating Equation \ref{eq:v_and_a} with respect to time,
\begin{align}
\hat{s}(t) &= s_0 + \int_{t_0}^{t} \hat{v}(t) dt \\
\hat{v}(t) &= v_0 + \int_{t_0}^{t} \hat{a}(t) dt \\
\hat{a}(t) &= a_{\textrm{Thompson}}(t).
\end{align}
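Numerically, these integrals reduce to cumulative sums over a fine time grid. The toy Python sketch below shows the mechanics only; the initial conditions and Thompson's acceleration profile are replaced by illustrative stand-in numbers (in the actual analysis they come from the fitted splines), so its output should not be confused with the projections quoted below.
\begin{verbatim}
import numpy as np

# Illustrative stand-ins only; the real s_0, v_0 and a_Thompson(t) are
# taken from the fitted splines of the previous section.
dt = 0.001
t = np.arange(8.0, 10.5, dt)
s0, v0 = 81.0, 12.0              # position (m) and speed (m/s) at t_0 = 8 s
a_hat = -0.6 * np.ones_like(t)   # toy deceleration profile (m/s^2)

def projected_record(a_hat, delta_a=0.0):
    """Integrate a_hat (+ optional offset) twice and locate s_hat = 100 m."""
    v_hat = v0 + np.cumsum((a_hat + delta_a) * dt)
    s_hat = s0 + np.cumsum(v_hat * dt)
    return np.interp(100.0, s_hat, t)

print(projected_record(a_hat))       # scenario 1
print(projected_record(a_hat, 0.5))  # scenario 2
\end{verbatim}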
In Figure \ref{fig:end_run} we compare the projected trajectories,
$\hat{s}(t)$ (dashed red line shows scenario 1, dotted red line shows
scenario 2), with the actual trajectory, $s(t)$ (solid red line). For
comparison, Thompson's trajectory is indicated by a solid blue
line.
The projected new world record is the time for which $\hat{s}(t)$
equals 100 meter. Including 95\% statistical errors estimated by Monte
Carlo simulations as described in Section \ref{sec:method}, we find
that the new world record would be $9.61\pm0.04$ seconds in scenario
1, and $9.55\pm0.04$ in scenario 2.
\section{Conclusions}
\label{sec:conclusions}
Glen Mills, Usain Bolt's coach, suggested that the world record could
have been 9.52 seconds if Bolt had not danced along the track in
Beijing for the last 20 meters. According to our calculations, that
seems like a good, but perhaps slightly optimistic, estimate:
Depending on assumptions about Bolt's acceleration at the end of the
race, we find that his time would have been somewhere between 9.55 and
9.61 seconds, with a 95\% statistical error of $\pm0.04$
seconds. Clearly, the uncertainties due to the assumptions about the
acceleration are comparable to or larger than the statistical
uncertainties. Therefore, 9.52 seconds does by no means seem to be out
of reach.
In Figure \ref{fig:manipulated} we show an illustration of how such a
record would compare to the actual world record of 9.69 seconds,
relative to the rest of the field: The left version of Bolt shows his
actual position at $\sim9.5$ seconds, while the right version
indicates his position in the new scenarios.
Of course, there are potentially several systematic errors involved in
these calculations. For instance, it is impossible to know for sure
whether Usain might have been tired at the end, which of course would
increase the world record beyond our estimates. On the other hand,
judging from his facial expressions as he crossed the finishing line,
this doesn't immediately strike us as a very plausible hypothesis.
\begin{figure}
\mbox{\epsfig{file=bolt_manip.eps,width=\linewidth,clip=}}
\caption{Photo montage showing Bolt's position relative to his
competitors for real (left Bolt) and projected (right Bolt) world
records.}
\label{fig:manipulated}
\end{figure}
Another issue to consider is the wind. It is generally agreed that a
tail wind speed of 1 m/s improves a 100 meter time by 0.05 seconds
\citep{mureika:2000}. Further, for IAAF (International Association of
Athletics Federations) to acknowledge a given run as a record attempt,
the wind speed must be less than +2 m/s. When Bolt
ran in Beijing, there was no measurable wind speed at all, and one can
therefore safely assume that the world record could have been further
decreased, perhaps by as much as 0.1 seconds, under more favorable
wind conditions.
A corollary of this study is that a new world record of less than 9.5
seconds is within reach for Usain Bolt in the near future.
\begin{acknowledgements}
First and foremost, we would like to thank Christian Nitschke Smith
at NRK Sporten for providing very useful high-resolution footage of
the Beijing 100 meter dash, and also BBC and NBC for making their
videos available on their web pages. Second, we thank the pizza guy
from Peppe's who provided us with a very good half-n-half ``Thai
Chicken'' and ``Heavy Heaven'' pizza on a late Friday night. This
article has been submitted to the American Journal of
Physics. After it is published, it will be found at
http://scitation.aip.org/ajp.
\end{acknowledgements}
\section{Introduction}
DNA, a polymer made from four basic nucleotides (abbreviated by A, C, G, T), is the main carrier of the genetic information. The DNA sequencing is the process in which this information is extracted by converting physical DNA molecules into a signal that describes the exact order and type of the constituent nucleotides. The ability to sequence DNA has revolutionized molecular biology, biomedicine and life sciences in general. Among many applications, some of which we review in Section~\ref{sec:applications}, it is recognized as a critical method for diagnosing and improving human health (e.g. dissecting genetic mechanisms of cancer~\cite{Stratton2009}), identifying pathogens and protecting public health (e.g. detecting and tracking spread of infectious diseases~\cite{Gardy2017}) or understanding our environment (e.g. impact of microorganisms on water, air~and~soil~\cite{Quince2017}).
\begin{figure}[b]
\centering
\scalebox{0.9}{
\includegraphics{MiniSeq-small}\hspace{0.2in}\includegraphics{minion}\hspace{0.05in}\includegraphics{smidgion}
}
\caption{MiniSeq DNA sequencer from Illumina (left), vs. portable MinION (middle) and forthcoming SmidgION (right). Pictures not to scale. The approximate size in centimeters is marked for reference. The current price of MinION is \$1,000 compared to \$50K for an entry-level benchtop sequencer.}\label{fig:ont}
\end{figure}
The end-to-end DNA sequencing and analysis involves a combination of laboratory and bioinformatics steps (see Section~\ref{sec:sequencing}). In~a~traditional setup, these steps are performed at massive scales by highly trained personnel using expensive benchtop DNA sequencers and supporting computational servers. Consequently, the process has been confined to high-end laboratories with financial resources and skilled personnel, limiting the access and extending the time it takes from sample collection to results.
The recently introduced portable DNA sequencers, specifically Oxford Nanopore Technology (ONT) MinION~\cite{ONT} (see Fig.~\ref{fig:ont}), are changing this situation. Compared to the traditional DNA sequencers, these devices are relatively inexpensive and truly mobile: smaller than a cell phone, USB powered, and designed to be easily operable ``in the field.'' Moreover, they use biochemical principles (i.e. nanopore-based single molecule sequencing~\cite{Lu2016}) that enable near real-time streaming of the raw signal as soon as the DNA molecules are ``sensed,'' usually within minutes from the process initiation. As a result, portable DNA sequencing emerges as a rapid {\it in situ} diagnostic tool, especially when DNA samples are difficult or impossible to preserve or transport. Examples include the DNA surveillance of Ebola and Zika during the recent outbreaks in Africa~\cite{Quick2016} and Brazil~\cite{Faria2016}, or successful deployments in the Arctic~\cite{Edwards2017b} and Antarctic~\cite{Johnson2017}, in rainforests of Ecuador~\cite{Pomerantz2017}, and even on the International Space Station~\cite{Castro-Wallace2016}.
However, portable DNA sequencing and analysis are challenging for mobile systems. This is because the underlying computations, as well as data and communication intensive operations, have to be balanced to ensure the desired quality of the analysis while running in real-time, typically in the energy and bandwidth constrained environments.
Currently, mobile DNA sequencing is driven by the bioinformatics tools designed primarily for the desktop systems, and organized into mobile workflows in an {\it ad hoc} manner (see Section~\ref{sec:work}). While these solutions have been successful in the initial trials, they cannot be expected to scale if the underlying problems of processing speed, energy efficiency and resilience are not addressed. As the technology behind portable DNA sequencing matures and becomes more accessible, it is reasonable to assume that it will be adopted by individual consumers. Consequently, the underlying algorithms and software will have to be able to operate in the wide range of conditions, under different loads, and with varying resources, while remaining easy~to~use.
In this paper, we discuss the challenges that mobile systems and mobile computing must address to maximize the potential of portable DNA sequencing, and {\it in situ} DNA processing. Our analysis is guided by the current and envisioned applications of DNA sequencing. We look at the identified challenges from the perspective of both algorithms and systems design, and argue for careful co-design and functionality separation. We note that the paper is written from the systems perspective, for readers with no prior background in genomics or bioinformatics.
\section{Sequencing Applications}\label{sec:applications}
We begin with a discussion of general applications that highlight the enabling nature of the portable DNA sequencing.
\textbf{Sequencing When Time is Critical:} Rapid diagnosis of infectious diseases is critical for protecting human health~\cite{Gardy2017}. The DNA sequencing can be used to identify the infectious agent, assess its responsiveness to vaccines or antibiotics, and prescribe the best treatment. In some diseases, starting the right treatment within hours is vital~\cite{Cao2016,Hewitt2017}, and when responding to epidemics, real-time genomic surveillance increases situational awareness (e.g. by tracking evolution rate and transmission patterns), helping with planning and resource allocation~\cite{Faria2016,Quick2016}. Importantly, the same principles apply in detection and mitigation of biological threats~\cite{Walter2017}. However, the current culture-based laboratory methods for identifying pathogens take days, and in fact some pathogens are difficult or impossible to grow in culture.
Because emerging portable DNA sequencers are free of these limitations, they offer significant advantage when time is critical. One excellent example is the response to the recent Ebola epidemics, where mobile laboratory based on MinIONs, transported in a standard airplane luggage, was deployed in Guinea~\cite{Quick2016}. Despite logistic difficulties, such as lack of continuous electric power and poor Internet connectivity, the laboratory became operational within two days, was generating results in less than 24 hours from receiving a sample, and provided valuable insights into disease dynamics.
\textbf{Sequencing When Location is Critical:} The samples we know the least about, and would like to study by DNA sequencing, are usually located far from established sequencing facilities. This is especially true for metagenomic studies in which communities of microbial organisms are sampled directly from their native environments, to characterize their structure and function~\cite{Quince2017}. However, many types of samples cannot be transported due to the legal and international export barriers, or other practical considerations (e.g. cost effectiveness). Furthermore, sequencing in laboratory does not allow for on-site iterative surveillance, in which sampling decisions are made in real-time. Such approaches are important when studying rapidly changing environments.
Portable DNA sequencers have reached the level where they can be operated in some of the most demanding environments. For example, in one recent study, a battery-only powered laboratory, consisting of a MinION and an {\it ad hoc} cluster of two laptops without Internet connection, was harnessed for {\it in situ} analysis of microorganisms found in the 100 meter deep South Wales Coalfield~\cite{Edwards2017}. Although the entire process was far from simple, the study demonstrated that DNA sequencing in remote locations is currently feasible. Other studies~\cite{Johnson2017,Pomerantz2017,Castro-Wallace2016} serve as further proof~of~principle.
\textbf{Future:} With the continuing improvements to the sequencing technology, and simplification and automation of DNA extraction and preparation protocols (see next section), we may expect that portable DNA analysis will become a ubiquitous tool. Rapid medical diagnostics, forensics, agriculture, and general exploration of microbial diversity on Earth and in outer space, are just some domains that will benefit. However, the most exciting opportunities are in consumer genomics. ONT has been promoting the idea of Internet of Living Things (IoLT)~\cite{IoLT,Waltz2017}, where anyone will be able to sequence anything anywhere, opening endless possibilities for DNA-driven discoveries. Yet, because even short DNA sequencing runs can easily deliver gigabytes of data, which may require hours to analyze, the success of IoLT will depend on the ability of mobile and cloud computing to provide~adequate~support.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.575]{workflow}
\caption{The major steps in a typical DNA sequencing workflow.}\label{fig:workflow}
\end{figure*}
\section{Portable vs. Benchtop}\label{sec:sequencing}
In order to understand computational challenges in portable DNA sequencing, it helps to first look at the end-to-end DNA sequencing process, and how this process differs between the current benchtop sequencing platforms and the emerging portable~sequencers.
\definecolor{Gray10}{gray}{0.9}
\begin{table*}
\centering
\footnotesize\sf
\caption{General characteristics of different steps in DNA sequencing.}\label{tab:properties}
\begin{tabular}{|p{0.675in}|p{0.87in}|p{0.87in}|p{0.87in}|p{0.87in}|p{0.87in}|p{0.87in}|}
\hline
\rowcolor{Gray10}
& \multicolumn{2}{c|}{\textsf{\textbf{Library Preparation}}} & \multicolumn{2}{c|}{\textsf{\textbf{DNA Sequencing}}} & \multicolumn{2}{c|}{\textsf{\textbf{DNA Reads Processing}}} \\
\rowcolor{Gray10}
& MinION & Illumina & MinION & Illumina & MinION & Illumina \\
\hline
\textsf{\textbf{Time Scale}} & Minutes-hours & Hours-days & Real-time, Minutes-hours & Days-weeks & Real-time, Minutes-hours & Batch-mode, Minutes-days \\
\hline
\textsf{\textbf{Equipment}} & \multicolumn{2}{p{1.74in}|}{Basic, portable laboratory equipment (e.g. pipettes, centrifuge)} & USB-stick-size portable device & Large benchtop machine & \multicolumn{2}{c|}{Laptop to data center} \\
\hline
\textsf{\textbf{Energy}} & \multicolumn{2}{c|}{1-2W} & 1W & Up to 10KW & \multicolumn{2}{p{1.74in}|}{No reliable analysis available,\newline sustained high load} \\
\hline
\textsf{\textbf{Software}} & \multicolumn{2}{c|}{N/A} & \multicolumn{2}{p{1.74in}|}{Proprietary firmware, drivers,\newline and control software} & \multicolumn{2}{p{1.74in}|}{Usually open source, complex string and statistical algorithms} \\
\hline
\textsf{\textbf{Advantages}} & Fast and easy\newline protocol & Protocols for very low DNA mass & Portable, real-time, long reads,\newline inexpensive device & Low cost per base, low error rate & Streaming algorithms~and~interactive sequencing & Many tested workflows available\\
\hline
\textsf{\textbf{Challenges}} & High mass\newline of input DNA & Time and labor\newline intensive & High error rate,\newline high cost per base & Short reads,\newline expensive device & High error rate & Short read length, high data volume \\
\hline
\end{tabular}
\end{table*}
\subsection{How DNA is Studied}
A typical DNA sequencing workflow involves both laboratory and computational steps (see Fig.~\ref{fig:workflow} and Tab.~\ref{tab:properties}). While the specifics of the protocols executed in every step may vary, the main steps remain the same irrespective of the DNA sequencing platform.
\textbf{Sample Collection:} The first step is to obtain the material from which DNA will be extracted. The choice and quantity of material to sample, and the actual number of samples, are dictated by the particular application. Sample types include patient's blood (e.g. in epidemiology), feces (e.g. when studying gut microbial flora), water or soil (e.g. when tracking biological contaminants), etc. The samples are preserved, e.g. by freezing or adding a chemical buffer, to minimize degradation or contamination until they can be further processed. As we mentioned earlier, sometimes samples may be impossible to adequately preserve, necessitating immediate processing.
\textbf{DNA Isolation:} In this step, the collected samples are subjected to chemical, mechanical or thermal processing to extract and purify the DNA molecules. DNA extractions performed in a laboratory with the standard commercially available kits take from minutes to hours, and in addition to the basic tools like pipettes, involve heating/cooling and specialized equipment, e.g. a centrifuge. Consequently, this step requires the access to power supply, or the use of improvised solutions.
\textbf{Library Preparation:} The purified DNA is further processed, to make it compatible with the sensing machinery of the sequencer. This usually involves basic biochemical processing that nevertheless may require complex protocols. The step becomes even more difficult, when, to lower the cost of sequencing, instead of whole genome DNA, only specific DNA regions (e.g. corresponding to known marker genes) are to be sequenced, or if multiple samples are to be sequenced concurrently in a single run. Overall, the entire library preparation takes from several minutes to several days, and with the exception of targeted DNA sequencing, does not require additional equipment beyond the portable laboratory tools.
\textbf{DNA Sequencing:} Once the DNA library is ready, the actual sequencing can be performed. Currently, several sequencing platforms are available, for example Illumina, PacBio, Ion Torrent or Oxford Nanopore. They differ in how DNA is detected and read, which translates into differences in: sequencing speed and throughput (i.e. the number of nucleotides detected per unit of time), length of the output reads (i.e. how long a DNA fragment the sequencer can sense), and error rate (i.e. how many incorrectly detected nucleotides one may expect in the output). These differences are crucial, since they directly affect the downstream processing and analysis (see e.g.~\cite{Pop2009}). The current sequencers are controlled by computers, which also receive and store output data. With the exception of the MinION, they are not portable, taking days to complete a single sequencing run in the laboratory. Finally, the sequencing process involves additional consumable resources, such as biochemical reagents and flow cells -- devices in which the actual DNA sequencing happens.
\textbf{DNA Reads Processing:} The final step is purely computational, and its goal is to first convert the signal produced by a sequencer into DNA reads, and then analyze these reads for insights. Because of the volatility of DNA, and the technical limitations of the sequencing platforms, DNA is hard to sequence as a single large molecule (e.g. a chromosome). Instead, it is sequenced in fragments that the sequencer is able to sense. The raw signal produced for each detected DNA fragment is run through base calling algorithms to generate DNA reads -- the actual strings where each detected nucleotide is represented by its corresponding letter, commonly referred to as a base (A -- adenine, C -- cytosine, G -- guanine, T -- thymine).
The collected DNA reads are the input to bioinformatics analysis. Here different workflows can be applied, depending on how samples had been prepared for sequencing, and what are the questions of interest. For example, {\it de novo} DNA assembly aims at reconstructing genome from input reads~\cite{Pop2009}, while metagenomic analysis uses DNA reads to detect, classify, and functionally annotate microorganisms present in the sequenced samples~\cite{Quince2017}. However, irrespective of the applied analysis, the common denominator is the reliance on compute and memory intensive string, combinatorial and statistical algorithms, ranging from massive graphs construction and traversal~\cite{Pop2009}, through clustering~\cite{Yang2011}, to large databases querying~\cite{Kim2016}. Consequently, this step requires access to non-trivial computational resources, often exceeding capabilities of a single laptop or even a~desktop~computer.
\subsection{Comparison}
To compare portable and benchtop sequencing we concentrate on the MinION, currently the only portable DNA sequencing technology, and Illumina platform, the dominating benchtop solution. We make our comparison in the context of {\it Library Preparation}, {\it DNA Sequencing}, and {\it DNA Reads Processing}, since these steps vary between platforms. Table~\ref{tab:properties} summarizes our~comparison.
\textbf{Library Preparation:} The attractiveness of the MinION lies in its rapid sequencing protocol that can be executed in the field for genomic DNA sequencing. The protocol takes roughly 10~min, and can be automated by using portable hands-off sample preparation hardware~\cite{VOLTRAX}. However, compared to more time-demanding protocols, it has limitations: it usually leads to lower quality sequencing output (i.e. with higher error rates), and it requires a significant amount of input DNA ($\sim$240~ng). In comparison, protocols for the Illumina platform, while much more complex and labor intensive (the fastest take $\sim$90~min, and most take hours), can be performed with an order of magnitude less DNA than the MinION, without impacting sequencing quality.
\textbf{DNA Sequencing:} The MinION is based on the idea of nanopore sequencing, where DNA molecules pass through organic nanoscale sensors in a flow cell~\cite{Lu2016}. This approach has three main advantages: first, it permits a portable and easy-to-use design with minimal power consumption; second, it enables real-time sequencing -- the signal gathered by hundreds of nanopores in a sequencer becomes immediately available for downstream processing; third, it can produce long reads (the current average is $\sim$7K bases compared to $\sim$250 bases for Illumina), and theoretically this length is limited only by the physical size of the DNA fragments. All this comes at the price of a high error rate (anywhere between 10\%-30\% compared to $\sim$0.1\% for Illumina), which may lead to a poor sequencing yield and complicates downstream analysis. Finally, because of the lower throughput, the cost of sequencing a single base is higher compared to benchtop sequencers (if we exclude the required upfront investments). This is in part because the MinION flow cells last for at most 48h of continuous sequencing.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.625]{software-flow}
\caption{Current MinION software workflow.}\label{fig:softflow}
\end{figure*}
\textbf{DNA Reads Processing:} The DNA reads produced in real-time by the MinION allow for a flexible approach to bioinformatics analysis, with the emphasis on streaming and one-pass strategies executed outside of data centers. This makes ``interactive'' sequencing possible, where the process can be terminated as soon as sufficient DNA reads have been collected to answer the questions at hand. Long reads simplify tasks such as querying of databases or DNA assembly, but the high error rate makes certain analyses (e.g. variant detection) extremely challenging, or requires more input reads to resolve uncertainties. In contrast, the Illumina platform is high-throughput and high-volume oriented. The data from a single batch run is typically processed by well-established, and to some extent standardized, workflows executed in a data center close to the sequencing facility. The low error rate combined with the high volume of DNA reads makes tasks such as variant detection possible. However, short reads force complex algorithms on other~tasks,~e.g.~assembly.
\section{Mobile Computing Perspective}\label{sec:work}
We are now ready to highlight some of the open problems in portable DNA analysis as they pertain to mobile computing. To help understand the problems, we first provide an overview of the state of the art in mobile DNA analysis software. Then we discuss the limitations and open problems. While we base our discussion on the current MinION platform, we believe that the points we make are equally applicable to future generations of~portable~sequencers.
\subsection{Current Software Overview}
Figure~\ref{fig:softflow} shows the general MinION software architecture. The first component is MinKNOW -- the proprietary control and signal acquisition software suite. MinKNOW is responsible for the configuration and supervision, including initiation and termination, of sequencing runs. It also receives and stores DNA signals generated by the ASIC in the sequencer's flow cell.
The second component is base calling software. Here multiple options are available~\cite{BASECALLERS}, including execution on the host device or delegation to a specialized cloud service (e.g. Metrichor~\cite{METRICHOR}). The current basecallers are very compute intensive, e.g. they typically involve Recurrent Neural Nets (RNNs), and most of the time are recommended to run in the cloud.
The final software component consists of the application-specific bioinformatics tools selected by the end user. Here, bioinformatics and computational biology deliver a very broad range of open-source solutions to choose from, and new solutions are constantly being developed. Usually, these tools are organized into a pipeline of their own, and may involve multiple processing stages (e.g. preprocessing to remove low-quality data, querying a database to find sequences matching a given DNA read, building a tree of genetic relationships, etc.). Depending on the complexity, this DNA analytics stage may be deployed on the host machine, but more frequently it will be running in~the~cloud.
\subsection{Limitations}
The software architecture discussed above has been already used with success to showcase the promise of portable DNA sequencing. However, the approaches thus far focused on manual and {\it ad hoc} organization of the existing bioinformatics tools to perform mobile DNA analysis, without addressing many important concerns, including a systematic approach to energy, data and network management. While this is a great first step, in the long run this strategy has too many limitations to be scalable.
\textbf{Energy Management:} Energy is one of the most critical resources in mobile environments. It is especially important for mobile DNA analysis, since a DNA sequencing device is directly attached to, and draws energy from, a host device. Moreover, computational tasks executed at different stages of the sequencing workflow can run for tens of minutes to hours. However, the current software tools do not have any mechanisms to consider energy as a manageable resource. In fact, bioinformatics software tools are routinely designed under the assumption that they will be executing either on computational servers or in data centers, with abundant main memory and storage, parallel execution~capabilities, and stable power supply.
\textbf{Data Management:} A DNA sequencer generates large volumes of data, which flows through various processing stages. For example, a 48h continuous run may produce up to $\sim$250~GB of output and $\sim$20~GB of temporary data, scattered across millions of files. Furthermore, if a cloud backend is used for the analysis, data needs to be transferred back and forth between a mobile device and~the~backend.
Unfortunately, data management is currently done independently by each software component. This puts an unnecessary burden on software designers -- they need to implement not only core functionality (e.g. a base calling algorithm) but also data management logic (e.g. sending data between a mobile device and a cloud, ensuring interoperability with other software, etc.). In addition, this makes it difficult to deploy new data management mechanisms rapidly, as they have to be integrated into multiple and disjoint software elements.
\textbf{Network Management:} Some of the most promising and anticipated applications of mobile DNA analysis are {\it in situ} processing scenarios, where DNA has to be completely handled at a remote location. In such scenarios, network connectivity could be sporadic and bandwidth could fluctuate widely. However, the current software is not designed to be adaptive to changing network conditions. It assumes either no network connectivity, and hence runs locally, or depends on full network connectivity, and hence assumes the always-on availability of a cloud service. Moreover, the decision to run locally or remotely is left to the end user, who must decide before executing the experiment.
\textbf{Consumables Management:} As we mentioned earlier, a flow cell is the workhorse of a sequencer. It is a consumable that can be used to analyze only a limited number of DNA samples, and within a limited time. Moreover, a flow cell degrades over time, and that translates into progressively lower sequencing throughput and potentially growing error rate. Consequently, in truly mobile setups it is necessary to manage flow cells as a scarce resource. While this problem has been recognized, currently no systematic solution exists that would offer the necessary functionality.
\subsection{Open Problems and Proposed Approach}
The majority of the limitations we identified above are cross-cutting issues that involve multiple software components in the current workflow design. For example, all software components should have some form of data management, energy management, and network management; in order to implement a new solution that addresses a limitation for any one of these, we need to work with multiple software components and apply the solution across all of the components. This is time-consuming and error-prone, and thus hinders rapid innovation.
To address this challenge, we envision a new software architecture that identifies all necessary functions and separates them into different software components with clean interfaces to ensure interoperability. Specifically, we propose the architecture based on three elements: the data management layer, the DNA analytics layer, and the workflow manager. This architectural separation has the benefit of allowing different components to innovate independently from other components. It also has an advantage of simplifying software development, by allowing each component to focus on its core functionality. At the same time, it allows to accommodate the existing software, especially rich and growing set of bioinformatics tools.
\textbf{Data Management Layer:} The goal of having a separate data management layer is to free other software components from the burden of managing data on their own. Thus, the data management layer should provide all functionality related to managing DNA sequencing and analysis data. This includes 1) an interface for other components to read and store data, including the back compatibility support for the POSIX interface that the existing tools use for flat files, 2) efficient algorithms and mechanisms for data management, including discovery, monitoring and delivery, and 3) integration with cloud services for processing delegation and~data~backup.
Interesting questions arise for the design of a data management layer in all three aspects. First, for the clean slate interface design, the primary question is what kind of abstractions make the most sense for DNA sequencing and analysis. As mentioned earlier, there are mainly three types of data -- raw signals generated by a sequencer, DNA reads (strings), and analysis results (e.g. stored as data tables). Thus, perhaps the most natural interface design is to have an abstraction for each data type. Such design would allow other components to easily search, access, and when needed join data without dealing with low-level details such as file~management.
Second, once the interface is fixed, the underlying implementation can freely employ various mechanisms to manage data. For example, it is well known that DNA sequencing and analysis produce a large amount of data. It is also known that DNA has much inherent redundancy due to its small alphabet (only four letters) and its repetitive nature. Thus, the data management layer can employ some compression strategies. However, it is an open question as to how best to compress this data considering the trade-offs between computation cost, computation precision and constrained storage inherent to mobile systems. The existing general strategies, as well as many DNA-specific methods that take into account DNA quality and prior knowledge of sequenced genomes~\cite{Brandon2009}, may work well, but it remains to be seen which strategy is practical in a mobile setup.
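To illustrate the small-alphabet argument, the toy Python sketch below packs each base into two bits, a 4$\times$ reduction over one-byte ASCII characters. It is only an illustration, not a proposal for the data management layer: real DNA compressors must also handle quality scores and ambiguous bases, and typically exploit repeats or reference genomes to do much better than this naive packing.
\begin{verbatim}
# Toy 2-bit packing of DNA reads (A, C, G, T only).
_CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
_BASE = {v: k for k, v in _CODE.items()}

def pack(read: str) -> bytes:
    out = bytearray()
    for i in range(0, len(read), 4):          # four bases per byte
        byte = 0
        for j, base in enumerate(read[i:i + 4]):
            byte |= _CODE[base] << (2 * j)
        out.append(byte)
    return bytes(out)

def unpack(blob: bytes, length: int) -> str:
    return "".join(_BASE[(blob[i // 4] >> (2 * (i % 4))) & 0b11]
                   for i in range(length))

read = "ACGTACGGTTCA"
assert unpack(pack(read), len(read)) == read
print(len(read), "ASCII bytes ->", len(pack(read)), "packed bytes")
\end{verbatim}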
Lastly, DNA sequencing and analysis data often needs to be shipped back and forth between a mobile device and a cloud service for further analysis. As discussed earlier, base calling requires extensive RNNs and it is recommended to run it in the cloud. Similarly, DNA analysis is often computationally intensive and requires much computational power and access to large reference databases. Thus, the data management layer needs algorithms and mechanisms to optimize data transfer. Here again, the existing techniques, such as similarity detection and dynamic chunking, may or may not work well.
\textbf{DNA Analytics Layer:} Ideally, the DNA analytics layer should have multiple sub-components, each implementing one algorithm relevant to DNA sequencing and analysis. This includes base calling algorithms, as well as the DNA reads processing algorithms. The goal of this layer is to allow algorithm designers to solely focus on algorithms and their implementations without worrying about other orthogonal issues, such as data or energy management.
Interesting questions arise if we consider that DNA processing can leverage both mobile devices and cloud services at the same time. This provides an opportunity to revisit current solutions and redesign them such that they become amenable to running in both domains, or in either one of the domains. In fact, we can envision a programming model to simplify implementation of such strategies. The model could provide primitives to encapsulate alternative realizations of the same DNA processing task as small migratable entities, allowing them to move across mobile and cloud domains or run in parallel if necessary. It could be further extended to account for the fact that certain DNA processing problems can be answered with different quality, trading specificity or sensitivity for computational, memory or energy performance. For instance, one of the most general questions a user may wish to ask is {\it ``what's in my pot''}~\cite{Juul2015,Kim2016}, which is to report all known organisms whose DNA has been found in a metagenomic sample, and hence sequenced. The question can be answered by classifying detected organisms at the species level (i.e. fine-grain assignment) or just at the family level (i.e. coarse-grain assignment). The fine-grain assignment will typically require large reference databases and compute intensive sequence comparison algorithms. However, the process can be accelerated by using e.g. data abbreviation techniques and clever indexing schemes at the cost of lower sensitivity and precision. By supporting such multiple task realizations of the same DNA analysis problem, the model would permit for more flexible execution paths. For example, in cases where resources are scarce, ``approximate'' tasks could provide less precise but potentially useful information, instead of waiting until resources are sufficient to deliver the detailed answer.
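One possible shape of such a primitive is sketched below in Python; this is purely a hypothetical illustration, not an existing API. Each realization of the same analysis question (here, the coarse family-level versus fine species-level variants of ``what's in my pot'') advertises a rough resource hint, and a selector picks the most detailed realization that fits the current budget; the classifiers and energy numbers are placeholders.
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Realization:
    name: str                 # e.g. "family-level" vs. "species-level"
    est_energy_j: float       # rough energy hint in joules (illustrative)
    needs_cloud: bool         # large reference database required?
    run: Callable[[List[str]], Dict[str, int]]

def coarse_classify(reads):   # placeholder: compact k-mer sketch index
    return {"Enterobacteriaceae (family)": len(reads)}

def fine_classify(reads):     # placeholder: full reference-database search
    return {"Escherichia coli (species)": len(reads)}

WHATS_IN_MY_POT = [
    Realization("family-level", 5.0, needs_cloud=False, run=coarse_classify),
    Realization("species-level", 500.0, needs_cloud=True, run=fine_classify),
]

def best_available(budget_j: float, online: bool) -> Realization:
    ok = [r for r in WHATS_IN_MY_POT
          if r.est_energy_j <= budget_j and (online or not r.needs_cloud)]
    if not ok:
        raise RuntimeError("no realization fits the current budget")
    return max(ok, key=lambda r: r.est_energy_j)  # most detailed affordable one

choice = best_available(budget_j=50.0, online=False)
print(choice.name, choice.run(["ACGT", "TTGA"]))  # -> family-level answer
\end{verbatim}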
One additional advantage of having a task-based analytics layer is the ability to easily deploy workflows with stream processing and speculative execution capabilities (techniques known to improve resource utilization). We note that many of the existing algorithms, especially those involving querying of reference databases, can be cast into this model with little or no effort, while some others (e.g. construction of DNA assembly graphs, clustering, etc.) would require additional research and reformulation.
\textbf{Workflow Manager:} The goal of the workflow manager is to orchestrate all aspects of the DNA sequencing and analysis workflow, taking into account multiple static and dynamic factors that a user might encounter during a sequencing experiment. Many of the limitations that we discussed earlier fall into this category. Energy consumption, network connectivity, bandwidth variation, flow cell degradation, etc. all contribute to dynamically changing conditions of an experiment. Hence, the workflow manager should carefully monitor these variables and continuously make decisions on what the best course of actions is. This leads to several design questions. For example, how to monitor an experiment including not only the basic properties of a mobile device, e.g. how much energy is left or what is the current network condition, but also status of a flow cell and the quality of reads it is producing. Currently, very limited resources are available regarding flow cell monitoring, which we believe is an interesting research opportunity.
Once we have monitoring capabilities, the second question is how best to utilize available resources to get a desired outcome. This is especially important since a user conducting a DNA analysis in the field often has to make critical decisions based on limited information. For example, suppose a user is conducting DNA analysis in a remote area where there is no network connectivity. The user might wonder if she has enough energy to finish the pending DNA analysis on her laptop and use the resulting data to adjust her experiment (e.g. collect more samples), or if she should move to an area with network connectivity to offload the analysis to a cloud and then continue the experiment. The workflow manager should either assist users in making well-informed decisions, or be intelligent enough to make decisions on its own, without requiring any user intervention. In cases where DNA analysis algorithms are amenable to partitioning or migration across different domains, the workflow manager could make fine-grained decisions about moving individual computational tasks across domains, directly leveraging capabilities exposed by the {\it Data Management Layer} and {\it DNA Analytics Layer}.
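To make the trade-off concrete, a first-cut decision rule inside the workflow manager could look like the following sketch (again our own illustration, not an existing component; the cost estimates are assumed to come from the monitoring facilities discussed above):
\begin{verbatim}
def choose_action(remaining_energy_j, local_cost_j, offload_cost_j, has_network):
    # remaining_energy_j: battery budget the user is willing to spend
    # local_cost_j:       estimated energy to finish the analysis on-device
    # offload_cost_j:     estimated energy to transfer data and fetch results
    # has_network:        whether connectivity is currently available
    if has_network and offload_cost_j < min(local_cost_j, remaining_energy_j):
        return "offload"        # cheaper to ship the data to the cloud
    if local_cost_j <= remaining_energy_j:
        return "run_locally"    # enough battery to finish on-device
    return "defer"              # relocate, recharge, or run an approximate task
\end{verbatim}
A real implementation would additionally weigh transfer time, flow cell state and the value of partial results.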
\section{Final Remarks}
The proposed architecture is our attempt to introduce a more systematic and scalable approach to mobile DNA analytics by tapping into concepts known from edge computing. One potential caveat is that the proposed architecture would require reimplementation of some of the existing bioinformatics tools to fully leverage the facilities in the {\it DNA Analytics Layer}. However, we believe this is feasible considering that portable sequencers are a relatively new technology, with almost no algorithms designed for mobile systems. Currently, our team uses the latest release of MinION to investigate the questions we pose in this paper in the context of environmental DNA analysis.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Measurements of a complex evolving system from different view angles provide us with many time series that are usually long-range cross-correlated and exhibit multifractal nature. In turbulent flows, the velocity, temperature and concentration fields are embedded in the same spatial domain. One can measure these quantities at fixed locations to obtain time series, which are mutually correlated \cite{Antonia-VanAtta-1975-JFM,Meneveau-Sreenivasan-Kailasnath-Fan-1990-PRA}. In financial markets, there are also many pairs that are cross-correlated, such as market index volatilities, price returns of different markets, price returns of different equities, and different quantities of the same equity \cite{Lin-2008-PA,Podobnik-Horvatic-Petersen-Stanley-2009-PNAS,SiqueirJr-Stosic-Bejan-Stosic-2010-PA,Wang-Wei-Wu-2010-PA,He-Chen-2011-CSF,Zhou-2012-QF,Zhou-2012-NJP,Zhuang-Wei-Zhang-2014-PA}. Moreover, examples come from very diverse fields, including agronomy \cite{Kravchenko-Bullock-Boast-2000-AJ,Zeleke-Si-2004-AJ}, seismic data \cite{Shadkhoo-Jafari-2009-EPJB}, meteorology \cite{JimenezHornero-JimenezHornero-deRave-PavonDominguez-2010-EMA,JimenezHornero-PavonDominguez-deRave-ArizaVillaverde-2010-AR,Shen-Li-Si-2015-PA}, medical science \cite{Lin-Sharif-2010-Chaos,Ghosh-Dutta-Chakraborty-2014-CSF}, geophysics \cite{Hajian-Movahed-2010-PA}, and transportation \cite{Xu-Shang-Kamae-2010-ND,Zhao-Shang-Lin-Chen-2011-PA,Zebende-daSilva-Filho-2011-PA}, to list a few.
To extract the joint multifractality between a pair of multifractal time series, a variety of methods have been developed. These include the MF-X-PF method, which performs joint multifractal analysis \cite{Antonia-VanAtta-1975-JFM,Meneveau-Sreenivasan-Kailasnath-Fan-1990-PRA,Schmitt-Schertzer-Lovejoy-Brunet-1996-EPL,Xu-Antonia-Rajagopalan-2000-EPL,Xu-Antonia-Rajagopalan-2007-EPL,Wang-Shang-Ge-2012-Fractals} based on the partition function approach \cite{Halsey-Jensen-Kadanoff-Procaccia-Shraiman-1986-PRA}; the MF-X-DFA method, which conducts multifractal detrended cross-correlation analysis \cite{Zhou-2008-PRE} based on detrended fluctuation analysis \cite{Peng-Buldyrev-Havlin-Simons-Stanley-Goldberger-1994-PRE,Kantelhardt-KoscielnyBunde-Rego-Havlin-Bunde-2001-PA}, multifractal detrended fluctuation analysis
\cite{CastroESilva-Moreira-1997-PA,Weber-Talkner-2001-JGR,Kantelhardt-Zschiegner-KoscielnyBunde-Havlin-Bunde-Stanley-2002-PA}, and detrended cross-correlation analysis \cite{Jun-Oh-Kim-2006-PRE,Podobnik-Stanley-2008-PRL,Podobnik-Horvatic-Petersen-Stanley-2009-PNAS,Horvatic-Stanley-Podobnik-2011-EPL,Kristoufek-2013-EPJB,Kristoufek-2014a-PA,Ying-Shang-2014-Fractals,Kristoufek-2015-PRE}; the MF-X-DMA method \cite{Jiang-Zhou-2011-PRE}, which carries out multifractal detrended cross-correlation analysis based on detrending moving-average analysis \cite{Vandewalle-Ausloos-1998-PRE,Alessio-Carbone-Castelli-Frappietro-2002-EPJB,Carbone-Castelli-2003-SPIE,Carbone-Castelli-Stanley-2004-PA,Carbone-Castelli-Stanley-2004-PRE,Varotsos-Sarlis-Tanaka-Skordas-2005-PRE,Xu-Ivanov-Hu-Chen-Carbone-Stanley-2005-PRE,Arianos-Carbone-2007-PA,Bashan-Bartsch-Kantelhardt-Havlin-2008-PA,Arianos-Carbone-2009-JSM,Carbone-2009-IEEE} and multifractal detrending moving-average analysis \cite{Gu-Zhou-2010-PRE,He-Chen-2011b-PA}; the multifractal height cross-correlation analysis (MF-HXA) method \cite{Kristoufek-2011-EPL}; the multiscale multifractal detrended cross-correlation analysis (MM-DCCA) \cite{Shi-Shang-Wang-Lin-2014-PA}; and the MF-DPXA method \cite{Qian-Liu-Jiang-Podobnik-Zhou-Stanley-2015-PRE}, which generalizes detrended partial cross-correlation analysis \cite{Liu-2014,Yuan-Fu-Zhang-Piao-Xoplaki-Luterbacher-2015-SR} by taking partial correlations into account. Properly designed statistical tests can be used to quantify these cross correlations \cite{Podobnik-Grosse-Horvatic-Ilic-Ivanov-Stanley-2009-EPJB,Zebende-2011-PA,Podobnik-Jiang-Zhou-Stanley-2011-PRE}.
Joint multifractal analysis is a classic method and has been applied to study the joint multifractal nature of different pairs of time series recorded in the natural and social sciences \cite{Kravchenko-Bullock-Boast-2000-AJ,Zeleke-Si-2004-AJ,Lin-2008-PA,JimenezHornero-JimenezHornero-deRave-PavonDominguez-2010-EMA,Lin-Sharif-2010-Chaos,JimenezHornero-PavonDominguez-deRave-ArizaVillaverde-2010-AR,Wang-Shang-Ge-2012-Fractals}. Owing to its elegant geometric nature, many important properties can be derived analytically, which is very difficult within the frameworks of the other methods mentioned above. For instance, although there is numerical evidence and there are analytical results for the relationship between the cross-multifractal spectrum $f_{xy}(\alpha_x,\alpha_y)$ and the multifractal spectra $f_x(\alpha_x)$ and $f_y(\alpha_y)$ of the individual time series \cite{Podobnik-Stanley-2008-PRL,Zhou-2008-PRE,Jiang-Zhou-2011-PRE,Wang-Shang-Ge-2012-Fractals,Kristoufek-2015-PA}, the problem remains open. Moreover, the original MF-X-PF method is important because it handles moments with two different orders, while recent methods for multifractal cross-correlation analysis consider only a single order.
In this work, we recover the uni-order MF-X-PF method \cite{Wang-Shang-Ge-2012-Fractals} and propose a direct determination approach for the multifractal spectrum using ideas from the bi-order MF-X-PF framework \cite{Meneveau-Sreenivasan-Kailasnath-Fan-1990-PRA}. Based on this framework, we are able to derive important geometric properties of the uni-order MF-X-PF method. We perform numerical simulations using different mathematical models and explain the results for multifractal binomial measures analytically. Finally, we apply the bi-order MF-X-PF method to stock market indices.
\section{Joint multifractal analysis based on partition function approach}
\label{S2:MF-X-PF}
In this section, we first present the joint multifractal analysis based on the partition function approach with two moment orders \cite{Meneveau-Sreenivasan-Kailasnath-Fan-1990-PRA}, abbreviated MF-X-PF$(p,q)$, and then derive the uni-order method MF-X-PF$(q)$ that was independently proposed recently \cite{Wang-Shang-Ge-2012-Fractals}. Although the joint partition function $\chi_{xy}(q,s)$ of the uni-order method can be directly recovered from the joint partition function $\chi_{xy}(p,q,s)$ of the bi-order method by setting $p=q$, we will show that the connection between the multifractal properties of the two methods is not obvious, owing to the application of the steepest descent approach.
\subsection{MF-X-PF$(p,q)$}
Based on the box-counting idea, the geometric support is partitioned into boxes of size $s$. We consider two integrated measures $m_x(s,t)$ and $m_y(s,t)$ in the $t$-th box. The local singularity strengths $\alpha_x$ and $\alpha_y$ are defined according to the following relationships:
\begin{subequations}
\begin{equation}
m_x(s,t) \sim s^{\alpha_x},
\label{Eq:MF-X-PF:mx:s:alphax}
\end{equation}
and
\begin{equation}
m_y(s,t) \sim s^{\alpha_y}.
\label{Eq:MF-X-PF:my:s:alphay}
\end{equation}
\label{Eq:MF-X-PF:mxy:s:alpha}
\end{subequations}
Let $N_s(\alpha_x,\alpha_y)$ denote the number of boxes of size $s$ needed to cover the set of points in which the singularity strengths are around $\alpha_x$ and $\alpha_y$ with bands $d\alpha_x$ and $d\alpha_y$. Hence, the fractal dimension of the set is determined according to \cite{Mandelbrot-1983}
\begin{equation}
N_s(\alpha_x,\alpha_y) \sim s^{-f_{xy}(\alpha_x,\alpha_y)},
\label{Eq:MF-X-PF:Ns:s:f:alpha:xy}
\end{equation}
in which $f_{xy}(\alpha_x,\alpha_y)$ is the joint distribution of the two singularity strengths \cite{Meneveau-Sreenivasan-Kailasnath-Fan-1990-PRA} or the joint multifractal spectrum.
We consider the joint partition function
\begin{equation}
\chi_{xy}(p,q,s)= \sum_t\left[m_x(s,t)\right]^{p/2}[m_y(s,t)]^{q/2}.
\label{Eq:MF-X-PF:pq:chi:s}
\end{equation}
This definition is slightly different from that in Ref.~\cite{Meneveau-Sreenivasan-Kailasnath-Fan-1990-PRA}, in which the orders are $p$ and $q$ rather than $p/2$ and $q/2$. In this setting, we recover the traditional partition function when $m_x=m_y$ and $p=q$ \cite{Halsey-Jensen-Kadanoff-Procaccia-Shraiman-1986-PRA}. The joint mass exponent function $\tau_{xy}(p,q)$ can be obtained from the following relation
\begin{equation}
\chi_{xy}(p,q,s) \sim s^{\tau_{xy}(p,q)}.
\label{Eq:MF-X-PF:pq:tauxy}
\end{equation}
In practice, for a given pair $(p,q)$, we compute $\chi_{xy}(p,q,s)$ for various box sizes $s$ and perform a linear regression of $\ln\chi_{xy}(p,q,s)$ against $\ln{s}$ within a proper scaling range to obtain $\tau_{xy}(p,q)$.
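As a minimal illustration of this estimation step (a Python sketch of our own, not the authors' code; the measures are assumed to be strictly positive arrays of $2^L$ cell values), the slope of $\ln\chi_{xy}(p,q,s)$ versus $\ln{s}$ can be extracted as follows:
\begin{verbatim}
import numpy as np

def joint_mass_exponent(mx, my, p, q, box_sizes):
    """Estimate tau_xy(p,q) from two cell-level measures mx, my (sketch)."""
    log_s, log_chi = [], []
    for s in box_sizes:                      # s = box size in number of cells
        nb = mx.size // s
        bx = mx[:nb * s].reshape(nb, s).sum(axis=1)   # m_x(s,t)
        by = my[:nb * s].reshape(nb, s).sum(axis=1)   # m_y(s,t)
        chi = np.sum(bx ** (p / 2.0) * by ** (q / 2.0))
        log_s.append(np.log(s))              # normalizing s only shifts the intercept
        log_chi.append(np.log(chi))
    slope, _ = np.polyfit(log_s, log_chi, 1) # tau_xy(p,q) is the slope
    return slope
\end{verbatim}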
We insert the two relations in Eq.~(\ref{Eq:MF-X-PF:mxy:s:alpha}) into the joint partition function, rewrite the sum into a double integral over $\alpha_x$ and $\alpha_y$, and then apply the steepest descent approach to estimate the integral at small $s$ values, which leads to
\begin{equation}
\tau_{xy}(p,q) = p\alpha_x/2+q\alpha_y/2 - f_{xy}(\alpha_x,\alpha_y),
\label{Eq:MF-X-PF:tau:alpha:f}
\end{equation}
where
\begin{subequations}
\begin{equation}
{\partial f_{xy}(\alpha_x,\alpha_y)}/{\partial \alpha_x} = {p}/{2},
\label{Eq:MF-X-PF:df:alphax:p}
\end{equation}
and
\begin{equation}
{\partial f_{xy}(\alpha_x,\alpha_y)}/{\partial \alpha_y} = {q}/{2}.
\label{Eq:MF-X-PF:df:alphay:q}
\end{equation}
\end{subequations}
Taking partial derivative of Eq.~(\ref{Eq:MF-X-PF:tau:alpha:f}) over $p$, we have
\begin{equation}
{\partial\tau_{xy}(p,q)}/{\partial p} = {\alpha_x}/{2}.
\end{equation}
A similar derivation can be carried out over $q$, and one obtains the double Legendre transforms
\begin{subequations}
\begin{equation}
\alpha_x = 2{\partial \tau_{xy}(p,q)}/{\partial p},
\label{Eq:MF-X-PF:alphax:tau}
\end{equation}
\begin{equation}
\alpha_y = 2{\partial \tau_{xy}(p,q)}/{\partial q},
\label{Eq:MF-X-PF:alphay:tau}
\end{equation}
\begin{equation}
f_{xy}(\alpha_x,\alpha_y) = p\alpha_x(p,q)/2+q\alpha_y(p,q)/2 -\tau_{xy}(p,q).
\label{Eq:MF-X-PF:f:alpha}
\end{equation}
\label{Eq:MF-X-PF:pq:Legendre}
\end{subequations}
Therefore, after obtaining $\tau_{xy}(p,q)$, we can numerically determine $\alpha_x$ using Eq.~(\ref{Eq:MF-X-PF:alphax:tau}), $\alpha_y$ using Eq.~(\ref{Eq:MF-X-PF:alphay:tau}), and $f_{xy}(\alpha_x,\alpha_y)$ using Eq.~(\ref{Eq:MF-X-PF:f:alpha}).
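In practice the partial derivatives in Eq.~(\ref{Eq:MF-X-PF:pq:Legendre}) are approximated by finite differences on a grid of $(p,q)$ values; a possible sketch (ours, in the same Python style as the fragment above) is:
\begin{verbatim}
import numpy as np

def double_legendre(tau, p_grid, q_grid):
    """tau[i, j] = tau_xy(p_i, q_j) on a regular grid (sketch)."""
    dp = p_grid[1] - p_grid[0]
    dq = q_grid[1] - q_grid[0]
    dtau_dp, dtau_dq = np.gradient(tau, dp, dq)   # central differences
    alpha_x = 2.0 * dtau_dp                       # alpha_x = 2 dtau/dp
    alpha_y = 2.0 * dtau_dq                       # alpha_y = 2 dtau/dq
    P, Q = np.meshgrid(p_grid, q_grid, indexing="ij")
    f_xy = P * alpha_x / 2.0 + Q * alpha_y / 2.0 - tau
    return alpha_x, alpha_y, f_xy
\end{verbatim}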
From the canonical perspective, one can obtain the $f_{xy}(\alpha_x,\alpha_y)$ function directly \cite{Chhabra-Jensen-1989-PRL,Chhabra-Meneveau-Jensen-Sreenivasan-1989-PRA,Meneveau-Sreenivasan-Kailasnath-Fan-1990-PRA}. Defining the canonical measures as follows
\begin{equation}
\mu_{xy}(p,q,s,t) = \frac{[m_x(s,t)]^{p/2}[m_y(s,t)]^{q/2}}{\sum_t [m_x(s,t)]^{p/2}[m_y(s,t)]^{q/2}},
\label{Eq:MF-X-PF:pq:muxy}
\end{equation}
the two singularity strengths $\alpha_x(p,q)$ and $\alpha_y(p,q)$ and the joint multifractal spectrum $f_{xy}(p,q)$ can be computed by linear regressions in log-log scales using the following equations:
\begin{subequations}
\begin{equation}
{\alpha_x(p,q)} = \lim_{s\to0} \frac{\sum_t \mu_{xy}(p,q,s,t) \ln{m_x(s,t)}}{\ln{s}},
\label{Eq:MF-X-PF:pq:alphax:mu:mx}
\end{equation}
\begin{equation}
{\alpha_y(p,q)} = \lim_{s\to0}\frac{\sum_t \mu_{xy}(p,q,s,t) \ln{m_y(s,t)}}{\ln{s}},
\label{Eq:MF-X-PF:pq:alphay:mu:my}
\end{equation}
\begin{equation}
{f_{xy}(p,q)} = \lim_{s\to0}\frac{\sum_t \mu_{xy}(p,q,s,t) \ln\left[\mu_{xy}(p,q,s,t)\right]}{\ln{s}}.
\label{Eq:MF-X-PF:pq:fpq:mu}
\end{equation}
\label{Eq:MF-X-PF:pq:Direct}
\end{subequations}
The joint mass exponent function can be obtained by using Eq.~(\ref{Eq:MF-X-PF:tau:alpha:f}).
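For completeness, the direct approach of Eqs.~(\ref{Eq:MF-X-PF:pq:Direct}) admits an equally short implementation (our sketch; the box-level measures are assumed to have been computed for every scale as above):
\begin{verbatim}
import numpy as np

def direct_spectrum(boxed, p, q):
    """boxed: dict mapping box size s -> (m_x(s,:), m_y(s,:)) arrays (sketch)."""
    log_s, ax, ay, ff = [], [], [], []
    for s, (bx, by) in boxed.items():
        w = bx ** (p / 2.0) * by ** (q / 2.0)
        mu = w / w.sum()                      # canonical measure mu_xy(p,q,s,t)
        log_s.append(np.log(s))
        ax.append(np.sum(mu * np.log(bx)))    # numerator of alpha_x
        ay.append(np.sum(mu * np.log(by)))    # numerator of alpha_y
        ff.append(np.sum(mu * np.log(mu)))    # numerator of f_xy
    alpha_x = np.polyfit(log_s, ax, 1)[0]
    alpha_y = np.polyfit(log_s, ay, 1)[0]
    f_xy = np.polyfit(log_s, ff, 1)[0]
    return alpha_x, alpha_y, f_xy
\end{verbatim}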
\subsection{MF-X-PF$(q)$}
The multifractal cross-correlation analysis based on statistical moments (MFSMXA) proposed in Ref.~\cite{Wang-Shang-Ge-2012-Fractals} is actually a special case of MF-X-PF$(p,q)$ when $p=q$. We call it MF-X-PF$(q)$ here for consistency. In this case, we have
\begin{equation}
\chi_{xy}(q,s)= \sum_t\left[m_x(s,t)m_y(s,t)\right]^{q/2} \sim s^{\tau_{xy}(q)},
\label{Eq:MF-X-PF:q:chi:s}
\end{equation}
in which $\tau_{xy}(q,q)\triangleq \tau_{xy}(q)$. Applying the method of steepest descent, Eq.~(\ref{Eq:MF-X-PF:tau:alpha:f}) becomes
\begin{equation}
\tau_{xy}(q) = q(\alpha_x+\alpha_y)/2 - f_{xy}(\alpha_x,\alpha_y),
\label{Eq:MF-X-PF:q:tau:ax:ay:f}
\end{equation}
where
\begin{equation}
{\partial f_{xy}(\alpha_x,\alpha_y)}/{\partial \alpha_x} = {\partial f_{xy}(\alpha_x,\alpha_y)}/{\partial \alpha_y} = q/2.
\label{Eq:MF-X-PF:q:df:dalphax:q}
\end{equation}
Taking derivative of Eq.~(\ref{Eq:MF-X-PF:q:tau:ax:ay:f}) over $q$ and using Eq.~(\ref{Eq:MF-X-PF:q:df:dalphax:q}), we have
\begin{equation}
\frac{d\tau_{xy}(q)}{dq} = \frac{\alpha_x}{2} +\frac{q}{2}\frac{d\alpha_{x}}{dq} +\frac{\alpha_y}{2} + \frac{q}{2}\frac{d\alpha_{y}}{dq}
-\frac{\partial f_{xy}}{\partial \alpha_x}\frac{d\alpha_{x}}{dq}
-\frac{\partial f_{xy}}{\partial \alpha_y}\frac{d\alpha_{y}}{dq}
= \frac{\alpha_x+\alpha_y}{2}.
\label{Eq:MF-X-PF:q:dtau:dq:alphax:alphay}
\end{equation}
Defining
\begin{equation}
\alpha_{xy} \triangleq [\alpha_{x}(q) + \alpha_{y}(q)]/2,
\label{Eq:MF-X-PF:q:alpha:alphax:alphay}
\end{equation}
Eq.~(\ref{Eq:MF-X-PF:q:dtau:dq:alphax:alphay}) and Eq.~(\ref{Eq:MF-X-PF:q:tau:ax:ay:f}) can be rewritten as follows
\begin{subequations}
\begin{equation}
\alpha_{xy} = d\tau_{xy}(q)/dq,
\label{Eq:MF-X-PF:q:dtau:dq:alpha}
\end{equation}
\begin{equation}
f_{xy}(\alpha_{xy}(q)) = q \alpha_{xy}(q) -\tau_{xy}(q),
\label{Eq:MF-X-PF:q:f:alpha}
\end{equation}
\label{Eq:MF-X-PF:q:Legendre}
\end{subequations}
where $f_{xy}(\alpha_{xy})\triangleq f_{xy}(\alpha_x,\alpha_y)$. We notice that Eq.~(\ref{Eq:MF-X-PF:q:Legendre}) has the same form as the Legendre transform \cite{Halsey-Jensen-Kadanoff-Procaccia-Shraiman-1986-PRA}.
Because $\alpha_x=d\tau_x(q)/dq$ and $\alpha_y=d\tau_y(q)/dq$ \cite{Halsey-Jensen-Kadanoff-Procaccia-Shraiman-1986-PRA}, it is easy to verify that the following relationship
\begin{equation}
\tau_{xy}(q)= [\tau_x(q) + \tau_y(q)]/2 + C
\label{Eq:MF-X-PF:q:tauxy:taux:tauy:c}
\end{equation}
satisfies Eq.~(\ref{Eq:MF-X-PF:q:dtau:dq:alpha}), where $C$ is a constant. According to Eq.~(\ref{Eq:MF-X-PF:q:chi:s}), we have $\chi_{xy}(0,s) \sim s^{\tau_{xy}(0)} \sim s^{-D_0}$, where $D_0$ is the fractal dimension of the geometric support. It follows that
\begin{equation}
\tau_{xy}(0)= -D_0 = -1.
\label{Eq:MF-X-PF:q:tauxy:q=0}
\end{equation}
Combining Eqs.~(\ref{Eq:MF-X-PF:q:tauxy:taux:tauy:c}) and (\ref{Eq:MF-X-PF:q:tauxy:q=0}) and using $\tau_x(0)=\tau_y(0)=-1$, we have $C=0$ and thus
\begin{equation}
\tau_{xy}(q)= [\tau_x(q) + \tau_y(q)]/2.
\label{Eq:MF-X-PF:q:tauxy:taux:tauy}
\end{equation}
Inserting Eq.~(\ref{Eq:MF-X-PF:q:alpha:alphax:alphay}) and Eq.~(\ref{Eq:MF-X-PF:q:tauxy:taux:tauy}) into Eq.~(\ref{Eq:MF-X-PF:q:f:alpha}), we obtain that
\begin{equation}
f_{xy}(q)= [f_x(q) + f_y(q)]/2.
\label{Eq:MF-X-PF:q:fxy:fx:fy}
\end{equation}
We note that Eq.~(\ref{Eq:MF-X-PF:q:tauxy:taux:tauy}) and Eq.~(\ref{Eq:MF-X-PF:q:fxy:fx:fy}) still hold when $D_0\neq1$. In this case, we use $\tau_{xy}(0)=\tau_x(0)=\tau_y(0)= -D_0$ to conduct the derivation. These relations were observed numerically using the MF-X-DFA method \cite{Zhou-2008-PRE}, the MF-X-DMA method \cite{Jiang-Zhou-2011-PRE} and the MF-X-PF$(q)$ method \cite{Wang-Shang-Ge-2012-Fractals}.
As shown in Eq.~(\ref{Eq:MF-X-PF:q:chi:s}), the quantity entering the partition function is the measure $[m_x(s,t)m_y(s,t)]^{1/2}$. From the canonical perspective, we can obtain the $f_{xy}(\alpha_{xy})$ function directly \cite{Chhabra-Jensen-1989-PRL,Chhabra-Meneveau-Jensen-Sreenivasan-1989-PRA,Meneveau-Sreenivasan-Kailasnath-Fan-1990-PRA}. We define the canonical measures
\begin{equation}
\mu_{xy}(q,s,t) = \frac{[m_x(s,t)m_y(s,t)]^{q/2}}{\sum_t [m_x(s,t)m_y(s,t)]^{q/2}}.
\label{Eq:MF-X-PF:q:mu:p:q}
\end{equation}
The singularity strength $\alpha_{xy}(q)$ and the joint multifractal spectrum $f_{xy}(\alpha_{xy}(q))$ can be computed by linear regressions against $\ln{s}$ using the following equations:
\begin{subequations}
\begin{equation}
\alpha_{xy}(q) = \lim_{s\to0} \frac{\sum_t \mu_{xy}(q,s,t) \ln{[m_x(s,t)m_y(s,t)]^{1/2}}}{\ln{s}}
= \frac{\alpha_{x}(q)+\alpha_{y}(q)}{2},
\label{Eq:MF-X-PF:q:alpha:mu:mx}
\end{equation}
where Eq.~(\ref{Eq:MF-X-PF:pq:alphax:mu:mx}) and Eq.~(\ref{Eq:MF-X-PF:pq:alphay:mu:my}) are used in the second equality, and
\begin{equation}
f_{xy}(\alpha_{xy}(q)) = \lim_{s\to0}\frac{\sum_t \mu_{xy}(q,s,t) \ln\left[\mu_{xy}(q,s,t)\right]}{\ln{s}}.
\label{Eq:MF-X-PF:q:fpq:mu}
\end{equation}
\end{subequations}
The joint mass exponent function can be obtained by using Eq.~(\ref{Eq:MF-X-PF:q:f:alpha}).
\section{Joint multifractal analysis of binomial measures}
\subsection{Numerical analysis applying MF-X-PF$(p,q)$}
We numerically perform joint multifractal analysis of two binomial measures \cite{Meneveau-Sreenivasan-1987-PRL}. We use $p_x=0.3$ and $p_y=0.4$ and generate two binomial measures of length $2^{20}$, as sketched below. Figure \ref{Fig:MF-X-PF:pq:pmodel}(a) shows on log-log scales the dependence of $\chi_{xy}(p,q,s)$ on box size $s$ for different $q$ with fixed $p=2$. The curves for different $q$ exhibit excellent power-law relationships. The power-law exponents obtained by linear regressions of $\ln\chi_{xy}(p,q,s)$ against $\ln s$ are estimates of the mass exponents $\tau_{xy}(p,q)$, whose contour plot is shown in Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(e). We find that $\tau_{xy}(p,q)$ increases with $p$ and $q$. Adopting the double Legendre transform in Eq.~(\ref{Eq:MF-X-PF:pq:Legendre}), we obtain numerically the singularity functions $\alpha_x(p,q)$ and $\alpha_y(p,q)$ and the multifractal spectrum $f_{xy}(p,q)$, whose contour plots are illustrated in Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(f-h), respectively. We find that $\alpha_x(p,q)$ and $\alpha_y(p,q)$ are decreasing functions of $p$ and $q$, while $f_{xy}(p,q)$ has a saddle shape. An intriguing feature is that the contour lines of $\alpha_x(p,q)$, $\alpha_y(p,q)$ and $f_{xy}(p,q)$ are parallel to each other. Figure \ref{Fig:MF-X-PF:pq:pmodel}(i) plots the singularity spectrum $f_{xy}(\alpha_x, \alpha_y)$, which is not a surface but a curve.
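The deterministic binomial measures themselves can be generated by the standard multiplicative cascade; the following short Python sketch (our own) produces the two measures used here, sharing the same dyadic splitting pattern:
\begin{verbatim}
import numpy as np

def binomial_measure(p, levels):
    """Deterministic binomial measure with 2**levels cells: at every split the
    left half receives a fraction p of the mass and the right half 1 - p."""
    m = np.array([1.0])
    for _ in range(levels):
        m = np.column_stack((p * m, (1.0 - p) * m)).ravel()
    return m

m_x = binomial_measure(0.3, 20)   # p_x = 0.3, length 2**20
m_y = binomial_measure(0.4, 20)   # p_y = 0.4, same dyadic structure
\end{verbatim}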
We also calculate the multifractal functions using the direct determination approach presented in Eq.~(\ref{Eq:MF-X-PF:pq:Direct}) for comparison. In Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(b) to Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(d), we illustrate respectively the linear dependence of $\sum_t\mu_{xy}(2,q,s,t)\ln[m_{x}(s,t)]$, $\sum_t\mu_{xy}(2,q,s,t)\ln[m_{y}(s,t)]$ and $\sum_t\mu_{xy}(2,q,s,t)\ln[\mu_{xy}(2,q,s,t)]$ against $\ln{s}$ for different $q$ with fixed $p=2$. The singularity strength functions $\alpha_x(p,q)$ and $\alpha_y(p,q)$ and the multifractal spectrum $f_{xy}(p,q)$ are computed from the slopes of the lines in these three plots. The corresponding contour plots are presented in Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(j) to Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(l), which are the same as those in Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(f) to Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(h). The numerical results presented in Fig.~\ref{Fig:MF-X-PF:pq:pmodel} can be derived analytically.
\begin{figure}[tb]
\centering
\includegraphics[width=0.96\linewidth]{Fig1.png}
\caption{Joint multifractal analysis of two binomial measures with $p_x=0.3$ and $p_y=0.4$ based on the bi-order MF-X-PF$(p,q)$ method. (a) Power-law dependence of $\chi_{xy}(p,q,s)$ on box size $s$ for different $q$ with fixed $p=2$. (b) Linear dependence of $\sum_t\mu_{xy}(2,q,s,t)\ln[m_{x}(s,t)]$ against $\ln{s}$ for different $q$ with fixed $p=2$. (c) Linear dependence of $\sum_t\mu_{xy}(2,q,s,t)\ln[m_{y}(s,t)]$ against $\ln{s}$ for different $q$ with fixed $p=2$. (d) Linear dependence of $\sum_t\mu_{xy}(2,q,s,t)\ln[\mu_{xy}(2,q,s,t)]$ against $\ln{s}$ for different $q$ with fixed $p=2$. (e-i) Mass exponent function $\tau_{xy}(p,q)$, singularity functions $\alpha_x(p,q)$ and $\alpha_y(p,q)$, and multifractal spectra $f_{xy}(p,q)$ and $f_{xy}(\alpha_x, \alpha_y)$ obtained from (a). (j) Singularity function $\alpha_x(p,q)$ obtained from (b). (k) Singularity function $\alpha_y(p,q)$ obtained from (c). (l) Multifractal spectrum $f_{xy}(p,q)$ obtained from (d). }
\label{Fig:MF-X-PF:pq:pmodel}
\end{figure}
\subsection{Analytical results for MF-X-PF$(p,q)$}
Let us start with two multifractal binomial measures of length $2^L$. Consider two integrated measures $m_x(s,t)$ and $m_y(s,t)$ in boxes of size $s=2^{l}$. There are $n+1$ types of boxes with distinct integrated measures, where
\begin{equation}
n=L-l=L-\frac{\ln s}{\ln 2}.
\label{Eq:MF-X-PF:pq:n}
\end{equation}
For the $t$-th box, we have
\begin{subequations}
\begin{equation}
m_x(s,t) = p_x^k(1-p_x)^{n-k},
\label{Eq:MF-X-PF:analytic:mx}
\end{equation}
\begin{equation}
m_y(s,t) = p_y^k(1-p_y)^{n-k},
\label{Eq:MF-X-PF:analytic:my}
\end{equation}
\end{subequations}
where $k\in\{0,1,\ldots,n\}$.
It follows that
\begin{equation}
k = \frac{\ln m_y(s,t)-n\ln(1-p_y)}{\ln p_y-\ln (1-p_y)}.
\label{Eq:MF-X-PF:analytic:k}
\end{equation}
Inserting Eq.~(\ref{Eq:MF-X-PF:analytic:k}) into Eq.~(\ref{Eq:MF-X-PF:analytic:mx}), we get
\begin{equation}
m_x(s,t) =C(s) \left[m_y(s,t)\right]^{\beta} = e^{-\gamma L}s^{{\gamma}/{\ln 2}} \left[m_y(s,t)\right]^{\beta},
\label{Eq:MF-X-PF:analytic:mxmy}
\end{equation}
where
\begin{equation}
\beta=\frac{\ln p_x-\ln (1-p_x)}{\ln p_y-\ln (1-p_y)}
\label{Eq:MF-X-PF:analytic:beta}
\end{equation}
and
\begin{equation}
\gamma=\beta\ln (1-p_y) -\ln (1-p_x).
\label{Eq:MF-X-PF:analytic:gamma}
\end{equation}
Note that $\beta$ and $\gamma$ depend only on $p_x$ and $p_y$. When $p_x+p_y=1$, we have $\beta=-1$. When $p_x=p_y$, we have $\beta=1$ and $C(s)=1$. When both $p_x$ and $p_y$ are greater than 0.5 or both less than 0.5, that is, $(p_x-0.5)(p_y-0.5)>0$, we have $\beta>0$; otherwise, when $(p_x-0.5)(p_y-0.5)<0$, we have $\beta<0$.
Combining Eq.~(\ref{Eq:MF-X-PF:pq:chi:s}) and Eq.~(\ref{Eq:MF-X-PF:analytic:mxmy}), we obtain
\begin{equation}
\chi_{xy}(p,q,s) = C(s)^{p/2}\sum_t\left[m_y(s,t)\right]^{Q}
\label{Eq:MF-X-PF:analytic:chi:s}
\end{equation}
where
\begin{equation}
Q = \beta p/2+q/2
\label{Eq:MF-X-PF:analytic:chi:p:q:c}.
\end{equation}
Because $m_y$ is a multifractal measure, we have
\begin{equation}
\sum_t\left[m_y(s,t)\right]^{Q}\sim s^{\tau_y(Q)}
\end{equation}
where $\tau_y(Q)$ has an analytical expression \cite{Halsey-Jensen-Kadanoff-Procaccia-Shraiman-1986-PRA}:
\begin{equation}
\tau_y(Q) =- \ln[p_y^{Q}+(1-p_y)^{Q}]/\ln{2}.
\label{Eq:MF-X-PF:analytic:tauy}
\end{equation}
The joint partition function can be rewritten as follows
\begin{equation}
\chi_{xy}(p,q,s) \sim s^{\frac{p\gamma}{2\ln 2}}e^{\frac{-p\gamma L}{2}}s^{\tau_y(Q)}.
\label{Eq:MF-X-PF:analytic:chi:p:q:s}
\end{equation}
Comparing Eq.~(\ref{Eq:MF-X-PF:pq:tauxy}) and Eq.~(\ref{Eq:MF-X-PF:analytic:chi:p:q:s}), we obtain the joint mass exponent function:
\begin{equation}
\tau_{xy}(p,q) =\frac{p\gamma}{2\ln 2}+\tau_y(Q) =\frac{p\gamma}{2\ln 2}-\frac{\ln[p_y^{Q}+(1-p_y)^{Q}]}{\ln{2}}
\label{Eq:MF-X-PF:analytic:tauxy:p:q}
\end{equation}
It follows that
\begin{equation}
\alpha_x =\frac{2\partial \tau_{xy}(p,q)}{\partial p}
=\frac{\gamma}{\ln 2}-\frac{\beta}{\ln 2}\frac{p_y^{Q}\ln p_y+(1-p_y)^{Q}\ln (1-p_y) }{p_y^{Q}+(1-p_y)^{Q}}
\label{Eq:MF-X-PF:analytic:alphax}
\end{equation}
and
\begin{equation}
\begin{aligned}
\alpha_y = \frac{2\partial \tau_{xy}(p,q)}{\partial q}
=-\frac{1}{\ln 2}\frac{p_y^{Q}\ln p_y+(1-p_y)^{Q}\ln (1-p_y) }{p_y^{Q}+(1-p_y)^{Q}}.
\end{aligned}
\label{Eq:MF-X-PF:analytic:alphay}
\end{equation}
We obtain immediately the relationship between $\alpha_x$ and $\alpha_y$
\begin{equation}
\alpha_x =\frac{\gamma}{\ln 2}+\beta\alpha_y.
\label{Eq:MF-X-PF:analytic:alphaxalphay}
\end{equation}
This relationship explains the observation in Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(i) that $f_{xy}(\alpha_x,\alpha_y)$ is a curve along this line rather than a surface; the line segment defined by Eq.~(\ref{Eq:MF-X-PF:analytic:alphaxalphay}) is the projection of $f_{xy}(\alpha_x,\alpha_y)$ onto the $(\alpha_x,\alpha_y)$ plane.
We now derive the main geometric properties of $\alpha_x(Q)$ and $\alpha_y(Q)$. We find that $\alpha_y(Q)$ is a monotonically decreasing function of $Q$, because
\begin{equation}
\frac{d \alpha_y}{d Q} = -\frac{1}{\ln 2}\frac{p_y^{Q}(1-p_y)^{Q}\left[\ln p_y-\ln (1-p_y)\right]^2} {\left[p_y^{Q}+(1-p_y)^{Q}\right]^2} < 0.
\label{Eq:MF-X-PF:properties:analytic:d_alphay}
\end{equation}
We can prove that the limits of $\alpha_y$ exist when $Q\to\pm\infty$. We rewrite Eq.~(\ref{Eq:MF-X-PF:analytic:alphay}) as follows
\begin{equation}
\alpha_y =-\frac{1}{\ln 2}\frac{\ln p_y+\left[(1-p_y)/{p_y}\right]^{Q}\ln (1-p_y) }{1+\left[(1-p_y)/{p_y}\right]^{Q}}.
\label{Eq:MF-X-PF:properties:analytic:alphay}
\end{equation}
We can obtain that
\begin{equation}
\left\{
\begin{aligned}
\alpha_{y,\min} &= \lim_{Q\rightarrow \infty} \alpha_y = \min\left\{-\frac{\ln p_y}{\ln 2},-\frac{\ln(1-p_y)}{\ln 2}\right\} \\
\alpha_{y,\max} &= \lim_{Q\rightarrow -\infty} \alpha_y = \max\left\{-\frac{\ln p_y}{\ln 2},-\frac{\ln(1-p_y)}{\ln 2}\right\}
\end{aligned}
\right.
\label{Eq:MF-X-PF:properties:analytic:max:min:alphay}
\end{equation}
Therefore, the solution of Eq.~(\ref{Eq:MF-X-PF:analytic:alphay}) exists and is unique if and only if $\alpha_y\in[\alpha_{y,\min},\alpha_{y,\max}]$. The explicit form of the solution is
\begin{equation}
\begin{aligned}
Q = {\ln \left[-\frac{\log_2[(1-p_y)/p_y]}{\alpha_y+\log_2 p_y}-1\right]}/{\ln\left[\frac{p_y}{(1-p_y)}\right]}.
\end{aligned}
\label{Eq:MF-X-PF:analytic:cpqalphay}
\end{equation}
Further, the width of the singularity spectrum of $\alpha_y$ is
\begin{equation}
\Delta \alpha_y =\frac{\left|\ln (1-p_y) - \ln p_y\right|}{\ln 2}.
\label{Eq:MF-X-PF:properties:analytic:deltaalphay}
\end{equation}
These results explain why the contour lines in Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(g) are parallel. When $p_y=0.5$, $\Delta \alpha_y=0$. In this case, the measure is neither multifractal nor monofractal, since it is uniformly distributed on the support.
According to Eq.~(\ref{Eq:MF-X-PF:analytic:alphaxalphay}), we have
\begin{equation}
\frac{d \alpha_x}{d Q} = -\frac{\beta}{\ln 2}\frac{p_y^{Q}(1-p_y)^{Q}\left[\ln p_y-\ln (1-p_y)\right]^2} {\left[p_y^{Q}+(1-p_y)^{Q}\right]^2},
\label{Eq:MF-X-PF:properties:analytic:d_alphax}
\end{equation}
which suggests that $\alpha_x$ is a strictly monotonic function of $Q$. Moreover, it is easy to show that
\begin{equation}
\left\{
\begin{aligned}
\alpha_{x,\min} &= \lim_{Q\rightarrow \infty} \alpha_x = \min\left\{-\frac{\ln p_x}{\ln 2},-\frac{\ln(1-p_x)}{\ln 2}\right\} \\
\alpha_{x,\max} &= \lim_{Q\rightarrow -\infty} \alpha_x = \max\left\{-\frac{\ln p_x}{\ln 2},-\frac{\ln(1-p_x)}{\ln 2}\right\}
\end{aligned}
\right.
\label{Eq:MF-X-PF:properties:analytic:max:min:alphax}
\end{equation}
Therefore, the solution of Eq.~(\ref{Eq:MF-X-PF:analytic:alphax}) exists and is unique if and only if $\alpha_x\in[\alpha_{x,\min},\alpha_{x,\max}]$. Due to the symmetry between the two measures $m_x$ and $m_y$, the results for $\alpha_x$ follow immediately once the geometric properties of $\alpha_y$ are known.
We now turn to investigate the geometric properties of the multifractal spectrum $f_{xy}(\alpha_x,\alpha_y)$, which has the following form:
\begin{equation}
\begin{aligned}
f_{xy}(\alpha_x,\alpha_y) &= p\alpha_x/2+q\alpha_y/2 -\tau_{xy}(p,q)\\
& = \frac{p}{2}\left(\frac{\gamma}{\ln 2}+\beta\alpha_y\right)+\frac{q}{2}\alpha_y -\frac{p\gamma}{2\ln 2}+\frac{\ln[p_y^{Q}+(1-p_y)^{Q}]}{\ln{2}}\\
&=-\frac{Q}{\ln2}\frac{\ln p_y+\left(\frac{1-p_y}{p_y}\right)^{Q}\ln (1-p_y) }{1+\left(\frac{1-p_y}{p_y}\right)^{Q}}
+\frac{\ln p_y^{Q}+\ln\left[1+\left(\frac{1-p_y}{p_y}\right)^{Q}\right]}{\ln{2}}\\
&=\frac{1}{\ln 2}\frac{Q\left(\frac{1-p_y}{p_y}\right)^{Q} \ln\left(\frac{p_y}{1-p_y}\right) +\left[1+\left(\frac{1-p_y}{p_y}\right)^{Q}\right]\ln\left[1+\left(\frac{1-p_y}{p_y}\right)^{Q}\right]} {1+\left(\frac{1-p_y}{p_y}\right)^{Q}}
\end{aligned}
\label{Eq:MF-X-PF:analytic:fpq}
\end{equation}
It is easy to find that
\begin{equation}
f_{xy}(Q=0) = 1 ~~~~~{\rm{and}}~~~~~f_{xy}(Q)=f_{xy}(-Q),
\end{equation}
where $f_{xy}(Q)\triangleq f_{xy}(\alpha_x,\alpha_y; Q)$. It indicates that $f_{xy}(\alpha_x,\alpha_y)$ is symmetric with respect to the line $Q=0$, as numerically shown in Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(h). Furthermore, we obtain
\begin{equation}
\lim_{Q\to\pm\infty} f_{xy}(p,q) = 0.
\label{Eq:MF-X-PF:properties:analytic:fpq:infty}
\end{equation}
Taking derivative of $f_{xy}(\alpha_x,\alpha_y)$ with respect to $Q$, we have
\begin{equation}
\frac{d f_{xy}(Q)}{d Q} = -\frac{Q}{\ln2} \left[\ln\left(\frac{p_y}{(1-p_y)}\right)\right]^2
{\left[\left(\frac{p_y}{1-p_y}\right)^{{Q}/2}+\left(\frac{1-p_y}{p_y}\right)^{Q/2}\right]^{-2}}.
\label{Eq:MF-X-PF:properties:analytic:df}
\end{equation}
When $Q<0$, $df_{xy}(Q)/dQ>0$ so that $f_{xy}(Q)$ is a monotonically increasing function of $Q$. When $Q>0$, $df_{xy}(Q)/dQ<0$ so that $f_{xy}(Q)$ is a monotonically decreasing function of $Q$. Therefore, the maximum of $f_{xy}(Q)$ is 1 and its minimum is 0. These properties explain the parallel feature of the contour lines in Fig.~\ref{Fig:MF-X-PF:pq:pmodel}(h).
We note that the numerical results are in excellent agreement with the analytical results for $\tau_{xy}(p,q)$ in Eq.~(\ref{Eq:MF-X-PF:analytic:tauxy:p:q}), $\alpha_x(p,q)$ in Eq.~(\ref{Eq:MF-X-PF:analytic:alphax}), $\alpha_y(p,q)$ in Eq.~(\ref{Eq:MF-X-PF:analytic:alphay}), and $f_{xy}(p,q)$ in Eq.~(\ref{Eq:MF-X-PF:analytic:fpq}). Combining Eq.~(\ref{Eq:MF-X-PF:analytic:cpqalphay}) and Eq.~(\ref{Eq:MF-X-PF:analytic:fpq}), we find that $f_{xy}(\alpha_x,\alpha_y)$ is a univariate function of $\alpha_y$, or of $\alpha_x$ by using Eq.~(\ref{Eq:MF-X-PF:analytic:alphaxalphay}).
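This comparison is straightforward to reproduce: the analytical $\tau_{xy}(p,q)$ of Eq.~(\ref{Eq:MF-X-PF:analytic:tauxy:p:q}) can be evaluated directly and subtracted from the numerically estimated exponents, e.g. (our sketch):
\begin{verbatim}
import numpy as np

def tau_xy_analytic(p, q, px=0.3, py=0.4):
    """Analytical joint mass exponent for two binomial measures (sketch)."""
    beta = (np.log(px) - np.log(1 - px)) / (np.log(py) - np.log(1 - py))
    gamma = beta * np.log(1 - py) - np.log(1 - px)
    Q = beta * p / 2.0 + q / 2.0
    return p * gamma / (2.0 * np.log(2.0)) \
        - np.log(py ** Q + (1 - py) ** Q) / np.log(2.0)
\end{verbatim}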
\subsection{Numerical analysis applying MF-X-PF$(q)$}
We also apply the MF-X-PF$(q)$ method to the same mathematical example. The results are shown in Fig.~\ref{Fig:MF-X-PF:p=q:p-model}. We find that the three theoretical relationships in Eq.~(\ref{Eq:MF-X-PF:q:tauxy:taux:tauy}), Eq.~(\ref{Eq:MF-X-PF:q:alpha:alphax:alphay}), and Eq.~(\ref{Eq:MF-X-PF:q:fxy:fx:fy}) are nicely verified. In addition, we observe again that the results from the classic partition function approach and the direct determination approach agree with each other. We note that this is also the case for other mathematical and empirical examples investigated in this work. Thus we will not show the results obtained from the direct determination approach in the rest of this paper.
\begin{figure}[htb]
\centering%
\includegraphics[width=0.96\linewidth]{Fig2.png}
\caption{Joint multifractal analysis of two binomial measures with $p_x=0.3$ and $p_y=0.4$ based on the MF-X-PF$(q)$ method. (a) Power-law dependence of $\chi_{xy}(q,s)$ on box size $s$ for different $q$. (b) Linear dependence of $\sum_t \mu_{xy}(q,s,t) \ln{[m_{x}(s,t)m_y(s,t)]^{1/2}}$ against $\ln{s}$. (c) Linear dependence of $\sum_t\mu_{xy}(q,s,t)\ln[\mu_{xy}(q,s,t)]$ against $\ln{s}$. (d) The mass exponent function $\tau_{xy}(q)$. (e) The singularity strength function $\alpha(q)$. (f) The multifractal singularity spectrum $f_{xy}(\alpha)$. }
\label{Fig:MF-X-PF:p=q:p-model}
\end{figure}
\section{Joint multifractal analysis of bivariate fractional Brownian motions}
We further investigate the MF-X-PF$(p,q)$ algorithm using monofractal measures. If $m_x$ and $m_y$ are monofractal, we have $\alpha_x=\alpha_y=1$ and $f_{xy}=1$ according to its definition in Eq.~(\ref{Eq:MF-X-PF:Ns:s:f:alpha:xy}) \cite{Halsey-Jensen-Kadanoff-Procaccia-Shraiman-1986-PRA,Jiang-Zhou-2008a-PA}. Together with Eq.~(\ref{Eq:MF-X-PF:f:alpha}), we have
\begin{equation}
\tau_{xy}(p,q)=p/2+q/2-1.
\label{Eq:MF-X-PF:monofractal:tau}
\end{equation}
These properties are indicators of monofractality.
The mathematical model used here is bivariate fractional Brownian motions (BFBMs). The two components $x(t)$ and $y(t)$ of the BFBM are two univariate fractional Brownian motions with Hurst indices $H_{xx}$ and $H_{yy}$, respectively. The basic properties of multivariate fractional Brownian motions have been comprehensively studied \cite{Lavancier-Philippe-Surgailis-2009-SPL,Coeurjolly-Amblard-Achard-2010-EUSIPCO,Amblard-Coeurjolly-Lavancier-Philippe-2013-BSMF}. Extensive numerical experiments with other MF-DCCA algorithms have been conducted using bivariate fractional Brownian motions \cite{Jiang-Zhou-2011-PRE,Qian-Liu-Jiang-Podobnik-Zhou-Stanley-2015-PRE}.
The two Hurst indices $H_{xx}$ and $H_{yy}$ of the two univariate FBMs and their cross-correlation coefficient $\rho$ are input arguments of the simulation algorithm. By using the simulation procedure described in Refs.~\cite{Coeurjolly-Amblard-Achard-2010-EUSIPCO,Amblard-Coeurjolly-Lavancier-Philippe-2013-BSMF}, we have generated as an example a realization of a BFBM with $H_{xx}=0.1$, $H_{yy}=0.5$ and $\rho=0.5$. The length of the BFBM is $2^{16}$. The joint multifractal analysis of the BFBM using the MF-X-PF$(p,q)$ algorithm is presented in Fig.~\ref{Fig:MF-X-PF:BFBM}.
\begin{figure}[htb]
\includegraphics[width=0.96\linewidth]{Fig3.png}
\caption{Joint multifractal analysis of bivariate fractional Brownian motions with $H_{xx}=0.1$, $H_{yy}=0.5$ and $\rho=0.5$. (a) Power-law dependence of $\chi_{xy}(p,q,s)$ on box size $s$ for different $q$ with fixed $p=2$. (b) Mass exponent function $\tau_{xy}(p,q)$ function obtained from (a). (c) Errors $\Delta \tau_{xy}(p,q)$ between the estimated exponent $\tau_{xy}(p,q)$ and the theoretical function $p/2+q/2-1$. (d,e,f) Singularity functions $\alpha_x(p,q)$ and $\alpha_y(p,q)$, and multifractal spectra $f_{xy}(\alpha_x, \alpha_y)$ obtained from (b). }
\label{Fig:MF-X-PF:BFBM}
\end{figure}
The corresponding power-law dependence of the joint partition function $\chi_{xy}(p,q,s)$ with respect to the box size $s$ for different $q$'s and fixed $p=2$ is shown in Fig.~\ref{Fig:MF-X-PF:BFBM}(a). The scaling ranges span over two orders of magnitude. The slopes of the lines give the estimates of $\tau_{xy}(p,q)$, where $p$ and $q$ vary from $-10$ to $10$ with a spacing of 0.1. The resulting mass exponents $\tau_{xy}(p,q)$ are shown in the contour plot of Fig.~\ref{Fig:MF-X-PF:BFBM}(b). We observe that $\tau_{xy}(p,q)$ increases with $p$ and $q$, the contour curves are parallel lines, and the parallel lines are evenly spaced. These features suggest that $\tau_{xy}(p,q)$ is a linear function of $p$ and $q$, which is an indicator of monofractality.
In order to further assess the performance of the MF-X-PF algorithm, we calculate the errors between the estimated exponents $\tau_{xy}(p,q)$ and the theoretical exponents as $\Delta \tau_{xy}(p,q)=\tau_{xy}(p,q)-[p/2+q/2-1]$. Fig.~\ref{Fig:MF-X-PF:BFBM}(c) shows the dependence of $\Delta \tau_{xy}(p,q)$ on $p$ and $q$. All the $\Delta \tau_{xy}(p,q)$ values are less than 0.15, implying that the algorithm gives good estimates.
By applying the double Legendre transform in Eq.~(\ref{Eq:MF-X-PF:pq:Legendre}) numerically, we get the singularity strength functions $\alpha_x(p,q)$ and $\alpha_y(p,q)$ and the multifractal spectrum $f_{xy}(\alpha_x, \alpha_y)$, whose contour plots are shown in Fig.~\ref{Fig:MF-X-PF:BFBM}(d,e,f). The singularity strength functions $\alpha_x(p,q)$ and $\alpha_y(p,q)$ are close to 1, indicating that they are essentially independent of the orders $p$ and $q$. Although there is a residual trend in each of $\alpha_x(p,q)$ and $\alpha_y(p,q)$, the theoretical values $\alpha_x(p,q)=1$ and $\alpha_y(p,q)=1$ are approximately recovered. Hence, the MF-X-PF algorithm is able to correctly capture the monofractal nature of the BFBMs.
Fig.~\ref{Fig:MF-X-PF:BFBM}(f) plots the singularity spectrum $f_{xy}(\alpha_x, \alpha_y)$, which is a surface whose contour lines are closed curves. It is easy to see that the vast majority of the surface is nearly equal to the theoretical value $f_{xy}(\alpha_x, \alpha_y)=1$. We also observe that the errors $\Delta \tau_{xy}(p,q)$ are equal to the difference between $f_{xy}(p,q)$ and 1, as implied by the Legendre transform.
We point out that the results using the direct determination approach are exactly the same as shown in Fig.~\ref{Fig:MF-X-PF:BFBM}. We thus summarize that the theoretical analysis is well verified by the numerical results.
\section{Application to stock market indexes}
We now apply the MF-X-PF$(p,q)$ algorithm to investigate the long-range power-law cross correlations of the daily volatility time series of the Dow Jones Industrial Average (DJIA) and the National Association of Securities Dealers Automated Quotations (NASDAQ) index. The daily volatility is defined as the absolute value of the logarithmic difference of daily closing prices:
\begin{equation}
R(t)=|\ln P(t)-\ln P(t-1)|,
\label{Eq:MF-X-PF:return}
\end{equation}
where $P(t)$ is the daily closing price on day $t$, retrieved for both the DJIA and NASDAQ indices. The sample period is from 5 February 1971 to 25 January 2011, containing 10084 data points. The daily return time series of the two indexes are shown in Figure S1 (New J. Phys. online).
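For concreteness, the volatility series of Eq.~(\ref{Eq:MF-X-PF:return}) is obtained from the closing prices in one line (sketch; \texttt{closes} stands for an array of daily closing prices of either index):
\begin{verbatim}
import numpy as np

def daily_volatility(closes):
    """R(t) = |ln P(t) - ln P(t-1)| from an array of daily closing prices."""
    return np.abs(np.diff(np.log(closes)))
\end{verbatim}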
\begin{figure}[htb]
\centering
\includegraphics[width=0.96\linewidth]{Fig4.png}
\caption{Joint multifractal analysis of the cross correlations between the daily volatility time series of DJIA index and NASDAQ index using the MF-X-PF$(p,q)$ approach. (a) Power-law dependence of $\chi_{xy}(p,q,s)$ on box size $s$ for different $q$'s with fixed $p=2$. (b) Mass exponent function $\tau_{xy}(p,q)$. (c) Singularity strength function $\alpha_x(p,q)$. (d) Singularity strength function $\alpha_y(p,q)$. (e) Multifractal function $f_{xy}(p,q)$. (f) Multifractal singularity spectrum $f_{xy}(\alpha_x,\alpha_y)$. }
\label{Fig:MF-X-PF:Index}
\end{figure}
Fig.~\ref{Fig:MF-X-PF:Index}(a) shows on log-log scales the dependence of the joint partition function $\chi_{xy}(p,q,s)$ on the box size $s$ for different $q$'s and fixed $p=2$. We observe nice power-law scaling over about 1.5 orders of magnitude. The contour plot of the exponents $\tau_{xy}(p,q)$ is shown in Fig.~\ref{Fig:MF-X-PF:Index}(b), where $p$ and $q$ vary from $-10$ to $10$ with a spacing of 0.1. The contour curves are not straight lines and neighboring curves are not equidistant. Fig.~\ref{Fig:MF-X-PF:Index}(c) and Fig.~\ref{Fig:MF-X-PF:Index}(d) illustrate respectively the contour plots of the singularity strength functions $\alpha_x(p,q)$ and $\alpha_y(p,q)$, which are obtained numerically from $\tau_{xy}(p,q)$. We observe that the values of the singularity strength range from 0.6 to 1.2, indicating a wide dispersion. In addition, the singularity strength functions are not monotonic with respect to $p$ or $q$. Fig.~\ref{Fig:MF-X-PF:Index}(e) illustrates the multifractal function $f_{xy}(p,q)$ obtained from the Legendre transform, whose values range from 0 to 1. The maximum $f_{xy}(p,q)=1$ is reached at the point $(p,q)=(0,0)$. Within the investigated intervals of $p$ and $q$, the small $f_{xy}(p,q)$ values are concentrated in the region with large values of $p$ and $q$. In Fig.~\ref{Fig:MF-X-PF:Index}(f), we present the singularity spectrum $f_{xy}(\alpha_x,\alpha_y)$. These empirical findings suggest that the cross correlations between the daily volatilities of DJIA and NASDAQ possess multifractal nature, which is consistent with previous results using the MF-X-DFA, MF-X-DMA and MF-X-PF$(q)$ methods \cite{Zhou-2008-PRE,Jiang-Zhou-2011-PRE,Wang-Shang-Ge-2012-Fractals,Xiong-Shang-2015-CNSNS}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.96\linewidth]{Fig5.png}
\caption{Comparison of the joint multifractal singularity spectrum $f_{xy}(\alpha_x,\alpha_y)$ between the daily volatility time series of the DJIA index and the NASDAQ index in different time periods with and without market turmoil.}
\label{Fig:MF:X:PF:index:t}
\end{figure}
To reveal whether the joint multifractality between the daily volatilities of the two indices persists or changes over time, we perform the MF-X-PF$(p,q)$ analysis in moving windows of one decade with a step of one year. The results are presented in Figure S2 (New J. Phys. online), and six representative plots are shown in Fig.~\ref{Fig:MF:X:PF:index:t}. We find that the joint multifractal singularity spectrum $f_{xy}(\alpha_x,\alpha_y)$ changes over time. Moreover, the inclusion or exclusion of financial turmoil (highly volatile periods) has a significant impact on the shape of $f_{xy}(\alpha_x,\alpha_y)$. In the sample period under investigation, there were two infamous market crises, the Black Monday crash in 1987 and the crisis in 2008. During relatively calm periods, the $f_{xy}(\alpha_x,\alpha_y)$ contour looks roughly like an American football. However, when one of the crises is included, the contours are significantly stretched to the southwest. In other words, the singularity strengths $\alpha_x$ and $\alpha_y$ take much smaller values during turmoil periods. This is not surprising, because this feature is well documented for ordinary multifractals \cite{Halsey-Jensen-Kadanoff-Procaccia-Shraiman-1986-PRA}.
We repeat the same analysis for two stocks, Du Pont (NYSE:DD) and Exxon Mobil (NYSE:XOM), over the period from 05-Jan-1970 to 01-Sep-2015, containing 11522 data points. The daily return time series of the two stocks are shown in Figure S3 (New J. Phys. online) and the results are illustrated in Figure S4 (New J. Phys. online). As expected, very similar results are observed. Two more pairs of financial time series are investigated and the results are presented in Figure S5 to Figure S8 of the Supplementary data (New J. Phys. online). One pair concerns crude oil commodities, Arab Light to USA and WTI Cushing; the sample period is from 03-Jan-1991 to 18-Dec-2012, containing 5510 data points. The other pair concerns Special Drawing Rights (SDRs) per currency unit for the U.K. pound sterling (GBP) and the U.S. dollar (USD) over the period from 05-Jan-1994 to 01-Sep-2015, containing 5452 data points.
Compared with the results for binomial measures and fractional Brownian motions, the multifractal function and the multifractal singularity spectrum exhibit different shapes for the different data sets studied. For example, in Fig.~\ref{Fig:MF-X-PF:Index}(f) for the financial market data there is a pronounced asymmetry, and the spectrum exhibits a stretched shape, in sharp contrast to Fig.~\ref{Fig:MF-X-PF:BFBM}(f) for the artificial BFBM data. These features reflect the irregular nonlinear traits of financial indexes. Roughly, the spectrum contour is parallel to the diagonal $\alpha_x=\alpha_y$ (cf. Eq.~(\ref{Eq:MF-X-PF:analytic:alphaxalphay})), which is due to the fact that the DJIA and NASDAQ indexes co-move over time, so that the volatilities fulfill Eq.~(\ref{Eq:MF-X-PF:analytic:mxmy}) to a certain extent. A direct conjecture is that the correlation coefficient $\rho(\alpha_x,\alpha_y)$ is greater if the correlation coefficient $\rho(R_x(t),R_y(t))$ is greater. This is validated by Fig.~S9 in the Supplementary Data (New J. Phys. online).
\section{Conclusions}
We have studied the properties of joint multifractal analysis based on partition function with two moment orders, termed MF-X-PF$(p,q)$. The uni-order method MF-X-PF$(q)$ has then been derived. The main properties of these methods have been obtained analytically. For instance, for the MF-X-PF$(q)$ method, we have obtained the relationship between the joint mass exponent function and the individual mass exponent functions, $\tau_{xy}(q)=[\tau_x(q)+\tau_y(q)]/2$, which was numerically and empirically observed in the literature.
We applied the MF-X-PF$(p,q)$ method to multifractal binomial measures. The expressions of the mass exponent function, singularity strengths and multifractal spectrum of the cross correlations have been derived, and they agree excellently with the numerical results. We further validated the performance of the method by using bivariate fractional Brownian motions without multifractal cross correlations. When applied to the daily volatility time series of two stock market indexes, intriguing multifractality in the cross correlations is confirmed. The multifractal properties of these examples are found to be the same whether we use the conventional partition-function approach or the direct determination approach.
Multifractal cross-correlation analysis has been applied in many fields, especially in Econophysics. Although there are numerous methods, most of them consider only one moment order. It is natural that bi-order methods such as MF-X-PF$(p,q)$ can be developed for other uni-order methods. We expect that such bi-order methods will unveil new stylized facts in the analysis of financial time series, which can serve to calibrate agent-based models \cite{Li-Zhang-Zhang-Zhang-Xiong-2014-IS}. In addition, the joint multifractal nature extracted from two long-range cross-correlated time series has potential applications. One possibility is to construct a multi-scale cross-correlation measure, analogous to other DCCA coefficients \cite{Zebende-2011-PA,Podobnik-Jiang-Zhou-Stanley-2011-PRE,Zebende-daSilva-Filho-2013-PA,Kristoufek-2014a-PA,Kristoufek-2014b-PA}. Another possibility is to construct a measure quantifying market efficiency \cite{DiMatteo-Aste-Dacorogna-2005-JBF,Zunino-Tabak-Figliola-Perez-Garavaglia-Rosso-2008-PA,Zunino-Figliola-Tabak-Perez-Garavaglia-Rosso-2009-CSF,Wang-Liu-Gu-2009-IRFA}. A related possibility is to quantitatively characterize the degree of market unrest other than the volatility measure \cite{Oh-Eom-Havlin-Jung-Wang-Stanley-Kim-2012-EPJB}.
\section*{Acknowledgments}
We are grateful to the referees for their insightful suggestions. We acknowledge financial support from the National Natural Science Foundation of China (11375064 and 71131007), the Program for Changjiang Scholars and Innovative Research Team in University (IRT1028), and the Fundamental Research Funds for the Central Universities.
\providecommand{\newblock}{}
\section{Introduction}
Helium white dwarfs (He WDs) are astrophysical objects which are composed predominantly of helium nuclei and degenerate electrons. At typical WD densities, the nuclei are much closer together than typical atomic sizes but are still widely separated compared to typical nuclear sizes. It has long been known that as WDs cool the nuclei crystallize, locked into position by their mutual Coulomb interactions\cite{abr60,kir60,sal60}. Recently, it was pointed out that in He WDs the temperature at which the helium nuclei form a Bose-Einstein condensate (BEC) might be higher than the crystallization temperature and an intermediate superconducting phase may exist between the plasma and the crystal phases \cite{Gabadadze:2008mx,Gabadadze:2007si,Gabadadze:2009jb,Gabadadze:2009dz,PhysRevLett.66.2915,Ashcroft:kx}.
In this phase, it is the ions that are superconducting; the electrons form an ordinary Fermi liquid. The low temperature properties of this phase are dominated by the physics of an unusual ``phonon'' excitation \cite{Bedaque:2011hs}, which leads to a very small specific heat and enhanced neutrino emission \cite{Bedaque:2012mr}, with possible consequences for the cooling of He WDs \cite{Benvenuto:2011fj}. A similar phase could also exist in a deuterium layer in brown dwarfs \cite{Berezhiani:2010db} and be relevant for inertial confinement \cite{PhysRevLett.78.483,Silva:1997fk,Jeanloz29052007} as well as other kinds of experiments \cite{Badiei200970,Andersson20093067} where high densities are also achieved.
It is guaranteed that at large enough densities there will be a range of temperatures where the BEC can exist while the Coulomb crystal cannot. This can be understood by simple scaling arguments: a BEC should form when the thermal de Broglie wavelength $\sqrt{2\pi/MT}$ (here $M$ is the ion mass and $T$ is the temperature) becomes comparable to the interparticle spacing $l$, so that the condensation temperature should scale as $\ensuremath{T_\text{BEC}}\sim 1/Ml^{2}$. A Coulomb crystal should melt when the thermal energy is comparable to the nearest-neighbor interaction, so that $\ensuremath{T_\text{Coulomb}}\sim Z^{2}\alpha/l$, where $Z$ is the atomic number of the crystallized nuclei and $\alpha=e^{2}/4\pi\approx 1/137$, where $e$ is the size of the electron charge. Since the number density $n\sim l^{-3}$, $\ensuremath{T_\text{BEC}}\sim n^{2/3}/M$ while $\ensuremath{T_\text{Coulomb}}\sim n^{1/3}$. Thus, at very high density, the crystallization temperature is markedly lower than the condensation temperature, and for intermediate temperatures, the system should be a BEC.
The natural question is: are astrophysical densities in this interesting regime? To answer this quantitative question, one needs to know the numerical coefficients that specify these critical temperatures. Because the condensation temperature scales inversely with the ion mass, the density at which $\ensuremath{T_\text{BEC}}=\ensuremath{T_\text{Coulomb}}$ and beyond which \mbox{$\ensuremath{T_\text{BEC}}>\ensuremath{T_\text{Coulomb}}$} is smaller for lighter nuclei. Thus, if a nuclear condensate forms in WDs, it should be most easily established in He WDs and not carbon-oxygen WDs.
Detailed studies have determined the crystallization temperature to be $\ensuremath{T_\text{Coulomb}} \sim (Ze)^{2}/180 l$ \cite{1975ApJ...200..306L,PhysRevA.21.2087,1993ApJ...414..695C}, meaning $\ensuremath{T_\text{Coulomb}}\sim (a_{0}/l) 7000$K, where $a_{0}$ is the Bohr radius. There are various suggestions for the proportionality constant in \ensuremath{T_\text{BEC}}. Simply equating the de Broglie wavelength to the interparticle spacing suggests $\ensuremath{T_\text{BEC}} = 2\pi/Ml^{2}\approx 6.2/Ml^{2}$. A free Bose gas has $\ensuremath{T_\text{BEC}}=T_{c}^{(0)}\equiv 2\pi (4\pi\zeta(3/2)/3)^{-2/3} / Ml^{2}\approx 1.27/Ml^{2}$, where $\zeta$ is the Riemann zeta function. The temperature \ensuremath{T_\text{BEC}}\ is expected to go up when one considers repulsive interactions \cite{Huang:1999zz}. A slightly more detailed estimate (see \reference{Gabadadze:2009jb}) suggests $\ensuremath{T_\text{BEC}}=4\pi^{2}/3Ml^{2}\approx 13.2/Ml^{2}$, which is qualitatively supported by the numerical calculations in \reference{Rosen:2010es}. It is the object of this paper to make a reliable estimate of \ensuremath{T_\text{BEC}}.
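To see what these competing estimates imply numerically, the following back-of-the-envelope script (our own illustration, using the free-gas value $T_c^{(0)}\approx 1.27/Ml^{2}$ and the melting estimate quoted above, with $Z=2$ and the helium-4 nuclear mass) evaluates both temperatures for a few interparticle spacings:
\begin{verbatim}
import numpy as np

HBARC = 197.327        # MeV fm
M_HE = 3727.38         # helium-4 nucleus mass in MeV
ALPHA = 1.0 / 137.036
Z = 2
MEV_TO_K = 1.1605e10   # kelvin per MeV

def t_bec(l_fm):
    """Free Bose gas estimate 1.27/(M l^2), converted to kelvin."""
    return 1.27 * HBARC**2 / (M_HE * l_fm**2) * MEV_TO_K

def t_coulomb(l_fm):
    """Crystallization estimate, written as Z^2*alpha*(hbar c)/(180 l) so that
    it reproduces the (a_0/l)*7000 K figure quoted above."""
    return Z**2 * ALPHA * HBARC / (180.0 * l_fm) * MEV_TO_K

for l_angstrom in (0.01, 0.005, 0.002):      # interparticle spacing in angstrom
    l_fm = l_angstrom * 1.0e5                # 1 angstrom = 1e5 fm
    print(l_angstrom, t_bec(l_fm), t_coulomb(l_fm))
\end{verbatim}
With the free-gas coefficient the two estimates cross at spacings of a few times $10^{-3}$~\AA, illustrating the scaling argument above; the precise crossover density of course depends on the coefficient in \ensuremath{T_\text{BEC}}, which is what we set out to determine.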
A BEC composed of nuclei (and not whole atoms) with a background of degenerate electrons is a novel system with rich phenomenology. Because the condensed nuclei are charged, the substance is electrically superconducting. The electrons provide a neutralizing electric charge, and additionally the dynamical response of these electrons implies an unusual gapless quasiparticle\cite{Bedaque:2011hs}. These quasiparticles imbue the substance with a very small specific heat \cite{Bedaque:2011hs}. Moreover, these quasiparticles can annihilate into neutrinos, and the power emitted per unit volume scales like $T^{11}$, so that the phenomenological relevance of this annihilation for He WD cooling depends strongly on the critical temperature of the nuclear condensate, with higher temperatures corresponding to more relevant neutrino emission\cite{Bedaque:2012mr}. These considerations motivate the detailed study of the thermodynamics of such a nuclear condensate.
For calculational simplicity, we will work in the regime of stiff electrons, so that they simply screen the Coulomb interaction with a screening mass. In this light, our investigation may be seen as an investigation of nonrelativistic charged spin-0 bosons interacting via a screened Coulomb (that is, Yukawa) interaction. Surprisingly, we find that the system is significantly more complex than expected and that we can merely set an upper bound for the first-order transition temperature: $\ensuremath{T_\text{BEC}}<T_{c}^{(0)}$. We conjecture that this low first-order transition temperature is accompanied by an unforeseen second-order transition at $T_{c}^{(0)}$.
This paper is organized as follows: in \secref{sec:model} we discuss this model in detail and calculate its one-loop effective potential. In \secref{sec:phase-diagram} we establish the phase diagram described by this model and investigate its properties analytically and numerically. In \secref{sec:global} we demonstrate that the condensed phase is globally disfavored anywhere the usual uncondensed phase exists, and to resolve this puzzle conjecture that the phase transitions that this system undergoes are more complicated than previously appreciated. Finally, in \secref{sec:conclusion} we make some remarks about the phenomenological relevance of nuclear condensates and discuss priorities for more deeply understanding this model.
\section{Model, quasiparticles and effective potential}\label{sec:model}
At the densities we are considering our system can be described by the (Euclidean space) Lagrangian
\begin{equation}\label{eq:L_initial}
{\mathcal L} = \psi^\dagger \left( D_0- \mu - \frac{{\mathbf{D}}^2}{2M}\right)\psi + \frac{1}{4}F_{\mu\nu}F_{\mu\nu}+\mathcal{L}_{gauge} +\bar\eta (D_\mu\gamma_\mu+m+\mu_e \gamma_0)\eta,
\end{equation}
where $\psi$ is the spin-0 helium nucleus field with charge $Ze=2e$ so that $D_\mu\psi = \partial_\mu\psi - i Ze A_\mu\psi$, $\eta$ represents the usual electron, with $D_{\mu}\eta=\partial_{\mu}\eta + i e A_{\mu}\eta$, $A_\mu$ is the photon field, $M$ ($m$) is the helium nucleus (electron) mass and $\mu$ ($\mu_e$) is the chemical potential for the nuclei (electrons). $\mathcal{L}_{gauge}$ is the gauge-fixing action required for perturbative calculations.
The Lagrangian in \eqref{eq:L_initial} does not contain the nuclear force between ions. We omit this force because in the regime we are considering the Coulomb repulsion prevents two nuclei from approaching one another to distances comparable to the nuclear size and, consequently, the nuclear force between them is inoperative: the nuclei interact with each other (and with electrons) only through the electromagnetic force. At the densities we consider ($\rho < 10^{8}~{\rm g/cm^{3}}$) the nuclei are non-relativistic while the electrons may or may not be relativistic. A chemical potential for the electron is included and chosen so that the charge density of electrons equals that of the nuclei, ensuring charge neutrality.
Despite its apparent simplicity, the action above describes a tremendous array of phenomena; this is not surprising given the number of parameters present. We will concentrate on the regime described in the introduction, where three different small parameters can be identified, namely i) $\alpha m l = l/a_0$, the ratio between the interparticle distance and the Bohr radius, ii) the fine structure constant $\alpha$, and iii) the mass ratio $m/M$. In order to make progress we will attempt a calculation that captures the leading-order effects in these three parameters. Sometimes, however, certain effects will be proportional to ratios of these parameters. In these circumstances we will consider their numerical values to decide which terms to neglect. For instance, we count $m/(\alpha M)\approx 10^{-2}$ as a small parameter.
The first step is to integrate out the electrons. The result is, in general, a complicated non-local action for nuclei and photons only. Later, we will use only the part of the action quadratic in the fields. So, at leading order in $\alpha$ we have
\begin{equation}
\mathcal{L}_{\psi A} = \psi^\dagger \left( D_0- \mu - \frac{{\mathbf{D}}^2}{2M}\right)\psi + \frac{1}{4}F_{\mu\nu}F_{\mu\nu} + {\mathcal L}_{gauge} +\frac{1}{2}A_\mu \Pi_{\mu\nu}A_\nu - (eA_0+\mu_{e}) n.
\end{equation} The quadratic term in $A_\mu$ is the one-loop photon polarization tensor due to the electrons. Higher loop corrections are suppressed by powers of $\alpha ml = l/a_0$ and are small for the dense electron plasmas considered here \cite{Fetter:1971fk}.
In the density and temperature regime considered here, the electrons are degenerate and we can use the $T=0$ form of the polarization tensor
\begin{equation}
\Pi_{\mu\nu} =
\begin{pmatrix}
\Pi & -\frac{p_ip_0}{\mathbf{p}^2}\Pi \\
-\frac{p_j p_0}{\mathbf{p}^2}\Pi & \frac{p_i p_j p_0^2}{\mathbf{p}^4}\Pi +
(p_i p_j-\delta_{ij}\mathbf{p}^2)\ensuremath{\Pi^\perp}
\end{pmatrix}
\end{equation}
where $\Pi$ and $\ensuremath{\Pi^\perp}$ are functions of $p_0, \mathbf{p}$ . The form of the polarization tensor and the fact that it is determined by two functions follow from the Ward identity
$p_\mu \Pi_{\mu\nu}=0$. $\ensuremath{\Pi^\perp}$ will play no role in what follows, but $\Pi$ will be essential for our discussion. It can be written as $\Pi(p_0, \mathbf{p}) = m_s^2 f(p_0 m/k_F^2, \mathbf{p}/k_F)$, where $f\rightarrow 1$ at small $p_0 m/k_F^2, \mathbf{p}/k_F$ \cite{Fetter:1971fk}. The zero-momentum value of $\Pi$ describes the static screening of Coulomb forces by the cold electron gas. For certain values of the momentum $\Pi$ also has a small imaginary part. In this paper we will neglect both the imaginary part and the momentum dependence of $\Pi$, effectively working with a model where the electrons provide a negative charge background canceling the ion charge and screening the Coulomb force. In reality the electron gas leads to ``Friedel oscillations'' in the screened Coulomb force. Our calculations are thus applicable to a model of spinless bosons interacting through a screened Coulomb (Yukawa) potential; the effect of the full momentum dependence of $\Pi$, and of the Friedel oscillations, on the thermodynamics will be left for a later publication.
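To set the scale of $m_s$: if one assumes the standard static Thomas--Fermi (long-wavelength) limit of the one-loop polarization for a cold degenerate electron gas, then
\begin{equation}
m_s^2 = \Pi(0,\mathbf{0}) = 4\pi\alpha\,\frac{\partial n_e}{\partial \mu_e}
= \frac{4\alpha}{\pi}\,k_F\sqrt{k_F^2+m^2}
\ \longrightarrow\ \frac{4\alpha\, m\, k_F}{\pi}=\frac{4 k_F}{\pi a_0}
\qquad\textrm{for}\quad k_F\ll m\,,
\end{equation}
so that $m_s\sim k_F^{1/2}\sim n^{1/6}$ in the non-relativistic regime, the scaling invoked later when discussing the density dependence.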
We are interested in the possibility that nuclei condense, that is, that the condensate $\langle \psi \rangle=v$ be non-zero, breaking the electromagnetic $U(1)$ symmetry spontaneously. Whether this happens or not can be decided by minimizing the effective potential $V(v)$\footnote{We are assuming that translation symmetry is not spontaneously broken and $v$ is position independent.}. For this purpose we compute now the one-loop effective potential by using standard methods \cite{Peskin:1995ev}. First, we split the nuclear field into a classical part $v$ and a fluctuating piece $\chi$ as $\psi=v+\chi_R + i \chi_I$. Then we expand the action to quadratic order in the fields $A_\mu, \chi, \chi^\dagger$ and perform the gaussian path integral. The effective potential is given by
\begin{align}
e^{-\beta\int d^3r V(v)}
&= \int D\chi D\chi^\dagger DA_\mu\ e^{-S_{quad}}\nonumber\\
&=\left({\rm det}\ S_{quad} \right)^{-1/2}\nonumber\\
&= e^{-\frac{1}{2}{\rm tr}\ln S_{quad}},
\end{align} where $\beta=1/T$ is the inverse temperature.
The fields $\chi, \chi^\dagger$ and $A_\mu$ mix in the quadratic part of the action. The unitary gauge-fixing used in \reference{Bedaque:2011hs},
\begin{equation}
\mathcal{L}_{gauge} = -\frac{1}{2\xi}\left(
\nabla.{\mathbf{A}} - \frac{2M}{ev}\partial_0 \chi_R - \frac{\xi Zev^2}{M} \chi_I
\right)^2,
\end{equation} followed by the limit $\xi\rightarrow \infty$ allows us to decouple the fields at quadratic order and therefore simplify many calculations. However, the use of the unitary gauge (and $R_\xi$ gauges in general) is known to be problematic at finite temperature \cite{Dolan:1973qd}. The gauge fixing condition depends explicitly on $v$ and with this gauge we would be computing the effective potential (a gauge-dependent quantity) at different values of $v$ in different gauges. For this reason we will instead use the Coulomb gauge-fixing,
\begin{equation}
\mathcal{L}_{gauge} = -\frac{1}{2\xi}\left(
\nabla.{\mathbf{A}} \right)^2,
\end{equation} followed by the $\xi\rightarrow 0$ limit. For the one-loop calculations described in this paper the use of Coulomb gauge is a modest calculational complication that preempts more complicated conceptual questions.
The quadratic part of the action is given, in momentum space, by \begin{align}\label{eq:S_mom}
S_{quad} &=\int \frac{d^4p}{(2\pi)^4}
\begin{pmatrix}
\chi_R(-p) &
\chi_I(-p) &
A_0(-p) &
A_\parallel(-p)
\end{pmatrix}\times \nonumber \\
&\phantom{\int \frac{d^{4}p}{(2\pi)^{4}}}\ \begin{pmatrix}
-\frac{p^2}{2M}+\mu & ip_0 & Zev & 0 \\
-ip_0 & -\frac{p^2}{2M}+\mu & 0 & \frac{iZev p}{2M} \\
Zev & 0 & \frac{p^2+\Pi}{2} & -\frac{p_0}{2p}(p^2+\Pi)\\
0 & - \frac{iZev p}{2M} & -\frac{p_0}{2p}(p^2+\Pi) & \frac{1}{2}\left ( -\frac{Z^{2}e^2v^2}{M}+p_0^2 - \frac{p^2}{\xi}+\frac{p_0^2 \Pi}{p^2} \right)
\end{pmatrix}
\begin{pmatrix}
\chi_R(p)\\
\chi_I(p)\\
A_0(p)\\
A_\parallel(p)
\end{pmatrix}.
\end{align}
The eigenvalues of the matrix in \eqref{eq:S_mom} are $p_0-iE_p$ and $p_0+iE_p$, where
\begin{align}\label{eq:Ep}
E_{p}^{2} =&\ \frac{p^{4}-2M p^{2}\mu -2 M m_{A}^{2}\mu \xi}{p^{4}+4 m_{A}^{2}p^{2}\xi-2 M m_{A}^{2}\mu\xi}\left( \frac{p^{2}}{2M}\left( \frac{p^{2}}{2M}-\mu \right) +\frac{p^{2}m_{A}^{2}}{p^{2}+\Pi}\right) \nonumber\\
\stackrel{\xi\rightarrow 0}{\longrightarrow}&
\left( \frac{p^2}{2M}-\mu\right)^2 + \left( \frac{p^2}{2M}-\mu\right) \frac{2M m_A^2}{p^2+\Pi},
\end{align}
where $m_A^2 = 4\pi \alpha Z^2 v^2/M$. Eq.~(\ref{eq:Ep}) gives the dispersion relation for the quasiparticles in the system (after the analytic continuation $p_0\rightarrow i p_0$). At zero temperature and up to higher loop corrections we can use the tree value of the chemical potential $\mu=0$ in the dispersion relation above. In that case, the result in \eqref{eq:Ep} becomes independent of the gauge-fixing parameter $\xi$ (as it should since it is an observable quantity) and it agrees with the dispersion relation obtained using the unitary gauge in \cite{Bedaque:2011hs}.
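It is useful to record the two limiting forms of \eqref{eq:Ep} at $\mu=0$ (with $\Pi=m_s^2$), which will reappear below as the phonon and plasmon regimes:
\begin{equation}
E_p^2\Big|_{\mu=0} = \left(\frac{p^2}{2M}\right)^2 + \frac{p^2\, m_A^2}{p^2+m_s^2}
\;\longrightarrow\;
\begin{cases}
\frac{m_A^2}{m_s^2}\,p^2\,, & p^2\ll m_s^2 \ \textrm{ and }\ \left(\frac{p^2}{2M}\right)^2\ll \frac{m_A^2 p^2}{m_s^2}\,,\\[8pt]
\left(\frac{p^2}{2M}\right)^2+m_A^2\,, & p^2\gg m_s^2\,.
\end{cases}
\end{equation}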
The computation of the one-loop part of the potential $V^{(1)}$ can then proceed in the usual fashion. We introduce a spurious offset $\delta$ in $p_{0}^{2}+E_{p}^{2}$, perform the formal manipulations below, and then set $\delta$ to zero.
\begin{align}
V^{(1)} &= \oneover{2} T \sum_{p_{0}}\int \frac{d^{3}p}{(2\pi)^{3}} \ln\left( p_{0}^{2}+E_{p}^{2} \right)
&&= \left.\oneover{2} T \sum_{p_{0}}\int \frac{d^{3}p}{(2\pi)^{3}} \ln\left( p_{0}^{2}+E_{p}^{2} + \delta \right)\right|_{\delta=0} \nonumber\\
&= \int^{0}d\delta\ \frac{d}{d\delta}\oneover{2} T \sum_{p_{0}}\int \frac{d^{3}p}{(2\pi)^{3}} \ln\left( p_{0}^{2}+E_{p}^{2} + \delta \right)
&&= \oneover{2} \int^{0}d\delta\ T \sum_{p_{0}}\int \frac{d^{3}p}{(2\pi)^{3}} \oneover{\left( p_{0}^{2}+E_{p}^{2} + \delta \right)} \nonumber\\
&= \oneover{2} \int^{0} d\delta\ \int \frac{d^{3}p}{(2\pi)^{3}} \oneover{2 \sqrt{E_{p}^{2}+\delta}}\coth\left( \oneover{2}\beta\sqrt{E_{p}^{2}+\delta} \right)
&&= T \int \frac{d^{3}p}{(2\pi)^{3}} \ln\left( 2 \sinh\left( \oneover{2} \beta E_{p} \right) \right) \nonumber\\
&= \int \frac{d^{3}p}{(2\pi)^{3}} \oneover{2} E_{p} + T \int \frac{d^{3}p}{(2\pi)^{3}} \ln\left( 1-e^{-\beta E_{p}} \right),
\label{eq:VEp}
\end{align}
where the sum is over the discrete values $p_0=2\pi T j$, $j=0, \pm 1, \cdots$. Had we kept the full $\Pi(p_0,\mathbf{p})$ instead of merely $\Pi=m_s^2$ we would have encountered cuts in the complex $p_0$ plane and the effective potential would have a more complicated form. We will ignore the temperature-independent piece when exploring this effective potential in pursuit of its corresponding phase diagram.
\section{Phase diagram}\label{sec:phase-diagram}
The minimization of $V$ in relation to $v$ gives us the actual expectation value of the condensate at any given value of the chemical potential $\mu$. We would like, however, to have the ion density $n$ fixed in order to neutralize the charge of the electrons. We then have to solve simultaneously the pair of equations
\begin{subequations}\label{eq:dV}
\begin{align}
n &= -\frac{\partial V}{\partial\mu} = v^2+
\int \frac{d^3p}{(2\pi)^3}
\frac{1}{e^{\beta E_p}-1}\frac{1}{E_p} \left[ \frac{p^2}{2M}-\mu+ \frac{M m_A^2}{p^2+m_s^2} \right],
\label{eq:dVdmu}\\
0&= \phantom{-}\frac{\partial V}{\partial v} = -\mu v+v \int \frac{d^3p}{(2\pi)^3}
\frac{1}{e^{\beta E_p}-1}\frac{1}{E_p} \left( \frac{p^2}{2M}-\mu\right) \frac{4\pi Z^{2} \alpha}{p^2+m_s^2}
\label{eq:dVdv} .
\end{align}
\end{subequations}
The first terms in these equations come from the tree level contribution $V^{(0)}=-\mu v^2$ to the effective potential. In the absence of dynamical electrons and screening effects ($m_s=0$), \eqref{eq:dV} were derived in \reference{Fetter1970464}.
We notice that the dispersion relation dependence on $\mu$, shown in equation \eqref{eq:Ep}, is important in deriving these relations, even if $\mu$ is set to zero afterwards. The non-relativistic limit of the dispersion relation obtained in \reference{Rosen:2010es}, for instance, differs from ours at finite values of $\mu$ and is, consequently, at odds with \reference{Fetter1970464}.
\begin{figure}[tbp]
\centerline{ {\epsfxsize=3.0in\epsfbox{phase_diagram.pdf}} }
\noindent
\caption{The red curve is the solution to \eqref{eq:dVdmu0}; it is shown as dashed where the condition $\mu \ll p^2/2M$ is violated and should not be trusted. The result of the phonon (dotted blue) and plasmon (dotted orange) approximations are also shown. The purple dashed line separates the phonon-dominated from the plasmon-dominated regions. }
\label{fig:phase_diagram}
\end{figure}
To the extent that higher loop contributions to the effective potential are small, \eqref{eq:dV} determine the variation of the condensate $v=v(T)$ with the temperature.
However, as we can see from \eqref{eq:dVdv}, $\mu$ cannot be negative if $v\neq 0$. We are then led to expect that the condensate forms at positive values of $\mu$. At positive $\mu$, however, $E_{p}^{2}$ can be negative for some values of $p$, and thus the effective potential is complex. A complex one-loop effective potential is a common occurrence and signals an instability \cite{Weinberg:1987vp}. Frequently, this instability is an artifact of the loop expansion and can be cured by a resummation of higher loop contributions, as occurs, for instance, in the relativistic $\lambda\phi^4$ model at finite temperature \cite{Dolan:1973qd}. We are unable at this point to identify the necessary resummations in our model. But, just like in the models discussed in \reference{Dolan:1973qd}, the un-resummed one-loop potential already carries important information about the thermodynamics of our problem. We will now proceed to extract as much information from the one-loop effective potential as possible.
The analysis of \eqref{eq:dV} is simple in the $v=0$ case and can be found in textbooks. In this case equation \eqref{eq:dVdmu} becomes
\begin{equation}
n =
\int \frac{d^3p}{(2\pi)^3}
\frac{1}{e^{\beta (\frac{p^2}{2M}-\mu)}-1}.
\end{equation} Following the usual analysis, $\mu$ is negative for
\begin{equation}
T>T^{(0)}_c=\frac{2\pi}{M} \left(\frac{n}{\zeta(3/2)}\right)^{2/3} = \left( \frac{9\pi}{2\zeta(3/2)^2} \right)^{1/3}\frac{1}{M l^2},
\end{equation} and vanishes at $T=T^{(0)}_c$. For $T<T^{(0)}_c$ it is impossible to satisfy the equations, which signals the need for a non-vanishing condensate. The line $v(T)=0$ is shown in \figref{fig:phase_diagram}.
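The second equality in the expression for $T_c^{(0)}$ corresponds to identifying $l$ with the Wigner--Seitz radius, $\frac{4\pi}{3}\,l^3 n=1$ (the identification under which the two forms coincide); explicitly,
\begin{equation}
T^{(0)}_c=\frac{2\pi}{M}\left(\frac{3}{4\pi\,\zeta(3/2)\,l^3}\right)^{2/3}
=\left(\frac{9\pi}{2\,\zeta(3/2)^{2}}\right)^{1/3}\frac{1}{M l^2}
\approx\frac{1.27}{M l^2}\,.
\end{equation}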
In order to consider non-zero values of $v$ our strategy will be to neglect $\mu$ as compared to $p^2/2M$ in \eqref{eq:dV} obtaining
\begin{subequations}\label{eq:dV0}
\begin{align}
n -v^2&=
\int \frac{d^3p}{(2\pi)^3}
\frac{1}{e^{\beta E_p}-1}\frac{1}{E_p} \left[ \frac{p^2}{2M}+ \frac{M m_A^2}{p^2+m_s^2} \right],
\label{eq:dVdmu0}\\
\mu&= \int \frac{d^3p}{(2\pi)^3}
\frac{1}{e^{\beta E_p}-1}\frac{1}{E_p} \frac{p^2}{2M} \frac{4\pi Z^{2}\alpha}{p^2+m_s^2}
\label{eq:dVdv0} ,
\end{align}
\end{subequations}
with $E^2_p=(p^2/2M)^2+p^2 m_A^2/(p^2+m_s^2)$, which obviates the complex effective potential problem. We then solve \eqref{eq:dV0} and carefully verify the regime of validity of the $\mu\ll p^2/2M$ approximation. We will solve the equations both numerically and, in some limits, analytically. The numerical solution of \eqref{eq:dVdmu0} for $a_0/l=35$ (corresponding to a density of $\rho=4.6\times10^5\ \mathrm{g/cm^3}$) is shown as the thick line in \figref{fig:phase_diagram}.
\figref{fig:phase_diagram} has the typical shape of a first order phase transition. In the usual scenario, for temperatures where three solutions exist ($T_c^{(0)} < T \alt 8T_c^{(0)}$ in \figref{fig:phase_diagram}), two are locally stable and the middle one is unstable. The $v=0$ solution is stable for temperatures higher than a critical value $T_c > T_c^{(0)}$, where the $v\neq 0$ solution is only metastable. At temperatures lower than $T_c$ the roles of the $v=0$ and $v\neq 0$ solutions are reversed. Thus, the condensate $v$ jumps discontinuously to zero as the temperature is increased past $T=T_c$. As we will see in the next section, the curve shown in \figref{fig:phase_diagram} cannot be trusted for all values of $v$, and the situation in our model is more complicated.
We now assess the validity of the approximation $\mu \ll p^2/2M$ leading to our values of $v(T)$. For any given value of $v$ and $T$ we can estimate $\mu$ by using \eqref{eq:dVdv0}. This estimate will be accurate if $\mu$ is indeed negligible compared to $p^2/2M$ but not otherwise. In this sense, the use of \eqref{eq:dVdv0} conservatively estimates the range of validity of the $\mu \ll p^2/2M$ approximation.
The estimate of $p^2/2M$ is a little trickier. Ordinarily, the value of $p^2/2M$ could be estimated from the knowledge of the typical value of $p$ contributing to the integral in \eqref{eq:dVdmu0}. The integrand in \eqref{eq:dVdmu0}, however, has a double-hump structure dominated by two widely separated scales, as shown in \figref{fig:humps}. Depending on the values of $v,T$, one or the other hump will dominate the value of the integral. Fortunately, analytical approximations are available in these two cases.
\begin{figure}[tbp]
\centerline{ {\epsfxsize=3.0in\epsfbox{double_hump.pdf}} }
\noindent
\caption{Integrand in the second equation in \eqref{eq:dVdmu0} as a function of momentum (full line). The dotted lines correspond to the phonon and plasmon approximations. }
\label{fig:humps}
\end{figure}
For the lower momentum hump we are in the ``phonon region" where the approximations
\begin{subequations}\label{eq:phonon_cond}
\begin{align}
p^2 &\ll m_s^2,\\
\left( \frac{p^2}{2M} \right)^2&\ll \frac{m_A^2 p^2}{m_s^2}
\end{align}
\end{subequations}
are adequate. In this region the dispersion relation approaches that of a phonon \cite{Bedaque:2011hs}
\begin{equation}
E_p\approx \frac{m_A}{m_s} p
\end{equation}
and the integrals in \eqref{eq:dV0} can be analytically calculated:
\begin{subequations}\label{eq:dV_phonon}
\begin{align}
n&=v^2+\frac{M m_s}{12 m_A}T^2 \label{eq:dVdmu_phonon}, \\
\mu &= \frac{Z^{2}\pi^3}{15}\frac{\alpha m_s^3}{M}\frac{T^4}{m_A^5} \label{eq:dVdv_phonon}.
\end{align}
\end{subequations}
Equation \eqref{eq:dVdmu_phonon} determines the condensate $v(T)$ as a function of $T$.
Its value is plotted as the (blue) dotted line in \figref{fig:phase_diagram}.
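For completeness, the integral behind \eqref{eq:dVdmu_phonon} is elementary. Writing $c\equiv m_A/m_s$ for the phonon velocity and keeping the dominant $M m_A^2/m_s^2$ piece of the square bracket in \eqref{eq:dVdmu0},
\begin{equation}
n-v^2\ \approx\ \frac{M m_A^2}{m_s^2}\int \frac{d^3p}{(2\pi)^3}\,\frac{1}{c\,p\,\bigl(e^{\beta c p}-1\bigr)}
= \frac{M m_A^2}{2\pi^2 m_s^2\, c^3}\,T^2 \int_0^\infty \frac{x\,dx}{e^{x}-1}
= \frac{M m_s}{12\, m_A}\,T^2\,,
\end{equation}
using $\int_0^\infty x\,dx/(e^x-1)=\pi^2/6$; the $p^2/2M$ piece of the bracket is suppressed by further powers of $T/(Mc^2)$ and gives a subleading contribution.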
For the phonon approximation to \eqref{eq:dV0} to be legitimate it is necessary that the equations \eqref{eq:phonon_cond} be satisfied.
The integrals in \eqref{eq:dV0} are cut off by the Boltzmann factor $e^{-m_A p/(T m_s)}$. So, the typical value of the momentum is $p\approx m_s T/m_A$. Using this value of $p$, both conditions in \eqref{eq:phonon_cond} become
\begin{equation}
T \ll m_A.
\end{equation}
Since $v^2<n$, the second condition in \eqref{eq:phonon_cond} follows from the first one. The region excluded by this condition is shown as the darker blue area in \figref{fig:exclude}. We can now compare $\mu$ to $p^2/2M$. We find
\begin{equation}\label{eq:muless_phonon}
\mu \ll \frac{p^2}{2M} \Rightarrow \frac{Z^{2}\pi^3}{15} \alpha m_s \frac{T^2}{m_A^3} \ll 1.
\end{equation}
The region where equation \eqref{eq:muless_phonon} fails is shown in blue in \figref{fig:exclude}.
The condition $T \ll m_A$ is always satisfied when \eqref{eq:muless_phonon} holds, for the relevant values of the density parameter $l/a_0$. We then conclude that, as long as the phonon region dominates the integrals in \eqref{eq:dVdmu0}, we are justified in neglecting $\mu$.
\begin{figure}[tbp]
\centerline{ {\epsfxsize=3.0in\epsfbox{exclude.pdf}} }
\noindent
\caption{Regions where the phonon (plasmon) approximation is not valid are shown in blue (yellow). The purple dashed line separates the phonon-dominated from the plasmon-dominated regions. The red curve is the solution to \eqref{eq:dVdmu0}; it is shown as dashed where the condition $\mu \ll p^2/2M$ is violated and should not be trusted.}
\label{fig:exclude}
\end{figure}
At higher momentum, on the second hump of the integrand, the ``plasmon approximation" is adequate:
\begin{subequations}\label{eq:plasmon_cond}
\begin{align}
m_{s}^{2} &\ll p^2 \\
\frac{p^2}{2M} &\ll \frac{M m_A^2}{p^2} .
\end{align}
\end{subequations}
In this region, the dispersion relation becomes that of a massive excitation (plasmon)
\begin{equation}
E_p = \sqrt{\left( \frac{p^2}{2M} \right)^2+ m_A^2},
\end{equation}
the integrals are cut off by the exponential statistical factor $e^{-\sqrt{(p^2/2M)^2+m_A^2}/T} \approx e^{-m_A/T}\, e^{-p^4/(8M^2 m_A T)}$. It also turns out that \eqref{eq:plasmon_cond} implies $T \ll \frac{1}{4} m_A$, and hence we can drop the 1 compared to $e^{\beta E_p}$ in the integrand. The typical momentum is given by
\begin{equation}
p^2 \approx \sqrt{8 m_A T} M.
\end{equation}
With the approximations in \eqref{eq:plasmon_cond}, \eqref{eq:dV0} reduces to
\begin{subequations}\label{eq:dV_plasmon}
\begin{align}
n &= v^2+\frac{\Gamma(5/4)}{2^{1/4} \pi^2} M^{3/2} m_A^{5/4} T^{1/4} e^{-m_A/T},
\label{eq:dVdmu_plasmon}\\
\mu &= \frac{2^{1/4} \Gamma(3/4)}{\pi} Z^2 \alpha \sqrt{M}\frac{T^{3/4}}{m_A^{1/4}} e^{-m_A/T}
\label{eq:dVdv_plasmon}
\end{align}
\end{subequations}
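The integrals behind \eqref{eq:dV_plasmon} are of Gaussian type. With $1/E_p\approx 1/m_A$, the Boltzmann factor quoted above, and the square bracket of \eqref{eq:dVdmu0} dominated by $M m_A^2/p^2$, they reduce to
\begin{equation}
\int_0^\infty dp\ e^{-p^4/a}=\Gamma(5/4)\,a^{1/4}\,,
\qquad
\int_0^\infty dp\ p^{2}\,e^{-p^4/a}=\frac{\Gamma(3/4)}{4}\,a^{3/4}\,,
\qquad a=8M^2 m_A T\,,
\end{equation}
which reproduce the coefficients quoted in \eqref{eq:dVdmu_plasmon} and \eqref{eq:dVdv_plasmon}.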
We can now verify for which values of $v$ and $T$ the plasmon approximation is valid:
\begin{subequations}
\begin{align}
m_s^2 &\ll p^2 &\Rightarrow& &\frac{m_s^4}{8 M^2 m_A^2} &\ll T, \label{eq:plasmon_validity_ms}\\
\frac{p^2}{2M} &\ll \frac{M m_A^2}{p^2} &\Rightarrow& &T &\ll \frac{1}{4} m_A, \label{eq:plasmon_validity_ma}\\
\mu &\ll \frac{p^2}{2M} &\Rightarrow& &\frac{Z^2\Gamma(3/4)}{2^{1/4}\pi} \frac{\sqrt{M}T^{1/4}}{m_A^{3/4}} \alpha\ e^{-m_A/T} &\ll 1. \label{eq:plasmon_validity_mu}
\end{align}
\end{subequations}
The first condition \eqref{eq:plasmon_validity_ms} excludes a tiny region of the $v-T$ plane near $T\approx 0$, where our calculations, which neglect the temperature-independent one-loop effective potential, are not valid. The second and third conditions are actually very similar, since the exponential factor $e^{-m_A/T}$ is very small when $T\alt m_A$. The areas excluded by these conditions are shown in yellow in \figref{fig:exclude}.
Finally, the phonon and plasmon region contributions to the integral in \eqref{eq:dVdmu0} should be compared by taking the ratio of \eqref{eq:dVdmu_phonon} and \eqref{eq:dVdmu_plasmon}. We find
\begin{equation}\label{eq:comparison}
\frac{\left.\left( n-v^2 \right)\right|_{\text{phonon}}}{\left.\left( n-v^2 \right)\right|_{\text{plasmon}}} \sim \frac{2^{1/4} \pi^2}{12\ \Gamma(5/4)}\frac{m_s T^{7/4}}{\sqrt{M} m_A^{9/4}} e^{m_A/T}.
\end{equation}
The contour separating the phonon-dominated from the plasmon-dominated regions is shown as a dashed line in \figref{fig:phase_diagram} and \figref{fig:exclude}.
For $v\approx \sqrt{n}$, the only part of the $v=v(T)$ solution where those approximations make sense, the phonon contribution dominates at small temperatures while the plasmon contribution dominates at higher temperatures.
We now have the full verification of the $\mu=0$ approximation. On the upper branch of the $v=v(T)$ solution, up to $T\alt 5 T_c^{(0)}$, the phonon and the $\mu \approx 0$ approximations are valid. At higher $T$, up to $T\approx 7.5 T_c^{(0)}$, the plasmon and the $\mu \approx 0$ approximations are valid. Beyond that, and for the lower branch with $v=v(T)\alt 0.7 \sqrt{n}$, our approximations, including the neglect of $\mu$, are no longer valid. For this reason we draw the $v=v(T)$ solution as a dashed line in \figref{fig:phase_diagram} and \figref{fig:exclude}.
The numerical examples presented in the figures correspond to a fixed value of the density parameter $l/a_0 = 1/35$. It turns out that the dependence of $v/\sqrt{n}$ and other quantities on the density is very mild, particularly if we remember that only a relatively narrow range of densities around $10^5\ \mathrm{g/cm^3}$ is phenomenologically relevant. In fact, in addition to the dependence of $m_s \sim k_F^{1/2} \sim n^{1/6}$, only the overall normalization of the integral in \eqref{eq:dVdmu0} is dependent on $n$. As a rule, however, there is a reduction of the phonon-dominated region in the phase diagram as the density is raised, as can be seen from \eqref{eq:comparison}.
\section{Global stability and a conjecture}\label{sec:global}
Having identified the locally stable states of the model we now study their global stability.
In a certain range of temperatures two states, the trivial $v=0$ one and the condensed $v\neq 0$ one, satisfy the conditions for the local minimization of the effective potential. We now want to decide which one is the global minimum. These two states, however, are obtained from solving \eqref{eq:dVdmu0} and, consequently, have the same particle density $n$. In order to decide which one is the stable state we have to compare their free energies, defined by
\begin{equation}
F(n,T) = V_{eff}(v,T)+\mu n,
\end{equation}
where $\mu$ and $v$ are the solutions to \eqref{eq:dVdmu0} and \eqref{eq:dVdv0} for given values of $T$ and $n$. The evaluation of $F$ is particularly simple right at $T=T_c^{(0)}$.
Let us first compute this value for the trivial $v=0$ solution. Right at $T=T_{c}^{(0)}$ the uncondensed solution has $\mu=0$, and we find
\begin{align}
F_{v=0}\left( n, T_{c}^{(0)} \right) &= T_{c}^{(0)}\int \frac{d^3p}{(2\pi)^3} \ln\left( 1-e^{-\frac{p^2}{2MT_c^{(0)}}}\right) \nonumber\\
&= -\zeta(5/2)\ T_c^{(0)}\ \left( \frac{MT_c^{(0)}}{2\pi} \right)^{3/2} \nonumber\\
&= -\frac{3^{5/3}\zeta(5/2)}{2^{7/3}\zeta^{5/3}(3/2)} \frac{1}{Ml^5}
\end{align}
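The step from the first to the second line is the textbook free-boson integral, obtained by expanding the logarithm and performing the Gaussian momentum integrals,
\begin{equation}
T\int \frac{d^3p}{(2\pi)^3} \ln\Bigl( 1-e^{-\frac{p^2}{2MT}}\Bigr)
= -\,T\sum_{j=1}^{\infty}\frac{1}{j}\left(\frac{MT}{2\pi j}\right)^{3/2}
= -\,\zeta(5/2)\,T\left(\frac{MT}{2\pi}\right)^{3/2},
\end{equation}
giving the value of $F_{v=0}\bigl(n,T_c^{(0)}\bigr)$ quoted above.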
This is to be compared to the free energy of the $v\neq 0$ states. The $T=T_c^{(0)}$, $v\neq 0$ solution is well within the phonon dominated region where the free energy can be computed, with the results in \eqref{eq:dV_phonon}, to be
\begin{align}
F_{\text{phonon}}\left( n, T \right) &=
\mu(n-v^2)+T\int \frac{d^3p}{(2\pi)^3} \ln\left( 1-e^{-\frac{m_A p}{m_sT}}\right)\nonumber\\
&=
\frac{\pi^3 Z^{2} \alpha m_s^4 T^6}{180 m_A^6} - \frac{\pi^2 m_s^3 T^4}{90 m_A^3}\\
F_{\text{phonon}}(n,T_{c}^{(0)})&=
- \frac{3^{1/6}\pi^{7/3}}{5\ 2^{1/3}\zeta^{8/3}(3/2)Z^{5/2}} \oneover{M l^{5}}\left( \frac{m}{M} \right)^{3/2}\left( 1 - \frac{3^{1/6}\pi^{4/3}}{2 \zeta^{4/3}(3/2)Z^{5/6}}\sqrt{\frac{m}{M}} \right)
\end{align}
The free energy of the $v=0$ solution is more negative than the free energy of the condensed state by a factor proportional to the large parameter $(M/m)^{3/2}$. This shows that at all temperatures where the $v=0$ state exists it, and not the condensed $v\neq 0$ state, is the globally stable state of the system. A numerical calculation of the free energy using the solution of \eqref{eq:dVdmu0} and \eqref{eq:dVdv0} confirms this result and shows that the difference in free energy between the two competing states {\it increases} as the temperature is further increased past $T=T_c^{(0)}$.
This result is at odds with the standard picture of a first order transition. In the usual case, one has $F_{\text{phonon}} < F_{v=0}$ for all $T<T_c$ where the critical temperature $T_c$ is greater than $T_c^{(0)}$. Even though we were unable to compute $F(v,n,T)$ for arbitrary $v$, it is easy to see that no function $F(v,n,T)$ could have a set of local and global minima as shown in \figref{fig:phase_diagram} while also globally favoring the $v=0$ state all the way down to $T_{c}^{(0)}$. To understand this impossibility, consider the shape of this purported function around $T\approx T_c^{(0)}$ as $T$ is increased. A new minimum at $v=0$ is supposed to appear in addition to the non-trivial one with $v\neq 0$ and immediately become the global minimum of the function. This is clearly impossible.
The only way out of this inconsistency is to assume that the state with small $v$ beats the superconducting state $v\approx\sqrt{n}$ at temperatures smaller than $T_c$. This can occur if the
actual curve $v=v(T)$ has the shape shown in the left panel of \figref{fig:conjecture}. In that case a first order transition occurs at a temperature lower than $T_c^{(0)}$, followed by a second order transition. The shape of the free energy as a function of $v$ at the different temperatures indicated on the left panel is sketched on the right panel of \figref{fig:conjecture}. The extra knee in the $v=v(T)$ curve can occur in the region of the $v-T$ plane where our calculation is not under control. The part of our calculation that is under good theoretical control thus forces us to believe in this more exotic possibility.
\begin{figure}
\begin{tabular}{cc}
{\label{fig:conjecture}
\includegraphics[width=0.3\textwidth]{conjecture.pdf}}
&
\begin{tabular}{ccc}
\subfloat[]
{\label{fig:f1}
\includegraphics[width=0.2\textwidth]{f1.pdf}}
&
\subfloat[]
{\label{fig:f2}
\includegraphics[width=0.2\textwidth]{f2.pdf}}
&
\subfloat[]
{\label{fig:f3}
\includegraphics[width=0.2\textwidth]{f3.pdf}}
\\
\subfloat[]
{\label{fig:f4}
\includegraphics[width=0.2\textwidth]{f4.pdf}}
&
\subfloat[]
{\label{fig:f5}
\includegraphics[width=0.2\textwidth]{f5.pdf}}
&
\subfloat[]
{\label{fig:f6}
\includegraphics[width=0.2\textwidth]{f6.pdf}}
\end{tabular}
\end{tabular}
\caption{\label{fig:multi_panel}
The left panel shows a schematic of our conjectured phase diagram for the condensate $v$ as a function of $T$. The red curve in the left panel shows the minima of $F(v,n,T)$ for a fixed $n$. The green curves in the right panel are cross-sections of $F(v,n,T)$ at fixed $T$, with the sub-figures in the right panel corresponding to the slices in the left panel with the same label. At temperature (a) only the condensed solution exists. At temperature (b) the free energy develops an inflection point, which turns into one minimum and one maximum. The first-order phase transition happens at temperature (c), when the wells become equally deep. By temperature (d) (i.e.\ $T_{c}^{(0)}$), the well with small $v$ is already favored, and there is a second-order transition to $v=0$. At higher temperatures the $v\neq0$ minimum vanishes, as shown in (e) and (f).
}\label{fig:conjecture}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
Despite the difficulty posed by the fact that the effective potential is complex for positive chemical potential we were able to extract some physical consequences from our one-loop calculation. The main one is the indication of a sequence of a first order transition followed by a second order phase transition. This conclusion was developed by looking at the free energy values computed within the range of validity of the approximations performed and, as such, is quite robust. On the other hand we were not able to compute the critical temperature where the stable (superconducting) and metastable (normal) states trade places. But if the double transition conjecture described above is correct, this temperature is below the free boson critical temperature $T_c^{(0)}$.
We also found that the superconducting phase exists, as a metastable state, for temperatures up to about 8 times $T_c^{(0)}$. This conclusion agrees with that in \reference{Rosen:2010es}; this is not entirely a coincidence. Contrary to the present paper, \reference{Rosen:2010es} analyzes the unscreened model ($m_s=0$). But the high-$T$ region of the $v=v(T)$ curve is in the plasmon-dominated region, where the screening is not important. In addition, the methodological differences between this paper and \reference{Rosen:2010es} lead to a change in \eqref{eq:dVdmu} that is numerically small and, just like here, the $\mu$ dependence of the dispersion relation is neglected there, albeit for different reasons.
Although the model we analyzed (bosons interacting through a screened Coulomb (Yukawa) potential) is interesting on its own merits, applications to high density physics require a proper treatment of the effects of a dynamical electron background. Technically, the main effect is the inclusion of the contribution of the cuts of $\Pi(p_0,\mathbf{p})$ in the effective potential calculation. Physically they correspond to the fact that the actual force between two bosons presents an oscillating component (Friedel oscillations) \cite{Gabadadze:2009zz,Dolgov:2010gy,Gabadadze:2008pj}. A detailed study of the influence of these effects on the thermodynamics of the system will be left for a future publication.
We have also not discussed the metastability of the superconducting state in a quantitative fashion. In particular, we have not estimated its lifetime. This is due to the fact that a proper estimate would require us to compute the free energy $F$ as a function of $v$ in order to understand the size of the potential barrier separating the normal from the superconducting phase. Until a deeper understanding of the resummations needed to make sense of the one-loop results is achieved, this calculation is impossible. In fact, all the results discussed here, as well as a confirmation or falsification of our conjectured phase diagram, hinge on an understanding of this resummation, and that should be viewed as the number one priority for further progress in this topic.
Finally, we can use the results of this paper to assess where the idea of nuclear condensates in dense matter stands. Recall that, for the existence of an intermediate temperature regime where the nuclear condensate can exist, it is necessary that the crystallization temperature be smaller than the condensation temperature. If one is willing to consider metastable states, our estimate that the superconducting state extends up to $\approx 8\ T_c^{(0)}$ is similar to the hypothesis made in \cite{Gabadadze:2007si}. As such, the estimate that a nuclear condensate should exist at densities around $10^5\ \mathrm{g/cm^3}$, relevant for white dwarf physics, stands unaltered. Only an estimate of the decay time of the false superconducting ground state can decide whether the inclusion of the metastable state is appropriate but, considering the extremely slow evolution of white dwarfs and the fact that they start out at high temperatures, this suggests that the metastable state is irrelevant. In that case, only at much higher densities ($\rho \agt 2.4\times 10^7\ \mathrm{g/cm^3}$) and temperatures ($T\agt 10^6\ \mathrm{K}$) can the nuclear condensate exist.
\section*{Acknowledgements}
The authors would like to thank Tom Cohen and Aleksey Cherman for extensive discussions. This work was supported by the U.S. Dept. of Energy under grant \#DE-DG02-93ER-40762, and E.~B. is also supported by the Jefferson Science Associates under the JSA/JLab Graduate Fellowship program.
\section{Introduction and summary}
\noindent
The baby Skyrme model is a useful laboratory for studying soliton physics.
It is the $2{+}1$ dimensional analog of the usual Skyrme model~\cite{S}, which
describes the low-energy chiral dynamics of quantum chromodynamics~\cite{ANW}.
This model has direct applications in condensed matter physics~\cite{Mac},
where baby Skyrmions give an effective description in quantum Hall systems.
The action of this model consists of three terms: a kinetic sigma-model term
(scale invariant), the (four-derivative) Skyrme term (breaking scale invariance)
and a potential (or mass) term (stabilizing the size of solutions).
All three terms are needed to prevent the collapse of topological configurations
which yield Skyrmion solutions. These stable baby Skyrmions can be determined
numerically~\cite{Zak}. Their mass is strictly larger than the Bogomol'nyi bound
given by the topological charge (Skyrmion number), and the two-Skyrmion
configuration becomes stable, showing the existence of bound states~\cite{Zak}.
A noncommutative deformation~(for reviews see~\cite{nc})
serves as a substitute for the potential term,
because it introduces a new length scale into the theory, which also
stabilizes solitons against collapse or spreading.
Moreover, Moyal-deformed field theories have a much richer soliton spectrum
than their commutative counterparts
(see, e.g.,~\cite{lepo01,sendai} and references therein).
Indeed, the noncommutativity gives rise to a new class of baby Skyrmions,
as was shown in~\cite{iole}.
Furthermore, the noncommutative deformation may be of help in semi-classically
quantizing the (perturbatively non-renormalizable) baby Skyrme model, since it
introduces a regulating parameter.
The two above applications of noncommutativity are our main motivation
for Moyal-deforming the baby Skyrme model.
In a previous paper~\cite{iole} by one of the authors on this subject,
the Moyal-deformed baby Skyrme model was introduced~\footnote{
See also \cite{mieck} for different aspects of Moyal-deforming a Skyrme model.}
for group-valued or Grassmannian target spaces and without a potential term.
In the abelian case, a class of exact analytic solitonic solutions was discovered,
which are stable against scaling due to the noncommutativity but have no analogues
in the commutative theory. This surprising feat succeeded because certain BPS
configurations of the Moyal-deformed ordinary sigma model extremize the Skyrme part
of the energy as well. The static energy of these noncommutative baby Skyrmions
and their repulsive potential at large distances was computed~\cite{iole}.
However, their stability could not be ascertained, because a BPS bound for
the full baby Skyrme model (in a given Grassmannian) was not available.~\footnote{
For the pure sigma model, the energy is of course bounded by the topological
charge~\cite{dolepe}. The Skyrme term together with a potential also enjoys
a BPS bound which, however, becomes trivial for zero potential~\cite{gipa,adam}.}
In the present Letter, we fill this gap.
After reviewing the salient features of the noncommutative baby Skyrme model and
its known solutions, we prove the expected BPS bound for the Skyrme term in the
energy functional. The special case of unit topological charge is established
independently by mapping it to the quantum mechanical uncertainty relation.
Finally, we develop the second-order perturbation of the energy functional
around a classical solution and apply it to the charge-one baby Skyrmion,
affirming our previous results.
\bigskip
\section{The noncommutative abelian baby Skyrme model}
\noindent
The Moyal-deformed baby Skyrme model was first introduced in~\cite{iole}.
Its abelian version describes maps~$g$ from a time interval $I\ni t$ into the unitaries
$\textrm{U}({\mathcal H})$ of a Hilbert space~${\mathcal H}$
or into a Grassmannian subspace
\begin{equation}
\textrm{Gr}_k\equiv\textrm{Gr}(P)\=\frac{\textrm{U}({\mathcal H})}{\textrm{U}(\textrm{im}P)\times\textrm{U}(\textrm{ker}P)}
\end{equation}
for a hermitian projector~$P$ of finite rank~$k$.
In other words, the field variable $g(t)$ is a unitary operator-valued function of time.
Inside the Grassmannian Gr${}_k\subset\textrm{U}({\mathcal H})$, it satisfies the constraint
\begin{equation} \label{Gr}
g^2=\mathbbm{1} \qquad\Leftrightarrow\qquad g^\+=g \qquad\Leftrightarrow\qquad
g=\mathbbm{1}-2P \quad\textrm{with}\quad P^\+=P=P^2\ ,
\end{equation}
defining a hermitian projector~$P(t)$ of rank~$k$ as an alternative field variable.
The Hilbert space~${\mathcal H}$ carries a representation of the Heisenberg algebra,
\begin{equation} \label{heisenberg}
[\,a\,,\,a^{\dagger}\,] \= \mathbbm{1}\ ,
\end{equation}
which acts on the orthonormal basis states
\begin{equation}
|m\>\=\sfrac{1}{\sqrt{m!}}\,(a^{\dagger})^m\,|0\>
\qquad\text{for}\quad m\in{\mathbbm{N}}_0 \quad\text{and}\quad a|0\>=0
\end{equation}
in the following way,
\begin{equation}
a\,|m\> \= \sqrt{m}\,|m{-}1\> \ ,\qquad
a^{\dagger}\,|m\> \= \sqrt{m{+}1}\,|m{+}1\> \ ,\qquad
N\,|m\> \ := a^{\dagger} a\,|m\> \= m\,|m\>\ .
\end{equation}
With the help of the auxiliary gauge potentials
\begin{equation}
A_t=g^\+\dot{g} \qquad\textrm{and}\qquad
A_z=g^\+[a^{\dagger},g] \qquad\textrm{as well as}\qquad A_{\bar{z}}=g^\+[a\,,g]=(A_z)^\+\ ,
\end{equation}
the model is defined by its action functional,
\begin{equation}
S\=-2\pi\!\int\!\!\textrm{d} t\ \textrm{Tr}_{{\mathcal H}} \Bigl\{
\sfrac{\theta}{2}A_t^2\ +\ A_z A_{\bar{z}}\ -\
\kappa^2[A_t,A_z][A_t,A_{\bar{z}}]\ +\ \sfrac{\kappa^2}{2\theta}[A_z,A_{\bar{z}}]^2 \Bigr\}\ ,
\end{equation}
which depends on two parameters: the noncommutativity scale~$\theta\in\mathbb R_+$ of the
dimension of length$^2$ and a coupling parameter~$\kappa$ of the dimension of length.
Note that no potential term is needed, because the presence of the scale~$\theta$
stabilizes the solitonic solutions.
In the limit $\theta{\to}0$, which includes scaling away the central charge
of the Heisenberg algebra~(\ref{heisenberg}), one recovers the commutative U(1)
baby Skyrme model on $\mathbb R^{1,2}$, which is a free theory because all commutators vanish.
Sending the Skyrme coupling $\kappa{\to}0$ also removes the quartic terms, leaving us
with the Moyal-deformed abelian sigma model.
The latter has been investigated intensively and features static BPS solitons
(see, e.g.~\cite{dolepe,klalepe}).
In this paper we are concerned with static solutions to the equation of motion,
$\dot{g}=0$. These extremize the energy functional
\begin{equation}
\begin{aligned}
E &\= 2\pi\,\textrm{Tr}_{{\mathcal H}} \Bigl\{ A_z A_{\bar{z}}\ +\ \sfrac{\kappa^2}{2\theta}[A_z,A_{\bar{z}}]^2 \Bigr\}
\ =:\ E_0 + \sfrac{\kappa^2}{\theta}E_1 \\[6pt]
&\= 8\pi\,\textrm{Tr}_{{\mathcal H}} \Bigl\{ Q\,a^{\dagger} P\,a+Q\,a\,P\,a^{\dagger} \Bigr\} \\
&\ +\ 32\pi\sfrac{\kappa^2}{\theta}\,\textrm{Tr}_{{\mathcal H}} \Bigl\{
P\,a\,Q\,a^{\dagger} P\,a\,Q\,a^{\dagger} + P\,a^{\dagger} Q\,a\,P\,a^{\dagger} Q\,a -
P\,a\,Q\,a^{\dagger} P\,a^{\dagger} Q\,a - P\,a^{\dagger} Q\,a^{\dagger} P\,a\,Q\,a \Bigr\}
\end{aligned}
\end{equation}
which, for later convenience, we have expressed in terms of the projectors
\begin{equation}
P \quad\textrm{and}\quad Q=\mathbbm{1}{-}P \qquad\textrm{via}\qquad
A_z=-2(Q\,a^{\dagger} P+P\,a^{\dagger} Q) \qquad\textrm{and}\qquad A_{\bar{z}}=-2(Q\,a\,P+P\,a\,Q)\ .
\end{equation}
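For completeness, the last relation follows directly from $g^\+=g=\mathbbm{1}-2P$ and $Q=\mathbbm{1}-P$:
\begin{equation}
A_z\=g\,[a^{\dagger},g]\=-2\,(\mathbbm{1}-2P)\,[a^{\dagger},P]
\=-2\,\bigl(a^{\dagger} P+P\,a^{\dagger}-2P\,a^{\dagger} P\bigr)
\=-2\,\bigl(Q\,a^{\dagger} P+P\,a^{\dagger} Q\bigr)\ ,
\end{equation}
and analogously for $A_{\bar{z}}$.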
The energy depends only on the dimensionless combination~$\sfrac{\kappa^2}{\theta}$.
It was shown in~\cite{iole} that the diagonal projectors
\begin{equation} \label{Pk}
P^{(k)} \ :=\ \sum_{n=0}^{k-1}\,|n\>\<n|
\end{equation}
and their translates
\begin{equation}
P^{(k|\alpha)}\ :=\ \textrm{e}^{\aa^{\dagger}-\bar\alpha a}\,P^{(k)}\,\textrm{e}^{-\aa^{\dagger}+\bar\alpha a}
\qquad\textrm{for}\quad \alpha\in\mathbb C \quad\textrm{and}\quad k\in{\mathbbm{N}}
\end{equation}
extremize both $E_0$ and $E_1$.\footnote{
Actually, one can show that {\sl any\/} diagonal projector solves the baby Skyrme
equation of motion.}
The Moyal deformation is essential for this property;
in the commutative (nonabelian) case, sigma-model BPS solitons can never
obey the baby Skyrme equation of motion.
The projector $P^{(k|\alpha)}$ can be interpreted
(via the Moyal-Weyl map) as a localized rank-$k$ baby Skyrmion, formed by
$k$~rank-one baby Skyrmions sitting on top of each other.
These configurations form a complex one-parameter
subfamily inside the complex $k$-parameter family of BPS~projectors for the
noncommutative abelian sigma model (at $\kappa{=}0$), where they saturate the bound
\begin{equation} \label{E0bound}
E_0 \= 8\pi\,\textrm{Tr}_{{\mathcal H}} \bigl\{Qa^{\dagger}\!Pa+QaPa^{\dagger}\!\bigr\}
\= 8\pi\,\textrm{Tr}_{{\mathcal H}} \bigl\{ P + 2\,QaPa^{\dagger}\!\bigr\}
\= 8\pi k\,+\,16\pi\,\textrm{Tr}_{{\mathcal H}} |Q\,aP|^2 \ \ge\ 8\pi k\ .
\end{equation}
No such bound was known for $E_1$,
but the full energy of $P^{(k|\alpha)}$ was easily computed~\cite{iole},
\begin{equation}
E[P^{(k|\alpha)}]\= 8\pi\,\bigl(k + 4\sfrac{\kappa^2}{\theta}k^2\bigr)\ ,
\end{equation}
and is independent of~$\alpha$. The ensuing inequality
\begin{equation}
E[P^{(k|\alpha)}] \ \ge\ E[P^{(1|\alpha_1)}]+E[P^{(1|\alpha_2)}]+\ldots+E[P^{(1|\alpha_k)}]
\=k\,E[P^{(1)}]
\end{equation}
signals an instability of the localized rank-$k$ baby Skyrmion against decay
into its constituents, a collection of $k$ well-separated rank-one baby Skyrmions.
Indeed, a repulsive force between two rank-one baby Skyrmions was found in~\cite{iole}.
General multi-center BPS~solitons of the $\kappa{=}0$ sigma model do not solve the
baby Skyrme equation of motion, but approach a classical solution for
near-infinite mutual separation. This observation suggests a BPS bound also for
the Skyrme term,
\begin{equation} \label{E1bound}
E_1\ \ge\ 32\pi k\ .
\end{equation}
We will establish this bound in the following section.
\bigskip
\section{BPS bound for the Skyrme term}
\noindent
It is well known that, inside the full group of $\textrm{U}({\mathcal H})$, one can connect any
Grassmannian solution to the vacuum via
\begin{equation}
g(s) \= \textrm{e}^{\textrm{i}(\pi-s)P} \= \mathbbm{1}\ -\ (1{+}\textrm{e}^{-\textrm{i} s})P
\qquad\textrm{with}\quad P^\+=P=P^2 \quad\textrm{and}\quad s\in[0,\pi]\ ,
\end{equation}
which monotonically decreases the energy from that of $g(0)=\mathbbm{1}{-}2P$
to the zero value of the vacuum $g(\pi)=\mathbbm{1}$~\cite{iole}.
Therefore, noncommutative baby Skyrmions can be stable only in the Grassmannian models.
Moreover, in Gr$_k$, only configurations of $k$~well-separated rank-one baby Skyrmions
have a chance to be stable, as we argued above.
To prove this assertion, we rewrite the energy functional as
\begin{equation}
E\=8\pi\,\textrm{Tr}_{{\mathcal H}} \Bigl\{ |F|^2 + |G|^2 \ +\
2\sfrac{\kappa^2}{\theta} \bigl(F\,F^\+ - G^\+ G\bigr)^2 +
2\sfrac{\kappa^2}{\theta} \bigl(F^\+ F - G\,G^\+\bigr)^2 \Bigr\}
\end{equation}
with the abbreviations
\begin{equation}
F = P\,a\,Q \quad\textrm{and}\quad G = Q\,a\,P \qquad\Rightarrow\qquad
F^\+ = Q\,a^{\dagger} P \quad\textrm{and}\quad G^\+ = P\,a^{\dagger} Q\ .
\end{equation}
The positivity of this expression is obvious, but improving the lower bound requires
using the Heisenberg algebra~(\ref{heisenberg}) and the topological charge formula
\begin{equation}
\textrm{Tr}_{{\mathcal H}}\bigl\{F^\+ F - G\,G^\+\bigr\} \=
\textrm{Tr}_{{\mathcal H}}\bigl\{F\,F^\+ - G^\+ G\bigr\} \=
\textrm{Tr}_{{\mathcal H}}\bigl\{P\,a\,a^{\dagger}-P\,a^{\dagger} a\bigr\}\= k\ .
\end{equation}
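The middle step follows by inserting $Q=\mathbbm{1}-P$ and using the cyclicity of the trace (all traces involved are finite since $P$ has finite rank). For instance,
\begin{equation}
\textrm{Tr}_{{\mathcal H}}\bigl\{F\,F^\+ - G^\+ G\bigr\}
\=\textrm{Tr}_{{\mathcal H}}\bigl\{P\,a\,Q\,a^{\dagger}-P\,a^{\dagger} Q\,a\bigr\}
\=\textrm{Tr}_{{\mathcal H}}\bigl\{P\,a\,a^{\dagger}-P\,a^{\dagger} a\bigr\}
-\textrm{Tr}_{{\mathcal H}}\bigl\{P\,a\,P\,a^{\dagger}-P\,a^{\dagger} P\,a\bigr\}\ ,
\end{equation}
where the second trace vanishes by cyclicity, leaving $\textrm{Tr}_{{\mathcal H}}\{P\,[a\,,a^{\dagger}]\}=\textrm{Tr}_{{\mathcal H}}\,P=k$.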
Note that all four operators
\begin{equation}
F\,F^\+\ ,\quad F^\+ F\ ,\quad G\,G^\+ \quad\textrm{and}\quad G^\+ G
\end{equation}
are hermitian and non-negative definite with a rank at most equal to~$k$.
Therefore, the spectral theorem guarantees that both differences
$F\,F^\+ - G^\+ G$ and $F^\+ F - G\,G^\+$ have,
in appropriate orthonormal bases, the form
\begin{equation}
\textrm{diag}\bigl(\lambda_1,\lambda_2,\ldots,\lambda_\ell,
-\mu_1,-\mu_2,\ldots,-\mu_m,0,0,\ldots\bigr)
\qquad\textrm{with}\qquad
\sum_{i=1}^\ell \lambda_i-\sum_{j=1}^m \mu_j=k\ ,
\end{equation}
where $\lambda_i,\mu_j>0$ and $\ell+m\le 2k$. It may happen that $m=0$ (no negative
eigenvalues), but always $\ell\ge 1$ (since the trace is positive). We claim that
$\ell\le k$. Indeed, in the first case,
\begin{equation}
\textrm{im}\bigl(F\,F^\+ - G^\+ G\bigr)\,\subseteq\,\textrm{im}\,P
\qquad\Rightarrow\qquad \textrm{rk}\bigl(F\,F^\+ - G^\+ G\bigr)\le k\ ,
\end{equation}
so that the stronger condition $\ell{+}m\le k$ holds.
In the second case,
\begin{equation}
\textrm{im}\bigl(F^\+ F-G\,G^\+\bigr)\,\subseteq\,\textrm{im}\,Q\ ,
\end{equation}
and $F^\+ F-G\,G^\+$ is obviously non-positive definite on~$\textrm{ker}\,F$.
But $\textrm{ker}\,F$ is the orthogonal complement to $\textrm{im}\,F^\+$ and,
therefore, has codimension at most equal to~$k$.
In case $\ell>k$, it would have a non-zero intersection with the $\ell$-dimensional
linear span of all eigenvectors of $F^\+ F-G\,G^\+$ corresponding to the positive
eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_\ell$. The resulting contradiction shows
that $\ell\le k$ in the second case as well.
To prove our inequality~(\ref{E1bound}), we have to estimate the trace of the square
of the two difference operators, which in each case is given by
\begin{equation}
\sum_i\lambda_i^2+\sum_j\mu_j^2 \qquad\textrm{subject to}\qquad
\sum_i\lambda_i-\sum_j\mu_j=k \qquad\textrm{and}\qquad \lambda_i,\mu_j>0\ .
\end{equation}
Implementing the first subsidiary condition via Lagrange multipliers in the variational problem,
one sees that the existence of extrema is in contradiction with the positivity of the~$\mu_j$.
Therefore, a minimum is attained for any $\ell\le k$ but only for $m{=}0$
(no negative eigenvalues) and at
\begin{equation}
\lambda_1=\lambda_2=\cdots=\lambda_\ell=\sfrac{k}{\ell} \qquad\Rightarrow\qquad
\sum_i\lambda_i^2\ \ge\ \ell\bigl(\sfrac{k}{\ell}\bigr)^2\ \ge\ k\ .
\end{equation}
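Equivalently, one may bypass the variational argument: since $\sum_i\lambda_i=k+\sum_j\mu_j\ge k$, the Cauchy-Schwarz inequality gives directly
\begin{equation}
\sum_i\lambda_i^2+\sum_j\mu_j^2\ \ge\ \sum_i\lambda_i^2\ \ge\
\frac{1}{\ell}\Bigl(\sum_i\lambda_i\Bigr)^{2}\ \ge\ \frac{k^2}{\ell}\ \ge\ k
\qquad\textrm{for}\quad \ell\le k\ ,
\end{equation}
with equality precisely when $m=0$, $\ell=k$ and all $\lambda_i=1$.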
This bound is saturated only for $\ell{=}k$,
i.e.~when there are precisely $k$~eigenvalues of magnitude one.
We have thus shown that
\begin{equation}
\textrm{Tr}_{{\mathcal H}}\Bigl\{\bigl(F\,F^\+ - G^\+ G\bigr)^2\Bigr\}\ \ge\ k \qquad\textrm{and}\qquad
\textrm{Tr}_{{\mathcal H}}\Bigl\{\bigl(F^\+ F - G\,G^\+\bigr)^2\Bigr\}\ \ge\ k\ ,
\end{equation}
and (\ref{E1bound}) follows.
The complete bound in Gr$_k$ then reads
\begin{equation} \label{fullbound}
E\ \ge\ 8\pi\,k\,\bigl(1+4\sfrac{\kappa^2}{\theta}\bigr)\ .
\end{equation}
This confirms the exclusive stability of the noncommutative
abelian rank-one baby Skyrmion and widely separated collections of them,
\begin{equation}
\begin{aligned}
P^{(k|\alpha_1,\alpha_2,\ldots,\alpha_k)} &\= \sum_{i,j=1}^k
|\alpha_i\> \,\bigl( \<\alpha_.|\alpha_.\> \bigr)^{-1}_{ij} \<\alpha_j|
\qquad\textrm{for}\qquad \alpha_i\in\mathbb C \qquad\textrm{and}\qquad |\alpha_i{-}\alpha_j|\to\infty \\
&\ \approx\ \sum_i \textrm{e}^{-|\alpha_i|^2} |\alpha_i\> \<\alpha_i|\ ,
\end{aligned}
\end{equation}
employing $k$ coherent states defined by \
$|\alpha_i\>=\textrm{e}^{\alpha_i a^\+}\,|0\>$ \
and the matrix of their overlaps~$\<\alpha_i|\alpha_j\>$.
These are the only configurations saturating the BPS bound~(\ref{fullbound}).
The rank-one case Gr$_1$ is critical, so let us give it a different look.
Any rank-one hermitian projector is determined by a state vector~$|\psi\>\in{\mathcal H}$,
\begin{equation}
P \= |\psi\>\<\psi| \qquad\textrm{with}\quad \<\psi|\psi\>=1\ .
\end{equation}
After some algebra, the energy functional in Gr$_1$ takes the following form,
\begin{equation} \label{Erk1}
\begin{aligned}
E &\= 8\pi\bigl\{ \<a\,a^{\dagger}\> + \<a^{\dagger} a\,\> \bigr\} \ +\
32\pi\sfrac{\kappa^2}{\theta}\bigl\{ 1+\<a\,a^{\dagger}\>\<a^{\dagger} a\,\>-\<a\,a\,\>\<a^\+a^\+\> \bigr\} \\
&\= 8\pi\bigl\{ \<x^2\> + \<p^2\> \bigr\} \ +\
32\pi\sfrac{\kappa^2}{\theta}\bigl\{ \sfrac34 + \<x^2\>\<p^2\> - \sfrac14\<xp{+}px\>^2 \bigr\}\ ,
\end{aligned}
\end{equation}
with the {\sl connected\/} expectation values
\begin{equation}
\<Y\>=\<\psi|Y|\psi\> \qquad\textrm{and}\qquad \<YZ\>=\<\psi|YZ|\psi\>-\<\psi|Y|\psi\>\<\psi|Z|\psi\>\ .
\end{equation}
In the second line of~(\ref{Erk1}), we expressed the raising and lowering operators
through the hermitian combinations $x$ and~$p$ (quantum mechanical position and momentum),
\begin{equation}
a \= \sfrac1{\sqrt{2}}(x+\textrm{i} p) \qquad\textrm{and}\qquad a^{\dagger} \= \sfrac1{\sqrt{2}}(x-\textrm{i} p)
\qquad\Rightarrow\qquad [x,p]=\textrm{i}\mathbbm{1}\ .
\end{equation}
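The translation between the two lines of (\ref{Erk1}) is the elementary algebra of the connected correlators,
\begin{equation}
\<a\,a^{\dagger}\>+\<a^{\dagger} a\,\>\=\<x^2\>+\<p^2\>
\qquad\textrm{and}\qquad
1+\<a\,a^{\dagger}\>\<a^{\dagger} a\,\>-\<a\,a\,\>\<a^\+a^\+\>
\=\sfrac34+\<x^2\>\<p^2\>-\sfrac14\<xp{+}px\>^2\ ,
\end{equation}
which follows from $\<a\,a^{\dagger}\>=\sfrac12\bigl(\<x^2\>{+}\<p^2\>{+}1\bigr)$, $\<a^{\dagger} a\,\>=\sfrac12\bigl(\<x^2\>{+}\<p^2\>{-}1\bigr)$ and $\<a\,a\,\>=\<a^\+a^\+\>^{*}=\sfrac12\bigl(\<x^2\>{-}\<p^2\>{+}\textrm{i}\<xp{+}px\>\bigr)$.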
The Robertson uncertainty relation~\cite{rob} of elementary quantum mechanics tells us that
\begin{equation}
\<x^2\>\<p^2\>\ \ge\ \bigl|\sfrac1{2\textrm{i}}\< [x,p] \>\bigr|^2 \= \sfrac14
\qquad\Rightarrow\qquad \<x^2\> + \<p^2\> \ \ge\ 1\ ,
\end{equation}
which recovers the familiar bound~(\ref{E0bound}) for~$E_0$. To estimate~$E_1$,
we need the (stronger) Schr\"odinger uncertainty relation~\cite{sch},\footnote{
We are grateful to Reinhard F.~Werner for the hint.}
\begin{equation}
\<x^2\>\<p^2\>\ \ge\
\bigl|\sfrac12\<\{x,p\}\>\bigr|^2\ +\ \bigl|\sfrac1{2\textrm{i}}\< [x,p] \>\bigr|^2
\qquad\Rightarrow\qquad \<x^2\>\<p^2\>-\sfrac14\<xp{+}px\>^2\ \ge\ \sfrac14\ ,
\end{equation}
which bounds the second curly bracket on each line of~(\ref{Erk1}) by~1
and thus yields $E_1\ge 32\sfrac{\kappa^2}{\theta}$, as anticipated.
Mathematically, it is nothing but the Cauchy-Schwarz inequality at work.
\bigskip
\section{Second-order perturbation around baby Skyrmions}
\noindent
It is instructive to study the energy functional in the neighborhood of a classical
solution~$g$. In order to remain inside the Grassmannian, where $g^\+=g=\mathbbm{1}{-}2P$,
we set up a multiplicative perturbation expansion,
\begin{equation}
g(\epsilon) \= g\,\textrm{e}^\phi \qquad\textrm{with}\qquad \phi^\+=-\phi\ ,\quad \{\phi,g\}=0
\qquad\textrm{and}\qquad \phi=O(\epsilon)\ ,
\end{equation}
which is `odd' with respect to $P$ in the sense that
\begin{equation}
P\,\phi =\phi\,Q \quad\textrm{and}\quad \phi\,P=Q\,\phi \qquad\Leftrightarrow\qquad \phi = P\,\phi +\phi\,P\ .
\end{equation}
To second order in the perturbation, we compute
\begin{equation}
P(\epsilon)\=P\ -\ \sfrac12(\mathbbm{1}{-}2P)\bigl(\phi+\sfrac12\phi^2+O(\epsilon^3)\bigr)
\=P\ +\ \sfrac12[P,\phi]\ +\ \sfrac18\bigl[[P,\phi],\phi\bigr]\ +\ O(\epsilon^3)
\end{equation}
and introduce the abbreviations
\begin{equation}
A=A_z=g\,[a^{\dagger},g]\ ,\quad \bar{A}=A_{\bar{z}}=g\,[a\,,g]\ ,\quad
B=[a^{\dagger}{+}A\,,\phi]\ ,\quad \bar{B}=[a\,{+}\bar{A}\,,\phi]\ .
\end{equation}
The equation of motion takes the form
\begin{equation} \label{eom}
[a\,,C]+[a^{\dagger},\bar{C}]\=0 \qquad\textrm{with}\qquad
C=A-\sfrac{\kappa^2}{\theta}\bigl[A\,,[A\,,\bar{A}]\bigr] \quad\textrm{and}\quad
\bar{C}=\bar{A}-\sfrac{\kappa^2}{\theta}\bigl[\bar{A}\,,[\bar{A}\,,A]\bigr]\ .
\end{equation}
After a straightforward but lengthy calculation, the energy functional
inside Gr$_k$, expanded to second order in~$\epsilon$ around a classical
projector~$P$ subject to~(\ref{eom}), can be simplified to
\begin{equation}
\begin{aligned}
E[P(\epsilon)]\=E[P]\ +\ \pi\,&\textrm{Tr}_{{\mathcal H}}
\bigl\{ 2\,B\bar{B}-[C,\phi]\bar{B}-[\bar{C},\phi]B\bigr\} \\
+\ 2\pi\sfrac{\kappa^2}{\theta}\,&\textrm{Tr}_{{\mathcal H}}\bigl\{
2B\bar{A}A\bar{B}+2\bar{B}A\bar{A}B-B\bar{A}\bar{A}B-\bar{B}AA\bar{B}-
BA\bar{A}\bar{B}-\bar{B}\bar{A}AB \\
&\qquad +B\bar{A}B\bar{A}+\bar{B}A\bar{B}A- B\bar{A}\bar{B}A-BA\bar{B}\bar{A}
\bigr\} \ +\ O(\epsilon^3)\ .
\end{aligned}
\end{equation}
Note that $B$ and $\bar{B}$ contain~$\phi$ and are thus of~$O(\epsilon)$,
and there is a hidden $\kappa$~dependence in $C$ and~$\bar{C}$.
Let us evaluate this expression for the unique (up to translation) rank-one
baby Skyrmion,
\begin{equation}
P^{(1)}\=|0\>\<0| \qquad\Rightarrow\qquad
A=-2\;|1\>\<0| \qquad\textrm{and}\qquad C=\bigl(1{+}8\sfrac{\kappa^2}{\theta}\bigr)\,A\ ,
\end{equation}
and the most general perturbation inside~Gr$_1$,
\begin{equation}
\phi\= \sum_{n=1}^{\infty}\Bigl\{ \phi_n\,|0\>\<n|-\phi^*_n\,|n\>\<0| \Bigr\}
\qquad\textrm{with}\quad \phi_n\in\mathbb C\ .
\end{equation}
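As a quick check of the expressions for $A$ and $C$ just given: with $P^{(1)}=|0\>\<0|$ one has $Q\,a^{\dagger} P^{(1)}=|1\>\<0|$ and $P^{(1)}a^{\dagger} Q=0$, so that
\begin{equation}
A\=-2\,|1\>\<0|\ ,\qquad
\bar{A}\=A^\+\=-2\,|0\>\<1|\ ,\qquad
\bigl[A\,,[A\,,\bar{A}]\bigr]\=-8\,A
\qquad\Longrightarrow\qquad
C\=\bigl(1{+}8\sfrac{\kappa^2}{\theta}\bigr)A\ .
\end{equation}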
One finds that
\begin{equation}
B\=-\sum_{n=1}^{\infty}\Bigl\{
\phi_n\,|1\>\<n|+\sqrt{n}\,\phi_n\,|0\>\<n{-}1|-2\delta_{n1}\phi_1\,|0\>\<0|
+\sqrt{n{+}1}\,\phi^*_n\,|n{+}1\>\<0| \Bigr\}
\end{equation}
\begin{equation}
\!\:\textrm{and} \qquad
[C,\phi]\=-2\bigl(1{+}8\sfrac{\kappa^2}{\theta}\bigr)\sum_{n=1}^{\infty}\Bigl\{
\phi_n\,|1\>\<n|-\delta_{n1}\phi_1\,|0\>\<0| \Bigr\}
\qquad\qquad\qquad\qquad{}
\end{equation}
and finally
\begin{equation} \label{Epert}
E[P^{(1)}(\epsilon)]\= 8\pi\bigl(1{+}4\sfrac{\kappa^2}{\theta}\bigr)\ +\ 8\pi\,|\phi_2|^2
\ +\ 4\pi\bigl(1{+}2\sfrac{\kappa^2}{\theta}\bigr)\sum_{n=3}^{\infty}n\,|\phi_n|^2
\ +\ O(\epsilon^3)\ .
\end{equation}
A $\phi_1$ perturbation corresponds to the translational mode and does not cost
any energy. The Skyrme term does not see the $\phi_2$ perturbation either.
Clearly, the bound~(\ref{fullbound}) for $k{=}1$ is respected.
One can go beyond perturbation theory by probing all basis directions in Gr$_1$
exactly,\footnote{
We suppress the possibility of adding relative phases in $|\psi_n(\epsilon)\>$ as well.}
\begin{equation} \label{Pn}
P^{(1)}_n(\epsilon)\=|\psi_n(\epsilon)\>\<\psi_n(\epsilon)| \qquad\textrm{with}\qquad
|\psi_n(\epsilon)\> \= \cos\epsilon\,|0\>+\sin\epsilon\,|n\> \qquad\textrm{and}\qquad \epsilon\in[0,2\pi]\ .
\end{equation}
Inserting these projector families into~(\ref{Erk1}), we arrive at
\begin{equation}
E[P^{(1)}_n(\epsilon)]\= \begin{cases}
8\pi(1{+}2\sin^4\!\epsilon) + 32\pi\sfrac{\kappa^2}{\theta}(1{+}2\sin^6\!\epsilon)
& \quad \textrm{for} \quad n=1 \\
8\pi(1{+}4\sin^2\!\epsilon) + 32\pi\sfrac{\kappa^2}{\theta}(1{+}6\sin^4\!\epsilon)
& \quad \textrm{for} \quad n=2 \\
8\pi(1{+}2n\sin^2\!\epsilon) + 32\pi\sfrac{\kappa^2}{\theta}(1{+}n\sin^2\!\epsilon{+}n^2\sin^4\!\epsilon)
& \quad \textrm{for} \quad n\ge3
\end{cases}\ .
\end{equation}
To order~$\epsilon^2$, this precisely reproduces the coefficients of~$|\phi_n|^2$ in~(\ref{Epert})
after matching $|\phi_n|^2=4\epsilon^2$.
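The matching is immediate: expanding both parametrizations to first order in~$\epsilon$,
\begin{equation}
P^{(1)}_n(\epsilon)\=|0\>\<0|+\epsilon\,\bigl(|0\>\<n|+|n\>\<0|\bigr)+O(\epsilon^2)
\qquad\textrm{and}\qquad
P^{(1)}(\epsilon)\=|0\>\<0|+\sfrac12\bigl(\phi_n|0\>\<n|+\phi^*_n|n\>\<0|\bigr)+O(\epsilon^2)\ ,
\end{equation}
so that $\phi_n=2\epsilon$ and hence $|\phi_n|^2=4\epsilon^2$.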
Again, it is apparent that only $P^{(1)}_n(0)=P^{(1)}$ is stable. Beyond $O(\epsilon^2)$,
the flat valley traced by \ $P^{(1|\alpha)}=\textrm{e}^{-|\alpha|^2}|\alpha\>\<\alpha|$ \ deviates from the
curves defined in~(\ref{Pn}).
We close with a list of open problems. It would be interesting to
work out the scattering of two rank-one baby Skyrmions in the Moyal plane.
It is also an open question whether there exist abelian noncommutative
baby Skyrmions not based on diagonal projectors.
Another promising task is to deform the {\it full\/} Skyrme model (on $\mathbb R^{1,3}$)
and to construct noncommutative Skyrmions from noncommutative instantons~\cite{inst}.
\noindent
{\bf Acknowledgements}\\
\noindent
We are thankful for hospitality by UAM-Iztapalapa (O.L.) and by Leibniz University (R.L.~and M.M.).
Useful discussions with Mohab Abou Zeid and Reinhard F.~Werner are gratefully acknowledged.
This work is partially supported by DFG--CONACyT grants B330/285/11 and B330/418/11 as well as
by a DFG--RFFI collaboration grant LE 838/12-1. In addition, A.D.~was supported by the Russian
Foundation for Basic Research under the grants 13-01-00622 and 13-01-12417.
\newpage
\section{Introduction}
Fanaroff-Riley I (FR I) objects \citep{1985PASAu...6..130B} have been detected in the last decades in the very high energy (VHE) range with gamma-ray telescopes.
Among them, Centaurus A and M87 are well known thanks to the multiple campaigns dedicated to them by several observatories and imaging atmospheric Cherenkov telescopes (IACTs).
Centaurus A was observed at TeV energies by the Narrabri Optical Intensity Interferometer \citep{1975ApJ...201...82G} between 1972 and 1974, by the JANZOS experiment \citep{1993APh.....1..269J}
between 1988 and 1989, by EGRET \citep{1998A&A...330...97S} between 1991 and 1995, by CANGAROO \citep{2007ApJ...668..968K} in 1999 and 2004 and by
H.E.S.S.\citep{2009ApJ...695L..40A} with the observations performed between 2004 and 2008. On the other hand, M87 was observed at TeV gamma-ray energies by HEGRA experiment between
1998 and 1999, and by H.E.S.S., VERITAS and MAGIC \citep{2006Sci...314.1424A,2008ApJ...679..397A,2012A&A...544A..96A} between 2004 and 2007. The origin of the TeV
gamma-ray emission from these radiogalaxies is still under debate. Not all the TeV gamma-ray data obtained from the IACTs are well explained within synchrotron self-Compton (SSC) scenarios
favoring the introduction of hadronic mechanisms \citep{2012grb..confE.131F} to explain the whole spectral energy distribution (SED). The assumption of hadronic origin of the TeV gamma-ray
spectra observed from Centaurus A and M87 gives rise to a neutrino counterpart expectation from these radiogalaxies. Here we investigate two different hadronic scenarios to describe the TeV
gamma-ray data: the interaction of accelerated protons in the jet, with the second SSC peak, and in the giant lobes \citep{2014ApJ...783...44F}, with the thermal particle density. Therefore, we obtain
the expected neutrino spectra considering both scenarios for M87 and Centaurus A. We introduce the obtained neutrino spectra in a Monte Carlo simulation of a hypothetical Km$^{3}$ neutrino
telescope in the north hemisphere and we get the signal to noise ratio for one year of data-taking. Furthermore, we compare our results with the observations performed by IceCube experiment.
Considering that no particular excess of neutrino ($\nu_{\mu}\bar\nu_{\mu}$) track-event is expected for the obtained spectra, we estimate the time and the infrastructure needed to obtain a
positive signal to noise ratio from these two FR I sources.
\section{Hadronic Interactions}
Radiogalaxies have been proposed as powerful accelerators of charged particles through the Fermi acceleration mechanism \citep{2007Ap&SS.309..119R}. The Fermi-accelerated protons can be described by a simple power law
\begin{equation}\label{prot_esp}
\frac{dN_p}{dE_p}=A_p E_p^{-\alpha}\,,
\end{equation}
where $\alpha$ is the power index and $A_p$ is the proportionality constant. In this work, we consider that these protons are cooled down by p$\gamma$ and pp interactions occurring in the jet and in the giant lobes, respectively. Both interactions produce VHE gamma rays and neutrinos, as explained in the following subsections. Hereafter, primed (unprimed) quantities are defined in the comoving (observer) frame, we adopt natural units with c=$\hbar$=1, and we take the redshift z$\simeq$ 0.
Charged $\pi^+$ and neutral $\pi^0$ pions are produced in the p$\gamma$ interaction through the following channels:
\begin{eqnarray}
p\, \gamma &\longrightarrow&
\Delta^{+}\longrightarrow
\left\{
\begin{array}{lll}
p\,\pi^{0}\ && \mbox{fraction }2/3, \\
n\, \pi^{+} && \mbox{fraction }1/3,\nonumber
\end{array}\right. \\
\end{eqnarray}
Neutral pions decay into photons, $\pi^0\rightarrow \gamma\gamma$, carrying $20\%\,(\xi_{\pi^0}=0.2)$ of the proton energy $E_p$. As pointed out by Waxman and Bahcall \citep{PhysRevLett.78.2292}, the photo-pion spectrum is obtained from the efficiency of this process
\begin{equation}
f_{\pi^0,p\gamma} \simeq \frac {t'_{dyn}} {t'_{\pi^0}} =\frac{r_d}{2\,\delta_{D}\,\gamma^2_p}\int\,d\epsilon\,\sigma_\pi(\epsilon)\,\xi_{\pi^0}\,\epsilon\int dx\, x^{-2}\, \frac{dn_\gamma}{d\epsilon_\gamma} (\epsilon_\gamma=x)\,,
\end{equation}
where $t'_{dyn}$ and $t'_{\pi^0}$ are the dynamical and pion-cooling times, $\gamma_p$ is the proton Lorentz factor, $r_{d}=\delta_{D}dt$ is the comoving dissipation radius as a function of the Doppler factor ($\delta_{D}$) and the observational time ($t^{obs}$), $dn_\gamma/d\epsilon_\gamma$ is the spectrum of the target photons, and $\sigma_\pi(\epsilon_\gamma)=\sigma_{peak}\approx 9\times 10^{-28}$ cm$^2$ is the pion-production cross section at the resonance peak. Solving the integrals we obtain
{\small
\begin{eqnarray}
f_{\pi^0,p\gamma} \simeq \frac{L_\gamma\,\sigma_{peak}\,\Delta\epsilon_{peak}\,\xi_{\pi^0}}{8\pi\,\delta_D^2\,r_d\,\epsilon_{\gamma,b}\,\epsilon_{peak}}
\cases{
\left(\frac{\epsilon_{\pi^0,\gamma,c}}{\epsilon_{0}}\right)^{-1} \left(\frac{\epsilon_{\pi^0,\gamma}}{\epsilon_{0}}\right) & $\epsilon_{\pi^0,\gamma} < \epsilon_{\pi^0,\gamma,c}$\cr
1 & $\epsilon_{\pi^0,\gamma,c} < \epsilon_{\pi^0,\gamma}$\,,\cr
}
\end{eqnarray}
}
where $\Delta\epsilon_{peak}$=0.2 GeV, $\epsilon_{peak}\simeq$ 0.3 GeV, $L_\gamma$ is the luminosity and $\epsilon_{\gamma,b}$ is the break energy of the seed photon field. Considering the simple power-law proton distribution (Eq. \ref{prot_esp}) and the conservation of the photo-pion flux for this process, $f_{\pi^0,p\gamma}\,E_p\,(dN/dE)_p\,dE_p=\epsilon_{\pi^0,\gamma}\,(dN/d\epsilon)_{\pi^0,\gamma}\,d\epsilon_{\pi^0,\gamma}$, the photo-pion spectrum is given by
{\small
\begin{eqnarray}
\label{pgammam}
\left(\epsilon^2\,\frac{dN}{d\epsilon}\right)_{\pi^0,\gamma}= A_{p\gamma,\gamma} \cases{
\left(\frac{\epsilon_{\pi^0,\gamma,c}}{\epsilon_{0}}\right)^{-1} \left(\frac{\epsilon_{\pi^0,\gamma}}{\epsilon_{0}}\right)^{-\alpha+3} & $ \epsilon_{\pi^0,\gamma} < \epsilon_{\pi^0,\gamma,c}$\cr
\left(\frac{\epsilon_{\pi^0,\gamma}}{\epsilon_{0}}\right)^{-\alpha+2} & $\epsilon_{\pi^0,\gamma,c} < \epsilon_{\pi^0,\gamma}$\,,\cr
}
\end{eqnarray}
}
\noindent
where $\epsilon_0$ is the energy normalization; the proportionality constant of p$\gamma$ interaction is given by
\begin{equation}\label{Apg}
A_{p\gamma,\gamma}= \frac{L_\gamma\,\epsilon^2_0\,\sigma_{peak}\,\Delta\epsilon_{peak}\left(\frac{2}{\xi_{\pi^0}}\right)^{1-\alpha}}{4\pi\,\delta_D^2\,r_d\,\epsilon_{\gamma,b}\,\epsilon_{peak}} \,A_p\,,
\end{equation}
\noindent and the photo-pion break energy is given by
\begin{equation}
\epsilon_{\pi^0,\gamma,c}\simeq 31.87\,{\rm GeV}\, \delta_D^2\, \left(\frac{\epsilon_{\gamma,b}}{ {\rm MeV}}\right)^{-1}\,.
\label{pgamma}
\end{equation}
Eq. (\ref{pgammam}) describes the contribution of the photo-pion emission to the SED.
\\
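As a purely illustrative aid (not part of the original analysis), the following Python snippet evaluates the photo-pion break energy of Eq. (\ref{pgamma}); the Doppler factors and the seed-photon break energy used here are placeholder values chosen only to show the scaling, not fitted parameters.
\begin{verbatim}
# Illustrative evaluation of the photo-pion break energy, Eq. (pgamma):
#   eps_c ~ 31.87 GeV * delta_D^2 * (eps_gamma_b / MeV)^(-1)
# The Doppler factors and the seed-photon break energy below are
# placeholder values, used only to show the scaling.

def break_energy_GeV(delta_D, eps_gamma_b_MeV):
    """Photo-pion break energy in GeV."""
    return 31.87 * delta_D**2 / eps_gamma_b_MeV

for delta_D in (1.0, 3.0, 10.0):
    print(delta_D, break_energy_GeV(delta_D, eps_gamma_b_MeV=1.0), "GeV")
\end{verbatim}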
Pions are also produced in the pp interaction by means of the channel \citep{2008PhR...458..173B,2003ApJ...586...79A,2002MNRAS.332..215A}
\begin{eqnarray}
p\,+ p &\longrightarrow& \pi^++\pi^-+\pi^0 + X.
\label{pp}
\end{eqnarray}
\noindent
Once again $\pi^0\rightarrow \gamma\gamma$, carrying in this case $33\%\,(\xi_{\pi^0}=0.33)$ of the proton energy $E_p$. Assuming that the accelerated protons interact in the lobe region, characterized by its size $R$ and thermal particle density $n_p$, we describe the efficiency of the process through
\begin{equation}
f_{\pi^0,pp}\approx R\,n_p\,k_{pp}\,\sigma_{pp}\,,
\end{equation}
where $\sigma_{pp}\simeq 30\,[0.95 +0.06\,{\rm ln}(E/{\rm GeV})]$ mb is the nuclear interaction cross section and $k_{pp}=1/2$ is the inelasticity coefficient. Taking into account the proton distribution (Eq. \ref{prot_esp}) and the conservation of the $\pi^0$ flux \citep{2003ApJ...586...79A, 2012ApJ...753...40F, 2002MNRAS.332..215A}
\begin{equation}\label{fpp}
f_{\pi^0 , pp}(E_p)\,E_p\,\left(\frac{dN_p}{dE_p}\right)^{obs}\,dE_p=\epsilon_{\gamma, {\pi^0}}\,\left(\frac{dN_\gamma}{d\epsilon_\gamma}\right)^{obs}_{\pi^0}\,d\epsilon_{\gamma, {\pi^0}},
\end{equation}
then the observed gamma-ray spectrum can be written as
\begin{equation}
\label{spe_pp}
\left(\epsilon^{2}_\gamma\, \frac{dN_\gamma}{d\epsilon_\gamma}\right)^{obs}_{\pi^0}= A_{pp,\gamma}\, \left(\frac{\epsilon_{\gamma,\pi^0}}{{\rm \epsilon_0}}\right)^{2-\alpha},
\end{equation}
where the proportionality constant of pp interaction is
\begin{equation}
A_{pp,\gamma}= R\,n_p\,k_{pp}\,\sigma_{pp}\,(2/\xi_{\pi^0})^{2-\alpha}\,\epsilon_0^2\,A_p\,.
\label{App}
\end{equation}
Eq. (\ref{spe_pp}) gives the contribution of pp interactions to the gamma-ray spectrum produced in the lobes.
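For orientation, a minimal Python sketch of the pp efficiency and of the inputs entering Eq. (\ref{App}) is given below; the lobe size, thermal density and proton energy are placeholder values, not the parameters adopted for Centaurus A.
\begin{verbatim}
import numpy as np

MB_TO_CM2 = 1e-27   # 1 millibarn in cm^2

def sigma_pp_cm2(E_GeV):
    # sigma_pp ~ 30 (0.95 + 0.06 ln(E/GeV)) mb
    return 30.0 * (0.95 + 0.06 * np.log(E_GeV)) * MB_TO_CM2

def f_pp(E_GeV, R_cm, n_p_cm3, k_pp=0.5):
    # Efficiency of the pp channel: f ~ R * n_p * k_pp * sigma_pp
    return R_cm * n_p_cm3 * k_pp * sigma_pp_cm2(E_GeV)

# Placeholder lobe size (~100 kpc) and thermal density (~1e-4 cm^-3),
# used only to illustrate the order of magnitude of the efficiency.
R_cm = 100.0 * 3.086e21      # 100 kpc in cm
n_p  = 1e-4                  # cm^-3
print(f_pp(E_GeV=1e6, R_cm=R_cm, n_p_cm3=n_p))
\end{verbatim}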
\section{The VHE neutrino expectation}
The hadronic interactions described above produce a neutrino counterpart in the jet (through p$\gamma$) and in the lobes (through pp) of the AGN
by means of $\pi^{\pm}\rightarrow e^{\pm}+\nu_{\mu}/\bar{\nu}_{\mu}+\bar{\nu}_{\mu}/\nu_{\mu}+\nu_{e}/\bar{\nu}_{e}$. The effect of neutrino oscillations on the
expected flux balances the number of neutrinos per flavor \citep{2008PhR...458..173B} arriving at Earth. Assuming the described interactions, we expect the VHE gamma rays
and the respective neutrino counterpart to have SEDs strictly linked to the SED of the accelerated primary protons.
The spectrum of the expected neutrinos can be written as:
\begin{equation}
\frac{dN_\nu}{dE_\nu}=A_\nu\,\left(\frac{E_\nu}{{\rm TeV}}\right)^{-\alpha_\nu},
\label{espneu1}
\end{equation}
where the normalization factor, A$_{\nu}$, is calculated by correlating the neutrino luminosity with the TeV photon flux \citep{2008PhR...458..173B}. This correlation is given by:
\begin{equation}
\int \frac{dN_\nu}{dE_\nu}\,E_\nu\,dE_\nu=K\int \frac{dN_\gamma}{dE_\gamma}\,E_\gamma\,dE_\gamma\,.
\end{equation}
where $K=1$ for the pp interaction and $K=1/4$ for the p$\gamma$ interaction. The spectral indices of the neutrino and
gamma-ray spectra are taken to be similar, $\alpha\simeq \alpha_\nu$ \citep{2008PhR...458..173B}, while the energy carried is slightly different: each neutrino carries 5$\%$ of the initial proton energy ($E_\nu=1/20\,E_p$) while each photon carries around 16.7$\%$. With these considerations, the normalization factors are related by
\begin{equation}
A_{(pp,\nu/p\gamma,\nu)}=K\cdot A_{(pp,\gamma/p\gamma,\gamma)}\,\epsilon_0^{-2}\, (2)^{-\alpha+2},
\label{nu-gamma}
\end{equation}
where A$_{pp,\gamma}$ and A$_{p\gamma,\gamma}$ are given by Eqs. (\ref{App}) and (\ref{Apg}), and the factor $2^{-\alpha+2}$ is introduced because each neutrino carries $1/2$ of the photon energy.
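As an example of how Eq. (\ref{nu-gamma}) is applied in practice, the short Python sketch below converts a gamma-ray normalization into a neutrino normalization; it assumes $\epsilon_0=1$ TeV, and the input values are taken from the pp fit of the M87 H.E.S.S. data (Table \ref{table2}) purely for illustration.
\begin{verbatim}
def A_nu(A_gamma, alpha, K, eps0_TeV=1.0):
    # Eq. (nu-gamma): A_nu = K * A_gamma * eps0^-2 * 2^(2 - alpha).
    # With A_gamma in TeV cm^-2 s^-1 and eps0 in TeV, A_nu comes out
    # in TeV^-1 cm^-2 s^-1.  eps0 = 1 TeV is an assumed normalization.
    return K * A_gamma * eps0_TeV**(-2) * 2.0**(2.0 - alpha)

# Example with the pp (K = 1) best-fit values for the M87 H.E.S.S. data:
# A_pp_gamma = 12.0e-13 TeV cm^-2 s^-1, alpha = 2.22.
print(A_nu(A_gamma=12.0e-13, alpha=2.22, K=1.0))
\end{verbatim}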
Therefore, extending the expected neutrino spectrum up to the maximum energies detectable by a Km$^{3}$ Cherenkov detector array, we can obtain the number of expected neutrino events as:
\begin{equation}
N_{ev} \approx\,T \rho_{water/ice}\,N_A\,V_{eff}\,\int_{E_{min}}^{E_{max}}\sigma_{\nu}\,A_{(pp,\nu/p\gamma,\nu)}\left(\frac{E_{\nu}}{TeV}\right)^{-\alpha}dE_{\nu}.
\label{nuMCevt}
\end{equation}
where $N_A$ is Avogadro's number, $\rho_{water/ice}$ is the density of the medium surrounding the neutrino telescope, $E_{min}$ and $E_{max}$ are the low- and high-energy thresholds considered,
and $V_{eff}$ is the $\nu_{\mu}+\bar\nu_{\mu}$ effective volume, obtained through Monte Carlo simulations of a hypothetical Km$^{3}$ neutrino telescope with the neutrino source at the declination of Centaurus A or M87.
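A minimal numerical sketch of Eq. (\ref{nuMCevt}) is given below; the cross-section parametrization, effective volume and spectrum normalization are placeholder inputs, whereas in the actual analysis $V_{eff}$ and $\sigma_\nu$ come from the Monte Carlo simulation.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

N_A = 6.022e23    # Avogadro's number (nucleons per gram of water/ice)

def expected_events(T_s, rho_g_cm3, V_eff_cm3, A_nu, alpha,
                    E_min_TeV, E_max_TeV, sigma_nu_cm2):
    # Numerical version of Eq. (nuMCevt); sigma_nu_cm2(E_TeV) must
    # return the neutrino-nucleon cross section in cm^2.
    integrand = lambda E: sigma_nu_cm2(E) * A_nu * E**(-alpha)
    integral, _ = quad(integrand, E_min_TeV, E_max_TeV)
    return T_s * rho_g_cm3 * N_A * V_eff_cm3 * integral

# Toy cross section (placeholder only): grows linearly with energy and
# is normalized to ~1e-35 cm^2 at 1 TeV; a realistic parametrization
# should be used instead.
toy_sigma = lambda E_TeV: 1e-35 * E_TeV

print(expected_events(T_s=3.15e7, rho_g_cm3=1.0, V_eff_cm3=1e15,
                      A_nu=1e-12, alpha=2.2,
                      E_min_TeV=1.0, E_max_TeV=1e3,
                      sigma_nu_cm2=toy_sigma))
\end{verbatim}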
\section{Analysis and Results}
The TeV gamma-ray fluxes collected from M87 in the last decade by H.E.S.S., VERITAS and MAGIC cannot be explained with the one-zone SSC model introduced by the Fermi collaboration \citep{2009ApJ...707...55A} to describe the SED. Fig. \ref{Spectra} shows the TeV spectra collected by the IACT experiments, which require the introduction of multi-zone leptonic models or hadronic scenarios. The high-energy components of the radio galaxy Centaurus A were imaged by the Fermi and H.E.S.S. observatories. Also for Centaurus A, the entire SED cannot be
described by a one-zone SSC model, suggesting the presence of hadronic emission components. For both Fanaroff-Riley I objects we fit the TeV spectra with the hadronic models introduced in
Eqs. (\ref{pgammam}) and (\ref{spe_pp}), taking as free parameters of the fits $A_{pp,\gamma}$, $A_{p\gamma,\gamma}$ and the respective $\alpha$ obtained for the pp and p$\gamma$ scenarios. No extragalactic background light (EBL) model has been assumed for the spectra of Centaurus A and M87, located at 3.4 Mpc \citep{1998A&ARv...8..237I} and 16.7 Mpc \citep{2010A&A...524A..71B}, respectively. For the case of Centaurus A we study the pp scenario for both giant lobes \citep{2014ApJ...783...44F}, thanks to the individual SEDs obtained by the Fermi satellite (see Fig. \ref{Spectra}).
\begin{table}[ht!]
\centering\begin{tabular}{ l c c c c }
\hline
\scriptsize{} & \scriptsize{Parameter} & \scriptsize{H.E.S.S.} & \scriptsize{MAGIC} & \scriptsize{VERITAS} \\
\hline
\scriptsize{Proportionality constant} ($10^{-13}\,{\rm TeV/cm^2/s}$) & \scriptsize{$ A_{p\gamma,\gamma} $} & \scriptsize{ $13.3\pm 0.096$} & \scriptsize{ $3.38\pm 0.431$} &
\scriptsize{ $5.39\pm 0.94$}\\
\scriptsize{Power index} & \scriptsize{$ \alpha $} & \scriptsize{ $2.28\pm 0.052$} & \scriptsize{ $2.97\pm 0.121$} & \scriptsize{ $2.70\pm 0.23$} \\
\scriptsize{Chi-square/d.o.f} & \scriptsize{$ \chi^2/{\rm d.o.f}$} & \scriptsize{ $14.62/7$} & \scriptsize{ $12.59/4$} & \scriptsize{ $4.794/4$} \\
\hline
\end{tabular}
\caption{Set of parameters obtained for the best fit of M87 SED with p$\gamma$ model}\label{table1}
\end{table}
\begin{table}[ht!]
\centering\begin{tabular}{ l c c c c}
\hline
\scriptsize{} & \scriptsize{Parameter} & \scriptsize{H.E.S.S.} & \scriptsize{MAGIC} & \scriptsize{VERITAS} \\
\hline
\scriptsize{Proportionality constant} ($10^{-13}\,{\rm TeV/cm^2/s}$) & \scriptsize{$ A_{pp,\gamma} $} & \scriptsize{ $12.0\pm 0.08$} & \scriptsize{ $4.00\pm 0.43$} &
\scriptsize{ $5.11\pm 0.89$}\\
\scriptsize{Power index} & \scriptsize{$ \alpha $} & \scriptsize{ $2.22\pm 0.05$} & \scriptsize{ $2.33\pm 0.12$} & \scriptsize{ $2.48\pm 0.20$} \\
\scriptsize{Chi-square/d.o.f.} & \scriptsize{$ \chi^2/{\rm d.o.f.}$} & \scriptsize{ $13.93/7$} & \scriptsize{ $7.28/4$} & \scriptsize{ $3.60/4$} \\
\hline
\end{tabular}
\caption{Set of parameters obtained for the best fit of M87 SED with pp model}\label{table2}
\end{table}
The values of $A_{pp,\gamma}$, $A_{p\gamma,\gamma}$ and $\alpha$ obtained for the pp and p$\gamma$ models are reported in Tables \ref{table1} and \ref{table2} for M87, and in Tables
\ref{table3} and \ref{table4} for Centaurus A \citep{2014MNRAS.441.1209F}. From these parameters we obtain the neutrino fluxes expected in the two scenarios through Eq. (\ref{nu-gamma}).
We then calculate the neutrino ($\nu_\mu\bar{\nu}_\mu$) event rate in a Km$^{3}$ Cherenkov telescope as explained by Eq. (\ref{nuMCevt}). We compute $V_{eff}$ through Monte Carlo
simulations considering the positions of both sources (M87 and Centaurus A) and a Km$^{3}$ neutrino telescope located in the northern hemisphere. As shown in Fig. \ref{sig-to-noise-nu}, we also take into
account two neutrino ``backgrounds'': atmospheric neutrinos and extragalactic diffuse neutrinos. The analysis does not predict a significant excess of the M87 and Centaurus A
neutrino signals (see Fig. \ref{sig-to-noise-nu}) within a few years of observation for either hadronic model (pp or p$\gamma$). This result is in accordance with the observations made by the IceCube
experiment so far. Only the neutrino flux obtained with the hadronic (pp) fit of the H.E.S.S. data for M87 could be detected within several years of observation by a global neutrino
network\footnote{Global network of neutrino telescopes (IceCube, KM3NeT, ANTARES and Baikal): an infrastructure of several Km$^{3}$ detectors with complete coverage of the sky.} (GNN). Because of the low
equatorial declination of M87, the cross-correlation between IceCube and a future northern Km$^{3}$ neutrino telescope will be favorable.
\begin{figure}[h!]
\hspace{-1.5cm}
$\begin{array}{cc}
\includegraphics[width=0.66\textwidth]{M87-tot-fit-spec.pdf} &
\includegraphics[width=0.48\textwidth]{nsfit.pdf}
\end{array}$
\caption{On the left: correlation of the GeV flux (Fermi data) with the TeV fluxes (VERITAS, MAGIC and H.E.S.S.) for M87. On the right:
data for Centaurus A, from the Wilkinson Microwave Anisotropy Probe (WMAP) at the low energy of $10^{-5}$ eV up to the faint gamma-ray flux imaged by the Fermi-LAT at energies greater than 100 MeV.}\label{Spectra}
\end{figure}
\begin{table}[ht!]
\centering
\begin{tabular}{ l c c c}
\hline
Lobes (Fermi data) & \scriptsize{} & \scriptsize{North} & \scriptsize{South} \\
\hline
\scriptsize{} & \scriptsize{Symbol} & \scriptsize{Value} & \scriptsize{Value} \\
\hline
\scriptsize{Proportionality constant} ($10^{-12}\,{\rm TeV/cm^2/s}$) & \scriptsize{$ A_{pp,\gamma} $} & \scriptsize{ $5.10\pm 0.96$} & \scriptsize{ $8.07\pm 1.58$}\\
\scriptsize{Spectral index} & \scriptsize{$\alpha$} & \scriptsize{2.52$\pm$ 0.23} & \scriptsize{2.59$\pm$ 0.25}\\
\hline
\end{tabular}
\caption{Set of parameters obtained for the best fit of Centaurus A SED with pp model}\label{table3}
\end{table}
\begin{table}[ht!]
\centering\begin{tabular}{ l c c c}
\hline
Jet & \scriptsize{} & \scriptsize{Fermi data} & \scriptsize{H.E.S.S. data} \\
\hline
\scriptsize{} & \scriptsize{Symbol} & \scriptsize{Value} & \scriptsize{Value} \\
\hline
\scriptsize{Proportionality constant} ($10^{-12}\,{\rm TeV/cm^2/s}$) & \scriptsize{$ A_{p\gamma,\gamma} $} & \scriptsize{ $2.37\pm 0.61$} & \scriptsize{ $0.25 \pm 0.05$}\\
\scriptsize{Spectral index} & \scriptsize{$\alpha$} & \scriptsize{2.23$\pm$ 0.03} & \scriptsize{2.81$\pm$ 0.38}\\
\hline
\end{tabular}
\caption{Set of parameters obtained for the best fit of Centaurus A SED with p$\gamma$ model}\label{table4}
\end{table}
\begin{figure}[h!]
\hspace{-1.5cm}
$\begin{array}{cc}
\includegraphics[width=0.6\textwidth]{M87-nu-pp-evtrate.pdf} &
\includegraphics[width=0.6\textwidth]{M87-nu-pgamma-evtrate.pdf} \\
\includegraphics[width=0.6\textwidth]{CenA-nu-pp-evtrate.pdf} &
\includegraphics[width=0.6\textwidth]{CenA-nu-pgamma-evtrate.pdf}
\end{array}$
\caption{Signal-to-noise ratio for M87 (top) and Centaurus A (bottom) in a northern Km$^{3}$ neutrino telescope, considering a 1$^{\circ}$-square region around the positions of the two sources. As ``background'' we take into account the atmospheric neutrinos and the extragalactic diffuse neutrinos. The left panels show the plots for the pp model, while the right panels show those for the p$\gamma$ model.}\label{sig-to-noise-nu}
\end{figure}
\section*{Acknowledgements}
This work was supported by a Luc Binette scholarship and by the project PAPIIT IN-108713.
\section{Introduction}
Double-beta decay~\cite{DBDReview} is a rare spontaneous process in which the atomic number of a nucleus changes by two units.
It can only occur in some even-even nuclei where single-beta decay is energetically forbidden.
The two-neutrino double-beta decay mode (2$\nu$DBD) conserves lepton number and is allowed by the Standard Model of particle physics.
It is the rarest decay ever observed, with half-lives in the range (10$^{19}$ -- 10$^{24}$)$\,$y~\cite{2nudbd}.
Neutrinoless double-beta decay (0$\nu$DBD) has never been observed and the half-life lower limits are in the range (10$^{21}$ -- 10$^{26}$)$\,$y.
Its observation would prove that the total lepton number is not conserved, and that neutrinos are Majorana particles~\cite{Valle}.
Under the assumption that 0$\nu$DBD takes place by the exchange of a light Majorana neutrino, the inverse of the decay half-life can be written as
\begin{equation}\label{eq:decayrate}
\frac{1}{T^{0\nu}_{1/2}} ~ = ~
G^{0\nu} \left| M^{0\nu}\right|^2 \frac{\left< m_{\beta\beta}\right>^2}{m_e^2},
\end{equation}
where $G^{0\nu}$ represents the decay phase-space, $M^{0\nu}$ is a matrix element that accounts for the nuclear part of the decay, $m_e$ is the electron mass, and $\left<m_{\beta\beta}\right>=\left|\sum_i{U^2_{ei}m_i}\right|$ is a function of the neutrino masses and the neutrino mixing matrix elements.
Formula~\ref{eq:decayrate} links the experimentally measurable decay half-life to $\left<m_{\beta\beta}\right>$, which encodes information about neutrino properties.
$\left<m_{\beta\beta}\right>$ can be expressed as a function of four unknown parameters: the sign of $\Delta$m$^2_{23}$, the mass of the lightest neutrino, and two Majorana phases.
This is represented in fig.~\ref{fig:mbb_mlight}, reprinted from~\cite{delloro}.
\begin{figure}[!b]
\centering
\includegraphics[width=0.65\textwidth]{fig1}
\caption{Allowed value for $\left<m_{\beta\beta}\right>$ as a function of the mass of the lightest neutrino for normal (NH, red band) and inverted (IH, green band) neutrino mass hierarchy.
Reprinted from~\cite{delloro}.}
\label{fig:mbb_mlight}
\end{figure}
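To make the role of formula~\ref{eq:decayrate} concrete, the following Python snippet inverts it to obtain $\left<m_{\beta\beta}\right>$ from a half-life; the phase-space factor and matrix element used here are order-of-magnitude placeholders, not values from the nuclear-structure literature discussed below.
\begin{verbatim}
import math

M_E_EV = 510998.95   # electron mass in eV

def m_bb_eV(T_half_y, G_per_y, M_nucl):
    # Inversion of formula (eq:decayrate):
    #   <m_bb> = m_e / sqrt(T_1/2 * G^{0nu} * |M^{0nu}|^2)
    return M_E_EV / math.sqrt(T_half_y * G_per_y * M_nucl**2)

# Placeholder nuclear inputs (order of magnitude only); real analyses
# use published phase-space factors and matrix-element ranges.
print(m_bb_eV(T_half_y=1e26, G_per_y=1e-14, M_nucl=3.0))   # ~0.17 eV
\end{verbatim}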
If the assumption of the exchange of a light Majorana neutrino is valid, then
0$\nu$DBD could also give information on the neutrino mass hierarchy and on the absolute mass scale.
Moreover, this assumption makes it possible to compare the sensitivity of 0$\nu$DBD searches based on different isotopes.
This comes at the cost of introducing uncertainties from the theoretical calculation of $M^{0\nu}$, which cannot be performed exactly.
Several models exist that make different approximations, leading to results whose reliability is difficult to assess.
A big effort has been put into these calculations in recent years, resulting in better agreement between different models.
See e.g.~\cite{Engel} for a recent review.
In principle 0$\nu$DBD has a clear experimental signature.
The sum-energy of the two emitted electrons is fixed and equal to Q$_{\beta\beta}$, the Q-value of the decay.
This signal is qualitatively different from that of 2$\nu$DBD, in which part of the energy is carried away undetected by the two anti-neutrinos, resulting in a continuous energy spectrum extending up to Q$_{\beta\beta}$.
The value of Q$_{\beta\beta}$ and other relevant quantities are reported in table~\ref{tab:isotopes} for a selection of experimentally interesting isotopes.
Q$_{\beta\beta}$ lies in the energy range of natural radioactivity, which represents the dominant source of background for experiments.
Since the signal must be maximized and the background minimized, all experiments searching for 0$\nu$DBD share the need for a large number of source isotopes, a low background and a good energy resolution.
After a brief review of the current status of the experimental searches, we consider as a target sensitivity for future experiments the full exploration of the values of $\left<m_{\beta\beta}\right>$ corresponding to the inverted-hierarchy region of the neutrino mass.
We then conclude discussing the experimental techniques that will be used and some of the proposed future experiments.
\begin{table}[bt]
\begin{center}
\begin{tabular}{r|cccc}
Isotope & Q$_{\beta\beta}$ [keV] & i.a. [\%] & T$_{1/2}$(0$\nu$DBD) [10$^{25}\,$y] & T$_{1/2}$(2$\nu$DBD) [10$^{21}\,$y] \\
\hline
$^{76}$Ge & 2039 & 7.61 & $>$5.3~\cite{Ge0nu} & 1.65$^{+0.14}_{-0.12}$\\
$^{82}$Se & 2995 & 8.73 & $>$0.036~\cite{Nemo3} & 0.092$\pm$0.007 \\
$^{100}$Mo & 3034 & 9.63 & $>$0.11~\cite{Nemo3} & 0.0071$\pm$0.0004 \\
$^{130}$Te & 2528 & 34.17 & $>$0.40~\cite{Te0nu} & 0.69$\pm$0.13 \\
$^{136}$Xe & 2479 & 8.87 & $>$11~\cite{Xe0nu} & 2.19$\pm$0.06 \\
\end{tabular}
\caption{Q-value, isotopic abundance, 0$\nu$DBD half-life limits and 2$\nu$DBD half-life measurements (from~\cite{2nudbd}) for a selection of double-beta decay candidate isotopes.}
\label{tab:isotopes}
\end{center}
\end{table}
\section{Present status of experimental searches}\label{sec:presentstatus}
The best 0$\nu$DBD half-life limits available at present come from experiments studying $^{130}$Te, $^{76}$Ge and $^{136}$Xe (see table~\ref{tab:isotopes}).
These translate into $\left<m_{\beta\beta}\right>$ upper limits of (270--760)$\,$meV, (150--330)$\,$meV and (61--165)$\,$meV for $^{130}$Te, $^{76}$Ge and $^{136}$Xe respectively.
The $^{130}$Te result was obtained by CUORE-0~\cite{Te0nu} with an array of 52 TeO$_2$ crystals operated as cryogenic calorimeters.
The detector had a total $^{130}$Te mass of 11$\,$kg, an energy resolution of 5$\,$keV FWHM and a background at Q$_{\beta\beta}$ of 0.058$\,$counts/(keV$\cdot$kg$\cdot$y).
CUORE~\cite{cuore} is a larger and more sensitive version of CUORE-0 that will start data taking in 2017.
The mass of $^{130}$Te is 206$\,$kg and the expected background at Q$_{\beta\beta}$ is 0.01$\,$counts/(keV$\cdot$kg$\cdot$y).
If these performance parameters will be met, CUORE will have a $\left<m_{\beta\beta}\right>$ sensitivity in the range (50 -- 130)$\,$meV.
The $^{76}$Ge result was obtained by GERDA~\cite{Ge0nu} using bare germanium detectors enriched in $^{76}$Ge.
The total detector mass is 35.2$\,$kg, with enrichment in $^{76}$Ge going from 7.8\% to 87\%.
The energy resolution is 3$\,$keV in the best detectors, and the background at Q$_{\beta\beta}$ is as low as 0.001$\,$counts/(keV$\cdot$kg$\cdot$y).
The GERDA collaboration plans to continue taking data in the current configuration until an exposure of 100$\,$kg$\cdot$y will be obtained.
The expected half-life sensitivity will be about 2$\cdot$10$^{26}\,$y.
The $^{136}$Xe result was obtained by the KamLAND-Zen~\cite{Xe0nu} collaboration by dissolving xenon in the ultra-pure liquid scintillator of the KamLAND detector.
The total amount of $^{136}$Xe was 340$\,$kg, but the useful isotope mass was considerably reduced by the fiducial-volume cut imposed during data analysis.
The poor energy resolution, 270$\,$keV FWHM, was compensated by the large mass and by the extremely low background of $\sim$160$\,$counts/(ton$\cdot$y) in the region of interest (ROI).
In 2017 the KamLAND-Zen collaboration plans to reach a sensitivity of about 40$\,$meV on $\left<m_{\beta\beta}\right>$ by performing minor upgrades to the current detector configuration.
\section{Sensitivity of future searches}
Present and near-future experimental searches have sensitivities on $\left<m_{\beta\beta}\right>$ not better than (40--50)$\,$meV.
In this section we discuss the general features of future experiments taking as a target a sensitivity on $\left<m_{\beta\beta}\right>$ of about 15$\,$meV, which corresponds to the full exploration of the inverted region of the neutrino mass-hierarchy.
Using nuclear matrix elements from~\cite{Engel}, this translates into half-life sensitivities roughly in the range (10$^{27}$ -- 10$^{28}$)$\,$y.
We begin by discussing the formulas that give a rough estimate of the sensitivity $S^{0\nu}$ of a given experiment.
They take different forms depending on whether or not the background can be considered negligible.
If the background is not negligible, then we have
$$
S^{0\nu}\;\propto\;\eta\,\varepsilon\,\sqrt{\frac{M\,t}{b\,\Delta E}},
$$
where $\eta$ is the isotopic abundance of the DBD isotope, $\varepsilon$ is the detection efficiency, M is the total detector mass, t is the measurement time, b is the background index expressed in counts/(keV$\cdot$kg$\cdot$y) and $\Delta$E is the FWHM energy resolution.
If instead the background is negligible, we have
$$
S^{0\nu}\;\propto\;\eta\,\varepsilon\,M\,t.
$$
This condition is verified when the total number of expected background counts over the whole duration of the experiment is negligible, i.e. when M$\,$t$\,$b$\,\Delta$E$\,<\,$1, and is obviously preferable because the sensitivity scales linearly with the detector mass.
We note that at present GERDA is the only experiment that has been able to reach this zero-background condition.
For a given 0$\nu$DBD half-life, the number of expected signal counts N$_s$ can be expressed as N$_s$ = $\ln 2\,\varepsilon\,N_{\beta\beta}\,t\,/\,T^{0\nu}_{1/2}$, where $\varepsilon$ is the detection efficiency and N$_{\beta\beta}$ is the number of isotopes under observation.
Even when the zero-background condition is met, observing a number of signal counts of the order of 1 over a data-taking period of the order of one year would demand a number of source isotopes N$_{\beta\beta}\,\sim\,T^{0\nu}_{1/2}/(1\,y)$.
Therefore about (10$^{27}$ -- 10$^{28}$) source isotopes are needed for a sensitivity of $\sim$15$\,$meV on $\left<m_{\beta\beta}\right>$.
This roughly corresponds to one ton of active source mass.
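The scaling relations above can be summarized in a small Python sketch; the ton-scale configuration used in the example is a placeholder, not the specification of any of the experiments discussed below.
\begin{verbatim}
import math

def zero_background(M_kg, t_y, b_keV_kg_y, dE_keV):
    # Zero-background condition: M * t * b * dE < 1
    return M_kg * t_y * b_keV_kg_y * dE_keV < 1.0

def relative_sensitivity(eta, eff, M_kg, t_y, b_keV_kg_y, dE_keV):
    # Up to a common proportionality constant:
    #   eta*eff*M*t                in the background-free regime,
    #   eta*eff*sqrt(M*t/(b*dE))   otherwise.
    if zero_background(M_kg, t_y, b_keV_kg_y, dE_keV):
        return eta * eff * M_kg * t_y
    return eta * eff * math.sqrt(M_kg * t_y / (b_keV_kg_y * dE_keV))

# Placeholder ton-scale configuration: 1000 kg, 5 y of data taking,
# b = 1e-4 counts/(keV kg y), 5 keV FWHM -> M*t*b*dE = 2.5, i.e. the
# zero-background condition is not quite reached.
print(zero_background(1000.0, 5.0, 1e-4, 5.0))
\end{verbatim}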
Background from radioactivity represents the major concern for most present experiments, and will be probably the most challenging scientific problem to address in the future.
However, even assuming that this background contribution could be made negligible, there is still a background coming from the high-energy tail of the 2$\nu$DBD spectrum that cannot be eliminated.
The number of background counts from 2$\nu$DBD in an energy window of width $\Delta$E around Q$_{\beta\beta}$ can be expressed as~\cite{DBDReview}
$$
N_{2\nu}~\sim~\frac{1}{T^{2\nu}_{1/2}}\frac{\Delta E^6}{Q^{5}_{\beta\beta}}.
$$
It is therefore preferable to have small $\Delta$E and large Q$_{\beta\beta}$ and T$^{2\nu}_{1/2}$.
Moreover, even when other background sources are made negligible, energy resolution still remains an important parameter for 0$\nu$DBD searches.
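The strong dependence on the energy resolution is easy to quantify; the short sketch below compares the 2$\nu$DBD leakage for two resolutions, using the $^{136}$Xe values of table~\ref{tab:isotopes} only as an example.
\begin{verbatim}
def n2nu_scaling(T2nu_y, dE_keV, Q_keV):
    # Scaling of the irreducible 2nuDBD background in a window dE
    # around Q_bb:  N_2nu ~ (1/T_2nu) * dE^6 / Q^5  (arbitrary units).
    return (1.0 / T2nu_y) * dE_keV**6 / Q_keV**5

# Ratio of the 2nuDBD leakage for 100 keV vs 5 keV FWHM at fixed
# isotope (136Xe: T_2nu ~ 2.19e21 y, Q_bb = 2479 keV): (100/5)^6 ~ 6e7.
print(n2nu_scaling(2.19e21, 100.0, 2479.0) /
      n2nu_scaling(2.19e21, 5.0, 2479.0))
\end{verbatim}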
Other considerations are related to the choice of the source isotope to be investigated and of the experimental technique to be used.
The two choices are often not disjoint, because certain experimental techniques can only be exploited for some isotopes.
This is true for example for Ge-diodes or Xe-TPCs, which will be discussed in the next section.
Concerning the isotope choice, essentially two factors come into play.
The first is related to Q$_{\beta\beta}$.
Isotopes with large Q$_{\beta\beta}$ are preferable because they make it possible to obtain lower background levels.
This is not only because the 2$\nu$DBD background is smaller for larger Q$_{\beta\beta}$, but also because the radioactive background is lower at higher energies.
In this regard a clear distinction can be made between isotopes with Q$_{\beta\beta}$ smaller or larger than 2615$\,$keV, which is the energy of the most intense $\gamma$-line from natural radioactivity.
Other important aspects are the isotopic abundance of the isotope under investigation and the cost for its enrichment~\cite{barabash2012}.
Finally, one could wonder if there are isotopes with a larger expected signal rate for a given amount of source mass.
As pointed out in~\cite{Robertson}, this does not turn out to be the case, as all interesting isotopes are almost equivalent in this regard.
\section{Future searches}\label{sec:futureexp}
We now discuss the experimental techniques that will be used in future 0$\nu$DBD searches and present some of the most promising experiments that implement them.
\subsection{Germanium detectors}
Germanium detectors feature superior energy resolution and make it possible to discriminate very effectively between single-site (electron-like) and multi-site ($\gamma$-like) energy releases.
This experimental technique can only be applied to $^{76}$Ge, which has a relatively low Q$_{\beta\beta}$ of 2039$\,$keV.
Nevertheless, thanks to particle discrimination germanium experiments already demonstrated background rates as low as 0.001$\,$counts/(keV$\cdot$kg$\cdot$y).
This technique can be scaled up to the ton scale, but probably not much beyond that.
Enrichment in $^{76}$Ge is feasible and already used in experiments; however, it is somewhat expensive compared to other isotopes.
The GERDA~\cite{Ge0nu} and Majorana~\cite{Majorana} experiments aim at building a ton-scale germanium experiment, joining their scientific and financial efforts.
GERDA was already discussed in section~\ref{sec:presentstatus}; it demonstrated the possibility of obtaining an energy resolution of 3$\,$keV FWHM and a background as low as 2$\,$counts/(ton$\cdot$y) in the ROI.
The experiment is currently running with a total detector mass of 36$\,$kg.
Majorana is currently more focused on the radioassay of the materials to be used in the construction of a ton-scale detector~\cite{MajoranaNim}.
A background of less than 3.5$\,$counts/(ton$\cdot$y) in the ROI is expected; however, the current prototype detectors measured a slightly higher background, corresponding to about 23$\,$counts/(ton$\cdot$y).
\subsection{Bolometric detectors}
Bolometers feature an energy resolution ($\sim$5$\,$keV) comparable to that of germanium detectors and can be exploited to investigate several isotopes.
CUORE has already demonstrated that the detector mass can be extended up to the ton scale, but it probably cannot be increased much beyond that.
Radioactive background is the main concern for current bolometric experiments, but the situation can be improved by the introduction of active background rejection techniques.
This is what is planned for CUPID~\cite{cupid1,cupid2}, an upgrade of CUORE with particle-identification capabilities aiming at a background of 0.1$\,$counts/(ton$\cdot$y) in the ROI and a $\left<m_{\beta\beta}\right>$ sensitivity in the 15$\,$meV range.
Two possible strategies are being considered.
The first is to use enriched $^{130}$TeO$_2$ crystals and to perform particle discrimination based on the Cerenkov radiation emitted by 0$\nu$DBD electrons and not by background $\alpha$ particles.
The second is to move to an isotope with Q$_{\beta\beta}$ above 2615$\,$keV and use enriched scintillating crystals such as Zn$^{82}$Se or Li$^{100}$MoO$_{4}$. In this case the particle discrimination would be based on the different light yield of electrons and $\alpha$ particles.
Another promising bolometric experiment is AMoRE~\cite{amore}.
It is still at a preliminary stage, but the plan is to build an array of $^{48depl}$Ca$^{100}$MoO$_4$ scintillating bolometers with a mass of 200$\,$kg and a background of about 1$\,$counts/(ton$\cdot$y) in the ROI.
The expected sensitivity on $\left<m_{\beta\beta}\right>$ is of about 15$\,$meV.
\subsection{Xenon time-projection chambers}
Noble gas or liquid time-projection chambers (TPC) are widely employed in rare event searches.
TPCs for 0$\nu$DBD are filled with xenon enriched in $^{136}$Xe.
The radioactive contaminations can be made very low, and
there are R\&D activities aiming at tagging the Ba$^{++}$ daughter nucleus produced in the double-beta decay.
If these R\&D efforts succeed, only the background from 2$\nu$DBD would remain.
Gaseous TPCs are filled with $^{enr}$Xe at pressures as high as 10$\,$bar.
In this environment electron tracks are a few cm long, and background suppression based on event-topology reconstruction can be performed.
In high-pressure gaseous TPCs mass scalability is possible, but clearly harder than in liquid TPCs.
The energy resolution can be better than 1\% FWHM at Q$_{\beta\beta}$, adequate to make the 2$\nu$DBD background negligible.
In liquid xenon TPCs it is easier to increase the source mass, but the energy resolution is not optimal.
EXO is a liquid xenon TPC containing 150$\,$kg of $^{enr}$Xe.
The energy resolution at Q$_{\beta\beta}$ is 3.5\% FWHM and the background is of about 10$^{-3}\,$counts/(keV$\cdot$kg$\cdot$y).
A planned extension of EXO, called nEXO~\cite{nexo}, will contain 5$\,$ton of enriched xenon.
Thanks to the better self-shielding of the larger detector volume, and if the energy resolution will be improved to 2.5\% FWHM, nEXO could be able to obtain a background of about 2$\,$counts/(ton$\cdot$y) in the ROI, or even lower if Ba$^{++}$ tagging will be implemented.
The expected sensitivity on $\left<m_{\beta\beta}\right>$ will be (15 -- 25)$\,$meV.
The NEXT collaboration will start operating a 100$\,$kg high pressure xenon TPC in 2018~\cite{next100}.
The expected energy resolution and background at Q$_{\beta\beta}$ are 0.7\% FWHM and less than 10$^{-3}\,$counts/(keV$\cdot$kg$\cdot$y) respectively.
A ton-scale experiment would have a background of about 10$\,$counts/(ton$\cdot$y) in the ROI, but this can be reduced by improving the energy resolution and the design of the detector.
The PandaX-III project~\cite{pandax} plans to build a 1-ton xenon experiment made of five identical high-pressure xenon TPCs of 200$\,$kg each.
The expected energy resolution and background at Q$_{\beta\beta}$ are 1\% FWHM and about 1.5$\,$counts/(ton$\cdot$y), resulting in a $\left<m_{\beta\beta}\right>$ sensitivity of (20 -- 50)$\,$meV.
\subsection{Tracking detectors}
Tracking detectors are different from other 0$\nu$DBD techniques because here the source isotope is not embedded in the detector.
It is deposited on foils, and the track and energy of the emitted electrons are measured by conventional gas detectors and calorimeters.
This approach allows to investigate any isotope that can undergo double-beta decay.
The energy resolution is quite poor, because part of the electrons' energy is released in the source foils themselves and is not measured.
This problem is mitigated by making the source foils very thin, but this imposes severe limits on the mass scalability of the technique.
Nevertheless, the reconstruction of the electron tracks provides excellent background suppression and makes it possible to study the angular distribution of the decay~\cite{SuperNemo}.
This feature would be very important to study decay mechanisms alternative to the exchange of a light Majorana neutrino.
The SuperNEMO collaboration plans to build a tracking experiment with a source mass of about 100$\,$kg of $^{82}$Se.
This isotope has a Q$_{\beta\beta}$ above 2615$\,$keV and the 2$\nu$DBD half-life is long enough to make its contribution to the background negligible.
The projected sensitivity on $\left<m_{\beta\beta}\right>$ is in the (50 -- 100)$\,$meV range.
The detector will be composed by 20 identical modules.
The commissioning of the first prototype module was completed recently; it contains a source mass of 7$\,$kg of $^{82}$Se, and the measured energy resolution is 8\% FWHM at 1$\,$MeV.
\subsection{Loaded liquid scintillators}
Large-volume liquid scintillator detectors can be loaded with double-beta decay isotopes and can reach source masses that are hardly achievable with other techniques.
Future experiments are considering $^{130}$Te and $^{136}$Xe, which are relatively cheap to enrich.
The energy resolution is far from optimal, but this is compensated by the very low background and the large mass.
As already discussed in section~\ref{sec:presentstatus}, a very effective demonstration of this technique is KamLAND-Zen.
A planned upgrade of this experiment will see the deployment of one ton of enriched xenon, resulting in an expected sensitivity of about 20$\,$meV on $\left<m_{\beta\beta}\right>$.
This will be possible only after a major detector upgrade that will improve the energy resolution at Q$_{\beta\beta}$ to 5\% FWHM and therefore mitigate the background from 2$\nu$DBD decay.
A similar approach is being pursued by the SNO+ experiment~\cite{sno}.
In this case the plan is to load 780$\,$ton of liquid scintillator with 0.5\% in mass of natural tellurium.
This will correspond to about 1.3$\,$ton of $^{130}$Te, or about 0.25$\,$ton after fiducial volume cuts.
The data taking is expected to start in about two years; after five years of data taking, the expected sensitivity on $\left<m_{\beta\beta}\right>$ is (40 -- 90)$\,$meV.
Using tellurium enriched in $^{130}$Te instead of natural tellurium, or increasing its concentration, could improve the sensitivity in the future.
\section{Conclusion}
We discussed the target sensitivity of future neutrinoless double-beta decay experiments aiming at the full exploration of the inverted region of the neutrino mass-hierarchy, and the scientific challenges they will have to face.
The targets to be achieved are a source mass of the order of one ton and a background of about one count/(ton$\cdot$y) in the region of interest.
No experiment at present can fulfill these requirements, but promising proposals could meet them in the next decade.
\section{Background and motivation}
\label{background}
Submission to ICML 2022 will be entirely electronic, via a web site
(not email). Information about the submission process and \LaTeX\ templates
are available on the conference web site at:
\begin{center}
\textbf{\texttt{http://icml.cc/}}
\end{center}
The guidelines below will be enforced for initial submissions and
camera-ready copies. Here is a brief summary:
\begin{itemize}
\item Submissions must be in PDF\@.
\item \textbf{New to this year}: If your paper has appendices, submit the appendix together with the main body and the references \textbf{as a single file}. Reviewers will not look for appendices as a separate PDF file. So if you submit such an extra file, reviewers will very likely miss it.
\item Page limit: The main body of the paper has to be fitted to 8 pages, excluding references and appendices; the space for the latter two is not limited. For the final version of the paper, authors can add one extra page to the main body.
\item \textbf{Do not include author information or acknowledgements} in your
initial submission.
\item Your paper should be in \textbf{10 point Times font}.
\item Make sure your PDF file only uses Type-1 fonts.
\item Place figure captions \emph{under} the figure (and omit titles from inside
the graphic file itself). Place table captions \emph{over} the table.
\item References must include page numbers whenever possible and be as complete
as possible. Place multiple citations in chronological order.
\item Do not alter the style template; in particular, do not compress the paper
format by reducing the vertical spaces.
\item Keep your abstract brief and self-contained, one paragraph and roughly
4--6 sentences. Gross violations will require correction at the
camera-ready phase. The title should have content words capitalized.
\end{itemize}
\subsection{Submitting Papers}
\textbf{Paper Deadline:} The deadline for paper submission that is
advertised on the conference website is strict. If your full,
anonymized, submission does not reach us on time, it will not be
considered for publication.
\textbf{Anonymous Submission:} ICML uses double-blind review: no identifying
author information may appear on the title page or in the paper
itself. \cref{author info} gives further details.
\textbf{Simultaneous Submission:} ICML will not accept any paper which,
at the time of submission, is under review for another conference or
has already been published. This policy also applies to papers that
overlap substantially in technical content with conference papers
under review or previously published. ICML submissions must not be
submitted to other conferences and journals during ICML's review
period.
Informal publications, such as technical
reports or papers in workshop proceedings which do not appear in
print, do not fall under these restrictions.
\medskip
Authors must provide their manuscripts in \textbf{PDF} format.
Furthermore, please make sure that files contain only embedded Type-1 fonts
(e.g.,~using the program \texttt{pdffonts} in linux or using
File/DocumentProperties/Fonts in Acrobat). Other fonts (like Type-3)
might come from graphics files imported into the document.
Authors using \textbf{Word} must convert their document to PDF\@. Most
of the latest versions of Word have the facility to do this
automatically. Submissions will not be accepted in Word format or any
format other than PDF\@. Really. We're not joking. Don't send Word.
Those who use \textbf{\LaTeX} should avoid including Type-3 fonts.
Those using \texttt{latex} and \texttt{dvips} may need the following
two commands:
{\footnotesize
\begin{verbatim}
dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi
ps2pdf paper.ps
\end{verbatim}}
It is a zero following the ``-G'', which tells dvips to use
the config.pdf file. Newer \TeX\ distributions don't always need this
option.
Using \texttt{pdflatex} rather than \texttt{latex}, often gives better
results. This program avoids the Type-3 font problem, and supports more
advanced features in the \texttt{microtype} package.
\textbf{Graphics files} should be a reasonable size, and included from
an appropriate format. Use vector formats (.eps/.pdf) for plots,
lossless bitmap formats (.png) for raster graphics with sharp lines, and
jpeg for photo-like images.
The style file uses the \texttt{hyperref} package to make clickable
links in documents. If this causes problems for you, add
\texttt{nohyperref} as one of the options to the \texttt{icml2022}
usepackage statement.
\subsection{Submitting Final Camera-Ready Copy}
The final versions of papers accepted for publication should follow the
same format and naming convention as initial submissions, except that
author information (names and affiliations) should be given. See
\cref{final author} for formatting instructions.
The footnote, ``Preliminary work. Under review by the International
Conference on Machine Learning (ICML). Do not distribute.'' must be
modified to ``\textit{Proceedings of the
$\mathit{39}^{th}$ International Conference on Machine Learning},
Baltimore, Maryland, USA, PMLR 162, 2022.
Copyright 2022 by the author(s).''
For those using the \textbf{\LaTeX} style file, this change (and others) is
handled automatically by simply changing
$\mathtt{\backslash usepackage\{icml2022\}}$ to
$$\mathtt{\backslash usepackage[accepted]\{icml2022\}}$$
Authors using \textbf{Word} must edit the
footnote on the first page of the document themselves.
Camera-ready copies should have the title of the paper as running head
on each page except the first one. The running title consists of a
single line centered above a horizontal rule which is $1$~point thick.
The running head should be centered, bold and in $9$~point type. The
rule should be $10$~points above the main text. For those using the
\textbf{\LaTeX} style file, the original title is automatically set as running
head using the \texttt{fancyhdr} package which is included in the ICML
2022 style file package. In case that the original title exceeds the
size restrictions, a shorter form can be supplied by using
\verb|\icmltitlerunning{...}|
just before $\mathtt{\backslash begin\{document\}}$.
Authors using \textbf{Word} must edit the header of the document themselves.
\section{Format of the Paper}
All submissions must follow the specified format.
\subsection{Dimensions}
The text of the paper should be formatted in two columns, with an
overall width of 6.75~inches, height of 9.0~inches, and 0.25~inches
between the columns. The left margin should be 0.75~inches and the top
margin 1.0~inch (2.54~cm). The right and bottom margins will depend on
whether you print on US letter or A4 paper, but all final versions
must be produced for US letter size.
Do not write anything on the margins.
The paper body should be set in 10~point type with a vertical spacing
of 11~points. Please use Times typeface throughout the text.
\subsection{Title}
The paper title should be set in 14~point bold type and centered
between two horizontal rules that are 1~point thick, with 1.0~inch
between the top rule and the top edge of the page. Capitalize the
first letter of content words and put the rest of the title in lower
case.
\subsection{Author Information for Submission}
\label{author info}
ICML uses double-blind review, so author information must not appear. If
you are using \LaTeX\/ and the \texttt{icml2022.sty} file, use
\verb+\icmlauthor{...}+ to specify authors and \verb+\icmlaffiliation{...}+ to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information
will not be printed unless \texttt{accepted} is passed as an argument to the
style file.
Submissions that include the author information will not
be reviewed.
\subsubsection{Self-Citations}
If you are citing published papers for which you are an author, refer
to yourself in the third person. In particular, do not use phrases
that reveal your identity (e.g., ``in previous work \cite{langley00}, we
have shown \ldots'').
Do not anonymize citations in the reference section. The only exception are manuscripts that are
not yet published (e.g., under submission). If you choose to refer to
such unpublished manuscripts \cite{anonymous}, anonymized copies have
to be submitted
as Supplementary Material via CMT\@. However, keep in mind that an ICML
paper should be self contained and should contain sufficient detail
for the reviewers to evaluate the work. In particular, reviewers are
not required to look at the Supplementary Material when writing their
review (they are not required to look at more than the first $8$ pages of the submitted document).
\subsubsection{Camera-Ready Author Information}
\label{final author}
If a paper is accepted, a final camera-ready copy must be prepared.
For camera-ready papers, author information should start 0.3~inches below the
bottom rule surrounding the title. The authors' names should appear in 10~point
bold type, in a row, separated by white space, and centered. Author names should
not be broken across lines. Unbolded superscripted numbers, starting 1, should
be used to refer to affiliations.
Affiliations should be numbered in the order of appearance. A single footnote
block of text should be used to list all the affiliations. (Academic
affiliations should list Department, University, City, State/Region, Country.
Similarly for industrial affiliations.)
Each distinct affiliations should be listed once. If an author has multiple
affiliations, multiple superscripts should be placed after the name, separated
by thin spaces. If the authors would like to highlight equal contribution by
multiple first authors, those authors should have an asterisk placed after their
name in superscript, and the term ``\textsuperscript{*}Equal contribution"
should be placed in the footnote block ahead of the list of affiliations. A
list of corresponding authors and their emails (in the format Full Name
\textless{}email@domain.com\textgreater{}) can follow the list of affiliations.
Ideally only one or two names should be listed.
A sample file with author names is included in the ICML2022 style file
package. Turn on the \texttt{[accepted]} option to the stylefile to
see the names rendered. All of the guidelines above are implemented
by the \LaTeX\ style file.
\subsection{Abstract}
The paper abstract should begin in the left column, 0.4~inches below the final
address. The heading `Abstract' should be centered, bold, and in 11~point type.
The abstract body should use 10~point type, with a vertical spacing of
11~points, and should be indented 0.25~inches more than normal on left-hand and
right-hand margins. Insert 0.4~inches of blank space after the body. Keep your
abstract brief and self-contained, limiting it to one paragraph and roughly 4--6
sentences. Gross violations will require correction at the camera-ready phase.
\subsection{Partitioning the Text}
You should organize your paper into sections and paragraphs to help
readers place a structure on the material and understand its
contributions.
\subsubsection{Sections and Subsections}
Section headings should be numbered, flush left, and set in 11~pt bold
type with the content words capitalized. Leave 0.25~inches of space
before the heading and 0.15~inches after the heading.
Similarly, subsection headings should be numbered, flush left, and set
in 10~pt bold type with the content words capitalized. Leave
0.2~inches of space before the heading and 0.13~inches afterward.
Finally, subsubsection headings should be numbered, flush left, and
set in 10~pt small caps with the content words capitalized. Leave
0.18~inches of space before the heading and 0.1~inches after the
heading.
Please use no more than three levels of headings.
\subsubsection{Paragraphs and Footnotes}
Within each section or subsection, you should further partition the
paper into paragraphs. Do not indent the first line of a given
paragraph, but insert a blank line between succeeding ones.
You can use footnotes\footnote{Footnotes
should be complete sentences.} to provide readers with additional
information about a topic without interrupting the flow of the paper.
Indicate footnotes with a number in the text where the point is most
relevant. Place the footnote in 9~point type at the bottom of the
column in which it appears. Precede the first footnote in a column
with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can
appear in each column, in the same order as they appear in the text,
but spread them across columns and pages if possible.}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{icml_numpapers}}
\caption{Historical locations and number of accepted papers for International
Machine Learning Conferences (ICML 1993 -- ICML 2008) and International
Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was
produced, the number of accepted papers for ICML 2008 was unknown and instead
estimated.}
\label{icml-historical}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Figures}
You may want to include figures in the paper to illustrate
your approach and results. Such artwork should be centered,
legible, and separated from the text. Lines should be dark and at
least 0.5~points thick for purposes of reproduction, and text should
not appear on a gray background.
Label all distinct components of each figure. If the figure takes the
form of a graph, then give a name for each axis and include a legend
that briefly describes each curve. Do not include a title inside the
figure; instead, the caption should serve this function.
Number figures sequentially, placing the figure number and caption
\emph{after} the graphics, with at least 0.1~inches of space before
the caption and 0.1~inches after it, as in
\cref{icml-historical}. The figure caption should be set in
9~point type and centered unless it runs two or more lines, in which
case it should be flush left. You may float figures to the top or
bottom of a column, and you may set wide figures across both columns
(use the environment \texttt{figure*} in \LaTeX). Always place
two-column figures at the top or bottom of the page.
\subsection{Algorithms}
If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic''
environments to format pseudocode. These require
the corresponding stylefiles, algorithm.sty and
algorithmic.sty, which are supplied with this package.
\cref{alg:example} shows an example.
\begin{algorithm}[tb]
\caption{Bubble Sort}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} data $x_i$, size $m$
\REPEAT
\STATE Initialize $noChange = true$.
\FOR{$i=1$ {\bfseries to} $m-1$}
\IF{$x_i > x_{i+1}$}
\STATE Swap $x_i$ and $x_{i+1}$
\STATE $noChange = false$
\ENDIF
\ENDFOR
\UNTIL{$noChange$ is $true$}
\end{algorithmic}
\end{algorithm}
\subsection{Tables}
You may also want to include tables that summarize material. Like
figures, these should be centered, legible, and numbered consecutively.
However, place the title \emph{above} the table with at least
0.1~inches of space before the title and the same after it, as in
\cref{sample-table}. The table title should be set in 9~point
type and centered unless it runs two or more lines, in which case it
should be flush left.
\begin{table}[t]
\caption{Classification accuracies for naive Bayes and flexible
Bayes on various data sets.}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
Data set & Naive & Flexible & Better? \\
\midrule
Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\
Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\
Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\
Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\
Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\
Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\
Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\
Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Tables contain textual material, whereas figures contain graphical material.
Specify the contents of each row and column in the table's topmost
row. Again, you may float tables to a column's top or bottom, and set
wide tables across both columns. Place two-column tables at the
top or bottom of the page.
\subsection{Theorems and such}
The preferred way is to number definitions, propositions, lemmas, etc. consecutively, within sections, as shown below.
\begin{definition}
\label{def:inj}
A function $f:X \to Y$ is injective if for any $x,y\in X$ different, $f(x)\ne f(y)$.
\end{definition}
Using \cref{def:inj} we immediate get the following result:
\begin{proposition}
If $f$ is injective mapping a set $X$ to another set $Y$,
the cardinality of $Y$ is at least as large as that of $X$
\end{proposition}
\begin{proof}
Left as an exercise to the reader.
\end{proof}
\cref{lem:usefullemma} stated next will prove to be useful.
\begin{lemma}
\label{lem:usefullemma}
For any $f:X \to Y$ and $g:Y\to Z$ injective functions, $f \circ g$ is injective.
\end{lemma}
\begin{theorem}
\label{thm:bigtheorem}
If $f:X\to Y$ is bijective, the cardinality of $X$ and $Y$ are the same.
\end{theorem}
An easy corollary of \cref{thm:bigtheorem} is the following:
\begin{corollary}
If $f:X\to Y$ is bijective,
the cardinality of $X$ is at least as large as that of $Y$.
\end{corollary}
\begin{assumption}
The set $X$ is finite.
\label{ass:xfinite}
\end{assumption}
\begin{remark}
According to some, it is only the finite case (cf. \cref{ass:xfinite}) that is interesting.
\end{remark}
\subsection{Citations and References}
Please use APA reference format regardless of your formatter
or word processor. If you rely on the \LaTeX\/ bibliographic
facility, use \texttt{natbib.sty} and \texttt{icml2022.bst}
included in the style-file package to obtain this format.
Citations within the text should include the authors' last names and
year. If the authors' names are included in the sentence, place only
the year in parentheses, for example when referencing Arthur Samuel's
pioneering work \yrcite{Samuel59}. Otherwise place the entire
reference in parentheses with the authors and year separated by a
comma \cite{Samuel59}. List multiple references separated by
semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.'
construct only for citations with three or more authors or after
listing all authors to a publication in an earlier reference \cite{MachineLearningI}.
Authors should cite their own work in the third person
in the initial version of their paper submitted for blind review.
Please refer to \cref{author info} for detailed instructions on how to
cite your own papers.
Use an unnumbered first-level section heading for the references, and use a
hanging indent style, with the first line of the reference flush against the
left margin and subsequent lines indented by 10 points. The references at the
end of this document give examples for journal articles \cite{Samuel59},
conference publications \cite{langley00}, book chapters \cite{Newell81}, books
\cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports
\cite{mitchell80}, and dissertations \cite{kearns89}.
Alphabetize references by the surnames of the first authors, with
single author entries preceding multiple author entries. Order
references for the same authors by year of publication, with the
earliest first. Make sure that each reference includes all relevant
information (e.g., page numbers).
Please put some effort into making references complete, presentable, and
consistent, e.g. use the actual current name of authors.
If using bibtex, please protect capital letters of names and
abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz
in your .bib file.
\section*{Accessibility}
Authors are kindly asked to make their submissions as accessible as possible for everyone including people with disabilities and sensory or neurological differences.
Tips of how to achieve this and what to pay attention to will be provided on the conference website \url{http://icml.cc/}.
\section*{Software and Data}
If a paper is accepted, we strongly encourage the publication of software and data with the
camera-ready version of the paper whenever appropriate. This can be
done by including a URL in the camera-ready copy. However, \textbf{do not}
include URLs that reveal your institution or identity in your
submission for review. Instead, provide an anonymous URL or upload
the material as ``Supplementary Material'' into the CMT reviewing
system. Note that reviewers are not required to look at this material
when writing their review.
\section*{Acknowledgements}
\textbf{Do not} include acknowledgements in the initial version of
the paper submitted for blind review.
If a paper is accepted, the final camera-ready version can (and
probably should) include acknowledgements. In this case, please
place such acknowledgements in an unnumbered section at the
end of the paper. Typically, this will include thanks to reviewers
who gave useful comments, to colleagues who contributed to the ideas,
and to funding agencies and corporate sponsors that provided financial
support.
\nocite{langley00}
\subsection{PRE-PROCESSING \& FEATURE ENGINEERING}
\textbf{\ul{Question 1: Did you perform any data pre-processing methods?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{fig/Q1.png}
\caption{Pre-processing methods include: Standard Scaler, Feature Scaling, One-hot Encoding, extracting the 90\% convergence time and final performances.}
\label{fig:my_label}
\end{figure}
\textbf{\ul{Question 2: Did you perform feature engineering methods?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{fig/Q2.png}
\caption{Feature engineering methods include: Clustering; Polynomial features of train\_num, feat\_num; Multi-step forecasting for data augmentation.}
\label{fig:my_label}
\end{figure}
\subsection{DATA USED FOR LEARNING}
\textbf{\ul{Question 3: Did you use all points on the learning curves or only some of them?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{fig/Q3.png}
\caption{Most of the methods use all points on the learning curves for learning.}
\label{fig:my_label}
\end{figure}
\textbf{\ul{Question 4: Did you make use of meta-features of datasets?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{fig/Q4.png}
\caption{All methods took advantage of meta-features of datasets.}
\label{fig:my_label}
\end{figure}
\textbf{\ul{Question 5: Did you implement a Hyperparameter Optimization component for your agent using the provided hyperparameters of algorithms?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{fig/Q5.png}
\caption{Some HPO tools were used, such as Hyperband
and FSBO.}
\label{fig:my_label}
\end{figure}
\textbf{\ul{Question 6: In case you used either or both meta-features of datasets and algorithms, did it improve the performance of your method?}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q6.png}
\caption{More experiments need to be done to confirm whether meta-features of datasets and algorithms help.}
\label{fig:my_label}
\end{figure}
\textbf{\ul{Question 7: In any case, did you find the meta-features useful in our meta-learning setting?}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q7.png}
\caption{Not all participants find the provided meta-features useful.}
\label{fig:my_label}
\end{figure}
\subsection{POLICY CHARACTERISTICS}
\textbf{\ul{Question 8: Does your agent learn a policy from datasets in the meta-training phase?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q8.png}
\caption{Most of the agents use a learned policy from the meta-training phase.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 9: How does your agent manage the exploration-exploitation trade-offs (revealing a known good algorithm's learning curve vs. revealing a new algorithm candidate's learning curve)?}}
\begin{enumerate}
\item With an $\epsilon$-greedy policy in a Reinforcement Learning framework: only in meta-training do we create different Q-matrices. In the meta-testing phase, we choose the new algorithm using the computed Q-matrices.
\item We are very restrictive with switching the explored algorithm. We statically preselect the single best-performing algorithm from the validation learning curves and only explore other algorithms if its learning curve stalls. So we strongly emphasize exploiting the single best algorithm.
\item A modified Round Robin on the top-k performing algorithms. The incumbent, i.e. the value of the top-performing algorithm on the test dataset, is challenged by algorithms allocated zero budget, since training on the validation data allows for an immediate look-up in the test dataset.
\item Bayesian Optimization
\item We find the best algorithm by a learning curve predictor.
\end{enumerate}
\ul{\textbf{Question 10: Does your agent switch between learning curves during an episode (i.e. switching between training different algorithms on the dataset at hand)?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{fig/Q10.png}
\caption{All agents switch between learning curves to find the best algorithm for the task at hand.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 11: Does your agent leverage partially revealed learning curves on the dataset at hand?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{fig/Q11.png}
\caption{Some agents do not take into account information of partial learning curves on the task at hand for deciding their actions.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 12: Did you make any assumptions about the shapes of the learning curves?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q12.png}
\caption{Only one team made an assumption that the learning curves are \textit{monotonically increasing}.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 13: Does your agent predict unseen points on a learning curve?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q13.png}
\caption{3 out of 5 agents make decisions based on predicted performance scores.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 14: Does your agent perform pairwise comparisons of algorithms' learning curves?}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q14.png}
\caption{Pairwise comparisons of algorithms' learning curves have been exploited in 60\% of the agents.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 15: How does your agent spend the given time budgets (i.e. choosing delta\_t at each step)?}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q15.png}
\caption{Participants use a hard-coded policy, a learned policy, or both to distribute a given time budget (no one does it randomly).}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 16: How does your agent choose which algorithm to contribute to its learning curve at each step (i.e. choosing A\_star)?}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q16.png}
\caption{Most of the agents choose the best algorithm so far as $A^*$, with only one exception that uses Sequential Model Based Optimisation (SMBO) with a Gaussian Process Surrogate.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 17: Which phase did you focus on more to improve your agent's performance?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{fig/Q17.png}
\caption{Participants give a slightly higher importance weight to meta-training than meta-testing.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 18: Did you build an algorithm ranking?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q18.png}
\caption{More than half of the participants build an algorithm ranking and use it as a tool for selecting algorithms.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 19: Did you use Reinforcement Learning to train your agent?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q19.png}
\caption{Only one participant applies Reinforcement Learning to train the agent.}
\label{fig:my_label}
\end{figure}
\subsection{METHOD IMPLEMENTATION}
\ul{\textbf{Question 20: What is the percentage of originality of your method/implementation?}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q20.png}
\caption{The percentage of originality of their methods/implementations ranges from 60\% to 80\%.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 21: Did you use a pre-trained agent?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q21.png}
\caption{Only one participant pre-trains their agent using all the provided datasets in meta-training.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 22: Is your method strongly based on an existing solution? Which one(s)?}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q22.png}
\caption{Q-learning and starting kit baselines served as bases for 2 methods, while the other methods were built from scratch.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 23: Did you use Neural Networks for your agent?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q23.png}
\caption{Neural Networks were implemented in 3 out of 5 methods.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 24: Do you find the provided libraries / packages / frameworks sufficient?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q24.png}
\caption{The provided libraries/packages/frameworks were sufficient for most of the participants.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 25: Check all Python packages/frameworks you used.}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q25.png}
\caption{Scikit-learn and Pytorch are the most used packages by participants.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 26: Did you use any specific AutoML / Meta-learning / Hyperparameter Optimization libraries?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fig/Q26.png}
\caption{Only one participant uses SMAC for hyperparameter optimization.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 27: Was it difficult for you to deal with the provided data format of the learning curves and meta-features?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{fig/Q27.png}
\caption{40\% of participants struggled with the provided data format. Some comments include: rather than nested dictionaries create a single dictionary with a tuple identifier (dataset, algorithm).}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 28: How much time did you spend developing your agents?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{fig/Q28.png}
\caption{All participants completed their solutions within 4 weeks.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 29: What's the difficulty induced by the computation resource (memory, time budget, etc) constraints?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{fig/Q29.png}
\caption{The provided computation resource was reasonable to participants.}
\label{fig:my_label}
\end{figure}
\subsection{USER EXPERIENCE}
\ul{\textbf{Question 30: Was the challenge duration enough for you to develop your methods?}}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{fig/Q30.png}
\caption{The challenge duration was enough for the participants.}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 31: Your evaluation on the starting kit}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q31.png}
\caption{Some improvements on the starting kit need to be made (see below).}
\label{fig:my_label}
\end{figure}
\ul{\textbf{Question 32: Which improvements can be made for the starting kit? You are welcome to list all issues and bugs you ran into.}}
\begin{enumerate}
\item The algorithm meta-features were non-informative, the dataset meta-features' categorical columns were at some point non-informative, and the categories were not entirely represented in the validation dataset. Also, the environment allows exploiting 0 budgets to improve the agent's performance in an unrealistic way. In addition, we wanted to use several libraries that were not available (e.g. pytorch geometric or networkx) or only in other versions (e.g. sklearn's quantile regression was not available initially). The validation \& test sets (on the server) were slightly different. In particular, the distribution of timestamps was dramatically different.
\item Return the whole learning curve after a suggestion, not only the last observed budget and the last observed performance.
\item It may be useful to query the learning curve at specific iterations, or to explain clearly the meaning of the timestamps of the observed samples. If they are random or there is no underlying pattern or structure, it is difficult to predict the next budget.
\item The final rank should be obtained by running the methods on other meta-train dataset. In the current set-up, the test curve is highly correlated to the validation curve (seen in the development stage), therefore simply overfitting the latter will probably help to obtain good results in the former.
\item Decreasing the budget when the suggested algorithm+budget tuple is not enough to query a new point in the learning curve is unrealistic. In the real world, once we decide to run some algorithm for a given time-range or number of epochs, we will obtain some change in the accuracy (unless the learning curve is in a plateau).
\end{enumerate}
\ul{\textbf{Question 33: Your evaluation on the challenge website:\\ https://codalab.lisn.upsaclay.fr/competitions/753}}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{fig/Q33.png}
\caption{One comment for the challenge website is: the way the ALC is explained is not very straightforward.}
\label{fig:my_label}
\end{figure}
\section{Background and motivation}
\label{background}
\textbf{Meta-learning} has been playing an increasingly important role in Automated Machine Learning.
While it is a natural capability of living organisms, which constantly transfer acquired knowledge across tasks to quickly adapt to changing environments, artificial learning systems are still in their meta-learning ``infancy''. So far, they are only capable of transferring knowledge between very similar tasks. At a time when society is pointing fingers at AI for being wasteful with computational resources, there is an urgent need for learning systems that \textbf{recycle their knowledge}. To achieve that goal, several research areas have been widely studied, including few-shot learning \cite{Wang2020}, transfer learning \cite{Zhuang2020}, representation learning \cite{Bengio2013}, continual learning \cite{Delange2021}, life-long learning \cite{Chen2018}, and meta-learning \cite{Vanschoren2018}.
However, \textbf{meta-learning from learning curves}, an essential sub-problem in meta-learning \cite{mohr2022learning}, is still under-studied. This motivates the design of this new challenge series we are proposing.
Learning curves capture an algorithm's incremental performance improvement as a function of training time, number of iterations, and/or number of examples. Our challenge design builds on top of previous challenges and work, which considered other aspects of the problem: meta-learning as a recommendation problem, but not from learning curves \cite{Guyon2019, liu_winning_2020}, and few-shot learning \cite{metadl2021}. Analysis of past challenges revealed that top-ranking methods often involve switching between algorithms during training, including ``freeze-thaw'' Bayesian techniques \cite{Swersky2014}. However, previous challenge protocols did not allow evaluating such methods separately, due to inter-dependencies between various heuristics. Furthermore, we want to study the potential benefit of \textbf{learned policies}, as opposed to applying hand-crafted black-box optimization methods.
Our challenge took inspiration from MetaREVEAL {\cite{nguyen2021}}, ActivMetal \cite{SunHosoya2018}, REVEAL \cite{sunhosoya2019}, and Freeze-Thaw Bayesian Optimization \cite{Swersky2014}. A meta-learner needs to learn to solve two problems at the same time: \textbf{algorithm selection} and \textbf{budget allocation}. We are interested in meta-learning strategies that leverage information on \textbf{partially trained algorithms}, hence reducing the cost of training them to convergence. We offer pre-computed learning curves as a function of time, to facilitate benchmarking. Meta-learners must ``pay'' a cost emulating computational time for revealing their next values. Hence, meta-learners are expected to learn the \textbf{exploration-exploitation trade-offs} between continuing ``training'' an already tried good candidate algorithm and checking new candidate algorithms.
In this paper, we first recap the design of the first round of our challenge. Then, we present the first-round results and compare the participants' solutions with our baselines. Finally, we describe the limitations we identified in the first-round design and propose a new design and a new meta-dataset for the second round.
\section {Design of the first round}
In this section, we describe the design of the first round of our challenge, including: data, challenge protocol, and evaluation. More details can be found on our competition page \footnote{{\url{https://codalab.lisn.upsaclay.fr/competitions/753}}}.
\subsection{Learning curve data}
Despite the fact that learning curves are widely used, there were not many learning curve meta-datasets available in the Machine Learning community at the time of organizing this challenge. Taking advantage of 30 cross-domain datasets used in the {AutoML challenge \cite{Guyon2019}}, we computed {\em de novo} learning curves (both on the validation sets and the test sets) for 20 algorithms, by submitting them to the {AutoML challenge \cite{Guyon2019}}\ as post-challenge submissions. These algorithms are created from two base algorithms, Random Forest and Gradient Boosting, but with different values of the \textit{max\_features} hyperparameter.
We also provided meta-features of each dataset, such as type of learning task, evaluation metric, time budget, etc. In this first round, we considered a \textbf{learning curve as a function of time}. Each point on the learning curve corresponds to the algorithm performance at a certain time.
In addition to the aforementioned real-world meta-dataset, we synthetically generated 4000 learning curves for the participants to practice and get familiar with our starting kit. The points on each synthetic learning curve are sampled from a parameterized sigmoid function whose hyperparameters are generated from matrix factorizations. This allows us to create hidden relationships between algorithms and datasets (i.e. some algorithms perform particularly well on some datasets). Details of how this synthetic meta-dataset was created can be found in {\cite{nguyen2021}}.
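For concreteness, the snippet below sketches one way such synthetic curves could be generated: low-rank latent factors induce hidden dataset--algorithm affinities, and each curve is a sigmoid whose asymptote depends on that affinity. The factorization rank, noise levels, and parameter names are illustrative assumptions, not the exact procedure of {\cite{nguyen2021}}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_datasets, n_algos, rank = 20, 10, 3

# Low-rank latent factors induce hidden dataset-algorithm affinities.
U = rng.normal(size=(n_datasets, rank))   # dataset factors
V = rng.normal(size=(n_algos, rank))      # algorithm factors
affinity = U @ V.T                        # higher value -> better final score

def sigmoid_curve(affinity_value, t):
    # Performance over time, saturating at a level tied to the affinity.
    asymptote = 1.0 / (1.0 + np.exp(-affinity_value))  # final score in (0, 1)
    slope = np.exp(rng.normal(0.0, 0.3))               # curve steepness
    onset = np.exp(rng.normal(1.0, 0.5))               # time before take-off
    return asymptote / (1.0 + np.exp(-slope * (t - onset)))

t = np.linspace(0, 10, 50)
curves = np.array([[sigmoid_curve(affinity[d, a], t)
                    for a in range(n_algos)] for d in range(n_datasets)])
print(curves.shape)   # (n_datasets, n_algos, 50)
\end{verbatim}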
\subsection{Challenge protocol}
We organized a novel two-phase competition protocol:
\begin{itemize}
\setlength{\itemsep}{0pt}%
\setlength{\parskip}{0pt}
\item \textbf{Development phase}: participants make as many submissions as they want \footnote{up to a comfortable limit that was never reached in our competition. The limit also aims to avoid overfitting the validation sets, though it has been shown that participants are usually careful not to overfit them \cite{overfitting-kaggle}},
which are evaluated on the \textit{validation learning curves}.
\item \textbf{Final test phase}: no new submissions are accepted in this phase. The last submission of each participant in the Development phase is transferred automatically to this phase. It is evaluated on the \textit{test learning curves}.
\end{itemize}
Note that the meta-datasets are never exposed to the participants in either phase, because this is a challenge with code submission (only the participants' agent sees the data).
This setting is novel because it is uncommon in reinforcement learning to have separate phases for agent ``development'' and agent ``testing''. Validation learning curves are used during the Development phase and test learning curves during the Final phase, to prevent agents from overfitting. Moreover, we implemented k-fold meta-cross-validation (with $k=6$) to reduce variance in the evaluations of the agents (i.e. 25 datasets for meta-training and 5 datasets for meta-testing in each fold). The final results are averaged over the datasets in the test folds.
During meta-training, learning curves and meta-data collected on 25 datasets are passed to the agent for meta-learning in any way implemented by the agent. Then, during meta-testing, one dataset is presented to the agent at a time. The agent interacts back and forth with an ``environment'', similarly to a Reinforcement Learning setting. It keeps suggesting which algorithms' validation learning curves to reveal and choosing the current best-performing algorithm based on observations of the partially revealed learning curves.
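A minimal sketch of the 6-fold meta-cross-validation split described above is given below; the dataset identifiers are placeholders.
\begin{verbatim}
from sklearn.model_selection import KFold

dataset_ids = [f"dataset_{i:02d}" for i in range(30)]   # placeholder names
kf = KFold(n_splits=6, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kf.split(dataset_ids)):
    meta_train = [dataset_ids[i] for i in train_idx]    # 25 meta-training datasets
    meta_test = [dataset_ids[i] for i in test_idx]      # 5 meta-testing datasets
    # agent.meta_train(meta_train); evaluate ALC on meta_test ...
    print(fold, len(meta_train), len(meta_test))
\end{verbatim}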
\subsection{Evaluation}
In this challenge series, we want to search for agents with high ``any-time learning'' capacities, i.e. the ability to achieve good performance if stopped at any point in time. Hence, an agent is evaluated by the Area under the agent's Learning Curve (ALC), which is constructed using the learning curves of the best algorithms chosen at each time step (validation learning curves in the Development phase, and test learning curves in the Final phase). The computation of the ALC is explained in {\cite{nguyen2021}}. The results are averaged over all meta-test datasets and shown on the leaderboards. The final ranking is made according to the average test ALC.
As indicated in the competition rules, participants should make efforts to guarantee the reproducibility of their methods (e.g. by fixing all random seeds involved). In the Final phase, all submissions were run \textbf{three times} with the same seed, and the run with the \textbf{worst performance} is used for the final ranking\footnote{The output of each run can be found in this Google Drive folder: {\href{https://drive.google.com/drive/folders/1HrMZNQeR1kDOmLqjRGgcozut2M_k-jHs?usp=sharing}{[link]}}}. This penalizes any team that did not fix their own seeds.
On each dataset $\mathcal{D}_i$, we evaluated an agent $\mathcal{A}_j$ by the Area under the Learning curve $ALC_i^j$ of the agent on the dataset. In the final ranking, the agent is ranked based on its average ALC over all datasets ($\mathcal{N}=30$ datasets):
\begin{equation}
\mu_j = \frac{\sum_{i=1}^{\mathcal{N}} ALC_i^j}{\mathcal{N}}
\end{equation}
To measure the variability of an agent $\mathcal{A}_j$, we computed the standard deviation of ALC scores obtained by the agent over all datasets:
\begin{equation}
\sigma_j = \sqrt{\frac{\sum_{i=1}^{\mathcal{N}} (ALC_i^j - \mu_j)^2}{\mathcal{N}}}
\end{equation}
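To make the aggregation concrete, the following sketch computes a simplified step-wise ALC for one learning curve and then the per-agent mean and standard deviation over datasets. The ALC function is only a stand-in for the exact definition in {\cite{nguyen2021}}, and the scores are dummy data.
\begin{verbatim}
import numpy as np

def alc(times, scores, total_budget):
    # Area under a step-wise learning curve, normalized by the total budget.
    # Simplified stand-in for the ALC definition used in the challenge.
    times = np.concatenate(([0.0], np.asarray(times, float), [total_budget]))
    scores = np.concatenate(([0.0], np.asarray(scores, float)))
    return float(np.sum(scores * np.diff(times)) / total_budget)

# Dummy example: best-so-far scores revealed at given timestamps.
print(alc(times=[10, 50, 200], scores=[0.2, 0.5, 0.6], total_budget=500))

# Aggregation over datasets, as in the equations above (dummy ALC scores).
alc_scores = np.random.rand(2, 30)   # 2 agents x N = 30 datasets
mu = alc_scores.mean(axis=1)         # average ALC per agent
sigma = alc_scores.std(axis=1)       # standard deviation per agent (ddof = 0)
print(mu, sigma)
\end{verbatim}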
\section{Analyses of the first round results}
Results, final rankings, and prizes in the Final phase of the first round are shown in Table \ref{table:summary}. The 1st, 2nd, and 3rd ranked teams qualifying for prizes, as per the challenge rules\footnote{{\url{https://codalab.lisn.upsaclay.fr/competitions/753}}}, were awarded prizes of \$500, \$300, and \$200, respectively. In the remainder of this section, we give a more in-depth view of how each team performed on each dataset in this round, compared to our baselines.
In this round, the participants were asked to solve two tasks simultaneously: \textit{algorithm selection} and \textit{budget allocation}. Both are crucial to achieving our goal of maximizing the area under an agent's learning curve. The approaches submitted by the participants cover a wide range of methods, from simple (e.g. using an algorithm ranking and pre-defined values for $\Delta t$) to more sophisticated (e.g. predicting scores and timestamps of unseen learning curve points). The results first indicate that using learned policies (models) both for choosing algorithms and for spending the time budget (as done by teams \textit{{MoRiHa}} and \textit{{neptune}}) yields better ALC scores than hard-coded ones (e.g. using a fixed pre-defined list of $\Delta t$ in \textit{{AIpert}} and our \textit{DDQN} baseline).
According to Table \ref{table:result-details}, team \textit{{MoRiHa}} obtained the highest average ALC of 0.43 in the final phase.
It succeeded in achieving the highest ALC score on 21 out of 30 datasets. In addition, it performed notably better than the other teams on some datasets, such as \textit{tania}, \textit{robert}, \textit{newsgroups}, and \textit{marco}. These are datasets of either multi-label or multi-class classification tasks with a very high number of features. Moreover, most of them are sparse datasets, which is often seen as a challenge for learners. Further investigation will be done in our future work to understand why {MoRiHa}'s method outperforms on these particular datasets. We will also encourage team {MoRiHa} to join the second round to study the robustness of their results.
Team \textit{{neptune}} has a score of 0.42, which is very close to the winner's, followed by team \textit{{AIpert}} with a score of 0.40. Team \textit{{automl-freiburg}}, which was ranked 4th, achieved a slightly lower score (0.37) than our DDQN baseline (0.38), and so did team \textit{{automl-hannover}} (0.32).
The successes of the top-ranked teams can be explained by the strategies they implemented. Team \textit{{MoRiHa}}, which finished in 1st place, uses a simple yet efficient approach that explores the most promising algorithms and avoids wasting time switching between too many different algorithms. Interestingly, team \textit{{neptune}} learns a policy for allocating the time budget using learning curve convergence speed. The Reinforcement Learning-based approach of team \textit{{AIpert}} is very intuitive, as our competition borrows the RL framework to formulate the problem, which also explains our baseline choice of DDQN. However, by complementing it with the K-means clustering method, \textit{{AIpert}}'s approach achieved higher performance than our baseline. Both \textit{{AIpert}} and \textit{{automl-freiburg}} share the same idea of suggesting algorithms based on dataset similarity.
\begin{table*}[]
\centering
\caption{\textbf{Final phase ranking of the first round}. Teams are ranked based on their average ALC scores, recorded as the worst run among three runs. Team \textit{{MoRiHa}} was ranked 1st, but did not qualify for a monetary prize. Teams \textit{{neptune}}, \textit{{AIpert}}, and \textit{{automl-freiburg}} qualified for the prizes.}
\label{table:summary}
\begin{tabular}{|c|c|c|c|l|}
\hline
\rowcolor[HTML]{FFFFFF}
\textbf{Rank} & \textbf{\begin{tabular}[c]{@{}c@{}}Team\\ (username)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}ALC \\ score\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Monetary Prize\\ qualification\end{tabular}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFFFF}\textbf{\begin{tabular}[c]{@{}c@{}}Comments/\\source code\end{tabular}}} \\ \hline
\rowcolor[HTML]{FFFFFF}
1 & \begin{tabular}[c]{@{}c@{}}\textbf{{MoRiHa}}\\ (username: jmhansel)\end{tabular} & 0.43 & NO & \begin{tabular}[c]{@{}l@{}}This team is not qualified for\\ a monetary prize due to close\\ relation with the organizers\\ {\href{https://github.com/fmohr/meta-learn-from-LC-2022}{[CODE URL]} \href{https://drive.google.com/file/d/1CdLOEuM-7C94WdTSwHHgelGAOfZ-xCNW/view?usp=sharing}{[FACTSHEET]}} \end{tabular} \\ \hline
\rowcolor[HTML]{FFFFFF}
2 & \begin{tabular}[c]{@{}c@{}}\textbf{{neptune}}\\ (username: {neptune})\end{tabular} & 0.42 & OK & \begin{tabular}[c]{@{}l@{}} {\href{https://github.com/neptuneai/MetaDL}{[CODE URL]} \href{https://drive.google.com/file/d/1eAI0GD-0cYwI3XND23pkX5t0nvWVRy6t/view?usp=sharing}{[FACTSHEET]}} \end{tabular} \\ \hline
\rowcolor[HTML]{FFFFFF}
3 & \begin{tabular}[c]{@{}c@{}}\textbf{{AIpert}}\\ (username: {AIpert})\end{tabular} & 0.40 & OK & {\href{https://github.com/EleGo9/Meta-Learning-Curve_AIpert}{[CODE URL]} \href{https://drive.google.com/file/d/1gMSTlcD2__EkFKW3qd7bV9WYb51PWs1O/view?usp=sharing}{[FACTSHEET]}} \\ \hline
\rowcolor[HTML]{FFFFFF}
4 & \begin{tabular}[c]{@{}c@{}}\textbf{{automl-freiburg}}\\(username: {automl-freiburg})\end{tabular} & 0.37 & OK & {\href{https://github.com/sebastianpinedaar/meta-learn-from-LC}{[CODE URL]} \href{https://drive.google.com/file/d/1y8mwxa5jVt-BM5iLPXJHVHwxpbW0OIt-/view?usp=sharing}{[FACTSHEET]}} \\ \hline
\rowcolor[HTML]{FFFFFF}
5 & \begin{tabular}[c]{@{}c@{}}\textbf{{automl-hannover}} \\ (username: {amsks})\end{tabular} & 0.32 & & {\href{https://github.com/timruhkopf/Meta_challenge}{[CODE URL]} \href{https://drive.google.com/file/d/1bABinDdTtQ524QtHMyte0oYL336JvCgk/view?usp=sharing}{[FACTSHEET]}} \\ \hline
\rowcolor[HTML]{FFFFFF}
6 &\begin{tabular}[c]{@{}c@{}}\textbf{{pprp}} \\ (username: {pprp}) \end{tabular} & 0.24 & & \\ \hline
\rowcolor[HTML]{FFFFFF}
7 & \begin{tabular}[c]{@{}c@{}}\textbf{{arushsharma24}} \\ (username: {arushsharma24}) \end{tabular} & 0.23 & & \\ \hline
\rowcolor[HTML]{FFFFFF}
8 & \begin{tabular}[c]{@{}c@{}}\textbf{{Xavier}} \\ (username: {Xavier}) \end{tabular} & 0.17 & & \\ \hline
\end{tabular}
\end{table*}
\renewcommand{\arraystretch}{1.2}
\begin{table*}
\centering
\caption{\textbf{ALC scores of the top 5 methods: {MoRiHa}~(AT01), {neptune}~(AT02), {AIpert}~(AT03), {automl-freiburg}~(AT04), {automl-hannover}~(AT05), and our baselines: Double Deep Q Network (DDQN), Best on Samples (BOS), Freeze-Thaw BO (FT), Average Rank (AR), Random Search (RS)}. The reported scores correspond to the worst of 3 runs for each method. The last row shows the average ALC scores (in descending order, from left to right) over 30 datasets.}
\label{table:result-details}
\includegraphics[width=0.9\textwidth]{fig/1st-round-heatmap.png}
\end{table*}
\section{Winning solutions}
In this section, we briefly introduce strategies of the winning solutions. More details on their implementations can be found in our factsheet summary (Appendix \ref{sec:factsheet}) or in individual factsheets provided by the winning teams in Table \ref{table:summary}.
\subsection{Team \textit{{MoRiHa}} (1st place)}
According to team \textit{{MoRiHa}}, they focus on ``doing the right thing at the right time''. Their agent is very goal-oriented (i.e. maximizing the area under the learning curve) and does not rely on complex models or expensive computations. They emphasize the importance of having an accurate schedule regarding the invested time (choosing $\Delta t$ for each query) in order to avoid wasting the time budget. One of their key findings is that switching between algorithms during exploration is very costly and it is rarely beneficial to switch the explored algorithm more than once. They build a \textbf{time model} for each algorithm to predict the time when the \textbf{first point} on the algorithm's learning curve is available. In addition, they keep a \textbf{list of algorithms ranked} in descending order of their ALC in the meta-training phase. In meta-testing, their algorithm scheduler explores algorithms in this order, starting from the best algorithm. If an algorithm's learning curve stalls, the next algorithm is chosen. Time allocation is done using the time models and a \textbf{heuristic} procedure. This is the only team that assumed the learning curves are monotonically increasing.
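To illustrate the general idea of such a rank-based scheduler (and not team \textit{{MoRiHa}}'s actual implementation), the toy sketch below explores algorithms in meta-training rank order and switches only when the current curve stalls; all names, the observation format, and thresholds are assumptions.
\begin{verbatim}
class RankedSchedulerAgent:
    """Explore algorithms in rank order; switch only when the curve stalls."""

    def __init__(self, ranked_algorithms, delta_t=10.0, stall_eps=1e-3):
        self.ranked = list(ranked_algorithms)  # best-first, from meta-training ALC
        self.delta_t = delta_t                 # hypothetical fixed time step
        self.stall_eps = stall_eps
        self.current = 0
        self.last_score = None

    def suggest(self, observation):
        # observation: (algo_index, last_score), or None at the first step.
        if observation is not None:
            _, score = observation
            stalled = (self.last_score is not None
                       and score - self.last_score < self.stall_eps)
            self.last_score = score
            if stalled and self.current + 1 < len(self.ranked):
                self.current += 1        # move on to the next-best algorithm
                self.last_score = None
        return self.ranked[self.current], self.delta_t
\end{verbatim}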
\subsection{Team \textit{{neptune}} (2nd place)}
As described by team \textit{{neptune}}, they train a learning curve predictor to \textbf{predict unseen points} on the learning curves (i.e. by interpolating the original scores) in order to find the best algorithm for a new dataset. In addition, they train an algorithm classifier to categorize algorithms into three groups based on their \textbf{learning curve convergence speed}: Fast/Medium/Slow. Different budget allocation strategies are selected according to the algorithm's convergence type. Regarding their implementation, they train MLP networks to perform both tasks: learning curve prediction and algorithm classification.
\subsection{Team \textit{{AIpert}} (3rd place)}
According to team \textit{{AIpert}}, their method aims at uncovering good algorithms as fast as possible, using a low-computational cost and simple process. The novelty lies in the combination of an off-policy Reinforcement Learning method (\textbf{Q-learning}) and \textbf{K-means clustering} model. As similar datasets usually have the same \textit{learning behavior}, organizing similar datasets into groups based on their meta-features is essential. They thus build a Q-learning matrix for each dataset cluster (12 clusters in total). In meta-training, each dataset is seen multiple times and the corresponding Q-learning matrix is updated. This allows the agents to be exposed to more situations (different observations and rewards) on a dataset. In meta-testing, they first determine which cluster the given dataset belongs to. Then, the Q-learning matrix associated with the cluster is utilized as a policy to guide the agent. $\Delta t$ is not chosen as an absolute value but from \textbf{a fixed list of time portions} of the total time budget.
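A schematic sketch of this kind of cluster-conditioned Q-learning setup is shown below; it only illustrates the combination of K-means clustering with per-cluster Q-tables, and the meta-features, state/action encodings, and update parameters are dummy assumptions rather than \textit{{AIpert}}'s code.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
meta_features = rng.normal(size=(30, 8))   # 30 datasets x 8 meta-features (dummy)
n_clusters, n_states, n_actions = 12, 10, 20

kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(meta_features)
q_tables = np.zeros((n_clusters, n_states, n_actions))  # one Q-matrix per cluster

def q_update(c, s, a, reward, s_next, alpha=0.1, gamma=0.95):
    # Standard Q-learning update applied to the cluster-specific table.
    best_next = q_tables[c, s_next].max()
    q_tables[c, s, a] += alpha * (reward + gamma * best_next - q_tables[c, s, a])

# At meta-test time, route a new dataset to its cluster's Q-table.
new_dataset_meta = rng.normal(size=(1, 8))
c = int(kmeans.predict(new_dataset_meta)[0])
greedy_action = int(q_tables[c, 0].argmax())
\end{verbatim}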
\subsection{Team \textit{{automl-freiburg}} (4th place)}
As described by team \textit{{automl-freiburg}}, their algorithm selection policy is based on a \textbf{Deep Gaussian Process} (DGP) surrogate, in a similar way to FSBO \cite{fsbo}. The surrogate aims to predict the performance of an algorithm at time $t + \Delta t$, with $\Delta t$ chosen by a \textbf{trained budget predictor}. The DGP is trained during the meta-training phase and fine-tuned in the meta-testing phase. During meta-training, they store the best algorithm for each dataset to be used as the first algorithm to query during meta-testing. The ``best'' algorithm is defined as the one that has the highest $\frac{y_0}{t_0}$, which means they favor algorithms that achieve \textbf{high performance early}. Given a new dataset in meta-testing, the best algorithm of the \textbf{closest dataset} (previously seen in meta-training, based on the Euclidean distance) is selected. During meta-testing, they keep updating the observed algorithms and fine-tune the DGP surrogate.
\section{Lessons learned and new design}
In this section, we describe the limitations of our original design and how we addressed them in the second round.
First, one set of limitations arises because we precomputed learning curves (performance as a function of time) and therefore had a fixed, predetermined set of sampled time points. In our first challenge edition, we opted to interpolate between time points with the last recorded performance. The criticism of participants was that, in real life, if they chose an in-between time point, they would get new information. To mitigate this, in the new challenge round, the participants cannot choose an arbitrary time point (thus no interpolation is needed); they have to choose the next pre-computed learning curve point.
We thus provide a new type of learning curve for the second round: learning curve as a function of training data size, as opposed to learning curves as a function of time. The time budget for querying a point on a learning curve will be returned by the environment and not chosen by the agent, which means that the agent has to pay whatever it costs.
Second, the test learning curves were highly correlated with the validation curves. Therefore, one could overfit the former by simply overfitting the latter. In the second round, the agent will always be evaluated using the test learning curves but on a completely different set of datasets in each phase (Development phase and Final phase).
The second round of our challenge was accepted at the {AutoML-Conf 2022}. Participation in the first round is not a prerequisite for the second round. The second round comes with some \textbf{new features}, including:
\begin{itemize}
\item \textbf{Learning curve}: we focus on learning curves as functions of training data size. We thus collected a new large meta-dataset of such learning curves.
\item \textbf{Competition protocol}: Given a portfolio of algorithms, an agent suggests which algorithm and the amount of training data to evaluate the algorithm on a new task (dataset) efficiently. The agent observes information on both the \textit{training learning curves} and \textit{validation learning curves} to plan for the next step. Test learning curves, which are kept hidden, will be used for evaluating the agent.
\item \textbf{Data split}: We use half of the meta-dataset for the Development phase and the other ``fresh'' half to evaluate the agent in the Final phase.
\end{itemize}
The second round is split into three phases:
\begin{itemize}
\item \textbf{Public phase (1 week)}: participants practice with the given starting kit and sample data
\item \textbf{Development phase (6 weeks)}: participants submit agents that are meta-trained and meta-tested on the platform. 15 datasets will be used in this phase.
\item \textbf{Final phase (1 week)}: no further submissions are accepted in this phase. The last submission of each participant in the Development phase is forwarded automatically to this phase and evaluated on 15 fresh datasets (not used in the Development phase).
\end{itemize}
Like in the first round, this is a competition with code submission and the participants do not see the data in either phase (only their submitted agent is exposed to the meta-datasets).
We created a new meta-dataset of pre-computed learning curves of 40 algorithms with different hyperparameters on the 30 datasets used in the AutoML challenge. The algorithms are created from four base algorithms (K-Nearest Neighbors (KNN), Multilayer Perceptron (MLP), AdaBoost, and Stochastic Gradient Descent (SGD)) with different hyperparameter values. We added meta-features of datasets and hyperparameters of algorithms. We respected the data split of the AutoML challenge to produce three sets of learning curves for each task, computed on the \textbf{training, validation, and test sets}. The type of metric used to compute the learning curves of the meta-dataset is provided in the meta-features of the dataset. We also generated a new synthetic meta-dataset that contains 12000 learning curves, in a similar way as in the first round, but with the new type of learning curve explained above (learning curve as a function of training data size).
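As an illustration, a portfolio of this kind could be assembled along the following lines; the hyperparameter values shown are placeholders and not the grid actually used to build the meta-dataset.
\begin{verbatim}
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# 4 base algorithms x a few hyperparameter settings each -> candidate portfolio.
portfolio = {}
for k in (1, 5, 15, 45):
    portfolio[f"knn_k{k}"] = KNeighborsClassifier(n_neighbors=k)
for h in ((32,), (64,), (64, 64)):
    portfolio[f"mlp_{'x'.join(map(str, h))}"] = MLPClassifier(hidden_layer_sizes=h)
for n in (50, 100, 200):
    portfolio[f"adaboost_n{n}"] = AdaBoostClassifier(n_estimators=n)
for alpha in (1e-4, 1e-3, 1e-2):
    portfolio[f"sgd_a{alpha}"] = SGDClassifier(alpha=alpha)

print(len(portfolio), "candidate algorithms")
\end{verbatim}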
In meta-training, the following data is given to the agent to meta-learn: meta-features of datasets, hyperparameters of algorithms, training learning curves, validation learning curves, and test learning curves. While in meta-testing, the agent interacts with an environment in a Reinforcement Learning style. Given a portfolio of algorithms, an agent suggests which algorithm and the amount of training data to evaluate the algorithm on a new task (dataset) efficiently. The agent observes information on both the training learning curve and validation learning curve to plan for the next step. An episode ends when the given time budget is exhausted. The following two lines of code demonstrate the interactions between the agent and the environment:
\begin{center}
\texttt{action = trained\_agent.suggest(observation)}\\
\texttt{observation, done = env.reveal(action)}
\end{center}
where:\\
\textbf{observation}: a tuple $(A, p, t, R\_train\_A\_p, R\_validation\_A\_p)$, with:
\begin{itemize}
\item $A$: index of the algorithm provided in the previous action,
\item $p$: decimal fraction of training data used, with $p \in \{0.1, 0.2, \ldots, 1.0\}$,
\item $t$: the amount of time it took to train $A$ with a training data fraction of $p$ and make predictions on the training/validation/test sets,
\item $R\_train\_A\_p$: performance score on the training set
\item $R\_validation\_A\_p$: performance score on the validation set
\end{itemize}
\textbf{action}: a tuple of $(A, p)$, with:
\begin{itemize}
\item $A$: index of the algorithm to be trained and tested
\item $p$: decimal fraction of training data used, with $p \in \{0.1, 0.2, \ldots, 1.0\}$.
\end{itemize}
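The sketch below puts the interaction protocol above into a minimal meta-testing loop, together with a trivial agent that sweeps the portfolio and requests 10\% more training data each time it revisits an algorithm. Only the \texttt{suggest}/\texttt{reveal} interface follows the description above; everything else is illustrative.
\begin{verbatim}
def run_episode(trained_agent, env):
    # Meta-testing loop: alternate agent suggestions and environment reveals.
    observation, done = None, False
    while not done:
        action = trained_agent.suggest(observation)   # action = (A, p)
        observation, done = env.reveal(action)        # (A, p, t, R_train, R_val)


class SweepAgent:
    """Illustrative agent: cycle over algorithms, growing p by 0.1 per revisit."""

    def __init__(self, n_algorithms):
        self.n_algorithms = n_algorithms
        self.p = {a: 0.0 for a in range(n_algorithms)}
        self.next_algo = 0

    def suggest(self, observation):
        a = self.next_algo
        self.next_algo = (self.next_algo + 1) % self.n_algorithms
        self.p[a] = min(round(self.p[a] + 0.1, 1), 1.0)  # p in {0.1, ..., 1.0}
        return a, self.p[a]
\end{verbatim}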
The scoring program automatically chooses the best algorithm at each time step (i.e. the algorithm with the highest validation score found so far, which is different from the first round where it was chosen by the agent) to compute the agent's test learning curve (as a function of time spent). The metric used for ranking on the leaderboard is the Area under the agent's Learning Curve (ALC).
\subsection{Baseline results of the second round}
We re-use our baselines from the first round: the Double Deep Q Network, Best on Samples, Freeze-Thaw Bayesian Optimization, Average Rank, and Random Search agents. Their implementations are taken from the first round (\textit{Anonymous-paper}) with some modifications in order to work with our new protocol and meta-dataset. We also provide them (except the Random Search agent) with a smarter policy for choosing the training data size at each step. Each agent keeps track of the algorithms ($A$) and the training data sizes ($p$) tested on them. Every time it re-selects an algorithm, it increases $p$ by 0.1 to advance along that algorithm's learning curve.
We run each method three times and report the worst run of each in Table \ref{table:2nd-round-baselines}. As in the first round, among the baselines, \textbf{DDQN} still performs best in the second round with an average ALC of 0.38. It is the winner (or co-winner) on 16 out of 30 datasets, with the largest performance gap over the other baselines observed on the dataset \textit{tania}. \textbf{BOS} and \textbf{FT} are not far behind with average scores of 0.37 and 0.36, respectively. The improvement of \textbf{BOS} compared to the first round can be explained by the adaptation of its strategy in this round. It tries each method on a small subset of the training data first (e.g. 10 percent of the training samples), instead of spending a fixed amount of time as in the previous round. This helps it avoid wasting time budget, since choosing an adequate amount of time to obtain a new point on the learning curve is no longer necessary in the second round. The success of \textbf{BOS} suggests that information on the first point of the learning curve is crucial for decisions on selecting algorithms.
\textbf{AR} and \textbf{RS} perform poorly with the same score of 0.28. Although focusing on only one algorithm, as the \textbf{AR} method does, can bring benefits in some cases (e.g. on the datasets \textit{cadata} and \textit{didonis}), a dataset-dependent policy for selecting algorithms is still necessary to be successful on multiple cross-domain tasks.
\begin{table*}
\centering
\caption{\textbf{Baseline results in the second round.} We show ALC scores of Double Deep Q Network (DDQN), Best on Samples (BOS), Freeze-Thaw Bayesian Optimization (FT), Average Rank (AR), and Random Search (RS) on each dataset, in descending order of the average ALC scores (on the last row), from left to right. The first 15 datasets are used in the Development phase, the rest is for the Final phase.}
\label{table:2nd-round-baselines}
\includegraphics[width=0.7\textwidth]{fig/2nd-round-baselines-heatmap.png}
\end{table*}
\section{Conclusion}
The first round results of our challenge have revealed that agents that learn
policies both for selecting algorithms and for allocating the time budget are more successful in our challenge setting. Team \textit{{MoRiHa}}, who finished in 1st place, outperformed all other teams and our baselines on two-thirds of the datasets. We propose a novel setting and a new meta-dataset for the second round and present baseline results. \textbf{DDQN} maintains its advantage in the second round, while an intuitive and simple baseline such as \textbf{BOS} can also work quite well. We are looking forward to seeing whether the findings of the first round will be reinforced and whether participants' solutions can outperform our baselines significantly in the second round.
For our future work, we want to perform more post-challenge analyses to verify whether progress was made in meta-learning from learning curves. First, we would like to do a point-by-point comparison of the winning methods in the first round, based on their fact sheets. Second, to investigate further the winning methods and see what contributed the most to their success, we want to perform systematic experiments in collaboration with the winners. More concretely, we will build a common workflow and ask participants to conduct ablation studies. Lastly, we are also interested in examining
the effect of changes in our reward function hyper-parameters on participants' performances.
\section*{Acknowledgment}
We would like to thank challenge participants for providing feedback and sharing their methods. We are grateful to Chalearn for donating prizes and Google for providing computing resources. This work was supported by ChaLearn, the ANR Chair of Artificial Intelligence HUMANIA ANR-19-CHIA-0022 and TAILOR EU Horizon 2020 grant 952215.
\section*{Software and Data}
All software (including the starting kit and winning solutions) is open-sourced on our website (\url{https://metalearning.chalearn.org/}). The meta-datasets will remain private on the challenge platform ({\textit{Codalab}}) to serve as a long-lasting benchmark for research in meta-learning.
\bibliographystyle{icml2022}
\newpage
\section{ALGORITHM}
\label{sec:algorithm}
\begin{algorithm}[t]
\caption{Hierarchical Thompson sampling.}
\label{alg:ts}
\begin{algorithmic}[1]
\State \textbf{Input:} Hyper-prior $Q$
\State Initialize $Q_1 \gets Q$
\For{$t = 1, 2, \dots$}
\State Observe tasks $\mathcal{S}_t \subseteq [m]$
\State Sample $\mu_t \sim Q_t$
\For{$s \in \mathcal{S}_t$}
\State Compute $P_{s, t}(\theta \mid \mu_t) \propto \mathcal{L}_{s, t}(\theta) P(\theta \mid \mu_t)$
\State Sample $\theta_{s, t} \sim P_{s, t}(\cdot \mid \mu_t)$
\State Take action $A_{s, t} \gets \argmax_{a \in \mathcal{A}} r(a; \theta_{s, t})$
\State Observe reward $Y_{s, t}$
\EndFor
\State Update $Q_{t + 1}$
\EndFor
\end{algorithmic}
\end{algorithm}
We take a Bayesian view and use hierarchical \emph{Thompson sampling (TS)}, which we call \ensuremath{\tt HierTS}\xspace, to solve our problem class. \ensuremath{\tt HierTS}\xspace samples task parameters from their posterior conditioned on history. Specifically, let $H_{s, t} = ((A_{s, \ell}, Y_{s, \ell}))_{\ell < t, \, s \in \mathcal{S}_\ell}$ denote the history of all interactions of \ensuremath{\tt HierTS}\xspace with task $s$ until round $t$, and $H_t = (H_{s, t})_{s \in [m]}$ be the concatenation of all histories up to round $t$. For each task $s \in \mathcal{S}_t$ in round $t$, \ensuremath{\tt HierTS}\xspace samples $\theta_{s, t} \sim \prob{\theta_{s, *} = \cdot \mid H_t}$ and then takes action $A_{s, t} = \argmax_{a \in \mathcal{A}} r(a; \theta_{s, t})$. The key difference from classical Thompson sampling is that the history $H_t$ includes observations of multiple tasks.
To sample $\theta_{s, t}$, we must address how the uncertainty over the unknown hyper-parameter $\mu_*$ and task parameters $\theta_{s, *}$ is modeled. The key idea is to maintain a \emph{hyper-posterior} $Q_t$ over $\mu_*$, given by
\begin{align*}
Q_t(\mu)
= \prob{\mu_* = \mu \mid H_t}\,,
\end{align*}
and then perform two-stage sampling. In particular, in round $t$, we first sample hyper-parameter $\mu_t \sim Q_t$. Next, for any task $s \in \mathcal{S}_t$, we sample the task parameter $\theta_{s, t} \sim P_{s, t}(\cdot \mid \mu_t)$, where
\begin{align*}
P_{s, t}(\theta \mid \mu)
= \prob{\theta_{s, *} = \theta \mid \mu_* = \mu, H_{s, t}}\,.
\end{align*}
In $P_{s, t}(\cdot \mid \mu)$, we only condition on the history of task $s$, since $\theta_{s, *}$ is independent of the other task histories given $\mu_* = \mu$ (\cref{fig:setting}). This process clearly samples from the true posterior, which is given by
\begin{align}
\condprob{\theta_{s, *} = \theta}{H_t}
& = \int_\mu \condprob{\theta_{s, *} = \theta, \mu_* = \mu}{H_t} \dif \mu
\label{eq:task posterior} \\
& = \int_\mu P_{s, t}(\theta \mid \mu)
Q_t(\mu) \dif \mu\,,
\nonumber
\end{align}
where $P_{s, t}(\theta \mid \mu) \propto \mathcal{L}_{s, t}(\theta) P(\theta \mid \mu)$ and
\begin{align*}
\textstyle
\mathcal{L}_{s, t}(\theta)
= \prod_{(a, y) \in H_{s, t}} P(y \mid a; \theta)
\end{align*}
denotes the likelihood of rewards in task $s$ given task parameter $\theta$.
The pseudo-code of \ensuremath{\tt HierTS}\xspace is shown in \cref{alg:ts}. Sampling in \eqref{eq:task posterior} can be implemented exactly in Gaussian graphical models (\cref{sec:models}). These models have interpretable closed-form posteriors, which permit the regret analysis of \ensuremath{\tt HierTS}\xspace. In practice, \ensuremath{\tt HierTS}\xspace can be implemented for any posterior distributions, but may require approximate inference \citep{doucet01sequential} to tractably sample from the posterior.
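To make the two-stage sampling concrete, the following toy sketch instantiates it for $m$ tasks that are $K$-armed Gaussian bandits with per-arm means treated independently (diagonal covariances), so that the hyper-posterior and task posteriors reduce to scalar conjugate Gaussian updates. It is a simplified illustration under these assumptions, not the general linear-bandit implementation analyzed later, and all numerical values are toy choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy instantiation of the two-stage sampling above: m tasks, each a K-armed
# Gaussian bandit with independent per-arm means (diagonal covariances).
m, K, n_rounds = 5, 4, 200
sigma_q, sigma_0, sigma = 1.0, 0.5, 0.5   # hyper-prior, task-prior, noise std
mu_q = np.zeros(K)

# Ground truth sampled from the hierarchical model.
mu_star = rng.normal(mu_q, sigma_q)                      # shared hyper-parameter
theta_star = rng.normal(mu_star, sigma_0, size=(m, K))   # per-task arm means

counts = np.zeros((m, K))   # number of pulls of each arm in each task
sums = np.zeros((m, K))     # running sum of observed rewards

for t in range(n_rounds):
    s = t % m               # one task per round (sequential regime)

    # Hyper-posterior Q_t: integrating out theta_{s,*}, each observed task/arm
    # contributes its sample mean with variance sigma_0^2 + sigma^2 / n_{s,a}.
    obs = counts > 0
    task_mean = np.where(obs, sums / np.maximum(counts, 1), 0.0)
    task_var = sigma_0**2 + sigma**2 / np.maximum(counts, 1)
    prec = 1.0 / sigma_q**2 + np.where(obs, 1.0 / task_var, 0.0).sum(axis=0)
    mean = (mu_q / sigma_q**2
            + np.where(obs, task_mean / task_var, 0.0).sum(axis=0)) / prec
    mu_t = rng.normal(mean, np.sqrt(1.0 / prec))         # sample mu_t ~ Q_t

    # Task posterior P_{s,t}(. | mu_t), conditioned only on task s's history.
    post_prec = 1.0 / sigma_0**2 + counts[s] / sigma**2
    post_mean = (mu_t / sigma_0**2 + sums[s] / sigma**2) / post_prec
    theta_st = rng.normal(post_mean, np.sqrt(1.0 / post_prec))

    a = int(np.argmax(theta_st))                # A_{s,t}: greedy w.r.t. the sample
    y = rng.normal(theta_star[s, a], sigma)     # observe reward Y_{s,t}
    counts[s, a] += 1
    sums[s, a] += y
\end{verbatim}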
\section{Proof of \cref{lem:bayes regret}}
\label{sec:bayes regret proof}
The first claim is proved as follows. Fix round $t$ and task $s \in \mathcal{S}_t$. Since $\hat{\mu}_{s, t}$ is a deterministic function of $H_t$, and $A_{s, *}$ and $A_{s, t}$ are i.i.d.\ given $H_t$, we have
\begin{align*}
\E{}{A_{s, *}^\top \theta_{s, *} - A_{s, t}^\top \theta_{s, *}}
= \E{}{\condE{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t})}{H_t}} +
\E{}{\condE{A_{s, t}^\top (\hat{\mu}_{s, t} - \theta_{s, *})}{H_t}}\,.
\end{align*}
Moreover, $\theta_{s, *} - \hat{\mu}_{s, t}$ is a zero-mean random vector independent of $A_{s, t}$, and thus $\condE{A_{s, t}^\top (\hat{\mu}_{s, t} - \theta_{s, *})}{H_t} = 0$. So we only need to bound the first term above. Let
\begin{align*}
E_{s, t} =
\set{\normw{\theta_{s, *} - \hat{\mu}_{s, t}}{\hat{\Sigma}_{s, t}^{-1}}
\leq \sqrt{2 d \log(1 / \delta)}}
\end{align*}
be the event that a high-probability confidence interval for the task parameter $\theta_{s, *}$ holds. Fix history $H_t$. Then by the Cauchy-Schwarz inequality,
\begin{align*}
\condE{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t})}{H_t}
& \leq \condE{\normw{A_{s, *}}{\hat{\Sigma}_{s, t}}
\normw{\theta_{s, *} - \hat{\mu}_{s, t}}{\hat{\Sigma}_{s, t}^{-1}}}{H_t} \\
& \leq \sqrt{2 d \log(1 / \delta)} \, \condE{\normw{A_{s, *}}{\hat{\Sigma}_{s, t}}}{H_t} +
\underbrace{\max_{a \in \mathcal{A}} \normw{a}{\hat{\Sigma}_{s, t}}}_{\leq \sigma_{\max}}
\condE{\normw{\theta_{s, *} - \hat{\mu}_{s, t}}{\hat{\Sigma}_{s, t}^{-1}}
\I{\bar{E}_{s, t}}}{H_t} \\
& = \sqrt{2 d \log(1 / \delta)} \, \condE{\normw{A_{s, t}}{\hat{\Sigma}_{s, t}}}{H_t} +
\sigma_{\max} \, \condE{\normw{\theta_{s, *} - \hat{\mu}_{s, t}}{\hat{\Sigma}_{s, t}^{-1}}
\I{\bar{E}_{s, t}}}{H_t}\,.
\end{align*}
The equality follows from the fact that $\hat{\Sigma}_{s, t}$ is a deterministic function of $H_t$, and that $A_{s, *}$ and $A_{s, t}$ are i.i.d.\ given $H_t$. Now we focus on the second term above. First, note that
\begin{align*}
\normw{\theta_{s, *} - \hat{\mu}_{s, t}}{\hat{\Sigma}_{s, t}^{-1}}
= \normw{\hat{\Sigma}^{- \frac{1}{2}}_{s, t} (\theta_{s, *} - \hat{\mu}_{s, t})}{2}
\leq \sqrt{d} \maxnorm{\hat{\Sigma}^{- \frac{1}{2}}_{s, t} (\theta_{s, *} - \hat{\mu}_{s, t})}\,.
\end{align*}
By definition, $\theta_{s, *} - \hat{\mu}_{s, t} \mid H_t \sim \mathcal{N}(\mathbf{0}, \hat{\Sigma}_{s, t})$, and hence $\hat{\Sigma}^{- \frac{1}{2}}_{s, t} (\theta_{s, *} - \hat{\mu}_{s, t}) \mid H_t$ is a $d$-dimensional standard normal variable. Moreover, note that $\bar{E}_{s, t}$ implies $\maxnorm{\hat{\Sigma}^{- \frac{1}{2}}_{s, t} (\theta_{s, *} - \hat{\mu}_{s, t})} \geq \sqrt{2 \log(1 / \delta)}$. Finally, we combine these facts with a union bound over all entries of $\hat{\Sigma}^{- \frac{1}{2}}_{s, t} (\theta_{s, *} - \hat{\mu}_{s, t}) \mid H_t$, which are standard normal variables, and get
\begin{align*}
\condE{\maxnorm{\hat{\Sigma}^{- \frac{1}{2}}_{s, t} (\theta_{s, *} - \hat{\mu}_{s, t})}
\I{\bar{E}_{s, t}}}{H_t}
\leq 2 \sum_{i = 1}^d \frac{1}{\sqrt{2 \pi}} \int_{u = \sqrt{2 \log(1 / \delta)}}^\infty
u \exp\left[- \frac{u^2}{2}\right] \dif u
\leq \sqrt{\frac{2}{\pi}} d \delta\,.
\end{align*}
Now we combine all inequalities and have
\begin{align*}
\condE{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t})}{H_t}
\leq \sqrt{2 d \log(1 / \delta)} \, \condE{\normw{A_{s, t}}{\hat{\Sigma}_{s, t}}}{H_t} +
\sqrt{\frac{2}{\pi}} \sigma_{\max} d^\frac{3}{2} \delta\,.
\end{align*}
Since the above bound holds for any history $H_t$, we combine everything and get
\begin{align*}
\E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} A_{s, *}^\top \theta_{s, *} - A_{s, t}^\top \theta_{s, *}}
& \leq \sqrt{2 d \log(1 / \delta)} \,
\E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \normw{A_{s, t}}{\hat{\Sigma}_{s, t}}} +
\sqrt{\frac{2}{\pi}} \sigma_{\max} d^\frac{3}{2} m n \delta \\
& \leq \sqrt{2 d m n \log(1 / \delta)}
\sqrt{\E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \normw{A_{s, t}}{\hat{\Sigma}_{s, t}}^2}} +
\sqrt{\frac{2}{\pi}} \sigma_{\max} d^\frac{3}{2} m n \delta\,.
\end{align*}
The last step uses the Cauchy-Schwarz inequality and the concavity of the square root.
To bound $\sigma_{\max}$, we use Weyl's inequalities together with \eqref{eq:covariance decomposition}, the second claim in \cref{lem:covariance decomposition}, and \eqref{eq:linear hyperposterior}. Specifically, under the assumption that $\normw{a}{2} \leq 1$ for all $a \in \mathcal{A}$, we have
\begin{align*}
\max_{a \in \mathcal{A}} \normw{a}{\hat{\Sigma}_{s, t}}^2
& \leq \lambda_1(\hat{\Sigma}_{s, t})
\leq \lambda_1((\Sigma_0^{-1} + G_{s, t})^{-1}) +
\lambda_1((\Sigma_0^{-1} + G_{s, t})^{-1} \Sigma_0^{-1} \bar{\Sigma}_t
\Sigma_0^{-1} (\Sigma_0^{-1} + G_{s, t})^{-1}) \\
& \leq \lambda_1(\Sigma_0) +
\frac{\lambda_1^2(\Sigma_0) \lambda_1(\Sigma_q)}{\lambda_d^2(\Sigma_0)}
= \sigma_{\max}^2\,.
\end{align*}
This concludes the proof of the first claim.
The second claim is proved by modifying the first proof as follows. Fix round $t$ and task $s \in \mathcal{S}_t$. Let
\begin{align*}
E_{s, t} =
\set{\forall a \in \mathcal{A}: |a^\top (\theta_{s, *} - \hat{\mu}_{s, t})|
\leq \sqrt{2 \log(1 / \delta)} \normw{a}{\hat{\Sigma}_{s, t}}}
\end{align*}
be the event that all high-probability confidence intervals hold. Then we have
\begin{align*}
\condE{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t})}{H_t}
\leq \sqrt{2 \log(1 / \delta)} \, \condE{\normw{A_{s, t}}{\hat{\Sigma}_{s, t}}}{H_t} +
\condE{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t}) \I{\bar{E}_{s, t}}}{H_t}\,.
\end{align*}
Now note that for any action $a$, $a^\top (\theta_{s, *} - \hat{\mu}_{s, t}) / \normw{a}{\hat{\Sigma}_{s, t}}$ is a standard normal variable. It follows that
\begin{align*}
\condE{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t}) \I{\bar{E}_{s, t}}}{H_t}
\leq 2 \sum_{a \in \mathcal{A}} \normw{a}{\hat{\Sigma}_{s, t}} \frac{1}{\sqrt{2 \pi}}
\int_{u = \sqrt{2 \log(1 / \delta)}}^\infty
u \exp\left[- \frac{u^2}{2}\right] \dif u
\leq \sqrt{\frac{2}{\pi}} \sigma_{\max} K \delta\,.
\end{align*}
The rest of the proof proceeds as in the first claim, yielding
\begin{align*}
\condE{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t})}{H_t}
\leq \sqrt{2 \log(1 / \delta)} \, \condE{\normw{A_{s, t}}{\hat{\Sigma}_{s, t}}}{H_t} +
\sqrt{\frac{2}{\pi}} \sigma_{\max} K \delta\,.
\end{align*}
This completes the proof.
\section{Proof of \cref{thm:sequential regret}}
\label{sec:sequential proof}
\cref{lem:bayes regret} says that the Bayes regret $\mathcal{BR}(m, n)$ can be bounded by bounding the sum of posterior variances $\mathcal{V}(m, n)$. Since $|\mathcal{S}_t| = 1$, we make two simplifications. First, we replace the set of tasks $\mathcal{S}_t$ by a single task $S_t \in [m]$. Second, there are exactly $m n$ rounds.
Fix round $t$ and task $s = S_t$. To reduce clutter, let $M = \Sigma_0^{-1} + G_{s, t}$. By the total covariance decomposition in \cref{lem:covariance decomposition}, we have that
\begin{align}
\normw{A_{s, t}}{\hat{\Sigma}_{s, t}}^2
& = \sigma^2 \frac{A_{s, t}^\top \hat{\Sigma}_{s, t} A_{s, t}}{\sigma^2}
= \sigma^2 \left(\sigma^{-2} A_{s, t}^\top \tilde{\Sigma}_{s, t} A_{s, t} +
\sigma^{-2} A_{s, t}^\top M^{-1} \Sigma_0^{-1} \bar{\Sigma}_t
\Sigma_0^{-1} M^{-1} A_{s, t}\right)
\nonumber \\
& \leq c_1 \log(1 + \sigma^{-2} A_{s, t}^\top \tilde{\Sigma}_{s, t} A_{s, t}) +
c_2 \log(1 + \sigma^{-2} A_{s, t}^\top M^{-1} \Sigma_0^{-1} \bar{\Sigma}_t
\Sigma_0^{-1} M^{-1} A_{s, t})
\nonumber \\
& = c_1 \log\det(I_d + \sigma^{-2}
\tilde{\Sigma}_{s, t}^\frac{1}{2} A_{s, t} A_{s, t}^\top \tilde{\Sigma}_{s, t}^\frac{1}{2}) +
c_2 \log\det(I_d + \sigma^{-2}
\bar{\Sigma}^\frac{1}{2}_t \Sigma_0^{-1} M^{-1} A_{s, t} A_{s, t}^\top
M^{-1} \Sigma_0^{-1} \bar{\Sigma}^\frac{1}{2}_t)\,.
\label{eq:sequential proof decomposition}
\end{align}
The last equality uses the matrix determinant lemma, $\det(I_d + u v^\top) = 1 + u^\top v$, applied to each rank-one matrix inside the determinant. The logarithmic terms are introduced using
\begin{align*}
x
= \frac{x}{\log(1 + x)} \log(1 + x)
\leq \left(\max_{x \in [0, u]} \frac{x}{\log(1 + x)}\right) \log(1 + x)
= \frac{u}{\log(1 + u)} \log(1 + x)\,,
\end{align*}
which holds for any $x \in [0, u]$. The resulting constants are
\begin{align*}
c_1
= \frac{\lambda_1(\Sigma_0)}{\log(1 + \sigma^{-2} \lambda_1(\Sigma_0))}\,, \quad
c_2
= \frac{c_q}{\log(1 + \sigma^{-2} c_q)}\,, \quad
c_q
= \frac{\lambda_1^2(\Sigma_0) \lambda_1(\Sigma_q)}{\lambda_d^2(\Sigma_0)}\,.
\end{align*}
The derivation of $c_1$ uses $\normw{A_{s, t}}{2} \leq 1$ and
\begin{align*}
A_{s, t}^\top \tilde{\Sigma}_{s, t} A_{s, t}
\leq \lambda_1(\tilde{\Sigma}_{s, t})
= \lambda_d^{-1}(\Sigma_0^{-1} + G_{s, t})
\leq \lambda_d^{-1}(\Sigma_0^{-1})
= \lambda_1(\Sigma_0)\,.
\end{align*}
The derivation of $c_2$ follows from
\begin{align*}
A_{s, t}^\top M^{-1} \Sigma_0^{-1} \bar{\Sigma}_t \Sigma_0^{-1} M^{-1} A_{s, t}
\leq \lambda_1^2(M^{-1}) \lambda_1^2(\Sigma_0^{-1}) \lambda_1(\bar{\Sigma}_t)
\leq \frac{\lambda_1^2(\Sigma_0) \lambda_1(\Sigma_q)}{\lambda_d^2(\Sigma_0)}\,.
\end{align*}
This is also proved as the second claim in \cref{lem:covariance decomposition}. Now we focus on bounding the logarithmic terms in \eqref{eq:sequential proof decomposition}.
\subsection{First Term in \eqref{eq:sequential proof decomposition}}
\label{sec:sequential proof 1}
This is a per-instance term and can be rewritten as
\begin{align*}
\log\det(I_d + \sigma^{-2}
\tilde{\Sigma}_{s, t}^\frac{1}{2} A_{s, t} A_{s, t}^\top \tilde{\Sigma}_{s, t}^\frac{1}{2})
= \log\det(\tilde{\Sigma}_{s, t}^{-1} + \sigma^{-2} A_{s, t} A_{s, t}^\top) - \log\det(\tilde{\Sigma}_{s, t}^{-1})\,.
\end{align*}
When we sum over all rounds with task $s$, we get telescoping and the contribution of this term is at most
\begin{align*}
\sum_{t = 1}^{m n} \I{S_t = s}
\log\det(I_d + \sigma^{-2} \tilde{\Sigma}_{s, t}^\frac{1}{2} A_{s, t}
A_{s, t}^\top \tilde{\Sigma}_{s, t}^\frac{1}{2})
& = \log\det(\tilde{\Sigma}_{s, m n + 1}^{-1}) - \log\det(\tilde{\Sigma}_{s, 1}^{-1})
= \log\det(\Sigma_0^\frac{1}{2} \tilde{\Sigma}_{s, m n + 1}^{-1} \Sigma_0^\frac{1}{2}) \\
& \leq d \log\left(\frac{1}{d} \trace(\Sigma_0^\frac{1}{2} \tilde{\Sigma}_{s, m n + 1}^{-1}
\Sigma_0^\frac{1}{2})\right)
\leq d \log\left(1 + \frac{\lambda_1(\Sigma_0) n}{\sigma^2 d}\right)\,,
\end{align*}
where we use that task $s$ appears at most $n$ times. Now we sum over all $m$ tasks and get
\begin{align*}
\sum_{t = 1}^{m n}
\log\det(I_d + \sigma^{-2} \tilde{\Sigma}_{S_t, t}^\frac{1}{2} A_{S_t, t}
A_{S_t, t}^\top \tilde{\Sigma}_{S_t, t}^\frac{1}{2})
\leq d m \log\left(1 + \frac{\lambda_1(\Sigma_0) n}{\sigma^2 d}\right)\,.
\end{align*}
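As a sanity check, the telescoping identity above can be verified numerically. The following is a minimal sketch, not part of the proof, assuming NumPy and synthetic values for $\Sigma_0$, $\sigma$, and the action features.
\begin{lstlisting}[language=Python]
import numpy as np

# Numerical check of the log-det telescoping for one task.
rng = np.random.default_rng(0)
d, n, sigma = 3, 50, 0.5
Sigma0 = np.diag(rng.uniform(0.5, 1.5, d))  # synthetic task-prior covariance
A = rng.uniform(-0.5, 0.5, (n, d))          # synthetic action features

# Left-hand side: sum of per-round log-det increments.
Prec = np.linalg.inv(Sigma0)                # running precision of the task posterior
lhs = 0.0
for a in A:
    Tilde = np.linalg.inv(Prec)
    R = np.linalg.cholesky(Tilde)           # any square root works here
    M = np.eye(d) + np.outer(R.T @ a, R.T @ a) / sigma**2
    lhs += np.linalg.slogdet(M)[1]
    Prec += np.outer(a, a) / sigma**2       # rank-one update after the round

# Right-hand side: the telescoped expression.
R0 = np.linalg.cholesky(Sigma0)
rhs = np.linalg.slogdet(R0.T @ Prec @ R0)[1]
print(np.isclose(lhs, rhs))                 # True up to numerical error
\end{lstlisting}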
\subsection{Second Term in \eqref{eq:sequential proof decomposition}}
\label{sec:sequential proof 2}
This is a hyper-parameter term. Before we analyze it, let $v = \sigma^{-1} M^{- \frac{1}{2}} A_{s, t}$ and note that
\begin{align}
\bar{\Sigma}_{t + 1}^{-1} - \bar{\Sigma}_t^{-1}
& = (\Sigma_0 + (G_{s, t} + \sigma^{-2} A_{s, t} A_{s, t}^\top)^{-1})^{-1} -
(\Sigma_0 + G_{s, t}^{-1})^{-1}
\nonumber \\
& = \Sigma_0^{-1} - \Sigma_0^{-1} (M + \sigma^{-2} A_{s, t} A_{s, t}^\top)^{-1} \Sigma_0^{-1} -
(\Sigma_0^{-1} - \Sigma_0^{-1} M^{-1} \Sigma_0^{-1})
\nonumber \\
& = \Sigma_0^{-1} (M^{-1} - (M + \sigma^{-2} A_{s, t} A_{s, t}^\top)^{-1}) \Sigma_0^{-1}
\nonumber \\
& = \Sigma_0^{-1} M^{- \frac{1}{2}}
(I_d - (I_d + \sigma^{-2} M^{- \frac{1}{2}} A_{s, t} A_{s, t}^\top M^{- \frac{1}{2}})^{-1})
M^{- \frac{1}{2}} \Sigma_0^{-1}
\nonumber \\
& = \Sigma_0^{-1} M^{- \frac{1}{2}}
(I_d - (I_d + v v^\top)^{-1})
M^{- \frac{1}{2}} \Sigma_0^{-1}
\nonumber \\
& = \Sigma_0^{-1} M^{- \frac{1}{2}}
\frac{v v^\top}{1 + v^\top v}
M^{- \frac{1}{2}} \Sigma_0^{-1}
\nonumber \\
& = \sigma^{-2} \Sigma_0^{-1} M^{-1}
\frac{A_{s, t} A_{s, t}^\top}{1 + v^\top v}
M^{-1} \Sigma_0^{-1}\,,
\label{eq:linear telescoping}
\end{align}
where we first use the Woodbury matrix identity and then the Sherman-Morrison formula. Since $\normw{A_{s, t}}{2} \leq 1$,
\begin{align*}
1 + v^\top v
= 1 + \sigma^{-2} A_{s, t}^\top M^{-1} A_{s, t}
\leq 1 + \sigma^{-2} \lambda_1(\Sigma_0) = c\,.
\end{align*}
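The rank-one identity in \eqref{eq:linear telescoping} can also be checked numerically; below is a minimal sketch, not part of the proof, assuming NumPy and synthetic values for $\Sigma_0$, $\sigma$, $G_{s, t}$, and $A_{s, t}$.
\begin{lstlisting}[language=Python]
import numpy as np

# Numerical check of the rank-one update of the hyper-posterior precision.
rng = np.random.default_rng(1)
d, sigma = 3, 0.5
Sigma0 = np.diag(rng.uniform(0.5, 1.5, d))  # synthetic task-prior covariance
X = rng.normal(size=(6, d))
G = X.T @ X / sigma**2                      # synthetic Gram matrix G_{s,t}
a = rng.uniform(-0.5, 0.5, d)               # action A_{s,t}

S0inv = np.linalg.inv(Sigma0)
Minv = np.linalg.inv(S0inv + G)             # M^{-1}
lhs = (np.linalg.inv(Sigma0 + np.linalg.inv(G + np.outer(a, a) / sigma**2))
       - np.linalg.inv(Sigma0 + np.linalg.inv(G)))
denom = 1 + a @ Minv @ a / sigma**2         # this is 1 + v^T v
rhs = S0inv @ Minv @ np.outer(a, a) @ Minv @ S0inv / (sigma**2 * denom)
print(np.allclose(lhs, rhs))                # True up to numerical error
\end{lstlisting}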
Based on the above derivations, we bound the second logarithmic term in \eqref{eq:sequential proof decomposition} as
\begin{align*}
& \log\det(I_d +
\sigma^{-2} \bar{\Sigma}_t^\frac{1}{2} \Sigma_0^{-1} M^{-1} A_{s, t} A_{s, t}^\top
M^{-1} \Sigma_0^{-1} \bar{\Sigma}_t^\frac{1}{2}) \\
& \quad \leq c \log\det(I_d +
\sigma^{-2} \bar{\Sigma}^\frac{1}{2}_t \Sigma_0^{-1} M^{-1} A_{s, t} A_{s, t}^\top
M^{-1} \Sigma_0^{-1} \bar{\Sigma}^\frac{1}{2}_t / c) \\
& \quad = c \left[\log\det(\bar{\Sigma}_t^{-1} +
\sigma^{-2} \Sigma_0^{-1} M^{-1} A_{s, t} A_{s, t}^\top M^{-1} \Sigma_0^{-1} / c) -
\log\det(\bar{\Sigma}_t^{-1})\right] \\
& \quad \leq c \left[\log\det(\bar{\Sigma}_{t + 1}^{-1}) -
\log\det(\bar{\Sigma}_t^{-1})\right]\,.
\end{align*}
The first inequality holds because $\log(1 + x) \leq c \log(1 + x / c)$ for any $x \geq 0$ and $c \geq 1$. The second inequality follows from \eqref{eq:linear telescoping} and $1 + v^\top v \leq c$, which give $\bar{\Sigma}_t^{-1} + \sigma^{-2} \Sigma_0^{-1} M^{-1} A_{s, t} A_{s, t}^\top M^{-1} \Sigma_0^{-1} / c \preceq \bar{\Sigma}_{t + 1}^{-1}$, together with the monotonicity of $\log\det$ with respect to the Loewner order. Now we sum over all rounds and get telescoping
\begin{align*}
& \sum_{t = 1}^{m n}
\log\det(I_d + \sigma^{-2} \bar{\Sigma}_t^\frac{1}{2} \Sigma_0^{-1}
(\Sigma_0^{-1} + G_{S_t, t})^{-1} A_{S_t, t} A_{S_t, t}^\top (\Sigma_0^{-1} + G_{S_t, t})^{-1}
\Sigma_0^{-1} \bar{\Sigma}_t^\frac{1}{2}) \\
& \quad \leq c \left[\log\det(\bar{\Sigma}_{m n + 1}^{-1}) -
\log\det(\bar{\Sigma}_1^{-1})\right]
= c \log\det(\Sigma_q^\frac{1}{2} \bar{\Sigma}_{m n + 1}^{-1} \Sigma_q^\frac{1}{2})
\leq c d \log\left(\frac{1}{d} \trace(\Sigma_q^\frac{1}{2} \bar{\Sigma}_{m n + 1}^{-1}
\Sigma_q^\frac{1}{2})\right) \\
& \quad \leq c d
\log(\lambda_1(\Sigma_q^\frac{1}{2} \bar{\Sigma}_{m n + 1}^{-1} \Sigma_q^\frac{1}{2}))
\leq c d \log\left(1 + \frac{\lambda_1(\Sigma_q) m}{\lambda_d(\Sigma_0)}\right)\,.
\end{align*}
Finally, we combine the upper bounds for both logarithmic terms and get
\begin{align*}
\mathcal{V}(m, n)
= \E{}{\sum_{t = 1}^{m n} \normw{A_{S_t, t}}{\hat{\Sigma}_{S_t, t}}^2}
\leq d \left[c_1 m \log\left(1 + \frac{\lambda_1(\Sigma_0) n}{\sigma^2 d}\right) +
c_2 c \log\left(1 + \frac{\lambda_1(\Sigma_q) m}{\lambda_d(\Sigma_0)}\right)\right]\,,
\end{align*}
which yields the desired result after we substitute this bound into \cref{lem:bayes regret}. To simplify presentation in the main paper, $c_1$ and $c_2$ in \cref{thm:sequential regret} include the above logarithmic terms that multiply them.
\section{Proof of \cref{thm:concurrent regret}}
\label{sec:concurrent proof}
From \cref{ass:basis}, there exists a basis of $d$ actions such that if all actions in the basis are taken in task $s$ by round $t$, it is guaranteed that $\lambda_d(G_{s, t}) \geq \eta / \sigma^2$. We modify \ensuremath{\tt HierTS}\xspace to take these actions first in any task $s$. Let $\mathcal{C}_t = \{s \in \mathcal{S}_t: \lambda_d(G_{s, t}) \geq \eta / \sigma^2\}$ be the set of \emph{sufficiently-explored tasks} by round $t$.
Using $\mathcal{C}_t$, we decompose the Bayes regret as
\begin{align*}
\mathcal{BR}(m, n)
\leq
\E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \I{s \in \mathcal{C}_t}
(A_{s, *}^\top \theta_{s, *} - A_{s, t}^\top \theta_{s, *})} +
\E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \I{s \not\in \mathcal{C}_t}
(A_{s, *}^\top \theta_{s, *} - A_{s, t}^\top \theta_{s, *})}\,.
\end{align*}
For any task $s$ and round $t$, we can trivially bound
\begin{align*}
\E{}{(A_{s, *} - A_{s, t})^\top \theta_{s, *}}
\leq \E{}{\normw{A_{s, *} - A_{s, t}}{\hat{\Sigma}_{s, 1}}
\normw{\theta_{s, *}}{\hat{\Sigma}_{s, 1}^{-1}}}
\leq 2 \sigma_{\max} \left(\normw{\mu_q}{\hat{\Sigma}_{s, 1}^{-1}} +
\E{}{\normw{\theta_{s, *} - \mu_q}{\hat{\Sigma}_{s, 1}^{-1}}}\right)\,,
\end{align*}
where $\sigma_{\max} = \sqrt{\lambda_1(\Sigma_q + \Sigma_0)}$ as in \cref{sec:sequential proof}. Here we use that $\normw{A_{s, *} - A_{s, t}}{2} \leq 2$ and that the prior covariance of $\theta_{s, *}$ is $\hat{\Sigma}_{s, 1} = \Sigma_q + \Sigma_0$. We know from \eqref{eq:gaussian hierarchical} that $\theta_{s, *} - \mu_q \sim \mathcal{N}(\mathbf{0}, \Sigma_q + \Sigma_0)$. This means that $\hat{\Sigma}^{- \frac{1}{2}}_{s, 1} (\theta_{s, *} - \mu_q)$ is a vector of $d$ independent standard normal variables. It follows that
\begin{align*}
\E{}{\normw{\theta_{s, *} - \mu_q}{\hat{\Sigma}_{s, 1}^{-1}}}
= \E{}{\normw{\hat{\Sigma}^{- \frac{1}{2}}_{s, 1} (\theta_{s, *} - \mu_q)}{2}}
\leq \sqrt{\E{}{\normw{\hat{\Sigma}^{- \frac{1}{2}}_{s, 1}
(\theta_{s, *} - \mu_q)}{2}^2}}
= \sqrt{d}\,.
\end{align*}
Since $s \not\in \mathcal{C}_t$ occurs at most $d$ times for any task $s$, the total regret due to forced exploration is bounded as
\begin{align*}
\E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \I{s \not\in \mathcal{C}_t}
(A_{s, *}^\top \theta_{s, *} - A_{s, t}^\top \theta_{s, *})}
\leq 2 \sigma_{\max} \left(\normw{\mu_q}{\hat{\Sigma}_{s, 1}^{-1}} + \sqrt{d}\right) d m
= c_3\,.
\end{align*}
It remains to bound the first term in $\mathcal{BR}(m, n)$. On event $s \in \mathcal{C}_t$, \ensuremath{\tt HierTS}\xspace samples from the posterior and behaves exactly as \cref{alg:ts}. Therefore, we only need to bound $\mathcal{V}(m, n) = \E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \I{s \in \mathcal{C}_t} \normw{A_{s, t}}{\hat{\Sigma}_{s, t}}^2}$ and then substitute the bound into \cref{lem:bayes regret}. By the total covariance decomposition in \cref{lem:covariance decomposition}, we have
\begin{align}
\normw{A_{s, t}}{\hat{\Sigma}_{s, t}}^2
= A_{s, t}^\top \tilde{\Sigma}_{s, t} A_{s, t} +
A_{s, t}^\top M^{-1} \Sigma_0^{-1} \bar{\Sigma}_t \Sigma_0^{-1} M^{-1} A_{s, t}\,,
\label{eq:concurrent proof decomposition}
\end{align}
where $M = \Sigma_0^{-1} + G_{s, t}$ to reduce clutter. As in \cref{sec:sequential proof}, we bound the contribution of each term separately.
\subsection{First Term in \eqref{eq:concurrent proof decomposition}}
This term depends only on $\tilde{\Sigma}_{s, t}$, which does not depend on interactions with tasks other than task $s$. Therefore, the bound is the same as in the sequential case in \cref{sec:sequential proof 1},
\begin{align*}
\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \I{s \in \mathcal{C}_t} A_{s, t}^\top \tilde{\Sigma}_{s, t} A_{s, t}
\leq c_1 d m \log\left(1 + \frac{\lambda_1(\Sigma_0) n}{\sigma^2 d}\right)\,,
\end{align*}
where $c_1$ is defined in \cref{sec:sequential proof}.
\subsection{Second Term in \eqref{eq:concurrent proof decomposition}}
The difference from the sequential setting is in how we bound the second term in \eqref{eq:concurrent proof decomposition}. Previously we had $|\mathcal{S}_t| = 1$, whereas now $|\mathcal{S}_t| \leq L \leq m$ for some $L$. Since more than one task is acted upon per round, the telescoping identity in \eqref{eq:linear telescoping} no longer holds. To remedy this, we reduce the concurrent case to the sequential one. Specifically, suppose that task $s \in \mathcal{S}_t$ in round $t$ had access to the concurrent observations of the preceding tasks in round $t$, for some order of tasks $\mathcal{S}_t = \{S_{t, i}\}_{i = 1}^L$. As \cref{thm:sequential regret} holds for any order, we choose the order in which the sufficiently-explored tasks $s \in \mathcal{C}_t$ appear first.
Let $\mathcal{S}_{t, i} = \{S_{t, j}\}_{j = 1}^{i - 1}$ be the first $i - 1$ tasks in $\mathcal{S}_t$ according to our chosen order. For $s = S_{t, i}$, let
\begin{align*}
\bar{\Sigma}_{s, t}^{-1}
= \Sigma_q^{-1} + \sum_{z \in \mathcal{S}_{t, i}} (\Sigma_0 + G_{z, t + 1}^{-1})^{-1} +
\sum_{z \in [m] \setminus \mathcal{S}_{t, i}} (\Sigma_0 + G_{z, t}^{-1})^{-1}
\end{align*}
be the reciprocal of the hyper-posterior covariance updated with concurrent observations in tasks $\mathcal{S}_{t, i}$. Next we show that $\bar{\Sigma}_t$ and $\bar{\Sigma}_{s, t}$ are similar.
\begin{lemma}
\label{lem:sequential concurrent ratio} Fix round $t$ and $i \in [L]$. Let $s = S_{t, i}$ and $\lambda_d(G_{s, t}) \geq \eta / \sigma^2$. Then
\begin{align*}
\lambda_1(\bar{\Sigma}_{s, t}^{-1} \bar{\Sigma}_t)
\leq 1 + \frac{\sigma^{-2} \lambda_1(\Sigma_q) (\lambda_1(\Sigma_0) + \sigma^2 / \eta)}
{\lambda_1(\Sigma_q) + (\lambda_1(\Sigma_0) + \sigma^2 / \eta) / L}\,.
\end{align*}
\end{lemma}
\begin{proof}
Using standard eigenvalue inequalities, we have
\begin{align}
\lambda_1(\bar{\Sigma}_{s, t}^{-1} \bar{\Sigma}_t)
= \lambda_1((\bar{\Sigma}_t^{-1} + \bar{\Sigma}_{s, t}^{-1} - \bar{\Sigma}_t^{-1})
\bar{\Sigma}_t)
\leq 1 + \lambda_1((\bar{\Sigma}_{s, t}^{-1} - \bar{\Sigma}_t^{-1}) \bar{\Sigma}_t)
\leq 1 + \frac{\lambda_1(\bar{\Sigma}_{s, t}^{-1} - \bar{\Sigma}_t^{-1})}
{\lambda_d(\bar{\Sigma}_t^{-1})}\,.
\label{eq:ratio decomposition}
\end{align}
By Weyl's inequalities, and from the definition of $\bar{\Sigma}_t$, we have
\begin{align*}
\lambda_d(\bar{\Sigma}_t^{-1})
& \geq \lambda_d(\Sigma_q^{-1}) + \sum_{z \in [m]} \lambda_d((\Sigma_0 + G_{z, t}^{-1})^{-1})
= \lambda_d(\Sigma_q^{-1}) + \sum_{z \in [m]} \lambda_1^{-1}(\Sigma_0 + G_{z, t}^{-1}) \\
& \geq \lambda_d(\Sigma_q^{-1}) +
\sum_{z \in [m]} (\lambda_1(\Sigma_0) + \lambda_1(G_{z, t}^{-1}))^{-1}
\geq \lambda_d(\Sigma_q^{-1}) + (i - 1) (\lambda_1(\Sigma_0) + \sigma^2 / \eta)^{-1}\,.
\end{align*}
In the last inequality, we use that the previous $i - 1$ tasks $\mathcal{S}_{t, i}$ are sufficiently explored. Analogously to \eqref{eq:linear telescoping},
\begin{align*}
\bar{\Sigma}_{s, t}^{-1} - \bar{\Sigma}_t^{-1}
& = \sum_{z \in \mathcal{S}_{t, i}}
(\Sigma_0 + (G_{z, t} + \sigma^{-2} A_{z, t} A_{z, t}^\top)^{-1})^{-1} -
(\Sigma_0 + G_{z, t}^{-1})^{-1} \\
& = \sigma^{-2} \sum_{z \in \mathcal{S}_{t, i}} \Sigma_0^{-1} M_{z, t}^{-1}
\frac{A_{z, t} A_{z, t}^\top}{1 + \sigma^{-2} A_{z, t}^\top M_{z, t}^{-1} A_{z, t}}
M_{z, t}^{-1} \Sigma_0^{-1}\,,
\end{align*}
where $M_{z, t} = \Sigma_0^{-1} + G_{z, t}$ to reduce clutter. Moreover, since $\normw{A_{z, t}}{2} \leq 1$ and $\sigma^{-2} A_{z, t}^\top M_{z, t}^{-1} A_{z, t} \geq 0$, we have
\begin{align*}
\lambda_1(\bar{\Sigma}_{s, t}^{-1} - \bar{\Sigma}_t^{-1})
\leq (i - 1) \sigma^{-2}\,.
\end{align*}
Finally, we substitute our upper bounds to the right-hand side of \eqref{eq:ratio decomposition} and get
\begin{align*}
\frac{\lambda_1(\bar{\Sigma}_{s, t}^{-1} - \bar{\Sigma}_t^{-1})}
{\lambda_d(\bar{\Sigma}_t^{-1})} \leq
\frac{(i - 1) \sigma^{-2}}{\lambda_1^{-1}(\Sigma_q) +
(i - 1) (\lambda_1(\Sigma_0) + \sigma^2 / \eta)^{-1}}
\leq \frac{\sigma^{-2} \lambda_1(\Sigma_q) (\lambda_1(\Sigma_0) + \sigma^2 / \eta)}
{\lambda_1(\Sigma_q) + (\lambda_1(\Sigma_0) + \sigma^2 / \eta) / L}\,,
\end{align*}
where we use that the ratio is increasing in $i - 1$ and that $i - 1 \leq L$. This completes the proof.
\end{proof}
Now we return to \eqref{eq:concurrent proof decomposition}. First, we have that
\begin{align*}
A_{s, t}^\top M^{-1} \Sigma_0^{-1} \bar{\Sigma}_t \Sigma_0^{-1} M^{-1} A_{s, t}
& = A_{s, t}^\top M^{-1} \Sigma_0^{-1} \bar{\Sigma}_{s, t}^\frac{1}{2}
\left(\bar{\Sigma}_{s, t}^{- \frac{1}{2}} \bar{\Sigma}_t^\frac{1}{2}
\bar{\Sigma}_t^\frac{1}{2} \bar{\Sigma}_{s, t}^{- \frac{1}{2}}\right)
\bar{\Sigma}_{s, t}^\frac{1}{2} \Sigma_0^{-1} M^{-1} A_{s, t} \\
& \leq \lambda_1(\bar{\Sigma}_{s, t}^{- \frac{1}{2}} \bar{\Sigma}_t^\frac{1}{2}
\bar{\Sigma}_t^\frac{1}{2} \bar{\Sigma}_{s, t}^{- \frac{1}{2}})
A_{s, t}^\top M^{-1} \Sigma_0^{-1} \bar{\Sigma}_{s, t} \Sigma_0^{-1} M^{-1} A_{s, t} \\
& \leq \lambda_1(\bar{\Sigma}_{s, t}^{-1} \bar{\Sigma}_t)
A_{s, t}^\top M^{-1} \Sigma_0^{-1} \bar{\Sigma}_{s, t} \Sigma_0^{-1} M^{-1} A_{s, t}\,,
\end{align*}
where the first inequality bounds the middle matrix of the quadratic form by its largest eigenvalue, and the second uses that $\bar{\Sigma}_{s, t}^{- \frac{1}{2}} \bar{\Sigma}_t \bar{\Sigma}_{s, t}^{- \frac{1}{2}}$ and $\bar{\Sigma}_{s, t}^{-1} \bar{\Sigma}_t$ have the same eigenvalues. Next we apply \cref{lem:sequential concurrent ratio} and get
\begin{align*}
\lambda_1(\bar{\Sigma}_{s, t}^{-1} \bar{\Sigma}_t)
\leq 1 + \frac{\sigma^{-2} \lambda_1(\Sigma_q) (\lambda_1(\Sigma_0) + \sigma^2 / \eta)}
{\lambda_1(\Sigma_q) + (\lambda_1(\Sigma_0) + \sigma^2 / \eta) / L}
= c_4\,.
\end{align*}
After $\bar{\Sigma}_t$ is replaced with $\bar{\Sigma}_{s, t}$, we follow \cref{sec:sequential proof 2} and get that the contribution of the hyper-parameter term is at most
\begin{align*}
c_2 c_4 c d \log\left(1 + \frac{\lambda_1(\Sigma_q) m}{\lambda_d(\Sigma_0)}\right)\,,
\end{align*}
where the only difference is the extra factor of $c_4$. Finally, we combine all upper bounds and get
\begin{align*}
\mathcal{V}(m, n)
= \E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \I{s \in \mathcal{C}_t}
\normw{A_{s, t}}{\hat{\Sigma}_{s, t}}^2}
\leq d \left[c_1 m \log\left(1 + \frac{\lambda_1(\Sigma_0) n}{\sigma^2 d}\right) +
c_2 c_4 c \log\left(1 + \frac{\lambda_1(\Sigma_q) m}{\lambda_d(\Sigma_0)}\right)\right]\,,
\end{align*}
which yields the desired result after we substitute it into \cref{lem:bayes regret}. To simplify presentation in the main paper, $c_1$ and $c_2$ in \cref{thm:concurrent regret} include the above logarithmic terms that multiply them.
\section{Gaussian Bandit Regret Bounds}
\label{sec:mab bounds}
Our regret bounds in \cref{sec:regret bounds} can be specialized to $K$-armed Gaussian bandits (\cref{sec:gaussian bandit}). Specifically, when the action set $\mathcal{A} = \set{e_i}_{i \in [K]}$ is the standard Euclidean basis in $\mathbb{R}^K$, \cref{thm:sequential regret,thm:concurrent regret} can be restated as follows.
\begin{theorem}[Sequential Gaussian bandit regret]
\label{thm:sequential mab regret} Let $|\mathcal{S}_t| = 1$ for all rounds $t$. Let $\delta = 1 / (m n)$. Then the Bayes regret of \ensuremath{\tt HierTS}\xspace is
\begin{align*}
\mathcal{BR}(m, n)
\leq \sqrt{2 K m n [c_1 m + c_2] \log(m n)} + c_3\,,
\end{align*}
where $c_3 = O(K)$,
\begin{align*}
c_1
= \frac{\sigma_0^2}{\log(1 + \sigma^{-2} \sigma_0^2)}
\log\left(1 + \frac{\sigma_0^2 n}{\sigma^2 K}\right)\,, \quad
c_2
= \frac{\sigma_q^2 c}{\log(1 + \sigma^{-2} \sigma_q^2)}
\log\left(1 + \frac{\sigma_q^2 m}{\sigma_0^2}\right)\,, \quad
c
= 1 + \frac{\sigma_0^2}{\sigma^2}\,.
\end{align*}
\end{theorem}
The main difference from the proof of \cref{thm:sequential regret} is that we start with the finite-action bound in \cref{lem:bayes regret}. Other than that, we use the facts that $\lambda_1(\Sigma_0) = \lambda_d(\Sigma_0) = \sigma_0^2$ and $\lambda_1(\Sigma_q) = \sigma_q^2$.
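For instance, the constant $c_q$ in \cref{thm:sequential regret} specializes as
\begin{align*}
c_q
= \frac{\lambda_1^2(\Sigma_0) \lambda_1(\Sigma_q)}{\lambda_d^2(\Sigma_0)}
= \frac{\sigma_0^4 \sigma_q^2}{\sigma_0^4}
= \sigma_q^2\,,
\end{align*}
which yields the expression for $c_2$ above.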
\begin{theorem}[Concurrent Gaussian bandit regret]
\label{thm:concurrent mab regret} Let $|\mathcal{S}_t| \leq L \leq m$. Let $\delta = 1 / (m n)$. Then the Bayes regret of \ensuremath{\tt HierTS}\xspace is
\begin{align*}
\mathcal{BR}(m, n)
\leq \sqrt{2 K m n [c_1 m + c_2] \log(m n)} + c_3\,,
\end{align*}
where $c_1$ and $c$ are defined as in \cref{thm:sequential mab regret},
\begin{align*}
c_2
= \frac{\sigma_q^2 c_4 c}{\log(1 + \sigma^{-2} \sigma_q^2)}
\log\left(1 + \frac{\sigma_q^2 m}{\sigma_0^2}\right)\,, \quad
c_4
= 1 + \frac{\sigma^{-2} \sigma_q^2 (\sigma_0^2 + \sigma^2)}
{\sigma_q^2 + (\sigma_0^2 + \sigma^2) / L}\,,
\end{align*}
and $c_3 = O(K m)$.
\end{theorem}
When we specialize \cref{thm:concurrent regret}, we note that $\eta = 1$, since the action set $\mathcal{A}$ is the standard Euclidean basis and taking each of the $K$ basis actions once in task $s$ yields $G_{s, t} \succeq \sigma^{-2} I_K$.
\section{Image Classification Experiment}
\label{sec:classification experiments}
\begin{figure*}[t]
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\linewidth]{figures/mnist_pos0.pdf}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\linewidth]{figures/mnist_pos1.pdf}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\linewidth]{figures/mnist_mu_bar_pos0.pdf}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\linewidth]{figures/mnist_mu_bar_pos1.pdf}
\end{minipage}
\caption{Evaluation of \ensuremath{\tt HierTS}\xspace on multi-task digit classification using MNIST with different positive image classes. On the top, we plot the cumulative Bayes regret at each round. On the bottom, we visualize the most-rewarding image according to the learned hyper-parameter at evenly-spaced intervals.}
\label{fig:mnist}
\end{figure*}
We conduct an additional experiment that considers online classification on a real-world image dataset. The problem is cast as a multi-task linear bandit with Bernoulli rewards. Specifically, we construct a set of tasks where one image class is selected randomly to have a high reward. In each task, at every round, $K$ images are sampled uniformly at random as actions, and the aim of the learning agent is to select an image from the unknown positive image class. The reward of an image from the positive class is drawn from $\mathsf{Ber}(0.9)$, and from $\mathsf{Ber}(0.1)$ for images from all other classes.
We use the MNIST dataset \citep{mnist}, which consists of $60{,}000$ images of handwritten digits that we split equally into a training and a test set. We down-sample each image to $d = 49$ pixels, which form the feature vector of the corresponding action in the bandit problem.
For each digit, the training set is used to estimate $\mu_*$ and $\Sigma_0$; all three algorithms use $\Sigma_0$, but only \ensuremath{\tt OracleTS}\xspace can use $\mu_*$. The algorithms are evaluated on the test set.
Given a positive digit class, we construct each task $s$ by sub-sampling from the test set and computing $\theta_{s, *}$ from the positive images in the sub-sampled data.
For each digit as the positive image class, we evaluate our three algorithms on a multi-task linear bandit with $m = 10$ tasks, $n = 400$ interactions per task, and $K = 30$ actions sampled uniformly from the test images. We choose $L = 5$ tasks per round, leading to $800$ rounds in total. We assume a hyper-prior $Q = \mathcal{N}(\mathbf{0}, I_d)$ and set the reward noise to $\sigma = 0.5$, which upper bounds the standard deviation of Bernoulli rewards.
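A minimal sketch of the interaction protocol in a single task is shown below. It is only illustrative: it assumes that the down-sampled test images \texttt{X} (one $d = 49$ dimensional row per image) and their labels \texttt{y} are available, and \texttt{select\_action} is a placeholder for any of the compared algorithms.
\begin{lstlisting}[language=Python]
import numpy as np

def run_task(X, y, positive_digit, select_action, n=400, K=30, seed=0):
    # One task: in each round, sample K test images as actions and reward the
    # agent with Ber(0.9) for the positive class and Ber(0.1) otherwise.
    rng = np.random.default_rng(seed)
    rewards = []
    for _ in range(n):
        idx = rng.choice(len(X), size=K, replace=False)  # K random images
        chosen = select_action(X[idx])                   # index in [0, K)
        p = 0.9 if y[idx[chosen]] == positive_digit else 0.1
        rewards.append(rng.binomial(1, p))
    return np.array(rewards)
\end{lstlisting}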
\cref{fig:mnist} shows the performance of all algorithms for two digits across $20$ independent runs. We see that \ensuremath{\tt HierTS}\xspace performs very well compared to standard \ensuremath{\tt TS}\xspace. In addition to regret, we also visualize the learned hyper-parameter $\bar{\mu}_t$ every $80$ rounds. We see that \ensuremath{\tt HierTS}\xspace very quickly learns the correct hyper-parameter, showing that it effectively leverages the shared structure across tasks. Overall, this experiment shows that even under a misspecified model of the environment, with non-Gaussian rewards and without knowledge of the true hyper-prior $Q$ and covariance $\Sigma_0$, \ensuremath{\tt HierTS}\xspace still performs very well.
\section{CONCLUSIONS}
\label{sec:conclusions}
We study \emph{hierarchical Bayesian bandits}, a general setting for solving similar bandit tasks. Instances of our setting recover meta-, multi-task, and federated bandits in prior works. We propose a natural hierarchical Thompson sampling algorithm, which can be implemented exactly and analyzed in Gaussian models. We analyze it using a novel total variance decomposition, which leads to interpretable regret bounds that scale with the hyper-prior and task prior widths. The benefit of hierarchical models is shown in both synthetic and real-world domains.
While we view our work as solving an extremely general problem, there are multiple directions for future work. For instance, we only study a specific hierarchical Gaussian structure in \cref{sec:models}. However, based on the discussion in \cref{sec:extensions}, we believe that our tools would apply to arbitrary graphical models with general sub-Gaussian distributions. Other directions for future work are frequentist upper bounds and matching lower bounds, in both the frequentist and Bayesian settings.
\section{EXPERIMENTS}
\label{sec:experiments}
\begin{figure*}[t!]
\centering
\begin{minipage}{0.32\textwidth}
\includegraphics[width=\linewidth]{figures/synthetic_d2_sigma_q0.500.pdf}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=\linewidth]{figures/synthetic_d2_sigma_q1.000.pdf}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=\linewidth]{figures/synthetic_tasks_per_round.pdf}
\end{minipage}
\vspace{-0.1in}
\caption{Evaluation of \ensuremath{\tt HierTS}\xspace on synthetic bandit problems. From left to right, we report the Bayes regret (a) for smaller $\sigma_q$, (b) for larger $\sigma_q$, (c) and as a function of the number of concurrent tasks $L$.}
\label{fig:synthetic}
\end{figure*}
We compare \ensuremath{\tt HierTS}\xspace to two TS baselines (\cref{sec:tightness}) that do not learn the hyper-parameter $\mu_*$. The first baseline is an idealized algorithm that knows $\mu_*$ and uses the true prior $\mathcal{N}(\mu_*, \Sigma_0)$. We call it \ensuremath{\tt OracleTS}\xspace. As \ensuremath{\tt OracleTS}\xspace has more information than \ensuremath{\tt HierTS}\xspace, we expect it to outperform \ensuremath{\tt HierTS}\xspace. The second baseline, which we call \ensuremath{\tt TS}\xspace, ignores that $\mu_*$ is shared among the tasks and uses the marginal prior of $\theta_{s, *}$, $\mathcal{N}(\mu_q, \Sigma_q + \Sigma_0)$, in each task.
We experiment with two linear bandit problems with $m = 10$ tasks: a synthetic problem with Gaussian rewards and an online image classification problem. The former is used to validate our regret bounds. The latter has non-Gaussian rewards and demonstrates that \ensuremath{\tt HierTS}\xspace is robust to prior misspecification. Our setup closely follows \citet{basu21noregrets}. However, our tasks can arrive in an arbitrary order and in parallel. Due to space constraints, we only report the synthetic experiment here, and defer the rest to \cref{sec:classification experiments}.
The synthetic problem is defined as follows: $d = 2$, $\abs{\mathcal{A}} = 10$, and each action is sampled uniformly from $[-0.5, 0.5]^d$. Initially, the number of concurrent tasks is $L = 5$, but we vary it later to measure its impact on the regret. The number of rounds is $n = 200 m / L$ and $\mathcal{S}_t$ is defined as follows. First, we take a random permutation of the list of tasks where each task appears exactly $200$ times. Then we batch every $L$ consecutive elements of the list and set $\mathcal{S}_t$ to the $t$-th batch. The hyper-prior is $\mathcal{N}(\mathbf{0}, \Sigma_q)$ with $\Sigma_q = \sigma_q^2 I_d$, the task covariance is $\Sigma_0 = \sigma_0^2 I_d$, and the reward noise is $\sigma = 0.5$. We choose $\sigma_q \in \set{0.5, 1}$ and $\sigma_0 = 0.1$, where $\sigma_q \gg \sigma_0$ so that the effect of learning $\mu_*$ on faster learning of $\theta_{s, *}$ is easier to measure.
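A minimal sketch of how such a problem instance and the task schedule $\mathcal{S}_t$ can be generated is shown below; it assumes NumPy and the variable names are illustrative.
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(0)
d, m, K, L = 2, 10, 10, 5
sigma, sigma_q, sigma_0 = 0.5, 0.5, 0.1
n_task = 200                                    # interactions per task

# Sample the environment from the hierarchical model.
mu_star = rng.normal(0, sigma_q, d)             # hyper-parameter
theta_star = mu_star + rng.normal(0, sigma_0, (m, d))  # task parameters
actions = rng.uniform(-0.5, 0.5, (K, d))        # shared action set
# Reward of action a in task s: a @ theta_star[s] + sigma * standard normal noise.

# Task schedule: random permutation of task indices, batched into rounds of size L.
schedule = rng.permutation(np.repeat(np.arange(m), n_task))
rounds = schedule.reshape(-1, L)                # rounds[t] plays the role of S_t
assert rounds.shape[0] == n_task * m // L       # 200 m / L rounds
\end{lstlisting}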
The regret of all compared algorithms is reported in \cref{fig:synthetic}. In plots (a) and (b), we show how the regret scales with the number of rounds for small ($\sigma_q = 0.5$) and large ($\sigma_q = 1$) hyper-prior width. As suggested in \cref{sec:sequential regret}, \ensuremath{\tt HierTS}\xspace outperforms \ensuremath{\tt TS}\xspace, which does not try to learn $\mu_*$. It is comparable to \ensuremath{\tt OracleTS}\xspace when $\sigma_q$ is small, but degrades as $\sigma_q$ increases. This matches the regret bound in \cref{thm:concurrent regret}, where $c_2$ grows with $\sigma_q$. In plot (c), we show how the regret of \ensuremath{\tt HierTS}\xspace varies with the number of concurrent tasks $L$. We observe that it increases with $L$, but the increase is sublinear, as suggested in \cref{sec:concurrent regret}.
\section{INTRODUCTION}
\label{sec:introduction}
A \emph{stochastic bandit} \citep{lai85asymptotically,auer02finitetime,lattimore19bandit} is an online learning problem where a \emph{learning agent} sequentially interacts with an environment over $n$ rounds. In each round, the agent takes an \emph{action} and receives a \emph{stochastic reward}. The agent aims to maximize its expected cumulative reward over $n$ rounds. It does not know the mean rewards of the actions \emph{a priori}, and must learn them by taking the actions. This induces the \emph{exploration-exploitation dilemma}: \emph{explore}, and learn more about an action; or \emph{exploit}, and take the action with the highest estimated reward. In online advertising, an action could be showing an advertisement and its reward could be an indicator of a click.
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{The work started while being at Google Research.}
\renewcommand{\thefootnote}{\arabic{footnote}}
Making exploration more statistically efficient is the primary topic of bandit papers. This is attained by leveraging the structure of the problem, such as the form of the reward distribution \citep{garivier11klucb}, a prior distribution over model parameters \citep{thompson33likelihood,agrawal12analysis,chapelle11empirical,russo18tutorial}, conditioning on known feature vectors \citep{dani08stochastic,abbasi-yadkori11improved,agrawal13thompson}, or modeling the process by which the total reward arises \citep{radlinski08learning,kveton15cascading,gai12combinatorial,chen16combinatorial,kveton15tight}. In this work, we solve multiple similar bandit tasks, and each task teaches the agent how to solve other tasks more efficiently.
We formulate the problem of learning to solve similar bandit tasks as regret minimization in a \emph{hierarchical Bayesian model} \citep{gelman13bayesian}. Each task is parameterized by a \emph{task parameter}, which is sampled i.i.d.\ from a distribution parameterized by a \emph{hyper-parameter}. The parameters are unknown and this relates all tasks, in the sense that each task teaches the agent about any other task. We derive Bayes regret bounds that reflect the structure of the problem and show that the price for learning the hyper-parameter is low. Our derivations use a novel \emph{total variance decomposition}, which decomposes the parameter uncertainty into per-task uncertainty conditioned on knowing the hyper-parameter and hyper-parameter uncertainty. After that, we individually bound each uncertainty source by elliptical lemmas \citep{dani08stochastic,abbasi-yadkori11improved}. Our approach can be exactly implemented and analyzed in hierarchical multi-armed and linear bandits with Gaussian rewards, but can be extended to other graphical model structures.
We build on numerous prior works that study a similar structure, under the names of collaborative filtering bandits \citep{gentile14online,kawale15efficient,li16collaborative}, bandit meta-learning and multi-task learning \citep{azar13sequential,deshmukh17multitask,bastani19meta,cella20metalearning,kveton21metathompson,moradipari21parameter}, and representation learning \citep{yang21impact}. Despite this, we make major novel contributions, both in terms of a more general setting and analysis techniques. Our setting relaxes the assumptions that the tasks are solved in a sequence and that exactly one task is solved per round. Moreover, while the design of our posterior sampling algorithm is standard, we make novel contributions in its analysis. In the sequential setting (\cref{sec:sequential regret}), we derive a Bayes regret bound by decomposing the posterior covariance, which is an alternative to prior derivations based on filtered mutual information \citep{russo16information,lu19informationtheoretic}. This technique is general, simple, and yields tighter regret bounds because it avoids the marginal task parameter covariance, and so is of broad interest. In the concurrent setting (\cref{sec:concurrent regret}), we bound the additional regret due to not updating the posterior after each interaction. This is non-trivial and a major departure from other bandit analyses. Our Bayes regret bound for this setting is the first of its kind.
The paper is organized as follows. In \cref{sec:setting}, we formalize our setting of \emph{hierarchical Bayesian bandits}. In \cref{sec:algorithm}, we introduce a natural Thompson sampling algorithm (\ensuremath{\tt HierTS}\xspace) for solving it. In \cref{sec:models}, we instantiate it in hierarchical Gaussian models. In \cref{sec:key ideas}, we review key ideas in our regret analyses, including a novel total covariance decomposition that allows us to analyze posteriors in hierarchical models. In \cref{sec:regret bounds}, we prove Bayes regret bounds for \ensuremath{\tt HierTS}\xspace in sequential and concurrent settings. Finally, in \cref{sec:experiments}, we evaluate \ensuremath{\tt HierTS}\xspace empirically to confirm our theoretical results.
\section{KEY IDEAS IN OUR ANALYSES}
\label{sec:key ideas}
This section reviews key ideas in our regret analyses, including a novel variance decomposition for the posterior of a hierarchical Gaussian model. Due to space constraints, we only discuss the linear bandit in \cref{sec:linear bandit}.
\subsection{Bayes Regret Bound}
Fix round $t$ and task $s \in \mathcal{S}_t$. Since \ensuremath{\tt HierTS}\xspace is a posterior sampling algorithm, both the posterior sample $\theta_{s, t}$ and the unknown task parameter $\theta_{s, *}$ are i.i.d.\ conditioned on $H_t$. Moreover, \eqref{eq:task posterior} is a marginalization and conditioning in a hierarchical Gaussian model given in \cref{fig:setting}. Therefore, although we never explicitly derive $\theta_{s, *} \mid H_t$, we know that it is a multivariate Gaussian distribution \citep{koller09probabilistic}; and we denote it by $\condprob{\theta_{s, *} = \theta}{H_t} = \mathcal{N}(\theta; \hat{\mu}_{s, t}, \hat{\Sigma}_{s, t})$.
Following existing Bayes regret analyses \citep{russo14learning}, we have that
\begin{align*}
& \E{}{A_{s, *}^\top \theta_{s, *} - A_{s, t}^\top \theta_{s, *} \mid H_t} = \\
& \E{}{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t}) \mid H_t} +
\E{}{A_{s, t}^\top (\hat{\mu}_{s, t}- \theta_{s, *}) \mid H_t}\,.
\end{align*}
Conditioned on history $H_t$, we observe that $\hat{\mu}_{s, t} - \theta_{s, *}$ is a zero-mean random vector and that $A_{s, t}$ is independent of it. Hence $\condE{A_{s, t}^\top (\hat{\mu}_{s, t} - \theta_{s, *})}{H_t} = 0$ and the Bayes regret is bounded as
\begin{align*}
\mathcal{BR}(m, n)
\leq \E{}{\sum_{t \geq 1}\sum_{s \in \mathcal{S}_t}
\E{}{A_{s, *}^\top (\theta_{s, *} - \hat{\mu}_{s, t}) \mid H_t}}\,.
\end{align*}
The following lemma provides an upper bound on the Bayes regret for $m$ tasks, with at most $n$ interactions with each, using the sum of posterior variances
\begin{align}
\mathcal{V}(m, n)
= \E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t} \normw{A_{s, t}}{\hat{\Sigma}_{s, t}}^2}\,.
\label{eq:posterior variances}
\end{align}
The proof is deferred to \cref{sec:bayes regret proof}.
\begin{lemma}
\label{lem:bayes regret} For any $\delta > 0$, the Bayes regret $\mathcal{BR}(m, n)$ in a hierarchical linear bandit (\cref{sec:linear bandit}) is bounded by
\begin{align*}
\sqrt{2 d m n \mathcal{V}(m, n) \log(1 / \delta)} +
\sqrt{2 / \pi} \sigma_{\max} d^\frac{3}{2} m n \delta\,,
\end{align*}
where $\sigma_{\max}^2 = \lambda_1(\Sigma_0) + \lambda_1^2(\Sigma_0) \lambda_1(\Sigma_q) / \lambda_d^2(\Sigma_0)$. When the action space is finite, $|\mathcal{A}| = K$, we also get
\begin{align*}
\sqrt{2 m n \mathcal{V}(m, n) \log(1 / \delta)} +
\sqrt{2 / \pi} \sigma_{\max} K m n \delta\,.
\end{align*}
\end{lemma}
Therefore, to bound the regret, we only need to bound the posterior variances induced by the taken actions. The main challenge is that our posterior is over multiple variables. As can be seen in \eqref{eq:task posterior}, it comprises the hyper-posterior $Q_t$ over $\mu_*$ and the conditional $P_{s, t}$ over $\theta_{s, *}$. For any fixed $\mu_*$, $P_{s, t}(\cdot \mid \mu_*)$ should concentrate at $\theta_{s, *}$ as the agent gets more observations from task $s$. In addition, $Q_t$ should concentrate at $\mu_*$ as the agent learns more about $\mu_*$ from all tasks.
\subsection{Total Variance Decomposition}
\label{sec:total variance decomposition}
Due to the hierarchical structure of our problem, it is difficult to reason about the rate at which $\hat{\Sigma}_{s, t}$ \say{decreases}. In this work, we propose a novel variance decomposition that allows this. The decomposition uses the law of total variance \citep{weiss05probability}, which states that for any $X$ and $Y$,
\begin{align*}
\var{X}
= \E{}{\condvar{X}{Y}} + \var{\condE{X}{Y}}\,.
\end{align*}
If $X = \theta$ were a scalar task parameter and $Y = \mu$ were a scalar hyper-parameter, and we conditioned on $H$, the law would give
\begin{align*}
\condvar{\theta}{H}
= \condE{\condvar{\theta}{\mu, H}}{H} +
\condvar{\condE{\theta}{\mu, H}}{H}\,.
\end{align*}
This law extends to covariances \citep{weiss05probability}, where the conditional variance $\condvar{\cdot}{H}$ is substituted with the covariance $\condcov{\cdot}{H}$. We show the decomposition for a hierarchical Gaussian model below.
\subsection{Hierarchical Gaussian Models}
Recall that $\hat{\Sigma}_{s, t} = \condcov{\theta_{s, *}}{H_t}$. We derive a general formula for decomposing $\condcov{\theta_{s, *}}{H_t}$ below. To simplify notation, we consider a fixed task $s$ and round $t$, and drop subindexing by them.
\begin{lemma}
\label{lem:covariance decomposition} Let $\theta \mid \mu \sim \mathcal{N}(\mu, \Sigma_0)$ and $H = (x_t, Y_t)_{t = 1}^n$ be $n$ observations generated as $Y_t \mid \theta, x_t \sim \mathcal{N}(x_t^\top \theta, \sigma^2)$. Let $\condprob{\mu}{H} = \mathcal{N}(\mu; \bar{\mu}, \bar{\Sigma})$. Then
\begin{align*}
\condcov{\theta}{H}
= {} & (\Sigma_0^{-1} + G)^{-1} + {} \\
& (\Sigma_0^{-1} + G)^{-1} \Sigma_0^{-1} \bar{\Sigma}
\Sigma_0^{-1} (\Sigma_0^{-1} + G)^{-1}\,,
\end{align*}
where $G = \sigma^{-2} \sum_{t = 1}^n x_t x_t^\top$. Moreover, for any $x \in \mathbb{R}^d$,
\begin{align*}
& x^\top (\Sigma_0^{-1} + G)^{-1} \Sigma_0^{-1} \bar{\Sigma}
\Sigma_0^{-1} (\Sigma_0^{-1} + G)^{-1} x \\
& \quad \leq \frac{\lambda_1^2(\Sigma_0) \lambda_1(\bar{\Sigma})}{\lambda_d^2(\Sigma_0)}
\normw{x}{2}^2\,.
\end{align*}
\end{lemma}
\begin{proof}
By definition,
\begin{align*}
\condcov{\theta}{\mu, H}
& = (\Sigma_0^{-1} + G)^{-1}\,, \\
\condE{\theta}{\mu, H}
& = \condcov{\theta}{\mu, H}
(\Sigma_0^{-1} \mu + B)\,,
\end{align*}
where $B = \sigma^{-2}\sum_{t = 1}^n x_t Y_t$. Because $\condcov{\theta}{\mu, H}$ does not depend on $\mu$, $\condE{\condcov{\theta}{\mu, H}}{H} = \condcov{\theta}{\mu, H}$. In addition, since $B$ is a constant conditioned on $H$,
\begin{align*}
& \condcov{\condE{\theta}{\mu, H}}{H} \\
& \quad = \condcov{\condcov{\theta}{\mu, H} \Sigma_0^{-1} \mu}{H} \\
& \quad = (\Sigma_0^{-1} + G)^{-1} \Sigma_0^{-1} \bar{\Sigma}
\Sigma_0^{-1} (\Sigma_0^{-1} + G)^{-1}\,.
\end{align*}
This proves the first claim. The second claim follows from standard norm and eigenvalue inequalities.
\end{proof}
We use \cref{lem:covariance decomposition} as follows. For task $s$ and round $t$, the posterior covariance decomposes as
\begin{align}
\hat{\Sigma}_{s, t}
= {} & (\Sigma_0^{-1} + G_{s, t})^{-1} + {}
\label{eq:covariance decomposition} \\
& (\Sigma_0^{-1} + G_{s, t})^{-1} \Sigma_0^{-1} \bar{\Sigma}_t
\Sigma_0^{-1} (\Sigma_0^{-1} + G_{s, t})^{-1}\,.
\nonumber
\end{align}
The first term is $\condcov{\theta_{s, *}}{\mu_*, H_t}$ and captures uncertainty in $\theta_{s, *}$ conditioned on $\mu_*$. The second term depends on the hyper-posterior covariance $\bar{\Sigma}_t$ and represents uncertainty in $\mu_*$. The first term is exactly $\tilde{\Sigma}_{s, t}$ in \eqref{eq:linear conditional} and the second term is weighted by it, so both are small once we get enough observations for task $s$. The above also says that $\normw{A_{s, t}}{\hat{\Sigma}_{s, t}}^2 = A_{s, t}^\top \hat{\Sigma}_{s, t} A_{s, t}$ in \eqref{eq:posterior variances} decomposes into the two respective norms, which yields our regret decomposition.
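The decomposition in \eqref{eq:covariance decomposition} can be verified numerically against the exact Gaussian posterior. Below is a minimal sketch for a single task, not part of our analysis, assuming NumPy and synthetic values for the covariances and features.
\begin{lstlisting}[language=Python]
import numpy as np

# Numerical check of the total covariance decomposition for one task.
rng = np.random.default_rng(2)
d, n, sigma = 3, 8, 0.5
Sigma_q = np.diag(rng.uniform(0.5, 1.5, d))  # synthetic hyper-prior covariance
Sigma_0 = np.diag(rng.uniform(0.5, 1.5, d))  # synthetic task-prior covariance
X = rng.uniform(-0.5, 0.5, (n, d))           # features of the taken actions

# Exact posterior covariances from the joint Gaussian of (theta, mu, Y).
S = Sigma_q + Sigma_0                        # marginal covariance of theta
C = X @ S @ X.T + sigma**2 * np.eye(n)       # covariance of the rewards Y
cov_theta = S - S @ X.T @ np.linalg.solve(C, X @ S)                 # cov(theta | Y)
cov_mu = Sigma_q - Sigma_q @ X.T @ np.linalg.solve(C, X @ Sigma_q)  # cov(mu | Y)

# Decomposition: conditional term plus weighted hyper-posterior term.
G = X.T @ X / sigma**2
Tilde = np.linalg.inv(np.linalg.inv(Sigma_0) + G)
W = Tilde @ np.linalg.inv(Sigma_0)
print(np.allclose(cov_theta, Tilde + W @ cov_mu @ W.T))             # True
\end{lstlisting}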
\subsection{Extensions}
\label{sec:extensions}
So far, we only focused on hierarchical Gaussian models with known hyper-prior and task prior covariances. This is only because they have closed-form posteriors that are easy to interpret and manipulate, without resorting to approximations \citep{doucet01sequential}. This choice simplifies algebra and allows us to focus on the key hierarchical structure of our problem. We believe that the tools developed in this section can be applied more broadly. We discuss this next.
\cref{lem:bayes regret} decomposes the Bayes regret into posterior variances and upper bounds on the regret due to tail events. The posterior variance can be derived for any exponential-family posterior with a conjugate prior. On the other hand, the tail inequalities require sub-Gaussianity, which is a property of many exponential-family distributions.
\cref{lem:covariance decomposition} decomposes the posterior covariance in a hierarchical Gaussian model. It relies on the law of total covariance, which holds for any distribution, to obtain the task and hyper-parameter uncertainties. We expect that similar lemmas can be proved for other hierarchical models, so long as closed-form expressions for the respective uncertainties exist. Another notable property of our decomposition is that it does not require the marginal posterior of $\theta_{s, *}$. We view it as a strength. It means that our approach can be applied to complex graphical models where the marginal uncertainty may be hard to express, but the conditional and prior uncertainties are readily available.
One limitation of our analyses is that we bound the Bayes regret, instead of a stronger frequentist regret. This simplifies our proofs while they still capture our problem structure. Our analyses can be extended to the frequentist setting. This only requires a new proof of \cref{lem:bayes regret}, with martingale bounds for tail events and anti-concentration bounds for posterior sampling. The rest of the analysis, where our main contributions are, would not change.
\section{HIERARCHICAL GAUSSIAN BANDITS}
\label{sec:models}
Now we instantiate \ensuremath{\tt HierTS}\xspace in hierarchical Gaussian models. This yields closed-form posteriors, which permit regret analysis (\cref{sec:key ideas}). We discuss generalization to other distributions in \cref{sec:extensions}.
We assume that the environment is generated as
\begin{align}
\mu_*
& \sim \mathcal{N}(\mu_q, \Sigma_q)\,,
\label{eq:gaussian hierarchical} \\
\theta_{s, *} \mid \mu_*
& \sim \mathcal{N}(\mu_*, \Sigma_0)\,,
& \forall s \in [m] \,,
\nonumber \\
Y_{s, t} \mid A_{s, t}, \theta_{s, *}
& \sim \mathcal{N}(A_{s, t}^\top \theta_{s, *}, \sigma^2)\,,
& \forall t \geq 1, \, s \in \mathcal{S}_t\,,
\nonumber
\end{align}
where $\Sigma_q \in \mathbb{R}^{d \times d}$ and $\Sigma_0 \in \mathbb{R}^{d \times d}$ are covariance matrices; $\mu_q$, $\mu_*$, $\theta_{s, *}$ are $d$-dimensional vectors; the set of actions is $\mathcal{A} \subseteq \mathbb{R}^d$; and the mean reward of action $a \in \mathcal{A}$ is $r(a; \theta) = a^\top \theta$. The reward noise is $\mathcal{N}(0, \sigma^2)$. This formulation captures both the multi-armed and linear bandits, since the actions in the former can be viewed as vectors in a standard Euclidean basis. We assume that all of $\mu_q$, $\Sigma_q$, $\Sigma_0$, and $\sigma$ are known by the agent. This assumption is only needed in the analysis of \ensuremath{\tt HierTS}\xspace, where we require an analytically tractable posterior. We relax it in our experiments (\cref{sec:experiments}), where we learn these quantities from past data.
\subsection{Gaussian Bandit}
\label{sec:gaussian bandit}
We start with a $K$-armed Gaussian bandit, which we instantiate as \eqref{eq:gaussian hierarchical} as follows. The task parameter $\theta_{s, *}$ is a vector of mean rewards in task $s$, where $\theta_{s, *, i}$ is the mean reward of action $i$. The covariance matrices are diagonal, $\Sigma_q = \sigma_q^2 I_K$ and $\Sigma_0 = \sigma_0^2 I_K$. We assume that both $\sigma_q > 0$ and $\sigma_0 > 0$ are known. The reward distribution of action $i$ is $\mathcal{N}(\theta_{s, *, i}, \sigma^2)$, where $\sigma > 0$ is a known reward noise.
Because $\Sigma_q$ and $\Sigma_0$ are diagonal, the hyper-posterior in round $t$ factors across the actions. Specifically, it is $Q_t = \mathcal{N}(\bar{\mu}_t, \bar{\Sigma}_t)$, where $\bar{\Sigma}_t = \diag{(\bar{\sigma}_{t, i}^2)_{i \in [K]}}$ and
\begin{align}
\bar{\mu}_{t, i}
& = \bar{\sigma}_{t, i}^2 \left(\frac{\mu_{q, i}}{\sigma_q^2} +
\sum_{s \in [m]} \frac{N_{s, t, i}}{N_{s, t, i} \sigma_0^2 + \sigma^2}
\frac{B_{s, t, i}}{N_{s, t, i}}\right)\,,
\label{eq:mab hyperposterior} \\
\bar{\sigma}_{t, i}^{-2}
& = \sigma_q^{-2} + \sum_{s \in [m]} \frac{N_{s, t, i}}{N_{s, t, i} \sigma_0^2 + \sigma^2}\,.
\nonumber
\end{align}
Here $N_{s, t, i} = \sum_{\ell < t} \I{s \in \mathcal{S}_\ell, A_{s, \ell} = i}$ is the number of times that action $i$ is taken in task $s$ up to round $t$, and $B_{s, t, i} = \sum_{\ell < t} \I{s \in \mathcal{S}_\ell, A_{s, \ell} = i} Y_{s, \ell}$ is its total reward. The hyper-posterior is derived in Appendix D of \citet{kveton21metathompson}. To understand this update, it is helpful to view it as a Gaussian posterior where each task is a single observation. The observation of task $s$ is the empirical mean reward estimate of action $i$ in task $s$, $B_{s, t, i} / N_{s, t, i}$, and its variance is $(N_{s, t, i} \sigma_0^2 + \sigma^2) / N_{s, t, i}$. The tasks with more observations affect the value of $\bar{\mu}_{t, i}$ more, because their mean reward estimates have lower variances. The variance never decreases below $\sigma_0^2$, because even the actual mean reward $\theta_{s, *, i}$ would be a noisy observation of $\mu_{*, i}$ with variance $\sigma_0^2$.
After the hyper-parameter is sampled, $\mu_t \sim Q_t$, the task parameter is sampled, $\theta_{s, t} \sim \mathcal{N}(\tilde{\mu}_{s, t}, \tilde{\Sigma}_{s, t})$, where $\tilde{\Sigma}_{s, t} = \diag{(\tilde{\sigma}_{s, t, i}^2)_{i \in [K]}}$ and
\begin{align}
\tilde{\mu}_{s, t, i}
& = \tilde{\sigma}_{s, t, i}^2 \left(\frac{\mu_{t, i}}{\sigma_0^2} +
\frac{B_{s, t, i}}{\sigma^2}\right)\,,
\label{eq:mab conditional} \\
\tilde{\sigma}_{s, t, i}^{-2}
& = \frac{1}{\sigma_0^2} + \frac{N_{s, t, i}}{\sigma^2}\,.
\nonumber
\end{align}
Note that the above is a Gaussian posterior with prior $\mathcal{N}(\mu_t, \sigma_0^2 I_K)$ and $N_{s, t, i}$ observations.
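The updates in \eqref{eq:mab hyperposterior} and \eqref{eq:mab conditional} can be implemented directly. Below is a minimal sketch of one posterior-sampling step in task $s$, assuming NumPy; the bookkeeping of the counts \texttt{N} and reward sums \texttt{B} (float arrays of shape $m \times K$) is left to the agent and the function name is illustrative.
\begin{lstlisting}[language=Python]
import numpy as np

def hier_ts_gaussian_step(N, B, mu_q, sigma_q, sigma_0, sigma, s, rng):
    # Hyper-posterior of mu_*; it factors over the K actions.
    w = N / (N * sigma_0**2 + sigma**2)           # per-task weights, m x K
    means = np.divide(B, N, out=np.zeros_like(B), where=N > 0)  # empirical means
    bar_var = 1 / (1 / sigma_q**2 + w.sum(axis=0))
    bar_mu = bar_var * (mu_q / sigma_q**2 + (w * means).sum(axis=0))
    mu = rng.normal(bar_mu, np.sqrt(bar_var))     # sampled hyper-parameter

    # Conditional posterior of theta_{s,*} given the sampled mu.
    tilde_var = 1 / (1 / sigma_0**2 + N[s] / sigma**2)
    tilde_mu = tilde_var * (mu / sigma_0**2 + B[s] / sigma**2)
    theta = rng.normal(tilde_mu, np.sqrt(tilde_var))
    return int(np.argmax(theta))                  # action taken in task s
\end{lstlisting}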
\subsection{Linear Bandit with Gaussian Rewards}
\label{sec:linear bandit}
Now we study a $d$-dimensional linear bandit, which is instantiated as \eqref{eq:gaussian hierarchical} as follows. The task parameter $\theta_{s, *}$ is a vector of coefficients in a linear model. The covariance matrices $\Sigma_q$ and $\Sigma_0$ are positive semi-definite and known. The reward distribution of action $a$ is $\mathcal{N}(a^\top \theta_{s, *}, \sigma^2)$, where $\sigma > 0$ is a known reward noise.
Similarly to \cref{sec:gaussian bandit}, we obtain closed-form posteriors using \citet{kveton21metathompson}. The hyper-posterior in round $t$ is $Q_t = \mathcal{N}(\bar{\mu}_t, \bar{\Sigma}_t)$, where
\begin{align}
\bar{\mu}_t
& = \bar{\Sigma}_t \Big(\Sigma_q^{-1} \mu_q +
\smashoperator{\sum_{s \in [m]}}
B_{s, t} - G_{s, t}(\Sigma_0^{-1} + G_{s, t})^{-1} B_{s, t}\Big)
\nonumber \\
& = \bar{\Sigma}_t \Big(\Sigma_q^{-1} \mu_q +
\sum_{s \in [m]} (\Sigma_0 + G_{s, t}^{-1})^{-1} G_{s, t}^{-1} B_{s, t}\Big)\,,
\nonumber \\
\bar{\Sigma}_t^{-1}
& = \Sigma_q^{-1} +
\sum_{s \in [m]} G_{s, t} - G_{s, t}(\Sigma_0^{-1} + G_{s, t})^{-1} G_{s, t}
\nonumber \\
& = \Sigma_q^{-1} +
\sum_{s \in [m]} (\Sigma_0 + G_{s, t}^{-1})^{-1}\,.
\label{eq:linear hyperposterior}
\end{align}
Here
\begin{align*}
G_{s, t}
= \sigma^{-2} \sum_{\ell < t} \I{s \in \mathcal{S}_\ell} A_{s, \ell} A_{s, \ell}^\top
\end{align*}
is the scaled sum of outer products of the feature vectors of the actions taken in task $s$ before round $t$, and
\begin{align*}
B_{s, t}
= \sigma^{-2} \sum_{\ell < t} \I{s \in \mathcal{S}_\ell} A_{s, \ell} Y_{s, \ell}
\end{align*}
is the corresponding sum of the feature vectors weighted by the observed rewards. Similarly to \eqref{eq:mab hyperposterior}, it is helpful to view \eqref{eq:linear hyperposterior} as a multivariate Gaussian posterior where each task is a single observation. The observation of task $s$ is the least squares estimate of $\theta_{s, *}$ from task $s$, $G_{s, t}^{-1} B_{s, t}$, and its covariance is $\Sigma_0 + G_{s, t}^{-1}$. Again, the tasks with many observations affect the value of $\bar{\mu}_t$ more, because $G_{s, t}^{-1}$ approaches a zero matrix in these tasks. The covariance never decreases below $\Sigma_0$, because even the actual task parameter $\theta_{s, *}$ would be a noisy observation of $\mu_*$ with covariance $\Sigma_0$.
After the hyper-parameter is sampled, $\mu_t \sim Q_t$, the task parameter is sampled, $\theta_{s, t} \sim \mathcal{N}(\tilde{\mu}_{s, t}, \tilde{\Sigma}_{s, t})$, where
\begin{align}
\tilde{\mu}_{s, t}
& = \tilde{\Sigma}_{s, t} \left(\Sigma_0^{-1} \mu_t + B_{s, t}\right)\,,
\label{eq:linear conditional} \\
\tilde{\Sigma}_{s, t}^{-1}
& = \Sigma_0^{-1} + G_{s, t}\,.
\nonumber
\end{align}
The above is the posterior of a linear model with a Gaussian prior $\mathcal{N}(\mu_t, \Sigma_0)$ and Gaussian observations.
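Analogously, one posterior-sampling step of \ensuremath{\tt HierTS}\xspace in the linear model follows \eqref{eq:linear hyperposterior} and \eqref{eq:linear conditional}. Below is a minimal sketch, assuming NumPy; the per-task statistics $G_{s, t}$ and $B_{s, t}$ are maintained by the agent and the function name is illustrative.
\begin{lstlisting}[language=Python]
import numpy as np

def hier_ts_linear_step(G, B, mu_q, Sigma_q, Sigma_0, actions, s, rng):
    # G[z]: d x d matrix G_{z,t}; B[z]: d-vector B_{z,t} (both already scaled).
    Sigma_0_inv = np.linalg.inv(Sigma_0)

    # Hyper-posterior of mu_*, using the first form of the update above.
    bar_prec = np.linalg.inv(Sigma_q)
    bar_vec = np.linalg.solve(Sigma_q, mu_q)
    for G_z, B_z in zip(G, B):
        V = np.linalg.inv(Sigma_0_inv + G_z)
        bar_prec = bar_prec + G_z - G_z @ V @ G_z
        bar_vec = bar_vec + B_z - G_z @ V @ B_z
    bar_Sigma = np.linalg.inv(bar_prec)
    mu = rng.multivariate_normal(bar_Sigma @ bar_vec, bar_Sigma)  # sampled mu_t

    # Conditional posterior of theta_{s,*} given the sampled mu.
    tilde_Sigma = np.linalg.inv(Sigma_0_inv + G[s])
    tilde_mu = tilde_Sigma @ (Sigma_0_inv @ mu + B[s])
    theta = rng.multivariate_normal(tilde_mu, tilde_Sigma)
    return int(np.argmax(actions @ theta))        # index of the taken action
\end{lstlisting}
The first form of the hyper-posterior update is used in the sketch because it avoids inverting $G_{z, t}$, which may be singular before a task has enough observations.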
\subsubsection*{\bibname}}
\usepackage{algorithm}
\usepackage{algorithmicx}
\usepackage[noend]{algpseudocode}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{bbm}
\usepackage{bm}
\usepackage{caption}
\usepackage{color}
\usepackage{dirtytalk}
\usepackage{dsfont}
\usepackage{enumerate}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{mathtools}
\usepackage{subfigure}
\usepackage{xspace}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage[bookmarks=false]{hyperref}
\hypersetup{
pdffitwindow=true,
pdfstartview={FitH},
pdfnewwindow=true,
colorlinks,
linktocpage=true,
linkcolor=Green,
urlcolor=Green,
citecolor=Green
}
\usepackage[capitalize,noabbrev]{cleveref}
\usepackage[backgroundcolor=white]{todonotes}
\newcommand{\todob}[2][]{\todo[color=Red!20,size=\tiny,inline,#1]{B: #2}}
\newcommand{\todoj}[2][]{\todo[color=Blue!20,size=\tiny,inline,#1]{J: #2}}
\newcommand{\todomgh}[2][]{\todo[color=Cyan!20,size=\tiny,inline,#1]{MGH: #2}}
\newcommand{\mz}[2][]{\textcolor{red}{[MZ: #2]}}
\newcommand{\commentout}[1]{}
\newcommand{\junk}[1]{}
\usepackage{tikz}
\usetikzlibrary{bayesnet}
\Crefname{corollary}{Corollary}{Corollaries}
\Crefname{proposition}{Proposition}{Propositions}
\Crefname{theorem}{Theorem}{Theorems}
\Crefname{definition}{Definition}{Definitions}
\Crefname{assumption}{Assumption}{Assumptions}
\Crefname{example}{Example}{Examples}
\Crefname{remark}{Remark}{Remarks}
\Crefname{setting}{Setting}{Settings}
\Crefname{lemma}{Lemma}{Lemmas}
\usepackage{thmtools}
\declaretheorem[name=Theorem,refname={Theorem,Theorems},Refname={Theorem,Theorems}]{theorem}
\declaretheorem[name=Lemma,refname={Lemma,Lemmas},Refname={Lemma,Lemmas},sibling=theorem]{lemma}
\declaretheorem[name=Corollary,refname={Corollary,Corollaries},Refname={Corollary,Corollaries},sibling=theorem]{corollary}
\declaretheorem[name=Assumption,refname={Assumption,Assumptions},Refname={Assumption,Assumptions}]{assumption}
\declaretheorem[name=Proposition,refname={Proposition,Propositions},Refname={Proposition,Propositions},sibling=theorem]{proposition}
\declaretheorem[name=Definition,refname={Definition,Definitions},Refname={Definition,Definitions},sibling=theorem]{definition}
\declaretheorem[name=Example,refname={Example,Examples},Refname={Example,Examples}]{example}
\declaretheorem[name=Remark,refname={Remark,Remarks},Refname={Remark,Remarks}]{remark}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\diag}[1]{\mathrm{diag}\left(#1\right)}
\newcommand{\domain}[1]{\mathrm{dom}\left(#1\right)}
\newcommand{\mathsf{pa}}{\mathsf{pa}}
\newcommand{\range}[1]{\mathrm{rng}\left[#1\right]}
\newcommand{\E}[2]{\mathbb{E}_{#1} \left[#2\right]}
\newcommand{\condE}[2]{\mathbb{E} \left[#1 \,\middle|\, #2\right]}
\newcommand{\Et}[1]{\mathbb{E}_t \left[#1\right]}
\newcommand{\prob}[1]{\mathbb{P} \left(#1\right)}
\newcommand{\condprob}[2]{\mathbb{P} \left(#1 \,\middle|\, #2\right)}
\newcommand{\probt}[1]{\mathbb{P}_t \left(#1\right)}
\newcommand{\var}[1]{\mathrm{var} \left[#1\right]}
\newcommand{\condvar}[2]{\mathrm{var} \left[#1 \,\middle|\, #2\right]}
\newcommand{\std}[1]{\mathrm{std} \left[#1\right]}
\newcommand{\condstd}[2]{\mathrm{std} \left[#1 \,\middle|\, #2\right]}
\newcommand{\cov}[1]{\mathrm{cov} \left[#1\right]}
\newcommand{\condcov}[2]{\mathrm{cov} \left[#1 \,\middle|\, #2\right]}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\ceils}[1]{\left\lceil#1\right\rceil}
\newcommand{\dbar}[1]{\bar{\bar{#1}}}
\newcommand*\dif{\mathop{}\!\mathrm{d}}
\newcommand{\floors}[1]{\left\lfloor#1\right\rfloor}
\newcommand{\I}[1]{\mathds{1} \! \left\{#1\right\}}
\newcommand{\inner}[2]{\langle#1, #2\rangle}
\newcommand{\kl}[2]{D_\mathrm{KL}(#1 \,\|\, #2)}
\newcommand{\klplus}[2]{D_\mathrm{KL}^+(#1 \,\|\, #2)}
\newcommand{\maxnorm}[1]{\|#1\|_\infty}
\newcommand{\maxnormw}[2]{\|#1\|_{\infty, #2}}
\newcommand{\negpart}[1]{\left[#1\right]^-}
\newcommand{\norm}[1]{\|#1\|}
\newcommand{\normw}[2]{\|#1\|_{#2}}
\newcommand{\pospart}[1]{\left[#1\right]^+}
\newcommand{\rnd}[1]{\bm{#1}}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\subreal}[0]{\preceq}
\newcommand{\supreal}[0]{\succeq}
\newcommand{^\top}{^\top}
\DeclareMathOperator*{\argmax}{arg\,max\,}
\DeclareMathOperator*{\argmin}{arg\,min\,}
\let\det\relax
\DeclareMathOperator{\det}{det}
\DeclareMathOperator{\poly}{poly}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\sgn}{sgn}
\let\trace\relax
\DeclareMathOperator{\trace}{tr}
\mathchardef\mhyphen="2D
\newcommand{\ensuremath{\tt HierTS}\xspace}{\ensuremath{\tt HierTS}\xspace}
\newcommand{\ensuremath{\tt LinTS}\xspace}{\ensuremath{\tt LinTS}\xspace}
\newcommand{\ensuremath{\tt LinUCB}\xspace}{\ensuremath{\tt LinUCB}\xspace}
\newcommand{\ensuremath{\tt OracleTS}\xspace}{\ensuremath{\tt OracleTS}\xspace}
\newcommand{\ensuremath{\tt TS}\xspace}{\ensuremath{\tt TS}\xspace}
\newcommand{\ensuremath{\tt UCB}\xspace}{\ensuremath{\tt UCB}\xspace}
\def\mathcal{R}{\mathcal{R}}
\def\mathcal{BR}{\mathcal{BR}}
\begin{document}
\twocolumn[
\aistatstitle{Hierarchical Bayesian Bandits}
\aistatsauthor{Joey Hong \And Branislav Kveton \And Manzil Zaheer \And Mohammad Ghavamzadeh}
\aistatsaddress{UC Berkeley$^*$ \And Amazon$^*$ \And Google DeepMind \And Google Research}]
\begin{abstract}
Meta-, multi-task, and federated learning can be all viewed as solving similar tasks, drawn from a distribution that reflects task similarities. We provide a unified view of all these problems, as learning to act in a \emph{hierarchical Bayesian bandit}. We propose and analyze a natural hierarchical Thompson sampling algorithm (\ensuremath{\tt HierTS}\xspace) for this class of problems. Our regret bounds hold for many variants of the problems, including when the tasks are solved sequentially or in parallel; and show that the regret decreases with a more informative prior. Our proofs rely on a novel total variance decomposition that can be applied beyond our models. Our theory is complemented by experiments, which show that the hierarchy helps with knowledge sharing among the tasks. This confirms that hierarchical Bayesian bandits are a universal and statistically-efficient tool for learning to act with similar bandit tasks.
\end{abstract}
\input{introduction}
\input{setting}
\input{algorithm}
\input{models}
\input{key_ideas}
\input{regret}
\input{experiments}
\input{related_work}
\input{conclusions}
\bibliographystyle{abbrvnat}
\section{REGRET BOUNDS}
\label{sec:regret bounds}
This section bounds the Bayes regret of \ensuremath{\tt HierTS}\xspace in the linear bandit in \cref{sec:linear bandit}. Our bounds are specialized to multi-armed bandits in \cref{sec:mab bounds}. The key idea is to bound the posterior variances $\mathcal{V}(m, n)$ in \eqref{eq:posterior variances} and then substitute the bound into the infinite-action bound in \cref{lem:bayes regret}. We bound the variances using the total covariance decomposition in \cref{sec:total variance decomposition}. Without loss of generality, we assume that the action set $\mathcal{A}$ is a subset of the unit ball, that is, $\normw{a}{2} \leq 1$ for all $a \in \mathcal{A}$.
We make the following theoretical contributions. First, we prove regret bounds using a novel variance decomposition (\cref{sec:total variance decomposition}), which improves in constant factors over classical information-theory bounds \citep{russo16information}. Second, we prove the first Bayes regret bound for the setting where an agent interacts with multiple tasks simultaneously.
This section has two parts. In \cref{sec:sequential regret}, we assume that only one action is taken in any round $t$, $|\mathcal{S}_t| = 1$. We call this setting \emph{sequential}, and note that it is the primary setting studied by prior works \citep{kveton21metathompson,basu21noregrets}. In \cref{sec:concurrent regret}, we focus on a \emph{concurrent} setting, where a single action can be taken in up to $L$ tasks in any round $t$, $|\mathcal{S}_t| \leq L \leq m$. The challenge of this setting is that the task parameters are only updated after all actions are taken.
\subsection{Sequential Regret}
\label{sec:sequential regret}
The following theorem provides a regret bound for the sequential setting.
\begin{theorem}[Sequential regret]
\label{thm:sequential regret} Let $|\mathcal{S}_t| = 1$ for all rounds $t$ and $\mathcal{A} \subseteq \mathbb{R}^d$. Choose $\delta = 1 / (m n)$. Then the Bayes regret of \ensuremath{\tt HierTS}\xspace is
\begin{align*}
\mathcal{BR}(m, n)
\leq d \sqrt{2 m n [c_1 m + c_2] \log(m n)} + c_3\,,
\end{align*}
where $c_3 = O(d^\frac{3}{2})$,
\begin{align*}
c_1
& = \frac{\lambda_1(\Sigma_0)}{\log(1 + \sigma^{-2} \lambda_1(\Sigma_0))}
\log\left(1 + \frac{\lambda_1(\Sigma_0) n}{\sigma^2 d}\right)\,, \\
c_2
& = \frac{c_q c}{\log(1 + \sigma^{-2} c_q)}
\log\left(1 + \frac{\lambda_1(\Sigma_q) m}{\lambda_d(\Sigma_0)}\right)\,, \\
c_q
& = \frac{\lambda_1^2(\Sigma_0) \lambda_1(\Sigma_q)}{\lambda_d^2(\Sigma_0)}, \quad
c = 1 + \sigma^{-2} \lambda_1(\Sigma_0)\,.
\end{align*}
\end{theorem}
The proof of \cref{thm:sequential regret} is based on three steps. First, we use \cref{lem:bayes regret}. Second, we employ \cref{lem:covariance decomposition} to decompose the posterior variance in any round into that of the task parameters and hyper-parameter. Finally, we apply elliptical lemmas to bound each term separately. \cref{thm:sequential regret} has a nice interpretation: $m \sqrt{c_1 n}$ is the regret for learning task parameters and $\sqrt{c_2 m n}$ is the regret for learning the hyper-parameter $\mu_*$. We elaborate on both terms below.
The term $m \sqrt{c_1 n}$ represents the regret for solving $m$ bandit tasks, which are sampled i.i.d.\ from a known prior $\mathcal{N}(\mu_*, \Sigma_0)$. Under this assumption, no task provides information about any other task, and thus the term is linear in $m$. The constant $c_1$ is $O(\lambda_1(\Sigma_0))$ and reflects the dependence on the prior width $\sqrt{\lambda_1(\Sigma_0)}$. Roughly speaking, when the task prior is half as informative, $\sqrt{c_1}$ doubles and so does $m \sqrt{c_1 n}$. This is the expected scaling with conditional uncertainty of $\theta_{s, *}$ given $\mu_*$.
The term $\sqrt{c_2 m n}$ is the regret for learning the hyper-parameter. Asymptotically, it is $O(\sqrt{m})$ smaller than $m \sqrt{c_1 n}$. Therefore, for a large number of tasks $m$, its contribution to the total regret is negligible. This is why hierarchical Bayesian bandits perform so well in practice. The constant $c_2$ is $O(\lambda_1(\Sigma_q))$ and reflects the dependence on the hyper-prior width $\sqrt{\lambda_1(\Sigma_q)}$. When the hyper-prior is half as informative, $\sqrt{c_2}$ doubles and so does $\sqrt{c_2 m n}$. This is the expected scaling with the marginal uncertainty of $\mu_*$.
\subsection{Tightness of Regret Bounds}
\label{sec:tightness}
One shortcoming of our current analysis is that we do not provide a matching lower bound.
To the best of our knowledge, Bayes regret lower bounds are rare and do not match existing upper bounds. The only lower bound that we are aware of is $\Omega(\log^2 n)$ in Theorem 3 of \citet{lai87adaptive}. The bound is for $K$-armed bandits and it is unclear how to apply it to structured problems. Seminal works on Bayes regret minimization \citep{russo14learning,russo16information} do not match it. Therefore, to show that our problem structure is reflected in our bound, we compare the regret of \ensuremath{\tt HierTS}\xspace to baselines that have more information or use less structure.
Now we compare the regret of \ensuremath{\tt HierTS}\xspace to two \ensuremath{\tt LinTS}\xspace \citep{agrawal13thompson} baselines that do not use our hierarchical model. The first is an oracle \ensuremath{\tt LinTS}\xspace that knows $\mu_*$, and so has more information than \ensuremath{\tt HierTS}\xspace. Its Bayes regret would be as in \cref{thm:sequential regret} with $c_2 = 0$. Not surprisingly, it is lower than that of \ensuremath{\tt HierTS}\xspace. The second baseline is \ensuremath{\tt LinTS}\xspace that knows that $\mu_* \sim \mathcal{N}(\mu_q, \Sigma_q)$, but does not model the structure that the tasks share $\mu_*$. In this case, each task parameter can be viewed as having prior $\mathcal{N}(\mu_q, \Sigma_q + \Sigma_0)$. The regret of this algorithm would be as in \cref{thm:sequential regret} with $c_2 = 0$, while $\lambda_1(\Sigma_0)$ in $c_1$ would be $\lambda_1(\Sigma_q + \Sigma_0)$. Since $c_1$ is multiplied by $m$ while $c_2$ is not, \ensuremath{\tt HierTS}\xspace would have lower regret as $m \to \infty$. This is a powerful testament to the benefit of learning the hyper-parameter.
Finally, we want to comment on linear dependence in $d$ and $m$ in \cref{thm:sequential regret}. The dependence on $d$ is standard in Bayes regret analyses for linear bandits with infinitely many arms \citep{russo14learning,lu19informationtheoretic}. As for the number of tasks $m$, since the tasks are drawn i.i.d.\ from the same hyper-prior, they do not provide any additional information about each other. So, even if the hyper-parameter is known, the regret for learning to act in $m$ tasks with $n$ rounds would be $O(m \sqrt{n})$. Our improvements are in constants due to better variance attribution. Other bandit meta-learning works \citep{kveton21metathompson,basu21noregrets} made similar observations. Also note that the frequentist regret of \ensuremath{\tt LinTS}\xspace applied to $m$ independent linear bandit tasks is $\tilde{O}(m d^\frac{3}{2} \sqrt{n})$ \citep{agrawal13thompson}. This is worse by a factor of $\sqrt{d}$ than the bound in \cref{thm:sequential regret}.
\subsection{Concurrent Regret}
\label{sec:concurrent regret}
Now we investigate the concurrent setting, where the agent acts in up to $L$ tasks per round. This setting is challenging because the hyper-posterior $Q_t$ is not updated until the end of the round. This is because the task posteriors are not refined with the observations from concurrent tasks. This delayed feedback should increase regret. Before we show it, we make the following assumption on the action space.
\begin{assumption}
\label{ass:basis} There exist actions $\{a_i\}_{i = 1}^d \subseteq \mathcal{A}$ and $\eta > 0$ such that $\lambda_d(\sum_{i = 1}^d a_i a_i^\top) \geq \eta$.
\end{assumption}
This assumption is without loss of generality. Specifically, if $\mathbb{R}^d$ was not spanned by actions in $\mathcal{A}$, we could project $\mathcal{A}$ into a subspace where the assumption would hold. Our regret bound is below.
\begin{theorem}[Concurrent regret]
\label{thm:concurrent regret} Let $|\mathcal{S}_t| \leq L \leq m$ and $\mathcal{A} \subseteq \mathbb{R}^d$. Let $\delta = 1 / (m n)$. Then the Bayes regret of \ensuremath{\tt HierTS}\xspace is
\begin{align*}
\mathcal{BR}(m, n)
\leq d \sqrt{2 m n [c_1 m + c_2] \log(m n)} + c_3\,,
\end{align*}
where $c_1$, $c_q$, and $c$ are defined as in \cref{thm:sequential regret},
\begin{align*}
c_2
& = \frac{c_q c_4 c}{\log(1 + \sigma^{-2} c_q)}
\log\left(1 + \frac{\lambda_1(\Sigma_q) m}{\lambda_d(\Sigma_0)}\right)\,, \\
c_4
& = 1 + \frac{\sigma^{-2} \lambda_1(\Sigma_q) (\lambda_1(\Sigma_0) + \sigma^2 / \eta)}
{\lambda_1(\Sigma_q) + (\lambda_1(\Sigma_0) + \sigma^2 / \eta) / L}\,,
\end{align*}
and $c_3 = O(d^\frac{3}{2} m)$.
\end{theorem}
The key step in the proof is to modify \ensuremath{\tt HierTS}\xspace as follows. For the first $d$ interactions with any task $s$, we take actions $\{a_i\}_{i = 1}^d$. This guarantees that we explore all directions within the task, and allows us to bound losses from not updating the task posterior with concurrent observations. This modification of \ensuremath{\tt HierTS}\xspace is trivial and analogous to popular initialization in bandits, where each arm is pulled once in the first rounds \citep{auer02finitetime}.
The regret bound in \cref{thm:concurrent regret} is similar to that in \cref{thm:sequential regret}. There are two key differences. First, the additional scaling factor $c_4$ in $c_2$ is the price for taking concurrent actions. It increases as more actions $L$ are taken concurrently, but is sublinear in $L$. Second, $c_3$ arises due to trivially bounding $dm$ rounds of forced exploration. To the best of our knowledge, \cref{thm:concurrent regret} is the first Bayes regret bound where multiple bandit tasks are solved concurrently. Prior works only proved frequentist regret bounds \citep{yang21impact}.
\section{RELATED WORK}
\label{sec:related work}
The most related works are recent papers on bandit meta-learning \citep{bastani19meta,ortega19metalearning,cella20metalearning,kveton21metathompson,basu21noregrets,peleg21metalearning,simchowitz21bayesian}, where a learning agent interacts with a single task at a time until completion. Both \citet{kveton21metathompson} and \citet{basu21noregrets} represent their problems using graphical models and apply Thompson sampling to solve them. The setting of these papers is less general than ours. \citet{wan21metadatabased} study a setting where the tasks can arrive in any order. We differ from this work in several aspects. First, they only consider a $K$-armed bandit. Second, their model is different. In our notation, \citet{wan21metadatabased} assume that the mean reward of action $a$ in task $s$ is $x_{s, a}^\top \mu_*$ plus i.i.d.\ noise, where $x_{s, a}$ is an observed feature vector. The i.i.d.\ noise prevents generalization to a large number of actions. In our work, the mean reward of action $a$ in task $s$ is $a^\top \theta_{s, *}$, where $\theta_{s, *} \sim \mathcal{N}(\mu_*, \Sigma_0)$. Third, \citet{wan21metadatabased} derive a frequentist regret bound, which matches \cref{thm:sequential regret} asymptotically, but does not explicitly depend on prior widths. Finally, \citet{wan21metadatabased} do not consider the concurrent setting. To the best of our knowledge, we are the first to study Bayesian bandits with arbitrarily ordered and concurrent tasks.
The novelty in our analysis is the total covariance decomposition, which leads to better variance attribution in structured models than information-theoretic bounds \citep{russo16information,lu19informationtheoretic,basu21noregrets}. For instance, take Theorem 5 of \citet{basu21noregrets}, which corresponds to our sequential meta-learning setting. Forced exploration is needed to make their task term $O(\lambda_1(\Sigma_0))$. This is because the upper bound on the regret with filtered mutual information depends on the maximum marginal task parameter covariance, which can be $\lambda_1(\Sigma_q + \Sigma_0)$. In our analysis, the comparable term $c_1$ (\cref{thm:sequential regret}) is $O(\lambda_1(\Sigma_0))$ without any forced exploration. We also improve upon related analysis of \citet{kveton21metathompson} in several aspects. First, \citet{kveton21metathompson} analyze only a $K$-armed bandit. Second, they derive that the additional regret for meta-learning is $\tilde{O}(\sqrt{m} n^2)$; while our bound shows $\tilde{O}(\sqrt{m n})$. Finally, our setting generalizes bandit meta-learning.
Meta- and multi-task bandits have also been studied in the frequentist setting \citep{azar13sequential,deshmukh17multitask}. \citet{cella20metalearning} propose a \ensuremath{\tt LinUCB}\xspace algorithm \citep{abbasi-yadkori11improved} that constructs an ellipsoid around the unknown hyper-parameter in a linear bandit. The concurrent setting has also been studied, but with a different shared structure of task parameters. \citet{dubey20kernel} use a kernel matrix, \citet{wang21multitask} utilize pairwise distances of task parameters, and \citet{yang21impact} use low-rank factorization. Our structure, where the task parameters are drawn from an unknown prior, is both novel and important to study because it differs significantly from the aforementioned works. Earlier works on bandits with similar instances rely on clustering \citep{gentile14online,gentile17clustering,li16collaborative} and low-rank factorization \citep{kawale15efficient,sen17contextual,katariya16dcm,katariya17stochastic}. They analyze the frequentist regret, which is a stronger metric than the Bayes regret. Except for one work, all algorithms are UCB-like and conservative in practice. In comparison, \ensuremath{\tt HierTS}\xspace uses a natural stochastic structure. This makes it practical, to the point that the analyzed algorithm performs well in practice without any additional tuning.
Another related line of work are latent bandits \citep{maillard14latent,hong20latent,hong22thompson}, where the bandit problem is parameterized by an unknown latent state. If known, the latent state could help the agent to identify the bandit instance that it interacts with. These works reason about latent variables; but the purpose is different from our work, where we introduce the unknown hyper-parameter $\mu_*$ to relate multiple similar tasks.
\section{SETTING}
\label{sec:setting}
We use the following notation. Random variables are capitalized, except for Greek letters like $\theta$ and $\mu$. For any positive integer $n$, we define $[n] = \set{1, \dots, n}$. The indicator function is denoted by $\I{\cdot}$. The $i$-th entry of vector $v$ is $v_i$. If the vector is already indexed, such as $v_j$, we write $v_{j, i}$. A matrix with diagonal entries $v$ is $\diag{v}$. For any matrix $M \in \mathbb{R}^{d \times d}$, the maximum eigenvalue is $\lambda_1(M)$ and the minimum is $\lambda_d(M)$. The big O notation up to logarithmic factors is $\tilde{O}$.
Now we present our setting for solving similar bandit tasks. Each task is a \emph{bandit instance} with actions $a \in \mathcal{A}$, where $\mathcal{A}$ denotes an \emph{action set}. Rewards of actions are generated by \emph{reward distribution} $P(\cdot \mid a; \theta)$, where $\theta \in \Theta$ is an unknown parameter shared by all actions. We assume that the rewards are $\sigma^2$-sub-Gaussian and denote by $r(a; \theta) = \E{Y \sim P(\cdot \mid a; \theta)}{Y}$ the mean reward of action $a$ under $\theta$. The learning agent interacts with $m$ tasks. In a recommender system, each task could be an individual user. The task $s \in [m]$ is parameterized by a \emph{task parameter} $\theta_{s, *} \in \Theta$, which is sampled i.i.d.\ from a \emph{task prior distribution} $\theta_{s, *} \sim P(\cdot \mid \mu_*)$, which is in turn parameterized by an unknown \emph{hyper-parameter} $\mu_*$.
The agent acts at discrete decision points, which are indexed by integers and called \emph{rounds}. At round $t \geq 1$, the agent is asked to act in a set of tasks $\mathcal{S}_t \subseteq [m]$. It takes actions $A_t = (A_{s, t})_{s \in \mathcal{S}_t}$, where $A_{s, t} \in \mathcal{A}$ is the action in task $s$; and receives rewards $Y_t = (Y_{s, t})_{s \in \mathcal{S}_t} \in \mathbb{R}^{|\mathcal{S}_t|}$, where $Y_{s, t} \sim P(\cdot \mid A_{s, t}; \theta_{s, *})$ is a stochastic reward for taking action $A_{s, t}$ in task $s$. The rewards are drawn i.i.d.\ from their respective distributions. The set $\mathcal{S}_t$ can depend arbitrarily on the history. The assumption that the action set $\mathcal{A}$ is the same across all tasks and rounds is only to simplify exposition.
\begin{figure}[t]
\centering
\input{graphical_model}
\caption{Graphical model of our hierarchical Bayesian bandit.}
\label{fig:setting}
\end{figure}
In \emph{hierarchical Bayesian bandits}, the hyper-parameter $\mu_*$ is initially sampled from a \emph{hyper-prior} $Q$ known by the learning agent. Our full model is given by
\begin{align*}
\mu_* & \sim Q\,, \\
\theta_{s, *} \mid \mu_*
& \sim P(\cdot \mid \mu_*)\,,
& \forall s \in [m]\,, \\
Y_{s, t} \mid A_{s, t}, \theta_{s, *}
& \sim P(\cdot \mid A_{s, t}; \theta_{s, *})\,,
& \forall t \geq 1, \, s \in \mathcal{S}_t\,,
\end{align*}
and also visualized in \cref{fig:setting}. Note that $P$ denotes both the task prior and reward distribution; but they can be distinguished based on their parameters. Our setting is an instance of a hierarchical Bayesian model commonly used in supervised learning \citep{lindley72bayes,zhang17survey}, and has been studied in bandits in special cases where tasks appear sequentially \citep{kveton21metathompson,basu21noregrets}.
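To make the generative process above concrete, the following short Python sketch samples one Gaussian instance of the hierarchy. The dimensions, covariance choices, and the linear reward model are illustrative assumptions, not the exact experimental configuration of this paper.
\begin{verbatim}
import numpy as np

# Illustrative Gaussian instance of the hierarchy (assumed dimensions).
d, m = 4, 10                      # feature dimension, number of tasks
mu_q = np.zeros(d)                # hyper-prior mean
Sigma_q = np.eye(d)               # hyper-prior covariance
Sigma_0 = 0.25 * np.eye(d)        # task-prior covariance
sigma = 0.5                       # reward noise standard deviation

rng = np.random.default_rng(0)
mu_star = rng.multivariate_normal(mu_q, Sigma_q)           # mu_* ~ Q
theta_star = rng.multivariate_normal(mu_star, Sigma_0, m)  # theta_{s,*} | mu_*

def reward(s, a):
    """Stochastic linear reward for action a (a d-vector) in task s."""
    return theta_star[s] @ a + sigma * rng.standard_normal()
\end{verbatim}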
Our learning agent interacts with each of $m$ tasks for at most $n$ times. So the total number of rounds varies, as it depends on the number of tasks that the agent interacts with simultaneously in each round. However, the maximum number of interactions is $m n$. The goal is to minimize the \emph{Bayes regret} \citep{russo14learning} defined as
\begin{align*}
\mathcal{BR}(m, n)
= \E{}{\sum_{t \geq 1} \sum_{s \in \mathcal{S}_t}
r(A_{s, *}; \theta_{s, *}) - r(A_{s, t}; \theta_{s, *})}\,,
\end{align*}
where $A_{s, *} = \argmax_{a \in \mathcal{A}} r(a; \theta_{s, *})$ is the optimal action in task $s$. The expectation in $\mathcal{BR}(m, n)$ is over $\mu_*$, $\theta_{s, *}$ for $s \in [m]$, and actions $A_{s,t}$ of the agent. While weaker than a traditional frequentist regret, the Bayes regret is a practical performance metric, as we are often interested in an average performance \citep{hong20latent,kveton21metathompson}. For example, when recommending to a group of users, it is natural to optimize over the whole population rather than an individual. Our definition of $\mathcal{BR}(m, n)$ is also dictated by the fact that $m$ and $n$ are the primary quantities of interest in our regret analyses. Our goal is to minimize $\mathcal{BR}(m, n)$ without knowing $\mu_*$ and $\theta_{s, *}$ a priori.
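Because the Bayes regret averages over draws of $\mu_*$, the task parameters, and the agent's randomness, it can be estimated by simulation. The sketch below only illustrates this Monte Carlo view; \texttt{sample\_instance} and \texttt{run\_agent} are hypothetical callbacks standing in for the generative model above and for any bandit algorithm.
\begin{verbatim}
import numpy as np

def bayes_regret_mc(sample_instance, run_agent, n_runs=100):
    """Monte Carlo estimate of BR(m, n) by averaging over random instances.

    sample_instance() is assumed to draw (theta_star, actions) from the
    hierarchy; run_agent(theta_star, actions) is assumed to return the
    realized cumulative regret of one full interaction.
    """
    gaps = [run_agent(*sample_instance()) for _ in range(n_runs)]
    return float(np.mean(gaps))
\end{verbatim}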
Since the set of tasks $\mathcal{S}_t$ can be chosen arbitrarily, our setting is general and subsumes many prior settings. For instance, when the agent interacts with the same task for $n$ rounds before shifting to the next one, and $\mathcal{S}_t = \{\lceil t / n \rceil\}$, we get a \emph{meta-learning bandit} \citep{kveton21metathompson}. More generally, when the agent interacts with the tasks sequentially, $|\mathcal{S}_t| = 1$, our setting can be viewed as multi-task learning where any task helps the agent to solve other tasks. Therefore, we recover a \emph{multi-task bandit} \citep{wan21metadatabased}. Finally, when the agent acts in multiple tasks concurrently, $|\mathcal{S}_t| > 1$, we recover the setting of \emph{collaborative filtering bandits} \citep{gentile14online,li16collaborative} or more recently \emph{federated bandits} \citep{shi21federated}. Our algorithm and its analysis apply to all of these settings.
\section{Overview}
Jaeger \cite{jaeger08} presents a paper which attempts to show that P is not equal to NP. This follows from a novel model of computation which has intrinsic uncertainty in its computation of all problems. However, we will show that this model cannot be used to decide the same class of problems that a Turing machine can.
\subsection{Summary of the Paper in Question}
Jaeger \cite{jaeger08}, in his paper \emph{Solving the P/NP Problem Under Intrinsic Uncertainty}, attempts to resolve P-versus-NP through unique, unconventional means, by sketching a computer science analogue to the Heisenberg Uncertainty Principle. He begins by establishing a supposedly Turing equivalent computing machine, largely based on the Turing machine, that contains a tape of binary storage cells of any length, infinite or finite. Rather than assuming the machine is preprogrammed to accept certain inputs, the model described has a single tape in which both the input and the program code are randomly placed. This allows him to introduce the uncertainty that the paper is based on. He argues that since multiple, unique programs can perform the same actions at different speeds, program size should be considered when analyzing program complexity. He continues that his concept of ``intrinsic uncertainty'' implies that it is impossible to distinguish whether, given a segment of the input, a part of this segment is program code or input.
Jaeger \cite{jaeger08} likens the relationship of the parts of this segment to the relationship described in Heisenberg's Uncertainty Principle: one cannot determine whether a part is code or input; rather, one can compute the answer given both options and then compute the ``certainty'' that each computation is correct. There is a problem with this, he explains, because according to his model every computation done on the machine has uncertainty, including his certainty calculations. To solve this problem, he proposes what he calls ``self-computation'', or applying his Turing-machine-based machine to itself to compute a confidence value for its own output, thereby increasing that confidence value. As a result of this self-computation he claims that the confidence value of a computation is directly proportional to the ratio of code length to input length. Through use of the sigmoid function, he claims one can calculate the entropy, or the probability that this confidence value is smaller than, larger than, or equal to 0.5.
To relate this all to the P/NP problem, Jaeger \cite{jaeger08} constructs three lemmas. His first lemma says that a Turing machine simulating a computable decision in NP to an arbitrary precision is itself computable regardless of the precision. Lemma three states that this Turing machine is also in NP for some given uncertainties. Lemma two allows him to state an uncertainty threshold for which the Turing machine is not in P, thus proving that there exists some function in NP but not in P.
\subsection{P = NP Problem}
The problem known as P=NP is arguably the biggest open problem in computer science \cite{sipser92}. The Clay Mathematics Institute has offered a \$1,000,000 prize for anyone who provides a proof one way or another. This incentive has attracted quite a few attempts by amateurs as well as professionals, since the problem at first seems deceptively simple. The basic question asked by this problem is this: if an answer to a yes or no problem can be verified in polynomial time, can the answer also be computed in polynomial time? The answer is commonly assumed to be `no', although no proof has yet emerged.
One way to prove that P is not equal to NP is to give a counterexample of a problem that is in NP but provably not in P. This is more difficult than it sounds because it is difficult to prove that there are no algorithms in P which compute the answer. Jaeger's paper \cite{jaeger08} attempts to prove that there is some intrinsic information necessary for having the answer to a problem, and that this intrinsic information can only be computed in nonpolynomial time.
\subsection{Turing Machines}
Jaeger 2008 \cite{jaeger08} uses the model of a Turing machine, a theoretical model for computing yes-or-no answers. The model of a Turing machine has several parts \cite{sipser05}: a finite set of states including the initial state and a subset of final or accepting states, a finite set of tape alphabet symbols composed of the blank symbol and a set of input symbols, and a transition function which computes the new state, the alphabet symbol to be printed, and the direction to move (right or left), based on the current state and the current tape symbol. Informally, there are two parts in a Turing machine: a finite set of states in which the current state is arrived at deterministically, and an infinite tape which may start with a finite number of non-blank symbols printed on it.
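For concreteness, the transition step of a deterministic single-tape Turing machine can be sketched in a few lines of Python. The dictionary encoding of the transition function below is our own illustrative choice and is not taken from either paper.
\begin{verbatim}
# Minimal single-tape Turing machine; 'delta' maps
# (state, symbol) -> (new_state, write_symbol, move in {-1, +1}).
def run_tm(delta, q0, accept, tape, blank='_', max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape, blanks elsewhere
    q, head = q0, 0
    for _ in range(max_steps):
        if q in accept:
            return True
        sym = tape.get(head, blank)
        if (q, sym) not in delta:  # no applicable rule: halt and reject
            return False
        q, write, move = delta[(q, sym)]
        tape[head] = write
        head += move
    return False                   # did not halt within the step budget
\end{verbatim}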
\section{Challenging the Arguments Made}
We will show that the model presented in this paper is flawed because it is not equivalent to a Turing machine. Moreover, the self-computation algorithm presented here has several major shortcomings which prevent it from resolving that uncertainty.
\subsection{Theoretical Differences with Turing Machines}
The most fundamental flaw in the argument presented in the paper under consideration is that rather than using a theoretically established model like a Turing machine, it presents a model in which the ``code'' of the model, presumably corresponding to the set of states and the transition function, is indistinguishable from the ``input'' to the code, presumably corresponding to the initial content of the tape. Although there are numerous very obvious theoretical and practical ways of distinguishing ``code'' from ``input'', they are not clearly marked in this model, causing the intrinsic uncertainty upon which the core argument is based.
``Let us assume in the following that we have a Turing-complete machine architecture...'', reads paragraph 2, p.\ 5. The paper seems to rely on this model being Turing-complete, a term which is never clearly defined but which we assume to mean Turing equivalent. However, this is clearly not the case, since the elements of the 7-tuple defining a Turing machine are clearly identifiable, and the same can be said for the initial state of the tape. This method of arbitrarily introducing uncertainty to the computation of a Turing machine does not produce a Turing equivalent model, as it cannot be used to simulate a Turing machine, nor can it compute the same class of decisions that a Turing machine can. Because the output of this model is always uncertain, there are no problems in NP that can be reduced to it. It is similar to a model that flips a coin and, upon seeing heads, returns an arbitrary answer.
\subsection{Lack of Rigor in Analysis}
The definition of the machine the proof revolves around is seemingly flawed. The machine detailed, which is supposed to be equivalent to a Turing machine in computational power, has a tape that contains both its program code and input, with both being randomly distributed throughout the tape. The machine reads the tape, which is partitioned into two subsets, $S_1$ and $S_2$. We are led to assume that these subsets form the direct sum of the original input, but this is never verified. As a result, one could infer that perhaps there is some intersection between $S_1$ and $S_2$. This is an example of the unrigorous definitions throughout the paper. These subsets are never fully defined: no methods for finding them are ever given. The machine evaluates the tape by computing the results as if $S_1$ were the code and $S_2$ the input and vice-versa, then calculates confidence values for both of those computations to help decide which configuration is correct. This has some problems, namely that the method for partitioning the input into two distinct blocks is non-existent. On page 6, the partition is described as being $N$ bits long, but no further information is given. Given the fact that the input and code are randomly intermixed, one could go as far as to say that it is impossible to partition the input with any precision: both the code and input string are represented in binary and are thus indistinguishable. This means that not only would the machine have to evaluate for all possible $S_1$-$S_2$ dichotomies, but also for all possible partitions of the input. This additional factor is left completely unaccounted for in the remainder of the paper.
There is another error on page 6 in the uncertainty section. The claim that ``we can accept the interpretation whose program code encompasses the larger number of bits as more likely'' is never justified. Such a statement is invalid: in many cases, for instance any program dealing with databases, the size of the input vastly exceeds the size of the program code.
One mistake that appears quite often in Jaeger 2008 \cite{jaeger08} is the constant changing of terms. In section 2, page 5, the machine used throughout the paper is described as a ``Turing-complete'' machine based on the Turing machine. Just two pages later, on page 7, the same machine is referred to as a Turing machine. This inconsistency is detrimental to the reader's understanding as well as the validity of the paper itself. The paper also mentions an ``outside program'' that performs the interpreting and execution aspects of the previously defined computing machine. While the existence of such a ``program'' is provable, the author has omitted any kind of proof that such a program exists.
\subsection{The Self Computation Algorithm}
The bulk of this article describes an algorithm to be used for calculation of certainty measurements to be reported alongside the actual findings of a Turing machine. The method of self-computation proposed is hugely complex, riddled with nonstandard notation, and very confusing; the authors of this paper were unable to make much sense of it. Self-computation, as described in the paper, involves executing the uncertain Turing machine on a copy of a similar machine running arbitrary ``code''. The claim that the machine could execute a copy of itself and gain higher precision is dubious and left completely unproven. The insinuation is that as one runs the machine on itself, the bit length of the ``code'' portion gets longer and longer, increasing the ratio of code to data, which is directly proportional to certainty.
While this method may appear useful for calculating uncertainty, it is unlikely that this will work because it assumes, among other things, that bit length is a useful metric for telling code and data apart, that this is a necessary determination in a Turing machine, that Turing equivalent models can report a certainty measurement, and that no other algorithms exist for computing this function. As we have shown these things to be untrue, the internal merit of the self computation algorithm is irrelevant.
\subsection{There is No Proof of Nonpolynomiality for Certainty Calculation}
One problem with the argument presented in Jaeger 2008 \cite{jaeger08} (Lemmas 1 and 3) is that it is nowhere proven that the method of self-computation presented there is the only method of determining the certainty of a result. Even granting that there is a problem of determining certainty of answers produced by Turing machines, and given that this method of self-computation is a reasonable way of determining certainty, and given that a certainty threshold can be set such that computing certainty by this method cannot be done in polynomial time, all of which are doubtful claims, nothing in Jaeger's paper \cite{jaeger08} attempts to prove that there is no other method of computing this certainty measurement. It could very well be that there is a polynomial time algorithm for computing this measurement that the author simply did not think of.
Indeed, it is unclear from the paper under consideration why it should be possible to set a threshold ``dynamically'' so that it requires NP time to reach that level of certainty. Even using the algorithm outlined there, it is not clearly explained why this would require exponential time rather than polynomial time. This trick of requiring that an exponential amount of computation go into producing the answer is only possible as a use of the certainty value which is produced by the modified Turing machine. As the output of a standard Turing machine is binary, this type of trick, which requires using the extra non-binary output, would not normally be possible. It is as if one constructed an algorithm to run in exponential time by requiring that it print an exponential number of characters.
\section{Conclusion}
Overall, we have shown several things which should each individually render the argument described in Jaeger's paper \cite{jaeger08} impotent. We have shown that a model of computation that requires a measure of intrinsic uncertainty cannot be reduced to a Turing machine, nor can it reliably compute any NP-complete problem. We have also shown that the algorithm proposed for calculating uncertainty relies on faulty or unproven assumptions. Finally, we note that while one algorithm is presented here, which may very well have an exponential runtime, there is no attempt to prove that there cannot be a polynomial-time algorithm to compute the same. Any of these things strikes a fatal blow to the argument outlined by Jaeger \cite{jaeger08}. As such, we do not find this to be compelling support for P not being equal to NP.
\section{Acknowledgements}
This work was done as a project in the Spring 2009 CSC 200H course at the University of Rochester. We thank the professor, Lane A. Hemaspaandra, and the TA, Adam Sadilek, for their comments and advice. Any opinions, errors, or omissions are the sole responsibility of the authors.
\bibliographystyle{plain}
\section{Introduction}\label{Intro}
Although many dynamical systems can be well characterized by PDEs derived mathematically/physically from basic principles such as conservation laws, many other systems have unclear or elusive underlying mechanisms (e.g., ones in neuroscience, finance, and ecology). Thus, the governing equations are usually empirically formulated \cite{rudy2017data}. Data-driven physics discovery of dynamical systems gradually became possible in recent years due to the rapid development and extensive application of sensing technologies and computational power \cite{long2019pde}. Over the past years, extensive efforts have been devoted to discovering representative PDEs for complex dynamical systems for which limited prior knowledge is available \cite{schmidt2009distilling,raissi2018hidden,rudy2017data,long2019pde}.
Among all the methods investigated for PDE identification/learning \cite{schmidt2009distilling,raissi2018hidden,rudy2017data,long2019pde,maslyaev1903data,atkinson2019data,hasan2020learning,xu2020dlga}, sparse regression has gained the most attention in recent studies due to its inherent sparsity-promoting advantage. Considering a nonlinear PDE of the general form $u_t = N(u,u_x,u_{xx},...,x)$, in which the subscripts denote partial differentiation with respect to temporal or spatial coordinate(s), $N(\cdot)$ is an unknown expression on the right hand side of the PDE. $N(\cdot)$ is usually a nonlinear function of the spatial coordinate $x$, the measured quantity $u(x,t)$, and its spatial derivatives $u_x$ and $u_{xx}$. Given time series measurements of $u$ at certain spatial locations, the above equation can be approximated as $\mathbf{U}_t=\mathbf{\Theta}(\mathbf{U})\boldsymbol{\xi}$, in which $\mathbf{U}_t$ is the discretized form of $u_t$. $\mathbf{\Theta}(\mathbf{U})$ is a library matrix with each column corresponding to a candidate term in $N(\cdot)$. A key assumption in sparse identification is that $N(\cdot)$ consists of only a few terms for a real physical system, which requires the solution of the regression (i.e., $\boldsymbol{\xi}$) to be a sparse vector with only a small number of nonzero elements. This assumption promotes a parsimonious form of the learned PDE and avoids overfitting the measured data with a complex model containing redundant nonlinear higher-order terms.
In most current sparse regression methods for PDE learning, the parsimony of the learned model is realized through least squares regression with hard thresholding \cite{rudy2017data,rudy2019data,chen2020deep}, $\ell_1$ norm regularization \cite{bekar2021peridynamics,berg2019data,xiong2019data}, or $\ell_1$ norm regularization with hard thresholding \cite{both2021deepmod}. A critical challenge of these methods is that the identification results are susceptible to the selection of hyperparameters of the algorithm, including the regularizer $\lambda$ of the $\ell_1$ norm and the tolerance parameter $tol$ for hard thresholding. The identification results can be very different when hyperparameter settings change, especially when the measurement noise is large. As a result, hyperparameter tuning is especially critical and challenging for cases with noisy measurements. Additionally, hard thresholding tends to suppress small coefficients that may not correspond to the most trivial terms of the intermediately learned PDEs. Recently, Zhang and Liu \cite{zhang2021robust} proposed a robust PDE learning method (i.e., the $\psi$-PDE method) that progressively selects important model terms from the candidate library, which avoids hyperparameter tuning. A minimal sketch of a hard-thresholding baseline is given below.
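For reference, a sequential thresholded least-squares sketch in the spirit of the hard-thresholding approaches cited above is shown below; the library matrix \texttt{Theta}, the time-derivative vector \texttt{Ut}, and the hyperparameters \texttt{lam} and \texttt{tol} are illustrative, and the sensitivity to \texttt{tol} is exactly the tuning burden discussed here.
\begin{verbatim}
import numpy as np

def thresholded_least_squares(Theta, Ut, lam=1e-5, tol=0.1, n_iter=10):
    """Ridge regression with iterated hard thresholding (illustrative)."""
    xi = np.linalg.lstsq(Theta.T @ Theta + lam * np.eye(Theta.shape[1]),
                         Theta.T @ Ut, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < tol                  # prune small coefficients
        xi[small] = 0.0
        big = ~small
        if big.any():                             # refit on the surviving terms
            xi[big] = np.linalg.lstsq(Theta[:, big], Ut, rcond=None)[0]
    return xi
\end{verbatim}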
The methods discussed so far perform deterministic sparse regression. Probabilistic sparse regression has also been investigated for physics learning through Sparse Bayesian Learning (SBL) \cite{zhang2018robust,chen2021robust,fuentes2021equation,nayek2020spike,yuan2019machine,zhang2019robust,chen2020gaussian,Bhouri2021gaussian}. Unlike deterministic sparse regression, which yields a coefficient vector $\boldsymbol{\xi}$ without accompanying confidence intervals, the Bayesian approach enables quantifying uncertainties in the estimates of model parameters and enables model selection based on model complexities. Another important benefit is that the evaluated model uncertainty in Bayesian inference can be propagated into the confidence levels in system predictions. It has been shown that the SBL method automatically avoids overfitting with a complex model through implementing a form of ``Occam's razor'' in the prescribed prior that controls the model complexity \cite{jefferys1992ockham}. Hence, the dilemma of hyperparameter tuning can be largely obviated in SBL since it avoids setting additional regularization parameters \cite{tipping2003fast}.
Relevance Vector Machine (RVM) is the most widely used SBL method for Bayesian sparse regression in learning governing equations from observed data \cite{zhang2018robust,chen2021robust,fuentes2021equation,nayek2020spike,yuan2019machine,zhang2019robust}. RVM was originally developed for nonparametric modeling as a sparse and probabilistic alternative to the Support Vector Machine (SVM) \cite{tipping2003fast}. In RVM, sparsity is promoted by incorporating a parameterized independent Gaussian prior for each weight parameter, i.e., $p(w_i|\alpha_i) = \mathcal{N}(w_i|0,\alpha_i^{-1})$ with $w_i$ denoting a certain weight parameter and $\alpha_i$ its precision. The hyperparameter of each prior controls its strength over the associated weight. When solving this Bayesian inference problem through maximizing the evidence with respect to these hyperparameters, most of them go to infinity during the optimization. This leads to the posteriors of many weight parameters being infinitely peaked at zero and, consequently, a weight vector $\mathbf{w}$ comprising only a few non-zero elements. RVM has proved efficient for basis selection and advantageous over traditional methods (such as FOCUSS and basis pursuit) regarding obviating modeling/structural errors and algorithmic convergence \cite{wipf2004sparse}. These merits are essential in learning the governing equation(s) for a physical system, in which model parsimony and interpretability must be preserved.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{RVM.pdf}
\caption{Framework of the SBL method for discovering PDE(s) from measured data.}
\label{Figure:RVM}
\end{figure}
Figure \ref{Figure:RVM} illustrates the basic idea of discovering governing PDEs using the SBL method via RVM. With temporal/spatial derivatives numerically calculated from measured data in the spatial-temporal domain, a library for regression is built with representative terms with physical meaning and/or others such as polynomials. Then the equation discovery problem can be defined as a sparse regression problem such as $\mathbf{U}_t=\mathbf{\Theta}(\mathbf{U})\boldsymbol{\xi}$ introduced above. This sparse regression problem can be solved sequentially (such as in \cite{tipping2003fast}), during which many terms in the prescribed library are excluded according to their statistics. Finally, the equations can be formulated using the remaining terms and their coefficients/weights.
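For readers who want to experiment with this pipeline, scikit-learn's \texttt{ARDRegression} implements a closely related automatic-relevance-determination model (an independent Gaussian prior with its own precision per coefficient) and may serve as a stand-in for the RVM regression step; \texttt{Theta} and \texttt{Ut} denote the assumed library matrix and time-derivative vector, and the snippet is illustrative rather than the exact implementation used here.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import ARDRegression

# Illustrative only: per-coefficient Gaussian priors, as in RVM.
# 'Theta' (N x M library) and 'Ut' (length-N target) are assumed available.
ard = ARDRegression(fit_intercept=False)
ard.fit(Theta, Ut)

xi_mean = ard.coef_            # posterior mean of the coefficients
xi_cov = ard.sigma_            # posterior covariance of retained coefficients
active = np.abs(xi_mean) > 0   # most coefficients are driven (close) to zero
\end{verbatim}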
Fuentes et al. \cite{fuentes2021equation} used RVM to discover the governing Ordinary Differential Equations (ODEs) for several Single-Degree-of-Freedom (SDOF) nonlinear mechanical systems. Model uncertainty was quantified and propagated into predictions of system responses. This work was extended by Nayek et al. \cite{nayek2020spike} by adopting spike and slab priors for improved selective shrinkage of the weight parameters. Zhang and Lin \cite{zhang2018robust,zhang2019robust} attempted to discover ODEs/PDEs for the investigated dynamical systems using RVM. Robustness is lacking in this work because of the following limitations: 1) hard thresholding is implemented on the SBL results to further prune undesired terms, which compromises the merits of SBL methods for ODE/PDE learning; 2) as argued in \cite{lagergren2020learning}, noise is only added to the time derivative term instead of to the measured data, which largely suppresses the influence of noise in numerical differentiation and thus limits the method's robustness. This work was extended by Chen and Lin \cite{chen2021robust} through threshold Bayesian group Lasso with spike and slab prior (tBGL-SS) considering non-constant model parameters. However, the tBGL-SS method failed to resolve the hard thresholding issue. Additionally, the performance of PDE discovery significantly decreases when the noise level is beyond 5\%, which makes this method less robust than many state-of-the-art methods \cite{zhang2021robust} and thus limits its application in real practice. Yuan et al. \cite{yuan2019machine} used the RVM method to discover PDEs from measured spatiotemporal data and conducted convergence and consistency analysis. The robustness of this method in the presence of a large level of noise needs to be further examined. Zanna and Bolton \cite{zanna2020data} compared the SBL method using RVM and the physics-constrained deep learning (PCDL) using convolutional neural networks (CNN) for modeling turbulence in ocean mesoscale eddies and discussed the advantages and shortcomings of each method. In contrast to the PCDL method that yields a deep black-box CNN model, the SBL method provides physics-interpretable governing equations for the investigated complex dynamical system.
Considering the limitations of existing studies on PDE learning via SBL approaches, this study proposes the Parsimony-Enhanced Sparse Bayesian Learning (\textsc{Pe}SBL) method for robust data-driven discovery of PDEs. The main argument in the proposed study is that the parsimony principle should include both sparsity and complexity measures. Sparsity refers to the number of terms in a model, which has been widely addressed in the open literature. Complexity refers to the order and/or functional form complexity of each term, which, to the best knowledge of the authors, has not been addressed in physics discovery. Compared with traditional SBL methods for sparse regression, the proposed \textsc{Pe}SBL method enhances the parsimony of learned equations through quantifying model complexity and recommending less complex terms to the solution of sparse regression. The key issues investigated in this study include:
\begin{enumerate}
\item Preprocessing measured signals to improve the robustness of SBL in the presence of significant levels of noise;
\item Promoting parsimony (rather than sparsity alone) of the identified physics model in PDE learning;
\item Quantifying uncertainties of the learned model and its representativeness of the intrinsic physics underlying observations;
\item Further reducing the uncertainties of learned model from SBL via Bayesian Model Updating (BMU);
\item Propagating the learned model uncertainties into predictions of system responses;
\item Investigating system diagnosis and prognosis when system varies from time to time;
\item Multiscale modeling of stochastic dynamical systems through Hierarchical Bayesian Inference (HBI);
\end{enumerate}
The remainder of this paper is structured as follows. Section \ref{Sec:method} establishes the framework of the \textsc{Pe}SBL method for discovering PDEs from observed dynamical systems; Section \ref{Sec:results} presents and discusses the results of discovering governing equations for several canonical systems using the \textsc{Pe}SBL method; Section \ref{uncertainProp} investigates propagating the learned model uncertainties to the prediction of system dynamics; Section \ref{Section:DAP} investigates system diagnosis and prognosis when the essential properties of a certain system vary with time; Section \ref{Section:MSM} presents the framework and results of multiscale modeling and physics discovery through HBI; Section \ref{Section:Conclusion} concludes this study with remarks and recommendations for future work.
\section{Methodology: the \textsc{Pe}SBL method for PDE learning}\label{Sec:method}
This section establishes the framework of the \textsc{Pe}SBL method for learning governing PDEs from observed dynamical systems. The \textsc{Pe}SBL method is developed based on the RVM method of sparse Bayesian modeling or SBL. This method further promotes model parsimony instead of only sparsity when solving the regression problem sequentially through marginal likelihood maximization. Section \ref{Section:RVM} introduces the basics of SBL and the RVM method; Section \ref{Section:PeSBL} presents the framework of \textsc{Pe}SBL, including the preprocessing procedures for minimizing the influence of measurement noise (Section \ref{sec:preprocess}), the sequential \textsc{Pe}SBL algorithm (Section \ref{Sec:PeSBL_Alg}) (with its robustness examined in Section \ref{Sec:robustness}), and Bayesian model updating (BMU) for further enhancing the performance of PDE learning using the measured raw data (Section \ref{Sec:BMU}).
\subsection{SBL and RVM \cite{tipping2003fast}}\label{Section:RVM}
Given the measured dataset $\mathbf{t}=\left(t_{1}, \ldots, t_{N}\right)^{\mathrm{T}}$, it is assumed that $\mathbf{t}$ can be expressed as the sum of the model approximation $\mathbf{y}=\left(y_1, \ldots, y_N\right)^{\mathrm{T}}$ and the residual $\boldsymbol{\epsilon}=\left(\epsilon_{1}, \ldots, \epsilon_{N}\right)^{\mathrm{T}}$, such that
\begin{equation}
\mathbf{t} =\mathbf{y}+\boldsymbol{\epsilon}
\end{equation}
In sparse regression, the model approximation $\mathbf{y}$ is expressed as the product of the library matrix $\mathbf{\Phi}$ and the coefficient vector $\mathbf{w}$, i.e.,
\begin{equation}
\mathbf{y} = \mathbf{\Phi} \mathbf{w}
\end{equation}
in which the library matrix $\mathbf{\Phi}=\left[\phi_{1} \ldots \phi_{M}\right]$ has a dimension of $N\times M$ with its columns comprising the over-complete set of $M$ basis vectors.
In sparse Bayesian modeling, the residuals $\epsilon_n$ ($n=1,2,\ldots,N$) are assumed to follow independent zero-mean Gaussian distributions with a common variance $\sigma^2$, such that
\begin{equation}
\begin{split}
p(\boldsymbol{\epsilon})&= \prod_{n=1}^{N}p(\epsilon_n)\\
&=\prod_{n=1}^{N} \mathcal{N}\left(\epsilon_{n}|0,\sigma^{2}\right)
\end{split}
\end{equation}
Apparently, this assumption leads to a multivariate Gaussian likelihood for $\mathbf{t}$:
\begin{equation}\label{Eq:llh0}
p\left(\mathbf{t}|\mathbf{w}, \sigma^{2}\right)=(2 \pi)^{-N / 2} \sigma^{-N} \exp \left[-\frac{\|\mathbf{t}-\mathbf{y}\|^{2}}{2 \sigma^{2}}\right]
\end{equation}
To promote sparsity, the coefficient parameters $\mathbf{w} = \left[w_1,\ldots,w_M\right]^\mathrm{T}$ are given independent Gaussian priors:
\begin{equation}\label{Eq:prior}
\begin{split}
p(\mathbf{w}|\boldsymbol{\alpha})&=\prod_{m=1}^{M}p(w_m|\alpha_m)\\
&=\prod_{m=1}^{M} \mathcal{N}\left(w_{m}|0,\alpha_{m}^{-1}\right)\\
&=(2 \pi)^{-M / 2} \prod_{m=1}^{M} \alpha_{m}^{1 / 2} \exp \left(-\frac{\alpha_{m} w_{m}^{2}}{2}\right)
\end{split}
\end{equation}
in which $\boldsymbol{\alpha}=\left(\alpha_{1},\ldots,\alpha_{M}\right)^{\mathrm{T}}$. Each element (e.g., $\alpha_m$) individually regularizes the strength of the prior over the corresponding coefficient parameter (e.g., $w_m$).
Following Bayes' rule, combining the prior in Equation \ref{Eq:prior} with the multivariate Gaussian likelihood in Equation \ref{Eq:llh0} yields the posterior distribution of $\mathbf{w}$:
\begin{equation}
p\left(\mathbf{w}|\mathbf{t}, \boldsymbol{\alpha}, \sigma^{2}\right)=\frac{p\left(\mathbf{t}|\mathbf{w}, \sigma^{2}\right) p(\mathbf{w}|\boldsymbol{\alpha})}{p\left(\mathbf{t}|\boldsymbol{\alpha}, \sigma^{2}\right)}
\end{equation}
The posterior is proved to be Gaussian, that is, $p\left(\mathbf{w}|\mathbf{t}, \boldsymbol{\alpha}, \sigma^{2}\right)=\mathcal{N}(\boldsymbol{\mu}, \mathbf{\Sigma})$ with
\begin{equation}\label{Eq:SM}
\mathbf{\Sigma}=\left(\mathbf{A}+\sigma^{-2} \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}\right)^{-1} \quad \boldsymbol{\mu}=\sigma^{-2} \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}
\end{equation}
in which $\mathbf{A} = \mathrm{diag}(\alpha_1,...,\alpha_M)$. A most-probable point estimate of $\boldsymbol{\alpha}$, i.e., $\boldsymbol{\alpha}_\mathrm{MP}$, can be achieved by maximizing the following marginal log-likelihood $\mathcal{L}(\boldsymbol{\alpha})$ with respect to $\boldsymbol{\alpha}$:
\begin{equation} \label{Eq:llh}
\begin{split}
\mathcal{L}(\boldsymbol{\alpha}) &=\log p\left(\mathbf{t}|\boldsymbol{\alpha}, \sigma^{2}\right)=\log \int_{-\infty}^{\infty} p\left(\mathbf{t}|\mathbf{w}, \sigma^{2}\right) p(\mathbf{w}|\boldsymbol{\alpha}) d \mathbf{w} \\ &=-\frac{1}{2}\left[N \log 2 \pi+\log |\mathbf{C}|+\mathbf{t}^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t}\right]
\end{split}
\end{equation}
in which
\begin{equation}
\mathbf{C}=\sigma^{2} \mathbf{I}+\mathbf{\Phi} \mathbf{A}^{-1} \mathbf{\Phi}^{\mathrm{T}}
\end{equation}
Subsequently, $\mathbf{\Sigma}$ and $\boldsymbol{\mu}$ can be estimated correspondingly from Equation \ref{Eq:SM} by setting $\boldsymbol{\alpha} = \boldsymbol{\alpha}_\mathrm{MP}$. When solving this optimization problem sequentially, it can be found that many elements of $\boldsymbol{\alpha}$ go to infinity, leading to the corresponding parameter posteriors being infinitely peaked at zero and thus a sparse regression model. However, sparsity itself may not be sufficient for learning governing equations from dynamical systems, which requires representing the intrinsic physics underlying measured data with the smallest number of the simplest terms in the prescribed library without considerable loss of approximation accuracy. Therefore, the parsimony of the learned model must be guaranteed when defining or solving the regression problem through Bayesian inference.
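For concreteness, the posterior statistics in Equation \ref{Eq:SM} and the marginal log-likelihood in Equation \ref{Eq:llh} can be evaluated directly; the sketch below assumes a given library matrix \texttt{Phi}, target vector \texttt{t}, noise variance \texttt{sigma2}, and precision vector \texttt{alpha}, and is intended only to make the formulas concrete.
\begin{verbatim}
import numpy as np

def posterior_and_evidence(Phi, t, alpha, sigma2):
    """Posterior mean/covariance of w and the marginal log-likelihood."""
    N, M = Phi.shape
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + Phi.T @ Phi / sigma2)   # posterior covariance
    mu = Sigma @ Phi.T @ t / sigma2                   # posterior mean
    C = sigma2 * np.eye(N) + Phi @ np.linalg.inv(A) @ Phi.T
    _, logdetC = np.linalg.slogdet(C)
    L = -0.5 * (N * np.log(2 * np.pi) + logdetC + t @ np.linalg.solve(C, t))
    return mu, Sigma, L
\end{verbatim}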
\subsection{The \textsc{PeSBL} Method} \label{Section:PeSBL}
Figure \ref{Figure:Framework} illustrates the framework of the proposed \textsc{Pe}SBL method for discovering the governing equations using measured data from a dynamical system. This framework starts from the noisy curve on the upper left corner, which denotes the measured signals containing all the information one can directly obtain from the instrumented system. The rest of this section will explain in detail each step of this framework.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{Framework.pdf}
\caption{Framework of the \textsc{Pe}SBL method for discovering PDE(s) from measured data.}
\label{Figure:Framework}
\end{figure}
\subsubsection{Signal Preprocessing and Data Preparation}\label{sec:preprocess}
For preprocessing the measured data, a neural network (NN) model is built following the practices in \cite{xu2019dl,berg2019data}, setting the independent variables (i.e., $t$, $x$, etc.) as inputs and the measured quantity (e.g., $u$) as the output. The measured data are split into training and validation sets, and an early stopping strategy is adopted during model training to prevent the NN model from overfitting the measured noise. A smoothed series of signals is expected from this preprocessing, as demonstrated below using the Burgers equation as an example.
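A minimal version of this denoising step could use an off-the-shelf multilayer perceptron with early stopping; the architecture below is an illustrative choice rather than the exact network used in this study, and \texttt{Xg}, \texttt{Tg}, and \texttt{u\_noisy} are assumed to be the coordinate meshgrids and the noisy measurement array.
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPRegressor

# Coordinates (x, t) -> measured u; early stopping guards against fitting noise.
X = np.column_stack([Xg.ravel(), Tg.ravel()])   # Xg, Tg: meshgrids of x and t
y = u_noisy.ravel()

nn = MLPRegressor(hidden_layer_sizes=(64, 64, 64), activation='tanh',
                  early_stopping=True, validation_fraction=0.1,
                  max_iter=5000, random_state=0)
nn.fit(X, y)
u_denoised = nn.predict(X).reshape(u_noisy.shape)
\end{verbatim}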
The Burgers equation is used to describe the dynamics of a dissipative system. A 1D viscous Burgers equation is used to demonstrate the effects of signal preprocessing in the \textsc{PeSBL} method. It takes the form $u_t = -uu_x+\nu u_{xx}$ with initial condition $u(0,x) = -\sin(\pi x)$ and boundary conditions $u(t,-1) = u(t,1) = 0$. In the Burgers equation, $\nu = \frac{0.01}{\pi}$ denotes the diffusion coefficient. It should be noted that compared with the Burgers equation widely used in the literature \cite{chen2021robust,rudy2017data} (i.e., $u_t = -uu_x+0.1u_{xx}$), this Burgers equation has a much smaller diffusion coefficient and thus is much more difficult to identify correctly. In this study, this challenging equation is selected to examine the effectiveness and robustness of the proposed \textsc{PeSBL} method for PDE learning. Figure \ref{Figure:solBurgers0} (a) shows the simulated data from this dissipative system within the range $t\in [0,1]$ and $x\in [-1,1]$.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{solBurgers0.eps}
\caption{Simulated data for the 1D dissipative system characterized by the Burgers equation $u_t = -uu_x+\frac{0.01}{\pi} u_{xx}$ with (a) 0\% noise, (b) 10\% noise, and (c) 10\% noise after NN denoising.}
\label{Figure:solBurgers0}
\end{figure}
To demonstrate the effect of the NN denoising step in the \textsc{PeSBL} method, 10\% white Gaussian noise is added to the numerical solution of the Burgers equation, which significantly varies the values of the solution (as shown in Figure \ref{Figure:solBurgers0} (b)) and thus poses a challenge for calculating the numerical derivatives in the following step. In this study, the noise level is quantified by the percentage of the standard deviation of the measured variable. For example, if 10\% noise is added to $u$, then the outcome is $u_n = u+10\%*\mathrm{std}(u)*\mathrm{randn}(\mathrm{size}(u))$ where $\mathrm{std}(\cdot)$ evaluates the standard deviation, $\mathrm{randn}(\cdot)$ generates white Gaussian noise of the specified dimension, and $\mathrm{size}(\cdot)$ measures the dimension. Without much prior knowledge about the noise characteristics, NN modeling is applied to denoise the noisy measurements, and the processed data is visualized in Figure \ref{Figure:solBurgers0} (c). Comparing the three plots in Figure \ref{Figure:solBurgers0}, one can observe that NN denoising largely reduces the noise level in the collected data (from 10\% to 2\%) and makes the solution curve much smoother than the noisy measurement. Hence, it has the potential to improve the accuracy of subsequent numerical differentiation.
The denoised data will be subsequently used to calculate the numerical derivatives (i.e., $\mathbf{U}_t$, $\mathbf{U}_x$, $\mathbf{U}_{xx}$, etc.) and then construct the library matrix $\mathbf{\Theta}(\mathbf{U})$ for sparse regression. Numerical methods such as the finite difference method (FDM) and polynomial interpolation are used to calculate the temporal and spatial derivatives. For the system characterized by the Burgers equation, the library is built with polynomials of $u$ up to the third power, spatial derivatives up to the $3^\mathrm{rd}$ order, and their products. As a result, it contains 16 terms in total, i.e., $\mathbf{\Theta}(\mathbf{U}) = \{1, \mathbf{U}, \mathbf{U}^2, \mathbf{U}^3, \mathbf{U}_x, \mathbf{UU}_x,..., \mathbf{U}^3\mathbf{U}_{xxx}\}$.
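A sketch of this library construction for the Burgers data is given below; it assumes the (denoised) solution array \texttt{u} of shape $(n_x, n_t)$ on a uniform grid with spacings \texttt{dx} and \texttt{dt}, and uses central finite differences via \texttt{numpy.gradient}.
\begin{verbatim}
import numpy as np

# 'u' is assumed to be the (denoised) solution of shape (n_x, n_t).
u_t   = np.gradient(u, dt, axis=1)
u_x   = np.gradient(u, dx, axis=0)
u_xx  = np.gradient(u_x, dx, axis=0)
u_xxx = np.gradient(u_xx, dx, axis=0)

cols, names = [], []
for der, dname in [(np.ones_like(u), '1'), (u_x, 'u_x'),
                   (u_xx, 'u_xx'), (u_xxx, 'u_xxx')]:
    for p in range(4):                       # powers u^0 ... u^3
        cols.append((u**p * der).ravel())
        names.append(f'u^{p}*{dname}')

Theta = np.column_stack(cols)                # N x 16 library matrix
Ut = u_t.ravel()                             # discretized left-hand side
\end{verbatim}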
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{effectFFT.eps}
\caption{Color maps of relative errors in $u_{xx}$ of the Burgers equation caused by adding 50\% noise.
(a) $\mathrm{log}(\left|\frac{u^n_{xx}-u^0_{xx}}{u^0_{xx}}\right|)$;
(b) $\mathrm{log}(\left|\frac{\widetilde{u}^n_{xx}-\widetilde{u}^0_{xx}}{\widetilde{u}^0_{xx}}\right|)$;
(c) $\mathrm{log}(\left|\frac{u^{nn}_{xx}-u^0_{xx}}{u^0_{xx}}\right|)$;
(d) $\mathrm{log}(\left|\frac{\widetilde{u}^{nn}_{xx}-\widetilde{u}^0_{xx}}{\widetilde{u}^0_{xx}}\right|)$. $u^0_{xx}$, $u^n_{xx}$, and $u^{nn}_{xx}$ are the $2^\mathrm{nd}$ order derivatives calculated using the clean data, noisy data, and NN-denoised data, respectively; the tilde $\widetilde{(\cdot)}$ denotes the result of 2D FFT; $k_x$ and $k_t$ represent the corresponding coordinates in the frequency domain.}
\label{Figure:effectFFT}
\end{figure}
With $\mathbf{U}_t$ and $\mathbf{\Theta}(\mathbf{U})$ established, to further reduce the influence of noise in sparse regression, the fast Fourier transform (FFT) is applied to transform $\mathbf{U}_t$ and $\mathbf{\Theta}(\mathbf{U})$ to their frequency-domain counterparts $\widetilde{\mathbf{U}}_t$ and $\mathbf{\Theta}(\widetilde{\mathbf{U}})$, respectively. Following the FFT, a frequency cutoff is implemented to preserve only the low-frequency components, which are expected to be less susceptible to noise. Moreover, this step converts the regression problem from the spatial-temporal domain to the frequency domain, which does not change the form of the learned PDEs \cite{cao2020machine,zhang2021robust}.
Taking the Burgers equation above as an example, Figures \ref{Figure:effectFFT} (a) to (d) compare the relative errors in the spatial-temporal domain and frequency domain. Figures \ref{Figure:effectFFT} (a) and (b) show the difference between the polluted data with 50\% noise and the simulated clean data. It can be observed that after taking 2D FFT, the low-frequency components are less affected by the added noise. In addition, Figures \ref{Figure:effectFFT} (c) and (d) show that NN denoising can largely reduce not only the relative error in the spatial-temporal domain, as can be predicted from Figure \ref{Figure:solBurgers0}, but also the relative error in the frequency domain. Therefore, it can be expected that the performance of PDE learning can be considerably improved by implementing these preprocessing procedures. This improvement will be demonstrated through comparison in Section \ref{Sec:results}.
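For illustration, the sketch below shows one way this transformation could be assembled: each column of $\mathbf{\Theta}(\mathbf{U})$ and the vector $\mathbf{U}_t$ are reshaped to the grid, transformed by a 2D FFT, masked to retain only low frequencies, and stacked into a real-valued regression problem. The cutoff fraction, the masking rule, and the real/imaginary stacking are illustrative assumptions rather than the exact implementation used in this study.
\begin{verbatim}
import numpy as np

def to_frequency_domain(u_t, theta, shape, keep=0.2):
    # u_t: flattened U_t (length nt*nx); theta: library matrix (nt*nx, M);
    # shape = (nt, nx); keep: assumed fraction of the frequency range retained.
    nt, nx = shape
    kt = np.abs(np.fft.fftfreq(nt))
    kx = np.abs(np.fft.fftfreq(nx))
    mask = (kt[:, None] <= keep * kt.max()) & (kx[None, :] <= keep * kx.max())

    def transform(col):
        spec = np.fft.fft2(col.reshape(nt, nx))[mask]
        # Stack real and imaginary parts so the regression stays real-valued.
        return np.concatenate([spec.real, spec.imag])

    u_t_f = transform(u_t)
    theta_f = np.column_stack([transform(theta[:, j]) for j in range(theta.shape[1])])
    return u_t_f, theta_f
\end{verbatim}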
\subsubsection{Sparse Regression Using the Sequential \textsc{PeSBL} Algorithm}\label{Sec:PeSBL_Alg}
With $\widetilde{\mathbf{U}}_t$ and $\mathbf{\Theta}(\widetilde{\mathbf{U}})$ from FFT with frequency cutoff, sparse regression is conducted using the sequential \textsc{PeSBL} algorithm proposed in this study. The details of this algorithm are elaborated in Algorithm \ref{alg1}. Unlike the sequential SBL algorithm in \cite{tipping2003fast}, this algorithm promotes model parsimony instead of sparsity alone. This is achieved by preventing the algorithm from selecting/adding complex terms for only a marginal increase of the log-likelihood (i.e., $\mathcal{L}$ in Equation \ref{Eq:llh}). To this end, when establishing the library $\mathbf{\Theta}(\mathbf{U})$ for sparse regression, all terms in the library are arranged in increasing order of complexity with respect to the power of polynomials and the order of spatial derivatives. For example, for the Burgers equation, $\mathbf{\Theta}(\mathbf{U}) = \{1, \mathbf{U}, \mathbf{U}^2, \mathbf{U}^3, \mathbf{U}_x, \mathbf{UU}_x,..., \mathbf{U}^3\mathbf{U}_{xxx}\}$. In this way, the index of a term (or a function of it) can be taken as an indicator of that term's complexity, which is inspired by the concept of Minimum Description Length (MDL) that favors short codes for describing objects \cite{von2011statistical}. This study uses the square of a term's index in the library to evaluate its complexity, which proves effective for all the investigated dynamical systems.
To incorporate model complexity in the evaluation of regression quality in each iteration of this sequential algorithm, the Akaike Information Criterion (AIC) for model selection \cite{wagenmakers2004aic} is modified as $\mathcal{\widetilde{AIC}}$ and defined as follows:
\begin{equation}
\mathcal{\widetilde{AIC}} = 2\frac{\Sigma i_s^{\scriptscriptstyle{2}}}{M}+2\mathrm{len}(i_s)-2\mathcal{L}
\end{equation}
in which $i_s$ denotes the list of indices of currently selected terms from the library, $\mathrm{len}(i_s)$ evaluates its length, and $M$ denotes the number of terms in the library, as in Section \ref{Section:RVM}. Compared with the standard AIC ($AIC = 2\mathrm{len}(i_s)-2\mathcal{L}$), $\mathcal{\widetilde{AIC}}$ penalizes the complexity of selected model terms in addition to the complexity from the number of terms in the current model. In addition, in the sequential \textsc{PeSBL} algorithm, the relative increase of log-likelihood is evaluated in each iteration. If it goes below a certain threshold (i.e., $tol_2$ in Algorithm \ref{alg1}), the operation of adding new terms will be obviated in this iteration (Line 34 in Algorithm \ref{alg1}). To the best of the authors' knowledge, this is the first time that parsimony is enhanced in regression problems to pursue the most representative model of the intrinsic physics underlying observed data. More details about this algorithm can be found in Algorithm \ref{alg1}.
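For clarity, the modified criterion can be evaluated with the following short sketch, assuming the library terms are indexed (1-based) in the increasing order of complexity described above.
\begin{verbatim}
def aic_tilde(i_s, M, log_likelihood):
    # Modified AIC: 2*sum(i^2)/M + 2*len(i_s) - 2*L, with i_s the 1-based indices
    # of the currently selected library terms and M the total number of terms.
    complexity = 2.0 * sum(i**2 for i in i_s) / M + 2.0 * len(i_s)
    return complexity - 2.0 * log_likelihood

# A candidate model made of low-index (simple) terms incurs a smaller complexity
# penalty than one containing high-index (complex) terms for a comparable
# likelihood gain.
\end{verbatim}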
Given the measured data from a certain dynamical system, with $\widetilde{\mathbf{U}}_t$ and $\mathbf{\Theta}(\widetilde{\mathbf{U}})$ taken as the inputs and the parameters (i.e., $maxIters$, $tol_1$, and $tol_2$) set, the sequential \textsc{PeSBL} algorithm yields the library $\mathbf{\Theta}^\mathrm{0}$ composed of the remaining contributive terms and the corresponding coefficient vector $\boldsymbol{\xi}^0$, with each element following a Gaussian distribution, i.e., $\xi_i^0 \sim \mathcal{N}\left(\mu_{\xi_i^0},\sigma_{\xi_i^0}^2\right)$. These outputs can be used to formulate a stochastic PDE governing the observed system, i.e., $\boldsymbol{u}_\mathrm{t} = \boldsymbol{\Theta}^0(\boldsymbol{u})\boldsymbol{\xi}^0$. In Section \ref{Sec:results}, it will be demonstrated that the correct model forms can be successfully identified for several canonical dynamical systems using the sequential \textsc{PeSBL} algorithm. For example, with the clean data (0\% noise) measured from the dissipative system characterized by the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi} u_{xx}$), the sequential \textsc{PeSBL} algorithm yields $\boldsymbol{\Theta}^0 = \left\lbrace uu_x,u_{xx}\right\rbrace$ with $\xi_1^0 \sim \mathcal{N}(-1.02,1.06)$ and $\xi_2^0 \sim \mathcal{N}(\frac{0.022}{\pi},5.19\times 10^{-5})$. Hence, the resulting PDE is $u_t = -1.02(\pm 1.03)uu_x+\frac{0.022}{\pi}(\pm 7.20\times10^{-3})u_{xx}$. In this study, the coefficients of PDE terms are presented in the form $\mu_{\xi_i}(\pm \sigma_{\xi_i})$, in which $\mu_{\xi_i}$ denotes the mean of a coefficient $\xi_i$ and $\sigma_{\xi_i}$ denotes its standard deviation. This shows that the learned model has limited coefficient accuracy and a significant level of uncertainty, although the model form is identical to the truth. The lack of model accuracy and the considerable model uncertainty may result from the similarity of terms in the library (such as $uu_x$ and $u^2u_x$ for the Burgers equation) and would lead to even larger error and uncertainty in system predictions. Therefore, Section \ref{Sec:BMU} investigates further improving the model accuracy and confidence level through Bayesian inference.
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\begin{algorithm}
\scriptsize
\caption{\scriptsize \textsc{Pe}SBL algorithm: $[\mathbf{\Theta}^\mathrm{0},\boldsymbol{\mu}_{\xi^0},\boldsymbol{\alpha}_{\xi^0}] = \textsc{PeSBL}(\mathbf{\Theta},\widetilde{\mathbf{U}}_t,maxIters=1000,tol_1=10^{-4},tol_2=10^{-2})$}\label{alg1}
\begin{algorithmic}[1]
\State \textbf{Input}: library matrix $\mathbf{\Theta}$ (with the size $N\times M$), discretized temporal derivative $\widetilde{\mathbf{U}}_t$ (with the size $N\times 1$), the maximum number of iterations $maxIters$, tolerance $tol_1$ of the $\Delta\mathcal{L}$ for stopping the iteration, and tolerance ratio $tol_2$ of the $\Delta\mathcal{L}$ for stopping adding new terms. $\Delta\mathcal{L}$ is the change of log-likelihood ($\mathcal{L}$).
\State \textbf{Output}: parsimonious library $\mathbf{\Theta}^\mathrm{0}$ containing the contributive terms and the corresponding coefficient vector $\boldsymbol{\xi}^0$ with mean $\boldsymbol{\mu}_{\xi^0}$ and precision $\boldsymbol{\alpha}_{\xi^0}$.
\State Normalize $\mathbf{\Theta}$ and $\widetilde{\mathbf{U}}_t$:
$\mathbf{\Theta} = \mathbf{\Theta}/\norm{\mathbf{\Theta}}_2$,
$\widetilde{\mathbf{U}}_t = \widetilde{\mathbf{U}}_t/||\widetilde{\mathbf{U}}_t||_2$.
\State Let $\mathbf{t} = \widetilde{\mathbf{U}}_t$ and $\mathbf{\Phi} = \mathbf{\Theta}$.
\State Initialize $\mathcal{L}_{rec} = \emptyset$. \textcolor{gray}{\texttt{\#} $\mathcal{L}_{rec}$ denotes the record of log-likelihood in each iteration.}
\State Initialize $\sigma^2 = \mathrm{var}(\mathbf{t})$, $\alpha = \mathrm{Inf}(M,1)$, and $i_{s} = \emptyset$. \textcolor{gray}{\texttt{\#} var($\cdot$) denotes the variance of a given vector; Inf($\cdot$) denotes a matrix of all infinite values; $i_{s}$ denotes the collection of selected term indices during the following iterations.}
\State Initialize $S = [S_1,S_2,...,S_M]$ \& $Q = [Q_1,Q_2,...,Q_M]$ with
$S_m = \phi_m^\mathrm{T}\mathbf{C}^{-1}\phi_m$ \& $Q_m = \phi_m^\mathrm{T}\mathbf{C}^{-1}\mathbf{t}$. \textcolor{gray}{\texttt{\#} Equation (22) in \cite{tipping2003fast} }
\For{$i = 1,2,...,maxIters$}
{
\State $s = S$; $q = Q$;
\State $s(i_{s}) = \frac{\alpha(i_{s}).*S(i_{s})}{\alpha(i_{s})-S(i_{s})}$; $q(i_{s}) = \frac{\alpha(i_{s}).*Q(i_{s})}{\alpha(i_{s})-S(i_{s})}$; \textcolor{gray}{\texttt{\#} Equation (23) in \cite{tipping2003fast}; $.*$ denotes elementwise multiplication. }
\State $\theta = q^2-s$; \textcolor{gray}{\texttt{\#} Step 5 of the Sequential SBL Algorithm in Section 4 in \cite{tipping2003fast}}
\State $i_{a} = (\theta>0)$; \textcolor{gray}{\texttt{\#} active indices in the current iteration, see Steps 6 \& 7 in \cite{tipping2003fast} }
\State $i_{r} = i_{s}\cap i_{a}$; \textcolor{gray}{\texttt{\#} indices of terms to be re-estimated, see Step 6 in \cite{tipping2003fast} }
\State $i_{\scriptscriptstyle{+}} = i_{a}\setminus i_{s}$; \textcolor{gray}{\texttt{\#} indices of terms to be added, see Step 7 in \cite{tipping2003fast} }
\State $i_{\scriptscriptstyle{-}} = i_{s}\setminus i_{a}$; \textcolor{gray}{\texttt{\#} indices of terms to be deleted, see Step 8 in \cite{tipping2003fast} }
\State Initialize $\Delta\mathcal{L} = -\mathrm{Inf}(M,1)$; \textcolor{gray}{\texttt{\#} potential change of log-likelihood}
\State Initialize $\Delta\mathcal{C} = \mathrm{O}(M,1)$; \textcolor{gray}{\texttt{\#} potential change of model complexity $\mathcal{C}$; $\mathrm{O}$ denotes a matrix of all zero values.}
\State Initialize $\mathcal{C}_1 = \mathrm{O}(M,1)$; \textcolor{gray}{\texttt{\#} potential changed model complexity}
\State $\mathcal{C}_0 = 2\Sigma i_s^{\scriptscriptstyle{2}}/M+2\mathrm{len}(i_s)$; \textcolor{gray}{\texttt{\#} initial model complexity; $\mathrm{len}(\cdot)$ denotes the length of a vector.}
\If{$i_{r}\neq\emptyset$}
\State $\widetilde{\alpha} = \frac{s(i_{r})}{\theta(i_{r})}$; \textcolor{gray}{\texttt{\#} Equation (20) in \cite{tipping2003fast} }
\State $d\alpha = \widetilde{\alpha}^{-1}-\alpha(i_{r})^{-1}$;
\State$2\Delta\mathcal{L}(i_{r}) = \frac{Q(i_{r})^2}{S(i_{r})+ {d\alpha}^{-1}}-\mathrm{log}\left[1 + S(i_{r})d\alpha \right]$;
\textcolor{gray}{\texttt{\#} Equation (32) in \cite{tipping2003fast} }
\EndIf
\If{$i_{\scriptscriptstyle{+}}\neq\emptyset$}
\State$2\Delta\mathcal{L}(i_{\scriptscriptstyle{+}}) = \frac{Q(i_{\scriptscriptstyle{+}})^2-S(i_{\scriptscriptstyle{+}})}{S(i_{\scriptscriptstyle{+}})}+\mathrm{log}\frac{S(i_{\scriptscriptstyle{+}})}{Q(i_{\scriptscriptstyle{+}})^2}$; \textcolor{gray}{\texttt{\#} Equation (27) in \cite{tipping2003fast} }
\State $\mathcal{C}_1(i_{\scriptscriptstyle{+}}) = 2(\Sigma i_s^{\scriptscriptstyle{2}}+i_{\scriptscriptstyle{+}}^{\scriptscriptstyle{2}})/M+2(\mathrm{len}(i_s)+1)$;
\State $\Delta\mathcal{C}(i_{\scriptscriptstyle{+}}) = \mathcal{C}_1(i_{\scriptscriptstyle{+}})-\mathcal{C}_0$;
\EndIf
\If{$i_{\scriptscriptstyle{-}}\neq\emptyset$}
\State$2\Delta\mathcal{L}(i_{\scriptscriptstyle{-}}) = \frac{Q(i_{\scriptscriptstyle{-}})^2}{S(i_{\scriptscriptstyle{-}})-\alpha(i_{\scriptscriptstyle{-}})}-\mathrm{log}\left(1-\frac{S(i_{\scriptscriptstyle{-}})}{\alpha(i_{\scriptscriptstyle{-}})}\right)$; \textcolor{gray}{\texttt{\#} Equation (37) in \cite{tipping2003fast} }
\State $\mathcal{C}_1(i_{\scriptscriptstyle{-}}) = 2(\Sigma i_s^{\scriptscriptstyle{2}}-i_{\scriptscriptstyle{-}}^{\scriptscriptstyle{2}})/M+2(\mathrm{len}(i_s)-1)$;
\State $\Delta\mathcal{C}(i_{\scriptscriptstyle{-}}) = \mathcal{C}_1(i_{\scriptscriptstyle{-}})-\mathcal{C}_0$;
\EndIf
\State $\Delta \mathcal{\widetilde{AIC}} = \Delta\mathcal{C}-2\Delta\mathcal{L}$; \textcolor{gray}{\texttt{\#} potential change of $\mathcal{\widetilde{AIC}}$}
\State $\left[\Delta\mathcal{\widetilde{AIC}}_{m},i_{\scriptscriptstyle{m}}\right] = min\left(\Delta\mathcal{\widetilde{AIC}}\right)$; \textcolor{gray}{\texttt{\#} find the operation yielding the smallest $\Delta\mathcal{\widetilde{AIC}}$}
\textcolor{gray}{\texttt{\#} If the relative increase of log-likelihood is smaller than $tol_2$, then obviate adding new terms in this iteration.}
\If{$\Delta\mathcal{L}(i_{\scriptscriptstyle{m}})<tol_2*\mathcal{L}(i_1) \;\&\; i_{\scriptscriptstyle{m}}\in i_{\scriptscriptstyle{+}}$}
$\left[\Delta\mathcal{\widetilde{AIC}}_{m},i_{\scriptscriptstyle{m}}\right] = min\left(\Delta\mathcal{\widetilde{AIC}}(i_{\scriptscriptstyle{s}})\right)$;
\EndIf
\If{$\Delta\mathcal{L}(i_{\scriptscriptstyle{m}})<tol_1$} \textbf{break};
\EndIf
\State $\Delta\mathcal{L}_{m} = \Delta\mathcal{L}(i_{\scriptscriptstyle{m}})$; \textcolor{gray}{\texttt{\#} change of log-likelihood corresponding to $\Delta\mathcal{\widetilde{AIC}}_{m}$}
\State $\mathcal{L}_{rec}(i) = \mathcal{L}_{rec}(i-1)+\Delta\mathcal{L}_{m}$;
\textcolor{gray}{\texttt{\#} Update all variables according to the operation type.}
\Switch{$i_{m}$}
\Case{$\in i_{r}$}
\State Update $\mathbf{\Sigma}$, $\boldsymbol{\mu}$, $S$, and $Q$ by Equations (33) to (36) in \cite{tipping2003fast};
\State Update $\alpha$ by $\alpha(i_{m}) = \frac{s(i_{m})}{\theta(i_{m})}$; \textcolor{gray}{\texttt{\#} Equation (20) in \cite{tipping2003fast} }
\EndCase
\Case{$\in i_{\scriptscriptstyle{+}}$}
\State Update $\mathbf{\Sigma}$, $\boldsymbol{\mu}$, $S$, and $Q$ by Equations (28) to (31) in \cite{tipping2003fast};
\State Update $\alpha$ by $\alpha(i_{m}) = \frac{s(i_{m})}{\theta(i_{m})}$; \textcolor{gray}{\texttt{\#} Equation (20) in \cite{tipping2003fast} }
\State Update $i_s$ by $i_s = i_s \cup i_m$;
\EndCase
\Case{$\in i_{\scriptscriptstyle{-}}$}
\State Update $\mathbf{\Sigma}$, $\boldsymbol{\mu}$, $S$, and $Q$ by Equations (38) to (41) in \cite{tipping2003fast};
\State Update $\alpha$ by $\alpha(i_{m}) = \mathrm{Inf}$;
\State Update $i_s$ by $i_s = i_s \setminus i_m$;
\EndCase
\EndSwitch
}
\EndFor
\State Outputs: $\mathbf{\Theta}^\mathrm{0} = \mathbf{\Theta}(:,i_s)$; $\boldsymbol{\mu}_{\xi^0} = \boldsymbol{\mu}$; $\boldsymbol{\alpha}_{\xi^0} = \alpha(i_s)$.
\end{algorithmic}
\end{algorithm}
\subsubsection{Robustness of Algorithm \ref{alg1} Regarding Variations of Parameters $tol_1$ and $tol_2$}\label{Sec:robustness}
Many existing PDE learning methods in the literature, whether deterministic \cite{rudy2017data,both2021deepmod,chen2020deep} or probabilistic \cite{zhang2018robust,chen2021robust}, lack robustness in the learned model forms, because they intermediately or finally determine model terms through hard thresholding and thus easily fall into the dilemma of hyperparameter tuning. Hence, it is essential to examine the robustness of the proposed \textsc{PeSBL} method with respect to the variation of its hyperparameters (i.e., $tol_1$ \& $tol_2$ in Algorithm \ref{alg1}) in addition to its robustness with respect to measurement noise. Without loss of generality, the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) with 20\% measurement noise is used as an example in this analysis. Considering the challenge of learning this Burgers equation with such a small diffusion coefficient, the conclusions about the method's robustness should be convincing and should generalize well to other system equations.
In Algorithm \ref{alg1}, $tol_1$ is the convergence criterion of the iteration regarding the increase of log-likelihood $\Delta\mathcal{L}$. When $\Delta\mathcal{L}$ goes below $tol_1$, the algorithm stops updating the model form by re-estimating existing terms, adding new terms, or deleting unimportant terms, and prepares the outputs. In this analysis, the value of $tol_1$ is varied from $10^{-6}$ to $10^{-1}$ to test its influence on the algorithm outcome. It is found that the correct terms of the Burgers equation (i.e., $uu_x$ and $u_{xx}$) can always be extracted without any redundant terms added. When a large value is assigned to $tol_1$, for example $10^{-1}$, the ``re-estimate'' iterations in Algorithm \ref{alg1} are reduced, which slightly affects the coefficient means and variances. Unlike $tol_1$, $tol_2$ prevents the algorithm from adding terms that would not considerably increase the log-likelihood but would increase the model complexity. In this analysis, $tol_2$ is varied over the same range as $tol_1$. It is found that the condition in Line 34 of Algorithm \ref{alg1} is never triggered for the Burgers equation, so the variation of $tol_2$ does not affect the results of PDE learning. This means that the model complexity penalty defined in Algorithm \ref{alg1} is sufficient to guarantee the proper parsimony of the learned models. The effects of $tol_2$ will be further investigated for other systems in future studies. Therefore, compared with the current methods in the literature, the proposed \textsc{PeSBL} method is more robust in learning the correct model forms. This merit is noteworthy especially when facing a novel system for which very limited prior knowledge is available.
\subsubsection{Bayesian Model Updating (BMU) with the Raw Data} \label{Sec:BMU}
Taking the Gaussian distributions of model parameters obtained from sparse regression as priors, their accuracy and confidence level can be further improved through BMU when more data/information become available. In this section, the raw measured data are reused in BMU, considering the possible loss of information during signal preprocessing and numerical differentiation.
For the dynamical systems investigated in this study, the evidence for BMU is the system measurement $\widetilde{u}$. With the model form correctly identified from sparse regression, the system response $u(\boldsymbol{\xi})$ for a certain model parameter set $\boldsymbol{\xi}$ can be estimated by numerically solving the associated PDE(s) $\boldsymbol{u}_\mathrm{t} = \boldsymbol{\Theta}^0(\boldsymbol{u})\boldsymbol{\xi}$ with boundary/initial conditions given or extracted from measurements. The error function (i.e., the estimation error) is then defined as the discrepancy between the estimation $u(\boldsymbol{\xi})$ and the measurement $\widetilde{u}$, normalized by the $\ell_2$ norm of $\widetilde{u}$, such that
\begin{equation}\label{Eq:err}
\boldsymbol{e} = \frac{\widetilde{u}-u(\boldsymbol{\xi})}{ \lVert \widetilde{u} \rVert}
\end{equation}
For convenience in BMU, $\boldsymbol{e}$ is reshaped into a column vector. This study assumes that the elements of $\boldsymbol{e}$ (i.e., $e_i$, $i=1,2,..., N_e$, where $N_e$ is the number of error terms) are independent and follow an identical Gaussian distribution, following the principle of maximum information entropy. That is
\begin{equation}\label{Eq:err2}
\mathrm{i.i.d.} \quad e_i \sim \mathcal{N}(\mu_e,\sigma_e^2)
\end{equation}
which provides the likelihood function of the measurement $\widetilde{u}$. Given a certain value of $\boldsymbol{\xi}$, the difference between the measurement $\widetilde{u}$ and the prediction $u(\boldsymbol{\xi})$ follows a multivariate Gaussian distribution. That is,
\begin{equation}\label{Eq:llh}
\begin{split}
p(\widetilde{u}\big\lvert \boldsymbol{\xi},\mu_e,\sigma_e^2) &= p(\boldsymbol{e}\big\lvert \mu_e,\sigma_e^2) \\
&= \prod_{i=1}^{N_e}\mathcal{N}(e_i\big\lvert \mu_e,\sigma_e^2)
\end{split}
\end{equation}
In BMU, the priors of $\xi_i$ are set as their posteriors obtained from sparse regression in Section \ref{Sec:PeSBL_Alg}. The mean of errors $\mu_e$ is assumed to follow a zero-mean Gaussian distribution with its standard deviation equal to 1/3, such that most error values lie in the range $(-1,1)$. A non-informative inverse-gamma prior is used for $\sigma_e^2$. In summary,
\begin{equation}\label{Eq:prior1}
\xi_i \sim \mathcal{N}\left(\mu_{\xi_i^0},\sigma_{\xi_i^0}^2\right)
\end{equation}
\begin{equation}\label{Eq:prior2}
\mu_e \sim \mathcal{N}(0,\sigma_{\mu_e}^2)
\end{equation}
\begin{equation}\label{Eq:prior3}
\sigma_e^2 \sim Inv\text{-}Gamma(\alpha_e,\beta_e)
\end{equation}
in which $\sigma_{\mu_e}^2 = \left(\frac{1}{3}\right)^2 = \frac{1}{9}$, $\alpha_e=1$ is the shape parameter of the inverse-gamma distribution, and $\beta_e=2$ is its scale parameter. With the likelihood function and priors specified, according to Bayes' theorem, the posterior probability density function (PDF) of the parameter set (including $\boldsymbol{\xi} = \{\xi_1,\xi_2,...,\xi_M\}$, $\mu_e$, and $\sigma_e$) is proportional to the product of the likelihood function and the prior PDFs. That is:
\begin{equation}
p\left(\boldsymbol{\xi},\mu_e,\sigma_e^2 \big\lvert \widetilde{u}\right)
\propto
p(\widetilde{u}\big\lvert \boldsymbol{\xi},\mu_e,\sigma_e^2)
p(\boldsymbol{\xi})p(\mu_e)p(\sigma_e^2)
\end{equation}
Plugging in the expressions of priors in Equations \ref{Eq:prior1} to \ref{Eq:prior3} and likelihood function in Equation \ref{Eq:llh}, the posterior PDF becomes
\begin{equation}\label{Eq:postPDF1}
\begin{split}
&p\left(\boldsymbol{\xi},\mu_e,\sigma_e^2 \big\lvert \widetilde{u}\right)
\propto\\
&\left\lbrace\prod_{i=1}^{N_e}\frac{1}{\sqrt{\sigma_e^2}}\mathrm{exp}\left[-\frac{1}{2}\left(\frac{e_i-\mu_e}{\sigma_e}\right)^2\right]\right\rbrace
\left\lbrace\prod_{i=1}^{M}\frac{1}{\sqrt{\sigma_{\xi_i^0}^2}}\mathrm{exp}\left[-\frac{1}{2}\left(\frac{\xi_i-\mu_{\xi_i^0}}{\sigma_{\xi_i^0}}\right)^2\right]\right\rbrace\\
&\mathrm{exp}\left[-\frac{1}{2}\left(\frac{\mu_e}{\sigma_{\mu_e}}\right)^2\right]
(\sigma_e^2)^{-\alpha_e-1}\mathrm{exp}\left(-\frac{\beta_e}{\sigma_e^2}\right)\\
&\propto (\sigma_e^2)^{-\frac{N_e}{2}-\alpha_e-1}
\mathrm{exp}\left[-\frac{1}{2}\sum_{i=1}^{N_e}\left(\frac{e_i-\mu_e}{\sigma_e}\right)^2
-\frac{1}{2}\sum_{i=1}^M\left(\frac{\xi_i-\mu_{\xi_i^0}}{\sigma_{\xi_i^0}}\right)^2
-\frac{1}{2}\left(\frac{\mu_e}{\sigma_{\mu_e}}\right)^2
-\frac{\beta_e}{\sigma_e^2}\right]
\end{split}
\end{equation}
The posterior PDF in Equation \ref{Eq:postPDF1} is known to be difficult to solve analytically \cite{rubinstein2016simulation}. Among the numerical methods for approximating posterior PDFs (e.g., variational inference, Markov Chain Monte Carlo (MCMC), and expectation-maximization (EM)), Gibbs sampling, a branch of MCMC algorithms, has proved efficient in approximating multivariate probability distributions, especially when direct sampling from the PDF is challenging. Gibbs sampling requires deriving the full conditional distribution of each parameter from the joint posterior PDF. For certain parameters, the conditional posterior distributions are in standard forms, such as the normal distribution; in this case, these parameters can be sampled directly in each iteration. Otherwise, the Metropolis-Hastings algorithm can be used for sampling by proposing candidate parameter values and accepting or rejecting them according to certain criteria. If the Metropolis-Hastings algorithm is used for sampling certain parameter(s), the overall algorithm is usually called Metropolis-within-Gibbs \cite{rubinstein2016simulation}.
The posterior full conditional distribution of a parameter (or a parameter set) is the conditional distribution of that parameter given current values of all other parameters. For a certain parameter $v$, the rest of parameters can be denoted as $V_{-v}$, then the full conditional distribution $P(v|V_{-v})$ has the form \cite{gilks1995markov}:
\begin{equation}\label{Eq:conditional}
\begin{split}
P(v|V_{-v})& \propto P(v,V_{-v})\\
& = P(v|\text{parents}[v]) \prod_{w \in \text{children}[v]} P(w|\text{parents}[w])
\end{split}
\end{equation}
in which the parents of a parameter are the parameters contributing to its generation, and the children of a parameter are the parameters resulting from it. Following this principle, the full conditional distributions can be derived as follows:
\begin{equation} \label{eq:cod_pos1}
p\left(\xi_i\big\vert\cdot\right)
\propto
\mathrm{exp}\left[-\frac{1}{2}\sum_{j=1}^{N_e}\left(\frac{e_j-\mu_e}{\sigma_e}\right)^2
-\frac{1}{2}\left(\frac{\xi_i-\mu_{\xi_i^0}}{\sigma_{\xi_i^0}}\right)^2\right]
\end{equation}
\begin{equation} \label{eq:cod_pos2}
\begin{split}
p\left(\mu_e\big\vert\cdot\right)
&\propto
\mathrm{exp}\left[-\frac{1}{2}\sum_{i=1}^{N_e}\left(\frac{e_i-\mu_e}{\sigma_e}\right)^2
-\frac{1}{2}\left(\frac{\mu_e}{\sigma_{\mu_e}}\right)^2\right]\\
&\sim\mathcal{N}\left(\frac{\sum_{i=1}^{N_e}e_i}{N_e+\frac{\sigma_e^2}{\sigma_{\mu_e}^2}},\frac{1}{\frac{N_e}{\sigma_e^2}+\frac{1}{\sigma_{\mu_e}^2}}\right)
\end{split}
\end{equation}
\begin{equation}
\begin{split}
p\left(\sigma_e^2\big\vert\cdot\right)
&\propto
(\sigma_e^2)^{-\frac{N_e}{2}-\alpha_e-1}
\mathrm{exp}\left[-\frac{1}{2}\sum_{i=1}^{N_e}\left(\frac{e_i-\mu_e}{\sigma_e}\right)^2
-\frac{\beta_e}{\sigma_e^2}\right]\\
&\sim Inv\text{-}Gamma\left(\frac{N_e}{2}+\alpha_e,\frac{1}{2}\sum_{i=1}^{N_e}(e_i-\mu_e)^2+\beta_e\right)\\
\end{split}
\end{equation}
A Metropolis-within-Gibbs scheme is necessary for sampling the posterior PDFs of the parameter set, since the conditional posterior distribution of $\xi_i$ is analytically intractable while those of $\mu_e$ and $\sigma_e^2$ are both in standard form. The convergence of the Markov chains can be determined as follows \cite{cowles1996markov}: 1) run several chains independently from different random starting points; 2) check the ratio of inter-chain to intra-chain variances of all parameters; 3) if the ratio becomes very close to 1, the stationary distribution has been reached.
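A minimal sketch of such a Metropolis-within-Gibbs sampler is given below. The forward solver \texttt{simulate}, the random-walk step size, and the chain length are placeholders and assumptions for illustration; the conditional updates of $\mu_e$ and $\sigma_e^2$ follow the full conditionals derived above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def log_like(e, mu_e, var_e):
    # Gaussian log-likelihood of the normalized error vector (up to a constant).
    return -0.5 * np.sum((e - mu_e) ** 2) / var_e - 0.5 * e.size * np.log(var_e)

def mwg_sampler(u_meas, simulate, mu0, sig0, n_iter=5000, step=0.02,
                sig_mu_e=1.0 / 3.0, alpha_e=1.0, beta_e=2.0):
    # u_meas: measured field; simulate(xi) returns the predicted field (placeholder).
    # mu0, sig0: prior means and standard deviations of xi from sparse regression.
    xi = np.array(mu0, dtype=float)
    mu_e, var_e = 0.0, 1.0
    e = ((u_meas - simulate(xi)) / np.linalg.norm(u_meas)).ravel()
    n_e = e.size
    samples = []

    for _ in range(n_iter):
        # Metropolis-Hastings update of each PDE coefficient (intractable conditional).
        for i in range(xi.size):
            xi_prop = xi.copy()
            xi_prop[i] += step * rng.standard_normal()
            e_prop = ((u_meas - simulate(xi_prop)) / np.linalg.norm(u_meas)).ravel()
            log_ratio = (log_like(e_prop, mu_e, var_e) - log_like(e, mu_e, var_e)
                         - 0.5 * ((xi_prop[i] - mu0[i]) ** 2
                                  - (xi[i] - mu0[i]) ** 2) / sig0[i] ** 2)
            if np.log(rng.uniform()) < log_ratio:
                xi, e = xi_prop, e_prop

        # Gibbs update of mu_e from its normal full conditional.
        post_var = 1.0 / (n_e / var_e + 1.0 / sig_mu_e ** 2)
        post_mean = post_var * np.sum(e) / var_e
        mu_e = rng.normal(post_mean, np.sqrt(post_var))

        # Gibbs update of sigma_e^2 from its inverse-gamma full conditional.
        shape = 0.5 * n_e + alpha_e
        scale = 0.5 * np.sum((e - mu_e) ** 2) + beta_e
        var_e = scale / rng.gamma(shape)

        samples.append(np.concatenate([xi, [mu_e, var_e]]))
    return np.array(samples)
\end{verbatim}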
In summary, Bayesian inference is conducted in this step to further update the model parameters obtained from sparse regression by exploiting the information contained in the raw data. It is expected to improve the accuracy and confidence level of the learned sparse model. Taking the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) as an example, with the data containing 0\% noise, the PDE after BMU becomes $u_t = -1.0020(\pm 0.0260)uu_x+\frac{0.0107}{\pi}(\pm 8.04\times10^{-4})u_{xx}$. Comparing this model with that from sparse regression in Section \ref{Sec:PeSBL_Alg} (i.e., $u_t = -1.02(\pm 1.03)uu_x+\frac{0.022}{\pi}(\pm 7.20\times10^{-3})u_{xx}$), one can find that BMU considerably improves the accuracy of model coefficients and reduces the model's uncertainty. This comparison is more clearly demonstrated in Figure \ref{Figure:BMU_burgersN0}. The performance of the proposed \textsc{PeSBL} method in learning the correct governing PDEs will be further examined with more canonical dynamical systems in Section \ref{Sec:results}.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{BMU_burgersN0.eps}
\caption{Results of sparse regression and BMU for the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) with 0\% noise. $\xi_1$ and $\xi_2$ denote the coefficients of $uu_x$ and $u_{xx}$, respectively. The superscripts $^0$ and $^1$ denote the prior obtained from sparse regression using the sequential \textsc{PeSBL} algorithm and the posterior after BMU, respectively. (c) and (d) show the distributions of samples drawn by Metropolis-within-Gibbs as histograms.}
\label{Figure:BMU_burgersN0}
\end{figure}
\section{Results of PDE Learning and Discussions}\label{Sec:results}
This section presents and discusses the results of PDE learning using the \textsc{PeSBL} method with simulated noisy data from several canonical systems covering a number of scientific domains. Noise at levels from 0 to 50\% is added to the numerically simulated clean data to demonstrate the effects of preprocessing and the robustness of the \textsc{PeSBL} method with noisy measurements. Two 1D systems are investigated in Section \ref{results1D}: (1) the dissipative system characterized by the 1D Burgers equation (as shown in Section \ref{Sec:method}); (2) the traveling waves described by the Korteweg–de Vries (KdV) equation. Section \ref{results2D} presents the results of two 2D systems: (1) an extended dissipative system characterized by the 2D Burgers equation; (2) the lid-driven cavity flow governed by the 2D Navier Stokes equation. Codes of all demonstrated examples are available at \href{https://github.com/ymlasu}{\textcolor{blue}{https://github.com/ymlasu}}.
\subsection{Discovering PDEs for 1D systems}\label{results1D}
Table \ref{Table:burgersNoisy} lists the results of learning the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) from noisy data using the proposed \textsc{PeSBL} method. It shows that the \textsc{PeSBL} method yields the correct PDE form with data containing up to 30\% noise. Figures \ref{Figure:BMU_burgersN20} (a) to (d) show the intermediate and final results of stochastic PDE learning using the \textsc{PeSBL} method with measured data containing 20\% noise from this dissipative system. It can be observed that, by virtue of the BMU step in the \textsc{PeSBL} method, the mean values of the coefficients get very close to those of the ground truth PDE, and the model uncertainty is largely reduced compared with the results of sparse regression. When the noise level goes above 30\%, the diffusion term of the Burgers equation (i.e., $\frac{0.01}{\pi}u_{xx}$) cannot be identified using the \textsc{PeSBL} method due to the challenge posed by the extremely small diffusion coefficient. It is worth noting that the Burgers equation with a much larger diffusion coefficient (i.e., $u_t = -uu_x+0.1u_{xx}$) is widely used in existing studies \cite{chen2021robust,rudy2017data} for PDE learning. It has been verified that, with the \textsc{PeSBL} method, the correct form of this less challenging Burgers equation can be successfully identified even with data containing more than 50\% noise.
\begin{table} [!h]
\caption{Results of PDE learning using the \textsc{PeSBL} method (Burgers equation: $u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$).}
\begin{tabular}{ |c|c| }
\hline
noise level & identified PDE \\ \hline
0\% & $u_t = -1.0020(\pm 0.0260)uu_x+\frac{0.0107}{\pi}(\pm 8.04\times10^{-4})u_{xx}$\\ \hline
10\% & $u_t = -1.0007(\pm0.0255)uu_x+\frac{0.0106}{\pi}(\pm 7.98\times10^{-4})u_{xx}$\\ \hline
20\% & $u_t = -1.0010(\pm0.0252)uu_x+\frac{0.0108}{\pi}(\pm 8.18\times10^{-4})u_{xx}$\\ \hline
30\% & $u_t = -0.9988(\pm0.0262)uu_x+\frac{0.0107}{\pi}(\pm 8.21\times10^{-4})u_{xx}$\\ \hline
\end{tabular}\label{Table:burgersNoisy}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{BMU_burgersN20.eps}
\caption{Results of sparse regression and BMU for the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) with 20\% noise. Please refer to the caption of Figure \ref{Figure:BMU_burgersN0} for more information.}
\label{Figure:BMU_burgersN20}
\end{figure}
The second example of learning PDE from a 1D dynamical system examines the effectiveness of the \textsc{PeSBL} method in correctly identifying governing equations containing higher-order spatial derivatives. This section considers a mathematical model of traveling waves on shallow water surfaces, i.e., the KdV equation with the form $u_t = \xi_1uu_x + \xi_2u_{xxx}$. The KdV equation can be used to characterize the evolution of many long 1D waves such as the ion acoustic waves in a plasma and acoustic waves on a crystal lattice \cite{raissi2019physics}. This study investigates the system described by the following KdV equation: $u_t = -uu_x-0.0025u_{xxx}$ with the initial condition $u(x,0)=\mathrm{cos}(\pi x)$ and periodic boundary conditions. Figure \ref{Figure:KdV} visualizes this system within the range $x\in[-1,1]$ and $t\in[0,1]$.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{KdV.eps}
\caption{The traveling waves characterized by the KdV equation ($u_t = -uu_x-0.0025u_{xxx}$) with the initial condition $u(x,0)=\mathrm{cos}(\pi x)$ and periodic boundary conditions. $x\in[-1,1]$ and $t\in[0,1]$.}
\label{Figure:KdV}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{BMU_KdVN20.eps}
\caption{Results of sparse regression and BMU for the KdV equation ($u_t = -uu_x-0.0025u_{xxx}$) with 20\% noise. $\xi_1$ and $\xi_2$ denote the coefficient of $uu_x$ and $u_{xxx}$, respectively. The superscripts $^0$ and $^1$ denote the prior obtained from SBL and the posterior after BMU.}
\label{Figure:BMU_KdVN20}
\end{figure}
\begin{table} [!h]
\caption{Results of PDE learning using the \textsc{PeSBL} method (KdV equation: $u_t = -uu_x-0.0025u_{xxx}$).}
\begin{tabular}{ |c|c| }
\hline
noise level & identified PDE \\ \hline
0\% & $u_t = -1.0000(\pm 0.0034)uu_x-0.0025(\pm 9.48\times 10^{-6})u_{xxx}$\\ \hline
10\% & $u_t = -1.0002(\pm 0.0035)uu_x-0.0025(\pm 9.40\times 10^{-6})u_{xxx}$\\ \hline
20\% & $u_t = -1.0004(\pm 0.0035)uu_x-0.0025(\pm 9.70\times 10^{-6})u_{xxx}$\\ \hline
50\% & $u_t = -1.0012(\pm 0.0036)uu_x-0.0025(\pm 9.85\times 10^{-6})u_{xxx}$\\ \hline
\end{tabular}\label{Table:kdvNoisy}
\end{table}
Table \ref{Table:kdvNoisy} summarizes the results of learning PDEs from the simulated traveling waves containing 0\% to 50\% noise. It shows that the correct PDE form with accurate coefficient means and limited uncertainties can be successfully identified using the \textsc{PeSBL} method in all simulated cases. Figure \ref{Figure:BMU_KdVN20} shows the results of PDE learning for this system with data containing 20\% measurement noise.
\subsection{Discovering PDEs for 2D systems}\label{results2D}
This section investigates the effectiveness of the \textsc{PeSBL} method in learning the correct governing equation(s) of 2D dynamical systems. First, the \textsc{PeSBL} method is used to discover the physics of a 2D dissipative system characterized by the Burgers equation $u_t = -(uu_x+ uu_y)+0.01(u_{xx} +u_{yy})$ with the initial condition $u(x,y,0) = 0.1\mathrm{sech}(20x^2+25y^2)$ and periodic boundary conditions. Figure \ref{Figure:burgers2DN0} shows snapshots of the simulated data for this system. It should be noted that, with the increase of dimensionality, the knowledge discovery of dynamical systems becomes more complex. Hence, in the sparse regression scheme of PDE learning, the library matrix $\mathbf{\Theta}$ should no longer be built exhaustively from all possible combinations of polynomials up to a certain power and spatial derivatives up to a certain order, which would make the sparse regression problem intractable. Instead, $\mathbf{\Theta}$ is built with representative terms of multi-dimensional nonlinear dynamical systems (e.g., the convective derivative $\mathbf{u} \cdot \nabla$, the advective acceleration $(\mathbf{u} \cdot \nabla)\mathbf{u}$, and the Laplacian $\nabla^2\mathbf{u}$) and their products with polynomials. Table \ref{Table:2dBurgers} summarizes the results of learning PDEs for the 2D dissipative system using simulated clean and noisy data. It shows that the correct equation form with accurate coefficient means and limited uncertainties can be successfully identified using the \textsc{PeSBL} method with data containing as much as 50\% random noise. Figure \ref{Figure:BMU_Burgers2DN20} shows the results of learning this 2D Burgers equation with data containing 20\% noise.
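A minimal sketch of such a curated 2D library (for a scalar field, assuming central finite differences) is given below; the particular term set and grid handling are illustrative assumptions rather than the exact library used in this study.
\begin{verbatim}
import numpy as np

def build_library_2d(u, dt, dx, dy):
    # u: scalar field with shape (nt, nx, ny); only representative terms are kept.
    u_t = np.gradient(u, dt, axis=0)
    u_x = np.gradient(u, dx, axis=1)
    u_y = np.gradient(u, dy, axis=2)
    u_xx = np.gradient(u_x, dx, axis=1)
    u_yy = np.gradient(u_y, dy, axis=2)

    convection = u * (u_x + u_y)   # (u . grad) u for this scalar example
    laplacian = u_xx + u_yy        # grad^2 u
    polys = [np.ones_like(u), u, u**2]

    columns = [(p * term).ravel() for term in (convection, laplacian) for p in polys]
    return u_t.ravel(), np.column_stack(columns)
\end{verbatim}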
\begin{figure}[!h]
\centering
\includegraphics[scale=0.80]{bugers2DN0.eps}
\caption{Dissipative system characterized by the 2D Burgers equation $u_t = -(uu_x+ uu_y)+0.01(u_{xx} +u_{yy})$.}
\label{Figure:burgers2DN0}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{BMU_Burgers2DN20.eps}
\caption{Results of sparse regression and BMU for the 2D Burgers equation ($u_t = -(uu_x+ uu_y)+0.01(u_{xx} +u_{yy})$) with 20\% noise. $\xi_1$ and $\xi_2$ denote the coefficients of $(uu_x+ uu_y)$ or $(\mathbf{u}\cdot\nabla)u$ and of $(u_{xx} +u_{yy})$ or $\nabla^2u$, respectively. The superscripts $^0$ and $^1$ denote the prior obtained from SBL and the posterior after BMU.}
\label{Figure:BMU_Burgers2DN20}
\end{figure}
\begin{table} [!h]
\caption{Results of PDE learning using the \textsc{PeSBL} method (2D Burgers equation: $u_t = -(uu_x+ uu_y)+0.01(u_{xx} +u_{yy})$).}
\begin{tabular}{ |c|c| }
\hline
noise level & identified PDE \\ \hline
0\% & $u_t = -0.9980(\pm 0.0564)(uu_x+ uu_y)+0.0100(\pm 3.16\times 10^{-4})(u_{xx} +u_{yy})$\\ \hline
10\% & $u_t = -1.0017(\pm 0.0556)(uu_x+ uu_y)+0.0100(\pm 3.18\times 10^{-4})(u_{xx} +u_{yy})$\\ \hline
20\% & $u_t = -1.0049(\pm 0.0570)(uu_x+ uu_y)+0.0100(\pm 3.19\times 10^{-4})(u_{xx} +u_{yy})$\\ \hline
50\% & $u_t = -1.0153(\pm 0.0635)(uu_x+ uu_y)+0.0101(\pm 3.55\times 10^{-4})(u_{xx} +u_{yy})$\\ \hline
\end{tabular}\label{Table:2dBurgers}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.65]{solNS_N0_2.eps}
\caption{Simulated lid-driven flow (at $t=4.0$ sec) characterized by the 2D Navier Stokes equation ($\frac{\partial \mathbf{u}}{\partial t} = -(\mathbf{u}\cdot\nabla)\mathbf{u}-\nabla p + \frac{1}{100}\nabla^2\mathbf{u}$ with $\nabla\cdot\mathbf{u}=0$). The small arrows represent the velocity field; the color plot with contour lines denotes the pressure distribution; the closed contour lines are the streamlines.}
\label{Figure:solNS_N0_2}
\end{figure}
Another 2D system investigated in this study is the lid-driven cavity flow, a benchmark problem for viscous incompressible fluid flow \cite{Zienkiewicz2005finite}. This study uses a square cavity comprising a lid on the top moving with unit tangential velocity and three no-slip rigid walls. The velocity and pressure distributions are numerically simulated for a Reynolds number of 100. Figure \ref{Figure:solNS_N0_2} visualizes this system at $t = 4.0$ sec, containing the velocity (small arrows) and pressure (color map with contour lines) distributions as well as the streamlines.
\begin{table} [!h]
\caption{Results of PDE learning using the \textsc{PeSBL} method (2D Navier Stokes equation: $\frac{\partial \mathbf{u}}{\partial t} = -(\mathbf{u}\cdot\nabla)\mathbf{u}-\nabla p + \frac{1}{100}\nabla^2\mathbf{u}$ with $\nabla\cdot\mathbf{u}=0$).}
\begin{tabular}{ |c|c| }
\hline
noise level & identified PDE \\ \hline
0\% & $\frac{\partial \mathbf{u}}{\partial t} = -1.0000(\pm 0.0014)(\mathbf{u}\cdot\nabla)\mathbf{u}-1.0000(\pm 0.0014)\nabla p + \frac{1}{100.00}(\pm 1.38\times 10^{-5})\nabla^2\mathbf{u}$\\ \hline
10\% & $\frac{\partial \mathbf{u}}{\partial t} = -1.0000(\pm 0.0013)(\mathbf{u}\cdot\nabla)\mathbf{u}-1.0000(\pm 0.0013)\nabla p + \frac{1}{100.00}(\pm 1.37\times 10^{-5})\nabla^2\mathbf{u}$ \\ \hline
20\% & $\frac{\partial \mathbf{u}}{\partial t} = -1.0000(\pm 0.0014)(\mathbf{u}\cdot\nabla)\mathbf{u}-1.0001(\pm 0.0014)\nabla p + \frac{1}{99.99}(\pm 1.43\times 10^{-5})\nabla^2\mathbf{u}$\\ \hline
50\% & $\frac{\partial \mathbf{u}}{\partial t} = -0.9999(\pm 0.0025)(\mathbf{u}\cdot\nabla)\mathbf{u}-1.0002(\pm 0.0023)\nabla p + \frac{1}{99.99}(\pm 2.29\times 10^{-5})\nabla^2\mathbf{u}$\\ \hline
\end{tabular}\label{Table:NS}
\end{table}
Table \ref{Table:NS} lists the results of PDE learning using the \textsc{PeSBL} method with simulated data containing various levels of noise. The correct PDE form can be learned for cases with as much as 50\% noise. When the noise level increases to 50\%, the standard deviations of model coefficients are almost doubled. This increased level of model uncertainty is mainly caused by the increase of noise level in addition to the challenge of learning PDE(s) for this complex system. However, this adverse influence is still well controlled mainly due to the efficient signal processing in the proposed method. Figure \ref{Figure:BMU_NSN20} shows the results of PDE learning for the case with 20\% measurement noise.
\subsection{Robustness of the \textsc{PeSBL} Method}
To highlight the robustness merits of the \textsc{PeSBL} method, Table \ref{Table:burgersSBL_S3d} compares the results of learning the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) using the \textsc{PeSBL} method, which considers model parsimony, with those of methods in the literature promoting model sparsity alone. Without loss of generality, the simulated data with 20\% noise are used. It shows that the SBL method yields a model as sparse as that of the \textsc{PeSBL} method, however with an incorrect and more complex diffusion term. This failure is due to the fact that the traditional RVM method (Section \ref{Section:RVM}) only promotes sparsity in regression and thus tends to include complex terms for a marginal improvement in regression accuracy. More comparison of the learned form ($u_t = \xi_1uu_x+\xi_2u^2u_{xx}$) with the correct form ($u_t = \xi_1uu_x+\xi_2u_{xx}$) can be found in the authors' previous study \cite{zhang2021robust}. The $\mathrm{S}^3\mathrm{d}$ method yields a complex PDE form that does not correspond to a tractable dynamical system.
\begin{table} [!h]
\caption{Comparison of PDE learning results using methods with and without parsimony enhancement. The Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) is used for demonstration. The noise level in the measured data is 20\%.}
\begin{tabular}{ |m{4.5cm}|m{10cm}| }
\hline
method & identified PDE \\ \hline
\textsc{Pe}SBL (with parsimony enhancement)& $u_t = -1.0010(\pm0.0252)uu_x+\frac{0.0108}{\pi}(\pm 8.18\times10^{-4})u_{xx}$ \\ \hline
SBL (without parsimony enhancement) & $u_t = -0.9773(\pm 0.9797)uu_x+0.0073(\pm 0.0074)u^2u_{xx}$\\ \hline
$\mathrm{S}^3\mathrm{d}$ (without parsimony enhancement) \cite{yuan2019machine}& $u_t = -0.01381-0.1832u-0.1285u^3+0.0147u_x+0.0067u_{xx}+1.8\times 10^{-5}u_{xxx}-1.1525uu_x-0.0009uu_{xx}-0.0013uu_{xxx}-0.0039u^2u_{xx}+1.0385u^3u_{x}+0.0009u^3u_{xx}+0.0013u^3u_{xxx}$\\ \hline
\end{tabular}\label{Table:burgersSBL_S3d}
\end{table}
\begin{table} [!h]
\caption{Comparison of PDE learning results with and without signal preprocessing. The Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) is used for demonstration. The noise level in the measured data is 20\%.}
\begin{tabular}{ |m{3.0cm}|m{10cm}| }
\hline
method & identified PDE \\ \hline
\textsc{Pe}SBL without preprocessing& $u_t =-0.3296u+0.1844u^3-0.6474uu_x+0.3833u^3u_x-0.0059u_{xx}-0.0034u^2u_{xx}-0.0002uu_{xxx}+0.0001u^3u_{xxx}$ \\ \hline
\textsc{Pe}SBL with preprocessing& $u_t = -1.0010(\pm0.0252)uu_x+\frac{0.0108}{\pi}(\pm 8.18\times10^{-4})u_{xx}$ \\ \hline
SBL without preprocessing&$u_t =-0.3296u+0.1844u^3-0.6474uu_x+0.3833u^3u_x-0.0059u_{xx}-0.0034u^2u_{xx}-0.0002uu_{xxx}+0.0001u^3u_{xxx}$ \\ \hline
SBL with preprocessing& $u_t = -0.9773(\pm 0.9797)uu_x+0.0073(\pm 0.0074)u^2u_{xx}$\\ \hline
\end{tabular}\label{Table:burgersSBL_S3d2}
\end{table}
Additionally, Table \ref{Table:burgersSBL_S3d2} compares the results of PDE learning using the \textsc{PeSBL} and SBL methods with and without signal preprocessing. It shows that, without preprocessing the measured signals, the \textsc{PeSBL} method cannot identify the correct PDE form due to the significant influence of measurement noise, whereas with signal preprocessing it yields the correct model. This finding further confirms the effectiveness of the signal preprocessing strategy established in this study (Section \ref{sec:preprocess}). Without preprocessing, SBL learns the same incorrect PDE form from the raw data. When preprocessing is implemented, SBL yields a sparse equation form; although it contains an incorrect term, this equation is more tractable and has been shown to be considerably representative of the dissipative system in the authors' previous study \cite{zhang2021robust}. Therefore, the signal preprocessing strategy established in this study has the potential of improving the performance of other PDE learning methods without parsimony enhancement.
\section{Propagating Model Uncertainties to System Dynamics} \label{uncertainProp}
Given noisy measurements from a certain dynamical system, the \textsc{PeSBL} method outputs the most probable model (i.e., PDE(s)) reflecting the underlying physics as well as the model uncertainties. The learned model uncertainties can be propagated to the simulated system dynamics, so that a confidence interval can be provided for the system responses at certain temporal and spatial coordinates. In this section, the two systems characterized by the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$) and the KdV equation ($u_t = -uu_x-0.0025u_{xxx}$) are used as examples to demonstrate the results of uncertainty propagation. Without loss of generality, the models learned from data with 20\% measurement noise, together with their uncertainties, are used in this section. The results of uncertainty propagation for the 2D systems investigated in Section \ref{results2D} are not demonstrated here due to the high computational cost and the difficulty of visualization. The codes for uncertainty propagation of all 1D and 2D systems investigated in Section \ref{Sec:results} can be found at \href{https://github.com/ymlasu}{\textcolor{blue}{https://github.com/ymlasu}}.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.7]{BMU_NSN20.eps}
\caption{Results of sparse regression and BMU for 2D Navier Stokes equation ( $\frac{\partial \mathbf{u}}{\partial t} = -(\mathbf{u}\cdot\nabla)\mathbf{u}-\nabla p + \frac{1}{100}\nabla^2\mathbf{u}$ with $\nabla\cdot\mathbf{u}=0$) with 20\% noise. $\xi_1$ denotes the coefficient of $\nabla p$, $\xi_2$ denotes the coefficient of $\nabla^2\mathbf{u}$, and $\xi_3$ denotes the coefficient of $(\mathbf{u}\cdot\nabla)\mathbf{u}$. The superscripts $^0$ and $^1$ denote the prior obtained from SBL and the posterior after BMU.}
\label{Figure:BMU_NSN20}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.75]{BurgersUP_N20_2.eps}
\caption{Estimated system response at $x = -0.6$ during the time interval ($1.0\leqslant t \leqslant 2.0$) for the system characterized by the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$). (a) simulated samples and the mean value; (b) comparison of the sample mean and standard deviation with the truth; (c) an amplified view of (b) in the range ($1.6\leqslant t \leqslant 1.8$).}
\label{Figure:BurgersUP_N20}
\end{figure}
Unlike linear systems, for which the model parameters and outputs are explicitly related, nonlinear dynamical systems have an implicit relationship between model parameters and outputs. Therefore, for a certain system in this study, the model uncertainty is propagated to the system responses by numerically simulating the system with the rich parameter samples obtained from MCMC in BMU. By summarizing the results of these numerical simulations, the uncertainties of the system dynamics can be quantified and analyzed. To effectively demonstrate the representativeness of the learned models and the efficiency of uncertainty quantification and propagation, the responses of the two systems at ``future'' times ($t\geqslant 1$) are simulated and presented in this section.
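The sketch below illustrates this sampling-based propagation; \texttt{solve\_pde} is a hypothetical placeholder for a numerical solver of the learned PDE, and the number of draws is an illustrative assumption.
\begin{verbatim}
import numpy as np

def propagate_uncertainty(xi_samples, solve_pde, n_draws=500, seed=0):
    # xi_samples: MCMC samples of the PDE coefficients, shape (n_mcmc, n_xi).
    # solve_pde : hypothetical forward solver; solve_pde(xi) returns u on a fixed grid.
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, xi_samples.shape[0], size=n_draws)
    runs = np.stack([solve_pde(xi_samples[i]) for i in idx])  # (n_draws, nt, nx)

    mean = runs.mean(axis=0)
    std = runs.std(axis=0)
    # Report the response band (mean - 3*std, mean + 3*std) at each (t, x).
    return mean, mean - 3.0 * std, mean + 3.0 * std
\end{verbatim}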
Figure \ref{Figure:BurgersUP_N20} (a) plots the simulated samples at the cross section $x=-0.6$ for $1\leqslant t \leqslant 2$ and their mean value for the dissipative system characterized by the Burgers equation. It can be observed that the system uncertainty remains at nearly the same level within this region, most probably due to the smoothness of the system therein and the constant uncertainty of the learned model. Figure \ref{Figure:BurgersUP_N20} (b) (amplified in (c)) compares the sample mean with the underlying truth of the system at the same cross section. It shows that the mean of the samples deviates only slightly from the true value, which is always encompassed within the range (mean$-$3std, mean$+$3std) (std stands for standard deviation). This observation confirms the reliability of the model learned using the proposed \textsc{PeSBL} method in accurately predicting system responses. Figures \ref{Figure:KdVUP_N20} (a) to (c) show the results of uncertainty propagation for the traveling waves characterized by the KdV equation; the system responses at $t = 1.6$ are plotted for demonstration. Due to the sharp transition of the system responses at this cross section, the propagated uncertainty varies in magnitude as $x$ increases from $-1$ to $1$. Nevertheless, the mean of the samples stays close to the true value, which is always covered by the interval (mean$-$3std, mean$+$3std).
\begin{figure}[!h]
\centering
\includegraphics[scale=0.75]{KdVUP_N20_2.eps}
\caption{Estimated system response at $t = 1.6$ for the system characterized by the KdV equation ($u_t = -uu_x-0.0025u_{xxx}$). (a) simulated samples and the mean value; (b) comparison of the sample mean and standard deviation with the truth; (c) an amplified view of (b) in the range ($0\leqslant x \leqslant 0.5$).}
\label{Figure:KdVUP_N20}
\end{figure}
\section{System Diagnosis and Prognosis}\label{Section:DAP}
This section examines the capability of the \textsc{PeSBL} framework in system diagnosis when a significant variation occurs in the investigated system. The dissipative system characterized by the Burgers equation ($u_t = \xi_1uu_x+\xi_2u_{xx}$) is used as an example in this analysis without loss of generality; the simulated data contain 20\% noise. To simulate the variation of this system, both the advection and diffusion coefficients are varied such that the governing equation changes from $u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$ to $u_t = -0.9uu_x+\frac{0.02}{\pi}u_{xx}$. Figure \ref{Figure:BMU_burgers1N20} shows the results of sparse regression and BMU using the measured data from the varied system. The learned stochastic PDE has the form $u_t = -0.8980(\pm0.0251)uu_x+\frac{0.0204}{\pi}(\pm 0.001)u_{xx}$, which significantly deviates from that of the original system (i.e., $u_t = -1.0010(\pm0.0252)uu_x+\frac{0.0108}{\pi}(\pm 8.18\times10^{-4})u_{xx}$). Figures \ref{Figure:BMU_Burgers_diagnosis1_N20} (a) and (b) compare the posterior PDFs of the advection and diffusion coefficients (i.e., $\xi_1$ and $\xi_2$, respectively) of the original and varied systems. It can be observed that the governing model of the investigated system has shifted considerably with a high probability. These outcomes with uncertainties enable diagnosing possible changes of the system in a probabilistic manner. For example, with the fitted normal posteriors shown in the figure, the variations of the advection and diffusion coefficients can be evaluated as $\delta\xi_1 \sim \mathcal{N}(0.103,0.0013)$ and $\delta\xi_2 \sim \mathcal{N}(0.0031,1.75\times 10^{-6})$, respectively. It remains to be examined whether this framework can be used for the prognostic health management of real physical systems.
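Assuming the posteriors of the original and varied systems are independent Gaussians (an assumption adopted here for illustration), the shift in each coefficient is itself Gaussian, with mean equal to the difference of the posterior means and variance equal to the sum of the posterior variances. The short check below approximately reproduces the $\delta\xi_1$ and $\delta\xi_2$ statistics quoted above.
\begin{verbatim}
import math

# Coefficient shift between the original and varied systems, assuming
# independent Gaussian posteriors: delta ~ N(mu2 - mu1, s1^2 + s2^2).
def coefficient_shift(mu1, s1, mu2, s2):
    return mu2 - mu1, s1**2 + s2**2

# Advection coefficient xi_1: -1.0010 (+/- 0.0252) -> -0.8980 (+/- 0.0251)
print(coefficient_shift(-1.0010, 0.0252, -0.8980, 0.0251))
# -> (0.103, about 1.3e-3), i.e., delta_xi_1 ~ N(0.103, 0.0013)

# Diffusion coefficient xi_2: 0.0108/pi (+/- 8.18e-4) -> 0.0204/pi (+/- 1.0e-3)
print(coefficient_shift(0.0108 / math.pi, 8.18e-4, 0.0204 / math.pi, 1.0e-3))
# -> (about 3.1e-3, about 1.7e-6), close to the reported N(0.0031, 1.75e-6)
\end{verbatim}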
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{BMU_Burgers1_0.9_0.02pi_N20.eps}
\caption{Results of sparse regression and BMU for the Burgers equation ($u_t = -0.9uu_x+\frac{0.02}{\pi}u_{xx}$) with 20\% noise. Please refer to the caption of Figure \ref{Figure:BMU_burgersN0} for more information.}
\label{Figure:BMU_burgers1N20}
\end{figure}
\section{Multiscale Modeling through Hierarchical Bayesian Inference (HBI)}\label{Section:MSM}
In Sections \ref{Sec:method} to \ref{Section:DAP}, the PDE learning is implemented with a certain dataset measured from a certain system, and the learned model uncertainty comes from the disturbance of other candidate terms in the library $\boldsymbol{\Theta}$ and the influence of measurement noise. However, in reality, the system may vary from time to time due to the change of environmental and other factors (such as temperature change and external excitations). This variation of system makes the corresponding model intrinsically uncertain, and this intrinsic model uncertainty cannot be quantified using the Bayesian inference framework established in Section \ref{Sec:BMU}. Therefore, the multiscale Bayesian modeling of dynamical systems is investigated in this section through Hierarchical Bayesian Inference (HBI) to quantify the intrinsic model uncertainty of systems.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{BMU_Burgers_diagnosis1_N20.eps}
\caption{Comparison of BMU results for two dissipative systems characterized by the Burgers equation ($u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$ and $u_t = -0.9uu_x+\frac{0.02}{\pi}u_{xx}$) with data containing 20\% noise. In this figure, the superscript $^1$ denotes the results from system 1 characterized by Burgers equation $u_t = -uu_x+\frac{0.01}{\pi}u_{xx}$, and the superscript $^2$ denotes the results from system 2 characterized by Burgers equation $u_t = -0.9uu_x+\frac{0.02}{\pi}u_{xx}$. Please refer to the caption of Figure \ref{Figure:BMU_burgersN0} for more information.}
\label{Figure:BMU_Burgers_diagnosis1_N20}
\end{figure}
\subsection{Framework}\label{Sec:HBI-frame}
Figure \ref{Figure:HBI} illustrates the graphical HBI model for multiscale modeling of dynamical systems. It explicitly describes the relationship between the measurements from a certain system and its model parameters and hyperparameters, and thus helps derive the posterior distribution and the full conditional distributions. In this framework, the system model during a certain test $t$ has model parameters $\boldsymbol{\xi}_t$, which are a sample from their underlying distribution. In this study, it is assumed that $\xi_{it}$ follows a Gaussian distribution such that $\xi_{it} \sim \mathcal{N}\left(\mu_{\xi_i},\sigma_{\xi_i}^2\right)$. Given the measurement $\widetilde{u}_t$ during a certain test, the error function for this test can be established in a similar way to Equations \ref{Eq:err} and \ref{Eq:err2}, such that
\begin{equation}
\boldsymbol{e}_t = \frac{\widetilde{u}_t-u(\boldsymbol{\xi}_t)}{ \lVert \widetilde{u}_t \rVert}
\end{equation}
with
\begin{figure*}[!h]
\centering
\includegraphics[scale=1.0]{HBI}
\caption{Graphical model for hierarchical Bayesian modeling.}\label{Figure:HBI}
\end{figure*}
\begin{equation}
\mathrm{i.i.d.} \quad e_{it} \sim \mathcal{N}(\mu_e,\sigma_e^2)
\end{equation}
in which $t=1,2,...N_t$ and $N_t$ is the number of tests conducted. Following Equation \ref{Eq:llh}, the likelihood function can be built as
\begin{equation}\label{Eq:llh2}
\begin{split}
p(\widetilde{\boldsymbol{u}} \big\lvert \boldsymbol{\Xi},\mu_e,\sigma_e^2)&= \prod_{t=1}^{N_t}p(\widetilde{u}_t\big\lvert \boldsymbol{\xi}_t,\mu_e,\sigma_e^2) \\
&= \prod_{t=1}^{N_t}p(\boldsymbol{e}_t\big\lvert \mu_e,\sigma_e^2) \\
&= \prod_{t=1}^{N_t}\prod_{i=1}^{N_e}\mathcal{N}(e_{it}\big\lvert \mu_e,\sigma_e^2)
\end{split}
\end{equation}
in which $\boldsymbol{\Xi}=\left\lbrace \boldsymbol{\xi}_1, \boldsymbol{\xi}_2, ..., \boldsymbol{\xi}_{N_t} \right\rbrace$ is the collection of model parameters in all tests. The prior of $\mu_{\xi_i}$ is designed as a uniform distribution with lower limit $\mu_{\xi_i}^\mathrm{l}$ and upper limit $\mu_{\xi_i}^\mathrm{u}$, as shown in Equation \ref{Eq:prior1_2}. These limits are estimated from a literature survey or from expert prior knowledge.
\begin{equation}\label{Eq:prior1_2}
\mu_{\xi_i}\sim U(\mu_{\xi_i}^\mathrm{l},\mu_{\xi_i}^\mathrm{u})
\end{equation}
Without much knowledge about the uncertainties of the model parameters, their variances are assumed to follow the same non-informative inverse-gamma prior, such that
\begin{equation}
\sigma_{\xi_i}^2 \sim \text{Inv-Gamma}(\alpha_\xi,\beta_\xi)
\end{equation}
in which $\alpha_\xi=1$ and $\beta_\xi=2$. The same priors for $\mu_e$ and $\sigma_e^2$ as those in Equations \ref{Eq:prior2} and \ref{Eq:prior3} are used and are rewritten as follows:
\begin{equation}
\mu_e \sim \mathcal{N}(0,\sigma_{\mu_e}^2)
\end{equation}
\begin{equation}\label{Eq:prior2_2}
\sigma_e^2 \sim \text{Inv-Gamma}(\alpha_e,\beta_e)
\end{equation}
With priors and the likelihood function defined, the posterior PDF of the parameter set can be derived as:
\begin{equation}\label{eq:pos_2}
\begin{split}
&p\left(
\boldsymbol{\Xi},\boldsymbol{\mu}_\xi,
\boldsymbol{\Sigma}_\xi,
\mu_e,\sigma_e
\big\lvert
\widetilde{\boldsymbol{u}}
\right)
\propto \\
&\prod_{t=1}^{N_t}
p\left(\widetilde{u}_t\big\lvert\boldsymbol{\xi}_t, \mu_e,\sigma_e^2\right)
p\left( \boldsymbol{\xi}_t\big\lvert\boldsymbol{\mu}_\xi, \boldsymbol{\Sigma}_\xi\right)
p\left(\boldsymbol{\mu}_\xi\right) p\left(\boldsymbol{\Sigma}_\xi\right)
p\left(\mu_e\right) p\left(\sigma_e^2\right)
\end{split}
\end{equation}
in which $\boldsymbol{\mu}_\xi=\left\lbrace \mu_{\xi_1},\mu_{\xi_2},..., \mu_{\xi_{N_\xi}} \right\rbrace$ contains the mean of each model parameter, $\boldsymbol{\Sigma}_\xi$ is the covariance matrix of all model parameters and is a diagonal matrix with $\boldsymbol{\Sigma}_\xi(i,i) = \sigma_{\xi_i}^2$ $(i=1,2,...,N_\xi)$, and $\widetilde{\boldsymbol{u}}=\left\lbrace \widetilde{u}_1, \widetilde{u}_2, ..., \widetilde{u}_{N_t}\right\rbrace$ is the collection of measurements in all tests. Plugging in the expressions of the priors and the likelihood function, the posterior PDF becomes
\begin{equation}\label{Eq:postPDF}
\begin{split}
&p\left(\boldsymbol{\Xi},\boldsymbol{\mu}_\xi,\boldsymbol{\Sigma}_\xi,\mu_e,\sigma_e^2 \big\lvert \widetilde{\boldsymbol{u}}\right)
\propto\\
&\prod_{t=1}^{N_t}\left\lbrace\prod_{i=1}^{N_e}\frac{1}{\sqrt{\sigma_e^2}}\mathrm{exp}\left[-\frac{1}{2}\left(\frac{e_{it}-\mu_e}{\sigma_e}\right)^2\right]\right\rbrace
\left\lbrace\prod_{i=1}^{N_\xi}\frac{1}{\sqrt{\sigma_{\xi_i}^2}}\mathrm{exp}\left[-\frac{1}{2}\left(\frac{\xi_{it}-\mu_{\xi_i}}{\sigma_{\xi_i}}\right)^2\right]\right\rbrace\\
& \prod_{i=1}^{N_\xi}1\left(\mu_{\xi_{i}}^l<\mu_{\xi_{i}}<\mu_{\xi_{i}}^u\right)
\left[\prod_{i=1}^{N_\xi}\left(\sigma_{\xi_i}^2\right)^{-\alpha_\xi-1}\mathrm{exp}\left(-\frac{\beta_\xi}{\sigma_{\xi_i}^2}\right)\right]\\
&\mathrm{exp}\left[-\frac{1}{2}\left(\frac{\mu_e}{\sigma_{\mu_e}}\right)^2\right]
(\sigma_e^2)^{-\alpha_e-1}\mathrm{exp}\left(-\frac{\beta_e}{\sigma_e^2}\right)\\
&\propto (\sigma_e^2)^{-\frac{N_tN_e}{2}-\alpha_e-1}\prod_{i=1}^{N_\xi}(\sigma_{\xi_i}^2)^{-\frac{N_t}{2}-\alpha_\xi-1}
\times1\left(\mu_{\xi_{i}}^l<\mu_{\xi_{i}}<\mu_{\xi_{i}}^u\right)\\
&\mathrm{exp}\left[-\frac{1}{2}\sum_{t=1}^{N_t}\sum_{i=1}^{N_e}\left(\frac{e_{it}-\mu_e}{\sigma_e}\right)^2
-\frac{1}{2}\sum_{t=1}^{N_t}\sum_{i=1}^{N_\xi}\left(\frac{\xi_{it}-\mu_{\xi_i}}{\sigma_{\xi_i}}\right)^2
-\sum_{i=1}^{N_\xi}\frac{\beta_\xi}{\sigma_{\xi_i}^2}
-\frac{1}{2}\left(\frac{\mu_e}{\sigma_{\mu_e}}\right)^2
-\frac{\beta_e}{\sigma_e^2}\right]
\end{split}
\end{equation}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{HBI_KdV.eps}
\caption{Model and noise parameters generated for the KdV equation ($u_t = \xi_1uu_x+\xi_2u_{xxx}$). (a) $\xi_1$: the coefficient of $uu_x$; (b) $\xi_2$: the coefficient of $u_{xxx}$; (c) the noise level in each dataset.}
\label{Figure:dataHBI}
\end{figure}
Following the principle in Equation \ref{Eq:conditional}, the full conditional posterior distributions can be derived as follows:
\begin{equation} \label{eq:cod_pos1}
p\left(\xi_{it}\big\vert\cdot\right)
\propto
\mathrm{exp}\left[-\frac{1}{2}\sum_{i=1}^{N_e}\left(\frac{e_{it}-\mu_e}{\sigma_e}\right)^2
-\frac{1}{2}\left(\frac{\xi_{it}-\mu_{\xi_i}}{\sigma_{\xi_i}}\right)^2\right]
\end{equation}
\begin{equation}\label{Eq:trcN}
\begin{split}
p\left(\mu_{\xi_i}\big\vert\cdot\right) &\propto
\mathrm{exp}\left[-\frac{1}{2}\sum_{t=1}^{N_t}\left(\frac{\xi_{it}-\mu_{\xi_i}}{\sigma_{\xi_i}}\right)^2\right]
\times1\left(\mu_{\xi_{i}}^l<\mu_{\xi_{i}}<\mu_{\xi_{i}}^u\right)\\
&\sim\mathcal{N}_{\mu_{\xi_i}^l}^{\mu_{\xi_i}^u}\left(\frac{1}{N_t}\sum_{t=1}^{N_t}\xi_{it},\frac{1}{N_t}\sigma_{\xi_i}^2\right)
\end{split}
\end{equation}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.8]{HBI_KdV_xi1.eps}
\caption{Example simulated samples and their distributions for $\xi_1^t$. Superscript $^t$ denotes the test number. (a) samples of $\xi_1^1$; (b) distribution of samples in (a) and the fitted Gaussian distribution; (c) samples of $\xi_1^{11}$; (d) distribution of samples in (c) and the fitted Gaussian distribution.}
\label{Figure:HBI_KdV_xi1}
\end{figure}
\begin{equation}
\begin{split}
p\left(\sigma_{\xi_i}^2\big\vert\cdot\right)
&\propto
(\sigma_{\xi_i}^2)^{-\frac{N_t}{2}-\alpha_\xi-1}
\mathrm{exp}\left[-\frac{1}{2}\sum_{t=1}^{N_t}\left(\frac{\xi_{it}-\mu_{\xi_i}}{\sigma_{\xi_i}}\right)^2
-\frac{\beta_\xi}{\sigma_{\xi_i}^2}\right]\\
&\sim \text{Inv-Gamma}\left(\frac{N_t}{2}+\alpha_\xi,\frac{1}{2}\sum_{t=1}^{N_t}(\xi_{it}-\mu_{\xi_i})^2+\beta_\xi\right)\\
\end{split}
\end{equation}
\begin{equation} \label{eq:cod_pos2}
\begin{split}
p\left(\mu_e\big\vert\cdot\right)
&\propto
\mathrm{exp}\left[-\frac{1}{2}\sum_{t=1}^{N_t}\sum_{i=1}^{N_e}\left(\frac{e_{it}-\mu_e}{\sigma_e}\right)^2
-\frac{1}{2}\left(\frac{\mu_e}{\sigma_{\mu_e}}\right)^2\right]\\
&\sim\mathcal{N}\left(\frac{\sum_{t=1}^{N_t}\sum_{i=1}^{N_e}e_{it}}{N_tN_e+\frac{\sigma_e^2}{\sigma_{\mu_e}^2}},\frac{1}{\frac{N_tN_e}{\sigma_e^2}+\frac{1}{\sigma_{\mu_e}^2}}\right)
\end{split}
\end{equation}
\begin{equation}
\begin{split}
p\left(\sigma_e^2\big\vert\cdot\right)
&\propto
(\sigma_e^2)^{-\frac{N_tN_e}{2}-\alpha_e-1}
\mathrm{exp}\left[-\frac{1}{2}\sum_{t=1}^{N_t}\sum_{i=1}^{N_e}\left(\frac{e_{it}-\mu_e}{\sigma_e}\right)^2
-\frac{\beta_e}{\sigma_e^2}\right]\\
&\sim \text{Inv-Gamma}\left(\frac{N_tN_e}{2}+\alpha_e,\frac{1}{2}\sum_{t=1}^{N_t}\sum_{i=1}^{N_e}(e_{it}-\mu_e)^2+\beta_e\right)\\
\end{split}
\end{equation}
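Since the full conditional of $\xi_{it}$ in Equation \ref{eq:cod_pos1} is not of a standard form (the error $e_{it}$ depends on $\boldsymbol{\xi}_t$ through the forward model), sampling from the joint posterior can, for example, be organized as a Metropolis-within-Gibbs scheme. The following is only a minimal illustrative sketch of such a sampler (assuming Python with NumPy/SciPy; \texttt{forward\_u}, the data arrays, and all other names are placeholders rather than the implementation used in this study):
\begin{verbatim}
import numpy as np
from scipy.stats import invgamma, truncnorm

def gibbs_hbi(u_data, forward_u, xi0, mu_lo, mu_hi,
              n_iter=5000, alpha_xi=1.0, beta_xi=2.0,
              alpha_e=1.0, beta_e=2.0, sigma_mu_e=1.0, step=0.01):
    """Metropolis-within-Gibbs sketch for the HBI model.
    u_data   : list of N_t measurement arrays (one per test)
    forward_u: callable xi -> model prediction, same shape as u_data[t]
    xi0      : (N_t, N_xi) array of initial model parameters
    """
    N_t, N_xi = xi0.shape
    xi = xi0.copy()
    mu_xi = xi.mean(axis=0)
    sig2_xi = xi.var(axis=0) + 1e-8
    mu_e, sig2_e = 0.0, 1.0
    samples = []

    def errors(xi_t, u_t):
        # normalized error, as in the definition of e_t
        return (u_t - forward_u(xi_t)) / np.linalg.norm(u_t)

    for it in range(n_iter):
        # 1) xi_t | . : random-walk Metropolis (non-conjugate conditional)
        for t in range(N_t):
            def logp(x):
                e = errors(x, u_data[t])
                return (-0.5 * np.sum((e - mu_e) ** 2) / sig2_e
                        - 0.5 * np.sum((x - mu_xi) ** 2 / sig2_xi))
            prop = xi[t] + step * np.random.randn(N_xi)
            if np.log(np.random.rand()) < logp(prop) - logp(xi[t]):
                xi[t] = prop
        # 2) mu_xi | . : truncated normal conditional
        for i in range(N_xi):
            m, s = xi[:, i].mean(), np.sqrt(sig2_xi[i] / N_t)
            a, b = (mu_lo[i] - m) / s, (mu_hi[i] - m) / s
            mu_xi[i] = truncnorm.rvs(a, b, loc=m, scale=s)
        # 3) sigma_xi^2 | . : inverse gamma conditional
        for i in range(N_xi):
            sig2_xi[i] = invgamma.rvs(
                N_t / 2 + alpha_xi,
                scale=0.5 * np.sum((xi[:, i] - mu_xi[i]) ** 2) + beta_xi)
        # 4) mu_e and sigma_e^2 | . : normal / inverse gamma conditionals
        e_all = np.concatenate([errors(xi[t], u_data[t]).ravel()
                                for t in range(N_t)])
        n_e = e_all.size
        mu_e = np.random.normal(
            e_all.sum() / (n_e + sig2_e / sigma_mu_e ** 2),
            np.sqrt(1.0 / (n_e / sig2_e + 1.0 / sigma_mu_e ** 2)))
        sig2_e = invgamma.rvs(
            n_e / 2 + alpha_e,
            scale=0.5 * np.sum((e_all - mu_e) ** 2) + beta_e)
        samples.append((xi.copy(), mu_xi.copy(), sig2_xi.copy(), mu_e, sig2_e))
    return samples
\end{verbatim}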
\subsection{A Case Study}
The system of traveling waves on shallow water surfaces characterized by the KdV equation ($u_t = \xi_1uu_x + \xi_2u_{xxx}$) is used for a case study in this section. A total of 1000 models are randomly generated with $\xi_1 \sim \mathcal{N}(-1,0.05^2)$ and $\xi_2 \sim \mathcal{N}(-0.0025,0.0002^2)$, and one dataset is simulated for each model with the noise level randomly sampled from the uniform distribution $U(0,50\%)$. Figure \ref{Figure:dataHBI} shows the generated samples of the model parameters and the noise level in the simulated data. Considering the computational cost of Bayesian inference via MCMC, 20 of these datasets are used for multiscale Bayesian modeling in this section with the HBI framework established in Section \ref{Sec:HBI-frame}.
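A short sketch of this data-generation protocol is given below (assuming Python with NumPy; \texttt{simulate\_kdv} stands for any numerical KdV solver and, like the random seed, is purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_models = 1000
xi1 = rng.normal(-1.0, 0.05, n_models)       # coefficient of u*u_x
xi2 = rng.normal(-0.0025, 0.0002, n_models)  # coefficient of u_xxx
noise = rng.uniform(0.0, 0.5, n_models)      # noise level of each dataset

def make_dataset(x1, x2, nl, simulate_kdv):
    # simulate_kdv is a placeholder for any numerical KdV solver
    u_clean = simulate_kdv(x1, x2)
    return u_clean + nl * np.std(u_clean) * rng.standard_normal(u_clean.shape)
\end{verbatim}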
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{HBI_KdV_xi2.eps}
\caption{Example simulated samples and their distributions for $\xi_2^t$. Please refer to the caption of Figure \ref{Figure:HBI_KdV_xi1} for more information.}
\label{Figure:HBI_KdV_xi2}
\end{figure}
With the expressions of the posteriors derived, the Bayesian inference problem for multiscale modeling is solved numerically via MCMC. A total of 5000 samples are simulated for each model parameter. Certain parameters may take more iterations to converge than others, and samples simulated prior to convergence are excluded from visualization and further analysis. Figures \ref{Figure:HBI_KdV_xi1} and \ref{Figure:HBI_KdV_xi2} show the simulated samples of model parameters $\xi_1$ and $\xi_2$ in two example tests and their distributions. A Gaussian distribution is fitted to the simulated samples; its mean value is used as the estimated model parameter in a given test, and its variance quantifies the uncertainty from measurement noise and numerical simulation. Figure \ref{Figure:HBI_KdV_xi_t} compares the estimated model parameters in the 20 tests with the reference values numerically generated in the beginning. Figure \ref{Figure:HBI_KdV_xi_t} (a) shows that the estimated values of $\xi_1^t$ remain close to the reference values in all tests, with an average relative error of 3.49\%. All tests have a relative error below 10\%, except test 4, in which the relative error is 10.86\%. Compared with $\xi_1^t$, the estimation of $\xi_2^t$ has improved accuracy, with all relative errors below 10\%, as shown in Figure \ref{Figure:HBI_KdV_xi_t} (b). The average relative estimation error is as low as 2.88\%. These comparison results verify the accuracy of model parameter estimation for tests under various conditions in multiscale system modeling.
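The post-processing of each chain (discarding pre-convergence samples, fitting a Gaussian, and computing the relative estimation error) can be sketched as follows (Python/NumPy; names are illustrative):
\begin{verbatim}
import numpy as np

def summarize(chain, burn_in, xi_ref):
    # Discard pre-convergence samples, fit a Gaussian, report relative error (%)
    post = np.asarray(chain)[burn_in:]
    mu_hat, sigma_hat = post.mean(), post.std(ddof=1)
    rel_err = 100.0 * abs(mu_hat - xi_ref) / abs(xi_ref)
    return mu_hat, sigma_hat, rel_err
\end{verbatim}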
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{HBI_KdV_xi_t.eps}
\caption{Comparison between the estimated model parameters via Bayesian inference and the reference values. (a) $\xi_1^t$; (b) $\xi_2^t$.}
\label{Figure:HBI_KdV_xi_t}
\end{figure}
Figures \ref{Figure:HBI_KdV_mu_xi1} and \ref{Figure:HBI_KdV_mu_xi2} show the results of Bayesian inference of the statistics of model parameters $\xi_1$ and $\xi_2$, respectively. Figures \ref{Figure:HBI_KdV_mu_xi1} (b) and (d) show that the model parameter $\xi_1$ has an estimated mean of -1.0243 and an estimated standard deviation of 0.0489, which are very close to the reference values, i.e., -1.00 and 0.05, respectively. An accurate estimation of the statistics of the model parameter $\xi_2$ can also be observed in Figure \ref{Figure:HBI_KdV_mu_xi2}. The accurate estimation of the statistics of the model parameters confirms that the proposed HBI method for multiscale system modeling can efficiently evaluate the state of the system governing model, including its uncertainty, under varying conditions, especially in the presence of large and varying measurement noise levels.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{HBI_KdV_mu_xi1.eps}
\caption{Samples of statistics of model parameter $\xi_1$ and their distribution. (a) samples of the mean of $\xi_1$; (b) distribution of samples in (a) and the fitted Gaussian distribution; (c) samples of the standard deviation of $\xi_1$; (d) distribution of samples in (c) and the fitted Gaussian distribution.}
\label{Figure:HBI_KdV_mu_xi1}
\end{figure}
\section{Summary and Further Discussions} \label{Section:Conclusion}
In this study, a Parsimony-Enhanced Sparse Bayesian Learning method (i.e., the \textsc{PeSBL} method) is proposed for robust data-driven discovery of governing PDEs. The \textsc{PeSBL} method is advantageous over most existing methods in discovering accurate governing models of new complex dynamical systems. Compared with deterministic PDE learning methods, this SBL-based method automatically promotes sparsity by specifying an independent prior for each model parameter and automatically pruning redundant parameters in sequential iterations. However, promoting sparsity alone is not necessarily sufficient for discovering an accurate governing model of a complex dynamical system. Therefore, building on existing SBL-based PDE learning methods in the literature, the \textsc{PeSBL} method further enhances parsimony by modifying the evaluation criteria in the sequential solving algorithm. This modification accounts for the complexity of each candidate model term and thus prevents adding complex terms for a marginal increase in regression accuracy in each iteration. In this way, the method fundamentally avoids the model pruning procedure of most existing deterministic/stochastic PDE learning methods, which inevitably falls into the dilemma of hyperparameter tuning. Additionally, advanced signal preprocessing techniques and Bayesian model updating (BMU) further increase the robustness of the PDE learning results. Results of numerical simulations show that accurate parsimonious governing PDEs can be correctly identified from noisy data for several canonical dynamical systems using the proposed \textsc{PeSBL} method.
Moreover, the stochastic identification scheme in the \textsc{PeSBL} method enables quantifying model uncertainties, propagating uncertainties to system predictions, and conducting probabilistic system diagnosis and prognosis. In reality, the investigated system may vary with changes in external conditions such as temperature. As a result, the system model parameters may be intrinsically stochastic. In this study, this intrinsic model uncertainty is addressed via multiscale Bayesian system modeling through Hierarchical Bayesian Inference (HBI). A numerical case study is conducted with the traveling wave system characterized by the KdV equation. The results show that the system uncertainty under varying conditions can be accurately evaluated, even with large measurement noise, using the established multiscale Bayesian system modeling framework.
Future work may include applying the \textsc{PeSBL} method to real operating dynamical systems to further examine and improve its effectiveness in knowledge discovery.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{HBI_KdV_mu_xi2.eps}
\caption{Samples of statistics of model parameter $\xi_2$ and their distribution. Please refer to the caption of Figure \ref{Figure:HBI_KdV_mu_xi1} for more information.}
\label{Figure:HBI_KdV_mu_xi2}
\end{figure}
\section{Acknowledgments}
The research reported in this paper was supported by funds from NASA University Leadership Initiative Program (Contract No. NNX17AJ86A, Project Officer: Dr. Anupa Bajwa, Principal Investigator: Dr. Yongming Liu). The support is gratefully acknowledged.
\bibliographystyle{elsarticle-num}
\section{Introduction and preliminary results}
In studying Prudnikov's sequence of orthogonal polynomials (see \cite{YAP}) with the weight function $x^\alpha \rho_\nu(x)$, where $ \rho_{\nu}(x)= 2 x^{\nu/2} K_\nu(2\sqrt x),\ x >0,\ \nu \ge 0,\ \alpha > -1$ and $K_\nu(z)$ is the Macdonald or modified Bessel function \cite{Bateman}, Vol. II, the author interpreted this sequence in terms of the Laguerre composition orthogonality involving the differential operator $\theta= xDx$, where $D= d/dx$. The main aim of this paper is to extend the method, employing classical orthogonal polynomials (Hermite, Laguerre, Jacobi) \cite{Bateman}, to find new sequences of orthogonal polynomials with non-classical weights such as, for instance, the square of the Macdonald function and other hypergeometric functions. In fact, we will define the composition orthogonality in the following way.
{\bf Definition 1.} {\it Let $\omega(t), \varphi(t),\ t \in [a,b],\ -\infty \le a < b \le \infty$, be nonnegative functions and $\theta = tDt$. Let $f, g$ be complex-valued functions such that the operators $f(\theta)$ and $g(\theta)$ commute. Then $f,\ g $ are compositionally orthogonal with respect to the measure $\omega(t)dt$ relatively to the function $\varphi$ if }
$$\int_a^b f(\theta) g(\theta) \{ \varphi(t)\} \omega(t) dt = 0.\eqno(1.1)$$
We will see in the sequel that this definition suits well with the vector space of polynomials over $\mathbb{R}$, when the composition orthonormality of the sequence $\{P_n\}_{n\ge 0}$ is defined by
$$\int_a^b P_n(\theta) P_m(\theta) \{ \varphi(t)\} \omega(t) dt = \delta_{n,m},\eqno(1.2)$$
where $\delta_{n,m},\ n,m\in\mathbb{N}_{0}$ is the Kronecker symbol. Moreover, for some class of functions $\varphi$ it is possible to transform the left-hand side of the equality (1.2) to the usual orthogonality with respect to a new weight function. Then since the $n$-th power of the operator $\theta$ satisfies the Viskov-type identity \cite{Viskov}
$$ \theta^n = \left( xDx\right)^n = x^n D^n x^n,\quad n \in\mathbb{N}_{0},\eqno(1.3)$$
we can get via integration by parts new properties of the sequence $\{P_n\}_{n\ge 0}$ and its relationship, for instance, with classical orthogonal polynomials. In fact, let us consider a class of functions $\varphi$ representable by the modified Laplace transform of some nonnegative function $\psi$
$$\varphi(t)= {1\over t} \int_0^\infty e^{-x/t} \psi(x) dx,\ t \in [a,b] \subset (0, \infty).\eqno(1.4)$$
Hence it is easily seen that
$$\theta^k \left\{ t^{-1} e^{-x/t} \right\} = \left( t D t\right)^k \left\{ t^{-1} e^{-x/t} \right\} = x^k t^{-1} e^{-x/t},\quad k \in \mathbb{N}_0,\eqno(1.5)$$
and therefore, differentiating under the integral sign in (1.4), we derive
$$ \theta^k \left\{ \varphi(t) \right\} = {1\over t} \int_0^\infty e^{-x/t} x^k \psi(x) dx,\quad k \in \mathbb{N}_0. \eqno(1.6)$$
This is indeed justified, for instance, by the assumed convergence of the integral
$$ \int_0^\infty e^{-x/b} x^k \psi(x) dx < \infty, \quad k \in \mathbb{N}_0.\eqno(1.7)$$
Consequently, returning to (1.2) we write its left-hand side in the form
$$ \int_a^b P_n(\theta) P_m(\theta) \{ \varphi(t)\} \omega(t) dt = \int_a^b \int_0^\infty e^{-x/t} P_n(x) P_m(x) \psi(x) \omega(t) {dx dt \over t}.\eqno(1.8) $$
The interchange of the order of integration on the right-hand side in (1.8) is permitted by Fubini's theorem under the imposed condition
$$ \int_a^b \int_0^\infty e^{-x/t} x^k \psi(x) \omega(t) {dx dt \over t} < \infty,\quad k \in \mathbb{N}_0, \eqno(1.9)$$
and, combining with (1.2), we find the equalities
$$\int_a^b P_n(\theta) P_m(\theta) \{ \varphi(t)\} \omega(t) dt = \int_0^\infty P_n(x) P_m(x) \psi(x) \Omega (x) dx = \delta_{n,m},\eqno(1.10)$$
where
$$ \Omega (x) = \int_a^b e^{-x/t} \omega(t) {dt \over t},\quad x >0.\eqno(1.11) $$
Thus we see that the sequence $\{P_n\}_{n\ge 0}$ is orthonormal over $(0,\infty)$ with respect to the measure $\Omega (x) dx$. Moreover, one can consider
the left-hand side of the first equality in (1.10) as an inner product on the vector space of polynomials over $\mathbb{R}$ (a pre-Hilbert space)
$$\langle p, q \rangle = \int_a^b p (\theta) q (\theta) \{ \varphi(t)\} \omega(t) dt,\eqno(1.12)$$
inducing the norm by the equality
$$ ||p||= \sqrt{\langle p, p \rangle} = \left( \int_0^\infty p^2(x) \psi(x) \Omega (x) dx\right)^{1/2}.\eqno(1.13)$$
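The Viskov-type identity (1.3) used above is also easy to verify symbolically; a minimal check (assuming Python with SymPy; purely illustrative) reads:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.Function('f')

def theta(expr):
    # theta = x D x: multiply by x, differentiate, multiply by x
    return x * sp.diff(x * expr, x)

for n in range(1, 5):
    lhs = f(x)
    for _ in range(n):
        lhs = theta(lhs)                       # theta^n f
    rhs = x**n * sp.diff(x**n * f(x), (x, n))  # x^n D^n (x^n f)
    print(n, sp.simplify(lhs - rhs) == 0)      # True for each n
\end{verbatim}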
\section{The use of classical orthogonal polynomials}
\subsection{Laguerre polynomials}
We begin to consider sequences of orthogonal polynomials which are compositionally orthogonal with respect to the measure $t^\nu e^{-t} dt$ over $\mathbb{R}_+$ related to Laguerre polynomials $\{L_n^\nu\}_{n\ge 0},\ \nu > -1$. In fact, letting $\omega(t)= t^\nu e^{-t},\ t > 0$ in (1.2), we get
$$\int_0^\infty P_n(\theta) P_m(\theta) \{ \varphi(t)\} t^\nu e^{-t} dt = \delta_{n,m}.\eqno(2.1)$$
The corresponding integral (1.11) is calculated in \cite{Bateman}, Vol. II, and we obtain
$$\Omega(x)= \int_0^\infty e^{-x/t -t} t^{\nu-1} dt = 2 x^{\nu/2} K_\nu\left( 2\sqrt x\right) \equiv \rho_\nu(x),\ x > 0,\eqno(2.2)$$
where $K_\nu(z)$ is the modified Bessel function or Macdonald function \cite{YaL}. The function $\rho_{\nu}$ has the Mellin-Barnes integral representation in the form (cf. \cite{YAP})
$$
\rho_\nu(x)= \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \Gamma(\nu+s) \Gamma (s) x^{-s} ds\ , \quad x, \gamma \in \mathbb{R}_{+},\ \nu \in \mathbb{R},\eqno(2.3)
$$
where $\Gamma(z)$ is Euler's gamma-function \cite{Bateman}, Vol. I. The asymptotic behavior of the modified Bessel function at infinity and near the origin \cite{Bateman}, Vol. II gives the corresponding values for the function $\rho_\nu,\ \nu \in \mathbb{R}$. Precisely, we have
$$\rho_\nu (x)= O\left( x^{(\nu-|\nu|)/2}\right),\ x \to 0,\ \nu\neq 0, \quad \rho_0(x)= O( \log x),\ x \to 0,\eqno(2.4)$$
$$ \rho_\nu(x)= O\left( x^{\nu/2- 1/4} e^{- 2\sqrt x} \right),\ x \to +\infty.\eqno(2.5)$$
Therefore, if the condition (cf. (1.9))
$$ \int_0^\infty x^k \rho_\nu(x) \psi(x) dx < \infty,\quad k \in \mathbb{N}_0 \eqno(2.6)$$
holds valid, we arrive at the following proposition.
{\bf Proposition 1.} {\it Let $\nu > -1$ and let $\varphi, \psi$ be nonnegative functions defined on $\mathbb{R}_+$ which are related by the modified Laplace transform $(1.4)$. Then, under condition $(2.6)$ and the finiteness of the integral
$$ \int_0^\infty x^k e^{-x/M} \psi(x) dx < \infty,\quad k \in \mathbb{N}_0 \eqno(2.7)$$
for some $M >0$, the sequence $\{P_n\}_{n\ge 0}$ of orthogonal polynomials with respect to the measure $\rho_\nu(x) \psi(x) dx$ over $\mathbb{R}_+$ is compositionally orthogonal in the sense of Laguerre relatively to the function $\varphi$, i.e.}
$$ \int_0^\infty P_n(\theta) P_m(\theta) \{ \varphi(t)\} t^\nu e^{-t} dt= \int_0^\infty P_n(x) P_m(x) \rho_\nu(x) \psi(x) dx = \delta_{n,m}.\eqno(2.8)$$
\begin{proof} Since via (2.7) the integral
$$ {1\over t} \int_0^\infty e^{-x/t} x^k \psi(x) dx, \quad k \in \mathbb{N}_0$$
converges uniformly with respect to $t \in [1/M, M], M >0$, the consecutive differentiation under the integral sign is allowed, and we derive, recalling (1.6)
$$P_n(\theta) P_m(\theta) \{ \varphi(t)\} = P_n(\theta) P_m(\theta) \left\{ {1\over t} \int_0^\infty e^{-x/t} \psi(x) dx \right\}$$
$$ = {1\over t} \int_0^\infty e^{-x/t} P_n(x) P_m(x) \psi(x) dx.$$
Then
$$ \int_0^\infty P_n(\theta) P_m(\theta) \{ \varphi(t)\} t^\nu e^{-t} dt = \lim_{M\to \infty} \int_{1/M}^M \int_0^\infty e^{-x/t} P_n(x) P_m(x) \psi(x) t^{\nu-1} e^{-t} dx dt $$
$$= \int_0^\infty P_n(x) P_m(x) \rho_\nu(x) \psi(x) dx = \delta_{n,m},$$
where the latter equality is guaranteed by condition (2.6) and Fubini's theorem.
\end{proof}
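For the reader's convenience, the kernel (2.2) is straightforward to check numerically; a small sketch (assuming Python with SciPy; all names are illustrative) compares the Laplace-type integral with $2x^{\nu/2}K_\nu(2\sqrt x)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def rho(nu, x):
    # rho_nu(x) = 2 x^{nu/2} K_nu(2 sqrt(x)), Eq. (2.2)
    return 2.0 * x ** (nu / 2.0) * kv(nu, 2.0 * np.sqrt(x))

def rho_by_laplace(nu, x):
    # left-hand side of (2.2): int_0^inf exp(-x/t - t) t^{nu-1} dt
    val, _ = quad(lambda t: np.exp(-x / t - t) * t ** (nu - 1.0), 0.0, np.inf)
    return val

for nu, x in [(0.5, 1.0), (1.3, 2.5), (2.0, 0.7)]:
    print(nu, x, rho(nu, x), rho_by_laplace(nu, x))  # the two values agree
\end{verbatim}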
{\bf Remark 1.} {\it Letting $\nu \ge 0, \psi(x)= x^\alpha,\ \alpha > -1$, we find the sequence $\{P^{\nu,\alpha}_n\}_{n\ge 0}$ of Prudnikov's orthogonal polynomials studied in \cite{YAP}}
$$ \int_0^\infty P^{\nu,\alpha}_n(x) P^{\nu,\alpha}_m(x) \rho_\nu(x) x^\alpha dx = \Gamma(\alpha +1 ) \int_0^\infty P_n(\theta) P_m(\theta) \{ t^\alpha\} t^\nu e^{-t} dt = \delta_{n,m}.\eqno(2.9)$$
\subsection{Hermite polynomials} Other interesting case is the Hermite orthogonality with respect to the measure $e^{-t^2} dt$ over $\mathbb{R}$. Our method will work on the following even extension of the modified Laplace transform (1.4)
$$\varphi( |t |)= {1\over| t|} \int_0^\infty e^{-x/|t| } \psi(x) dx,\ t \in \mathbb{R} \backslash{\{0\}}.\eqno(2.10)$$
Hence the Parseval identity for the Mellin transform \cite{Tit} suggests the equality
$$ \int_0^\infty e^{-x/t -t^2} {dt\over t} = {1\over 4\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \Gamma\left({s\over 2}\right) \Gamma (s)\ x^{-s} ds,\quad x,\gamma > 0. \eqno(2.11)$$
Invoking the duplication formula for the gamma function \cite{Bateman}, we get the right-hand side of (2.11) in the form
$$ {1\over 4\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \Gamma\left({s\over 2}\right) \Gamma (s)\ x^{-s} ds = {1\over 4\pi^{3/2} i} \int_{\gamma/2 -i\infty}^{\gamma /2 +i\infty} \Gamma^2 \left(s\right) \Gamma \left({1\over 2}+ s \right)\ \left( {x\over 2}\right)^{- 2s} ds$$
$$ = {1\over 2\sqrt\pi}\ \rho_{1/2, 2} \left({x^2\over 4}\right),\eqno(2.12)$$
where by $\rho_{\nu, k}(x)$ we denote the ultra-exponential weight function introduced in \cite{YAP}
$$\rho_{\nu, k}(x) = {1\over 2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \Gamma^k \left(s\right) \Gamma \left(\nu + s \right) x^{-s}ds, \ x,\nu,\gamma >0, \ k \in \mathbb{N}_0 .\eqno(2.13)$$
{\bf Proposition 2.} {\it Let $t >0,\ x \in \mathbb{R},\ \varphi(t) , \psi (x)$ be nonnegative functions which are related by $(2.10)$. If $\psi$ is even then under conditions $(2.7)$ and
$$ \int_{-\infty}^\infty |x|^k \rho_{1/2, 2} \left({x^2\over 4}\right) \psi(x) dx < \infty,\quad k \in \mathbb{N}_0 \eqno(2.14)$$
the sequence $\{P_n\}_{n\ge 0}$ of orthogonal polynomials with respect to the measure $2^{-1}\pi^{-1/2} \rho_{1/2, 2} \left({x^2/4}\right) \psi(x) dx$ over $\mathbb{R}$ is compositionally orthogonal in the sense of Hermite relatively to the function $\varphi(|t|)$, i.e.}
$$ \int_{-\infty}^\infty P_n(\theta) P_m(\theta) \{ \varphi(|t|)\} e^{-t^2} dt= \int_{-\infty}^\infty P_n(x) P_m(x) \rho_{1/2, 2} \left({x^2\over 4}\right) \psi(x) { dx\over \sqrt \pi} = \delta_{n,m}.\eqno(2.15)$$
\begin{proof} Indeed, since
$$ \int_{-\infty}^\infty P_n(\theta) P_m(\theta) \{ \varphi(|t|)\} e^{-t^2} dt = \int_{0}^\infty \left[ P_n(\theta) P_m(\theta) + P_n(- \theta) P_m(- \theta)\right] \{ \varphi(t)\} e^{-t^2} dt $$
we have due to (1.4), (1.6), (2.7)
$$\left[ P_n(\theta) P_m(\theta) + P_n(- \theta) P_m(- \theta)\right] \{ \varphi(t)\} = {1\over t} \int_0^\infty e^{-x/t} \left[ P_n(x) P_m(x) + P_n(- x) P_m(- x) \right] \psi(x) dx.\eqno(2.16)$$
Thus appealing to (2.11), (2.12), (2.14) and Fubini's theorem, we obtain finally from (2.16)
$$ \int_{0}^\infty \int_0^\infty e^{-x/t- t^2} \left[ P_n(x) P_m(x) + P_n(- x) P_m(- x) \right] \psi(x) { dx dt \over t}$$
$$= \int_0^\infty \left[ P_n(x) P_m(x) + P_n(- x) P_m(- x) \right] \rho_{1/2, 2} \left({x^2\over 4}\right) \psi(x) {dx\over \sqrt \pi}$$
$$= \int_{-\infty}^\infty P_n(x) P_m(x) \rho_{1/2, 2} \left({x^2\over 4}\right) \psi(x) { dx\over 2\sqrt \pi} = \delta_{n,m}.$$
\end{proof}
\subsection{Jacobi polynomials} The modified sequence $\{P_n^{\alpha,\beta}(2t-1)\}_{n\ge 0}$ of these classical polynomials is orthogonal with respect to the measure $(1-t)^{\alpha} t^\beta dt,\ \alpha, \beta > -1$ over the interval $[0,1].$ Therefore the kernel $\Omega(x),\ x >0$ (1.11) is calculated accordingly, and we obtain
$$\Omega(x) = \int_{0}^1 (1-t)^{\alpha } t^{\beta-1} e^{-x/t} dt = e^{-x} \int_{0}^\infty t^{\alpha}\ (1+t)^{-\alpha- \beta-1} e^{-xt} dt= \Gamma(1+\alpha) e^{-x} U(1+\alpha, 1-\beta, x),\eqno(2.17)$$
where $U(a,b,z)$ is the Tricomi function \cite{nist}. Hence we arrive at
{\bf Proposition 3.} {\it Let $\alpha > -1, \beta > 0, \varphi, \psi$ be nonnegative functions defined on $\mathbb{R}_+$ which are related by the modified Laplace transform $(1.4)$. Then under the condition
$$\int_0^\infty x^k e^{-x} \psi(x) dx < \infty, \quad k \in \mathbb{N}_0\eqno(2.18)$$
the sequence $\{P_n\}_{n\ge 0}$ of orthogonal polynomials with respect to the measure $e^{-x} U(1+\alpha, 1-\beta, x) \psi(x) dx$ over $\mathbb{R}_+$ is compositionally orthogonal in the sense of Jacobi relatively to the function $\varphi$, i.e.}
$$ \int_0^1 P_n(\theta) P_m(\theta) \{ \varphi(t)\} (1-t)^\alpha t^\beta dt= \Gamma(1+\alpha) \int_0^\infty P_n(x) P_m(x) e^{-x} U(1+\alpha, 1-\beta, x) \psi(x) dx = \delta_{n,m}.\eqno(2.19)$$
\begin{proof} Indeed, we have
$$ P_n(\theta) P_m(\theta) \{ \varphi(t)\} = P_n(\theta) P_m(\theta) \left\{ {1\over t} \int_0^\infty e^{-x/t} \psi(x) dx \right\} = {1\over t} \int_0^\infty e^{-x/t} P_n(x) P_m(x) \psi(x) dx ,$$
where the consecutive differentiation under the integral sign is permitted owing to the estimate
$$ {1\over t} \int_0^\infty e^{-x/t} x^k \psi(x) dx \le {1\over \delta} \int_0^\infty e^{-x} x^k \psi(x) dx,\ 0 < \delta \le t \le 1,\ k \in \mathbb{N}_0,$$
and the latter integral with respect to $x$ is finite via (2.18). Hence, taking into account (2.17),
$$ \int_0^1 P_n(\theta) P_m(\theta) \{ \varphi(t)\} (1-t)^\alpha t^\beta dt= \lim_{\delta \to 0+} \int_\delta^1 P_n(\theta) P_m(\theta) \{ \varphi(t)\} (1-t)^\alpha t^\beta dt$$
$$= \lim_{\delta \to 0+} \int_\delta^1 \int_0^\infty e^{-x/t} P_n(x) P_m(x) \psi(x) (1-t)^\alpha t^{\beta-1} dx dt $$
$$= \int_0^1 \int_0^\infty e^{-x/t} P_n(x) P_m(x) \psi(x) (1-t)^\alpha t^{\beta-1} dx dt$$
$$ = \Gamma(1+\alpha) \int_0^\infty P_n(x) P_m(x) e^{-x} U(1+\alpha, 1-\beta, x) \psi(x) dx = \delta_{n,m},$$
where the interchange of the order of integration is possible by Fubini's theorem due to (2.18) and an elementary estimate of the Tricomi function
$$\Gamma(1+\alpha) U(1+\alpha, 1-\beta, x) = \int_{0}^\infty t^{\alpha}\ (1+t)^{-\alpha- \beta-1} e^{-xt} dt \le \int_{0}^\infty t^{\alpha}\ (1+t)^{-\alpha- \beta-1} dt =
B(1+\alpha,\beta),$$
where $B(a,b)$ is the Euler beta function \cite{Bateman}, Vol. I.
\end{proof}
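The kernel (2.17) can likewise be verified numerically; a brief sketch (assuming Python with SciPy, where \texttt{hyperu} denotes the Tricomi function $U(a,b,x)$) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu

def omega_quad(alpha, beta, x):
    # int_0^1 (1-t)^alpha t^{beta-1} exp(-x/t) dt
    val, _ = quad(lambda t: (1 - t) ** alpha * t ** (beta - 1.0)
                  * np.exp(-x / t), 0.0, 1.0)
    return val

def omega_closed(alpha, beta, x):
    # Gamma(1+alpha) e^{-x} U(1+alpha, 1-beta, x), Eq. (2.17)
    return gamma(1 + alpha) * np.exp(-x) * hyperu(1 + alpha, 1 - beta, x)

for alpha, beta, x in [(0.5, 1.2, 0.7), (1.0, 2.0, 1.5)]:
    print(omega_quad(alpha, beta, x), omega_closed(alpha, beta, x))
\end{verbatim}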
\section{Properties of Prudnikov's weight functions and their products}
In this section we will exhibit properties of the weight functions $\rho_{\nu+1}(x)\rho_\nu(x),\ \rho_\nu^2(x),\ x >0$, where $\rho_\nu(x)$ is defined by (2.2), (2.3), in order to use them in the sequel for investigating the corresponding orthogonal and multiple orthogonal polynomials. In particular, we will establish their differential properties, integral representations and differential equations. Concerning the function $\rho_\nu$, we found in \cite{YAP} the following integral representation in terms of Laguerre polynomials
$${(-1)^n x^n\over n!}\ \rho_\nu(x)= \int_0^\infty t^{\nu+n -1} e^{-t - x/t} L_n^\nu(t) dt,\quad n \in\mathbb{N}_{0}.\eqno(3.1)$$
It has a relationship with the Riemann-Liouville fractional integral \cite{YaL}
$$ \left( I_{-}^\alpha f \right) (x) \equiv \left( I_{-}^\alpha f(x) \right) = {1\over \Gamma(\alpha)} \int_x^\infty (t-x)^{\alpha-1} f(t) dt,\quad {\rm Re} \alpha > 0,\eqno(3.2)$$
namely, we get the formula
$$\rho_\nu(x)= \left( I_{-}^\nu \rho_0 \right) (x),\ \nu >0.\eqno(3.3)$$
Further, the index law for fractional integrals immediately implies
$$ \rho_{\nu+\mu} (x)= \left( I_{-}^\nu \rho_\mu \right) (x)= \left( I_{-}^\mu \rho_\nu \right) (x).\eqno(3.4)$$
The corresponding definition of the fractional derivative presumes the relation $ D^\mu_{-}= - D I_{-}^{1-\mu}$. Hence for the ordinary $n$-th derivative of $\rho_\nu$ we find
$$D^n \rho_\nu(x)= (-1)^n \rho_{\nu-n} (x),\quad n \in \mathbb{N}_0.\eqno(3.5)$$
The function $\rho_\nu$ possesses the following recurrence relation (see \cite{YAP})
$$\rho_{\nu+1} (x) = \nu \rho_\nu(x)+ x \rho_{\nu-1} (x),\quad \nu \in \mathbb{R}.\eqno(3.6)$$
In the operator form it can be written as follows
$$\rho_{\nu+1} (x) = \left( \nu - xD \right) \rho_\nu(x).\eqno(3.7)$$
Now we will derive the Mellin-Barnes representations for the product of functions $\rho_{\nu+1}\rho_\nu$ and for the square $\rho_\nu^2$. In fact, this can be done using the related formulas for the product of Macdonald functions. Thus, appealing to Entries 8.4.23.31, 8.4.23.27 in \cite{PrudnikovMarichev}, Vol. III, we obtain
$$\rho_{\nu+1} (x) \rho_\nu (x) = {4^{-\nu} \over 2 i \sqrt\pi} \int_{\gamma-i\infty}^{\gamma +i\infty} \frac{\Gamma( s+2\nu+1) \Gamma(s+\nu) \Gamma(s)}
{\Gamma(s+\nu+1/2)} (4x)^{-s} ds,\quad x,\gamma > 0,\eqno(3.8) $$
$$\rho^2_\nu (x) = {4^{-\nu} \over i\sqrt\pi} \int_{\gamma-i\infty}^{\gamma +i\infty} \frac{\Gamma( s+2\nu) \Gamma(s+\nu) \Gamma(s)}
{\Gamma(s+\nu+1/2)} (4x)^{-s} ds,\quad x,\gamma > 0.\eqno(3.9) $$
Immediate consequences of these formulas are relationships of the products of Prudnikov's weight functions $\rho_{\nu+1}\rho_\nu,\ \rho_\nu^2$ with $\rho_{2\nu+1},\ \rho_{2\nu}$, respectively. Indeed, this can be done via Entry 8.4.2.3 in \cite{PrudnikovMarichev}, Vol. III and the Parseval equality for the Mellin transform. Hence we deduce from (3.8), (3.9), respectively,
$$\rho_{\nu+1} (x) \rho_\nu (x) = 4^{-\nu} \int_0^1 (1-t)^{-1/2} t^{\nu-1} \rho_{2\nu+1}\left({4x\over t}\right) dt$$
$$= {x^{\nu} \sqrt\pi\over \Gamma(\nu+1/2)} \int_0^1 (1-t)^{\nu-1/2} t^{-\nu-1} \rho_{\nu+1}\left({4x\over t}\right) dt,\eqno(3.10)$$
$$ \rho^2_\nu (x) = 2^{1-2\nu} \int_0^1 (1-t)^{-1/2} t^{\nu-1} \rho_{2\nu}\left({4x\over t}\right) dt$$
$$= {2 x^\nu \sqrt\pi \over \Gamma(\nu+1/2)} \int_0^1 (1-t)^{\nu-1/2} t^{-\nu-1} \rho_{\nu}\left({4x\over t}\right) dt .\eqno(3.11)$$
An ordinary differential equation for the function $\rho_{\nu+1}\rho_\nu$ is given by
{\bf Proposition 4.} {\it The function $u_\nu = \rho_{\nu+1}\rho_\nu$ satisfies the following third order differential equation}
$$x^2{d^3u_\nu\over dx^3} + x(2- 3\nu) {d^2 u_\nu\over dx^2} + 2 (\nu( \nu-1) -2x) {d u_\nu\over dx} + 2(2\nu-1) u_\nu(x) = 0.\eqno(3.12)$$
\begin{proof} Differentiating $u_\nu$ with the use of (3.5), we have
$$ {d u_\nu\over dx} = - \rho_\nu^2(x) - \rho_{\nu+1}(x) \rho_{\nu-1} (x).\eqno(3.13)$$
Multiplying both sides of (3.13) by $x$ and employing the recurrence relation (3.6), we get
$$x {d u_\nu\over dx} = - x \rho_\nu^2(x) + \nu \rho_{\nu+1}(x) \rho_{\nu} (x) - \rho^2_{\nu+1}(x).$$
Differentiating both sides of the latter equality, using (3.6), (3.13) and the notation $u_\nu= \rho_{\nu+1}\rho_\nu$, we obtain
$${d\over dx} \left(x {d u_\nu\over dx}\right) = - (1+2\nu) \rho_\nu^2(x) +\nu {d u_\nu\over dx}+ 4u_\nu (x).\eqno(3.14)$$
Differentiating (3.14) and multiplying the result by $x$, we recall (3.6) to find
$$x {d^2\over dx^2} \left(x {d u_\nu\over dx}\right) = - 2\nu (1+2\nu) \rho^2_\nu(x) +\nu x {d^2 u_\nu\over dx^2}+ 4x {d u_\nu\over dx} + 2(1+2\nu) u_\nu(x).\eqno(3.15)$$
Hence, expressing $ (1+2\nu) \rho^2_\nu(x)$ from (3.14) and carrying out the differentiation on the left-hand side of (3.15), we get (3.12).
\end{proof}
Concerning the differential equation for the function $\rho_\nu^2$, we prove
{\bf Proposition 5.} {\it The function $h_\nu = \rho^2_\nu$ satisfies the following third order differential equation}
$$x^2{d^3h_\nu\over dx^3} + 3x(1- \nu) {d^2 h_\nu\over dx^2} + (2\nu^2+1- 3\nu- 4x) {d h_\nu\over dx} + 2(2\nu-1) h_\nu(x) = 0.\eqno(3.16)$$
\begin{proof} Since $x \left(\rho^2_\nu(x) \right)^\prime = -2x \rho_{\nu-1}(x) \rho_{\nu} (x) = 2\nu \rho^2_\nu(x) - 2 \rho_{\nu+1}(x) \rho_{\nu} (x) = 2\nu h_\nu(x) - 2 \rho_{\nu+1}(x) \rho_{\nu} (x) $ we derive, owing to (3.5), (3.6)
$$x {d\over dx} \left( x {d h_\nu\over dx} - 2\nu h_\nu(x) \right) = -2 x {d\over dx} \left( \rho_{\nu+1}(x) \rho_{\nu} (x)
\right)$$
$$ = 2x\ h_\nu(x) - 2\nu \rho_{\nu+1}(x) \rho_{\nu} (x) + 2 \rho_{\nu+1}^2(x)= 2 \left( x - \nu^2 \right) h_\nu(x) +\nu x {d h_\nu\over dx} + 2 \rho_{\nu+1}^2(x).$$
Hence one more differentiation yields
$$ {d\over dx} \left( x {d\over dx} \left( x {d h_\nu\over dx} - 2\nu h_\nu(x) \right) \right) = {d\over dx} \left( 2 \left( x - \nu^2 \right) h_\nu(x) +\nu x {d h_\nu\over dx}\right) - 4 \rho_{\nu+1}(x) \rho_{\nu} (x).$$
But from the beginning of the proof we find
$$2 \rho_{\nu+1}(x) \rho_{\nu} (x) = 2\nu h_\nu(x) - x {d h_\nu\over dx}.\eqno(3.17)$$
Therefore, substituting this expression into the previous equality and carrying out the differentiation, we arrive at (3.16), which completes the proof.
\end{proof}
{\bf Corollary 1}. {\it The following recurrence relations between functions $u_\nu,\ h_\nu$ hold}
$$ u_\nu= \nu h_\nu + x u_{\nu-1},\eqno(3.18)$$
$$h_{\nu+1} = \nu^2 h_\nu + 2 x \nu u_{\nu-1} + x^2 h_{\nu-1} = 2\nu u_\nu+ x^2 h_{\nu-1} - \nu^2 h_\nu .\eqno(3.19)$$
\begin{proof} Equality (3.18) is a direct consequence of (3.17) and (3.5). Equalities (3.19), in turn, are obtained by squaring both sides of (3.6) and employing (3.18).
\end{proof}
\section{Orthogonal polynomials with $\rho^2_\nu$ weight function}
The object of this section is to characterize the sequence of orthogonal polynomials $\{P_n\}_{n\ge 0}$, satisfying the orthogonality conditions
$$\int_{0}^{\infty} P_{m} (x) P_{n}(x) \rho^2_\nu(x) dx = \delta_{n,m}, \quad \nu > - {1\over 2}.\eqno(4.1)$$
Clearly, up to a normalization factor conditions (4.1) are equivalent to the equalities
$$\int_{0}^{\infty} P_{n} (x) \rho^2_\nu(x) x^m dx = 0,\quad m=0,1,\dots, n-1,\ \quad n \in \mathbb{N}.\eqno(4.2)$$
The moments of the weight $\rho^2_\nu(x)$ can be obtained immediately from the Mellin-Barnes representation (3.9), treating it as the inverse Mellin transform \cite{Tit}. Hence we get
$$ \int_{0}^{\infty} \rho^2_\nu(x) x^\mu dx = \sqrt\pi \ \frac{\Gamma( 1+\mu+2\nu) \Gamma(1+\mu+\nu) \Gamma(1+\mu)}
{ 2^{1+2(\nu+\mu)}\ \Gamma(\mu+\nu+3/2)}, \ \mu > \max\{-1,\ -1-\nu,\ -1-2\nu\}.\eqno(4.3)$$
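The moment formula (4.3) is convenient for numerical work and can be cross-checked by quadrature; a minimal sketch (assuming Python with SciPy; purely illustrative) reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

def rho(nu, x):
    return 2.0 * x ** (nu / 2.0) * kv(nu, 2.0 * np.sqrt(x))

def moment_quad(nu, mu):
    val, _ = quad(lambda x: rho(nu, x) ** 2 * x ** mu, 0.0, np.inf, limit=200)
    return val

def moment_closed(nu, mu):
    # right-hand side of (4.3)
    return (np.sqrt(np.pi) * gamma(1 + mu + 2 * nu) * gamma(1 + mu + nu)
            * gamma(1 + mu) / (2 ** (1 + 2 * (nu + mu)) * gamma(mu + nu + 1.5)))

for nu, mu in [(0.3, 0), (0.3, 1), (1.0, 2)]:
    print(nu, mu, moment_quad(nu, mu), moment_closed(nu, mu))
\end{verbatim}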
Furthermore, the sequence $ \left\{P_n\right\}_{n \ge 0}$ satisfies the 3-term recurrence relation in the form
$$x P_n (x) = A_{n+1} P_{n+1} (x) + B_n P_n (x) + A_n P_{n-1} (x),\eqno(4.4)$$
where $P_{-1} (x)\equiv 0,\ P_n(x)= \sum_{k=0}^n a_{n,k} x^k,\ a_{n,n} \neq 0$ and
$$A_{n+1}= {a_n\over a_{n+1} }, \quad B_{n}= {b_n\over a_n} - {b_{n+1}\over a_{n+1}},\quad a_n\equiv a_{n,n},\ b_n\equiv a_{n,n-1}.$$
Based on properties of the functions $\rho^2_\nu,\ \rho_{\nu+1}\rho_\nu$ and the orthonormality (4.1), we establish
{\bf Proposition 6}. {\it Let $\nu > -1$. The following formulas hold}
$$ \int_{0}^{\infty} P^2_{n}(x) \rho_{\nu+1}(x)\rho_\nu(x) dx = {1\over 2} + \nu + n,\eqno(4.5)$$
$$ \int_{0}^{\infty} P^2_{n}(x) \ x \rho_{\nu}(x) \rho_{\nu-1}(x) dx = {1\over 2} + n,\eqno(4.6)$$
$$ \int_{0}^{\infty} P^2_{n}(x) \ x^2 \rho_{\nu}(x) \rho_{\nu-2}(x) dx = B_n- (\nu-1) \left( {1\over 2} + n\right),\eqno(4.7)$$
$$ \int_{0}^{\infty} P^2_{n}(x) \rho_{\nu+2}(x)\rho_\nu(x) dx = B_n+ (\nu+1)\left( {1\over 2} + \nu + n\right).\eqno(4.8)$$
\begin{proof} In fact, recurrence relation (3.6) and orthonormality (4.1) imply
$$ \int_{0}^{\infty} P^2_{n}(x) \rho_{\nu+1}(x)\rho_\nu(x) dx = \nu + \int_{0}^{\infty} P^2_{n}(x)\ x \rho_{\nu}(x)\rho_{\nu-1}(x) dx.\eqno(4.9)$$
Hence, integrating by parts in the latter integral and eliminating the integrated terms by virtue of the asymptotic behavior (2.4), (2.5), we find
$$\int_{0}^{\infty} P^2_{n}(x) \ x \rho_{\nu}(x)\rho_{\nu-1}(x) dx = \int_0^\infty \left( {1\over 2} P^2_{n}(x) + x P_{n}(x) P_n^\prime (x) \right) \rho^2_{\nu}(x) dx.\eqno(4.10) $$
Therefore, we have from (4.9), (4.10) and orthogonality (4.1)
$$ \int_{0}^{\infty} P^2_{n}(x) \rho_{\nu+1}(x)\rho_\nu(x) dx = {1\over 2} + \nu + n,$$
which proves (4.5). Equality (4.6) is a direct consequence of (3.6) and (4.1). The same idea is used to prove (4.7) and (4.8), employing (4.3) as well.
\end{proof}
The composition orthogonality which is associated with the sequence $\{P_n\}_{n\ge 0}$ is given by
{\bf Theorem 1.} {\it Let $\nu > -1/2$. The sequence $\{P_n\}_{n\ge 0}$ is compositionally orthogonal in the sense of Laguerre relatively to the function $t^\nu e^t \Gamma(-\nu,t)$, where $\Gamma(\mu,z)$ is the incomplete gamma function, i.e.}
$$\ \int_{0}^{\infty} t^{\nu } e^{-t} P_n\left(\theta\right) P_m\left(\theta\right)\left( t^\nu e^t \Gamma(-\nu, t) \right) dt = { \delta_{n,m}\over \Gamma(1+\nu)}.\eqno(4.11)$$
\begin{proof} Writing $P_n$ explicitly, we appeal to the integral representation (3.1) for $\rho_\nu(x)$ to write the left-hand side of (4.2) in the form
$$ \int_{0}^{\infty} P_{n} (x) \rho^2_\nu(x) x^m dx = (-1)^m m! \sum_{k=0}^n a_{n,k} (-1)^k k! \int_{0}^{\infty} \int_{0}^{\infty} t^{\nu+k -1} e^{-t - x/t} L_k^\nu(t) dt$$
$$\times \int_{0}^{\infty} y^{\nu+m -1} e^{-y - x/y} L_m^\nu(y) dy dx.\eqno(4.12) $$
Fubini's theorem allows us to interchange the order of integration in (4.12) due to the convergence of the following integral for all $r_1, r_2 \in \mathbb{N}_0$
$$\int_{0}^{\infty} \int_{0}^{\infty} t^{\nu+k+r_1 -1} e^{-t - x/t} dt \int_{0}^{\infty} y^{\nu+m +r_2 -1} e^{-y - x/y} dy\ dx $$
$$= \int_{0}^{\infty} \int_{0}^{\infty} t^{\nu+k+r_1 } y^{\nu+m +r_2} e^{-t-y } {dt dy\over t+ y} \le {1\over 2} \int_{0}^{\infty} \int_{0}^{\infty} t^{\nu+k+r_1 -1/2 } y^{\nu+m +r_2-1/2 } e^{-t-y } dt dy$$
$$ = {1\over 2} \Gamma(\nu+k+r_1 +1/2) \Gamma(\nu+m+r_2 +1/2).$$
Therefore after integration with respect to $x$ we find from (4.12)
$$ \int_{0}^{\infty} P_{n} (x) \rho^2_\nu(x) x^m dx = (-1)^m m! \sum_{k=0}^n a_{n,k} (-1)^k k! \int_{0}^{\infty} \int_{0}^{\infty} t^{\nu+k} y^{\nu+m } e^{-t-y} L_k^\nu(t) L_m^\nu(y) {dt dy \over t+y}.\eqno(4.13) $$
However, the Rodrigues formula for the Laguerre polynomials and the Viskov-type identity (1.3) suggest writing the right-hand side of (4.13) as follows
$$ (-1)^m m! \sum_{k=0}^n a_{n,k} (-1)^k k! \int_{0}^{\infty} \int_{0}^{\infty} t^{\nu+k} y^{\nu+m } e^{-t-y} L_k^\nu(t) L_m^\nu(y) {dt dy \over t+y}$$
$$= (-1)^m \sum_{k=0}^n a_{n,k} (-1)^k k! \int_{0}^{\infty} t^{\nu+k} e^{-t} L_k^\nu(t) \int_{0}^{\infty} \theta^m \left( y^\nu e^{-y} \right) {dy dt \over t+y} . \eqno(4.14)$$
The inner integral with respect to $y$ on the right-hand side of the latter equality can be treated via integration by parts, and we obtain
$$ \int_{0}^{\infty} \theta^m \left( y^\nu e^{-y} \right) {dy \over t+y} = (-1)^m \int_{0}^{\infty} y^\nu e^{-y} \theta^m \left( {1\over t+y} \right) dy.\eqno(4.15)$$
Working out the differentiation, we find
$$ \theta^m \left( {1\over t+y} \right) = y^m {d^m\over dy^m} \left( \sum_{k=0}^m \binom{m}{k} (-1)^{m-k} t^{m-k} (t+y)^{k-1}\right) $$
$$= (-1)^m t^m y^m {d^m\over dy^m} \left( {1\over t+y}\right) = {m! \ t^m y^m \over (t+y)^{m+1} }.$$
Consequently, combining with (4.14), (4.15), equality (4.13) becomes
$$ \int_{0}^{\infty} P_{n} (x) \rho^2_\nu(x) x^m dx = m! \sum_{k=0}^n a_{n,k} (-1)^k k! \int_{0}^{\infty} t^{\nu+k+m } e^{-t} L_k^\nu(t) \int_{0}^{\infty}
{y^{\nu+m} e^{-y} \over (t+y)^{m+1} } dy dt.\eqno(4.16) $$
Meanwhile, the integral with respect to $y$ in (4.16) is calculated in (2.17) in terms of the Tricomi function up to a simple change of variables. Thus we get
$$ \int_{0}^{\infty} P_{n} (x) \rho^2_\nu(x) x^m dx = m! \Gamma(\nu+m+1) \sum_{k=0}^n a_{n,k} (-1)^k k! \int_{0}^{\infty} t^{2\nu+k+m } e^{-t} L_k^\nu(t) U(\nu+m+1, 1+\nu, t) dt.\eqno(4.17) $$
But appealing to the differential formula 13.3.24 in \cite{nist} for the Tricomi function, we have
$$ m! \ \Gamma(\nu+m+1) t^{\nu+m} U(\nu+m+1, 1+\nu, t) = \Gamma(\nu+1) \theta^m\left( t^\nu U(1+\nu,1+\nu,t)\right)$$
$$ = \Gamma(\nu+1) \theta^m\left( t^\nu e^t \Gamma(-\nu, t) \right),\eqno(4.18)$$
%
where $\Gamma(a,z)$ is the incomplete gamma function
$$ \Gamma(a,z) = \int_z^\infty e^{-u} u^{a-1} du.\eqno(4.19)$$
Hence, plugging this result in (4.17), recalling the Rodrigues formula for Laguerre polynomials and integrating by parts on its right-hand side, we combine with (4.2) to end up with equalities
$$ \int_{0}^{\infty} P_{n} (x) \rho^2_\nu(x) x^m dx = \Gamma(\nu+1) $$
$$\times \int_{0}^{\infty} t^{\nu } e^{-t} P_n\left(\theta\right) \theta^m \left( t^\nu e^t \Gamma(-\nu, t) \right) dt = 0,\quad m=0,1,\dots, n-1,\ n \in \mathbb{N},\eqno(4.20)$$
which yield (4.11) and complete the proof of Theorem 1.
\end{proof}
Further, the right-hand side of the first equality in (4.20) can be rewritten via integration by parts as follows
$$ \int_{0}^{\infty} t^{\nu+m } e^{-t} L_m^\nu(t) P_n\left(\theta\right) \left( t^\nu e^t \Gamma(-\nu, t) \right) dt = 0,\quad m=0,1,\dots, n-1,\ n \in \mathbb{N}.\eqno(4.21)$$
Then we expand the function $ F_n(t)= \theta^n \left( t^\nu e^t \Gamma(-\nu, t) \right)$ in a series of Laguerre polynomials
$$ F_n(t)= \sum_{r=0}^\infty c_{n,r} L_r^\nu(t),\eqno(4.22)$$
where the coefficients $c_{n,r}$ are calculated in terms of the ${}_3F_2$-hypergeometric functions at unity with the aid of Entry 3.31.12.1 in \cite{Brychkov}. Precisely, we get via (4.18)
$$c_{n,r}= {r!\over \Gamma(1+\nu+r)} \int_0^\infty t^\nu e^{-t} L_r^\nu(t) \theta^n \left( t^\nu e^t \Gamma(-\nu, t) \right) dt$$
$$= {r! n! (1+\nu)_n \over \Gamma(1+\nu+r)} \int_0^\infty t^{2\nu+n} e^{-t} L_r^\nu(t) U(\nu+n+1, 1+\nu, t) dt$$
$$= {(1+\nu)_n\over \Gamma(1+\nu)} \left[ {n! r! \ \Gamma(\nu) \over (1+\nu)_r} \ {}_3F_2 \left( n+1, r+1, \nu+n+1; 1-\nu, 1; \ 1 \right) \right.$$
$$\left. + \Gamma(2\nu+n+1) \Gamma(-\nu)\ {}_3F_2 \left( \nu+ r+1, \nu+n+1, 2\nu+n+1; 1+\nu, 1+\nu; \ 1 \right)\right],\eqno(4.23)$$
where $(a)_z$ is the Pochhammer symbol \cite{Bateman}, Vol. I, and the formula remains valid for nonnegative integer $\nu$ by continuity. Hence, after substituting the series (4.22) into (4.21), the orthogonality conditions take the form
$$ \int_{0}^{\infty} t^{\nu+m } e^{-t} L_m^\nu(t) \sum_{k=0}^n a_{n,k} \sum_{r=0}^{2m} c_{k,r} L_r^\nu(t) dt = 0,\quad m=0,1,\dots, n-1,\ n \in \mathbb{N}.\eqno(4.24)$$
Calculating the integral in (4.24) via relation (2.19.14.8) in \cite{PrudnikovMarichev}, Vol. II we obtain
$$d_{m,r} = \int_{0}^{\infty} t^{\nu+m } e^{-t} L_m^\nu(t) L_r^\nu(t) dt = { (-1)^r \over r! } \ (1+\nu)_r\ \Gamma(1+\nu+m) $$
$$\times {}_3F_2 \left( -r, \nu+ m+1, m+1; 1+\nu, 1; \ 1 \right).\eqno(4.25)$$
Hence, equalities (4.24) become the linear system of $n$ algebraic equations with $n+1$ unknowns
$$\sum_{k=0}^n a_{n,k} f_{k,m} = 0,\quad m=0,1,\dots, n-1,\ n \in \mathbb{N},\eqno(4.26)$$
where
$$f_{k,m} = \sum_{r=0}^{2m} c_{k,r} d_{m,r}.\eqno(4.27)$$
Consequently, explicit values of the coefficients $a_{n,k},\ k =1,2,\dots, n$ can be expressed via Cramer's rule in terms of the free coefficient $a_{n,0}$ as follows
$$ a_{n,k} = - a_{n,0}\ { D_{n,k} \over D_{n}},\quad k = 1,\dots, n,\eqno(4.28)$$
where
$$ D_{n}= \begin{vmatrix}
f_{1, 0} & f_{2, 0}& \dots& \dots& f_{n, 0} \\
f_{1, 1} & \dots & \dots& \dots& f_{n, 1} \\
\dots& \dots & \dots& \dots& \dots \\
\vdots & \ddots & \ddots & \ddots& \vdots\\
f_{1, n-1} & \dots& \dots& \dots& f_{n, n-1}\\
\end{vmatrix},\eqno(4.29)$$
$$ D_{n,k}= \begin{vmatrix}
f_{1, 0} & \dots & f_{k-1, 0}& f_{0,0} & f_{k+1, 0} & \dots& f_{n, 0} \\
f_{1, 1} & \dots & f_{k-1, 1}& f_{0,1} & f_{k+1, 1} & \dots& f_{n, 1} \\
\dots& \dots & \dots& \dots& \dots& \dots& \dots \\
\vdots & \ddots & \ddots & \vdots& \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & \vdots& \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & \vdots& \ddots & \ddots & \vdots\\
f_{1, n-1} & \dots & f_{k-1, n-1}& f_{0,n-1} & f_{k+1, n-1} & \dots& f_{n, n-1} \\
\end{vmatrix}.\eqno(4.30)$$
The free coefficient can be determined, in turn, from the orthogonality conditions (4.1), (4.2), which imply the formula
$$\int_{0}^{\infty} P_{n} (x) \rho^2_\nu(x) \ x^n dx = {1\over a_{n,n}}.\eqno(4.31)$$
Therefore from (4.28) and (4.3) we derive
$$ {1\over a_{n,n}} = - { a_{n,0} \sqrt\pi \over D_n }\ \frac{\Gamma( 1+2\nu) \Gamma(1+\nu) }
{ 2^{1+2\nu}\ \Gamma(\nu+3/2)} \ \sum_{k=0}^n D_{n,k} \ (n+k)! \ \frac{( 1+2\nu)_{n+k} (1+\nu)_{n+k} }
{ 4^{n+k}\ (\nu+3/2)_{n+k}},\eqno(4.32)$$
where $D_{n,0}\equiv - D_n.$ Hence
$$ a_{n,0} = \pm \ { D_n \over [ D_{n,n} ]^{1/2} }\left[ \sqrt\pi \ \frac{\Gamma( 1+2\nu) \Gamma(1+\nu) }
{ 2^{1+2\nu}\ \Gamma(\nu+3/2)} \ \sum_{k=0}^n D_{n,k} \ (n+k)! \ \frac{( 1+2\nu)_{n+k} (1+\nu)_{n+k} }
{ 4^{n+k}\ (\nu+3/2)_{n+k}} \right]^{-1/2},\eqno(4.33)$$
where the sign is chosen so that the expressions under the square roots are positive. Assuming also the positivity of the leading coefficient $a_{n,n}$, we have, correspondingly,
$$ a_{n,n} = \mp \ [ D_{n,n}]^{1/2} \left[ \sqrt\pi \ \frac{\Gamma( 1+2\nu) \Gamma(1+\nu) }
{ 2^{1+2\nu}\ \Gamma(\nu+3/2)} \ \sum_{k=0}^n D_{n,k} \ (n+k)! \ \frac{( 1+2\nu)_{n+k} (1+\nu)_{n+k} }
{ 4^{n+k}\ (\nu+3/2)_{n+k}} \right]^{-1/2}.\eqno(4.34)$$
{\bf Theorem 2}. {\it Let $\nu > -1/2$. The sequence of orthogonal polynomials $\left\{P_n\right\}_{n \ge 0}$ can be expressed explicitly, where the coefficients $a_{n,k},\ k=1,2,\dots, n$ are calculated by relations $(4.28)$ and the free term $a_{n,0}$ is defined by the equality $(4.33)$. Besides, it satisfies the 3-term recurrence relation $(4.4)$, where }
$$A_{n+1}= { a_{n,0} \ D_{n+1} \ D_{n,n} \over a_{n+1,0}\ D_n \ D_{n+1,n+1}}, \quad B_n= { D_{n,n-1} \over D_{n,n}} - { D_{n+1,n} \over D_{n+1,n+1}}.\eqno(4.35)$$
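Numerically, the orthonormal coefficients can also be obtained directly from the moments (4.3): if $M$ is the Hankel matrix of moments and $M=LL^{T}$ is its Cholesky factorization, the rows of $L^{-1}$ give the coefficients of $P_0,\dots,P_n$. This is an alternative numerical route to the Cramer's-rule expressions above, not a reformulation of them. A minimal sketch (assuming Python with NumPy/SciPy, and suitable only for moderate $n$ since Hankel moment matrices become ill-conditioned) is:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def moment(nu, k):
    # moments of rho_nu^2 from (4.3) with mu = k
    return (np.sqrt(np.pi) * gamma(1 + k + 2 * nu) * gamma(1 + k + nu)
            * gamma(1 + k) / (2 ** (1 + 2 * (nu + k)) * gamma(k + nu + 1.5)))

def orthonormal_coeffs(nu, n_max):
    # rows of the result: coefficients (increasing powers) of P_0,...,P_{n_max}
    M = np.array([[moment(nu, i + j) for j in range(n_max + 1)]
                  for i in range(n_max + 1)])
    L = np.linalg.cholesky(M)      # Gram matrix of the monomials 1, x, ..., x^n
    return np.linalg.inv(L)        # C satisfies C M C^T = I

C = orthonormal_coeffs(0.5, 4)
M = np.array([[moment(0.5, i + j) for j in range(5)] for i in range(5)])
print(np.round(C @ M @ C.T, 8))    # should print the 5 x 5 identity matrix
\end{verbatim}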
An analog of the Rodrigues formula for the sequence $ \left\{P_n\right\}_{n \in \mathbb{N}_0}$ can be established, appealing to the integral representation of an arbitrary polynomial in terms of the associated polynomial of degree $2n$ (see \cite{YAP}). Hence, employing (2.2), we deduce
$$P_n(x)= {1\over \rho_\nu(x)} \int_0^\infty t^{\nu-1} e^{-t -x/t } q_{2n}(t) dt = {1\over \rho^2_\nu(x)} \int_0^\infty u^{\nu-1} e^{-u -x/u } du \int_0^\infty t^{\nu-1} e^{-t -x/t } q_{2n}(t) dt $$
$$ = {(-1)^n \over \rho^2_\nu(x)} {d^n\over dx^n } \int_0^\infty \int_0^\infty (u t)^{\nu+n-1} e^{-u -t -x(1/u +1/t) } q_{2n}(t) {dt du\over (u+t)^n},\eqno(4.36)$$
where
$$ q_{2n}(x) = \sum_{k=0}^{n} a_{n,k} (-1)^k k! x^k L_k^\nu(x)\eqno(4.37)$$
and the $k$-th differentiation under the integral sign is permitted due to the estimate
$$\int_0^\infty \int_0^\infty (u t)^{\nu+k-1} e^{-u -t -x(1/u +1/t) } \left| q_{2n}(t)\right| {dt du\over (u+t)^k}$$
$$ \le 2^{-k} \int_0^\infty u^{\nu+ k/2 -1} e^{-u} du \int_0^\infty t^{\nu+ k/2 -1} e^{-t} \left| q_{2n}(t)\right| dt $$
$$= 2^{-k} \Gamma\left( \nu+ {k\over 2} \right) \int_0^\infty t^{\nu+ k/2 -1} e^{-t} \left| q_{2n}(t)\right| dt < \infty,\quad k = 1,\dots, n.$$
Then writing
$${(ut)^n\over (u+t)^n} = {1\over (n-1)!} \int_0^\infty e^{ - (1/u+1/t) y} y^{n-1} dy,$$
we get from (4.36)
$$P_n(x)= {(-1)^n \over (n-1)! \ \rho^2_\nu(x)} {d^n\over dx^n } \int_0^\infty \int_0^\infty \int_0^\infty y^{n-1} (u t)^{\nu-1} e^{-u -t -(x+y)/u -(x+y)/t } q_{2n}(t) dt du dy$$
$$= {(-1)^n \over (n-1)! \ \rho^2_\nu(x)} {d^n\over dx^n } \int_0^\infty \int_0^\infty y^{n-1} t^{\nu-1} e^{ -t -(x+y)/t } q_{2n}(t) \rho_{\nu}(x+y) dt dy.\eqno(4.38)$$
Next, we expand $q_{2n}$ in terms of the Laguerre polynomials
$$ q_{2n}(x)= \sum_{k=0}^{2n} h_{2n,k} L_k^\nu(x),\eqno(4.39)$$
where the coefficients $h_{2n,k}$ are calculated by virtue of (4.28), (4.37) and relation (2.19.14.15) in \cite{PrudnikovMarichev}, Vol. II, namely,
$$ h_{2n,k} = {k!\over \Gamma(1+\nu+k)} \int_0^\infty t^\nu e^{-t} L_k^\nu(t) q_{2n}(t) dt $$
$$= - {k! \ a_{n,0} \over D_n\ \Gamma(1+\nu+k)} \sum_{r=0}^n D_{n,r} (-1)^r r! \int_0^\infty t^{\nu+r} e^{-t} L_k^\nu(t) L_r^\nu(t) dt $$
$$= - { a_{n,0} \over D_n} \sum_{r=0}^n D_{n,r} \ r! \ (1+\nu)_r \ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right). $$
So,
$$ h_{2n,k} = - { a_{n,0} \over D_n} \sum_{r=0}^n D_{n,r} \ r! \ (1+\nu)_r \ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right),\eqno(4.40) $$
and the values of the generalized hypergeometric function can be simplified via relations (7.4.4; 90,91,92,93) in \cite{PrudnikovMarichev}, Vol. III. Precisely, we get for $k =0,1\dots, n$
$$ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) = 0,\quad k > 2r,\eqno(4.41)$$
$$ (1+\nu)_r\ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) = { (2r) ! \over r! },\quad k=2r,$$
$$ (1+\nu)_r\ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) = - { (2r-1) ! (\nu+ 3r ) \over (r-1)! }, \quad k=2r-1, $$
$$ (1+\nu)_r\ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) = { (2(r-1)) ! \over 2\ r! }
\left( 2 r^2 (2r+\nu-1)(2r-1) \right.$$
$$\left. + r(r-1)(r+\nu-1)(r+\nu ) \right),\quad k=2(r-1).$$
Therefore, returning to (4.38) and bearing in mind (4.39), (4.40), (4.41), (3.1), (3.2), we find
$$P_n(x)= {(-1)^{n+1} a_{n,0} \over D_n\ (n-1)! \ \rho^2_\nu(x)} {d^n\over dx^n } \sum_{k=0}^{2n} \sum_{r=0}^n D_{n,r} \ r! \ (1+\nu)_r \ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) $$
$$\times \int_0^\infty \int_0^\infty y^{n-1} t^{\nu-1} e^{ -t -(x+y)/t } L_k^\nu(t) \rho_{\nu}(x+y) dt dy$$
$$= {(-1)^{n+1} a_{n,0} \over D_n\ (n-1)! \ \rho^2_\nu(x)} {d^n\over dx^n } \sum_{k=0}^{2n} {1\over k!} \sum_{r=0}^n D_{n,r} \ r! \ (1+\nu)_r \ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) $$
$$\times \int_0^\infty y^{n-1} {d^k\over dx^k} \left( (x+y)^k \rho_\nu(x+y) \right) \rho_{\nu}(x+y) dy$$
$$= {(-1)^{n+1} a_{n,0} \over D_n\ (n-1)! \ \rho^2_\nu(x)} {d^n\over dx^n } \sum_{k=0}^{2n} {1\over k!} \sum_{r=0}^n D_{n,r} \ r! \ (1+\nu)_r \ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) $$
$$\times \int_x^\infty (y-x)^{n-1} {d^k\over dy^k} \left(y^k \rho_{\nu}(y)\right) \rho_\nu(y) dy = - { a_{n,0} \over D_n \ \rho_\nu(x)} \sum_{k=0}^{2n} {d^k\over dx^k} \left( x^k \rho_{\nu}(x)\right) \sum_{r=0}^n D_{n,r} \ r! $$
$$\times \ {(1+\nu)_r\over k!} \ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right)$$
$$= - { a_{n,0} \over D_n \ \rho_\nu(x)} \sum_{r=0}^n D_{n,r} \ r! \ (1+\nu)_r \sum_{k=0}^{2r} {d^k\over dx^k} \left( x^k \rho_{\nu}(x)\right) {1\over k!} \ {}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right).$$
Thus we have proved
{\bf Theorem 3}. {\it Let $\nu > -1/2,\ n \in \mathbb{N}_0$. Orthogonal polynomials $P_n$ satisfy the Rodrigues-type formula
$$P_n(x)= - { a_{n,0} \over D_n \ \rho_\nu(x)} \sum_{r=0}^n D_{n,r} r! (1+\nu)_r \sum_{k=0}^{2r} {1\over k!} {d^k\over dx^k} \left( x^k \rho_{\nu}(x)\right){}_3F_2 \left(-k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right),\eqno(4.42)$$
where $a_{n,0}$ is defined by $(4.33)$ and $D_n,\ D_{n,r}$ by $(4.29), (4.30)$, respectively.}
{\bf Corollary 2}. {\it Orthogonal polynomials $P_n$ have the following representation
$$P_n(x)= - { a_{n,0} \over D_n \ } \sum_{r=0}^n D_{n,r} r! (1+\nu)_r\left[ \sum_{k=0}^{r} {A_{k,k-1} (x) \over (2k)!} {}_3F_2 \left(-2k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right)\right.$$
$$\left. + \sum_{k=0}^{r-1} {A_{k,k} (x) \over (2k+1)!} {}_3F_2 \left(-2k-1,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) \right], \eqno(4.43)$$
where $ A_{k,k-1}, A_{k,k}$ are the type $1$ multiple orthogonal polynomials of degree $k$, associated with the vector of weight functions $(\rho_\nu, \rho_{\nu+1})$ }.
\begin{proof} In fact, we write (4.42) in the form
$$P_n(x)= - { a_{n,0} \over D_n \ \rho_\nu(x)} \sum_{r=0}^n D_{n,r} r! (1+\nu)_r \left[ \sum_{k=0}^{r} {1\over (2k)!} {d^{2k} \over dx^{2k}} \left( x^{2k} \rho_{\nu}(x)\right){}_3F_2 \left(-2k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right)\right.$$
$$\left. + \sum_{k=0}^{r-1} {1\over (2k+1)!} {d^{2k+1} \over dx^{2k+1}} \left( x^{2k+1} \rho_{\nu}(x)\right){}_3F_2 \left(-2k-1,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right)\right].\eqno(4.44)$$
Meanwhile, appealing to the Rodrigues formulas for the type $1$ multiple orthogonal polynomials associated with the vector of weight functions $(\rho_\nu, \rho_{\nu+1})$ (see \cite{AsscheYakubov2000}), we get
$$ {d^{2k} \over dx^{2k}} \left( x^{2k} \rho_{\nu}(x)\right) = A_{k,k-1} (x)\rho_\nu(x) + B_{k,k-1} (x)\rho_{\nu+1}(x),$$
$$ {d^{2k+1} \over dx^{2k+1}} \left( x^{2k+1} \rho_{\nu}(x)\right) = A_{k,k} (x)\rho_\nu(x) + B_{k,k} (x)\rho_{\nu+1}(x),$$
where $ A_{k,k-1}, B_{k,k-1}$ are polynomials of degree $k, k-1$, respectively, and $ A_{k,k}, B_{k,k}$ are polynomials of degree $k$. Therefore, substituting these expressions into (4.44), we obtain
$$P_n(x)= - { a_{n,0} \over D_n \ } \sum_{r=0}^n D_{n,r} r! (1+\nu)_r\left[ \sum_{k=0}^{r} {A_{k,k-1} (x) \over (2k)!} {}_3F_2 \left(-2k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right)\right.$$
$$\left. + \sum_{k=0}^{r-1} {A_{k,k} (x) \over (2k+1)!} {}_3F_2 \left(-2k-1,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) \right] $$
$$ - { a_{n,0} \ \rho_{\nu+1}(x) \over D_n \ \rho_\nu(x)} \sum_{r=0}^n D_{n,r} r! (1+\nu)_r \left[ \sum_{k=0}^{r} {B_{k,k-1} (x) \over (2k)!} {}_3F_2 \left(-2k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right)\right.$$
$$\left. + \sum_{k=0}^{r-1} {B_{k,k} (x) \over (2k+1)!} {}_3F_2 \left(-2k-1,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) \right] .$$
But the existence of a multiple orthogonal polynomial sequence with respect to the vector of weight functions $(\rho_\nu, \rho_{\nu+1})$ implies the identity
$$ \sum_{r=0}^n D_{n,r} r! (1+\nu)_r \left[ \sum_{k=0}^{r} {B_{k,k-1} (x) \over (2k)!} {}_3F_2 \left(-2k,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right)\right.$$
$$\bigg. + \sum_{k=0}^{r-1} {B_{k,k} (x) \over (2k+1)!} {}_3F_2 \left(-2k-1,\ 1+\nu+r,\ 1+r;\ 1+\nu,\ 1;\ 1 \right) \bigg]\equiv 0,$$
which leads to (4.43) and completes the proof.
\end{proof}
{\bf Remark 2}. In a similar manner orthogonal polynomials with the weight function $\rho_{\nu+1}(x)\rho_\nu(x)$ can be investigated. We leave this topic to the interested reader.
Finally, in this section we establish the generating function for the polynomials $P_n$, which is defined as usual by the equality
$$G(x,z) = \sum_{n=0}^\infty P_n (x) {z^n\over n!} ,\quad x >0,\ z \in \mathbb{C},\eqno(4.45)$$
where $|z| < h_x$ and $h_x >0$ is the convergence radius of the power series. To do this, we employ (3.1), (4.36) and (4.39), obtaining from (4.45) the following equality
$$ G(x,z) = {1 \over \rho_\nu(x) } \sum_{n=0}^\infty {z^n\over n!} \sum_{k=0}^{2n} {h_{2n,k} \over k! } {d^{k}\over dx^{k}} \left[ x^{k} \rho_\nu(x) \right]= {1\over \rho_\nu(x) } \sum_{n=0}^\infty {z^n\over n!} \sum_{k=0}^{2n} h_{2n,k} \sum_{j=0}^k \binom{k}{j} {(-1)^j\over (k-j)!} \ x^j \rho_{\nu-j}(x) . $$
Meanwhile, the product $x^j\rho_{\nu-j}(x)$ is expressed in \cite{Cous} as follows
$$x^{ j} \rho_{\nu-j}(x) = x^{ j/2} r_j(2\sqrt x; \nu ) \rho_{\nu}(x) + x^{ (j-1)/2} r_{j-1} (2\sqrt x; \nu -1) \rho_{\nu+1}(x), \quad j \in {\mathbb N}_0,$$
where $r_{-1}(z;\nu)=0$,
$$ x^{ j/2} r_j(2\sqrt x; \nu ) = (-1)^j \sum_{i=0}^{[j/2]} (\nu+i-j+1)_{j-2i} (j-2i+1)_i {x^i\over i!}.\eqno(4.46)$$
Therefore this leads to the final expression of the generating function for the sequence $\left(P_n\right)_{n \in \mathbb{N}_0}$, namely,
$$ G(x,z) = \sum_{n=0}^\infty \sum_{k=0}^{2n} \sum_{j=0}^k \binom{k}{j} { (-1)^j \ h_{2n,k} \over n!\ (k-j)! } x^{j/2} r_j(2\sqrt x; \nu ) z^n $$
$$+ {\rho_{\nu+1}(x) \over \rho_\nu(x) } \sum_{n=0}^\infty \sum_{k=0}^{2n} \sum_{j=0}^k \binom{k}{j} { (-1)^j \ h_{2n,k} \over n!\ (k-j)! } x^{(j-1)/2} r_{j-1}(2\sqrt x; \nu-1 ) z^n,\eqno(4.47) $$
where coefficients $h_{2n,k}$ are defined by (4.40).
\section{Note on the multiple orthogonal polynomials}
In this section we exhibit two types of multiple orthogonal polynomials for the vector of weight functions $(\rho^2_\nu,\ \rho^2_{\nu+1}, \ \rho_\nu\rho_{\nu+1}),\ \nu > -1/2 $ over $\mathbb{R}_+$ with an additional factor $x^\alpha,\ \alpha > -1$. Precisely, we consider the type $1$ polynomials $(A^\alpha_n,\ B^\alpha_{n-1},\ C^\alpha_{n-1}),\ n \in \mathbb{N}$ of degrees $n,\ n-1,\ n-1$, respectively, satisfying the orthogonality conditions
$$\int_0^\infty \left[ A^\alpha_n(x) \rho^2_\nu (x) + B^\alpha_{n-1}(x) \rho^2_{\nu+1} (x)+ C^\alpha_{n-1}(x) \rho_\nu (x) \rho_{\nu+1} (x) \right] x^{\alpha + m} dx = 0, \ m= 0,1,\dots, 3n-1.\eqno(5.1)$$
So, we have $3n$ linear homogeneous equations with $3n+1$ unknown coefficients of the polynomials $A^\alpha_n,\ B^\alpha_{n-1},\ C^\alpha_{n-1}$. Therefore we can find the type $1$ polynomials up to a multiplicative factor. For convenience, let us denote by $q^\alpha_{n,n-1,n-1}$ the function
$$ q^\alpha_{n,n-1,n-1}(x) = A^\alpha_n(x) \rho^2_\nu (x) + B^\alpha_{n-1}(x) \rho^2_{\nu+1} (x)+ C^\alpha_{n-1}(x) \rho_\nu (x) \rho_{\nu+1} (x) .\eqno(5.2)$$
Type $2$ polynomials $p^\alpha_{n,n-1,n}$ are monic polynomials of degree $3n-1$ which satisfy the multiple orthogonality conditions
$$ \int_0^\infty p^\alpha_{n,n-1,n}(x) \rho^2_\nu (x) x^{\alpha+m} dx = 0,\quad m= 0,1,\dots, n-1,\eqno(5.3)$$
$$ \int_0^\infty p^\alpha_{n,n-1,n}(x) \rho^2_{\nu+1} (x) x^{\alpha+m} dx = 0,\quad m= 0,1,\dots, n-2,\eqno(5.4)$$
$$ \int_0^\infty p^\alpha_{n,n-1,n}(x) \rho_\nu (x) \rho_{\nu+1}(x) x^{\alpha+m} dx = 0,\quad m= 0,1,\dots, n-1.\eqno(5.5)$$
This gives $3n-1$ linear equations with $3n-1$ unknown coefficients, since the leading coefficient is equal to $1$. Hence the polynomial $p^\alpha_{n,n-1,n}$ can be determined uniquely.
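For instance (a simple illustration of the above counting, assuming as for general $n$ that the corresponding moment system is nondegenerate), for $n=1$ the type $2$ polynomial is the monic quadratic determined by the two conditions
$$p^\alpha_{1,0,1}(x) = x^2 + c_1 x + c_0, \qquad \int_0^\infty p^\alpha_{1,0,1}(x)\, \rho^2_\nu (x)\, x^{\alpha}\, dx = \int_0^\infty p^\alpha_{1,0,1}(x)\, \rho_\nu (x) \rho_{\nu+1} (x)\, x^{\alpha}\, dx = 0,$$
which form a linear system for the two coefficients $c_0,\ c_1$, while condition (5.4) is void since the range $m=0,1,\dots,n-2$ is empty for $n=1$.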
When $|\nu| < 1/2$ the uniqueness of the representation (5.2) is validated by the following
{\bf Theorem 4}. {\it Let $n, m, l \in \mathbb{N}_0, |\nu| < 1/2, \ f_n,\ g_{m},\ h_l$ be polynomials of degree at most $n,\ m,\ l$, respectively. Let
$$f_n(x) \rho^2_\nu(x)+ g_{m} (x) \rho^2_{\nu+1} (x) + h_{l} (x) \rho_\nu(x) \rho_{\nu+1} (x) = 0\eqno(5.6)$$
%
for all $x >0$. Then $f_n \equiv 0, \ g_{m} \equiv 0,\ h_l \equiv 0.$ }
\begin{proof} As is known (cf. \cite{YAP}) the quotient $\rho_\nu/ \rho_{\nu+1}$ is represented by the Ismail integral
$$ {\rho_{\nu}(x) \over \rho_{\nu+1}(x) } = {1\over \pi^2} \int_0^\infty { s^{-1} ds \over (x+s)( J_{\nu+1}^2(2\sqrt s) +
Y_{\nu+1}^2(2\sqrt s) )},\eqno(5.7)$$
where $J_\nu, Y_\nu$ are the Bessel functions of the first and second kind, respectively \cite{PrudnikovMarichev}, Vol. II. In fact, let $r \ge \max\{ n, m+1, l+1\}. $ Then, dividing (5.6) by $\rho_\nu(x) \rho_{\nu+1} (x)$ and using (3.6), we find
$$f_n(x) {\rho_\nu(x)\over \rho_{\nu+1} (x)} + x g_{m} (x) {\rho_{\nu-1} (x)\over \rho_{\nu} (x) } + \nu g_{m} (x) + h_{l} (x) = 0.\eqno(5.8)$$
%
Then, differentiating $r$ times, it gives
$${d^{r}\over dx^{r} } \left[ f_n(x) {\rho_\nu(x)\over \rho_{\nu+1} (x)} \right] + {d^{r}\over dx^{r} } \left[ x g_m(x) {\rho_{\nu-1} (x)\over \rho_{\nu} (x)} \right] = 0.\eqno(5.9)$$
Assuming $f_n(x)= \sum_{k=0}^n f_{n,k}\ x^k$ and employing (5.7), the first term on the left-hand side of (5.9) can be treated as follows
$$ {d^{r}\over dx^{r} } \left[ f_n(x) {\rho_\nu(x)\over \rho_{\nu+1} (x)} \right] = {1\over \pi^2} {d^{r}\over dx^{r} } \sum_{k=0}^n f_{n,k} x^k \int_0^\infty e^{-xy } dy \int_0^\infty { e^{-sy} \ s^{-1} ds \over J_{\nu+1}^2(2\sqrt s) +
Y_{\nu+1}^2(2\sqrt s)}$$
$$= {1\over \pi^2} \sum_{k=0}^n f_{n,k} (-1)^k {d^{r}\over dx^{r} } \int_0^\infty {d^k\over dy^k} \left[ e^{-xy } \right] dy \int_0^\infty { e^{-sy} \ s^{-1} ds \over J_{\nu+1}^2(2\sqrt s) + Y_{\nu+1}^2(2\sqrt s)}$$
$$= {1\over \pi^2} \sum_{k=0}^n f_{n,k} (-1)^k \int_0^\infty {\partial^{k+r} \over \partial y^k \partial x^{r} } \left[ e^{-xy } \right] dy \int_0^\infty { e^{-sy} \ s^{-1} ds \over J_{\nu+1}^2(2\sqrt s) + Y_{\nu+1}^2(2\sqrt s)}$$
$$= {1\over \pi^2} \sum_{k=0}^n f_{n,k} (-1)^{k+r} \int_0^\infty {d^k\over dy^k} \left[ y^{r} e^{-xy } \right] dy \int_0^\infty { e^{-sy} \ s^{-1} ds \over J_{\nu+1}^2(2\sqrt s) + Y_{\nu+1}^2(2\sqrt s)},$$
where the differentiation under the integral sign is possible via the absolute and uniform convergence. Now, we integrate $k$ times by parts in the outer integral with respect to $y$ on the right-hand side of the latter equality, eliminating the integrated terms due to the choice of $r$, and then differentiate under the integral sign in the inner integral with respect to $s$ owing to the same arguments, to obtain
$$ {d^{r}\over dx^{r} } \left[ f_n(x) {\rho_\nu(x)\over \rho_{\nu+1} (x)} \right] = {1\over \pi^2} \int_0^\infty y^{r} e^{-xy } \int_0^\infty { e^{-sy} \ s^{-1} \over J_{\nu+1}^2(2\sqrt s) + Y_{\nu+1}^2(2\sqrt s)} \left( \sum_{k=0}^n f_{n,k} (-1)^{k+r} s^k \right) ds .\eqno(5.10)$$
In the same fashion, writing $g_m(x)= \sum_{k=0}^m g_{m,k}\ x^k$, the second term in (5.9) is worked out to find
$$ {d^{r}\over dx^{r} } \left[ x g_m(x) {\rho_{\nu-1} (x)\over \rho_{\nu} (x)} \right] = {1\over \pi^2} \int_0^\infty y^{r} e^{-xy } \int_0^\infty { e^{-sy} \over J_{\nu}^2(2\sqrt s) + Y_{\nu}^2(2\sqrt s)} \left( \sum_{k=0}^m g_{m,k} (-1)^{k+r+1} s^k \right) ds.\eqno(5.11)$$
Substituting (5.10), (5.11) into (5.9) and cancelling the Laplace transform twice via its injectivity for integrable and continuous functions \cite{Tit}, we arrive at the equality
$$x g_m(-x) \left[ J_{\nu+1}^2(2\sqrt x) + Y_{\nu+1}^2(2\sqrt x) \right] + f_n(-x) \left[ J_{\nu}^2(2\sqrt x) + Y_{\nu}^2(2\sqrt x)\right] = 0,\ x > 0.\eqno(5.12) $$
The sum of squares of Bessel functions in brackets is called the Nicholson kernel, which has the Mellin-Barnes representation by virtue of Entry 8.4.20.35 in \cite{PrudnikovMarichev}, Vol. III
$$x^k \left[ J_{\nu}^2(2\sqrt x) + Y_{\nu}^2(2\sqrt x)\right] = {2^{1-2k} \cos(\pi\nu)\over 2\pi^{7/2} i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\Gamma(s+k) \Gamma(s+k+\nu) \Gamma(s+k-\nu) $$
$$\times \Gamma\left({1\over 2} -s-k\right) (4x)^{-s} ds,\quad |\nu | - k < \gamma < {1\over 2} - k.\eqno(5.13)$$
Then, using on the right-hand side of (5.13) the reflection formula for the gamma function \cite{Bateman}, Vol. I, it can be written as follows
$$ {2^{1-2k} \cos(\pi\nu)\over 2\pi^{7/2} i} \int_{\gamma-i\infty}^{\gamma+i\infty} \Gamma(s+k) \Gamma(s+k+\nu) \Gamma(s+k-\nu) \Gamma\left({1\over 2} -s-k\right) (4x)^{-s} ds$$
$$= {2^{1-2k} (-1)^k \cos(\pi\nu)\over 2\pi^{7/2} i} \int_{\gamma-i\infty}^{\gamma+i\infty} \Gamma(s+k) \Gamma(s+k+\nu) \Gamma(s+k-\nu) { \Gamma(1/2 -s ) \over (1/2+s)_k } (4x)^{-s} ds.\eqno(5.14)$$
Our goal now is to shift the contour to the right in order to integrate along the straight line with $|\nu| < {\rm Re}\, s < 1/2$. To do this we should take into account the residues at the $k$ simple poles $s_m= -1/2- m,\ m=0, 1,\dots, k-1$, which have the values
$$\hbox{Res}_{s= s_m} \left(\frac{\Gamma(s+k) \Gamma(s+k+\nu) \Gamma(s+k-\nu) \Gamma(1/2 -s )} {(1/2+s) (3/2+s)\dots (s +k-1/2) } (4x)^{-s}\right)$$
$$= \frac{\Gamma(s_m+k) \Gamma(s_m+k+\nu) \Gamma(s_m+k-\nu) \Gamma(1/2 -s_m )} {(1/2+s_m) (1/2+s_m+1)\dots (1/2+s_m +m-1) (1/2+s_m +m+1)\dots(1/2+s_m +k-1) } (4x)^{-s_m}$$
$$= { (-1)^m (4x)^{1/2+m} \over (k-m-1)!} \ \Gamma(k-m-1/2) \Gamma(k-m-1/2+\nu) \Gamma(k-m-1/2-\nu) . $$
Therefore we get from (5.14)
$$ {2^{1-2k} \cos(\pi\nu)\over 2\pi^{7/2} i} \int_{\gamma-i\infty}^{\gamma+i\infty} \Gamma(s+k) \Gamma(s+k+\nu) \Gamma(s+k-\nu) \Gamma\left({1\over 2} -s-k\right) (4x)^{-s} ds$$
$$= {2^{1-2k} (-1)^k \cos(\pi\nu)\over 2\pi^{7/2} i} \int_{\mu-i\infty}^{\mu+i\infty} \Gamma(s+k) \Gamma(s+k+\nu) \Gamma(s+k-\nu) { \Gamma(1/2 -s ) \over (1/2+s)_k } (4x)^{-s} ds$$
$$- {4^{1-k} \sqrt x \cos(\pi\nu)\over \pi^{5/2}}\sum_{m=0}^{k-1} {(-1)^{k+m} (4x)^m \over (k-m-1)!} \ \Gamma(k-m-1/2) \Gamma(k-m-1/2+\nu) \Gamma(k-m-1/2-\nu),\eqno(5.15)$$
where $|\nu| < \mu < 1/2$. Now, recalling Parseval's equality for the Mellin transform and Entries 8.4.2.5, 8.4.23.27 in \cite{PrudnikovMarichev}, Vol. III, we derive from (5.13), (5.15)
$$x^k \left[ J_{\nu}^2(2\sqrt x) + Y_{\nu}^2(2\sqrt x)\right] = {2^{3- 2k} (-1)^k \sqrt x \over \pi^{3}} \cos(\pi\nu) \int_0^\infty K^2_\nu(\sqrt t) \ {t^{k-1/2}\over 4x +t} dt$$
$$- \ {4^{1-k} \sqrt x \cos(\pi\nu)\over \pi^{5/2}}\sum_{m=0}^{k-1} {(-1)^{k+m} (4x)^m \over (k-m-1)!} \ \Gamma(k-m-1/2) \Gamma(k-m-1/2+\nu) \Gamma(k-m-1/2-\nu).\eqno(5.16)$$
Analogously, we find
$$x^{k+1} \left[ J_{\nu+1}^2(2\sqrt x) + Y_{\nu+1}^2(2\sqrt x)\right] = {2^{1- 2k} (-1)^k \sqrt x \over \pi^{3}} \cos(\pi\nu) \int_0^\infty K^2_{\nu+1}(\sqrt t) \ {t^{k+1/2}\over 4x +t} dt$$
$$ - \ {4^{-k} \sqrt x \cos(\pi\nu)\over \pi^{5/2}}\sum_{m=0}^{k} {(-1)^{k+m+1} (4x)^m \over (k-m)!} \ \Gamma(k-m+1/2) \Gamma(k-m+1/2+\nu) \Gamma(k-m+1/2-\nu).\eqno(5.17)$$
Substituting expressions (5.16), (5.17) in (5.12), we derive after straightforward simplifications
$$ \int_0^\infty K^2_{\nu+1}(2 \sqrt t) \ g_m (t) {\sqrt t \over x +t} dt + \int_0^\infty K^2_{\nu}(2 \sqrt t) \ f_n(t) {dt \over \sqrt t (x +t)}$$
$$ + \sqrt\pi \sum_{k=0}^m \sum_{j=0}^{k} { (-x)^j \over 4^{k-j} (k-j)!} \ \Gamma(k-j+1/2) \Gamma(k-j+1/2+\nu) \Gamma(k-j+1/2-\nu)$$
$$- \sqrt\pi \sum_{k=0}^{n-1} \sum_{j=0}^{k} { (-x)^j \over 4^{k-j} (k-j)!} \ \Gamma(k-j+1/2) \Gamma(k-j+1/2+\nu) \Gamma(k-j+1/2-\nu) = 0,\ x >0.\eqno(5.18)$$
The last two terms in (5.18) are polynomials of degree $m$ and $n-1$, respectively. Hence, differentiating $r_1 \ge \max\{n, m+1\}$ times with respect to $x$, we obtain
$$ {d^{r_1}\over dx^{r_1}} \int_0^\infty K^2_{\nu+1}(2 \sqrt t) \ g_m (t) {\sqrt t \over x +t} dt + {d^{r_1}\over dx^{r_1}} \int_0^\infty K^2_{\nu}(2 \sqrt t) \ f_n(t) {dt \over \sqrt t (x +t)} = 0.\eqno(5.19)$$
The left-hand side of (5.19) is the $r_1$-th derivative of the sum of two Stieltjes transforms which are, in turn, two-fold Laplace transforms. Consequently, performing the differentiation under the integral sign in (5.19) as above, owing to the absolute and uniform convergence for $x \ge x_0 >0 $, we cancel the Laplace transforms of integrable functions via injectivity. Then, with (2.2), this yields the equality
$$f_n(x) \rho^2_\nu(x)+ g_{m} (x) \rho^2_{\nu+1} (x) = 0,\quad x >0.\eqno(5.20)$$
Comparing with (5.6), we see that $ h_l \equiv 0.$ Further, identity (5.20) implies that $f_n, \ g_m$ have the same positive roots, if any. Let $x >0$ not be a root of $g_m$. Then, dividing (5.20) by $g_m$ and differentiating, we get
$$- 2 \rho_\nu (x) \rho_{\nu-1} (x) {f_n(x)\over g_m(x)} + \rho^2_\nu(x) {f^\prime_n(x) g_m(x)- f_n(x) g_m^\prime(x) \over g_m^2(x)} - 2 \rho_{\nu+1}(x)\rho_\nu(x) =0.$$
Since $\rho_\nu(x) >0$, we divide the previous equation by $\rho_\nu$, multiply by $x\, g_m^2$ and employ (3.6) to find
$$- 2 g_m(x) \rho_{\nu+1}(x) \left( f_n(x) +x g_m(x) \right) + \rho_\nu(x) \left( 2\nu f_n(x) g_m(x) + x \left( f^\prime_n(x) g_m(x)- f_n(x) g_m^\prime(x) \right) \right) =0.\eqno(5.21)$$
However, the existence of the type 1 multiple orthogonal polynomials with respect to the vector of weight functions $(\rho_\nu, \rho_{\nu+1})$ suggests the equalities
$$ g_m(x) \left( f_n(x) +x g_m(x) \right) \equiv 0,\quad\quad 2\nu f_n(x) g_m(x) + x \left( f^\prime_n(x) g_m(x)- f_n(x) g_m^\prime(x)\right)\equiv 0.\eqno(5.22)$$
So, if $g_m \equiv 0$, then (5.20) gives $f_n \equiv 0$ and the theorem is proved. Otherwise $f_n(x)+ x g_m(x) \equiv 0$, and substituting $f_n = - x g_m$ into the second equation in (5.22) we have $ x(2\nu+1) g^2_m(x) \equiv 0.$ Thus $g_m \equiv 0$ and $f_n \equiv 0$ from (5.20). Theorem 4 is proved.
\end{proof}
{\bf Remark 3}. The choice of $\nu$ is important. For instance, for $\nu= -1/2$ the theorem fails. This can be seen by taking $f_n(x) \equiv -x,\ g_m(x)\equiv 1, \ h_l(x) \equiv 0$.
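Indeed (a short verification of our own), assuming the weight is of Macdonald type, $\rho_\nu(x) = c\, x^{\nu/2} K_\nu(2\sqrt x)$ with a constant $c$ independent of $\nu$ (as is consistent with the relation (3.6) used above), the symmetry $K_{-1/2} = K_{1/2}$ gives
$$\rho_{1/2}(x) = c\, x^{1/4} K_{1/2}(2\sqrt x) = \sqrt x \, \cdot c\, x^{-1/4} K_{-1/2}(2\sqrt x) = \sqrt x\, \rho_{-1/2}(x),$$
so that $-x\, \rho^2_{-1/2}(x) + \rho^2_{1/2}(x) \equiv 0$ on $\mathbb{R}_+$, \emph{i.e.} (5.6) admits the above nontrivial solution at $\nu = -1/2$.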
{\bf Theorem 5}. {\it Let $\nu \in [0,1/2)$. For every $\alpha > 0$
$${d\over dx} \left[ x^\alpha q^\alpha_{n,n-1,n-1}(x) \right] = x^{\alpha-1} q^{\alpha-1}_{n,n-1,n} (x)\eqno(5.23)$$
and the following differential recurrence relations hold}
$$ A^{\alpha-1} _n(x) = (\alpha +2\nu) A^\alpha_n(x) + x [ A^\alpha_n(x) ]^\prime - x C^\alpha_{n-1}(x) ,\eqno(5.24)$$
$$ B^{\alpha-1} _{n-1}(x) = \alpha B^\alpha_{n-1}(x) + x [B^\alpha_{n-1}(x) ]^\prime - C^\alpha_{n-1}(x),\eqno(5.25)$$
$$ C^{\alpha-1} _n(x) = (\alpha+\nu) C^\alpha_{n-1}(x) + x [C^\alpha_{n-1}(x)]^\prime - 2 A^\alpha_n(x) -2x B^\alpha_{n-1}(x).\eqno(5.26)$$
\begin{proof} From (5.1), (5.2) and integration by parts we get
$$ \int_0^\infty q^\alpha_{n,n-1,n-1}(x) x^{\alpha+m} dx = - {1\over m+1} \int_0^\infty {d\over dx} \left[ x^\alpha q^\alpha_{n,n-1,n-1}(x) \right] x^{m+1} dx = 0,$$
where the integrated terms vanish for every $\alpha > -1$ due to the asymptotic behavior (2.4), (2.5). Hence (5.1) yields the equality
$$\int_0^\infty {d\over dx} \left[ x^\alpha q^\alpha_{n,n-1,n-1}(x) \right] x^{m} dx = 0, \quad m= 1,\dots, 3n.$$
But, evidently,
$$\int_0^\infty {d\over dx} \left[ x^\alpha q^\alpha_{n,n-1,n-1}(x) \right] dx = 0,\quad \alpha > 0.$$
Therefore
$$\int_0^\infty {d\over dx} \left[ x^\alpha q^\alpha_{n,n-1,n-1}(x) \right] x^{m} dx = 0, \quad m= 0,\dots, 3n.\eqno(5.27)$$
Now, working out the differentiation in (5.27) and employing (3.5), (3.6), we find
$$ {d\over dx} \left[ x^\alpha q^\alpha_{n,n-1,n-1}(x) \right] = \alpha\ x^{\alpha-1} q^\alpha_{n,n-1,n-1}(x) + x^{\alpha-1} \left( x [ A^\alpha_n(x) ]^\prime \rho^2_\nu (x) + x [B^\alpha_{n-1}(x) ]^\prime \rho^2_{\nu+1} (x)+ x [C^\alpha_{n-1}(x)]^\prime \rho_\nu (x) \rho_{\nu+1} (x) \right. $$
$$\left. + 2 A^\alpha_n(x) \rho_\nu (x) (\nu \rho_\nu(x)- \rho_{\nu+1} (x) ) -2x B^\alpha_{n-1}(x) \rho_{\nu+1} (x)\rho_\nu(x)\right. $$
$$\left. - x C^\alpha_{n-1}(x) \rho^2_\nu (x) + C^\alpha_{n-1}(x) \rho_{\nu+1} (x) (\nu \rho_\nu(x)- \rho_{\nu+1} (x) ) \right).$$
Thus
$$ {d\over dx} \left[ x^\alpha q^\alpha_{n,n-1,n-1}(x) \right] = x^{\alpha-1} \left[ A^{\alpha-1}_n(x) \rho^2_\nu (x) + B^{\alpha-1}_{n-1}(x) \rho^2_{\nu+1} (x)+ C^{\alpha-1}_{n}(x) \rho_\nu (x) \rho_{\nu+1} (x) \right],\eqno(5.28)$$
where $A^{\alpha-1}_n(x) ,\ B^{\alpha-1}_{n-1}(x), \ C^{\alpha-1}_{n}(x) $ are polynomials of degree at most $n, n-1, n$, respectively, defined by formulas (5.24), (5.25), (5.26). The linear homogeneous system (5.27) of $3n+1$ equations contains $3n+ 2$ unknowns. Therefore, up to a constant factor, which we choose to be one, the left-hand side of (5.28) is equal to $ x^{\alpha-1} q^{\alpha-1}_{n,n-1,n}(x)$, and the representation (5.28) is unique by virtue of Theorem 4. This proves (5.23) and completes the proof of Theorem 5.
\end{proof}
{\bf Remark 4}. The same analysis for the function $q^\alpha_{n,n-1,n}(x)$ does not work. In fact, in this case we derive analogously
$$\int_0^\infty {d\over dx} \left[ x^\alpha q^\alpha_{n,n-1,n}(x) \right] x^{m} dx = 0, \quad m= 0,\dots, 3n+1.$$
However, working out the differentiation, we will get polynomials of degree at most $n+1, n, n$, respectively, which implies $3n+4$ unknowns for $3n+2$ equations (the so-called quasi multiple orthogonal case.)
Finally, we establish the differentiation property for the type $2$ multiple orthogonal polynomials $p^\alpha_{n,n-1,n}$ (or $3$-orthogonal polynomials).
{\bf Theorem 6.} {\it For every $\nu \ge 0,\ \alpha > -1$
$${d\over dx} \left[ p^\alpha_{n,n-1,n} (x)\right] = (3n-1) p^{\alpha+1}_{n,n-1,n-1} (x).\eqno(5.29)$$ }
\begin{proof} Recalling (3.5), (3.6) and asymptotic behavior of the weight functions (2.4), (2.5), we integrate by parts in (5.3), eliminating the integrated terms, to deduce
$$ \int_0^\infty {d\over dx} \left[ p^\alpha_{n,n-1,n}(x) \right] \rho^2_\nu (x) x^{\alpha+1+m} dx = 0,\quad m= 0,1,\dots, n-1.\eqno(5.30)$$
To the equalities (5.4), (5.5) there correspond the following ones
$$ \int_0^\infty {d\over dx} \left[ p^\alpha_{n,n-1,n}(x) \right] \rho^2_{\nu+1} (x) x^{\alpha+1+m} dx = 0,\quad m= 0,1,\dots, n-2,\eqno(5.31)$$
$$ \int_0^\infty {d\over dx} \left[ p^\alpha_{n,n-1,n}(x) \right] \rho_\nu (x) \rho_{\nu+1}(x) x^{\alpha+1+m} dx = 0,\quad m= 0,1,\dots, n-2.\eqno(5.32)$$
Now $\left[ p^\alpha_{n,n-1,n}(x) \right]^\prime$ is a polynomial of degree $3n-2$ with leading coefficient $3n-1$, and by equalities (5.30), (5.31), (5.32) it satisfies the orthogonality conditions (5.3), (5.4), (5.5) of the type $2$ multiple orthogonal polynomial $ p^{\alpha+1}_{n,n-1,n-1} (x)$. Hence we get (5.29) by uniqueness. Theorem 6 is proved.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
In the theory of surfaces, the \emph{Gauss-Bonnet Theorem} is one of the most profound and fundamental results:
\begin{align}\label{eqn:Gauss-Bonnet}
\int_M K_g dv_g = 2\pi \chi (M),
\end{align}
for any closed surface $(M^2 ,g)$ with Euler characteristic $\chi (M)$. \\
On the other hand, the \emph{Uniformization Theorem} assures that any metric on $M^2$ is locally conformally flat. Thus an interesting question is whether we can find a scalar-type curvature quantity that generalizes the classic \emph{Gauss-Bonnet Theorem} in higher dimensions. \\
For any closed $4$-dimensional Riemannian manifold $(M^4, g)$, we can define the $Q$-curvature as follows
\begin{align}\label{Q_4}
Q_g = - \frac{1}{6} \Delta_g R_g - \frac{1}{2} |Ric_g|_{g}^2 + \frac{1}{6} R_g^2,
\end{align}
which satisfies the \emph{Gauss-Bonnet-Chern Formula}
\begin{align}\label{Gauss_Bonnet_Chern}
\int_{M^4} \left( Q_g + \frac{1}{4} |W_g|^2_g \right) dv_g = 8\pi^2 \chi(M).
\end{align}
Here $R_g$, $Ric_g$ and $W_g$ are scalar curvature, Ricci curvature and Weyl tensor for $(M^4, g)$ respectively.\\
In particular, if $W_g = 0$, \emph{i.e.} $(M^4, g)$ is locally conformally flat, we have
\begin{equation}\label{total_Q}
\int_{M^4} Q_g dv_g = 8\pi^2 \chi(M),
\end{equation}
which can be viewed as a generalization of (\ref{eqn:Gauss-Bonnet}).\\
Inspired by Paneitz's work (\cite{Paneitz}), Branson (\cite{Branson}) extended (\ref{Q_4}) and defined the $Q$-curvature for arbitrary dimension $n \geq 3$ to be
\begin{align}
Q_{g} = A_n \Delta_{g} R_{g} + B_n |Ric_{g}|_{g}^2 + C_nR_{g}^2,
\end{align}
where $A_n = - \frac{1}{2(n-1)}$ , $B_n = - \frac{2}{(n-2)^2}$ and
$C_n = \frac{n^2(n-4) + 16 (n-1)}{8(n-1)^2(n-2)^2}$.\\
In the study of conformal geometry, there is a $4^{th}$-order differential operator closely related to the $Q$-curvature, called the Paneitz operator, which can be viewed as an analogue of the conformal Laplacian operator:
\begin{align}
P_g = \Delta_g^2 - div_g \left[(a_n R_g g + b_n Ric_g) d\right] + \frac{n-4}{2}Q_g,
\end{align}
where $a_n = \frac{(n-2)^2 + 4}{2(n-1)(n-2)}$ and $b_n = - \frac{4}{n-2}$.\\
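As a quick consistency check (this remark is ours and is not needed in what follows), at $n=4$ these constants recover (\ref{Q_4}) and the classical fourth-order Paneitz operator:
\begin{align*}
A_4 = -\frac{1}{6},\quad B_4 = -\frac{1}{2},\quad C_4 = \frac{4^2\cdot 0 + 16\cdot 3}{8\cdot 3^2\cdot 2^2} = \frac{1}{6}, \qquad a_4 = \frac{2}{3},\quad b_4 = -2,
\end{align*}
so that $Q_g$ agrees with (\ref{Q_4}) and $P_g = \Delta_g^2 - div_g\left[\left(\frac{2}{3}R_g g - 2 Ric_g\right)d\right]$ in dimension four, the zeroth-order term $\frac{n-4}{2}Q_g$ being absent there.\\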
We leave the discussion of the conformal covariance of the $Q$-curvature and the Paneitz operator to the appendix at the end of the article, for readers who are interested in it.\\
The most fundamental motivation of this article is to seek the connection between $Q$-curvature and scalar curvature, both being scalar-type curvature quantities. Intuitively, they should share some properties in common, since both of them are generalizations of the Gaussian curvature on surfaces. Of course, as objects in conformal geometry, a great deal of successful research has revealed their profound connections in the past decades. (See the appendix for a brief discussion.) However, beyond conformal classes there has been very little research on $Q$-curvature from the viewpoint of Riemannian geometry.
Motivated by the early work of Fischer and Marsden (\cite{F-M}) on the deformation of scalar curvature, we started to consider generic deformation problems of $Q$-curvature. \\
In order to study deformations of scalar curvature, the central idea of \cite{F-M} is to investigate the kernel of the $L^2$-formal adjoint of the linearization of the scalar curvature. To be precise, regard the scalar curvature $R(g)$ as a second-order nonlinear map on the space of all metrics on $M$,
\begin{align*}
\notag
R: &\mathcal{M} \rightarrow C^{\infty}(M); \
g\mapsto R_{g}.
\end{align*}
Let $\gamma_g : S_2(M) \rightarrow C^\infty(M)$ be its linearization at $g$ and $\gamma_g^*: C^\infty(M) \rightarrow S_2(M)$ be the $L^2$-formal adjoint of $\gamma_g$, where $S_2(M)$ is the space of symmetric $2$-tensors on $M$. \\
A crucial related concept is the so-called \emph{vacuum static space}, which is the spatial slice of a type of special solutions to \emph{vacuum Einstein equations} (see \cite{Q-Y}). A vacuum static space can also be defined as a complete Riemannian manifold with $\ker \gamma_g^* \neq \{0\}$ (see the last section for an explicit definition). Typical examples of vacuum static spaces are space forms. In this sense, we can regard the notion of vacuum static spaces as a generalization of space forms. Of course, there are many other interesting examples of vacuum static spaces besides space forms, say $S^1\times S^2$ \emph{etc}.
The classification problem is a fundamental question in the study of vacuum static spaces, even in the field of mathematical general relativity. We refer interested readers to the article \cite{Q-Y}.\\
In fact, being vacuum static or not is the criterion that determines whether the scalar curvature is linearized stable at the given metric. It was shown in \cite{F-M} that a closed non-vacuum static manifold is linearized stable and hence any smooth function sufficiently close to the scalar curvature of the background metric can be realized as the scalar curvature of a nearby metric. This result was generalized to non-vacuum static domains by Corvino (\cite{Corvino}). On the other hand, for a vacuum static space, rigidity results are expected. In \cite{F-M}, the authors also showed that there is a local rigidity phenomenon on any torus. Later, Schoen-Yau and Gromov-Lawson (\cite{S-Y_1, S-Y_2, G-L_1, G-L_2}) showed that global rigidity also holds on tori. Inspired by the \emph{Positive Mass Theorem}, many people made a lot of progress in understanding the rigidity phenomena of vacuum static spaces (c.f. \cite{Min-Oo, A-D, A-C-G, Miao, S-T, B-M}). It was demonstrated that local rigidity is a universal phenomenon in all vacuum static spaces (see \cite{Q-Y_1}). \\
Along the same line, we can also consider $Q$-curvature as a $4^{th}$-order nonlinear map on the space of all metrics on $M$,
\begin{align}
\notag
Q: &\mathcal{M} \rightarrow C^{\infty}(M); g\mapsto Q_g.
\end{align}
Due to the complexity of $Q$-curvature, this map is extremely difficult to study. However, by linearizing the map we may still expect some interesting results to hold. We denote $$\Gamma_g : S_2(M) \rightarrow C^\infty(M)$$ to be the linearization of $Q$-curvature at metric $g$. \\
Now we give the following notion of $Q$-singular space introduced by Chang-Gursky-Yang in \cite{C-G-Y}, which plays a crucial role in this article.
\begin{definition}[Chang-Gursky-Yang \cite{C-G-Y}]
We say a complete Riemannian manifold $(M, g)$ is $Q$-singular, if $$\ker \Gamma_g^* \neq \{ 0 \},$$ where
$\Gamma_g^* : C^\infty(M) \rightarrow S_2(M)$ is the $L^2$-formal adjoint of $\Gamma_g$. We also refer to the triple $(M, g, f)$ as a $Q$-singular space, if $f (\not\equiv 0)$ is in the kernel of $\Gamma_g^*$.
\end{definition}
By direct calculations, we can obtain the precise expression of $\Gamma_g^*$ (see Proposition \ref{prop:Gamma^*}). Then, following their argument, we observed that the results in \cite{C-G-Y} for dimension $4$ can be extended to all other dimensions:
\begin{theorem}[Chang-Gursky-Yang \cite{C-G-Y}]\label{thm:Q_const}
A $Q$-singular space $(M^n, g)$ has constant $Q$-curvature and
\begin{align}
\frac{n+4}{2}Q_g \in Spec(P_g).
\end{align}
\end{theorem}
As in the study of vacuum static spaces, the collection of all $Q$-singular spaces is also expected to be a ``small'' set in some sense, which leads us naturally to classification problems. In fact, when restricted to the class of closed $Q$-singular Einstein manifolds with nonnegative scalar curvature, Ricci flat and spherical metrics are the only possible ones:
\begin{theorem}\label{Classificaition_Q_singular_Einstein}
Suppose $(M^n,g,f)$ is a closed $Q$-singular Einstein manifold. If the scalar curvature $R_g \geq 0$, then
\begin{itemize}
\item $f$ is a non-vanishing constant if and only if $(M,g)$ is Ricci flat;
\item $f$ is not a constant if and only if $(M,g)$ is isometric to a round sphere with radius $r = \left( \frac{n(n-1)}{R_g}\right)^{\frac{1}{2}}$.
\end{itemize}
\end{theorem}
For non-$Q$-singular spaces, we can show that they are actually linearized stable, which answers the local prescribing $Q$-curvature problem for this category of manifolds:
\begin{theorem}\label{Q_stability}
Let $(M,\bar{g})$ be a closed Riemannian manifold. Assume
$(M,\bar{g})$ is not $Q$-singular, then the Q-curvature is linearized stable at $\bar g$ in the sense that $Q : \mathcal{M} \rightarrow C^{\infty}(M) $ is a submersion at $\bar{g}$.
Thus, there is a neighborhood $U \subset C^{\infty}(M)$ of $Q_{\bar{g}}$ such that for any $\psi \in U$, there exists a metric $g$ on $M$ close to $\bar{g}$ with $Q_g = \psi$.
\end{theorem}
In particular, with the aid of Theorem \ref{Classificaition_Q_singular_Einstein}, we know a generic Einstein metric with positive scalar curvature has to be linearized stable:
\begin{corollary}\label{cor:stab_pos_Einstein}
Let $(M,\bar{g})$ be a closed positive Einstein manifold. Assume $(M,\bar{g})$ is not spherical, then the Q-curvature is linearized stable at $\bar g$.
\end{corollary}
Theorem \ref{Q_stability} actually provides an answer to the global prescribing $Q$-curvature problem:
\begin{corollary}\label{cor:prescribing_zero_Q}
Suppose $(M, \bar g)$ is a closed non-$Q$-singular manifold with vanishing $Q$-curvature. Then any smooth function $\varphi$ can be realized as a $Q$-curvature for some metric $ g$ on $M$.
\end{corollary}
As a direct corollary of Theorem \ref{Q_stability}, we can obtain the existence of a negative $Q$-curvature metric as follows:
\begin{corollary}\label{cor:pos_Y_negative_Q}
Let $M$ be a closed manifold with positive Yamabe invariant $Y(M) > 0$. There is a metric $g$ with $Q$-curvature $Q_g < 0$ on $M$.
\end{corollary}
We also investigate the stability of Ricci flat metrics, which are in fact $Q$-singular. Due to the special nature of such metrics, we can give a sufficient condition for the prescribing $Q$-curvature problem.
\begin{theorem}\label{Ricci_flat_stability}
Let $(M,\bar{g})$ be a closed Ricci flat manifold. Denote $$\Phi:=\{\psi \in C^{\infty}(M): \int_M \psi dv_{\bar{g}} = 0\}$$ to be the set of smooth functions with zero average. Then for any $\psi \in \Phi$, there exists a metric $g$ on $M$ such that $$Q_g = \psi.$$
\end{theorem}
On the other hand, we are interested in the extent to which the generic stability fails for flat metrics. Applying perturbation analysis, we discovered that the positivity of $Q$-curvature is an obstruction for flat metrics, exactly as it is in the case of scalar curvature. That is, flat metrics are locally rigid under nonnegative $Q$-curvature. As a special case, we proved a local rigidity result for $Q$-curvature on the torus $T^n$ ($n \geq 3$), which is similar to the non-existence result for positive scalar curvature metrics on tori due to Schoen-Yau and Gromov-Lawson (\cite{S-Y_1, S-Y_2, G-L_1, G-L_2}).
\begin{theorem}\label{flat_local_rigidity}
For $n \geq 3$, let $(M^n,\bar{g})$ be a closed flat Riemannian manifold and $g$ be a metric on $M$ with $$Q_g \geq 0.$$ If $||g - \bar{g}||_{C^2(M, \bar{g})}$ is sufficiently small, then $g$ is also flat.
\end{theorem}
As for global rigidity, we have the following interesting result:
\begin{theorem}\label{thm:rigidity_tori}
Suppose $g$ is a locally conformally flat metric on $T^n$ with $$Q_g \geq 0,$$ then $g$ is flat.
In particular, any metric $g$ on $T^4$ with nonnegative $Q$-curvature has to be flat.
\end{theorem}
It would be interesting to compare notions of $Q$-singular and vacuum static spaces. In fact, we have the following observation:
\begin{theorem}\label{Q-static_R-static}
Let $\mathcal{M}_R$ be the space of all closed vacuum static spaces and $\mathcal{M}_Q$ be the space of all closed $Q$-singular spaces.
Suppose $(M, g, f) \in \mathcal{M}_R \cap \mathcal{M}_Q$, then $M$ is Einstein necessarily. In particular, it has to be either Ricci flat or isometric to a round sphere.
\end{theorem}
This article is organized as follows: We explain some general notations and conventions in Section 2. We then give a characterization of $Q$-singular spaces and proofs of Theorems \ref{thm:Q_const} and \ref{Classificaition_Q_singular_Einstein} in Section 3. In Section 4, we investigate several stability results, including Theorem \ref{Q_stability}, Corollaries \ref{cor:stab_pos_Einstein}, \ref{cor:prescribing_zero_Q}, \ref{cor:pos_Y_negative_Q} and Theorem \ref{Ricci_flat_stability}. In Section 5, the local rigidity of flat metrics, namely Theorems \ref{flat_local_rigidity} and \ref{thm:rigidity_tori}, will be shown, and some relevant results will also be discussed there. In Section 6, we discuss the relation between $Q$-singular and vacuum static spaces and prove Theorem \ref{Q-static_R-static} at the end of that section. At the end of this article, we provide a brief discussion of the conformal properties of the $Q$-curvature and the Paneitz operator from the viewpoint of conformal geometry.\\
\paragraph{\textbf{Acknowledgement}}
The authors would like to express their appreciation to Professor Sun-Yung Alice Chang, Professor Justin Corvino, Professor Matthew Gursky, Professor Fengbo Hang, Professor Jie Qing, Professor Paul Yang, and Dr. Yi Fang for their interest in this work and inspiring discussions. Especially, we would like to thank Professor Matthew Gursky for pointing out the work in \cite{C-G-Y}. The authors are also grateful to the MSRI/IAS/PCMI Geometric Analysis Summer School 2013, which created the opportunity for initiating this work. Part of the work was done when the second author visited the \emph{Institut Henri Poincar\'e} and we would like to express our appreciation for the hospitality of IHP. \\
\section{Notations and conventions}
Throughout this article, we will always assume $(M^n, g)$ to be an $n$-dimensional closed Riemannian manifold ($n \geq 3$) unless otherwise stated. Here by \emph{closed}, we mean compact without boundary. We also take $\{\partial_i\}_{i=1}^n$ to be local coordinates around a given point, and we use components with respect to these coordinates to denote tensors and vectors throughout the article.\\
For convenience, we use following notions:
$\mathcal{M}$ - the set of all smooth metrics on $M$;
$\mathscr{D}(M)$ - the set of all smooth diffeomorphisms $ \varphi : M \rightarrow M$;
$\mathscr{X}(M)$ - the set of all smooth vector fields on $M$;
$S_2(M)$ - the set of all smooth symmetric 2-tensors on $M$.\\
We adopt the following convention for Ricci curvature tensor,
\begin{align*}
R_{jk} = R^i_{ijk} = g^{il} R_{ijkl}.
\end{align*}
As for the Laplacian operator, we simply use the common convention as follows,
\begin{align*}
\Delta_g := g^{ij} \nabla_i \nabla_j.
\end{align*}
Let $h, k \in S_2(M)$; for convenience, we define the following operations:
\begin{align*}
(h \times k )_{ij} := g^{kl}h_{ik}k_{jl} = h_i^lk_{lj}
\end{align*}
and
\begin{align*}
h \cdot k := tr (h \times k) = g^{ij}g^{kl}h_{ik}k_{jl} = h^{jk}k_{jk}.
\end{align*}
Let $X \in \mathscr{X}(M)$ and $h \in S_2(M)$, we use the following notations for operators
\begin{align*}
(\overset{\circ}{Rm} \cdot h )_{jk}:= R_{ijkl} h^{il},
\end{align*}
and
\begin{align*}
(\delta_g h)_i := - (div_g h)_i = -\nabla^j h_{ij},
\end{align*}
which is the $L^2$-formal adjoint of the Lie derivative (up to a scalar multiple) $$\frac{1}{2}(L_X g)_{ij} = \frac{1}{2} ( \nabla_i X_j + \nabla_j X_i).$$\\
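For completeness, the adjointness just mentioned follows from a one-line integration by parts on the closed manifold $(M,g)$: for any $h \in S_2(M)$ and $X \in \mathscr{X}(M)$,
\begin{align*}
\int_M \left\langle h, \frac{1}{2} L_X g \right\rangle dv_g = \int_M h^{ij}\nabla_i X_j \, dv_g = - \int_M \nabla^i h_{ij}\, X^j \, dv_g = \int_M \langle \delta_g h, X\rangle\, dv_g,
\end{align*}
where we used the symmetry of $h$ in the first equality.\\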
\section{Characterizations of Q-singular spaces}
Let $(M,g)$ be a Riemannian manifold and $\{g(t)\}_{t \in (-\varepsilon, \varepsilon)}$ be a 1-parameter family of metrics on $M$ with $g(0) =g$ and $g'(0) = h$. It is easy to see that ${g'}^{ij} = - h^{ij}$.\\
We have the following well-known formulae for linearizations of geometric quantities. (c.f. \cite{C-L-N, F-M, Yuan})
\begin{proposition}
The linearization of Christoffel symbol is
\begin{align}
{\Gamma'}_{ij}^k = \frac{1}{2} g^{kl} \left( \nabla_i h_{jl} +\nabla_j h_{il} - \nabla_l h_{ij} \right).
\end{align}
\label{1st_variation_Ricci_scalar}
The linearization of Ricci tensor is
\begin{align}
Ric' = \left.\frac{d}{dt}\right|_{t=0} Ric(g(t))_{jk} = - \frac{1}{2}\left( \Delta_L h_{jk} +
\nabla_j \nabla_k (tr h) + \nabla_j (\delta h)_k + \nabla_k (\delta
h)_j \right),
\end{align}
where the Lichnerowicz Laplacian acting on $h$ is defined to be
\begin{align*}
\Delta_L h_{jk} = \Delta h_{jk} + 2 (\overset{\circ}{Rm}\cdot
h)_{jk} - R_{ji} h^i_k - R_{ki}h^i_j,
\end{align*}
and the linearization of scalar curvature is
\begin{align}\label{scalar_1st_variation}
R' = \left.\frac{d}{dt}\right|_{t=0} R(g(t)) = - \Delta (tr h) + \delta^2 h - Ric \cdot h.
\end{align}
\end{proposition}
For simplicity, we use $'$ to denote differentiation with respect to the parameter $t$, evaluated at $t=0$.\\
\begin{lemma}
The linearization of the Laplacian acting on the scalar curvature is
\begin{align}\label{laplacian_scalar_variation}
(\Delta R)' = \left.\frac{d}{dt}\right|_{t=0} \Delta_{g(t)} R(g(t)) = - \nabla^2 R\cdot h + \Delta
R' + \frac{1}{2} dR \cdot (d( tr h ) + 2\delta h).
\end{align}
\end{lemma}
\begin{proof}
Calculating in normal coordinates at an arbitrary point,
\begin{align*}
&\left.\frac{d}{dt}\right|_{t=0} \Delta_{g(t)} R(g(t)) \\
=& \left.\frac{d}{dt} \right|_{t=0}\left({g^{ij}(\partial_i\partial_j R-\Gamma^k_{ij}\partial_kR)}\right)\\
=& -h^{ij}\partial_i\partial_jR+g^{ij}(\partial_i\partial_j R'-(\Gamma^k_{ij})'\partial_k R-\Gamma^k_{ij}\partial_k R')\\
=& -h^{ij}\partial_i\partial_j R - g^{ij}(\Gamma^k_{ij})'\partial_k R+\Delta R'\\
=& -h^{ij}\partial_i\partial_j R - \frac{1}{2}g^{ij}g^{kl}(\nabla_ih_{jl}+\nabla_jh_{il}-\nabla_lh_{ij})\partial_kR+\Delta R'\\
=& -h^{ij}\nabla_i\nabla_jR-\nabla^ih^k_i\nabla_kR+\frac{1}{2}\nabla^k(trh)\nabla_kR+\Delta R'\\
=& - \nabla^2 R\cdot h + \Delta R' + \frac{1}{2} dR \cdot (d( tr h ) + 2\delta h).
\end{align*}
\end{proof}
Now we can calculate the linearization of $Q$-curvature (1st variation).\\
\begin{proposition}\label{Q_1st_variation}
The linearization of $Q$-curvature is
\begin{align}\label{Q_linearization}
\Gamma_g h :=& DQ_g \cdot h
= A_n \left( - \Delta^2 (tr h) + \Delta \delta^2 h -
\Delta ( Ric \cdot h ) + \frac{1}{2} dR \cdot (d( tr h ) + 2\delta h) - \nabla^2 R\cdot h\right)
\\ \notag & - B_n \left( Ric \cdot \Delta_L h + Ric \cdot
\nabla^2(tr h) + 2 Ric \cdot\nabla (\delta h) +2 (Ric\times Ric) \cdot h
\right) \\
\notag &+ 2 C_n R \left( - \Delta (tr h) + \delta^2 h - Ric \cdot h
\right).
\end{align}
\end{proposition}
\begin{proof}
Note that
\begin{align*}
\left.\frac{d}{dt}\right|_{t=0} |Ric(g(t))|_{g(t)}^2
=&\left.\frac{d}{dt}\right|_{t=0} \left(g^{ik}g^{jl}R_{ij}R_{kl}\right)
=2g^{ik}g^{jl}R'_{ij}R_{kl}+2g'^{ik}g^{jl}R_{ij}R_{kl}\\
=&2Ric' \cdot Ric - 2(Ric\times Ric) \cdot h,
\end{align*}
then we finish the proof by combining it with (\ref{1st_variation_Ricci_scalar}), (\ref{scalar_1st_variation}) and (\ref{laplacian_scalar_variation}).
\end{proof}
Now we can derive the expression of $\Gamma_g^*$ and hence the $Q$-singular equation $$\Gamma_g^* f = 0.$$
\begin{proposition}\label{prop:Gamma^*}
The $L^2$-formal adjoint of $\Gamma_g$ is
\begin{align*}
\Gamma_g^* f :=& A_n \left( - g \Delta^2 f + \nabla^2
\Delta f - Ric \Delta f + \frac{1}{2} g \delta (f dR) + \nabla ( f
dR) - f \nabla^2 R \right)\\ \notag
& - B_n \left( \Delta (f Ric) + 2 f
\overset{\circ}{Rm}\cdot Ric + g \delta^2 (f Ric) + 2 \nabla \delta (f
Ric) \right)\\ \notag
&- 2 C_n \left( g\Delta (f R) - \nabla^2 (f R) + f R
Ric \right).
\end{align*}
\end{proposition}
\begin{proof}
For any compactly supported symmetric 2-tensor $h \in S_2(M)$, we have
\begin{align*}
&\int_M f \left.\frac{d}{dt}\right|_{t=0} {\Delta_{g(t)} R(g(t))}\ dv_g\\
=&\int_M f\left(- \nabla^2 R\cdot h + \frac{1}{2} dR \cdot (d( tr h ) + 2\delta h)+\Delta R'\right)dv_g\\
=&\int_M \left(-h \cdot f\nabla^2R+h \cdot \nabla(fdR) + \frac{1}{2}h \cdot \delta(fdR)g + \Delta f(-\Delta trh + \delta^2h-Ric\cdot h)\right)dv_g\\
=&\int_M \left\langle-f\nabla^2R+\nabla(fdR) + \frac{1}{2}g\delta(fdR) -g\Delta^2f+\nabla^2\Delta f-Ric\Delta f,\ h\right\rangle dv_g.
\end{align*}\\
Similarly,
\begin{align*}
&\int_M f \left.\frac{d}{dt}\right|_{t=0} |Ric(g(t))|^2_{g(t)}\ dv_g\\
=&\int_M f\left(-2h^{il}g^{jk}R_{ij}R_{kl}+2R^{ij}{R'}_{ij}\right)dv_g\\
=&\int_M -2f(Ric\times Ric)\cdot h-fR^{ij}\left(\Delta_Lh_{ij} +\nabla_i\nabla_j(tr h)
+\nabla_i(\delta h)_j + \nabla_j(\delta h)_i\right) dv_g\\
=&\int_M \left\langle -\Delta_L (fRic)-2f(Ric\times Ric) - 2\nabla\delta(fRic) - g\delta^2(fRic),\ h \right\rangle dv_g\\
=&\int_M \left\langle-\Delta (fRic)-2f\overset\circ{Rm}\cdot Ric - 2\nabla\delta(fRic)-g\delta^2(fRic),\ h \right\rangle dv_g.
\end{align*}\\
And also,
\begin{align*}
&\int_M f \left.\frac{d}{dt}\right|_{t=0} (R(g(t)))^2 \ dv_g\\
=&\int_M 2 f R \left(-\Delta trh+\delta^2h-Ric\cdot h \right)dv_g\\
=&\int_M 2\left(-\Delta (fR)trh+\nabla^2(fR) \cdot h-(fR)Ric\cdot h\right)dv_g\\
=&\int_M 2 \left\langle -g\Delta (fR)+\nabla^2(fR)-(fR)Ric,\ h \right\rangle dv_g
\end{align*}
Combining all the equalities above, we have
\begin{align*}
\int_M f DQ_g \cdot hdv_g=\int_M \langle f,\Gamma_g h\rangle dv_g=\int_M \langle\Gamma_g^*f,h\rangle dv_g.
\end{align*}
\end{proof}
\begin{corollary}
\begin{align*}
\mathscr{L}_g f := tr_g \Gamma_g^* f = \frac{1}{2} \left( P_g - \frac{n + 4}{2} Q_g \right) f.
\end{align*}
\end{corollary}
\begin{proof}
Taking trace of $\Gamma_g^* f$, we have
\begin{align*}
tr \Gamma_g^* f =& -2 Q_g f - (n-1) A_n \Delta^2 f - \left( \frac{n-4}{2} A_n + \frac{n}{2}B_n + 2(n-1)C_n \right)f \Delta R \\
&- (A_n + B_n + 2(n-1)C_n) R \Delta f - \left( \frac{n-2}{2}A_n + n B_n + 4(n-1) C_n\right) df \cdot dR \\
&- (n-2) B_n Ric \cdot \nabla^2 f\\
=& -2 Q_g f + \frac{1}{2} \Delta^2 f - \frac{(n-2)^2 + 4}{4(n-1)(n-2)} R \Delta f - \frac{n-6}{4(n-1)} df \cdot dR + \frac{2}{n-2} Ric \cdot \nabla^2 f\\
=& -2 Q_g f + \frac{1}{2} \left( \Delta^2 f - a_n R \Delta f - \left(a_n + \frac{1}{2}b_n\right) df \cdot dR - b_n Ric \cdot \nabla^2 f \right)\\
=& \frac{1}{2} \left( \Delta_g^2 f- div_g \left[(a_n R_g g + b_n Ric_g) df \right] \right) - 2 Q_g f \\
= &\frac{1}{2} \left( P_g - \frac{n+4}{2} Q_g \right) f.
\end{align*}
Here we used the fact that $\frac{n-4}{2} A_n + \frac{n}{2}B_n + 2(n-1)C_n =0$.
\end{proof}
Combining the above calculations, we can justify that the $Q$-curvature is constant for $Q$-singular spaces, using exactly the same argument as in \cite{C-G-Y}. For the convenience of readers, we include a sketch of the proof as follows. For more details, please refer to \cite{C-G-Y}.
\begin{theorem}[Chang-Gursky-Yang \cite{C-G-Y}]
A $Q$-singular space $(M^n, g)$ has constant $Q$-curvature and
\begin{align}
\frac{n+4}{2}Q_g \in Spec(P_g).
\end{align}
\end{theorem}
\begin{proof}
We only need to show $Q_g$ is a constant.
For any smooth vector field $X \in \mathscr{X}(M)$, we have
\begin{align*}
\int_M \langle X, \delta_g \Gamma_g^* f \rangle dv_g = \frac{1}{2}\int_M \langle L_X g, \Gamma_g^* f \rangle dv_g = \frac{1}{2}\int_M f\ \Gamma_g (L_X g)\ dv_g = \frac{1}{2}\int_M \langle f dQ_g, X \rangle \ dv_g.
\end{align*}
where in the last equality we used the diffeomorphism invariance of the $Q$-curvature: $\Gamma_g (L_X g) = \left.\frac{d}{dt}\right|_{t=0} Q_{\varphi_t^* g} = \langle dQ_g, X \rangle$, $\varphi_t$ being the flow generated by $X$.
Thus $$fdQ_g = 2 \delta_g \Gamma_g^* f = 0$$ on $M$.
Suppose there is an $x_0 \in M$ with $f(x_0) = 0$ and $dQ_g (x_0) \neq 0$. By taking derivatives, we can see that $f$ vanishes to infinite order at $x_0$, \emph{i.e.} $$\nabla^m f(x_0) = 0$$ for any $m \geq 1$. Since $f$ also satisfies $$\mathscr{L}_g f = \frac{1}{2} \left(P_g f - \frac{n+4}{2}Q_g f\right) = 0,$$ by applying the Carleman estimates of Aronszajn (\cite{A}), we conclude that $f$ vanishes identically on $M$. But this contradicts the fact that $g$ is $Q$-singular.
Therefore, $dQ_g$ vanishes on $M$ and thus $Q_g$ is a constant. This completes the proof of Theorem \ref{thm:Q_const}.
\end{proof}
\textbf{Examples of $Q$-singular spaces.}
\begin{itemize}
\item
Ricci flat spaces\\
Take $f$ to be a nonzero constant, then it satisfies the $Q$-singular equation.\\
\item
Spheres\\
Take $f$ to be the $(n+1)^{th}$-coordinate function $x_{n+1}$ restricted on $$S^n = \{x\in \mathbb{R}^{n+1}: |x|^2 = 1\}.$$ Then $f$ satisfies the Hessian type equation $$\nabla^2 f + g f = 0.$$ With the aid of the following Lemma \ref{Q-static_Einstein}, we can easily check that $f$ satisfies the $Q$-singular equation (a short verification is sketched right after this list).\\
\item
Hyperbolic spaces\\
Similarly, if we still take $f$ to be the $(n+1)^{th}$-coordinate function $x_{n+1}$ but restricted on $$H^n = \{(x',x_{n+1})\in \mathbb{R}^{n+1}: |x'|^2 - |x_{n+1}|^2 = -1, x' \in \mathbb{R}^n, x_{n+1} > 0\}.$$ Then $f$ satisfies the similar Hessian type equation $$\nabla^2 f - g f = 0,$$ and hence solves the $Q$-singular equation.
\end{itemize}
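To make the sphere example explicit, here is a short verification of our own based on the last display in the proof of Lemma \ref{Q-static_Einstein}: on the unit sphere one has $R_g = n(n-1)$, $\Delta f = -nf$ and $\nabla^2 f = -gf$, so that $\varphi := \Delta f + \Lambda_n R_g f = c_n f$ with the constant $c_n = -n + \Lambda_n n(n-1)$, and hence
\begin{align*}
\Gamma_g^* f = A_n \left[ \nabla^2 \varphi - g\left( \Delta \varphi + \frac{R_g}{n} \varphi \right) \right] = A_n c_n \left[ -gf - g\left(-nf + (n-1) f\right) \right] = A_n c_n \left[ -gf + gf \right] = 0.
\end{align*}
The hyperbolic example can be checked in the same way, using the Hessian equation $\nabla^2 f - gf = 0$.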
Given the complexity of the $Q$-singular equation, one can easily see that it is very difficult to study the geometry of generic $Q$-singular spaces. However, when restricted to some special classes of Riemannian manifolds, we can still obtain some interesting results about $Q$-singular spaces.
\begin{lemma}\label{Q-static_Einstein}
Let $(M,g)$ be an Einstein manifold and $f \in \ker \Gamma_g^*$. Then we have
\begin{align}
\Gamma_g^* f = A_n \left[ \nabla^2 \left(\Delta f + \Lambda_n R f
\right) + \frac{R}{n(n-1)}g \left(\Delta f + \Lambda_n R f\right)
\right] = 0,
\end{align}
where $\Lambda_n : = \frac{2}{A_n} (\frac{B_n}{n} + C_n) = - \frac{(n+2)(n-2)}{2n(n-1)} < 0$, for any $n \geq 3$.
\end{lemma}
\begin{proof}
Since $g$ is Einstein, \emph{i.e.} $Ric=\frac{R}{n}g$, we get
\begin{align*}
\Gamma_g^* f=&A_n \left[ -g\Delta^2f+\nabla^2\Delta f-\frac{R}{n}g\Delta f \right] -B_n\left[\frac{R}{n}\Delta (fg)+2f \overset\circ{Rm}\cdot \frac{R}{n}g + 2\frac{R}{n}\nabla\delta(fg) +\frac{R}{n}g\delta^2(fg) \right]\\
& \ \ -2C_n\left[Rg\Delta f-R\nabla^2f+\frac{R^2}{n}fg\right]\\
=&A_n \left[-g\Delta^2f+\nabla^2\Delta f-\frac{R}{n}g\Delta f\right] - 2B_n \left[ \frac{R}{n}g\Delta f+\frac{R^2}{n^2}gf - \frac{R}{n}\nabla^2f \right] \\
& \ \ -2C_n \left[ Rg\Delta f-R\nabla^2f+\frac{R^2}{n}fg \right]\\
=& A_n \left[-g\Delta^2f+\nabla^2\Delta f - \left(\frac{1}{n} + \Lambda_n\right) R g\Delta f + \Lambda_n R \nabla^2 f - \frac{\Lambda_n}{n}R^2 g f \right]\\
=& A_n \left[ \nabla^2 \left(\Delta f + \Lambda_n R f\right) - g \left( \Delta \left(\Delta f + \Lambda_n R f\right) +\frac{R}{n}\left(\Delta f + \Lambda_n R f\right) \right) \right].
\end{align*}
Since $f\in \ker \Gamma_g^*$, \emph{i.e.} $\Gamma_g^* f=0$, taking the trace we have
\begin{align*}
A_n \left[ - (n-1) \Delta \left(\Delta f + \Lambda_n R f\right) - R\left(\Delta f + \Lambda_n R f\right) \right] = 0.
\end{align*}
Thus,
\begin{align*}
\Delta \left(\Delta f + \Lambda_n R f\right) = - \frac{R}{n-1}\left(\Delta f + \Lambda_n R f\right).
\end{align*}
Now, substituting this into the expression of $\Gamma_g^* f$, we have
$$
\Gamma_g^* f =A_n \left[ \nabla^2 \left(\Delta f + \Lambda_n R f\right) + \frac{R}{n(n-1)}g \left(\Delta f + \Lambda_n R f\right)\right] = 0,
$$
where $\Lambda_n= \frac{2}{A_n}\left(\frac{B_n}{n}+C_n\right) = - \frac{(n+2)(n-2)}{2n(n-1)} < 0$, for any $n \geq 3$.\\
\end{proof}
Next we show that a closed $Q$-singular Einstein manifold with positive scalar curvature has to be spherical:
\begin{proposition} \label{Q-static_sphere}
Let $(M^n,g)$ be a complete $Q$-singular Einstein manifold with positive scalar curvature. Then $(M^n,g)$ is isometric to the round sphere $(S^n(r), g_{_{S^n(r)}})$, with radius $r = \left(\frac{n(n-1)}{R_g}\right)^{\frac{1}{2}}$. Moreover, $\ker \Gamma_g^*$ consists of eigenfunctions of $(-\Delta_g)$ associated to $\lambda_1 = \frac{R_g}{n-1}> 0$ on $S^n(r)$ and hence $\dim \ker \Gamma_{g}^* = n+1$.
\end{proposition}
\begin{proof}
Note that $\Lambda_n < 0$ and that $Spec (-\Delta_g)$ consists of nonnegative real numbers, so that $$\Lambda_n R_g \not\in Spec ( - \Delta_g).$$
Let $f \in \ker \Gamma_g^*$, $f\not\equiv 0$ and $\varphi := \Delta_g f + \Lambda_n R_g f$, then
\begin{align*}
\varphi \not\equiv 0.
\end{align*}
Therefore, by Lemma \ref{Q-static_Einstein}, we have the following equation
\begin{align}\label{Obata_type_eqn}
\nabla^2\varphi=-\frac{R_g}{n(n-1)}\varphi g
\end{align}
with a nontrivial solution $\varphi$.\\
Taking trace, we get
\begin{align}
\Delta_g \varphi = - \frac{R_g}{n-1}\varphi.
\end{align}
\emph{i.e.} $\frac{R_g}{n-1}$ is an eigenvalue of $-\Delta_g$.\\
On the other hand, by \emph{Lichnerowicz-Obata's Theorem}, the first nonzero eigenvalue of $(- \Delta_g)$ satisfies
$$\lambda_1 \geq \frac{R_g}{n-1}$$
with equality if and only if $(M^n,g)$ is isometric to the round sphere $(S^n(r), g_{S^n(r)})$ with radius $r = \left(\frac{n(n-1)}{R} \right)^{\frac{1}{2}}$. Hence the first part of the proposition follows.\\
Since $\Lambda_n R_g \not\in Spec ( - \Delta_g)$, it implies that the operator $\Delta_g + \Lambda_n R_g$ is invertible. On the other hand,
\begin{align*}
\left( \Delta_g + \Lambda_n R_g \right) \left(\Delta_g + \frac{R_g}{n-1}\right) f = \left(\Delta_g + \frac{R_g}{n-1}\right) \left( \Delta_g + \Lambda_n R_g \right) f = \left(\Delta_g + \frac{R_g}{n-1}\right) \varphi=0.
\end{align*}
Thus $$\left(\Delta_g + \frac{R_g}{n-1}\right) f = 0.$$ \emph{i.e.} $f$ is an eigenfunction associated to $\lambda_1 = \frac{R_g}{n-1}$.\\
Therefore, $\ker \Gamma_g^*$ can be identified with the eigenspace associated to the first nonzero eigenvalue $\lambda_1> 0$ of $(-\Delta_g)$ on $S^n(r)$ and hence $\dim \ker \Gamma_g^* = n+1$.
\end{proof}
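For concreteness (a standard fact recorded here for the reader's convenience), on $S^n(r) \subset \mathbb{R}^{n+1}$ this eigenspace is spanned by the restrictions of the ambient coordinate functions, which explains the dimension count:
\begin{align*}
\ker \Gamma_g^* = {\rm span}\left\{ x_1|_{S^n(r)}, \dots, x_{n+1}|_{S^n(r)} \right\}, \qquad - \Delta_g \big(x_i|_{S^n(r)}\big) = \frac{n}{r^2}\, x_i|_{S^n(r)} = \frac{R_g}{n-1}\, x_i|_{S^n(r)}.
\end{align*}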
As for Ricci flat ones, we have:
\begin{theorem}\label{Static_Ricci_flat}
Suppose $(M^n,g)$ is a $Q$-singular Riemannian manifold. If $(M,g)$ admits a nonzero constant potential $f \in \ker \Gamma_g^*$, then $(M,g)$ is $Q$-flat, \emph{i.e.} the $Q$-curvature vanishes identically. Furthermore, suppose $(M,g)$ is a closed $Q$-singular Einstein manifold; then $(M,g)$ is Ricci flat if and only if $\ker \Gamma_g^*$ consists of constant functions.
\end{theorem}
\begin{proof}
Without loss of generality, we may assume that $1 \in \ker \Gamma_g^*$. We have
\begin{align*}
tr\Gamma^{*}_{g} 1 = \mathscr{L}_g 1 = \frac{1}{2} \left( P_g 1 - \frac{n+4}{2}Q_g\right) = - 2 Q_g = 0,
\end{align*}
which implies that $Q_g=0$.\\
Now assume $(M,g)$ is a closed $Q$-singular Einstein manifold.\\
We have
\begin{align*}
\int_M Q_g dv_g &= \int_M \left(A_n \Delta_{g} R + B_n |Ric|^2 + C_n R^2\right) dv_g \\
&= \left(\frac{B_n}{n} + C_n \right) R^2\ Vol_g(M) \\
&= \frac{(n+2)(n-2)}{8n(n-1)^2}R^2\ Vol_g(M).
\end{align*}
Suppose $1 \in \ker \Gamma_g^*$, then $Q_g = 0$ and hence
\begin{align*}
R = 0 ,
\end{align*}
That means $(M, g)$ is Ricci flat.\\
On the other hand, if $(M,g, f)$ is Ricci flat,
\begin{align*}
\mathscr{L}_g f = \frac{1}{2}\Delta^2 f = 0.
\end{align*}
Since $M$ is compact without boundary, integration by parts gives $\int_M (\Delta f)^2 dv_g = \int_M f \Delta^2 f \, dv_g = 0$, so $\Delta f = 0$ and hence $f$ is a nonzero constant on $M$.
\end{proof}
\begin{remark}
It was shown in \cite{C-G-Y} that for a closed $4$-dimensional $Q$-singular space $(M, g)$, $1 \in \ker \Gamma_g^*$ if and only if $g$ is Bach flat with vanishing $Q$-curvature. However, this theorem does not generalize to other dimensions automatically. In fact, in general dimensions a direct calculation shows that $1 \in \ker \Gamma_g^*$ does not imply that $g$ is Bach flat, although it is still $Q$-flat as shown above.
\end{remark}
Theorem \ref{Classificaition_Q_singular_Einstein} follows from combining Propositions \ref{Q-static_sphere} and \ref{Static_Ricci_flat}.\\
For Einstein manifolds with negative scalar curvature, they are generically not $Q$-singular (note that in this case $\Lambda_n R > 0$, so a priori it may belong to $Spec(-\Delta_g)$, which explains the spectral assumption below):
\begin{proposition}\label{Q-hyperbolic}
Let $(M, g)$ be a closed Einstein manifold with scalar curvature $R < 0$. Suppose $\Lambda_n R \not\in Spec(-\Delta_g)$, then $$\ker \Gamma_g^* = \{0\}.$$ \emph{i.e.} $(M, g)$ is not $Q$-singular.
\end{proposition}
\begin{proof}
For any smooth function $f\in \ker \Gamma_g^*$, let $\varphi := \Delta f + \Lambda_n R f$. By Lemma \ref{Q-static_Einstein}, we have $$\nabla^2 \varphi + \frac{R}{n(n-1)}g\varphi = 0.$$
Taking trace,
$$\Delta \varphi + \frac{R}{n-1}\varphi = 0.$$
Thus $$\varphi = \Delta f + \Lambda_n R f = 0$$ identically on $M$, since $\frac{R}{n-1} < 0$ and hence $\frac{R}{n-1} \not\in Spec(-\Delta_g)$. Thus $$\Delta f = - \Lambda_n R f .$$
By assuming $\Lambda_n R \not\in Spec(-\Delta_g)$, we conclude that $f \equiv 0$. That means, $\ker \Gamma_g^*$ is trivial.
\end{proof}
\section{Stability of Q-curvature}
In this section, we will discuss the linearized stability of $Q$-curvature on closed manifolds.
As main tools, we need the following key results. Proofs can be found in the articles referred to.
\begin{theorem}[Splitting Theorem (\cite{B-E, F-M})]\label{Splitting_Theorem}
Let $(M,g)$ be a closed Riemannian manifold, $E$ and $F$ be vector bundles on $M$. Let $D : C^\infty (E) \rightarrow C^\infty(F)$ be a $k^{th}$-order differential operator and $D^* : C^\infty(F) \rightarrow C^\infty(E)$ be its $L^2$-formal adjoint operator.
For $k \leq s\leq \infty$ and $1< p < \infty$, let $D_s : W^{s,p}(E) \rightarrow W^{s-k,p}(F)$ and $D_s^* : W^{s,p}(F) \rightarrow W^{s-k,p}(E)$ be the bounded linear operators by extending $D$ and $D^*$ respectively.
Assume that $D$ or $D^*$ has injective principal symbols, then
\begin{align}
W^{s,p}(F) = \Ima D_{s+k} \oplus \ker D_s^*.
\end{align}
Moreover,
\begin{align}
C^\infty(F) = \Ima D \oplus \ker D^*.
\end{align}
\end{theorem}
In particular, taking the vector bundle $F$ to be the bundle of symmetric 2-tensors $S_2(M)$, we have
\begin{corollary}[Canonical decomposition of $S_2$ (\cite{B-E, F-M})]\label{Canonical_decomposition_S_2}
Let $(M,g)$ be a closed Riemannian manifold, then the space of symmetric 2-tensors can be decomposed into
\begin{align}
S_2(M) = \{ L_X g : X \in \mathscr{X}(M)\} \oplus \ker \delta_g.
\end{align}
\end{corollary}
Another result we need is
\begin{theorem}[Generalized Inverse Function Theorem (\cite{Gel'man})]\label{Generalized_Inverse_Function_Theorem}
Let $X$, $Y$ be Banach spaces and $f : U_{x_0} \rightarrow Y$ be a continuously differentiable map with $f(x_0) = y_0$, where $U_{x_0} \subset X$ is a neighborhood of $x_0$. Suppose the derivative $D f (x_0) : X \rightarrow Y$ is a surjective bounded linear map. Then there exists $V_{y_0} \subset Y$, a neighborhood of $y_0$, and a continuous map $\varphi : V_{y_0} \rightarrow U_{x_0}$ such that
\begin{align*}
f(\varphi(y)) = y, \ \ \ \forall y\in V_{y_0};
\end{align*}
and
\begin{align*}
\varphi(y_0) = x_0.
\end{align*}
\end{theorem}
Combining Corollary \ref{Canonical_decomposition_S_2} and Theorem \ref{Generalized_Inverse_Function_Theorem}, we obtain \emph{Ebin's Slice Theorem}(\cite{Ebin}), which we will use later in this article.
\begin{theorem}[Slice Theorem (\cite{Ebin, F-M})]\label{Slice_Theorem}
Let $(M,\bar{g})$ be a Riemannian manifold. For $p> n$, suppose that $g$ is also a Riemannian metric on $M$ and $||g - \bar{g}||_{W^{2,p}(M,\bar{g})}$ is sufficiently small, then there exists a diffeomorphism $\varphi \in \mathscr{D} (M)$ such that $h := \varphi^* g - \bar{g}$ satisfies that $\delta_{\bar{g}} h = 0$ and moreover,
$$||h||_{W^{2,p}(M,\bar{g})} \leq N ||g - \bar{g}||_{W^{2,p}(M,\bar{g})},$$
where $N$ is a positive constant depending only on $(M, \bar g)$.
\end{theorem}
\begin{remark}
Brendle and Marques (c.f. \cite{B-M}) proved an analogous decomposition and slice theorem for a compact domain with boundary.
\end{remark}
Now we can give the proof of the main theorem (Theorem \ref{Q_stability}) in this section.
\begin{theorem}
Let $(M,\bar{g})$ be a closed Riemannian manifold. Assume
$(M,\bar{g})$ is not $Q$-singular, then the Q-curvature is linearized stable at $\bar g$ in the sense that $Q : \mathcal{M} \rightarrow C^{\infty}(M) $ is a submersion at $\bar{g}$.
Thus, there is a neighborhood $U \subset C^{\infty}(M)$ of $Q_{\bar{g}}$ such that for any $\psi \in U$, there exists a metric $g$ on $M$ close to $\bar{g}$ with $Q_g = \psi$.
\end{theorem}
\begin{proof}
The principal symbol of $\Gamma_{\bar{g}}^*$ is
\begin{align*}
\sigma_{\xi}(\Gamma_{\bar{g}}^*) = - A_n \left( g |\xi|^2 - \xi \otimes\xi \right)|\xi|^2.
\end{align*}
Taking trace, we get
\begin{align*}
tr\ \sigma_{\xi}(\Gamma_{\bar{g}}^*) = - A_n \left( n - 1 \right)|\xi|^4.
\end{align*}
Thus, $\sigma_{\xi}(\Gamma_{\bar{g}}^*) = 0$ implies that $\xi = 0$, \emph{i.e.} $\Gamma^*_{\bar{g}}$ has an injective principal symbol.
By the \emph{Splitting Theorem} \ref{Splitting_Theorem},
$$C^\infty(M) = \Ima \Gamma_{\bar{g}} \oplus \ker \Gamma^*_{\bar{g}},$$
which implies that $\Gamma_{\bar{g}}$ is surjective, since we assume that $(M,\bar{g})$ is not $Q$-singular, \emph{i.e.} $\ker \Gamma^*_{\bar{g}} = \{0\}$.
Therefore, applying the \emph{Generalized Inverse Function Theorem} (Theorem \ref{Generalized_Inverse_Function_Theorem}), the image under $Q$ of a neighborhood of $\bar g$ contains a neighborhood of $Q_{\bar{g}}$ in $C^\infty(M)$.
\end{proof}
As a consequence, we can derive the stability of generic positive Einstein manifolds (Corollary \ref{cor:stab_pos_Einstein}).
\begin{corollary}
Let $(M,\bar{g})$ be a closed positive Einstein manifold. Assume $(M,\bar{g})$ is not spherical, then the Q-curvature is linearized stable at $\bar g$.
\end{corollary}
\begin{proof}
Since $M$ is assumed not to be spherical, by Theorem \ref{Classificaition_Q_singular_Einstein}, $(M,\bar{g})$ is not $Q$-singular. Now the stability follows from Theorem \ref{Q_stability}. \\
\end{proof}
For a generic $Q$-flat manifold, we can prescribe any smooth function as the $Q$-curvature of some metric on the manifold (Corollary \ref{cor:prescribing_zero_Q}).
\begin{corollary}
Suppose $(M, \bar g)$ is a closed non-$Q$-singular manifold with vanishing $Q$-curvature. Then any smooth function $\varphi$ can be realized as a $Q$-curvature for some metric $ g$ on $M$.
\end{corollary}
\begin{proof}
Since $(M, \bar g)$ is non-$Q$-singular, applying Theorem \ref{Q_stability}, the image under the nonlinear map $Q$ of a neighborhood of the metric $\bar g$ contains a neighborhood of $Q_{\bar g} = 0$ in $C^\infty(M)$. Thus there exists an $\varepsilon_0 > 0$ such that for any smooth function $\psi$ with $||\psi||_{C^\infty(M)}< \varepsilon_0$, we can find a smooth metric $g_\psi$ close to $\bar g$ with $Q_{g_\psi} = \psi$.
Now for any nontrivial $\varphi \in C^\infty(M)$, let $\tilde \varphi:= u_{\varepsilon_0 , \varphi}\varphi$, where $u_{\varepsilon_0 , \varphi} = \frac{\varepsilon_0 }{2 ||\varphi||_{_{C^\infty(M)}}} > 0$. Clearly, $||\tilde \varphi||_{C^\infty(M)} < \varepsilon_0$ and hence there is a metric $g_{\tilde \varphi}$ close to $\bar g$ such that $Q_{g_{\tilde \varphi}} = \tilde \varphi$. Let
$$g = u_{\varepsilon_0 , \varphi}^{\frac{1}{2}} g_{\tilde \varphi},$$ then we have $$Q_g = u_{\varepsilon_0 , \varphi}^{-1} \cdot Q_{g_{\tilde \varphi}} = \varphi.$$
\end{proof}
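The rescaling step above uses the following elementary scaling property of the $Q$-curvature, recorded here for completeness (it also enters the proof of Theorem \ref{Ricci_flat_stability} below): for any constant $c>0$,
\begin{align*}
Q_{cg} = A_n \Delta_{cg} R_{cg} + B_n |Ric_{cg}|^2_{cg} + C_n R_{cg}^2 = c^{-2}\left( A_n \Delta_{g} R_{g} + B_n |Ric_{g}|^2_{g} + C_n R_{g}^2 \right) = c^{-2} Q_g,
\end{align*}
since $R_{cg} = c^{-1}R_g$, $\Delta_{cg} = c^{-1}\Delta_g$, $Ric_{cg} = Ric_g$ and $|Ric_{cg}|^2_{cg} = c^{-2}|Ric_g|^2_g$.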
Now we can prove Corollary \ref{cor:pos_Y_negative_Q}.
\begin{corollary}
Let $M$ be a closed manifold with positive Yamabe invariant $Y(M) > 0$. There is a metric $g$ with $Q$-curvature $Q_g < 0$ on $M$.
\end{corollary}
\begin{proof}
By Matsuo's theorem (see Corollary 2 in \cite{Mat14}), on a closed manifold $M$ with dimension $n\geq 3$ and positive Yamabe invariant, there exists a metric $g$ with scalar curvature $R_g = 0$ but $Ric_g \not\equiv 0$ on $M$. Thus the $Q$-curvature satisfies $$Q_g = - \frac{2}{(n-2)^2} |Ric_g|^2 \leq 0.$$
If $|Ric_g|>0$ pointwise, then $Q_g < 0$ on $M$. Otherwise, there is a point $p \in M$ such that $|Ric_g(p)|^2 = 0$ and hence $Q_g$ is not a constant on $M$. This implies that the metric $g$ is not $Q$-singular. Therefore, by Theorem \ref{Q_stability}, we can perturb the metric $g$ to obtain a metric with strictly negative $Q$-curvature. This gives the conclusion.
\end{proof}
For the Ricci flat case, we have a better result since we can identify $\ker \Gamma_{\bar{g}}^*$ with constants.
\begin{theorem}
Let $(M,\bar{g})$ be a closed Ricci flat manifold. Denote $$\Phi:=\{\psi \in C^{\infty}(M): \int_M \psi dv_{\bar{g}} = 0\}$$ to be the set of smooth functions with zero average. Then for any $\psi \in \Phi$, there exists a metric $g$ on $M$ such that $$Q_g = \psi.$$
\end{theorem}
\begin{proof}
In the proof of Theorem \ref{Q_stability}, we have shown that the principal symbol of $\Gamma_{\bar{g}}^*$ is injective and that the decomposition
$$C^\infty(M) = \Ima \Gamma_{\bar{g}} \oplus \ker \Gamma^*_{\bar{g}}$$
holds.
On the other hand, by Theorem \ref{Static_Ricci_flat}, $\ker \Gamma^*_{\bar{g}}$ consists of the constant functions and hence $ \Ima \Gamma_{\bar{g}} = \Phi$. By identifying $\Phi$ with its tangent space, we see that the map $Q$ is a submersion at $\bar g$ with respect to $\Phi$. Therefore, by the \emph{Generalized Inverse Function Theorem} (Theorem \ref{Generalized_Inverse_Function_Theorem}), we have local surjectivity. That is, there exist a neighborhood of $\bar{g}$, say $U_{\bar{g}} \subset \mathcal{M}$, and a neighborhood of $0$, say $V_0 \subset C^\infty(M)$, such that $V_0 \cap \Phi \subset Q(U_{\bar{g}})$, where $Q$ denotes the map $g\mapsto Q_{g}$.\\
Now for any $\psi \in \Phi$, let $r > 0$ be a sufficiently large constant such that $\frac{1}{r^4}\psi \in V_0$. Then there exists a metric $g_r \in U_{\bar{g}}$ such that $Q_{g_r} = \frac{1}{r^4} \psi$. Letting $g := \frac{1}{r^2}g_r$, we have $$Q_g = r^4 \cdot Q_{g_r} = r^4 \cdot \frac{1}{r^4}\psi = \psi.$$
This proves Theorem \ref{Ricci_flat_stability}.
\end{proof}
Similarly, we can also answer the prescribed $Q$-curvature problem near the standard spherical metric, by noting that
$$\ker \Gamma_{\bar{g}}^* = E_{\lambda_1}$$ from Theorem \ref{Q-static_sphere}:
\begin{theorem}\label{thm:stability_sphere}
Let $(S^n, \bar{g})$ be the standard unit sphere and $E_{\lambda_1}$ be the eigenspace of $(-\Delta_g)$ associated to the first nonzero eigenvalue $\lambda_1 = n$. Then for any $\psi \in E_{\lambda_1}^\perp$ with $||\psi - Q_{\bar{g}}||_{C^\infty(S^n,\bar{g})}$ sufficiently small, there exists a metric $g$ near $\bar{g}$ such that $$Q_g = \psi.$$
\end{theorem}
\section{Rigidity Phenomena of Flat Manifolds}
Let $(M,g)$ be a closed Riemannian manifold. Consider a deformation of $(M,g)$, $g(t)=g+th$, $t \in (-\varepsilon, \varepsilon)$.\\
We have calculated the first variation of $Q$-curvature (see equation (\ref{Q_linearization})). In order to study the local rigidity of $Q$-curvature, we are going to calculate the second variation of $Q$-curvature.\\
First, we recall the following well-known $2^{nd}$-variation formulae, which can be found in \cite{F-M}. For detailed calculations, we refer to the appendices of \cite{Yuan}.
\begin{proposition}
We have the following $2^{nd}$-variation formulae for metrics,
\begin{align}
g''_{ij} = \left.\frac{d^2}{dt^2}\right|_{t=0} (g(t))_{ij} = 0,
\end{align}
and
\begin{align}
g''^{ij} = \left.\frac{d^2}{dt^2}\right|_{t=0} (g(t))^{ij} = 2 h_k^j h^{ik}.
\end{align}
Also for Christoffel Symbols,
\begin{align}
{\Gamma''}_{ij}^k = \left.\frac{d^2}{dt^2}\right|_{t=0} \Gamma(g(t))^k_{ij} = - h^{kl} \left( \nabla_i h_{jl} + \nabla_j h_{il}
- \nabla_l h_{ij} \right).
\end{align}
\end{proposition}
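For the reader's convenience, we indicate how the second formula arises; it is simply the $t^2$-coefficient in the Neumann series of the inverse metric. Since $g(t) = g + th$,
\begin{align*}
(g(t))^{ij} = g^{ij} - t\, h^{ij} + t^2\, h^{ik} h_k{}^j + O(t^3),
\end{align*}
and differentiating twice in $t$ at $t = 0$ gives $g''^{ij} = 2 h^{ik} h_k{}^j$, in agreement with the proposition.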
\begin{lemma}
The second variation of $Q$-curvature is
\begin{align}
\left.\frac{d^2}{dt^2}\right|_{t=0} Q(g(t)) =& A_n [ \Delta_g R'' + 2 \nabla^2 R \cdot ( h \times h ) - 2 h \cdot \nabla^2 R' + ( 2 \delta h + d (tr h) ) \cdot dR' \\ \notag &\ \ \ \ \ \ + h^{ij} ( 2 \nabla_i h_j^k - \nabla^k h_{ij} ) \nabla_k R - h \cdot ((2 \delta h + d(tr h)) \otimes d R ) ]\\ \notag
& + B_n [ 4 (Ric \times Ric) \cdot ( h \times h ) + 2 | Ric \times h |^2 - 8 Ric' \cdot ( Ric \times h )\\ \notag &\ \ \ \ \ \ + 2 Ric''\cdot Ric + 2 | Ric' |^2 ]\\ \notag
& + C_n [ 2 R R'' + 2(R')^2], \notag
\end{align}
where $Ric'$, $Ric''$; $R'$, $R''$ are the first and second variations of Ricci tensor and scalar curvature at $g$ respectively.
\end{lemma}
\begin{proof}
Choose normal coordinates at any point $p \in M$, \emph{i.e.} $\Gamma_{jk}^i = 0$ at $p$, then
\begin{align*}
&\left.\frac{d^2}{dt^2}\right|_{t=0}(\Delta_{g(t)}R(g(t)))\\
=& (g^{ij}(\partial_i\partial_j R - \Gamma_{ij}^k \partial_k R))''\\
=& g''^{ij} \partial_i\partial_j R + 2 g'^{ij}(\partial_i\partial_j R' - {\Gamma'}_{ij}^k \partial_k R) + g^{ij}(\partial_i\partial_j R'' - {\Gamma''}_{ij}^k \partial_k R - 2{\Gamma'}_{ij}^k \partial_k R')\\
=& 2 h_k^j h^{ik} \nabla_i \nabla_j R - 2 h^{ij} (\nabla_i\nabla_j R' - {\Gamma'}_{ij}^k \nabla_k R) + \Delta R'' - g^{ij} {\Gamma''}_{ij}^k \nabla_k R - 2 g^{ij} {\Gamma'}_{ij}^k \nabla_k R'\\
=& \Delta R'' + 2 \nabla^2 R \cdot (h \times h) - 2 h\cdot \nabla^2 R' + ( 2 \delta h + d trh) \cdot dR' + h^{ij} ( 2 \nabla_i h_j^k - \nabla^k h_{ij}) \nabla_k R\\
&- h \cdot ((2 \delta h + d tr h) \otimes d R),
\end{align*}
by substituting the expression of ${\Gamma'}_{ij}^k$ and ${\Gamma''}_{ij}^k$.\\
And
\begin{align*}
&\left.\frac{d^2}{dt^2}\right|_{t=0} |Ric(g(t))|^2_{g(t)}\\
=& \left( g^{ik} g^{jl} R_{ij} R_{kl} \right)''\\
=& 2 g''^{ik}R_{ij} R_k^j + 2 g'^{ik} g'^{jl} R_{ij} R_{kl} + 8 g'^{ik} R'_{ij} R_k^j + 2 R''_{ij} R^{ij} + 2 g^{ik} g^{jl} R'_{ij} R'_{kl}\\
=& 4 (Ric \times Ric) \cdot ( h \times h) + 2 |Ric \times h|^2 - 8 Ric' \cdot (Ric \times h) + 2 Ric'' \cdot Ric + 2|Ric'|^2. \\
\end{align*}
Also,
\begin{align*}
\left.\frac{d^2}{dt^2}\right|_{t=0} R(g(t))^2 = (R^2)'' = 2 (R')^2 + 2 R R''.
\end{align*}
We prove the lemma by combining all three parts together.
\end{proof}
Simply by taking $Ric = 0$ and $R = 0$, we get the second variation for $Q$-curvature at a Ricci flat metric.
\begin{corollary}\label{2nd_variation_Ricci_flat}
Suppose the metric $g$ is Ricci flat, then
\begin{align}
D^2 Q_g \cdot ( h, h )
=A_n(\Delta_{g}R'' - 2 h \cdot \nabla^2 R' + ( 2 \delta h + d (tr h)) \cdot dR') + 2 B_n |Ric'|^2 + 2C_n(R')^2.
\end{align}
\end{corollary}
Now we assume that $(M,\bar{g}, f)$ is a $Q$-singular space.\\
Consider the functional
\begin{align*}
\mathscr{F}(g)=\int_M Q_g \cdot f dv_{\bar{g}}.
\end{align*}
Note that here we fix the volume form to be the one associated to the $Q$-singular metric $\bar{g}$.\\
\begin{remark}
The analogous functional
\begin{align*}
\mathscr{G}(g)=\int_M R_g \cdot f dv_{\bar{g}},
\end{align*}
plays a fundamental role in studying rigidity phenomena of vacuum static spaces (c.f. \cite{F-M, B-M, Q-Y_1}, \emph{etc.}).
\end{remark}
\begin{lemma}
The metric $\bar{g}$ is a critical point of the functional $\mathscr{F}(g)$.
\end{lemma}
\begin{proof}
For any symmetric 2-tensor $h \in S_2$, let $g(t) = \bar{g} + t h$, $t \in (-\varepsilon, \varepsilon)$, be a family of metrics on $M$. Clearly, $g(0) = \bar{g}$ and $g'(0) = h$. Then
\begin{align*}
\left. \frac{d}{dt}\right|_{t=0} \mathscr{F}(g(t)) = \int_M DQ_{\bar{g}} \cdot h f dv_{\bar{g}} = \int_M \Gamma_{\bar{g}} h \cdot f dv_{\bar{g}} = \int_M h \cdot \Gamma_{\bar{g}}^* f \ dv_{\bar{g}} = 0,
\end{align*}
\emph{i.e.} $\bar{g}$ is a critical point for the functional $\mathscr{F}(g)$.
\end{proof}
Furthermore, if we assume $\bar{g}$ is a flat metric, by Theorem \ref{Static_Ricci_flat}, we can take $f$ to be a nonzero constant. In particular, we can take $f \equiv 1$, since the $Q$-singular equation is linear in $f$.\\
\begin{lemma}\label{2nd_variation_flat}
Let $\bar{g}$ be a flat metric and $f \equiv 1$. Suppose $\delta h = 0$. Then the second variation of $\mathscr{F}$ at $\bar{g}$ is given by
\begin{align}
D^2 \mathscr{F}_{\bar{g}} \cdot (h,h) = -2 \alpha_n \int_M|\Delta(tr h)|^2 dv_{\bar{g}} + \frac{1}{2}B_n\int_M(|\Delta \overset{\circ}{h}|^2)dv_{\bar{g}},
\end{align}
where $\overset{\circ}{h}$ is the traceless part of $h$ and $\alpha_n := - \frac{1}{2}\left(A_n + \frac{n+1}{2n}B_n + 2C_n \right)= \frac{(n^2 - 2) (n^2 - 2n - 2)}{8n(n-1)^2(n-2)^2} > 0$, $B_n=-\frac{2}{(n-2)^2}< 0$, for any $n \geq 3$.
\end{lemma}
\begin{proof}
With the aid of Corollary \ref{2nd_variation_Ricci_flat}, we have
\begin{align*}
&\left. \frac{d^2}{dt^2}\right|_{t=0} \mathscr{F}(g(t))\\
=& \int_M Q''_{\bar{g}} dv_{\bar{g}}\\
=& \int_M \left[A_n(\Delta R'' - 2 h \cdot \nabla^2 R' + ( 2 \delta h + d (tr h)) \cdot dR') + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}\\
=& \int_M \left[A_n(- 2 \delta h \cdot d R' + ( 2 \delta h + d (tr h)) \cdot dR') + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}\\
=& \int_M \left[A_n( - \Delta (tr h)) \cdot R' + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}\\
=& \int_M \left[A_n( - \Delta (tr h)) \cdot (- \Delta (tr h) + \delta^2 h - Ric \cdot h) + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}}\\
=& \int_M \left[A_n( \Delta (tr h))^2 + 2 B_n |Ric'|^2 + 2C_n(R')^2 \right] dv_{\bar{g}},
\end{align*}
where the last step uses the assumption that $\bar{g}$ is flat and that $h$ is divergence-free.\\
For exactly the same reasons, by Proposition \ref{1st_variation_Ricci_scalar}, we have
\begin{align*}
|Ric'|^2 = \frac{1}{4} | \Delta h + \nabla^2 (tr h)|^2 = \frac{1}{4} \left( |\Delta h|^2 + 2 \Delta h \cdot \nabla^2 (trh) + |\nabla^2 (tr h)|^2 \right)
\end{align*}
and
\begin{align*}
(R')^2 = ( \Delta (tr h))^2.
\end{align*}
Thus
\begin{align*}
\int_M |Ric'|^2 dv_{\bar{g}}=& \frac{1}{4} \int_M\left[\left( |\Delta h|^2 + 2 \Delta h \cdot \nabla^2 (trh) + |\nabla^2 (tr h)|^2 \right)\right] dv_{\bar{g}}\\
=& \frac{1}{4}\int_M \left[ \left( |\Delta h|^2 + 2 \delta\Delta h \cdot d(trh) + \delta\nabla^2 (tr h) \cdot d(tr h) \right)\right] dv_{\bar{g}}\\
=& \frac{1}{4}\int_M \left[\left( |\Delta h|^2 + 2 \Delta \delta h \cdot d(trh) + \delta d (tr h) \cdot \delta d(tr h) \right)\right] dv_{\bar{g}}\\
=& \frac{1}{4}\int_M \left[\left( |\Delta h|^2 + |\Delta (tr h)|^2 \right)\right] dv_{\bar{g}}\\
=& \frac{1}{4}\int_M \left[ |\Delta \overset{\circ}{h}|^2 + \frac{n+1}{n}( \Delta (tr h))^2 \right] dv_{\bar{g}}.
\end{align*}
where in the last step we used the orthogonal decomposition $h = \overset{\circ}{h} + \frac{tr\, h}{n}\bar{g}$, which gives $|\Delta h|^2 = |\Delta \overset{\circ}{h}|^2 + \frac{1}{n}(\Delta (tr h))^2$.
Now
\begin{align*}
\left. \frac{d^2}{dt^2}\right|_{t=0} \mathscr{F}(g(t)) =& \int_M \left[(A_n + \frac{n+1}{2n}B_n + 2 C_n)( \Delta (tr h))^2 +\frac{1}{2}B_n |\Delta \overset{\circ}{h}|^2 \right] dv_{\bar{g}}\\
=& -2 \alpha_n \int_M|\Delta(tr h)|^2 dv_{\bar{g}} + \frac{1}{2}B_n\int_M(|\Delta \overset{\circ}{h}|^2)dv_{\bar{g}}.
\end{align*}
This gives the equation we claimed.
\end{proof}
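\begin{remark}
Since the positivity of $\alpha_n$ drives the rigidity argument below, we record the elementary verification. Assuming the standard normalization of the constants in $Q_g = A_n \Delta_g R_g + B_n |Ric_g|^2 + C_n R_g^2$, namely $A_n = -\frac{1}{2(n-1)}$, $B_n = -\frac{2}{(n-2)^2}$ and $C_n = \frac{n^3 - 4n^2 + 16n - 16}{8(n-1)^2(n-2)^2}$ (consistent with the three-dimensional expression $Q_g = -\frac{1}{4}\Delta_g R_g - 2|Ric_g|^2 + \frac{23}{32}R_g^2$ used later), one computes over the common denominator $4n(n-1)^2(n-2)^2$:
\begin{align*}
A_n + \frac{n+1}{2n}B_n + 2C_n &= \frac{-2n(n-1)(n-2)^2 - 4(n+1)(n-1)^2 + n(n^3 - 4n^2 + 16n - 16)}{4n(n-1)^2(n-2)^2}\\
&= \frac{-(n^4 - 2n^3 - 4n^2 + 4n + 4)}{4n(n-1)^2(n-2)^2} = -\frac{(n^2-2)(n^2 - 2n - 2)}{4n(n-1)^2(n-2)^2},
\end{align*}
so that $\alpha_n = -\frac{1}{2}\left(A_n + \frac{n+1}{2n}B_n + 2C_n\right) = \frac{(n^2-2)(n^2-2n-2)}{8n(n-1)^2(n-2)^2}$, which is positive for $n \geq 3$ since both factors in the numerator are positive.
\end{remark}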
Now we are ready to prove Theorem \ref{flat_local_rigidity}.
\begin{theorem}
For $n \geq 3$, let $(M^n,\bar{g})$ be a closed flat Riemannian manifold and let $g$ be a metric on $M$ with $$Q_g \geq 0.$$ If $||g - \bar{g}||_{C^2(M, \bar{g})}$ is sufficiently small, then $g$ is also flat.
\end{theorem}
\begin{proof}
Since $g$ is $C^2$-close to $\bar{g}$, by the \emph{Slice Theorem} (Theorem \ref{Slice_Theorem}), there exists a diffeomorphism $\varphi \in \mathscr{D}( M ) $ such that $$h := \varphi^* g - \bar{g}$$ is divergence-free with respect to $\bar{g}$ and $$||h||_{C^2(M,\bar{g})} \leq N ||g-\bar{g}||_{C^2(M,\bar{g})},$$ where $N > 0$ is a constant depending only on $(M,\bar{g})$.\\
Applying Lemma \ref{2nd_variation_flat}, we can expand $\mathscr{F}(\varphi^*g)$ at $\bar{g}$ as follows,
\begin{align}\label{eqn:F_expansion}
\mathscr{F}(\varphi^*g) &= \mathscr{F}(\bar g) + D\mathscr{F}_{\bar g} \cdot h + \frac{1}{2}D^2 \mathscr{F}_{\bar g} \cdot (h, h) + E_3\\
\notag &= - \alpha_n \int_M|\Delta(tr h)|^2 dv_{\bar{g}} + \frac{1}{4}B_n\int_M |\Delta \overset{\circ}{h}|^2 dv_{\bar{g}} + E_3,
\end{align}
where $$|E_3| \leq C_0 \int_M |h|\ |\nabla^2 h|^2 \ dv_{\bar{g}}$$ for some constant $C_0=C_0 (n, M, \bar g)> 0$.\\
We know $$\mathscr{F}(\varphi^*g) = \int_M Q_g\circ\varphi\ dv_{\bar{g}} \geq 0$$ since $Q_{g}\geq0$ and $\varphi$ is a diffeomorphism near the identity. Since
$$\alpha_n =-\frac{1}{2}\left(A_n + \frac{n+1}{2n}B_n + 2C_n \right) > 0,\ \ B_n = - \frac{2}{(n-2)^2} < 0$$ for $n\geq 3$, we can take
$\mu_n > 0$, a sufficiently small constant depending only on the dimension $n$, such that
$$\min\{ \alpha_n - \frac{\mu_n}{n} , - \frac{1}{4}B_n- \mu_n\} > 0.$$
Therefore, with the aid of equation (\ref{eqn:F_expansion}), we have
\begin{align*}
&\mu_n \int_M |\Delta h|^2 dv_{\bar{g}}\\
\leq& \left( \alpha_n - \frac{\mu_n}{n}\right)\int_M|\Delta(tr h)|^2 dv_{\bar{g}} + \left( - \frac{1}{4}B_n- \mu_n \right)\int_M |\Delta \overset{\circ}{h}|^2 dv_{\bar{g}} + \mu_n \int_M |\Delta h|^2 dv_{\bar{g}}\\
=& \alpha_n \int_M|\Delta(tr h)|^2 dv_{\bar{g}} - \frac{1}{4}B_n\int_M |\Delta \overset{\circ}{h}|^2 dv_{\bar{g}}\\
=& - \mathscr{F}(\varphi^*g) + E_3\\
\leq& |E_3|\\
\leq& C_0 \int_M |h|\ |\nabla^2{h}|^2 \ dv_{\bar{g}}.
\end{align*}
Suppose $g$ is sufficiently $C^2$-close to $\bar g$, say $||g-\bar{g}||_{C^2(M,\bar{g})} < \frac{\mu_n}{2NC_0}$. Then $$||h||_{C^0(M,\bar{g})} \leq ||h||_{C^2(M,\bar{g})} \leq N ||g-\bar{g}||_{C^2(M,\bar{g})} < \frac{\mu_n}{2C_0}$$ and therefore,
\begin{align*}
\mu_n\int_M |\nabla^2 h|^2dv_{\bar{g}} = \mu_n\int_M |\Delta h|^2dv_{\bar{g}} \leq C_0 \int_M |h|\ |\nabla^2{h}|^2 \ dv_{\bar{g}} \leq \frac{\mu_n}{2}\int_M |\nabla^2 h|^2dv_{\bar{g}},
\end{align*}
which implies $\nabla^2 h = 0$ on $M$. \\
Now we have
\begin{align*}
\int_M |\nabla h |^2 dv_{\bar{g}} = - \int_M h \Delta h\ dv_{\bar{g}} = 0.
\end{align*}
That is, $\nabla h = 0$.\\
Since $\bar{g}$ is flat, for any $p \in M$ there is a neighborhood $U_p$ on which we can find local coordinates such that $\bar{g}_{ij} = \delta_{ij}$ and $\partial_k \bar{g}_{ij} = 0$, $i,j,k = 1, \cdots, n$, on $U_p$.\\
In the same coordinates, the Christoffel symbols of $\varphi^*g$ are
\begin{align*}
\Gamma^k_{ij}(\varphi^*g) &= \frac{1}{2} (\varphi^*g)^{kl} \left(\partial_i (\bar{g}_{jl} + h_{jl}) + \partial_j (\bar{g}_{il} + h_{il}) - \partial_l (\bar{g}_{ij} + h_{ij}) \right) \\
&= \frac{1}{2} (\varphi^*g)^{kl} (\nabla_i h_{jl} + \nabla_j h_{il} - \nabla_l h_{ij}) = 0
\end{align*}
on $U_p$, since $h$ is parallel with respect to $\bar{g}$. \\
Thus the Riemann curvature tensor of $\varphi^* g$ vanishes identically on $U_p$ for any $p \in M$, which implies that the metric $\varphi^*g$ is flat and so is $g$.
\end{proof}
\begin{remark}
Fischer and Marsden proved an analogous result for the scalar curvature (see \cite{F-M}).
\end{remark}
As an application, we can obtain the local rigidity of the $Q$-curvature on compact domains of $\mathbb{R}^n$, which can be thought of as an analogue of the rigidity part of the \emph{Positive Mass Theorem}.
\begin{corollary}\label{flat_domain_rigidity}
Suppose $\Omega \subset \mathbb{R}^n$ is a bounded domain. Let $\delta$ be the flat metric and $g$ be a metric on $\mathbb{R}^n$ satisfying
\begin{itemize}
\item $Q_g \geq 0$,
\item $supp(g - \delta) \subset \Omega$,
\item $||g - \delta||_{C^2(\mathbb{R}^n, \delta)}$ is sufficiently small;
\end{itemize}
then $g$ is also flat.
\end{corollary}
\begin{proof}
Since $\Omega$ is bounded, we can choose a rectangular domain $\Omega'$ which strictly contains $\Omega$. Thus, the metric $g$ coincides with the Euclidean metric on $\Omega' \setminus \Omega$. Identifying opposite faces of $\Omega'$, we obtain a metric with nonnegative $Q$-curvature on the torus $T^n$. Clearly, this new metric on $T^n$ is $C^2$-close to the flat metric, hence it has to be flat by Theorem \ref{flat_local_rigidity}. Now the claim follows.
\end{proof}
It would be interesting to ask whether there is a global rigidity result for the $Q$-curvature. To the best of the authors' knowledge, no such result is known so far, but we observe that global rigidity holds in some special cases.
\begin{proposition}\label{prop:conf_Ricci_flat}
Let $(M, g)$ be a closed Riemannian manifold with $$Q_g \geq 0$$ pointwise. If $g$ is conformal to a Ricci-flat metric, then $(M, g)$ is Ricci flat.
\end{proposition}
\begin{proof}
For $n = 4$, by the assumptions, there exists a smooth function $u$ such that $g = e^{2u} \bar g$, where $\bar g$ is a Ricci flat metric on $M$. By (\ref{eqn:conf_Q_4}) in the appendix, we have
$$Q_g = e^{-4u} ( P_{\bar g} u + Q_{\bar g} )= e^{-4u} \Delta_{\bar g}^2 u \geq 0.$$ Hence $\Delta_{\bar g}u$ is subharmonic on the closed manifold $M$ and therefore constant; integrating over $M$ forces $\Delta_{\bar g}u = 0$, so $u$ is harmonic and thus constant. Therefore, $g$ is also Ricci flat. \\
Similarly, for $n \neq 4$, there exists a positive function $u > 0$ such that $g = u^{\frac{4}{n-4}}\bar{g}$, where $\bar g$ is a Ricci flat metric on $M$. Then by (\ref{eqn:conf_Q_n}), $$Q_g = \frac{2}{n-4}u^{-\frac{n+4}{n-4}} P_{\bar{g}} u = \frac{2}{n-4} u^{-\frac{n+4}{n-4}} \Delta_{\bar{g}}^2 u. $$
For $n=3$, we have $$\Delta_{\bar{g}}^2 u \leq 0,$$ and for $n>4$,
$$\Delta_{\bar{g}}^2 u \geq 0,$$ which imply, by the same argument as above, that $u$ is a constant in both cases and thus $g$ is Ricci flat.
\end{proof}
In particular, we can consider tori $T^n$. First, we need a lemma which characterizes the conformally flat structure on $T^n$.
\begin{lemma}\label{lem:conf_structure_tori}
On the torus $T^n$, any locally conformally flat metric has to be conformal to a flat metric.
\end{lemma}
\begin{proof}
Let $g$ be a locally conformally flat Riemannian metric on $T^n$. According to the solution of the Yamabe problem, $g$ is conformal to a metric $\bar g$ whose scalar curvature is a constant.
Suppose $R_{\bar g} < 0$. Then, by Proposition 1.2 in \cite{S-Y_3}, the fundamental group of $T^n$ would be non-amenable. But this contradicts the fact that $\pi_1(T^n)$ is abelian, hence amenable. Thus $R_{\bar g} \geq 0$, which implies that $\bar g$ is flat by the famous results of Schoen-Yau and Gromov-Lawson (\cite{S-Y_1, S-Y_2, G-L_1, G-L_2}).
\end{proof}
Now we can derive the rigidity of tori with respect to nonnegative $Q$-curvature (Theorem \ref{thm:rigidity_tori}):
\begin{theorem}
If $g$ is a locally conformally flat metric on $T^n$ with $$Q_g \geq 0,$$ then $g$ is flat.
In particular, any metric $g$ on $T^4$ with nonnegative $Q$-curvature has to be flat.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:conf_structure_tori}, we can see that $g$ is conformal to a flat metric. Applying Proposition \ref{prop:conf_Ricci_flat}, we conclude that $g$ has to be flat.
In particular for $T^4$, we have the \emph{Gauss-Bonnet-Chern formula},
$$\int_{T^4} \left( Q_g + \frac{1}{4}|W_g|^2 \right) dv_{g} = 8\pi^2 \chi(T^4) = 0$$
on $T^4$. Thus the non-negativity of the $Q$-curvature automatically implies that the Weyl tensor $W_g$ vanishes identically on $T^4$, which means that $g$ is locally conformally flat. Therefore, $g$ is flat by the previous argument.
\end{proof}
As for dimension 3, we have the following result.
\begin{proposition}
The $3$-dimensional torus $T^3$ does not admit a metric with constant scalar curvature and nonnegative $Q$-curvature, unless it is flat.
\end{proposition}
\begin{proof}
Suppose such a metric $g$ exists. Its scalar curvature is necessarily non-positive, and if $R_g = 0$ then $g$ is flat (c.f. \cite{S-Y_1, S-Y_2, G-L_1, G-L_2}).
If it is non-flat, without loss of generality, we can assume $$R_g = -1.$$ Then we have $$Q_g = - \frac{1}{4}\Delta_g R_g - 2 |Ric_g|^2 + \frac{23}{32} R_g^2 = - 2 |Ric_g|^2 + \frac{23}{32} R_g^2 \geq 0.$$
That is $$|Ric_g|^2 \leq \frac{23}{64}R_g^2 = \frac{23}{64}.$$
At any point $p \in M$, choose an orthonormal basis $\{e_1, e_2, e_3 \}$ for $T_p M$ such that the Ricci tensor at $p$ is diagonal. Let $\lambda_i$, $i = 1,2,3$ be the eigenvalues of $Ric_g (p)$. Then we have $$\lambda_1 + \lambda_2 + \lambda_3 = -1$$ and $$\lambda_1^2 + \lambda_2^2 + \lambda_3^2 \leq \frac{23}{64}.$$
Hence for $i \neq j$, we have
\begin{align*}
0 \geq& \lambda_i^2 + \lambda_j^2 + ( 1+ (\lambda_i + \lambda_j) )^2 - \frac{23}{64} \\
=& (\lambda_i + \lambda_j)^2 + 2 (\lambda_i + \lambda_j) + \lambda_i^2 + \lambda_j^2 + \frac{41}{64} \\
\geq &\frac{3}{2}(\lambda_i + \lambda_j)^2 + 2(\lambda_i + \lambda_j) + \frac{41}{64},
\end{align*}
where the last step follows from the elementary inequality $$\lambda_i^2 + \lambda_j^2 \geq \frac{(\lambda_i + \lambda_j)^2}{2} .$$
That is,
$$(\lambda_i + \lambda_j)^2 + \frac{4}{3}(\lambda_i + \lambda_j) + \frac{41}{96} \leq 0,$$
which implies that $\lambda_i + \lambda_j \leq - \frac{2}{3} + \frac{\sqrt{10}}{24} < -\frac{1}{2}$.
Then the sectional curvature of the plane spanned by $e_i$ and $e_j$ satisfies
\begin{align*}
K_{ij} = R_{ijji} = R_{ii} g_{jj} + R_{jj} g_{ii} - \frac{1}{2} R g_{ii}g_{jj} = \frac{1}{2} + (\lambda_i + \lambda_j) < 0.
\end{align*}
Thus $g$ has negative sectional curvature. But the torus does not admit a metric with negative sectional curvature (see, \emph{e.g.}, Corollary 2 in \cite{B-B-E}), which is a contradiction.
\end{proof}
\begin{remark}
From the last inequality in the above argument, one can easily see that the conclusion actually allows a perturbation of the metric. That is, any metric on $T^3$ whose scalar curvature is sufficiently $C^4$-close to a negative constant cannot have nonnegative $Q$-curvature, unless it is flat.
\end{remark}
\begin{remark}
According to the solution of the Yamabe problem, there is a metric with constant scalar curvature in each conformal class on any closed Riemannian manifold. Thus we can conclude that for any metric $g$ on $T^n$, if $g$ is not conformal to a flat metric, then it must be conformal to a metric with constant negative scalar curvature. However, nonnegativity of the $Q$-curvature is in general not preserved under conformal changes of the metric. We hope to develop techniques to resolve this issue in the future.
\end{remark}
According to the \emph{Bieberbach Theorem} (see Theorem 10.33 in \cite{C-L-N}), any $n$-dimensional flat Riemannian manifold is a finite quotient of the torus $T^n$. So, based on Theorems \ref{flat_local_rigidity} and \ref{thm:rigidity_tori} and the above observations, we propose the following conjecture:
\begin{conjecture}\label{Positive_Q_on_torus}
For $n \geq 3$, let $(M^n,\bar{g})$ be a connected closed flat Riemannian manifold. Then $M$ admits no metric with pointwise positive $Q$-curvature. Moreover, if $g$ is a metric on $M$ with $$Q_g\geq 0,$$ then $g$ is flat.
\end{conjecture}
\begin{remark}
This is a higher-order analogue of the famous result due to Schoen-Yau and Gromov-Lawson (\cite{S-Y_1, S-Y_2, G-L_1, G-L_2}), which says that any metric of nonnegative scalar curvature on a torus is flat.
\end{remark}
\section{Relation between vacuum static spaces and $Q$-singular spaces}
In this section, we will discuss the relation between $Q$-singular spaces and vacuum static spaces.
\begin{definition}
We say a complete Riemannian manifold $(M,g)$ is vacuum static if there is a smooth function $f (\not\equiv 0)$ on $M$ solving the following \emph{vacuum static equation}
\begin{align}
\gamma_g^* f := \nabla^2 f - \left(Ric_g - \frac{R_g}{n-1} g \right) f = 0.
\end{align}
We also refer to $(M,g,f)$ as a \emph{vacuum static space} if $f (\not\equiv 0) \in \ker \gamma_g^*$.
\end{definition}
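\begin{remark}
Two well-known examples may help to fix ideas. On a closed flat manifold we have $Ric_g = 0$ and $R_g = 0$, so the vacuum static equation reduces to $\nabla^2 f = 0$ and any nonzero constant $f$ solves it. On the standard unit sphere $(S^n, \bar g)$, where $Ric_{\bar g} = (n-1)\bar g$ and $R_{\bar g} = n(n-1)$, the equation becomes $\nabla^2 f + f \bar g = 0$, whose nontrivial solutions are the restrictions to $S^n$ of the linear coordinate functions of $\mathbb{R}^{n+1}$, \emph{i.e.} the first spherical harmonics.
\end{remark}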
Now we can prove the main result (Theorem \ref{Q-static_R-static}) in this section.
\begin{theorem}
Let $\mathcal{M}_R$ be the space of all closed vacuum static spaces and $\mathcal{M}_Q$ be the space of all closed $Q$-singular spaces.
Suppose $(M, g, f) \in \mathcal{M}_R \cap \mathcal{M}_Q$. Then $(M,g)$ is necessarily Einstein. In particular, it has to be either Ricci flat or isometric to a round sphere.
\end{theorem}
\begin{proof}
If $(M,g,f)$ is vacuum static, then $g$ necessarily has constant scalar curvature (c.f. \cite{F-M}), and the $Q$-singular equation reduces to
\begin{align}\label{eqQ_and_Rstatic}
\Gamma_g^* f =& A_n \left( - g \Delta^2 f + \nabla^2
\Delta f - Ric \Delta f \right)- 2 C_n R \left( g\Delta f - \nabla^2 f + f
Ric \right)\\
&\notag - B_n \left( \Delta (f Ric) + 2 f
\overset{\circ}{Rm}\cdot Ric + g \delta^2 (f Ric) + 2 \nabla \delta (f
Ric) \right)\\ =&
\notag 0.
\end{align}
By the \emph{Contracted $2^{nd}$ Bianchi Identity}
\begin{align*}
\delta Ric = - \frac{1}{2} dR = 0,
\end{align*}
we can simplify (\ref{eqQ_and_Rstatic}) further:
\begin{align*}
&-\frac{1}{B_n}\Gamma_g^* f \\
=& -\frac{A_n}{B_n} \left( \nabla^2 \Delta f - g \Delta^2 f - Ric \Delta f \right) - \frac{2C_n}{B_n}R \left( \nabla^2 f - g \Delta f - Ric f \right) \\ &+ f (\Delta Ric + 2 \overset\circ{Rm} \cdot Ric) + g (Ric \cdot \nabla^2 f) + Ric \Delta f - 2 \nabla^2 f \times Ric + 2 C \cdot \nabla f\\
=& -\frac{A_n}{B_n}\ \gamma_g^* (\Delta f) - \frac{2C_n}{B_n}R\ \gamma_g^* f + f \Delta_L Ric + g (Ric \cdot \nabla^2 f) + Ric \Delta f +\frac{2 R }{n-1}f Ric + 2 C \cdot \nabla f\\
=& 0,
\end{align*}
where $\Delta_L$ is the Lichnerowicz Laplacian and $C$ is the Cotton tensor,
\begin{align*}
C_{ijk} = \left( \nabla_i R_{jk} - \frac{1}{2(n-1)}g_{jk}\nabla_i R\right) - \left( \nabla_j R_{ik} - \frac{1}{2(n-1)}g_{ik} \nabla_j R \right) = \nabla_i R_{jk} - \nabla_j R_{ik}
\end{align*}
and
\begin{align*}
(C \cdot \nabla f)_{jk} := C_{ijk} \nabla^i f.
\end{align*}
Since $g$ is also vacuum static, taking the trace of the vacuum static equation gives $\Delta f = -\frac{R}{n-1}f$ and hence
\begin{align*}
\gamma_g^* (\Delta f) = -\frac{R}{n-1} \gamma_g^* f = 0.
\end{align*}
Thus we have
\begin{align*}
\Gamma_g^* f
= f \Delta_L Ric + g (Ric \cdot \nabla^2 f) + \frac{R}{n-1} f Ric + 2 C \cdot \nabla f = 0.
\end{align*}
Taking the trace and applying the vacuum static equation,
\begin{align*}
tr\ \Gamma_g^* f
=& f \Delta R + n Ric \cdot \nabla^2 f + \frac{R^2}{n-1} f\\
=& n Ric \cdot \left( Ric - \frac{R}{n-1}g\right) f + \frac{R^2}{n-1} f\\
=& n \left( |Ric|^2 - \frac{R^2}{n} \right) f \\
=& n \left|Ric - \frac{R}{n} g\right|^2 f\\
=& 0.
\end{align*}
In a vacuum static space, $df \neq 0$ on $f^{-1}(0)$ (c.f. \cite{F-M}). Thus $f^{-1}(0)$ is a regular hypersurface in $M$, and hence $f \neq 0$ on a dense subset of $M$, which implies that $$Ric = \frac{R}{n} g,$$ \emph{i.e.} $(M,g)$ is Einstein.
The scalar curvature of a closed vacuum static space is necessarily nonnegative. Hence the assertion follows easily from Theorem \ref{Classificaition_Q_singular_Einstein}.
\end{proof}
\section{Introduction}
There has been much interest in the study of five-dimensional ${\cal N}=2$ supergravity in the past years.
On the one hand, the solutions in this theory have a rich structure, including black holes, black rings and
black strings \cite{sab1}-\cite{chong}. On the other hand, this theory can come from string/M theory via Calabi-Yau compactifications \cite{pap1,ant1}, which provides a platform for a detailed comparison between the microscopic and macroscopic descriptions of black holes in string theory \cite{Strominger:1996sh,Breckenridge:1996is}. A further compactification of the 5D theory on a circle gives rise to 4D ${\cal N}=2$ supergravity, which is important for the study of string triality \cite{Duff:1995sm,Behrndt:1996hu}.
The ordinary classical two-derivative theory does not give a complete description of the physics in the presence of quantum corrections, which can be effectively described by higher derivative terms. For instance, five-dimensional ${\cal N}=2$ supergravity has a gauge-gravity anomaly whose supersymmetrization requires the inclusion of curvature squared terms. Thus, the complete structure of supersymmetric higher derivative terms is important for exploring the full theory at the quantum level. There are two viewpoints on the supersymmetrization of higher derivative terms: the on-shell supersymmetrization and the off-shell supersymmetrization. Because the off-shell supersymmetrization demands knowledge of the auxiliary fields, it can currently only be performed up to six dimensions. In string theory, the supersymmetry is on-shell and works only order by order in $\alpha'$. Since the supersymmetry transformation rules depend on $\alpha'$, the supersymmetrization procedure becomes tedious, requiring the modification of the transformation rules and the Lagrangian at the same time. However, in the off-shell formalism, supersymmetrizing higher derivative terms can be done without modifying the supersymmetry transformation rules.
In this work, we use five-dimensional superconformal tensor calculus \cite{Bergshoeff:2001hc}-\cite{Coomans:2012cf} which is an off-shell formalism facilitating the construction tremendously\footnote{A superspace formulation of five dimensional N=2 supergravity and matter multiplets has been obtained in \cite{Kuzenko:2007hu}.}. In five dimensions, there are two inequivalent Weyl multiplets: the standard Weyl multiplet and the dilaton Weyl multiplet. The main difference between these two Weyl multiplets is that the dilaton Weyl multiplet contains a graviphoton in its field content whereas the standard Weyl multiplet does not. A supergravity theory based on the standard Weyl multiplet requires coupling to an external vector multiplet.
Utilizing the standard Weyl multiplet, the supersymmetric Weyl tensor squared invariant has been found in
\cite{Hanaki:2006pj} while the supersymmetric Riemann squared \cite{Bergshoeff:2011xn, Ozkan:2013uk} and Gauss-Bonnet combination \cite{Ozkan:2013uk} are based on the dilaton Weyl multiplet. Similar constructions in $D=4$, ${\cal N}=2$ and $D=6,\,\mathcal{N}=(1,0)$ theories have been done in \cite{LopesCardoso:2000qm, deWit:2006gn, butt,Bergshoeff:1986vy}. Therefore, for the completeness of off-shell curvature squared invariants, the missing supersymmetric curvature squared terms in the $D=5,\, {\cal N}=2$ theory are the Riemann tensor squared in the standard Weyl multiplet and the Ricci scalar squared in both Weyl multiplets. Using the vector multiplet actions, we derive the supersymmetric completion of the Ricci scalar squared term by composing the fields of the vector multiplet in terms of the elements of the linear multiplet and the Weyl multiplet. As a consequence, the supersymmetric curvature squared structures become complete in the dilaton Weyl multiplet. We then proceed to couple all these curvature squared invariants to $n$ vector multiplets. When the standard Weyl multiplet is adopted, the Ricci scalar squared term is coupled to $n$ Abelian vector multiplets. We show that it modifies the very special geometry defined on the moduli space of the $n$ vector multiplets. Compared with the supersymmetric Weyl tensor squared action \cite{Hanaki:2006pj}, the supersymmetric Ricci scalar squared term provides a simpler modification of the two-derivative theory. Although the Ricci scalar squared term can be generated by a field redefinition from the string theory viewpoint, its off-shell supersymmetric completion is an independent and worthwhile superinvariant and does modify the physics in the context of gauge/gravity duality \cite{Blau:1999vz,Nojiri:1999mh}.
The remainder of this paper is organized as follows. In section \ref{section: multiplets}, we introduce the superconformal multiplets in the $D=5$, ${\cal N}=2$ superconformal theory. In section \ref{section: sactions}, we list the superconformal actions for the matter multiplets. In section \ref{section: alldilaton}, we first review the previously constructed minimal off-shell curvature squared invariants purely based on the dilaton Weyl multiplet, including the supersymmetric Riemann squared action and the Gauss-Bonnet action. Then, we construct the minimal off-shell Ricci scalar squared invariant. In section \ref{section: dilationvector}, we derive the vector multiplets coupled Riemann tensor squared and Ricci scalar squared invariants and review the vector multiplets coupled Weyl tensor squared for a complete discussion. Starting from section \ref{section: Sugra} we begin to use the standard Weyl multiplet. We obtain an off-shell two-derivative Poincar\'e supergravity by using the linear and vector multiplets as compensators and gauge the $\mathop{\rm U}(1)$ $R$-symmetry by coupling this theory to $n$ vector multiplets. In section \ref{section: Ricci2SW}, after a brief discussion about the supersymmetric Weyl tensor squared, we construct an off-shell vector multiplets coupled Ricci scalar squared invariant. We then analyze the effects of the Ricci scalar squared term on very special geometry, particularly in the case of the $AdS_5$ vacuum. In section \ref{section: Solution}, we study the supersymmetric magnetic string and electric black hole solutions. We summarize in section \ref{section: conc}.
\section{Multiplets of Five Dimensional Superconformal Theory}\label{section: multiplets}
In this section, we introduce the five dimensional $\mathcal{N}=2$ superconformal multiplets to be used in the construction of off-shell two-derivative Poincar\'e supergravities and curvature squared actions. The section starts with the introduction of the two superconformal Weyl multiplets. A superconformal Weyl multiplet contains all the gauge fields associated with the superconformal algebra as well as suitable matter fields. The latter are included in order to balance the bosonic and fermionic degrees of freedom and implement the off-shell closure of the algebra. In the $D=5,\, \mathcal{N}=2$ theory, there exist two different choices for the matter fields, which lead to two different Weyl multiplets: the standard Weyl multiplet and the dilaton Weyl multiplet. In the first two subsections, we exhibit the standard Weyl multiplet and the dilaton Weyl multiplet. The last two subsections are devoted to the review of the superconformal matter multiplets, including the vector multiplet and the linear multiplet, which will be used as the compensating multiplets in the construction of superconformal actions.
\subsection{The Standard Weyl Multiplet}\label{ss: standardweyl}
The standard Weyl multiplet contains 32+32 off-shell degrees of freedom including a f\"unfbein $e_\mu{}^a$, a gravitino $\psi_\mu^i$, a dilatation gauge field $b_\mu$, an $\mathop{\rm SU}(2)$ gauge field $V_\mu{}^{ij}$, a scalar $D$, an antisymmetric tensor $T_{ab}$, and a symplectic Majorana spinor $\chi^i$. The full $Q$, $S$ and $K$ transformations of these fields are given by\footnote{In this paper, we use the conventions of \cite{Bergshoeff:2001hc} where the signature of the metric is diag$(-, +, +, +, +)$. A table introducing the correspondence between the notations of \cite{Hanaki:2006pj} and \cite{Bergshoeff:2001hc} is given in Appendix \ref{section: notation} for the reader's convenience.} \cite{Bergshoeff:2001hc}
\begin{eqnarray}
\delta e_\mu{}^a &=& \ft 12\bar\epsilon \gamma^a \psi_\mu \nonumber\, ,\\
\delta \psi_\mu^i &=& (\partial_\mu+\tfrac{1}{2}b_\mu+\tfrac{1}{4}\omega_\mu{}^{ab}\gamma_{ab})\epsilon^i-V_\mu^{ij}\epsilon_j + {\rm i} \gamma\cdot T \gamma_\mu
\epsilon^i - {\rm i} \gamma_\mu
\eta^i \nonumber\, ,\\
\delta V_\mu{}^{ij} &=& -\ft32{\rm i} \bar\epsilon^{(i} \phi_\mu^{j)} +4
\bar\epsilon^{(i}\gamma_\mu \chi^{j)}
+ {\rm i} \bar\epsilon^{(i} \gamma\cdot T \psi_\mu^{j)} + \ft32{\rm i}
\bar\eta^{(i}\psi_\mu^{j)} \nonumber\, ,\\
\delta T_{ab} &=& \tfrac12 {\rm i}\bar\epsilon \gamma_{ab} \chi - \tfrac3{32} {\rm i} \bar\epsilon \widehat{R}_{ab}(Q)\,, \nonumber
\end{eqnarray}
\begin{eqnarray}
\delta \chi^i &=& \tfrac14 \epsilon^i D - \tfrac1{64} \gamma \cdot \widehat{R}^{ij}(V) \epsilon_j + \tfrac18 {\rm i} \gamma^{ab}\slashed{\mathcal{D}}T_{ab}\epsilon^i - \tfrac18 {\rm i} \gamma^a \mathcal{D}^b T_{ab} \epsilon^i \nonumber\\
&& - \tfrac14 \gamma^{abcd}T_{ab} T_{cd} \epsilon^i + \tfrac16 T^2 \epsilon^i + \tfrac14 \gamma \cdot T \eta^i\,, \nonumber\\
\delta D &=& \bar\epsilon \slashed{\mathcal{D}}\chi - \tfrac53 {\rm i} \bar\epsilon \gamma \cdot T \chi - {\rm i} \bar\eta \chi\,, \nonumber\\
\delta b_\mu &=& \ft12 {\rm i} \bar\epsilon\phi_\mu -2 \bar\epsilon\gamma_\mu \chi +
\ft12{\rm i} \bar\eta\psi_\mu+2\Lambda _{K\mu } \,,
\label{SWMTR}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{D}_\mu\chi^i&=&(\partial_\mu - \tfrac72 b_\mu +\tfrac14 \omega_\mu{}^{ab} \gamma_{ab})\chi^i -V_\mu^{ij}\chi_j
- \tfrac14 \psi_\mu^i D + \tfrac1{64} \gamma \cdot \widehat{R}^{ij}(V) \psi_{\mu j}\nonumber\\
&& - \tfrac18 {\rm i} \gamma^{ab}\slashed{\mathcal{D}}T_{ab}\psi_\mu^i + \tfrac18 {\rm i} \gamma^a \mathcal{D}^b T_{ab} \psi_\mu^i + \tfrac14 \gamma^{abcd}T_{ab} T_{cd} \psi_\mu^i - \tfrac16 T^2 \psi_\mu^i - \tfrac14 \gamma \cdot T \phi_\mu^i\,, \nonumber \\
\mathcal{D}_\mu T_{ab}&=&\partial_\mu T_{ab}-b_\mu T_{ab}-2\omega_\mu{}^c{}_{[a}T_{b]c}-\tfrac{1}{2}i\bar{\psi}_\mu\gamma_{ab}\chi+\tfrac{3}{32}i\bar{\psi}_\mu\widehat{R}_{ab}(Q)\,.
\end{eqnarray}
The supercovariant curvatures appearing in the transformation rules (\ref{SWMTR}) are given by \footnote{A more detailed discussion about the supercovariant curvatures can be found in \cite{Bergshoeff:2001hc}}
\begin{eqnarray}
\widehat{R}_{\mu\nu}{}^{ab}(M)&=&2\partial_{[\mu}\omega_{\nu ]}{}^{ab}+2\omega_{[\mu}{}^{ac}\omega_{\nu ]c}{}^{b} + 8 f_{[\mu}{}^{[a}e_{\nu ]}{}^{b]}+{\rm i} \bar\psi_{[\mu}\gamma^{ab}\psi_{\nu ]} + {\rm i} \bar\psi_{[\mu}\gamma^{[a} \gamma \cdot T \gamma^{b]}\psi_{\nu ]} \nonumber\\
&& +\bar\psi_{[\mu} \gamma^{[a} \widehat{R}_{\nu ]}{}^{b]}(Q)+\tfrac12 \bar\psi_{[\mu}\gamma_{\nu ]} \widehat{R}^{ab}(Q) -8 \bar\psi_{[\mu} e_{\nu ]}{}^{[a} \gamma^{b]}\chi+i\bar{\phi}_{[\mu} \gamma^{ab} \psi_{\nu]}\nonumber\,, \\
\widehat{R}_{\mu\nu}{}^{ij}(V)&=&2\partial_{[\mu} V_{\nu]}{}^{ij} -2V_{[\mu}{}^{k( i}
V_{\nu ]\,k}{}^{j)} {-3{\rm i}}{\bar\phi}^{( i}_{[\mu}\psi^{j)}_{\nu ]} - 8 \bar{\psi}^{(i}_{[\mu}
\gamma_{\nu]} \chi^{j)} - {\rm i} \bar{\psi}^{(i}_{[\mu} \gamma\cdot T \psi_{\nu]}^{j)} \,, \\
\widehat{R}_{\mu\nu}^i(Q)&=&2\partial_{[\mu}\psi_{\nu]}^i+\frac{1}{2}\omega_{[\mu}{}^{ab}\gamma_{ab}\psi_{\nu]}^i+b_{[\mu}\psi_{\nu]}^i-2V_{[\mu}^{ij}\psi_{\nu] j}-2i\gamma_{[\mu}\phi_{\nu]}^i+2i\gamma\cdot T\gamma_{[\mu}\psi_{\nu]}^i\,, \nonumber
\end{eqnarray}
where the spin connection $\omega_{\mu}{}^{ab}$, the $S$-supersymmetry gauge field $\phi_\mu^i$ and the special conformal symmetry gauge field $f_\mu{}^a$ are composites. Their explicit expressions are given by
\begin{eqnarray} \omega_\mu{}^{ab}
&=& 2 e^{\nu[a} \partial_{[\mu} e_{\nu]}^{~b]} - e^{\nu[a} e^{b]\sigma} e_{\mu c}
\partial_\nu e^{~c}_\sigma
+ 2 e_\mu^{~~[a} b^{b]} - \ft12 \bar{\psi}^{[b} \gamma^{a]} \psi_\mu - \ft14
\bar{\psi}^b \gamma_\mu \psi^a \,,\nonumber\\
\phi^i_\mu &=& \ft13{\rm i} \gamma^a \widehat{R}^\prime _{\mu a}{}^i(Q) - \ft1{24}{\rm i}
\gamma_\mu \gamma^{ab} \widehat{R}^\prime _{ab}{}^i(Q)\,, \label{transfDepF} \\
f^a_\mu &=& - \ft16{\cal R}_\mu {}^a + \ft1{48}e_\mu {}^a {\cal R},\quad {\cal R}_{\mu \nu }\equiv \widehat{R}_{\mu \rho }^{\prime~~ab}(M) e_b{}^\rho
e_{\nu a},\quad {\cal R}\equiv {\cal R}_\mu {}^\mu\,, \nonumber
\label{cf1}
\end{eqnarray}
where we used the notation $\widehat{R}_{ab}'(Q)$ and $\widehat{R}_{\mu \rho }^{\prime~~ab}(M)$ to indicate that these expressions are obtained from $\widehat{R}_{ab}(Q)$ and $\widehat{R}_{\mu \rho }^{~~ab}(M) $ by omitting the $\phi_\mu^i$ and $f_{\mu}^a$ terms respectively.
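A fact that will be useful when curvature squared terms are discussed is the trace of the composite gauge field $f_\mu{}^a$: in $D=5$,
\begin{eqnarray}
f_a{}^a = e_a{}^\mu f_\mu{}^a = -\tfrac{1}{6}{\cal R} + \tfrac{5}{48}{\cal R} = -\tfrac{1}{16}{\cal R}\,,
\end{eqnarray}
which is one way to see how the curvature scalar ${\cal R}$ enters the superconformal d'Alembertians appearing below (through their $2 f_a{}^a$ terms) and, in the dilaton Weyl multiplet, the composite expression for $\underline{D}$.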
\subsection{The Dilaton Weyl Multiplet}\label{ss: dilatonWeyl}
The gauge sector of the dilaton Weyl multiplet is the same as the gauge sector of the standard Weyl multiplet. The matter sector of the dilaton Weyl multiplet differs from that of the standard Weyl multiplet, as it contains a physical vector $C_\mu$, an antisymmetric two-form gauge field $B_{\mu\nu}$, a dilaton field $\sigma$ and a dilatino $\psi^i$. The $Q$-, $S$- and $K$-transformation rules of the fields in the dilaton Weyl multiplet can be found in \cite{Bergshoeff:2001hc}
\begin{eqnarray}
\delta e_\mu{}^a &=& \ft 12\bar\epsilon \gamma^a \psi_\mu \nonumber\, ,\\
\delta \psi_\mu^i &=& (\partial_\mu+\tfrac{1}{2}b_\mu+\tfrac{1}{4}\omega_\mu{}^{ab}\gamma_{ab})\epsilon^i-V_\mu^{ij}\epsilon_j + {\rm i} \gamma\cdot \underline{T} \gamma_\mu
\epsilon^i - {\rm i} \gamma_\mu
\eta^i \nonumber\, ,\\
\delta V_\mu{}^{ij} &=& -\ft32{\rm i} \bar\epsilon^{(i} \phi_\mu^{j)} +4
\bar\epsilon^{(i}\gamma_\mu \underline{\chi}^{j)}
+ {\rm i} \bar\epsilon^{(i} \gamma\cdot \underline{T} \psi_\mu^{j)} + \ft32{\rm i}
\bar\eta^{(i}\psi_\mu^{j)} \,, \nonumber\\
\delta C_\mu
&=& -\ft12{\rm i} \sigma \bar{\epsilon} \psi_\mu + \ft12
\bar{\epsilon} \gamma_\mu \psi, \nonumber\\
\delta B_{\mu\nu}
&=& \ft12 \sigma^2 \bar{\epsilon} \gamma_{[\mu} \psi_{\nu]} + \ft12 {\rm i} \sigma \bar{\epsilon}
\gamma_{\mu\nu} \psi + C_{[\mu} \delta(\epsilon) C_{\nu]}, \nonumber
\end{eqnarray}
\begin{eqnarray}
\delta \psi^i &=& - \ft14 \gamma \cdot \widehat{G} \epsilon^i -\ft12{\rm i} \slashed{\mathcal{D}} \sigma
\epsilon^i + \sigma \gamma \cdot \underline{T} \epsilon^i -\ft14{\rm i}\sigma^{-1}\epsilon_j \bar\psi^i \psi^j + \sigma\eta^i \,,\nonumber\\
\delta \sigma &=& \ft12 {\rm i} \bar{\epsilon} \psi \, ,\nonumber\\
\delta b_\mu &=& \ft12 {\rm i} \bar\epsilon\phi_\mu -2 \bar\epsilon\gamma_\mu \underline{\chi} +
\ft12{\rm i} \bar\eta\psi_\mu+2\Lambda _{K\mu } \,,
\label{TransDW}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{D}_\mu\, \sigma &=& (\partial_\mu - b_\mu) \sigma
- \tfrac12\, {\rm i}\bar{\psi}_\mu \psi \ ,
\nonumber\\
\mathcal{D}_\mu \psi^i &=& (\partial_\mu -\ft32 b_\mu +\ft14\, \omega_\mu{}^{ab}\gamma_{ab} ) \psi^{i} - V_\mu^{ij} \psi_j +\tfrac 14 \gamma
\cdot \widehat{G} \psi_\mu^i \nonumber\\
&& + \ft12{\rm i} \slashed{\mathcal{D}} \sigma \psi_\mu^i
+\ft14{\rm i}\sigma^{-1}\psi_{\mu j}\bar\psi^i\psi^j - \sigma \gamma \cdot \underline{T} \psi_\mu^i - \sigma
\phi_\mu^i\,,
\label{cd1}
\end{eqnarray}
and the supercovariant curvatures are defined according to
\begin{eqnarray}
\widehat{G}_{\mu\nu} &=& G_{\mu\nu} - \bar{\psi}_{[\mu} \gamma_{\nu]} \psi + \tfrac 12 {\rm i}
\sigma \bar{\psi}_{[\mu} \psi_{\nu]} \label{hatG} ,\nonumber\\
\widehat{H}_{\mu\nu\rho} &=& H_{\mu\nu\rho} - \ft34
\sigma^2 \bar{\psi}_{[\mu} \gamma_\nu \psi_{\rho]} - \ft32{\rm i} \sigma \bar{\psi}_{[\mu}
\gamma_{\nu\rho]} \psi.
\label{DefH}
\end{eqnarray}
In the above expressions, $G_{\mu\nu}=2 \partial_{[\mu } C_{\nu ]}$ and $H_{\mu\nu\rho} = 3\partial _{[\mu }B_{\nu \rho ]} + \ft32 C_{[\mu} G_{\nu\rho]}$. Note that $\widehat{G}_{\mu\nu}$ and ${\widehat H}_{\mu\nu\rho}$ are invariant under the following gauge transformations
\begin{equation} \delta C_\mu= \partial_\mu\Lambda\ ,\qquad \delta B_{\mu\nu} =
2\partial_{[\mu} \Lambda_{\nu]} -\ft12 \Lambda G_{\mu\nu}.
\end{equation}
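At the purely bosonic level (setting the fermions to zero), this invariance can be verified directly; using $H_{\mu\nu\rho} = 3\partial_{[\mu}B_{\nu\rho]} + \ft32 C_{[\mu}G_{\nu\rho]}$ and the Bianchi identity $\partial_{[\mu}G_{\nu\rho]} = 0$, one finds
\begin{eqnarray}
\delta H_{\mu\nu\rho} &=& 6\, \partial_{[\mu}\partial_{\nu} \Lambda_{\rho]} - \ft32\, \partial_{[\mu}\big(\Lambda G_{\nu\rho]}\big) + \ft32\, \partial_{[\mu} \Lambda\, G_{\nu\rho]} \nonumber\\
&=& - \ft32\, \partial_{[\mu} \Lambda\, G_{\nu\rho]} - \ft32\, \Lambda\, \partial_{[\mu} G_{\nu\rho]} + \ft32\, \partial_{[\mu} \Lambda\, G_{\nu\rho]} \;=\; 0\,,
\end{eqnarray}
while $G_{\mu\nu}$ is manifestly invariant since $\delta G_{\mu\nu} = 2\partial_{[\mu}\partial_{\nu]}\Lambda = 0$.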
The underlined expressions $\underline{T}^{ab}, \underline{\chi}^i$ and $\underline{D}$, which are the fundamental auxiliary fields in the standard Weyl multiplet, become composite expressions in the dilaton Weyl multiplet \cite{Bergshoeff:2001hc}
\begin{eqnarray}
\underline{T}^{ab} &=& \ft18 \sigma^{-2} \Big( \sigma \widehat G^{ab} + \ft16 \epsilon^{abcde} \widehat H_{cde} + \ft14 {\rm i} \bar\psi \gamma^{ab} \psi \Big) \,, \nonumber\\
\underline{\chi}^i &=& \ft18 {\rm i} \sigma^{-1} \slashed{\mathcal{D}} \psi^i + \ft1{16} {\rm i} \sigma^{-2} \slashed{\mathcal{D}} \sigma \psi^i - \ft1{32} \sigma^{-2} \gamma \cdot \widehat G \psi^i + \ft14 \sigma^{-1} \gamma \cdot \underline{T} \psi^i \nonumber\\
&& + \ft1{32} {\rm i} \sigma^{-3} \psi_j \bar\psi^i\psi^j ,\,\nonumber\\
\underline{D} &=& \ft14 \sigma^{-1} \Box^c \sigma + \ft18 \sigma^{-2} (\mathcal{D}_a \sigma) (\mathcal{D}^a \sigma) - \ft1{16} \sigma^{-2} \widehat G_{\mu\nu} \widehat G^{\mu\nu}\nonumber\\
&& - \ft18 \sigma^{-2} \bar\psi \slashed{\mathcal{D}} \psi -\ft1{64} \sigma^{-4} \bar\psi^i \psi^j \bar\psi_i \psi_j - 4 {\rm i} \sigma^{-1} \psi \underline{\chi} \nonumber\\
&& + \Big( - \ft{26}3 \underline{T_{ab}} + 2 \sigma^{-1} \widehat G_{ab} + \ft14 {\rm i} \sigma^{-2} \bar\psi \gamma_{ab} \psi \Big) \underline{T}^{ab}\,,
\label{UMap}
\end{eqnarray}
where the superconformal d'Alembertian for $\sigma$ is given by
\begin{eqnarray}
&&\Box^c \sigma= (\partial^a - 2b^a + \omega_b{}^{ba}) \mathcal{D}_a \sigma - \ft12 {\rm i} \bar\psi_a \mathcal{D}^a\psi - 2\sigma \bar\psi_a \gamma^a {\chi} \nonumber\\
&&\quad\quad + \ft12 \bar\psi_a \gamma^a \gamma \cdot \underline{T} \psi + \ft12 \bar\phi_a \gamma^a \psi + 2 f_a{}^a \sigma \,.
\end{eqnarray}
These composite expressions define a map from the dilaton Weyl multiplet to the standard Weyl multiplet.
\subsection{The Vector Multiplet} \label{ss: vector}
The off-shell Abelian $D=5$, $\mathcal{N}=2$ vector multiplet
contains $8+8$ degrees of freedom. Its bosonic sector consists of a vector field $A_\mu$, a scalar field $\rho$ and
an auxiliary $\mathop{\rm SU}(2)$ triplet field $Y^{ij} = Y^{(ij)}$. The fermionic sector contains an $\mathop{\rm SU}(2)$ doublet
$\lambda^i$. The $Q$- and $S$-transformations for the vector multiplet are given by \cite{Bergshoeff:2002qk}
\begin{eqnarray}
\delta A_\mu &=& -\ft12{\rm i} \rho \bar{\epsilon} \psi_\mu + \ft12 \bar{\epsilon}
\gamma_\mu \lambda \ ,
\nonumber\\
\delta Y^{ij} &=& -\ft12
\bar{\epsilon}^{(i} \slashed{\mathcal{D}} \lambda^{j)} + \ft12 {\rm i} \bar{\epsilon}^{(i}
\gamma \cdot T \lambda^{j)} - 4 {\rm i} \sigma \bar{\epsilon}^{(i} \chi^{j)} +
\ft12 {\rm i} \bar{\eta}^{(i} \lambda^{j)}\,, \nonumber\\
\delta\lambda^{i} &=& - \ft14 \gamma \cdot \widehat{F} \epsilon^i -\ft12{\rm i}
\slashed{\mathcal{D}}\rho \epsilon^i + \rho \gamma \cdot T \epsilon^i - Y^{ij} \epsilon_j +
\rho \eta^i \ ,
\nonumber\\
\delta\rho &=& \ft12 {\rm i} \bar{\epsilon}
\lambda . \label{VMTR}
\end{eqnarray}
In the above expressions, the superconformally covariant derivatives are defined as
\begin{eqnarray}
\mathcal{D}_\mu\, \rho &=& (\partial_\mu - b_\mu) \rho
- \tfrac12\, {\rm i}\bar{\psi}_\mu \lambda \ ,
\nonumber\\
\mathcal{D}_\mu \lambda^i &=& (\partial_\mu -\ft32 b_\mu +\ft14\, \omega_\mu{}^{ab}\gamma_{ab} ) \lambda^{i} - V_\mu^{ij} \lambda_j \nonumber\\
&& +\tfrac 14 \gamma
\cdot \widehat{F} \psi_\mu^i + \ft12{\rm i} \slashed{\mathcal{D}} \rho \psi_\mu^i
+ Y^{ij} \psi_{\mu\, j} - \rho \gamma \cdot T \psi_\mu^i - \rho
\phi_\mu^i,
\label{cd2}
\end{eqnarray}
where the supercovariant Yang-Mills curvature is defined as
\begin{equation}
\widehat{F}_{\mu\nu} = F_{\mu\nu} - \bar{\psi}_{[\mu} \gamma_{\nu]} \lambda + \tfrac 12 {\rm i}
\rho \bar{\psi}_{[\mu} \psi_{\nu]}\,,\quad F_{\mu\nu}=2 \partial_{[\mu } A_{\nu ]}.
\label{hatF}
\end{equation}
The local supersymmetry transformation rules given in (\ref{VMTR}) are obtained by coupling the rigid supersymmetric theory to a Weyl multiplet \cite{Bergshoeff:2002qk}. In the above transformation rules, we utilized the standard Weyl multiplet. If the dilaton Weyl multiplet is considered, the supersymmetry transformation rules can be obtained straightforwardly by replacing $T_{ab}, D$ and $\chi^i$ by their composite expressions according to (\ref{UMap}).
\subsection{The Linear Multiplet} \label{ss: linear}
The off-shell $D=5, \, {\cal{N}} = 2$ linear multiplet contains $8+8$ degrees of freedom. The bosonic fields consist of an $\mathop{\rm SU}(2)$ triplet $L^{ij} = L^{(ij)}$, a constrained vector $E_{a}$ and a scalar $N$. The fermionic field contains an $\mathop{\rm SU}(2)$ doublet
$\varphi^i$. Adopting the standard Weyl multiplet, the $Q$- and $S$-supersymmetry transformations of the linear multiplet are given in \cite{Bergshoeff:2002qk}
\begin{eqnarray}
\delta L^{ij} &=& {\rm i} \bar{\epsilon}^{(i}\varphi^{j)}\,,\nonumber\\
\delta \varphi^{i} &=& - \tfrac{1}{2} {\rm i} \slashed{\mathcal{D}}L^{ij}\epsilon_{j} - \tfrac{1}{2} {\rm i} \gamma^{a} E_{a} \epsilon^i + \tfrac{1}{2} N \epsilon^{i} - \gamma \cdot T L^{ij} \epsilon_{j} + 3 L^{ij}\eta_{j}\,,\nonumber\\
\delta E_{a} &=& -\tfrac{1}{2} {\rm i} \bar{\epsilon} \gamma_{ab} \mathcal{D}^{b} \varphi - 2 \bar{\epsilon} \gamma^{b} \varphi T_{ba} - 2\bar{\eta} \gamma_{a} \varphi\,, \nonumber\\
\delta N &=& \tfrac{1}{2} \bar{\epsilon} \slashed{\mathcal{D}}\varphi + \tfrac{3}{2} {\rm i} \bar{\epsilon} \gamma \cdot T \varphi + 4 {\rm i} \bar{\epsilon}^i \chi^j L_{ij} + \tfrac{3}{2} {\rm i} \bar{\eta} \varphi\,,
\label{trlm}
\end{eqnarray}
where the following superconformally covariant derivatives are used
\begin{eqnarray}
\mathcal{D}_\mu L^{ij}&=&(\partial_{\mu}-3b_{\mu})L^{ij}+2V_{\mu}{}^{(i}{}_kL^{j)k}-{\rm i}\bar{\psi}_{\mu}^{(i}\varphi^{j)}\,, \nonumber \\
\mathcal{D}_\mu \varphi^i&=&(\partial_{\mu}-\tfrac{7}{2}b_\mu +\ft14\omega_\mu{}^{ab}\gamma_{ab})\varphi^i-V_{\mu}^{ij}\varphi_j+\tfrac{1}{2} {\rm i} \slashed{\mathcal{D}}L^{ij}\psi_{\mu\,j} + \tfrac{1}{2} {\rm i} \gamma^{a} E_{a} \psi_\mu^i \nonumber \\
&&- \tfrac{1}{2} N \psi_\mu^{i} + \gamma \cdot T L^{ij} \psi_{\mu\,j} - 3 L^{ij}\phi_{\mu\,j}\,, \nonumber \\
\mathcal{D}_\mu E_a &=&(\partial_\mu-4b_\mu)E_a+\omega_{\mu ab}E^b+\tfrac{1}{2} {\rm i} \bar{\psi}_\mu \gamma_{ab} \mathcal{D}^{b} \varphi +2 \bar{\psi}_\mu \gamma^{b} \varphi T_{ba} + 2\bar{\phi}_\mu \gamma_{a} \varphi\,.
\end{eqnarray}
As mentioned before, $E_a$ is constrained in order for the superconformal algebra to close
\begin{eqnarray}
\mathcal{D}^{a} E_{a}= 0\,. \label{closure}
\end{eqnarray}
Therefore $E_a$ can be expressed in terms of a 3-form $E_{\mu\nu\rho}$ as
\begin{eqnarray}
E^a = - \tfrac{1}{12}e_{\mu}{}^{a}e^{-1}\varepsilon^{\mu\nu\rho\sigma\lambda}\mathcal{D}_\nu E_{\rho\sigma\lambda}.\,
\label{3formE}
\end{eqnarray}
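At the purely bosonic level, and ignoring the dilatation connection, the expression (\ref{3formE}) solves the constraint (\ref{closure}) identically, since
\begin{eqnarray}
\partial_\mu \big( e\, E^\mu \big) = - \tfrac{1}{12}\, \varepsilon^{\mu\nu\rho\sigma\lambda} \partial_\mu \partial_\nu E_{\rho\sigma\lambda} = 0\,,
\end{eqnarray}
the symmetric pair of derivatives being contracted with the totally antisymmetric $\varepsilon$-symbol.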
Similar to the vector multiplet, if the dilaton Weyl multiplet is adopted, the supersymmetry transformation rules for the linear multiplet can be obtained by using the map (\ref{UMap}).
\section{Superconformal Actions}\label{section: sactions}
In this section, we review the superconformal actions for the matter multiplets coupled to the standard Weyl multiplet and the dilaton Weyl multiplet, which give rise to different formulations of off-shell Poincar\'e supergravities. These actions include a linear multiplet action \cite{Coomans:2012cf, Ozkan:2013uk} and three vector multiplet actions. The first vector multiplet action, describing $n$ vector multiplets coupled to the standard Weyl multiplet, is treated as the master action from which we derive the other two actions for $n$ vector multiplets coupled to the dilaton Weyl multiplet.
The starting point for the constructions of the linear and vector multiplet actions is the superconformal density formula \cite{Fujita:2001kv}
\begin{eqnarray}
e^{-1} \mathcal{L}_{VL} &=& Y^{ij} L_{ij} + {\rm i} \bar\lambda \varphi - \frac12 \bar\psi_a^i \gamma^a \lambda^i L_{ij} + A_a P^a \nonumber\\
&& + \rho \Big(N + \frac12 \bar\psi_a \gamma^a \varphi + \frac14 {\rm i} \bar\psi_a^i \gamma^{ab} \psi_b^j L_{ij}\Big)\,,
\label{densityformula}
\end{eqnarray}
where $P_a$ is the bosonic part of the supercovariant field strength $E_a$
\begin{eqnarray}
P^a &=& E^a + \frac12 {\rm i} \bar\psi_b \gamma^{ba} \varphi + \frac14 \bar\psi_b^i \gamma^{abc} \psi_c^j L_{ij}.
\label{PmEm}
\end{eqnarray}
\subsection{The Linear Multiplet Action}\label{ss:laction}
The procedure of constructing an action for the linear multiplet is based on superconformal tensor calculus, where the fields of the vector multiplet are composed in terms of the fields of the linear multiplet and a Weyl multiplet. The construction of a linear multiplet action is explored in detail in \cite{Coomans:2012cf, Ozkan:2013uk}. Here we only present the results which contribute to the purely bosonic action. Using the linear multiplet and the standard Weyl multiplet, the elements of the vector multiplet can be composed as given in (\ref{LEmbde}). Inserting the composite expressions into the vector-linear action (\ref{densityformula}), one can obtain an action for the linear multiplet,
\begin{eqnarray}
e^{-1}{\cal{L}}_L^S &=& L^{-1} L_{ij} \Box^c L^{ij} - L^{ij} {\cal{D}}_{\mu}L_{k(i} {\cal{D}}^{\mu} L_{j)m} L^{km} L^{-3} + N^2 L^{-1} \nonumber\\
&& - E_{\mu} E^{\mu} L^{-1} + \tfrac{8}{3} L T^{2} + 4 D L - \tfrac{1}{2}L^{-3} E^{\mu\nu} L_{k}^{l} \partial_{\mu} L^{kp} \partial_\nu L_{pl} \nonumber\\
&& + 2 E^{\mu\nu} \partial_{\mu} ( L^{-1} E_{\nu} + V_{\nu}^{ij} L_{ij} L^{-1} )\,,
\label{Lact}
\end{eqnarray}
where we have defined $L^2 = L_{ij} L^{ij}$ and the superscript $S$ indicates that this action utilizes the standard Weyl multiplet. The superconformal d'Alembertian of $L^{ij}$ is given by
\begin{eqnarray}
L_{ij} \Box^c L^{ij}&=&L_{ij}(\partial^a-4b^a+\omega_b{}^{ba}){\cal{D}}_a L^{ij}+2L_{ij} V_a{}^i{}_k{\cal{D}}^a L^{jk}+6L^2f_a{}^a \nonumber \\
&&-iL_{ij}\bar{\psi}^{a i}{\cal D}_a\varphi^j-6L^2\bar{\psi}^a\gamma_a\chi-L_{ij}\bar{\varphi}^i\gamma\cdot T\gamma^a\psi_a^j+L_{ij}\bar{\varphi}^i\gamma^a\phi_a^j\,.
\end{eqnarray}
The linear multiplet action (\ref{Lact}) can be converted into an action describing the linear multiplet coupled to the dilaton Weyl multiplet by replacing $T_{ab}$, $D$ and $\chi^i$ in (\ref{Lact}) with their composite expressions according to (\ref{UMap}).
\subsection{Vector Multiplet Actions}\label{ss:vactions}
The elements of the linear multiplet can be constructed in terms of the fields of the vector multiplet and a Weyl multiplet (\ref{VEmbed}). Then an action for the vector multiplet can be obtained by using the density formula (\ref{densityformula}). Adopting the standard Weyl multiplet, an action for $n$ vector multiplets is given by \cite{Bergshoeff:2002qk, Fujita:2001kv}
\begin{eqnarray}
e^{-1} \mathcal{L}_{V}^S &=& C_{IJK} \Big( - \ft14 \rho^I F_{\mu\nu}^J F^{K\,\mu\nu} + \ft13 \rho^I \rho^J \Box^C \rho^K +\ft16 \rho^I \mathcal{D}_\mu \rho^J \mathcal{D}^\mu \rho^K + \rho^I Y^{J\, ij} Y_{ij}^K\nonumber\\
&& \qquad \qquad - \ft43 \rho^I \rho^J \rho^K (D + \ft{26}3 T_{\mu\nu} T^{\mu\nu}) + 4 \rho^I \rho^J F_{\mu\nu}^K T^{\mu\nu} - \ft1{24} \epsilon^{\mu\nu\rho\sigma\lambda} A_{\mu}^I F_{\nu\rho}^J F_{\sigma\lambda}^K \Big),
\label{Vact}
\end{eqnarray}
where we have generalized the single vector multiplet action to an action for $n$ vector multiplets, $I = 1, \ldots, n$. The coefficient $C_{IJK}$ is symmetric in $I,J,K$ and determines the coupling of the $n$ vector multiplets. The complete expression of the superconformal d'Alembertian for $\rho^I$ is \cite{Bergshoeff:2002qk}
\begin{eqnarray}
\Box^c \rho^I &=& (\partial^a - 2b^a + \omega_b{}^{ba}) \mathcal{D}_a \rho^I - \ft12 {\rm i} \bar\psi_a \mathcal{D}^a \lambda^I - 2\rho^I \bar\psi_a \gamma^a {\chi} \nonumber\\
&& + \ft12 \bar\psi_a \gamma^a \gamma \cdot {T} \lambda^I + \ft12 \bar\phi_a \gamma^a \lambda^I + 2 f_a{}^a \rho^I.
\end{eqnarray}
The action (\ref{Vact}) will be used as the \emph{master action} for the construction of $n$-vector coupled curvature squared actions in the dilaton Weyl multiplet. Adopting the dilaton Weyl multiplet, in which the fields $D, T_{ab}$ and $\chi^i$ are underlined to indicate that they become composite, one obtains a vector multiplet action describing $n$ vector multiplets coupled to supergravity. As we will see, the vector multiplet action plays an important role in the construction of the Riemann squared invariant \cite{Bergshoeff:2011xn,Ozkan:2013uk}. A special case of the \emph{master action}, to be utilized directly in the derivation of the Riemann squared invariant, is given by
\begin{eqnarray} \label{CIJKChoice}
C_{IJK} = \left\{ \begin{array}{lll}
C_{1IJ} & = & a_{IJ} \\
0 & & \textrm{otherwise}. \\
\end{array} \right.
\end{eqnarray}
Under this choice, the vector multiplet action in the dilaton Weyl multiplet is given as
\begin{eqnarray}
e^{-1} \mathcal{L}_{V}^{'D} &=& a_{IJ} \Big( \rho Y_{ij}^I Y^{ij\, J} - \ft14 \rho {F}_{\mu\nu}^I {F}^{\mu\nu\, J} - \ft12 \rho^I {F}^J_{\mu\nu} {F}^{\mu\nu} + 8 \rho \rho^I {F}^J_{\mu\nu} \underline{T}^{\mu\nu} + \ft12 \rho^I \rho^J \Box^C \rho \nonumber\\
&& \quad + \ft12 \rho \rho^I \Box^C \rho^J + \ft12 \rho^I \mathcal{D}_a \rho^J \mathcal{D}^a \rho - 4 \rho \rho^I \rho^J (\underline{D} + \ft{26}3 \underline{T}^2 ) + 4 \rho^I \rho^J {F}_{\mu\nu}\underline{ T}^{\mu\nu} \nonumber\\
&& \quad - \ft18 \epsilon^{\mu\nu\rho\sigma\lambda} F_{\mu\nu}^I F_{\rho\sigma}^J A_\lambda + 2 \rho^I Y^J_{ij} Y^{ij} \Big)\,,
\label{NVDW}
\end{eqnarray}
where the superscript $D$ indicates that this action is constructed by using the dilaton Weyl multiplet. In this case, the coupling of the $n$ vector multiplets is dictated by the symmetric rank-2 tensor $a_{IJ}$. To obtain the Riemann squared invariant by using the Yang-Mills trick (\ref{YMDWMap}), the index $I$ labeling the $n$ copies of Abelian vector multiplets should be replaced by the Yang-Mills index $\Sigma$, and the field strength should be interpreted as the Yang-Mills field strength. As we shall mention later, this vector multiplet action can give rise to the Riemann squared invariant coupled to $n$ vector multiplets. If one prefers to construct a Riemann squared action purely based on the dilaton Weyl multiplet \cite{Bergshoeff:2011xn,Ozkan:2013uk}, one can utilize the map from the vector multiplet to the dilaton Weyl multiplet \cite{Ozkan:2013uk}
\begin{equation}
(\lambda_i,\rho,A_\mu,Y_{ij})\rightarrow(\psi_i,\sigma,C_\mu,\ft14 {\rm i} \sigma^{-1} \bar\psi_i \psi_j ).
\label{DVMap}
\end{equation}
This map leads to another action for the vector multiplet coupled to supergravity:
\begin{eqnarray}
e^{-1} \mathcal{L}_{V}^{D} &=& a_{IJ} \Big( \sigma Y_{ij}^I Y^{ij\, J} - \ft14 \sigma {F}_{\mu\nu}^I {F}^{\mu\nu\, J} - \ft12 \rho^I {F}^J_{\mu\nu} {G}^{\mu\nu} + 8 \sigma\rho^I {F}^J_{\mu\nu} T^{\mu\nu} + \ft12 \rho^I \rho^J \Box^C \sigma \nonumber\\
&& \quad + \ft12 \sigma \rho^I \Box^C \rho^J + \ft12 \rho^I \mathcal{D}_{\mu} \rho^J \mathcal{D}^{\mu} \sigma - 4 \sigma \rho^I \rho^J (\underline{D} + \ft{26}3 \underline{T}^2 ) + 4 \rho^I \rho^J {G}_{\mu\nu}\underline{ T}^{\mu\nu} \nonumber\\
&& \quad - \ft18 \epsilon^{\mu\nu\rho\sigma\lambda} F_{\mu\nu}^I F_{\rho\sigma}^J C_\lambda \Big)\,.
\label{VDW}
\end{eqnarray}
\section{Minimal Curvature Squared Actions in the Dilaton Weyl Multiplet}\label{section: alldilaton}
The five dimensional minimal off-shell Poincar\'e supergravity multiplet consists of the fields
\begin{equation}
e_\mu{}^a(10),\, \psi_\mu^i(32),\, C_\mu(4),\, B_{\mu\nu}(6),\,\varphi^i(8),\,L(1),\, E_{\mu\nu\rho}(4) ,\, N(1),\, V_\mu(4) ,\, V_\mu^{'ij}(10),
\label{fieldcontent}
\end{equation}
where the numbers in brackets denote the off-shell degrees of freedom carried by the fields.
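As a quick consistency check, the numbers quoted above show that the bosonic and fermionic off-shell degrees of freedom balance,
\begin{equation}
\underbrace{10+4+6+1+4+1+4+10}_{\rm bosonic} = \underbrace{32+8}_{\rm fermionic} = 40\,,
\end{equation}
so that the multiplet comprises $40+40$ off-shell degrees of freedom.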
Using the dilaton Weyl multiplet, the minimal off-shell supersymmetric Riemann squared invariant and Gauss-Bonnet combination have been obtained in \cite{Bergshoeff:2011xn, Ozkan:2013uk} and \cite{Ozkan:2013uk}, respectively. In this section, we complete the off-shell curvature squared invariants by constructing the Ricci scalar squared action. The map from the dilaton Weyl multiplet to the standard Weyl multiplet (\ref{UMap}) plays a crucial role in the construction of curvature squared actions. In particular, the composite expression for $D$ contains a curvature term. Thus, the existence of a $D^2$ term in a curvature squared action means that the curvature terms get an extra $R^2$ contribution from the composite expression of $D$. As we shall see, this fact is essential in the construction of the supersymmetric completion of the Gauss-Bonnet combination \cite{Ozkan:2013uk}.
In the first two subsections, we list the known supersymmetric Riemann squared and Gauss-Bonnet actions. In the third subsection, we obtain the supersymmetric Ricci scalar squared action using the superconformal method.
\subsection{Supersymmetric Riemann Squared Action}
In this subsection, we briefly revisit the Riemann squared action constructed in \cite{Ozkan:2013uk} using superconformal calculus and the Yang-Mills trick in five dimensions
\begin{eqnarray}
( A_\mu^{\Sigma}, \quad Y_{\Sigma}^{ij}, \quad \lambda_{\Sigma}^i, \quad \rho_{\Sigma}) \quad \longleftrightarrow \quad ( \omega_{\mu +}^{ab}, \quad - \widehat{V}_{ab}{}^{ij}, \quad - \widehat\psi_{ab}^i, \quad \widehat{G}_{ab} ),
\label{YMDWMap}
\end{eqnarray}
where $\Sigma$ denotes the Yang-Mills index.
This action can also be obtained from a circle reduction of a six-dimensional theory \cite{Bergshoeff:2011xn}. The derivation of the Riemann squared action requires specific gauge fixing conditions given by
\begin{eqnarray}
\sigma = 1, \qquad \psi^i = 0, \qquad L_{ij} = \ft1{\sqrt2} \delta_{ij}L, \qquad b_\mu = 0\,.
\label{GF2}
\end{eqnarray}
The first gauge choice fixes dilatations, the second one fixes special supersymmetry transformations, the third one breaks $\mathop{\rm SU}(2)_R$ down to $\mathop{\rm U}(1)_R$ and the last one
fixes conformal boosts. The decomposition rules which follow from the above gauge fixing can be found in \cite{Ozkan:2013uk}. As a consequence of the gauge fixing, we obtain the Poincar\'e multiplet. Accordingly, the off-shell supersymmetry transformation rules for the fields of the Poincar\'e multiplet are given by \cite{Ozkan:2013uk}
\begin{eqnarray}
\delta e_{\mu}{}^a &=& \ft12 \bar\epsilon \gamma^a \psi_\mu \, ,\nonumber\\
\delta \psi_\mu^i &=& \mathcal{D}_\mu (\omega_{-}) \epsilon^i - \ft12 {\rm i} \widehat{G}_{\mu\nu} \gamma^\nu \epsilon^i \,, \nonumber\\
\delta V_{\mu}{}^{ij} &=& \ft12 \bar\epsilon^{(i} \gamma^\nu \widehat\psi_{\mu\nu}^{j)} - \ft16 \bar\epsilon^{(i} \gamma \cdot \widehat{H} \psi_\mu^{j)} - \ft14 {\rm i} \bar\epsilon^{(i} \gamma \cdot \widehat{G} \psi_\mu^{j)}, \nonumber\\
\delta C_\mu &=& -\ft12{\rm i}\bar\epsilon \psi_\mu \, ,\nonumber\\
\delta B_{\mu\nu} &=& \ft12 \bar\epsilon \gamma_{[\mu} \psi_{\nu]} + C_{[\mu} \delta(\epsilon) C_{\nu]} \,,\nonumber\\
\delta L &=& \ft1{\sqrt2} {\rm i} \bar\epsilon^i \varphi^j \delta_{ij} \,,\nonumber\\
\delta \varphi^i &=& - \ft1{2\sqrt2} {\rm i} \slashed{\partial} L \delta^{ij} \epsilon_j - \ft1{\sqrt2} {\rm i} V'_{\mu}{}^{(i}{}_k \delta^{j)k} L \epsilon_j - \ft12 {\rm i} \slashed{E} \epsilon^i + \ft12 N \epsilon^i + \ft1{4\sqrt2} L \gamma \cdot \widehat{G} \delta^{ij} \epsilon_j \, \nonumber\\
&& - \ft1{6\sqrt2} {\rm i} L \gamma \cdot \widehat{H} \delta^{ij} \epsilon_j ,\nonumber\\
\delta E_{\mu\nu\rho} &=& - \bar\epsilon \gamma_{\mu\nu\rho} \varphi + \ft1{\sqrt2} {\rm i} L \bar\psi^i_{[\mu} \gamma_{\nu\rho ]} \epsilon^j \delta_{ij} \,, \nonumber\\
\delta N &=& \ft12 \bar\epsilon \gamma^\mu \Big( \partial_\mu + \ft14 \omega_\mu{}^{bc} \gamma_{bc} \Big) \varphi + \ft12 \bar\epsilon^i \gamma^a V_{a\,ij} \varphi^j - \ft1{4\sqrt2} {\rm i} \bar\epsilon^i \gamma^a \slashed{\partial}L \psi_a^j \delta_{ij}\nonumber\\
&& + \ft1{4\sqrt2}{\rm i} \bar\epsilon^i \gamma^a \gamma^b V'_{b(i}{}^k \delta_{j)k} \psi_a^j + \ft14 {\rm i} \bar\epsilon \gamma^a \slashed{E} \psi_{a} - \ft14 N \bar\epsilon \gamma^a \psi_a + \ft1{8\sqrt2} L \bar\epsilon^i \gamma^a \gamma \cdot \widehat{G} \psi_a^j \delta_{ij}\nonumber\\
&&-\sqrt2 L \bar\epsilon^i \gamma^a \phi_a^j \delta_{ij} + \ft18 {\rm i} \bar\epsilon \gamma \cdot \widehat{H} \varphi \,.
\label{UnGaugedTransform}
\end{eqnarray}
The bosonic part of the Riemann squared action is given as \cite{Ozkan:2013uk}
\begin{eqnarray}
e^{-1}{\cal L}_{{\rm Riem}^2}^{D} &=& -\ft14
\Big(\,R_{\mu\nu ab}(\omega_+)- G_{\mu\nu} G_{ab}\Big)
\left(\,R^{\mu\nu ab}(\omega_+)- G^{\mu\nu} G^{ab}\right) \nonumber\\ &&
-\ft12 \nabla_\mu(\omega_+) G^{ab} \nabla^\mu(\omega_+)G_{ab}
+V_{\mu\nu}{}^{ij} V^{\mu\nu}{}_{ij} \nonumber\\ && - \ft1{8}
\epsilon^{\mu\nu\rho\sigma\lambda} \Big(\,R_{\mu\nu
ab}(\omega_+)- G_{\mu\nu} G_{ab}\Big)
\left(\,R_{\rho\sigma}{}^{ab}(\omega_+)- G_{\rho\sigma}
G^{ab}\right) C_\lambda \nonumber\\ && - \ft12
\epsilon^{\mu\nu\rho\sigma\lambda} B_{\rho\sigma}\left(\,R_{\mu\nu
ab}(\omega_+)- G_{\mu\nu}G_{ab}\right) \nabla_\lambda (\omega_+)
G^{ab}\,,
\label{bosonicR2}
\end{eqnarray}
where the torsionful spin connection \cite{Bergshoeff:2011xn} is defined as
\begin{eqnarray}
{\omega}_\mu{}^{ab}_\pm &=& {\omega}_\mu{}^{ab} \pm {\widehat
H}_\mu{}^{ab}\ .
\label{torsion} \end{eqnarray}
We have also defined the supercovariant curvature of $V_\mu{}^{ij}$ as
\begin{eqnarray} {\widehat V}_{\mu\nu}{}^{ij} &=& V_{\mu\nu}{}^{ij} -
{\bar\psi}_{[\mu}^{(i}\gamma^\rho {\widehat\psi}_{\nu]\rho}^{j)} +\ft1{6} {\bar\psi}_\mu^{(i} \gamma\cdot {\widehat
H}\psi_\nu^{j)} +\ft14 i {\bar\psi}_\mu^{(i} \gamma\cdot {\widehat
G} \psi_\nu^{j)}\ . \label{vmn}
\end{eqnarray}
The Riemann squared action can be added to the off-shell Poincar\'e supergravity, which is also invariant under the transformation rules (\ref{UnGaugedTransform}) and whose bosonic part is given by \cite{Ozkan:2013uk}
\begin{eqnarray}
e^{-1}{\mathcal{L}}_{LR}^D &=& \tfrac12 L R + \ft12 L^{-1} \partial_\mu L \partial^\mu L- \ft14 L G_{\mu\nu} G^{\mu\nu} - \ft{1}6 L H_{\mu\nu\rho} H^{\mu\nu\rho}\nonumber\\
&& - L^{-1} N^2 - L^{-1} P_\mu P^\mu - \sqrt2 P^\mu V_\mu + L V_\mu^{'ij} V^{'\mu}_{ij}\,,
\label{RLag}
\end{eqnarray}
where $P_\mu$ is the bosonic part of the supercovariant curvature $E_\mu$ according to (\ref{PmEm}), and we have decomposed the field $V_\mu^{ij}$ into its trace and traceless part as
\begin{eqnarray}
V_\mu^{ij} = V_\mu^{'ij} + \tfrac12 \delta^{ij} V_\mu, \qquad V_\mu^{'ij} \delta_{ij}=0\,.
\end{eqnarray}
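For later use, note that with $\delta^{ij}\delta_{ij}=2$ the trace part can be extracted directly from this decomposition as
\begin{equation}
V_\mu = \delta_{ij} V_\mu^{ij}\,, \qquad V_\mu^{'ij} = V_\mu^{ij} - \tfrac12 \delta^{ij} \delta_{kl} V_\mu^{kl}\,.
\end{equation}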
\subsection{Supersymmetric $C^2_{\mu\nu\rho\lambda}+\ft16R^2$ Action}
In this section, we review the supersymmetrization of the Weyl tensor squared action in the dilaton Weyl multiplet. As we mentioned before, in the dilaton Weyl multiplet, the supersymmetrization of the Weyl tensor squared acquires an extra Ricci scalar squared term through the square of $D$. Therefore, the supersymmetric completion of the Weyl tensor squared $C_{\mu\nu\rho\sigma}^2$ turns into the supersymmetric completion of $C^2_{\mu\nu\rho\lambda}+\ft16R^2$. The bosonic part of the supersymmetric $C^2_{\mu\nu\rho\lambda}+\ft16R^2$ action is given by \cite{Ozkan:2013uk}
\begin{eqnarray} \label{pregb2}
e^{-1} \mathcal{L}_{C^2 + \ft16R^2}^D &=& \ft18 R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} - \ft16 R_{\mu\nu} R^{\mu\nu} + \ft1{48} R^2 + \ft{64}3 \underline{D}^2 + \ft{1024}9 \underline{T}^2 \underline{D}\nonumber\\
&& - \ft{16}3 {R}_{\mu\nu\rho\sigma} \underline{T}^{\mu\nu} \, \underline{T}^{\rho\sigma} + 2{R}_{\mu\nu\rho\sigma} \underline{T}^{\rho\sigma} G^{\mu\nu} + \ft13 R \underline{T}_{\mu\nu} G^{\mu\nu} - \ft83 R_{\nu\sigma} G_\rho{}^\nu \underline{T}^{\rho\sigma} \nonumber\\
&& - \ft{64}3 R^{\nu\rho} \underline{T}_{\mu\nu} \underline{T}^\mu{}_\rho +\ft{8}3 R \underline{T}^2 - \ft{32}3 \underline{D} \, \underline{T}_{\mu\nu} G^{\mu\nu} + \ft1{16} \epsilon_{\mu\nu\rho\sigma\lambda}C^\mu {R}^{\nu\rho\tau\delta} {R}^{\sigma\lambda}{}_{\tau\delta} \nonumber\\
&& -\ft1{12} \epsilon_{\mu\nu\rho\sigma\lambda} C^\mu V^{\nu\rho}{}_{ij} V^{\sigma\lambda\, ij} - \ft{1}3 V_{\mu\nu}{}^{ij} V^{\mu\nu}{}_{ij} -\ft{64}3 \nabla_\mu \underline{T}_{\nu\rho} \nabla^\mu \underline{T}^{\nu\rho} \nonumber\\
&&+ \ft{64}3 \nabla_\nu \underline{T}_{\mu\rho} \nabla^\mu \underline{T}^{\nu\rho} - \ft{128}3 \underline{T}_{\mu\nu} \nabla^\nu \nabla_\rho \underline{T}^{\mu\rho} - \ft{128}3 \epsilon_{\mu\nu\rho\sigma\lambda} \underline{T}^{\mu\nu} \underline{T}^{\rho\sigma} \nabla_\tau \underline{T}^{\lambda\tau} \nonumber\\
&&+ 1024 \, \underline{T}^4- \ft{2816}{27} (\underline{T}^2)^2 - \ft{64}9 \underline{T}_{\mu\nu} G^{\mu\nu} \underline{T}^2 - \ft{256}3 \underline{T}_{\mu\rho} \underline{T}^{\rho\sigma} \underline{T}_{\nu\sigma} G^{\mu\nu} \nonumber\\
&& - \ft{32}3 \epsilon_{\mu\nu\rho\sigma\lambda} \underline{T}^{\rho\tau} \nabla_\tau \underline{T}^{\sigma\lambda} G^{\mu\nu}- 16 \epsilon_{\mu\nu\rho\sigma\lambda} \underline{T}^\rho{}_\tau \nabla^\sigma \underline{T}^{\lambda\tau} G^{\mu\nu}\,,
\end{eqnarray}
where the bosonic parts of the composite fields $\underline{D}$ and $\underline{T}^{ab}$ are given as
\begin{eqnarray}
\underline{D}&\equiv&-\ft1{32}R-\ft1{16}G^{ab}G_{ab}-\ft{26}3 \underline{T}^{ab}\underline{T}_{ab}+2\underline{T}^{ab}G_{ab}\,,\nonumber\\
\underline{T}_{ab} &\equiv& \ft18 G_{ab} + \ft1{48} \epsilon_{abcde} H^{cde}\,.
\label{UMap2}
\end{eqnarray}
For simplicity, we introduced the notation $T^4 \equiv T_{ab} T^{bc} T_{cd} T^{da}$ and $(T^2)^2 \equiv (T_{ab} T^{ab})^2$. We also used
\begin{eqnarray}
\nabla_\mu T_{ab} = \partial_\mu T_{ab} - 2 \omega_\mu{}^c{}_{[a} T_{b]c}.
\end{eqnarray}
Note that to obtain the above expressions for the composite fields, the gauge fixing conditions (\ref{GF2}) have been utilized. The supersymmetric completion of $C_{\mu\nu\rho\sigma}^2 + \ft16 R^2$ given in (\ref{pregb2}) can be combined with the Riemann squared action (\ref{bosonicR2}) and the Poincar\'e supergravity (\ref{RLag}) as
\begin{eqnarray}
\mathcal{L}_{LR}^{D} + \alpha \mathcal{L}_{{\rm Riem}^2}^D + \beta \mathcal{L}_{C^2+\ft16R^2}^D.
\end{eqnarray}
This theory possesses a maximally supersymmetric ${\rm Minkowski}_5$ vacuum. The spectrum around the ${\rm Minkowski}_5$ vacuum has been analyzed in \cite{Ozkan:2013uk}, where it is found that for generic $\alpha,\,\beta$, the full spectrum consists of the (reducible) massless 12+12 supergravity multiplet with fields
$(h_{\mu\nu},\,b_{\mu\nu},\,c_{\mu},\,\phi,$ $\,\psi^{i}_{\mu},\,\varphi^i)$ and a ghost massive 32+32 supergravity
multiplet with fields $(h_{\mu\nu},\,b_{\mu\nu},\,c_{\mu},\,\phi,\,v^{ij}_{\mu},\psi^{i}_{\mu},\,\varphi^i)$. In the special case when $\beta = 3 \alpha$, the massive multiplet decouples and the curvature squared terms furnish the supersymmetric completion of the Gauss-Bonnet combination \cite{Ozkan:2013uk}.
\subsection{Supersymmetric Ricci Scalar Squared Action}
In this section, we construct the supersymmetric completion of Ricci scalar squared action using the dilaton Weyl multiplet. The key observation behind the construction is that the composite expression of $Y^{ij}$ (\ref{LEmbde}) contains the Ricci scalar implicitly in the superconformal d'Alembertian of $L^{ij}$. Therefore, the supersymmetric Ricci scalar squared action can be obtained by substituting the composite expressions (\ref{LEmbde}) in the vector multiplet action given in (\ref{VDW}) since the off-shell vector multiplet action has a $Y^{ij}Y_{ij}$ term. The construction of supersymmetric Ricci scalar squared action completes the off-shell curvature squared actions based on the dilaton Weyl multiplet in $\mathcal{N}=2,\, D=5$ supergravity.
For the construction of Ricci scalar squared invariant, we first rewrite the vector multiplet action (\ref{VDW}) for a single vector multiplet under the gauge fixing conditions (\ref{GF2}) as,
\begin{eqnarray}
e^{-1} \mathcal{L}_V^D|_{\sigma=1} &=& Y_{ij} Y^{ij} - \ft12 \nabla_\mu \rho \nabla^\mu \rho - \ft14 (F_{\mu\nu} - \rho G_{\mu\nu}) (F^{\mu\nu} - \rho G^{\mu\nu}) \nonumber\\
&& - \ft18 \epsilon^{\mu\nu\rho\sigma\lambda} (F_{\mu\nu} - \rho G_{\mu\nu})(F_{\rho\sigma} - \rho G_{\rho\sigma}) C_\lambda \nonumber\\
&& - \ft12 \epsilon^{\mu\nu\rho\sigma\lambda} (F_{\mu\nu} - \rho G_{\mu\nu}) B_{\rho\sigma} \nabla_\lambda \rho \,.
\label{fixedVDW}
\end{eqnarray}
Using the same gauge fixing, the composite expressions (\ref{LEmbde}) for the elements of vector multiplet can be recast into
\begin{eqnarray}
\underline{\rho}|_{\sigma=1} &=& 2 N L^{-1}, \nonumber\\
\underline{Y}_{ij}|_{\sigma=1} &=& \ft1{\sqrt2} \delta_{ij} \Big(- \ft12 R + \ft14 G_{ab} G^{ab} + \ft16 H_{abc} H^{abc} - L^{-2} N^2 - L^{-2} P_{a}P^{a} - V_{a}^{'kl} V^{'a}_{kl} \nonumber\\
&& \qquad + L^{-1} \Box L - \ft12 L^{-2} \partial_a L \partial^a L \Big) + 2L^{-1} P^a V'_{aij} - \sqrt{2} L^{-1}\nabla^a(L V'_{a}{}^m{}_{(i} \delta_{j)m}), \nonumber\\
\underline{\widehat{F}}_{ab}|_{\sigma=1} &=& 2\sqrt{2} \partial_{[a} \Big( V_{b]} + \sqrt2 L^{-1} P_{b]} \Big).
\end{eqnarray}
The fermionic terms in the composite expressions of the vector multiplet can be straightforwardly obtained from the complete results given in (\ref{LEmbde}). Using the above formulas in (\ref{fixedVDW}), we obtain the supersymmetric Ricci scalar squared action in the dilaton Weyl multiplet whose bosonic part reads
\begin{eqnarray}
e^{-1} \mathcal{L}_{R^2}^{D} &=& \ft14 \Big( R - \ft12 G_{\mu\nu} G^{\mu\nu} - \ft13 H_{\mu\nu\rho} H^{\mu\nu\rho} + 2 L^{-2} N^2 + 2 L^{-2} P_\mu P^\mu -4Z_{\mu}\bar{Z}^{\mu}- 2 L^{-1} \Box L \nonumber\\
&& + L^{-2} \partial_\mu L \partial^\mu L \Big)^2 -L^{-2} \Big|2\nabla^{\mu}(L Z_{\mu})+ 2\sqrt{2}{\rm i} P^{\mu}Z_{\mu} \Big|^2 - 2 \nabla_\mu(L^{-1}N) \nabla^\mu(L^{-1}N) \nonumber\\
&& -\ft12 \epsilon^{\mu\nu\rho\sigma\lambda} \Big( \partial_{\mu} \widetilde{C}_{\nu} - NL^{-1} G_{\mu\nu} \Big)\Big( \partial_{\rho} \widetilde{C}_{\sigma} - NL^{-1} G_{\rho\sigma} \Big) C_\lambda \nonumber\\
&& - 2\epsilon^{\mu\nu\rho\sigma\lambda}\Big(\partial_{\mu} \widetilde{C}_{\nu} - NL^{-1} G_{\mu\nu} \Big)B_{\rho\sigma} \nabla_\lambda (L^{-1} N)\nonumber\\
&& - \Big( \partial_{[\mu} \widetilde{C}_{\nu]} - NL^{-1} G_{\mu\nu} \Big)\Big( \partial^{\mu} \widetilde{C}^{\nu} - NL^{-1} G^{\mu\nu} \Big), \,
\end{eqnarray}
where for simplicity, we have defined
\begin{equation}
Z_{\mu}=V^{'12}_{\mu} + {\rm i} V^{'11}_{\mu},\qquad \widetilde{C}_{\mu} =\sqrt{2}V_{\mu}+2L^{-1}P_{\mu}.
\end{equation}
The general $R+R^2$ action in the dilaton Weyl multiplet can therefore be written as
\begin{eqnarray}
\mathcal{L}_{LR}^{D} + \alpha \mathcal{L}_{{\rm Riem}^2}^D + \beta \mathcal{L}_{C^2+\ft16R^2}^D+\gamma \mathcal{L}_{R^2}^D.
\end{eqnarray}
The inclusion of the Ricci scalar squared action does not affect the existence of the maximally supersymmetric ${\rm Minkowski}_5$ vacuum; however, it introduces a massive vector multiplet with $m^2=\frac{L_0}{2\gamma}$, where $L_0$ denotes the background value of $L$. The 8+8 degrees of freedom in the massive vector multiplet are carried by $(L,N,\partial^{\mu}Z_{\mu},\widetilde{C}_{\mu},\varphi^i)$.
\section{Vector Multiplets Coupled Curvature Squared Invariants in the Dilaton Weyl Multiplet}\label{section: dilationvector}
In the previous section, we have completed the curvature squared invariants purely based on the off-shell Poincar\'e multiplet (\ref{fieldcontent}). In this section, we couple external vector multiplets to the curvature squared invariants. The inclusion of external vector multiplets gives rise to a mixed Chern-Simons term in the supersymmetric Riemann squared action
\begin{eqnarray}
A \wedge R \wedge R \,,
\end{eqnarray}
where the vector $A_{\mu}$ belongs to a vector multiplet, as opposed to the case of minimal off-shell curvature squared invariants in the dilaton Weyl multiplet where the Chern-Simons term is purely gravitational (\ref{bosonicR2})
\begin{eqnarray}
C \wedge R \wedge R \,,
\end{eqnarray}
where $C_{\mu}$ is the vector in the Poincar\'e multiplet. In the following, we directly present the vector multiplets coupled curvature squared invariants, which can be obtained straightforwardly from the corresponding single vector multiplet results.
\subsection{Vector Multiplets Coupled Riemann Squared Action}
In this subsection, we generalize the Riemann squared action purely based on the off-shell Poincar\'e multiplet to the vector multiplets coupled Riemann squared action in which the Chern-Simons term takes the form of $A\wedge R \wedge R$. In order to construct the vector multiplets coupled Riemann squared action, we consider the following Yang-Mills action in the dilaton Weyl multiplet. This action is the Yang-Mills analogue of the $n$ Abelian vector action (\ref{NVDW})
\begin{eqnarray}
e^{-1} \mathcal{L}_{\rm YM}^{'D} &=& \rho Y_{ij}^\Sigma Y^{ij\,\Sigma} + 2\rho^\Sigma Y_{ij}^\Sigma Y^{ij} + \rho \rho^\Sigma \nabla_\mu \nabla^\mu \rho^\Sigma + \ft12 \rho \nabla_\mu \rho^\Sigma \nabla^\mu \rho^\Sigma \nonumber\\
&& -\ft14 \rho ( F_{\mu\nu}^\Sigma - \rho^\Sigma G_{\mu\nu}) ( F^{\mu\nu\,\Sigma} - 3 \rho^\Sigma G^{\mu\nu}) -\ft12 (F_{\mu\nu}^\Sigma - \rho^\Sigma G_{\mu\nu} )\rho^\Sigma F^{\mu\nu} \nonumber\\
&& + \ft1{12} \rho^\Sigma \rho^\Sigma \epsilon^{\mu\nu\rho\sigma\lambda}(F_{\mu\nu} - 2\rho G_{\mu\nu}) H_{\rho\sigma\lambda} + \ft16 \rho \rho^\Sigma \epsilon^{\mu\nu\rho\sigma\lambda} F_{\mu\nu}^\Sigma H_{\rho\sigma\lambda} \nonumber\\
&& -\ft18 \epsilon^{\mu\nu\rho\sigma\lambda} F_{\mu\nu}^\Sigma F_{\rho\sigma}^\Sigma A_\lambda \,,
\label{NVS1}
\end{eqnarray}
where $\Sigma$ is the Yang-Mills group index. The construction procedure of the vector multiplets coupled Riemann squared action is the same as before. Upon applying the map between the Yang-Mills multiplet and the dilaton Weyl multiplet, we obtain the vector multiplets coupled Riemann squared action
\begin{eqnarray}
&&e^{-1} \mathcal{L}_{\rm Riem^2}^{'D}= \alpha_I \Big[-\ft14 \rho^I ( R_{\mu\nu ab}(\omega_+) - G_{\mu\nu} G_{ab}) (R^{\mu\nu ab}(\omega_+) - 3 G^{\mu\nu} G^{ab}) \nonumber\\
&& \qquad\qquad\quad - \ft12 (R^{\mu\nu ab}(\omega_+) - G^{\mu\nu} G^{ab}) F^I_{\mu\nu} G_{ab} + \rho^I V_{ij}{}^{\mu\nu} V^{ij}{}_{\mu\nu} - 2 G_{\mu\nu} V_{ij}{}^{\mu\nu} Y^{ij\,I} \nonumber\\
&& \qquad\qquad\quad + \rho^I G_{ab} \nabla_\mu(\omega_+) \nabla^\mu(\omega_+) G^{ab} + \ft12 \rho^I \nabla_\mu (\omega_+) G^{ab} \nabla^\mu (\omega_+) G_{ab} \nonumber\\
&& \qquad\qquad\quad + \ft1{12} \epsilon^{\mu\nu\rho\sigma\lambda}(F_{\mu\nu}^I - 2\rho^I G_{\mu\nu}) H_{\rho\sigma\lambda} G_{ab} G^{ab} + \ft16 \rho^I \epsilon^{\mu\nu\rho\sigma\lambda} R_{\mu\nu ab}(\omega_+) G^{ab}H_{\rho\sigma\lambda} \nonumber\\
&& \qquad\qquad\quad - \ft18 \epsilon^{\mu\nu\rho\sigma\lambda} R_{\mu\nu ab}(\omega_+) R_{\rho\sigma}{}^{ab}(\omega_+) A_\lambda^I \Big].
\end{eqnarray}
Note that this action recovers the Riemann squared invariant (\ref{bosonicR2}) upon considering a single vector multiplet, $I=1$, and applying the map (\ref{DVMap}) from the vector multiplet to the dilaton Weyl multiplet.
\subsection{Vector Multiplets Coupled $C^2_{\mu\nu\rho\lambda}+\ft16R^2$ Action}
The $n$-vector multiplets coupled $C^2_{\mu\nu\rho\lambda}+\ft16R^2$ action can be straightforwardly obtained from the Weyl squared action \cite{Hanaki:2006pj} in the standard Weyl multiplet by replacing $D$, $T_{ab}$ and $\chi_i$ with their composite (underlined) expressions
\begin{eqnarray}
e^{-1} \mathcal{L}_{C^2+\ft16 R^2}^{'D} &=& \beta_{I} \Big[ \ft18\rho^I {C}^{\mu\nu\rho\sigma} {C}_{\mu\nu\rho\sigma}+ \ft{64}3 \rho^I \underline{D}^2 + \ft{1024}9 \rho^I \underline{T}^2 \underline{D} - \ft{32}3 \underline{D} \, \underline{T}_{\mu\nu} F^{\mu\nu\,I} \nonumber\\
&& - \ft{16}3 \rho^I {C}_{\mu\nu\rho\sigma} \underline{T}^{\mu\nu} \, \underline{T}^{\rho\sigma} + 2{C}_{\mu\nu\rho\sigma} \underline{T}^{\mu\nu} F^{\rho\sigma\,I} + \ft1{16} \epsilon^{\mu\nu\rho\sigma\lambda}A_\mu^I {C}_{\nu\rho\tau\delta} {C}_{\sigma\lambda}{}^{\tau\delta} \nonumber\\
&& -\ft1{12} \epsilon^{\mu\nu\rho\sigma\lambda} A_\mu^I {V}_{\nu\rho}{}^{ij} {V}_{\sigma\lambda\, ij} + \ft{16}3 Y^I_{ij} {V}_{\mu\nu}{}^ {ij} \underline{T}^{\mu\nu} - \ft{1}3 \rho^I {V}_{\mu\nu}{}^{ij}{V}^{\mu\nu}{}_{ij} \nonumber\\
&& +\ft{64}3 \rho^I \nabla_\nu \underline{T}_{\mu\rho} \nabla^\mu \underline{T}^{\nu\rho} - \ft{128}3 \rho^I \underline{T}_{\mu\nu} \nabla^\nu \nabla_\rho \underline{T}^{\mu\rho} - \ft{256}9 \rho^I R^{\nu\rho} \underline{T}_{\mu\nu} \underline{T}^\mu{}_\rho \nonumber\\
&& + \ft{32}9 \rho^I R \underline{T}^2 - \ft{64}3 \rho^I \nabla_\mu \underline{T}_{\nu\rho} \nabla^\mu \underline{T}^{\nu\rho} + 1024 \rho^I \, \underline{T}^4- \ft{2816}{27} \rho^I (\underline{T}^2)^2 \nonumber\\
&&- \ft{64}9 {\underline{T}_{\mu\nu}} F^{\mu\nu\,I} \underline{T}^2 - \ft{256}3 \underline{T}_{\mu\rho} \underline{T}^{\rho\lambda} \underline{T}_{\nu\lambda} F^{\mu\nu\,I} - \ft{32}3 \epsilon_{\mu\nu\rho\sigma\lambda} \underline{T}^{\rho\tau} \nabla_\tau \underline{T}^{\sigma\lambda} F^{\mu\nu\,I} \nonumber\\
&& - 16 \epsilon_{\mu\nu\rho\sigma\lambda} \underline{T}^\rho{}_\tau \nabla^\sigma \underline{T}^{\lambda\tau} F^{\mu\nu\,I} - \ft{128}3 \rho^I \epsilon_{\mu\nu\rho\sigma\lambda} \underline{T}^{\mu\nu} \underline{T}^{\rho\sigma} \nabla_\tau \underline{T}^{\lambda\tau}\Big] \,,
\label{pregb3}
\end{eqnarray}
where the composite expressions for $D$ and $T_{ab}$ in $\sigma=1$ gauge fixing are given in (\ref{UMap2}).
\subsection{Vector Multiplets Coupled Ricci Scalar Squared Action }
To obtain an off-shell Ricci scalar squared invariant coupled to vector multiplets, we use the same strategy as in the construction of the minimal Ricci scalar squared invariant. The starting point is the vector multiplet action (\ref{NVDW}). By choosing the nonvanishing components of $C_{IJK}$ to be $C_{I11} = \gamma_I$ and replacing $D$, $T_{ab}$ by their composite expressions (\ref{UMap}), we obtain the $n$ vector coupled Ricci scalar squared action
\begin{eqnarray}
e^{-1} \mathcal{L}_{R^2}^{' D} &=& \gamma_I \Big( \rho^I \underline{Y}_{ij} \underline{Y}^{ij} + 2 \underline{\rho} \underline{Y}^{ij} Y_{ij}^I - \ft14 \rho^I \underline{F}_{\mu\nu} \underline{F}^{\mu\nu} - \ft12 \underline{\rho} \, \underline{F}^{\mu\nu} (F_{\mu\nu}^I - 2 \rho^I G_{\mu\nu} ) \nonumber\\
&& \quad + \ft12 \underline{\rho}^2 ( F_{\mu\nu}^I - \ft32 \rho^I G_{\mu\nu}) G^{\mu\nu} + \ft1{12}\underline{\rho}^2 \epsilon^{\mu\nu\rho\sigma\lambda} (F_{\mu\nu}^I - 2 \rho^I G_{\mu\nu} ) H_{\rho\sigma\lambda} \nonumber\\
&& \quad + \ft16 \rho^I \underline{\rho} \epsilon_{\mu\nu\rho\sigma\lambda}\underline{F}^{\mu\nu} H^{\rho\sigma\lambda} - \ft18 \epsilon_{\mu\nu\rho\sigma\lambda} A^{\mu I} \underline{F}^{\nu\rho} \underline{F}^{\sigma\lambda} + \ft12 \rho^I \nabla_\mu \underline{\rho} \nabla^\mu \underline{\rho}\nonumber\\
&& \quad + \rho^I \underline{\rho} \Box \underline{\rho} \Big)\,.
\label{R2SW2}
\end{eqnarray}
The composite expressions for the elements of a vector multiplet are given as
\begin{eqnarray}
\underline{\rho}|_{\sigma=1} &=& 2 N L^{-1}, \nonumber\\
\underline{Y}_{ij}|_{\sigma=1} &=& \ft1{\sqrt2} \delta_{ij} \Big(- \ft12 R + \ft14 G_{ab} G^{ab} + \ft16 H_{abc} H^{abc} - L^{-2} N^2 - L^{-2} P_{a}P^{a} - V_{a}^{'kl} V^{'a}_{kl} \nonumber\\
&& \qquad + L^{-1} \Box L - \ft12 L^{-2} \partial_a L \partial^a L \Big) + 2L^{-1} P^a V'_{aij} - \sqrt{2} L^{-1}\nabla^a(L V'_{a}{}^m{}_{(i} \delta_{j)m}), \nonumber\\
\underline{\widehat{F}}_{ab}|_{\sigma=1} &=& 2\sqrt{2} \partial_{[a} \Big( V_{b]} + \sqrt2 L^{-1} P_{b]} \Big).
\end{eqnarray}
At this point, we have also completed the vector multiplets coupled curvature squared invariants. The general vector multiplets coupled $R+R^2$ theory is given by
\begin{eqnarray}
\mathcal{L}_{LR}^D + \mathcal{L}_{V}^{'D} + \mathcal{L}_{\rm Riem^2}^{'D} + \mathcal{L}_{C^2 + \ft16 R^2}^{'D} + \mathcal{L}_{R^2}^{'D},
\end{eqnarray}
in which the vector multiplet action in $\sigma=1$ gauge is given as
\begin{eqnarray}
e^{-1} \mathcal{L}_{V}^{'D}|_{\sigma=1} &=& a_{IJ}\Big( \rho Y_{ij}^I Y^{ij\,J} + 2\rho^I Y_{ij}^J Y^{ij} + \rho \rho^I \nabla_\mu \nabla^\mu \rho^J + \ft12 \rho \nabla_\mu \rho^I \nabla^\mu \rho^J \nonumber\\
&& \qquad -\ft14 \rho ( F_{\mu\nu}^I - \rho^I G_{\mu\nu}) ( F^{\mu\nu\,J} - 3 \rho^J G^{\mu\nu}) -\ft12 (F_{\mu\nu}^I - \rho^I G_{\mu\nu} )\rho^J F^{\mu\nu} \nonumber\\
&& \qquad + \ft1{12} \rho^I \rho^J \epsilon^{\mu\nu\rho\sigma\lambda}(F_{\mu\nu} - 2\rho G_{\mu\nu}) H_{\rho\sigma\lambda} + \ft16 \rho \rho^I \epsilon^{\mu\nu\rho\sigma\lambda} F_{\mu\nu}^J H_{\rho\sigma\lambda} \nonumber\\
&& \qquad -\ft18 \epsilon^{\mu\nu\rho\sigma\lambda} F_{\mu\nu}^I F_{\rho\sigma}^J A_\lambda \Big) \,.
\label{YMS1}
\end{eqnarray}
The supersymmetric completion of vector multiplets coupled Gauss-Bonnet combination can be achieved by setting $\gamma_I = 0$, and choosing the free parameters of $\mathcal{L}_{\rm Riem^2}^{'D}$ and $\mathcal{L}^{'D}_{C^2 +\ft16 R^2}$ to be related to each other according to $\beta_I = 3 \alpha_I$.
\section{$D=5,\, \mathcal{N}=2$ Off-Shell Supergravity in the Standard Weyl Multiplet}\label{section: Sugra}
In the previous sections, we presented all off-shell curvature squared actions of the five dimensional $\mathcal{N}=2$ theory based on the dilaton Weyl multiplet. In this section, we focus on the construction of off-shell actions by using the standard Weyl multiplet.
In \cite{Bergshoeff:2004kh, Hanaki:2006pj}, a Poincar\'e supergravity was constructed by coupling the standard Weyl multiplet to a hypermultiplet and $n$ vector multiplets. However, we notice that to obtain the Ricci scalar squared action, it is more convenient to choose a linear multiplet as the compensator instead of a hypermultiplet. Therefore, we devote this section to the construction of an off-shell Poincar\'e supergravity and its $R$-symmetry gauging using the linear and vector multiplets.
In subsection \ref{ss:GFPS1} we introduce the superconformal theory which gives rise to an off-shell Poincar\'e supergravity upon fixing the redundant superconformal symmetries. In the next subsection, we discuss the $R$-symmetry gauging of this theory and show that the corresponding on-shell theory is the conventional gauged minimal $D=5,\, \mathcal{N}=2$ supergravity.
\subsection{Poincar\'e Supergravity}\label{ss:GFPS1}
A consistent superconformal supergravity is given by combining the linear multiplet action and vector multiplet action
\begin{eqnarray}
\mathcal{L}_R^S &=& - \mathcal{L}^S_L - 3 \mathcal{L}_{V}^S \,,
\end{eqnarray}
where $\mathcal{L}^S_L$ is given in (\ref{Lact}) and $\mathcal{L}^S_V$ is given in (\ref{Vact}). This action has redundant superconformal symmetries that need to be fixed in order to obtain an off-shell Poincar\'e supergravity. The gauge fixing conditions adopted in this section are
\begin{eqnarray}
L_{ij} = \ft1{\sqrt2} \delta_{ij},\qquad b_\mu = 0, \qquad \varphi^i = 0,
\label{GF1}
\end{eqnarray}
where the first one breaks $\mathop{\rm SU}(2)_R$ to $\mathop{\rm U}(1)_R$ and fixes dilatations by setting $L=1$. The second one fixes the special conformal symmetry, and the last choice fixes the $S$-supersymmetry. To maintain the gauge (\ref{GF1}), compensating transformations are required. Here we only present the compensating special supersymmetry and the compensating conformal boost with parameters
\begin{eqnarray}
\eta_k &=& \ft13 \Big( \gamma \cdot T \epsilon_k - \ft1{\sqrt2} N \delta_{ik} \epsilon^i + \ft1{\sqrt2} {\rm i} \slashed{E} \delta_{ik} \epsilon^i + {\rm i} \gamma^a V_a^{'(i}{}_l \delta^{j)l} \delta_{ik} \epsilon_j \Big), \label{ETA} \\
\Lambda_{K\mu} &=& - \ft14 {\rm i} \bar\epsilon \phi_\mu - \ft14 {\rm i} \bar\eta \psi_\mu + \bar\epsilon \gamma_\mu \chi. \label{LKM}
\end{eqnarray}
Using the gauge fixing conditions, the bosonic part of the corresponding off-shell Poincar\'e supergravity is given by
\begin{eqnarray}
e^{-1} \mathcal{L}_{R}^{S} &=& \ft18 (\mathcal{C}+3)R + \ft13 (104 \mathcal{C} - 8) T^2 + 4 (\mathcal{C}-1)D - N^2- P_\mu P^\mu + V_\mu^{'ij} V^{'\mu}_{ij} - \sqrt2 V_\mu P^\mu \nonumber\\
&&+ \ft34 C_{IJK} \rho^I F_{\mu\nu}^J F^{\mu\nu\, K}+ \ft32 C_{IJK} \rho^I \partial_\mu \rho^J \partial^\mu \rho^K - 3 C_{IJK} \rho^I Y_{ij}^J Y^{ij\,K}\nonumber\\
&& - 12 C_{IJK} \rho^I \rho^J F_{\mu\nu}^K T^{\mu\nu} + \ft18 \epsilon^{\mu\nu\rho\sigma\lambda} C_{IJK} A^I_\mu F_{\nu\rho}^J F_{\sigma\lambda}^K,
\label{SWSUGRA}
\end{eqnarray}
where we have defined $\mathcal{C} = C_{IJK} \rho^I \rho^J \rho^K$. In the context of M-theory, five-dimensional ${\cal N} = 2$ supergravity coupled to Abelian vector supermultiplets arises by compactifying eleven-dimensional supergravity, the low-energy theory of M-theory, on a Calabi-Yau three-fold \cite{pap1,ant1}. The $STU$ model corresponds to $\mathcal{C} =STU$, where $S$, $T$ and $U$ are three vector moduli.
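For instance, in a commonly used normalization (an illustrative choice that is not fixed by the discussion above), the $STU$ model is obtained by taking the only nonvanishing components to be $C_{123}=\ft16$ together with its permutations, and identifying $S=\rho^1$, $T=\rho^2$, $U=\rho^3$, so that
\begin{equation}
\mathcal{C} = C_{IJK}\rho^I\rho^J\rho^K = \rho^1 \rho^2 \rho^3 = S\, T\, U\,.
\end{equation}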
\subsection{Gauged Model}\label{ss:GM1}
As a result of our gauge choices (\ref{GF1}), the $\mathop{\rm U}(1)_R$ symmetry of the off-shell Poincar\'e theory (\ref{SWSUGRA}) is gauged by the auxiliary vector $V_\mu$, i.e. the full $\mathop{\rm U}(1)_R$ covariant derivative for the gravitino is given by
\begin{eqnarray}
\nabla_\mu \psi_\nu^i = \Big( \partial_\mu + \ft14 \omega_\mu{}^{ab} \gamma_{ab} \Big) \psi_\nu^i - \ft12 V_\mu \delta^{ij} \psi_{\nu\,j} \,,
\label{auxcderiv}
\end{eqnarray}
where $V_{\mu}^{'ij}$, the traceless part of $V_\mu^{ij}$, does not appear in the $\mathop{\rm U}(1)_R$ covariant derivative for the gravitino as a consequence of our gauge fixing choices (\ref{GF1}). In this section, we discuss the $\mathop{\rm U}(1)_R$ gauging of the Poincar\'e theory by the physical vectors $A_\mu^I$. In the rest of the paper, we use the following notation
\begin{eqnarray}
\mathcal{C} = C_{IJK} \rho^I \rho^J \rho^K, \quad \mathcal{C}_I = 3 C_{IJK} \rho^J \rho^K,\quad \mathcal{C}_{IJ} = 6 C_{IJK} \rho^K\,.
\end{eqnarray}
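Since $C_{IJK}$ is totally symmetric, these quantities are simply the successive $\rho$-derivatives of $\mathcal{C}$, and one has the useful relations
\begin{equation}
\mathcal{C}_I = \frac{\partial \mathcal{C}}{\partial \rho^I}\,, \qquad \mathcal{C}_{IJ} = \frac{\partial^2 \mathcal{C}}{\partial \rho^I \partial \rho^J}\,, \qquad \mathcal{C}_I \rho^I = 3\,\mathcal{C}\,, \qquad \mathcal{C}_{IJ}\rho^J = 2\,\mathcal{C}_I\,.
\end{equation}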
The off-shell gauged model is given by
\begin{eqnarray}
e^{-1}\mathcal{L}_{g R}^{S} &=& e^{-1} ( \mathcal{L}_{R}^S - 3 g_I \mathcal{L}^I_{VL})|_{L=1} \nonumber\\
&=& \ft18 (\mathcal{C} + 3)R + \ft13 (104 \mathcal{C} - 8) T^2 + 4 (\mathcal{C} -1)D - N^2 - P_\mu P^\mu + V_\mu^{'ij} V^{'\mu}_{ij} \nonumber\\
&& - \sqrt2 P_\mu V^\mu + \ft18 \mathcal{C}_{IJ} F_{\mu\nu}^I F^{\mu\nu\, J} + \ft14 \mathcal{C}_{IJ} \partial_\mu \rho^I \partial^\mu \rho^J - \ft12 \mathcal{C}_{IJ} Y_{ij}^I Y^{ij\,J} - 4 \mathcal{C}_{I} F_{\mu\nu}^I T^{\mu\nu} \nonumber\\
&& + \ft18 \epsilon^{\mu\nu\rho\sigma\lambda} C_{IJK} A^I_\mu F_{\nu\rho}^J F_{\sigma\lambda}^K - \ft3{\sqrt2} g_I Y^I_{ij} \delta^{ij} - 3 g_I P^\mu A_\mu^I - 3 g_I \rho^I N \,,
\end{eqnarray}
where $L=1$ indicates the gauge fixing condition (\ref{GF1}). The field equation of $V_\mu$ implies that $P_\mu = 0$, and the $P_\mu$ equation then implies that
\begin{eqnarray}
V_\mu = - \ft{3}{\sqrt2} g_I A_\mu^I \,,
\label{Gauging}
\end{eqnarray}
hence, the auxiliary vector $V_\mu$ is replaced by a linear combination of physical vectors $A_\mu^I$
\begin{eqnarray}
\nabla_\mu \psi_\nu^i = \Big( \partial_\mu + \ft14 \omega_\mu{}^{ab} \gamma_{ab} \Big) \psi_\nu^i + \ft3{2\sqrt2} g_I A^I_\mu \delta^{ij} \psi_{\nu\,j} \,.
\label{physgaug}
\end{eqnarray}
Therefore, the $\mathop{\rm U}(1)_R$ symmetry is gauged by the physical vectors. The equations of motion for $D, T_{ab}, N$ and $Y_{ij}^I$ lead to
\begin{eqnarray}
0 &=& \mathcal{C} - 1, \nonumber\\
0 &=& \ft23 (104 \mathcal{C} - 8) T_{ab} - 4 \mathcal{C}_I F_{ab}^I, \nonumber\\
0 &=& 2N + 3 g_I \rho^I, \nonumber\\
0 &=& \mathcal{C}_{IJ} Y_{ij}^J + \ft3{\sqrt2} g_I \delta_{ij}.
\end{eqnarray}
The field equation for $D$ implies the constraint for very special geometry
\begin{eqnarray}
C_{IJK} \rho^I \rho^J \rho^K = 1\,.
\label{VSG}
\end{eqnarray}
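Explicitly, using $\mathcal{C}=1$ from the $D$ field equation, the remaining field equations above are solved by
\begin{equation}
T_{ab} = \ft1{16}\, \mathcal{C}_I F_{ab}^I\,, \qquad N = -\ft32\, g_I \rho^I\,, \qquad Y_{ij}^I = -\ft3{\sqrt2}\, \mathcal{C}^{IJ} g_J\, \delta_{ij}\,.
\end{equation}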
Eliminating $T_{ab}$, $N$ and $Y_{ij}^I$ according to their field equations gives rise to
the following on-shell action
\begin{eqnarray}
e^{-1}\mathcal{L}_{g R}^{S}|_{\rm{on-shell}} &=& \ft12 R + \ft18 (\mathcal{C}_{IJ} - \mathcal{C}_I \mathcal{C}_J) F_{\mu\nu}^I F^{\mu\nu\,J} + \ft14 \mathcal{C}_{IJ} \partial_\mu \rho^I \partial^\mu \rho^J\nonumber\\
&&+ \ft18 \epsilon^{\mu\nu\rho\sigma\lambda} C_{IJK} A^I_\mu F_{\nu\rho}^J F_{\sigma\lambda}^K + \Lambda(\rho),
\label{premin}
\end{eqnarray}
with
\begin{eqnarray}
\Lambda(\rho) = \ft94 (g_I \rho^I)^2 + \ft92 \mathcal{C}^{IJ} g_I g_J,
\label{potential1}
\end{eqnarray}
where $\mathcal{C}^{IJ}$ is the inverse of $\mathcal{C}_{IJ}$, and due to the constraint (\ref{VSG}), the $\rho^I$ are no longer independent fields.
One can proceed further and truncate the on-shell vector multiplets to obtain the minimal gauged supergravity. In order to do so, we consider a single graviphoton via
\begin{eqnarray}
\rho^I=\bar{\rho}^{I},\quad A_\mu^I = \bar{\rho}^I A_\mu , \quad g_I = \bar{\rho}_I g,
\label{truncation}
\end{eqnarray}
where $\bar{\rho}^I$ is the VEV of the scalar at the critical point of the scalar potential (\ref{potential1}) and $\bar{\rho}_I$ satisfies $\bar{\rho}^I \bar{\rho}_I = 1$. The truncation conditions in (\ref{truncation}) are consistent with the supersymmetry transformation rules and lead to
\begin{eqnarray}
e^{-1} \mathcal{L}_{g R}^{{\rm min}} = \ft12 R - \ft38 F_{\mu\nu} F^{\mu\nu} + \ft18 \epsilon^{\mu\nu\rho\sigma\lambda} A_\mu F_{\nu\rho} F_{\sigma\lambda} + 3 g^2,
\end{eqnarray}
which reproduces the conventional minimal on-shell gauged supergravity in five dimensions.
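As a consistency check of the coefficients above (assuming the standard identification $\bar{\rho}_I = C_{IJK}\bar{\rho}^J\bar{\rho}^K$, which is not spelled out here but is compatible with $\bar{\rho}^I\bar{\rho}_I=1$), the constraint (\ref{VSG}) gives $\mathcal{C}_I\bar{\rho}^I = 3$, $\mathcal{C}_{IJ}\bar{\rho}^I\bar{\rho}^J = 6$ and $\mathcal{C}^{IJ}\bar{\rho}_I\bar{\rho}_J = \ft16$, so that the truncation (\ref{truncation}) applied to (\ref{premin}) and (\ref{potential1}) indeed yields
\begin{equation}
\ft18\left(\mathcal{C}_{IJ} - \mathcal{C}_I\mathcal{C}_J\right)\bar{\rho}^I\bar{\rho}^J = \ft18\,(6-9) = -\ft38\,, \qquad
\Lambda(\bar\rho) = \ft94\, g^2 + \ft92\cdot \ft16\, g^2 = 3\, g^2\,.
\end{equation}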
\section{Supersymmetric Curvature Squared Actions in the Standard Weyl Multiplet}\label{section: Ricci2SW}
In this section, we quickly review the Weyl squared action derived in \cite{Hanaki:2006pj} and construct an off-shell Ricci scalar squared action based on the standard Weyl multiplet. The procedure for constructing the Ricci scalar squared action in the standard Weyl multiplet is similar to the one used in the dilaton Weyl multiplet. In the standard Weyl multiplet, the Ricci scalar squared term can be coupled to $n$ vector multiplets and alters the very special geometry.
\subsection{Supersymmetric Weyl Squared Action}
Using superconformal tensor calculus, an off-shell Weyl squared action in the standard Weyl multiplet was constructed in \cite{Hanaki:2006pj}, and its bosonic part reads
\begin{eqnarray}
e^{-1} \mathcal{L}_{C^2}^S &=& c_{I} \Big[ \ft18\rho^I {C}^{\mu\nu\rho\sigma} {C}_{\mu\nu\rho\sigma}+ \ft{64}3 \rho^I {D}^2 + \ft{1024}9 \rho^I {T}^2 {D} - \ft{32}3 {D} \, {T_{\mu\nu}} F^{\mu\nu\,I} \nonumber\\
&& - \ft{16}3 \rho^I {C}_{\mu\nu\rho\sigma} {T}^{\mu\nu} \, {T}^{\rho\sigma} + 2{C}_{\mu\nu\rho\sigma} {T}^{\mu\nu} F^{\rho\sigma\,I} + \ft1{16} \epsilon^{\mu\nu\rho\sigma\lambda}A_\mu^I {C}_{\nu\rho\tau\delta} {C}_{\sigma\lambda}{}^{\tau\delta} \nonumber\\
&& -\ft1{12} \epsilon^{\mu\nu\rho\sigma\lambda} A_\mu^I {V}_{\nu\rho}{}^{ij} {V}_{\sigma\lambda\, ij} + \ft{16}3 Y^I_{ij} {V}_{\mu\nu}{}^ {ij} {T}^{\mu\nu} - \ft{1}3 \rho^I {V}_{\mu\nu}{}^{ij}{V}^{\mu\nu}{}_{ij} \nonumber\\
&& +\ft{64}3 \rho^I \nabla_\nu T_{\mu\rho} \nabla^\mu T^{\nu\rho} - \ft{128}3 \rho^I {T_{\mu\nu}} \nabla^\nu \nabla_\rho {T}^{\mu\rho} - \ft{256}9 \rho^I R^{\nu\rho} T_{\mu\nu} T^\mu{}_\rho \nonumber\\
&& + \ft{32}9 \rho^I R T^2 - \ft{64}3 \rho^I \nabla_\mu T_{\nu\rho}\nabla^\mu T^{\nu\rho} + 1024 \rho^I \, {T}^4- \ft{2816}{27} \rho^I ({T}^2)^2 \nonumber\\
&&- \ft{64}9 {T_{\mu\nu}} F^{\mu\nu\,I} {T}^2 - \ft{256}3 {T_{\mu\rho}} {T}^{\rho\lambda} {T_{\nu\lambda}} F^{\mu\nu\,I} - \ft{32}3 \epsilon_{\mu\nu\rho\sigma\lambda} {T}^{\rho\tau} \nabla_\tau {T}^{\sigma\lambda} F^{\mu\nu\,I} \nonumber\\
&& - 16 \epsilon_{\mu\nu\rho\sigma\lambda} {T}^\rho{}_\tau \nabla^\sigma {T}^{\lambda\tau} F^{\mu\nu\,I} - \ft{128}3 \rho^I \epsilon_{\mu\nu\rho\sigma\lambda} {T}^{\mu\nu} {T}^{\rho\sigma} \nabla_\tau {T}^{\lambda\tau}\Big] \,,
\label{pregb}
\end{eqnarray}
where the five dimensional Weyl tensor reads
\begin{eqnarray}
C_{\mu\nu\rho\sigma} &=& R_{\mu\nu\rho\sigma} - \ft13 (g_{\mu\rho} R_{\nu\sigma} - g_{\nu\rho} R_{\mu\sigma} - g_{\mu\sigma} R_{\nu\rho} + g_{\nu\sigma} R_{\mu\rho}) \nonumber\\
&& + \ft{1}{12} (g_{\mu\rho} g_{\nu\sigma} - g_{\mu\sigma} g_{\nu\rho}) R.
\end{eqnarray}
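One can quickly verify the $D=5$ coefficients by contracting with the metric,
\begin{equation}
g^{\mu\rho} C_{\mu\nu\rho\sigma} = R_{\nu\sigma} - \ft13\left(3 R_{\nu\sigma} + g_{\nu\sigma} R\right) + \ft13\, g_{\nu\sigma} R = 0\,,
\end{equation}
as required of the Weyl tensor.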
Now, we would like to comment more rigorously on the difference between the Weyl squared invariant in the standard Weyl multiplet (\ref{pregb}) and its counterpart in the dilaton Weyl multiplet (\ref{pregb2}). As mentioned before, one of the main differences between these actions lies in the definition of $D$, which is an independent field in the standard Weyl multiplet but a composite field in the dilaton Weyl multiplet. As a composite field in the dilaton Weyl multiplet, $D$ contains a curvature term (\ref{UMap}). However, simply replacing $D, T_{ab}$ and $\chi^i$ by their composite expressions does not produce an action solely based on the dilaton Weyl multiplet. The resulting action also depends on the fields in the vector multiplet. We recall that neither the two-derivative Poincar\'e supergravity (\ref{RLag}) nor the Riemann squared action (\ref{bosonicR2}) has any dependence on the vector multiplet in the minimal off-shell supersymmetric model. To obtain the Weyl squared action solely constructed in terms of the dilaton Weyl multiplet, the map (\ref{DVMap}) from the vector multiplet to the dilaton Weyl multiplet is indispensable.
The Weyl squared action in (\ref{pregb}) is invariant under the transformation rules given in (\ref{SWMTR}) and (\ref{VMTR}) with $\eta^i$ and $\Lambda_{K\mu}$ being replaced according to (\ref{ETA}) and (\ref{LKM}).
\subsection{Supersymmetric Ricci Scalar Squared Action}
To obtain the Ricci scalar squared invariant in the standard Weyl multiplet, we begin with the composite expressions given in (\ref{LEmbde}) after fixing the redundant symmetries,
\begin{eqnarray}
\underline{\rho}|_{L=1} &=& 2N, \nonumber\\
\underline{Y}^{ij}|_{L=1} &=& \ft1{\sqrt2} \delta^{ij}\Big(- \ft3{8} R - N^2 - P^2 + \ft83 T^2 + 4D - V_{a}^{'kl} V_{kl}^{'a} \Big) \nonumber\\
&& + 2 P^a V'_{a}{}^{ij} - \sqrt2 \nabla^a V'_{a}{}^{m(i} \delta^{j)}{}_{m} , \nonumber\\
\underline{F}^{ab}|_{L=1} &=& 2\sqrt{2} \partial^{[a} \Big( V^{b]} + \sqrt2 P^{b]} \Big).
\label{fixedmap}
\end{eqnarray}
From the above expressions, one sees that the Ricci scalar squared term can come from the $Y_{ij} Y^{ij}$ term in the vector action. By choosing $C_{I11} = a_I$ and setting all other components to zero in (\ref{Vact}), we obtain the following Ricci scalar squared action
\begin{eqnarray}
e^{-1} \mathcal{L}_{R^2}^{ S} &=& a_I \Big( \rho^I \underline{Y}_{ij} \underline{Y}^{ij} + 2 \underline{\rho} \underline{Y}^{ij} Y_{ij}^I-\ft18\rho^I \underline{\rho}^2 R - \ft14 \rho^I \underline{F}_{\mu\nu} \underline{F}^{\mu\nu} - \ft12 \underline{\rho} \, \underline{F}^{\mu\nu} F_{\mu\nu}^I \nonumber\\
&& \quad+ \ft12 \rho^I \partial_\mu \underline{\rho} \partial^\mu \underline{\rho} + \rho^I \underline{\rho} \Box \underline{\rho} - 4 \rho^I \underline{\rho}^2 (D + \ft{26}3 T^2) + 4 \underline{\rho}^2 F_{\mu\nu}^I T^{\mu\nu} \nonumber\\
&& \quad + 8 \rho^I \underline{\rho}\, \underline{F}_{\mu\nu} T^{\mu\nu} - \ft18 \epsilon_{\mu\nu\rho\sigma\lambda} A^{\mu I} \underline{F}^{\nu\rho} \underline{F}^{\sigma\lambda} \Big).
\label{R2SW}
\end{eqnarray}
As we will work extensively with this action in the following section, we postpone plugging in the composite expressions given in (\ref{fixedmap}). To study solutions with vanishing auxiliary fields, we can use a truncated version of the above action, as we shall see in the next section. Off-shell supersymmetry allows us to combine the Ricci scalar squared action with the two-derivative Poincar\'e supergravity (\ref{SWSUGRA}) and the Weyl squared action (\ref{pregb}) to form a more general supergravity theory
\begin{eqnarray}
\mathcal{L}_R^{S} + \mathcal{L}_{C^2}^{S} + \mathcal{L}_{R^2}^{S},
\end{eqnarray}
where $\mathcal{L}_R^S$ is given in (\ref{SWSUGRA}), $\mathcal{L}_{C^2}^S$ is given in (\ref{pregb}) and $\mathcal{L}_{R^2}^S$ is given in (\ref{R2SW}).
\subsection{Ricci Scalar Squared Extended Gauged Model and Corrected Very Special Geometry}
In this section, we consider the off-shell Ricci scalar squared extended gauged model
\begin{eqnarray}
\mathcal{L}_R^{S} + \mathcal{L}_{R^2}^{S} + g_I \mathcal{L}_{VL}^I.
\label{gextended}
\end{eqnarray}
We consider the maximally supersymmetric $AdS_5$ solution. The ansatz preserving $SO(4,2)$ symmetry takes the form
\begin{equation}
R_{\mu\nu\rho\lambda}=\ft{R}{20}(g_{\mu\rho}g_{\nu\lambda}-g_{\mu\lambda}g_{\nu\rho}),\quad A^{I}_{\mu}=0,\quad T_{\mu\nu}=0,\quad \rho^I=\bar{\rho}^I,\quad N=\bar{N},\quad D=\bar{D},
\end{equation}
where $\bar{\rho}^I$, $\bar{N}$ and $\bar{D}$ are constants.
The maximal supersymmetry requires that
\begin{equation}
R=-\ft{40}9\bar{N}^2,\quad Y_{ij}^I = \ft1{3\sqrt2} \bar{ \rho}^I \bar{N} \delta_{ij}, \quad \bar{D}=0.
\label{Maximal}
\end{equation}
Employing the $\rho^{I}$ and $N$ field equations of the Lagrangian (\ref{gextended}), we obtain
\begin{equation}
2 \bar{N} C_{IJK} \bar{\rho}^J \bar{\rho}^K +3 g_I - \ft83 a_I \bar{N}^3=0 ,\quad 2 \bar{N} + 3 g_I \bar{\rho}^I = 0.
\end{equation}
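As an intermediate step, contracting the first equation with $\bar{\rho}^I$ and using the second to eliminate $g_I\bar{\rho}^I$ gives (for $\bar{N}\neq 0$)
\begin{equation}
2\bar{N}\left(\bar{\mathcal{C}} - 1\right) - \ft83\, a_I \bar{\rho}^I \bar{N}^3 = 0\,, \qquad \bar{\mathcal{C}} \equiv C_{IJK}\bar{\rho}^I\bar{\rho}^J\bar{\rho}^K\,.
\end{equation}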
These two equations imply
\begin{equation}
\bar{\mathcal{C}} = 1 + \ft43 a_I \bar{\rho}^I \bar{N}^2,
\label{qcvsg}
\end{equation}
which is consistent with the $D$ field equation, the $Y^{Iij}$ equation and the Einstein equation. Therefore, in the presence of the Ricci scalar squared term, $AdS_5$ remains the maximally supersymmetric solution. However, in this case, the very special geometry is modified according to (\ref{qcvsg}). Inserting $\bar{N}=-\ft32g_I \bar{\rho}^I$ into (\ref{qcvsg}), the quantum corrected very special geometry on the moduli space of the $AdS_5$ vacuum can be written as
\begin{equation}
\widetilde{C}_{IJK}\bar{\rho}^I\bar{\rho}^J\bar{\rho}^K=1,\qquad \widetilde{C}_{IJK}=C_{IJK}+ 3 a_{(I}g_{J}g_{K)}.
\end{equation}
We emphasize that the inclusion of the Weyl squared action (\ref{pregb}) also modifies the definition of the very special geometry; however, the modification vanishes on the maximally supersymmetric $AdS_5$ background (\ref{Maximal}).
\section{Supersymmetric Solutions with $AdS_3 \times S^2$ and $AdS_2 \times S^3$ Near Horizon Geometries} \label{section: Solution}
The strategy for finding regular solutions in higher derivative theories is to first write an ansatz consistent
with the assumed symmetries, and then demand unbroken supersymmetry. The supersymmetric magnetic strings and electric black holes preserving one half of the supersymmetries have been studied in \cite{sab1,sab2} for the case
of $n$ vector multiplets coupled to two-derivative Poincar\'e supergravity and in \cite{cas1} for the higher derivative case where only the off-shell Weyl squared invariant is taken into account. In the presence of the Weyl squared invariant, the magnetic strings and electric black holes receive corrections. In the following, we consider the Ricci scalar squared extended two-derivative theory (\ref{SWSUGRA}), which is the simplest curvature squared extended model. Explicitly, in this section we study the theory
\begin{eqnarray}
\mathcal{L} = \mathcal{L}_{R}^{S} + \mathcal{L}_{R^2}^{S},
\label{R2Ext}
\end{eqnarray}
where $\mathcal{L}_{R}^{S}$ and $\mathcal{L}_{R^2}^{S}$ are given by (\ref{SWSUGRA}) and (\ref{R2SW}) respectively. We are interested in solutions with vanishing auxiliary fields. It can be checked that
\begin{eqnarray}
Y_{ij}^I = N = E_a = V_a = V_{a}^{'ij} = 0,
\end{eqnarray}
is a consistent truncation of (\ref{R2Ext}) leading to a simpler effective action describing the
very special geometry extended by Ricci scalar squared invariant
\begin{eqnarray}
e^{-1} \mathcal{L} &=& \ft18 ({\cal C}+3)R + \ft13 (104 {\cal C} - 8) T^2 + 4 ({\cal C}-1)D + \ft34 C_{IJK} \rho^I F_{ab}^J F^{ab\, K} \nonumber\\
&& + \ft32 C_{IJK} \rho^I \partial_\mu \rho^J \partial^\mu \rho^K - 12 C_{IJK} \rho^I \rho^J F_{ab}^K T^{ab} + \ft18 \epsilon^{abcde} C_{IJK} A^I_a F_{bc}^J F_{de}^K \nonumber\\
&& + a_I \rho^I \Big( \ft{9}{64} R^2 - 3 D R - 2 R T^2 + 16 D^2 + \ft{64}3 D T^2 + \ft{64}9 (T^2)^2 \Big).
\label{R+nV+R2}
\end{eqnarray}
The supersymmetry transformations for the fermionic fields take the following forms when the auxiliary fields vanish
\begin{eqnarray}
\delta \psi_\mu^i &=& \Big( \nabla_\mu - 4 {\rm i} \gamma^a T_{\mu a} + \ft23 {\rm i} \gamma_\mu \gamma \cdot T \Big) \epsilon^i, \nonumber\\
\delta \chi^i &=& \Big( \ft14 D + \ft18 {\rm i} \gamma^{ab} \slashed{\nabla} T_{ab} - \ft18 {\rm i} \gamma^a \nabla^b T_{ab} - \ft16 \gamma^{abcd} T_{ab} T_{cd} \Big) \epsilon^i, \nonumber\\
\delta \lambda_i^I &=& \Big( -\ft14 \gamma \cdot \widehat{F}^I - \ft12 {\rm i} \slashed{\nabla} \rho^I + \ft43 \rho^I \gamma \cdot T \Big) \epsilon_i.
\end{eqnarray}
\subsection{Magnetic string solutions}
The metric preserving the symmetry of a static string takes the form
\begin{equation}
ds^2 = e^{2U_1(r)} (- dt^2+ dx_1^2) + e^{-4U_2(r)} dx^i dx^i,\quad dx^i dx^i = dr^2 + r^2 d\Omega_2^2,
\label{ms}
\end{equation}
where $i=2,3,4$. $F^I_{ab}$ and $T_{ab}$ are chosen to be proportional to the volume form of $S^2$. A natural choice for the vielbein is given by
\begin{equation}
e^{\hat{a}} = e^{U_1} dx^a , \quad a= 0,1, \qquad e^{\hat{i}}=e^{-2 U_2} dx^i , \quad i = 2,3,4.
\end{equation}
Accordingly, the non-vanishing components of the spin connections are
\begin{equation}
\omega_a{}^{\hat{a}\hat{i}} =e^{U_1 + 2 U_2} \partial_i U_1, \quad
\omega_k{}^{\hat{i}\hat{j}} = - 2 \delta_k^i \partial_j U_2 + 2 \delta_k^j \partial_i U_2.
\end{equation}
Similar to \cite{cas1}, the supersymmetry parameter $\epsilon^i$ is constant along the string and obeys
the projection condition which breaks half of the supersymmetries
\begin{equation}
\gamma_{\hat{t}\hat{1}}\epsilon = -\epsilon.
\label{pro}
\end{equation}
We first study the gravitino variation, which will fix $U_1=U_2$:
\begin{equation}
\delta \psi_\mu = \Big( \nabla_\mu - 4 {\rm i} \gamma^a T_{\mu a} + \ft23 {\rm i} \gamma_\mu \gamma \cdot T \Big) \epsilon.
\end{equation}
The covariant derivative is
\begin{eqnarray}
\nabla_a &=& \partial_a + \ft12 e^{U_1 + 2 U_2} \partial_i U_1 \gamma_{\hat{a}\hat{i}}, \nonumber\\
\nabla_i &=& \partial_i + \partial_j U_2 \gamma_{\hat{j}\hat{i}}.
\end{eqnarray}
Along the string direction, we have
\begin{eqnarray}
\Big[ \ft12 e^{U_1 + 2 U_2} \partial_i U_1 \gamma_{\hat{a}\hat{i}} + \ft23 {\rm i} e^{U_1} \gamma_{\hat{a}\hat{i}\hat{j}} T^{\hat{i}\hat{j}} \Big] \epsilon = 0.
\end{eqnarray}
We use the convention that $\gamma^0\gamma^1\gamma^2\gamma^3\gamma^4=i\epsilon^{01234}$ with $\epsilon^{01234}=1$. Therefore (\ref{pro}) implies
\begin{equation}
\gamma_{\hat{i}\hat{j}\hat{k}} \epsilon = {\rm i} \, \epsilon_{\hat{i}\hat{j}\hat{k}} \epsilon,\quad \gamma_{\hat{i}\hat{j}} \epsilon = \epsilon_{\hat{i}\hat{j}\hat{k}} \gamma_{\hat{k}} \epsilon,
\end{equation}
where $\epsilon_{234} = 1$.
Using the above conditions, one obtains
\begin{eqnarray}
\Big[ \ft12 e^{U_1 + 2 U_2} \partial_k U_1 - \ft23 e^{U_1} T^{\hat{i}\hat{j}} \epsilon_{\hat{i}\hat{j}\hat{k}} \Big] \gamma_{\hat{a}\hat{k}} \epsilon = 0.
\end{eqnarray}
The auxiliary field $T_{ab}$ can be solved as
\begin{eqnarray}
T_{\hat{i}\hat{j}} &=& \ft38 e^{2U_2} \epsilon_{\hat{i}\hat{j}\hat{k}} \partial_k U_1.
\end{eqnarray}
The gravitino variation along the $x^i$ direction leads to
\begin{eqnarray}
\Big[ \partial_k - {\rm i} \, \epsilon_{\hat{i}\hat{j}\hat{k}} \partial_{i} U_2 \gamma_{\hat{j}} -\ft83 {\rm i} \, \gamma^{\hat{i}} T_{k\hat{i}} - \ft23 \epsilon_{\hat{i}\hat{j}\hat{l}} \, e^{\hat{l}}{}_{k} T_{\hat{i}\hat{j}} \Big] \epsilon &=& 0.
\end{eqnarray}
The ``radial" part and ``angular" part result in two conditions
\begin{eqnarray}
0 &=& \Big[ \partial_k - \ft23 \epsilon_{\hat{i}\hat{j}\hat{l}} e{}^{\hat{l}}{}_k T_{\hat{i}\hat{j}} \Big] \epsilon \,,\nonumber\\
0 &=& \Big[ -\epsilon_{\hat{i}\hat{k}\hat{j}} \partial_i U_2 + \ft83 T_{k\hat{j}} \Big] \gamma_{\hat{j}} \epsilon.
\end{eqnarray}
The second equation restricts
\begin{equation}
U_1 = U_2,
\end{equation}
then the first equation implies that the Killing spinor is
\begin{equation}
\epsilon = e^{U/2} \epsilon_0,
\end{equation}
where $\epsilon_0$ is some constant spinor.
In spherical coordinates for the transverse space, $T_{ab}$ can be expressed as
\begin{equation}
T_{\theta\phi} = \ft38 e^{-2U} r^2 \sin\theta \partial_r U, \quad T_{\hat{\theta} \hat{\phi}} = \ft38 e^{2U} \partial_r U.
\end{equation}
The projection condition in these coordinates can be written as
\begin{equation}
\gamma_{\hat{r}\hat{\theta}\hat{\phi}} \, \epsilon = {\rm i} \, \epsilon.
\end{equation}
The gaugino variation $\delta \lambda_i^I$ on the magnetic background gives
\begin{equation}
\Big( -\ft12 \gamma_{\hat{\theta}\hat{\phi}} F^{I\, \hat{\theta}\hat{\phi}} - \ft12 {\rm i} \gamma^{\hat{r}} e_{\hat{r}}{}^r \partial_{r} \rho^I + \ft83 \rho^I \gamma_{\hat{\theta}\hat{\phi}} T^{\hat{\theta}\hat{\phi}} \Big) \epsilon = 0.
\end{equation}
Then
\begin{equation}
F_{\hat{\theta}\hat{\phi}}^I = -e^{2U_1} \partial_r \rho^I + \ft{16}3 \rho^I T_{\hat{\theta}\hat{\phi}}=- \partial_r (\rho^I e^{-2U}) e^{4U}.
\end{equation}
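For clarity, with $U_1=U_2\equiv U$ as established above, the last equality follows from
\begin{equation}
-e^{2U} \partial_r \rho^I + \ft{16}3 \rho^I \cdot \ft38\, e^{2U} \partial_r U = - e^{4U}\left( e^{-2U} \partial_r \rho^I - 2 \rho^I e^{-2U} \partial_r U \right) = - e^{4U} \partial_r \!\left( \rho^I e^{-2U} \right).
\end{equation}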
The supersymmetry variation of $\chi^i$ leads to
\begin{equation}
\Big( \ft14D + \ft18 {\rm i} \gamma^{ab} \slashed{\nabla} T_{ab} - \ft18 {\rm i} \gamma^a \nabla^b T_{ab} - \ft16 \gamma^{abcd} T_{ab} T_{cd} \Big) \epsilon = 0.
\end{equation}
Explicit computation shows that the auxiliary field $D$ can be determined from the above equation as
\begin{equation}
D = \ft38 e^{6U} r^{-2} \partial_r (e^{-3U} r^2 \partial_r U )= -\ft3{16} e^{6U} \nabla^2 e^{-2U}.
\end{equation}
So far we have exhausted the constraints that can be derived from the variations of the fermions.
In the following, we have to use the equations of motion. For the magnetic string configuration,
the equations of motion of the gauge potential are automatically satisfied; however, the Bianchi identity
gives rise to
\begin{equation}
\partial_r F_{\theta\phi}^I = - \partial_r \Big( r^2 \partial_r (\rho^I e^{-2U}) \Big) \sin\theta = 0.
\end{equation}
The solution to the above equation is given by \cite{sab1}
\begin{equation}
\rho^I e^{-2U} = H^I = \rho^I_{\infty} + \frac{p^I}{2r},\quad F^I = \frac{p^I}2 \epsilon_2,
\end{equation}
where $\rho^I_\infty$ is the value of $\rho^I$ in the asymptotically flat region where $U=0$.
The equation of $D$ derived from action (\ref{R+nV+R2}) is
\begin{equation}
{\cal C} = 1 +a_I \rho^I \Big( \ft34 R - 8 D - \ft{16}3 T^2 \Big).
\end{equation}
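This relation follows from varying (\ref{R+nV+R2}) with respect to $D$, which yields
\begin{equation}
0 = 4\,(\mathcal{C}-1) + a_I \rho^I \Big( -3 R + 32 D + \ft{64}3\, T^2 \Big)\,.
\end{equation}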
After substituting $T_{ab}$, $D$ and $R$ according to
\begin{equation}
T_{\hat{\theta} \hat{\phi}} = \ft38 e^{2U} \partial_r U,\quad D=-\ft3{16} e^{6U} \nabla^2 e^{-2U},\quad R = \frac{2 e^{4U}}r (4 U' - 3 r U^{'2} + 2 r U''),
\end{equation}
where ``prime'' denotes the derivative with respect to $r$, we find that the higher derivative corrections to the $D$ equation of motion vanish. Similarly, there are no
higher derivative corrections to the equations of motion of $T_{ab}$, $g_{\mu\nu}$ and $\rho^I$. Therefore, the magnetic strings do not receive corrections from the Ricci scalar squared invariant. This result seems to be compatible with the expectation from string theory. From the string theory point of view, the Ricci scalar squared invariant has no effect on on-shell quantities since it can be generated by a field redefinition of the two-derivative action. This result also suggests that it is the supersymmetrization of the curvature squared terms
that captures the correct features of the quantum corrections of ${\cal N}=2$ string vacua, because an arbitrary combination of $R^2$, $D$ and $T_{ab}$ would in general modify the equations of motion.
\subsection{Electric black holes}
Finding electric black holes follows the same procedure as in \cite{cas1}.
We take the ansatz
\begin{equation}
ds^2 = -e^{4U_1(r)} dt^2+ e^{-2U_2(r)} dx^i dx^i,\quad dx^i dx^i = dr^2 + r^2 d\Omega_3^2.
\label{eb}
\end{equation}
Supersymmetry requires that
\begin{equation}
U_1=U_2, \quad T_{ti} = \ft38 e^{2U} \partial_i U , \quad A_{t}^I = -e^{2U} \rho^I\,,\quad D = \ft3{16} e^{2U} ( 3 r^{-1} U' + U'' - 2 r^{-2} U^{'2}).
\end{equation}
In this case,
\begin{equation}
R = \frac{2 e^{2U}}{r} (3 U' - 3 r U^{'2} +r U'').
\end{equation}
Again, it can be checked that the higher derivative corrections to the equations of motion vanish. Therefore,
the electric black holes are not modified by the inclusion of the Ricci scalar squared invariant.
\section{Conclusions}\label{section: conc}
In this paper, using superconformal tensor calculus, we have completed all off-shell curvature squared invariants in $D=5,\, \mathcal{N}=2$ supergravity based on the dilaton Weyl multiplet, for both the minimal and the vector multiplets coupled cases. The complete set of minimal curvature squared invariants consists of
\begin{eqnarray}
\alpha \mathcal{L}_{{\rm Riem}^2}^D + \beta \mathcal{L}^D_{C^2 + \ft16 R^2} + \gamma \mathcal{L}_{R^2}^D \,,
\end{eqnarray}
and the complete vector multiplets coupled curvature squared invariants take the form
\begin{eqnarray}
\mathcal{L}_{{\rm Riem}^2}^{'D} + \mathcal{L}^{'D}_{C^2 + \ft16 R^2} + \mathcal{L}_{R^2}^{'D} \,.
\end{eqnarray}
Adopting the standard Weyl multiplet, we also constructed an off-shell Poincar\'e supergravity by using the linear and vector multiplets as compensators, as well as a supersymmetric Ricci scalar squared invariant coupled to $n$ vector multiplets. In the standard Weyl multiplet, the curvature squared extended model is thus generalized to take the form
\begin{eqnarray}
\mathcal{L}_{R}^S + \mathcal{L}^S_{C^2} + \mathcal{L}_{R^2}^S \,.
\end{eqnarray}
It is known that the gauged two-derivative Poincar\'e supergravity possesses a maximally supersymmetric $AdS_5$ vacuum solution. When the Ricci scalar squared invariant is included, the very special geometry defined on the moduli space gets modified in a very simple way. Finally, we studied the effects of the Ricci scalar squared invariant on the supersymmetric magnetic string and electric black hole solutions, which are the 1/2 BPS solutions of the ungauged two-derivative theory. It is found that neither the magnetic string nor the electric black hole solutions get modified by the supersymmetric completion of the Ricci scalar squared.
A comparison between the results in the dilaton Weyl multiplet and the standard Weyl multiplet leads to a natural question: what is the analogue of the supersymmetric Riemann squared invariant in the standard Weyl multiplet? At this moment, we do not know the exact answer. However, if such an invariant exists, one should be able to recover the Riemann squared invariant based on the dilaton Weyl multiplet from the Riemann squared invariant based on the standard Weyl multiplet by using the map (\ref{DVMap}). This argument constrains the form of the supersymmetric Riemann squared invariant in the standard Weyl multiplet.
The modification of the very special geometry around the $AdS_5$ vacuum by the Ricci scalar squared invariant is very intriguing, in contrast to the supersymmetric completion of the Weyl tensor squared, which does not affect the definition of the very special geometry in the $AdS_5$ vacuum. Interpretation of the modified very special geometry from string/M-theory and its application in the context of the AdS/CFT correspondence deserve future investigation. Finally, our procedure for the construction of the Ricci scalar squared invariant can be straightforwardly generalized to $D=6, \, \mathcal{N} =(1,0)$ supergravity \cite{op2}.
\section*{Acknowledgements}
We thank Ergin Sezgin for discussions and Bernard de Wit for bringing the interesting paper \cite{deWit:2006gn} to our attention. Y.P. is supported in part by DOE grant DE-FG03-95ER40917. Y.P. would also like to thank CHEP at Peking University and the Department of Physics and Astronomy at Shanghai Jiao Tong University for their hospitality, where this work was completed.
\section{Introduction}
Gravitational wave detections of coalescing binary black holes
(\cite{TheLIGOScientific:2016pea, Abbott:2016blz, Abbott:2016nmj, Abbott:2017vtc})
with the Advanced LIGO detectors
(\cite{Abramovici:1992ah, Harry:2010zz, Aasi:2013wya, TheLIGOScientific:2014jea, TheLIGOScientific:2016agk})
have opened up a new tool to test modified theories of gravity.
A number of such tests have already been performed looking for generic deviations from Einstein's general relativity (GR),
and so far the data has been found consistent with GR
(\cite{TheLIGOScientific:2016pea, Abbott:2016blz, TheLIGOScientific:2016src, Abbott:2017vtc, Abbott:2016bqf}).
Merging black hole observations are particularly suited to testing theories that deviate from Einstein's GR in the near black hole horizon regime.
In a certain sense, colliding black holes are the ideal testing ground for such models,
but because of a lack of definite predictions, little is known about how these tests impact specific modified theories.
One such theory that proposes to modify the near-horizon physics is pseudo-complex general relativity (pc-GR) (\cite{Hess:2008wd}).
This theory proposes a pseudo-complex generalisation of GR and leads to a number of novel predictions.
Ray-tracing of light rays in this geometry has been previously calculated in \cite{Schonenbach:2013nya}
with an eye to comparing predictions to observations from the Event Horizon Telescope.
Simulations of accretion disks have been carried out for comparison with X-ray observations of accreting systems (\cite{Hess:2015hpc}),
in particular of features associated with the innermost stable circular orbit (ISCO).
Further studies have looked at the effect on gravitational redshift and frame-dragging (\cite{Schonenbach:2012mw}).
Most of these studies relate to comparisons with future precision observational data.
One of the features believed to be associated with pc-GR is a strong modification of near-horizon physics.
In fact it has been claimed that pc-GR predicts there are no black holes
because of a modification of the near-horizon gravitational field (\cite{Hess:2010pba}).
This makes gravitational wave observations of merging black holes ideal observations to test such a theory.
Here we will directly compare the theory with the gravitational wave observation of GW150914
(\cite{Abbott:2016blz, Abbott:2016bqf})
and with bounds on modifications from GR established by Advanced LIGO's first detections
(\cite{TheLIGOScientific:2016pea, Abbott:2016blz, TheLIGOScientific:2016src, Abbott:2016nmj, Abbott:2017vtc}).
This has previously been investigated in \cite{Hess:2016gmh}, where it was claimed that
pc-GR implies that the coalescence which produced GW150914
may have had a chirp mass much larger than that reported by the LIGO team,
and may have occurred at a much greater luminosity distance.
Here we will re-examine this interpretation and show how existing gravitational wave observations
are able to constrain the free parameters of pc-GR
and even rule out certain parameter ranges that allow horizonless objects.
\section{Model and spacetime metric}
In any metric theory of gravity, including pc-GR, the spacetime of an isolated, spinning, stationary object is likely to be axisymmetric.
By the requirements of theorem 7.1.1 of (\cite{wald2010general}) such a stationary, axisymmetric metric can be put in the form
\beq \label{generalKerr}
\dd s^2 = g_{tt}\dt^2 + 2g_{\phi t} \dt \dphi + g_{\phi\phi}\dphi^2 + g_{\rho\rho}\drho^{2} + g_{zz}\dz^{2} ~,
\eeq
where only four of the metric functions are independent since $\rho^{2} = g^2_{t\phi} - g_{tt}g_{\phi\phi}$,
and all metric functions are only functions of the coordinates $\rho$ and $z$.
The coordinates $t$ and $\phi$ are adapted to the stationary and axisymmetric symmetries and the coordinate $z$ can be replaced with a zenith-angle coordinate $\theta$.
If we furthermore assume that the spacetime has a reflection symmetry about an equatorial plane, $\theta=\frac{\pi}{2}$,
then all metric functions are guaranteed to satisfy $\partial_{\theta}g_{ab}=0$ on this equatorial plane.
This general form of the metric will hold in any metric theory of gravity with these symmetries
and is a purely geometrical result, before any theory-specific equations of motion have been solved.
In particular,
it contains the Kerr-Newman class of electrovacuum solutions in GR as well as a host of other known solutions.
To obtain the functional form of the metric functions in a specific theory we should solve the equations of motion.
This has already been done within pc-GR (\cite{Caspar:2012ux}) and we adopt the solution here without modification:
\ba
g_{tt} &=& - \left( 1 - \frac{\psi}{\Sg} \right) ~~~, ~~
g_{rr} = \frac{\Sg}{\Delta} ~~~, ~~~
g_{\theta\theta} = \Sg ~, \nonumber \\
g_{\phi\phi} &=& \left( \left( r^2 + a^2 \right) + \frac{a^2\psi}{\Sg} \sin^2\theta \right) \sin^2\theta ~, \nonumber \\
g_{t\phi} &=& g_{\phi t} = - a \frac{\psi}{\Sg} \sin^2\theta ~,
\label{metric}
\ea
with $\Sg = r^2 + a^2 \cos^2 \theta$ and $\Delta = r^2 + a^2 - \psi(r)$.
If the function $\psi(r)$ is chosen to satisfy $\psi =2Mr$
then this solution is just the Kerr solution of vacuum GR with mass $M$ and specific angular momentum $a = \chi M$.
However, pc-GR allows for $\psi(r)$ to be a more general function,
with the form adopted in \cite{Caspar:2012ux} being $\psi = 2 m(r) r$, where
\ba
m(r) = M - \frac{B}{2r^{n}} = M \, g(r) ~, ~~
g(r) = \left[ 1 - b \left(\frac{M}{r}\right)^{n} \, \right] .
\label{modified mass}
\ea
Here $b$ is a new dimensionless parameter for the pc-GR modification. Its value in GR is zero.
The ultimate provenance of this free parameter in pc-GR is a term in the action variation for the theory,
which, following a proposal of \cite{Schuller:2002ma}, is taken to lie exclusively in one half of the kinematical pseudo-complex algebra.
The freedom to choose $\psi(r)$ is a restricted form of the four metric-function freedom in the more general metric (\ref{generalKerr}).
In fact, in the non-rotating case with $g_{t\phi}=0$,
these solutions are examples of a restricted class of dirty black holes studied previously in Einstein's GR (\cite{Visser:1992qh});
of particular relevance to the current work is their quasi-normal mode behaviour (\cite{Medved:2003rga}).
We will not discuss further here the theoretical motivation for including such a general function,
but merely adopt this as a model to be constrained by data.
The parameter $b$ can be chosen large enough such that the solutions do not admit Killing horizons,
although values less than this critical value are not in principle ruled out by the theory.
The Killing horizons of the Killing vector field
$k^{a} = \delta^{a}_{t} - \Omega\delta^{a}_{\phi}$
occur at the solutions of $r^{2}+a^{2}-2m(r)r =0$ where $\Omega = g_{t\phi}/g_{\phi\phi}$.
No horizons exist when this equation does not have real positive solutions, which occurs when $b$ is greater than
\beq b_{\mathrm{maxH}} = \rmin^{n}\left( 1 - \frac{\chi^{2}}{2\rmin} - \frac{\rmin}{2}\right) ~, \label{bmaxH} \eeq
where $\rmin = (n+\sqrt{n^2-(n^2-1)\chi^2})/(n+1)$.
Thus for sufficiently large $b$, black holes can be said not to exist (\cite{Hess:2010pba}).
This limiting value is largest when $\chi =0$, and in the $n=2$ case takes the value $16/27$.
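As a worked check of this value (using only equation (\ref{bmaxH})): for $\chi=0$ and $n=2$,
\beq
\rmin = \frac{2n}{n+1} = \frac{4}{3} ~,~~
b_{\mathrm{maxH}} = \left(\frac{4}{3}\right)^{2}\left( 1 - \frac{2}{3}\right) = \frac{16}{27} ~.
\eeq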
As written, the spacetime metric is still in general singular,
and for $a=0$ contains a curvature singularity at $r=0$,
as evidenced by the value of the Kretschmann scalar
\ba
\frac{r^{6}}{4}R_{abcd}R^{abcd} = \hspace{5cm} & & \nonumber \\
m'' r^{2} (4m-4m'r +r^{2}m'') + 8m'r(m'r-2m)+12m^{2} ~. & &
\ea
It is clearly the intention of the original authors of the model
that such singularities should be regularised by some effect (\cite{HessPrivate})
and we take the spacetime of equation (\ref{metric}) as a working model only away from such singularities.
Equatorial circular orbits in the general spacetime (\ref{generalKerr}) have four-velocities, $u^{a}$, given by
\beq
u^{a} = \frac{\dt}{\dlam}\delta^{a}_{t} + \frac{\dphi}{\dlam}\delta^{a}_{\phi} ~,
\eeq
where $\lam$ parameterises the orbital path and can be chosen to be the proper time in the case of timelike orbits.
For these orbits to be geodesic, bound only by gravity,
we should in addition solve the geodesic equation $u^{a}\nabla_{a}u^{b} = 0$.
In the spacetime (\ref{generalKerr}), the only non-trivial equation of the four geodesic equations is the $r$-component.
This condition suffices to determine the functions $\frac{\dt}{\dlam}$ and $\frac{\dphi}{\dlam}$
up to an overall normalisation as a function of the $r$ coordinate.
Since the orbital frequency observed asymptotically is given by $\w = \dphi / \dt$, the geodesic equation gives
\beq \label{omega_geodesic}
\w_{\pm} =
\frac{-g_{t\phi}' \pm \sqrt{g_{t\phi}'^{2}-g_{\phi\phi}' g_{tt}'}}{g_{\phi\phi}'} ~,
\eeq
where $'$ denotes an $r$-derivative.
Therefore, using the metric components of (\ref{metric}) the geodesic equation can be written as
\beq \label{mgeodesic} (\w \, a - 1)^2(m - m' r) - \w^2 \, r^3 = 0 ~. \eeq
In the limit of $m'=0$ this gives the expected behaviour for the Kerr spacetime (\cite{Bardeen:1972fi}),
and in the further Schwarzschild limit of $a=0$ it reduces to the familiar Kepler-like relation between $r$ and $\w$ (\cite{Abbott:2016bqf}).
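As an illustrative numerical sketch (not part of the original analysis), equation (\ref{mgeodesic}) is a quadratic in $\w$ and can be solved directly for the orbital frequency at a given radius; the snippet below assumes units $G=c=M=1$ and the mass function (\ref{modified mass}), and the function name is ours, purely for illustration.
\begin{verbatim}
# Illustrative sketch only (units G = c = M = 1): solve the circular
# geodesic condition (w a - 1)^2 (m - m' r) - w^2 r^3 = 0 for the
# prograde orbital frequency w at radius r, with m(r) = 1 - b r^(-n).
import numpy as np

def orbital_frequency(r, a=0.0, b=0.0, n=2):
    m = 1.0 - b * r**(-n)
    mp = n * b * r**(-(n + 1))          # m'(r)
    F = m - mp * r
    # Expanding the condition gives the quadratic A w^2 + B w + C = 0:
    A = a**2 * F - r**3
    B = -2.0 * a * F
    C = F
    disc = np.sqrt(B**2 - 4.0 * A * C)
    return (-B - disc) / (2.0 * A)      # prograde branch

# Check: b = 0, a = 0 recovers the Kepler relation w = r^(-3/2).
print(orbital_frequency(6.0), 6.0**-1.5)
\end{verbatim}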
\section{Post-merger: ringdown}
The single body metric (\ref{metric}) lends itself immediately to calculations of test mass orbital properties,
needed for studies of ray tracing, accretion disks or ringdown frequencies.
Objects compact enough to support circular photon orbits are expected
to have ringdown frequencies approximated by the gravitational wave frequency
of a massless test particle at this ``light ring'' (LR) (\cite{Cardoso:2016rao}).
For the orbits discussed above to be null, we require additionally that $u^{a}u_{a}=0$.
This is equivalent to requiring
\beq \label{omega_null}
\w_{\pm} = \frac{-g_{t\phi} \pm \sqrt{g_{t\phi}^{2}-g_{\phi\phi} g_{tt}}}{g_{\phi\phi}} ~.
\eeq
Using the metric components of (\ref{metric}) the null condition can be rewritten as
\beq \label{mnull} 2m - r - 4\w ma + \w^2 (r^3 + ra^2 + 2ma^2) = 0 ~. \eeq
For a circular orbit to be both geodesic and null requires both eqns (\ref{mgeodesic}) and (\ref{mnull}) to be satisfied.
Eliminating $\w$ from these equations gives the location of the light ring.
This will be the root of a polynomial in $r$, in the present case given by
\beq \sqrt{\Delta}(r^3-a^2 F) + a(2r^2m + (r^2+a^2)F) - r\sqrt{rF}g_{\phi\phi} = 0 ~,\eeq
where $\Delta$ and $g_{\phi\phi}$ are functions given in eqn (\ref{metric}) and we have defined $F = m - m' r$ as in \cite{Hess:2016gmh}.
This equation will have a positive real root (and hence a light ring will exist) if its value at a local minimum is negative for positive $r$.
For the static, spherically symmetric case with $-g_{tt}=g_{rr}^{-1} = 1-2m(r)/r$ and $g_{t\phi} = 0$ the equation becomes
\beq r-3m(r) + m' r = 0 ~. \eeq
For the Schwarzschild solution with $m'=0$ this gives the familiar photon sphere at $r=3M$.
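For example, evaluating the null condition (\ref{mnull}) with $a=0$ and $m=M$ at this radius gives the familiar Schwarzschild light-ring frequency,
\beq
\w_{\rm LR}^2 = \frac{r-2M}{r^3}\bigg|_{r=3M} = \frac{1}{27M^2} ~,
\eeq
so that $\w_{\rm LR} = 1/(3\sqrt{3}\,M)$.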
For the mass function (\ref{modified mass}), the existence of a light ring requires $b$ to be less than
\beq b_{\mathrm{maxLR}} = \frac{1}{n(3+n)}\left( \rLR\right)^{n+1} ~, \label{bmaxLR} \eeq
where $\rLR=3n/(n+1)$.
Thus for the non-rotating case, when $n>0$, if there is a horizon then there is also a photon sphere.
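As a check in the non-rotating $n=2$ case,
\beq
\rLR = \frac{3n}{n+1} = 2 ~,~~
b_{\mathrm{maxLR}} = \frac{1}{2\cdot 5}\, 2^{3} = \frac{4}{5} ~,
\eeq
which indeed exceeds $b_{\mathrm{maxH}} = 16/27$, consistent with the statement above.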
With the location of the light ring,
the frequency of an orbiting null test mode can be calculated using either eqn (\ref{omega_geodesic}) or eqn (\ref{omega_null}).
Fig (\ref{fig:ringdown}) shows the mass and spin values that the spacetime described by equation (\ref{metric}) needs to have
in order to have a light-ring frequency of 250 Hz.
This frequency is broadly consistent with the ringdown frequency of GW150914 (\cite{TheLIGOScientific:2016src}).
It can be seen from the figure that for values of $b$ greater than zero,
either a higher mass or a lower spin is needed to obtain the same light-ring frequency as in general relativity.
\begin{figure}
\includegraphics[width=\columnwidth]{ringdown_plot}
\caption{The final mass and spin values required for the ringdown frequency, estimated here from the light-ring orbital frequency, to take the value 250\,Hz.
Values of $b$ greater than zero require either a higher mass or a lower spin than the $b=0$ values predicted by GR.}
\label{fig:ringdown}
\end{figure}
For values of $b$ greater than $b_{\mathrm{maxLR}}$ there is no light ring.
In this case the ringdown of the object will be dominated by its intrinsic quasi-normal modes and will look radically different from the damped-sinusoid ringdown associated with the light ring.
The exact calculation of this behaviour will likely require a numerical solution of the field equations of pc-GR and is beyond the scope of this paper.
\section{Pre-merger: inspiral}
In order to solve the two-body problem for the orbital motion of two nearly equal mass objects,
more is needed than just the one-body metric (\ref{metric}).
In Newtonian gravity the two-body problem of bound gravitational orbits is solved by the Keplerian orbits.
These Keplerian orbits, along with the Einstein quadrupole formula for gravitational wave emission,
can be used to infer basic properties of the binary source of GW150914 (\cite{Abbott:2016bqf}).
Beyond this Newtonian order,
post-Newtonian (PN) corrections to the orbits can be calculated,
which impact the gravitational wave signal.
These are usually regulated (\cite{Cutler:1994ys, Blanchet:1995ez, Blanchet:2013haa})
by the PN parameter $x \sim \left( v/c \right)^2$,
the dimensionless spins,
and the mass ratio $q = M_1/M_2$,
with $M = M_1+M_2$ the total mass
and $\mu = M_1 M_2 / M = M q / (1+q)^2$ the reduced mass.
To these we now add the parameter $b$ which regulates the relative strength of the modification to the function $\psi$.
In the Newtonian and post-Newtonian regimes, $x \sim M / r \sim (M\w)^{2/3}$.
We thus see from the factor $g(r)$ in (\ref{modified mass}) that every appearance of $b$ is suppressed by at least $n$ PN orders.
We shall approximate how the leading order correction to the Newtonian frequency and phase evolutions depends on the modification,
and find the $b$ dependent post-Newtonian term to leading order in $b$.
In the wave zone we expect the same relation between the metric perturbation and the source quadrupole as in GR,
and so expect the same wave polarizations and multipole decomposition.
We therefore treat the generation of gravitational waves
as governed by a quadrupole formula
(\cite{Einstein:1918btx,Blanchet:1995ez,Blanchet:2013haa,Abbott:2016bqf}),
\beq
\label{quadrupole}
\dot{E}_{\rm GW}
= - \frac{32}{5} \frac{G}{c^5} \mu^2 \, r^4 \, \w^6 \, g^\varrho(r)~,
\eeq
where we allow for a possible deviation from the GR quadrupole formula
with a subleading term $g^\varrho(r)$.
Examining the energy carried by the waves far away suggests adopting $\varrho=0$,
while we note that \cite{Hess:2016gmh} uses $\varrho=1$ for regulating this emission.
The output of gravitational waves drains the orbital energy of the system,
which is assumed to descend through quasi-circular orbits.
To leading order in $b$ this orbital energy is
\ba
\label{Eorbital}
E_{\rm orb} &=& -\frac{G m_1(r) m_2(r) } {2r} \nonumber \\
&=& -\frac{G M \mu } {2r} \left[ 1
- b \left( \! \frac{M}{r} \! \right)^{\!n} \!\! Q
+ b^2 \left( \! \frac{M}{r} \! \right)^{\!2n} \!\! Q_2
\right],~~~~~~
\ea
where $Q = \frac{1 \!+\! q^n} {\left( 1 \!+\!q \right)^n}$,
and $Q_2 = \frac{q^n} {\left( 1 \!+\!q \right)^{2n}} \mathfrak{f}(n,\eps_1, \eps_2)$
depend on the mass ratios and distributions\footnote{
The form-factor
\ba
\mathfrak{f}(n, \eps_1, \eps_2) = n \sum_{k=1}^{n\!+\!2} (-1)^k
\frac{\Gm(n\!+\!k\!-\!2)\,\Gm(n\!+\!2\!-\!k)}{\Gm(n\!+\!2)\,\Gm(n\!-\!1)} \cdot ~~~~~~~~ \nonumber\\
\cdot
\left. \left[ x^{-(n\!+\!2\!-\!k)}
\sum_{m=0}^1
(-1)^{m(n-1)} (x\!+\!(-1)^m)^{-(n\!+\!k\!-\!2)}
\right] \right|^{1-\eps_2}_{\eps_1}
\!\!\!\!\!
\ea
also generally depends on how the singularities near $r=0$ are regularized.
In the simplest model, $\mathfrak{f}(n, \eps_1, \eps_2) = 1$.
}.
We note that $Q=1,\,Q_2=0$ corresponds to the model of \cite{Hess:2016gmh}.
We also introduce for the deviation from GR the shorthand
\beq
\gt = 1 - g(r) = b \left( \! \frac{M}{r} \! \right)^{\!n} ~~,~~
\frac{\dd \gt}{\dd r} = -\frac{n}{r}\gt ~,
\eeq
such that to leading order in the deviation,
\beq
\label{Kepler}
\frac{\dd E_{\rm orb}}{\dr}
= \frac{G M \mu }{2r^2}
\left[ 1 - (n\!+\!1) \, Q \, \gt \, \right] \, ,
\eeq
which can be used in the energy balance equation
$\dot{E}_{\rm GW} = \dot{E}_{\rm orb} = {E}_{\rm orb}'\dot{r}$
to find
\beq
\label{energy balance short}
- \frac{32}{5} \frac{G}{c^5} \mu^2 \, r^4 \, \w^6 \, g^\varrho(r) = \frac{G M \mu }{2r^2}
\left[ 1 - (n\!+\!1) \, Q \, \gt \, \right] \dot{r} ~.
\eeq
We note that for large $b$, the gravitational well may have a minimum
at finite $r = M \sqrt[n]{(n\!+\!1)\,b\,Q}$,
at which point the energy balance approximation fails.
Before reaching there,
the orbital angular velocity $\w$ can be eliminated
from the expression (\ref{energy balance short}) by noting that,
for quasi-circular orbits,
there is a relation between $\w$ and $r$ given similarly to Kepler's third law by
\beq
\label{Kepler2}
\w^2 = \frac{G M }{r^3} \left[ 1 - (n\!+\!1) \, Q \, \gt \, \right] \, .
\eeq
Thus we find from equation (\ref{energy balance short}) that
\beq
\label{energy balance full r}
\dot{r} = - \frac{64}{5} \frac{G^3 M^2 \mu}{r^3 c^5} \,
\left[ 1 - (n\!+\!1) \, Q \, \gt \, \right]^2 \, g^\varrho(r) ~,
\eeq
an equation which can be solved numerically for the orbit.
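For illustration only, this decay equation can be integrated numerically; the minimal sketch below assumes the equal-mass, non-spinning fiducial case ($q=1$, so $Q=1/2$), $n=2$, $\varrho=0$ and $b=16/27$, in units $G=c=1$, and is not part of the analysis of this paper.
\begin{verbatim}
# Minimal sketch: integrate eqn (energy balance full r) for r(t), then
# recover the orbital frequency from eqn (Kepler2).  Units G = c = 1.
import numpy as np
from scipy.integrate import solve_ivp

M, mu, n, b, rho = 1.0, 0.25, 2, 16.0/27.0, 0
Q = 0.5                                   # (1 + q^n)/(1 + q)^n for q = 1, n = 2

def rdot(t, r):
    gt = b * (M / r)**n                   # the deviation \tilde g
    return (-(64.0/5.0) * M**2 * mu / r**3
            * (1.0 - (n + 1) * Q * gt)**2 * (1.0 - gt)**rho)

def hit_light_ring(t, r):                 # stop the integration near merger
    return r[0] - 3.0 * M
hit_light_ring.terminal = True

sol = solve_ivp(rdot, (0.0, 2.0e4), [20.0 * M],
                events=hit_light_ring, max_step=10.0)
r = sol.y[0]
omega = np.sqrt(M / r**3 * (1.0 - (n + 1) * Q * b * (M / r)**n))
\end{verbatim}
The resulting $r(t)$ and $\w(t)$ can then be inserted into the amplitude expression of the following subsection.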
\subsection{Amplitude evolution}
The evolution of the amplitude of the gravitational wave from a binary inspiral can be found
by substituting $r(t)$ from eqn (\ref{energy balance full r}) and $\w(t)$ from eqn (\ref{Kepler2})
into the Newtonian order amplitude equation (compare \cite{Hess:2016gmh}'s Eq. (14), with $\varrho=1$)
\beq
A = \frac{4G\mu \w^2 r^2}{d_{L}c^4}g^\varrho(r) ~.
\label{amplitude}
\eeq
Since this relies on several approximations it is not expected to exactly match the amplitude evolution for a real signal.
However, it is known that the 0PN Newtonian amplitude is a good approximation to the full signal in general relativity
up until very close to the merger (\cite{Cutler:1992tc}).
The amplitude evolution in pc-GR and in the 0PN Newtonian approximation is plotted in Fig.~(\ref{fig:amplitude})
against a full inspiral-merger-ringdown waveform model in vacuum general relativity, SEOBNRv2 (\cite{Taracchini:2013rva}).
\begin{figure}
\includegraphics[width=\columnwidth]{amp_evolution}
\caption{The amplitude evolution of the inspiral with $b=0$ to leading order 0PN approximation (red)
and with $b=16/27$ (black) to leading order in the modification, for the $n=2$ case.
These are compared to the general relativity model SEOBNRv2 full gravitational waveform (blue)
for a binary coalescence of near equal mass objects each with zero spin.
The waveform is shown from an initial frequency of 30\,Hz,
similar to the low-frequency cutoff of the Advanced LIGO detectors during their first observation run,
and the amplitudes are matched there.
The 0PN approximation with $b=0$ is seen to match better than the $b=16/27$ approximation,
especially for the later inspiral cycles, just before the peak.}
\label{fig:amplitude}
\end{figure}
It can be seen in the figure that the approximation based on $b=16/27$ begins to deviate noticeably from the GR model waveform several orbits before the peak amplitude.
A comparison of the measured gravitational wave amplitude against the predicted amplitude can constrain the luminosity distance to the source via equation (\ref{amplitude}).
However, the model should match the amplitude evolution for all values of the inspiral expansion parameter $M/r$ as the orbit evolves and the bodies approach one another.
It is not possible to compare amplitudes only at a single value of the expansion parameter as was suggested in \cite{Hess:2016gmh}.
Because pc-GR agrees with GR for sufficiently low values of $M/r$ where several cycles of the waveform are seen, the luminosity distance is required to be broadly consistent with the luminosity distance found by the LIGO team (\cite{Abbott:2016blz}),
and the high redshift values found in \cite{Hess:2016gmh} for GW150914 are not consistent with the data,
even within pseudo-complex general relativity.
\subsection{Phase evolution}
To obtain the phase evolution of the orbital motion and of the gravitational wave,
it is useful to work in the frequency domain in the PN framework (to leading and next-to-leading orders).
Differentiating the Keplerian relation (\ref{Kepler2}) with respect to time yields, after some algebra,
\beq
\label{w-r}
\left[ 3 - n (n\!+\!1) \, Q \, \gt \, \right] \frac{\dot{r}}{r} = -2 \frac{\dot{\w}}{\w} \, .
\eeq
We can then change variables from $r$ to $\w$ using eqns (\ref{Kepler2},\ref{w-r}):
\ba
r &=& \left[\frac{G M }{\w^2} \right]^{1/3}
\left[ 1 - \frac{(n\!+\!1)}{3} \, Q \, \gtw \, \right] \, , \\
\dot{r} &=& -\frac{2}{3}\frac{\dot{\w}}{\w} \, r \, \left[ 1 + \frac{n (n\!+\!1)}{3} \, Q \, \gtw \, \right] \, ,
\ea
with
\beq
\gtw = \tilde{g} \left( r(\w) \right) = b \left( M \w\right)^{2n/3\,} , ~~
g^\varrho(r) = 1- \varrho\,\gtw ~ .
\eeq
Hence instead of (\ref{energy balance full r}) we have
\ba
\label{energy balance full w}
\dot{\w} = \frac{96}{5} \frac{\left(G\Mc\right)^{5/3}}{c^5} \, \w^{11/3}
\left[ 1 - \IBw \right] , \\
\label{energy balance full f}
\dot{f} = \frac{96}{5} \frac{ \pi^{8/3} \left(G\Mc\right)^{5/3}}{c^5} \, f^{11/3}
\left[ 1 - \IBw \right] ,
\ea
as the new chirp equations, with the standard chirp mass $\Mc = \left( M^2 \mu^3 \right)^{1/5}$
and with $\gtw$ and numerical prefactors collected into the modification at $n$-PN $\IBw$,
\beq
\IBw = \left( \frac{(n\!+\!2) (n\!+\!1)}{3} Q + \varrho\right) b \, (M\w)^{2n/3}~.
\eeq
We have also written (\ref{energy balance full f}) in terms of the gravitational wave frequency
$f = \w/\pi$ (twice the orbital frequency),
with $\IBf = \IB (\w=\pi f)$.
We would like to relate this $\IBf$ modification to known bounds on PN coefficients.
We first plug equation (\ref{energy balance full f}) into the integrals for the time and for the phase (compare \cite{Cutler:1994ys})\footnote{
These forms must be trivially modified to apply to $n=2.5,4$
where the integrals for $\phi$ and $t$ respectively give logarithms of $\pi M f$ rather than its powers.
},
\ba
t &=& \!\! t_c + \int \frac{\df}{\dot{f}} \nonumber\\
&=& \!\! t_c - \frac{5 \, c^5 \, \left(\pi f \right)^{-8/3}} {256 \left(G\Mc \right)^{5/3}}
\left[ 1 - \frac{4}{n-4} \IBf \right]
\, ,~~~~~~ \\
\phi &=& \!\! 2\pi \int \! f \, dt = 2\pi \int \!\! \frac{f}{\dot{f}} \df \nonumber\\
&=& \!\! - \frac{c^5}{16 \left(\pi G\Mc f\right)^{5/3}} \left[ 1 - \frac{5}{2n -5} \IBf \right]
\, ,~~~~~~
\ea
and then use these with the stationary phase approximation (\cite{Cutler:1994ys}) to find
\ba
\Psi
&=& \!\!\! 2\pi f t_c - \phi_c - \pi/4 \\
&&\!\!\!+ \frac{3 } {128 \left(\pi G \Mc f \right)^{5/3} }
\left[ 1 + \frac{20}{\left(n -4\right)\left(2n -5\right)} \IBf \right] \, . \nonumber
\ea
This form can be compared directly to the expected PN coefficients in GR of \cite{Buonanno:2009zt}
(following \cite{Iyer:1993xi, Will:1996zj, Blanchet:2000nv}),
and to the limits set on deviations from them
by the observed gravitational waves in the inspiral regime in \cite{TheLIGOScientific:2016pea, TheLIGOScientific:2016src}
(based on \cite{Talmadge:1988qz, Mishra:2010tp, PhysRevD.85.082003}).
This comparison is summarized in Table \ref{PN table},
for the leading pc-GR PN terms for $n=1,2,3$ and the corresponding GR PN phase coefficients of orders $1,2,3$.
All coefficients are calculated for the fiducial equal mass non-spinning case ($q=1$, $a=0$);
the pc-GR coefficients are calculated for the critical $b_{\mathrm{maxH}}$ value of equation (\ref{bmaxH}), where the horizon vanishes,
and for $\varrho=0$.
The table also compares to the 90\% bounds set on the relative deviations $(p_n^{\rm mod-GR} - p_n^{GR})/p_n^{GR}$
established in LIGO's first observation run O1.
This comparison provides independent evidence to show that non-existence of horizons for $n=1$ pc-GR
is inconsistent with observed gravitational wave events in O1,
and the first evidence to rule out their possibility in $n=2$ pc-GR as well.
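For concreteness, the $n=2$ entry can be reproduced directly from the expressions above (with the fiducial values $q=1$, hence $Q=1/2$, and $b=16/27$, $\varrho=0$):
\beq
\IBf = \frac{(n\!+\!2)(n\!+\!1)}{3}\, Q \, b \, (\pi M f)^{2n/3}
= 4 \cdot \frac{1}{2} \cdot \frac{16}{27} \, (\pi M f)^{4/3}
= \frac{32}{27}\, (\pi M f)^{4/3} ~,
\eeq
and the prefactor $20/[(n-4)(2n-5)] = 10$ in the phase then gives $p_2^{\rm pc-GR} = 320/27$, consistent with the tabulated value.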
\begin{table}
\centering \caption{pc-GR PN coefficients}
\begin{center}
\begin{tabular}{ccccccc}
\hline
$n$ & $\rmin$ & $b_{\rm crit}$ & $p_n^{\rm pc-GR}$ & $p_n^{\rm GR}$ & $\delta_\phi$ & $\mathrm{range}(\delta_\phi)$ \\ \hline
1 & 1 & 0.5 & 20/9 & 6.44 & $34\%$ & $(-20\%, 5\%)$ \\
2 & $4/3$ & 16/27 & 320/27 & 46.2 & $26\%$ & $(-130\%, 15\%)$ \\
3 & 1.5 & 27/32 & -225/8 & -652 & $4.3\%$ & $(-100\%, 600\%)$ \\
\hline
\label{PN table}
\end{tabular}
\end{center}
\end{table}
Conversely, the limits on the deviations of PN coefficients can be translated to limits on $b$,
which for $n=1,2,3$ are $|b|\leq 0.85, 2.96, 118$ respectively.
This suggests that $b \left(\pi M f \right)^{2n/3}$ is indeed a small parameter throughout the system's evolution,
and that hence the introduction of the pc-GR modifications should not produce large deviations from the
standard GR post-Newtonian inspiral, and in particular should not largely affect the chirp mass $\Mc$.
For the GW150914 data, estimating the chirp mass directly from $f$ and $\dot{f}$ at different inspiral times
using the Newtonian approximation (0PN, $b=0$)
shows that it remains approximately constant up to a frequency of $\sim 150\,$Hz (\cite{Abbott:2016bqf}),
and is equal to roughly 30 solar masses.
This corresponds to $\left(\pi M f \right)^{2/3} \sim 0.17$,
consistent with treating $\IBf$ only at leading order,
and inconsistent with the much larger modified chirp mass (and correspondingly higher redshift)
estimated in \cite{Hess:2016gmh} from the late part of the orbit.
\subsection{Testing a previous model}
As mentioned through the text, the model of \cite{Hess:2016gmh} can be considered under our formalism
as the case $n=2$, $Q=1$, $Q_2=0$, $\varrho=1$.
For the critical value of $b=16/27$,
this changes the leading pc-GR coefficient $p_2^{\rm pc-GR}$ from $320/27$ to $800/27$,
which changes $\delta_\phi$ from $26\%$ to $65\%$ of the GR value.
As Table \ref{PN table} indicates,
this value is well beyond the range observed and reported in LIGO's O1,
and so this variant of pc-GR also cannot sustain horizonless objects as the sources of LIGO's detections.
\section{Conclusions}
We have shown how the model of pseudo-complex general relativity can be constrained
using gravitational wave observations.
These observations are very much independent of and complementary to observations
with other techniques that have previously been proposed,
such as accretion disk studies and imaging of super-massive black holes.
For two merging compact objects,
gravitational wave observations provide strong constraints on the near horizon behaviour
from both the inspiral phase and the final ringdown after merger.
In particular we have seen that the ringdown phase,
modeled in terms of the light ring structure within pc-GR,
typically requires the final object after merger to be slightly heavier and spinning slightly slower
than is found using Einstein's GR.
However, the Newtonian limit of pc-GR requires the chirp mass
to be broadly consistent with the values found using Einstein's GR,
and this in turn constrains both the total mass and the luminosity distance
to be close to the values inferred under Einstein's general relativity.
We have discussed the model in terms of a dimensionless parameter $b$ and power index $n$ in equation (\ref{modified mass}).
For sufficiently large values of $b$ objects can be horizonless.
We find that horizonless objects in the $n=1$ case are already ruled out,
independently of other Solar System constraints (\cite{Will2006}).
For the critical case with $n=2$ and $b=16/27$ discussed in \cite{Hess:2016gmh}
we find that the final object can only fit the ringdown with a frequency in the range observed in \cite{TheLIGOScientific:2016src} if the spin value is close to zero.
Larger spin values require lower values of $b$ and hence, if $b$ is universal,
allow non-spinning black holes with horizons.
In the inspiral regime, the case of $n=2$ and $b=16/27$ is also in tension with the data.
The model suggests a noticeable fall-off in the amplitude which is not seen in the data (\cite{Abbott:2016bqf}),
and the inspiral phasing is in conflict with detailed fits of the post-Newtonian parameters (\cite{TheLIGOScientific:2016pea}).
For values of $n$ larger than 2 the situation is not so clear.
As the value of $n$ is increased in the model, the corrections of the model are constrained to smaller and smaller distances.
For $n=3$ the leading order correction from pseudo-complex general relativity is a 3PN term and this term is less tightly constrained by current LIGO observations.
Higher terms at 4PN and beyond are not yet fully calculated in general relativity so a direct comparison with these terms is not yet possible.
Our conclusions are only valid to the extent of the approximations that have been made in deriving the model of \cite{Caspar:2012ux}.
The parameters $b$ and $n$ are assumed constant and apply equally to the pre-merger inspiral phase as to the post-merger ringdown.
Little is known about how pseudo-complex general relativity behaves in the highly dynamical merger phase and our results cannot address this part of the LIGO observations.
This is ultimately likely to require, as in the case of vacuum Einstein relativity, numerical solutions to the field equations.
Further work is also required to understand exactly how the features of pseudo-complex coordinates should be implemented in a gravitational theory, but this is beyond the scope of this work.
We have deliberately interpreted the pseudo-complex relativity model slightly differently from \cite{Hess:2016gmh}.
We have explicitly mapped the vacuum solution of pseudo-complex general relativity to an equivalent problem of what would be a non-vacuum spacetime in Einstein relativity.
This enables us to relate the techniques more generally.
Aside from pseudo-complex general relativity,
this analysis can also be applied to dirty black holes,
and a similar analysis of PN effects has been pursued for dark matter minispikes (\cite{Eda:2014kra}).
Thus it is hoped that the techniques here described may find application beyond the specific scope of testing pseudo-complex general relativity.
\section*{Acknowledgements}
We thank P. O. Hess and the LIGO TestingGR working group for useful conversations and correspondence.
\bibliographystyle{mnras}
\section{Introduction}
A classical theorem, due to the combined work of Almgren, Pitts and Schoen--Simon,
asserts that for $n \geq 2$, every $(n+1)$-dimensional closed Riemannian manifold $M$ contains a
minimal hypersurface smoothly embedded away from a closed singular set of Hausdorff
dimension at most $n-7$. The original proof of this theorem is based on a highly
non-trivial geometric min-max construction due to Pitts~\cite{Pitts81},
which extended earlier work of Almgren~\cite{Almgren65}.
This construction is carried out directly for
the area functional on the space of hypersurfaces equipped with an appropriate weak
topology, and it yields in the first instance a critical point of area satisfying
a certain almost-minimizing property.
This property is central to the rest of the argument, and allows one
to deduce regularity of the min-max hypersurface from compactness of
the space of uniformly area-bounded stable minimal hypersurfaces
with singular sets of dimension at most $n-7$,
a result proved for $2 \leq n \leq 5$ by Schoen--Simon--Yau~\cite{SchoenSimonYau75} and
extended to arbitrary $n \geq 2$ by Schoen--Simon~\cite{SchoenSimon81}.
(The Almgren--Pitts min-max construction has recently been streamlined by
De Lellis and Tasnady~\cite{DeLellisTasnady13} giving a shorter proof.
However, their argument still follows Pitts' closely and is in
particular based on carrying out the min-max procedure directly
for the area functional on hypersurfaces.)
In recent years an alternative approach to this theorem has been developed, whose philosophy is to push the regularity theory to its limit in order to gain substantial simplicity on the existence part.
Specifically, this approach differs from the original one in two key aspects:
first, it is based on a strictly PDE-theoretic min-max construction that replaces the Almgren--Pitts geometric construction;
second, for the regularity conclusions, it relies on a sharpening of
the Schoen--Simon compactness theory for stable minimal hypersurfaces.
The idea in this approach is to construct a minimal hypersurface as the
limit-interface associated with a sequence of solutions $u = u_{i}$ to the Allen--Cahn equation
\begin{equation}
\label{eq:allen_cahn_equation}
\Delta u -\epsilon_{i}^{-2} W'(u) = 0
\end{equation}
on the ambient space $M$, where $W \! : {\mathbb R} \to {\mathbb R}$
is a fixed double-well potential with precisely two minima at $\pm 1$ with $W(\pm1) = 0$.
Roughly speaking, if the $u_{i}$ solve \eqref{eq:allen_cahn_equation} and satisfy appropriate bounds, then the level sets of $u_i$
converge as $\epsilon_{i} \to 0^{+}$ to a stationary codimension
$1$ integral varifold $V$.
This fact was rigorously established by
Hutchinson--Tonegawa~\cite{HutchinsonTonegawa00}, using in part methods inspired
by the earlier work of Ilmanen~\cite{Ilmanen93} in the parabolic setting.
Note that $u_{i}$ solves \eqref{eq:allen_cahn_equation} if and only if
it is a critical point of the Allen--Cahn functional
\begin{equation}
E_{\epsilon_{i}} (u)
= \int_U \epsilon_{i} \frac{\abs{\nabla u}^{2}}{2} + \frac{W(u)}{\eps_i}.
\end{equation}
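As a concrete one-dimensional illustration (assuming the standard quartic choice $W(u) = \tfrac{1}{4}(1-u^{2})^{2}$, which is one admissible double-well potential), the heteroclinic profile
\begin{equation}
u(x) = \tanh\Big(\frac{x}{\sqrt{2}\,\epsilon_{i}}\Big)
\end{equation}
satisfies $\epsilon_{i}^{2} u'' = u^{3} - u = W'(u)$, and its energy density concentrates on the interface $\{x = 0\}$ as $\epsilon_{i} \to 0$, with $\int_{-1}^{1}\sqrt{W(s)/2}\,\intdiff s = \tfrac{\sqrt{2}}{3}$ in this normalisation.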
If the solutions $u_{i}$ are additionally assumed stable with respect to
$E_{\epsilon_{i}}$, then Tonegawa and Wickramasekera \cite{TonegawaWickramasekera10}
proved that the resulting
varifold $V$ is supported on a hypersurface smoothly embedded away from a closed
singular set of Hausdorff dimension at most $n-7$, using an earlier result of
Tonegawa \cite{Tonegawa05} which established the stability of the regular part
$\reg V$ with respect to the area functional.
Their proof of this regularity result uses the regularity and compactness theory
for stable codimension 1 integral varifolds developed by
Wickramasekera~\cite{Wickramasekera14} sharpening the Schoen--Simon theory.
Stability of $u_i$ means that the second variation of the Allen--Cahn functional
$E_{\eps_i}$ with respect to $H^1(M)$ is a non-negative quadratic form.
More generally the index $\ind u_i$ denotes the number of strictly negative
eigenvalues of the elliptic operator $L_i = \Delta - \eps_i^{-2}W''(u_i)$,
so that $u_i$ is stable if and only if $\ind u_i = 0$.
Using min-max methods for semi-linear equations,
Guaraco~\cite{Guaraco15} recently gave a simple and elegant construction
of a solution $u_{i}$ to \eqref{eq:allen_cahn_equation}
with $\ind u_{i} \leq 1$ and $\norm{u_i}_{L^\infty} \leq~1$,
and such that as $\epsilon_{i} \to 0$, the energies
$E_{\epsilon_{i}}(u_{i})$ are bounded above and below away from 0.
The lower energy bound
guarantees that the resulting limit varifold $V$ is non-trivial.
Since $\ind u_i \leq 1$, $u_i$ must be stable in at least one of
every pair of disjoint open subsets of $M$; similarly if
$\ind u_i \leq k$ then $u_i$ must be stable in at least one of every
$(k+1)$-tuple of disjoint open sets (otherwise test functions supported in the
respective sets would span a $(k+1)$-dimensional subspace on which the second
variation is negative definite). This elementary observation,
originally due to Pitts in the context of minimal surfaces, together
with a tangent cone analysis in low dimensions, allowed Guaraco
to deduce the regularity of $V$ from the results of \cite{TonegawaWickramasekera10}.
More recently still, Gaspar and Guaraco~\cite{GasparGuaraco2016}
have used $k$-parameter min-max methods to produce sequences of
critical points with Morse index at most $k$, for all positive
integers~$k$.
Our results show that this index bound is inherited by
the minimal surface arising as $\eps_i \to 0$, provided
it has a trivial normal bundle.
We also point out that the regularity follows
in all dimensions from the corresponding result in the stable case via
an inductive argument that avoids the tangent cone analysis used in~\cite{Guaraco15}.
\begin{cor*}
Let $M$ be a closed Riemannian manifold of dimension $n+1 \geq 3$.
Let $V$ be the integral varifold arising as the limit-interface of
the sequence $(u_i)$ of solutions to \eqref{eq:allen_cahn_equation}
constructed in~\textup{\cite{Guaraco15}}
(respectively in~\textup{\cite{GasparGuaraco2016}} using $k$-parameter min-max methods).
Then $\dimh \sing V \leq n-7$. If $\reg V$ is two-sided,
then its Morse index with respect to the
area functional satisfies $\ind_{\calH^n} \reg V \leq 1$
(respectively $\ind_{\calH^n} \reg V \leq k$).
\end{cor*}
In min-max theory, one generally expects that the Morse index of the constructed critical point is no greater than the number of parameters
used in the construction.
The above corollary gives this result for the constructions of Guaraco and
Gaspar--Guaraco, provided the arising hypersurface is two-sided.
This was recently shown by Chodosh and Mantoulidis~\cite{ChodoshMantoulidis2018}
to hold automatically when the ambient manifold $M$
has dimension $3$ and is equipped either with a bumpy metric or has positive
Ricci curvature. Building on work of Wang and Wei~\cite{WangWei2017}, Chodosh--Mantoulidis
prove curvature and strong sheet separation estimates, and use these
to deduce that in this three-dimensional setting the convergence of the level
sets occurs with multiplicity $1$.
They moreover show that in all dimensions, if the limiting surface has
multiplicity $1$, then its index is bounded \emph{below} by the index
of the $u_\eps$.
This complements our upper bound for the index, which is a direct consequence
of a lower bound for $(\lambda_p)$, the spectrum of the elliptic
operator $L_V = \Delta_V + \abs{A}^2 + \Ric_M(\nu,\nu)$---the \emph{scalar
Jacobi operator}---in terms of $(\lambda_p^i)$, the spectra of the operators $(L_i)$.
Establishing this spectral lower bound is our main result.
\begin{thm*}
Let $M$ be a closed Riemannian manifold of dimension $n+1 \geq 3$.
Let $V$ be the integral
varifold arising from a sequence $(u_i)$ of solutions to
\eqref{eq:allen_cahn_equation} with $\ind u_i \leq k$ for some $k \in \NN$.
Then $\dimh \sing V \leq n-7$ and
\begin{enumerate}[font = \upshape, label = (\roman*)]
\item $\lambda_p(W) \geq \limsup_{i \to \infty} \lambda_p^i(W)$ for all
$W \subset \subset M \setminus \sing V$ and $p \in \NN$, \label{item:intro_thm_spec_lower_bd}
\item $\ind_{\calH^n} C \leq k$ for every two-sided connected component
$C \subset \reg V$.
\end{enumerate}
\end{thm*}
\begin{rem*}
The spectral lower bound of~\ref{item:intro_thm_spec_lower_bd} also holds
if the assumptions on the $u_i$ are weakened in a spirit similar to the work of
Ambrozio, Carlotto and Sharp~\cite{AmbrozioCarlottoSharp2015},
that is if instead of an index upper bound one assumes that for some
$k \in \NN$ there is $\mu \in \RR$ such that $\lambda_k^i \geq \mu$ for all $i$.
(Note that the index bound $\ind u_i \leq k$ is equivalent to
$\lambda_{k+1}^i \geq 0$.)
\end{rem*}
\begin{rem*}
It was recently brought to our attention that a similar result
had previously been proved by Le~\cite{Le2011} in ambient Euclidean space,
under the additional assumption that the convergence to the
limit surface occurs with multiplicity $1$.
Adapting the methods developed in~\cite{Le2011,Le2015}
to ambient Riemannian manifolds, Gaspar generalised our
results to the case where the limit varifold is one-sided,
without any assumption on multiplicity~\cite{Gaspar2017}.
Their general approach is similar to ours but subtly different,
in that they instead consider the second \emph{inner}
variation of the Allen--Cahn functional; see also the recent
work of Le and Sternberg~\cite{LeSternberg2018}, where similar bounds are established
for other examples of eigenvalue problems.
\end{rem*}
For the minimal hypersurfaces obtained by a direct min-max
procedure for the area functional on the space of hypersurfaces (as in the Almgren--Pitts construction), index bounds have recently been established
by Marques and Neves~\cite{MarquesNeves15}.
Both the Almgren--Pitts existence proof and the Marques--Neves proof of
the index bounds are rather technically involved; in particular, the
min-max construction in this setting has to be carried out in a bare-handed
fashion in the absence of anything like a Hilbert space structure.
In contrast, in the approach via the Allen--Cahn functional, Guaraco's
existence proof is strikingly simple, and our proofs for the spectral
bound and the regularity of $V$ are elementary bar the fact that
they rely on the highly non-trivial sharpening of the
Schoen--Simon regularity theory for stable hypersurfaces as
in~\cite{Wickramasekera14}.
\begin{outline}
In Section~\ref{sec:main_statement} we briefly expose notions from the theory of
varifolds, set the context for the rest of the paper and give the statements
of the main result and its corollaries.
Their proof requires a number of preliminary results, which are contained in
Section~\ref{section:preliminary_results}.
The proof of the main result (Theorem~\ref{thm:main_thm_propagation_of_index_bounds})
is in Section~\ref{sec:proof_main_thm}, and is split into two parts:
in the first part we prove the spectral lower bound by an inductive
argument on $\ind u_i$; this immediately implies the
index upper bound. The proof of $\dimh \sing V \leq n-7$ is given in the second
part, and uses a similar inductive argument.
There are two appendices: Appendix~\ref{app:measure_function_convergence} contains
two elementary lemmas from measure theory that are used repeatedly in
Section~\ref{section:preliminary_results}.
Appendix~\ref{app:second_fundamental_form_coordinate_expression}
gives a proof of Proposition~\ref{prop:weak_convergence_second_ff},
which is a straightforward adaptation of an argument used by
Tonegawa for the stable case.
\end{outline}
\begin{ackn}
I would like to thank my PhD supervisor Neshan Wickramasekera for his
encouragement and support, and Otis Chodosh for helpful conversations.
This work was supported by the UK Engineering and Physical Sciences Research Council
(EPSRC) grant EP/L016516/1 for the University of Cambridge Centre for Doctoral
Training, the Cambridge Centre for Analysis.
\end{ackn}
\section{Varifolds, stability and statement of main theorem}
\label{sec:main_statement}
The setting is as follows:
$(M^{n+1},g)$ is a closed (that is, compact without boundary) Riemannian manifold
of dimension $n+1 \geq 3$, and $U \subset M$ is an arbitrary
open subset, possibly equal to $M$ itself.
\subsection{Varifolds: basic definitions}
An $n$-dimensional \emph{varifold} in $U$ is a Radon measure
on the Grassmannian $G_n(U) = \{(p,E) \mid p \in U, E \subset T_pM,
\dim E = n\}$---the space of $n$-dimensional planes over points in $U$.
An important subclass are the \emph{integral varifolds},
which correspond to a pair $(\Sigma,\theta)$ of a countably $n$-rectifiable set
$\Sigma \subset U$ and a Borel-measurable function $\theta \in L_{\mathrm{loc}}^1(\Sigma,\NN)$ via
\begin{equation}
V_{\Sigma,\theta}(\phi) = \int_U \phi(x,T_x\Sigma) \theta(x) \dH^n(x)
\quad \text{for all $\phi \in C_c(G_n(U))$},
\end{equation}
where $T_x\Sigma$ is the $\calH^n$-a.e. defined tangent space to $\Sigma$.
The function $\theta$ is called the \emph{multiplicity function}.
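A basic example: if $\Sigma \subset U$ is a smooth, closed, embedded hypersurface and $\theta \equiv 1$, then
\begin{equation}
V_{\Sigma,1}(\phi) = \int_\Sigma \phi(x,T_x\Sigma) \dH^n(x),
\end{equation}
so that $\norm{V_{\Sigma,1}}$ is the restriction of $\calH^n$ to $\Sigma$, and $V_{\Sigma,1}$ is stationary (in the sense defined below) precisely when $\Sigma$ is minimal; allowing higher integer multiplicities and merely rectifiable $\Sigma$ is what makes this class closed under the varifold convergence introduced next.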
A sequence $(\vi)$ \emph{converges as varifolds} to $V$
if they converge weakly as Radon measures on $G_n(U)$, i.e.\ if
\begin{equation}
\int_{G_n(U)} \phi \dvi \to \int_{G_n(U)} \phi \dv
\quad
\text{for all $\phi \in C_c(G_n(U))$.}
\end{equation}
The \emph{weight measure} $\norm{V}$ of a varifold $V$ is defined by
\begin{equation}
\norm{V}(\phi) = \int_{G_n(U)} \phi(x) \dv(x,S)
\quad
\text{for all $\phi \in C_c(U)$}.
\end{equation}
Consider an arbitrary vector field $X \in C_c^1(U,TM)$ with
flow $(\Phi_t)$. We deform $V$ in the direction
of $X$ by pushing it forward via its flow, that is
\begin{equation}
(\Phi_t)_*V(\phi) = \int_{G_n(U)} \phi(\Phi_t(x),\intdiff \Phi_t(x) \cdot S) J \Phi_t(x) \dv(x,S)
\end{equation}
for all $\phi \in C_c(G_n(U))$,
where $J\Phi_t(x) =
\det (\mathrm{d} \Phi_t(x)^* \circ \mathrm{d} \Phi_t(x)) ^{\frac{1}{2}}$
is the Jacobian of $\Phi_t(x)$.
Differentiating the corresponding weight measures
$\lVert (\Phi_t)_* V \rVert$ yields the \emph{first variation} of $V$:
\begin{equation}
\delta V(X) = \srestr{\frac{\mathrm{d}}{\mathrm{d}t}}{t = 0} \norm{(\Phi_t)_*V}(U).
\end{equation}
When $\delta V(X) = 0$ for all vector fields $X \in C_c^1(U,TM)$, we say
that $V$ is \emph{stationary} in $U$.
By definition, the \emph{regular part} of $V$ is the set of points
$x \in U \cap \spt \lVert V \rVert$ such that in a neighbourhood of $x$,
$\spt \lVert V \rVert$ is smoothly embedded in $M$.
Its complement is the \emph{singular part} of $V$,
denoted $\sing V := U \cap \spt \nv \setminus \reg V$.
For a stationary integral varifold $V$, the Allard regularity theorem
implies that $\reg V$ is a dense subset of $U \cap \spt \lVert V \rVert$
\cite[Ch.~5]{Simon84}.
\subsection{Stability and the scalar Jacobi operator}
Throughout this section $V$ will be a stationary integral $n$-varifold in $U \subset M$.
We call $V$ \emph{two-sided} if its regular part $\reg V$
is two-sided, that is if the normal bundle $NV := N(\reg V)$
admits a continuous non-vanishing section.
When this fails, $V$ is called \emph{one-sided}.
(Recall that when the ambient manifold $M$ is orientable,
then $\reg V$ is two-sided if and only if it is orientable.)
Suppose that $V$ is two-sided, and fix a unit normal vector field
$N \in C^1(NV)$, so that every function $\phi \in C_c^1(\reg V)$
corresponds to a section $\phi N \in C_c^1(NV)$ and vice-versa.
After extending the vector field $\phi N$ to $C_c^1(U,TM)$---the chosen
extension will not matter for our purposes---we can deform $\reg V$
with respect to its flow $(\Phi_t)$.
As $V$ is stationary, the first variation vanishes: $\delta V(\phi N) = 0$.
A routine calculation, the details of which can be found for instance
in~\cite[Ch.~2]{Simon84} shows that the second variation satisfies
\begin{multline}
\label{eq:second_variation_smooth}
\delta^2 V(\phi N) =
\srestr{\frac{\intdiff^2}{\intdiff t^2}}{t = 0} \norm{(\Phi_t)_* V}(U)
= \\ \int_U \abs{\nabla_V \phi}^2 -
( \abs{A}^2 + \Ric_M(N,N) )\phi^2 \dnv,
\end{multline}
where $\nabla_V$ is the Levi-Civita connection on $\reg V$,
$A$ is the second fundamental form of $\reg V \subset M$, and
$\Ric_M$ is the Ricci curvature tensor on $M$.
The expression on the right-hand side can be defined for one-sided
$V$ by replacing $N$ by an arbitrary measurable unit section
$\nu: \reg V \to NV$, but it loses its interpretation in terms
of the second variation of the area.
\begin{defn}[Scalar second variation]
The \emph{scalar second variation} of a stationary integral varifold $V$
is the quadratic form $B_V$ defined for $\phi \in C_c^2(\reg V)$ by
\begin{equation}
\label{defn:scalar_second_variation}
B_V(\phi,\phi)
= \int_{\reg V} \abs{\nabla_V \phi}^2
- (\abs{A}^2 + \Ric_M(\nu,\nu)) \phi^2 \dnv.
\end{equation}
\end{defn}
\begin{rem}
When $V$ is one-sided, the second variation of its area has to be measured
with respect to variations in $C_c^1(NV)$---we refer
to \cite[Ch.~2]{Simon84} or~\cite[Sec.~1.8]{ColdingMinicozzi11}
for further information on this.
We called $B_V$ `scalar' in order to highlight its difference with
the second variation of area in this case,
but emphasise that for the remainder `second variation' refers exclusively to the
quadratic form $B_V$ from Definition~\ref{defn:scalar_second_variation}.
(For the same reasons we also call the Jacobi operator $L_V$
`scalar' in Definition~\ref{defn:scalar_jac} below,
but omit this adjective in the remainder of the text.)
\end{rem}
One can consider $\reg V$ as a stationary integral varifold
in its own right by identifying it with the corresponding varifold
with constant multiplicity $1$. Its scalar second variation
\begin{equation}
B_{\reg V}(\phi,\phi) = \int_{\reg V} \abs{\nabla_V \phi}^2
- (\abs{A}^2 + \Ric_M(\nu,\nu)) \phi^2 \dH^n
\end{equation}
differs from $B_V$ only in that the integral is with respect
to the $n$-dimensional Hausdorff measure instead of $\nv$. This means exactly that
while $B_V$ is `weighted' by the multiplicity of $V$, the quadratic
form $B_{\reg V}$ measures the variation of `unweighted' area;
we will briefly use this in Section~\ref{sec:prelim_spectrum_sec_var}.
After integrating by parts on $\reg V$, the form $B_V$
corresponds to the second-order elliptic operator
$L_V = \Delta_V + \abs{A}^2 + \Ric_M(\nu,\nu),$
where $\Delta_V$ is the Laplacian on $\reg V$.
\begin{defn}[Scalar Jacobi operator]
\label{defn:scalar_jac}
The \emph{scalar Jacobi operator} of $V$, denoted $L_V$, is the second-order
elliptic operator
\begin{equation}
L_V \phi = \Delta_V \phi +
(\abs{A}^2 + \Ric_M(\nu,\nu)) \phi
\quad \text{for all $\phi \in C^2(\reg V)$},
\end{equation}
where $\nu: \reg V \to NV$ is an arbitrary measurable unit normal
vector field.
\end{defn}
The curvature of $\reg V$ can blow up as one approaches $\sing V$,
in which case the coefficients of the operator $L_V$ would
not be bounded. To avoid this, we restrict ourselves to a compactly
contained open subset $W \subset \subset U \setminus \sing V$;
moreover we require
$W \cap \reg V \neq \emptyset$ to avoid vacuous statements.
We use the sign convention for the spectrum defined in
\cite[Ch.~8]{GilbargTrudinger98}, where $\lambda \in \RR$
is an eigenvalue of $L_V$ in $W$ if there is $\varphi
\in H_0^1(W \cap \reg V)$ such that $L_V \varphi + \lambda \varphi = 0$.
By standard elliptic PDE theory the spectrum
\begin{equation}
\lambda_1 \leq \lambda_2 \leq \cdots \to +\infty
\end{equation}
of $L_V$ in $W$ is discrete and bounded below.
We will sometimes also write $\lambda_p(W)$ instead of $\lambda_p$
in order to highlight the dependence of the spectrum on the
subset $W$.
The eigenvectors of $L_V$
span the space $H_0^1(W \cap \reg V) = W_0^{1,2}(W \cap \reg V)$,
which we abbreviate throughout by $H_0^1$.
The \emph{index of $B_V$} in $W$ is the
maximal dimension of a subspace of $H_0^1$ on which
$B_V$ is negative definite; equivalently
\begin{equation}
\ind_W B_V = \mathrm{card} \{ p \in \NN \mid \lambda_p(W) < 0 \}.
\end{equation}
Moreover $\ind B_V := \sup_W (\ind_W B_V)$, where
the supremum is taken over all $W \subset \subset U \setminus \sing V$ with
$W \cap \reg V \neq \emptyset$.
\begin{rem}
We will see in Section~\ref{sec:prelim_spectrum_sec_var}
that the index of $B_V$ coincides with the Morse index
of $\reg V$ with respect to the area functional,
at least when $\reg V$ is two-sided.
\end{rem}
\subsection{Statement of main theorem}
Let $(\eps_i)$ be a sequence of positive parameters with $\eps_i \to 0$
and consider an associated sequence of functions $(u_i)$ in $C^3(U)$ satisfying the
following hypotheses:
\begin{enumerate}[label=(\Alph*)]
\item Every $u_i \in C^3(U)$ is a critical point of the Allen--Cahn functional
\begin{equation}
\label{eq:allen_cahn_functional}
E_{\epsilon_i}(u)
= \int_U
\epsilon_i \frac{\lvert \nabla u \rvert^2}{2} + \frac{W(u)}{\epsilon_i}
\dH^{n+1},
\end{equation}
i.e. $u_i$ satisfies the equation
\begin{equation}
\label{eq:epsilon_allen_cahn_equation}
- \epsilon_i^2 \Delta u_i + W'(u_i) = 0 \quad \text{in } U.
\end{equation} \label{hypA}
\item There exist constants $C, E_0 < \infty$ such that
\begin{equation}
\sup_i \lVert u_i \rVert_{L^\infty(U)} \leq C\; \text{and} \; \sup_i E_{\epsilon_i}(u_i) \leq E_0.
\end{equation} \label{hypB}
\item There exists an integer $k \geq 0$ such that the Morse index of each $u_i$
is at most $k$, i.e. any subspace of $C_c^1(U)$ on which the second variation
\begin{equation}
\delta^2 E_{\epsilon_i}(u_i)(\phi,\phi)
= \int_U \epsilon_i \lvert \nabla \phi \rvert^2
+ \frac{W''(u_i)}{\epsilon_i} \phi^2
\dH^{n+1}
\end{equation}
is negative definite has dimension at most $k$. We write this $\ind u_i \leq k$,
and if $k = 0$, say that $u_i$ is \emph{stable in $U$}.
\label{hyp:C_index_bound}
\end{enumerate}
\begin{rem}
\label{rem:defn_stability}
More generally $\ind_{U'} u_i$ denotes
the index of $\delta^2 E_{\eps_i}(u_i)$ with respect to variations in
$C_c^1(U')$ (or equivalently in $H_0^1(U')$) for all open
subsets $U' \subset U$.
When $\ind_{U'} u_i = 0$, we say that $u_i$ is \emph{stable in $U'$}.
\end{rem}
We follow Tonegawa~\cite{Tonegawa05}, using an idea originally
developed by Ilmanen~\cite{Ilmanen93} in a parabolic setting
to `average the level sets' of $u_i \in C^3(U)$
and define a varifold $\vei$ by
\begin{equation}
\label{eq:definition_varifolds}
\vei(\phi)
= \frac{1}{\sigma}
\int_{U \cap \{\nabla u_i \neq 0\}}
\epsilon_i \frac{\lvert \nabla u_i(x) \rvert^2}{2}
\phi(x,T_x \{u_i = u_i(x)\})
\dH^{n+1}(x)
\end{equation}
for all $\phi \in C_c(G_n(U))$.
Here $T_x \{u_i = u_i(x)\}$ is the tangent space to the level set
$\{u_i = u_i(x)\}$ at $x \in U$,
and $\sigma = \int_{-1}^1 \sqrt{W(s)/2} \intdiff s$ is a constant.
\begin{rem}
In~\cite{HutchinsonTonegawa00,Guaraco15} the varifold $\vei$ is defined by
the slightly different expression
$ \vei(\phi)
= \frac{1}{\sigma}
\int_{U \cap \{\nabla u_i \neq 0\}}
\abs{\nabla w_i(x)} \phi(x,T_x \{u_i = u_i(x)\}) \dH^{n+1}(x)$, with
$w_i$ as in Theorem~\ref{lem:properties_V}.
The `equipartition of energy' \eqref{eq:thm_properties_V_equipartition_energy}
from Theorem~\ref{lem:properties_V}
shows that the two definitions give rise to the same limit varifold $V$
as $i \to \infty$.
\end{rem}
The weight measures $\norm{\vei}$ of these varifolds satisfy
\begin{equation}
\label{eq:density_weight_measure}
\lVert \vei \rVert(A)
= \frac{1}{\sigma} \int_{A \cap \{\nabla u_i \neq 0\}}
\epsilon_i \frac{\lvert \nabla u_i \rvert^2}{2} \dH^{n+1}
\leq \frac{E_0}{2\sigma}
\end{equation}
for all Borel subsets $A \subset U$, where the inequality follows
from the energy bound in Hypothesis~\ref{hypB}.
The resulting bound $\vei(G_n(U)) \leq \frac{E_0}{2\sigma}$ allows us to extract a
subsequence that converges to a varifold $V$, with properties laid out in the following theorem by Hutchinson--Tonegawa~\cite{HutchinsonTonegawa00}.
\begin{thm}[\cite{HutchinsonTonegawa00}]
\label{lem:properties_V}
Let $(u_i)$ be a sequence in $C^3(U)$ satisfying Hypotheses (A) and (B).
Then, after passing to a subsequence, $\vei \rightharpoonup V$ as varifolds, and
\begin{enumerate}[font = \upshape, label = (\alph*)]
\item $V$ is a stationary integral varifold,
\item $\nv(U) = \liminf_{i \to \infty} \frac{1}{2 \sigma} E_{\eps_i}(u_i),$
\item for all $\phi \in C_c(U)$:
\begin{equation}
\label{eq:thm_properties_V_equipartition_energy}
\lim_{i \to \infty} \int_U
\epsilon_i \frac{\abs{\nabla u_i}^2}{2} \phi
= \lim_{i \to \infty} \int_U \frac{W(u_i)}{\epsilon_i} \phi
= \lim_{i \to \infty} \int_U \abs{\nabla w_i} \phi,
\end{equation}
where $w_i := \Psi \circ u_i$ and
$\Psi(t) := \int_{0}^{t} \sqrt{W(s)/2} \intdiff s$.
\end{enumerate}
\end{thm}
Up to a factor of $\eps_i$ the second variation $\delta^2 E_{\eps_i}$
corresponds to the second-order elliptic operator
$L_i := \Delta - \eps_i^{-2} W''(u_i)$. As in the discussion for
the Jacobi operator, $L_i$
has discrete spectrum $\lambda_1^i \leq \lambda_2^i \leq \cdots \to +\infty$,
which we denote by $\lambda_p^i(W)$ when we want to
emphasise its dependence on the subset $W$.
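For orientation only, the first few $\lambda_p^i$ can be approximated numerically in simple situations; the sketch below (not used in any proof) assumes the quartic potential $W(u) = \tfrac{1}{4}(1-u^{2})^{2}$, a one-dimensional domain with zero Dirichlet data, and the heteroclinic profile for $u_i$.
\begin{verbatim}
# Minimal illustrative sketch: finite-difference approximation of the first
# eigenvalues of L_i = d^2/dx^2 - eps^{-2} W''(u_i) on (-1,1), for the
# assumed quartic W(u) = (1-u^2)^2/4 and u_i(x) = tanh(x / (sqrt(2) eps)).
import numpy as np

def first_eigenvalues(eps, N=1000, num=5):
    x = np.linspace(-1.0, 1.0, N + 2)[1:-1]     # interior grid points
    h = x[1] - x[0]
    u = np.tanh(x / (np.sqrt(2.0) * eps))
    Wpp = 3.0 * u**2 - 1.0                      # W''(u) for the quartic W
    main = -2.0 / h**2 - Wpp / eps**2
    off = np.ones(N - 1) / h**2
    L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    mu = np.sort(np.linalg.eigvalsh(L))[::-1]   # eigenvalues of L, descending
    return -mu[:num]   # lambda_p in the convention L_i phi + lambda phi = 0

print(first_eigenvalues(0.05))
\end{verbatim}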
The following theorem is our main result.
\begin{customthm}{A}
\label{thm:main_thm_propagation_of_index_bounds}
Let $M^{n+1}$ be a closed Riemannian manifold, and $U \subset M$ an open subset.
Let $(u_i)$ be a sequence in $C^3(U)$ satisfying Hypotheses \ref{hypA},
\ref{hypB} and \ref{hyp:C_index_bound}, and $\vei \rightharpoonup V$.
Then $\dimh \sing V \leq n-7$ and
\begin{enumerate}[font = \upshape, label = (\roman*)]
\item $\lambda_p(W) \geq \limsup_{i \to \infty} \lambda_p^i(W)$
for all open $W \subset \subset U \setminus \sing V$ with
$W \cap \reg V \neq \emptyset$ and all $p \in \NN$,
\label{item:main_thm_spec}
\item $\ind B_V \leq k$. \label{item:main_thm_ind_reg}
\end{enumerate}
\end{customthm}
\begin{rem}
The spectral lower bound remains true
if the assumptions are weakened and one assumes that for some $k \in \NN$
there is $\mu \in \RR$ such that
\begin{equation}
\lambda_k^i(U) \geq \mu \quad \text{for all $i \in \NN$}
\end{equation}
instead of an index bound---this observation is inspired by
the work of Ambrozio--Carlotto--Sharp~\cite{AmbrozioCarlottoSharp2015}, where
a similar generalisation is made in the context of minimal surfaces.
One obtains the spectral bound via an inductive argument on $k$ similar to the argument
in Section~\ref{sec:proof_main_thm}, noting for the base case of
the induction that
bounds as in Corollary~\ref{cor:L2_bound_for_Ae_and_Be_if_u_stable}
hold if $\lambda_1^i \geq \mu$.
\end{rem}
The following corollary is an immediate consequence of
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}.
\begin{customcor}{B}
\label{cor:main_thm_morse}
If $\reg V$ is two-sided, then its Morse index with respect to the
area functional satisfies $\ind_{\calH^n} \reg V \leq k$.
\end{customcor}
If $V$ is the stationary varifold arising from Guaraco's $1$-parameter
min-max construction~\cite{Guaraco15} (resp.\ from the $k$-parameter min-max construction of Gaspar--Guaraco~\cite{GasparGuaraco2016})
and its regular part is two-sided, then by Corollary~\ref{cor:main_thm_morse}
its Morse index is at most $1$ (resp.\ at most $k$).
\section{Preliminary results}
\label{section:preliminary_results}
The preliminary results are divided into three parts. In the first,
following~\cite{Tonegawa05}
we introduce `second fundamental forms' $A^i$ for the varifolds $V^i$
and relate them to the second variation of the Allen--Cahn functional.
The last two sections are dedicated to the spectra of the operators
$L_V = \Delta_V + \abs{A}^2 + \Ric_M(\nu,\nu)$ and $L_i = \Delta - \eps_i^{-2} W''(u_i)$.
\subsection{Stability and \texorpdfstring{$L^2$}{L2}--bounds of curvature}
\label{subsec:generalised_second_ff}
To simplify the discussion fix for the moment a
critical point $u \in C^3(U) \cap L^\infty(U)$ of the Allen--Cahn functional
$E_\eps$, with associated varifold $V^\eps$
defined by \eqref{eq:definition_varifolds}.
Let $x \in U$ be a regular point of $u$, that is $\nabla u(x) \neq 0$.
In a small enough neighbourhood
of $x$, the level set $\{ u = u(x)\}$ is embedded in $M$.
Call $\Sigma \subset M$ this embedded portion of the
level set, and let $A^\Sigma$ be its second fundamental form.
We use this to define a `second fundamental form' for $V^\eps$.
\begin{defn}
\label{defn:second_ff}
The function $A^\eps$ is defined at all $x \in U$ where $\nabla u(x) \neq 0$
by $A^\eps(x) = A^\Sigma(x)$.
\end{defn}
\begin{rem}
Second fundamental forms can be generalised to the context
of varifolds via the integral identity \eqref{eq:gen_second_ff}---see
Appendix~\ref{app:second_fundamental_form_coordinate_expression}, or
\cite{Hutchinson86} for the original account of this theory.
Strictly speaking it is an abuse of language to call
$A^\eps$ the `second fundamental form' of
$V^\eps$, as it satisfies this identity only up to a small error term
\eqref{eq:approximate_identity_second_fundamental_form}.
\end{rem}
By definition $\nabla_X Y = \nabla^\Sigma_X Y + A^\Sigma(X,Y)$ for all
$X,Y \in C^1(T\Sigma)$.
Making implicit use of the musical isomorphisms here and throughout the text,
write $\nu^\eps(x) = \frac{\nabla u(x)}{\abs{\nabla u(x)}}$, so that
\begin{equation}
A^\Sigma(X,Y) = \langle \nabla_X Y,\nu^\eps \rangle \nu^\eps
= - \langle Y, \nabla_X \nu^\eps \rangle \nu^\eps.
\end{equation}
\begin{lem}
\label{lem:second_ff_ae_bound}
Let $x \in U$ be a regular point of $u$.
Then
\begin{equation}
\label{eq:ae_bounded_hessian}
\abs{\Ae}(x)^2
\leq \frac{1}{\abs{\nabla u}^2(x)}
(\abs{\nabla^2 u}^2(x) - \abs{\nabla \abs{\nabla u}}^2(x)),
\end{equation}
where $\nabla^2 u(x)$ is the Hessian of $u$ at $x$.
\end{lem}
\begin{proof}
The second fundamental form $A^\Sigma$ is expressed in terms of the
covariant derivative $\nabla \nu^\eps$ by
\begin{equation}
A^\Sigma = - \srestr{\nabla \nu^\eps}{T\Sigma \otimes T\Sigma} \otimes \nu^\eps.
\end{equation}
We can express $\nabla \nu^\eps$ as
\begin{equation}
\nabla \nu^\eps =
\frac{\nabla^2 u}{\abs{\nabla u}} -
\nu^\eps \otimes \frac{\nabla \abs{\nabla u}}{\abs{\nabla u}},
\end{equation}
whence after restriction to $T\Sigma \otimes T\Sigma$ we get
\begin{equation}
A^\Sigma =
- \frac{1}{\abs{\nabla u}}
\srestr{\nabla^2 u}{T\Sigma \otimes T\Sigma}
\otimes \nu^\eps.
\end{equation}
On the other hand $\nabla \abs{\nabla u} = \langle \nabla^2 u, \nu^\eps \rangle$
where $\nabla u \neq 0$, so after decomposing the Hessian $\nabla^2 u$
in terms of its action on $T\Sigma$ and $N\Sigma$, we obtain
\begin{equation}
\abs{\nabla^2 u}^2 - \abs{\nabla \abs{\nabla u}}^2
= \abs{\nabla u}^2 \abs{A^\Sigma}^2 +
\abs{\srestr{\nabla^2 u}{T\Sigma \otimes N\Sigma}}^2
\geq \abs{\nabla u}^2 \abs{A^\eps}^2. \qedhere
\end{equation}
\end{proof}
When considering the
second variation, it somewhat simplifies notation to rescale the energy as
$\calE_\eps = \eps^{-1} E_\eps$.
Its second variation is
$ \delta^2 \calE_\eps(u)(\phi,\phi) =
\int_U \abs{\nabla \phi}^2 + \frac{W''(u)}{\eps^2} \phi^2, $
defined for all $\phi \in C_c^1(U)$, which by a density argument can be extended to $H_0^1(U)$.
The following identity will be useful throughout; a proof can be found
in either of the indicated sources.
\begin{lem}[\cite{Farina13,Tonegawa05}]
\label{lem:expr_second_variation}
Let $u \in C^3(U) \cap L^\infty(U)$ be a critical point of $E_\eps$.
For all $\phi \in C_c^1(U)$,
\begin{multline}
\label{eq:second_variation_rescaled_expression}
\delta^2 \calE_\eps(u)(\abs{\nabla u} \phi,\abs{\nabla u} \phi)
= \\ \int_{U} \abs{\nabla u}^2 \abs{\nabla \phi}^2
- \left(\abs{\nabla^2 u}^2 -
\abs{\nabla \abs{\nabla u}}^2 + \Ric_M(\nabla u,\nabla u) \right)
\phi^2 \dH^{n+1}.
\end{multline}
\end{lem}
Combining \eqref{eq:second_variation_rescaled_expression}
with the $\nve$-a.e. bound \eqref{eq:ae_bounded_hessian} yields
for all $\phi \in C_c^1(U)$
\begin{equation}
\label{eq:second_inequality_stability}
\frac{\eps}{2 \sigma}
\delta^2 \calE_{\eps}(u)(\abs{\nabla u}\phi,\abs{\nabla u}\phi)
\leq
\int \abs{\nabla \phi}^2
- (\abs{A^\eps}^2 + \Ric_M(\nu^\eps,\nu^\eps)) \phi^2 \dnve.
\end{equation}
When $u$ is stable, that is when $\delta^2 \calE_\eps(u)$ is non-negative,
then this identity yields $L^2(V^\eps)$--bounds for $A^\eps$.
\begin{cor}
\label{cor:L2_bound_for_Ae_and_Be_if_u_stable}
There is a constant $C = C(M) > 0$ such that if $u \in C^3(U) \cap L^\infty(U)$
is a critical point of $E_\eps$ and is stable in an open ball $B(x,r) \subset U$
of radius $r \leq 1$ then
\begin{equation}
\label{eq:bound_in_cor_stability_in_balls}
\int_{B(x,\frac{r}{2})} \abs{\Ae}^2 \dnve
\leq \frac{C}{r^2} \norm{\ve}(B(x,r)).
\end{equation}
\end{cor}
\begin{proof}
The Ricci curvature
term in \eqref{eq:second_inequality_stability} can be bounded by some
constant $C(M) \geq 1$ as the ambient manifold $M$ is closed, so
$ \int_{B(x,r)} \abs{\Ae}^2 \phi^2 \dnve
\leq C(M) \int \phi^2 + \abs{\nabla \phi}^2 \dnve $
for all $\phi \in C_c^1(B(x,r))$. Plug in a cut-off function
$\eta \in C_c^1(B(x,r))$ with $\eta = 1$ in $B(x,\frac{r}{2})$ and
$\abs{\nabla \eta} \leq 3r^{-1}$ to obtain the desired inequality.
\end{proof}
We now turn to a sequence $(u_i)$ of critical points satisfying
Hypotheses~\ref{hypA}--\ref{hyp:C_index_bound}.
If the $u_i$ are stable in a ball as in
Corollary~\ref{cor:L2_bound_for_Ae_and_Be_if_u_stable},
then the uniform weight bounds \eqref{eq:density_weight_measure}
imply uniform $L^2(V^i)$--bounds of the second fundamental forms,
which we denote $A^i$ from now on.
Under these conditions the $A^i$ converge
weakly to the second fundamental form $A$ (in the classical, smooth
sense) of $\reg V$.
\begin{prop}
\label{prop:weak_convergence_second_ff}
Let $W \subset \subset U \setminus \sing V$ be open with
$W \cap \reg V \neq \emptyset$.
If $\sup_i \int_W \abs{\Aei}^2 \dnvi < +\infty$,
then passing to a subsequence $\Aei \dvei \rightharpoonup A \dv$ weakly as Radon
measures on $G_n(W)$, and
\begin{equation}
\label{eq:bound_fatou_second_ff}
\int_W \abs{A}^2 \dnv \leq
\liminf_{i \to \infty} \int_W \abs{\Aei}^2 \dnvi,
\end{equation}
where $A$ is the second fundamental form of $\reg V \subset M$.
\end{prop}
The weak subsequential convergence follows immediately from compactness
of Radon measures; the main difficulty is to show that the weak limit is $A \dv$.
The proof is a straightforward adaptation of the argument
used for the stable case in~\cite{Tonegawa05}; we present a
complete argument in Appendix~\ref{app:second_fundamental_form_coordinate_expression}
for the reader's convenience.
\subsection{Spectrum of \texorpdfstring{$L_V$}{LV} and weighted min-max}
\label{sec:prelim_spectrum_sec_var}
Throughout we restrict ourselves to a compactly contained open
subset $W \subset \subset U \setminus \sing V$
to avoid blow-up of the coefficients of $L_V$ near the singular set,
and assume $W \cap \reg V \neq \emptyset$ to avoid vacuous statements.
As $W \cap \reg V$ is compactly contained in $\reg V$, it can
intersect only finitely many connected components $C_1,\dots,C_N$
of $\reg V$.
By the constancy theorem~\cite[Thm.~41.1]{Simon84}
the multiplicity function $\Theta$ of a stationary
integral varifold $V$ is constant on every connected component of $\reg V$;
we write $\Theta_1,\dots,\Theta_N$ for the respective multiplicities
of $C_1,\dots,C_N$.
By classical theory for elliptic PDE~\cite[Ch.~8]{GilbargTrudinger98},
the spectrum of $L_V$ has the following min-max characterisation:
\begin{equation}
\label{eq:unw_min_max}
\lambda_p = \inf_{\dim S = p} \max_{\phi \in S \setminus \{ 0 \}}
\frac{B_{\reg V}(\phi,\phi)}{\norm{\phi}^2_{L^2}}
\quad \text{for all $p \in \NN$},
\end{equation}
where the infimum is taken over linear subspaces $S$
of $H_0^1$ (recall this is our
abbreviated notation for $H_0^1(W \cap \reg V)$).
From this we easily obtain a min-max characterisation that is `weighted' by
the multiplicities $\Theta_1,\dots,\Theta_N$ in the sense that
\begin{equation}
\label{eq:w_min_max}
\lambda_p = \inf_{\dim S = p} \max_{\phi \in S \setminus \{0 \}}
\frac{B_V(\phi,\phi)}{\norm{\phi}^2_{L^2(V)}}
\quad \text{for all $p \in \NN$}.
\end{equation}
To see this, observe the following: as functions $\phi \in H_0^1$
vanish near the boundary of every connected component $C \subset \reg V$,
the function $\phi_C$ on $W \cap \reg V$ defined by
\begin{equation}
\phi_C = \begin{cases}
\phi &\text{on } C \\
0 & \text{on } W \cap \reg V \setminus C
\end{cases}
\end{equation} also belongs to $H_0^1$.
Moreover
\begin{equation}
B_V(\phi_C,\phi_C)
= \Theta_C B_{\reg V}(\phi_C,\phi_C)
\text{ and }
\norm{\phi_C}^2_{L^2(V)}
= \Theta_C \norm{\phi_C}^2_{L^2},
\end{equation}
where $\Theta_C$ denotes the multiplicity of $C$.
We then define a linear isomorphism of $H_0^1$ via normalisation
by the respective multiplicities of the components. This sends
$\phi \mapsto \bar{\phi} := \sum_{j=1}^N \Theta_j^{-1/2} \phi_{C_j}$; then
\begin{equation}
\frac{B_V(\bar{\phi},\bar{\phi})}{\norm{\bar{\phi}}_{L^2(V)}^2}
= \frac{B_{\reg V}(\phi,\phi)}{\norm{\phi}_{L^2}^2}.
\end{equation}
Therefore the `unweighted' and `weighted' min-max
characterisations \eqref{eq:unw_min_max} and \eqref{eq:w_min_max} are in fact equivalent.
In the remainder we mainly use \eqref{eq:w_min_max}, and
abbreviate this as $\lambda_p = \inf_{\dim S = p} \max_{S \setminus \{0\}} J_V$,
where $J_V$ denotes the `weighted' Rayleigh quotient
\begin{equation}
J_V(\phi) = \frac{B_V(\phi,\phi)}{\norm{\phi}_{L^2(V)}^2}
\text{\quad for all $\phi \in H_0^1 \setminus \{0\}$}.
\end{equation}
The min-max characterisation implies the following lemma, which
highlights the dependence of the spectrum $\lambda_p(W)$ on
the subset $W$.
\begin{lem}
\label{lem:shrinking_balls_spectrum}
\leavevmode
\begin{enumerate}[label = (\alph*), font = \upshape]
\item If $W_1 \subset W_2 \subset \subset U \setminus \sing V$, then
$\lambda_p(W_1) \geq \lambda_p(W_2)$: the spectrum
is monotone decreasing.
\item \label{item:index_superadd} If $W_1,W_2 \subset \subset U \setminus \sing V$
have $W_1 \cap W_2 = \emptyset$, then
$\ind_{W_1} B_V + \ind_{W_2} B_V = \ind_{W_1 \cup W_2} B_V$.
\item \label{item:shrinking_balls}
If $W \subset \subset U \setminus \sing V$ and $y \in W \cap \reg V$, then
$\lambda_p(W) = \lim_{R \to 0} \lambda_p(W \setminus \overline{B}(y,R))$.
\end{enumerate}
\end{lem}
\begin{rem}
The same properties hold for the spectrum and index of $L_i$, and the proof is easily
modified to cover this case.
\end{rem}
\begin{proof}
(a) This is immediate from the min-max characterisations,
or simply by definition of the spectrum. Similarly for \ref{item:index_superadd}.
\ref{item:shrinking_balls} By monotonicity of the spectrum we have
\begin{equation}
\lambda_p(W \setminus \overline{B}(y,R))
\geq \lambda_p(W \setminus \overline{B}(y,R'))
\geq \lambda_p(W)
\end{equation}
for all $R > R' > 0$. The limit as $R \to 0$ therefore exists and is bounded below
by $\lambda_p(W)$; it remains only to show that $\lim_{R \to 0}
\lambda_p(W \setminus \overline{B}(y,R)) \leq \lambda_p(W)$.
By monotonicity of the spectrum it is equivalent to show that
for a fixed radius $R > 0$, $\lim_{m \to \infty}
\lambda_p(W \setminus \overline{B}(y,2^{-m}R))
\leq \lambda_p(W)$.
Let $(\rho_m)_{m \in \NN}$ be a sequence
in $C_c^1(B(y,R) \cap \reg V)$ with the following properties (such a sequence
exists provided $n \geq 2$, see Remark~\ref{rem:cutoff_functions} below):
\begin{enumerate}[label = (\arabic*)]
\item \label{item:cutoff_props_a} $\srestr{\rho_m}{B(y,2^{-m}R) \cap \reg V}
\equiv 0$ and $\rho_m \to 1$ $\calH^n$-a.e.\ in
$W \setminus \{ y \} \cap \reg V $,
\item \label{item:cutoff_props_b}
$\norm{\nabla_V \rho_m}_{L^2(W \cap \reg V)} \to 0$.
\end{enumerate}
Let a small $\delta > 0$ be given and choose a family $(\phi_1,\dots,\phi_p)$
in $C_c^1(W \cap \reg V)$ whose span $S := \linspan(\phi_1,\dots,\phi_p)$ satisfies
$\max_{S \setminus 0} J_V \leq \lambda_p(W) + \delta$.
Write $\rho_m S$ for
$\linspan(\rho_m \phi_1,\dots,\rho_m \phi_p)\subset
C_c^1(W \setminus \overline{B}(y,2^{-m}R) \cap \reg V)$---for
$m$ large enough the functions $\rho_m \phi_i$ are indeed linearly independent.
By the weighted min-max formula \eqref{eq:w_min_max},
\begin{equation}
\max_{\rho_m S \setminus 0} J_V
\geq \lambda_p(W \setminus \overline{B}(y,2^{-m}R)).
\end{equation}
Let $t_m \in \Sph^{p-1} \subset \RR^p$ denote the coefficients of the linear combination
$t_m \cdot \rho_m \phi := \rho_m \sum_{j=1}^p t_{mj} \phi_j \in \rho_m S$
that realises $\max_{\rho_m S \setminus 0} J_V$.
Passing to a convergent subsequence $t_m \to t \in \Sph^{p-1} \subset \RR^p$
we get $J_V(t_m \cdot \rho_m \phi) \to J_V(t \cdot \phi)$, and hence
\begin{equation}
\lim_{m \to \infty} J_V(t_m \cdot \rho_m \phi)
\leq \max_{S \setminus 0} J_V.
\end{equation}
On the one hand $\max_{S \setminus 0} J_V \leq \lambda_p(W) + \delta$ by
our choice of $S$,
on the other hand $\lim_{m \to \infty} \lambda_p(W \setminus \overline{B}(y,2^{-m}R))
\leq \lim_{m \to \infty} J_V(t_m \cdot \rho_m \phi)$ by our choice of $t_m$.
The conclusion follows after combining these two observations and letting $\delta \to 0$.
\end{proof}
\begin{rem}
\label{rem:cutoff_functions}
A sequence of functions $(\rho_m)$ with properties
\ref{item:cutoff_props_a} and \ref{item:cutoff_props_b} exists provided $n \geq 2$,
as we assume throughout. When $n \geq 3$ one can use the standard cutoff functions;
for $n = 2$ a more precise construction is necessary, described
for instance in~\cite[Sec.~4.7]{Evans15}.
\end{rem}
\subsection{Spectrum of \texorpdfstring{$L_i$}{Li} and conditional proof of
Theorem~\texorpdfstring{\ref{thm:main_thm_propagation_of_index_bounds}}{A}}
The main result in this section is Lemma~\ref{lem:theorem_holds_if_second_ff_bounded};
essentially it says that
\begin{equation}
\lambda_p(W) \geq \limsup_{i \to \infty} \lambda_p^i(W)
\end{equation}
holds under the condition that $\sup_i \int_W \abs{A^i}^2 \dnvi < +\infty$.
What precedes it in this section are technical results required for its proof.
Again, by classical elliptic PDE theory the eigenvalues $\lambda_p^i(W)$
of $L_i = \Delta - \eps_i^{-2} W''(u_i)$ on $H_0^1(W)$
have the following min-max characterisation
in terms of the rescaled Allen--Cahn functional
$\calE_{\eps_i} = \eps_i^{-1} E_{\eps_i}$:
\begin{equation}
\label{eq:min_max_L_i}
\lambda_p^i(W) = \inf_{\dim S = p} \max_{\phi \in S \setminus \{0\}}
\frac{\delta^2 \calE_{\eps_i}(u_i)(\phi,\phi)}{\norm{\phi}_{L^2}^2}
\quad \text{for all $p \in \NN$},
\end{equation}
where the infimum is over $p$-dimensional linear subspaces
$S \subset H_0^1(W)$.
Define the \emph{Rayleigh quotient} $J_i$ by
\begin{equation}
J_i(\phi) = \frac{\delta^2 \calE_{\eps_i}(u_i)(\phi,\phi)}{\norm{\phi}_{L^2}^2}
\quad \text{for all $\phi \in H_0^1(W) \setminus \{0\}$},
\end{equation}
so that we can write the min-max characterisation more succinctly as
$\lambda_p^i = \inf_{\dim S = p} \max_{S \setminus \{0\}} J_i.$
To compare the spectrum of $L_V$ in $H_0^1$ with those of the
operators $L_i$ in $H_0^1(W)$,
extend functions in $C_c^1(W \cap \reg V)$ to $C_c^1(W)$
in the standard way, which we now describe to fix notations.
Pick a small enough $0 < \tau < \inj(M)$ so that $B_\tau V :=
\exp N_\tau V$ is a tubular neighbourhood of $W \cap \reg V$, where
$N_\tau V := \{ s_p \in NV \mid p \in W \cap \reg V, \abs{s_p} < \tau \}$.
We abuse notation slightly to denote points in $B_\tau V$
by $s_p$, and identify the fibre $N_pV$ with
$(\exp_p)_* N_p V \subset T_{s_p} (B_\tau V)$.
The distance function $d_V: x \in B_\tau V \mapsto \dist(x,\reg V)$
is Lipschitz and smooth on $B_\tau V \setminus \reg V$.
By the Gauss lemma $\grad d_V(s_p) = -s_p/\abs{s_p}$ for all
$s_p \in B_\tau V \setminus \reg V$.
A function $\phi \in C^1(B_\tau V)$ is constant
along geodesics normal to $\reg V$ if $\phi(s_p) =
\phi(0_p)$ for all $s_p \in B_\tau V$,
or equivalently if $\langle \nabla \phi,\nabla d_V \rangle \equiv 0$
in $B_\tau V \setminus \reg V$.
\begin{lem}
\label{lem:extension_scalar_functions}
Any $\phi \in C_c^1(W \cap \reg V)$ can be extended to $C_c^1(W)$
with $\langle \nabla \phi, \nabla d_V \rangle \equiv 0$
in $B_{\frac{\tau}{2}} V \setminus \reg V$ for some $\tau = \tau(\phi) > 0$.
\end{lem}
\begin{proof}
Extend $\phi \in C_c^1(W \cap \reg V)$ to
$B_\tau V$ by setting $\tilde{\phi}(s_p) = \phi(p)$, so that
$\langle \nabla d_V , \nabla \tilde{\phi} \rangle \equiv 0$ in
$B_\tau V \setminus \reg V$.
Let $\eta \in C^1[0,\infty)$ be a cutoff
function with $0 \leq \eta \leq 1$, $\eta \equiv 1 $ on $[0,1/2)$
and $\spt \eta \subset [0,1)$.
Then
\begin{equation}
(\eta \circ d_V/\tau) \tilde{\phi} \in C_c^1(B_\tau V)
\text{ and } (\eta \circ d_V/\tau) \tilde{\phi} = \tilde{\phi} \text{ on } B_{\tau/2} V.
\end{equation}
Moreover even though $B_\tau V \not\subset W$ in general, as $\spt \phi$
is compactly contained in $W \cap \reg V$ we still have
$ (\eta \circ d_V/\tau) \tilde{\phi} \in C_c^1(W)$ provided
$0 < \tau < \dist (\spt \phi, \partial W)$.
\end{proof}
The following lemma gives an asymptotic lower bound for the Rayleigh
quotient $J_V$ in terms of the $J_i$.
\begin{lem}
\label{lem:min_max_convergence_when_C_c_convergence}
Let $B_{\tau}V$
be a tubular neighbourhood of $W \cap \reg V$ with width $\tau > 0$,
and let $(\phi_i)_{i \in \NN}$ be a sequence of functions in $C_c^1(W)$ with
\begin{enumerate}[label = (\alph*), font = \upshape]
\item $\langle \nabla \phi_i,\nabla d_V \rangle \equiv 0$ in $W \cap B_{\tau/2}V$
for all $i$,
\item $\phi_i \to \phi$ in $C_c^1(W)$ as $i \to \infty$,
where $\phi \not\equiv 0$ on $W \cap \reg V$.
\label{item:Cc_cvg_second_assumption}
\end{enumerate}
If $\sup_i \int_W \abs{A^i}^2 \dnvi < +\infty$,
then $J_V(\phi) \geq \limsup_{i \to \infty}
J_i(\abs{\nabla u_i}\phi_i)$.
\end{lem}
\begin{proof}
Before we start the proof proper, note that for all $\phi \in H_0^1(U)$,
dividing both sides of \eqref{eq:second_inequality_stability}
by $ \frac{\eps_i}{2 \sigma} \int \phi^2 \abs{\nabla u_i}^2 \dH^{n+1}
= \int \phi^2 \dnvi$ yields
\begin{equation}
\label{eq:sec_var_sec_form}
\norm{\phi}_{L^2(\vi)}^{-2}
\int \abs{\nabla \phi}^2 - (\abs{A^i}^2 + \Ric_M(\nui,\nui)) \phi^2 \dnvi
\geq J_i(\abs{\nabla u_i}\phi),
\end{equation}
provided of course that $\norm{\phi}_{L^2(V^i)}^2 \neq 0$.
We treat the terms on the left-hand side separately in the calculations
\eqref{item:proof_thmA_i_1}--\eqref{item:proof_thmA_i_4} below.
Once these are completed, we combine \eqref{item:proof_thmA_i_1} with our
assumption that $\norm{\phi}_{L^2(V)}^2 \neq 0$ to obtain that
$\norm{\phi_i}_{L^2(V^i)} \neq 0$ for large enough $i$.
The conclusion follows by combining \eqref{eq:sec_var_sec_form}
with the remaining calculations:
\begin{enumerate}
\item $\int \phi^2 \dnv = \lim_{i \to \infty} \int \phi_i^2
\dnvi$\label{item:proof_thmA_i_1}
\item $\int \abs{\nabla_V \phi}^2 \dnv = \lim_{i \to \infty} \int
\abs{\nabla \phi_i}^2 \dnvi$, \label{item:proof_thmA_i_2}
\item $\int \abs{A}^2 \phi^2 \dnv
\leq \liminf_{i \to \infty} \int \abs{A^i}^2 \phi_i^2 \dnvi$,
\label{item:proof_thmA_i_3}
\item $\int \Ric_M(\nu,\nu) \phi^2 \dnv
= \lim_{i \to \infty } \int \Ric_M(\nui,\nui)\phi_i^2 \dnvi$.
\label{item:proof_thmA_i_4}
\end{enumerate}
\eqref{item:proof_thmA_i_1}
By assumption $\phi_i^2 \to \phi^2$
in $C_c(W)$, whence by Corollary~\ref{cor:meas_fct_cvg} we get
$\int \phi^2 \dnv = \lim_{i \to \infty} \int \phi_i^2 \dnvi$.
The same argument proves \eqref{item:proof_thmA_i_2}, after noticing
that $\langle \nabla \phi_i, \nabla d_V \rangle \equiv 0$ implies
$\abs{\nabla \phi}^2 = \abs{\nabla_V \phi}^2$ on $W \cap \reg V$.
\eqref{item:proof_thmA_i_3}
The sequence $(A^i \phi_i \dnvi)$ converges weakly to $A \phi \dnv$,
as we can show by
testing against an arbitrary $\varphi \in C_c(U)$:
\begin{multline}
\int A^i \phi_i \varphi \dnvi
- \int A \phi \varphi \dnv
= \\ \int A^i (\phi_i - \phi) \varphi \dnvi
+ \int A^i \phi \varphi \dnvi - \int A \phi \varphi \dnv.
\end{multline}
The first integral is bounded by
\begin{equation}
\Big \lvert \int A^i (\phi_i - \phi) \varphi \dnvi \Big \rvert
\leq
\norm{\phi_i - \phi}_{L^\infty} \norm{\varphi}_{L^2(\vi)}
\norm{A^i}_{L^2(V^i)} \to 0
\end{equation}
because $\phi_i \to \phi$
in $C_c(W)$ as $i \to \infty$.
The remaining terms tend to $0$ by the weak convergence of
$A^i \dnvi \rightharpoonup A \dnv$ tested against
$\phi \varphi \in C_c(W)$.
Then inequality \eqref{eq:measure_function_convergence_L2_bound}
gives
$
\int \abs{A}^2 \phi^2 \dnv
\leq \liminf_{i \to \infty} \int \abs{A^i}^2 \phi_i^2 \dnvi.
$
\eqref{item:proof_thmA_i_4}
For each $S \in G_n(T_p M)$ pick a unit vector $\nu_S$ in $T_p M$
orthogonal to $S$, and define a smooth function $R_M$ on $G_n(U)$ by
$R_M \! : S \mapsto \Ric_M(\nu_S,\nu_S)$.
Then $\phi_i^2 R_M \to \phi^2 R_M$ in $C_c(G_n(U))$ as $i \to \infty$,
and by Corollary~\ref{cor:meas_fct_cvg},
\begin{multline}
\int \phi_i^2 \Ric_M(\nui,\nui) \dnvi
= \\ \int \phi_i^2 R_M \dvi
\to
\int \phi^2 R_M \dv = \int \phi^2 \Ric_M(\nu,\nu) \dnv.
\qedhere
\end{multline}
\end{proof}
We conclude the section with a proof of
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}\ref{item:main_thm_spec}
in the case where there is a uniform $L^2(\vei)$--bound on the second
fundamental forms $(\Aei)$.
\begin{lem}
\label{lem:theorem_holds_if_second_ff_bounded}
Let $W \subset \subset U \setminus \sing V$ be open with
$W \cap \reg V \neq \emptyset$.
If $\sup_i \int_W \abs{\Aei}^2 \dnvi < +\infty$,
then $\lambda_p(W) \geq \limsup_{i \to \infty} \lambda_p^i(W)$
for all $p$.
\end{lem}
\begin{proof}
We may assume that every connected component of $W$ intersects $\spt \nv$
(or $\reg V$, equivalently as $W \cap \sing V = \emptyset$)
without restricting generality: if $C$ is a
connected component of $W$ with $C \cap \reg V = \emptyset$,
then $\lambda_p(W \setminus C) = \lambda_p(W)$, and by monotonicity
$\lambda_p^i(W \setminus C) \geq \lambda_p^i(W)$ for all $i$.
Given $\delta > 0$ there is a $p$-dimensional linear subspace
$S = \linspan(\phi_1,\dots,\phi_p)$ of $C_c^1(W \cap \reg V)$ with
\begin{equation}
\label{eq:almost_maximiser_but_in_C_c}
\lambda_p(W) + \delta
\geq
\max_{S \setminus \{0\}} J_V.
\end{equation}
Extend the functions $\phi_1,\dots,\phi_p$
to $C_c^1(W)$ as in Lemma~\ref{lem:extension_scalar_functions};
for large enough $i$ the family
$(\abs{\nabla u_i}\phi_1,\dots,\abs{\nabla u_i} \phi_p)$
is still linearly independent.
Indeed, otherwise we could extract a subsequence
such that $(\abs{\nabla u_{i'}} \phi_1,\dots,
\abs{\nabla u_{i'}} \phi_p)$ has a linear dependence, with coefficients
$a_{i'} \in \RR^p \setminus \{0\}$ say.
Then notice that
\begin{equation}
\abs{\nabla u_{i'}} a_{i'} \cdot \phi = 0
\Leftrightarrow \norm{a_{i'} \cdot \phi}_{L^2(V^{i'})} = 0,
\end{equation}
where we abbreviated
$a_{i'} \cdot \phi := \sum_{j=1}^p a_{i'j} \phi_j$.
We may normalise the coefficients $a_{i'}$ so as to guarantee
$\abs{a_{i'}} = 1$ and then, possibly after extracting a second subsequence, assume that $a_{i'} \to a \in \Sph^{p-1}$ as $i' \to \infty$.
The resulting strong convergence
$a_{i'} \cdot \phi \to a \cdot \phi$ in $C_c(W)$
combined with $\nvi \rightharpoonup \nv$ yields
$0 = \norm{a_{i'} \cdot \phi}_{L^2(V^{i'})} \to \norm{a \cdot \phi}_{L^2(V)}$,
so that $\norm{a \cdot \phi}_{L^2(V)} = 0$; this contradicts $\norm{a \cdot \phi}_{L^2(V)} > 0$.
From now on take $i$ large enough so that
$(\abs{\nabla u_i} \phi_1,\cdots,\abs{\nabla u_i} \phi_p)$
is linearly independent.
For such large $i$, we may let $t_i \in \Sph^{p-1} \subset \RR^p$
be the (normalised) coefficients of a linear combination
$t_i \cdot \abs{\nabla u_i} \phi
= \sum_{j=1}^p t_{ij} \abs{\nabla u_i} \phi_j$
that maximises the Rayleigh quotient $J_i$:
\begin{equation}
J_i(t_i \cdot \abs{\nabla u_i} \phi)
= \max_{\abs{\nabla u_i} S \setminus \{0\}} J_i \geq \lambda_p^i(W).
\end{equation}
Extract a convergent subsequence $t_{i'} \to t \in \Sph^{p-1}$,
so that $t_{i'} \cdot \phi \to t \cdot \phi$ in $C_c^1(W)$ as $i' \to \infty$.
Lemma~\ref{lem:min_max_convergence_when_C_c_convergence} gives
$J_V(t \cdot \phi) \geq \limsup_{i \to \infty}
\max_{\abs{\nabla u_i} S\setminus \{ 0\}} J_i,$
which in turn is greater than $\limsup_{i \to \infty} \lambda_p^{i}(W)$.
Using \eqref{eq:almost_maximiser_but_in_C_c},
\begin{equation}
\lambda_p(W) + \delta
\geq J_V(t \cdot \phi)
\geq \limsup_{i \to \infty} \lambda_p^i(W),
\end{equation}
and we conclude by letting $\delta \to 0$.
\end{proof}
Lemma~\ref{lem:theorem_holds_if_second_ff_bounded}
has the following immediate corollary.
\begin{cor}
\label{cor:thmA_ii_when_second_ff_is_bounded}
Under the hypotheses of Lemma~\ref{lem:theorem_holds_if_second_ff_bounded},
\begin{equation}
\ind_W B_V \leq \liminf_{i \to \infty} (\ind_W u_i).
\end{equation}
\end{cor}
\section{Proof of the main theorem (Theorem A)}
\label{sec:proof_main_thm}
We briefly recall the context of the proof: $M^{n+1}$ is a closed Riemannian
manifold and $U \subset M$ is an arbitrary open subset. The sequence
of functions $(u_i)$ in $C^3(U)$ satisfies Hypotheses \ref{hypA}, \ref{hypB}
and \ref{hyp:C_index_bound}---the last hypothesis says that
$\ind u_i \leq k$ for all $i$.
To every $u_i$ we associate the $n$--varifold $\vei$ from
\eqref{eq:definition_varifolds}. By Theorem~\ref{lem:properties_V},
we may pass to a subsequence of $(\vei)$ converging
weakly to a stationary integral varifold $V$.
\subsection{Spectrum and index of \texorpdfstring{$V$}{V}: proof of (i) and (ii)}
The main idea, inspired by an argument of Bellettini--Wickramasekera
\cite{BelWic16}, is to fix a compactly contained open subset
$W \subset \subset U \setminus \sing V$ and study
the stability of $u_i$ in open balls covering $W \cap \reg V \neq \emptyset$.
We then shrink the radii of the covering balls to $0$, and prove
the spectral lower bound of
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}\ref{item:main_thm_spec} by
induction on $k$. The upper bound on $\ind B_V$ of
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}\ref{item:main_thm_ind_reg}
is then an immediate consequence.
In the base of induction the $u_i$ are stable in $U$.
Let $\eta \in C_c^1(U)$ be a cutoff function constant equal to $1$ on $W$.
The stability inequality \eqref{eq:second_inequality_stability} gives that
\begin{equation}
\int_W \abs{\Aei}^2 \dnvi
\leq C(M) \dist(W,\partial U)^{-2}
\norm{\vei}(U) \quad \text{for all $i$}.
\end{equation}
Combining this with \eqref{eq:density_weight_measure}
we get $\sup_i \int_W \abs{\Aei}^2 \dnvi < +\infty$,
and thus $\lambda_p(W) \geq \limsup_{i \to \infty} \lambda_p^i (W)$
by Lemma~\ref{lem:theorem_holds_if_second_ff_bounded}.
For the induction step, let $k \geq 1$ and assume that
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}\ref{item:main_thm_spec}
holds with $k-1$ in place of $k$. Consider an arbitrary $W \subset \subset
U \setminus \sing V$ that intersects $\reg V$.
Fix a radius $0 < r < \dist(W,\sing V)$, and pick points
$x_1,\dots,x_N \in W \cap \reg V$ such that
$W \cap \reg V \subset \cup_{j = 1}^N B(x_j,\frac{r}{2})$.
We define the following \emph{Stability Condition} for the cover
$\{B(x_j,\frac{r}{2})\}_{1 \leq j \leq N}$:
\begin{enumerate}[label={(SC)}]
\item \emph{For large $i$, each $u_i$ is stable in every ball
$B(x_1,r),\dots,B(x_N,r)$. \label{item:stability_condition}}
\end{enumerate}
\begin{claim}
\label{claim:SH_regularity}
If for the cover $\{B(x_j,\frac{r}{2})\}_{1 \leq j \leq N}$:
\begin{enumerate}[label = (\alph*)]
\item \label{item:reg_SH_holds}
\ref{item:stability_condition} holds,
then $\lambda_p(W) \geq \limsup_{i \to \infty} \lambda_p^i(W)$;
\item \label{item:reg_SH_fails}
\ref{item:stability_condition} fails, then $\lambda_p(W \setminus \overline{B_r})
\geq \limsup_{i \to \infty} \lambda_p^i(W \setminus \overline{B_r})$
for some ball $B_r\in \{ B(x_j,r)\}$.
\end{enumerate}
\end{claim}
\begin{proof}
\ref{item:reg_SH_holds}
Let $W_r = W \cap \cup_{j=1}^N B(x_j,\frac{r}{2})$, so that
$W_r \cap \reg V = W \cap \reg V$ and hence $\lambda_p(W_r) = \lambda_p(W)$.
Moreover $W_r \subset W$, so $\lambda_p^i(W_r) \geq \lambda_p^i(W)$
for all $i$ by monotonicity of the spectrum.
Therefore it is enough to show that $\lambda_p(W_r)
\geq \limsup_{i \to \infty} \lambda_p^i(W_r)$.
Because \ref{item:stability_condition} holds, summing
\eqref{eq:bound_in_cor_stability_in_balls} over all balls we get
\begin{equation}\label{bound}
\int_{W_r} \abs{\Aei}^2 \dnvi \leq
\frac{NC}{r^2} \norm{\vei}(W_r) \leq \frac{NCE_{0}}{2 r^2 \sigma}
\quad \text{for all $i$},
\end{equation}
so $\lambda_p(W_r) \geq \limsup_{i \to \infty} \lambda_p^i(W_r)$
by Lemma~\ref{lem:theorem_holds_if_second_ff_bounded}.
\ref{item:reg_SH_fails} If \ref{item:stability_condition} fails, then
some subsequence $(u_{i'})$ must be unstable in one of the balls
$B_r \in \{B(x_j,r)\}$,
in other words $\ind_{B_r} u_{i'} \geq 1$ for all $i'$.
On the other hand
\begin{equation}
\ind_{B_r} u_{i'} + \ind_{W \setminus \overline{B_r}} u_{i'}
\leq \ind_W u_{i'}
\end{equation}
because $B_r$ and $W \setminus \overline{B_r}$ are disjoint
open sets.
As $\ind_W u_{i'} \leq k$, we get
$\ind_{W \setminus \overline{B_r}} u_{i'} \leq k-1$
for all $i'$, and we conclude after applying the
induction hypothesis to $(u_{i'})$ in $W \setminus \overline{B_r}$.\end{proof}
\begin{rem}
This argument shows that when \ref{item:stability_condition} fails there is
a ball $B_r\in \{B(x_j,r)\}$ with $\lambda_p(W \setminus \overline{B_r})
\geq \limsup_{i \to \infty} \lambda_{p+1}^i(W)$ for $p \geq k$, and thus also
$\ind_{W \setminus \overline{B_r}} B_V \leq k-1$, but
the induction step only requires the weaker conclusion
from Claim~\ref{claim:SH_regularity}.
\end{rem}
Now consider a decreasing sequence $r_m \to 0$ with
$0 < r_m < \dist(W,\sing V)$ and reason as above with $r = r_m$.
For each $m$, pick points $x_1^m,\dots,x_{N_{m}}^m \in W \cap \reg V$
such that
\begin{equation}
W \cap \reg V \subset \cup_{j=1}^{N_m} B \big(x_j^m,\frac{r_m}{2} \big).
\end{equation}
If \ref{item:stability_condition} holds for a cover
$\{B(x_j^m,\frac{r_m}{2})\}$, then
$\lambda_p(W) \geq \limsup_{i \to \infty} \lambda_p^i(W)$
by Claim~\ref{claim:SH_regularity}, and the induction step
is completed.
Otherwise \ref{item:stability_condition} fails for all constructed covers,
and by Claim~\ref{claim:SH_regularity} there is a sequence $(y_m)$ in
$W \cap \reg V$ with
\begin{equation}
\lambda_p (W \setminus \overline{B}(y_m,r_m))
\geq \limsup_{i \to \infty} \lambda_p^i(W \setminus \overline{B}(y_m,r_m)),
\end{equation}
and thus by monotonicity of the spectrum
\begin{equation}
\label{eq:spec_lower_bound_outside_ball}
\lambda_p (W \setminus \overline{B}(y_m,r_m)) \geq
\limsup_{i \to \infty} \lambda_p^i(W).
\end{equation}
Passing to a subsequence if necessary, we may assume that
$(y_m)$ converges to a point $y \in \overline{W} \cap \reg V$.
If we fix a radius $R > 0$, then $B(y_m,r_m) \subset B(y,R)$ for large enough $m$,
so by monotonicity and \eqref{eq:spec_lower_bound_outside_ball},
\begin{equation}
\lambda_p(W \setminus \overline{B}(y,R))
\geq \limsup_{m \to \infty} \lambda_p(W \setminus \overline{B}(y_m,r_m))
\geq \limsup_{i \to \infty} \lambda_p^i(W).
\end{equation}
The conclusion follows after combining this with
$\lambda_p(W) = \lim_{R \to 0} \lambda_p(W \setminus \overline{B}(y,R))$
from Lemma~\ref{lem:shrinking_balls_spectrum}.
Together with the base of induction, we have proved
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}\ref{item:main_thm_spec}
for all sequences $(u_i)$ with $\sup_i \ind u_i \leq k$ for some $k \in \NN$.
The index bound $\ind_W B_V \leq k$ follows immediately:
as $\ind_W u_i \leq \ind u_i \leq k$ we have $\lambda_{k+1}^i(W) \geq 0$ for all $i$,
so the spectral lower bound gives $\lambda_{k+1}(W) \geq \limsup_{i \to \infty} \lambda_{k+1}^i(W) \geq 0$. Therefore
\begin{equation}
\ind_W B_V = \mathrm{card} \{p \in \NN \mid \lambda_p(W) < 0 \} \leq k.
\end{equation}
As the subset $W$ was arbitrary we get $\ind B_V \leq k$;
this proves
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}\ref{item:main_thm_ind_reg}.
\subsection{Regularity of \texorpdfstring{$V$}{V}: proof of \texorpdfstring{$\dimh \sing V \leq n-7$}{dimh sing V <= n-7}}
\label{subsection:proof_regularity_of_v}
The approach is the same as in the proof of
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}\ref{item:main_thm_spec}--\ref{item:main_thm_ind_reg}
with one difference: we again proceed by induction on $k$, but
we now cover the entire support $\spt \nv$ (including the singular set),
instead of constructing covers a positive distance away from $\sing V$.
The base of induction, where the $u_i$ are stable in $U$,
was proved in~\cite{TonegawaWickramasekera10}.
For the induction step, suppose that $\dimh (U' \cap \sing V) \leq n-7$
holds with $k-1$ in place of $k$, and for arbitrary open subsets $U' \subset U$.
Fix $r > 0$, and choose points $x_1,\dots,x_N \in U \cap \spt \norm{V}$
such that $U \cap \spt \norm{V} \subset \cup_{j = 1}^N B(x_j,r)$.
The \emph{Stability Condition} for the cover
$\{B(x_j,r) \}_{1 \leq j \leq N}$ is defined in the same way as above,
except the radii need not be doubled:
\begin{enumerate}[label={(SC)}]
\item \emph{For large $i$, each $u_i$ is stable in every ball
$B(x_1,r),\dots,B(x_N,r)$. \label{item:stability_condition_reg}}
\end{enumerate}
\begin{claim}
\label{claim:reg_SC_holds}
If for the cover $\{ B(x_j,r) \}_{1 \leq j \leq N}$:
\begin{enumerate}[label = (\alph*)]
\item \label{item:reg_SC_holds} \ref{item:stability_condition_reg} holds,
then $\dimh \sing V \leq n-7$;
\item \label{item:reg_SC_fails} \ref{item:stability_condition_reg} fails,
then $\dimh \sing V \setminus \overline{B_r} \leq n-7$
for some ball $B_r\in \{B(x_j,r)\}$.
\end{enumerate}
\end{claim}
\begin{proof}
\ref{item:reg_SC_holds}
The results from \cite{TonegawaWickramasekera10} give
$\dimh B(x_j,r) \cap \sing V \leq n-7$ for every $j=1,\dots,N$.
As the balls $\{B(x_j,r)\}$ cover $U \cap \spt \norm{V}$, the same holds for
$\sing V$.
\ref{item:reg_SC_fails}
When \ref{item:stability_condition_reg} fails,
there must be a subsequence $(u_{i'})$ that is unstable in one of the balls
$B_r$ of the cover, so that in its complement
\begin{equation}
\ind_{U \setminus \overline{B_r}} u_{i'} \leq k-1
\quad \text{for all $i'$.}
\end{equation}
The conclusion follows from the induction hypothesis applied
to $(u_{i'})$ in $U \setminus \overline{B_r}$.
\end{proof}
Now consider a decreasing sequence $r_m \to 0$. For every $m$, choose
points $x_1^m,\dots,x_{N_m}^m \in U \cap \spt \norm{V}$ such that
$U \cap \spt \norm{V} \subset \cup_{j=1}^{N_m} B(x_j^m,r_m)$.
Then either \ref{item:stability_condition_reg} holds for the cover
$\{ B(x_j^m,r_m)\}$ constructed for some $m$,
in which case we can conclude from
Claim~\ref{claim:reg_SC_holds},
or else there is a sequence of points $(y_m)$ in $U \cap \spt \norm{V}$ for which
\begin{equation}
\dimh \sing V \setminus \overline{B}(y_m,r_m) \leq n-7.
\end{equation}
Possibly after extracting a subsequence, the sequence $(y_m)$ converges to a point
$y \in \overline{U} \cap \spt \norm{V}$.
As $U \setminus \{y\} \subset
\cup_{m \geq 0} U \setminus \overline{B}(y_m,r_m)$, we get
$\dimh \left(\sing V \setminus \{y\}\right) \leq n-7$.
If $n \geq 7$, then $\dimh \sing V \leq n-7$ holds whether or not $y \in \sing V$,
as points are zero-dimensional.
If however $2 \leq n \leq 6$ then we need $\sing V = \emptyset$, which
amounts to the following claim.
\begin{claim}
\label{claim:y_not_in_sing_V}
If $2 \leq n \leq 6$ then $y \notin \sing V$.
\end{claim}
\begin{proof}
Choose $B(y,R) \subset U$, and
consider balls $\{B(y,R_m)\}_{m \in \NN}$ with
shrinking radii $R_m := 2^{-m}R$.
If for some $m$ there is a subsequence $(u_{i'})$ with
\begin{equation}
\ind_{B(y,R_m)} u_{i'} \leq k-1
\quad \text{for all $i'$,}
\end{equation}
then we can conclude from the induction hypothesis.
Otherwise for all $m$
\begin{equation}
\ind_{B(y,R_m)} u_{i} = k
\quad \text{for $i$ large enough,}
\end{equation}
and, since the index is additive over disjoint open subsets and $\ind_{B(y,R)} u_i \leq k$, the $u_i$ are eventually stable in the annulus $B(y,R) \setminus \overline{B}(y,R_m)$.
By Theorem~\ref{thm:main_thm_propagation_of_index_bounds}\ref{item:main_thm_ind_reg}
\begin{equation}
\ind_{B(y,R) \setminus \overline{B}(y,R_m)} B_V = 0
\text{ for all $R_m \to 0$,}
\end{equation}
and thus $\ind_{B(y,R) \setminus \{y\}} B_V = 0$.
By contradiction, suppose that $y \in \sing V$. Then $\ind B_V = 0$
holds in the whole ball $B(y,R)$ away from $\sing V$, and
the regularity results of~\cite{Wickramasekera14} give
$\dimh B(y,R) \cap \sing V \leq n-7$; as $2 \leq n \leq 6$ this forces
$B(y,R) \cap \sing V = \emptyset$, so that $y \notin \sing V$.
\end{proof}
Claim~\ref{claim:y_not_in_sing_V} concludes the induction step;
together with the base of induction, we have proved that
$\dimh \sing V \leq n-7$. This finishes the proof of
Theorem~\ref{thm:main_thm_propagation_of_index_bounds}.
\section{Introduction}
Weak lensing mass calibration is a key to achieving the full potential of galaxy cluster cosmology \citep[for a discussion, see e.g.][]{vonderlinden2014}. Numerous lensing studies have provided cluster mass estimates in recent years \citep[e.g.][]{Gruen2014,vonderlinden2014,2015arXiv150201883H,okabesmith16,2017MNRAS.466.3103S,2017MNRAS.469.4899M,Dietrich17,2018arXiv180500039M}. The statistical power of such analyses is continuously growing with precise lensing source catalogs around large cluster samples coming from DES,\footnote{\href{https://www.darkenergysurvey.org}{https://www.darkenergysurvey.org}} HSC,\footnote{\href{http://hsc.mtk.nao.ac.jp/ssp/}{http://hsc.mtk.nao.ac.jp/ssp/}} and KiDS,\footnote{\href{http://kids.strw.leidenuniv.nl/}{http://kids.strw.leidenuniv.nl/}} and future Euclid,\footnote{\href{https://www.euclid-ec.org/}{https://www.euclid-ec.org/}} LSST,\footnote{\href{https://www.lsst.org/}{https://www.lsst.org/}} or WFIRST\footnote{\href{https://wfirst.gsfc.nasa.gov/}{https://wfirst.gsfc.nasa.gov/}} data.
This improvement in statistics requires an equivalent push for reducing systematic uncertainties in measurement and modeling of lensing signals. State-of-the-art studies account for systematic effects such as deviations of the assumed model of the cluster matter density profile from the truth \citep[e.g.][]{2010arXiv1011.1681B}, systematics in lensing source catalogs \citep[e.g.][]{2017arXiv170801533Z}, excess contamination of the lensing source catalog with cluster member galaxies \citep[e.g.][]{2017MNRAS.469.4899M,2017arXiv170600427M}, and biases and calibration uncertainties in lensing source photometric redshifts inherent to the algorithms used for estimating them \citep[e.g.][]{2016arXiv161001160G,2017arXiv170801532H}. In recent studies, each of these effects causes uncertainty on cluster mass at the level of one to a few per cent \citep[e.g.][]{2017MNRAS.469.4899M}.
In this paper, we investigate another effect on redshift estimates of weak lensing sources -- the bias due to contamination of source photometry from diffuse intracluster light (ICL). In our ICL model, we consider light from the central galaxy and from unbound stars in the cluster potential \citep[see examples of studies or reviews in][]{1951PASP...63...61Z, 1952PASP...64..242Z, 2005ApJ...618..195G, 2005MNRAS.358..949Z, 2015IAUGA..2247903M, 2018MNRAS.474..917M, 2018AstL...44....8K} as well as the light of faint member galaxies below the survey selection threshold.
The diffuse light biases the flux and color measurements of lensing source galaxies, and causes a systematic change in photometric estimates of their redshift distributions. Among other effects, the spectral energy distribution (SED) of passive stellar populations at the cluster redshift introduces a mild cluster rest-frame D4000 break to the observed SED of the lensing source galaxy. These changes in flux and color affect the redshift assigned, especially for star-forming galaxies with weaker break features.
Careful analysis of color-magnitude space could be used to select galaxies less susceptible to these effects, and composite models for blended galaxies could in principle fully account for them. Given the complexity and algorithm dependence of source photometry and redshift estimation, we do not aim to provide a prescription for correcting ICL photo-$z$ contamination in this paper. Our goal is rather to evaluate approximately and, if possible, conservatively, what amplitude of bias we expect and identify the regimes in which it can be ignored.
In section 2, we describe our model for the surface brightness of intracluster light, using the results of \citet{zhang}. In section 3, we derive our estimate for how diffuse intracluster light of given surface brightness biases the lensing amplitude predicted from photometric redshifts, based on \citet{2016arXiv161001160G}. Section 4 combines these two components of the model to estimate the bias in lensing excess density profiles in a typical current (DES-like) and future (LSST-like) survey, as a function of cluster redshift and separation from the cluster center. We conclude the study in section 5.
Estimates of a quantity $q$ are denoted as $\hat{q}$. All magnitudes given in the $u^{\star}g'r'i'z'$ bands are in CFHT/Megacam filters\footnote{\href{http://cfht.hawaii.edu/Instruments/Filters/megaprime.html}{http://cfht.hawaii.edu/Instruments/Filters/megaprime.html}} u.MP9301, g.MP9401, r.MP9601, i.MP9701, z.MP9801 and AB units unless otherwise noted. Surface brightnesses are given in nJy arcsec$^{-2}$ units. These can be converted to counts per arcsec$^2$ at a magnitude zeropoint of 30 with a conversion factor of 3.63~nJy per count, i.e.~3.63~nJy arcsec$^{-2}$ corresponds to 30 mag arcsec$^{-2}$. Cosmological distances for the scaling of lensing signal amplitudes are calculated in a flat $\Lambda$ cold dark matter cosmology with $\Omega_{m,0}=0.27$, and masses are expressed assuming a Hubble constant $H_0=70$ km s$^{-1}$ Mpc$^{-1}$.
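As a quick numerical check of these conventions, the short Python sketch below (purely
illustrative; the helper functions are ours and not part of any analysis pipeline) converts
between surface brightness in nJy arcsec$^{-2}$ and AB mag arcsec$^{-2}$, recovering the
correspondence between 3.63~nJy arcsec$^{-2}$ and 30 mag arcsec$^{-2}$ quoted above.
\begin{verbatim}
import numpy as np

def njy_to_mag_ab(f_njy):
    """AB magnitude of a flux given in nJy (AB zeropoint: 3631 Jy)."""
    return -2.5 * np.log10(f_njy * 1e-9 / 3631.0)

def mag_ab_to_njy(mag_ab):
    """Inverse conversion: AB magnitude to flux in nJy."""
    return 3631.0e9 * 10.0 ** (-0.4 * mag_ab)

print(njy_to_mag_ab(3.63))  # ~30.0 (mag arcsec^-2 for 3.63 nJy arcsec^-2)
print(mag_ab_to_njy(30.0))  # ~3.63 (nJy arcsec^-2 for 30 mag arcsec^-2)
\end{verbatim}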
\section{Intracluster light model}
\label{sec:iclmodel}
The goal of this section is to derive a model for the surface brightness of ICL. We describe it as a function of cluster mass, cluster redshift, and projected physical distance from the cluster center.
The distribution of ICL is a debated topic in the literature. It is believed that the ICL contains a significant amount of stellar mass \citep{2013ApJ...770...57B, 2014MNRAS.437.3787C, 2018MNRAS.475..648P}, comparable to that of cluster central galaxies or the rest of the cluster galaxies. However, measurements of ICL in various samples \citep{2005MNRAS.358..949Z, 2005ApJ...618..195G, 2006AJ....131..168K, 2011MNRAS.414..602T,2012MNRAS.425.2058B, 2013ApJ...778...14G, 2014ApJ...794..137M, 2015MNRAS.448.1162D} do not necessarily find agreement on such a massive component, possibly due to methodological difference (such as differences in filter bands, or surface brightness thresholds and other criteria used to distinguish ICL from galaxies, see e.g. \citealt{2017ApJ...846..139M}, \citealt{2018MNRAS.474..917M} and the discussion in the latter), cluster-to-cluster variations \citep[e.g.,][]{2007AJ....134..466K}, cluster dynamic state \citep[e.g.,][]{2018ApJ...857...79J,2019A&A...622A.183J}, or redshift evolution and the surface brightness limits of ICL \citep[e.g.,][]{2015MNRAS.449.2353B}. By averaging the light profile of $\sim$ 300 optically selected clusters, \citet{zhang} quantified the ICL distribution at $z\sim 0.25$ for clusters more massive than $\sim 2 \times 10^{14} M_\odot$. A comparison of the stellar mass in the ICL component with the total stellar mass in DES Y1 \textsc{redMaPPer} clusters measured in \citet{palmese} shows that the ICL, together with the central galaxy, makes up $\sim 40\%$ of the total cluster stellar mass in the sample from \citet{zhang}. We make use of these measurements to model ICL.
There are three components empirically seen as diffuse light in clusters with the methodology of \citet{zhang}: pure ICL due to stars not bound to any galaxy, the light of faint cluster members below a detection/masking threshold, and scattered light of the cluster galaxies in the outskirts of the point-spread function (PSF).
We will call the first component, dominant in most regimes, \emph{pure} ICL.
Our model for pure ICL is based on the measurements presented in \citet{zhang}. In that work, sky brightness around centers of optically selected clusters is measured on co-added images. The latter are made by masking well-detected galaxies ($i<22.4$) on single-epoch DES images without background subtraction, and combining all frames of the full cluster sample while placing the cluster center at the center of the co-add image. Three effects contaminate the light measured in this way: background contamination due to random field galaxies, light of faint un-masked cluster member galaxies ($i>22.4$), and light of bright cluster member galaxies escaping the applied masking. These components are estimated and subtracted to yield the measurement of pure ICL.
As an additional contaminant, the PSF effect is present at similar levels for every ground-based telescope (see studies in \citealt{1969A&A.....3..455M, 1971PASP...83..199K, 1996PASP..108..699R,2007ApJ...666..663B, 2014A&A...567A..97S} and also discussions in \citealt{zhang}). It is a contaminant to the measurement in \citet{zhang}, yet greatly subdominant in the case of the DECam PSF, given that 97 per cent of light is contained within a 5'' radius of the PSF \citep[][their section 4]{zhang}, and intrinsic ICL is a much larger fraction of total cluster light. We find that the effect of the PSF changes the ICL profile by less than 5 per cent in the relevant radial and redshift ranges.
Our second term, the amount of light in \emph{undetected} galaxies, depends on the magnitude limit to which cluster members are detected and can be successfully deblended. We approximate this as a fixed limiting magnitude $m^{\rm lim}$.
The full function we are trying to model is thus
\begin{eqnarray}
f_{\rm ICL}(M_{200m}, z_{d}, r, m^{\rm lim}) &=& f_{\rm pure\; ICL}(M_{200m}, z_{d}, r) \nonumber \\ &+& f_{\rm undetected}(M_{200m}, z_{d}, r, m^{\rm lim}) \; , \label{eqn:fmodel}
\end{eqnarray}
with cluster mass $M_{\rm 200m}$, cluster redshift $z_{d}$, and projected physical distance $r$ from the cluster center. We describe our model for both terms in the following sections.
\subsection{Model for pure ICL}
\citet{zhang} have measured the pure ICL profile around a richness-redshift selected sample of redMaPPer clusters in DES Y1 data. In this subsection, we convert their measurement of pure ICL at these fixed parameters into a prediction for $f_{\rm pure\; ICL}(M_{200m}, z_{d}, r)$ based on the assumptions that
\begin{itemize}
\item The stellar mass density profile is self-similar, i.e.~indistinguishable between different clusters when expressed as a function of $r/r_{200m}\propto r\times M_{\rm 200m}^{-1/3}$. This is qualitatively consistent with the results of a richness-binned analysis in \citet{zhang}.
\item ICL has a fixed stellar mass density profile in physical coordinates across redshifts, which leads to a re-scaling of stellar mass per solid angle with the square of angular diameter distance. We note that there is an ongoing debate in the literature about the growth of ICL over cosmic time, which is discussed below.
\item ICL is passively evolving. As a function of redshift, it follows the corresponding luminosity evolution.
\end{itemize}
These three assumptions can be written as the three re-scaling terms on the right-hand side of the expression
\begin{eqnarray}
f_{\rm pure\; ICL}^{i'}(M_{200m}, z_{d}, r) &=&
f_{\rm ICL}^{\rm Zhang}\left(r\times \left(\frac{M_{200m}}{M_{200m}^{\rm fid}}\right)^{-1/3}\right) \nonumber \\
&\times& \left(\frac{D_{A}(z_d)}{D_{A}(z_{\rm fid})}\right)^2 \nonumber \\
&\times& 10^{-0.4\left(m_{i',z_d}-m_{\rm fid}\right)} \; .
\label{eqn:ftransform}
\end{eqnarray}
Here $f_{\rm ICL}^{\rm Zhang}(r)$ is the ICL surface brightness of \citet{zhang}, measured for a fiducial mass $M_{200m}^{\rm fid}=3\times10^{14} M_{\odot}$ and redshift $z_{\rm fid}=0.25$. $m_{i',z_d}-m_{\rm fid}$ is the apparent magnitude difference of a passively evolving galaxy seen at redshift $z_d$ in CFHT $i'$ band and at redshift $z_{\rm fid}$ in DES $r'$ band. For the purpose of this paper, we use a \citet{2003MNRAS.344.1000B} model with solar metallicity ($Z=0.02$), no dust, and with star formation beginning 10 Gyr before $z=0$ and subsequently declining as $e^{-t/\tau}$ with $\tau=0.1$ Gyr. The ratio of angular diameters $D_A$ corrects for the change of angular scale of the ICL profile with redshift.
Examples of ICL profiles transformed in cluster redshift, mass and filter band are shown in \autoref{fig:zhangtransform}. In this figure and all that follows, we apply azimuthal averaging and a smoothing of $\pm40$kpc at $r>150$kpc to reduce the noise of the pure ICL measurement of \citet{zhang} at large radii.
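Numerically, the re-scaling of \autoref{eqn:ftransform} amounts to the short sketch below.
It is purely illustrative: \texttt{f\_icl\_zhang} stands for an interpolation of the measured
fiducial profile of \citet{zhang}, and \texttt{delta\_mag\_passive} for the passive-evolution
magnitude difference $m_{i',z_d}-m_{\rm fid}$ from the stellar population model described
above; neither is reproduced here, and the function names are ours.
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.27)   # fiducial cosmology of this work
M_FID, Z_FID = 3.0e14, 0.25              # fiducial mass [M_sun] and redshift

def f_pure_icl(r_kpc, m200m, z_d, f_icl_zhang, delta_mag_passive):
    """Rescale the fiducial pure-ICL profile to (m200m, z_d), in nJy/arcsec^2."""
    # (i) self-similar rescaling of the radial coordinate with M^{1/3}
    r_rescaled = r_kpc * (m200m / M_FID) ** (-1.0 / 3.0)
    # (ii) fixed physical profile: stellar mass per solid angle scales as D_A^2
    da_ratio = (cosmo.angular_diameter_distance(z_d) /
                cosmo.angular_diameter_distance(Z_FID)) ** 2
    # (iii) passive luminosity evolution and band shift, m_{i',z_d} - m_fid
    dimming = 10.0 ** (-0.4 * delta_mag_passive(z_d))
    return f_icl_zhang(r_rescaled) * float(da_ratio) * dimming
\end{verbatim}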
\begin{figure}
\includegraphics[width=\columnwidth]{zhang_transform}
\caption{Pure ICL profiles (solid lines) measured in DES (black) and transformed to higher mass (blue) and redshift (red) according to \autoref{eqn:ftransform}. Dotted lines show the additional ICL due to undetected cluster members (\autoref{eqn:fundet}) in a survey that detects galaxies down to $r=22.5$.}
\label{fig:zhangtransform}
\end{figure}
Note that we assume ICL to not accrete or eject stars over time. It is often argued that ICL forms relatively late, assembling most of its total stellar mass during galaxy interactions after redshift 1.0 \citep{2006ApJ...648..936R,2006ApJ...652L..89M, 2007ApJ...668..826C, 2012MNRAS.425.2058B, 2013ApJ...770...57B, 2014MNRAS.437.3787C, 2016ApJ...816...98Z}. Since our model is based on ICL measurements at $z\sim 0.25$, the luminosity of ICL at higher redshift ($z>0.25$) is likely to be lower than, or at most equal to, the amount predicted from our passive evolution model. Hence, the photometric bias due to ICL at higher redshift ($z>0.25$) is likely to be less severe than that predicted in the paper. Passive evolution is a conservative assumption for the purpose of estimating photo-$z$ bias.
\begin{table}
\centering
\begin{tabular}{cccc}
$a$ & $b_r$ & $b_\lambda$&$b_z$\\
$[\mathrm{nJy}\; \mathrm{arcsec}^{-2}]$ & & & \\
\hline
$ 9.95 \pm 0.12 $ & $ 1.205 \pm 0.010 $ & $-0.831 \pm 0.037$ & $8.96 \pm 0.10$
\end{tabular}\caption{Best--fit values for eq. (\ref{eq:fmem}) for the DES $r'$ band flux from redMaPPer members. The reduced $\chi^2$ is 1.5.}\label{tab:fmem}
\end{table}
Finally, to set the ICL fluxes in other bands, we assume that at any redshift, ICL has the same color as the passive galaxy population. In reality, ICL could be somewhat bluer in color, especially at large cluster-centric distance. This is due to its lower metallicity, related to its build-up from tidal stripping of cluster members and disruption of dwarf galaxy members \citep[e.g.][]{2014ApJ...794..137M,2014A&A...565A.126P,2018MNRAS.475.3348H, 2018MNRAS.474..917M,demaio18,zhang,2019ApJ...871...24C}. We confirm that an excess in blue light of 0.1mag in $g$ band, comparable to the color effect of the expected metallicity offsets, would reduce our predicted bias, yet by less than 5 per cent, at all cluster redshifts studied here.
\subsection{ICL from undetected cluster members}
The light of undetected member galaxies is an additional contribution to diffuse light in the cluster. To add this to our full ICL model, we use the same methodology as \citet{zhang} apply for \emph{subtracting} the faint galaxy contribution towards a measurement of pure ICL (see their section 5). Namely, we assume that at any radius, the fraction of total cluster member light in faint galaxies is determined by a spatially homogeneous luminosity function. From the measured light in bright cluster members we can then predict the undetected contribution.
Formally, we write
\begin{eqnarray}
f_{\rm undetected}(M_{200m}, z_{d}, r, m^{\rm lim}) &=& f_{\rm members}(M_{200m}, z_{d}, r) \nonumber \\ &\times& S(z_d,m^{\rm lim},\infty) \; ,
\label{eqn:fundet}
\end{eqnarray}
where $S(z_d,m^{\rm lim},\infty)$ is the fraction of the integral over the cluster member luminosity function contributed by the faint end from $m^{\rm lim}$ to $\infty$.
For a \citet{1976ApJ...203..297S} luminosity function with faint-end slope $\alpha$,
\begin{equation}
\frac{\mathrm{d}N_{\rm gal}}{\mathrm{d}L}\propto\phi(L)\propto \left(\frac{L}{L^{\star}}\right)^{\alpha}\exp[-L/L^{\star}]\; ,
\end{equation}
the integrated luminosity is given by
\begin{equation}
I(L_1,L_2)=\int_{L_1}^{L_2} L\;\phi(L)\,\mathrm{d}L \propto \left[\Gamma\left(\alpha+2,\frac{L_2}{L^{\star}}\right)-\Gamma\left(\alpha+2,\frac{L_1}{L^{\star}}\right)\right] \; ,
\label{eqn:lint}
\end{equation}
with the lower incomplete gamma function $\Gamma(s,x)=\int_0^x t^{s-1}e^{-t}\,\mathrm{d}t$. In this work, we assume $\alpha=-1$, as motivated by \citet{2014ApJ...785..104R}, thus
\begin{equation}
S(z_d,m_1,m_2)=\Gamma\left(1,10^{0.4(m^{\star}-m_1)}\right)-\Gamma\left(1,10^{0.4(m^{\star}-m_2)}\right)
\end{equation}
Note that for this luminosity function about 18\% of the total flux is contained in galaxies fainter than $0.2L_{\star}$ and more than 99\% of the total flux is contained in members brighter than $m^{\star}+5$.
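These numbers are straightforward to verify numerically; a minimal sketch using the
regularised lower incomplete gamma function from \texttt{scipy} (the helper
\texttt{faint\_fraction} is ours):
\begin{verbatim}
import numpy as np
from scipy.special import gammainc  # regularised lower incomplete gamma P(a, x)

ALPHA = -1.0  # faint-end slope assumed in this work

def faint_fraction(dm):
    """Fraction of the total Schechter luminosity in galaxies fainter than
    m_star + dm, i.e. with L < L_star * 10**(-0.4 * dm)."""
    x = 10.0 ** (-0.4 * dm)          # L / L_star at the limiting magnitude
    return gammainc(ALPHA + 2.0, x)  # equals 1 - exp(-x) for alpha = -1

print(faint_fraction(-2.5 * np.log10(0.2)))  # fainter than 0.2 L_star: ~0.18
print(1.0 - faint_fraction(5.0))             # brighter than m_star + 5: ~0.99
\end{verbatim}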
The characteristic magnitude $m^{\star}$ \citep{2007ApJ...660..221K, 2014ApJ...785..104R} is a function of cluster redshift $z_d$, which we calculate from the \citet{2003MNRAS.344.1000B} model, normalized to match the SDSS DR8 \citep{2011ApJS..193...29A} redMaPPer catalog \citep{2014ApJ...785..104R} at $z=0.2$.
We approximate $f_{\rm members}$ from the light of redMaPPer cluster members $f_{\rm redMaPPer}$ in the ``flux limited'' DES Y1 catalog \citep{2018arXiv180500039M}. RedMaPPer estimates the probability of each galaxy along the line of sight to belong to the red cluster member population above $0.2 L_\star$ based on its position relative to the central galaxy and its color-magnitude relative to the empirically calibrated red sequence at the cluster redshift. The count of these galaxies within a cluster defines the redMaPPer richness $\lambda$. They are detected by DES over the full redshift range of the redMaPPer catalog, allowing us to empirically constrain the evolution of $f_{\rm members}$ with redshift. However, they are only the bright, red subset of the cluster galaxy population. For $f_{\rm members}$ in \autoref{eqn:fundet}, we thus use the luminosity function to re-scale $f_{\rm redMaPPer}$ by a factor $I(0,\infty)/I(0.2 L^{\star},\infty)=1.22$ from \autoref{eqn:lint}, to account for the missing members at $L<0.2 L_\star$. In the relevant radial range, these passive galaxies dominate the cluster member population \citep[e.g.][]{2016MNRAS.457.4360Z}, which is why we do not correct for the missing non-passive members.
From examining these light profiles, we find that the DES $r'$ band flux $f_{\rm members}$ of redMaPPer cluster members approximately follows a power law in projected radial distance, cluster richness and redshift, as:
\begin{equation}
f_{\rm members}(\lambda, z_{d}, r) = a \Big(\frac{r}{\tilde{r}} \Big)^{-b_r} \Big(\frac{\lambda}{\tilde{\lambda}}\Big)^{-b_\lambda} \Big(\frac{1+z_d}{1+\tilde{z}_d}\Big)^{-b_z} \,,\label{eq:fmem}
\end{equation}
where $\tilde{r}=240$ kpc, $\tilde{\lambda}=40$ and $\tilde{z}_d=0.5$ are the pivot values for projected radius, richness and cluster redshift, and $a$ and $b_{r/\lambda/z}$ are our fit parameters for the overall amplitude and the power-law exponents, respectively.
Eq. (\ref{eq:fmem}) is fit between 20 and 1000 kpc; the best-fitting results from a $\chi^2$ minimization are given in Table \ref{tab:fmem}, where $r$ is the comoving projected distance from the cluster center in kpc. The flux used is the SExtractor \texttt{AUTO} measurement in DES $r'$ band \citep{2017arXiv170801531D}, weighted for each galaxy in the redMaPPer catalog by the corresponding membership probability. The masked regions are taken into account when computing the flux per area, and the errors on the flux profiles are computed through a jackknife resampling. The bins in richness ($20<\lambda<140$) and redshift ($0.1<z<0.8$) are chosen to have a similar number of clusters in most bins.
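For reference, a direct transcription of eq.~(\ref{eq:fmem}) with the best-fitting values
of Table~\ref{tab:fmem} reads as follows (illustrative only; the function name is ours):
\begin{verbatim}
# Best-fitting parameters from the table above (DES r' band, flux in nJy/arcsec^2)
A_FIT, B_R, B_LAMBDA, B_Z = 9.95, 1.205, -0.831, 8.96
R_PIV, LAMBDA_PIV, Z_PIV = 240.0, 40.0, 0.5  # pivots: kpc, richness, redshift

def f_members(r_kpc, richness, z_d):
    """Surface brightness of redMaPPer member light, in nJy/arcsec^2."""
    return (A_FIT
            * (r_kpc / R_PIV) ** (-B_R)
            * (richness / LAMBDA_PIV) ** (-B_LAMBDA)
            * ((1.0 + z_d) / (1.0 + Z_PIV)) ** (-B_Z))

# e.g. a lambda = 40 cluster at z_d = 0.25, 500 kpc from the centre
print(f_members(500.0, 40.0, 0.25))
\end{verbatim}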
To convert $f_{\rm members}(\lambda, z_{d}, r)$ to $f_{\rm members}(M_{200m}, z_{d}, r)$ we apply the $\langle\ln\lambda|M_{500c}\rangle$ relation of \citet{2015MNRAS.454.2305S}. We convert between $M_{200m}$ and their $M_{500c}$ using the mass-concentration relation of \citet{2008MNRAS.390L..64D}. We note that $\langle\lambda|M\rangle\neq e^{\langle\ln\lambda|M\rangle}$ due to intrinsic scatter in $\lambda$ at fixed $M$. For the purpose of this paper and consistency with our scaling of the \citet{zhang} model for pure ICL, we set the amplitude of the scaling relation such that $\langle\lambda|M_{200m}=3\times10^{14}M_{\odot},z=0.25\rangle$=30.
The $m_{\rm lim}$ to use with \autoref{eqn:fundet} depends on the survey and detection strategy. For the DES Y1 Gold catalog \citep[][their Figure 8]{2017arXiv170801531D}, a conservative $m_{\rm lim}$ for the purpose of estimating the contribution of cluster members to diffuse light is a DES $i'$ band magnitude of 22.5.
We model the light of undetected members at a given cluster-centric radius as homogeneously distributed, rather than concentrated at the positions of the actual galaxies. If the surface brightness of ICL at the positions of actual undetected galaxies is small enough so that the linearity of photo-$z$ bias found in \autoref{sec:dbdf} holds, the predicted mean bias does not depend on this assumption of homogeneity. For member galaxies with larger surface brightness, non-linear blending effects will likely play a role - we consider these to be an issue separate to the ICL studied in this paper.
We note that the contribution of undetected cluster members becomes important at large cluster mass, high redshift, and for a shallow survey (see dotted lines in \autoref{fig:zhangtransform}). For our DES parameters, it contributes the majority of ICL for a cluster of $M_{200m}/M_{\odot}=10^{15}$ at $z_d>0.6$. For lower mass or redshift in DES, it is a subdominant component -- contributing, in the relevant regimes, between 10 and 40 per cent of ICL. For LSST it is negligible due to the completeness down to fainter magnitudes.
\section{Lensing photo-$z$ biases from diffuse light}
The goal of this section is to derive a model for the bias in the lensing measurement of cluster surface matter density due to leakage of ICL into lensing source galaxy photometry used for estimating source redshift ($z_s$) distributions. The source redshift dependent quantity needed for lensing measurements of a matter distribution at redshift $z_d$ is the predicted amplitude of the lensing signal. This amplitude is proportional to
\begin{equation}
\beta=D_{ds}/D_{s} \; ,
\end{equation}
the ratio of angular diameter distances between lens and source $D_{ds}$ and to the source $D_{s}$, defined as the ratios of physical to angular sizes of objects at $z_s$ seen by observers at $z_d$ and 0, respectively. The true value of $\beta$ could be calculated if redshifts were known for sources and lenses. In practice, the source redshift distributions are estimated from their photometry. Any bias in photo-$z$ thus manifests as a bias in the amplitude $\hat{\beta}$ estimated from them. In this work, we therefore primarily consider biases in $\hat{\beta}$, rather than in the redshift distribution more generally.
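For a single lens--source pair, $\beta$ can be evaluated directly from a cosmology; the sketch below uses \texttt{astropy} with placeholder flat $\Lambda$CDM parameters, not necessarily the cosmology adopted elsewhere in this work.
\begin{verbatim}
# Minimal sketch: beta = D_ds / D_s for known lens and source redshifts,
# assuming a flat LCDM cosmology with placeholder parameters.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def beta(z_d, z_s):
    """Lensing efficiency D_ds / D_s; zero for sources in front of the lens."""
    if z_s <= z_d:
        return 0.0
    D_ds = cosmo.angular_diameter_distance_z1z2(z_d, z_s)
    D_s = cosmo.angular_diameter_distance(z_s)
    return float(D_ds / D_s)

print(beta(0.25, 0.8))
\end{verbatim}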
We define this bias as
\begin{equation}
\left(\hat{\beta}/\beta\right)-1\approx F(f_{\rm ICL}, z_{d},\mathrm{source\;magnitude\;limit}) \; ,
\label{eqn:dbdf}
\end{equation}
where $f_{\rm ICL}$ is the surface brightness of intracluster light present at the position of the lensing source galaxy in question and $z_d$ is the redshift of the lens. $F$ is the model for the ICL-related bias we derive in this section. The larger the statistical power of a lensing survey, the smaller the bias that can be tolerated before it significantly affects the analysis. Current (and future) surveys aim for multiplicative biases below the few (to one) per cent level.
In the remainder of this section we describe the basic lensing formalism, followed by our framework for estimating the impact of ICL on empirical redshift estimates in \autoref{sec:betatree}. We then develop the right-hand side of \autoref{eqn:dbdf} in \autoref{sec:dbdf}. In this, $f_{\rm ICL}$ denotes the level of ICL surface brightness at the position of the lensing source population -- the model for $f_{\rm ICL}$ as a function of cluster mass, redshift, and distance from the cluster center was presented in \autoref{sec:iclmodel}.
The image of a lensing source (or ensemble of sources) located on some annulus around a gravitational lens at angular diameter distance $D_{d}$ from the observer is subject to tangential gravitational shear \citep[e.g.][for a review]{2001PhR...340..291B}
\begin{equation}
\gamma_t=\Sigma_{\rm crit}^{-1}\times\Delta\Sigma= \frac{4\pi G D_{d}}{c^2}\times\beta\times\Delta\Sigma \; .
\label{eq:betagamma}
\end{equation}
The excess surface density $\Delta\Sigma$ at radius $r$ is the difference of the mean mass per area \emph{inside} and \emph{on the edge} of a circle of radius $r$,
\begin{equation}
\Delta\Sigma(r)=\langle\Sigma(<r)\rangle-\Sigma(r) \; .
\end{equation}
$\hat{\beta}$ can be estimated from the photo-$z$ redshift probability density $\hat{p}(z)$ as
\begin{equation}
\hat{\beta}=\int\hat{p}(z) \frac{D_{ds}(z_d,z)}{D_s(z)}\; \mathrm{d}z \; .
\label{eqn:pz}
\end{equation}
For the mean shear signal of an ensemble of lensing source galaxies $i$, each with weight $w_i$, this can be written as
\begin{equation}
\hat{\beta}=\frac{\sum_i w_i\times\hat{\beta}_i}{\sum_i w_i} \; ,
\label{eqn:nz}
\end{equation}
where $w_i$ is a source weight and $\hat{\beta}_i$ the estimated $\beta$ of source $i$ from \autoref{eqn:pz}. For the optimal (minimum variance) estimator of mean shear or surface mass overdensity, $w_i\propto\beta_i/\sigma_{e,i}^2$, or, in practice, $\propto\hat{\beta}_i/\sigma_{e,i}^2$ where $\sigma_{e}^2$ is the shape noise variance including intrinsic and measurement noise.
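A direct numerical transcription of \autoref{eqn:pz} and \autoref{eqn:nz} is given below; it assumes the function \texttt{beta(z\_d, z\_s)} from the sketch above and takes $\hat{p}(z)$ as a density sampled on a redshift grid.
\begin{verbatim}
# Minimal sketch of Eqs. (pz) and (nz). Assumes beta(z_d, z_s) from the sketch
# above; p_z is the photo-z probability density sampled on z_grid.
import numpy as np
from scipy.integrate import trapezoid

def beta_hat_single(z_grid, p_z, z_d):
    """Estimate beta for one source by integrating p(z) * D_ds/D_s (Eq. pz)."""
    b = np.array([beta(z_d, z) for z in z_grid])
    p = np.asarray(p_z) / trapezoid(p_z, z_grid)   # normalize p(z)
    return trapezoid(p * b, z_grid)

def beta_hat_ensemble(beta_hats, sigma_e2):
    """Lensing-weighted ensemble mean (Eq. nz), w_i ~ beta_hat_i / sigma_e,i^2."""
    beta_hats = np.asarray(beta_hats)
    w = beta_hats / np.asarray(sigma_e2)
    return np.sum(w * beta_hats) / np.sum(w)
\end{verbatim}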
In the case of an unbiased estimate $\hat{\beta}$, this connects mean tangential shear $\langle\gamma_t\rangle$ and excess surface mass density $\Delta\Sigma$ as
\begin{equation}
\langle\gamma_t\rangle=\frac{\sum_i w_i\times\gamma_{t,i}}{\sum_i w_i}=\frac{4\pi G D_{d}}{c^2}\times\hat{\beta}\times\Delta\Sigma
\end{equation}
Thus, for example, if $\hat{\beta}$ is biased low, e.g.~due to a bias in photo-$z$, the estimated $\Delta\Sigma$ is biased high, and vice versa. This is the source of bias we evaluate in the following. For an indirect impact of the bias in photometric redshifts via the estimation of cluster member contamination of the lensing source sample, see Appendix~A.
\subsection{Framework for empirical redshift estimation}
\label{sec:betatree}
Our framework for estimating the effect of ICL on photo-$z$ is a simple empirical method that gives an unbiased estimate of $p(z|\bm{m})$, where $\bm{m}$ is a vector of colors and magnitude. The accuracy of its redshift distribution estimates is limited only by selection effects or sample variance of the available reference sample with known redshifts. In this work, we use the same sample for reference and bias determination, which cancels these effects: without ICL, the redshift distribution recovery is perfect by construction. Given this, and a model for the color of and total flux from diffuse light that enters each source, we can estimate how much the $\hat{\beta}$ of \autoref{eqn:nz} will be biased. We use this simple empirical method as a proxy for any photometric redshift estimation that could be performed using similar wide-band survey data, e.g.~from DES or, with the caveat that the fainter magnitude limit is not fully covered by our CFHT-based reference catalogs, LSST.
The empirical method is a simple decision tree described in detail in \citet{2016arXiv161001160G} and publicly available at \href{https://github.com/danielgruen/betatree/}{https://github.com/danielgruen/betatree/}. Given a complete reference sample of galaxies with photometric measurements in a set of bands and with known true redshift, the decision tree provides an unbiased and close to optimal estimate of $p(z)$ based on the color-magnitude information in any subset of these bands. The method splits the color-magnitude space spanned by the subset of bands into hyper-rectangles (leaves of the decision tree), and assigns to each galaxy as its $p(z)$ the histogram of true redshifts of reference galaxies in that leaf. We make the simplifying assumption that the lensing source sample is a magnitude limited sample of galaxies, i.e.~ignore additional explicit or implicit selections on pre-seeing size, shape or profile that are commonly present in such catalogs. For the purpose of these tests, and because no sufficiently faint magnitude limited sample of galaxies with spectroscopic $z$ is available, we use the same photo-$z$ sample and (unless otherwise noted) the same settings of the tree as in \citet{2016arXiv161001160G}. The galaxies used are measured from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) Deep fields, four fields with one sq.~deg.~area each, for which 8-band photometry from CFHTLS and the WIRCam Deep Survey (WIRDS) is available. The sample is complete to $i'\approx25$, although we use a shallower magnitude limited sample for all analyses to follow. The combination of high signal-to-noise photometry for magnitude limits relevant for lensing source samples and large volume relative to e.g.~the COSMOS field make the sample well suited for our purpose.
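To make the procedure concrete, the following sketch implements a heavily simplified stand-in for this approach: it partitions color--magnitude space into hyper-rectangles using per-feature quantile bins (rather than an optimized tree) and assigns to each cell the redshifts of the reference galaxies it contains. It is not the published \texttt{betatree} code linked above.
\begin{verbatim}
# Simplified stand-in for the hyper-rectangle idea (per-feature quantile bins,
# not an optimized decision tree; see the betatree repository for the real code).
import numpy as np

def build_leaves(ref_features, ref_z, n_bins=8):
    """ref_features: (N, d) colors/magnitudes; ref_z: (N,) true redshifts."""
    edges = [np.quantile(ref_features[:, j], np.linspace(0, 1, n_bins + 1))
             for j in range(ref_features.shape[1])]
    idx = np.stack([np.clip(np.searchsorted(edges[j], ref_features[:, j]) - 1,
                            0, n_bins - 1)
                    for j in range(ref_features.shape[1])], axis=1)
    leaves = {}
    for key, z in zip(map(tuple, idx), ref_z):
        leaves.setdefault(key, []).append(z)
    return edges, {k: np.array(v) for k, v in leaves.items()}

def leaf_redshifts(features, edges, leaves):
    """Redshifts of reference galaxies in this galaxy's cell; their histogram
    plays the role of the p(z) assigned to the leaf."""
    key = tuple(int(np.clip(np.searchsorted(e, f) - 1, 0, len(e) - 2))
                for f, e in zip(features, edges))
    return leaves.get(key, np.array([]))
\end{verbatim}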
\bigskip
Operationally, we estimate the bias of photo-$z$ due to intracluster light with the following procedures.
\begin{enumerate}
\item Build a decision tree from magnitude limited sample $20\leq i'\leq24$ (23.5, 24.5 as variants), optimized for a cluster redshift $z_d$, from $g'r'i'z'$ (also $u^{\star}g'r'i'z'$ as a variant) color-magnitude information. The magnitude limits are chosen to approximately match present and future samples of lensing source galaxies \citep[e.g.][]{2017arXiv170801533Z,2018PASJ...70S..25M}.
\item Estimate $\hat{\beta}$ in each leaf of the decision tree as the mean of $\beta_i$ of all reference galaxies in that leaf.
\item Determine the ICL $X-i'$ color $c_X$ as the median of the $X-i'$ color of all galaxies in the reference catalogs with $z\in[z_d-0.02,z_d+0.02]$ and a best-fit spectral energy distribution (SED) of a passive galaxy, where $X$ is one of ($u^{\star}$)$g'r'z'$. Note that this assumes that the ICL has the same SED as a red galaxy: this condition is satisfied in the clusters studied in \citet{zhang}, where the ICL colors are consistent with those from redMaPPer \citep{2014ApJ...785..104R} centrals within the inner 10 kpc, becoming bluer in the outer regions but still consistent with the red sequence galaxy population. Likewise, \citet{demaio18} found that ICL colors are consistent with red sequence galaxies over a wider redshift range ($0.29<z<0.89$) using HST imaging.
\item Generate ICL-contaminated fluxes of each reference galaxy as $f_X^{\rm cont.}=f_X+\mu_A\times A\times f_{ICL,i}\times 10^{-0.4 c_X}$; a minimal numerical sketch of this step is given after the list. In this, $A$ is defined to be the area of a circle with the post-seeing half-light radius of the galaxy. In our tests, we homogenized the data to a seeing half-light radius of 0.4'' to make this independent of the observing conditions of the CFHTLS-Deep fields. The factor $\mu_A$ accounts for the effective sensitivity of a method of measuring galaxy fluxes to diffuse light. We note that $\mu_A$ will depend strongly on the method used for extracting fluxes. By running \textsc{SExtractor} in dual-image mode with a detection image contaminated with diffuse flux, we find $\mu_{A}=2.5$ for \texttt{DETMODEL} model-fitting fluxes. \texttt{DETMODEL} fluxes are derived by fitting PSF-convolved S\'ersic profile models to the galaxy images in a cut-out region. In our configuration, we follow the DES convention of fitting a PSF-convolved single exponential profile to the galaxies \citep{2017arXiv170801531D}. The value of $\mu_{A}=2.5$ is thus what we use in the following analysis.\footnote{In \texttt{AUTO} photometry, regardless of the explicit background subtraction settings, \textsc{SExtractor} measures and subtracts a background flux estimate locally. In this mode, it is hence insensitive to a diffuse background, i.e. $\mu_{A}^{\rm AUTO}=0$. There are other reasons, in particular the sensitivity to different point-spread functions in different bands, that make \texttt{AUTO} photometry problematic for accurate multiband flux measurements in photometric surveys.}
\item Re-assign reference galaxies to leaves of the tree generated in (i), based on the contaminated color-magnitude information.
\item Estimate the biased mean $\hat{\beta}$ for the contaminated case as the lensing-weighted mean (i.e.~with weight $w\propto\hat{\beta}$ of the leaf a galaxy falls into) of the respective $\hat{\beta}$ for each galaxy as determined in (ii).
\item Estimate unbiased mean $\beta$ by weighting galaxies by their biased $\hat{\beta}$ as in (vi), but using their true reference redshifts to determine the $\beta$ to average.
\end{enumerate}
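As noted in step (iv), the following is a minimal numerical transcription of that contamination step; the example galaxy flux, size and color are placeholders, while $\mu_A=2.5$ is the \texttt{DETMODEL} sensitivity found above.
\begin{verbatim}
# Minimal sketch of step (iv): perturb a galaxy's band flux with diffuse light.
# f_band and f_icl are in matching flux units (here nJy and nJy/arcsec^2),
# r_half is the post-seeing half-light radius in arcsec, c_X the ICL X - i'
# color, and mu_A = 2.5 the DETMODEL sensitivity quoted in the text.
import numpy as np

def contaminate_flux(f_band, r_half_arcsec, f_icl, c_X, mu_A=2.5):
    area = np.pi * r_half_arcsec**2       # circle with the half-light radius
    return f_band + mu_A * area * f_icl * 10.0**(-0.4 * c_X)

# example: roughly an i' = 24 source (~910 nJy), 0.4'' half-light radius,
# contaminated by 14 nJy/arcsec^2 of ICL with zero color term
print(contaminate_flux(910.0, 0.4, 14.0, c_X=0.0))
\end{verbatim}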
The ratio of the $\hat{\beta}$ of step (vi) and the unbiased true $\beta$ of step (vii), minus 1, is the bias we are trying to determine. Note that at $f_{\rm ICL}=0$, the two are, by construction, identical. In other words, the decision tree is an unbiased $\beta$ estimator unless the sample is affected by photometric biases or selection effects.
\subsection{Model}
\label{sec:dbdf}
In this section, we apply the scheme laid out in \autoref{sec:betatree} to derive an expression for the bias in $\Delta\Sigma$ as a function of ICL surface brightness, lens redshift, and magnitude limit of the source sample (\autoref{eqn:dbdf}).
Judging from the surface brightness of ICL observed in \citet{zhang}, the relevant range is $f_{\rm ICL}<40$~nJy~arcsec$^{-2}$ ($>27.4$~mag~arcsec$^{-2}$) as observed outside $\approx100$~kpc. In this range and given the sizes and magnitudes of lensing source galaxies,\footnote{Note that an $i'=24.5$ galaxy has a flux of 575~nJy, spread out over few arcsec$^2$.} ICL is a perturbation on top of the galaxies' intrinsic flux, such that we can attempt to approximate the effect of ICL on photo-$z$ as linear. We study the linearity of biases in $\hat{\beta}$ at a range of lens redshifts $z_d=0.2\ldots0.8$ in steps of 0.1 and limiting magnitudes of the source sample $m_{\rm lim}\in\{23.5,24.0,24.5\}$. \autoref{fig:boff} shows selected results, illustrating that the bias is indeed well approximated as linear in $f_{\rm ICL}$ for the most relevant regimes. Only for the highest redshift clusters are non-linear effects visible at larger ICL flux levels. This is potentially related to the fact that the relevant source populations that are lensed by the cluster are located at high redshift. Their characteristic apparent magnitude is thus relatively faint and more susceptible to change due to ICL leakage. In the following, we will assume the bias on $\hat{\beta}$ due to ICL is linear in ICL flux, and use the measurement at $f_{\rm ICL}=14$~nJy~arcsec$^{-2}$ ($4$ counts arcsec$^{-2}$ at ZP=30) to determine the slope. This choice is a trade-off: the added flux due to ICL is large enough to allow a high signal-to-noise measurement of the bias, but small enough that it does not suffer from non-linear effects or lead to problems due to sources that are below the $m<25.5$ limit of the CFHTLS-Deep catalog being boosted above the $m_{\rm lim}=23.5\ldots24.5$ magnitude limit of our source sample.
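The surface brightness values quoted here and in the figure captions are straightforward AB-magnitude conversions; a short sketch is given below for convenience (the $4$ counts arcsec$^{-2}$ at ZP$=30$ correspond to the same $\approx14$~nJy~arcsec$^{-2}$).
\begin{verbatim}
# Minimal sketch: convert ICL surface brightness between nJy/arcsec^2 and
# AB mag/arcsec^2 (zero-point independent).
import numpy as np

AB_ZERO_NJY = 3631.0e9   # AB zero point expressed in nJy

def njy_to_mag(f_njy):
    return -2.5 * np.log10(f_njy / AB_ZERO_NJY)

print(njy_to_mag(40.0))    # ~27.4 mag/arcsec^2
print(njy_to_mag(14.0))    # ~28.5 mag/arcsec^2
print(njy_to_mag(3.63))    # ~30.0 mag/arcsec^2 (cf. Fig. caption)
\end{verbatim}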
\begin{figure}
\includegraphics[width=\columnwidth]{flux_bias_griz_24_0}
\caption{Bias in $\Delta\Sigma$ (defined as the negative of the bias in $\hat{\beta}$) from $g'r'i'z'$ photo-$z$ bias for a sample of source galaxies at $20\leq i'\leq24$. Differently colored lines and points show results for different lens redshifts. 3.63~nJy arcsec$^{-2}$ corresponds to 30 mag arcsec$^{-2}$.}
\label{fig:boff}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{dbdf_model}
\caption{Slope of $\Delta\Sigma$ bias with ICL surface brightness as a function of lens redshift. Circle symbols show measurements made as in \autoref{fig:boff}, upward and downward triangles the same measurements, but for deeper and shallower source samples. Solid line shows a quadratic model fit at the fiducial magnitude limit, dashed and dotted lines are the same model re-scaled by $2^{m_{\rm lim}-24}$, where $m_{\rm lim}$ is the limiting magnitude of the sample.}
\label{fig:dbdf_model}
\end{figure}
For a given source magnitude limit, the slope of bias with ICL surface brightness is a function of lens redshift. By measuring the slope at a range of redshifts, we empirically find that it can be described well, within the range of $z_d=0.2\ldots0.8$, by a quadratic function of $z_d$. Measurements and quadratic model (circles and solid line) are shown in \autoref{fig:dbdf_model}.
In addition, we empirically find that a re-scaling of the model by $2^{m_{\rm lim}-24}$ describes the measurements reasonably well at magnitude limits in the range $m_{\rm lim}\in(23.5,24.5)$ (downward and upward triangles with model as dotted and dashed curve in \autoref{fig:dbdf_model}). The following is the proposed model for $g'r'i'z'$, fitted in $z_d\in(0.2,0.8)$, $m_{\rm lim}\in(23.5,24.5)$:
\begin{equation}
\label{eqn:dbdfmodel}
\frac{\mathrm{d}(\hat{\beta}/\beta)}{\mathrm{d}f_{\rm ICL}}\times [\mu\mathrm{Jy}\;\mathrm{arcsec}^{-2}] \approx\left(2.5 z_d^2-1.1z_d+0.028\right)\times 2^{m_{\rm lim}-24} \; .
\end{equation}
Repeating the same analysis including $u^{\star}$ band gives a somewhat smaller amplitude of
$(1.2z_d^2-0.063 z_d +0.10)$,
to be rescaled the same way as a function of magnitude limit.
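Both fits can be evaluated as follows; the sketch simply transcribes \autoref{eqn:dbdfmodel} and its $u^{\star}g'r'i'z'$ variant, returning the slope per $\mu$Jy~arcsec$^{-2}$ of ICL.
\begin{verbatim}
# Minimal sketch of Eq. (dbdfmodel): slope of the beta_hat/beta bias per
# micro-Jy/arcsec^2 of ICL, for the griz and ugriz fits quoted in the text.
def dbeta_df_icl(z_d, m_lim, with_u_band=False):
    if with_u_band:
        poly = 1.2 * z_d**2 - 0.063 * z_d + 0.10
    else:
        poly = 2.5 * z_d**2 - 1.1 * z_d + 0.028
    return poly * 2.0**(m_lim - 24.0)

print(dbeta_df_icl(z_d=0.5, m_lim=23.5))
print(dbeta_df_icl(z_d=0.5, m_lim=24.5, with_u_band=True))
\end{verbatim}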
\section{Bias predictions}
Using the models described in sections 2 and 3, we study the bias in $\Delta\Sigma$ profiles due to contamination of source photometry with diffuse light around clusters.
Because the ICL model depends on the cluster member detection limit (section 2), and the bias per unit ICL flux depends on the source population (section 3), we need to define a limiting magnitude for the lensing source catalog and for the detection of cluster members in a given survey. This, in addition to the mass and redshift of a cluster sample, determines our model prediction for the ICL-related photo-$z$ bias via equations (\ref{eqn:dbdfmodel}) and (\ref{eqn:fmodel}).
We study two cases, and again choose conservative limits (i.e.~faint limiting magnitudes for the lensing source catalog and conservative thresholds for complete cluster member detection): (1) an ongoing $griz$ wide-area survey, similar to DES, with lensing sources measured down to $r\approx23.5$ \citep{2017arXiv170801533Z} and cluster members completely detected and deblended down to $r\approx22.5$ \citep{2017arXiv170801531D}; and (2) an ongoing or future deep wide-area $ugriz$ survey, similar to HSC or LSST, with lensing sources measured and cluster members completely detected and deblended down to $r\approx25$.
Results for both cases are shown in \autoref{fig:model}, for clusters of two different masses approximately spanning the range currently used for optical cluster cosmology with \textsc{redMaPPer}. These should be compared to the statistical uncertainties of present and future surveys (currently of the order of a few per cent, optimistically of the order of one per cent) for a sense of whether the biases are relevant.
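Schematically, these predictions combine the diffuse-light surface brightness profile of section 2 with the slope of \autoref{eqn:dbdfmodel}. The sketch below illustrates the combination only; the ICL profile it uses is a toy power law, not the calibrated model, and \texttt{dbeta\_df\_icl} is the function from the sketch at the end of the previous section.
\begin{verbatim}
# Minimal sketch of combining the two ingredients into a DeltaSigma bias
# profile. f_icl_profile(r_kpc) must return the total diffuse surface
# brightness in micro-Jy/arcsec^2; the toy power law below is a placeholder
# for the calibrated model of section 2. Uses dbeta_df_icl from above.
import numpy as np

def delta_sigma_bias(r_kpc, f_icl_profile, z_d, m_lim, with_u_band=False):
    # the bias in DeltaSigma is approximately minus the bias in beta_hat
    return -dbeta_df_icl(z_d, m_lim, with_u_band) * f_icl_profile(r_kpc)

toy_icl = lambda r: 0.05 * (r / 100.0)**(-1.5)   # placeholder, micro-Jy/arcsec^2
r = np.array([200.0, 300.0, 500.0, 1000.0])
print(delta_sigma_bias(r, toy_icl, z_d=0.5, m_lim=23.5))
\end{verbatim}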
\begin{figure*}
\includegraphics[width=\columnwidth]{model_des}
\includegraphics[width=\columnwidth]{model_lsst}
\includegraphics[width=\columnwidth]{model_des_15}
\includegraphics[width=\columnwidth]{model_lsst_15}
\caption{Predictions for the bias in $\Delta\Sigma$ profiles due to ICL-related source photo-$z$ bias for a DES-like (left-hand panels) and LSST-like (right-hand panels) survey, and a cluster of $M_{200m}/M_{\odot}=3\times10^{14}$ (top) and $10^{15}$ (bottom). The smallest scales (e.g.~$r<200 h_{70}^{-1}$ kpc in \citealt{2018arXiv180500039M}) that are most heavily affected by ICL are commonly excised from cluster lensing analyses for other reasons.}
\label{fig:model}
\end{figure*}
We find that for a DES-like survey, even under the conservative assumptions made above, the $\Delta\Sigma$ signal estimated outside a $200$\,kpc radius is biased mostly below the one per cent level, and only in extreme cases above the two per cent level, even for very massive and high redshift clusters. This implies that at the scale cuts and uncertainties of present DES cluster lensing studies \citep{2018arXiv180500039M}, ICL-related photo-$z$ bias is highly subdominant compared to the 5 per cent combined statistical and systematic uncertainty.
For a significantly deeper survey like LSST, biases at the level of two per cent are possible on the small to intermediate scales of $200$--$300$\,kpc that we hope to use for cluster lensing purposes. This is driven by the larger biases incurred by the fainter sources measured in these surveys. The availability of $u$ band information in addition to $griz$ somewhat alleviates the effect. Given the conservative assumptions made in our study, it is conceivable that the actual bias is only a fraction of our model prediction. But at least for the massive end of the clusters studied with these surveys, diffuse light photo-$z$ contamination requires either more detailed investigation or more conservative cuts in radius or limiting source magnitude.
\subsection{Limitations of our model}
In the context of these predictions, we summarize the simplifications made in our model, and their likely effect on the bias in practical applications.
Simplifications, i.e.~assumptions we had to make due to limited understanding of physical or algorithmic details:
\begin{itemize}
\item \textbf{Generic photo-$z$ algorithm:} For the purpose of this test, we used a simple empirical photo-$z$ algorithm. Assuming that all photo-$z$ algorithms estimate the same relation of multiband flux and redshift, results for other algorithms would likely be similar, yet not equal. We have made simplified tests using BPZ \citep{benitez2000,2017arXiv170801532H} that indicate that this is indeed the case.
\item \textbf{Leakage of ICL into galaxy photometry:} We assumed leakage to be proportional to a circular aperture with the post-seeing half-light radius of the galaxy. This is an approximation of how a matched aperture or, equivalently, model fitting algorithm for photometric measurements might perform. While we match the leakage scale in this work to the mean observed change in \textsc{SExtractor} \texttt{DETMODEL} flux, other photometry measurement algorithms might show very different results, and galaxy morphology might affect the leakage scale in a galaxy type and redshift dependent way. Also, small scale background subtraction could greatly reduce (or even invert) the effect. It is advisable that the leakage of diffuse light into galaxy photometry is estimated from image simulations for any lensing analysis that aims at a per cent level accuracy.
\item \textbf{Linearity of $\beta$ bias as a function of ICL flux:} Our model assumes that the change in estimated lensing amplitude $\hat{\beta}$ is linear in the ICL surface brightness. While this is appropriate for the relevant range of mean ICL surface brightness, inhomogeneity (i.e.~due to undetected yet localized cluster members) could affect the photo-$z$ more or less than predicted here. At the level of deblending possible with present and future lensing surveys, we expect this effect to be subdominant.
\item \textbf{Pure red cluster member population:} We have assumed that the cluster galaxy population only contains passive galaxies, similar in color to the ones identified by the \textsc{redMaPPer} algorithm. In practice, clusters contain star forming galaxies, especially at lower mass and higher redshift. The light of the undetected members among them is likely to have a similar, but not quite equal, effect on photo-$z$ bias as the light of red members. On the radial scales considered here, star-forming members are, however, not a majority of the population. In addition, the light of undetected cluster members is a subdominant component relative to pure ICL, hence we do not expect this assumption to significantly change our conclusions.
\item \textbf{Self-similar scaling of pure ICL:} We have assumed that pure ICL scales self-similarly with cluster mass, i.e.~its surface brightness is fixed at a given projected $r/r_{500}$. While this is consistent with simple comparisons made in \citet{zhang}, a more detailed study could reveal deviations.
\end{itemize}
Conservative assumptions, i.e.~ways in which we likely overestimate the effect of ICL in practice:
\begin{itemize}
\item \textbf{Passive SED of ICL:} We assume ICL to share the color of passive galaxies at the cluster redshift. A population of younger stars in the ICL would likely reduce its effect on photo-$z$ bias due to its similarity in color to lensing sources at higher redshift. We find that the predicted bias is reduced, yet by less than 5 per cent, if the ICL is brighter in $g$ band by 0.1\,mag, which is approximately the level expected from reduced metallicity.
\item \textbf{Lack of ICL growth}: We fix the ICL surface density to a measurement at low redshift and predict the expected bias at higher redshift without accounting for any growth of ICL from early to late times. If ICL is assembled over time we thus overestimate biases for higher redshift clusters.
\item \textbf{Conservative deblending limits:} For DES, we have assumed cluster members to be deblended and thus not affecting source photo-$z$ down to a magnitude limit of $r=22.5$. At this level, DES Y1 is highly complete -- a significant fraction of cluster members below this limit are likely deblended successfully and, contrary to our assumption, do not in fact contribute to diffuse ICL. As a result, we likely overestimate the associated photo-$z$ bias in DES, in particular at large cluster mass and redshift.
\item \textbf{Magnitude limited source sample:} We used a simple magnitude cut to define our source sample. Realistic lensing source samples have additional selection criteria. A choice of limiting magnitude at the faint end of the population that is used in a given analysis allows for a conservative prediction of potential biases. For DES Y1/Y3 data, this was possible to do in this work.
\end{itemize}
Limitations, i.e.~regimes in which our model is not reliable:
\begin{itemize}
\item \textbf{Faint limit of source sample:} For LSST data, sufficiently faint reference samples of galaxies with known redshift and flux measurements do not exist to extend the modeling beyond $i'\approx 25$. Assuming that fainter lensing samples are used, the bias derived here is an underestimate of the bias encountered by such analyses.
\item \textbf{Blending with cluster members:} We only attempt to model diffuse ICL leaking into source photometry at a subdominant level. For the effect of blending between similarly bright cluster member and lensing source galaxies, the model developed here is not applicable. Besides, the success of correctly treating these cases will likely strongly depend on the choice of deblending algorithm.
\end{itemize}
\section{Conclusions}
We have developed a model for the bias in weak lensing estimates of cluster surface mass overdensity due to the contamination of lensed galaxy photometry from diffuse intracluster light. The latter systematically changes the flux, color, and thus photometric redshift estimate of the faint galaxies used as lensing sources.
Our model for diffuse light in clusters is simplistic yet conservative for the purpose of this exercise: a pure component of ICL due to un-bound stars in the cluster potential, measured at low redshift \citep{zhang} and re-scaled in mass and redshift by assuming self-similarity and passive evolution; and a component due to stars in undetected, faint cluster members, extrapolated from detected galaxies by means of the luminosity function. The effect of this surface brightness on photo-$z$ is estimated from an idealized empirical photo-$z$ estimation scheme \citep{2016arXiv161001160G}.
We find that for a DES-like cluster lensing experiment, i.e.~with cluster masses up to $M_{200m}=10^{15}M_{\odot}$, detection and deblending of cluster members brighter than $i'=22.5$, and a source sample no fainter than $i'=23.5$, ICL-related photo-$z$ bias does not significantly affect weak lensing mass reconstruction. Outside a cluster-centric radius of $200$\,kpc, which is commonly excluded in lensing studies for other reasons, biases are typically below 1 per cent for an $M_{200m}\sim3\times10^{14}M_{\odot}$ cluster, and below 2 per cent at $M_{200m}\sim10^{15}M_{\odot}$, even under the conservative assumptions we make. The effect of ICL on measured galaxy shapes may well be larger than that, and should be tested with dedicated image simulations.
Deeper source catalogs will be somewhat more susceptible to ICL-related photo-$z$ biases because the flux and color of faint source galaxies can be changed more strongly by ICL contamination. For massive clusters, lensing source catalogs down to $i'=25$ show one per cent biases at approximately twice the radius as the above DES-like survey. Even fainter sources will likely show even stronger effects, although this is difficult to quantify at present due to the lack of reliable color-magnitude-redshift information for such samples. An explicit treatment of measured fluxes as a composite of intracluster and lensing source galaxy light in photo-$z$ estimation could in principle remedy this effect. With moderately conservative scale and magnitude cuts, however, ICL bias of photo-$z$ will be a non-issue even in the next generation of surveys -- and with a less conservative examination of the effect, these could likely be moderately relaxed from the recommendations given in this work.
\section*{Acknowledgements}
Support for DG was provided by NASA through the Einstein Fellowship
Program, grant PF5-160138 and by Chandra Award Number GO8-19101A, issued by the Chandra X-ray Observatory Center. This work was supported in part by the U.S. Department of Energy under
contract number DE-AC02-76SF00515.
The authors thank Fabrice Brimioulle for providing the CFHTLS Deep photometry and photo-$z$ catalogs used in this work. This work has been promoted by fruitful discussions during the workshop series ``Becoming a One-Percenter''.
Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain,
the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing
Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago,
the Center for Cosmology and Astro-Particle Physics at the Ohio State University,
the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos,
Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and
the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inova{\c c}{\~a}o, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energ{\'e}ticas,
Medioambientales y Tecnol{\'o}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh,
the Eidgen{\"o}ssische Technische Hochschule (ETH) Z{\"u}rich,
Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC),
the Institut de F{\'i}sica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe,
the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth,
SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, Texas A\&M University, and the OzDES Membership Consortium.
Based in part on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which is operated by the Association of
Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
The DES data management system is supported by the National Science Foundation under Grant Numbers AST-1138766 and AST-1536171.
The DES participants from Spanish institutions are partially supported by MINECO under grants AYA2015-71825, ESP2015-66861, FPA2015-68048, SEV-2016-0588, SEV-2016-0597, and MDM-2015-0509,
some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya.
Research leading to these results has received funding from the European Research
Council under the European Union's Seventh Framework Program (FP7/2007-2013) including ERC grant agreements 240672, 291329, and 306478.
We acknowledge support from the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and the Brazilian Instituto Nacional de Ci\^encia
e Tecnologia (INCT) e-Universe (CNPq grant 465376/2014-2).
This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Based in part on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/IRFU, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at Terapix available at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
This paper has gone through internal review by the DES collaboration.
\bibliographystyle{mnras}
\section{Introduction}
Electron states with topological character
and quantum frustration from competing interactions
are two major themes in current condensed matter physics. In both contexts, fractionalized degrees of freedom on the boundary of a system occur. Perhaps the best known example is the possibility of Majorana zero modes (MZMs) at the ends of a one-dimensional (1D) system. MZMs are exotic self-conjugate edge states that can occur through either topology
\cite{DasSarmaMajoranaQInf15,AguadoReview17,LutchynNatRevMat18,StanescuTopoBook} or fine-tuning of competing interactions \cite{AffleckTCK92,EmeryKivelsonPRB92,WongAffleck94,Affleck2IKPRB95,[{}][{, Chapter 28.}]GogolinBook}.
Here we study the interplay between a topological Majorana zero mode (tMZM) and frustration-induced Majorana (fMZM) in a nanoscale system of quantum dots and wires. We show that a fMZM can stabilize a tMZM.
Quantum frustration typically produces states of matter that are delicately balanced between competing options and which then show fractionalization \cite{SchifferRMP13}.
Quantum impurity models---an interacting quantum system coupled to leads---provide several canonical examples. (Note that
quantum impurity models are effectively 1D since the impurity couples to a limited set of states in the leads \cite{WilsonRMP75}.) The two channel Kondo (2CK) model, for instance, has been extensively studied \cite{GogolinBook,NozieresBlandin80,ZawadowskiPRL80,EmeryKivelsonPRB92,AffleckTCK92}:
an impurity spin is equally coupled to two metallic leads. It would be natural for the impurity spin to form a singlet with each lead, but it cannot because of entanglement exhaustion. This frustration in screening the impurity leads to a non-Fermi-liquid ground state in which there is a degeneracy of $\sqrt{2}$ at the impurity. This signals fractionalization and the existence of an unpaired fMZM \cite{MZM_pair}. It has also been discussed in the two impurity Kondo model \cite{JayaprakashPRL81,Affleck2IKPRB95} and the dissipative resonant level model \cite{Mebrahtu13,Zheng1-GPRB14}. Experimentally, several groups have investigated in detail nanoscale systems with an unpaired fMZM of this type \cite{Potok2CK07,Mebrahtu12,Mebrahtu13,KellerDGGNat15,IftikharPierreNat15,IftikharPierreScience18}--- to date, the fine tuning required appears to be easier to achieve than the creation of a topological state.
Topological MZMs, in addition to their inherent interest, have attracted attention because their non-Abelian statistics provide a possible route toward fault tolerant quantum computation \cite{DasSarmaMajoranaQInf15,PachosBook12}. To construct and observe such MZMs, researchers have proposed multiple systems
\cite{LutchynNatRevMat18,SatoAndo-TopSupRPP17,SunJia-MajVortexNQM17,HaimOreg-TopSupPRep19,Motome-HuntingJPSP20}.
One particularly promising 1D system consists of a semiconducting nanowire made of a material that has strong spin-orbit coupling which is placed in proximity to a \textit{s}-wave superconductor and in a magnetic field \cite{LutchynNatRevMat18}. In this system, signatures of tMZM through measurement of the conductance have been intensively pursued \cite{LutchynNatRevMat18}.
In contrast to the free elementary particle predicted by Majorana, the effective tMZMs in condensed matter always appear in pairs in finite size systems \cite{DasSarmaMajoranaQInf15,LutchynNatRevMat18}. Unfortunately, tMZMs lose many of their interesting properties when they hybridize with their partners.
The inter-MZM coupling decays as $\propto \exp(-L/\xi)$ \cite{DasSarmaMajoranaQInf15,LutchynNatRevMat18}, with $L$ the distance and $\xi$ the superconducting correlation length in the nanowire. Experimentally,
this hybridization cannot generally be ignored \cite{LutchynNatRevMat18}.
Consequently, to see the full effect of a tMZM, a method to stabilize the tMZM against inter-MZM coupling is desirable.
In this paper, we stabilize a tMZM against hybridization with its partner by coupling that partner to an unpaired Majorana fermion of a dissipative quantum dot.
Through this stabilization, the frustration-induced fMZM of the $R \!=\! R_Q \!\equiv\! h/e^2$ dissipative resonant level model can be experimentally detected.
We emphasize that ``stabilization" here is understood in the renormalization group (RG) sense. In the absence of stabilization, we show that a finite inter-tMZM coupling, no matter how small, is RG \emph{relevant}: it effectively increases with decreasing temperature and thus drastically changes the system ground state.
In contrast, when the dissipation-induced fMZM is present, the inter-tMZM coupling is \emph{irrelevant} and vanishes at zero temperature, regardless of its bare value.
Because of this RG aspect, the stabilization studied here is qualitatively different from other proposals where the bare inter-tMZM coupling can be
reduced to zero through fine tuning.
See Refs.\,\cite{AguadoReview17, deng_majorana_2016, ElsaPRB17, ClarkePRB17, PradaNpjQM17} for examples of fine tuning by increasing the nanowire size, coupling the nanowire to a quantum dot, or through electronic interactions.
The rest of the paper is organized as follows. We begin with the introduction of our model in
Sec.\,\ref{sec:model}.
After that, Sec.\,\ref{sec:review} contains a brief review of the dissipative resonant level model and its frustration-induced Majorana fermion.
With those ingredients, we calculate in Sec.\,\ref{sec:conductance} the conductance through the detector quantum dot (the left side of Fig.\,\ref{fig:structure}) that couples to one of the tMZMs. These results allow us to conclude that the fMZM of a dissipative resonant level model stabilizes the tMZM by coupling to its partner, thus leaving a single unpaired tMZM.
We further interpret the result with the g-theorem of boundary conformal field theory in Sec.\,\ref{sec:g_theorem}, where we stress the importance of the dissipation-induced fMZM in the process of stabilization.
To gain further information and support for our view, we find the non-equilibrium conductance and shot noise through the detector quantum dot with full counting statistics methods in Sec.\,\ref{sec:full_counting}.
Our results indicate that there is an abrupt transition of the ground state when the fMZM joins the system. This transition can be clearly observed through the conductance, the shot noise, and the Fano factor of the detecting resonant level.
Finally, we summarize our paper in Sec.\,\ref{sec:summary}.
\begin{figure}[t]
\includegraphics[width=6.3cm]{structure.pdf}
\caption{The structure of the system. Two tMZMs $\gamma_1$ and $\gamma_2$ are realized at two ends of a nanowire on top of a grounded topological superconductor (TS). We calculate the conductance through the left quantum dot to detect the existence of the tMZM $\gamma_1$. The right quantum dot, through which transport is dissipative (blue), couples to $\gamma_2$ and thereby stabilizes $\gamma_1$ as an isolated tMZM.
}
\label{fig:structure}
\end{figure}
\section{The System}
\label{sec:model}
The system we consider consists of three major parts: (i) a superconducting nanowire that hosts two tMZMs, (ii) a resonant level that detects the presence of a tMZM by its conductance, and (iii) a dissipative resonant level, formed in a quantum dot, which introduces a fMZM that stabilizes the signature of the tMZM.
We consider the superconducting nanowire as a bare-bones system, shown in Fig.\,{\ref{fig:structure}}, that has a pair of tMZMs, $\gamma_1$ and $\gamma_2$ (red dots), at its ends. The coupling between these two tMZMs is $\epsilon_M \neq 0$ and hence the Hamiltonian of the system is
\begin{equation}
H_\textrm{sys.} = i \epsilon_M \gamma_1 \gamma_2 .
\label{eq:Hsys}
\end{equation}
The goal is to have effectively $\epsilon_M\!=\!0$ so that even at zero temperature the topological feature of $\gamma_1$ is evident.
In order to assess directly whether $\gamma_1$ is indeed an independent tMZM, we incorporate a detector explicitly. As the presence of a tMZM affects many physical properties, different types of detectors could be used. We choose to consider the conductance through a spinless quantum dot modeled as a resonant level \cite{DongPRBR11}, pictured on the left of Fig.\,{\ref{fig:structure}}. As there are no interactions here, the Hamiltonian of the detector is simply
\begin{equation}
\begin{aligned}
H_\textrm{detect.} = &\; \epsilon_L^{\vphantom{\dagger}} d^\dagger_L d^{\vphantom{\dagger}}_L +
\sum_{k,\alpha} \epsilon_k c^{\dagger}_{k L \alpha} c^{\vphantom{\dagger}}_{k L \alpha} \\
& + V_L \sum_{k, \alpha} \left( c^{\dagger}_{k L \alpha} d^{\;}_L + d^\dagger_L c^{\vphantom{\dagger}}_{k L \alpha} \right) ,
\end{aligned}
\label{eq:Hdetect}
\end{equation}
where $\epsilon_{L}^{\vphantom{\dagger}}$ is the dot energy level, $c_{k L \alpha}^{\vphantom{\dagger}}$ is for electrons in the $\alpha=$ $S$ (source) or $D$ (drain) lead, and $V_L$ is the dot-lead coupling. We assume that the dot is tuned to resonance, $\epsilon_L \!= 0$, and symmetrically coupled to the leads.
\begin{figure}[t]
\includegraphics[width=7.2cm]{fig_lead_majoranas.pdf}
\caption{A resonant level (with operators $c_n$ and $c_n^{\dagger}$ for the $n$th site in the leads) couples to a tMZM $\gamma_1$.
When fine-tuned, the resonant level system can be considered as two independent Majorana chains \cite{DongPRBR11} (indicated by red and blue colors, respectively).
The tMZM $\gamma_1$ and $(d_L + d_L^{\dagger})/\sqrt{2}$ form into a singlet (indicated by the black dashed box), thus breaking one of the Majorana channels (the blue one).
The conductance becomes $e^2/2h$, totally from the red Majorana channel.
}
\label{fig:lead_majoranas}
\end{figure}
The tMZM $\gamma_1$ is tunnel coupled to a combination of $d_L$ and $d^\dagger_L$. Because $\epsilon_L = 0$, all such combinations are equivalent and so we take
\begin{equation}
H_\textrm{sys.-det.} = i t_L \frac{d^{\dagger}_L + d_L}{\sqrt{2}} \gamma_1 .
\label{eq:sys_det}
\end{equation}
The (zero-temperature) conductance through the left dot, denoted $G_L$, for $H=H_\textrm{sys.}+H_\textrm{detect.}+H_\textrm{sys.-det.}$ has been studied previously \cite{DongPRBR11}.
There is a clear signature of the coupling to the topological Majorana: while $G_L = e^2/h$ on resonance for a classic resonant level ($t_L=0$), in the presence of the tMZM one obtains half that value, $G_L \!=\! e^2/2h$. Intuitively, the tMZM hybridizes with part of the resonant level and so blocks the conductance of a ``half chain'', as illustrated in Fig.\,\ref{fig:lead_majoranas} \cite{DongPRBR11}.
Thus, we shall use $G_L \!=\! e^2/2h$ as the sign that a tMZM is present.
However, for any non-zero $\epsilon_M$, the conductance reverts to the topologically trivial $G_L \!=\! e^2/h$ at low temperature, $T\ll\epsilon_M$.
Indeed, $\epsilon_M$ is RG relevant, as shown below, and so grows large at low temperature regardless of its bare value (i.e., the value determined by the experimental system). The resulting extreme sensitivity to the value of $\epsilon_M$ makes observation of the tMZM especially difficult.
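The blocking picture can be checked with a few lines of linear algebra. The sketch below computes the zero-frequency dot spectral weight in the wide-band limit, treating all Majorana operators as normalized to $\{a,b\}=\delta_{ab}$ (a convention choice of this sketch) and using illustrative coupling values; it reproduces the unitary value without the tMZM, half of it for $t_L\neq0$ and $\epsilon_M=0$, and the unitary value again once $\epsilon_M\neq0$. It is meant only as an illustration, not as a substitute for the transport calculation of Ref.\,\cite{DongPRBR11}.
\begin{verbatim}
# Minimal sketch (illustration only): zero-frequency spectral weight of the
# detector dot coupled to gamma_1, wide-band limit, epsilon_L = 0, symmetric
# lead coupling with total broadening Gamma. Majoranas normalized {a,b}=delta.
import numpy as np

def dot_spectral_weight(omega, Gamma, t_L, eps_M, eta=1e-9):
    # basis (eta_1, eta_2, gamma_1, gamma_2); effective Hamiltonian H = iA
    H = np.zeros((4, 4), dtype=complex)
    H[0, 2], H[2, 0] = 1j * t_L, -1j * t_L        # i t_L eta_1 gamma_1
    H[2, 3], H[3, 2] = 1j * eps_M, -1j * eps_M    # i eps_M gamma_1 gamma_2
    Sigma = np.diag([-0.5j * Gamma, -0.5j * Gamma, 0, 0])  # leads broaden eta_1,2
    G = np.linalg.inv((omega + 1j * eta) * np.eye(4) - H - Sigma)
    u = np.array([1, 1j, 0, 0]) / np.sqrt(2)      # d     = (eta_1 + i eta_2)/sqrt2
    w = np.array([1, -1j, 0, 0]) / np.sqrt(2)     # d^dag = (eta_1 - i eta_2)/sqrt2
    return -np.imag(u @ G @ w) / np.pi

Gamma = 1.0
for t_L, eps_M in [(0.0, 0.0), (0.5, 0.0), (0.5, 0.1)]:
    a0 = dot_spectral_weight(0.0, Gamma, t_L, eps_M)
    print(t_L, eps_M, np.pi * Gamma * a0 / 2)     # -> 1, 1/2, 1
\end{verbatim}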
In order to stabilize $\gamma_1$, one natural idea is to couple its partner $\gamma_2$ to an isolated Majorana fermion, thereby removing it from potentially hybridizing with $\gamma_1$.
Fortunately, such an isolated Majorana fermion is known to exist in the 2CK quantum impurity model. Specifically, the frustration between the two channels leaves an effective fMZM $(d+d^{\dagger})/\sqrt{2}$ untouched, where $d \equiv i S_x - S_y$ is the effective fermionic operator and $S_{\sigma}$ are the Pauli matrices of the impurity spin \cite{EmeryKivelsonPRB92,ShillerPRB95,GogolinBook}.
However, this effective fMZM is fundamentally a \emph{spin} operator and so will not naturally couple with the \emph{spatial} degree of freedom $\gamma_2$.
To solve this problem, we propose to generate an isolated fMZM with a dissipative quantum dot \cite{Mebrahtu13,Mebrahtu12}. The corresponding resonant level model with ohmic impedance $R \!=\! R_Q$ \cite{Mebrahtu13,Zheng1-GPRB14} is known to be equivalent to the 2CK model as well as the Luttinger liquid resonant level model with Luttinger liquid interaction $g = 1/2$. The quantum dot forming the resonant level hosts a real \cite{real-mf} decoupled Majorana fermion \cite{Mebrahtu13,Zheng1-GPRB14}. Since this fMZM involves a \emph{spatial} degree of freedom, we can model its coupling to $\gamma_2$ with the standard inter-Majorana coupling.
\section{Quantum Frustration from Dissipation: The Dissipative Resonant Level Model}
\label{sec:review}
We begin by sketching the needed elements of the theory of the dissipative resonant level model \cite{Mebrahtu12,Mebrahtu13,Zheng1-GPRB14,LiuRLdissipPRB14}, emphasizing the formation of its dissipation-induced fMZM.
The dissipative resonant level model describes a quantum dot that couples to two dissipative spin-polarized leads. It is defined by the Hamiltonian \cite{IngoldNazarov92,NazarovBlanterBook,LiuRLdissipPRB14}
\begin{equation}
\begin{aligned}
H_\textrm{dissip.}= & H_{\text{dot}} + H_{\text{lead}} + H_{\text{T}} \\
=&\; \epsilon_R d^\dagger d^{\vphantom{\dagger}} +
\sum_{k,\alpha} \epsilon_k c^{\dagger}_{k \alpha} c^{\vphantom{\dagger}}_{k \alpha}
\hfill \\
& + \sum_{k, \alpha} V_{\alpha} \left( e^{-i\phi_\alpha} c^{\dagger}_{k \alpha} d^{\;} + e^{+i\phi_\alpha} d^\dagger c^{\vphantom{\dagger}}_{k \alpha} \right),
\end{aligned}
\label{eq:Hdissip}
\end{equation}
where $\alpha \!=\! S, D$ for the source and drain, respectively.
The notation parallels that for the detector: $\epsilon_R$ is the dot energy level, $c^\dagger_{k\alpha}$ creates an electron in the lead labeled $\alpha$, and lead $\alpha$ couples to the dot with strength $V_{\alpha}$. The key aspect of the model is the coupling to dissipation in the tunneling term [third line of Eq.\,\eqref{eq:Hdissip}].
In modeling the dissipation, we follow a standard approach \cite{IngoldNazarov92,NazarovBlanterBook,VoolDevoretIJCTA17}. The phase fluctuation operator $\phi_\alpha$ is conjugate to the charge fluctuation operator for the capacitor between the dot and lead $\alpha$. Thus, the operator $e^{\pm i\phi_\alpha}$ in Eq.\,(\ref{eq:Hdissip}) accounts for the change in charge upon tunneling. The current and voltage fluctuations caused by the electrons tunneling on and off the dot excite the ohmic environment of the leads. This environment is modeled as a bath of harmonic oscillators \cite{CaldeiraLeggett81,IngoldNazarov92,NazarovBlanterBook,VoolDevoretIJCTA17}. Because it is the charge moving across the dot that excites the environment, the difference $\varphi\equiv \phi_S - \phi_D$ couples to the harmonic oscillators. The bath causes the correlation function of these fluctuations to be
\begin{equation}
\langle e^{-i\varphi(t)}e^{i\varphi(0)}\rangle \propto (1/t)^{2r}
\label{eq:dissipative_correlations}
\end{equation}
where the exponent $r$ is related to the resistance of the environment by
$r \equiv Re^2/h \equiv R/R_Q$. With this correlation function, the conductance and scaling behavior can be found.
Interactions are thus introduced by the dissipation [i.e.\ the third line of Eq.\,(\ref{eq:Hdissip}) is \emph{not} quadratic]---theoretically, we are faced with an \emph{interacting quantum impurity} model. It is then natural to proceed with a bosonization treatment followed by renormalization (RG)
\cite{GogolinBook,GiamarchiBook,Anderson70,Cardy81,KaneFisherPRB92,EggertAffleck92,Mebrahtu13,Zheng1-GPRB14}.
Following the standard technique \cite{GogolinBook,LiuRLdissipPRB14}, we unfold each lead into an infinite chiral fermion channel and then apply
chiral bosonization,
\begin{equation}
c_{\alpha} (x) = \frac{F_{\alpha}}{\sqrt{2\pi a}} e^{i\phi_{\alpha} (x)+ik_Fx},
\label{eq:bosonization}
\end{equation}
where $F_{\alpha}$ is the Klein factor $\left\{ F_{\alpha}, F_{\alpha'} \right\} = 2 \delta_{\alpha,\alpha'}$ that preserves the fermionic commutation relations, and $a$ is the lattice constant.
The bosonic field operator $\phi_{\alpha}$, representing the collective modes of the corresponding chiral channel \cite{GogolinBook}, has the standard commutation relation
\begin{equation}
[\partial_x \phi_{\alpha}(x), \phi_{\alpha'}(x') ] = i\delta_{\alpha,\alpha'} \pi \delta (x - x').
\label{eq:commutators}
\end{equation}
We further define fields in the common ($\phi_c$) and difference ($\phi_f$) sectors through the rotation
\begin{equation}
\phi_f = \frac{\phi_{ S} - \phi_{D}}{\sqrt{2}},\ \ \ \ \phi_c = \frac{\phi_{S} + \phi_{D}}{\sqrt{2}}.
\end{equation}
The fields $\phi_c$ and $\phi_f$ reflect the dot occupation number and the electron number difference between two leads, respectively.
Substituting these bosonization expressions into the Hamiltonian (\ref{eq:Hdissip}), one finds that the bosonic field $\phi_f$ and dissipative phase $\varphi$ appear in the same way in the tunneling term. As they both have power-law correlation functions [Eq.\,\eqref{eq:dissipative_correlations} and \cite{GogolinBook}], we therefore combine them with the transformation
\begin{equation}
\begin{aligned}
\phi_f' & \equiv \frac{1}{\sqrt{1+r}} \left( \phi_f + \frac{1}{\sqrt{2}} \varphi \right) \\
\varphi' & \equiv \frac{1}{\sqrt{1+r}} \left( \sqrt{r} \phi_f + \frac{1}{\sqrt{2 r}} \varphi \right),
\end{aligned}
\label{eq:boson_rotation}
\end{equation}
through which the
tunneling Hamiltonian becomes \cite{Zheng1-GPRB14,LiuRLdissipPRB14}
\begin{equation}
\begin{aligned}
H_{\text{T}} & = \sum_{\alpha \in \{S,D\}} \frac{V_{\alpha}}{\sqrt{2 \pi a}} \left( e^{- i \frac{1}{\sqrt{2}} \phi_c} e^{- i\alpha \sqrt{\frac{1+r}{2}} \phi_f'} F_{\alpha} d + \text{h.c.} \right),
\end{aligned}
\label{eq:ht_effective}
\end{equation}
where $\alpha \!=\! \pm 1$ for source and drain, respectively.
Eq.\,\eqref{eq:ht_effective} effectively mimics the tunneling Hamiltonian of a Luttinger liquid
in which the interaction in the common ($c$) and difference ($f$) sectors is different.
(For related work on links between the physics of Luttinger liquids and dissipative tunneling see, e.g., Refs.\,\cite{MatveevGlazman93,FlensbergPRB93,SassettiWeissEPL94,SafiSaleurPRL04,LeHurLiPRB05,JezouinPierre13}.)
Following the well-established RG technique for Luttinger liquids \cite{GogolinBook,KaneFisherPRB92,GiamarchiBook}, we obtain RG equations when $\epsilon_R \!=\! 0$ \cite{KaneFisherPRB92,LiuRLdissipPRB14}:
\begin{equation}
\begin{aligned}
\frac{dV_S}{d\ln\tau_c} & = \left[1 - \Big( \frac{1+r}{4} + \frac{K_1}{4} +\frac{K_2}{2} \Big) \right] V_S, \\
\frac{dV_D}{d\ln\tau_c} & = \left[1 - \Big( \frac{1+r}{4} + \frac{K_1}{4} -\frac{K_2}{2} \Big) \right] V_D, \\
\frac{dK_1}{d\ln\tau_c} & = - 4\tau_c^2 \Big[ \left(V_S^2 + V_D^2 \right) K_1 + \left(V_S^2 - V_D^2 \right) K_2 \Big], \\
\frac{dK_2}{d\ln\tau_c} & = - 2\tau_c^2 \Big[ \left(V_S^2 + V_D^2 \right) K_2 + \left(V_S^2 - V_D^2 \right) \Big],
\end{aligned}
\label{eq:rg_equations}
\end{equation}
where $\tau_c$ is the running short-time cutoff of the RG, which grows gradually as the temperature decreases, and $K_1$, $K_2$ are the fugacity parameters that incorporate the symmetric and anti-symmetric parts of the dot-$\phi_c$ interaction during the RG flow.
Initially, $K_1 \!=\! 1$ and $K_2 \!=\! 0$.
Importantly, for finite asymmetry $V_S \!-\! V_D$, $|K_2|$ increases, which in turn leads to increased asymmetry upon RG flow. Generically, the flow thus ends at a ground state in which the quantum dot is completely hybridized with either the source or the drain (and cut from the other), depending on whether $V_S$ or $V_D$ is initially larger. The transition between these two candidate ground states, known as a boundary quantum phase transition, has been experimentally realized \cite{Mebrahtu12,Mebrahtu13}.
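The flow described here can be integrated directly. The sketch below transcribes Eq.\,\eqref{eq:rg_equations} with couplings measured in units of the initial cutoff ($\tau_c=e^{\ell}$, $\tau_c(0)=1$) and illustrative, slightly asymmetric bare values; it is only meaningful while the couplings remain small.
\begin{verbatim}
# Minimal numerical sketch of Eq. (rg_equations). Couplings in units of the
# initial cutoff, tau_c = exp(l) with tau_c(0) = 1; bare values are
# illustrative placeholders. Valid only while the couplings stay small.
import numpy as np
from scipy.integrate import solve_ivp

r = 1.0   # dissipation strength R / R_Q

def rg_rhs(l, y):
    VS, VD, K1, K2 = y
    tau_c2 = np.exp(2.0 * l)
    dVS = (1.0 - ((1.0 + r) / 4.0 + K1 / 4.0 + K2 / 2.0)) * VS
    dVD = (1.0 - ((1.0 + r) / 4.0 + K1 / 4.0 - K2 / 2.0)) * VD
    dK1 = -4.0 * tau_c2 * ((VS**2 + VD**2) * K1 + (VS**2 - VD**2) * K2)
    dK2 = -2.0 * tau_c2 * ((VS**2 + VD**2) * K2 + (VS**2 - VD**2))
    return [dVS, dVD, dK1, dK2]

y0 = [0.05, 0.045, 1.0, 0.0]   # V_S slightly larger than V_D; K1 = 1, K2 = 0
sol = solve_ivp(rg_rhs, (0.0, 2.0), y0, rtol=1e-8)
VS, VD, K1, K2 = sol.y[:, -1]
print("V_S/V_D:", y0[0] / y0[1], "->", VS / VD)   # the asymmetry grows
\end{verbatim}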
Non-trivial behavior appears at the quantum critical point $V_S \!=\! V_D$: \emph{frustration} between hybridization with the source versus the drain prevents the quantum dot from being fully hybridized.
In fact, a finite residual entropy $\ln\!\sqrt{1+r}$ remains at zero temperature \cite{WongAffleck94}.
This residual entropy mimics that of the 2CK problem at the intermediate fixed point \cite{EggertAffleck92,Mebrahtu13,Zheng1-GPRB14,Crossover}, where a spin Majorana becomes isolated due to overscreening.
Specifically, when $R\!=\!R_Q$ (i.e.\ $r\!=\!1$), the residual entropy at the fine-tuned quantum critical point becomes $\ln\!\sqrt{2}$, which coincides with that of a Majorana fermion.
\emph{This Majorana is the fMZM we use in this paper to stabilize the tMZM.}
For $R\!=\!R_Q$ the model has been thoroughly investigated through bosonization and refermionization \cite{Mebrahtu13,Zheng1-GPRB14,Crossover}, following that for the 2CK model
\cite{EmeryKivelsonPRB92,SenguptaGeorges94,ShillerPRB95}
or a $g \!=\! 1/2$ Luttinger liquid resonant level model \cite{GogolinBook}.
Following their example, we apply the unitary transformation
\begin{equation}
U = e^{i(d^{\dagger} d - \frac{1}{2}) \phi_c(0)/\sqrt{2}},
\label{eq:u-trans}
\end{equation}
to remove the common field $\phi_c$ from the tunneling term.
However, this unitary transformation introduces two minor side effects.
First, the impurity operator is now dressed with the common field $\phi_c(0)$,
\begin{equation}
d \to d e^{i K_1 \frac{1}{\sqrt{2}}\phi_c(0)},
\end{equation}
with $K_1 \!=\! 1$ initially, i.e.\ before the RG flow of Eq.\,\eqref{eq:rg_equations}.
Second, the unitary transformation introduces a quartic interaction
\cite{Zheng1-GPRB14,LiuRLdissipPRB14},
\begin{equation}
H_{\text{extra}} = -\frac{v}{2\sqrt{2}} \left(d^{\dagger}d - \frac{1}{2}\right) \partial_x \phi_c(x)\Big|_{x = 0}
\label{eq:quartet}
\end{equation}
where $v$ is the Fermi velocity, that couples $\phi_c$ to the impurity occupation number.
Strictly speaking, both the phase factor $\exp[i\phi_c(0)/\sqrt{2}]$ attached to $d$ and the interaction of Eq.\,(\ref{eq:quartet}) are quite important at high temperatures \cite{Zheng1-GPRB14}.
However, $K_1$ decreases according to the RG equations \eqref{eq:rg_equations} \cite{KaneFisherPRB92}, so that at low temperature $d\exp[iK_1\phi_c(0)/\sqrt{2}]$ and the bare operator $d$ become indistinguishable. With regard to the induced density-density interaction (\ref{eq:quartet}), it has scaling dimension 3/2 (see, for instance, \cite{SchillerHershToulousePRB98} or \cite{GanPRB95} where similar terms have been encountered) and is thus RG irrelevant. Consequently, as we are only interested in the low temperature physics near the ground state, \emph{both} the phase attached to impurity operators and the quartic interaction can be safely neglected.
Finally, we define a Majorana representation for the degree of freedom represented by $d$: $\chi_1 \equiv ( d^{\dagger} \!+\! d)/\sqrt{2}$ and $\chi_2 \equiv i(d^{\dagger} \!-\! d )/\sqrt{2}$. Because of the unitary transformation introduced above, $d$ is no longer simply the dot level but rather a nonlinear mixture of the dot and the density in the two leads near the dot. Both resulting MZMs are highly localized near the quantum dot.
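Explicitly, these definitions imply
\begin{equation*}
d = \frac{\chi_1 + i \chi_2}{\sqrt{2}}, \qquad
d^{\dagger} = \frac{\chi_1 - i \chi_2}{\sqrt{2}}, \qquad
\chi_i^{\dagger} = \chi_i, \qquad
\{\chi_i,\chi_j\} = \delta_{ij},
\end{equation*}
with the occupation of the level encoded in $2 i \chi_1 \chi_2 = 2 d^{\dagger} d - 1$.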
For the specific case $r\!=\!1$, the dependence on the difference field $\phi_f'$ in \eqref{eq:ht_effective} can be expressed as a fermionic operator $\psi_f \!\equiv\! e^{i\phi_f'}/\sqrt{2\pi a}$ (using the Klein factor from the original bosonization).
The result of these manipulations is an effective Majorana Hamiltonian for the right-hand dot and leads:
\begin{equation}
\begin{aligned}
H_\textrm{dissip.} = & \sum_{k} \epsilon_k \psi^{\dagger}_{f,k}\psi_{f,k} + (V_S - V_D) \frac{\psi^{\dagger}_f(0) - \psi_f(0)}{\sqrt{2}} \chi_1 \\
& + i(V_S + V_D) \frac{\psi^{\dagger}_f (0) + \psi_f(0)}{\sqrt{2}} \chi_2 + i \epsilon_R \chi_1 \chi_2 .
\end{aligned}
\label{eq:Majorana}
\end{equation}
At the quantum critical point, where $\epsilon_R = 0$ and $V_S \!=\! V_D$, the impurity Majorana $\chi_1$ drops out of Eq.\,\eqref{eq:Majorana} and becomes isolated, thus leading to the $\ln\! \sqrt{2}$ residual entropy.
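Explicitly, at this point ($V_S \!=\! V_D \equiv V_R$ and $\epsilon_R \!=\! 0$), Eq.\,\eqref{eq:Majorana} reduces to
\begin{equation*}
H_\textrm{dissip.} = \sum_{k} \epsilon_k \psi^{\dagger}_{f,k}\psi_{f,k} + i\, 2 V_R\, \frac{\psi^{\dagger}_f(0) + \psi_f(0)}{\sqrt{2}}\, \chi_2 ,
\end{equation*}
in which $\chi_1$ does not appear at all and is therefore a free Majorana degree of freedom.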
In the rest of the paper, we focus on the symmetric point $V_S = V_D \equiv V_R$, and couple this system to the right end of the superconducting nanowire as a stabilizer.
\section{Conductance in the Detector: Three Cases}
\label{sec:conductance}
With the system introduced, we calculate the conductance $G_L$ through the left quantum dot (the detector) in different scenarios.
\subsection{No Stabilizer}
For the simplest scenario, without the presence of any stabilizer, $\gamma_2$ couples only to its partner $\gamma_1$. This case has been studied previously \cite{DongPRBR11}: the non-trivial zero-temperature conductance $e^2/2h$ abruptly becomes the trivial one $e^2/h$ upon \emph{any} non-zero $\epsilon_M$.
From the RG perspective, this means that $\epsilon_M$ is relevant and thus when temperature decreases $\epsilon_M$ effectively \emph{increases}.
\subsection{Frustration-Induced Degeneracy in Right Dot}
\label{sec:Cond-dissip}
With the presence of the dissipative resonant level, the key final ingredient in our problem is the connection between the right dot and the topological wire. This is simply tunneling, as for the left dot Eq.\,(\ref{eq:sys_det}); for detailed discussions of tunneling between tMZM and those arising from Klein factors in bosonization see, e.g., Refs.\,\cite{Beri-MajoranaKleinPRL13,HerviouPRB16,GiulianoAffleckNPB19}.
Generically, $\gamma_2$ couples to both $\chi_1$ and $\chi_2$, yielding the Hamiltonian
\begin{equation}
\begin{aligned}
H_\textrm{sys.-dis.} = i t_{R1} \,\gamma_2 \,\chi_1 + i t_{R2} \,\gamma_2 \,\chi_2 ,
\end{aligned}
\label{eq:sys_dis}
\end{equation}
with arbitrary couplings $t_{R1}$ and $t_{R2}$.
The full Hamiltonian for our problem,
$H \!=\! H_\textrm{sys.} \!+\! H_\textrm{detect.} \!+\! H_\textrm{dissip.} \!+\! H_\textrm{sys.-det.} \!+\! H_\textrm{sys.-dis.}$,
is quadratic and so can be solved through the equation of motion method. We calculate the
conductance of the left quantum dot that probes the $\gamma_1$ tMZM. With symmetric coupling, its equilibrium conductance is related to the dot spectral function by
\begin{equation}
G_L = -\Gamma_L \frac{e^2}{h} \int\frac{d\omega}{2 \pi} \text{Im} \left\{ G^R(d^{\vphantom{\dagger}}_{L},d^{\dagger}_{L}) (\omega) \right\} \partial_{\omega}n_F(\omega),
\label{eq:conductance_spectrum}
\end{equation}
where $G^R(d^{\vphantom{\dagger}}_{L},d^{\dagger}_{L})(\omega)$ is the Fourier transform of the retarded Green function
$-i \theta(t) \big\langle \big\{ d_L(t), d_L^{\dagger}(0) \big\} \big\rangle$, $n_F(\omega)$ is the Fermi distribution function, and $\Gamma_L \!=\! \pi \rho_0 V_L^2$ is the level broadening.
The retarded Green function of the left dot from the equation of motion method \cite{Bruus-Flensberg} is
\begin{equation}
G^R(d_L,d^{\dagger}_L) (\omega) = \frac{1}{\omega + i\Gamma_L - \epsilon_L - \Sigma (\omega)},
\label{eq:igf_dissipativeless}
\end{equation}
where the self-energy $\Sigma (\omega)$ incorporates the effects of the coupling between
(i) the left dot and $\gamma_1$,
(ii) $\gamma_1$ and $\gamma_2$ Eq.\,\eqref{eq:Hsys}, and
(iii) $\gamma_2$ and the right dot, Eq.\,\eqref{eq:sys_dis}.
In the presence of the dissipative stabilizer, the self-energy is
\begin{equation}
\begin{aligned}
&\Sigma (\omega)^{-1} = \frac{\omega}{t^2_L} - \frac{1}{\omega + \epsilon_L + i\Gamma_L} \\
& - \frac{\epsilon_M^2}{t^2_L} \left(\omega - \frac{t^2_{R1}}{\omega+ \epsilon_R + i\eta} - \frac{t^2_{R1}}{\omega- \epsilon_R + i\eta} \right)^{-1}\\
&- \frac{\epsilon_M^2}{t^2_L} \left(\omega - \frac{t^2_{R2}}{\omega+ \epsilon_R + i\Gamma_R} - \frac{t^2_{R2}}{\omega- \epsilon_R + i\Gamma_R} \right)^{-1},
\end{aligned}
\label{eq:dissipative_self_energy}
\end{equation}
where $\eta $ is a positive infinitesimal and $\Gamma_R \!=\! \pi\rho_0 V_R^2$ is the level broadening of the right dot in the absence of dissipation.
With Eqs.\,\eqref{eq:igf_dissipativeless} and \eqref{eq:dissipative_self_energy}, the conductance through the left dot when $\epsilon_R =0$ is
\begin{equation}
G_L = \frac{1}{2} \frac{e^2}{h},
\label{eq:conductance_with_right_system}
\end{equation}
\emph{independent} of the values of any parameters (such as $t_{R2}$ or $\epsilon_M$). This striking independence implies, for instance, that fine tuning of the coupling between $\gamma_2$ and the right dot is \emph{not} needed, a significant experimental simplification. This result holds only within the validity of our model, of course: one should have $\epsilon_M \!\ll\! \Delta$ in order to have the tMZM pair ($\Delta$ is the superconducting gap in the proximitized nanowire) and $\epsilon_M \!\ll\!\Gamma_R$ in order to have a fMZM from frustration.
The conductance Eq.\,(\ref{eq:conductance_with_right_system}) is one of our main results. It indicates that the introduction of the $R \!=\! R_Q$ dissipative quantum dot stabilizes $\gamma_1$.
We emphasize that the stabilization of $\gamma_1$ refers to the fact that at zero temperature $\epsilon_M$ always effectively vanishes. This is true even if $\epsilon_M$ is significant initially, such as in a system with a short nanowire. This complete stabilization is uniquely guaranteed by the presence of the fMZM: it couples with $\gamma_2$ into a singlet, thus preventing the inter-tMZM coupling.
We further illustrate this point in Sec.\,\ref{sec:g_theorem} below through analysis with the g-theorem.
Fine-tuning of the energy level of the quantum dot is required only for the right-hand dot, $\epsilon_R \!=\! 0$. We do not need a fine-tuned left dot since $\epsilon_L$ is irrelevant at the nontrivial fixed point:
at this point, $\gamma_1$ and $(d^{\dagger}_L \!+\! d_L)/\sqrt{2}$ form a singlet, thus strongly suppressing the hybridization between $(d^{\dagger}_L \!+\! d_L)/\sqrt{2}$ and $(d^{\dagger}_L \!-\! d_L)/i\sqrt{2}$.
Experimentally, the irrelevance of $\epsilon_L$ is a signature of the tMZM: instead of the usual Lorentzian shape from the resonant level model, the zero temperature conductance of the
left-hand dot
is expected to be \emph{flat} as a function of $\epsilon_L$. For non-zero temperature, the conductance will be constant as long as $\epsilon_L(T) < \Gamma_L(T)$, both of which may vary with temperature because of renormalization effects.
\subsection{Dissipation-Free Right Dot}
\label{sec:dissipation_free}
To highlight the role of dissipation, we now consider what happens if there is no dissipation in the right-hand leads, $r \!=\!0$. Thus the full-transmission fixed point Hamiltonian Eq.\,(\ref{eq:Majorana}) is replaced by a second copy of the resonant level Hamiltonian Eq.\,(\ref{eq:Hdetect}).
Since the Hamiltonian remains quadratic, we again use the equation of motion method to find the retarded Green function of the left dot; the explicit form of the self-energy now becomes
\begin{equation}
\begin{aligned}
&\Sigma (\omega)^{-1} = \frac{\omega}{t^2_L} - \frac{1}{\omega + \epsilon_L + i\Gamma_L} \\
&- \frac{\epsilon_M^2}{t^2_L} \left(\omega - \frac{t^2_R}{\omega+ \epsilon_R + i\Gamma_R} - \frac{t^2_R}{\omega- \epsilon_R + i\Gamma_R} \right)^{-1}.
\end{aligned}
\label{eq:impurity_self_interaction}
\end{equation}
In the absence of dissipation, the two MZMs on the right dot become equivalent, allowing us to freely choose $\gamma_2$ to couple to any linear combination of them with the coupling strength $t_R$. The remaining MZM will be hybridized by the leads.
The conductance of the left quantum dot in this case is
\begin{equation}
G_L = \frac{e^2}{h} \frac{2 t_L^2 t_R^2 + \epsilon_M^2 \Gamma_L \Gamma_R}{4 t^2_L t^2_R + \epsilon^2_M \Gamma_L \Gamma_R},
\label{eq:conductance_dissipativeless}
\end{equation}
where $\Gamma_R \!=\! \pi \rho_0 V_R^2$ is the broadening of the right resonant level.
Eq.\,(\ref{eq:conductance_dissipativeless}) displays an interesting feature:
the equilibrium zero temperature conductance varies \emph{continuously} between the trivial ($e^2/h$) and the non-trivial ($e^2/2h$) values, depending on the details of the system.
This crossover originates from the competition between the dot-MZM couplings and the hybridization of the quantum dots by the leads.
Eq.\,\eqref{eq:conductance_dissipativeless} implies that $\epsilon_M$ is effectively controllable through fine-tuning $t_{L,R}$ and $\Gamma_{L,R}$. Indeed, this result agrees with previous investigations of quantum dots coupled to proximitized nanowires in which non-local effects produced by the two tMZM
(which are related directly to the value of $\epsilon_M$) are tunable through fine-tuning of the dot level \cite{AguadoReview17, deng_majorana_2016, ElsaPRB17, ClarkePRB17}.
We stress that in these systems the effective inter-tMZM overlap is only reduced.
In strong contrast, with the fMZM induced by dissipation in our system, this overlap
is driven by interactions to \emph{zero}.
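As a simple numerical illustration of this crossover, the following minimal Python sketch evaluates Eq.\,\eqref{eq:conductance_dissipativeless} (the function name and all parameter values are purely illustrative, not taken from the main text):
\begin{verbatim}
# Conductance crossover of Eq. (conductance_dissipativeless), in units of e^2/h:
# G_L = (2 tL^2 tR^2 + eM^2 GammaL GammaR) / (4 tL^2 tR^2 + eM^2 GammaL GammaR).
# All parameter values below are purely illustrative.

def g_left(t_L, t_R, eps_M, gamma_L, gamma_R):
    num = 2.0 * t_L**2 * t_R**2 + eps_M**2 * gamma_L * gamma_R
    den = 4.0 * t_L**2 * t_R**2 + eps_M**2 * gamma_L * gamma_R
    return num / den

print(g_left(1.0, 1.0, 0.0, 0.5, 0.5))   # eps_M = 0: 0.5, the nontrivial value
print(g_left(1.0, 1.0, 10.0, 0.5, 0.5))  # large eps_M: close to 1, the trivial value
\end{verbatim}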
\subsection{Explanation with g-Theorem}
\label{sec:g_theorem}
To summarize briefly thus far, we have shown that the conductance through the MZM-coupled left quantum dot is strongly influenced by the nature of the system on the right. (i) In the absence of a right-hand system, the two MZMs $\gamma_1$ and $\gamma_2$ couple into a trivial state for which $G_L/(e^2/h) = 1$. (ii) When the right-hand system is present but without dissipation, the conductance through the left dot is between the trivial and nontrivial values, depending on the details of the system. (iii) Finally, the nontrivial state is stabilized when the right-hand system is dissipative with $R \!=\! R_Q$, leading to the zero-temperature conductance $G_L/(e^2/h) = 1/2$.
Through the g-theorem, we now provide a simple way to understand these results. As the counterpart of the famous c-theorem of two-dimensional conformal field theory \cite{ZomolodchikovJETPLett83, CardyPLB88}, the g-theorem treats boundary phase transitions and relates the stability of the fixed points to the impurity or boundary entropy. Specifically, if the bulk parameters remain invariant during the RG flows, the flow will bring the system toward the fixed point with a smaller impurity entropy \cite{AffleckGTheoremPRL91,FriedanKonechnyPRL04}. We have calculated the ground state degeneracy of our system at the two fixed points in the three scenarios above. The results are compiled in Table\,\ref{tab:summary}, and we now discuss each scenario in turn.
\begin{table}[t]
\begin{tabular}{| c | c | c | c |}
\hline
& $t_{R} = 0$ & $t_R \neq 0$ and $R = 0$ & $t_R\neq 0$ and $R = R_Q$ \\ \hline
$g_{\text{trivial}}$ & 1 & 1 & $\sqrt{2}$ \\ \hline
$g_{\text{nontrivial}}$ & $\sqrt{2}$ & 1 & 1 \\ \hline
$G_L/(e^2/h)$ & 1 & $\dfrac{2 t_L^2 t_R^2 + \epsilon_M^2 \Gamma_L \Gamma_R}{4 t^2_L t^2_R + \epsilon^2_M \Gamma_L \Gamma_R}$ & 1/2 \\ \hline
\end{tabular}
\caption{System characteristics at different fixed points. Here,
$g_{\text{trivial}}$ and $g_{\text{nontrivial}}$ are the degeneracies of the trivial and nontrivial fixed points, respectively, and $G_L$ is the zero temperature conductance through the left dot.
For simplicity we have used $t_R \!=\! 0$ to label the decoupling of the right dot.}
\label{tab:summary}
\end{table}
(i) If the right-hand system is absent ($t_R \!=\! 0 $) and $\epsilon_M \!\neq 0$, the trivial fixed point is non-degenerate: the two tMZMs $\gamma_1$ and $\gamma_2$ form into a singlet and the left quantum dot is completely hybridized with the leads. In contrast, the nontrivial fixed point has a decoupled tMZM, namely $\gamma_2$, yielding a ground state degeneracy $\sqrt{2}$. The g-theorem then implies, in agreement with the conductance calculation above, that the nontrivial fixed point is unstable.
Alternatively, we notice that the leading operator at the nontrivial fixed point is the hybridization between the leads and
$(d^{\dagger}_{L} \!+\! d^{\vphantom{\dagger}}_{L})/\sqrt{2}$, which has the scaling dimension $1/2$. Consequently, the hybridization operator is relevant and sabotages the nontrivial fixed point.
(ii) If the right-hand system is a dissipationless quantum dot, the ground states of both the trivial and nontrivial fixed points are non-degenerate.
Consequently, the operator that connects these two fixed points is marginal, leading to a crossover between the trivial and nontrivial fixed points. This crossover is reflected in the intermediate value of the conductance, Eq.\,(\ref{eq:conductance_dissipativeless}).
From an RG point of view, because the parity of the superconducting island is conserved, tunneling happens simultaneously in the left and right quantum dots (see Appendix for details). Thus the scaling dimension of the hybridization doubles compared to case (i), rendering it marginal.
(iii) Finally, when the right quantum dot is dissipative, at the nontrivial fixed point the frustration-induced fMZM $\chi_1$ couples to $\gamma_2$ in a singlet, thus leading to a non-degenerate ground state.
In contrast, at the trivial fixed point, $\chi_1$ remains isolated, yielding degeneracy $\sqrt{2}$.
Consequently, the g-theorem predicts that the RG flow brings the system toward the \emph{nontrivial} fixed point.
Alternatively, the lead-dot hybridization is suppressed by the dissipation and so has a larger scaling dimension than in case (ii). The hybridization thus becomes irrelevant (for any $R \neq 0$) and the Majorana feature is protected.
This protection uniquely exists in a dissipative or an interacting system where a non-trivial (i.e., the Majorana-like) residual entropy has been added to the system through the frustration between two competing dissipative leads.
\section{Full Counting Statistics}
\label{sec:full_counting}
In the previous section, we investigated the zero-temperature linear-response conductance through the left quantum dot. Our calculation indicates that the effective $\epsilon_M$ exactly vanishes when $\gamma_2$ couples to a dissipative quantum dot,
thus completely stabilizing $\gamma_1$ from the inter-tMZM coupling.
In this section, we instead investigate the \textit{non-equilibrium} current and shot noise of the model at different fixed points with full counting statistics \cite{LevitovReznikovPRB04,GogolinKomnikPRB06, KamenevBook}.
Since analysis in previous sections indicates that $t_{R2}$ is RG irrelevant in the presence of finite $t_{R1}$, we take $t_{R2} \!=\! 0$ for simplicity.
\subsection{Full Counting Statistics in the Majorana-Fermion-Coupled Resonant Level Model}
In full counting statistics, the current and noise are calculated through \cite{LevitovReznikovPRB04}
\begin{equation}
I = \frac{e \left\langle \delta q \right\rangle}{\tau},\ \ \ \text{and} \ \ \
S = \frac{2e^2 \left\langle\delta^2 q \right\rangle}{\tau},
\label{eq:current_and_noise_from_fcs}
\end{equation}
where the moments
\begin{equation}
\left\langle \delta^n q \right\rangle = (-i)^n \frac{\partial^n}{\partial \lambda^n} \ln \chi(\lambda) \Big\vert_{\lambda = 0}
\label{eq:fcs_presciption}
\end{equation}
characterize the charge correlations.
In Eq.\,\eqref{eq:fcs_presciption},
the generating function $\chi(\lambda) = \sum_{q} e^{i q \lambda} P_q (\tau)$ includes tunneling events to all orders ($q \in \mathbb{Z}_{\geq 0}$), where $\lambda$ is the measuring field and $P_q(\tau)$ is the probability that charge $q e$ tunnels through the barrier(s) during the measuring time $\tau$. Practically, in 1D systems $\ln \chi(\lambda) \!=\! -i \tau U(\lambda,-\lambda)$ where $U(\lambda,-\lambda)$ is the adiabatic potential.
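As a standard illustration of this prescription (not specific to the model studied here), for a single spinless channel with energy-independent transmission probability $\mathcal{T}$ at zero temperature and bias $V$, the generating function takes the binomial form $\ln \chi(\lambda) \!=\! \frac{eV\tau}{h}\ln\!\left[1 + \mathcal{T}\left(e^{i\lambda}-1\right)\right]$ \cite{LevitovReznikovPRB04}, and Eqs.\,\eqref{eq:current_and_noise_from_fcs} and \eqref{eq:fcs_presciption} then reproduce the familiar results
\begin{equation*}
I = \frac{e^2}{h}\,\mathcal{T} V, \qquad S = 2\frac{e^3}{h}\,\mathcal{T}(1-\mathcal{T})\, V .
\end{equation*}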
For the left quantum dot of our system (the detector), the adiabatic potential of the resonant level model is
\begin{equation}
\begin{aligned}
\frac{\partial}{\partial \lambda_-} U(\lambda_-,\lambda_+) &\\
= \frac{V^2_L}{2} \int d\omega & \left[e^{-i (\lambda_- - \lambda_+)/2} G^{< }(d_L, d_L^{\dagger}) g_{L\alpha}^{+-} \right. \\
& \left. - e^{i (\lambda_- - \lambda_+)/2} g_{L\alpha}^{- + } G^{>}(d_L, d_L^{\dagger}) \right],
\end{aligned}
\label{eq:adiabatic_potential}
\end{equation}
where $G^{>}(d, d^{\dagger})$ [$G^{<}(d, d^{\dagger})$] is the full greater (lesser) impurity Green function and $g_{L\alpha}$ is the bare Green function for the source or drain lead ($\alpha \!=\! S$ or $D$). The four components of the bare lead Green function are given by \cite{GogolinKomnikPRB06,KamenevBook}
\begin{equation}
\begin{aligned}
g^{--}_{L\alpha} (\omega) & = g_{L\alpha}^{++} (\omega) = i 2 \pi \rho_0 (n_{\alpha} -1/2), \\
g_{L\alpha}^{-+} (\omega) & = i 2\pi \rho_0 n_{\alpha}, \\
g^{+-}_{L\alpha}(\omega) & = -i 2\pi \rho_0 ( 1 - n_{\alpha}),
\end{aligned}
\label{eq:bare_lead_gfs}
\end{equation}
where $n_{S,D} = n_F(\epsilon \pm V/2)$ is the distribution function of the leads and $V>0$ is the bias applied between source and drain.
At zero temperature, these distribution functions simplify to $n_S = \Theta(-\epsilon - V/2)$ and $n_D = \Theta(-\epsilon + V/2)$.
To calculate the full Green functions $G^{<}$ and $G^{>}$, we divide the Hamiltonian into two parts
$H = H_0 + \delta H $,
where $H_0 = H_{\text{sys.}} + H_{\text{detect.}} + H_{\text{dissip.}} + H_{\text{sys.-dis.}}$ contains the non-equilibrium resonant level model (the detector) plus the other parts of the system,
while $\delta H = H_{\text{sys.-det}}$ connects these two parts.
When $\delta H\!=\!0$ ($t_L \!=\! 0$), these two parts are disconnected, and we can calculate their Green functions separately.
The non-equilibrium Green function matrix $G_0(d_L,d^{\dagger}_L)$ of the resonant level Hamiltonian $H_\textrm{detect.}$ is known \cite{GogolinKomnikPRB06},
\begin{equation}
\begin{aligned}
G_0(d_L,d_L^{\dagger})(\omega) & =\left[\begin{array}{cc}
G^R_0 (d_L,d_L^{\dagger})(\omega) & G^<_0 (d_L,d_L^{\dagger})(\omega) \\
G^>_0 (d_L,d_L^{\dagger})(\omega) & G^A_0 (d_L,d_L^{\dagger})(\omega)
\end{array}\right] \\
&=\frac{1}{\omega^2 + \Gamma_L^2 e^{i\lambda}}\Big(
\begin{array}{cc}
\omega - i \Gamma_L & i e^{i \lambda} \Gamma_L \\
-i \Gamma_L & -\omega - i \Gamma_L
\end{array}
\Big),
\end{aligned}
\label{eq:free_gf}
\end{equation}
with the four entries referring to the retarded, lesser, greater and advanced Green functions, respectively (read left to right, top to bottom). Note that the measuring field $\lambda$ appears.
The remaining parts of the system ($H_{\text{sys.}} \!+\! H_{\text{dissip.}} \!+\! H_{\text{sys.-dis.}}$), on the other hand, remain in equilibrium when $t_L \!=\! 0$. We thus obtain the equilibrium retarded Green function of $\gamma_1$,
\begin{equation}
G_0^R(\gamma_1,\gamma_1)(\omega) = \frac{\omega^2 - t_R^2}{\omega (\omega^2 - t_R^2 - \epsilon_M^2)}.
\label{eq:rest_part_free_gf}
\end{equation}
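As a consistency check, in the decoupled limit $\epsilon_M \to 0$ Eq.\,\eqref{eq:rest_part_free_gf} reduces to $1/\omega$, the propagator of a free Majorana mode; for $\epsilon_M \neq 0$ a fraction $\epsilon_M^2/(t_R^2+\epsilon_M^2)$ of the spectral weight is shifted from $\omega = 0$ to the poles at $\omega = \pm\sqrt{t_R^2 + \epsilon_M^2}$.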
With $\delta H$, we calculate the full lesser Green function through the Dyson equation \cite{KamenevBook,GogolinKomnikPRB06,SelaNoneqQdotsPRB09}
\begin{equation}
\begin{aligned}
G^< & = G^R \Sigma^< G^A \\
&+ (1 + G^R \Sigma^R) G^<_0 (1 + \Sigma^A G^A).
\end{aligned}
\label{eq:gf_retarded_relation}
\end{equation}
After including all possible processes, Eq.\,\eqref{eq:gf_retarded_relation} becomes
\begin{widetext}
\begin{equation}
\begin{aligned}
G^<(d_L,d_L^{\dagger})(\omega) & = G^<_0(d_L,d_L^{\dagger})(\omega) + G^R(d_L,\gamma_1) \Sigma^R_{\gamma_1,d_L} G_0^<(d_L,d_L^{\dagger}) \Sigma^A_{d_L^{\dagger},\gamma_1} G^A(\gamma_1,d_L^{\dagger})(\omega) \\
&\ \ \ \ + G^R(d_L,\gamma_1) \Sigma^R_{\gamma_1,d^{\dagger}_L} G_0^<(d^{\dagger}_L,d_L) \Sigma^A_{d_L,\gamma_1} G^A(\gamma_1,d_L^{\dagger})(\omega) \\
&\ \ \ \ + G^R(d_L,\gamma_1)(\omega) \Sigma^R_{\gamma_1,d_L} G^<_0 (d_L,d_L^{\dagger}) + G^<_0(d_L ,d_L^{\dagger}) \Sigma^A_{d_L^{\dagger}, \gamma_1} G^A(\gamma_1,d_L^{\dagger})(\omega),
\end{aligned}
\label{eq:second_relation}
\end{equation}
\end{widetext}
where the interaction terms $\Sigma^A_{d_L^{\dagger},\gamma_1} \!=\! \Sigma^R_{\gamma_1,d_L} \!=\! -\Sigma^R_{\gamma_1,d_L^{\dagger}} \!=\! - \Sigma^A_{d_L,\gamma_1}\!=\! t_L$, and the impurity function $G_0^<(d_L^{\dagger},d_L)$ equals
\begin{equation}
\begin{aligned}
G_0^<(d_L^{\dagger},d_L)(\omega) & = \frac{-i \Gamma_L e^{-i\lambda}}{\omega^2 +\Gamma_L^2 e^{-i\lambda}}.
\end{aligned}
\label{eq:several_free_gfs}
\end{equation}
Eq.\,(\ref{eq:second_relation}) contains two additional Green functions,
$G^R(d_L,\gamma_1)(\omega)$ and $G^A(\gamma_1,d_L^{\dagger})(\omega)$,
that can also be found from Dyson equations:
\begin{equation}
\begin{aligned}
G^R(d_L,\gamma_1) (\omega) & = 0 + G^R_0(d_L,d_L^{\dagger})(\omega) \Sigma_{d_L^{\dagger},\gamma_1} G^R(\gamma_1,\gamma_1)(\omega) , \\
G^R(\gamma_1,\gamma_1)(\omega) & = G^R_0(\gamma_1,\gamma_1)(\omega) \\
&+ G^R_0(\gamma_1,\gamma_1)(\omega) \Sigma_{\gamma_1,d_L} G^R(d_L,\gamma_1)(\omega) \\
&+ G^R_0(\gamma_1,\gamma_1)(\omega) \Sigma_{\gamma_1,d_L^{\dagger}} G^R(d_L^{\dagger},\gamma_1)(\omega) , \\
G^R(d_L^{\dagger},\gamma_1)(\omega) & = 0 + G^R_0(d_L^{\dagger}, d_L)(\omega) \Sigma_{d_L,\gamma_1} G^R(\gamma_1,\gamma_1)(\omega).
\end{aligned}
\label{eq:linear_equations}
\end{equation}
Eqs.\,\eqref{eq:free_gf}-\eqref{eq:linear_equations} yield the lesser Green function required in Eq.\,\eqref{eq:adiabatic_potential}.
Following analogous steps, we also obtain the greater Green function. Combining these with the
bare lead Green functions Eq.\,\eqref{eq:bare_lead_gfs},
we obtain the adiabatic potential and the generating function from Eq.\,\eqref{eq:adiabatic_potential}. This finally allows evaluation of the current and noise, Eq.\,\eqref{eq:current_and_noise_from_fcs} \cite{NoteBook}.
Below we present expressions for the non-equilibrium current and noise in different cases corresponding to different fixed points. Though the calculation sketched here yields the full nonlinear dependence on the bias $V$,
for simplicity we expand the expressions to leading order in $V$ \cite{NoteBook}, under the assumption that the bias $V$ is smaller than all other relevant energy scales.
\subsection{Non-Interacting tMZMs ($\epsilon_M = 0$)}
\label{sec:non_interacting_M}
We begin with the simple case $\epsilon_M \!=\! 0$.
In this case, $\gamma_1$ is stable since it totally decouples from its partner $\gamma_2$, and the result is thus independent of the presence of a stabilizer.
The current calculated from the adiabatic potential in this case is
\begin{equation}
I(V ) = \frac{e^2}{h} \left[\frac{V}{2} - \left(\frac{1}{\Gamma_L^2} - \frac{\Gamma_L^2}{4t_L^4} \right) \frac{e^2}{24} V^3 + \mathcal{O}(V^4) \right],
\label{eq:current_decoupled_MFs}
\end{equation}
where $\mathcal{O}(V^4)$ indicates the neglected higher-order terms. Note that in agreement with Eq.\,(\ref{eq:conductance_dissipativeless}) the linear-response conductance from this calculation is $e^2/2h$. As discussed above and in \cite{DongPRBR11}, this comes about because one of the left dot's Majorana degrees of freedom is fully coupled to the tMZM while the other fully hybridizes with the lead to form a transparent half-channel.
Interestingly, the cubic term [$\mathcal{O}(V^3)$] in (\ref{eq:current_decoupled_MFs}) displays a competition between two processes: (i) backscattering in the transparent half-channel which reduces the current and (ii) hybridization of the left leads with $(d_L \!+ d^{\dagger}_L)/\sqrt{2}$, pulling it away from the tMZM and thereby enhancing the current.
More specifically, when $\Gamma_L^2 \!>\! 2 t_L^2$ the hybridization is stronger, so that the $\mathcal{O}(V^3)$ contribution to the current increases with increasing bias, and vice versa.
Note that the presence of this competition requires two Majorana channels in opposite limits---one completely healed and the other totally disconnected. To the best of our knowledge, such a situation exists only in systems that are topological.
Turning to the fluctuations of the current, we find that the zero-temperature shot noise to leading order is
\begin{equation}
S = 2\frac{e^3}{h} \left[\frac{V}{4} + \left(-\frac{1}{\Gamma_L^2} - \frac{\Gamma_L^2}{4 t_L^4} +
\frac{1}{t_L^2} \right)\frac{e^2}{48} V^3 + \mathcal{O}(V^4) \right],
\label{eq:shot_noise_decoupled}
\end{equation}
with the leading term proportional to bias.
This linear term is a signal that the transmission is \emph{not} perfect: for a system with perfect transmission, the leading term should instead be $\propto V^3$ [see Eq.\,\eqref{eq:shot_noise_interacting_MFs} for an example when $\epsilon_M \neq 0$].
The competing effects seen in the nonlinear current also appear here.
Indeed, the noise at next-to-leading order [i.e., $\mathcal{O}(V^3)$] reaches its maximum when the two competing processes are equal (i.e., $\Gamma_L^2 \!=\! 2t_L^2$).
This point, with half transmission probability, is known to have the largest shot noise \cite{IhnBook}.
Near equilibrium, the Fano factor, $F\!\equiv\! S/2eI$, becomes $1/2$, implying that the current is carried by quasi-particles with effective charge $e^* \!=\! e/2$ at zero temperature.
As in the charge 2CK case discussed in \cite{LandauCornfeldSelaPRL18}, here the $e^* \!=\! e/2$ fractional charge property originates from the fact that one of the Majorana channels in the leads completely decouples (Fig.\,\ref{fig:lead_majoranas}).
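Explicitly, keeping only the leading terms of Eqs.\,(\ref{eq:current_decoupled_MFs}) and (\ref{eq:shot_noise_decoupled}),
\begin{equation*}
F \;\equiv\; \frac{S}{2 e I} \;\simeq\; \frac{2\,\frac{e^3}{h}\,\frac{V}{4}}{\,2e\,\frac{e^2}{h}\,\frac{V}{2}\,} \;=\; \frac{1}{2},
\end{equation*}
consistent with the effective charge $e^{*} \!=\! e/2$ quoted above.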
\subsection{Interacting tMZMs without Stabilization}
\label{sec:interacting_M_decoupled_R}
Now we add back the interaction between two tMZMs ($\epsilon_M \!\neq\! 0$), but do not include the
right-hand quantum dot
($t_R = 0$) as the stabilizer.
The nonlinear current calculated with full counting statistics now becomes
\begin{equation}
I (V) = \frac{e^2}{h} \left[V - \frac{1 + 2 t_L^2/\epsilon_M^2 + 2t_L^4/\epsilon_M^4}{12\Gamma_L^2}e^2 V^3 + \mathcal{O}(V^4) \right],
\label{eq:current_interacting_MFs}
\end{equation}
with the trivial equilibrium conductance $e^2/h$, in agreement with Ref.\,\cite{DongPRBR11}.
In contrast to Eq.\,(\ref{eq:current_decoupled_MFs}), there is no sign of competing effects in the nonlinear term. Indeed, at a perfectly conducting fixed point, non-equilibrium effects necessarily reduce the conductance, so the sign of the $V^3$ term is fixed.
The shot noise now becomes
\begin{equation}
S =2 \frac{e^3}{h} \left[ \left(\frac{ t_L^4}{\epsilon_M^4} + 1 \right) \frac{e^2}{12 \Gamma_L^2} V^3 + \mathcal{O}(V^4) \right],
\label{eq:shot_noise_interacting_MFs}
\end{equation}
with the leading term $\propto\! V^3$.
Eq.\,(\ref{eq:shot_noise_interacting_MFs}) contains both back\-scattering-induced noise and noise generated by the coupling between $\gamma_1$ and $i(-d_L \!+\! d^{\dagger}_L)$.
Notice the factor $\epsilon_M^4$ in the denominator here as well as in the nonlinear current Eq.\,\eqref{eq:current_interacting_MFs}. The high power indicates that $\epsilon_M$ is RG relevant and drives the system to a different fixed point when it is non-zero.
Since the linear term in the shot noise vanishes, here the Fano factor can be defined as the ratio between shot noise and twice the value of the ``backscattering current" as in the charge 2CK model \cite{LandauCornfeldSelaPRL18}. Using the $V^3$ terms in both the current and the shot noise, Eqs.\,(\ref{eq:current_interacting_MFs}) and (\ref{eq:shot_noise_interacting_MFs}), we see that the Fano factor depends on the ratio $t_L /\epsilon_M$. We prefer not to discuss an effective charge of the quasi-particle in this case since the inelastic contributions to the nonlinear terms complicate the interpretation \cite{SelaNoiseKondoPRL06}.
The variation of the Fano factor indicates that at the quantum dot, an electron either backscatters or tunnels into the superconducting island.
\subsection{Interacting tMZMs Stabilized by Frustration}
Comparison of the results in these two cases (Sections \ref{sec:non_interacting_M} and \ref{sec:interacting_M_decoupled_R}) supports our analysis that the coupling $\epsilon_M$ between the two tMZMs is relevant, leading to the destruction of the Majorana signature in the detector. In this section we add the frustration-induced Majorana fermion $\chi_1$ and show that it stabilizes the probed tMZM $\gamma_1$.
In this case, the model in Section \ref{sec:Cond-dissip} yields the current
\begin{equation}
\begin{aligned}
I (V) = \frac{e^2}{h} &\left[\frac{V}{2} + \left(\frac{\epsilon_M^4 \Gamma_L^2}{t_L^4 t_R^4} + \frac{2\epsilon_M^2 \Gamma_L^2}{t_L^4 t_R^2} + \frac{\Gamma_L^2}{t_L^4} - \frac{4}{\Gamma_L^2} \right) \frac{e^2}{96} V^3 \right. \\ & \qquad + \mathcal{O}(V^4) \Big],
\end{aligned}
\label{eq:current_tr_on}
\end{equation}
with the expected linear-response conductance $e^2/2h$, Eq.\,\eqref{eq:conductance_with_right_system}. Note further that the full $I(V)$ reduces to Eq.\,(\ref{eq:current_decoupled_MFs}) when $\epsilon_M \!=\! 0$.
The fact that $\epsilon_M$ is in the numerator in the nonlinear term shows that it is RG irrelevant. This is, then, verification of our analysis that the presence of $\chi_1$ stabilizes $\gamma_1$ against the inter-tMZM coupling $\epsilon_M$.
The shot noise now becomes
\begin{equation}
\begin{aligned}
S & = 2 \frac{e^3}{h} \left[ \frac{V}{4} + \left( - \frac{1}{4 \Gamma_L^2} - \frac{\Gamma_L^2}{16 t_L^4} + \frac{1}{4t_L^2} \right. \right. \\
& \left.\left. -\frac{\epsilon_M^4 \Gamma_L^2}{16 t_L^4 t_R^4} - \frac{\epsilon_M^2 \Gamma_L^2}{8 t_L^4 t_R^2} + \frac{\epsilon_M^2}{4 t_L^2 t_R^2} \right) \frac{e^2}{12} V^3 + \mathcal{O}(V^4) \right].
\end{aligned}
\label{eq:shot_noise_tr_on}
\end{equation}
In the $V^3$ term, the inter-tMZM coupling $\epsilon_M$ again appears only in the numerator, showing its RG irrelevance. This allows a smooth $\epsilon_M \to 0$ limit yielding, indeed,
Eq.\,(\ref{eq:shot_noise_decoupled}). The shot noise here retains a non-vanishing linear term. As in the $\epsilon_M \!=\! 0$ case, from the Fano factor we deduce tunneling of quasi-particles with effective charge $e^* \!=\! e/2$.
These results for both the nonlinear transport $I(V)$ and the shot noise, then, support our equilibrium result (Section \ref{sec:conductance}) that $\chi_1$ stabilizes the tMZM $\gamma_1$ against its coupling to $\gamma_2$.
\section{Summary}
\label{sec:summary}
The effect that we elucidate here arises in essence from combining a Majorana degree of freedom due to topology with a Majorana produced by fine-tuned quantum frustration (i.e.\ interactions). This is a highly unusual situation: we connect a topological system with one that is not topological and find no unusual feature at the boundary (and in particular no MZM).
We arrive at this result by, first, calculating the conductance through a detector quantum dot and, second, supporting the calculation through RG arguments and the g-theorem.
The combination of the two types of MZM originates from the preference of the system for a non-degenerate ground state, arrived at by coupling the topological Majorana $\gamma_2$ and the quantum dot Majorana fermion $\chi_1$ into a singlet.
Our conclusions are further supported by a fully non-equilibrium calculation in which the nonlinear current and shot noise reveal signatures of the RG relevant and irrelevant processes.
Perhaps the most interesting implication of our results for future work is that
the long-ignored frustration-generated Majorana fermion is in principle detectable.
Indeed, it could potentially be beneficial for quantum computation.
\bigskip
\emph{Acknowledgements---}
We thank Ruixing Zhang for fruitful discussions.
The work at Duke was supported by the U.S.\ DOE Office of Science, Division of Materials Sciences and Engineering, under Grant No.\,{DE-SC0005237}.
\begin{appendix}
\section{Parity Conservation Induced Joint Tunneling}
\label{sec:parity_conservation}
In this appendix, we briefly explain a striking feature caused by the presence of tMZMs: parity conservation induced \emph{joint tunneling} at both sides of the superconducting nanowire.
We consider the system near the strong coupling fixed point, where the impurity part consists of two tMZMs ($\gamma_{1,2}$) and two un-hybridized impurity Majorana fermions $(d^{\dagger}_{L,R} \!+\! d_{L,R})/\sqrt{2}$.
For the nontrivial case ($\epsilon_M \!=\! 0$), the quantum dot MZMs $(d^{\dagger}_{L,R} \!+\! d_{L,R})/\sqrt{2}$ form singlets with their corresponding tMZMs $\gamma_{1,2}$. For simplicity, we label this state $|0\rangle_L \!\otimes\! |0\rangle_R $. We further label the excited state on each side as $|1\rangle_{L,R}$, respectively.
Now we add the weak inter-tMZM coupling $\epsilon_M$, which alters the system ground state. In this case we find the ground state of the impurity system with the Hamiltonian
\begin{equation}
H_{\text{Impurity}}' = i t_L (d^{\dagger}_L + d_L) \gamma_1 + i t_R (d^{\dagger}_R + d_R) \gamma_2 + i\epsilon_M \gamma_1 \gamma_2
\label{eq:effective_impurity_hamiltonian}
\end{equation}
through exact diagonalization. Without loss of generality, we take $t_L \!=\! t_R \!=\! t$ for simplicity. The ground state has energy $\sqrt{2} t - \sqrt{\epsilon_M^2 + 8 t^2}/2$ with eigenvector
\begin{equation}
\begin{aligned}
\frac{ i \left(-2\sqrt{2} u + \sqrt{1 + 8u^2}\right) |0\rangle_L \!\otimes\! |0\rangle_R + \, | 1\rangle_L \!\otimes\! |1\rangle_R }
{ \sqrt{1 + \left(-2 \sqrt{2} u +\sqrt{1 + 8 u^2}\right)^2}},
\end{aligned}
\label{eq:mid_impurity_es}
\end{equation}
where $u\equiv t/\epsilon_M$.
Eq.\,\eqref{eq:mid_impurity_es} correctly reduces to $|0\rangle_L \!\otimes\! |0\rangle_R$ in the limit $\epsilon_M \to 0$.
Eq.\,(\ref{eq:mid_impurity_es}) indicates that the system ground state involves dot states with only even parity $|0\rangle_L \!\otimes\! |0\rangle_R$ and $|1\rangle_L \!\otimes\! |1\rangle_R$. Physically, this originates from the fact that the inter-tMZM coupling operator $\epsilon_M \gamma_1 \gamma_2$ changes the parity of the impurity states at \emph{both} sides.
Consequently, if we now turn on weak tunneling to the corresponding leads in both quantum dots (the resonant level model), the system allows only tunneling events that occur \emph{simultaneously} at both sides, thus doubling the scaling dimension of the leading-order terms.
\end{appendix}
\section{Introduction}
Understanding the statistical properties of a certain dynamical system is of
fundamental importance in many problems coming from pure and applied
mathematics, as well as in developing applications to other sciences.
\medskip
In this article, we will focus on the concept of \textit{statistical
stability} of a dynamical system, \textit{i.e.}, how its statistical
features change when the system is perturbed or modified. The interest in
this question is clearly motivated by the need to control how much, and
to what extent, approximations, external perturbations and uncertainties can affect the qualitative
and quantitative analysis of its dynamics.
\medskip
Statistical properties of the long-term evolution of a system are reflected,
for instance, by the properties of its invariant measures. When the system
is perturbed, it is then useful to understand, and be able to predict, how
the relevant\footnote{%
The concept of \textit{relevant} is strictly related to the analysis that is
carried out. Hereafter, we will be interested in the so-called \emph{physical
measures} (see footnote \ref{notap} or \cite{Y}). In other contexts, other
kinds of measures might be considered, for example, the so-called measures
of maximal entropy.} invariant measures change by the
effect of the perturbation, \textit{i.e.}, what is called the \textit{%
response} of the system to the perturbation. In particular, it becomes
important to get quantitative estimates on their change by effect of the
perturbation, as well as understanding the \textit{regularity} of their
behavior, for instance differentiability, Lipschitz or H\"{o}lder
dependence, etc...
\medskip
These ideas can be applied to many kinds of systems and these concepts can
be studied in many different ways. In this paper we will consider \emph{%
discrete deterministic dynamical systems} and \emph{\ deterministic
perturbations. }
\smallskip
More specifically, we will consider systems of the kind $(X,T_{0})$, where $%
X $ is a compact metric space and $T_{0}:X\rightarrow X$ a map, whose
iterations determine the dynamics; we investigate perturbed systems $%
\{(X,T_{\delta })\}_{\delta \in \lbrack 0,\overline{\delta })}$, where $%
T_{\delta }:X\rightarrow X$ are such that $T_{\delta }\rightarrow T_{0}$, as
$\delta \rightarrow 0$, in some suitable topology.
\smallskip
For each $\delta \in \lbrack 0,\overline{\delta })$ let $\mu _{\delta }$ be
an invariant Borel probability measure for the system $(X,T_{\delta })$; we
aim to get information on the regularity of this family of measures, by
investigating the regularity of the map $\delta \longmapsto \mu _{\delta }$.
This notion of regularity might depend on the topology with which the space
of measures is equipped. In this paper we will be interested in absolutely
continuous measures with the $L^{1}$ norm, as well as in the whole space of
Borel probability measures ${\mathcal{P}}(X)$, endowed with a suitable weak norm,
see subsection \ref{sec1.1} for more details.
\medskip
We say that $(X,T_{0},\mu _{0})$ is \emph{statistically stable} (with
respect to the considered class of perturbations) if this map is continuous
at $\delta =0$ (with respect to the chosen topology on the space of measures
in which $\mu_0$ is perturbed). \emph{Quantitative statistical stability} is
provided by quantitative estimates on its modulus of continuity.
\smallskip
Differentiability of this map at $\delta =0$ is referred to by saying that
the system has \emph{linear response } to a certain class of perturbations.
Similarly, higher derivatives and higher degrees of smootheness can be
considered.
\medskip
These questions are by now well understood in the case of uniformly
hyperbolic systems, where Lipschitz and, in some
cases, differentiable dependence of the relevant (physical) invariant
measures on the considered perturbation has been established (see, for example, \cite%
{BB} for a recent survey on linear response under deterministic
perturbations, or the introduction in \cite{GS} for a survey focused on
higher-order terms in the response and for results in the stochastic
setting).
\smallskip
For systems that do not display uniformly hyperbolic behavior, in the presence of
discontinuities, or more complicated perturbations, much less is known and
results are limited to particular classes of systems; see, for instance,
\cite{ASsu} for a general survey and \cite{A}, \cite{AV}, \cite{BV}, \cite%
{BT}, \cite{BBS}, \cite{BKL}, \cite{BK2}, \cite{BS}, \cite{BS2}, \cite{Dol}, \cite{D2},
\cite{D3}, \cite{GL}, \cite{Gmann}, \cite{Gpre}, \cite{Ko}, \cite{KL}, \cite%
{met}, \cite{Lin}, \cite{LS}, \cite{SV}, \cite{zz} for other results %
{about statistical stability} for different classes of
systems.
We point out a particular kind of deterministic perturbation which will be considered in this paper: the spatial discretization. In this perturbation, one considers a discrete set in the phase space and replaces the map $T$ with its composition with a projection to this discrete set.
This is what happens for example when we simulate the behavior of a system by iterating a map on our computer, which has a finite resolution, so that each iterate is subject to numerical truncation. This perturbation changes the system into a periodic one, destroying many features of the original dynamics; yet this kind of simulation is quite reliable in many cases when the resolution is large enough, and such simulations are widely used in the applied sciences. Why and under which assumptions these simulations are reliable or not is an important mathematical problem, which is still largely unsolved. Few
rigorous results have been found so far about the stability under spatial discretization (see e.g. \cite{Bo}, \cite{GB}, \cite{Gu}, \cite{Gu2}, \cite{mier}). We refer to Section \ref{sectrunc} for a more detailed discussion on the subject.
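To fix ideas, the following minimal Python sketch implements such a discretization of a circle rotation: the map is composed with the projection onto a uniform grid of $N$ points, and the resulting (eventually periodic) orbit is compared with the uniform distribution. The grid size, rotation number and number of iterates are illustrative choices only, not tied to any result below.
\begin{verbatim}
# Minimal sketch of a spatial discretization of a circle rotation.
# N, alpha and the number of iterates are illustrative choices.
import math

N = 2**16                        # grid resolution (illustrative)
alpha = (math.sqrt(5) - 1) / 2   # golden-mean rotation number (illustrative)

def T_delta(x):
    # one step of the discretized rotation: rotate, then project to the grid
    return (round(((x + alpha) % 1.0) * N) % N) / N

x, n_iter = 0.0, 10**5
counts = [0] * 100
for _ in range(n_iter):
    x = T_delta(x)
    counts[int(x * 100)] += 1

# crude discrepancy of the empirical measure from Lebesgue on 100 bins
disc = max(abs(c / n_iter - 1 / 100) for c in counts)
print("max deviation from uniform over 100 bins:", disc)
\end{verbatim}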
\smallskip
The majority of results on statistical stability are established for systems
that are, in some sense, \textit{chaotic}. There is indeed a general
relation between the speed of convergence to the equilibrium of a system
(which reflects the speed of \textit{mixing}) and the quantitative aspects of its statistical stability
(see \cite{Gpre}, Theorem 5).
\medskip
In this paper we consider a class of systems that are not chaotic at all,
namely the {\emph{diffeomorphisms of the circle}}. We believe that they provide
a good model to start pushing forward this analysis.
{In particular, we will start our discussion by investigating the case of {\it rotations of the circle}, and then explaining how to generalize the results to the case of circle diffeomorphisms (see section \ref{sec:stabdiff}).}
\medskip
We prove the following results.
\begin{enumerate}
\item The statistical stability of irrational rotations under perturbations
that are small in the uniform convergence topology. Here stability is proved
with respect to a weak norm on the space ${\mathcal{P}}(X)$, related to the
so-called Wasserstein distance; see Theorem \ref{statstab}.
\item H\"{o}lder statistical stability for Diophantine rotations under the
same kind of perturbations, where the H\"{o}lder exponent depends on the
Diophantine type of the rotation number. See Theorem \ref{stst2} for the
general upper bounds\ and Proposition \ref{berlusconi} for examples showing
these bounds are in some sense sharp.
\item Differentiable behavior and linear response for Diophantine rotations,
under smooth perturbations that preserve the rotation number; for general
smooth perturbations the result still holds, but for a Cantor set of
parameters (differentiability in the sense of Whitney); see Theorem \ref{KAMandResp} and Corollary \ref{corKAM}.
\item We extend these qualitative and quantitative stability results to diffeomorphisms of the circle {satisfying suitable assumptions}; see Theorems \ref{stadiff} and \ref{quantdiff}.
\item We prove the statistical stability of diffeomorphisms of the circle
under spatial discretizations and numerical truncations, also providing quantitative estimates on the ``error'' introduced by the discretization.
\end{enumerate}
{We believe that the general statistical stability picture here described for rotations is analogous to the one found, in different settings, for example in \cite{BBS, BS2, BS1, LS} (see also \cite[Section 4]{BB}), where one has a smooth behavior for the response of statistical properties of the system to perturbations not changing the topological class of the system ({\it i.e.}, changing the system to a topologically conjugated one), while we have less regularity, and in particular H\"{o}lder behavior, if the perturbation is allowed to change it. In our case, the rotation number plays the role of determining the topological class of the system.}
Some comments on the methodology used to establish these results. As far as
items 1 and 2 are concerned, we remark that since rotations are not mixing,
the general relation between the speed of convergence to the equilibrium and
their statistical stability, that we have recalled above, cannot be applied.
However, we can perform some analogous construction considering the speed of
convergence to the equilibrium of the Ces\`{a}ro averages of the iterates of
a given measure, which leads to a measure of the speed of convergence of the
system to its ergodic behavior (see Lemma \ref{stablemma}). Quantitative
estimates of this speed of the convergence -- and hence our quantitative
stability statement, Theorem \ref{stst2} -- are obtained by means of the
so-called Denjoy-Koksma inequality (see Theorem \ref{DK}).
\smallskip
On the other hand, results in item 3 are obtained as an application of KAM
theory for circle maps (see Theorem \ref{KAMVano}), with a particular focus
on the dependence of the KAM-construction on the perturbative parameter. In
Section \ref{KAMsection} we provide a brief introduction on this subject.%
\newline
The extension of the statistical stability results established for rotations
to{ circle diffeomorphisms} (item 4) is done again by
{combining our results for irrational rotations with the general theory of linearization of circle diffeomorphims, including Denjoy theorem, KAM theory and Herman-Yoccoz general theory (see section \ref{secconj})}.
The final application to spatial discretizations is obtained as corollary of these statements, which -- thanks to the rather weak assumptions on the perturbations -- are suitable to deal with this particularly
difficult kind of setting.\\
{As a final remark, although we have decided to present our results in the framework of circle diffeomorphisms and rotations of the circle, we believe that the main ideas present in our constructions can be naturally applied to extend these results to rotations on higher dimensional tori. }\\
\noindent \textbf{Organization of the article.}
In Section \ref{sec1} after introducing some tools from number theory and geometric measure theory we prove qualitative and quantitative statistical stability of irrational rotations. The quantitative stability results are proved first by establishing general H\"older upper bounds in subsection \ref{ub} and then exhibiting particular small perturbations for which we actually have H\"older behavior, hence establishing lower bounds in section \ref{lob}.
In Section \ref{KAMsection}, after a brief introduction to KAM theory and to the problem of smooth linearization of circle diffeomorphisms, we prove linear response results for suitable deterministic perturbations of Diophantine rotations.
In Section \ref{sec:stabdiff} we show how to extend the results of Section \ref{sec1} to sufficiently smooth circle diffeomorphisms.
Finally, in Section \ref{sectrunc} we introduce a class of perturbations coming from spatial discretization and apply our previous results to this kind of perturbations, obtaining some qualitative and
quantitative results.
\newline
\noindent \textbf{Acknowledgments.} The authors are grateful to A. Celletti, R. de la Llave, P-A Guiheneuf, C. Liverani, M. Sevryuk for their helpful suggestions. The authors also thank R. Calleja, A. Alessandra and R. de la Llave
for sharing with them their results in \cite{CCdL}.\\
S.G. and A.S. have been partially supported by the
research project PRIN Project 2017S35EHN ``{\it Regular and stochastic behavior in
dynamical systems}'' of the Italian Ministry of Education and Research (MIUR).
AS also acknowledges the support of the MIUR Department of Excellence grant CUP E83C18000100006.
\newline
\bigskip
\section{Statistical stability of irrational rotations}
\label{sec1}
Irrational rotations on the circle preserve the Lebesgue measure $m$ on the
circle {${\mathbb{S}}^1:= {\mathbb{R}}/{\mathbb{Z}}$} and are well known for
being uniquely ergodic. It is easy to see that small perturbations of such
rotations may have singular invariant measures {(\textit{i.e.}, not
absolutely continuous with respect to $m$)}, even supported on a discrete
set (see examples in Section \ref{lob}). However, we will show that these
measures must be {close, in some suitable sense,} to $m$.
\subsection{Weak statistical stability of irrational rotations}
\label{sec1.1}
In this section, we aim to prove a statistical stability result for
irrational rotations in a weak sense; more specifically, {we show that by
effect of small natural perturbations, their invariant measures vary
continuously with respect to the so-called Wasserstein distance.}
{This qualitative result might not be surprising for experts, however
the construction that we apply also leads to quantitative estimates on the
statistical stability, which will be presented in the next subsections.}
\medskip
{Let us first recall some useful notions that we are going to use in the
following}. Let $(X,d)$ be a compact metric space and let ${\mathcal{M}}(X)$
denote the set of signed {finite} Borel measures on $X$.
If $g:X\longrightarrow \mathbb{R}$ is a Lipschitz function, we denote its
(best) Lipschitz constant by $\mathrm{Lip}(g)$, \textit{i.e.}
\begin{equation*}
\displaystyle{\mathrm{Lip}(g):=\sup_{x,y\in X,x\neq y}\left\{ \dfrac{%
|g(x)-g(y)|}{d(x,y)}\right\} }.
\end{equation*}
\smallskip
\begin{definition}
\label{w} Given $\mu, \nu \in {\mathcal{M}}(X) $ we define the \textbf{%
Wasserstein-Monge-Kantorovich} distance between $\mu $ and $\nu $ by%
\begin{equation}
W(\mu ,\nu ):=\sup_{\mathrm{Lip}(g)\leq 1,\ \Vert g\Vert _{\infty }\leq 1}\left\vert \int_{\mathbb{S}^1} {g}\,d\mu -\int_{\mathbb{S}^1}
{g}\,d\nu \right\vert .
\end{equation}
We denote%
\begin{equation*}
\|\mu \|_{W}:=W(0,\mu ),
\end{equation*}%
{where $0$ denotes the trivial measure identically equal to zero.} $\|\cdot
\|_{W}$ defines a norm on the vector space of signed measures defined on a
compact metric space.
\end{definition}
{We refer the reader, for example, to \cite{AGS} for a more systematic and detailed description of these topics.}%
\newline
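For instance, for two Dirac masses concentrated at points $x,y \in {\mathbb{S}}^1$ one has
\begin{equation*}
W(\delta_x,\delta_y)=\sup_{\mathrm{Lip}(g)\leq 1,\ \Vert g\Vert_{\infty}\leq 1}|g(x)-g(y)|=\min\{d(x,y),2\}=d(x,y),
\end{equation*}
since $d(x,y)\leq 1/2$ on ${\mathbb{S}}^1$; in particular, a small displacement of a point mass produces a proportionally small change in the $\|\cdot \|_{W}$ norm.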
\medskip
Let $T:X\rightarrow X$ be a Borel measurable map. Define the linear
functional
\begin{equation*}
L_{T}:{\mathcal{M}}(X)\rightarrow {\mathcal{M}}(X)
\end{equation*}%
that to a measure $\mu \in {\mathcal{M}}(X)$ associates the new measure $%
L_{T}\mu$, satisfying $L_{T}\mu (A):=\mu (T^{-1}(A)) $ for every Borel set $%
A\subset X$; {$L_{T}$ will be called \textit{transfer operator} (observe
that $L_{T}\mu$ is also called the {push-forward of $\mu$ by $T$} and
denoted by $T_*\mu$)}. It follows easily from the definition that invariant
measures correspond to fixed points of $L_{T}$, \textit{i.e.}, $L_{T}\mu
=\mu $.
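For example, for a Dirac mass $\delta_x$ one has
\begin{equation*}
(L_{T}\delta_{x})(A)=\delta_{x}(T^{-1}(A))=\mathbf{1}_{A}(T(x)),\qquad \text{i.e.}\quad L_{T}\delta_{x}=\delta_{T(x)},
\end{equation*}
while the invariance of the Lebesgue measure $m$ under a rotation $R_{\alpha}$ is precisely the statement $L_{R_{\alpha }}m=m$.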
\medskip
We are now ready to state our first statistical stability result for
irrational rotations.
\begin{theorem}[Weak statistical stability of irrational rotations.]
\label{statstab} Let $R_{\alpha }:{{\mathbb{S}}^{1}\rightarrow {\mathbb{S}}%
^{1}}$ be an irrational rotation. Let $\{T_{\delta }\}_{0\leq \delta \leq
\overline{\delta }}$ be a family of Borel measurable maps of ${\mathbb{S}}^1$
to itself such that%
\begin{equation*}
\sup_{x\in {\mathbb{S}}^{1}}|R_{\alpha }(x)-T_{\delta }(x)|\leq \delta .
\end{equation*}%
Suppose $\mu _{\delta }$ is an invariant measure\footnote{%
In the case when $T_{\delta }$ is continuous such measures must exist by the
Krylov-Bogoliubov theorem { \cite{KB}}. In other cases such
measures can be absent, in which case our statement is empty.} of $T_{\delta
} $. Then
\begin{equation*}
\lim_{\delta \rightarrow 0}\| m-\mu _{\delta }\|_{W}=0.
\end{equation*}
\end{theorem}
\bigskip
{Let us start with the following preliminary computation.}
\begin{lemma}
\label{stablemma}Let $L$ be the transfer operator associated to an isometry
of $\ {\mathbb{S}}^{1}$ and let $L_{\delta }$ be the transfer operator
associated to a measurable map $T_{\delta }$. Suppose that $\mu _{\delta
}=L_{\delta }\mu _{\delta }.$ Then, for each $n\geq 1$
\begin{equation}
\|\mu _{\delta }-m\|_{W} \;\leq\; \big\| m-\frac{1}{n}\sum_{{1\leq }i\leq
n}L^{i}\mu _{\delta } \big\|_{W} \;+\; \frac{(n-1)}{2} \; \big\|(L-L_{\delta
})\mu _{\delta } \big\|_{W}
\end{equation}%
where {$L^i := L \circ \ldots \circ L$ ($i$-times)}.
\end{lemma}
\medskip
\begin{proof}
The proof is a direct computation. Since $\mu _{\delta }=L_{\delta }\mu
_{\delta }$ and $m$ \ is invariant for $L$, then
\begin{eqnarray} \label{prodi}
\|\mu _{\delta }-m\|_{W} &\leq & \big \| \frac{1}{n}\sum_{1\leq i\leq
n}L_{\delta }^{i}\mu _{\delta }-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}m \big\|%
_{W} \notag \\
&\leq & \big\|\frac{1}{n}\sum_{1\leq i\leq n}L^{i}(m-\mu _{\delta }) \big\|%
_{W}+\big\|\frac{1}{n}\sum_{1\leq i\leq n}(L^{i}-L_{\delta }^{i})\mu
_{\delta }\big\|_{W}.
\end{eqnarray}%
Since
\begin{equation*}
L^{i}-L_{\delta }^{i}=\sum_{k=1}^{i}L^{i-k}(L-L_{\delta })L_{\delta }^{k-1}
\end{equation*}%
then%
\begin{eqnarray*}
(L^{i}-L_{\delta }^{i})\mu _{\delta } &=&\sum_{k=1}^{i}L^{i-k}(L-L_{\delta
})L_{\delta }^{k-1}\mu _{\delta } \\
&=&\sum_{k=1}^{i}L^{i-k}(L-L_{\delta })\mu _{\delta }.
\end{eqnarray*}
Since $L$ is the transfer operator associated to an isometry, we have
\begin{equation}\label{mis}
\|L^{i-k}(L-L_{\delta })\mu _{\delta }\|_{W}\leq \|(L-L_{\delta })\mu _{\delta
}\|_{W}
\end{equation}
and consequently
\begin{equation*}
{\Vert }(L^{i}-L_{\delta }^{i})\mu _{\delta }{\Vert _{W}}\leq
(i-1)\|(L-L_{\delta })\mu _{\delta }\|_{W}.
\end{equation*}
Substituting in \eqref{prodi}, we conclude
\begin{equation*}
\|\mu _{\delta }-m\|_{W}\leq \big\|\frac{1}{n}\sum_{1\leq i\leq
n}L^{i}(m-\mu _{\delta })\big\|_{W}+\frac{(n-1)}{2}\|(L-L_{\delta })\mu
_{\delta }\|_{W}.
\end{equation*}
\end{proof}
\bigskip
\begin{lemma}
\label{prv1}Under the assumptions of Theorem \ref{statstab}, let $\{\mu
_{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel {probability}
measures on $\mathbb{S}^{1},$ then
\begin{equation*}
\lim_{n\rightarrow \infty } \big \|m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu
_{\delta } \big\|_{W}=0
\end{equation*}%
uniformly in $\delta$; namely, for every $\varepsilon>0$ there exists $%
\overline{n} = \overline{n}(\varepsilon)$ such that if $n\geq \overline{n}$
then
\begin{equation*}
\sup_{0 \leq \delta \leq \overline{\delta}} \big\| m-\frac{1}{n}\sum_{1\leq
i\leq n}L^{i}\mu _{\delta } \big\|_{W} \leq \varepsilon.
\end{equation*}
\end{lemma}
\medskip
\begin{proof}
Let $\delta _{x_{o}}$ be the delta-measure concentrated at a point $x_{0}\in
\mathbb{S}^{1}$. By unique ergodicity of the system, we get $%
\lim_{n\rightarrow \infty }\|m-\frac{1}{n}\sum_{1\leq i \leq n}L^{i}\delta
_{x_{0}}\|_{W}=0.$ This is uniform in $x_{0}$; in fact, changing $x_{0}$ is
equivalent to compose by a further rotation, which is an isometry and hence
does not change the $\|\cdot\|_{W}$ norm. Any measure $\mu _{\delta }$ can
be approximated in the $\|\cdot\|_{W}$ norm, with arbitrary precision, by a
convex combination of delta-measures, \textit{i.e.}, for each $\varepsilon
>0 $ there are $x_{1},...,x_{k}\in {\mathbb{S}}^1$ and $\lambda
_{1},...,\lambda _{k}\geq 0$, with $\sum_{i\leq k}\lambda _{i}=1$ \ such
that
\begin{equation*}
\big \|\mu _{\delta }-\sum_{1\leq i\leq k}\lambda _{i}\delta_{x_{i}} \big\|_{W}\leq
\varepsilon .
\end{equation*}
Since $R_{\alpha }$ is an isometry the $\|\cdot\|_{W}$ norm is preserved by
the iterates of $L.$ Hence for each $n\geq 0,$ we also have%
\begin{equation*}
\big \|L^{n}\mu _{\delta }-L^{n}\big(\sum_{1\leq i\leq k}\lambda _{i}\delta
_{x_{i}}\big) \big\|_{W}\leq \varepsilon,
\end{equation*}
which implies
\begin{equation*}
\big \| m- L^{n}\mu _{\delta }\big\|_{W} \leq \varepsilon+ \big\| m -L^{n}\big(\sum_{1\leq i\leq k}\lambda _{i}\delta
_{x_{i}}\big) \big\|_{W}
\end{equation*}
and
\begin{equation*}
\big \| m- \frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu _{\delta } \big\|_{W} \leq \varepsilon+ \big\| m -\frac{1}{n}\sum_{1\leq j\leq n}L^{j}\big(\sum_{i\leq k}\lambda
_{i}\delta _{x_{i}}\big)\big\|_{W}.
\end{equation*}
We estimate now the behavior of the right hand side of the last inequality as $n\to \infty$. For any $n$ we have
\begin{equation*}
\big \|m-\frac{1}{n}\sum_{1\leq j\leq n}L^{j}\big(\sum_{i\leq k}\lambda
_{i}\delta _{x_{i}}\big) \big\|_{W}= \big \|\sum_{1\leq i\leq k}\lambda
_{i}m-\sum_{1\leq i\leq k} \frac{\lambda_i}{n} \big( \sum_{1\leq j\leq
n}L^{j}\delta _{x_{i}} \big) \big\|_{W}
\end{equation*}%
\ and therefore $\lim_{n\rightarrow \infty }\|\sum_{i\leq k}\lambda _{i}\big(%
m-\frac{1}{n}\sum_{j\leq n}L^{j}\delta _{x_{i}}\big)\|_{W}=0 $. From this, the claim of the lemma easily follows.
\end{proof}
\bigskip
We can now prove Theorem \ref{statstab}.\newline
\begin{proof}[Proof of Theorem \protect\ref{statstab}]
Let $L_{\delta }$ be the transfer operator associated to $T_{\delta }.$ By
Lemma \ref{prv1}, $\lim_{n\rightarrow \infty }\|m-\frac{1}{n}\sum_{1\leq i
\leq n}L^{i}\mu _{\delta }\|_{W}=0$ uniformly in $\delta$. Since
\begin{equation*}
\sup_{x\in {\mathbb{S}}^{1}}|R_{\alpha }(x)-T_{\delta }(x)|\leq \delta,
\end{equation*}
then $\|(L-L_{\delta })\mu _{\delta }\|_{W}\leq \delta $ and
\begin{equation} \label{covid}
\lim_{\delta \rightarrow 0}\|(L-L_{\delta })\mu _{\delta }\|_{W}=0.
\end{equation}
By Lemma \ref{stablemma} we get that for each $n$
\begin{equation}
\|\mu _{\delta }-m\|_{W}\leq \big\|m-\frac{1}{n}\sum_{1\leq i \leq
n}L^{i}\mu _{\delta }\big\|_{W}+\frac{(n-1)}{2}\big\|(L-L_{\delta })\mu
_{\delta }\big\|_{W}.
\end{equation}%
It follows from Lemma \ref{prv1} that we can choose $n$ such that $\|m-\frac{%
1}{n}\sum_{1\leq i \leq n}L^{i}\mu _{\delta }\|_{W}$ is as small as wanted.
Then, using \eqref{covid}, we can choose $\delta $ sufficiently small so as to
make $\frac{(n-1)}{2}\|(L-L_{\delta })\mu _{\delta }\|_{W}$ as small as
needed, hence proving the statement.
\end{proof}
\begin{remark}
The qualitative stability statements with respect to the Wasserstein distance proved in this section for circle rotations extend directly to many other systems, for example to uniquely ergodic rotations on the multidimensional torus. In fact, in the proof, aside from the general properties of the Wasserstein distance and of pushforward maps, we only use that the system is uniquely ergodic and that the map is an isometry. This property could also be relaxed to a non-expansive property, ensuring that \eqref{mis} is satisfied.
\end{remark}
\subsection{Quantitative statistical stability of Diophantine rotations,
upper bounds\label{ub}}
We now consider irrational rotations, {for rotation numbers that are
``badly'' approximable by rationals: the so-called \textit{Diophantine numbers%
}. In this case, we can provide a quantitative estimate for the statistical
stability of the system by showing that the modulus of continuity of the function $\delta \longmapsto \mu_\delta$ is
H\"olderian, and that its exponent depends on the Diophantine type of the
rotation number.}
Let us start by recalling the definition of \textit{Diophantine type} for a
real number (see \cite{KN}): {this concept expresses quantitatively the rate
of approximability of an irrational number by sequences of rationals}.
\newline
In what follows, we will also use {$\| \cdot \|_{\mathbb{Z}}$} to denote the
distance from a real number to the nearest integer.
\begin{definition}
\label{linapp} If $\alpha $ is irrational, the Diophantine type of $\alpha $
is defined by%
\begin{equation*}
\gamma (\alpha ):=\sup \{\gamma\geq 0: \underset{k\rightarrow \infty }{\lim
\inf }~\,k^{\gamma }\Vert k\alpha \Vert_{\mathbb{Z}} =0\}.
\end{equation*}
\end{definition}
We remark that in some cases $\gamma (\alpha )=+\infty$. When $\gamma
(\alpha )<+\infty$ we say $\alpha $ is of \textit{finite Diophantine type}.%
\newline
\begin{remark}
The Diophantine type of $\alpha $ can be also defined by%
\begin{eqnarray*}
\gamma (\alpha )&:=&\inf \left\{ \gamma\geq 0: \,\exists c>0 \; \mbox{s.t.}
\;\Vert k\alpha \Vert_{\mathbb{Z}} \geq c|k|^{-\gamma } \; \forall \,
k\in \mathbb{Z}\setminus\{0\} \right\} \\
&=& \inf \left\{ \gamma\geq 0: \,\exists c>0 \; \mbox{s.t.} \; \big|\alpha -%
\frac{p}{q}\big|\geq \frac{c}{|q|^{\gamma +1}} \quad \forall \; \frac{p}{q}%
\in \mathbb{Q}\setminus\{0\} \right\}.
\end{eqnarray*}
\end{remark}
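\medskip
Although it plays no role in the proofs, the following short Python sketch may help the reader experiment with Definition \ref{linapp}: it records the denominators $k$ at which $\Vert k\alpha \Vert _{\mathbb{Z}}$ attains a new minimum, together with the exponents $-\log \Vert k\alpha \Vert _{\mathbb{Z}}/\log k$, which along these records decrease towards $\gamma (\alpha )$. The routine names and the numerical parameters are ours, floating point arithmetic limits the range of usable $k$, and this is only an illustration.
\begin{verbatim}
import math

def dist_to_Z(x):
    # distance from a real number to the nearest integer
    return abs(x - round(x))

def record_exponents(alpha, K=10**5):
    # pairs (k, -log||k*alpha||/log k) at the record ("best approximation") k
    best, records = float("inf"), []
    for k in range(2, K + 1):
        d = dist_to_Z(k * alpha)
        if 0.0 < d < best:
            best = d
            records.append((k, -math.log(d) / math.log(k)))
    return records

golden = (math.sqrt(5.0) - 1.0) / 2.0      # Diophantine type gamma = 1
for k, exponent in record_exponents(golden)[-6:]:
    print(k, round(exponent, 4))           # exponents decrease towards 1
\end{verbatim}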
\medskip
{In the light of this last remark on the Diophantine type of a number, we
recall the definition of \textit{Diophantine number} as it is very commonly
stated in the literature.}\newline
\begin{definition}
\label{DDD}Given $c >0$ and $\tau \geq 0$, we say that a number $\alpha \in
(0,1)$ is $(c ,\tau )$-\textit{Diophantine} if
\begin{equation}
\left\vert \alpha -\frac{p}{q}\right\vert >\frac{c }{|q|^{1+\tau }}\qquad
\forall \quad \frac{p}{q}\in \mathbb{Q}\setminus \{0\}. \label{diophantine}
\end{equation}
We denote by $\mathcal{D}(c, \tau)$ the set of $(c,\tau)$-{Diophantine}
numbers and by $\mathcal{D}(\tau) := \cup_{c>0} \mathcal{D}(c, \tau).$
\end{definition}
\medskip
\begin{remark}
{Comparing with Definition \ref{linapp}, it follows that every $\alpha \in
\mathcal{D}(\tau)$ has finite Diophantine type $\gamma(\alpha)\leq \tau$. On
the other hand, if $\alpha$ has finite Diophantine type, then $\alpha \in
\mathcal{D}(\tau )$ for every $\tau >\gamma (\alpha )$.}
\end{remark}
\begin{remark}
Let us point out the following properties (see \cite[p. 601]{Russmann} for
their proofs):
\begin{itemize}
\item \textrm{if $\tau<1$, the set $\mathcal{D}(\tau)$ is empty; }
\item \textrm{if $\tau>1$ the set $\mathcal{D}(\tau)$ has full Lebesgue
measure; }
\item \textrm{if $\tau=1$, then $\mathcal{D}(\tau)$ has Lebesgue measure
equal to zero, but it has Hausdorff dimension equal to $1$ (hence, it has the
cardinality of the continuum). }
\end{itemize}
\textrm{See also \cite[Section V.6]{Herman} for more properties.\newline
}
\end{remark}
Now we introduce the notion of discrepancy of a sequence $x_{1},...,x_{N}\in
\lbrack 0,1]$. This is a measure of the equidistribution of the points $%
x_{1},...,x_{N}$. Given $x_{1},...,x_{N}\in \lbrack 0,1]$ we define the
discrepancy of the sequence by%
\begin{equation*}
D_{N}(x_{1},...,x_{N}):=\sup_{\alpha \leq \beta ,~\alpha ,\beta \in \lbrack
0,1]}\big |\frac{1}{{N}}\sum_{1\leq i\leq N}1_{[\alpha ,\beta
]}(x_{i})-(\beta -\alpha )\big|
\end{equation*}%
It can be proved (see \cite[Theorem 3.2, page 123]{KN}) that the discrepancy
of sequences obtained from orbits of an irrational rotation is related to
the Diophantine type of the rotation number.
\begin{theorem}
\label{11}Let $\alpha $ be an irrational number of finite Diophantine type. Let us
denote by $D_{N,\alpha }(0)$ the discrepancy of the sequence $%
\{x_{i}\}_{0\leq i\leq N}=\{\alpha i-\left\lfloor \alpha i\right\rfloor
\}_{0\leq i\leq N}$ (where $\left\lfloor {\cdot}\right\rfloor $ stands
for the integer part). Then:
\begin{equation*}
D_{N,\alpha }(0)=O(N^{-\frac{1}{\gamma (\alpha )}+\varepsilon }) \qquad
\forall\; \varepsilon>0.
\end{equation*}
\end{theorem}
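\medskip
As a purely numerical illustration of Theorem \ref{11} (not used in what follows), the following Python sketch computes the discrepancy of the orbit $\{i\alpha -\lfloor i\alpha \rfloor \}_{i<N}$ for the golden mean, for which $\gamma (\alpha )=1$, and compares it with $N^{-1}$; the closed-form expression used for the discrepancy of a sorted finite point set is standard, and only the scaling in $N$ matters here. The numerical parameters are ours.
\begin{verbatim}
import math

def discrepancy(points):
    # standard closed-form for the discrepancy of a finite point set
    xs = sorted(points)
    N = len(xs)
    diffs = [(i + 1) / N - x for i, x in enumerate(xs)]
    return 1.0 / N + max(diffs) - min(diffs)

alpha = (math.sqrt(5.0) - 1.0) / 2.0       # gamma(alpha) = 1
for N in (10**2, 10**3, 10**4, 10**5):
    orbit = [(i * alpha) % 1.0 for i in range(N)]
    # for the golden mean D_N decays like N^(-1/gamma) = N^(-1), up to a log factor
    print(N, discrepancy(orbit), N ** (-1.0))
\end{verbatim}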
\bigskip
From the definition of discrepancy, Theorem \ref{11}, and the fact that the
translation is an isometry, we can deduce the following corollary.\newline
\begin{corollary}
\label{preDK}Let $x_{0}\in [0,1]$ and let us denote by $D_{N,\alpha
}(x_{0})$ the discrepancy of the sequence $\{x_{i}\}_{0\leq i\leq
N}=\{x_{0}+\alpha i-\left\lfloor x_{0}+\alpha i\right\rfloor \}_{0\leq i\leq
N}$. Then Theorem \ref{11} holds uniformly in $x_{0}$; namely, for
every $\varepsilon >0$ there exists $C=C(\varepsilon )\geq 0$ \ such that
for each \thinspace $x_{0}$ and $N\geq 1$%
\begin{equation*}
D_{N,\alpha }(x_{0})\leq CN^{-\frac{1}{\gamma (\alpha )}+\varepsilon }.
\end{equation*}
\end{corollary}
\begin{proof}
It is sufficient to prove that for each $x_0$ it holds that $D_{N,\alpha}(x_{0})\leq 2D_{N,\alpha}({0})$. Indeed, consider $\varepsilon>0 $ and an interval $I=[\alpha,\beta]$ such that
\begin{equation*}
D_{N}(x_{1},...,x_{N})-\varepsilon\leq \left| \frac{1}{{N}}\sum_{1\leq i\leq N}1_{I}(x_{i})-(\beta -\alpha )\right|.
\end{equation*}
Now consider the translation of $I$ by $-x_0$ (mod. $1$): $$S=\{x\in[0,1] \ | \ x+x_0-\lfloor x+x_0 \rfloor\in I\}$$ and the translation of the sequence $x_i$, which is the sequence $y_i=\alpha i-\left\lfloor \alpha i\right\rfloor
$. We have that $S $ is composed of at most two intervals $S=I_1\cup I_2$ with lengths $m(I_1)$ and $m(I_2)$; moreover
\begin{equation*}
\left |\frac{1}{{N}}\sum_{1\leq i\leq N}1_{I}(x_{i})-(\beta -\alpha )\right|= \left |\frac{1}{{N}}\sum_{1\leq i\leq N}1_{I_1}(y_{i})- m(I_1)+ \frac{1}{{N}}\sum_{1\leq i\leq N}1_{I_2}(y_{i})- m(I_2) \right|.
\end{equation*}
Then
\begin{equation*}
D_{N}(x_{1},...,x_{N})-\varepsilon\leq 2 D_{N}(y_{1},...,y_{N}).
\end{equation*}
Since $\varepsilon$ is arbitrary, we conclude that $D_{N,\alpha}(x_{0})\leq 2D_{N,\alpha}({0})$.
\end{proof}
\medskip
The discrepancy is also related to the speed of convergence of Birkhoff sums
of irrational rotations. The following is known as the Denjoy--Koksma
inequality (see \cite[Theorem 5.1, page 143 and Theorem 1.3, page 91]{KN}).%
\newline
\begin{theorem}
\label{DK}Let $f$ be a function of bounded variation, whose total variation we denote by $%
V(f)$. Let $x_{1},...,x_{N}\in \lbrack 0,1]$ be a sequence with discrepancy
$D_{N}(x_{1},...,x_{N})$. Then%
\begin{equation*}
\left|\frac{1}{N}\sum_{1\leq i\leq N}f(x_{i})-\int_{[0,1]}f~dx \right|\leq
V(f)\,D_{N}(x_{1},...,x_{N}).
\end{equation*}
\end{theorem}
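\medskip
Again as an illustration only, one can test the inequality of Theorem \ref{DK} numerically, for instance for the bounded variation observable $f(x)=x$ (for which $V(f)=1$ and $\int_{[0,1]}f\,dx=1/2$) along an orbit of the golden mean rotation; the discrepancy is computed as in the previous sketch, and all numerical choices are ours.
\begin{verbatim}
import math

def discrepancy(points):
    # standard closed-form for the discrepancy of a finite point set
    xs = sorted(points)
    N = len(xs)
    diffs = [(i + 1) / N - x for i, x in enumerate(xs)]
    return 1.0 / N + max(diffs) - min(diffs)

def f(x):                   # observable of bounded variation: V(f) = 1, integral 1/2
    return x

alpha = (math.sqrt(5.0) - 1.0) / 2.0
for N in (10**2, 10**3, 10**4):
    orbit = [(i * alpha) % 1.0 for i in range(1, N + 1)]
    error = abs(sum(f(x) for x in orbit) / N - 0.5)
    print(N, error, 1.0 * discrepancy(orbit))   # Theorem DK: error <= V(f) * D_N
\end{verbatim}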
\medskip
We can now prove a quantitative version of our stability result.\newline
\begin{theorem}[Quantitative statistical stability of Diophantine rotations]
\label{stst2} Let $R_{\alpha }:{{\mathbb{S}}^{1}\rightarrow {\mathbb{S}}^{1}}
$ be an irrational rotation. Suppose $\alpha $ has finite Diophantine type $%
\gamma (\alpha ).$ Let $\{T_{\delta }\}_{0\leq \delta \leq \overline{\delta }%
}$ be a family of Borel measurable maps of the circle such that%
\begin{equation*}
\sup_{x\in {\mathbb{S}}^{1}}|R_{\alpha }(x)-T_{\delta }(x)|\leq \delta .
\end{equation*}%
Suppose $\mu _{\delta }$ is an invariant measure of $T_{\delta }$. Then, for
each $\ell <{\frac{1}{\gamma (\alpha )+1}}$ we have:
\begin{equation*}
\|m-\mu _{\delta }\|_{W}=O(\delta ^{\ell }).
\end{equation*}
\end{theorem}
\bigskip
Let us first prove some preliminary result.\newline
\begin{lemma}
\label{conv2}Under the assumptions of Theorem \ref{stst2}, let $\{\mu
_{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel {%
probability} measures on $\mathbb{S}^{1}$. Then, {for every $\varepsilon >0$}
\begin{equation}
\|m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu _{\delta }\|_{W}=O(n^{-\frac{1}{%
\gamma (\alpha )}+\varepsilon }) \label{ww}
\end{equation}%
uniformly in $\delta$; namely, {for every $\varepsilon >0$}, there exists $C={%
C(\varepsilon )}\geq 0$ such that for each $\delta $ and $n\geq 1$
\begin{equation*}
\|m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu _{\delta }\|_{W}\leq Cn^{-\frac{1%
}{\gamma (\alpha )}+\varepsilon }.
\end{equation*}
\end{lemma}
\bigskip
\begin{proof}
Let us fix $\varepsilon >0.$ By Theorem \ref{DK} and Corollary \ref{preDK}
we have that there is $C\geq 0$ such that for each Lipschitz function $f$
with Lipschitz constant $1$, and for each $x_{0}\in {\mathbb{S}}^{1}$ we have%
\begin{equation*}
\left|\frac{1}{n}\sum_{1\leq i\leq n}f(R_{\alpha
}^{i}(x_{0}))-\int_{[0,1]}f~dx\right |\leq C\, n^{-\frac{1}{\gamma (\alpha )}%
+\varepsilon } \qquad \forall\; n\geq 1.
\end{equation*}
Let $\delta _{x_{0}}$ be the delta-measure concentrated at a point $x_{0}\in
\mathbb{S}^{1}$. By definition of $\|\cdot\|_{W}$, we conclude that
\begin{equation}
\|m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\delta _{x_{0}}\|_{W}\leq Cn^{-\frac{%
1}{\gamma (\alpha )}+\varepsilon }. \label{www}
\end{equation}%
\newline
Now, as in the proof of Lemma \ref{prv1}, any measure $\mu _{\delta }$
can be approximated, arbitrarily well, in the $\|\cdot\|_{W}$ norm by a convex
combination of delta-measures and we obtain \eqref{ww} from \eqref{www}%
, exactly in the same way as done in the proof of Lemma \ref{prv1}.
\end{proof}
\bigskip
\begin{proof}[Proof of Theorem \protect\ref{stst2}]
Let $L_{\delta }$ be the transfer operator of $T_{\delta }.$ Let us fix $%
\varepsilon >0$; without loss of generality we can suppose $\varepsilon <%
\frac{1}{\gamma (\alpha )}.$ By Lemma \ref{conv2} we have that
\begin{equation*}
\|m-\frac{1}{n}\sum_{1\leq i \leq n}L^{i}\mu _{\delta }\|_{W}\leq Cn^{-\frac{%
1}{\gamma (\alpha )}+\varepsilon }.
\end{equation*}
By Lemma \ref{stablemma} \ we get that for each $n\geq 1$
\begin{equation}
\|\mu _{\delta }-m\|_{W}\leq \big \|m-\frac{1}{n}\sum_{1\leq i \leq
n}L^{i}\mu _{\delta }\big\|_{W}+\frac{(n-1)}{2}\big \|(L-L_{\delta })\mu
_{\delta }\big\|_{W}.
\end{equation}
Hence%
\begin{eqnarray}
\|\mu _{\delta }-m\|_{W} &\leq &Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon
}+\frac{(n-1)}{2}\|(L-L_{\delta })\mu _{\delta }\|_{W} \label{stimaboh} \\
&\leq &{\ Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon }+\frac{(n-1)}{2}%
\delta }, \notag
\end{eqnarray}%
{\ where we have used that, since }$\sup_{x\in {\mathbb{S}}^{1}}|R_{\alpha
}(x)-T_{\delta }(x)|\leq \delta $, then
\begin{equation*}
\|(L-L_{\delta })\mu _{\delta }\|_{W}\leq \delta.
\end{equation*}
Since the inequality is true for each $n\geq1$, we can now consider $n$
minimizing
\begin{equation*}
{F(n):=Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon }+\frac{n-1}{2}\delta .}
\end{equation*}%
The extension to $\mathbb{R}$ of the function $F$ is convex {and it goes to $%
+\infty $ both as $x\rightarrow 0^{+}$ and as $x\rightarrow +\infty $.} Let
us denote \ $a:=\frac{1}{\gamma (\alpha )}-\varepsilon {>0}$, then $%
F(x)=Cx^{-a}+\frac{x-1}{2}\delta .$ This is minimized at
{\
\begin{equation*}
x_{\ast }:=(2aC)^{\frac{1}{a+1}}\delta ^{-{\frac{1}{a+1}}}=:\tilde{c}%
\;\delta ^{-\frac{1}{a+1}}.
\end{equation*}%
} {\ Consider $n_{\ast }=\left\lfloor x_{\ast }\right\rfloor $ and observe
that%
\begin{eqnarray*}
F(n_{\ast }) &=&\frac{C}{n_{\ast }^{a}}+\frac{n_{\ast }-1}{2}\delta \leq
\frac{C}{n_{\ast }^{a}}+\frac{n_{\ast }}{2}\delta =O(\delta ^{\frac{a}{a+1}})
\\
F(n_{\ast }+1) &=&\frac{C}{(n_{\ast }+1)^{a}}+\frac{n_{\ast }}{2}\delta \leq
\frac{C}{n_{\ast }^{a}}+\frac{n_{\ast }}{2}\delta =O(\delta ^{\frac{a}{a+1}%
}).
\end{eqnarray*}%
}
Substituting in \eqref{stimaboh} we conclude:%
\begin{eqnarray*}
\|\mu _{\delta }-m\|_{W} &\leq &\min \{F(n_{\ast }),F(n_{\ast
}+1)\}=O(\delta ^{\frac{a}{a+1}}) \\
&=&O\big(\delta ^{\frac{1-\varepsilon \gamma (\alpha )}{1+(1-\varepsilon
)\gamma (\alpha )}}\big)
\end{eqnarray*}%
proving the statement.
\end{proof}
\begin{remark}
{We remark that, as it follows from the above proof, the constants involved in $O(\delta ^{\ell })$ in the statement of Theorem \ref{stst2} only depend on $\alpha $ and $\ell$.}
\end{remark}
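\medskip
Before discussing lower bounds, let us illustrate numerically the kind of behavior predicted by Theorem \ref{stst2}; this is an experiment, not a proof, and all numerical choices are ours. In the Python sketch below we perturb the golden mean rotation by $T_{\delta }(x)=x+\alpha +\delta \sin (2\pi x)\ (\mathrm{mod.}\;1)$, so that $\sup_{x}|R_{\alpha }(x)-T_{\delta }(x)|\leq \delta $; we use the empirical measure of a long orbit as a proxy for an invariant measure $\mu _{\delta }$, and we measure its distance from Lebesgue by the standard Wasserstein distance on $[0,1]$ computed via cumulative distribution functions, which we take as a stand-in for $\|\cdot \|_{W}$. The last printed column is the scale $\delta ^{1/(\gamma (\alpha )+1)}=\delta ^{1/2}$ appearing in the theorem.
\begin{verbatim}
import math, bisect

def wasserstein_to_lebesgue(points, grid=10**4):
    # Wasserstein-1 distance on [0,1] between Lebesgue and the empirical measure,
    # approximated as the integral of |F_N(x) - x| over a uniform grid
    xs = sorted(points)
    N = len(xs)
    total = 0.0
    for j in range(grid):
        x = (j + 0.5) / grid
        total += abs(bisect.bisect_right(xs, x) / N - x) / grid
    return total

alpha = (math.sqrt(5.0) - 1.0) / 2.0        # gamma(alpha) = 1
for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    x, orbit = 0.1, []
    for n in range(60000):
        x = (x + alpha + delta * math.sin(2.0 * math.pi * x)) % 1.0
        if n >= 10000:                      # discard a transient
            orbit.append(x)
    print(delta, wasserstein_to_lebesgue(orbit), delta ** 0.5)
\end{verbatim}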
\subsection{Quantitative statistical stability of Diophantine rotations,
lower bounds\label{lob}}
{In this subsection we discuss the fact that the upper bound on the statistical
stability obtained in Theorem \ref{stst2} is essentially optimal.} We show
that for a rotation $R_{\alpha }$ with rotation number $\alpha$ of
Diophantine type $1< \gamma(\alpha) \leq +\infty$, there exist perturbations
of ``size $\delta$'', for which the unique physical invariant measure {varies} in a H%
\"{o}lder way.\newline
More specifically, for any $r\geq 0$ we will construct a sequence $\delta
_{n}\rightarrow 0$ \ and $C^\infty$-maps $T_{n}$ such that: $\|R_{\alpha
}-T_{n}\|_{C^{r}}\leq \delta _{n}$, $T_{n}$ has {a} unique physical invariant {%
probability} measure $\mu _{n}$ and $\|\mu _{n}-m \|_{W}\geq C\delta_n ^{\frac{%
1}{p}}$ for some $C\geq 0$ and $p>1$.\newline
\begin{proposition}
\label{berlusconi} Let us consider the rotation $R_{\alpha }:\mathbb{S}%
^{1}\rightarrow \mathbb{S}^{1}$, where $\alpha$ is an irrational number with
{$1< \gamma(\alpha) \leq +\infty$}. For each {$r\geq 0$ } and $\gamma
^{\prime }<\mathcal{\gamma }(\alpha )$ there exist a sequence of numbers $%
\delta _{j}> 0 $ and $C^\infty$ diffeomorphisms $T_{j}:\mathbb{S}%
^{1}\rightarrow \mathbb{S}^{1}$ such that $\|T_{j}-R_{\alpha }\|_{C^{r}}\leq
2\delta _{j}$ \ and
\begin{equation*}
\|m-\mu _{j}\|_{W}\geq \frac{1}{2}{\delta _{j}^{\frac{1}{\gamma ^{\prime
}+1}}}
\end{equation*}%
for every $j\in \mathbb{N}$ and for every $\mu _{j}$ invariant measure of $%
T_{j}$.
\end{proposition}
\begin{proof}
We remark that the unique invariant measure for $R_{\alpha }$ is the Lebesgue
measure $m.$ Let us choose $\gamma ^{\prime }<\gamma (\alpha )$; {it follows
from the definition of $\gamma (\alpha )$} that there are infinitely many
integers ${k_{j}\in \mathbb{N}}$ and ${p_{j}\in \mathbb{Z}}$ such that
\begin{equation*}
|k_{j}\alpha -p_{j}|\leq \frac{1}{k_{j}^{\gamma ^{\prime }}}\qquad
\Longleftrightarrow \qquad \big|\alpha -\frac{p_{j}}{k_{j}}\big|\leq \frac{1%
}{k_{j}^{\gamma ^{\prime }+1}}.
\end{equation*}
Let us set $\delta _{j}:=-\alpha +\frac{p_{j}}{k_{j}}$. Clearly, $|\delta
_{j}|\leq \frac{1}{k_{j}^{\gamma ^{\prime }+1}}\longrightarrow 0$ as $%
j\rightarrow \infty $.
Consider $\hat{T}_{j}$ defined as $\hat{T}_{j}(x)=R_{\alpha +\delta _{j}}(x)$%
; for each $r\geq 0$ we have that \ $\|\hat{T}_{j}-R_{\alpha
}\|_{C^{r}}=|\delta _{j}|$. Since $(\delta _{j}+\alpha )=\frac{p_{j}}{k_{j}}
$ is rational (we may assume the fraction to be in lowest terms), {every orbit is $k_{j}$-periodic. Let us consider the orbit
starting at $0$; as a set it coincides with the uniform grid $\{i/k_{j}\}_{0\leq i<k_{j}}$, and we denote its points, ordered on the circle, by}
\begin{equation*}
y_{0}:=0,\;y_{1}:=\tfrac{1}{k_{j}},\;\ldots ,\;y_{k_{j}-1}:=1-\tfrac{1}{k_{j}%
},\;y_{k_{j}}:=0\;(\mathrm{mod.} \,{\mathbb{Z}}).
\end{equation*}%
Consider the measures
\begin{equation*}
\mu _{j}=\frac{1}{k_{j}}\sum_{0\leq i<k_{j}}\delta _{y_{i}},
\end{equation*}%
where $\delta _{y_{i}}$ is the delta-measure concentrated at $y_{i}$.
The measure $\mu _{j}$ is clearly invariant for the map $\hat{T}_{j}$ and
it can be directly computed that%
\begin{equation*}
\|m-\mu _{j}\|_{W}\geq \frac{1}{2k_{j}}.
\end{equation*}%
{Observe that $|\delta _{j}|\leq \frac{1}{k_{j}^{\gamma ^{\prime }+1}}$,
hence} we get $|\delta _{j}|^{\frac{1}{\gamma ^{\prime }+1}}\leq \frac{1}{%
k_{j}}$; then%
\begin{equation*}
\|m-\mu _{j}\|_{W}\geq \frac{1}{2}{|\delta _{j}|^{\frac{1}{\gamma
^{\prime }+1}}}.
\end{equation*}
This example can be further improved by perturbing the map $\hat{T}%
_{j}=R_{\alpha +\delta _{j}}$ to a new map $T_{j}$ in a way that the measure
$\mu _{j}$ (supported on the attractor of $T_{j}$) and the measure \footnote{%
The \textit{translated measure} is defined as follows: $[\mu _{j}+\frac{1}{%
2k_{j}}](A):=\mu _{j}(A-\frac{1}{2k_{j}})$ \ for each measurable set $A$ in $%
\mathbb{S}^{1}$, where $A-\frac{1}{2k_{j}}$ is the translation of the set $A$
by $-\frac{1}{2k_{j}}$.} $\mu _{j}+\frac{1}{2k_{j}}$ (supported on the
repeller of $T_{j}$) are the only invariant measures of $T_{j}$, and $\mu
_{j}$ is the unique physical measure for the system. This can be done by
making a $C^{\infty }$ perturbation on $\hat{T}_{j}=R_{\alpha +\delta _{j}}$%
, as small as wanted in the $C^{r}$-norm. In fact, let us denote, as before,
by $\{y_{k}\}_{k}$\ the periodic orbit of $0$ for $R_{\alpha +\delta _{j}}$.
Let us consider a $C^{\infty }$ function $g:[0,1]\rightarrow \lbrack -1,1]$
such that:
\begin{itemize}
\item $g$ is negative on each interval $[y_{i},y_{i}+\frac{1}{2k_{j}}]$
and positive on each interval $[y_{i}+\frac{1}{2k_{j}},y_{i+1}]$ (so that $%
g(y_{i}+\frac{1}{2k_{j}})=0$ );
\item $g^{\prime }$ is positive in each interval $[y_{i}+\frac{1}{3k_{j}}%
,y_{i+1}-\frac{1}{3k_{j}}]$ and negative in $[y_{i},y_{i+1}]-[y_{i}+\frac{1}{%
3k_{j}},y_{i+1}-\frac{1}{3k_{j}}]$.
\end{itemize}
Considering $D_{\delta }:{\mathbb{S}}^{1}\rightarrow {\mathbb{S}}^{1}$,
defined by $D_{\delta }(x):=x+\delta g(x)$ $(\mathrm{mod.}\; {\mathbb{Z}})$,
it holds that the iterates of this map send all the space, with the
exception of the set $\Gamma_{\mathrm{rep}}:=\{y_{i}+\frac{1}{2k_{j}}: \;0
\leq i< k_{j}\}$ (which is a repeller), to the set $\Gamma_{ \mathrm{att}%
}:=\{y_{i}: \; 0\leq i< k_{j}\}$ (the attractor). Then, define $\ T_{j}$ by
composing $R_{\alpha +\delta _{j}}$ and $D_{\delta }$, namely
\begin{equation*}
T_{j}(x):=D_{\delta _{j}}(x+(\delta _{j}+\alpha )).
\end{equation*}
The claim follows by observing that for the map $T_{j}(x)$, both sets \ $%
\Gamma_{\mathrm{att}}$ and $\Gamma_{\mathrm{rep}}$ are invariant and, in
particular, the whole space ${\mathbb{S}}^{1}-\Gamma_{\mathrm{rep}}$ is
attracted by $\Gamma_{\mathrm{att}}$.
\end{proof}
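\medskip
The mechanism used in the proof can also be explored numerically (again, only as an illustration, with all numerical choices ours). In the Python sketch below we take the record rational approximations $p_{j}/k_{j}$ of a test angle with $|\alpha -p_{j}/k_{j}|$ of order $k_{j}^{-2}$, set $\delta _{j}=|\alpha -p_{j}/k_{j}|$, and compare the distance between Lebesgue and the uniform measure on the grid $\{i/k_{j}\}$ (which, for the standard Wasserstein distance on $[0,1]$, equals $1/(4k_{j})$; the constant appearing in the proposition depends on the precise norm $\|\cdot \|_{W}$) with $\sqrt{\delta _{j}}$: a perturbation of size $\delta _{j}$ moves the invariant measure by a quantity of order $\sqrt{\delta _{j}}$, a genuinely H\"{o}lder, non-Lipschitz, response.
\begin{verbatim}
import math

def record_approximations(alpha, kmax):
    # record-breaking rational approximations p/k of alpha with k <= kmax
    best, out = float("inf"), []
    for k in range(1, kmax + 1):
        p = round(k * alpha)
        err = abs(k * alpha - p)
        if err < best:
            best = err
            out.append((p, k, err / k))     # err / k = |alpha - p/k|
    return out

alpha = (math.sqrt(5.0) - 1.0) / 2.0        # test angle: |alpha - p/k| is of order 1/k^2
for p, k, delta in record_approximations(alpha, 10**4)[-5:]:
    grid_dist = 1.0 / (4.0 * k)   # W1 distance on [0,1] between Lebesgue and the
                                  # uniform measure on the grid {i/k : 0 <= i < k}
    print(k, delta, grid_dist, math.sqrt(delta))
\end{verbatim}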
\bigskip
The construction done in the previous proof can be extended to show H\"{o}%
lder behavior for the average of a given \emph{fixed} regular observable. We
show an explicit example of such an observable, with a particular choice of
rotation number $\alpha$.
\begin{proposition}
\label{30}Consider a rotation $R_{\alpha }$ with rotation angle $\alpha
:=\sum_{i=1}^{\infty }2^{-2^{2i}}$. Let $T_{j}$ be its perturbations as
constructed in Proposition \ref{berlusconi} and let $\mu _{j}$ denote their
invariant measures; recall that $\|T_{j}-R_{\alpha }\|_{C^{r}}\leq 2|\delta
_{j}|=2\sum_{i=j+1}^{\infty }2^{-2^{2i}}$.\newline
Then, there is an observable $\psi :{\mathbb{S}}^{1}\rightarrow \mathbb{R}$,
with derivative in $L^{2}({\mathbb{S}^1})$, and $C\geq 0$ such that%
\begin{equation*}
\left |\int_{\mathbb{S}^1} \psi d{m}-\int_{\mathbb{S}^1} \psi d\mu _{j}
\right|\geq C\sqrt{\delta _{j}}.
\end{equation*}
\end{proposition}
\bigskip
\begin{proof}
Comparing the series with a geometric one, we get that
\begin{equation*}
\sum_{i=n+1}^{\infty }2^{-2^{2i}}\leq 2^{-2^{2(n+1)}+1}.
\end{equation*}
By this, it follows that
\begin{equation*}
\Vert 2^{2^{2n}}\alpha \Vert _{\mathbb{Z}}\leq 2^{2^{2n}}\sum_{i=n+1}^{\infty
}2^{-2^{2i}}\leq 2^{2^{2n}}\cdot 2^{-2^{2(n+1)}+1}=\frac{2}{(2^{2^{2n}})^{3}}.
\end{equation*}
Hence $k^{3}\Vert k\alpha \Vert _{\mathbb{Z}}$ stays bounded along the
sequence $k=2^{2^{2n}}$, so that $\gamma (\alpha )\geq 3$ (one can check
that in fact $\gamma (\alpha )=3$); in particular, $\alpha $ has finite
Diophantine type. Following the construction in the
proof of Proposition \ref{berlusconi}, we have that with a perturbation of
size less than $2^{-2^{2(j+1)}+1}$ the angles $\alpha _{j}:=\alpha -\delta
_{j}=\sum_{i=1}^{j}2^{-2^{2i}}$ generate orbits of period $2^{2^{2j}}$. Now
let us construct a suitable observable which can ``see'' the change of the
invariant measure under this perturbation. Let us consider
\begin{equation}
\psi (x):=\sum_{i=1}^{\infty }\frac{1}{(2^{2^{2i}})^{2}}\cos (2^{2^{2i}}2\pi
x) \label{obss}
\end{equation}%
and denote by $\psi _{k}(x):=\sum_{i=1}^{k}\frac{1}{(2^{2^{2i}})^{2}}\cos
(2^{2^{2i}}2\pi x)$ its truncations. Since the Fourier coefficient of $\psi $
at frequency $2^{2^{2i}}$ has size $(2^{2^{2i}})^{-2}$, $\psi $ has
derivative in $L^{2}({\mathbb{S}^1})$. Let $\{x_{i}\}_{i}$
be the periodic orbit of $0$ for the map $R_{\alpha _{j}}$
and let $\mu _{j}:=\frac{1}{2^{2^{2j}}}\sum_{i=0}^{2^{2^{2j}}-1}\delta
_{x_{i}}$ be the physical measure supported on it. Since, for $k<j$, the
frequencies $2^{2^{2i}}$ appearing in $\psi _{k}$ are strictly smaller than
the period $2^{2^{2j}}$ (hence not multiples of it), summing over one period
gives $\sum_{i=1}^{2^{2^{2j}}}\psi _{k}(x_{i})=0$
for every $k<j$; thus $\int_{\mathbb{S}^1} \psi _{j-1}~d\mu _{j}=0.$ Then
\begin{eqnarray*}
v_{j}:= &&\int_{\mathbb{S}^1} \psi ~d\mu _{j}\geq \frac{1}{(2^{2^{2j}})^{2}}%
-\sum_{i=j+1}^{\infty }\frac{1}{(2^{2^{2i}})^{2}} \\
&\geq &2^{-2^{2j+1}}-2^{-2^{2(j+1)}+1}.
\end{eqnarray*}%
For $j$ big enough%
\begin{equation*}
2^{-2^{2j+1}}-2^{-2^{2(j+1)}+1}\geq \frac{1}{2}(2^{-2^{2j}})^{2}.
\end{equation*}%
Summarizing, with a perturbation of size
\begin{equation*}
\delta _{j}=\sum_{i=j+1}^{\infty }2^{-2^{2i}}\leq 2\cdot
2^{-2^{2(j+1)}}=2\,(2^{-2^{2j}})^{4}
\end{equation*}
we get a change of average for the observable $\psi $ from $\int_{\mathbb{S}%
^1} \psi \,dm=0$ to $v_{j}\geq \frac{1}{2}(2^{-2^{2j}})^{2}$. Therefore, there
is $C\geq 0$ such that with a perturbation of size $\delta _{j}$, we get a
change of average for the observable $\psi $ of size bigger than $C\sqrt{%
\delta _{j}}.$
\end{proof}
\bigskip
\begin{remark}
Using in (\ref{obss}) {$\frac{1}{(2^{2^{2i}})^{\sigma}}$, for some $\sigma>2$}, instead of $\frac{1%
}{(2^{2^{2i}})^{2}}$, we can obtain a smoother observable. Using rotation
angles with bigger and bigger Diophantine type, it is possible to obtain a
dependence of the physical measure on the perturbation with worse and worse H%
\"{o}lder exponent. Using angles with infinite Diophantine type it is
possible to have a behavior whose modulus of continuity is worse than the H%
\"{o}lder one.
\end{remark}
\bigskip
\section{Linear response and KAM theory}
\label{KAMsection} {In this section, we would like to discuss differentiable
behavior and linear response for Diophantine rotations, under suitable
smooth perturbations. In particular, we will obtain our results by means of
the so-called KAM theory.}
{Let us first start by explaining more precisely} what linear response
means.\newline
Let $(T_{\delta })_{\delta \geq 0}$ be a one parameter family of maps
obtained by perturbing an initial map $T_{0}$. We will be interested in how
the perturbation made on $T_{0}$ affects some invariant measure of $T_{0}$
of particular interest, for example its physical measure. Suppose hence that $%
T_{0}$ has a physical measure $\mu _{0}$ and let $\mu _{\delta }$ be
physical measures of $T_{\delta }$.\footnote{\label{notap} An invariant
measure $\mu $ is said to be \emph{physical} if there is a positive Lebesgue
measure set $B$ such that for each continuous observable $f $%
\begin{equation*}
\int_{\mathbb{S}^1} f~d\mu =\underset{n\rightarrow \infty }{\lim }\frac{%
f(x)+f(T(x))+...+f(T^{n}(x))}{n+1}
\end{equation*}%
for each $x\in B$ (see \cite{Y}).}
The linear response of the invariant measure of $T_{0}$ under {a} given
perturbation is defined, {if it exists}, by the limit
\begin{equation}
\dot{\mu}:=\lim_{\delta \rightarrow 0}\frac{\mu _{\delta }-\mu _{0}}{\delta }
\label{LRidea}
\end{equation}
where the meaning of this convergence can vary from system to system. In
some systems and for a given perturbation, one may get $L^{1}$-convergence
for this limit; in other systems or for other perturbations one may get
convergence in weaker or stronger topologies. The linear response to the
perturbation hence represents the first order term of the response of the
system to the perturbation and, when it exists, a linear response formula can be
written {as}:
\begin{equation}
\mu _{\delta }=\mu _{0}+\dot{\mu}\delta +o(\delta ) \label{lin}
\end{equation}%
which holds in some weaker or stronger sense.
We remark that given an observable function $c:X\rightarrow \mathbb{R}$, if
the convergence in \eqref{LRidea} is strong enough with respect to the
regularity \footnote{%
For example, $L^{1}$ convergence in $($\ref{LRidea}$)$ allows one to control the
behavior of $L^{\infty }$ observables in $($\ref{LRidea2}$)$, while a weaker
convergence in $($\ref{LRidea}$)$, for example in the Wasserstein norm (see
definition \ref{w})\ allows one to get information on the behavior of Lipschitz
observables.} of $c$, we get
\begin{equation}
\lim_{t\rightarrow 0}\frac{\int_{\mathbb{S}^1} \ c\ d\mu _{t}-\int_{\mathbb{S%
}^1} \ c\ d\mu _{0}}{t}=\int_{\mathbb{S}^1} \ c\ d\dot{\mu} \label{LRidea2}
\end{equation}%
showing how the linear response of the invariant measure controls the
behavior of observable averages.\newline
\subsection{Conjugacy theory for circle maps} \label{secconj}
{Let us recall some classical results on smooth linearization of circle
diffeomorphisms and introduce KAM theory}.
Let $\mathrm{Diff}_+^r({{\mathbb{S}}^1})$ denote the set of orientation
preserving diffeomorphisms of the circle of class $C^r$, with $r\in \mathbb{N}%
\cup \{+\infty, \omega \}$. Let $\mathrm{rot}(f) \in {{\mathbb{S}}^1}$
denote the rotation number of $f$ (see, for example, \cite[Section II.2]%
{Herman} for more properties on the rotation number).\newline
A natural question is to understand when a circle diffeomorphism is
conjugated to a rotation with the same rotation number, namely whether there
exists a homeomorphism $h: {\ \mathbb{S}^1 }\longrightarrow {{\mathbb{S}}^1}$
such that the following diagram commutes:
\begin{equation*}
\begin{array}{ccc}
{{\mathbb{S}}^1} & \overset{f}{\longrightarrow } & {{\mathbb{S}}^1} \\
\uparrow {\small h} & & \uparrow {\small h} \\
{{\mathbb{S}}^1} & \overset{R_{\mathrm{rot (f)}}}{\longrightarrow } & {{%
\mathbb{S}}^1}%
\end{array}%
\end{equation*}
\textit{i.e.}, $h^{-1} \circ f \circ h = R_{\mathrm{rot}(f)}$. Moreover,
whenever this conjugacy exists, one would like to understand what is the
best regularity that one could expect.
\begin{remark}
\textrm{Observe that if $h$ exists, then it is essentially unique, in the
sense that if $h_i: \mathbb{S}^1\longrightarrow {{\mathbb{S}}^1}$, $i=1,2$,
are homeomorphisms conjugating $f$ to $R_{\mathrm{rot }(f)}$, then $h_1
\circ h_2^{-1}$ must be a rotation itself: $h_1\circ h_2^{-1} = R_\beta$ for
some $\beta\in {\mathbb{S}}^1 $ (see \cite[Ch. II, Proposition 3.3.2]{Herman}%
).}
\end{remark}
This question has attracted a lot of attention, dating back, at least, to
Henri Poincar\'e.
{Let us start by recalling the following result, due to Denjoy \cite{Den}, which shows that
diffeomorphisms with irrational rotation number satisfying some extra mild regularity assumption (for example, $C^2$ diffeomorphisms do satisfy it) are conjugated to irrational
rotations by a homeomorphism.}
\begin{theorem}[Denjoy]
\label{Denteo}Let $T$ be an orientation preserving diffeomorphism of the
circle with an irrational rotation number $\alpha $ and such that $\log
(T^{\prime })$ has bounded variation. Then there exists a homeomorphism $h:%
\mathbb{S}^{1}\rightarrow \mathbb{S}^{1}$ such that%
\begin{equation*}
{T \circ h= h \circ R_{\alpha }.}
\end{equation*}
\end{theorem}
{
\begin{remark}
Denjoy constructed diffeomorphisms $T$ only of class $C^1$ that are not conjugated to rotations ({\it i.e.}, such that the support of their invariant measure $\mu$ is not the whole ${\mathbb S^1}$). These are usually called in the literature {\it Denjoy-type} diffeomorphisms.
\end{remark}
}
{Some of the first contributions about smooth linearization ({\it i.e.}, obtaining a conjugacy of higher regularity)} were due to V.I. Arnol'd
\cite{Arnold} and J. Moser \cite{Moser}. These results are in the
perturbative setting and are generally referred to as \textit{KAM theory}.
Namely, they consider perturbations of \textit{Diophantine} rotations
\begin{equation} \label{deffeps}
f_\varepsilon(x) = R_\alpha + \varepsilon u(x,\varepsilon)
\end{equation}
and prove that, under suitable regularity assumptions on $u$, there exist $%
\varepsilon_0>0$ (depending on the properties of $\alpha$ and $u$)
{and a Cantor set ${\mathcal C} \subset (-\varepsilon_0, \varepsilon_0)$ such that $f_\varepsilon$ is conjugated to the rotation $R_{\mathrm{rot} (f_\varepsilon)}$ for every $\varepsilon \in {\mathcal C}$.}
{Observe that the conjugacy does not exist in general for an interval of $\varepsilon$, but only for those values of $\varepsilon$ for which the rotation number of $f_\varepsilon$ satisfies
suitable arithmetic properties ({\it e.g.}, it is Diophantine)}.
See below for a more precise statement.
\begin{remark}
{Observe that $f_\varepsilon$ does not necessarily have rotation number $\alpha$,
even if one asks that $u(\cdot, \varepsilon)$ has zero average.}
\end{remark}
\begin{remark}
In the analytic setting, the KAM theorem for circle diffeomorphisms was first
proved by Arnol'd (see \cite[Corollary to Theorem 3, p. 173]{Arnold}),
showing that the conjugation is analytic. In the smooth case, it was proved
by Moser \cite{Moser} under the assumption that $u$ is sufficiently smooth
(the minimal regularity needed was later improved by R\"ussmann \cite%
{Russmann2}). The literature on KAM theory and its recent developments is
huge and we do not aim to provide an accurate account here; for reader's
sake, we limit ourselves to mentioning some recent articles and surveys,
like \cite{BroerSevryuk, DL, Dumas, Massetti, MatherForni, Wayne} and
references therein.
\end{remark}
\smallskip
Later, Herman \cite{Herman} and Yoccoz \cite{Yoccoz,Yoccoz2} provided a
thorough analysis of the situation in the general (non-perturbative)
context. Let us briefly summarize their results (see also \cite%
{EliassonFayadKrikorian} for a more complete account). \newline
\begin{theorem}[Herman \protect\cite{Herman}, Yoccoz \protect\cite{Yoccoz,
Yoccoz2}]
\label{thmhermanyoccoz}
\begin{itemize}
\item Let $f \in \mathrm{Diff}_+^r({{\mathbb{S}}^1})$ and $\mathrm{rot}(f)
\in \mathcal{D}(\tau)$. If $r>\max\{3, 2\tau-1\}$, then there exists $h\in
\mathrm{Diff}_+^{r-\tau-\varepsilon}({{\mathbb{S}}^1})$, for every $%
\varepsilon>0 $, conjugating $f$ to $R_{\mathrm{rot (f)}}$.
\item Let $f \in \mathrm{Diff}_+^\infty({{\mathbb{S}}^1})$ and $\mathrm{rot}%
(f) \in \mathcal{D}(\tau)$. Then, there exists $h\in \mathrm{Diff}%
_+^{\infty}({{\mathbb{S}}^1})$ conjugating $f$ to $R_{\mathrm{rot (f)}}$.
\item Let $f \in \mathrm{Diff}_+^\omega({{\mathbb{S}}^1})$ and $\mathrm{rot}%
(f) \in \mathcal{D}(\tau)$. Then, there exists $h\in \mathrm{Diff}%
_+^{\omega}({{\mathbb{S}}^1})$ conjugating $f$ to $R_{\mathrm{rot (f)}}$.%
\newline
\end{itemize}
\end{theorem}
\begin{remark}
\textrm{The above results can be generalized to larger classes of rotation
numbers, satisfying a weaker condition than being Diophantine. Optimal
conditions were studied by Yoccoz and identified in \textit{Brjuno numbers}
for the smooth case and in those satisfying the so-called ${\mathcal{H}}$%
-condition (named in honour of Herman); we refer to \cite{Yoccoz, Yoccoz2}
for more details on these classes of numbers.\newline
}
\end{remark}
\bigskip
\subsection{Linear response for Diophantine circle rotations}
In this subsection we describe how, as a corollary to KAM theory, one can
prove the existence of linear response for Diophantine rotations.\newline
Let us state the following version of the KAM theorem, whose proof can be found
in \cite[Theorem 9.0.4]{Vano} (cf. also \cite[Theorem 2]{BroerSevryuk} and \cite{CCdL}). %
\medskip
\begin{theorem}[KAM Theorem for circle diffeomorphisms]
\label{KAMVano} Let $\alpha \in \mathcal{D}({\tau})$, with $\tau>1$ and let
us consider a smooth family of circle diffeomorphisms
\begin{equation*}
f_\varepsilon(x) = R_\alpha + \varepsilon u(x,\varepsilon) \qquad
|\varepsilon|< 1
\end{equation*}
with
\begin{itemize}
\item[\textrm{(i)}] $u(x,\varepsilon) \in C^{\infty}({\mathbb{S}}^1)$ for
every $|\varepsilon|<1$;
\item[\textrm{(ii)}] the map $\varepsilon \longmapsto u(\cdot, \varepsilon)$
is $C^{\infty}$;
\item[\textrm{(iii)}] $\int_{{\mathbb{S}}^1} u(x,\varepsilon) dx =
A\varepsilon^m + o(\varepsilon^m)$, where $A\neq 0$ and $m\geq 0$.
\end{itemize}
Then, there exists a Cantor set ${\mathcal{C}}\subset (-1,1)$ containing $0$%
, such that for every $\varepsilon \in {\mathcal{C}}$ the map $%
f_{\varepsilon }$ is smoothly conjugated to a rotation $R_{\alpha
_{\varepsilon }}$, with $\alpha _{\varepsilon }\in \mathcal{D}(\tau )$. More
specifically, there exists
\begin{equation*}
h_{\varepsilon }(x)=x+\varepsilon v(x,\varepsilon )\in C^{\infty }({\mathbb{S%
}}^{1})
\end{equation*}%
such that
\begin{equation}
\begin{array}{ccc}
{{\mathbb{S}}^{1}} & \overset{f_{\varepsilon }}{\longrightarrow } & {{%
\mathbb{S}}^{1}} \\
\uparrow {\small h_{\varepsilon }} & & \uparrow {\small h_{\varepsilon }}
\\
{{\mathbb{S}}^{1}} & \overset{R_{\alpha _{\varepsilon }}}{\longrightarrow }
& {{\mathbb{S}}^{1}}%
\end{array}%
\qquad \Longleftrightarrow \qquad f_{\varepsilon }\circ h_{\varepsilon
}=h_{\varepsilon }\circ R_{\alpha _{\varepsilon }}. \label{conjugation}
\end{equation}%
Moreover:
\begin{itemize}
\item the maps $\varepsilon \longmapsto h_{\varepsilon }$ and $\varepsilon
\longmapsto \alpha _{\varepsilon }$ are $C^{\infty }$ on the Cantor set ${%
\mathcal{C}}$, in the sense of Whitney;
\item $\alpha_{\varepsilon} = \alpha + A\varepsilon^{m+1} +
o(\varepsilon^{m+1}). $\newline
\end{itemize}
\end{theorem}
\medskip
\begin{remark}
\textrm{\label{rm8} Observe that $f_{\varepsilon}$ does not necessarily have
rotation number $\alpha$. Recall, in fact, that the map $\mathrm{rot}:\mathrm{Diff}_{+}^{0}(%
\mathbb{S}^{1})\longrightarrow $}$\mathbb{S}^{1}$\textrm{\ is continuous
with respect to the $C^{0}$-topology (see for example \cite[Ch. II,
Proposition 2.7]{Herman}).}
\end{remark}
\begin{remark}
\label{remarkteokam} \hspace{0.1 cm}\newline
\begin{itemize}
\item[\textrm{(i)}] Theorem \ref{KAMVano} is proved in \cite{Vano} in a more
general form, considering also the cases of $u(x,\varepsilon)$ being
analytic or just finitely differentiable (in this case, there is a lower
bound on the needed differentiability, cf. Theorem \ref{thmhermanyoccoz}). In
particular, the proof of the asymptotic expansion of $\alpha_{\varepsilon}$
appears in \cite[p. 149]{Vano}.
\item[\textrm{(ii)}] One could provide an estimate of the size of this
Cantor set: {\ there exist $M>0$ and $r_0>0$ such that for all $0<r<r_0$ the
set $(-r,r)\cap {\mathcal{C}} $ has Lebesgue measure $\geq M r^{\frac{1}{m+1}%
}$ } (see \cite[formula (9.2)]{Vano}).
\item[\textrm{(iii)}] A version of this theorem in the analytic case can
also be found in \cite[Theorem 2]{Arnold}; in particular, in \cite[Section 8]%
{Arnold} the monogenic dependence of the
conjugacy and of the rotation number on the parameter is discussed.\newline
These results can be extended to arbitrary smooth circle diffeomorphisms
with Diophantine rotation numbers and to higher dimensional tori (see \cite%
{Vano}).
\end{itemize}
\end{remark}
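\medskip
As a concrete illustration of the statement of Theorem \ref{KAMVano} (not a substitute for its proof), the following Python sketch estimates the rotation number of $f_{\varepsilon }(x)=x+\alpha +\varepsilon u(x)$ with $u(x)=1+\sin (2\pi x)$, so that $A=\int_{{\mathbb{S}}^{1}}u\,dx=1$ and $m=0$; one then expects $\mathrm{rot}(f_{\varepsilon })=\alpha +A\varepsilon +o(\varepsilon )$, up to higher-order corrections and small mode-locking effects. The rotation number is computed from a lift by averaging displacements, so the output is only approximate, and the numerical parameters are ours.
\begin{verbatim}
import math

def rotation_number(alpha, eps, n_iter=10**6):
    # lift of f_eps: F(x) = x + alpha + eps*(1 + sin(2*pi*x)); rot = lim (F^n(x) - x)/n
    x = 0.0
    for _ in range(n_iter):
        x = x + alpha + eps * (1.0 + math.sin(2.0 * math.pi * x))
    return x / n_iter

alpha = (math.sqrt(5.0) - 1.0) / 2.0
for eps in (1e-2, 1e-3, 1e-4):
    rot = rotation_number(alpha, eps)
    print(eps, rot - alpha, (rot - alpha) / eps)   # last column should be close to A = 1
\end{verbatim}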
\medskip
Let us discuss how to deduce from this result the existence of linear
response for the circle diffeomorphisms $f_{\varepsilon }$.\newline
\begin{theorem}
\label{KAMandResp} Let $\alpha \in \mathcal{D}({\tau})$, with $\tau>1$ and
let us consider a family of circle diffeomorphisms obtained by perturbing
the rotation $R_\alpha$ in the following way:
\begin{equation*}
f_\varepsilon(x) = R_\alpha + \varepsilon u(x,\varepsilon) \qquad
|\varepsilon|< 1,
\end{equation*}
where $u(x,\varepsilon) \in C^{\infty}({\mathbb{S}}^1)$, for every $%
|\varepsilon|<1$, and the map $\varepsilon \longmapsto u(\cdot, \varepsilon)$
is $C^{\infty}$.\newline
Then, the circle rotation $R_\alpha$ admits linear response, in the limit as
$\varepsilon$ goes to $0$, by effect of this family of perturbations.\newline
More precisely, there exists a Cantor set $\mathcal{C}\subset (-1,1)$ such
that
\begin{equation} \label{limitlinearresponse}
\lim_{\varepsilon \in \mathcal{C}, \varepsilon \rightarrow 0} \frac{%
\mu_\varepsilon - m}{\varepsilon} = 2\pi i \sum_{n \in \mathbb{Z }\setminus
\{0\}} \left(\frac{n\, \hat{u}(n)}{1- e^{2\pi i n \alpha}}\right) e^{2\pi i
n x} \qquad \mbox{(in the $L^1$-sense)}
\end{equation}
where $\mu_\varepsilon$ denotes the unique invariant probability measure of $%
f_\varepsilon$, for $\varepsilon \in {\mathcal{C}}$, and $\{\hat{u}%
(n)\}_{n\in {\mathbb{Z}}}$ the Fourier coefficients of $u(x,0)$.\newline
\end{theorem}
\medskip
\begin{remark}
\label{remarkKAM} In this article we focus on the circle;
however, a similar result could be proved for rotations on higher
dimensional tori, by using analogous KAM results in that setting (see for
example \cite{Vano}).
\end{remark}
\medskip
As we have already observed in Remark \ref{rm8}, the rotation number of $%
f_{\varepsilon}$ varies continuously with respect to the perturbation; whence
the need of taking the limit in \eqref{limitlinearresponse} over a Cantor
set of parameters (corresponding to certain Diophantine rotation numbers {for which the KAM algorithm can be applied}).
{Under the assumption that the perturbation does not change the rotation number, {and that this number is Diophantine},
the KAM algorithm can be applied for all values of the parameter $\varepsilon$, hence $\mathcal{C}$ coincides with the whole set of parameters;
therefore the limit in \eqref{limitlinearresponse} can be taken in the classical sense.}
\begin{corollary}
\label{corKAM} Under the same hypotheses and notation of Theorem \ref%
{KAMandResp}, if in addition we have that $\mathrm{rot}(f_\varepsilon) =
\alpha$ for every $|\varepsilon|<1$, then there exists linear response
without any need of restricting to a Cantor set and it is given by
\begin{equation}
\lim_{\varepsilon \rightarrow 0} \frac{\mu_\varepsilon - m}{\varepsilon} =
2\pi i \sum_{n \in \mathbb{Z }\setminus \{0\}} \left(\frac{n\, \hat{u}(n)}{%
1- e^{2\pi i n \alpha}}\right) e^{2\pi i n x} \qquad
\mbox{(in the
$L^1$-sense)}.
\end{equation}
\end{corollary}
\bigskip
{
\begin{proof} {\bf (Corollary \ref{corKAM}).}
As we have remarked above, this corollary easily follows from Theorem \ref{KAMandResp} by observing that
$\mathrm{rot}(f_\varepsilon) = \alpha \in \mathcal{D}({\tau})$ for every $|\varepsilon|<1$, hence
$\mathcal{C} \equiv (-1,1)$. In fact, this follows from \cite[Section 9.2, pp. 147-148]{Vano}: in their notation our parameter $\varepsilon$ corresponds to $\mu$ and their $a(\mu)$ corresponds to our $\mathrm{rot}(f_\varepsilon)$.
In particular, they define the Cantor set as ${\mathcal C}_F = v^{-1}(D_\Upsilon)$ (see \cite[p.148]{Vano}): in our notation this corresponds to
the values of $\varepsilon \in (-1,1)$ for which $\mathrm{rot}(f_\varepsilon)$ belongs to a certain set of Diophantine numbers that includes $\alpha$. Since, by hypothesis, $\mathrm{rot}(f_\varepsilon)\equiv \alpha$, it follows that
${\mathcal C}\equiv (-1,1)$ and, in particular, the limit in \eqref{limitlinearresponse} is meant in the classical sense.
\end{proof}
}
\bigskip
Let us now prove Theorem \ref{KAMandResp}.\newline
\begin{proof} {\bf (Theorem \ref{KAMandResp}).}
First of all, applying Theorem \ref{KAMVano}, it follows that for every $%
\varepsilon \in {\mathcal{C}}$, the map $f_{\varepsilon }:= R_\alpha +
\varepsilon u(x,\varepsilon)$ possesses a unique invariant probability
measure given by
\begin{equation*}
\mu _{\varepsilon }={h_{\varepsilon }}_{\ast }m
\end{equation*}%
where $m$ denotes the Lebesgue measure on ${{\mathbb{S}}^{1}}$ and ${%
h_{\varepsilon }}_{\ast }$ denotes the push-forward by $h_{\varepsilon }$; in
particular, $\mu _{0}=m$. This measure is absolutely continuous with respect
to $m $ and its density is given by
\begin{equation}
\frac{d\mu _{\varepsilon }}{dx}(x)=\frac{1}{\partial _{x}h_{\varepsilon
}(h_{\varepsilon }^{-1}(x))}. \label{density}
\end{equation}%
In fact, if $A$ is a Borel set in ${{\mathbb{S}}^{1}}$, then
\begin{equation*}
\mu _{\varepsilon }(A)=\int_{A}\mu _{\varepsilon }(dy)=\int_{h_{\varepsilon
}(A)}\partial _{x}(h_{\varepsilon }^{-1})(x)\,dx=\int_{h_{\varepsilon }(A)}%
\frac{dx}{\partial _{x}h_{\varepsilon }(h_{\varepsilon }^{-1}(x))}.
\end{equation*}
Hence, it follows from \eqref{density} that
\begin{eqnarray} \label{densitymueps}
\frac{d\mu _{\varepsilon }}{dx}(x) &= & \frac{1}{\partial_x h_{\varepsilon}
(h_{\varepsilon}^{-1}(x))} = \frac{1}{1 + \varepsilon \partial_x v
(h_{\varepsilon}^{-1}(x),0) + o(\varepsilon)} \notag \\
&=& \frac{1}{1 + \varepsilon \partial_x v(x,0) + o_{\mathcal{C}}(\varepsilon)%
} = 1-\varepsilon \partial_x v(x,0) + o_{\mathcal{C}}(\varepsilon),
\end{eqnarray}
where $o_{\mathcal{C}}(\varepsilon)$ denotes a term that goes to zero faster
than $\varepsilon$, as $\varepsilon \rightarrow 0$ in ${\mathcal{C}}$, uniformly in $x$.\newline
Then the linear response is given by%
\begin{equation*}
\dot{\mu}=\lim_{\varepsilon \in {\mathcal{C}},\varepsilon \rightarrow 0}%
\frac{\mu _{\varepsilon }-\mu _{0}}{\varepsilon }=\lim_{\varepsilon \in {%
\mathcal{C}},\varepsilon \rightarrow 0}\frac{\mu _{\varepsilon }-m}{%
\varepsilon }
\end{equation*}%
which, passing to densities and using \eqref{densitymueps}, corresponds to%
\begin{equation*}
\lim_{\varepsilon \in {\mathcal{C}},\varepsilon \rightarrow 0}\frac{1}{%
\varepsilon }(1-\varepsilon \partial _{x}v(x,0)+o_{\mathcal{C}}(\varepsilon
)-1)=-\partial _{x}v(x,0).
\end{equation*}
This gives a formula for the response:%
\begin{equation} \label{linearresponse}
\frac{d\dot{\mu}}{dx}(x)= - \partial_x v (x,0).
\end{equation}
\medskip
Moreover, we can find a more explicit representation formula
{(the above formula, in fact, is somewhat implicit, since $v$ depends on $h_\varepsilon$)}. Observe that it
follows from \eqref{conjugation} that $f_\varepsilon \circ h_\varepsilon =
h_\varepsilon \circ R_{\alpha_\varepsilon}$:
\begin{equation} \label{boh}
x + \varepsilon v(x,\varepsilon) + \alpha + \varepsilon u(x + \varepsilon
v(x,\varepsilon), \varepsilon) = x + \alpha_\varepsilon + \varepsilon
v(x+\alpha_\varepsilon,\varepsilon).
\end{equation}
Recall, from the statement of Theorem \ref{KAMVano} that
\begin{equation*}
\alpha_{\varepsilon} = \alpha + A\varepsilon^{m+1} + o(\varepsilon^{m+1}),
\end{equation*}
where $m$ and $A$ are defined by (see item (iii) in Theorem \ref{KAMVano})
\begin{equation*}
<u(\cdot,\varepsilon)>:=\int_{{\mathbb{S}}^1} u(x,\varepsilon) dx =
A\varepsilon^m + o(\varepsilon^m).
\end{equation*}
Hence, expanding equation \eqref{boh} in terms of $\varepsilon$ and equating
the terms of order $1$, we obtain the following (observe that $%
\alpha_\varepsilon$ will contribute to the first order in $\varepsilon$ only
if $m=0$ and, therefore, $A= <u(\cdot,0)>:= \int_{{\mathbb{S}}^1} u(x,0) dx
\neq 0$):
\begin{equation} \label{homologicaleq}
v(x+\alpha, 0) - v(x,0) = u(x,0) - <u(\cdot,0)> \qquad \forall \,x\, \in {{%
\mathbb{S}}^1},
\end{equation}
the so-called \textit{homological equation}.
Observe that it makes sense that we need to subtract from $u(x,0)$ its
average, if this is not zero. In fact, in order for \eqref{homologicaleq} to
have a solution, its right-hand side must have zero average: to see this, it
is sufficient to integrate both sides and use that the Lebesgue measure is
invariant under $R_\alpha$:
\begin{equation*}
\int_{{\mathbb{S}}^1} \big( u(x,0) - <u(\cdot,0)> \big) \, dx = \int_{\mathbb{S}^1} v(x+\alpha,0) \, dx
- \int_{\mathbb{S}^1} v(x,0) \, dx =0.
\end{equation*}
Let us now find an expression for $v(x,0)$ in Fourier series. In fact, let
us consider:
\begin{equation*}
v(x,0):= \sum_{n\in \mathbb{Z}} \hat{v}(n) e^{2\pi i n x} \qquad \mathrm{and}
\qquad u(x,0):= \sum_{n\in \mathbb{Z}} \hat{u}(n) e^{2\pi i n x}.
\end{equation*}
In Fourier terms, \eqref{homologicaleq} becomes:
\begin{equation*}
\sum_{n\in \mathbb{Z}} \hat{v}(n) \left( e^{2\pi i n \alpha} -1 \right) \,
e^{2\pi i n x} = \sum_{n\in \mathbb{Z }\setminus \{0\}} \hat{u}(n) e^{2\pi i
n x}
\end{equation*}
and therefore for $n\neq 0$
\begin{equation*}
\hat{v}(n) = \frac{\hat{u}(n)}{e^{2\pi i n \alpha} -1};
\end{equation*}
we do not determine $\hat{v}(0)$, as is to be expected, since $v$ is
determined by \eqref{homologicaleq} only up to constants.
Substituting in \eqref{linearresponse}, we conclude:
\begin{eqnarray*}
\frac{d \dot{\mu}}{dx}(x) &=& - \partial_x v (x,0) = - 2\pi i \sum_{n\in
\mathbb{Z }} \,n \,\hat{v}(n) e^{2\pi i n x} \\
&=& 2\pi i \sum_{n \in \mathbb{Z }\setminus \{0\}} \left(\frac{n\, \hat{u}(n)%
}{1- e^{2\pi i n \alpha}}\right) e^{2\pi i n x}.
\end{eqnarray*}
\end{proof}
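\medskip
The linear response formula just obtained can be checked numerically on a simple example; the following Python sketch is a sanity check under the stated assumptions, not an implementation of the proof, and all numerical choices are ours. We take $u(x,\varepsilon )=\sin (2\pi x)$, whose only nonzero Fourier coefficients are $\hat{u}(\pm 1)=\mp i/2$, and the observable $c(x)=\cos (2\pi x)$; the response predicted by \eqref{limitlinearresponse} is compared with a Birkhoff-average estimate of $\big(\int c\,d\mu _{\varepsilon }-\int c\,dm\big)/\varepsilon $ for a small $\varepsilon $.
\begin{verbatim}
import math, cmath

alpha = (math.sqrt(5.0) - 1.0) / 2.0

def predicted_density_derivative(x):
    # density of mu_dot for u(x) = sin(2*pi*x): only the n = +-1 terms survive,
    # and 2*pi*i*n*u_hat(n) = pi for n = 1
    term = math.pi / (1.0 - cmath.exp(2.0j * math.pi * alpha))
    return 2.0 * (term * cmath.exp(2.0j * math.pi * x)).real

def c(x):
    return math.cos(2.0 * math.pi * x)      # test observable, zero mean w.r.t. Lebesgue

def predicted_response(grid=2000):
    return sum(c((j + 0.5) / grid) * predicted_density_derivative((j + 0.5) / grid)
               for j in range(grid)) / grid

def measured_response(eps, n_iter=2 * 10**6):
    x, acc = 0.2, 0.0
    for _ in range(n_iter):
        x = (x + alpha + eps * math.sin(2.0 * math.pi * x)) % 1.0
        acc += c(x)
    return (acc / n_iter) / eps             # Birkhoff estimate of (int c dmu_eps)/eps

print("predicted:", predicted_response())   # equals pi/2 for this choice of u and c
print("measured :", measured_response(1e-2))
\end{verbatim}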
\section{Beyond rotations: the case of circle diffeomorphisms \label{sec:stabdiff} %
}
{In this section, we want to describe how it is possible to}
extend our previous results from irrational rotations to
diffeomorphisms of the circle having irrational rotation number.
We prove
the following:
\begin{theorem}
\label{stadiff}Let $T_{0}$ be an orientation preserving diffeomorphism of
the circle with an irrational rotation number $\alpha $ and such that $\log
(T_{0}^{\prime })$ has bounded variation (this holds, for example, if $T_{0}$ is of class $C^{2}$).
{Let $\mu_0$ be its unique invariant probability measure (see Theorem \ref{Denteo})}.\
Let $\{T_{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of
Borel measurable maps of the circle such that%
\begin{equation*}
\sup_{x\in {\mathbb S}^{1}}|T_{0}(x)-T_{\delta }(x)|\leq \delta .
\end{equation*}%
Suppose that for each $0\leq \delta \leq \overline{\delta }$, $\mu _{\delta
} $ is an invariant measure of $T_{\delta }$. Then
\begin{equation*}
\lim_{\delta \rightarrow 0}\int_{{\mathbb S}^{1}} f~d\mu _{\delta }=\int_{{\mathbb S}^{1}} f~d\mu _{0}
\end{equation*}%
for all $f\in C^{0}(\mathbb{S}^{1}).$
\end{theorem}
{The proof follows by combining Theorem \ref{statstab} with Denjoy's Theorem \ref{Denteo}.\\
}
\begin{proof}[Proof of Theorem \protect\ref{stadiff}]
By Theorem \ref{Denteo} we can conjugate $T_{0}$ with the rotation $%
R_{\alpha }.$ We apply the same conjugation to $T_{\delta }$ for each $%
\delta >0$, obtaining a family of maps {$U_{\delta }:= h \circ T_\delta \circ h^{-1}$}.
We summarize the
situation in the following diagram%
\begin{equation}
\begin{array}{ccc}
{{\mathbb{S}}^1} & \overset{T_0}{\longrightarrow } & {{\mathbb{S}}^1} \\
\downarrow {\small h} & & \downarrow {\small h} \\
{{\mathbb{S}}^1} & \overset{R_{\alpha}}{\longrightarrow } & {{%
\mathbb{S}}^1}%
\end{array}%
\qquad%
\begin{array}{ccc}
{{\mathbb{S}}^1} & \overset{T_\delta}{\longrightarrow } & {{\mathbb{S}}^1} \\
\downarrow {\small h} & & \downarrow {\small h} \\
{{\mathbb{S}}^1} & \overset{U_\delta}{\longrightarrow } & {{%
\mathbb{S}}^1}%
\end{array}%
\label{diagrams}
\end{equation}%
Since $h$ is a homeomorphism of a compact space it is uniformly continuous.
This implies that
\begin{equation*}
\lim_{\delta \rightarrow 0}\sup_{x\in {\mathbb S}^{1}}|R_{\alpha }(x)-U_{\delta
}(x)|=0.
\end{equation*}%
Let $\overline{\mu }_{\delta }:=h_{\ast }\mu _{\delta }.$ These measures
are invariant for $U_{\delta }.$\ \ Then, by Theorem \ref{statstab} we get%
\begin{equation*}
\lim_{\delta \rightarrow 0}||\overline{\mu }_{\delta }-m||_{W}=0.
\end{equation*}%
This implies (uniformly approximating any continuous function with a sequence
of Lipschitz ones) that for each $g\in C^{0}(\mathbb{S}^{1})$%
\begin{equation}
\lim_{\delta \rightarrow 0}\int_{\mathbb S^1} g~d\overline{\mu }_{\delta }=\int_{\mathbb S^1} g~dm.
\label{inte}
\end{equation}%
Now consider $f\in C^{0}(\mathbb{S}^{1})$ and remark that {(using the definition of push-forward of a measure)}
\begin{eqnarray*}
\int_{\mathbb S^1} f~~d\mu _{\delta } &=&\int_{\mathbb S^1} f\circ h^{-1} \circ h~d\mu _{\delta }=\int_{\mathbb S^1}
f\circ h^{-1}~d\overline{\mu }_{\delta }, \\
\int_{\mathbb S^1} f~d\mu _{0} &=&\int_{\mathbb S^1} f\circ h^{-1}~d\overline{\mu }_{0}=\int_{\mathbb S^1} f\circ h^{-1}~dm,
\end{eqnarray*}%
where in the last equality we used that $\overline{\mu }_{0}=h_{\ast }\mu _{0}$ is invariant for $R_{\alpha }$, which is uniquely ergodic, hence $\overline{\mu }_{0}=m$.
By \eqref{inte}, considering $g=f\circ h^{-1}$, this shows
\begin{equation*}
\lim_{\delta \rightarrow 0}\int_{\mathbb S^1} f~d\mu _{\delta }=\int_{\mathbb S^1} f~d\mu _{0}.
\end{equation*}
\end{proof}
\bigskip
{Similarly, one can extend the quantitative stability results proved in Theorem \ref{stst2} to smooth diffeomorphisms of the circle}.
{
\begin{remark}
We point out that the following theorem holds under much less regularity for $T_0$ (the proof remains the same). In fact, it is enough that $T_0\in C^r({\mathbb S^1})$ with
$r$ sufficiently big so that the conjugation $h$ is bi-Lipschitz; compare with Theorem \ref{thmhermanyoccoz}.
\end{remark}
}
\medskip
\begin{theorem}
\label{quantdiff}Let $T_{0}$ be a $C^{\infty }$ diffeomorphism of the circle
with Diophantine rotation number $\alpha \in \mathcal{D}(\tau )$, for some $\tau>1$. Let $%
\{T_{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel
measurable maps of the circle such that%
\begin{equation*}
\sup_{x\in {\mathbb S^{1}}}|T_{0}(x)-T_{\delta }(x)|\leq \delta .
\end{equation*}%
Suppose that for each $0\leq \delta \leq \overline{\delta }$, $\mu _{\delta
} $ is an invariant measure of $T_{\delta }$, and let $\mu _{0}$ be the unique
invariant measure of $T_{0}$. Then, for each $\ell <{\frac{1%
}{\gamma (\alpha )+1}}$ we have:
\begin{equation*}
\Vert \mu _{0}-\mu _{\delta }\Vert _{W}=O(\delta ^{\ell }).
\end{equation*}
\end{theorem}
\begin{proof}
By Theorem \ref{thmhermanyoccoz}, there exists $h\in \mathrm{Diff}%
_{+}^{\infty }({{\mathbb{S}}^{1}})$ conjugating $T_{0}$ with the rotation $%
R_{\alpha }.$ We apply the same conjugation to $T_{\delta }$ for each $%
\delta >0$, obtaining a family of maps $U_{\delta }.$ The situation is
still summarized by $(\ref{diagrams}).$ Since $h$ is a bi-Lipschitz map we
have
\begin{equation*}
\lim_{\delta \rightarrow 0}\sup_{x\in {\mathbb S^{1}}}|R_{\alpha }(x)-U_{\delta }(x)|=0
\end{equation*}%
and there is a $C\geq 1$ such that for any pair of probability measures $\mu
_{1},\mu _{2}$%
\begin{equation*}
C^{-1}||\mu _{1}-\mu _{2}||_{W}\leq ||h_{\ast }^{-1}\mu _{1}-h_{\ast
}^{-1}\mu _{2}||_{W}\leq C||\mu _{1}-\mu _{2}||_{W}
\end{equation*}%
(and the same holds for $h_{\ast }$). Let $\overline{\mu }_{\delta
}:=h_{\ast }(\mu _{\delta }).$ These measures are invariant for $U_{\delta
}. $\ \
By Theorem \ref{stst2} we then get that for each $\ell <{\frac{1}{\gamma
(\alpha )+1}}$ we have:
\begin{equation*}
\Vert m-\overline{\mu }_{\delta }\Vert _{W}=O(\delta ^{\ell }).
\end{equation*}%
This implies
\begin{equation*}
\Vert \mu _{0}-\mu _{\delta }\Vert _{W}=||h_{\ast }^{-1}m-h_{\ast }^{-1}%
\overline{\mu }_{\delta }||_{W}=O(\delta ^{\ell }).
\end{equation*}
\end{proof}
\bigskip
{Finally, one can also extend the existence of linear response, along the same lines of Theorem \ref{KAMandResp} and Corollary \ref{corKAM}.
In fact, as observed in Remark \ref{remarkteokam} ({\it iii}), the KAM theorem can be extended to sufficiently regular diffeomorphisms of the circle (one can prove it either directly ({\it e.g.}, \cite{Arnold, BroerSevryuk, Moser, Russmann,Vano}), or
by combining the result for rotations of the circle with Theorem \ref{thmhermanyoccoz}).
Since the proof can be adapted {\it mutatis mutandis} {(of course, leading to a different expression for the linear response}), we omit further details.}\\
\medskip
\section{Stability under discretization and numerical truncation}\label{sectrunc}
As an application of what has been discussed in the previous sections, we want to address the
following question:
\medskip
\noindent \textbf{Question:} \emph{Why are numerical simulations generally
quite reliable, in spite of the fact that numerical truncations are quite
bad perturbations, transforming the system into a piecewise constant one,
having only periodic orbits?}
\medskip
Let us consider the uniform grid $E_{N}$ on $\mathbb{S}^{1}$ defined by%
\begin{equation*}
E_{N}=\left\{\frac{i}{N}\in \mathbb{R}/\mathbb{Z}: \quad 1\leq i\leq N
\right\}.
\end{equation*}%
In particular when $N=10^{k}$ the grid represents the points which are
representable with $k$ decimal digits. Let us consider the projection $P_{N}:%
\mathbb{S}^{1}\rightarrow E_{N}$ defined by
\begin{equation*}
P_{N}(x)=\frac{\left\lfloor Nx\right\rfloor }{N},
\end{equation*}
where $\lfloor \cdot \rfloor$ is the floor function.
Given a map $T:\mathbb{S}^{1}\rightarrow \mathbb{S}^{1}$ and $N\in {%
\mathbb{N}}$, we define its \textit{$N$-discretization} $T_{N}:\mathbb{S}%
^{1}\rightarrow \mathbb{S}^{1}$ by%
\begin{equation*}
T_{N}(x):=P_{N}(T(x)).
\end{equation*}%
This is an idealized representation of what happens if we try to simulate
the behavior of $T$ on a computer, having $N$ points of resolution. Of
course the general properties of the systems $T_{N}$ and $T$ are a priori
completely different, starting from the fact that every orbit of $T_{N}$ is forced to be
eventually periodic. Still, these simulations give in many cases quite a reliable
picture of many aspects of the behavior of $T$, which justifies why these
naive simulations are still widely used in many applied sciences.
Focusing on the statistical properties of the system and on its invariant measures, one can investigate
whether the invariant measures of the system $T_{N}$ (when they exist)
converge to the physical measure of $T$, and in general if they converge to
some invariant measure of $T$. In this case, the statistical properties of $%
T $ are in some sense robust under discretization. Results of this kind have
been proved for some classes of piecewise expanding maps (see \cite{Bo},
\cite{GB})\ and for topologically generic diffeomorphisms of the torus (see
\cite{Gu}, \cite{Gu2}, \cite{mier}).
Since the discretization is a small perturbation in the uniform convergence
topology, a direct application of Theorem \ref{stadiff} gives
\begin{corollary}
\label{discrediffeo}Let $T_{0}$ be an orientation preserving
diffeomorphism of the circle with an irrational rotation number $\alpha $
and such that $\log (T_{0}^{\prime })$ has bounded variation, and let $N\geq
1 $. Let $T_{N}=P_{N}\circ T_{0}$ be its $N$-discretization.
Suppose $\mu _{N}$ is an invariant measure of $T_{N}$ and let $\mu _{0}$ be
the unique invariant probability measure of $T_{0}$. Then
\begin{equation*}
\lim_{N\rightarrow \infty }\int_{\mathbb S^1} f~d\mu _{N}=\int_{\mathbb S^1} f~d\mu _{0}
\end{equation*}%
for all $f\in C^{0}(\mathbb{S}^{1}).$
\end{corollary}
\begin{proof}
The statement follows by Theorem \ref{stadiff} noticing that
\begin{equation*}
\sup_{x\in {\mathbb{S}}^{1}}|T_{0}(x)-T_{N}(x)|\leq \frac{1}{N}.
\end{equation*}
\end{proof}
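\medskip
The following Python sketch illustrates Corollary \ref{discrediffeo} experimentally, with all numerical choices ours: for $T_{0}(x)=x+\alpha +0.1\sin (2\pi x)\ (\mathrm{mod.}\;1)$, a $C^{\infty }$ diffeomorphism whose rotation number we do not verify to be irrational, we compute the periodic cycle eventually reached by the orbit of $0$ under the $N$-discretization $T_{N}=P_{N}\circ T_{0}$, take the uniform measure on this cycle as an invariant measure $\mu _{N}$, and compare its average of an observable with a long Birkhoff average for $T_{0}$, used as a proxy for the integral with respect to $\mu _{0}$.
\begin{verbatim}
import math

alpha = (math.sqrt(5.0) - 1.0) / 2.0

def T0(x):
    return (x + alpha + 0.1 * math.sin(2.0 * math.pi * x)) % 1.0

def c(x):
    return math.cos(2.0 * math.pi * x)          # test observable

def cycle_average(N):
    # iterate the N-discretization T_N = P_N o T0 until the orbit repeats,
    # then average the observable over the periodic cycle (i.e. against mu_N)
    x, seen = 0.0, {}
    while x not in seen:
        seen[x] = len(seen)
        x = math.floor(N * T0(x)) / N
    start = seen[x]
    cycle = [p for p, i in seen.items() if i >= start]
    return sum(c(p) for p in cycle) / len(cycle)

def birkhoff_average(n_iter=10**6):             # proxy for the integral of c w.r.t. mu_0
    x, acc = 0.3, 0.0
    for _ in range(n_iter):
        x = T0(x)
        acc += c(x)
    return acc / n_iter

reference = birkhoff_average()
for N in (10**2, 10**3, 10**4, 10**5):
    print(N, cycle_average(N), reference)
\end{verbatim}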
We think this result is very similar to the one shown in Proposition 8.1 of \cite{mier}.
Comparing this kind of result with the ones in \cite{Gu}, we point out that in this
statement we do not suppose the system to be topologically generic and that
the convergence is proved for all discretizations, while in \cite{Gu} the
convergence is proved for a certain sequence of finer and finer
discretizations. \newline
As an application of our quantitative stability results (Theorems \ref{stst2}
and \ref{quantdiff}), we can also provide a quantitative estimate for the
speed of convergence of the invariant measure of the $N$-discretized system
to the original one. We remark that, as far as we know, there are no other
quantitative convergence results of this kind in the literature.%
\newline
\begin{corollary}
\label{quantdiscrediffeo}Let $T_{0}$ be a $C^{\infty }$ diffeomorphism of
the circle with Diophantine rotation number $\alpha \in \mathcal{D}(\tau )$, for some $\tau >1$.
Let $T_{N}=P_{N}\circ T_0$ be the family of its $N$-discretizations.
Suppose $\mu _{N}$ is an invariant measure of $T_{N}$ and let $\mu _{0}$ be
the unique invariant measure of $T_{0}$. Then, for each $\ell <%
{\frac{1}{\gamma (\alpha )+1}}$
\begin{equation*}
\Vert \mu _{0}-\mu _{N}\Vert _{W}=O(N^{-\ell }).
\end{equation*}
\end{corollary}
The proof of Corollary \ref{quantdiscrediffeo} is similar to
the one of Corollary \ref{discrediffeo}, using Theorem \ref{quantdiff} in place of Theorem \ref{stadiff}.
\section{Introduction}\label{sec:introduction}
The search for topological Majorana zero modes in nanowires with superconducting proximity effect is among the most active research areas in physics~\cite{sarma2015majorana,sau2021majorana,lutchyn2018majorana}. The theoretical predictions made in 2010 are precise~\cite{sau2010nonabelian,lutchyn2010majorana,sau2010generic,oreg2010helical}: Take a semiconductor (SM) nanowire, InAs or InSb, with strong spin-orbit (SO) coupling, in contact with a nearby superconductor (SC), Al or Nb, which induces a proximity effect in the nanowire; then apply a magnetic field along the nanowire to create a Zeeman spin splitting; and for suitable theoretically defined values of the spin splitting $V_z$, the chemical potential $\mu$, and the induced proximity SC gap $\Delta$, the nanowire will develop topological SC provided $V_z ^2 > \mu^2 + \Delta ^2$, with $V_{zc}= (\mu^2 + \Delta ^2)^{1/2}$ being the topological quantum phase transition (TQPT) point where the SC gap closes and the system transitions from a trivial SC to a topological SC. The topological SC with $V_z > V_{zc}$ comes with a bulk gap that protects emergent non-Abelian Majorana zero modes (MZMs) localized at the wire ends. If the wire is long enough, so that the two MZMs have little overlap, the system is exponentially topologically protected with effectively isolated anyonic MZMs, leading to the possibility of a limited version of fault-tolerant topological quantum computing using Ising anyons~\cite{nayak2008nonabelian}.
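As a concrete illustration of this criterion, taking a chemical potential $\mu = 1$ meV and a pairing scale $\Delta \simeq 0.2$ meV (values comparable to the parameters used in our simulations in Sec.~\ref{sec:theory}), one obtains $V_{zc}=\sqrt{\mu^{2}+\Delta^{2}}\approx 1.02$ meV, which is the TQPT location quoted for the pristine wire in Sec.~\ref{sec:results}.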
The standard scheme to identify MZMs has been tunneling spectroscopy~\cite{sengupta2001midgap,sau2010nonabelian,flensberg2010tunneling,wimmer2011quantum,law2009majorana}, where MZMs lead to zero-bias conductance peaks (ZBCPs) of the quantized value $2e^2/h$ at $T=0$, although this strict quantization may not apply at nonzero temperatures~\cite{setiawan2017electron}. Early experiments indicated the existence of ZBCPs with small values $\ll 2e^2/h$, which were tentatively claimed to be evidence for MZMs~\cite{das2012zerobias,deng2012anomalous,mourik2012signatures,churchill2013superconductornanowire,finck2013anomalous}. But the very small tunnel conductance values of these ZBCPs (as well as the extremely soft nature of the induced SC gap) made the situation inconclusive. In fact, it was clear that these early nanowire samples all had considerable disorder-induced subgap fermionic states, with the small ZBCPs most likely associated with class D electron antilocalization effects~\cite{liu2012zerobias,bagrets2012class,akhmerov2011quantized,sau2013density}. More recently, however, experiments have reported large ZBCPs $\sim 2e^2/h$ in hard induced SC gap situations~\cite{nichele2017scaling,zhang2018quantizeda,zhang2021large,yu2021nonmajorana,pan2020situ,song2021large}, which have created considerable excitement as possible signatures of MZMs, but the typical ZBCP is not stable (e.g., in magnetic field, gate voltage, or tunnel barrier), and they often arise only with considerable fine-tuning of postselected data. In addition, the ZBCPs are never seen in simultaneous tunneling from both ends, which is a requirement for the nonlocality of MZMs. Moreover, the necessary Majorana oscillations and the opening of a topological gap are never observed either~\cite{dassarma2012splitting}. It now seems almost certain that these large-conductance ZBCPs are induced by strong disorder in the nanowire and are associated with disorder-induced nontopological subgap Andreev bound states, which are experimentally generated by fine-tuning and cherry-picking of large data sets~\cite{pan2020physical,pan2020generic,pan2021disorder,pan2021threeterminal,pan2021crossover,pan2021quantized,dassarma2021disorderinduced,woods2021charge,zeng2021partiallyseparated,ahn2021estimating}.
In the current work, we show that such large-conductance trivial ZBCPs may arise almost on demand in disordered nanowires under certain conditions, as has recently been reported in both InAs and InSb nanowires~\cite{song2021large,yu2021nonmajorana}.
{Our mechanism, which explains the generic zero-bias conductance peaks in an experimentally plausible way, was not addressed in previous studies~\cite{pan2020physical,pan2021threeterminal,pan2021quantized} and could not even have been considered earlier, because the generic large zero-bias conductance peaks have only been reported in recent experiments~\cite{yu2021nonmajorana,song2021large}.}
We establish that the requirements for such on-demand large ZBCPs are simply that the strong disorder be suppressed near the wire ends, where the tunnel barrier/contact is located, along with the ability to adjust the tunnel barrier to rather low values so as to enhance the conductance to the desired large magnitude. Since the nanowire disorder is likely to be suppressed near the wire ends by virtue of the screening induced by the metallic tunnel contact, we think that we have identified the main physical mechanism leading to almost generic large trivial conductance peaks in the tunneling experiments, {which are also likely to appear in experiments in a natural way without invoking too much fine-tuning.}
Such on-demand large ZBCPs are, however, neither robust nor stable, and their conductance would vary strongly with tunnel barrier, magnetic field, temperature, and chemical potential. In addition, they would typically show up only for tunneling from one end and not manifest any Majorana oscillations. We have, therefore, solved the mystery of why some recent nanowire experiments find numerous large trivial ZBCPs~\cite{song2021large}.
\section{Theory}\label{sec:theory}
We use the minimal one-dimensional one-subband nanowire model, solving the Bogoliubov-de Gennes (BdG) equation exactly in the presence of random disorder, as described in great detail in Ref.~\cite{pan2020physical}. This minimal model is known to work very well and reproduce the results of numerically intensive realistic models essentially quantitatively~\cite{pan2021quantized,ahn2021estimating}. The realistic models involve many unknown parameters since the disorder details in the actual complicated hybrid SM-SC platforms with gates and tunneling leads are unknown, and therefore, the minimal model is more appropriate for general theoretical considerations. We refer the reader {to Appendix~\ref{App:A}} as well as to earlier publications~\cite{pan2020physical,pan2021quantized,ahn2021estimating} for more details on the theory. The key aspect of the current work is that the strength of the random disorder is much higher in the bulk of the wire than near its ends. We then solve the BdG equation exactly. Using these BdG solutions and the KWANT scattering matrix theory~\cite{groth2014kwant}, we obtain the tunnel conductance as a function of several relevant variables (e.g., temperature, disorder configurations, tunnel barrier, Zeeman splitting, chemical potential, etc.). Unless otherwise stated, we choose the following typical parameters~\cite{lutchyn2018majorana}: the effective mass is 0.015 $m_e$, with $m_e$ being the rest electron mass, the spin-orbit coupling strength is $0.5$ eV\AA{}, the chemical potential in InSb (normal lead) is 1 meV (25 meV), the parent SC gap is 0.2 meV, the SC-SM coupling strength is 0.2 meV, and the wire length is 3 $\mu$m (except for Fig.~\ref{fig:4}, in which it is 1 $\mu$m). Once we obtain the bias-voltage-dependent tunnel conductance at zero temperature, $G(V_{\text{bias}};T=0)$, we calculate the tunnel conductance at finite temperature $T$ as $G(V_{\text{bias}};T)=-\int \frac{\partial f(E-V_{\text{bias}};T)}{\partial E}\, G(E;T=0)\,dE$, where $f(E-V_{\text{bias}};T)$ is the Fermi-Dirac distribution at temperature $T$~\cite{setiawan2017electron}.
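For concreteness, the following minimal sketch implements the finite-temperature convolution defined above (it is not the actual simulation code: the zero-temperature trace $G(E;T=0)$ would come from the KWANT scattering-matrix calculation, and the energy grid, units, and toy input below are illustrative assumptions):
\begin{verbatim}
import numpy as np

K_B = 0.0862  # Boltzmann constant in meV/K

def thermal_broaden(E, G0, T):
    """Convolve a zero-temperature conductance trace G0 = G(E; T=0)
    (in units of e^2/h, on a uniform energy grid E in meV) with the
    negative derivative of the Fermi-Dirac distribution at temperature T (K)."""
    if T <= 0:
        return np.array(G0, dtype=float)
    E = np.asarray(E, dtype=float)
    G0 = np.asarray(G0, dtype=float)
    GT = np.empty_like(G0)
    for i, V in enumerate(E):
        x = np.clip((E - V) / (K_B * T), -500.0, 500.0)
        kernel = 1.0 / (4.0 * K_B * T * np.cosh(x / 2.0) ** 2)  # -df/dE at E - V
        GT[i] = np.trapz(kernel * G0, E)
    return GT

# example: broaden a toy zero-temperature trace at T = 116 mK (~10 micro-eV)
E = np.linspace(-0.3, 0.3, 601)                 # bias energies in meV
G0 = 2.0 * 0.01**2 / (E**2 + 0.01**2)           # a toy Lorentzian zero-bias peak
G_T = thermal_broaden(E, G0, T=0.116)
\end{verbatim}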
\begin{figure}[ht]
\centering
\includegraphics[width=3.4in]{Fig1.pdf}
\caption{Conductance spectra as a function of the Zeeman field $V_z$ and the bias voltage $V_{\text{bias}}$ in a pristine wire for (a) temperature $T=0$ and barrier strength $V_b=5$ meV; (b) $T=0$ and $ V_{b} =20$ meV; (c) $T=116$ mK and $ V_{b} =5$ meV; (d) $T=116$ mK and $ V_{b} = 20 $ meV. Panels (e)-(h) show corresponding line cuts of the topological ZBCPs at $V_z=1.31$ meV [red dashed lines in panels (a)-(d)] for different temperatures and barrier strengths. Refer to Sec.~\ref{sec:theory} for the parameters {and Appendix~\ref{App:B} for the wave functions and the local density of states}.}
\label{fig:1}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=6.8in]{Fig2.pdf}
\caption{Conductance spectra as a function of the Zeeman field $V_z$ and the bias voltage $V_{\text{bias}}$ in the presence of strong disorder in the bulk and strong suppression of the disorder near the ends for (a) temperature $T=0$ and barrier strength $V_b=5$ meV; (b) $T=0$ and $ V_{b} =20$ meV; (c) {$T=116$ mK} and $ V_{b} =5$ meV; (d) {$T=116$ mK} and $ V_{b} = 20 $ meV. The ZBCPs here are in the trivial regime. Panels (e)-(h) are corresponding conductance spectra measured from the other (right) end. The standard deviation of Gaussian disorder is $\sigma_\mu=3$ meV in the bulk region and 0.5 meV near the ends ($\sim 0.1~\mu$m). Refer to Sec.~\ref{sec:theory} for other parameters {and Appendix~\ref{App:B} for the wave functions, the local density of states, and the disorder spatial profile}.}
\label{fig:2}
\end{figure*}
\begin{figure}[ht]
\centering
\includegraphics[width=3.4in]{Fig3.pdf}
\caption{Line cuts of the trivial ZBCPs at a fixed Zeeman field for different temperatures and barrier strengths. Panels (a)-(d) show the line cuts at $V_z=0.52$ meV [red dashed lines in Figs.~\ref{fig:2}(a)-\ref{fig:2}(d)]. Panels (e)-(h) show the line cuts at $V_z=0.65$ meV [red dotted lines in Figs.~\ref{fig:2}(a)-\ref{fig:2}(d)].}
\label{fig:3}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=3.4in]{Fig4.pdf}
\caption{Conductance spectra as a function of the Zeeman field $V_z$ and the bias voltage $V_{\text{bias}}$ in a short wire ($L=1~\mu$m) in the presence of strong disorder in the bulk and strong suppression of the disorder near the ends for (a) temperature $T=0$ and barrier strength $V_b=5$ meV; (b) $T=0$ and $ V_{b} =20$ meV; (c) {$T=116$ mK} and $ V_{b} =5$ meV; (d) {$T=116$ mK} and $ V_{b} = 20 $ meV. Panels (e)-(h) show corresponding line cuts of the trivial ZBCPs at $V_z=0.31$ meV [red dashed lines in panels (a)-(d)] for different temperatures and barrier heights. The standard deviation of Gaussian disorder is $\sigma_\mu=3$ meV in the bulk region and 0.1 meV near the ends ($\sim 0.1~\mu$m). Refer to Sec.~\ref{sec:theory} for other parameters {and Appendix~\ref{App:B} for the wave functions, the local density of states, and the disorder spatial profile}.}
\label{fig:4}
\end{figure}
\section{Results}\label{sec:results}
In order to benchmark our results, we start with the pristine wire to study the generic effects of the tunnel barrier and the temperature on the conductance, as shown in Fig.~\ref{fig:1}. In Fig.~\ref{fig:1}(a), we present the conductance spectrum of a pristine wire with topological ZBCPs appearing beyond the TQPT point ($V_{zc}=1.02$ meV) at zero temperature and low barrier strength ($V_b=5$ meV). As we increase the barrier strength to $V_b=20$ meV in Fig.~\ref{fig:1}(b) while keeping zero temperature, we find that the topological ZBCP becomes sharper, and consequently the Majorana oscillations become more salient. However, the conductances of subgap intrinsic Andreev bound states (ABS)~\cite{huang2018metamorphosis} below the TQPT, showing the gap-closing feature, become smaller in Fig.~\ref{fig:1}(b). This indicates that the barrier strength does not affect the peak value of topological ZBCPs at zero temperature: it only lowers the conductance peaks of nontopological states. This is also directly manifested in Fig.~\ref{fig:1}(e), where we plot the line cuts of topological ZBCPs at a fixed Zeeman field [$V_z=1.31$ meV indicated by the red dashed line in Fig.~\ref{fig:1}(a)] for different tunnel barrier strengths varying from 5 to 20 meV, and we find that the peaks always stick to the quantized conductance of $2e^2/h$ at zero temperature.
A finite temperature always suppresses the topological ZBCPs below the quantized conductance of $2e^2/h$, particularly when $T$ is larger than the tunneling energy. For instance, in Fig.~\ref{fig:1}(f), where we increase the temperature to {$T=116$ mK}, topological ZBCPs are no longer quantized, and they decrease as we increase the tunnel barrier strength. We show a comparison between $T=0$ in Fig.~\ref{fig:1}(a) and finite $T$ in Fig.~\ref{fig:1}(c) to emphasize as a benchmark that the topological conductance spectrum is weakened and broadened everywhere by finite temperatures. Thus, all peaks manifest larger linewidths and lower peak values, and the details of subgap states become indiscernible.
In Fig.~\ref{fig:1}(d), we study the joint effect of the finite barrier strength and the finite temperature, and we find that they suppress all tunneling signals in the conductance spectrum, leaving very faint topological ZBCPs and a nearly zero conductance background. We also show the line cuts of conductances in Fig.~\ref{fig:1}(g) to visualize the thermal broadening effect as the temperature increases. From $T=0$ (red line) to {$T=12$ mK} (green line), the two lines overlap, which means the conductance peak does not change too much near zero temperature (i.e., $T$ below tunneling energy). However, as the temperature increases above the tunneling energy to {$T=116$ mK} (yellow line), the conductance peak of the topological ZBCP drops to $\sim 1.5e^2/h$. For the large barrier strength, we present Fig.~\ref{fig:1}(h) and find that the conductance peak drops even more quickly compared to the low barrier strength in Fig.~\ref{fig:1}(g), because the tunneling energy is now exponentially lower than $T$. Note that a finite temperature of {$T=116$ mK (10 $\mu$eV)}, which is less than 10\% of the induced SC gap ($\sim 0.12$ meV), can make the conductance peak drop by a large factor of 75\% [Fig.~\ref{fig:1}(h)] from the original quantized value of $2e^2/h$. This implies that it is unlikely to observe the quantized value of $2e^2/h$ in experiments even if one manages to measure the conductance of the real topological ZBCP due to the thermal broadening.
After establishing the benchmark of the finite barrier strength and the temperature in the pristine wire, we introduce our protocol that generates the on-demand large trivial ZBCP in disordered nanowires. The prerequisites are a strong bulk potential disorder and a suppression of the disorder near the nanowire ends. This particular configuration of disorder effectively creates quantum dots near the ends and thus can make the fermionic ABSs localized near the ends, which will then appear as large trivial ZBCPs. We present a representative example in Fig.~\ref{fig:2}(a), which shows a trivial ZBCP with a large conductance of $3e^2/h$. Because the trivial ZBCP is not invariant under the variations in barrier strength, we can manipulate the conductance peak on demand by adjusting the barrier strength as well as the temperature. In Figs.~\ref{fig:2}(b)-\ref{fig:2}(d), we change the conductance of trivial ZBCPs from $2.5e^2/h$ [Fig.~\ref{fig:2}(c)] to $ 0.5e^2/h$ [Fig.~\ref{fig:2}(d)] as we tune the barrier strength and the temperature within the same disorder realization.
In Figs.~\ref{fig:2}(e)-\ref{fig:2}(h), we present conductances measured from the other end (right) corresponding to Figs.~\ref{fig:2}(a)-\ref{fig:2}(d), {where there are no signatures of zero-bias peaks (see the wave functions and the local density of states in Fig.~\ref{fig:App1})}. Due to the trivial origin of these ZBCPs, the conductances from both ends lack the nonlocal correlation, resembling the experimentally observed ZBCPs.
In addition to the false-color plots of conductances, we also provide the line cuts at a fixed Zeeman field [$V_z=0.52$ meV indicated by red dashed lines in Fig.~\ref{fig:2}] in Figs.~\ref{fig:3}(a)-\ref{fig:3}(d). In Fig.~\ref{fig:3}(a), we notice that the shape of the ZBCP can be tuned into a zero-bias conductance dip (ZBCD) when we increase the barrier strength. Here, the ZBCD is essentially two symmetric side peaks close to each other, which arise from a pair of low-lying trivial ABSs. Therefore, they only appear if the barrier strength is sufficiently high because a high barrier strength suppresses the background conductance between the two side peaks. We show in Fig.~\ref{fig:3}(c) that a small barrier strength fails to manifest a transformation from the ZBCP to the ZBCD. Thus, the transmutation between the ZBCP and the ZBCD, as well as their conductance magnitudes, is easily experimentally tuned by the tunnel barrier~\cite{song2021large}.
Besides the high tunnel barrier strength, the existence of the ZBCD also needs a low temperature. In Fig.~\ref{fig:3}(b), we increase the temperature from zero to {$T=116$ mK}, finding that all ZBCDs in Fig.~\ref{fig:3}(a) now become ZBCPs. This is because of larger thermal broadening, which could combine two side peaks into one ZBCP, as manifested in Fig.~\ref{fig:3}(d). In Fig.~\ref{fig:3}(d), the ZBCD gradually disappears as the two side peaks merge into one zero-bias peak when the temperature increases, resembling the experimentally observed ZBCDs in Ref.~\onlinecite{song2021large}. Thus, ZBCDs disappear with increasing temperature and/or decreasing tunnel barrier, showing that they represent unstable trivial features.
In addition, the ZBCD, being a fine-tuned feature, is unstable to the applied field. In Fig.~\ref{fig:3}(e), we slightly change the Zeeman field to $V_z=0.65$ meV (red dotted lines in Fig.~\ref{fig:2}), and find that all ZBCDs disappear in comparison with Fig.~\ref{fig:3}(a). At this Zeeman field, the ZBCD cannot exist regardless of the barrier strength and the temperature as shown in Figs.~\ref{fig:3}(e)-\ref{fig:3}(h).
Although all conductances above, from Fig.~\ref{fig:1} to Fig.~\ref{fig:3}, are obtained for a wire of 3 $\mu$m, our protocol also works for short wires, which are closer to the experimental situation. To show this, we consider a shorter wire of 1 $\mu$m in Fig.~\ref{fig:4}, with strong potential disorder in the bulk region and suppression of the potential disorder near the nanowire ends.
{The conductance spectra shown in Fig.~\ref{fig:4} are measured at the right end in the presence of a specific disorder profile [shown in Fig.~\ref{fig:App1}(i)], because the left end does not manifest large zero-bias conductance peaks.}
In Fig.~\ref{fig:4}(a), we generate the trivial ZBCP with on-demand large conductance with a low tunnel barrier. By increasing the barrier strength and the temperature, we find that the trivial ZBCPs vary drastically as shown in Figs.~\ref{fig:4}(b)-\ref{fig:4}(d). From the line cuts of trivial ZBCPs in Figs.~\ref{fig:4}(e)-\ref{fig:4}(h), we see that the zero-bias conductance declines from $3.5e^2/h$ [blue line in Fig.~\ref{fig:4}(e)] to $e^2/h$ [yellow line in Fig.~\ref{fig:4}(h)] when the barrier strength increases from 5 to 20 meV and the temperature increases from zero to {$T=116$ mK}. All these results manifest the efficacy of our protocol in creating the on-demand trivial ZBCP with large conductance.
\section{Conclusion}\label{sec:conclusion}
We show in this work that there are experimentally relevant physical situations where large zero-bias peaks in the tunneling spectra may be somewhat abundantly present in spite of being entirely trivial in origin. Such peaks can be made to coincide with the quantized conductance $2e^2/h$ simply by adjusting the tunnel barrier strength (or alternatively perhaps the temperature at a fixed tunnel barrier). One may even see in some situations transitions in the trivial regions between large-conductance peaks and large-conductance dips in the tunnel spectra~\cite{song2021large}. Our results show that such trivial tunnel spectra arise from strong disorder in the bulk modulated by the weakening of the disorder at the wire ends (making the ends behave like quantum dots), {which is not only theoretically possible but also experimentally plausible, because strong disorder in the bulk region is generic and inevitable, while the disorder near the ends is suppressed by the screening of the metallic gate.}
{Although our theory is not guaranteed to be the unique mechanism operating in realistic experiments, because we simply cannot rule out other possibilities, we show that such a simple yet experimentally plausible hypothesis can lead to the theoretically simulated generic ZBCPs on demand. Therefore, the message of the current work, in a nutshell, is that on-demand trivial ZBCPs may arise without fine-tuning under certain experimental conditions, and we propose as a very likely mechanism that the screening-induced appearance of artificial quantum dots at the ends of the wire plays a substantial role in creating large-conductance peaks on demand, which do not manifest much robustness.}
Therefore, experimentalists should be skeptical of any zero-bias peaks not robust to the tunnel barrier, the magnetic field, and the chemical potential (i.e., gate voltage). In addition, the manifestation of a gap opening and the occurrence of peaks in tunneling from both ends are the minimal requirements for the existence of Majorana zero modes. In fact, any topological Majorana mode is unlikely to manifest $2e^2/h$ quantization because in most situations finite temperature should suppress the peak value; what is important is not the peak conductance, but the robust stability of the conductance feature to experimental tuning parameters and the occurrence of the feature for tunneling from both ends. Obtaining ``quantization'' by fine-tuning the tunnel barrier is perhaps better avoided in the search for topological Majorana zero modes. What is necessary are simultaneous ZBCPs from both ends that are stable to parameter variations, independent of the actual conductance values.
This work is supported by the Laboratory for Physical Sciences. We also acknowledge the University of Maryland High-Performance Computing Cluster (HPCC).
\section{Introduction}\label{sec1}
The multivariate normal distribution plays a fundamental role in statistical analyses and applications.
One of the most basic properties of the normal distribution is the symmetry of its density function.
However, in practice, data sets often do not follow the normal distribution or even possess symmetry, and for this reason,
researchers search for new distributions to fit data with different features, allowing flexibility in skewness, kurtosis, tails and multimodality; see, for example, \cite{Eling (2008),Fung and Hsieh (2000)}.
Several new families of distributions have been introduced for modeling skewed data, including the normal distribution as a special case.
One such prominent distribution in the univariate case is the skew normal ($\mathcal{SN}$) distribution due to \cite{Azzalini (1985),Azzalini (1986)}.
The multivariate version of the $\mathcal{SN}$ distribution has been introduced in \cite{Azzalini and Dalla Valle (1996)}.
This distribution has found diverse applications such as portfolio optimization concepts and risk measurement indices in financial markets; see \cite{Bernardi et al. (2020)}.
A complete set of extensions of multivariate $\mathcal{SN}$ distributions can be found in \cite{Azzalini (2005), Azzalini and Capitanio (2014)}. \cite{Balakrishnan and Scarpa (2012)} calculated and compared several different measures of skewness for the multivariate $\mathcal{SN}$ distribution. \cite{Balakrishnan et al. (2014)} proposed a test to assess if a sample comes from a multivariate $\mathcal{SN}$ distribution.
Here, we use $\phi_p(.;{\boldsymbol{\mu}}, {\mathbf{\Sigma}})$ and $\Phi_p(.;{\boldsymbol{\mu}}, {\mathbf{\Sigma}})$ to
denote the probability density function (PDF) and the cumulative distribution function (CDF) of the $p$-variate normal distribution, with mean ${\boldsymbol{\mu}}$ and covariance matrix ${\mathbf{\Sigma}}$, respectively, and also $\phi(.)$ and $\Phi(.)$ to denote the PDF and CDF of the univariate standard normal distribution, respectively.
From \cite{Azzalini and Capitanio (2014)} and \cite{Azzalini and Dalla Valle (1996)}, a $p$-dimensional random vector ${\mathbf{Y}}$ follows a multivariate
$\mathcal{SN}$ distribution if it has the PDF
\begin{eqnarray*}
f({\boldsymbol{y}})=2\phi_p({\boldsymbol{y}};{\boldsymbol{\xi}}, {\mathbf{\Omega}})
\Phi\left(\frac{ {\boldsymbol{\delta}}^\top {\overline{\mathbf{\Omega}}}^{-1} {\boldsymbol{\omega}}^{-1} ({\boldsymbol{y}}-{\boldsymbol{\xi}}) }{\sqrt{1-{\boldsymbol{\delta}}^\top {\overline{\mathbf{\Omega}}}^{-1} {\boldsymbol{\delta}} }} \right),
\end{eqnarray*}
with stochastic representation
\begin{eqnarray}\label{Repre}
{\mathbf{Y}} \stackrel{d}{=} {\boldsymbol{\xi}}+{\boldsymbol{\omega}}\left( {\boldsymbol{\delta}} U + {\mathbf{Z}}\right),
\end{eqnarray}
where $\stackrel{d}{=}$ stands for equality in distribution, ${\boldsymbol{\xi}}\in \mathbb{R}^p$, ${\mathbf{Z}} \sim \mathcal{N}_p\left(\boldsymbol{0},{\overline{\mathbf{\Omega}}}-{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top}\right)$ and
the univariate random variable $U$ has a standard normal distribution truncated to the interval $\left(0, \infty\right)$, independently of ${\mathbf{Z}}$. The normal distribution with parameters $(a, b)$ truncated to the interval $(0, \infty)$ is denoted by $\mathcal{TN}\left(a,b,(0,\infty)\right)$.
The vector ${\boldsymbol{\delta}}=\left(\delta_1,\ldots,\delta_p\right)^\top$ is the skewness parameter vector, such that $-1<\delta_i<1$, for $i\in\{1,\ldots, p\}$. The matrix
${\boldsymbol{\omega}}=\textrm{diag}\left(\omega_1,\ldots, \omega_p\right)=\left( {\mathbf{\Omega}}\odot \mathbf{I}_p\right)^{1/2}>0$ is a diagonal matrix formed by the standard
deviations of ${\mathbf{\Omega}}$ and ${\mathbf{\Omega}}={\boldsymbol{\omega}} {\overline{\mathbf{\Omega}}}{\boldsymbol{\omega}}$. Here, $\mathbf{I}_p$ is identity matrix of size $p$. The Hadamard product of matrices ${\mathbf{A}}=\left(a_{ij}\right): m\times n$ and ${\mathbf{B}}=\left(b_{ij}\right): m\times n$ is given by the
$ m\times n$ matrix ${\mathbf{A}} \odot{\mathbf{B}}=\left(a_{ij}b_{ij}\right)$. In the stochastic representation in (\ref{Repre}), positive definite matrices ${\mathbf{\Omega}}$ and ${\overline{\mathbf{\Omega}}}$ are covariance and correlation matrices, respectively.
The parameters ${\boldsymbol{\xi}}$, ${\boldsymbol{\omega}}$ and ${\boldsymbol{\delta}}$ are the location, scale and skewness parameters,
respectively.
Upon using the stochastic representation in (\ref{Repre}), a general new family of mixtures of the multivariate normal distribution can be introduced based on an arbitrary random variable $U$. A $p$-dimensional random vector ${\mathbf{Y}}$ follows a multivariate
mean mixture of normal ($\mathcal{MMN}$) distribution if, in (\ref{Repre}), $U$ is an arbitrary random variable, independent of ${\mathbf{Z}}$, with CDF $H (. ; {\boldsymbol{\nu}})$ indexed by the parameter ${\boldsymbol{\nu}}=\left(\nu_1,\ldots,\nu_p\right)^\top$.
Then, we say that $\textbf{Y}$ has a mean mixture of multivariate normal ($\mathcal{MMN}$)
distribution, and denote it by ${\mathbf{Y}}\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$.
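To make the construction concrete, the following short sketch (purely illustrative: the dimension, the parameter values and the choice of the mixing law $H$ below are arbitrary assumptions) draws samples from the representation (\ref{Repre}) for a user-supplied generator of $U$; a half-normal $U$ recovers the $\mathcal{SN}$ case, while a standard exponential $U$ gives the $\mathcal{MMNE}$ case studied later.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2023)

def rmmn(n, xi, Omega, delta, rU):
    """Sample n draws of Y = xi + omega*(delta*U + Z), with
    Z ~ N_p(0, Omegabar - delta delta^T) and U ~ H generated by rU(n)."""
    xi = np.asarray(xi, dtype=float)
    delta = np.asarray(delta, dtype=float)
    Omega = np.asarray(Omega, dtype=float)
    omega = np.sqrt(np.diag(Omega))                  # standard deviations
    Omegabar = Omega / np.outer(omega, omega)        # correlation matrix
    cov_Z = Omegabar - np.outer(delta, delta)        # must be positive definite
    U = rU(n)
    Z = rng.multivariate_normal(np.zeros(len(xi)), cov_Z, size=n)
    return xi + omega * (U[:, None] * delta + Z)

# MMNE example: U standard exponential; for the SN case use
# rU = lambda n: np.abs(rng.standard_normal(n))      (half-normal U)
Y = rmmn(5000, xi=[0.0, 0.0], Omega=[[1.0, 0.5], [0.5, 2.0]],
         delta=[0.6, 0.3], rU=lambda n: rng.exponential(1.0, n))
\end{verbatim}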
\cite{Negarestani et al. (2019)} presented a new family of distributions as a mixture of the normal distribution and studied its properties in the univariate and multivariate cases. These authors defined a $p$-dimensional random vector ${\mathbf{Y}}$ to have a
multivariate mean mixture of normal distribution if it has the stochastic representation
$
{\mathbf{Y}} \stackrel{d}{=} {\boldsymbol{\xi}}+{\boldsymbol{\delta}} U + {\mathbf{Z}}
$,
where ${\mathbf{Z}} \sim N_p(\boldsymbol{0},{\mathbf{\Omega}})$ and $ U$ is an arbitrary positive random variable with CDF $H (. ; {\boldsymbol{\nu}})$ independently of ${\mathbf{Z}}$, indexed by the parameter vectors ${\boldsymbol{\nu}}=\left(\nu_1,\ldots,\nu_p\right)^\top$ and ${\boldsymbol{\delta}}=\left(\delta_1,\ldots,\delta_p\right)^\top\in \mathbb{R}^p$.
The stochastic representation used by \cite{Negarestani et al. (2019)} is along the lines of the stochastic representation of the restricted multivariate $\mathcal{SN}$ distribution (see \cite{Azzalini (2005)}), but in this work, we use a different stochastic representation in (\ref{Repre}). \cite{Negarestani et al. (2019)} examined some properties of this family in the univariate case for general $U$, and also two specific cases of the family.
In the present work, we consider the multivariate form of this family and study its properties.
In (\ref{Repre}), if the random variable $U$ is a skewed random variable, then the $p$-dimensional vector ${\mathbf{Y}} $ will also be skewed.
In the $\mathcal{MMN}$ family, skewness can be regulated through the parameter $ {\boldsymbol{\delta}}$.
If in (\ref{Repre}) ${\boldsymbol{\delta}}={\boldsymbol{{0}}}$, the $\mathcal{MMN}$ family is reduced to the multivariate normal distribution.
The extended form of the $\mathcal{SN}$ distribution is obtained from (\ref{Repre}) when $U$ is distributed as $\mathcal{N}(0,1)$ variable truncated below $-\tau$ instead of $0$, for some constant $\tau$.
The representation in (\ref{Repre}) means that the $\mathcal{MMN}$ distribution is a ``mean mixture'' of the multivariate normal distribution when the mixing random variable is $U$. Specifically, we have the following hierarchical representation for the $\mathcal{MMN}$ distribution:
\begin{eqnarray}\label{Hierarchical.U}
{\mathbf{Y}}|{(U=u)} \sim N_p \left({\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}} u, {\mathbf{\Omega}}-{\boldsymbol{\omega}}{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top {\boldsymbol{\omega}}\right),
~~~~~ U \sim H\left(. ; {\boldsymbol{\nu}}\right).
\end{eqnarray}
According to (\ref{Hierarchical.U}), in the $\mathcal{MMN}$ model, just the mean parameter is mixed with the arbitrary random variable $U$, and so this class cannot be obtained from the Normal Mean-Variance Mixture ($\mathcal{NMVM}$) family. The family of multivariate $\mathcal{NMVM}$ distributions, originated by \cite{Barndorff et al. (1982)}, is another extension of the multivariate normal distribution, with a skewness parameter ${\boldsymbol{\delta}}\in \mathbb{R}^p$.
A $p$-dimensional random vector $\mathbf{Y}$ is said to have a multivariate $\mathcal{NMVM}$ distribution if it has the representation
\begin{eqnarray}\label{Rep-NMVM}
\mathbf{Y}={\boldsymbol{\xi}}+{\boldsymbol{\delta}} U+\sqrt{U}\mathbf{Z},
\end{eqnarray}
where $\mathbf{Z}\sim \mathcal{N}_p(\mathbf{0},\mathbf{\Omega})$ and $U$ is a positive random variable and the CDF of $U$, $H(.;{\boldsymbol{\nu}})$, is the mean-variance mixing distribution.
Both families of distributions in (\ref{Repre}) and (\ref{Rep-NMVM}) include the multivariate normal distribution as a special case and can be used for modeling data possessing skewness. In (\ref{Rep-NMVM}), both mean and variance are mixed with the
same positive random variable $U$, while in (\ref{Repre}) just the mean parameter is mixed with
$U$; however, the class in (\ref{Repre}) cannot be obtained from the class in (\ref{Rep-NMVM}).
Besides, skewness is a feature commonly found in the returns of some financial assets. For more information on applications of skewed distributions in finance theory, one may refer to \cite{Adcock et al. (2015)}.
In the presence of skewness in asset returns, the
$\mathcal{SN}$ and skew-t ($\mathcal{ST}$) distributions have been found to be useful models in both theoretical and empirical work.
Their parametrization is parsimonious, they are mathematically tractable, and in financial applications, the distributions are interpretable in terms of the efficient market hypothesis. Furthermore, they lead to theoretical results that are useful for portfolio selection and asset pricing. In actuarial science, the presence of skewness
in insurance claims data is the primary motivation for using $\mathcal{SN}$ distribution and its
extensions. In this regard, the $\mathcal{MMN}$ family that is developed here will also prove useful in finance, insurance science, and other applied fields.
\cite{Simaan (1993)} proposed that the
$n$-dimensional vector of returns on financial assets should be represented as
${\boldsymbol{X}}={\boldsymbol{U}}+{\boldsymbol{\lambda}} V$. The $n$-dimensional vector ${\boldsymbol{U}}$ is assumed to have a multivariate elliptically symmetric
distribution, independently of the non-negative univariate
random variable $V$, which has an unspecified skewed distribution.
The vector ${\boldsymbol{\lambda}}$, whose elements may take any real values, induces
skewness in the return of individual assets. \cite{Adcock and Shutes
(2012)} have described multivariate versions of the normal-exponential
and normal-gamma distributions. Both of them are specific cases of the
model proposed in \cite{Simaan (1993)}.
\cite{Adcock (2014)} and \cite{Adcock and Shutes (2012)} used the representation in \cite{Simaan (1993)}, with specific choices of ${\boldsymbol{U}} $ and $ V$, and introduced a number of distributions such as $\mathcal{SN}$, extended $\mathcal{SN}$, $\mathcal{ST}$, normal-exponential, and normal-gamma, and investigated the corresponding distributions and their applications in capital pricing, return on financial assets and portfolio selection.
In this paper, for the $\mathcal{MMN}$ family with an arbitrary random variable $U$ in the stochastic representation (\ref{Repre}),
basic distributional properties of the class, such as the characteristic function (CF), the moment generating function (MGF), the first four moments of the model, distributions of linear and affine transformations, the canonical form of the family and the mode of the model, are derived in general. Also, maximum likelihood estimation of the parameters by means of an EM-type algorithm is discussed, and then different measures of multivariate skewness are obtained.
The special cases when $U$ has standard gamma and standard exponential distributions, with the corresponding distributions denoted by $\mathcal{MMNG}$ and $\mathcal{MMNE}$ distributions, respectively, are studied in detail.
For the $\mathcal{MMNG}$ distribution, in addition to all the above basic properties of the distribution, the infinite divisibility of the model is also discussed. For the $\mathcal{MMNE}$ distribution, the basic properties of the distribution as well as the log-concavity of the model are discussed.
The maximum likelihood estimates of the parameters of the $\mathcal{MMNE}$ distribution
are evaluated in terms of the bias and the mean square error by means of a simulation study.
Moreover, various multivariate measures of skewness are computed and compared.
Finally, for two real data sets, the $\mathcal{MMNE}$ distribution is fitted and compared with the $\mathcal{SN}$ and $\mathcal{ST}$ distributions in terms of log-likelihood value as well as AIC and BIC criteria.
\section{ Model and Properties}\label{sec2}
In this section, some basic properties of the model are studied.
From (\ref{Repre}), if $U$ has a PDF $h (. ; {\boldsymbol{\nu}})$, an integral form of
the PDF of ${\mathbf{Y}}\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$ can be obtained as
\begin{eqnarray}\label{density-y}
f_{MMN_p}({\boldsymbol{y}};{\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}},{\boldsymbol{\nu}}) =\int_{-\infty}^{+\infty}\phi_p\left({\boldsymbol{y}}; {\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}} u, {\mathbf{\Omega}}-{\boldsymbol{\omega}}{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top} {\boldsymbol{\omega}}\right)d H(u;{\boldsymbol{\nu}})=\int_{-\infty}^{+\infty}\phi_p\left({\boldsymbol{y}}; {\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}} u, {\mathbf{\Omega}}-{\boldsymbol{\omega}}{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top} {\boldsymbol{\omega}}\right) h(u;{\boldsymbol{\nu}}) du.
\end{eqnarray}
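As a simple illustration of (\ref{density-y}) (only a sketch, with an arbitrarily chosen bivariate parametrization and with $h$ taken to be the standard exponential density, i.e., the $\mathcal{MMNE}$ case), the density can be evaluated by one-dimensional numerical quadrature over the mixing variable:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal
from scipy.integrate import quad

def mmn_pdf(y, xi, Omega, delta, h, lower=0.0, upper=np.inf):
    """Evaluate the MMN density at y by integrating the conditional normal
    density phi_p(y; xi + omega*delta*u, Omega - omega delta delta' omega)
    against the mixing density h(u) over (lower, upper)."""
    y, xi, delta = (np.asarray(a, dtype=float) for a in (y, xi, delta))
    Omega = np.asarray(Omega, dtype=float)
    omega = np.sqrt(np.diag(Omega))
    Sigma = Omega - np.outer(omega * delta, omega * delta)
    integrand = lambda u: multivariate_normal.pdf(
        y, mean=xi + omega * delta * u, cov=Sigma) * h(u)
    value, _ = quad(integrand, lower, upper)
    return value

# MMNE example: h(u) = exp(-u) on (0, infinity)
f_val = mmn_pdf([0.5, -0.2], xi=[0.0, 0.0], Omega=[[1.0, 0.5], [0.5, 2.0]],
                delta=[0.6, 0.3], h=lambda u: np.exp(-u))
\end{verbatim}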
We now present some theorems and lemmas with regard to different properties of these distributions, proofs of which are presented in Appendix A.\\
\noindent{\textbf{Remark 1.}}
We can introduce the normalized $\mathcal{MMN}$ distribution through the transformation
$\textbf{X}={\boldsymbol{\omega}}^{-1}\left({\mathbf{Y}}- {\boldsymbol{\xi}}\right)$. It is immediate that ${\mathbf{X}}$ admits the stochastic representation
$ \textbf{X}= {\boldsymbol{\delta}} U + {\mathbf{Z}}$ and hence the hierarchical representation
$
{\mathbf{X}}|(U=u) \sim \mathcal{N}_p \left({\boldsymbol{\delta}} u, {\overline{\mathbf{\Omega}}}-{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top}\right)
$
and
$
U\sim H(.; {\boldsymbol{\nu}})
$.
Then, we say that ${\mathbf{X}}$ has a normalized mean mixture of multivariate normal
distributions, and denote it by ${\mathbf{X}}\sim \mathcal{MMN}_p\left({\boldsymbol{0}}, {\overline{\mathbf{\Omega}}},{\boldsymbol{\delta}}; H\right)$.
\begin{lemma}\label{Lem-CF-MG}
If ${\mathbf{Y}}\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$, the CF and MGF of ${\mathbf{Y}}$ are as follows:
\begin{eqnarray}
C_{\mathbf{Y}}({\mathbf{t}})=e^{i{\mathbf{t}}^\top {\boldsymbol{\xi}}+ \frac{1}{2}{
{\mathbf{t}}^\top {\mathbf{ \Sigma}}_{\mathbf{Y}} {\mathbf{t}}}} C_U\left(i{\mathbf{t}}^\top{\boldsymbol{\omega}}{\boldsymbol{\delta}};{\boldsymbol{\nu}}\right),~~~~~
M_{\mathbf{Y}}({\mathbf{t}})=e^{{\mathbf{t}}^\top {\boldsymbol{\xi}}+ \frac{1}{2}{
{\mathbf{t}}^\top {\mathbf{ \Sigma}}_{\mathbf{Y}} {\mathbf{t}}}} M_U\left({\mathbf{t}}^\top{\boldsymbol{\omega}}{\boldsymbol{\delta}};{\boldsymbol{\nu}}\right),\label{MtY}
\end{eqnarray}
respectively, where $i=\sqrt{-1}$, ${\mathbf{ \Sigma}}_{\mathbf{Y}}={\mathbf{\Omega}}-{\boldsymbol{\omega}}{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top} {\boldsymbol{\omega}}$, and $C_U(.;{\boldsymbol{\nu}})=C_U(.)$ and $M_U(.;{\boldsymbol{\nu}})=M_U(.)$ are the CF and MGF of $U$, respectively.
\end{lemma}
Moreover, if ${\mathbf{X}}\sim \mathcal{MMN}_p\left({\boldsymbol{0}}, {\overline{\mathbf{\Omega}}},{\boldsymbol{\delta}}; H\right)$, the CF and MGF of ${\mathbf{X}}$ are
\begin{eqnarray}
C_{\mathbf{X}}({\mathbf{t}})=e^{ \frac{1}{2}{
{\mathbf{t}}^\top {\mathbf{ \Sigma}}_{\mathbf{X}} {\mathbf{t}}}} C_U\left(i{\mathbf{t}}^\top{\boldsymbol{\delta}};{\boldsymbol{\nu}}\right),~~~~~
M_{\mathbf{X}}({\mathbf{t}})=e^{ \frac{1}{2}{
{\mathbf{t}}^\top {\mathbf{ \Sigma}}_{\mathbf{X}} {\mathbf{t}}}} M_U\left({\mathbf{t}}^\top{\boldsymbol{\delta}};{\boldsymbol{\nu}}\right)\label{MtX},
\end{eqnarray}
respectively, where ${\mathbf{ \Sigma}}_{\mathbf{X}}={\overline{\mathbf{\Omega}}}-{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top}$. The first four moments of ${\mathbf{X}}$, presented in the following lemma, are derived by using the partial derivatives of the MGF of the normalized $\mathcal{MMN}$ distribution, and these, in turn, can be used to obtain the first four moments of ${\mathbf{Y}}$.
\begin{lemma}\label{Lem1}
Suppose ${\mathbf{X}}\sim \mathcal{MMN}_p\left({\boldsymbol{0}}, {\overline{\mathbf{\Omega}}},{\boldsymbol{\delta}}; H\right)$. Then, the first four moments of ${\mathbf{X}}$ are as follows:
\begin{eqnarray}
M_1({\mathbf{X}})&=&M_1^{\mathbf{X}}={\rm E}[U]{\boldsymbol{\delta}},\label{M1X}\\
M_2({\mathbf{X}})&=&M_2^{\mathbf{X}}={\mathbf{ \Sigma}}_{\mathbf{X}}+{\rm E}\left[U^2\right]\left({\boldsymbol{\delta}}\otimes{\boldsymbol{\delta}}^\top\right),\label{M2X}\\
M_3({\mathbf{X}})&=&M_3^{\mathbf{X}}= {\rm E}[U]\left\{{\boldsymbol{\delta}} \otimes {\mathbf{ \Sigma}}_{\mathbf{X}}+\mathrm{vec}\left({\mathbf{ \Sigma}}_{\mathbf{X}}\right){\boldsymbol{\delta}}^\top+ \left(\mathbf{I}_p\otimes {\boldsymbol{\delta}} \right){\mathbf{ \Sigma}}_{\mathbf{X}} \right\}+ {\rm E}\left[U^3\right] \left(\mathbf{I}_p\otimes {\boldsymbol{\delta}} \right)\left({\boldsymbol{\delta}}\otimes{\boldsymbol{\delta}}^\top\right), \label{M3X}\\
M_4({\mathbf{X}})&=&M_4^{\mathbf{X}}=\left(\mathbf{I}_{p^2} +\mathbf{U}_{p,p}\right)\left({\mathbf{ \Sigma}}_{\mathbf{X}}\otimes {\mathbf{ \Sigma}}_{\mathbf{X}}\right)+
\mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}})\left(\mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}})\right)^\top+{\rm E}\left[U^2\right]\left[
{\boldsymbol{\delta}} \otimes {\boldsymbol{\delta}}^\top \otimes {\mathbf{ \Sigma}}_{\mathbf{X}}+ {\boldsymbol{\delta}}\otimes {\mathbf{ \Sigma}}_{\mathbf{X}} \otimes {\boldsymbol{\delta}}^\top\right.\nonumber\\
&&+ \left. {\mathbf{ \Sigma}}_{\mathbf{X}} \otimes {\boldsymbol{\delta}}\otimes{\boldsymbol{\delta}}^\top+ {\boldsymbol{\delta}}^\top\otimes {\mathbf{ \Sigma}}_{\mathbf{X}} \otimes {\boldsymbol{\delta}}+{\boldsymbol{\delta}}^\top\otimes \mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}}) \otimes {\boldsymbol{\delta}}^\top +({\boldsymbol{\delta}} \otimes {\boldsymbol{\delta}}) (\mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}}) )^\top \right]+{\rm E}\left[U^4\right]{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top\otimes {\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top ,
\end{eqnarray}
where ${\rm E}(U^k)=M_U^{(k)}(0)$, with $M_U^{(k)}(.)$ being the k-th derivative of $M_U(t)$ with respect to $t$.
\end{lemma}
The Kronecker product of matrices ${\mathbf{A}}=\left(a_{ij}\right): m\times n$ and ${\mathbf{B}}=\left(b_{ij}\right): p\times q$ is the
$mp \times nq$ matrix ${\mathbf{A}} \otimes{\mathbf{B}}=\left(a_{ij}{\mathbf{B}}\right)$. A matrix ${\mathbf{A}} = \left({\mathbf{a}}_1,\ldots , {\mathbf{ a}}_n \right): m \times n$ with columns ${\mathbf{a}}_1, \ldots, {\mathbf{a}}_n$ can be rearranged into a vector, denoted by $\mathrm{vec}({\mathbf{A}})=\left({\mathbf{a}}_1^\top, \ldots, {\mathbf{a}}_n^\top \right)^\top$. The matrix $\mathbf{U}_{p,p}$ is the permutation (commutation) matrix associated with a $p \times p$ matrix, and is of size $p^2 \times p^2$. For details on the Kronecker product, the permutation matrix and their properties, see \cite{Graham (1981)} and \cite{Schott (2016)}.
We extend the results of Lemma \ref{Lem1}, using the stochastic representation in (\ref{Repre}), to incorporate location and scale parameters, ${\boldsymbol{\xi}}$ and ${\boldsymbol{\omega}}$, through the transformation ${\mathbf{Y}}={\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\mathbf{X}}$.
\begin{thm}\label{ThmLem2}
If ${\mathbf{Y}}\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$,
then its first four moments are as follows:
\begin{eqnarray}
M_1({\mathbf{Y}})&=& {\boldsymbol{\xi}}+{\boldsymbol{\omega}}M_1^{\mathbf{X}},\label{M1Y}\\
M_2({\mathbf{Y}})&=&{\boldsymbol{\xi}}\otimes{\boldsymbol{\xi}}^\top+{\boldsymbol{\xi}}\otimes \left({\boldsymbol{\omega}}M_1^{\mathbf{X}}\right)^\top+{\boldsymbol{\omega}}M_1^{\mathbf{X}}\otimes {\boldsymbol{\xi}}^\top+ {\boldsymbol{\omega}}M_2^{\mathbf{X}} {\boldsymbol{\omega}}, \label{M2Y}\\
M_3({\mathbf{Y}})&=&{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes{\boldsymbol{\xi}}+{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top\otimes\left({\boldsymbol{\omega}}M_1^{\mathbf{X}}\right )
+{\boldsymbol{\xi}}\left({\boldsymbol{\omega}}M_1^{\mathbf{X}} \right)^\top \otimes{\boldsymbol{\xi}}
+\left({\boldsymbol{\omega}}M_1^{\mathbf{X}} \right)\otimes{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top+ \left({\boldsymbol{\omega}}M_2^{\mathbf{X}}{\boldsymbol{\omega}} \right)\otimes{\boldsymbol{\xi}}+{\boldsymbol{\xi}}\otimes \left({\boldsymbol{\omega}}M_2^{\mathbf{X}}{\boldsymbol{\omega}}\right)\nonumber\\
&&+ ({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}})\mathrm{vec}\left(M_2^{\mathbf{X}}\right)\otimes{\boldsymbol{\xi}}^\top
+({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}})M_3^{\mathbf{X}}{\boldsymbol{\omega}}, \label{M3Y}\\
M_4({\mathbf{Y}})&=& {\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top
+ {\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes{\boldsymbol{\xi}}\left({\boldsymbol{\omega}}M_1^{\mathbf{X}} \right)^\top
+{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes \left({\boldsymbol{\omega}}M_1^{\mathbf{X}} \right){\boldsymbol{\xi}}^\top
+ {\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes \left({\boldsymbol{\omega}}M_2^{\mathbf{X}}{\boldsymbol{\omega}} \right)+{\boldsymbol{\xi}}\left({\boldsymbol{\omega}}M_1^{\mathbf{X}}\right)^\top \otimes{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top\nonumber\\
&&+ \left({\boldsymbol{\xi}} \otimes{\boldsymbol{\xi}}\right) \left(\mathrm{vec}\left(M_2^{\mathbf{X}}\right) \right)^\top({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}})
+{\boldsymbol{\xi}} \otimes \left({\boldsymbol{\omega}}M_2^{\mathbf{X}}{\boldsymbol{\omega}} \right) \otimes{\boldsymbol{\xi}}^\top +{\boldsymbol{\xi}} \otimes {\boldsymbol{\omega}}\left(M_3^{\mathbf{X}} \right)^\top({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}})
+\left({\boldsymbol{\omega}}M_1^{\mathbf{X}} \right){\boldsymbol{\xi}}^\top \otimes{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top\nonumber\\
&&+{\boldsymbol{\xi}}^\top \otimes \left({\boldsymbol{\omega}}M_2^{\mathbf{X}}{\boldsymbol{\omega}} \right) \otimes {\boldsymbol{\xi}}
+{\boldsymbol{\xi}}^\top \otimes ({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}})\mathrm{vec}\left(M_2^{\mathbf{X}}\right) \otimes{\boldsymbol{\xi}}^\top
+{\boldsymbol{\xi}}^\top \otimes ({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}) M_3^{\mathbf{X}}{\boldsymbol{\omega}}
+\left({\boldsymbol{\omega}}M_2^{\mathbf{X}}{\boldsymbol{\omega}} \right) \otimes {\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top\nonumber\\
&&+ {\boldsymbol{\omega}} \left(M_3^{\mathbf{X}}\right)^\top ({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}})\otimes{\boldsymbol{\xi}}
+ ({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}) M_3^{\mathbf{X}}{\boldsymbol{\omega}} \otimes{\boldsymbol{\xi}}^\top
+({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}) M_4^{\mathbf{X}}({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}). \label{M4Y}
\end{eqnarray}
\end{thm}
From the above expressions, we can obtain the mean vector and covariance matrix of the $\mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$ family as
${\rm E}({\mathbf{Y}})={\boldsymbol{\xi}}+{\rm E}(U){\boldsymbol{\omega}}{\boldsymbol{\delta}}$ and
${\rm var}\left({\mathbf{Y}}\right)=
{\mathbf{\Omega}}+ ({\rm var}(U)-1){\boldsymbol{\omega}}{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}$.
Multiplying $M_{\mathbf{Y}}({\mathbf{t}})$ by the MGF of the $\mathcal{N}_p({\boldsymbol{\mu}},{\mathbf{\Sigma}})$
distribution, $\exp\left({\mathbf{t}}^\top {\boldsymbol{\mu}}+ \frac{1}{2}{\mathbf{t}}^\top {\mathbf{ \Sigma}}{\mathbf{t}}\right)$, yields a function of the same form as $M_{\mathbf{Y}}({\mathbf{t}})$, and we thus obtain the following result.
\begin{thm}
If ${\mathbf{Y}}_1\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$ and
${\mathbf{Y}}_2 \sim \mathcal{N}_p({\boldsymbol{\mu}},{\mathbf{\Sigma}})$ are independent variables, then
$
{\mathbf{Y}}={\mathbf{Y}}_1+{\mathbf{Y}}_2 \sim \mathcal{MMN}_p\left({\boldsymbol{\xi}}_{\mathbf{Y}},{\mathbf{\Omega}}_{\mathbf{Y}}, {\boldsymbol{\delta}}_{\mathbf{Y}}; H\right),
$
where
$
{\boldsymbol{\xi}}_{\mathbf{Y}}={\boldsymbol{\xi}}+{\boldsymbol{\mu}} $,
${\mathbf{\Omega}}_{\mathbf{Y}}={\mathbf{\Omega}}+{\mathbf{\Sigma}}$, and
$
{\boldsymbol{\delta}}_{\mathbf{Y}}={\boldsymbol{\omega}}_{\mathbf{Y}}^{-1}{\boldsymbol{\omega}}{\boldsymbol{\delta}},
$
with ${\boldsymbol{\omega}}_{\mathbf{Y}}=( {\mathbf{\Omega}}_{\mathbf{Y}}\odot \mathbf{I}_p)^{1/2}$.
\end{thm}
From the MGFs $M_{\mathbf{X}}({\mathbf{t}})$ and $M_{\mathbf{Y}}({\mathbf{t}})$, it is clear that the family
of $\mathcal{MMN}$ distributions is closed under affine transformations, as shown in the following results.
\begin{thm}\label{Thm1}
If ${\mathbf{X}}\sim \mathcal{MMN}_p\left({\boldsymbol{0}},{\overline{\mathbf{\Omega}}}, {\boldsymbol{\delta}}; H\right)$ and ${\mathbf{A}}$ is a non-singular $p\times p$ matrix such that $\mathrm{diag}\left({\mathbf{A}}^\top {\overline{\mathbf{\Omega}}} {\mathbf{A}} \right)={\mathbf{I}}_p$, that is, ${\mathbf{A}}^\top {\overline{\mathbf{\Omega}}} {\mathbf{A}}$ is a correlation matrix, then
$
{\mathbf{A}}^\top{\mathbf{X}} \sim \mathcal{MMN}_p\left({\boldsymbol{0}},{\mathbf{A}}^\top{\overline{\mathbf{\Omega}}}{\mathbf{A}},{\mathbf{A}}^\top{\mathbf{\delta}}; H\right).
$
\end{thm}
\begin{thm}\label{Thm2}
If ${\mathbf{Y}}\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$, ${\mathbf{A}}$ is a full-rank $p\times h$ matrix,
with $h \leq p$, and $\textbf{c}\in \mathbb{R}^h$, then
$
{\mathbf{T}}=\textbf{c}+{\mathbf{A}}^\top {\mathbf{Y}} \sim \mathcal{MMN}_h\left({\boldsymbol{\xi}}_{\mathbf{T}},{\mathbf{\Omega}}_{\mathbf{T}}, {\boldsymbol{\delta}_{\mathbf{T}}}; H\right),
$
where
$
{\boldsymbol{\xi}}_{\mathbf{T}}=\textbf{c}+{\mathbf{A}}^\top {\boldsymbol{\xi}}
$,
$
{\mathbf{\Omega}}_{\mathbf{T}}={\mathbf{A}}^\top {\mathbf{\Omega}}{\mathbf{A}}
$, and
$
{\boldsymbol{\delta}}_{\mathbf{T}}={\boldsymbol{\omega}}_{\mathbf{T}}^{-1}{\mathbf{A}}^\top{\boldsymbol{\omega}}{\boldsymbol{\delta}},
$
with ${\boldsymbol{\omega}}_{\mathbf{T}}=\left( {\mathbf{\Omega}}_{\mathbf{T}}\odot \mathbf{I}_h\right)^{1/2}$.
\end{thm}
As in the case of the multivariate $\mathcal{SN}$ distribution (see \cite{Azzalini and Capitanio (2014)}), it can be shown that, if the random vector ${\mathbf{Y}}$ is partitioned into a number of random vectors, independence among these components occurs when one of them follows the $\mathcal{MMN}$ distribution and the others have normal distributions, that is, when only one component of the skewness parameter ${\boldsymbol{\delta}}$ is non-zero and all the others are zero.
Without loss of generality, from here on, it is assumed that the first element of ${\boldsymbol{\delta}}$ is non-zero.
We now focus on a specific type of linear transformation of the $\mathcal{MMN}$ variable, which has special relevance for theoretical developments
and also, to some extent, for practical purposes.
\begin{thm}\label{Thm3}
For a given variable ${\mathbf{Y}}\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$, there exists a linear transformation
$\textbf{Z}^*={\mathbf{A}}_*({\mathbf{Y}}-{\boldsymbol{\xi}})$ such that $\mathbf{Z}^*\sim \mathcal{MMN}_p\left({\boldsymbol{0}},{\mathbf{I}}_p, {\boldsymbol{\delta}}_{\textbf{Z}^*}; H\right)$, where at most one component of ${\boldsymbol{\delta}}_{\textbf{Z}^*}$ is not zero, and ${\boldsymbol{\delta}}_{\textbf{Z}^*}=({\mathbf{\delta}}_*, 0, \ldots, 0)^\top$ with $\delta_*=\left({\boldsymbol{\delta}}^\top \overline{{\mathbf{\Omega}}}^{-1} {\boldsymbol{\delta}}\right)^{1/2}$.
\end{thm}
The variable $\textbf{Z}^*$, which we shall sometimes refer to as a canonical variate,
consists of $p$ independent components. The joint density is given by
the product of $p-1$ standard normal densities and at most one non-Gaussian
component $\mathcal{MMN}_1\left(0, 1, {\mathbf{\delta}}_*; H\right)$; that is, the density of $\textbf{Z}^*$ is
\begin{eqnarray}\label{density}
f_{\textbf{Z}^*}(\textbf{z})= f_{Z_1^*}(z_1) \prod_{i=2}^{p}\phi( z_i),
\end{eqnarray}
where $Z_1^*\sim \mathcal{MMN}_1(0,1,{\mathbf{\delta}}_*; H)$ (for univariate $\mathcal{MMN}$ distribution, see \cite{Negarestani et al. (2019)}).
Although Theorem \ref{Thm3} ensures that a canonical form can be obtained, and in general in many possible ways,
it is not obvious how to construct one in practice.
To find an appropriate ${\mathbf{A}}_*$ for the linear transformation $\textbf{Z}^*={\mathbf{A}}_*({\mathbf{Y}}-{\boldsymbol{\xi}})$, it is sufficient that ${\mathbf{A}}_*$ satisfies the following two conditions:
$
{\mathbf{A}}_*^\top {\mathbf{\Omega}} {\mathbf{A}}_*={\mathbf{I}}_p$
and
$
{\mathbf{A}}_*^\top{\boldsymbol{\omega}}{\boldsymbol{\delta}}={\boldsymbol{\delta}}_{\textbf{Z}^*}=({\mathbf{\delta}}_*, 0,\ldots, 0)^\top
$.
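One explicit construction is sketched below in our own code; the use of the symmetric inverse square root ${\mathbf{\Omega}}^{-1/2}$ combined with a Householder reflection, as well as all function names, are our own choices and are not taken from the paper.
\begin{verbatim}
import numpy as np

def canonical_transform(Omega, delta):
    """One possible A_* with A_*' Omega A_* = I_p and
    A_*' omega delta = (delta_*, 0, ..., 0)'."""
    p = Omega.shape[0]
    omega = np.diag(np.sqrt(np.diag(Omega)))
    evals, evecs = np.linalg.eigh(Omega)               # spectral decomposition of Omega
    C = evecs @ np.diag(evals ** -0.5) @ evecs.T       # C = Omega^(-1/2), symmetric
    v = C @ omega @ delta
    delta_star = np.linalg.norm(v)                     # = (delta' Omegabar^{-1} delta)^(1/2)
    w = v - delta_star * np.eye(p)[:, 0]
    if np.linalg.norm(w) < 1e-12:                      # v already aligned with e_1
        H = np.eye(p)
    else:
        w = w / np.linalg.norm(w)
        H = np.eye(p) - 2.0 * np.outer(w, w)           # Householder: H v = delta_star e_1
    return C @ H, delta_star

# check both conditions on a bivariate example
Omega = np.array([[1.0, 1.0], [1.0, 2.5]])
delta = np.array([0.65, 0.90])
A, d = canonical_transform(Omega, delta)
omega = np.diag(np.sqrt(np.diag(Omega)))
print(np.allclose(A.T @ Omega @ A, np.eye(2)), A.T @ omega @ delta, d)
\end{verbatim}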
The canonical form facilitates the computation of the mode of the distribution and the multivariate coefficients of skewness.
\begin{thm}\label{Thm5-mode}
If ${\mathbf{Y}}\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}}; H)$, the mode of ${\mathbf{Y}}$ is
$
\textbf{M}_0={\boldsymbol{\xi}}+\frac{m_0^*}{{\mathbf{\delta}}_*}{\boldsymbol{\omega}}{\boldsymbol{\delta}},
$
where $\delta_*=\left({\boldsymbol{\delta}}^\top \overline{{\mathbf{\Omega}}}^{-1} {\boldsymbol{\delta}}\right)^{1/2}$ and $m_0^*$ is
the mode of the univariate $\mathcal{MMN}_1\left(0,1,\delta_*; H\right)$ distribution.
\end{thm}
\section{Likelihood Estimation through EM Algorithm}\label{Est-sec}
For obtaining the maximum likelihood
estimates of all the parameters of $\mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}}; H)$, we propose an EM-type algorithm as in \cite{Meng and Rubin (1993)}. Let $\mathbf{Y} = (\mathbf{Y}_1, \ldots, \mathbf{Y}_n)^\top$ be a random sample of
size $n$ from a $\mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}}; H)$ distribution.
Consider the stochastic representation in (\ref{Repre})
for $\mathbf{Y}_i, i\in\{1,\ldots, n\}$. Following the EM algorithm, let $(\mathbf{Y}_i,U_i), i\in\{1,\ldots, n\}$, be the complete data, where $\mathbf{Y}_i$ is the observed data and $U_i$ is considered as missing data. Let ${\boldsymbol{\theta}}=({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}},{\boldsymbol{\nu}})$.
Using (\ref{Hierarchical.U}), the distribution of ${\mathbf{Y}}_i$, for
$i\in\{1,\ldots, n\}$, can be written hierarchically as
\begin{eqnarray*}\label{hierarchicall}
{\mathbf{Y}}_i|(U_i=u_i) \sim \mathcal{N}_p ({\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}} u_i, {\mathbf{\Sigma}}_{\mathbf{Y}}),~~~~~~
U_i \stackrel{iid}{\sim} H(.;{\boldsymbol{\nu}}),
\end{eqnarray*}
where $\stackrel{iid}{\sim}$ indicates that the random variables are independent and identically distributed, and ${\mathbf{\Sigma}}_{\mathbf{Y}}={\mathbf{\Omega}}-{\boldsymbol{\omega}}{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top} {\boldsymbol{\omega}}$.
Let ${\boldsymbol{y}}=\left({\boldsymbol{y}}^\top_1, \ldots, {\boldsymbol{y}}^\top_n\right)^\top$, where ${\mathbf{y}}_i$ is a realization from the $\mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}}; H)$ distribution.
Because
\begin{eqnarray}\label{complet-i}
f({\boldsymbol{y}}_i,u_i)=f({\boldsymbol{y}}_i|u_i)h(u_i;{\boldsymbol{\nu}}),
\end{eqnarray}
the complete data log-likelihood function, ignoring additive constants, is obtained from (\ref{complet-i}) as
\begin{eqnarray*}\label{complet-c}
\ell_c({\boldsymbol{\theta}})&=&-\frac{n}{2}\ln |{\mathbf{\Sigma}}_{\mathbf{Y}}|
-\frac{1}{2}\sum_{i=1}^{n}({\boldsymbol{y}}_i- {\boldsymbol{\xi}})^\top{\mathbf{\Sigma}}_{\mathbf{Y}}^{-1}({\boldsymbol{y}}_i- {\boldsymbol{\xi}})+{\boldsymbol{\alpha}}^\top {\mathbf{\Sigma}}_{\mathbf{Y}}^{-1} \sum_{i=1}^{n} u_i ({\boldsymbol{y}}_i- {\boldsymbol{\xi}})-\frac{1}{2} {\boldsymbol{\alpha}}^\top {\mathbf{\Sigma}}_{\mathbf{Y}}^{-1}{\boldsymbol{\alpha}}\sum_{i=1}^{n}u_i^2+\sum_{i=1}^{n} \ln h(u_i;{\boldsymbol{\nu}}),
\end{eqnarray*}
where ${\boldsymbol{\alpha}}={\boldsymbol{\omega}}{\boldsymbol{\delta}}$. Let us set
\begin{eqnarray}
\widehat{E_{i1}}^{(k)}={\rm E}\left[U_i|{\mathbf{Y}}_i={\boldsymbol{y}}_i, \widehat{{\boldsymbol{\theta}}}^{(k)}\right],~~~~~
\widehat{E_{i2}}^{(k)}={\rm E}\left[U_i^2|{\mathbf{Y}}_i={\boldsymbol{y}}_i,\widehat{{\boldsymbol{\theta}}}^{(k)}\right],\label{Eihat-general}
\end{eqnarray}
where $\widehat{{\boldsymbol{\theta}}}^{(k)}=\left(\widehat{{\boldsymbol{\xi}}}^{(k)}, \widehat{{\boldsymbol{\Omega}}}^{(k)}, \widehat{{\boldsymbol{\delta}}}^{(k)},\widehat{{\boldsymbol{\nu}}}^{(k)}\right)$.
After some simple algebra and using (\ref{Eihat-general}), the expectation with
respect to $U$ conditional on ${\boldsymbol{Y}}$, of the complete log-likelihood function, has the form
\begin{eqnarray}\label{Com-Like}
Q\left({\boldsymbol{\theta}}|\widehat{{\boldsymbol{\theta}}}^{(k)}\right)&=&\frac{n}{2}\ln|{\mathbf{\Sigma}}_{\mathbf{Y}}^{-1}|
-\frac{1}{2}\sum_{i=1}^{n}({\boldsymbol{y}}_i- {\boldsymbol{\xi}})^\top{\mathbf{\Sigma}}_{\mathbf{Y}}^{-1}({\boldsymbol{y}}_i- {\boldsymbol{\xi}})+ \sum_{i=1}^n \textrm{tr}\left[ {\mathbf{\Sigma}}_{\mathbf{Y}}^{-1} (\boldsymbol{y}_i-{\boldsymbol{\xi}}) {\boldsymbol{\alpha}}^\top\right]\widehat{E_{i1}}^{(k)}\nonumber\\
&&-\frac{1}{2} \textrm{tr}\left[ {\mathbf{\Sigma}}_{\mathbf{Y}}^{-1}{\boldsymbol{\alpha}}{\boldsymbol{\alpha}}^\top\right]\sum_{i=1}^n \widehat{E_{i2}}^{(k)}+\sum_{i=1}^n {\rm E}\left[\ln h(u_i;{\boldsymbol{\nu}})|{\mathbf{Y}}_i={\boldsymbol{y}}_i,\widehat{{\boldsymbol{\theta}}}^{(k)}\right],
\end{eqnarray}
where $\overline{{\boldsymbol{y}}}=\frac{1}{n} \sum_{i=1}^n {\boldsymbol{y}}_i$ is the sample mean vector.
The EM-type algorithm for the ML estimation of ${\boldsymbol{\theta}} =({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}},{\boldsymbol{\nu}})$ then proceeds as
follows:\\
\noindent \textbf{Algorithm 1.} Based on the initial value of $\boldsymbol{\theta}^{(0)}=\left(\boldsymbol{\xi}^{(0)}, \mathbf{\Omega}^{(0)}, \boldsymbol{\delta}^{(0)}, \boldsymbol{\nu}^{(0)}\right)$, the EM-type algorithm
iterates between the following E-step and M-step:
\noindent \textbf{E-step}: Given the estimates of model parameters at the $k$-th iteration, say ${\boldsymbol{\theta}}=\widehat{{\boldsymbol{\theta}}}^{(k)}$, compute $\widehat{E_{i1}}^{(k)}$ and $\widehat{E_{i2}}^{(k)}$, for $i \in\{1, 2, \ldots, n\}$;
\noindent \textbf{M-step 1}: Maximization of (\ref{Com-Like}) over parameters ${\boldsymbol{\xi}}$, ${\boldsymbol{\alpha}}$ and ${\mathbf{\Sigma}}_{\mathbf{Y}}$ leads to the following closed-form expressions:
\begin{eqnarray*}
\widehat{{\boldsymbol{\alpha}}}^{(k+1)}&=&\frac{\sum_{i=1}^n {\boldsymbol{y}}_i\widehat{E_{i1}}^{(k)}-\overline{{\boldsymbol{y}}}\sum_{i=1}^n \widehat{E_{i1}}^{(k)}}{\sum_{i=1}^n \widehat{E_{i2}}^{(k)}-\frac{1}{n}\left(\sum_{i=1}^n\widehat{E_{i1}}^{(k)}\right)^2},~~~~
\widehat{{\boldsymbol{\xi}}}^{(k+1)}=\overline{{\boldsymbol{y}}}-\frac{{~\widehat{{\boldsymbol{\alpha}}}^{(k+1)}}}{n} \sum_{i=1}^n \widehat{E_{i1}}^{(k)},\\
\widehat{{\boldsymbol{\Sigma}}}^{(k+1)}_{\mathbf{Y}}&=&
\frac{1}{n}\sum_{i=1}^{n}\left({\boldsymbol{y}}_i- \widehat{{\boldsymbol{\xi}}}^{(k+1)}\right)\left({\boldsymbol{y}}_i-
\widehat{{\boldsymbol{\xi}}}^{(k+1)}\right)^\top-\frac{2}{n}\sum_{i=1}^n \widehat{E_{i1}}^{(k)} \left(\boldsymbol{y}_i-\widehat{{\boldsymbol{\xi}}}^{(k+1)}\right){~\widehat{{\boldsymbol{\alpha}}}^{(k+1)}}^\top
+\frac{1}{n}{~\widehat{{\boldsymbol{\alpha}}}^{(k+1)}} {~\widehat{{\boldsymbol{\alpha}}}^{(k+1)}}^\top \sum_{i=1}^n \widehat{E_{i2}}^{(k)}.
\end{eqnarray*}
Therefore, we can compute $\widehat{{\boldsymbol{\Omega}}}^{(k+1)}=\widehat{{\boldsymbol{\Sigma}}}^{(k+1)}_{\mathbf{Y}}+\widehat{{\boldsymbol{\alpha}}}^{(k+1)}{~\widehat{{\boldsymbol{\alpha}}}^{(k+1)}}^\top$
and ${~\widehat{{\boldsymbol{\delta}}}^{(k+1)}}={~\widehat{{\boldsymbol{\omega}}}^{(k+1)}}^{-1}{\widehat{{\boldsymbol{\alpha}}}^{(k+1)}}$, where
$\widehat{{\boldsymbol{\omega}}}=\left(\widehat{{\boldsymbol{\Omega}}}\odot \mathbf{I}_p\right)^{1/2}$.
\noindent \textbf{M-step 2}: The update of $\widehat{{\boldsymbol{\nu}}}^{(k)}$ depends on the chosen distribution for $U$, and is obtained as
\begin{eqnarray*}
\widehat{{\boldsymbol{\nu}}}^{(k+1)}=\arg \max_{\boldsymbol{\nu}} \sum_{i=1}^n {\rm E}\left[\ln h(u_i;{\boldsymbol{\nu}})|{\mathbf{Y}}_i={\boldsymbol{y}}_i,\widehat{{\boldsymbol{\theta}}}^{(k)}\right].
\end{eqnarray*}
Updating of $\widehat{{\boldsymbol{\nu}}}^{(k)}$ is strongly related to the form of
$h(u_i;{\boldsymbol{\nu}})$. If the conditional expectation
${\rm E} \left[\ln h(u_i; {\boldsymbol{\nu}}) | {\mathbf{Y}}_i={\boldsymbol{y}}_i, \widehat{{\boldsymbol{\theta}}}^{(k)}\right]$
is difficult to evaluate, one may resort to
maximizing the restricted actual log-likelihood function, as follows:\\
\noindent \textbf{Modified M-step 2}: (Liu and Rubin \cite{Liu and Rubin (1994)})
Update $\widehat{{\boldsymbol{\nu}}}^{(k)}$ by
$
\widehat{{\boldsymbol{\nu}}}^{(k+1)}=\arg \max_{\boldsymbol{\nu}} \sum_{i=1}^n \ln f_{\mathcal{MMN}_p}\left({\boldsymbol{y}}_i;\widehat{{\boldsymbol{\xi}}}^{(k+1)}, \widehat{{\boldsymbol{\Omega}}}^{(k+1)}, \widehat{{\boldsymbol{\delta}}}^{(k+1)},{\boldsymbol{\nu}}\right)
$.\\
The above algorithm iterates between
the E-step and M-step until a suitable convergence criterion is satisfied.
We adopt the relative change between two successive evaluations of the log-likelihood function,
i.e., $\left|{\ell\left(\widehat{{\boldsymbol{\theta}}}^{(k+1)}|{\boldsymbol{y}}\right)}/{\ell\left({\widehat{\boldsymbol{\theta}}}^{(k)}|{\boldsymbol{y}}\right)}-1\right|$,
as a convergence criterion, where
$
\ell({\boldsymbol{\theta}}|{\boldsymbol{y}})=\sum_{i=1}^n \ln f_{\mathcal{MMN}_p}
\left({\boldsymbol{y}}_i;{\boldsymbol{\xi}}, {\mathbf{\Omega}}, {\boldsymbol{\delta}}, {\boldsymbol{\nu}}\right)
$.
\section{Special Case of $\mathcal{MMN}$ Distribution}\label{MMNG-sec}
In this section, we study in detail a special case of the $\mathcal{MMN}$ family.
In the stochastic representation in (\ref{Repre}), if the random variable $U$ follows the standard gamma
distribution with corresponding PDF $h(u; \nu)=u^{\nu-1} e^{-u}/\Gamma(\nu),~u>0$, we denote it by ${\mathbf{Y}}\sim \mathcal{MMNG}_p({\boldsymbol{\xi}}, {\mathbf{\Omega}}, {\boldsymbol{\delta}}, \nu)$. Then, the PDF of ${\mathbf{Y}}$ can be obtained from (\ref{density-y}) as follows:
\begin{eqnarray*}
f_{\mathcal{MMNG}_p}({\boldsymbol{y}})=\frac{\sqrt{2\pi}}{\eta^\nu\Gamma(\nu)} \exp\left({\frac{A^2}{2}}\right) \phi_p({\boldsymbol{y}}; {\boldsymbol{\xi}}, {\mathbf{ \Sigma}}_{\mathbf{Y}}) \int_{-A}^{+\infty}(z+A)^{\nu-1}\phi(z) dz, ~~~~~{\mathbf{y}}\in {\mathbb{R}}^p,
\end{eqnarray*}where $\eta=\sqrt{{\boldsymbol{\delta}}^\top {\boldsymbol{\omega}} {\mathbf{ \Sigma}}_{\mathbf{Y}}^{-1}{\boldsymbol{\omega}}{\boldsymbol{\delta}}}$, $A=\eta^{-1}\left[{\boldsymbol{\delta}}^\top {\boldsymbol{\omega}} {\mathbf{ \Sigma}}_{\mathbf{Y}}^{-1}({\boldsymbol{y}}-{\boldsymbol{\xi}}) -1 \right]$ and ${\mathbf{ \Sigma}}_{\mathbf{Y}}={\mathbf{\Omega}}-{\boldsymbol{\omega}}{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top} {\boldsymbol{\omega}}$.
By using the MGF in (\ref{MtY}), for ${\mathbf{Y}}\sim \mathcal{MMNG}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}},\nu)$, we obtain
\begin{eqnarray*}
M_{\mathbf{Y}}({\mathbf{t}})=e^{{\mathbf{t}}^\top {\boldsymbol{\xi}}+ \frac{1}{2}{
{\mathbf{t}}^\top {\mathbf{ \Sigma}}_{\mathbf{Y}} {\mathbf{t}}}}
\left(1-{\mathbf{t}}^\top{\boldsymbol{\omega}}{\boldsymbol{\delta}}\right)^{-\nu},~~~~{\mathbf{t}}^\top{\boldsymbol{\omega}}{\boldsymbol{\delta}}<1.
\end{eqnarray*}
From the expressions in (\ref{M1X})-(\ref{M4Y}), and the fact that ${\rm E}(U^r)=\Gamma(\nu+r)/\Gamma(\nu)$, for positive constant $r$, we can compute the first four moments of ${\mathbf{Y}}$ by substituting
${\rm E}(U)=\nu$, ${\rm E}\left(U^2\right)=\nu(\nu+1)$, ${\rm E}\left(U^3\right)=\nu(\nu+1)(\nu+2)$, and ${\rm E}\left(U^4\right)=\nu(\nu+1)(\nu+2)(\nu+3)$. Specifically, we find that
${\rm E}({\mathbf{Y}})={\boldsymbol{\xi}}+\nu{\boldsymbol{\omega}}{\boldsymbol{\delta}}$ and
$\textrm{var}\left({\mathbf{Y}}\right)={\mathbf{\Omega}}+(\nu-1){\boldsymbol{\omega}}{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}$.
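As a quick numerical sanity check (our own sketch, not part of the development; the parameter values below are arbitrary), these two expressions can be verified by Monte Carlo simulation from the stochastic representation ${\mathbf{Y}}={\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}}U+{\mathbf{Z}}$ with $U\sim \textrm{Gamma}(\nu,1)$ and ${\mathbf{Z}}\sim \mathcal{N}_p({\boldsymbol{0}},{\mathbf{\Omega}}-{\boldsymbol{\omega}}{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}})$ independent of $U$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nu = 2.0
xi = np.array([1.0, -1.0])
Omega = np.array([[1.0, 0.5], [0.5, 2.0]])
delta = np.array([0.3, 0.6])
omega = np.diag(np.sqrt(np.diag(Omega)))
Sigma_Y = Omega - omega @ np.outer(delta, delta) @ omega

n = 200000
U = rng.gamma(nu, 1.0, size=n)
Z = rng.multivariate_normal(np.zeros(2), Sigma_Y, size=n)
Y = xi + U[:, None] * (omega @ delta) + Z

print(Y.mean(axis=0), xi + nu * omega @ delta)      # empirical mean vs E(Y)
print(np.cov(Y, rowvar=False))                      # empirical covariance ...
print(Omega + (nu - 1.0) * omega @ np.outer(delta, delta) @ omega)   # ... vs var(Y)
\end{verbatim}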
\begin{figure}[htp]
\centerline{ \includegraphics[scale=.7]{PDFContour1.eps} }
\caption{Contour plots of $\mathcal{MMNE}_2$ distribution for different choices of ${\boldsymbol{\delta}}$. For the first two rows, the scale matrix is ${\mathbf{\Omega}}=(1, 0; 0 ,1)$, while for the third row, it is ${\mathbf{\Omega}}=(1, 1; 1 ,1.5)$.
\label{PDFContour1}}
\end{figure}
\noindent\textbf{Definition 1.} (Bose {\rm et al.} \cite{Bose et al. (2002)}; Steutel and Van Harn \cite{Steutel and Van Harn (2004)})
A random vector ${\mathbf{Y}}$ (or its distribution) is said to be infinitely divisible if, for
each $n\geq1$, there exist independent and identically distributed (iid) random vectors
${\mathbf{Y}}_1, \ldots, {\mathbf{Y}}_n$ such that ${\mathbf{Y}}\stackrel{d}{=}{\mathbf{Y}}_1+ \cdots + {\mathbf{Y}}_n$.
\begin{thm}\label{infinitely-divisible}
The $\mathcal{MMNG}$ distribution, in the multivariate case, is infinitely divisible.
\end{thm}
\begin{proof}[\bf Proof.] Without loss of generality, let ${\mathbf{X}}\sim \mathcal{MMNG}_p\left({\boldsymbol{0}}, {\overline{\mathbf{\Omega}}}, {\boldsymbol{\delta}}, \nu\right)$ and $ \textbf{X}_i\stackrel{d}{=} {\boldsymbol{\delta}} U_i + {\mathbf{Z}}_i$, where $U_i\sim Gamma\left(\alpha=\frac{\nu}{n},\beta=1\right)$ and ${\mathbf{Z}}_i \sim \mathcal{N}_p\left(0,\frac{1}{n}\left({\overline{\mathbf{\Omega}}}-{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top\right)\right)$ be independent random variables. It is easy to show that $\sum_{i=1}^{n} U_i\sim Gamma(\nu,1)$ and $\sum_{i=1}^{n} {\mathbf{Z}}_i\sim \mathcal{N}_p\left(0,{\overline{\mathbf{\Omega}}}-{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top\right)$, and so we can write
${\mathbf{X}}\stackrel{d}{=}{\mathbf{X}}_1+ \cdots+ {\mathbf{X}}_n$. Hence, the required result.
\end{proof}
In the following, the particular case of the $\mathcal{MMNG}$ distribution with $\nu=1$ is considered.
Upon setting $\nu=1$, the mixing variable $U$ follows the standard exponential distribution, and the distribution of ${\mathbf{Y}}$ in this case is denoted by $\mathcal{MMNE}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}})$. Then, the PDF of ${\mathbf{Y}}$ can be obtained as
\begin{eqnarray*}
f_{\mathcal{MMNE}_p}({\boldsymbol{y}})=\frac{\sqrt{2\pi}}{\eta} \exp\left({\frac{A^2}{2}}\right) \phi_p({\boldsymbol{y}}; {\boldsymbol{\xi}}, {\mathbf{ \Sigma}}_{\mathbf{Y}})\Phi(A), ~~~~~{\mathbf{y}}\in {\mathbb{R}}^p.
\end{eqnarray*}
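A minimal sketch of this closed-form density in our own code is given below; for the same arguments it agrees with a direct quadrature evaluation of (\ref{density-y}) under standard exponential mixing, which provides a simple numerical check.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal, norm

def mmne_pdf(y, xi, Omega, delta):
    """Closed-form MMNE_p density."""
    omega = np.diag(np.sqrt(np.diag(Omega)))
    Sigma_Y = Omega - omega @ np.outer(delta, delta) @ omega
    Sinv_od = np.linalg.solve(Sigma_Y, omega @ delta)     # Sigma_Y^{-1} omega delta
    eta = np.sqrt(delta @ omega @ Sinv_od)
    A = (Sinv_od @ (y - xi) - 1.0) / eta
    return (np.sqrt(2.0 * np.pi) / eta) * np.exp(A ** 2 / 2.0) \
        * multivariate_normal.pdf(y, mean=xi, cov=Sigma_Y) * norm.cdf(A)

xi = np.zeros(2)
Omega = np.array([[1.0, 0.0], [0.0, 2.5]])
delta = np.array([0.2, 0.7])
print(mmne_pdf(np.array([0.5, 1.0]), xi, Omega, delta))
\end{verbatim}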
Fig. \ref{PDFContour1} presents the PDFs of the bivariate $\mathcal{MMNE}$ distribution for
${\mathbf{\Omega}}=(1, 0; 0 ,1)$ and
${\mathbf{\Omega}}=(1, 1; 1 ,1.5)$,
and different choices of ${\boldsymbol{\delta}}$ for ${\boldsymbol{\xi}}=(0,0)^\top$.
Fig. \ref{PDFContour1} shows that the $\mathcal{MMNE}$ distribution exhibits a wide variety of density shapes, in terms of skewness. The PDF of the $\mathcal{MMNE}$ distribution clearly depends on ${\mathbf{\Omega}}$ and ${\boldsymbol{\delta}}$.
The following theorem is useful in the implementation of the EM algorithm for the
ML estimation of the parameters of the $\mathcal{MMNE}$ distribution.
\begin{thm}\label{Thm-conditional}
If ${\mathbf{Y}}\sim \mathcal{MMNE}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}})$ and the random variable $U$ follows the standard exponential distribution, then
$
U|\left({\mathbf{Y}}={\mathbf{y}}\right) \sim \mathcal{TN}\left(\eta^{-1}A,\eta^{-2},(0,\infty)\right).
$
Furthermore, for $k\in\{2, 3, \ldots \}$,
\begin{eqnarray*}\label{EU|Y}
{\rm E}[U|{\mathbf{Y}}={\boldsymbol{y}}]=\eta^{-1}\left(A +\frac{\phi(A)}{\Phi(A)}\right),~~~~
{\rm E}[U^k|{\mathbf{Y}}={\boldsymbol{y}}]=A\eta^{-1} {\rm E}\left[U^{k-1}|{\mathbf{Y}}={\boldsymbol{y}}\right]+(k-1)\eta^{-2}
{\rm E}\left[U^{k-2}|{\mathbf{Y}}={\boldsymbol{y}}\right].
\end{eqnarray*}
\end{thm}
\begin{proof}[\bf Proof.]
The conditional distribution follows easily by the use of Bayes' rule, and the moment expressions then follow from the standard moment recursion of the truncated normal distribution.
\end{proof}
Now, we can obtain the ML estimates of the parameters of the $\mathcal{MMNE}$ distribution. By using Theorem \ref{Thm-conditional} and setting
\begin{eqnarray*}
\widehat{E_{i1}}^{(k)}={\rm E}\left[U_i|{\mathbf{Y}}_i={\boldsymbol{y}}_i,\widehat{{\boldsymbol{\theta}}}^{(k)}\right]=\frac{1}{\widehat{\eta}^{(k)}}\left(\widehat{A}_i^{(k)} +\frac{\phi\left(\widehat{A}_i^{(k)}\right)}{\Phi\left(\widehat{A}_i^{(k)}\right)}\right)\mbox{ and }
\widehat{E_{i2}}^{(k)}=
{\rm E}\left[U_i^2|{\mathbf{Y}}_i={\boldsymbol{y}}_i,\widehat{{\boldsymbol{\theta}}}^{(k)}\right]=\frac{1}{{~\widehat{\eta}^{(k)}}^2}
\left[{~\widehat{A}_i^{(k)}}^2 +\widehat{A}_i^{(k)}\frac{\phi\left(\widehat{A}_i^{(k)}\right)}{\Phi\left(\widehat{A}_i^{(k)}\right)}+1\right],
\end{eqnarray*}
in expression (\ref{Eihat-general}), the EM algorithm for the $\mathcal{MMNE}$ distribution can be performed,
where $\widehat{\eta}^{(k)}= \sqrt{{~\widehat{{\boldsymbol{\alpha}}}^{(k)}}^\top {~\widehat{\mathbf{\Sigma}}_{\mathbf{Y}}^{(k)}}^{-1} \widehat{{\boldsymbol{\alpha}}}^{(k)}}$ and $\widehat{A}_i^{(k)}= {~\widehat{\eta}^{(k)}}^{-1} \left[{~\widehat{{\boldsymbol{\alpha}}}^{(k)}}^\top {~\widehat{\mathbf{\Sigma}}_{\mathbf{Y}}^{(k)}}^{-1}\left({\boldsymbol{y}}_i-\widehat{{\boldsymbol{\xi}}}^{(k)}\right) -1 \right]$.
Note that, in the case of the $\mathcal{MMNE}$ distribution, the distribution of $U$ does not have any parameter; hence, there is no need to estimate ${\boldsymbol{\nu}}$ in the EM algorithm, and M-step 2 is simply skipped.
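For concreteness, a minimal self-contained sketch of this EM iteration for the $\mathcal{MMNE}$ distribution is given below. It is our own code, not the authors' implementation; the starting values, the symmetrization of $\widehat{\boldsymbol{\Sigma}}_{\mathbf{Y}}$ and the stopping rule are our own numerical choices.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, multivariate_normal

def em_mmne(Y, max_iter=500, tol=1e-8):
    """EM estimation of (xi, Omega, delta) for the MMNE_p distribution."""
    n, p = Y.shape
    ybar = Y.mean(axis=0)
    xi, alpha = ybar.copy(), np.full(p, 0.1)          # crude starting values
    Sigma = np.cov(Y, rowvar=False)                   # plays the role of Sigma_Y
    old_ll = -np.inf
    for _ in range(max_iter):
        # E-step: eta, A_i and the truncated-normal moments E_{i1}, E_{i2}
        Sinv_a = np.linalg.solve(Sigma, alpha)
        eta = np.sqrt(alpha @ Sinv_a)
        A = ((Y - xi) @ Sinv_a - 1.0) / eta
        r = norm.pdf(A) / norm.cdf(A)
        E1 = (A + r) / eta
        E2 = (A ** 2 + A * r + 1.0) / eta ** 2
        # M-step: closed-form updates of alpha, xi and Sigma_Y
        alpha = (Y.T @ E1 - ybar * E1.sum()) / (E2.sum() - E1.sum() ** 2 / n)
        xi = ybar - alpha * E1.mean()
        R = Y - xi
        Sigma = (R.T @ R - 2.0 * np.outer(R.T @ E1, alpha)
                 + np.outer(alpha, alpha) * E2.sum()) / n
        Sigma = (Sigma + Sigma.T) / 2.0               # enforce symmetry numerically
        # observed-data log-likelihood (closed-form MMNE density)
        Sinv_a = np.linalg.solve(Sigma, alpha)
        eta = np.sqrt(alpha @ Sinv_a)
        A = ((Y - xi) @ Sinv_a - 1.0) / eta
        ll = np.sum(0.5 * np.log(2.0 * np.pi) - np.log(eta) + A ** 2 / 2.0
                    + multivariate_normal.logpdf(Y, mean=xi, cov=Sigma)
                    + norm.logcdf(A))
        if np.abs(ll / old_ll - 1.0) < tol:
            break
        old_ll = ll
    Omega = Sigma + np.outer(alpha, alpha)
    delta = alpha / np.sqrt(np.diag(Omega))
    return xi, Omega, delta, ll
\end{verbatim}
In practice, moment-based starting values and a guard against very small values of $\Phi(\widehat{A}_i^{(k)})$ improve numerical stability, while the closed-form updates keep each iteration inexpensive.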
By using the fact that $E(U^m)=m!$, for $m\in\{1, 2, \ldots\}$, together with the expressions in (\ref{M1Y})-(\ref{M4Y}), for the random vector ${\mathbf{Y}}\sim \mathcal{MMNE}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}})$, we have
\begin{eqnarray}
M_1({\mathbf{Y}})&=& {\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}},\label{M1Yexp}\\
M_2({\mathbf{Y}})&=& {\boldsymbol{\xi}}\otimes{\boldsymbol{\xi}}^\top+{\boldsymbol{\xi}}\otimes {\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}}\otimes {\boldsymbol{\xi}}^\top+\left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}\right), \label{M2Yexp}\\
M_3({\mathbf{Y}})&=&{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes{\boldsymbol{\xi}}+{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top\otimes{\boldsymbol{\omega}}{\boldsymbol{\delta}}
+{\boldsymbol{\xi}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}} \otimes{\boldsymbol{\xi}}
+{\boldsymbol{\omega}}{\boldsymbol{\delta}}\otimes{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top+ \left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}} \right)\otimes{\boldsymbol{\xi}} + {\boldsymbol{\xi}}\otimes \left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}} \right)\nonumber\\
&&+
\mathrm{vec}\left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}} \right)\otimes{\boldsymbol{\xi}}^\top
+ {\boldsymbol{\omega}}{\mathbf{{\boldsymbol{\delta}}}} \otimes{\mathbf{ \Sigma}}_{\mathbf{Y}}
+ \mathrm{vec}\left({\mathbf{ \Sigma}}_{\mathbf{Y}}\right){\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}+\left({\mathbf{I}}_p\otimes {\boldsymbol{\omega}}{\boldsymbol{\delta}}\right)\left[ {\mathbf{ \Sigma}}_{\mathbf{Y}}+ 6{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}\right], \label{M3Yexp}\\
M_4({\mathbf{Y}})&=& {\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top
+ {\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes{\boldsymbol{\xi}}({\boldsymbol{\omega}} {\boldsymbol{\delta}})^\top
+{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes ({\boldsymbol{\omega}}{\boldsymbol{\delta}} ){\boldsymbol{\xi}}^\top+ {\boldsymbol{\xi}}({\boldsymbol{\omega}}{\boldsymbol{\delta}} )^\top \otimes{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top
+{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top \otimes \left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}\right)\nonumber\\
&&+
({\boldsymbol{\xi}} \otimes{\boldsymbol{\xi}}) \left(\mathrm{vec}\left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}} \right) \right)^\top
+{\boldsymbol{\xi}} \otimes \left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}} \right) \otimes{\boldsymbol{\xi}}^\top + {\boldsymbol{\xi}} \otimes {\boldsymbol{\omega}}\left(M_3^{\mathbf{X}} \right)^\top\left({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}\right)
+({\boldsymbol{\omega}}{\boldsymbol{\delta}} ){\boldsymbol{\xi}}^\top \otimes{\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top
\nonumber\\
&&+{\boldsymbol{\xi}}^\top \otimes \left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}\right) \otimes {\boldsymbol{\xi}}
+{\boldsymbol{\xi}}^\top \otimes \mathrm{vec}\left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}}\right) \otimes{\boldsymbol{\xi}}^\top
+{\boldsymbol{\xi}}^\top \otimes ({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}) M_3^{\mathbf{X}} {\boldsymbol{\omega}}
\nonumber\\
&&+\left({\mathbf{ \Sigma}}_{\mathbf{Y}}+2{{\boldsymbol{\omega}}\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\boldsymbol{\omega}} \right) \otimes {\boldsymbol{\xi}}{\boldsymbol{\xi}}^\top+ {\boldsymbol{\omega}} (M_3^{\mathbf{X}})^\top \left({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}\right)\otimes{\boldsymbol{\xi}}
+ ({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}) M_3^{\mathbf{X}} {\boldsymbol{\omega}} \otimes{\boldsymbol{\xi}}^\top
+({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}) M_4^{\mathbf{X}} ({\boldsymbol{\omega}}\otimes{\boldsymbol{\omega}}),\label{M4Yexp}
\end{eqnarray}
where
\begin{eqnarray*}
M_3^{\mathbf{X}}&=& {\boldsymbol{\delta}} \otimes {\mathbf{ \Sigma}}_{\mathbf{X}}+\mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}}){\boldsymbol{\delta}}^\top+ (\mathbf{I}_p\otimes {\boldsymbol{\delta}} ){\mathbf{ \Sigma}}_{\mathbf{X}} + 6 (\mathbf{I}_p\otimes {\boldsymbol{\delta}} )({\boldsymbol{\delta}}\otimes{\boldsymbol{\delta}}^\top),\\
M_4^{\mathbf{X}}&=&(\mathbf{I}_{p^2} +{\bf U}_{p,p})({\mathbf{ \Sigma}}_{\mathbf{X}}\otimes {\mathbf{ \Sigma}}_{\mathbf{X}})+
vec({\mathbf{ \Sigma}}_{\mathbf{X}})(\mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}}))^\top+ 2[
{\boldsymbol{\delta}} \otimes {\boldsymbol{\delta}}^\top \otimes {\mathbf{ \Sigma}}_{\mathbf{X}}+ {\boldsymbol{\delta}}\otimes {\mathbf{ \Sigma}}_{\mathbf{X}} \otimes {\boldsymbol{\delta}}^\top+ {\mathbf{ \Sigma}}_{\mathbf{X}} \otimes {\boldsymbol{\delta}}\otimes{\boldsymbol{\delta}}^\top\nonumber \\
&&+ {\boldsymbol{\delta}}^\top\otimes {\mathbf{ \Sigma}}_{\mathbf{X}} \otimes {\boldsymbol{\delta}}+ {\boldsymbol{\delta}}^\top\otimes \mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}}) \otimes {\boldsymbol{\delta}}^\top +({\boldsymbol{\delta}} \otimes {\boldsymbol{\delta}}) (\mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}}) )^\top ]+24 {\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top\otimes {\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top,
\end{eqnarray*}
and ${\mathbf{ \Sigma}}_{\mathbf{X}}={\overline{\mathbf{\Omega}}}-{\mathbf{{\boldsymbol{\delta}}{\boldsymbol{\delta}}}^\top}$.
In particular, the mean vector and covariance matrix are
$ {\rm E}({\mathbf{Y}})={\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}}$ and $\textrm{var}\left({\mathbf{Y}}\right)={\mathbf{\Omega}}$.
\begin{thm}
The $\mathcal{MMNE}$ distribution, in the multivariate case, is log-concave.
\end{thm}
\begin{proof}[\bf Proof.] Because log-concavity is preserved by affine transformations, it is sufficient to prove this property for the canonical form $\mathbf{Z}^*\sim \mathcal{MMNE}_p({\boldsymbol{0}},{\mathbf{I}}_p, {\boldsymbol{\delta}}_{\textbf{Z}^*})$.
From \cite{An (1996)} and \cite{Preekopa (1973)}, if the elements of a random vector are independent, and each has a log-concave density function, then their joint density is log-concave. We know that in the canonical form with PDF in (\ref{density}), the random variables $Z_1^*, \ldots, Z_p^*$ are independent of each other. Log-concavity of the $\mathcal{MMNE}$ distribution in the univariate case has been established in Proposition 3.1 of \cite{Negarestani et al. (2019)}, and the PDF of the univariate normal distribution is also known to be log-concave. Hence, the result.
\end{proof}
As shown in Section \ref{sec2}, to compute the mode of the $\mathcal{MMNE}$ distribution, it is sufficient to obtain the mode of its canonical form and then apply Theorem \ref{Thm5-mode}.
To do so, we must calculate the value of the mode in the univariate case. Existence and uniqueness of the mode (log-concavity) of the $\mathcal{MMNE}$ distribution in the univariate case have been discussed in Proposition 3.1 of \cite{Negarestani et al. (2019)}.
For this purpose, we recall the density function of the univariate $\mathcal{MMNE}$ distribution (given
in \cite{Negarestani et al. (2019)}) as
\begin{eqnarray*}
f_{Z_1}(z;\xi,\omega^2,\lambda)=\frac{\sqrt{1+\lambda^2}}{\omega|\lambda|}e^{-\frac{\sqrt{1+\lambda^2}}{\lambda}z+\frac{1}{2\lambda^2}} \Phi\left(\frac{\lambda\sqrt{1+\lambda^2}z-1}{|\lambda|}\right),
\end{eqnarray*}
where $z={(y-\xi)}/{\omega}$, $\lambda={\delta}/{\sqrt{1-\delta^2}}\neq0$, $y\in \mathbb{R}$, $\xi\in \mathbb{R}$ is a location parameter and $\omega>0$ is a scale parameter. It is denoted by $\mathcal{MMNE}_1(\xi,\omega^2,\lambda)$. For obtaining the mode of $\mathcal{MMNE}_1$, based on Theorem \ref{Thm5-mode}, we need to solve the equation
${\partial f_{Z_1^*}\left(z;0,1,\lambda_*\right)}/{\partial z}=0$, where $\lambda_*={\delta_*}/{\sqrt{1-{\delta_*}^2}}$. The solution needs to be obtained numerically.
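Since the density is log-concave, one simple way to do this is to maximize the log-density with a scalar optimizer; the following is a minimal sketch in our own code, in which the search interval is an arbitrary choice.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def mmne1_mode(delta_star):
    """Mode m_0^* of the canonical MMNE_1(0, 1, delta_*) distribution."""
    lam = delta_star / np.sqrt(1.0 - delta_star ** 2)      # lambda_* above
    s = np.sqrt(1.0 + lam ** 2)
    def neg_logpdf(z):
        return -(np.log(s / abs(lam)) - s / lam * z + 1.0 / (2.0 * lam ** 2)
                 + norm.logcdf((lam * s * z - 1.0) / abs(lam)))
    return minimize_scalar(neg_logpdf, bounds=(-10.0, 10.0), method='bounded').x

print(mmne1_mode(0.7))     # mode for delta_* = 0.7
\end{verbatim}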
\section{Multivariate Measures of Skewness}\label{measures-skewness}
The skewed shape of a multivariate distribution is usually captured by multivariate skewness measures.
Skewness measures the asymmetry of a distribution about its mean; a value far from zero indicates stronger
asymmetry of the underlying distribution than a value close to zero.
\begin{table}[htb!]
\scriptsize
\center
\caption{Multivariate measures of skewness for the $\mathcal{MMN}$ family.}
\label{skewformula}
\begin{tabular}{ll}
\hline\\[-0.2cm]
Mardia \cite{Mardia (1970)} \& Malkovich and Afifi \cite{Malkovich and Afifi (1973)}& $\beta_{1,p}=(\gamma_1^*)^2$\\
Srivastava \cite{Srivastava (1984)}& $\beta_{1p}^2=\frac{1}{p}\sum_{i=1}^{p} \left\{\frac{{\rm E}[{\boldsymbol{\gamma}}_i^\top({\mathbf{Y}}-{\boldsymbol{\mu}})]^3}{\lambda_i^{3/2}}\right\}^2$\\
M\'{o}ri-Rohatgi-Sz\'{e}kely \cite{Mori et al. (1993)}& ${\mathbf{s}}=\sum_{i=1}^{p}{\rm E}\left(Z_i^2{\mathbf{Z}}\right) =\left(\sum_{i=1}^{p}{\rm E}\left(Z_i^2 Z_1\right), \ldots,\sum_{i=1}^{p} {\rm E}\left(Z_i^2 Z_p\right) \right)^\top$\\\\
Kollo \cite{Kollo (2008)}& ${\mathbf{b}}={\rm E}\left(\sum_{i,j}^{p}(Z_i Z_j){\mathbf{Z}}\right)=\left(\sum_{i,j}^{p}E\left[(Z_i Z_j) Z_1\right], \ldots,\sum_{i,j}^{p}{\rm E}\left[(Z_i Z_j) Z_p\right]\right)^\top$\\\\
Balakrishnan-Brito-Quiroz \cite{Balakrishnan et al. (2007)}& $\mathbf{T}=\int_{{\phi}_p} {\mathbf{u}}c_1({\mathbf{u}})d \lambda({\mathbf{u}})$, $Q^*={\mathbf{T}}^\top {\mathbf{\Sigma}}_{\mathbf{Z}}^{-1} {\mathbf{T}}$\\
& The elements of ${\mathbf{T}}$ are $T_r=\frac{3}{p(p+2)} {\rm E}\left(Z_r^3\right)+ 3 \sum_{i\neq r} \frac{1}{p(p+2)} {\rm E}\left(Z_i^2 Z_r\right)$ \\[0.1cm]
Isogai \cite{Isogai (1982)} & $s_I=\frac{\left[{\mathbf{\delta}}_* {\rm E}(U)-m_0^* \right]^2}{1+{\mathbf{\delta}}_*^2\left[\textrm{var}(U)-1\right]}$, $s_C=\left({\rm E}(U)-\frac{m_0^*}{{\mathbf{\delta}}_*}\right){\boldsymbol{\delta}}$\\[0.1cm]
\hline
\end{tabular}
\end{table}
The best-known scalar function of the vectorial measure of skewness proposed by \cite{Mori et al. (1993)} is its squared norm. Its sampling properties have been thoroughly discussed by \cite{Henze (1997)} and have been implemented in the \textsf{R} package \texttt{MultiSkew}. \cite{Franceschini and Loperfido (2019)} discuss the usage of \texttt{MultiSkew} and briefly review the literature related to the same squared norm. Also, the skewness measure in \cite{Malkovich and Afifi (1973)} has become a useful tool in projection pursuit (\cite{Loperfido (2018)}).
In this work, multivariate measures of skewness by Mardia \cite{Mardia (1970)}, Malkovich and Afifi \cite{Malkovich and Afifi (1973)},
Srivastava \cite{Srivastava (1984)}, M\'{o}ri {\rm et al.} \cite{Mori et al. (1993)}, Kollo \cite{Kollo (2008)}, Balakrishnan {\rm et al.} \cite{Balakrishnan et al. (2007)} and Isogai \cite{Isogai (1982)} are studied for the $\mathcal{MMN}$ family. Table \ref{skewformula} presents these measures for the $\mathcal{MMN}$ family of distributions. The relevant derivations are given in Appendix B.
In Table \ref{skewformula}, $\gamma_1^*$ is the skewness of $Z_1^*\sim \mathcal{MMN}_1(0,1,{\mathbf{\delta}}_*; H)$ in the canonical form.
The Srivastava measure uses the principal components ${\mathbf{F}}={\mathbf{\Gamma}} {\mathbf{Y}}$, where ${\mathbf{\Gamma}}=({\boldsymbol{\gamma}}_1, \ldots, {\boldsymbol{\gamma}}_p)$ is the matrix of eigenvectors of the covariance matrix ${\mathbf{\Delta}}$, that is, an orthogonal matrix such that ${\mathbf{\Gamma}}^\top{\mathbf{\Delta}}{\mathbf{\Gamma}}=\mathbf{\Lambda}$, where $\mathbf{\Lambda}=\textrm{diag}(\lambda_1, \ldots, \lambda_p)$ is the diagonal matrix of the corresponding eigenvalues. Here, ${\mathbf{Z}}={\mathbf{\Delta}}^{-1/2}({\mathbf{Y}}-{\boldsymbol{\mu}})=(Z_1, \ldots, Z_p)^\top$ has the $\mathcal{MMN}_p({\boldsymbol{\xi}}_{\mathbf{Z}},{\mathbf{\Omega}}_{\mathbf{Z}},{\boldsymbol{\delta}}_{\mathbf{Z}}; H)$ distribution, with its parameters given by
${\boldsymbol{\xi}}_{\mathbf{Z}}={\mathbf{\Delta}}^{-1/2}({\boldsymbol{\xi}}-{\boldsymbol{\mu}})$,
${\mathbf{\Omega}}_{\mathbf{Z}}={\mathbf{\Delta}}^{-1/2}{\mathbf{\Omega}} {\mathbf{\Delta}}^{-1/2}$,
${\boldsymbol{\delta}}_{\mathbf{Z}}={\boldsymbol{\omega}}_{\mathbf{Z}}^{-1} {\mathbf{\Delta}}^{-1/2} {\boldsymbol{\omega}}{\boldsymbol{\delta}}$, and ${\boldsymbol{\omega}}_{\mathbf{Z}}=( {\mathbf{\Omega}}_{\mathbf{Z}}\odot \mathbf{I}_p)^{1/2}$.
Also, $m_0^*$ is the mode of the scalar $\mathcal{MMN}$ distribution in the canonical form.
From Table \ref{skewformula}, and using the moments in (\ref{M1Yexp})-(\ref{M4Yexp}), we can obtain different measures of skewness for the $ \mathcal{MMNE}_p ({\boldsymbol{\xi}}, {\mathbf{\Omega}}, {\boldsymbol{\delta}})$ distribution as follows:
\begin{itemize}
\item \textit{\textbf{Mardia and Malkovich-Afifi indices}}:
$\beta_{1,p}=\beta_{1}^*=4\delta_*^6$ (a numerical sketch of this computation is given after this list);
\item\textbf{\textit{ Srivastava index}}:
$
\beta_{1p}^2=\frac{1}{p}\sum_{i=1}^{p} \left\{\frac{{\rm E}[{\boldsymbol{\gamma}}_i^\top({\mathbf{Y}}-{\boldsymbol{\mu}})]^3}{\lambda_i^{3/2}}\right\}^2,
$
where ${\boldsymbol{\gamma}}_i$ and $\lambda_i$ are eigenvectors and corresponding eigenvalues for covariance matrix $\textrm{var}\left({\mathbf{Y}}\right)={\mathbf{\Omega}}$, when ${\mathbf{Y}}\sim \mathcal{MMNE}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}})$;
\item \textit{\textbf{M\'{o}ri-Rohatgi-Sz\'{e}kely index}}:
If ${\mathbf{Y}}\sim \mathcal{MMNE}_p({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}})$, then for the standardized variable ${\mathbf{Z}}={\mathbf{\Omega}}^{-1/2}({\mathbf{Y}}-{\boldsymbol{\mu}})$, and with ${\mathbf{A}}={\mathbf{\Omega}}^{-1/2}$, we have (see Appendix A)
\begin{eqnarray}\label{noncentralzzEE}
M_3({\mathbf{Z}})&=&{\rm E}\left[{\mathbf{A}}^\top({\mathbf{Y}}-{\boldsymbol{\mu}})\right]^3= \left({\mathbf{A}}^\top \otimes {\mathbf{A}}^\top\right) M_3({\mathbf{Y}}){\mathbf{A}}-\left[{\mathbf{A}}^\top M_2({\mathbf{Y}}) {\mathbf{A}} \right]\otimes \left[{\mathbf{A}}^\top {\rm E}({\mathbf{Y}})\right]
-{\mathbf{A}}^\top {\rm E}({\mathbf{Y}})\otimes \left[{\mathbf{A}}^\top M_2({\mathbf{Y}}) {\mathbf{A}} \right]\nonumber\\
&&-\mathrm{vec}\left({\mathbf{A}}^\top M_2({\mathbf{Y}}) {\mathbf{A}}\right){\rm E}({\mathbf{Y}})^\top {\mathbf{A}}
+2\left[{\mathbf{A}}^\top {\rm E}({\mathbf{Y}}){\rm E}({\mathbf{Y}})^\top {\mathbf{A}}\right]\otimes\left[{\mathbf{A}}^\top {\rm E}({\mathbf{Y}})\right].
\end{eqnarray}
All the quantities in the M\'{o}ri-Rohatgi-Sz\'{e}kely measure of skewness are specific non-central moments of third order of ${\mathbf{Z}}$, where ${\mathbf{Z}}={\mathbf{\Omega}}^{-1/2}({\mathbf{Y}}-{\boldsymbol{\mu}})\sim \mathcal{MMNE}_p({\boldsymbol{\xi}}_{\mathbf{Z}},{\mathbf{\Omega}}_{\mathbf{Z}},{\boldsymbol{\delta}}_{\mathbf{Z}})$, such that
${\boldsymbol{\xi}}_{\mathbf{Z}}=-{\mathbf{\Omega}}^{-1/2}{\boldsymbol{\omega}}{\boldsymbol{\delta}}$,
${\mathbf{\Omega}}_{\mathbf{Z}}=\textbf{I}_p$ and
${\boldsymbol{\delta}}_{\mathbf{Z}}= {\mathbf{\Omega}}^{-1/2} {\boldsymbol{\omega}}{\boldsymbol{\delta}}$;
\item \textbf{\textit{Kollo index}}:
To obtain Kollo's measure, we use the elements of
non-central moments of third order of ${\mathbf{Z}}$;
\item \textit{\textbf{Balakrishnan-Brito-Quiroz index}}:
Upon substituting $E(U^m)=m!$ for $m=1, 2, \ldots$, the elements of ${\mathbf{T}}$ in Table \ref{skewformula}, for $r=1, 2, \ldots, p$, are
$
{\mathbf{T}}_r=\frac{3}{p(p+2)} \left({\mathbf{M}}_3^{\mathbf{Z}}[(r - 1)p + r, r] +\sum_{i\neq r} {\mathbf{M}}_3^{\mathbf{Z}}[(i - 1)p + i, r]\right),
$
where ${\mathbf{M}}_3^{\mathbf{Z}}[., .]$ denotes the elements of the matrix ${\mathbf{M}}_3^{\mathbf{Z}}$ of third moments of the $\mathcal{MMNE}_p({\boldsymbol{\xi}}_{\mathbf{Z}},{\mathbf{\Omega}}_{\mathbf{Z}},{\boldsymbol{\delta}}_{\mathbf{Z}})$ distribution;
we can then compute ${\mathbf{Q}}={\mathbf{T}}^\top {\mathbf{\Sigma}}_{\mathbf{T}}^{-1}{\mathbf{T}}$ and ${\mathbf{Q}}^*={\mathbf{T}}^\top {\mathbf{T}}$;
\item\textbf{\textit{ Isogai index}}:
By substituting ${\rm E}(U)=\textrm{var}(U)=1$ in the Isogai measure of skewness, we have
$S_I=\left({\mathbf{\delta}}_*-m_0^* \right)^2$,
where $m_0^*$
is the mode of the $\mathcal{MMNE}_1$ distribution in the canonical form. This index is location and scale invariant.
The vectorial measure, given by \cite{Balakrishnan and Scarpa (2012)}, is $S_C=\left(1-\frac{m_0^*}{{\mathbf{\delta}}_*}\right){\boldsymbol{\delta}}$.
Therefore, the direction of ${\boldsymbol{\delta}}$ can be regarded as a measure of vectorial skewness for the $\mathcal{MMNE}$ distribution.
\end{itemize}
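As an illustration of the first item in the above list, the following minimal sketch (our own code) computes $\delta_*$ and the Mardia/Malkovich-Afifi index $4\delta_*^6$ directly from $({\mathbf{\Omega}},{\boldsymbol{\delta}})$; for Case 2 of Table \ref{tab.skew2var} it returns approximately $3.889$, in agreement with the tabulated value.
\begin{verbatim}
import numpy as np

def mardia_index_mmne(Omega, delta):
    """Mardia / Malkovich-Afifi index 4*delta_*^6 of MMNE_p(xi, Omega, delta)."""
    omega_inv = np.diag(1.0 / np.sqrt(np.diag(Omega)))
    Omegabar = omega_inv @ Omega @ omega_inv                 # correlation matrix
    delta_star2 = delta @ np.linalg.solve(Omegabar, delta)   # delta' Omegabar^{-1} delta
    return 4.0 * delta_star2 ** 3

# Case 2 of the bivariate skewness table: Omega = diag(1, 2.5), delta = (0.200, 0.975)
print(mardia_index_mmne(np.diag([1.0, 2.5]), np.array([0.200, 0.975])))  # about 3.889
\end{verbatim}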
\begin{table}[htb!]
\scriptsize
\center
\caption{The average computational time (Atime), the average values (Mean), the corresponding standard
deviations (Std.), Bias and MSE of the EM estimates over 1000 samples from the $\mathcal{MMNE}$ model in Subsection \ref{Simul-EST}.}
\label{tab-sim}
\begin{tabular}{cclccccccccc}
\hline
$n$& Atime&& $\xi_1$ & $\xi_2$& $\xi_3$&$\delta_1$&$\delta_2$&$\delta_3$&$\sigma_{11}$&$\sigma_{22}$&$\sigma_{33}$\\
\hline
$50$&0.3265&Mean&5.0804 &10.1518 &15.1735 &0.1663 &0.4940 &0.2259& 0.3947 &0.5931& 0.9800 \\
&&Std.& 0.2625 &0.3163 &0.4202 &0.3870 &0.3912 &0.4041 &0.0846 &0.1512 &0.2003 \\
&&Bias&0.0804& 0.1518& 0.1735& -0.1337 &-0.2060 &-0.1741& -0.0053 &-0.0069 &-0.0200 \\
&&MSE&0.0753 &0.1230 &0.2065 &0.1675 &0.1953 &0.1934 &0.0072 &0.0229 &0.0405 \\
$100$&0.4824&Mean&5.0409 &10.0554 &15.0637 &0.2360 &0.6221 &0.3346 &0.3979 &0.5973 &0.9871 \\
&&Std.& 0.1755 &0.2028 &0.2814 &0.2581 &0.2453 &0.2651 &0.0584 &0.1110 &0.1492 \\
&&Bias&0.0409 &0.0554 &0.0637 &-0.0640 &-0.0779 &-0.0654 &-0.0021& -0.0027& -0.0129 \\
&&MSE&0.0324 &0.0441 &0.0832 &0.0706 &0.0662 &0.0745 &0.0034 &0.0123 &0.0224 \\
$500$&1.8063&Mean&5.0006& 10.0036& 15.0024& 0.3001 &0.6973 &0.3964 &0.3993 &0.5995 &1.0004 \\
&&Std.& 0.0508 &0.0534 &0.0733 &0.0686 &0.0492 &0.0587 &0.0268 &0.0483 &0.0644 \\
&&Bias&0.0006 &0.0036 &0.0025 &0.0006 & -0.0027 &-0.0036 &0.0007 & 0.0005 & 0.0004\\
&&MSE&0.0026 &0.0029 &0.0054 &0.0047 &0.0024 &0.0035 &0.0007 &0.0023 &0.0041\\
$1000$&4.0388&Mean&5.0004 &9.9996 &15.0025 &0.3006 &0.6995 &0.3971 &0.3999 &0.5997 &0.9981 \\
&&Std.& 0.0345 &0.0355 &0.0529 &0.0435 &0.0328 &0.0424 &0.0180 &0.0336 &0.0447 \\
&&Bias&0.0004 &-0.0004 &0.0024 &0.0001 &-0.0005 &-0.0029 &-0.0001 &-0.0003 &-0.0002 \\
&&MSE&0.0012 &0.0013 &0.0028 &0.0019 &0.0011 &0.0018 &0.0003 &0.0011 &0.0020\\
\hline
\end{tabular}
\end{table}
\section{Simulation Study}
\subsection{Model Fitting}\label{Simul-EST}
This subsection presents the results of a Monte Carlo simulation
study carried out to examine the performance of the proposed estimation method for the $\mathcal{MMNE}$ distribution in the trivariate case.
We evaluate the estimates in terms of Bias and MSE (mean squared error). The results are based on $1000$ simulated samples from the $\mathcal{MMNE}$ distribution with parameters
$ {\boldsymbol{\xi}}=(5, 10, 15)^\top$, ${\mathbf{\Omega}}=\textrm{diag}(0.4, 0.6, 1.0)$, ${\boldsymbol{\delta}}=(0.3, 0.7, 0.4)^\top$ for different sample sizes $n\in\{50, 100, 500, 1000\}$. We computed the Bias and the MSE as
$
\textrm{Bias}=\frac{1}{1000}\sum_{j=1}^{1000} (\widehat{\theta}_j-\theta)$ and
$\textrm{MSE}=\frac{1}{1000}\sum_{j=1}^{1000} (\widehat{\theta}_j-\theta)^2$,
where $\theta$ is the true value of each parameter (the elements of
${\boldsymbol{\xi}}=(\xi_1,\xi_2,\xi_3)^\top$,
${\boldsymbol{\delta}}=(\delta_1,\delta_2,\delta_3)^\top$ and
${\mathbf{\Omega}}=\textrm{diag}(\sigma_{11},\sigma_{22},\sigma_{33})$)
and $\widehat{\theta}_j$ is the estimate from the $j$-th simulated sample.
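For reference, the data-generating step of this simulation can be sketched as follows (our own code); each sample is drawn from the stochastic representation ${\mathbf{Y}}={\boldsymbol{\xi}}+{\boldsymbol{\omega}}{\boldsymbol{\delta}}U+{\mathbf{Z}}$ with $U$ standard exponential, and the EM estimates are then obtained as in Section \ref{Est-sec}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def rmmne(n, xi, Omega, delta, rng=rng):
    """Simulate n observations from MMNE_p(xi, Omega, delta)."""
    p = len(xi)
    omega = np.diag(np.sqrt(np.diag(Omega)))
    Sigma_Y = Omega - omega @ np.outer(delta, delta) @ omega
    U = rng.exponential(1.0, size=n)
    Z = rng.multivariate_normal(np.zeros(p), Sigma_Y, size=n)
    return xi + U[:, None] * (omega @ delta) + Z

xi = np.array([5.0, 10.0, 15.0])
Omega = np.diag([0.4, 0.6, 1.0])
delta = np.array([0.3, 0.7, 0.4])
Y = rmmne(500, xi, Omega, delta)    # one replication with n = 500
# Bias and MSE are then the averages of (theta_hat - theta) and
# (theta_hat - theta)^2 over the 1000 replications.
\end{verbatim}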
Table \ref{tab-sim} presents the average computational time in seconds required for the convergence of the EM algorithm (Atime), the average values (Mean), the corresponding
standard deviations (Std.), Bias and MSE of the EM estimates of all the parameters of the $\mathcal{MMNE}$ model over the 1000 simulated samples for each sample size.
It can be observed from Table \ref{tab-sim}
that the Bias and MSE decrease as $n$ increases, revealing the asymptotic unbiasedness and
consistency of the ML estimates obtained through the EM algorithm.
Note that the EM algorithm presented in this work, for the $\mathcal{MMNE}$ distribution, leads to closed-form expressions, and so the computational time required for the convergence of the EM estimates of the parameters is quite short.
\begin{table}[htb!]
\scriptsize
\center
\caption{Skewness measures of Section \ref{measures-skewness} for some bivariate $\mathcal{MMNE}$ distributions with ${\boldsymbol{\xi}}=\boldsymbol{0}$ and different choices of the parameters ${\boldsymbol{\Omega}}$ and ${\boldsymbol{\delta}}$.}
\label{tab.skew2var}
\begin{tabular}{cclcccccccc}
\hline
$\#$&\multicolumn{2}{c}{Parameters}&$\beta_{1,p}$&$\beta_{1p}^2$& $s$ & $b$& $Q^*$& $T$ & $s_I$& $s_C$\\
\hline\\[-0.15cm]
$1$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 1 \\ 1&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.750\\ 0.985 \\ \end{bmatrix}$&
3.966& 1.975 &
$\begin{bmatrix} 0.825 \\1.812 \\ \end{bmatrix}$
&
$\begin{bmatrix} 1.448 \\3.179\\ \end{bmatrix}$
&
0.558
&
$\begin{bmatrix} 0.310\\ 0.680\\ \end{bmatrix}$
&
0.788
&
$\begin{bmatrix} 0.667\\ 0.876 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$2$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0 \\ 0&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.200\\ 0.975\\ \end{bmatrix}$&
3.889& 1.718 &
$\begin{bmatrix} 0.396\\ 1.932 \\ \end{bmatrix}$
&
$\begin{bmatrix} 0.552\\ 2.692\\ \end{bmatrix}$
&
0.547
&
$\begin{bmatrix} 0.149\\ 0.724\\ \end{bmatrix}$
&
0.673
&
$\begin{bmatrix} 0.165\\ 0.804 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$3$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0 \\ 0&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.000\\ 0.995\\ \end{bmatrix}$&
3.881& 1.941 &
$\begin{bmatrix} 0.000 \\1.970 \\ \end{bmatrix}$
&
$\begin{bmatrix} 0.000\\ 1.970\\ \end{bmatrix}$
&
0.546
&
$\begin{bmatrix} 0.000\\ 0.739\\ \end{bmatrix}$
&
0.666
&
$\begin{bmatrix} 0.000\\ 0.816 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$4$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 1 \\ 1&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.650\\ 0.995\\ \end{bmatrix}$&
3.890& 1.774 &
$\begin{bmatrix} 0.562\\ 1.890 \\ \end{bmatrix}$
&
$\begin{bmatrix} 0.870\\ 2.924\\ \end{bmatrix}$
&
0.547
&
$\begin{bmatrix} 0.211 \\0.709\\ \end{bmatrix}$
&
0.675
&
$\begin{bmatrix} 0.536\\ 0.821 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$5$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 1 \\ 1&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.850\\ 0.900\\ \end{bmatrix}$&
3.337 &1.511 &
$\begin{bmatrix} 1.099\\ 1.460 \\ \end{bmatrix}$
&
$\begin{bmatrix} 2.154\\ 2.862 \\ \end{bmatrix}$
&
0.469
&
$\begin{bmatrix} 0.412\\ 0.547\\ \end{bmatrix}$
&
0.412
&
$\begin{bmatrix} 0.562\\ 0.595 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$6$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0 \\ 0&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.550\\-0.800 \\ \end{bmatrix}$&
3.349& 0.580 &
$\begin{bmatrix} 1.037\\ -1.508 \\ \end{bmatrix}$
&
$\begin{bmatrix} 0.069\\ -0.100 \\ \end{bmatrix}$
&
0.471
&
$\begin{bmatrix} 0.389 \\-0.566 \\ \end{bmatrix}$
&
0.415
&
$\begin{bmatrix} 0.365\\ -0.531 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$7$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 1 \\1&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.900\\0.775 \\ \end{bmatrix}$&
2.731& 0.843 &
$\begin{bmatrix} 1.254\\ 1.077 \\ \end{bmatrix}$
&
$\begin{bmatrix} 2.493\\ 2.141\\ \end{bmatrix}$
&
0.384
&
$\begin{bmatrix} 0.470\\ 0.404\\ \end{bmatrix}$
&
0.286
&
$\begin{bmatrix} 0.513\\ 0.442 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$8$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0 \\0&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.800\\0.400 \\ \end{bmatrix}$&
2.048& 0.532 &
$\begin{bmatrix} 1.280\\ 0.640 \\ \end{bmatrix}$
&
$\begin{bmatrix} 2.304\\ 1.152\\ \end{bmatrix}$
&
0.288
&
$\begin{bmatrix} 0.480\\ 0.240\\ \end{bmatrix}$
&
0.193
&
$\begin{bmatrix} 0.393\\ 0.196 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$9$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 1 \\1&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.750\\ 0.150 \\ \end{bmatrix}$&
1.607& 0.521 &
$\begin{bmatrix} 1.263\\ -0.110 \\ \end{bmatrix}$
&
$\begin{bmatrix} 1.045\\ -0.091\\ \end{bmatrix}$
&
0.226
&
$\begin{bmatrix} 0.473\\ -0.041\\ \end{bmatrix}$
&
0.145
&
$\begin{bmatrix} 0.333 \\ 0.066 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$10$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 1 \\1&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} -0.750\\ -0.150 \\ \end{bmatrix}$&
1.607 & 0.521 &
$\begin{bmatrix} -1.263\\ 0.110 \\ \end{bmatrix}$
&
$\begin{bmatrix} -1.045\\ 0.091\\ \end{bmatrix}$
&
0.226
&
$\begin{bmatrix} -0.474 \\ 0.041\\ \end{bmatrix}$
&
0.145
&
$\begin{bmatrix} -0.333\\ -0.067 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$11$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0 \\0&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.700\\ 0.000 \\ \end{bmatrix}$&
0.471& 0.235 &
$\begin{bmatrix} 0.686\\ 0.000 \\ \end{bmatrix}$
&
$\begin{bmatrix} 0.686\\ 0.000\\ \end{bmatrix}$
&
0.066
&
$\begin{bmatrix} 0.257\\ 0.000\\ \end{bmatrix}$
&
0.043
&
$\begin{bmatrix} 0.208\\ 0.000 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$12$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& -1 \\-1&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.000\\ 0.000 \\ \end{bmatrix}$&
0.000& 0.000 &
$\begin{bmatrix} 0.000\\ 0.000 \\ \end{bmatrix}$
&
$\begin{bmatrix} 0.000\\ 0.000\\ \end{bmatrix}$
&
0.000
&
$\begin{bmatrix} 0.000\\ 0.000\\ \end{bmatrix}$
&
0.000
&
$\begin{bmatrix} 0.000\\ 0.000 \\ \end{bmatrix}$
\\[0.2cm]
\hline
\end{tabular}
\end{table}
\subsection{Assessment of Skewness}
To study and compare different multivariate measures of skewness for the $\mathcal{MMN}$ distributions,
we consider the $\mathcal{MMNE}$ distribution. We compute the values of all the skewness
measures for different choices of the parameters of the bivariate and trivariate $\mathcal{MMNE}$ distributions. Tables \ref{tab.skew2var} and \ref{tab.skew3var} present the values of all the skewness measures. It should be noted that all the measures are location and scale invariant, a desirable property indeed for any measure of skewness. For similar work on skewness comparisons for the $\mathcal{SN}$ distribution, one may refer to \cite{Balakrishnan and Scarpa (2012)}, and to \cite{Kim and Zhao (2018)} for corresponding work on scale mixtures of $\mathcal{SN}$ distributions.
From Table \ref{tab.skew2var}, we find that, in all cases, among the scalar measures of skewness, Mardia's measure has the highest value and Srivastava's measure is the next largest.
Just as in the case of the $\mathcal{SN}$ distribution, for the bivariate $\mathcal{MMNE}$ distribution, the vectorial measures yield very similar results in terms of skewness directions, especially when the distribution is highly asymmetric \cite{Balakrishnan and Scarpa (2012)}.
It is important to note that Cases 9 and 10 correspond to reflected distributions; in these cases, the scalar measures are the same, while the vectorial ones are reflected as well.
Table \ref{tab.skew3var} presents the values of all the measures for the trivariate $\mathcal{MMNE}$ distribution. In this case, the differences among the
measures become much more pronounced. From Table \ref{tab.skew3var}, we find that, in all cases, among the scalar measures of skewness, Mardia's measure has the highest value.
Of course, the magnitude of the measures alone does not say much; one also has to assess how significant the observed values are.
\begin{table}[htb!]
\scriptsize
\center
\caption{
Skewness measures in Section \ref{measures-skewness}, for some trivariate $\mathcal{MMNE}$ distributions with location parameter ${\boldsymbol{\xi}}=\boldsymbol{0}$ and different choices of the parameters ${\boldsymbol{\Omega}}$ and ${\boldsymbol{\delta}}$.}
\label{tab.skew3var}
\begin{tabular}{cclcccccccc}
\hline
$\#$&\multicolumn{2}{c}{Parameters}&$\beta_{1,p}$&$\beta_{1p}^2$& $s$ & $b$& $Q^*$& $T$ & $s_I$& $s_C$\\
\hline\\[-0.15cm]
$1$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0& 0\\ 0& 2.5&0\\ 0& 0& 2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.10\\ 0.70\\ 0.70 \\ \end{bmatrix}$&
3.881& 0.314 &
$\begin{bmatrix} 0.198 \\1.386\\ 1.386 \\ \end{bmatrix}$
&
$\begin{bmatrix} 0.450 \\3.150\\ 3.150 \\ \end{bmatrix}$
&
0.155
&
$\begin{bmatrix} 0.040\\ 0.277\\ 0.277\\ \end{bmatrix}$
&
0.666
&
$\begin{bmatrix} 0.082\\ 0.574\\ 0.574 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$2$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 1& 1\\ 1& 2.5&1\\ 1&1& 10 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.75\\0.75\\0.65\\ \end{bmatrix}$&
2.712& 0.235 &
$\begin{bmatrix} 0.752\\ 1.044\\ 1.028 \\ \end{bmatrix}$
&
$\begin{bmatrix} 2.211\\ 3.070 \\ 3.023\\ \end{bmatrix}$
&
0.108
&
$\begin{bmatrix} 0.150 \\ 0.209 \\ 0.206\\ \end{bmatrix}$
&
0.283
&
$\begin{bmatrix} 0.426\\ 0.426 \\ 0.369 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$3$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0& 0\\ 0& 2.5&0\\ 0& 0& 5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix}0.995\\ 0.00\\0.00\\ \end{bmatrix}$&
3.881& 1.294 &
$\begin{bmatrix} 1.970\\ 0.000\\ 0.000 \\ \end{bmatrix}$
&
$\begin{bmatrix} 1.970\\ 0.000\\ 0.000 \\ \end{bmatrix}$
&
0.155
&
$\begin{bmatrix} 0.394 \\0.000\\ 0.000\\ \end{bmatrix}$
&
0.666
&
$\begin{bmatrix} 0.816\\ 0.000\\ 0.000 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$4$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0& 0\\ 0& 1&0\\ 0& 0& 2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.40\\-0.60\\-0.60\\ \end{bmatrix}$&
2.726& 0.130 &
$\begin{bmatrix} 0.704\\ -1.056 \\ -1.056 \\ \end{bmatrix}$
&
$\begin{bmatrix} 0.512\\ -0.768\\ -0.768\\ \end{bmatrix}$
&
0.109
&
$\begin{bmatrix} 0.141\\ -0.211\\ -0.211 \\ \end{bmatrix}$
&
0.286
&
$\begin{bmatrix} 0.228\\ -0.342\\ -0.342 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$5$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 1& 1\\ 1& 2.5&1\\ 1&1& 10 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.55\\0.05\\-0.30\\ \end{bmatrix}$&
1.372 &0.223 &
$\begin{bmatrix} 1.055\\ -0.140\\ -0.490\\ \end{bmatrix}$
&
$\begin{bmatrix} 0.139\\ -0.018\\ -0.064 \\ \end{bmatrix}$
&
0.055
&
$\begin{bmatrix} 0.211\\ -0.028\\ -0.098\\ \end{bmatrix}$
&
0.122
&
$\begin{bmatrix} 0.230\\ 0.021\\ -0.125 \\ \end{bmatrix}$
\\[0.2cm]
\\[-0.15cm]
$6$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0& 0\\ 0& 2.5&0\\ 0&0& 1 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix} 0.75\\0.35\\0.35 \\ \end{bmatrix}$&
2.106& 0.242 &
$\begin{bmatrix} 1.211 \\0.565 \\0.565\\ \end{bmatrix}$
&
$\begin{bmatrix} 3.154\\ 1.472\\ 1.472 \\ \end{bmatrix}$
&
0.084
&
$\begin{bmatrix} 0.242 \\0.113\\ 0.113\\ \end{bmatrix}$
&
0.200
&
$\begin{bmatrix} 0.373 \\0.174 \\0.174 \\ \end{bmatrix}$
\\[0.2cm]
\hline
\end{tabular}
\end{table}
\begin{table}[htb!]
\scriptsize\center
\caption{Upper and lower $2.5\%$ critical values based on the test statistics $\beta_{1,p}$, $\beta_{1p}^2$, $s_{{\rm sum}}$, $s_{{\rm max}}$, $b_{\rm sum}$, $b_{\rm max}$, $Q^*$, $\boldsymbol{T}_{{\rm sum}}$, $\boldsymbol{T}_{{\rm max}}$, $s_I$, ${s_C}_{{\rm sum}}$, and ${s_C}_{{\rm max}}$, in Subsection \ref{Power}, obtained from $10000$ simulated samples of standard multivariate normal distribution, for sample size $n=100$ and dimensions $p\in\{2, \ldots, 8\}$. }
\label{25Percentiles}
\begin{tabular}{c|crrrrrrr}
\hline
Test Statistics& Percentile &$p=2~$&$p=3~$&$p=4~$&$p=5~$ & $p=6~$&$p=7~$&$p=8~$\\
\hline
$\beta_{1,p}$& 0.025 &0.0000 &0.0000&0.0685&0.1232&0.2304& 0.3170 &0.4388 \\
& 0.975 &1.1816&1.4552&2.1717 &2.8814&3.5455& 3.8266 &3.9085 \\
$\beta_{1p}^2$& 0.025 &0.0000&0.0000&0.0025 &0.0038&0.0041&0.0035 & 0.0042 \\
& 0.975 &0.4474&0.2918&0.2827 &0.2906&0.3167& 0.2311 &0.2654 \\
$s_{max}$ & 0.025 &-0.3562& -0.1626&-0.0975&-0.0244&0.0160&0.1061 &0.1452 \\
& 0.975 & 0.8894&0.9881&1.1032&1.2195&1.2737& 1.3150 & 1.4936 \\
$s_{sum}$& 0.025 &-0.9945&-1.3716&-1.6814& -1.9739&-2.2822&-2.5103 &-2.7708 \\
& 0.975 &1.0052& 1.3275&1.4690&1.9442&2.1993&2.5009 &3.1478 \\
$b_{max}$& 0.025 & -0.6804&-0.3989&-0.2855&-0.0707&0.0000&0.0002 &0.0018 \\
& 0.975 & 1.0957&1.6014&1.7865 &2.4971&2.6128&3.1300 &4.0002 \\
$b_{sum}$& 0.025 &-1.7299&-2.8449&-3.9202&-5.5741&-6.4435&-7.3387 &-8.0038 \\
& 0.975 &1.7621&2.8721&3.3774 & 5.6679&6.3422&7.8015 &10.2697 \\
$Q^*$& 0.025 & 0.0000&0.0000&0.0011&0.0009& 0.0009&0.0007 & 0.0006 \\
& 0.975 &0.1662&0.0582&0.0339&0.0212& 0.0138&0.0087 &0.0055 \\
$\boldsymbol{T}_{max}$ & 0.025 &-0.1336&-0.0325&-0.0122&-0.0021&0.0010&0.0051 &0.0054 \\
& 0.975 &0.3335&0.1976&0.1379&0.1045& 0.0796&0.0626 &0.0560 \\
$\boldsymbol{T}_{sum}$ & 0.025 &-0.3729&-0.2743&-0.2102&-0.1692& -0.1426&-0.1195 &-0.1039 \\
& 0.975 &0.3769&0.2655&0.1836&0.1666&0.1375&0.1191 &0.1180 \\
${s_I}$& 0.025 &0.0000& 0.0000&0.0080&0.0133&0.0229&0.0304 &0.0407 \\
& 0.975 &0.1045&0.1301&0.2077&0.3122&0.4770&0.6192 &0.6956 \\
${s_C}_{max}$& 0.025 &-0.1130&-0.0536&-0.0347&-0.0108&0.0109&0.0311 &0.0482 \\
& 0.975 &0.2710&0.2955&0.3410&0.4033& 0.4830&0.5097 &0.6410 \\
${s_C}_{sum}$& 0.025 & -0.2920&-0.4116& -0.5417&-0.6276&-0.7303&-0.8984 &-1.0657 \\
& 0.975 &0.3036&0.3932&0.4644&0.6154&0.7311&0.8275 &1.1957 \\
\hline
\end{tabular}
\end{table}
\begin{table}[htb!]
\scriptsize\center
\caption{Upper $5\%$ critical values based on the test statistics $\beta_{1,p}$, $\beta_{1p}^2$, $s_{{\rm sum}}$, $s_{{\rm max}}$, $b_{\rm sum}$, $b_{\rm max}$, $Q^*$, $\boldsymbol{T}_{{\rm sum}}$, $\boldsymbol{T}_{{\rm max}}$, $s_I$, ${s_C}_{{\rm sum}}$, and ${s_C}_{{\rm max}}$, in Subsection \ref{Power}, obtained from $10000$ simulated samples of standard multivariate normal distribution, for sample size $n=100$ and dimensions $p\in\{2, \ldots, 8\}$. }
\label{5Percentiles}
\begin{tabular}{c|rrrrrrr}
\hline
Test Statistics& $p=2~$&$p=3~$&$p=4~$&$p=5~$ & $p=6~$&$p=7~$&$p=8~$\\
\hline
$\beta_{1,p}$ &0.9325&1.2317&1.7024&2.4021&3.0324& 3.4899 & 3.8939 \\
$\beta_{1p}^2$ &0.3111&0.2135&0.1997&0.1812&0.2044& 0.1612 & 0.2039 \\
$s_{max}$ & 0.7679 &0.8177&0.9712&1.0931&1.2011& 1.1774 &1.3446 \\
$s_{sum}$& 0.8546 &1.0509&1.2424&1.6682& 1.8934& 1.9996 & 2.4041 \\
$b_{max}$& 0.9619&1.2206&1.3468& 1.9457&2.0467& 2.2711 & 2.8953 \\
$b_{sum}$& 1.3001&2.1093& 2.5046&3.9367&4.4979&5.1949 &7.1686 \\
$Q^*$& 0.1311& 0.0493&0.0266&0.0176&0.0118&0.0079 &0.0055 \\
$\boldsymbol{T}_{max}$ &0.2880 &0.1635&0.1214&0.0937&0.0751&0.0561 &0.0504 \\
$\boldsymbol{T}_{sum}$ & 0.3205 &0.2102&0.1553&0.1430&0.1183& 0.0952 &0.0902 \\
${s_I}$& 0.0824 &0.1091&0.1549& 0.2375&0.3409&0.4576 & 0.6792 \\
${s_C}_{max}$& 0.2324 &0.2468&0.2959& 0.3382&0.4069& 0.4282 &0.5509 \\
${s_C}_{sum}$& 0.2619 &0.3224&0.3757&0.5072&0.6228 &0.6621 & 0.8116 \\
\hline
\end{tabular}
\end{table}
\begin{table}[htb!]
\scriptsize
\center
\caption{ Simulated values of power for all tests based on the test statistics $\beta_{1,p}$, $\beta_{1p}^2$, $s_{{\rm sum}}$, $s_{{\rm max}}$, $b_{\rm sum}$, $b_{\rm max}$, $Q^*$, $\boldsymbol{T}_{{\rm sum}}$, $\boldsymbol{T}_{{\rm max}}$, $s_I$, ${s_C}_{{\rm sum}}$, and ${s_C}_{{\rm max}}$, in Subsection \ref{Power}, for bivariate normal distribution against $\mathcal{MMNE}$ distribution.}
\label{power-2var}
\begin{tabular}{cclcccccccccccc}
\hline
$\#$&\multicolumn{2}{c}{Parameters}&$\beta_{1,p}$&$\beta_{1p}^2$& $s_{{\rm max}}$ & $s_{\rm sum}$ & $b_{{\rm max}}$ & $b_{\rm sum}$ & $Q^*$& $\boldsymbol{T}_{{\rm max}}$ & $\boldsymbol{T}_{\rm sum}$ & $s_I$ & ${s_C}_{{\rm max}}$ & ${s_C}_{\rm sum}$\\
\hline\\[-0.2cm]
$1$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1& 0 \\ 0&2.5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix}0.1\\ 0.1 \\ \end{bmatrix}$&
0.030& 0.041& 0.034& 0.040& 0.038& 0.035& 0.030& 0.034& 0.040& 0.030& 0.030& 0.044\\[0.3cm]
$2$&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.5\\ 0.5 \\ \end{bmatrix}$&
0.283& 0.121& 0.172& 0.467& 0.492& 0.497& 0.283& 0.172& 0.467& 0.283& 0.167& 0.459\\[0.3cm]
$3$&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.1\\ 0.8 \\ \end{bmatrix}$&
0.659& 0.748& 0.711& 0.668& 0.631& 0.349& 0.659 &0.711& 0.668& 0.659& 0.690& 0.634\\[0.3cm]
$4$&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.8\\ 0.1 \\ \end{bmatrix}$&
0.682& 0.724 &0.729& 0.700& 0.661& 0.383& 0.682& 0.729& 0.700& 0.682& 0.711& 0.682\\[0.3cm]
$5$&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.8\\ 0.8 \\ \end{bmatrix}$&
0.994& 0.994& 0.994& 0.994& 0.994& 0.994& 0.994& 0.994& 0.994& 0.994& 0.994& 0.994\\[0.3cm]
$6$&& ${\boldsymbol{\delta}}=\begin{bmatrix}-0.7\\ -0.7 \\ \end{bmatrix}$&
0.982& 0.982& 0.982& 0.981& 0.981& 0.981& 0.982& 0.982& 0.981& 0.982& 0.982& 0.981\\[0.3cm]
\hline\\[-0.2cm]
$7$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1&1 \\1&2.5 \\ \end{bmatrix}$& ${\boldsymbol{\delta}}=\begin{bmatrix}0.1\\ 0.1 \\ \end{bmatrix}$&
0.032& 0.035& 0.043& 0.057& 0.060& 0.059& 0.032& 0.043& 0.057& 0.032& 0.087& 0.140 \\[0.3cm]
$8$&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.5\\ 0.5 \\ \end{bmatrix}$&
0.074& 0.079& 0.062& 0.159& 0.189& 0.186& 0.074& 0.062& 0.159& 0.074& 0.088& 0.315\\[0.3cm]
$9$&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.1\\ 0.8 \\ \end{bmatrix}$&
0.979& 0.929& 0.980& 0.668& 0.105& 0.004& 0.979& 0.980& 0.668& 0.979& 0.974& 0.956 \\[0.3cm]
$10$&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.8\\ 0.1\\ \end{bmatrix}$&
0.982& 0.978& 0.984& 0.941& 0.639& 0.084& 0.982& 0.984& 0.941& 0.982& 0.977& 0.963\\[0.3cm]
$11 $&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.8\\ -0.1\\ \end{bmatrix}$&
0.956& 0.958& 0.959& 0.816& 0.076& 0.002& 0.956& 0.959& 0.816& 0.956& 0.960& 0.938\\[0.3cm]
$12 $&& ${\boldsymbol{\delta}}=\begin{bmatrix}-0.8\\ 0.1\\ \end{bmatrix}$&
0.953& 0.955& 0.023& 0.832& 0.003& 0.005& 0.953& 0.023& 0.832& 0.953& 0.024& 0.936\\[0.3cm]
$13 $&& ${\boldsymbol{\delta}}=\begin{bmatrix}-0.8\\ -0.1\\ \end{bmatrix}$&
0.968& 0.963& 0.004& 0.921& 0.004& 0.088& 0.968& 0.004& 0.921& 0.968& 0.248& 0.944\\[0.3cm]
$14 $&& ${\boldsymbol{\delta}}=\begin{bmatrix}0.8\\ 0.8\\ \end{bmatrix}$&
0.929& 0.929& 0.815& 0.977& 0.979& 0.980& 0.929& 0.815& 0.977& 0.929& 0.918& 0.987\\[0.3cm]
\hline
\end{tabular}
\end{table}
\begin{table}[htb!]
\scriptsize
\center
\caption{ Simulated values of power for all tests based on the test statistics $\beta_{1,p}$, $\beta_{1p}^2$, $s_{{\rm sum}}$, $s_{{\rm max}}$, $b_{\rm sum}$, $b_{\rm max}$, $Q^*$, $\boldsymbol{T}_{{\rm sum}}$, $\boldsymbol{T}_{{\rm max}}$, $s_I$, ${s_C}_{{\rm sum}}$, and ${s_C}_{{\rm max}}$, in Subsection \ref{Power}, for trivariate normal distribution against $\mathcal{MMNE}$ distribution.}
\label{power-3var}
\begin{tabular}{ccccccccccccccc}
\hline
$\#$&\multicolumn{2}{c}{Parameters}&$\beta_{1,p}$&$\beta_{1p}^2$& $s_{\rm max}$ & $s_{\rm sum}$ & $b_{\rm max}$ & $b_{\rm sum}$ & $Q^*$& $\boldsymbol{T}_{\rm max}$ & $\boldsymbol{T}_{\rm sum}$ & $s_I$ & ${s_C}_{\rm max}$ & ${s_C}_{\rm sum}$\\
\hline\\[-0.2cm]
$1$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1&0&0\\0&2.5&0\\0&0&2.5 \\ \end{bmatrix}$& ${\boldsymbol{\delta}}=\begin{bmatrix}0.1\\ 0.1 \\ 0.1\\ \end{bmatrix}$&
0.056& 0.058& 0.047 &0.036& 0.042& 0.035& 0.056& 0.047& 0.036& 0.056& 0.053& 0.038\\[0.3cm]
$2$&& ${\boldsymbol{\delta}}=\begin{bmatrix} -0.1\\ -0.1 \\ -0.1\\ \end{bmatrix}$&
0.057& 0.061& 0.057& 0.035& 0.048& 0.052& 0.057& 0.057& 0.035& 0.057& 0.056& 0.039\\[0.3cm]
\\[-0.2cm]
$3$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1&1&1\\1&2.5&1\\1&1&10 \\ \end{bmatrix}$& ${\boldsymbol{\delta}}=\begin{bmatrix}0.1\\ 0.7 \\ 0.1\\ \end{bmatrix}$&
0.601& 0.192& 0.637& 0.166& 0.082& 0.031& 0.601& 0.637& 0.166& 0.601& 0.557& 0.513\\[0.3cm]
\\[-0.2cm]
$4$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1&0&0\\0&2.5&0\\0&0&5 \\ \end{bmatrix}$&
${\boldsymbol{\delta}}=\begin{bmatrix}0.7\\ 0.1 \\ 0.1\\ \end{bmatrix}$&
0.272& 0.324& 0.309& 0.277& 0.262& 0.168& 0.272& 0.309& 0.277& 0.272& 0.308& 0.287\\[0.3cm]
\\[-0.2cm]
$5$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1&0&0\\0&1&0\\0&0&2.5 \\ \end{bmatrix}$& ${\boldsymbol{\delta}}=\begin{bmatrix}0.4\\ 0.6 \\ 0.6\\ \end{bmatrix}$&
0.899& 0.513& 0.816& 0.902& 0.901& 0.902& 0.899& 0.816& 0.902& 0.899& 0.815& 0.901\\
[0.3cm] \\[-0.2cm]
$6$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1&1&1\\1&2.5&1\\1&1&10 \\ \end{bmatrix}$& ${\boldsymbol{\delta}}=\begin{bmatrix}0.7\\ 0.05 \\ 0.3\\ \end{bmatrix}$&
0.745& 0.674 &0.750& 0.501& 0.277& 0.114& 0.745 &0.750& 0.501& 0.745& 0.638& 0.709\\
[0.3cm] \\[-0.2cm]
$7$&
${\boldsymbol{\Omega}}=\begin{bmatrix} 1&0&0\\0&2.5&0\\0&0&1 \\ \end{bmatrix}$& ${\boldsymbol{\delta}}=\begin{bmatrix}0.7\\ 0.3 \\ 0.3\\ \end{bmatrix}$&
0.572& 0.349& 0.461& 0.726& 0.729& 0.703& 0.572& 0.461& 0.726& 0.572& 0.476& 0.711\\[0.3cm]
$8$&
& ${\boldsymbol{\delta}}=\begin{bmatrix} -0.7\\ 0.3 \\ 0.1\\ \end{bmatrix}$&
0.381& 0.228& 0.025& 0.025& 0.015& 0.015& 0.380 &0.025 &0.025& 0.380& 0.029& 0.029\\[0.3cm]
$9$&
& ${\boldsymbol{\delta}}=\begin{bmatrix} 0.7\\ -0.3 \\ 0.1\\ \end{bmatrix}$&
0.415& 0.248& 0.400& 0.061& 0.034& 0.013& 0.415& 0.401 &0.061 &0.415& 0.414 &0.081\\[0.3cm]
$10$&
& ${\boldsymbol{\delta}}=\begin{bmatrix}0.7\\ 0.3 \\ -0.1\\ \end{bmatrix}$&
0.394 &0.240& 0.378& 0.344 &0.259 &0.152& 0.392 &0.378& 0.344 &0.394& 0.385 &0.362\\[0.3cm]
$11$&
& ${\boldsymbol{\delta}}=\begin{bmatrix}-0.7\\ -0.3 \\ 0.1\\ \end{bmatrix}$&
0.404& 0.206& 0.073 &0.304& 0.059& 0.146 &0.404 &0.073& 0.304 &0.404 &0.072& 0.317\\[0.3cm]
$12$&
& ${\boldsymbol{\delta}}=\begin{bmatrix}-0.7\\ 0.3 \\ -0.1\\ \end{bmatrix}$&
0.415& 0.222& 0.034 &0.050 &0.015 &0.022 &0.415 &0.034 &0.050& 0.415& 0.044& 0.069\\[0.3cm]
$13$&
& ${\boldsymbol{\delta}}=\begin{bmatrix} 0.7\\ -0.3 \\ -0.1\\ \end{bmatrix}$&
0.393& 0.257& 0.356& 0.031& 0.020& 0.015& 0.393 &0.356& 0.031& 0.393& 0.370& 0.037\\[0.3cm]
$14$&
& ${\boldsymbol{\delta}}=\begin{bmatrix}-0.8\\ -0.8 \\ -0.8\\ \end{bmatrix}$&
0.996& 0.965& 0.997& 0.993 &0.993& 0.993& 0.996& 0.997 &0.993 &0.996& 0.997& 0.993\\[0.3cm]
\hline
\end{tabular}
\end{table}
\subsection{Comparison and Performance of Different Skewness Measures}\label{Power}
The measures studied in Section \ref{measures-skewness} and in the preceding subsection are not directly comparable with each other. To compare them, we need indices on a common scale. To this end, we consider the sample version of each skewness measure as a test statistic for the null hypothesis of a normal distribution against the $\mathcal{MMNE}$ alternative, and then compare the powers of the tests based on the different statistics.
Let $\boldsymbol{Y}_1, \boldsymbol{Y}_2, \ldots, \boldsymbol{Y}_n$ denote a sample of $n$ observations ($p\times1$ vectors) from any $p$-dimensional distribution. Then, a sample version of each of the skewness measures described can be obtained by replacing $\boldsymbol{\xi}$, $\boldsymbol{\Omega}$, and $\boldsymbol{\delta}$ with their maximum likelihood estimates \cite{Balakrishnan and Scarpa (2012)}.
As seen in the previous sections, the Mardia and Malkovich-Afifi measure $\beta_{1,p}$, the Srivastava measure $\beta_{1p}^2$, the Isogai measure $s_I$, and the Balakrishnan-Brito-Quiroz measure $Q^*$ are scalar indices, while the M\'{o}ri-Rohatgi-Sz\'{e}kely measure $s$, the Kollo measure $b$, the Balakrishnan-Brito-Quiroz measure $\boldsymbol{T}$, and the Isogai measure $s_C$ are vectorial indices.
Here, we use these statistics for testing the null hypothesis of normality, and we compute the power of each test to quantify the capacity of each skewness measure to identify the specific asymmetry present in the $\mathcal{MMNE}$ distribution.
Since the power of a test is a probability, it enables us to compare the different statistics regardless of their original scales.
To obtain a single test statistic for the vectorial measures, we propose two different metrics, namely, the sum and the maximum (see \cite{Balakrishnan and Scarpa (2012)}, pages 82-83).
For the M\'{o}ri-Rohatgi-Sz\'{e}kely measure, we compute $s_{{\rm sum}}=\sum_{r=1}^p s_r$ and $s_{{\rm max}}=\max_{r\in \{1, \ldots ,p\}} s_r$,
for the Kollo measure $b_{{\rm sum}}=\sum_{r=1}^p b_r$ and $b_{{\rm max}}=\max_{r\in \{1, \ldots ,p\}} b_r$,
for the Balakrishnan-Brito-Quiroz measure $\boldsymbol{T}_{{\rm sum}}=\sum_{r=1}^p T_r$ and $\boldsymbol{T}_{{\rm max}}=\max_{r\in \{1, \ldots ,p\}} T_r$,
and for the Isogai measure ${s_C}_{{\rm sum}}= \sum_{r=1}^p{s_C}_r$ and ${s_C}_{{\rm max}}= \max_{r\in \{1, \ldots ,p\}} {s_C}_r$.
The distributions of the sample versions of the measures $\beta_{1,p}$, $\beta_{1p}^2$, $s_{{\rm sum}}$, $s_{{\rm max}}$, $b_{\rm sum}$, $b_{\rm max}$, $Q^*$, $\boldsymbol{T}_{{\rm sum}}$, $\boldsymbol{T}_{{\rm max}}$, $s_I$, ${s_C}_{{\rm sum}}$, and ${s_C}_{{\rm max}}$ are not easy to obtain analytically,
and so we determine the critical values of the corresponding tests through Monte Carlo simulations. Two sets of critical values obtained in this manner, based on $10000$ samples from the standard multivariate normal distribution, are presented in Tables \ref{25Percentiles} and \ref{5Percentiles} for dimensions $p\in\{2, \ldots, 8\}$.
To obtain the critical values, we first simulated $10000$ samples of size $n=100$ from the standard multivariate normal distribution for each dimension $p\in\{2, \ldots, 8\}$, estimated the parameters, and computed the values of the test statistics. We then arranged the obtained values in increasing order and selected the lower and upper $2.5\%$ points and the upper $5\%$ points as critical values.
For computing the powers of the different tests based on the above test statistics, we simulated $1000$ samples of size $n=100$ from the $\mathcal{MMNE}$ distribution for different choices of its parameters, computed the test statistics by using the ML estimates of the parameters obtained through the EM algorithm, and then recorded the proportion of samples falling in the corresponding rejection region.
For the test statistics $\beta_{1,p}$, $\beta_{1p}^2$, $Q^*$ and $s_I$,
the rejection region was one-sided, of the form $CR=\{Q_0>q_\alpha\}$, while
for all other test statistics the rejection region was two-sided, of the form $CR=\{Q_0<q_{1-\alpha/2} ~\mbox{or}~ Q_0>q_{\alpha/2}\}$, where $Q_0$ denotes the test statistic under the null hypothesis and $q_\alpha$ is the upper $\alpha$ percentile of its null distribution.
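As a rough illustration of this scheme, the following \textsf{R} sketch computes a one-sided Monte Carlo critical value and an approximate power for a single statistic. It is illustrative only: for concreteness it uses the classical sample version of Mardia's $\beta_{1,p}$ rather than the ML plug-in versions used above, it draws from the $\mathcal{MMNE}$ distribution through the stochastic representation ${\mathbf{Y}}={\boldsymbol{\xi}}+{\boldsymbol{\omega}}({\boldsymbol{\delta}} U+{\mathbf{Z}})$ with $U$ standard exponential and ${\mathbf{Z}}\sim N_p(\boldsymbol{0},\overline{\boldsymbol{\Omega}}-{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top)$ (as implied by the moment generating function used in Appendix A), and the function and object names are ours rather than those of the supplementary code.
\begin{verbatim}
## Monte Carlo critical value and power for one statistic (illustrative sketch).
## The classical sample version of Mardia's beta_{1,p} is used; the paper instead
## plugs EM/ML estimates into the population formulas.
mardia_b1p <- function(x) {
  n  <- nrow(x)
  xc <- scale(x, center = TRUE, scale = FALSE)
  D  <- xc %*% solve(crossprod(xc) / n) %*% t(xc)
  sum(D^3) / n^2
}
## MMNE sampler via Y = xi + omega (delta U + Z), U ~ Exp(1),
## Z ~ N_p(0, Omega_bar - delta delta^T); needs delta' Omega_bar^{-1} delta < 1.
rmmne <- function(n, xi, Omega, delta) {
  p     <- length(delta)
  omega <- diag(sqrt(diag(Omega)), p)
  Obar  <- cov2cor(Omega)
  Z <- MASS::mvrnorm(n, rep(0, p), Obar - delta %*% t(delta))
  X <- matrix(rexp(n), n, 1) %*% t(delta) + Z      # rows are delta*U_i + Z_i
  sweep(X %*% omega, 2, xi, "+")
}
set.seed(1)
n <- 100; p <- 2; B <- 2000                        # B = 10000 in the paper
crit  <- quantile(replicate(B, mardia_b1p(matrix(rnorm(n * p), n, p))), 0.95)
Omega <- diag(c(1, 2.5)); delta <- c(0.1, 0.8)     # a valid bivariate case
power <- mean(replicate(1000, mardia_b1p(rmmne(n, c(0, 0), Omega, delta)) > crit))
power
\end{verbatim}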
\begin{table}[htb!]
\scriptsize
\center
\caption{ Simulated values of power for all tests based on the test statistics $\beta_{1,p}$, $\beta_{1p}^2$, $s_{{\rm sum}}$, $s_{{\rm max}}$, $b_{\rm sum}$, $b_{\rm max}$, $Q^*$, $\boldsymbol{T}_{{\rm sum}}$, $\boldsymbol{T}_{{\rm max}}$, $s_I$, ${s_C}_{{\rm sum}}$, and ${s_C}_{{\rm max}}$, in Subsection \ref{Power}, for seven dimensional normal distribution against $\mathcal{MMNE}$ distribution when ${\boldsymbol{\Omega}}=\mathbf{I}_7$.}
\label{power-7var}
\begin{tabular}{cccccccccccccc}
\hline
$\#$&Parameter&$\beta_{1,p}$&$\beta_{1p}^2$& $s_{\rm max}$ & $s_{\rm sum}$ & $b_{\rm max}$ & $b_{\rm sum}$ & $Q^*$& $\boldsymbol{T}_{\rm max}$ & $\boldsymbol{T}_{\rm sum}$ & $s_I$ & ${s_C}_{\rm max}$ & ${s_C}_{\rm sum}$\\
\hline
$1$&
${\boldsymbol{\delta}}=(0.1,0.1,0.1,0.1,0.1,0.1,0.1)^\top$&
0.061& 0.060& 0.061& 0.038& 0.042& 0.046& 0.062& 0.061& 0.044& 0.061& 0.059& 0.045\\
$2$&
${\boldsymbol{\delta}}=(0.7,0.7,0.7,0.7,0.7,0.7,0.7)^\top$&
0.978& 0.985& 0.000 &0.985 &0.985& 0.985& 0.978& 0.000& 0.985& 0.978& 0.704& 0.985\\
$3$&
${\boldsymbol{\delta}}=(0.1,0.7,0.1,0.7,0.1,0.7,0.1)^\top$&
0.952& 0.543& 0.035& 0.952 &0.953 &0.951& 0.952 &0.035& 0.952& 0.952 &0.580& 0.954\\
$4$&
${\boldsymbol{\delta}}=(0.4,0.2,0.5,0.1,0.7,0.6,0.3)^\top$&
0.918 &0.357 &0.033& 0.923 &0.927 &0.923 &0.919 &0.034 &0.923 &0.918 &0.326& 0.925\\
$5$&
${\boldsymbol{\delta}}=-(0.1,0.1,0.1,0.1,0.1,0.1,0.1)^\top$&
0.052& 0.055& 0.093& 0.058& 0.047 &0.070 &0.052& 0.095& 0.060& 0.052& 0.075 &0.054\\
$6$&
${\boldsymbol{\delta}}=-(0.7,0.7,0.7,0.7,0.7,0.7,0.7)^\top$&
0.980& 0.982 &0.986 &0.982 &0.983 &0.982 &0.980 &0.986 &0.982 &0.980& 0.983 &0.982\\
$7$&
${\boldsymbol{\delta}}=(0.1,-0.7,0.1,-0.7,0.1,-0.7,0.1)^\top$&
0.959 &0.516 &0.010& 0.684& 0.000 &0.080& 0.959 &0.010 &0.771& 0.959& 0.007& 0.841\\
$8$&
${\boldsymbol{\delta}}=(-0.4,0.2,-0.5,0.1,-0.7,0.6,-0.3)^\top$&
0.889& 0.413& 0.015 &0.009& 0.006 &0.005 &0.891& 0.015& 0.014& 0.889& 0.102& 0.156\\
\hline
\end{tabular}
\end{table}
In the simulation study, we took $\boldsymbol{\xi}= \boldsymbol{0}$, and the parameters $\boldsymbol{\Omega}$ and $\boldsymbol{\delta}$ as given in Tables \ref{power-2var}-\ref{power-7var}.
Tables \ref{power-2var}-\ref{power-7var} present the powers of the proposed tests for the bivariate, trivariate and seven-dimensional normal distributions against the $\mathcal{MMNE}$ distribution, respectively.
The comparison of different measures may be done directly from the results in Tables \ref{power-2var}-\ref{power-7var}.
These results show clearly which are the poorer indices of skewness among those considered.
Based on our empirical study, considering different cases of the $\mathcal{MMNE}$ distribution in two, three and seven dimensions, we
make the following observations:
for all cases with small skewness, as expected, the power of the tests is lower for distributions closer to the normal, and the test statistics $\beta_{1,p}$, $\beta_{1p}^2$, $Q^*$ and $s_I$ perform better.
From Tables \ref{power-2var}-\ref{power-7var}, as expected, the power values of all tests increase as the elements of the skewness parameter increase.
For elements of the skewness parameter close to 1 or -1, the powers of the tests are higher and are almost the same for the different test statistics.
The test statistics $\beta_{1,p}$, $Q^*$ and $s_I$ behave very similarly and have the same power; for small values of the skewness parameter, these test statistics perform poorly.
The powers of the tests based on the $s_{{\rm max}}$ and $\boldsymbol{T}_{{\rm max}}$ statistics are the same, and the test statistics $s_{\rm sum}$ and $\boldsymbol{T}_{\rm sum}$ often behave similarly.
For large and moderate values of the skewness parameter, the $b_{{\rm sum}}$ and $b_{{\rm max}}$ statistics have the lowest power and the worst performance among all the test statistics.
For the bivariate case in Table \ref{power-2var}, when one element of the skewness parameter is large and the other is small, the statistics $\beta_{1p}^2$, $\boldsymbol{T}_{{\rm max}}$ and $s_{{\rm max}}$ perform well, while the $b_{{\rm sum}}$ and $b_{{\rm max}}$ statistics have the lowest power.
For the trivariate case in Table \ref{power-3var}, when one element of the skewness parameter is large and two elements are small, the statistics $\beta_{1p}^2$, $\boldsymbol{T}_{{\rm max}}$ and $s_{{\rm max}}$ perform better.
From Table \ref{power-7var}, for case 3, the statistic ${s_C}_{{\rm sum}}$ has the best performance, while $\boldsymbol{T}_{{\rm max}}$ and $s_{{\rm max}}$ have low power close to $0.05$.
From Table \ref{power-7var}, for case 4, the statistic $b_{{\rm max}}$ has the best performance, while $\boldsymbol{T}_{{\rm max}}$ and $s_{{\rm max}}$ again have low power close to $0.05$.
An overall finding from Tables \ref{power-2var}-\ref{power-7var} is that the test statistic ${s_C}_{{\rm max}}$ performs better than the others in many cases.
\section{Illustrative Examples}\label{Example}
In this section, we fit the $\mathcal{MMNE}$ model to two real data sets to
illustrate its flexibility. The model is also compared with the $\mathcal{SN}$ and $\mathcal{ST}$ distributions in terms of some measures of fit.
\subsection{AIS data}\label{AIS-Example}
The first example considers the Australian
Institute of Sport (AIS) data \cite{Cook and Weisberg (1994)},
containing 11 biomedical measurements on 202 Australian
athletes (100 female and 102 male).
Here, we focus solely on the first 100 observations and on the trivariate case corresponding to the BMI, SSF and Bfat variables, where the three acronyms denote Body Mass Index, Sum of Skin Folds, and Body Fat percentage, respectively. These data are available in the \texttt{sn} package of the \textsf{R} software.
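For reference, the subset analysed here can be extracted along the following lines (a sketch, assuming the \texttt{ais} data frame of the \texttt{sn} package with columns named \texttt{BMI}, \texttt{SSF} and \texttt{Bfat}):
\begin{verbatim}
## Extract the trivariate subset of the AIS data (sketch).
library(sn)
data(ais)
Y <- as.matrix(ais[1:100, c("BMI", "SSF", "Bfat")])   # first 100 athletes
dim(Y)   # 100 x 3
\end{verbatim}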
Fig. \ref{Hist-AIS} presents the histograms for the three variables.
Upon using the EM algorithm, we obtained the maximum likelihood estimates of the parameters of the model. Table \ref{tab_est_AIS} presents the estimates of the parameters $({\boldsymbol{\xi}},{\mathbf{\Omega}},{\boldsymbol{\delta}})$.
Table \ref{tab_skew_AIS} presents the values of all the skewness measures computed by using the parameter estimates given in Table \ref{tab_est_AIS}.
\begin{figure}[htp]
\centerline{ \includegraphics[scale=.45]{hist-AIS.eps} }
\caption{ The histograms for the three selected variables BMI, SSF and Bfat of the AIS data set in Subsection \ref{AIS-Example}. \label{Hist-AIS}}
\end{figure}
The relative fit of a number of
candidate models can be compared by using the maximized log-likelihood values $ \ell({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}}) $, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The AIC and BIC indices are defined as
$
AIC = 2k-2\ell\left({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}}\right)$
and
$BIC = k \ln n-2\ell\left({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}}\right),
$
where $k$ is the number of model parameters and $\ell\left({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}}\right)$ is the maximized log-likelihood value of a fitted model.
A larger value of $\ell\left({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}}\right)$ and smaller values of AIC and BIC indicate a better fit of the model to the data.
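For instance, a small \textsf{R} helper of the following form (the function name is ours) computes both criteria from a fitted log-likelihood; with $k=3+6+3=12$ free parameters and $n=100$ observations, it recovers the $\mathcal{MMNE}$ entries of Table \ref{tab_AIC_fit}:
\begin{verbatim}
## AIC and BIC from a maximized log-likelihood (sketch).
fit_measures <- function(loglik, k, n) {
  c(AIC = 2 * k - 2 * loglik, BIC = k * log(n) - 2 * loglik)
}
fit_measures(loglik = -850.7388, k = 12, n = 100)   # MMNE fit to the AIS data
\end{verbatim}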
Table \ref{tab_AIC_fit} summarizes the fitting performance of the $\mathcal{MMNE}$ model, as compared to the $\mathcal{SN}$ and $\mathcal{ST}$ distributions. From Table \ref{tab_AIC_fit}, it is seen that the $\mathcal{MMNE}$ model provides the best fit overall as it has the largest $\ell({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}})$ value and
the lowest AIC and BIC values. Fig. \ref{scatter-AIS} shows the scatter plots of pairs of the three variables BMI, SSF, Bfat, along with the contour plots for the fitted $\mathcal{MMNE}$, $\mathcal{SN}$ and $\mathcal{ST}$ distributions.\\
\begin{table}[htb!]
\small
\center
\caption{Parameter estimates of the $\mathcal{MMNE}$ distribution by using EM Algorithm presented in Section \ref{Est-sec}, based on the three selected variables (BMI, SSF, Bfat) of the AIS data set in Subsection \ref{AIS-Example}.}
\label{tab_est_AIS}
\begin{tabular}{ccc}
\hline
$\widehat{{\boldsymbol{\xi}}}$&$\widehat{{\boldsymbol{\Omega}}}$& $\widehat{{\boldsymbol{\delta}}}$\\
\hline\\[-0.2cm]
$
\begin{bmatrix}
20.1099 \\
56.1969 \\
13.6666 \\
\end{bmatrix}
$
&
$
\begin{bmatrix}
7.2870& 66.3650 & 10.2745\\
66.3650& 1238.1858& 191.2748\\
10.2745& 191.2748& 31.3535\\
\end{bmatrix}
$
&
$
\begin{bmatrix}
0.6963\\
0.8747\\
0.7471\\
\end{bmatrix}
$\\[0.2cm]
\hline
\end{tabular}
\end{table}
\begin{table}[htb!]
\small
\center
\caption{Values of skewness measures in Section \ref{measures-skewness}, based on the three selected variables (BMI, SSF, Bfat) of the AIS data set in Subsection \ref{AIS-Example}.}
\label{tab_skew_AIS}
\begin{tabular}{cccccccc}
\hline
$\beta_{1,p}$&$\beta_{1p}^2$& $s$ & $b$& $Q^*$& $T$ & $s_I$& $s_C$\\
\hline\\[-0.2cm]
3.5539 & 0.5973 &
$
\begin{bmatrix}
0.3182\\
1.7703\\
-0.5644\\
\end{bmatrix}
$
&
$
\begin{bmatrix}
0.2080\\
1.1571\\
-0.3689\\
\end{bmatrix}
$
&
0.1422
&
$
\begin{bmatrix}
0.0636\\
0.3541 \\
-0.1129 \\
\end{bmatrix}
$
&
0.4800
&
$
\begin{bmatrix}
0.4920\\
0.6181 \\
0.5279 \\
\end{bmatrix}
$
\\[0.2cm]
\hline
\end{tabular}
\end{table}
\begin{table}[htb!]
\small
\center
\caption{Comparison of fitting measures, maximized log-likelihood value $\ell({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}})$, Akaike information criterion AIC and Bayesian information criterion BIC, for skew-normal ($\mathcal{SN}$), skew-t ($\mathcal{ST}$) and $\mathcal{MMNE}$ distributions for the three selected variables (BMI, SSF, Bfat) of the AIS data set in Subsection \ref{AIS-Example}.}
\label{tab_AIC_fit}
\begin{tabular}{l|ccc }
\hline
Distribution & ~~~~~~~ $\ell({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}}) $ & ~~~~~~~ AIC& ~~~~~~~ BIC\\
\hline
$\mathcal{SN}$ & ~~~~~~~ -866.2725 & ~~~~~~~ 1756.545& ~~~~~~~ 1787.807 \\
$\mathcal{ST}$ & ~~~~~~~ -852.1354 & ~~~~~~~ 1730.271& ~~~~~~~ 1764.138 \\
$\mathcal{MMNE}$ & ~~~~~~~ -850.7388 & ~~~~~~~ 1725.478& ~~~~~~~ 1756.740 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htp]
\centerline{ \includegraphics[scale=1]{contour-plot-scater-plot-AIS.eps} }
\caption{ Scatter plots of pairs of the three selected variables for the AIS data set, along with the contour plots for the fitted $\mathcal{MMNE}$, skew-normal ($\mathcal{SN}$) and skew-t ($\mathcal{ST}$) distributions presented in Sections \ref{sec1} and \ref{sec2}. \label{scatter-AIS}}
\end{figure}
\subsection{ Italian olive oil data}\label{olive-Example}
As a second example, we consider the well-known data on the percentage composition of eight fatty acids found in the lipid fraction of 572 Italian olive oils. These data come from three areas; within each area, there are a number of constituent regions, 9 in total. The data set is a data frame with 572 observations and 10 columns. The first column gives the area (one of Southern Italy, Sardinia, and Northern Italy), the second gives the region, and the remaining 8 columns give the fatty acid variables. Southern Italy consists of the North Apulia, Calabria, South Apulia, and Sicily regions, Sardinia is divided into Inland Sardinia and Coastal Sardinia, and Northern Italy consists of the Umbria, East Liguria, and West Liguria regions. These data are available in the \texttt{pgmm} package of the \textsf{R} software.
\begin{figure}[htp]
\centerline{ \includegraphics[scale=.45]{hist-Olive.eps} }
\caption{ The histograms of the two selected variables Linolenic and Arachidic fatty acids of olive oil data set in Subsection \ref{olive-Example}. \label{Hist-Olive}}
\end{figure}
\begin{table}[htb!]
\small
\center
\caption{Parameter estimates of the $\mathcal{MMNE}$ distribution by using EM Algorithm presented in Section \ref{Est-sec}, based on the two selected variables (Linolenic, Arachidic) of the olive oil data set in Subsection \ref{olive-Example}.}
\label{tab_est-Olive}
\begin{tabular}{ccc}
\hline
$\widehat{{\boldsymbol{\xi}}}$&$\widehat{{\boldsymbol{\Omega}}}$& $\widehat{{\boldsymbol{\delta}}}$\\
\hline\\[-0.2cm]
$
\begin{bmatrix}
36.8344\\
55.3462 \\
\end{bmatrix}
$
&
$
\begin{bmatrix}
63.3623& 40.9481\\
40.9481& 124.0575\\
\end{bmatrix}
$
&
$
\begin{bmatrix}
0.1546\\
0.6977\\
\end{bmatrix}
$
\\[0.2cm]
\hline
\end{tabular}
\end{table}
\begin{table}[htb!]
\small
\center
\caption{Values of skewness measures in Section \ref{measures-skewness}, based on the two selected variables (Linolenic, Arachidic) of the olive oil data set in Subsection \ref{olive-Example}.}
\label{tab_skew-Olive}
\begin{tabular}{cccccccc}
\hline
$\beta_{1,p}$&$\beta_{1p}^2$& $s$ & $b$& $Q^*$& $T$ & $s_I$& $s_C$\\
\hline\\[-0.2cm]
0.5707 & 0.1218 &
$
\begin{bmatrix}
-0.0492\\
0.7538\\
\end{bmatrix}
$
&
$
\begin{bmatrix}
-0.0428\\
0.6557 \\
\end{bmatrix}
$
&
0.0802&
$
\begin{bmatrix}
-0.0185\\
0.2827 \\
\end{bmatrix}
$
&
0.0517
&
$
\begin{bmatrix}
0.0486\\
0.2195 \\
\end{bmatrix}
$
\\[0.2cm]
\hline
\end{tabular}
\end{table}
For the purpose of illustration, we consider the 323 cases from Southern Italy and columns (8, 9), corresponding to the Linolenic and Arachidic fatty acids, respectively, so as to have a bivariate case. Fig.~\ref{Hist-Olive} shows the histograms of the two selected variables, while Table \ref{tab_est-Olive} presents the estimates of the parameters and Table \ref{tab_skew-Olive} presents the values of the skewness measures.
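The corresponding subset can be extracted along the following lines (a sketch, assuming the \texttt{olive} data frame of the \texttt{pgmm} package, with the area label in the first column coded so that $1$ corresponds to Southern Italy, and the Linolenic and Arachidic acids in columns 8 and 9):
\begin{verbatim}
## Extract the bivariate Southern-Italy subset of the olive oil data (sketch).
library(pgmm)
data(olive)
south <- olive[olive[, 1] == 1, ]    # Southern Italy, 323 oils
Y <- as.matrix(south[, c(8, 9)])     # Linolenic and Arachidic fatty acids
dim(Y)   # 323 x 2
\end{verbatim}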
Table \ref{tab_olive_fit} provides the fit of $\mathcal{MMNE}$ model, as compared to those of $\mathcal{SN}$ and $\mathcal{ST}$ distributions, for the considered data. From Table \ref{tab_olive_fit}, it is clear that the $\mathcal{MMNE}$ model provides the best overall fit as it possesses the largest $\ell({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}})$ value and
the lowest AIC and BIC values.
Fig. \ref{fig-olive} shows the scatter plot of the data and the contour plots of the fitted $\mathcal{MMNE}$, $\mathcal{SN}$ and $\mathcal{ST}$ distributions.
\begin{table}[htb]
\small
\center
\caption{Comparison of fitting measures, maximized log-likelihood value $\ell({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}})$, Akaike information criterion AIC and Bayesian information criterion BIC, for skew-normal ($\mathcal{SN}$), skew-t ($\mathcal{ST}$) and $\mathcal{MMNE}$ distributions for the two selected variables (Linolenic, Arachidic) of the olive oil data set in Subsection \ref{olive-Example}.}
\label{tab_olive_fit}
\begin{tabular}{l|ccc}
\hline
Distribution & ~~~~~~~$\ell({\boldsymbol{\hat{\theta}}}|{\boldsymbol{y}}) $ & ~~~~~~~ AIC& ~~~~~~~ BIC\\
\hline
$\mathcal{SN}$ & ~~~~~~~ -2320.039 & ~~~~~~~ 4654.079& ~~~~~~~ 4680.522 \\
$\mathcal{ST}$ & ~~~~~~~-2316.320 & ~~~~~~~ 4648.640& ~~~~~~~ 4678.861 \\
$\mathcal{MMNE}$ & ~~~~~~~-2314.604 & ~~~~~~~ 4643.207& ~~~~~~~4669.651 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htp]
\centerline{ \includegraphics[scale=.60]{contour-plot-scater-plot-OLIVE} }
\caption{ Scatter plots of the olive oil data, and the contour plots of the fitted $\mathcal{MMNE}$, skew-normal ($\mathcal{SN}$) and skew-t ($\mathcal{ST}$) distributions presented in Sections \ref{sec1} and \ref{sec2}. \label{fig-olive}}
\end{figure}
\section{Concluding Remarks}
In this paper, we have discussed the mean mixture of multivariate normal distribution ($\mathcal{MMN}$), which includes the normal, $\mathcal{SN}$, and extended $\mathcal{SN}$ distributions as particular cases. We have studied several features of this family of distributions, including the first four moments, the distributions of affine transformations and canonical forms, estimation of parameters by using an EM-type algorithm with closed-form expressions, and different measures of multivariate skewness. Two special cases of the $\mathcal{MMN}$ family, with standard gamma and standard exponential distributions as mixing distributions, denoted by $\mathcal{MMNG}$ and $\mathcal{MMNE}$ distributions, have been studied in detail.
A simulation study has been performed to evaluate the performance of the MLEs of parameters of the $\mathcal{MMNE}$ distribution.
From the results in Section \ref{Example}, for the AIS and olive oil data sets, the $\mathcal{MMNE}$ distribution is shown to provide a better fit than the $\mathcal{SN}$ and $\mathcal{ST}$ distributions.
Different multivariate measures of skewness have been derived for the $\mathcal{MMNE}$ distribution, and the evaluation of tests based on these measures is carried out in terms of powers of tests.
There are several possible directions for future research. For example, the study of finite mixtures and scale mixtures of $\mathcal{MMN}$ family will be of great interest.
In the stochastic representation in (\ref{Repre}), if the skewness parameter is a matrix, with representation
${\mathbf{Y}} \stackrel{d}{=} {\boldsymbol{\xi}}+{\boldsymbol{\omega}}\left( {\boldsymbol{\Delta}} \mathbf{U}+ {\mathbf{Z}}\right)$, then ${\mathbf{Y}}$ has the unified skew
normal ($\mathcal{SUN}$) distribution (see \cite{Arellano Valle and Azzalini (2006)}), wherein elements of $\mathbf{U}$ have the standard half-normal distribution. In this connection, consideration of a general distribution for $\mathbf{U}$ would be of interest.
All the computations presented
in this paper were performed by using the statistical software \textsf{R}, version 4.0.0. The computer programs for the implementation of the proposed EM-type algorithm and for the comparison of the skewness measures are available as supplementary material associated with this article.\\
\noindent\textbf{CRediT authorship contribution statement}\\
\textbf{Me$'$raj Abdi:} Conceptualization, Methodology, Software, Writing - original draft, Writing - review \& editing, Investigation, Validation.
\textbf{Mohsen Madadi:} Methodology, Supervision, Investigation.
\textbf{Narayanaswamy Balakrishnan:} Supervision, Writing - review \& editing, Methodology.
\textbf{Ahad Jamalizadeh:} Conceptualization, Methodology, Supervision, Visualization.
\section*{Acknowledgments} The authors are grateful to the Editors and two anonymous reviewers who provided
very helpful feedback, comments, and suggestions, based on which the paper has improved significantly.
\section*{Appendix A. Proofs}
\begin{proof}[\bf Proof of Lemma \ref{Lem1}.]
By using (\ref{MtX}), we can calculate the partial derivatives of $M_{\mathbf{X}}({\mathbf{t}})$, the MGF of normalized $\mathcal{MMN}$ distribution, that are directly related to the moments of the $\mathcal{MMN}$ random vector. Suppose ${\mathbf{X}}\sim \mathcal{MMN}_p({\boldsymbol{0}}, {\overline{\mathbf{\Omega}}},{\boldsymbol{\delta}}; H)$. Then, some derivatives
of $M_{\mathbf{X}}({\mathbf{t}})$ in (\ref{MtX}) are as follows:
\begin{eqnarray*}
\frac{\partial M_{\mathbf{X}}({\mathbf{t}})}{\partial {\mathbf{t}}}&=&e^{\frac{1}{2}{\mathbf{t}}^\top{\mathbf{ \Sigma}}_{\mathbf{X}} {\mathbf{t}}}
\left[{\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}} M_U({\mathbf{t}}^\top{\boldsymbol{\delta}})+{\boldsymbol{\delta}} M_U^{(1)}({\mathbf{t}}^\top{\boldsymbol{\delta}})\right],\label{dif1}\\
\frac{\partial^2 M_{\mathbf{X}}({\mathbf{t}})}{\partial {\mathbf{t}}\partial {\mathbf{t}}^\top}&=&e^{\frac{1}{2}{\mathbf{t}}^\top{\mathbf{ \Sigma}}_{\mathbf{X}} {\mathbf{t}}}
\left\{ M_U({\mathbf{t}}^\top{\boldsymbol{\delta}})\left[{\mathbf{ \Sigma}}_{\mathbf{X}}+({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})\otimes({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})^\top \right]
+M_U^{(1)}({\mathbf{t}}^\top{\boldsymbol{\delta}} )\left[ ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})\otimes {\boldsymbol{\delta}}^\top + {\boldsymbol{\delta}}\otimes ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})^\top\right]+ M_U^{(2)}({\mathbf{t}}^\top{\boldsymbol{\delta}})~{\boldsymbol{\delta}}\otimes {\boldsymbol{\delta}}^\top \right\},\label{dif2}\\
\frac{\partial^3 M_{\mathbf{X}}({\mathbf{t}})}{\partial {\mathbf{t}}\partial {\mathbf{t}}^\top\partial {\mathbf{t}}}&=&e^{\frac{1}{2}{\mathbf{t}}^\top{\mathbf{ \Sigma}}_{\mathbf{X}} {\mathbf{t}}}
\left \{ M_U({\mathbf{t}}^\top{\boldsymbol{\delta}})
\left[ ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})\otimes {\mathbf{ \Sigma}}_{\mathbf{X}}
+\mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}})({\mathbf{ \Sigma}_{\mathbf{X}}}{\mathbf{t}})^\top
+(\mathbf{I}_p\otimes ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}}))({\mathbf{ \Sigma}}_{\mathbf{X}} + ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})\otimes ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})^\top) \right]\right. \nonumber\\
&&+ M_U^{(1)}({\mathbf{t}}^\top{\boldsymbol{\delta}})\left[{\boldsymbol{\delta}} \otimes{\mathbf{ \Sigma}}_{\mathbf{X}}+\mathrm{vec}({\mathbf{ \Sigma}}_{\mathbf{X}}){\boldsymbol{\delta}}^\top
+ (\mathbf{I}_p\otimes ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}}))[{\boldsymbol{\delta}}\otimes({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})^\top+({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})\otimes{\boldsymbol{\delta}}^\top]\right.\nonumber\\
&&+ \left. (\mathbf{I}_p\otimes {\boldsymbol{\delta}} )\left({\mathbf{ \Sigma}}_{\mathbf{X}}+({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})\otimes({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})^\top \right) \right]\nonumber\\
&&+ M_U^{(2)}({\mathbf{t}}^\top{\boldsymbol{\delta}}) \left[ (\mathbf{I}_p\otimes ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}}))({\boldsymbol{\delta}} \otimes {\boldsymbol{\delta}}^\top ) +(\mathbf{I}_p\otimes {\boldsymbol{\delta}} ) \left( {\boldsymbol{\delta}} \otimes ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}})^\top + ({\mathbf{ \Sigma}}_{\mathbf{X}}{\mathbf{t}}) \otimes {\boldsymbol{\delta}}^\top \right) \right]+ \left.M_U^{(3)}({\mathbf{t}}^\top{\boldsymbol{\delta}}) (\mathbf{I}_p\otimes {\boldsymbol{\delta}} ) ({\boldsymbol{\delta}} \otimes {\boldsymbol{\delta}}^\top) \right\},\label{dif3}
\end{eqnarray*}
where $M_U^{(1)}({\mathbf{t}}^\top{\boldsymbol{\delta}})=\frac{\partial M_U({\mathbf{t}}^\top{\boldsymbol{\delta}}) }{\partial \mathbf{t}}$,
$M_U^{(2)}({\mathbf{t}}^\top{\boldsymbol{\delta}})=\frac{\partial^2 M_U({\mathbf{t}}^\top{\boldsymbol{\delta}})}{\partial {\mathbf{t}}\partial {\mathbf{t}}^\top}$
and $M_U^{(3)}({\mathbf{t}}^\top{\boldsymbol{\delta}})=\frac{\partial^3 M_U({\mathbf{t}}^\top{\boldsymbol{\delta}})}{\partial {\mathbf{t}}\partial {\mathbf{t}}^\top\partial {\mathbf{t}}}$.
Setting $\mathbf{t}=\boldsymbol{0}$, as in \cite{Genton et al. (2001)}, we obtain the first three moments of the $\mathcal{MMN}$ family. To find the fourth moment, since we only need the value of fourth partial derivative of $M_{\mathbf{X}}({\mathbf{t}}) $ at $\mathbf{t}=\boldsymbol{0}$, say $M_4({\mathbf{X}})=\frac{\partial^4 M_{\mathbf{X}}({\mathbf{t}})}{\partial {\mathbf{t}}\partial {\mathbf{t}}^\top\partial {\mathbf{t}}\partial {\mathbf{t}}^\top}|_{\mathbf{t}=\boldsymbol{0}}$,
we do not need to compute the whole expression. Instead, we can simply single out all the terms in $\frac{\partial^4 M_{\mathbf{X}}({\mathbf{t}})}{\partial {\mathbf{t}}\partial {\mathbf{t}}^\top\partial {\mathbf{t}}\partial {\mathbf{t}}^\top}$ that do not contain the factor $\mathbf{t}$ or $\mathbf{t}^\top$.
\end{proof}
\noindent{\bf Note 1:}
The stochastic representation
${\mathbf{Y}} \stackrel{d}{=} {\boldsymbol{\xi}}+{\boldsymbol{\omega}}\left( {\boldsymbol{\delta}} U + {\mathbf{Z}}\right)$
can be used directly as a way to obtain the first four moments of ${\mathbf{Y}}$ in the following formulas:
\begin{eqnarray*}
M_1({\mathbf{Y}})&=&{\rm E}({\mathbf{Y}}),~~~~~~~~~
M_2({\mathbf{Y}})={\rm E}\left({\mathbf{Y}}\otimes {\mathbf{Y}}^\top\right)={\rm E}\left({\mathbf{Y}} {\mathbf{Y}}^\top\right),~~~~~~~~
M_3({\mathbf{Y}})={\rm E}\left({\mathbf{Y}}\otimes {\mathbf{Y}}^\top \otimes {\mathbf{Y}}\right)={\rm E}\left[({\mathbf{Y}}\otimes {\mathbf{Y}}) {\mathbf{Y}}^\top\right], \nonumber\\
M_4({\mathbf{Y}})&=&{\rm E}\left({\mathbf{Y}}\otimes {\mathbf{Y}}^\top \otimes {\mathbf{Y}}\otimes {\mathbf{Y}}^\top\right)={\rm E}\left[\left({\mathbf{Y}} {\mathbf{Y}}^\top\right) \otimes \left({\mathbf{Y}} {\mathbf{Y}}^\top\right)\right].
\end{eqnarray*}
The corresponding central moments of ${\mathbf{Y}}$ are then
\begin{eqnarray*}
\overline{M}_1({\mathbf{Y}})&=&{\mathbf{0}},~~~~~~~~~
\overline{M}_2({\mathbf{Y}})={\rm E}\left\{[{\mathbf{Y}}-{\rm E}({\mathbf{Y}})]\otimes [{\mathbf{Y}}-{\rm E}({\mathbf{Y}})]^\top \right\}=\textrm{var}({\mathbf{Y}}),\nonumber\\
\overline{M}_3({\mathbf{Y}})&=&{\rm E}\left\{[{\mathbf{Y}}-{\rm E}({\mathbf{Y}})]\otimes [{\mathbf{Y}}-{\rm E}({\mathbf{Y}})]^\top \otimes [{\mathbf{Y}}-{\rm E}({\mathbf{Y}})]\right\},\nonumber\\
\overline{M}_4({\mathbf{Y}})&=&{\rm E}\left\{\left([{\mathbf{Y}}-{\rm E}({\mathbf{Y}})] [{\mathbf{Y}}-{\rm E}({\mathbf{Y}})]^\top\right) \otimes \left([{\mathbf{Y}}-{\rm E}({\mathbf{Y}})] [{\mathbf{Y}}-{\rm E}({\mathbf{Y}})]^\top\right)\right\}.
\end{eqnarray*}
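As a quick numerical check of the layout of these Kronecker-product moments, they can be approximated by Monte Carlo along the following lines (illustrative only; the distribution of ${\mathbf{Y}}$ below is arbitrary and is not the $\mathcal{MMN}$ model):
\begin{verbatim}
## Monte Carlo approximation of M2(Y) and M3(Y) (layout illustration only).
set.seed(1)
n <- 1e4
Y <- cbind(rexp(n), rnorm(n))                   # any bivariate Y with some skewness
M2 <- crossprod(Y) / n                          # E(Y Y^T), a 2 x 2 matrix
M3 <- Reduce("+", lapply(seq_len(n), function(i)
        kronecker(Y[i, ], Y[i, ]) %*% t(Y[i, ]))) / n   # E[(Y kron Y) Y^T], 4 x 2
dim(M3)
\end{verbatim}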
\noindent{\bf Note 2:} We know that for any multivariate random vector ${\mathbf{Y}}$, the central moments of third and fourth orders are related to the non-central moments by the following relationships (see, for example, \cite{Kollo and Srivastava (2004)} and \cite{Kollo and von Rosen (2005)}):
\begin{eqnarray}
\overline{M}_3({\mathbf{Y}}) &=& M_3({\mathbf{Y}}) - M_2({\mathbf{Y}}) \otimes {\rm E}({\mathbf{Y}}) - {\rm E}({\mathbf{Y}}) \otimes M_2({\mathbf{Y}}) - \mathrm{vec}(M_2({\mathbf{Y}})){\rm E}({\mathbf{Y}})^\top+ 2{\rm E}({\mathbf{Y}}){\rm E}({\mathbf{Y}})^\top \otimes {\rm E}({\mathbf{Y}}),\label{M3-M3bar}\\
\overline{M}_4({\mathbf{Y}}) &=& M_4({\mathbf{Y}}) - (M_3({\mathbf{Y}}) )^\top\otimes {\rm E}({\mathbf{Y}})
- M_3({\mathbf{Y}}) \otimes {\rm E}({\mathbf{Y}}) ^\top -{\rm E}({\mathbf{Y}})\otimes (M_3({\mathbf{Y}}) )^\top -{\rm E}({\mathbf{Y}}) ^\top \otimes M_3({\mathbf{Y}})
\nonumber\\
&&+M_2({\mathbf{Y}})\otimes {\rm E}({\mathbf{Y}})E({\mathbf{Y}})^\top
+ ({\rm E}({\mathbf{Y}}) \otimes {\rm E}({\mathbf{Y}}))(\mathrm{vec}(M_2({\mathbf{Y}})))^\top+{\rm E}({\mathbf{Y}}) \otimes M_2({\mathbf{Y}})\otimes {\rm E}({\mathbf{Y}})^\top \nonumber\\
&&+
{\rm E}({\mathbf{Y}})^\top \otimes M_2({\mathbf{Y}})\otimes {\rm E}({\mathbf{Y}})
+
{\rm E}({\mathbf{Y}})^\top \otimes \mathrm{vec}(M_2({\mathbf{Y}}))\otimes {\rm E}({\mathbf{Y}})^\top+ {\rm E}({\mathbf{Y}}){\rm E}({\mathbf{Y}})^\top \otimes M_2({\mathbf{Y}}) \nonumber\\
&&- 3{\rm E}({\mathbf{Y}}){\rm E}({\mathbf{Y}})^\top \otimes {\rm E}({\mathbf{Y}}){\rm E}({\mathbf{Y}})^\top.\label{M4-M4bar}
\end{eqnarray}
Upon using the relations for affine transformations of moments, we then obtain
\begin{eqnarray}
M_1({\mathbf{AY}})&=&{\rm E}({\mathbf{AY}})={\mathbf{A}}{\rm E}({\mathbf{Y}}), \label{M1-AY}~~~~~~~~~M_2({\mathbf{AY}})={\rm E}\left({\mathbf{AY}}\otimes ({\mathbf{AY}})^\top\right)={\mathbf{A}}{\rm E}\left({\mathbf{Y}}\otimes {\mathbf{Y}}^\top\right) {\mathbf{A}}^\top,\\
M_3({\mathbf{AY}})&=&{\rm E}[({\mathbf{AY}}\otimes {\mathbf{AY}}) ({\mathbf{AY}})^\top]={\rm E}\left\{\mathrm{vec}\left({\mathbf{AY}}({\mathbf{AY}})^\top\right)({\mathbf{AY}})^\top\right\} =({\mathbf{A}}\otimes {\mathbf{A}})M_3({\mathbf{Y}}) {\mathbf{A}}^\top, \label{M3-AY}\\
M_4({\mathbf{AY}})&=&{\rm E}\left({\mathbf{AY}} ({\mathbf{AY}})^\top \otimes {\mathbf{AY}} ({\mathbf{AY}})^\top\right)
=({\mathbf{A}}\otimes {\mathbf{A}})M_4({\mathbf{Y}})\left({\mathbf{A}}\otimes {\mathbf{A}}\right)^\top.\label{M4-AY}
\end{eqnarray}
\begin{proof}[\bf Proof of Theorem \ref{Thm1}.]
The moment generating function of ${\mathbf{A}}^\top{\mathbf{X}} $ can be written as
\begin{eqnarray*}
M_{{\mathbf{A}}^\top{\mathbf{X}} }({\mathbf{t}})=M_{\mathbf{X}}({\mathbf{A}}{\mathbf{t}})=
e^{\frac{1}{2}{\mathbf{t}}^\top\left({\mathbf{A}}^\top{\overline{\mathbf{\Omega}}}{\mathbf{A}} -{\mathbf{A}}^\top{\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top{\mathbf{A}} \right) {\mathbf{t}}}
M_U\left({\mathbf{t}}^\top {\mathbf{A}}^\top{\boldsymbol{\delta}};{\boldsymbol{\nu}}\right).
\end{eqnarray*}
Upon using the uniqueness property of the moment generating function, the required result is obtained.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{Thm2}.] The moment generating function of ${\mathbf{X}}=\textbf{c}+{\mathbf{A}}^\top {\mathbf{Y}}$ can be written as
\begin{eqnarray*}
M_{{\mathbf{X}} }({\mathbf{t}})
&=&e^{{\mathbf{t}}^\top \textbf{c}}M_{\mathbf{Y}}({\mathbf{A}}{\mathbf{t}})=
e^{ {\mathbf{t}}^\top {\boldsymbol{\xi}}_{{\mathbf{X}} }+
\frac{1}{2}{\mathbf{t}}^\top \left({\mathbf{\Omega}}_{{\mathbf{X}} }- {\boldsymbol{\omega}}_{{\mathbf{X}} }{\boldsymbol{\delta}}_{{\mathbf{X}} } {\boldsymbol{\delta}}_{{\mathbf{X}} }^\top{\boldsymbol{\omega}}_{{\mathbf{X}} }\right){\mathbf{t}}} M_U\left({\mathbf{t}}^\top {\boldsymbol{\omega}}_{{\mathbf{X}} }{\boldsymbol{\delta}}_{{\mathbf{X}} } ;{\boldsymbol{\nu}}\right),
\end{eqnarray*}
which completes the proof.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{Thm3}.]
We have introduced the $\mathcal{MMN}$ distribution by assuming ${\mathbf{\Omega}}>0$ through the factorization ${\mathbf{\Omega}}={\boldsymbol{\omega}} \overline{{\mathbf{\Omega}}}{\boldsymbol{\omega}}$. The matrix $\overline{{\mathbf{\Omega}}}$ is a positive definite non-singular matrix if and only if there exists some invertible (non-singular) matrix ${\mathbf{C}}$ such that $\overline{{\mathbf{\Omega}}}=\mathbf{C}^\top\mathbf{C}$. If ${\boldsymbol{\delta}}\neq \textbf{0}$, there exists an orthogonal
matrix $\mathbf{P}$ with the first column being proportional to $\mathbf{C}\overline{{\mathbf{\Omega}}}^{-1}{\boldsymbol{\delta}}$, while for ${\boldsymbol{\delta}}=\textbf{0}$
we set $\mathbf{P}=\mathbf{I}_p$. Finally, define ${\mathbf{A}}_*=\left({\mathbf{C}}^{-1} \mathbf{P}\right)^\top{\boldsymbol{\omega}}^{-1}$. By using Theorem \ref{Thm2},
we see that $\textbf{Z}^*={\mathbf{A}}_*({\mathbf{Y}}-{\boldsymbol{\xi}})$ has the stated distribution with ${\boldsymbol{\delta}}_{{\mathbf{Z}}^*}=({\mathbf{\delta}}_*, 0, \ldots, 0)^\top$.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{Thm5-mode}.]
First, consider the mode of the
corresponding canonical variable $Z^*\sim \mathcal{MMN}_p({\boldsymbol{0}},{\mathbf{I}}_p, {\boldsymbol{\delta}}_{\textbf{Z}^*}; H)$. We find this mode by solving the following equations with respect to $z_1, z_2, \ldots, z_p$:
\begin{eqnarray*}
\frac{\partial f_{Z_1^*}(z_1)}{\partial z_1}=0,~~~~~z_i f_{Z_1^*}(z_1)=0,~~~ i\in\{2,3, \ldots, p\}.
\end{eqnarray*}
The last $p - 1$ equations are fulfilled when $z_i= 0$, while the root of the first equation corresponds to the mode, $m_0^*$ say, of the $\mathcal{MMN}_1(0,1,{\mathbf{\delta}}_*; H)$ distribution. Therefore, the mode of $\textbf{Z}^*$ is $\textbf{M}_0^*=(m_0^*, 0, \ldots, 0)^\top = \frac{m_0^*}{{\mathbf{\delta}}_*}{\boldsymbol{\delta}}_{\textbf{Z}^*}$. From Theorem \ref{Thm3}, we can write ${\mathbf{Y}}={\boldsymbol{\xi}}+{\boldsymbol{\omega}}\mathbf{C}^\top \mathbf{P}\textbf{Z}^* $
and ${\boldsymbol{\delta}}_{\textbf{Z}^*}=\mathbf{P}^\top \mathbf{C}\overline{{\mathbf{\Omega}}}^{-1}{\boldsymbol{\delta}}$. As the
mode is equivariant with respect to affine transformations, the mode of $\textbf{Y}$ is
\begin{eqnarray*}
\textbf{M}_0={\boldsymbol{\xi}}+\frac{m_0^*}{{\mathbf{\delta}}_*}{\boldsymbol{\omega}}\mathbf{C}^\top \mathbf{P}{\boldsymbol{\delta}}_{\textbf{Z}^*}
={\boldsymbol{\xi}}+\frac{m_0^*}{{\mathbf{\delta}}_*}{\boldsymbol{\omega}}\mathbf{C}^\top \mathbf{P}\mathbf{P}^\top \mathbf{C}\overline{{\mathbf{\Omega}}}^{-1}{\boldsymbol{\delta}}
={\boldsymbol{\xi}}+\frac{m_0^*}{{\mathbf{\delta}}_*}{\boldsymbol{\omega}}{\boldsymbol{\delta}}.
\end{eqnarray*}
Hence, the result.
\end{proof}
\section*{Appendix B. Computation of Different Measures of Skewness }
\subsection*{B1. Mardia Measure of Skewness}\label{Sec-Mardia}
Mardia \cite{Mardia (1970), Mardia (1974)} presented a multivariate measure of skewness of an arbitrary $p$-dimensional distribution $F$ with mean vector ${\boldsymbol{\mu}}$ and covariance matrix $\mathbf{\Delta}$. Let
${\mathbf{X}} $ and ${\mathbf{Y}}$ be two independent and identically distributed random vectors
from distribution $F$. Then, the measure of skewness is
\begin{eqnarray}
\beta_{1,p}&=&{\rm E}\left[\left\{({\mathbf{X}}-{\boldsymbol{\mu}})^\top {\mathbf{\Delta}}^{-1}({\mathbf{Y}}-{\boldsymbol{\mu}})\right\}^3\right],\label{Mardia-SK}
\end{eqnarray}
where ${\boldsymbol{\mu}}={\rm E}({\mathbf{X}})$ and ${\mathbf{\Delta}}=\textrm{var}({\mathbf{X}})$.
The Mardia measure of skewness is location and scale invariant (see \cite{Mardia (1970)}). From Theorems \ref{Thm2} and \ref{Thm3}, the $\mathcal{MMN}$ family is closed under affine transformations and has a canonical form. If ${\mathbf{Y}}\sim \mathcal{MMN}_p({\boldsymbol{\xi}},{\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$,
there exists a linear transformation
$\textbf{Z}^*={\mathbf{A}}_*({\mathbf{Y}}-{\boldsymbol{\xi}})$ such that $\textbf{Z}^*\sim \mathcal{MMN}_p({\boldsymbol{0}},{\mathbf{I}}_p, {\boldsymbol{\delta}}_{\textbf{Z}^*}; H)$, where at most one component of ${\boldsymbol{\delta}}_{\textbf{Z}^*}$ is non-zero.
Without loss of any generality, we take the first component of $\textbf{Z}^*$ to be skewed and denote it by $Z_1^*$, and so for computing the measure in (\ref{Mardia-SK}), we can use the canonical form of the $\mathcal{MMN}$ family. Let ${\mathbf{X}}^*$ and ${\mathbf{Y}}^*$ be
two independent and identically distributed random vectors
from $\mathcal{MMN}_p({\boldsymbol{0}},{\mathbf{I}}_p, {\boldsymbol{\delta}}_{\textbf{Z}^*}; H)$. Now,
by using
${\boldsymbol{\mu}}^*={\rm E}({\mathbf{X}}^*)={\rm E}({\mathbf{Y}}^*)= {\rm E}(U){\boldsymbol{\delta}}_{\textbf{Z}^*}$ and
${\boldsymbol{\Delta}}^*=\textrm{var}({\mathbf{X}}^*)=\textrm{var}({\mathbf{Y}}^*)={\mathbf{I}}_p+ \left(\textrm{var}(U)-1\right){\boldsymbol{\delta}}_{\textbf{Z}^*}{\boldsymbol{\delta}}_{\textbf{Z}^*}^\top$ in (\ref{Mardia-SK}), the Mardia measure of skewness can be expressed as
\begin{eqnarray}\label{Mardia-skew}
\beta_{1,p}&=&{\rm E}\left[\left\{({\textbf{X}^*}-{\boldsymbol{\mu}}^*)^\top [{\boldsymbol{\Delta}}^*]^{-1}(\textbf{Y}^*-{\boldsymbol{\mu}}^*)\right\}^3\right] =\left({\rm E}\left[\frac{Z_1^*-\delta_*}{\sqrt{\textrm{var}(Z_1^*)}}\right]^3\right)^2 =(\gamma_1^*)^2,
\end{eqnarray}
where $\gamma_1^*$ is the univariate skewness of $Z_1^*\sim \mathcal{MMN}_1(0,1,{\mathbf{\delta}}_*; H)$ of the canonical form (see Theorem \ref{Thm3}).
An explicit formula of $\gamma_1^*$ can be found in \cite{Negarestani et al. (2019)} for the univariate case.
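As a numerical illustration, $\beta_{1,p}=(\gamma_1^*)^2$ can be checked by simulating the skewed canonical component. The following \textsf{R} sketch does this for the $\mathcal{MMNE}$ case, using $Z_1^*=\delta_* U+\sqrt{1-\delta_*^2}\,Z$ with $U$ standard exponential and $Z$ standard normal, which is the form of the canonical component implied by the moment generating function in Appendix A:
\begin{verbatim}
## Monte Carlo check of beta_{1,p} = (gamma_1^*)^2 for an MMNE case (sketch).
## Case 3 of the bivariate table (tab.skew2var): Omega diagonal, delta = (0, 0.995),
## so that delta_* = 0.995 and the tabulated value is beta_{1,p} = 3.881.
set.seed(1)
delta_star <- 0.995
u  <- rexp(1e6)                                          # U ~ Exp(1)
z1 <- delta_star * u + sqrt(1 - delta_star^2) * rnorm(1e6)
g1 <- mean((z1 - mean(z1))^3) / sd(z1)^3                 # gamma_1^* of Z_1^*
g1^2                                                     # approximately 3.88
\end{verbatim}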
\subsection*{B2. Malkovich-Afifi Measure of Skewness}\label{Sec-Malkovich-Afifi-skew}
Malkovich and Afifi \cite{Malkovich and Afifi (1973)} proposed a measure of multivariate skewness as a different type of generalization of the univariate measure. By denoting the unit $p$-dimensional sphere by ${\phi}_p=\left\{{\mathbf{u}}\in \mathbb{R}^p; ||{\mathbf{u}}||=1 \right\}$, for ${\mathbf{u}}\in {\phi}_p$, the usual univariate measure of skewness in the ${\mathbf{u}}$-direction is
\begin{eqnarray}\label{malkovich1skew}
\beta_{1}({\mathbf{u}})&=&\frac{\left[ {\rm E}\left\{{\mathbf{u}}^\top({\mathbf{Y}}-{\rm E}({\mathbf{Y}}))\right\}^3 \right]^2}{\left[\textrm{var}({\mathbf{u}}^\top{\mathbf{Y}})\right]^3},
\end{eqnarray}
and so the Malkovich-Afifi multivariate extension of it is defined as
\begin{eqnarray}\label{malkovich2skew}
\beta_1^{*}= \sup_{{\mathbf{u}}\in {\phi}_p} \beta_{1}({\mathbf{u}}).
\end{eqnarray}
The Malkovich-Afifi measure of multivariate skewness is also location and scale invariant.
Malkovich and Afifi \cite{Malkovich and Afifi (1973)} showed that if ${\mathbf{Z}}$ is the standardized variable ${\mathbf{Z}}={\mathbf{\Delta}}^{-1/2}({\mathbf{Y}}-{\boldsymbol{\mu}})$, an equivalent
version is
$
\beta_1^{*}= \sup_{{\mathbf{u}}\in {\phi}_p} \left({\rm E}\left[({\mathbf{u}}^\top{\mathbf{Z}})^3 \right] \right)^2.
$
For obtaining $\beta_1^{*}$ for the $\mathcal{MMN}$ family, it is convenient to use the canonical form.
If ${\mathbf{Y}} \sim \mathcal{MMN}_p({\boldsymbol{\xi}}, {\mathbf{\Omega}}, {\boldsymbol{\delta}}; H)$, there exists a linear transformation
$\textbf{Z}^*={\mathbf{A}}_*({\mathbf{Y}}-{\boldsymbol{\xi}})$ such that $\textbf{Z}^*\sim \mathcal{MMN}_p({\boldsymbol{0}},{\mathbf{I}}_p, {\boldsymbol{\delta}}_{\textbf{Z}^*}; H)$, where at most one component of ${\boldsymbol{\delta}}_{\textbf{Z}^*}$ is not zero.
This means that the Malkovich-Afifi index, which is the maximum of the univariate skewness measures among all
the directions of the unit sphere, will be, for $\textbf{Z}^*$, the index of asymmetry in the only skew direction, if there is one (without loss of generality, we take the first component of $\textbf{Z}^*$ to be skewed and denote it by $Z_1^*$):
\begin{eqnarray}\label{beta1star}
\beta_{1}^*&=&\sup_{{\mathbf{u}}\in {\phi}_p} \frac{\left[ {\rm E}\left\{{\mathbf{u}}^\top({\mathbf{Y}}-{\rm E}({\mathbf{Y}}))\right\}^3 \right]^2}{\left[\textrm{var}({\mathbf{u}}^\top{\mathbf{Y}})\right]^3}=\frac{\left[ {\rm E}\left\{Z_1^*-{\rm E}(Z_1^*)\right\}^3 \right]^2}{\left[\textrm{var}(Z_1^*)\right]^3}=(\gamma_1^*)^2.
\end{eqnarray}
As in the case of the Mardia index, we have used $\gamma_1^*$ to denote the univariate skewness measure of the unique (if any) skewed component of the canonical
form ${\mathbf{Z}}^*$. As this measure is location and scale invariant, it is invariant under linear transformations, and consequently (\ref{beta1star}) is also
the Malkovich-Afifi measure for ${\mathbf{Y}}$, and thus it is the same as the Mardia index in (\ref{Mardia-skew}).
\subsection*{B3. Srivastava Measure of Skewness}\label{Sec-Srivastava-skew}
Using principal components ${\mathbf{F}}={\mathbf{\Gamma}}^\top {\mathbf{Y}}$, Srivastava \cite{Srivastava (1984)} developed a measure of skewness for the multivariate vector ${\mathbf{Y}}$, where ${\mathbf{\Gamma}}=({\boldsymbol{\gamma}}_1, \ldots, {\boldsymbol{\gamma}}_p)$ is the matrix of eigenvectors of the covariance matrix ${\mathbf{\Delta}}$, that is, an orthogonal matrix such that ${\mathbf{\Gamma}}^\top{\mathbf{\Delta}}{\mathbf{\Gamma}}=\mathbf{\Lambda}$, and $\lambda_1, \ldots, \lambda_p$ are the corresponding eigenvalues. Srivastava's measure of skewness for ${\mathbf{Y}}$ may then be presented as
\begin{eqnarray}
\beta_{1p}^2&=&\frac{1}{p}\sum_{i=1}^{p} \left\{\frac{{\rm E}(F_i-\theta_i)^3}{\lambda_i^{3/2}}\right\}^2=\frac{1}{p}\sum_{i=1}^{p} \left\{\frac{{\rm E}[{\boldsymbol{\gamma}}_i^\top({\mathbf{Y}}-{\boldsymbol{\mu}})]^3}{\lambda_i^{3/2}}\right\}^2,\label{Sriv-skew}
\end{eqnarray}
where $F_i={\boldsymbol{\gamma}}_i^\top {\mathbf{Y}}$ and $\theta_i={\boldsymbol{\gamma}}_i^\top {\boldsymbol{\mu}}$.
The measure in (\ref{Sriv-skew}) is based on central moments of third order ${\rm E}[{\boldsymbol{\gamma}}_i^\top({\mathbf{Y}}-{\boldsymbol{\mu}})]^3$.
For obtaining this measure for the $\mathcal{MMN}$ distribution, we only need to obtain the non-central moments up to third
order. Upon using the relations in (\ref{M3-M3bar})-(\ref{M3-AY}), we can obtain the third central moment by replacing ${\mathbf{A}}$ by ${\boldsymbol{\gamma}}_i$ in (\ref{noncentralzzEE}).
\subsection*{B4. M\'{o}ri-Rohatgi-Sz\'{e}kely Measure of Skewness}\label{Sec-Mori-skew}
M\'{o}ri {\rm et al.} \cite{Mori et al. (1993)} suggested a vectorial measure of skewness as a $p$-dimensional vector.
If ${\mathbf{Z}}={\mathbf{\Delta}}^{-1/2}({\mathbf{Y}}-{\boldsymbol{\mu}})=(Z_1, \ldots, Z_p)^\top$ is the standardized vector, this measure
can be written in terms of coordinates of ${\mathbf{Z}}$ as
\begin{eqnarray}\label{Moriii-skew}
s({\mathbf{Y}})={\rm E}(\|{\mathbf{Z}} \|^2 {\mathbf{Z}})={\rm E}\left(({\mathbf{Z}}^\top{\mathbf{Z}}) {\mathbf{Z}}\right)=\sum_{i=1}^{p}{\rm E}\left(Z_i^2{\mathbf{Z}}\right) =\left(\sum_{i=1}^{p}{\rm E}\left(Z_i^2 Z_1\right), \ldots,\sum_{i=1}^{p} {\rm E}\left(Z_i^2 Z_p\right) \right)^\top.
\end{eqnarray}
All the quantities involved in (\ref{Moriii-skew}) are specific non-central moments of third order of ${\mathbf{Z}}$. When ${\mathbf{Y}}$ has a multivariate $\mathcal{MMN}$ distribution, ${\mathbf{Z}}$ still has an $\mathcal{MMN}$ distribution, and so we can use once again the expressions in Theorem \ref{ThmLem2}. Now, ${\mathbf{Z}}={\mathbf{\Delta}}^{-1/2}({\mathbf{Y}}-{\boldsymbol{\mu}})=(Z_1, \ldots, Z_p)^\top$ has the distribution $\mathcal{MMN}_p({\boldsymbol{\xi}}_{\mathbf{Z}},{\mathbf{\Omega}}_{\mathbf{Z}},{\boldsymbol{\delta}}_{\mathbf{Z}}; H)$.
Furthermore, upon replacing ${\mathbf{A}}$ by ${\mathbf{\Delta}}^{-1/2}$ in
the third central moment in (\ref{noncentralzzEE}), using (\ref{M3-M3bar})-(\ref{M3-AY}), and the moments in Theorem \ref{ThmLem2}, we can compute $s({\mathbf{Y}})$ in (\ref{Moriii-skew}).
\subsection*{B5. Kollo Measure of Skewness }
Kollo \cite{Kollo (2008)} noticed that M\'{o}ri-Rohatgi-Sz\'{e}kely skewness measure $s({\mathbf{Y}})$ does not include all third-order mixed moments. To include all mixed
moments of the third order, he defined a skewness vector of ${\mathbf{Y}}$ as
\begin{eqnarray}\label{Kollo-skew}
b({\mathbf{Y}})&=&{\rm E}\left(\sum_{i,j=1}^{p}(Z_i Z_j){\mathbf{Z}}\right)=\left(\sum_{i,j=1}^{p}{\rm E}\left[(Z_i Z_j) Z_1\right], \ldots,\sum_{i,j=1}^{p}{\rm E}\left[(Z_i Z_j) Z_p\right]\right)^\top.
\end{eqnarray}
The required moments can be obtained from Theorem \ref{ThmLem2} and the corresponding measure in (\ref{Kollo-skew})
can then be computed.
\subsection*{B6. Balakrishnan-Brito-Quiroz Measure of Skewness}\label{Sec-Bala-skew}
When reporting the skewness of a univariate distribution, it is customary to indicate
the direction of skewness by referring to skewness ``to the left'' (negative) or ``to the right'' (positive).
It seems natural that, in the multivariate setting, one would also like to indicate
a direction for the skewness of a distribution.
Both the Mardia and Malkovich-Afifi measures give an overall view of skewness without any specific reference to the direction of skewness. For
this reason, \cite{Balakrishnan et al. (2007)} modified the Malkovich-Afifi measure to produce an overall vectorial measure of
skewness as
\begin{eqnarray}\label{T-index}
{\mathbf{T}}=\int_{{\phi}_p} {\mathbf{u}}c_1({\mathbf{u}})d \lambda({\mathbf{u}}),
\end{eqnarray}
where $c_1({\mathbf{u}})={\rm E}\left[\left({\mathbf{u}}^\top{\mathbf{Z}} \right)^3 \right] $ is a signed measure of skewness of the standardized variable
${\mathbf{Z}}={\mathbf{\Delta}}^{-1/2}({\mathbf{Y}}-{\boldsymbol{\mu}})$ in the direction
of ${\mathbf{u}}$, and $\lambda$ denotes the rotationally invariant probability measure on the unit $p$-dimensional sphere
${\phi}_p=\left\{{\mathbf{u}}\in \mathbb{R}^p; ||{\mathbf{u}}||=1 \right\}$.
From \cite{Balakrishnan et al. (2007)} and \cite{Balakrishnan and Scarpa (2012)}, it turns out that the computation of ${\mathbf{T}}$ is straightforward and, when the distribution of ${\mathbf{Y}}$ is absolutely continuous with respect to Lebesgue measure and symmetric
(in the broad sense specified below), it has, under some moment assumptions, a
Gaussian asymptotic distribution with a limiting covariance matrix, ${\mathbf{\Sigma}}_T$, that can be
consistently estimated from the $Z_i$ sample.
If $c_1({\mathbf{u}})$ is negative, it indicates skewness in the direction
of $-{\mathbf{u}}$, while ${\mathbf{u}}c_1({\mathbf{u}})$ provides a vectorial index of skewness in the ${\mathbf{u}}$ (or $-{\mathbf{u}}$) direction. Summation of these vectors over ${\mathbf{u}}$ (in the form of an integral) will then yield an overall vectorial measure of skewness presented earlier in (\ref{T-index}).
For obtaining a single measure, \cite{Balakrishnan et al. (2007)} proposed
the quantity ${\mathbf{Q}}={\mathbf{T}}^\top {\mathbf{\Sigma}}_{\mathbf{T}}^{-1} {\mathbf{T}}$, where ${\mathbf{T}}$ is as in (\ref{T-index}) and ${\mathbf{\Sigma}}_{\mathbf{T}}$ is the covariance matrix of ${\mathbf{T}}$.
However, the covariance matrix ${\mathbf{\Sigma}}_{\mathbf{T}}$ depends on the moments of sixth order. Sixth order moments in this family are not in explicit form, and so as done in \cite{Balakrishnan and Scarpa (2012)}, by replacing ${\mathbf{\Sigma}}_{\mathbf{T}}$ by ${\mathbf{\Sigma}}_{\mathbf{Z}}$, we obtain ${\mathbf{Q}}^*={\mathbf{T}}^\top {\mathbf{\Sigma}}_{\mathbf{Z}}^{-1} {\mathbf{T}}$, to provide a
reasonable measure of overall skewness.
In what follows, the evaluation of ${\mathbf{T}}$ requires the integrals of some monomials over the unit sphere ${\phi}_p$. Following \cite{Balakrishnan et al. (2007)}, let $u_j$ be the $j$-th coordinate of a point ${\mathbf{u}}\in{\phi}_p$. Then, the values of the
integrals
\begin{eqnarray*}
J_4=\int_{{\phi}_p} u_j^4 d\lambda({\mathbf{u}})=\frac{3}{p(p+2)},~~~~~~~J_{2,2}=\int_{{\phi}_p} u_j^2 u_i^2 d\lambda({\mathbf{u}})=\frac{1}{p(p+2)},
\end{eqnarray*}
for $j \neq i, 1 \leq j, i \leq p$, are obtained using Theorem 3.3 of \cite{Fang et al. (1990)}. We see that the above
integrals do not depend on the particular choices of $j$ and $ i$. Therefore, the $r$-th
coordinate of ${\mathbf{T}}$ is simply
$
{\mathbf{T}}_r=J_4 {\rm E}\left(Z_r^3\right)+ 3 \sum_{i\neq r} J_{2,2} {\rm E}\left(Z_i^2 Z_r\right).
$
So, we must obtain the moments $ {\rm E}\left(Z_r^3\right)$ and ${\rm E}\left(Z_i^2 Z_r\right)$. The required moments can be obtained as
$
{\rm E}\left(Z_i^3\right)={\mathbf{M}}_3({\mathbf{Z}})[(i - 1)p + i, i] \mbox{~ and ~~}
{\rm E}\left(Z_i^2 Z_j\right)={\mathbf{M}}_3({\mathbf{Z}})[(i - 1)p + i, j],
$
where ${\mathbf{M}}_3({\mathbf{Z}})[., .]$ denotes the elements of ${\mathbf{M}}_3({\mathbf{Z}})$, third moment of the $\mathcal{MMN}_p({\boldsymbol{\xi}}_{\mathbf{Z}},{\mathbf{\Omega}}_{\mathbf{Z}},{\boldsymbol{\delta}}_{\mathbf{Z}}; H)$ distribution.
Upon using the above moments, we can obtain the elements of ${\mathbf{T}}$ as follows:
\begin{eqnarray}\label{Trrr}
{\mathbf{T}}_r=\frac{3}{p(p+2)} {\rm E}\left(Z_r^3\right)+ 3 \sum_{i\neq r} \frac{1}{p(p+2)} {\rm E}\left(Z_i^2 Z_r\right).
\end{eqnarray}
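For instance, when $p=2$ we have $J_4=\frac{3}{8}$ and $J_{2,2}=\frac{1}{8}$, so that
\begin{eqnarray*}
{\mathbf{T}}_1=\frac{3}{8}\left[{\rm E}\left(Z_1^3\right)+{\rm E}\left(Z_2^2 Z_1\right)\right],\qquad
{\mathbf{T}}_2=\frac{3}{8}\left[{\rm E}\left(Z_2^3\right)+{\rm E}\left(Z_1^2 Z_2\right)\right].
\end{eqnarray*}
As a simple check of the constants, note that $pJ_4+p(p-1)J_{2,2}=\frac{3}{p+2}+\frac{p-1}{p+2}=1$ for every $p$, which agrees with $\int_{{\phi}_p}\|{\mathbf{u}}\|^4 d\lambda({\mathbf{u}})=1$.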
\subsection*{B7. Isogai Measure of Skewness}\label{Sec-Isogai-skew}
Isogai \cite{Isogai (1982)} considered an overall extension of the Pearson measure of skewness to the
multivariate case in the form
\begin{eqnarray*}
S_I=\left({\boldsymbol{\mu}}-{\mathbf{M}}_0\right)^\top g^{-1}\left({\mathbf{\Delta}}\right)\left({\boldsymbol{\mu}}-{\mathbf{M}}_0\right),
\end{eqnarray*}
where ${\boldsymbol{\mu}}$, ${\mathbf{M}}_0$ and ${\mathbf{\Delta}}$ are the mean, mode and the covariance matrix of $\textbf{Y}$, respectively. The function $g\left({\mathbf{\Delta}}\right)$ is an ``appropriate'' function of the covariance matrix. To derive this measure of skewness, we need to obtain the mode of the $\mathcal{MMN}$ distribution, but the uniqueness
of the mode for the family of mean mixture of normal distributions is an open problem. For obtaining this measure, we choose $g(.)$ to be the identity function. This measure is location and scale invariant, and so by using the canonical form of the $\mathcal{MMN}$ distribution, we get
\begin{eqnarray*}
S_I=\frac{\left[{\mathbf{\delta}}_* {\rm E}(U)-m_0^* \right]^2}{1+{\mathbf{\delta}}_*^2[\textrm{var}(U)-1]},
\end{eqnarray*}
where ${\mathbf{\delta}}_*=\left({\boldsymbol{\delta}}^\top\overline{{\mathbf{\Omega}}}^{-1} {\boldsymbol{\delta}}\right)^{1/2}$ and $m_0^*$
is the mode of the single scalar $\mathcal{MMN}$ distribution in the canonical form. This index is essentially the Mahalanobis distance between the null vector and the vector ${\rm E}(\textbf{Y})-{\mathbf{M}}_0$, and it is indeed location and scale invariant.
Another vectorial measure has been given by \cite{Balakrishnan and Scarpa (2012)} as $S_C={\boldsymbol{\omega}}^{-1}\left({\boldsymbol{\mu}}-{\mathbf{M}}_0\right)$, which is a natural choice to characterize the direction of the asymmetry of the multivariate $\mathcal{SN}$ distribution. Using the same reasoning for the $\mathcal{MMN}$ distribution, we can consider
$S_C=\left({\rm E}(U)-\frac{m_0^*}{{\mathbf{\delta}}_*}\right){\boldsymbol{\delta}}$,
and so, the direction of ${\boldsymbol{\delta}}$ can be regarded as a measure of vectorial skewness for the $\mathcal{MMN}$ distribution.
\section*{References}
\section{Introduction and Statement of Result}\label{introduction}
It is a famous conjecture that there are infinitely many primes of the form $n^2 + 1.$ This seemingly innocuous question has a long history and is the motivation for many deep results. Following this problem, one can see that any prime of the form $n^2 + 1$ is also of the form $a^2 + b^2$, and one may ask the more general question of how many primes there are of this form, with $a$ small in comparison to $\sqrt{p}$. In answer to this, Kubilius proved in 1952 that there are infinitely many primes $p$ of the form $p = a^2 + b^2,$ where $a = O(p^{25/64})$, a result that has been improved several times since. The best result of this form to date, due to Harman and Lewis \cite{H-L}, states that there are infinitely many primes $p = a^2 + b^2$ with $a = O(p^\theta)$, for $\theta < 0.119.$
We proceed along a different route. We observe that if $p\geq 3$ is prime, then there exist $a,b\in{\mathbb Z}$ such that $p=a^2+b^2$ if and only if $p\equiv 1\pmod{4}$. The representation of $p$ as $a^2+b^2$ is unique provided that $a$ is odd, $a+b\equiv 1\pmod{4}$, and $b>0$. This representation is closely related to the elliptic curve over ${\mathbb Q}$ defined by
\[
E:y^2=x^3-x,
\]
which is closely tied to the ``congruent number problem'' of classifying the integers $n\geq 1$ that are the areas of triangles with rational side lengths. If $p=a^2+b^2$ with $a$ and $b$ chosen as above, then $p+1-\#E(\mathbb{F}_p)=2a$, where $\mathbb{F}_p$ is the finite field of $p$ elements and $E(\mathbb{F}_p)$ is the group of ${\mathbb F}_p$-points on the modulo $p$ reduction of $E$. By modularity, there exists a holomorphic cusp form $f_E(z)$ of weight $2$ and level $32$ such that
\[
f_E(z) = \sum_{n=1}^{\infty}a_E(n)e^{2\pi i n z},
\]
where $a_E(p)=p+1-\#E({\mathbb F}_p)$ for primes $p\geq 3$, and the number $a_E(n)$ at an integer $n\geq 2$ is generated from $a_E(p)$ by multiplicativity. Moreover, the function $f_E(z)$ is an eigenfunction of all of the Hecke operators, and $a_E(1)=1$. Finally, we can express $f_E(z)$ explicitly as an $\eta$-quotient, namely
\[
f_E(z) = \eta(4z)^2\eta(8z)^2
\]
It is important to note that the primes $p\equiv 3\pmod{4}$ are precisely the primes which are inert in ${\mathbb Q}(i)$. By the prime number theorem for arithmetic progressions, the asymptotic density of such primes is $1/2$. In Section 2, we will define the $\eta$-quotient and use its combinatorial properties to prove the above claim on the relationship between $p+1-\#E(\mathbb{F}_p)$ and primes of the form $p=a^2+b^2$.
More generally, let
\[
E:y^2 = x^3+ax^2+bx+c
\]
be an elliptic curve over ${\mathbb Q}$ of conductor $N$ that has complex multiplication by an imaginary quadratic field $K$. Such a field $K$ necessarily has class number 1, so all integral ideals are principally generated and all elements of $K$ have a unique factorization into prime elements of $K$. Define $a_E(p) = p+1-\#E({\mathbb F}_p)$. If $p$ is inert in $K$, then $a_E(p)=0$. It follows from work of Hasse that if $p$ is prime, then $|a_E(p)| < 2\sqrt{p}$. Hence we can define $\theta_p \in [0, \pi]$ as the solution to
\[
a_E(p) = 2\sqrt{p}\cos\theta_p.
\]
Now, we ask the natural question of how the sequence $\{\theta_p\}$ is distributed in the interval $[0, \pi]$. The answer follows from work of Hecke \cite{hecke}.
\begin{theorem*}
Let $E/{\mathbb Q}$ be an elliptic curve of conductor $N$ that has complex multiplication by an imaginary quadratic field. For a prime $p$, define $\theta_p \in [0, \pi]$ by the equality
\[
p+1-\#E({\mathbb F}_p) = 2\sqrt{p}\cos{\theta_p}.
\]
If $I \subseteq [0, \pi]$ is an open subinterval, then
\begin{equation}
\label{eqn:ST_CM}
\lim_{x \to \infty} \frac{\#\{ p \le x : \theta_p \in I\}}{\#\{p\leq x\}} = \frac{\mathbf{1}_{\frac{\pi}{2}\in I}}{2}+\frac{|I|}{2\pi},\qquad\text{where}\qquad \mathbf{1}_{\frac{\pi}{2}\in I}=\begin{cases}
1&\mbox{if $\frac{\pi}{2}\in I$,}\\
0&\mbox{otherwise.}
\end{cases}
\end{equation}
\end{theorem*}
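For example, taking $I=(0,\pi)$ in \eqref{eqn:ST_CM} gives $\frac{1}{2}+\frac{\pi}{2\pi}=1$, as it must, while any interval $I$ bounded away from $\frac{\pi}{2}$ receives density $\frac{|I|}{2\pi}$; the extra mass $\frac{1}{2}$ at $\theta_p=\frac{\pi}{2}$ accounts for the inert primes, for which $p+1-\#E({\mathbb F}_p)=0$.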
Here, we prove a result that contains several refinements of Hecke's work. First, rather than merely achieving an asymptotic distribution, our result provides a variant of \eqref{eqn:ST_CM} which also explicates the rate of convergence in the limit. Second, our result provides strong uniformity in the length of the interval $I$ in which the angles $\theta_p$ lie, enabling us to look at the small-scale distribution of the angles $\theta_p$ as well as the large-scale distribution. Finally, our result provides strong uniformity in the length of the interval in which the primes themselves lie, enabling us to look at both the small-scale and large-scale distribution of the primes $p\in[x,x+h]$ satisfying $\theta_p\in I$.
\begin{comment}
Let $K = {\mathbb Q}(\sqrt{-n})$ have ${\mathcal O}_K$ as its ring of integers, and
\[
D = \begin{cases} -n & n \equiv 3 \pmod{4} \\ -4n & n \equiv 1, 2\pmod4 \end{cases}
\]
denote the discriminant of $K$ over ${\mathbb Q}.$
For an integral ideal ${\mathfrak a},$ $\Lambda({\mathfrak a})$ is the generalized von Mangoldt function
\[
\Lambda({\mathfrak a}) = \begin{cases} \log{N({\mathfrak p})} & {\mathfrak a} = {\mathfrak p}^{j} \text{ for some prime ideal } {\mathfrak p} \\ 0 & \text{ otherwise }\end{cases}
\]
where $N$ is the absolute norm of an ideal. For each principal ideal ${\mathfrak a} = (a + b\sqrt{-n}),$ define $\arg{{\mathfrak a}} = \arg(a + b\sqrt{-n}).$
Define $g$ as the number of units of $K$.
Let $\chi \xi_{m}$ be the Hecke Grossencharacter defined on principal ideals $(\alpha)$ as
\[
\xi_{m}((\alpha)) = \Big( \frac{\alpha}{|\alpha|} \Big)^{mg}
\]
where $\chi$ is a ray class character.
We have the following prime counting function
\[
\pi_{I, \delta}([x, x+h]) \coloneqq \sum_{\substack{x < p \le x+h \\ \theta_p \in I}}1.
\]
Per usual, we instead study its log counterpart
\[
\sum_{\substack{x < p \le x+h \\ \theta_p \in I}}\log{p} \coloneqq \sum_{\substack{x < p \le x+h \\ \theta_p \in I}}\log{p}.
\]
\end{comment}
\begin{theorem}
\label{thm:main_theorem}
Let $E/{\mathbb Q}$ be an elliptic curve of conductor $N$ that has complex multiplication by an imaginary quadratic field. Let $I \subseteq [0, \pi]$ be a subinterval, and define $\theta_p \in [0, \pi]$ by
\[
p+1-\#E({\mathbb F}_p) = 2\sqrt{p}\cos{\theta_p}.
\]
Let $\delta>0$ and $\delta'>0$ be fixed numbers such that $\delta+\delta'<\frac{5}{24}$, $h\geq x^{1-\delta}$, and $|I|\geq x^{-\delta'}$. There exists a number $c=c_{\delta,\delta',E}>0$ such that
\[
\sum_{\substack{x < p \le x+h \\ \theta_p \in I}}\log{p} = \Big(\frac{1}{2}\mathbf{1}_{\frac{\pi}{2}\in I}+\frac{|I|}{2\pi}\Big)h\Big(1 + O\Big(\exp\Big(-c\frac{(\log x)^{1/3}}{(\log\log x)^{1/3}}\Big)\Big)\Big).
\]
The implied constant depends on $N$.
\end{theorem}
In Section 2, we present some explicit examples of the relationship between the numbers $a_E(p)$ for elliptic curves $E/\mathbb{Q}$ with complex multiplication and primes represented by certain positive-definite binary quadratic forms. This relationship uses the Jacobi triple product formula and gives rise to the claimed connection between primes of the form $a^2+b^2$ and the elliptic curve $y^2=x^3-x$. One can recast Theorem \ref{thm:main_theorem} as a statement about the distribution of primes represented by these quadratic forms. In this context, Theorem \ref{thm:main_theorem} reproves a result of Coleman \cite{coleman1990}. For related work on more general norm forms, see Duke \cite{duke}.
After we prove Theorem \ref{thm:main_theorem}, we describe how one can extend our results to a much broader context. In particular, our ideas extend to study the distribution of Fourier coefficients of holomorphic cuspidal Hecke eigenforms which are also CM newforms. We will indicate how Theorem 1.1 fits into this context, which can be addressed using class field theory.
\subsection*{Acknowledgements} This research was supported by the NSF (DMS-2002265), the NSA (H98230-20-1-0012), the Templeton World Charity Foundation, and the Thomas Jefferson Fund at the University of Virginia. The authors would like to thank Dr. Ken Ono for his many helpful suggestions and conversations.
\section{Combinatorial properties of newforms}
In this section, we first discuss the combinatorial properties of the Dedekind eta-function, defined by the infinite product
\[
\eta(z) \coloneqq q^{1/24}\prod_{n=1}^{\infty}(1 - q^n),
\]
where $q = e^{2\pi iz}$. We include this note to give insight into how the binary quadratic forms that we represent primes by arise. Using this function, we can easily provide explicit descriptions of many modular forms. It is also useful for producing generating functions. To do this, we make use of Jacobi's Triple Product Identity
\[
\label{jacobiidentity}
\prod_{n=1}^{\infty}(1 - x^{2n})(1+x^{2n -1}z^2)(1+x^{2n-1}z^{-2}) = \sum_{m \in {\mathbb Z}}z^{2m}x^{m^2}.
\]
Thus, we have the following identities.
\begin{proposition}
The following q-series identities are true.
\label{identities}
\begin{align*}
\eta(24z) &= q\prod_{n=1}^{\infty}(1 - q^{24n}) = \sum_{k \in {\mathbb Z}}(-1)^kq^{(6k+1)^2} \\
\frac{\eta(z)^2}{\eta(2z)} &= \prod_{n = 1}^{\infty} \frac{(1 - q^n)^2}{(1 - q^{2n})} = \sum_{k \in {\mathbb Z}}(-1)^kq^{k^2} \\
\eta(8z)^3 &= q\prod_{n=1}^{\infty}(1 - q^{8n})^3 = \sum_{k =0}^{\infty}(-1)^k(2k + 1)q^{(2k+1)^2} \\
\frac{\eta(6z)^5}{\eta(3z)^2} &= q\prod_{n=1}^\infty \frac{(1 - q^{6n})^5}{(1 - q^{3n})^2} = \sum_{n=1}^{\infty}(-1)^{n-1}\Big(\frac{n}{3}\Big)nq^{n^2} \\
\frac{\eta(2z)^5}{\eta(z)^2 \eta(4z)^2} &= \prod_{n=1}^\infty \frac{(1 - q^{2n})^5}{(1 - q^n)^2(1-q^{4n})^2} = \sum_{k \in {\mathbb Z}}q^{k^2}
\end{align*}
\end{proposition}
\begin{proof}
See \cite[Section 1.4]{ono-cbms}.
\end{proof}
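For example, the first identity in Proposition \ref{identities} follows from Jacobi's Triple Product Identity upon setting $x=q^{36}$ and $z^2=-q^{12}$: the right-hand side becomes
\[
\sum_{m\in{\mathbb Z}}(-1)^m q^{36m^2+12m}=q^{-1}\sum_{m\in{\mathbb Z}}(-1)^m q^{(6m+1)^2},
\]
while the left-hand side becomes $\prod_{n=1}^{\infty}(1-q^{72n})(1-q^{72n-24})(1-q^{72n-48})=\prod_{n=1}^{\infty}(1-q^{24n})$, since the exponents $72n$, $72n-24$, and $72n-48$ together run over all positive multiples of $24$. Multiplying both sides by $q$ yields the stated expansion of $\eta(24z)$.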
Consider the functions
\begin{equation}
\label{eqn:newform_examples}
\begin{aligned}
f_{E_1}(z) &= \eta(4z)^2\eta(8z)^2 = \sum_{n=1}^{\infty} a_{E_1}(n)q^n, \\
f_{E_2}(z) &= \eta(6z)^4 = \sum_{n=1}^{\infty} a_{E_2}(n)q^n, \\
f_{E_3}(z) &= \frac{\eta(8z)^8}{\eta(4z)^2\eta(16z)^2} = \sum_{n=1}^{\infty} a_{E_3}(n)q^n.
\end{aligned}
\end{equation}
From \cite{martin-ono}, the corresponding elliptic curves for these weight 2 newforms are
\begin{align*}
E_{1} &: y^2 = x^3 - x ,\\
E_{2} &: y^2 = x^3 + 1 ,\\
E_{3} &: y^2 = x^3 - 4x.
\end{align*}
The curve $E_2$ with conductor 36 has complex multiplication by $\mathbb{Q}(\sqrt{-3}),$ and the curves $E_1$ and $E_3$ with conductor 32 and 64, respectively, have complex multiplication by ${\mathbb Q}(i)$. This implies that $L(s,E_{j}),$ for $j \in \{1, 2, 3\}$ is a Hecke L-function $L(s, \phi)$, for a suitable Hecke Gr\"ossencharacter $\phi$. Since the curves $E_{j}$ have complex multiplication, we can find the explicit formulae for the coefficients for $L(s, \phi)$ using the well-known $q$-series identities seen in Proposition \ref{identities}. We use this to compute the Fourier coefficients of the weight 2 newforms from above, which are sums of the relevant Hecke Gr{\"o}ssencharacters.
\par
Using the $q$-series and infinite product identities in Proposition 2.1, we see that
\begin{align*}
\eta(4z)^2\eta(8z)^2 &= \frac{\eta(4z)^2}{\eta(8z)}\cdot \eta(8z)^3 = q\prod_{n=1}^{\infty}(1 - q^{8n})^3 \cdot \prod_{m = 1}^{\infty} \frac{(1 - q^{4m})^2}{(1 - q^{8m})} \\
&= \Big(\sum_{n =0}^{\infty}(-1)^n(2n + 1)q^{(2n+1)^2} \Big) \cdot \Big(1 + 2\sum_{m=1}^\infty (-1)^{m}q^{4m^2}\Big), \\
\eta(6z)^4 &= \frac{\eta(6z)^5}{\eta(3z)^2} \cdot \frac{\eta(3z)^2}{\eta(6z)} =
q\prod_{n=1}^{\infty} \frac{(1 - q^{6n})^5}{(1 - q^{3n})^2} \cdot \prod_{m=1}^{\infty} \frac{(1 - q^{3m})^2}{(1 - q^{6m})} \\
&= \Big(\sum_{n=1}^\infty (-1)^{n-1}\Big(\frac{n}{3} \Big) nq^{n^2}\Big) \cdot \Big(1 + 2\sum_{m=1}^\infty (-1)^{3m}q^{3m^2} \Big),\\
\frac{\eta(8z)^8}{\eta(4z)^2\eta(16z)^2} &= \eta(8z)^3 \cdot \frac{\eta(8z)^5}{\eta(4z)^2\eta(16z)^2} = q\prod_{n=1}^{\infty}(1 - q^{8n})^3 \cdot \prod_{n=1}^\infty \frac{(1 - q^{8n})^5}{(1 - q^{4n})^2(1-q^{16n})^2} \\
&= \Big(\sum_{n =0}^{\infty}(-1)^n(2n + 1)q^{(2n+1)^2}\Big) \cdot \Big(1 + 2\sum_{m=1}^\infty q^{4m^2}\Big).
\end{align*}
Since the coefficients are Hecke multiplicative, it suffices to give the formulae for $a_{E_j}(p)$ for $p$ prime in order to understand $a_{E_j}(n)$ for all $n\geq 1$.
\begin{theorem}
We have the following formulae for primes $p \nmid N$, where $N$ is the conductor.
\begin{align*}
a_{E_1}(p) &= \begin{cases} 0 & \text{ if } p \equiv 3 \pmod4 \\ (-1)^{n + m}(4n + 2) & \text{ if } p \equiv 1 \pmod4, p = (2n + 1)^2 + 4m^2, n, m \ge 0 \end{cases} \\
a_{E_2}(p) &= \begin{cases} 0 & \text{ if } p \equiv 2 \pmod{3} \\ 2(-1)^{n + m - 1}\Big(\frac{n}{3}\Big)n & \text{ if } p \equiv 1 \pmod{3}, p = n^2 + 3m^2, n, m \ge 1 \end{cases} \\
a_{E_3}(p) &= \begin{cases} 0 & \text{ if } p \equiv 3 \pmod4 \\ (-1)^{n}(4n + 2) & \text{ if } p \equiv 1 \pmod4, p = (2n + 1)^2 + 4m^2, n, m \ge 0 \end{cases}
\end{align*}
\end{theorem}
\begin{proof}
Follows from the $q$-series infinite product identities in Proposition \ref{identities}.
\end{proof}
We observe that the Fourier coefficients of these weight 2 newforms represented by eta quotients with complex multiplication give rise to primes represented as binary quadratic forms. Thus, to study these primes, we can instead study the distribution of the Fourier coefficients $\{a_{E_j} (p)\}$.
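For example, $13=3^2+4\cdot 1^2$ corresponds to $n=m=1$ in the formula for $a_{E_1}$, so that $a_{E_1}(13)=(-1)^{2}(4+2)=6$; indeed, $\#E_1({\mathbb F}_{13})=8$ and $13+1-8=6$, so that $\cos\theta_{13}=6/(2\sqrt{13})=3/\sqrt{13}$.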
\section{Preliminary manipulations}
Let
\[
E:y^2 = x^3+ax^2+bx+c
\]
be an elliptic curve defined over ${\mathbb Q}$ having conductor $N$. Suppose that $E/{\mathbb Q}$ has complex multiplication by an imaginary quadratic field $K$ having ring of integers $\mathcal{O}_K$, discriminant $D$, and absolute norm $\mathrm{N}=\mathrm{N}_{K/\mathbb{Q}}$; concretely, we have $\mathrm{N}\mathfrak{a}=|\mathcal{O}_K/\mathfrak{a}|$ for a nonzero integral ideal $\mathfrak{a}\subseteq\mathcal{O}_K$. Each such field $K$ must be of the form ${\mathbb Q}(\sqrt{-d})$ where
\[
d\in\{1, 2, 3, 7, 11, 19, 43, 67, 163\},
\]
each of which has class number one. Thus, the integral ideals of $\mathcal{O}_K$ are principally generated.
For a prime $p\nmid N$, we recast the Hasse bound as
\[
p+1-\#E({\mathbb F}_p)=2\sqrt{p}\cos\theta_p.
\]
Given a subinterval $I\subseteq[0,\pi]$ with indicator function $\mathbf{1}_I$ and $x>DN$, we study
\begin{equation}
\label{eqn:sec3sum_1}
\sum_{\substack{x<p\leq x+h \\ \theta_p\in I}}\mathbf{1}_{I}(\theta_p)\log p.
\end{equation}
\subsection{Preliminary Fourier analysis}
Recall that $p+1=\#E({\mathbb F}_p)$ (or, equivalently, $\theta_p=\pi/2$) if and only if $p$ is inert in $K$. Therefore, we decompose \eqref{eqn:sec3sum_1} as
\begin{equation}
\label{eqn:sec3sum_2}
\sum_{\substack{x<p\leq x+h \\ \theta_p\in I-\{\frac{\pi}{2}\} \\ \textup{$p$ splits in $K$}}}\mathbf{1}_{I}(\theta_p)\log p+\mathbf{1}_{\frac{\pi}{2}\in I}\sum_{\substack{x<p\leq x+h \\ \textup{$p$ is inert in $K$}}}\log p.
\end{equation}
If the interior of $I$ contains $\frac{\pi}{2}$, then there exist intervals $I_1$ and $I_2$ such that we have the disjoint union
\[
I = I_1\cup \{\pi/2\}\cup I_2.
\]
Suppose now that $J\subseteq[0,\pi]$ is an open subinterval that does not contain $\pi/2$ in its interior.
\begin{lemma}
\label{lem:BS}
Let $M\geq 1$ be an integer, and let $J\subseteq[0,\pi]$ be a subinterval. There exist trigonometric polynomials
\[
S_{M}^-(\theta) = \sum_{|m|\leq M}b_m^- e^{i m\theta},\qquad S_{M}^+(\theta) = \sum_{|m|\leq M}b_m^+ e^{i m\theta}
\]
such that the following properties hold.
\begin{enumerate}
\item If $\theta\in[0,\pi]$, then $S_M^-(\theta)\leq\mathbf{1}_J(\theta)\leq S_M^+(\theta)$.
\item If $0<|m|\leq M$, then $b_m^{\pm}=b_{-m}^{\pm}$.
\item We have the bounds
\begin{equation}
\label{eqn:first_Fourier}
\Big|b_0^{\pm} - \frac{|J|}{\pi}\Big|\ll\frac{1}{M}.
\end{equation}
\item If $0<|m|\leq M$, then we have the bounds
\[
|b_m^{\pm}|\ll \frac{1}{M}+\min\Big\{|J|,\frac{1}{|m|}\Big\}.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
See \cite[Lecture 1]{Montgomery}.
\end{proof}
It follows from Lemma \ref{lem:BS} that
\[
\sum_{|m|\leq M}b_m^- \sum_{\substack{x<p\leq x+h \\ \textup{$p$ splits in $K$}}} e^{i m\theta_p}\log p \leq \sum_{\substack{x<p\leq x+h \\ \theta_p\in J \\ \textup{$p$ splits in $K$}}}\mathbf{1}_{J}(\theta_p)\log p\leq \sum_{|m|\leq M}b_m^+ \sum_{\substack{x<p\leq x+h \\ \textup{$p$ splits in $K$}}} e^{i m\theta_p}\log p.
\]
The bounds on $|b_m^{\pm}|$ and the relation between $b_m^{\pm}$ and $b_{-m}^{\pm}$ imply that
\begin{multline*}
\Big|\sum_{\substack{x<p\leq x+h \\ \theta_p\in J \\ \textup{$p$ splits in $K$}}}\mathbf{1}_{J}(\theta_p)\log p - \frac{|J|}{\pi}\sum_{\substack{x<p\leq x+h \\ \textup{$p$ splits in $K$}}}\log p\Big|\\
\ll \frac{1}{M}\sum_{\substack{x<p\leq x+h \\ \textup{$p$ splits in $K$}}}\log p+\sum_{1\leq m\leq M}\Big(\frac{1}{M}+\min\Big\{|J|,\frac{1}{|m|}\Big\}\Big)\Big|\sum_{\substack{x<p\leq x+h \\ \textup{$p$ splits in $K$}}}e^{im\theta_p}\log p\Big|.
\end{multline*}
If $\chi_D$ denotes the primitive quadratic Dirichlet character associated to $K$ and $p>DN$, it follows that
\[
\frac{1+\chi_D(p)}{2}=\begin{cases}
1&\mbox{if $p$ splits in $K$,}\\
0&\mbox{if $p$ is inert in $K$}
\end{cases},\qquad \frac{1-\chi_D(p)}{2}=\begin{cases}
0&\mbox{if $p$ splits in $K$,}\\
1&\mbox{if $p$ is inert in $K$.}
\end{cases}
\]
Defining $\chi_0$ to be the trivial Dirichlet character modulo 1, it follows from the discussion above that
\begin{multline}
\label{eqn:sec3sum_3}
\Big|\sum_{\substack{x<p\leq x+h \\ \theta_p\in I}}\log p - \Big(\frac{1}{2}\mathbf{1}_{\frac{\pi}{2}\in I}+\frac{|I|}{2\pi}\Big)h\Big|\\
\ll\frac{1}{M}\sum_{\substack{x<p\leq x+h \\ \textup{$p$ splits in $K$}}}\log p+ \sum_{1\leq m\leq M}\Big(\frac{1}{M}+\min\Big\{|I|,\frac{1}{|m|}\Big\}\Big)\Big|\sum_{\substack{x<p\leq x+h \\ \textup{$p$ splits in $K$}}}e^{im\theta_p}\log p\Big|\\
+(\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)\sum_{\chi\in\{ \chi_D,\chi_0\}}\Big|\sum_{x<p\leq x+h}\chi(p)\log p-\delta_{\chi}h\Big|,
\end{multline}
where $\delta_{\chi}=1$ if $\chi=\chi_0$ and $\delta_{\chi}=0$ if $\chi=\chi_D$.
\subsection{Hecke Gr{\"o}ssencharacters}
We introduce a crucial connection between the angles $\theta_p$ at primes $p$ that split completely in $K$ and certain infinite-order characters defined on integral ideals of $K$.
\begin{lemma}
\label{lem:Grossencharacter}
Let $w$ be the number of roots of unity in $K$. For each integer $m\neq 0$, there exists an integral ideal $\mathfrak{m}_m$ (depending only on $K$ and the residue class of $m$ modulo $w$) and a Hecke Gr{\"o}ssencharacter $\xi_{m}$ modulo $\mathfrak{m}_m$ with frequency $m$ such that if $p$ splits completely in $K$, so that there exists a prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$ such that $(p)=\mathfrak{p}\overline{\mathfrak{p}}$, then
\[
\xi_{m}(\mathfrak{p}) = e^{im\theta_p}.
\]
\end{lemma}
\begin{proof}
See \cite[Section 2]{Sutherland}. The discussion therein relies crucially on \cite[Theorem II.10.5]{Silverman}, which was originally proved by Deuring. To provide an explicit example, when $E$ is the elliptic curve $y^2=x^3-x$ from the introduction, then $K={\mathbb Q}(i)$, and for each $m\geq 1$, we define
\[
\mathfrak{m}_m=\begin{cases}
\mathcal{O}_K &\mbox{if $m\equiv 1\pmod{4}$,}\\
(1+i)^2&\mbox{if $m\equiv 3\pmod{4}$,}\\
(1+i)^3&\mbox{if $m$ is even.}
\end{cases}
\]
If $\mathfrak{a}$ is a nonzero integral ideal of $\mathcal{O}_K$ generated by $\alpha\in\mathcal{O}_K$ with $\alpha\equiv 1\pmod{\mathfrak{m}_m}$, then we let
\[
\xi_{m}(\mathfrak{a}) = \begin{cases}
(\alpha/|\alpha|)^{m-1}&\mbox{if $\mathfrak{a}$ and $\mathfrak{m}_m$ are coprime,}\\
0&\mbox{otherwise.}
\end{cases}
\]
We multiplicatively extend $\xi_{m}$ to other nonzero fractional ideals $\mathfrak{a}$ that are coprime to $\mathfrak{m}_m$, and we extend $\xi_{m}$ to all other nonzero fractional ideals by sending them to zero. This defines a primitive Hecke Grossencharacter modulo $\mathfrak{m}_m$ for all $m\geq 1$. For $m\geq 1$, we have $\xi_{-m}=\overline{\xi_{m}}$.
\end{proof}
Note that if $\mathfrak{p}$ is a prime ideal of $\mathcal{O}_K$ such that there exists a prime $p$ satisfying $(p)=\mathfrak{p}\overline{\mathfrak{p}}$, then $\mathrm{N}\mathfrak{p}=p$. Apart from the finitely many ramified prime ideals, every other prime ideal $\mathfrak{p}$ satisfies $\mathrm{N}\mathfrak{p}=q^2$ for some prime $q$. The contribution from such prime ideals is trivially bounded by $O(\sqrt{x}\log x)$ by the prime number theorem. It follows that the right hand side of \eqref{eqn:sec3sum_3} is
\begin{multline}
\label{eqn:sec3sum_4}
\ll\frac{1}{M}\sum_{x<p\leq x+h }\log p+ \sum_{1\leq m\leq M}\Big(\frac{1}{M}+\min\Big\{|I|,\frac{1}{|m|}\Big\}\Big)\Big|\sum_{x<\mathrm{N}\mathfrak{p}\leq x+h}\xi_{m}(\mathfrak{p})\log \mathrm{N}\mathfrak{p}\Big|\\
+|I|\cdot\Big|\sum_{x<p\leq x+h}\chi_D(p)\log p\Big|+\sqrt{x}\log x.
\end{multline}
For an integral ideal $\mathfrak{a}$ of $\mathcal{O}_K$, we define
\[
\Lambda_K(\mathfrak{a}) = \begin{cases}
\log{\mathrm{N}}\mathfrak{p}&\mbox{if there exists a prime ideal $\mathfrak{p}$ and an integer $j\geq 1$ such that ${\mathfrak a}=\mathfrak{p}^j$,}\\
0&\mbox{otherwise.}
\end{cases}
\]
It will be convenient for us to bound the error in approximating a sum over $\xi_{m}(\mathfrak{p})\log{\mathrm{N}}\mathfrak{p}$ with a sum over $\xi_{m}({\mathfrak a})\Lambda_K({\mathfrak a})$. In particular, we want to bound
\[
\Big|\sum_{x < {\mathrm{N}}{\mathfrak a} \le x+h}\xi_{m}({\mathfrak a})\Lambda_K({\mathfrak a}) - \sum_{x < {\mathrm{N}}\mathfrak{p} \le x+h}\xi_{m}({\mathfrak p})\log{{\mathrm{N}}\mathfrak{p} }\Big|.
\]
First, note that $|\xi_m({\mathfrak a})|\leq1$, so $|\xi_{m}({\mathfrak a})\Lambda_K({\mathfrak a})| \le \log{{\mathrm{N}}\mathfrak{p} }$ if ${\mathfrak a}$ is a power of a prime ideal ${\mathfrak p},$ and is 0 otherwise. We rewrite the above as
\begin{align*}
\Big|\sum_{j=1}^{\infty}\sum_{x < {\mathrm{N}}\mathfrak{p} ^j \le x+h}\xi_{m}({\mathfrak p}^j)\log{{\mathrm{N}}\mathfrak{p} } - \sum_{x < {\mathrm{N}}\mathfrak{p} \le x+h}\xi_{m}({\mathfrak p})\log{{\mathrm{N}}\mathfrak{p} }\Big| &= \Big| \sum_{j=2}^{\infty}\sum_{x < {\mathrm{N}}\mathfrak{p} ^j \le x+h}\xi_{m}({\mathfrak p}^j)\log{{\mathrm{N}}\mathfrak{p} } \Big| \\
&\le \sum_{j=2}^{\infty}\sum_{x < {\mathrm{N}}\mathfrak{p} ^j \le x+h}\log{{\mathrm{N}}\mathfrak{p} } \\
&\ll (\log{x})^2 \sum_{x < {\mathrm{N}}\mathfrak{p} ^2 \le x+h}\log{{\mathrm{N}}\mathfrak{p} } \\
&\ll (\log{x})^2\sum_{\sqrt{x} < {\mathrm{N}}\mathfrak{p} \le \sqrt{x}+\frac{2h}{\sqrt{x}}}1
\end{align*}
By \cite[Proposition 2]{Grenie-Molteni-Perelli},
this is bounded by
\begin{equation*}
\ll (\log x)^2 \frac{h}{\sqrt{x}\log x} = \frac{h}{\sqrt{x}}\log x
\end{equation*}
Then,
\begin{multline}
\label{eqn:fourier_coeffs_accumulate}
\sum_{\substack{m \in {\mathbb Z} \\ |m| \le M}}|b_m^{\pm}|\Big|\sum_{x < {\mathrm{N}}{\mathfrak a} \le x+h}\xi_{m}({\mathfrak a})\Lambda({\mathfrak a}) - \sum_{x < {\mathrm{N}}\mathfrak{p} \le x+h}\xi_{m}({\mathfrak p})\Lambda({\mathfrak p})\Big| \\
\ll \frac{h\log{x}}{\sqrt{x}} \sum_{\substack{m \in {\mathbb Z} \\ 0<|m| \le M}}\Big(\frac{1}{M} + \min\Big(\frac{1}{|m|}, |I| \Big) \Big) \\
\ll \frac{h\log{x}}{\sqrt{x}}\log M = x^{1/2 - \delta'}(\log x)^2.
\end{multline}
It follows that \eqref{eqn:sec3sum_4} is
\begin{multline}
\label{eqn:sec3sum_5.1}
\ll\frac{1}{M}\sum_{x<p\leq x+h }\log p+ \sum_{1\leq m\leq M}\Big(\frac{1}{M}+\min\Big\{|I|,\frac{1}{|m|}\Big\}\Big)\Big|\sum_{x<\mathrm{N}\mathfrak{a}\leq x+h}\xi_{m}(\mathfrak{a})\Lambda_K({\mathfrak a})\Big|\\
+|I|\cdot\Big|\sum_{x<p\leq x+h}\chi_D(p)\log p\Big|+\sqrt{x}\log x.
\end{multline}
Recalling the classical von Mangoldt function
\[
\Lambda(n)=\begin{cases}
\log p&\mbox{if $n=p^k$ for some prime $p$ and some integer $k\geq 1$,}\\
0&\mbox{otherwise}
\end{cases}
\]
along with the Chebyshev bounds for the prime counting function, we find that \eqref{eqn:sec3sum_5.1} is
\begin{multline}
\label{eqn:sec3sum_5}
\ll\frac{1}{M}\sum_{x<p\leq x+h }\log p+ \sum_{1\leq m\leq M}\Big(\frac{1}{M}+\min\Big\{|I|,\frac{1}{|m|}\Big\}\Big)\Big|\sum_{x<\mathrm{N}\mathfrak{a}\leq x+h}\xi_{m}(\mathfrak{a})\Lambda_K({\mathfrak a})\Big|\\
+|I|\cdot\Big|\sum_{x<n\leq x+h}\chi_D(n)\Lambda(n)\Big|+\sqrt{x}\log x.
\end{multline}
For expositional clarity, we defer our handling of the sum over $\chi_D(n)\Lambda(n)$ to the end of our proofs, and we will only address it briefly.\footnote{The extra contribution from the sum over $\chi_D(n)\Lambda(n)$ is much simpler to handle than the sum over Hecke Gr{\"o}ssencharacters, but it is handled independently from the Hecke Gr{\"o}ssencharacter contribution.}
\section{$L$-functions of Hecke Gr{\"o}ssencharacters}
In order to bound \eqref{eqn:sec3sum_5} efficiently, we introduce the theory of $L$-functions, starting with a classical result of Hecke \cite{hecke}.
\begin{lemma}
\label{lem:Hecke_L-function}
Let $m$ be a nonzero integer, and let $\xi_m\pmod{\mathfrak{m}_m}$ be as in Lemma \ref{lem:Grossencharacter}. Define
\[
L(s,\xi_m) = \prod_{\mathfrak{p}}(1-\xi_m(\mathfrak{p}){\mathrm{N}}\mathfrak{p}^{-s})^{-1}.
\]
The above Euler product converges absolutely for $\mathrm{Re}(s)>1$. Moreover, if we define
\[
\Lambda(s,\xi_m)=((2\pi)^{-2}D{\mathrm{N}}\mathfrak{m}_m)^{\frac{s}{2}}\Gamma\Big(s+\frac{m}{2}\Big)L(s,\xi_m),
\]
where $\Gamma(s)$ is the complex-analytic extension of the factorial, then $\Lambda(s,\xi_m)$ is an entire function of order one, and there exists a complex number $W(\xi_m)$ of modulus one such that
\[
\Lambda(s,\xi_m)=W(\xi_m)\Lambda(1-s,\overline{\xi_m}).
\]
\end{lemma}
\begin{remark}
The integer $D{\mathrm{N}}\mathfrak{m}_m$ is not pertinent in our proofs because $D$ is fixed (since $E/{\mathbb Q}$ is fixed) and $\mathfrak{m}_m$ depends only on $K$ (which is fixed) and the residue class of $m$ modulo the number of roots of unity in $K$, and the number of such residue classes is fixed.
\end{remark}
The Euler product that defines $L(s,\xi)$ converges absolutely for $\mathrm{Re}(s)>1$, so it follows that $L(s,\xi)\neq 0$ in that region. By the functional equation of $\Lambda(s,\xi)$, it follows that $L(s,\xi)\neq 0$ for $\mathrm{Re}(s)<0$ except at $-j-\frac{m}{2}$, where $j\geq 0$ is an integer. These trivial zeros arise as poles of the gamma function. Because $\Lambda(s,\xi)$ is entire of order one, it exhibits a Hadamard factorization
\[
\Lambda(s,\xi)=\prod_{\rho}\Big(1-\frac{s}{\rho}\Big)e^{\frac{s}{\rho}},
\]
where $\rho=\beta+i\gamma$ denotes a nontrivial zero of $L(s,\xi)$. These zeros all satisfy $0\leq\beta\leq 1$ and $\gamma\in{\mathbb R}$. The nontrivial zeros are directly related to the partial sums of $\xi_m({\mathfrak a})\Lambda_K({\mathfrak a})$ by means of the so-called ``explicit formula".
\begin{lemma}
\label{lem:explicit_formula}
If $1\leq m\leq M\leq x$ and $x^{1-\delta}\leq h\leq x$, then
\[
\sum_{x<{\mathrm{N}}{\mathfrak a}\leq x+h}\xi_m({\mathfrak a})\Lambda_K({\mathfrak a}) = -\sum_{\substack{\rho=\beta+i\gamma \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}\frac{(x+h)^{\rho}-x^{\rho}}{\rho}+O\Big(\frac{h(\log x)^2}{M}\Big).
\]
\end{lemma}
\begin{proof}
It follows from \cite[Theorem 12.5]{Iwaniec} that $L(s,\xi_m)$ is in fact the $L$-function associated to a holomorphic cuspidal Hecke eigenform of weight $m+1$ and level $D{\mathrm{N}}\mathfrak{m}$ (which, as we already observed, is uniformly bounded). This is a degree 2 $L$-function defined over ${\mathbb Q}$ whose analytic conductor $\mathfrak{q}_m$ (per \cite[Equation 5.7]{Iwaniec-Kowalski}) satisfies $\log \mathfrak{q}_m\asymp \log m\ll \log M\ll \log x$. By \cite[Equation 5.53]{Iwaniec-Kowalski}, it follows that
\[
\sum_{{\mathrm{N}}{\mathfrak a}\leq x}\xi_m({\mathfrak a})\Lambda_K({\mathfrak a}) = -\sum_{\substack{\rho=\beta+i\gamma \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}\frac{x^{\rho}-1}{\rho}+O\Big(\frac{h(\log x)^2}{M}\Big).
\]
The desired result follows from taking the difference between the above expression at $x$ and at $x+h$.
\end{proof}
While our focus here is on Hecke Gr{\"o}ssencharacters, there is a corresponding theory for Dirichlet characters which was established long before the work of Hecke. We summarize the key results, all of which are proved in \cite{davenport}.
\begin{theorem}
Let $\chi\in \{\chi_0,\chi_D\}$ be as above. Let $q=|D|$ if $\chi=\chi_D$ and $q=1$ if $\chi=\chi_0$. Define
\[
L(s,\chi) = \prod_p (1-\chi(p)p^{-s})^{-1}.
\]
The above Euler product converges absolutely for $\mathrm{Re}(s)>1$. Moreover, if we define
\[
\Lambda(s,\chi) = \pi^{-s/2} q^{s/2}\Gamma\Big(\frac{s+\frac{1-\chi(-1)}{2}}{2}\Big)L(s,\chi),
\]
then $\Lambda(s,\chi)$ is an entire function of order one, and there exists a complex number $W(\chi)$ of modulus one such that
\[
\Lambda(s,\chi)=W(\chi)\Lambda(1-s,\chi).
\]
If $\rho=\beta+i\gamma$ denotes a nontrivial zero of $L(s,\chi)$ with $0\leq\beta\leq 1$, and if $x^{1-\delta}\leq h\leq x$, then
\[
\sum_{x<n\leq x+h} \chi(n)\Lambda(n) = -\sum_{\substack{\rho=\beta+i\gamma \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}\frac{(x+h)^{\rho}-x^{\rho}}{\rho}+O\Big(\frac{h(\log x)^2}{M}\Big).
\]
\end{theorem}
\subsection{Preliminary reductions}
By a calculation nearly the same as \eqref{eqn:fourier_coeffs_accumulate}, we deduce that \eqref{eqn:sec3sum_5} is
\begin{multline}
\label{eqn:sec4sum_1}
\ll\frac{1}{M}\sum_{x<p\leq x+h }\log p+ \sum_{1\leq m\leq M}\Big(\frac{1}{M}+\min\Big\{|I|,\frac{1}{|m|}\Big\}\Big)\sum_{\substack{\rho=\beta+i\gamma \\ L(\rho,\xi_m)=0 \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}\Big|\frac{(x+h)^{\rho}-x^{\rho}}{\rho}\Big|\\
+(\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)\sum_{\chi\in\{\chi_0,\chi_D\}}\sum_{\substack{\rho=\beta+i\gamma \\ L(s,\chi)=0 \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}\Big|\frac{(x+h)^{\rho}-x^{\rho}}{\rho}\Big|+\sqrt{x}\log x+\frac{h(\log x)^3}{M}.
\end{multline}
In light of the bound
\[
\Big|\frac{(x+h)^{\rho}-x^{\rho}}{\rho}\Big|=\Big|\int_{x}^{x+h}t^{\rho-1}dt\Big|\leq \int_x^{x+h}t^{\beta-1}dt\ll hx^{\beta-1},
\]
we find that \eqref{eqn:sec4sum_1} is
\begin{multline}
\label{eqn:sec4sum_2}
\ll\frac{1}{M}\sum_{x<p\leq x+h }\log p+ h\sum_{1\leq m\leq M}\Big(\frac{1}{M}+\min\Big\{|I|,\frac{1}{|m|}\Big\}\Big)\sum_{\substack{\rho=\beta+i\gamma \\ L(\rho,\xi_m)=0 \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}x^{\beta-1}\\
+(\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)h\sum_{\chi\in\{\chi_0,\chi_D\}}\sum_{\substack{\rho=\beta+i\gamma \\ L(s,\chi)=0 \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}x^{\beta-1}+\sqrt{x}\log x+\frac{h(\log x)^3}{M}.
\end{multline}
In order to ensure that our Fourier coefficient bounds from Lemma \ref{lem:BS} are nontrivial, we make the assumption
\[
|I|\gg M^{-1}
\]
so that \eqref{eqn:sec4sum_2} is
\begin{equation}
\label{eqn:sec4sum_3}
\ll\frac{1}{M}\sum_{x<p\leq x+h }\log p+ (\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)h\sum_{\substack{\rho=\beta+i\gamma \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}x^{\beta-1}+\frac{h(\log x)^3}{M}+\sqrt{x}\log x,
\end{equation}
where $\rho=\beta+i\gamma$ ranges over all zeros of $L(s,\chi_0)$, $L(s,\chi_D)$, and the $L$-functions $L(s,\xi_m)$ with $1\leq m\leq M$. It follows from the work of Montgomery and Vaughan \cite[Equation 1.12]{M-V} that if $h\geq x^{1-\delta}$, then
\[
\sum_{x<p\leq x+h}\log p\ll h.
\]
Consequently, if $h\geq M\sqrt{x}$, then \eqref{eqn:sec4sum_3} is
\begin{equation}
\label{eqn:sec4sum_4}
\ll\frac{h(\log x)^3}{M}+ (\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)h\sum_{\substack{\rho=\beta+i\gamma \\ |\gamma|\leq Mx/h \\ 0\leq\beta\leq 1}}x^{\beta-1} \ll\frac{h(\log x)^3}{M}+ (\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)h\sum_{\substack{\rho=\beta+i\gamma \\ |\gamma|\leq Mx/h \\ \frac{1}{2}\leq\beta\leq 1}}x^{\beta-1}.
\end{equation}
The last step uses the consequence of the functional equations for $L(s,\xi_m)$ and $L(s,\chi)$ that if $\beta+i\gamma$ is a zero, then so is $1-\beta+i\gamma$.
\subsection{Zeros of Hecke Gr{\"o}ssencharacter $L$-functions}
We now proceed as in the work of Hoheisel. Define
\begin{multline*}
N(\sigma,M,T):=\sum_{1\leq m\leq M}\#\{\rho=\beta+i\gamma\colon L(\rho,\xi_m)=0,~|\gamma|\leq T,~\beta\geq\sigma\}\\
+\sum_{\chi\in\{\chi_0,\chi_D\}}\#\{\rho=\beta+i\gamma\colon L(\rho,\chi)=0,~|\gamma|\leq T,~\beta\geq\sigma\}.
\end{multline*}
Then \eqref{eqn:sec4sum_4} is
\begin{equation}
\label{eqn:sec4sum_5}
\ll\frac{h(\log x)^3}{M}+ (\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)h (\log x)\max_{\frac{1}{2}\leq \sigma\leq 1}x^{\sigma-1}N(\sigma,M,Mx/h).
\end{equation}
In order to show that \eqref{eqn:sec4sum_5} is small, we will make use of a zero-free region and zero density estimate. The zero-free region is a result of Coleman in \cite{colemanZFR}.
\begin{lemma}
\label{ZFR}
Let $m\geq 1$. There exists a constant $c > 0$ depending only on $E$ such that $L(s,\xi_m)\neq 0$ in the region
\[
\mathrm{Re}(s) \ge 1 - \frac{c}{(\log (|\mathrm{im}(s)|+3)|m|)^{2/3} (\log \log (|\mathrm{im}(s)|+3)|m|)^{1/3}}.
\]
\end{lemma}
We apply Lemma \ref{ZFR} to \eqref{eqn:sec4sum_5} and conclude that it is
\begin{equation}
\label{eqn:sec4sum_6}
\ll\frac{h(\log x)^3}{M}+ (\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)h (\log x)\max_{\frac{1}{2}\leq \sigma\leq 1-\frac{c}{(\log M^2 x/h)^{2/3}(\log\log M^2 x/h)^{1/3}}}x^{\sigma-1}N(\sigma,M,Mx/h).
\end{equation}
We use a deep {\it zero density estimate} for Hecke Gr{\"o}ssencharacter $L$-functions due to Coleman \cite[Corollary 8.2]{coleman1990}, which verifies that $N(\sigma,M,T)$ becomes very small as $\sigma\to 1^-$. We use this as a strong substitute for the yet-unproven generalized Riemann hypothesis for the family of Hecke Gr{\"o}ssencharacter $L$-functions.
\begin{lemma}
\label{ZDE}
For all $\epsilon>0$, there exists a constant $B>0$ such that
\[
N(\sigma,M,T) \ll_{\epsilon} \max\{M,T\}^{(\frac{24+\epsilon}{5})(1 - \sigma)}(\log{T})^B
\]
for $\frac{1}{2}\le \sigma \le 1$.
\end{lemma}
\section{Proof of Theorem \ref{thm:main_theorem}}
Let $\epsilon>0$ be small. We apply Lemma \ref{ZDE} to \eqref{eqn:sec4sum_6} to achieve the bound
\begin{multline*}
(\log x)\max_{\frac{1}{2}\leq \sigma\leq 1-\frac{c}{(\log M^2 x/h)^{2/3}(\log\log M^2 x/h)^{1/3}}}x^{\sigma-1}N(\sigma,M,Mx/h)\\
\ll_{\epsilon} (\log x)^{B+1}\max_{\frac{1}{2}\leq \sigma\leq 1-\frac{c}{(\log M^2 x/h)^{2/3}(\log\log M^2 x/h)^{1/3}}}\Big(\frac{(Mx/h)^{(24+\epsilon)/5}}{x}\Big)^{1-\sigma}.
\end{multline*}
If $h\geq x^{1-\delta}$ and $M\leq x^{\theta}$, then we assume that
\[
\theta+\delta\leq \frac{5}{24}-\epsilon,\qquad \log(M^2 x/h)\asymp\log x
\]
so that the above maximum is achieved when $\sigma$ is largest. We find that there exists an absolute constant $c'>0$, depending on $E$, such that
\begin{align*}
&(\log x)^{B+1}\max_{\frac{1}{2}\leq \sigma\leq 1-\frac{c}{(\log M^2 x/h)^{2/3}(\log\log M^2 x/h)^{1/3}}}\Big(\frac{(Mx/h)^{(24+\epsilon)/5}}{x}\Big)^{1-\sigma}\\
&\ll (\log x)^{B+1}\max_{\frac{1}{2}\leq \sigma\leq 1-\frac{c'}{(\log x)^{2/3}(\log\log x)^{1/3}}} x^{-\frac{\epsilon}{120}(1-\sigma)}\\
&\ll (\log x)^{B+1} x^{-\frac{c'\epsilon}{120(\log x)^{2/3}(\log\log x)^{1/3}}}\\
&\ll \exp\Big(-\frac{c'\epsilon (\log x)^{1/3}}{240(\log\log x)^{1/3}}\Big).
\end{align*}
Therefore, \eqref{eqn:sec4sum_6} is
\[
\ll\frac{h(\log x)^3}{M}+ (\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)h \exp\Big(-\frac{c'\epsilon (\log x)^{1/3}}{240(\log\log x)^{1/3}}\Big).
\]
If $\epsilon_1>0$ and $\epsilon_2>0$ are small and we choose $|I|\geq x^{\epsilon_1}/M$, $M\leq x^{\theta}$, $h\geq x^{1-\delta+\epsilon_2}$, and $\theta+\delta\leq \frac{5}{24}-\epsilon$, then we conclude that the above display is
\[
\ll (\mathbf{1}_{\frac{\pi}{2}\in I}+|I|)h \exp\Big(-\frac{c'\epsilon (\log x)^{1/3}}{240(\log\log x)^{1/3}}\Big).
\]
Upon rescaling $\epsilon$, $\epsilon_1$, and $\epsilon_2$, we find that if $\delta>0$ and $\delta'>0$ are fixed numbers such that
\[
h\geq x^{1-\delta},\qquad |I|\geq x^{-\delta'},\qquad \delta+\delta'<\frac{5}{24},
\]
then there exists a constant $c''>0$ (depending on $\delta$, $\delta'$, and $E$) such that
\[
\sum_{\substack{\theta_p\in I \\ x<p\leq x+h}}\log p = \Big(\frac{\mathbf{1}_{\frac{\pi}{2}\in I}}{2}+\frac{|I|}{2\pi}\Big)h\Big(1+O\Big(\exp\Big(-\frac{c''(\log x)^{1/3}}{(\log\log x)^{1/3}}\Big)\Big)\Big),
\]
as desired.
\section{Further extensions}
We now describe a natural extension of our work to the study of Fourier coefficients of holomorphic cuspidal newforms. Let
\[
f(z) = \sum_{n=1}^{\infty}a_f(n) e^{2\pi inz}
\]
be the Fourier series of a holomorphic cusp form of even integral weight $k\geq 2$ that transforms under a congruence subgroup $\Gamma_0(N)\subseteq \mathrm{SL}_2(\mathbb{Z})$. Suppose that $f$ is an eigenform of all of the Hecke operators, and that it is in fact a newform. Each of the $\eta$-quotients in \eqref{eqn:newform_examples} is an example of a weight 2 newform with complex multiplication (i.e., a newform with the property that there exists an imaginary quadratic field $K$ such that for a prime $p$, $a_f(p)=0$ if and only if $p$ is inert in $K$). Unlike the case of newforms associated to elliptic curves, it could be the case that $K$ has class number larger than 1, in which case the prime ideals of $K$ are no longer necessarily principal.
Generalizing what we did for elliptic curves, one could use the Deligne bound
\[
|a_f(p)|\leq 2p^{\frac{k-1}{2}}
\]
(which generalizes the Hasse bound that we used before) and thus define $\theta_p\in[0,\pi]$ by
\[
a_f(p) = 2p^{\frac{k-1}{2}}\cos\theta_p.
\]
We can now ask how the angles $\theta_p$ are distributed in the interval $[0,\pi]$. In order to relate the angles $\theta_p$ to Hecke Gr{\"o}ssencharacters as we did for elliptic curves, we restrict our consideration to the primes $p$ such that $(p)$ is principally generated. In order to isolate such primes, we use the orthogonality of the characters of the ideal class group of $K$, a finite abelian group with the property that it is trivial if and only if the ring of integers of $K$ has unique factorization. One can then relate the prime counting function
\[
\sum_{\substack{x<p\leq x+h \\ \theta_p\in I \\ \textup{$(p)$ principally generated}}}\log p
\]
to the analytic properties of the $L$-functions $L(s,\xi_m)$ twisted by class group characters. (The class group characters behave in many ways like Dirichlet characters, but they are defined on integral ideals of $K$ instead of integers in $\mathbb{Z}$.) Ultimately, the distribution of primes we are interested in boils down to an understanding of the distribution of zeros of the aforementioned $L$-functions. The zero-free region and zero density estimate of Coleman were proved in this level of generality, so our proofs will carry through with the aforementioned adjustments so that we can conclude
\[
\sum_{\substack{x<p\leq x+h \\ \theta_p\in I \\ \textup{$(p)$ principally generated}}}\log p = \frac{1}{h_K}\Big(\frac{\delta_{\frac{\pi}{2}\in I}}{2}+\frac{|I|}{2\pi}\Big)h\Big(1+O\Big(\exp\Big(-c'''\frac{(\log x)^{1/3}}{(\log\log x)^{1/3}}\Big)\Big)\Big),
\]
where $h\geq x^{1-\delta}$; $|I|\geq x^{-\delta'}$; $\delta+\delta'<5/24$; $c'''$ is a certain constant depending on $\delta$, $\delta'$, and $f$; and $h_K$ is the class number of $K$ (i.e., the order of the ideal class group of $K$).
\nocite{cochrane}
\bibliographystyle{acm}
\section*{Acknowledgment}
We would like to thank the authors
of \cite{kay2017kinetics} for the help on the Kinetics dataset and the baseline experiments,
especially Joao Carreira for many constructive discussions.
We also want to thank Abhinav Shrivastava, Jitendra
Malik, and Rahul Sukthankar for valuable feedback. S.X. is supported by Google. Z.T. is supported by NSF IIS-1618477 and NSF IIS-1717431.
\section{Conclusion}
We show that we can significantly improve on the previous state-of-the-art 3D CNN video classification model, known as I3D, in terms of efficiency, by combining three key ideas:
a top-heavy model design,
temporally separable convolution,
and spatio-temporal feature gating.
Our modifications are simple and can be applied to other architectures.
We hope this will boost performance on a variety of video understanding tasks.
\section{Experiment Setup}
\label{sec:experiments}
\subsection{Datasets}
\label{data}
In this paper, we consider two large video action classification datasets.
The first one is Kinetics \cite{kay2017kinetics},
which is a large dataset collected from YouTube, containing 400 action classes and 240K training examples.
Each example is temporally trimmed to be around 10 seconds.
Since the full Kinetics dataset is quite large,
we have created a smaller dataset
that we call \MK.\footnote{
The original ``Mini-Kinetics'' dataset used in
\cite{kay2017kinetics} contains videos that are no longer available.
We created the new \MK\ dataset in collaboration with the original authors.
} %
\MK\ consists of
the 200 categories with most training examples;
for each category, we randomly sample 400 examples from the training set, and 25 examples from the validation set,
resulting in
80K training examples and 5K validation examples in total. The splits are publicly released to enable future comparisons.
We also report some results on the original Kinetics dataset, which we will call Kinetics-Full\ for clarity.
The second main dataset is
Something-something~\cite{Something}. It consists of about 110k videos of 174 different low-level actions, each lasting between 2 and 6 seconds.
In contrast to Kinetics,
this dataset requires making fine-grained low-level distinctions, such as between ``Pushing something from left to right'' and ``Pushing something from right to left''.
It is therefore an interesting question whether the same principles will hold and the same architectures will work well on both datasets.
We also consider two
smaller action classification datasets to test the transferability of our model,
which we discuss in Section~\ref{sec:otherClassification},
as well as two
action detection datasets, which we discuss in Section~\ref{sec:detection}.
\subsection{Model training}
Our training procedure largely follows~\cite{carreira2017quo}. During training, we densely sample 64 frames from a video and resize input frames to $256\times 256$ and then take random crops of size $224\times 224$. During evaluation, we use all frames and take $224\times 224$ center crops from the resized frames. Our models are implemented in TensorFlow and optimized with vanilla synchronous SGD with momentum of 0.9 on 56 GPUs; the batch size is set to 6 per GPU. For \MK, we train our model for 80k steps with an initial learning rate of 0.1. We decay the learning rate at step 60k to 0.01, and at step 70k to 0.001. Since Something-something\ is a smaller dataset, we reduce the number of GPUs to 16 and train at a learning rate of 0.1 for 10k steps.
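For concreteness, the stepwise learning rate schedule described above can be summarized as follows (a simplified sketch using the \MK\ step thresholds; it is not meant to reproduce our full training setup):
\begin{verbatim}
def learning_rate(step):
  # Piecewise-constant schedule for Mini-Kinetics-200:
  # 0.1 until step 60k, 0.01 until step 70k, 0.001 afterwards.
  if step < 60000:
    return 0.1
  elif step < 70000:
    return 0.01
  return 0.001
\end{verbatim}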
\subsection{Measuring speed and accuracy}
We report top-1 and top-5 accuracy. To measure the computational efficiency of our models, we report theoretical FLOPS based on a single input video sequence of 64 frames and spatial size $224\times 224$. We pad the total number of frames to 250 for \MK\ and 64 for Something-something\ when evaluating.
\subsection{Spatio-temporal feature gating}
\label{sec:gating}
In this section we further improve the accuracy of our model by using feature gating.
We start by considering the context feature gating
mechanism first used for video classification in \cite{miech2017learnable}.
They consider an unstructured input feature vector $x \in \mathcal{R}^n$ (usually learned at final embedding layers close to the logit output),
and produce an output feature vector $y \in \mathcal{R}^n$ as follows:
\[
y = \sigma(W x + b) \odot x
\]
where $\odot$ represents elementwise multiplication,
$W \in \mathcal{R}^{n \times n}$ is a weight matrix,
and $b \in \mathcal{R}^n$ is the bias term.
This mechanism allows the model to upweight certain
dimensions of $x$ if the context model $\sigma(W x +b)$ predicts that they are important,
and to downweight irrelevant dimensions; this can be thought of as a ``self-attention'' mechanism.
We now extend this to feature tensors, with spatio-temporal structure.
Let $X \in \mathcal{R}^{T \times W \times H \times D}$ be the input tensor,
and let $Y$ be an output tensor of the same shape.
We replace the matrix product $W x$ with $W \mathrm{pool}(X)$,
where the pooling operation averages the dimensions of $X$ across space and time.
(We found that this worked better than just averaging across space or just across time.)
We then compute $Y = \sigma(W \mathrm{pool}(X) + b) \odot X$,
where $\odot$ represents multiplication across the feature (channel) dimension
(i.e., we replicate the attention map $\sigma(W \mathrm{pool}(X) + b)$ across space and time).
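A minimal PyTorch-style sketch of this gating operation is given below for illustration only (our implementation is in TensorFlow); the tensor layout $[N, D, T, H, W]$ and the module name are assumptions made for the example.
\begin{verbatim}
import torch
import torch.nn as nn

class SpatioTemporalGating(nn.Module):
    # Gates a feature tensor X of shape [N, D, T, H, W] with
    # per-channel weights computed from its spatio-temporal average.
    def __init__(self, num_channels):
        super().__init__()
        self.fc = nn.Linear(num_channels, num_channels)  # W and b

    def forward(self, x):
        pooled = x.mean(dim=[2, 3, 4])             # average over T, H, W
        weights = torch.sigmoid(self.fc(pooled))   # sigma(W pool(X) + b)
        # Replicate the attention weights over space and time.
        return x * weights[:, :, None, None, None]
\end{verbatim}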
We can plug this gating module into any layer of the network.
We experimented with several options, and got the best results by applying it directly after
each of the $[k, 1, 1]$ temporal convolutions in the S3D network.
We call the final model (S3D with gating) \SG.
We see from Table~\ref{tab:FK} that this results in a healthy gain in accuracy
compared to \Sthree\
on the Kinetics-Full\ dataset (72.2\% top-1 to 74.7\%) at a very modest cost increase (66.38 to 71.38 GFLOPS). Table~\ref{tab:something} shows that \SG\ also outperforms \Sthree\ and I3D\
on Something-something.
We also significantly outperform the current state of the art method,
which is the Multi-scale TRN of \cite{zhou_trn},
improving top-1 accuracy from 33.6\% to 42.0\%.
\section{Introduction}
The resurgence of convolutional neural networks (CNNs) has led to a wave of unprecedented advances for image classification using end-to-end hierarchical feature learning architectures \cite{krizhevsky2012imagenet,InceptionV1,simonyan2015very,resnet}. The task of video classification, however, has not enjoyed the same leap in performance.
In the past, one limitation was the lack of large-scale labeled video datasets.
However, the recent creation of
Sports-1M \cite{Karpathy2014},
Kinetics \cite{kay2017kinetics},
Something-something\ \cite{Something},
ActivityNet \cite{ActivityNet},
Charades \cite{Charades}, etc.
has partially removed that impediment.
Now we face more fundamental challenges.
In particular, we have three main barriers to overcome: (1)
how best to represent spatial information (i.e., recognizing the appearances of objects);
(2) how best to represent temporal information
(i.e., recognizing context, correlation and causation through time);
and (3)
how best to tradeoff model complexity with speed,
both at training and testing time.
\begin{figure}[t]
\centering
\includegraphics[height=2.2in]{figures/teaser_2.png}
\caption{
Our goal is to classify videos into different categories, as shown in the top row.
We focus on two qualitatively different kinds of datasets:
Something-something\, which requires recognizing low-level physical interactions,
and Kinetics, which requires recognizing high-level activities.
The main question we seek to answer is what kind of network architecture to use.
We consider 4 main variants:
I2D, which is a 2D CNN, operating on multiple frames;
I3D, which is a 3D CNN, convolving over space and time;
Bottom-Heavy I3D, which uses 3D in the lower layers, and 2D in the higher layers;
and
Top-Heavy I3D, which uses 2D in the lower (larger) layers,
and 3D in the upper layers.
}
\label{fig:teaser}
\end{figure}
In this paper, we study these three questions by considering various kinds of 3D CNNs.
Our starting point is the state of the art approach, due to
Carreira and Zisserman~\cite{carreira2017quo},
known as ``I3D''
(since it ``inflates'' the 2D convolutional filters of the
``Inception'' network \cite{InceptionV1} to 3D).
Despite giving good performance,
this model is very computationally expensive. This prompts several questions, which we seek to
address in this paper:
\begin{itemize}
\item Do we even need 3D convolution? If so, what layers should we make 3D, and what layers can be 2D? Does this depend on the nature of the dataset and task?
\item Is it important that we convolve jointly over time and space, or would it suffice to convolve over these dimensions independently?
\item How can we use answers to the above questions to improve on prior methods in terms of accuracy, speed and memory footprint?
\end{itemize}
To answer the first question,
we apply ``network surgery'' to obtain several
variants of the I3D architecture.
In one family of variants, which we call Bottom-Heavy-I3D, we retain 3D temporal convolutions at the lowest layers of the network
(the ones closest to the pixels), and use 2D convolutions for the higher layers.
In the other family of variants, which we call Top-Heavy-I3D, we do the opposite, and retain 3D temporal convolutions at the top layers, and use 2D for the lower layers
(see Figure~\ref{fig:teaser}).
We then investigate how to trade between accuracy and speed by varying
the number of layers that are ``deflated'' (converted to 2D) in this way.
We find that the Top-Heavy-I3D\ models are faster,
which is not surprising, since they only apply 3D to the abstract feature maps, which are smaller than the low level feature maps due to spatial pooling.
However, we also find that Top-Heavy-I3D\ models are often more accurate, which is surprising since they ignore low-level motion cues.
To answer the second question (about separating space and time), we consider replacing 3D convolutions
with spatial and temporal separable 3D convolutions,
i.e., we replace filters of the form $k_t \times k \times k$
by $1 \times k \times k$ followed by $k_t \times 1 \times 1$,
where $k_t$ is the width of the filter in time,
and $k$ is the height/width of the filter in space.
We call the resulting model \Sep, which stands for ``separable 3D CNN''.
\Sep\ obviously has many fewer parameters than models that use standard 3D convolution,
and it is more computationally efficient.
Surprisingly, we also show that it has better accuracy than the original I3D model.
Finally, to answer the third question (about putting things together for an efficient and accurate video classification system), we combine what we have learned in answering
the above two questions with a spatio-temporal gating mechanism
to design a new model architecture
which we call \SG.
We show that this model gives significant gains in accuracy over baseline methods on a variety of challenging video classification datasets, such as Kinetics,
Something-something, UCF-101 and HMDB,
and also outperforms many other methods on other video recognition tasks, such as action localization on JHMDB.
\subsection{Replacing \emph{all} 3D convolutions with 2D}
\label{sec:all3d}
\label{sec:pyramids}
In this section, we seek to determine how much value 3D convolution brings,
motivated by the surprising success of 2D CNN approaches
to video classification (see e.g., \cite{tsn_wang_eccv16}).
We do this by
replacing every 3D filter in the I3D model with a 2D filter. This yields what we will refer to as the I2D
model.\footnote{
To reduce the memory and time requirements,
and to keep
the training protocol identical to I3D (in terms of the number of clips we use for training in each batch, etc),
we retain two max-pooling layers with temporal stride 2 between
Inception modules.
Hence, strictly speaking, I2D is not a pure 2D model. However, it is very similar to a single-frame 2D classification model.
}
Theoretically, the I2D network should be invariant to temporal reversal of the input frames, since it is not capable of incorporating global temporal signals. To verify this, we train I2D and the original I3D model on the Kinetics-Full\ and Something-something\ datasets with normal frame order, and apply the trained models to validation data in which the frames are in
normal order and reversed temporal order.
The results of the experiment are shown in Table~\ref{tab:arrowOfTime}.
We see that I2D has the same performance on both versions during testing, as is to be expected. However,
we notice an interesting difference between the Kinetics\ dataset and the Something-something\ dataset.
In the former case, the performance of I3D is indifferent to the ``arrow of time''~\cite{pickup2014seeing},
whereas in the latter case, reversing the order hurts performance.
We believe this is because the Something-something\ dataset requires fine-grained distinctions between visually similar action categories.
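The ``arrow of time'' test above can be summarized by the following illustrative sketch (PyTorch-style pseudocode rather than our actual evaluation script); the clip layout $[N, C, T, H, W]$ is an assumption.
\begin{verbatim}
import torch

def evaluate(model, loader, reverse_time=False):
    # Top-1 accuracy on clips presented in normal or reversed
    # frame order; clips have shape [N, C, T, H, W].
    correct, total = 0, 0
    model.eval()
    with torch.no_grad():
        for clips, labels in loader:
            if reverse_time:
                clips = torch.flip(clips, dims=[2])  # reverse the T axis
            preds = model(clips).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
\end{verbatim}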
\setlength{\tabcolsep}{3pt}
\begin{table}
\begin{center}
\small
\begin{tabular}{c|c|c|c|c}
\hline
& \multicolumn{2}{c|}{Kinetics-Full} & \multicolumn{2}{c}{Something-something} \\
\hline
Model & Normal (\%) & Reversed (\%) & Normal (\%) & Reversed (\%)\\
\hline
I3D & 71.1 & 71.1 & 45.8 & 15.2\\
I2D & 67.0 & 67.2 & 34.4 & 35.2\\
\hline
\end{tabular}
\end{center}
\caption{Top-1 accuracy on Kinetics-Full\ and Something-something\ datasets.
We train on frames in normal order, and then test on frames
in normal order or reverse order.
Not surprisingly, 2D CNNs do not care about the order of the frames.
For 3D CNNs on Kinetics-Full\, the results are the same on normal and reverse order,
indicating
that capturing the ``arrow of time'' is not important on this dataset.
However, on Something-something\, the exact order does matter.
}
\label{tab:arrowOfTime}
\end{table}
\subsection{Replacing \emph{some} 3D convolutions with 2D}
\label{sec:some3d}
Although we have seen that 3D convolution can boost accuracy compared to 2D convolution, it is computationally very expensive.
In this section, we investigate the consequences of only replacing some of the 3D convolutions with 2D.
Specifically, starting with an I2D model, we gradually inflate 2D convolutions into 3D,
from low-level to high-level layers in the network,
to create what we call the Bottom-Heavy-I3D\ model.
We also consider the opposite process, in which we inflate the top layers of the model to 3D, but keep the lower layers 2D;
we call such models Top-Heavy-I3D\ models.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[height=1.5in]{figures/acc_flops.png}
&
\includegraphics[height=1.5in]{figures/acc-flops-SS.png}
\\
(a) & (b)
\end{tabular}
\caption{Accuracy vs number of FLOPS needed to perform inference on 64 RGB frames.
Left: \MK\ dataset.
Right: Something-something\ dataset.
Solid lines denote top-heavy models,
dotted lines denote bottom-heavy models.
Orange denotes spatial and temporal separable 3D convolutions, blue denotes full 3D convolutions.}
\label{fig:accuracyVsFlops}
\end{figure}
We train and evaluate the Bottom-Heavy-I3D\ and Top-Heavy-I3D\ models on \MK\ and Something-something,
and show the results in Figures~\ref{fig:accuracyVsFlops}.
We see that the solid blue lines (Top-Heavy-I3D) are much better than the dotted blue lines (Bottom-Heavy-I3D) at the same number of FLOPS, which indicates that top-heavy models are both faster and more accurate.
The speed increase is expected, since in a top-heavy model,
the feature maps are reduced in size by spatial pooling before being convolved in 3D. For a fixed computation budget, Top-Heavy-I3D is often significantly more accurate than Bottom-Heavy-I3D. This suggests that 3D convolutions are more capable of modeling temporal patterns among high-level features that are rich in semantics.
\begin{figure}[t]
\centering
\includegraphics[width=12cm,height=2cm]{figures/weight_stats.png}
\caption{Statistics of convolutional filter weights of an I3D model trained on Kinetics-Full. Each boxplot shows the distribution of $W_l(t,:,:,:)$ for temporal offset $t$, with $t=0$ being in the middle.
Results for
different layers $l$ are shown in different panels,
with lowest layers on the left.
All filters with different temporal offsets are initialized with the same set of weights. Low-level filters essentially ignore the temporal dimension, unlike higher-level filters, where the weights are spread across different temporal offsets. }
\label{fig:weights}
\end{figure}
\subsection{Analysis of weight distribution of learned filters}
To verify the above intuition, we examined the weights of an I3D model
which was trained on Kinetics-Full.
Figure~\ref{fig:weights} shows the distribution of these weights across 4 layers of our model, from low-level to high-level. In particular, each boxplot shows the distribution of $W_l(t,:,:,:)$ for temporal offset $t$ and layer $l$.
We use $t=0$ to indicate no offset in time, i.e.,
the center in the temporal kernel.
At initialization, all the filters started with the same set of (2D convolution) weights (derived from an Inception model pre-trained on ImageNet) for each value of $t \in \{-1,0,1\}$.
After training, we see that the temporally offset filters (i.e., for $t \neq 0$) have a weight distribution that is still closely centered on zero in the lower layers (see left panel), whereas the variance of the distribution increases in higher layers (see right panel).
This suggests once again that the higher level temporal patterns are more useful for the Kinetics action classification task.
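The statistics in Figure~\ref{fig:weights} can be gathered with a short routine like the following; this is an illustrative NumPy sketch, and the kernel layout $[k_t, k, k, c_{in}, c_{out}]$ is an assumption that may differ from the actual checkpoint format.
\begin{verbatim}
import numpy as np

def weights_by_temporal_offset(kernel):
    # kernel: array of shape [k_t, k, k, c_in, c_out].
    # Returns, for each temporal offset t (0 = center), the flattened
    # weights W_l(t, :, :, :), whose spread can be shown as a boxplot.
    k_t = kernel.shape[0]
    center = k_t // 2
    return {t - center: kernel[t].ravel() for t in range(k_t)}
\end{verbatim}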
\section{Related work}
\label{sec:related}
2D CNNs have achieved state of the art results for image classification,
so, not surprisingly,
there have been many recent attempts to extend these successes
to video classification.
The Inception 3D (I3D) architecture~\cite{carreira2017quo} proposed by Carreira and Zisserman is one of the current
state-of-the-art models. There are three key ingredients for its success: first, they ``inflate'' all the 2D convolution filters used by the Inception V1 architecture~\cite{InceptionV1} into 3D convolutions, and carefully choose the temporal kernel size in the earlier layers. Second, they initialize the inflated model weights by duplicating weights
that were pre-trained on ImageNet classification
over the temporal dimension. Finally, they train the network on
the large-scale Kinetics dataset~\cite{kay2017kinetics}.
Unfortunately, 3D CNNs are computationally expensive, so there has been recent interest in more efficient variants.
In concurrent work, \cite{Tran2018}
has recently proposed a variety of models based on top
of the ResNet architecture \cite{resnet}.
In particular, they consider models that use 3D convolution in either the bottom or top layers, and 2D in the rest; they call these ``mixed convolutional'' models.
This is similar to our top-heavy and bottom-heavy models.
They conclude that bottom heavy networks are more accurate,
which contradicts our finding.
However, the differences they find between top heavy and bottom heavy are fairly small,
and are conflated with changes in computational complexity.
By studying the entire speed-accuracy tradeoff curve (of Inception variants),
we show that there are clear benefits to using a top-heavy design for a given computational budget (see Section~\ref{sec:some3d}).
Another way to save computation is to replace 3D convolutions with separable convolutions, in which we first convolve spatially in 2D, and then convolve temporally in 1D.
We call the resulting model S3D.
This factorization is similar in spirit to the depth-wise separable convolutions
used in \cite{xception,mobilenet,resnext}, except that we apply the idea
to the temporal dimension instead of the feature dimension.
This idea has been used in a variety of recent papers,
including \cite{Tran2018} (who call it ``R(2+1)D''),
\cite{P3D} (who call it ``Pseudo-3D network''),
\cite{Sun2015} (who call it ``factorized spatio-temporal convolutional networks''),
etc.
We use the same method, but combine it with both top-heavy and bottom-heavy designs, which is a combination that leads to a very efficient video classification system.
We show that the gains from separable convolution are complementary to the gains from using a top-heavy design (see Section~\ref{sec:sepconv}).
An efficient way to improve accuracy is to use feature gating, which captures dependencies between feature channels with a simple but effective multiplicative transformation.
This can be viewed as an efficient approximation to second-order pooling
as shown in \cite{Girdhar_17b_AttentionalPoolingAction}.
Feature gating has been used for many tasks,
such as machine translation~\cite{dauphin2016language},
VQA~\cite{perez2017film},
reinforcement learning~\cite{elfwing2018sigmoid}, image classification~\cite{ramachandran2017swish,hu2017squeeze},
and action recognition~\cite{miech2017learnable}.
We consider a variant of the above techniques in
which we place the feature gating module after each of the temporal convolutions in an S3D network,
and show that this results in substantial gains in accuracy (see Section~\ref{sec:gating}).
Another way to improve accuracy (at somewhat higher cost)
is to use precomputed optical flow features.
This idea was successfully used in \cite{TwoStreamVGG},
who proposed a two-stream architecture where one CNN stream handles raw RGB input, and the other
handles precomputed optical flow.
Since then, many video classification methods follow the same multi-stream 2D CNN design, and have made improvements in terms of new representations~\cite{bilen2016dynamic,bilen2017action}, different backbone architecture~\cite{FeichtenhoferCVPR17,TwoStreamCUHK,ng_cvpr15,Girdhar_17b_AttentionalPoolingAction}, fusion of the streams~\cite{Feichtenhofer16,feichtenhofer2017,FeichtenhoferNIPS16,chained_fusion} and exploiting richer temporal structures~\cite{lrcn2014,Wang_Transformation,tsn_wang_eccv16}.
We will study the benefits of using optical flow in Section~\ref{sec:flow}.
\subsection{Separating temporal convolution from spatial convolutions}
\label{sec:sepconv}
In this section, we study the effect of replacing standard 3D convolution with a
factored version which disentangles this operation into a temporal part and a spatial part.
In more detail, our method is to
replace each 3D convolution with two consecutive convolution layers: one 2D convolution layer to learn spatial features, followed by a 1D convolution layer purely on the temporal axis.
This can be implemented by running two 3D convolutions, where the first (spatial) convolution has filter shape $[1,k,k]$ and the
second (temporal) convolution has filter shape $[k, 1, 1]$.
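The following PyTorch-style sketch illustrates such a factored block; it is for exposition only (our models are implemented in TensorFlow), and the normalization and activation placement here are assumptions.
\begin{verbatim}
import torch.nn as nn

def sep_conv3d(in_ch, out_ch, k, k_t):
    # A [1, k, k] spatial convolution followed by a [k_t, 1, 1]
    # temporal convolution, replacing a full [k_t, k, k] 3D kernel.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k),
                  padding=(0, k // 2, k // 2)),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=(k_t, 1, 1),
                  padding=(k_t // 2, 0, 0)),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )
\end{verbatim}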
By applying this factorization to I3D, we obtain a
model which we refer to as S3D. For a detailed illustration of the architecture, please refer to Figure~\ref{fig:s3d}.\footnote{
There are 4 branches in an Inception block, but only two of them have $3\times 3$ convolutions (the other two being pointwise $1\times 1$ convolutions), as shown in Figure~\ref{fig:inc_blocks}.
As such, when I3D inflates the convolutions to 3D, only some of the features contain temporal information.
However, by using separable temporal convolution, we can add temporal information to all 4 branches.
This improves the performance from $78.4\%$ to $78.9\%$ on \MK.
In the following sections, whenever we refer to an S3D model, we mean S3D with this configuration.
}
\begin{figure}[!htp]
\begin{center}
\includegraphics[height=30mm]{figures/s3dg.pdf}
\end{center}
\caption{
An illustration of the S3D model.
Dark red boxes are
temporal separable convolutions (sep-conv),
and pink boxes are temporal separable
inception blocks, shown in
Figure~\ref{fig:sep_inc_block}.
}
\label{fig:s3d}
\end{figure}
\label{sec:resultsSeparable}
Table~\ref{tab:FK}
compares the results of S3D and I3D on Kinetics-Full. Table~\ref{tab:something} shows that S3D also outperforms I3D on the Something-something\ dataset.
The results show that, despite a substantial compression in model size
(12.06M parameters for I3D reduced to 8.77M for S3D),
and a large speed-up ($107.9$ GFLOPS for I3D reduced to $66.38$ GFLOPS for
S3D), the separable model is even more accurate
(top-1 accuracy improved from 71.1\% to 72.2\% for Kinetics-Full, and from 45.8\% to 47.3\% for Something-something).
We believe the gain in accuracy arises because the spatio-temporal factorization reduces overfitting without sacrificing the expressiveness of the representation; indeed, we find that simply reducing the number of parameters of the network does not improve performance.
Note that we can apply this separable transformation to any place where 3D convolution is used;
thus this idea is orthogonal to the question of which layers should contain 3D convolution, which we discussed
in Section~\ref{sec:pyramids}.
We denote
the separable version of the
Bottom-Heavy-I3D\ models by Bottom-Heavy-S3D,
and the separable version of the
Top-Heavy-I3D\ models by Top-Heavy-S3D,
thus
giving us 4 families of models.
We plot the speed vs accuracy of these models
in Figure~\ref{fig:accuracyVsFlops}. We see that separable top-heavy models offer the best speed-accuracy trade-off.
In particular, the model in which we keep the top 2 layers as separable 3D convolutions,
and make the rest 2D convolutions,
seems to be a kind of ``sweet spot''.
We call this model ``Fast-S3D'',
since it is 2.5 times more efficient than I3D
(43.47 vs 107.9 GFLOPS), and yet has comparable
accuracy (78.0\% vs 78.4\% on \MK).
\begin{figure}[htp]
\begin{center}
\includegraphics[height=59mm]{figures/tsne_new.pdf}
\end{center}
\caption{tSNE projection of activation maps
derived from images in the Something-something\ dataset.
Colors and numbers represent the 10 action groups defined in \cite{Something}.
The top row shows increasing semantic separation as we move to higher layers of S3D.
The bottom row shows activations at level Max5a for 4 different models.
We see that Top-Heavy-S3D has better semantic separation
than Bottom-Heavy-S3D, especially for visually similar categories inside the red box.
}
\label{fig:tSNE}
\end{figure}
\subsection{tSNE analysis of the features}
\label{sec:tSNE}
Here we explore the spatiotemporal representations learned by different levels
of the S3D model
on the Something-something\ dataset,
using the tool of tSNE projection \cite{tSNE}.
The behavior of the I3D models is very similar.
Instead of using samples from all 174 categories, we use a smaller vocabulary, namely the ``10 action groups'' defined in \cite{Something}.\footnote{
The labels are as follows.
0: Dropping [something],
1: Moving [something] from right to left,
2: Moving [something] from left to right,
3: Picking [something],
4: Putting [something],
5: Poking [something],
6: Tearing [something],
7: Pouring [something],
8: Holding [something],
9: Showing [something].
} %
We sample approximately 2,200 data points from the validation set.
In Figure~\ref{fig:tSNE}, the top row shows representations learned by a S3D model, at levels from Max3a to Max5c.
The class separation becomes increasingly clear at higher levels.
The bottom row shows representations learned at a single feature level (Max5a) across different models, including I2D, Bottom-Heavy-S3D and Top-Heavy-S3D (both with a 2D-3D transition point at the Max4b layer),
as well as a full S3D model. Comparing the bottom-heavy and top-heavy models, for subtle actions such as ``3: Picking'', ``4: Putting'' and ``5: Poking'' something, the representations learned with a top-heavy model are more discriminative than those from a bottom-heavy model, leading to better class separation in the tSNE projection (highlighted with the red box). A top-heavy model can learn features that are as good as those learned with a full 3D model, and significantly better than those from the 2D model, without much sacrifice in processing speed.
This observation further supports our hypothesis that temporal information modeling is most effective at top levels in the feature hierarchy for action classification tasks.
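The projections in Figure~\ref{fig:tSNE} can be produced with standard tooling; the following scikit-learn sketch is purely illustrative, and its sampling details are assumptions rather than our exact procedure.
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE

def project_activations(features, n_samples=2200, seed=0):
    # features: [N, D] pooled activations from one network level.
    # Returns 2-D tSNE coordinates for a random subset of points.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(features),
                     size=min(n_samples, len(features)),
                     replace=False)
    return TSNE(n_components=2).fit_transform(features[idx])
\end{verbatim}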
\begin{table}[!htp]
\begin{center}
\begin{tabular}{c|c|c|c|c}
\hline
Model & Top-1 (\%) & Top-5 (\%) & Params (M) & FLOPS (G) \\
\hline
I3D & 71.1 & 89.3 & $12.06$ & $107.89$ \\
\Sthree & 72.2 & 90.6 & $8.77$ & $66.38$ \\
\SG & {\bf 74.7} & {\bf 93.4} & $11.56$ & $71.38$ \\
\hline
\end{tabular}
\end{center}
\caption{Effect of separable convolution and feature gating on the Kinetics-Full\ validation set using RGB features.
}
\label{tab:FK}
\end{table}
\begin{table}[!htp]
\begin{center}
\begin{tabular}{c|c|c|c|c}
\hline
Model & Backbone & Val Top-1 (\%) & Val Top-5 (\%) & Test Top-1 (\%)\\
\hline
Pre-3D CNN + Avg~\cite{Something} & VGG-16 & - & - & 11.5 \\
Multi-scale TRN~\cite{zhou_trn} & Inception & 34.4 & 63.2 & 33.6 \\
\hline
I2D & Inception & 34.4 & 69.0 & - \\
I3D & Inception & 45.8 & 76.5 & - \\
\Sthree & Inception & 47.3 & 78.1 & - \\
\SG & Inception & {\bf 48.2} & {\bf 78.7} & {\bf 42.0} \\
\hline
\end{tabular}
\end{center}
\caption{
Effect of separable convolution and feature gating on
the Something-something\ validation and test sets using RGB features.
}
\label{tab:something}
\end{table}
\section{Network surgery}
\label{sec:surgery}
In this section, we report the results of various ``network surgery'' experiments, where we vary different aspects of the I3D model to study the effects on speed and accuracy.
\begin{figure}[!htp]
\begin{center}
\subfigure[I3D]{
\includegraphics[height=25mm]{figures/i3d.pdf}
\label{fig:detail_i3d}
}
\subfigure[I2D]{
\includegraphics[height=25mm]{figures/i2d.pdf}
\label{fig:detail_i2d}
}
\subfigure[Bottom-heavy I3D]{
\includegraphics[height=25mm]{figures/bottomheavy_i3d.pdf}
\label{fig:detail_bottomheavy_i3d}
}
\subfigure[Top-heavy I3D]{
\includegraphics[height=25mm]{figures/topheavy_i3d.pdf}
\label{fig:detail_topheavy_i3d}
}
\end{center}
\caption{
Network architecture details for (a) I3D, (b) I2D, (c) Bottom-Heavy and (d) Top-Heavy variants. $K$ indexes the spatio-temporal convolutional layers. The ``2D Inc.'' and ``3D Inc.'' blocks
refer to 2D and 3D inception blocks, defined in Figure~\ref{fig:inc_blocks}.}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[]{
\includegraphics[height=35mm]{figures/inc_block_2d.pdf}
\label{fig:inc_block_2d}
}\qquad
\subfigure[]{
\includegraphics[height=35mm]{figures/inc_block_3d.pdf}
\label{fig:inc_block_3d}
}\qquad
\subfigure[]{
\includegraphics[height=35mm]{figures/sep_inc_block.pdf}
\label{fig:sep_inc_block}
}
\end{center}
\caption{\subref{fig:inc_block_2d} 2D Inception block;
\subref{fig:inc_block_3d} 3D Inception block;
\subref{fig:sep_inc_block} 3D temporal separable Inception block used in S3D networks.
}
\label{fig:inc_blocks}
\end{figure}
\input{pyramids}
\input{sepconv}
\input{gating}
\section{Generalization to other modalities, data and tasks}
In this section, we evaluate the generality and robustness of the proposed \SG\ architecture by conducting transfer learning experiments on different input modalities, video datasets, and tasks.
\subsection{Using optical flow features}
\label{sec:flow}
We first verify whether \SG\ also works with optical flow inputs.
For these experiments, we follow the standard setup described in~\cite{carreira2017quo} and extract optical flow features with the TV-L1 approach~\cite{tvl1}. We truncate the flow magnitudes to $[-20, 20]$ and store them as encoded JPEG files. The other experimental settings are the same as in the RGB experiments. From Table~\ref{tab:flow}, we can see that the improvement of \SG\ over I3D is consistent
with the gain we saw with RGB inputs,
bringing the performance up from $63.91\%$ to $68.00\%$.
By ensembling
the two streams of RGB and flow, we obtain
a performance of 77.22\%,
which is a 3\% boost over the I3D network when trained on the same data. We note that even though we focus on the speed-accuracy trade-offs in action classification network design, the performance is competitive with recent Kinetics Challenge winners and concurrent work; notably, \cite{kinetics_winner} and \cite{wang2018non} use heavier backbone architectures (e.g., ResNet-101 has about 8.5$\times$ more FLOPS than our S3D-G architecture).
\begin{table*}[htp]
\begin{center}
\begin{tabular}{c|c|c|c|c|c}
\hline
Model & Inputs & Backbone & Pre-train & Top-1 (\%) & Top-5 (\%)\\
\hline
NL I3D \cite{wang2018non} & RGB & ResNet-101 & ImNet & {\bf 77.7} & {\bf 93.3} \\
SAN \cite{kinetics_winner} & RGB+Flow+Audio & Inception-ResNet-v2 & ImNet & {\bf 77.7} & {\bf 93.2} \\
TSN \cite{tsn_wang_eccv16} & RGB+Flow & Inception & ImNet & 73.9 & 91.1 \\
ARTNet \cite{ARTNet} & RGB+Flow & ResNet-18 & ImNet & 72.4 & 90.4\\
R(2+1)D \cite{Tran2018} & RGB+Flow & ResNet-34 & Sports-1M & 75.4 & 91.9 \\
\hline
I3D & Flow & Inception & ImNet & 63.9 & 85.0 \\
I3D & RGB & Inception & ImNet & 71.1 & 89.3 \\
I3D & RGB+Flow & Inception & ImNet & 74.1 & 91.6 \\
\hline
\SG & Flow & Inception & ImNet & 68.0 & 87.6 \\
\SG & RGB & Inception & ImNet & 74.7 & {\bf 93.4} \\
\SG & RGB+Flow & Inception & ImNet & {\bf 77.2} & {\bf 93.0} \\
\hline
\end{tabular}
\end{center}
\caption{Benefits of using optical flow.
We report results
on the Kinetics-Full\ validation set.
We report I3D performance based on our implementation, as~\cite{carreira2017quo} only report results on the held-out test set (where they get a top-1 accuracy of 74.2\% using RGB+flow and ImNet pretraining).
}
\label{tab:flow}
\end{table*}
\subsection{Fine-tuning on other video classification datasets}
\label{sec:otherClassification}
Next we conduct transfer learning experiments from Kinetics to other video classification datasets,
namely HMDB-51~\cite{hmdb_kuehne2011} and UCF-101~\cite{ucf101}.
HMDB-51 contains around 7,000 videos spanning 51 categories, while UCF-101 has 13,320 videos spanning 101 categories.
Both datasets consist of short video clips that are temporally trimmed, and contain
3 training and validation splits.
We follow the standard setup as used in previous work and report average accuracy across all splits.
For our transfer learning experiments, we use the same setup as training on Kinetics, but change the number of GPUs to 8 and lower the learning rate to 0.01 for 6K steps, and 0.001 for another 2K steps.
For simplicity, we only use RGB (no optical flow).
Table~\ref{tab:cls-transfer} shows the results of this experiment. On UCF-101, our proposed \SG\ architecture,
which only uses Kinetics for pretraining,
outperforms I3D,
and matches R(2+1)D, both of which use large-scale datasets (Kinetics and Sports-1M) for pretraining.
On HMDB-51, we outperform all previous methods published to date.
\begin{table}[pt!]
\small
\begin{center}
\begin{tabular}{c|c|c|c|c}
\hline
Model & Inputs & Pre-train & UCF-101 & HMDB-51 \\
\hline
P3D~\cite{Qiu_2017_ICCV} & RGB & Sports-1M & 88.6 & - \\
C3D~\cite{TranC3D} & RGB & Sports-1M & 82.3 & 51.6 \\
Res3D~\cite{TranR3D} & RGB & Sports-1M & 85.8 & 54.9 \\
ARTNet w/ TSN~\cite{ARTNet} & RGB & Kinetics & 94.3 & 70.9 \\
I3D~\cite{carreira2017quo} & RGB & ImNet+Kinetics & 95.6 & 74.8 \\
R(2+1)D \cite{Tran2018} & RGB & Kinetics & {\bf 96.8} & 74.5 \\
\hline
\SG & RGB & ImNet+Kinetics & {\bf 96.8} & {\bf 75.9} \\
\hline
\end{tabular}
\end{center}
\caption{Results of various methods on action classification on the UCF-101 and HMDB-51 datasets.
All numbers are computed as the average accuracy across three splits.
}
\label{tab:cls-transfer}
\end{table}
\subsection{Spatio-temporal action detection in video}
\label{sec:detection}
Finally, we demonstrate the effectiveness of \SG\ on action detection tasks, where the inputs are video frames, and the outputs are bounding boxes associated with action labels on the frames.
Similar to the framework proposed in~\cite{peng2016multi}, we use the Faster-RCNN~\cite{ren2015faster} object detection algorithm to jointly perform person localization and action recognition. We use the same approach as described in~\cite{ava} to incorporate temporal context information via 3D networks. To be more specific, the model uses a 2D ResNet-50~\cite{resnet} network that takes the annotated keyframe (the frame with box annotations) as input and extracts features for region proposal generation on the keyframe. We then use a 3D network (such as I3D or \SG) that takes the frames surrounding the keyframe as input and extracts feature maps, which are then pooled for bounding box classification. The 2D region proposal network (RPN) and the 3D action classification network are jointly trained end-to-end. Note that we extend the ROIPooling operation to handle 3D feature maps by simply pooling at the same spatial locations over all time steps.
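One way to realize this 3D extension is sketched below; this is illustrative PyTorch/torchvision code, not our detection implementation, it uses torchvision's roi\_align as a stand-in for ROIPooling, and whether the time axis is averaged before or after cropping is an assumption here.
\begin{verbatim}
import torch
from torchvision.ops import roi_align

def roi_pool_3d(features, boxes, output_size):
    # features: [N, C, T, H, W]; boxes: list of [num_boxes, 4]
    # tensors, one per clip.  The same spatial ROI is cropped at
    # every time step and the crops are averaged over time.
    n, c, t, h, w = features.shape
    per_step = [roi_align(features[:, :, ti], boxes, output_size)
                for ti in range(t)]
    return torch.stack(per_step, dim=0).mean(dim=0)
\end{verbatim}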
We report performance on two widely adopted video action detection datasets: JHMDB~\cite{jhmdb} and UCF-101-24~\cite{ucf101}. The JHMDB dataset is a subset of HMDB-51; it consists of 928 videos covering 21 action categories, and each video clip contains 15 to 40 frames. UCF-101-24 is a subset of UCF-101 with 24 labels and 3,207 videos; we use the cleaned bounding box annotations from~\cite{micro_tube2017}. We report performance using the standard frame-AP metric defined in~\cite{gkioxari2015}, which is computed as the average precision of action detection over all individual frames, at an intersection-over-union (IoU) threshold of 0.5. As is common in previous work, we report average performance over three splits of JHMDB and the first split of UCF-101-24.
Our implementation is based on the TensorFlow Object Detection API~\cite{huang2016speed}. We train Faster-RCNN with asynchronous SGD on 11 GPUs for 600K iterations. We fix the input resolution to $320\times400$ pixels. For both training and validation, we fix the size of the temporal context to 20 frames. All the other model parameters are set based on the recommended values from~\cite{huang2016speed}. The ResNet-50 networks are initialized with ImageNet pre-trained models, and I3D and \SG are pre-trained on Kinetics. We extract 3D feature maps from the ``\textit{Mixed 4e}'' layer, which has a spatial stride of 16.
Table~\ref{tab:det-transfer} shows the comparison between I3D, \SG, and other state-of-the-art methods. We can see that both 3D networks outperform previous architectures by large margins, while \SG~is consistently better than I3D.
\begin{table}[!htp]
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
Model & Inputs & JHMDB & UCF-101-24 \\
\hline
Gkioxari and Malik~\cite{gkioxari2015} &RGB+Flow & 36.2 & - \\
Weinzaepfel \emph{et al}.~\cite{weinzaepfel2015} &RGB+Flow & 45.8 & 35.8 \\
Peng and Schmid~\cite{peng2016multi} & RGB+Flow & 58.5 & 65.7 \\
Kalogeiton \emph{et al}.~\cite{kalogeiton17iccv} & RGB+Flow & 65.7 & 69.5 \\
\hline
Faster RCNN + I3D~\cite{ava} &RGB+Flow & 73.2 & 76.3 \\
Faster RCNN + \SG & RGB+Flow & {\bf 75.2} & {\bf 78.8} \\
\hline
\end{tabular}
\end{center}
\caption{Results of various methods on action detection on the JHMDB and UCF-101-24 datasets.
We report frame-mAP at IoU threshold of 0.5 on JHMDB (all splits) and UCF-101-24 (split 1) datasets.
}
\label{tab:det-transfer}
\end{table}
\section{Introduction}
\label{sec:intro}
Voice conversion (VC) refers to a technique that converts a certain aspect of speech from a source to that of a target without changing the linguistic content \cite{VC, GMM-VC}. In this work, we focus on speaker conversion, which is the most widely investigated type of VC.
From an information perspective, VC can be performed by first extracting the spoken contents from the source speech, and then synthesizing the converted speech from the extracted contents with the identity of the target speaker.
Such a paradigm is sometimes referred to as recognition-synthesis (rec-syn) based VC, as depicted in Figure~\ref{fig:framework}.
Formally, starting from the source speech $\mathbf{X}$, a recognizer first extracts the spoken contents, $\mathbf{H}$, which is then consumed by the synthesizer to generate the converted speech, $\mathbf{Y}$:
\begin{equation}
{\mathbf{Y}} = {\text{Synth}}({\mathbf{H}}), {\mathbf{H}}={\text{Recog}}({\mathbf{X}}). \label{eq:formulation}
\end{equation}
In the latest Voice Conversion Challenge 2020 (VCC2020) \cite{vcc2020}, one of the baselines directly concatenated an automatic speech recognition (ASR) model and a text-to-speech (TTS) model \cite{vcc2020-asr-tts}. In addition, several top-performing systems also implemented such a framework \cite{vcc2020-task1-top}, showing state-of-the-art performance in terms of both naturalness and similarity.
In rec-syn based VC, an ASR model trained on a labeled dataset is often used to extract the \textit{supervised} spoken content representation, such as text \cite{vcc2020-asr-tts} or phonetic posteriorgram (PPG) \cite{VC-PPG}. The collection of labeled datasets is often costly, especially in a low-resource setting, such as the cross-lingual VC scenario \cite{vcc2020}. Therefore, researchers have resorted to the unsupervised, or so-called self-supervised, speech representation (S3R) learning paradigm, where large-scale unlabeled data are used to learn rich, compact speech representations. S3Rs have been applied to any-to-one VC \cite{vqw2v-vc}, many-to-many VC \cite{speech-resynthesis}, any-to-any VC \cite{fragmentvc, s2vc} and cross-lingual VC \cite{prosody-asr-tts}.
In addition to its label-free property, S3R-based VC is also attractive because it serves as a good probing task for S3R analysis. The recently published SUPERB benchmark \cite{superb} is dedicated to comparing different S3Rs across a range of \textit{discriminative} speech processing tasks, but it remains unclear which representations are optimal for \textit{generation} tasks like VC. For instance, wav2vec 2.0 \cite{wav2vec2} has been shown to be powerful in not only ASR but also speaker and language recognition \cite{wav2vec2-sid-lid}, implying that it encodes rich content, speaker and language information. Based on the discussion of the information perspective of VC, we may hypothesize that a good ${\mathbf{H}}$ in Eq.~\ref{eq:formulation} should be compact in content but contain little to no speaker information. Under this assumption, wav2vec 2.0 may not be an optimal representation for VC.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{framework_v3.png}
\caption{The training and conversion procedures in any-to-one recognition-synthesis based VC. \label{fig:framework}}
\vspace{-0.5cm}
\end{figure}
In this paper, we describe S3PRL-VC, an extension of the S3PRL toolkit and SUPERB.
Our main focus was any-to-one (A2O) VC, where the synthesizer is trained in a target-speaker-dependent fashion. We used the VCC2020 dataset, which allows us to test intra-lingual and cross-lingual settings. We also provide an any-to-any (A2A) extension by using an off-the-shelf d-vector \cite{d-vector} model to encode the unseen speaker information.
We implemented models resembling the top systems in VCC2018 \cite{VC-WNV-adapt} and VCC2020 \cite{vcc2020-task2-top}, which allows us to focus on the comparison.
We conducted a large-scale evaluation, both objectively and subjectively, to compare the performance between not only different S3Rs but also state-of-the-art systems.
S3PRL-VC is a competitive system by yielding (1) a comparable performance with VCC2020 top systems in the A2O setting in terms of similarity, and (2) state-of-the-art performance in S3R-based A2A VC.
Our main contributions are:
\begin{itemize}
\item Inheriting the properties of SUPERB, our S3PRL-VC implementation ensures not only fast benchmarking but also state-of-the-art performance. Such a fast, easy-to-use property benefits not only S3R researchers but also the VC community.
\item We present a large-scale comparison of S3Rs from the VC point of view, providing new insights and perspectives for analyzing the representations. We also compared with the top systems in VCC2020 that used PPGs, showing both the limitations and the competitiveness of S3Rs.
\end{itemize}
\vspace{-0.3cm}
\section{Tasks}
\subsection{General description of VCC2020}
All experiments in this work are benchmarked on the VCC2020 dataset \cite{vcc2020}. There are two tasks in VCC2020, with intra-lingual VC being task 1 and cross-lingual VC being task 2.
The two tasks share the same source speakers: two male and two female English speakers. The target speakers include two male and two female English speakers for task 1, and one male and one female speaker each of Finnish, German, and Mandarin for task 2.
For each speaker, 70 utterances (roughly five minutes) in their respective languages are provided (the contents differ across speakers), and there are 25 test sentences for evaluation.
During conversion, ${\mathbf{X}}$ (which is in English) is converted as if it were uttered by the target speaker while keeping the linguistic content unchanged.
\subsection{Intra-lingual and cross-lingual any-to-one VC}
We first consider the two tasks in VCC2020 under the A2O setting. Any-to-one VC aims to convert from any arbitrary speech into that of a predefined target speaker. The training and conversion processes are depicted in Figure~\ref{fig:framework}. The ability to encode ${\mathbf{H}}$ from any unseen speaker is ensured by the common practice of training S3Rs on a multi-speaker dataset. Using the target speaker dataset, ${\mathbf{D}_\text{trg}}$, the synthesizer is trained to reconstruct the acoustic feature from ${\mathbf{H}}$. In the conversion phase, the converted features, ${\mathbf{Y}}$, are generated following Eq.~\ref{eq:formulation}.
Finally, a waveform synthesizer (ex. neural vocoder) generates the converted waveform.
Any-to-one VC is a good probing task to investigate several characteristics of an upstream S3R model. First, a fundamental requirement of VC is the linguistic consistency, so there is a positive correlation between the VC performance of an S3R model and its ability to faithfully encode ${\mathbf{H}}$. Second, if an S3R model encodes rich speaker information, then the source speaker information in ${\mathbf{X}}$ will conflict with the target speaker attributes injected by the synthesizer, which hurts the VC performance. Finally, during the synthesizer training in cross-lingual VC, the S3R model may fail to generalize to ${\mathbf{X}}$ from a non-English target speaker since most existing S3R models are trained with English datasets only. It is worthwhile to examine the ability of mono-lingual S3R models to transfer to different languages.
\subsection{Intra-lingual any-to-any VC}
We then provide an extension for the A2A scenario, also known as zero-shot VC. A2A VC attempts to convert to a target speaker for whom ${\mathbf{D}_\text{trg}}$ is so limited (less than one minute) that fine-tuning is infeasible. A2A VC models are usually trained on a multi-speaker dataset. Instead of recovering the target speaker information with the synthesizer as in A2O VC, we use speaker embeddings, ${\mathbf{s}}$, extracted by an off-the-shelf speaker encoder, which is pretrained on an automatic speaker verification (ASV) dataset and objective. Such a paradigm is also used in zero-shot TTS \cite{adaptation-verification}. In training, the speaker embedding extracted from the target waveform is used. During conversion, given ${\mathbf{D}_\text{trg}}$, ${\mathbf{s}}$ is computed as the average of the embeddings of its utterances. We may then rewrite Eq.~\ref{eq:formulation} as:
\begin{equation}
\vspace{-0.1cm}
{\mathbf{Y}} = {\text{Synth}}({\mathbf{H}}, {\mathbf{s}}), {\mathbf{H}}={\text{Recog}}({\mathbf{X}}), {\mathbf{s}}={\text{SpkEnc}}({\mathbf{D}_\text{trg}}). \label{eq:a2a-formulation}
\end{equation}
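The conversion step of Eq.~\ref{eq:a2a-formulation} can be summarized by the following illustrative Python sketch; the function handles are placeholders for the pretrained recognizer, synthesizer, and d-vector speaker encoder, not actual APIs.
\begin{verbatim}
import numpy as np

def convert_a2a(recog, synth, spk_enc, source_wav, target_wavs):
    # Content comes from the source utterance; the speaker identity
    # is the average embedding over the few target utterances D_trg.
    h = recog(source_wav)                                # H = Recog(X)
    s = np.mean([spk_enc(w) for w in target_wavs], axis=0)
    return synth(h, s)                                   # Y = Synth(H, s)
\end{verbatim}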
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{all_models_dvec}
\caption{The models implemented in this work. Left: the simple model. Middle: the simple model with an AR loop. Right: the Tacotron2 model, with extension to an any-to-any model by accepting a d-vector as the speaker embedding. \label{fig:models}}
\vspace{-0.5cm}
\end{figure}
\begin{table*}[ht!]
\centering
\footnotesize
\caption{Objective evaluation results on different VC settings over various S3Rs. For MCD and WER, the smaller the better; for ASV, the higher the better.}
\label{tab:obj}
\begin{tabular}{|l||r|r|r|r|r|r|r|r|r|r|r|r|r|r|}
\hline
\multirow{3}{*}{Upstream} & \multicolumn{9}{c|}{Intra-lingual A2O} & \multicolumn{2}{c|}{Cross-lingual A2O} & \multicolumn{3}{c|}{Intra-lingual A2A} \\ \cline{2-15}
& \multicolumn{3}{c|}{Simple} & \multicolumn{3}{c|}{Simple-AR} & \multicolumn{3}{c|}{Taco2-AR} & \multicolumn{2}{c|}{Taco2-AR} & \multicolumn{3}{c|}{Taco2-AR} \\ \cline{2-15}
& MCD & WER & ASV & MCD & WER & ASV & MCD & WER & ASV & WER & \multicolumn{1}{r|}{ASV} & MCD & WER & ASV \\ \hline \hline
mel & 8.41 & 48.5 & 59.00 & 8.92 & 22.7 & 49.75 & 8.47 & 38.3 & 77.25 & 39.0 & 46.67 & 9.49 & 4.2 & 19.50 \\
PPG (TIMIT) & 7.78 & 69.0 & 85.50 & 7.83 & 58.9 & 95.25 & 7.18 & 33.6 & 99.75 & 51.0 & 84.67 & \textbf{8.31} & 12.9 & \textbf{83.50} \\ \hline
PASE+ & 9.29 & 5.0 & 26.75 & 9.52 & 5.7 & 26.00 & 8.66 & 30.6 & 63.20 & 36.3 & 34.67 & 9.85 & 4.2 & 8.00 \\
APC & 8.67 & 8.6 & 48.00 & 8.73 & 7.1 & 41.75 & 8.05 & 27.2 & 87.25 & 33.9 & 52.33 & 9.57 & 3.5 & 23.25 \\
VQ-APC & 8.12 & 10.8 & 81.25 & 8.37 & 7.4 & 60.50 & 7.84 & 22.4 & 94.25 & 28.4 & 68.00 & 9.43 & 4.0 & 22.00 \\
NPC & 7.74 & 39.0 & 92.75 & 8.15 & 21.1 & 76.75 & 7.86 & 30.4 & 94.75 & 37.6 & 59.00 & 9.39 & 4.4 & 21.00 \\
Mockingjay & 8.58 & 31.3 & 51.00 & 8.74 & 9.5 & 47.00 & 8.29 & 35.1 & 79.75 & 39.2 & 46.00 & 9.43 & 5.0 & 25.00 \\
TERA & 8.60 & 11.4 & 46.50 & 8.67 & 6.0 & 42.50 & 8.21 & 25.1 & 83.75 & 29.2 & 49.33 & 9.31 & 5.2 & 18.75 \\
Modified CPC & 8.71 & 9.4 & 40.00 & 8.87 & 7.0 & 30.00 & 8.41 & 26.2 & 71.00 & 35.3 & 32.83 & 9.61 & 4.1 & 10.75 \\
DeCoAR 2.0 & 8.31 & 7.4 & 54.75 & 8.33 & 6.4 & 53.00 & 7.83 & 17.1 & 90.75 & 26.8 & 59.33 & 9.28 & 4.0 & 27.00 \\
wav2vec & 7.45 & 14.0 & \textbf{95.50} & 7.64 & 4.9 & 90.50 & 7.45 & 10.1 & 98.25 & 13.9 & 75.83 & 8.77 & 3.5 & 40.00 \\
vq-wav2vec & \textbf{7.41} & 13.4 & 91.00 & \textbf{7.24} & 11.6 & \textbf{98.75} & \textbf{7.08} & 13.4 & \textbf{100.00} & 21.0 & \textbf{88.83} & 8.47 & 4.2 & 73.25 \\
wav2vec 2.0 Base & 7.80 & 24.7 & 92.75 & 7.77 & 5.0 & 86.50 & 7.50 & 10.5 & 98.00 & 14.9 & 82.17 & 9.03 & 3.2 & 27.00 \\
wav2vec 2.0 Large & 7.64 & 12.5 & 81.75 & 7.67 & 9.0 & 82.75 & 7.63 & 15.8 & 97.25 & 22.7 & 78.00 & 8.99 & 4.1 & 22.25 \\
HuBERT Base & 7.70 & \textbf{5.5} & 89.25 & 7.79 & \textbf{4.7} & 84.25 & 7.47 & \textbf{8.0} & 98.50 & \textbf{13.5} & 82.33 & 9.19 & 3.4 & 23.25 \\
HuBERT Large & 7.54 & 5.6 & 95.00 & 7.54 & 5.6 & 93.00 & 7.22 & 9.0 & 99.25 & 15.9 & 86.50 & 9.13 & \textbf{3.0} & 27.75 \\
\hline
\end{tabular}
\vspace{-0.5cm}
\end{table*}
\vspace{-0.4cm}
\section{Implementation}
\subsection{Recognizer (upstream models)}
Table~\ref{tab:obj} lists the S3Rs we compared in this work, which are the upstream models supported in S3PRL at the time of publication. For a complete list of information (architecture, objective, etc.), refer to \cite{superb}.
All upstreams are trained with English data (mostly Librispeech).
In addition to the S3Rs, two extra upstreams were included: (1) the mel-spectrogram, ``mel'', and (2) ``PPG (TIMIT)'', which is trained in a supervised manner on the TIMIT dataset.
\subsection{Synthesizer model design}
\label{ssec:synthesizer}
Mel-spectrogram was selected as the target acoustic feature. We implemented several models to resemble top systems of past VCCs, as illustrated in Figure~\ref{fig:models}. We avoid expensive model components like attention \cite{transformer} for fast benchmarking.
\noindent\textbf{Simple:} We start from the model used by the top system in VCC2018 \cite{VC-WNV-adapt}. The simple model consists of a single-layer feed-forward network (FFN), two long short-term memory layers with projection (LSTMP), and a linear projection layer (a minimal sketch is given after these model descriptions).
\noindent\textbf{Simple-AR:} As autoregressive (AR) modeling has been shown to be effective in speech synthesis \cite{ar-rnn-mdn-spss}, we added an AR loop to the simple model. At each time step, the previous output is consumed by the first LSTMP layer. Dropout is essential in the AR loop to avoid exposure bias brought by teacher-forcing \cite{ar-f0-spss, Taco}.
\noindent\textbf{Taco2-AR:} We increase the model complexity by using a model architecture similar to that of Tacotron 2 \cite{Taco2}, which resembles the model used by the top system in VCC2020 \cite{vcc2020-task2-top}. Unlike Tacotron 2, we do not use the attention module, as it was reported not to be beneficial in \cite{vcc2020-task2-top}.
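As referenced above, a minimal PyTorch-style sketch of the Simple model is shown below for illustration; the hidden and projection sizes are placeholders rather than the values used in our experiments.
\begin{verbatim}
import torch.nn as nn

class SimpleSynthesizer(nn.Module):
    # FFN -> two LSTM layers with projection -> linear layer that
    # predicts the mel-spectrogram frame by frame.
    def __init__(self, in_dim, mel_dim, hidden=512, proj=256):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2,
                            proj_size=proj, batch_first=True)
        self.out = nn.Linear(proj, mel_dim)

    def forward(self, h):          # h: [B, T, in_dim] S3R features
        x = self.ffn(h)
        x, _ = self.lstm(x)        # [B, T, proj]
        return self.out(x)         # [B, T, mel_dim]
\end{verbatim}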
\subsection{Other setups}
\noindent\textbf{Any-to-any settings.} The dataset used to train the A2A VC model is the VCTK dataset \cite{vctk}. For the speaker encoder, we used the d-vector model \cite{d-vector} trained on a mix of datasets, including LibriSpeech, VoxCeleb 1 and 2.
\noindent\textbf{Waveform synthesizer.} We used the HiFi-GAN \cite{hifigan}, a state-of-the-art parallel real-time neural vocoder. For the A2O setup, we mixed the data of all 14 speakers in VCC2020 with the VCTK dataset, while for the A2A setup we used only the VCTK dataset.
\section{Evaluation metrics and protocols}
\subsection{Objective evaluation}
We chose three objective evaluation metrics, each of which measures a different aspect of a VC system. Mel cepstrum distortion (MCD) is an intrusive, L2-norm-based metric which measures the general conversion performance. Word error rate (WER) measures intelligibility and linguistic consistency; in this work we used a pretrained wav2vec 2.0 model. The accept rate from a pretrained ASV model measures speaker similarity, based on the cosine similarity between speaker embeddings. For scenarios like the cross-lingual A2O task, where the reference speech is not accessible, we report only WER and ASV since they are non-intrusive.
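For clarity, the ASV accept rate can be computed as in the following NumPy sketch; it is illustrative only, and the decision threshold comes from the pretrained ASV system rather than being specified here.
\begin{verbatim}
import numpy as np

def asv_accept_rate(converted_embs, target_embs, threshold):
    # Fraction of converted utterances whose speaker embedding has a
    # cosine similarity to the target speaker's average embedding
    # above the ASV decision threshold.
    target = np.mean(target_embs, axis=0)
    target = target / np.linalg.norm(target)
    accepts = 0
    for e in converted_embs:
        accepts += float(np.dot(e / np.linalg.norm(e), target) > threshold)
    return accepts / len(converted_embs)
\end{verbatim}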
\subsection{Subjective evaluation}
For the subjective test, we asked listening participants to evaluate two common aspects in VC: naturalness and similarity.
Listeners were asked to evaluate the naturalness on a five-point scale.
For conversion similarity, a natural target speech and a converted speech were presented, and listeners were asked to judge whether the two samples were produced by the same speaker on a four-point scale.
For each system, a total of 80 utterances (5 random $\times$ 16 conversion pairs) were evaluated. Recordings of the target speakers were also included in the naturalness test and served as the upper bound.
We used an open-source toolkit \cite{p808-open-source} that implemented the ITU-T Recommendation P.808 \cite{p808} to screen unreliable ratings obtained through the Amazon Mechanical Turk (Mturk). We recruited more than 280 listeners from the United States and had each sample rated by five different participants on average.
Audio samples are available online\footnote{\url{https://bit.ly/3oydaY2}}.
\vspace{-0.2cm}
\section{Evaluation results and discussions}
\subsection{Comparison of different models}
We first investigate the impact of using the different synthesizer models described in Section~\ref{ssec:synthesizer} in the intra-lingual A2O setting, as shown in Table~\ref{tab:obj}. First, simply adding the AR loop to the Simple model brings large improvements in WER for most S3Rs. With Taco2-AR, all S3Rs except PASE+ and modified CPC achieved an ASV accept rate higher than 80\%, while all S3Rs suffered from a degradation in WER.
This shows that increasing the model capacity can significantly improve the speaker similarity, while sacrificing the intelligibility.
However, we would like to emphasize that WER is a strict measurement of intelligibility, and human listeners can often recognize the content better than the ASR model used for evaluation.
On the other hand, the Taco2-AR model yields the best MCD scores, which, as we will show later, correlates better with subjective naturalness and similarity.
Also, we empirically found the training time of the three models similar.
Based on these reasons, we decided to use the Taco2-AR model for the succeeding tasks and comparisons.
\subsection{Results on different tasks}
Next, we compare the results of using S3Rs for different tasks. Looking again at Table~\ref{tab:obj}, we first find that S3Rs trained on a monolingual corpus can still work well in the cross-lingual setting, demonstrating their ability to transfer across languages.
However, compared with the intra-lingual A2O task, it can clearly be observed that all S3Rs degraded in terms of both the WER and the ASV accept rate, which is similar to the findings in \cite{vcc2020-prediction}. Finally, in the intra-lingual A2A setting, all S3Rs yielded WERs much lower than those in the A2O setting, while the MCD values and ASV accept rates were significantly worse. Even the best upstream, vq-wav2vec, yielded an accept rate of only $73.25\%$. One possible reason is that in the A2A VC setting, modern S3Rs still fail to disentangle the spoken content from the speaker information, such that the synthesizer preserves too much source speaker information. Another reason may be that a jointly trained speaker encoder \cite{s2vc} is essential for S3R-based VC.
\begin{table}[t]
\centering
\footnotesize
\caption{Comparison with state-of-the-art systems. All upstreams use the Taco2-AR model.}
\label{tab:comparison}
\begin{tabular}{|>{\scriptsize}l||r|r|r|r|c|c|}
\hline
System & MCD & WER & ASV & Naturalness & Similarity \\
\hline \hline
\multicolumn{6}{|c|}{Intra-lingual A2O} \\ \hline
mel & 8.47 & 38.3 & 77.25 & 2.61 $\pm$ 0.11 & 35$\%$ $\pm$ 3$\%$ \\
PPG (TIMIT)& 7.18 & 33.6 & 99.75 & 3.32 $\pm$ 0.10 & 58$\%$ $\pm$ 4$\%$ \\
\hline
PASE+ & 8.66 & 30.6 & 63.20 & 2.58 $\pm$ 0.12 & 31$\%$ $\pm$ 3$\%$ \\
APC & 8.05 & 27.2 & 87.25 & 2.92 $\pm$ 0.11 & 43$\%$ $\pm$ 4$\%$ \\
VQ-APC & 7.84 & 22.4 & 94.25 & 3.08 $\pm$ 0.10 & 40$\%$ $\pm$ 4$\%$ \\
NPC & 7.86 & 30.4 & 94.75 & 2.98 $\pm$ 0.11 & 46$\%$ $\pm$ 3$\%$ \\
Mockingjay & 8.29 & 35.1 & 79.75 & 2.81 $\pm$ 0.12 & 42$\%$ $\pm$ 4$\%$ \\
TERA & 8.21 & 25.1 & 83.75 & 2.91 $\pm$ 0.12 & 37$\%$ $\pm$ 4$\%$ \\
Modified CPC & 8.41 & 26.2 & 71.00 & 2.74 $\pm$ 0.11 & 33$\%$ $\pm$ 3$\%$ \\
DeCoAR 2.0 & 7.83 & 17.1 & 90.75 & 3.04 $\pm$ 0.11 & 43$\%$ $\pm$ 4$\%$ \\
wav2vec & 7.45 & 10.1 & 98.25 & 3.40 $\pm$ 0.05 & 52$\%$ $\pm$ 2$\%$ \\
vq-wav2vec & 7.08 & 13.4 & 100.00 & 3.59 $\pm$ 0.10 & 59$\%$ $\pm$ 4$\%$ \\
wav2vec 2.0 B. & 7.50 & 10.5 & 98.00 & 3.36 $\pm$ 0.06 & 51$\%$ $\pm$ 2$\%$ \\
wav2vec 2.0 L. & 7.63 & 15.8 & 97.25 & 3.26 $\pm$ 0.10 & 50$\%$ $\pm$ 4$\%$ \\
HuBERT B. & 7.47 & 8.0 & 98.50 & 3.48 $\pm$ 0.10 & 55$\%$ $\pm$ 4$\%$ \\
HuBERT L. & 7.22 & 9.0 & 99.25 & 3.47 $\pm$ 0.10 & 54$\%$ $\pm$ 4$\%$ \\
\hline
USTC-2018$\dagger$ & -- & 6.5 & 99.00 & 4.20 $\pm$ 0.08 & 55$\%$ $\pm$ 4$\%$ \\
USTC-2020 & 6.98 & 5.4 & 100.00 & 4.41 $\pm$ 0.07 & 82$\%$ $\pm$ 3$\%$ \\
SRCB & 8.90 & 11.5 & 92.00 & 4.16 $\pm$ 0.08 & 68$\%$ $\pm$ 3$\%$ \\
CASIA & 7.13 & 11.0 & 98.25 & 4.25 $\pm$ 0.08 & 61$\%$ $\pm$ 4$\%$ \\
ASR+TTS & 6.48 & 8.2 & 100.00 & 3.84 $\pm$ 0.09 & 75$\%$ $\pm$ 3$\%$ \\
\hline
Target & -- & 0.7 & -- & 4.57 $\pm$ 0.14 & -- \\
\hline \hline
\multicolumn{6}{|c|}{Cross-lingual A2O} \\ \hline
PPG (TIMIT)& -- & 51.0 & 84.67 & 2.79 $\pm$ 0.08 & 43$\%$ $\pm$ 3$\%$ \\
\hline
vq-wav2vec & -- & 21.0 & 88.83 & 3.28 $\pm$ 0.08 & 44$\%$ $\pm$ 3$\%$ \\
HuBERT L. & -- & 15.9 & 86.50 & 3.13 $\pm$ 0.08 & 41$\%$ $\pm$ 3$\%$ \\
\hline
USTC-2018 & -- & 5.6 & 97.67 & 4.17 $\pm$ 0.06 & 34$\%$ $\pm$ 3$\%$ \\
USTC-2020 & -- & 7.6 & 96.00 & 4.27 $\pm$ 0.07 & 43$\%$ $\pm$ 3$\%$ \\
SRCB & -- & 8.6 & 78.67 & 4.34 $\pm$ 0.07 & 34$\%$ $\pm$ 3$\%$ \\
CASIA & -- & 10.5 & 91.67 & 4.11 $\pm$ 0.07 & 45$\%$ $\pm$ 3$\%$ \\
ASR+TTS & -- & 34.5 & 67.83 & 2.51 $\pm$ 0.08 & 39$\%$ $\pm$ 3$\%$ \\
\hline
Target & -- & -- & -- & 4.48 $\pm$ 0.12 & -- \\
\hline \hline
\multicolumn{6}{|c|}{Intra-lingual A2A} \\ \hline
PPG (TIMIT) & 8.32 & 12.7 & 84.25 & 3.41 $\pm$ 0.08 & 34$\%$ $\pm$ 4$\%$ \\
\hline
vq-wav2vec & 8.47 & 4.2 & 73.25 & 3.58 $\pm$ 0.09 & 28$\%$ $\pm$ 3$\%$ \\
\hline
S2VC$\dagger$ & -- & 12.4 & 71.50 & 2.90 $\pm$ 0.09 & 29$\%$ $\pm$ 3$\%$ \\
\hline
\multicolumn{6}{l}{\makecell[l]{$\dagger$: Systems generate 16kHz, so MCD is not calculable and direct score\\comparison should be made with caution.}}\\
\end{tabular}
\vspace{-0.5cm}
\end{table}
\begin{table}[t]
\centering
\caption{Linear correlation coefficients between different metrics.}
\label{tab:correlation}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Metric & MCD & WER & ASV & Nat. & Sim. \\
\hline \hline
MCD & -- & 0.678 & -0.934 & -0.968 & -0.961 \\
WER & -- & -- & -0.640 & -0.808 & -0.587 \\
ASV & -- & -- & -- & 0.910 & 0.911 \\
Nat. & -- & -- & -- & -- & 0.932 \\
Sim. & -- & -- & -- & -- & -- \\
\hline
\end{tabular}
\vspace{-0.5cm}
\end{table}
\subsection{Comparing with top systems using subjective evaluation}
We then compared S3R-based VC models with state-of-the-art systems. \textbf{USTC-2018} \cite{VC-WNV-adapt}, \textbf{USTC-2020} \cite{vcc2020-task1-top, vcc2020-task2-top}\footnote{USTC's systems used text and PPG for the intra-lingual and cross-lingual tasks, respectively.}, \textbf{SRCB} \cite{vcc2020-srcb}, \textbf{CASIA} \cite{vcc2020-casia} were top systems in VCC2020, all of which adopted PPGs, synthesizer pretraining on a multi-speaker dataset, and AR vocoders. Notably, they used thousands of hours of internal data for training. \textbf{ASR+TTS} \cite{vcc2020-asr-tts} was the seq2seq+non-AR vocoder baseline in VCC2020. \textbf{S2VC} \cite{s2vc} is the state-of-the-art system for A2A VC. The results are shown in Table~\ref{tab:comparison}. We summarize our observations as follows:
\begin{itemize}
\item vq-wav2vec outperformed all other upstreams in the subjective test, with a naturalness score of 3.59 and a similarity of 59$\%$ in the intra-lingual A2O setting.
\item In the A2O settings, there was still a naturalness gap between vq-wav2vec and the other VCC2020 top systems (3.59 vs. 4.16-4.25 and 3.28 vs. 4.11-4.34). As for similarity, vq-wav2vec was on par with USTC-2018 and CASIA in the intra-lingual A2O setting, and achieved the top score in the cross-lingual setting.
\item In the A2A setting, vq-wav2vec was on par with S2VC in similarity, while being significantly better in naturalness. Our system is therefore the new state-of-the-art in S3R-based A2A VC.
\end{itemize}
\vspace{-0.1cm}
\subsection{Impact of supervision}
Although the top systems using PPGs greatly outperformed vq-wav2vec in naturalness, they used AR vocoders and were trained on large internal datasets, so the impact of supervision alone is not yet clear. To this end, we compared the vq-wav2vec result with ``PPG (TIMIT)'', which uses the same vocoder. The high WERs and low naturalness scores showed that this PPG was indeed of low quality. Nonetheless, in all three settings, ``PPG (TIMIT)'' achieved similar or higher similarity scores than vq-wav2vec. This shows that supervision greatly contributes to similarity, especially in difficult settings like A2A VC.
This also shows that the ability of current S3Rs to disentangle speaker information is still limited compared to PPGs, and can be further improved in the future.
\vspace{-0.1cm}
\subsection{Justifying the objective metrics with correlation analysis}
Conducting a subjective test whenever a new S3R is developed cannot meet the fast benchmarking requirement of SUPERB. Therefore, we examine whether the objective measures align well with human perception. Using the intra-lingual A2O results over different upstreams, we calculated pairwise linear correlation coefficients. The results in Table~\ref{tab:correlation} suggest that MCD aligns best with both naturalness and similarity.
Note that in this correlation analysis, we considered only systems that used the same decoder and neural vocoder. Since the correlation result is strongly affected by the pool of methods evaluated in a listening test, this good correlation may be observed only under such a homogeneous condition. Nonetheless, this result is still very useful for the benchmarking requirement of SUPERB.
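Concretely, the linear correlation coefficient between two metrics with values $\{a_i\}$ and $\{b_i\}$ over the evaluated systems is the usual sample Pearson coefficient,
\begin{equation*}
\rho_{ab}=\frac{\sum_i(a_i-\bar{a})(b_i-\bar{b})}{\sqrt{\sum_i(a_i-\bar{a})^2}\sqrt{\sum_i(b_i-\bar{b})^2}},
\end{equation*}
so, for instance, the entry $-0.968$ indicates an almost perfectly linear inverse relation between MCD and naturalness within this homogeneous pool.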
\vspace{-0.1cm}
\section{Conclusions and future work}
We presented S3PRL-VC, an extension of the S3PRL toolkit that applies S3Rs to VC. We described the model design choices and covered a variety of tasks. Extensive experiments, both objective and subjective, evaluated the capability of various S3Rs when applied to different VC scenarios. By comparing S3Rs with supervised representations like PPGs, we showed the competitiveness of S3Rs in certain settings, while shedding light on directions for improvement.
We suggest different future directions for readers from different communities. From the VC perspective, it is worthwhile to continue investigating better downstream model designs. For instance, in A2A VC, a proper speaker encoder should be used instead of a fixed d-vector. Meanwhile, we encourage using VC as a probing task when designing new S3R models, considering the challenges posed by the many requirements of VC.
\noindent{\textbf{Acknowledgements}}
We would like to thank the S3PRL/SUPERB team for the fruitful discussions. This work was partly supported by JSPS KAKENHI Grant Number 21J20920, JST CREST Grant Number JPMJCR19A3, and a project, JPNP20006, commissioned by NEDO, Japan.
\bibliographystyle{IEEEbib}
\section{Introduction}\label{S.0}
In this paper we consider once again the construction of an effective Hamiltonian for a particle described by a periodic Hamiltonian and subject also to a magnetic field that will be considered bounded and smooth but neither periodic nor slowly varying. Our aim is to use some of the ideas in \cite{Bu,GMS} in conjunction with the magnetic pseudodifferential calculus developed in \cite{MP1,IMP1,IMP2,MPR1} and obtain the following improvements:
\begin{enumerate}
\item cover also the case of pseudodifferential operators, as for example the relativistic Schr\"{o}dinger operators with principal symbol $<\eta>$;
\item consider magnetic fields that are neither constant nor slowly varying, thus working in a manifestly covariant form and obtaining results that clearly depend only on the magnetic field;
\item give up the adiabatic hypothesis (slowly varying fields) and consider only the intensity of the magnetic field as a small parameter;
\item consider hypotheses formulated only in terms of the magnetic field and not of the vector potential one uses.
\end{enumerate}
Let us point out from the beginning, that as in \cite{GMS} we construct an effective Hamiltonian associated to any compact interval of the energy spectrum but its significance concerns only the description of the real spectrum as a subset of $\mathbb{R}$. In a forthcoming paper our covariant magnetic pseudodifferential calculus will be used in order to construct an effective dynamics associated to any spectral band of the periodic Hamiltonian. Let us mention here that the magnetic pseudodifferential calculus has been used in the Peierls-Onsager problem in \cite{DNL} where some improvements of the results in \cite{PST} are obtained but still in an adiabatic setting. In fact in our following paper we intend to extend the results in \cite{DNL} and construct a more natural framework for the definition of the Peierls-Onsager effective dynamics associated to a spectral band.
Finally let us also point out here that an essential ingredient in the method elaborated in \cite{GMS} is a necessary and sufficient criterion for a tempered distribution to belong to some given Hilbert spaces (Propositions 3.2 and 3.6 in \cite{GMS}). In our 'magnetic' setting some similar criteria have to be proved, and this obliges us to adopt somewhat different formulations that allow us to avoid a gap in the original proof given in \cite{GMS,DS}.
\subsection{The problem}
\setcounter{equation}{0}
\setcounter{theorem}{0}
We shall constantly use the notation $\mathcal X\equiv\mathbb{R}^d$, its dual $\mathcal X^*$ being canonically isomorphic to $\mathbb{R}^d$; let $<.,.>:\mathcal X^*\times\mathcal X\rightarrow\mathbb{R}$ denote the duality relation. We shall always denote by $\Xi:=\mathcal X\times\mathcal X^*$ (considered as a symplectic space with the canonical symplectic form $\sigma(X,Y):=\langle\xi,y\rangle-\langle\eta,x\rangle$); we shall denote by $\overline{\Xi}:=\mathcal X^*\times\mathcal X$.
We shall consider a discrete abelian locally compact subgroup $\Gamma\subset\mathcal X$. It is isomorphic to $\mathbb{Z}^d$ and we can view it as a lattice $\Gamma:=\oplus_{j=1}^d\mathbb{Z}e_j$, with $\{e_1,\ldots, e_d\}$ an algebraic basis of $\mathbb{R}^d$ (that we shall call the basis of $\Gamma$).
We consider the quotient group $\mathbb{R}^d/\Gamma$ that is canonically isomorphic to the $d$-dimensional torus $\mathbb{T}\equiv\mathbb{T}_\Gamma$, and we denote by $\mathfrak{p}:\mathbb{R}^d\ni x\mapsto\hat{x}\in\mathbb{T}$ the canonical projection onto the quotient. Let us consider an \textit{elementary cell}:
$$
E_\Gamma\,=\,\left\{y=\sum\limits_{j=1}^dt_je_j\in\mathbb{R}^d\,\mid\,0\leq t_j<1,\ \forall j\in\{1,\ldots,d\}\right\},
$$
having the interior (as subset of $\mathbb{R}^d$) locally homeomorphic to its projection on $\mathbb{T}$. The dual lattice of $\Gamma$ is then its polar set in $\mathcal X^*$ defined as
$$
\Gamma_*\,:=\,\left\{\gamma^*\in\mathcal X^*\,\mid\,<\gamma^*,\gamma>\in(2\pi)\mathbb{Z},\ \forall\gamma\in\Gamma\right\}.
$$
Considering the dual basis $\{e^*_1,\ldots,e^*_d\}\subset\mathcal X^*$ of $\{e_1,\ldots,e_d\}$, defined by $<e^*_j,e_k>=(2\pi)\delta_{jk}$, we evidently have that $\Gamma_*=\oplus_{j=1}^d\mathbb{Z}e^*_j$. By definition, $\Gamma_*\subset\mathcal X^*$ is the polar of $\Gamma\subset\mathcal X$. We define $\mathbb{T}_{\Gamma_*}:=\mathcal X^*/\Gamma_*\equiv\mathbb{T}_*$ and the corresponding elementary cell $E_{\Gamma_*}$, and notice that $\mathbb{T}_{\Gamma_*}$ is isomorphic to the dual group of $\Gamma$ (in the sense of abelian locally compact groups).
We evidently have the following group isomorphisms $\mathcal X\cong\Gamma\times\mathbb{T}_{\Gamma}$, $\mathcal X^*\cong\Gamma_*\times\mathbb{T}_{\Gamma_*}$ that are not topological isomorphisms.
We shall consider the following \textit{free Hamiltonian}:
\begin{equation}\label{0.1}
H_{0,V}:=-\Delta+V(y),\quad V\in BC^\infty(\mathcal{X},\mathbb{R}),\ \Gamma-\text{periodic},
\end{equation}
that describes the evolution of an electron in a periodic crystal without external fields. The above operator has a self-adjoint extension in $L^2(\mathcal{X})$ that commutes with the translations $\tau_\gamma$ for any $\gamma\in\Gamma$. We can thus apply the Floquet-Bloch theory. For any $\xi\in\mathcal{X}^*$ we can define the operator
$$
H_{0,V}(\xi):=\big(D_y+\xi\big)^2+V(y)
$$
that has a self-adjoint extension in $L^2(\mathbb{T})$ with compact resolvent. Its spectrum thus consists of an increasing sequence of eigenvalues of finite multiplicity $\lambda_1(\xi)\leq\lambda_2(\xi)\leq\ldots$, which are continuous and $\Gamma^*$-periodic functions of $\xi$. Thus, if we denote $J_k:=\lambda_k(\mathbb{T}_*)$, we can write that
\begin{equation}
\sigma\big(H_{0,V}\big)\ =\ \underset{k=1}{\overset{\infty}{\cup}}J_k
\end{equation}
and it follows that this spectrum is absolutely continuous.
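We recall that this description follows from the Floquet-Bloch decomposition: the Bloch-Floquet transform provides a unitary equivalence
\begin{equation*}
H_{0,V}\ \cong\ \int^{\oplus}_{E_{\Gamma_*}}H_{0,V}(\xi)\,d\xi,
\end{equation*}
so that the spectrum of $H_{0,V}$ is recovered as the union of the spectral bands $J_k=\lambda_k(\mathbb{T}_*)$ of the fibered operators.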
The above analysis implies the following statement that can be considered as \textit{the spectral form of the Onsager-Peierls substitution} in a trivial situation (with 0 magnetic field):
\begin{equation}\label{0.3}
\lambda\in\sigma\big(H_{0,V}\big)\quad\Longleftrightarrow\quad\exists\, k\geq1\ \text{ such that }\ 0\in\sigma\big(\lambda-\lambda_k(D)\big),
\end{equation}
where $\lambda_k(D)$ is the image of the multiplication operator with the function $\lambda_k(\xi)$ on $L^2(\mathcal{X}^*)$ under the conjugation with the Fourier transform (i.e. the Weyl quantization of the symbol $\lambda_k$) and thus defines a bounded self-adjoint operator on $L^2(\mathcal{X})$.
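As an elementary illustration, in the free case $V=0$ the eigenfunctions of $H_{0,0}(\xi)$ in $L^2(\mathbb{T})$ are the exponentials $y\mapsto e^{i<\gamma^*,y>}$ with $\gamma^*\in\Gamma_*$, so that the band functions are obtained by ordering the family $\big\{|\xi+\gamma^*|^2\big\}_{\gamma^*\in\Gamma_*}$ (with multiplicities) and $\sigma\big(H_{0,0}\big)=[0,+\infty)$.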
The problem we are interested in consists in superposing a \textit{magnetic field} $B$ on the above situation; let us first consider a constant magnetic field $B=(B_{jk})_{1\leq j,k\leq d}$ with $B_{jk}=-B_{kj}$. Let us recall that using \textit{the transversal gauge} one can define the following \textit{vector potential} $A=(A_j)_{1\leq j\leq d}$:
\begin{equation*}
A_j(x)\ :=\ -\frac{1}{2}\underset{1\leq k\leq d}{\sum}B_{jk}x_k.
\end{equation*}
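Note that, using only the antisymmetry $B_{kj}=-B_{jk}$, a direct computation shows that this choice satisfies
\begin{equation*}
\partial_jA_k-\partial_kA_j\ =\ -\frac{1}{2}B_{kj}+\frac{1}{2}B_{jk}\ =\ B_{jk},\qquad 1\leq j,k\leq d.
\end{equation*}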
We are considering $A$ as a differential 1-form on $\mathcal{X}$ so that $B$ is the 2-form given by the exterior differential of $A$. Then the associated \textit{magnetic Hamiltonian} is defined as
\begin{equation}
H_{A,V}:=\big(D+A\big)^2+V(y),
\end{equation}
that has also a self-adjoint extension in $L^2(\mathcal{X})$. The structure of the spectrum of this operator may be very different from the structure of $\sigma\big(H_{0,V}\big)$ (for example it may be pure point with infinite multiplicity!), but one expects that, modulo some small correction (depending on $|B|$), for small $|B|$ the property \ref{0.3} with $D$ replaced by $D+A$ should still be true. More precisely, it is conjectured that there exists a symbol $r_k(x,\xi;B,\lambda)$ (in fact a $BC^\infty(\Xi)$ function) such that
\begin{equation}
\underset{|B|\rightarrow0}{\lim}r_k(x,\xi;B,\lambda)=0\ \text{in}\ BC^\infty(\Xi)
\end{equation}
and for $\lambda$ in a compact neighborhood of $J_k$ and for small $|B|$ we have that
\begin{equation}\label{0.6}
\lambda\in\sigma\big(H_{A,V}\big)\quad\Longleftrightarrow\quad0\in\sigma\big(\lambda-\lambda_k(D+A(x))+r_k(x,D+A(x);B,\lambda)\big),
\end{equation}
where $r_k(x,D+A(x);B,\lambda)$ is the Weyl quantization of $r_k(x,\xi+A(x);B,\lambda)$.
The first rigorous proof of such a result appeared in \cite{N2} for a simple spectral band (i.e. $\lambda_k(\xi)$ is a non-degenerate eigenvalue of $H_{0,V}(\xi)$ for any $\xi\in\mathcal{X}^*$ and $J_k\cap J_l=\emptyset,\forall l\ne k$). In \cite{HS1} the authors study this case of a simple spectral band but also the general case, by using Wannier functions. In these references the operator appearing on the right hand side of the equivalence \ref{0.6} is considered to act in the Hilbert space $\big[l^2(\Gamma)\big]^N$ (with $N=1$ for the simple spectral band). In fact we shall prove that for a simple spectral band one can replace $l^2(\Gamma)$ with $L^2(\mathcal{X})$. Let us also notice that if one wishes to consider also non-constant magnetic fields, then the above Weyl quantization of $A(x)$-dependent symbols gives operators that are not gauge covariant and thus unsuitable for a physical interpretation.
\subsection{The result by G\'erard, Martinez and Sj\"{o}strand}
In \cite{GMS} the above three authors consider the evolution of an electron (ignoring the spin) in a periodic crystal under the action of exterior non-constant, slowly varying, magnetic and electric fields. More precisely the magnetic field $B$ is defined as $B=dA$ with a vector potential
\begin{equation}\label{0.7}
A=(A_1,\ldots,A_d),\quad A_j\in C^\infty(\mathcal{X};\mathbb{R}),\quad \partial^\alpha A_j\in BC^\infty(\mathcal{X})\ \forall |\alpha|\geq1,
\end{equation}
and the electric potential is described by
\begin{equation}\label{0.8}
\phi\in BC^\infty(\mathcal{X};\mathbb{R}).
\end{equation}
The Hamiltonian is taken to be
\begin{equation}\label{0.9}
P_{A,\phi}\ =\ \underset{1\leq j\leq d}{\sum}\big(D_{y_j}+A_j(\epsilon y)\big)^2+V(y)+\phi(\epsilon y),
\end{equation}
with $|\epsilon|$ small enough; this defines also a self-adjoint operator in $L^2(\mathcal{X})$. In this situation, in order to define an effective Hamiltonian, the authors apply an idea of Buslaev \cite{Bu} (see also \cite{HS1}); this idea consists in ``doubling'' the number of variables and separating the periodic part (that is also ``rapidly varying'') from the non-periodic part (that is also ``slowly varying''). One defines the following operator acting on $\mathcal{X}^2$:
\begin{equation}\label{0.10}
\widetilde{P}_{A,\phi}\ :=\ \underset{1\leq j\leq d}{\sum}\big(\epsilon D_{x_j}+D_{y_j}+A_j(x)\big)^2+V(y)+\phi(x).
\end{equation}
Let us point out the very interesting connection between the operators $P_{A,\phi}$ and $\widetilde{P}_{A,\phi}$. If we consider the following change of variables:
$$
\pi_\epsilon:\mathcal{X}^2\rightarrow\mathcal{X}^2,\quad\pi_\epsilon(x,y):=(x-\epsilon y,y),
$$
then for any tempered distribution $F\in\mathscr{S}^\prime(\mathcal{X})$ we have that:
\begin{equation}\label{0.11}
\big(\widetilde{P}_{A,\phi}\circ\pi^*_\epsilon\big)(\delta_0\otimes F)\ =\ \pi^*_\epsilon\big(\delta_0\otimes(P_{A,\phi}F)\big).
\end{equation}
Buslaev considers the operator $\widetilde{P}_{A,\phi}$ as a semi-classical, operator valued pseudodifferential operator on $\mathcal{X}$ and uses the above remark in order to obtain asymptotic solutions of the equation $P_{A,\phi}u=\lambda u$. Let us develop this idea a little in the framework of our previous {\it magnetic pseudodifferential calculus} (\cite{MP1,IMP1,IMP2}), presenting some arguments that will be useful in our proofs.
We recall that given a symbol $a(y,\eta)$ defined on $\Xi$ and a vector potential $A$ defined on $\mathcal{X}$, one can define two 'candidates' for the semi-classical 'magnetic' quantization of the symbol $a$:
\begin{equation}\label{0.12}
\big(\mathfrak{Op}_{A,h}(a)u\big)(x):=\iint_\Xi e^{i<\eta,x-y>}a\big(\frac{x+y}{2},h\eta+A\big(\frac{x+y}{2}\big)\big)u(y)dy\;\;\bar{}\!\!\!d\eta,\ \forall u\in\mathscr{S}(\mathcal{X}),
\end{equation}
that is used in \cite{GMS} but is not gauge covariant, and
\begin{equation}\label{0.13}
\big(\mathfrak{Op}^A_{h}(a)u\big)(x):=\iint_\Xi e^{i<\eta,x-y>}\omega_{h^{-1}A}(x,y)a\big(\frac{x+y}{2},h\eta\big)u(y)dy\;\;\bar{}\!\!\!d\eta,\ \forall u\in\mathscr{S}(\mathcal{X}),
\end{equation}
with $\omega_{A}(x,y):=\exp\{-i\int_{[x,y]}A\}$, which has been introduced in \cite{MP1} and is gauge covariant. In both the above formulae $h$ is a strictly positive parameter.
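Here and in the sequel, $\int_{[x,y]}A$ denotes the circulation of the 1-form $A$ along the oriented segment from $x$ to $y$,
\begin{equation*}
\int_{[x,y]}A\ =\ \int_0^1\big<A\big((1-s)x+sy\big),y-x\big>\,ds,
\end{equation*}
and gauge covariance means that for any real $\varphi\in C^\infty_{\text{\sf pol}}(\mathcal{X})$ one has $\omega_{h^{-1}(A+d\varphi)}(x,y)=e^{ih^{-1}\varphi(x)}\,\omega_{h^{-1}A}(x,y)\,e^{-ih^{-1}\varphi(y)}$, and thus
\begin{equation*}
\mathfrak{Op}^{A+d\varphi}_{h}(a)\ =\ e^{ih^{-1}\varphi}\,\mathfrak{Op}^{A}_{h}(a)\,e^{-ih^{-1}\varphi};
\end{equation*}
no such relation holds in general for the non-covariant quantization \eqref{0.12}.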
For $A=0$ the two quantizations above coincide with the semi-classical Weyl quantization of $a$ denoted by $\mathfrak{Op}_h(a)$. For $h=1$ we use the notations $\mathfrak{Op}_A(a)$, $\mathfrak{Op}^A(a)$ and $\mathfrak{Op}(a)$.
Let us come back now to the operators $P_{A,\phi}$ and $\widetilde{P}_{A,\phi}$ and consider the following notations
$$
A_\epsilon(x):=A(\epsilon x),
$$
\begin{equation}\label{0.14}
\left\{\begin{array}{lcl}
p(x,y,\eta)&:=&|\eta|^2+V(y)+\phi(x)\\
\widetilde{p}(x,y,\xi,\eta)&:=&|\xi+\eta|^2+V(y)+\phi(x)=p(x,y,\xi+\eta)\\
\overset{\circ}{p}_\epsilon(y,\eta)&:=&p(\epsilon y,y,\eta).
\end{array}\right.
\end{equation}
We evidently have that
\begin{equation}\label{0.15}
P_{A,\phi}\ =\ \mathfrak{Op}_{A_\epsilon}(\overset{\circ}{p}_\epsilon),
\end{equation}
while $\widetilde{P}_{A,\phi}$ may be thought as being obtained through the following procedure: one computes the Weyl quantization of $\widetilde{p}$ considered as a symbol in the variables $(y,\eta)\in\Xi$ and obtains an operator valued symbol in the variables $(x,\xi)\in\Xi$, that is then quantized by $\mathfrak{Op}_{A,\epsilon}$:
\begin{equation}\label{0.16}
\left\{\begin{array}{lcl}
\mathfrak{q}(x,\xi)&:=&\mathfrak{Op}(\widetilde{p}(x,.,\xi,.)\\
\widetilde{P}_{A,\phi}&:=&\mathfrak{Op}_{A,\epsilon}(\mathfrak{q}).
\end{array}\right.
\end{equation}
The rather strange presence of the parameter $\epsilon$ in \ref{0.9} has to be considered as a reflection of the semi-classical quantization used in \ref{0.16} and of the formula \ref{0.11}. In order to deal with a more natural class of perturbations (thus to eliminate the slow variation hypothesis!) one has to give up the semi-classical quantization in the second step and insert the parameter $\epsilon$ in the symbol (like in \ref{0.15}).
Let us briefly review now the main steps of the argument in \cite{GMS}. As previously remarked, they consider a magnetic field described by a vector potential satisfying \ref{0.7}. They propose to consider the following generalization of $P_{A,\phi}$.
\begin{itemize}
\item The starting point is a symbol $p(x,y,\eta)$ that is polynomial in the variable $\eta\in\mathcal{X}^*$ and satisfies the following relations:\\
$i)\quad\ \ p(x,y,\eta)\ =\ \underset{|\alpha|\leq m}{\sum}a_\alpha(x,y)\eta^\alpha,\quad a_\alpha\in BC^\infty(\mathcal{X}\times\mathcal{X};\mathbb{R}),\ m\in\mathbb{N}^*,$\\
$ii)\quad\ a_\alpha(x,y+\gamma)=a_\alpha(x,y),\ \forall|\alpha|\leq m,\ \forall\gamma\in\Gamma, $\\
$iii)\quad \exists c>0\ \text{such that}\ p_m(x,y,\eta):=\underset{|\alpha|=m}{\sum}a_\alpha(x,y)\eta^\alpha\geq c|\eta|^m,\ \forall(x,y)\in\mathcal{X}\times\mathcal{X},\ \forall\eta\in\mathcal{X}^*$ i.e. $p$ is an elliptic symbol (let us notice that this condition implies that $m$ is even).
\item They introduce then the symbols:
$$
\overset{\circ}{p}_\epsilon(y,\eta):=p(\epsilon y,y,\eta),\ \forall\epsilon>0;\qquad\widetilde{p}(x,y,\xi,\eta):=p(x,y,\xi+\eta).
$$
\item The interest is focused on the self-adjoint operator in $L^2(\mathcal{X})$ defined by:
\begin{equation}\label{0.17}
P_\epsilon\ :=\ \mathfrak{Op}_{A_\epsilon}(\overset{\circ}{p}_\epsilon).
\end{equation}
\item The auxiliary operator (obtained by doubling the variables) is the self-adjoint operator in $L^2(\mathcal{X}^2)$ defined by:
\begin{equation}\label{0.18}
\widetilde{P}_\epsilon\ :=\ \mathfrak{Op}_{A,\epsilon}(\mathfrak{q}),\quad\mathfrak{q}(x,\xi):=\mathfrak{Op}\big(\widetilde{p}(x,.,\xi,.)\big).
\end{equation}
\item One verifies that a relation similar to \ref{0.11} is still verified:
\begin{equation}\label{0.19}
\big(\widetilde{P}_\epsilon\circ\pi^*_\epsilon\big)\big(\delta_0\otimes F\big)\ =\ \pi^*_\epsilon\big(\delta_0\otimes(P_\epsilon F)\big),\ \forall F\in\mathscr{S}^\prime(\mathcal{X}).
\end{equation}
\end{itemize}
In order to define an effective Hamiltonian to describe the spectrum of $P_\epsilon$, in \cite{GMS} the authors bring together three important ideas from the literature on the subject.
\begin{enumerate}
\item First, the idea introduced in \cite{Bu,GRT} of ``doubling the variables" and considering the operator $\widetilde{P}_\epsilon$.
\item Then, the use of an operator valued pseudodifferential calculus, idea introduced in \cite{Bu} and having a rigorous development in \cite{B-K}.
\item The formulation of a Grushin type problem, as proposed in \cite{HS1}.
\end{enumerate}
In the following we shall discuss the use of the Grushin type problem in our spectral problem. The ideas are the following. First one fixes some compact interval $I\subset\mathbb{R}$ and some $\epsilon_0>0$ small enough. Then one has to take into account that $\mathbb{T}$ is compact and thus any elliptic self-adjoint operator in $L^2(\mathbb{T})$ is Fredholm and becomes bijective on specific finite co-dimension subspaces. Thus one can find $N\in\mathbb{N}^*$ and $N$ functions $\phi_j\in C^\infty(\mathcal{X}\times\mathcal{X}\times\mathcal{X}^*)$ (with $1\leq j\leq N$) that are $\Gamma$-periodic in the second variable and such that the following statement is true:
\begin{proposition}
If we define:
\begin{itemize}
\item the operator valued symbols
\begin{equation}\label{0.20}
\left\{\begin{array}{lll}
R_+:\Xi\rightarrow\mathbb{B}\big(L^2(\mathbb{T});\mathbb{C}^N\big),\ &R_+(x,\xi)u:=\left\{\left<u,\phi_j(x,.,\xi)\right>_{L^2(\mathbb{T})}\right\}_{1\leq j\leq N}\in\mathbb{C}^N,&\forall u\in L^2(\mathbb{T}),\\
&\\
R_-:\Xi\rightarrow\mathbb{B}\big(\mathbb{C}^N;L^2(\mathbb{T})\big),\ &R_-(x,\xi)c:=\underset{1\leq j\leq N}{\sum}c_j\phi_j(x,.,\xi)\in L^2(\mathbb{T}),&\forall c:=(c_j)_{1\leq j\leq N}\in\mathbb{C}^N,
\end{array}\right.
\end{equation}
\item the associated operators obtained by a semi-classical (non-covariant) quantization:
\begin{equation}\label{0.21}
\boldsymbol{R_\pm}(\epsilon)\ :=\ \mathfrak{Op}_{A,\epsilon}\big(R_\pm\big).
\end{equation}
\end{itemize}
then, for $\lambda\in I,\ \epsilon\in(0,\epsilon_0]$, the operator:
\begin{equation}\label{0.22}
\mathcal{P}_\epsilon\ :=\ \left(\begin{array}{cc}
\widetilde{P}_\epsilon-\lambda&\boldsymbol{R_-}(\epsilon)\\
\boldsymbol{R_+}(\epsilon)&\boldsymbol{0}
\end{array}\right)
\end{equation}
is self-adjoint in $L^2\big(\mathcal{X}\times\mathbb{T}\big)\oplus L^2\big(\mathcal{X};\mathbb{C}^N\big)$ and has an inverse:
\begin{equation}\label{0.23}
\mathcal{E}(\epsilon,\lambda)\ :=\ \left(\begin{array}{cc}
\boldsymbol{E}(\epsilon,\lambda)&\boldsymbol{E}_+(\epsilon,\lambda)\\
\boldsymbol{E}_-(\epsilon,\lambda)&\boldsymbol{E}_{-+}(\epsilon,\lambda)
\end{array}\right).
\end{equation}
\end{proposition}
Moreover, one can then prove that $\boldsymbol{E}_{-+}(\epsilon,\lambda)=\mathfrak{Op}_{A,\epsilon}\big(E^{-+}_{\epsilon\lambda}\big)$ with $E^{-+}_{\epsilon\lambda}\in BC^\infty\big(\Xi;\mathbb{B}(\mathbb{C}^N)\big)$ uniformly for $(\epsilon,\lambda)\in(0,\epsilon_0]\times I$. Then it is easy to prove that:
\begin{proposition}
The operator $\boldsymbol{E}_{-+}(\epsilon,\lambda)$ is bounded and self-adjoint in $L^2\big(\mathcal{X};\mathbb{C}^N\big)$ and we have the following equivalence:
\begin{equation}\label{0.24}
\lambda\in\sigma(\widetilde{P}_\epsilon)\quad\Longleftrightarrow\quad 0\in\sigma(\boldsymbol{E}_{-+}(\epsilon,\lambda)).
\end{equation}
\end{proposition}
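Let us recall the standard algebraic identities behind such a Grushin problem, obtained by writing out $\mathcal{P}_\epsilon\,\mathcal{E}(\epsilon,\lambda)=\mathcal{E}(\epsilon,\lambda)\,\mathcal{P}_\epsilon={\rm id\hspace*{-1.5pt}l}$: the operator $\widetilde{P}_\epsilon-\lambda$ is invertible if and only if $\boldsymbol{E}_{-+}(\epsilon,\lambda)$ is invertible, and in this case
\begin{equation*}
\big(\widetilde{P}_\epsilon-\lambda\big)^{-1}=\boldsymbol{E}(\epsilon,\lambda)-\boldsymbol{E}_+(\epsilon,\lambda)\,\boldsymbol{E}_{-+}(\epsilon,\lambda)^{-1}\,\boldsymbol{E}_-(\epsilon,\lambda),
\qquad
\boldsymbol{E}_{-+}(\epsilon,\lambda)^{-1}=-\,\boldsymbol{R_+}(\epsilon)\,\big(\widetilde{P}_\epsilon-\lambda\big)^{-1}\boldsymbol{R_-}(\epsilon),
\end{equation*}
which is precisely the mechanism behind the equivalence \eqref{0.24}.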
Finally, in order to pass from $\widetilde{P}_\epsilon$ to $P_\epsilon$ one makes use of \ref{0.19} and of some unitary transforms of the spaces $l^2(\Gamma)$ and $L^2(\mathcal{X})$. More precisely the following two explicit Hilbert spaces are considered in \cite{GMS}:
\begin{itemize}
\item
\begin{equation}\label{0.25}
\mathfrak{V}_{0,\epsilon}\ :=\ \left\{\ F\in\mathscr{S}^\prime(\mathcal{X})\ \mid\ F=\underset{\gamma\in\Gamma}{\sum}f_\gamma\delta_{\epsilon\gamma},\ \text{with}\ (f_\gamma)_{\gamma\in\Gamma}\in l^2(\Gamma)\ \right\},\quad\|F\|_{\mathfrak{V}_{0,\epsilon}}:=\|f\|_{l^2(\Gamma)},
\end{equation}
that is evidently unitarily equivalent to $l^2(\Gamma)$;
\item
\begin{equation}\label{0.26}
\mathfrak{L}_{0,\epsilon}\ :=\ \left\{\ F\in\mathscr{S}^\prime(\mathcal{X}^2)\ \mid\ F(x,y)=\underset{\gamma\in\Gamma}{\sum}v(x)\delta_{0}(x-\epsilon(y-\gamma)),\ \text{with}\ v\in L^2(\mathcal{X})\ \right\},\quad\|F\|_{\mathfrak{L}_{0,\epsilon}}:=\|v\|_{L^2(\mathcal{X})},
\end{equation}
that is evidently unitarily equivalent to $L^2(\mathcal{X})$.
\end{itemize}
The following step is to extend the operators $\widetilde{P}_\epsilon$ and $\boldsymbol{R_\pm}(\epsilon)$, considered as pseudodifferential operators, to continuous operators from $\mathscr{S}^\prime(\mathcal{X}^2)$ to $\mathscr{S}^\prime(\mathcal{X}^2)$ and respectively from $\mathscr{S}^\prime(\mathcal{X})$ to $\mathscr{S}^\prime(\mathcal{X})$. Then we can directly restrict them to the subspaces $\mathfrak{L}_{0,\epsilon}$ and respectively $\mathfrak{V}_{0,\epsilon}^N$; due to the $\Gamma$-periodicity of the initial symbol $p$, it is proved in \cite{GMS} that these operators leave the above spaces invariant and that the matrix operator $\mathcal{P}_\epsilon$ still defines an invertible self-adjoint operator in $\mathfrak{L}_{0,\epsilon}\oplus\mathfrak{V}_{0,\epsilon}^N$ having as inverse the corresponding restriction of $\mathcal{E}(\epsilon,\lambda)$; moreover, the restriction of $\boldsymbol{E}_{-+}(\epsilon,\lambda)$ is a bounded self-adjoint operator in $\mathfrak{V}_{0,\epsilon}^N$ and we still have the property \ref{0.24}. The remark that allows one to end the analysis is that $P_\epsilon$ acting in $L^2(\mathcal{X})$ is unitarily equivalent to the transformed operator $\widetilde{P}_\epsilon$ acting on $\mathfrak{L}_{0,\epsilon}$, and thus \ref{0.24} implies directly the following equivalence:
\begin{equation}\label{0.27}
\lambda\in\sigma(P_\epsilon)\quad\Longleftrightarrow\quad 0\in\sigma(\boldsymbol{E}_{-+}(\epsilon,\lambda)),
\end{equation}
with $\boldsymbol{E}_{-+}(\epsilon,\lambda)$ a bounded self-adjoint operator on $\mathfrak{V}_{0,\epsilon}^N$ that can be evidently identified with a bounded self-adjoint operator on $[l^2(\Gamma)]^N$.
\subsection{Summary of our results}\label{S.1.3}
Let us briefly comment upon our hypotheses. First, the magnetic field $B_\epsilon:=2^{-1}\underset{1\leq j,k\leq d}{\sum}B_{\epsilon,jk}dx_j\wedge dx_k$ is a smooth, closed 2-form on $\mathcal{X}$ depending on the parameter $\epsilon$ (with $B_{\epsilon,jk}=-B_{\epsilon,kj}$), and we assume:
\begin{description}
\item[H.1] For any pair of indices $(j,k)$ between $1$ and $d$ we are given a function $[-\epsilon_0,\epsilon_0]\ni\epsilon\mapsto B_{\epsilon,jk}\in BC^\infty(\mathcal{X};\mathbb{R})$ such that $\underset{\epsilon\rightarrow0}{\lim} B_{\epsilon,jk}=0\ \text{in}\ BC^\infty(\mathcal{X};\mathbb{R})$, for some $\epsilon_0>0$.
\end{description}
Using the transversal gauge we can define a vector potential $A_\epsilon$ (described by a 1-form valued smooth function defined on $[-\epsilon_0,\epsilon_0]$) such that $B_\epsilon=dA_\epsilon$:
\begin{equation}\label{0.28}
A_{\epsilon,j}(x)\ :=\ -\underset{1\leq k\leq d}{\sum}x_k\int_0^1B_{\epsilon,jk}(sx)sds.
\end{equation}
We shall not suppose that our vector potential $A_\epsilon$ satisfies \eqref{0.7}, but we shall always assume the following behavior (which results from our Hypothesis (H.1) and \eqref{0.28}):
\begin{equation}\label{0.29}
\underset{\epsilon\rightarrow0}{\lim} <x>^{-1}A_{\epsilon,j}(x)=0\ \text{in}\ BC^\infty(\mathcal{X};\mathbb{R}).
\end{equation}
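Indeed, differentiating \eqref{0.28} and using the Leibniz rule, for any multi-index $\alpha\in\mathbb{N}^d$ there exists a constant $C_\alpha>0$, independent of $\epsilon$, such that
\begin{equation*}
\underset{x\in\mathcal{X}}{\sup}\left|\partial^\alpha\big(<x>^{-1}A_{\epsilon,j}\big)(x)\right|\ \leq\ C_\alpha\underset{1\leq k\leq d}{\sum}\ \underset{|\beta|\leq|\alpha|}{\sum}\ \underset{y\in\mathcal{X}}{\sup}\left|\big(\partial^\beta B_{\epsilon,jk}\big)(y)\right|,
\end{equation*}
so that \eqref{0.29} follows directly from Hypothesis H.1.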
The symbols we consider are likewise given as functions
$$
[-\epsilon_0,\epsilon_0]\ni\epsilon\mapsto p_\epsilon\in C^\infty\big(\mathcal{X}\times\mathcal{X}\times\mathcal{X}^*\big)
$$
satisfying conditions of type $S^m_1$ with $m>0$ uniformly in $\epsilon\in[-\epsilon_0,\epsilon_0]$:
\begin{description}
\item[H.2] $\exists m>0,\ \text{such that}\ \forall(\widetilde{\alpha},\beta)\in\mathbb{N}^{2d}\times\mathbb{N}^d,\ \ \exists C_{\widetilde{\alpha}\beta}>0$ such that
$$
\left|\left(\partial^{\widetilde{\alpha}}_{x,y}\partial^\beta_\eta p_\epsilon\right)(x,y,\eta)\right|\ \leq\ C_{\widetilde{\alpha}\beta}<\eta>^{m-|\beta|},\quad\forall(x,y,\eta)\in\mathcal{X}\times\mathcal{X}\times\mathcal{X}^*,\ \forall\epsilon\in[-\epsilon_0,\epsilon_0],
$$
\item[H.3] $\underset{\epsilon\rightarrow0}{\lim}p_\epsilon\ =\ p_0$ in $S^m_1(\mathcal{X})$,
\item[H.4] $\forall\alpha\in\mathbb{N}^d$ with $|\alpha|\geq1$ we have $\underset{\epsilon\rightarrow0}{\lim}\big(\partial^\alpha_xp_\epsilon\big)=0$ in $S^m_1(\mathcal{X})$,
\item[H.5] $p_\epsilon$ is an elliptic symbol uniformly in $\epsilon\in[-\epsilon_0,\epsilon_0]$, i.e. $\exists C>0,\exists R>0$ such that
$$
p_\epsilon(x,y,\eta)\geq C|\eta|^m,\quad\forall(x,y,\eta)\in\mathcal{X}\times\mathcal{X}\times\mathcal{X}^*\ \text{with}\ |\eta|\geq R,\ \forall\epsilon\in[-\epsilon_0,\epsilon_0],
$$
\item[H.6] $p_\epsilon$ is $\Gamma$-periodic with respect to the second variable, i.e.
$$
p_\epsilon(x,y+\gamma,\eta)\ =\ p_\epsilon(x,y,\eta),\quad\forall\gamma\in\Gamma,\ \forall(x,y,\eta)\in\mathcal{X}\times\mathcal{X}\times\mathcal{X}^*,\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
$$
\end{description}
Let us remark here that the hypotheses (H.3) and (H.4) imply that the limit $p_0$ only depends on the second and third variables ($(y,\eta)\in\Xi$) and thus we can write
$$
p_\epsilon(x,y,\eta)\ :=\ p_0(y,\eta)\ +\ r_\epsilon(x,y,\eta),\quad\underset{\epsilon\rightarrow0}{\lim}r_\epsilon(x,y,\eta)=0\ \text{in}\ S^m_1(\mathcal{X}\times\Xi).
$$
Let us also notice that our hypothesis (H.3) is not satisfied if we consider a perturbation of the form (adiabatic electric field) $\phi(\epsilon y)$ but is verified for a perturbation of the form $\epsilon\phi(y)$. One can consider a weaker hypothesis, allowing also for the adiabatic electric field perturbation, without losing the general construction of the effective Hamiltonian, but some consequences that we shall prove would no longer be true.
We associate to our symbols the two types of symbols proposed in \cite{GMS}:
$$
\overset{\circ}{p}_\epsilon(y,\eta):=p_\epsilon(y,y,\eta),\quad\widetilde{p}_\epsilon(x,y,\xi,\eta):=p_\epsilon(x,y,\xi+\eta).
$$
The operator we want to study is
\begin{equation}\label{0.30}
P_\epsilon:=\mathfrak{Op}^{A_\epsilon}\big(\overset{\circ}{p}_\epsilon\big).
\end{equation}
The auxiliary operator is defined as:
\begin{equation}\label{0.31}
\widetilde{P}_\epsilon:=\mathfrak{Op}^{A_\epsilon}(\mathfrak{q}_\epsilon),\quad\mathfrak{q}_\epsilon(x,\xi):=\mathfrak{Op}\big(\widetilde{p}_\epsilon(x,.,\xi,.)\big).
\end{equation}
Let us notice that in particular all the above hypotheses are satisfied if we take $B_\epsilon:=\epsilon B$ with $B$ a magnetic field with components of class $BC^\infty(\mathcal{X})$, $A_\epsilon=\epsilon A$ with $A$ a vector potential associated to $B$ by \ref{0.28}, and $P_\epsilon$ one of the following Schr\"{o}dinger operators:
\begin{equation}\label{0.32}
P_\epsilon=\underset{1\leq j\leq d}{\sum}\big(D_{y_j}+\epsilon A_j(y)\big)^2+V(y)+\epsilon\phi(y),
\end{equation}
\begin{equation}\label{0.33}
P_\epsilon=\mathfrak{Op}^{\epsilon A}(<\eta>)+V(y)+\epsilon\phi(y),
\end{equation}
\begin{equation}\label{0.34}
P_\epsilon=\sqrt{\mathfrak{Op}^{\epsilon A}(|\eta|^2)+1}+V(y)+\epsilon\phi(y),
\end{equation}
where $V$ and $\phi$ satisfy \ref{0.1} and \ref{0.8}.
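For instance, for \eqref{0.33} the identification with \eqref{0.30} is immediate: since the magnetic quantization of a symbol depending only on the position variable is just the corresponding multiplication operator and $\mathfrak{Op}^{\epsilon A}$ is linear in the symbol, \eqref{0.33} equals $\mathfrak{Op}^{\epsilon A}\big(\overset{\circ}{p}_\epsilon\big)$ with
\begin{equation*}
p_\epsilon(x,y,\eta)\ =\ <\eta>+V(y)+\epsilon\phi(y),\qquad p_0(y,\eta)\ =\ <\eta>+V(y),
\end{equation*}
a symbol (independent of $x$) that clearly satisfies H.2 - H.6 with $m=1$.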
In the Proposition \ref{P.A.28} of the Appendix we prove that the difference between \ref{0.33} and \ref{0.34} is of the form $\mathfrak{Op}^{\epsilon A}(q_\epsilon)$ with $\underset{\epsilon\rightarrow0}{\lim}q_\epsilon=0$ in $S^0_1(\Xi)$.
The connection between the operators \ref{0.30} and \ref{0.31} is the following:
\begin{equation}\label{0.35}
\big( \widetilde{P}_\epsilon\circ\pi^*_1\big)\big(\delta_0\otimes F\big)\ =\ \pi^*_1\big(\delta_0\otimes(P_\epsilon F)\big),\ \forall F\in\mathscr{S}^\prime(\mathcal{X}).
\end{equation}
In order to define an effective Hamiltonian for $P_\epsilon$ we shall apply the same ideas as in \cite{GMS}, with the important remark that the operator valued pseudodifferential calculus we use is not a semi-classical calculus but the 'magnetic' one, so that all our constructions are explicitly gauge covariant. This fact requires a number of new technical lemmas in order to deal with this new calculus. Our main result is the following:
\begin{theorem}\label{T.0.1}
We assume Hypotheses [H.1]-[H.6]. For any compact interval $I\subset\mathbb{R}$ there exist $\epsilon_0>0$ and $N\in\mathbb{N}^*$ such that $\forall\lambda\in I$ and $\forall\epsilon\in[-\epsilon_0,\epsilon_0]$ there exists a bounded self-adjoint operator $\boldsymbol{E}_{-+}(\epsilon,\lambda):=\mathfrak{Op}^{A_\epsilon}\big(E^{-+}_{\epsilon,\lambda}\big)$ acting in $[\mathfrak{V}_{0,1}]^N$, where $E^{-+}_{\epsilon,\lambda}\in BC^\infty\big(\Xi;\mathbb{B}(\mathbb{C}^N)\big)$ uniformly in $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$ and is $\Gamma^*$-periodic in the variable $\xi\in\mathcal{X}^*$, such that the following equivalence is true:
\begin{equation}\label{0.36}
\lambda\in\sigma(P_\epsilon)\quad\Longleftrightarrow\quad 0\in\sigma\big(\boldsymbol{E}_{-+}(\epsilon,\lambda)\big).
\end{equation}
\end{theorem}
A direct consequence of the above theorem is a stability property for the spectral gaps of the operator $P_\epsilon$ of the same type as that obtained in \cite{AS,N1,AMP} for the Schr\"{o}dinger operator.
\begin{corollary}\label{C.0.2}
Under Hypotheses [H.1]-[H.6], for any compact interval $K\subset\mathbb{R}$ disjoint from $\sigma(P_0)$, there exists $\epsilon_0>0$ such that $\forall\epsilon\in[-\epsilon_0,\epsilon_0]$ the interval $K$ is disjoint from $\sigma(P_\epsilon)$.
\end{corollary}
In fact we obtain a much stronger result, giving the optimal regularity property, but only at $\epsilon=0$, i.e. at vanishing magnetic field.
\begin{proposition}\label{P.6.12}
We denote by $\mathfrak{d}_H(F_1,F_2)$ the Hausdorff distance between the two closed subsets $F_1$ and $F_2$ of $\mathbb{R}$. Then, under Hypotheses H.1 - H.7 and I.1 - I.3, there exists a strictly positive constant $C$ such that
\begin{equation}\label{6.29a}
\mathfrak{d}_H\left(\sigma\big(P_\epsilon\big)\cap I\,,\,\sigma\big(P_0\big)\cap I\right)\,\leq\, C\epsilon\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
\end{proposition}
Let us now consider the case of a simple spectral band and generalize the result discussed previously in this case. By hypothesis we have that $\tau_\gamma P_0=P_0\tau_\gamma$, $\forall\gamma\in\Gamma$, and we can apply the Floquet-Bloch theory. We denote by $\lambda_1(\xi)\leq\lambda_2(\xi)\leq\ldots$ the eigenvalues of the operators $P_{0,\xi}:=\mathfrak{Op}\big(p_0(\cdot,\xi+\cdot)\big)$ that are self-adjoint in $L^2(\mathbb{T})$; they are continuous functions on the torus $\mathbb{T}^{*d}:=\mathcal{X}^*/\Gamma^*$ (and they are even $C^\infty$ in the case of a simple spectral band). Thus $\sigma(P_0)=\underset{j=1}{\overset{\infty}{\cup}}J_j$ with $J_j:=\lambda_j(\mathbb{T}^{*d})$. Let us consider now the following new Hypothesis:
\begin{description}
\item[H.7] There exists $k\geq1$ such that $J_k$ {\it is a simple spectral band for} $P_0$, i.e. $\forall\xi\in\mathbb{T}^{*d}$ we have that $\lambda_k(\xi)$ is a non-degenerate eigenvalue of $P_{0,\xi}$ and for any $l\ne k$ we have that $J_l\cap J_k=\emptyset$.
\end{description}
\begin{proposition}\label{P.0.3}
Assume that Hypotheses [H.1]-[H.7] hold and that, moreover, $p_0(y,-\eta)=p_0(y,\eta)$, $\forall(y,\eta)\in\Xi$. Let $I\subset\mathbb{R}$ be a compact neighborhood of $J_k$ disjoint from $\underset{l\ne k}{\cup}J_l$. Then there exists $\epsilon_0>0$ such that $\forall(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$ in Theorem \ref{T.0.1} we can take $N=1$ and
\begin{equation}\label{0.37}
E^{-+}_{\epsilon,\lambda}(x,\xi)\ =\ \lambda-\lambda_k(\xi)+r_{\epsilon,\lambda}(x,\xi),\quad\text{with}\quad\underset{\epsilon\rightarrow0}{\lim}r_{\epsilon,\lambda}=0,\ \text{in}\ BC^\infty(\Xi),\ \text{uniformly in}\ \lambda\in I.
\end{equation}
\end{proposition}
In the case of a constant magnetic field, under some more assumptions on the symbol $p_\epsilon$ we can have even more information concerning the operator $\boldsymbol{E}_{-+}(\epsilon,\lambda)$.
\begin{proposition}\label{P.0.4}
Assume that Hypotheses [H.1]-[H.7] hold, that $B_\epsilon$ are constant magnetic fields (for any $\epsilon$), and that the symbols $p_\epsilon$ do not depend on the first variable ($x\in\mathcal{X}$). Then we can complete the conclusion of Theorem \ref{T.0.1} with the following statements:
\begin{enumerate}
\item $\boldsymbol{E}_{-+}(\epsilon,\lambda)$ is a bounded self-adjoint operator in $[L^2(\mathcal{X})]^N$.
\item The symbol $E^{-+}_{\epsilon,\lambda}$ is independent of the first variable ($y\in\mathcal{X}$) and is $\Gamma^*$-periodic in the second variable ($\xi\in\mathcal{X}^*$).
\end{enumerate}
\end{proposition}
\subsection{Overview of the paper}
The first Section analyses the properties of the auxiliary operator, its self-adjointness and its connection with our main Hamiltonian. The second Section recalls some facts about the Floquet-Bloch theory and the connection between the spectra of the auxiliary operator in $L^2(\mathcal{X}\times\mathcal{X})$ and $L^2(\mathcal{X}\times\mathbb{T})$. In Section 3 we introduce a Grushin type problem and define the principal part of the symbol of the effective Hamiltonian. In Section 4 we use a perturbative method to construct the Peierls effective Hamiltonian and study the connection between its spectrum and the spectrum of the auxiliary operator acting in $L^2(\mathcal{X}\times\mathbb{T})$. Section 5 is devoted to the definition and study of some auxiliary Hilbert spaces of tempered distributions $\mathfrak{V}_0$ and $\mathfrak{L}_0$. In Section 6 we make a rigorous study of the Peierls Hamiltonian, reducing the local study of its spectrum to a spectral problem for the effective Hamiltonian acting in $[\mathfrak{V}_0]^N$ (the proof of Theorem \ref{T.0.1}). We also consider an application to the stability of spectral gaps (Corollary \ref{C.0.2}). The Lipschitz regularity of the boundaries of the spectral bands at vanishing magnetic field is also obtained as an application of the main Theorem. Section 7 is devoted to the case of a simple spectral band (Proposition \ref{P.0.3}). In Section 8 we consider the particular case of a constant magnetic field (Proposition \ref{P.0.4}). An Appendix gathers a number of facts concerning operator valued symbols and their associated magnetic pseudodifferential operators, as well as some spaces of periodic distributions. Some of the notations and definitions we use in the paper are introduced in this Appendix.
\section{The auxiliary operator in $L^2(\mathcal{X}\times\mathcal{X})$}\label{S.1}
\setcounter{equation}{0}
\setcounter{theorem}{0}
Let us consider a family $\{B_\epsilon\}_{\epsilon\in[-\epsilon_0,\epsilon_0]}$ of magnetic fields on $\mathcal{X}$ satisfying Hypothesis H.1 and $\{A_\epsilon\}_{\epsilon\in[-\epsilon_0,\epsilon_0]}$ an associated family of vector potentials (we shall always work with the vector potentials given by formula \eqref{0.28}). Let us also consider a given family of symbols $\{p_\epsilon\}_{\epsilon\in[-\epsilon_0,\epsilon_0]}$ satisfying Hypotheses H.2 - H.6.
We shall use the following convention: if $f$ is a function defined on $\mathcal{X}\times\Xi$, we denote by $f^\circ$ the function defined on $\Xi$ by taking the restriction of $f$ at the subset $\Delta_{\mathcal{X}\times\mathcal{X}}\times\mathcal{X}^*$ where $\Delta_{\mathcal{X}\times\mathcal{X}}:=\{(x,x)\in\mathcal{X}\times\mathcal{X}\,\mid\,x\in\mathcal{X}\}$ is the diagonal of the Cartesian product and we denote by $\widetilde{f}$ the function on $\Xi\times\Xi$ defined by the formula $\widetilde{f}(X,Y)\equiv\widetilde{f}(x,\xi,y,\eta):=f(x,y,\xi+\eta)$.
It is evident that with the above hypothesis and notations we have that $p_\epsilon^\circ\in S^m_1(\Xi)$ and is elliptic, both properties being uniform in $\epsilon\in[-\epsilon_0,\epsilon_0]$. Then the operator $P_\epsilon:=\mathfrak{Op}^{A_\epsilon}(p_\epsilon^\circ)$, the main operator we are interested in, is self-adjoint and lower semi-bounded in $L^2(\mathcal{X})$ having the domain $\mathcal{H}^m_{A_\epsilon}(\mathcal{X})$ (the magnetic Sobolev space of order $m$ defined in Definition \ref{D.1.4} below); moreover, with the choice of vector potential that we made, it is essentially self-adjoint on the space of Schwartz test functions $\mathscr{S}(\mathcal{X})$.
Taking into account the example \ref{A.8} it follows that $\{\widetilde{p}_\epsilon\}_{|\epsilon|\leq\epsilon_0}\in S^m_{1,\epsilon}(\mathcal{X}\times\Xi)$ so that by defining $\mathfrak{q}_\epsilon(X):=\mathfrak{Op}\big(\widetilde{p}_\epsilon(X,.)\big)$, we have that for any $s\in\mathbb{R}$ the following is true
$$
\{\mathfrak{q}_\epsilon\}_{|\epsilon|\leq\epsilon_0}\ \in\ S^0_{0,\epsilon}\big(\Xi;\mathbb{B}\big(\mathcal{H}^{s+m}_{\bullet}(\mathcal{X});\mathcal{H}^s_{\bullet}(\mathcal{X})\big)\big)
$$
with the notations introduced at the beginning of subsection 9.1 of the Appendix. We can then define the {\it auxiliary operator} $\widetilde{P}_\epsilon:=\mathfrak{Op}^{A_\epsilon}\big(\mathfrak{q}_\epsilon\big)$ that will play a very important role in our arguments. In the following Lemma we collect the properties of the operator $\widetilde{P}_\epsilon$ that result from Proposition \ref{A.7} and Example \ref{A.8}.
\begin{lemma}\label{L.1.1}
With the above notations and under the above Hypothesis H.1 - H.6 we have that
\begin{enumerate}
\item For any $s\in\mathbb{R}$:
$$
\widetilde{P}_\epsilon\ \in\ \mathbb{B}\left(\mathscr{S}\big(\mathcal{X};\mathcal{H}^{s+m}(\mathcal{X})\big);\mathscr{S}\big(\mathcal{X};\mathcal{H}^{s}(\mathcal{X})\big)\right)\ \cap\ \mathbb{B}\left(\mathscr{S}^\prime\big(\mathcal{X};\mathcal{H}^{s+m}(\mathcal{X})\big);\mathscr{S}^\prime\big(\mathcal{X};\mathcal{H}^{s}(\mathcal{X})\big)\right),
$$
uniformly in $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\item
$
\widetilde{P}_\epsilon\ \in\ \mathbb{B}\left(\mathscr{S}\big(\mathcal{X}^2\big);\mathscr{S}\big(\mathcal{X}^2\big)\right)\ \cap\ \mathbb{B}\left(\mathscr{S}^\prime\big(\mathcal{X}^2\big);\mathscr{S}^\prime\big(\mathcal{X}^2\big)\right)
$,
uniformly in $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\item $\widetilde{P}_\epsilon$, considered as an unbounded operator in $L^2(\mathcal{X}^2)$ with domain $\mathscr{S}(\mathcal{X}^2)$, is symmetric for any $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{enumerate}
\end{lemma}
Let us now discuss some different forms of the operator $\widetilde{P}_\epsilon$ that we shall use further. We consider first the isomorphisms:
\begin{equation}\label{1.1}
\boldsymbol{\psi}:\mathcal{X}^2\rightarrow\mathcal{X}^2,\quad\boldsymbol{\psi}(x,y)\ :=\ (x,x-y);\qquad\boldsymbol{\psi}^{-1}=\boldsymbol{\psi};
\end{equation}
\begin{equation}\label{1.2}
\boldsymbol{\chi}:\mathcal{X}^2\rightarrow\mathcal{X}^2,\quad\boldsymbol{\chi}(x,y)\ :=\ (x+y,y);\qquad\boldsymbol{\chi}^{-1}(x,y)\ =\ (x-y,y).
\end{equation}
The operators $\boldsymbol{\psi}^*$ and $\boldsymbol{\chi}^*$ that they induce on $L^2(\mathcal{X})$ ($ \boldsymbol{\psi}^*u:=u\circ\boldsymbol{\psi}$) are evidently unitary.
\begin{lemma}\label{L.1.2}
For any $u\in\mathscr{S}(\mathcal{X}^2)$, the image $\widetilde{P}_\epsilon u$ may be written in any of the following three equivalent forms:
\begin{equation}\label{1.3}
\big(\widetilde{P}_\epsilon u\big)(x,y)\ =\ (2\pi)^{-d/2}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,y-\tilde{y}>}\omega_{A_\epsilon}(x,x+\tilde{y}-y)\,p_\epsilon\big(x-y+(y+\tilde{y})/2,(y+\tilde{y})/2,\eta\big)\,u(x+\tilde{y}-y,\tilde{y})\,d\tilde{y}\,d\eta,
\end{equation}
\begin{equation}\label{1.4}
\big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^* u\big)(x,y)\ =\ \left[\mathfrak{Op}^{A_\epsilon}\left(\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_y\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ\right)u(.,y)\right](x),
\end{equation}
\begin{equation}\label{1.5}
\big(\boldsymbol{\chi}^*\widetilde{P}_\epsilon(\boldsymbol{\chi}^*)^{-1} u\big)(x,y)\ =\ \left[\mathfrak{Op}^{(\tau_{-x}A_\epsilon)}\left(\big((\tau_x\otimes{\rm id\hspace*{-1.5pt}l}\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ\right)u(x,.)\right](y).
\end{equation}
\end{lemma}
\begin{proof}
Let us fix $u\in\mathscr{S}(\mathcal{X}^2)$. Starting from the definitions of $\widetilde{P}_\epsilon$ and $\mathfrak{q}_\epsilon$ and using oscillating integral techniques, we get
$$
\big(\widetilde{P}_\epsilon u\big)(x,y)\ =\ (2\pi)^{-d/2}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\xi,x-\tilde{x}>}\omega_{A_\epsilon}(x,\tilde{x})\left[\mathfrak{q}_\epsilon\big((x+\tilde{x})/2,\xi\big)\,u(\tilde{x},.)\right](y)\,d\tilde{x}\,d\xi=
$$
$$
=(2\pi)^{-d}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\xi,x-\tilde{x}>}\omega_{A_\epsilon}(x,\tilde{x})\left[\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,y-\tilde{y}>}\,p_\epsilon\big((x+\tilde{x})/2,(y+\tilde{y})/2,\xi+\eta\big)\,u(\tilde{x},\tilde{y})\,d\tilde{y}\,d\eta\right]d\tilde{x}\,d\xi=
$$
$$
=(2\pi)^{-d}\int_{\mathcal{X}}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,y-\tilde{y}>}\,\omega_{A_\epsilon}(x,\tilde{x})\,p_\epsilon\big((x+\tilde{x})/2,(y+\tilde{y})/2,\eta\big)\left[\int_{\mathcal{X}^*}e^{i<\xi,x-\tilde{x}-y+\tilde{y}>}\,d\xi\right]u(\tilde{x},\tilde{y})\,d\tilde{x}\,d\tilde{y}\,d\eta.
$$
By the Fourier inversion theorem the inner oscillating integral is in fact $(2\pi)^{d/2}\delta_0(x-\tilde{x}-y+\tilde{y})=(2\pi)^{d/2}[\tau_{x-y+\tilde{y}}\delta_0](\tilde{x})$ and we can eliminate the integrals over $\xi\in\mathcal{X}^*$ and over $\tilde{x}\in\mathcal{X}$ in order to obtain formula \eqref{1.3}.
In order to prove \eqref{1.4}, we apply \eqref{1.3} to $\boldsymbol{\psi}^*u$, which amounts to replacing in \eqref{1.3} $u(x+\tilde{y}-y,\tilde{y})$ with $u(x+\tilde{y}-y,x-y)$, and we finally replace $y$ with $x-y$, obtaining
$$
\big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^* u\big)(x,y)\ =\ (2\pi)^{-d/2}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,x-y-\tilde{y}>}\,\omega_{A_\epsilon}(x,\tilde{y}+y)\,p_\epsilon\big(y+(x-y+\tilde{y})/2,(x-y+\tilde{y})/2,\eta\big)\,u(\tilde{y}+y,y)\,d\tilde{y}\,d\eta.
$$
Changing the integration variable $\tilde{y}$ to $\tilde{y}-y$ we obtain
$$
\big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^* u\big)(x,y)\ =\ (2\pi)^{-d/2}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,x-\tilde{y}>}\,\omega_{A_\epsilon}(x,\tilde{y})\,p_\epsilon\big((x+\tilde{y})/2,(x+\tilde{y})/2-y,\eta\big)\,u(\tilde{y},y)\,d\tilde{y}\,d\eta,
$$
i.e. formula \eqref{1.4}.
The formula \eqref{1.5} can be easily obtained in a similar way, starting with \eqref{1.3} applied to $\big(\boldsymbol{\chi}^*\big)^{-1} u$, i.e. replacing in \eqref{1.3} $u(x+\tilde{y}-y,\tilde{y})$ with $u(x-y,\tilde{y})$ and finally replacing the variable $x\in\mathcal{X}$ with $x+y$; this gives us the result
$$
\big(\boldsymbol{\chi}^*\widetilde{P}_\epsilon(\boldsymbol{\chi}^*)^{-1} u\big)(x,y)\ =\ (2\pi)^{-d/2}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,y-\tilde{y}>}\,\omega_{A_\epsilon}(x+y,x+\tilde{y})\,p_\epsilon\big(x+(y+\tilde{y})/2,(y+\tilde{y})/2,\eta\big)\,u(x,\tilde{y})\,d\tilde{y}\,d\eta.
$$
We end the proof of \eqref{1.5} by noticing that the following equalities are true (see also \eqref{A.29}):
$$
\omega_{A_\epsilon}(x+y,x+\tilde{y})=\exp\{-i\int_{[x+y,x+\tilde{y}]}\hspace*{-0.8cm}A_\epsilon\hspace*{0.5cm}\}=\exp\left\{i\left\langle y-\tilde{y},\int_0^1A_\epsilon\big((1-s)(x+y)+s(x+\tilde{y})\big)\,ds\right\rangle\right\}=
$$
$$
=\exp\left\{i\left\langle y-\tilde{y},\int_0^1A_\epsilon\big(x+(1-s)y+s\tilde{y}\big)\,ds\right\rangle\right\}=\exp\left\{i\left\langle y-\tilde{y},\int_0^1\big(\tau_{-x}A_\epsilon\big)\big((1-s)y+s\tilde{y}\big)\,ds\right\rangle\right\}=
$$
$$
=\exp\{-i\int_{[y,\tilde{y}]}\hspace*{-0.5cm}\big(\tau_{-x}A_\epsilon\big)\}=\omega_{(\tau_{-x}A_\epsilon)}(y,\tilde{y}).
$$
\end{proof}
The following Corollary is a direct consequence of Lemma \ref{L.1.2}.
\begin{corollary}\label{C.1.3}
We have the following two relations between the operators $\widetilde{P}_\epsilon$ and $P_\epsilon$:
\begin{enumerate}
\item For any $v\in\mathscr{S}^\prime(\mathcal{X})$
\begin{equation}\label{1.8}
\big(\boldsymbol{\chi}^*\widetilde{P}_\epsilon(\boldsymbol{\chi}^*)^{-1}(\delta_0\otimes v)\big)\ =\ \delta_0\otimes\big(P_\epsilon v\big),
\end{equation}
\begin{equation}\label{1.8'}
\big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*(v\otimes\delta_0)\big)\ =\ \big(P_\epsilon v\big)\otimes\delta_0.
\end{equation}
\item If $p_\epsilon$ does not depend on its second variable $y\in\mathcal{X}$, then the following equality is true:
\begin{equation}\label{1.7}
\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*\ =\ P_\epsilon\otimes{\rm id\hspace*{-1.5pt}l}.
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}
It is enough to prove \eqref{1.8} for any $v\in\mathscr{S}(\mathcal{X})$. We choose $\varphi\in C^\infty_0(\mathcal{X})$ so that $\{\varphi_n\}_{n\in\mathbb{N}^*}$ with $\varphi_n(x):=n^d\varphi(nx)$ is a $\delta_0$-sequence. We apply now \eqref{1.5} to $u:=\varphi_n\otimes v$ in order to obtain
$$
\big(\boldsymbol{\chi}^*\widetilde{P}_\epsilon(\boldsymbol{\chi}^*)^{-1}(\varphi_n\otimes v)\big)(x,y)=\varphi_n(x)\left[\mathfrak{Op}^{(\tau_{-x}A_\epsilon)}\left(\big((\tau_x\otimes{\rm id\hspace*{-1.5pt}l}\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ\right)v\right](y).
$$
We end the proof of \eqref{1.8} by taking the limit $n\rightarrow\infty$ and noticing that for $x=0$ we obtain in the second factor above: $\big((\tau_0\otimes{\rm id\hspace*{-1.5pt}l}\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ=\big(({\rm id\hspace*{-1.5pt}l}\otimes{\rm id\hspace*{-1.5pt}l}\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ=p_\epsilon^\circ$. For \eqref{1.8'} we use exactly the same procedure starting from \eqref{1.4}.
The equality \eqref{1.7} follows directly from \eqref{1.4} under the given assumption on $p_\epsilon$ that implies that $\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_y\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ=p_\epsilon^\circ$.
\end{proof}
In order to study the continuity and the self-adjointness of $\widetilde{P}_\epsilon$ we need some more function spaces related to the magnetic Sobolev spaces; in order to define these spaces we shall need the family of operator valued symbols $\{\mathfrak{q}_{s,\epsilon}\}_{(s,\epsilon)\in\mathbb{R}\times[-\epsilon_0,\epsilon_0]}$ introduced in Remark \ref{R.A.25} and the associated operators:
$$
Q_{s,\epsilon}:=\mathfrak{Op}^{A_\epsilon}\big(\mathfrak{q}_{s,\epsilon}\big),\qquad Q^\prime_{s,\epsilon}:=Q_{s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}.
$$
Let us still denote by $\widetilde{Q}_{s,\epsilon}:=\boldsymbol{\psi}^*Q^\prime_{s,\epsilon}\boldsymbol{\psi}^*$ with $\boldsymbol{\psi}$ from \eqref{1.1} and let us notice that due to Corollary \ref{C.1.3} (2) the operators $\widetilde{Q}_{s,\epsilon}$ and $Q_{s,\epsilon}$ are in the same relation as the pair $\widetilde{P}_\epsilon$ and $P_\epsilon$.
\begin{definition}\label{D.1.4}
For magnetic fields $\{B_\epsilon\}_{\epsilon\in[-\epsilon_0,\epsilon_0]}$ verifying Hypothesis H.1 and for choices of vector potentials given by \eqref{0.28} we define the following spaces.
\begin{enumerate}
\item The magnetic Sobolev space of order $s\in\mathbb{R}$ (as defined in \cite{IMP1}) is
\begin{equation}\label{1.9}
\mathcal{H}^s_{A_\epsilon}(\mathcal{X})\ :=\ \left\{\,u\in\mathscr{S}^\prime(\mathcal{X})\,\mid\,Q_{s,\epsilon}u\in L^2(\mathcal{X})\,\right\},
\end{equation}
endowed with the following natural quadratic norm
\begin{equation}\label{1.10}
\|u\|_{\mathcal{H}^s_{A_\epsilon}(\mathcal{X})}\ :=\ \left\|Q_{s,\epsilon}u\right\|_{L^2(\mathcal{X})},\quad\forall u\in\mathcal{H}^s_{A_\epsilon}(\mathcal{X})
\end{equation}
that makes it a Hilbert space containing $\mathscr{S}(\mathcal{X})$ as a dense subspace.
\item We shall define also $\mathcal{H}^\infty_{A_\epsilon}(\mathcal X):=\underset{s\in\mathbb{R}}{\bigcap}\mathcal{H}^s_{A_\epsilon}(\mathcal X)$ with the projective limit topology.
\item For $s\in\mathbb{R}$ we consider also the spaces
\begin{equation}\label{1.11}
\widetilde{\mathcal{H}}^s_{A_\epsilon}(\mathcal{X}^2)\ :=\ \left\{\,u\in\mathscr{S}^\prime(\mathcal{X}^2)\,\mid\,\widetilde{Q}_{s,\epsilon}u\in L^2(\mathcal{X}^2)\,\right\},
\end{equation}
endowed with the following natural quadratic norm
\begin{equation}\label{1.12}
\|u\|_{\widetilde{\mathcal{H}}^s_{A_\epsilon}(\mathcal{X}^2)}\ :=\ \left\|\widetilde{Q}_{s,\epsilon}u\right\|_{L^2(\mathcal{X}^2)},\quad\forall u\in\widetilde{\mathcal{H}}^s_{A_\epsilon}(\mathcal{X}^2)
\end{equation}
that makes it a Hilbert space containing $\mathscr{S}(\mathcal{X}^2)$ as a dense subspace.
\end{enumerate}
\end{definition}
\begin{remark}\label{R.1.5}
$\boldsymbol{\psi}^*$ is a unitary operator from $\widetilde{\mathcal{H}}^s_{A_\epsilon}(\mathcal{X}^2)$ onto $\mathcal{H}^s_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})$.
\end{remark}
\begin{proof}
Let us choose some $u\in\mathscr{S}^\prime(\mathcal{X}^2)$ and notice that
$$
u\in\widetilde{\mathcal{H}}^s_{A_\epsilon}(\mathcal{X}^2)\ \Leftrightarrow\ \boldsymbol{\psi}^*Q^\prime_{s,\epsilon}\boldsymbol{\psi}^*u\in L^2(\mathcal{X}^2)\ \Leftrightarrow\ \big(Q_{s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}\big)\boldsymbol{\psi}^*u\in L^2(\mathcal{X}^2)\ \Leftrightarrow\ \boldsymbol{\psi}^*u\in\mathcal{H}^s_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})
$$
and evidently we have that
$$
\left\|\boldsymbol{\psi}^*u\right\|_{\mathcal{H}^s_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})}\ =\ \|u\|_{\widetilde{\mathcal{H}}^s_{A_\epsilon}(\mathcal{X}^2)}.
$$
\end{proof}
We can prove now a continuity property of the operator $\widetilde{P}_\epsilon$ on the spaces defined by \eqref{1.11}.
\begin{lemma}\label{L.1.16}
For any $s\in\mathbb{R}$ we have that $\widetilde{P}_\epsilon\in\mathbb{B}\big(\widetilde{\mathcal{H}}^{s+m}_{A_\epsilon}(\mathcal{X}^2);\widetilde{\mathcal{H}}^s_{A_\epsilon}(\mathcal{X}^2)\big)$ uniformly for $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
Due to \eqref{1.4} and Remark \ref{R.1.5} it is enough to prove that the application
$$
\mathscr{S}(\mathcal{X}^2)\ni u\mapsto\left[\mathfrak{Op}^{A_\epsilon}\left(\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_y\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ\right)u(.,y)\right](x)\in\mathscr{S}(\mathcal{X}^2)
$$
has a continuous extension from $\mathcal{H}^{s+m}_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})$ to $\mathcal{H}^s_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})$ uniformly for $\epsilon\in[-\epsilon_0,\epsilon_0]$. But
$$
\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_y\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ(x,\xi)\ =\ p_\epsilon(x,x-y,\xi)
$$
and the family $\{p_\epsilon(x,x-y,\xi)\}_{(y,\epsilon)\in\mathcal{X}\times[-\epsilon_0,\epsilon_0]}$ of symbols (in the variables $(x,\xi)\in\Xi$) is a bounded subset of $S^m_1(\Xi)$ (due to our Hypothesis). Applying the continuity properties of magnetic pseudodifferential operators in magnetic Sobolev spaces proved in \cite{IMP1} we conclude that there exists a constant $C>0$ such that
$$
\left\|\mathfrak{Op}^{A_\epsilon}\left(\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_y\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon\big)^\circ\right)u(.,y)\right\|_{\mathcal{H}^s_{A_\epsilon}(\mathcal{X})}^2\ \leq\ C^2\|u(.,y)\|_{\mathcal{H}^{s+m}_{A_\epsilon}(\mathcal{X})}^2
$$
for any $u\in\mathscr{S}(\mathcal{X}^2)$, $\forall y\in\mathcal{X}$ and $\forall\epsilon\in[-\epsilon_0,\epsilon_0]$. We end then the proof by integrating the above inequality with respect to $y\in\mathcal{X}$.
\end{proof}
In order to prove the self-adjointness of $\widetilde{P}_\epsilon$ in $L^2(\mathcal{X}^2)$ we use the following Remark.
\begin{remark}\label{R.1.17}
Suppose given $r\in S^m_1(\mathcal{X}\times\Xi)$. Then evidently $r(.,y,.)\in S^m_1(\Xi)$ uniformly for $y\in\mathcal{X}$. If $B$ is a magnetic field on $\mathcal{X}$ with components of class $BC^\infty(\mathcal{X})$ and $A$ an associated vector potential having components of class $C^\infty_{\text{\sf pol}}(\mathcal{X})$ we define the {\it magnetic pseudodifferential operator with parameter} $y\in\mathcal{X}$
\begin{equation}\label{1.13}
\big(\mathfrak{R}u\big)(x,y)\ :=\ (2\pi)^{-d/2}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\xi,x-\tilde{x}>}\omega_A(x,\tilde{x})\,r\big((x+\tilde{x})/2,y,\xi\big)\,u(\tilde{x},y)\,d\tilde{x}\,d\xi,\quad\forall u\in\mathscr{S}(\mathcal{X}^2),\ \forall(x,y)\in\mathcal{X}^2.
\end{equation}
A straightforward modification of the arguments from \cite{IMP1}, denoting by $\mathfrak{R}_\epsilon$ the operator defined as in \eqref{1.13} with the vector potential $A_\epsilon$, allows us to prove that
\begin{equation}\label{1.14}
\mathfrak{R}_\epsilon\ \in\ \mathbb{B}\big(\mathscr{S}(\mathcal{X}^2);\mathscr{S}(\mathcal{X}^2)\big)\ \cap\ \mathbb{B}\big(\mathcal{H}^{s+m}_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X});\mathcal{H}^{s}_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})\big),\quad\forall s\in\mathbb{R}.
\end{equation}
Moreover, if $r$ is elliptic, then for any $u\in L^2(\mathcal{X}^2)$ and any $s\in\mathbb{R}$ we have the equivalence relation:
\begin{equation}\label{1.15}
u\ \in\ \mathcal{H}^{s+m}_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})\ \Longleftrightarrow\ \mathfrak{R}_\epsilon u\ \in\ \mathcal{H}^{s}_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X}).
\end{equation}
\end{remark}
\begin{proposition}\label{P.1.18}
$\widetilde{P}_\epsilon$ is a self-adjoint operator in $L^2(\mathcal{X}^2)$ with domain $\widetilde{\mathcal{H}}^m_{A_\epsilon}(\mathcal{X}^2)$. It is essentially self-adjoint on $\mathscr{S}(\mathcal{X}^2)$.
\end{proposition}
\begin{proof}
Following Lemma \ref{L.1.16}, the operator $\widetilde{P}_\epsilon$ with domain $\widetilde{\mathcal{H}}^m_{A_\epsilon}(\mathcal{X}^2)$ is well defined in $L^2(\mathcal{X}^2)$. Moreover, we know by Lemma \ref{L.1.1} (3) that $\widetilde{P}_\epsilon$ is symmetric when defined on $\mathscr{S}(\mathcal{X}^2)$, which is dense in $\widetilde{\mathcal{H}}^m_{A_\epsilon}(\mathcal{X}^2)$ for its own norm-topology, so that we conclude that $\widetilde{P}_\epsilon$ is also symmetric when defined on $\widetilde{\mathcal{H}}^m_{A_\epsilon}(\mathcal{X}^2)$.
Considering now equation \eqref{1.4} and Remark \ref{R.1.5} it follows that $\widetilde{P}_\epsilon$ as considered in the hypothesis of the Proposition is self-adjoint if and only if the operator $\mathfrak{R}_\epsilon$ defined by \eqref{1.13} with a symbol $r_\epsilon(x,y,\xi):=p_\epsilon(x,x-y,\xi)$ is self-adjoint in $L^2(\mathcal{X}^2)$ with domain $\mathcal{H}^m_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})$. Using the symmetry of $\widetilde{P}_\epsilon$ and its unitary equivalence with $\mathfrak{R}_\epsilon$ we conclude that $\mathfrak{R}_\epsilon$ is also symmetric on its domain.
Let us fix some $v\in\mathcal{D}(\mathfrak{R}_\epsilon^*)$; thus we know that $v\in L^2(\mathcal{X}^2)$ and there exists some $f\in L^2(\mathcal{X}^2)$ such that
$$
\left(\mathfrak{R}_\epsilon u,v\right)_{L^2(\mathcal{X}^2)}\ =\ \left(u,f\right)_{L^2(\mathcal{X}^2)},\quad\forall u\in\mathscr{S}(\mathcal{X}^2).
$$
We conclude that $\mathfrak{R}_\epsilon v=f$ in $\mathscr{S}^\prime(\mathcal{X}^2)$ and, due to \eqref{1.15} with $s=0$ (recall that $f\in L^2(\mathcal{X}^2)$), we have that $v\in \mathcal{H}^{m}_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathcal{X})$, i.e. $v$ belongs to the domain of $\mathfrak{R}_\epsilon$. In conclusion we get that $\mathfrak{R}_\epsilon^*=\mathfrak{R}_\epsilon$.
The last statement of the Proposition follows from the density of $\mathscr{S}(\mathcal{X}^2)$ in $\widetilde{\mathcal{H}}^m_{A_\epsilon}(\mathcal{X}^2)$ and the continuity property proved in Lemma \ref{L.1.16}.
\end{proof}
\section{The auxiliary operator in $L^2\big(\mathcal{X}\times\mathbb{T}\big)$}\label{S.2}
\setcounter{equation}{0}
\setcounter{theorem}{0}
We begin with some elements concerning the Floquet-Bloch theory. We use the notations from the beginning of subsection 9.2 of the Appendix.
\begin{definition}\label{D.2.1}
\hspace*{0.5cm}$\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big):=$
$$
:=\left\{v\in\mathscr{S}^\prime\big(\mathcal{X}^2\times\mathcal{X}^*\big)\,\mid\,v(x,y+\gamma,\theta)=e^{i<\theta,\gamma>}v(x,y,\theta)\,\forall\gamma\in\Gamma,\ v(x,y,\theta+\gamma^*)=v(x,y,\theta)\,\forall\gamma^*\in\Gamma^*\right\},
$$
endowed with the topology induced from $\mathscr{S}^\prime\big(\mathcal{X}^2\times\mathcal{X}^*\big)$.
\end{definition}
\begin{definition}\label{D.2.2}
\hspace*{0.5cm}$\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big):=\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big)\cap L^2_{\text{\sf loc}}\big(\mathcal{X}^2\times\mathcal{X}^*\big)\cap L^2\big(\mathcal{X}\times E\times E^*\big)$ endowed with the quadratic norm
\begin{equation}\label{2.1}
\|v\|_{\mathscr{F}_0}\ :=\ \sqrt{|E^*|^{-1}\int_{\mathcal{X}}\int_{E}\int_{E^*}|v(x,y,\theta)|^2dx\,dy\,d\theta},\qquad\forall v\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big),
\end{equation}
that makes $\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ into a Hilbert space.
\end{definition}
We evidently have a continuous embedding of $\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ into $\mathscr{S}^\prime\big(\mathcal{X}^2\times\mathcal{X}^*\big)$.
\begin{lemma}\label{L.2.3}
The following map defined on $\mathscr{S}\big(\mathcal{X}^2\big)$:
\begin{equation}\label{2.2}
\big(\mathcal{U}_\Gamma u\big)(x,y,\theta):=\sum\limits_{\gamma\in\Gamma}e^{i<\theta,\gamma>}u(x,y-\gamma),\qquad \forall(x,y)\in\mathcal{X}^2,\forall\theta\in\mathcal{X}^*,\forall u\in\mathscr{S}\big(\mathcal{X}^2\big),
\end{equation}
extends as a unitary operator $\mathcal{U}_\Gamma:L^2\big(\mathcal{X}^2\big)\rightarrow\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$.
The inverse of the above operator is explicitly given by
\begin{equation}\label{2.3}
\big(\mathcal{W}_\Gamma v\big)(x,y):=|E^*|^{-1}\int_{E^*}v(x,y,\theta)d\theta,\qquad \forall(x,y)\in\mathcal{X}^2,\,\forall v\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big).
\end{equation}
\end{lemma}
\begin{proof}
Let us notice that for $u\in\mathscr{S}\big(\mathcal{X}^2\big)$ the series in \eqref{2.2} converges in $\mathscr{E}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ and its sum, which we denote by $\mathcal{U}_\Gamma u$, evidently verifies the two relations that characterize the subspace $\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ as a subspace of $\mathscr{S}^\prime\big(\mathcal{X}^2\times\mathcal{X}^*\big)$:
\begin{equation}\label{2.4}
\big(\mathcal{U}_\Gamma u\big)(x,y,\theta+\gamma^*)=\big(\mathcal{U}_\Gamma u\big)(x,y,\theta),\qquad\forall(x,y,\theta)\in\mathcal{X}^2\times\mathcal{X}^*,\ \forall\gamma^*\in\Gamma^*,
\end{equation}
\begin{equation}\label{2.5}
\big(\mathcal{U}_\Gamma u\big)(x,y+\alpha,\theta)=\sum\limits_{\gamma\in\Gamma}e^{i<\theta,\gamma>}u(x,y+\alpha-\gamma)=e^{i<\theta,\alpha>}\big(\mathcal{U}_\Gamma u\big)(x,y,\theta),\quad\forall(x,y,\theta)\in\mathcal{X}^2\times\mathcal{X}^*,\ \forall\alpha\in\Gamma.
\end{equation}
In particular, considering a fixed pair $(x,y)\in\mathcal{X}^2$, we obtain an element $ \big(\mathcal{U}_\Gamma u\big)(x,y,.)\in\mathscr{S}\big(\mathbb{T}^{*,d}\big)$ and due to Remark \ref{A.10} we can compute its Fourier series that converges in $\mathscr{S}\big(\mathbb{T}^{*,d}\big)$:
\begin{equation}\label{2.6}
\big(\mathcal{U}_\Gamma u\big)(x,y,\theta)=\sum\limits_{\gamma\in\Gamma}\big(\widehat{\mathcal{U}_\Gamma u}\big)(x,y,\gamma)e^{i<\theta,\gamma>},\quad\big(\widehat{\mathcal{U}_\Gamma u}\big)(x,y,\gamma)=|E^*|^{-1}\int_{E^*}e^{-i<\theta,\gamma>}\big(\mathcal{U}_\Gamma u\big)(x,y,\theta)\,d\theta.
\end{equation}
We also have the Parseval equality
\begin{equation}\label{2.7}
\sum\limits_{\gamma\in\Gamma}\left|\big(\widehat{\mathcal{U}_\Gamma u}\big)(x,y,\gamma)\right|^2\ =\ |E^*|^{-1}\int_{E^*}\left|\big(\mathcal{U}_\Gamma u\big)(x,y,\theta)\right|^2d\theta.
\end{equation}
Comparing \eqref{2.2} and \eqref{2.6} we conclude that $\big(\widehat{\mathcal{U}_\Gamma u}\big)(x,y,\gamma)=u(x,y-\gamma)$; replacing then in \eqref{2.7} we get the equality
\begin{equation}\label{2.8}
|E^*|^{-1}\int_{E^*}\left|\big(\mathcal{U}_\Gamma u\big)(x,y,\theta)\right|^2d\theta\ =\ \sum\limits_{\gamma\in\Gamma}\left|u(x,y-\gamma)\right|^2.
\end{equation}
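For the reader's convenience, the identification $\big(\widehat{\mathcal{U}_\Gamma u}\big)(x,y,\gamma)=u(x,y-\gamma)$ used above can also be obtained by a direct computation, exchanging the rapidly convergent sum in \eqref{2.2} with the integral in \eqref{2.6} and using the orthogonality of the characters $\{e^{i<.,\gamma>}\}_{\gamma\in\Gamma}$ on $E^*$ (i.e. $|E^*|^{-1}\int_{E^*}e^{i<\theta,\beta>}d\theta=\delta_{\beta,0}$ for $\beta\in\Gamma$):
$$
\big(\widehat{\mathcal{U}_\Gamma u}\big)(x,y,\gamma)\ =\ \sum\limits_{\alpha\in\Gamma}u(x,y-\alpha)\,|E^*|^{-1}\int_{E^*}e^{i<\theta,\alpha-\gamma>}\,d\theta\ =\ u(x,y-\gamma).
$$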
If we integrate the equality \eqref{2.8} over $\mathcal{X}\times E$, using that $E$ is a fundamental domain for $\Gamma$ (so that $\sum_{\gamma\in\Gamma}\int_E|u(x,y-\gamma)|^2\,dy=\int_{\mathcal{X}}|u(x,y)|^2\,dy$), we obtain that
\begin{equation}\label{2.9}
\sqrt{|E^*|^{-1}\int_{\mathcal{X}}\int_{E}\int_{E^*}\left|\big(\mathcal{U}_\Gamma u\big)(x,y,\theta)\right|^2dx\,dy\,d\theta}\ =\ \sqrt{\int_{\mathcal{X}}\int_{\mathcal{X}}|u(x,y)|^2dx\,dy}.
\end{equation}
We conclude that $\mathcal{U}_\Gamma u\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ and $\mathcal{U}_\Gamma$ extends to an isometry $\mathcal{U}_\Gamma:L^2\big(\mathcal{X}^2\big)\rightarrow\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$. Thus, to end our proof it is enough to prove that the operator $\mathcal{W}_\Gamma$ is an inverse for $\mathcal{U}_\Gamma$. Let us consider some $v\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$, then for almost every $x\in\mathcal{X}$ and $y\in E$ we can write that
\begin{equation}\label{2.10}
v(x,y,\theta)=\sum\limits_{\gamma\in\Gamma}\hat{v}_\gamma(x,y)e^{i<\theta,\gamma>},\quad\text{in }L^2(E^*),
\end{equation}
with
\begin{equation}\label{2.11}
\hat{v}_\gamma(x,y)=|E^*|^{-1}\int_{E^*}e^{-i<\theta,\gamma>}v(x,y,\theta)d\theta=|E^*|^{-1}\int_{E^*}v(x,y-\gamma,\theta)d\theta=\hat{v}_0(x,y-\gamma).
\end{equation}
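For the reader's convenience, the second equality in \eqref{2.11} is nothing but the quasi-periodicity condition from Definition \ref{D.2.1}: for every $\gamma\in\Gamma$ we have
$$
e^{-i<\theta,\gamma>}v(x,y,\theta)\ =\ e^{-i<\theta,\gamma>}e^{i<\theta,\gamma>}v(x,y-\gamma,\theta)\ =\ v(x,y-\gamma,\theta).
$$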
Using the identity \eqref{2.11} and the Parseval equality we obtain the following chain of equalities, which implies that $\mathcal{W}_\Gamma v\in L^2(\mathcal{X}^2)$ and that the map $\mathcal{W}_\Gamma:\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)\rightarrow L^2(\mathcal{X}^2)$ is an isometry:
$$
\|\mathcal{W}_\Gamma v\|^2_{L^2(\mathcal{X}^2)}=\int_{\mathcal{X}}\int_{\mathcal{X}}|\hat{v}_0(x,y)|^2dx\,dy=\int_{\mathcal{X}}\left[\sum\limits_{\gamma\in\Gamma}\int_{E}|\hat{v}_0(x,y-\gamma)|^2dy\right]dx=\int_{\mathcal{X}}\left[\sum\limits_{\gamma\in\Gamma}\int_{E}|\hat{v}_\gamma(x,y)|^2dy\right]dx=
$$
$$
=\int_{\mathcal{X}}\left[\int_{E}\sum\limits_{\gamma\in\Gamma}|\hat{v}_\gamma(x,y)|^2dy\right]dx=\int_{\mathcal{X}}\left[\int_{E}|E^*|^{-1}\int_{E^*}|v(x,y,\theta)|^2d\theta\,dy\right]dx=\|v\|_{\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)}^2.
$$
Moreover, for any $u\in\mathscr{S}(\mathcal{X}^2)$ we have that
$$
\big(\mathcal{W}_\Gamma\mathcal{U}_\Gamma u\big)(x,y)=|E^*|^{-1}\int_{E^*}\left[\sum\limits_{\gamma\in\Gamma}e^{i<\theta,\gamma>}u(x,y-\gamma)\right]d\theta=|E^*|^{-1}\int_{E^*}u(x,y)d\theta=u(x,y),\qquad\forall(x,y)\in\mathcal{X}^2.
$$
Thus $\mathcal{W}_\Gamma\mathcal{U}_\Gamma$ coincides with the identity on the dense subspace $\mathscr{S}(\mathcal{X}^2)$ of $L^2(\mathcal{X}^2)$, hence on all of $L^2(\mathcal{X}^2)$. Since the isometry $\mathcal{W}_\Gamma$ is injective, for any $v\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ the identity $\mathcal{W}_\Gamma\big(\mathcal{U}_\Gamma\mathcal{W}_\Gamma v-v\big)=0$ gives $\mathcal{U}_\Gamma\mathcal{W}_\Gamma v=v$; hence $\mathcal{U}_\Gamma$ is surjective, i.e. unitary, with inverse $\mathcal{W}_\Gamma$.
\end{proof}
\begin{lemma}\label{L.2.4}
With the above definitions for the operators $\mathcal{U}_\Gamma$ and $\mathcal{W}_\Gamma$ we have that
\begin{enumerate}
\item $\mathcal{U}_\Gamma$ admits a continuous extension to $\mathscr{S}^\prime(\mathcal{X}^2)$ with values in $\mathscr{S}^\prime_\Gamma(\mathcal{X}^2\times\mathcal{X}^*)$.
\item $\mathcal{W}_\Gamma$ admits a continuous extension to $\mathscr{S}^\prime_\Gamma(\mathcal{X}^2\times\mathcal{X}^*)$ with values in $\mathscr{S}^\prime(\mathcal{X}^2)$.
\item We have the equalities: $\mathcal{U}_\Gamma\mathcal{W}_\Gamma={\rm id\hspace*{-1.5pt}l}_{\mathscr{S}^\prime_\Gamma(\mathcal{X}^2\times\mathcal{X}^*)}$, $\mathcal{W}_\Gamma\mathcal{U}_\Gamma={\rm id\hspace*{-1.5pt}l}_{\mathscr{S}^\prime(\mathcal{X}^2)}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us consider a tempered distribution $u\in\mathscr{S}^\prime(\mathcal{X}^2)$; in order to prove the convergence of the series \eqref{2.2} in the sense of tempered distributions on $\mathcal{X}^2\times\mathcal{X}^*$ we choose a test function $\varphi\in\mathscr{S}(\mathcal{X}^2\times\mathcal{X}^*)$ and notice that for any $\nu\in\mathbb{N}$ there exist a seminorm $\|.\|_\nu$ on $\mathscr{S}(\mathcal{X}^2\times\mathcal{X}^*)$ and a seminorm $\|.\|^\prime_\nu$ on $\mathscr{S}^\prime(\mathcal{X}^2)$ such that the following is true:
\begin{equation}\label{2.12}
\left|\left\langle e^{i<\theta,\gamma>}u(x,y-\gamma),\varphi(x,y,\theta)\right\rangle\right|= \left|\left\langle u(x,y),\int_{\mathcal{X}^*}e^{i<\theta,\gamma>}\varphi(x,y+\gamma,\theta)d\theta\right\rangle\right|\leq\|u\|^\prime_\nu\|\varphi\|_\nu<\gamma>^{-\nu}.
\end{equation}
This last inequality implies the convergence of the series \eqref{2.2} in the sense of tempered distributions on $\mathcal{X}^2\times\mathcal{X}^*$ and the fact that the map $\mathcal{U}_\Gamma:\mathscr{S}^\prime(\mathcal{X}^2)\rightarrow\mathscr{S}^\prime(\mathcal{X}^2\times\mathcal{X}^*)$ is continuous. The fact that $\mathcal{U}_\Gamma u$ belongs to $\mathscr{S}^\prime_\Gamma(\mathcal{X}^2\times\mathcal{X}^*)$ follows either by a direct computation or by approximation with test functions.
Let us fix now some $v\in\mathscr{S}^\prime_\Gamma(\mathcal{X}^2\times\mathcal{X}^*)$; then, for any test function $\vartheta\in\mathscr{S}(\mathcal{X}^2)$, the application
\begin{equation}\label{2.13}
v_\vartheta:\mathscr{S}(\mathcal{X}^*)\rightarrow\mathbb{C},\quad\langle v_\vartheta,\varphi\rangle:=\langle v,\vartheta\otimes\varphi\rangle,\quad\forall\varphi\in\mathscr{S}(\mathcal{X}^*)
\end{equation}
is a $\Gamma^*$-periodic tempered distribution that can be canonically identified with an element of $\mathscr{S}^\prime\big(\mathbb{T}^{*,d}\big)$.
We define $\mathcal{W}_\Gamma v$ by
\begin{equation}\label{2.14}
\left\langle \mathcal{W}_\Gamma v,\vartheta\right\rangle:=|E^*|^{-1}\langle v_\vartheta,1\rangle_{\mathbb{T}^{*,d}},\quad\forall\vartheta\in\mathscr{S}(\mathcal{X}^2).
\end{equation}
It is straightforward to verify that $\mathcal{W}_\Gamma v\in\mathscr{S}^\prime(\mathcal{X}^2)$ and that the application $\mathcal{W}_\Gamma:\mathscr{S}^\prime_\Gamma(\mathcal{X}^2\times\mathcal{X}^*)\rightarrow\mathscr{S}^\prime(\mathcal{X}^2)$ is linear and continuous. Moreover it is evident that for the case $v\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$, the definition \eqref{2.14} coincides with \eqref{2.3}.
The equality $\mathcal{W}_\Gamma\mathcal{U}_\Gamma={\rm id\hspace*{-1.5pt}l}_{\mathscr{S}^\prime(\mathcal{X}^2)}$ results from the one valid on the test functions by the density of $\mathscr{S}(\mathcal{X}^2)$ in $\mathscr{S}^\prime(\mathcal{X}^2)$. In order to prove the other equality we notice that for any $v\in\mathscr{S}^\prime_\Gamma(\mathcal{X}^2\times\mathcal{X}^*)$ and any $\vartheta\in\mathscr{S}(\mathcal{X}^2)$ the tempered distribution $v_\vartheta$ defined in \eqref{2.13} belongs to $\mathscr{S}^\prime\big(\mathbb{T}^{*,d}\big)$ and thus may be written as the sum of a Fourier series converging as tempered distribution in $\mathscr{S}^\prime\big(\mathcal{X}^*\big)$:
\begin{equation}\label{2.15}
v_\vartheta\ =\ |E^*|^{-1}\sum\limits_{\gamma\in\Gamma}\left\langle v_\vartheta,e^{-i<.,\gamma>}\right\rangle_{\mathbb{T}^{*,d}}e^{i<.,\gamma>}.
\end{equation}
But, from \eqref{2.2} we have that
$$
\big(\mathcal{U}_\Gamma\mathcal{W}_\Gamma v\big)(x,y,\theta)=\sum\limits_{\gamma\in\Gamma}e^{i<\theta,\gamma>}\big(\mathcal{W}_\Gamma v\big)(x,y-\gamma),\quad\text{in }\mathscr{S}^\prime(\mathcal{X}^2\times\mathcal{X}^*).
$$
Let us notice that due to \eqref{2.14} we can write
$$
\left\langle \big(\mathcal{W}_\Gamma v\big)(x,y-\gamma),\vartheta(x,y)\right\rangle=\left\langle \big(\mathcal{W}_\Gamma v\big)(x,y),\vartheta(x,y+\gamma)\right\rangle=|E^*|^{-1}\left\langle v_{({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma})\vartheta},1\right\rangle_{\mathbb{T}^{*,d}}.
$$
On the other hand, from \eqref{2.13} we deduce that $\forall\varphi\in\mathscr{S}(\mathcal{X}^*)$ one has that
\begin{equation}\label{2.16}
\left\langle v_{({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma})\vartheta},\varphi\right\rangle=\left\langle v,\left[\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma}\big)\vartheta\right]\otimes\varphi\right\rangle=\left\langle\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_\gamma\otimes{\rm id\hspace*{-1.5pt}l}\big)v,\vartheta\otimes\varphi\right\rangle=
\end{equation}
$$
=\left\langle v,\vartheta\otimes\big(e^{-i<.,\gamma>}\varphi\big)\right\rangle=\left\langle v_\vartheta,e^{-i<.,\gamma>}\varphi\right\rangle.
$$
Let us recall the relation between a $\Gamma^*$-periodic tempered distribution and the distribution it induces on the torus (as described in Remark \ref{A.9}): if $\psi\in C^\infty_0(\mathcal{X}^*)$ is chosen such that $\sum\limits_{\gamma^*\in\Gamma^*}\tau_{\gamma^*}\psi=1$ on $\mathcal{X}^*$ (such a choice is evidently possible), then for any $\rho\in\mathscr{S}(\mathbb{T}^{*,d})$ we have that
$$
\left\langle v_{({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma})\vartheta},\rho\right\rangle_{\mathbb{T}^{*,d}}=\left\langle v_{({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma})\vartheta},\psi\rho\right\rangle_{\mathcal{X}},\qquad\left\langle v_\vartheta,\rho\right\rangle_{\mathbb{T}^{*,d}}=\left\langle v_\vartheta,\psi\rho\right\rangle_{\mathcal{X}}.
$$
Thus it follows that \eqref{2.16} implies that
$$
\left\langle v_{({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma})\vartheta},1\right\rangle_{\mathbb{T}^{*,d}}=\left\langle v_{({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma})\vartheta},\psi\right\rangle_{\mathcal{X}}=\left\langle v_\vartheta,e^{-i<.,\gamma>}\psi\right\rangle_{\mathcal{X}}=\left\langle v_\vartheta,e^{-i<.,\gamma>}\right\rangle_{\mathbb{T}^{*,d}}.
$$
We conclude that
$$
\big(\mathcal{U}_\Gamma\mathcal{W}_\Gamma v\big)_\vartheta\ =\ |E^*|^{-1}\sum\limits_{\gamma\in\Gamma}\left\langle v_\vartheta,e^{-i<.,\gamma>}\right\rangle_{\mathbb{T}^{*,d}}e^{i<.,\gamma>}\ =\ v_\vartheta,
$$
so that finally we obtain that $\mathcal{U}_\Gamma\mathcal{W}_\Gamma={\rm id\hspace*{-1.5pt}l}_{\mathscr{S}^\prime_\Gamma(\mathcal{X}^2\times\mathcal{X}^*)}$.
\end{proof}
\begin{lemma}\label{L.2.5}
Let $\widetilde{P}_\epsilon$ be the operator defined in Section 1 and $\widetilde{P}_{\epsilon,\Gamma}:=\widetilde{P}_\epsilon\otimes{\rm id\hspace*{-1.5pt}l}$.
\begin{enumerate}
\item $\widetilde{P}_{\epsilon,\Gamma}$ is a linear continuous operator in $\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big)$.
\item $\mathcal{U}_\Gamma\widetilde{P}_\epsilon=\widetilde{P}_{\epsilon,\Gamma}\mathcal{U}_\Gamma$ on $\mathscr{S}^\prime\big(\mathcal{X}^2\big)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We evidently have that $\widetilde{P}_{\epsilon,\Gamma}:\mathscr{S}\big(\mathcal{X}^2\times\mathcal{X}^*\big)\rightarrow\mathscr{S}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ and $\widetilde{P}_{\epsilon,\Gamma}:\mathscr{S}^\prime\big(\mathcal{X}^2\times\mathcal{X}^*\big)\rightarrow\mathscr{S}^\prime\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ are linear and continuous. It is thus sufficient to prove that $\forall v\in\mathscr{S}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ we have that:
\begin{equation}\label{2.17}
\left[\Big(\widetilde{P}_\epsilon\otimes{\rm id\hspace*{-1.5pt}l}\Big)v\right](x,y,\theta+\gamma^*)=\left[\Big(\widetilde{P}_\epsilon\otimes{\rm id\hspace*{-1.5pt}l}\Big)({\rm id\hspace*{-1.5pt}l}\otimes{\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma^*})v\right](x,y,\theta),\quad\forall(x,y,\theta)\in\mathcal{X}^2\times\mathcal{X}^*,\ \forall\gamma^*\in\Gamma^*
\end{equation}
and
\begin{equation}\label{2.18}
\left[\Big(\widetilde{P}_\epsilon\otimes{\rm id\hspace*{-1.5pt}l}\Big)v\right](x,y+\gamma,\theta)=\left[\Big(\widetilde{P}_\epsilon\otimes{\rm id\hspace*{-1.5pt}l}\Big)({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma}\otimes{\rm id\hspace*{-1.5pt}l})v\right](x,y,\theta),\quad\forall(x,y,\theta)\in\mathcal{X}^2\times\mathcal{X}^*,\ \forall\gamma\in\Gamma.
\end{equation}
While the equality \eqref{2.17} is obvious, for the equality \eqref {2.18} we use \eqref{1.3} (with $\tilde{y}$ replaced by $\tilde{y}+\gamma$) and the $\Gamma$-periodicity of $p_\epsilon$ with respect to the second variable.
For the second point of the Lemma we use \eqref{2.2} and \eqref{2.18} and notice that for any $u\in\mathscr{S}(\mathcal{X}^2)$ and for any $(x,y,\theta)\in\mathcal{X}^2\times\mathcal{X}^*$ we have that
$$
\Big(\widetilde{P}_{\epsilon,\Gamma}\mathcal{U}_\Gamma u\Big)(x,y,\theta)=\sum\limits_{\gamma\in\Gamma}e^{i<\theta,\gamma>}\left[\widetilde{P}_\epsilon\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_{\gamma}\big)u\right](x,y)=\sum\limits_{\gamma\in\Gamma}e^{i<\theta,\gamma>}\big(\widetilde{P}_\epsilon u\big)(x,y-\gamma)=\Big(\mathcal{U}_\Gamma\widetilde{P}_\epsilon u\Big)(x,y,\theta)
$$
\end{proof}
We shall study the self-adjointness of $\widetilde{P}_\epsilon$ acting in some new spaces of functions that are periodic in one argument. Let us first consider the operator $\widetilde{P}_{\epsilon,\Gamma}$.
\begin{definition}\label{D.2.6}
Recalling the operator $\widetilde{Q}_{s,\epsilon}$ from Definition \ref{D.1.4} (2) we define the operator $\widetilde{Q}_{s,\epsilon,\Gamma}:=\widetilde{Q}_{s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}$.
\end{definition}
As we have already noticed concerning Definition \ref{D.1.4} (2), the operator $\widetilde{Q}_{s,\epsilon}$ is obtained from $Q_{s,\epsilon}$ by ``doubling the variable'' in the same way as $\widetilde{P}_\epsilon$ is associated to $P_\epsilon$. We thus have a result similar to Lemma \ref{L.2.5} and deduce that $\widetilde{Q}_{s,\epsilon,\Gamma}$ is a linear continuous operator in $\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big)$.
\begin{definition}\label{D.2.6.b}
For any $s\in\mathbb{R}$ we define
$$
\mathscr{F}_{s,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big):=\left\{v\in\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big)\,\mid\,\widetilde{Q}_{s,\epsilon,\Gamma}v\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)\right\}
$$
that is evidently a Hilbert space for the quadratic norm
\begin{equation}\label{2.19}
\|v\|_{\mathscr{F}_{s,\epsilon}}\ :=\ \left\|\widetilde{Q}_{s,\epsilon,\Gamma}v\right\|_{\mathscr{F}_0}\qquad\forall v\in\mathscr{F}_{s,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big).
\end{equation}
\end{definition}
\begin{lemma}\label{L.2.7}
Let $\widetilde{\mathcal{H}}^s_{A_\epsilon}\big(\mathcal{X}^2\big)$ be the Hilbert space defined in \eqref{1.11}. Then $\mathcal{U}_\Gamma:\widetilde{\mathcal{H}}^s_{A_\epsilon}\big(\mathcal{X}^2\big)\rightarrow\mathscr{F}_{s,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ is unitary.
\end{lemma}
\begin{proof}
Let us pick $u\in\widetilde{\mathcal{H}}^s_{A_\epsilon}\big(\mathcal{X}^2\big)$; thus we know that $u\in\mathscr{S}^\prime\big(\mathcal{X}^2\big)$ and $\widetilde{Q}_{s,\epsilon}u\in L^2(\mathcal{X}^2)$. We denote by $v:=\mathcal{U}_\Gamma u\in\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big)$. From Lemma \ref{L.2.5} we deduce that $\mathcal{U}_\Gamma\widetilde{Q}_{s,\epsilon}=\widetilde{Q}_{s,\epsilon,\Gamma}\mathcal{U}_\Gamma$ on $\mathscr{S}^\prime\big(\mathcal{X}^2\big)$, and from Lemma \ref{L.2.3} we have that $ \mathcal{U}_\Gamma\widetilde{Q}_{s,\epsilon}u\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$. We conclude that $\widetilde{Q}_{s,\epsilon,\Gamma}v\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ so that $v\in\mathscr{F}_{s,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$. Moreover,
$$
\left\|\mathcal{U}_\Gamma u\right\|_{\mathscr{F}_{s,\epsilon}}=\|v\|_{\mathscr{F}_{s,\epsilon}}=\left\|\widetilde{Q}_{s,\epsilon,\Gamma}v\right\|_{\mathscr{F}_0}=\left\|\mathcal{U}_\Gamma\widetilde{Q}_{s,\epsilon}u\right\|_{\mathscr{F}_0}=\left\|\widetilde{Q}_{s,\epsilon}u\right\|_{L^2(\mathcal{X}^2)}=\|u\|_{\widetilde{\mathcal{H}}^s_{A_\epsilon}},
$$
implying that $\mathcal{U}_\Gamma:\widetilde{\mathcal{H}}^s_{A_\epsilon}\big(\mathcal{X}^2\big)\rightarrow\mathscr{F}_{s,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ is isometric. In order to prove its surjectivity we consider $v\in\mathscr{F}_{s,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$; then $v\in\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ and also $\widetilde{Q}_{s,\epsilon,\Gamma}v\in\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$. Let us define $u:=\mathcal{W}_\Gamma v\in\mathscr{S}^\prime(\mathcal{X}^2)$ (Lemma \ref{L.2.4}). Then we apply Lemma \ref{L.2.5} and deduce that $\mathcal{W}_\Gamma\widetilde{Q}_{s,\epsilon,\Gamma}=\widetilde{Q}_{s,\epsilon}\mathcal{W}_\Gamma$ on $\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\times\mathcal{X}^*\big)$, so that we have $\widetilde{Q}_{s,\epsilon}u=\mathcal{W}_\Gamma\widetilde{Q}_{s,\epsilon,\Gamma}v\in L^2(\mathcal{X}^2)$ and we conclude that $u\in\widetilde{\mathcal{H}}^s_{A_\epsilon}\big(\mathcal{X}^2\big)$ and
$\mathcal{U}_\Gamma u=v$.
\end{proof}
\begin{lemma}\label{L.2.8}
The operator $\widetilde{P}_{\epsilon,\Gamma}$ defined on $\mathscr{F}_{m,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ is self-adjoint as operator acting in the Hilbert space $\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$.
\end{lemma}
\begin{proof}
By Proposition \ref{P.1.18}, $\widetilde{P}_{\epsilon}$ is self-adjoint as operator acting in $L^2(\mathcal{X}^2)$, with domain $\widetilde{\mathcal{H}}^m_{A_\epsilon}\big(\mathcal{X}^2\big)$. By Lemma \ref{L.2.7} the operators $\mathcal{U}_\Gamma:L^2(\mathcal{X}^2)\rightarrow\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ and $\mathcal{U}_\Gamma:\widetilde{\mathcal{H}}^m_{A_\epsilon}\big(\mathcal{X}^2\big)\rightarrow\mathscr{F}_{m,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ are unitary. Finally, by Lemma \ref{L.2.5} we have that $\widetilde{P}_{\epsilon,\Gamma}\mathcal{U}_\Gamma=\mathcal{U}_\Gamma\widetilde{P}_{\epsilon}$ on $\widetilde{\mathcal{H}}^m_{A_\epsilon}\big(\mathcal{X}^2\big)$, so that $\widetilde{P}_{\epsilon,\Gamma}$, defined on $\mathscr{F}_{m,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)=\mathcal{U}_\Gamma\big[\widetilde{\mathcal{H}}^m_{A_\epsilon}\big(\mathcal{X}^2\big)\big]$, is unitarily equivalent to the self-adjoint operator $\widetilde{P}_{\epsilon}$ and is therefore itself self-adjoint in $\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$.
\end{proof}
We shall need some more function spaces in order to come back to the operator $\widetilde{P}_{\epsilon}$.
\begin{definition}\label{D.2.9}
Let $\theta\in\mathcal{X}^*$ and $s\in\mathbb{R}$.
\begin{enumerate}
\item $\mathscr{S}^\prime_\theta\big(\mathcal{X}^2\big):=\left\{u\in\mathscr{S}^\prime\big(\mathcal{X}^2\big)\,\mid\,\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma}\big)u=e^{i<\theta,\gamma>}u\ \forall\gamma\in\Gamma\right\}$ endowed with the topology induced from $\mathscr{S}^\prime\big(\mathcal{X}^2\big)$.
\item $\mathcal{H}^s_{\theta,\epsilon}\big(\mathcal{X}^2\big):=\left\{u\in\mathscr{S}^\prime_\theta\big(\mathcal{X}^2\big)\,\mid\,\widetilde{Q}_{s,\epsilon}u\in L^2\big(\mathcal{X}\times E\big)\right\}$ endowed with the following quadratic norm
\begin{equation}\label{2.20}
\|u\|_{\mathcal{H}^s_{\theta,\epsilon}}:=\left\|\widetilde{Q}_{s,\epsilon}u\right\|_{L^2(\mathcal{X}\times E)},\quad\forall u\in\mathcal{H}^s_{\theta,\epsilon}\big(\mathcal{X}^2\big).
\end{equation}
\item $\mathcal{K}^s_\epsilon\big(\mathcal{X}^2\big):=\mathcal{H}^s_{0,\epsilon}\big(\mathcal{X}^2\big)$.
\end{enumerate}
\end{definition}
As we already noticed in the proof of Lemma \ref{L.2.5}, for any $u\in\mathscr{S}^\prime\big(\mathcal{X}^2\big)$ the following equality holds:
$$
\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma}\big)\widetilde{P}_{\epsilon}u=\widetilde{P}_{\epsilon}\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma}\big)u,\quad\forall\gamma\in\Gamma.
$$
It follows that the operators $\widetilde{P}_{\epsilon}$ and $\widetilde{Q}_{s,\epsilon}$ leave the space $\mathscr{S}^\prime_\theta\big(\mathcal{X}^2\big)$ invariant. We shall use the notation $\mathscr{S}^\prime_0\big(\mathcal{X}^2\big)\equiv\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\big)$. Let us also notice that for $s=0$ the spaces defined in (2) and (3) above do not depend on $\epsilon$ and will be denoted by $\mathcal{H}_{\theta}\big(\mathcal{X}^2\big)$ and respectively by $\mathcal{K}\big(\mathcal{X}^2\big)$; this last one may be identified with $L^2\big(\mathcal{X}\times\mathbb{T}\big)$.
\begin{lemma}\label{L.2.10}
Let us consider the map $\boldsymbol{\psi}$ defined by \eqref{1.1}. Then for any $s\in\mathbb{R}$ the adjoint $\boldsymbol{\psi}^*$ is a unitary operator $\mathcal{K}^s_\epsilon\big(\mathcal{X}^2\big)\rightarrow
\mathcal{H}^s_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2\big(\mathbb{T}\big)$. In particular $\mathcal{K}^s_\epsilon\big(\mathcal{X}^2\big)$ is a Hilbert space for the norm \eqref{2.20} having $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$ as a dense subspace.
\end{lemma}
\begin{proof}
The case $s=0$ is straightforward since the map $\boldsymbol{\psi}^*$ leaves invariant the space $\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\big)$ and for any $u\in\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$ we have that
$$
\left\|\boldsymbol{\psi}^*u\right\|_{L^2(\mathcal{X}\times\mathbb{T})}^2=\int_{\mathcal{X}}\left(\int_{E}|u(x,x-y)|^2dy\right)dx=\int_{\mathcal{X}}\left(\int_{-E}|u(x,x+y)|^2dy\right)dx=
$$
$$
=\int_{\mathcal{X}}\left(\int_{x-E}|u(x,y)|^2dy\right)dx=\int_{\mathcal{X}}\left(\int_{E}|u(x,y)|^2dy\right)dx=\|u\|^2_{L^2(\mathcal{X}\times\mathbb{T})}.
$$
For any $s\in\mathbb{R}\setminus\{0\}$ we fix some $u\in\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\big)$ and notice that:
$$
u\in\mathcal{K}^s_\epsilon\big(\mathcal{X}^2\big)\Leftrightarrow\widetilde{Q}_{s,\epsilon}u\in L^2(\mathcal{X}\times\mathbb{T})\Leftrightarrow\boldsymbol{\psi}^*Q^\prime_{s,\epsilon}\boldsymbol{\psi}^*u\in L^2(\mathcal{X}\times\mathbb{T})\Leftrightarrow\big(Q_{s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}\big)\boldsymbol{\psi}^*u\in L^2(\mathcal{X}\times\mathbb{T})\Leftrightarrow
$$
$$
\Leftrightarrow\boldsymbol{\psi}^*u\in\mathcal{H}^s_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2\big(\mathbb{T}\big)
$$
and we also have that $\left\|\boldsymbol{\psi}^*u\right\|_{ \mathcal{H}^s_{A_\epsilon}(\mathcal{X})\otimes L^2(\mathbb{T})}=\|u\|_{\mathcal{K}^s_\epsilon}$.
The last statement becomes obvious by noticing that $\mathcal{H}^s_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2\big(\mathbb{T}\big)$ is a Hilbert space with $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$ a dense subspace of it that is invariant under the map $\boldsymbol{\psi}$.
\end{proof}
\begin{lemma}\label{L.2.11}
For any $\theta\in\mathcal{X}^*$ and $s\in\mathbb{R}$ we have that the operator $T_\theta:\mathscr{S}\big(\mathcal{X}^2\big)\rightarrow\mathscr{S}\big(\mathcal{X}^2\big)$ defined by
$$
\big(T_\theta u\big)(x,y):=e^{i<\theta,x-y>}u(x,y),
$$
induces a unitary operator $\mathcal{H}^s_{\theta,\epsilon}\big(\mathcal{X}^2\big)\rightarrow\mathcal{K}^s_\epsilon\big(\mathcal{X}^2\big)$. In particular we have that $\mathcal{H}^s_{\theta,\epsilon}\big(\mathcal{X}^2\big)$ is a Hilbert space containing $\mathscr{S}_\theta\big(\mathcal{X}^2\big):=T_\theta^{-1}\big[\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)\big]$ as a dense subspace.
\end{lemma}
\begin{proof}
Let us prove first that for any $\theta\in\mathcal{X}^*$ we have the equality:
\begin{equation}\label{2.21}
\widetilde{P}_\epsilon T_\theta\ =\ T_\theta\widetilde{P}_\epsilon,
\qquad\text{on }\mathscr{S}^\prime\big(\mathcal{X}^2\big).
\end{equation}
It is clearly enough to prove it on $\mathscr{S}\big(\mathcal{X}^2\big)$; but in this case it results directly from \eqref{1.3} because $(x+\tilde{y}-y)-\tilde{y}=x-y$.
Then the following equality also follows
\begin{equation}\label{2.22}
\widetilde{Q}_{s,\epsilon}T_\theta\ =\ T_\theta\widetilde{Q}_{s,\epsilon},
\qquad\text{on }\mathscr{S}^\prime\big(\mathcal{X}^2\big).
\end{equation}
We notice further that $T_\theta$ takes the space $\mathscr{S}^\prime_\theta\big(\mathcal{X}^2\big)$ into the space $\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\big)$, while the operator $\widetilde{Q}_{s,\epsilon}$ leaves invariant both spaces $\mathscr{S}^\prime_\theta\big(\mathcal{X}^2\big)$ and $\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\big)$.
For $u\in\mathscr{S}^\prime_\theta\big(\mathcal{X}^2\big)$ we have the equivalence relations:
$$
u\in\mathcal{H}^s_{\theta,\epsilon}\big(\mathcal{X}^2\big)\Leftrightarrow\widetilde{Q}_{s,\epsilon}u\in L^2(\mathcal{X}\times E)\Leftrightarrow T_\theta\widetilde{Q}_{s,\epsilon}u\in L^2(\mathcal{X}\times\mathbb{T})\Leftrightarrow\widetilde{Q}_{s,\epsilon}T_\theta u\in L^2(\mathcal{X}\times\mathbb{T})\Leftrightarrow
$$
$$
\Leftrightarrow T_\theta u\in\mathcal{K}^s_\epsilon\big(\mathcal{X}^2\big)
$$
and the equality $\left\|T_\theta u\right\|_{\mathcal{K}^s_\epsilon}=\|u\|_{\mathcal{H}^s_{\theta,\epsilon}}$.
The last statement is obvious since Lemma \ref{L.2.10} implies that $\mathcal{K}^s_\epsilon\big(\mathcal{X}^2\big)$ is a Hilbert space having $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$ as a dense subspace.
\end{proof}
\begin{lemma}\label{L.2.12}
For any $s\in\mathbb{R}$ we have that $\widetilde{P}_\epsilon\in\mathbb{B}\Big(\mathcal{K}^{s+m}_\epsilon\big(\mathcal{X}^2\big);\mathcal{K}^{s}_\epsilon\big(\mathcal{X}^2\big)\Big)$ uniformly for $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
We have seen that:
\begin{itemize}
\item $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$ is a dense subspace of $\mathcal{K}^{s+m}_\epsilon\big(\mathcal{X}^2\big)$,
\item $\boldsymbol{\psi}^*:\mathcal{K}^{s}_\epsilon\big(\mathcal{X}^2\big)\rightarrow\mathcal{H}^s_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T})$ is a unitary operator that leaves $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$ invariant.
\end{itemize}
It is thus enough to prove that $\forall s\in\mathbb{R}$, $\exists C_s>0$ such that:
\begin{equation}\label{2.23}
\left\|\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*u\right\|_{\mathcal{H}^s_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T})}\ \leq\ C_s\|u\|_{\mathcal{H}^{s+m}_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T})},\quad\forall u\in\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big),\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
Formula \eqref{1.4} in Lemma \ref{L.1.2} implies the equality:
\begin{equation}\label{2.24}
\Big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*u\Big)(x,y)=(2\pi)^{-d}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,x-\tilde{y}>}\omega_{A_\epsilon}(x,\tilde{y})\,p_\epsilon\Big(\frac{x+\tilde{y}}{2},\frac{x+\tilde{y}}{2}-y,\eta\Big)\,u(\tilde{y},y)\,d\tilde{y}\,d\eta,\quad\forall u\in\mathscr{S}\big(\mathcal{X}^2\big).
\end{equation}
But let us notice that the integral in \eqref{2.24} is well defined for any $u\in\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$, so that we can extend it to such functions (considered as periodic smooth functions in the second variable) either by duality and a computation in $\mathscr{S}^\prime\big(\mathcal{X}^2\big)$ or by approximating with functions from $\mathscr{S}\big(\mathcal{X}^2\big)$ with respect to the topology induced from $\mathscr{S}^\prime\big(\mathcal{X}^2\big)$. At the same time, the properties of the oscillating integral defining the right hand side of \eqref{2.24} allow us to conclude that $\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*\in\mathbb{B}\Big(\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big);\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)\Big)$. Considering now $y\in\mathcal{X}$ in \eqref{2.24} as a parameter, the usual properties of magnetic pseudodifferential operators (see \cite{IMP1}) imply that there exists $C_s>0$ such that:
\begin{equation}\label{2.25}
\left\|\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*u(.,y)\right\|_{\mathcal{H}^s_{A_\epsilon}\big(\mathcal{X}\big)}^2\ \leq\ C_s^2\|u(.,y)\|_{\mathcal{H}^{s+m}_{A_\epsilon}\big(\mathcal{X}\big)}^2,\qquad\forall(y,\epsilon)\in\mathcal{X}\times[-\epsilon_0,\epsilon_0],\ \forall u\in\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big).
\end{equation}
Integrating the above inequality with respect to $y\in\mathbb{T}$ we obtain \eqref{2.23}.
\end{proof}
\begin{proposition}\label{P.2.13}
$\widetilde{P}_\epsilon$ is a self-adjoint operator in $\mathcal{K}\big(\mathcal{X}^2\big)\equiv L^2(\mathcal{X}\times\mathbb{T})$ with domain $\mathcal{K}^{m}_\epsilon\big(\mathcal{X}^2\big)$; it is essentially self-adjoint on $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$.
\end{proposition}
\begin{proof}
Considering Lemma \ref{L.2.10}, which implies that for any $s\in\mathbb{R}$ the operator $\boldsymbol{\psi}^*:\mathcal{K}^{s}_\epsilon\big(\mathcal{X}^2\big)\rightarrow\mathcal{H}^s_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T})$ is unitary and leaves invariant the subspace $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$, it will be enough to prove that $\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*$ is self-adjoint in $L^2(\mathcal{X}\times\mathbb{T})$ with domain $\mathcal{H}^m_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T})$ and essentially self-adjoint on $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$.
Due to the arguments in the proof of Lemma \ref{L.2.12} we know that $\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*$ is well defined in $L^2(\mathcal{X}\times\mathbb{T})$ with domain $\mathcal{H}^m_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T})$, and that on $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$ it is given by the equality \eqref{2.24}. A straightforward check using \eqref{2.24} shows that the operator $\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*$ is symmetric on $\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$, which is a dense subspace of $\mathcal{H}^m_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T})$. As we know that $\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*\in\mathbb{B}\big(\mathcal{H}^m_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T});L^2(\mathcal{X}\times\mathbb{T})\big)$, it follows that $\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*$ is symmetric on its domain too. In order to prove its self-adjointness let us fix some $v\in\mathcal{D}\big([\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*]^*\big)$; it follows that $v\in L^2(\mathcal{X}\times\mathbb{T})$ and there exists some $f\in L^2(\mathcal{X}\times\mathbb{T})$ such that we have the equality
$$
\Big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*u,v\Big)_{L^2(\mathcal{X}\times\mathbb{T})}\ =\ (u,f)_{L^2(\mathcal{X}\times\mathbb{T})},\qquad\forall u\in\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big).
$$
Thus $\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*v=f$ as elements of $\mathscr{S}^\prime\big(\mathcal{X}\times\mathbb{T}\big)\equiv\mathscr{S}^\prime_\Gamma\big(\mathcal{X}^2\big)$. We notice that Remark \ref{R.1.17} remains true if we replace $\mathcal{X}^2$ by $\mathcal{X}\times\mathbb{T}$, so that $v\in\mathcal{H}^m_{A_\epsilon}\big(\mathcal{X}\big)\otimes L^2(\mathbb{T})$, i.e. $v$ belongs to the domain of $\boldsymbol{\psi}^*\widetilde{P}_\epsilon\boldsymbol{\psi}^*$.
The last statement clearly follows from the above results.
\end{proof}
We shall now present a connection between the operators defined in Propositions \ref{P.1.18} and \ref{P.2.13}.
\begin{proposition}\label{P.2.14}
Considering $\widetilde{P}_\epsilon$ as operator acting in $\mathscr{S}^\prime\big(\mathcal{X}^2\big)$ we shall denote by $\widetilde{P}_\epsilon^\prime$ the self-adjoint operator that it induces in $L^2(\mathcal{X}^2)$ with domain $\widetilde{\mathcal{H}}^m_{A_\epsilon}\big(\mathcal{X}^2\big)$ (as in Proposition \ref{P.1.18}) and by $\widetilde{P}_\epsilon^{\prime\prime}$ the self-adjoint operator that it induces in $L^2(\mathcal{X}\times\mathbb{T})$ with domain $\mathcal{K}^{m}_\epsilon\big(\mathcal{X}^2\big)$ (as in Proposition \ref{P.2.13}). Then we have the equality:
\begin{equation}\label{2.26}
\sigma\big(\widetilde{P}_\epsilon^\prime\big)\ =\ \sigma\big(\widetilde{P}_\epsilon^{\prime\prime}\big).
\end{equation}
\end{proposition}
\begin{proof}
From the arguments in the proof of Lemma \ref{L.2.8} we deduce that $\mathcal{U}_\Gamma\widetilde{P}_\epsilon^\prime\mathcal{U}_\Gamma^{-1}=\widetilde{P}_{\epsilon,\Gamma}:=\widetilde{P}_\epsilon\otimes{\rm id\hspace*{-1.5pt}l}$, that is a self-adjoint operator in $\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ with domain $\mathscr{F}_{m,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$.
On the other hand, from Lemma \ref{L.2.11} (and the arguments in its proof) we deduce that for any $\theta\in\mathcal{X}^*$ the operator $\widetilde{P}_{\epsilon,\theta}:=T_\theta^{-1}\widetilde{P}_\epsilon^{\prime\prime}T_\theta$ is the self-adjoint operator associated to $\widetilde{P}_\epsilon$ in $\mathcal{H}_\theta\big(\mathcal{X}^2\big)$, having the domain $\mathcal{H}^m_{\theta,\epsilon}\big(\mathcal{X}^2\big)$.
We shall consider the spaces $\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ and $\mathscr{F}_{m,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)$ as direct integrals of Hilbert spaces over the dual torus; more precisely:
$$
\mathscr{F}_0\big(\mathcal{X}^2\times\mathcal{X}^*\big)\cong\int_{\mathbb{T}^{*,d}}^\oplus\mathcal{H}_\theta\big(\mathcal{X}^2\big)d\theta,\qquad\mathscr{F}_{m,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big)\cong\int_{\mathbb{T}^{*,d}}^\oplus\mathcal{H}^m_{\theta,\epsilon}\big(\mathcal{X}^2\big)d\theta.
$$
Taking into account that:
$$
\big(\widetilde{P}_{\epsilon,\Gamma}u\big)(x,y,\theta)\ =\ \big(\widetilde{P}_{\epsilon,\theta}\,u(.,.,\theta)\big)(x,y),\quad\forall u\in\mathscr{F}_{m,\epsilon}\big(\mathcal{X}^2\times\mathcal{X}^*\big),
$$
and that the function $\mathbb{T}^{*,d}\ni\theta\mapsto\big(\widetilde{P}_{\epsilon,\theta}+i\big)^{-1}\in\mathbb{B}\big(\mathcal{H}_\theta;\mathcal{H}_\theta\big)$ is measurable, we can write:
\begin{equation}\label{2.27}
\widetilde{P}_{\epsilon,\Gamma}\ =\ \int_{\mathbb{T}^{*,d}}^\oplus\widetilde{P}_{\epsilon,\theta}\,d\theta.
\end{equation}
We can now apply Theorem XIII.85 (d) from \cite{RS-4} in order to conclude that we have the equivalence:
\begin{equation}\label{2.28}
\lambda\in\sigma\big(\widetilde{P}_{\epsilon,\Gamma}\big)\ \Longleftrightarrow\ \forall\delta>0,\ \left|\left\{\theta\in\mathbb{T}^{*,d}\,\mid\,\sigma\big(\widetilde{P}_{\epsilon,\theta}\big)\cap(\lambda-\delta,\lambda+\delta)\ne\emptyset\right\}\right|\,>0.
\end{equation}
Let us notice that $\sigma\big(\widetilde{P}_{\epsilon,\theta}\big)$ is independent of $\theta\in\mathbb{T}^{*,d}$ (each $\widetilde{P}_{\epsilon,\theta}$ being unitarily equivalent to $\widetilde{P}_\epsilon^{\prime\prime}$ through $T_\theta$) and deduce from \eqref{2.28} that $\sigma\big(\widetilde{P}_{\epsilon,\Gamma}\big)=\sigma\big(\widetilde{P}_\epsilon^{\prime\prime}\big)$. But the conclusion of the first paragraph of this proof implies that $\sigma\big(\widetilde{P}_{\epsilon,\Gamma}\big)=\sigma\big(\widetilde{P}_\epsilon^{\prime}\big)$ and we finish the proof.
\end{proof}
We shall end this section with a result giving a connection between the spaces $\mathcal{K}^{s}_\epsilon\big(\mathcal{X}^2\big)$, $\mathscr{S}\big(\mathcal{X};\mathcal{H}^s(\mathbb{T})\big)$ and $\mathscr{S}^\prime\big(\mathcal{X};\mathcal{H}^s(\mathbb{T})\big)$. We start with a technical Lemma.
\begin{lemma}\label{L.2.15}
Let $B$ be a magnetic field with components of class $BC^\infty(\mathcal{X})$ and $A$ an associated vector potential with components of class $C^\infty_{\text{\sf pol}}$. Let us consider a symbol $q\in S^s_1(\Xi)$ for some $s\in\mathbb{R}$. We denote by $Q:=\mathfrak{Op}^A(q)$, $Q^\prime:=Q\otimes{\rm id\hspace*{-1.5pt}l}$ and $\widetilde{Q}:=\boldsymbol{\psi}^*Q^\prime
\boldsymbol{\psi}^*$, where $\boldsymbol{\psi}$ is defined by \eqref{1.1}. Then we have that $\widetilde{Q}\in\mathbb{B}\big(\mathscr{S}\big(\mathcal{X};\mathcal{H}^s(\mathbb{T})\big);\mathscr{S}\big(\mathcal{X};L^2(\mathbb{T})\big)\big)$ uniformly for $q$ varying in bounded subsets of $S^s_1(\Xi)$ and for $B$ varying in bounded subsets of $BC^\infty(\mathcal{X})$.
\end{lemma}
\begin{proof}
On $\mathscr{S}\big(\mathcal{X};\mathcal{H}^s(\mathbb{T})\big)$ we shall use the following family of seminorms:
\begin{equation}\label{2.29}
|u|_{s,l}\ :=\ \underset{|\alpha|\leq l}{\sup}\left[\int_{\mathcal{X}}<x>^{2l}\left\|\big(\partial^\alpha_x u\big)(x,.)\right\|_{\mathcal{H}^s(\mathbb{T})}^2\ dx\right]^{1/2},\qquad l\in\mathbb{N}, u\in\mathscr{S}\big(\mathcal{X};\mathcal{H}^s(\mathbb{T})\big).
\end{equation}
We have to prove that for any $k\in\mathbb{N}$ there exist $l\in\mathbb{N}$ and $C>0$ such that:
\begin{equation}\label{2.30}
\left|\widetilde{Q} u\right|_{0,k}\ \leq\ C|u|_{s,l},\qquad\forall u\in\mathscr{S}\big(\mathcal{X};\mathcal{H}^s(\mathbb{T})\big),
\end{equation}
uniformly for $q$ varying in bounded subsets of $S^s_1(\Xi)$ and for $B$ varying in bounded subsets of $BC^\infty(\mathcal{X})$.
Using \eqref{1.3} and \eqref{1.7}, or a straightforward computation, we obtain that for any $u\in\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$:
\begin{equation}\label{2.31}
\big(\widetilde{Q} u\big)(x,y)\ =\ (2\pi)^{-d}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,y-\tilde{y}>}\,\omega_A(x,x-y+\tilde{y})\,q\Big(x+\frac{\tilde{y}-y}{2},\eta\Big)\,u(x-y+\tilde{y},\tilde{y})\,d\tilde{y}\,d\eta.
\end{equation}
In particular we obtain that $\widetilde{Q}u\in\mathscr{S}\big(\mathcal{X}\times\mathbb{T}\big)$.
For fixed $x,y,\tilde{y}\in\mathcal{X}$ and $\eta\in\mathcal{X}^*$, we consider the following function of the argument $t\in\mathcal{X}$:
\begin{equation}
\Phi(t)\ :=\ \omega_A(x,x-y+t)\,q\Big(x+\frac{t-y}{2},\eta\Big)\,u(x-y+t,\tilde{y}),
\end{equation}
and notice that its value for $t=\tilde{y}$ is exactly the factor that multiplies the exponential $e^{i<\eta,y-\tilde{y}>}$ under the integral in \eqref{2.31}; let us consider its Taylor expansion in $t\in\mathcal{X}$ around $t=y$ with integral remainder of order $n>d+s$:
\begin{equation}\label{2.32}
\Phi(\tilde{y}) \ =\ \underset{|\alpha|<n}{\sum}f_\alpha(x,y,\tilde{y},\eta)\big(\tilde{y}-y\big)^\alpha\ +\ \underset{|\alpha|=n}{\sum}g_\alpha(x,y,\tilde{y},\eta)\big(\tilde{y}-y\big)^\alpha,
\end{equation}
where
\begin{equation}\label{2.33}
f_\alpha(x,y,\tilde{y},\eta)\ :=\ \underset{\beta\leq\alpha}{\sum}f_{\alpha\beta}(x)\,q_{\alpha\beta}(x,\eta)\big(\partial^\beta_xu\big)(x,\tilde{y}),\quad f_{\alpha\beta}\in C^\infty_{\text{\sf pol}}(\mathcal X),\ q_{\alpha\beta}\in S^s_1(\Xi)
\end{equation}
and
\begin{equation}\label{2.34}
g_\alpha(x,y,\tilde{y},\eta)\ :=\ \underset{\beta\leq\alpha}{\sum}\int_0^1h_{\tau,\alpha,\beta}(x,y-\tilde{y})\,q_{\alpha\beta}\big(x+(1-\tau)\frac{\tilde{y}-y}{2},\eta\big)\big(\partial^\beta_xu\big)(x-(1-\tau)(y-\tilde{y}),\tilde{y})\,d\tau
\end{equation}
where $h_{\tau,\alpha,\beta}\in C^\infty_{\text{\sf pol}}(\mathcal X\times\mathcal X)$ uniformly for $\tau\in[0,1]$ and $q_{\alpha\beta}\in S^s_1(\Xi)$.
We use the relations \eqref{2.32}-\eqref{2.34} in \eqref{2.31} and eliminate the monomials $(\tilde{y}-y)^\alpha$ through partial integrations using the identity
$$
(\tilde{y}-y)^\alpha e^{i<\eta,y-\tilde{y}>}\ =\ \big(-D_\eta\big)^\alpha e^{i<\eta,y-\tilde{y}>}.
$$
Finally we obtain:
\begin{equation}\label{2.35}
\big(\widetilde{Q} u\big)(x,y)\ =\ \underset{|\alpha|<n}{\sum}\ \underset{\beta\leq\alpha}{\sum}f_{\alpha\beta}(x)\big(T_{\alpha\beta}u\big)(x,y)\ +\ \underset{|\alpha|=n}{\sum}\ \underset{\beta\leq\alpha}{\sum}\int_0^1\big(R_{\alpha\beta}(\tau)u\big)(x,y)\,d\tau,
\end{equation}
where
\begin{equation}\label{2.36}
\big(T_{\alpha\beta}u\big)(x,y):=(2\pi)^{-d}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,y-\tilde{y}>}t_{\alpha\beta}(x,\eta)\big(\partial^\beta_xu\big)(x,\tilde{y})\,d\tilde{y}\,d\eta,\quad t_{\alpha\beta}\in S^{s-|\alpha|}_1(\Xi),
\end{equation}
\begin{equation}\label{2.37}
\big(R_{\alpha\beta}(\tau)u\big)(x,y):=
\end{equation}
$$
(2\pi)^{-d}\int_{\mathcal{X}}\int_{\mathcal{X}^*}e^{i<\eta,y-\tilde{y}>}h_{\tau,\alpha,\beta}(x,y-\tilde{y})\,r_{\alpha\beta}\big(x+(1-\tau)\frac{\tilde{y}-y}{2},\eta\big)\,\big(\partial^\beta_xu\big)\big(x-(1-\tau)(y-\tilde{y}),\tilde{y}\big)\,d\tilde{y}\,d\eta,\quad r_{\alpha\beta}\in S^{s-n}_1(\Xi).
$$
We begin by estimating the term $T_{\alpha\beta}u$ by using Lemma \ref{L.A.19}: starting from \eqref{2.36} and considering $x\in\mathcal X$ as a parameter, we conclude that there exists a seminorm $c_{\alpha\beta}(q)$ of $q\in S^s_1(\Xi)$ such that
\begin{equation}\label{2.38}
\left\| \big(T_{\alpha\beta}u\big)(x,.)\right\|^2_{L^2(\mathbb{T})}\ \leq\ c_{\alpha\beta}(q)^2\left\|\big(\partial^\beta_xu\big)(x,.)\right\|^2_{\mathcal{H}^s(\mathbb{T})},\quad\forall x\in\mathcal X,\ \forall u\in\mathscr{S}(\mathcal X\times\mathbb{T}).
\end{equation}
Let us now consider the term $R_{\alpha\beta}(\tau)u$. We begin by noticing that due to our hypothesis there exist a constant $C(B)$ (bounded when the components of the magnetic field $B$ vary in bounded subsets of $BC^\infty(\mathcal X)$) and an integer $a\in\mathbb{Z}$ such that
\begin{equation}\label{2.39}
\left|h_{\tau,\alpha,\beta}(x,y-\tilde{y})\right|\ \leq\ C(B)<x>^a<y-\tilde{y}>^a,\quad\forall(x,y,\tilde{y})\in\mathcal X^3,\ \forall\tau\in[0,1].
\end{equation}
We integrate by parts in \eqref{2.37}, using the identity
$$
e^{i<\eta,y-\tilde{y}>}\ =\ <y-\tilde{y}>^{-2N}\big(1-\Delta_\eta\big)^Ne^{i<\eta,y-\tilde{y}>}.
$$
This allows us to conclude that there exists a seminorm $c^\prime_{\alpha,\beta,N}(q)$ of the symbol $q\in S^s_1(\Xi)$ for which we have the inequality:
\begin{equation}\label{2.40}
\left|\big(R_{\alpha\beta}(\tau)u\big)(x,y)\right|\ \leq\ C(B)c^\prime_{\alpha,\beta,N}(q)<x>^a\int_{\mathcal X^*}<\eta>^{s-n}d\eta\int_{\mathcal X}<z>^{a-2N}\left|\big(\partial^\beta_xu\big)\big(x-(1-\tau)z,y-z\big)\right|dz
\end{equation}
for any $(x,y)\in\mathcal X^2$ and any $\tau\in[0,1]$. We recall our choice $s-n<-d$, choose further $2N\geq a+2d$ and estimate the last integral by using the Cauchy-Schwarz inequality. We take the square of the inequality \eqref{2.40} and integrate with respect to $y\in E$, concluding that there exists a constant $C_0>0$ such that
$$
\int_E\left|\big(R_{\alpha\beta}(\tau)u\big)(x,y)\right|^2dy\ \leq
$$
$$
\leq\ C_0C(B)^2c^\prime_{\alpha,\beta,N}(q)^2<x>^{2a}\int_{\mathcal X}<z>^{-2d}\left(\int_E\left|\big(\partial^\beta_xu\big)\big(x-(1-\tau)z,y-z\big)\right|^2dy\right)dz,\quad\forall x\in\mathcal X,\ \forall\tau\in[0,1].
$$
For any $\Gamma$-periodic function $v\in L^2_{\text{\sf loc}}(\mathcal X)$ and for any $z\in\mathcal X$ we have that
$$
\int_E|v(y-z)|^2dy\ =\ \int_{\tau_zE}|v(y)|^2dy\ =\ \int_E|v(y)|^2dy
$$
so that for any $k\in\mathbb{N}$ there exists $C_k>0$ such that for any $\tau\in[0,1]$ we have that
\begin{equation}\label{2.41}
\int_{\mathcal X}<x>^{2k}\left\|\big(R_{\alpha\beta}(\tau)u\big)(x,.)\right\|^2_{L^2(\mathbb{T})}\,dx\ \leq\ C_kC(B)^2c^\prime_{\alpha,\beta,N}(q)^2\int_{\mathcal X}<x>^{2a+2k}\left\|\big(\partial^\beta_xu\big)(x,.)\right\|^2_{L^2(\mathbb{T})}dx.
\end{equation}
For the derivatives $\partial^\mu_x\big(T_{\alpha\beta}u\big)(x,.)$ and $\partial^\mu_x\big(R_{\alpha\beta}(\tau)u\big)(x,.)$ (for any $\mu\in\mathbb{N}^d$) we obtain in a similar way estimates of the same form as \eqref{2.38} and \eqref{2.41}, and using \eqref{2.35} we obtain \eqref{2.30}.
\end{proof}
\begin{lemma}\label{L.2.16}
The following topological embeddings are true (uniformly in $\epsilon\in[-\epsilon_0,\epsilon_0]$):
\begin{equation}\label{2.42}
\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)\ \hookrightarrow\ \mathcal{K}^m_\epsilon(\mathcal X\times\mathcal X)\ \hookrightarrow\ \mathscr{S}^\prime\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big).
\end{equation}
\end{lemma}
\begin{proof}
In order to prove the first embedding we take into account the density of $\mathscr{S}\big(\mathcal X\times\mathbb{T}\big)$ in $\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)$ and Definition \ref{D.2.9} (3) of the space $\mathcal{K}^m_\epsilon(\mathcal X\times\mathcal X)$. It is thus enough to prove that there exists a seminorm $|.|_{m,l}$ on $\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)$ such that
\begin{equation}\label{2.43}
\left\|\widetilde{Q}_{m,\epsilon}u\right\|_{L^2(\mathcal X\times\mathbb{T})}\ \leq\ C|u|_{m,l},\qquad\forall u\in\mathscr{S}\big(\mathcal X\times\mathbb{T}\big).
\end{equation}
But this fact has been proved in Lemma \ref{L.2.15} (inequality \eqref{2.30}).
For the second embedding let us notice that the canonical sesquilinear map on $\mathscr{S}^\prime\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)\times\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)$ (associated to the duality map) is just a continuous extension of the scalar product
\begin{equation}\label{2.44}
(u,v)_m\ :=\ \int_{\mathcal X}\big(u(x,.),v(x,.)\big)_{\mathcal{H}^m(\mathbb{T})}dx,\qquad\forall(u,v)\in\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)\times\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big).
\end{equation}
Due to the density of $\mathscr{S}\big(\mathcal X\times\mathbb{T}\big)$ in $\mathcal{K}^m_\epsilon(\mathcal X\times\mathcal X)$, this amounts to proving that there exists a continuous seminorm $|.|_{m,l}$ on $\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)$ such that we have
\begin{equation}\label{2.45}
|(u,v)_m|\ \leq\ \|u\|_{\mathcal{K}^m_\epsilon}\cdot|v|_{m,l}\qquad\forall (u,v)\in\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)\times\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big),
\end{equation}
where $\|u\|_{\mathcal{K}^m_\epsilon}=\left\|\widetilde{Q}_{m,\epsilon}u\right\|_{L^2(\mathcal X\times\mathbb{T})}$. Let us notice that
$$
(u,v)_m\ =\ \left(u,\big(1\otimes<D_\Gamma>^{2m}\big)v\right)_{L^2(\mathcal X\times\mathbb{T})}
=\left(\widetilde{Q}_{m,\epsilon}u\,,\,\widetilde{Q}_{-m,\epsilon}\big(1\otimes<D_\Gamma>^{2m}\big)v\right)_
{L^2(\mathcal X\times\mathbb{T})}.
$$
We denote by $v_\Gamma:=\big(1\otimes<D_\Gamma>^{2m}\big)v\in\mathscr{S}
\big(\mathcal X\times\mathbb{T}\big)$ and notice that we have the inequality
\begin{equation}\label{2.46}
|(u,v)_m|\ \leq\ \left\|\widetilde{Q}_{m,\epsilon}u\right\|_{L^2(\mathcal X\times\mathbb{T})}\,\left\|\widetilde{Q}_{-m,\epsilon}v_\Gamma\right\|_{L^2(\mathcal X\times\mathbb{T})}.
\end{equation}
We conclude thus that the inequality \eqref{2.45} follows if we can prove that there exists a seminorm $|.|_{m,l}$ on $\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)$ such that we have
\begin{equation}\label{2.47}
\left\|\widetilde{Q}_{-m,\epsilon}v_\Gamma\right\|_{L^2(\mathcal X\times\mathbb{T})}\ \leq\ C|v|_{m,l},\qquad\forall v\in\mathscr{S}\big(\mathcal X\times\mathbb{T}\big).
\end{equation}
From Lemma \ref{L.2.15} (inequality \eqref{2.30}) we know that there exists a seminorm $|.|_{-m,l}$ on $\mathscr{S}\big(\mathcal X\times\mathbb{T}\big)$ such that we have
\begin{equation}\label{2.48}
\left\|\widetilde{Q}_{-m,\epsilon}v_\Gamma\right\|_{L^2(\mathcal X\times\mathbb{T})}\ \leq\ C|v_\Gamma|_{-m,l},\qquad\forall v\in\mathscr{S}\big(\mathcal X\times\mathbb{T}\big).
\end{equation}
Now \eqref{2.47} follows from \eqref{2.48} once we notice that
$$
|v_\Gamma|_{-m,l}\ =\ |v|_{m,l}.
$$
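Indeed, assuming as usual that the norm on $\mathcal{H}^s(\mathbb{T})$ is given by $\|w\|_{\mathcal{H}^s(\mathbb{T})}=\left\|<D_\Gamma>^sw\right\|_{L^2(\mathbb{T})}$, for every multi-index $\alpha$ and every $x\in\mathcal X$ we can write
$$
\left\|\big(\partial^\alpha_xv_\Gamma\big)(x,.)\right\|_{\mathcal{H}^{-m}(\mathbb{T})}\ =\ \left\|<D_\Gamma>^{-m}<D_\Gamma>^{2m}\big(\partial^\alpha_xv\big)(x,.)\right\|_{L^2(\mathbb{T})}\ =\ \left\|\big(\partial^\alpha_xv\big)(x,.)\right\|_{\mathcal{H}^{m}(\mathbb{T})},
$$
so that the two seminorms defined by \eqref{2.29} coincide.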
\end{proof}
\section{The Grushin Problem}\label{S.3}
\setcounter{equation}{0}
\setcounter{theorem}{0}
Suppose given a symbol $p$ satisfying the assumptions of Lemma \ref{A.21}, i.e. $p\in S^m_1(\mathbb{T})$ real and elliptic, with $m>0$. The operator $P:=\mathfrak{Op}(p)$ has a lower semibounded self-adjoint realisation in $L^2(\mathcal X)$ with domain $\mathcal{H}^m(\mathcal X)$, as well as a lower semibounded self-adjoint realisation $P_\Gamma$ in $L^2(\mathbb{T})$ with domain $\mathcal{K}_{m,0}$.
\begin{lemma}\label{L.3.1}
There exist $N\in\mathbb{N}^*$, $C>0$ and a linearly independent family $\{\phi_1,\ldots,\phi_N\}\subset\mathscr{S}(\mathbb{T})$ such that the following inequality is true:
\begin{equation}\label{3.1}
\left(P_\Gamma u,u\right)_{L^2(\mathbb{T})}\ \geq\ C^{-1}\|u\|^2_{\mathcal{K}_{m/2,0}}\ -\ C\sum\limits_{j=1}^N\left|\Big(u,\phi_j\Big)_{L^2(\mathbb{T})}\right|^2,\quad\forall u\in\mathcal{K}_{m,0}.
\end{equation}
\end{lemma}
\begin{proof}
The manifold $\mathbb{T}$ being compact and without boundary, $\mathcal{K}_{m,0}$ is compactly embedded in $L^2(\mathbb{T})$ and the operator $P_\Gamma$ has compact resolvent. Let us fix some $\lambda>0$ and let us denote by $E_\lambda$ the spectral projection of $P_\Gamma$ for the semiaxis $(-\infty,\lambda]$. We choose an orthonormal basis $\{\phi_1,\ldots,\phi_N\}$ of the subspace $\mathfrak{R}an(E_\lambda)$ of $L^2(\mathbb{T})$ consisting of eigenfunctions of $P_\Gamma$, say $P_\Gamma\phi_j=\lambda_j\phi_j$ with $\lambda_j\leq\lambda$ (such a choice is possible because $P_\Gamma$ has compact resolvent, and the $\phi_j$ belong to $\mathscr{S}(\mathbb{T})$ by elliptic regularity). Then $\mathfrak{R}an({\rm{1}\hspace{-3pt}\mathbf{l}}-E_\lambda)$ is the orthogonal complement in $L^2(\mathbb{T})$ of the space $\mathcal{S}p\{\phi_1,\ldots,\phi_N\}$ generated by $\{\phi_1,\ldots,\phi_N\}$; moreover, for any $v\in\mathcal{D}(P_\Gamma)\cap\mathfrak{R}an({\rm{1}\hspace{-3pt}\mathbf{l}}-E_\lambda)$, one has that
$$
\left(P_\Gamma v,v\right)_{L^2(\mathbb{T})}\ \geq\ \lambda\|v\|^2_{L^2(\mathbb{T})}.
$$
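For completeness, this is simply the spectral theorem applied to the lower semibounded operator $P_\Gamma$: denoting by $\{E_t\}_{t\in\mathbb{R}}$ its spectral family, for such $v$ the measure $d\big(E_tv,v\big)_{L^2(\mathbb{T})}$ is supported in $(\lambda,+\infty)$ and
$$
\left(P_\Gamma v,v\right)_{L^2(\mathbb{T})}\ =\ \int_{(\lambda,+\infty)}t\,d\big(E_tv,v\big)_{L^2(\mathbb{T})}\ \geq\ \lambda\int_{(\lambda,+\infty)}d\big(E_tv,v\big)_{L^2(\mathbb{T})}\ =\ \lambda\|v\|^2_{L^2(\mathbb{T})}.
$$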
In conclusion:
\begin{equation}\label{3.2}
\left(P_\Gamma v,v\right)_{L^2(\mathbb{T})}\ \geq\ \lambda\|v\|^2_{L^2(\mathbb{T})},\quad\forall v\in\mathcal{K}_{m,0}\cap\left[\mathcal{S}p\{\phi_1,\ldots,\phi_N\}\right]^\bot.
\end{equation}
If $u\in\mathcal{K}_{m,0}$, then $v:=u-\sum\limits_{j=1}^N(u,\phi_j)_{L^2(\mathbb{T})}\phi_j$ belongs to the subset of vectors verifying \eqref{3.2} (indeed, $(v,\phi_k)_{L^2(\mathbb{T})}=(u,\phi_k)_{L^2(\mathbb{T})}-(u,\phi_k)_{L^2(\mathbb{T})}=0$ for every $k$, by the orthonormality of the family $\{\phi_j\}_{j=1}^N$) and thus we have that
\begin{equation}\label{3.3}
\left(P_\Gamma v,v\right)_{L^2(\mathbb{T})}\ \geq\ \lambda\left\|u-\sum\limits_{j=1}^N(u,\phi_j)_{L^2(\mathbb{T})}\phi_j\right\|^2_{L^2(\mathbb{T})}\ =\ \lambda\left(\|u\|^2_{L^2(\mathbb{T})}-\sum\limits_{j=1}^N\left|(u,\phi_j)_{L^2(\mathbb{T})}\right|^2\right).
\end{equation}
On the other hand, since $P_\Gamma\phi_j=\lambda_j\phi_j$ for any $1\leq j\leq N$, we have that
$$
\left(P_\Gamma v,v\right)_{L^2(\mathbb{T})}=\left(P_\Gamma u-\sum\limits_{j=1}^N(u,\phi_j)_{L^2(\mathbb{T})}P_\Gamma\phi_j\ ,\ u-\sum\limits_{k=1}^N(u,\phi_k)_{L^2(\mathbb{T})}\phi_k\right)_{L^2(\mathbb{T})}=
$$
$$
=\left(P_\Gamma u,u\right)_{L^2(\mathbb{T})}\ -\sum\limits_{k=1}^N\overline{(u,\phi_k)}_{L^2(\mathbb{T})}(u,P_\Gamma\phi_k)_{L^2(\mathbb{T})}-\sum\limits_{j=1}^N(u,\phi_j)_{L^2(\mathbb{T})}\left(P_\Gamma\phi_j,u\right)_{L^2(\mathbb{T})}+
$$
$$
+\sum\limits_{j,k=1}^N(u,\phi_j)_{L^2(\mathbb{T})}\overline{(u,\phi_k)}_{L^2(\mathbb{T})}\left(P_\Gamma\phi_j,\phi_k\right)_{L^2(\mathbb{T})}=
$$
$$
=\left(P_\Gamma u,u\right)_{L^2(\mathbb{T})}\ -\ \sum\limits_{j=1}^N\lambda_j\left|\left(u,\phi_j\right)_{L^2(\mathbb{T})}\right|^2.
$$
Comparing this identity with \eqref{3.3} we conclude that
$$
\left(P_\Gamma u,u\right)_{L^2(\mathbb{T})}\ \geq\ \lambda\|u\|^2_{L^2(\mathbb{T})}\ -\ \sum\limits_{j=1}^N(\lambda-\lambda_j)\left|\left(u,\phi_j\right)_{L^2(\mathbb{T})}\right|^2.
$$
Finally, assuming as we may that $\lambda>0$ and dividing by $\lambda$, we obtain that
\begin{equation}\label{3.4}
-\|u\|^2_{L^2(\mathbb{T})}\ \geq\ -\frac{1}{\lambda}\left(P_\Gamma u,u\right)_{L^2(\mathbb{T})}\ -\ \sum\limits_{j=1}^N\left(1-\frac{\lambda_j}{\lambda}\right)\left|\left(u,\phi_j\right)_{L^2(\mathbb{T})}\right|^2,\quad\forall u\in\mathcal{K}_{m,0}.
\end{equation}
In order to prove \eqref{3.1} we combine \eqref{3.4} with the G{\aa}rding inequality \eqref{A.36}, which provides a constant $C_0>0$ such that
$$
\left(P_\Gamma u,u\right)_{L^2(\mathbb{T})}\ \geq\ C_0^{-1}\|u\|^2_{\mathcal{K}_{m/2,0}}\ -\ C_0\|u\|^2_{L^2(\mathbb{T})},\quad\forall u\in\mathcal{K}_{m,0}.
$$
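For completeness, let us spell out this last step. Inserting \eqref{3.4}, multiplied by $C_0$, into the previous inequality we obtain
$$
\Big(1+\frac{C_0}{\lambda}\Big)\left(P_\Gamma u,u\right)_{L^2(\mathbb{T})}\ \geq\ C_0^{-1}\|u\|^2_{\mathcal{K}_{m/2,0}}\ -\ C_0\sum\limits_{j=1}^N\Big(1-\frac{\lambda_j}{\lambda}\Big)\left|\left(u,\phi_j\right)_{L^2(\mathbb{T})}\right|^2,\qquad\forall u\in\mathcal{K}_{m,0},
$$
which is precisely \eqref{3.1} with a suitable constant $C>0$.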
\end{proof}
\begin{remark}\label{R.3.2}
From Remark \ref{R.A.22} we know that for any $\xi\in\mathcal X^*$ the operator $P_{\Gamma,\xi}$ is self-adjoint and lower semibounded in $L^2(\mathbb{T})$ on the domain $\mathcal{K}_{m,\xi}$. If we identify $\mathcal{K}_{m,\xi}$ with $\mathcal{H}^m_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$ endowed with the norm $\|<D+\xi>^mu\|_{L^2(E)}$, we deduce that the operator $P_\xi$ is self-adjoint in $L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$ with the domain $\mathcal{K}_{m,\xi}$. Noticing that $P=\sigma_\xi P_\xi\sigma_{-\xi}$ and $\sigma_\xi:\mathcal{K}_{s,\xi}\rightarrow\mathscr{F}_{s,\xi}$ is a unitary operator for any $s\in\mathbb{R}$ and any $\xi\in\mathcal X^*$, it follows that $P$ generates in $\mathscr{F}_{0,\xi}$ a self-adjoint lower semibounded operator on the domain $\mathscr{F}_{m,\xi}$.
\end{remark}
\begin{lemma}\label{L.3.3}
Suppose we are given a compact interval $I\subset\mathbb{R}$; then there exist a constant $C>0$, an integer $N\in\mathbb{N}$ and a family of functions $\{\psi_1,\ldots,\psi_N\}$ with the following properties:
\begin{enumerate}
\item[a)] $\psi_j\in C^\infty(\Xi)$.
\item[b)] $\psi_j(y,\eta+\gamma^*)\ =\ \psi_j(y,\eta),\quad\forall(y,\eta)\in\Xi,\ \forall\gamma^*\in\Gamma^*,\ 1\leq j\leq N$.
\item[c)] $\{\psi_j(.,\xi)\}_{1\leq j\leq N}$ is an orthonormal system in $\mathscr{F}_{0,\xi}$ for any $\xi\in\mathcal X^*$. We denote by $\mathcal{T}_\xi$ the complex linear space generated by the family $\{\psi_j(.,\xi)\}_{1\leq j\leq N}$ in $\mathscr{F}_{0,\xi}$ and by $\mathcal{T}^\bot_\xi$ its orthogonal complement in the same Hilbert space.
\item[d)] The following inequality is true:
\begin{equation}\label{3.5}
\left(\big(P-\lambda\big)u,u\right)_{\mathscr{F}_{0,\xi}}\ \geq\ C\|u\|^2_{\mathscr{F}_{0,\xi}},\quad\forall u\in{\mathscr{F}_{m,\xi}}\cap\mathcal{T}^\bot_\xi,\ \forall\xi\in\mathcal X^*,\ \forall\lambda\in I.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
It is evidently enough to prove \eqref{3.5} for $\lambda=\lambda_0:=\sup I$. We apply Lemma \ref{L.3.1} to the operator $P_{\Gamma,\xi_0}-\lambda_0$ with $\xi_0\in\mathcal X^*$ to be considered fixed. We deduce that there exists $C_0>0$, $N_0\in\mathbb{N}^*$ and a family of functions $\{\widetilde{\psi}_1(\cdot,\xi_0),\ldots,\widetilde{\psi}_{N_0}(\cdot,\xi_0)\}$ from $\mathscr{S}(\mathbb{T})$ such that the following inequality is true:
\begin{equation}\label{3.6}
\left(\big(P_{\Gamma,\xi_0}-\lambda_0\big)v,v\right)_{L^2(\mathbb{T})}\ \geq\ C_0^{-1}\|v\|^2_{\mathcal{K}_{m/2,\xi_0}}\ -\ C_0\sum\limits_{j=1}^{N_0}\left|\big(v,\widetilde{\psi}_j(.,\xi_0)\big)_{L^2(\mathbb{T})}\right|^2,\quad\forall v\in\mathcal{K}_{m,\xi_0}.
\end{equation}
Taking into account the result in Example \ref{E.A.20}, we notice that the map $\mathcal X^*\ni\xi\mapsto P_{\Gamma,\xi}\in\mathbb{B}\big(\mathcal{K}_{s+m,\xi_0};\mathcal{K}_{s,\xi_0}\big)$ is continuous for any $s\in\mathbb{R}$ (it is even smooth); it follows that
$$
\left|\left(\big(P_{\Gamma,\xi_0}-P_{\Gamma,\xi}\big)v,v\right)_{L^2(\mathbb{T})}\right|\ \leq\ \left\|P_{\Gamma,\xi_0}-P_{\Gamma,\xi}\right\|_{\mathbb{B}(\mathcal{K}_{m/2,\xi_0};\mathcal{K}_{-m/2,\xi_0})}\,\|v\|^2_{\mathcal{K}_{m/2,\xi_0}},\quad\forall v\in\mathcal{K}_{m,\xi_0}.
$$
We conclude that for some smaller constant $C_0$, the inequality \eqref{3.6} is true with $P_{\Gamma,\xi}$ in place of $P_{\Gamma,\xi_0}$ on the left side, for $\xi\in V_0$ some small neighborhood of $\xi_0$ in $\mathcal X^*$.
Let us define now the family of functions $\{\psi_1,\ldots,\psi_{N}\}$. Let us first notice that
$$
\psi^\prime_j(.,\xi_0):=e^{i<\xi_0,.>}\widetilde{\psi}_j(.,\xi_0)\in C^\infty(\mathcal X)\cap\mathscr{F}_{0,\xi_0}.
$$
Then let us also notice that for any $\delta>0$ we can find functions $\overset{\circ}{\psi}_j\in C^\infty_0(\overset{\circ}{E})$, with $\overset{\circ}{E}$ the interior set of $E$, such that
$$
\left\|\psi^\prime_j(.,\xi_0)-\overset{\circ}{\psi}_j\right\|_{L^2(E)}\ \leq\ \delta,\quad 1\leq j\leq N_0.
$$
Then we define
$$
\psi_j(x,\xi_0)\ :=\ \sum\limits_{\gamma\in\Gamma}\overset{\circ}{\psi}_j(x-\gamma)e^{i<\xi_0,\gamma>},\quad 1\leq j\leq N_0
$$
and we notice that $\psi_j(.,\xi_0)\in C^\infty(\mathcal X)\cap\mathscr{F}_{0,\xi_0}$ and $\psi_j(.,\xi_0)=\overset{\circ}{\psi}_j$ on $\overset{\circ}{E}$. Thus we can finally define
\begin{equation}\label{3.7}
\psi_j(x,\xi)\ :=\ \sum\limits_{\gamma\in\Gamma}\overset{\circ}{\psi}_j(x-\gamma)e^{i<\xi,\gamma>},\quad 1\leq j\leq N_0,\ \forall(x,\xi)\in\Xi.
\end{equation}
These functions evidently verify the properties (a) and (b) in the statement of the Lemma. It is also clear that for any $\xi\in\mathcal X^*$ we have that $\psi_j(.,\xi)\in\mathscr{F}_{0,\xi}$. Moreover we have that $\psi_j(.,\xi)=\overset{\circ}{\psi}_j=\psi_j(.,\xi_0)$ on $\overset{\circ}{E}$, so that
$$
\left\|\psi^\prime_j(.,\xi_0)-\psi_j(.,\xi)\right\|_{L^2(E)}\ \leq\ \delta,\quad\forall\xi\in\mathcal X^*,\ 1\leq j\leq N_0.
$$
From this estimation we may conclude that $\forall\kappa>0$ we can reduce if necessary the neighborhood $V_0$ fixed above, such that $\forall\xi\in V_0$ and $\forall v\in\mathcal{K}_{m,\xi_0}$ we have that for $1\leq j\leq N_0$:
$$
\left|\big(v,\widetilde{\psi}_j(.,\xi_0)\big)_{L^2(\mathbb{T})}\right|\ =\ \left|\int_Ev(y)\overline{\widetilde{\psi}_j(y,\xi_0)}dy\right|\ =\ \left|\int_Ee^{i<\xi_0,y>}v(y)\overline{\psi^\prime_j(y,\xi_0)}dy\right|\ \leq
$$
$$
\leq\ \left|\int_Ee^{i<\xi_0,y>}v(y)\overline{\psi_j(y,\xi)}dy\right|\,+\,\left|\int_Ee^{i<\xi_0,y>}v(y)\overline{\big[\psi^\prime_j(y,\xi_0)-\psi_j(y,\xi)\big]}dy\right|\ \leq
$$
$$
\leq\ \left|\int_Ee^{i<\xi,y>}v(y)\overline{\psi_j(y,\xi)}dy\right|\,+\,\left|\int_E\big[e^{i<\xi_0,y>}-e^{i<\xi,y>}\big]v(y)\overline{\psi_j(y,\xi)}dy\right|\,+
$$
$$
+\,\left|\int_Ee^{i<\xi_0,y>}v(y)\overline{\big[\psi^\prime_j(y,\xi_0)-\psi_j(y,\xi)\big]}dy\right|\ \leq
$$
$$
\leq\ \left|\int_Ee^{i<\xi,y>}v(y)\overline{\psi_j(y,\xi)}dy\right|\,+\,\kappa\|v\|_{L^2(E)}.
$$
From this estimation we deduce that, by reducing if necessary the constant $C_0>0$ we can replace in the right hand side of \eqref{3.6} the scalar products $\big(v,\widetilde{\psi}_j(.,\xi_0)\big)_{L^2(\mathbb{T})}$ with the scalar products $\big(e^{i<\xi,.>}v,\psi_j(.,\xi)\big)_{\mathscr{F}_{0,\xi}}$ for $\xi\in V_0$.
Let us consider a vector $u\in\mathscr{F}_{m,\xi}$ and associate to it the vector $v:=e^{-i<\xi,.>}u$ that belongs to $\mathcal{K}_{m,\xi}$ and taking into account the equality
$$
e^{i<\xi,.>}P_\xi e^{-i<\xi,.>}u\ =\ Pu
$$
we deduce from \eqref{3.6} and the above arguments that we have
\begin{equation}\label{3.8}
\left(\big(P-\lambda_0\big)u,u\right)_{\mathscr{F}_{0,\xi}}\ \geq\ C_0^{-1}\|u\|^2_{\mathscr{F}_{0,\xi}}\,-\,C_0\sum\limits_{j=1}^{N_0}\left|\left(u,\psi_j(.,\xi)\right)_{\mathscr{F}_{0,\xi}}\right|^2,\quad\forall u\in\mathscr{F}_{m,\xi},\ \forall\xi\in V_0.
\end{equation}
Taking into account that $\mathscr{F}_{s,\xi+\gamma^*}=\mathscr{F}_{s,\xi}$ for any $s\in\mathbb{R}$, any $\xi\in\mathcal X^*$ and any $\gamma^*\in\Gamma^*$ and the fact that the functions $\psi_j(x,.)$ are $\Gamma^*$-periodic (for $1\leq j\leq N_0$), we conclude that it is enough to prove \eqref{3.5} for $\xi\in\mathbb{T}_*$. Being compact, $\mathbb{T}_*$ can be covered by a finite number of neighborhoods of type $V_0$ (as defined in the argument above). In this way, repeating the procedure explained above we can find a finite family of functions $\{\psi_1,\ldots,\psi_{\tilde{N}}\}$ (with a possibly larger $\tilde{N}$) that satisfies the properties (a), (b) and (d) from the statement of the Lemma. We select now out of this family a maximal linearly independent subfamily of $N$ functions $\{\psi_1,\ldots,\psi_N\}$ (it can be characterized by the property that the functions $\{\overset{\circ}{\psi}_1,\ldots,\overset{\circ}{\psi}_N\}$ form a linearly independent system in $C^\infty_0(\overset{\circ}{E})$). Let us notice that this last step (the choice of the maximal linearly independent subfamily) does not change the subspace $\mathcal{T}_\xi$ that they generate. Finally we may use the Gram-Schmidt procedure in order to obtain a family of $N$ orthonormal functions from $\mathscr{F}_{0,\xi}$.
\end{proof}
\begin{lemma}\label{L.3.4}
Under the assumptions of Lemma \ref{L.3.3} we denote by $\Pi_\xi$ the orthogonal projection on $\mathcal{T}_\xi$ in the Hilbert space $\mathscr{F}_{0,\xi}$ and by $S(\xi,\lambda)$ the unbounded operator in $\mathcal{T}^\bot_\xi$ defined on the domain $\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi$ by the action of $\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)\big(P-\lambda\big)$. Then the following statements are true:
\begin{enumerate}
\item[a)] The operator $S(\xi,\lambda)$ is self-adjoint and invertible and $S(\xi,\lambda)^{-1}\in\mathbb{B}\big(\mathcal{T}^\bot_\xi;\mathcal{T}^\bot_\xi\big)$ uniformly with respect to $(\xi,\lambda)\in\mathbb{T}^{d*}\times I$.
\item[b)] The operator $S(\xi,\lambda)^{-1}$ also belongs to $\mathbb{B}\big(\mathcal{T}^\bot_\xi;\mathscr{F}_{m,\xi}\big)$ uniformly with respect to $(\xi,\lambda)\in\mathbb{T}^{d*}\times I$.
\end{enumerate}
\end{lemma}
\begin{proof}
The operator $S(\xi,\lambda)$ is densely defined by definition and is symmetric on its domain because for any couple $(u,v)\in\big[\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi\big]^2$ we can write that:
$$
\left(S(\xi,\lambda)u,v\right)_{\mathscr{F}_{0,\xi}}\ =\ \left(\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)\big(P-\lambda\big)u,v\right)_{\mathscr{F}_{0,\xi}}\ =\ \left(u,\big(P-\lambda\big)\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)v\right)_{\mathscr{F}_{0,\xi}}\ =\
$$
$$
=\ \left(\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)u,\big(P-\lambda\big)v\right)_{\mathscr{F}_{0,\xi}}\ =\ \left(u,S(\xi,\lambda)v\right)_{\mathscr{F}_{0,\xi}}.
$$
In order to prove now the self-adjointness of the operator $S(\xi,\lambda)$ let us fix some vector $v\in\mathcal{D}\big(S(\xi,\lambda)^*\big)$; thus we deduce first that $v\in\mathcal{T}^\bot_\xi$ and secondly that there exists a vector $f\in\mathcal{T}^\bot_\xi$ such that $\big(S(\xi,\lambda)u,v\big)_{\mathscr{F}_{0,\xi}}=\big(u,f\big)_{\mathscr{F}_{0,\xi}}$ for any $u\in\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi$. For any vector $w\in\mathscr{F}_{m,\xi}$ we can write that
$w=\Pi_\xi w\,+\,\big({{\rm{1}\hspace{-3pt}\mathbf{l}}}-\Pi_\xi\big)w$ and
$$
\Pi_\xi w\,=\ \sum\limits_{j=1}^{N}\big(w,\psi_j(.,\xi)\big)_{\mathscr{F}_{0,\xi}}\psi_j(.,\xi)\in\mathscr{F}_{m,\xi}\cap\mathcal{T}_\xi,\qquad\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)w\in\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi.
$$
Thus we have that
$$
\left(\big(P-\lambda\big)\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)w,v\right)_{\mathscr{F}_{0,\xi}}\,=\,\left(\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)\big(P-\lambda\big)\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)w,v\right)_{\mathscr{F}_{0,\xi}}\,=\,\left(S(\xi,\lambda)\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)w,v\right)_{\mathscr{F}_{0,\xi}}\,=
$$
$$
=\,\left(\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)w,f\right)_{\mathscr{F}_{0,\xi}}\,=\,\left(w,f\right)_{\mathscr{F}_{0,\xi}},
$$
and
$$
\left(\big(P-\lambda\big)\Pi_\xi w,v\right)_{\mathscr{F}_{0,\xi}}\,=\,\sum\limits_{j=1}^{N}\big(w,\psi_j(.,\xi)\big)_{\mathscr{F}_{0,\xi}}\left(\big(P-\lambda\big)\psi_j(.,\xi),v\right)_{\mathscr{F}_{0,\xi}}\,=\,\left(w,f_0(.,\xi)\right)_{\mathscr{F}_{0,\xi}}
$$
where $f_0(.,\xi):=\sum\limits_{j=1}^{N}\left(\big(P-\lambda\big)\psi_j(.,\xi),v\right)_{\mathscr{F}_{0,\xi}}\psi_j(.,\xi)\in\mathcal{T}_\xi$. In conclusion we have that for any $w\in\mathscr{F}_{m,\xi}$:
$$
\left(\big(P-\lambda\big)w,v\right)_{\mathscr{F}_{0,\xi}}\ =\ \left(w,f+f_0(.,\xi)\right)_{\mathscr{F}_{0,\xi}},\qquad f+f_0(.,\xi)\in\mathscr{F}_{0,\xi}.
$$
Recalling that $P-\lambda$ is self-adjoint in $\mathscr{F}_{0,\xi}$ with domain $\mathscr{F}_{m,\xi}$ we conclude that $v\in\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi=\mathcal{D}\big(S(\xi,\lambda)\big)$.
The invertibility of $S(\xi,\lambda)$ follows from \eqref{3.5}: noticing that
\begin{equation}\label{3.9}
\left(S(\xi,\lambda)u,u\right)_{\mathscr{F}_{0,\xi}}\ \geq\ C\|u\|^2_{\mathscr{F}_{0,\xi}},\qquad\forall u\in\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi,\ \forall\xi\in\mathbb{T}_*,\ \forall\lambda\in I,
\end{equation}
we deduce that the self-adjoint operator $S(\xi,\lambda)$ has its spectrum contained in $[C,+\infty)$; it is therefore invertible and $S(\xi,\lambda)^{-1}\in\mathbb{B}\big(\mathcal{T}^\bot_\xi;\mathcal{T}^\bot_\xi\big)$ with norm at most $C^{-1}$, uniformly with respect to $(\xi,\lambda)\in\mathbb{T}_*\times I$; this proves point (a).
b) For any fixed $\xi_0\in\mathbb{T}_*$ and $\lambda_0\in I$ we know that $P_{\Gamma,\xi_0}-\lambda_0$ is self-adjoint in $\mathcal{K}_{0,\xi_0}$ on the domain $\mathcal{K}_{m,\xi_0}$ and that the Hilbert norm on $\mathcal{K}_{m,\xi_0}$ is equivalent with the graph-norm of $P_{\Gamma,\xi_0}-\lambda_0$. Thus there exists a constant $C_0>0$ such that
\begin{equation}\label{3.10}
\|v\|_{\mathcal{K}_{m,\xi_0}}\ \leq\ C_0\left(\|v\|_{\mathcal{K}_{0,\xi_0}}\,+\,\left\|\big(P_{\Gamma,\xi_0}-\lambda_0\big)v\right\|_{\mathcal{K}_{0,\xi_0}}\right),\qquad\forall v\in\mathcal{K}_{m,\xi_0}.
\end{equation}
Taking into account Example \ref{E.A.20} we know that the map $\mathcal X^*\ni\xi\mapsto P_{\Gamma,\xi}\in\mathbb{B}\big(\mathcal{K}_{m,0};\mathcal{K}_{0,0}\big)$ is of class $C^\infty$. Noticing that
$$
\left\|\big(P_{\Gamma,\xi_0}-\lambda_0\big)v\,-\,\big(P_{\Gamma,\xi}-\lambda\big)v\right\|_{\mathcal{K}_{0,\xi_0}}\ \leq\ C\left\|P_{\Gamma,\xi_0}\,-\,P_{\Gamma,\xi}\right\|_{\mathbb{B}(\mathcal{K}_{m,0};\mathcal{K}_{0,0})}\|v\|_{\mathcal{K}_{m,\xi_0}}\,+\,|\lambda-\lambda_0|\,\|v\|_{\mathcal{K}_{0,\xi_0}},
$$
for any $(\xi,\xi_0)\in\mathbb{T}_*\times\mathbb{T}_*$, any $(\lambda,\lambda_0)\in I\times I$ and any $v\in\mathcal{K}_{m,\xi_0}$, we deduce that there exist a constant $C^\prime_0>0$ and a neighborhood $V_0$ of $\xi_0\in\mathbb{T}_*$ such that
\begin{equation}\label{3.11}
\|v\|_{\mathcal{K}_{m,\xi}}\ \leq\ C^\prime_0\left(\|v\|_{\mathcal{K}_{0,\xi}}\,+\,\left\|\big(P_{\Gamma,\xi}-\lambda\big)v\right\|_{\mathcal{K}_{0,\xi}}\right),\qquad\forall(\xi,\lambda)\in V_0\times I,\ \forall v\in\mathcal{K}_{m,\xi}.
\end{equation}
The manifold $\mathbb{T}_*$ being compact we can find a finite cover with neighborhoods of type $V_0$ as above and we conclude that \eqref{3.11} is true for any $\xi\in\mathbb{T}_*$ with a suitable constant $C^\prime_0$.
Considering now some vector $u\in\mathscr{F}_{m,\xi}$ and denoting by $v:=\sigma_{-\xi}u\in\mathcal{K}_{m,\xi}$ we deduce from \eqref{3.11} that
\begin{equation}\label{3.12}
\|u\|_{\mathscr{F}_{m,\xi}}\ \leq\ C^\prime_0\left(\|u\|_{\mathscr{F}_{0,\xi}}\,+\,\left\|\big(P-\lambda\big)u\right\|_{\mathscr{F}_{0,\xi}}\right),\qquad\forall(\xi,\lambda)\in\mathbb{T}_*\times I,\ \forall u\in\mathscr{F}_{m,\xi}.
\end{equation}
Considering now $u\in\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi=\mathcal{D}\big(S(\xi,\lambda)\big)$ we can write that
$$
\big(P-\lambda\big)u\,=\,S(\xi,\lambda)u\,+\,\Pi_\xi\big(P-\lambda\big)u\,=\,S(\xi,\lambda)u\,+\,\sum\limits_{j=1}^{N}\left(u,\big(P-\lambda\big)\psi_j(.,\xi)\right)_{\mathscr{F}_{0,\xi}}\psi_j(.,\xi),
$$
and we know that the norm in $\mathscr{F}_{0,\xi}$ of the second term on the right (the finite sum over $j$) can be bounded by a constant that does not depend on $\xi\in\mathbb{T}^{d*}$ multiplied by $\|u\|_{\mathscr{F}_{0,\xi}}$. Using \eqref{3.12} we deduce that there exists a constant $C>0$ such that
$$
\|u\|_{\mathscr{F}_{m,\xi}}\ \leq\ C\left(\|u\|_{\mathscr{F}_{0,\xi}}\,+\,\left\|S(\xi,\lambda)u\right\|_{\mathscr{F}_{0,\xi}}\right)\qquad\forall(\xi,\lambda)\in\mathbb{T}_*\times I,\ \forall u\in\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi.
$$
This inequality clearly implies point (b) of the Lemma.
\end{proof}
Let us define now the following operators associated to the family of functions $\{\psi_j\}_{1\leq j\leq N}$ introduced above. For any $\xi\in\mathbb{T}_*$ we define:
\begin{equation}\label{3.13}
\forall u\in\mathscr{F}_{0,\xi},\qquad\widetilde{R}_+(\xi)u\ :=\ \left\{\big(u,\psi_j(.,\xi)\big)_{\mathscr{F}_{0,\xi}}\right\}_{1\leq j\leq N}\in\mathbb{C}^N;
\end{equation}
\begin{equation}\label{3.14}
\forall\underline{u}:=\{\underline{u}_1,\ldots,\underline{u}_N\}\in\mathbb{C}^N,\qquad\widetilde{R}_-(\xi)\underline{u}:=\sum\limits_{j=1}^{N}\underline{u}_j\psi_j(.,\xi)\in\mathscr{F}_{0,\xi}.
\end{equation}
Evidently we have that $\forall\xi\in\mathbb{T}_*$, $\widetilde{R}_+(\xi)\in\mathbb{B}\big(\mathscr{F}_{0,\xi};\mathbb{C}^N\big)$ and $\widetilde{R}_-(\xi)\in\mathbb{B}\big(\mathbb{C}^N;\mathscr{F}_{0,\xi}\big)$.
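Let us also note the elementary adjoint relation $\widetilde{R}_+(\xi)^*=\widetilde{R}_-(\xi)$, used in the proof of Lemma \ref{L.3.5} below; it follows from the one-line computation
$$
\big(\widetilde{R}_+(\xi)u,\underline{v}\big)_{\mathbb{C}^N}\ =\ \sum\limits_{j=1}^{N}\big(u,\psi_j(.,\xi)\big)_{\mathscr{F}_{0,\xi}}\overline{\underline{v}_j}\ =\ \Big(u,\sum\limits_{j=1}^{N}\underline{v}_j\psi_j(.,\xi)\Big)_{\mathscr{F}_{0,\xi}}\ =\ \big(u,\widetilde{R}_-(\xi)\underline{v}\big)_{\mathscr{F}_{0,\xi}},
$$
valid for any $u\in\mathscr{F}_{0,\xi}$ and any $\underline{v}\in\mathbb{C}^N$.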
We can define now the Grushin type operator associated to our operator $P$:
\begin{equation}\label{3.15}
Q(\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
P-\lambda&\widetilde{R}_-(\xi)\\
\widetilde{R}_+(\xi)&0
\end{array}
\right),
\end{equation}
that due to our previous results belongs to $\mathbb{B}\big(\mathscr{F}_{m,\xi}\times\mathbb{C}^N;\mathscr{F}_{0,\xi}\times\mathbb{C}^N\big)$ uniformly with respect to $(\xi,\lambda)\in\mathbb{T}_*\times I$.
\begin{lemma}\label{L.3.5}
For any values of $(\xi,\lambda)\in\mathbb{T}_*\times I$ the operator $Q(\xi,\lambda)$ acting as an unbounded linear operator in the Hilbert space $\mathscr{F}_{0,\xi}\times\mathbb{C}^N$ is self-adjoint on the domain $\mathscr{F}_{m,\xi}\times\mathbb{C}^N$.
\end{lemma}
\begin{proof}
We know that $P$ is self-adjoint in $\mathscr{F}_{0,\xi}$ with domain $\mathscr{F}_{m,\xi}$ and it is easy to see that $\widetilde{R}_+(\xi)^*=\widetilde{R}_-(\xi)$ so that we conclude that $Q(\xi,\lambda)$ is symmetric on $\mathscr{F}_{m,\xi}\times\mathbb{C}^N$.
Let us consider now a pair $(v,\underline{v})\in\mathcal{D}\big(Q(\xi,\lambda)^*\big)$; this means that $v\in\mathscr{F}_{0,\xi}$, $\underline{v}\in\mathbb{C}^N$ and there exists a pair $(f,\underline{f})\in\mathscr{F}_{0,\xi}\times\mathbb{C}^N$ such that we have
$$
\left(Q(\xi,\lambda)\left(
\begin{array}{c}
u\\
\underline{u}
\end{array}
\right),\left(
\begin{array}{c}
v\\
\underline{v}
\end{array}
\right)\right)_{\mathscr{F}_{0,\xi}\times\mathbb{C}^N}\ =\
\left(\left(
\begin{array}{c}
u\\
\underline{u}
\end{array}
\right),\left(
\begin{array}{c}
f\\
\underline{f}
\end{array}
\right)\right)_{\mathscr{F}_{0,\xi}\times\mathbb{C}^N},\qquad\forall(u,\underline{u})\in\mathscr{F}_{m,\xi}\times\mathbb{C}^N.
$$
Considering the case $\underline{u}=0$ we get that
$$
\left(\big(P-\lambda\big)u,v\right)_{\mathscr{F}_{0,\xi}}\ =\ \big(u,f\big)_{\mathscr{F}_{0,\xi}}\,-\,\big(\widetilde{R}_+(\xi)u,\underline{v}\big)_{\mathbb{C}^N}\ =\ \big(u,f-\widetilde{R}_-(\xi)\underline{v}\big)_{\mathscr{F}_{0,\xi}},\qquad\forall u\in\mathscr{F}_{m,\xi}.
$$
Taking into account the self-adjointness of the operator $P-\lambda$ in $\mathscr{F}_{0,\xi}$ on the domain $\mathscr{F}_{m,\xi}$ we may deduce that in fact $v$ belongs to $\mathscr{F}_{m,\xi}$ and thus $(v,\underline{v})\in\mathcal{D}\big(Q(\xi,\lambda)\big)$.
\end{proof}
\begin{lemma}\label{L.3.6}
The operator $Q(\xi,\lambda)$ defined in \eqref{3.15} is bijective and has an inverse $Q(\xi,\lambda)^{-1}\in\mathbb{B}\big(\mathscr{F}_{0,\xi}\times\mathbb{C}^N;\mathscr{F}_{m,\xi}\times\mathbb{C}^N\big)$ uniformly with respect to $(\xi,\lambda)\in\mathbb{T}_*\times I$.
\end{lemma}
\begin{proof}
Let us first prove the injectivity. Let us choose $u\in\mathscr{F}_{m,\xi}$ and $\underline{u}\in\mathbb{C}^N$ verifying the following equations:
\begin{equation}\label{3.16}
\left\{
\begin{array}{l}
\big(P-\lambda\big)u\,+\,\widetilde{R}_-(\xi)\underline{u}\ =\ 0,\\
\\
\widetilde{R}_+(\xi)u\ =\ 0.
\end{array}\right.
\end{equation}
The second equality in \eqref{3.16} implies that $u\in\mathcal{T}^\bot_\xi$. As by definition we have that $\widetilde{R}_-(\xi)\underline{u}\in\mathcal{T}_\xi$, the first equality in \eqref{3.16} implies that $\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)\big(P-\lambda\big)u=0$, or equivalently that $S(\xi,\lambda)u=0$. Taking now into account Lemma \ref{L.3.4} it follows that $u=0$. Now, the first equality in \eqref{3.16} implies that we also have $\widetilde{R}_-(\xi)\underline{u}=0$; but the linear independence of the system of functions $\{\psi_j(.,\xi)\}_{1\leq j\leq N}$ implies that the operator $\widetilde{R}_-(\xi)$ is injective and thus we deduce that we also have $\underline{u}=0$.
Let us consider now the surjectivity of the operator $Q(\xi,\lambda)$. Thus let us choose an arbitrary pair $(v,\underline{v})\in\mathscr{F}_{0,\xi}\times\mathbb{C}^N$ and let us search for a pair $(u,\underline{u})\in\mathscr{F}_{m,\xi}\times\mathbb{C}^N$ such that the following equalities are true:
\begin{equation}\label{3.17}
\left\{
\begin{array}{l}
\big(P-\lambda\big)u\,+\,\widetilde{R}_-(\xi)\underline{u}\ =\ v,\\
\\
\widetilde{R}_+(\xi)u\ =\ \underline{v}.
\end{array}
\right.
\end{equation}
Let us denote by $u_1:=\sum\limits_{j=1}^{N}\underline{v}_j\psi_j(.,\xi)$ so that by definition we have that
\begin{equation}\label{3.18}
\widetilde{R}_+(\xi)u_1\ =\ \underline{v},\quad u_1\in\mathscr{F}_{m,\xi}\cap\mathcal{T}_\xi,
\end{equation}
\begin{equation}\label{3.19}
\|u_1\|_{\mathscr{F}_{m,\xi}}\ \leq\ C\|\underline{v}\|_{\mathbb{C}^N},\quad\forall\xi\in\mathbb{T}_*.
\end{equation}
Thus we have to search now for a pair $(u_2,\underline{u})\in\big[\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi\big]\times\mathbb{C}^N$ that should verify the equality:
\begin{equation}\label{3.20}
\big(P-\lambda\big)u_2\,+\,\widetilde{R}_-(\xi)\underline{u}\ =\ v\,-\,\big(P-\lambda\big)u_1\ \in\ \mathscr{F}_{0,\xi}.
\end{equation}
In fact, as we have by definition that $\widetilde{R}_+(\xi)u_2=0$, the relations \eqref{3.18} and \eqref{3.20} imply directly that $u:=u_1+u_2$ together with $\underline{u}$ is the solution we are looking for.
Let us project the equation \eqref{3.20} both on $\mathcal{T}_\xi$ and on its complement $\mathcal{T}^\bot_\xi$ to obtain
\begin{equation}\label{3.21}
S(\xi,\lambda)u_2\ =\ \big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)\big(v\,-\,\big(P-\lambda\big)u_1\big),
\end{equation}
\begin{equation}\label{3.22}
\widetilde{R}_-(\xi)\underline{u}\ =\ \Pi_\xi\big(v-\big(P-\lambda\big)(u_1+u_2)\big).
\end{equation}
Taking into account Lemma \ref{L.3.4}, it follows that equation \eqref{3.21} has a unique solution $u_2\in\mathscr{F}_{m,\xi}\cap\mathcal{T}^\bot_\xi$ and we have the estimation:
\begin{equation}\label{3.23}
\|u_2\|_{\mathscr{F}_{m,\xi}}\ \leq\ C\left\|\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Pi_\xi\big)\big(v-\big(P-\lambda\big)u_1\big)\right\|_{\mathscr{F}_{0,\xi}}\ \leq\ C\left(\|v\|_{\mathscr{F}_{0,\xi}}\,+\,\|u_1\|_{\mathscr{F}_{m,\xi}}\right)\ \leq C\left(\|v\|_{\mathscr{F}_{0,\xi}}\,+\,\|\underline{v}\|_{\mathbb{C}^N}\right)
\end{equation}
for any $(\xi,\lambda)\in\mathbb{T}_*\times I$.
It is very easy to see that equation \eqref{3.22} always has a unique solution $\underline{u}\in\mathbb{C}^N$ given explicitly by:
$$
\underline{u}_j\ :=\ \big(v-\big(P-\lambda\big)(u_1+u_2)\,,\,\psi_j(.,\xi)\big)_{\mathscr{F}_{0,\xi}}.
$$
From this explicit expression we easily get the following estimation:
\begin{equation}\label{3.24}
\|\underline{u}\|_{\mathbb{C}^N}\ \leq\ C\left(\|v\|_{\mathscr{F}_{0,\xi}}\,+\,\left\|\big(P-\lambda\big)(u_1+u_2)\right\|_{\mathscr{F}_{0,\xi}}\right)\ \leq\ C\left(\|v\|_{\mathscr{F}_{0,\xi}}\,+\,\|u_1\|_{\mathscr{F}_{m,\xi}}\,+\,\|u_2\|_{\mathscr{F}_{m,\xi}}\right)\ \leq
\end{equation}
$$
\leq\ C\left(\|v\|_{\mathscr{F}_{0,\xi}}\,+\,\|\underline{v}\|_{\mathbb{C}^N}\right),\qquad\forall(\xi,\lambda)\in\mathbb{T}^{d*}\times I.
$$
In conclusion we have proved the surjectivity of the operator $Q(\xi,\lambda)$ for any values of $(\xi,\lambda)\in\mathbb{T}_*\times I$ and we finish by noticing that the inequalities \eqref{3.19}, \eqref{3.23} and \eqref{3.24} imply the boundedness of the operator $Q(\xi,\lambda)^{-1}$ uniformly with respect to $(\xi,\lambda)\in\mathbb{T}_*\times I$.
\end{proof}
We define now the following family of $N$ functions:
\begin{equation}\label{3.25}
\phi_j(x,\xi)\ :=\ e^{-i<\xi,x>}\psi_j(x,\xi),\qquad\forall(x,\xi)\in\Xi,\ 1\leq j\leq N,
\end{equation}
with the family $\{\psi_j\}_{1\leq j\leq N}$ defined in Lemma \ref{L.3.3}.
\begin{lemma}\label{L.3.7}
The functions $\{\phi_j\}_{1\leq j\leq N}$ defined in \eqref{3.25} have the following properties:
\begin{enumerate}
\item[a)] $\phi_j\in C^\infty(\Xi)$;
\item[b)] $\phi_j(x+\gamma,\xi)=\phi_j(x,\xi),\qquad\forall(x,\xi)\in\Xi,\ \forall\gamma\in\Gamma$;
\item[c)] $\phi_j(x,\xi+\gamma^*)=e^{-i<\gamma^*,x>}\phi_j(x,\xi),\qquad\forall(x,\xi)\in\Xi,\ \forall\gamma^*\in\Gamma^*$;
\item[d)] For any $\alpha\in\mathbb{N}^d$ and any $s\in\mathbb{R}$ there exists a strictly positive constant $C_{\alpha,s}$ such that:
\begin{equation}\label{3.26}
\left\|\big(\partial^\alpha_\xi\phi_j\big)(.,\xi)\right\|_{\mathcal{K}_{s,\xi}}\ \leq\ C_{\alpha,s},\qquad\forall\xi\in\mathcal X^*.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
The properties (a), (b) and (c) follow easily from Lemma \ref{L.3.3} and the definition \eqref{3.25}. In order to prove property (d) we consider \eqref{A.22} and write:
\begin{equation}\label{3.27}
\left\|\big(\partial^\alpha_\xi\phi_j\big)(.,\xi)\right\|_{\mathcal{K}_{s,\xi}}^2\ \leq\ \frac{1}{|E|}\sum\limits_{\gamma^*\in\Gamma^*}<\xi+\gamma^*>^{2s}\left|\big(\widehat{\partial^\alpha_\xi\phi}_j\big)(\gamma^*,\xi)\right|^2,
\end{equation}
where we have used the notation:
\begin{equation}\label{3.28}
\big(\widehat{\partial^\alpha_\xi\phi}_j\big)(\gamma^*,\xi)\ :=\ \frac{1}{|E|}\int_Ee^{-i<\gamma^*,y>}\big(\partial^\alpha_\xi\phi_j\big)(y,\xi)\,dy.
\end{equation}
We recall once again the identity
$$
e^{-i<\gamma^*,y>}\ =\ <\gamma^*>^{-2l}\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Delta_y\big)^le^{-i<\gamma^*,y>},\qquad\forall l\in\mathbb{N},
$$
and taking into account the $\Gamma$-periodicity of the function $\big(\partial^\alpha_\xi\phi_j\big)(y,\xi)$ with respect to the variable $y\in\mathcal X$ we integrate by parts in \eqref{3.28} and deduce that $\forall\alpha\in\mathbb{N}^d$ and $\forall l\in\mathbb{N}$ there exists a constant $C_{\alpha,l}>0$ such that we have the estimation:
\begin{equation}\label{3.29}
\left|\big(\widehat{\partial^\alpha_\xi\phi}_j\big)(\gamma^*,\xi)\right|\ \leq\ C_{\alpha,l}<\gamma^*>^{-2l}\left|\frac{1}{|E|}\int_Ee^{-i<\gamma^*,y>}\Big(\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Delta_y\big)^l\partial^\alpha_\xi\phi_j\Big)(y,\xi)\,dy\right|,\quad\forall\xi\in\mathcal X^*,\ \forall\gamma^*\in\Gamma^*.
\end{equation}
Coming back to \eqref{3.27}, taking $l\geq s/2$ and using the estimation \eqref{3.29} and Plancherel identity we obtain the following estimation:
\begin{equation}\label{3.30}
\left\|\big(\partial^\alpha_\xi\phi_j\big)(.,\xi)\right\|_{\mathcal{K}_{s,\xi}}^2\ \leq\ C^2_{\alpha,l}\frac{1}{|E|}\int_E\left|\Big(\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Delta_y\big)^l\partial^\alpha_\xi\phi_j\Big)(y,\xi)\right|^2dy\ \leq\ C_{\alpha,s}^2,\quad\forall\xi\in E^*.
\end{equation}
In order to extend now to an arbitrary $\xi\in\mathcal X^*$ we notice that for any $\xi\in\mathcal X^*$ there exist $\eta\in E^*$ and $\gamma^*\in\Gamma^*$ such that $\xi=\eta+\gamma^*$ and using property (c) of the functions $\{\phi_1,\ldots,\phi_N\}$ we see that
$$
\left\|\big(\partial^\alpha_\xi\phi_j\big)(.,\xi)\right\|_{\mathcal{K}_{s,\xi}}\ =\ \left\|\big(\partial^\alpha_\eta\phi_j\big)(.,\eta+\gamma^*)\right\|_{\mathcal{K}_{s,\eta+\gamma^*}}\ =\ \left\|e^{-i<\gamma^*,.>}\big(\partial^\alpha_\eta\phi_j\big)(.,\eta)\right\|_{\mathcal{K}_{s,\eta+\gamma^*}}\ =
$$
$$
=\ \left\|<D+\eta+\gamma^*>^se^{-i<\gamma^*,.>}\big(\partial^\alpha_\eta\phi_j\big)(.,\eta)\right\|_{L^2(E)}\ =\ \left\|<D+\eta>^s\big(\partial^\alpha_\eta\phi_j\big)(.,\eta)\right\|_{L^2(E)}\ =\ \left\|\big(\partial^\alpha_\eta\phi_j\big)(.,\eta)\right\|_{\mathcal{K}_{s,\eta}}\ \leq\ C_{\alpha,s}.
$$
\end{proof}
We denote by $\mathcal{K}_0:=\mathcal{K}_{0,0}\equiv L^2(\mathbb{T})\equiv L^2(E)$ and for any $\xi\in\mathcal X^*$ we define the linear operators:
\begin{equation}\label{3.31}
\forall u\in\mathcal{K}_0,\qquad R_+(\xi)u:=\left\{\big(u,\phi_j(.,\xi)\big)_{\mathcal{K}_0}\right\}_{1\leq j\leq N},
\end{equation}
\begin{equation}\label{3.32}
\forall\underline{u}\in\mathbb{C}^N,\qquad R_-(\xi)\underline{u}:=\sum\limits_{1\leq j\leq N}\underline{u}_j\phi_j(.,\xi).
\end{equation}
We evidently have that $R_+(\xi)\in\mathbb{B}\big(\mathcal{K}_0;\mathbb{C}^N\big)$, $R_-(\xi)\in\mathbb{B}\big(\mathbb{C}^N;\mathcal{K}_0\big)$ and (using \eqref{3.26}) both are $BC^\infty$ functions of $\xi\in\mathcal X^*$. With these operators we can now define the following Grushin type operator:
\begin{equation}\label{3.34}
\mathcal{P}(\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
P_\xi-\lambda&R_-(\xi)\\
R_+(\xi)&0
\end{array}
\right)\ \in\ \mathbb{B}\big(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N\big),\quad\forall(\xi,\lambda)\in\mathcal X^*\times I.
\end{equation}
\begin{proposition}\label{P.3.8}
With the above notations, the following statements are true:
\begin{enumerate}
\item[a)] As a function of $(\xi,\lambda)\in\mathcal X^*\times I$, we have that $\mathcal{P}\in C^\infty\big(\mathcal X^*\times I;\mathbb{B}\big(\mathcal{K}_{m,0}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N\big)\big)$ and for any $\alpha\in\mathbb{N}^d$ and any $k\in\mathbb{N}$ we have that $\big(\partial^\alpha_\xi\partial^k_\lambda\mathcal{P}\big)(\xi,\lambda)\in\mathbb{B}\big(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N\big)$ uniformly in $(\xi,\lambda)\in\mathcal X^*\times I$.
\item[b)] If we consider $\mathcal{P}(\xi,\lambda)$ as an unbounded operator in $\mathcal{K}_0\times\mathbb{C}^N$ with domain $\mathcal{K}_{m,\xi}\times\mathbb{C}^N$ then for any $(\xi,\lambda)\in\mathcal X^*\times I$, it is self-adjoint and unitarily equivalent with the operator $Q(\xi,\lambda)$.
\item[c)] The operator $\mathcal{P}(\xi,\lambda)$ has an inverse:
\begin{equation}\label{3.35}
\mathcal{E}_0(\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
E^0(\xi,\lambda)&E^0_+(\xi,\lambda)\\
E^0_-(\xi,\lambda)&E^0_{-,+}(\xi,\lambda)
\end{array}
\right)\in\mathbb{B}\big(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N\big),
\end{equation}
uniformly bounded with respect to $(\xi,\lambda)\in\mathcal X^*\times I$.
\item[d)] As a function of $(\xi,\lambda)\in\mathcal X^*\times I$, we have that $\mathcal{E}_0\in C^\infty\big(\mathcal X^*\times I;\mathbb{B}\big(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,0}\times\mathbb{C}^N\big)\big)$ and for any $\alpha\in\mathbb{N}^d$ and any $k\in\mathbb{N}$ we have that $\big(\partial^\alpha_\xi\partial^k_\lambda\mathcal{E}_0\big)(\xi,\lambda)\in\mathbb{B}\big(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N\big)$ uniformly in $(\xi,\lambda)\in\mathcal X^*\times I$.
\end{enumerate}
\end{proposition}
\begin{proof}
Point (a) follows clearly from the smoothness of the maps $R_-(\xi)$ and $R_+(\xi)$ (proved above) and the arguments in Example \ref{E.A.20}.
b) Let us define the operator:
\begin{equation}\label{3.35.b}
U(\xi)\ :=\ \left(
\begin{array}{cc}
\sigma_\xi&0\\
0&{\rm{1}\hspace{-3pt}\mathbf{l}}
\end{array}
\right):\mathcal{K}_{s,\xi}\times\mathbb{C}^N\rightarrow\mathscr{F}_{s,\xi}\times\mathbb{C}^N
\end{equation}
and let us notice that it is evidently unitary $\forall(\xi,s)\in\mathcal X^*\times\mathbb{R}$. Using Remark \ref{R.A.22} we know that $P_\xi-\lambda=\sigma_{-\xi}\big(P-\lambda\big)\sigma_\xi$ on $\mathcal{K}_{m,\xi}$. If we use the relations \eqref{3.14}, \eqref{3.25} and \eqref{3.32} we deduce that for any $\underline{u}\in\mathbb{C}^N$:
$$
\sigma_{-\xi}\widetilde{R}_-(\xi)\underline{u}\ =\ \sum\limits_{j=1}^{N}\underline{u}_j\big(\sigma_{-\xi}\psi_j\big)(.,\xi)\ =\ \sum\limits_{j=1}^{N}\underline{u}_j\phi_j(.,\xi)\ =\ R_-(\xi)\underline{u}.
$$
In a similar way, using now \eqref{3.13}, \eqref{3.25} and \eqref{3.31} we obtain that $\forall u\in\mathcal{K}_0$ we have that:
$$
\widetilde{R}_+(\xi)\big(\sigma_\xi u\big)\ =\ \left\{\big(\sigma_\xi u,\psi_j(.,\xi)\big)_{\mathscr{F}_{0,\xi}}\right\}_{1\leq j\leq N}\ =\ \left\{\big(u,\phi_j(.,\xi)\big)_{\mathcal{K}_0}\right\}_{1\leq j\leq N}\ =\ R_+(\xi)u.
$$
We conclude that we have the following equality on $\mathcal{K}_{m,\xi}\times\mathbb{C}^N$:
\begin{equation}\label{3.36}
\mathcal{P}(\xi,\lambda)\ =\ U(\xi)^{-1}Q(\xi,\lambda)U(\xi),\qquad\forall(\xi,\lambda)\in\mathcal X^*\times I.
\end{equation}
c) follows easily from point (b).
d) Let us notice that (a) and (c) easily imply that $\mathcal{E}_0\in C^\infty\big(\mathcal X^*\times I;\mathbb{B}\big(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,0}\times\mathbb{C}^N\big)\big)$. The last property of $\mathcal{E}_0$ can be proved by induction, differentiating the equality $\mathcal{P}\mathcal{E}_0={\rm{1}\hspace{-3pt}\mathbf{l}}$ valid on $\mathcal{K}_0\times\mathbb{C}^N$. For example, if $|\alpha|+k=1$ we can write that
$$
\partial^\alpha_\xi\partial^k_\lambda\mathcal{E}_0\ =\ -\mathcal{E}_0\Big(\partial^\alpha_\xi\partial^k_\lambda\mathcal{P}\Big)\mathcal{E}_0
$$
and thus we have the estimation
$$
\left\|\partial^\alpha_\xi\partial^k_\lambda\mathcal{E}_0\right\|_{\mathbb{B}(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N)}\leq\left\|\mathcal{E}_0\right\|_{\mathbb{B}(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N)}\left\|\partial^\alpha_\xi\partial^k_\lambda\mathcal{P}\right\|_{\mathbb{B}(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N)} \left\|\mathcal{E}_0\right\|_{\mathbb{B}(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N)},
$$
where the three factors of the right hand side are clearly uniformly bounded with respect to $(\xi,\lambda)\in\mathcal X^*\times I$.
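For the reader's convenience, the general induction step can be written down explicitly: applying the Leibniz rule to the identity $\mathcal{P}\,\mathcal{E}_0={\rm{1}\hspace{-3pt}\mathbf{l}}$ gives, for $|\alpha|+k\geq1$,
$$
\partial^\alpha_\xi\partial^k_\lambda\mathcal{E}_0\ =\ -\,\mathcal{E}_0\sum\limits_{\substack{\beta\leq\alpha,\ l\leq k\\ (\beta,l)\neq(0,0)}}\binom{\alpha}{\beta}\binom{k}{l}\Big(\partial^{\beta}_\xi\partial^{l}_\lambda\mathcal{P}\Big)\Big(\partial^{\alpha-\beta}_\xi\partial^{k-l}_\lambda\mathcal{E}_0\Big),
$$
where every derivative of $\mathcal{E}_0$ appearing on the right hand side has strictly smaller total order, so that the uniform boundedness follows by induction from points (a) and (c).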
\end{proof}
\section{Construction of the effective Hamiltonian}\label{S.4}
\setcounter{equation}{0}
\setcounter{theorem}{0}
Let us recall our hypothesis (see Section \ref{S.1}): $\{B_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ is a family of magnetic fields satisfying Hypothesis H.1, $\{A_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ is an associated family of vector potentials and $\{p_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ is a family of symbols satisfying Hypothesis H.2 - H.6.
As we have already noticed in Remark \ref{R.A.6}, the symbol at $\epsilon=0$, $p_0(x,y,\eta)$, does not depend on the first variable $x\in\mathcal X$; thus, denoting ${\rm p}_0(y,\eta):=p_0(0,y,\eta)$ and $r_\epsilon(x,y,\eta):=p_\epsilon(x,y,\eta)-{\rm p}_0(y,\eta)$, we notice that the symbol ${\rm p}_0$ verifies the hypothesis of Section \ref{S.3}, i.e. ${\rm p}_0\in S^m_1(\mathbb{T})$ is real and elliptic, and we can write
\begin{equation}\label{4.1}
p_\epsilon\ =\ {\rm p}_0\,+\,r_\epsilon,\qquad\underset{\epsilon\rightarrow0}{\lim}r_\epsilon\,=\,0,\ \text{in}\ S^m_1(\mathcal X\times\mathbb{T}).
\end{equation}
We apply the constructions from Section \ref{S.3} to the operator $P_0:=\mathfrak{Op}({\rm p}_0)$. We obtain that for any compact interval $I\subset\mathbb{R}$, for any $\lambda\in I$ and any $\xi\in\mathcal X^*$ one can construct the operators $R_{\pm}(\xi)$ as in \eqref{3.31} and \eqref{3.32} in order to use \eqref{3.34} and define the operator
\begin{equation}\label{4.2}
\mathcal{P}_0(\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
P_{0,\xi}-\lambda&R_-(\xi)\\
R_+(\xi)&0
\end{array}
\right)\in\mathbb{B}\big(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N\big),
\end{equation}
that will verify all the properties listed in Proposition \ref{P.3.8}. In particular
\begin{equation}\label{4.3}
\mathcal{P}_0(.,\lambda)\,\in\,S^0_0\big(\mathcal X;\mathbb{B}\big(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N\big)\big),
\end{equation}
uniformly for $\lambda\in I$. Moreover, it follows that $\mathcal{P}_0(\xi,\lambda)$ is invertible and its inverse denoted by $\mathcal{E}_0(\xi,\lambda)$ is given (as in \eqref{3.35}) by
\begin{equation}\label{4.4}
\mathcal{E}_0(\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
E^0(\xi,\lambda)&E^0_+(\xi,\lambda)\\
E^0_-(\xi,\lambda)&E^0_{-,+}(\xi,\lambda)
\end{array}
\right)\,\in\,\mathbb{B}\big(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N\big)
\end{equation}
and has the property that
\begin{equation}\label{4.5}
\mathcal{E}_0(.,\lambda)\,\in\,S^0_0\big(\mathcal X;\mathbb{B}\big(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N\big)\big),
\end{equation}
uniformly for $\lambda\in I$.
Let us consider now the following operator:
\begin{equation}\label{4.6}
\mathcal{P}_\epsilon(x,\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
\mathfrak{q}_\epsilon(x,\xi)-\lambda&R_-(\xi)\\
R_+(\xi)&0
\end{array}
\right),\qquad\lambda\in I,\ \epsilon\in[-\epsilon_0,\epsilon_0],\ (x,\xi)\in\Xi
\end{equation}
where we recall that $\mathfrak{q}_\epsilon(x,\xi):=\mathfrak{Op}(\widetilde{p}_\epsilon(x,.,\xi,.))$, $\widetilde{p}_\epsilon(x,y,\xi,\eta):=p_\epsilon(x,y,\xi+\eta)$. Taking into account Example \ref{E.A.20} from the Appendix we notice that $\mathfrak{q}_\epsilon\in S^0_0\big(\mathcal X;\mathbb{B}\big(\mathcal{K}_{m,\xi};\mathcal{K}_0\big)\big)$ uniformly in $\epsilon\in[-\epsilon_0,\epsilon_0]$; thus
\begin{equation}\label{4.7}
\mathcal{P}_\epsilon(x,\xi,\lambda)\,\in\,S^0_0\big(\mathcal X;\mathbb{B}\big(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N\big)\big),
\end{equation}uniformly with respect to $(\lambda,\epsilon)\in I\times[-\epsilon_0,\epsilon_0]$.
\begin{lemma}\label{L.4.1}
The operator $\mathcal{P}_{\epsilon,\lambda}:=\mathfrak{Op}^{A_\epsilon}(\mathcal{P}_\epsilon(.,.,\lambda))$ belongs to $\mathbb{B}\big(\mathcal{K}^m_\epsilon(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big)$ uniformly with respect to $(\lambda,\epsilon)\in I\times[-\epsilon_0,\epsilon_0]$. Moreover, considering $\mathcal{P}_{\epsilon,\lambda}$ as an unbounded linear operator in the Hilbert space $\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)$ it defines a self-adjoint operator on the domain $\mathcal{K}^m_\epsilon(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)$.
\end{lemma}
\begin{proof}
If we denote by $\mathfrak{R}_{\mp,\epsilon}:=\mathfrak{Op}^{A_\epsilon}(R_\mp(\xi))$ we can write
\begin{equation}\label{4.8}
\mathcal{P}_{\epsilon,\lambda}\ =\ \left(
\begin{array}{cc}
\widetilde{P}_\epsilon-\lambda&\mathfrak{R}_{-,\epsilon}\\
\mathfrak{R}_{+,\epsilon}&0
\end{array}
\right).
\end{equation}
Taking into account Lemma \ref{L.2.12} we may conclude that $\widetilde{P}_\epsilon\in\mathbb{B}\big(\mathcal{K}^m_\epsilon(\mathcal X^2);\mathcal{K}(\mathcal X^2)\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. Noticing that $R_-(\xi)=R_+(\xi)^*$ and belongs to $S^0_0(\mathcal X;\mathbb{B}\big(\mathbb{C}^N;\mathcal{K}_0)\big)$, Proposition \ref{P.A.26} implies that
$$
\mathfrak{R}_{-,\epsilon}\ =\ \mathfrak{R}_{+,\epsilon}^*\,\in\,\mathbb{B}\big(L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}(\mathcal X^2)\big)
$$
uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. This gives us the first part of the statement of the Lemma. The self-adjointness follows from the self-adjointness of $\widetilde{P}_\epsilon$ in $\mathcal{K}(\mathcal X^2)$ on the domain $\mathcal{K}^m_\epsilon(\mathcal X^2)$ and this follows from Proposition \ref{P.2.13}.
\end{proof}
\begin{lemma}\label{L.4.2}
The operator $\mathcal{E}_{0,\epsilon,\lambda}:=\mathfrak{Op}^{A_\epsilon}\big(\mathcal{E}_0(.,\lambda)\big)$ belongs to $\mathbb{B}\big(\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}^m_\epsilon(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big)$ uniformly with respect to $(\lambda,\epsilon)\in I\times[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
We can write
\begin{equation}\label{4.9}
\mathcal{E}_{0,\epsilon,\lambda}\ =\ \left(
\begin{array}{cc}
\mathfrak{E}^0_{\epsilon,\lambda}&\mathfrak{E}^0_{+,\epsilon,\lambda}\\
\mathfrak{E}^0_{-,\epsilon,\lambda}&\mathfrak{E}^0_{-+,\epsilon,\lambda}
\end{array}
\right),
\end{equation}
with
$$
\mathfrak{E}^0_{\epsilon,\lambda}\ :=\ \mathfrak{Op}^{A_\epsilon}\big(E^0(.,\lambda)\big),\qquad\mathfrak{E}^0_{\pm,\epsilon,\lambda}\ :=\ \mathfrak{Op}^{A_\epsilon}\big(E^0_\pm(.,\lambda)\big),\qquad\mathfrak{E}^0_{-+,\epsilon,\lambda}\ :=\ \mathfrak{Op}^{A_\epsilon}\big(E^0_{-+}(.,\lambda)\big).
$$
From \eqref{4.5} it follows that $\mathcal{E}_0(.,\lambda)\in S^0_0\big(\mathcal X;\mathbb{B}\big(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N\big)\big)$ uniformly with respect to $(\lambda,\epsilon)\in I\times[-\epsilon_0,\epsilon_0]$. In order to prove the boundedness result in the Lemma it is enough to show that
\begin{equation}\label{4.10}
\left(\begin{array}{cc}
\widetilde{Q}_{m,\epsilon}&0\\
0&{\rm{1}\hspace{-3pt}\mathbf{l}}
\end{array}
\right)\,\mathcal{E}_{0,\epsilon,\lambda}\,\in\,\mathbb{B}\big(\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big),
\end{equation}
uniformly with respect to $(\lambda,\epsilon)\in I\times[-\epsilon_0,\epsilon_0]$; here $\widetilde{Q}_{m,\epsilon}$ is defined before Definition \ref{D.1.4}, with some suitable identifications. In that Definition we also argued that the operator $\widetilde{Q}_{m,\epsilon}$ corresponds to the operator $Q_{m,\epsilon}$ from Remark \ref{R.A.25} transformed by {\it doubling the variables} starting from the operator valued symbol $\mathfrak{q}_{m,\epsilon}$. We may thus conclude that $\widetilde{Q}_{m,\epsilon}$ is obtained by the $\mathfrak{Op}^{A_\epsilon}$ quantization of a symbol from $S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0;\mathcal{K}_{m,\xi})\big)$. Taking into account that $E^0(\xi,\lambda)\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0;\mathcal{K}_{m,\xi})\big)$ and $E^0_+(\xi,\lambda)\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N;\mathcal{K}_{m,\xi})\big)$, the property \eqref{4.10} follows from the {\it Composition Theorem} \ref{T.A.23} a) and from the Proposition \ref{P.A.26}.
\end{proof}
\begin{theorem}\label{T.4.3}
For a sufficiently small $\epsilon_0>0$ we have that for any $(\lambda,\epsilon)\in I\times[-\epsilon_0,\epsilon_0]$ the operator $\mathcal{P}_{\epsilon,\lambda}$ from Lemma \ref{L.4.1} has an inverse denoted by
\begin{equation}\label{4.11}
\mathcal{E}_{\epsilon,\lambda}\ :=\ \left(
\begin{array}{cc}
\mathfrak{E}(\epsilon,\lambda)&\mathfrak{E}_+(\epsilon,\lambda)\\
\mathfrak{E}_-(\epsilon,\lambda)&\mathfrak{E}_{-+}(\epsilon,\lambda)
\end{array}
\right)\,\in\,\mathbb{B}\big(\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}^m_\epsilon(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big),
\end{equation}
uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$. Moreover we have that
$$
\mathcal{E}_{\epsilon,\lambda}\ =\ \mathcal{E}_{0,\epsilon,\lambda}\,+\,\mathcal{R}_{\epsilon,\lambda},\qquad\mathcal{R}_{\epsilon,\lambda}=\mathfrak{Op}^{A_\epsilon}\big(\rho_{\epsilon,\lambda}\big),\qquad\underset{\epsilon\rightarrow0}{\lim}\rho_{\epsilon,\lambda}=0\ \text{in}\ S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N)\big).
$$
In particular we have that
\begin{equation}\label{4.12}
\mathfrak{E}_{-+}(\epsilon,\lambda)\ =\ \mathfrak{Op}^{A_\epsilon}\big(E^{-+}_{\epsilon,\lambda}\big),\qquad\underset{\epsilon\rightarrow0}{\lim}E^{-+}_{\epsilon,\lambda}=E^0_{-+}(.,\lambda)\ \text{in}\ S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N;\mathbb{C}^N)\big),
\end{equation}
uniformly with respect to $\lambda\in I$.
\end{theorem}
\begin{proof}
We begin the proof with the following remarks.
1. The symbol $\mathcal{E}_0(\xi,\lambda)$ appearing in \eqref{4.4} does not depend on $x\in\mathcal X$ and on $\epsilon\in[-\epsilon_0,\epsilon_0]$. We can thus consider that $\mathcal{E}_0(\xi,\lambda)\in S^0_{0,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N)\big)$ uniformly for $\lambda\in I$.
2. The symbol $\mathcal{P}_0(\xi,\lambda)$ appearing in \eqref{4.2} does not depend on $x\in\mathcal X$ and on $\epsilon\in[-\epsilon_0,\epsilon_0]$. We can thus consider that $\mathcal{P}_0(\xi,\lambda)\in S^0_{0,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N)\big)$ uniformly for $\lambda\in I$.
3. From the relations \eqref{4.1}, \eqref{4.2} and \eqref{4.6} it follows that
$$
\mathcal{P}_\epsilon(x,\xi,\lambda)\,-\,\mathcal{P}_0(\xi,\lambda)\ =\ \left(
\begin{array}{cc}
\mathfrak{q}^\prime_\epsilon(x,\xi)&0\\
0&0
\end{array}
\right),
$$
where $\mathfrak{q}^\prime_\epsilon(x,\xi):=\mathfrak{Op}\big(\widetilde{r}_\epsilon(x,.,\xi,.)\big)$ and $\widetilde{r}_\epsilon(x,y,\xi,\eta):=r_\epsilon(x,y,\xi+\eta)$. Since $\underset{\epsilon\rightarrow0}{\lim}r_\epsilon=0$ in $S^m_1\big(\mathcal X\times\mathbb{T}\big)$ and the map $S^m_1\big(\mathcal X\times\mathbb{T}\big)\ni r_\epsilon\mapsto\mathfrak{q}^\prime_\epsilon\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_{m,\xi};\mathcal{K}_0)\big)$ is continuous (by an evident generalization of property \eqref{A.32}), we conclude that
\begin{equation}\label{4.13}
\underset{\epsilon\rightarrow0} {\lim}\big[\mathcal{P}_\epsilon(x,\xi,\lambda)-\mathcal{P}_0(\xi,\lambda)\big]=0
\end{equation}
in $S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N)\big)$ uniformly with respect to $\lambda\in I$.
Let us come back to the proof of the Theorem and denote by $\mathcal{P}^0_{\epsilon,\lambda}:=\mathfrak{Op}^{A_\epsilon}\big(\mathcal{P}_0(\xi,\lambda)\big)$. We can write that
\begin{equation}\label{4.14}
\mathcal{P}_{\epsilon,\lambda}\,\mathcal{E}_{0,\epsilon,\lambda}\ =\ \mathcal{P}^0_{\epsilon,\lambda}\,\mathcal{E}_{0,\epsilon,\lambda}\,+\,\big(\mathcal{P}_{\epsilon,\lambda}-\mathcal{P}^0_{\epsilon,\lambda})\,\mathcal{E}_{0,\epsilon,\lambda}
\end{equation}
in $\mathbb{B}\big(\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big)$.
Using the {\it Composition Theorem} \ref{T.A.23} and the above remarks, we conclude that
\begin{equation}\label{4.15}
\mathcal{P}_{\epsilon,\lambda}\,\mathcal{E}_{0,\epsilon,\lambda}\ =\ {\rm{1}\hspace{-3pt}\mathbf{l}}\,+\,\mathfrak{Op}^{A_\epsilon}\big(s_{\epsilon,\lambda}\big)
\end{equation}
in $\mathbb{B}\big(\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big)$, where
\begin{equation}\label{4.16}
\underset{\epsilon\rightarrow0}{\lim}s_{\epsilon,\lambda}=0,
\end{equation}
in $S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N)\big)$ uniformly with respect to $\lambda\in I$.
It follows from Proposition \ref{P.A.27} that for $\epsilon_0>0$ small enough, the operator ${\rm{1}\hspace{-3pt}\mathbf{l}}+\mathfrak{Op}^{A_\epsilon}(s_{\epsilon,\lambda})$ is invertible in $\mathbb{B}\big(\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big)$ for any $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$ and there exists a symbol $t_{\epsilon,\lambda}$ such that
\begin{equation}\label{4.17}
\underset{\epsilon\rightarrow0}{\lim}t_{\epsilon,\lambda}=0,
\end{equation}
in $S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N)\big)$ uniformly with respect to $\lambda\in I$ and
\begin{equation}\label{4.18}
\big[{\rm{1}\hspace{-3pt}\mathbf{l}}\,+\,\mathfrak{Op}^{A_\epsilon}(s_{\epsilon,\lambda})\big]^{-1}\ =\ {\rm{1}\hspace{-3pt}\mathbf{l}}\,+\,\mathfrak{Op}^{A_\epsilon}(t_{\epsilon,\lambda}).
\end{equation}
Let us define
$$
\mathcal{E}_{\epsilon,\lambda}\ :=\ \mathcal{E}_{0,\epsilon,\lambda}\,\big[{\rm{1}\hspace{-3pt}\mathbf{l}}\,+\,\mathfrak{Op}^{A_\epsilon}(t_{\epsilon,\lambda})\big]
$$
and let us notice that it is a right inverse for $\mathcal{P}_{\epsilon,\lambda}$. As the operator $\mathcal{P}_{\epsilon,\lambda}$ is self-adjoint, having a bounded right inverse it is surjective, and its kernel, being the orthogonal complement of its range, is trivial; thus $\mathcal{P}_{\epsilon,\lambda}$ is bijective and $\mathcal{E}_{\epsilon,\lambda}$ defined above is also a left inverse for it.
The other properties in the statement of the Theorem are evident now.
\end{proof}
\begin{remark}\label{R.4.4}
The operator $\mathfrak{E}_{-+}(\epsilon,\lambda)$ defined in \eqref{4.12} will be the effective Hamiltonian associated to the Hamiltonian $P_\epsilon$ and the interval $I$. Its importance will partially be explained in the following Corollary.
\end{remark}
\begin{corollary}\label{C.4.5}
Under the assumptions of Theorem \ref{T.4.3}, for any $\lambda\in I$ and any $\epsilon\in[-\epsilon_0,\epsilon_0]$ the following equivalence is true:
\begin{equation}\label{4.19}
\lambda\,\in\,\sigma\big(\widetilde{P}_\epsilon\big)\quad\Longleftrightarrow\quad0\,\in\,\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big).
\end{equation}
\end{corollary}
\begin{proof}
The equality
$$
\mathcal{P}_{\epsilon,\lambda}\,\mathcal{E}_{\epsilon,\lambda}\ =\ \left(
\begin{array}{cc}
{\rm{1}\hspace{-3pt}\mathbf{l}}_{\mathcal{K}(\mathcal X^2)}&0\\
0&{\rm{1}\hspace{-3pt}\mathbf{l}}_{L^2(\mathcal X;\mathbb{C}^N)}
\end{array}
\right)
$$
is equivalent with the following system of equations:
\begin{equation}\label{4.20}
\left\{
\begin{array}{rcl}
\big(\widetilde{P}_\epsilon-\lambda\big)\,\mathfrak{E}(\epsilon,\lambda)\,+\,\mathfrak{R}_{-,\epsilon}\,\mathfrak{E}_-(\epsilon,\lambda)&=&{\rm{1}\hspace{-3pt}\mathbf{l}}_{\mathcal{K}(\mathcal X^2)},\\
\big(\widetilde{P}_\epsilon-\lambda\big)\,\mathfrak{E}_+(\epsilon,\lambda)\,+\,\mathfrak{R}_{-,\epsilon}\,\mathfrak{E}_{-+}(\epsilon,\lambda)&=&0,\\
\mathfrak{R}_{+,\epsilon}\,\mathfrak{E}(\epsilon,\lambda)&=&0,\\
\mathfrak{R}_{+,\epsilon}\,\mathfrak{E}_+(\epsilon,\lambda)&=&{\rm{1}\hspace{-3pt}\mathbf{l}}_{L^2(\mathcal X;\mathbb{C}^N)}.
\end{array}
\right.
\end{equation}
If $0\notin\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)$ the second equality in \eqref{4.20} implies that
$$
\mathfrak{R}_{-,\epsilon}\ =\ -\big(\widetilde{P}_\epsilon-\lambda\big)\mathfrak{E}_+(\epsilon,\lambda)\mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}
$$
and by substituting this value in the first equality in \eqref{4.20} we obtain
$$
\big(\widetilde{P}_\epsilon-\lambda\big)\left[\mathfrak{E}(\epsilon,\lambda)-\mathfrak{E}_+(\epsilon,\lambda)\mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}\mathfrak{E}_-(\epsilon,\lambda)\right]\ =\ {\rm{1}\hspace{-3pt}\mathbf{l}}_{\mathcal{K}(\mathcal X^2)}.
$$
It follows that $\lambda\notin\sigma(\widetilde{P}_\epsilon)$.
Suppose now that $\lambda\notin\sigma(\widetilde{P}_\epsilon)$; then the second equality in \eqref{4.20} implies that
$$
\mathfrak{E}_+(\epsilon,\lambda)\ =\ -\big(\widetilde{P}_\epsilon-\lambda\big)^{-1}\mathfrak{R}_{-,\epsilon}\,\mathfrak{E}_{-+}(\epsilon,\lambda).
$$
After substituting this last expression in the last equality in \eqref{4.20} we get
$$
-\mathfrak{R}_{+,\epsilon}\big(\widetilde{P}_\epsilon-\lambda\big)^{-1}\mathfrak{R}_{-,\epsilon}\,\mathfrak{E}_{-+}(\epsilon,\lambda)\ =\ {\rm{1}\hspace{-3pt}\mathbf{l}}_{L^2(\mathcal X;\mathbb{C}^N)}.
$$
It follows that $0\notin\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)$ and we obtain the following identity (valid in this case):
$$
\mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}\ =\ -\mathfrak{R}_{+,\epsilon}\big(\widetilde{P}_\epsilon-\lambda\big)^{-1}\mathfrak{R}_{-,\epsilon}.
$$
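Let us also record the companion formula that one obtains in the same way: combining the identity derived in the first part of the proof with the analogous equations coming from $\mathcal{E}_{\epsilon,\lambda}\,\mathcal{P}_{\epsilon,\lambda}={\rm{1}\hspace{-3pt}\mathbf{l}}$, one sees that whenever $0\notin\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)$ we have
$$
\big(\widetilde{P}_\epsilon-\lambda\big)^{-1}\ =\ \mathfrak{E}(\epsilon,\lambda)\,-\,\mathfrak{E}_+(\epsilon,\lambda)\,\mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}\,\mathfrak{E}_-(\epsilon,\lambda).
$$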
\end{proof}
We recall that for any $\gamma^*\in\Gamma^*$ we denote by $\sigma_{\gamma^*}$ the operator of multiplication with the character $e^{i<\gamma^*,.>}$ on $\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N)$ and we define now $\Upsilon_{\gamma^*}$ as the operator of multiplication with the function $\sigma_{\gamma^*}\otimes\sigma_{-\gamma^*}$ on the space $\mathscr{S}^\prime(\mathcal X^2)$. We shall need further the following commutation property.
\begin{lemma}\label{L.4.6}
For any $\gamma^*\in\Gamma^*$ the following equality is true $\forall(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$:
\begin{equation}\label{4.21}
\left(
\begin{array}{cc}
\Upsilon_{\gamma^*}&0\\
0&\sigma_{\gamma^*}
\end{array}
\right)\,\mathcal{P}_{\epsilon,\lambda}\ =\ \mathcal{P}_{\epsilon,\lambda}\,
\left(
\begin{array}{cc}
\Upsilon_{\gamma^*}&0\\
0&\sigma_{\gamma^*}
\end{array}
\right)
\end{equation}
as operators on $\mathscr{S}(\mathcal X\times\mathbb{T})\times\mathscr{S}(\mathcal X;\mathbb{C}^N)$ (identifying the test functions on the torus with the associated periodic distributions).
\end{lemma}
\begin{proof}
From equality \eqref{2.21} we deduce that for any $\gamma^*\in\Gamma^*$ we have the following equality on $\mathscr{S}(\mathcal X\times\mathbb{T})$:
\begin{equation}\label{4.22}
\Upsilon_{\gamma^*}\,\widetilde{P}_\epsilon\ =\ \widetilde{P}_\epsilon\,\Upsilon_{\gamma^*}.
\end{equation}
Taking now into account Lemma \ref{L.3.7} and the definitions \eqref{3.31} and \eqref{3.32} we obtain that
\begin{equation}\label{4.23}
R_+(\xi+\gamma^*)\ =\ R_+(\xi)\,\sigma_{\gamma^*},\quad R_-(\xi+\gamma^*)\ =\ \sigma_{-\gamma^*}\,R_-(\xi),\qquad\forall\xi\in\mathcal X^*,\ \forall\gamma^*\in\Gamma^*.
\end{equation}
Repeating the computations done in Example \ref{E.A.4} we obtain for any $\underline{u}\in\mathscr{S}(\mathcal X;\mathbb{C}^N)$:
$$
\big(\mathfrak{R}_{-,\epsilon}\sigma_{\gamma^*}\underline{u}\big)(x,y)=\int_\Xi e^{i<\zeta,x-z>}\omega^{A_\epsilon}(x,z)\left[R_-(\zeta)e^{i<\gamma^*,z>}\underline{u}(z)\right](y)dz\,\;\;\bar{}\!\!\!d\zeta=
$$
$$
=e^{i<\gamma^*,x>}\int_\Xi e^{i<\zeta-\gamma^*,x-z>}\omega^{A_\epsilon}(x,z)\left[R_-(\zeta)\underline{u}(z)\right](y)dz\,\;\;\bar{}\!\!\!d\zeta=
$$
$$
=e^{i<\gamma^*,x>}\int_\Xi e^{i<\zeta,x-z>}\omega^{A_\epsilon}(x,z)\left[R_-(\zeta+\gamma^*)\underline{u}(z)\right](y)dz\,\;\;\bar{}\!\!\!d\zeta=
$$
$$
=e^{i<\gamma^*,x>}\int_\Xi e^{i<\zeta,x-z>}\omega^{A_\epsilon}(x,z)\left[\sigma_{-\gamma^*}R_-(\zeta)\underline{u}(z)\right](y)dz\,\;\;\bar{}\!\!\!d\zeta=\big(\Upsilon_{\gamma^*}\mathfrak{R}_{-,\epsilon}\underline{u}\big)(x,y),
$$
concluding that on $\mathscr{S}(\mathcal X;\mathbb{C}^N)$ we have the equality
\begin{equation}\label{4.24}
\mathfrak{R}_{-,\epsilon}\sigma_{\gamma^*}\ =\ \Upsilon_{\gamma^*}\mathfrak{R}_{-,\epsilon}.
\end{equation}
In a similar way we obtain that on $\mathscr{S}(\mathcal X\times\mathbb{T})$ we have the equality:
\begin{equation}\label{4.25}
\mathfrak{R}_{+,\epsilon}\,\Upsilon_{\gamma^*}\ =\ \sigma_{\gamma^*}\,\mathfrak{R}_{+,\epsilon}.
\end{equation}
The commutation relation \eqref{4.21} now follows entrywise from \eqref{4.22}, \eqref{4.24} and \eqref{4.25}.
\end{proof}
\begin{remark}\label{R.4.7}
Of course the inverse of the operator $\mathcal{P}_{\epsilon,\lambda}$ verifies a commutation equation similar to \eqref{4.21}:
\begin{equation}\label{4.26}
\left(
\begin{array}{cc}
\Upsilon_{\gamma^*}&0\\
0&\sigma_{\gamma^*}
\end{array}
\right)\,\mathcal{E}_{\epsilon,\lambda}\ =\ \mathcal{E}_{\epsilon,\lambda}\,
\left(
\begin{array}{cc}
\Upsilon_{\gamma^*}&0\\
0&\sigma_{\gamma^*}
\end{array}
\right),\qquad\forall\gamma^*\in\Gamma^*,
\end{equation}
on $\mathscr{S}(\mathcal X\times\mathbb{T})\times\mathscr{S}(\mathcal X;\mathbb{C}^N)$ and for any $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$; indeed, it suffices to multiply \eqref{4.21} on the left and on the right by $\mathcal{E}_{\epsilon,\lambda}$ and to use the fact that $\mathcal{E}_{\epsilon,\lambda}$ is a two-sided inverse of $\mathcal{P}_{\epsilon,\lambda}$.
\end{remark}
\section{The auxiliary Hilbert spaces $\mathfrak{V}_0$ and $\mathfrak{L}_0$}\label{S.5}
\setcounter{equation}{0}
\setcounter{theorem}{0}
The procedure we use, following \cite{GMS}, for coming back from the operator $\widetilde{P}_\epsilon$ to the basic Hamiltonian $P_\epsilon$ requires considering the extension of the pseudodifferential operator $\widetilde{P}_\epsilon$ to tempered distributions and its restriction to certain Hilbert spaces of distributions, which we introduce and study in this section.
\begin{definition}\label{D.5.1}
Let us consider the following complex space (denoting by $\delta_\gamma:=\tau_\gamma\delta$ with $\delta$ the Dirac distribution of mass 1 supported in $\{0\}$ and $\gamma\in\Gamma$):
$$
\mathfrak{V}_0\ :=\ \left\{\,w\in\mathscr{S}^\prime(\mathcal X)\,\mid\,\exists f\in l^2(\Gamma)\ \text{such that}\ w=\underset{\gamma\in\Gamma}{\sum}f_\gamma\delta_{-\gamma}\,\right\},
$$
endowed with the quadratic norm:
$$
\|w\|_{\mathfrak{V}_0}\ :=\ \sqrt{\underset{\gamma\in\Gamma}{\sum}\left|f_\gamma\right|^2},\qquad\forall w\in\mathfrak{V}_0.
$$
\end{definition}
It is evident that $\mathfrak{V}_0$ is a Hilbert space and is canonically unitarily equivalent with $l^2(\Gamma)$.
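For instance, in the simplest situation $d=1$ and $\Gamma=\mathbb{Z}$ (considered here only as an illustration), the weighted Dirac comb
$$
w\ :=\ \underset{n\in\mathbb{Z}}{\sum}(1+n^2)^{-1}\delta_{-n}
$$
belongs to $\mathfrak{V}_0$ with $\|w\|^2_{\mathfrak{V}_0}=\underset{n\in\mathbb{Z}}{\sum}(1+n^2)^{-2}<\infty$, while the usual Dirac comb $\underset{n\in\mathbb{Z}}{\sum}\delta_{-n}$, although a tempered distribution, does not belong to $\mathfrak{V}_0$.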
The Hilbert space $\mathfrak{V}_0$ has a `good comparison property' with respect to the scale of magnetic Sobolev spaces introduced in \cite{IMP1}. In order to study this relation let us choose a family of vector potentials $\{A_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ having components of class $C^\infty_{\text{\sf pol}}(\mathcal X)$ and defining the magnetic fields $\{B_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ satisfying Hypothesis H.1.
\begin{lemma}\label{L.5.2}
For any $s>d$ and for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ we have the algebraic and topological inclusion $\mathfrak{V}_0\hookrightarrow\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$, uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
We use the operator $Q_{s,\epsilon}$ from Remark \ref{R.A.25}. Let $u=\underset{\gamma\in\Gamma}{\sum}f_\gamma\delta_{-\gamma}\in\mathfrak{V}_0$. Then we have (in $\mathscr{S}^\prime(\mathcal X)$):
$$
g:=\ Q_{-s,\epsilon}u\ =\ \underset{\gamma\in\Gamma}{\sum}f_\gamma Q_{-s,\epsilon}\delta_{-\gamma}.
$$
A computation made in $\mathscr{S}^\prime(\mathcal X)$ shows that (for $s>d$) we have that $Q_{-s,\epsilon}\delta_{-\gamma}$ belongs in fact to $C(\mathcal X)$ (as Fourier transform of an integrable function) and moreover:
$$
\big(Q_{-s,\epsilon}\delta_{-\gamma}\big)(x)\ =\ (2\pi)^{-d}\int_\Xi e^{i<\eta,x-y>}\omega_{A_\epsilon}(x,y)\,q_{-s,\epsilon}\big(\frac{x+y}{2},\eta\big)\,\delta_{-\gamma}(y)dy\,d\eta\ =
$$
$$
=(2\pi)^{-d}\int_{\mathcal X^*}e^{i<\eta,x+\gamma>}\omega_{A_\epsilon}(x,-\gamma)\,q_{-s,\epsilon}\big(\frac{x-\gamma}{2},\eta\big)\,d\eta.
$$
From this last formula we may deduce that for any $N\in\mathbb{N}$ there exists a strictly positive constant $C_N>0$ such that for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ and for any $x\in\mathcal X$ we have the estimation:
$$
\left|\big(Q_{-s,\epsilon}\delta_{-\gamma}\big)(x)\right|\ \leq\ C_N<x+\gamma>^{-N}.
$$
Choosing $N>d$ we notice that for any $x\in\mathcal X$:
$$
|g(x)|\ \leq\ C_N\underset{\gamma\in\Gamma}{\sum}\left|f_{\gamma}\right|<x+\gamma>^{-N}\ \leq\ C_N\left(\underset{\gamma\in\Gamma}{\sum}\left|f_{\gamma}\right|^2<x+\gamma>^{-N}\right)^{1/2}\left(\underset{\gamma\in\Gamma}{\sum}<x+\gamma>^{-N}\right)^{1/2}.
$$
We may conclude that $g\in L^2(\mathcal X)$ and $\|g\|_{L^2(\mathcal X)}\leq C^\prime_N\|u\|_{\mathfrak{V}_0}$. Finally, since $u=Q_{s,\epsilon}g$, this is equivalent with the fact that $u\in\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ and there exists a strictly positive constant $C$ such that
$$
\|u\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)}\ \leq\ C\|u\|_{\mathfrak{V}_0},\qquad\forall u\in\mathfrak{V}_0,\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
$$
\end{proof}
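Let us note that in the proof above, as well as in the proof of Lemma \ref{L.5.3} below, we have implicitly used the following elementary lattice estimates, valid for any $N>d$ (they follow by comparison with the convergent integral $\int_\mathcal X<y>^{-N}dy$):
$$
\underset{x\in\mathcal X}{\sup}\,\underset{\gamma\in\Gamma}{\sum}<x+\gamma>^{-N}\ <\ \infty,\qquad\underset{\gamma\in\Gamma}{\sup}\,\int_\mathcal X<x+\gamma>^{-N}dx\ =\ \int_\mathcal X<y>^{-N}dy\ <\ \infty.
$$
The first bound controls the lattice sums $\underset{\gamma\in\Gamma}{\sum}<x+\gamma>^{-N}$ (and $\underset{\gamma\in\Gamma}{\sum}<\gamma-y>^{-2N}$) uniformly in the continuous variable, while the second one allows the term-by-term integration of the remaining series.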
We shall need a property characterizing the elements of $\mathfrak{V}_0$, replacing the property proposed in \cite{GMS}, which is not easy to generalize to our situation; moreover, its proof given in \cite{GMS} and \cite{DS} has some gaps that make some modifications of the arguments necessary.
\begin{lemma}\label{L.5.3}
For any $s>d$ there exists a strictly positive constant $C_s>0$ such that the following inequality is true:
\begin{equation}\label{5.1}
\underset{\gamma\in\Gamma}{\sum}|u(\gamma)|^2\ \leq\ C_s\|u\|^2_{\mathcal{H}^s_{A_\epsilon}(\mathcal X)},\qquad\forall u\in\mathscr{S}(\mathcal X),\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
\end{lemma}
\begin{proof}
For any fixed $u\in\mathscr{S}(\mathcal X)$ let us denote by $v:=Q_{s,\epsilon}u\in\mathscr{S}(\mathcal X)$. Then $u=Q_{-s,\epsilon}v$ and thus for any $N\in\mathbb{N}$ and for any $x\in\mathcal X$ we can write that:
$$
u(x)\ =\ \int_\Xi e^{i<\eta,x-y>}<x-y>^{-2N}\omega_{A_\epsilon}(x,y)\left[\big({\rm{1}\hspace{-3pt}\mathbf{l}}-\Delta_\eta\big)^Nq_{-s,\epsilon}\big(\frac{x+y}{2},\eta\big)\right]v(y)dy\,\;\;\bar{}\!\!\!d\eta.
$$
This equality implies that we can find two strictly positive constants $C$ and $C^\prime$ such that for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ and for any $x\in\mathcal X$ one has the estimation:
$$
|u(x)|\ \leq\ C\int_\mathcal X<x-y>^{-2N}|v(y)|dy,
$$
$$
|u(x)|^2\ \leq\ C^2\left(\int_\mathcal X<x-y>^{-2N}dy\right)\left(\int_\mathcal X<x-y>^{-2N}|v(y)|^2dy\right)\ \leq\ C^\prime\int_\mathcal X<x-y>^{-2N}|v(y)|^2dy.
$$
We choose now $N\in\mathbb{N}$ large enough and notice that
$$
\underset{\gamma\in\Gamma}{\sum}|u(\gamma)|^2\ \leq\ C^\prime\int_\mathcal X\left(\underset{\gamma\in\Gamma}{\sum}<\gamma-y>^{-2N}\right)|v(y)|^2dy\ \leq\ C^{\prime\prime}\|v\|^2_{L^2(\mathcal X)}\ \leq\ C_s\|u\|^2_{\mathcal{H}^s_{A_\epsilon}(\mathcal X)}.
$$
\end{proof}
For any $\gamma^*\in\Gamma^*$ we use the notation $\sigma_{\gamma^*}$ also for the operator of multiplication with the character $e^{i<\gamma^*,.>}$ on the space of tempered distributions (which it evidently leaves invariant).
\begin{proposition}\label{P.5.4}
We have the following characterization of the vectors from $\mathfrak{V}_0$:
\begin{enumerate}
\item[a)] Given any vector $u\in\mathfrak{V}_0$ there exists a vector $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)$ such that
\begin{equation}\label{5.2}
u\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\sigma_{\gamma^*}u_0.
\end{equation}
Moreover the map $\mathfrak{V}_0\ni u\mapsto u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)$ is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\item[b)] Given any $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)$ the series $\underset{\gamma^*\in\Gamma^*}{\sum}\sigma_{\gamma^*}u_0$ converges in $\mathscr{S}^\prime(\mathcal X)$ and its sum denoted by $u$ belongs in fact to $\mathfrak{V}_0$. Moreover the map $\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\ni u_0\mapsto u\in\mathfrak{V}_0$ is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{enumerate}
\end{proposition}
\begin{proof}
We shall use the notation $u_{\gamma^*}:=\sigma_{\gamma^*}u_0$, for any $\gamma^*\in\Gamma^*$ and for any tempered distribution $u_0\in\mathscr{S}^\prime(\mathcal X)$.
a) Lemma \ref{L.5.2} implies that for any $s>d$ and any $\epsilon\in[-\epsilon_0,\epsilon_0]$ we have that $\mathfrak{V}_0\subset\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ and there exists a strictly positive constant $C_s>0$, independent of $\epsilon$, such that
\begin{equation}\label{5.3}
\|u\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)}\ \leq\ C_s\|u\|_{\mathfrak{V}_0},\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0],\ \forall u\in\mathfrak{V}_0.
\end{equation}
Let us choose a real function $\chi\in C^\infty_0(\mathcal X^*)$ such that $\underset{\gamma^*\in\Gamma^*}{\sum}\tau_{\gamma^*}\chi=1$ on $\mathcal X^*$. For any distribution $u\in\mathfrak{V}_0$ we define
$$
u_0\ :=\ \mathfrak{Op}^{A_\epsilon}(\chi)u.
$$
Due to the fact that $\chi\in S^{-\infty}_1(\mathcal X)$ it follows by the properties of magnetic Sobolev spaces (see \cite{IMP1}) that $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)$ and the map $\mathfrak{V}_0\ni u\mapsto u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)$ is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
We shall prove now that for any $s>d$ the series $\underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}$ is convergent in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ to an element $v\in\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ and that there exists a constant $C>0$ such that
\begin{equation}\label{5.4}
\|v\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)}\ \leq\ C\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)}
\end{equation}
uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
We denote by
$$
g_{\gamma^*}:=\ Q_{-s,\epsilon}\sigma_{\gamma^*}u_0\ =\ \sigma_{\gamma^*}\mathfrak{Op}^{A_\epsilon}\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma^*})q_{-s,\epsilon}\big)u_0
$$
where we have used \eqref{A.4} for the last equality. We notice that the family $\left\{<\gamma^*>^s({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma^*})q_{-s,\epsilon}\right\}_{|\epsilon|\leq\epsilon_0}$ is bounded as subset of $S^s_1(\Xi)$ and thus there exists a constant $C>0$ such that
$$
\|g_{\gamma^*}\|_{L^2(\mathcal X)}\ \leq\ C<\gamma^*>^{-s}\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)},\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0],\ \forall\gamma^*\in\Gamma^*.
$$
We conclude that there exists an element $g\in L^2(\mathcal X)$ such that $\underset{\gamma^*\in\Gamma^*}{\sum}g_{\gamma^*}=g$ in $L^2(\mathcal X)$ and we have the estimation $\|g\|_{L^2(\mathcal X)}\leq C^\prime\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)}$ for any $\epsilon\in[-\epsilon_0,\epsilon_0]$. Due to the properties of the magnetic pseudodifferential calculus (see \cite{IMP1}) it follows that the series $\underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}$ converges in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ to an element $v\in\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ and \eqref{5.4} is true.
We still have to show that $v=u$ as tempered distributions. Let us fix a test function $\varphi\in\mathscr{S}(\mathcal X)$ and compute:
$$
<v,\varphi>\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}<u_{\gamma^*},\varphi>\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}<\sigma_{\gamma^*}u_0,\varphi>\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\left\langle\sigma_{\gamma^*}\mathfrak{Op}^{A_\epsilon}(\chi)u,\varphi\right\rangle\ =
$$
$$
=\ \underset{\gamma^*\in\Gamma^*}{\sum}\left\langle\mathfrak{Op}^{A_\epsilon}(\tau_{-\gamma^*}\chi)\sigma_{\gamma^*}u,\varphi\right\rangle\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\left\langle u,\mathfrak{Op}^{A_\epsilon}(\tau_{-\gamma^*}\chi)\varphi\right\rangle
$$
where we have used the relation $\sigma_{\gamma^*}u=u$ satisfied by all the elements of $\mathfrak{V}_0$. Let us also notice that for any $s>d$ we have that
$$
\underset{\gamma^*\in\Gamma^*}{\sum}\tau_{-\gamma^*}\chi=1\ \text{in}\ S^s_1(\Xi)
$$
so that we can write that
$$
\varphi\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\mathfrak{Op}^{A_\epsilon}(\tau_{-\gamma^*}\chi)\varphi,\quad\text{in}\ \mathscr{S}(\mathcal X).
$$
We conclude that $<v,\varphi>=<u,\varphi>$ for any $\varphi\in\mathscr{S}(\mathcal X)$ and thus $v=u$.
b) During the proof of point (a) we have shown that for any $s>d$ there exists a constant $C_s>0$ such that for any $u_0\in\mathcal{H}^{\infty}_{A_\epsilon}(\mathcal X)$ and for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ the series $\underset{\gamma^* \in\Gamma^*}{\sum}u_{\gamma^*}$ converges in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ to an element $u\in\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ and the following estimation is true:
\begin{equation}\label{5.5}
\|u\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)}\ \leq\ C_s\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)},\qquad\forall u_0\in\mathcal{H}^{\infty}_{A_\epsilon}(\mathcal X),\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
Let us recall the Poisson formula:
\begin{equation}\label{5.6}
\underset{\gamma^*\in\Gamma^*}{\sum}\sigma_{\gamma^*}\ =\ \frac{(2\pi)^d}{|E^*|}\ \underset{\gamma\in\Gamma}{\sum}\delta_{-\gamma},\qquad\text{in }\mathscr{S}^\prime(\mathcal X).
\end{equation}
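As a quick check of the normalisation constant, in the simplest situation $d=1$, $\Gamma=\mathbb{Z}$ and $\Gamma^*=2\pi\mathbb{Z}$ (so that $|E^*|=2\pi$ and $(2\pi)^d/|E^*|=1$), the formula \eqref{5.6} reduces to the classical Dirac comb identity
$$
\underset{n\in\mathbb{Z}}{\sum}e^{2\pi inx}\ =\ \underset{m\in\mathbb{Z}}{\sum}\delta(x-m)\qquad\text{in }\mathscr{S}^\prime(\mathbb{R}).
$$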
Let us first suppose that $u_0\in\mathscr{S}(\mathcal X)$. Multiplying the equality \eqref{5.6} by $u_0$ we obtain the following equality:
\begin{equation}\label{5.7}
u\ =\ \frac{(2\pi)^d}{|E^*|}\ \underset{\gamma\in\Gamma}{\sum}u_0(-\gamma)\delta_{-\gamma},\qquad\text{in }\mathscr{S}^\prime(\mathcal X).
\end{equation}
Lemma \ref{L.5.3} implies that $u\in\mathfrak{V}_0$ and for any $s>d$ we have the estimation:
\begin{equation}\label{5.8}
\|u\|_{\mathfrak{V}_0}\ \leq\ C_s\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)}, \qquad\forall u_0\in\mathscr{S}(\mathcal X),\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
We come now to the general case $u_0\in\mathcal{H}^{\infty}_{A_\epsilon}(\mathcal X)$. Let us fix some $\epsilon\in[-\epsilon_0,\epsilon_0]$ and some $s>d$. Using the fact that $\mathscr{S}(\mathcal X)$ is dense in $\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)$ we can choose a sequence $\{u_0^k\}_{k\in\mathbb{N}^*}\subset\mathscr{S}(\mathcal X)$ such that $u_0=\underset{k\nearrow\infty}{\lim}u_0^k$ in $\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)$. For each element $u_0^k$ we can associate, as we proved above, an element $u^k\in\mathfrak{V}_0$ such that the following inequalities hold:
$$
\|u^k\|_{\mathfrak{V}_0}\ \leq\ C_s\|u_0^k\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)},\quad\forall k\in\mathbb{N}^*,
$$
$$
\|u^k-u^l\|_{\mathfrak{V}_0}\ \leq\ C_s\|u_0^k-u_0^l\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)},\quad\forall(k,l)\in[\mathbb{N}^*]^2.
$$
It follows that there exists $v\in\mathfrak{V}_0$ such that $v=\underset{k\nearrow\infty}{\lim}u^k$ in $\mathfrak{V}_0$ and the following estimation is valid:
\begin{equation}\label{5.9}
\|v\|_{\mathfrak{V}_0}\ \leq\ C_s\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)}.
\end{equation}
We still have to prove that the element $v\in\mathfrak{V}_0$ obtained above is exactly the limit $u=\underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}$. But we know that $u^k=\underset{\gamma^*\in\Gamma^*}{\sum}\sigma_{\gamma^*}u_0^k$ so that from \eqref{5.5} we deduce that we have the estimations:
$$
\|u^k-u\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)}\ \leq\ C_s\|u_0^k-u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)}\ \underset{k\nearrow\infty}{\rightarrow}\ 0.
$$
In conclusion $u^k\underset{k\nearrow\infty}{\rightarrow}u$ in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ and $u^k\underset{k\nearrow\infty}{\rightarrow}v$ in $\mathfrak{V}_0$. But Lemma \ref{L.5.2} implies that $\mathfrak{V}_0$ is continuously embedded in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)$ and we conclude that $v=u$.
\end{proof}
\begin{lemma}\label{L.5.6.a}
Let us consider the map $\boldsymbol{\psi}$ defined in \eqref{1.1}. For any vector $v\in L^2(\mathcal X)$ the series
$$
w_v\ :=\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)
$$
converges in $\mathscr{S}^\prime(\mathcal X^2)$ and satisfies the identity $\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_\alpha\big)w_v=w_v$, $\forall\alpha\in\Gamma$.
\end{lemma}
\begin{proof}
Let us denote by $w_\gamma$ the general term of the series defining $w_v$; then for any $\varphi\in\mathscr{S}(\mathcal X^2)$ we have that
$$
\langle w_\gamma,\varphi\rangle\ =\ \left\langle\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big),\varphi\right\rangle\ =\ \left\langle v\otimes\delta_{-\gamma},\boldsymbol{\psi}^*(\varphi)\right\rangle\ =\ \int_\mathcal X v(x)\varphi(x,x+\gamma)\,dx.
$$
But using the definition of test functions it follows that for any $N\in\mathbb{N}$ there exists a defining semi-norm $\nu_N(\varphi)$ on $\mathscr{S}(\mathcal X^2)$ such that
$$
|\varphi(x,x+\gamma)|\ \leq\ \nu_N(\varphi)<x>^{-N}<\gamma>^{-N},\qquad\forall x\in\mathcal X,\ \forall\gamma\in\Gamma.
$$
We deduce that for any $N\in\mathbb{N}$ there exists a defining semi-norm $\mu_N(\varphi)$ on $\mathscr{S}(\mathcal X^2)$ such that:
$$
|\langle w_\gamma,\varphi\rangle|\ \leq\ \mu_N(\varphi)\|v\|_{L^2(\mathcal X)}<\gamma>^{-N},\qquad\forall\varphi\in\mathscr{S}(\mathcal X^2).
$$
It follows that the series $\underset{\gamma\in\Gamma}{\sum}w_\gamma$ converges in $\mathscr{S}^\prime(\mathcal X^2)$ and its sum $w_v\in\mathscr{S}^\prime(\mathcal X^2)$ satisfies the inequality
\begin{equation}\label{5.10}
|\langle w_v,\varphi\rangle|\ \leq\ \rho(\varphi)\|v\|_{L^2(\mathcal X)},\qquad\forall\varphi\in\mathscr{S}(\mathcal X^2),
\end{equation}
for some defining semi-norm $\rho(\varphi)$ on $\mathscr{S}(\mathcal X^2)$.
Finally let us notice that for any $\alpha\in\Gamma$ we can write that
$$
\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_\alpha\big)w_\gamma\ =\ \boldsymbol{\psi}^*\big(v\otimes\delta_{-(\gamma+\alpha)}\big)\ =\ w_{\gamma+\alpha}
$$
and thus $\big({\rm id\hspace*{-1.5pt}l}\otimes\tau_\alpha\big)w_v=w_v$.
\end{proof}
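Summing up the first computation in the proof above, the distribution $w_v$ acts explicitly on test functions by
$$
\langle w_v,\varphi\rangle\ =\ \underset{\gamma\in\Gamma}{\sum}\int_\mathcal X v(x)\,\varphi(x,x+\gamma)\,dx,\qquad\forall\varphi\in\mathscr{S}(\mathcal X^2),
$$
a formula worth keeping in mind when reading the next definition.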
\begin{definition}\label{D.5.5}
Let us consider the map $\boldsymbol{\psi}$ defined in \eqref{1.1}. We define the following complex space:
$$
\mathfrak{L}_0\ :=\ \left\{\,w\in\mathscr{S}^\prime(\mathcal X^2)\,\mid\,\exists v\in L^2(\mathcal X)\ \text{such that}\ w=\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)\,\right\}
$$
endowed with the quadratic norm $\|w\|_{\mathfrak{L}_0}:=\|v\|_{L^2(\mathcal X)}$.
\end{definition}
\begin{lemma}\label{L.5.6.b}
The complex space $\mathfrak{L}_0$ is a Hilbert space and is embedded continuously into $\mathscr{S}^\prime(\mathcal X^2)$.
\end{lemma}
\begin{proof}
The space $\mathfrak{L}_0$ is evidently a Hilbert space, canonically unitarily equivalent with $L^2(\mathcal X)$, and the continuity of the embedding into $\mathscr{S}^\prime(\mathcal X^2)$ follows easily from \eqref{5.10}.
\end{proof}
\begin{lemma}\label{L.5.7}
For any $s>d$ and any $\epsilon\in[-\epsilon_0,\epsilon_0]$ we have a continuous embedding $\mathfrak{L}_0\hookrightarrow\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
Let us fix some $s>d$ and some vector $v\in L^2(\mathcal X)$ and define
$$
u\ :=\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)\in\mathfrak{L}_0,
$$
$$
Q^\prime_{-s,\epsilon}\ :=\ Q_{-s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l},\qquad g_\gamma\ :=\ \big(Q^\prime_{-s,\epsilon}\circ\boldsymbol{\psi}^*\big)(v\otimes\delta_{-\gamma}).
$$
A straightforward computation in $\mathscr{S}^\prime(\mathcal X^2)$ shows that we have the equality:
$$
g_\gamma(x,y)\ =\ v(y-\gamma)\int_{\mathcal X^*} e^{i<\eta,x-y+\gamma>}\omega_{A_\epsilon}(x,y-\gamma)\,q_{-s,\epsilon}\big(\frac{x+y-\gamma}{2},\eta\big)\,\;\;\bar{}\!\!\!d\eta.
$$
By estimating the integral in the right hand side above we obtain that for any sufficiently large $N\in\mathbb{N}$ there exists $C_N>0$ such that
$$
\left|g_\gamma(x,y)\right|\ \leq\ C_N|v(y-\gamma)|<x-y+\gamma>^{-N},\qquad\forall(x,y)\in\mathcal X^2,\ \forall\gamma\in\Gamma.
$$
We conclude that for some suitable constants $C^\prime_N,C^{\prime\prime}_N,\ldots$ we get
$$
\left[\underset{\gamma\in\Gamma}{\sum}|g_\gamma(x,y)|\right]^2\ \leq\ C^2_N\left(\underset{\gamma\in\Gamma}{\sum}|v(y-\gamma)|^2\,<x-y+\gamma>^{-N}\right)\left(\underset{\gamma\in\Gamma}{\sum}<x-y+\gamma>^{-N}\right)\ \leq
$$
$$
\leq\ C^\prime_N\underset{\gamma\in\Gamma}{\sum}|v(y-\gamma)|^2\,<x-y+\gamma>^{-N},
$$
and the integral of the last expression over $\mathcal X\times E$ is bounded by $C^{\prime\prime}_N\|v\|^2_{L^2(\mathcal X)}$.
Denoting $g:=\underset{\gamma\in\Gamma}{\sum}g_\gamma=Q^\prime_{-s,\epsilon}u$, we notice that $({\rm id\hspace*{-1.5pt}l}\otimes\tau_{\alpha})g=g$ for any $\alpha\in\Gamma$ (because by Lemma \ref{L.5.6.a} the vector $u$ has this property). Thus
$$
g\in L^2(\mathcal X)\otimes L^2(\mathbb{T}),\qquad\|g\|_{L^2(\mathcal X)\otimes L^2(\mathbb{T})}\ \leq\ \sqrt{C^{\prime\prime}_N}\|v\|_{L^2(\mathcal X)}.
$$
It follows that
$$
u\ =\ Q^\prime_{s,\epsilon}g\,\in\,\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T}),\quad\|u\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})}\ \leq\ C^{\prime\prime\prime}_N\|v\|_{L^2(\mathcal X)},\quad\forall\epsilon\in[-\epsilon_0,\epsilon_0],\ \forall v\in L^2(\mathcal X).
$$
\end{proof}
We shall obtain a characterization of the space $\mathfrak{L}_0$ similar to that of Proposition \ref{P.5.4}. In order to do that we need some technical results contained in the next Lemma. We shall use the notation:
$$
\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})\ :=\ \underset{s\in\mathbb{R}}{\cap}\left(\mathcal{H}^s_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})\right)
$$
with the natural projective limit topology.
\begin{lemma}\label{L.5.8}
Suppose given some $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ and for any $\gamma^*\in\Gamma^*$ let us denote by $u_{\gamma^*}:=\Upsilon_{\gamma^*}u_0$. For any $s>d$ there exists $C_s>0$ such that the series $\underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}$ converges in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ and the sum denoted by $v\in\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ satisfies the estimation:
\begin{equation}\label{5.11}
\|v\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})}\ \leq\ C_s\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})},\qquad\forall u_0\in\mathcal{H}^{\infty}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T}),\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
\end{lemma}
\begin{proof}
From \eqref{A.4} it follows that on $\mathscr{S}(\mathcal X)$ we have the equality
$$
Q_{-s,\epsilon}\sigma_{\gamma^*}\ =\ \sigma_{\gamma^*}\mathfrak{Op}^{A_\epsilon}\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma^*})\mathfrak{q}_{-s,\epsilon}\big)
$$
so that finally
\begin{equation}\label{5.12}
\big(Q_{-s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}\big)u_{\gamma^*}\ =\ \Upsilon_{\gamma^*}\big[\mathfrak{Op}^{A_\epsilon}\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma^*})\mathfrak{q}_{-s,\epsilon}\big)\otimes{\rm id\hspace*{-1.5pt}l}\big]u_0.
\end{equation}
Taking into account that the family $\left\{<\gamma^*>^s({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma^*})\mathfrak{q}_{-s,\epsilon}\right\}_{(\epsilon,\gamma^*)\in[-\epsilon_0,\epsilon_0]\times\Gamma^*}$ is a bounded subset of $S^s(\Xi)$, it follows that there exists a constant $C>0$ such that for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ one has the estimation:
\begin{equation}\label{5.13}
\left\|\big(Q_{-s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}\big)u_{\gamma^*}\right\|_{L^2(\mathcal X)\otimes L^2(\mathbb{T})}\ \leq\ C<\gamma^*>^{-s}\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})},\qquad\forall u_0\in\mathcal{H}^{\infty}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T}).
\end{equation}
It follows that the series $\underset{\gamma^*\in\Gamma^*}{\sum}\big(Q_{-s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}\big)u_{\gamma^*}$ converges in $L^2(\mathcal X)\otimes L^2(\mathbb{T})$ uniformly for $\epsilon\in[-\epsilon_0,\epsilon_0]$. The stated inequality follows now by summing up the estimation \eqref{5.13} over all $\gamma^*\in\Gamma^*$.
\end{proof}
\begin{proposition}\label{P.5.9}
For any $u\in\mathfrak{L}_0$ there exists a vector $u_0\in\mathcal{H}^{\infty}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ such that
\begin{equation}\label{5.14}
u\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\Upsilon_{\gamma^*}u_0,\ \text{in}\ \mathscr{S}^\prime(\mathcal X^2).
\end{equation}
Moreover, the application $\mathfrak{L}_0\ni u\mapsto u_0\in\mathcal{H}^{\infty}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{proposition}
\begin{proof}
We recall the notation $u_{\gamma^*}:=\Upsilon_{\gamma^*}u_0$ and, as in the proof of point (a) of Proposition \ref{P.5.4}, we fix some real function $\chi\in C^\infty_0(\mathcal X^*)$ satisfying the following identity on $\mathcal X^*$:
$$
\underset{\gamma^*\in\Gamma^*}{\sum}\tau_{\gamma^*}\chi\ =\ 1.
$$
For any $u\in\mathfrak{L}_0$ let us denote by $u_0:=\big(\mathfrak{Op}^{A_\epsilon}(\chi)\otimes{\rm id\hspace*{-1.5pt}l}\big)u$. We notice that $\chi\in S^{-\infty}_1(\Xi)$ and using Lemma \ref{L.5.7} we deduce that $u_0\in\mathcal{H}^{\infty}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ and that the continuity property stated at the end of the Proposition is clearly true. We still have to verify the equality \eqref{5.14}.
1. First let us notice that following Lemma \ref{L.5.8}, the series $\underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}$ converges in $\mathscr{S}^\prime(\mathcal X^2)$.
2. An argument similar to that in the proof of Proposition \ref{P.5.4} a) proves that on $\mathscr{S}(\mathcal X^2)$ we have the equality
\begin{equation}\label{5.15}
\underset{\gamma^*\in\Gamma^*}{\sum}\mathfrak{Op}^{A_\epsilon}\big(\tau_{-\gamma^*}\chi\big)\otimes{\rm id\hspace*{-1.5pt}l}\ =\ {\rm id\hspace*{-1.5pt}l}.
\end{equation}
3. From \eqref{A.4} we have that $\mathfrak{Op}^{A_\epsilon}\big(\tau_{-\gamma^*}\chi\big)=\sigma_{-\gamma^*}\mathfrak{Op}^{A_\epsilon}(\chi)\sigma_{\gamma^*}$.
4. For any $u\in\mathfrak{L}_0$ there exists $v\in L^2(\mathcal X)$ such that $u=\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)$; thus for any $\gamma^*\in\Gamma^*$ we have that:
$$
\Upsilon_{\gamma^*}u\ =\ \underset{\gamma\in\Gamma}{\sum}\Upsilon_{\gamma^*}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)\ =\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(({\rm id\hspace*{-1.5pt}l}\otimes\sigma_{\gamma^*})(v\otimes\delta_{-\gamma})\big)\ =\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)\ =\ u.
$$
Using the last two remarks above we deduce that for any $u\in\mathfrak{L}_0$ we have the equalities:
$$
\big[\mathfrak{Op}^{A_\epsilon}\big(\tau_{-\gamma^*}\chi\big)\otimes{\rm id\hspace*{-1.5pt}l}\big]u\ =\ \big[\big(\sigma_{-\gamma^*}\mathfrak{Op}^{A_\epsilon}(\chi)\sigma_{\gamma^*}\big)\otimes{\rm id\hspace*{-1.5pt}l}\big]\big(\sigma_{-\gamma^*}\otimes\sigma_{\gamma^*}\big)u\ =
$$
$$
=\ \Upsilon_{-\gamma^*}\big(\mathfrak{Op}^{A_\epsilon}(\chi)\otimes{\rm id\hspace*{-1.5pt}l}\big)u\ =\ \Upsilon_{-\gamma^*}u_0\ =\ u_{-\gamma^*}.
$$
We apply now equality \eqref{5.15} to the vector $u\in\mathfrak{L}_0$ in order to obtain that:
$$
u\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}u_{-\gamma^*}\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*},
$$
as tempered distributions.
\end{proof}
In order to prove the converse of Proposition \ref{P.5.9} we need a technical Lemma similar to Lemma \ref{L.5.3}.
\begin{lemma}\label{L.5.10}
For any $s>d$ there exists $C_s>0$ such that the following estimation holds:
\begin{equation}\label{5.16}
\sqrt{\int_\mathcal X|u(x,x)|^2\,dx}\ \leq\ C_s\|u\|_{\mathcal{H}^s_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})},\qquad\forall u\in\mathscr{S}(\mathcal X\times\mathbb{T}),\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
\end{lemma}
\begin{proof}
Let us fix some $u\in\mathscr{S}(\mathcal X\times\mathbb{T})$, $\epsilon\in[-\epsilon_0,\epsilon_0]$ and let us define $v:=\big(Q_{s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}\big)u\in\mathscr{S}(\mathcal X\times\mathbb{T})$. It follows that $u=\big(Q_{-s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}\big)v$ and we deduce that for any $N\in\mathbb{N}$ (that we shall choose sufficiently large) we have the identity:
$$
u(x,y)\ =\ \int_\Xi<x-z>^{-2N}e^{i<\zeta,x-z>}\omega_{A_\epsilon}(x,z)\Big[\big(({\rm id\hspace*{-1.5pt}l}-\Delta_{\zeta})^N\mathfrak{q}_{-s,\epsilon}\big)\big(\frac{x+z}{2},\zeta\big)\Big]v(z,y)\,dz\,\;\;\bar{}\!\!\!d\zeta,\qquad\forall(x,y)\in\mathcal X^2.
$$
We deduce that there exist strictly positive constants $C_N,C^\prime_N,\ldots$ such that the following estimations hold:
$$
|u(x,y)|\ \leq\ C_N\int_\mathcal X<x-z>^{-2N}|v(z,y)|\,dz,\qquad|u(x,y)|^2\ \leq\ C_N^\prime\int_\mathcal X<x-z>^{-2N}|v(z,y)|^2dz.
$$
In conclusion we have that:
$$
\int_\mathcal X|u(x,x)|^2dx\ =\ \underset{\gamma\in\Gamma}{\sum}\int_{\tau_{-\gamma}E}|u(x,x)|^2dx\ =\ \underset{\gamma\in\Gamma}{\sum}\int_{E}|u(x+\gamma,x+\gamma)|^2dx\ =\ \underset{\gamma\in\Gamma}{\sum}\int_{E}|u(x+\gamma,x)|^2dx\ \leq
$$
$$
\leq\ C_N^\prime\underset{\gamma\in\Gamma}{\sum}\int_E\int_\mathcal X<x-z+\gamma>^{-2N}|v(z,x)|^2dz\,dx\ \leq\ C_N^{\prime\prime}\underset{\gamma\in\Gamma}{\sum}\int_E\int_\mathcal X<z-\gamma>^{-2N}|v(z,x)|^2dz\,dx\ \leq
$$
$$
\leq\ C_N^{\prime\prime\prime}\int_E\int_\mathcal X|v(z,x)|^2dz\,dx\ =\ C_N^{\prime\prime\prime}\|v\|^2_{L^2(\mathcal X\times\mathbb{T})}\ \leq\ C_s^2\|u\|_{\mathcal{H}^s_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})}^2.
$$
\end{proof}
We come now to the converse of Proposition \ref{P.5.9}.
\begin{proposition}\label{P.5.11}
Suppose given $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ and for any $\gamma^*\in\Gamma^*$ let us consider $u_{\gamma^*}:=\Upsilon_{\gamma^*}u_0$. Then the series $\underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}$ converges in $\mathscr{S}^\prime(\mathcal X^2)$ to an element $u\in\mathfrak{L}_0$. Moreover, the application $\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})\ni u_0\mapsto u\in\mathfrak{L}_0$ is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{proposition}
\begin{proof}
For any $s>d$ and any $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$, Lemma \ref{L.5.8} implies that the series $\underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}$ converges in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ to an element $u\in\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ and there exists $C_s>0$ such that the following estimation holds:
\begin{equation}\label{5.17}
\|u\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})}\ \leq\ C_s\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})},\qquad\forall u_0\in\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T}),\ \forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
We still have to prove that $u\in\mathfrak{L}_0$ and that the continuity property stated above is true. As in the proof of Proposition \ref{P.5.4} b) we make use of the Poisson formula \eqref{5.6}. Once we notice that $\boldsymbol{\psi}^*\big({\rm id\hspace*{-1.5pt}l}\otimes\sigma_{\gamma^*}\big)=\Upsilon_{\gamma^*}$, we conclude that for any $u_0\in\mathscr{S}(\mathcal X^2)$ one has the identity:
\begin{equation}\label{5.18}
\underset{\gamma^*}{\sum}u_{\gamma^*}\ =\ \frac{(2\pi)^d}{|E^*|}\left[\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big({\rm id\hspace*{-1.5pt}l}\otimes\delta_{-\gamma}\big)\right]u_0.
\end{equation}
But we notice that $\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big({\rm id\hspace*{-1.5pt}l}\otimes\delta_{-\gamma}\big)$ belongs to $\mathscr{S}^\prime(\mathcal X\times\mathbb{T})$ and we deduce that the identity \eqref{5.18} also holds for $u_0\in\mathscr{S}(\mathcal X\times\mathbb{T})$. In this case, $\boldsymbol{\psi}^*\big({\rm id\hspace*{-1.5pt}l}\otimes\delta_{-\gamma}\big)\cdot u_0$ also belongs to $\mathscr{S}^\prime(\mathcal X^2)$ and for any $\varphi\in\mathscr{S}(\mathcal X^2)$ we can write that:
$$
\left\langle\boldsymbol{\psi}^*\big({\rm id\hspace*{-1.5pt}l}\otimes\delta_{-\gamma}\big)\cdot u_0\,,\,\varphi\right\rangle\ =\ \left\langle\boldsymbol{\psi}^*\big({\rm id\hspace*{-1.5pt}l}\otimes\delta_{-\gamma}\big)\,,\,u_0\varphi\right\rangle\ =\ \left\langle{\rm id\hspace*{-1.5pt}l}\otimes\delta_{-\gamma}\,,\,\boldsymbol{\psi}^*\big(u_0\varphi\big)\right\rangle\ =\ \int_\mathcal X\varphi(x,x+\gamma)u_0(x,x)\,dx.
$$
Let us denote by $v_0(x):=u_0(x,x)$ so that we obtain a test function $v_0\in\mathscr{S}(\mathcal X)$ and an equality:
\begin{equation}\label{5.19}
\boldsymbol{\psi}^*\big({\rm id\hspace*{-1.5pt}l}\otimes\delta_{-\gamma}\big)\cdot u_0\ =\ \boldsymbol{\psi}^*\big(v_0\otimes\delta_{-\gamma}\big).
\end{equation}
Let us further denote by $v:=\left((2\pi)^d/|E^*|\right)v_0\in\mathscr{S}(\mathcal X)\subset L^2(\mathcal X)$. If we insert \eqref{5.19} into \eqref{5.18} we obtain:
\begin{equation}\label{5.20}
u\ :=\ \underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}\ =\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)\,\in\,\mathfrak{L}_0.
\end{equation}
Let us verify now the continuity property. Suppose given $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$, some $s>d$ and a sequence $\{u_{0,k}\}_{k\in\mathbb{N}^*}\subset\mathscr{S}(\mathcal X\times\mathbb{T})$ such that $u_0=\underset{k\nearrow\infty}{\lim}u_{0,k}$ in $\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$. Let us also introduce the notations:
$$
v_k(x):=\frac{(2\pi)^d}{|E^*|}u_{0,k}(x,x),\ \forall x\in\mathcal X;\qquad u_k:=\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v_k\otimes\delta_{-\gamma}\big)\,\in\,\mathfrak{L}_0.
$$
From Lemma \ref{L.5.10} we deduce that there exists a strictly positive constant $C_s$ such that for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ and for any pair of indices $(k,l)\in[\mathbb{N}^*]^2$ the following estimations hold:
\begin{equation}\label{5.21}
\|u_k\,-\,u_l\|_{\mathfrak{L}_0}\ :=\ \|v_k\,-\,v_l\|_{L^2(\mathcal X)}\ \leq\ C_s\|u_{0,k}\,-\,u_{0,l}\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})},
\end{equation}
\begin{equation}\label{5.22}
\|u_k\|_{\mathfrak{L}_0}\ \leq\ C_s\|u_{0,k}\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})}.
\end{equation}
From \eqref{5.21} we deduce that there exists $v\in L^2(\mathcal X)$ limit of the sequence $\{v_k\}_{k\in\mathbb{N}^*}$ in $L^2(\mathcal X)$ and moreover, using also \eqref{5.22}, that it satisfies the estimation:
\begin{equation}\label{5.23}
\|v\|_{L^2(\mathcal X)}\ \leq\ C_s\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})},\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
Let us denote by $\widetilde{u}:=\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)\,\in\,\mathfrak{L}_0$. From \eqref{5.23} we deduce that
\begin{equation}\label{5.24}
\|\widetilde{u}\|_{\mathfrak{L}_0}\ \leq\ C_s\|u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})},\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
In order to end the proof we have to show that $\widetilde{u}=u:=\underset{\gamma^*\in\Gamma^*}{\sum}u_{\gamma^*}$ in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$. From \eqref{5.20} we know that $u_k:=\underset{\gamma^*\in\Gamma^*}{\sum}\Upsilon_{\gamma^*}u_{0,k}$. If we use now the inequality \eqref{5.17} with $u_0$ replaced by $u_{0,k}\,-\,u_0$, we obtain the estimation:
$$
\|u_k\,-\,u\|_{\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})}\ \leq\ C_s\|u_{0,k}\,-\,u_0\|_{\mathcal{H}^{s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})}.
$$
From this we deduce that $u=\underset{k\nearrow\infty}{\lim}u_k$ in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$. But from \eqref{5.21} we deduce that $\widetilde{u}=\underset{k\nearrow\infty}{\lim}u_k$ in $\mathfrak{L}_0$ and thus, due to Lemma \ref{L.5.7} also in $\mathcal{H}^{-s}_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$. In conclusion $\widetilde{u}=u$ and the proof is finished.
\end{proof}
Proceeding as in Lemma \ref{L.5.6.a} we can show that the following definition is meaningful: the series appearing in the definition of the space $\mathfrak{L}_s(\epsilon)$ converges as a tempered distribution, and the space endowed with the associated norm is a Hilbert space canonically isomorphic with $\mathcal{H}^s_{A_\epsilon}(\mathcal X)$ and continuously embedded into $\mathscr{S}^\prime(\mathcal X^2)$.
\begin{definition}\label{D.5.12}
For any $s\in\mathbb{R}$ and any $\epsilon\in[-\epsilon_0,\epsilon_0]$, we define the following subspace of tempered distributions:
$$
\mathfrak{L}_s(\epsilon)\ :=\ \left\{\,w\in\mathscr{S}^\prime(\mathcal X^2)\,\mid\,\exists v\in\mathcal{H}^s_{A_\epsilon}(\mathcal X),\ w\equiv w_v=\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)\,\right\},
$$
endowed with the quadratic norm:
\begin{equation}\label{5.25}
\|w_v\|_{\mathfrak{L}_s(\epsilon)}\ :=\ \|v\|_{\mathcal{H}^s_{A_\epsilon}(\mathcal X)}.
\end{equation}
\end{definition}
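Let us note that, since the magnetic Sobolev space of order $0$ coincides with $L^2(\mathcal X)$ (at least up to an equivalent norm), the case $s=0$ of the above definition recovers, up to norm equivalence, the space $\mathfrak{L}_0$ of Definition \ref{D.5.5}, which is consistent with the notation.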
\begin{remark}\label{R.5.12}
For any $w\in\mathfrak{L}_s(\epsilon)$ we have the identity: $({\rm id\hspace*{-1.5pt}l}\otimes\tau_\alpha)w=w, \forall\alpha\in\Gamma$.
\end{remark}
\begin{lemma}\label{L.5.13}
We recall the notation $\widetilde{Q}_{s,\epsilon}$ introduced in Definition \ref{D.1.4} b).
\begin{enumerate}
\item We have the equality:
\begin{equation}\label{5.26}
\mathfrak{L}_s(\epsilon) = \left\{\,w\in\mathscr{S}^\prime(\mathcal X^2)\,\mid\,\widetilde{Q}_{s,\epsilon}w\,\in\,\mathfrak{L}_0\,\right\}.
\end{equation}
\item On $\mathfrak{L}_s(\epsilon)$ the defining norm \eqref{5.25} is equivalent with the following norm:
\begin{equation}\label{5.27}
\|w\|^\prime_{\mathfrak{L}_s(\epsilon)}\ :=\ \left\|\widetilde{Q}_{s,\epsilon}w\right\|_{\mathfrak{L}_0}.
\end{equation}
\item If $s\geq0$, then $\mathfrak{L}_s(\epsilon)$ is continuously embedded into $\mathfrak{L}_0$, uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. For any $w\in\mathfrak{L}_s(\epsilon)$, there exists a vector $v\in\mathcal{H}^s_{A_\epsilon}(\mathcal X)$ such that $w\equiv w_v=\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)$. But by definition we have that $Q_{s,\epsilon}v\in L^2(\mathcal X)$, so that we deduce that
$$
\widetilde{Q}_{s,\epsilon}w_v\ =\ \boldsymbol{\psi}^*\big(Q_{s,\epsilon}\otimes{\rm id\hspace*{-1.5pt}l}\big)\boldsymbol{\psi}^*w_v\ =\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big((Q_{s,\epsilon}v)\otimes\delta_{-\gamma}\big)\,\in\,\mathfrak{L}_0.
$$
Conversely, let $w\in\mathscr{S}^\prime(\mathcal X^2)$ be such that $\widetilde{Q}_{s,\epsilon}w$ belongs to $\mathfrak{L}_0$. By the definition of this last space it follows that there exists $f\in L^2(\mathcal X)$ such that
$$
\widetilde{Q}_{s,\epsilon}w\ =\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(f\otimes\delta_{-\gamma}\big).
$$
It follows that
$$
w\ =\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big((Q_{-s,\epsilon}f)\otimes\delta_{-\gamma}\big).
$$
But then we have that $v=Q_{-s,\epsilon}f\in\mathcal{H}^s_{A_\epsilon}(\mathcal X)$ and in conclusion $w$ belongs to $\mathfrak{L}_s(\epsilon)$.
2. This result follows from the Closed Graph Theorem.
3. This result follows from the continuous embedding of $\mathcal{H}^s_{A_\epsilon}(\mathcal X)$ into $L^2(\mathcal X)$ for any $s\geq0$ uniformly for $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{proof}
\begin{lemma}\label{L.5.14}
For any $m\in\mathbb{R}_+$ and for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ we have the following topological embedding:
\begin{equation}\label{5.29}
\mathfrak{L}_m(\epsilon)\ \hookrightarrow\ \mathscr{S}^\prime\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big),
\end{equation}
uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
From Lemmas \ref{L.5.13} (3) and \ref{L.5.7} it follows that we have the following topological embedding: $\mathfrak{L}_m(\epsilon)\hookrightarrow\mathscr{S}^\prime\big(\mathcal X;L^2(\mathbb{T})\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. From here on we proceed as in the proof of the second inclusion in Lemma \ref{L.2.16}. Extending by continuity the sesquilinear form \eqref{2.44} to the canonical sesquilinear form $(.,.)_m$ on $\mathscr{S}^\prime\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)\times\mathscr{S}\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)$ we can write it in the following way
\begin{equation}\label{5.30}
(u,v)_m\ =\ \big(u,({\rm id\hspace*{-1.5pt}l}\otimes<D_\Gamma>^{2m})v\big)_0,\qquad\forall(u,v)\in\mathscr{S}^\prime\big(\mathcal X;\mathcal{H}^m(\mathbb{T})\big)\times\mathscr{S}(\mathcal X\times\mathbb{T})
\end{equation}
and may extend it to $\mathscr{S}^\prime\big(\mathcal X;L^2(\mathbb{T})\big)\times\mathscr{S}(\mathcal X\times\mathbb{T})$.
Let us choose now some $u\in\mathfrak{L}_m(\epsilon)$ and let us denote by $f:=\widetilde{Q}_{m,\epsilon}u\in\mathfrak{L}_0$. Because $u=\widetilde{Q}_{-m,\epsilon}\widetilde{Q}_{m,\epsilon}u=\widetilde{Q}_{-m,\epsilon}f$, for any $v\in\mathscr{S}(\mathcal X\times\mathbb{T})$ we shall have the equality
\begin{equation}\label{5.31}
(u,v)_m\ =\ \big(f,\widetilde{Q}_{-m,\epsilon}({\rm id\hspace*{-1.5pt}l}\otimes<D_\Gamma>^{2m})v\big)_0.
\end{equation}
From the fact that $\mathfrak{L}_0$ is continuously embedded into $\mathscr{S}^\prime\big(\mathcal X;L^2(\mathbb{T})\big)$, we deduce the existence of a defining semi-norm $|.|_l$ on $\mathscr{S}\big(\mathcal X;L^2(\mathbb{T})\big)$ such that
\begin{equation}\label{5.32}
\left|(u,v)_m\right|\ \leq\ \|f\|_{\mathfrak{L}_0}\left|\widetilde{Q}_{-m,\epsilon}({\rm id\hspace*{-1.5pt}l}\otimes<D_\Gamma>^{2m})v\right|_l,\qquad\forall v\in\mathscr{S}(\mathcal X\times\mathbb{T}).
\end{equation}
Noticing that $\|f\|_{\mathfrak{L}_0}=\|u\|_{\mathfrak{L}_m(\epsilon)}$ and applying Lemma \ref{L.2.15}, we deduce the existence of a defining semi-norm $|.|_{-m,k}$ on $\mathscr{S}\big(\mathcal X;\mathcal{H}^{-m}(\mathbb{T})\big)$ such that
\begin{equation}\label{5.33}
\left|(u,v)_m\right|\ \leq\ C\|u\|_{\mathfrak{L}_m(\epsilon)}\left|({\rm id\hspace*{-1.5pt}l}\otimes<D_\Gamma>^{2m})v\right|_{-m,k},\qquad\forall v\in\mathscr{S}(\mathcal X\times\mathbb{T}).
\end{equation}
But $\left|({\rm id\hspace*{-1.5pt}l}\otimes<D_\Gamma>^{2m})v\right|_{-m,k}=\|v\|_{m,k}$ so that the statement of the Lemma follows from \eqref{5.33}.
\end{proof}
\section{The proof of Theorem \ref{T.0.1}}
\setcounter{equation}{0}
\setcounter{theorem}{0}
The main technical results discussed in this section concern some continuity properties of the operators $\mathcal{P}_{\epsilon,\lambda}$ and $\mathcal{E}_{\epsilon,\lambda}$ defined in Section \ref{S.4}, extended to some spaces of tempered distributions of the type considered in Section \ref{S.5}. We shall suppose that the Hypotheses H.1 - H.6 are satisfied and we shall use the notations introduced in Sections \ref{S.0} and \ref{S.1}. Let us just recall that:
\begin{itemize}
\item The operator $P_{\epsilon}:=\mathfrak{Op}^{A_\epsilon}(\overset{\circ}{p}_\epsilon)$ with $\overset{\circ}{p}_\epsilon(y,\eta):=p(y,y,\eta)$, defines a self-adjoint operator in $L^2(\mathcal X)$ on the domain $\mathcal{H}^m_{A_\epsilon}(\mathcal X)$.
\item The operator $\widetilde{P}_\epsilon$ defines a self-adjoint operator $\widetilde{P}_\epsilon^\prime$ in the Hilbert space $L^2(\mathcal X^2)$ with domain $\widetilde{\mathcal{H}}^m_{A_\epsilon}(\mathcal X^2)$ and another self-adjoint operator $\widetilde{P}_\epsilon^{\prime\prime}$ in the Hilbert space $L^2(\mathcal X\times\mathbb{T})$ with the domain $\mathcal{K}^m_\epsilon(\mathcal X^2)$.
\end{itemize}
\begin{lemma}\label{L.6.1}
For any $\epsilon\in[-\epsilon_0,\epsilon_0]$ we have that:
\begin{enumerate}
\item $\widetilde{P}_\epsilon\in\mathbb{B}\big(\mathfrak{L}_m(\epsilon);\mathfrak{L}_0\big)$ uniformly in $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\item The operator $\widetilde{P}_\epsilon$ considered as an unbounded operator in the Hilbert space $\mathfrak{L}_0$ defines a self-adjoint operator $\widetilde{P}_\epsilon^{\prime\prime\prime}$ having domain $\mathfrak{L}_m(\epsilon)$ and this self-adjoint operator is unitarily equivalent with $P_\epsilon$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. Let us choose two test functions $v$ and $\varphi$ from $\mathscr{S}(\mathcal X)$. Using formula \eqref{1.4} we obtain that
$$
\big[\big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon
\boldsymbol{\psi}^*\big)(v\otimes\varphi)\big](x,y)\ =\ \left[\mathfrak{Op}^{A_\epsilon}\big([({\rm id\hspace*{-1.5pt}l}\otimes\tau_y
\otimes{\rm id\hspace*{-1.5pt}l})p_\epsilon]^\circ\big)v\right](x)\varphi(y),\qquad\forall(x,y)\in\mathcal X^2.
$$
In this equality we insert $\varphi(y)\equiv\varphi_t(y):=t^{-d}\theta\big(\frac{y+\gamma}{t}\big)$ for some $(t,\gamma)\in\mathbb{R}_+^*\times\Gamma$ and for any $y\in\mathcal X$ (we use the letter $t$ for the scaling parameter in order to avoid any confusion with the spectral parameter $\lambda$), where we denoted by $\theta$ a test function of class $C^\infty_0(\mathcal X)$ that satisfies the condition $\int_\mathcal X\theta(y)dy=1$. With this choice we consider the limit for $t\searrow0$ in the sense of tempered distributions on $\mathcal X^2$. Taking into account that for $t\searrow0$ we have that $\varphi_t$ converges in $\mathscr{S}^\prime(\mathcal X)$ to $\delta_{-\gamma}$ and using Hypothesis H.6, we conclude that
$$
\big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon
\boldsymbol{\psi}^*\big)(v\otimes\delta_{-\gamma})\ =\ \big(P_\epsilon v\big)\otimes\delta_{-\gamma},\qquad\forall v\in\mathscr{S}(\mathcal X),\ \forall\gamma\in\Gamma.
$$
Extending by continuity we can write the equality
\begin{equation}\label{6.1}
\big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon
\boldsymbol{\psi}^*\big)(v\otimes\delta_{-\gamma})\ =\ \big(P_\epsilon v\big)\otimes\delta_{-\gamma},\qquad\forall v\in\mathscr{S}^\prime(\mathcal X),\ \forall\gamma\in\Gamma.
\end{equation}
We conclude that for any $u\in\mathfrak{L}_m(\epsilon)$ of the form
$$
u\ \equiv\ u_v\ :=\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)
$$
for some $v\in\mathcal{H}^m_{A_\epsilon}(\mathcal X)$ we can write:
$$
\widetilde{P}_\epsilon u\ =\ \boldsymbol{\psi}^*\left(\underset{\gamma\in\Gamma}{\sum}\big(\boldsymbol{\psi}^*\widetilde{P}_\epsilon
\boldsymbol{\psi}^*\big)(v\otimes\delta_{-\gamma})\right)\ =\ \underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big((P_\epsilon v)\otimes\delta_{-\gamma}\big).
$$
The first statement of the Lemma follows now from the fact that $P_\epsilon\in\mathbb{B}\big(\mathcal{H}^m_{A_\epsilon}(\mathcal X);L^2(\mathcal X)\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
2. Let us notice that the linear operator defined by
$$
U^s_\epsilon:\mathcal{H}^s_{A_\epsilon}(\mathcal X)\rightarrow\mathfrak{L}_s(\epsilon),\qquad U^s_\epsilon v:=\underset{\gamma\in\Gamma}{\sum}\boldsymbol{\psi}^*\big(v\otimes\delta_{-\gamma}\big)
$$
is in fact a unitary operator for any pair $(s,\epsilon)\in\mathbb{R}\times[-\epsilon_0,\epsilon_0]$. Following the arguments from the proof of the first point of the Lemma we have the equality $\widetilde{P}_\epsilon U^s_\epsilon=U^s_\epsilon P_\epsilon$ on $\mathcal{H}^m_{A_\epsilon}(\mathcal X)$ (the domain of self-adjointness of $P_\epsilon$). Since $U^0_\epsilon$ is unitary from $L^2(\mathcal X)$ onto $\mathfrak{L}_0$ and maps $\mathcal{H}^m_{A_\epsilon}(\mathcal X)$ onto $\mathfrak{L}_m(\epsilon)$, the operator induced by $\widetilde{P}_\epsilon$ in $\mathfrak{L}_0$ on the domain $\mathfrak{L}_m(\epsilon)$ is unitarily equivalent with $P_\epsilon$ and thus self-adjoint; this is the operator $\widetilde{P}_\epsilon^{\prime\prime\prime}$ of the statement.
\end{proof}
We shall study now the {\it effective Hamiltonian} $\mathfrak{E}_{-+}(\epsilon,\lambda)$ defined in Theorem \ref{T.4.3}. The following two technical results will be used in proving the boundedness and self-adjointness of $\mathfrak{E}_{-+}(\epsilon,\lambda)$ in $\mathfrak{V}_0^N$.
\begin{lemma}\label{L.6.2}
Suppose given an operator-valued symbol $\mathfrak{q}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N)\big)$ that is hermitian (i.e. $\mathfrak{q}(x,\xi)^*=\mathfrak{q}(x,\xi),\,\forall(x,\xi)\in\Xi$) and verifies the following invariance property: $({\rm id\hspace*{-1.5pt}l}\otimes\tau_{\gamma^*})\mathfrak{q}=\mathfrak{q},\,\forall\gamma^*\in\Gamma^*$. Then, for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ the operator $\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})$ belongs to $\mathbb{B}\big(\mathfrak{V}_0^N\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$ and is self-adjoint.
Moreover, the application $ S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N)\big)\ni\mathfrak{q}\mapsto\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})\in\mathbb{B}\big(\mathfrak{V}_0^N\big)$ is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
The invariance with respect to translations from $\Gamma^*$ assumed in the statement implies that the operator-valued symbol $\mathfrak{q}$ is in fact a $\Gamma^*$-periodic function with respect to the second variable $\xi\in\mathcal X^*$ and thus can be decomposed in a Fourier series (as tempered distributions in $\mathscr{S}^\prime\big(\Xi;\mathbb{B}(\mathbb{C}^N)\big)$):
\begin{equation}\label{6.2}
\mathfrak{q}(x,\xi)\ =\ \underset{\alpha\in\Gamma}{\sum}\hat{\mathfrak{q}}_\alpha(x)e^{i<\xi,\alpha>},\qquad\hat{\mathfrak{q}}_\alpha(x):=|E^*|^{-1}\int_{E^*}e^{-i<\xi,\alpha>}\mathfrak{q}(x,\xi)\,d\xi.
\end{equation}
Due to the regularity of the symbol functions we deduce that for any $\beta\in\mathbb{N}^d$ and for any $k\in\mathbb{N}$ there exists a strictly positive constant $C_{\beta,k}$ such that
\begin{equation}\label{6.3}
\left|\big(\partial^\beta_x\hat{\mathfrak{q}}_\alpha\big)(x)\right|\ \leq\ C_{\beta,k}<\alpha>^{-k},\qquad\forall x\in\mathcal X,\ \forall\alpha\in\Gamma
\end{equation}
and we conclude that the series in \eqref{6.2} converges in fact in $BC^\infty\big(\Xi;\mathbb{B}(\mathbb{C}^N)\big)\equiv S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N)\big)$. From \eqref{6.2} we deduce that
\begin{equation}\label{6.4}
\big(\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})u\big)(x)\ =\ \underset{\alpha\in\Gamma}{\sum}\big(Q_\alpha u\big)(x),\qquad\forall x\in\mathcal X,\ \forall u\in\mathscr{S}(\mathcal X;\mathbb{C}^N),
\end{equation}
where $Q_\alpha$ is the linear operator defined on $\mathscr{S}(\mathcal X;\mathbb{C}^N)$ by the following oscillating integral:
\begin{equation}\label{6.5}
\big(Q_\alpha u\big)(x)\ :=\ \int_\Xi e^{i<\eta,x-y+\alpha>}\omega_{A_\epsilon}(x,y)\,\hat{\mathfrak{q}}_\alpha\big(\frac{x+y}{2}\big)\,u(y)\,dy\,\;\;\bar{}\!\!\!d\eta\ =\ \omega_{A_\epsilon}(x,x+\alpha)\,\hat{\mathfrak{q}}_\alpha(x+\alpha/2)(\tau_{-\alpha}u)(x).
\end{equation}
Both equalities \eqref{6.4} and \eqref{6.5} may be extended by continuity to any $u\in\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N)$.
Let us consider then $u\equiv u_{\underline{f}}=\underset{\gamma\in\Gamma}{\sum}\,\underline{f}_\gamma\delta_{-\gamma}\in\mathfrak{V}_0^N$ for some $\underline{f}\in\big[l^2(\Gamma)\big]^N$. Then we can write:
\begin{equation}\label{6.6}
Q_\alpha u\ =\ \underset{\gamma\in\Gamma}{\sum}\omega_{A_\epsilon}(-\gamma-\alpha,-\gamma)\,\hat{\mathfrak{q}}_\alpha(-\gamma-\alpha/2)\,\underline{f}_\gamma\,\delta_{-\alpha-\gamma}\ =\ \underset{\gamma\in\Gamma}{\sum}\omega_{A_\epsilon}(-\gamma,\alpha-\gamma)\,\hat{\mathfrak{q}}_\alpha(-\gamma+\alpha/2)\,\underline{f}_{\gamma-\alpha}\,\delta_{-\gamma}.
\end{equation}
If we use now the formula \eqref{6.6} in \eqref{6.4} we get
\begin{equation}\label{6.7}
\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})u\ =\ \underset{\gamma\in\Gamma}{\sum}\widetilde{f}_\gamma\,\delta_{-\gamma},
\end{equation}
\begin{equation}\label{6.8}
\widetilde{f}_\gamma\ :=\ \underset{\alpha\in\Gamma}{\sum}\omega_{A_\epsilon}(-\gamma,\alpha-\gamma)\,\hat{\mathfrak{q}}_{\alpha}(-\gamma+\alpha/2)\,\underline{f}_{\gamma-\alpha}\ =\ \underset{\alpha\in\Gamma}{\sum}\omega_{A_\epsilon}(-\gamma,-\alpha)\,\hat{\mathfrak{q}}_{\gamma-\alpha}\big(-\frac{\gamma+\alpha}{2}\big)\,\underline{f}_{\alpha}.
\end{equation}
Let us verify that $\widetilde{f}\in\big[l^2(\Gamma)\big]^N$. In fact from \eqref{6.3} and \eqref{6.8} it follows that for any $k\in\mathbb{N}$ (sufficiently large) there exists $C_k>0$ such that
$$
|\widetilde{f}_{\gamma}|\ \leq\ C_k\underset{\alpha\in\Gamma}{\sum}<\gamma-\alpha>^{-k}|\underline{f}_\alpha|\ \leq\ C_k\sqrt{\underset{\alpha\in\Gamma}{\sum}<\gamma-\alpha>^{-k}}\sqrt{\underset{\alpha\in\Gamma}{\sum}<\gamma-\alpha>^{-k}|\underline{f}_\alpha|^2},
$$
so that we have the estimation:
\begin{equation}\label{6.9}
\|\widetilde{f}\|_{[l^2(\Gamma)]^N}^2\ =\ \underset{\gamma\in\Gamma}{\sum}|\widetilde{f}_\gamma|^2\ \leq\ C^\prime\underset{\alpha\in\Gamma}{\sum}|\underline{f}_\alpha|^2\ =\ C^\prime\|\underline{f}\|_{[l^2(\Gamma)]^N}^2.
\end{equation}
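Let us spell out the summation step leading to \eqref{6.9} (a routine Schur-type argument): denoting by $S_k:=\underset{\beta\in\Gamma}{\sum}<\beta>^{-k}$, which is finite for $k>d$, the previous inequality gives
$$
\underset{\gamma\in\Gamma}{\sum}|\widetilde{f}_\gamma|^2\ \leq\ C_k^2\,S_k\,\underset{\gamma\in\Gamma}{\sum}\,\underset{\alpha\in\Gamma}{\sum}<\gamma-\alpha>^{-k}|\underline{f}_\alpha|^2\ =\ C_k^2\,S_k^2\,\underset{\alpha\in\Gamma}{\sum}|\underline{f}_\alpha|^2,
$$
so that \eqref{6.9} holds with $C^\prime:=C_k^2S_k^2$.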
From \eqref{6.7} and \eqref{6.9} we deduce that $\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})\in\mathbb{B}\big(\mathfrak{V}_0^N\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$, while the continuity of the application $S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N)\big)\ni\mathfrak{q}\mapsto\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})\in\mathbb{B}\big(\mathfrak{V}_0^N\big)$, uniform with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$, follows from \eqref{6.8} and \eqref{6.2}.
In order to prove the self-adjointness of $\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})$ we fix a second element $v\in\mathfrak{V}_0^N$ of the form $v\equiv v_{\underline{g}}=\underset{\gamma\in\Gamma}{\sum}\underline{g}_\gamma\,\delta_{-\gamma}$ for some $\underline{g}\in \big[l^2(\Gamma)\big]^N$. Then we notice that
\begin{equation}\label{6.10}
\left\{
\begin{array}{rcl}
\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})v&=&\underset{\gamma\in\Gamma}{\sum}\widetilde{g}_\gamma\,\delta_{-\gamma}\\
\widetilde{g}_\gamma&=&\underset{\alpha\in\Gamma}{\sum}\omega_{A_\epsilon}(-\gamma,-\alpha)\,\hat{\mathfrak{q}}_{\gamma-\alpha}\big(-\frac{\gamma+\alpha}{2}\big)\,\underline{g}_{\alpha}.
\end{array}
\right.
\end{equation}
Let us point out the following evident equalities:
\begin{equation}\label{6.11}
\big[\hat{\mathfrak{q}}_\alpha(x)\big]^*\ =\ \hat{\mathfrak{q}}_{-\alpha}(x);\qquad\overline{\omega_{A_\epsilon}(-\gamma,-\alpha)}\ =\ \omega_{A_\epsilon}(-\alpha,-\gamma)
\end{equation}
in order to deduce that
$$
\left(\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})u\,,\,v\right)_{\mathfrak{V}_0^N}\ =\ \underset{\gamma\in\Gamma}{\sum}\left(\widetilde{f}_\gamma\,,\,\underline{g}_\gamma\right)_{\mathbb{C}^N}\ =\ \underset{(\alpha,\gamma)\in\Gamma^2}{\sum}\left(\omega_{A_\epsilon}(-\gamma,-\alpha)\,\hat{\mathfrak{q}}_{\gamma-\alpha}\big(-\frac{\gamma+\alpha}{2}\big)\,\underline{f}_{\alpha}\,,\,\underline{g}_\gamma\right)_{\mathbb{C}^N}\ =
$$
$$
=\ \underset{(\alpha,\gamma)\in\Gamma^2}{\sum}\left(\underline{f}_{\alpha}\,,\,\omega_{A_\epsilon}(-\alpha,-\gamma)\,\hat{\mathfrak{q}}_{\alpha-\gamma}\big(-\frac{\gamma+\alpha}{2}\big)\,\underline{g}_\gamma\right)_{\mathbb{C}^N}\ =\ \underset{\alpha\in\Gamma}{\sum}\left(\underline{f}_\alpha\,,\,\widetilde{g}_\alpha\right)_{\mathbb{C}^N}\ =\ \left(u\,,\,\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})v\right)_{\mathfrak{V}_0^N}.
$$
\end{proof}
\begin{remark}\label{R.6.3}
Let us point out that a shorter proof of the boundedness of $\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})$ on $\mathfrak{V}_0^N$ may be obtained by using Proposition \ref{P.5.4}, which characterizes the distributions from $\mathfrak{V}_0$. The proof given above has the advantage of providing the explicit form of the operator $\mathfrak{Op}^{A_\epsilon}(\mathfrak{q})$ when we identify $\mathfrak{V}_0^N$ with $\big[l^2(\Gamma)\big]^N$ (see \eqref{6.7} and \eqref{6.8}). Moreover, the self-adjointness is a very easy consequence of these formulae.
\end{remark}
In order to prove that the effective Hamiltonian $\mathfrak{E}_{-+}(\epsilon,\lambda)$ satisfies the hypotheses of Lemma \ref{L.6.2} we shall need the commutation properties proved at the end of Section \ref{S.4}, which we now recall in the following Lemma.
\begin{lemma}\label{L.6.4}
With the notations introduced in Lemma \ref{L.4.6} and Remark \ref{R.4.7}, for any $\gamma^*\in\Gamma^*$ and for any $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$ the following equalities are true:
\begin{equation}\label{6.12}
\left\{
\begin{array}{rcl}
\mathfrak{R}_{-,\epsilon}\,\sigma_{\gamma^*}&=&\Upsilon_{\gamma^*}\,\mathfrak{R}_{-,\epsilon}\\
\mathfrak{R}_{+,\epsilon}\,\Upsilon_{\gamma^*}&=&\sigma_{\gamma^*}\,\mathfrak{R}_{+,\epsilon},
\end{array}
\right.
\end{equation}
\begin{equation}\label{6.13}
\left\{
\begin{array}{rcl}
\mathfrak{E}(\epsilon,\lambda)\,\Upsilon_{\gamma^*}&=&\Upsilon_{\gamma^*}\,\mathfrak{E}(\epsilon,\lambda)\\
\mathfrak{E}_+(\epsilon,\lambda)\,\sigma_{\gamma^*}&=&\Upsilon_{\gamma^*}\,\mathfrak{E}_+(\epsilon,\lambda)\\
\mathfrak{E}_-(\epsilon,\lambda)\,\Upsilon_{\gamma^*}&=&\sigma_{\gamma^*}\,\mathfrak{E}_-(\epsilon,\lambda)\\
\mathfrak{E}_{-+}(\epsilon,\lambda)\,\sigma_{\gamma^*}&=&\sigma_{\gamma^*}\,\mathfrak{E}_{-+}(\epsilon,\lambda).
\end{array}
\right.
\end{equation}
\end{lemma}
\begin{proof}
The equalities \eqref{6.12} are exactly the equalities \eqref{4.24} and \eqref{4.25} that we have proved in Section \ref{S.4}. The equalities \eqref{6.13} follow from \eqref{4.26} and \eqref{4.11}.
\end{proof}
\begin{lemma}\label{L.6.5}
Under the hypotheses of Theorem \ref{T.4.3}, we have that $\mathfrak{E}_{-+}(\epsilon,\lambda)\in\mathbb{B}\big(\mathfrak{V}_0^N\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$ and is self-adjoint on the Hilbert space $\mathfrak{V}_0^N$.
\end{lemma}
\begin{proof}
We recall that $\mathfrak{E}_{-+}(\epsilon,\lambda):=\mathfrak{Op}^{A_\epsilon}\big(E^{-,+}_{\epsilon,\lambda}\big)$ where $E^{-,+}_{\epsilon,\lambda}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N)\big)$. This Lemma will thus follow directly from Lemma \ref{L.6.2} once we have shown that $E^{-,+}_{\epsilon,\lambda}$ is hermitian and $\Gamma^*$-periodic in the second variable $\xi\in\mathcal X^*$.
In order to prove the symmetry we use the fact that the operator $\mathcal{E}_{\epsilon,\lambda}$ is self-adjoint on $\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)$ and deduce that $\mathfrak{E}_{-+}(\epsilon,\lambda)$ is self-adjoint on the Hilbert space $L^2(\mathcal X;\mathbb{C}^N)$. Thus we have the equality $\big[\mathfrak{E}_{-+}(\epsilon,\lambda)\big]^*=\mathfrak{E}_{-+}(\epsilon,\lambda)$ from which we deduce that
$$
\mathfrak{Op}^{A_\epsilon}\big(\big[E^{-,+}_{\epsilon,\lambda}\big]^*-E^{-,+}_{\epsilon,\lambda}\big)\ =\ 0.
$$
As the application $\mathfrak{Op}^{A_\epsilon}:\mathscr{S}^\prime(\Xi)\rightarrow\mathbb{B}\big(\mathscr{S}(\mathcal X);\mathscr{S}^\prime(\mathcal X)\big)$ is an isomorphism (see \cite{MP1}), the symmetry relation $\big[E^{-,+}_{\epsilon,\lambda}\big]^*=E^{-,+}_{\epsilon,\lambda}$ follows.
For the $\Gamma^*$-periodicity we use the last equality in \eqref{6.13} that can also be written as
$$
\sigma_{-\gamma^*}\mathfrak{E}_{-+}(\epsilon,\lambda)\sigma_{\gamma^*}\ =\ \mathfrak{E}_{-+}(\epsilon,\lambda).
$$
Considering now the equality \eqref{A.4}, that evidently remains true also for the $\mathfrak{Op}^{A_\epsilon}$ quantization, we can write
$$
\sigma_{-\gamma^*}\mathfrak{E}_{-+}(\epsilon,\lambda)\sigma_{\gamma^*}\ =\ \mathfrak{Op}^{A_\epsilon}\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma^*})E^{-+}_{\epsilon,\lambda}\big).
$$
Repeating the above argument based on the injectivity of the quantization map (\cite{MP1}) we conclude that $({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\gamma^*})E^{-+}_{\epsilon,\lambda}=E^{-+}_{\epsilon,\lambda}$ for any $\gamma^*\in\Gamma^*$.
\end{proof}
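In more explicit terms, and since $\gamma^*$ runs over the whole lattice $\Gamma^*$ (so that the sign convention used for the translation $\tau_{-\gamma^*}$ plays no role), the conclusion of the above proof may be spelled out as
$$
E^{-,+}_{\epsilon,\lambda}(x,\xi+\gamma^*)\ =\ E^{-,+}_{\epsilon,\lambda}(x,\xi),\qquad\forall(x,\xi)\in\Xi,\ \forall\gamma^*\in\Gamma^*,
$$
which is exactly the $\Gamma^*$-periodicity in the second variable required in Lemma \ref{L.6.2}.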
We shall now study the continuity properties of the operators $\mathfrak{R}_{\pm,\epsilon}$, $\mathfrak{E}_{\pm}(\epsilon,\lambda)$ and $\mathfrak{E}(\epsilon,\lambda)$ acting on the distribution spaces $\mathfrak{V}_0$ or $\mathfrak{L}_0$ by using the characterizations of these spaces obtained in Section \ref{S.5} (Propositions \ref{P.5.4}, \ref{P.5.9} and \ref{P.5.11}) as well as the commutation properties recalled in Lemma \ref{L.6.4}. We shall suppose that the hypotheses of Theorem \ref{T.4.3} are satisfied and that $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$.
\begin{lemma}\label{L.6.6}
$\mathfrak{R}_{+,\epsilon}\in\mathbb{B}\big(\mathfrak{L}_0;\mathfrak{V}_0^N\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
Let us recall that $\mathfrak{R}_{+,\epsilon}=\mathfrak{Op}^{A_\epsilon}\big(R_+\big)$ with $R_+\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0;\mathbb{C}^N)\big)$ so that finally we deduce that $\mathfrak{R}_{+,\epsilon}\in\mathbb{B}\big(\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0);\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N)\big)$. Moreover, from Proposition \ref{P.A.26} we deduce that for any $s\in\mathbb{R}$ we have that $\mathfrak{R}_{+,\epsilon}\in\mathbb{B}\big(\mathcal{H}^s_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0\,;\,\big[\mathcal{H}^s_{A_\epsilon}(\mathcal X)\big]^N\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
Let us fix some $u\in\mathfrak{L}_0$; from Proposition \ref{P.5.9} we deduce the existence of a unique $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0\equiv\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes L^2(\mathbb{T})$ such that $u=\underset{\gamma^*}{\sum}\Upsilon_{\gamma^*}u_0$ with convergence in $\mathscr{S}^\prime(\mathcal X^2)$. In fact Lemma \ref{L.5.8} implies the convergence of the above series in $\mathscr{S}^\prime\big(\mathcal X;\mathcal{K}_0\big)$. Using now also the second equation in \eqref{6.12} we can write that in $\mathscr{S}^\prime\big(\mathcal X;\mathbb{C}^N\big)$ we have the equalities
$$
\mathfrak{R}_{+,\epsilon}u\ =\ \underset{\gamma^*}{\sum}\mathfrak{R}_{+,\epsilon}\Upsilon_{\gamma^*}u_0\ =\ \underset{\gamma^*}{\sum}\sigma_{\gamma^*}\mathfrak{R}_{+,\epsilon}u_0.
$$
But we have seen that $\mathfrak{R}_{+,\epsilon}u_0\in\big[\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\big]^N$ and thus Proposition \ref{P.5.4} b) implies that $\mathfrak{R}_{+,\epsilon}u\in\mathfrak{V}_0^N$. The fact that $\mathfrak{R}_{+,\epsilon}\in\mathbb{B}\big(\mathfrak{L}_0;\mathfrak{V}_0^N\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$ follows now from this result and the following three remarks:
\begin{enumerate}
\item The above mentioned continuity property of $\mathfrak{R}_{+,\epsilon}$ that follows from Proposition \ref{P.A.26}.
\item The uniform continuity of the application $\mathfrak{L}_0\ni u\mapsto u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0$ with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$, that follows from Proposition \ref{P.5.9}.
\item The uniform continuity of the application $\big[\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\big]^N\ni\mathfrak{R}_{+,\epsilon}u_0\mapsto\mathfrak{R}_{+,\epsilon}u\in\mathfrak{V}_0^N$ with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$, that follows from Proposition \ref{P.5.4} b).
\end{enumerate}
\end{proof}
\begin{lemma}\label{L.6.7}
$\mathfrak{E}_{-}(\epsilon,\lambda)\in\mathbb{B}\big(\mathfrak{L}_0;\mathfrak{V}_0^N\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$.
\end{lemma}
\begin{proof}
Let us recall that $\mathfrak{E}_-(\epsilon,\lambda)=\mathfrak{Op}^{A_\epsilon}\big(E^-_{\epsilon,\lambda}\big)$ with $E^-_{\epsilon,\lambda}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0;\mathbb{C}^N)\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$. We continue as in the above proof of Lemma \ref{L.6.6}. Considering $\mathfrak{E}_-(\epsilon,\lambda)$ as a magnetic pseudodifferential operator it can be extended to an operator $\mathfrak{E}_-(\epsilon,\lambda)\in\mathbb{B}\big(\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0);\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N)\big)$; moreover we notice that $\mathfrak{L}_0$ is continuously embedded into $\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0)$ and we conclude that we have indeed $\mathfrak{E}_-(\epsilon,\lambda)\in\mathbb{B}\big(\mathfrak{L}_0;\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N)\big)$. From Proposition \ref{P.5.9} we conclude that for any $u\in\mathfrak{L}_0$ there exists some $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0$ such that $u=\underset{\gamma^*\in\Gamma^*}{\sum}\Upsilon_{\gamma^*}u_0$, the series converging in $\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0)$. Using this identity and the third equality in \eqref{6.13} we obtain that
$$
\mathfrak{E}_-(\epsilon,\lambda)u\ =\ \underset{\gamma^*}{\sum}\mathfrak{E}_-(\epsilon,\lambda)\Upsilon_{\gamma^*}u_0\ =\ \underset{\gamma^*}{\sum}\sigma_{\gamma^*}\mathfrak{E}_-(\epsilon,\lambda)u_0.
$$
Taking into account that for any $s\in\mathbb{R}$ we have that $\mathfrak{E}_-(\epsilon,\lambda)\in\mathbb{B}\big(\mathcal{H}^s_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0;\big[\mathcal{H}^s_{A_\epsilon}(\mathcal X)\big]^N\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$, the proof of the Lemma ends similarly to the proof of Lemma \ref{L.6.6} above.
\end{proof}
\begin{lemma}\label{L.6.8}
$\mathfrak{E}_{+}(\epsilon,\lambda)\in\mathbb{B}\big(\mathfrak{V}_0^N;\mathfrak{L}_m(\epsilon)\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$.
\end{lemma}
\begin{proof}
Let us recall that $\mathfrak{E}_{+}(\epsilon,\lambda)=\mathfrak{Op}^{A_\epsilon}\big(E^+_{\epsilon,\lambda}\big)$ with $E^+_{\epsilon,\lambda}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N;\mathcal{K}_{m,\xi})\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$. We conclude that $\mathfrak{E}_+(\epsilon,\lambda)\in\mathbb{B}\big(\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N);\mathscr{S}^\prime(\mathcal X;\mathcal{K}_{m,0})\big)$. Noticing that by Lemma \ref{L.5.2} the space $\mathfrak{V}_0^N$ embeds continuously into $\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N)$ we conclude that $\mathfrak{E}_+(\epsilon,\lambda)\in\mathbb{B}\big(\mathfrak{V}_0^N;\mathscr{S}^\prime(\mathcal X;\mathcal{K}_{m,0})\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$.
Let us now fix some $u\in\mathfrak{V}_0^N$; we know from Proposition \ref{P.5.4} that there exists an element $u_0\in\big[\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\big]^N$ such that $u=\underset{\gamma^*\in\Gamma^*}{\sum}\sigma_{\gamma^*}u_0$, with convergence in the sense of tempered distributions, and such that the application $\mathfrak{V}_0^N\ni u\mapsto u_0\in\big[\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\big]^N$ is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. Using this result and the second equation in \eqref{6.13} we obtain that
$$
\mathfrak{E}_+(\epsilon,\lambda)u\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\mathfrak{E}_+(\epsilon,\lambda)\sigma_{\gamma^*}u_0\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\Upsilon_{\gamma^*}\big(\mathfrak{E}_+(\epsilon,\lambda)u_0\big).
$$
Using now Lemma \ref{L.5.13}, in order to prove that $\mathfrak{E}_+(\epsilon,\lambda)u\in\mathfrak{L}_m(\epsilon)$ all we have to prove is that $\widetilde{Q}_{m,\epsilon}\mathfrak{E}_+(\epsilon,\lambda)u\in\mathfrak{L}_0$. In order to do that we shall need two of the properties of the operator $\widetilde{Q}_{m,\epsilon}$ that we have proved in the previous sections.
First we know from \eqref{2.22} that
$$
\widetilde{Q}_{m,\epsilon}\Upsilon_{\gamma^*}\ =\ \Upsilon_{\gamma^*}\widetilde{Q}_{m,\epsilon},\qquad\forall\gamma^*\in\Gamma^*.
$$
Secondly, at the end of the proof of Lemma \ref{L.4.2} we have shown that $\widetilde{Q}_{m,\epsilon}=\mathfrak{Op}^{A_\epsilon}\big(\widetilde{\mathfrak{q}}_{m,\epsilon}\big)$ with $\widetilde{\mathfrak{q}}_{m,\epsilon}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_{m,\xi};\mathcal{K}_0)\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. If we use the Composition Theorem \ref{T.A.23} we notice that $\widetilde{\mathfrak{q}}_{m,\epsilon}\sharp^{B_\epsilon}E^+_{\epsilon,\lambda}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N;\mathcal{K}_0)\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$. Applying then Proposition \ref{P.A.26} gives that $\widetilde{Q}_{m,\epsilon}\mathfrak{E}_+(\epsilon,\lambda)\in\mathbb{B}\big(\big[\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\big]^N;\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$. We conclude that
$$
\widetilde{Q}_{m,\epsilon}\mathfrak{E}_+(\epsilon,\lambda)u\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\Upsilon_{\gamma^*}\widetilde{Q}_{m,\epsilon}\big(\mathfrak{E}_+(\epsilon,\lambda)u_0\big),
$$
and this last element belongs to $\mathfrak{L}_0$ as implied by Proposition \ref{P.5.11}. The conclusion of the Lemma follows now from the following remarks:
\begin{enumerate}
\item The application $\mathfrak{V}_0^N\ni u\mapsto u_0\in\big[\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\big]^N$ is continuous uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$, as proved in Proposition \ref{P.5.4} a).
\item The application $\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0\ni \widetilde{Q}_{m,\epsilon}\mathfrak{E}_+(\epsilon,\lambda)u_0\mapsto\widetilde{Q}_{m,\epsilon}\mathfrak{E}_+(\epsilon,\lambda)u\in\mathfrak{L}_0$ is continuous uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$, as proved in Proposition \ref{P.5.11}.
\end{enumerate}
\end{proof}
\begin{lemma}\label{L.6.9}
$\mathfrak{R}_{-,\epsilon}\in\mathbb{B}\big(\mathfrak{V}_0^N;\mathfrak{L}_m(\epsilon)\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{lemma}
\begin{proof}
Let us recall that $\mathfrak{R}_{-,\epsilon}=\mathfrak{Op}^{A_\epsilon}\big(R_-\big)$ with $R_-\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N;\mathcal{K}_{m,\xi})\big)$ as implied by \eqref{3.22} and \eqref{3.26}. Using now the first equality in \eqref{6.12} we notice that $\mathfrak{R}_{-,\epsilon}\sigma_{\gamma^*}=\Upsilon_{\gamma^*}\mathfrak{R}_{-,\epsilon},\ \forall\gamma^*\in\Gamma^*$, so that the arguments from the proof of Lemma \ref{L.6.8} may be repeated and one obtains the desired conclusion of the Lemma.
\end{proof}
\begin{lemma}\label{L.6.10}
$\mathfrak{E}(\epsilon,\lambda)\in\mathbb{B}\big(\mathfrak{L}_0;\mathfrak{L}_m(\epsilon)\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$.
\end{lemma}
\begin{proof}
Let us recall that $\mathfrak{E}(\epsilon,\lambda)=\mathfrak{Op}^{A_\epsilon}\big(E_{\epsilon,\lambda}\big)$ with $E_{\epsilon,\lambda}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0;\mathcal{K}_{m,\xi})\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$. As a magnetic pseudodifferential operator it can then be extended to $\mathfrak{E}(\epsilon,\lambda)\in\mathbb{B}\big(\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0);\mathscr{S}^\prime(\mathcal X;\mathcal{K}_{m,0})\big)$. Recalling that we have a continuous embedding $\mathfrak{L}_0\hookrightarrow\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0)$ we deduce that $\mathfrak{E}(\epsilon,\lambda)\in\mathbb{B}\big(\mathfrak{L}_0;\mathscr{S}^\prime(\mathcal X;\mathcal{K}_{m,0})\big)$. We use now Proposition \ref{P.5.9} and the first equality in \eqref{6.13} and write that for any $u\in\mathfrak{L}_0$ there exists $u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0$ such that:
$$
\mathfrak{E}(\epsilon,\lambda)u\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\mathfrak{E}(\epsilon,\lambda)\Upsilon_{\gamma^*}u_0\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\Upsilon_{\gamma^*}
\big(\mathfrak{E}(\epsilon,\lambda)u_0\big),
$$
with convergence in the sense of tempered distributions on $\mathcal X^2$. From Proposition \ref{P.5.9} we deduce that the application $\mathfrak{L}_0\ni u\mapsto u_0\in\mathcal{H}^\infty_{A_\epsilon}(\mathcal X)\otimes\mathcal{K}_0$ is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$, and from the Composition Theorem \ref{T.A.23} we deduce that $\widetilde{\mathfrak{q}}_{m,\epsilon}\sharp^{B_\epsilon}E_{\epsilon,\lambda}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0)\big)$; the proof of the Lemma ends exactly as the proof of Lemma \ref{L.6.8}.
\end{proof}
We shall now prove a variant of Theorem \ref{T.4.3} in the framework of the Hilbert spaces $\mathfrak{V}_0$ and $\mathfrak{L}_0$.
\begin{theorem}\label{T.6.11}
We suppose that the hypotheses of Theorem \ref{T.4.3} are verified and use the same notations; then we have that
\begin{equation}\label{6.14}
\mathcal{P}_{\epsilon,\lambda}\,\in\,\mathbb{B}\big(\mathfrak{L}_m(\epsilon)\times\mathfrak{V}_0^N;\mathfrak{L}_0\times\mathfrak{V}_0^N\big),\quad\mathcal{E}_{\epsilon,\lambda}\,\in\,\mathbb{B}\big(\mathfrak{L}_0\times\mathfrak{V}_0^N;\mathfrak{L}_m(\epsilon)\times\mathfrak{V}_0^N\big),
\end{equation}
uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$. Moreover, for any $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$ the operator $\mathcal{P}_{\epsilon,\lambda}$ is invertible and its inverse is $\mathcal{E}_{\epsilon,\lambda}$.
\end{theorem}
\begin{proof}
The boundedness properties in \eqref{6.14} follow from Lemmas \ref{L.6.1} (a), \ref{L.6.5}, \ref{L.6.6}, \ref{L.6.7}, \ref{L.6.8}, \ref{L.6.9} and \ref{L.6.10}.
Concerning the invertibility of $\mathcal{P}_{\epsilon,\lambda}$, let us recall that in Theorem \ref{T.4.3} we have proved that the operator $\mathcal{P}_{\epsilon,\lambda}$, considered as an operator in $\mathbb{B}\big(\mathcal{K}^m_\epsilon(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big)$, is invertible and its inverse is $\mathcal{E}_{\epsilon,\lambda}\in\mathbb{B}\big(\mathcal{K}(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N);\mathcal{K}^m_\epsilon(\mathcal X^2)\times L^2(\mathcal X;\mathbb{C}^N)\big)$.
From \eqref{4.7} we recall that $\mathcal{P}_{\epsilon,\lambda}$ is a magnetic pseudodifferential operator with symbol $\mathcal{P}_\epsilon$ of class $S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_{m,\xi}\times\mathbb{C}^N;\mathcal{K}_0\times\mathbb{C}^N)\big)$ uniformly with respect to $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I$. Applying Proposition \ref{P.A.7} we deduce that
\begin{equation}\label{6.15}
\mathcal{P}_{\epsilon,\lambda}\,\in\,\mathbb{B}\big(\mathscr{S}(\mathcal X;\mathcal{K}_{m,0})\times\mathscr{S}(\mathcal X;\mathbb{C}^N);\mathscr{S}(\mathcal X;\mathcal{K}_0)\times\mathscr{S}(\mathcal X;\mathbb{C}^N)\big),
\end{equation}
and extending by continuity we also have that
\begin{equation}\label{6.16}
\mathcal{P}_{\epsilon,\lambda}\,\in\,\mathbb{B}\big(\mathscr{S}^\prime(\mathcal X;\mathcal{K}_{m,0})\times\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N);\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0)\times\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N)\big).
\end{equation}
Similarly, the operator $\mathcal{E}_{\epsilon,\lambda}$ appearing in Theorem \ref{T.4.3} has a symbol of class $S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0\times\mathbb{C}^N;\mathcal{K}_{m,\xi}\times\mathbb{C}^N)\big)$ and thus defines first an operator of the form
\begin{equation}\label{6.17}
\mathcal{E}_{\epsilon,\lambda}\,\in\,\mathbb{B}\big(\mathscr{S}(\mathcal X;\mathcal{K}_0)\times\mathscr{S}(\mathcal X;\mathbb{C}^N);\mathscr{S}(\mathcal X;\mathcal{K}_{m,0})\times\mathscr{S}(\mathcal X;\mathbb{C}^N)\big),
\end{equation}
and extending by continuity we also have that
\begin{equation}\label{6.18}
\mathcal{E}_{\epsilon,\lambda}\,\in\,\mathbb{B}\big(\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0)\times\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N);\mathscr{S}^\prime(\mathcal X;\mathcal{K}_{m,0})\times\mathscr{S}^\prime(\mathcal X;\mathbb{C}^N)\big).
\end{equation}
From the first inclusion in Lemma \ref{L.2.16} it follows that $\mathscr{S}(\mathcal X;\mathcal{K}_{m,0})\hookrightarrow\mathcal{K}^m_\epsilon(\mathcal X^2)$, so that from the invertibility implied by Theorem \ref{T.4.3} (see above in this proof), it also follows that the operator $\mathcal{P}_{\epsilon,\lambda}$ appearing in \eqref{6.15} is invertible and its inverse is the operator $\mathcal{E}_{\epsilon,\lambda}$ appearing in \eqref{6.17}. As both operators $\mathcal{P}_{\epsilon,\lambda}$ and $\mathcal{E}_{\epsilon,\lambda}$ are symmetric, by duality we deduce that the operators appearing in \eqref{6.16} and \eqref{6.18} are also inverses of one another. This property, together with the embeddings $\mathfrak{L}_m(\epsilon)\hookrightarrow\mathscr{S}^\prime(\mathcal X;\mathcal{K}_{m,0})$ given by Lemma \ref{L.5.14}, $\mathfrak{L}_0\hookrightarrow\mathscr{S}^\prime(\mathcal X;\mathcal{K}_0)$ given by Lemma \ref{L.5.7} and $\mathfrak{V}_0\hookrightarrow\mathscr{S}^\prime(\mathcal X)$ given by Lemma \ref{L.5.2}, allows us to end the proof of the Theorem.
\end{proof}
We come now to the proof of the main result of this paper.
\begin{description}
\item[Proof of Theorem \ref{T.0.1}] We proceed exactly as in the proof of Corollary \ref{C.4.5}. We start from the equality
$$
\mathcal{P}_{\epsilon,\lambda}\,\mathcal{E}_{\epsilon,\lambda}\ =\ \left(
\begin{array}{cc}
{\rm id\hspace*{-1.5pt}l}_{\mathfrak{L}_0}&0\\
0&{\rm id\hspace*{-1.5pt}l}_{\mathfrak{V}_0^N}
\end{array}
\right)
$$
and use the fact that $\widetilde{P}_\epsilon^{\prime\prime\prime}$ is a self-adjoint operator in $\mathfrak{L}_0$ that is unitarily equivalent to $P_{\epsilon}$ (by Lemma \ref{L.6.1}), so that $\sigma\big(\widetilde{P}_\epsilon^{\prime\prime\prime}\big)=\sigma\big(P_{\epsilon}\big)$. Then we can write that
\begin{equation}\label{6.19}
0\notin\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)\ \Longrightarrow\ \lambda\notin\sigma\big(\widetilde{P}_\epsilon^{\prime\prime\prime}\big),\ \text{and}\ \big(\widetilde{P}_\epsilon^{\prime\prime\prime}-\lambda\big)^{-1}\ =\ \mathfrak{E}(\epsilon,\lambda)\,-\,\mathfrak{E}_{+}(\epsilon,\lambda)\mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}\mathfrak{E}_{-}(\epsilon,\lambda)
\end{equation}
\begin{equation}\label{6.20}
\lambda\notin\sigma\big(\widetilde{P}_\epsilon^{\prime\prime\prime}\big)\ \Longrightarrow\ 0\notin\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big),\ \text{and}\ \mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}\ =\ -\mathfrak{R}_{+,\epsilon}\big(\widetilde{P}_\epsilon^{\prime\prime\prime}-\lambda\big)^{-1}\mathfrak{R}_{-,\epsilon}.
\end{equation}
In conclusion we have obtained that $\lambda\in\sigma\big(\widetilde{P}_\epsilon^{\prime\prime\prime}\big)\ \Leftrightarrow\ 0\in\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)$ and this implies that $\lambda\in\sigma\big(P_\epsilon\big)\ \Leftrightarrow\ 0\in\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)$.\cqfd
\end{description}
An immediate consequence of Theorem \ref{T.0.1} is the following result concerning the stability of spectral gaps for the operator $P_{\epsilon}$:
\begin{description}
\item[Proof of Corollary \ref{C.0.2}] We apply Theorem \ref{T.0.1} and the arguments from its proof above, taking $I=K$ and $\epsilon_0>0$ sufficiently small. Knowing that $\text{\sf dist}\big(K,\sigma\big(P_0\big)\big)>0$, we deduce that we also have $\text{\sf dist}\big(K,\sigma\big(\widetilde{P}_0^{\prime\prime\prime}\big)\big)>0$ and thus we have the estimation:
\begin{equation}\label{6.21}
\underset{\lambda\in K}{\sup}\left\|\big(\widetilde{P}_0^{\prime\prime\prime}-\lambda\big)^{-1}\right\|_{\mathbb{B}(\mathfrak{L}_0)}\ <\ \infty.
\end{equation}
From \eqref{6.20} we deduce that
$$
\lambda\in K\ \Longrightarrow\ 0\notin\sigma\big(\mathfrak{E}_{-+}(0,\lambda)\big)\ \text{and}\ \mathfrak{E}_{-+}(0,\lambda)^{-1}\ =\ -\mathfrak{R}_{+,0}\big(\widetilde{P}_0^{\prime\prime\prime}-\lambda\big)^{-1}\mathfrak{R}_{-,0},
$$
and thus
\begin{equation}\label{6.22}
\underset{\lambda\in K}{\sup}\left\|\mathfrak{E}_{-+}(0,\lambda)^{-1}\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}\ <\ \infty.
\end{equation}
From Theorem \ref{T.4.3} it follows that for any $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times K$:
\begin{equation}\label{6.23}
\mathfrak{E}_{-+}(\epsilon,\lambda)\ =\ \mathfrak{E}_{-+}(0,\lambda)\,+\,\mathfrak{S}_{-+}(\epsilon,\lambda),\qquad\mathfrak{S}_{-+}(\epsilon,\lambda):=\mathfrak{Op}^{A_\epsilon}\big(S^{-+}_{\epsilon,\lambda}\big),
\end{equation}
\begin{equation}\label{6.24}
\underset{\epsilon\rightarrow0}{\lim}S^{-+}_{\epsilon,\lambda}\ =\ 0\ \text{in}\ S^0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N)\big),
\end{equation}
uniformly with respect to $\lambda\in K$.
We notice that the symbol $S^{-+}_{\epsilon,\lambda}(x,\xi)$ is $\Gamma^*$-periodic in the second variable $\xi\in\mathcal X^*$, so that from Lemma \ref{L.6.2} we deduce that
\begin{equation}\label{6.25}
\underset{\epsilon\rightarrow0}{\lim}\left\|\mathfrak{S}_{-+}(\epsilon,\lambda)\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}\ =\ 0,
\end{equation}
uniformly with respect to $\lambda\in K$.
Now \eqref{6.22}, \eqref{6.23} and \eqref{6.25} imply that, for $\epsilon_0>0$ sufficiently small, the magnetic pseudodifferential operator $\mathfrak{E}_{-+}(\epsilon,\lambda)$ is invertible in $\mathbb{B}(\mathfrak{V}_0^N)$ for any $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times K$; in conclusion $0\notin\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)$ and thus $\lambda\notin\sigma\big(P_\epsilon\big)$ for any $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times K$.
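For instance, the invertibility claim above may be justified by the standard Neumann series argument: for $(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times K$ one writes
$$
\mathfrak{E}_{-+}(\epsilon,\lambda)\ =\ \mathfrak{E}_{-+}(0,\lambda)\Big({\rm id\hspace*{-1.5pt}l}_{\mathfrak{V}_0^N}\,+\,\mathfrak{E}_{-+}(0,\lambda)^{-1}\mathfrak{S}_{-+}(\epsilon,\lambda)\Big),
$$
and \eqref{6.22} together with \eqref{6.25} show that, for $\epsilon_0>0$ small enough, $\left\|\mathfrak{E}_{-+}(0,\lambda)^{-1}\mathfrak{S}_{-+}(\epsilon,\lambda)\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}<1$ uniformly with respect to $\lambda\in K$, so that the second factor is invertible by its Neumann series.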
\end{description}
The arguments elaborated in the proof of Corollary \ref{C.0.2} allow us to obtain an interesting relation between the spectra of the operators $P_\epsilon$ and $P_0$, under some stronger hypotheses.
\noindent{\bf Hypothesis I.1} Under the conditions of Hypothesis H.1 we suppose further that for any pair $(j,k)$ of indices between $1$ and $d$ the family $\{\epsilon^{-1}B_{\epsilon,jk}\}_{0<|\epsilon|\leq\epsilon_0}$ is a bounded subset of $BC^\infty(\mathcal X)$.
\noindent{\bf Hypothesis I.2} We suppose that $p_\epsilon(x,y,\eta)=p_0(y,\eta)+r_\epsilon(x,y,\eta)$ where $p_0$ is a real valued symbol from $S^m_1(\mathbb{T})$ with $m>0$ and the family $\{\epsilon^{-1}r_\epsilon\}_{0<|\epsilon|\leq\epsilon_0}$ is a bounded subset of $S^m_1(\mathcal X\times\mathbb{T})$, each symbol $r_\epsilon$ being real valued.
\noindent{\bf Hypothesis I.3} The symbol $p_0$ is elliptic; i.e. there exist $C>0$, $R>0$ such that $p_0(y,\eta)\geq C|\eta|^m$ for any $(y,\eta)\in\Xi$ with $|\eta|\geq R$.
\begin{remark}\label{R.6.12}
If we come back to the proofs of Theorem \ref{T.4.3}, the Composition Theorem \ref{T.A.23} and Proposition \ref{P.A.27} and suppose that Hypotheses I.1 - I.3 hold, we can prove the following fact, which extends our property \eqref{4.12}:
\begin{equation}\label{6.26}
\begin{array}{l}
\forall I\subset\mathbb{R}\ \text{compact interval, }\exists\epsilon_0>0,\,\exists N\in\mathbb{N},\ \text{such that:}\\
\left\{
\begin{array}{l}
\forall(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I,\qquad\mathfrak{E}_{-+}(\epsilon,\lambda)\ =\ \mathfrak{E}_{-+}(0,\lambda)\,+\,\mathfrak{S}_{-+}(\epsilon,\lambda),\qquad\mathfrak{S}_{-+}(\epsilon,\lambda):=\mathfrak{Op}^{A_\epsilon}\big(S^{-+}_{\epsilon,\lambda}\big),\\
\text{the family }\left\{\epsilon^{-1}S^{-+}_{\epsilon,\lambda}\right\}_{(|\epsilon|,\lambda)\in(0,\epsilon_0]\times I}\ \text{is a bounded subset of }S^0\big(\mathcal X;\mathbb{B}(\mathbb{C}^N)\big).
\end{array}
\right.
\end{array}
\end{equation}
\end{remark}
Once again we notice the $\Gamma^*$-periodicity of the symbol $S^{-+}_{\epsilon,\lambda}(x,\xi)$ with respect to the variable $\xi\in\mathcal X^*$ and from Lemma \ref{L.6.2} we deduce that there exists a strictly positive constant $C_1$ such that the following estimation is true:
\begin{equation}\label{6.27}
\left\|\mathfrak{S}_{-+}(\epsilon,\lambda)\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}\ \leq\ C_1\epsilon,\qquad\forall(\epsilon,\lambda)\in[-\epsilon_0,\epsilon_0]\times I.
\end{equation}
Using Lemmas \ref{L.6.6} and \ref{L.6.9} we conclude that there exists a strictly positive constant $C_2$ such that the following estimation is true:
\begin{equation}\label{6.28}
\left\|\mathfrak{R}_{+,\epsilon}\right\|_{\mathbb{B}(\mathfrak{L}_0;\mathfrak{V}_0^N)}\ +\ \left\|\mathfrak{R}_{-,\epsilon}\right\|_{\mathbb{B}(\mathfrak{V}_0^N;\mathfrak{L}_m(\epsilon))}\ \leq\ C_2\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
\begin{description}
\item[Proof of Proposition \ref{P.6.12}]
For $M\subset\mathbb{R}$ and $\delta>0$ we use the notation $M_\delta:=\{t\in\mathbb{R}\,\mid\,{\sf dist}(t,M)\leq\delta\}$. Then we have to prove the following inclusions:
\begin{equation}\label{6.29}
\sigma\big(P_\epsilon\big)\cap I\,\subset\,\sigma\big(P_0\big)_{C\epsilon}\cap I,\qquad\forall\epsilon\in[0,\epsilon_0].
\end{equation}
\begin{equation}\label{6.34}
\sigma\big(P_0\big)\cap I\,\subset\,\sigma\big(P_\epsilon\big)_{C\epsilon}\cap I,\qquad\forall\epsilon\in[0,\epsilon_0].
\end{equation}
Suppose there exists $\lambda\in I$ such that ${\sf dist}\big(\lambda,\sigma\big(P_0\big)\big)>C\epsilon$. From Lemma \ref{L.6.1} we know that $\sigma\big(P_0\big)=\sigma\big(\widetilde{P}_0^{\prime\prime\prime}\big)$ so that we deduce that ${\sf dist}\big(\lambda,\sigma\big(\widetilde{P}_0^{\prime\prime\prime}\big)\big)>C\epsilon$ and conclude that:
\begin{equation}\label{6.30}
\left\|\big(\widetilde{P}_0^{\prime\prime\prime}-\lambda\big)^{-1}\right\|_{\mathbb{B}(\mathfrak{L}_0)}\ \leq\ (C\epsilon)^{-1}.
\end{equation}
From \eqref{6.20} we also deduce that $0\notin\sigma\big(\mathfrak{E}_{-+}(0,\lambda)\big)$ and $\mathfrak{E}_{-+}(0,\lambda)^{-1}=-\mathfrak{R}_{+,0}\big(\widetilde{P}_0^{\prime\prime\prime}-\lambda\big)^{-1}\mathfrak{R}_{-,0}$. Using these facts together with \eqref{6.28} and \eqref{6.30} we obtain the estimation:
\begin{equation}\label{6.31}
\left\|\mathfrak{E}_{-+}(0,\lambda)^{-1}\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}\ \leq\ C_2^2(C\epsilon)^{-1}.
\end{equation}
Using \eqref{6.27} and \eqref{6.31} we also obtain the following estimation:
\begin{equation}\label{6.32}
\left\|\mathfrak{E}_{-+}(0,\lambda)^{-1}\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}\cdot\left\|\mathfrak{S}_{-+}(\epsilon,\lambda)\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}\ \leq\ C_1C_2^2C^{-1},\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
If we choose now $C>0$ such that $C>C_1C_2^2$, we notice that the operator $\mathfrak{E}_{-+}(\epsilon,\lambda)=\mathfrak{E}_{-+}(0,\lambda)+\mathfrak{S}_{-+}(\epsilon,\lambda)$ is invertible in $\mathbb{B}(\mathfrak{V}_0^N)$ and thus we deduce that $0\notin\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)$. It follows then that $\lambda\notin\sigma\big(P_\epsilon\big)$ for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ and the inclusion \eqref{6.29} follows.
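Let us also note in passing (a standard consequence of the Neumann series, under the choice $C>C_1C_2^2$ made above, and not needed in what follows) the quantitative bound
$$
\left\|\mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}\ \leq\ \frac{C_2^2\,(C\epsilon)^{-1}}{1-C_1C_2^2C^{-1}},\qquad\forall\epsilon\in(0,\epsilon_0],
$$
valid for every $\lambda\in I$ with ${\sf dist}\big(\lambda,\sigma\big(P_0\big)\big)>C\epsilon$; it follows from \eqref{6.27} and \eqref{6.31} by writing $\mathfrak{E}_{-+}(\epsilon,\lambda)=\mathfrak{E}_{-+}(0,\lambda)\big({\rm id\hspace*{-1.5pt}l}_{\mathfrak{V}_0^N}+\mathfrak{E}_{-+}(0,\lambda)^{-1}\mathfrak{S}_{-+}(\epsilon,\lambda)\big)$ and estimating the Neumann series of the second factor.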
Let us now prove \eqref{6.34}. Suppose that for some $\epsilon$ with $|\epsilon|\in(0,\epsilon_0]$ there exists $\lambda\in I$ such that ${\sf dist}\big(\lambda,\sigma\big(P_\epsilon\big)\big)>C\epsilon$. Recalling that $\sigma\big(P_\epsilon\big)=\sigma\big(\widetilde{P}_\epsilon^{\prime\prime\prime}\big)$ we deduce that we also have ${\sf dist}\big(\lambda,\sigma\big(\widetilde{P}_\epsilon^{\prime\prime\prime}\big)\big)>C\epsilon$ and thus
\begin{equation}\label{6.35}
\left\|\big(\widetilde{P}_\epsilon^{\prime\prime\prime}-\lambda\big)^{-1}\right\|_{\mathbb{B}(\mathfrak{L}_0)}\ \leq\ (C\epsilon)^{-1}.
\end{equation}
We also deduce that $0\notin\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big)$ and $\mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}=-\mathfrak{R}_{+,\epsilon}\big(\widetilde{P}_\epsilon^{\prime\prime\prime}-\lambda\big)^{-1}\mathfrak{R}_{-,\epsilon}$. Using these facts together with \eqref{6.28} and \eqref{6.35} we obtain the estimation:
\begin{equation}\label{6.31.a}
\left\|\mathfrak{E}_{-+}(\epsilon,\lambda)^{-1}\right\|_{\mathbb{B}(\mathfrak{V}_0^N)}\ \leq\ C_2^2(C\epsilon)^{-1}.
\end{equation}
It follows as above that the operator $\mathfrak{E}_{-+}(0,\lambda)=\mathfrak{E}_{-+}(\epsilon,\lambda)-\mathfrak{S}_{-+}(\epsilon,\lambda)$ is invertible in $\mathbb{B}(\mathfrak{V}_0^N)$ and thus we deduce that $0\notin\sigma\big(\mathfrak{E}_{-+}(0,\lambda)\big)$. It follows then that $\lambda\notin\sigma\big(P_0\big)$ and the inclusion \eqref{6.34} follows.
\end{description}
\begin{remark}\label{R.6.12.a}
The relations \eqref{6.29} and \eqref{6.34} clearly imply that the boundaries of the spectral gaps of the operator $P_\epsilon$ are Lipschitz functions of $\epsilon$ at $\epsilon=0$.
\end{remark}
\section{Some particular situations}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{The simple spectral band}
In this subsection we shall find some explicit forms for the principal part of the effective Hamiltonian $\mathfrak{E}_{-+}(\epsilon,\lambda)$. We shall suppose Hypotheses H.1 - H.6 to be satisfied.
To begin with, we concentrate on the operator $P_0=\mathfrak{Op}(p_0)$ with $p_0\in S^m_1(\mathbb{T})$ a real elliptic symbol. We know that $P_0$ has a self-adjoint realization as an operator acting in $L^2(\mathcal X)$ with the domain $\mathcal{H}^m(\mathcal X)$ (the usual Sobolev space of order $m$). From Lemma \ref{L.A.18} we obtain that $\tau_\gamma P_0=P_0\tau_\gamma$, $\forall\gamma\in\Gamma$, and thus we can use the Floquet theory in order to study the spectrum of the operator $P_0$.
We shall consider the following spaces, similar to those used in Section \ref{S.2} but with one variable less:
\begin{equation}\label{7.1}
\mathscr{S}^\prime_\Gamma(\Xi)\ :=\ \left\{v\in\mathscr{S}^\prime(\Xi)\,\mid\,v(y+\gamma,\eta)=e^{i<\eta,\gamma>}v(y,\eta)\ \forall\gamma\in\Gamma,\ v(y,\eta+\gamma^*)=v(y,\eta)\ \forall\gamma^*\in\Gamma^*\right\}
\end{equation}
endowed with the topology induced from $\mathscr{S}^\prime(\Xi)$.
\begin{equation}\label{7.2}
\mathscr{F}_0(\Xi)\ :=\ \left\{v\in L^2_{\text{\sf loc}}(\Xi)\cap\mathscr{S}^\prime_\Gamma(\Xi)\,\mid\,v\in L^2(E\times E^*)\right\}
\end{equation}
that is a Hilbert space for the norm $\|v\|_{\mathscr{F}_0(\Xi)}:=\left(|E^*|^{-1}\int_E\int_{E^*}|v(x,\xi)|^2\,dx\,d\xi\right)^{1/2}$.
\begin{equation}\label{7.4}
\forall s\in\mathbb{R},\qquad\mathscr{F}_s(\Xi)\ :=\ \left\{v\in\mathscr{S}^\prime_\Gamma(\Xi)\,\mid\,\big(<D>^s\otimes{\rm id\hspace*{-1.5pt}l}\big)v\in\mathscr{F}_0(\Xi)\right\},
\end{equation}
that is a Hilbert space with the norm $\|v\|_{\mathscr{F}_s(\Xi)}:=\|\big(<D>^s\otimes{\rm id\hspace*{-1.5pt}l}\big)v\|_{\mathscr{F}_0(\Xi)}$.
The following Lemma and its proof are completely similar to the case discussed in Section \ref{S.2}.
\begin{lemma}\label{L.7.1}
With the above notations the following statements are true:
\begin{enumerate}
\item The operator $\mathcal{U}_0:L^2(\mathcal X)\rightarrow\mathscr{F}_0(\Xi)$ defined by
\begin{equation}\label{7.5}
\big(\mathcal{U}_0u\big)(x,\xi)\ :=\ \underset{\gamma\in\Gamma}{\sum}e^{i<\xi,\gamma>}u(x-\gamma),\qquad\forall(x,\xi)\in\Xi,
\end{equation}
is a unitary operator. The inverse of the operator $\mathcal{U}_0$ is the operator $\mathcal{W}_0:\mathscr{F}_0(\Xi)\rightarrow L^2(\mathcal X)$ defined as
\begin{equation}\label{7.6}
\big(\mathcal{W}_0v\big)(x)\ :=\ |E^*|^{-1}\int_{E^*}v(x,\xi)\,d\xi,\qquad\forall x\in\mathcal X.
\end{equation}
\item For any $s\in\mathbb{R}$ the restriction of the operator $\mathcal{U}_0$ to $\mathcal{H}^s(\mathcal X)$ defines a unitary operator $\mathcal{U}_0:\mathcal{H}^s(\mathcal X)\rightarrow\mathscr{F}_s(\Xi)$.
\end{enumerate}
\end{lemma}
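As an illustration of the inversion formula \eqref{7.6}, one may check directly that $\mathcal{W}_0\mathcal{U}_0u=u$ for $u\in\mathscr{S}(\mathcal X)$: since $|E^*|^{-1}\int_{E^*}e^{i<\xi,\gamma>}\,d\xi=\delta_{\gamma,0}$ for any $\gamma\in\Gamma$, we have
$$
\big(\mathcal{W}_0\mathcal{U}_0u\big)(x)\ =\ |E^*|^{-1}\int_{E^*}\underset{\gamma\in\Gamma}{\sum}e^{i<\xi,\gamma>}u(x-\gamma)\,d\xi\ =\ \underset{\gamma\in\Gamma}{\sum}\delta_{\gamma,0}\,u(x-\gamma)\ =\ u(x),\qquad\forall x\in\mathcal X,
$$
the interchange of the sum and the integral being justified by the rapid decay of $u$.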
\begin{lemma}\label{L.7.2}
With the above notations the following statements are true:
\begin{enumerate}
\item The operator $\widehat{P}_0:=P_0\otimes{\rm id\hspace*{-1.5pt}l}$ leaves invariant the subspace $\mathscr{S}^\prime_\Gamma(\Xi)$.
\item Considered as an unbounded operator in the Hilbert space $\mathscr{F}_0(\Xi)$, the operator $\widehat{P}_0$ is self-adjoint and lower semi-bounded on the domain $\mathscr{F}_m(\Xi)$ and is unitarily equivalent to the operator $P_0$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first conclusion follows easily from Lemma \ref{L.A.18}. For the second conclusion let us notice that \eqref{7.5} implies the following equality:
\begin{equation}\label{7.7}
\mathcal{U}_0P_0u\ =\ \widehat{P}_0\mathcal{U}_0u,\qquad\forall u\in\mathcal{H}^m(\mathcal X),
\end{equation}
which together with Lemma \ref{L.7.1} implies that $\widehat{P}_0$ and $P_0$ are unitarily equivalent.
\end{proof}
Let us notice that for any $v\in\mathscr{F}_0(\Xi)$ and for any $\xi\in\mathcal X^*$, the restriction $v(\cdot,\xi)$ defines a function on $\mathcal X$ that belongs to $\mathscr{F}_{0,\xi}$, and by Remark \ref{R.A.17} this last space is canonically unitarily equivalent to $L^2(E)$; moreover we have the equality
$$
\|v\|_{\mathscr{F}_0(\Xi)}\ =\ |E^*|^{-1/2}\left(\int_{E^*}\|v(\cdot,\xi)\|^2_{\mathscr{F}_{0,\xi}}d\xi\right)^{1/2}.
$$
The following periodicity is evidently true: $\mathscr{F}_{0,\xi}=\mathscr{F}_{0,\xi+\gamma^*}$ for any $\gamma^*\in\Gamma^*$. Moreover, it is easy to see that we can make the following identification:
$$
\mathscr{F}_0(\Xi)\ \equiv\ \int_{\mathbb{T}^{*,d}}^\oplus\mathscr{F}_{0,\xi}\,d\xi.
$$
Similarly, we also have the following relations:
$$
\mathscr{F}_{m,\xi+\gamma^*}=\mathscr{F}_{m,\xi},\ \forall\gamma^*\in\Gamma^*;\qquad\mathscr{F}_m(\Xi)\ \equiv\ \int_{\mathbb{T}^{*,d}}^\oplus\mathscr{F}_{m,\xi}\,d\xi.
$$
Taking into account Remark \ref{R.A.22} we notice that for any $\xi\in\mathcal X^*$ the operator $P_0$ induces on the Hilbert space $\mathscr{F}_{0,\xi}$ a self-adjoint operator with domain $\mathscr{F}_{m,\xi}$ that we shall denote by $\widehat{P}_0(\xi)$; we evidently have the periodicity $\widehat{P}_0(\xi+\gamma^*)=\widehat{P}_0(\xi)$ for any $\gamma^*\in\Gamma^*$.
If we identify $\mathcal{K}_0$ with $L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)\equiv L^2(E)$, the same Remark \ref{R.A.22} implies that the operator $\widehat{P}_0(\xi)$ is unitarily equivalent with the operator $\check{P}_0(\xi)$ that is induced by $\mathfrak{Op}\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\xi})p_0\big)$ on the space $\mathcal{K}_0$; this is a self-adjoint lower semi-bounded operator on the domain $\mathcal{K}_{m,\xi}$ (identified with $\mathcal{H}^m_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$, with the norm $\|<D+\xi>^m\cdot\|_{L^2(E)}$). More precisely, we can write that
\begin{equation}\label{7.8}
\check{P}_0(\xi)\ =\ \sigma_{-\xi}\widehat{P}_0(\xi)\sigma_\xi,\qquad\forall\xi\in\mathcal X^*.
\end{equation}
\begin{lemma}\label{L.7.3}
For any ${\rm z}\in\mathbb{C}\setminus\overline{\underset{\xi\in\mathcal X^*}{\cup}\sigma\big(\check{P}_0(\xi)\big)}$ the application
\begin{equation}\label{7.8.a}
\mathcal X^*\ni\xi\mapsto\big(\check{P}_0(\xi)\,-\,{\rm z}\big)^{-1}\in\mathbb{B}(\mathcal{K}_0)
\end{equation}
is of class $C^\infty(\mathcal X^*)$.
\end{lemma}
\begin{proof}
From Example \ref{E.A.20} it follows that the application
$$
\mathcal X^*\ni\xi\mapsto\big(\check{P}_0(\xi)\,-\,{\rm z}\big)^{-1}\in\mathbb{B}(\mathcal{K}_{m,0};\mathcal{K}_0)
$$
is of class $C^\infty(\mathcal X^*)$. But from the second resolvent equality:
$$
\big(\check{P}_0(\xi)\,-\,{\rm z}\big)^{-1}\,-\,\big(\check{P}_0(\xi_0)\,-\,{\rm z}\big)^{-1}\ =\ \big(\check{P}_0(\xi)\,-\,{\rm z}\big)^{-1}\big(\check{P}_0(\xi_0)\,-\,\check{P}_0(\xi)\big)\big(\check{P}_0(\xi_0)\,-\,{\rm z}\big)^{-1}
$$
the continuity of the application \eqref{7.8.a} follows, and the existence of the derivatives follows by the usual arguments.
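More precisely, and at least formally, these usual arguments consist in taking difference quotients in the second resolvent equality above: writing $R(\xi):=\big(\check{P}_0(\xi)-{\rm z}\big)^{-1}$ and $\xi=\xi_0+h\,e^*_j$ with $h\rightarrow0$, one obtains
$$
\frac{R(\xi)-R(\xi_0)}{h}\ =\ -\,R(\xi)\,\frac{\check{P}_0(\xi)-\check{P}_0(\xi_0)}{h}\,R(\xi_0)\ \underset{h\rightarrow0}{\longrightarrow}\ -\,R(\xi_0)\,\big(\partial_{\xi_j}\check{P}_0\big)(\xi_0)\,R(\xi_0),
$$
where we assume (as suggested by Example \ref{E.A.20}, but not stated explicitly here) that the derivative $\partial_{\xi_j}\check{P}_0(\xi_0)$ exists in $\mathbb{B}(\mathcal{K}_{m,0};\mathcal{K}_0)$; the higher order derivatives are then obtained by iterating this identity.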
\end{proof}
\begin{lemma}\label{L.7.4}
The following equality is true:
\begin{equation}\label{7.9}
\widehat{P}_0\ =\ \int_{\mathbb{T}_*}^\oplus\widehat{P}_0(\xi)\,d\xi.
\end{equation}
\end{lemma}
\begin{proof}
First let us notice the equality
\begin{equation}\label{7.10}
\big(\widehat{P}_0u\big)(\cdot,\xi)\ =\ \widehat{P}_0(\xi)u(\cdot,\xi),\qquad\forall\xi\in\mathbb{T}_*,\ \forall u\in\mathscr{F}_0(\Xi).
\end{equation}
Then, from \eqref{7.8} we deduce the equality:
$$
\big(\widehat{P}_0(\xi)\,+\,i\big)^{-1}\ =\ \sigma_\xi\big(\check{P}_0(\xi)\,+\,i\big)^{-1}\sigma_{-\xi},\qquad\forall\xi\in\mathcal X^*
$$
and from Lemma \ref{L.7.3} we obtain the continuity of the application
$$
\mathbb{T}_*\ni\xi\mapsto\big(\widehat{P}_0(\xi)\,+\,i\big)^{-1}\in\mathbb{B}\big(L^2(E)\big),
$$
and this, together with \eqref{7.10}, implies equality \eqref{7.9}.
\end{proof}
\begin{remark}\label{R.7.5}
Let us notice that $\mathcal{K}_{m,\xi}$ is compactly embedded into $\mathcal{K}_0$ and thus the operator $\check{P}_0(\xi)$ has compact resolvent for any $\xi\in\mathcal X^*$; it is clearly lower semi-bounded uniformly with respect to $\xi\in\mathcal X^*$ (taking into account that $\check{P}_0(\xi+\gamma^*)=\sigma_{-\gamma^*}\check{P}_0(\xi)\sigma_{\gamma^*},\ \forall\gamma^*\in\Gamma^*$). We deduce that $\sigma\big(\widehat{P}_0(\xi)\big)=\sigma\big(\check{P}_0(\xi)\big)=\{\lambda_j(\xi)\}_{j\geq1}$, where for any $\xi\in\mathcal X^*$ and any $j\geq1$, $\lambda_j(\xi)$ is a real eigenvalue of finite multiplicity and $\underset{j\rightarrow\infty}{\lim}\lambda_j(\xi)=\infty\ \forall\xi\in\mathcal X^*$; we can always renumber the eigenvalues and suppose that $\lambda_j(\xi)\leq\lambda_{j+1}(\xi)$ for any $j\geq1$ and for any $\xi\in\mathcal X^*$. Due to the $\Gamma^*$-periodicity of $\widehat{P}_0(\xi)$ we have that $\lambda_j(\xi+\gamma^*)=\lambda_j(\xi)$ for any $j\geq1$, for any $\xi\in\mathcal X^*$ and for any $\gamma^*\in\Gamma^*$. These are the {\it Floquet eigenvalues of the operator $\widehat{P}_0$}.
\end{remark}
\begin{lemma}\label{L.7.6}
For each $j\geq1$ the function $\mathbb{T}^{*,d}\ni\xi\mapsto\lambda_j(\xi)\in\mathbb{R}$ is continuous on $\mathbb{T}^{*,d}$ uniformly in $j\geq1$.
\end{lemma}
\begin{proof}
From the uniform lower semi-boundedness it follows that there exists a real number $c\in\mathbb{R}$ such that $\lambda_j(\xi)\geq\,c+1$ for any $j\geq1$ and for any $\xi\in\mathcal X^*$. We can thus define the operators $R(\xi):=\big(\check{P}_0(\xi)-c\big)^{-1}$, for any $\xi\in\mathcal X^*$, and due to the result in Lemma \ref{L.7.3} we obtain a function of class $C^\infty\big(\mathcal X^*;\mathbb{B}(\mathcal{K}_0)\big)$; for any $\xi\in\mathcal X^*$ the operator $R(\xi)$ is a bounded self-adjoint operator on $\mathcal{K}_0$ and $\sigma\big(R(\xi)\big)=\{\big(\lambda_j(\xi)-c\big)^{-1}\}_{j\geq1}$. Applying now the {\it Min-Max Principle} (see \cite{RS-4}) we obtain easily that:
$$
\left|\big(\lambda_j(\xi)-c\big)^{-1}\,-\,\big(\lambda_j(\xi_0)-c\big)^{-1}\right|\ \leq\ \left\|R(\xi)\,-\,R(\xi_0)\right\|_{\mathbb{B}(\mathcal{K}_0)},\qquad\forall(\xi,\xi_0)\in[\mathcal X^*]^2,\ \forall j\geq1.
$$
Together with the norm-continuity of the application $\xi\mapsto R(\xi)$, this inequality implies the continuity of each function $\lambda_j$ on $\mathbb{T}^{*,d}$, uniformly with respect to $j\geq1$.
\end{proof}
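For the reader's convenience, let us sketch the Min-Max step used in the above proof (a standard argument, see \cite{RS-4}). Listing the eigenvalues of the compact self-adjoint operator $R(\xi)$ in decreasing order, the Courant-Fischer formula gives
$$
\big(\lambda_j(\xi)-c\big)^{-1}\ =\ \underset{V\subset\mathcal{K}_0,\ \dim V=j}{\max}\ \ \underset{u\in V,\ \|u\|_{\mathcal{K}_0}=1}{\min}\ \big(R(\xi)u\,,\,u\big)_{\mathcal{K}_0},
$$
and since $\big(R(\xi)u,u\big)_{\mathcal{K}_0}\leq\big(R(\xi_0)u,u\big)_{\mathcal{K}_0}+\left\|R(\xi)-R(\xi_0)\right\|_{\mathbb{B}(\mathcal{K}_0)}$ for every normed vector $u$, taking the minimum over $u\in V$, then the maximum over $V$, and finally exchanging the roles of $\xi$ and $\xi_0$, yields the inequality displayed in the proof.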
\begin{lemma}\label{L.7.7}
We have the following spectral decomposition: $\sigma\big(P_0\big)=\sigma\big(\widehat{P}_0\big)=\overset{\infty}{\underset{k=1}{\cup}}J_k$, where $J_k:=\lambda_k\big(\mathbb{T}^{*,d}\big)$ is a compact interval in $\mathbb{R}$.
\end{lemma}
\begin{proof}
Lemma \ref{L.7.2} states that the operators $P_0$ and $\widehat{P}_0$ are unitarily equivalent and thus they have the same spectrum. From Theorem XIII.85 (d) in \cite{RS-4} it follows that:
$$
\lambda\,\in\,\sigma\big(\widehat{P}_0\big)\quad\Longleftrightarrow\quad\forall\epsilon>0,\ \left|\left\{\xi\in\mathbb{T}^{*,d}\,\mid\,\sigma\big(\check{P}_0(\xi)\big)\cap(\lambda-\epsilon,\lambda+\epsilon)\ne\emptyset\right\}\right|\,>\,0.
$$
Let us denote $M:=\overset{\infty}{\underset{k=1}{\cup}}J_k$. If $\lambda\in M$, there exist $\xi_0\in\mathbb{T}^{*,d}$ and $k\geq1$ such that $\lambda=\lambda_k(\xi_0)$. Due to the continuity of $\lambda_k(\xi)$ we know that for any $\epsilon>0$ there exists a neighborhood $V$ of $\xi_0$ in $\mathbb{T}^{*,d}$ such that $|\lambda_k(\xi)-\lambda|\leq\epsilon$ for any $\xi\in V$. It follows that $\lambda\in\sigma\big(\widehat{P}_0\big)$ and thus $M\subset\sigma\big(\widehat{P}_0\big)$.
On the other hand it is evident from the definitions that $\sigma\big(\widehat{P}_0\big)\subset\overline{M}$ and thus we only need to prove that $M$ is a closed set. Let us fix some $\lambda\in\overline{M}$; then there exists a sequence $\{\mu_l\}_{l\geq1}\subset M$ such that $\mu_l\underset{l\rightarrow\infty}{\rightarrow}\lambda$. We deduce that for any $l\geq1$ there exist a point $\xi^l\in\mathbb{T}^{*,d}$ and an index $k_l\geq1$ such that $\mu_l=\lambda_{k_l}(\xi^l)$. The manifold $\mathbb{T}^{*,d}$ being compact, we may suppose (after extracting a subsequence) that there exists $\xi\in\mathbb{T}^{*,d}$ such that $\xi^l\underset{l\rightarrow\infty}{\rightarrow}\xi$. Due to the continuity of the functions $\lambda_j$, uniform with respect to $j\geq1$, we deduce that $\lambda_{k_l}(\xi)\underset{l\rightarrow\infty}{\rightarrow}\lambda$. Taking into account that $\lambda_j(\xi)\underset{j\rightarrow\infty}{\rightarrow}\infty$ for any $\xi\in\mathbb{T}^{*,d}$ we deduce that the sequence $\{k_l\}_{l\geq1}$ is bounded; extracting a further subsequence, we may suppose it constant, equal to some $k\geq1$. We conclude that $\lambda=\lambda_{k}(\xi)$, which means that $\lambda\in M$.
\end{proof}
If we suppose that Hypothesis H.7 is satisfied, i.e. there exists $k\geq1$ such that $J_k$ is a simple spectral band for $P_0$, then we have some more regularity for the Floquet eigenvalue $\lambda_k(\xi)$.
\begin{lemma}\label{L.7.8}
Under Hypothesis H.7, if $J_k$ is a simple spectral band for $P_0$, then the function $\lambda_k(\xi)$ is of class $C^\infty(\mathbb{T}^{*,d})$.
\end{lemma}
\begin{proof}
Let us fix a circle $\mathscr{C}$ in the complex plane having its center on the real axis and such that: $J_k$ is contained in the open interior domain delimited by $\mathscr{C}$ and all the other spectral bands $J_l$ with $l\ne k$ are contained in the exterior open domain delimited by $\mathscr{C}$ (that is unbounded). In particular, for such a choice we get that the distance $d\big(\mathscr{C}\,,\,\underset{\xi\in\mathcal X^*}{\cup}\sigma\big(\check{P}_0(\xi)\big)\big)>0$.
With the above notations let us define the following operator:
\begin{equation}\label{7.11}
\Pi_k(\xi)\ :=\ -\frac{i}{2\pi}\oint_{\mathscr{C}}\big(\check{P}_0(\xi)-{\rm z}\big)^{-1}d{\rm z},\qquad\forall\xi\in\mathcal X^*.
\end{equation}
One verifies easily that it is an orthogonal projection onto the subspace $\mathcal{N}_k(\xi):=\ker\big(\check{P}_0(\xi)-\lambda_k(\xi)\big)$, which is a complex vector space of dimension 1. Moreover, it is easy to verify using Lemma \ref{L.7.3} that the application $\mathbb{T}^{*,d}\ni\xi\mapsto\Pi_k(\xi)\in\mathbb{B}(\mathcal{K}_0)$ is of class $C^\infty\big(\mathbb{T}^{*,d};\mathbb{B}(\mathcal{K}_0)\big)$.
Let us now fix some point $\xi_0\in\mathbb{T}^{*,d}$ and some vector $\phi(\xi_0)\in\mathcal{N}_k(\xi_0)$ having norm $\|\phi(\xi_0)\|_{\mathcal{K}_0}=1$. We can find a sufficiently small open neighborhood $V_0$ of $\xi_0$ in $\mathbb{T}^{*,d}$ such that
$$
\left\|\Pi_k(\xi)\phi(\xi_0)\right\|_{\mathcal{K}_0}\ \geq\ \frac{1}{2},\qquad\forall\xi\in V_0.
$$
We denote by
$$
\phi(\xi)\ :=\ \left\|\Pi_k(\xi)\phi(\xi_0)\right\|_{\mathcal{K}_0}^{-1}\Pi_k(\xi)\phi(\xi_0),\qquad\forall\xi\in V_0.
$$
For any $\xi\in V_0$ the vector $\phi(\xi)$ is a norm one vector that generates the subspace $\mathcal{N}_k(\xi)$ and $\phi\in C^\infty\big(V_0;\mathcal{K}_0\big)$.
We choose now $c\in\mathbb{R}$ as in the proof of Lemma \ref{L.7.6}. Then, for any $\xi\in V_0$ we have that
$$
\big(\lambda_k(\xi)\,-\,c\big)^{-1}\phi(\xi)\ =\ \big(\check{P}_0(\xi)\,-\,c{\rm id\hspace*{-1.5pt}l}\big)^{-1}\phi(\xi)
$$
and we conclude that
$$
\big(\lambda_k(\xi)\,-\,c\big)^{-1}\ =\ \left(\big(\check{P}_0(\xi)\,-\,c{\rm id\hspace*{-1.5pt}l}\big)^{-1}\phi(\xi)\,,\,\phi(\xi)\right)_{\mathcal{K}_0}.
$$
Using Lemma \ref{L.7.3} we conclude finally that $\lambda_k\in C^\infty(V_0)$; the point $\xi_0\in\mathbb{T}^{*,d}$ being arbitrary, the conclusion of the Lemma follows.
\end{proof}
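In particular, inverting the last identity in the above proof, one obtains on $V_0$ the explicit local representation
$$
\lambda_k(\xi)\ =\ c\,+\,\left[\Big(\big(\check{P}_0(\xi)\,-\,c{\rm id\hspace*{-1.5pt}l}\big)^{-1}\phi(\xi)\,,\,\phi(\xi)\Big)_{\mathcal{K}_0}\right]^{-1},\qquad\forall\xi\in V_0,
$$
which exhibits $\lambda_k$ as a composition of $C^\infty$ maps, the scalar product above being strictly positive since $\lambda_k(\xi)>c$.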
\begin{lemma}\label{L.7.9}
With the above definitions and notations the following statements are true:
\begin{enumerate}
\item For any $(s,\xi)\in\mathbb{R}\times\mathcal X^*$ the Hilbert spaces $\mathcal{K}_{s,\xi}$ and $\mathscr{F}_{s,\xi}$ are stable under complex conjugation.
\item $\forall\xi\in\mathcal X^*$ and $\forall\gamma^*\in\Gamma^*$ we have that $\check{P}_0(\xi+\gamma^*)=\sigma_{-\gamma^*}\check{P}_0(\xi)\sigma_{\gamma^*}$ and $\lambda_j(\xi+\gamma^*)=\lambda_j(\xi)$ for any $j\geq1$.
\item If the symbol $p_0$ verifies the property
\begin{equation}\label{7.9.a}
p_0(x,-\xi)\ =\ p_0(x,\xi)
\end{equation}
then the following relations hold:
\begin{equation}\label{7.9.e}
\overline{\check{P}_0(\xi)u}\ =\ \check{P}_0(-\xi)\overline{u},\qquad\forall u\in\mathcal{K}_{m,\xi},\ \forall\xi\in\mathcal X^*.
\end{equation}
\begin{equation}\label{7.9.f}
\lambda_j(-\xi)\ =\ \lambda_j(\xi),\qquad\forall j\geq1.
\end{equation}
\begin{equation}\label{7.9.g}
\overline{\Pi_k(\xi)u}\ =\ \Pi_k(-\xi)\overline{u},\qquad\forall u\in\mathcal{K}_0,\ \forall\xi\in\mathcal X^*,
\end{equation}
for any simple spectral band $J_k$ of $P_0$.
\end{enumerate}
\end{lemma}
\begin{proof} The first statement follows directly from the definitions \eqref{A.21} and \eqref{A.23}, while the second statement follows from Remark \ref{R.7.5}.
As we know that $\check{P}_0(\xi)$ is induced by $P_{0,\xi}:=\mathfrak{Op}\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\xi})p_0\big)$ on the Hilbert space $\mathcal{K}_0$, it is enough to prove that $\overline{P_{0,\xi}u}=P_{0,-\xi}\overline{u}$ for any $u\in\mathscr{S}(\mathcal X)$. In fact, for any $x\in\mathcal X$ we have that:
$$
\overline{\big(P_{0,\xi}u\big)(x)}\ =\ \int_\Xi e^{i<\eta,y-x>}p_0\big(\frac{x+y}{2},\xi+\eta\big)\overline{u(y)}\,dy\,\;\;\bar{}\!\!\!d\eta\ =
$$
$$
=\ \int_\Xi e^{i<\eta,x-y>}p_0\big(\frac{x+y}{2},\xi-\eta\big)\overline{u(y)}\,dy\,\;\;\bar{}\!\!\!d\eta\ =\ \int_\Xi e^{i<\eta,x-y>}p_0\big(\frac{x+y}{2},-\xi+\eta\big)\overline{u(y)}\,dy\,\;\;\bar{}\!\!\!d\eta\ =\ \big(P_{0,-\xi}\overline{u}\big)(x).
$$
Let us fix now some point $\xi\in\mathcal X^*$ and some vector $u\in\mathcal{K}_{m,\xi}$; following statement (1) of this Lemma and \eqref{7.9.e}, the vector $u$ is an eigenvector of $\check{P}_0(\xi)$ for the eigenvalue $\lambda_j(\xi)$ if and only if $\overline{u}$ is an eigenvector of $\check{P}_0(-\xi)$ for the eigenvalue $\lambda_j(\xi)$. We deduce that $\{\lambda_j(-\xi)\}_{j\geq1}\,=\,\{\lambda_j(\xi)\}_{j\geq1}$; as both sequences are non-decreasing we conclude that $\lambda_j(-\xi)=\lambda_j(\xi)$ for any $j\geq1$, so that we obtain \eqref{7.9.f}.
Finally \eqref{7.9.g} follows from \eqref{7.9.e} and \eqref{7.11}.
\end{proof}
The next Lemma is very important for the construction of the Grushin problem under Hypothesis H.7; we shall prove it following the ideas in \cite{HS1}.
\begin{lemma}\label{L.7.10}
Supposing that Hypothesis H.7 is also satisfied and supposing that $p(y,-\eta)=p(y,\eta)$ for any $(y,\eta)\in\Xi$, we can construct a function $\phi$ having the following properties:
\begin{enumerate}
\item $\phi\in C^\infty(\Xi;\mathcal{K}_{lm,0})$ for any $l\in\mathbb{N}$.
\item $\phi(y+\gamma,\eta)\ =\ \phi(y,\eta),\quad\forall(y,\eta)\in\Xi,\ \forall\gamma\in\Gamma$.
\item $\phi(y,\eta+\gamma^*)\ =\ e^{-i<\gamma^*,y>}\phi(y,\eta),\quad\forall(y,\eta)\in\Xi,\ \forall\gamma^*\in\Gamma^*$.
\item $\left\|\phi(\cdot,\eta)\right\|_{\mathcal{K}_0}\ =\ 1,\qquad\forall\eta\in\mathcal X^*$.
\item $\overline{\phi(y,\eta)}\ =\ \phi(y,-\eta)\quad\forall(y,\eta)\in\Xi$.
\item $\phi(\cdot,\eta)\,\in\,\mathcal{N}_k(\eta)\,=\,\ker\big(\check{P}_0(\eta)\,-\,\lambda_k(\eta)\big),\quad\forall\eta\in\mathcal X^*$.
\end{enumerate}
\end{lemma}
\begin{proof}
First we shall construct a function $\phi\in C\big(\mathcal X^*;\mathcal{K}_0\big)$ that satisfies properties (2)-(6).
We begin by recalling that $\Pi_k\in C^\infty\big(\mathcal X^*;\mathbb{B}(\mathcal{K}_0)\big)$ and deducing that there exists some $\delta>0$ such that for any pair of points $(\xi^\prime,\xi^{\prime\prime})\in[\overline{E^*}]^2$ with $|\xi^\prime-\xi^{\prime\prime}|<\delta$ the following estimation is true:
\begin{equation}\label{7.12}
\left\|\Pi_k(\xi^\prime)\,-\,\Pi_k(\xi^{\prime\prime})\right\|_{\mathbb{B}(\mathcal{K}_0)}\ <\ \frac{1}{2}.
\end{equation}
We decompose the vectors of $\mathcal X^*$ with respect to the dual basis $\{e^*_j\}_{1\leq j\leq d}$ associated to $\Gamma$, and write $\xi=t_1e^*_1+\dots+t_de^*_d$ or $\xi=(t_1,\ldots,t_d)$ for any $\xi\in\mathcal X^*$. In particular we have that $\xi\in E^*$ if and only if $-(1/2)\leq t_j<(1/2)$ for any $j=1,\ldots,d$. The construction of the function $\phi$ will be done by induction on the number $d$ of variables $(t_1,\ldots,t_d)$.
We start with the case $d=1$, or equivalently, we consider only momenta of the type $(t,0)$ with $t\in\mathbb{R}$ and $0\in\mathbb{R}^{d-1}$. We choose some $n\in\mathbb{N}^*$ such that $n^{-1}<\delta$ and consider a division of the interval $[0,1/2]\subset\mathbb{R}$:
$$
0=\tau_0<\tau_1<\ldots<\tau_n=1/2,\quad\tau_j=\frac{j}{2n},\ \forall0\leq j\leq n.
$$
Let us fix some vector $\psi(0)\in\mathcal{N}_k((0,0))$ such that $\|\psi(0)\|_{\mathcal{K}_0}=1$ and $\overline{\psi(0)}=\psi(0)$; this last property may be realized by noticing that Lemma \ref{L.7.9} implies that if $v\in\mathcal{N}_k((0,0))$ then also $\overline{v}\in\mathcal{N}_k((0,0))$, and if $\|\psi(0)\|_{\mathcal{K}_0}=1$ then there exists $f\in\mathbb{R}$ such that $\overline{\psi(0)}=e^{if}\psi(0)$ and we can replace $\psi(0)$ by $e^{if/2}\psi(0)$. We use now \eqref{7.12} and deduce that
$$
\left\|\Pi_k((t,0))\psi(0)\right\|_{\mathcal{K}_0}\ \geq\ \frac{1}{2},\quad\forall t\in[0,\tau_1].
$$
Thus we can define:
\begin{equation}\label{7.14}
\psi(t)\ :=\ \left\|\Pi_k((t,0))\psi(0)\right\|_{\mathcal{K}_0}^{-1}\Pi_k((t,0))\psi(0),\quad\forall t\in[0,\tau_1].
\end{equation}
We notice that this function verifies the following properties:
\begin{equation}\label{7.13}
\psi\in C\big([0,\tau_1];\mathcal{K}_0\big);\quad\overline{\psi(0)}=\psi(0);\quad\|\psi(t)\|_{\mathcal{K}_0}=1,\ \forall t\in[0,\tau_1];\quad\psi(t)\in\mathcal{N}_k((t,0)),\ \forall t\in[0,\tau_1].
\end{equation}
But we can use \eqref{7.12} once more, starting with $\psi(\tau_1)$, since we also have
$$
\left\|\Pi_k((t,0))\psi(\tau_1)\right\|_{\mathcal{K}_0}\ \geq\ \frac{1}{2},\quad\forall t\in[\tau_1,\tau_2].
$$
and defining
\begin{equation}\label{7.14.b}
\psi(t)\ :=\ \left\|\Pi_k((t,0))\psi(\tau_1)\right\|_{\mathcal{K}_0}^{-1}\Pi_k((t,0))\psi(\tau_1),\quad\forall t\in[\tau_1,\tau_2].
\end{equation}
We notice that $\underset{t\searrow\tau_1}{\lim}\psi(t)=\psi(\tau_1)$ and conclude that the function $\psi:[0,\tau_2]\rightarrow\mathcal{K}_0$, defined by \eqref{7.14} and \eqref{7.14.b}, is continuous at $t=\tau_1$ and also verifies
\begin{equation}\label{7.13.2}
\psi\in C\big([0,\tau_2];\mathcal{K}_0\big);\quad\overline{\psi(0)}=\psi(0);\quad\|\psi(t)\|_{\mathcal{K}_0}=1,\ \forall t\in[0,\tau_2];\quad\psi(t)\in\mathcal{N}_k((t,0)),\ \forall t\in[0,\tau_2].
\end{equation}
Continuing in this way, after a finite number of steps ($n$ steps) we obtain a function $\psi:[0,(1/2)]\rightarrow\mathcal{K}_0$ verifying the properties:
\begin{equation}\label{7.13.n}
\psi\in C\big([0,1/2];\mathcal{K}_0\big);\quad\overline{\psi(0)}=\psi(0);\quad\|\psi(t)\|_{\mathcal{K}_0}=1,\ \forall t\in[0,1/2];\quad\psi(t)\in\mathcal{N}_k((t,0)),\ \forall t\in[0,1/2].
\end{equation}
We extend now this function to the interval $[-(1/2),(1/2)]$ by defining $\psi(-t):=\overline{\psi(t)}$ for any $t\in[0,1/2]$. It verifies the properties:
\begin{equation}\label{7.13.-}
\psi\in C\big([-1/2,1/2];\mathcal{K}_0\big);\qquad\psi(t)\in\mathcal{N}_k((t,0)),\ \forall t\in[-1/2,1/2].
\end{equation}
The second property above follows easily from \eqref{7.9.e}.
We take now into account the second statement of the Lemma \ref{L.7.9} and notice that it implies the equality
$$
\check{P}_0((1/2,0))\ =\ \sigma_{-e^*_1}\check{P}_0((-1/2,0))\sigma_{e^*_1}
$$
and from that we deduce that $\sigma_{-e^*_1}\psi(-1/2)$ is an eigenvector of $\check{P}_0((1/2,0))$ for the eigenvalue $\lambda_k((-1/2,0))=\lambda_k((1/2,0))$ (following \eqref{7.9.f}). We deduce that there exists $\kappa\in\mathbb{R}$ such that
$$
\sigma_{-e^*_1}\psi(-1/2)\ =\ e^{i\kappa}\psi(1/2).
$$
Let us define now $\phi(t):=e^{it\kappa}\psi(t)$ for any $t\in[-1/2,1/2]$. It will evidently have the following properties:
\begin{equation}\label{7.phi}
\phi\in C\big([-1/2,1/2];\mathcal{K}_0\big);\quad\phi(t)\in\mathcal{N}_k((t,0)),\ \forall t\in[-1/2,1/2];\quad\phi(1/2)=\sigma_{-e^*_1}\phi(-1/2).
\end{equation}
We extend this function to $\mathbb{R}$ by the following reccurence relation:
\begin{equation}\label{7.16}
\phi(t)\ :=\ \sigma_{-e^*_1}\phi(t-1),\qquad\forall t\in[j-1/2,j+1/2],\ j\in\mathbb{Z}.
\end{equation}
We obtain a function $\phi:\mathbb{R}\rightarrow\mathcal{K}_0$ verifying the following properties:
\begin{equation}\label{7.17}
\left\{
\begin{array}{l}
\phi\,\in\,C\big(\mathbb{R};\mathcal{K}_0\big).\\
\phi(t+l)\ =\ \sigma_{-le^*_1}\,\phi(t),\qquad\forall t\in\mathbb{R},\ \forall l\in\mathbb{Z}.\\
\|\phi(t)\|_{\mathcal{K}_0}\ =\ 1,\qquad\forall t\in\mathbb{R}.\\
\overline{\phi(t)}\ =\ \phi(-t),\qquad\forall t\in\mathbb{R}.\\
\phi(t)\,\in\,\mathcal{N}_k((t,0)),\qquad\forall t\in\mathbb{R}.
\end{array}
\right.
\end{equation}
Let us now write $t^\prime:=(t_1,\ldots,t_{d-1})\in\mathbb{R}^{d-1}$ and suppose, by the induction hypothesis, that we have constructed a function $\psi:\mathbb{R}^{d-1}\rightarrow\mathcal{K}_0$ satisfying the following properties:
\begin{equation}\label{7.18}
\left\{
\begin{array}{l}
\psi\,\in\,C\big(\mathbb{R}^{d-1};\mathcal{K}_0\big).\\
\psi(t^\prime+l^\prime)\ =\ \sigma_{-<e^{*\prime},l^\prime>}\,\psi(t^\prime),\qquad\forall t^\prime\in\mathbb{R}^{d-1},\ \forall l^\prime\in\mathbb{Z}^{d-1}.\\
\|\psi(t^\prime)\|_{\mathcal{K}_0}\ =\ 1,\qquad\forall t^\prime\in\mathbb{R}^{d-1}.\\
\overline{\psi(t^\prime)}\ =\ \psi(-t^\prime),\qquad\forall t^\prime\in\mathbb{R}^{d-1}.\\
\psi(t^\prime)\,\in\,\mathcal{N}_k((t^\prime,0)),\qquad\forall t^\prime\in\mathbb{R}^{d-1}.
\end{array}
\right.
\end{equation}
We now want to construct a function $\phi:\mathbb{R}^d\rightarrow\mathcal{K}_0$ satisfying the same properties as $\psi$ above (with $d-1$ replaced by $d$). To do that, for any $t=(t_1,\ldots,t_d)\in\mathbb{R}^d$ we write $t=(t^\prime,t_d)$ with $t^\prime=(t_1,\ldots,t_{d-1})\in\mathbb{R}^{d-1}$.
We start by defining $\widetilde{\psi}(t^\prime,0):=\psi(t^\prime)$ for any $t^\prime\in\mathbb{R}^{d-1}$. Repeating the argument at the beginning of this proof, we extend our function $\widetilde{\psi}$ step by step to subsets of the form $\mathbb{R}^{d-1}\times[\tau_j,\tau_{j+1}]$ with $0\leq j\leq n-1$. Finally we extend it to the subset $\mathbb{R}^{d-1}\times[-1/2,1/2]$ by the definition $\widetilde{\psi}(t^\prime,-t_d):=\overline{\widetilde{\psi}(-t^\prime,t_d)}$ for any $t^\prime\in\mathbb{R}^{d-1}$ and any $t_d\in[0,1/2]$.
We obtain in this way a function $\widetilde{\psi}:\mathbb{R}^{d-1}\times[-1/2,1/2]\rightarrow\mathcal{K}_0$ verifying the properties:
\begin{equation}\label{7.19}
\left\{
\begin{array}{l}
\widetilde{\psi}\,\in\,C\big(\mathbb{R}^{d-1}\times[-1/2,1/2];\mathcal{K}_0\big).\\
\widetilde{\psi}(t^\prime+l^\prime,t_d)\ =\ \sigma_{-<e^{*\prime},l^\prime>}\,\widetilde{\psi}(t^\prime,t_d),\qquad\forall t^\prime\in\mathbb{R}^{d-1},\ \forall l^\prime\in\mathbb{Z}^{d-1},\ \forall t_d\in[-1/2,1/2].\\
\|\widetilde{\psi}(t^\prime,t_d)\|_{\mathcal{K}_0}\ =\ 1,\qquad\forall t^\prime\in\mathbb{R}^{d-1},\ \forall t_d\in[-1/2,1/2].\\
\overline{\widetilde{\psi}(t^\prime,t_d)}\ =\ \widetilde{\psi}(-t^\prime,-t_d),\qquad\forall t^\prime\in\mathbb{R}^{d-1},\ \forall t_d\in[-1/2,1/2].\\
\widetilde{\psi}(t^\prime,t_d)\,\in\,\mathcal{N}_k((t^\prime,t_d)),\qquad\forall t^\prime\in\mathbb{R}^{d-1},\ \forall t_d\in[-1/2,1/2].
\end{array}
\right.
\end{equation}
Now we come once more to the second statement of the Lemma \ref{L.7.9} and notice that:
$$
\check{P}_0((t^\prime,1/2))\ =\ \sigma_{-e^*_d}\check{P}_0((t^\prime,-1/2))\sigma_{e^*_d},\qquad\forall t^\prime\in\mathbb{R}^{d-1}.
$$
We deduce that $\sigma_{-e^*_d}\widetilde{\psi}(t^\prime,-1/2)$ is a normed eigenvector of $\check{P}_0((t^\prime,1/2))$ for the eigenvalue $\lambda_k((t^\prime,-1/2))=\lambda_k((t^\prime,1/2))$ (we made use of point (3) in Lemma \ref{L.7.9}). We conclude that there exists a function $\kappa^\prime:\mathbb{R}^{d-1}\rightarrow\mathbb{R}$ such that
\begin{equation}\label{7.21}
\sigma_{-e^*_d}\widetilde{\psi}(t^\prime,-1/2)\ =\ e^{i\kappa^\prime(t^\prime)}\widetilde{\psi}(t^\prime,1/2),\qquad\forall t^\prime\in\mathbb{R}^{d-1}.
\end{equation}
From the continuity of the function $\widetilde{\psi}\in C\big(\mathbb{R}^{d-1}\times[-1/2,1/2];\mathcal{K}_0\big)$ we deduce the continuity of the function $e^{i\kappa^\prime}:\mathbb{R}^{d-1}\rightarrow\mathbb{U}(1)$ and also of the function $\kappa^\prime:\mathbb{R}^{d-1}\rightarrow\mathbb{R}$. We take into account now \eqref{7.21} and the second equality in \eqref{7.19} with $t^\prime\in\mathbb{R}^{d-1}$ and $l^\prime\in\mathbb{Z}^{d-1}$ and get
$$
e^{i\kappa^\prime(t^\prime+l^\prime)}\widetilde{\psi}(t^\prime+l^\prime,1/2)\ =\ \sigma_{-e^*_d}\widetilde{\psi}(t^\prime+l^\prime,-1/2)\ =\ \sigma_{-e^*_d}\sigma_{-<e^{*\prime},l^\prime>}\widetilde{\psi}(t^\prime,-1/2)\ =
$$
$$
=\ \sigma_{-<e^{*\prime},l^\prime>}e^{i\kappa^\prime(t^\prime)}\widetilde{\psi}(t^\prime,1/2)\ =\ e^{i\kappa^\prime(t^\prime)}\widetilde{\psi}(t^\prime+l^\prime,1/2).
$$
It follows that $e^{i[\kappa^\prime(t^\prime+l^\prime)-\kappa^\prime(t^\prime)]}=1$ and the function $\kappa^\prime:\mathbb{R}^{d-1}\rightarrow\mathbb{R}$ being continuous we deduce that for any $l^\prime\in\mathbb{Z}^{d-1}$ there exists $n(l^\prime)\in\mathbb{Z}$ such that
\begin{equation}\label{7.23}
\kappa^\prime(t^\prime+l^\prime)\,-\,\kappa^\prime(t^\prime)\ =\ 2\pi n(l^\prime),\qquad\forall t^\prime\in\mathbb{R}^{d-1}.
\end{equation}
But from \eqref{7.21} we deduce that
$$
\sigma_{-e^*_d}\widetilde{\psi}(-t^\prime,-1/2)\ =\ e^{i\kappa^\prime(-t^\prime)}\widetilde{\psi}(-t^\prime,1/2),\qquad\forall t^\prime\in\mathbb{R}^{d-1}.
$$
After complex conjugation and making use of \eqref{7.19} we get
$$
\sigma_{e^*_d}\widetilde{\psi}(t^\prime,1/2)\ =\ e^{-i\kappa^\prime(-t^\prime)}\widetilde{\psi}(t^\prime,-1/2),\qquad\forall t^\prime\in\mathbb{R}^{d-1},
$$
or equivalently
$$
\sigma_{-e^*_d}\widetilde{\psi}(t^\prime,-1/2)\ =\ e^{i\kappa^\prime(-t^\prime)}\widetilde{\psi}(t^\prime,1/2),\qquad\forall t^\prime\in\mathbb{R}^{d-1}.
$$
We use once again \eqref{7.21} in order to obtain the equality $e^{i\kappa^\prime(-t^\prime)}=e^{i\kappa^\prime(t^\prime)}$, or equivalently the relation $\kappa^\prime(-t^\prime)-\kappa^\prime(t^\prime)\in2\pi\mathbb{Z}$. As the function $\kappa^\prime$ is continuous, we conclude that the difference $\kappa^\prime(-t^\prime)-\kappa^\prime(t^\prime)$ must be constant; since at $t^\prime=0$ this difference vanishes, we conclude that
\begin{equation}\label{7.24}
\kappa^\prime(-t^\prime)=\kappa^\prime(t^\prime),\qquad\forall t^\prime\in\mathbb{R}^{d-1}.
\end{equation}
We consider now \eqref{7.23} and notice that for any $l^\prime\in\mathbb{Z}^{d-1}$ we can choose the point $t^\prime:=-(1/2)l^\prime\in\mathbb{R}^{d-1}$ verifying the relation $t^\prime+l^\prime=-t^\prime$ and thus we conclude that $n(l^\prime)=0$ for any $l^\prime\in\mathbb{Z}^{d-1}$ obtaining the equalities
\begin{equation}\label{7.25}
\kappa^\prime(t^\prime+l^\prime)\ =\ \kappa^\prime(t^\prime),\qquad\forall t^\prime\in\mathbb{R}^{d-1},\ \forall l^\prime\in\mathbb{Z}^{d-1}.
\end{equation}
We can now define $\widetilde{\phi}:\mathbb{R}^{d-1}\times[-1/2,1/2]\rightarrow\mathcal{K}_0$ by the following equality:
\begin{equation}\label{7.26}
\widetilde{\phi}(t^\prime,t_d)\ :=\ e^{i\kappa^\prime(t^\prime)t_d}\widetilde{\psi}(t^\prime,t_d),\qquad\forall(t^\prime,t_d)\in\mathbb{R}^{d-1}\times[-1/2,1/2].
\end{equation}
From \eqref{7.24} and \eqref{7.25} the function $\widetilde{\phi}$ has all the properties \eqref{7.19} and it also has the following property:
\begin{equation}\label{7.20}
\widetilde{\phi}(t^\prime+l^\prime,1/2)\ =\ \sigma_{-<e^{*\prime},l^\prime>}\sigma_{-e^*_d}\widetilde{\phi}(t^\prime,-1/2),\qquad\forall t^\prime\in\mathbb{R}^{d-1},\ \forall l^\prime\in\mathbb{Z}^{d-1}.
\end{equation}
In fact we see that
$$
\widetilde{\phi}(t^\prime+l^\prime,1/2)\ =\ e^{i\kappa^\prime(t^\prime)/2}\widetilde{\psi}(t^\prime+l^\prime,1/2)\ =\ e^{i\kappa^\prime(t^\prime)/2}\sigma_{-<e^{*\prime},l^\prime>}\widetilde{\psi}(t^\prime,1/2)\ =\ e^{-i\kappa^\prime(t^\prime)/2}\sigma_{-<e^{*\prime},l^\prime>}\sigma_{-e^*_d}\widetilde{\psi}(t^\prime,-1/2)\ =
$$
$$
=\ \sigma_{-<e^{*\prime},l^\prime>}\sigma_{-e^*_d}\widetilde{\phi}(t^\prime,-1/2).
$$
As in the case $d=1$ we extend the function $\widetilde{\phi}$ to the entire $\mathbb{R}^d$ by the following relation:
\begin{equation}\label{7.27}
\widetilde{\phi}(t^\prime,t_d)\ :=\ \sigma_{-e^*_d}\widetilde{\phi}(t^\prime,t_d-1),\qquad\forall t^\prime\in\mathbb{R}^{d-1},\ \forall t_d\in[j-1/2,j+1/2],\ j\in\mathbb{Z}.
\end{equation}
We evidently obtain a function of class $C\big(\mathcal X^*;\mathcal{K}_0\big)$ satisfying the properties (2)-(6) in our Lemma.
We complete our construction with a regularization procedure. Let us choose a real, non-negative, even function $\chi\in C^\infty_0(\mathcal X^*)$ such that $\int_{\mathcal X^*}\chi(t)dt=1$. For any $\delta>0$ we denote by $\chi_\delta(\xi):=\delta^{-d}\chi(\xi/\delta)$ and define:
\begin{equation}\label{7.28}
\widetilde{\phi}_\delta(\xi)\ :=\ \int_{\mathcal X^*}\chi_\delta(\xi-\eta)\widetilde{\phi}(\eta)\,d\eta,\qquad\forall\xi\in\mathcal X^*.
\end{equation}
Evidently $\widetilde{\phi}_\delta\in C^\infty\big(\mathcal X^*;\mathcal{K}_0\big)$ for any $\delta>0$. Moreover we also have that:
\begin{equation}\label{7.29}
\widetilde{\phi}_\delta(\xi+\gamma^*)\ =\ \int_{\mathcal X^*}\chi_\delta(\xi+\gamma^*-\eta)\widetilde{\phi}(\eta)\,d\eta\ =\ \int_{\mathcal X^*}\chi_\delta(\xi-\eta)\widetilde{\phi}(\eta+\gamma^*)\,d\eta\ =\ \int_{\mathcal X^*}\chi_\delta(\xi-\eta)\sigma_{-\gamma^*}\widetilde{\phi}(\eta)\,d\eta\ =
\end{equation}
$$
=\ \sigma_{-\gamma^*}\widetilde{\phi}_\delta(\xi),\qquad\forall\xi\in\mathcal X^*,\ \forall\gamma^*\in\Gamma^*;
$$
\begin{equation}\label{7.30}
\overline{\widetilde{\phi}_\delta(\xi)}\ =\ \int_{\mathcal X^*}\chi_\delta(\xi-\eta)\overline{\widetilde{\phi}(\eta)}\,d\eta\ =\ \int_{\mathcal X^*}\chi_\delta(\xi-\eta)\widetilde{\phi}(-\eta)\,d\eta\ =
\end{equation}
$$
=\ \int_{\mathcal X^*}\chi_\delta(\xi+\eta)\widetilde{\phi}(\eta)\,d\eta\ =\ \int_{\mathcal X^*}\chi_\delta(-\xi-\eta)\widetilde{\phi}(\eta)\,d\eta\ =\ \widetilde{\phi}_\delta(-\xi),\qquad\forall\xi\in\mathcal X^*.
$$
We shall now prove the following uniform approximation property:
\begin{equation}\label{7.31}
\forall\theta>0,\ \exists\delta_0>0:\ \|\widetilde{\phi}_\delta(\xi)\,-\,\widetilde{\phi}(\xi)\|_{\mathcal{K}_0}\leq\,\theta,\ \forall\delta\in[0,\delta_0],\quad\forall\xi\in\mathcal X^*.
\end{equation}
In fact, let us notice that:
$$
\|\widetilde{\phi}_\delta(\xi)\,-\,\widetilde{\phi}(\xi)\|_{\mathcal{K}_0}\ \leq\ \int_{\mathcal X^*}\|\widetilde{\phi}(\xi-\delta\eta)\,-\,\widetilde{\phi}(\xi)\|_{\mathcal{K}_0}|\chi(\eta)|\,d\eta\ \leq\ C\int_{\text{\sf supp}\chi}\|\widetilde{\phi}(\xi-\delta\eta)\,-\,\widetilde{\phi}(\xi)\|_{\mathcal{K}_0}\,d\eta.
$$
The function $\widetilde{\phi}$ being continuous and satisfying the relation $\widetilde{\phi}(\xi+\gamma^*)=\sigma_{-\gamma^*}\widetilde{\phi}(\xi)$ for all $\gamma^*\in\Gamma^*$ and all $\xi\in\mathcal X^*$, we conclude that it is uniformly continuous on $\mathcal X^*$; the support of $\chi$ being compact, the above estimate then implies \eqref{7.31}. Since $\widetilde{\phi}(\xi)$ belongs to $\mathcal{N}_k(\xi)$ and has norm $1$, we have $\Pi_k(\xi)\widetilde{\phi}(\xi)=\widetilde{\phi}(\xi)$ and $\|\Pi_k(\xi)\|\leq1$, so that for $\delta\in[0,\delta_0]$ with $\delta_0>0$ small enough we have
$$
\left\|\Pi_k(\xi)\widetilde{\phi}_\delta(\xi)\right\|_{\mathcal{K}_0}\ \geq\ \frac{1}{2},\qquad\forall\xi\in\mathcal X^*
$$
and we can define:
$$
\phi(\xi)\ :=\ \|\Pi_k(\xi)\widetilde{\phi}_{\delta_0}(\xi)\|_{\mathcal{K}_0}^{-1}\Pi_k(\xi)\widetilde{\phi}_{\delta_0}(\xi),\qquad\forall\xi\in\mathcal X^*.
$$
It remains to prove that $\phi\in C^\infty\big(\mathcal X^*;\mathcal{K}_{lm,0}\big)$ for any $l\in\mathbb{N}$ because all the other properties from the statement of our Lemma are clearly satisfied considering the definition of $\phi$. The case $l=0$ is also clear from the definition.
Let us notice that $\check{P}_0\in C^\infty\big(\mathcal X^*;\mathbb{B}(\mathcal{K}_{m,0};\mathcal{K}_0)\big)$ and we know that for any $\xi_0\in\mathcal X^*$ there exists a constant $C_0>0$ such that
$$
\|u\|_{\mathcal{K}_{m,0}}\ \leq\ C_0\left(\left\|\check{P}_0(\xi_0)u\right\|_{\mathcal{K}_0}\,+\,\|u\|_{\mathcal{K}_0}\right),\qquad\forall u\in\mathcal{K}_{m,0}
$$
and evidently
$$
\|u\|_{\mathcal{K}_{m,0}}\ \leq\ C_0\left(\left\|\check{P}_0(\xi)u\right\|_{\mathcal{K}_0}\,+\,\left\|\big(\check{P}_0(\xi_0)\,-\,\check{P}_0(\xi)\big)u\right\|_{\mathcal{K}_0}\,+\,\|u\|_{\mathcal{K}_0}\right),\ \forall\xi\in\mathcal X^*,\qquad\forall u\in\mathcal{K}_{m,0}.
$$
Choosing a sufficiently small neighborhood $V_0$ of $\xi_0\in\mathcal X^*$ we deduce that there exists $C^\prime_0>0$ such that
$$
\|u\|_{\mathcal{K}_{m,0}}\ \leq\ C^\prime_0\left(\left\|\check{P}_0(\xi)u\right\|_{\mathcal{K}_0}\,+\,\|u\|_{\mathcal{K}_0}\right),\ \forall\xi\in V_0,\qquad\forall u\in\mathcal{K}_{m,0}.
$$
Thus, with $c\in\mathbb{R}$ the constant introduced in the proof of Lemma \ref{L.7.6}, if we take $u:=\big(\check{P}_0(\xi)\,-\,c\big)^{-1}v$ for some $v\in\mathcal{K}_0$ we obtain that
$$
\left\|\big(\check{P}_0(\xi)\,-\,c\big)^{-1}v\right\|_{\mathcal{K}_{m,0}}\ \leq\ C^{\prime\prime}_0\|v\|_{\mathcal{K}_0},\ \forall\xi\in V_0,\qquad\forall v\in\mathcal{K}_0.
$$
We conclude that $\big(\check{P}_0(\xi)\,-\,c\big)^{-1}\in\mathbb{B}(\mathcal{K}_0;\mathcal{K}_{m,0})$ uniformly for $\xi$ in any compact subset of $\mathcal X^*$.
Considering now the proof of Lemma \ref{L.7.3} once again we obtain that in fact $\phi\in C^\infty\big(\mathcal X^*;\mathcal{K}_{m,0}\big)$ and we get the case $l=1$.
In order to obtain the general case $l\in\mathbb{N}$ let us notice that for any $\xi\in\mathcal X^*$ we know that $\phi(\xi)\in\mathcal{D}\big(\check{P}_0(\xi)^l\big)$ and $\check{P}_0(\xi)^l\phi(\xi)=\lambda_k(\xi)^l\phi(\xi)$ for any $l\in\mathbb{N}$. We also have that $\check{P}_0(\cdot)^l\in C^\infty\big(\mathcal X^*;\mathbb{B}(\mathcal{K}_{lm,0};\mathcal{K}_0)\big)$. Since $\check{P}_0(\xi)^l$ is induced by the operator $P_{0,\xi}^l$ on the Hilbert space of tempered distributions $\mathcal{K}_0$ and $P_0^l$ is elliptic of order $lm$, an argument similar to the one above allows us to conclude that $\phi\in C^\infty\big(\mathcal X^*;\mathcal{K}_{lm,0}\big)$ for any $l\in\mathbb{N}$.
\end{proof}
\begin{remark}\label{R.7.11}
The arguments used in the proof of Lemma \ref{L.3.7} allow us to deduce from properties (1)-(3) of Lemma \ref{L.7.10} that for any $\alpha\in\mathbb{N}^d$ and for any $s\in\mathbb{R}$ there exists a constant $C_{\alpha,s}>0$ such that:
\begin{equation}\label{7.32}
\left\|\big(\partial^\alpha_\xi\phi\big)(\cdot,\xi)\right\|_{\mathcal{K}_{s,\xi}}\ \leq\ C_{\alpha,s},\quad\forall\xi\in\mathcal X^*.
\end{equation}
\end{remark}
\begin{description}
\item[Proof of Proposition \ref{P.0.3}]
We shall repeat the construction of the Grushin operator \eqref{3.34} from Section \ref{S.3} under the hypotheses of Proposition \ref{P.0.3}. We shall prove that in this case we can take $N=1$ and $\phi_1(x,\xi)=\phi(x,\xi)$ the function obtained in Lemma \ref{L.7.10}. Due to Lemma \ref{L.7.10} and Remark \ref{R.7.11} this function has all the properties needed in Lemma \ref{L.3.7}. It is thus possible to obtain the operator $\mathcal{P}_0(\xi,\lambda)$ and the essential problem is to prove its invertibility in order to obtain a result similar to Proposition \ref{P.3.8}. From that point the proof of Proposition \ref{P.0.3} just repeats the arguments of Section \ref{S.4}.
As in Section \ref{S.3}, for any $\xi\in\mathcal X^*$ we construct the operators:
\begin{equation}\label{7.33}
R_+(\xi)\,\in\,\mathbb{B}(\mathcal{K}_0,\mathbb{C}),\quad R_+(\xi)u\ :=\ \left(u,\phi(\cdot,\xi)\right)_{\mathcal{K}_0},\qquad\forall u\in\mathcal{K}_0,
\end{equation}
\begin{equation}\label{7.34}
R_-(\xi)\,\in\,\mathbb{B}(\mathbb{C},\mathcal{K}_0),\quad R_-(\xi)c\ :=\ c\phi(\cdot,\xi),\qquad\forall c\in\mathbb{C}.
\end{equation}
It is evident that $R_+\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_{m,\xi};\mathbb{C})\big)$ and $R_-\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C};\mathcal{K}_{0})\big)$ and from Example \ref{E.A.20} we deduce that $\check{P}_0(\cdot)\,-\,\lambda{\rm id\hspace*{-1.5pt}l}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_{m,\xi};\mathcal{K}_0)\big)$ uniformly for $\lambda\in I$.
For any $(\xi,\lambda)\in\mathcal X^*\times I$ we define
\begin{equation}\label{7.35}
\mathcal{P}_0(\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
\check{P}_0(\xi)\,-\,\lambda&R_-(\xi)\\
R_+(\xi)&0
\end{array}
\right)\,\in\,\mathbb{B}\big(\mathcal{K}_{m,\xi}\times\mathbb{C};\mathcal{K}_0\times\mathbb{C}\big).
\end{equation}
We denote by $\mathcal{A}_\xi:=\mathcal{K}_{m,\xi}\times\mathbb{C}$ and by $\mathcal{B}_\xi:=\mathcal{K}_0\times\mathbb{C}$ and we notice that
\begin{equation}\label{7.36}
\mathcal{P}_0(\cdot,\lambda)\,\in\,S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)\ \text{uniformly in }\lambda\in I.
\end{equation}
In order to construct an inverse for $\mathcal{P}_0(\xi,\lambda)$ we make the following choices:
\begin{equation}\label{7.38}
E^0_+(\xi,\lambda)\,\in\,\mathbb{B}(\mathbb{C};\mathcal{K}_{m,\xi}),\quad E^0_+(\xi,\lambda)c\ :=\ c\phi(\cdot,\xi),\qquad\forall c\in\mathbb{C},
\end{equation}
\begin{equation}\label{7.39}
E^0_-(\xi,\lambda)\,\in\,\mathbb{B}(\mathcal{K}_0;\mathbb{C}),\quad E^0_-(\xi,\lambda)u\ :=\ \left(u,\phi(\cdot,\xi)\right)_{\mathcal{K}_0},\qquad\forall u\in\mathcal{K}_0,
\end{equation}
\begin{equation}\label{7.40}
E^0_{-+}(\xi,\lambda)\,\in\,\mathbb{B}(\mathbb{C}),\quad E^0_{-+}(\xi,\lambda)c\ :=\ \big(\lambda\,-\,\lambda_k(\xi)\big)c,\qquad\forall c\in\mathbb{C},
\end{equation}
\begin{equation}\label{7.41}
E^0(\xi,\lambda)\,\in\,\mathbb{B}(\mathcal{K}_0;\mathcal{K}_{m,\xi}),\quad E^0(\xi,\lambda)\ :=\ \left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right]\left[\check{P}_0(\xi)\,-\,\lambda\right]^{-1}\left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right],
\end{equation}
where $\lambda_k(\xi)$ is the Floquet eigenvalue generating the simple spectral band $J_k$ and $\Pi_k(\xi)$ is the orthogonal projection on $\mathcal{N}_k(\xi)$ as defined in \eqref{7.11}, with the circle $\mathscr{C}$ chosen to contain the interval $I\subset\mathbb{R}$ into its interior domain.
Let us notice that the operator $E^0(\xi,\lambda)$ is well defined; in fact $\text{Range}\left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right]$ is the orthogonal complement in $\mathcal{K}_0$ of $\mathcal{N}_k(\xi):=\ker\left[\check{P}_0(\xi)\,-\,\lambda_k(\xi)\right]$ (linearly generated by $\phi(\cdot,\xi)$) and is a reducing subspace for $\check{P}_0(\xi)$. Thus $\check{P}_0(\xi)$ induces on the space $\text{Range}\left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right]$ a self-adjoint operator having the spectrum $\{\lambda_j(\xi)\}_{j\ne k}$. If $\lambda\in I$ and $I$ is as in the hypothesis (i.e. $I\cap J_l=\emptyset$ for any $l\ne k$), the distance from $\lambda$ to the spectrum of this induced operator is bounded below by a strictly positive constant $C>0$ that does not depend on $(\xi,\lambda)\in\mathcal X^*\times I$; thus the norm of the resolvent of this induced operator at the point $\lambda\in I\subset\mathbb{R}$ is bounded above by $C^{-1}$. This resolvent, composed on both sides with ${\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)$, is exactly our operator $E^0(\xi,\lambda)$ defined in \eqref{7.41}. Recalling that $\check{P}_0(\xi)\,-\,\lambda\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_{m,\xi};\mathcal{K}_0)\big)$ uniformly with respect to $\lambda\in I$ and the formula \eqref{7.11} for the operator $\Pi_k(\xi)$, one proves that $E^0(\cdot,\lambda)\in\,S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0;\mathcal{K}_{m,\xi})\big)$ uniformly with respect to $\lambda\in I$.
By definition we have that $E^0_+(\xi,\lambda)\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C};\mathcal{K}_{m,\xi})\big)$, $E^0_-(\xi,\lambda)\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{K}_0;\mathbb{C})\big)$, $E^0_{-+}(\xi,\lambda)\in S^0_0\big(\mathcal X;\mathbb{B}(\mathbb{C})\big)$ uniformly for $\lambda\in I$ and thus if we define for any $(\xi,\lambda)\in\mathcal X^*\times I$:
\begin{equation}\label{7.37}
\mathcal{E}_0(\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
E^0(\xi,\lambda)&E^0_+(\xi,\lambda)\\
E^0_-(\xi,\lambda)&E^0_{-+}(\xi,\lambda)
\end{array}
\right)\,\in\,\mathbb{B}(\mathcal{B}_\xi;\mathcal{A}_\xi)
\end{equation}
we obtain an operator-valued symbol:
\begin{equation}\label{7.42}
\mathcal{E}_0(\cdot,\lambda)\,\in\,S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{B}_\bullet;\mathcal{A}_\bullet)\big),\ \text{uniformly in }\lambda\in I.
\end{equation}
We shall verify now that for each $(\xi,\lambda)\in\mathcal X^*\times I$ this defines an inverse for the operator $\mathcal{P}_0(\xi,\lambda)$:
\begin{equation}\label{7.43}
\mathcal{P}_0(\xi,\lambda)\,\mathcal{E}_0(\xi,\lambda)\ =\ {\rm id\hspace*{-1.5pt}l},\quad\text{on }\mathcal{K}_0\times\mathbb{C}.
\end{equation}
We can write:
$$
\mathcal{P}_0(\xi,\lambda)\,\mathcal{E}_0(\xi,\lambda)\ =\ \left(
\begin{array}{cc}
\big(\check{P}_0(\xi)\,-\,\lambda\big)E^0(\xi,\lambda)\,+\,R_-(\xi)E^0_-(\xi,\lambda)&\big(\check{P}_0(\xi)\,-\,\lambda\big)E^0_+(\xi,\lambda)\,+\,R_-(\xi)E^0_{-+}(\xi,\lambda)\\
R_+(\xi)E^0(\xi,\lambda)&R_+(\xi)E^0_+(\xi,\lambda)
\end{array}
\right).
$$
For any $u\in\mathcal{K}_0$ we can write:
$$
\big(\check{P}_0(\xi)\,-\,\lambda\big)E^0(\xi,\lambda)u\ =\ \left[\check{P}_0(\xi)\,-\,\lambda\right]\left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right]\left[\check{P}_0(\xi)\,-\,\lambda\right]^{-1}\left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right]u\ =\ \left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right]u,
$$
$$
R_-(\xi)E^0_-(\xi,\lambda)u\ =\ \left(u,\phi(\cdot,\xi)\right)_{\mathcal{K}_0}\phi(\cdot,\xi)\ =\ \Pi_k(\xi)u
$$
$$
R_+(\xi)E^0(\xi,\lambda)u\ =\ \left(\left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right]\left[\check{P}_0(\xi)\,-\,\lambda\right]^{-1}\left[{\rm id\hspace*{-1.5pt}l}\,-\,\Pi_k(\xi)\right]u\,,\,\phi(\cdot,\xi)\right)_{\mathcal{K}_0}\ =\ 0.
$$
For $c\in\mathbb{C}$ we can write that:
$$
\big(\check{P}_0(\xi)\,-\,\lambda\big)E^0_+(\xi,\lambda)c\ =\ c\big(\check{P}_0(\xi)\,-\,\lambda\big)\phi(\cdot,\xi)\ =\ c\big(\lambda_k(\xi)\,-\,\lambda\big)\phi(\cdot,\xi),
$$
$$
R_-(\xi)E^0_{-+}(\xi,\lambda)c\ =\ \big(\lambda\,-\,\lambda_k(\xi)\big)c\phi(\cdot,\xi),
$$
$$
R_+(\xi)E^0_+(\xi,\lambda)c\ =\ c\left(\phi(\cdot,\xi),\phi(\cdot,\xi)\right)_{\mathcal{K}_0}\ =\ c.
$$
These identities imply \eqref{7.43}. The property of being a left inverse is verified by very similar computations or by taking into account the self-adjointness of both $\mathcal{P}_0(\xi,\lambda)$ and $\mathcal{E}_0(\xi,\lambda)$.
From this point one can repeat identically the arguments in the proof of Theorem \ref{T.0.1} noticing that \eqref{4.12} and \eqref{7.40} imply that we can take $E^{-+}_{\lambda,\epsilon}(x,\xi)$ as in \eqref{0.37}.
\end{description}
\subsection{The constant magnetic field}
In this subsection we prove Proposition \ref{P.0.4}. Thus we suppose that the symbols $p_\epsilon$ do not depend on the first argument and the magnetic field has constant components:
\begin{equation}\label{8.1}
\left\{
\begin{array}{l}
B_\epsilon\ =\ \frac{1}{2}\underset{1\leq j,k\leq d}{\sum}B_{jk}(\epsilon)\,dx_j\wedge dx_k,\quad B_{jk}(\epsilon)\ =\ -B_{kj}(\epsilon)\,\in\,\mathbb{R},\\
\underset{\epsilon\rightarrow0}{\lim}B_{jk}(\epsilon)\ =\ 0.
\end{array}
\right.
\end{equation}
Using the transversal gauge \eqref{0.28} we associate to these magnetic fields some vector potentials $A_\epsilon:=(A_{\epsilon,1},\ldots,A_{\epsilon,d})$ satisfying:
\begin{equation}\label{8.2}
A_{\epsilon,j}(x)\ =\ \frac{1}{2}\underset{1\leq k\leq d}{\sum}B_{jk}(\epsilon)x_k.
\end{equation}
\begin{description}
\item[Proof of Proposition \ref{P.0.4} (1)]
We use formula \eqref{1.5} from Lemma \ref{L.1.2} noticing that the linearity of the functions $A_{\epsilon,j}$ and the definition of $\omega_{A}$ imply that
\begin{equation}\label{8.3}
\omega_{\tau_{-x}A_\epsilon}(y,\tilde{y})\ =\ \omega_{A_\epsilon}(y,\tilde{y})\,e^{i<A_\epsilon(x),y-\tilde{y}>}\ =\ \omega_{A_\epsilon+A_\epsilon(x)}(y,\tilde{y}).
\end{equation}
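For the reader's convenience, the identity \eqref{8.3} can also be verified directly; this is only a supplementary check, using the linearity of the functions $A_{\epsilon,j}$ in \eqref{8.2} and the convention $\omega_A(y,\tilde{y})=\exp\{-i\int_{[y,\tilde{y}]}A\}$ recalled in the Appendix. Indeed, $(\tau_{-x}A_\epsilon)(z)=A_\epsilon(z+x)=A_\epsilon(z)+A_\epsilon(x)$, so that
$$
\int_{[y,\tilde{y}]}\tau_{-x}A_\epsilon\ =\ \int_{[y,\tilde{y}]}A_\epsilon\,+\,<A_\epsilon(x),\tilde{y}-y>
$$
and thus
$$
\omega_{\tau_{-x}A_\epsilon}(y,\tilde{y})\ =\ \omega_{A_\epsilon}(y,\tilde{y})\,e^{-i<A_\epsilon(x),\tilde{y}-y>}\ =\ \omega_{A_\epsilon}(y,\tilde{y})\,e^{i<A_\epsilon(x),y-\tilde{y}>}\ =\ \omega_{A_\epsilon+A_\epsilon(x)}(y,\tilde{y}).
$$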
We deduce that for any $u\in\mathscr{S}(\mathcal X^2)$ and for any $(x,y)\in\mathcal X^2$ we have that:
\begin{equation}\label{8.4}
\left(\boldsymbol{\chi}^*\widetilde{P}_\epsilon(\boldsymbol{\chi}^*)^{-1}u\right)(x,y)\ =\ \left[\big({\rm id\hspace*{-1.5pt}l}\otimes\sigma_{A_\epsilon(x)}\big)({\rm id\hspace*{-1.5pt}l}\otimes P_\epsilon)\big({\rm id\hspace*{-1.5pt}l}\otimes\sigma_{-A_\epsilon(x)}\big)u\right](x,y).
\end{equation}
It follows that the operator $\widetilde{P}_\epsilon$, which is an unbounded self-adjoint operator in $L^2(\mathcal X^2)$, denoted in Proposition \ref{P.2.14} by $\widetilde{P}_\epsilon^\prime$, is unitarily equivalent with the operator ${\rm id\hspace*{-1.5pt}l}\otimes P_\epsilon$, where $P_\epsilon$ is a self-adjoint unbounded operator in $L^2(\mathcal X)$. It follows that $\sigma\big(\widetilde{P}_\epsilon^\prime\big)=\sigma\big(P_\epsilon\big)$. From Proposition \ref{P.2.14} we deduce that $\sigma\big(\widetilde{P}_\epsilon^\prime\big)=\sigma\big(\widetilde{P}_\epsilon^{\prime\prime}\big)$, where $\widetilde{P}_\epsilon^{\prime\prime}$ is the self-adjoint realization of $\widetilde{P}_\epsilon$ in the space $L^2\big(\mathcal X\times\mathbb{T}\big)$. Finally, from Corollary \ref{C.4.5} we deduce that for any $(\lambda,\epsilon)\in I\times[-\epsilon_0,\epsilon_0]$ we have the equivalence relation:
$$
\lambda\,\in\,\sigma\big(\widetilde{P}_\epsilon^{\prime\prime}\big)\quad\Longleftrightarrow\quad0\,\in\,\sigma\big(\mathfrak{E}_{-+}(\epsilon,\lambda)\big),
$$
where $\mathfrak{E}_{-+}(\epsilon,\lambda)$ is considered as a bounded self-adjoint operator on $\big[L^2(\mathcal X)\big]^N$.
\end{description}
In order to prove the second point of Proposition \ref{P.0.4} we shall use the {\it magnetic translations} $T_{\epsilon,a}:=\sigma_{A_\epsilon(a)}\tau_a$ for any $a\in\mathcal X$, that define a family of unitary operators in $L^2(\mathcal X)$.
\begin{lemma}\label{L.8.1}
For any two families of Hilbert spaces with temperate variation $\{\mathcal{A}_\xi\}_{\xi\in\mathcal X^*}$ and $\{\mathcal{B}_\xi\}_{\xi\in\mathcal X^*}$ and for any operator-valued symbol $q\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ the following equality holds:
\begin{equation}\label{8.5}
T_{\epsilon,a}\mathfrak{Op}^{A_\epsilon}(q)\ =\ \mathfrak{Op}^{A_\epsilon}\big((\tau_a\otimes{\rm id\hspace*{-1.5pt}l})q\big)T_{\epsilon,a},\qquad\forall a\in\mathcal X.
\end{equation}
\end{lemma}
\begin{proof}
From Lemma \ref{L.A.18} it follows that
\begin{equation}\label{8.6}
\tau_a\mathfrak{Op}^{A_\epsilon}(q)\ =\ \mathfrak{Op}^{\tau_aA_\epsilon}\big((\tau_a\otimes{\rm id\hspace*{-1.5pt}l})q\big)\tau_a.
\end{equation}
We notice that $\tau_aA_\epsilon=A_\epsilon\,-\,A_\epsilon(a)$, so that the equality \eqref{8.6} becomes
\begin{equation}\label{8.7}
\tau_a\mathfrak{Op}^{A_\epsilon}(q)\ =\ \mathfrak{Op}^{(A_\epsilon\,-\,A_\epsilon(a))}\big((\tau_a\otimes{\rm id\hspace*{-1.5pt}l})q\big)
\tau_a.
\end{equation}
Then, for any $u\in\mathscr{S}(\mathcal X;\mathcal{A}_0)$ and for any $x\in\mathcal X$ we can write that
$$
\left(\sigma_{-A_\epsilon(a)}\mathfrak{Op}^{A_\epsilon}(q)\sigma_{A_\epsilon(a)}u\right)(x)\ =\ \int_{\Xi}e^{i<\eta,x-y>}e^{-i<A_\epsilon(a),x-y>}\omega_{A_\epsilon}(x,y)\,q\big(\frac{x+y}{2},\eta\big)\,u(y)\,dy\,\;\;\bar{}\!\!\!d\eta.
$$
Noticing that $<A_\epsilon(a),x-y>=-\int_{[x,y]}A_\epsilon(a)$, the above formula implies that
\begin{equation}\label{8.8}
\mathfrak{Op}^{A_\epsilon}(q)\sigma_{A_\epsilon(a)}\ =\ \sigma_{A_\epsilon(a)}\mathfrak{Op}^{(A_\epsilon-A_\epsilon(a))}(q).
\end{equation}
From \eqref{8.7} and \eqref{8.8} we deduce that
$$
T_{\epsilon,a}\mathfrak{Op}^{A_\epsilon}(q)\ =\ \sigma_{A_\epsilon(a)}\tau_a\mathfrak{Op}^{A_\epsilon}(q)\ =\ \sigma_{A_\epsilon(a)}\mathfrak{Op}^{(A_\epsilon\,-\,A_\epsilon(a))}\big((\tau_a\otimes{\rm id\hspace*{-1.5pt}l})q\big)
\tau_a\ =\ \mathfrak{Op}^{A_\epsilon}\big((\tau_a\otimes{\rm id\hspace*{-1.5pt}l})q\big)
\sigma_{A_\epsilon(a)}\tau_a\ =
$$
$$
=\ \mathfrak{Op}^{A_\epsilon}\big((\tau_a\otimes{\rm id\hspace*{-1.5pt}l})q\big)T_{\epsilon,a}.
$$
\end{proof}
\begin{description}
\item[Proof of Proposition \ref{P.0.4} (2)]
The operator $\mathcal{P}_{\epsilon,\lambda}:=\mathfrak{Op}\big(\mathcal{P}_\epsilon(\cdot,\cdot,\lambda)\big)$ from Theorem \ref{T.4.3} has its symbol defined in \eqref{4.6}:
$$
\mathcal{P}_\epsilon(x,\xi,\lambda)\ :=\ \left(
\begin{array}{cc}
\mathfrak{q}_\epsilon(x,\xi)\,-\,\lambda&R_-(\xi)\\
R_+(\xi)&0
\end{array}
\right),
\qquad\forall(x,\xi)\in\Xi,\ \forall(\lambda,\epsilon)\in I\times[-\epsilon_0,\epsilon_0]
$$
where $\mathfrak{q}_\epsilon(x,\xi):=\mathfrak{Op}\big(\widetilde{p}_\epsilon(x,\cdot,\xi,\cdot)\big)$ with $\widetilde{p}_\epsilon(x,y,\xi,\eta):=p_\epsilon(x,y,\xi+\eta)$. As we have noticed from the beginning, we suppose that our symbol $p_\epsilon$ does not depend on the first variable, so that the operator-valued symbol $\mathcal{P}_\epsilon$ does not depend on the first variable either. From Lemma \ref{L.8.1} the operator
$$
\mathcal{P}_{\epsilon,\lambda}:\mathscr{S}\big(\mathcal X;\mathcal{K}_{m,0}\times\mathbb{C}^N\big)\rightarrow
\mathscr{S}\big(\mathcal X;\mathcal{K}_0\times\mathbb{C}^N\big)
$$
commutes with the family $\{T_{\epsilon,a}\otimes{\rm id\hspace*{-1.5pt}l}_{\mathcal{K}_0\times
\mathbb{C}^N}\}_{a\in\mathcal X}$. Then its inverse appearing in Theorem \ref{T.4.3},
$$
\mathcal{E}_{\epsilon,\lambda}:\mathscr{S}\big(\mathcal X;\mathcal{K}_0\times\mathbb{C}^N\big)\rightarrow\mathscr{S}\big(\mathcal X;\mathcal{K}_{m,0}\times\mathbb{C}^N\big)
$$
also commutes with the family $\{T_{\epsilon,a}\otimes{\rm id\hspace*{-1.5pt}l}_{\mathcal{K}_0\times
\mathbb{C}^N}\}_{a\in\mathcal X}$. From this property we deduce that the operator $\mathfrak{E}_{-+}(\epsilon,\lambda):L^2(\mathcal X;\mathbb{C}^N)\rightarrow L^2(\mathcal X;\mathbb{C}^N)$ also commutes with the family $\{T_{\epsilon,a}\otimes{\rm id\hspace*{-1.5pt}l}_{\mathbb{C}^N}\}_{a\in\mathcal X}$. Using Lemma \ref{L.8.1} once again we deduce that:
$$
\mathfrak{Op}^{A_\epsilon}\big(E^{-+}_{\epsilon,\lambda}\big)\ =\ \mathfrak{E}_{-+}(\epsilon,\lambda)\ =\ \left[T_{\epsilon,a}\otimes{\rm id\hspace*{-1.5pt}l}_{\mathbb{C}^N}\right]
\mathfrak{E}_{-+}(\epsilon,\lambda)\left[T_{\epsilon,a}\otimes{\rm id\hspace*{-1.5pt}l}_{\mathbb{C}^N}
\right]^{-1}\ =\ \mathfrak{Op}^{A_\epsilon}\big((\tau_a\otimes{\rm id\hspace*{-1.5pt}l})E^{-+}_{\epsilon,\lambda}\big),\ \forall a\in\mathcal X.
$$
We conclude that $E^{-+}_{\epsilon,\lambda}(x,\xi)=E^{-+}_{\epsilon,\lambda}(x-a,\xi)$ for any $(x,\xi)\in\Xi$ and for any $a\in\mathcal X$. It follows that $E^{-+}_{\epsilon,\lambda}(x,\xi)=E^{-+}_{\epsilon,\lambda}(0,\xi)$ for any $(x,\xi)\in\Xi$. The $\Gamma^*$-periodicity follows as in the general case (see the proof of Lemma \ref{L.6.5}).
\end{description}
\section{Appendices}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Magnetic pseudodifferential operators with operator-valued symbols}
\begin{definition}\label{D.A.1}
A family of Hilbert spaces $\{\mathcal{A}_\xi\}_{\xi\in\mathcal X^*}$ (indexed by the points in the {\it momentum space}) is said {\it to have temperate variation} when it verifies the following two conditions:
\begin{enumerate}
\item $\mathcal{A}_\xi\ =\ \mathcal{A}_\eta$ as complex vector spaces $\forall(\xi,\eta)\in[\mathcal X^*]^2$.
\item There exist $C>0$ and $M\geq0$ such that $\forall u\in\mathcal{A}_0$ we have the estimate:
\begin{equation}\label{A.1}
\|u\|_{\mathcal{A}_\xi}\ \leq\ C<\xi-\eta>^M\|u\|_{\mathcal{A}_\eta},\qquad\forall(\xi,\eta)\in[\mathcal X^*]^2.
\end{equation}
\end{enumerate}
\end{definition}
\begin{ex}\label{E.A.2}
We can take $\mathcal{A}_\xi=\mathcal{H}^s(\mathcal X)$, with any $s\in\mathbb{R}$ endowed with the $\xi$-dependent norm:
$$
\|u\|_{\mathcal{A}_\xi}\ :=\ \left(\int_{\mathcal X^*}<\xi+\eta>^{2s}|\hat{u}(\eta)|^2d\eta\right)^{1/2},\qquad\forall u\in\mathcal{H}^s(\mathcal X),\ \forall\xi\in\mathcal X^*.
$$
The inequality \eqref{A.1} clearly follows from the well known inequality:
\begin{equation}\label{A.2}
<\xi+\eta>^{2s}\ \leq\ C_s<\zeta+\eta>^{2s}<\xi-\zeta>^{2|s|},\qquad\forall(\xi,\eta,\zeta)\in[\mathcal X^*]^3,
\end{equation}
where the constant $C_s$ only depends on $s\in\mathbb{R}$. For this specific family we shall use the shorter notation $\mathcal{A}_\xi\equiv\mathcal{H}^s_\xi(\mathcal X)$.
\end{ex}
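For completeness, \eqref{A.2} is an instance of Peetre's inequality; the following short derivation (supplementary to the text, with the usual convention $<\xi>=(1+|\xi|^2)^{1/2}$) shows that one may take $C_s=2^{|s|}$:
$$
<\xi+\eta>^2\ =\ 1+\left|(\zeta+\eta)+(\xi-\zeta)\right|^2\ \leq\ 1+2|\zeta+\eta|^2+2|\xi-\zeta|^2\ \leq\ 2<\zeta+\eta>^2<\xi-\zeta>^2.
$$
For $s\geq0$ one raises this inequality to the power $s$; for $s<0$ one applies it with the roles of $\xi+\eta$ and $\zeta+\eta$ interchanged and then raises it to the power $|s|$.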
\begin{definition}\label{D.A.3}
Suppose given two families of Hilbert spaces with temperate variation $\{\mathcal{A}_\xi\}_{\xi\in\mathcal X^*}$ and $\{\mathcal{B}_\xi\}_{\xi\in\mathcal X^*}$; suppose also given $m\in\mathbb{R}$, $\rho\in[0,1]$ and $\mathcal{Y}$ a finite dimensional real vector space. A function $p\in C^\infty\big(\mathcal{Y}\times\mathcal X^*;\mathbb{B}(\mathcal{A}_0;\mathcal{B}_0)\big)$ is called {\it an operator-valued symbol of class} $S^m_\rho\big(\mathcal{Y};\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ when it verifies the following property:
\begin{equation}\label{A.3}
\begin{array}{l}
\forall\alpha\in\mathbb{N}^{\dim\mathcal{Y}},\,\forall\beta\in\mathbb{N}^d,\ \exists C_{\alpha,\beta}>0:\\
\left\|\big(\partial^\alpha_y\partial^\beta_\xi p\big)(y,\xi)\right\|_{\mathbb{B}(\mathcal{A}_\xi;\mathcal{B}_\xi)}\ \leq\ C_{\alpha,\beta}<\xi>^{m-\rho|\beta|},\qquad\forall(y,\xi)\in\mathcal{Y}\times\mathcal X^*.
\end{array}
\end{equation}
\end{definition}
The space $S^m_\rho\big(\mathcal{Y};\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ endowed with the family of seminorms $\nu_{\alpha,\beta}$ defined as being the smallest constants $C_{\alpha,\beta}$ that satisfy the defining property \eqref{A.3} is a metrizable locally convex linear topological space. In case we have for any $\xi\in\mathcal X^*$ that $\mathcal{A}_\xi=\mathcal{A}_0$ and $\mathcal{B}_\xi=\mathcal{B}_0$ as algebraic and topological structures, then we use the notation $S^m_\rho\big(\mathcal{Y};\mathbb{B}(\mathcal{A}_0;\mathcal{B}_0)\big)$. If moreover we have that $\mathcal{A}_0=\mathcal{B}_0=\mathbb{C}$, then we use the simple notation $S^m_\rho(\mathcal{Y})$.
\begin{ex}\label{E.A.4}
If $p\in S^m_1(\mathcal X)$ and if for any $\xi\in\mathcal X^*$ we denote by $p_\xi:=({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\xi})p$, by $P_\xi:=\mathfrak{Op}\big(p_\xi\big)$ and by $\mathfrak{p}$ the application $\Xi\ni(x,\xi)\mapsto P_\xi\in\mathbb{B}(\mathcal{H}^{s+m}_\xi(\mathcal X);\mathcal{H}^s_\xi(\mathcal X))$, for some $s\in\mathbb{R}$, we can prove that $\mathfrak{p}$ is an operator valued symbol of class $S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{H}^{s+m}_\bullet(\mathcal X);\mathcal{H}^s_\bullet(\mathcal X))\big)$. Moreover the map $S^m_1(\mathcal X)\ni p\mapsto\mathfrak{p}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{H}^{s+m}_\bullet(\mathcal X);\mathcal{H}^s_\bullet(\mathcal X))\big)$ is continuous.
\end{ex}
\begin{description}
\item[] \hspace{1cm} In fact, let us recall that for any $\xi\in\mathcal X^*$ we have denoted by $\sigma_\xi$ the multiplication operator with the function $e^{i<\xi,\cdot>}$ on the space $\mathscr{S}^\prime(\mathcal X)$. Then, for any $u\in\mathscr{S}(\mathcal X)$ and for any $\xi\in\mathcal X^*$ we have that $u\in\mathcal{H}^{s+m}_\xi(\mathcal X)$ and we can write:
$$
\big(\sigma_{-\xi}P_0\sigma_\xi u\big)(x)=\int_\Xi e^{i<\eta-\xi,x-y>}p\big(\frac{x+y}{2},\eta\big)u(y)\,dy\,\;\;\bar{}\!\!\!d\eta=\int_\Xi e^{i<\eta,x-y>}p\big(\frac{x+y}{2},\eta+\xi\big)u(y)\,dy\,\;\;\bar{}\!\!\!d\eta=\big(P_\xi u\big)(x)
$$
and we conclude that
\begin{equation}\label{A.4}
P_\xi\ =\ \sigma_{-\xi}P_0\sigma_\xi,\qquad\forall\xi\in\mathcal X^*.
\end{equation}
On the other side, for any $\xi\in\mathcal X^*$ we notice that $p_\xi$ is a symbol of class $S^m_1(\mathcal X)$ and thus, the usual Weyl calculus implies that $P_\xi\in\mathbb{B}\big(\mathcal{H}^{s+m}(\mathcal X);\mathcal{H}^s(\mathcal X)\big)$ for any $s\in\mathbb{R}$. We notice easily that for any multi-index $\beta\in\mathbb{N}^d$ we can write:
\begin{equation}\label{A.4a}
\partial^\beta_\xi P_\xi\ =\ \mathfrak{Op}\big(\partial^\beta_\xi p_\xi\big)
\end{equation}
so that we conclude that $P_\xi\in C^\infty\big(\Xi;\mathbb{B}(\mathcal{H}^{s+m}(\mathcal X);\mathcal{H}^s(\mathcal X))\big)$ (constant with respect to the variable $x\in\mathcal X$) for any $s\in\mathbb{R}$.
Let us further notice that for any $u\in\mathscr{S}(\mathcal X)$ and any $\xi\in\mathcal X^*$ we have the equalities:
\begin{equation}\label{A.5}
\widehat{\sigma_\xi u}\ =\ \tau_\xi\hat{u},
\end{equation}
\begin{equation}\label{A.6}
\left\|\sigma_{-\xi}u\right\|^2_{\mathcal{H}^s_\xi(\mathcal X)}\ =\ \int_{\mathcal X^*}<\xi+\eta>^{2s}\left|\hat{u}(\xi+\eta)\right|^2d\eta\ =\ \|u\|^2_{\mathcal{H}^s(\mathcal X)}.
\end{equation}
Coming back to \eqref{A.4} we deduce that for any $u\in\mathscr{S}(\mathcal X)$ and any $\xi\in\mathcal X^*$ we have the estimation:
$$
\left\|P_\xi u\right\|^2_{\mathcal{H}^s_\xi(\mathcal X)}\ =\ \left\|P_0\sigma_\xi u\right\|^2_{\mathcal{H}^s(\mathcal X)}\ \leq\ C_s\left\|\sigma_\xi u\right\|^2_{\mathcal{H}^{s+m}(\mathcal X)}\ =\ C_s\|u\|^2_{\mathcal{H}^{s+m}_\xi(\mathcal X)}.
$$
Using also the equality \eqref{A.4a} we obtain similar estimations for the derivatives of $P_\xi$ and conclude that $\mathfrak{p}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{H}^{s+m}_\bullet(\mathcal X);\mathcal{H}^s_\bullet(\mathcal X))\big)$ and we have the continuity of the map $S^m_1(\mathcal X)\ni p\mapsto\mathfrak{p}\in S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{H}^{s+m}_\bullet(\mathcal X);\mathcal{H}^s_\bullet(\mathcal X))\big)$.
\end{description}
\begin{definition}\label{D.A.5}
We denote by $S^m_{\rho,\epsilon}\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ the linear space of families $\{p_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ satisfying the following three conditions:
\begin{enumerate}
\item $\forall\epsilon\in[-\epsilon_0,\epsilon_0],\ p_\epsilon\in S^m_\rho\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$,
\item $\underset{\epsilon\rightarrow0}{\lim}\,p_\epsilon\ =\ p_0$ in $S^m_\rho\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$,
\item Denoting the variable in $\mathcal X^2$ by $(x,y)$, for any multi-index $\alpha\in\mathbb{N}^d$ with $|\alpha|\geq1$ we have that $\underset{\epsilon\rightarrow0}{\lim}\,\partial^\alpha_xp_\epsilon\ =\ 0$ in $S^m_\rho\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$,
\end{enumerate}
endowed with the natural locally convex topology of symbols of H\"{o}rmander type.
\end{definition}
As in the case of Definition \ref{D.A.3}, in case we have for any $\xi\in\mathcal X^*$ that $\mathcal{A}_\xi=\mathcal{A}_0$ and $\mathcal{B}_\xi=\mathcal{B}_0$ as algebraic and topological structures, then we use the notation $S^m_{\rho,\epsilon}\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_0;\mathcal{B}_0)\big)$. If moreover we have that $\mathcal{A}_0=\mathcal{B}_0=\mathbb{C}$, then we use the simple notation $S^m_{\rho,\epsilon}(\mathcal X^2)$.
For the families of symbols of type $S^m_{\rho,\epsilon}\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ that do not depend on the first variable $x$ in $\mathcal X^2$ we shall use the notation $S^m_{\rho,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$.
Let us also notice the following canonical injection:
$$
S^m_{\rho,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)\ni p_\epsilon\mapsto{\rm id\hspace*{-1.5pt}l}\otimes p_\epsilon\in S^m_{\rho,\epsilon}\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big).
$$
\begin{remark}\label{R.A.6}
We can obtain the following description of the space $S^m_{\rho,\epsilon}\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$:
\end{remark}
\begin{description}
\item[] \hspace{1cm} If $\{p_\epsilon\}_{|\epsilon|\leq\epsilon_0}\in S^m_{\rho,\epsilon}\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ then we can write the first order Taylor expansion:
$$
p_\epsilon(x,y,\eta)\ =\ p_\epsilon(0,y,\eta)\,+\,\left\langle x\,,\,\int_0^1\big(\nabla_x p_\epsilon\big)(tx,y,\eta)\,dt\right\rangle.
$$
Using conditions (2) and (3) from Definition \ref{D.A.5} we deduce that
$$
p_0(x,y,\eta)\ =\ p_0(0,y,\eta).
$$
We denote by: $p_0(y,\eta)\ :=\ p_0(0,y,\eta)$ as element in $S^m_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ and by:
$$
r_\epsilon(x,y,\eta)\ :=\ p_\epsilon(x,y,\eta)\,-\,p_0(y,\eta).
$$
This last symbol is of class $S^m_\rho\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
We have: $r_0=0$ and $\underset{\epsilon\rightarrow0}{\lim}\,r_\epsilon\,=\,0$ in $S^m_\rho\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ and evidently:
\begin{equation}\label{A.7}
p_\epsilon(x,y,\eta)\ =\ p_0(y,\eta)\,+\,r_\epsilon(x,y,\eta).
\end{equation}
Conversely, if $p_0$ and $r_\epsilon$ from the equality \eqref{A.7} have the properties stated above, then $\{p_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ belongs to $S^m_{\rho,\epsilon}\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$.
\end{description}
We shall consider now {\it magnetic pseudodifferential operators associated to operator-valued symbols}. Let us first consider $p\in S^m_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ and a magnetic field $B$ with components of class $BC^\infty(\mathcal X)$; this magnetic field can always be associated with a vector potential $A$ with components of class $C^\infty_{\text{\sf pol}}(\mathcal X)$ (as for example in the transversal gauge). Let us recall the notation $\omega_A(x,y):=\exp\{-i\int_{[x,y]}A\}$. For any $u\in\mathscr{S}(\mathcal X;\mathcal{A}_0)$ we can define the oscillating integral (its existence following from the next Proposition):
\begin{equation}\label{A.8}
\big[\mathfrak{Op}^A(p)u\big](x)\ :=\ \int_\Xi e^{i<\eta,x-y>}\omega_A(x,y)\,p\big(\frac{x+y}{2},\eta\big)u(y)\,dy\,\;\;\bar{}\!\!\!d\eta,\qquad\forall x\in\mathcal X.
\end{equation}
\begin{proposition}\label{P.A.7}
Under the hypothesis described in the paragraph above \eqref{A.8} the following facts are true:
\begin{enumerate}
\item The integral in \eqref{A.8} exists for any $x\in\mathcal X$ as oscillating Bochner integral and defines a function $\mathfrak{Op}^A(p)u\in\mathscr{S}(\mathcal X;\mathcal{B}_0)$.
\item The map $\mathfrak{Op}^A(p):\mathscr{S}(\mathcal X;\mathcal{A}_0)\rightarrow\mathscr{S}(\mathcal X;\mathcal{B}_0)$ defined by \eqref{A.8} and point (1) above is linear and continuous.
\item The formal adjoint $\big[\mathfrak{Op}^A(p)\big]^*:\mathscr{S}(\mathcal X;\mathcal{B}_0)\rightarrow\mathscr{S}(\mathcal X;\mathcal{A}_0)$ of the linear continuous operator defined at point (2) above is equal to $\mathfrak{Op}^A(p^*)$, where $p^*\in S^{m^\prime}_\rho\big(\mathcal X;\mathbb{B}(\mathcal{B}_\bullet;\mathcal{A}_\bullet)\big)$ with $m^\prime=m+2(M_\mathcal{A}+M_\mathcal{B})$ and $p^*(x,\xi):=\big[p(x,\xi)\big]^*$ (the adjoint in $\mathbb{B}(\mathcal{A}_0;\mathcal{B}_0)$).
\item The operator $\mathfrak{Op}^A(p)$ extends in a natural way to a linear continuous operator $\mathscr{S}^\prime(\mathcal X;\mathcal{A}_0)\rightarrow\mathscr{S}^\prime(\mathcal X;\mathcal{B}_0)$ that we denote in the same way.
\end{enumerate}
\end{proposition}
\begin{proof}
Let us first prove the first two points of the Proposition.
Fix some $u\in\mathscr{S}(\mathcal X;\mathcal{A}_0)$ and suppose first that $p(y,\eta)=0$ for $|\eta|\geq R$, with some $R>0$. Then, for any $x\in\mathcal X$, the integral in \eqref{A.8} exists as a Bochner integral of a $\mathcal{B}_0$-valued function. Let us notice that in this case we can integrate by parts in \eqref{A.8} and use the identities:
$$
e^{i<\eta,x-y>}=<x-y>^{-2N_1}\big[({\rm id\hspace*{-1.5pt}l}-\Delta_\eta)^{N_1}e^{i<\eta,x-y>}\big],\ \forall N_1\in\mathbb{N};\quad
e^{-i<\eta,y>}=<\eta>^{-2N_2}\big[({\rm id\hspace*{-1.5pt}l}-\Delta_y)^{N_2}e^{-i<\eta,y>}\big],\ \forall N_2\in\mathbb{N}.
$$
In this way we obtain the equality:
\begin{equation}\label{A.9}
\big[\mathfrak{Op}^A(p)u\big](x)\ =
\end{equation}
$$
=\ \int_\Xi e^{i<\eta,x-y>}<x-y>^{-2N_1}({\rm id\hspace*{-1.5pt}l}-\Delta_\eta)^{N_1}\left[<\eta>^{-2N_2}({\rm id\hspace*{-1.5pt}l}-\Delta_y)^{N_2}\left(\omega_A(x,y)\big[p\big(\frac{x+y}{2},\eta\big)u(y)\big]\right)\right]dy\;\;\bar{}\!\!\!d\eta.
$$
From this one easily obtains the following estimate: $\exists C(N_1,N_2)>0$, $\exists k(N_2)\in\mathbb{N}$ such that for any $l\in\mathbb{N}$ we have
$$
\left\|\big[\mathfrak{Op}^A(p)u\big](x)\right\|_{\mathcal{B}_0}\ \leq
$$
$$
\leq\ C\left[\int_\Xi<x-y>^{-2N_1}<\eta>^{-2N_2}\big(<x>+<y>\big)^{k(N_2)}
<\eta>^{m+M_{\mathcal{B}}+M_{\mathcal{A}}}<y>^{-l}dy\;\;\bar{}\!\!\!d\eta\right]\times
$$
$$
\times\underset{|\alpha|\leq2N_2}{\sup}\underset{y\in\mathcal X}{\sup}\,<y>^l\left\|\big(\partial^\alpha u\big)(y)\right\|_{\mathcal{A}_0},
$$
where $M_\mathcal{A}$ and $M_\mathcal{B}$ are the constants appearing in condition \eqref{A.1} with respect to each of the two families $\{\mathcal{A}_\xi\}_{\xi\in\mathcal X^*}$ and $\{\mathcal{B}_\xi\}_{\xi\in\mathcal X^*}$. We make the following choices:
$$
2N_2\geq m+M_\mathcal{A}+M_\mathcal{B}+d,\ l=2N_1+k(N_2)+d+1,\ 2N_1\geq k(N_2)
$$
and we obtain that
\begin{equation}\label{A.10}
\left\|\big[\mathfrak{Op}^A(p)u\big](x)\right\|_{\mathcal{B}_0}\ \leq\ C(N_1)<x>^{-2N_1+k(N_2)}\underset{|\alpha|\leq2N_2}{\sup}\ \underset{y\in\mathcal X}{\sup}\,<y>^l\left\|\big(\partial^\alpha u\big)(y)\right\|_{\mathcal{A}_0},\qquad\forall x\in\mathcal X.
\end{equation}
Similar estimates may be obtained for the derivatives $\partial^\beta_x\mathfrak{Op}^A(p)u$ and this finishes the proof of the first two points of the Proposition in the ``compact support'' situation considered first.
For the general case we consider a cut-off function $\varphi\in C^\infty_0(\mathcal X^*)$ that is equal to $1$ for $|\eta|\leq1$. For any $R\geq1$ we then define the `approximating' symbols $p_R(y,\eta):=\varphi(\eta/R)p(y,\eta)$, which have compact support in the $\eta$-variable and belong to $S^m_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ uniformly with respect to $R\in[1,\infty)$. We write the operator $\mathfrak{Op}^A(p_R)$ in a form similar to \eqref{A.9} with the choice of the parameters $N_1$ and $N_2$ as above; then the Dominated Convergence Theorem allows us to take the limit $R\nearrow\infty$, obtaining for $\mathfrak{Op}^A(p)$ the integral expression \eqref{A.9}, which is well defined and verifies the estimate \eqref{A.10}.
The last two points of the statement of the Proposition follow in a standard way from the equality:
\begin{equation}\label{A.11}
\int_\mathcal X\left(\big[\mathfrak{Op}^A(p)u\big](x)\,,\,v(x)\right)_{\mathcal{B}_0}dx\ =\ \int_\mathcal X\left(u(y)\,,\,\big[\mathfrak{Op}^A(p^*)v\big](y)\right)_{\mathcal{A}_0}dy,\qquad\forall(u,v)\in\mathscr{S}(\mathcal X;\mathcal{A}_0)\times\mathscr{S}(\mathcal X;\mathcal{B}_0).
\end{equation}
\end{proof}
\begin{ex}\label{E.A.8}
Let us consider a family $\{p_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ of class $S^m_{1,\epsilon}(\mathcal X^2)$ and let us define, as in Subsection \ref{S.1.3}:
$$
\widetilde{p}_\epsilon(x,y,\xi,\eta)\ :=\ p_\epsilon(x,y,\xi+\eta);\qquad\mathfrak{q}_\epsilon(x,\xi)\ :=\ \mathfrak{Op}\big(\widetilde{p}_\epsilon(x,\cdot,\xi,\cdot)\big).
$$
The following two statements are true:
\begin{enumerate}
\item $\{\mathfrak{q}_\epsilon\}_{|\epsilon|\leq\epsilon_0}\,\in\,S^0_{0,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{H}^{s+m}_\bullet(\mathcal X);\mathcal{H}^s_\bullet(\mathcal X))\big)$ for any $s\in\mathbb{R}$.
\item If the family of magnetic fields $\{B_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ satisfies Hypothesis H.1 and if the associated vector potentials are chosen as in \eqref{0.28}, then we have that:
\begin{equation}\label{A.12}
\mathfrak{Op}^{A_\epsilon}(\mathfrak{q}_\epsilon)\ \in\ \mathbb{B}\big(\mathscr{S}(\mathcal X;\mathcal{H}^{s+m}(\mathcal X));\mathscr{S}(\mathcal X;\mathcal{H}^{s}(\mathcal X))\big)\ \cap\ \mathbb{B}\big(\mathscr{S}^\prime(\mathcal X;\mathcal{H}^{s+m}(\mathcal X));\mathscr{S}^\prime(\mathcal X;\mathcal{H}^{s}(\mathcal X))\big),
\end{equation}
for any $s\in\mathbb{R}$. Moreover,
\begin{equation}\label{A.13}
\mathfrak{Op}^{A_\epsilon}(\mathfrak{q}_\epsilon)\ \in\ \mathbb{B}\big(\mathscr{S}(\mathcal X^2);\mathscr{S}(\mathcal X^2)\big)\ \cap\ \mathbb{B}\big(\mathscr{S}^\prime(\mathcal X^2);\mathscr{S}^\prime(\mathcal X^2)\big),
\end{equation}
and all the continuities are uniform with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{enumerate}
\end{ex}
\begin{proof}
We begin by verifying point (1). By arguments similar to those in Example \ref{E.A.4} we prove that for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ and $s\in\mathbb{R}$ we have $\mathfrak{q}_\epsilon\,\in\,S^0_{0}\big(\mathcal X;\mathbb{B}(\mathcal{H}^{s+m}_\bullet(\mathcal X);\mathcal{H}^s_\bullet(\mathcal X))\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$, and that the following application is continuous:
$$
S^m_1(\mathcal X^2)\ni p_\epsilon\mapsto\mathfrak{q}_\epsilon\in S^0_{0}\big(\mathcal X;\mathbb{B}(\mathcal{H}^{s+m}_\bullet(\mathcal X);\mathcal{H}^s_\bullet(\mathcal X))\big),\qquad\forall s\in\mathbb{R},
$$
uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. Point (1) then clearly follows.
Concerning the second point of the statement, let us notice that \eqref{A.12} and the uniformity with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$ follow easily from Proposition \ref{P.A.7} and its proof.
In order to prove \eqref{A.13} let us notice that $\widetilde{p}_\epsilon^\prime(x,\cdot,\xi,\cdot):=<\xi>^{-|m|}\widetilde{p}_\epsilon(x,\cdot,\xi,\cdot)$ defines a symbol of class $S^m_0(\mathcal X)$ uniformly with respect to $((x,\xi),\epsilon)\in\Xi\times[-\epsilon_0,\epsilon_0]$ and we can view the element $\widetilde{p}_\epsilon^\prime$ as a function in $BC^\infty\big(\Xi;S^m_0(\mathcal X)\big)$. Then, the operator-valued symbol $\mathfrak{q}^\prime_\epsilon(x,\xi):=<\xi>^{-|m|}\mathfrak{q}_\epsilon(x,\xi)$ has the following property:
$$
\forall(\alpha,\beta)\in[\mathbb{N}^d]^2,\ \big(\partial^\alpha_x\partial^\beta_\xi\mathfrak{q}^\prime_\epsilon\big)(x,\xi)\,\in\,\mathbb{B}\big(\mathscr{S}(\mathcal X)\big),
$$
uniformly with respect to $((x,\xi),\epsilon)\in\Xi\times[-\epsilon_0,\epsilon_0]$.
Denoting $\mathfrak{s}_{s}(x,\xi):=<\xi>^{s}$ for any $s\in\mathbb{R}$ and writing $\mathfrak{Op}^{A_\epsilon}(\mathfrak{q}_\epsilon)=\mathfrak{Op}^{A_\epsilon}(\mathfrak{s}_{|m|}\mathfrak{q}^\prime_\epsilon)$, the proof of Proposition \ref{P.A.7} implies \eqref{A.13} uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{proof}
\subsection{Some spaces of periodic distributions}
We shall use the following notations:
\begin{itemize}
\item $\mathscr{S}^\prime_\Gamma(\mathcal X)\ :=\ \left\{u\in\mathscr{S}^\prime(\mathcal X)\,\mid\,\tau_\gamma u=u,\ \forall\gamma\in\Gamma\right\}$, the space of $\Gamma$-periodic distributions on $\mathcal X$.
\item $\mathscr{S}(\mathbb{T}):=C^\infty(\mathbb{T})$ with the usual Fr\'{e}chet topology. We have the evident identification:
$$
\mathscr{S}(\mathbb{T})\ \cong\ \left\{\varphi\in\mathscr{E}(\mathcal X)\,\mid\,\tau_\gamma\varphi=\varphi,\ \forall\gamma\in\Gamma\right\}\ =\ \mathscr{S}^\prime_\Gamma(\mathcal X)\cap\mathscr{E}(\mathcal X).
$$
\item $\mathscr{S}^\prime(\mathbb{T})$ is the dual of $\mathscr{S}(\mathbb{T})$.
\item We shall denote by $<\cdot,\cdot>_{\mathbb{T}}$ the natural bilinear map defined by the duality relation on $\mathscr{S}^\prime(\mathbb{T})\times\mathscr{S}(\mathbb{T})$ and by $(\cdot,\cdot)_{\mathbb{T}}$ the natural sesquilinear map on $\mathscr{S}^\prime(\mathbb{T})\times\mathscr{S}(\mathbb{T})$ obtained by extending the scalar product from $L^2(\mathbb{T})$.
\end{itemize}
\begin{remark}\label{R.A.9}
It is well known that the spaces $\mathscr{S}^\prime_\Gamma(\mathcal X)$ and $\mathscr{S}^\prime(\mathbb{T})$ can be identified through the following topological isomorphism:
\begin{equation}\label{A.14}
\mathfrak{i}:\mathscr{S}^\prime(\mathbb{T})\overset{\sim}{\rightarrow}\mathscr{S}^\prime_\Gamma(\mathcal X);\qquad<\mathfrak{i}(u),\varphi>:=<u,\underset{\gamma\in\Gamma}{\sum}\tau_\gamma\varphi>_{\mathbb{T}},\qquad\forall(u,\varphi)\in\mathscr{S}^\prime(\mathbb{T})\times\mathscr{S}(\mathcal X).
\end{equation}
In order to give an explicit form to the inverse of the isomorphism $\mathfrak{i}$ let us fix some test function $\phi\in C^\infty_0(\mathcal X)$ such that $\underset{\gamma\in\Gamma}{\sum}\tau_\gamma\phi=1$ (it is easy to see that there exist enough such functions). Then we can easily verify that:
\begin{equation}\label{A.15}
<\mathfrak{i}^{-1}(v),\theta>_{\mathbb{T}}=<v,\phi\theta>,\qquad\forall(v,\theta)\in\mathscr{S}^\prime_\Gamma(\mathcal X)\times\mathscr{S}(\mathbb{T}).
\end{equation}
\end{remark}
\begin{remark}\label{R.A.10}
For any distribution $u\in\mathscr{S}^\prime_\Gamma(\mathcal X)\cong\mathscr{S}^\prime(\mathbb{T})$ we have the Fourier series decomposition:
\begin{equation}\label{A.16}
u\ =\ \underset{\gamma^*\in\Gamma^*}{\sum}\hat{u}(\gamma^*)\sigma_{\gamma^*},\qquad\hat{u}(\gamma^*):=|E|^{-1}<u,\sigma_{-\gamma^*}>_{\mathbb{T}},
\end{equation}
where $\sigma_{\gamma^*}(y):=e^{i<\gamma^*,y>},\ \forall y\in\mathbb{T},\ \forall\gamma^*\in\Gamma^*$, and the series converges as a tempered distribution.
In particular, if $u\in L^2(\mathbb{T})$ we also have the equality:
\begin{equation}\label{A.17}
\|u\|^2_{L^2(\mathbb{T})}\ =\ |E|\underset{\gamma^*\in\Gamma^*}{\sum}|\hat{u}(\gamma^*)|^2.
\end{equation}
\end{remark}
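A quick justification of \eqref{A.17} (a standard Parseval computation, under the identification of $L^2(\mathbb{T})$ with $L^2(E)$ used throughout this Appendix): since
$$
\big(\sigma_{\gamma^*},\sigma_{\beta^*}\big)_{L^2(\mathbb{T})}\ =\ \int_Ee^{i<\gamma^*-\beta^*,y>}\,dy\ =\ |E|\,\delta_{\gamma^*\beta^*},
$$
the decomposition \eqref{A.16} gives
$$
\|u\|^2_{L^2(\mathbb{T})}\ =\ \underset{\gamma^*,\beta^*\in\Gamma^*}{\sum}\hat{u}(\gamma^*)\overline{\hat{u}(\beta^*)}\big(\sigma_{\gamma^*},\sigma_{\beta^*}\big)_{L^2(\mathbb{T})}\ =\ |E|\underset{\gamma^*\in\Gamma^*}{\sum}|\hat{u}(\gamma^*)|^2.
$$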
\begin{remark}\label{R.A.11}
A simple computation allows us to prove that for any $s\in\mathbb{R}$ and any $\gamma^*\in\Gamma^*$ the following equality is true in $\mathscr{S}^\prime(\mathcal X)$:
\begin{equation}\label{A.18}
<D>^s\sigma_{\gamma^*}\ =\ <\gamma^*>^s\sigma_{\gamma^*}.
\end{equation}
Using now \eqref{A.16} we deduce that $<D>^s$ induces on $\mathscr{S}^\prime(\mathbb{T})\cong\mathscr{S}^\prime_\Gamma(\mathcal X)$ a well-defined operator, denoted by $<D_\Gamma>^s$, explicitly given by the following formula:
\begin{equation}\label{A.19}
<D_\Gamma>^su\ :=\ \underset{\gamma^*\in\Gamma^*}{\sum}<\gamma^*>^s\hat{u}(\gamma^*)\sigma_{\gamma^*},\qquad\forall u\in\mathscr{S}^\prime(\mathbb{T}).
\end{equation}
\end{remark}
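The computation behind \eqref{A.18} is immediate; denoting by $\mathcal{F}$ the Fourier transform on $\mathscr{S}^\prime(\mathcal X)$ (with the convention consistent with \eqref{A.5}) and by $c_d$ its normalization constant, which plays no role here, we have $\mathcal{F}\sigma_{\gamma^*}=c_d\,\delta_{\gamma^*}$ and thus
$$
<D>^s\sigma_{\gamma^*}\ =\ \mathcal{F}^{-1}\big(<\cdot>^s\,c_d\,\delta_{\gamma^*}\big)\ =\ <\gamma^*>^s\,\mathcal{F}^{-1}\big(c_d\,\delta_{\gamma^*}\big)\ =\ <\gamma^*>^s\sigma_{\gamma^*}.
$$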
\begin{definition}\label{D.A.12}
Given any $s\in\mathbb{R}$ we define the complex linear space:
$$
\mathcal{H}^s(\mathbb{T})\ :=\ \left\{u\in\mathscr{S}^\prime(\mathbb{T})\,\mid\,<D_\Gamma>^su\in L^2(\mathbb{T})\right\}
$$
endowed with the quadratic norm $\|u\|_{\mathcal{H}^s(\mathbb{T})}:=\left\|<D_\Gamma>^su\right\|_{L^2(\mathbb{T})}$ for which it becomes a Hilbert space.
\end{definition}
Let us notice that for any $u\in\mathscr{S}^\prime(\mathbb{T})$ we have the following equivalence relation:
$$
u\,\in\,\mathcal{H}^s(\mathbb{T})\quad\Longleftrightarrow\quad|E|\underset{\gamma^*\in\Gamma^*}{\sum}<\gamma^*>^{2s}|\hat{u}(\gamma^*)|^2\,<\,\infty,
$$
and the formula in the right hand side of the above equivalence relation is equal to $\|u\|^2_{\mathcal{H}^s(\mathbb{T})}$. From these facts it follows easily that $\mathscr{S}(\mathbb{T})$ is dense in $\mathcal{H}^s(\mathbb{T})$.
\begin{lemma}\label{L.A.13}
Let $p\in S^m_1(\mathcal X)$ and let us denote by $P:=\mathfrak{Op}(p)$. Then for any $s\in\mathbb{R}$ and for any $u\in\mathcal{H}^{s+m}_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime(\mathcal X)$ we have that $Pu\in\mathcal{H}^s_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime(\mathcal X)$.
\end{lemma}
\begin{proof}
It is clear from the definitions that we have $Pu\in\mathscr{S}^\prime(\mathcal X)$. For any relatively compact open subset $\Omega\subset\mathcal X$ we can choose two positive test functions $\psi$ and $\chi$ of class $C^\infty_0(\mathcal X)$ such that: $\psi=1$ on $\overline{\Omega}$ and $\chi=1$ on a neighborhood $V_\psi$ of the support of $\psi$. Then $\chi u\in\mathcal{H}^{s+m}(\mathcal X)$ and we know that $P\chi u\in\mathcal{H}^s(\mathcal X)$. Thus the Lemma will follow if we prove that the restriction of $P(1-\chi)u$ to $\Omega$ is of class $C^\infty(\Omega)$. For that, let us choose $\phi\in C^\infty_0(\Omega)$ and notice that
$$
\left\langle P(1-\chi)u,\phi\right\rangle\ =\ \left\langle u,(1-\chi)P^t(\psi\phi)\right\rangle\ =\ \left\langle u,\int_\mathcal X K(x,y)\phi(y)dy\right\rangle,
$$
where for any $N\in\mathbb{N}$ we have the following equality (obtained by the usual integration by parts method):
$$
K(x,y)\ :=\ \big(1-\chi(x)\big)\psi(y)\int_\mathcal X e^{i<\eta,x-y>}|x-y|^{-2N}\big[\Delta_\eta^N p\big(\frac{x+y}{2},-\eta\big)\big]\;\;\bar{}\!\!\!d\eta.
$$
Since $1-\chi$ vanishes on the neighborhood $V_\psi$ of the support of $\psi$ and $N\in\mathbb{N}$ may be taken arbitrarily large, it is evident that $K\in\mathscr{S}(\mathcal X^2)$, so that $v(y):=<u(\cdot),K(\cdot,y)>$ is a function of class $\mathscr{S}(\mathcal X)$ and we have the equality:
$$
\left\langle P(1-\chi)u,\phi\right\rangle\ =\ <v,\phi>,
$$
from which we conclude that $P(1\,-\,\chi)u\in C^\infty(\Omega)$.
\end{proof}
\begin{corollary}\label{C.A.14}
The space $\mathcal{H}^s(\mathbb{T})$ can be identified with the usual Sobolev space of order $s$ on the torus that is defined as $\mathcal{H}^s_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$.
\end{corollary}
\begin{proof}
The Corollary follows from Lemma \ref{L.A.13} using the fact that we have the equality $<D_\Gamma>^s=<D>^s$ on $\mathscr{S}^\prime_\Gamma(\mathcal X)$ and thus for any $u\in\mathscr{S}^\prime_\Gamma(\mathcal X)$ the following relations hold:
$$
u\in\mathcal{H}^s(\mathbb{T})\ \Leftrightarrow\ <D_\Gamma>^su\in L^2(\mathbb{T})\ \Leftrightarrow\ <D>^su\in L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)\ \Leftrightarrow\ u\in\mathcal{H}^s_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X).
$$
\end{proof}
\begin{remark}\label{R.A.15}
Let us notice the following facts to be used in the paper.
\begin{enumerate}
\item For any pair of real numbers $(s,t)$ the application $<D_\Gamma>^s:\mathcal{H}^{t+s}(\mathbb{T})\rightarrow\mathcal{H}^s(\mathbb{T})$ is an isometric isomorphism.
\item For any pair $(u,v)$ of test functions from $\mathscr{S}(\mathbb{T})$ the following equalities are true:
$$
(u,v)_{\mathcal{H}^s(\mathbb{T})}\ =\ \left(<D_\Gamma>^su,<D_\Gamma>^sv\right)_{\mathbb{T}}\ =\ \left(u,<D_\Gamma>^{2s}v\right)_{\mathbb{T}}.
$$
\item The dual of $\mathcal{H}^s(\mathbb{T})$ can be canonically identified with the space $\mathcal{H}^{-s}(\mathbb{T})$. In fact, using also the above remark, we can identify the dual of $\mathcal{H}^s(\mathbb{T})$ with itself via the operator $<D_\Gamma>^{-2s}$.
\end{enumerate}
\end{remark}
\begin{ex}\label{E.A.16}
For any $s\in\mathbb{R}$ and for any $\xi\in\mathcal X^*$ we define the following operator:
\begin{equation}\label{A.20}
<D_\Gamma+\xi>^s:\mathscr{S}^\prime(\mathbb{T})\rightarrow\mathscr{S}^\prime(\mathbb{T}),\qquad<D_\Gamma+\xi>^su:=\underset{\gamma^*\in\Gamma^*}{\sum}<\gamma^*+\xi>^s\hat{u}(\gamma^*)\sigma_{\gamma^*}.
\end{equation}
Considering $u\in\mathscr{S}^\prime_\Gamma(\mathcal X)$ we evidently have that $<D_\Gamma+\xi>^su=<D+\xi>^su$.
\end{ex}
We define the following complex linear space:
\begin{equation}\label{A.21}
\mathcal{K}_{s,\xi}\ :=\ \left\{u\in\mathscr{S}^\prime(\mathbb{T})\,\mid\,<D_\Gamma+\xi>^su\in L^2(\mathbb{T})\right\},
\end{equation}
endowed with the quadratic norm:
\begin{equation}\label{A.22}
\|u\|^2_{\mathcal{K}_{s,\xi}}\ :=\ \left\|<D_\Gamma+\xi>^su\right\|^2_{L^2(\mathbb{T})}\ =\ |E|\underset{\gamma^*\in\Gamma^*}{\sum}<\gamma^*+\xi>^{2s}|\hat{u}(\gamma^*)|^2
\end{equation}
that defines a structure of Hilbert space on it.
It is clear that $\mathcal{K}_{s,\xi}=\mathcal{H}^s(\mathbb{T})$ as complex vector spaces and for $\xi=0$ even as Hilbert spaces (having the same scalar product). Similar arguments to those in Example \ref{E.A.2} show that the family $\{\mathcal{K}_{s,\xi}\}_{\xi\in\mathcal X^*}$ has temperate variation.
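Let us sketch the computation behind the temperate variation property; it uses only the Peetre-type inequality $<\xi+\eta>\,\leq\,\sqrt{2}\,<\xi><\eta>$, which implies $<\gamma^*+\xi>^{2s}\,\leq\,2^{|s|}<\xi-\eta>^{2|s|}<\gamma^*+\eta>^{2s}$ for any $s\in\mathbb{R}$, and thus
$$
\|u\|^2_{\mathcal{K}_{s,\xi}}\ =\ |E|^{-1}\underset{\gamma^*\in\Gamma^*}{\sum}<\gamma^*+\xi>^{2s}|\hat{u}(\gamma^*)|^2\ \leq\ 2^{|s|}<\xi-\eta>^{2|s|}\,\|u\|^2_{\mathcal{K}_{s,\eta}},\qquad\forall\,\xi,\eta\in\mathcal X^*,
$$
which is precisely a polynomial bound of the type required by the definition of a family with temperate variation.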
Coming back to Corollary \ref{C.A.14} we can consider the elements of $\mathcal{K}_{s,\xi}$ as distributions from $\mathcal{H}^s_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$ and we can define the spaces:
\begin{equation}\label{A.23}
\mathcal{F}_{s,\xi}\ :=\ \left\{u\in\mathscr{S}^\prime(\mathcal X)\,\mid\,\sigma_{-\xi}u\in\mathcal{K}_{s,\xi}\right\}.
\end{equation}
It will become a Hilbert space isometrically isomorphic with $\mathcal{K}_{s,\xi}$ (through the operator $\sigma_{-\xi}$) once we endow it with the norm:
\begin{equation}\label{A.24}
\|u\|_{\mathcal{F}_{s,\xi}}\ :=\ \left\|\sigma_{-\xi}u\right\|_{\mathcal{K}_{s,\xi}},\qquad\forall u\in\mathcal{F}_{s,\xi}.
\end{equation}
\begin{remark}\label{R.A.17}
Let us fix some $\xi\in\mathcal X^*$.
\begin{enumerate}
\item Let us denote by:
\begin{equation}\label{A.25}
\mathscr{S}^\prime_\xi(\mathcal X)\ :=\ \left\{u\in\mathscr{S}^\prime(\mathcal X)\,\mid\,\tau_{-\gamma}u=e^{i<\xi,\gamma>}u,\forall\gamma\in\Gamma\right\}.
\end{equation}
Then $\sigma_{-\xi}:\mathscr{S}^\prime_\xi(\mathcal X)\rightarrow\mathscr{S}^\prime_\Gamma(\mathcal X)$ is an isomorphism having the inverse $\sigma_\xi$.
\item Let us notice that we can write:
$$
\mathcal{F}_{0,\xi}\ =\ \left\{u\in\mathscr{S}^\prime(\mathcal X)\,\mid\,\sigma_{-\xi}u\in L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)\right\}\ =\ \mathscr{S}^\prime_\xi(\mathcal X)\cap L^2_{\text{\sf loc}}(\mathcal X)
$$
and conclude that we can identify $\mathcal{F}_{0,\xi}$ with $L^2(E)$ and notice that we have the equality of the norms $\|u\|_{\mathcal{F}_{0,\xi}}=\|u\|_{L^2(E)}$. In fact the isomorphism $\mathfrak{j}_\xi:\mathcal{F}_{0,\xi}\overset{\sim}{\rightarrow}L^2(E)$ is defined by taking the restriction to $E\subset\mathcal X$, i.e. $\mathfrak{j}_\xi u:=u|_E,\forall u\in\mathcal{F}_{0,\xi}$. We can obtain an explicit formula for its inverse $\mathfrak{j}_\xi^{-1}:L^2(E)\overset{\sim}{\rightarrow}\mathcal{F}_{0,\xi}$; for any $v\in L^2(E)$ we define a distribution $\tilde{v}_\xi$ that is equal to $\sigma_{-\xi}v$ on $E$ and is extended to $\mathcal X$ by $\Gamma$-periodicity. This clearly gives us a distribution from $L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$. One can easily see that we have $\mathfrak{j}_\xi^{-1}v=\sigma_\xi\tilde{v}_\xi$.
\item From \eqref{A.4} it follows that $<D+\xi>^s=\sigma_{-\xi}<D>^s\sigma_\xi$. Thus:
$$
\mathcal{F}_{s,\xi}\ =\ \left\{u\in\mathscr{S}^\prime(\mathcal X)\,\mid\,\sigma_{-\xi}u\in\mathcal{H}^s_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)\right\}\ =\ \mathscr{S}^\prime_\xi(\mathcal X)\cap\mathcal{H}^s_{\text{\sf loc}}(\mathcal X),
$$
and for any $u\in\mathcal{F}_{s,\xi}$ we have that
\begin{equation}\label{A.26}
\|u\|_{\mathcal{F}_{s,\xi}}\ =\ \left\|\sigma_{-\xi}u\right\|_{\mathcal{K}_{s,\xi}}\ =
\end{equation}
$$
=\ \left\|<D_\Gamma+\xi>^s\sigma_{-\xi}u\right\|_{L^2(\mathbb{T})}\ =\ \left\|\sigma_{-\xi}<D>^su\right\|_{L^2(E)}\ =\ \left\|<D>^su\right\|_{L^2(E)}.
$$
In particular we obtain that
\begin{equation}\label{A.27}
\mathcal{F}_{s,\xi}\ =\ \left\{u\in\mathscr{S}^\prime_\xi(\mathcal X)\,\mid\,<D>^su\in\mathcal{F}_{0,\xi}\right\},\qquad\|u\|_{\mathcal{F}_{s,\xi}}\ =\ \|<D>^su\|_{\mathcal{F}_{0,\xi}}.
\end{equation}
Moreover, if $s\geq0$ we have that $\mathcal{F}_{s,\xi}=\{u\in\mathcal{F}_{0,\xi}\,\mid\,<D>^su\in\mathcal{F}_{0,\xi}\}$.
\end{enumerate}
\end{remark}
\begin{definition}\label{D.A.18}
We shall denote by $S^m_\rho\big(\mathcal X\times\mathbb{T};\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ the space of symbols $p\in S^m_\rho\big(\mathcal X^2;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ that are $\Gamma$-periodic with respect to the second variable (i.e. $p(x,y+\gamma,\xi)=p(x,y,\xi)$, $\forall(x,y)\in\mathcal X^2$, $\forall\xi\in\mathcal X^*$ and $\forall\gamma\in\Gamma$).
\end{definition}
In a similar way we define the spaces $S^m_\rho\big(\mathbb{T};\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$, $S^m_\rho\big(\mathcal X\times\mathbb{T};\mathbb{B}(\mathcal{A}_0;\mathcal{B}_0)\big)$, $S^m_\rho\big(\mathbb{T};\mathbb{B}(\mathcal{A}_0;\mathcal{B}_0)\big)$, $S^m_\rho\big(\mathcal X\times\mathbb{T}\big)$, $S^m_\rho(\mathbb{T})$, $S^m_{\rho,\epsilon}\big(\mathcal X\times\mathbb{T};\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$, $S^m_{\rho,\epsilon}\big(\mathcal X\times\mathbb{T};\mathbb{B}(\mathcal{A}_0;\mathcal{B}_0)\big)$, $S^m_{\rho,\epsilon}\big(\mathcal X\times\mathbb{T}\big)$. Let us notice that we have an evident identification of $S^m_{\rho,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ with a subspace of $S^m_{\rho,\epsilon}\big(\mathcal X\times\mathbb{T};\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$.
\begin{lemma}\label{L.A.18}
Under the hypothesis of Proposition \ref{P.A.7}, for any $a\in\mathcal X$ we have the equality:
\begin{equation}\label{A.28}
\tau_a\mathfrak{Op}^{A}(p)\ =\ \mathfrak{Op}^{\tau_aA}\big((\tau_a\otimes{\rm id\hspace*{-1.5pt}l})p\big)\tau_a.
\end{equation}
\end{lemma}
\begin{proof}
We start from equality \eqref{A.8} with $u\in\mathscr{S}\big(\mathcal X;\mathcal{A}_0\big)$ and we get:
$$
\left[\tau_a\mathfrak{Op}^{A}(p)u\right](x)\ =\ \int_\Xi e^{i<\eta,x-a-y>}\omega_A(x-a,y)\,p\big(\frac{x-a+y}{2},\eta\big)u(y)\,dy\,\;\;\bar{}\!\!\!d\eta\ =
$$
$$
=\ \int_\Xi e^{i<\eta,x-y>}\omega_A(x-a,y-a)\,p\big(\frac{x+y}{2}-a,\eta\big)u(y-a)\,dy\,\;\;\bar{}\!\!\!d\eta,\qquad\forall x\in\mathcal X.
$$
First let us recall that:
\begin{equation}\label{A.29}
\omega_A(x,y)\ =\ e^{-i\int_{[x,y]}A}\ =\ \exp\left\{i\left\langle(x-y),\int_0^1A\big((1-s)x+sy\big)ds\right\rangle\right\}.
\end{equation}
Let us notice that
$$
\int_{[x-a,y-a]}\hspace*{-1cm}A\hspace*{0.8cm}\ =\ -\left\langle(x-y),\int_0^1A\big((1-s)x+sy-a\big)ds\right\rangle
$$
and thus $\omega_A(x-a,y-a)=\omega_{\tau_aA}(x,y)$.
\end{proof}
\begin{lemma}\label{L.A.19}
For any symbol $p\in S^m_1(\mathbb{T})$ the pseudodifferential operator $P:=\mathfrak{Op}(p)$ induces on $\mathbb{T}$ an operator $P_\Gamma\in\mathbb{B}(\mathcal{K}_{s+m,0};\mathcal{K}_{s,0})$ for any $s\in\mathbb{R}$ and the application $S^m_1(\mathbb{T})\ni p\mapsto P_\Gamma\in\mathbb{B}(\mathcal{K}_{s+m,0};\mathcal{K}_{s,0})$ is continuous.
\end{lemma}
\begin{proof}
From equality \eqref{A.28} with $A=0$ and from the fact that $(\tau_\gamma\otimes{\rm id\hspace*{-1.5pt}l})p=p,\forall\gamma\in\Gamma$ we deduce that $P$ leaves $\mathscr{S}^\prime_\Gamma(\mathcal X)$ invariant and thus induces a linear and continuous operator $P_\Gamma:\mathscr{S}^\prime(\mathbb{T})\rightarrow\mathscr{S}^\prime(\mathbb{T})$. If $u\in\mathcal{K}_{s+m,0}=\mathcal{H}^{s+m}(\mathbb{T})$ we can write
$$
\left\|P_\Gamma u\right\|_{\mathcal{K}_{s,0}}\ =\ \left\|<D_\Gamma>^sP_\Gamma u\right\|_{L^2(\mathbb{T})}\ =\ \left\|<D>^sPu\right\|_{L^2(E)}\ =\ \left\|<D>^sP<D>^{-s-m}<D>^{s+m}u\right\|_{L^2(E)}.
$$
From the Weyl calculus we know that $<D>^sP<D>^{-s-m}=\mathfrak{Op}(q)$ for a well-defined symbol $q\in S^0_1(\mathcal X)$ and the map $S^m_1(\mathbb{T})\ni p\mapsto q\in S^0_1(\mathcal X)$ is continuous; by Lemma \ref{L.A.13} we can find a strictly positive constant $C^\prime_0(p)$ (given by one of the defining seminorms for the topology of $S^m_1(\mathbb{T})$) and a number $N\in\mathbb{N}$ (that does not depend on $p$, as seen in the proof of Lemma \ref{L.A.13}) such that
$$
\left\|<D>^sP<D>^{-s-m}v\right\|_{L^2(E)}\ \leq\ C^\prime_0(p)\|v\|_{L^2(F)},\qquad\forall v\in L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime(\mathcal X),
$$
where $F:=\underset{\gamma\in\Gamma_N}{\cup}\tau_\gamma E$, and $\Gamma_N:=\{\gamma\in\Gamma\,\mid\,|\gamma|\leq N\}$. Let us consider now $v=<D>^{s+m}u\in L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$. We deduce that
$$
\|v\|^2_{L^2(F)}\ =\ \underset{|\gamma|\leq N}{\sum}\int_{\tau_\gamma E}|v(x)|^2dx\ \leq\ C_N^2\|v\|^2_{L^2(E)}\ =\ C_N^2\left\|<D>^{s+m}u\right\|^2_{L^2(E)}\ =\ C_N^2\|u\|^2_{\mathcal{K}_{s+m,0}}.
$$
We conclude that $\|P_\Gamma u\|_{\mathcal{K}_{s,0}}\ \leq\ C_NC^\prime_0(p)\|u\|_{\mathcal{K}_{s+m,0}}$.
\end{proof}
\begin{ex}\label{E.A.20}
For any symbol $p\in S^m_1(\mathbb{T})$ and for any point $\xi\in\mathcal X^*$ we know that $({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\xi})p\in S^m_1(\mathbb{T})$ and due to Lemma \ref{L.A.19} the operator $P_\xi:=\mathfrak{Op}\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\xi})p\big)$ induces on $\mathbb{T}$ a well-defined operator $P_{\Gamma,\xi}\in\mathbb{B}\big(\mathcal{K}_{s+m,0};\mathcal{K}_{s,0}\big)$ for any $s\in\mathbb{R}$. From the same Lemma we deduce that the application $\mathcal X^*\ni\xi\mapsto P_{\Gamma,\xi}\in\mathbb{B}\big(\mathcal{K}_{s+m,0};\mathcal{K}_{s,0}\big)$ is continuous; taking into account that $\partial^\alpha_\xi P_\xi=\mathfrak{Op}\big(({\rm id\hspace*{-1.5pt}l}\otimes\tau_{-\xi})({\rm id\hspace*{-1.5pt}l}\otimes\partial^\alpha)p\big)$ we deduce that the previous application is in fact of class $C^\infty$.
Let us prove now that for any $s\in\mathbb{R}$ we have that
\begin{equation}\label{A.31}
P_{\Gamma,\xi}\ \in\ S^0_0\big(\mathbb{T};\mathbb{B}(\mathcal{K}_{s+m,\xi};\mathcal{K}_{s,\xi})\big)
\end{equation}
and the application
\begin{equation}\label{A.32}
S^m_1(\mathbb{T})\ni p\mapsto P_{\Gamma,\xi}\,\in\,S^0_0\big(\mathbb{T};\mathbb{B}(\mathcal{K}_{s+m,\xi};\mathcal{K}_{s,\xi})\big)
\end{equation}
is continuous.
These last two statements will follow once we have proved that for any $\alpha\in\mathbb{N}^d$ there exists a defining seminorm $c_\alpha$ of the topology of $S^m_1(\mathbb{T})$ such that
\begin{equation}\label{A.33}
\left\|\partial^\alpha_\xi P_{\Gamma,\xi}\right\|_{\mathbb{B}(\mathcal{K}_{s+m,\xi};\mathcal{K}_{s,\xi})}\ \leq\ c_\alpha(p),\ \forall\xi\in\mathcal X^*.
\end{equation}
It is clearly enough to prove the case $\alpha=0$. Then, using \eqref{A.4} we deduce that for any $u\in\mathcal{K}_{s+m,\xi}$ we have that:
$$
\left\|P_{\Gamma,\xi}u\right\|_{\mathcal{K}_{s,\xi}}\ =\ \left\|<D_\Gamma+\xi>^sP_{\Gamma,\xi}u\right\|_{L^2(\mathbb{T})}\ =\ \left\|<D+\xi>^sP_\xi u\right\|_{L^2(E)}\ =\ \left\|<D+\xi>^s\sigma_{-\xi}P\sigma_\xi u\right\|_{L^2(E)}\ =
$$
$$
=\ \left\|\sigma_{-\xi}<D>^sP\sigma_\xi u\right\|_{L^2(E)}\ =\ \left\|<D>^sP<D>^{-s-m}<D>^{s+m}\sigma_\xi u\right\|_{L^2(E)}\ =
$$
$$
=\ \left\|<D>^sP<D>^{-s-m}\sigma_\xi<D+\xi>^{s+m}u\right\|_{L^2(E)}.
$$
As in the proof of Lemma \ref{L.A.19} we deduce that
$$
\left\|<D>^sP<D>^{-s-m}v\right\|_{L^2(E)}\ \leq\ C^\prime_0(p)\|v\|_{L^2(F)},\qquad\forall v\in L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime(\mathcal X).
$$
We consider a vector $w:=<D+\xi>^{s+m}u\in L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$ and $v:=\sigma_\xi w$. Then
$$
\|v\|^2_{L^2(F)}\ =\ \|w\|^2_{L^2(F)}\ \leq\ C_N^2\|w\|^2_{L^2(E)}\ =\ C_N^2\left\|<D+\xi>^{s+m}u\right\|^2_{L^2(E)}\ =\ C_N^2\|u\|^2_{\mathcal{K}_{s+m,\xi}}.
$$
Thus \eqref{A.33} follows for $\alpha=0$ with $c_0(p)=C_NC_0^\prime(p)$.
\end{ex}
\begin{lemma}\label{L.A.21}
Let $p\in S^m_1(\mathbb{T})$ be a real elliptic symbol (i.e. $\exists C>0$, $\exists R>0$ such that $p(y,\eta)\geq C|\eta|^m$ for any $(y,\eta)\in\Xi$ with $|\eta|\geq R$), with $m>0$. Then the operator $P_\Gamma$ defined in Lemma \ref{L.A.19} is self-adjoint on the domain $\mathcal{K}_{m,0}$. Moreover, $P_\Gamma$ is lower semi-bounded and its graph-norm on $\mathcal{K}_{m,0}$ gives a norm equivalent to the defining norm of $\mathcal{K}_{m,0}$.
\end{lemma}
\begin{proof}
Let us first verify the symmetry of $P_\Gamma$ on $\mathcal{K}_{m,0}$. Due to the density of $\mathscr{S}(\mathbb{T})$ in $\mathcal{K}_{m,0}$ and to the fact that $P_\Gamma\in\mathbb{B}\big(\mathcal{K}_{m,0};L^2(\mathbb{T})\big)$, it is enough to verify the symmetry of $P_\Gamma$ on $\mathscr{S}(\mathbb{T})$. Let us choose two vectors $u$ and $v$ from $\mathscr{S}(\mathbb{T})$ so that we have to prove the equality $(P_\Gamma u,v)_{L^2(\mathbb{T})}=(u,P_\Gamma v)_{L^2(\mathbb{T})}$ or equivalently
\begin{equation}\label{A.34}
(P u,v)_{L^2(E)}\ =\ (u,Pv)_{L^2(E)}.
\end{equation}
Identifying $\mathscr{S}(\mathbb{T})$ with $\mathscr{E}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$ and using the definition of the operator $P$ on the space $\mathscr{S}^\prime(\mathcal X)$, one easily verifies that $Pu$ also belongs to $\mathscr{E}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$ and is explicitly given by the following oscillating integral ($\forall x\in\mathcal X$):
\begin{equation}\label{A.35}
\big(Pu\big)(x)\ =\ \int_\Xi e^{i<\eta,x-y>}p\big(\frac{x+y}{2},\eta\big)\,u(y)\,dy\,\;\;\bar{}\!\!\!d\eta\ =
\end{equation}
$$
=\ \underset{\gamma\in\Gamma}{\sum}\int_{\tau_\gamma E}\int_{\mathcal X^*}e^{i<\eta,x-y>}p\big(\frac{x+y}{2},\eta\big)\,u(y)\,dy\,\;\;\bar{}\!\!\!d\eta\ =\ \underset{\gamma\in\Gamma}{\sum}\int_{E}\int_{\mathcal X^*}e^{i<\eta,x-y+\gamma>}p\big(\frac{x+y-\gamma}{2},\eta\big)\,u(y)\,dy\,\;\;\bar{}\!\!\!d\eta,
$$
the series converging in $\mathscr{E}(\mathcal X)$. Using the $\Gamma$-periodicity of $p$ we obtain that:
$$
(Pu,v)_{L^2(E)}\ =\ \int_E\big(Pu\big)(x)\overline{v(x)}\,dx\ =\ \underset{\gamma\in\Gamma}{\sum}\int_{E}\int_{E}\int_{\mathcal X^*}e^{i<\eta,x-y+\gamma>}p\big(\frac{x+y-\gamma}{2},\eta\big)\,u(y)\overline{v(x)}\,dx\,dy\,\;\;\bar{}\!\!\!d\eta\ =
$$
$$
=\ \int_{E}u(y)\overline{\left[\underset{\gamma\in\Gamma}{\sum}\int_{E}\int_{\mathcal X^*} e^{i<\eta,y-x-\gamma>}p\big(\frac{x+y+\gamma}{2},\eta\big)v(x)dx\,\;\;\bar{}\!\!\!d\eta\right]}dy\ =\ \int_Eu(y)\overline{\big(Pv\big)(y)}\,dy\ =(u,Pv)_{L^2(E)},
$$
and thus we proved the equality \eqref{A.34}.
In order to prove the self-adjointness of $P_\Gamma$ let us choose some vector $u\in\mathcal{D}\big(P_\Gamma^*\big)$; thus there exists $f\in L^2(\mathbb{T})$ such that we have the equality $(P_\Gamma \varphi,u)_{L^2(\mathbb{T})}=(\varphi,f)_{L^2(\mathbb{T})}$, $\forall\varphi\in\mathscr{S}(\mathbb{T})$. Using now the facts that $\mathscr{S}(\mathbb{T})$ is dense in $\mathscr{S}^\prime(\mathbb{T})$ and that $P_\Gamma$ is symmetric on $\mathscr{S}(\mathbb{T})$, we deduce that
$$
(\varphi,f)_{\mathbb{T}}\ =\ \left(P_\Gamma\varphi,u\right)_{\mathbb{T}}\ =\ \left(\varphi,P_\Gamma u\right)_{\mathbb{T}},\qquad\forall\varphi\in\mathscr{S}(\mathbb{T})
$$
and thus we obtain the equality $P_\Gamma u=f$ in $\mathscr{S}^\prime(\mathbb{T})$. By hypothesis $P_\Gamma$ is an elliptic pseudodifferential operator of strictly positive order $m$ on the compact manifold $\mathbb{T}$, so that the usual regularity results imply that $u\in\mathcal{K}_{m,0}=\mathcal{D}(P_\Gamma)$. In conclusion $P_\Gamma$ is self-adjoint on the domain $\mathcal{K}_{m,0}$.
The lower semiboundedness property follows from the G\aa rding inequality:
\begin{equation}\label{A.36}
\left(P_\Gamma u,u\right)_{L^2(\mathbb{T})}\ \geq\ C^{-1}\|u\|^2_{\mathcal{K}_{m/2,0}}\,-\,C\|u\|^2_{L^2(\mathbb{T})},\qquad\forall u\in\mathcal{K}_{m,0}.
\end{equation}
The equivalence of the norms stated as the last point of the Lemma follows from the Closed Graph Theorem.
\end{proof}
\begin{remark}\label{R.A.22}
Under the hypothesis of Lemma \ref{L.A.21}, the same proof also shows that for any $\xi\in\mathcal X^*$ the operator $P_{\Gamma,\xi}$ from Example \ref{E.A.20} is self-adjoint and lower semibounded in $L^2(\mathbb{T})$ on the domain $\mathcal{K}_{m,\xi}$. As in Remark \ref{R.A.17} we can identify $\mathcal{K}_{m,\xi}$ with $\mathcal{H}^m_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$ (endowed with the norm $\left\|<D+\xi>^mu\right\|_{L^2(E)}$), and thus the operator $P_\xi$ is self-adjoint in the space $L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$ on the domain $\mathcal{K}_{m,\xi}$. From \eqref{A.4} we know that $P=\sigma_\xi P_\xi\sigma_{-\xi}$; since $\sigma_\xi:\mathcal{K}_{s,\xi}\rightarrow\mathcal{F}_{s,\xi}$ is a unitary operator for any $s\in\mathbb{R}$ and any $\xi\in\mathcal X^*$, we conclude that the operator induced by $P$ in $\mathcal{F}_{0,\xi}$ is unitarily equivalent with the operator induced by $P_\xi$ in $\mathcal{K}_{0,\xi}\cong L^2_{\text{\sf loc}}(\mathcal X)\cap\mathscr{S}^\prime_\Gamma(\mathcal X)$. It follows that the operator $P$, acting in $\mathcal{F}_{0,\xi}$ with domain $\mathcal{F}_{m,\xi}$, is self-adjoint and lower semibounded.
\end{remark}
\subsection{Properties of magnetic pseudodifferential operators with operator-valued symbols}
\begin{theorem}\label{T.A.23}
Let us first consider the composition operation. Suppose given three families of Hilbert spaces with temperate variation $\{\mathcal{A}_\xi\}_{\xi\in\mathcal X^*}$, $\{\mathcal{B}_\xi\}_{\xi\in\mathcal X^*}$ and $\{\mathcal{C}_\xi\}_{\xi\in\mathcal X^*}$, two families of symbols $\{p_\epsilon\}_{|\epsilon|\leq\epsilon_0}\in S^m_{\rho,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{B}_\bullet;\mathcal{C}_\bullet)\big)$ and $\{q_\epsilon\}_{|\epsilon|\leq\epsilon_0}\in S^{m^\prime}_{\rho,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$, and a family of magnetic fields $\{B_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ satisfying Hypothesis H.1 from Section \ref{S.1}, with an associated family of vector potentials $\{A_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ given by \eqref{0.28}. Then
\begin{enumerate}
\item There exists a family of symbols
$$
\{p_\epsilon\sharp^{B_\epsilon}q_\epsilon\}_{|\epsilon|\leq\epsilon_0}\ \in\ S^{m+m^\prime}_{\rho,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{C}_\bullet)\big),\quad\text{such that}\quad\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\mathfrak{Op}^{A_\epsilon}(q_\epsilon)=\mathfrak{Op}^{A_\epsilon}(p_\epsilon\sharp^{B_\epsilon}q_\epsilon).
$$
\item The application
\begin{equation}\label{A.37}
S^m_{\rho}\big(\mathcal X;\mathbb{B}(\mathcal{B}_\bullet;\mathcal{C}_\bullet)\big)\times S^{m^\prime}_{\rho}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)\ni(p_\epsilon,q_\epsilon)\ \mapsto\ p_\epsilon\sharp^{B_\epsilon}q_\epsilon\in S^{m+m^\prime}_{\rho}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{C}_\bullet)\big)
\end{equation}
is continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\item There exists a family of symbols $\{r_\epsilon\}_{|\epsilon|\leq\epsilon_0}\in S^{m+m^\prime-\rho}_{\rho,\epsilon}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{C}_\bullet)\big)$ having the following properties:
\begin{equation}\label{A.38}
\underset{\epsilon\rightarrow0}{\lim}\,r_\epsilon\ =\ 0\quad\text{in}\ S^{m+m^\prime-\rho}_{\rho}\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{C}_\bullet)\big)
\end{equation}
\begin{equation}\label{A.39}
p_\epsilon\sharp^{B_\epsilon}q_\epsilon\ =\ p_\epsilon\cdot q_\epsilon\ +\ r_\epsilon,\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
By a standard cut-off procedure, as in the proof of Proposition \ref{P.A.7}, we may reduce the problem to the case of symbols with compact support in both arguments $(x,\xi)\in\Xi$.
A direct computation using the Stokes formula and the fact that $dB_\epsilon=0$ for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ shows that for point (1) of the Theorem we may take the definition of the composition operation to be the following well-defined integral formula:
\begin{equation}\label{A.40}
\big(p_\epsilon\sharp^{B_\epsilon}q_\epsilon\big)(X)\ =\ \int_\Xi\int_\Xi e^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]}\,\omega^{B_\epsilon}(x,y,z)\,p_\epsilon(X-Y)q_\epsilon(X-Z)\,\;\;\bar{}\!\!\!d Y\,\;\;\bar{}\!\!\!d Z,
\end{equation}
where we used the notation $X:=(x,\xi)$, $Y:=(y,\eta)$, $Z:=(z,\zeta)$, $[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]:=<\eta,z>-<\zeta,y>$ and
$$
\omega^{B_\epsilon}(x,y,z):=e^{-iF_\epsilon(x,y,z)},\qquad F_\epsilon(x,y,z):=\int_{<x-y+z,x-y-z,x+y-z>}\hspace*{-2cm}B_\epsilon
$$
with $<a,b,c>$ the triangle with vertices $a\in\mathcal X$, $b\in\mathcal X$ and $c\in\mathcal X$.
A direct computation (see for example Lemma 1.1 from \cite{IMP1}) shows that all the vectors $\nabla_xF_\epsilon$, $\nabla_yF_\epsilon$ and $\nabla_zF_\epsilon$ have the form $C_\epsilon(x,y,z)y\,+\,D_\epsilon(x,y,z)z$ with $C_\epsilon$ and $D_\epsilon$ functions of class $BC^\infty\big(\mathcal X^3;\mathbb{B}(\mathcal X)\big)$ satisfying the conditions $\underset{\epsilon\rightarrow0}{\lim}\,C_\epsilon\,=\ \underset{\epsilon\rightarrow0}{\lim}\,D_\epsilon\,=\ 0$ in $BC^\infty\big(\mathcal X^3;\mathbb{B}(\mathcal X)\big)$. It follows easily then that the derivatives of $\omega^{B_\epsilon}(x,y,z)$ of order at least 1 are finite linear combinations of terms of the form $C_{(\alpha,\beta);\epsilon}y^\alpha z^\beta\omega^{B_\epsilon}(x,y,z)$ with $C_{(\alpha,\beta);\epsilon}\in BC^\infty(\mathcal X^3)$ satisfying the property $\underset{\epsilon\rightarrow0}{\lim}\,C_{(\alpha,\beta);\epsilon}=0$ in $BC^\infty(\mathcal X^3)$. Applying now the usual integration by parts with respect to the variables $\{y,z,\eta,\zeta\}$ we obtain that for any $N_j\in\mathbb{N}$ ($1\leq j\leq4$) and for any $X\in\Xi$ the following equality is true:
\begin{equation}\label{A.41}
\big(p_\epsilon\sharp^{B_\epsilon}q_\epsilon\big)(X)\ =\ \int_\Xi\int_\Xi e^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]}<\eta>^{-2N_1}<\zeta>^{-2N_2}\big({\rm id\hspace*{-1.5pt}l}-\frac{1}{4}\Delta_z\big)^{N_1}\big({\rm id\hspace*{-1.5pt}l}-\frac{1}{4}\Delta_y\big)^{N_2}\times
\end{equation}
$$
\times\left[<y>^{-2N_3}<z>^{-2N_4}\omega^{B_\epsilon}(x,y,z)\left(\big({\rm id\hspace*{-1.5pt}l}-\frac{1}{4}\Delta_\eta\big)^{N_4}p_\epsilon(X-Y)\right)\left(\big({\rm id\hspace*{-1.5pt}l}-\frac{1}{4}\Delta_\zeta\big)^{N_3}q_\epsilon(X-Z)\right)\right]\;\;\bar{}\!\!\!d Y\,\;\;\bar{}\!\!\!d Z.
$$
First we apply the differentiation operators on the functions they act on and then we eliminate all the monomials of the form $y^\alpha z^\beta$ that appear from the differentiation of $\omega^{B_\epsilon}$ by integrating by parts using the formulas:
$$
y_je^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]}\ =\ \frac{1}{2i}\partial_{\zeta_j}e^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]},\quad z_je^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]}\ =\ -\frac{1}{2i}\partial_{\eta_j}e^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]}.
$$
These computations allow us to obtain the following estimation (for some $C>0$ and $N\in\mathbb{N}$):
\begin{equation}\label{A.42}
\left\|\big(p_\epsilon\sharp^{B_\epsilon}q_\epsilon\big)(X)\right\|_{\mathbb{B}(\mathcal{A}_\xi;\mathcal{C}_\xi)}\ \leq
\end{equation}
\begin{scriptsize}
$$
\leq\ C\underset{|\alpha|,|\beta|,|\gamma|,|\delta|\leq N}{\max}\ \int_\Xi\int_\Xi<\eta>^{-2N_1}<\zeta>^{-2N_2}<y>^{-2N_3}<z>^{-2N_4}\left\|\partial^\alpha_x\partial^\beta_\xi p_\epsilon(X-Y)\right\|_{\mathbb{B}(\mathcal{B}_\xi;\mathcal{C}_\xi)}\left\|\partial^\gamma_x\partial^\delta_\xi q_\epsilon(X-Z)\right\|_{\mathbb{B}(\mathcal{A}_\xi;\mathcal{B}_\xi)}\,\;\;\bar{}\!\!\!d Y\,\;\;\bar{}\!\!\!d Z,
$$
\end{scriptsize}
for any $\epsilon\in[-\epsilon_0,\epsilon_0]$. We use now \eqref{A.1} and \eqref{A.3} and obtain the following estimations valid for any $\epsilon\in[-\epsilon_0,\epsilon_0]$:
\begin{equation}\label{A.43}
\left\|\partial^\alpha_x\partial^\beta_\xi p_\epsilon(X-Y)\right\|_{\mathbb{B}(\mathcal{B}_\xi;\mathcal{C}_\xi)}\ \leq\ C<\eta>^{2M}\left\|\partial^\alpha_x\partial^\beta_\xi p_\epsilon(X-Y)\right\|_{\mathbb{B}(\mathcal{B}_{\xi-\eta};\mathcal{C}_{\xi-\eta})}\ \leq
\end{equation}
$$
\leq\ C<\eta>^{2M}<\xi-\eta>^{m-\rho|\beta|}\left(\underset{Z\in\Xi}{\sup}<\zeta>^{-m+\rho|\beta|}\left\|\big(\partial^\alpha_z\partial^\beta_\zeta p_\epsilon\big)(Z)\right\|_{\mathbb{B}(\mathcal{B}_{\zeta};\mathcal{C}_{\zeta})}\right).
$$
Repeating the same computations for the derivatives of $q_\epsilon$ and choosing suitably large exponents $N_j$ ($1\leq j\leq4$) in \eqref{A.42}, we deduce the existence of two defining seminorms $|\cdot|_{n_1}$ and $|\cdot|_{n_2}$ on the Fr\'{e}chet spaces $S^m_\rho\big(\mathcal X;\mathbb{B}(\mathcal{B}_\bullet;\mathcal{C}_\bullet)\big)$ and respectively $S^{m^\prime}_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)$ such that we have the estimation:
\begin{equation}\label{A.44}
\underset{X\in\Xi}{\sup}<\xi>^{-(m+m^\prime)}\left\|\big(p_\epsilon\sharp^{B_\epsilon}q_\epsilon\big)(X)\right\|_{\mathbb{B}(\mathcal{A}_\xi;\mathcal{C}_\xi)}\ \leq\ |p_\epsilon|_{n_1}\,|q_\epsilon|_{n_2},\qquad\forall\epsilon\in[-\epsilon_0,\epsilon_0].
\end{equation}
The derivatives of $p_\epsilon\sharp^{B_\epsilon}q_\epsilon$ can be estimated in a similar way in order to conclude that $p_\epsilon\sharp^{B_\epsilon}q_\epsilon\in S^{m+m^\prime}_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{C}_\bullet)\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$ and that property (2) is valid.
Considering now the family of symbols $\{p_\epsilon\sharp^{B_\epsilon}q_\epsilon\}_{|\epsilon|\leq\epsilon_0}$, hypotheses (2) and (3) from Definition \ref{D.A.5} follow easily from \eqref{A.38} and \eqref{A.39}.
In conclusion, only point (3) remains to be proved. By the same arguments as above we may once again assume that the symbols $p_\epsilon$ and $q_\epsilon$ have compact support. We begin by using, in \eqref{A.40}, the equality:
\begin{equation}\label{A.45}
p_\epsilon(X-Y)q_\epsilon(X-Z)\ =
\end{equation}
$$
=\ p_\epsilon(X)q_\epsilon(X)\,-\,\int_0^1\left[\left\langle Y,\nabla_Xp_\epsilon(X-tY)\right\rangle q_\epsilon(X-tZ)\,+\,p_\epsilon(X-tY)\left\langle Z,\nabla_X q_\epsilon(X-tZ)\right\rangle\right]dt.
$$
The first term on the right hand side of the equality \eqref{A.45} will produce the term $p_\epsilon q_\epsilon$ in the equality \eqref{A.39} (see also Lemma 2.1 from \cite{IMP1}). Let us now study the term obtained by inserting \eqref{A.45} into \eqref{A.40}. We eliminate $Y$ and $Z$ by integration by parts as in the beginning of this proof, taking also into account the following identities:
$$
\eta_je^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]}\ =\ -\frac{1}{2i}\partial_{z_j}e^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]},\quad\zeta_je^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]}\ =\ \frac{1}{2i}\partial_{y_j}e^{-2i[\hspace*{-1.5pt}[ Y,Z]\hspace*{-1.5pt}]}.
$$
These operations will produce derivatives of $p_\epsilon$ and $q_\epsilon$ with respect to $x\in\mathcal X$, which go to 0 as $\epsilon\rightarrow0$ in the topology of their symbol spaces, as well as derivatives of $F_\epsilon$ with respect to $y$ and $z$; the latter may once again be transformed by integrations by parts into factors of the form $C_\epsilon\in BC^\infty(\mathcal X^3)$ having limit 0 for $\epsilon\rightarrow0$ as elements of $BC^\infty(\mathcal X^3)$. Thus, the estimations proved in the first part of the proof imply that the equality \eqref{A.39} holds with $r_\epsilon=\int_0^1s_\epsilon(t)\,dt$, where $s_\epsilon(t)\in S^{m+m^\prime-\rho}_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{C}_\bullet)\big)$ uniformly with respect to $(\epsilon,t)\in[-\epsilon_0,\epsilon_0]\times[0,1]$ and $\underset{\epsilon\rightarrow0}{\lim}\,s_\epsilon(t)\,=\,0$ in $S^{m+m^\prime-\rho}_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{C}_\bullet)\big)$ uniformly with respect to $t\in[0,1]$. We conclude that $r_\epsilon$ has the properties stated in the Theorem.
\end{proof}
\begin{remark}\label{R.A.24}
The proof of Theorem \ref{T.A.23} also implies the following fact (that we shall use in the paper):
{\it the operation $\sharp^{B_\epsilon}$ is also well defined as an operation $S^m_\rho\big(\mathcal X;\mathbb{B}(\mathcal{B}_\bullet;\mathcal{C}_\bullet)\big)\times S^{m^\prime}_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{B}_\bullet)\big)\rightarrow S^{m+m^\prime}_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A}_\bullet;\mathcal{C}_\bullet)\big)$, which is bilinear and continuous uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.}
\end{remark}
\begin{remark}\label{R.A.25}
As in \cite{IMP2} one can define a family of symbols $\{q_{s,\epsilon}\}_{(s,\epsilon)\in\mathbb{R}\times[-\epsilon_0,\epsilon_0]}$ having the following properties:
\begin{enumerate}
\item $q_{s,\epsilon}\,\in\,S^s_1(\mathcal X)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$,
\item $q_{s,\epsilon}\sharp^{B_\epsilon}q_{-s,\epsilon}\,=\,1$,
\item $\forall s>0$ we have that $q_{s,\epsilon}(x,\xi)=<\xi>^s+\mu$ with some sufficiently large $\mu>0$, while $q_{0,\epsilon}=1$.
\end{enumerate}
\end{remark}
Evidently, for any Hilbert space $\mathcal{A}$ we can identify the symbol $q_{s,\epsilon}$ with the operator-valued symbol $q_{s,\epsilon}{\rm id\hspace*{-1.5pt}l}_{\mathcal{A}}$, and thus we may consider that $q_{s,\epsilon}\in S^s_1\big(\mathcal X;\mathbb{B}(\mathcal{A})\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. We shall use the notation $Q_{s,\epsilon}:=\mathfrak{Op}^{A_\epsilon}(q_{s,\epsilon})$.
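Let us also note explicitly a consequence of the above properties: combining property (2) of Remark \ref{R.A.25} with the composition formula of Theorem \ref{T.A.23} (see also Remark \ref{R.A.24}) gives
$$
Q_{s,\epsilon}\,Q_{-s,\epsilon}\ =\ \mathfrak{Op}^{A_\epsilon}\big(q_{s,\epsilon}\sharp^{B_\epsilon}q_{-s,\epsilon}\big)\ =\ \mathfrak{Op}^{A_\epsilon}(1)\ =\ {\rm id\hspace*{-1.5pt}l},
$$
and, replacing $s$ by $-s$, also $Q_{-s,\epsilon}\,Q_{s,\epsilon}={\rm id\hspace*{-1.5pt}l}$; these identities are behind the factorization used in the proof of Proposition \ref{P.A.26} below.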
\begin{proposition}\label{P.A.26}
Suppose given two Hilbert spaces $\mathcal{A}$ and $\mathcal{B}$ and for any $\epsilon\in[-\epsilon_0,\epsilon_0]$ a symbol $p_\epsilon\in S^m_0\big(\mathcal X;\mathbb{B}(\mathcal{A};\mathcal{B})\big)$, uniformly in $\epsilon\in[-\epsilon_0,\epsilon_0]$. Then for any $s\in\mathbb{R}$ the operator $\mathfrak{Op}^{A_\epsilon}(p_\epsilon)$ belongs to the space $\mathbb{B}\big(\mathcal{H}^{s+m}_{A_\epsilon}(\mathcal X)\otimes\mathcal{A};\mathcal{H}^s_{A_\epsilon}(\mathcal X)\otimes\mathcal{B}\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. Moreover, the norm of $\mathfrak{Op}^{A_\epsilon}(p_\epsilon)$ in the above Banach space is bounded from above by a seminorm of $p_\epsilon$ in $S^m_0\big(\mathcal X;\mathbb{B}(\mathcal{A};\mathcal{B})\big)$, uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\end{proposition}
\begin{proof}
For $m=s=0$ the proposition may be proved by the same arguments as in the scalar case $\mathcal{A}=\mathcal{B}=\mathbb{C}$ (see for example \cite{IMP1}). Also using the results from \cite{IMP1}, we can see that for any $t\in\mathbb{R}$ the operator $Q_{s,\epsilon}$ belongs to the space $\mathbb{B}\big(\mathcal{H}^{t+s}_{A_\epsilon}(\mathcal X);\mathcal{H}^t_{A_\epsilon}(\mathcal X)\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$. The proof of the general case follows now from the following identity:
$$
\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\ =\ Q_{-s,\epsilon}\,Q_{s,\epsilon}\,\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\,Q_{-(s+m),\epsilon}\,Q_{s+m,\epsilon}
$$
and the fact that $q_{s,\epsilon}\sharp^{B_\epsilon}p_\epsilon\sharp^{B_\epsilon}q_{-(s+m),\epsilon}$ is a symbol of class $S^0_0\big(\mathcal X;\mathbb{B}(\mathcal{A};\mathcal{B})\big)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$ (as implied by the Remark \ref{R.A.24}).
\end{proof}
\begin{proposition}\label{P.A.27}
Suppose given a Hilbert space $\mathcal{A}$ and a bounded subset $\{p_\epsilon\}_{|\epsilon|\leq\epsilon_0}\subset S^0_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A})\big)$ such that $\underset{\epsilon\rightarrow0}{\lim}\,p_\epsilon=0$ in this space of symbols. Then, for sufficiently small $\epsilon_0>0$ the following statements are true:
\begin{enumerate}
\item ${\rm id\hspace*{-1.5pt}l}+\mathfrak{Op}^{A_\epsilon}(p_\epsilon)$ is invertible in $\mathbb{B}\big(L^2(\mathcal X)\otimes\mathcal{A}\big)$ for any $\epsilon\in[-\epsilon_0,\epsilon_0]$.
\item There exists a bounded subset of symbols $\{q_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ from $S^0_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A})\big)$ such that $\underset{\epsilon\rightarrow0}{\lim}\,q_\epsilon=0$ in $S^0_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A})\big)$ and the following equality holds:
\begin{equation}\label{A.46}
\big[{\rm id\hspace*{-1.5pt}l}\,+\,\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\big]^{-1}\ =\ {\rm id\hspace*{-1.5pt}l}\,+\,\mathfrak{Op}^{A_\epsilon}(q_\epsilon).
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
The first statement above is quite evident once we notice that, following Proposition \ref{P.A.26}, we can choose some small enough $\epsilon_0>0$ such that $\left\|\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\right\|_{\mathbb{B}(L^2(\mathcal X)\otimes\mathcal{A})}\leq1/2$ for any $\epsilon\in[-\epsilon_0,\epsilon_0]$. By a straightforward modification of the arguments given in \textsection 6.1 from \cite{IMP2}, in order to deal with operator-valued symbols, we deduce that there exists a bounded subset $\{r_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ in $S^0_\rho\big(\mathcal X;\mathbb{B}(\mathcal{A})\big)$ such that:
\begin{equation}\label{A.47}
\big[{\rm id\hspace*{-1.5pt}l}\,+\,\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\big]^{-1}\ =\ \mathfrak{Op}^{A_\epsilon}(r_\epsilon).
\end{equation}
The equality \eqref{A.46} follows if we notice that
$$
\big[{\rm id\hspace*{-1.5pt}l}\,+\,\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\big]^{-1}\ =\ {\rm id\hspace*{-1.5pt}l}\,-\,\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\big[{\rm id\hspace*{-1.5pt}l}\,+\,\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\big]^{-1}\ =\ {\rm id\hspace*{-1.5pt}l}\,-\,\mathfrak{Op}^{A_\epsilon}(p_\epsilon)\mathfrak{Op}^{A_\epsilon}(r_\epsilon)
$$
and also that Remark \ref{R.A.24} implies that $q_\epsilon:=-p_\epsilon\sharp^{B_\epsilon}r_\epsilon$ has all the stated properties.
\end{proof}
\subsection{Relativistic Hamiltonians}
We shall close this subsection with the study of a property that connects the two relativistic Schr\"{o}dinger Hamiltonians $\mathfrak{Op}^{A_\epsilon}(h_R)$ and $\big[\mathfrak{Op}^{A_\epsilon}(h_{NR})\big]^{1/2}$ with $h_R(x,\xi):=<\xi>\equiv\sqrt{1+|\xi|^2}$ and $h_{NR}(x,\xi):=1+|\xi|^2\equiv<\xi>^2$. We shall use some arguments presented in \textsection 6.3 of \cite{IMP2}.
\begin{proposition}\label{P.A.28}
There exists a bounded subset $\{q_\epsilon\}_{|\epsilon|\leq\epsilon_0}$ of symbols from $S^0_1(\mathcal X)$ such that $\underset{\epsilon\rightarrow0}{\lim}\,q_\epsilon=0$ in $S^0_1(\mathcal X)$ and
\begin{equation}\label{A.48}
\big[\mathfrak{Op}^{A_\epsilon}(h_{NR})\big]^{1/2}\ =\ \mathfrak{Op}^{A_\epsilon}(h_R)\,+\,\mathfrak{Op}^{A_\epsilon}(q_\epsilon).
\end{equation}
\end{proposition}
\begin{proof}
Following \cite{IMP2}, if we denote by $p^-$ the inverse of the symbol $p$ with respect to the composition $\sharp^{B_\epsilon}$, then we can write:
\begin{equation}\label{A.49}
\big[\mathfrak{Op}^{A_\epsilon}(h_{NR})\big]^{1/2}\ =\ \mathfrak{Op}^{A_\epsilon}(h_{NR})\mathfrak{Op}^{A_\epsilon}\left(-\frac{1}{2\pi i}\int_{-i\infty}^{i\infty}{\rm z}^{-1/2}\big(<\xi>^2-{\rm z}\big)^-\,d{\rm z}\right).
\end{equation}
Recalling the proof of point (3) in Theorem \ref{T.A.23} we can easily prove that:
\begin{equation}\label{A.50}
\big(<\xi>^2-{\rm z}\big)\,\sharp^{B_\epsilon}\,\big(<\xi>^2-{\rm z}\big)^{-1}\ =\ 1\,+\,r_{\epsilon,{\rm z}}
\end{equation}
where $<{\rm z}>r_{\epsilon,{\rm z}}\in S^0_1(\mathcal X)$ uniformly for $(\epsilon,{\rm z})\in[-\epsilon_0,\epsilon_0]\times i\mathbb{R}$ and $\underset{\epsilon\rightarrow0}{\lim}\,<{\rm z}>r_{\epsilon,{\rm z}}=0$ in $S^0_1(\mathcal X)$ uniformly with respect to ${\rm z}\in i\mathbb{R}$. Following the proof of Proposition \ref{P.A.27}, for $\epsilon_0>0$ sufficiently small there exists a symbol $f_{\epsilon,{\rm z}}$ such that $<{\rm z}>f_{\epsilon,{\rm z}}\in S^0_1(\mathcal X)$ uniformly with respect to $(\epsilon,{\rm z})\in[-\epsilon_0,\epsilon_0]\times i\mathbb{R}$, $\underset{\epsilon\rightarrow0}{\lim}<{\rm z}>f_{\epsilon,{\rm z}}=0$ in $S^0_1(\mathcal X)$ uniformly with respect to ${\rm z}\in i\mathbb{R}$ and we also have
\begin{equation}\label{A.52}
\big(1\,+\,r_{\epsilon,{\rm z}}\big)^-\ =\ 1\,+\,f_{\epsilon,{\rm z}}.
\end{equation}
From \eqref{A.50}, \eqref{A.52} and the properties of the symbol $r_{\epsilon,{\rm z}}$ it follows that we can define:
\begin{equation}\label{A.53}
\big(<\xi>^2-{\rm z}\big)^-\ :=\ \big(<\xi>^2-{\rm z}\big)^{-1}\,\sharp^{B_\epsilon}\,\big(1\,+\,f_{\epsilon,{\rm z}}\big)\ =\ \big(<\xi>^2-{\rm z}\big)^{-1}\,+\,\big(<\xi>^2-{\rm z}\big)^{-1}\,\sharp^{B_\epsilon}\,f_{\epsilon,{\rm z}}.
\end{equation}
Using \eqref{A.53} in \eqref{A.49} we notice that the term $\big(<\xi>^2-{\rm z}\big)^{-1}$ produces by magnetic quantization a term of the form $\mathfrak{Op}^{A_\epsilon}(h_{NR})\mathfrak{Op}^{A_\epsilon}(h_{R}^{-1})$ and using Theorem \ref{T.A.23} this operator may be put in the form $\mathfrak{Op}^{A_\epsilon}(h_{R})+\mathfrak{Op}^{A_\epsilon}(q_\epsilon^\prime)$ where $q_\epsilon^\prime\in S^0_1(\mathcal X)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$ with $\underset{\epsilon\rightarrow0}{\lim}\,q_\epsilon^\prime=0$ in $S^0_1(\mathcal X)$. If we notice that $h_{NR}\sharp^{B_\epsilon}\big(h_{NR}-{\rm z}\big)^{-1}\in S^0_1(\mathcal X)$ uniformly with respect to $(\epsilon,{\rm z})\in[-\epsilon_0,\epsilon_0]\times i\mathbb{R}$, then we can see that the second term of \eqref{A.53} gives in \eqref{A.49} by magnetic quantization an expression of the form $\mathfrak{Op}^{A_\epsilon}(q_\epsilon^{\prime\prime})$ with $q_\epsilon^{\prime\prime}\in S^0_1(\mathcal X)$ uniformly with respect to $\epsilon\in[-\epsilon_0,\epsilon_0]$ and such that $\underset{\epsilon\rightarrow0}{\lim}\,q_\epsilon^{\prime\prime}=0$ in $S^0_1(\mathcal X)$.
\end{proof}
{\bf Acknowledgements:}
R. Purice acknowledges the CNCSIS support
under the Ideas Programme, PCCE project no. 55/2008 {\it Sisteme
diferen\c{t}iale \^{\i}n analiza neliniar\u{a} \c{s}i aplica\c{t}ii}.
\newpage
\section{Introduction}
\nextext{Throughout cosmic history a wealth of information on the origin and evolution of our Universe has been imprinted on the large scale structure via the gravitational amplification of primordial density perturbations. Harvesting this information from probes of the large scale structure, such as large galaxy surveys, is therefore an important scientific task to further our knowledge and to establish a conclusive cosmological picture. In recent years great advances have been made, both in retrieving huge amounts of data and in increasing sensitivity in galaxy redshift surveys. In particular, the recent galaxy surveys, the 2dF Galaxy Redshift Survey \citep[][]{COLLESS2001} and the Sloan Digital Sky Survey \citep[][]{SDSS7}, provide sufficient redshifts to probe the 3D galaxy distribution on large scales.
In particular, the two point statistics of the matter distribution contains valuable information to test standard inflation and cosmological models, which describe the origin and evolution of all observed structure in the Universe.
Measuring the power-spectrum from galaxy observations has therefore always attracted great interest. Precise determination of the overall shape of the power-spectrum can, for instance, place important constraints on neutrino masses, help to identify the primordial power-spectrum, and break degeneracies for cosmological parameter estimation from CMB data \citep[e.g.][]{hu-98,wmap-spergel,HANNESTAD2003,Efstathiou_2002,PERCIVAL2002,VERDE2003}. In addition, several characteristic length scales have been imprinted on the matter distribution throughout cosmic history, which can serve as new standard rulers to measure the Universe.
A prominent example of these length scales is the sound horizon, which
yields oscillatory features in the power-spectrum, the so-called
baryon \nextext{acoustic} oscillations (BAO) \citep[e.g.][]{SILK1968,PEEBLES1970,SUNYAEV1970}. Since the physics governing these oscillatory features is well understood, precise measurements of the BAO will allow us to establish a new standard ruler to measure the Universe through the distance-redshift relation \citep[][]{BLAKE2003,SEO2003}.
Precision analysis of large scale structure data therefore is a crucial step in developing a conclusive cosmological theory.}
\nextext{Unfortunately, contact between theory and observations cannot be made directly, since observational data is subject to a variety of systematic effects and statistical uncertainties. Such systematics and uncertainties arise either from the observational strategy or are due to intrinsic clustering behavior of the galaxy sample itself \citep[][]{SANCHEZ2008}. Some of the most prominent uncertainties and systematics are:}
\begin{itemize}
\item survey geometry and selection effects
\item close pair incompleteness due to fiber collisions
\item galaxy biases
\item redshift space distortions
\end{itemize}
\nextext{The details of galaxy clustering, and how galaxies trace the underlying density field, are in general very complicated. The bias between galaxies and mass density is most likely non-linear and stochastic, so that the estimated galaxy spectrum is expected to differ from that of the mass \citep[][]{DEKEL1999}. Even in the limit where a linear bias could be employed, it still differs for different classes of galaxies \citep[see e.g.][]{COLE2005}.
In addition, the apparent density field, obtained from redshift surveys, will generally be distorted along the line-of-sight due to the existence of peculiar velocities.}
\nextext{
However, the main cause of systematic uncertainties in large scale power-spectrum estimations is the treatment of the survey geometry \citep[][]{TEGMARK1995,BALLINGER1995}. Due to the survey geometry, the raw power-spectrum estimate has an expectation value which is the true cosmic power-spectrum convolved with the survey mask \citep[][]{COLE2005}. This convolution causes an overall distortion of the power-spectrum shape, and drastically decreases the visibility of the baryonic features.}
\nextext{
The problems mentioned above have been discussed extensively in the literature, and many different approaches to power-spectrum analysis have been proposed.
Some of the main techniques to recover the power-spectrum from galaxy surveys are Fourier transform based, such as the optimal weighting scheme, which assigns a weight to the galaxy fluctuation field in order to reduce the error in the estimated power \citep[see e.g.][]{FELDMAN1994,TEGMARK1995,HAMILTON1997A,YAMAMOTO2003,PERCIVAL2004}. Alternative methods rely on Karhunen-Lo\`{e}ve decompositions \citep[][]{TEGMARK1997,TEGMARK_2004,POPE2004} or on decompositions into spherical harmonics, which are especially suited to address the problem of redshift space distortions \citep[][]{FISHER1994,HEAVENS1995,TADROS1999,PERCIVAL2004,PERCIVAL2005}.
In addition to these deconvolution methods, there exists a variety of likelihood methods to estimate the real space power-spectrum \citep[][]{BALLINGER1995,HAMILTON1997A,HAMILTON1997B,TADROS1999,PERCIVAL2005}. In order to provide not just the maximum likelihood estimate but also conditional errors, \cite{PERCIVAL2005} proposed a Markov Chain Monte Carlo method to map out the likelihood surface.
}
\nextext{
Nevertheless, as the precision of large scale structure experiments has improved, the requirements on the control and characterization of systematic effects, as discussed above, have also steadily increased. It is of critical importance to properly propagate the uncertainties caused by these effects through to the matter power-spectrum and cosmological parameter estimates, in order not to underestimate the final uncertainties and thereby draw incorrect conclusions on the cosmological model.
}
\nextext{We therefore felt inspired to develop a new Bayesian approach to extract information on the two point statistics from a given large scale structure dataset. We prefer Bayesian methods to conventional likelihood methods, as they yield more general and profound statements about measurements. For example, conventional likelihood methods can only answer questions of the form: ``Given the true value \(s\) of a signal, what is the probability distribution of the measured values \(d\)?'' A Bayesian method, on the other hand, answers questions of the type: ``Given the observations \(d\), what is the probability distribution of the true underlying signal \(s\)?'' For this reason, Bayesian statistics answers the underlying question of every measurement problem, namely how to estimate the true value of the signal from observations, while conventional likelihood methods do not \citep[][]{MICHEL1999}.}
\nextext{Since the result of any Bayesian method is a complete probability distribution, such methods permit fully global analyses, taking into account all systematic effects and statistical uncertainties. In particular, here we aim at evaluating the power-spectrum posterior distribution \(\mathcal{P}\left(\{P(k_i)\}|\{d_i\}\right)\), with \(P(k_i)\) being the power-spectrum coefficient of the \(k_i\)th mode and \(d_i=d(\vec{x}_i)\) an observation at position \(\vec{x}_i\) in three dimensional configuration space.}
This probability distribution would then contain all information on the two point statistics supported by the data.
In order to explore this posterior distribution we employ a Gibbs sampling method, previously applied to CMB data analysis \citep[see e.g.][]{WANDELT2004,2004ApJS..155..227E,JEWELL2004}.
\nextext{
Since direct sampling from \(\mathcal{P}\left(\{P(k_i)\}|\{d_i\}\right)\) is impossible or at least difficult, they propose instead to draw samples from the full joint posterior distribution \(\mathcal{P}\left(\{P(k_i)\},\{s_i\}|\{d_i\}\right)\) of the power-spectrum coefficients \(P(k_i)\) and the 3D matter density contrast amplitudes \(s_i\) conditional on a given set of data points \(\{d_i\}\).
This is achieved by iteratively drawing density samples from a Wiener-posterior distribution and power-spectrum samples via an efficient Gibbs sampling scheme (see figure \ref{fig:flowchart} for an illustration). Here, artificial mode coupling, as introduced by survey geometry and selection function, is resolved by solving the Wiener-filtering equation, which naturally regularizes inversions of the observational response operator in unobserved regions.}
In this fashion\nextext{,} we obtain a set of Monte Carlo samples from the joint posterior, which allows us to compute any desired property of the joint posterior density, with the accuracy only limited by the sample size.
In particular\nextext{,} we obtain \nextext{the power spectrum posterior} \(\mathcal{P}\left(\{P(k_i)\}|\{d_i\}\right)\) by simply marginalizing \nextext{the joint posterior} \(\mathcal{P}\left(\{P(k_i)\},\{s_i\}|\{d_i\}\right)\) over the auxiliary density amplitudes \(s_i\), which is trivially achieved by ignoring the \(s_i\) samples.
The Gibbs sampler also offers unique capabilities for propagating systematic uncertainties end-to-end. Any effect, for which we can define a sampling algorithm, either jointly with or conditionally on other quantities, can be propagated seamlessly through to the final posterior.
It is worth noting that our method differs from traditional methods of analyzing galaxy surveys in a fundamental aspect.
Traditional methods consider the analysis task as a set of steps, each of which arrives at intermediate outputs which are then fed as inputs to the next step in the pipeline. Our approach is a truly global analysis, in the sense that the statistics of all science products are computed jointly, respecting and exploiting the full statistical dependence structure between various components.
In this paper we present \textsc{ARES} (Algorithm for REconstruction and Sampling), a computer algorithm to perform a full Bayesian data analysis of 3D redshift surveys.
In section \ref{LSS_SAMPLER} we give an introduction to the general idea of the large scale structure Gibbs sampling approach, followed by sections \ref{MAP_SAMPLER} and \ref{PS_SAMPLER}, where we describe and derive in detail the necessary ingredients to sample the 3D density distribution and the power-spectrum respectively. The choice of the prior and its relevance for the cosmic variance are discussed in section \ref{Prior_and_Variance}.
Details concerning the numerical implementation are discussed in section \ref{NUMERICAL_IMPLEMENTATION}. We then test \textsc{ARES} thoroughly in section \ref{Gaussian_Test_Cases}, particularly focusing on the treatment of survey masks and selection functions. In section \ref{OPERATIONS_GIBBS_SAMPLES} we demonstrate the running median filter, and use it as an example to show how uncertainties can be propagated to all inferences based on the set of Gibbs samples. Finally, we conclude in section \ref{Conclusion} by discussing the results of the method and giving an outlook on future extensions and applications of our method.
\section{Notation}
\label{Notation}
In this section, we describe the basic notation used throughout this work. Let the quantity \(\rho_i=\rho(\vec{x}_i)\) be the field amplitude of the three dimensional field \(\rho(\vec{x})\) at position \(\vec{x}_i\). Then the index \(i\) has to be understood as a multi index, which labels the three components of the position vector:
\begin{equation}
\label{eq:multi_index}
\vec{x}_i =[x^1_i,x^2_i,x^3_i] \, ,
\end{equation}
where \(x^j_i\) is the \(j\)th component of the \(i\)th position vector. Alternatively one can understand the index \(i\) as a set of three indices \(\{r,s,t\}\) so that for an equidistant grid along the three axes the position vector can be expressed as:
\begin{equation}
\label{eq:multi_index_a}
\vec{x}_i =\vec{x}_{r,s,t} = [\Delta x\, r,\Delta y\, s,\Delta z\, t] \, ,
\end{equation}
with \(\Delta x\), \(\Delta y\) and \(\Delta z\) being the grid spacing along the three axes.
With this definition we obtain:
\begin{equation}
\label{eq:multi_index_b}
\rho_i \equiv \rho_{r,s,t} \, .
\end{equation}
Also note that any summation running over the multi index \(i\) is defined as the three sums over the three indices \(r\), \(s\) and \(t\):
\begin{equation}
\label{eq:multi_index_c}
\sum_i \equiv \sum_r \sum_s \sum_t \, .
\end{equation}
Further, we will frequently use the notation \(\{\rho_i\}\), which denotes the set of field amplitudes at different positions \(\vec{x}_i\).
In particular:
\begin{equation}
\label{eq:multi_index_set}
\{\rho_i\} \equiv \{\rho_0, \rho_1, \rho_2, ... ,\rho_{N-1}\} \, ,
\end{equation}
where \(N\) is the total number of position vectors.
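To make the conventions above concrete, the following schematic fragment (a minimal sketch in Python; the grid sizes, spacings and variable names are purely illustrative and are not used elsewhere in this paper) shows the correspondence between the multi index \(i\) and the index triplet \(\{r,s,t\}\) on a regular grid:
\begin{verbatim}
import numpy as np

# illustrative grid dimensions and spacings
Nx, Ny, Nz = 4, 4, 4
dx, dy, dz = 1.0, 1.0, 1.0

# position vectors x_{r,s,t} = [dx*r, dy*s, dz*t]
r, s, t = np.meshgrid(np.arange(Nx), np.arange(Ny), np.arange(Nz),
                      indexing='ij')
positions = np.stack([dx * r, dy * s, dz * t], axis=-1)

# a field rho_i = rho_{r,s,t} stored on the same grid
rho = np.random.normal(size=(Nx, Ny, Nz))

# the sum over the multi index i is the triple sum over (r, s, t)
total = rho.sum()
\end{verbatim}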
\begin{figure*}
\centering
{
\tikzstyle{blank} = [rectangle, fill=white!20,text width=5em, text centered, rounded corners, minimum height=4em]
\tikzstyle{decision} = [diamond, draw, fill=blue!20,text width=4.5em, text badly centered, node distance=3cm, inner sep=0pt]
\tikzstyle{block} = [rectangle, draw, fill=blue!20,text width=7em, text centered, rounded corners, minimum height=4em]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{cloud} = [draw, ellipse,fill=red!20, node distance=3cm, minimum height=2em]
\tikzstyle{clouda} = [draw, ellipse,fill=green!20, node distance=3cm, minimum height=2em]
\begin{tikzpicture}[node distance = 1.5cm, auto]
\node [clouda] (data) {data};
\node [block,below of=data] (DENSSAMPLING) {Wiener-mean \\ + \\ Wiener-variance};
\node [blank,below of=DENSSAMPLING] (blank) {};
\node [block,below of=blank] (PSSAMPLING) {Inverse Gamma sampling};
\node [cloud,left of=blank] (denssample) {3D density sample};
\node [cloud,right of=blank] (pssample) {power-spectrum sample};
\path [line] (data) -- (DENSSAMPLING);
\path [line] (DENSSAMPLING) -| (denssample);
\path [line] (denssample) |- (PSSAMPLING);
\path [line] (PSSAMPLING) -| (pssample);
\path [line] (pssample) |- (DENSSAMPLING);
\end{tikzpicture}
}
\caption{Flow-chart depicting the two step iterative Gibbs sampling procedure.}
\label{fig:flowchart}
\end{figure*}
\section{The Large scale structure Gibbs sampler}
\label{LSS_SAMPLER}
As already described in the introduction, we seek to sample from the joint posterior distribution \(\mathcal{P}\left(\{P(k_i)\},\{s_i\}|\{d_i\}\right)\) of the power-spectrum coefficients \(P(k_i)\) and the 3D matter density contrast amplitudes \(s_i\) given a set of observations \(\{d_i\}\).
In principle, this joint posterior distribution could be mapped out over a grid in the multi-dimensional space of the signal amplitudes \(s_i\) and power-spectrum coefficients \(P(k_i)\).
But since the number of grid points required for such an analysis scales exponentially with the number of free parameters, this approach cannot be realized efficiently.
For this reason, we propose a Gibbs sampling approach to this problem.
The theory of Gibbs sampling \citep{Gelfand_1990,Tanner_1996,OHagan} states that if it is possible to sample from the conditional densities \({\cal P}(\{s_i\}|\{P(k_i)\},\{d_i\})\) and \({\cal P}(\{P(k_i)\}|\{s_i\},\{d_i\})\), then iterating the following two sampling equations will, after an initial burn-in period, lead to samples from the joint posterior \({\cal P}(\{s_i\},\{P(k_i)\}|\{d_i\})\):
\begin{equation}
\label{eq:signal_sampling}
\{s_i\}^{(j+1)}\curvearrowleft {\cal P}(\{s_i\}|\{P(k_i)\}^{(j)},\{d_i\}) \, ,
\end{equation}
\begin{equation}
\label{eq:Pspec_sampling}
\{P(k_i)\}^{(j+1)}\curvearrowleft {\cal P}(\{P(k_i)\}|\{s_i\}^{(j+1)},\{d_i\}) \, ,
\end{equation}
where the symbol \(\curvearrowleft\) denotes a random draw from the probability density on its right.
Once a set of samples from \({\cal P}(\{s_i\},\{P(k_i)\}|\{d_i\})\) has been obtained, the properties of this probability density can be summarized in terms of any preferred statistic, such as its multivariate mean, mode or variance.
As our approach probes the joint distribution, we are able to quantify joint uncertainties of the signal amplitudes and the power-spectrum conditional just on the data. For this reason, the Gibbs sampling approach should not be considered as yet another maximum likelihood technique, although it certainly is able to produce such an estimate.
In the following we are going to describe the necessary methods and procedures required for iterating the processes \ref{eq:signal_sampling} and \ref{eq:Pspec_sampling} of signal and power-spectrum sampling.
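As a simple illustration of this iterative scheme, consider the following self-contained toy sketch (in Python). It is not the \textsc{ARES} implementation: it assumes a trivial response operator, uncorrelated noise and a single flat power amplitude \(P\), so that the Wiener mean, the fluctuation term and the inverse-Gamma draw (cf. the flow chart in figure \ref{fig:flowchart}) can be written in closed form; its only purpose is to show how the two conditional sampling steps are iterated.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# toy data model: d_i = s_i + eps_i with s_i ~ N(0, P_true), eps_i ~ N(0, N)
n_pix, P_true, N = 1000, 2.0, 0.5
d = rng.normal(0.0, np.sqrt(P_true), n_pix) \
    + rng.normal(0.0, np.sqrt(N), n_pix)

P = 1.0                      # initial guess for the flat power amplitude
chain = []

for j in range(2000):
    # signal sampling step: Wiener mean plus a fluctuation with correct variance
    var = P * N / (P + N)
    mean = (P / (P + N)) * d
    s = mean + rng.normal(0.0, np.sqrt(var), n_pix)

    # power sampling step: with a Jeffreys prior 1/P the conditional is
    # an inverse-Gamma distribution, P | s ~ InvGamma(n_pix/2, sum(s^2)/2)
    P = 0.5 * np.sum(s ** 2) / rng.gamma(n_pix / 2.0)

    chain.append(P)

# after an initial burn-in period the chain elements are samples of P
print("posterior mean of P:", np.mean(chain[500:]))
\end{verbatim}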
\section{Sampling the signal maps}
\label{MAP_SAMPLER}
Assuming a Gaussian signal posterior distribution \({\cal P}(\{s_i\}|\{P(k_i)\},\{d_i\})\), the task of drawing a random signal sample can be split into two steps.
First, we estimate the maximum a posteriori values for the signal amplitudes \(s_i\), which in the Gaussian case coincide with the mean values.
Then a fluctuation term, being consistent with the correct covariance, is added to the mean signal. The sum of the mean and the fluctuation term will then yield a sample from the conditional posterior.
The most challenging procedure in this signal sampling step is to calculate the maximum a posteriori values for the signal amplitudes \(s_i\).
Assuming a Gaussian posterior will directly lead to a Wiener filtering procedure, described below.
However, this method requires inverting huge matrices, which consist of the sum of the inverse signal \(\mat{S}\) and inverse noise \(\mat{N}\) covariance matrices. The matrix inversion is a numerically very demanding step, and at the same time presents the bottleneck of our method, as it determines the computational speed with which a signal sample can be produced.
The efficient implementation of this matrix inversion step, as described by \cite{Kitaura}, allows for the production of many thousands of samples in computationally feasible times.
In the following sections we will describe the details of the signal sampling procedure.
\subsection{The Wiener filter}
\label{WIENER_FILTER}
As already described above, the main task for the signal sampling step is to derive the maximum a posteriori values for the signal amplitudes \(s_i\).
According to Bayes' theorem, the conditional signal posterior can be written as the product of a signal prior and a likelihood, normalized by the so-called evidence. Further, here we will use the signal covariance matrix \(\mat{S}\) rather than the power-spectrum \(\{P(k_i)\}\). It is well known that the power-spectrum is just the Fourier transform of the signal covariance in configuration space. Since the Fourier transform is a basis transformation with a unitary transformation matrix, the signal covariance matrix \(\mat{S}\) and the power-spectrum \(\{P(k_i)\}\) can be used interchangeably for a normalized Fourier transform (see section \ref{Power_spectrum_sampling} and Appendix \ref{CHANGE_TO_FFT_REPRESENTATION} for more details).
We can therefore write the signal posterior as:
\begin{eqnarray}
\label{eq:signal_posterior}
{\cal P}(\{s_i\}|\mat{S},\{d_i\})&=&\frac{{\cal P}(\mat{S})}{{\cal P}(\{d_i\},\mat{S})}\,{\cal P}(\{s_i\}|\mat{S})\,{\cal P}(\{d_i\}|\{s_i\},\mat{S}) \nonumber\\
&=& \frac{1}{{\cal P}(\{d_i\}|\mat{S})}\,{\cal P}(\{s_i\}|\mat{S})\,{\cal P}(\{d_i\}|\{s_i\}) \, ,
\end{eqnarray}
where we assume that the data amplitudes \(d_i\) are conditionally independent of the signal covariance matrix \(\mat{S}\), once the signal amplitudes \(s_i\) are given.
Following \cite{BARDEEN1986}, we describe the signal prior for the large scale matter distribution as a multivariate Gaussian, with zero mean and the signal covariance \(\mat{S}\). We can then write:
\begin{equation}
\label{eq:signal_prior}
{\cal P}(\{s_i\}|\mat{S})=\frac{1}{\sqrt{\rm{det}\left(2\pi \mat{S}\right)}} e^{-\frac{1}{2}\sum_i\sum_j \,s_i\, \mat{S_{ij}}^{-1}\,s_j} \, .
\end{equation}
The signal covariance matrix \(\mat{S}\) has an especially appealing form in Fourier space.
It is well known that in a homogeneous and isotropic universe the Fourier transform of the signal covariance is a diagonal matrix, with the diagonal elements being the power-spectrum. Hence, we can express the Fourier representation of the signal covariance as:
\begin{equation}
\label{signal_covarianceFS}
\hat{\hat{\mat{S}}}_{kl}= \delta^K_{kl}\, P_k \,
\end{equation}
where the \(\,\hat{}\,\)-symbol denotes a Fourier transform, \(\delta^K_{kl}\) is the Kronecker delta and \(P_k=P(k_k)\) is the power-spectrum coefficient at the Fourier mode \(\vec{k}_k\) in three dimensional pixel space \citep[see e.g.][]{PADMANABHAN1993,2004LRR.....7....8L}.
The choice of the Gaussian prior can be justified by inflationary theories, which predict the matter field amplitudes to be Gaussian distributed in the linear regimes at scales \(k \lesssim 0.15 \, \rm{h/Mpc}\) \citep[][]{PEACOCK1994,PERCIVAL2001}.
At nonlinear scales the Gaussian prior does not represent the full statistical behavior of the matter field anymore. During nonlinear gravitational structure formation the statistics of the initial density field evolve from a Gaussian distribution towards a log-normal distribution, as commonly assumed in the literature \citep[][]{COLES1991,COLOMBI1994,KAYO2001}.
However, note that in this case the Gaussian prior still describes the two point statistics of the underlying density field even in the nonlinear regime. The Gaussian prior should therefore be regarded as our a priori knowledge of the matter distribution, which is only formulated up to two point statistics. Next, we discuss the likelihood \({\cal P}(\{d_i\}|\{s_i\})\) given in equation (\ref{eq:signal_posterior}).
As we seek to recover the maximum a posteriori signal \(s_i\) from the set of observations \(d_i\), we must assume a model which relates these two quantities. The most straightforward data model is linear, and can be written as:
\begin{equation}
\label{eq:WIENER_DATA_MODEL}
d_i=\sum_k \, K_{ik}\, s_k + \epsilon_i \, ,
\end{equation}
where \(K_{ij}\) is an observation response operator and \(\epsilon_i\) is an additive noise contribution, which will be defined in more detail in the next section.
If we assume the noise \(\epsilon_i\) to be Gaussian distributed, with zero mean and covariance \(\mat{N}\), we can express the likelihood as:
\begin{equation}
\label{signal_likelihood}
{\cal P}(\{d_i\}|\{s_i\})=\frac{1}{\sqrt{\rm{det}\left(2\pi \mat{N}\right)}} e^{-\frac{1}{2}\left(\sum_i\sum_j \,\left[d_i-\sum_k \, K_{ik}\, s_k\right]\, \mat{N_{ij}}^{-1}\,\left[d_j-\sum_l \, K_{jl}\, s_l\right]\right)} \, ,
\end{equation}
where we simply inserted the data model given in equation (\ref{eq:WIENER_DATA_MODEL}) into the Gaussian noise distribution.
With these definitions the signal posterior is a multivariate Gaussian distribution and can be written as:
\begin{eqnarray}
\label{eq:Gaussian_signal_posterior}
{\cal P}(\{s_i\}|\mat{S},\{d_i\})&\propto& e^{-\frac{1}{2}\left(\sum_i\sum_j \,s_i\, \mat{S_{ij}}^{-1}\,s_j + \,\left[d_i-\sum_k \, K_{ik}\, s_k\right]\, \mat{N_{ij}}^{-1}\,\left[d_j-\sum_l \, K_{jl}\, s_l\right]\right)}\, , \nonumber \\
\end{eqnarray}
where we omitted the normalization factor, which is of no interest in the following.
Note that omitting the normalization of the likelihood requires the additive noise term to be independent of any signal contribution, as otherwise the noise covariance matrix would carry signal information and could not be neglected in the following. This assumption, however, is in general not true for Poissonian noise, as described in the next section.
The maximum of this signal posterior can then easily be found by either completing the square in the exponent of equation (\ref{eq:Gaussian_signal_posterior}), or by extremizing \({\cal P}(\{s_i\}|\mat{S},\{d_i\})\) with respect to the signal amplitudes \(s_i\).
The latter approach allows us to directly read off the Wiener filter equation from equation (\ref{eq:Gaussian_signal_posterior}), by simply differentiating the exponent with respect to \(s_i\) and setting the result to zero.
The result is the famous Wiener filter equation which is given as:
\begin{eqnarray}
\label{eq:WIENER_FILTER_EQUATION}
\sum_j \left[ \mat{S}^{-1}_{ij} + \sum_m \sum_l \mat{K}_{mi} \mat{N}^{-1}_{ml} \mat{K}_{lj} \right] \, m_j = \sum_m \sum_l \mat{K}_{mi} \mat{N}^{-1}_{ml} d_l \, ,
\end{eqnarray}
where we denoted the variable \(m_j\) as a Wiener mean signal amplitude, to clarify that this reconstruction is the mean and not a typical sample of the distribution.
The solution of this equation requires to invert the matrix:
\begin{equation}
\label{eq:LSS_Posterior_1}
\mat{D}_{ij} = \mat{S}^{-1}_{ij} + \sum_m \sum_l \mat{K}_{mi} \mat{N}^{-1}_{ml} \mat{K}_{lj} \, ,
\end{equation}
which leads to the solution for the signal amplitudes
\begin{eqnarray}
\label{eq:WIENER_SOLUTION}
m_i = \sum_j \mat{D}^{-1}_{ij}\sum_m \sum_l \mat{K}_{mj} \mat{N}^{-1}_{ml} d_l \, .
\end{eqnarray}
This result demonstrates that estimating the maximum a posteriori values \(m_i\) for the signal amplitudes \(s_i\) involves inversions of the Wiener filter operator \(\mat{D}\).
Therefore, the signal-sampling operation is by far the most demanding step of our Gibbs sampling scheme, as it requires the solution of a very large linear system.
In practice, this corresponds to inverting matrices of order \(\sim 10^6 \times 10^6\) or larger, which is clearly not computationally feasible through brute-force methods. Matrix inversion algorithms based on standard linear algebra methods have a numerically prohibitive \(\mathcal{O}(N_{pix}^3)\) scaling for the transformation to the eigenspace of the system, which bars direct sampling from the signal posterior.
For this reason, \cite{Kitaura} proposed an operator-based inversion technique that allows for a computationally efficient solution of the Wiener filter equation in three dimensional space.
In this implementation the system of equations (\ref{eq:WIENER_SOLUTION}) is solved by means of conjugate gradients (CG). The computational scaling of the method is thereby reduced to that of the most expensive operation required to apply the operator on the right-hand side of the equations, which in our case is the fast Fourier transform, scaling as \(\mathcal{O}(N_{pix}\,\log(N_{pix}))\).
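To make this operator-based strategy concrete, the following minimal Python sketch (an illustration only, not the \textsc{ARGO}/\textsc{ARES} implementation; all function and variable names, the unit bias \(B_{ij}=\delta^K_{ij}\) and the diagonal noise covariance are our simplifying assumptions) solves equation (\ref{eq:WIENER_FILTER_EQUATION}) with conjugate gradients, applying \(\mat{S}^{-1}\) in Fourier space and \(\mat{K}^{T}\mat{N}^{-1}\mat{K}\) in configuration space:
\begin{verbatim}
import numpy as np

def apply_D(s, P_k, mask, sel, N_inv):
    """Apply D = S^{-1} + K^T N^{-1} K to a real-space field s.

    P_k       : power spectrum on the FFT grid (must be strictly positive,
                e.g. regularize the k = 0 mode)
    mask, sel : sky mask M_i and selection function F_i (K = M F, unit bias)
    N_inv     : inverse noise variance per voxel (diagonal N)
    """
    Sinv_s = np.real(np.fft.ifftn(np.fft.fftn(s) / P_k))  # S^{-1} s in Fourier space
    K = mask * sel
    return Sinv_s + K * (N_inv * (K * s))                  # + K^T N^{-1} K s

def wiener_mean(d, P_k, mask, sel, N_inv, n_iter=500, tol=1e-8):
    """Conjugate gradient solution of D m = K^T N^{-1} d (Wiener mean)."""
    b = (mask * sel) * (N_inv * d)
    m = np.zeros_like(d)
    r = b - apply_D(m, P_k, mask, sel, N_inv)
    p = r.copy()
    rs = np.sum(r * r)
    for _ in range(n_iter):
        Dp = apply_D(p, P_k, mask, sel, N_inv)
        alpha = rs / np.sum(p * Dp)
        m += alpha * p
        r -= alpha * Dp
        rs_new = np.sum(r * r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m
\end{verbatim}
FFT normalization conventions are glossed over in this sketch; the essential point is that the operator \(\mat{D}\) is never stored explicitly, only applied to vectors.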
\subsection{The galaxy data model}
\label{DATA_MODEL}
In order to adapt the Wiener filter procedure to the specific application to galaxy observations, we now present the galaxy data model together with the corresponding Poissonian noise covariance matrix.
It is possible to describe the observed galaxy distribution as a realization of an inhomogeneous Poissonian process \citep[][]{MARTINEZ2002}.
We can therefore assume the observed galaxy numbers \(N^{O}_i=N^{O}(\vec{x}_i)\) at position \(\vec{x}_i\) in three dimensional configuration space to be drawn from a Poissonian distribution \citep[][]{MARTINEZ2002,KITAURA2009}:
\begin{equation}
\label{eq:Poissonian}
N^{O}_i\curvearrowleft {\mathcal P}(N^{O}_i|\lambda^{O}_i)= \frac{{\lambda^{O}_i}^{N^{O}_i} e^{-\lambda^{O}_i}}{{N^{O}_i}!} \, ,
\end{equation}
where the arrow denotes a random draw from the probability distribution and \(\lambda^{O}_i\) is the mean observable galaxy number at position \(\vec{x}_i\).
We can then write the observed galaxy numbers at discrete positions as:
\begin{equation}
\label{eq:Observed_gal_num}
N^{O}_i= \langle N^{O}_i \rangle + \epsilon^{O}_i = \lambda^{O}_i + \epsilon^{O}_i \, ,
\end{equation}
where the noise term \(\epsilon^{O}_i\) denotes the difference between the observed galaxy number and the mean observable galaxy number.
The Poissonian noise covariance matrix can then easily be obtained by:
\begin{equation}
\label{eq:Poissonian_Noise_Cov}
N^{P}_{ij} = \langle \epsilon^{O}_i \epsilon^{O}_j \rangle = \langle [N^{O}_i -\langle N^{O}_i \rangle][N^{O}_j -\langle N^{O}_j \rangle] \rangle=\delta^K_{ij}\langle N^{O}_i \rangle =\delta^K_{ij}\,\lambda^{O}_i \, ,
\end{equation}
where we simply calculated the Poissonian variance of the observed galaxy number, assuming the galaxies to be independent and identically distributed (i.i.d.).
The mean observable galaxy number can be related to the true mean galaxy number \(\lambda_i\) by applying the observation response operator \(R_{ij}\) as:
\begin{equation}
\label{eq:True_gal_num}
\lambda^{O}_{i} = \sum_j\, R_{ij}\,\lambda_j\, .
\end{equation}
The true mean galaxy number, on the other hand, can be related to the dark matter overdensity field, the signal \(s_i\), by introducing a physical model in the form of a bias operator \(B_{ij}\), e.g. a scale-dependent bias:
\begin{equation}
\label{eq:True_signal}
\lambda_i = \bar{\lambda} \left(1+\sum_j B_{ij}\, s_j \right) \, .
\end{equation}
By inserting equations (\ref{eq:True_gal_num}) and (\ref{eq:True_signal}) into equation (\ref{eq:Observed_gal_num}) and applying trivial algebraic manipulations, we obtain the data model:
\begin{eqnarray}
\label{eq:data_model}
d_i &=& \frac{N^{O}_i}{\bar{\lambda}}-\sum_j\, R_{ij}= \sum_j\, R_{ij} \sum_k \, B_{jk}\, s_k + \frac{\epsilon^{O}_i}{\bar{\lambda}} \, .
\end{eqnarray}
For the case of galaxy redshift surveys the response operator \(R_{ij}\) is the product of the sky mask and the selection function, which are both local in configuration space, and hence the response operator reduces to:
\begin{equation}
\label{eq:Response_operator}
R_{ij} = \delta^K_{ij} M_i\,F_i \, ,
\end{equation}
where \(M_i\) is the value of the sky mask and \(F_i\) is the value of the selection function at position \(i\).
We therefore arrive at the data model already described in equation (\ref{eq:WIENER_DATA_MODEL}), which reads:
\begin{eqnarray}
\label{eq:data_model_a}
d_i &=& M_i\,F_i \sum_k \, B_{ik}\, s_k + \frac{\epsilon^{O}_i}{\bar{\lambda}} \nonumber \\
&=& \sum_k \, K_{ik}\, s_k + \epsilon_i \, ,
\end{eqnarray}
where we introduced the effective observation response operator \(K_{ij}=M_i\,F_i\, B_{ij}\) and the noise contribution \( \epsilon_i= \epsilon^{O}_i/\bar{\lambda}\).
This is the galaxy data model, which we derived under the assumption of a Poissonian distribution of galaxies.
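For illustration, and under the simplifying assumption of a unit bias operator \(B_{ij}=\delta^K_{ij}\), the construction of the data vector from gridded galaxy counts could be sketched as follows (all names are ours, not those of any existing implementation):
\begin{verbatim}
import numpy as np

def build_data_vector(N_obs, mask, sel, lam_bar):
    """d_i = N^O_i / lambda_bar - M_i F_i   (unit bias assumed)."""
    return np.asarray(N_obs, dtype=float) / lam_bar - mask * sel
\end{verbatim}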
The Wiener filter operator requires the definition of the noise covariance matrix \(\mat{N}\), which for the Poissonian noise can be expressed as:
\begin{equation}
\label{Noise_Cov_def}
N_{ij} = \langle \epsilon_i \epsilon_j \rangle= \frac{\langle \epsilon^{O}_i \epsilon^{O}_j \rangle}{\bar{\lambda}^2}=\delta^K_{ij}\,\frac{\lambda^{O}_i}{\bar{\lambda}^2} \, ,
\end{equation}
where we used the Poissonian noise covariance matrix given in equation (\ref{eq:Poissonian_Noise_Cov}).
However, inserting equations (\ref{eq:True_gal_num}) and (\ref{eq:True_signal}) yields the noise covariance matrix:
\begin{equation}
\label{Noise_Cov}
\mat{N}_{ij} =\delta^K_{ij}\,\frac{1}{\bar{\lambda}} \,\left [\sum_k\mat{R}_{ik} \left(1+\sum_{l}\mat{B}_{kl}\,s_l\right)\right]\, ,
\end{equation}
which immediately reveals that there is a correlation between the underlying signal amplitudes \(s_i\) and the level of shot noise produced by the discrete distribution of galaxies \citep[see e.g.][]{1998ApJ...503..492S}.
Nevertheless, as pointed out in the previous section, the Wiener filter relies on the fact that the additive noise contribution is uncorrelated with the signal.
Hence, we have to approximate the noise covariance as uncorrelated with the signal, although it may still have spatial structure.
Therefore, we provide two approaches to effectively approximate the noise covariance matrix given in equation (\ref{Noise_Cov}).
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture1}}}
\caption{Slices through signal sample produced in one Gibbs sampling step. The left panels (a,d) show the Wiener filtered mean signal, panels (b,e) present the fluctuation term, and the right panels show the full, noiseless Gibbs sample. The color map encodes the amplitude of the density contrast.}
\label{fig:WIENER_VARIANCE}
\end{figure*}
In the first approach we calculate an effective noise covariance matrix by averaging the noise covariance matrix given in equation (\ref{Noise_Cov}) over the signal.
We then obtain:
\begin{eqnarray}
\label{Noise_Cov_av}
\bar{\mat{N}}_{ij} &= & \langle \mat{N}_{ij} \rangle _{s} \nonumber \\
&=& \delta^K_{ij}\,\frac{1}{\bar{\lambda}} \,\left [\sum_k\mat{R}_{ik} \left(1+\sum_{l}\mat{B}_{kl}\,\langle s_l\rangle _{s} \right)\right]\nonumber \\
&=& \delta^K_{ij}\,\frac{1}{\bar{\lambda}} \,\left [\sum_k\mat{R}_{ik}\right]\, ,
\end{eqnarray}
where we used the fact that the ensemble mean of the signal amplitudes for the density contrast vanishes. Note that this model also arises when pursuing a least squares approach to matter field reconstructions rather than the Bayesian approach described in this work \citep[for details see][]{Kitaura}.
In the other approach we introduce a noise structure function \(n^{SF}_{i}\) given as:
\begin{equation}
\label{eq:noisestructurefunction}
n^{SF}_{i}=\frac{\lambda^{O}_i}{\bar{\lambda}^2} \, .
\end{equation}
With this definition the noise is approximated as being uncorrelated with the signal, but nonuniform. The noise covariance matrix then reads:
\begin{equation}
\label{Noise_Cov_SF}
\mat{N}^{SF}_{ij} = \delta^K_{ij}\,n^{SF}_{i} \, .
\end{equation}
In order to use this noise structure function we have to estimate \(\lambda^{O}_i\) from the observed galaxy numbers \(N^{O}_i\).
By applying Bayes' theorem to the Poissonian distribution given in relation (\ref{eq:Poissonian}) we obtain:
\begin{equation}
\label{Noise_Cov_SF_PDF}
{\mathcal P}(\lambda^{O}_i|N^{O}_i) = {\mathcal P}(\lambda^{O}_i)\,\frac{{\mathcal P}(N^{O}_i|\lambda^{O}_i)}{{\mathcal P}(N^{O}_i)} \, .
\end{equation}
In the absence of any further a priori knowledge on \(\lambda^{O}_i\) we assume a flat prior and search for the maximum of:
\begin{equation}
\label{eq:Noise_Cov_SF_PDF_a}
{\mathcal P}(\lambda^{O}_i|N^{O}_i) = \frac{{\lambda^{O}_i}^{N^{O}_i} e^{-\lambda^{O}_i}}{\Gamma (N^{O}_i+1)} \, ,
\end{equation}
which is normalized to yield unity when integrated over all \({\lambda^{O}_i}\).
The noise structure function \(n^{SF}_{i}\) can then be estimated by searching the most likely value for \(\lambda^{O}_i\) from equation (\ref{eq:Noise_Cov_SF_PDF_a}). This yields:
\begin{equation}
\label{NOISE_SF_MAX}
n^{SF}_{i} = \frac{N^{O}_i}{\bar{\lambda}^2} \, .
\end{equation}
Another estimator for \(n^{SF}_{i}\) is based on evaluating the mean of the probability distribution given in equation (\ref{eq:Noise_Cov_SF_PDF_a}).
The ensemble mean is calculated as:
\begin{equation}
\label{NOISE_SF_MEAN_a}
\langle \lambda^{O}_i \rangle = \int_0^{\infty} d\lambda^{O}_i \lambda^{O}_i \frac{{\lambda^{O}_i}^{N^{O}_i} e^{-\lambda^{O}_i}}{\Gamma (N^{O}_i+1)}=N^{O}_i+1 \, .
\end{equation}
In this case the noise structure function \(n^{SF}_{i}\) can be written as:
\begin{equation}
\label{NOISE_SF_MEAN_b}
n^{SF}_{i} = \frac{N^{O}_i+1}{\bar{\lambda}^2} \, .
\end{equation}
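For concreteness, both estimates of the noise structure function amount to a few lines of code (a sketch; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def noise_structure_function(N_obs, lam_bar, use_mean=False):
    """n^SF_i from the observed counts N^O_i.

    use_mean=False : posterior maximum,  n^SF = N^O / lambda_bar^2
    use_mean=True  : posterior mean,     n^SF = (N^O + 1) / lambda_bar^2
    """
    N_obs = np.asarray(N_obs, dtype=float)
    lam_est = N_obs + 1.0 if use_mean else N_obs
    return lam_est / lam_bar**2
\end{verbatim}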
A more thorough discussion on Poissonian noise models and their numerical implications for matter field reconstructions can be found in \cite{KITAURA2009}.
\subsection{Drawing signal samples}
\label{DRAWING_SIGNAL_SAMPLES}
In the previous sections, we described the Wiener filter and the galaxy data model, which are required to estimate the mean of the signal posterior.
However, this mean signal is not yet a sample from the signal posterior, nor does it represent a physical density field, as it lacks power in the low signal-to-noise regions.
To create a true sample from the signal posterior, one must therefore add a fluctuation term \(y_i\), which compensates for the power lost due to noise filtering.
The signal sample can then be written as the sum of the signal mean and a fluctuation term:
\begin{equation}
\label{eq:AUGMENT_EQUATION}
s_i = m_i + y_i \, .
\end{equation}
In our approach we realize the fluctuation term by generating a mock signal \(s_i^*\) and a mock observation \(d_i^*\) consistent with the data model given in equation (\ref{eq:data_model_a}). This kind of mock observation generation is well known in the literature and has been applied to various scientific applications, for instance the generation of constrained initial conditions for N-body simulations \citep[see e.g.][]{Bertschinger_1987,Hoffman_1991ApJ,Ganon_1993,Kitaura}. The fluctuation term can then simply be calculated as:
\begin{equation}
\label{eq:FLUCTUATION_TERM}
y_i= s_i^*-\sum_j \mat{D}^{-1}_{ij}\sum_m \sum_l \mat{K}_{mj} \mat{N}^{-1}_{ml} d_l^* \, .
\end{equation}
The interpretation of this equation is simple. In the high signal-to-noise regime, the Wiener filter is nearly a pass-through operator, meaning the reconstructed signal is nearly identical to the true underlying signal. Therefore, as the variance is low, the fluctuation term tends towards zero.
In the low signal-to-noise regime, on the other hand, the Wiener filter blocks the data, and essentially no signal can be reconstructed. The fluctuation term will therefore be nearly identical to the underlying mock signal \(s_i^*\).
In this fashion we add the correct power to the Wiener mean reconstruction. The effect of adding the fluctuation term to the Wiener mean is presented in figure \ref{fig:WIENER_VARIANCE}, where we see the Wiener mean reconstruction, the fluctuation term and the sum of both.
The mock data \(d_i^*\) is generated to obey the data model described in equation (\ref{eq:data_model_a}) and the Wiener variance.
We therefore first draw a mock signal \(s_i^*\) with the correct statistics from the multivariate Gaussian signal prior given in equation (\ref{eq:signal_prior}). Such a mock signal is best generated in Fourier space, following the description of \cite{2005astro.ph..6540M}. One first draws two Gaussian random numbers, \(\chi_a\) and \(\chi_b\), with zero mean and unit variance and then calculates the real and imaginary part of the signal in Fourier space as:
\begin{eqnarray}
\label{eq:SIGNAL_GENERATION}
RE(\hat{s}_k) &=& \sqrt{\frac{P_k}{2}}\, \chi_a \nonumber \\
IM(\hat{s}_k) &=& \sqrt{\frac{P_k}{2}}\, \chi_b \, ,
\end{eqnarray}
where \(P_k\) is the power-spectrum coefficient at the \(k\)th position in Fourier space. Note that the mock signal \(s_i^*\) is supposed to be a real quantity, and therefore Hermitian symmetry has to be imposed in Fourier space before performing the inverse Fourier transform \citep[for details see][]{2005astro.ph..6540M}.
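A minimal sketch of this mock-signal generation is given below. Rather than imposing Hermitian symmetry by hand, it draws a white-noise field in configuration space and colors it in Fourier space, which is equivalent to equation (\ref{eq:SIGNAL_GENERATION}) up to FFT normalization conventions (the function name and the assumption that \(P_k\) is tabulated on the full FFT grid are ours):
\begin{verbatim}
import numpy as np

def draw_mock_signal(P_k, shape, rng=np.random.default_rng()):
    """Gaussian random field with power spectrum P_k (up to FFT normalization)."""
    white = rng.standard_normal(shape)        # white noise, unit variance
    s_k = np.sqrt(P_k) * np.fft.fftn(white)   # color the field with sqrt(P_k)
    return np.real(np.fft.ifftn(s_k))         # Hermitian symmetry is automatic
\end{verbatim}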
Next we have to generate the additive noise contribution.
In order to draw a noise term with the correct Poissonian statistics, we first draw a random number \(N^{*}_i\) from the Poissonian distribution:
\begin{equation}
\label{eq:Poissonian_mock}
N^{*}_i\curvearrowleft {\mathcal P}(N^{*}_i|\lambda^{*}_i) \, ,
\end{equation}
where we choose the mean observed galaxy number to be \( \lambda^{*}_i=n^{SF}_{i} \bar{\lambda}^2 \).
According to equations (\ref{eq:Observed_gal_num}) and (\ref{eq:data_model_a}) the mock noise term \(\epsilon_i^*\) can be calculated as:
\begin{equation}
\label{eq:mock_noise}
\epsilon^{*}_i= \frac{N^{*}_i-n^{SF}_{i} \bar{\lambda}^2}{\bar{\lambda}} \, .
\end{equation}
It is clear by construction that this mock noise term has vanishing mean and the correct noise covariance matrix.
Then, according to equation (\ref{eq:data_model_a}) the mock observation is given as:
\begin{equation}
\label{eq:MOCK_DATA}
d_i^* = \sum_k \, K_{ik}\, s_k^* + \epsilon_i^* \, .
\end{equation}
The proof, that the fluctuation term \(y_i\) as generated by equation (\ref{eq:FLUCTUATION_TERM}) truly generates the correct variance is given in Appendix \ref{WIENER_VARIANCE}.
Note, that the application of the Wiener operator is a linear operation, and we can therefore rewrite equation (\ref{eq:AUGMENT_EQUATION}) as:
\begin{eqnarray}
\label{eq:UNBIASED_SIGNAL}
s_i = s^*_i + \sum_j \mat{D}^{-1}_{ij}\sum_m \sum_l \mat{K}_{mj} \mat{N}^{-1}_{ml}\left(d_l -d^*_l\right) \, ,
\end{eqnarray}
where the Wiener operator is applied to the true data \(d_i\) and the mock observation \(d_i^*\) simultaneously. This greatly reduces the CPU time required for the generation of one signal sample.
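Putting these pieces together, one signal-sampling step of the Gibbs scheme could be sketched as follows, reusing the hypothetical helpers \texttt{wiener\_mean} and \texttt{draw\_mock\_signal} introduced in the sketches above (unit bias and a diagonal noise covariance assumed):
\begin{verbatim}
import numpy as np

def draw_signal_sample(d, P_k, mask, sel, N_inv, n_SF, lam_bar,
                       rng=np.random.default_rng()):
    """One sample s = s* + D^{-1} K^T N^{-1} (d - d*), cf. the equation above."""
    s_star = draw_mock_signal(P_k, d.shape, rng)             # mock signal
    lam_star = n_SF * lam_bar**2
    eps_star = (rng.poisson(lam_star) - lam_star) / lam_bar  # mock Poisson noise
    d_star = mask * sel * s_star + eps_star                  # mock observation
    return s_star + wiener_mean(d - d_star, P_k, mask, sel, N_inv)
\end{verbatim}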
\section{Sampling the Power Spectrum}
\label{PS_SAMPLER}
As described above, the signal sampling step provides a noise-less full sky signal sample \(s_i\) consistent with the data.
The next step in the Gibbs sampling iteration scheme requires drawing power-spectrum samples from the conditional probability distribution \({\cal P}(\mat{S}|\{s_i\},\{d_i\})\).
Since in this Gibbs sampling step the perfect sky signal amplitudes \(s_i\) are known, the power-spectrum is conditionally independent of the data amplitudes \(d_i\).
Hence, in this Gibbs sampling step, we can sample the power-spectrum from the probability distribution \({\cal P}(\mat{S}|\{s_i\})\).
In the following we will show that the power-spectrum can easily be drawn from an inverse gamma distribution.
\subsection{Drawing power-spectrum samples}
\label{Power_spectrum_sampling}
According to Bayes' theorem, we can rewrite the conditional probability \({\cal P}(\mat{S}|\{s_i\})\) as:
\begin{equation}
\label{eq:COND_INDEP_PS}
{\cal P}(\mat{S}|\{s_i\}) = \frac{{\cal P}(\mat{S})}{{\cal P}(\{s_i\})} {\cal P}(\{s_i\}|\mat{S}) \, ,
\end{equation}
where \({\cal P}(\mat{S})\) is the prior for the signal covariance, \({\cal P}(\{s_i\}|\mat{S} )\) is given by equation (\ref{eq:signal_prior}) and \({\cal P}(\{s_i\})\) is a normalization constant in this Gibbs sampling step.
More specifically, we are interested in the set of matrix coefficients \(\{\mat{S}_{ij}\}\) of the covariance matrix \(\mat{S}\).
As already pointed out in section \ref{WIENER_FILTER}, the signal covariance matrix of a homogeneous and isotropic universe has an especially appealing form in Fourier space, where it takes a diagonal form.
In our application the real space covariance matrix coefficients \(\{\mat{S}_{ij}\}\) are related to their Fourier representation via the fast Fourier transform, as defined in Appendix \ref{Discrete_Fourier_transformation}. We can therefore write:
\begin{eqnarray}
\label{eq:FT_SIGNAL_COV}
\mat{S}_{ij} &=& C^2\,\sum^{N-1}_{k=0} \sum^{N-1}_{l=0} e^{2\pi i k \frac{\sqrt{-1}}{N}}\, \hat{\hat{\mat{S}}}_{kl}\, e^{-2\pi j l \frac{\sqrt{-1}}{N}} \nonumber \\
&=& C^2\,\sum^{N-1}_{k=0} \sum^{N-1}_{l=0} e^{2\pi \frac{\sqrt{-1}}{N}(i\,k - j\,l )}\, \hat{\hat{\mat{S}}}_{kl} \, .
\end{eqnarray}
Then we can express the conditional distribution for the Fourier signal covariance coefficients \(\hat{\hat{\mat{S}}}_{kl}\) as:
\begin{equation}
\label{eq:COND_INDEP_PS_FT}
{\cal P}\left(\{\hat{\hat{\mat{S}}}_{kl}\}|\{s_i\}\right) = {\cal P}\left(\{\mat{S}_{ij}\}|\{s_i\}\right) \left |\frac{\partial\{\mat{S}_{ij}\}}{\partial\{\hat{\hat{\mat{S}}}_{kl}\}} \right | \, ,
\end{equation}
where
\begin{equation}
\label{eq:JACOBIAN_DET}
\left |\frac{\partial\{\mat{S}_{ij}\}}{\partial\{\hat{\hat{\mat{S}}}_{kl}\}} \right | = \left | \rm{det}(\mat{\mathcal{J}}_{(ij)\,(kl)}) \right |\, ,
\end{equation}
is the Jacobian determinant for this coordinate transformation. As the discrete Fourier transform is proportional to a unitary matrix, this Jacobian determinant only amounts to a normalization constant, as has been demonstrated in Appendix \ref{CHANGE_TO_FFT_REPRESENTATION}.
With this definition we can rewrite the conditional probability in equation (\ref{eq:COND_INDEP_PS}), by replacing all the real space covariance matrix coefficients \(\mat{S}_{ij}\) by their Fourier representation \(\hat{\hat{\mat{S}}}_{kl}\), and normalizing it with the constant obtained from the coordinate transformation.
We can therefore write:
\begin{eqnarray}
\label{eq:COND_DENS_PS_FT}
{\cal P}(\{\hat{\hat{\mat{S}}}_{kl}\}|\{s_i\}) &=& \frac{{\cal P}(\{\hat{\hat{\mat{S}}}_{kl}\})}{C^{N^2}\,{\cal P}(\{s_i\})} {\cal P}(\{s_i\}|\{\hat{\hat{\mat{S}}}_{kl}\}) \nonumber \\
&=&\frac{{\cal P}(\{\hat{\hat{\mat{S}}}_{kl}\})}{C^{N^2}\,\sqrt{\rm{det}\left(2\pi \hat{\hat{\mat{S}}}\right)}\,{\cal P}(\{s_i\})} e^{-\frac{C^2}{2\,\hat{C}^2}\sum^{N-1}_{k=0} \sum^{N-1}_{l=0} \hat{s}^*_k \, \hat{\hat{\mat{S}}}^{-1}_{kl} \,\hat{s}_l\,} \, , \nonumber \\
\end{eqnarray}
where we used the discrete Fourier transform definition, given in Appendix \ref{Discrete_Fourier_transformation}, to replace the real space signal amplitudes \(s_i\) by their Fourier counterparts \(\hat{s}_k\).
Introducing equation (\ref{signal_covarianceFS}) then allows us to rewrite equation (\ref{eq:COND_DENS_PS_FT}) in terms of the power-spectrum coefficients \(P_k\) as:
\begin{equation}
\label{eq:COND_DENS_PS_COEFF_FT}
{\cal P}(\{P_k\}|\{s_i\}) =\frac{{\cal P}(\{P_k\})}{C^{N^2}\,{\cal P}(\{s_i\})} \prod_{k'=0}^{N-1} \left(2\pi\,P_{k'}\right)^{-1/2} e^{-\frac{C^2}{2\,\hat{C}^2}\sum^{N-1}_{k=0} \frac{|\hat{s}_k|^2}{P_k} } \, ,
\end{equation}
where the determinant factorizes due to the diagonal form of the signal covariance matrix in Fourier space.
Note that due to isotropy the power-spectrum is independent of direction in Fourier space, meaning the power-spectrum coefficients only depend on the modulus of the mode vector \(\vec{k}_k\):
\begin{equation}
P_k=P(\vec{k}_k)=P(|\vec{k}_k|)\, .
\end{equation}
For this reason, the angular dependence in Fourier space can be summed over.
To do so we remark that the mode vector \(\vec{k}_k\), as a geometrical object, will not change if we express it in the basis of Cartesian coordinates \(\vec{k_k}=\vec{k_k}(k^1_k,k^2_k,k^3_k)\), or if we describe it in the basis of spherical coordinates \(\vec{k_k}=\vec{k_k}(|\vec{k}_k|,\varphi_k,\vartheta_k)\).
We can therefore split the multi index summation into the summation over the three spherical coordinates as:
\begin{eqnarray}
\label{eq:SUM_SPHERICAL}
\frac{C^2}{\hat{C}^2}\sum^{N-1}_{k=0} \frac{|\hat{s}_k|^2}{P_k} &=& \sum_{|\vec{k}_k|} \frac{1}{P(|\vec{k}_k|)}\sum_{\varphi_k} \sum_{\vartheta_k} \frac{C^2}{\hat{C}^2} \left|\hat{s}\left(|\vec{k_k}|,\varphi_k,\vartheta_k\right)\right|^2 \nonumber \\
&=& \sum_{|\vec{k}_k|} \frac{\sigma(|\vec{k}_k|)}{P(|\vec{k}_k|)} \nonumber \\
&=& \sum_{m=0}^{M -1} \frac{\sigma_{m}}{P_{m}} \, ,
\end{eqnarray}
where we introduced \(\sigma(|\vec{k}_k|)=\sum_{\varphi_k} \sum_{\vartheta_k} C^2/\hat{C}^2 \left|\hat{s}\left(|\vec{k_k}|,\varphi_k,\vartheta_k\right)\right|^2\), which is the summed signal power on spherical shells around the origin in Fourier space, and the index \(m\) labels each of the \(M\) shells belonging to the different mode vector modulus \(|\vec{k}_k|\) in the Fourier box.
Several different mode vectors \(\vec{k}_k\) may have the same vector modulus \(|\vec{k}_k|\), and therefore belong to the same shell. To account for this we introduce the number \(n_m\), which counts the number of different mode vectors \(\vec{k}_k\) belonging to the {\it \(m\)}th shell in Fourier space. This number \(n_m\) therefore counts the degrees of freedom for each of the \(M\) modes.
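In practice the shell sums \(\sigma_m\) and mode counts \(n_m\) can be accumulated directly from the Fourier transform of a signal sample, for instance as in the following sketch (cubic grid assumed; the normalization constants \(C\) and \(\hat{C}\) are glossed over, and all names are ours):
\begin{verbatim}
import numpy as np

def shell_sums(s, box_length):
    """Return (k_shells, sigma_m, n_m) for a real-space signal sample s."""
    n = s.shape[0]
    power = np.abs(np.fft.fftn(s))**2                  # |s_hat_k|^2
    kf = 2.0 * np.pi / box_length                      # fundamental mode
    idx = np.fft.fftfreq(n, d=1.0 / n)                 # integer mode numbers
    kx, ky, kz = np.meshgrid(idx, idx, idx, indexing="ij")
    k_mod = kf * np.sqrt(kx**2 + ky**2 + kz**2)
    k_shells, inv = np.unique(np.round(k_mod.ravel(), 8), return_inverse=True)
    sigma_m = np.bincount(inv, weights=power.ravel())  # summed power per shell
    n_m = np.bincount(inv)                             # modes per shell
    return k_shells, sigma_m, n_m
\end{verbatim}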
We can then express the product in equation (\ref{eq:COND_DENS_PS_COEFF_FT}) in terms of \(m\) as:
\begin{equation}
\label{eq:PROD_SPHERICAL}
\prod_{k=0}^{N-1} \left(2\pi\,P_k\right)^{-1/2} = \prod_{m=0}^{M-1} \left(2\pi\,P_m\right)^{-n_m/2} \, .
\end{equation}
With these definitions equation (\ref{eq:COND_DENS_PS_COEFF_FT}) turns to:
\begin{equation}
\label{eq:COND_DENS_PS_COEFF_FT_mode}
{\cal P}(\{P_k\}|\{s_i\}) =\frac{{\cal P}(\{P_k\})}{C^{N^2}\,{\cal P}(\{s_i\})} \prod_{m=0}^{M-1} \left(2\pi\,P_m\right)^{-n_m/2} e^{-\frac{1}{2}\,\frac{\sigma_{m}}{P_{m}}} \, .
\end{equation}
When ignoring the power-spectrum prior \({\cal P}(\{P_k\})\) in the above equation (\ref{eq:COND_DENS_PS_COEFF_FT_mode}), we see that the probability distribution factorizes in the different \(P_{m}\), meaning they could be sampled independently.
If the prior for the different \(P_{m}\) also factorizes as:
\begin{equation}
\label{eq:PS_PRIOR_FACTORIZES}
{\cal P}(\{P_k\}) = \prod_{m=0}^{M-1} {\cal P}(P_m) \, ,
\end{equation}
then it is possible to sample each mode of the power-spectrum independently.
On large scales, or in the linear regime, the theory of gravitational structure formation tells us that the different Fourier modes evolve independently of each other. In this regime the proposed power-spectrum prior is an adequate choice.
However, we also know that nonlinear structure formation couples the different Fourier modes, and this coupling becomes stronger the deeper we reach into the nonlinear regime. In this regime another prior would be more adequate, but also harder to sample.
In any case, as already described in section \ref{LSS_SAMPLER}, the entire power-spectrum sampling method requires two steps. While the different power-spectrum modes are assumed to be independent in the power-spectrum sampling step, they are not in the signal sampling step. There the different modes are coupled via the observation mask and the selection function, and furthermore the physical coupling of the different modes is represented in the data.
Therefore, in the following we assume a power-spectrum prior, as proposed in equation (\ref{eq:PS_PRIOR_FACTORIZES}), and defer a more thorough investigation of adequate prior choices in the nonlinear regime to future work.
With this prior choice each mode can be sampled independently from the following probability density distribution:
\begin{equation}
\label{eq:COND_DENS_PS_single_mode}
{\cal P}(P_m|\{s_i\}) =\frac{{\cal P}(P_m)}{\left(C^{N^2}\,{\cal P}(\{s_i\})\right)^{1/M}} \left(2\pi\,P_m\right)^{-n_m/2} e^{-\frac{1}{2}\,\frac{\sigma_{m}}{P_{m}}} \, .
\end{equation}
Further, we will assume a power-law behavior for the individual mode prior, \({\cal P}(P_m) \propto P_m^{-\alpha}\), where \(\alpha\) is a power-law index. Note that a power-law index \(\alpha=0\) describes the flat prior, while \(\alpha=1\) amounts to the Jeffrey's prior. The Jeffrey's prior is a solution to the measure invariant scale transformation \({\cal P}(P_m) dP_m = {\cal P}(\gamma\,P_m)\,\gamma dP_m\) \citep{WANDELT2004}, and is therefore a scale-independent prior, as different scales have the same probability.
Inserting this power law prior in equation (\ref{eq:COND_DENS_PS_single_mode}) and imposing the correct normalization, reveals that the power-spectrum coefficients have to be sampled from an inverse gamma distribution given as:
\begin{equation}
\label{eq:INVERSE_GAMMA}
{\cal P}(P_m|\{s_i\}) =\frac{\left(\frac{\sigma_m}{2}\right)^{(\alpha-1)+n_m/2}}{\Gamma \left((\alpha-1)+\frac{n_m}{2}\right)} \frac{1}{P_m^{(\alpha+n_m/2)}} e^{- \frac{1}{2}\frac{\sigma_m}{P_m}}\, .
\end{equation}
By introducing the new variable \(x_m=\sigma_m/P_m\) and performing the change of variables we obtain the \(\chi^2\)-distribution:
\begin{equation}
\label{eq:XI_SQUARE}
{\cal P}(x_m|\{s_i\}) = \frac{x_m^{\beta_m/2-1}}{\Gamma\left(\beta_m/2\right)\,(2)^{\beta_m/2}} e^{- \frac{x_m}{2}}\, ,
\end{equation}
where \(\beta_m=2(\alpha+n_m/2-1)\).
Sampling the power-spectrum coefficients is now an easy task, as it reduces to drawing random samples from the \(\chi^2\)-distribution.
A random sample from the \(\chi^2\)-distribution for an integer \(\beta_m\) can be drawn as follows.
Let \(z_j\) be \(\beta_m\) independent, normally distributed random variates with zero mean and unit variance; then:
\begin{equation}
\label{post:posterior_of_x_1}
x_m = \sum_{j=1}^{\beta_m} z_j^2=|\vec{z}_m|^2 \,
\end{equation}
is \(\chi^2\)-distributed, and \(\vec{z}_m\) is a \(\beta_m\) element vector, with each element being normally distributed.
The power-spectrum coefficient sample is then obtained by:
\begin{equation}
\label{eq:PS_sample}
P_m = \frac{\sigma_m}{|\vec{z}_m|^2} \, .
\end{equation}
It is easy to see that each power-spectrum coefficient sample is a positive quantity, which ensures that the signal covariance matrix is positive definite, as it has to be by definition.
To summarize, this procedure provides an optimal estimator for the power-spectrum coefficients and their uncertainties.
It is also worth mentioning that the inverse gamma distribution is a highly non-Gaussian distribution, and that for this reason the joint estimates of signal amplitudes \(s_i\) and power-spectrum coefficients \(P_m\) are drawn from a non-Gaussian distribution.
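The resulting sampling rule is compact enough to be stated as a short sketch (Jeffrey's prior \(\alpha=1\) by default; names are ours, and modes with \(\beta_m\le 0\), such as the zero mode under a flat prior, would need special treatment):
\begin{verbatim}
import numpy as np

def draw_power_spectrum(sigma_m, n_m, alpha=1.0, rng=np.random.default_rng()):
    """Draw P_m = sigma_m / |z_m|^2 with |z_m|^2 chi^2-distributed,
    using beta_m = 2 (alpha + n_m/2 - 1) degrees of freedom per mode."""
    beta_m = 2.0 * (alpha + np.asarray(n_m) / 2.0 - 1.0)
    chi2 = rng.chisquare(beta_m)
    return np.asarray(sigma_m) / chi2
\end{verbatim}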
\subsection{Blackwell-Rao estimator}
\label{Blackwell-Rao_estimator}
As described in the introduction, we seek to estimate the probability distribution \({\cal P}(\{P_m\}|\{d_i\})\), which we can now simply obtain by marginalizing over the signal samples:
\begin{eqnarray}
\label{eq:BLACKWELL_RAO}
{\cal P}(\{P_m\}|\{d_i\}) & = &\int \rm{d}\{s_i\}\,{\cal P}(\{P_m\}|\{s_i\},\{d_i\})\,{\cal P}(\{s_i\}|\{d_i\}) \nonumber \\
&= &\int \rm{d}\{s_i\}\,{\cal P}(\{P_m\}|\{s_i\})\,{\cal P}(\{s_i\}|\{d_i\}) \nonumber \\
& \approx & \frac{1}{N_{Gibbs}} \sum_{j=1}^{N_{Gibbs}} {\cal P}(\{P_m\}|\{s_i\}^j) \, ,
\end{eqnarray}
where \(\{s_i\}^j\) are the signal Gibbs samples, and \(N_{Gibbs}\) is the total number of Gibbs samples.
This result is known as the Blackwell-Rao estimator of \({\cal P}(\{P_m\}|\{d_i\})\) which is guaranteed to have a lower variance than a binned estimator \citep{WANDELT2004}.
It is worth noting, that \({\cal P}(\{P_m\}|\{s_i\})\) has a very simple analytic form, and therefore equation (\ref{eq:BLACKWELL_RAO}) provides an analytic approximation to \({\cal P}(\{P_m\}|\{d_i\})\) based on the Gibbs samples.
All the information on \({\cal P}(\{P_m\}|\{d_i\})\) is therefore contained in the \(\sigma_m\) of the individual Gibbs steps, which generate a data set of size \(\mathcal{O}(m_{max}\,N_{Gibbs})\), where \(m_{max}\) is the maximal number of independent modes.
In addition to being a faithful representation of \({\cal P}(\{P_m\}|\{d_i\})\), the Blackwell-Rao estimator is also a computationally efficient representation, which allows us to calculate any moment of \({\cal P}(\{P_m\}|\{d_i\})\) as:
\begin{equation}
\label{eq:EST_HIGHER_MOMENTS}
\langle P_m\,P_{m'} ... P_{m''}\rangle|_{{\cal P}(P_m|\{d_i\})} \approx \frac{1}{N_{Gibbs}} \sum_{j=1}^{N_{Gibbs}} \langle P_m\,P_{m'} ... P_{m''}\rangle|_{{\cal P}(P_m|\{s_i\}^j)} \, ,
\end{equation}
where each of the terms on the right-hand side can be calculated analytically.
For the inverse gamma distribution given in equation (\ref{eq:INVERSE_GAMMA}) we can then simply calculate the mean of the probability distribution \({\cal P}(P_m|\{d_i\})\) as:
\begin{equation}
\label{eq:BR_MEAN}
\langle P_m \rangle|_{{\cal P}(P_m|\{d_i\})} \approx \frac{ \frac{1}{N_{Gibbs}} \sum_{j=1}^{N_{Gibbs}} \sigma_m^j}{2(\alpha-2)+n_m} \, ,
\end{equation}
and in analogy the variance as:
\begin{equation}
\label{eq:BR_VARIANCE}
\left \langle \left [ P_m -\langle P_m \rangle\right]^2 \right \rangle|_{{\cal P}(P_m|\{d_i\})} \approx \frac{\frac{1}{N_{Gibbs}}\,\sum_{j=1}^{N_{Gibbs}}\left(\sigma_m^j\right)^2}{4\,\left((\alpha-2)+n_m/2\right)^2\left((\alpha-3)+n_m/2\right)} \, .
\end{equation}
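Given the stored \(\sigma_m^j\) of the individual Gibbs steps, these two moments reduce to simple averages over the chain; a sketch (names are ours, and the expressions are only finite for sufficiently large \(2\alpha+n_m\), cf. the prior discussion below):
\begin{verbatim}
import numpy as np

def blackwell_rao_moments(sigma_samples, n_m, alpha=1.0):
    """sigma_samples has shape (N_Gibbs, M), holding sigma_m^j per Gibbs sample."""
    sigma_samples = np.asarray(sigma_samples, dtype=float)
    sigma_mean = np.mean(sigma_samples, axis=0)
    sigma2_mean = np.mean(sigma_samples**2, axis=0)
    mean = sigma_mean / (2.0 * (alpha - 2.0) + n_m)
    var = sigma2_mean / (4.0 * ((alpha - 2.0) + n_m / 2.0)**2
                             * ((alpha - 3.0) + n_m / 2.0))
    return mean, var
\end{verbatim}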
The Blackwell-Rao estimator also allows us to demonstrate another remarkable property of the Gibbs sampling approach. Although a specific power-spectrum prior has to be employed during the Gibbs analysis of the data, a post processing analysis of the power-spectrum can be performed with any desired power-spectrum prior.
Let us assume one prefers to perform a post-processing analysis with a power-spectrum prior \({\cal P'}(\{P_m\})\), rather than with the prior \({\cal P}(\{P_m\})\) which was employed during Gibbs sampling. We therefore want to estimate the power-spectrum from the following posterior:
\begin{eqnarray}
\label{eq:BLACKWELL_RAO_PRIOR}
{\cal P'}(\{P_m\}|\{d_i\}) & = & {\cal P'}(\{P_m\}) \frac{{\cal P}(\{d_i\}|\{P_m\})}{{\cal P}(\{d_i\})}\nonumber \\
& = & \frac{{\cal P'}(\{P_m\})}{{\cal P}(\{P_m\})}\, {\cal P}(\{P_m\}|\{d_i\})\nonumber \\
&= & \frac{{\cal P'}(\{P_m\})}{{\cal P}(\{P_m\})} \int \rm{d}\{s_i\}\,{\cal P}(\{P_m\}|\{s_i\})\,{\cal P}(\{s_i\}|\{d_i\}) \nonumber \\
&= & \int \rm{d}\{s_i\}\,{\cal P'}(\{P_m\})\,\frac{{\cal P}(\{s_i\}|\{P_m\})}{{\cal P}(\{s_i\})}\,{\cal P}(\{s_i\}|\{d_i\}) \nonumber \\
& \approx & \frac{1}{N_{Gibbs}} \sum_{j=1}^{N_{Gibbs}} {\cal P'}(\{P_m\})\,\frac{{\cal P}(\{s_i\}^j|\{P_m\})}{{\cal P}(\{s_i\}^j)} \, ,
\end{eqnarray}
where we simply made use of Bayes' theorem. Since \({\cal P}(\{s_i\}|\{P_m\})\) is a simple Gaussian distribution, and therefore given analytically, the posterior \({\cal P'}(\{P_m\}|\{d_i\})\) can be calculated with any desired power-spectrum prior in a post-processing step.
\section{The prior and the cosmic variance}
\label{Prior_and_Variance}
The Gibbs sampling procedure consists of two basic steps: first sampling perfect noise-less full sky signal samples \(s_i\), and then sampling the power-spectrum coefficients \(P_m\) given \(s_i\).
Therefore, the probability distribution \({\cal P}(P_m|\{s_i\})\), given in equation (\ref{eq:INVERSE_GAMMA}), encodes our knowledge on the power-spectrum coefficients \(P_m\), if we had perfect knowledge of the true signal amplitudes \(s_i\).
It is clear that in the case of perfect observations the full posterior distribution for the power-spectrum coefficients \({\cal P}(P_m|\{d_i\})\) would reduce to the one given in equation (\ref{eq:INVERSE_GAMMA}).
This is because in the case of perfect full-sky and noise-less observations the signal posterior collapses to a Dirac delta distribution, due to the vanishing noise covariance matrix.
This means that in this case the signal amplitudes \(s_i\) can be estimated with zero variance.
However, measuring the \(P_m\) to arbitrary precision will never be possible. The power-spectrum coefficients depend on the data through the \(\sigma_m\), which measure the actual fluctuation power in the observed Universe. It is clear that the probability distribution function (\ref{eq:INVERSE_GAMMA}) for the \(P_m\) will not reduce to a Dirac delta distribution, even if the \(\sigma_m\) have been measured perfectly.
Owing to this fact, there will always remain some uncertainty in the power-spectrum estimation, even in the case of perfect measurements.
This residual uncertainty is well known as cosmic variance, which is the direct consequence of observing only one specific matter field realization.
The Gibbs sampling approach, as proposed here, takes this cosmic variance into account, by drawing samples from the probability distribution \({\cal P}(P_m|\{s_i\})\), which obey the correct statistical properties.
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture2}}}
\caption{Selection function and two dimensional sky mask.}
\label{fig:TEST_SEL_WIN}
\end{figure*}
\subsection{Flat versus Jeffrey's prior}
The main characteristics of \({\cal P}(P_m|\{s_i\})\) can be summarized in terms of the mean, mode and variance, which allows us to discuss the influence of the actual power law prior choice.
The mode of the inverse gamma distribution (\ref{eq:INVERSE_GAMMA}) is the most frequently used estimator for the power-spectrum coefficients when using fast Fourier transform techniques. The mode \(P_m^*\) is defined as \(P_m^* \in \{P_m | \forall P_l : \, \, {\cal P}(P_l|\{s_i\}) \le {\cal P}(P_m|\{s_i\}) \} \), which is simply the value of \(P_m\) which maximizes the distribution. For the inverse gamma distribution it is given as:
\begin{equation}
\label{eq:mean_inverse_Gamma}
P_m^* = \frac{\sigma_m}{(2\alpha+n_m)} \, .
\end{equation}
Assuming a flat prior \(\alpha=0\), this immediately returns the frequently used and simple power-spectrum estimator:
\begin{equation}
\label{eq:simple_power_spec_est}
P_m^* = \frac{\sigma_m}{n_m} \, ,
\end{equation}
\citep[see e.g.][]{Volker2008}.
However, note that a flat prior is a questionable choice when measuring a variance, which is a scale parameter, as it does not correspond to maximal ignorance but biases towards large excursions from zero \citep{WANDELT2004}.
Additionally, the flat prior does not permit sampling every mode in the three dimensional Fourier box with finite variance, which can easily be seen from the variance of the inverse gamma distribution:
\begin{eqnarray}
\label{eq:variance_inverse_Gamma}
\left \langle \left [ P_m -\langle P_m \rangle\right]^2 \right \rangle = \frac{\sigma_m^2}{4\,( (\alpha-2)+n_m/2)^2((\alpha-3)+n_m/2)} \, ,
\end{eqnarray}
which is only finite for \(2\alpha+n_m>6\).
In a three dimensional cubic Fourier box the minimal number \(n_m\) for a mode is \(\min(n_m)=6\) (except for the zero mode). This corresponds to the six mode vectors \(\vec{k_k}\) with the same vector modulus \(|\vec{k_k}|\) along the three axes in Fourier space. Nevertheless, a flat prior (\(\alpha=0\)) requires \(n_m>6\) in order to sample a mode with finite variance, which cannot be fulfilled for these modes.
Therefore, we favor the Jeffrey's prior with \(\alpha=1\), which requires only \(n_m>4\) to sample each mode, except for the zero mode, with finite variance. Jeffrey's prior is also scale invariant, and therefore does not introduce any bias on a log-scale.
\subsection{Informative prior}
\label{Informative_Prior}
The prior discussed in the previous section is a maximal ignorance prior in the sense that every scale has the same probability. This prior therefore allows for large excursions around the true value of the power-spectrum. This is especially important when sampling the largest scales in a galaxy survey, which are poorly constrained by measurements. A maximum ignorance prior will therefore require sampling a huge space of possible power-spectrum configurations.
However, one can argue, that knowledge about the largest scales exists, either through theory, or CMB measurements, which provide detailed information on the largest scales.
For this reason, it might be interesting to incorporate this a priori knowledge of the power-spectrum into our sampling scheme, thereby allowing for a more efficient strategy to sample the space of possible power-spectrum configurations.
The simplest informative prior can be obtained by limiting the range of the Jeffrey's prior, setting it to zero for power-spectrum excursions of more than a certain factor:
\begin{eqnarray}
\label{eq:INF_JEFFREYS_PRIOR}
{\cal P}(P_m) &\propto& \left \{ \begin{array}{ll}
P_m^{-\alpha} & \quad \mbox{for $ P^{Prior}_m / \tau \le P_m \le P^{Prior}_m\, \tau $}\\
0 & \quad \mbox{otherwise}\\ \end{array} \right.
\end{eqnarray}
where \(P^{Prior}_m\) is our best guess power-spectrum prior, and \(\tau\) is a factor, which permits a certain range around the prior power-spectrum.
The sampling scheme which arises by implementing this prior is basically the same as described in section \ref{Power_spectrum_sampling}, with the only exception that power-spectrum coefficients \(P_m\) which do not fulfill the requirement described in equation (\ref{eq:INF_JEFFREYS_PRIOR}) are rejected and have to be resampled.
This prior would still be a maximum ignorance prior over the allowed range.
Another possible informative prior, which allows sampling over the entire range while favoring the region in which we expect the true power-spectrum to lie, can easily be constructed by assuming some a priori knowledge of the \(\sigma_m\). For example, this can be achieved by incorporating an independent observation into the sampling scheme.
In this case we can again assume an inverse gamma distribution for the power-spectrum prior:
\begin{equation}
\label{eq:INVERSE_GAMMA_PRIOR}
{\cal P}(P_m) =\frac{\left(\frac{\sigma^{Prior}_m}{2}\right)^{(\alpha^{Prior}-1)+n_m^{Prior}/2}}{\Gamma \left((\alpha^{Prior}-1)+\frac{n_m^{Prior}}{2}\right)} \frac{1}{P_m^{(\alpha^{Prior}+n_m^{Prior}/2)}} e^{- \frac{1}{2}\frac{\sigma_m^{Prior}}{P_m}}\, ,
\end{equation}
where \(\sigma^{Prior}_m\) describes our a priori knowledge of the \(\sigma_m\), \(\alpha^{Prior}\) is the spectral index of our power-law prior, which we choose to be \(\alpha^{Prior}=1\), corresponding to the Jeffrey's prior, and \(n_m^{Prior}\) is the number of mode counts for our prior guess.
Note that the combination of \(\alpha^{Prior}\) and \(n_m^{Prior}\) defines how sharp this prior is.
As we want our prior to contain as little information as possible, we choose \(n_m^{Prior}=5\), as this is the minimal number of modes which leads to a finite variance with the Jeffrey's prior.
Introducing an inverse gamma prior then again yields an inverse gamma distribution for the power-spectrum sampling procedure:
\begin{equation}
\label{eq:INVERSE_GAMMA_WITH_PRIOR}
{\cal P}(P_m|\{s_i\}) =\frac{\left(\frac{\sigma_m^{Prior}+\sigma_m}{2}\right)^{(\alpha^{Prior}-1)+(n_m^{Prior}+n_m)/2}}{\Gamma \left((\alpha^{Prior}-1)+\frac{(n_m^{Prior}+n_m)}{2}\right)} \frac{e^{- \frac{1}{2}\frac{(\sigma_m^{Prior}+\sigma_m)}{P_m}}}{P_m^{(\alpha^{Prior}+(n_m^{Prior}+n_m)/2)}} \, .
\end{equation}
By introducing \(x_m=(\sigma_m^{Prior}+\sigma_m)/P_m\), this can again be rewritten as a \(\chi^2\)-distribution:
\begin{equation}
\label{eq:XI_SQUARE_with_prior}
{\cal P}(x_m|\{s_i\}) = \frac{x_m^{\beta_m/2-1}}{\Gamma\left(\beta_m/2\right)\,(2)^{\beta_m/2}} e^{- \frac{x_m}{2}}\, ,
\end{equation}
where \(\beta_m=2(\alpha^{Prior}+(n_m^{Prior}+n_m)/2-1)\).
A power-spectrum sample is then obtained in the same fashion as described in section \ref{Power_spectrum_sampling}, by
\begin{equation}
\label{eq:PS_sample_prior}
P_m = \frac{\sigma_m^{Prior}+\sigma_m}{|\vec{z}_m|^2} \, ,
\end{equation}
with \(\vec{z}_m\) being a \(\beta_m\)-element vector, with each element being normally distributed with zero mean and unit variance.
As an example of incorporating theoretical information one can generate the \(\sigma_m^{Prior}\) by:
\begin{equation}
\label{eq:PS_Prior}
\sigma_m^{Prior} = n_m^{Prior}\, P^{Prior}_m \, ,
\end{equation}
which will yield on average the prior power-spectrum.
In this fashion the Gibbs sampler will sample around our prior guess for the power-spectrum.
Note, that this method provides a possible interface for joint power-spectrum estimation from a joint CMB and large scale structure analysis, where the \(\sigma_m^{Prior}=\sigma_m^{CMB}\) will be obtained from a CMB analysis step.
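With this informative prior the sampling rule changes only by the addition of \(\sigma_m^{Prior}\) and \(n_m^{Prior}\); a sketch in the same conventions as above (names are ours):
\begin{verbatim}
import numpy as np

def draw_power_spectrum_with_prior(sigma_m, n_m, sigma_prior, n_prior=5,
                                   alpha_prior=1.0,
                                   rng=np.random.default_rng()):
    """P_m = (sigma^Prior_m + sigma_m) / |z_m|^2, with
    beta_m = 2 (alpha^Prior + (n^Prior_m + n_m)/2 - 1) degrees of freedom."""
    beta_m = 2.0 * (alpha_prior + (n_prior + np.asarray(n_m)) / 2.0 - 1.0)
    chi2 = rng.chisquare(beta_m)
    return (np.asarray(sigma_prior) + np.asarray(sigma_m)) / chi2
\end{verbatim}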
\subsection{Hidden prior}
\label{hidden_prior}
In the previous section we described how to sample each mode \(|\vec{k}_k|\) of the Fourier box individually, which yields extremely fine frequency resolution in the power-spectrum estimate. As in practice such a high frequency resolution is neither required nor desired, one allows each shell \(m'\) to have a finite thickness \(\Delta k_{m'}\), rather than treating it as infinitely thin.
As the newly designated shells \(m'\) have a finite thickness, different infinitely thin shells \(m\) now belong to the same shell \(m'\).
Since the shells have a finite thickness, different nearby Fourier modes \(|\vec{k}_k|\) fall into the same bin, and therefore an assumption about the functional shape \(f^{m'}_m\), which yields the correct weighting over the shell \(m'\) around the central mode \(|\vec{k}|_{m'}\), must be made:
\begin{equation}
\label{eq:PS_sample_piece_function}
P_m = A_{m'} f^{m'}_m \mbox{ for \( |\vec{k}_k| \in \left[|\vec{k}|_{m'}-\Delta k_{m'}/2,\,|\vec{k}|_{m'}+\Delta k_{m'}/2\right]\) } \, ,
\end{equation}
where \(A_{m'}\) is the constant amplitude for the {\it\(m'\)}th shell. Usually, the shape of the power-spectrum \(f^{m'}_m\) is assumed to be constant over the shell width \(\Delta k_{m'}\), which amounts to ``binning'' the power-spectrum. However, in general \(f^{m'}_m\) could assume any desired functional shape.
With this assumption, the estimate of the power-spectrum is further constrained by the assumed functional shape \(f^{m'}_m\), and we seek to estimate the set of power-spectrum coefficients \(\{P_m\}\) from a probability distribution given as:
\begin{eqnarray}
\label{1st:cont_spec_posterior_binning}
{\cal P}(\{P_m\}|\{s_i\},\{f^{m'}_m\}) &=&\frac{{\cal P}(\{f^{m'}_m\}|\{P_m\})}{{\cal P}(\{f^{m'}_m\}|\{s_i\})} \, \frac{{\cal P}(\{P_m\}) {\cal P}(\{s_i\}|\{P_m\},\{f^{m'}_m\})}{{\cal P}(\{s_i\})} \nonumber \\
&=&\frac{{\cal P}(\{f^{m'}_m\}|\{P_m\})}{{\cal P}(\{f^{m'}_m\}|\{s_i\})} \, \frac{{\cal P}(\{P_m\}) {\cal P}(\{s_i\}|\{P_m\})}{{\cal P}(\{s_i\})} \, , \nonumber \\
\end{eqnarray}
where we assumed conditional independence \({\cal P}(\{s_i\}|\{P_m\},\{f^{m'}_m\})={\cal P}(\{s_i\}|\{P_m\})\) of the functional shape, once all power-spectrum coefficients are given.
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture3}}}
\caption{Volume rendering of observed galaxy densities in two different projections.}
\label{fig:volume_rendering_observation}
\end{figure*}
The usual implicit assumption when introducing this kind of power-spectrum binning is:
\begin{equation}
\label{eq:Pimplicit assumption}
{\cal P}(\{f^{m'}_m\}|\{P_m\})\propto \prod_m \delta^D(A_{m'}\,f^{m'}_m-P_m)\, ,
\end{equation}
meaning, we claim to know the exact functional shape of the power-spectrum within the shells \(m'\).
This reduces the number of free parameters to be sampled from the set of \(N_m\) power-spectrum coefficients \(P_m\) to the set of \(N_{m'}\) amplitudes \(A_{m'}\).
If, for instance, we knew the exact shape of the entire power-spectrum, sampling the power-spectrum could be reduced to the task of just sampling the overall amplitude. Such an approach is pursued when sampling cosmological parameters.
With the above definition, however, sampling the power-spectrum reduces to sampling the amplitudes \(A_{m'}\), and we can write:
\begin{eqnarray}
\label{1st:amplitude_posterior_binning}
{\cal P}(\{A_{m'}\}|\{s_i\},\{f^{m'}_m\}) &=& {\cal P}(\{P_m\}|\{s_i\},\{f^{m'}_m\}) \left| \frac{\partial \{P_m\} }{\partial \{A_{m'}\}} \right| \nonumber \\
&=& \prod_m \frac{\left(\frac{\sigma_m}{2\,f^{m'}_m}\right)^{(\alpha-1)+n_m/2}}{\Gamma \left((\alpha-1)+\frac{n_m}{2}\right)} \frac{ e^{- \frac{1}{2}\frac{\sigma_m}{A_{m'}\,f^{m'}_m}}}{(A_{m'})^{(\alpha+n_m/2)}}\nonumber \\
&=& \prod_{m'}\prod_{m\,\in \, m'} \frac{\left(\frac{\sigma_m}{2\,f^{m'}_m}\right)^{(\alpha-1)+n_m/2}}{\Gamma \left((\alpha-1)+\frac{n_m}{2}\right)} \frac{ e^{- \frac{1}{2}\frac{\sigma_m}{A_{m'}\,f^{m'}_m}}}{(A_{m'})^{(\alpha+n_m/2)}}\nonumber \\
&=& \prod_{m'}\frac{\left(\frac{\sigma_{m'}}{2}\right)^{(\alpha-1)+n_{m'}/2}}{\Gamma \left((\alpha-1)+\frac{n_{m'}}{2}\right)} \frac{ e^{- \frac{1}{2\,A_{m'}} \sum_{m\, \in \, m'}\frac{\sigma_m}{\,f^{m'}_m}}}{(A_{m'})^{(\alpha+n_{m'}/2)}}\nonumber \\
&=& \prod_{m'}\frac{\left(\frac{\sigma_{m'}}{2}\right)^{(\alpha-1)+n_{m'}/2}}{\Gamma \left((\alpha-1)+\frac{n_{m'}}{2}\right)} \frac{ e^{- \frac{1}{2} \frac{\sigma_{m'}}{A_{m'}}}} {(A_{m'})^{(\alpha+n_{m'}/2)}}\, , \nonumber \\
\end{eqnarray}
where \(\sigma_{m'}=\sum_{m\, \in \, m'}\sigma_m/f^{m'}_m\) and \(n_{m'}= \sum_{m\, \in \, m'} n_{m}\).
As this probability distribution factorizes in the amplitudes \(A_{m'}\), each of these amplitudes can be independently sampled from the inverse gamma distribution, with the method described in section \ref{Power_spectrum_sampling}.
A power-spectrum coefficient \(P_m\) is therefore obtained as:
\begin{equation}
\label{1st:binned_PM_estimate}
P_m = \frac{\sigma_{m'}\,f^{m'}_m}{|\vec{z}_{m'}|^2} \, ,
\end{equation}
where \(\vec{z}_{m'}\) is a \(n_{m'}\) component vector, with the elements being Gaussian distributed with zero mean and unit variance.
Note that since \(n_{m'}\ge n_{m}\) the variance of the power-spectrum coefficients \(P_m\) is reduced. This is the result of reducing the number of free parameters by introducing ``binning'', as each amplitude estimate \(A_{m'}\) is now based on more supporting points than the individual \(P_m\).
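A binned draw therefore only requires accumulating \(\sigma_{m'}\) and \(n_{m'}\) over the modes of each shell before applying the same \(\chi^2\) rule; a sketch under the assumption of a constant shape \(f^{m'}_m=1\) (names are ours):
\begin{verbatim}
import numpy as np

def draw_binned_power_spectrum(sigma_m, n_m, bin_index, alpha=1.0,
                               rng=np.random.default_rng()):
    """bin_index[m] gives the shell m' to which the thin shell m belongs."""
    bin_index = np.asarray(bin_index)
    sigma_bin = np.bincount(bin_index, weights=np.asarray(sigma_m, dtype=float))
    n_bin = np.bincount(bin_index, weights=np.asarray(n_m, dtype=float))
    beta = 2.0 * (alpha + n_bin / 2.0 - 1.0)
    A = sigma_bin / rng.chisquare(beta)   # shell amplitudes A_{m'}
    return A[bin_index]                   # P_m = A_{m'} f^{m'}_m with f = 1
\end{verbatim}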
Due to the finite shell width \(\Delta k_{m'}\) neighboring modes are coupled. This fact could be exploited to circumvent the problem of missing mode coupling in the nonlinear regime. For example, if the different shells were logarithmically spaced, it would be possible to sample the largest scales independently, while towards the nonlinear regime the modes would become more and more coupled. From a physical point of view, a logarithmic spacing would therefore be best suited for this problem.
On the other hand, introducing ``binning'' to the power-spectrum sampling procedure makes the method insensitive to fluctuations on scales smaller than the shell width \(\Delta k_{m'}\), which should therefore be chosen carefully in order not to miss features we intend to recover.
It is also important to note that the variance in the power-spectrum sampling step defines the step size of the random walk that samples the joint probability distribution of signal and power-spectrum. If the ``bins'' are unreasonably large, and the variance therefore dramatically reduced, it takes much longer to explore the joint space of signal amplitudes \(s_i\) and power-spectrum coefficients \(P_m\).
For this reason, we prefer to sample the power-spectrum with rather high spectral resolution.
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture4}}}
\caption{Test results from low resolution test run. The left panel shows the results of the Burn-in test for eight sequential \(\xi_l^i\) as a function of Fourier modes \(k\). The right panel displays results for the correlation coefficients \(C_l(n)\) of the correlation length test for all Fourier modes.}
\label{fig:BIT_CORRCOEFF}
\end{figure*}
\section{Numerical Implementation}
\label{NUMERICAL_IMPLEMENTATION}
Our numerical implementation of the large scale structure Gibbs sampler is called \textsc{ARES} (Algorithm for REconstruction and Sampling).
It can be separated into the described two Gibbs sampling steps of estimating a signal sample, which involves solving large systems of equations, and sampling the power-spectrum, by drawing random samples from the inverse gamma distribution.
\subsection{Inversion of matrices}
The signal sampling step is by far the most numerically demanding step, as it requires fast and efficient inversions of large matrices. \textsc{ARES} utilizes the fast operator-based conjugate gradients inversion technique as presented in the \textsc{ARGO} code \citep[][]{Kitaura}, which has recently been applied to Sloan Digital Sky Survey data to obtain matter field reconstructions \citep[][]{KITAURA2009}. Operator-based inversion techniques have previously been developed for CMB data analysis \citep[][]{WANDELT2004,2004ApJS..155..227E,JEWELL2004}.
Rather than storing the matrices under consideration explicitly in computer memory, which would be numerically prohibitive, this approach only requires knowledge of how these matrices act on a vector, and therefore reduces the required amount of computer memory to numerically feasible amounts.
Further, it is possible to reduce the scaling of the most expensive matrix operation to that of a fast Fourier transform. \textsc{ARES} uses the FFTW3 library, and the scaling of the most expensive operation is therefore \(\mathcal{O}(N\,\log(N))\) \citep{FFTW05}.
The FFTW3 library also provides parallel Fourier transforms, which allows for straightforward parallelization of our code.
\begin{figure*}
\centering{\resizebox{0.4\hsize}{!}{\includegraphics{./figures/picture5}}}
\caption{Wall clock times for a single map making step over 10000 Gibbs iterations. The red line denotes the average wall clock time for one matrix inversion step.}
\label{fig:WALL_TIMES}
\end{figure*}
\subsection{Random number generation}
Our random number generation relies on a pseudo random number generator as provided by the GNU scientific library (gsl) \citep{GSL}.
In particular, we use the Mersenne Twister MT19937, with 32-bit word length, as provided by the gsl\_rng\_mt19937 routine.
The Mersenne Twister algorithm was designed for Monte Carlo simulations, where primarily good quality numbers and speed are decisive \citep{MERSENNE_TWISTER}.
It has been proven to have a period of \(2^{19937}-1\), whereas for our application we usually require only about \(2^{35}\) unique random numbers in practice. Also note that the very high order of dimensional equidistribution guarantees negligible serial correlation in the output sequence.
The Mersenne Twister algorithm passed several tests for statistical randomness, including the Diehard tests.
\subsection{Parallelization}
\label{Parallelization}
Parallelization of the code is a crucial issue, since CPU time is the main limiting factor of our method.
Even though the conjugate gradient method allows for very efficient matrix inversions, for a complete data analysis one has to perform many of these matrix inversions, and therefore the total prefactor of the algorithm increases.
In figure \ref{fig:WALL_TIMES} we show a typical progression for the wall clock times of the map making algorithm during the Gibbs sampling chain for a low resolution simulation with \(N_{VOX}=64^3\) and a setup as described in section \ref{Gaussian_Test_Cases}.
As can be seen, the wall clock times may vary from Gibbs iteration to Gibbs iteration.
This variation is due to the complexity of the actual problem in the Gibbs iteration, which may require more conjugate gradients steps to reach the desired numerical accuracy.
The average creation time for one Gibbs sample in this low resolution simulation is about \(20\) seconds; this test was run on a single desktop CPU. Doubling the resolution will require roughly eight times longer to create a sample. Hence, an \(N_{VOX}=128^3\) simulation will require roughly \(5\) minutes and an \(N_{VOX}=512^3\) simulation about \(8\) hours to create a single Gibbs sample.
This immediately clarifies the need to parallelize the code, as usually tens of thousands of Gibbs samples are required.
There are in principle two different approaches to parallelize the code.
The most demanding step in the Gibbs sampling chain is the map making process. One could therefore parallelize the map making algorithm, which in principle requires parallelizing the fast Fourier transform. The FFTW3 library provides parallelized fast Fourier transform procedures, and their implementation is straightforward \citep{FFTW05}.
However, optimal speed up cannot be achieved.
The other approach relies on the fact that our method is a Monte Carlo process, and each CPU can therefore calculate its own Markov chain.
In this fashion we gain optimal speed up and the possibility to initialize each chain with different initial guesses.
The major difference between these two parallelization approaches is that the first method calculates one rather long sampling chain, while the latter produces many shorter chains.
As we will see in the next section, successive Gibbs samples are highly correlated in the low signal to noise regime and producing a larger number of independent samples requires longer chains.
This problem, however, can be partially overcome by initializing each Markov chain with an independent power-spectrum guess.
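As a schematic illustration of the second strategy, the following Python sketch (with placeholder routines for the two Gibbs steps, not the actual implementation) runs several independent chains in separate processes, each with its own random seed and its own initial power-spectrum guess:
\begin{verbatim}
import numpy as np
from multiprocessing import Pool

def sample_signal(p_spec, rng):
    # Placeholder for the Wiener posterior signal draw (first Gibbs step).
    return rng.normal(0.0, np.sqrt(p_spec))

def sample_power_spectrum(signal, rng):
    # Placeholder for the inverse gamma power-spectrum draw (second Gibbs step).
    sigma2 = signal ** 2
    return sigma2 / rng.chisquare(1, size=sigma2.shape).clip(min=1e-3)

def run_chain(args):
    seed, p_init, n_samples = args
    rng = np.random.default_rng(seed)   # independent generator per chain
    p_current = p_init.copy()
    samples = []
    for _ in range(n_samples):
        signal = sample_signal(p_current, rng)
        p_current = sample_power_spectrum(signal, rng)
        samples.append(p_current.copy())
    return samples

if __name__ == "__main__":
    n_chains, n_samples, n_bins = 8, 500, 64
    # independent initial power-spectrum guesses for each chain
    guesses = [np.ones(n_bins) * 10.0 ** g for g in np.linspace(-1, 1, n_chains)]
    args = [(seed, p0, n_samples) for seed, p0 in enumerate(guesses)]
    with Pool(n_chains) as pool:
        chains = pool.map(run_chain, args)
\end{verbatim}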
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture6}}}
\caption{The left panel shows the mode statistics for some selected Fourier modes. The green vertical line denotes the true underlying power-spectrum. The red line denotes the averaged mean obtained from the Gibbs chain, and the black vertical line denotes the power-spectrum of the specific matter field realization. The right panel shows the power-spectrum covariance matrix estimated from the 40,000 Gibbs samples.}
\label{fig:MODE_STAT_CORR_MAT_MI_PRIOR}
\end{figure*}
\section{Testing \textsc{ARES}}
\label{Gaussian_Test_Cases}
In this section, we apply \textsc{ARES} to simulated mock observations, where the underlying dark matter signal is perfectly known. With these mock observations we will be able to demonstrate that the code produces results consistent with the theoretical expectation. In addition, we will gain insight in how the code may perform in real-world applications, when CPU time is limited.
Therefore, we will set up Gaussian mock cases, each designed to highlight a specific feature of the code.
\subsection{Setting up a Gaussian Mock observation}
\label{SET_UP_MOCK_OBS}
For the cases studied in this section we set up a set of low resolution mock observations based on Gaussian random fields, which will allow us to calculate a large number of samples in reasonable computational times.
These mock observations are generated on a three dimensional Cartesian box with \(N_{side}=64\), corresponding to \(N_{vox}=262144\) volume elements, and a box length of \(L=1500 \unit{Mpc}\), with the observer positioned at the center.
The mock observations are generated according to the procedure described in section \ref{DRAWING_SIGNAL_SAMPLES}, with the underlying cosmological power-spectrum being calculated, with baryonic wiggles, following the prescription described in \citet{1998ApJ...496..605E} and \citet{1999ApJ...511....5E}. For these simulations we assumed a standard \(\Lambda\)CDM cosmology with the set of cosmological parameters (\(\Omega_m=0.24\), \(\Omega_{\Lambda}=0.76\), \(\Omega_{b}=0.04\), \(h=0.73\), \(\sigma_8=0.74\), \(n_s=1\) ).
To generate the uncorrelated noise component, we assume a noise structure function \(n_i^{SF}=M_i\,F_i\,/\bar{\lambda}\), with \(\bar{\lambda}= 8.0\times10^{-3}\, L^3/N_{vox} \) in voxel space. Then we draw random Poisson samples via the procedure described in section \ref{DRAWING_SIGNAL_SAMPLES}.
Note that this is equivalent to drawing a galaxy distribution from the ensemble mean dark matter density, which has been obtained by averaging over all possible matter field realizations.
The survey properties are described by the galaxy selection function \(F_i\) and the observation mask \(M_i\).
The selection function is given by:
\begin{equation}
\label{eq:MOCK_GAL_SELFUNC}
F_i = \left(\frac{r_i}{r_0}\right)^b\,\left(\frac{b}{\gamma}\right)^{-b/\gamma}\,e^{\frac{b}{\gamma}-\left(\frac{r_i}{r_0}\right)^{\gamma}} \, ,
\end{equation}
where \(r_i\) is the comoving distance from the observer to the center of the \(i\)th voxel. For our simulation we chose parameters \(b=0.6\), \(r_0=500{\unit{Mpc}}\) and \(\gamma=2\).
In figure \ref{fig:TEST_SEL_WIN} we show the selection function together with the sky mask, which defines the observed regions in the sky. The two dimensional sky mask is given in sky coordinates of right ascension and declination.
The projection of this two dimensional mask into the three dimensional volume yields the three dimensional mask \(M_i\).
Two different projections of this generated mock galaxy survey are presented in figure \ref{fig:volume_rendering_observation} to give a visual impression of the artificial low resolution observation.
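For illustration, the following simplified Python sketch outlines how such a mock data set can be generated. It is a toy example under the assumption that the galaxy counts are Poisson distributed around \(\bar{\lambda}\, M_i\, F_i\,(1+s_i)\), using a power-law power-spectrum and an all-sky mask instead of the cosmological spectrum with baryonic wiggles and the projected two dimensional sky mask; it is not the code used to produce the figures:
\begin{verbatim}
import numpy as np

N, L = 64, 1500.0                       # grid size and box length in Mpc
lam_bar = 8.0e-3 * L**3 / N**3          # mean density parameter (see text)

# comoving distance of each voxel center from the observer at the box center
ax = (np.arange(N) + 0.5) * L / N - L / 2.0
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)

# radial selection function as given in the text (b=0.6, r0=500 Mpc, gamma=2)
b, r0, gamma = 0.6, 500.0, 2.0
F = (r / r0)**b * (b / gamma)**(-b / gamma) * np.exp(b / gamma - (r / r0)**gamma)
M = np.ones((N, N, N))                  # sky mask (all-sky in this toy example)

# Gaussian signal realization with a toy power-law power-spectrum
kax = np.fft.fftfreq(N, d=L / N)
kx, ky, kz = np.meshgrid(kax, kax, kax, indexing="ij")
k = np.sqrt(kx**2 + ky**2 + kz**2)
P_k = np.zeros_like(k)
P_k[k > 0] = k[k > 0] ** -1.5
white = np.random.normal(size=(N, N, N))
delta = np.real(np.fft.ifftn(np.fft.fftn(white) * np.sqrt(P_k)))

# Poisson galaxy counts given mask, selection function and signal
lam = np.clip(lam_bar * M * F * (1.0 + delta), 0.0, None)
counts = np.random.poisson(lam)
\end{verbatim}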
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture7}}}
\caption{2D marginalized posterior densities. Each plot shows the full joint posterior of the data, integrated over all dimensions except for the two shown.}
\label{fig:MARG_DENS_MI_PRIOR}
\end{figure*}
\subsection{Testing convergence and correlations}
The theory of Gibbs sampling states that the individual Gibbs samples converge to being samples from the joint probability distribution. However, the theory itself does not provide any criterion to detect when the samples start being samples from the joint probability distribution.
Therefore, this initial burn-in behavior has to be tested by experiments.
Further, we follow an analysis similar to the one described in \citet{2004ApJS..155..227E} to estimate the convergence behavior and the correlation length of the Gibbs samples.
This analysis is important, as it allows us to understand the performance of our code in real world applications, which is particularly relevant for estimating how many Gibbs samples are required for an accurate power-spectrum estimation.
We study the initial burn-in behavior by setting up a simple experiment, in which we set the initial guess for the power-spectrum to be ten times larger in amplitude than the true underlying power-spectrum. Then \textsc{ARES} is applied to the mock observation, as described in the previous section \ref{SET_UP_MOCK_OBS}, to calculate a number of Gibbs iterations.
The obtained power-spectrum samples \(P_l^i\) of the \(i\)th iteration are then compared to the true power-spectrum via:
\begin{equation}
\label{eq:BURN_IN}
\xi_l^i = \frac{P_l^i-P_l^{true}}{P_l^{true}}\, ,
\end{equation}
where the \(P_l^{true}\) are the true power-spectrum coefficients, with which the mock dark matter signal was simulated.
The results for the \(\xi_l^i\) are shown in figure \ref{fig:BIT_CORRCOEFF}. Here, we observe a systematic drift of the successive power-spectra towards the true underlying power-spectrum. Further, we see that the initial burn-in phase, for the kind of setup demonstrated here, is rather short. The algorithm requires about \(30\) Gibbs iterations, after which we can assume the samples to be samples from the joint probability distribution. Also note that, in this test, we do not observe any particular hysteresis for the poorly constrained large scale modes, meaning they do not remain at their initially set values. This demonstrates the ability of the code to correctly handle the mode coupling introduced by the sky cut.
However, it is clear that a poor initial guess invalidates a certain number of samples, especially at large scales, where the uncertainty due to the sky mask dominates.
For this reason, it is advantageous to initialize the Gibbs sampling chain with an initial guess, which is close to the true power-spectrum, to ensure short burn-in times.
As can be seen in figure \ref{fig:BIT_CORRCOEFF}, any bad initial guess would reveal itself by a systematic drift in the Gibbs chain, and can therefore be detected easily.
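The burn-in diagnostic of equation (\ref{eq:BURN_IN}) is straightforward to evaluate for a stored chain. A possible Python sketch, assuming the chain is stored as an array of shape (number of samples, number of modes), reads:
\begin{verbatim}
import numpy as np

def burn_in_statistic(chain, p_true):
    # xi_l^i = (P_l^i - P_l^true) / P_l^true for a chain of shape
    # (n_samples, n_modes) and a true spectrum p_true of shape (n_modes,).
    return (chain - p_true[None, :]) / p_true[None, :]

def estimate_burn_in(chain, p_true, threshold=0.1):
    # First iteration after which the median |xi_l| drops below `threshold`
    # (a simple heuristic, not the criterion used for the figures).
    xi = np.abs(burn_in_statistic(chain, p_true))
    med = np.median(xi, axis=1)
    below = np.where(med < threshold)[0]
    return int(below[0]) if below.size else None
\end{verbatim}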
Next, we want to analyze the correlation of the individual Gibbs samples in the sequence.
This is a crucial point, as it permits us to estimate the number of independent samples, which can be obtained from a Gibbs chain of given length.
The correlation between sequential Gibbs samples can be best understood by reviewing the sampling algorithm.
A Gibbs sample of the joint probability distribution of signal and power-spectrum is obtained in two steps.
In the first step a Wiener reconstruction is performed, based on the assumption of a given power-spectrum, and the power lost due to noise filtering, masks and selection effects is replaced by a fluctuation term.
The signal obtained in this first sampling step mimics a full sky noise-less signal.
It is clear that the power-spectrum of this signal is determined by the data in the high signal-to-noise region and by the assumed power-spectrum in the low signal-to-noise region.
In the second step the power-spectrum sample is generated, based on this full sky noise-less signal sample, obtained in the previous signal sampling step. The obtained power-spectrum then works as input power-spectrum to the next Gibbs step, and the iteration starts again.
In this fashion the Gibbs sampler performs a random walk in the multi-dimensional space of signal map and power-spectrum.
The stepsize of the power-spectrum sampling step is solely determined by cosmic variance, and does not take into account the noise variance, as described in section \ref{Power_spectrum_sampling}.
However, we want to probe the full probability distribution, which includes both noise and cosmic variance.
This difference does not matter in the high signal to noise regime, since there cosmic variance will dominate the total variance, and for any practical purposes all Gibbs samples will be uncorrelated in this regime.
This, however, is not true in the low signal to noise regime. Since the random stepsize between two subsequent samples is determined only by the cosmic variance, and not by the much larger noise variance, two sequential samples will be strongly correlated. In this case a longer Gibbs sequence is required to produce uncorrelated samples.
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture8}}}
\caption{Power-spectrum estimates obtained from the Gibbs sampling chain for a low resolution (left panel) and a high resolution run (right panel). The black lines represent the ensemble mean of the sample set and the light gray and dark gray shaded regions denote the one and two sigma confidence regions respectively. Additionally we show the according input power-spectra. The blue line shows the cosmological power-spectrum from which the matter field realization was drawn, and the red line is the power-spectrum of this specific matter field realization.}
\label{fig:MI_PRIOR_SPECTRA}
\end{figure*}
Reducing the variance by introducing binning to the power-spectrum, as described in section \ref{hidden_prior}, will lead to even longer correlation lengths in the Gibbs chain. This simply means that the joint probability distribution will be sampled with a finer resolution in power-spectrum space.
We study this correlation effect by assuming the power-spectrum coefficients \(P_l\) of different modes \(l\) in the Gibbs chain to be independent and estimate their correlation in the chain by calculating the autocorrelation function:
\begin{equation}
\label{eq:CORR_COEFF}
C_l(n) =\left \langle \frac{P^i_l-\left \langle P_l\right \rangle}{\sqrt{\mathrm{Var}\, P_l}} \frac{P^{i+n}_l-\left \langle P_l\right \rangle}{\sqrt{\mathrm{Var}\, P_l}} \right \rangle \, ,
\end{equation}
where \(n\) is the distance in the chain measured in iterations \citep[for a similar discussion in case of the CMB see][]{2004ApJS..155..227E}.
We can then define the correlation length of the Gibbs sampler as the distance in the chain \(n_C\) beyond which the correlation coefficient \(C_l(n)\) has dropped below \(0.1\).
The results for the different modes \(l\) are presented in figure \ref{fig:BIT_CORRCOEFF}.
As one can see, the vast majority of the different Fourier modes have a correlation length of about \(n_C\sim 100\) Gibbs iterations. The remaining modes show increasingly longer correlation lengths towards the highest frequencies contained in the box. For this reason, the Nyquist frequency has the longest correlation length.
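In practice, the correlation coefficients of equation (\ref{eq:CORR_COEFF}) and the correlation length defined above can be estimated from the stored chain as in the following Python sketch (burn-in samples are assumed to have been removed):
\begin{verbatim}
import numpy as np

def autocorrelation(chain_l, n_max):
    # C_l(n) for a single mode l; chain_l has shape (n_samples,).
    x = (chain_l - chain_l.mean()) / chain_l.std()
    n_s = x.size
    return np.array([np.mean(x[: n_s - n] * x[n:]) for n in range(n_max)])

def correlation_length(chain_l, threshold=0.1, n_max=1000):
    # Distance n_C in the chain beyond which C_l(n) drops below the threshold.
    c = autocorrelation(chain_l, n_max)
    below = np.where(c < threshold)[0]
    return int(below[0]) if below.size else n_max
\end{verbatim}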
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture9}}}
\caption{Same as figure \ref{fig:BIT_CORRCOEFF} but for low resolution runs with the inverse gamma prior.}
\label{fig:BIT_CORRCOEFF_INF_PRIOR}
\end{figure*}
Especially for the highest frequencies towards the Nyquist frequency, noise domination is only a partial explanation for the long correlation length. The far more important fact is that the Nyquist frequency can in general not be resolved properly by the data; for example, the Nyquist frequency is contained only once in the Fourier box. For this reason, the variance increases towards the Nyquist frequency. Note that this effect arises from the technical implementation of the fast Fourier transform, which operates on a finite grid, and that we are in principle able to account for these technical effects of the analysis scheme itself. However, the long correlation length at the Nyquist frequency will in general provide only a small number of independent estimates there, and this must be taken into account in further analysis.
It is clear, that this effect shifts to higher frequencies as soon as the resolution of our analysis scheme is increased. This can be observed in the high resolution analysis discussed in the next section \ref{High_RES_SIM}.
Also note that this effect can be cured by introducing an informative prior, which will greatly reduce the correlation length at these frequencies, as shown later in section \ref{Test_Informative_prior}.
To study the marginalized posterior \(P_l\) distributions in more detail we plot their histograms in figure \ref{fig:MODE_STAT_CORR_MAT_MI_PRIOR}. It is worth mentioning that none of them is even approximately Gaussian.
Another crucial point to address is the question of how well we were able to account for the effects of the survey geometry. This information is contained in the correlation structure of the estimates.
Therefore, we can examine this effect by calculating the correlation matrix of the \(P_l\) estimates:
\begin{equation}
\label{eq:CORR_MATRIX}
\mat{C}_{l\,l'} =\left \langle \frac{P_l-\left \langle P_l\right \rangle}{\sqrt{\mathrm{Var}\, P_l}} \frac{P_{l'}-\left \langle P_{l'}\right \rangle}{\sqrt{\mathrm{Var}\, P_{l'}}} \right \rangle \, ,
\end{equation}
where the ensemble averages are taken over \(40,000\) Gibbs samples.
We present the result in figure \ref{fig:MODE_STAT_CORR_MAT_MI_PRIOR}. It can be clearly seen that this correlation matrix has a very well defined diagonal structure, as expected from theory. The highest off-diagonal correlations have been measured to be less than \(20 \%\), and are found at the highest frequencies close to the Nyquist frequency. Figure \ref{fig:MODE_STAT_CORR_MAT_MI_PRIOR} also shows a blue band of anti-correlation around the diagonal. This anti-correlation indicates that the power-spectrum frequency resolution is higher than supported by the data. Since the data is fixed, and the mask couples neighboring Fourier modes, an increase in the power-spectrum amplitude \(P_l\) has to be compensated by a decrease in the neighboring power-spectrum amplitude \(P_{l+1}\) to maintain a good fit to the data. It is therefore possible, in a post-processing step, to reduce the frequency resolution of the estimated power-spectra until the anti-correlation vanishes. This is the idea behind the running median estimator, which will be presented in section \ref{OPERATIONS_GIBBS_SAMPLES}.
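For completeness, the correlation matrix of equation (\ref{eq:CORR_MATRIX}) is simply the sample correlation over the chain; a minimal Python sketch, again assuming the samples are stored as an array of shape (number of samples, number of modes), is:
\begin{verbatim}
import numpy as np

def power_spectrum_correlation_matrix(chain):
    # C_{l l'} estimated over the Gibbs chain; chain has shape
    # (n_samples, n_modes). Equivalent to np.corrcoef(chain.T).
    norm = (chain - chain.mean(axis=0)) / chain.std(axis=0)
    return norm.T @ norm / chain.shape[0]
\end{verbatim}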
However, since the posterior distributions for the \(P_l\) are non-Gaussian, the two point correlations do not contain all information. For this reason, we also demonstrate the marginalized posterior distribution for pairs of \(P_l\)s in figure \ref{fig:MARG_DENS_MI_PRIOR}, where we also show examples of maximally correlated and maximally anti-correlated modes.
Finally, we have plotted the full spectrum computed from our 40,000 sample run in figure \ref{fig:MI_PRIOR_SPECTRA}.
As can be seen, we chose a very high frequency resolution to reduce the correlation length of the Gibbs sampler in the low signal to noise regime. The estimated ensemble mean power-spectrum follows the true underlying power-spectrum, in particular the baryonic features. Towards the large scales, the uncertainty increases, as expected, since due to survey geometry and selection effects these scales are only poorly constrained by the data.
\subsection{High resolution Simulation}
\label{High_RES_SIM}
In the previous section, we performed a low resolution analysis to compute a sufficiently large set of samples to estimate the correlation behavior of our algorithm. However, such a large amount of samples is not necessarily required and computational time may be better invested in performing higher resolution analysis. Therefore, in this section we describe the results obtained from such a high resolution analysis.
The setup for this test is the same as described in section \ref{SET_UP_MOCK_OBS}, with the exception that here we use \(256^3\) voxels on an equidistant grid.
The main limitation for this test is CPU time, as a single sampling step takes about one hour.
For this reason, extremely long chains are not feasible. Hence, we run many, rather short, parallel Gibbs chains, as described in section \ref{Parallelization}. In this fashion we obtain an optimal speed up for this parallelization scheme.
The Gibbs sampler was run over 24 independently initialized chains and provided a total of \(4230\) samples.
We present the power-spectra obtained from the multiple-chain analysis in figure \ref{fig:MI_PRIOR_SPECTRA}. As can be seen, there is overall good agreement between the realization specific power-spectrum and the ensemble mean estimate found by the Gibbs sampler. We also do not observe a detectable bias in any part of the spectrum.
Towards the Nyquist frequency the uncertainty increases. This is expected, as towards the edges of the box the number of modes decreases and the variance increases. Note that in this fashion the method takes into account the uncertainty introduced by the analysis scheme itself, for example the Fourier space discretization of the FFT.
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture10}}}
\caption{The left panel shows four 2D marginalized posterior densities, which correspond to the maximally correlated and anti-correlated cases of the previous run. The right panel shows the power-spectrum covariance matrix estimated from the 40,000 Gibbs samples calculated with the informative prior.}
\label{fig:MARG_DENS_CORR_MAT_INF_PRIOR}
\end{figure*}
\subsection{Testing an informative Prior}
\label{Test_Informative_prior}
When trying to analyze real galaxy surveys, one is faced with the situation that due to sky cuts and selection effects, usually less than \(30\%\) of the volume is available for analysis. This affects mainly the estimate of the largest scales in the survey, as they are sparsely sampled, and therefore poorly constrained by the data.
In this situation the Jeffrey's prior, as a maximal ignorance prior, allows for large excursions from the expected true underlying power-spectrum. Since the Jeffrey's prior provides equal probability for all scales between zero and plus infinity, it also allows for power-spectrum values, which can be excluded on theoretical grounds, or by complementary experiments, which are more sensitive at the largest scales, such as CMB experiments.
From a Bayesian point of view, one could argue, that in the presence of a priori knowledge, a maximal ignorance prior is not the optimal choice.
Rather than sampling the entire space of possible power-spectrum coefficients \(P_m\) with equal probability, it would be beneficial to preferably sample the region in which we expect the true power-spectrum to exist and allowing for larger excursions with smaller probability.
This would have the effect that the region of interest is sampled more densely, thereby allowing for better power-spectrum estimates with the same number of Gibbs samples. Also remember that, according to the discussion in section \ref{Blackwell-Rao_estimator}, the prior can be changed for any final post-processing analysis.
As an informative prior can lead to a more efficient sampling strategy in the presence of a priori knowledge, in the following we test \textsc{ARES} when employing the inverse gamma prior as described in section \ref{Informative_Prior}.
We base the inverse gamma prior on a flat power-spectrum guess, which was calculated according to \citet{1998ApJ...496..605E} and \citet{1999ApJ...511....5E}, without the baryonic wiggles. We explicitly do not incorporate the information on the baryonic features, to demonstrate that the data alone drives their estimate. Further, we choose \(n_m^{Prior}=5\), in order to make the prior sufficiently broad, while at the same time ensuring that it has finite variance.
With this prior choice we repeat the standard testing procedure as described in section \ref{SET_UP_MOCK_OBS}.
At first we test the initial burn-in time, by starting with a power-spectrum which is a factor \(10\) higher in amplitude than the underlying true power-spectrum.
The results for the corresponding \(\xi_l^i\), described in equation (\ref{eq:BURN_IN}), are presented in figure \ref{fig:BIT_CORRCOEFF_INF_PRIOR}. It can be seen that the burn-in time is much shorter for the large scale modes, which are poorly constrained by the data. Also note that the overall burn-in times for the power-spectra are comparable to those of the maximal ignorance prior case (see figure \ref{fig:BIT_CORRCOEFF}), indicating that these modes are not influenced by the informative inverse gamma prior.
The real advantage of the informative prior becomes obvious when analyzing the correlation length of the individual power-spectrum coefficients \(P_m\) in the Gibbs chain. Again we used a low resolution \(64^3\) voxel simulation in order to estimate the correlation coefficients, as given by equation (\ref{eq:CORR_COEFF}). The results for this test are presented in figure \ref{fig:BIT_CORRCOEFF_INF_PRIOR}.
In comparison to figure \ref{fig:BIT_CORRCOEFF} it is clear that the informative prior has a positive influence on the correlation lengths, which in this test are at most on the order of a hundred Gibbs iterations.
As discussed previously, the long correlation lengths at the highest frequencies are mainly of a technical nature, as the Nyquist frequency cannot be properly represented in the finite Fourier box required for the FFT. This fact introduces artificial variance, which, however, our method can take into account. The informative prior helps in this situation, as it stabilizes these artificial fluctuations with prior information.
In addition, we observed a much better numerical behavior of our method when employing the informative prior, as the code does not run as frequently into numerically extreme regimes as with the maximum ignorance prior. This leads to a faster convergence of the conjugate gradient algorithm towards the desired accuracy.
Further, we also observe a better correction of survey geometry effects. The correlation matrix of the \(P_l\) is plotted in figure \ref{fig:MARG_DENS_CORR_MAT_INF_PRIOR}. The maximal correlation between different \(P_l\) in this test was less than \(10\%\), which is a clear improvement.
For comparison we also plot the 2D marginalized posterior densities for the maximally correlated and anti-correlated modes of the maximum ignorance case.
Finally, we present the low and high resolution power-spectra for the informative prior in figure \ref{fig:INF_PRIOR_SPECTRA}. Note that our prior did not contain any information on the baryonic oscillations. As can be clearly seen, the baryonic features have been recovered nicely, demonstrating that our informative prior provided much less information than is contained in the data.
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture11}}}
\caption{Same as figure \ref{fig:MI_PRIOR_SPECTRA} but for the inverse gamma informative prior.}
\label{fig:INF_PRIOR_SPECTRA}
\end{figure*}
\subsection{Testing with galaxy mock catalogs}
In this section, we describe the application of \textsc{ARES} to a mock galaxy survey based on the Millennium run \citep{2007MNRAS.375....2D}.
The intention of this exercise is two-fold. First, we want to test \textsc{ARES} in a more realistic setup, where the intrinsic shot noise of the galaxy distribution is correlated with the underlying signal, which could not be tested with the Gaussian tests before. Second, we want to demonstrate that \textsc{ARES} is able to reconstruct the fully evolved non-linear matter distribution of the N-body simulation.
This mock galaxy survey consists of a set of comoving galaxy coordinates distributed in a \(500\) Mpc box. To obtain a realistic sky observation from this full sky galaxy sample, we virtually observe these galaxies through the sky mask and according to the selection function presented in figure \ref{fig:TEST_SEL_WIN}. The discrete galaxy distribution resulting from this mock observation is then sampled to a \(128^3\) equidistant grid.
To reduce gridding artifacts, such as aliasing power, we employ a supersampling technique as proposed in \citet{COSMO_DSP}. This allows us to accurately treat the mode coupling, and will yield a precision estimate of the power-spectra up to the highest frequencies contained in the box.
Similarly to the method described in section \ref{High_RES_SIM}, here we run \(4\) independently initialized chains. Further, we employ the maximum ignorance Jeffrey's prior.
The galaxy distribution of this mock galaxy catalog follows the fully evolved non-linear matter distribution.
Nevertheless, we initialize the Gibbs sampler with the linear power-spectrum. Then the initial burn-in period, of about \(50\) samples, is required to reach the non-linear power-spectrum. The systematic drift towards the correct power-spectrum is represented in figure \ref{fig:DELUCIA_SPEC}. This experiment nicely demonstrates that the Gibbs sampling approach is able to recover the non-linearities of the fully evolved matter density field.
At this point it is important to note that the Wiener filter is a linear operation on the data, and as such leaves the statistics of the data intact. This has been demonstrated by \cite{KITAURA2009}, where it is shown that the statistics of the reconstructed density field are consistent with a log-normal distribution, as expected for a non-linearly evolved matter distribution. This discussion clarifies that the Wiener filter, or the Gibbs sampling approach as presented in this work, is very well able to capture the non-Gaussian characteristics of the density field.
However, in case one would like to perform a higher resolution analysis, it would be advantageous to initialize the Gibbs chain with a nonlinear power-spectrum guess, to yield even shorter burn-in times.
The ensemble averaged power-spectrum obtained from this run, together with the one and two sigma confidence regions, is also presented in figure
\ref{fig:DELUCIA_SPEC}. Here it can clearly be seen that the recovered power-spectrum is consistent with the fully non-linearly evolved matter field. Towards the larger scales the uncertainty increases, which is due to the imposed survey geometry.
So far, in all tests, we have focused only on the recovery of the power-spectrum, and ignored the samples of reconstructed density fields.
Since the Gibbs sampler also provides samples from the matter density field posterior, we are able to calculate any desired statistical summary for the matter field reconstructions. The ability to provide uncertainty estimates for the recovered density fields will in general be valuable for further science based on the matter field estimates.
For this reason, in figure \ref{fig:DELUCIA_VARIANCE}, we present the estimated mean and variance maps obtained from the \(4000\) Gibbs samples.
As can be seen, the variance map clearly captures the features of the survey geometry and selection effects.
With the set of Gibbs samples being available, all joint uncertainties can easily be propagated to the finally estimated quantities, such as the gravitational potential or large scale cosmic flows, by applying the according operation to the individual matter field samples.
The result of such a procedure then again yields a probability distribution in the final quantity, enabling us to provide uncertainty information for these quantities.
\section{Operations on the set of Gibbs samples}
\label{OPERATIONS_GIBBS_SAMPLES}
In this section, we present an example application operating on the set of Gibbs samples.
The outcome of our Bayesian method is not a single estimate but a Gibbs sample representation of the full posterior probability distribution for the power-spectrum coefficients. We are therefore able to propagate all uncertainties to any final result, simply by applying a post processing step to all Gibbs samples. The result of such an operation would again yield a probability distribution of the estimated final quantity.
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture12}}}
\caption{The left panel shows successive power-spectrum samples during the burn-in period together with the initial guess (black line). The right panel shows the estimated power-spectrum of the Gibbs sampling analysis. The blue line denotes the initial power-spectrum guess, the black curve is the ensemble mean power-spectrum, and the light gray and dark gray shaded regions represent the one sigma and two sigma confidence regions respectively.}
\label{fig:DELUCIA_SPEC}
\end{figure*}
As a simple demonstration, we apply a running median filter to the set of power-spectra, which will reduce the spectral resolution.
It is known that the median is a better estimator of the typical value of a sample than the mean when there are large extraneous outliers in the sample \citep{STUART1994}. For this reason, we choose the median to estimate the mode power in a given frequency bin \(\Delta k_m\). Such a bin can be chosen large enough to smooth out any fluctuation below a certain scale. In our specific case we vary the bin width \(\Delta k_m\) with frequency, to allow for a logarithmic binning.
The median \(P^{\nu}_m\) of a set of power-spectrum amplitudes \(\{P_m\}\) contained within the frequency bin of width \(\Delta k_m\) around the mode \(k_m\) then satisfies the inequalities
\begin{equation}
\label{eq:MEDIAN_a}
\mathcal{P}\left(P_m\le P^{\nu}_m\right)\ge \frac{1}{2}
\end{equation}
and
\begin{equation}
\label{eq:MEDIAN_b}
\mathcal{P}\left(P_m\ge P^{\nu}_m\right)\ge \frac{1}{2} \, .
\end{equation}
The running median is then evaluated for every frequency in the power-spectrum sample.
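A possible implementation of this running median with logarithmic binning is sketched below in Python; the bin width and the exact binning scheme used for the figures may differ and are assumptions of this illustration:
\begin{verbatim}
import numpy as np

def running_median(k_modes, spectra, bins_per_decade=10):
    # Median mode power in logarithmic bins Delta k_m.
    # spectra has shape (n_samples, n_modes); k_modes has shape (n_modes,).
    log_k = np.log10(k_modes)
    width = 1.0 / bins_per_decade
    edges = np.arange(log_k.min(), log_k.max() + width, width)
    k_med, p_med = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (log_k >= lo) & (log_k < hi)
        if not np.any(in_bin):
            continue
        k_med.append(10.0 ** (0.5 * (lo + hi)))
        # median over the modes in the bin, evaluated per Gibbs sample
        p_med.append(np.median(spectra[:, in_bin], axis=1))
    # returned array has shape (n_samples, n_bins); mean and percentiles over
    # the sample axis then give the running median estimate and its confidence
    # regions
    return np.array(k_med), np.array(p_med).T
\end{verbatim}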
We apply the running median to the set of power-spectrum samples obtained from the two Gaussian mock cases with the Jeffrey's and the inverse gamma prior. In doing so, we are able to calculate the mean and corresponding uncertainty regions for the running median estimates. Since the reduction of frequency resolution also decreases the number of free parameters, the total variance decreases as well; this effect has already been discussed in section \ref{hidden_prior}.
The results of this experiment are demonstrated in figure \ref{fig:RUNNING_MEDIAN}. As one can easily see, the running median estimates are much smoother than the corresponding Gibbs estimates. Also, the reduction of frequency resolution by the running median estimator yields smaller confidence regions.
Finally we are interested in the recovery of the baryonic features in the power-spectrum. We therefore employ the common procedure of dividing the measured power-spectrum by one without baryonic wiggles \(P^{nowiggles}_m\). We then obtain the wiggle function as:
\begin{equation}
\label{eq:wiggle_function}
f^{wiggle}_m = \frac{P_m}{P^{nowiggles}_m} \, .
\end{equation}
Calculating the wiggle function for all Gibbs samples and applying the running median estimator to the set of wiggle functions will yield the distribution of wiggle functions. We then estimate the mean and the according one and two sigma confidence regions. The result of this calculation is presented in figure \ref{fig:RUNNING_MEDIAN} for the two Gaussian test cases. As expected, the variance towards the largest scales increases. Nevertheless, figure \ref{fig:RUNNING_MEDIAN} clearly demonstrates that the baryonic features have been recovered precisely by the Gibbs sampling approach.
This example nicely demonstrates that the uncertainty estimation can easily be transported to any final quantity estimated from the set of Gibbs samples.
\section{Conclusion}
\label{Conclusion}
In this work, we presented \textsc{ARES}, a new Bayesian computer algorithm, designed to extract the full information on the two point statistics from any given probe of the three dimensional large scale structure.
The scientific aim of this algorithm is to provide an estimate of the power-spectrum posterior \(\mathcal{P}\left(\{P(k_i)\}|\{d_i\}\right)\), conditional on a data set, while accounting and correcting for all possible sources of uncertainties, such as survey geometry, selection effects and biases.
This is achieved by exploring the power-spectrum posterior \(\mathcal{P}\left(\{P(k_i)\}|\{d_i\}\right)\) via a Gibbs sampling approach.
While direct sampling from the power-spectrum posterior is not possible, it is possible to draw samples from the full joint posterior distribution \(\mathcal{P}\left(\{P(k_i)\},\{s_i\}|\{d_i\}\right)\) of the power-spectrum coefficients \(P(k_i)\) and the three dimensional matter density contrast amplitudes \(s_i\) conditional on a given set of data points \(\{d_i\}\).
The entire Gibbs sampling algorithm therefore consists of two basic sampling steps, in which first a full three dimensional Wiener reconstruction algorithm is applied to the data and then a power-spectrum is drawn from the inverse gamma distribution.
In this fashion we obtain a set of power-spectrum and signal amplitude samples, which provide a representation of the full joint posterior distribution \(\mathcal{P}\left(\{P(k_i)\},\{s_i\}|\{d_i\}\right)\).
The scientific output of this Bayesian method therefore is not a single estimate but a complete probability distribution, enabling us to calculate any desired statistical summary such as the mean, mode or variance.
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture13}}}
\caption{Volume rendering of the ensemble variance (upper panels) and the ensemble mean (lower panels) obtained from the mock galaxy catalog analysis.}
\label{fig:DELUCIA_VARIANCE}
\end{figure*}
\begin{figure*}
\centering{\resizebox{1.\hsize}{!}{\includegraphics{./figures/picture14}}}
\caption{Running median estimates of the power-spectra (upper panels) and the according wiggle functions (lower panels) for the set of Gibbs samples with the Jeffrey's prior (left panels), and the inverse gamma prior (right panels). The black lines represent the ensemble mean of the sample set and the light gray and dark gray shaded regions denote the one and two sigma confidence regions respectively. Additionally we show the according input power-spectra. The blue line shows the cosmological power-spectrum from which the matter field realization was drawn, and the red line is the power-spectrum of this specific matter field realization.}
\label{fig:RUNNING_MEDIAN}
\end{figure*}
We also demonstrated, that given a set of Gibbs samples, it is possible to provide an analytic approximation to the power-spectrum posterior \(\mathcal{P}\left(\{P(k_i)\}|\{d_i\}\right)\). This Blackwell-Rao estimator has an analytically appealing form enabling us to calculate any desired moment of the probability distribution in a simple analytic way.
In addition, since the full joint probability distribution is available, it is easy to carefully propagate all uncertainties through to the result of further post-processing analysis steps, such as parameter estimation.
In this work, we focused on thoroughly testing the performance and behavior of our method by applying it to simulated data with controlled properties.
These tests were designed to highlight the problems introduced by survey geometry and selection effects, for the two cases of Gaussian random fields and a mock galaxy catalog based on the Millennium run.
One of the main goals of these tests was to build up intuition on the phenomenological behavior of the Gibbs sampling algorithm, in particular estimating issues such as the correlation length of the Gibbs chain, burn-in and convergence times. The result of these tests is of special relevance, as it shows how long the Gibbs sampling chain has to run in order to produce a sufficient amount of independent samples.
Through these experiments we found that the longest correlation lengths are dominated by the poorly constrained Nyquist modes of the box, which can be easily alleviated by imposing some prior knowledge on these modes. In doing so we found that the maximal correlation length for the Gibbs chain was on the order of a hundred Gibbs samples. Thus, creating a large number of independent samples in a full scale data analysis is numerically very well feasible.
However, the most important result of these tests is, that our method is able to correct for artificial mode coupling due to the survey geometry and selection effects. This was tested by examining the correlation structure of the Gibbs samples, which showed that the maximal residual correlation can be reduced to less than \(10 \%\), demonstrating that this method correctly accounts for geometry effects.
The application of \textsc{ARES} to a galaxy mock catalog, based on the Millennium run, demonstrated the ability of our method to capture the characteristics of the fully nonlinearly evolved matter field. This is owed to the fact that the Wiener filter is a linear operation on the data, and as such does not destroy the intrinsic statistical characteristics of the data set.
Nevertheless, a full real data analysis of existing redshift surveys requires the treatment of additional systematic effects such as scale or luminosity dependent biases or redshift space distortion corrections, which we defer to future works.
However, the Bayesian framework, as presented here, can take all these effects naturally into account and treat them in a statistically fully consistent way.
Beside the possibility to include various kinds of uncertainties, the Gibbs sampling approach also allows for a natural joint analysis of different data sets, taking into account the systematics of each individual data set.
In summary, we showed that \textsc{ARES} is a highly flexible and adaptive machinery for large scale structure analysis, which is able to account for a large variety of systematic effects and uncertainties. For this reason, \textsc{ARES} has the potential to contribute to the era of precision cosmology.
\section*{Acknowledgments}
We are grateful to Rainer Moll and Bj\"{o}rn Malte Sch\"{a}fer for useful discussions and support with many valuable numerical gadgets. We also thank J\'{e}r\'{e}my Blaizot for several enlightening explanations and hints on the nature of galaxy distributions and the systematics of redshift surveys.
Further, we thank the ``Transregional Collaborative Research Centre TRR 33 - The Dark Universe'' and the Marie Curie FP7 fellowship for the support of this work.
\include{biblio}
\section{Introduction}
Reinforcement Learning (RL) has been proven to be successful at learning complex behaviors to solve a variety of robotic contact-rich tasks \cite{sutton2018reinforcement,kober2013reinforcement,wang2020deep}.
However, RL solutions are still not widely adopted in real-world industrial tasks. One reason is that RL still requires an expensive and large amount of robot interaction with its environment to learn a successful policy. The more complex the target task is, the more interaction (samples) is required.
To tackle this problem, domain transfer methods such as Domain Randomization (DR) and Curriculum Learning (CL) have been introduced.
The concept of CL, where the learning process can be made more efficient by following a curriculum that defines an order in which tasks should be learned, has been introduced in previous works \cite{bengio2009curriculum, krueger2009flexible}. Additionally, DR of visual and physical properties of a task has been shown to improve the performance of tasks in novel domains \cite{tobin2017domain, andrychowicz2020learning}. However, most of these results have been only validated in simulated environments, from video games to robotic toy tasks, or in real-world toy environments. In this work, we tackle the problem of improving sample efficiency and performance when learning real-world complex industrial assembly tasks with rigid position-controlled robots. To this end, we seek to answer the question: Does the order in which the different environments (or tasks) are presented to the agent (through DR) affect the training sample efficiency and performance of the learned policy?
We hypothesize that on top of DR, guiding a RL agent's training with a curriculum (presenting tasks in increasing order of difficulty) towards the desired behavior can increase sample efficiency. The reasoning is that the curriculum helps reduce the overall exploration needed to achieve the desired goal while DR enhances domain transferability.
This paper presents a study of the combination of CL with DR. More specifically, we compare different curricula designs and different approaches at sampling values for DR. As a result, we propose a novel method that significantly outperforms our previous work \cite{beltran2020variable}. In \cite{beltran2020variable}, only DR is used to improve sim2real transferability without CL. Experimental results in simulation and real-world environments show that our novel method can be trained with only a fifth of the training samples required by our previous method and still successfully learn to solve the target insertion tasks. Furthermore, the learned policies transferred to the real world achieved high success rates (up to 86\%) on industrial level insertion tasks, with tolerances of $\pm 0.01~mm$, not seen during the training.
This work's contributions are as follows:
\begin{itemize}
\item A study of the application of Curriculum Learning to a learning framework for rigid robots solving contact-rich manipulation tasks.
\item A novel learning framework combining curriculum learning with domain randomization to accelerate learning and domain transfer.
\item An improved reward function to guide the learning of force sensor-based contact-rich manipulation tasks. The reward perceived by the agent is dynamically discounted by the curriculum's level of difficulty. For our target domain, the idea is to encourage the agent to learn to solve the hardest tasks as discussed in \Cref{subsubsec:dynamic-reward}.
\item An empirical study of the different methods considered in this paper was conducted. Novel tasks not seen during training were used to validate the performance of each method, both in simulation and in the real world, including complex industrial insertion tasks.
Additionally, we study the impact of different components of our proposed method in the Appendix.
\end{itemize}
Additionally, this work's source-code\footnote{At https://github.com/cambel/robot-learning-cl-dr} will soon be open to the research community.
The rest of this paper is organized as follows, related work is discussed in \Cref{sec:related-work}. The case study for this work and the proposed method are explained in \Cref{sec:materials-methods}. Experimental results and comparisons with alternative methods and our previous work are shown in \Cref{sec:experiments-results}. Ablation studies are described in the Appendix.
\section{Related Work} \label{sec:related-work}
The related work to the topic of this paper, accelerating robot learning of contact-rich manipulation tasks, is introduced and discussed in this section.
\subsection{Reinforcement Learning for contact-rich manipulation tasks}
Plenty of methods have been proposed to accelerate automation of robotic assembly tasks, such as peg-in-hole tasks. From search strategies to align the peg with the hole \cite{park2013intuitive,chhatpar2001search}, to learning-based methods \cite{luo2021learning, yasutomi2021peg}. Some researchers have explicitly focused on the domain transfer of assembly tasks from simulation to real-world environments. In \cite{schoettler2020meta}, a meta-RL technique is applied to transfer experiences and generalize better to the real world. In \cite{kaspar2020sim2real}, system identification of the real robot (KUKA LBR iiwa) with its simulated counterpart is performed to improve sim2real transferability. While RL-based policies have been proposed and proven to have the potential to solve assembly tasks in the real world, there is still a lack of adoption of such methods in real industrial assembly tasks. One reason for this gap between research and industry is the sample efficiency of such learning methods; a large amount of interaction of the agent with its environment is still necessary to learn robust policies. We aim to contribute to this area by proposing a more sample-efficient approach based on CL and DR, i.e., less time is required to train a successful policy without decreasing its transferability.
\subsection{Domain Randomization}
In the context of machine learning, DR has been proposed as a technique to improve domain transfer, such as going from one task to a harder one or moving from a simulated environment to a real-world environment, in particular for training vision-based models \cite{tobin2017domain} or sim2real models \cite{andrychowicz2020learning}. In \cite{mehta2020adr} an empirical study is presented to examine the effects of DR on agent generalization. Their results show that DR may lead to suboptimal, high-variance policies, which \cite{mehta2020adr} attributes to the uniform sampling of environment parameters. Following those results, this study proposes the use of DR based on CL and empirically studies the effect of the DR's sampling strategy.
\subsection{Curriculum Learning}
The concept of curriculum learning in the context of machine learning was first proposed by Bengio et al. \cite{bengio2009curriculum}. As mentioned before, CL can be understood as learning from easier to harder tasks, i.e., the order in which information is presented affects the policy's ability to learn. A comprehensive survey on Curriculum Learning applied to Reinforcement Learning has been presented in \cite{narvekar2020curriculum}.
Most CL approaches have been validated mainly on simulated environments, such as toy examples (e.g., grid worlds, cart-pole, and low-dimensional environments), video games, and simulated robotic environments. Few research works have focused on real-world robotic environments, such as \cite{asada1996purposive}, where a robot is trained to shoot a ball into a goal, \cite{baranes2013active, luo2020accelerating}, where reaching tasks with a robot arm are tackled, and \cite{riedmiller2018learning}, which focused on two tasks: moving a cube to a target pose and cube stacking. Most recently, \cite{leyendecker2021deep} presented a CL method focused on a specific automotive production task, trained in simulation and transferred to its real-world equivalent. Similarly, \cite{yasutomi2022curriculum} proposes a method to enable a robot to conduct anchor-bolt insertion, a peg-in-hole task for holes in concrete.
On the other hand, our study focuses on tackling various real-world complex industrial assembly insertion tasks, trained only on toy peg-in-hole simulated environments. Furthermore, we make an exhaustive study on the performance of several approaches to combine DR and CL. As a result, we propose a method to accelerate learning and domain transfer to real-world environments by an adaptive curriculum that affects how DR and the reward signal are considered during training.
\section{Materials and Methods}\label{sec:materials-methods}
\subsection{Problem Statement}\label{sec:problem-statement}
In the present study, we consider the peg-in-hole assembly task that requires the mating of two components. One of the components is grasped and manipulated by the robot manipulator, while the second component has a fixed position via fixtures to a support surface. The proposed method is designed for position-controlled robot manipulators with access to force/torque information at the robot's end-effector (e.g., F/T sensor at the robot wrist), especially those robots where low-level torque control is not available. Thus, sensor-based force control is necessary to realize contact-rich manipulation tasks for such a type of robot.
\subsection{System Overview}\label{subsec:system-overview}
Our proposed method aims to improve the sample efficiency of the training phase. \Cref{fig:system-overview} shows the overall system architecture, which is based on our previous work \cite{beltran2020variable}. There are two control loops. The inner loop has an adaptive compliance controller; we choose to use a parallel position-force controller that was proven to work well for this kind of contact-rich manipulation task \cite{beltranhern2020learning}. The inner loop runs at a control frequency of 500 Hz, which is the maximum available in the Universal Robots e-series robotic arms\footnote{Robot details at https://www.universal-robots.com/e-series/}. The outer loop runs at a lower control frequency to account for the computation time required by the learning algorithm. Our system considers control of the 6 degrees of freedom of the Cartesian space at the robot's end-effector (position and orientation). To this previous learning control framework, we added the PID gains scheduling approach discussed in \Cref{subsubsec:pid-gains-scheduling}. Additionally, a new dense reward function is proposed and described in \Cref{subsubsec:cost-function}. Similarly, a DR method based on CL is implemented on top of this learning control framework, see \Cref{subsubsec:domain-randomization} and \Cref{subsubsec:curriculum-learning}.
\subsection{Reinforcement Learning}\label{subsec:rl-algo}
The reinforcement learning framework followed in this work is modelled as an episodic Markov Decision Process (MDP), $M$, with a limit of $T$ time steps per episode. For a given task, the MDP consists of a state $\textbf{s} \in \mathscr{S}$, an action space $\textbf{a} \in A$, a state transition function $p(\textbf{s}(t+1) \,|\, \textbf{s}(t), \textbf{a}(t))$, which is the probability of transitioning to state $\textbf{s}(t+1)$ given that action $\textbf{a}(t)$ is taken in state $\textbf{s}(t)$, and a reward function $r(s, a)$, which provides an immediate numerical reward for being in state $s$ and taking the action $a$. The goal of RL is to find a policy $\pi^*$ that maximizes the expected sum of discounted future rewards, given by $R(t)=\sum_{i=t}^{T} \gamma^{i-t} r(s(i),a(i))$, where $\gamma$ is the discount factor \cite{sutton2018reinforcement}.
We used a soft actor-critic (SAC) approach to learning the policy. SAC \cite{Haarnoja2018SoftAO} is an off-policy actor-critic deep RL algorithm based on maximal entropy. SAC aims to maximize the expected reward while also optimizing maximal entropy. The SAC agent optimizes a maximal-entropy objective, encouraging exploration according to a temperature parameter $\alpha$. The core idea of this method is to succeed at the task while acting as randomly as possible. Since SAC is an off-policy algorithm, it uses a replay buffer to reuse information from recent rollouts for sample-efficient training. Additionally, we used the distributed prioritized experience replay approach for further improvement \cite{horgan2018distributed}. Our implementation of the SAC algorithm was based on the TF2RL repository\footnote{TF2RL: Deep-reinforcement-learning library using TensorFlow 2.0. https://github.com/keiohta/tf2rl}.
The agent's policy maps a multi-modal state, the robot's proprioception and the force-torque sensor data, to the robot's actions, detailed in \Cref{subsec:parallel-control}, using a Temporal Convolutional Network (TCN); this policy representation and its improved performance over a simple neural-network policy were introduced in our previous work \cite{beltran2020variable}.
\subsubsection{Reward function}\label{subsubsec:cost-function}
On one hand, in our previous work \cite{beltran2020variable}, the reward function is defined only in terms of the contact force and distance between the current position of the robot's end-effector and the target position. The reason is to encourage the agent to get closer to the target position; the faster, the better, while discouraging any contact force. On the other hand, in this work, we propose the inclusion of the velocity of the robot's end-effector in the reward signal. While it is ideal that the agent achieves the task as fast as possible, high speeds are not desirable when the robot is close to the environment or in contact with it, as it can generate large contact forces. Thus, the proposed reward function has the following shape:
\begin{equation}
\begin{aligned}
{r}(\textbf{s},\textbf{a}) = w_{1}r_{xv} + w_{2}r_F + w_3\rho
\end{aligned}
\label{eq:reward-function}
\end{equation}
where $r_{xv}$ is the component of the reward associated with the position and velocity of the robot's end-effector. $r_{xv}$ aims to encourage the agent to get close to, and stay close to, the target position. In addition, the agent is encouraged to move faster if it is far from the target pose, and slower when it is close to the target pose. $r_{xv}$ is defined as:
\begin{equation}
\begin{aligned}
r_{xv} = (1 - \tanh(5|x|))(1-|\dot{x}|) + (|\dot{x}|/2)^2
\end{aligned}
\label{eq:reward-position-velocity}
\end{equation}
Where $x$ is the distance between the robot's end-effector and the target position, and $\dot{x}$ is its velocity. A visualization of this reward component is shown in \Cref{fig:reward_xv}.
The component of the reward \cref{eq:reward-function} associated with the contact force is defined as:
\begin{equation}
\begin{aligned}
r_F = -1/(1+e^{-15|F_g-F_{ext}|+5})
\end{aligned}
\label{eq:reward-force}
\end{equation}
where $F_g$ is the desired insertion force and $F_{ext}$ is the contact force. The reward is always negative, acting as a penalty that encourages minimal contact force. However, due to the nature of the task, contact with the environment is unavoidable, so an S-shaped function is proposed to allow small contact forces while strongly discouraging large ones. A visualization of this reward component is shown in \Cref{fig:reward_f}.
The position, velocity and contact force are normalized by the maximum value allowed for each one. Finally, $\rho$ is defined as follows:
\begin{equation}
\rho = \left\{\begin{matrix}
500, & \textrm{Task completed}\\
-200, & \textrm{Collision} \\
-1, & \textrm{Otherwise}
\end{matrix}\right.
\label{eq:safety-reward}
\end{equation}
The task is considered completed if the Euclidean distance between the robot's end-effector and the goal position is less than 1~mm. The agent is encouraged to complete the task as quickly as possible by discounting the reward at every time step. Similar to our previous work \cite{beltranhern2020learning}, we imposed a collision constraint: the agent is penalized for colliding with the environment with a large negative reward and the episode is terminated immediately. A \textit{collision} is defined as exceeding the force limit $F_{max}$, a hyper-parameter set to 50~N in simulation and 30~N on the real robot. Lastly, each component is weighted by $w$; all $w$s are hyperparameters. The performance of the new reward signal versus the reward signal proposed in our previous work \cite{beltranhern2020learning} is shown in \Cref{apx:reward-types}.
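As an illustration, the full reward of \Cref{eq:reward-function}--\ref{eq:safety-reward} can be computed as in the following Python sketch; the variable names and weight values are illustrative assumptions, the inputs are assumed to be normalized as described above, and the parenthesization of $r_{xv}$ follows \Cref{eq:reward-position-velocity}.
\begin{verbatim}
import numpy as np

def reward(x, x_dot, f_ext, f_goal, task_done, collided,
           w=(1.0, 1.0, 1.0)):
    """Sketch of the proposed reward; inputs are normalized to [0, 1]."""
    # Position-velocity term: approach the goal, move slowly near it.
    r_xv = (1.0 - np.tanh(5.0 * abs(x))) * (1.0 - abs(x_dot)) \
           + (abs(x_dot) / 2.0) ** 2
    # Contact-force term: S-shaped penalty that tolerates small forces.
    r_f = -1.0 / (1.0 + np.exp(-15.0 * abs(f_goal - f_ext) + 5.0))
    # Sparse term: completion bonus, collision penalty, per-step penalty.
    rho = 500.0 if task_done else (-200.0 if collided else -1.0)
    return w[0] * r_xv + w[1] * r_f + w[2] * rho
\end{verbatim}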
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{figures/reward_r_xv.png}
\caption{Visualization of the position-velocity-based component of the reward function. $r_{xv}$ in \Cref{eq:reward-position-velocity}}
\label{fig:reward_xv}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{figures/reward_r_f.png}
\caption{Visualization of the contact-force-based component of the reward function. $r_F$ in \Cref{eq:reward-force}}
\label{fig:reward_f}
\end{figure}
\subsection{Compliance Control in Task Space}\label{subsec:parallel-control}
The agent's action space is based on our previous work \cite{beltranhern2020learning}, which consists of learning the force-control parameters of a traditional sensor-based force controller; more specifically, we learn the parameters of a parallel position-force controller.
The parallel position-force controller requires the fine-tuning of three sets of parameters: the gains of a PID controller for position tracking, the gains of a PI controller for force tracking, and a selection matrix that defines the degree of control between force and position. The controller follows the control law shown in \cref{eq:force-law}:
\begin{equation}\label{eq:force-law}
\begin{aligned}
\textbf{x}_c ~= S (K_p^x\textbf{x}_e &+ K_d^x\Dot{\textbf{x}_e} + \textbf{a}_x) + (I-S)(K_p^fF_{e}+K_i^f\int F_{e} dt),
\end{aligned}
\end{equation}
where $F_{e} = F_g - F_{ext}$ (goal contact force minus sensed contact force), $\textbf{x}_{e} = \textbf{x}_g - \textbf{x}$ (goal pose minus current pose), $\textbf{a}_x$ represents an arbitrary translation/rotation given by the agent, and $\textbf{x}_c$ is the commanded pose sent to the robot. The selection matrix is \[S = diag(s_1, ..., s_6),\quad s_j \in [0,1].\]
The parameters to be learned are $K_p^x$, $K_p^f$, $S$, and $\textbf{a}_x$, one for each of the 6 Cartesian degrees of freedom. The remaining parameters, $K_d^x$ and $K_i^f$, are defined proportionally to $K_p^x$ and $K_p^f$, respectively. Therefore, the action space consists of 24 parameters. Each parameter is bounded to a continuous range of valid values. More details are provided in \cite{beltran2020variable}.
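For illustration, one update of the parallel position-force law in \cref{eq:force-law} could look as follows in Python; the per-axis gain vectors and the integrator handling are assumptions of this sketch.
\begin{verbatim}
import numpy as np

def parallel_control_step(x_goal, x, x_err_prev, f_goal, f_ext,
                          f_err_int, dt, Kp_x, Kd_x, Kp_f, Ki_f, s, a_x):
    """One step of the parallel control law; all vectors are 6-D."""
    x_err = x_goal - x                      # pose error
    x_err_dot = (x_err - x_err_prev) / dt   # pose error derivative
    f_err = f_goal - f_ext                  # force error
    f_err_int = f_err_int + f_err * dt      # force error integral
    S = np.diag(s)                          # selection matrix, s_j in [0, 1]
    x_cmd = S @ (Kp_x * x_err + Kd_x * x_err_dot + a_x) \
            + (np.eye(6) - S) @ (Kp_f * f_err + Ki_f * f_err_int)
    return x_cmd, f_err_int, x_err
\end{verbatim}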
\subsubsection{PID Gains Scheduling} \label{subsubsec:pid-gains-scheduling}
An additional concept is explored in this paper: PID gains scheduling \cite{vesely2013gain}. Parallel force control for sub-millimeter-tolerance insertion tasks tends to get stuck in the region very close to the alignment of the peg with the hole. In the presence of very small position errors, the PID position controller barely generates any signal, while the PI force controller overcomes it due to small contact forces or noise from the F/T sensor. As a result, on insertion tasks with sub-millimeter tolerances, the force controller does not move towards the target pose when facing even small resistance from contact with the environment. To address this issue, we introduce PID gains scheduling: once $\textbf{x}_e$ has been reduced to a position error of less than 1~cm, the PID gains are scaled up based on the position error, $K_p^x=K_p^x\cdot(1/\textbf{x}_e)$. As $\textbf{x}_e$ approaches zero, $K_p^x$ grows hyperbolically, so its value is bounded to at most $kn=10$ times its \textit{current} value (the agent's chosen value). \Cref{apx:pid-gains-scheduling} provides a comparison between a traditional PID and the PID gains scheduling within our proposed method.
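A minimal sketch of the scheduling rule, assuming it is applied per axis with the position error expressed in meters (so the 1~cm threshold is 0.01):
\begin{verbatim}
def schedule_gain(kp_agent, x_err, threshold=0.01, kn=10.0, eps=1e-6):
    """Scale the position gain near the goal, bounded to kn * kp_agent."""
    if abs(x_err) >= threshold:
        return kp_agent                       # far from the goal: no scaling
    scaled = kp_agent / max(abs(x_err), eps)  # hyperbolic growth as error -> 0
    return min(scaled, kn * kp_agent)         # bound to kn times the agent's value
\end{verbatim}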
\subsection{Domain Randomization}\label{subsubsec:domain-randomization}
Domain randomization (DR) \cite{tobin2017domain} is a popular method in robot learning for increasing the generalization capabilities of policies trained in simulation, facilitating the transfer of the policy to a real-world robotic agent with minimal to no further refinement. In principle, the goal of DR is to provide enough variability in the simulated environment during training to generalize better to real-world conditions.
In robot learning, DR randomizes a set of $N_r$ numerical parameters of a physics simulator, with each parameter $\psi_i$ being sampled from a randomization space $\Psi \in \mathbb{R}^{N_r}$. Each parameter is bounded on a closed interval $\{[\psi_i^{low},\psi_i^{high}]\}_{i=1}^{N_r}$. For every episode, a new set of parameters is sampled from the randomization space, $\psi_i \in \Psi$. The most common approach is to draw samples uniformly from the randomization space.
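As a sketch, uniform domain randomization amounts to re-drawing every parameter at the start of each episode; the parameter names and bounds below are illustrative.
\begin{verbatim}
import numpy as np

# Illustrative randomization space: parameter -> (low, high)
PSI = {"initial_position_mm": (-50.0, 50.0),
       "hole_clearance_mm": (0.5, 3.0),
       "friction_mu": (1.0, 5.0)}

def sample_episode_params(rng=np.random.default_rng()):
    """Draw one value per parameter, uniformly over its full range (UDR)."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PSI.items()}
\end{verbatim}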
In this work, the randomized aspects of the peg-in-hole tasks are defined in \Cref{tab:randomize-values}.
\begin{table*}[h!]
\centering
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Condition}} & \textbf{Set} \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Initial position \\ (relative to goal)\end{tabular}} & Position (mm) & {[}-50, 50{]} \\ \cline{2-3}
& Orientation (\degree) & {[}-30, 30{]} \\ \hline
\multicolumn{2}{|c|}{\begin{tabular}[c]{@{}c@{}}Peg shape\end{tabular}} & {[}Cylinder, Cuboid, Hexagon prism, Triangular prism{]} \\ \hline
\multicolumn{2}{|c|}{\begin{tabular}[c]{@{}c@{}}Hole Clearance (mm)\end{tabular}} & {[}\num{3.0}, \num{0.5}{]} \\ \hline
\multicolumn{2}{|c|}{\begin{tabular}[c]{@{}c@{}} $\epsilon$: Distance from full insertion (mm)\end{tabular}} & {[}\num{15}, \num{1}{]} \\ \hline
\multicolumn{2}{|c|}{\begin{tabular}[c]{@{}c@{}}Friction\\ (in Gazebo: \textit{surface/friction/ode/mu})\end{tabular}} & {[}\num{1}, \num{5}{]} \\ \hline
\multicolumn{2}{|c|}{\begin{tabular}[c]{@{}c@{}}Stiffness\\ (in Gazebo: \textit{surface/friction/ode/kp})\end{tabular}} & {[}\num{5.0e-4}, \num{1.0e-6}{]} \\ \hline
\end{tabular}
\caption{Domain Randomization parameters and their maximum range of values}
\label{tab:randomize-values}
\end{table*}
\subsection{Curriculum Learning}\label{subsubsec:curriculum-learning}
Curriculum Learning comes from the notion that the order in which information is organized and presented to a learner impacts the learner's performance and training speed. This idea can be observed in the way humans learn, starting with simple concepts and gradually progressing to more complicated problems \cite{peterson2004day,krueger2009flexible}. CL can also be observed in the way we train animals \cite{skinner1958reinforcement}.
In this work, we follow the notion that starting with easier tasks can help the agent learn better when presented with more difficult tasks later on. We consider the CL problem in the context of DR, where the goal is to reduce the training time by guiding the learning process without losing domain transferability. The problem then becomes how to select parameters from the randomized space $\Psi$ to guide the agent's training.
To this end, we consider four main approaches:
\begin{itemize}
\item Curriculum-based DR: The DR parameter's range of values is determined by the curriculum.
\item The curriculum's evolution: a linear approach vs an adaptive approach.
\item The DR sampling strategy: a Uniform distribution (UDR) vs a Gaussian distribution (GDR).
\item A dynamic reward function based on the curriculum vs a standard reward function.
\end{itemize}
\subsubsection{Curriculum-based Domain Randomization}\label{subsubsec:dr-cl}
We tackle the problem of defining a strategy that reduces the complexity of choosing a value for each randomization parameter. Though each parameter of the randomized space $\Psi$ can be considered a degree of freedom that can be controlled to define the training tasks, adding new parameters would increase the difficulty of choosing the sequence of tasks used to train the agent. Therefore, in order to simplify the problem while preserving the benefits of domain randomization, we propose the following approach: we represent the difficulty level $\mathbf{L}$ of a task as a numerical value in the closed interval $[0, 1]$, from easiest to hardest. Then, a sub-range of each randomization parameter $\psi_i$ is defined based on the difficulty level $\mathbf{L_{ep}}$ at the beginning of each episode during training:
\begin{equation}\label{eq:randomization-ranges}
\psi_i: [\psi_i^{low}, \psi_i^{low} + \psi_i^{high}*\mathbf{L_{ep}}]
\end{equation}
where we assume that each parameter's range is ordered from easiest to hardest, such that at $\psi_i^{low}$ and $\psi_i^{high}$ the task is the easiest and the hardest, respectively. The parameters considered in this work and their corresponding ranges are shown in \Cref{tab:randomize-values}.
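A sketch of \cref{eq:randomization-ranges}: the sub-range used in an episode grows with the difficulty level. This sketch assumes the upper bound is interpolated linearly from the easy end $\psi_i^{low}$ to the hard end $\psi_i^{high}$ as $\mathbf{L_{ep}}$ goes from 0 to 1.
\begin{verbatim}
def curriculum_range(low, high, level):
    """Sub-range of one randomization parameter at difficulty level in [0, 1]."""
    level = min(max(level, 0.0), 1.0)
    return low, low + (high - low) * level  # easy end fixed, hard end grows

# Example: for a hole clearance ordered easy -> hard as (3.0, 0.5) mm,
# level 0.5 yields the sub-range (3.0, 1.75) mm.
\end{verbatim}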
\subsubsection{Adaptive Curriculum Learning}\label{subsubsec:adaptive-cl}
We consider two approaches to updating the curriculum difficulty. On the one hand, the naive approach is to monotonically increase the difficulty linearly, regardless of the agent's performance, i.e.,
\begin{equation}
\mathbf{L_{ep}} = ep/ep_{max}
\end{equation}
where $\mathbf{L_{ep}}$ is clamped to 1 if the current episode number exceeds the defined maximum number of episodes, $ep_{max}$. On the other hand, we propose an adaptive curriculum based on the agent's performance $\mathbf{P}$ over the last few episodes.
The agent's performance is computed as the success rate over the last few episodes. Based on this performance, the curriculum's level is updated by a defined step size $L_{step}$. Two thresholds are also defined: if the agent's performance surpasses $L_{thld\_up}$ or falls below $L_{thld\_down}$, the curriculum's level is increased or decreased, respectively. \Cref{alg:cldr} describes our adaptive curriculum approach; a sketch of the level-update rule is given after the algorithm.
\begin{algorithm}
\caption{Adaptive Curriculum Learning Evolution}\label{alg:cldr}
\begin{algorithmic}
\State $\mathbf{P} = 0$
\For{Every episode $ep$}
\State Update $\psi$ based on $\mathbf{L_{ep}}$
\Comment{\cref{eq:randomization-ranges}}
\State Sample task from $\psi$
\State $\mathbin{/}\mathbin{/}$ \textit{+1: success, -1: failure}
\State $\mathbf{P} \mathrel{{+}{=}} $ rollout current policy $\pi$ on task
\State Update policy
\If{$\mathbf{P} \ge L_{thld\_up}$}
\State $L_{ep} \mathrel{{+}{=}} L_{step}$
\State $\mathbf{P} = 0$ \Comment{Consider newest rollouts}
\ElsIf{$\mathbf{P} \le L_{thld\_down}$}
\State $L_{ep} \mathrel{{-}{=}} L_{step}$
\State $\mathbf{P} = 0$ \Comment{Consider newest rollouts}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
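For concreteness, the level-update step of \Cref{alg:cldr} can be written as the following Python sketch; the step size and threshold values are illustrative hyperparameters.
\begin{verbatim}
def update_level(level, perf, step=0.05, thld_up=5, thld_down=-5):
    """Raise or lower the difficulty level based on the accumulated
    success/failure score of the most recent rollouts."""
    if perf >= thld_up:
        level = min(level + step, 1.0)   # agent is doing well: harder tasks
        perf = 0                         # consider only the newest rollouts
    elif perf <= thld_down:
        level = max(level - step, 0.0)   # agent is struggling: easier tasks
        perf = 0
    return level, perf
\end{verbatim}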
\subsubsection{Domain Randomization Sampling Strategy}\label{subsubsec:dr-sampling-strategy}
We also consider the type of distribution from which the randomized parameters are sampled. Instead of the typical uniform distribution (UDR), we propose the use of a normal (Gaussian) distribution (GDR), $\mathcal{N}(\mu,\sigma^2)$, with the mean centered at the current curriculum level $\mathbf{L}_{ep}$ and the variance treated as a hyperparameter. The reasoning behind this choice is to keep increasing the general difficulty of the task as the difficulty level increases while, with a small probability, allowing the curriculum to generate a task easier than the current difficulty level, which mitigates the catastrophic forgetting problem \cite{mccloskey1989catastrophic, french1999catastrophic}.
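A sketch of the Gaussian sampling strategy: the difficulty used to generate an episode is drawn around the current curriculum level and clipped to $[0, 1]$; the standard deviation value is an illustrative hyperparameter.
\begin{verbatim}
import numpy as np

def sample_difficulty_gdr(level, sigma=0.1, rng=np.random.default_rng()):
    """Episode difficulty ~ N(level, sigma^2), clipped to [0, 1] (GDR).
    Occasionally yields a task easier than the current level, which
    mitigates catastrophic forgetting."""
    return float(np.clip(rng.normal(loc=level, scale=sigma), 0.0, 1.0))
\end{verbatim}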
\subsubsection{Dynamic Reward Function}\label{subsubsec:dynamic-reward}
Lastly, for our target task domain, we consider it desirable for the RL agent to learn to handle the \textit{hardest} conditions, so as to improve transferability to the real-world environment. To this end, we propose and evaluate a reward that evolves dynamically with the curriculum difficulty level. More specifically, the reward $r$, as defined in \Cref{subsubsec:cost-function}, is scaled by the current difficulty level $\mathbf{L_{ep}}$; thus, the full reward is obtained only when the agent reaches and maintains the hardest level.
\begin{equation}
\mathbf{r_t^d} = r*\mathbf{L_{ep}}
\end{equation}
where $r_t^d$ denotes the dynamic reward at time step $t$.
In other words, at each time step, the reward obtained by the agent is a fraction of the full reward attainable for reaching that state.
\section{Experiments and results} \label{sec:experiments-results}
Through the following experiments, we aimed to evaluate the performance of our proposed method compared to alternative approaches, in terms of sample efficiency and generalization. To that end, experiments were performed on novel tasks not seen during training, both in simulation and in the real-world environment, using insertion tasks with medium-grade industrial tolerances ($\pm 0.01$~mm).
The \textit{baseline} used throughout these experiments was based on our previous work \cite{beltran2020variable}, which mainly focused on the use of DR to enhance domain transferability. This experimental section focuses on comparing the different curricula designs, sampling strategies for DR, and the curriculum-based dynamic reward. As such, our previous method \cite{beltran2020variable} was updated to include the new reward function and the PID gain scheduling approach, proposed and described in \Cref{subsubsec:cost-function} and \ref{subsubsec:pid-gains-scheduling} respectively, and this updated version is used as the \textit{baseline} in this study. Ablation studies of these components of our proposed method are discussed in the Appendix.
\subsection{Experimental Setup} \label{subsec:experimental-setup}
A simulated environment, based on version 9 of the Gazebo simulator \cite{koenig2004design}, was used for both training and validation.
The choice of simulation environment is discussed in \Cref{sec:discussion}.
Two real-world environments were used for validation purposes only; no further re-training was performed on the target domains\footnote{Although Gazebo's simulation of the high-stiffness robot is accurate, the simulated robot controllers respond faster than the real robot (possibly due to a safety speed reduction on the side of the real robot controller, which we have not modified). Therefore, a minimal calibration is required. In our experience, scaling the reference trajectory or the command sent to the controller by a factor of two worked well enough. A rough similarity between the simulated and real robot controllers is enough to enable straightforward sim2real transfer.}. The components of the real-world setup are described in \Cref{fig:real-system}. Both environments consist of a Universal Robot 3 e-series (UR3e) robot arm with a control frequency of up to 500~Hz. The robotic arm has a force/torque sensor mounted at its end-effector.
In simulation, the peg was considered part of the robot's end-effector, as shown in \Cref{fig:sim_tasks}. The real-world robot simply used a Robotiq Hand-e parallel gripper. For the toy environments described in \Cref{subsubsec:toy-experiments}, this parallel gripper and a cuboid holder facilitate a strong and stable grasp, similar to the simulation environment. However, for the industrial insertion tasks, we avoided the use of custom-made holders for the real-world tasks, which increased the difficulty of the tasks, as discussed in \Cref{subsubsec:wrs-experiments}.
Our implementation of the RL agent that controlled both the simulated and real robot was developed on top of the Robot Operating System (ROS) \cite{quigley2009ros} with the Universal Robot ROS Driver\footnote{ROS driver for Universal Robot robotic arms developed in collaboration between Universal Robots and the FZI Research Center for Information Technology https://github.com/UniversalRobots/Universal\_Robots\_ROS\_Driver}. In both environments, training of the RL agent was performed on a computer with an Intel i9-10900X CPU and NVIDIA® Quadro RTX™ 8000 GPU.
The accompanying video\footnote{Graphical abstract and experimental results:
\href{https://youtu.be/_FVQC5OcGjs}{https://youtu.be/\_FVQC5OcGjs}} provides a graphical abstract and a summary of the experimental results.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/real-world-system.png}
\caption{Real-world experimental environment with a 6-degree-of-freedom UR3e robotic arm. The WRS2020 task board is shown, alongside the three insertion tasks used for validation: motor pulley, bearing, and shaft. Each task has industrial-level sub-mm tolerances.}
\label{fig:real-system}
\end{figure}
\subsection{Training}
The training phase consisted of the repeated execution of the insertion task using a variety of peg shapes and physical parameters of the simulator, as described in \Cref{tab:randomize-values}. An \textit{episode} was defined as a maximum of 1000 time steps, with each step lasting 50~ms. Early termination of an episode occurs under three conditions: 1) the target goal is reached, i.e., the peg is inserted and within a distance $\epsilon$ of full insertion, as described in \Cref{tab:randomize-values}; 2) the robot collides with the task board, i.e., a large contact force is sensed at any point during the task (more than 50~N in simulation or more than 30~N with the real-world robot); 3) the agent gets stuck, i.e., the cumulative reward decreases below a set value $R_{min}$.
\begin{figure*}[t]
\centering
\hfill
\begin{subfigure}[b]{0.16\textwidth}
\centering
\includegraphics[height=30mm]{figures/cylinder-peg.png}
\caption{Cylinder.}
\label{fig:cylinder}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.16\textwidth}
\centering
\includegraphics[height=30mm]{figures/hexagonal-peg.png}
\caption{Hexagonal}
\label{fig:hexagon}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.16\textwidth}
\centering
\includegraphics[height=30mm]{figures/cuboid-peg.png}
\caption{Cuboid}
\label{fig:cuboid}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.16\textwidth}
\centering
\includegraphics[height=30mm]{figures/triangular-peg.png}
\caption{Triangular}
\label{fig:triangular}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.16\textwidth}
\centering
\includegraphics[height=30mm]{figures/trapezoid-peg.png}
\caption{Trapezoid}
\label{fig:trapezoid}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.16\textwidth}
\centering
\includegraphics[height=30mm]{figures/star-peg.png}
\caption{Star}
\label{fig:star}
\end{subfigure}
\hfill
\caption{Simulated peg-in-hole environments. The cylinder, hexagonal prism, cuboid and triangular prism were used during training. The trapezoid prism and the star prism were used for testing.}
\label{fig:sim_tasks}
\end{figure*}
\subsection{Learning performance}
First, we compare the learning performance of the approaches presented in \Cref{subsubsec:dr-cl}, \ref{subsubsec:adaptive-cl}, and \ref{subsubsec:dr-sampling-strategy}:
\begin{itemize}
\item \textit{Baseline}: DR without curriculum learning (No Curriculum), as described in \Cref{sec:experiments-results}.
\item Linear curriculum with Uniform distribution for DR (Linear Curriculum UDR).
\item Linear curriculum with Gaussian distribution for DR (Linear Curriculum GDR).
\item Adaptive curriculum with Uniform distribution for DR (Adp. Curriculum UDR).
\item Adaptive curriculum with Gaussian distribution for DR (Adp. Curriculum GDR).
\end{itemize}
Each training session had a maximum of $100,000$ time steps, one-fifth of the training time required in our previous work \cite{beltran2020variable}.
As described in \Cref{subsubsec:domain-randomization}, each episode is generated with a different set of values for the randomization parameters.
\Cref{fig:Learning-comparison} shows the cumulative reward per method during a complete training session. Each training session was repeated with different random seeds; the average value and standard deviation are shown as the bold line and shaded region, respectively. The results highlight the significant improvement obtained by applying Curriculum Learning compared to the \textit{baseline}, which relied on Domain Randomization alone. Furthermore, the adaptive curricula performed considerably better than a simple linear increase of the curriculum difficulty. Finally, using a Gaussian distribution instead of a uniform distribution for sampling the Domain Randomization parameters also significantly improves the agents' performance during learning. The dynamic reward (DyRe) approach discussed in \Cref{subsubsec:dynamic-reward} is not included here, as the scale of its reward is different.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figures/learning-comparison.png}
\caption{Learning curve comparison using the cumulative reward over the whole training session. Each method was trained three times. The results are aggregated as the average cumulative reward and the corresponding standard deviation, represented by the bold line and the shaded region.}
\label{fig:Learning-comparison}
\end{figure}
\subsection{Evaluating learned Policies} \label{sucsec:eval-learning-curves}
Next, we evaluated the performance of the learned policies on novel conditions not seen during training. Each policy was executed 100 times with different initial conditions and randomized parameters (with a fixed random seed for a fair comparison). More specifically, the peg shapes used for testing were a trapezoid prism and star prism, as shown in \Cref{fig:sim_tasks}. The trapezoid introduces a non-symmetric-shaped peg. The star prism peg is more challenging due to its sharp corners that make the peg prone to getting stuck during the aligning phase, making the overall insertion task harder to complete during the allowed time limit of 50 seconds (same time limit as during training).
The results are shown in \Cref{tab:sim-results}. They include a comparison of the overall success rate and the average time needed to complete the task; failure cases are not included in the computation of the average time. Two main conclusions can be drawn from these results: 1) A curriculum may seem to yield better learning performance, but the resulting policy may not transfer well to novel environments, as is the case with the \textit{Linear Curriculum} methods; such linear curriculum approaches performed only slightly better than not using a curriculum at all. 2) The most successful methods are not necessarily the fastest. Our simulation environment did not handle friction between the peg and the task board very well, due to the high stiffness of the robot joints. In this almost frictionless world, the \textit{Adp. Curriculum UDR} method is able to solve the tasks 20\% to 50\% faster than our best method, \textit{Adp. Curriculum GDR DyRe}; however, the success rate of our method is at least 19\% higher.
The main reason for these results is that, since low contact force and collision avoidance have a higher priority than speed during learning, our proposed method moves more slowly as the peg approaches the task board, which reduces the contact force. This conclusion is further supported by the real-world experimental data described in \Cref{subsubsec:learning-force-control}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|cc|cc|}
\hline
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Trapezoid Prism}} & \multicolumn{2}{c|}{\textbf{Star Prism}} \\ \cline{2-5}
& \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Success\\ Rate\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Avg.\\ Time(s)\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Success\\ Rate\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Avg.\\ Time(s)\end{tabular} \\ \hline
No Curriculum & \multicolumn{1}{c|}{0.88} & 9.585 & \multicolumn{1}{c|}{0.627} & 11.304 \\ \hline
Linear Curriculum UDR & \multicolumn{1}{c|}{1.00} & 9.817 & \multicolumn{1}{c|}{0.696} & 9.152 \\ \hline
Linear Curriculum GDR & \multicolumn{1}{c|}{1.00} & 11.650 & \multicolumn{1}{c|}{0.775} & 11.780 \\ \hline
Adp. Curriculum UDR & \multicolumn{1}{c|}{1.00} & \textbf{6.881} & \multicolumn{1}{c|}{0.706} & \textbf{6.875} \\ \hline
Adp. Curriculum GDR & \multicolumn{1}{c|}{1.00} & 8.460 & \multicolumn{1}{c|}{0.794} & 8.013 \\ \hline
\begin{tabular}[c]{@{}c@{}}Adp. Curriculum \\ UDR DyRe\end{tabular} & \multicolumn{1}{c|}{1.00} & 8.429 & \multicolumn{1}{c|}{0.873} & 11.602 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Adp. Curriculum \\ GDR DyRe\end{tabular}} & \multicolumn{1}{c|}{1.00} & 8.400 & \multicolumn{1}{c|}{\textbf{0.902}} & 11.544 \\ \hline
\end{tabular}
\caption{Evaluation of the learned policies under novel conditions in simulation.}
\label{tab:sim-results}
\end{table}
\subsection{Real-world experiments} \label{subsec:real-world-experiments}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.39\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/toy-experiments.png}
\caption{Trapezoid and star prism pegs}
\label{fig:toy_tasks}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.59\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/wrs2020-experiments.png}
\caption{Motor Pulley, Shaft, and Bearing.}
\label{fig:wrs_tasks}
\end{subfigure}
\hfill
\caption{Real-world experimental scenarios. \textbf{Left}: 3D-printed primitive-shaped pegs, different from the ones used for training in simulation. \textbf{Right}: Industrial-level insertion tasks from the WRS2020 Robotics Assembly Challenge \cite{wrs2020_rulebook}.}
\label{fig:toy_experiments}
\end{figure*}
We performed two sets of experiments to evaluate the transferability of the learned policies to the real world and to novel tasks. The experiments were performed using the \textit{baseline} (No Curriculum) method and our newly proposed method (Adp. Curriculum DyRe GDR), which achieved the best results from the evaluation in simulation.
The first set of tasks consisted of the same trapezoid and star prism-shaped pegs as in the simulation experiments, which were not seen during training. Simplified pegs were 3D-printed using PLA material, as shown in \Cref{fig:toy_tasks}.
The second set of tasks consisted of novel industrial-level insertion tasks (see \Cref{fig:wrs_tasks}); similarly, these tasks were unseen during training in simulation. Both sets of tasks have sub-millimeter tolerances. Thirty trials were performed per method and task, and the success rate and average completion time were measured. The initial position and orientation of the robot's end-effector differed at each trial and were randomly sampled using a fixed random seed to fairly compare both methods. Each trial had a limit of 500 time steps, i.e., 25~s.
\subsubsection{Primitive Shaped Pegs} \label{subsubsec:toy-experiments}
The 3D-printed pegs were designed with a cuboid holder, as shown in \Cref{fig:toy_tasks}, to increase the stability of the grasp. As a result, the stiffness of the contact is very high, as the task board was also firmly fixed to the workspace. Additionally, there is high friction due to the PLA material used for 3D printing and the imperfections of the printed surfaces. The high stiffness and friction make the task challenging. The results are shown in \Cref{tab:real-toy-results}. As mentioned before, for this test the \textit{baseline} was trained with only one-fifth of the samples shown to be needed to learn a successful policy \cite{beltran2020variable}. Thus, its less refined force control tends to apply too much force in order to complete the task quickly, and the high friction and the corners of the star-shaped hole cause the peg to get stuck easily. Therefore, the \textit{baseline} method struggled to align the star prism peg and to get unstuck. On the other hand, our newly proposed approach successfully adapted to the real-world environment and succeeded at the novel tasks without further re-training of the policies, i.e., with a straightforward sim-to-real transfer.
\begin{table}[h!]
\centering
\begin{tabular}{|c|cc|cc|}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Trapezoid Prism Peg} & \multicolumn{2}{c|}{Star Prism Peg} \\ \cline{2-5}
& \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Success\\ Rate\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Avg. \\ Time(s)\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Success\\ Rate\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Avg. \\ Time(s)\end{tabular} \\ \hline
No Curriculum & \multicolumn{1}{c|}{1.000} & 6.500 & \multicolumn{1}{c|}{0.000} & - \\ \hline
\begin{tabular}[c]{@{}c@{}}Adp. Curriculum \\ DyRe GDR\end{tabular} & \multicolumn{1}{c|}{\textbf{1.000}} & \textbf{5.465} & \multicolumn{1}{c|}{\textbf{1.000}} & \textbf{6.023} \\ \hline
\end{tabular}
\caption{Evaluation of the learned policies in the real-world environment, using two toy scenarios not seen during training in simulation.}
\label{tab:real-toy-results}
\end{table}
\subsubsection{Industrial Level Insertion Tasks} \label{subsubsec:wrs-experiments}
The second set of tasks used for evaluation consisted of three insertion tasks with industrial-level tolerances, chosen from the assembly tasks used in the Industrial Robots Assembly Challenge of the World Robot Summit 2020 \cite{wrs2020_rulebook}. The tasks, shown in \Cref{fig:wrs_tasks}, were the insertion of a pulley onto a motor shaft, of a shaft into a bearing, and of a bearing into a plate. As with the previous tasks, the learned policies were directly transferred from the simulation environment without further training.
These tasks are considerably more challenging, as the stability of the grasp significantly impacts success. All three manipulated objects are round and grasped directly with a standard parallel gripper; thus, torques applied along the grasping direction can easily change the object's orientation in the gripper, and small orientation changes significantly affect these very tight insertion tasks.
The results are shown in \Cref{tab:real-wrs-results}.
Our newly proposed method (Adp. Curriculum DyRe GDR) achieved a high success rate on all the tasks. For the motor pulley and shaft tasks, our method also solves the task faster, as it finds the right fit more quickly. Our method is less likely to get stuck because it applies less contact force, as shown in \Cref{fig:distance-force-comparison} and \ref{fig:distance-force-comparison-failure}.
In the case of the bearing task, the \textit{baseline} method, when successful, is slightly faster, as it tends to apply a higher contact force and move faster once the parts are aligned. However, the same high contact force makes it harder to find the proper alignment, resulting in a very low success rate. As a result, our newly proposed method outperforms the \textit{baseline} method, achieving a much higher success rate. These results are better appreciated in the supplementary video\footnote{Supplemental video:
\href{https://youtu.be/_FVQC5OcGjs}{https://youtu.be/\_FVQC5OcGjs}}.
\begin{table*}[h!]
\centering
\begin{tabular}{|c|cc|cc|cc|}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Motor Pulley} & \multicolumn{2}{c|}{Shaft} & \multicolumn{2}{c|}{Bearing} \\ \cline{2-7}
& \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Success\\ Rate\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Avg.\\ Time(s)\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Success\\ Rate\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Avg.\\ Time(s)\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Success\\ Rate\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Avg.\\ Time (s)\end{tabular} \\ \hline
No Curriculum & \multicolumn{1}{c|}{0.400} & 8.258 & \multicolumn{1}{c|}{0.667} & 9.199 & \multicolumn{1}{c|}{0.267} & 6.819 \\ \hline
\begin{tabular}[c]{@{}c@{}}Adp. Curriculum \\ DyRe GDR\end{tabular} & \multicolumn{1}{c|}{0.867} & 7.250 & \multicolumn{1}{c|}{0.833} & 7.015 & \multicolumn{1}{c|}{0.700} & 7.212 \\ \hline
\end{tabular}
\caption{Evaluation of the learned policies in the real-world environment, using three industrial-level insertion tasks not seen during training in simulation.}
\label{tab:real-wrs-results}
\end{table*}
\subsubsection{Learning Force Control} \label{subsubsec:learning-force-control}
In addition to the success rate and time to completion, we compare the detailed performance of the two methods. \Cref{fig:distance-force-comparison} shows the performance of both methods side by side for the three industrial insertion tasks.
For simplicity, only the z-axis (i.e., the insertion direction), distance error (mm), and contact force (N) are displayed.
As shown in \Cref{fig:distance-force-comparison}, our proposed method is more time-efficient and applies less contact force to the coupling part. Less contact force is desirable to avoid damage to either the assembly part or the robot.
Similarly, \Cref{fig:distance-force-comparison-failure} shows the comparison of trials where both agents fail to complete the task on time. Though both agents failed, our method again shows a reduced exertion of contact force.
In both cases, our method applies about 30\% less contact force.
\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/success_pulley_df.png}
\caption{Motor Pulley}
\label{fig:motor_pulley_f}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/success_shaft_df.png}
\caption{Shaft}
\label{fig:shaft_f}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/success_bearing_df.png}
\caption{Bearing}
\label{fig:bearing_f}
\end{subfigure}
\caption{Agents' performance on the WRS2020 insertion tasks. For clarity, only the z-axis (insertion direction) distance error and contact force are displayed. For each task, the comparison uses trials in which both methods successfully completed the task.}
\label{fig:distance-force-comparison}
\end{figure*}
\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/failure_pulley_df.png}
\caption{Motor Pulley}
\label{fig:motor_pulley}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/failure_shaft_df.png}
\caption{Shaft}
\label{fig:shaft}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{figures/failure_bearing_df.png}
\caption{Bearing}
\label{fig:bearing}
\end{subfigure}
\caption{Performance of both methods in trials where both failed to complete the task within the time limit.}
\label{fig:distance-force-comparison-failure}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
Training a reinforcement learning agent with a curriculum that starts from easier tasks, with a reduced risk of encountering fatal states (e.g., a collision during a manipulation task), improves sample efficiency and overall performance. In this case study in particular, we integrate CL into contact-rich peg-insertion tasks, where a task is defined by the physical parameters described in \Cref{tab:randomize-values}.
We aim to allow the agent to carefully explore more states by presenting tasks in increasing order of difficulty; e.g., by reducing the stiffness of the contact between the peg and the board, the agent is less likely to apply excessive contact force to the environment (i.e., to cause a collision).
At the same time, starting with easier tasks, such as a shorter distance from the initial position to the goal, reduces the overall exploration needed. The curriculum is designed around a domain randomization approach to preserve and enhance the domain-transferability benefits of DR. Nevertheless, the type of curriculum is very relevant for achieving better performance, both in terms of sample efficiency and success rate, as seen in the results in \Cref{subsubsec:toy-experiments}.
In our previous work \cite{beltran2020variable}, we demonstrated that with sufficient domain-randomization-based training in simulation (at least $500,000$ time steps, or about 8 hours) and further retraining in the real-robot environment, it is possible to learn policies that successfully adapt to novel domains. The present paper introduces a study of Curriculum Learning to tackle the problems of sample efficiency and the need to retrain in the target domain. The experimental results in the real-robot environment confirm that our newly proposed method is effective for learning contact-rich force-control tasks. Even though our proposed method was trained only in simulation, with one-fifth of the training time used in our previous work \cite{beltran2020variable} and without further retraining, it achieves a high success rate on novel tasks, including challenging industrial insertion tasks with sub-millimeter tolerances.
The main limitation of our proposed method is the assumption that the domain randomization parameter ranges can be ordered from \textit{easy} to \textit{difficult}. Prior knowledge is required to determine the \textit{difficulty} that a physical parameter induces for a given task. Such prior knowledge is task-specific and not necessarily easy to obtain; it may even be impossible to determine manually. For example, considering only the peg-in-hole task and the stiffness of the contact between the peg and the task board, low stiffness may intuitively seem easier to handle for a high-stiffness robot arm, as less careful force control is required to achieve the task without generating large contact forces. However, depending on the material and on how low the stiffness is, the peg may get stuck, or the task board may deform to such an extent that the task becomes impossible to solve.
To tackle this problem, an interesting future avenue is to add a layer of learning, following works such as \cite{mehta2020adr,svetlik2017automatic}. The main idea of this line of research is to train a neural network to define the RL agent's tasks; in other words, the network learns to choose the best values for the randomization parameters at each episode so as to increase the performance of the learned policy. The approaches differ in how the new network is trained and in how the agent's performance is defined, e.g., as the cumulative reward, the success rate, or another evaluation metric.
Following such self-learned curriculum approaches reduces the burden of prior knowledge; moreover, as discussed in \cite{mehta2020adr}, a self-learned curriculum can provide insight into incompatibilities between the task and the randomization ranges. Therefore, these approaches may allow the use of many other parameters of the physics simulator for domain randomization, potentially increasing transferability to novel domains. Nevertheless, a possible downside is the requirement of longer training sessions due to the added complexity.
Another avenue for further improving the presented work is the choice of simulation environment. At the time of writing, various physics simulators are available that model the contact dynamics between bodies with different degrees of accuracy, among other capabilities. Our choice of the Gazebo simulator was motivated by its realistic simulation of rigid position-controlled robots. Additionally, the availability of ROS controllers that work identically on the simulated and the real-world robot reduces the implementation burden and facilitates sim2real transfer. Nonetheless, other simulators, such as MuJoCo \cite{todorov2012mujoco} or NVIDIA Isaac Sim \cite{liang2018gpu}, provide better contact dynamics and are better adapted to Reinforcement Learning applications. Working with such simulators would be a significant improvement, as they also make Domain Randomization easier to implement, both for vision-based learning methods and for the physical parameters of the simulated environment.
\section{Conclusions}
This paper has studied different approaches that combine Curriculum Learning with Domain Randomization to learn contact-rich manipulation tasks, particularly assembly tasks such as peg insertion. Based on this study, we proposed to improve sample efficiency and generalization by training an agent purely in simulation, with the training guided by CL and enhanced with DR. Additionally, this work introduced two enhancements to our learning framework, a new dense reward function and a PID gain scheduling approach, described in \Cref{subsubsec:cost-function} and \ref{subsubsec:pid-gains-scheduling} respectively, and validated in the Appendix.
The learning framework proposed in this work is based on our previous work \cite{beltran2020variable}, which combined sim2real transfer with DR. Our previous method still required a considerably large amount of interaction between the agent and its environment, as well as additional refinement in the real-world environment, to learn a robust policy. In contrast, our novel approach can be trained purely in simulation using only toy insertion tasks.
Empirical results showed that with our proposed method a successful policy can be learned using only one-fifth of the samples needed in our previous work. Such policies can be transferred straightforwardly to real-world environments and still achieve a high success rate, up to 86\%, on novel, complex industrial insertion tasks not seen during training.
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sect:intro}
In the past two decades, thousands of identifiable planets outside
the solar system have been spotted
\citep[e.g.][]{Vogt00,Pie10,Mou11,Ofir13,Rowe14}. An overwhelming
majority of them have been discovered using indirect methods, i.e.,
radial-velocity (RV) measurements and photometric transits. This is
because planets are non-luminous bodies; they merely reflect the
light of their parent star. Seen from a distance of a few parsecs,
a planet is like a small ``undetectable'' speckle in the stellar
image, but it induces dynamical perturbations on its parent star,
providing the possibility of detecting it by indirect means. As a
consequence, the characteristic parameters of an exoplanet depend
strongly on the characteristic parameters of its host star. Hence,
it is of great significance to obtain accurate masses and radii of
host stars when studying exoplanets
\citep[e.g.][]{Sea03,San08,Winn10}.
The most commonly used method for determining stellar properties is
to fit the parameters of theoretical models to observational
constraints, e.g. the effective temperature $T_{\rm{eff}}$ and
luminosity $L/L_{\sun}$. However, this method alone generally makes
it difficult to obtain complete sets of stellar parameters: there
are several free parameters in stellar structure and evolution
models, which add to the uncertainties of the observations and
result in imprecise estimates of the stellar properties \citep{Bi08}.
Nevertheless, the abundance of lithium in stellar photospheres is
widely used to study various processes, from big bang
nucleosynthesis to the formation and evolution of planetary systems
\citep[e.g.,][]{Mel10,San10}. Lithium is readily destroyed in
stellar interiors at a comparatively low temperature ($\sim$ 2.5
$\times$ 10$^6$ K); therefore, in solar-like stars, the surface
lithium abundance is treated as an extremely sensitive diagnostic
of stellar structure and evolution. Additionally, the evolution of
the lithium abundance during the pre-MS and MS phases correlates
strongly with several stellar properties, i.e., metallicity, mass,
age, and rotational history. Meanwhile, due to the angular momentum
loss induced by magnetic braking, the rotation rate of solar-like
stars decreases with age during the main sequence, and the rate of
angular momentum loss is related to the stellar mass, radius and
rotation rate. Accordingly, by combining a non-standard stellar
model that includes extra-mixing processes with accurate
measurements of the lithium abundance and the rotational period, we
can obtain more precise estimates of the fundamental parameters of
exoplanet-host (EH) stars and their planets
\citep{Do Nascimento09,Cas11,Li12}.
In this work, we computed evolutionary models including
rotation-induced element mixing and microscopic diffusion. We used
three commonly adopted observational constraints ($T_{\rm{eff}}$,
$L/L_{\sun}$ and [Fe/H]) and two additional observational
constraints (the lithium abundance $\log$ $N$ (Li) and the
rotational period $P_{\rm{rot}}$) as restrictions on the stellar
models, aiming to determine the stellar parameters accurately. We
assumed that lithium depletion begins during the pre-main-sequence
(pre-MS) phase and evolves differently for different initial
rotation rates.
The observed data of the six EH stars and their planets are
summarized in Section ~\ref{sect:Obs}. The computational method and
the details of our evolutionary models are described in Section
~\ref{sect:Mod}. In Section ~\ref{sect:Res}, we present our modeling
results and compare them with previous studies. Finally, the discussion and conclusions are given in the last section.
\section{The selection of star sample}
\label{sect:Obs}
\subsection{EH Stars}
\begin{table}
\begin{center}
\caption{Main Characteristics of Six EH Stars.}\label{tbl1}
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
HIP & HD & $T_{\rm{eff}}$ & $\log(L/L_{\sun})$ & [Fe/H] & $\log$ $N$ (Li) & $P_{\rm{rot}}$& ref \\
  &     & (K)             &                    &        & (dex)           & (days) &     \\
\hline\noalign{\smallskip}
9683 & 12661 & 5743$\pm$44 & 0.093$\pm$0.063 & 0.36$\pm$0.03 & ... & ... & (1) \\
& & 5785$\pm$50 & 0.033$\pm$0.063 & 0.37$\pm$0.03 & 1.10$\pm$0.60 & ... & (2) \\
& & 5715$\pm$70 & ... & 0.36 & ... & ... & (3) \\
& & ... & ... & ... & ... & 35$\pm$2.1 & (5) \\
& & 5748$\pm$70 & 0.063$\pm$0.063 & 0.36$\pm$0.03 & 1.10$\pm$0.60 & 35$\pm$2.1 & (6) \\
\ & & & & & & & \\
33212 & 50554 & 5929$\pm$44 & 0.167$\pm$0.063 & -0.07$\pm$0.03 & ... & ... & (1) \\
      &        & 5982$\pm$26     & 0.119$\pm$0.062    & -0.07$\pm$0.02  & 2.40$\pm$0.11   & ...        & (2) \\
& & 6050$\pm$70 & ... & 0.02 & 2.59 & ... & (3) \\
& & ... & ... & ... & ... & 16$\pm$1.0 & (5) \\
& & 5987$\pm$70 & 0.143$\pm$0.063 & -0.04$\pm$0.03 & 2.50$\pm$0.11 & 16$\pm$1.0 & (6) \\
\ & & & & & & & \\
47007 & 82943 & 5997$\pm$44 & 0.169$\pm$0.047 & 0.27$\pm$0.03 & ... & ... & (1) \\
& & 6011$\pm$36 & 0.152$\pm$0.061 & 0.28$\pm$0.03 & 2.47$\pm$0.10 & ... & (2) \\
& & 6025$\pm$70 & ... & 0.33 & 2.52 & ... & (3) \\
& & ... & ... & ... & ... & 20$\pm$1.2 & (5) \\
& & 6011$\pm$70 & 0.161$\pm$0.061 & 0.29$\pm$0.03 & 2.50$\pm$0.10 & 20$\pm$1.2 & (6) \\
\ & & & & & & & \\
50473 & 89307 & 5898$\pm$44 & 0.096$\pm$0.062 & -0.16$\pm$0.03 & ... & ... & (1) \\
& & 5914$\pm$25 & 0.130$\pm$0.063 & -0.18$\pm$0.02 & 2.18$\pm$0.11 & ... & (2) \\
& & ... & ... & ... & ... & 18$\pm$1.1 & (5) \\
& & 5906$\pm$44 & 0.113$\pm$0.063 & -0.17$\pm$0.03 & 2.18$\pm$0.11 & 18$\pm$1.1 & (6) \\
\ & & & & & & & \\
59610 & 106252 & 5870$\pm$44 & 0.107$\pm$0.069 & -0.08$\pm$0.03 & ... & ... & (1) \\
& & 5923$\pm$38 & 0.108$\pm$0.063 & -0.05$\pm$0.03 & 1.69$\pm$0.13 & ... & (2) \\
& & 5890$\pm$70 & ... & -0.01 & 1.65 & ... & (3) \\
& & 5899$\pm$62 & ... & -0.034$\pm$0.041 & 1.71$\pm$0.04 & ... & (4) \\
& & ... & ... & ... & ... & 23$\pm$1.4 & (5) \\
& & 5896$\pm$70 & 0.108$\pm$0.069 & -0.04$\pm$0.04 & 1.68$\pm$0.13 & 23$\pm$1.4 & (6) \\
\ & & & & & & & \\
77740 & 141937 & 5847$\pm$44 & 0.070$\pm$0.073 & 0.13$\pm$0.03 & ... & ... & (1) \\
& & 5842$\pm$36 & -0.026$\pm$0.067 & 0.10$\pm$0.03 & 2.26$\pm$0.11 & ... & (2) \\
& & 5925$\pm$70 & ... & 0.11 & 2.48 & ... & (3) \\
& & 5900$\pm$19 & ... & 0.125$\pm$0.030 & 2.36$\pm$0.02 & ... & (4) \\
& & ... & ... & ... & ... & 21$\pm$1.3 & (5) \\
& & 5879$\pm$70 & 0.022$\pm$0.073 & 0.12$\pm$0.03 & 2.37$\pm$0.11 & 21$\pm$1.3 & (6) \\
\noalign{\smallskip}\hline
\end{tabular}
\begin{list}{}{}
\item[$^{\mathrm{}}$] Reference: (1) \cite{VF05}; (2) \cite{Ghe10a} and \cite{Ghe10b}; (3)
\cite{Isr04}; (4)\cite{Bau10}; (5)\cite{Wri04}; (6) Mean value.
\end{list}
\end{center}
\end{table}
The sample stars for our modeling are six solar-analog stars with
observed lithium abundances and rotational periods, whose planets
have been detected using the radial-velocity method. The
availability of lithium abundances and rotational periods makes it
possible to obtain precise estimates of the stellar parameters,
especially the masses and radii, which are essential for
determining the properties of their planets.
We summarized the observed data of EH stars which were used for our
theoretical calculations in Table ~\ref{tbl1}. The atmospheric
features, $T_{\rm{eff}}$, $L$, and [Fe/H] were collected from
\citet{VF05}, \citet{Ghe10a}, \citet{Isr04} and \citet{Bau10}. We
adopted the lithium abundance $\log$ $N$(Li) from the observations
of \citet{Ghe10b}, \citet{Isr04} and \citet{Bau10}. The rotational
period $P_{\rm{rot}}$ we used was determined by \citet{Wri04} from
the California and Carnegie Planet Search Program with the HIRES
spectrometer at Keck Observatory. The average values of these
observations were adopted in the following study.
The spectra of \citet{VF05} were obtained with the HIRES
spectrograph mounted on the 10-m telescope at Keck Observatory
\citep{Vogt94}, the UCLES spectrograph mounted on the 4-m
Anglo-Australian Telescope at Siding Spring Observatory
\citep{Die90}, and the Hamilton echelle spectrometer at Lick
Observatory \citep{Vogt87}. The spectra of \citet{Ghe10a,Ghe10b}
were obtained with the FEROS spectrograph mounted on the MPG/ESO
2.20-m telescope at La Silla \citep{Kau99}. The observations of
\citet{Isr04} were carried out using the UES/4.2-m William Hershel,
the SARG/3.5-m TNG at La Palma, and the FEROS/1.52-m ESO, the
CORALIE/1.2-m Euler Swiss at La Silla. Stars from \citet{Bau10} were
observed with RGT spectrograph mounted on the 2.7-m Harlan Smith
telescope at the McDonald observatory, MIKE spectrograph mounted on
the 6.5-m Magellan Clay telescope at Las Campanas observatory, and
HARPS spectrograph mounted on the 3.6-m ESO telescope at La Silla
observatory.
\subsection{Exoplanets}
\begin{table}
\begin{center}
\caption{Main Characteristics of Exoplanets.}\label{tbl5}
\begin{tabular}{lcccc}
\hline\noalign{\smallskip}
Planet & $P$ & $e$ & $K_1$ & Ref \\
& (day) & & (m s$^{-1}$) & \\
\hline\noalign{\smallskip}
HD 12661b & 262.709 $\pm$ 0.083 & 0.3768 $\pm$ 0.0077 & 73.56 $\pm$ 0.56 & (1) \\
HD 12661c & 1708.0 $\pm$ 14.0 & 0.031 $\pm$ 0.022 & 30.41 $\pm$ 0.62 & (1) \\
HD 50554b & 1293.0 $\pm$ 37.0 & 0.501 $\pm$ 0.030 & 104 $\pm$ 5 & (2) \\
HD 82943b & 442.4 $\pm$ 3.1 & 0.203 $\pm$ 0.052 & 39.8 $\pm$ 1.3 & (3) \\
HD 82943c & 219.3 $\pm$ 0.8 & 0.425 $\pm$ 0.018 & 54.4 $\pm$ 2.0 & (3) \\
HD 82943d & 1072 $\pm$ 13 & 0 $\pm$ 0 & 5.39 $\pm$ 0.57 & (4) \\
HD 89307b & 2199 $\pm$ 61 & 0.25 $\pm$ 0.09 & 32.4 $\pm$ 4.5 & (5) \\
HD 106252b & 1600.0 $\pm$ 18.0 & 0.471 $\pm$ 0.028 & 147 $\pm$ 4 & (2) \\
HD 141937b & 653.22 $\pm$ 1.21 & 0.41 $\pm$ 0.01 & 234.5 $\pm$ 6.4 & (6) \\
\noalign{\smallskip}\hline
\end{tabular}
\begin{list}{}{}
\item[$^{\mathrm{}}$] Reference: (1) \cite{Wri09}; (2) \cite{Per03}; (3) \cite{Tan13}; (4) \cite{Bal14}; (5) \cite{Boi12}; (6) \cite{Udr02}.
\end{list}
\end{center}
\end{table}
We list in Table~\ref{tbl5} the planetary orbital parameters
obtained from the RV measurements. Two of these systems are
multi-planetary: HD 12661 and HD 82943 host two and three planets,
respectively. The orbital period $P$, eccentricity $e$, and
semi-amplitude $K_1$ (the velocity wobble) are given here; more
details on the planetary orbits, such as the periastron passage
time $T$ and the angle between the periastron and the line of nodes
$\omega$, can be found in the related literature.
The radial velocity data were from the HIRES spectrograph (HD 12661
and HD 82943) mounted on 10-m Keck-1 telescope at the Keck
Observatory \citep{Vogt94}, the ELODIE echelle spectrograph (HD
50554 and HD 106252) and the SOPHIE spectrograph (HD 89307) mounted
on the Cassegrain focus of the 1.93-m telescope at Haute-Provence
Observatory \citep{Bar96}, and the CORALIE echelle spectrograph (HD
141937) mounted on the 1.2-m Euler Swiss telescope at La Silla
Observatory \citep{Que00,Udr00}, respectively.
\section{Stellar Models}
\label{sect:Mod}
\subsection{Input Physics}
To estimate the parameters of the sample stars, a grid calculation
was carried out with the Yale Rotating Stellar Evolution Code
(YREC) \citep{Pin90,Pin92,Demarque}, which includes diffusion,
angular momentum loss, angular momentum transport and
rotation-induced element mixing. Detailed descriptions of the model
can be found in \cite{Guen92}, \cite{Cha95} and \cite{Li03}. The
calculations were carried out with the up-to-date OPAL
equation-of-state tables EOS2005 \citep{Rogers}. The solar mixture
of GS98 \citep{Grevesse} ($Z_{\sun}$ = 0.0170 and $(Z/X)_{\sun}$ =
0.0230) was adopted, and hence the opacities were generated with
the GS98 composition \citep{Grevesse}, supplemented by the
low-temperature opacities of \citet{Ferguson}. The model atmosphere
follows the Eddington $T-\tau$ relation. We used the NACRE reaction
rates \citep{Angulo} for the nuclear reactions and the mixing-length
theory \citep{Bohm} for convection. Following the formulation of
\citet{Thoul}, the gravitational settling of helium and heavy
elements is considered in the stellar models.
When rotation is taken into account, the characteristics of a model
depend on six parameters: the mass $M$, age $t$, mixing-length
parameter $\alpha \equiv l/H_p$, the two parameters $X_{\rm{ini}}$
and $Z_{\rm{ini}}$ describing the initial chemical composition of
the star, and the rotational period $P_{\rm{rot}}$. To reproduce
the evolution of lithium, the evolution of the stars during the
pre-MS stage is considered, and hence we selected the initial model
for each calculation on the Hayashi line. All of the models were
evolved until the hydrogen supply in the core was exhausted. The
initial helium abundance ($Y_{\rm{ini}}$ = 0.275) and the
mixing-length parameter ($\alpha$ = 1.75) were held constant in the
grid computation.
The ranges of the variable parameters of the grid calculation and
their step sizes are shown in Table ~\ref{tbl2}. According to the
effective temperatures of the sample stars, we set the mass range
from 0.90 to 1.10 $M_{\sun}$ with a grid size of 0.01 $M_{\sun}$.
The range of the mass fraction of heavy elements $Z_{\rm{ini}}$,
which was derived from $Z_{\sun}$ and the observed [Fe/H], is from
0.010 to 0.040 with a grid size of 0.001. Although the initial
models were selected on the Hayashi line, we use the rotational
rate at the zero-age main sequence ($V_{\rm{ZAMS}}$) to describe
the rotational condition, for ease of interpretation. The range of
$V_{\rm{ZAMS}}$ is from 20 to 70 km s$^{-1}$, in steps of 10 km
s$^{-1}$.
\begin{table}
\begin{center}
\caption{Input Parameters for Theoretical Calculation.}\label{tbl2}
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
Parameter & Min & Max & $\delta^{\rm{a}}$ \\
\hline\noalign{\smallskip}
$M (M_{\sun})$ & 0.90 & 1.10 & 0.01 \\
Z & 0.010 & 0.040 & 0.001 \\
$V_{\rm{ZAMS}}$ (km $\rm{s}^{-1})$ & 20 & 70 & 10 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\tablecomments{0.56\textwidth}{$^{\rm{a}}$ The value of $\delta$
represents the increment between the minimum and maximum values.}
\end{table}
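For illustration, the grid of Table~\ref{tbl2} corresponds to enumerating the following combinations (a Python sketch; the stellar evolution code itself is of course not reproduced here), with $Y_{\rm{ini}}$ = 0.275 and $\alpha$ = 1.75 held fixed:
\begin{verbatim}
import numpy as np
from itertools import product

masses = np.arange(0.90, 1.10 + 1e-9, 0.01)    # M (solar masses)
z_ini  = np.arange(0.010, 0.040 + 1e-9, 0.001) # initial heavy-element fraction
v_zams = np.arange(20, 71, 10)                 # V_ZAMS (km/s)

grid = list(product(masses, z_ini, v_zams))    # (M, Z_ini, V_ZAMS) triples
print(len(grid))                               # 21 * 31 * 6 = 3906 tracks
\end{verbatim}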
\subsection{Angular Momentum Loss}
The braking law of \citet{Kaw88} is adopted for the angular
momentum loss:
\begin{equation}
\frac{{dJ}}{{dt}} = \left\{ \begin{array}{l}
- K{\Omega ^3}{({R}/{{{R_ {\sun} }}})^{1/2}}{({M}/{{{M_ {\sun} }}})^{ - 1/2}} (\Omega \le {\Omega _{sat}}) \\
\\
- K\Omega {\Omega _{sat}}^2{({R}/{{{R_ {\sun} }}})^{1/2}}{({M}/{{{M_ {\sun} }}})^{ - 1/2}} (\Omega > {\Omega _{sat}}), \\
\end{array} \right.
\end{equation}
where the parameter $K$ is a constant for all stars and is
associated with the magnetic field strength, and
$\Omega_{\rm{sat}}$ is the surface angular velocity at which
magnetic saturation occurs in the star. Both $K$ and
$\Omega_{\rm{sat}}$ are free parameters; following \citet{Bou97},
we set $K$ = 2.0 $\times$ $10^{47}$ $\rm{g}$ $\rm{cm^2}$ $\rm{s}$
and $\Omega_{\rm{sat}}$ = 14 $\Omega_{\sun}$.
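A direct transcription of Eq. (1) with the adopted parameter values reads as follows in Python; the solar angular velocity used to express $\Omega_{\rm{sat}}$ is an assumed value.
\begin{verbatim}
import numpy as np

K_BRAKE   = 2.0e47     # g cm^2 s, value adopted above
OMEGA_SUN = 2.9e-6     # rad/s, assumed solar angular velocity
OMEGA_SAT = 14.0 * OMEGA_SUN

def djdt(omega, r_rsun, m_msun):
    """Angular momentum loss rate dJ/dt of Eq. (1)."""
    factor = K_BRAKE * np.sqrt(r_rsun / m_msun)  # (R/Rsun)^(1/2)(M/Msun)^(-1/2)
    if omega <= OMEGA_SAT:
        return -factor * omega**3                # unsaturated regime
    return -factor * omega * OMEGA_SAT**2        # saturated regime
\end{verbatim}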
\subsection{Extra-mixing in the Radiative Region}
Besides the microscopic diffusion of elements mentioned above,
angular momentum transport and element mixing caused by rotation
are taken into account in the radiative regions. These processes
can be described by a pair of diffusion equations \citep{Cha95}:
\begin{equation}
\rho r^2\frac{I}{M}\frac{d\Omega}{dt} = \frac{d}{dr}\left(\rho
r^2\frac{I}{M}D_{rot}\frac{d\Omega}{dr}\right),
\end{equation}
\begin{equation}
\rho {r^2}\frac{{d{X_i}}}{{dt}} = \frac{d}{{dr}}\left[\rho
{r^2}{D_{m,1}}{X_i} + \rho {r^2}({D_{m,2}} +
{f_c}{D_{rot}})\frac{{d{X_i}}}{{dt}}\right],
\end{equation}
where $\Omega$ is the angular velocity, $X_i$ is the mass fraction
of chemical species $i$, and $I/M$ is the moment of inertia per unit
mass. $D_{m,1}$ and $D_{m,2}$ are the microscopic diffusion
coefficients. $D_{rot}$ is the diffusion coefficient caused by
rotation-induced mixing. More details of these diffusion
coefficients are given by \citet{Cha95}. The tunable parameter
$f_c$ scales the efficiency of rotation-induced element
mixing. It is calibrated to observations, i.e., the depletion of
lithium in our solar model must reproduce the observed depletion in the Sun
\citep{Cha95}.
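As an illustration of how such a diffusion equation can be advanced numerically, the sketch below performs one explicit finite-difference step of the composition equation on a uniform radial grid; it retains only the diffusive term with a combined coefficient $D$ (the $D_{m,1}X_i$ term is omitted for brevity), and the grid, time step, and boundary treatment are simplifying assumptions rather than the scheme actually used in our calculations.
\begin{verbatim}
import numpy as np

def diffuse_step(X, r, rho, D, dt):
    """One explicit step of rho r^2 dX/dt = d/dr(rho r^2 D dX/dr)
    on a uniform radial grid (boundary cells held fixed)."""
    dr = r[1] - r[0]
    r_f, rho_f, D_f = (0.5*(a[1:] + a[:-1]) for a in (r, rho, D))
    flux = rho_f * r_f**2 * D_f * np.diff(X) / dr   # flux at cell interfaces
    Xn = X.copy()
    Xn[1:-1] += dt * np.diff(flux) / (rho[1:-1] * r[1:-1]**2 * dr)
    return Xn   # stability requires dt < dr^2 / (2 max(D))
\end{verbatim}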
\section{Results}
\label{sect:Res}
\subsection{Stellar Parameters}
\begin{figure}
\centering
\includegraphics[width=\textwidth, angle=0]{MS1838Fig1.eps}
\caption{Evolutionary tracks of HD 12661, HD 50554, HD 82943, HD
89307, HD 106252, and HD 141937 in the H-R diagram constrained by a)
$T_{\rm{eff}}$ + $L$ + [Fe/H]; b) $T_{\rm{eff}}$ + $L$ + [Fe/H] +
log N(Li); c) $T_{\rm{eff}}$ + $L$ + [Fe/H] + $\log$ N(Li) +
$P_{\rm{rot}}$.} \label{Fig:1}
\end{figure}
We calculated a series of evolutionary models within the estimated $M$ and
$Z_{\rm{ini}}$ ranges to reproduce the observational constraints of these
six EH stars. As shown in Fig.~\ref{Fig:1}, the evolutionary tracks of
each star that satisfy the observational constraints are plotted.
For simplicity, the case of HD 12661 is taken
as an example.
First, three classical observables, the effective
temperature $T_{\rm{eff}}$, the luminosity $L$, and the metallicity [Fe/H],
were considered, and 157 tracks were found to fit these three
observational constraints. The mass and age of HD 12661 provided by
these models are 1.02 $\pm$ 0.03 $M_{\sun}$ and 6.76 $\pm$ 4.31 Gyr.
Secondly, the lithium abundance was taken into account. Lithium is a
particularly informative element since it is readily burned in stellar
interiors; its abundance indicates the extent of element mixing in a
star, and its depletion depends strongly on the stellar mass and age
\citep{Do Nascimento09,Li12}.
In this step, only 76 evolutionary tracks fit the four
observational constraints, including the lithium abundance $\log$ $N$
(Li), and we estimate the mass and age of HD 12661 to be 1.02
$\pm$ 0.02 $M_{\sun}$ and 5.56 $\pm$ 3.01 Gyr. Moreover, the lithium
abundance narrows the ranges of the input parameters, so the possible
position of the star in the H-R diagram is restricted to a smaller
region than obtained above.
Finally, after adding the rotational period as a
constraint, only 30 evolutionary tracks fit the
observed $P_{\rm{rot}}$. The range of $V_{\rm{ZAMS}}$ is
significantly reduced, to 30--40 km s$^{-1}$. In the same
way as the lithium abundance, the rotational period reduces the ranges
of the input parameters of the stellar models, and hence
it further constrains the possible position of the star
in the H-R diagram, as shown in Fig.~\ref{Fig:1}c. The rotational
period $P_{\rm{rot}}$ thus helps us determine the mass and age of HD
12661 even more precisely: 1.02 $\pm$ 0.02 $M_{\sun}$ and
6.39 $\pm$ 1.94 Gyr.
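The successive tightening of the parameter space can be pictured as a simple filtering of the model grid by observational error boxes, as in the hypothetical sketch below; the field names and the numerical values in the commented example are placeholders, not the actual observables of any sample star.
\begin{verbatim}
import numpy as np

def select_tracks(models, obs):
    """Keep grid models lying within the 1-sigma box of every observable.
    `models` is a NumPy record array (one row per model point); `obs`
    maps an observable name to a (value, sigma) pair."""
    mask = np.ones(len(models), dtype=bool)
    for name, (value, sigma) in obs.items():
        mask &= np.abs(models[name] - value) <= sigma
    return models[mask]

# Illustrative usage with placeholder values:
# sel = select_tracks(grid, {"Teff": (5750.0, 50.0), "L": (1.2, 0.1),
#                            "FeH": (0.30, 0.05), "logN_Li": (2.0, 0.3),
#                            "Prot": (35.0, 2.0)})
\end{verbatim}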
The same method was adopted for all the other EH stars, and their
evolutionary tracks are plotted in Fig.~\ref{Fig:1}, with one set of
tracks per star. Comparing HD 12661 with the other stars,
we find that it occupies a larger area in the H-R diagram than the five
other stars in Fig.~\ref{Fig:1}, which is due to the large uncertainty
in its lithium abundance.
\subsection{Comparison with Previous Results}
\begin{figure}
\centering
\includegraphics[width=\textwidth, angle=0]{MS1838Fig2.eps}
\caption{Comparisons between masses and ages determined by our model
(the black error bar) and estimates of previous studies for all six
EH stars. The red and blue error bars represent the results of
\cite{VF05} and \cite{Ghe10a}, respectively.}\label{Fig:3}
\end{figure}
These six EH stars have previously been studied by several groups;
the methods and estimates of two of these studies are listed in
Table~\ref{tbl4}. \citet{Ghe10a} and \citet{VF05} observed these
stars and derived their masses, radii, and ages through different
methods. In the following paragraphs, we compare their isochrone-based
results with ours. The comparisons of the
masses and ages of the six EH stars are plotted in
Fig.~\ref{Fig:3}.
For the six EH stars, the results of \citet{Ghe10a} have a mass
uncertainty of $\sim$ 0.10 M$_{\sun}$ and an age uncertainty of $\sim$ 2.0
Gyr. The mass determinations of \citet{VF05} are close to those of
\citet{Ghe10a} but with higher precision, i.e., $\triangle M$ $\sim$
0.05 M$_{\sun}$. Our mass estimates of HD 12661, HD 50554, HD
82943, HD 89307, HD 106252, and HD 141937 are $1.02 \pm 0.02
M_{\sun}$, $1.04 \pm 0.01 M_{\sun}$, $1.04 \pm 0.01 M_{\sun}$, $1.05
\pm 0.01 M_{\sun}$, $1.03 \pm 0.03 M_{\sun}$, and $1.03 \pm 0.02
M_{\sun}$, respectively, most of which are lower than those
obtained by \citet{Ghe10a} and \citet{VF05}. This result
is due to the element transport caused by the interaction
between diffusion and rotation-induced mixing in the stellar
radiative region \citep{Cha95,Egg10}. This element
transport changes the chemical composition of the external layers
and hence shifts the evolutionary tracks towards the hot side of the H-R
diagram. Thus, for a given observed effective temperature, the
rotational model tends to yield a lower mass than
the standard model. The precision of our mass
determinations, 0.01--0.03 $M_{\sun}$, is the best of the three.
The ages of these six EH stars provided by \citet{VF05} are mostly older
than those of \citet{Ghe10a}, with a similar precision, i.e.,
$\triangle t$ $\sim$ 2.0 Gyr. Our age determinations generally agree
with those of previous works within the errors, and are considerably more
precise ($\triangle t$ $\sim$ 0.5 Gyr) than those determined by
interpolating isochrones (see Table~\ref{tbl4}). Moreover,
combining the rotational periods listed in Table~\ref{tbl1} with the
ages obtained here, we find a positive correlation
between them.
This result is reasonable, because the depletion of lithium is a
function of stellar mass, age, rotation rate, and metallicity,
while the rotational period increases with age during the main
sequence. Therefore, these two additional observational constraints
can effectively restrict the ranges of the input parameters and improve
the precision of the stellar models.
\begin{table}
\centering
\caption{Stellar Parameters and Comparison with Previous Studies.}\label{tbl4}
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Star & $M$ & t & $\emph{R}$ &Method &Ref.\\
& ($M_{\sun}$) & (Gyr) & ($R_{\sun}$) & & \\
\hline\noalign{\smallskip}
HD 12661 &0.96$\pm$0.47 &... &1.04$\pm$0.08 & Spectroscopic & (1)\\
&1.10$\pm$0.10 &$1.0^{+2.5}_{-1.0}$ &... & Isochrones & (1)\\
&1.22$\pm$0.18 &... &1.124$\pm$0.037 & Spectroscopic & (2)\\
&$1.13^{+0.05}_{-0.04}$ &$4.2^{+1.4}_{-2.0}$ &... & Isochrones & (2)\\
&1.02$\pm$0.02 &6.39$\pm$1.94 &1.11$\pm$0.08 & This work & \\
& & & & & \\
HD 50554 &0.81$\pm$0.39 &... &1.07$\pm$0.08 & Spectroscopic & (1)\\
&1.05$\pm$0.10 &$3.5^{+2.5}_{-2.5}$ &... & Isochrones & (1)\\
&0.93$\pm$0.14 &... &1.149$\pm$0.039 & Spectroscopic & (2)\\
&$1.06^{+0.06}_{-0.05}$ &$4.6^{+2.3}_{-2.5}$ &... & Isochrones & (2)\\
&1.04$\pm$0.01 &2.16$\pm$0.29 &1.02$\pm$0.02 & This work & \\
& & & & & \\
HD 82943 &1.03$\pm$0.50 &... &1.10$\pm$0.09 & Spectroscopic & (1)\\
&1.20$\pm$0.10 &$1.0^{+1.0}_{-1.0}$ &... & Isochrones & (1)\\
&1.22$\pm$0.17 &... &1.125$\pm$0.029 & Spectroscopic & (2)\\
&$1.19^{+0.04}_{-0.04}$ &$2.6^{+0.9}_{-1.5}$ &... & Isochrones & (2)\\
&1.04$\pm$0.01 &2.35$\pm$0.33 &1.03$\pm$0.02 & This work & \\
& & & & & \\
HD 89307 &0.85$\pm$0.41 &... &1.11$\pm$0.09 & Spectroscopic & (1)\\
&1.00$\pm$0.10 &$7.0^{+2.0}_{-2.0}$ &... & Isochrones & (1)\\
&0.91$\pm$0.13 &... &1.069$\pm$0.035 & Spectroscopic & (2)\\
&$1.01^{+0.04}_{-0.05}$ &$5.4^{+2.6}_{-3.3}$ &... & Isochrones & (2)\\
&1.05$\pm$0.01 &2.31$\pm$0.15 &1.01$\pm$0.01 & This work & \\
& & & & & \\
HD 106252 &1.16$\pm$0.57 &... &1.08$\pm$0.09 & Spectroscopic & (1)\\
&1.05$\pm$0.10 &$3.5^{+2.5}_{-2.5}$ &... & Isochrones & (1)\\
&1.01$\pm$0.15 &... &1.093$\pm$0.040 & Spectroscopic & (2)\\
&$1.03^{+0.05}_{-0.05}$ &$5.4^{+2.6}_{-3.2}$ &... & Isochrones & (2)\\
&1.03$\pm$0.03 &4.19$\pm$0.65 &1.05$\pm$0.01 & This work & \\
& & & & & \\
HD 141937 &0.58$\pm$0.28 &... &0.95$\pm$0.08 & Spectroscopic & (1)\\
&1.10$\pm$0.10 &$1.0^{+1.0}_{-1.0}$ &... & Isochrones & (1)\\
&1.07$\pm$0.13 &... &1.056$\pm$0.039 & Spectroscopic & (2)\\
&$1.10^{+0.04}_{-0.04}$ &$4.2^{+1.4}_{-2.2}$ &... & Isochrones & (2)\\
&1.03$\pm$0.02 &2.32$\pm$0.41 &0.99$\pm$0.06 & This work & \\
& & & & & \\
\noalign{\smallskip}\hline
\end{tabular}
\begin{list}{}{}
\item[$^{\mathrm{}}$] Reference: (1) \cite{Ghe10a}; (2) \cite{VF05}.
\end{list}
\end{table}
\subsection{Planetary Parameters}
For a given planetary system with known orbital parameters, we can
calculate the mass function \citep{San08}:
\begin{equation}\label{eq1}
f(m) = \frac{(M_2\sin i)^3}{(M_1+M_2)^2} = 1.036 \times
10^{-7}\,K_1^3(1-e^2)^{3/2}P,
\end{equation}
where $M_1$ and $M_2$ are the masses of the star and the planet, $i$ is
the inclination of the line of sight with respect to the orbital
axis, $K_1$ is the radial-velocity semi-amplitude of the star
with mass $M_1$, $e$ is the orbital eccentricity, and $P$ is the orbital
period.
Furthermore, from Kepler's third law:
\begin{equation}\label{eq2}
\frac{a^3}{P^2} = \frac{G(M_1+M_2)}{4\pi^2}
\end{equation}
where $a$ is the orbital semimajor axis and $G$ is the
universal gravitational constant.
From Equations~\ref{eq1} and \ref{eq2}, combined with the stellar masses
determined in the previous section, the minimum masses $M_2\sin i$
and orbital semimajor axes $a$ of the planets can be obtained, as shown
in Table~\ref{tbl6}. It should be noted that the uncertainty of our
estimates consists of two parts. One is associated with the
observations, namely the errors of $P$, $e$, and $K_1$ (listed in
Table~\ref{tbl5}). The other is produced by the model, specifically
the error of the stellar mass. We summarize these two contributions
separately in Table~\ref{tbl6}. Compared with the
results of previous studies, our determinations are more precise,
whether or not the observational errors are included.
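A minimal sketch of this inversion is given below, assuming the customary units implied by the numerical constant in Equation~\ref{eq1} ($K_1$ in km s$^{-1}$, $P$ in days, masses in $M_{\sun}$) and $\sin i = 1$ for the minimum mass; the numbers in the usage comment are placeholders, not values from Table~\ref{tbl5}.
\begin{verbatim}
def minimum_planet_mass(m1, k1, p, e, n_iter=50):
    """Minimum mass M2*sin(i) in M_sun from the mass function,
    with m1 in M_sun, k1 in km/s, p in days (sin i = 1 assumed)."""
    f = 1.036e-7 * k1**3 * (1.0 - e**2)**1.5 * p
    m2 = 0.0
    for _ in range(n_iter):            # fixed-point iteration of Eq. (1)
        m2 = (f * (m1 + m2)**2) ** (1.0 / 3.0)
    return m2

def semimajor_axis(m1, m2, p):
    """Semimajor axis in AU from Kepler's third law
    (masses in M_sun, p in days)."""
    return ((m1 + m2) * (p / 365.25)**2) ** (1.0 / 3.0)

# Illustrative usage with placeholder values:
# m2sini = minimum_planet_mass(m1=1.04, k1=0.05, p=500.0, e=0.1)
# a = semimajor_axis(1.04, m2sini, 500.0)
\end{verbatim}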
\cite{Bat13} pointed out that a 0.1 $M_{\sun}$ companion would
induce a systematic error of approximately 2\%. Correspondingly, the
accuracy and precision of the parameters of the EH star have a strong
impact on our estimates of the planetary properties. Therefore,
accurate knowledge of the EH star is extremely important for the study
of its exoplanets.
\begin{table}
\begin{center}
\caption{Planetary Parameters and Comparison with Previous
Studies.}\label{tbl6}
\begin{tabular}{l|ccc|cccccc}
\hline\noalign{\smallskip}
& \multicolumn{3}{c|}{Previous Studies} & \multicolumn{6}{c}{This Work}\\
Planet & $M\sin i$ & $a$ & Ref & $M\sin i$ & $\delta_M^{obs}$ & $\delta_M^{theo}$ & $a$ & $\delta_a^{obs}$ & $\delta_a^{theo}$\\
& $(M_{\rm{Jup}})$ & (AU)& & $(M_{\rm{Jup}})$ & $(M_{\rm{Jup}})$ & $(M_{\rm{Jup}})$ & (AU) & (AU) & (AU) \\
\hline\noalign{\smallskip}
HD 12661b & 2.30 $\pm$ 0.19 & 0.831 $\pm$ 0.048 & (1) & 2.176 & 0.024 & 0.028 & 0.8079 & 0.0001 & 0.0052 \\
HD 12661c & 1.92 $\pm$ 0.16 & 2.90 $\pm$ 0.17 & (1) & 1.812 & 0.042 & 0.023 & 2.8145 & 0.0153 & 0.0182 \\
HD 50554b & 5.16 & 2.41 & (2) & 4.954 & 0.388 & 0.031 & 2.3530 & 0.0446 & 0.0075 \\
HD 82943b & 1.59 & 1.1866 & (3) & 1.500 & 0.067 & 0.009 & 1.1510 & 0.0053 & 0.0036 \\
HD 82943c & 1.58 & 0.7423 & (3) & 1.500 & 0.071 & 0.009 & 0.7209 & 0.0017 & 0.0023 \\
HD 82943d & 0.294 $\pm$ 0.031 & 2.137 $\pm$ 0.017 & (4) & 0.278 & 0.030 & 0.001 & 2.0766 & 0.0167 & 0.0066 \\
HD 89307b & 2.0 $\pm$ 0.4 & 3.34 $\pm$ 0.17 & (5) & 2.074 & 0.356 & 0.013 & 3.3632 & 0.0619 & 0.0106 \\
HD 106252b & 7.56 & 2.70 & (2) & 7.613 & 0.364 & 0.147 & 2.7033 & 0.0202 & 0.0259 \\
HD 141937b & 9.7 & 1.52 & (6) & 9.316 & 0.306 & 0.120 & 1.4877 & 0.0018 & 0.0095 \\
\noalign{\smallskip}\hline
\end{tabular}
\begin{list}{}{}
\item[$^{\mathrm{}}$] Reference: (1) \cite{Wri09}; (2) \cite{Per03}; (3) \cite{Tan13}; (4) \cite{Bal14}; (5) \cite{Boi12}; (6) \cite{Udr02}.
\end{list}
\end{center}
\end{table}
\section{Discussion and Conclusion}
\label{sect:Con}
We investigated the physical state of the six EH stars
and their planets by employing the method presented by
\citet{Do Nascimento09}. In addition to the commonly used observables,
we added two observational constraints, the lithium abundance
$\log$ $N$ (Li) and the rotational period $P_{\rm{rot}}$, to better
determine the fundamental parameters of the EH stars and their planets.
We first estimated the stellar masses and ages using only the
effective temperature $T_{\rm{eff}}$ and luminosity $L/L_{\sun}$ as
observational constraints. The resulting uncertainties of the mass and
age are approximately 0.05 $M_{\sun}$ and 4.0 Gyr. When we considered the
lithium abundance $\log$ $N$ (Li) and rotational period
$P_{\rm{rot}}$ in our analysis, we obtained more precise
determinations. The lithium abundance helped us to reduce the
errors of the masses and ages to 0.03 $M_{\sun}$ and 3.0 Gyr,
respectively. Additionally, we used the rotational period
$P_{\rm{rot}}$ to restrict the stellar models based on the former
results; the precision is thereby improved to $\Delta M$ $\sim$ 0.02
$M_{\sun}$ and $\Delta t$ $\sim$ 0.5 Gyr. Because of the
precise determination of the age, we also constrained the atmospheric
characteristics more tightly than the observations alone and positioned
the stars more precisely in the H-R diagram. Finally, we obtained
accurate planetary parameters, i.e., the minimum masses $M_2\sin i$
and orbital semimajor axes $a$, by combining the RV measurements with the
stellar masses determined above.
To completely characterize a system and obtain accurate
properties of the planet, i.e., its mass, radius, and density, one
needs photometric transits, RV observations, and the properties
of the EH star. In the future, we hope to conduct further studies with
data from the Gaia mission.
\begin{acknowledgements}
This work is supported by the grants 10933002, 11273007 and 11273012
from the National Natural Science Foundation of China, and the
Fundamental Research Funds for the Central Universities.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
Rigorous three-dimensional (3D) simulations of neutrino-driven core-collapse supernovae have become highly successful in recent years \citep[e.g.,][]{melson_15b,lentz_15,takiwaki_14,burrows_19}, and have made clear headway in explaining the properties of supernova explosions and the compact objects born in these events \citep{mueller_17,Mueller2019,Burrows2020b,Powell2020, Bollig2020, mueller_20}.
The latest 3D models are able to reproduce a range of explosion energies up to $10^{51}\mathrm{erg}$ \citep{Bollig2020}, and yield neutron star birth masses, kicks, and spins largely compatible with the population of observed young pulsars \citep{Mueller2019,Burrows2020b}.
A number of ingredients have contributed, or have
the potential to contribute, to making modern
neutrino-driven explosion models more robust.
Various microphysical effects such as
reduced neutrino scattering opacities due
to nucleon strangeness \citep{melson_15b} and nucleon correlations at high densities \citep{horowitz_17,Bollig2017,burrows_18}, muonisation \citep{Bollig2017}, and large effective nucleon masses \citep{yasin_20} can be conducive to neutrino-driven shock revival.
In addition, a particularly important turning point has been the advent of 3D \emph{progenitor} models and the recognition that
asphericities seeded prior to collapse can precipitate ``perturbation-aided'' neutrino-driven explosions
\citep{couch_13,couch_15, mueller_15a, mueller_16b,mueller_17,mueller_20,Bollig2020}. In perturbation-aided explosions, the moderately subsonic solenoidal velocity perturbations
in active convective shells at the pre-collapse stage
with Mach numbers of order $\mathord{\sim}0.1$
\citep{Collins2018} are transformed into strong
density and pressure perturbations at the
shock \citep{mueller_15a,takahashi_14,takahashi_16,abdikamalov_20}, and effectively strengthen the violent non-radial flow behind the shock
that develops due to convective
instability \citep{herant_94,burrows_95,janka_95,janka_96} and the standing accretion shock instability \citep{blondin_03,foglizzo_07}, thereby supporting neutrino-driven shock revival.
The discovery of perturbation-aided explosions has greatly enhanced interest in multi-dimensional models of late convective burning stages in massive stars. Several studies have by now followed the immediate pre-collapse phase of silicon and/or oxygen shell burning to collapse in 3D
\citep{couch_15,mueller_16b,mueller_16c,Mueller2019,Yoshida2019,Yadav2020,McNeill2020,yoshida_20}. In addition, there has been a long-standing strand of research since the 1990s \citep[e.g.,][]{bazan_94,bazan_97,meakin_06,meakin_07,meakin_07b,mueller_16c,jones_17,cristini_17,cristini_19} into the detailed behaviour of stellar convection during advanced burning stages with a view to the \emph{secular} impact of 3D effects not captured by spherically symmetric stellar evolution models based on mixing-length theory \citep{Biermann1932,Boehm1958}. The details of convective boundary mixing by processes such as quasi-steady turbulent entrainment \citep{Fernando1991,strang_01,meakin_07}
or violent shell mergers \citep{mocak_18,Yadav2020} have received particular attention for their potential to alter the core structure of massive stars and hence affect the dynamics and final nucleosynthesis yields of the subsequent supernova explosion.
Three-dimensional simulations of convection during advanced burning stages have so far largely disregarded two important aspects of real stars -- rotation and magnetic fields. The effects of rotation have been
touched upon by the seminal work of \citet{kuhlen_03}, but more recent studies \citep{arnett_10,chatzopoulos_16} have been limited to axisymmetry (2D). Magnetohydrodynamic (MHD) simulations of convection, while common and mature in the context of the Sun \citep[for a review see, e.g.,][]{charbonneau_14} have yet to be performed for advanced burning stages of massive stars.
Simulations of magnetoconvection during the late burning stages, both in rotating and non-rotating stars, are a big desideratum for several reasons.
Even in slowly rotating massive stars, magnetic fields may have a non-negligible impact on the dynamics of neutrino-driven explosions \citep{obergaulinger_14,Mueller2020}, and although efficient field amplification processes operate
in the supernova core \citep{endeve_12,Mueller2020},
it stands to reason that memory of the initial fields may not be lost entirely, especially for strong fields in the progenitor and early explosions. A better understanding of the interplay between convection, rotation, and magnetic fields in supernova progenitors
is even more critical for the magnetorotational explosion scenario \citep[e.g.,][]{burrows_07a,Winteler2012a,Mosta2014,moesta_18,Obergaulinger2020,Obergaulinger2020b,kuroda_20,aloy_21}, which probably
explains rare, unusually energetic ``hypernovae'' with energies of up to $\mathord{\sim}10^{52}\,\mathrm{erg}$.
Again, even though the requisite strong magnetic
fields may be generated after collapse by amplification processes like the magnetorotational instability \citep{balbus_91,akiyama_03} or an $\alpha$-$\Omega$
dynamo in the proto-neutron star \citep{duncan_92,thompson_93,Raynaud2020}, a robust understanding of magnetic
fields during late burning is indispensable for reliable
hypernova models on several counts. For sufficiently
strong seed fields in the progenitor, the initial
field strengths and geometry could have a significant
impact on the development of magnetorotational explosions after collapse \citep{bugli_20,aloy_21}. Furthermore, our understanding of evolutionary pathways towards hypernova explosions
\citep{woosley_06,yoon_05,yoon_10,cantiell_07,aguilera_18,aguilera_20}
is intimately connected
with the effects of magnetic fields
on angular momentum transport in stellar interiors
\citep{spruit_02,heger_05,fuller_19,takahashi_20}.
Beyond the impact of magnetic fields on the pre-supernova
evolution and the supernova itself, the interplay of convection, rotation, and magnetic fields is obviously relevant to the origin of neutron star magnetic fields as well. It still remains to be explained what shapes the distribution of magnetic fields among young pulsars, and why some neutron stars are born as magnetars with extraordinarily strong dipole fields of up to $10^{15}\,\mathrm{G}$
\citep{olausen_14,tauris_15,Kaspi2017,Enoto2019}. Are these strong fields of fossil origin \citep{ferrario_05,Ferrario2009,schneider_20} or generated by
dynamo action during or after the supernova \citep{duncan_92,thompson_93}?
Naturally, 3D MHD simulations of the late burning stages cannot comprehensively answer all of these questions. In order to connect to
observable magnetic fields of neutron stars, an integrated approach is required that combines stellar evolution over secular time scales, 3D stellar hydrodynamics, supernova modelling, and local simulations, and also addresses aspects like field burial \citep{vigano_13,torres_16} and the long-time evolution
of magnetic fields \citep{aguilera_08,de_grandis_20}.
Significant obstacles need to be overcome before one
can construct a pipeline
from 3D progenitor models to 3D supernova simulations and beyond.
However, 3D MHD simulations of convective burning can already address meaningful questions despite the complexity of the overall problem.
As in the purely hydrodynamic case, the first step must be to understand the principles governing magnetohydrodynamic convection
during the late burning stages based on somewhat
idealised simulations that are broadly representative of
the conditions in convective cores and shells of massive stars.
In this study, we present a first simulation of magnetoconvection
during the final phase of oxygen shell burning using the
ideal MHD approximation. In the tradition of
semi-idealised models of late-stage convection, we use a setup
that falls within the typical range of conditions in the
interiors of massive stars in terms of convective Mach number
and shell geometry \citep[for an overview see][]{Collins2018}
and is not designed as a fully self-consistent
model of any one particular star.
This simulation constitutes a first
step beyond effective 1D prescriptions in stellar evolution models
to predict the magnetic field strength and geometry
encountered in the inner shells of massive stars at the pre-supernova
stage. We also compare to a corresponding non-magnetic model of
oxygen shell convection to gauge the feedback of magnetic fields
on the convective flow with a particular view to two important issues.
First, the efficiency of the ``perturbation-aided'' neutrino-driven mechanism
depends critically on the magnitude of the convective velocities
during shell burning, and it is important to determine whether
magnetic fields can significantly slow down convective motions
as suggested by some recent simulations of solar convection
\citep{hotta_15}.
Second, magnetic fields could quantitatively
or qualitatively affect shell growth by turbulent entrainment,
which has been consistently seen in all recent 3D hydrodynamics
simulations of late-stage convection in massive stars.
Our paper is structured as follows. In Section \ref{sec:methods}, we describe the
numerical methods, progenitor model, and initial conditions
used in our study and discuss the potential role of non-ideal effects. The results of the simulations are presented in Section~\ref{sec:results}. We first focus on the strength and geometry of the emerging magnetic field
and then analyse the impact of magnetic fields on the flow, and in particular
on entrainment at shell boundaries. We summarize our results and discuss their implications in Section~\ref{sec:conclusion}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{composition_s_rho.pdf}
\caption{Profiles of selected mass fractions $X_i$ (top), density $\rho$, and specific
entropy $s$ in the $18\, M_\odot$ \textsc{Kepler} stellar evolution model at the time
of mapping to \textsc{CoCoNuT}. The boundaries of the simulated
domain are indicated by dashed vertical lines. Note
subtle differences to Figure~1 in \citet{mueller_16c}, which
shows the same stellar evolution model at the onset of collapse.}
\label{fig:1d_model}
\end{figure}
\section{Numerical Methods and Simulation Setup}
\label{sec:methods}
We simulate oxygen and neon shell burning
with and without magnetic fields
in a non-rotating $18\,M_\odot$ solar-metallicity star calculated using the stellar evolution code \textsc{Kepler} \citep{Weaver1978, Woosley2002, Heger2010}. The same progenitor model has previously been used in the shell convection simulation of \citet{mueller_16c}.
The structure of the stellar evolution model at
the time of mapping to 3D is illustrated
in Figure~\ref{fig:1d_model}. The model contains
two active convective shells with sufficiently
short turnover times to make time-explicit simulations feasible.
The oxygen shell extends from
$1.75\, M_\odot$ to $2.25\, M_\odot$ in enclosed
mass and from $3,400\, \mathrm{km}$
to $7,900\,\mathrm{km}$ in radius, immediately
followed further out by the neon shell
out to $3.53\, M_\odot$ in mass and
$27,000\, \mathrm{km}$ in radius.
For our 3D simulations we employ the Newtonian magnetohydrodynamic (MHD) version of the \textsc{CoCoNuT} code as described in \citet{Mueller2020}.
The MHD equations are solved in spherical
polar coordinates using the HLLC (Harten-Lax-van Leer-Contact) Riemann solver \citep{Gurski2004a, Miyoshi2005a}. The divergence-free condition $\nabla\cdot\mathbf{B} = 0$ is maintained using a
modification
of the original hyperbolic divergence cleaning scheme of \citet{Dedner2002} that allows for a variable cleaning speed while still maintaining total energy conservation using an idea similar to \citet{Tricco2016}. Compared to the original cleaning method, we rescale
the Lagrange multiplier $\psi$ to
$\hat{\psi}=\psi/c_\mathrm{h}$, where the cleaning speed $c_\mathrm{h}$ is chosen to be the fast magnetosonic speed. Details of this approach and differences
to \citet{Tricco2016} are discussed in Appendix~\ref{app:eglm}.
The extended system of MHD equations for the density $\rho$, velocity $\mathbf{v}$, magnetic field $\mathbf{B}$, the total energy density $\hat{e}$, mass
fractions $X_i$, and the rescaled Lagrange multiplier $\hat{\psi}$ reads,
\begin{eqnarray}
\partial_t \rho
+\nabla \cdot \rho \mathbf{v}
&=&
0,
\\
\partial_t (\rho \mathbf v)
+\nabla \cdot \left(\rho \mathbf{v}\mathbf{v}-
\frac{\mathbf{B} \mathbf{B}}{4\pi}
+P_\mathrm{t}\mathcal{I}
\right)
&=&
\rho \mathbf{g}
-
\frac{(\nabla \cdot\mathbf{B}) \mathbf{B}}{4\pi}
,
\\
\partial_t {\hat{e}}+
\nabla \cdot
\left[(e+P_\mathrm{t})\mathbf{u}
-\frac{\mathbf{B} (\mathbf{v}\cdot\mathbf{B})
{-c_\mathrm{h} \hat{\psi} \mathbf{B}}}{4\pi}
\right]
&=&
\rho \mathbf{g}\cdot \mathbf{v}
+
\rho \dot{\epsilon}_\mathrm{nuc}
,
\\
\partial_t \mathbf{B} +\nabla \cdot (\mathbf{v}\mathbf{B}-\mathbf{B}\mathbf{v})
+\nabla (c_\mathrm{h} \hat{\psi})
&=&0,
\\
\partial_t \hat{\psi}
+c_\mathrm{h} \nabla \cdot \mathbf{B}
&=&-\hat{\psi}/\tau,
\\
\partial_t (\rho X_i)
+\nabla \cdot (\rho X_i \mathbf{v})
&=&
\rho \dot{X}_i
,
\end{eqnarray}
where $\mathbf{g}$ is the gravitational acceleration, $P_\mathrm{t}$ is the total pressure, $\mathcal{I}$ is the Kronecker tensor, $c_\mathrm{h}$ is the hyperbolic cleaning speed, $\tau$ is the damping time scale for divergence cleaning, and $\dot{\epsilon}_\mathrm{nuc}$ and $\dot{X}_i$ are energy and mass fraction
source terms from nuclear reactions. This
system conserves the volume integral of a
modified total energy density $\hat{e}$,
which also contains the cleaning field $\hat{\psi}$,
\begin{equation}
{\hat{e}}
=\rho \left(\epsilon+\frac{v^2}{2}\right)+\frac{B^2+\hat{\psi}^2}{8\pi},
\end{equation}
where $\epsilon$ is the mass-specific internal energy.
Note that the energy equation differs from that in \citet{Tricco2016}
because we implement divergence cleaning on an Eulerian grid
and, unlike in their smoothed particle hydrodynamics approach, do not
advect $\psi$ with the flow. If translated to the
Eulerian form of the MHD equations, the approach of
\citet{Tricco2016} would give rise to an extra advection
term in the evolution equation for $\psi$, which then
requires an additional term in the energy equation to ensure
energy conservation. In our Eulerian approach, these extra terms
are not needed.
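As a rough illustration of two ingredients of the cleaning step, the sketch below shows an isotropic estimate of the fast magnetosonic speed that can serve as $c_\mathrm{h}$ and the operator-split damping of $\hat{\psi}$; the assumed adiabatic index and the decoupling from the Riemann-solver update are simplifications for the example, not the production algorithm of \textsc{CoCoNuT}.
\begin{verbatim}
import numpy as np

def cleaning_speed(rho, p_gas, b2, gamma=4.0/3.0):
    """Isotropic upper bound on the fast magnetosonic speed (used as c_h);
    gamma is an assumed adiabatic index."""
    cs2 = gamma * p_gas / rho          # adiabatic sound speed squared
    ca2 = b2 / (4.0 * np.pi * rho)     # Alfven speed squared
    return np.sqrt(cs2 + ca2)

def damp_psi(psi_hat, dt, tau):
    """Operator-split integration of d(psi_hat)/dt = -psi_hat/tau."""
    return psi_hat * np.exp(-dt / tau)
\end{verbatim}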
Viscosity and resistivity are not included explicitly
in the ideal MHD approximation,
and only enter through
the discretisation of the MHD equations,
specifically
the spatial reconstruction, the computation of the Riemann fluxes, and the update of the
conserved quantities in the
case of reconstruct-solve-average
schemes.
\footnote{
For an attempt to quantify the numerical diffusivities,
see, e.g., \citet{rembiasz_17}. One should emphasise,
however, that the concepts of numerical
and physical viscosity/resistivity must
be carefully distinguished. For one thing,
the numerical viscosity and resistivity
are problem-dependent as stressed by
\citet{rembiasz_17}. In the case of higher-order
reconstruction schemes, they are also
scale-dependent (and hence more akin
to higher-order hyperdiffusivities),
and they are affected by discrete
switches in the reconstruction scheme
and Riemann solver. Because of these
subtleties, the determination of the effective
(magnetic) Reynolds number of a simulation
is non-trivial
\citep[see Section~2.1.2 of][]{mueller_20}.
}
While this ``implicit large-eddy simulation'' (ILES) approach is
well-established for hydrodynamic turbulence \citep{grinstein_07}, the magnetohydrodynamic case
is more complicated because the behaviour in the relevant astrophysical regime of low (kinematic) viscosity
$\nu$ and
resistivity $\eta$ may still depend on the magnetic
Prandtl number $\mathrm{Pm}=\nu/\eta$. Different
from the regime later encountered in the supernova
core where $\mathrm{Pm}\gg 1$, oxygen shell burning is characterised by magnetic Prandtl numbers slightly below unity ($\mathrm{Pm} \sim 0.2$). With an effective Prandtl number of $\mathrm{Pm} \sim 1$
in the ILES approach \citep{federrath_11},
there is a potential concern that spurious small-scale dynamo amplification might arise due to the overestimation of the magnetic Prandtl number, or that saturation field strengths might be too high. The debate about the magnetic Prandtl number dependence in MHD turbulence is ongoing with insights from theory and simulations \citep[e.g.,][]{schekochihin_04,schekochihin_07,isakov_07,brandenburg_11,pietarila_10,sahoo_11,thaler_15,seshasayanan_17}, astrophysical observations \citep[e.g.,][]{christensen_09}, and laboratory experiments \citep{petrelis_07,monchaux_07}. There is, at the very least, the possibility of robust small-scale dynamo action and saturation governed
by balance between the inertial terms and Lorentz force terms down to the low values of $\mathrm{Pm}\sim 5\times 10^{-6}$ in liquid sodium experiments \citep{petrelis_07,monchaux_07}, provided that
both the hydrodynamic and magnetic Reynolds number are sufficiently high. An ILES approach that places the simulation into a ``universal'', strongly magnetised regime \citep{beresnyak_19} thus appears plausible. Moreover, even if the ILES approach were only to provide upper limits for magnetic fields and their effects on the flow, meaningful conclusions can still be drawn in the context of shell convection simulations.
The nuclear source terms are calculated with the 19-species nuclear reaction network of \citet{Weaver1978}. Neutrino cooling is ignored, since it becomes subdominant in the late pre-collapse phase as the contraction of the
shells speeds up nuclear burning.
The simulations are conducted on a grid
with $400\times128\times256$ zones in radius $r$, colatitude $\theta$, and longitude $\varphi$
with an exponential grid in $r$ and uniform
spacing in $\theta$ and $\varphi$.
To reduce computational costs, we excise the non-convective inner core up to $3000\,\mathrm{km}$ and replace the excised core with a point mass. The grid extends to a radius
of $50,000\, \mathrm{km}$ and includes a small part
of the silicon shell, the entire convective oxygen and neon shells, the non-convective carbon shell,
and parts of the helium shell.
Simulations that include the entire iron core
and silicon shell are of course desirable in future,
but the treatment of nuclear quasi-equilibrium
in these regions is delicate and motivates simulations
in a shellular domain instead, with a sufficient ``buffer''
region between the convectively active shells and the inner boundary of the grid. Due to the significant buoyancy jump between
the silicon shell and the oxygen shell, the flow in the convective
shell is largely decoupled from that in the
non-convective buffer region. Hydrodynamic
waves excited at the convective boundary and Alfv\'en waves along
field lines threading both regions can introduce some coupling, but these are excited by the active convective shell and not in the convectively quiet buffer region. No physical flux of waves
\emph{into} the oxygen region from below is expected. Moreover,
there is no strong excitation of waves at the convective
boundary in the first place because of the low convective Mach number
of the flow. It turns out that coupling by Alfv\'en waves can be neglected even more safely than coupling by internal gravity waves because of the Alfv\'en number of the flow; as we shall see
the Alfv\'en number always stays well below unity.
In order to investigate the impact of magnetic fields on late-stage oxygen shell convection, we run a purely hydrodynamic, non-magnetic simulation and an MHD simulation. In the MHD simulation, we impose a homogeneous magnetic field with $B_z = 10^{8}\,\mathrm{G}$
parallel to the grid axis as initial conditions. We implement reflecting and periodic boundary conditions in $\theta$ and $\varphi$, respectively. For the hydrodynamic variables we use hydrostatic
extrapolation \citep{mueller_20} at the inner and outer boundary, and
impose strictly vanishing advective fluxes at the inner
boundary.
Different from
\citet{mueller_16c}, we do \emph{not} contract
the inner boundary to follow the contraction and
collapse of the core.
This means that our models will not (and are not
intended to) provide an accurate representation of the
pre-collapse state of the particular $18 M_\odot$ star
that we are simulating. We would expect, e.g., that
for the particular $18 M_\odot$ model, the burning rate
and hence the convective velocities would increase
until the onset of collapse \citep{mueller_16c} due
to the contraction of the convective oxygen shell. As a consequence
of accelerating convection and flux compression, the magnetic
fields will likely also be somewhat higher at the onset of collapse.
The model is rather meant to reveal the physical principles
governing late-stage magnetoconvection in massive stars,
and to be \emph{representative} of the
typical conditions in oxygen burning shells with the understanding
that there are significant variations in convective Mach
number and shell geometry at the onset of collapse \citep{Collins2018},
which will also be reflected in the magnetic field strengths in the
interiors of supernova progenitors.
The inner and outer boundary conditions for the
magnetic fields are less trivial.
In simulations of magnetoconvection in the Sun,
various choices such as vertical boundary conditions
($B_x=B_y=0$), radial boundary conditions ($B_\theta=B_\varphi=0$), vanishing tangential
electric fields or currents, perfect-conductor
boundary conditions, or extrapolation
to a potential solution have been employed
\citep[e.g.,][]{Thelen2000, Rempel2014,Kapyla2018}.
Since our domain boundaries are separated from
the convective regions by shell interfaces with
significant buoyancy jumps, we opt for the
simplest choice of boundary conditions and merely
fix the magnetic fields in the ghost zones
to their initial values for a homogeneous vertical magnetic field. We argue that due to the buffer regions at our radial boundaries, and the lack of rotational shear, our choice of magnetic boundary conditions should not have a significant impact on the dynamically relevant regions of the star.
\section{Results}
\label{sec:results}
\subsection{Evolution of the Magnetic Field}
\label{subsec:magGeometry}
\begin{figure}
\includegraphics[width=\columnwidth]{OxygenShell_MagneticField.pdf}
\caption{Evolution of the volume-averaged (solid) and maximum (dashed) magnetic field strength within the oxygen shell.
}
\label{fig:B_field}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{2D_snapshots_equatorial.pdf}
\caption{Snapshots of the equatorial plane in
the MHD simulation at a time of $500\, \mathrm{s}$,
showing the inner part of the domain from the inner boundary at a radius of $3000 \, \mathrm{km}$
out to $12,000\, \mathrm{km}$. The plotted part of the domain corresponds to the range $\mathrm{1.67\, M_\odot - 2.54\, M_\odot}$ in mass coordinate. The panels display
a) the ratio of magnetic to thermal pressure (i.e., inverse plasma-$\beta$), b) the magnitude of the magnetic field strength, c) the radial velocity, and d) the silicon mass fraction.}
\label{fig:2Dslices}
\end{figure*}
Both the magnetised and non-magnetised model
were run for over 12 minutes of physical time, which
corresponds to about 17 convective turnover times.
Very soon after convection develops in the oxygen shell, the turbulent convective flow starts to rapidly amplify the magnetic fields in this region. To illustrate the growth
of the magnetic field we show the
root-mean-square (RMS) average and maximum value of the
magnetic field in the oxygen shell in Figure~\ref{fig:B_field}. The magnetic field, which we initialised at $10^8\, \mathrm{G}$, is amplified by over two orders of magnitude to over $10^{10} \,\mathrm{G}$ on average within the shell due to convective and turbulent motions. The average magnetic field strength in the shell is still increasing at the end of the simulation, but
the growth rate has slowed down, likely indicating that the model is approaching some level of magnetic field saturation. While we cannot with certainty
extrapolate the growth dynamics without
simulating longer, it appears likely
that the RMS saturation field strength will settle at around $\mathord{\approx} 2 \times 10^{10}\,\mathrm{G}$.
A closer look reveals that the magnetic field
is not amplified homogeneously throughout the
convective region. The convective eddies push the magnetic field lines against the convective boundaries, where the fields are then more strongly amplified by shear flows. This is visualised in Figures~\ref{fig:2Dslices}a and \ref{fig:2Dslices}b, where both magnetic pressure and magnetic field strengths appear concentrated at the convective boundary. The maximum magnetic field strength
shown in Figure~\ref{fig:B_field} (dashed line) therefore essentially mirrors the field at the inner boundary of the oxygen shell
(not to be confused with the inner grid boundary). We observe a very quick rise in magnetic field strength at the boundary at the beginning of the simulation between $20 \texttt{-} 50\,\mathrm{s}$ once
the convective flow is fully developed. After this, the maximum magnetic field strength grows at approximately the same rate as the average field strength.
\begin{figure}
\includegraphics[width=\columnwidth]{DipoleMagneticField.pdf}
\caption{The angle-averaged root-mean-square (RMS) value
(black) and the dipole component (red) of the
radial magnetic field component $B_r$
as a function of mass coordinate at a time of $725\, \mathrm{s}$.
Note that the magnetic field strength drops dramatically
inside the thin, convectively stable buffer region below
the oxygen shell, but exhibits a peak in the first radial zone,
which is merely a numerical artefact due to imperfect
boundary conditions.
}
\label{fig:Dipole}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{MagneticFieldSpectrum.pdf}
\caption{Power $\hat{M}_{\ell}^2$ in different multipoles $\ell$ of the radial field component
of the magnetic field in the oxygen shell at different times. Dotted lines show the slopes
of Kolmogorov ($k^{-5/3}$) and Kazantsev
($k^{3/2}$) spectra. The low-wavenumber part of the spectrum
is always distinctly flatter than a Kazantsev spectrum; at
intermediate $\ell$, we see a Kolmogorov-like spectrum with a break
around $\ell=30$ to a steeper slope
in the dissipation range.
}
\label{fig:SphericalHarmonics}
\end{figure}
To characterise the geometric structure of the magnetic field, we show a radial profile of the dipole
of the radial magnetic field component\footnote{Strictly speaking, the most rigorous way to extract the dipole component of the magnetic field
would use a poloidal-toroidal decomposition
$\mathbf{B}=
\nabla\times[\nabla \times (\mathcal{P}\mathbf{r})]
+
\nabla \times (\mathcal{T} \mathbf{r})$
where the scalar functions $\mathcal{P}$ and $\mathcal{T}$
describe the poloidal and toroidal parts of the field,
and consider \emph{all} components
$B_r$, $B_\theta$, and $B_\varphi$
arising from the $\ell=1$ component of
the poloidal scalar $\mathcal{P}$. Since the poloidal-toroidal decomposition cannot be reduced to a straightforward projection onto vector spherical harmonics, this analysis is left to future papers.
} $B_r$ at the end of the simulation at $\approx 725\,\mathrm{s}$ (Figure~\ref{fig:Dipole}).
The dipole magnetic field in the convective regions is approximately an order of magnitude smaller than the RMS average radial magnetic field, aside from the first radial zone of our grid, where the dipole component is comparable to the total radial magnetic field. This behaviour at the grid boundary is likely an artefact of our choice of homogeneous magnetic fields at the inner boundary. In general, however, the magnetic fields in the convective zones appear dominated by higher-order multipoles. Disregarding the dipole fields at the inner boundary, the dipole magnetic field of $\mathord{\approx} 10^9 \mathrm{G}$ or below lies in the upper range of observed dipole magnetic fields of white dwarfs \citep{Ferrario2015}, which have often been taken as best estimates for the dipole fields in the cores of massive stars.
\begin{figure}
\includegraphics[width=\columnwidth]{AngularAverage_Stress.pdf}
\caption{The radial (solid) and non-radial (dashed)
diagonal components of the
Reynolds stress tensor
$R_{ij}$ (black) and Maxwell
tensor $M_{ij}$ (red)
for the MHD convection model at $725\,\mathrm{s}$ as a function of enclosed mass.}
\label{fig:Stress}
\end{figure}
To further illustrate the small-scale nature of
the magnetic field within the oxygen shell, we show angular power spectra $\hat{M}_{\ell}$ of the radial field component at different times as a function of the spherical harmonic degree $\ell$, evaluated inside the oxygen shell at a radius of $\mathord{\approx} 5000\,\mathrm{km}$ (Figure~\ref{fig:SphericalHarmonics}).
$\hat{M}_{\ell}$ is
computed as:
\begin{eqnarray}
\hat{M}_\ell &=&
\frac{1}{8\pi}
\sum_{m=-\ell}^\ell \left|\int Y_{\ell m}^*(\theta,\varphi) B_r
\mathrm{d} \Omega\right|^2.
\end{eqnarray}
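As a minimal sketch of how this multipole power can be evaluated in post-processing on a latitude--longitude grid (assuming SciPy's spherical-harmonic convention and simple quadrature weights; this is an illustration, not a description of the actual analysis pipeline):
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def multipole_power(b_r, theta, phi, lmax):
    """Power hat{M}_l of B_r(theta, phi) at a fixed radius.
    theta: colatitudes, phi: longitudes (1-D, uniformly spaced)."""
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    d_omega = np.sin(th) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    power = np.zeros(lmax + 1)
    for ell in range(lmax + 1):
        for m in range(-ell, ell + 1):
            # SciPy convention: sph_harm(m, l, azimuth, colatitude)
            ylm = sph_harm(m, ell, ph, th)
            coeff = np.sum(np.conj(ylm) * b_r * d_omega)
            power[ell] += np.abs(coeff)**2
    return power / (8.0 * np.pi)
\end{verbatim}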
Very early in the simulation, the spectrum shows a significant
$\ell = 1$ dipole contribution, caused by our choice of homogeneous magnetic fields as an initial condition. It takes several convective turnovers before this contribution is no longer dominant. Throughout the evolution, the spectrum shows a broad plateau at small $\ell$ and a Kolmogorov-like slope at intermediate $\ell$,
which can be more clearly distinguished at late times when the first
break in the spectrum has moved towards lower $\ell$.
High-resolution studies are desirable to extend the inertial
range and confirm the development of a Kolmogorov spectrum at intermediate
wave numbers. The break in the spectrum moves towards smaller wave numbers and
the peak of the spectrum shifts to larger scales from $\ell \approx 12$ to $ \ell \approx 7$.
Simulations of field amplification by a small-scale dynamo in isotropic turbulence often exhibit a Kazantsev spectrum
with power-law scaling $k^{3/2}$ on large scales.
Our spectra show a distinctly flatter slope below the spectral peak, indicating that field amplification is subtly different from the standard picture of turbulent dynamo amplification.
This is borne out by a closer look at the magnetic
field distribution within the convective region.
Somewhat similar to our recent simulation of field amplification by neutrino-driven convection in core-collapse supernovae \citep{Mueller2020}, field amplification does not happen homogeneously throughout the convective region and appears to be predominantly driven
by shear flows at the convective boundaries.
To illustrate this, we compare the
spherically-averaged diagonal components of the
kinetic (Reynolds) and magnetic (Maxwell) stress tensors
$R_{ij}$ and $M_{ij}$
in the MHD model at the final time-step of the simulation at $\mathord{\approx} 725\,\mathrm{s}$ (Figure~\ref{fig:Stress}).
$R_{ij}$ and $M_{ij}$ are computed as
\begin{eqnarray}
R_{ij}&=& \langle \rho v_i v_j\rangle, \\
M_{ij}&=& \frac{1}{8\pi}\langle B_i B_j\rangle,
\end{eqnarray}
where angled brackets denote volume-weighted
averages.\footnote{Note that no explicit decomposition of the velocity field into fluctuating components and a spherically averaged background state is required since the background state is hydrostatic.}
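The following minimal sketch illustrates how such volume-weighted stress profiles can be computed in post-processing; the array shapes and component ordering are assumptions made for the example.
\begin{verbatim}
import numpy as np

def stress_profiles(rho, v, b, dV):
    """Spherically averaged diagonal Reynolds and Maxwell stresses.
    rho, dV: arrays of shape (nr, nth, nphi); v, b: shape (3, nr, nth, nphi)
    with components ordered (r, theta, phi)."""
    w = dV / dV.sum(axis=(1, 2), keepdims=True)      # weights per radius
    R = np.array([np.sum(rho * v[i]**2 * w, axis=(1, 2)) for i in range(3)])
    M = np.array([np.sum(b[i]**2 * w, axis=(1, 2)) for i in range(3)])
    return R, M / (8.0 * np.pi)   # each has shape (3, nr)
\end{verbatim}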
The magnetic fields clearly remain well below
equipartition with the total turbulent kinetic
energy, and appear to converge to saturation
levels about one order of magnitude below, although
longer simulations will be required to confirm this.
The non-radial
diagonal components
$R_{\theta\theta}+R_{\varphi\varphi}$ and
$M_{\theta\theta}+M_{\varphi\varphi}$
are generally higher than the respective radial components $R_{rr}$ and $M_{rr}$. Throughout
most of the domain, the Maxwell stresses are
considerably smaller than the Reynolds stresses,
but it is noteworthy that the profile of the non-radial components of $M_{ij}$ runs largely parallel to those of
$R_{ij}$, just with a difference of slightly more than
an order of magnitude. Peaks of the magnetic
stresses at the shell interfaces suggest
that field amplification is driven by shear
flow at the convective boundaries. Convective motions
then transport the magnetic field into the interior
of the convective regions and also generate radial
field components. The humps of $M_{rr}$ within the convection zones are less pronounced than in $R_{rr}$, which indicates that little
amplification by convective updrafts
and downdrafts takes place within the convection region.
At the outer boundaries of
the oxygen and neon shell, we observe
rough equipartition between
$R_{rr}$ and $M_{\theta\theta}+M_{\varphi\varphi}$. The fact that the growth of the field slows down once the model approaches
$M_{\theta\theta}+M_{\varphi\varphi}\approx R_{rr} $ suggests that this ``partial equipartition'' may determine the saturation field strength, but
the very different behaviour at
the inner boundary with
$M_{\theta\theta}+M_{\varphi\varphi}\gg R_{rr}$ argues against this. It is plausible, though, that the saturation
field strength is (or rather will be) determined at the boundary. Linear stability analysis of the magnetised Kelvin-Helmholtz instability \citep[e.g.][]{chandrasekhar_61,Sen1963,Fejer1964,Miura1982,frank_96, keppens_99, ryu_00,obergaulinger_10, Liu2018} shows that shear instabilities, which are critical for efficiently generating small-scale fields, are suppressed by magnetic fields parallel to the direction of the shear flow as long as the Alfv\'en number
is smaller than two. With magnetic fields well below kinetic equipartition
(cp.~Figure~\ref{fig:Stress}), our models fall into the regime of low Alfv\'en
number; hence the generation of non-radial fields at the boundaries may be self-limiting as a result of the stabilising impact
of magnetic fields on the Kelvin-Helmholtz instability. A naive
application of the principle of marginal stability would suggest saturation occurs
once the Alfv\'en velocity at the boundary equals the shear velocity jump across the shell interface, which would imply
$M_{\theta\theta}+M_{\varphi\varphi}\approx R_{\theta\theta}+R_{\varphi\varphi}$. Our simulation suggests that saturation probably occurs at significantly smaller values, and a more quantitative analysis of the saturation mechanism will clearly be required in future to elucidate the relation between the shear velocity (and possibly the width of the shear layer) and the saturation field strength.
\begin{figure}
\includegraphics[width=\columnwidth]{OxygenShell_KineticEnergy.pdf}
\caption{Evolution of the total radial (solid) and non-radial (dashed) convective kinetic energy within the oxygen shell for the purely hydrodynamic (purple) and MHD (black) model, respectively.}
\label{fig:E_kin}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{OxygenShell_subplots.pdf}
\caption{Effects of entrainment on the oxygen shell for both the MHD model (black) and the equivalent purely hydrodynamic model (purple). The panels show
a) the inner and outer oxygen shell radii, b) the total mass within the oxygen shell, c) the entrainment rate $\dot{M}$ into the oxygen shell, and
d) the volume-integrated nuclear energy generation rate throughout the shell.
}
\label{fig:OShellValues}
\end{figure*}
\subsection{Impact of Magnetic Fields on Convective Boundary Mixing}
\label{subsec:entrainment}
The slowing growth of the magnetic field indicates that feedback effects on the flow should become important during the later phase of the simulation. It is particularly interesting to consider the effect of the strong fields tangential to the oxygen-neon shell interface on convective boundary mixing, though we also consider potential effects on the flow in the
interior of the convective regions.
To this end, we compare the MHD model to a purely hydrodynamic simulation of oxygen and neon shell convection.
Figure~\ref{fig:E_kin} compares the total kinetic energy in convective motions in the
oxygen shell between the models.
The radial and non-radial components
$E_r$ and $E_{\theta, \varphi}$
of the kinetic energy are defined as
\begin{eqnarray}
E_r &=&\frac{1}{2} \int\limits_{r_- \leq r \leq r_+} \rho v_r^2 \,\mathrm{d}V,
\\
E_\mathrm{\theta, \varphi} &=&\frac{1}{2} \int\limits_{r_- \leq r \leq r_+} \rho (v_\mathrm{\theta}^2 + v_\mathrm{\varphi}^2 ) \,\mathrm{d}V,
\end{eqnarray}
where $r_-$ and $r_+$ are the inner and outer radii of the oxygen shell. We compute the inner
and outer shell radii $r_-$ and $r_+$ as the midpoints of the steep,
entropy slope between the oxygen shell and the silicon and neon shells below and above. Due to shell
growth by entrainment, $r_-$ and $r_+$
are time-dependent (Figure~\ref{fig:OShellValues}a).
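As a minimal sketch, assuming cell-centred arrays for the density, velocity components, radius, and cell volumes, these integrals can be evaluated as:
\begin{verbatim}
import numpy as np

def shell_kinetic_energies(rho, v, r, dV, r_minus, r_plus):
    """Radial and non-radial kinetic energy within r_minus <= r <= r_plus.
    v has components (v_r, v_theta, v_phi); r is broadcast to the grid."""
    in_shell = (r >= r_minus) & (r <= r_plus)
    e_r = 0.5 * np.sum((rho * v[0]**2 * dV)[in_shell])
    e_tp = 0.5 * np.sum((rho * (v[1]**2 + v[2]**2) * dV)[in_shell])
    return e_r, e_tp
\end{verbatim}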
For both models most of the turbulent kinetic energy is in the non-radial direction. This
is different from the rough equipartition
$E_r \approx E_\mathrm{\theta, \varphi}$ seen in many simulations of buoyancy-driven convection
\citep{arnett_09}. There is, however,
no firm physical principle that dictates
such equipartition; indeed a shell burning simulation of the same $18\, M_\odot$ progenitor with the \textsc{Prometheus} code also showed significantly more kinetic energy in non-radial motions \citep{mueller_16c}. Ultimately, the high ratio $E_{\theta,\varphi}/E_r$ merely indicates that the fully developed flow happens to predominantly select eddies with larger extent in the horizontal than in the vertical direction.
\footnote{In the low-Mach number regime, the anelastic condition
$\nabla \cdot(\rho \mathbf{v})\approx 0$
implies a relation between the aspect
ratio of convective cells and the horizontal and vertical kinetic energy of any mode.}
Discounting stochastic variations, the radial component $E_r$ of the turbulent kinetic energy is similar for the hydrodynamic and the MHD model until the final $\mathord{\approx} 300\, \mathrm{s}$, at which point they start to deviate more significantly. Clearer differences
appear in the non-radial component $E_{\theta,\varphi}$, with the hydro model showing irregular episodic bursts in kinetic energy which are not mirrored in the MHD model. Since the only difference in
the setup of the two runs is the presence or absence of magnetic fields, this points to feedback of the magnetic field on the flow as the underlying reason, unless the variations are purely stochastic. Closer examination
suggests that feedback of the magnetic field on the flow is
the more likely explanation, and
the nature of this feedback will become more apparent as we analyse convective boundary mixing in the models.
To this end, we first consider the
evolution of the boundaries $r_-$ and $r_+$ of the
oxygen shell and the total oxygen
shell mass $M_\mathrm{O}$.
For computing the oxygen shell mass,
we take the (small) deviation of the boundaries from spherical symmetry into account more accurately than when
computing $r_-$ and $r_+$, and integrate the mass contained in cells within
the entropy range
$3.8 \texttt{-}5.2 \, k_\mathrm{B}/\mathrm{nucleon} $.
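A minimal sketch of this bookkeeping, assuming cell-centred entropy, density, and volume arrays, with the entropy window quoted above and a simple finite-difference estimate of the entrainment rate, is:
\begin{verbatim}
import numpy as np

def oxygen_shell_mass(rho, s, dV, s_lo=3.8, s_hi=5.2):
    """Mass of material with specific entropy in [s_lo, s_hi] (kB/nucleon)."""
    mask = (s >= s_lo) & (s <= s_hi)
    return np.sum((rho * dV)[mask])

def entrainment_rate(masses, times):
    """Finite-difference estimate of Mdot from a time series of shell masses."""
    return np.gradient(np.asarray(masses), np.asarray(times))
\end{verbatim}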
As shown in Figure~\ref{fig:OShellValues}
(panels a and b), the oxygen
shell grows slightly, but perceptibly, faster in the purely hydrodynamic model than
in the MHD model, starting
at a simulation time of about
$300\, \mathrm{s}$. The presence
of relatively strong magnetic fields
in the boundary layer apparently
reduces entrainment in line
with the inhibiting effect of
magnetic fields parallel to the
flow on shear instabilities
discussed in Section~\ref{subsec:magGeometry}.
The entrainment rate $\dot M=\mathrm{d} M_\mathrm{O}/\mathrm{d} t$
(Figure~\ref{fig:OShellValues}c)
is only slightly higher in the purely hydrodynamic model
most of the time, but the entrainment rate exhibits
occasional spikes, which do not occur in the MHD model. We note that these spikes in $\dot{M}$ occur at around the same times as the bursts in non-radial kinetic energy (Figure~\ref{fig:E_kin}). It appears that the
stabilisation of the boundary by magnetic fields
mostly suppresses rarer, but more
powerful entrainment events
that mix bigger lumps of material into
the oxygen shell. A comparison with the
shell burning simulations of \citet{mueller_16c} provides confidence that this effect is robust.
Because \citet{mueller_16c} contract the
inner boundary, convection grows more vigorous with time and their entrainment rates
are higher, adding about $0.05\, M_\odot$ to the
oxygen shell within $300\, \mathrm{s}$.
Their resolution test (Figure~20 in \citealt{mueller_16c}) only showed variations of $0.004\, M_\odot$ in the entrained mass
between different runs of the same progenitor model. In our
simulations, the oxygen shell only grows by
about $0.03\, M_\odot$ from $300\, \mathrm{s}$
to $700 \, \mathrm{s}$ (i.e., when the magnetic
field does not grow substantially any more),
yet we find a difference in shell growth
of about $0.01\, M_\odot$ between the
magnetic and non-magnetic model. It therefore
seems likely that there is indeed an appreciable systematic effect of magnetic fields on entrainment. Our simulations qualitatively illustrate such an effect,
but models at higher resolution are of course desirable
to more reliably quantify the impact of magnetic fields on turbulent entrainment.
The resolution requirements for accurate entrainment
rates are hard to quantify. Findings from extant numerical studies of the magnetised Kelvin-Helmholtz instability \citep[e.g.,][]{keppens_99} cannot be easily
transferred because of important physical differences (e.g., the lack
of buoyant stabilisation of the boundary). In purely
hydrodynamic simulations of entrainment during oxygen shell burning, resolving the
shear layer between the oxygen and neon shell with $\mathord{\approx} 15\texttt{-}22$ radial zones
as in our model was found to be sufficient for quantitatively credible,
but not fully converged results \citep{mueller_16c}.
The different entrainment rates also explain long-term, time-averaged differences in convective kinetic energy between the two simulations. The convective
kinetic energy is determined by the total nuclear
energy generation rate, which is shown in
Figure~\ref{fig:OShellValues}d.
Both models show an overall downward trend in nuclear energy generation over time, which can be ascribed
to a slight thermal expansion of the shell and the depletion of fuel. After about $200\,\mathrm{s}$, the purely
hydrodynamic model exhibits a higher energy
generation rate; this difference persists until the end of the simulations and becomes more pronounced.
The higher energy generation rate
in the non-magnetic model is indeed mirrored
by a stronger decrease in the neon mass on the grid (Figure~\ref{fig:NeonMass}) of about
the right amount to explain the average
difference in energy generation rate
at late times. We note, however, that
the strong episodic entrainment events
in the purely hydrodynamic model are
\emph{not} associated with an immediate
increase in nuclear energy generation rate.
This is mainly because the energy release from the dissociation of the entrained neon is delayed and spread out in time as the entrained material is
diluted and eventually mixed down to regions
of sufficiently high temperature.
The overall effect of magnetic fields
on the bulk flow is rather modest.
We do not see a similarly strong quenching
of the convective flow by magnetic fields
as in the recent MHD simulations of the solar convection zone by \citet{hotta_15}, who reported
a reduction of convective velocities
by up to $50\%$ at the base of the convection zone. Such an effect is not expected as long
as the magnetic fields stay well below kinetic
equipartition in the interior of convective shells as in our MHD simulation. Longer simulations will be required to confirm that magnetic fields during late shell burning stages indeed remain ``weak'' by comparison and do not
appreciably influence the bulk dynamics of convection.
\begin{figure}
\includegraphics[width=\columnwidth]{TotalNeonMass.pdf}
\caption{Total mass of neon in the entire computational domain for the purely hydrodynamic simulation (purple) and the MHD simulation (black).}
\label{fig:NeonMass}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
We investigated the amplification and saturation of magnetic fields during convective oxygen shell burning and the backreaction of the field on the convective flow by conducting a 3D MHD simulation
and a purely hydrodynamic simulation of an $18\, M_\odot$ progenitor shortly before core collapse.
The simulations were run for about 12 minutes of physical time (corresponding to about
17 convective turnover times), at which point field amplification
had slowed down considerably, though a quasi-stationary state had not yet been fully established.
The magnetic field in the oxygen shell is amplified to
$\mathord{\sim}10^{10}\, \mathrm{G}$ and dominated
by small-to-medium-scale structures with angular
wavenumber $\ell \sim 7$. The dipole component
is considerably smaller, with $\mathord{\sim}10^9\, \mathrm{G}$ near the inner boundary of
the oxygen shell and less further outside.
The profiles of the radial and non-radial diagonal components $M_{rr}$ and $M_{\theta\theta}+M_{\varphi\varphi}$ of
the Maxwell stress tensor mirror the corresponding
components $R_{rr}$ and
$R_{\theta\theta}+R_{\varphi\varphi}$ of the Reynolds stress tensor, but remain about an order of magnitude
smaller, i.e., kinetic equipartition is not reached.
However, $M_{\theta\theta}+M_{\varphi\varphi}$
can approach or exceed the radial
component $R_{rr}$ at the convective boundaries.
The saturation mechanism for field amplification needs
to be studied in more detail, but we speculate that
saturation is mediated by the inhibiting effect of non-radial magnetic fields on shear instabilities at shell boundaries, which appear to be the primary driver of field amplification.
We find that magnetic fields do not have an appreciable effect on the interior flow inside the oxygen shell, but
observe a moderate reduction of turbulent entrainment at
the oxygen-neon shell boundary in the presence of magnetic fields. Magnetic fields appear to suppress stronger episodic entrainment events, although they do not quench entrainment completely. Through the reduced entrainment rate, magnetic fields also indirectly affect the dynamics inside the convective region slightly because they reduce the energy release through the dissociation of ingested neon, which results in slightly smaller convective velocities in the MHD model.
Our findings have important implications for core-collapse supernova modelling. We predict initial fields in the oxygen shell of non-rotating progenitors that are significantly stronger than assumed in our recent simulation of a neutrino-driven explosion aided by dynamo-generated magnetic fields \citep{Mueller2020}. With relatively strong seed-fields, there is likely less of a delay until magnetic fields can contribute the additional ``boost'' to neutrino heating
and purely hydrodynamic instabilities seen in
\citet{Mueller2020}. This should further contribute to the robustness of the neutrino-driven mechanism for non-rotating and slowly-rotating massive stars. Our simulations also suggest that the perturbation-aided mechanism \citep{couch_13,mueller_15a} will not be substantially affected by the inclusion of magnetic fields. Since magnetic fields do not become strong enough to substantially alter the bulk flow inside the convective region, the
convective velocities and eddy scales as the key parameters for perturbation-aided explosions remain largely unchanged.
The implications of our results for neutron star magnetic fields are more difficult to evaluate since the observable fields will, to a large degree, be set by processes during and after the supernova and cannot be simply extrapolated from the progenitor stage by magnetic flux conservation. That said, dipole fields of order $10^9\, \mathrm{G}$ at the
base of the oxygen shell -- which is likely to end up as the neutron star surface region -- are not in overt conflict with dipole fields
of order $10^{13}\, \mathrm{G}$ in many young pulsars inside supernova remnants \citep{Enoto2019}. However, considering the
relatively strong small-scale fields
of $\mathord{\sim}10^{10}\, \mathrm{G}$
with peak values over $10^{11}\,\mathrm{G}$,
it may prove difficult to produce neutron stars without strong small-scale fields at the surface. There is a clear need for an integrated approach towards the evolution of magnetic fields from the progenitor phase through the supernova and into the compact remnant phase in order to fully grasp the implications of the current simulations.
Evidently, further follow-up studies are also needed on the final evolutionary phases of supernova progenitors.
Longer simulations and resolution studies will be required to better address issues like the saturation field strength, the saturation mechanism, and the impact of magnetic fields on turbulent entrainment.
Full models that include the core and self-consistently follow the evolution of the star to collapse will be required to generate
accurate 3D initial conditions for supernova simulations.
The critical issue of non-ideal effects and the behaviour of turbulent magnetoconvection for magnetic Prandtl numbers slightly smaller than one at very high Reynolds numbers deserves particular consideration. Some findings of our ``optimistic'' approach based on the ideal MHD approximation should, however, prove robust, such as the modest effect of magnetic fields on the convective bulk flow and hence the reliability of purely hydrodynamic models \citep{couch_15,mueller_16c,Yadav2020,mueller_20,Fields2020a} and even 1D mixing-length theory \citep{Collins2018} to predict pre-collapse perturbations in supernova progenitors.
Future 3D simulations will also have to address rotation and its interplay with convection and magnetic fields.
\section*{Acknowledgements}
We thank A.~Heger for fruitful conversations.
BM was supported by ARC Future Fellowship FT160100035.
We acknowledge computer time allocations
from NCMAS (project fh6) and ASTAC. This research was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI), which is supported by the
Australian Government. It was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the authors, subject to considerations of intellectual property law.
\section{Introduction}
\label{sec:intro}
Gravitational lensing describes the effect of the gravitational potential of a massive object on the light of a background source. In the case of strong lensing (SL), the light is deflected such that multiple images of the source are observed.
Because SL is sensitive to both luminous and dark matter (DM), it is a powerful tool to probe the total galaxy mass distribution and thus give insight to the DM distribution of galaxies \citep[e.g.,][]{Dye2004, Barnabe2012,Sonnenfeld2015, Schuldt2019, Shajib2021}. Mass clumps in the lensing galaxy affect the image magnification, and substructures can be detected by analyzing flux-ratio anomalies of point-like sources (e.g., \citealt{Dalal2001}; \citealt{Moustakas2002}; \citealt{Nierenberg2014}; \citealt{Nierenberg2017}; \citealt{Hsueh2020}; \citealt{Nierenberg2020}; \citealt{Gilman2020}) or distortions in the arcs formed by lensing of spatially-extended background galaxies
(e.g., \citealt{Koopmans2005}; \citealt{Vegetti2010}; \citealt{Vegetti2012}). Clumps below a mass of $10^8$ solar masses can be detected that way.
With models we can also analyze the structural parameters of early-type lens galaxies such as the average slope of the total mass density profile or the DM fraction (e.g., \citealt{Gavazzi2007}; \citealt{Auger2010}; \citealt{Lagattuta2010}; \citealt{Barnabe2011}; \citealt{Shu2015}).
In addition, the magnifying effect of SL can be used to observe the earliest galaxies and quasars in the Universe (e.g., \citealt{Kneib2004}; \citealt{Bradley2008}; \citealt{Coe2013}; \citealt{Salmon2020}). Modeling these high redshift objects enables us to study, e.g., their rotation curves and masses (e.g., \citealt{Jones2010}; \citealt{Chirivi2020}; \citealt{Rizzo2020}). This helps our understanding of galaxy evolution and structure formation in the early Universe, which is, despite major progress in the last years, a research field with many open questions.
Another important application of SL is the measurement of cosmological parameters such as the Hubble constant $H_0$ (e.g., \citealt{Suyu.2017}; \citealt{Wong2019}; \citealt{Shajib2019b}; \citealt{Chen2020}; \citealt{Millon2020}; \citealt{Birrer2020}). This parameter, which gives the current expansion rate of the Universe, is of great interest because early-universe measurements from the cosmic microwave background (CMB) give a value of 67.36 $\pm$ 0.54 $\rm{\, km\, s^{-1}\, Mpc^{-1}}$ (\citealt{Planck2019}), whereas local measurements from the SH0ES project (\citealt{Riess2021}) based on the cosmic distance ladder using Cepheids and Type Ia supernovae (SNe) obtain a higher value of 73.04 $\pm$ 1.04 $\rm{\, km\, s^{-1}\, Mpc^{-1}}$. This 5$\sigma$ disagreement is commonly known as the Hubble tension and could hint to physics beyond flat $\Lambda$CDM, which is currently the standard cosmological model. To resolve this debate, independent methods including megamasers (e.g., \citealt{Pesce2020}), standard sirens (e.g., \citealt{Abbott2017}), surface brightness fluctuations (e.g., \citealt{Khetan2020}), expanding photospheres of supernovae (e.g., \citealt{Schmidt1994}), and gravitational lensing are necessary.
To measure $H_0$ with SL, one needs a strongly lensed variable background source such as a quasar or a SN. Because the photons of the multiple images traverse different path lengths and different gravitational potentials, the characteristic brightness fluctuations reach the observer at different times. We can measure this time delay in the light curves by monitoring the multiple images and comparing these flux variations with respect to each other \citep[e.g.,][]{Millon2020b, Millon2020a}.
Apart from an accurate measurement of the time delays, the lens potential is required to determine $H_0$, which is obtained by modeling the observed lensing system. Major progress has been made in this field in recent decades, in part because of the development of advanced modeling software such as \texttt{GLEE} (Gravitational Lens Efficient Explorer; \citealt{Suyu.2010}; \citealt{Suyu.2012}) and \texttt{Lenstronomy} (\citealt{Birrer2015}; \citealt{Birrer2018LT}). Such modeling software allow us to constrain the model parameters of the lens and the source, such as their position and radial mass profile. A downside of this approach is that we need at least weeks to model a lens system with spatially extended sources since sampling the parameter space, e.g. with Markov chain Monte Carlo (MCMC) methods, is time consuming and the modeling process itself is interactive. In addition, the input files required by \texttt{GLEE} (e.g. point spread function (PSF), error map) have to be obtained manually from the available data in some cases.
With the increasing number of wide-field imaging surveys, including the Hyper Suprime-Cam (HSC, \citealt{Aihara2018}), Dark Energy Survey (DES, \citealt{DarkEnergySurveyCollaboration.2016}), the upcoming Euclid (\citealt{Laureijs2011, Scaramella+2021}) and Rubin Observatory Legacy Survey of Space and Time (LSST, \citealt{Ivezic.2019}), the number of known strong lensing systems is growing rapidly. Even though the number of detected lenses per square degree lies between 0.5 and 10 (\citealt{Li.2020}) depending on the image depth, the number of lens candidates will increase strongly thanks to large survey areas. For example, the LSST will cover a total of $\sim$20,000 deg$^2$ in multiple bandpasses over its planned ten year run time (\citealt{Tyson.2002}, \citealt{Lochner+2021}). We expect new images of billions of galaxies, of which of the order of 100,000 are strong lensing systems (\citealt{Collett.2015}). To be able to keep up with the increasing sample size, we need techniques to automate the lens modeling process.
By automating the creation of the input files, the calculation of the starting values of lens parameters and image positions, and the optimization and sampling of the parameter space, we can save many hours of user interaction during the modeling process.
Several projects in recent years have already successfully demonstrated the automation of lens modeling, especially to obtain initial estimates of the lens mass distribution \citep[e.g.,][]{Shajib2019,Nightingale2021}. There are also techniques being developed based on machine learning \citep[e.g.,][]{Hezaveh+2017, PerreaultLevasseur+2017, Park+2021, Schuldt+2021, Schuldt2022}, although the modeling of strongly lensed quasars via neural networks has yet to be tested on real data instead of mock data. Another approach is the massive parallelization offered by Graphics Processing Units \citep{Gu2022} to speed up the lens modeling. None of these approaches can yet replace the detailed, manual lens-by-lens analysis that is necessary to produce models that are accurate enough for time-delay cosmography \citep[e.g.][]{Suyu+2010b,Suyu+2013,Birrer2018,Wong2019,Shajib2019b,Rusu2020}, but they rather serve as a first step towards them and provide important information for follow-up observations.
In this paper, we present a new, user-time-efficient automated procedure for uniform modeling of strongly lensed quasars with the lens modeling software \texttt{GLEE}. We develop a pipeline that works with minimal user input. Providing only high-resolution imaging data and marking important regions in the field, the user can model with \texttt{GLEE} in a nearly fully automated way. The automated modeling can be done with single-band, as well as with multi-band data. We focus on lensed quasars to model the lens mass distribution with an isothermal or power-law profile, and the source brightness distribution on a grid of pixels. To test our code, we uniformly model a sample of nine strongly lensed quasars observed with the Hubble Space Telescope (HST). Furthermore, we predict the time delays and Fermat potential differences between the multiple images of the quasar. We emphasize that our approach differs from that of papers aimed at deriving cosmological parameters from individual lenses, because we are prioritizing speed over accuracy. Owing to the adopted shortcuts (e.g. we do not iteratively correct the PSF and usually model only the images in filters with visible arcs from the host galaxy) and the uneven quality of the data, we do not expect in general our results to be cosmography grade. The output of this pipeline is aimed as a first step towards cosmography grade models, and to assist in prioritization of follow-up monitoring and further modeling.
\citet[][hereafter S22]{Schmidt2022} have also developed an automated modeling pipeline using \texttt{Lenstronomy} and applied to a sample of 30 lensed quasars. Our sample of 9 is a subset of the systems in S22, allowing us to do a direct comparison of the results between the two modeling pipelines as a blind test, since the lead modelers of the two teams obtained their lens models independently and only compared the results after the models were completed. We note that our approach and that of S22 are similar but not identical. Important differences include the modeling of the point spread function, which is known to be crucially important \citep{Shajib2022}, and the strategy adopted to cope with insufficient data quality (S22 imposed informative priors, while we do not). Therefore we do not expect the two efforts to agree within the formal uncertainties, which are only a representation of the statistical error and not of the systematics, which are notoriously dominant. Thus, this should be kept in mind when interpreting the comparison results. Nevertheless, as we will see, important lessons can be gleaned from it.
The outline of the paper is as follows: in Sec.~\ref{sec:modeling} we describe the steps in modeling a strong lensing system with our \texttt{GLEE} automated code. We briefly describe each system and the observational data in Sec.~\ref{sec:observations}. The modeling results are presented in Sec.~\ref{sec:results}. The performance of the automated modeling compared to manual modeling, together with a comparison to the models obtained with \texttt{Lenstronomy} of the same systems (S22), is discussed in Sec.~\ref{sec:discuss}. In Sec.~\ref{sec:conclude} we summarize our results.
Throughout the paper, we adopt a flat $\Lambda$CDM cosmology with $H_0=70 \rm{\, km\, s^{-1}\, Mpc^{-1}}$ and $\Omega_{\rm M} = 1 - \Omega_{\Lambda}=0.3$. Parameter estimates are given by the median of its one-dimensional marginalized posterior probability density function, and the quoted uncertainties show the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles (corresponding to a 68\% credible interval).
\section{Modeling process of \texttt{GLEE} automation code}
\label{sec:modeling}
In this section we describe in detail the steps of modeling a strongly lensed quasar with \texttt{GLEE} and how they are conducted in our automation code. We adopt different mass and light profiles and use the Bayesian framework of MCMC sampling and simulated annealing \citep{Kirkpatrick1983} provided by \texttt{GLEE}. The goal is to model the lens mass distribution by first using the source and image positions, and then by reconstructing the surface brightness distribution of the lens and background source galaxies.
Fig.~\ref{fig:sketch} shows as an example the strongly lensed quasar J2100$-$4452. We see the lens galaxy in the image center, four multiple images of the background quasar, and the arc, which is the lensed light of the quasar host galaxy. The code models the mass distribution of the lens galaxy and the light components of the system (i.e., lens, satellite galaxies of the lens, lensed quasar, and arc light). The modeled constituents of the lensing system are labeled in Fig.~\ref{fig:sketch}.
\begin{figure}[h]
\subfigure[Image cutout of J2100$-$4452 to illustrate its different components which are modeled with our automation code. \label{fig:sketch}]{\includegraphics[width=0.48\textwidth]{Bilder/components_sketch.eps}}\hfill
\subfigure[Image cutout of J2100$-$4452 with regions provided by the user to create the masks. \label{fig:masks}]{\includegraphics[width=0.48\textwidth]{Bilder/masks.eps}}
\caption{Overview of the components of the lensing system that are modeled by the \texttt{GLEE} automation code, and the required regions to create arc and lens mask of the lensing system given as example. The region to create the arc mask, which covers the light of the lensed quasar and arcs, is colored in red. For the lens mask, the user provides the region(s) of surrounding, luminous objects (shown in green in the image), containing the pixels that are ignored by the model.}
\end{figure}
Since the modeling usually is first done in a single filter and possible further bands are included after the model has stabilized, we present in Sec.~\ref{sec:singleband} an automation code dedicated for the single band modeling. We then introduce the automation code for the multiband modeling in Sec.~\ref{sec:multiband} and finally the prediction of the time delays between the multiple images of strongly lensed quasars in Sec.~\ref{sec:timedelay}.
\subsection{Single-band modeling}
\label{sec:singleband}
In this section we discuss the automated modeling of a lensing system with single-band data. We describe the semi-automated preparation of required input files, as well as each modeling step and their substeps in the following subsections. An overview of the basic modeling procedure is shown as a flowchart in Fig.~\ref{fig:flowchart_overview} and summarized in Table~\ref{tab:Modelingtable_single}. In particular, Table~\ref{tab:Modelingtable_single} shows the varying and fixed parameters in each modeling step.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{Bilder/1overview.eps}
\caption{Overview flowchart for single-band modeling of a strongly lensed quasar system with our new \texttt{GLEE} automation code. The modeling consists of two main steps, the lens mass distribution modeling with source/image positions and with source surface brightness (SSB; which is the unlensed light distribution of the quasar host galaxy) reconstruction. Each step is divided into smaller modeling steps. Details are described in flowcharts for each step and in the corresponding subsection, as referenced in the flowchart.}
\label{fig:flowchart_overview}
\end{figure}
\begin{table*}
\begin{tabular}{lp{5.8cm}lp{2.2cm}ccccc}\toprule
Step & Description & Section & Criteria & \multicolumn{2}{c}{Lens mass parameters} & Lens light & \multicolumn{2}{c}{Source light}
\\\cmidrule(lr){5-6}\cmidrule(lr){8-9}
& & & & profiles & ext. shear & & quasar & host \\\midrule\midrule \vspace{5px}
1 & modeling of the lens light & \ref{sec:lenslight} & $\circlearrowleft_2 (\chi^2_{\rm red}\leq1)$ & $\times$ & $\times$ & $\checkmark$ & $\times$ & $\times$ \\ \vspace{5px}
2 & modeling of the quasar light & \ref{sec:quasarlight} & $\circlearrowleft_2 (\Delta \textrm{logP}\leq5)$ & $\times$ & $\times$ & $\bigcirc$ & $\checkmark$ & $\times$ \\ \vspace{5px}
3 & source position modeling & \ref{sec:srcimgpos} & $\circlearrowleft_1 (\chi^2<5)$ & $\checkmark$ & $\times$ & $\bigcirc$ & $\bigcirc$ & $\times$ \\ \vspace{5px}
4 & image position modeling with external shear & \ref{sec:srcimgpos} & \begin{tabular}[t]{@{}l@{}}$\circlearrowleft_2 (\chi^2<5,$ \\ $\Delta \textrm{logP}\leq5)$\end{tabular} & $\checkmark$ & $\checkmark$ & $\bigcirc$ & $\bigcirc$ & $\times$ \\ \vspace{5px}
5a & modeling of the arc light + SSB distribution reconstruction & \ref{sec:arclight} & $\circlearrowleft_2 (\Delta \textrm{logP}\leq2)$ & $\checkmark$ & $\checkmark$ & $\bigcirc$ & $\checkmark_{\rm A}$ & $\checkmark$\\
5b & modeling of the arc light + SSB distribution reconstruction & \ref{sec:arclight} & $\circlearrowleft_2 (\Delta \textrm{logP}\leq2)$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\bigcirc$ & $\checkmark$ \\\bottomrule
\end{tabular}
\ \\ \ \\
\textit{Symbols:}\\
\begin{tabular}{ll}
$\circlearrowleft_1$: optimizing cycle with simulated annealing & $\checkmark$: parameters varying\\
$\circlearrowleft_2$: optimizing and sampling cycle with simulated annealing and MCMC \ \ \ \ & $\bigcirc$: parameters fixed \\
$\checkmark_{\rm A}$: only amplitude varying & $\times$: parameters not included
\end{tabular}
\ \\
\caption{Overview of the individual modeling steps conducted in the single-band modeling code. Each step is described briefly in the second column with the corresponding subsection in the third column. The modeling technique and stopping criteria are given in the fourth column. The cycle runs until the stopping criteria in the parentheses are met. The remaining columns show which parameters are varying, fixed, or not included.}
\label{tab:Modelingtable_single}
\end{table*}
\subsubsection{Preparation}
\label{sec:preparation}
To model a strong lensing system with \texttt{GLEE} using our newly developed automated modeling codes, the user needs to provide information about the position and approximate light structure of the lens, the quasar images and satellites. In addition, several input files (see Table~\ref{tab:inputfiles}) are needed. The creation of each input file is conducted in separate codes that are described in the following.
\begin{table}[h]
\begin{tabular}{l|p{6.4cm}}
Input file & Description \\ \midrule
Image cutout & Cutout of the lensing system \\
Error map &
$1\sigma$ uncertainty of pixel intensity in the cutout, accounting for background and Poisson noise\\
PSF &
Image of point-spread function\\
Arc mask & Region of arc light (red area in Fig.~\ref{fig:masks})\\
Lens mask & Region for masking neighboring objects (green area in Fig.~\ref{fig:masks})
\end{tabular}
\caption[Sec.~\ref{sec:preparation}: Overview of input files]{Overview and description of required input files for modeling a strong gravitational lens with our \texttt{GLEE} automation code.}
\label{tab:inputfiles}
\end{table}
\paragraph{Science image cutout}
We start by creating a cutout of the lensing system from the preprocessed, reduced data that includes the
immediate environment ($\lesssim10''$) to be taken into account during the modeling. In the automated pipeline, the user creates a DS9 (\citealt{Joye.2003}) region file containing the cutout area in the field, as well as the positions of the lens, satellite galaxies, and the quasar images. Optionally, the background flux is subtracted by calculating the average flux of an empty patch in the field. The modeling code then creates the cutout, which is used for the subsequent modeling process, and calculates starting values for the lens mass and light parameters and image positions from the region information.
\paragraph{Error map}
The error map accounts for both background noise and Poisson noise. The background noise $\sigma_{\rm background}$ is approximated as a constant that is set to be the standard deviation from a small patch in the science image without astrophysical sources. The Poisson noise $\sigma_{\rm poisson}$ is computed from the weight image (exposure time map) and the science image. For images in units of counts per second we calculate
\begin{equation}
\sigma^2_{\rm poisson, \it i} = \Big|\frac{d_i}{t_i\times G}\Big|,
\end{equation}
with $d_i$ the observed intensity\footnote{Using the observed image instead of the model intensity may induce biases at low signal-to-noise ratios \citep{Horne1986}.}, $t_i$ the exposure time for pixel $i$, and $G$ the charge coupled device (CCD) gain for the specific filter. In the case of data units of counts, we have
\begin{equation}
\sigma^2_{\rm poisson, \it i} = \Big|\frac{d_i}{G}\Big|.
\end{equation}
We add $\sigma_{\rm poisson}$ in quadrature to the background noise level if $\sigma_{\rm poisson} > 2\times\sigma_{\rm background}$. We check this condition for each pixel $i$ to obtain the final error map:\\
\begin{align}
\sigma^2_{\rm tot, \it i} &=\begin{cases}
\sigma^2_{\rm background} + \sigma^2_{\rm poisson, \it i} & \text{if $\sigma_{\rm poisson, \it i} > 2\times\sigma_{\rm background}$;}\\
\sigma^2_{\rm background} & \text{otherwise.}
\end{cases}
\label{eq:sigmatotal}
\end{align}
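For concreteness, the following minimal Python sketch (not the pipeline's actual implementation; array and function names are illustrative) applies this prescription to an image in units of counts per second:
\begin{verbatim}
import numpy as np

def build_error_map(sci, exptime, gain, sigma_bkg):
    """1-sigma error map: constant background noise, with Poisson
    noise added in quadrature only where it exceeds twice the
    background level (counts-per-second data)."""
    sigma2_poisson = np.abs(sci / (exptime * gain))
    sigma2_tot = np.full_like(sci, sigma_bkg**2)
    add = np.sqrt(sigma2_poisson) > 2.0 * sigma_bkg
    sigma2_tot[add] += sigma2_poisson[add]
    return np.sqrt(sigma2_tot)
\end{verbatim}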
\paragraph{Point spread function}
If the PSF is not directly provided with the science image, the PSF can be created with our PSF code. For this, the user marks several non-saturated stars in the field, which spans $\sim$ 2.5 $\times$ 2.5 arc minutes in the case of the HST WFC3 data we use. The PSF code will then automatically create cutouts of these stars, centered on the brightest pixel. Since the star is not necessarily located exactly at the center of the brightest pixel, the PSF code interpolates and shifts the cutout within fractions of a pixel to center it in the cutout. In particular, the profile is shifted such that the brightnesses of the pixels to the left and to the right of the brightest pixel agree within a margin of 10\%, and similarly for the pixels above and below the brightest pixel. This results in the central 3$\times$3 region being roughly symmetric. To obtain the PSF, the recentered stars are normalized, then stacked by calculating the median per pixel among the cutouts, and renormalized such that the sum of all pixel values is 1. In addition, the \texttt{GLEE} automation code requires two PSFs, one for modeling the light of the lensed quasar, and one for the extended source. The first one includes the diffraction spikes, as that PSF image is used directly as the spatial profile for the quasar images. The second one is cropped at the first diffraction minimum of the Airy disc, because it is used for the convolution of the parameterized light distribution and arcs, and the computation time increases significantly with increasing size of the PSF used. The cropped PSF is also used to model the lens light. Subsampling of these two PSFs is required when the pixel sizes are large relative to the PSF's full width at half maximum, in order to avoid numerical inaccuracies due to an undersampled PSF.\footnote{This is the case for the HST-IR F160W filter that is used. In practice, we subsample the PSF when the pixel size is greater than 0.05\arcsec.}
The PSF code subsamples the PSF by a custom factor $n$ such that its pixel scale is $\frac{1}{n}$ of the original pixel size. Both subsampled PSFs are renormalized by the PSF code.
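A simplified sketch of this procedure is given below. It is not the pipeline's implementation: it recentres by shifting the peak to the cutout centre rather than via the 10\% brightness-balance criterion, and it uses a crude nearest-neighbour subsampling.
\begin{verbatim}
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def stack_psf(star_cutouts, subsample=1):
    """Median-stack recentred, normalised star cutouts into a PSF."""
    recentred = []
    for cut in star_cutouts:
        peak = np.unravel_index(np.argmax(cut), cut.shape)
        centre = ((cut.shape[0] - 1) / 2.0, (cut.shape[1] - 1) / 2.0)
        shifted = subpixel_shift(cut, (centre[0] - peak[0],
                                       centre[1] - peak[1]))
        recentred.append(shifted / shifted.sum())   # normalise each star
    psf = np.median(recentred, axis=0)
    if subsample > 1:
        # crude nearest-neighbour subsampling by a factor n
        psf = np.kron(psf, np.ones((subsample, subsample))) / subsample**2
    return psf / psf.sum()                          # renormalise to unity
\end{verbatim}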
\paragraph{Masks}
\texttt{GLEE} requires two masks for the modeling. The so-called arc mask contains the region of the lensed quasar and its host (red regions in Fig.~\ref{fig:masks}). It is used during the modeling of the lens light to exclude quasar and host light from the modeling, and later it gives the region of pixels that is used for reconstructing the source surface brightness (SSB) distribution. The lens mask specifies the pixels of the light of surrounding luminous objects such as neighboring stars (green areas in Fig.~\ref{fig:masks}). \texttt{GLEE} will sequentially fit the pixels of the lens light excluding regions of the arc mask and neighboring objects specified by the lens mask, as we describe in the following subsections. The masks are created by the user marking the corresponding regions, saving the DS9 region file of the marked areas, and running a mask generation code on these DS9 region files.
\subsubsection{Lens light modeling}
\label{sec:lenslight}
The first modeling step is to reconstruct the light of the lens galaxy and the multiple quasar images in order to subtract their contribution and reveal more prominently the underlying arc light. We model the lens light with the S\'{e}rsic profile (\citealt{Sersic.1963}) which is parametrized as
\begin{equation}
I_{\rm S}(x,y) = A_{\rm S}\exp\Bigg[-k\Bigg\{\Bigg(\frac{\sqrt{(x-x_{\rm S})^2+\left( \frac{y-y_{\rm S} }{q_{\rm S}} \right) ^2}}{r_{\rm eff}}\Bigg)^{\frac{1}{n_{\rm S}}}-1\Bigg\}\Bigg].
\end{equation}
\noindent
An overview of the parameters in the S\'{e}rsic profile and their priors used in the automated modeling is provided in Table \ref{tab:Sersicparam}. The constant $k$ normalizes $r_{\rm eff}$ such that $r_{\rm eff}$ is the half-light radius in the direction of the semi-major axis.
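For reference, a minimal implementation of this profile could look as follows. We assume here that the position angle $\phi_{\rm S}$ rotates the coordinates before the axis ratio is applied, and we use the standard incomplete-gamma condition for $k$; the exact conventions in \texttt{GLEE} may differ.
\begin{verbatim}
import numpy as np
from scipy.special import gammaincinv

def sersic_intensity(x, y, x_s, y_s, q_s, phi_s_deg, A_s, r_eff, n_s):
    """Elliptical Sersic profile; coordinates are rotated by the
    position angle before the axis ratio is applied."""
    k = gammaincinv(2.0 * n_s, 0.5)  # makes r_eff the half-light radius
    phi = np.deg2rad(phi_s_deg)
    dx, dy = x - x_s, y - y_s
    xr = dx * np.cos(phi) + dy * np.sin(phi)
    yr = -dx * np.sin(phi) + dy * np.cos(phi)
    r = np.sqrt(xr**2 + (yr / q_s)**2)
    return A_s * np.exp(-k * ((r / r_eff)**(1.0 / n_s) - 1.0))
\end{verbatim}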
\begin{table*}[h]
\begin{tabular}{l|l|l|p{5.5cm}|p{5.5cm}}
Component & Parameter & Description & Prior & Prior range / value \\ \midrule
& $x_{\rm S}\ ['']$ & $x$-centroid & Gaussian, flat prior in SSB reconstruction & centered on starting value, Gaussian $\sigma=0.3$ \\ \rule{0pt}{2ex}
& $y_{\rm S}\ ['']$ & $y$-centroid & Gaussian, flat prior in SSB reconstruction & centered on starting value, Gaussian $\sigma=0.3$ \\ \rule{0pt}{2ex}
& $q_{\rm S}$ & axis ratio & flat & single-band: [0.3, 1], multi-band: [0.1, 1] \\ \rule{0pt}{2ex}
S\'{e}rsic & $\phi_{\rm S}$ [$^\circ$] & position angle & flat & [0, 360]\\ \rule{0pt}{2ex}
& $A_{\rm S}$ & amplitude & flat & [0.01, 10] \\ \rule{0pt}{2ex}
& $r_{\rm eff}\ ['']$ & effective radius & flat & [0.01, 10]\\ \rule{0pt}{2ex}
& $n_{\rm S}$ & S\'{e}rsic index & flat & single-band: [0.5, 6], multi-band: [0.4,9]\\
\end{tabular}
\caption[Sec.~\ref{sec:lenslight}: Overview of S\'{e}rsic parameters and their priors]{Overview of S\'{e}rsic parameters and their priors. The S\'{e}rsic parameters are varied during lens light modeling and the modeling of the arc light with SSB reconstruction. The position angle is measured counterclockwise from the positive $x$-axis (East of North).}
\label{tab:Sersicparam}
\end{table*}
The Gaussian prior on the centroid coordinates is used to stabilize the optimization during lens light modeling, and is changed to a flat prior later when the code models the light of the arc (Sec.~\ref{sec:arclight}). The prior ranges on the axis ratio $q_{\rm S}$ and S\'{e}rsic\ index $n_{\rm S}$ are relaxed during multi-band modeling as more constraints are available when adding additional filters. As described in Sec.~\ref{sec:preparation}, we do not subsample the PSF for images with pixels smaller than 0.05\arcsec\, and use the cropped and subsampled PSF for images with a pixel scale greater than 0.05\arcsec.
The parameters are fitted to the observed intensity value $I_i^{\rm obs}$ of pixel $i$ by minimizing the $\chi^2_{\rm SB}$ of the surface brightness (SB)
\begin{equation}
\chi^2_{\rm SB} = \sum_{i=1}^{N_{\rm p}}\frac{|I_i^{\rm obs}-\rm PSF\otimes \it
I_{{\rm S},i} |^2}{\sigma^2_{\rm tot,\it i}},
\label{extsourcechi2}
\end{equation}
where $N_{\rm p}$ is the number of pixels in the cutout that are not masked, $\sigma^2_{\rm tot,\it i}$ is the total noise of pixel $i$ from Eq.~(\ref{eq:sigmatotal}), and the symbol $\otimes$ represents the convolution of the PSF with the predicted S\'{e}rsic intensity
$I_{{\rm S},i}$
of pixel $i$.
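A schematic evaluation of this figure of merit (illustrative only; the boolean array \texttt{excluded} marks the masked pixels) is:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def chi2_surface_brightness(data, model, psf, error_map, excluded):
    """Surface-brightness chi^2: PSF-convolved model vs. data,
    summed over the pixels that are not masked."""
    convolved = fftconvolve(model, psf, mode='same')
    resid = (data - convolved) / error_map
    return np.sum(resid[~excluded]**2)
\end{verbatim}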
The modeling code runs cycles that each consist of one simulated annealing run followed by one MCMC chain, using a sampling covariance matrix that is obtained from the previous chain and updated after every cycle. To obtain the uncertainties and optimized parameter values, we sample the parameter space with MCMC. Each chain samples 100,000 points, and the code removes the burn-in phase, which we conservatively set to be the first 50\% of the points in the chain. The modeling runs until two criteria are met. The first criterion is
\begin{equation}
\chi^2_{\rm red} = \frac{\chi^2_{\rm SB}}{\textrm{DOF}} \leq 1,
\label{lenschi2red}
\end{equation}
with the degrees of freedom
\begin{eqnarray}
\label{eq:dof}
\textrm{DOF} &=& \textrm{\# of pixels in the cutout} - \textrm{\# of masked pixels} \nonumber\\
&& -\ \textrm{\# of free parameters}.
\end{eqnarray}
The number of masked pixels is the sum of all pixels inside the arc mask and of nearby objects masked with the lens mask.
The second criterion is approximate convergence of the chain, i.e.,
\begin{equation}
\Delta \textrm{logP} \leq 5.
\label{lenslogP}
\end{equation}
This is the difference in the log-likelihood between the median of the first 2000 points in the chain after burn-in is removed and the median of the last 2000 points.
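Both stopping criteria are straightforward to evaluate on a chain of log-probability values; a minimal sketch (with illustrative function names) is:
\begin{verbatim}
import numpy as np

def reduced_chi2(chi2_sb, n_pixels, n_masked, n_free_params):
    """chi^2 per degree of freedom as defined above."""
    return chi2_sb / (n_pixels - n_masked - n_free_params)

def delta_logP(logP_chain, n_edge=2000):
    """Difference between the median log-probability of the first and
    last n_edge points of a chain (burn-in already removed)."""
    return abs(np.median(logP_chain[:n_edge])
               - np.median(logP_chain[-n_edge:]))
\end{verbatim}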
We visualize the lens light modeling steps with a flowchart in Figure \ref{fig:flowchart_lenslight}.
\begin{figure*}
\centering
\includegraphics[scale=0.42]{Bilder/3lenslightmodeling_3.eps}
\caption[Sec.~\ref{sec:lenslight}: Flowchart for lens light modeling]{Flowchart showing the decision-tree for the lens light modeling. The modeling code obtains the best configuration of S\'{e}rsic and point-source profiles to fit the primary lens light. The quality check is done with a $\chi^2_{\rm red}$ approach and a criterion for stability in the posterior values. In addition, the code also has the option to model the light of satellite lens galaxies.}
\label{fig:flowchart_lenslight}
\end{figure*}
In case the chain of the lens light model with one S\'{e}rsic profile (approximately) converges (i.e., fulfilling Eq.~\ref{lenslogP}) without reaching the $\chi^2_{\rm red}$ threshold in Eq.~(\ref{lenschi2red}), the code checks if a point source at the center of the lens improves the model given the possibility of an active galactic nucleus (AGN) within the lens galaxy. The light distribution of a point source is represented by the PSF. For this, the code checks how different point sources in an amplitude range between 0.01 and 100 change the $\chi^2_{\rm SB}$ of the model. If $\chi^2_{\rm SB}$ is lowered by more than 10, a point source component with the corresponding amplitude is added to the model. This is motivated by the Bayesian information criterion (BIC, following \citealt{Rusu.2019}): the point source adds one additional parameter (the amplitude). A $\chi^2_{\rm SB}$ decrease by 10 ensures that the BIC does not increase. The model is subsequently optimized and sampled until the criterion in Eq.~(\ref{lenslogP}) is fulfilled.
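Schematically, this acceptance test can be written as follows (illustrative only; \texttt{chi2\_scan} is assumed to map each trial amplitude to the resulting $\chi^2_{\rm SB}$):
\begin{verbatim}
def central_point_source(chi2_without, chi2_scan, threshold=10.0):
    """Return the trial amplitude to adopt for a central point source,
    or None if no trial lowers chi^2_SB by more than the
    BIC-motivated threshold (one extra free parameter)."""
    best_amp = min(chi2_scan, key=chi2_scan.get)
    if chi2_without - chi2_scan[best_amp] > threshold:
        return best_amp
    return None
\end{verbatim}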
When the criterion defined with Eq.~(\ref{lenslogP}) is met, the code checks the criterion of Eq.~(\ref{lenschi2red}). If the latter is fulfilled, the code stops the lens light modeling, or, in case of satellites in the cutout field, goes on to model the light of the satellites. If the criterion of Eq.~(\ref{lenschi2red}) is not fulfilled, one additional S\'{e}rsic profile with the same centroid is added to the model for the primary lens light. The code runs optimizing and sampling cycles until the criterion defined with Eq.~(\ref{lenslogP}) is met.
We model the light of the satellites with S\'{e}rsic profiles once the chain of the model for the primary lens light has converged according to Eq.~(\ref{lenslogP}). The user is queried to replace the masks such that the satellites are not masked anymore. The model is then optimized and sampled until the criterion in Eq.~(\ref{lenslogP}) is fulfilled. If this converged model of the primary lens light and of the satellite galaxies' light still does not fulfill Eq.~(\ref{lenschi2red}), the user is asked to either add a third S\'{e}rsic profile, refine the masks, or continue with the next modeling step directly. In the first two cases the code again runs an optimizing/sampling cycle until the $\Delta \textrm{logP}$ criterion (Eq.~\ref{lenslogP}) is met.
\subsubsection{Quasar light modeling}
\label{sec:quasarlight}
After the lens light modeling procedure (see Sec.~\ref{sec:lenslight} and Fig.~\ref{fig:flowchart_lenslight}), i.e., when we have obtained a good fit for the lens and the satellite light, the code continues by modeling the light of the lensed quasar. Now we mask only the luminous objects surrounding the lens system, i.e., we remove the arc mask such that the model now includes the lensed quasars and arcs. For each quasar image, we add a point-like light component, which is represented by the PSF and initially centered on the positions given by the user. The PSF used here is subsampled for images with pixels $>$0.05\arcsec and extends $\sim$4\arcsec$\times$4\arcsec\ to include the diffraction spikes. We only allow the quasar positions and amplitudes to vary; the S\'{e}rsic \, lens light parameters are fixed. To infer the best-fit parameters for the quasar images, we use again simulated annealing and MCMC with a sampling covariance matrix obtained from an earlier MCMC chain. We stop the quasar light modeling when approximate convergence of the chain (Eq.~\ref{lenslogP}) is achieved.
\subsubsection{Lens mass distribution modeling with source/image positions}
\label{sec:srcimgpos}
With the quasar image positions obtained from the quasar light modeling (see Sec.~\ref{sec:quasarlight}), we can model the mass distribution of the lens galaxy using the mapped source positions and subsequently the image positions. The modeling code queries the user for the choice of the lens mass profile. Our automated modeling code supports both a Pseudo Isothermal Elliptical Mass Distribution (PIEMD; \citealt{Kassiola.1993}) and a Singular Power-law Elliptical Mass Distribution (SPEMD; \citealt{Barkana.1998}). The steps conducted in the modeling of the lens mass distribution with source and image positions are visualized in Fig.~\ref{fig:flowchart_lensmass}.
\begin{figure}
\centering
\includegraphics[scale=0.3]{Bilder/2lensmass_5.eps}
\caption[Sec.~\ref{sec:srcimgpos}: Flowchart for lens mass distribution modeling with source and image positions]{Flowchart showing the decision tree for the lens mass distribution modeling with source/image positions. After the user specifies a lens mass profile (PIEMD or SPEMD), the code models the lens mass distribution first by using the modeled source position, and then by using the positions of the multiple quasar images with the option to include an external shear component to the model. Both steps use simulated annealing until the respective $\chi^2$ is below 5. The image position modeling is completed after an approximately converged MCMC chain is achieved. After this, the code continues the lens mass modeling with the arc light modeling and reconstruction of the SSB distribution.}
\label{fig:flowchart_lensmass}
\end{figure}
The dimensionless surface mass density (or convergence) of the SPEMD profile is given by
\begin{equation}
\label{eq:kappa_spemd}
\kappa_{\rm SPEMD}(x, y) = (1-\tilde{\gamma}_{\rm PL})\left[\frac{\theta_{\rm E}}{\sqrt{q_{\rm m}(x-x_{\rm m})^2+\frac{(y-y_{\rm m})^2}{q_{\rm m}}}}\right]^{2\tilde{\gamma}_{\rm PL}},
\end{equation}
where $(x_{\rm m}, y_{\rm m})$ is the lens mass centroid, $q_{\rm m}$ is the axis ratio of the elliptical mass distribution, $r_{\rm c}$ is the core radius, which we set to the negligible value of $1\times10^{-4}\,''$ (and therefore omit from Eq.~\ref{eq:kappa_spemd}), and $\tilde{\gamma}_{\rm PL}=(\gamma_{\rm PL}-1)/2$ is the radial profile slope (where $\gamma_{\rm PL}$ is the power-law slope of the three-dimensional mass density $\rho(r) \propto r^{-\gamma_{\rm PL}}$).
The convergence $\kappa$ is related to the lens potential $\psi(\vec{\theta})$ via $\nabla^2 \psi(\vec{\theta}) = 2\kappa$. The scaled deflection angle for computing the deflection of light rays is the gradient of the lens potential: $\alpha(\vec{\theta}) = \nabla \psi(\vec{\theta})$.
An overview of the mass parameters and their priors is shown in Table~\ref{tab:lensmassparam}. The isothermal PIEMD profile is a special case of the SPEMD profile with $\tilde{\gamma}_{\rm PL}=0.5$ in Eq.~(\ref{eq:kappa_spemd}), although with a slightly different parameterization of $\kappa$. In the case of the SPEMD profile, our uniform modeling pipeline first keeps the power-law index fixed to the isothermal value $\tilde{\gamma}_{\rm PL}=0.5$. Only later, when the code models the light of the arc (Sec.~\ref{sec:arclight}), do we allow $\tilde{\gamma}_{\rm PL}$ to vary between 0.3 and 0.7.
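For reference, the convergence profile can be evaluated with a few lines of code; the sketch below (illustrative only) omits the position-angle rotation and the negligible core radius:
\begin{verbatim}
import numpy as np

def kappa_power_law(x, y, x_m, y_m, q_m, theta_E, gamma_tilde=0.5):
    """Convergence of the power-law profile as written above, with
    the position-angle rotation and the tiny core radius omitted."""
    r_ell = np.sqrt(q_m * (x - x_m)**2 + (y - y_m)**2 / q_m)
    return (1.0 - gamma_tilde) * (theta_E / r_ell)**(2.0 * gamma_tilde)
\end{verbatim}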
\begin{table*}[h]
\begin{tabular}{l|l|p{3.5cm}|p{5.2cm}|p{4.7cm}}
Component & Parameter & Description & Prior & Prior range / value \\ \midrule
& $x_{\rm m}\ ['']$ & $x$-centroid & Gaussian; flat prior in SSB reconstruction & centered on starting value, Gaussian $\sigma=0.3$ \\ \rule{0pt}{2ex}
primary & $y_{\rm m}\ ['']$ & $y$-centroid & Gaussian; flat prior in SSB reconstruction & centered on starting value, Gaussian $\sigma=0.3$ \\ \rule{0pt}{2ex}
deflector & $q_{\rm m}$ & axis ratio & flat & [0.3, 1]\\ \rule{0pt}{2ex}
(PIEMD/& $\phi_{\rm m}$ [$^\circ$] & position angle & flat & [0, 360]\\ \rule{0pt}{2ex}
SPEMD)& $\theta_{\rm E}\ ['']$ & Einstein radius & flat & [0.5, 3]\\ \rule{0pt}{2ex}
& $r_{\rm c}\ ['']$ & core radius & exact & $10^{-4}$\\\rule{0pt}{2ex}
& $\tilde{\gamma}_{\rm PL}$ & power-law index (SPEMD)& exact; flat in SSB modeling & 0.5; [0.3, 0.7]\\ \midrule
& $x_{\rm m, sat}\ ['']$ & $x$-centroid & exact & fixed to satellite light centroid \\ \rule{0pt}{2ex}
satellite & $y_{\rm m, sat}\ ['']$ & $y$-centroid & exact & fixed to satellite light centroid \\ \rule{0pt}{2ex}
deflector(s) & $q_{\rm m, sat}$ & axis ratio & exact & 1 \\ \rule{0pt}{2ex}
(SIS) & $\phi_{\rm m, sat}$ [$^\circ$] & position angle & exact & 0\\ \rule{0pt}{2ex}
& $\theta_{\rm E, sat}\ ['']$ & Einstein radius & flat & [0.01, 3]\\ \rule{0pt}{2ex}
& $r_{\rm c, sat}\ ['']$ & core radius & exact & $10^{-4}$\\\rule{0pt}{2ex}
& $\tilde{\gamma}_{\rm sat}$ & power-law index & exact & 0.5\\ \midrule
external& $\gamma_{\rm ext}$ & magnitude & flat & [0, 0.6]\\ \rule{0pt}{2ex}
shear& $\phi_{\rm ext}$ [$^\circ$] & position angle & flat & [0, 360]
\end{tabular}
\caption[Sec.~\ref{sec:srcimgpos}: Overview of lens mass parameters and their priors]{Overview of lens mass parameters and their priors throughout the modeling process. The position angles $\phi_\text{m}$, $\phi_\text{m, sat}$, and $\phi_{\rm ext}$ are measured counterclockwise from the positive $x$-axis (East of North). In case of the centroid coordinates, the modeling code starts with a Gaussian prior centered on the starting value obtained from the user input, which is removed after lens light modeling. The power-law index $\tilde{\gamma}_{\rm PL}$ is fixed to 0.5 (isothermal) in the beginning and changed to a flat prior during arc light modeling and SSB reconstruction.}
\label{tab:lensmassparam}
\end{table*}
Constraining the lens mass parameters is first done by modeling a source position that reproduces the observed image positions of the quasars. The position of the source is obtained by using the lens equation
\begin{equation}
\vec{\beta}_i=\vec{\theta}_i-\vec{\alpha}_i(\vec{\theta}_i), \ \ \ i=1,...,N_{\rm im},
\end{equation}
where the image positions $\vec{\theta}_i = (x_{{\rm QSO},i}, y_{{\rm QSO},i})$ are mapped back to the source plane with the scaled deflection angle $\vec{\alpha}_i(\vec{\theta}_i)$. $N_{\rm im}$ is the image multiplicity of the system.
The lens mass parameters $\vec{\eta}$ are then varied to minimize the quantity
\begin{equation}
\chi^2_{\rm sr} = \sum_{i=1}^{N_{\rm im}}\frac{|\vec{\beta}_i(\vec{\eta}, \vec{\theta}_i)-\vec{\beta}^{\rm mod}(\vec{\eta}, \vec{\theta}_i)|^2}{(\frac{\sigma_i}{\sqrt{\mu_i}})^2},
\end{equation}
with $\vec{\beta}_i$ the mapped source position for image $i$, $\vec{\beta}^{\rm mod}$ the modeled source position (i.e., the mean of the mapped source positions, weighted by the magnification), $\sigma_i$ the positional uncertainty on the image plane (we use 0.004\arcsec), and $\mu_i$ the magnification of image $i$. We use simulated annealing to optimize the parameters until $\chi^2_{\rm sr}<5$.
Given the model obtained by minimizing the $\chi^2_{\rm sr}$, which is based on the mapped source positions, the parameter values are further optimized by using the image positions as constraints. In the case of a satellite galaxy close to the lensing system, we can model its mass distribution with a singular isothermal sphere (SIS) profile. The satellite mass centroid is fixed to the satellite light centroid obtained in Sec.~\ref{sec:lenslight}.
In addition, the user can choose to add an external shear profile to the model after the code has constrained the lens mass parameters with the modeled source position. This accounts for the tidal gravitational field of objects surrounding the lensing system. The shear magnitude is calculated by $\gamma_{\rm ext} = \sqrt{\gamma_{\rm ext,1}^2 + \gamma_{\rm ext,2}^2}$, with $\gamma_{\rm ext,1}$ and $\gamma_{\rm ext,2}$ the components of the shear matrix. The parameters $\vec{\eta}$ are varied to minimize
\begin{equation}
\chi^2_{\rm im} = \sum_{i=1}^{N_{\rm im}}\frac{|\vec{\theta}_i^{\rm obs}-\vec{\theta}_i^{\rm pred}(\vec{\eta}, \vec{\beta}^{\rm mod})|^2}{\sigma_i^2},
\end{equation}
with $\vec{\theta}_i^{\rm obs}$ and $\vec{\theta}_i^{\rm pred}$ the observed and the predicted position for image $i$, respectively. Again, the code optimizes with simulated annealing until $\chi^2_{\rm im}<5$. Afterwards, it samples with MCMC chains until approximate convergence ($\Delta$\textrm{logP}$\leq$5).
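Both figures of merit are inexpensive to evaluate; the sketch below (illustrative only, with the deflection angles and magnifications assumed to be precomputed for the current lens mass parameters) implements them for a constant positional uncertainty of 0.004\arcsec:
\begin{verbatim}
import numpy as np

def chi2_source_plane(theta, alpha, mu, sigma=0.004):
    """Source-plane chi^2: images theta (N,2) are mapped back with the
    deflections alpha (N,2); the modelled source position is the
    magnification-weighted mean of the mapped positions."""
    beta = theta - alpha
    w = np.abs(mu)
    beta_mod = np.average(beta, axis=0, weights=w)
    return np.sum(w[:, None] * (beta - beta_mod)**2) / sigma**2

def chi2_image_plane(theta_obs, theta_pred, sigma=0.004):
    """Image-plane chi^2 between observed and predicted positions."""
    return np.sum((theta_obs - theta_pred)**2) / sigma**2
\end{verbatim}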
\subsubsection{Lens mass distribution modeling with surface brightness distribution}
\label{sec:arclight}
Once the lens mass distribution is modeled by using the multiple quasar image positions, we proceed to constrain the lens mass parameters by modeling the SB distribution of the unlensed source and the light of the arc, which is the lensed light of the quasar host galaxy. We illustrate the following modeling steps with a flowchart in Fig.~\ref{fig:flowchart_arclight}.
\begin{figure}
\centering
\includegraphics[scale=0.6]{Bilder/5arclight_2.eps}
\caption[Sec.~\ref{sec:arclight}: Flowchart for arc light modeling and reconstruction of the SSB distribution]{Flowchart for arc light modeling and SSB reconstruction. In the first stage, the quasar amplitudes are lowered, and in the second stage, the error map is boosted in the bright quasar regions, in order to model the light of the arc. The lens mass parameters are allowed to vary in this modeling step. In a final step, the code runs optimizing and sampling cycles until full convergence of the chain is achieved.}
\label{fig:flowchart_arclight}
\end{figure}
Our automated modeling pipeline uses \texttt{GLEE} to reconstruct the most probable SSB distribution on a pixelated grid through a linear inversion of the pixel intensities in the arc mask (\citealt{Suyu.2006}). The optimizing/sampling of the posterior probability distribution includes both the $\chi^2_{\rm SB}$ term in Eq.~(\ref{extsourcechi2}) and a prior term that acts on the source pixels with a quadratic regularizing function. The SSB distribution is then mapped back to the image plane to reconstruct the arc light. We use the properties of the arc light to further constrain the lens mass parameters. The inferred quasar light intensity in the image cutout (see Sec.~\ref{sec:quasarlight}) is a superposition of the real quasar amplitude and a roughly constant contribution from the lensed host light. To reconstruct the light of the host galaxy, we lower the parameter value of the quasar amplitude. The new amplitudes are determined such that the condition
\begin{equation}
I_{\rm circ,avrg} = 3 \times I_{\rm arc,avrg}
\label{qsoampcrit}
\end{equation}
is fulfilled, where $I_{\rm circ,avrg}$ is the average intensity in a circle with a radius of 4 pixels around the center of the quasar after the quasar and lens light is removed, and $I_{\rm arc,avrg}$ is the average intensity of a region provided by the user that contains a small patch of arc light. The factor of 3 is chosen empirically such that the flux is continuous across the arc. Our automation code then adjusts each quasar amplitude such that the criterion in Eq.~(\ref{qsoampcrit}) is met. For the optimizing/sampling process, we first allow the lens mass parameters, including now also the power-law index $\tilde{\gamma}_{\rm PL}$ of the SPEMD profile, and the quasar amplitudes to vary, while fixing all the parameters of the lens/satellite light profiles and the quasar centroids. From this point on, we remove the Gaussian prior of the lens galaxy centroids in the lens mass and S\'{e}rsic profiles.
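One possible way to satisfy this criterion, shown below as an illustrative sketch rather than the pipeline's implementation, is to solve for the amplitude with a bracketing root finder; \texttt{qso\_psf\_image} is assumed to be the quasar PSF profile already placed on the full image grid, and the lens light is assumed to have been subtracted from the data.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def adjust_quasar_amplitude(data_minus_lens, qso_psf_image, qso_xy,
                            arc_patch, factor=3.0, radius=4,
                            amp_bracket=(0.0, 100.0)):
    """Find the quasar amplitude for which the mean residual intensity
    within `radius` pixels of the quasar centre equals `factor` times
    the mean intensity of the user-marked arc patch."""
    yy, xx = np.indices(data_minus_lens.shape)
    circle = (xx - qso_xy[0])**2 + (yy - qso_xy[1])**2 <= radius**2
    I_arc = data_minus_lens[arc_patch].mean()

    def mismatch(amp):
        resid = data_minus_lens - amp * qso_psf_image
        return resid[circle].mean() - factor * I_arc

    # assumes the mismatch changes sign within the amplitude bracket
    return brentq(mismatch, *amp_bracket)
\end{verbatim}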
To sample the parameter space efficiently, we use \texttt{EMCEE}, an ensemble sampler based on MCMC chains that is highly parallelizable \citep{ForemanMackey.2013}. With \texttt{EMCEE}, we obtain a sampling covariance matrix for the parameter space, which makes the subsequent sampling with Metropolis-Hastings MCMC more efficient. We consider this modeling part successful when the Metropolis-Hastings chain has approximately converged, now with a stricter requirement:
\begin{equation}
\Delta \textrm{logP} \leq 2.
\label{eq:logP2}
\end{equation}
The quasar images are much brighter than the light of the arc, and residuals of quasar light fitting with imperfect PSF can often overwhelm the signal in the arcs. Therefore, we allow for bigger uncertainties in the quasar image areas by boosting the error map in these regions. The user marks the areas dominated by quasar light, and the code will boost all the pixels of the error map within this region such that their normalized residuals are within $\pm1\sigma$. The error map can also be boosted in the satellite galaxy region if the user chooses to do so. This can be necessary if the satellite galaxy is very close to the arc of the lensing system.
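Schematically, the boosting can be written as (illustrative only):
\begin{verbatim}
import numpy as np

def boost_error_map(error_map, data, model, boost_region):
    """Inflate the error map inside the user-marked region(s) so that
    the normalised residuals there lie within +/- 1 sigma."""
    boosted = error_map.copy()
    resid = np.abs(data - model)
    sel = boost_region & (resid > error_map)
    boosted[sel] = resid[sel]
    return boosted
\end{verbatim}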
In the next modeling step, now including the boosted error map, we fix all quasar light parameters and allow the lens mass and the lens light parameters to vary. If the uncertainties in the satellite region are not boosted, the satellite light parameters are varied as well. We use \texttt{EMCEE} again to obtain a sampling covariance matrix and optimize/sample with simulated annealing/MCMC until approximate convergence of the chain (Eq.~\ref{eq:logP2}) is achieved. In a last step, the code checks the model for full convergence by analyzing the discrete power spectrum of the chain (following \citealt{Dunkley.2005}).
If full convergence is not yet achieved, the code runs optimizing and sampling cycles with an increased chain length by now sampling 200,000 points. When full convergence is achieved, the code stops and the modeling is completed.
\subsection{Multi-band modeling}
\label{sec:multiband}
In addition to the single-band automation code described in Sec.~\ref{sec:singleband}, we developed a code to model simultaneously systems with strongly lensed quasars using multi-band data. The modeling procedure is in most parts analogous to the single-band case and described in Table \ref{tab:Modelingtable_multi}. We present an overview flowchart in Fig.~\ref{fig:flowchart_multiband}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.55]{Bilder/6multiband_2.eps}
\caption{Overview flowchart for multi-band modeling of a lensed quasar system with the multi-band \texttt{GLEE} automation code. Details are described in flowcharts for each step and in the corresponding subsections, as referenced in the flowchart.}
\label{fig:flowchart_multiband}
\end{figure}
Once we obtain a single-band lens mass model in one band (the reference band) and a lens and quasar light model in the other bands with the pipeline described in Sec.~\ref{sec:singleband}, we proceed to model jointly the lens and quasar light in all the considered bands. We choose the F160W band as our reference band unless otherwise specified, because the arc light is most prominent in this wavelength.
We use the modeled quasar positions of the individual bands to align the multiple filters. The lens mass distribution model in the reference filter serves as a starting model for the multi-band modeling.
In the first step, we pair the reference filter with one of the additional filters and model the lens and quasar light together with MCMC chains. We fix the amplitudes of the lens light components in the reference filter and force the lens light amplitudes in the other filter to follow the same ratio of light amplitudes as in the reference filter. In addition, we link the remaining lens light parameters (centroid, axis ratio, position angle, effective radius, and S\'{e}rsic index) between the filters. The quasar light centroids are linked as well, while the quasar amplitudes vary independently. After a MCMC chain is approximately converged with $\Delta \textrm{logP} \leq 5$, we fix the lens light amplitudes of this filter and add the next filter which is modeled together with the reference filter in the same way as the filter before. We let the quasar light parameters of the previous filters vary too.
Once approximate convergence ($\Delta \textrm{logP} \leq 5$) is achieved for all pairs, we model the lens and quasar light parameters of all filters simultaneously, now allowing all lens light amplitudes to vary independently while keeping the remaining lens light parameters and quasar light centroids linked.
As in the single-band modeling, the modeled quasar positions are then used for image position modeling (see Sec.~\ref{sec:srcimgpos}) to constrain the lens mass parameters. In the last step, the light of the arc and the SSB distribution are reconstructed analogously to the single-band modeling case described in Sec.~\ref{sec:arclight}. Finally, we run additional MCMC chains to achieve full convergence of our lens model parameters.
\begin{table*}
\begin{tabular}{lp{6.4cm}p{1cm}lccccc}\toprule
Step & Description & Section & Criteria & \multicolumn{2}{c}{Lens mass parameters} & Lens light & \multicolumn{2}{c}{Source light}
\\\cmidrule(lr){5-6}\cmidrule(lr){8-9}
& & & & profiles & ext. shear & & quasar & host \\\midrule\midrule \vspace{5px}
1 & pair each band with reference filter and model lens + quasar light together & \ref{sec:lenslight}/ \ref{sec:quasarlight}, \ref{sec:multiband} & $\circlearrowleft_2 (\chi^2_{\rm red}\leq1)$ & $\bigcirc$ & $\bigcirc$ & $\checkmark$ & $\checkmark$ & $\times$ \\ \vspace{5px}
2 & modeling of lens + quasar light with all bands at once & \ref{sec:lenslight}/ \ref{sec:quasarlight}, \ref{sec:multiband} & $\circlearrowleft_2 (\Delta \textrm{logP}\leq5)$ & $\bigcirc$ & $\bigcirc$ & $\checkmark$ & $\checkmark$ & $\times$ \\ \vspace{5px}
3 & image position modeling with external shear & \ref{sec:srcimgpos}, \ref{sec:multiband} & \begin{tabular}[t]{@{}l@{}}$\circlearrowleft_1 (\chi^2<5,$ \\ $\Delta \textrm{logP}\leq5)$\end{tabular} & $\checkmark$ & $\checkmark$ & $\bigcirc$ & $\bigcirc$ & $\times$ \\ \vspace{5px}
4a & modeling of arc light + SSB distribution reconstruction & \ref{sec:arclight}, \ref{sec:multiband} & $\circlearrowleft_2 (\Delta \textrm{logP}\leq2)$ & $\checkmark$ & $\checkmark$ & $\bigcirc$ & $\checkmark_{\rm A}$ & $\checkmark$\\ \vspace{5px}
4b & modeling of arc light + SSB distribution reconstruction & \ref{sec:arclight}, \ref{sec:multiband} & $\circlearrowleft_2 (\Delta \textrm{logP}\leq2)$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\bigcirc$ & $\checkmark$ \\\bottomrule
\end{tabular}
\ \\ \ \\
\textit{Symbols:}\\
\begin{tabular}{ll}
$\circlearrowleft_1$: optimizing cycle with simulated annealing & $\checkmark$: parameters varying\\
$\circlearrowleft_2$: optimizing and sampling cycle with simulated annealing and MCMC \ \ \ \ \ & $\bigcirc$: parameters fixed \\
$\checkmark_{A}$: only amplitude varying & $\times$: parameters not included
\end{tabular}
\ \\
\caption{Overview of the different modeling steps conducted in the multi-band modeling code. Each step is described briefly in the second column. The modeling technique and stopping criteria are given in the third column with the symbols described under the table. The cycle runs until the stopping criteria in the parentheses are met. The remaining columns show which parameters are varying, fixed, or not included.}
\label{tab:Modelingtable_multi}
\end{table*}
\subsection{Time delay estimates}
\label{sec:timedelay}
Strong lensing time-delay cosmography requires time-delay measurements and thus observational programs that monitor the lensed quasars. To facilitate the scheduling of the monitoring, estimates of the time delays of new quasar systems are useful. Furthermore, a comparison of the (blind) prediction of the time delays to the subsequently measured time delays provides a crash test of mass modeling procedures (\citealt{Treu.2016}).
Once we obtain a good model for the lens potential $\psi(\vec{\theta})$ of the system from the previous subsections, we can use it to compute the relative time delays between the different images. If the redshifts of the lens ($z_{\rm d}$) and of the source ($z_{\rm s}$) are available, then we can calculate the time-delay distance
\begin{equation}
D_{\Delta t} = (1+z_{\rm d})\frac{D_{\rm d} D_{\rm s}}{D_{\rm ds}}
\label{eq:tddistance}
\end{equation}
by assuming a cosmological model. The distances on the right-hand side $D_{\rm d}$, $D_{\rm s}$ and $D_{\rm ds}$ are, respectively, the angular diameter distance to the deflector/lens galaxy, to the source galaxy, and between the deflector and source galaxy. With the time-delay distance, \texttt{GLEE} can calculate the time delays between the quasar images \textit{i} and \textit{j}:
\begin{equation}
\Delta t_{ij} = \frac{1}{c}D_{\Delta t}\Delta \tau_{ij},
\label{eq:td}
\end{equation}
where
\begin{equation}
\label{eq:fermatpot}
\Delta \tau_{ij} = \left[\frac{1}{2}(\vec{\theta}_i-\vec{\beta})^2 - \psi(\vec{\theta}_i)\right] - \left[\frac{1}{2}(\vec{\theta}_j-\vec{\beta})^2 - \psi(\vec{\theta}_j)\right]
\end{equation}
is the Fermat potential difference between quasar images \textit{i} and \textit{j}, which is obtained from the final lens mass model. We have a separate code that calculates the time-delay distance from Eq.~(\ref{eq:tddistance}) and the relative time delays from Eq.~(\ref{eq:td}), including 1$\sigma$ uncertainties, by using the final MCMC chain of the model parameters from the modeling code and the cosmological parameters provided by the user.
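For concreteness, Eqs.~(\ref{eq:tddistance}) and (\ref{eq:td}) can be evaluated for a chosen cosmology as in the following sketch; this is not the separate \texttt{GLEE} code, and the function name and Fermat potential values are illustrative, with $\Delta\tau$ given in arcsec$^2$.
\begin{verbatim}
import numpy as np
from astropy import units as u, constants as const
from astropy.cosmology import FlatLambdaCDM

def predicted_time_delays(z_d, z_s, dtau_arcsec2, H0=70.0, Om0=0.3):
    """Time-delay distance and relative time delays (in days) from
    Fermat potential differences given in arcsec^2, for flat LCDM."""
    cosmo = FlatLambdaCDM(H0=H0, Om0=Om0)
    D_d = cosmo.angular_diameter_distance(z_d)
    D_s = cosmo.angular_diameter_distance(z_s)
    D_ds = cosmo.angular_diameter_distance_z1z2(z_d, z_s)
    D_dt = (1.0 + z_d) * D_d * D_s / D_ds       # time-delay distance
    dtau = (np.atleast_1d(dtau_arcsec2) * u.arcsec**2).to(u.rad**2).value
    dt = (D_dt / const.c * dtau).to(u.day)      # relative time delays
    return D_dt, dt

# Example with the default redshifts used when none are available:
# D_dt, dt = predicted_time_delays(0.5, 2.0, [0.1, 0.25])
\end{verbatim}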
\section{Observations}
\label{sec:observations}
Our sample consists of the nine strongly lensed quasars DES J0029$-$3814 (Schechter et al. in prep), DES J0214$-$2105 \citep{Agnello2019, Lee2019}, DES J0420$-$4037 (Ostrovski et al. in prep), PS J0659$+$1629 \citep{Delchambre2019}, 2M1134$-$2103 \citep{Lucey.2018,Rusu.2019}, J1537$-$3010 \citep{Lemon.2019,Delchambre2019,Stern2021}, PS J1606$-$2333 \citep{Lemon2018}, PS J1721$+$8842 \citep{Lemon2018}, and DES J2100$-$4452 \citep{Agnello2019}.
A detailed description of the observations and the peculiarities of each lensing system are presented by S22.
In Fig.~\ref{fig:colorimgs} we show color images of the nine systems, created with the three HST bands F160W, F475X, and F814W used as the red, blue, and green channels, respectively.
\begin{figure*}[h]
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/0029_labeled.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/0214_labeled.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/0420_labeled.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/0659_labeled.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/1134_labeled.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/1537_labeled.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/1606_labeled.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/1721_labeled_2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/ColorImgs/2100_labeled.eps}}
\caption{Color images for each of the nine lenses in our sample created with the three HST bands F160W (red), F475X (blue), and F814W (green).}
\label{fig:colorimgs}
\end{figure*}
\section{Modeling results}
\label{sec:results}
In this section, we describe the modeling results of our sample of lensing systems obtained with our autonomous modeling pipeline presented in Sec.~\ref{sec:modeling}.
The input files for each lensing system, i.e., PSF, error map, and masks, were obtained by following Sec.~\ref{sec:preparation}. Each system is modeled with a SPEMD profile and external shear. The lens light is modeled with two S\'{e}rsic profiles unless specified otherwise in the following subsections. For two systems (J0659$+$1629 and J1606$-$2333) we additionally model the light and mass distribution of a close-by satellite galaxy. We decide which bands to include by visual inspection of the residuals after modeling the lens and quasar light, i.e., the remaining light from the arcs, which are the lensed images of the quasar host galaxy.
If there is significant arc light in a wavelength band, then we include that band in the final multi-band model.
In Table \ref{tab:results_light} we present figures of the modeled light and reconstructed source for each of the nine lensing systems in our sample. We show the observed image (third column), the modeled light and critical curves (fourth column), and the normalized residuals in a range between $-$5$\sigma$ and 5$\sigma$ (fifth column). For each pixel, the normalized residuals show the difference between data and model, normalized by the estimated standard deviation. We show cropped images instead of the full cutout used for modeling for better visibility and indicate 1\arcsec\ with a white line. The panels are oriented such that North is up and East is left. The sixth column shows the reconstructed SSB distribution of the quasar host galaxy on the image plane. The seventh column shows the SSB distribution on the source plane with the caustic curves plotted in red and the mean, weighted source position of the quasar as a blue star. We show rulers with a length of 0.5\arcsec\ in the $x$- and $y$-directions because the pixel size can be different in those directions.
In Table \ref{tab:results_param}, we present the median parameter values of the lens mass distribution together with their 1$\sigma$ uncertainties from the final chain of each multi-band model. We also show the best-fit lens and satellite light parameters in the F160W band (the reference filter). The centroid coordinates are with respect to the bottom-left corner of the image cutout. The light parameters in the F475X and F814W bands are presented in Tables \ref{tab:results_param_475} and \ref{tab:results_param_814}, respectively.
In Table \ref{tab:chi2} we summarize the single-band $\chi_{\rm red}^2$ of each modeled band, and a total $\chi_{\rm red, tot}^2$, which is the sum of the independent single-band $\chi^2$ divided by the DOF of the multi-band model. We exclude pixels masked in the lens mask and the boosted quasar light regions when computing the DOF via Eq.~(\ref{eq:dof}). In Table \ref{tab:kappa} we present the convergence $\kappa$, total shear strength $\gamma_{\rm tot}$, and magnification $\mu = 1/\left[(1-\kappa)^2 - \gamma_{\rm tot}^2\right]$ at the (modeled) image positions.
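As a small illustration of these quantities, the total reduced $\chi^2$ and the point magnification can be computed as in the following sketch, where the function and variable names are placeholders:
\begin{verbatim}
import numpy as np

def total_reduced_chi2(chi2_per_band, dof_multiband):
    """Sum of the independent single-band chi^2 values divided by the
    degrees of freedom of the multi-band model."""
    return np.sum(chi2_per_band) / dof_multiband

def point_magnification(kappa, gamma_tot):
    """Magnification at an image position from the local convergence
    and total shear; negative values correspond to saddle images."""
    return 1.0 / ((1.0 - kappa)**2 - gamma_tot**2)
\end{verbatim}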
In addition, we calculate the Fermat potential differences at the multiple, modeled image positions, and predict the relative time delays for each system. We use the redshifts mentioned in Sec. \ref{sec:observations}. If the redshifts are not available yet, we assume $z_{\rm d}\equiv0.5$ and $z_{\rm s}\equiv2$. The results are shown in Tables \ref{tab:fermat} and \ref{tab:timedelays}.
Since the lens light is described by multiple S\'{e}rsic components, we calculated the second brightness moments of the primary lens light model to compare the median centroid, axis ratio, and position angle to those of the mass distribution. The result is shown in Fig.~\ref{fig:alignment}. The mass and light centroids align well in most cases within one pixel (0.08\arcsec), with the exception of PS J0659$+$1629, which shows a large offset in the $x$-centroid. The position angles of mass and light agree within $\sim10\degree$ for 5/9 systems; the largest offset, $\sim$50$^\circ$, again occurs for J0659$+$1629. For the axis ratio, the models do not follow the 1:1 relation as closely. Five models are more elliptical in mass than in light; four of these either have a satellite or are highly sheared, which could explain the low mass axis ratio.
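The light quantities entering this comparison follow from the first and second brightness moments of the (noise-free) lens light model. A minimal sketch of such a measurement, assuming the model image is available as a 2D array, is:
\begin{verbatim}
import numpy as np

def light_moments(image):
    """Centroid, axis ratio and position angle from the first and
    second brightness moments of a light-model image."""
    y, x = np.indices(image.shape)
    flux = image.sum()
    xc, yc = (image * x).sum() / flux, (image * y).sum() / flux
    # second central moments
    qxx = (image * (x - xc)**2).sum() / flux
    qyy = (image * (y - yc)**2).sum() / flux
    qxy = (image * (x - xc) * (y - yc)).sum() / flux
    # eigenvalues of the moment tensor give the squared semi-axes
    root = np.sqrt(0.25 * (qxx - qyy)**2 + qxy**2)
    lam1 = 0.5 * (qxx + qyy) + root
    lam2 = 0.5 * (qxx + qyy) - root
    axis_ratio = np.sqrt(lam2 / lam1)
    position_angle = 0.5 * np.degrees(np.arctan2(2.0 * qxy, qxx - qyy))
    return (xc, yc), axis_ratio, position_angle
\end{verbatim}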
Individual modeling details for each lensing system are discussed further in the following subsections.
\subsection{DES J0029-3814}
This lensing system was modeled solely in the IR F160W band, since sufficient arc light from the host galaxy could not be identified by eye in the remaining bands. We used two S\'{e}rsic profiles and a central point source component for the primary lens light. The PSF was estimated by selecting two stars in the field.
The best-fit parameters in Table \ref{tab:results_param} show that the centers of the mass and light distribution are offset by $\sim$0.04\arcsec\ in $x$-direction and by $\sim$0.02\arcsec\ in $y$-direction. We obtain a high shear magnitude of $\sim$0.2 with a low position angle (14$^\circ$), which is possibly due to the overdense environment to the North of the lensing system.
\subsection{DES J0214-2105}
We modeled this system in all three considered bands (F160W, F475X, and F814W) and noticed a strong underfitting of the quasar amplitudes in the F475X band after step 4(a) (see Table \ref{tab:Modelingtable_multi}). This can be due to the faint arc light and an imperfect PSF model resulting in higher residuals in the multiple quasar image regions. The underfitting is not directly obvious from the residuals because the missing quasar amplitude is compensated by high source intensity values in a single pixel. To avoid a negative impact of this band on the lens mass model, we present the F160W single-band model. The PSF was approximated with one star from the field in the F160W band, and two stars in each of the F475X and F814W bands. The centers of the mass and light distribution are very well aligned, with an offset within $\sim$0.01\arcsec.
\subsection{DES J0420-4037}
This lensing system was modeled in all three bands. As in the case of J0214$-$2105, the quasar amplitudes in the F475X band are strongly underfitted. We present the F160W single-band model results.
We approximated the PSF in the F160W band by choosing two stars in the field; for the F475X and F814W bands, we used one star each. Mass and light show a small offset of $<0.01$\arcsec\ in the $x$-direction and $\sim$0.01\arcsec\ in the $y$-direction. The modeling residuals indicate the presence of several light clumps near or in the arc. We confirmed two of them to be counter images, which hints at the presence of a second, lensed background source.
\subsection{PS J0659+1629}
This lensing system was modeled in the IR F160W band because of insufficient arc light in the other bands. We also include an SIS profile for the satellite mass and a S\'{e}rsic profile for the satellite light. The satellite is very close to the northern-most quasar image, thus we boost the error map in the satellite region in the same way as the quasar image regions in the last modeling step. The PSF was estimated with four stars from the field. We found two very faint light clumps in the residuals, at positions that suggest they are counter images of a second, lensed background source at a similar redshift as the first source. The dark region in the source light reconstruction at the satellite position does not have an effect on the mass model since its counter image lies outside the reconstruction region (arc mask). Also for this system the quasar amplitudes are underfitted after step 5(a) (see Table \ref{tab:Modelingtable_single}), which might have an effect on the lens mass model. This is difficult to overcome with automated modeling procedures and requires individual tweaks that are deferred to future work on the cosmographic analysis.
This system shows the biggest difference between mass and light in our sample, with an offset of $\sim$0.1\arcsec\ in $x$-direction and $\sim$0.05\arcsec\ in $y$-direction. Furthermore, the modeled lens mass distribution is much more elliptical with $\Delta q=q_{\rm S}-q_{\rm m} \sim -0.4$ and the position angle is offset by $-45$\degree. This could be due to the presence of the satellite galaxy close to image D.
\subsection{2M1134-2103}
This system has a faint lens and very bright quasar images, which makes an exact model of the lens light, as well as a spectroscopic redshift of the lens, difficult to obtain. We performed the modeling in the F160W and F814W bands. The PSF for the F160W band was estimated by using five stars; we chose three stars to build the PSF in the F814W band.
The median mass and light centroid align very well with an offset of less than $\sim$0.01\arcsec\ in $x$-direction and $\sim$0.01\arcsec\ in $y$-direction. This system has the highest external shear magnitude $\gamma_{\rm ext}$ of the sample which is consistent with the elongated shape and the overdense environment in the field of view of this system \citep{Rusu.2019}.
\subsection{J1537-3010}
This system was modeled in all three bands. In each band, we chose five stars from the field to approximate the PSF.
The centers of the mass and light distribution align very well within $\sim$0.01\arcsec. In the model residuals shown in Table \ref{tab:results_light} we find a small offset between the quasar images and the arc in the F475X and F814W bands. A possible explanation for this offset is that these two bands are in the rest-frame UV of the quasar host galaxy and thus are likely dominated by light from specific star-forming regions in the host galaxy. The power-law slope $\tilde{\gamma}_{\rm PL}\sim0.3$ hits the lower bound of the prior, which could be due to our imperfect PSF models in the F475X and F814W bands. As we will see in Sec.~\ref{sec:comparison}, the PSF has a significant effect on $\tilde{\gamma}_{\rm PL}$.
This system has one of the most prominent arcs in all three \textit{HST} filters in our sample, which is ideal for future cosmographic analysis.
\subsection{PS J1606-2333}
This lensing system was modeled in the F160W band with two S\'{e}rsic profiles and a central point source component for the primary lens light. Although we initially included the other bands because they show significant arc light, the source reconstruction shows no visible source (above the noise level) in these bands. We exclude these bands and optimize only the single-band model. For this lensing system, the PSF was stacked from 5 neighboring stars. There is a bright light clump in the South below image B which is embedded in a faint, extended structure. We assume this clump to be a satellite at the same redshift as the lens galaxy and include an SIS profile for the satellite mass and a S\'{e}rsic profile for the satellite light. The error map was boosted at the satellite position in the last modeling step because of its proximity to the arc. Mass and light align very well within $\sim$0.01\arcsec. The power-law slope of this system is close to the lower bound of the prior with $\tilde{\gamma}_{\rm PL}\sim0.31$. Again, this could be due to an imperfect PSF model.
\subsection{PS J1721+8842}
\label{sec:1721}
This is an unusual and interesting lens system with six quasar images. It is composed of two quasar sources at the same redshift, with one quasar lensed into a quad (images A, B, C, and D) and the other one lensed into a double (images E and F) \citep{Lemon2021}. Most of the arc light in this system comes from the double system, so we only include these regions for the arc light modeling and SSB reconstruction. We modeled this system in all three considered bands with two S\'{e}rsic profiles and a central point source component for the primary lens light. We picked three stars from the field to approximate the PSF in the F160W band, and one star in each of the F475X and F814W bands. The second source component was added manually since the current version of the automation code does not support an automated detection of multiple source components. It is represented as a green star in the source reconstruction plot in Table~\ref{tab:results_light}. Quasar image F is offset relative to the arc's peak surface brightness, possibly due to the quasar being kicked out of its host galaxy (\citealt{Mangat2021}). We use F475X as the reference band because image F is not clearly distinct from the arc in the F160W band, where the arc is the brightest. The offset introduces a challenge to the quasar and arc light modeling: we see negative source intensity at the position of the second source component. The mass and light centroids are offset by $\sim$0.04\arcsec\ in the $x$-direction and align very well within $\sim$0.01\arcsec\ in the $y$-direction.
\subsection{DES J2100-4452}
The final model of this system includes only the F160W band. The F475X and F814W bands show small hints of arc light, but including these bands in the model led to no visible source reconstruction. The PSF was created by using five stars. There seems to be a dust lane crossing the lens galaxy from North to South, whose influence on the model needs to be investigated in future work. The dust lane is best seen in the bluest band (in our case the F475X band); given that the F160W band corresponds to a rest-frame wavelength of 1.3 microns, the effect on the model should be minimal.
The mass is offset relative to the light by $\sim$0.04\arcsec\ in $x$-direction and by $\sim$0.06\arcsec\ in $y$-direction.
\begin{landscape}
\begin{table}[]
\begin{tabularx}{\linewidth}{c|c|ccc|cc}\toprule \toprule
System & Filter & Observed & Model & \begin{tabular}{@{}c@{}}Normalized \\residuals\end{tabular} & \begin{tabular}{@{}c@{}}Reconstructed \\arc\end{tabular} & \begin{tabular}{@{}c@{}}Reconstructed \\source\end{tabular} \\ \toprule \toprule
DES J0029$-$3814 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0029_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0029_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0029_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0029_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/0029_source_filter1.eps}} \\ \midrule
DES J0214$-$2105 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0214_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0214_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0214_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0214_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/0214_source_filter1.eps}} \\ \midrule
DES J0420$-$4037 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0420_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0420_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0420_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0420_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/0420_source_filter1.eps}} \\ \bottomrule
\end{tabularx}
\ \\ \ \\
\caption{Lens modeling results of the nine systems in \textit{HST} filters where the arcs are visible, as indicated in the first two columns. Column (3): the observed \textit{HST} image. Column (4): reconstructed image from our most probable model with critical curves (blue, solid lines). Column (5): normalized image residuals, i.e., the difference between (3) and (4), normalized by the estimated $1\sigma$ uncertainties. Column (6): reconstructed intensity distribution of the lensed quasar host galaxy. Column (7): reconstructed quasar-host intensity distribution on a grid of pixels with caustic curves (red, dashed lines) and the mean, weighted source position of the quad (blue star) and double (green star) quasar systems. We show cropped images instead of the full cutout used for modeling for better visibility. The horizontal white line shows 1\arcsec\ for each system and the figures are oriented such that North is up and East is left. The white lines in the source reconstruction indicate lengths of 0.5\arcsec\ in the $x$- and $y$-directions.}
\label{tab:results_light}
\end{table}
\clearpage
\begin{table}[]
\begin{tabularx}{\linewidth}{c|c|ccc|cc}\toprule \toprule
System & Filter & Observed & Model & \begin{tabular}{@{}c@{}}Normalized \\residuals\end{tabular} & \begin{tabular}{@{}c@{}}Reconstructed \\arc\end{tabular} & \begin{tabular}{@{}c@{}}Reconstructed \\source\end{tabular}\\ \toprule \toprule
PS J0659$+$1629 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0659_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0659_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0659_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/0659_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/0659_source_filter1.eps}} \\ \midrule
2M1134$-$2103 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1134_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1134_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1134_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1134_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1134_source_filter1.eps}} \\
& F814W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1134_data_filter2.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1134_model_filter2.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1134_normresid_filter2.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1134_predarc_filter2.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1134_source_filter2.eps}} \\ \midrule
J1537$-$3010 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1537_source_filter1.eps}} \\
\end{tabularx}
\\ \ \\
\captionsetup{labelformat=empty}
\caption*{\textbf{Table \ref{tab:results_light} continued.}}
\ \\
\end{table}
\clearpage
\begin{table}[]
\begin{tabularx}{\linewidth}{c|c|ccc|cc}\toprule \toprule
System & Filter & Observed & Model & \begin{tabular}{@{}c@{}}Normalized \\residuals\end{tabular} & \begin{tabular}{@{}c@{}}Reconstructed \\arc\end{tabular} & \begin{tabular}{@{}c@{}}Reconstructed \\source\end{tabular}\\ \toprule \toprule
& F475X & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_data_filter2.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_model_filter2.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_normresid_filter2.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_predarc_filter2.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1537_source_filter2.eps}} \\
& F814W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_data_filter3.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_model_filter3.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_normresid_filter3.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1537_predarc_filter3.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1537_source_filter3.eps}} \\ \midrule
PS J1606$-$2333 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1606_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1606_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1606_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1606_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1606_source_filter1.eps}} \\ \midrule
PS J1721$+$8842 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_data_filter3.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_model_filter3.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_normresid_filter3.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_predarc_filter3.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1721_source_filter3.eps}} \\
\end{tabularx}
\\ \ \\
\captionsetup{labelformat=empty}
\caption*{\textbf{Table \ref{tab:results_light} continued.}}
\ \\
\end{table}
\clearpage
\begin{table}[]
\begin{tabularx}{\linewidth}{c|c|ccc|cc}\toprule \toprule
System & Filter & Observed & Model & \begin{tabular}{@{}c@{}}Normalized \\residuals\end{tabular} & \begin{tabular}{@{}c@{}}Reconstructed \\arc\end{tabular} & \begin{tabular}{@{}c@{}}Reconstructed \\source\end{tabular}\\ \toprule \toprule
& F475X & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1721_source_filter1.eps}} \\
& F814W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_data_filter2.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_model_filter2.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_normresid_filter2.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/1721_predarc_filter2.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/1721_source_filter2.eps}} \\ \midrule
DES J2100$-$4452 & F160W & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/2100_data_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/2100_model_filter1.eps}} & \raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/2100_normresid_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[height=0.2\textwidth]{Bilder/Plots/2100_predarc_filter1.eps}} &
\raisebox{-.5\height}{\includegraphics[width=0.2\textwidth]{Bilder/Plots/2100_source_filter1.eps}} \\ \bottomrule
\end{tabularx}
\\ \ \\
\captionsetup{labelformat=empty}
\caption*{\textbf{Table \ref{tab:results_light} continued.}}
\ \\
\end{table}
\end{landscape}
\begin{table}
\begin{tabular}{lccccc}
\toprule System & $\chi_{\rm red, 160}^2$ & $\chi_{\rm red, 475}^2 $& $\chi_{\rm red, 814}^2 $ & $\chi_{\rm red, tot}^2 $\\ \toprule \vspace{2px}
J0029$-$3814 & 0.80 & $-$ & $-$ & 0.80 \vspace{2px}\\
J0214$-$2105 & 0.74 & $-$ & $-$ & 0.74 \vspace{2px}\\
J0420$-$4037 & 0.56 & $-$ & $-$ & 0.56 \vspace{2px}\\
J0659$+$1629 & 1.08 & $-$ & $-$ & 1.08 \vspace{2px}\\
2M1134$-$2103 & 2.76 & $-$ & 1.62 & 1.84 \vspace{2px}\\
J1537$-$3010 & 1.62 & 1.08 & 1.18 & 1.18 \vspace{2px}\\
J1606$-$2333 & 1.36 & $-$ & $-$ & 1.36 \vspace{2px}\\
J1721$+$8842 & 0.51 & 1.16 & 0.96 & 0.94 \vspace{2px}\\
J2100$-$4452 & 0.94 & $-$ & $-$ & 0.94 \vspace{2px}\\ \bottomrule
\end{tabular}
\caption{$\chi_{\rm red}^2$ for each individual band with total $\chi_{\rm red, tot}^2$.}
\label{tab:chi2}
\end{table}
\begin{figure*}[h]
\centering
\subfigure{\includegraphics[width=0.7\textwidth]{Bilder/centroid_alignment.eps}}
\subfigure{\includegraphics[width=0.4\textwidth]{Bilder/pa_alignment.eps}}
\subfigure{\includegraphics[width=0.4\textwidth]{Bilder/q_alignment.eps}}
\caption{\textit{Top:} Difference between the median mass and light centroid of our automated GLEE models, with $\Delta x = x_{\rm S} - x_{\rm m}$ and $\Delta y = y_{\rm S} - y_{\rm m}$. \textit{Bottom left:} Comparison between the median mass and light position angle. \textit{Bottom right:} Comparison between the median mass and light axis ratio. The dashed lines show lines of no centroid offset (top), and lines of equal position angle/axis ratio (bottom left)/(bottom right).}
\label{fig:alignment}
\end{figure*}
\section{Discussion}
\label{sec:discuss}
\subsection{Comparison of automated modeling to manual modeling with \texttt{GLEE}}
In this section we compare the automated modeling with our automation code to the manual modeling of a lensing system with \texttt{GLEE}, based on the total time the user actively spends on modeling a lensing system.
We report a significant reduction of preparation time for the input files. The creation of the image cutout is sped up from $\sim$15 minutes to $\sim$5 minutes in the automated modeling. The time for creating the error map stays roughly the same, because this was already automated prior to this work. The mask regions are in both cases obtained by eye and marked manually, but the creation of the masks from the region files is now fully automated. This automation halves the user input time from roughly 1 hour in the manual case to $\sim$30 minutes when using the automation code. The time for creating the PSF is significantly reduced from $\sim$2 hours to $\sim$15 minutes in the case of the automated modeling. When using the PSF code, the only step the user has to conduct in order to obtain the PSF is to find suitable stars in the field and save their positions. The time-consuming technical steps, such as processing the region file, stacking the star images, subsampling if requested, and normalizing to create the PSF, are fully automated.
For the actual lens modeling process, we can also report a significant time reduction. The initial setup of the \texttt{GLEE} configuration file, which also includes estimates for the starting values of the parameters, takes of the order of an hour when modeling manually. In the automated case, the user only has to provide a region file, which can be created within 10 minutes. The setup of the configuration file and the estimation of the starting values are fully automated, so it is not necessary for the user to learn how to use \texttt{GLEE} (e.g., how to set up the \texttt{GLEE} configuration file). Quality control and modifications to the configuration file between different modeling steps are also fully automated, again yielding a time saving of the order of hours over the full modeling process. Also, there is no waiting time between the multiple MCMC chains, and the code obtains the optimal sampling parameters automatically. Together with the short interactions between the user and the code and the inspection of the output, we estimate an average of 1 hour of total user time per lensing system for the automated modeling process. Table \ref{tab:usertime} gives a comparison between the user input time in the automated and in the manual modeling case.
\begin{table*}
\begin{center}
\begin{tabular}{lcc}\toprule
Task & User time for manual modeling & User time for automated modeling \\ \toprule
Creation of image cutout & $\sim$15 minutes & $\sim$5 minutes \\
Creation of error map & $\sim$10 minutes & $\sim$10 minutes \\
Creation of masks & $\sim$1 hour & $\sim$30 minutes \\
Creation of PSF & $\sim$2 hours & $\sim$15 minutes \\
Modeling process & $\sim$10 hours & $\sim$1 hour \\ \bottomrule
\end{tabular}
\ \\ \ \\
\caption{Comparison between user input time for each task in the manual and in the automated modeling of a lensing system. This does not include computation time, which is summarized in Table \ref{tab:computationtime}.}
\label{tab:usertime}
\end{center}
\end{table*}
The computational time for the arc light modeling and SSB distribution reconstruction depends mainly on the arc mask and PSF size. The optimizing and sampling cycles with simulated annealing and MCMC chains use one core at a time. We used \texttt{EMCEE}, which is highly parallelizable, to obtain an initial sampling covariance matrix at the beginning of the last modeling steps (5(a) and 5(b) in single-band, and 4(a) and 4(b) in multi-band modeling) with a chain length of 400,000 on a cluster with Intel Xeon E5-2640 v4 CPUs using 30 cores. The average computation time on the cluster for each \texttt{EMCEE} chain is $\sim$5 hours. Because of the typical overhead in parallel computing, we estimate a single-core computation time of $\sim$100 hours for each \texttt{EMCEE} chain. \texttt{EMCEE} sampling is used twice in the automated modeling procedure, and thus we have a total \texttt{EMCEE} computation time of $\sim$200 hours per core. With this setup and these modeling specifications we estimate the average computation time per core per lens system for each modeling step described in Sec.~\ref{sec:modeling} and summarize them in Table \ref{tab:computationtime}. The computationally expensive step of arc light modeling and SSB distribution reconstruction takes 15 to 20 days with a single core. Because of the parallelization of \texttt{EMCEE}, the effective computation time ranges between 7 and 14 days. When only approximate (and not full) convergence of a chain is sufficient, models can already be expected after 5 to 7 days. These models do not differ much from the final, fully converged model and can be used for tentative analysis. We expect the computation time to be roughly the same in both the manual and the automated modeling.
\begin{table*}[h]
\begin{center}
\begin{tabular}{lcc}\toprule
Modeling step & Section & Average computation time \\ \toprule
Lens light modeling & \ref{sec:lenslight} &$\sim$1 day \\
Quasar light modeling & \ref{sec:quasarlight} &$\sim$1 day \\
Lens mass modeling with source/image positions & \ref{sec:srcimgpos} & $\sim$1 minute \\
Arc light modeling and SSB distribution reconstruction & \ref{sec:arclight} &$\sim15-20$ days \\\bottomrule
\end{tabular}
\end{center}
\caption{Estimates of the average computation time per core per lensing system for each step conducted by the automated modeling code. Note that the last and most time-expensive step is partly parallelized and is thus effectively notably shorter ($\sim 7-14$ days).}
\label{tab:computationtime}
\end{table*}
\subsection{Comparison of lens modeling results between \texttt{GLEE} and \texttt{Lenstronomy}}
\label{sec:comparison}
\subsubsection{Blind comparison}
All nine systems of our sample are part of a more extensive sample of 30 quads that were modeled independently by S22 with a similar, automated pipeline based on the lens modeling software \texttt{Lenstronomy}. Both modeling frameworks are dedicated to high-resolution images of lensed quasars and thus use similar modeling assumptions for the lenses: the lens mass distribution is modeled with a power-law profile with external shear, and the lens light distribution is modeled with S\'{e}rsic profiles.
There are important differences between the two procedures that need to be taken into account. First, the two softwares have different implementations for the SSB reconstruction: \texttt{GLEE} uses a pixelated grid of source intensity distribution, whereas \texttt{Lenstronomy} uses circular S\'{e}rsic\ profiles with optional additional shapelet components.
Second, S22 always model the three available bands of HST images, while in this work we focus only on those where the arcs are clearly visible by eye, typically F160W. Even though those are the most informative bands in our nine-system sample, they do not capture the full information, especially since the resolution is better in the UV/visible (UVIS) data. Third, different priors have been chosen for some of the key parameters. S22 adopted informative priors from the SLACS sample \citep[][]{Bolton2006,Bolton2008,Auger2010} on the radial slope of the mass density profile and the ellipticity of the mass distribution in order to avoid unphysical solutions when the parameters are poorly constrained by the data. In this study we opted for uniform priors within bounds to constrain the parameters solely by the data at hand. In the best cases the priors should not matter because the likelihood should dominate the posterior, but in practice, as we will see, many of the systems are poorly constrained by the data and therefore the priors matter.
Fourth, S22 adopted an iterative scheme to improve the PSF estimate based on the multiple images of the quasar, sampled at the scale of the reduced data. In contrast, in this work we used a subsampled PSF estimated from images of stars in the field, without additional corrections. As shown by \citet{Shajib2022}, the PSF is a crucial ingredient for cosmography-grade inference.
Keeping the caveats in mind, we compare the lens mass parameters, external shear parameters, convergence and total shear at the quasar image positions, image magnifications, Fermat potential differences, and predicted time delays. The adopted cosmology is the same as in S22. We plot in Fig.~\ref{fig:comp1} the results of the two modeling codes together with a line showing the identity. In addition, we present the difference of the median values of both modeling codes against the \texttt{GLEE} parameter values to better illustrate the absolute differences between the final results.
\begin{figure*}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_theta_e_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_q_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_pa_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_gam_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_shear_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_shear_pa_new2.eps}}
\caption{Direct comparison of lens mass parameter and external shear values between our models and \texttt{Lenstronomy} models from Schmidt et al. (submitted).}
\label{fig:comp1}
\end{figure*}
\begin{figure*}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_comp_thetae_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_comp_q_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_comp_pa_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_comp_gam_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_comp_shear_new2.eps}}
\subfigure{\includegraphics[width=0.33\textwidth]{Bilder/Comparison/NEW_comp_shear_pa_new2.eps}}
\caption{Change in lens mass parameter and external shear values when using the \texttt{Lenstronomy} PSF with our \texttt{GLEE} automation code for J1606$-$2333 and J2100$-$4452.}
\label{fig:comp2}
\end{figure*}
We find excellent agreement in the Einstein radius of the primary lens galaxy, where the median values agree within 4\% for 8/9 systems. This is expected and reassuring, as strong lensing provides robust masses enclosed within the Einstein radius. An apparent outlier is J0659$+$1629, with a difference of $\sim$0.2\arcsec, which is due to the presence of a satellite galaxy whose mass is degenerate with that of the primary lens. For this purpose, we calculated an ``effective'' Einstein radius that includes the satellite mass and compare it to the one obtained by S22. Our value of $\theta_{\rm E,eff,GLEE}=2.368_{-0.02}^{+0.02}\arcsec$ matches very well $\theta_{\rm E,eff,Lenstronomy}=2.368 _{-0}^{+0.01}\arcsec$ for J0659$+$1629 (the distribution for $\theta_{\rm E,eff,Lenstronomy}$ is heavily skewed such that the median coincides with the 16$^{\rm th}$ percentile).
We thus conclude that for all systems we recover the same Einstein radius to within a root-mean-square (r.m.s.) scatter of 1.6\%, once the proper accounting for satellites is considered.
Flattening, position angle of the mass distribution, and external shear magnitude and direction are strongly degenerate with each other (and to some extent with the slope) and should therefore be looked at holistically.
For 6/9 systems the position angles of the lens mass distribution match within the errors (1 or 2 $\sigma$ at most). The outliers are: J0659$+$1629, for which \texttt{GLEE} converges to a mass distribution that is highly flattened, more than the light distribution, and therefore likely not fully correct, while S22 -- driven by the prior -- converges to a much rounder distribution; and J0214$-$2105 and J0420$-$4037, for which the discrepancy is due to a combination of differences in flattening, PA, and external shear. We note that for these systems the slope of the mass density profile is poorly constrained without a prior, suggesting that the information content of the data is not sufficient.
The mass flattening parameter agrees within $\sim0.1$ (larger than the formal uncertainty, which is therefore underestimated) except for two cases: i) J0659$+$1629, discussed above; and ii) J0029$-$3814, for which S22 find more flattening than \texttt{GLEE}, perhaps driven by the prior on the axis ratio of the light distribution.
The inferred external shear magnitudes agree within 0.05 (again larger than the formal uncertainty); the two most discrepant systems are J1537$-$3010 and J2100$-$4452 which are cases where the slope of the mass density profile is relatively poorly constrained without prior and therefore are likely to suffer from poor information content of the data. Reassuringly, for the two systems with the highest shear magnitude we find a good match of the shear position angle.
As mentioned above, a major source of difference is the prior on the power-law slope $\tilde{\gamma}_{\rm PL}$, which reverberates through the other parameters. Not surprisingly, the inferred power-law slope $\tilde{\gamma}_{\rm PL}$ of the \texttt{Lenstronomy} models obtained by S22 follows their imposed Gaussian prior, which is centered on a close-to-isothermal value of $\tilde{\gamma}_{\rm PL}=0.54$. In contrast, the \texttt{GLEE} models have slope values spanning the uniform prior range of [0.3,0.7], with values closer to the bounds than to an isothermal value. As discussed above, the difference arises from a combination of two factors: differences in the PSF and insufficient information in the data to constrain the slope.
Not surprisingly, given the differences highlighted above, the inferred quantities $\kappa$, $\gamma_{\rm tot}$, and $\mu$ at the image positions show significant deviations (see Fig.~\ref{fig:comp_appendix1}). We calculate the average relative scatter of these quantities for each lensing system and find that it ranges from $\sim$3\% to \textgreater 100\%, and that the statistical uncertainties are underestimated in nearly all cases.
The relative Fermat potential differences and predicted time delays (also shown in Fig.~\ref{fig:comp_appendix1}) follow overall the 1:1 line, although they often do not agree within the statistical uncertainties. For these two quantities we removed the outlier image C of J0659$+$1629 for better visibility in the plot, because its time delay is one order of magnitude larger than those of all other images. We show in Table \ref{tab:scatter} the relative deviation in Fermat potential differences $\delta(\Delta\tau) / |\Delta \tau_\text{GLEE}|$ with $\delta(\Delta\tau) = \Delta \tau_\text{GLEE} - \Delta \tau_\text{Lenstronomy}$. We estimated the statistical uncertainties on these relative deviations by symmetrizing the uncertainties on the GLEE and Lenstronomy parameter values (e.g., on $\Delta \tau_\text{GLEE}$ and $\Delta \tau_\text{Lenstronomy}$) and assuming that they follow Gaussian distributions.
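A minimal sketch of how such relative deviations and their propagated uncertainties can be computed, assuming symmetrized Gaussian $1\sigma$ errors on both sets of Fermat potential differences (the variable names are illustrative), is:
\begin{verbatim}
import numpy as np

def symmetrize(err_lo, err_hi):
    """Average the asymmetric (16th/84th percentile) uncertainties."""
    return 0.5 * (np.asarray(err_lo) + np.asarray(err_hi))

def relative_deviation(dtau_glee, err_glee, dtau_lenstr, err_lenstr):
    """(dtau_GLEE - dtau_Lenstronomy) / |dtau_GLEE| and a first-order
    Gaussian uncertainty; the (smaller) contribution of the denominator
    uncertainty is neglected in this sketch."""
    rel = (dtau_glee - dtau_lenstr) / np.abs(dtau_glee)
    err_rel = np.hypot(err_glee, err_lenstr) / np.abs(dtau_glee)
    return rel, err_rel
\end{verbatim}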
\begin{table}
\begin{tabular}{llp{1.8cm}p{2.2cm}}
\toprule System & Image pair & $\delta(\Delta\tau)/|\Delta\tau_{\rm GLEE}|$ & $\delta(\Delta\tau_{\rm iso})/|\Delta\tau_{\rm GLEE,iso}|$ \\ \toprule \ \\
DES J0029$-$3814 & DA & $0.69$\textpm0.16 & $0.20$\textpm0.20 \\
& DB & $0.74$\textpm0.18 & $0.23$\textpm0.21 \\
& DC & $0.70$\textpm0.18 & $0.20$\textpm0.21 \vspace{5px}\\
DES J0214$-$2105 & BA & $0.38$\textpm0.21 & $0.31$\textpm0.27 \\
& CA & $0.34$\textpm0.20 & $0.27$\textpm0.26 \\
& DA & $0.36$\textpm0.23 & $0.29$\textpm0.28 \vspace{5px}\\
DES J0420$-$4037 & CA & $0.22$\textpm0.12 & $0.56$\textpm0.16 \\
& CB & $0.34$\textpm0.12 & $0.71$\textpm0.17 \\
& DC & $-0.33$\textpm0.12 & $-0.69$\textpm0.17 \vspace{5px}\\
PS J0659$+$1629 & CA & $0.06$\textpm0.06 & $-0.46$\textpm0.10 \\
& CB & $0.42$\textpm0.03 & $0.1$\textpm0.05 \\
& DC & $0.45$\textpm0.17 & $-1.24$\textpm0.26
\vspace{5px}\\
2M1134$-$2103 & BA & $-0.19$\textpm0.02 & $-0.07$\textpm0.03 \\
& CB & $-0.17$\textpm0.02 & $-0.04$\textpm0.03 \\
& DB & $0.12$\textpm0.02 & $-0.02$\textpm0.02 \vspace{5px}\\
J1537$-$3010 & BA & $-0.58$\textpm0.03 & $0.07$\textpm0.02\\
& CA & $1.02$\textpm0.04 & $0.19$\textpm0.03 \\
& DA & $-0.54$\textpm0.03 & $0.09$\textpm0.02 \vspace{5px}\\
PS J1606$-$2333 & BA & $-0.45$\textpm0.08 & $0.03$\textpm0.08 \\
& CA & $-0.46$\textpm0.05 & $0.03$\textpm0.05 \\
& DA & $-0.46$\textpm0.05 & $0.03$\textpm0.06 \vspace{5px}\\
PS J1721$+$8842 & DA & $-0.09$\textpm0.02 & $0.07$\textpm0.03 \\
& DB & $-0.1$\textpm0.02 & $0.06$\textpm0.03 \\
& DC & $-0.09$\textpm0.02 & $0.07$\textpm0.03 \vspace{5px}\\
DES J2100$-$4452 & DA & $-0.62$\textpm0.11 & $0.05$\textpm0.16 \\
& DB & $-0.66$\textpm0.12 & $0.03$\textpm0.17 \\
& DC & $-0.54$\textpm0.14 & $0.1$\textpm0.18 \vspace{5px}\\ \bottomrule
\end{tabular}
\caption{Relative deviation of the Fermat potential differences $\delta(\Delta\tau) / |\Delta \tau_\text{GLEE}|$ for each lensing system. The fourth column shows the same comparison but with the rescaled values obtained when assuming an isothermal mass profile. The relative statistical uncertainty is underestimated in the majority of cases.}
\label{tab:scatter}
\end{table}
This comparison shows that for only one system (J1721$+$8842) the relative Fermat potential differences and predicted time delays agree within 10\%. For 7/9 systems we see, on average, deviations of 30\% or more in the relative Fermat potential differences of the multiple images. Therefore, for most systems more detailed modeling and/or better data are needed to bring these models to a cosmography-grade level. In some cases, the UVIS exposures may provide further constraints on the mass and light profile parameters, even though the source contribution is fainter than in the IR, especially since the resolution is twice as high in the UVIS.
\subsubsection{Post-blind analysis}
\label{subsec:postblind}
Having completed the blind comparison between the results we turn to efforts that have been made after unblinding to understand the origin of the discrepancies.
To test the influence of the power-law slope $\tilde{\gamma}_{\rm PL}$ on the physical quantities $\kappa$ and $\Delta\tau$, we rescale the \texttt{GLEE} and \texttt{Lenstronomy} values of $\Delta\tau$ with $\Delta\tau_{\rm iso} \simeq \Delta\tau/(2\tilde{\gamma}_{\rm PL})$ \citep[e.g.,][]{Suyu2012b} and calculate new $\kappa_{\rm iso}$ values with Eq.~(\ref{eq:kappa_spemd}) and $\tilde{\gamma}_{\rm PL}=0.5$, thus assuming isothermal mass profiles. In this case, we obtain overall a much better agreement between the two modeling results, as shown in Fig.~\ref{fig:comp_appendix1}.
With these assumptions, all $\kappa$ values now agree within 20\%, with 5/9 systems having a $\leq$10\% scatter. The trend is similar for the relative Fermat potential differences at the multiple image positions, where 6/9 systems agree within $\sim$20\% (see the last column $\delta(\Delta\tau_{\rm iso})/|\Delta\tau_{\rm GLEE,iso}|$ in Table \ref{tab:scatter}). Two systems, J0420$-$4037 and J0659$+$1629, now have a higher relative deviation among the multiple images, but their absolute values of the Fermat potential differences are small. Therefore, most of the discrepancy in $\kappa$ and $\Delta\tau$ between the two independent models is due to the different power-law slope values. It is not a surprise that the two modeling pipelines yield different slope values, as discussed above, due to differences in priors and PSFs.
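The rescaling applied above amounts to a one-line operation on the blind Fermat potential differences; a sketch (with $\tilde{\gamma}_{\rm PL}=0.5$ corresponding to isothermal in this parametrization) is:
\begin{verbatim}
def rescale_to_isothermal(dtau, gamma_pl):
    """Approximate isothermal-equivalent Fermat potential differences,
    dtau_iso ~ dtau / (2 * gamma_PL)."""
    return dtau / (2.0 * gamma_pl)
\end{verbatim}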
To disentangle the two effects (priors and PSF), we use the final PSF of the \texttt{Lenstronomy} modeling by S22 in our \texttt{GLEE} automated modeling procedure. The creation of the PSF of S22 differs from that in our pipeline because it includes iterative PSF corrections, while our PSF is simply constructed from stars in the field (see Sec.~\ref{sec:preparation}). Also, the \texttt{GLEE} PSF is subsampled by a factor of 3 compared to the pixel scale of the data, while the \texttt{Lenstronomy} PSF uses the original data's pixel size.
To ensure that the information content is sufficient (and thus the prior is less important), the two teams focused for this comparison on two systems with high SNR data and visible arcs: J1606$-$2333 and J2100$-$4452. We remodel both systems in the F160W (IR) band with our \texttt{GLEE}-based pipeline, but using the non-subsampled \texttt{Lenstronomy} PSF, and compare the results with those based on the \texttt{GLEE} PSF. We show the comparison in Fig.~\ref{fig:comp2} and Fig.~\ref{fig:comp_appendix2}. In both figures, we present the \texttt{GLEE} blind results as presented in this work and the new post-blind results using the \texttt{Lenstronomy} PSF. In most cases, the values obtained with the \texttt{Lenstronomy} PSF are closer to the modeling results of S22. The Einstein radius of J1606$-$2333 is lower with the new PSF because it is degenerate with the Einstein radius of the satellite, which is larger now. The effective Einstein radius of this system remains unchanged. The influence on the power-law slope is evident, as the slope values move much closer to the values of S22, and closer to isothermal, without imposing a prior. This also leads to a better agreement of other quantities, such as $\kappa$ or the Fermat potential differences, with the \texttt{Lenstronomy} modeling results, as shown in Fig.~\ref{fig:comp_appendix1}. We conclude from this that for data with a sufficiently high signal-to-noise ratio, the PSF is a crucial ingredient to accurately infer the power-law slope, and thus the Fermat potential. Crucially, the PSF can be reconstructed directly from the data, thus improving the accuracy of the cosmographic analysis. This result reinforces the necessity of PSF reconstruction, which has become best practice in recent years for cosmographic analysis using \texttt{GLEE} and \texttt{Lenstronomy} by members of our team and collaborators \citep[e.g.,][]{Wong2019,Birrer2018,Shajib2019,Chen2019}.
The counterpoint is that data of insufficient quality should not be used for cosmographic analysis, unless one has an accurate and informative prior.
We conducted the same test with a subsampled version of the \texttt{Lenstronomy} PSF, subsampled in the same way as the \texttt{GLEE} PSF. The modeling results do not agree as well with the \texttt{Lenstronomy} values and are closer to the \texttt{GLEE} values. This comparison shows that although subsampling in the IR band is important (\citealt{Suyu.2012}, \citealt{Shajib2022}), the modeling results depend on the way the PSF is subsampled: it should be done during the PSF reconstruction process, not as a simple interpolation after the PSF corrections.
To conclude, we remind the reader that the goal of this work was not to produce cosmography-ready models but to automate the modeling procedure for a wide range of lens morphologies that can subsequently enable the construction of cosmography-grade models; thus differences in the results of the two modeling teams are expected. Our analysis shows that the origin of the differences can generally be understood and that the two most significant factors are data quality and accuracy of PSF modeling.
\subsection{Comparison of astrometry}
To assess the precision of our astrometry, we compare the relative positions of the multiple quasar images from our model with the modeled positions from S22 and \citet{Luhtaru2021}, and with the measured positions from the Gaia satellite data release 3 (\citealt{Brown2021}). As in our pipeline, S22 obtained positions by modeling the surface brightness distribution, while \citet{Luhtaru2021} used geometric properties in the image plane. The comparison with S22 and \citet{Luhtaru2021} is done for all nine systems in our sample, the comparison with Gaia for 7/9 systems; for the remaining two systems, no Gaia data are available.
In all comparisons, J1721$+$8842 has the biggest offsets for all four images of the quad, which is not surprising given its complexity (see Sec.~\ref{sec:1721}). Excluding this system, we obtain the following results with respect to S22: an r.m.s.~scatter of $\sim$6 milli-arcsec (mas) in the $x$-direction (RA) and $\sim$5 mas in the $y$-direction (Dec).
The comparison with \citet{Luhtaru2021} reveals an r.m.s.~scatter of $\sim$ 1.7 mas in $x$- and $y$-directions. The r.m.s.~scatter for the Gaia comparison is 1.7 mas in $x$- and 2.3 mas in $y$-direction.
The top panel of Fig.~\ref{fig:astrometry} shows the difference in quasar image positions for the comparison with S22, \citet{Luhtaru2021}, and Gaia for all systems that are in all three samples (7 systems). The images of J1721$+$8842 are marked in orange as the aforementioned outlier. The bottom panel shows the difference in quasar image positions for all systems that are common to \citet{Luhtaru2021} and Gaia. To give an idea of how small the differences are, we include a box centered on zero offset with dimensions $\pm 5$ mas. The vast majority of points lie within this box. Despite not having performed PSF reconstruction in our automated modeling pipeline, we have recovered the astrometry of the quasar images within $\sim$2 mas of the Gaia measurements (which are the most precise and accurate, since Gaia was designed to measure astrometry).
\citet{Birrer2019} provide an estimate of how the astrometric uncertainties translate into the uncertainty on the $H_0$ inference. The $\sim$2 mas r.m.s.~offset to Gaia in our analysis translates to a $\sim$0.2 mas offset in the source plane, which leads to uncertainties on $H_0$ well below the 5\% that \citet{Birrer2019} adopt as an estimate of the total uncertainties (i.e., from modeling the lens mass potential and the contribution from the mass along the line of sight). Thus the astrometric uncertainties in this work are not a significant contribution to the cosmographic error budget, demonstrating that our automated pipeline can meet the astrometric requirements.
\begin{figure}[h]
\centering
\subfigure{\includegraphics[scale=0.52]{Bilder/comp1_NEW.eps}}
\subfigure{\includegraphics[scale=0.7]{Bilder/comp2_NEW.eps}}
\caption{Relative difference in quasar image positions between our results and Schmidt et al. (submitted, squares), \citet[][circles]{Luhtaru2021} or Gaia (triangles) for systems that overlap in the multiple samples. The image positions of the outlier J1721$+$8842 are marked in orange (\textit{top}) or are excluded (\textit{bottom}).}
\label{fig:astrometry}
\end{figure}
\subsection{Possible improvements of the automated lens modeling pipeline}
To further improve the automated, uniform modeling in combination with detailed cosmographic analyses of lenses, we plan to implement several additions to the code.
For the uniform modeling approach, we reconstruct the PSF from several stars in the field. This approach introduces inaccuracies that are negligible for uniform modeling but need to be minimized for detailed analysis, as discussed in Sec.~\ref{sec:comparison}. We will automate a method to iteratively update the initial PSF using the multiple lensed quasar images. This PSF correction might also resolve the tendency of the power-law slope parameter $\tilde{\gamma}_{\rm PL}$ to move to the boundaries of the prior.
To make the modeling procedure faster and more convenient, we can automate the detection of extended structures in the cutout, e.g., the positions of the multiple quasar images and of objects that need to be masked. Other automated modeling codes have already shown that sufficiently accurate detection is often possible in the scope of uniform modeling (e.g., \citealt{Savary2021}, \citealt{Rojas2021}). These papers also showed that objects are sometimes misidentified, in which case the automated modeling has to be stopped. Our current approach of manually identifying objects is thus a compromise that ensures very high accuracy. This works best for a medium-size sample (e.g., a few dozen lenses) but is not practical for samples of the order of a hundred lenses or more.
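One possible route toward automating this step is sketched below, under the assumption that a background-subtracted cutout is available as a 2D \texttt{numpy} array; it uses simple sigma thresholding and connected-component labeling from \texttt{scipy}, whereas a production implementation would likely rely on a dedicated detection package and additional vetting. All names and thresholds are illustrative.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def detect_objects(cutout, nsigma=3.0, min_pixels=10):
    """Return (y, x) centroids of connected regions brighter than nsigma
    times a robust (MAD-based) estimate of the background noise."""
    sigma = 1.4826 * np.median(np.abs(cutout - np.median(cutout)))
    mask = cutout > nsigma * sigma
    labels, nlab = ndimage.label(mask)                      # connected components
    sizes = ndimage.sum(mask, labels, range(1, nlab + 1))   # pixels per component
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return ndimage.center_of_mass(cutout, labels, keep)     # list of (y, x)
\end{verbatim}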
The initial background subtraction of the data can be made more accurate by allowing for the selection of multiple sky regions. This is especially important for systems that show a gradient in the background flux. Another improvement would be the use of a $p$-value threshold for the lens light modeling (Sec.~\ref{sec:lenslight} and \ref{sec:multiband}). This would decrease the probability that the $\chi^2_{\rm red}$ criterion (Eq.~\ref{lenschi2red}) is met only because of specific noise realizations.
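Such a $p$-value criterion could be implemented along the following lines; this is only a sketch of the idea, with the number of unmasked pixels, the number of free parameters, and the threshold chosen arbitrarily, and it is not the criterion of Eq.~\ref{lenschi2red} itself.
\begin{verbatim}
from scipy.stats import chi2

def chi2_acceptable(chi2_value, n_pixels, n_params, p_threshold=0.05):
    """Accept a lens-light model only if its total chi^2 is consistent with
    the expected noise at the given p-value, instead of a hard chi^2_red cut."""
    dof = n_pixels - n_params              # degrees of freedom
    p_value = chi2.sf(chi2_value, dof)     # P(X >= chi2_value) for X ~ chi^2(dof)
    return p_value >= p_threshold, p_value

# example: chi^2_red = 1.02 over 10000 unmasked pixels with 30 free parameters
print(chi2_acceptable(1.02 * (10000 - 30), 10000, 30))
\end{verbatim}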
Other modeling teams have shown that optimization methods based on gradient descent, which can be heavily parallelized, outperform conventional methods in terms of computation time, e.g., \texttt{GIGA-Lens} (\citealt{Gu2022}). These are directions worth exploring for future developments of automated lens modeling.
\section{Conclusions}
\label{sec:conclude}
We presented a new automated modeling code for the uniform modeling of strongly lensed quasars in galaxy-scale systems with high-resolution data, based on the well-tested software \texttt{GLEE}. We additionally developed several codes that create the necessary input files (i.e., PSF, error map, and masks) used for the modeling. The modeling of the lens mass distribution of the system is done in two steps. In the first step, the modeling code uses the predicted source position and the observed image positions to constrain the lens mass distribution parameters. In the second step, the light distributions of the lens, the multiple lensed quasar images, and their host galaxy are modeled. The latter is used to reconstruct the source surface brightness (SSB) distribution. With the SSB distribution, the code models the light of the arc to further constrain the lens mass parameters. The modeling code and the creation of input files are nearly fully automated, requiring only minimal user input. Quality control and technical steps in the modeling process are fully automated.
We tested the modeling code on a sample of nine strongly lensed quasars and obtained a good light fit and a good alignment of the mass and light distributions. The current automated pipeline is versatile enough to model a variety of lens systems in a uniform manner to obtain lens mass models.
The models provide robust estimates of Einstein radii and astrometry of the quasar images. The accuracy of parameters such as the power-law slope depends crucially on the data quality and on the details of the modeling. This is highlighted by a blind comparison of our models with those of an automated modeling framework from S22, based on the lens modeling code \texttt{Lenstronomy}. The two approaches are similar in the choice of parametrization and philosophy but have a few crucial differences: the description of the lensed galaxy light (pixellated vs S\'{e}rsic+shapelets), the number of modeled bands (user choice vs all available bands), priors (uniform vs informative), and the PSF (initial guess based on stars vs corrections based on the QSO images for S22). Despite the differences, Einstein radii and mass flattening are in good agreement between the two studies, with a few exceptions that can be traced to inadequacy of the data or the PSF modeling.
Other quantities like convergence and image magnifications show differences that are generally larger than the estimated formal statistical uncertainties, but still encouraging in amplitude, considering the automated approach.
The differences arise primarily from two effects, as shown by our post-blind analysis. First, when the data are of insufficient quality to constrain the slope of the power-law mass density profile, the S22 results are driven by the prior, while our current results tend to be more uncertain and constrained by the bounds of the uniform prior. We show that, if the same power-law slope is imposed, the agreement between the codes improves significantly.
Second, the reconstruction of the PSF is a key factor in determining the slope and other parameters for data of sufficient quality. We illustrate this point by modeling two high quality systems with the same PSF as S22, and finding much better agreement.
We conclude that, for systems with high SNR, the PSF can be reconstructed directly from the data and the power-law slope can be stably inferred, confirming the results obtained via detailed modeling by \citet{Wong2019,Birrer2019b,Shajib2019,Chen2019,Shajib2022}.
In terms of our automated modeling pipeline, we conclude that the models are a good starting point, but that more work (particularly PSF corrections) and, in some cases, better data are needed to bring them to cosmography grade.
Nonetheless, these automated modeling results provide important information about the lens systems, such as the approximate time delays that help schedule the monitoring of the systems. The lens systems modeled with our automated pipeline are being followed up observationally to acquire, e.g., the lens redshift and velocity dispersion, time delays, and environment properties. We can use the modeling results presented here, as well as models of lens systems obtained with the automation code in the future, as input models for more detailed modeling, requiring only a fraction of the user's time compared to conventional modeling.
\begin{acknowledgements}
We thank P. Schechter for data on the astrometric comparison and for helpful discussions.
This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs HST-GO-15320 and HST-GO-15652. Support for the two programs was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
SE, SS, and SHS thank the Max Planck Society for support through the Max Planck Research Group for SHS.
This project has received funding from the European Research Council (ERC)
under the European Union's Horizon 2020 research and innovation
programme (LENSNOVA: grant agreement No 771776; COSMICLENS: grant
agreement No 787886).
This research is supported in part by the Excellence Cluster ORIGINS which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2094 -- 390783311.
TS and TT acknowledge support by the National Science Foundation through grants NSF-AST-1906976 and NSF-AST-1907396, ``Collaborative Research: Toward a 1\% measurement of the Hubble Constant with gravitational time delays''. TT acknowledges support by the Packard Foundation through a Packard Research Fellowship.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{intro}
This article aims to elucidate the key differences between Ozsv\'ath and Szab\'o's cube of resolutions chain complex for knot Floer homology~\cite{ozsszcube} and the cube of resolutions chain complex underlying Khovanov and Rozansky's HOMFLY-PT homology~\cite{kr2,rasmussenonkr}. Comparing the constructions is especially interesting in light of the conjecture~\cite{dgr,rasmussenonkr} that there should be a spectral sequence from HOMFLY-PT homology to knot Floer homology. In both constructions, a knot in $S^3$ is studied by considering the collection of graphs $G_I$ for $I\in\left\{0,1\right\}^n$ obtained by replacing each crossing in an $n$-crossing braid diagram with its oriented resolution or with a thick edge, as in Figure~\ref{resolutions}. The graphs are planar and trivalent, with one thick and two thin edges incident to each vertex. They are equivalent to singular knots by exchanging thick edges for 4-valent vertices as in Figure~\ref{4valwideedgeexchange}.
\begin{figure}[t]
\begin{center}
$$\input{resolutions.tex}$$
\caption{A collection of graphs is obtained from a braid diagram for a knot by replacing each crossing with either its oriented resolution (left) or with a thick edge (right).}
\label{resolutions}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
$$\input{4valwideedgeexchange.tex}$$
\caption{Graphs, as defined in Section~\ref{framedgraphs}, correspond to singularized links via the exchange above.}
\label{4valwideedgeexchange}
\end{center}
\end{figure}
In the cube of resolutions for knot Floer homology, one associates a graded algebra $\cB_{\textrm{HFK}}(G_I)$ to each graph and assembles these into a bigraded chain complex whose homology is the knot Floer homology of the original knot. In HOMFLY-PT homology, one associates a bigraded chain complex to each $G_I$, then assembles these into a triply-graded chain complex. The triply-graded complex has one differential coming from the complexes associated to the $G_I$, but it is also given a new differential. Taking homology with respect to each of these in turn produces the HOMFLY-PT homology of the knot. Let $\cB_{\textrm{KR}}(G_I)$ denote the homology of the chain complex associated to $G_I$.
The process of assembling a final chain complex from the $\cB_{\textrm{HFK}}(G_I)$ or the $\cB_{\textrm{KR}}(G_I)$ is quite similar; it is a standard cube of resolutions construction. We focus here on the differences between $\cB_{\textrm{HFK}}$ and $\cB_{\textrm{KR}}$, which we call the knot Floer graph homology and the HOMFLY-PT graph homology, respectively.
Both the knot Floer and HOMFLY-PT graph homologies are built from certain ideals in polynomial rings. The polynomial rings are edge rings: they have indeterminates corresponding to thin edges of the graph. Generating sets for the ideals can be read off directly from the graph. Ozsv\'ath and Szab\'o observe that the ideals used in the two constructions are remarkably similar~\cite{ozsszcube}, but a precise relationship between the constructions has not been previously described. Our goal will be to make the comparison precise, with the intention of facilitating progress on the spectral sequence conjecture.
We address two major differences between the knot Floer and HOMFLY-PT graph homologies.
\begin{enumerate}
\item \textbf{Twisted coefficients.} The knot Floer edge ring is defined over $\mathbb{Z}[t^{-1},t]]$, the ring of Laurent series in $t$, while the HOMFLY-PT edge ring is defined over $\mathbb{Q}$~\cite{kr2,rasmussenonkr} or $\mathbb{Z}$~\cite{krasner}. The variable $t$ appears in the definition of the knot Floer ideals as well because the knot Floer graph homology is in fact the singular knot Floer homology~\cite{ozsszstipsing} of the graph in $S^3$, computed with a particular choice of twisted coefficients.
\item \textbf{The non-local ideal.} The HOMFLY-PT graph homology is built from two ideals, $L(G)$ and $Q(G)$, both of which are specified entirely by local information (individual thick edges and their incident thin edges) in the graph. The knot Floer graph homology uses (twisted analogues of) these ideals, but also a non-local ideal $N(G)$, which cannot in general be specified by only local data from the graph.
\end{enumerate}
We address the issue of twisted coefficients by recasting it in terms of framed graphs. For a framed, planar, trivalent graph $G$, possibly with boundary, we define an edge ring $\cE(G)$, which is itself a quotient of a polynomial ring by an ideal $F(G)$ derived from the framing. We define mild generalizations of the ideals $L(G)$, $Q(G)$, and $N(G)$ mentioned above, and a graph homology
\[\cB\!\left(G\right)=\tor_\ast\!\left(\frac{\cE\!\left(G\right)}{L\!\left(G\right)},\frac{\cE\!\left(G\right)}{N\!\left(G\right)}\right)\otimes\Lambda^\ast V_G,\]
where $V_G$ is the free $\cE(G)$-module spanned by certain connected components of $G$.
If $G$ is a closed braid graph (i.e.,~obtained by replacing the crossings in a closed braid diagram with thick edges) with its outermost strand cut, then we recover the knot Floer and (conjecturally) HOMFLY-PT graph homologies by imposing certain framings. For a particular non-blackboard framing {\tt e}, we have $\cB_{\textrm{HFK}}(G)\isom\cB\!\left(G^{\tt e}\right)$. For a different non-blackboard framing, we recover the variant on the knot Floer graph homology considered in~\cite{reidmoves}.
Letting {\tt b} denote the blackboard framing, one may write the HOMFLY-PT graph homology as \[\cB_{\textrm{KR}}(G)\isom\tor_\ast\!\left(\frac{\cE(G^{\tt b})}{L(G^{\tt b})},\frac{\cE(G^{\tt b})}{Q(G^{\tt b})}\right)\otimes\Lambda^\ast V_G,\] but one may also re-state Conjecture 1.3 of \cite{manolescucube} as
\[\tor_\ast\!\left(\frac{\cE(G^{\tt b})}{L(G^{\tt b})},\frac{\cE(G^{\tt b})}{Q(G^{\tt b})}\right)\isom\tor_\ast\!\left(\frac{\cE(G^{\tt b})}{L(G^{\tt b})},\frac{\cE(G^{\tt b})}{N(G^{\tt b})}\right).\]
If that conjecture holds, it would follow immediately that $\cB_{\textrm{KR}}(G)\isom\cB(G^{\tt b})$. That is, our $\cB$ would specialize to the HOMFLY-PT graph homology for the blackboard framing.
The approach via framed graphs is a modest generalization of existing graph homologies, but it situates these graph homologies in the context of quantum topology. In that setting, invariants of framed graphs are a natural extension of invariants of knots, and a typical stop along the way to invariants of 3-manifolds. It should be possible to extend $\cB$ to an invariant of knotted framed trivalent graphs via a cube of resolutions chain complex. It would be interesting to relate the resulting invariant to Viro's \cite{viroquantumrel} quantum relative of the Alexander polynomial, which draws on the representation theory of the quantum supergroup $\mathfrak{gl}(1\vert 1)$ to extend the multivariable Alexander polynomial to knotted framed trivalent graphs. Understanding such a relationship could help fill gaps in both the categorified and decategorified settings. On the categorified side, one might hope to extend knot Floer homology to tangles without appealing to bordered sutured theory~\cite{zarev}. On the decategorified side, Heegaard Floer homology might suggest how to upgrade Viro's invariant of framed graphs to a $\mathfrak{gl}(1\vert 1)$ invariant of closed 3-manifolds.
These advertisements for the framed graphs approach aside, our main result concerns the non-local ideal $N(G)$. It will be clear from the definitions that $Q(G)\subseteq N(G)$ for any graph $G$. Furthermore, the non-local ideal $N(G)$ coincides with the local ideal $Q(G)$ when $G$ is a braid graph with none of its strands closed; that is, when $G$ can be obtained from a braid with none of its strands closed by replacing crossings with thick edges as in Figure~\ref{resolutions} (see~\cite[Proposition~3.1.1]{gilmorethesis} and~\cite[Proposition~5.4]{manolescucube}, or implicitly \cite[Lemma~3.12]{ozsszcube} and \cite[Proposition~3.1]{reidmoves}). It is only as we close strands of the braid graph that we begin to see examples in which $Q(G)\subsetneq N(G)$. Therefore, we study the partially closed braids $G, G\ksup{1},\ldots,G\ksup{b-1}$ obtained by closing one strand at a time, as in Figure~\ref{intermedgraphs}. We allow any framing on $G$, and assume that the framing on $G\ksup{i}$ is inherited from that on $G$. Throughout, we consider $G\ksup{b-1}$ to be the closure of $G$, even though its outermost strand is still open. This is consistent with the quantum topology approach to the Alexander polynomial (e.g.~in~\cite{viroquantumrel}) and with the use of basepoints in~\cite{ozsszcube,reidmoves,manolescucube}.
We prove that closing a braid strand corresponds to taking an ideal quotient of the non-local ideal by the edge variable associated to the strand being closed. See Section~\ref{mainresult} for full details of the notation.
\begin{thm}
\label{idealqthmintro}
Let $G$ be a braid graph with no strands closed and $G\ksup{k}$ denote the diagram obtained by closing the right-most $k$ strands of $G$. Let $\pi_k:\cE\!\left(G\ksup{k}\right)\to\cE\!\left(G\ksup{k+1}\right)$ denote the projection of edge rings corresponding to closing the $(k+1)^{\text{st}}$ strand of $G\ksup{k}$. Let $\ztau\ksup{k+1}$ denote the edge ring variable corresponding to the top boundary edge of the $(k+1)^{\text{st}}$ strand of $G\ksup{k}$. Then for $0\leq k\leq b-2$, the equality
$$\pi_k\!\left(N\!\left(G\ksup{k}\right)\right) : \left(\ztau\ksup{k+1}\right) = N\!\left(G\ksup{k+1}\right),$$ holds in $\cE\!\left(G\ksup{k+1}\right)$.
\end{thm}
As a corollary, we may express the non-local ideal of a braid graph's closure in terms of a local ideal of the underlying braid graph.
\begin{cor}
\label{idealqcorintro}
With notation as in Theorem~\ref{idealqthmintro}, the equality
$$\pi_{b-2}\circ\cdots\circ\pi_0\!\left(Q\!\left(G\right)\right) : \left(\ztau\ksup{1}\cdots\ztau\ksup{b-1}\right)=N\!\left(G\ksup{b-1}\right)$$ holds in $\cE\!\left(G\ksup{b-1}\right)$.
\end{cor}
The reason for the appearance of the ideal quotient in relation to braid closures remains mysterious. There is a tempting analogy to Hochschild homology, which is the closure operation in Khovanov's construction of HOMFLY-PT homology via Soergel bimodules~\cite{khovHH}. Soergel bimodules categorify the Hecke algebra and Hochschild homology categorifies Ocneanu's trace on the Hecke algebra, so Khovanov's whole construction has a clean decategorification. One might hope for a similar story involving the ideal quotient and the Alexander polynomial.
\begin{figure}[tb]
\begin{center}
\input{intermedgraphs}
\caption{From left to right: $G_\sigma\ksup{0}=G_\sigma$, the partial closure $G_\sigma\ksup{1}$, and the full closure $G_\sigma\ksup{b-1}=G_{\widehat{\sigma}}$.}
\label{intermedgraphs}
\end{center}
\end{figure}
With the ideal quotient result and interpretation via framed graphs in hand, we may describe the status of the HOMFLY-PT to knot Floer spectral sequence conjecture~\cite{dgr,rasmussenonkr} as follows.\footnote{All of the following results have parallels involving $\hfkhat$ and the reduced HOMFLY-PT homology.} Let $K$ be a knot in $S^3$. Let $\mathcal{K}$ be an $n$-crossing braid diagram for $K$ with outermost strand cut. Let $G_I$ for $I\in\left\{0,1\right\}^n$ be the collection of planar trivalent graphs that can be obtained by replacing each crossing of $\mathcal{K}$ with either a thick edge or with the oriented smoothing, as in Figure~\ref{resolutions}.
Recall that {\tt e} denotes the framing for which $\cB$ specializes to the knot Floer graph homology. Ozsv\'ath and Szab\'o constructed the original cube of resolutions for knot Floer homology from the collection of $\cB\!\left(G_I^{\tt e}\right)$ for $I\in\left\{0,1\right\}^n$, which arose for them as singular knot Floer homology \cite{ozsszstipsing} with twisted coefficients. They showed that their cube of resolutions complex is the $E_1$ page of a spectral sequence to $\hfkmin(K)$ that collapses at the $E_2$ page~\cite[Theorem~1.1, Section~5]{ozsszcube}. Manolescu~\cite{manolescucube} studied the untwisted version of their construction. He described appropriate differentials and gradings from which to assemble a cube of resolutions chain complex from the $\cB\!\left(G_I^{\tt b}\right)$. He also identified that complex as the $E_1$ page of a spectral sequence to $\hfkmin(K)$~\cite[Theorem~1.1]{manolescucube}. Using yet another framing, for which $\cB$ also specializes to the knot Floer graph homology, the author has described another cube of resolutions chain complex~\cite{reidmoves}. Like the Ozsv\'ath-Szab\'o complex, it is the $E_1$ page of a spectral sequence to $\hfkmin(K)$ that collapses at the $E_2$ page (see the proof of \cite[Proposition~9.1]{reidmoves}).
While both of the non-blackboard-framed spectral sequences mentioned above collapse at the $E_2$ page, Manolescu's blackboard-framed spectral sequence does not. In fact, he conjectures that it is exactly the desired spectral sequence from HOMFLY-PT homology to knot Floer homology. More precisely, and translating to our language, Manolescu conjectures that $\cB\!\left(G_I^{\tt b}\right)\isom\cB_{\textrm{KR}}\!\left(G_I\right)$, which would imply that the $E_2$ page of the blackboard-framed spectral sequence was the middle HOMFLY-PT homology of $K$ \cite[Conjecture~1.3]{manolescucube}.
Corollary~\ref{idealqcorintro} allows us to rephrase Manolescu's conjecture as
\[\tor_\ast\!\left(\frac{\cE\!\left(G_I^{\tt b}\right)}{L\!\left(G_I^{\tt b}\right)},\frac{\cE\!\left(G_I^{\tt b}\right)}{Q\!\left(G_I^{\tt b}\right):\left(\ztau\ksup{1}\cdots \ztau\ksup{b-1}\right)}\right)\isom \tor_\ast\!\left(\frac{\cE\!\left(G_I^{\tt b}\right)}{L\!\left(G_I^{\tt b}\right)},\frac{\cE\!\left(G_I^{\tt b}\right)}{Q\!\left(G_I^{\tt b}\right)}\right).\]
Theorem~\ref{idealqthmintro} suggests an inductive approach to the proof: close one strand of a braid diagram at a time and study how the corresponding ideal quotient changes the result of applying $\tor_\ast\!\left(\frac{\cE\!\left(G_I^{\tt b}\right)}{L\!\left(G_I^{\tt b}\right)},-\right)$. Theorem~\ref{idealqthmintro} and Corollary~\ref{idealqcorintro} also provide a new map to employ: the multiplication map $\frac{\cE\!\left(G_I\right)}{N\!\left(G_I\right)}\xrightarrow{\ztau\ksup{1}\cdots\ztau\ksup{b-1}}\frac{\cE\!\left(G_I\right)}{Q\!\left(G_I\right)}$. The multiplication map fits into a short exact sequence \[0\to\frac{\cE\!\left(G_I\right)}{N\!\left(G_I\right)}\xrightarrow{\ztau\ksup{1}\cdots\ztau\ksup{b-1}}\frac{\cE\!\left(G_I\right)}{Q\!\left(G_I\right)}\to\frac{\cE\!\left(G_I\right)}{Q\!\left(G_I\right)+\left(\ztau\ksup{1}\cdots\ztau\ksup{b-1}\right)}\to 0,\] which induces a long exact sequence when $\tor_\ast\!\left(\frac{\cE\!\left(G_I\right)}{L\!\left(G_I\right)},-\right)$ is applied. The multiplication map does not have the correct grading to induce Manolescu's conjectured isomorphism for all $\tor_i$, but it does have the appropriate grading to induce the isomorphism in the top degree, i.e.~when $i=b-1$ \cite[Conjecture 5.2]{manolescucube}.
We expect that there are cube of resolutions chain complexes and spectral sequences analogous to those described above for many other compatible choices of framings on the set of $G_I$ obtained from a given knot diagram. Any choice of framings that corresponds to an admissible twisting of the singular knot Floer homology (see~\cite[Lemma~2.1]{manolescucube},\cite[Section~2.1]{ozsszcube}) should do, and that encompasses any non-negative framing. We expect all such spectral sequences to converge to knot Floer homology, and to collapse if sufficiently far from blackboard-framed. In particular, we expect that such a spectral sequence would collapse if the compatible choice of framings had the property that every closed component of every $G_I$ had non-zero total framing. It would be interesting to know under what conditions the $E_1$ and/or $E_2$ pages of such spectral sequences are knot invariants, and whether there is any relationship to HOMFLY-PT homology outside the blackboard-framed case.
Aside from the conjectured spectral sequence, it would be interesting to study $\cB(G)$ (or a suitable generalization to knotted framed graphs) as an invariant in its own right. For example, it would be interesting to know for what framings $\cB$ satisfies (perhaps modified) categorified MOY relations, as it does for {\tt b} and {\tt e}~\cite{kr2,reidmoves}. There has also been little work done on applications of knot Floer homology to the study of singular knots or spatial graphs. As a starting point, one might look for a relationship between $\cB$ and the sutured Floer homology of a graph's complement in $S^3$.
The proof of Theorem~\ref{idealqthmintro} is a computational commutative algebra argument. We use a Gr\"obner basis technique (Buchberger's Algorithm) to construct a generating set for the appropriate ideal quotients from the defining generating set of $N\!\left(G\ksup{k}\right)$. The result is miraculously the same as the defining generating set for $N\!\left(G\ksup{k+1}\right)$. The computational approach makes for some rather involved arguments, but ultimately succeeds because of a match between the combinatorics of Buchberger's Algorithm and those of the ideals we associate to framed graphs. We are optimistic that Gr\"obner basis techniques may prove useful for the spectral sequence conjecture or in other efforts to study $\cB$.
The paper is organized as follows. Section~\ref{dfns} makes precise the concepts and notation referenced so far: framed graphs, the framing ideal, the local and non-local ideals, edge rings, and the graph homology $\cB$. It also discusses the relation of $\cB$ to the HOMFLY-PT and knot Floer graph homologies, and computes $\cB$ in two simple cases. Finally, it establishes the notation used to state Theorem~\ref{idealqthmintro}. Section~\ref{gbbackground} is a primer on Gr\"obner basis techniques and Buchberger's Algorithm. Since the proof of Theorem~\ref{idealqthmintro} is rather technical, Section~\ref{outlinesec} gives an overview and Section~\ref{examplesection} an example illustrating the arguments to come. Sections~\ref{outlinesec} and~\ref{examplesection} also highlight the reasons that Gr\"obner basis techniques are well suited to the combinatorics of our problem. Sections~\ref{bbrd1} and~\ref{bbrd2} carry out the proof in detail for the blackboard framed case, where it is at least somewhat less notationally intensive. Section~\ref{nonbbframings} describes the modifications necessary to extend from blackboard to arbitrary framings.
\subsection*{Acknowledgements} The author thanks Ciprian Manolescu, who was the first to mention ideal quotients to her in this context, and who provided useful input on drafts of this paper. She is also grateful for several useful conversations with Mikhail Khovanov, Robert Lipshitz, Peter Ozsv\'ath, and Zolt\'an Szab\'o. Finally, the author appreciates the hospitality of the Simons Center for Geometry and Physics, where she proved a limited version of this result (see~\cite{gilmorethesis}) while a visiting student.
\section{Definitions: Framed graphs and associated algebraic objects}
\label{dfns}
\subsection{Framed graphs}
\label{framedgraphs}
In this paper, \emph{graph} will mean an oriented graph properly embedded in the disk $D^2$ with the following properties:
\begin{enumerate}
\item vertices have degree at most three;
\item every connected component has at least one vertex of degree greater than one;
\item edges have an assigned weight of one (thin) or two (thick);
\item edges incident to univalent and bivalent vertices are thin; and
\item for trivalent vertices, the sum of weights of incoming edges equals the sum of weights of outgoing edges.
\end{enumerate}
Univalent vertices will also be called \emph{boundary vertices} of the graph and their incident edges will be called \emph{boundary edges}. All other thin edges will be called \emph{interior edges}. Graphs with these properties are equivalent to singularized projections of tangles: exchange thick edges in the graph for 4-valent vertices as in Figure~\ref{4valwideedgeexchange}.
A \emph{framing} of a graph $G$ will mean an extension of the embedding of $G\hookrightarrow D^2$ to an embedding $F\hookrightarrow D^2\times[0,1]$, where $F$ is a compact surface with boundary and
\begin{enumerate}
\item $G=F\cap\left(D^2\times\left\{\frac{1}{2}\right\}\right)$ is a deformation retract of $F$;
\item $F\cap\partial G=\partial F\cap \partial G$; and
\item $F\cap\left(\partial D^2\times[0,1]\right)=F\cap\left(\partial D^2\times\left\{\frac{1}{2}\right\}\right)$ with each component thereof an arc containing exactly one univalent vertex of $G$.
\end{enumerate}
The \emph{blackboard framing} of $G$ is the surface $F_0$ obtained by taking a closed neighborhood of $G$ in $D^2\times\left\{\frac{1}{2}\right\}$. We require the thick edges in our graphs to be blackboard framed. On each edge, one may compare a framing $F$ to the blackboard framing to obtain an integer, which is the number of positive or negative half-twists that must be inserted in $F_0$ to match $F$. We will represent a framed graph diagrammatically by marking each thin edge and labeling the marking with an integer. We will omit the framing from the text notation for the graph unless discussing a property that holds only for particular framings.
We will consider framed graphs up to planar graph
isotopies: isotopies of the graph in $D^2\times\left\{\frac{1}{2}\right\}$ that extend to isotopies of the framing surface $F$ in $D^2\times[0,1]$ and fix its intersection with $\partial D^2\times [0,1]$. It will be clear from the definitions that $\cB$ is invariant under such isotopies. We expect that $\cB$ could be extended to an invariant of knotted framed graphs (i.e.~allow the graph to be embedded in $D^2\times[0,1]$ and require merely that isotopies fix the intersection of the graph with $\partial D^2$) using a cube of resolutions construction, but we do not pursue the point here.
\subsection{Ideals associated to framed graphs}
\label{idealdfnsec}
We work over a ground ring $\cR=\itk[t^{-1},t]]$ of Laurent series in $t$, with $\itk$ a field.\footnote{Much of the background material on Gr\"obner bases that we use generalizes to the case where $\itk$ is a Noetherian commutative ring, but certain computability properties are required of the ring for the full theory of Gr\"obner bases to generalize.} Let $G$ be a framed graph, so each thin edge of $G$ has an orientation and a marking. Assign to each thin edge a pair of indeterminates $x_i$ and $y_i$ labeling the first and second (with respect to the orientation) segments of the thin edge. Let $\underline{x}(G)$ and $\underline{y}(G)$ denote the sets of $x_i$ and $y_i$. We consider four ideals in $\cR[\underline{x}(G),\underline{y}(G)]$ associated to the graph $G$.
\begin{dfn}
\label{idealdfn}
\begin{enumerate}
\item[(a)] the \emph{framing ideal}, $F(G)$, is generated by linear polynomials associated to markings on thin edges
\begin{align*}
t^\ell y_i-x_i\quad\text{to}\quad
\input{labeledframededge.tex}
\end{align*}
\item[(b)] the \emph{linear ideal}, $L(G)$, is generated by linear polynomials associated to thick edges
\begin{align*}
(x_a+x_b)&-(y_c+y_d)\quad\text{to}\quad
\input{labeledwideedge.tex}
\end{align*}
\item[(c)] the \emph{quadratic ideal}, $Q(G)$, is generated by quadratic polynomials associated to thick edges
\begin{align*}
x_ax_b-y_cy_d\quad\text{to}\quad
\input{labeledwideedge.tex}
\end{align*}
and linear polynomials associated to bivalent vertices
\begin{align*}
x_a-y_c\quad\text{to}\quad
\input{labeledbivalentvertex.tex}
\end{align*}
\item[(d)] the \emph{non-local ideal}, $N(G)$, is generated by polynomials of varying degrees associated to sets of thick edges and bivalent vertices in $G$. Let $\gm$ be such a set. The \emph{weight} $\weight(\gm)$ of $\gm$ is the sum of the framings on thin edges that are internal to $\gm$ (i.e.~have both endpoints incident to a thick edge or bivalent vertex in $\gm$). Let $x_{\gm,\text{out}}$ be the product of $x_i$ associated to thin edges from $\gm$ to its complement and $y_{\gm, \text{in}}$ be the product of $y_i$ associated to thin edges into $\gm$ from its complement. Then the generator of $N(G)$ associated to $\gm$ is $$t^{\weight(\gm)}x_{\gm,\text{out}}-y_{\gm,\text{in}}.$$ We consider a thin edge with a marking denoting its framing to be a single edge.
\end{enumerate}
\end{dfn}
The definition of a subset's weight given above differs from that in~\cite{ozsszcube}, but becomes equivalent in the edge ring when the graph has the appropriate framing. See Proposition~\ref{hfkspec}. If $G$ is a closed braid graph (obtained from a braid diagram by replacing crossings with thick edges), then the ideal $N(G)$ has other generating sets. Rather than associating a generator to each subset in $G$, we may instead associate a generator to each closed path in $G$ or to certain regions in $D^2\setminus G$. The equivalence of all of these definitions is proved in \cite[Proposition 3.1]{reidmoves}. These alternative generating sets are smaller in general, but less well adapted to the combinatorics of Buchberger's Algorithm.
\begin{dfn}
\label{edgering}
The \emph{edge ring} of $G$ is $$\cE(G)=\cR[\underline{x}(G)]\isom\frac{\cR[\underline{x}(G),\underline{y}(G)]}{F(G)},$$ fixing the isomorphism that retains the variable $x_i$ from each generator $t^\ell y_i-x_i$ of $F(G)$.\end{dfn}
\noi Although we have defined $L(G)$, $Q(G)$, and $N(G)$ in $\cR[\underline{x}(G),\underline{y}(G)]$, we will actually work with their images in $\cE(G)$.
In the edge ring, we have $Q(G)\subseteq N(G)$ for any $G$. The generator associated to a bivalent vertex in $Q(G)$ is the same as it is in $N(G)$, unless the incoming and outgoing edges of the bivalent vertex coincide. If they do, and that edge has framing $\ell$, then the generator of $N(G)$ is $t^\ell-1$. The generator of $Q(G)$ is $(t^\ell-1)x$ for the appropriate edge variable $x$. Let $v$ be a thick edge in $G$ and $g_{\left\{v\right\}}$ its corresponding generator in $N(G)$. If none of its incident thin edges coincide, then $g_{\left\{v\right\}}$ is identical to the generator of $Q(G)$ associated to $v$. If some of its incident thin edges do coincide, then the generator of $Q(G)$ associated to $v$ is a multiple of $g_{\left\{v\right\}}$. For example, consider the diagram from the definition of $Q(G)$ above. Suppose $x_b$ and $y_d$ label the same thin edge, and that thin edge has framing $\ell$. Let $\ell_c$ be the framing on the edge labeled by $y_c$. Then $g_{\left\{v\right\}}=t^\ell x_a-y_c$, which becomes $t^\ell x_a-t^{-\ell_c}x_c$ in the edge ring. The generator of $Q(G)$ corresponding to $v$ is $x_ax_b-y_cy_d$, which becomes $x_ax_b-t^{-\ell_c-\ell}x_cx_b=t^{-\ell}x_bg_{\left\{v\right\}}$ in the edge ring.
We allow some of the connected components of $G$ to be designated ``special.'' Let $V_G$ be the free $\cE(G)$-module spanned by the non-special connected components of $G$. Then we define the graph homology promised in the introduction to be
\begin{equation}
\cB(G)=\tor_\ast\!\left(\frac{\cE(G)}{L(G)},\frac{\cE(G)}{N(G)}\right)\otimes\Lambda^\ast V_G.
\end{equation}
\subsection{Relation of $\cB$ to other graph homologies}
\label{relationtootherssec}
As mentioned in the introduction, $\cB(G)$ specializes to the knot Floer and (conjecturally) HOMFLY-PT graph homologies for particular framings on $G$. In this section, we indicate more precisely how those specializations hold. Although the definition of $\cB(G)$ makes sense for framed graphs in the generality described in Section~\ref{idealdfnsec}, we restrict to closed braid graphs here because of the parallel restriction on the graph homologies in~\cite{ozsszcube,reidmoves,manolescucube}. Recall from the introduction that we use ``closed'' to describe a braid graph with its outermost strand cut.
\begin{prop}
\label{homflyspec}
Let $G$ be a closed braid graph with its outermost connected component (containing the cut edge) designated special. Let {\tt b} denote the blackboard framing. Then it follows from \cite[Conjecture~1.4]{manolescucube} that
\[\cB\!\left(G^{\tt b}\right)\isom\cB_{\textrm{KR}}(G),\]
where the right-hand side is the HOMFLY-PT graph homology defined in~\cite{kr2}.
\end{prop}
\begin{proof}
It is immediate from the definitions that $L$, $Q$, and $N$ (as ideals in $\cE$) are identical to those defined in \cite{manolescucube}. When $G$ is blackboard framed, every subset has weight zero and the framing ideal merely identifies each $x_i$ with its corresponding $y_i$. Our $\cB(G^{\tt b})$ and $\cB_{\textrm{KR}}(G)$ are then identical to those in \cite{manolescucube}. Theorem~1.2 of~\cite{manolescucube} is that $\cB_{\textrm{KR}}$ is the HOMFLY-PT graph homology from~\cite{kr2} and Conjecture~1.4 of~\cite{manolescucube} is that $\cB\isom\cB_{\textrm{KR}}$.
\end{proof}
\begin{prop}
\label{hfkspec}
Let $G$ be a closed braid graph with its outermost connected component (containing the cut edge) designated special. Let {\tt e} denote the framing in which each thin edge is +1 framed. Then
\[\cB\!\left(G^{\tt e}\right)\isom\cA(G),\]
where $\cA$ is the knot Floer graph homology defined in~\cite{ozsszcube}.
\end{prop}
\begin{proof}
Treat the graph $G$ as a singular knot by replacing thick edges with 4-valent vertices as in Figure~\ref{4valwideedgeexchange}. The images in $\cE\!\left(G^{\tt e}\right)$ of the generating sets of $L\!\left(G^{\tt e}\right)$, $Q\!\left(G^{\tt e}\right)$, and $N\!\left(G^{\tt e}\right)$ are exactly the ideal generated by the relations given in the definition of $\cA(G)$ in~\cite[Section~1]{ozsszcube}. For example, the generator $x_a+x_b-y_c-y_d$ of $L\!\left(G^{\tt e}\right)$ becomes $x_a+x_b-t^{-1}x_c-t^{-1}x_d$ in $\cE\!\left(G^{\tt e}\right)$ via the elements $ty_c-x_c$ and $ty_d-x_d$ of $F\!\left(G^{\tt e}\right)$. The generator $t^{\weight(\gm)}x_{\gm,\out}-y_{\gm,\into}$ becomes $t^{\weight(\gm)}x_{\gm,\out}-t^{-\weight_{\into}(\gm)}x_{\gm,\into}$, where $\weight_{\into}(\gm)$ is the sum of the framings on the thin edges into $\gm$ from its complement and $x_{\gm,\into}$ is the product of the $x_i$ associated to those thin edges. Clearing denominators, we have $\weight(\gm)+\weight_{\into}(\gm)$ as the exponent of $t$, which is the sum of the framings on all thin edges incoming to $\gm$. When $G$ has the framing {\tt e}, that sum is twice the number of thick edges plus the number of bivalent vertices in $\gm$, which is the weight that Ozsv\'ath and Szab\'o assign to $\gm$ in~\cite{ozsszcube}.
It follows directly from these observations that the knot Floer graph homology $\cA(G)$ defined in~\cite[Section~1]{ozsszcube} is isomorphic to $\cE\!\left(G^{\tt e}\right)/\left(L\!\left(G^{\tt e}\right)+N\!\left(G^{\tt e}\right)\right)$, which is the degree zero part of $\cB\!\left(G^{\tt e}\right)$. If $G$ is connected, then one may adapt the argument for~\cite[Theorem~3.1]{ozsszcube} to see that $\cB\!\left(G^{\tt e}\right)$ is concentrated in degree zero, so $\cB\!\left(G^{\tt e}\right)\isom\cA(G)$. If $G$ is not connected, then $\cA(G)$ vanishes. The same is true of $\cB\!\left(G^{\tt e}\right)$. The complete set of thick edges and bivalent vertices in a closed component of $G^{\tt e}$ not containing the cut strand will yield a generator of the form $t^\ell-1$ in $N\!\left(G^{\tt e}\right)$, where $\ell>0$. Since $t^\ell-1$ is a unit in the ground ring, $\cE\!\left(G^{\tt e}\right)/N\!\left(G^{\tt e}\right)$ will vanish.
\end{proof}
Finally, we define a framing for which $\cB$ specializes to the graph homology studied in~\cite{reidmoves}, which is an alternative knot Floer graph homology that also satisfies certain categorified Murakami-Ohtsuki-Yamada relations. Let $G$ be obtained (by replacing crossings with thick edges) from a braid diagram in which each crossing appears in a distinct horizontal layer. If there are $k$ thick edges between the incident thick edges of a given thin edge, assign the framing $k+1$ to that thin edge. The total framing on each strand will be equal to the number of thick edges in $G$. Call this framing {\tt s}.
\begin{prop}
\label{gilmorespec}
Let $G$ be a closed braid graph with its outermost connected component (containing the cut edge) designated special. Let {\tt s} be the framing defined above. Then
\[\cB\!\left(G^{\tt s}\right)\isom\cA(G),\]
where $\cA$ is the alternative knot Floer graph homology defined in~\cite{reidmoves}.
\end{prop}
\begin{proof}
The argument is almost the same as for Proposition~\ref{hfkspec}. If $G$ is disconnected, both $\cB\!\left(G^{\tt s}\right)$ and $\cA(G)$ vanish, again because a generator of the form $t^\ell-1$ appears in $N\!\left(G^{\tt s}\right)$. Otherwise, with a bit more care, the proof of \cite[Theorem~3.1]{ozsszcube} may be adapted to prove that $\cB\!\left(G^{\tt s}\right)$ is concentrated in degree zero.
The degree zero part of $\cB\!\left(G^{\tt s}\right)$ is $\cE\!\left(G^{\tt s}\right)/\left(L\!\left(G^{\tt s}\right)+N\!\left(G^{\tt s}\right)\right)$. Compare the layered diagram used to define the framing {\tt s} with the layered diagrams studied in~\cite{reidmoves} by replacing a marking denoting a framing of $k$ with $k-1$ bivalent vertices. Tracing through the definitions, one may confirm that the images of $L\!\left(G^{\tt s}\right)$, $Q\!\left(G^{\tt s}\right)$, and $N\!\left(G^{\tt s}\right)$ in $\cE\!\left(G^{\tt s}\right)$ are exactly the $L$, $Q$, and $N$ defined in~\cite{reidmoves} (after clearing denominators in some generators). Therefore, the degree zero part of $\cB\!\left(G^{\tt s}\right)$ is isomorphic to $\cA(G)$.
\end{proof}
\subsection{Examples}
\label{smallexamplesec}
\begin{eg}
\label{onecrossingunknot}
Consider the diagram with one thick edge, two boundary edges, and framing $\ell$ on the remaining thin edge. Assume that the edges labeled $x$ and $z$ were blackboard-framed and that we have already used the corresponding generators of the framing ideal to eliminate one variable associated to each.\smallskip
\hspace{.01cm}
\input{singularunknot.tex}
\hspace{-.2cm}
\begin{tabular}{c p{3.2cm} | p{6.2cm}}
&
{$F=(t^\ell y-w)$
$\cE\isom\cR[w,x,z]$
$L=(x+w-z-t^{-\ell}w)$
$Q=(xw-t^{-\ell}wz)$
$N=(t^\ell x-z)$
}
&
{$\cE/N\xrightarrow{x+w-z-t^{-\ell}w}\cE/N$
\vspace{.1cm}
$\cR[w,z]\xrightarrow{(t^{-\ell}-1)(z-w)}\cR[w,z]$
\vspace{.2cm}
$\cB\isom\begin{cases}
\cR[w,z]_{(1)}\oplus\cR[w,z]_{(0)} & \text{ if } \ell=0\\
\cR[z]_{(0)} & \text{otherwise.}\end{cases}$
}
\end{tabular}
\smallskip
\noindent On the left, we present the edge ring under the convention for retaining edge labels specified in Definition~\ref{edgering}, along with generating sets for (the images of) $L$, $Q$, and $N$ in that edge ring. On the right, we use the sole generator of $L$ to resolve $\cE/L$, then tensor with $\cE/N$ to obtain (in the first line) a chain complex with homology $\tor_\ast(\cE/L,\cE/N)$. In the second line, we simplify that chain complex using the generator of $N$ to eliminate $x$. The result in the third line follows: the map $(t^{-\ell}-1)(z-w)$ has no kernel unless $\ell=0$.
\end{eg}
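The elimination step in Example~\ref{onecrossingunknot} can be verified mechanically; the following is a minimal SymPy check (the variable names mirror the edge labels above, $\ell$ is kept symbolic, and the sketch is ours, not part of any computation in the literature).
\begin{verbatim}
from sympy import symbols, expand

t, w, x, z, ell = symbols('t w x z ell')

L_gen = x + w - z - t**(-ell) * w      # generator of L in the edge ring R[w, x, z]
N_gen = t**ell * x - z                 # generator of N

# eliminate x using the generator of N, i.e. substitute x = t**(-ell) * z
reduced = L_gen.subs(x, t**(-ell) * z)

# the result should equal (t**(-ell) - 1)*(z - w), as used in the example
print(expand(reduced - (t**(-ell) - 1) * (z - w)))   # prints 0
\end{verbatim}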
\begin{eg}
\label{twocrossingexample}
Consider the diagram below. Assume that unmarked edges were blackboard-framed and that we have already eliminated one variable associated to each of them using the appropriate generators of the framing ideal. \smallskip
\hspace{.001cm}
\input{twocrossingexample2.tex}
\begin{tabular}{c p{7cm}}
&
{$F=(t^\ell w^\prime-w, t^k y^\prime-y, t^mz^\prime-z)$\medskip
$\cE\isom\cR[v,w,x,y,z]$\medskip
$L=(y+z-t^{-\ell}w-t^{-m}z,x+w-v-t^{-k}y)$\medskip
$Q=((y-t^{-\ell-m}w)z,xw-t^{-k}vy)$\medskip
$N=(t^{m}y-t^{-\ell}w,t^{k+\ell+m}x-v)$\medskip
}
\end{tabular}\smallskip
\noindent The generators of $L$ are a regular sequence in $\cE$. We use the Koszul complex to resolve $\cE/L$, then tensor with $\cE/N$ to obtain a complex whose homology is $\tor_\ast(\cE/L,\cE/N)$. In the second line below, we simplify by using the generators of $N$ to eliminate $w$ and $v$.
\begin{align*}
\left(\cE/N\xrightarrow{y-t^{-\ell}w+(1-t^{-m})z}\cE/N\right)&\otimes\left(\cE/N\xrightarrow{x+w-v-t^{-k}y}\cE/N\right)\\
\left(\cR[x,y,z]\xrightarrow{(t^m-1)(t^{-m}z-y)}\cR[x,y,z]\right)&\otimes\left(\cR[x,y,z]\xrightarrow{(t^{k+\ell+m}-1)(t^{-k}y-x)}\cR[x,y,z]\right)
\end{align*}
\noindent The homology of the complex above depends on whether $m$ and/or $k+\ell+m$ is zero. If one or both values is zero, then the maps in one or both tensor factors of the complex are zero. Otherwise, $t^m-1$ and/or $t^{k+\ell+m}-1$ is a unit in $\cR$ and the map involving that factor has no kernel. When both maps are non-zero, we have a Koszul complex on a regular sequence in $\cR[x,y,z]$. Therefore, the possible outcomes are as follows.
\[\cB\isom\begin{cases}
\cR[x]_{(0)} & m\neq 0, k+\ell+m\neq 0\\
\cR[x,z]_{(0)}\oplus\cR[x,z]_{(1)} & m=0, k+\ell+m\neq 0\\
\cR[x,y]_{(0)}\oplus\cR[x,y]_{(1)} & m\neq0, k+\ell+m=0\\
\cR[x,y,z]_{(0)}\oplus\cR[x,y,z]^{\oplus 2}_{(1)}\oplus\cR[x,y,z]_{(2)} & m=0, k+\ell+m=0\\
\end{cases}\]
\end{eg}
\subsection{Statement of main result}
\label{mainresult}
Our main result concerns the process of closing strands in braid graphs and the corresponding algebraic operation on the non-local ideal defined in Section~\ref{idealdfnsec}. Let $\sigma$ be a braid with $b$ strands, none of which are closed. Let $G_\sigma=G_\sigma\ksup{0}$ be the framed braid graph obtained by replacing the crossings in $\sigma$ with thick edges. Let $G_\sigma\ksup{1},\ldots,G_\sigma\ksup{b-1}$ be the intermediary graphs obtained by closing one strand of $G_\sigma$ at a time from right to left, as in Figure~\ref{intermedgraphs}.
The final graph $G_\sigma\ksup{b-1}$ has two boundary vertices. We nonetheless refer to it as the closure of $G_\sigma$ and consider it to be the singularization of $\widehat{\sigma}$, writing $G_{\widehat{\sigma}}$ for $G_\sigma\ksup{b-1}$.
The choice to work with diagrams in which the outermost strand is cut is typical in the quantum topology literature because the representation theory underlying the Alexander polynomial demands it (see e.g.~\cite{viroquantumrel}). It is also in line with the use of basepoints in related work~\cite{ozsszcube,reidmoves,manolescucube}.
We work in the edge rings $\cE\!\left(G_\sigma\ksup{k}\right)$, where we have retained the edge labels in $\underline{x}\!\left(G_\sigma\ksup{k}\right)$ and discarded those in $\underline{y}\!\left(G_\sigma\ksup{k}\right)$ under the isomorphism specified in Definition~\ref{edgering}. We assume that generators of the ideals $L\!\left(G_\sigma\ksup{k}\right)$, $Q\!\left(G_\sigma\ksup{k}\right)$, and $N\!\left(G_\sigma\ksup{k}\right)$ have been rewritten accordingly. In diagrams, we assume that edges bearing a single label have retained the appropriate one under our conventions. Abbreviate the edge ring $\cE\!\left(G_\sigma\ksup{k}\right)$ as $\cE_k$.
Figure~\ref{intermedgraphs} illustrates the notation used in the rest of this section. Let $\ztau\ksup{k}$ and $\zbeta\ksup{k}$ denote the top- and bottom-most remaining edges on the $k^\text{th}$ strand of $G_\sigma$ after the variables in $\underline{y}\!\left(G_\sigma\right)$ have been discarded and the markings denoting framing removed from thin edges. The $\ztau\ksup{k}$ and $\zbeta\ksup{k}$ are not new variables, but simply alternate names for certain variables in $\underline{x}\!\left(G_\sigma\right)$. Let $a_k$ be the framing on the top boundary edge of the $k^\text{th}$ strand of $G_\sigma$. Let $Z_{k+1}\subset\cE_k$ be the ideal generated by $\ztau\ksup{k+1}-t^{a_{k+1}}\zbeta\ksup{k+1}$. Closing the $(k+1)^\text{st}$ strand of $G_\sigma\ksup{k}$ corresponds to taking the quotient of $\cE_k$ by $Z_{k+1}$. Let $\pi_k:\cE_k\to \cE_{k+1}$ be the quotient map with kernel $Z_{k+1}$ that retains $\ztau\ksup{k+1}$ and discards $\zbeta\ksup{k+1}$. In $G\ksup{k}$, we call the joined edges $\ztau\ksup{i}=t^{a_i}\zbeta\ksup{i}$ for $i\leq k$ \emph{closure edges} and continue to call the remaining $\ztau\ksup{i}$ and $\zbeta\ksup{i}$ for $i>k$ \emph{boundary edges}.
Closing $G_\sigma$ completely to $G_{\widehat{\sigma}}$ corresponds to taking successive quotients of $\cE(G_\sigma)$ by each of the $Z_k$. Equivalently, let $Z\subset\cE(G_\sigma)$ be the ideal generated by $\ztau\ksup{k}-t^{a_k}\zbeta\ksup{k}$ for $1\leq k \leq b-1$. Then the edge rings of $G_{\sigma}$ and its closure are related by $\cE(G_\sigma)/Z\isom\cE(G_{\widehat{\sigma}})$.
Let $\pi:\cE(G_\sigma)\to \cE(G_{\widehat{\sigma}})$ be the composition $\pi_{b-2}\circ\cdots\circ\pi_0$, so $\pi$ retains all of the $\ztau\ksup{k}$ and discards all of the $\zbeta\ksup{k}$ except $\zbeta\ksup{b}$.
For each intermediary graph, we have ideals $L\!\left(G_\sigma\ksup{k}\right)$, $Q\!\left(G_\sigma\ksup{k}\right)$, and $N\!\left(G_\sigma\ksup{k}\right)$ in $\cE_k$. We abbreviate these $L_k$, $Q_k$, and $N_k$. The generators of the linear and quadratic ideals depend only on local information at each thick edge, which will not change when strands are closed. Therefore, $\pi_k(L_k)=L_{k+1}$ and
$\pi_k(Q_{k})=Q_{k+1}$. The analogous statement is not at all true for the non-local ideal. The set of edges that are internal to a subset $\gm$ may change as we proceed from $G_\sigma\ksup{k}$ to $G_{\sigma}\ksup{k+1}$. Specifically, a subset $\gm$ in $G_\sigma\ksup{k}$ may have $\ztau\ksup{k+1}$ as an outgoing edge and $\zbeta\ksup{k+1}$ as an incoming edge. The generator of $N_k\subset\cE_k$ associated to $\gm$ will then have $\ztau\ksup{k+1}$ dividing one of its terms and $\zbeta\ksup{k+1}$ dividing the other. Under $\pi_k$, the $\zbeta\ksup{k+1}$ will be replaced by $t^{a_{k+1}}\ztau\ksup{k+1}$, so both of its terms will be divisible by $\ztau\ksup{k+1}$. In $G_\sigma\ksup{k+1}$, however, the closure edge on the $(k+1)^\text{st}$ strand will be internal to $\gm$. Therefore, neither term of the generator of $N_{k+1}$ associated to $\gm$ will be divisible by $\ztau\ksup{k+1}$. Refer to Section~\ref{examplesection} and the set $\gm\cup\dl$ in Figure~\ref{smallexample} for an example of how this can occur.
We have seen so far that $\pi_k(N_{k})\subsetneq N_{k+1}$. The content of our main result is that the situation described above fully explains the behavior of the non-local ideal under braid closure. That is, closing a braid strand corresponds to taking a quotient (Definition~\ref{iqdfn}) of the non-local ideal. Now that all of the necessary notation is in place, we remind the reader of our main result and its corollary.
\begin{thm*}[Restatement of Theorem~\ref{idealqthmintro}]
With notation as above, the following ideals are equal in $\cE\!\left(G_\sigma\ksup{k+1}\right)$
$$\pi_k\!\left(N\!\left(G_\sigma\ksup{k}\right)\right) : \left(\ztau\ksup{k+1}\right) = N\!\left(G_\sigma\ksup{k+1}\right)$$ for $0\leq k\leq b-2$.
\end{thm*}
\begin{cor*}[Restatement of Corollary~\ref{idealqcorintro}]
Let $\ztau$ be the product $\ztau\ksup{1}\cdots\ztau\ksup{b-1}$. The following ideals are equal in $\cE(G_{\widehat{\sigma}})$
$$\pi(Q(G_\sigma)) : (\ztau) = N(G_{\widehat{\sigma}}).$$
\end{cor*}
\begin{proof}
The corollary follows immediately from the facts that $Q(G_\sigma)=N(G_\sigma)$ and that $I:(xy)=(I:(x)):(y)$ for any ideal $I$, ring $R$, and ring elements $x,y\in R$.
\end{proof}
We will prove Theorem~\ref{idealqthmintro} in detail when $G_\sigma$ is blackboard framed, then describe in Section~\ref{nonbbframings} how to modify the proof to handle non-blackboard framings. No substantive changes to the proof are required; the restriction to blackboard framings merely simplifies the notation.
\section{Background: Gr\"obner bases and Buchberger's Algorithm}
\label{gbbackground}
We approach Theorem~\ref{idealqthmintro} as a commutative algebra calculation: given generating sets for two ideals in a polynomial ring, create a generating set for their ideal quotient. In fact, we would like to re-create a previously specified generating set. Gr\"obner bases are a convenient tool for this sort of calculation. They make it possible to generalize sensibly the division algorithm for single-variable polynomials to a division algorithm for multivariable polynomials, thereby reducing certain difficult questions in commutative algebra and algebraic geometry to computational problems. Gr\"obner bases are the foundation of computer algebra programs that do commutative algebra in polynomial rings, such as Macaulay 2~\cite{macaulay2}. In this section, we define Gr\"obner bases, describe an algorithm for converting an arbitrary generating set for an ideal into a Gr\"obner basis, and explain how Gr\"obner bases can be used to calculate generating sets for ideal intersections and quotients. The exposition here is an adaptation of that in~\cite{gbbook}.
\subsection{Monomial orders}
Let $\itk$ be a field and $\itk[x_0,\ldots,x_n]=\kpoly$ a polynomial ring over it.
\begin{dfn}
A \emph{monomial order} is a total ordering of the monomials $x_0^{\alpha_0}\cdots x_n^{\alpha_n}$ in $\kpoly$ that satisfies
\begin{enumerate}
\item $1<x_0^{\alpha_0}\cdots x_n^{\alpha_n}$ for all monomials with $\alpha_i$ not all zero, and
\item $y<y^\prime$ implies $yz<y^\prime z$ for any monomials $y, y^\prime, z$ in $\kpoly$.
\end{enumerate}
\end{dfn}
We will use the lexicographic ordering on $\kpoly$ in which $x_0>x_1>\cdots>x_n>1$. This means that $x_0^{\alpha_0}\cdots x_n^{\alpha_n}>x_0^{\beta_0}\cdots x_n^{\beta_n}$ when $\alpha_i>\beta_i$ for the first $i$ at which the exponents differ. The largest monomial is written first. For example, the following polynomials are written correctly with respect to the lexicographic term order.
\begin{center}
\begin{tabular}{ccc}
$f_1=x_1^2x_2-x_1x_2^2$ & $f_2=2x_1-x_2x_3x_4$ & $f_3=x_5+4x_6^3-1$
\end{tabular}
\end{center}
Throughout the remaining sections, we will write polynomials with respect to the lexicographic order unless we specify otherwise.
Given a monomial order, denote the leading term and the leading monomial of a polynomial $f\in\kpoly$ by $\lt{f}$ and $\lm{f}$ respectively. For example, $\lt{f_2}=2x_1$ and $\lm{f_2}=x_1$. For blackboard framed graphs, there will be no difference between leading terms and leading monomials because our coefficients are always $\pm 1$. When we deal with polynomials that have only two terms, we will denote the trailing term (i.e.~the non-leading term) by $\trt{f}$ and the trailing monomial by $\trm{f}$.
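These conventions are easy to check by machine. For instance, the following SymPy sketch (our own illustration, not part of any computation in this paper) prints the leading terms and leading monomials of $f_1$, $f_2$, and $f_3$ under the lexicographic order.
\begin{verbatim}
from sympy import LT, LM, symbols

x1, x2, x3, x4, x5, x6 = symbols('x1:7')
gens = (x1, x2, x3, x4, x5, x6)        # lex order with x1 > x2 > ... > x6

f1 = x1**2*x2 - x1*x2**2
f2 = 2*x1 - x2*x3*x4
f3 = x5 + 4*x6**3 - 1

for f in (f1, f2, f3):
    # leading term (with coefficient) and leading monomial under lex
    print(LT(f, *gens, order='lex'), LM(f, *gens, order='lex'))
# expected: x1**2*x2 x1**2*x2 ; 2*x1 x1 ; x5 x5
\end{verbatim}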
\subsection{Gr\"obner bases and the division algorithm}
A Gr\"obner basis is a generating set for an ideal that accounts for all possible leading monomials of polynomials in that ideal.
\begin{dfn}
A Gr\"obner basis for an ideal $I\subset \kpoly$ is a set of polynomials $g_1,\ldots,g_k$ in $I$ such that for any $f\in I$, there is some $i$ for which $\lm{g_i}$ divides $\lm{f}$.
\end{dfn}
\noi It follows from the Hilbert Basis Theorem and a few basic observations that every nonzero ideal in $\kpoly$ has a Gr\"obner basis~\cite[Corollary 1.6.5]{gbbook}. Gr\"obner bases are not unique and are typically highly redundant; an ideal typically has a smaller generating set that is not a Gr\"obner basis.
The key advantage of Gr\"obner bases over other generating sets is that they make it possible to generalize the division algorithm to multivariable polynomials in a useful way. Generalizing the algorithm is straightforward enough: To divide $f$ by $g$ in $\kpoly$, we see whether $\lm{g}$ divides $\lm{f}$. If it does, we record $\lt{f} / \lt{g}$ as a term of the quotient and replace $f$ by $f-\frac{\lt{f}}{\lt{g}}g$. If not, we record $\lt{f}$ as a term in the remainder and replace $f$ with $f-\lt{f}$. Continuing this process as long as possible, we eventually obtain a decomposition of $f$ as $f=qg+r$ for some $q, r\in\kpoly$. We may also divide $f$ by a collection of polynomials $g_1,\ldots,g_k$ to obtain a decomposition $f=q_1g_1+\cdots+q_kg_k+r$. At each step, we look for the first $i$ such that $\lm{g_i}$ divides $\lm{f}$, then record $\lt{f}/\lt{g_i}$ as a term in the quotient $q_i$ and replace $f$ by $f-\frac{\lt{f}}{\lt{g_i}}g_i$. If no $\lm{g_i}$ divides $\lm{f}$, then we record $\lt{f}$ as a term in the remainder $r$ and replace $f$ with $f-\lt{f}$. We will write $$f\xrightarrow{g_1,\ldots,g_k}r$$ and say ``$f$ reduces to $r$ via $g_1,\ldots,g_k$'' if $r$ is obtained as a remainder when using this algorithm to divide $f$ by $g_1,\ldots,g_k$.
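As a quick illustration (a toy computation whose polynomials are chosen purely for this purpose and play no role later), divide $f=x_0^2x_1+x_0x_1^2$ by $g_1=x_0x_1-1$ and $g_2=x_1^2-1$ in $\itk[x_0,x_1]$ with the lexicographic order $x_0>x_1>1$. Two division steps by $g_1$ give
\begin{align*}
f-x_0g_1&=x_0x_1^2+x_0,\\
\left(x_0x_1^2+x_0\right)-x_1g_1&=x_0+x_1,
\end{align*}
and neither term of $x_0+x_1$ is divisible by $\lm{g_1}=x_0x_1$ or $\lm{g_2}=x_1^2$, so both terms pass to the remainder. The resulting decomposition is $f=(x_0+x_1)g_1+0\cdot g_2+(x_0+x_1)$; that is, $f\xrightarrow{g_1,g_2}x_0+x_1$.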
In general, the result of this generalized division algorithm depends on the monomial order chosen on $\kpoly$ and the order in which the polynomials $g_1,\ldots,g_k$ are listed. Neither the quotients $q_1,\ldots,q_k$ nor the remainder $r$ are unique. Consequently, this generalized division algorithm on its own is of little use. It is not true, for example, that the remainder $r$ is zero if and only if $f$ is in the ideal generated by $g_1,\ldots,g_k$.
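For instance, keep the illustrative divisors $g_1=x_0x_1-1$ and $g_2=x_1^2-1$ from above and take $f=x_0x_1^2-x_0$. Dividing by $g_1$ first gives $f-x_1g_1=-x_0+x_1$, and neither term of $-x_0+x_1$ is divisible by $\lm{g_1}$ or $\lm{g_2}$, so $f\xrightarrow{g_1,g_2}-x_0+x_1$. Dividing by $g_2$ first instead gives $f-x_0g_2=0$, so $f\xrightarrow{g_2,g_1}0$. In particular, $f$ lies in the ideal $(g_1,g_2)$ even though one ordering of the divisors produces a nonzero remainder.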
However, if $g_1,\ldots,g_k$ are a Gr\"obner basis (with respect to the chosen monomial order) for the ideal they generate, then the remainder $r$ is unique: it does not depend on the order in which the $g_i$ are listed. The quotients are still not unique, but the uniqueness of the remainder is sufficient to make the generalized division algorithm useful for commutative algebra computations. For instance, if $g_1,\ldots,g_k$ are a Gr\"obner basis for the ideal they generate, then $f\in (g_1,\ldots,g_k)$ if and only if $f$ reduces to zero via $g_1,\ldots,g_k$.
\subsection{Buchberger's Algorithm and ideal quotients}
Buchberger~\cite{buch1} developed an algorithm for converting any generating set of an ideal into a Gr\"obner basis. Such an algorithm must produce new generators that account for leading monomials of polynomials in the ideal that did not appear as leading monomials among the original generators. New leading monomials arise when a linear combination of existing generators causes their leading terms to cancel. Buchberger's Algorithm systematically produces these cancellations using S-polynomials.
\begin{dfn}
\label{spolydfn}
The \emph{S-polynomial} of two non-zero polynomials $f,g\in\kpoly$ is
\begin{align*}
S(f,g)&=\frac{\lcm{\lm{f},\lm{g}}}{\lt{f}}f-\frac{\lcm{\lm{f},\lm{g}}}{\lt{g}}g\\
&=\frac{\lm{g}}{\lc{f}\gcd(\lm{f},\lm{g})}\overline{f}-\frac{\lm{f}}{\lc{g}\gcd(\lm{f},\lm{g})}\overline{g},
\end{align*}
where $\overline{f}=f-\lt{f}$ and $\overline{g}=g-\lt{g}$.
\end{dfn}
\noi If $\lt{f}=\lm{f}$ and $\lt{g}=\lm{g}$, we have the simplification
\begin{align*}
S(f,g)=\frac{\lm{g}}{\gcd(\lm{f},\lm{g})}\overline{f}-\frac{\lm{f}}{\gcd(\lm{f},\lm{g})}\overline{g}.
\end{align*}
Buchberger's theorem~\cite[Theorem 1.7.4]{gbbook} is that a generating set $g_1,\ldots,g_k$ for an ideal $I\subset\kpoly$ is a Gr\"obner basis for $I$ if and only if $S(g_i,g_j)\xrightarrow{g_1,\ldots,g_k}0$ for all $i\neq j$. Buchberger's Algorithm, then, is as follows.
\begin{algorithm}[Buchberger]
\label{buchalg}
\noindent Let $g_1,\ldots,g_k$ be a generating set for an ideal $I\subset\kpoly$.
\begin{enumerate}
\item Compute $S(g_i,g_j)$ for some $i\neq j$ and attempt to reduce it via $g_1,\ldots,g_k$ using the generalized division algorithm.
\item If $S(g_i,g_j)\xrightarrow{g_1,\ldots,g_k}0$, go back to the previous step and compute a different S-polynomial. If $S(g_i,g_j)\xrightarrow{g_1,\ldots,g_k}r$ and $r\neq 0$, then add $r$ to a working basis.
\item Repeat the previous two steps until a basis $g_1,\ldots,g_{k+s}$ is obtained for which $S(g_i,g_j)\xrightarrow{g_1,\ldots,g_{k+s}}0$ for all $i\neq j$.
\end{enumerate}
\end{algorithm}
\noindent Buchberger~\cite{buch1} proved that this algorithm terminates and produces a Gr\"obner basis for $I$.
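Continuing the toy example above: starting from $g_1=x_0x_1-1$ and $g_2=x_1^2-1$, the S-polynomial $S(g_1,g_2)=x_0-x_1$ computed earlier has no term divisible by $\lm{g_1}$ or $\lm{g_2}$, so it reduces to itself and we add $g_3=x_0-x_1$ to the working basis. Now $S(g_1,g_2)=g_3$ reduces to zero, and the two new S-polynomials do as well,
\begin{align*}
S(g_1,g_3)&=x_1^2-1\xrightarrow{g_2}0\quad\text{and}\\
S(g_2,g_3)&=-x_0+x_1^3\xrightarrow{g_3,g_2}0,
\end{align*}
so the algorithm terminates and $\left\{x_0x_1-1,\,x_1^2-1,\,x_0-x_1\right\}$ is a Gr\"obner basis for $(g_1,g_2)$.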
Theorem~\ref{idealqthmintro} claims that the process of closing a braid strand corresponds to taking an ideal quotient of the non-local ideal.
\begin{dfn}
\label{iqdfn}
Let $I,J$ be ideals in a ring $R$. The \emph{ideal quotient of $I$ by $J$} is
$$I:J=\{r\in R\,\vert\,rJ\subset I\}.$$ Note that $I$ is always contained in $I\!:\!J$.
\end{dfn}
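For example (a toy case in $\itk[x_0,x_1]$, unrelated to the ideals attached to graphs), $$\left(x_0x_1,x_1^2\right):\left(x_1\right)=\left(x_0,x_1\right),$$ since $rx_1\in\left(x_0x_1,x_1^2\right)=x_1\cdot\left(x_0,x_1\right)$ exactly when $r\in\left(x_0,x_1\right)$, the element $x_1$ being a nonzerodivisor. We return to this small example after Algorithm~\ref{iqalg} below to see the same answer emerge from a Gr\"obner basis computation.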
We will use Buchberger's Algorithm to produce an explicit generating set for the ideal quotient $\pi_k\!\left(N\!\left(G_\sigma\ksup{k}\right)\right):\left(\ztau\ksup{k+1}\right)$. It will be readily recognizable as the generating set by which $N\!\left(G_{\sigma}\ksup{k+1}\right)$ was defined. First, we produce a generating set for the intersection $\pi_k\!\left(N\!\left(G_\sigma\ksup{k}\right)\right)\cap\left(\ztau\ksup{k+1}\right)$ in $\cE\!\left(G_\sigma\ksup{k+1}\right)$. The following straightforward proposition explains how a generating set for an intersection yields a generating set for a quotient. It is a rephrasing of \cite[Lemma 2.3.11]{gbbook}, for example.
\begin{prop}
Let $I\subset R$ be an ideal in a polynomial ring and $x\in R$. If $h_1,\ldots,h_k$ is a generating set for $I\cap (x)$, then $\frac{h_1}{x},\ldots,\frac{h_k}{x}$ is a generating set for $I:(x)$.
\end{prop}
To produce a Gr\"obner basis for an intersection, we follow the method prescribed in \cite[Proposition 2.3.5]{gbbook}. Suppose that $I,J\subset\kpolylong$ are ideals with generating sets $p_1,\ldots,p_k$ and $q_1,\ldots,q_\ell$ respectively. Enlarge the polynomial ring to include a dummy variable $\nu$. Define the monomial order on $\itk[x_0,\ldots,x_n,\nu]$ to be lexicographic with $\nu>x_0>\cdots>x_n>1$. (The lexicographic ordering is a special case of an ``elimination ordering,'' which is what is actually required for this procedure to work.) Then
$$I\cap J=(\nu I+(\nu-1)J)\cap\kpolylong$$ and a Gr\"obner basis for $I\cap J$ can be obtained from a Gr\"obner basis for $\nu I + (\nu-1)J$ by intersecting the basis with $\kpolylong$~\cite[Theorem 2.3.4]{gbbook}. Therefore, to obtain a basis for $I\cap J$, we apply Buchberger's Algorithm to the basis $$\nu p_1,\ldots,\nu p_k,(\nu-1)q_1,\ldots,(\nu-1)q_\ell,$$ then discard any generator in which $\nu$ appears.
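As a small illustration of this method (again a toy case), take $I=(x_0)$ and $J=(x_1)$ in $\itk[x_0,x_1]$ with the lexicographic order $\nu>x_0>x_1>1$. Buchberger's Algorithm applied to $\left\{\nu x_0,\,\nu x_1-x_1\right\}$ adds the single new generator
\begin{align*}
S(\nu x_0,\nu x_1-x_1)=x_1(\nu x_0)-x_0(\nu x_1-x_1)=x_0x_1,
\end{align*}
and one checks that all remaining S-polynomials reduce to zero. Intersecting the Gr\"obner basis $\left\{\nu x_0,\,\nu x_1-x_1,\,x_0x_1\right\}$ with $\itk[x_0,x_1]$ leaves only $x_0x_1$, recovering the familiar fact that $(x_0)\cap(x_1)=(x_0x_1)$.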
In sum, we have the following algorithm for producing a Gr\"obner basis for the ideal quotient $I:(x)$ (where $x$ is a monomial) starting from a generating set $p_1,\ldots,p_k$ for $I$.
\begin{algorithm}
\label{iqalg}
\begin{enumerate}
\item[]{}
\item Apply Buchberger's Algorithm (Algorithm~\ref{buchalg}) to\\ $\displaystyle{\left\{\nu p_1,\ldots,\nu p_k,\nu x-x\right\}}$ in $\itk[x_0,\ldots,x_n,\nu]$ with an ordering in which $\nu>x_i$ for all $x_i$. Let $\left\{p_1,\ldots,p_{k+s}\right\}$ ($s\geq 0$) be the output of Buchberger's Algorithm.
\item Intersect $\left\{p_1,\ldots,p_{k+s}\right\}$ with $\itk[x_0,\ldots,x_n]$. Let $\left\{p_1^\prime,\ldots,p_m^\prime\right\}$ be the resulting subset of generators.
\item Divide each of the $p_i^\prime$ by $x$. The set $\displaystyle{\left\{\frac{p_1^\prime}{x},\ldots,\frac{p_m^\prime}{x}\right\}}$ is a Gr\"obner basis for $I:(x)$.
\end{enumerate}
\end{algorithm}
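Returning to the toy quotient $\left(x_0x_1,x_1^2\right):\left(x_1\right)$ mentioned after Definition~\ref{iqdfn} (still purely illustrative), Algorithm~\ref{iqalg} runs as follows with the order $\nu>x_0>x_1>1$. Buchberger's Algorithm applied to $\left\{\nu x_0x_1,\,\nu x_1^2,\,\nu x_1-x_1\right\}$ adds the two generators
\begin{align*}
S(\nu x_0x_1,\nu x_1-x_1)&=x_0x_1\quad\text{and}\\
S(\nu x_1^2,\nu x_1-x_1)&=x_1^2,
\end{align*}
and one checks that every remaining S-polynomial reduces to zero. Intersecting with $\itk[x_0,x_1]$ leaves $\left\{x_0x_1,\,x_1^2\right\}$, and dividing each element by $x_1$ returns the generating set $\left\{x_0,\,x_1\right\}$, in agreement with the direct calculation $\left(x_0x_1,x_1^2\right):\left(x_1\right)=\left(x_0,x_1\right)$.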
\subsection{Simplifying Gr\"obner basis computations}
\label{gbsimplify}
We record here a collection of propositions that will simplify computations encountered when applying Buchberger's Algorithm.
\begin{prop}
\label{gbprinciplegcd}
Let $f,g\in\kpoly$. If $\gcd(\lm{f},\lm{g})=1$, then $S(f,g)\xrightarrow{f,g}0$.
\end{prop}
\begin{proof}
Let $f=\lt{f}+\overline{f}$ and $g=\lt{g}+\overline{g}$. Then we can compute and reduce $S(f,g)$ as follows. The two possible term orders are considered in two columns.
\begin{equation*}
\begin{aligned}
S(f,g)&=\frac{\lm{g}}{\lc{f}}\overline{f}-\frac{\lm{f}}{\lc{g}}\overline{g}\\
\textrm{reduce}\;&-\frac{\overline{f}}{\lc{f}\lc{g}}(\lt{g}+\overline{g})\\
&=-\frac{\overline{g}}{\lc{g}}\left(\lm{f}+\frac{\overline{f}}{\lc{f}}\right)\\
\textrm{reduce}\;&+\frac{\overline{g}}{\lc{f}\lc{g}} f\\
&=0
\end{aligned}
\quad\vline\quad
\begin{aligned}
S(f,g)&=-\frac{\lm{f}}{\lc{g}}\overline{g}+\frac{\lm{g}}{\lc{f}}\overline{f}\\
\textrm{reduce}\;&+\frac{\overline{g}}{\lc{f}\lc{g}}(\lt{f}+\overline{f})\\
&=\frac{\overline{f}}{\lc{f}}\left(\lm{g}+\frac{\overline{g}}{\lc{g}}\right)\\
\textrm{reduce}\;&-\frac{\overline{f}}{\lc{f}\lc{g}} g\\
&=0
\end{aligned}
\end{equation*}
\end{proof}
\begin{prop}
\label{gbprinciplecoeff}
Let $f=\lt{f}+\overline{f}$ and $g=\lt{g}+\overline{g}$ be polynomials in $\kpoly$. Let $a,b$ be monomials in $\kpoly$. Then $$S(af,ag)=aS(f,g).$$
If $\gcd(a,b)=\gcd(a,\lm{g})=\gcd(b,\lm{f})=1$, then $$S(af,bg)=abS(f,g).$$
If $\gcd(a,\lm{f})=1$, then $$S(af, a\lt{g}+\overline{g})=S(f,a\lt{g}+\overline{g}).$$
\end{prop}
\begin{proof}
Since $\gcd(a\lm{f}, a\lm{g})=a\gcd(\lm{f}, \lm{g}),$ we compute as follows for the first claim.
\begin{eqnarray*}
S(af,ag)
&=&\frac{a\lm{g}}{\lc{f}a\gcd(\lm{f},\lm{g})}a\overline{f}-\frac{a\lm{f}}{\lc{g}a\gcd(\lm{f},\lm{g})}a\overline{g}\\
&=&a\left(\frac{\lm{g}}{\lc{f}\gcd(\lm{f},\lm{g})}\overline{f}-\frac{\lm{f}}{\lc{g}\gcd(\lm{f},\lm{g})}\overline{g}\right)\\&=&aS(f,g)
\end{eqnarray*}
The term order in the first two lines above may not be as written, but it is not changed by cancelling or factoring out $a$ from the expression.
For the second claim, our assumptions imply that $\gcd(a\lm{f},b\lm{g})=\gcd(\lm{f},\lm{g})$. The computation is as follows.
\begin{align*}
S(af,bg)&=\frac{b\lm{g}}{\lc{f}\gcd(\lm{f},\lm{g})}a\overline{f}-\frac{a\lm{f}}{\lc{g}\gcd(\lm{f},\lm{g})}b\overline{g}\\
&=ab\left(\frac{\lm{g}}{\lc{f}\gcd(\lm{f},\lm{g})}\overline{f}-\frac{\lm{f}}{\lc{g}\gcd(\lm{f},\lm{g})}\overline{g}\right)\\
&=abS(f,g)
\end{align*}
Again, the term order may not be as written, but it does not change when we factor out $ab$.
For the third claim, the key observations are that \begin{align*}\gcd(a\lm{f},a\lm{g})&=a\gcd(\lm{f},\lm{g})\quad\text{and}\\\gcd(\lm{f},a\lm{g})&=\gcd(\lm{f},\lm{g})\end{align*} when $\gcd(a,\lm{f})=1$. Therefore,
\begin{align*}
S(af, a\lt{g}+\overline{g})=&\frac{a\lm{g}}{\lc{f}a\gcd(\lm{f},\lm{g})}a\overline{f}\\
&\,-\frac{a\lm{f}}{\lc{g}a\gcd(\lm{f},\lm{g})}\overline{g}\\
=&\frac{a\lm{g}}{\lc{f}\gcd(\lm{f},a\lm{g})}\overline{f}\\
&\,-\frac{\lm{f}}{\lc{g}\gcd(\lm{f},a\lm{g})}\overline{g}\\
=&S(f, a\lt{g}+\overline{g}).
\end{align*}
\end{proof}
We will sometimes encounter expressions with unknown term order after computing an S-polynomial. The following proposition allows us to reduce some such expressions without explicitly determining their term order.
\begin{prop}
\label{unorderedreduction}
Let $p, q, r, s\in\kpoly$ be monomials whose relationships to each other under the monomial order are unknown. Then whichever of $ps-rq$ or $rq-ps$ is correctly ordered is reducible to zero by the correctly ordered versions of $p-q$ and $r-s$.
\end{prop}
\begin{proof}
Suppose that $ps-rq$ is correctly ordered, so $ps>rq$. Then either $p>q$ or $s>r$ or both. Assume without loss of generality that $p>q$. Then term orders are correct in the following computation.
\begin{align*}
&ps-rq\\
\text{reduce:}\quad-\,&s(p-q)\\
=\,&q(s-r) \text{ or } q(r-s)
\end{align*}
The term order in the last line depends on whether $r<s$ or $s<r$. Either way, the last expression reduces by the version of $r-s$ with the correct term order.
If instead $rq-ps$ is correctly ordered, then either $q>p$ or $r>s$ or both. Without loss of generality, assume $q>p$. Reduce by $q-p$ to get $p(r-s)$ or $p(s-r)$ depending on which term order is correct for $r-s$. Either way, the result reduces by the correctly ordered version of $r-s$.
\end{proof}
\section{Outline of Proof of Theorem~\ref{idealqthmintro}}
\label{outlinesec}
Gr\"obner bases and Buchberger's Algorithm offer a concrete, constructive approach to our claim that the non-local ideal arises as an ideal quotient. With a carefully chosen monomial order, the computations and reductions of S-polynomials prescribed by Buchberger's Algorithm actually produce exactly the defining generators of the non-local ideal for the braid graph with one additional strand closed. Moreover, it is possible to interpret all S-polynomial computations required by Buchberger's Algorithm with reference to the graph, and thereby ensure that the algorithm produces no extraneous generators for the ideal quotient.
This section outlines the proof of Theorem~\ref{idealqthmintro} via Algorithm~\ref{iqalg}. Algorithm~\ref{iqalg} calls for an application of Buchberger's Algorithm (Algorithm~\ref{buchalg}) to a generating set for $\pi_k\!\left(N\!\left(G_\sigma\ksup{k}\right)\right)\subset\cE\!\left(G_\sigma\ksup{k+1}\right)\!\left[\nu\right]$, so we begin in Sections~\ref{mosec} and~\ref{startingbasissec} by setting up notation to describe such a set and defining a monomial order on $\cE\!\left(G_\sigma\ksup{k+1}\right)\!\left[\nu\right]$. We then describe how Buchberger's Algorithm progresses (Section~\ref{bbrd1ovw}) and the output it produces (Lemma~\ref{buchend}). We go on to prove Theorem~\ref{idealqthmintro} from Lemma~\ref{buchend} in Section~\ref{lemmatothmsec}. In Section~\ref{examplesection}, we work out a detailed example illustrating the algorithm for a particular small graph.
\subsection{Notation and a Monomial Order}
\label{mosec}
We will use the notation first introduced in Section~\ref{mainresult} and shown in Figure~\ref{intermedgraphs}, but drop all reference to $\sigma$. Refer also to Section~\ref{examplesection} and Figure~\ref{smallexample} for a specific example. Let $G=G\ksup{0}$ be the graph obtained from a non-closed braid diagram with $b$ strands by replacing crossings with thick edges. Let $G\ksup{1},\ldots,G\ksup{b-1}$ be the intermediary graphs obtained by closing strands of $G$ one at a time from right to left. Assume that $G$ carries some framing, and that the $G\ksup{i}$ inherit this framing with newly closed strands bearing the sum of the framings on the edges that formed them.
We have already fixed the isomorphisms $\cE\!\left(G\ksup{i}\right)\isom\cR\!\left[\underline{x}\!\left(G\ksup{i}\right)\right]$ that retain the edge labels in $\underline{x}\!\left(G\ksup{i}\right)$ and discard the edge labels in $\underline{y}\!\left(G\ksup{i}\right)$. Recall that we abbreviate $\cE\!\left(G\ksup{i}\right)$ by $\cE_i$. Let $\underline{x}(G)=\left(x_0,\ldots,x_n\right)$ be the edges in $G$, labeled from top to bottom, right to left. Recall that $\ztau\ksup{i}$ and $\zbeta\ksup{i}$ are additional labels for the top- and bottom-most edges on the $i^\text{th}$ strand of $G$ after the variables in $\underline{y}(G)$ have been discarded. Recall also that $a_{i}$ is the framing on the top boundary edge of the $i^\text{th}$ strand of $G$, and $\pi_i$ is the quotient map with kernel $Z_{i+1}=\left(\ztau\ksup{i+1}-t^{a_{i+1}}\zbeta\ksup{i+1}\right)$ that retains the edge label $\ztau\ksup{i+1}$ and discards $\zbeta\ksup{i+1}$.
Consider the closure of the $(k+1)^\text{st}$ strand of $G\ksup{k}$ as in Theorem~\ref{idealqthmintro}. The diagrams $G\ksup{k}$ and $G\ksup{k+1}$ inherit their edge labels from $G$ in accordance with our conventions for the isomorphisms $\pi_i$ that relate $\cE_{i}$ and $\cE_{i+1}$. Under these conventions, $\cE_{k+1}$ is the polynomial ring over $\cR$ with indeterminates $\ztau\ksup{1},\ldots,\ztau\ksup{b}$, $\zbeta\ksup{k+2},\ldots,\zbeta\ksup{b}$, and an appropriate proper subset of the original $x_0,\ldots,x_n$.
To implement Algorithm~\ref{iqalg}, we require a monomial order on $\cE_{k+1}[\nu]$. The ordering we employ relies crucially on the edge labeling conventions specified above.
\begin{dfn}
\label{edgeringmo}
Let $x_0,\ldots,x_n$ label the edges of $G$ from top to bottom, right to left. Let $\ztau\ksup{i}$ and $\zbeta\ksup{i}$ be additional labels for the top- and bottom-most remaining edges on the $i^\text{th}$ strand for $0\leq i\leq b-1$. For each $i\leq k+1$, label the closure edges of $G\ksup{k+1}$ with $\ztau\ksup{i}$ and discard the label $\zbeta\ksup{i}$. The monomial order on $\cE_{k+1}[\nu]$ is the lexicographic ordering with $\nu>\ztau\ksup{k+1}>x_i>1$ for all $i$ and $x_i>x_j$ when $i<j$.
\end{dfn}
The property that $\ztau\ksup{k+1}$ precedes all other edge variables in the monomial order on $\cE_{k+1}[\nu]$ allows us to relate divisibility by $\ztau\ksup{k+1}$ to determination of a polynomial's leading term. In diagrammatic terms, divisibility by $\ztau\ksup{k+1}$ encodes the relationship of a subset to the braid strand being closed. This connection between the monomial order and the braid diagram is what makes it possible to keep the size and composition of our Gr\"obner basis under control, which is ultimately what allows us to describe the process and outcome of Buchberger's Algorithm.
\begin{obs}
\label{MOanddiv}
Let $f\in\cE_{k+1}[\nu]$ be a polynomial with no term divisible by $\nu$. If $\ztau\ksup{k+1}$ divides exactly one term of $f$, then the term divisible by $\ztau\ksup{k+1}$ must be the leading term of $f$.
\end{obs}
The observation follows immediately from the requirement that the monomial order satisfy $\ztau\ksup{k+1}>x_i$ for all $x_i$. With this requirement, only a term divisible by $\nu$ could precede a term divisible by $\ztau\ksup{k+1}$. In the absence of $\nu$, we look to $\ztau\ksup{k+1}$ to determine the leading term. If it occurs in only one term, then that term must lead. Any monomial order for which Observation~\ref{MOanddiv} holds could be used to prove Theorem~\ref{idealqthmintro} in the same way. We have specified the lexicographic ordering on the remaining variables simply for concreteness.
\subsection{Input and output bases and the monomial order}
\label{startingbasissec}
Let $N_i\subset\cE_i$ be the non-local ideal defined in Section~\ref{idealdfnsec}. In our abbreviated notation, Theorem~\ref{idealqthmintro} claims that $$\pi_k\!\left(N_k\right):\left(\ztau\ksup{k+1}\right) = N_{k+1}$$ in $\cE_{k+1}$ for $0\leq k\leq b-2$.
To implement Algorithm~\ref{iqalg}, we first require a generating set for $\pi_k(N_k)\subset\cE_{k+1}$. We will obtain it from the generating set of $N_k\subset\cE_k$ specified in Section~\ref{idealdfnsec}. Let $h_\gm\ksup{k}$ denote the generator of $N_k$ associated to a subset $\gm$ of the thick edges and bivalent vertices in $G$. Let $g_\gm\ksup{k}=\pi_k\!\left(h_\gm\ksup{k}\right)$ denote its image in $N_{k+1}$. Let $g_\gm\ksup{k+1}$ denote the generator of $N_{k+1}$ associated to $\gm$. Then our input basis for Algorithm~\ref{iqalg} is the set of $g_\gm\ksup{k}$ for all subsets $\gm$, which means our starting basis for Buchberger's Algorithm is
\begin{equation}
\label{startingbasis}
\cG_0=\left\{\nu g_\gm\ksup{k}\,\vert\,\gm\subset G\right\}\cup\left\{\nu\ztau\ksup{k+1}-\ztau\ksup{k+1}\right\},
\end{equation}
as a set of polynomials in $\cE_{k+1}[\nu]$. We will prove Theorem~\ref{idealqthmintro} by showing that the output of Algorithm~\ref{iqalg} is
\begin{equation}
\label{endingbasis}
\cG_{\text{end}}=\left\{g_\gm\ksup{k+1}\,\vert\,\gm\subset G\right\},
\end{equation}
which is the defining basis for $N_{k+1}$. (We abuse notation slightly throughout by letting the words ``in $G$'' and the symbols ``$\subset G$'' mean ``is a subset of the thick edges and bivalent vertices in $G$.'')
We now analyze the polynomials $g_\gm\ksup{k}$ and $g_\gm\ksup{k+1}$ in greater detail, particularly with respect to the monomial order on $\cE_{k+1}[\nu]$. For simplicity, we will assume from now until Section~\ref{nonbbframings} that the graphs $G\ksup{i}$ are blackboard-framed.
Given subsets $\gm$ and $\dl$ in $G$, let $\xsub{\gm}{\dl}$ be the product of interior edges in $G=G\ksup{0}$ from $\gm$ to $\dl$. Let $\zsub{\gm}{\dl}\ksup{i}$ denote the product of closure edges $\ztau\ksup{j}$ that go from $\gm$ to $\dl$ in $G\ksup{i}$, $\zsub{\gm}{\tau}\ksup{i}$ denote the product of edges $\ztau\ksup{j}$ that go from $\gm$ to the top boundary of $G\ksup{i}$, and $\zsub{\beta}{\gm}\ksup{i}$ denote the product of $\zbeta\ksup{j}$ in $G\ksup{i}$ that go from the bottom boundary of $G\ksup{i}$ to $\gm$. Notice that the factors of $\zsub{\gm}{\dl}\ksup{i}$ come only from $\ztau\ksup{j}$ with indices $j\leq i$, while the factors of $\zsub{\gm}{\tau}\ksup{i}$ and $\zsub{\beta}{\gm}\ksup{i}$ come only from $\ztau\ksup{j}$ or $\zbeta\ksup{j}$ with indices $j>i$.
Ignoring term orders for the moment, we may express generators of $N_k$ as
\begin{equation}
\label{ggmkdfn}
h_\gm\ksup{k}=\xsub{\gm}{G\sm\gm}\zsub{\gm}{G\sm\gm}\ksup{k}\zsub{\gm}{\tau}\ksup{k}-\xsub{G\sm\gm}{\gm}\zsub{G\sm\gm}{\gm}\ksup{k}\zsub{\beta}{\gm}\ksup{k}.
\end{equation}
The generator of $N_{k+1}$ associated to $\gm\subset G$ is
\begin{equation}
\label{ggmkplusonedfn}
g_\gm\ksup{k+1}=\xsub{\gm}{G\sm\gm}\zsub{\gm}{G\sm\gm}\ksup{k+1}\zsub{\gm}{\tau}\ksup{k+1}-\xsub{G\sm\gm}{\gm}\zsub{G\sm\gm}{\gm}\ksup{k+1}\zsub{\beta}{\gm}\ksup{k+1}.
\end{equation}
\noindent The map $\pi_k$ replaces all instances of $\zbeta\ksup{k+1}$ with $\ztau\ksup{k+1}$. Let
\begin{equation}
\label{anyzetadfn}
\anyzeta_\gm = \frac{\zsub{\gm}{\gm}\ksup{k+1}}{\zsub{\gm}{\gm}\ksup{k}}=\begin{cases} \ztau\ksup{k+1} & \text{ if } \ztau\ksup{k+1} \text{ is internal to } \gm \text{ in } G\ksup{k+1} \\ 1 & \text{ otherwise.}\end{cases}
\end{equation}
Then the generators of $\pi_k(N_k)$ and $N_{k+1}$ are related by
\begin{equation}
\label{ggmkvkplusone}
g_\gm\ksup{k}=\pi_k\!\left(h_\gm\ksup{k}\right)=\anyzeta_\gm g_\gm\ksup{k+1}.
\end{equation}
We write $g_\gm^{(i), \out}$ for the monomial of $g_\gm\ksup{i}$ that is a product of edges outgoing from $\gm$ and $g_\gm^{(i), \into}$ for the monomial of $g_\gm\ksup{i}$ that is a product of edges incoming to $\gm$. It follows from Equation~\ref{ggmkvkplusone} that $g_\gm\ksup{k}$ and $g_\gm\ksup{k+1}$ have the same term order with respect to the monomial order on $\cE_{k+1}[\nu]$. If this term order is determined by an outgoing edge variable, then we call $\gm$ \emph{out-led}. If it is determined by an incoming edge variable, we call $\gm$ \emph{in-led}. The labeling of $\gm$ as out-led or in-led depends on $k$ because the monomial ordering depends on $k$. Since we work in $\cE_{k+1}$ throughout the proof of Theorem~\ref{idealqthmintro}, we will always mean in-led and out-led with respect to the term order on $\cE_{k+1}[\nu]$. See Section~\ref{examplesection} for examples of in-led and out-led subsets.
\subsection{Algorithm Overview}
\label{bbrd1ovw}
We now outline the computations that occur as we run Buchberger's Algorithm on $\cG_0$. The flowchart in Figure~\ref{flowchart} summarizes the first round of the algorithm, which produces the basis $\cG^\prime$. In the second round, all S-polynomials reduce to zero within $\cG^\prime$ and the algorithm terminates. Lemma~\ref{buchend} describes the outcome of the algorithm.
Buchberger's Algorithm instructs us to compute S-polynomials among all of the generators in the initial basis $\cG_0$. Initially, this means we have two types of computations: S-polynomials between $\nutopk$ and the $\nu g_\gm\ksup{k}$ and S-polynomials among the $\nu g_\gm\ksup{k}$. These are handled in Sections~\ref{nutoponespolys} and~\ref{subsetspolys}, respectively.
\begin{figure}
\begin{center}
\input{flowchart.tex}
\caption{Round 1 of Buchberger's Algorithm applied to $\cG_0$. Generators added to the working basis $\cG^\prime$ are shown in blue. The ** indicates that various outcomes are produced at that step, but all reduce to zero as indicated, where $\Lambda\in\{\gm\sm(\gm\cap\dl), \dl\sm(\gm\cap\dl), \gm\cap\dl, \gm\cup\dl\}$.}
\label{flowchart}
\end{center}
\end{figure}
\subsubsection{S-polynomials with $\nutopk$}
As the only element of $\cG_0$ that is not divisible by $\nu$, the generator $\nutopk$ plays a special role. Proposition~\ref{gbprinciplecoeff} implies that S-polynomials among the elements of $\cG_0$ divisible by $\nu$ are equal to $\nu$ times an S-polynomial of the underlying generators of $\pi_k(N_k)$. Therefore, the steps of Buchberger's Algorithm on $\cG_0$ that do not involve $\nutopk$ are parallel to the steps of Buchberger's Algorithm applied to the basis for $\pi_k(N_k)$. That is, in the process of running Buchberger's Algorithm on $\cG_0$ we incidentally produce a Gr\"obner basis for $\pi_k(N_k)$ itself, except that every basis element is multiplied by $\nu$. By contrast, the S-polynomials involving $\nutopk$ have no parallel in Buchberger's Algorithm applied to a basis for $\pi_k(N_k)$. They are the only steps of the algorithm that can possibly produce generators that do not involve $\nu$ in any of their terms. The plan, of course, is to discard any generator in which $\nu$ appears (Step 2 of Algorithm~\ref{iqalg}). Therefore, precursors to the $g_\gm\ksup{k+1}$, which we are hoping to find in the ideal quotient, will have to be produced by S-polynomials with $\nutopk$.
The result of $S(\nutopk,\nu g_\gm\ksup{k})$ depends on whether $\ztau\ksup{k+1}$ divides the leading monomial of $g_\gm\ksup{k}$. If not, then the S-polynomial reduces to $\ztau\ksup{k+1}g_\gm\ksup{k+1}$, which is a precursor to $g_\gm\ksup{k+1}$. If it does, then the S-polynomial with $\nutopk$ reverses the term order of $\nu g_\gm\ksup{k}$ by removing $\nu$ from its leading term. We call the resulting polynomial a tilde generator
\begin{equation}
\label{tildegendfn}
\widetilde{g}_\gm=\begin{cases} \nu g_\gm^{(k),\into}-g_\gm^{(k),\out}& \text{ if } \gm \text{ is out-led}\\
\nu g_\gm^{(k),\out}-g_\gm^{(k),\into}& \text{ if } \gm \text{ is in-led.}
\end{cases}
\end{equation}
If the trailing monomial of $g_\gm\ksup{k}$ is also divisible by $\ztau\ksup{k+1}$ (meaning that the edge labeled $\ztau\ksup{k+1}$ goes both into and out of $\gm$ in $G\ksup{k+1}$), then the tilde generator reduces to $\ztau\ksup{k+1} g_\gm\ksup{k+1}$, which belongs in $\cG^\prime$. Otherwise, we add the tilde generator itself to the working basis. We then immediately compute a second S-polynomial $S(\nutopk,\widetilde{g}_\gm)$, which produces $\ztau\ksup{k+1} g_\gm\ksup{k+1}$, so we add that to $\cG^\prime$ as well. At this point, we will have produced $\ztau\ksup{k+1}g_\gm\ksup{k+1}$ for all subsets $\gm$, so we will have confirmed that $\pi_k(N_k):\ztau\ksup{k+1}\supseteq N_{k+1}$. The working basis will be
\begin{align}
\label{workingbasis}
\cG^\prime=\,\cG_0&\cup\left\{\widetilde{g}_\gm\,\vert\,\gm\subset G, \ztau\ksup{k+1}\vert\lt{g_\gm\ksup{k}}, \ztau\ksup{k+1}\nmid\trt{g_\gm\ksup{k}}\right\}\nonumber\\&\cup\left\{\ztau\ksup{k+1}g_\gm\ksup{k+1}\,\vert\,\gm\subset G\right\}.
\end{align}
All remaining computations will be aimed at proving that no further additions to our working basis are required, which will establish that $\pi_k(N_k):\ztau\ksup{k+1}\subseteq N_{k+1}$.
\begin{lemma}
\label{buchend}
The outcome of Algorithm~\ref{buchalg} applied to $\cG_0$ in $\cE_{k+1}[\nu]$ with the monomial order of Definition~\ref{edgeringmo} is
$\cG^\prime.$
In particular, $\cG^\prime$ is a Gr\"obner basis for $\pi_k(N_k)$.
\end{lemma}
\subsubsection{S-polynomials among the $\nu g_\gm\ksup{k}$}
Since S-polynomials with $\nutopk$ produced the precursors to all of the generators of $N_{k+1}$, the hope now is that all remaining S-polynomials reduce to zero via the working basis $\cG^\prime$. Section~\ref{subsetspolys} establishes that this is the case for $S(g_\gm\ksup{k},g_\dl\ksup{k})$ for any pair of subsets $\gm$ and $\dl$.
The linchpin to the argument is Lemma~\ref{subsetoverlap}, which expresses $S(g_\gm\ksup{k},g_\dl\ksup{k})$ in terms of intersections, unions, and complements of $\gm$ and $\dl$. The least common multiples and greatest common divisors of monomials that appear in the S-polynomial formula translate into unions and intersections of sets in $G$. For example, $\gcd(\xsub{\gm}{G\sm\gm},\xsub{\dl}{G\sm\dl})=\xsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}$. This correspondence between operations on monomials and operations on sets in $G$ is what makes it possible to describe the progress of Buchberger's Algorithm in terms of the graph.
Once these convenient expressions for the $S(g_\gm\ksup{k},g_\dl\ksup{k})$ have been obtained, it remains to argue that they may be reduced to zero in $\cG^\prime$. For that, we make liberal use of Observation~\ref{MOanddiv} to analyze and compare term orders of the S-polynomials and the elements of $\cG^\prime.$
\subsubsection{S-polynomials involving generators in $\cG^\prime\setminus\cG_0$}
Out of the initial S-polynomial calculations we produced only two types of generators to include in $\cG^\prime$ that were not already in $\cG_0$: tilde generators for a limited class of subsets and $\ztau\ksup{k+1}g_\gm\ksup{k+1}$ for all subsets. Section~\ref{bbrd2} carries out a final round of computations to check that S-polynomials involving these new generators reduce to zero within $\cG^\prime$. Much of the computational work follows from Section~\ref{bbrd1} combined with the shortcuts of Section~\ref{gbsimplify}. The arguments for the reductions to zero essentially follow from Observation~\ref{MOanddiv}, but require careful case by case analyses of term orders. This work completes the proof of Lemma~\ref{buchend}.
\subsection{Proof of Theorem~\ref{idealqthmintro} from Lemma~\ref{buchend}}
\label{lemmatothmsec}
Once Lemma~\ref{buchend} is established, Theorem~\ref{idealqthmintro} follows readily by applying the rest of Algorithm~\ref{iqalg}.
\begin{proof}[Proof of Theorem~\ref{idealqthmintro} from Lemma~\ref{buchend}]
Having completed Buchberger's Algorithm (Step 1 of Algorithm~\ref{iqalg}) and obtained $\cG^\prime\subset\cE_{k+1}[\nu]$, we must now intersect with $\cE_{k+1}$. Doing so produces
\[\cG^\prime\cap\cE_{k+1}=\left\{\ztau\ksup{k+1}g_\gm\ksup{k+1}\,\vert\,\gm\subset G\right\}\]
as a basis for $\pi_k(N_k)\cap\left(\ztau\ksup{k+1}\right)$ in $\cE_{k+1}$. For the last step of Algorithm~\ref{iqalg}, we divide each element of $\cG^\prime\cap\cE_{k+1}$ by $\ztau\ksup{k+1}$ to obtain
\[\cG_{\text{end}}=\left\{g_\gm\ksup{k+1}\,\vert\,\gm\subset G\right\}\] as our basis for $\pi_k(N_k):\left(\ztau\ksup{k+1}\right)$ in $\cE_{k+1}$. The result is exactly the generating set for $N_{k+1}$ described in Definition~\ref{idealdfn}.
\end{proof}
\section{Example/Illustration}
\label{examplesection}
There are three critical features of our set-up that make it possible to characterize the progress and outcome of Algorithm~\ref{iqalg} as we have done:
\begin{enumerate}
\item The divisor ideal $\left(\ztau\ksup{k+1}\right)$ is principal and monomial, which makes the role of $S(\nutopk,-)$ clear.
\item The S-polynomial of the non-local generators associated to a pair of subsets can be described in terms of operations on subsets.
\item Divisibility by $\ztau\ksup{k+1}$ is closely related to the determination of a polynomial's leading term, as recorded in Observation~\ref{MOanddiv}.
\end{enumerate}
This section aims to illustrate these features by way of the small graphs in Figure~\ref{smallexample}.
\begin{figure}
\begin{center}
\input{smallexample}
\caption{Example for Section~\ref{examplesection}: $G_1$ is on the left; $G_2$ is on the right.}
\label{smallexample}
\end{center}
\end{figure}
\subsection{Set-up}
The graphs $G_1$ and $G_2$ in Figure~\ref{smallexample} are labeled as instructed in Definition~\ref{edgeringmo}. For simplicity, we take them to be blackboard framed. Section~\ref{nonbbframings} contains the set-up for a non-blackboard framed example. The edge rings of $G_1$ and $G_2$, respectively, are
\begin{align*}
&\cE_1=\cR[x_0,x_1,x_2,x_3,x_4,x_5]
&\cE_2=\cR[x_0,x_1,x_2,x_4,x_5].
\end{align*}
It would also be consistent with our notation to say that $x_0=\ztau\ksup{1}$, $x_1=\ztau\ksup{2}$, $x_3=\zbeta\ksup{2}$, $x_4=\ztau\ksup{3}$, and $x_5=\zbeta\ksup{3}$, but we will not need these labels in this section.
The map $\pi_1:\cE_1\to\cE_2$ is the quotient by $(x_1-x_3)$. Let $N_1$ and $N_2$ be the non-local ideals for $G_1$ and $G_2$, respectively. Theorem~\ref{idealqthmintro} claims that \[\pi_1(N_1):(x_1)=N_2\] in $\cE_2$. Algorithm~\ref{iqalg} begins with a basis for $\nu\pi_1(N_1)+(\nu-1)(x_1)$, which Algorithm~\ref{buchalg} will turn into a Gr\"obner basis.
The monomial order on $\cE_2$ is $x_1>x_0>x_2>x_4>x_5$.
All polynomials in this section are written correctly with respect to the monomial order. Let $\gm$ be the upper (right) thick edge and $\dl$ be the lower (left) thick edge. The non-local generators are as follows.
\begin{align*}
g_\gm\ksup{1}&=\pi_1(x_1-x_2)=x_1-x_2 &\quad g_\gm\ksup{2}&=x_1-x_2\\
g_\dl\ksup{1}&=\pi_1(x_2x_4-x_3x_5)=-x_1x_5+x_2x_4 &\quad g_\dl\ksup{2}&=-x_1x_5+x_2x_4\\
g_{\gm\cup\dl}\ksup{1}&=\pi_1(x_1x_4-x_3x_5)=x_1x_4-x_1x_5 &\quad g_{\gm\cup\dl}\ksup{2}&=x_4-x_5
\end{align*}
Notice that $\gm$ and $\gm\cup\dl$ are out-led while $\dl$ is in-led. These labels are to be interpreted with respect to $\cE_2$ and its monomial order. With respect to $\cE_1$, all three of $\gm$, $\dl$, and $\gm\cup\dl$ would be out-led.
We run Buchberger's Algorithm in $\cE_2[\nu]$ with monomial order \[\nu>x_1>x_0>x_2>x_4>x_5\] and starting basis
\[\cG_0=\left\{\nu x_1-x_1,\nu g_\gm\ksup{1},\nu g_\dl\ksup{1},\nu g_{\gm\cup\dl}\ksup{1}\right\}.\]
\subsection{S-polynomials with $\nu x_1-x_1$}
As expected, the S-polynomials with $\nu x_1-x_1$ remove factors of $\nu$, thereby reversing term orders. In two cases, the result is a tilde generator:
\begin{align*}
&S(\nu x_1-x_1, \nu g_\gm\ksup{1})=\nu x_2-x_1=\widetilde{g}_\gm;\\
&S(\nu x_1-x_1, \nu g_\dl\ksup{1})=\nu x_2x_4-x_1x_5=\widetilde{g}_\dl.
\end{align*}
In the third case, the result reduces by $\nu x_1-x_1$ to a precursor of a generator of $N_2$:
\begin{align*}
&S(\nu x_1-x_1, \nu g_{\gm\cup\dl}\ksup{1})=\nu x_1x_5-x_1x_4\xrightarrow{\nu x_1-x_1}-x_1x_4+x_1x_5=-x_1g_{\gm\cup\dl}\ksup{2}.
\end{align*}
We add all three of these outputs to the working basis.
Further S-polynomials between $\nu x_1-x_1$ and the tilde generators produce the remaining precursors to generators of $N_2$, which we also add to the working basis:
\begin{align*}
&S(\nu x_1-x_1, \widetilde{g}_\gm) = x_1^2-x_1x_2 = x_1g_\gm\ksup{2};\\
&S(\nu x_1-x_1, \widetilde{g}_\dl) = x_1^2x_5-x_1x_2x_4 = -x_1g_\dl\ksup{2}.
\end{align*}
As expected, the working basis at this point is
\[\cG^\prime=\left\{\nu x_1-x_1, \nu g_\gm\ksup{1}, \nu g_\dl\ksup{1}, \nu g_{\gm\cup\dl}\ksup{1}, \widetilde{g}_\gm, \widetilde{g}_\dl, x_1g_\gm\ksup{2}, x_1 g_\dl\ksup{2}, x_1 g_{\gm\cup\dl}\ksup{2}\right\}.\]
It contains all of the precursors to the generators of $N_2$ and contains no other polynomials that will survive the intersection with $\cE_2$. The hope is that all remaining S-polynomials will reduce to zero within $\cG^\prime$.
Notice that the choice of monomial order contributed to the efficiency of the calculations in this section. Since $x_1$ determined the leading terms of $g_\gm\ksup{1}$ and $g_\dl\ksup{1}$, these polynomials were efficiently handled by $S(\nu x_1-x_1,-)$. For example, if the term order of $g_\dl\ksup{1}$ had been reversed, $S(\nu x_1-x_1, \nu g_\dl\ksup{1})$ would have produced $x_1\widetilde{g}_\dl$. Keeping the degree of S-polynomials as low as possible helps to keep the size and composition of the working basis under control.
\subsection{S-polynomials among $\nu g_\gm\ksup{1}$, $\nu g_\dl\ksup{1}$, and $\nu g_{\gm\cup\dl}\ksup{1}$}
As expected, these S-polynomials all reduce to zero within the working basis $\cG^\prime$. There are often several ways to reduce one of these S-polynomials. We follow the methods used in the general arguments of Section~\ref{subsetspolys}.
Recall that $\gm$ and $\gm\cup\dl$ are out-led, while $\dl$ is in-led. The S-polynomials among $\nu g_\gm\ksup{1}$, $\nu g_\dl\ksup{1}$, and $\nu g_{\gm\cup\dl}\ksup{1}$ can be computed from the expressions in Proposition~\ref{ksubsetoverlap} using their relationships to the non-local generators of $N_2$, along with Proposition~\ref{gbprinciplecoeff}. For example:
\begin{align*}
S(\nu g_\gm\ksup{1}, \nu g_\dl\ksup{1})
&=S(\nu g_\gm\ksup{2}, \nu g_\dl\ksup{2})\\
&=\nu S(g_\gm\ksup{2}, g_\dl\ksup{2})\quad\text{by Prop.~\ref{gbprinciplecoeff},}\\
&=\nu \xsub{\dl}{\gm}g_{\gm\cup\dl}\ksup{2}\quad\text{by Prop.~\ref{ksubsetoverlap}, with roles of $\gm$ and $\dl$ reversed,}\\
&=\nu x_2(x_4-x_5)
\end{align*}
Similar computations give
\begin{align*}
S(\nu g_\gm\ksup{1}, \nu g_{\gm\cup\dl}\ksup{1}) &=-\nu g_\dl\ksup{2}\\
&=\nu x_1x_5 - \nu x_2x_4\\
S(\nu g_\dl\ksup{1}, \nu g_{\gm\cup\dl}\ksup{1}) &= \nu\left(g_{\gm\cup\dl}^{(2),\out}g_\dl^{(2),\out}-g_{\gm\cup\dl}^{(2),\into}g_\dl^{(2),\into}\right)\\
&= \nu x_1x_5^2 - \nu x_2x_4^2.
\end{align*}
All three of these S-polynomials reduce as described in Lemma~\ref{subsetoverlap}. To determine which case is relevant, notice that $x_1$ is internal to $\gm\cup\dl$ and not to $\gm$ or $\dl$, and that $x_1$ divides exactly one term of $g_\gm\ksup{1}$ and of $g_\dl\ksup{1}$. Therefore:
\begin{align*}
&S(\nu g_\gm\ksup{1}, \nu g_\dl\ksup{1})\xrightarrow{\widetilde{g}_\dl,\widetilde{g}_\gm}0\quad\text{by Case 3 in the proof of Lemma~\ref{subsetoverlap};}\\
&S(\nu g_\gm\ksup{1}, \nu g_{\gm\cup\dl}\ksup{1})\xrightarrow{\nu g_\dl\ksup{1}}0\quad\text{by Case 2(b) in the proof of Lemma~\ref{subsetoverlap}; and}\\
&S(\nu g_\dl\ksup{1}, \nu g_{\gm\cup\dl}\ksup{1}) \xrightarrow{\nu x_1-x_1, \widetilde{g}_\dl, x_1g_{\gm\cup\dl}\ksup{2}}0\quad\text{by Case 2(b).}
\end{align*}
Note also that the full generality of Case 2(b) is not needed in the second computation above because $g_\dl\ksup{2}=g_\dl\ksup{1}$.
We have now computed all of the S-polynomials among generators in the original basis $\cG_0$. Although reducing these S-polynomials by hand was straightforward enough, the computations might seem rather ad hoc, and would seem more so if we had considered all of the alternative ways to reduce each S-polynomial. Standardizing the reduction procedures is crucial to being able to generalize to arbitrary pairs of subsets in arbitrary graphs. In turn, characterizing the output of these S-polynomials in terms of operations on subsets is crucial to standardizing the reduction procedures. This, in brief, is the content of Section~\ref{subsetspolys}.
\subsection{Remaining S-polynomials}
It remains to check that S-polynomials involving elements of $\cG^\prime\setminus\cG_0$ reduce to zero within $\cG^\prime$. We leave it to the unusually detail-oriented reader to confirm the necessary calculations, but state the results with references to the relevant arguments in Section~\ref{bbrd2}.
It may seem that many of the calculations in this section are redundant. For example, the various S-polynomials involving generators associated to $\gm$ and $\dl$ almost all produce a multiple of $g_{\gm\cup\dl}\ksup{2}$. However, $g_{\gm\cup\dl}\ksup{2}$ itself is not in the working basis, hence not available for reductions. Therefore, the multiple makes a difference: a factor of $\nu$ may allow us to reduce by $\nu g_{\gm\cup\dl}\ksup{1}$, while a factor of $x_1$ may allow us to reduce by $x_1g_{\gm\cup\dl}\ksup{2}$. Sometimes, as for $S(\nu g_\gm\ksup{1}, \nu g_\dl\ksup{1})$ above, these simple reductions are impossible. We must turn instead to tilde generators or $\nu x_1-x_1$. So, despite the similarity of the remaining S-polynomial calculations, the reduction arguments are delicate.
It is also worth noting that S-polynomials involving tilde generators do not behave like the S-polynomials involving their non-tilde counterparts. The generator $\widetilde{g}_\gm$ is effectively $\nu g_\gm\ksup{1}$ with its term order reversed. Therefore, $S(\widetilde{g}_\gm,-)$ bears little relation to $S(\nu g_\gm\ksup{1},-)$. For example, $S(\nu g_\gm\ksup{1},\nu g_{\gm\cup\dl}\ksup{1})$ reduced via $g_\dl\ksup{1}$ while $S(\widetilde{g}_\gm, \nu g_{\gm\cup\dl}\ksup{1})$ reduces via $x_1g_{\gm\cup\dl}\ksup{2}$ and $x_1g_\gm\ksup{2}$.
The argument at the beginning of Section~\ref{bbrd2}, referring to Statement (6) of Proposition~\ref{nutopone}, takes care of S-polynomials involving $\nu x_1-x_1$ and elements of $\cG^\prime\setminus\cG_0$. It confirms that
$S(\nu x_1-x_1,x_1g_\Lambda\ksup{2})\xrightarrow{\nu x_1-x_1,x_1g_\Lambda\ksup{2}}0
$
for $\Lambda\in\left\{\gm,\dl,\gm\cup\dl\right\}$.
The next argument in Section~\ref{bbrd2}, which refers to Proposition~\ref{gbprinciplecoeff} and Lemma~\ref{subsetoverlap}, concerns S-polynomials involving pairs of elements of the form $x_1g_\Lambda\ksup{2}$. It confirms that
\begin{align*}
&S(x_1g_\gm\ksup{2},x_1g_\dl\ksup{2})\xrightarrow{x_1g_{\gm\cup\dl}\ksup{2}}0;\\
&S(x_1g_\gm\ksup{2},x_1g_{\gm\cup\dl}\ksup{2})\xrightarrow{x_1g_{\dl}\ksup{2}}0; \text{ and}\\
&S(x_1g_\dl\ksup{2},x_1g_{\gm\cup\dl}\ksup{2})\xrightarrow{x_1g_\dl\ksup{2},x_1g_{\gm\cup\dl}\ksup{2}}0.
\end{align*}
Lemma~\ref{2nunonu} concerns S-polynomials involving one generator of the form $\nu g_\Lambda\ksup{1}$ and one of the form $x_1g_\Lambda\ksup{2}$. It applies to
\begin{align*}
&S(\nu g_\gm\ksup{1},x_1 g_\gm\ksup{2})=0 \quad\text{(Case 3, but not in its full generality);}\\
&S(\nu g_\gm\ksup{1},x_1 g_\dl\ksup{2})\xrightarrow{\nu g_{\gm\cup\dl}\ksup{1}}0 \quad\text{(Case 3, but not in its full generality); and}\\
&S(\nu g_\gm\ksup{1},x_1 g_{\gm\cup\dl}\ksup{2})=S(\nu g_\gm\ksup{1}, \nu g_{\gm\cup\dl}\ksup{1})\xrightarrow{\nu g_{\dl}\ksup{1}}0 \quad\text{(before the break-down to cases).}\\
\end{align*}
Similarly, it applies to
\begin{align*}
&S(\nu g_\dl\ksup{1},x_1 g_\gm\ksup{2})\xrightarrow{\nu g_{\gm\cup\dl}\ksup{1}}0 \quad\text{(Case 3, but not in its full generality);}\\
&S(\nu g_\dl\ksup{1},x_1 g_\dl\ksup{2})=0 \quad\text{(Case 3, but not in its full generality); and}\\
&S(\nu g_\dl\ksup{1},x_1 g_{\gm\cup\dl}\ksup{2})=S(\nu g_\dl\ksup{1}, \nu g_{\gm\cup\dl}\ksup{1})\xrightarrow{\nu x_1-x_1,\widetilde{g}_\dl,x_1g_{\gm\cup\dl}\ksup{2}}0 \\&\text{(before the break-down to cases).}
\end{align*}
Finally, it applies to
\begin{align*}
&S(\nu g_{\gm\cup\dl}\ksup{1},x_1 g_\gm\ksup{2})\xrightarrow{x_1 g_{\dl}\ksup{2}}0 \quad\text{(Case 1);}\\
&S(\nu g_{\gm\cup\dl}\ksup{1},x_1 g_\dl\ksup{2})\xrightarrow{x_1g_\dl\ksup{2},x_1g_{\gm\cup\dl}\ksup{2}}0 \quad\text{(Case 1); and}\\
&S(\nu g_{\gm\cup\dl}\ksup{1},x_1 g_{\gm\cup\dl}\ksup{2})=0 \quad\text{(before the break-down to cases).}
\end{align*}
Lemma~\ref{tildeprop} concerns S-polynomials between tilde generators and generators of the form $\nu g_\Lambda\ksup{1}$. Case 1 applies to
\begin{align*}
&S(\nu g_{\gm\cup\dl}\ksup{1},\widetilde{g}_\gm)\xrightarrow{\nu x_1-x_1, x_1g_{\gm\cup\dl}\ksup{2}, x_1g_\gm\ksup{2}}0\quad\text{and}\\
&S(\nu g_{\gm\cup\dl}\ksup{1},\widetilde{g}_\dl)\xrightarrow{\nu x_1-x_1, x_1g_{\gm}\ksup{2}}0.
\end{align*}
Case 2 applies to
\begin{align*}
&S(\nu g_\gm\ksup{1}, \widetilde{g}_\gm)\xrightarrow{\widetilde{g}_\gm,x_1g_\gm\ksup{2}}0;\\
&S(\nu g_\gm\ksup{1}, \widetilde{g}_\dl)\xrightarrow{\widetilde{g}_\gm,x_1g_\dl\ksup{2}}0;\\
&S(\nu g_\dl\ksup{1}, \widetilde{g}_\gm)\xrightarrow{\widetilde{g}_\dl,x_1g_\gm\ksup{2}}0;\text{ and}\\
&S(\nu g_\dl\ksup{1}, \widetilde{g}_\dl)\xrightarrow{\widetilde{g}_\dl,x_1g_\dl\ksup{2}}0.
\end{align*}
Finally, we have
\begin{align*}
&S(\widetilde{g}_\gm,\widetilde{g}_\dl)\xrightarrow{x_1g_{\gm\cup\dl}\ksup{2}}0\quad\text{by Lemma~\ref{doubletilde} and}\\
&S(\widetilde{g}_\dl,x_1g_{\gm\cup\dl}\ksup{2})\xrightarrow{\nu x_1-x_1,x_1g_\dl\ksup{2},x_1g_{\gm\cup\dl}\ksup{2}}0\quad\text{by Lemma~\ref{ztautilde}.}
\end{align*}
The remaining pairs of elements in $\cG^\prime$ involving at least one element of $\cG^\prime\setminus\cG_0$ have no common divisors in their leading monomials, so their S-polynomials reduce to zero by Proposition~\ref{gbprinciplegcd}.
\subsection{Calculating the ideal quotient}
We have checked that all S-polynomials among elements of $\cG^\prime$ reduce to zero within $\cG^\prime$. Therefore, Buchberger's Algorithm has terminated and $\cG^\prime$ is a Gr\"obner basis for $\nu\pi_1(N_1)+(\nu-1)(x_1)$. That is, we have verified Lemma~\ref{buchend} for this example. We now intersect with $\cE_2$ to obtain a basis for $\pi_1(N_1)\cap(x_1)$:
\[\cG^\prime\cap\cE_2=\left\{x_1g_\gm\ksup{2},x_1g_\dl\ksup{2},x_1g_{\gm\cup\dl}\ksup{2}\right\}.\] Divide each of these generators by $x_1$ to obtain a basis for the quotient
\[\pi_1(N_1):(x_1)=\left(g_\gm\ksup{2},g_\dl\ksup{2},g_{\gm\cup\dl}\ksup{2}\right).\] This basis is the defining basis for $N_2$.
\section{Implementing Buchberger's Algorithm: Round 1}
\label{bbrd1}
As we compute S-polynomials, we will record the results in tables showing the propositions used and whether the result of the S-polynomial was added to the working basis. Table~\ref{round1table} records the S-polynomials we compute in this section.
\begin{table}[htbp]
\begin{center}
\renewcommand\arraystretch{1.5}
\begin{tabular}{|c|c|c|c|}
\hline
$S\!\left(-,-\right)$ & Result & Proposition & Add to $\cG^\prime$? \\\hline\hline
$S\!\left(\nu g_\gm\ksup{k},\nutopk\right)$ & $\ztau\ksup{k+1} g_\gm\ksup{k+1}$ or $\widetilde{g}_\gm$ & Prop.~\ref{nutopone} & yes \\\hline
$S\!\left(\widetilde{g}_\gm,\nutopk\right)$ & $\ztau\ksup{k+1} g_\gm\ksup{k+1}$ & Prop.~\ref{nutopone} & yes \\\hline
$S\!\left(\nu g_\gm\ksup{k},\nu g_\dl\ksup{k}\right)$ & 0 & Lemma~\ref{subsetoverlap} & no \\\hline
\end{tabular}
\end{center}\medskip
\caption{S-polynomials Round 1: All computations are assumed to be among generators in $\cG^\prime$ and are carried out in $\cE_{k+1}[\nu]$. S-polynomials are listed in the order they are computed in Section~\ref{bbrd1}.}
\label{round1table}
\end{table}
\subsection{S-polynomials with $\nutopk$}
\label{nutoponespolys}
We begin by describing the behavior of $S(\nutopk,-)$ with respect to various types of polynomials in $\cE_{k+1}[\nu]$.
\begin{prop}
\label{nutopone}
Let $f\in\cE_{k+1}$ and $f=\lt{f}+\overline{f}$.
If $\gcd(\lm{f},\ztau\ksup{k+1})=1$, then
\begin{align}
\tag{1}\label{nutop2nunodiv}&S(\nutopk,\nu f)\xrightarrow{\nutopk}-\frac{\ztau\ksup{k+1} f}{\lc{f}}\\
\tag{2}\label{nutop1nunodiv}&S(\nutopk, \nu\lt{f} + \overline{f})=-\frac{\ztau\ksup{k+1} f}{\lc{f}}\\
\tag{3}\label{nutop0nunodiv}&S(\nutopk, f)\xrightarrow{\nutopk}-\frac{\ztau\ksup{k+1} f}{\lc{f}}
\end{align}
\noi If $\gcd(\lm{f},\ztau\ksup{k+1})=\ztau\ksup{k+1}$, then
\begin{align}
\tag{4}\label{nutop2nu}&S(\nutopk,\nu f)=-\frac{1}{\lc{f}}\left(\nu \overline{f}+\lt{f}\right)\\
\tag{5}\label{nutop1nu}&S(\nutopk, \nu\lt{f}+\overline{f})=-\frac{f}{\lc{f}}\\
\tag{6}\label{nutop0nu}&S(\nutopk, f)=-\frac{1}{\lc{f}}\left(\nu \overline{f}+\lt{f}\right)
\end{align}
\end{prop}
\begin{proof}
The least common multiple of the leading monomials in the first three cases is $\nu\ztau\ksup{k+1}\lm{f}$. We calculate the first S-polynomial above as follows.
\begin{align*}
S(\nutopk,\nu f)&=\frac{\nu\ztau\ksup{k+1}\lm{f}}{\nu\ztau\ksup{k+1}}\left(-\ztau\ksup{k+1}\right)-\frac{\nu\ztau\ksup{k+1}\lm{f}}{\nu\lt{f}}\left(\nu \overline{f}\right)\\
&=-\frac{\nu\ztau\ksup{k+1} \overline{f}}{\lc{f}}-\ztau\ksup{k+1}\lm{f}\quad\quad\quad\text{LT determined by }\nu\\
\textrm{reduce}\quad&+\frac{\overline{f}}{\lc{f}}\left(\nutopk\right)\\
&=-\frac{\ztau\ksup{k+1} f}{\lc{f}}
\end{align*}
The second and third claims come from similar calculations.
In the latter three cases, the least common multiple of the leading monomials is $\nu\lm{f}$. Given this, we calculate as follows.
\begin{align*}
S(\nutopk,\nu f)=&\frac{\nu\lm{f}}{\nu\ztau\ksup{k+1}}\left(-\ztau\ksup{k+1}\right)-\frac{\nu\lm{f}}{\nu\lt{f}}\left(\nu \overline{f}\right)&\\
=&-\frac{\nu \overline{f}}{\lc{f}}-\lm{f}\quad\quad\quad\quad\quad\text{LT determined by }\nu\\
\end{align*}
The fifth and sixth cases are similar.
\end{proof}
We apply Proposition~\ref{nutopone} to compute $S(\nutopk,\nu g_\gm\ksup{k})$ and see which new generators must be added to the working basis, keeping in mind that leading coefficients are currently assumed to be 1. See the flowchart in Figure~\ref{flowchart}. If $\ztau\ksup{k+1}$ does not divide the leading term of $g_\gm\ksup{k}$, then Statement~(\ref{nutop2nunodiv}) of Proposition~\ref{nutopone} applies, so $S(\nutopk,\nu g_\gm\ksup{k})\xrightarrow{\nutopk}-\ztau\ksup{k+1}g_\gm\ksup{k}$. Since $\ztau\ksup{k+1}$ does not divide both terms of $g_\gm\ksup{k}$, we have $g_\gm\ksup{k}=g_\gm\ksup{k+1}$. So we may say that it is $\ztau\ksup{k+1}g_\gm\ksup{k+1}$ that should be added to the working basis.
If $\ztau\ksup{k+1}$ does divide the leading term of $g_\gm\ksup{k}$, then Statement~(\ref{nutop2nu}) of Proposition~\ref{nutopone} applies, and $S(\nutopk,\nu g_\gm\ksup{k}) = -\widetilde{g}_\gm$. Recall from Equation~\ref{tildegendfn} that
\begin{equation*}
\widetilde{g}_\gm=\begin{cases} \nu g_\gm^{(k),\into}-g_\gm^{(k),\out}& \text{ if } \gm \text{ is out-led}\\
\nu g_\gm^{(k),\out}-g_\gm^{(k),\into}& \text{ if } \gm \text{ is in-led.}
\end{cases}
\end{equation*}
If $\ztau\ksup{k+1}$ also divides the trailing term of $g_\gm\ksup{k}$, then $\widetilde{g}_\gm$ reduces via $\nutopk$ to leave $g_\gm\ksup{k}$. In this case, $\ztau\ksup{k+1}$ must have been both an outgoing and an incoming edge to $\gm$ in $G\ksup{k+1}$, so $g_\gm\ksup{k}=\ztau\ksup{k+1} g_\gm\ksup{k+1}$. Therefore, if $\ztau\ksup{k+1}$ divides $g_\gm\ksup{k}$, we end up adding $\ztau\ksup{k+1} g_\gm\ksup{k+1}$ and not $\widetilde{g}_\gm$ to the working basis. These results are recorded in Table~\ref{round1table}.
The only case in which we have added $\widetilde{g}_\gm$ and not $\ztau\ksup{k+1}g_\gm\ksup{k+1}$ to $\cG^\prime$ is when $\ztau\ksup{k+1}$ divides the leading term but not the trailing term of $g_\gm\ksup{k}$. For convenience, we immediately compute S-polynomials $S(\nutopk, \widetilde{g}_\gm)$ in this case: Statement~(\ref{nutop1nunodiv}) of Proposition~\ref{nutopone} implies that $S(\nutopk,\widetilde{g}_\gm)=\ztau\ksup{k+1}g_\gm\ksup{k}$. Since $\ztau\ksup{k+1}$ divided only one term of $g_\gm\ksup{k}$, we also have $g_\gm\ksup{k}=g_\gm\ksup{k+1}$. Therefore, we may record that we are adding $\ztau\ksup{k+1} g_\gm\ksup{k+1}$ to the working basis in this case as well.
Putting all of this together, we have produced $\ztau\ksup{k+1}g_\gm\ksup{k+1}$ for all $\gm$. The working basis is now
\begin{align*}
\cG^\prime=\,\cG_0&\cup\left\{\widetilde{g}_\gm\,\vert\,\gm\subset G, \ztau\ksup{k+1}\vert\lt{g_\gm\ksup{k}}, \ztau\ksup{k+1}\nmid\trt{g_\gm\ksup{k}}\right\}\nonumber\\&\cup\left\{\ztau\ksup{k+1}g_\gm\ksup{k+1}\,\vert\,\gm\subset G\right\}.
\end{align*}
\subsection{S-polynomials among the $\nu g_\gm\ksup{k}$}
\label{subsetspolys}
Our goal in this section is to describe the results of S-polynomials among generators of the form $\nu g_\gm\ksup{k}$ in terms of generators associated to related subsets. We first establish a general principle that will allow us to tackle products of interior edges (i.e., monomials labeled $x$) separately from products of boundary and closure edges (i.e., monomials labeled $z$).
\begin{prop}
\label{separateinteriorclosure}
Let $f_x,\overline{f}_x,f_z,\overline{f}_z,g_x,\overline{g}_x,g_z,\overline{g}_z\in\mathbb{Q}[\underline{x}^\prime]$ be monomials with the property that any monomial with an $x$ subscript is relatively prime to any monomial with a $z$ subscript. Let $S(f_x+\overline{f}_x,g_x+\overline{g}_x)_1$ and $S(f_x+\overline{f}_x,g_x+\overline{g}_x)_2$ denote the first and second terms of $S(f_x+\overline{f}_x,g_x+\overline{g}_x)$ as written in the definition of S-polynomial in Section~\ref{gbbackground}, not necessarily with respect to the monomial order, and similarly for $S(f_z+\overline{f}_z,g_z+\overline{g}_z)$. Then
\begin{align*}
S(f_xf_z+\overline{f}_x \overline{f}_z, g_xg_z+\overline{g}_x \overline{g}_z)&= S(f_x+\overline{f}_x,g_x+\overline{g}_x)_1S(f_z+\overline{f}_z,g_z+\overline{g}_z)_1\\
&\quad- S(f_x+\overline{f}_x,g_x+\overline{g}_x)_2S(f_z+\overline{f}_z,g_z+\overline{g}_z)_2\\
&\xrightarrow{S(f_x+\overline{f}_x,g_x+\overline{g}_x),S(f_z+\overline{f}_z,g_z+\overline{g}_z)}0
\end{align*}
\end{prop}
\begin{proof}
The assumptions about greatest common divisors among the monomials mean that
$$\lcm{f_xf_z,g_xg_z}=\frac{f_xf_zg_xg_z}{\gcd(f_x,g_x)\gcd(f_z,g_z)}=\lcm{f_x,g_x}\lcm{f_z,g_z}.$$
The S-polynomial calculations proceed as follows.
\begin{align*}
&S(f_xf_z+\overline{f}_x \overline{f}_z, g_xg_z+\overline{g}_x \overline{g}_z)=\\
&\quad\frac{g_xg_z}{\gcd(f_x,g_x)\gcd(f_z,g_z)}\overline{f}_x \overline{f}_z-\frac{f_xf_z}{\gcd(f_x,g_x)\gcd(f_z,g_z)}\overline{g}_x \overline{g}_z\\
&\quad=\frac{g_x}{\gcd(f_x,g_x)}\overline{f}_x\frac{g_z}{\gcd(f_z,g_z)}\overline{f}_z-\frac{f_x}{\gcd(f_x,g_x)}\overline{g}_x\frac{f_z}{\gcd(f_z,g_z)}\overline{g}_z
\end{align*}
The term order in this expression is not clear. However, the first term is a product of the first terms of $S(f_x+\overline{f}_x,g_x+\overline{g}_x)$ and $S(f_z+\overline{f}_z,g_z+\overline{g}_z)$ and the second term is a product of their second terms, assuming everything is written as in Definition~\ref{spolydfn}, not necessarily with respect to the monomial order. Regardless of the correct term order, Proposition~\ref{unorderedreduction} shows that expressions of this form reduce to zero by their constituent parts.
\end{proof}
\begin{prop}
\label{subsetoverlapinterior}
Let $\gm, \dl\subset G$. Assume term orders of the S-polynomial input are as written. Then the following statements hold in any $\cE_i$.
\begin{align*}\tag{1}\label{subsetinteriorout}
S(\xsub{\gm}{G\sm\gm}&-\xsub{G\sm\gm}{\gm},\xsub{\dl}{G\sm\dl}-\xsub{G\sm\dl}{\dl})\\
&=\, \xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\left(x_{\dl\sm(\gm\cap\dl)}^{\out}x_{\gm\sm(\gm\cap\dl)}^{\into}-x_{\gm\sm(\gm\cap\dl)}^{\out}x_{\dl\sm(\gm\cap\dl)}^{\into}\right)\\
\tag{2}\label{subsetinteriorin}
S(\xsub{G\sm\gm}{\gm}&-\xsub{\gm}{G\sm\gm},\xsub{G\sm\dl}{\dl}-\xsub{\dl}{G\sm\dl})\\
&=\, \xsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\left(x_{\dl\sm(\gm\cap\dl)}^{\into}x_{\gm\sm(\gm\cap\dl)}^{\out}-x_{\gm\sm(\gm\cap\dl)}^{\into}x_{\dl\sm(\gm\cap\dl)}^{\out}\right)\\
\tag{3}\label{subsetinteriormix}
S(\xsub{G\sm\gm}{\gm}&-\xsub{\gm}{G\sm\gm},\xsub{\dl}{G\sm\dl}-\xsub{G\sm\dl}{\dl})\\
&=\,\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\left(x_{\gm\cup\dl}^\out x_{\gm\cap\dl}^\out
-x_{\gm\cup\dl}^\into x_{\gm\cap\dl}^\into\right)
\end{align*}
\noindent The term orders of the results in the first two statements are undetermined in general.
\end{prop}
\begin{proof}
This proof is mainly a long calculation. It holds in any $\cE_i$ because it involves only interior edges and the projection of edge rings $\pi_i$ is the identity when restricted to the subring of $\cE_i$ generated by such edges. The outcome of the calculation in all cases relies on the fact that least common multiples and greatest common divisors of monomials behave in the same way as union and intersection of subsets. Statement~(\ref{subsetinteriorin}) follows from Statement~(\ref{subsetinteriorout}) by taking complements, so we exhibit the calculation only in the first and last cases. \smallskip
\noindent\emph{Case 1:}
The greatest common divisor of the leading monomials is $\xsub{\gd{\cap}}{G\sm(\gd{\cup})}$ so the least common multiple is
$$
\frac{\xsub{\gm}{G\sm\gm}\xsub{\dl}{G\sm\dl}}
{\xsub{\gd{\cap}}{G\sm(\gd{\cup})}}.
$$
The least common multiple divided by each leading term is
\begin{align*}
&\frac{\xsub{\gm}{G\sm\gm}\xsub{\dl}{G\sm\dl}}{\xsub{\gd{\cap}}{G\sm(\gd{\cup})}\xsub{\gm}{G\sm\gm}}
=\xsub{\dl\sm(\gd{\cap})}{G\sm(\gd{\cup})}\xsub{\dl}{\gm\sm(\gd{\cap})}\\
&\frac{\xsub{\gm}{G\sm\gm}\xsub{\dl}{G\sm\dl}}{\xsub{\gd{\cap}}{G\sm(\gd{\cup})}\xsub{\dl}{G\sm\dl}}
=\xsub{\gm\sm(\gd{\cap})}{G\sm(\gd{\cup})}\xsub{\gm}{\dl\sm(\gd{\cap})}
\end{align*}
We may now compute the S-polynomial. Expanding, then regrouping produces the form claimed in the proposition. Term order is unknown throughout.
\begin{align*}
&S(\xsub{\gm}{G\sm\gm}-\xsub{G\sm\gm}{\gm},\xsub{\dl}{G\sm\dl}-\xsub{G\sm\dl}{\dl})\\
&=\xsub{\dl\sm(\gd{\cap})}{G\sm(\gd{\cup})}\xsub{\dl}{\gm\sm(\gd{\cap})}\xsub{G\sm\gm}{\gm}
-\xsub{\gm\sm(\gd{\cap})}{G\sm(\gd{\cup})}\xsub{\gm}{\dl\sm(\gd{\cap})}\xsub{G\sm\dl}{\dl}\\
&=\xsub{\dl\setminus(\gm\cap\dl)}{G\setminus(\gm\cup\dl)}
\xsub{\dl\setminus(\gm\cap\dl)}{\gm\setminus(\gm\cap\dl)}
\xsub{\gm\cap\dl}{\gm\setminus(\gm\cap\dl)}\\
&\quad\cdot
\xsub{\dl\setminus(\gm\cap\dl)}{\gm\setminus(\gm\cap\dl)}
\xsub{G\setminus(\gm\cup\dl)}{\gm\setminus(\gm\cap\dl)}
\xsub{\dl\setminus(\gm\cap\dl)}{\gm\cap\dl}
\xsub{G\setminus(\gm\cup\dl)}{\gm\cap\dl}\\
&-
\xsub{\gm\setminus(\gm\cap\dl)}{G\setminus(\gm\cup\dl)}
\xsub{\gm\setminus(\gm\cap\dl)}{\dl\setminus(\gm\cap\dl)}
\xsub{\gm\cap\dl}{\dl\setminus(\gm\cap\dl)}\\
&\quad\cdot
\xsub{\gm\setminus(\gm\cap\dl)}{\dl\setminus(\gm\cap\dl)}
\xsub{G\setminus(\gm\cup\dl)}{\dl\setminus(\gm\cap\dl)}
\xsub{\gm\setminus(\gm\cap\dl)}{\gm\cap\dl}
\xsub{G\setminus(\gm\cup\dl)}{\gm\cap\dl}\\\smallskip
&=\xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}(\xsub{\dl\sm(\gm\cap\dl)}{G\sm\dl}\xsub{\gm\cap\dl}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\sm(\gm\cap\dl)}\xsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}\\
&-\xsub{\gm\sm(\gm\cap\dl)}{G\sm\gm}\xsub{\gm\cap\dl}{\dl\sm(\gm\cap\dl)}\xsub{G\sm\dl}{\dl\sm(\gm\cap\dl)}\xsub{\gm\sm(\gm\cap\dl)}{\gm\cap\dl})\\
&=\xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\left(x_{\dl\sm(\gm\cap\dl)}^{\out}x_{\gm\sm(\gm\cap\dl)}^{\into}-x_{\gm\sm(\gm\cap\dl)}^{\out}x_{\dl\sm(\gm\cap\dl)}^{\into}\right)
\end{align*}
\noindent\emph{Case 3:}
The greatest common divisor of leading monomials in this case is the product of the edges that go from $\dl\sm(\gm\cap\dl)$ to $\gm\sm(\gm\cap\dl)$. The S-polynomial removes those edges, which are internal to $\gm\cup\dl$, while combining the incoming edges of $\gm$ with those of $\dl$ and the outgoing edges of $\gm$ with those of $\dl$. Specifically, the greatest common divisor of the leading monomials is
$
\xsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)},$
so the least common multiple is
\begin{align*}
&\frac{\xsub{G\sm\gm}{\gm}\xsub{\dl}{G\sm\dl}}{\xsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}}\\
&=\xsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}
\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\cap\dl}
\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}
\end{align*}
and the S-polynomial calculation is as follows.
\begin{align}
\nonumber S(\xsub{G\sm\gm}{\gm}-\xsub{\gm}{G\sm\gm},&\xsub{\dl}{G\sm\dl}-\xsub{G\sm\dl}{\dl})=
\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}
\cdot\xsub{\gm}{G\sm\gm}\\
\label{altfactorzn}&\phantom{\xsub{\dl}{G\sm\dl}-\xsub{G\sm\dl}{\dl})}-\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\cap\dl}
\cdot\xsub{G\sm\dl}{\dl}\\
\nonumber =&\,
\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{\gm\sm(\gm\cap\dl)}\\
\nonumber \cdot\,&\xsub{\gm\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\xsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{\dl\sm(\gm\cap\dl)}\\
\nonumber -&\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\xsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}\\
\nonumber \cdot\,&\xsub{G\sm(\gm\cup\dl)}{\dl\sm(\gm\cap\dl)} \xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl} \xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)} \xsub{\gm\sm(\gm\cap\dl)}{\gm\cap\dl}\\
\nonumber =&\,\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}
\nonumber (\xsub{\gm\cup\dl}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm(\gm\cap\dl)}\\
\nonumber &\,-\xsub{G\sm(\gm\cup\dl)}{\gm\cup\dl}\xsub{G\sm(\gm\cap\dl)}{\gm\cap\dl})
\end{align}
\end{proof}
So far, we have established that S-polynomials of the interior edge portions of the $g_\gm\ksup{k}$ can always be written in terms of the interior edge portions of generators of the same form associated to unions, intersections, and complements of the original subsets. The next task is to consider the boundary and closure edge portions of the $g_\gm\ksup{k}$. In anticipation of S-polynomials that will need to be computed later in the algorithm, we carry out the necessary computations for the boundary and closure edge portions of $g\ksup{k+1}_\gm\in\cE_{k+1}$, which have the form $\zsub{\gm}{G\sm\gm}\ksup{k+1}\zsub{\gm}{\tau}\ksup{k+1}-\zsub{G\sm\gm}{\gm}\ksup{k+1}\zsub{\beta}{\gm}\ksup{k+1}$. In each of these terms, the first factor is divisible by $\ztau\ksup{i}$ only if $i\leq k+1$ and the second only if $i>k+1$, so the two factors are relatively prime. In light of Proposition~\ref{separateinteriorclosure}, we may separate the S-polynomial computations for products of boundary edges like $\zsub{\gm}{\tau}\ksup{k+1}$ from those for products of closure edges like $\zsub{\gm}{G\sm\gm}\ksup{k+1}$. For the closure edges, the computations will be identical to those we did for interior edges, but we state the results below for completeness. For the boundary edges, the computations are slightly different, but still straightforward. They are outlined in Proposition~\ref{subsetoverlapboundary}.
\begin{prop}
\label{subsetoverlapclosure}
Let $\gm, \dl\subset G$. Assume term orders of the S-polynomial input are as written. The following equalities hold in $\cE_{k+1}$.
\begin{align*}
\tag{1}\label{subsetclosureout}
S(\zsub{\gm}{G\sm\gm}\ksup{k+1}&-\zsub{G\sm\gm}{\gm}\ksup{k+1}, \,
\zsub{\dl}{G\sm\dl}\ksup{k+1}-\zsub{G\sm\dl}{\dl}\ksup{k+1})
=\zsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\ksup{k+1}\\
&\cdot\left(\zsub{\dl\sm(\gm\cap\dl)}{G\sm\dl}\ksup{k+1}\zsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}\ksup{k+1}
\zsub{\gm\cap\dl}{\gm\sm(\gm\cap\dl)}\ksup{k+1}\zsub{G\sm\gm}{\gm\sm(\gm\cap\dl)}\ksup{k+1}
\right. \\
&-
\left.\zsub{\gm\sm(\gm\cap\dl)}{\gm\cap\dl}\ksup{k+1}\zsub{\gm\sm(\gm\cap\dl)}{G\sm\gm}\ksup{k+1}\zsub{\gm\cap\dl}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\zsub{G\sm\dl}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\right)\text{;}\\
\tag{2}\label{subsetclosurein}
S(\zsub{G\sm\gm}{\gm}\ksup{k+1}&-\zsub{\gm}{G\sm\gm}\ksup{k+1},
\,\zsub{G\sm\dl}{\dl}\ksup{k+1}-\zsub{\dl}{G\sm\dl}\ksup{k+1})
=\zsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\ksup{k+1}\\
&\cdot\left(\zsub{\gm\cap\dl}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\zsub{G\sm\dl}{\dl\sm(\gm\cap\dl)}\ksup{k+1}
\zsub{\gm\sm(\gm\cap\dl)}{\gm\cap\dl}\ksup{k+1}\zsub{\gm\sm(\gm\cap\dl)}{G\sm\gm}\ksup{k+1}\right.\\
&-
\left.\zsub{\gm\cap\dl}{\gm\sm(\gm\cap\dl)}\ksup{k+1}\zsub{G\sm\gm}{\gm\sm(\gm\cap\dl)}\ksup{k+1}
\zsub{\dl\sm(\gm\cap\dl)}{G\sm\dl}\ksup{k+1}\zsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}\ksup{k+1}
\right)
\text{; and}\\
\tag{3}\label{subsetclosuremix}
S(\zsub{G\sm\gm}{\gm}\ksup{k+1}&-\zsub{\gm}{G\sm\gm}\ksup{k+1}, \,
\zsub{\dl}{G\sm\dl}\ksup{k+1}-\zsub{G\sm\dl}{\dl}\ksup{k+1})
=\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\\
&\cdot\left(\zsub{\gm\cup\dl}{G\sm(\gm\cup\dl)}\ksup{k+1}\zsub{\gm\cap\dl}{G\sm(\gm\cap\dl)}\ksup{k+1}
-\zsub{G\sm(\gm\cup\dl)}{\gm\cup\dl}\ksup{k+1}\zsub{G\sm(\gm\cap\dl)}{\gm\cap\dl}\ksup{k+1}\right).
\end{align*}
\noindent In all cases, the term orders of the results are undetermined in general.
\end{prop}
\begin{proof}
These are all straightforward computations that are analogous to those in the proof of Proposition~\ref{subsetoverlapinterior}.
\end{proof}
\begin{prop}
\label{subsetoverlapboundary}
Let $\gm, \dl\subset G$. Assume term orders of the S-polynomial input are as written. The following equalities hold in $\cE_{k+1}$.
\begin{align*}
\tag{1}\label{subsetboundaryout}
S(\zsub{\gm}{\tau}\ksup{k+1}&-\zsub{\beta}{\gm}\ksup{k+1}, \,
\zsub{\dl}{\tau}\ksup{k+1}-\zsub{\beta}{\dl}\ksup{k+1})\\
&=-\zsub{\beta}{\gm\cap\dl}\ksup{k+1}
\left(\zsub{\dl\sm(\gm\cap\dl)}{\tau}\ksup{k+1} \zsub{\beta}{\gm\sm(\gm\cap\dl)}\ksup{k+1}
-
\zsub{\gm\sm(\gm\cap\dl)}{\tau}\ksup{k+1} \zsub{\beta}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\right)\text{;}\\
\tag{2}\label{subsetboundaryin}S(\zsub{\beta}{\gm}\ksup{k+1}&-\zsub{\gm}{\tau}\ksup{k+1},\,\zsub{\beta}{\dl}\ksup{k+1}-\zsub{\dl}{\tau}\ksup{k+1})\\
&=-\zsub{\gm\cap\dl}{\tau}\ksup{k+1}\left(\zsub{\beta}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\zsub{\gm\sm(\gm\cap\dl)}{\tau}\ksup{k+1}
-
\zsub{\beta}{\gm\sm(\gm\cap\dl)}\ksup{k+1}\zsub{\dl\sm(\gm\cap\dl)}{\tau}\ksup{k+1}
\right)\text{; and}\\
\tag{3}\label{subsetboundarymix}S(\zsub{\beta}{\gm}\ksup{k+1}&-\zsub{\gm}{\tau}\ksup{k+1},\,\zsub{\dl}{\tau}\ksup{k+1}-\zsub{\beta}{\dl}\ksup{k+1})
=\zsub{\gm\cup\dl}{\tau}\ksup{k+1}\zsub{\gm\cap\dl}{\tau}\ksup{k+1}-\zsub{\beta}{\gm\cup\dl}\ksup{k+1}\zsub{\beta}{\gm\cap\dl}\ksup{k+1}.
\end{align*}
The term orders of the results in the first two statements are undetermined in general.
\end{prop}
\begin{proof} The argument is a straightforward calculation in each case. Keep in mind throughout that all monomials involved in this computation are products of $\ztau\ksup{i}$ and $\zbeta\ksup{i}$ for $i>k+1$. \smallskip\\
\noindent\emph{Case 1:}
\begin{align*}
S(\zsub{\gm}{\tau}\ksup{k+1}-\zsub{\beta}{\gm}\ksup{k+1},\zsub{\dl}{\tau}\ksup{k+1}-\zsub{\beta}{\dl}\ksup{k+1})=&\frac{\zsub{\gm}{\tau}\ksup{k+1} \zsub{\dl}{\tau}\ksup{k+1}}{\zsub{\gm\cap\dl}{\tau}\ksup{k+1}}\frac{1}{\zsub{\gm}{\tau}\ksup{k+1}}\left(-\zsub{\beta}{\gm}\ksup{k+1}\right)\\
&\,-\frac{\zsub{\gm}{\tau}\ksup{k+1} \zsub{\dl}{\tau}\ksup{k+1}}{\zsub{\gm\cap\dl}{\tau}\ksup{k+1}}\frac{1}{\zsub{\dl}{\tau}\ksup{k+1}}\left(-\zsub{\beta}{\dl}\ksup{k+1}\right)\\
=&-\zsub{\dl\sm(\gm\cap\dl)}{\tau}\ksup{k+1} \zsub{\beta}{\gm}\ksup{k+1}+\zsub{\gm\sm(\gm\cap\dl)}{\tau}\ksup{k+1} \zsub{\beta}{\dl}\ksup{k+1}\\
=&-\zsub{\beta}{\gm\cap\dl}\ksup{k+1}\left(\zsub{\dl\sm(\gm\cap\dl)}{\tau}\ksup{k+1} \zsub{\beta}{\gm\sm(\gm\cap\dl)}\ksup{k+1}\right.\\
&\left.\,\phantom{\zsub{\beta}{\gm\cap\dl}\ksup{k+1}\left(\right.}-\zsub{\gm\sm(\gm\cap\dl)}{\tau}\ksup{k+1} \zsub{\beta}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\right)
\end{align*}
\smallskip
\noindent\emph{Case 2:} The calculation here is almost identical to that of Case 1.
\smallskip
\noindent\emph{Case 3:} Since $\zsub{\beta}{\gm}\ksup{k+1}$ is a product of $\zbeta\ksup{i}$ and $\zsub{\dl}{\tau}\ksup{k+1}$ is a product of $\ztau\ksup{i}$, they cannot have any common divisors. The result as stated just follows from rewriting $$\zsub{\gm}{\tau}\ksup{k+1}\zsub{\dl}{\tau}\ksup{k+1}=\zsub{\gm\cup\dl}{\tau}\ksup{k+1}\zsub{\gm\cap\dl}{\tau}\ksup{k+1}
\quad\text{and}\quad
\zsub{\beta}{\gm}\ksup{k+1}\zsub{\beta}{\dl}\ksup{k+1}=\zsub{\beta}{\gm\cup\dl}\ksup{k+1}\zsub{\beta}{\gm\cap\dl}\ksup{k+1}.$$
\end{proof}
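The rewriting used in Case 3 is just the observation that multiplying the boundary monomials of $\gm$ and of $\dl$ is the same as multiplying those of $\gm\cup\dl$ and of $\gm\cap\dl$, since the factors indexed by $\gm\cap\dl$ appear twice on both sides. In the simplest generic instance (illustrative variables, one factor per element, not the document's boundary edges): for $\gm=\{1,2\}$ and $\dl=\{2,3\}$,
$$
(z_1z_2)(z_2z_3)=(z_1z_2z_3)\,z_2,
$$
which is the pattern $\zsub{\gm}{\tau}\ksup{k+1}\zsub{\dl}{\tau}\ksup{k+1}=\zsub{\gm\cup\dl}{\tau}\ksup{k+1}\zsub{\gm\cap\dl}{\tau}\ksup{k+1}$ invoked above.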
Combining the calculations in Propositions~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary} with the general principle in Proposition~\ref{separateinteriorclosure}, we obtain $S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})$ for various combinations of in-led and out-led subsets.
\begin{prop}
\label{ksubsetoverlap}
Let $\gm, \dl\subset G$. The following statements hold in $\cE_{k+1}$.
\begin{align*}
\tag{1}\label{ksubsetoverlapout}
\text{If $\gm$ and $\dl$ are both out-led}& \text{, then}\\
S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})=&\,\xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\ksup{k+1}\zsub{\beta}{\gm\cap\dl}\ksup{k+1}\\
&\cdot\left(g_{\dl\sm(\gm\cap\dl)}^{(k+1), \out}g_{\gm\sm(\gm\cap\dl)}^{(k+1), \into}-g_{\gm\sm(\gm\cap\dl)}^{(k+1), \out}g_{\dl\sm(\gm\cap\dl)}^{(k+1), \into}\right)\\
&\xrightarrow{g_{\gm\sm(\gm\cap\dl)}\ksup{k+1},g_{\dl\sm(\gm\cap\dl)}\ksup{k+1}}0
\end{align*}
\begin{align*}
\tag{2}\label{ksubsetoverlapin}
\text{If $\gm$ and $\dl$ are both in-led, then}\\
S(g\ksup{k+1}_\gm,g\ksup{k+1}_\dl)=&\,\xsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\ksup{k+1}\zsub{\gm\cap\dl}{\tau}\ksup{k+1}\\
&\cdot\left(g_{\dl\sm(\gm\cap\dl)}^{(k+1),\into}g_{\gm\sm(\gm\cap\dl)}^{(k+1),\out}-g_{\gm\sm(\gm\cap\dl)}^{(k+1),\into}g_{\dl\sm(\gm\cap\dl)}^{(k+1),\out}\right)\\
&\xrightarrow{g_{\gm\sm(\gm\cap\dl)}\ksup{k+1},g_{\dl\sm(\gm\cap\dl)}\ksup{k+1}}0
\end{align*}
\begin{align*}
\tag{3}\label{ksubsetoverlapmix}
\text{If $\gm$ is in-led and $\dl$ is out-led, }& \text{then}\\
S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})=&\,\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\\
&\cdot\left(g_{\gm\cup\dl}^{(k+1),\out} g_{\gm\cap\dl}^{(k+1),\out}-g_{\gm\cup\dl}^{(k+1),\into} g_{\gm\cap\dl}^{(k+1),\into}\right)\\
&\xrightarrow{g_{\gm\cup\dl}\ksup{k+1},g_{\gm\cap\dl}\ksup{k+1}}0.
\end{align*}
\end{prop}
\begin{proof}
All three cases follow directly from applying the appropriate cases of Propositions~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary}. The reduction statements follow from Proposition~\ref{unorderedreduction}.
\end{proof}
Finally, we compute the S-polynomials among generators of the form $\nu g_\gm\ksup{k}$ from the original basis $\cG_0$ and show that they can all be reduced by generators in the working basis $\cG^\prime$. This will be the first argument in which the choice of monomial order comes into play. Observation~\ref{MOanddiv} will be used repeatedly.
\begin{lemma}
\label{subsetoverlap}
Let $\gm, \dl\subset G$. Then
$$S(\nu g_\gm\ksup{k}, \nu g_\dl\ksup{k})\xrightarrow{\cG^\prime}0$$ in $\cE_{k+1}[\nu]$.
\end{lemma}
\begin{proof} For any in-led / out-led combination of $\gm$ and $\dl$, the first step to compute $S(\nu g_\gm\ksup{k}, \nu g_\dl\ksup{k})$ is to rewrite $g_\gm\ksup{k}=\anyzeta_\gm g_\gm\ksup{k+1}$ and $g_\dl\ksup{k}=\anyzeta_\dl g_\dl\ksup{k+1}$ so that we may apply the results of Proposition~\ref{ksubsetoverlap}. The extra factors of $\anyzeta_\gm$ and $\anyzeta_\dl$ are either $\ztau\ksup{k+1}$ or 1, depending on whether $\ztau\ksup{k+1}$ is internal to $\gm$ and/or $\dl$.
Consider $S(\nu g_\gm\ksup{k}, \nu g_\dl\ksup{k})=\nu S(\anyzeta_\gm g_\gm\ksup{k+1}, \anyzeta_\dl g_\dl\ksup{k+1})$, where we have used Proposition~\ref{gbprinciplecoeff} to move the factor of $\nu$. The possible values of $\anyzeta_\gm$ and $\anyzeta_\dl$, along with the rules in Proposition~\ref{gbprinciplecoeff}, give us the following cases.
\begin{enumerate}
\item $\anyzeta_\gm=\anyzeta_\dl=\ztau\ksup{k+1}$; that is, $\ztau\ksup{k+1}$ is internal to both $\gm$ and $\dl$. Then Statement (1) of Proposition~\ref{gbprinciplecoeff} implies $$\nu S(\anyzeta_\gm g_\gm\ksup{k+1}, \anyzeta_\dl g_\dl\ksup{k+1})=\nu\ztau\ksup{k+1} S(g_\gm\ksup{k+1}, g_\dl\ksup{k+1}).$$
\item $\anyzeta_\gm=\ztau\ksup{k+1}$, $\anyzeta_\dl=1$; that is, $\ztau\ksup{k+1}$ is internal to exactly one of $\gm$ and $\dl$, which we take to be $\gm$ without loss of generality. In this case, $\ztau\ksup{k+1}$ divides at most one term of $g_\dl\ksup{k}$.
\begin{enumerate}
\item $\ztau\ksup{k+1}$ divides neither term of $g_\dl\ksup{k}$. Then Statement (2) of Proposition~\ref{gbprinciplecoeff} implies $$\nu S(\anyzeta_\gm g_\gm\ksup{k+1}, \anyzeta_\dl g_\dl\ksup{k+1})=\nu \ztau\ksup{k+1}S(g_\gm\ksup{k+1}, g_\dl\ksup{k+1}).$$
\item $\ztau\ksup{k+1}$ divides exactly one term of $g_\dl\ksup{k}$. Observation~\ref{MOanddiv} implies that it divides the leading term, so Statement (3) of Proposition~\ref{gbprinciplecoeff} implies $$\nu S(\anyzeta_\gm g_\gm\ksup{k+1}, \anyzeta_\dl g_\dl\ksup{k+1})=\nu S(g_\gm\ksup{k+1}, g_\dl\ksup{k+1}).$$
\end{enumerate}
\item $\anyzeta_\gm=\anyzeta_\dl=1$; that is, $\ztau\ksup{k+1}$ is internal to neither $\gm$ nor $\dl$. Then $$\nu S(\anyzeta_\gm g_\gm\ksup{k+1}, \anyzeta_\dl g_\dl\ksup{k+1})=\nu S(g_\gm\ksup{k+1}, g_\dl\ksup{k+1}).$$
\end{enumerate}
Explicit expressions for $S(\nu g_\gm\ksup{k}, \nu g_\dl\ksup{k})$ in $\cE_{k+1}[\nu]$ are then multiples (by $\nu$ and possibly by $\ztau\ksup{k+1}$) of the expressions for $S(g_\gm\ksup{k+1}, g_\dl\ksup{k+1})$ in Proposition~\ref{ksubsetoverlap}. In Cases 1 and 2(a), where there is a factor of $\ztau\ksup{k+1}$ in front of $S(g_\gm\ksup{k+1}, g_\dl\ksup{k+1})$, we may reduce by some combination of $\ztau\ksup{k+1}g_\Lambda\ksup{k+1}$ for $\Lambda\in\{\gm\sm(\gm\cap\dl), \dl\sm(\gm\cap\dl), \gm\cap\dl, \gm\cup\dl\}$, exactly paralleling the reductions in Proposition~\ref{ksubsetoverlap}. Generators of the form $\ztau\ksup{k+1} g_\Lambda\ksup{k+1}$ are in the working basis for any $\Lambda$.
Cases 2(b) and 3 are more delicate. Since we do not have the factor of $\ztau\ksup{k+1}$ in front of $\nu S(g_\gm\ksup{k+1}, g_\dl\ksup{k+1})$, we cannot necessarily reduce by generators of the form $\ztau\ksup{k+1}g_\Lambda\ksup{k+1}$. We may always reduce by generators of the form $\nu g_\Lambda\ksup{k+1}$ as in Proposition~\ref{ksubsetoverlap}, but these are in the working basis $\cG^\prime$ only if $g_\Lambda\ksup{k}=g_\Lambda\ksup{k+1}$ (i.e.~if $\anyzeta_\Lambda=1$), which is not always true. We suppose now that $g_\Lambda\ksup{k}\neq g_\Lambda\ksup{k+1}$ for at least one of $\Lambda\in\{\gm\sm(\gm\cap\dl), \dl\sm(\gm\cap\dl), \gm\cap\dl, \gm\cup\dl\}$.
\smallskip
\noindent\emph{Case 2(b):} $\anyzeta_\gm=\ztau\ksup{k+1}$, $\anyzeta_\dl=1$, $\ztau\ksup{k+1}\,\vert\,\lt{g_\dl\ksup{k}}$, and $g_\Lambda\ksup{k}\neq g_\Lambda\ksup{k+1}$ for at least one of $\Lambda\in\{\gm\sm(\gm\cap\dl), \dl\sm(\gm\cap\dl), \gm\cap\dl, \gm\cup\dl\}$. \smallskip
We have assumed that $\ztau\ksup{k+1}$ is internal to $\gm$ but not $\dl$, which means that it cannot be internal to $\gm\cap\dl$ or $\dl\sm(\gm\cap\dl)$. We have also assumed that $\ztau\ksup{k+1}$ divides one term of $g_\dl\ksup{k}$, which means that $\ztau\ksup{k+1}$ must go either into or out of $\dl$. Therefore, $\ztau\ksup{k+1}$ cannot be internal to $\gm\sm(\gm\cap\dl)$ either. So $g_\Lambda\ksup{k}=g_\Lambda\ksup{k+1}$ for $\Lambda\in\left\{\gm\cap\dl,\dl\sm(\gm\cap\dl), \gm\sm(\gm\cap\dl)\right\}$.
The only scenario compatible with our assumptions, then, is that $\ztau\ksup{k+1}$ is internal to $\gm\cup\dl$ and so $g_{\gm\cup\dl}\ksup{k}\neq g_{\gm\cup\dl}\ksup{k+1}$. We then must find an alternative method of reducing $\nu S(g_\gm\ksup{k+1}, g_\dl\ksup{k+1})$ when one subset is in-led and the other out-led (Case 3 of Proposition~\ref{ksubsetoverlap}).
Suppose first that $\gm$ is in-led and $\dl$ is out-led. Then $\ztau\ksup{k+1}$ must go out of $\gm\cap\dl$ and into $\gm\sm(\gm\cap\dl)$. Proposition~\ref{ksubsetoverlap}(3) gives the following expression for the S-polynomial we are trying to reduce.
\begin{align*}
\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})&=\,\nu \xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\\
&\cdot\left(g_{\gm\cup\dl}^{(k+1),\out} g_{\gm\cap\dl}^{(k+1),\out}-g_{\gm\cup\dl}^{(k+1),\into} g_{\gm\cap\dl}^{(k+1),\into}\right)
\end{align*}
\noi The term order shown is correct; it is determined by the fact that $\ztau\ksup{k+1}$ divides only one of the terms. Since $\ztau\ksup{k+1}$ is internal to $\gm\cup\dl$, it divides neither term of $g_{\gm\cup\dl}\ksup{k+1}$. Since it goes out of but not into $\gm\cap\dl$, it divides $g_{\gm\cap\dl}^{(k+1),\out}$ but not $g_{\gm\cap\dl}^{(k+1),\into}$. Since both $\nu$ and $\ztau\ksup{k+1}$ divide the leading term of this expression, we reduce first by $\nutopk$.
\begin{align*}
\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})\xrightarrow{\nutopk}\, &\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\\
\cdot\left(\right. & \nu g_{\gm\cup\dl}^{(k+1),\into} g_{\gm\cap\dl}^{(k+1),\into} - g_{\gm\cup\dl}^{(k+1),\out} g_{\gm\cap\dl}^{(k+1),\out}\left.\right)
\end{align*}
The leading term in the result is determined by $\nu$. Now since $\ztau\ksup{k+1}$ goes out of but not into $\gm\cap\dl$, we have available in the working basis the $\gm\cap\dl$ tilde generator, which must have the term order
$$\widetilde{g}_{\gm\cap\dl}=\nu g_{\gm\cap\dl}^{(k+1),\into} - g_{\gm\cap\dl}^{(k+1),\out}.$$
This term order is compatible with the term order of our reduced expression for $\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})$, so we reduce further as follows.
\begin{align*}
\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})&\xrightarrow{\nutopk,\,\widetilde{g}_{\gm\cap\dl}}\\
&\quad\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}g_{\gm\cap\dl}^{(k+1),\out}g_{\gm\cup\dl}\ksup{k+1}.
\end{align*}
Since $g_{\gm\cup\dl}\ksup{k+1}$ is the only factor in this expression with more than one term, its term order determines the term order of the expression. Since $\ztau\ksup{k+1}$ divides $g_{\gm\cap\dl}^{(k+1),\out}$, we may reduce to zero using $\ztau\ksup{k+1}g_{\gm\cup\dl}\ksup{k+1}$, which is in the working basis.
The other possibility in Case 2(b) was that $\gm$ was out-led and $\dl$ in-led. This means that $\ztau\ksup{k+1}$ goes out of $\gm\sm(\gm\cap\dl)$ and into $\gm\cap\dl$. Our expression for $\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})$ comes from Part (3) of Proposition~\ref{ksubsetoverlap} again, but with the roles of $\gm$ and $\dl$ reversed. The argument for reducing by $\nutopk$, then $\widetilde{g}_{\gm\cap\dl}$, then $\ztau\ksup{k+1}g_{\gm\cup\dl}\ksup{k+1}$ is very similar to the argument just given, so we omit the details here.
\smallskip
\noindent\emph{Case 3:} $\anyzeta_\gm=\anyzeta_\dl=1$ and $g_\Lambda\ksup{k}\neq g_\Lambda\ksup{k+1}$ for at least one of $\Lambda\in\{\gm\sm(\gm\cap\dl), \dl\sm(\gm\cap\dl), \gm\cap\dl, \gm\cup\dl\}$. \smallskip
Our assumptions about $\anyzeta_\gm$ and $\anyzeta_\dl$ mean that $\ztau\ksup{k+1}$ is not internal to $\gm$ or $\dl$, hence not to $\gm\sm(\gm\cap\dl)$, $\dl\sm(\gm\cap\dl)$, or $\gm\cap\dl$. Therefore, $\ztau\ksup{k+1}$ must be internal to $\gm\cup\dl$, so that $g_{\gm\cup\dl}\ksup{k}\neq g_{\gm\cup\dl}\ksup{k+1}$. Then $\ztau\ksup{k+1}$ must go between $\gm\sm(\gm\cap\dl)$ and $\dl\sm(\gm\cap\dl)$ in one direction or the other. We will assume that it goes out of $\dl\sm(\gm\cap\dl)$ and into $\gm\sm(\gm\cap\dl)$. The other case is analogous, with the roles of $\gm$ and $\dl$ reversed throughout the argument.
We know, then, that $\dl$ is out-led, $\gm$ is in-led, and $\ztau\ksup{k+1}$ divides exactly one term of $g_\gm\ksup{k+1}$ and exactly one term of $g_\dl\ksup{k+1}$. Therefore, we have available in the working basis
$$\widetilde{g}_\gm = \nu g_\gm^{(k+1),\out} - g_\gm^{(k+1),\into} \quad\text{and}\quad\widetilde{g}_\dl = \nu g_\dl^{(k+1),\into} - g_\dl^{(k+1),\out}.$$
We will use these to reduce $\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})$ to zero. Begin with a refactored version of Statement (3) in Proposition~\ref{ksubsetoverlap} (e.g.~refer to the first line of Case \ref{subsetinteriormix} in Proposition~\ref{subsetoverlapinterior}).
\begin{align*}
\nu S(g_\gm\ksup{k+1},\, &g_\dl\ksup{k+1})=\\
&\nu \xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl}{\tau}\ksup{k+1}
g_\gm^{(k+1),\out}\\
\,- &\nu\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\zsub{G\sm\gm}{\gm\cap\dl}\zsub{\beta}{\gm}\ksup{k+1}
g_{\dl}^{(k+1),\into}
\end{align*}
The term order of this expression is unknown since $\nu$ divides both terms and $\ztau\ksup{k+1}$ divides neither. It is reducible by $\widetilde{g}_\gm$ if the term order shown is correct and by $\widetilde{g}_\dl$ if not. The argument is similar either way, so we suppose now that the term order shown is correct and omit the other case. Reducing by $\widetilde{g}_\gm$ produces the following, in which the term order is determined by $\nu$ and shown correctly.
\begin{align*}
\nu S(g_\gm\ksup{k+1},\, &g_\dl\ksup{k+1})\xrightarrow{\widetilde{g}_\gm}\\
&\nu\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\zsub{G\sm\gm}{\gm\cap\dl}\zsub{\beta}{\gm}\ksup{k+1}
g_{\dl}^{(k+1),\into}\\
\,- & \xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl}{\tau}\ksup{k+1}
g_\gm^{(k+1),\into}
\end{align*}
This expression can be reduced by $\widetilde{g}_\dl$, leaving the following, in which the term order is unknown.
\begin{align*}
\nu S(g_\gm\ksup{k+1},\, &g_\dl\ksup{k+1})\xrightarrow{\widetilde{g}_\gm,\widetilde{g}_\dl}\\
& \xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl}{\tau}\ksup{k+1}
g_\gm^{(k+1),\into}\\
\,- &\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\zsub{G\sm\gm}{\gm\cap\dl}\zsub{\beta}{\gm}\ksup{k+1}
g_{\dl}^{(k+1),\out}
\end{align*}
This expression is actually zero, which we may see by refactoring the term written first above.
\begin{align*}
&\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl}{\tau}\ksup{k+1} g_\gm^{(k+1),\into}\\
=&\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\xsub{G\sm\gm}{\gm}\\
\cdot &\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{G\sm\gm}{\gm}\zsub{\dl}{\tau}\ksup{k+1}\zsub{\beta}{\gm}\ksup{k+1}\\
=&\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\xsub{G\sm(\gm\cup\dl)}{\gm}\xsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\xsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}\\
\cdot &\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{G\sm(\gm\cup\dl)}{\gm}\zsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\zsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}\\
\cdot &\zsub{\dl}{\tau}\ksup{k+1}\zsub{\beta}{\gm}\ksup{k+1}\\
=&\xsub{\dl}{G\sm\dl}\xsub{G\sm(\gm\cup\dl)}{\gm}\xsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}\\
\cdot &\zsub{\dl}{G\sm\dl}\zsub{G\sm(\gm\cup\dl)}{\gm}\zsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}
\cdot \zsub{\dl}{\tau}\ksup{k+1}\zsub{\beta}{\gm}\ksup{k+1}\\
=&g_\dl^{(k+1),\out}\xsub{G\sm(\gm\cup\dl)}{\gm}\xsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}
\cdot \zsub{G\sm(\gm\cup\dl)}{\gm}\zsub{\dl\sm(\gm\cap\dl)}{\gm\cap\dl}
\cdot \zsub{\beta}{\gm}\ksup{k+1}
\end{align*}
One more refactoring shows that this last expression is the same as the second term in our reduced expression for $\nu S(g_\gm\ksup{k+1},\, g_\dl\ksup{k+1})$ above. So we have shown that
$$\nu S(g_\gm\ksup{k+1},\, g_\dl\ksup{k+1})\xrightarrow{\widetilde{g}_\gm,\widetilde{g}_\dl}0.$$
\end{proof}
We have now computed S-polynomials among generators of the form $\nu g_\gm\ksup{k}$ for all combinations of in-led and out-led subsets, and seen that they all reduce to zero by elements of the working basis $\cG^\prime$. Therefore, we close this section with the same working basis as at the end of the previous section.
\section{Implementing Buchberger's Algorithm: Round 2}
\label{bbrd2}
Buchberger's Algorithm now calls for a new round of S-polynomials: those involving elements of $\cG^\prime\setminus\cG_0$. As we will see, these all reduce to zero within $\cG^\prime$. So, at the close of this section, we will have confirmed that the S-polynomial of any pair of generators in $\cG^\prime$ reduces to zero within $\cG^\prime$. This will complete the proof of Lemma~\ref{buchend}. Table~\ref{round2table} records the computations undertaken in this section.
\begin{table}[htdp]
\begin{center}
\renewcommand\arraystretch{1.5}
\begin{tabular}{|c|c|c|c|}
\hline
$S\!\left(-,-\right)$ & Result & Prop. & Add to $\cG^\prime$? \\\hline\hline
$S\!\left(\nutopk,\ztau\ksup{k+1} g_\gm\ksup{k+1}\right)$ & 0 & Stmt.~(\ref{nutop0nu}) of Prop.~\ref{nutopone} & no \\\hline
$S\!\left(\ztau\ksup{k+1} g_\gm\ksup{k+1}, \ztau\ksup{k+1} g_\dl\ksup{k+1}\right)$ & 0 & Prop.~\ref{gbprinciplecoeff} \& Prop.~\ref{ksubsetoverlap} & no \\\hline
$S\!\left(\ztau\ksup{k+1} g_\gm\ksup{k+1}, \widetilde{g}_\dl\right)$ & 0 & Lemma~\ref{ztautilde} & no \\\hline
$S\!\left( \widetilde{g}_\gm, \widetilde{g}_\dl\right)$ & 0 & Lemma~\ref{doubletilde} & no \\\hline
$S\!\left(\nu g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1}\right)$ & 0 & Lemma~\ref{2nunonu} & no \\\hline
$S\!\left(\nu g_\gm\ksup{k}, \widetilde{g}_\dl\right)$ & 0 & Lemma~\ref{tildeprop} & no \\\hline
\end{tabular}
\end{center}
\medskip
\caption{S-polynomials Round 2: All computations are assumed to be among generators in $\cG^\prime$ and are carried out in $\cE_{k+1}[\nu]$. S-polynomials are listed in the order they are computed in Section~\ref{bbrd2}.}
\label{round2table}
\end{table}
First, we may quickly take care of $S(\nutopk,\ztau\ksup{k+1}g_\gm\ksup{k+1})$ using Statement~(\ref{nutop0nu}) of Proposition~\ref{nutopone}. Assuming that $\lc{g_\gm\ksup{k+1}}=1,$
\begin{align*}
S(\nutopk,\ztau\ksup{k+1}g_\gm\ksup{k+1})&=\,-\nu\ztau\ksup{k+1}\trt{g_\gm\ksup{k+1}}-\ztau\ksup{k+1}\lt{g_\gm\ksup{k+1}}\\
\text{reduce}\quad&+\,\,\trt{g_\gm\ksup{k+1}}\left(\nutopk\right)\\
&=-\ztau\ksup{k+1}g_\gm\ksup{k+1}
\end{align*}
Therefore, $S(\nutopk,\ztau\ksup{k+1}g_\gm\ksup{k+1})\xrightarrow{\nutopk, \ztau\ksup{k+1}g_\gm\ksup{k+1}}0.$
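The same mechanics can be seen in a minimal generic example (abstract variables $\nu$, $z$, $a$, $b$, not the document's generators, with a monomial order in which $\nu$-divisible terms dominate and $a>b$). With $f=\nu z-z$ and $h=z(a-b)$,
\begin{align*}
S(f,h)&=a\left(\nu z-z\right)-\nu\left(za-zb\right)=-za+\nu zb\\
&\xrightarrow{f}-za+zb=-z(a-b)\xrightarrow{h}0,
\end{align*}
which mirrors the reduction just carried out.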
Second, we apply Proposition~\ref{gbprinciplecoeff} and Proposition~\ref{ksubsetoverlap} to see that
\[S(\ztau\ksup{k+1}g_\gm\ksup{k+1},\ztau\ksup{k+1}g_\dl\ksup{k+1})=\ztau\ksup{k+1}S(g_\gm\ksup{k+1},g_\dl\ksup{k+1}),\]
which reduces to zero either by the pair $\ztau\ksup{k+1}g_{\gm\sm(\gm\cap\dl)}\ksup{k+1}$ and $\ztau\ksup{k+1}g_{\dl\sm(\gm\cap\dl)}\ksup{k+1}$,
or by the pair
$\ztau\ksup{k+1}g_{\gm\cup\dl}\ksup{k+1}$ and $\ztau\ksup{k+1}g_{\gm\cap\dl}\ksup{k+1}$.
In either case, these are elements of the working basis $\cG^\prime$.
A similar method works for $S(\ztau\ksup{k+1} g_\gm\ksup{k+1},\widetilde{g}_\dl)$, but in this case the term order of the S-polynomial's result will be determined by the presence of $\nu$ on one term but not the other. This means that we cannot use Proposition~\ref{unorderedreduction} to reduce these expressions to zero (as we did in Proposition~\ref{ksubsetoverlap}).
\begin{lemma}
\label{ztautilde}
Let $\gm, \dl\subset G$. Suppose that $\ztau\ksup{k+1}$ divides the leading term but not the trailing term of $g_\dl\ksup{k}$. Then
\begin{align*}
S(\ztau\ksup{k+1} g_\gm\ksup{k+1},\widetilde{g}_\dl)&\xrightarrow{\nutopk,\ztau\ksup{k+1}g_{\gm\sm(\gm\cap\dl)}\ksup{k+1},\ztau\ksup{k+1}g_{\dl\sm(\gm\cap\dl)}\ksup{k+1}}0\\
\quad\text{or}\quad
&\xrightarrow{\nutopk, \ztau\ksup{k+1}g_{\gm\cup\dl}\ksup{k+1},\ztau\ksup{k+1}g_{\gm\cap\dl}\ksup{k+1}}0.
\end{align*}
\end{lemma}
\begin{proof}
Applying Statement (2) of Proposition~\ref{gbprinciplecoeff}, we have
\begin{align*}
S(\ztau\ksup{k+1} g_\gm\ksup{k+1},\widetilde{g}_\dl)&=\ztau\ksup{k+1} S(g_\gm\ksup{k+1},\widetilde{g}_\dl)\\
&=\ztau\ksup{k+1}S(g_\gm\ksup{k+1},\nu\trt{g_\dl\ksup{k+1}}-\lt{g_\dl\ksup{k+1}})
\end{align*}
Propositions~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary} can then be used to compute the S-polynomial explicitly in terms of $g_\Lambda\ksup{k+1}$ for $\Lambda\in\left\{\gm\sm(\gm\cap\dl), \dl\sm(\gm\cap\dl), \gm\cup\dl, \gm\cap\dl \right\}$, just as in Proposition~\ref{ksubsetoverlap}. The leading term of the result will be divisible by both $\nu$ and $\ztau\ksup{k+1}$, so it will be reducible by $\nutopk$ (and term order will be determined by $\nu$). Let $d=\gcd\left(\lm{g_\gm\ksup{k+1}},\trm{g_\dl\ksup{k+1}}\right)$. Then assuming that all coefficients are $\pm 1$,
\begin{align*}
\ztau\ksup{k+1}S(g_\gm\ksup{k+1}&, \widetilde{g}_\dl)=\\
&\ztau\ksup{k+1}\!\left(\frac{\nu\trt{g_\dl\ksup{k+1}}}{d}\trt{g_\gm\ksup{k+1}}-
\frac{\lt{g_\gm\ksup{k+1}}}{d}\lt{g_\dl\ksup{k+1}}
\right)\\
\xrightarrow{\nutopk}&\ztau\ksup{k+1}\!\left(\frac{\lt{g_\gm\ksup{k+1}}}{d}\lt{g_\dl\ksup{k+1}}
-\frac{\trt{g_\dl\ksup{k+1}}}{d}\trt{g_\gm\ksup{k+1}}\right).
\end{align*}
The expression inside the parentheses may be expanded in terms of $g_\Lambda\ksup{k+1}$ for $\Lambda\in\{\gm\sm(\gm\cap\dl),\dl\sm(\gm\cap\dl),\gm\cap\dl,\gm\cup\dl\}$ just as in Proposition~\ref{ksubsetoverlap}. The term order of the resulting expression will be unknown, but since we have $\ztau\ksup{k+1}$ in front of it, we are effectively in the situation of Cases 1 and 2(a) of Lemma~\ref{subsetoverlap}. Proposition~\ref{unorderedreduction} allows us to reduce by some combination of $\ztau\ksup{k+1} g_\Lambda\ksup{k+1}$. These generators are in the working basis $\cG^\prime$ for any $\Lambda$.
\end{proof}
Next, we consider S-polynomials between tilde generators. Note that we only consider tilde generators that occur in $\cG^\prime$, which justifies the assumptions about divisibility by $\ztau\ksup{k+1}$ in the statement below.
\begin{lemma}
\label{doubletilde}
Let $\gm, \dl\subset G$. Assume that $\ztau\ksup{k+1}$ divides $\lt{g_\gm\ksup{k}}$ and $\lt{g_\dl\ksup{k}}$ and $\ztau\ksup{k+1}$ does not divide $\trt{g_\gm\ksup{k}}$ or $\trt{g_\dl\ksup{k}}$. Then the reduction
$$S(\widetilde{g}_\gm,\widetilde{g}_\dl)\xrightarrow{\left\{\ztau\ksup{k+1}g_\Lambda\ksup{k+1}\right\}}0$$
holds in $\cE_{k+1}[\nu]$ for some combination of $\Lambda\in\{\gm\sm(\gm\cap\dl),\dl\sm(\gm\cap\dl),\gm\cap\dl,\gm\cup\dl\}$. All such generators are in the working basis $\cG^\prime$.
\end{lemma}
\begin{proof}
Since we have assumed that $\ztau\ksup{k+1}$ divides the leading terms of $g_\gm\ksup{k}$ and $g_\dl\ksup{k}$ but does not divide the trailing terms, we know that $\ztau\ksup{k+1}$ is incident but not internal to both $\gm$ and $\dl$. Therefore $g_\gm\ksup{k}=g_\gm\ksup{k+1}$ and $g_\dl\ksup{k}=g_\dl\ksup{k+1}$. Let $d=\gcd\left(\trm{g_\gm\ksup{k+1}},\trm{g_\dl\ksup{k+1}}\right).$ Assuming that all coefficients are $\pm 1$,
\begin{align}
\nonumber S(\widetilde{g}_\gm,\widetilde{g}_\dl)&=S(\nu \trt{g_\gm\ksup{k+1}} - \lt{g_\gm\ksup{k+1}},\nu \trt{g_\dl\ksup{k+1}} - \lt{g_\dl\ksup{k+1}})\\
\label{expandmedoubletilde}&=\frac{\trt{g_\dl\ksup{k+1}}}{d}\lt{g_\gm\ksup{k+1}} - \frac{\trt{g_\gm\ksup{k+1}}}{d}\lt{g_\dl\ksup{k+1}}.
\end{align}
The term order here is unknown, since $\nu$ divides neither term and $\ztau\ksup{k+1}$ divides both (because it divides the leading terms of both $g_\gm\ksup{k+1}$ and $g_\dl\ksup{k+1}$). We may expand this expression in terms of $g_\Lambda\ksup{k+1}$ for some combination of $\Lambda\in\{\gm\sm(\gm\cap\dl),\dl\sm(\gm\cap\dl),\gm\cap\dl,\gm\cup\dl\}$ just as in Proposition~\ref{ksubsetoverlap} or directly, using the computations in Propositions~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary}. Effectively, we are computing the S-polynomial of $g_\gm\ksup{k+1}$ and $g_\dl\ksup{k+1}$ both with their term orders reversed.
If the result of expanding line~(\ref{expandmedoubletilde}) looks like Case (1) of Proposition~\ref{ksubsetoverlap}, then both trailing terms must be products of outgoing edges, which means $\ztau\ksup{k+1}$ is incoming to both $\gm$ and $\dl$. Since it is internal to neither $\gm$ nor $\dl$, $\ztau\ksup{k+1}$ must go from $G\sm(\gm\cup\dl)$ to $\gm\cap\dl$. Then $\ztau\ksup{k+1}$ divides the factor of $\zsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\ksup{k+1}$ in front of the expanded expression. Therefore, regardless of term order, we may invoke Proposition~\ref{unorderedreduction} to reduce to zero by $\ztau\ksup{k+1}g_{\gm\sm(\gm\cap\dl)}\ksup{k+1}$ and $\ztau\ksup{k+1}g_{\dl\sm(\gm\cap\dl)}\ksup{k+1}$, which are in the working basis $\cG^\prime$.
If the result of expanding line (\ref{expandmedoubletilde}) looks like Case (2) of Proposition~\ref{ksubsetoverlap}, then both trailing terms are products of incoming edges, which means $\ztau\ksup{k+1}$ goes from $\gm\cap\dl$ to $G\sm(\gm\cup\dl)$. Therefore, it divides the factor of $\zsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\ksup{k+1}$ in front of the expanded expression. So we may again reduce by $\ztau\ksup{k+1}g_{\gm\sm(\gm\cap\dl)}\ksup{k+1}$ and $\ztau\ksup{k+1}g_{\dl\sm(\gm\cap\dl)}\ksup{k+1}$ regardless of term order.
If the result of expanding line (\ref{expandmedoubletilde}) looks like Case (3) of Proposition~\ref{ksubsetoverlap}, then the trailing term of $g_\gm\ksup{k+1}$ is a product of incoming edges and the trailing term of $g_\dl\ksup{k+1}$ is a product of outgoing edges. Therefore, $\ztau\ksup{k+1}$ goes from $\gm\sm(\gm\cap\dl)$ to $\dl\sm(\gm\cap\dl)$, which means that it divides the factor of $\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}$ in front of the expanded expression. Regardless of term order, we may reduce to zero by a combination of $\ztau\ksup{k+1}g_{\gm\cap\dl}\ksup{k+1}$ and $\ztau\ksup{k+1}g_{\gm\cup\dl}\ksup{k+1}$, both of which are in the working basis $\cG^\prime$.
The only other possibility is that the result of expanding line~(\ref{expandmedoubletilde}) looks like Case (3) of Proposition~\ref{ksubsetoverlap} with the roles of $\gm$ and $\dl$ reversed. In that case, reverse the roles of $\gm$ and $\dl$ in the previous paragraph's argument to see that we may again reduce to zero by $\ztau\ksup{k+1}g_{\gm\cap\dl}\ksup{k+1}$ and $\ztau\ksup{k+1}g_{\gm\cup\dl}\ksup{k+1}$.
\end{proof}
The remaining two lemmas carry out the computations necessary to see that $S(\nu g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1})\xrightarrow{\cG^\prime}0$ and $S(\nu g_\gm\ksup{k},\widetilde{g}_\dl)\xrightarrow{\cG^\prime}0$. In both cases, the computations themselves follow directly from our earlier results, but some additional work is required to see that reductions are always possible by generators in $\cG^\prime$. This will complete Table~\ref{round2table} and finish the proof of Lemma~\ref{buchend}.
\begin{lemma}
\label{2nunonu}
Let $\gm,\dl\subset G$. Then
$S(\nu g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1})\xrightarrow{\cG^\prime}0$
in $\cE_{k+1}[\nu]$.
\end{lemma}
\begin{proof}
This S-polynomial is closely related to the one considered in Lemma~\ref{subsetoverlap}. By Statement (2) of Proposition~\ref{gbprinciplecoeff}, we have
$$S(\nu g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1})=\nu S(g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1}).$$
If $\ztau\ksup{k+1}$ is internal to $\dl$, then $$\nu S(g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1})=\nu S(g_\gm\ksup{k}, g_\dl\ksup{k})=S(\nu g_\gm\ksup{k}, \nu g_\dl\ksup{k}).$$ We have already shown in Lemma~\ref{subsetoverlap} that this S-polynomial reduces to zero within $\cG^\prime$.
If $\ztau\ksup{k+1}$ is not internal to $\dl$, then we have the following breakdown of cases.
\begin{enumerate}
\item $\ztau\ksup{k+1}$ is internal to $\gm$. Then
\begin{align*}
\nu S(g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1})&=\nu S(\ztau\ksup{k+1}g_\gm\ksup{k+1},\ztau\ksup{k+1} g_\dl\ksup{k+1})\\
&=\nu\ztau\ksup{k+1} S(g_\gm\ksup{k+1},g_\dl\ksup{k+1}),
\end{align*}
by Statement~(1) of Proposition~\ref{gbprinciplecoeff}. After expanding via Proposition~\ref{ksubsetoverlap}, we may reduce this expression to zero by some combination of $\ztau\ksup{k+1}g_\Lambda\ksup{k+1}$ for $\Lambda\in\{\gm\sm(\gm\cap\dl),\dl\sm(\gm\cap\dl),\gm\cap\dl,\gm\cup\dl\}$.
\item $\ztau\ksup{k+1}$ divides neither term of $g_\gm\ksup{k+1}$. Then $g_\gm\ksup{k}=g_\gm\ksup{k+1}$ and Statement (2) of Proposition~\ref{gbprinciplecoeff} implies that
$$\nu S(g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1}) = \nu\ztau\ksup{k+1} S(g_\gm\ksup{k+1},g_\dl\ksup{k+1}).$$
After expanding via Proposition~\ref{ksubsetoverlap}, we may reduce this expression to zero by some combination of $\ztau\ksup{k+1}g_\Lambda\ksup{k+1}$ for $\Lambda\in\{\gm\sm(\gm\cap\dl),\dl\sm(\gm\cap\dl),\gm\cap\dl,\gm\cup\dl\}$.
\item $\ztau\ksup{k+1}$ divides exactly one term, hence the leading term, of $g_\gm\ksup{k+1}$. Then $g_\gm\ksup{k}=g_\gm\ksup{k+1}$ and Statement~(3) of Proposition~\ref{gbprinciplecoeff} implies that
\begin{align*}
\nu S(g_\gm\ksup{k},\ztau\ksup{k+1} g_\dl\ksup{k+1})&=\nu S(g_\gm\ksup{k+1},\ztau\ksup{k+1} g_\dl\ksup{k+1})\\
&=\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1}).
\end{align*}
Also, our assumptions to this point mean that $\ztau\ksup{k+1}$ is internal to neither $\gm$ nor $\dl$. Therefore, we are in the same situation as Case (3) of Lemma~\ref{subsetoverlap}, in which we have already established that we may reduce to zero by generators already contained in $\cG^\prime$.
\end{enumerate}
\end{proof}
\begin{lemma}
\label{tildeprop}
Let $\gm, \dl\subset G$ and assume that $\widetilde{g}_\dl\in\cG^\prime$. Then $S(\nu g_\gm\ksup{k},\widetilde{g}_\dl)\xrightarrow{\cG^\prime}0$ in $\cE_{k+1}[\nu]$.
\end{lemma}
\begin{proof}
For the most part, the argument here is parallel to the proof of Lemma~\ref{subsetoverlap}. However, the leading terms in the results of these S-polynomials are determined by the presence of $\nu$ on exactly one of the terms, so it is not clear that the same reductions are always possible. Applying Statement (3) of Proposition~\ref{gbprinciplecoeff}, we have
$S(\nu g_\gm\ksup{k}, \widetilde{g}_\dl)=S(g_\gm\ksup{k}, \widetilde{g}_\dl).$
Note that our assumption that $\widetilde{g}_\dl\in\cG^\prime$ implies that $\ztau\ksup{k+1}$ is not internal to $\dl$ and that $g_\dl\ksup{k}=g_\dl\ksup{k+1}$. We consider the following cases, which parallel those in Lemma~\ref{subsetoverlap}.
\begin{enumerate}
\item $\ztau\ksup{k+1}$ is internal to $\gm$.
\item $\ztau\ksup{k+1}$ divides exactly one term, hence the leading term, of $g_\gm\ksup{k+1}$.
\item $\ztau\ksup{k+1}$ divides neither term of $g_\gm\ksup{k+1}$.
\end{enumerate}
\noi\emph{Case 1:} If $\ztau\ksup{k+1}$ is internal to $\gm$, then $g_\gm\ksup{k}=\ztau\ksup{k+1}g_\gm\ksup{k+1}$. We know that $\ztau\ksup{k+1}$ does not divide the trailing term of $g_\dl\ksup{k}$, so it does not divide the leading term of $\widetilde{g}_\dl$. Applying Statement (2) of Proposition~\ref{gbprinciplecoeff}, we have
\begin{align*}
S(g_\gm\ksup{k}, \widetilde{g}_\dl)&=S(\ztau\ksup{k+1}g_\gm\ksup{k+1}, \widetilde{g}_\dl)\\
&=\ztau\ksup{k+1}S(g_\gm\ksup{k+1}, \widetilde{g}_\dl)\\
&=\ztau\ksup{k+1}S(\lt{g_\gm\ksup{k+1}}-\trt{g_\gm\ksup{k+1}}, \nu \trt{g_\dl\ksup{k+1}}-\lt{g_\dl\ksup{k+1}})
\end{align*}
Let $d=\gcd\left(\lm{g_\gm\ksup{k+1}}, \trm{g_\dl\ksup{k+1}}\right)$. Note that $\ztau\ksup{k+1}$ does not divide $d$. Then we expand and reduce the S-polynomial above as follows, assuming that coefficients are all $\pm 1$.
\begin{align*}
&=\ztau\ksup{k+1}\left(\nu\frac{\trt{g_\dl\ksup{k+1}}}{d}\trt{g_\gm\ksup{k+1}} - \frac{\lt{g_\gm\ksup{k+1}}}{d}\lt{g_\dl\ksup{k+1}}\right)\\
&\xrightarrow{\nutopk} \ztau\ksup{k+1}\left(\frac{\trt{g_\dl\ksup{k+1}}}{d}\trt{g_\gm\ksup{k+1}} - \frac{\lt{g_\gm\ksup{k+1}}}{d}\lt{g_\dl\ksup{k+1}}\right)
\end{align*}
The term order in the last expression is unknown. The expression inside the parentheses may be expanded in terms of $g_\Lambda\ksup{k+1}$ using Proposition~\ref{ksubsetoverlap}, for $\Lambda\in\{\gm\sm(\gm\cap\dl),\dl\sm(\gm\cap\dl),\gm\cap\dl,\gm\cup\dl\}$. Since the factor of $\ztau\ksup{k+1}$ is available out front, we may then reduce by $\ztau\ksup{k+1}g_\Lambda\ksup{k+1}$ for the appropriate combination of $\Lambda$. All such generators are contained in $\cG^\prime$.
\medskip
In Cases 2 and 3, $\ztau\ksup{k+1}$ is not internal to $\gm$, and we continue to assume that it is not internal to $\dl$. Then $g_\gm\ksup{k}=g_\gm\ksup{k+1}$, so we expand the S-polynomial as in Case 1, but do not obtain a factor of $\ztau\ksup{k+1}$ in front.
\begin{align}
\nonumber S(g_\gm\ksup{k}, \widetilde{g}_\dl)&=S(g_\gm\ksup{k+1}, \widetilde{g}_\dl)\\
\label{reduceme}&=\nu\frac{\trt{g_\dl\ksup{k+1}}}{d}\trt{g_\gm\ksup{k+1}} - \frac{\lt{g_\gm\ksup{k+1}}}{d}\lt{g_\dl\ksup{k+1}}
\end{align}
\noi\emph{Case 2:}
Since we have assumed that $\ztau\ksup{k+1}$ divides $\lt{g_\gm\ksup{k}}$ but not $\trt{g_\gm\ksup{k}}$, we have $\widetilde{g}_\gm\in\cG^\prime$. We may use it to reduce line~(\ref{reduceme}) above.
\begin{align*}
S(g_\gm\ksup{k}, \widetilde{g}_\dl)\xrightarrow{\widetilde{g}_\gm}&
\frac{\trt{g_\dl\ksup{k+1}}}{d}\lt{g_\gm\ksup{k+1}}-\frac{\lt{g_\gm\ksup{k+1}}}{d}\lt{g_\dl\ksup{k+1}}\\
=&\frac{\lt{g_\gm\ksup{k+1}}}{d}g_\dl\ksup{k+1}
\end{align*}
Since $\ztau\ksup{k+1}$ is not internal to $\dl$, it does not divide $\trt{g_\dl\ksup{k+1}}$, which means it also does not divide $d$. Our assumption that $\ztau\ksup{k+1}$ divides $\lt{g_\gm\ksup{k+1}}$ then implies that we may reduce further by $\ztau\ksup{k+1}g_\dl\ksup{k+1}$. So we have shown that $S(\nu g_\gm\ksup{k},\widetilde{g}_\dl)$ reduces to zero via $\widetilde{g}_\gm$ and $\ztau\ksup{k+1}g_\dl\ksup{k+1}$, both of which are in $\cG^\prime$.
\noi\emph{Case 3:} Here we have assumed that $\ztau\ksup{k+1}$ is not incident to $\gm$, and we continue to assume that it divides only the leading term of $g_\dl\ksup{k}=g_\dl\ksup{k+1}$. We would like to reduce the expression in line~(\ref{reduceme}). Since $\ztau\ksup{k+1}$ is not incident to $\gm$, we do not have $\widetilde{g}_\gm$ available in $\cG^\prime$ in this case. Instead, we must expand line~(\ref{reduceme}) using Propositions~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary}, keeping careful track of which term retains the factor of $\nu$. Since $\ztau\ksup{k+1}$ is not incident to $\gm$, we know that $\gm$ is out-led. The computation will depend on whether $\dl$ is in-led or out-led.
Suppose first that $\dl$ is in-led. Then $\ztau\ksup{k+1}$ goes from $G\sm(\gm\cup\dl)$ to $\dl\sm(\gm\cap\dl)$. We use Statement~(\ref{subsetinteriorout}) in each of Propositions~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary} to expand. We use $\widetilde{g}_{\dl\sm(\gm\cap\dl)}$ to reduce the resulting expression. It is available in $\cG^\prime$ because $\ztau\ksup{k+1}$ is incident but not internal to $\dl\sm(\gm\cap\dl)$.
\begin{align*}
S(g_\gm\ksup{k}, \widetilde{g}_\dl)
=&\,\xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\ksup{k+1}\zsub{\beta}{\gm\cap\dl}\ksup{k+1}\\
&\cdot\,\left(\nu g_{\dl\sm(\gm\cap\dl)}^{(k+1),\out}g_{\gm\sm(\gm\cap\dl)}^{(k+1),\into} - g_{\gm\sm(\gm\cap\dl)}^{(k+1),\out}g_{\dl\sm(\gm\cap\dl)}^{(k+1),\into}\right)\\
\text{reduce}\quad-\,&\widetilde{g}_{\dl\sm(\gm\cap\dl)}\cdot \xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\ksup{k+1}\zsub{\beta}{\gm\cap\dl}\ksup{k+1}g_{\gm\sm(\gm\cap\dl)}^{(k+1),\into} \\
=&\,\xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\ksup{k+1}\zsub{\beta}{\gm\cap\dl}\ksup{k+1}g_{\dl\sm(\gm\cap\dl)}^{(k+1),\into}g_{\gm\sm(\gm\cap\dl)}\ksup{k+1}.
\end{align*}
Since $\ztau\ksup{k+1}$ divides $g_{\dl\sm(\gm\cap\dl)}^{(k+1),\into}$, we may reduce the above expression to zero by $\ztau\ksup{k+1}g_{\gm\sm(\gm\cap\dl)}\ksup{k+1}$, which is in $\cG^\prime$.
Suppose instead that $\dl$ is out-led. Then $\ztau\ksup{k+1}$ goes from $\dl\sm(\gm\cap\dl)$ to $G\sm(\gm\cup\dl)$. We use Statement~(\ref{subsetinteriormix}) in each of Propositions~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary}, with the roles of $\gm$ and $\dl$ reversed. We will use $\widetilde{g}_{\gm\cup\dl}$ to reduce the resulting expression. It is available in $\cG^\prime$ because $\ztau\ksup{k+1}$ is incident but not internal to $\gm\cup\dl$.
\begin{align*}
S(g_\gm\ksup{k}, \widetilde{g}_\dl)=&\,\xsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\zsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\ksup{k+1}\\
&\cdot\,\left(\nu g_{\gm\cup\dl}^{(k+1),\into}g_{\gm\cap\dl}^{(k+1),\into}-g_{\gm\cup\dl}^{(k+1),\out}g_{\gm\cap\dl}^{(k+1),\out}\right)\\
\text{reduce}\quad-\,&\widetilde{g}_{\gm\cup\dl}\cdot\xsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\zsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\ksup{k+1}g_{\gm\cap\dl}^{(k+1),\into}\\
=&\,\xsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\zsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\ksup{k+1}g_{\gm\cup\dl}^{(k+1),\out}g_{\gm\cap\dl}\ksup{k+1}
\end{align*}
Since $\ztau\ksup{k+1}$ divides $g_{\gm\cup\dl}^{(k+1),\out}$, we may reduce the above expression to zero by $\ztau\ksup{k+1}g_{\gm\cap\dl}\ksup{k+1}$, which is in $\cG^\prime$.
\end{proof}
\section{Extending to non-blackboard framings}
\label{nonbbframings}
We adopted the blackboard framing assumption in Section~\ref{startingbasissec} when setting up the starting basis for Buchberger's Algorithm. We now pick up from there to describe the modifications necessary to generalize to arbitrary framings. Refer to Section~\ref{twistedexamplesection} to see how these modifications change the set-up of the small example in Section~\ref{examplesection}.
\subsection{Set-up}
\label{twistedsetup}
In non-blackboard framed graphs, the weight of $\gm$ as a subset in $G\ksup{i}$ depends on $i$. As we close strands of the braid, non-blackboard-framed boundary edges may become internal to $\gm$, and therefore contribute to its weight. Let $\weight_i(\gm)$ denote the weight of $\gm$ as a subset in $G\ksup{i}$. That is, $\weight_i(\gm)$ is the sum of the framings on edges that are internal to $\gm$ in $G\ksup{i}$. Recall that the generator of $N_i$ associated to a set $\gm$ was defined to be
$t^{\weight_i(\gm)}x_{\gm,\out}-y_{\gm,\into}$.
When we pass to the edge ring, the monomial $y_{\gm,\into}$ picks up a coefficient as each $y_i$ is replaced by $t^{-\ell_i}x_i$, where $\ell_i$ denotes the framing on the edge labeled by $x_i$ and $y_i$. Let $\weight_{i,\into}(\gm)$ denote the sum of the framings on edges incoming to $\gm$ in $G\ksup{i}$ from its complement or from the bottom boundary. There is a dependence on $i$ because edges that were incoming to $\gm$ from the bottom boundary may become internal to $\gm$ as we close braid strands. It will be convenient to clear denominators in our standard generators for $N_i$ before beginning Buchberger's Algorithm, so we multiply the usual generators by $t^{\weight_{i,\into}}$. Using the more detailed notation of Section~\ref{mosec}, we let
\begin{equation}
h_\gm\ksup{k}=t^{\weight_k(\gm)+\weight_{k,\into}(\gm)}x_{\gm,G\sm\gm}z_{\gm,G\sm\gm}\ksup{k}z_{\gm,\tau}\ksup{k}-x_{G\sm\gm,\gm}z_{G\sm\gm,\gm}\ksup{k}z_{\beta,\gm}\ksup{k}
\end{equation}
be the new standard form of a generator of $N_k$ and
\begin{equation}
\label{ggmkplusonetw}
g_\gm\ksup{k+1}=t^{\weight_{k+1}(\gm)+\weight_{k+1,\into}(\gm)}x_{\gm,G\sm\gm}z_{\gm,G\sm\gm}\ksup{k+1}z_{\gm,\tau}\ksup{k+1}-x_{G\sm\gm,\gm}z_{G\sm\gm,\gm}\ksup{k+1}z_{\beta,\gm}\ksup{k+1}
\end{equation}
be the new standard form of a generator of $N_{k+1}$. Compare to Equations~\ref{ggmkdfn} and~\ref{ggmkplusonedfn}.
Applying $\pi_k$ to $h_\gm\ksup{k}$ may also cause it to pick up a new factor of $t$. Recall that $\pi_i$ is the quotient map $\cE_i\to\cE_{i+1}$ with kernel $Z_{i+1}=\left(\ztau\ksup{i+1}-t^{a_{i+1}}\zbeta\ksup{i+1}\right)$, where $a_{i+1}$ is the framing on the top edge of the $(i+1)^{\text{st}}$ strand of $G$. Recall also that $\pi_i$ retains the label $\ztau\ksup{i+1}$. So applying $\pi_k$ produces a factor of $t^{-a_{k+1}}$ whenever $\zbeta\ksup{k+1}$ appears.
Let
\[\weight_{k,\tau}(\gm)=\begin{cases}
a_{k+1} & \text{if }\zbeta\ksup{k+1} \text{ is incoming to $\gm$ in }G\ksup{k}\\
0 & \text{otherwise.}
\end{cases}\]
Then we have
\begin{align*}
\pi_k\!\left(h_\gm\ksup{k}\right)&=t^{\weight_{k}(\gm)+\weight_{k,\into}(\gm)}x_{\gm,G\sm\gm}z_{\gm,G\sm\gm}\ksup{k+1}z_{\gm,\tau}\ksup{k+1}\anyzeta_\gm\\&-x_{G\sm\gm,\gm}z_{G\sm\gm,\gm}\ksup{k+1}z_{\beta,\gm}\ksup{k+1}t^{-\weight_{k,\tau}(\gm)}\anyzeta_\gm,
\end{align*}
where $\anyzeta_\gm$ is defined as in Equation~\ref{anyzetadfn} to be $\ztau\ksup{k+1}$ if $\ztau\ksup{k+1}$ is internal to $\gm$ in $G\ksup{k+1}$ and $1$ otherwise. The collection of $\pi_k\!\left(h_\gm\ksup{k}\right)$, taken over all subsets $\gm$ in $G$, is a basis for $\pi_k(N_k)$ just as it was in the blackboard framed case. We will clear denominators to arrive at a more convenient starting basis for Buchberger's Algorithm. Define
\begin{equation}
\label{ggmktw}
g_\gm\ksup{k}=t^{\weight_{k,\tau}(\gm)}\pi_k\!\left(h_\gm\ksup{k}\right).
\end{equation}
The collection of $g_\gm\ksup{k}$ is also a basis for $\pi_k(N_k)$.
The advantage of clearing denominators is that $g_\gm\ksup{k}$ and $g_\gm\ksup{k+1}$ now have the same relationship as in the blackboard framed case:
$g_\gm\ksup{k}=\anyzeta_\gm g_\gm\ksup{k+1}$,
just as in Equation~\ref{ggmkvkplusone}. The equality follows from the fact that \[\weight_k(\gm)+\weight_{k,\into}(\gm)+\weight_{k,\tau}(\gm)=\weight_{k+1}(\gm)+\weight_{k+1,\into}(\gm)\] for any $\gm$.
With the new notation of this section, we may again say that
\begin{equation*}
\cG_0=\left\{\nu g_\gm\ksup{k}\,\vert\,\gm\subset G\right\}\cup\left\{\nu\ztau\ksup{k+1}-\ztau\ksup{k+1}\right\}
\end{equation*}
is our starting basis for Buchberger's Algorithm. Compare to Equation~\ref{startingbasis}.
\subsection{Modifications for the example in Section~\ref{examplesection}}
\label{twistedexamplesection}
Refer to Figure~\ref{smalltwistedexample}. The various weights associated to $\gm$, $\dl$, and $\gm\cup\dl$ are shown in Table~\ref{weighttable}. The generators associated to $\gm$, $\dl$, and $\gm\cup\dl$ are as follows. Observe that the generators of $N_1$ and $N_2$ are related exactly as they were in the blackboard framing case after we have modified their coefficients as described in the previous section.
\begin{table}
\begin{center}
\renewcommand{\tabcolsep}{11pt}
\begin{tabular}{lll}
$\weight_1(\gm)=\ell_0$ &
$\weight_1(\dl)=0$ &
$\weight_1(\gm\cup\dl)=\ell_0+\ell_2$ \\
$\weight_{1,\into}(\gm)=\ell_2$ &
$\weight_{1,\into}(\dl)=\ell_3+\ell_5$ &
$\weight_{1,\into}(\gm\cup\dl)=\ell_3+\ell_5$ \\
$\weight_{1,\tau}(\gm)=0$ &
$\weight_{1,\tau}(\dl)=\ell_1$ &
$\weight_{1,\tau}(\gm\cup\dl)=\ell_1$ \\
$\weight_2(\gm)=\ell_0$ &
$\weight_2(\dl)=0$ &
$\weight_2(\gm\cup\dl)=\ell_0+\ell_1+\ell_2+\ell_3$ \\
$\weight_{2,\into}(\gm)=\ell_2$ &
$\weight_{2,\into}(\dl)=\ell_1+\ell_3+\ell_5$ &
$\weight_{2,\into}(\gm\cup\dl)=\ell_5$ \\
\end{tabular}
\end{center}
\caption{Weights for the graphs in Figure~\ref{smalltwistedexample}.}
\label{weighttable}
\end{table}
\begin{align*}
h_\gm\ksup{1}&=t^{\weight_{1,\into}(\gm)}\left(t^{\weight_1(\gm)}x_1-t^{-\ell_2}x_2\right)\\
&=t^{\ell_0+\ell_2}x_1-x_2\\
g_\gm\ksup{1}&=t^{\weight_{1,\tau}(\gm)}\pi_1\!\left(h_\gm\ksup{1}\right)\\
&=t^{\ell_0+\ell_2}x_1-x_2\\
g_\gm\ksup{2}&=t^{\weight_{2,\into}(\gm)}\left(t^{\weight_2(\gm)}x_1-t^{-\ell_2}x_2\right)\\
&=t^{\ell_0+\ell_2}x_1-x_2
\end{align*}
\begin{align*}
h_\dl\ksup{1}&=t^{\weight_{1,\into}(\dl)}\left(t^{\weight_1(\dl)}x_2x_4-t^{-\ell_3-\ell_5}x_3x_5\right)\\
&=t^{\ell_3+\ell_5}x_2x_4-x_3x_5\\
g_\dl\ksup{1}&=t^{\weight_{1,\tau}(\dl)}\pi_1\!\left(h_\dl\ksup{1}\right)\\
&=t^{\ell_1}\left(t^{\ell_3+\ell_5}x_2x_4-t^{-\ell_1}x_1x_5\right)\\
&=t^{\ell_1+\ell_3+\ell_5}x_2x_4-x_1x_5\\
g_\dl\ksup{2}&=t^{\weight_{2,\into}(\dl)}\left(t^{\weight_2(\dl)}x_2x_4-t^{-\ell_1-\ell_3-\ell_5}x_1x_5\right)\\
&=t^{\ell_1+\ell_3+\ell_5}x_2x_4-x_1x_5
\end{align*}
\begin{align*}
h_{\gm\cup\dl}\ksup{1}&=t^{\weight_{1,\into}(\gm\cup\dl)}\left(t^{\weight_1(\gm\cup\dl)}x_1x_4-t^{-\ell_3-\ell_5}x_3x_5\right)\\
&=t^{\ell_0+\ell_2+\ell_3+\ell_5}x_1x_4-x_3x_5\\
g_{\gm\cup\dl}\ksup{1}&=t^{\weight_{1,\tau}(\gm\cup\dl)}\pi_1\!\left(h_{\gm\cup\dl}\ksup{1}\right)\\
&=t^{\ell_1}\left(t^{\ell_0+\ell_2+\ell_3+\ell_5}x_1x_4-t^{-\ell_1}x_1x_5\right)\\
&=t^{\ell_0+\ell_1+\ell_2+\ell_3+\ell_5}x_1x_4-x_1x_5\\
g_{\gm\cup\dl}\ksup{2}&=t^{\weight_{2,\into}(\gm\cup\dl)}\left(t^{\weight_2(\gm\cup\dl)}x_4-t^{-\ell_5}x_5\right)\\
&=t^{\ell_0+\ell_1+\ell_2+\ell_3+\ell_5}x_4-x_5
\end{align*}
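As a quick consistency check, the equality of weights noted at the end of Section~\ref{twistedsetup} can be read off from Table~\ref{weighttable}; for $\gm\cup\dl$, for instance,
$$
\underbrace{(\ell_0+\ell_2)}_{\weight_1}+\underbrace{(\ell_3+\ell_5)}_{\weight_{1,\into}}+\underbrace{\ell_1}_{\weight_{1,\tau}}
=\underbrace{(\ell_0+\ell_1+\ell_2+\ell_3)}_{\weight_2}+\underbrace{\ell_5}_{\weight_{2,\into}}.
$$
Likewise, the displayed generators satisfy $g_\gm\ksup{1}=g_\gm\ksup{2}$, $g_\dl\ksup{1}=g_\dl\ksup{2}$, and $g_{\gm\cup\dl}\ksup{1}=x_1\,g_{\gm\cup\dl}\ksup{2}$, so the relation $g_\Lambda\ksup{1}=\anyzeta_\Lambda g_\Lambda\ksup{2}$ holds with $\anyzeta_{\gm\cup\dl}$ equal to the closed edge labeled $x_1$ and $\anyzeta_\gm=\anyzeta_\dl=1$.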
\begin{figure}
\begin{center}
$$\scalebox{1.3}{\input{smalltwistedexample.tex}}$$
\caption{Modification of Figure~\ref{smallexample} for arbitrary framings.}
\label{smalltwistedexample}
\end{center}
\end{figure}
\subsection{Buchberger's Algorithm}
We use the same monomial order on $\cE_k$ from Definition~\ref{edgeringmo}. Coefficients play no role in the definition or application of a monomial order, so our analysis of the leading terms of standard generators of $N(G\ksup{k})$ and $Z_k$ in Sections~\ref{mosec} and~\ref{startingbasissec} goes through unchanged for arbitrary framings.
The propositions in which S-polynomials are computed also change very little, since analysis of greatest common divisors and least common multiples of leading monomials is not affected by the presence of coefficients. We merely have to check that the exponents on the factors of $t$ work out correctly. They do, because both $\weight_i+\weight_{i,\into}$ and $\weight_{k,\tau}$ are additive under disjoint union. Define a total weight $\Weight$ by
\[\Weight(\gm)=\weight_k(\gm)+\weight_{k,\into}(\gm)+\weight_{k,\tau}(\gm)=\weight_{k+1}(\gm)+\weight_{k+1,\into}(\gm).\] Note that $\Weight$ is also additive under disjoint union.
Analyses involving the monomial order are also unaffected by the presence of coefficients, so reduction arguments generally proceed in the same way as before. In particular, Observation~\ref{MOanddiv} carries through unchanged.
One technical note is in order before we check through Sections~\ref{bbrd1} and~\ref{bbrd2}: We will clear denominators before adding any new generator to our working basis. This does not affect the progress of the algorithm. If $f, g, h\in\itk[\underline{x}]$ and $a\in\itk$ is non-zero, then $S(af,g)=S(f,g)$ and $g\xrightarrow{f}h$ if and only if $g\xrightarrow{af}h$. That is, multiplying a generator in the working basis by a non-zero element of the ground field does not affect its properties with respect to S-polynomials or the division algorithm, which means that it does not affect its role in Buchberger's Algorithm.
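For a one-line sanity check with generic polynomials (not the document's generators): in $\itk[x,y]$ with the lexicographic order $x>y$, take $f=x^2-y$ and $g=xy-1$, so that $\lt{f}=x^2$ and $\lt{g}=xy$. For any non-zero $a\in\itk$,
$$
S(af,g)=\frac{y}{a}\,(af)-x\,g=y\,f-x\,g=S(f,g)=x-y^{2},
$$
and the same cancellation of scalars occurs in each step of the division algorithm.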
\subsection*{Reprise of Section~\ref{nutoponespolys}}
Proposition~\ref{nutopone} is already stated in sufficient generality for arbitrary framings. After clearing denominators, the application of Proposition~\ref{nutopone} goes through as explained in the blackboard case. In particular, it remains true that if $\ztau\ksup{k+1}$ does not divide both terms of $g_\gm\ksup{k}$, then $g_\gm\ksup{k}=g_\gm\ksup{k+1}$. So, if $\ztau\ksup{k+1}$ fails to divide the leading term of $g_\gm\ksup{k}$, then we apply Statement~(\ref{nutop2nunodiv}) of Proposition~\ref{nutopone} and add $\ztau\ksup{k+1}g_\gm\ksup{k+1}$ to the working basis $\cG^\prime$.
Let $g_\gm^{(k),\out}$, $g_\gm^{(k),\into}$, $g_\gm^{(k+1),\out}$, and $g_\gm^{(k+1),\into}$ be monomials defined exactly as in the blackboard case (i.e., all with coefficient 1). Define the tilde generators to be
\begin{align*}
\widetilde{g}_\gm&=\nu g_\gm^{(k),\into}-t^{\Weight(\gm)}g_\gm^{(k), \out} \text{ if } \gm \text{ is out-led}\\
\widetilde{g}_\gm&= \nu t^{\Weight(\gm)}g_\gm^{(k), \out}-g_\gm^{(k), \into}
\text{ if } \gm \text{ is in-led.}
\end{align*}
If $\ztau\ksup{k+1}$ divides the leading term of $g_\gm\ksup{k}$, then we apply Statement~(\ref{nutop2nu}) of Proposition~\ref{nutopone} to see that the S-polynomial between $\nutopk$ and $\nu g_\gm\ksup{k}$ still produces $\widetilde{g}_\gm$, possibly after clearing denominators. If $\ztau\ksup{k+1}$ also divides the trailing term of $g_\gm\ksup{k}$, then we have the same reduction to $\ztau\ksup{k+1}g_\gm\ksup{k+1}$ as in the blackboard case.
Overall, then, S-polynomials between $\nutopk$ and generators of the form $\nu g_\gm\ksup{k}$ yield the working basis $\cG^\prime$ defined at the end of Section~\ref{nutoponespolys}.
\subsection*{Reprise of Section~\ref{subsetspolys}} Propositions~\ref{separateinteriorclosure},~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary} are not specific to the blackboard case. We may still use these as building blocks to describe the outcome of S-polynomials among generators of the form $\nu g_\gm\ksup{k}$ for arbitrary framings -- that is, to prove the following analogue of Proposition~\ref{ksubsetoverlap}.
\begin{prop}[Analogue of Proposition~\ref{ksubsetoverlap}]
\label{twistedksubsetoverlap}
Let $\gm, \dl\subset G$. After clearing denominators, the following statements hold in $\cE_{k+1}$, up to multiplication by non-zero elements of the ground field.
\begin{align*}
\tag{1}\label{twistedksubsetoverlapout}
\text{If $\gm$ and $\dl$ are both out-led, then}\\
S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})=&\,\xsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\cap\dl}\ksup{k+1}\zsub{\beta}{\gm\cap\dl}\ksup{k+1}\\
&\cdot\left(t^{\Weight(\dl\sm(\gm\cap\dl))}g_{\dl\sm(\gm\cap\dl)}^{(k+1), \out}g_{\gm\sm(\gm\cap\dl)}^{(k+1), \into}\right.\\
&\left.-\,t^{\Weight(\gm\sm(\gm\cap\dl))}g_{\gm\sm(\gm\cap\dl)}^{(k+1), \out}g_{\dl\sm(\gm\cap\dl)}^{(k+1), \into}\right)\\
&\xrightarrow{g_{\gm\sm(\gm\cap\dl)}\ksup{k+1},g_{\dl\sm(\gm\cap\dl)}\ksup{k+1}}0\\
\tag{2}\label{twistedksubsetoverlapin}
\text{If $\gm$ and $\dl$ are both in-led, then}\\
S(g\ksup{k+1}_\gm,g\ksup{k+1}_\dl)&=\,\xsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm(\gm\cup\dl)}\ksup{k+1}\zsub{\gm\cap\dl}{\tau}\ksup{k+1}\\
&\cdot\left(t^{\Weight(\gm\sm(\gm\cap\dl))}g_{\dl\sm(\gm\cap\dl)}^{(k+1),\into}g_{\gm\sm(\gm\cap\dl)}^{(k+1),\out}\right.\\
&\left.-\,t^{\Weight(\dl\sm(\gm\cap\dl))}g_{\gm\sm(\gm\cap\dl)}^{(k+1),\into}g_{\dl\sm(\gm\cap\dl)}^{(k+1),\out}\right)\\
&\xrightarrow{g_{\gm\sm(\gm\cap\dl)}\ksup{k+1},g_{\dl\sm(\gm\cap\dl)}\ksup{k+1}}0\\
\tag{3}\label{twistedksubsetoverlapmix}
\text{If $\gm$ is in-led and $\dl$ is out-led, then}\\
S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})&=\,\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\\
&\cdot\left(t^{\Weight(\gm\cup\dl)+\Weight(\gm\cap\dl)}g_{\gm\cup\dl}^{(k+1),\out} g_{\gm\cap\dl}^{(k+1),\out}\right.\\
&\left.-\,g_{\gm\cup\dl}^{(k+1),\into} g_{\gm\cap\dl}^{(k+1),\into}\right)\\
&\xrightarrow{g_{\gm\cup\dl}^{(k+1)},g_{\gm\cap\dl}\ksup{k+1}}0.
\end{align*}
\end{prop}
\begin{proof}
Apply Propositions~\ref{separateinteriorclosure},~\ref{subsetoverlapinterior},~\ref{subsetoverlapclosure}, and~\ref{subsetoverlapboundary} to the definitions of $g_\gm\ksup{k+1}$ and $g_\dl\ksup{k+1}$ given in Equation~\ref{ggmkplusonetw}, then multiply by appropriate powers of $t$ to clear any negative exponents. The reduction statements follow from Proposition~\ref{unorderedreduction}.
\end{proof}
Lemma~\ref{subsetoverlap} remains true as stated. We follow through the details of the proof just to be careful.
\begin{proof}[Proof of Lemma~\ref{subsetoverlap} for arbitrary framings]
We established at the beginning of Section~\ref{nonbbframings} that \[g_\gm\ksup{k}=\anyzeta_\gm g_\gm\ksup{k+1}\] holds for arbitrary framings. We have also seen that Proposition~\ref{unorderedreduction} and Observation~\ref{MOanddiv} carry through unchanged, so the division into cases in the proof of Lemma~\ref{subsetoverlap} remains valid. Proposition~\ref{twistedksubsetoverlap} (the analogue of Proposition~\ref{ksubsetoverlap}) immediately allows us to reduce the S-polynomials in Cases 1 and 2(a) to zero using generators in the working basis.
The analysis of Case 2(b) proceeds as before, establishing that $\ztau\ksup{k+1}$ goes between $\gm\cap\dl$ and $\gm\sm(\gm\cap\dl)$. If $\ztau\ksup{k+1}$ is outgoing from $\gm\cap\dl$, then Proposition~\ref{twistedksubsetoverlap} says that
\begin{align*}
\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})&=\,\nu \xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\\
&\cdot\left(t^{\Weight(\gm\cup\dl)+\Weight(\gm\cap\dl)}g_{\gm\cup\dl}^{(k+1),\out} g_{\gm\cap\dl}^{(k+1),\out}-g_{\gm\cup\dl}^{(k+1),\into} g_{\gm\cap\dl}^{(k+1),\into}\right)
\end{align*}
with the term order shown. As in the original proof, we may reduce by $\nu\ztau\ksup{k+1}-\ztau\ksup{k+1}$ to obtain
\begin{align*}
\nu S(g_\gm\ksup{k+1},g_\dl\ksup{k+1})\xrightarrow{\nutopk}\, &\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}\\
&\cdot\left(\nu g_{\gm\cup\dl}^{(k+1),\into} g_{\gm\cap\dl}^{(k+1),\into}\right.\\
&\left. -\, t^{\Weight(\gm\cup\dl)+\Weight(\gm\cap\dl)}g_{\gm\cup\dl}^{(k+1),\out} g_{\gm\cap\dl}^{(k+1),\out}\right)
\end{align*}
and then by $$\widetilde{g}_{\gm\cap\dl}=\nu g_{\gm\cap\dl}^{(k+1),\into} - t^{\Weight(\gm\cap\dl)}g_{\gm\cap\dl}^{(k+1),\out}$$
to produce
\begin{align*}
&\quad\xsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\zsub{\gm\sm(\gm\cap\dl)}{\dl\sm(\gm\cap\dl)}\ksup{k+1}t^{\Weight(\gm\cap\dl)}g_{\gm\cap\dl}^{(k+1),\out}g_{\gm\cup\dl}\ksup{k+1}
\end{align*}
and finally by $\ztau\ksup{k+1}g_{\gm\cup\dl}\ksup{k+1}$ to get zero. There is a similar argument for the reduction if $\ztau\ksup{k+1}$ is incoming to $\gm\cap\dl$.
In the analysis of Case 3, the initial argument remains the same, so we may assume without loss of generality that $\ztau\ksup{k+1}$ goes from $\dl\sm(\gm\cap\dl)$ to $\gm\sm(\gm\cap\dl)$. We may likewise assume that the working basis contains tilde generators for $\gm$ and $\dl$ with the forms
\[\widetilde{g}_\gm = \nu t^{\Weight(\gm)} g_\gm^{(k+1),\out} - g_\gm^{(k+1),\into} \quad\text{and}\quad\widetilde{g}_\dl = \nu g_\dl^{(k+1),\into} - t^{\Weight(\dl)}g_\dl^{(k+1),\out}.\]
The appropriate re-factoring of Statement (3) in Proposition~\ref{twistedksubsetoverlap} is then
\begin{align*}
\nu S(g_\gm\ksup{k+1},\, &g_\dl\ksup{k+1})=\\
&\nu t^{\Weight(\dl)}\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\\
\,\cdot&\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl}{\tau}\ksup{k+1}
t^{\Weight(\gm)}g_\gm^{(k+1),\out}\\
\,- &\nu\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\zsub{G\sm\gm}{\gm\cap\dl}\zsub{\beta}{\gm}\ksup{k+1}
g_{\dl}^{(k+1),\into}
\end{align*}
with unknown term order. As in the argument for blackboard framings, this expression is reducible by either $\widetilde{g}_\gm$ or $\widetilde{g}_\dl$ depending on its term order, and then by whichever of the tilde generators was not already used. For the sake of illustration, we reduce by $\widetilde{g}_\gm$ and then $\widetilde{g}_\dl$ as follows.
\begin{align*}
\nu S(&g_\gm\ksup{k+1}, g_\dl\ksup{k+1})\xrightarrow{\widetilde{g}_\gm}\\
&\nu\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\zsub{G\sm\gm}{\gm\cap\dl}\zsub{\beta}{\gm}\ksup{k+1}
g_{\dl}^{(k+1),\into}\\
\,- & t^{\Weight(\dl)}\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl}{\tau}\ksup{k+1}
g_\gm^{(k+1),\into}\\
\xrightarrow{\widetilde{g}_\dl}\\
& t^{\Weight(\dl)}\xsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\xsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl\sm(\gm\cap\dl)}{G\sm(\gm\cup\dl)}\zsub{\gm\cap\dl}{G\sm\dl}\zsub{\dl}{\tau}\ksup{k+1}
g_\gm^{(k+1),\into}\\
\,- &t^{\Weight(\dl)}\xsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\xsub{G\sm\gm}{\gm\cap\dl}\zsub{G\sm(\gm\cup\dl)}{\gm\sm(\gm\cap\dl)}\zsub{G\sm\gm}{\gm\cap\dl}\zsub{\beta}{\gm}\ksup{k+1}
g_{\dl}^{(k+1),\out}.
\end{align*}
The final expression can be seen to be zero by re-factoring exactly as in the original proof.
\end{proof}
This completes our reprise of Section~\ref{bbrd1}. We are left with the same working basis $\cG^\prime$ as in the blackboard case, given the changes to notation described at the beginning of Section~\ref{nonbbframings}.
\subsection*{Reprise of Section~\ref{bbrd2}}
As in the blackboard case, we must confirm that S-polynomials involving the generators added to the working basis in Round 1 may be reduced to zero within the working basis. The arguments generalize easily to arbitrary framings, but we verify them line-by-line for the sake of completeness.
Applying Statement~(\ref{nutop0nu}) of Proposition~\ref{nutopone} to $S(\nu\ztau\ksup{k+1}-\ztau\ksup{k+1},\ztau\ksup{k+1}g_\gm\ksup{k+1})$ produces a multiple of the blackboard result by $1/\lc{g_\gm\ksup{k+1}}$. That result can be reduced to zero in the same way as before just by multiplying each step of the computation by the same factor of $1/\lc{g_\gm\ksup{k+1}}$. So, as before, \[S(\nutopk,\ztau\ksup{k+1}g_\gm\ksup{k+1})\xrightarrow{\nutopk, \ztau\ksup{k+1}g_\gm\ksup{k+1}}0.\]
Propositions~\ref{gbprinciplecoeff} and~\ref{twistedksubsetoverlap} may be applied to
$S(\ztau\ksup{k+1}g_\gm\ksup{k+1},\ztau\ksup{k+1}g_\dl\ksup{k+1})$ to see that it reduces to zero by some combination of $\ztau\ksup{k+1}g_\Lambda\ksup{k+1}$ for $\Lambda\in\{\gm\sm(\gm\cap\dl)$, $\dl\sm(\gm\cap\dl)$, $\gm\cap\dl$, $\gm\cup\dl\}$, all of which are in $\cG^\prime$.
Lemma~\ref{ztautilde} holds as stated for arbitrary framings. The proof is the same, except that the computations must be multiplied by $\displaystyle{\frac{1}{\lc{g_\gm\ksup{k+1}}\tc{g_\dl\ksup{k+1}}}},$ where $\mathrm{TC}$ denotes the coefficient on the trailing term. These factors do not affect any analysis about leading terms or the availability of the generators needed to reduce the S-polynomial to zero, so the argument for reduction to zero is unchanged.
Lemma~\ref{doubletilde} also holds as stated for arbitrary framings. The expression that must be reduced in the proof for arbitrary framings is the product of the expression in Line~\ref{expandmedoubletilde} of the blackboard case with $\displaystyle{\frac{1}{\tc{g_\gm\ksup{k+1}}\tc{g_\dl\ksup{k+1}}}}$. The method of reduction then depends on how this expression expands in terms of $g_\Lambda\ksup{k+1}$ for $\Lambda\in\left\{\gm\sm(\gm\cap\dl),\dl\sm(\gm\cap\dl),\gm\cap\dl,\gm\cup\dl\right\}$. Just as the possible cases paralleled Proposition~\ref{ksubsetoverlap} in the blackboard setting, they parallel Proposition~\ref{twistedksubsetoverlap} in the non-blackboard setting. In each case, the arguments about term order and reduction to zero by generators in $\cG^\prime$ are identical in the blackboard and non-blackboard settings.
The proof of Lemma~\ref{2nunonu} relies entirely on results that we have already generalized to arbitrary framings, so we may conclude that the lemma holds as stated for arbitrary framings.
Finally, Lemma~\ref{tildeprop} holds as stated. Since its proof involves some explicit calculations, we check it line by line.
\begin{proof}[Proof of Lemma~\ref{tildeprop} for arbitrary framings]
The breakdown into cases is unaffected by the presence of coefficients. In Cases 1 and 2, the expanded S-polynomials for arbitrary framings are the expressions from the blackboard case multiplied by a factor of $\displaystyle{\frac{1}{\lc{g_\gm\ksup{k+1}}\tc{g_\dl\ksup{k+1}}}}$. Multiplying through by that factor while reducing makes the reduction by $\nu\ztau\ksup{k+1}-\ztau\ksup{k+1}$ and $\ztau\ksup{k+1}g_\Lambda\ksup{k+1}$ or $\widetilde{g}_\gm$ and $\ztau\ksup{k+1}g_\dl\ksup{k+1}$ possible, just as before.
Case 3 breaks into subcases as in the blackboard case. When $\ztau\ksup{k+1}$ goes from $G\sm(\gm\cup\dl)$ to $\dl\sm(\gm\cap\dl)$, the expanded S-polynomial expression with coefficients is
\begin{align*}
S(g_\gm\ksup{k}, \widetilde{g}_\dl)
=&\,\xsub{G\sm(\gm\cap\dl)}{\gm\cap\dl}\zsub{G\sm(\gm\cap\dl)}{\gm\cap\dl}\ksup{k+1}\zsub{\beta}{\gm\cap\dl}\ksup{k+1}\\
&\cdot\,\left(\nu t^{\Weight(\dl\sm(\gm\cap\dl))}g_{\dl\sm(\gm\cap\dl)}^{(k+1),\out}g_{\gm\sm(\gm\cap\dl)}^{(k+1),\into} - t^{\Weight(\gm\sm(\gm\cap\dl))}g_{\gm\sm(\gm\cap\dl)}^{(k+1),\out}g_{\dl\sm(\gm\cap\dl)}^{(k+1),\into}\right).
\end{align*}
This expression reduces by $\widetilde{g}_{\dl\sm(\gm\cap\dl)}$ and then $\ztau\ksup{k+1}g_{\gm\sm(\gm\cap\dl)}\ksup{k+1}$ as before. When $\ztau\ksup{k+1}$ goes from $\dl\sm(\gm\cap\dl)$ to $G\sm(\gm\cup\dl)$, the expanded S-polynomial is
\begin{align*}
S(g_\gm\ksup{k}, \widetilde{g}_\dl)=&\,\xsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\zsub{\dl\sm(\gm\cap\dl)}{\gm\sm(\gm\cap\dl)}\ksup{k+1}\\
&\cdot\,\left(\nu g_{\gm\cup\dl}^{(k+1),\into}g_{\gm\cap\dl}^{(k+1),\into}-t^{\Weight(\gm\cup\dl)}g_{\gm\cup\dl}^{(k+1),\out}t^{\Weight(\gm\cap\dl)}g_{\gm\cap\dl}^{(k+1),\out}\right),
\end{align*}
which reduces by $\widetilde{g}_{\gm\cup\dl}$ and then $\ztau\ksup{k+1}g_{\gm\cap\dl}\ksup{k+1}$, as before.
\end{proof}
This completes the analysis of S-polynomials and reductions parallel to Section~\ref{bbrd2}, along with the proof of Lemma~\ref{buchend} for arbitrary framings. Since all of the necessary reductions to zero can be accomplished without expanding the working basis, Buchberger's Algorithm terminates with the working basis $\cG^\prime$ described above. The proof of Theorem~\ref{idealqthmintro} from Lemma~\ref{buchend} is identical to that in Section~\ref{lemmatothmsec}. Intersecting with $\cE_{k+1}$ and then dividing each of the remaining generators by $\ztau\ksup{k+1}$, we produce the set of $g_\gm\ksup{k+1}$ as a basis for the ideal quotient $\pi_k\!\left(N_k\right):\left(\ztau\ksup{k+1}\right)$. Therefore, we have established Theorem~\ref{idealqthmintro} in the full generality in which it was stated.
\bibliographystyle{amsplain}
\section{Introduction}
Mirror symmetry predicts dualities between quantum geometries on Calabi-Yau manifolds. The two sides of the dual theories are called the \emph{A-model}
and the \emph{B-model} respectively. The A-model is related to symplectic geometry and is mathematically established as \emph{Gromov-Witten theory}, the theory of
counting holomorphic maps. The B-model is related to complex geometry and can be understood via \emph{Kodaira-Spencer gauge theory}. This
gauge theory was proposed by Bershadsky, Cecotti, Ooguri and Vafa \cite{BCOV} as a closed string analogue of Chern-Simons theory \cite{Witten-CS} in the B-model, and its classical theory describes the deformation of complex structures. We refer to \cite{Barannikov-Kontsevich,Si-BCOV, Si-elliptic,Si-review, Si-gauge} for some recent mathematical
development of its quantum geometry.
Although Kodaira-Spencer gauge theory provides the geometry of the B-model from the point of view of string/gauge duality, a direct mathematical approach
to the B-model in the spirit of the $\sigma$-model is still lacking. The main difficulty is the lack of a well-defined path integral measure on the infinite dimensional
mapping space. Thanks to supersymmetry, the physics of the B-model path integral is expected to be fully encoded in a small neighborhood of the constant
maps. This allows one to extract physical quantities via classical geometry, such as Yukawa couplings for genus-$0$ correlation functions (see \cite{mirror} for an introduction).
It is thus desirable to have
a mathematical theory that
reveals this physics in the vicinity of constant maps, parallel to the localized space of holomorphic maps in the A-model.
The main purpose of the current paper is to provide a rigorous geometric model for analyzing the B-model via the mapping space. To illustrate our method, we will focus on topological
field theory in this paper, leaving the topological string, i.e.\ the coupling with gravity, to future work. In the rest of the introduction, we
sketch the main ideas and explain our construction. A closely related development of the B-model in physics was recently communicated to us by Losev \cite{Losev}.
The geometry of the B-twisted $\sigma$-model (in the spirit of the AKSZ-formalism \cite{AKSZ}) describes the mapping
$$
(\Sigma_g)_{dR}\to T^\vee_X[1],
$$
where $(\Sigma_g)_{dR}$ is the ringed space whose structure sheaf is the de Rham complex of the Riemann surface $\Sigma_g$, and $T^\vee_X[1]$ is the
super-manifold associated to the cotangent bundle of $X$, with the fiber direction shifted by degree one. The full mapping space
is difficult to analyze. Instead, we
will consider the mapping space in the formal neighborhood of constant maps. This reduction was proposed in \cite{Kevin-SUSY} to fit
into the effective renormalization method developed in \cite{Kevin-book}, so that the corresponding perturbative quantum field theory can be
rigorously analyzed; this is the main content of the current paper. As mentioned above, zooming into the neighborhood of constant maps in the B-model loses no physical information, thanks to supersymmetry.
\noindent \textbf{Notations}: We fix some notations that will be used throughout the paper. For a smooth manifold $M$, we
let $\mathcal A_M$ denote the sheaf of smooth differential forms on $M$ equipped with the de Rham differential, and let $\mathcal A_M^\sharp$ denote the
same sheaf with the de Rham differential forgotten:
$$
\mathcal A_M:=(\mathcal A_M^\sharp, d_M).
$$
$D_M$ denotes the sheaf of smooth differential operators on $M$. When $M$ is a complex manifold, ${\mathcal O}_M$ denotes the sheaf of
holomorphic
functions, and $T_M$ denotes either the holomorphic tangent bundle or the sheaf of holomorphic tangent vectors, its
meaning being clear from the context; similarly for the dual $T_M^\vee$. We will use $\Omega_M^\bullet$ to denote the
holomorphic de Rham complex on $M$, and $D_M^{hol}$ the sheaf of holomorphic differential operators. A
tensor product $\otimes$ without a specified base ring means $\otimes_{\mathbb{C}}$.
\subsection{Calabi-Yau model}
The space of fields describing our B-twisted $\sigma$-model is given by
$$
\mathcal E:= \mathcal A_{\Sigma_g}\otimes (\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee),
$$
where $\mathfrak{g}_X$ is the sheaf of curved $L_\infty$-algebras on $X$ describing its complex geometry \cite{Kevin-CS}. As a sheaf itself,
$$
\mathfrak{g}_X=\mathcal A^\sharp_X\otimes_{{\mathcal O}_X} T_X[-1], \quad \mathfrak{g}_X^\vee =\mathcal A^\sharp_X\otimes_{{\mathcal O}_X} T_X^\vee[1].
$$
The Chevalley-Eilenberg complex $C^*(\mathfrak{g}_X)$ is a resolution of the sheaf ${\mathcal O}_X$ of holomorphic functions on $X$, and the curved $L_\infty$-algebra $\mathfrak{g}_X\oplus \mathfrak{g}_X^\vee[-1]$ describes the derived geometry of $T^\vee_X[1]$ (see Section \ref{section-classical} for details).
The Chevalley-Eilenberg differential and the natural symplectic pairing equip $T^\vee_X[1]$ (more precisely its $L_\infty$-enrichment) with the structure of a QP-manifold \cite{AKSZ}. The action functional is constructed via the AKSZ-formalism in the same fashion as in \cite{Kevin-CS}, and is formally written as
$$
S(\alpha+\beta)=\int_{\Sigma_g} \langle d_{\Sigma_g}\alpha, \beta\rangle +\sum_{k\geq 0}\abracket{{l_k(\alpha^{\otimes k})\over (k+1)!}, \beta}
$$
for $\alpha \in \mathcal A_{\Sigma_g}\otimes \mathfrak{g}_X[1], \beta\in \mathcal A_{\Sigma_g}\otimes \mathfrak{g}_X^\vee$. Here the $l_k$'s are the
$L_\infty$-products of $\mathfrak{g}_X$. By construction, the action functional satisfies a version of the classical master
equation (see section \ref{classical action}). One interesting feature is that $S$ contains only one derivative (coming from
$d_{\Sigma_g}$), and the first-order formulation has been used (e.g. \cite{Losev-I,CDC,Frenkel-Losev}) to describe the twisted $\sigma$-model around the large volume limit. We follow the more recent formulation \cite{Kevin-CS,Owen-Ryan}, using $L_\infty$-algebras via jet bundles as a coherent way to carry out the perturbative expansion over the target
manifold $X$. In fact, the terms involving the $L_\infty$ products exactly represent the curvature of the target (see \cite{Kevin-CS}
for an explanation) in terms of jets.
We would like to carry out perturbative quantization via Feynman diagrams on the infinite dimensional space $\mathcal E$, in analogy with the ordinary non-linear $\sigma$-model \cite{Friedan}. A convenient framework, the effective Batalin-Vilkovisky formalism, was
developed by Costello \cite{Kevin-book}, and we will analyze the quantization problem via this approach.
\begin{thm}[Theorem \ref{thm:obstruction-quantization}, Theorem \ref{thm:one-loop-correction}]
Let $X$ be a complex manifold.
\begin{enumerate}
\item The obstruction to the existence of perturbative quantization of our B-twisted topological $\sigma$-model is given by $(2-2g)c_1(X)$, where $g$
is the genus of the Riemann surface $\Sigma_g$ and $c_1(X)$ is the first Chern-class of $X$.
\item If $c_1(X)=0$, i.e.\ $X$ is Calabi-Yau, then there exists a canonical perturbative quantization associated to a choice of holomorphic volume
form $\Omega_X$.
\end{enumerate}
\end{thm}
We refer to section \ref{section-quantization} for the precise meaning of the theorem. The theorem is proved by analyzing Feynman diagrams
with the heat kernel on $\Sigma_g$ associated to the constant curvature metric, and it is consistent with the physics expectation that the
B-twist only exists on Calabi-Yau manifolds. Similar results on the half-twisted B-model and 2d holomorphic Chern-Simons theory
have been obtained in \cite{Yuan, Yuan-Owen} via the background field method. Another approach to the topological B-model via D-module
techniques was communicated to us by Rozenblyum \cite{Nick}.
Given a perturbative quantization, there exists a rich structure of factorization algebras on observables, developed by Costello and Gwilliam
\cite{Kevin-Owen}. In our case of a quantum field theory in two dimensions, the factorization product on local observables gives rise to the structure
of an $E_2$-algebra. A perturbative quantization of a so-called cotangent field theory (to which our Calabi-Yau model belongs) can be
viewed as defining a certain \emph{projective volume form} on the space of fields \cite{Kevin-CS}. This allows us to define correlation functions
for local observables via the
local-to-global factorization product. The next theorem concerns the local and global observables in our model.
\begin{thm}
Let $X$ be a compact Calabi-Yau with holomorphic volume form $\Omega_X$.
\begin{enumerate}
\item The cohomology of local quantum observables on any disk $U\subset \Sigma_g$ is $H^*(X, \wedge^* T_X)[[\hbar]]$.
\item The complex of quantum observables on $\Sigma_g$ is quasi-isomorphic to the de Rham complex of a trivial local system on $X$ concentrated at
degree $(2g-2)\dim_{\mathbb C}X$.
\end{enumerate}
\end{thm}
See section \ref{section:global-observable} for the explanation.
While observables in Gromov-Witten theory are described by de Rham cohomology classes, the observables in the B-model are described by polyvector fields. Let
$\mu_i\in H^*(X, \wedge^* T_X)$, and let $ \amalg_iU_i\subset \Sigma_g$ be a disjoint union of disks on $\Sigma_g$. Let
$O_{\mu_i,U_i}$ be a local observable in $U_i$ representing $\mu_i$ via the above theorem. Then the factorization product with
respect to the embedding
$$
\amalg_i U_i \hookrightarrow \Sigma_g
$$
gives a global observable $O_{\mu_1,U_1}\star O_{\mu_2,U_2}\star\cdots\star O_{\mu_k,U_k}$. Following \cite{Kevin-CS}, the
correlation function of the topological field theory is defined by the natural integration
$$
\abracket{O_{\mu_1,U_1}, \cdots, O_{\mu_k,U_k}}_{\Sigma_g}:= \int_X \bbracket{O_{\mu_1,U_1}\star O_{\mu_2,U_2}\star\cdots\star
O_{\mu_k,U_k}} \in \mathbb C((\hbar)).
$$
Here $\bbracket{-}$ is the de Rham cohomology class represented by the quantum observable as in the second part of the above theorem. The degree
shifting implies that the correlation function is zero unless $\sum\limits_i \deg O_{\mu_i,U_i}=\sum\limits_i \deg \mu_i=(2-2g)\dim_{\mathbb C}X$.
Explicit calculation on the sphere gives
\begin{thm}[Theorem \ref{correlation-P1}]
Let $\Sigma_g=\mathbb P^1$, and let $X$ be a compact Calabi-Yau with holomorphic volume form $\Omega_X$. Then
$$
\abracket{O_{\mu_1,U_1}, \cdots, O_{\mu_k,U_k}}_{\mathbb P^1}= \hbar^{\dim_{\mathbb{C}}X}\int_X \bracket{\mu_1\cdots\mu_k \vdash \Omega_X}\wedge \Omega_X,
$$
where $\hbar$ is a formal variable.
\end{thm}
When $\Sigma_g$ is an elliptic curve, the only non-trivial topological correlation function is the partition function without
inputs.
\begin{thm}[Theorem \ref{correlation-elliptic}] Let $g=1$, then $\abracket{1}_{\Sigma_g}=\chi(X)$ is the Euler characteristic of $X$.
\end{thm}
To establish the above computation of correlation functions, we describe a formalism in the spirit of Batalin-Vilkovisky Lagrangian integration,
which is equivalent
to the above definition of correlation functions for our model (Corollary \ref{cor:correlation-function-BV-def}). It not only simplifies the
computation, but also sheds light on the potential application to theories which are not cotangent. In fact, the
Landau-Ginzburg model to be described below is not a cotangent field theory, hence the definition of correlation function in
\cite{Kevin-CS} does not work in this case. However, the Batalin-Vilkovisky Lagrangian integration still makes sense and gives rise to the expected
result (Proposition \ref{thm-LG-correlation}).
\subsection{Landau-Ginzburg model} The Calabi-Yau model described above allows a natural generalization to the
Landau-Ginzburg model associated to a pair $(X,W)$, where $W$ is a holomorphic function on $X$ called the \emph{superpotential}.
This is accomplished by a \emph{twisting procedure}: at the classical level, the interaction is modified by adding a term $I_W$
(Definition \ref{LG-definition}); at the quantum level, this simple modification is still valid (Proposition \ref{prop:QME-LG}).
In particular, a choice of holomorphic volume form $\Omega_X$ on $X$ leads to a quantization of our Landau-Ginzburg B-model.
Let us describe the corresponding observable theory. For simplicity, let us assume $X=\mathbb{C}^n$, and the critical set of the superpotential $Crit(W)$ is finite. We let
$\{z^i\}$ be the affine coordinates on $\mathbb{C}^n$, and choose $\Omega_X=dz^1\wedge\cdots\wedge dz^n$. We consider the quantization associated to
the pair $(X, \Omega_X)$ with the twisting procedure described above.
\begin{thm}[Proposition \ref{LG-local-observable}]
The cohomology of Landau-Ginzburg $B$-model local quantum observables on any disk $U\subset \Sigma_g$ is $\Jac(W)[[\hbar]]$.
\end{thm}
Similarly, we use $O_{f,U}$ to denote a local quantum observable representing $f\in \Jac(W)$ in the above theorem. Let
$ \amalg_iU_i\subset \Sigma_g$ be a disjoint union of disks on $\Sigma_g$. Then the factorization product
$$
O_{f_1,U_1}\star\cdots\star O_{f_k,U_k}
$$
defines a global quantum observable on $\Sigma_g$. However, the Landau-Ginzburg theory is no longer a cotangent theory in the sense of
\cite{Kevin-CS}, and the projective volume form interpretation of quantization breaks down. Instead, we directly construct an integration map on
quantum observables following the interpretation of Batalin-Vilkovisky Lagrangian geometry described above. This allows us to define the correlation function (Definition \ref{def:correlation-function-LG})
$$
\abracket{ O_{f_1,U_1}\star\cdots\star O_{f_k,U_k}}_{\Sigma_g}^W
$$
in the Landau-Ginzburg case.
\begin{thm}[Proposition \ref{thm-LG-correlation}]
The correlation function of topological Landau-Ginzburg B-model is
$$
\abracket{O_{f_1,U_1}\star\cdots\star O_{f_k,U_k}}_{\Sigma_g}^W=\sum_{p\in Crit(W)}\Res_p\bracket{f_1\cdots f_k \det(\partial_i\partial_j W)^{g}dz^1\wedge\cdots\wedge dz^n \over \prod_i
\partial_i W},
$$
where $\Res_p$ is the residue at the critical point $p$ \cite{GH}.
\end{thm}
This coincides with Vafa's residue formula \cite{vafa}.
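As a simple illustration of this formula (our own example, not taken from the original text), consider $X=\mathbb{C}$ with superpotential $W=z^3/3$, so that $\partial_z W=z^2$, $\Jac(W)=\mathbb{C}[z]/(z^2)$, and the only critical point is $z=0$. For $\Sigma_g=\mathbb{P}^1$ the Hessian factor $\det(\partial_i\partial_j W)^{g}$ is trivial, and the formula reads
$$
\abracket{O_{z^{a_1},U_1}\star\cdots\star O_{z^{a_k},U_k}}_{\mathbb{P}^1}^W=\Res_0\bracket{z^{a_1+\cdots+a_k}\, dz \over z^2},
$$
which equals $1$ when $a_1+\cdots+a_k=1$ and vanishes otherwise; in particular $\abracket{O_{z,U}}_{\mathbb{P}^1}^W=1$ while $\abracket{O_{1,U}}_{\mathbb{P}^1}^W=0$.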
\bigskip
\noindent \textbf{Acknowledgement}: The authors would like to thank Kevin Costello, Ryan Grady, Owen Gwilliam and Yuan Shen for valuable discussions
on quantum field theories. Part of the work was done while the authors were visiting MSC at Tsinghua University in the summer of 2012 and 2013, and the first author
was visiting Boston University in 2012 and 2013. We would like to thank these institutions for their hospitality. The first author is partially supported by Chinese Universities
Scientific
Fund WK0010000030. The second author is partially supported by NSF DMS-1309118.
\section{The classical theory}\label{section-classical}
In this section we will describe the geometry of the B-twisted topological $\sigma$-model and set up our theory at the classical level.
\subsection{The model}
Let $X$ be a complex manifold, and let $\Sigma_g$ be a closed Riemann surface of genus $g$.
Two-dimensional $\sigma$-models are concerned with the space of maps
$$
\Sigma_g\to X.
$$
One useful way to incorporate interesting information about the geometry and topology of the target $X$ is to enhance ordinary
$\sigma$-models to supersymmetric ones and apply topological twists. There are two twisted supersymmetric theories that
have been extensively studied in both the mathematics and physics literature: the A-model and the B-model. This leads to the famous
mirror symmetry between symplectic and complex geometries. In this paper we will mainly focus on the B-model.
One possible mathematical formulation of the quantum field theory of the B-twisted $\sigma$-model was proposed by Costello
\cite{Kevin-SUSY} via formal derived geometry, and we will adopt this point of view.
\begin{defn}[\cite{Kevin-SUSY}]\label{B-model-definition}
The (fully twisted) B-model, with source a genus $g$ Riemann surface $\Sigma_g$ and target a complex manifold $X$, is the cotangent theory to the elliptic
moduli problem of maps $$(\Sigma_g)_{dR}\rightarrow X_{\bar{\partial}}.$$
\end{defn}
In the subsequent subsections, we will explain all the notations and geometric data in the above definition. Essentially, we have
enhanced the map to one from the dg-space $\bracket{\Sigma_g}_{dR}$ to the $L_\infty$-space $X_{\dbar}$ in order to implement supersymmetry.
However, the full mapping space is complicated and hard to analyze. Instead, we will focus on the locus in the formal neighborhood
of constant maps. Under this reduction, we describe our classical action functional in section \ref{classical action}. From the
physical point of view, the quantum field theory of the B-twisted $\sigma$-model is fully encoded in the neighborhood of constant
maps, thanks to supersymmetry. Therefore we do not lose any information via this consideration.
\subsection{The spaces $\bracket{\Sigma_g}_{dR}$ and $X_{\dbar}$}
\subsubsection{The dg-space $\bracket{\Sigma_g}_{dR}$}
We use $\bracket{\Sigma_g}_{dR}$ to denote the dg-ringed space
$$
\bracket{\Sigma_g}_{dR}=\bracket{\Sigma_g, \mathcal A_{\Sigma_g}}
$$
on the Riemann surface $\Sigma_g$, whose structure sheaf is the de Rham complex of smooth forms. $\mathcal A_{\Sigma_g}$ is an elliptic complex, and we view $\bracket{\Sigma_g}_{dR}$
as an \emph{elliptic ringed space} in the sense of \cite{Kevin-SUSY}.
\subsubsection{The $L_\infty$-space $X_{\dbar}$} The space $X_{\dbar}$ is a derived version of the complex manifold $X$ itself, which was introduced in \cite{Kevin-CS} to describe
holomorphic Chern-Simons theory. This is a suitable framework for discussing perturbative quantum field theories invariant under the diffeomorphism group. It consists of a pair
$$
X_{\dbar}=\bracket{X, \mathfrak{g}_{X}},
$$
where $\mathfrak{g}_X$ is the sheaf of curved $L_\infty$-algebras on $X$ that we describe now. As a graded sheaf on $X$, $\mathfrak{g}_X$ is defined by
$$
\mathfrak{g}_X:=\mathcal A_X^{\sharp}\otimes_{{\mathcal O}_X} T_X[-1],
$$
where $T_X[-1]$ is the sheaf of holomorphic tangent vectors, shifted so that it is concentrated in degree $1$. To describe the curved
$L_\infty$-structure, we consider
$$
C^*\bracket{\mathfrak{g}_X}:=\widehat{\Sym}_{\mathcal A_X^{\sharp}}\bracket{\mathfrak{g}_X[1]^\vee}=\prod\limits_{k\geq 0}\Sym^k_{\mathcal A_X^{\sharp}}\bracket{\mathfrak{g}_X[1]^\vee},
$$
where
$$
\mathfrak{g}_X[1]^\vee:=\mathcal A_X^{\sharp}\otimes_{{\mathcal O}_X} T_X^\vee
$$
is the dual sheaf of $\mathfrak{g}_X[1]$ over $\mathcal A_X^{\sharp}$, and $\Sym^k_{\mathcal A_X^{\sharp}}\bracket{\mathfrak{g}_X[1]^\vee}$ is the graded symmetric tensor product of $k$ copies of $\mathfrak{g}_X[1]^\vee$ over
$\mathcal A_X^{\sharp}$. When $k=0$, we set
$
\Sym^0_{\mathcal A_X^{\sharp}}\bracket{\mathfrak{g}_X[1]^\vee}\equiv \mathcal A_X^{\sharp}.
$
It is easy to see that
$$
C^*\bracket{\mathfrak{g}_X}=\mathcal A_X^\sharp\otimes_{{\mathcal O}_X} \widehat{\Sym}_{{\mathcal O}_X}\bracket{T_X^\vee}.
$$
Thus $C^*\bracket{\mathfrak{g}_X}$ is a sheaf of algebras over $\mathcal A_X^{\sharp}$.
\begin{notn}\label{notation-basis}
Let $\{z^1,\cdots,z^n\}$ denote local holomorphic coordinates on $X$; we will let $\{\widetilde{\partial_{z^i}}\}$ denote the
corresponding basis of $\mathfrak{g}_X$ over $\mathcal A_X^{\sharp}$,
and let $\{\widetilde{dz^i}\}$ denote the corresponding basis of $\mathfrak{g}_X^\vee$ over $\mathcal A_X^{\sharp}$ similarly.
\end{notn}
A curved $L_\infty$-algebra structure on $\mathfrak{g}_X$ is a differential on $C^*\bracket{\mathfrak{g}_X}$ with which it becomes a dg-algebra over the dg-ring $\mathcal A_X$. Such a structure is obtained
in \cite{Kapranov}, where it is called a weak Lie algebra. We reformulate the construction for its application to the B-twisted $\sigma$-model. Let us first recall
\begin{defn}\label{def:jet-bundle} Let $E$ be a holomorphic vector bundle on $X$. We define the holomorphic jet bundle $\Jet_X^{hol}(E)$ as follows: let
$\pi_1$ and $\pi_2$ denote
the projections of $X\times X$ onto the first and second components respectively,
$$
\xymatrix{
& X\times X \ar[dl]_{\pi_1} \ar[dr]^{\pi_2} & \\
X && X
}
$$
then
$$
\Jet_X^{hol}(E):=\pi_{1*}\bracket{\widehat{{\mathcal O}}_{\Delta}\otimes_{{\mathcal O}_{X\times X}}\pi_2^* E},
$$
where $\Delta\hookrightarrow X\times X$ is the diagonal, and $\widehat{{\mathcal O}}_{\Delta}$ is the analytic formal completion of $X\times X$
along $\Delta$. The jet bundle $ \Jet_X^{hol}(E)$ has a natural filtration defined by
$$
F^k\Jet_X^{hol}(E):= I_\Delta^k \Jet_X^{hol}(E),
$$
where $I_\Delta$ is the ideal sheaf of the diagonal $\Delta$.
\end{defn}
It is clear that $\Jet^{hol}_X(E)$ inherits a $D^{hol}_X$-module structure from $\widehat{{\mathcal O}}_{\Delta}$, and we will let $\Omega_X^*\bracket{\Jet^{hol}_X(E)}$ be the corresponding holomorphic de
Rham complex. The natural embedding
$$
E\hookrightarrow \Omega_X^*\bracket{\Jet^{hol}_X(E)}
$$
induced by taking Taylor expansions of holomorphic sections is a quasi-isomorphism.
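For instance (a standard local description, recorded here only for illustration), take $E={\mathcal O}_X$ and local holomorphic coordinates $\{z^i\}$ on $X$. Writing $w^i=z_2^i-z_1^i$ for the formal coordinates along the fibers of $\pi_1$, one identifies $\Jet_X^{hol}({\mathcal O}_X)$ locally with ${\mathcal O}_X[[w^1,\cdots,w^n]]$; the Taylor expansion embedding sends a holomorphic function $f$ to $\sum_{\alpha}\frac{1}{\alpha!}(\partial^{\alpha}f)\,w^{\alpha}$, and the $D_X^{hol}$-module structure is the one in which $\partial_{z^i}$ acts as $\partial_{z^i}-\partial_{w^i}$ in these coordinates. The image of the Taylor expansion then consists precisely of the local flat sections of the corresponding connection, which illustrates the quasi-isomorphism above.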
Let us consider a smooth map
$$
\rho: U\to X\times X,
$$
where $U\subset T_X$ is a small neighborhood of the zero section. We require that $\rho$ is a diffeomorphism onto its image, and if we write
$$
\rho: (x, v)\mapsto (x, \rho_x(v)),
$$
then $\rho_x(-)$ is holomorphic for each fixed $x$. Such a diffeomorphism can be constructed from a K\"{a}hler metric on $X$ via the
K\"{a}hler normal coordinates. Note that in general $\rho_x(-)$ does not vary holomorphically with respect to $x$. Such a map
$\rho$ induces an isomorphism
$$
\rho^*: C^{\infty}(X)\otimes_{{\mathcal O}_X}\pi_{1*}\bracket{\widehat{{\mathcal O}}_{\Delta}}\overset{\sim}{\rightarrow} C^{\infty}(X)\otimes_{{\mathcal O}_X} \widehat{\Sym}\bracket{T_X^\vee}.
$$
Tensoring with $\mathcal A_X^\sharp$, we find the following identification
\begin{equation}\label{eqn:identification-CE-jet}
\rho^*: \mathcal A_X^\sharp\otimes_{{\mathcal O}_X}\Jet_X^{hol}({\mathcal O}_X) \overset{\sim}{\rightarrow} C^*\bracket{\mathfrak{g}_X}.
\end{equation}
Let $d_{D_X}$ be the de Rham differential on $\mathcal A_X^\sharp\otimes_{{\mathcal O}_X}\Jet_X^{hol}({\mathcal O}_X)$ induced from the $D^{hol}_X$-module
structure on $\Jet_X^{hol}({\mathcal O}_X)$. We can define a differential $d_{CE}$ on $C^*\bracket{\mathfrak{g}_X}$ by
$$
d_{CE}=\rho^*\circ d_{D_X} \circ \rho^{*-1}.
$$
The differential $d_{CE}$ defines a curved $L_\infty$-structure on $\mathfrak{g}_X$, for which $d_{CE}$ is the corresponding Chevalley-Eilenberg differential. We remark that the use of the K\"{a}hler metric is only auxiliary: any choice of smooth splitting of the projection $F^1 \Jet_X^{hol}({\mathcal O}_X)\to F^1\Jet_X^{hol}({\mathcal O}_X)/F^2 \Jet_X^{hol}({\mathcal O}_X)$ can be used to define a curved $L_\infty$-structure, and different choices are homotopy equivalent \cite{Kevin-CS}. Therefore we will not refer to a particular choice.
\begin{defn}
$\mathfrak{g}_X$ is the sheaf of curved $L_\infty$-algebras on $X$ defined by the Chevalley-Eilenberg complex $\bracket{C^*\bracket{\mathfrak{g}_X}, d_{CE}}$. We
will denote the components of the structure maps (shifted by degree 1) of $\mathfrak{g}_X$ by
$$
l_k: \Sym^k_{\mathcal A^\sharp_X}(\mathfrak{g}_X[1])\to \mathfrak{g}_X.
$$
\end{defn}
Therefore $l_1$ defines $\mathfrak{g}_X$ as a dg-module over $\mathcal A_X$, $l_k$ are $\mathcal A_X^{\sharp}$-linear for $k>1$, and $l_0$ defines the curving. There is a natural quasi-isomorphic embedding
$$
\bracket{X, {\mathcal O}_X}\hookrightarrow \bracket{X, C^*\bracket{\mathfrak{g}_X}}
$$
and $X_{\dbar}$ is viewed as the derived enrichment of $X$ in this sense.
Classical constructions of vector bundles can be naturally extended to the $L_\infty$-space $X_{\dbar}$.
\begin{defn}
Let $E$ be a holomorphic vector bundle on $X$. The induced vector bundle $E_{\dbar}$ on the $L_\infty$-space $X_{\dbar}$ is defined by the $\mathfrak{g}_{X}$-module whose sheaf of Chevalley-Eilenberg complex $C^*\bracket{\mathfrak{g}_X, E_{\dbar}}$ is the dg module
$$
C^*\bracket{\mathfrak{g}_X, E_{\dbar}}:=\mathcal A_X^{\sharp} \otimes_{{\mathcal O}_X}\Jet^{hol}_X(E)
$$
over the dg algebra $C^*(\mathfrak{g}_X)$.
\end{defn}
\begin{eg}The tangent bundle $TX_{\dbar}$ is given by the module $\mathfrak{g}_X[1]$, with its naturally induced module structure over
$\mathfrak{g}_X$. Similarly, the cotangent
bundle $T^*X_{\dbar}$ is given by the natural $\mathfrak{g}_X$-module $\mathfrak{g}_X[1]^\vee$. Symmetric and exterior tensor products of vector bundles are defined in the same
fashion. For example, $$
\wedge^kT^*X_{\dbar}=\wedge^k\bracket{\mathfrak{g}_X[1]^\vee}
$$
and a $k$-form on $X_{\dbar}$ is a section of the sheaf
$$
C^*\bracket{\mathfrak{g}_X, \wedge^k\bracket{\mathfrak{g}_X[1]^\vee}}=\mathcal A_X^{\sharp}\otimes_{{\mathcal O}_X}\Jet_X^{hol}(\wedge^k T_X^\vee).
$$
\end{eg}
In Appendix \ref{appendix:L_infty}, we present the corresponding $L_\infty$ constructions in more detail.
\subsubsection{Mapping space as $L_\infty$-space}\label{mapping-space}
Let $f:\Sigma_g\rightarrow X$ be a smooth map. The sheaf $$f^*\mathfrak{g}_X\otimes_{f^*{\mathcal A_{X}}} \mathcal A_{\Sigma_g}$$ naturally inherits the structure of a curved
$L_\infty$-algebra on $\Sigma_g$, within which Maurer-Cartan elements are defined \cite{Kevin-CS}.
\begin{defn}
A map $\bracket{\Sigma_g}_{dR} \to X_{\dbar}$ consists of a smooth map $f: \Sigma_g\to X$, together with a Maurer-Cartan element
$$
\alpha\in f^*\mathfrak{g}_X\otimes_{f^*{\mathcal A_{X}}} \mathcal A_{\Sigma_g}.
$$
\end{defn}
We would like to consider those maps which are constant on the underlying manifold. As shown in \cite{Kevin-CS}, the space of such maps can be
represented by the $L_\infty$-space
$$
\bracket{X, \mathcal A_{\Sigma_g}\otimes_{\mathbb{C}} \mathfrak{g}_X},
$$
which is an enrichment of $X_{\dbar}$ by the information from the Riemann surface $\Sigma_g$.
\subsection{Classical action functional}\label{classical action}
As in Definition \ref{B-model-definition}, our model is defined as the cotangent theory to the elliptic moduli problem of maps
$$
(\Sigma_g)_{dR}\rightarrow
X_{\bar{\partial}}.
$$
The cotangent construction of perturbative field theory is described in \cite{Kevin-Owen} as a convenient way to implement Batalin-Vilkovisky
quantization. In our case, we consider the enlarged mapping space
$$
\bracket{\Sigma_g}_{dR}\to T^*X_{\dbar}[1].
$$
The dg-space $\bracket{\Sigma_g}_{dR}$ is equipped with a volume form of degree $-2$, and $T^*X_{\dbar}[1]$ has a natural
symplectic form of degree $1$. This fits into the AKSZ-construction \cite{AKSZ} and leads to an odd symplectic structure of degree
$-1$ on the mapping space as desired for Batalin-Vilkovisky formalism.
We are interested in the locus around constant maps. As explained in section \ref{mapping-space}, such locus is represented by the
$L_\infty$-space
$$
\bracket{X, \mathcal A_{\Sigma_g}\otimes_\mathbb{C} \mathfrak{g}_{T^*X_{\dbar}[1]} },
$$
where $\mathfrak{g}_{T^*X_{\dbar}[1]}=\mathfrak{g}_X\oplus \mathfrak{g}_X[1]^\vee$ is the curved $L_\infty$-algebra representing $T^*X_{\dbar}[1]$.
\begin{defn} The space of fields of the B-twisted $\sigma$-model is the $\mathcal A_X^\sharp$-module
$$
{\mathcal E}:=\mathcal A_{\Sigma_g}^\sharp\otimes_{\mathbb{C}} \bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}.
$$
\end{defn}
\begin{lem-defn}
There exists a natural graded symplectic pairing $\abracket{-,-}$ on ${\mathcal E}$ of degree $-1$.
\end{lem-defn}
\iffalse
It is straightforward to check that this pairing is graded skew-symmetric.
\begin{equation*}
\begin{aligned}
&\langle\beta_1\otimes g^1,\alpha_2\otimes g_2\rangle\\
=&(-1)^{|\alpha_2||g^1|}\int_\Sigma\beta_1\wedge\alpha_2\cdot\langle g^1,g_2\rangle\\
=&(-1)^{|\alpha_2||g^1|+|\alpha_2||\beta_1|+|g^1||g_2|+1}\int_\Sigma\alpha_2\wedge\beta_1\cdot\langle g_2,g^1\rangle\\
=&(-1)^{|\alpha_2||g^1|+|\alpha_2||\beta_1|+|g^1||g_2|+|\beta_1||g_2|+1}\langle\alpha_2\otimes g_2,\beta_1\otimes g^1\rangle\\
=&(-1)^{|\alpha_2\otimes g_2||\beta_1\otimes g^1|}(-1)\langle\alpha_2\otimes g_2,\beta_1\otimes g^1\rangle.
\end{aligned}
\end{equation*}
\fi
The proof is standard and we omit it here. The classical action functional is constructed in a similar way to that in \cite{Kevin-CS}.
\begin{defn}\label{defn-classical-functional} The classical action functional is defined as the $\mathcal A_X^{\sharp}$-valued formal function on ${\mathcal E}$
$$
S(\alpha+\beta):=\int_{\Sigma_g}\left( \langle d_{\Sigma_g}\alpha,\beta\rangle+
\sum_{k\geqslant
0}\dfrac{1}{(k+1)!}\langle l_k(\alpha^{\otimes k}),\beta\rangle\right),
$$
where $\alpha\in \mathcal A_{\Sigma_g}^{\sharp}\otimes \mathfrak{g}_X[1], \beta\in \mathcal A_{\Sigma_g}^{\sharp}\otimes \mathfrak{g}_X^\vee$, $d_{\Sigma_g}$ is the de Rham differential on $\Sigma_g$, and $l_k$ is the $L_\infty$-product for $\mathfrak{g}_X$.
\end{defn}
We will let $$Q=d_{\Sigma_g}+l_1:\mathcal{E}\rightarrow\mathcal{E}$$ and split the classical action
$S$ into its free and interaction parts
\begin{align*}
S&=S_{free}+I_{cl},
\end{align*}
where $$I_{cl}(\alpha+\beta)=\int_{\Sigma_g}\left(\langle
l_0,\beta\rangle+\sum_{k\geqslant 2}\dfrac{1}{(k+1)!}\langle l_k(\alpha^{\otimes k}),\beta\rangle\right)$$ and
$$S_{free}(\alpha+\beta)=\int_{\Sigma_g}\langle Q(\alpha),\beta\rangle.$$
For later discussion, we denote the following functionals by
\begin{equation}\label{def: tilde-l_k}
\tilde{l}_k(\alpha+\beta):=\dfrac{1}{(k+1)!}\int_{\Sigma_g}\langle l_k(\alpha^{\otimes k}),\beta\rangle, \hspace{5mm} \text{for\ } k\geq 0.
\end{equation}
\subsection{Classical master equation} The classical action functional $S$ satisfies the classical master equation, which is equivalent to the gauge invariance in the Batalin-Vilkovisky formalism. We will explain the classical master equation in this section and set up some notations to be used for quantization later.
\subsubsection{Functionals on fields}
The space of fields ${\mathcal E}$ is an $\mathcal A_{\Sigma_g}^\sharp$-module. Let ${\mathcal E}^{\otimes k}$ denote the $\mathcal A_{X}^{\sharp}$-linear completed tensor product of $k$ copies of
${\mathcal E}$, where the completion is over the products of Riemann surfaces. Explicitly,
$$
{\mathcal E}^{\otimes k}:=\mathcal A_{\Sigma_g\times \cdots \times \Sigma_g} \otimes_{\mathbb{C}} \bracket{\bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}\otimes_{\mathcal A_{X}^{\sharp}}\cdots
\otimes_{\mathcal A_{X}^{\sharp}} \bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}}.
$$
The permutation group $S_k$ acts naturally on ${\mathcal E}^{\otimes k}$ and we will let
$$
\Sym^k\bracket{{\mathcal E}}:=\bracket{{\mathcal E}^{\otimes k}}_{S_k}
$$
denote the $S_k$-coinvariants.
We will use ${\overline \mathcal A}_{\Sigma_g}$ to denote the distribution valued de Rham complex on $\Sigma_g$. $\overline{{\mathcal E}}$ will be
distributional sections
of ${\mathcal E}$:
$$
\overline{{\mathcal E}}={\overline \mathcal A}_{\Sigma_g}\otimes_{\mathbb{C}} \bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}.
$$
We will also use
$$
{\mathcal E}^\vee:= \Hom_{\mathcal A_X^{\sharp}}\bracket{{\mathcal E}, \mathcal A_X^{\sharp}}
$$
to denote functionals on ${\mathcal E}$ which are linear in $\mathcal A_X^{\sharp}$. The symplectic pairing $\abracket{-,-}$ gives a natural
embedding
$$
{\mathcal E}\hookrightarrow {\mathcal E}^\vee[-1],
$$
which induces an isomorphism
$$
\overline{{\mathcal E}}\cong {\mathcal E}^\vee[-1].
$$
\begin{defn} We define the space of $k$-homogeneous functionals on ${\mathcal E}$ as the space of linear functionals (distributions) on $\Sigma_g\times\cdots\times
\Sigma_g$ ($k$ copies)
$$
{\mathcal O}^{(k)}({\mathcal E}):=\Hom_{\mathcal A_X^{\sharp}}\bracket{\Sym^k({\mathcal E}), \mathcal A_X^{\sharp}},
$$
where our convention is that ${\mathcal O}^{(0)}({\mathcal E})=\mathcal A_X^{\sharp}$. We introduce the following notations:
$$
{\mathcal O}({\mathcal E}):=\prod_{k\geq 0}{\mathcal O}^{(k)}({\mathcal E}), \quad {\mathcal O}^+({\mathcal E}):=\prod_{k\geq 1}{\mathcal O}^{(k)}({\mathcal E}).
$$
\end{defn}
Therefore ${\mathcal O}({\mathcal E})$ can be viewed as formal power series on ${\mathcal E}$. The isomorphism $\overline{{\mathcal E}}\cong {\mathcal E}^\vee[-1]$ leads to natural isomorphisms
$$
{\mathcal O}^{(k)}({\mathcal E})=\bracket{{\mathcal E}^\vee}^{\otimes k}_{S_k}\cong \bracket{\overline{{\mathcal E}}[1]}^{\otimes k}_{S_k},
$$
where the tensor products are the $\mathcal A_X^{\sharp}$-linear completed tensor products over $k$ copies of $\Sigma_g$.
\begin{defn}\label{contraction operator} Let $P \in \Sym^k({\mathcal E})$. We define the operator of contraction with $P$
$$
{\partial\over \partial P}: {\mathcal O}^{(m+k)}({\mathcal E})\to {\mathcal O}^{(m)}({\mathcal E})
$$
by
$$
\bracket{{\partial\over \partial P}\Phi}(\mu_1, \cdots, \mu_m):=\Phi(P, \mu_1, \cdots, \mu_m),
$$
where $\Phi\in {\mathcal O}^{(m+k)}({\mathcal E}), \mu_i\in {\mathcal E}$.
\end{defn}
\begin{defn}
We will denote by $\mathcal O_{loc}({\mathcal E})\subset {\mathcal O}({\mathcal E})$ the subspace of local functionals, i.e. those of the form given by the integration of a Lagrangian density on $\Sigma_g$
$$
\int_{\Sigma_g} \mathcal L(\mu), \quad \mu \in {\mathcal E}.
$$
$\mathcal O_{loc}^+({\mathcal E})$ is defined similarly as local functionals modulo constants.
\end{defn}
\begin{eg}
The classical action functional $S$ in Definition \ref{defn-classical-functional} is a local functional.
\end{eg}
\subsubsection{Classical master equation} As a general fact in symplectic geometry, the Poisson kernel of a symplectic form induces a Poisson bracket on the
space of functions. In our case we are dealing with the infinite dimensional symplectic space $\bracket{{\mathcal E}, \abracket{-,-}}$. The Poisson kernel is of the
form of a $\delta$-function distribution, and therefore the Poisson bracket is well-defined on local functionals.
\begin{lem-defn}\label{Poisson-bracket} The symplectic pairing $\abracket{-,-}$ induces an odd Poisson bracket of degree $1$ on the space of local functionals, denoted by
$$
\fbracket{-,-}: \mathcal O_{loc}({\mathcal E})\otimes_{\mathcal A_{X}^{\sharp}} \mathcal O_{loc}({\mathcal E}) \to \mathcal O_{loc}({\mathcal E}),
$$
which is bilinear in $\mathcal A_X^{\sharp}$.
\end{lem-defn}
\begin{lem}\label{lem:classical-master-equation}
Let $F_{l_1}$ be the functional on ${\mathcal E}$ defined as follows:
$$
F_{l_1}(\alpha+\beta):=\langle l_1^2(\alpha),\beta \rangle,\hspace{5mm} \alpha\in\mathcal A^\sharp_{\Sigma_g}\otimes\mathfrak{g}_X[1],
\beta\in\mathcal A^\sharp_{\Sigma_g}\otimes\mathfrak{g}_X^\vee.
$$
The classical interaction functional $I_{cl}$ satisfies the following classical master equation:
\begin{equation} \label{eqn:classical-master-equation-S}
QI_{cl}+\frac{1}{2}\{I_{cl},I_{cl}\}+F_{l_1}=0.
\end{equation}
\end{lem}
\begin{proof} This follows from the fact that the maps $\{l_k\}_{ k\geq 0}$ of $\mathfrak{g}_X$ define a curved $L_\infty$-structure. See \cite{Kevin-CS}. The extra term $F_{l_1}$ describes the
curving: $\{F_{l_1},-\}=Q^2=l_1^2$.
\end{proof}
In particular, Lemma \ref{lem:classical-master-equation} implies that the operator $Q+\{I_{cl},-\}$ defines a differential on $\mathcal O_{loc}({\mathcal E})$.
\begin{defn} The complex $Ob:=\bracket{\mathcal O_{loc}^+({\mathcal E}), Q+\{I_{cl},-\}}$ is called the deformation-obstruction complex associated to the classical field theory defined by $({\mathcal E}, S)$.
\end{defn}
As established in \cite{Kevin-book}, the complex $Ob$ controls the deformation theory of the perturbative quantization of $S$, hence the name.
\section{Quantization}\label{section-quantization}
In this section we establish the quantization of our B-twisted $\sigma$-model via Costello's perturbative renormalization method \cite{Kevin-book}.
We show that the obstruction to the quantization is given by $(2-2g)c_1(X)$. When $c_1(X)=0$, i.e.\ $X$ is Calabi-Yau, every choice of holomorphic volume form on
$X$ leads to an associated canonical quantization of the B-twisted $\sigma$-model.
\subsection{Regularization}\label{section-regularization}
Perturbative quantization of the classical action functional $S$ amounts to modeling the asymptotic $\hbar$-expansion of the infinite dimensional path integral
$$
\int_{L\subset {\mathcal E}} e^{S/\hbar},
$$
where $L$ is an appropriate subspace related to some gauge fixing (a BV-Lagrangian in the Batalin-Vilkovisky formalism). A natural formalism based on finite dimensional models is
$$
\int_{L\subset {\mathcal E}} e^{S/\hbar}\mapsto \exp\bracket{\hbar^{-1}W(G, I_{cl})},
$$
where $W(G, I_{cl})$ is the weighted sum of Feynman integrals over all connected graphs, with $G$ ($=\mathbb{P}_0^\infty$ below) labeling the internal edges, and $I_{cl}$ labeling
the vertices. One essential difficulty is the infinite dimensionality of the space of fields, which introduces singularities in the propagator $G$ and breaks the naive
interpretation of Feynman diagrams. A regularization is required to make sense of the theory; this is the celebrated idea of
renormalization in quantum field theory. We will use heat kernel regularization, which fits into Costello's renormalization technique \cite{Kevin-book}.
\subsubsection{Gauge fixing}
We need to choose a gauge fixing operator for regularization. For any Riemann surface $\Sigma_g$, we pick the metric on $\Sigma_g$ of constant curvature $0,1$ or $-1$, depending on the genus $g$. In particular, we choose the hyperbolic
metric on $\Sigma_g$ when $g>1$. The gauge fixing operator is
$$
Q^{GF}:=d_{\Sigma_g}^*,
$$
where $d_{\Sigma_g}^*$ is the adjoint of the de Rham differential $d_{\Sigma_g}$ on $\Sigma_g$ with respect to the chosen metric.
It is clear that the Laplacian $H=[Q,Q^{GF}]=d_{\Sigma_g}d_{\Sigma_g}^*+d_{\Sigma_g}^*d_{\Sigma_g}$ is the usual Laplacian on $\mathcal A_{\Sigma_g}$. We will let $e^{-tH}$ denote the heat operator acting on
$\mathcal A_{\Sigma_g}$ for $t>0$.
\begin{rmk}The operators $Q^{GF}, H$ and $e^{-tH}$ extend trivially over $\mathfrak{g}_X[1]\oplus \mathfrak{g}^\vee_X$ to define operators on ${\mathcal E}$, and we will use the same notations without confusion.
\end{rmk}
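For example (our own illustration in the flat, genus one case): on a flat torus $\Sigma_1=\mathbb{C}/\Lambda$ with $z=x+iy$, the operator $H$ acts on functions as $-(\partial_x^2+\partial_y^2)$, and $e^{-tH}$ acting on functions has the integral kernel (against the flat area form)
$$
\frac{1}{4\pi t}\sum_{\lambda\in\Lambda}e^{-\frac{|z_1-z_2+\lambda|^2}{4t}},
$$
obtained from the Euclidean heat kernel on the plane by summing over the lattice; its $t\to 0$ behavior matches the asymptotic expansion recorded in (\ref{eqn:asymp-heat-kernel}) below.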
\subsubsection{Effective propagator}
To analyze the B-twisted $\sigma$-model, we first describe the propagator of the theory.
\begin{defn}
The heat kernel $\mathbb{K}_t$ for $t>0$ is the element in $\Sym^2\bracket{{\mathcal E}}$ defined by the equation
$$
\langle \mathbb{K}_t(z_1,z_2),\phi(z_2)\rangle=e^{-tH}(\phi)(z_1), \quad \forall \phi\in {\mathcal E}, z_1\in \Sigma_g.
$$
\end{defn}
\begin{notn} The fact that the symplectic pairing on ${\mathcal E}$ is (up to sign) the tensor product of the natural pairings on $\mathcal A_{\Sigma_g}^\sharp$ and
$\mathfrak{g}_X[1]\oplus\mathfrak{g}_X^\vee$ implies that the heat kernel $\mathbb{K}_t(z_1,z_2)$ is of the following form:
$$
\mathbb{K}_t(z_1,z_2)=K_t(z_1,z_2)\otimes(\text{Id}_{\mathfrak{g}_X}+\text{Id}_{\mathfrak{g}_X^\vee})
,$$
where $K_t$ is simply the usual heat kernel of $e^{-tH}$ on $\Sigma_g$, and $\text{Id}_{\mathfrak{g}_X}+\text{Id}_{\mathfrak{g}_X^\vee}$ is the Poisson kernel corresponding to the natural symplectic
pairing on $\mathfrak{g}_X[1]\oplus\mathfrak{g}_X^\vee$. We will call $K_t$ and $\text{Id}_{\mathfrak{g}_X}+\text{Id}_{\mathfrak{g}_X^\vee}$ the analytic and combinatorial part of $\mathbb{K}_t$ respectively.
\end{notn}
The combinatorial part of $\mathbb{K}_t$ can be described locally as follows: pick a local basis $\{X_i\}$ of $\mathfrak{g}_X[1]$ as an
$\mathcal{A}_X^{\sharp}$-module, and let $\{X^i\}$ be the corresponding dual basis of $\mathfrak{g}_X^\vee$. Then we have
$$
\text{Id}_{\mathfrak{g}_X}+\text{Id}_{\mathfrak{g}_X^\vee}=\sum_{i}(X_i\otimes X^i+X^i\otimes X_i).
$$
\begin{defn}\label{defn:propagator}
For $0<\epsilon<L<\infty$, we define the effective propagator $\mathbb{P}_\epsilon^L$ as the element in $\Sym^2\bracket{{\mathcal E}}$ by
$$
\mathbb{P}_\epsilon^L(z_1,z_2)=P_\epsilon^L(z_1,z_2)\otimes (\text{Id}_{\mathfrak{g}_X}+\text{Id}_{\mathfrak{g}_X^\vee}),
$$ where the analytic part of the propagator $P_\epsilon^L$ is given by
$$
P_\epsilon^L:=\int_\epsilon^L (Q^{GF}\otimes 1) K_tdt.
$$
\end{defn}
\begin{rmk}
In the notations $P_\epsilon^L(z_1,z_2)$ and $K_t(z_1,z_2)$, we have omitted their anti-holomorphic dependence for simplicity.
\end{rmk}
In other words, $\mathbb{P}_\epsilon^L$ is the kernel representing the operator $\int_\epsilon^L Q^{GF}e^{-tH}dt$ on ${\mathcal E}$. The full propagator
$\mathbb{P}_0^\infty$ represents the operator ${Q^{GF}\over H}$, which is formally the inverse of the quadratic pairing $S_{free}$ after gauge
fixing. The standard trick of Feynman diagram expansions picks $\mathbb{P}_0^\infty$ as the propagator. However $\mathbb{P}_0^\infty$ exhibits
singularity along the diagonal in $\Sigma_g\times\Sigma_g$, and the above effective propagator with cut-off parameters $\epsilon, L$ is viewed as a
regularization.
It is known that the heat kernel $K_t$ on a Riemann surface $\Sigma_g$ has an asymptotic expansion:
\begin{equation}\label{eqn:asymp-heat-kernel}
K_t(z_1,z_2)\sim \dfrac{1}{4\pi t}e^{-\frac{\rho^2(z_1,z_2)}{4t}}\left(\sum_{i=0}^\infty t^i\cdot a_i(z_1,z_2)\right) \quad \text{as}\ t\rightarrow 0,
\end{equation} where each $a_i(z_1,z_2)$ is a smooth $2$-form on $\Sigma_g\times\Sigma_g$ and $\rho(z_1,z_2)$ denotes the geodesic distance between $z_1$ and
$z_2$. Similarly, for the propagator $P_\epsilon^L$, we have
\begin{lem}[Appendix \ref{appendix:asymp-propagaotr}]\label{lem:asymp-propagator}
The propagator on the hyperbolic upper half plane $\mathbb{H}$ is given explicitly by
\begin{equation}\label{eqn:propagator}
P_{\epsilon}^L=\int_{\epsilon}^Lf(\rho,t)dt\cdot\left(\dfrac{2(x_1-x_2)}{y_1y_2}(dy_1-dy_2)-\dfrac{(y_1-y_2)(y_1+y_2)}{y_1y_2
}
\left(\dfrac{dx_1}{y_1}-\dfrac{dx_2}{y_2}\right)\right),
\end{equation}
where $x_i=\Re z_i$ and $y_i=\Im
z_i$, for $i=1,2$. The function $f(\rho,t)$ is smooth on
$\mathbb{R}_{\geqslant 0}\times\mathbb{R}_{>0}$, and has an asymptotic expansion as $t\rightarrow 0$:
\begin{equation}\label{eqn:asymp-propagator}
f(\rho,t)\sim
\sum_{k=0}^\infty t^{-2+k}e^{-\frac{\rho^2}{4t}}b_k(\rho).
\end{equation}
\end{lem}
\subsubsection{Effective Batalin-Vilkovisky formalism} The heat kernel cut-off also allows us to regularize the Poisson bracket $\fbracket{-,-}$
and extend its definition from
local functionals to all distributions.
\begin{defn} We define the effective BV Laplacian $\Delta_L$ at scale $L>0$
$$
\Delta_L:={\partial\over \partial \mathbb{K}_L}: {\mathcal O}\bracket{{\mathcal E}}\to {\mathcal O}\bracket{{\mathcal E}}
$$
by contracting with $\mathbb{K}_L$ (see Definition \ref{contraction operator}).
\end{defn}
Since the regularized Poisson kernel $\mathbb{K}_L$ is smooth, $\Delta_L$ is well-defined on ${\mathcal O}\bracket{{\mathcal E}}$
and can be viewed as a second order differential operator in our infinite dimensional setting.
\begin{defn} We define the effective BV bracket at scale $L$
$$
\fbracket{-,-}_L: {\mathcal O}({\mathcal E})\times {\mathcal O}({\mathcal E})\to {\mathcal O}({\mathcal E})
$$
by
$$
\fbracket{\Phi_1, \Phi_2}_L:=\Delta_L\bracket{\Phi_1 \Phi_2}-\bracket{\Delta_L \Phi_1}\Phi_2-(-1)^{|\Phi_1|}\Phi_1\bracket{\Delta_L \Phi_2}, \quad \forall
\Phi_1, \Phi_2 \in {\mathcal O}({\mathcal E}).
$$
\end{defn}
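As a minimal illustration of these definitions (with the sign and the combinatorial normalization fixed by Definition \ref{contraction operator}): if $\Phi_1,\Phi_2\in\mathcal{E}^\vee\subset{\mathcal O}({\mathcal E})$ are linear functionals, then $\Delta_L\Phi_1=\Delta_L\Phi_2=0$, and the effective bracket simply pairs them through the regularized kernel,
$$
\fbracket{\Phi_1,\Phi_2}_L=\Delta_L\bracket{\Phi_1\Phi_2}=\pm\bracket{\Phi_1\otimes\Phi_2}\bracket{\mathbb{K}_L},
$$
which is the direct analogue of the finite dimensional BV Laplacian $\Delta=\sum_i\frac{\partial}{\partial x^i}\frac{\partial}{\partial\xi_i}$ in Darboux coordinates, with the singular identity kernel replaced by the smooth kernel $\mathbb{K}_L$.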
As we will see, Batalin-Vilkovisky structures at different scales will be related to each other via the renormalization group flow.
For two distributions $\Phi_1, \Phi_2\in {\mathcal O}({\mathcal E})$, the bracket $\fbracket{\Phi_1, \Phi_2}_L$ will in general diverge as $L\to 0$. However for $\Phi_1, \Phi_2\in \mathcal O_{loc}({\mathcal E})$,
$$
\lim_{L\to 0}\fbracket{\Phi_1, \Phi_2}_L=\fbracket{\Phi_1, \Phi_2},
$$
where on the right hand side $\fbracket{-,-}$ is the Poisson bracket as in Lemma/Definition \ref{Poisson-bracket}. Therefore
$\fbracket{-,-}_L$ is a regularization of the classical Poisson bracket.
\subsection{Effective renormalization} We discuss Costello's quantization framework \cite{Kevin-book} in our current set-up.
\subsubsection{Renormalization group flow}
We start from the definition
of graphs:
\begin{defn}
A graph $\gamma$ consists of the following data:
\begin{enumerate}
\item A finite set of vertices $V(\gamma)$;
\item A finite set of half-edges $H(\gamma)$;
\item An involution $\sigma: H(\gamma)\rightarrow H(\gamma)$. The set of fixed points of this map is denoted by $T(\gamma)$ and is
called the set of tails of $\gamma$. The set of two-element orbits is denoted by $E(\gamma)$ and is called the set of internal edges of
$\gamma$;
\item A map $\pi:H(\gamma)\rightarrow V(\gamma)$ sending a half-edge to the vertex to which it is attached;
\item A map $g:V(\gamma)\rightarrow \mathbb{Z}_{\geqslant 0}$ assigning a genus to each vertex.
\end{enumerate}
It is clear how to construct a topological space $|\gamma|$ from the above abstract data. A graph $\gamma$ is called $connected$ if
$|\gamma|$ is connected. A graph is called stable if every vertex of genus $0$ is at least trivalent, and every genus $1$ vertex is at
least univalent. The genus of the graph $\gamma$ is defined to be $$g(\gamma):=b_1(|\gamma|)+\sum_{v\in V(\gamma)}g(v),$$ where
$b_1(|\gamma|)$ denotes the first Betti number of $|\gamma|$.
\end{defn}
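For instance (a quick illustration of the genus convention): a tree all of whose vertices have $g(v)=0$ satisfies $b_1(|\gamma|)=0$ and hence has genus $0$; a connected graph with $b_1(|\gamma|)=1$ and all vertices of genus $0$ has genus $1$; and attaching trees of genus $0$ vertices to such a graph changes neither $b_1$ nor the sum of the vertex genera, so the genus stays $1$. In the Feynman expansion below, a graph of genus $g(\gamma)$ is weighted by $\hbar^{g(\gamma)}$, so trees contribute at order $\hbar^0$ and one-loop graphs at order $\hbar^1$.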
Let $$\left(\mathcal{O}(\mathcal{E})[[\hbar]]\right)^+\subset\mathcal{O}(\mathcal{E})[[\hbar]]$$ be the subspace consisting of those
functionals which are at least cubic modulo $\hbar$ and modulo the nilpotent ideal $\mathcal{I}$ in the base ring
$\mathcal{A}_X^\sharp$. Let $I\in (\mathcal{O}(\mathcal{E})[[\hbar]])^+$ be a functional which can be expanded as
$$I=\sum_{k,i\geq 0}\hbar^k
I_{i}^{(k)}, \quad I_{i}^{(k)}\in {\mathcal O}^{(i)}({\mathcal E}).
$$
We view $I_{i}^{(k)}$ as an
$S_i$-invariant linear map
$$
I_{i}^{(k)}: \mathcal{E}^{\otimes i}\rightarrow\mathcal{A}_X^\sharp.
$$
With the propagator $\mathbb{P}_\epsilon^L$, we will describe the $Feynman\ weights$
$$
W_\gamma(\mathbb{P}_\epsilon^L,I)\in (\mathcal{O}(\mathcal{E})[[\hbar]])^+
$$ for any connected stable graph $\gamma$: we label every vertex $v$ in $\gamma$ of genus $g(v)$ and valency $i$ by
$I^{(g(v))}_i$, which we denote by:
$$I_{v}:\mathcal{E}^{\otimes H(v)}\rightarrow\mathcal{A}_X^\sharp,$$
where $H(v)$ is the set of half-edges of $\gamma$ which are incident to $v$.
We label every internal edge $e$ by the propagator $$\mathbb{P}_e=\mathbb{P}_\epsilon^L\in\mathcal{E}^{\otimes H(e)},$$
where $H(e)\subset H(\gamma)$ is the two-element set consisting of the half-edges forming $e$. Now we can contract
$$\otimes_{v\in V(\gamma)}I_v: \mathcal{E}^{\otimes H(\gamma)}\rightarrow\mathcal{A}_X^\sharp$$
with $$\otimes_{e\in E(\gamma)}\mathbb{P}_e\in\mathcal{E}^{\otimes (H(\gamma)\setminus T(\gamma))}$$ to yield a linear map
$$W_\gamma(\mathbb{P}_\epsilon^L, I) : \mathcal{E}^{\otimes T(\gamma)}\rightarrow\mathcal{A}_X^\sharp.$$
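As a small worked example of this contraction (schematic notation only; the symmetrization and signs are those implicit in the contraction above): let $\gamma$ be the tree with two genus $0$ trivalent vertices joined by a single internal edge, so that $g(\gamma)=0$ and $|T(\gamma)|=4$. Writing $\mathbb{P}_\epsilon^L=\sum_a P_a'\otimes P_a''$ schematically, the Feynman weight reads
$$
W_\gamma(\mathbb{P}_\epsilon^L, I)(\alpha_1,\alpha_2,\alpha_3,\alpha_4)=\sum_a I^{(0)}_3(\alpha_1,\alpha_2,P_a')\, I^{(0)}_3(P_a'',\alpha_3,\alpha_4)
$$
for the indicated attachment of the tails. Similarly, the genus $1$ graph with a single genus $0$ trivalent vertex and a self-loop contracts two of the three inputs of $I^{(0)}_3$ with $\mathbb{P}_\epsilon^L$, leaving a linear map $\mathcal{E}\to\mathcal{A}_X^\sharp$.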
We can now define the $renormalization\ group\ flow$ operator:
\begin{defn}
The renormalization group flow operator from scale $\epsilon$ to scale $L$ is the map
$$
W(\mathbb{P}_\epsilon^L,-):\left(\mathcal{O}(\mathcal{E})[[\hbar]]\right)^+\rightarrow\left(\mathcal{O}(\mathcal{E})[[\hbar]]\right)^+
$$
defined by taking the sum of Feynman weights over all stable connected graphs:
$$I\mapsto\sum_{\gamma}\dfrac{1}{|\text{Aut}(\gamma)|}\hbar^{g(\gamma)}W_\gamma(\mathbb{P}_\epsilon^L,I).$$ A collection of functionals
$$\{I[L]\in\left(\mathcal{O}(\mathcal{E})[[\hbar]]\right)^+|L\in\mathbb{R}_+\}$$ is said to satisfy the renormalization group equation (RGE) if for
any $0<\epsilon<L<\infty$, we have
$$I[L]=W(\mathbb{P}_\epsilon^L,I[\epsilon]). $$
\end{defn}
\begin{rmk} Formally, the RGE can be equivalently described as
$
e^{I[L]/\hbar}=e^{\hbar {\partial\over \partial \mathbb{P}_\epsilon^L}}e^{I[\epsilon]/\hbar}.
$
\end{rmk}
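To orient the reader, expanding this exponential identity to first order in $\mathbb{P}_\epsilon^L$ (a sketch only; the precise symmetry factors are those of the graph sum above) gives, schematically,
$$
I[L]=I[\epsilon]+\hbar\,\frac{\partial}{\partial \mathbb{P}_\epsilon^L}I[\epsilon]+\frac{1}{2}\bracket{\partial'_{\mathbb{P}}I[\epsilon]}\bracket{\partial''_{\mathbb{P}}I[\epsilon]}+O\bracket{(\mathbb{P}_\epsilon^L)^{2}},
$$
where in the third term the two slots of $\mathbb{P}_\epsilon^L$ are contracted into two different copies of $I[\epsilon]$. These two corrections are exactly the Feynman weights of the one-loop self-contraction of a single vertex (order $\hbar$) and of the tree with two vertices joined by one internal edge (order $\hbar^0$).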
\subsubsection{Quantum master equation}
Now we explain the quantum master equation as the quantization of the classical master equation. Usually the quantum master
equation is associated with the following operator \cite{Kevin-book} in the Batalin-Vilkovisky formalism
$$
Q+\hbar \Delta_L,
$$
which can be viewed as a quantization of the differential $Q$.
However, in our case, the above operator does not define a differential due to the curving
$$
\bracket{ Q+\hbar \Delta_L}^2=l_1^2.
$$
We will modify the construction in \cite{Kevin-book} to incorporate the curving.
\begin{defn}
We define the effective curved differential $Q_L: {\mathcal E} \to {\mathcal E}$ by
$$
Q_L:= Q+ l_1^2\int_0^L Q^{GF}e^{-tH}dt,
$$
where $l_1^2\int_0^L Q^{GF}e^{-tH}dt$ is the composition of the operator $\int_0^L Q^{GF}e^{-tH}dt$ with $l_1^2$.
\end{defn}
It is straightforward to prove the following lemma:
\begin{lem}\label{QME-RG}
The quantized operator ${Q_L+\hbar \Delta_L+{F_{l_1}\over \hbar}}$ is compatible with the renormalization group flow in the following sense (recall Lemma \ref{lem:classical-master-equation} for the definition of $F_{l_1}$)
$$
e^{\hbar {\partial\over \partial \mathbb{P}_\epsilon^L}} \bracket{Q_\epsilon+\hbar \Delta_\epsilon+{F_{l_1}\over \hbar}} = \bracket{Q_L+\hbar
\Delta_L+{F_{l_1}\over \hbar}} e^{\hbar {\partial\over \partial \mathbb{P}_\epsilon^L}}.
$$
Moreover, it squares to zero modulo $\mathcal A_X^\sharp$:
$$
\bracket{Q_L+\hbar \Delta_L+{F_{l_1}\over \hbar}}^2=C,
$$
where the right hand side denotes multiplication by some $C\in \mathcal A_X^\sharp$.
\end{lem}
Therefore we will use ${Q_L+\hbar \Delta_L+{F_{l_1}\over \hbar}}$ instead of $Q+\hbar\Delta_L$ in order to define the quantum
master equation. The
constant term $C$ causes no trouble, since the perturbative quantization in \cite{Kevin-book} is defined modulo constant terms.
Precisely,
\begin{defn}\label{defn-quantization}
Let $\{I[L]\in\left(\mathcal{O}(\mathcal{E})[[\hbar]]\right)^+|L\in\mathbb{R}_+\}$ be a collection of effective interactions which satisfies the renormalization group
equation. We say that they satisfy the quantum master equation if for all $L>0$ the following scale $L$ quantum master equation (QME) is satisfied:
\begin{equation}\label{eqn:qunatum-master-equation}
\bracket{Q_L+\hbar \Delta_L+{F_{l_1}\over \hbar}}e^{I[L]/\hbar}=R e^{I[L]/\hbar},
\end{equation}
where $R\in \mathcal A_X^\sharp[[\hbar]]$ does not depend on $L$.
\end{defn}
In other words, if we view ${Q_L+\hbar \Delta_L+{F_{l_1}\over \hbar}}$ as defining a projectively flat connection, then a solution of the quantum master equation defines a projectively flat section.
\begin{lem} The quantum master equation is compatible with the renormalization group flow in the following sense: if the collection $\{I[L]|L\in\mathbb{R}_+\}$ satisfies the QME at some
scale $L_0>0$, then the QME holds at every scale.
\end{lem}
\begin{proof} This follows from Lemma \ref{QME-RG}.
\end{proof}
\begin{lem}\label{quantum-BRST}
Suppose $I[L]$ satisfies the quantum master equation at scale $L>0$, then
$
Q_L+\hbar \Delta_L + \{I[L],-\}_L
$
defines a square-zero operator on ${\mathcal O}({\mathcal E})[[\hbar]]$.
\end{lem}
\begin{proof} Let $U_L=Q_L+\hbar \Delta_L + \{I[L],-\}_L$ and $\Phi\in {\mathcal O}({\mathcal E})[[\hbar]]$. Then
$$
\bracket{Q_L+\hbar \Delta_L+{F_{l_1}\over \hbar}}\bracket{\Phi e^{I[L]/\hbar}}=\bracket{U_L(\Phi)+R\Phi } e^{I[L]/\hbar}.
$$
Applying $Q_L+\hbar \Delta_L+{F_{l_1}\over \hbar}$ again to both sides, we find
$$
C\Phi e^{I[L]/\hbar}=\bracket{U_L^2(\Phi)+U_L(R\Phi )+R (U_L(\Phi)+R \Phi)}e^{I[L]/\hbar}.
$$
Setting $\Phi=1$, we find $C=d_XR+R^2$, and $R^2=0$ since $R$ is a $1$-form. Here $d_X$ is the de Rham differential on $X$. On the other hand,
$$
U_L(R \Phi)= (d_XR) \Phi-RU_L(\Phi).
$$
Comparing the two sides of the above equation, we get $U_L^2(\Phi)=0$ as desired.
\end{proof}
\begin{rmk} From the above proof, we find the following compatibility equation: $C=d_XR$. It is not hard to see that the two-form
$C$ is given by the contraction between $F_{l_1}$ and $\Delta_L$, describing the curvature $l_1^2$. In fact $C$ represents
$(2-2g)c_1(X)$. The compatibility equation says that $C$ is an exact form, implying that the Calabi-Yau condition is necessary for
quantum consistency (if $g\neq 1$). In section \ref{subsection:genuine-quantization}, we will show that the Calabi-Yau condition
is also sufficient for anomaly cancellation. Such a phenomenon, arising from the curved $L_\infty$-algebra, does not play a role in
\cite{Kevin-CS}, but we expect it to appear in general.
\end{rmk}
It is easy to see that the leading $\hbar$-order of the quantum master equation reduces to the classical master equation when $L\to 0$. Therefore the square-zero operator
$Q_L+\hbar \Delta_L + \{I[L],-\}_L$ defines a quantization of the classical $Q+\{I_{cl},-\}$.
\subsubsection{$\mathbb{C}^\times$-symmetry}\label{section:classical-symmetry}
Later, when we study the quantum theory of the B-twisted $\sigma$-model, we will be interested in quantizations which preserve a certain
symmetry that we now describe. We consider the following action of $\mathbb{C}^\times$ on $\mathcal{E}$:
$$
\lambda \cdot(\alpha_1\otimes\mathfrak{g}_1+\alpha_2\otimes\mathfrak{g}_2^\vee):=\alpha_1\otimes\mathfrak{g}_1+\lambda^{-1}
\alpha_2\otimes\mathfrak{g}_2^\vee,\ \lambda\in\mathbb{C}^\times.
$$
\begin{defn}
We define an action of $\mathbb{C}^\times$ on $\mathcal{O}(\mathcal{E})((\hbar))$ by
$$(\lambda \cdot
(\hbar^k F))(v):=\lambda^{k}\hbar^kF(\lambda^{-1}\cdot v), \quad F\in\mathcal{O}(\mathcal{E}),\ v\in\mathcal{E}.
$$
\end{defn}
It is obvious that the classical interaction ${I_{cl}/\hbar}$ is invariant under this action. The following lemma can be proved by a straightforward calculation, which we omit:
\begin{lem}\label{lem:C-action}
The following operations are equivariant under the action of $\mathbb{C}^\times$:
\begin{enumerate}
\item The renormalization group flow operator:
$W(\mathbb{P}_\epsilon^L,-):(\mathcal{O}(\mathcal{E})[[\hbar]])^+\rightarrow(\mathcal{O}(\mathcal{E})[[\hbar]])^+$,
\item The differential $Q:\mathcal{O}(\mathcal{E})[[\hbar]]\rightarrow\mathcal{O}(\mathcal{E})[[\hbar]]$,
\item The quantized differential $Q_L+\hbar\Delta_L+{\hbar^{-1}F_{l_1}}$,
\item The BV bracket
$\{-,-\}_L:\mathcal{O}(\mathcal{E})[[\hbar]]\otimes_{\mathcal A_X^\sharp[[\hbar]]}\mathcal{O}(\mathcal{E})[[\hbar]]\rightarrow\mathcal{O}
(\mathcal{E} )[[\hbar]]$, for all $L>0$.
\end{enumerate}
\end{lem}
The following proposition describes those functionals in $\mathcal{O}(\mathcal{E})[[\hbar]]$ that are $\mathbb{C}^\times$-invariant.
\begin{prop}\label{prop:invariant-actions}
Let $I=\sum_{i\geq 0}I^{(i)}\cdot \hbar^i\in\mathcal{O}(\mathcal{E})[[\hbar]]$. If $I$ is invariant under the
$\mathbb{C}^\times$
action, then $I^{(i)}=0$ for $i>1$, and furthermore $I^{(1)}$ lies in the subspace
$$\mathcal{O}(\mathcal{A}(\Sigma_g)\otimes\mathfrak{g}_X[1])\subset\mathcal{O}(\mathcal{E}).$$
\end{prop}
\begin{proof}
The hypothesis that $I$ is invariant implies that $I^{(i)}$ has weight $-i$ under the $\mathbb{C}^\times$ action. Notice that the
weight of the $\mathbb{C}^\times$ action on $\mathcal{O}(\mathcal{E})$ can only be $-1$ or $0$, which implies the first statement. The
second
statement of the proposition is obvious.
\end{proof}
\subsection{Quantization}
We study the quantization of the B-twisted $\sigma$-model in this section. First, let us recall the definition of a
perturbative quantization of a classical field theory in the Batalin-Vilkovisky formalism, following \cite{Kevin-book}:
\begin{defn}
Let $I\in\mathcal{O}_{loc}(\mathcal{E})$ be a classical interaction functional satisfying the classical master equation. A quantization of the classical field theory defined by
$I$ consists of a collection
$\{I[L]\in(\mathcal{O}(\mathcal{E})[[\hbar]])^+|L\in\mathbb{R}_+\}$ of effective functionals such that
\begin{enumerate}
\item The renormalization group equation is satisfied,
\item The functional $I[L]$ must satisfy a locality axiom, saying that as $L\rightarrow 0$ the functional $I[L]$ becomes more and
more local,
\item The functional $I[L]$ satisfies the scale $L$ $quantum\ master\ equation$ (\ref{eqn:qunatum-master-equation}),
\item Modulo $\hbar$, the $L\rightarrow 0$ limit of $I[L]$ agrees with the classical interaction functional $I$.
\end{enumerate}
\end{defn}
The strategy for constructing a quantization of a given classical action functional is to run the renormalization group flow from
scale $0$ to scale $L$. In other words, we try to define the effective interaction $I[L]$ as the following
limit $$\lim_{\epsilon\rightarrow 0}W(\mathbb{P}_\epsilon^L,I).$$ However, the
above limit does not exist in general. The technique of counter terms solves the problem: after the choice of a
$Renormalization\ Scheme$, there is a unique set of counter terms $I^{CT}(\epsilon)\in(\mathcal{O}_{loc}(\mathcal{E})[[\hbar]])^+$ such that the limit
\begin{equation}\label{eqn:naive-quantization}
\lim_{\epsilon\rightarrow
0}W(\mathbb{P}_\epsilon^L,I-I^{CT}(\epsilon))\in(\mathcal{O}(\mathcal{E})[[\hbar]])^+
\end{equation}
exists. For more details on Renormalization Schemes and counter terms, we refer
the reader to \cite{Kevin-book}. It is then natural to define the $naive\ quantization$ $I_{naive}[L]$ of $I$ to be the limit in
equation (\ref{eqn:naive-quantization}). For the B-twisted $\sigma$-model, we calculate the naive quantization in section
\ref{subsection:naive-quantization}. In particular, we show that the counter terms for our theory actually vanish.
The naive quantization $\{I_{naive}[L]| L>0\}$ automatically satisfies the renormalization group equation and the locality axiom by
construction. However, it does not satisfy the quantum master equation in general. In order to find the genuine quantization, we
need to analyze the possible cohomological obstruction to solving the QME, and correct the naive
quantization $\{I_{naive}[L]|L>0\}$ term by term in $\hbar$ accordingly if the obstruction vanishes. The
$\mathbb{C}^\times$ symmetry of the classical interaction $I_{cl}$ reduces this computation to the one-loop anomaly, and in Appendix
\ref{appendix:one-loop-anomaly} we give
a formula for the one-loop anomaly of general field theories. This formula, when specialized to the B-twisted $\sigma$-model, shows that
the condition for anomaly cancellation is exactly the Calabi-Yau condition on the target $X$. This is done in section
\ref{subsection:genuine-quantization}. In section \ref{subsection:one-loop-correction}, we give an explicit formula for the
one-loop correction of the naive quantization when $X$ is Calabi-Yau.
\begin{rmk}
In later sections, we will give the details of the analysis for Riemann surfaces of genus $g>1$; the analysis for $\mathbb{P}^1$ and for
elliptic curves is similar and in fact easier, so we omit it.
\end{rmk}
\subsubsection{The naive quantization}\label{subsection:naive-quantization}
Let $I_{cl}$ denote the classical interaction of B-twisted $\sigma$-model. We will show that the following limit exists for all $L>0$:
\begin{equation}\label{eqn:RG flow}
\lim_{\epsilon\rightarrow 0}\ W(\mathbb{P}_\epsilon^L,I_{cl}).
\end{equation}
The following simple observation simplifies the analysis greatly: for any $L>\epsilon>0$ and any graph $\gamma$, the associated Feynman weight
$\frac{\hbar^{g(\gamma)}}{|\text{Aut}(\gamma)|}W_\gamma(\mathbb{P}_\epsilon^L,I_{cl})$ is invariant
under the $\mathbb{C}^\times$ action defined in section
\ref{section:classical-symmetry}, by Lemma \ref{lem:C-action} and the fact that $I_{cl}/\hbar$ is $\mathbb{C}^\times$-invariant. By
Proposition
\ref{prop:invariant-actions}, we have $$W_\gamma(\mathbb{P}_\epsilon^L,I_{cl})=0$$ for
those stable graphs $\gamma$ with genus greater than $1$. Thus we only need to consider Feynman weights for trees and one-loop
graphs. For any classical interaction $I$, the limit (\ref{eqn:RG flow}) always exists for trees, but not necessarily for
one-loop graphs. Fortunately, for the classical interaction $I_{cl}$ of the B-twisted $\sigma$-model, the limit (\ref{eqn:RG
flow}) exists.
\begin{lem-defn}\label{lem:naive quantization}
Let $\gamma$ be a graph of genus $1$, then the following limit exists: $$ \lim_{\epsilon\rightarrow
0}W_\gamma(\mathbb{P}_\epsilon^L,I_{cl}).$$
We define the naive quantization at scale $L$ to be
$$
I_{naive}[L]:=\lim_{\epsilon\to 0}W(\mathbb{P}_\epsilon^L,I_{cl}).
$$
\end{lem-defn}
\begin{rmk}
As discussed in Definition \ref{defn:propagator}, the propagator $\mathbb{P}_\epsilon^L$ consists of an analytic part and a combinatorial part. It is clear that
only the analytic part is relevant to the convergence issue. In the following, we will use the notation $W(P_\epsilon^L,I_{cl})$ to denote the
analytic part of the RG flow $W(\mathbb{P}_\epsilon^L,I_{cl})$, whose inputs are only differential forms on $\Sigma_g$.
We will also use similar notations replacing $\mathbb{K}_\epsilon$ by $K_\epsilon$ later.
\end{rmk}
\begin{proof}
Since $I_{cl}\in(\mathcal{O}({\mathcal E}))^+\subset(\mathcal{O}({\mathcal E})[[\hbar]])^+$, we only need to consider those genus $1$ graphs $\gamma$
whose vertices are all of genus $0$, i.e. $b_1(\gamma)=1$. Such a graph is called a $wheel$ if it cannot be disconnected by
removing a single edge. Every graph with first Betti number $1$ is
a wheel with trees attached to it. Since trees do not contribute any divergence, we only need to prove
the lemma for wheels. Let $\gamma$ be a wheel with $n$ tails, and
let $\alpha_1\otimes\mathfrak{g}_1,\cdots,\alpha_n\otimes\mathfrak{g}_n\in\mathcal{E}$ be the inputs on the tails. If the valency of
some vertex of $\gamma$ is greater than $3$, we can combine the analytic parts of the inputs on the tails attached to the same
vertex. More precisely, the convergence properties of the following two Feynman weights are the same:
\begin{equation*}
\figbox{0.2}{trivalentloop}\hspace{30mm}\figbox{0.2}{morevalentloop}
\end{equation*}
Thus the proof of the lemma can be further reduced to trivalent wheels. Let $\gamma$ be a trivalent wheel with $n$ vertices; we
prove the lemma in the three
possible cases:
(1) $n=1$: in this case, the graph $\gamma$ contains a self-loop, and the Feynman weight is given by
$$
W_\gamma(P_\epsilon^L,I_{cl})(\alpha)=\int_{t=\epsilon}^Ldt\int_{z_1\in\Sigma_g}d^*K_t(z_1,z_1)\alpha.
$$
Let the Riemann surface $\Sigma_g$ be of the form $\Sigma_g=\H/\Gamma$, where $\Gamma$ is a discrete group of isometries of $\H$. Let $k_t$ denote
the heat kernel on $\H$, and let $\pi$ denote the natural projection $\H\rightarrow \Sigma_g$. The heat kernel on $\Sigma_g$ can be written as:
$$
K_t(\pi(x_1),\pi(x_2))=\sum_{g\in\Gamma}k_t(x_1,g\cdot x_2),
$$
from which it is clear that $d^*K_t$ stays regular along the diagonal in $\Sigma_g\times\Sigma_g$ as $t\to 0$: we can pick $x_1=x_2$ in the above identity. If $g=\text{id}$, then $d^*k_t(x_1,x_1)$
vanishes by Lemma \ref{lem:asymp-propagator}, and otherwise $d^*k_t(x_1,g\cdot x_1)$ is automatically regular, since the heat kernel is singular only along the diagonal and $x_1\not=g\cdot
x_1$.
(2) $n\geq 3$: the Feynman weight is given explicitly by:
\begin{equation}\label{eqn:Feynman-weight}
\begin{aligned}
&W_\gamma(P_\epsilon^L,I_{cl})(\alpha_1,\cdots,\alpha_n)\\
=&\int_{z_1,\cdots,z_n\in\Sigma_g}P_\epsilon^L(z_1,z_2)
P_\epsilon^L(z_2,z_3)\cdots
P_\epsilon^L(z_n,z_1) \alpha_1(z_1,\bar{z}_1)\cdots\alpha_n(z_n,\bar{z}_n)\\
=&\int_{t_1,\cdots,t_n=\epsilon}^Ldt_1\cdots
dt_n\int_{z_1,\cdots,z_n\in\Sigma_g}\left(d^*K_{t_1}(z_1,z_2)\right)\cdots\left(d^*K_{ t_n } (z_n ,
z_1)\right)\alpha_1(z_1,\bar{z}_1)\cdots\alpha_n(z_n,\bar{z}_n).
\end{aligned}
\end{equation}
By the same argument as in case (1), replacing $\Sigma_g$ in equation (\ref{eqn:Feynman-weight}) by
$\mathbb{H}$ does not change the convergence properties, so it suffices to consider:
\begin{equation}\label{eqn:Feynman-weight-H}
\int_{t_1,\cdots,t_n=\epsilon}^Ldt_1\cdots
dt_n\int_{z_1,\cdots,z_n\in\mathbb{H}}\left(d^*k_{t_1}(z_1,z_2)\right)\cdots\left(d^*k_{ t_n }
(z_n,z_1)\right)\alpha_1(z_1,\bar{z}_1)\cdots\alpha_n(z_n,\bar{z}_n).
\end{equation}
For simplicity, we keep the notation for the inputs $\alpha_1,\cdots,\alpha_n$ which are now differential forms on $\mathbb{H}$ with compact support.
We can write the integral (\ref{eqn:Feynman-weight-H}) as the sum of the following integrals where $\sigma$ runs over the symmetric group $S_n$:
\begin{equation}\label{eqn:Feynman-weight-sum} \int_{\epsilon\leqslant t_{\sigma(1)}\leqslant\cdots\leqslant t_{\sigma(n)}\leqslant
L}dt_1\cdots
dt_n\int_{z_1,\cdots,z_n\in\mathbb{H}}\left(d^*k_{t_1}(z_1,z_2)\right)\cdots\left(d^*k_{ t_n }
(z_n,z_1)\right)\alpha_1(z_1,\bar{z}_1)\cdots\alpha_n(z_n,\bar{z}_n).
\end{equation}
We will show that the integral (\ref{eqn:Feynman-weight-sum}) converges as $\epsilon\rightarrow 0$ for $\sigma=\text{id}\in S_n$; the proof for the other
permutations $\sigma$ is the same. Let $(z_1,\cdots,z_n)=(x_1+iy_1,\cdots,x_n+iy_n)$ denote the standard coordinates on
$\mathbb{H}\times\cdots\times\mathbb{H}$. By Lemma \ref{lem:asymp-propagator}, the leading term of $d^*k_{t_k}(z_{k},z_{k+1})$ is
of the form
\begin{equation}\label{eqn:propagator-leading-term}
\begin{aligned}
\dfrac{1}{t_k^2}e^{-\frac{\rho^2(z_{k},z_{k+1})}{4t_k}}\Big(&Q_1(z_{k},\bar{z}_k;z_{k+1},\bar{z}_{k+1})(x_{k+1}-x_{k})(dy_{k+1}-dy_{k}
)\\
-&Q_2(z_{k},\bar{z}_k;z_{k+1},\bar{z}_{k+1})(y_{k+1}-y_{k})(\frac{dx_{k+1}}{y_{k+1}}-\frac{dx_{k}}{y_k})\Big),
\end{aligned}
\end{equation} where $Q_1$ and $Q_2$ are smooth functions on $\mathbb{H}\times\mathbb{H}$. To show the convergence of the integral
(\ref{eqn:Feynman-weight-sum}) as $\epsilon\rightarrow 0$, we will apply Wick's lemma and show that the integral of the leading term in
(\ref{eqn:Feynman-weight-sum}) converges; the higher order terms have better convergence properties.
Without loss of generality, we can assume that the supports of the $\alpha_i$'s lie in a small neighborhood of the small diagonal
$\Delta=\{(z,\cdots,z): z\in \mathbb{H}\}$ of $\mathbb{H}\times \cdots \times \mathbb{H}$. Otherwise we can choose a cut-off
function supported around $\Delta$ and split the integral into parts of the desired form. We consider the following change of
coordinates: let
$w_0=(u_0,v_0)=(x_1,y_1)\in\mathbb{H}$ and let $w_k=(u_k,v_k)\in\mathbb{R}^2$ be the Riemann normal coordinate of the point
$(x_{k+1},y_{k+1})$ with center $(x_k,y_k)$ for $1\leqslant k\leqslant n-1$ such that on
$\Delta_k:=\{(z_1,\cdots,z_n)\in\mathbb{H}\times\cdots\times\mathbb{H}: z_k=z_{k+1}\}$, we have
\begin{equation}\label{eqn:change-coordinates-derivative}
\dfrac{\partial(x_{k+1}-x_k)}{\partial u_k}\bigg|_{\Delta_k}=\dfrac{\partial(y_{k+1}-y_k)}{\partial v_k}\bigg|_{\Delta_k}=\dfrac{1}{y_k},
\hspace{5mm} \dfrac{\partial(x_{k+1}-x_k)}{\partial v_k}\bigg|_{\Delta_k}=\dfrac{\partial(y_{k+1}-y_k)}{\partial u_k}\bigg|_{\Delta_k}=0.
\end{equation}
By the definition of Riemann normal coordinates, the geodesic distance between $z_{k}$ and $z_{k+1}$ is
$\rho(z_{k},z_{k+1})=(u_k^2+v_k^2)^{\frac{1}{2}}=||w_k||$ when they are close.
It is obvious that there are smooth positive functions $\{\phi_k,\psi_k,1\leqslant k\leqslant n\}$ on
$\mathbb{H}\times\mathbb{R}^{2n-2}$ such that
\begin{equation}\label{eqn:change-coordinates-estimate}
\begin{aligned}
&|x_{k+1}-x_{k}|\leqslant \phi_k\cdot||w_k||,\hspace{9mm}|y_{k+1}-y_{k}|\leqslant \psi_k\cdot||w_k||,\hspace{5mm}\text{for}\ \
1\leqslant k\leqslant n-1\\
&|x_n-x_1|\leqslant\phi_n\cdot(\sum_{k=1}^{n-1}||w_k||),\hspace{3mm}|y_n-y_1|\leqslant
\psi_n\cdot(\sum_{k=1}^{n-1}||w_k||).\hspace{9mm}
\end{aligned}
\end{equation}
With the above preparation, we are now ready to show the convergence of the integral of the leading term in
(\ref{eqn:Feynman-weight-sum}). After plugging (\ref{eqn:propagator-leading-term}) into (\ref{eqn:Feynman-weight-sum}) and
using the estimate (\ref{eqn:change-coordinates-estimate}), it is not difficult to see that there is a smooth positive function
$\Phi$ on $\mathbb{H}\times\mathbb{R}^{2n-2}$, such that the integral (\ref{eqn:Feynman-weight-sum}) with $\sigma=\text{id}$ is bounded above in absolute value
by:
\begin{align*}
&\int_{\epsilon\leqslant t_1\leqslant\cdots\leqslant
t_n\leqslant L}\prod_{i=1}^{n}dt_i\int_{w_0\in\mathbb{H}}\int_{w_1,\cdots,w_{n-1}\in\mathbb{R}^2}
\Phi(w_0,\cdots,w_{n-1})\cdot\left(\prod_{i=1}^{n-1}\dfrac{||w_i||}{t_i^2}e^{-\frac{||w_i||^2}{4t_i}}\right)\\
&\hspace{65mm}\cdot\dfrac{||w_1||+\cdots+||w_{n-1}||}{t_n^2}\cdot e^{-\frac{\rho^2(z_n,z_1)}{4t_n}}\prod_{i=0}^{n-1}d^2w_i\\
\leqslant&\int_{\epsilon\leqslant
t_1\leqslant\cdots\leqslant
t_n\leqslant L}\prod_{i=1}^{n}dt_i\int_{w_0\in\mathbb{H}}\int_{w_1,\cdots,w_{n-1}\in\mathbb{R}^2}\Phi(w_0,\cdots,w_{n-1})\cdot\left(\prod_{i=1}^{n-1}\dfrac{||w_i||}{t_i^2}e^{-\frac{||w_i||^2}{4t_i}}\right)\\
&\hspace{65mm}\cdot\dfrac{||w_1||+\cdots+||w_{n-1}||}{t_n^2}\prod_{i=0}^{n-1}d^2w_i.
\end{align*} The inequality follows simply by dropping the term $ e^{-\frac{\rho^2(z_n,z_1)}{4t_n}}$, and the function $\Phi$
arises as the product of the absolute values of the following functions and differential forms:
\begin{enumerate}
\item the functions $\phi_k$ and $\psi_k$ in (\ref{eqn:change-coordinates-estimate});
\item The Jacobian of the change from the standard coordinates to the Riemann normal coordinates;
\item The functions $Q_1,Q_2$ in (\ref{eqn:propagator-leading-term});
\item The inputs on the tails of the wheel.
\end{enumerate}
From (4) it is clear that we can choose $\Phi$ with compact support. Thus we only need to show that the
following integral is convergent:
\begin{equation}\label{eqn:estimate-integral-final-form}
\int_{\epsilon\leqslant
t_1\leqslant\cdots\leqslant
t_n\leqslant L}\prod_{i=1}^{n}dt_i\int_{w_1,\cdots,w_{n-1}\in\mathbb{R}^2}\left(\prod_{
i=1}^{n-1}\dfrac{||w_i||}{t_i^2}e^{-\frac{||w_i||^2}{4t_i}}\right)
\cdot\dfrac{||w_1||+\cdots+||w_{n-1}||}{t_n^2}\prod_{i=1}
^{n-1}d^2w_i.
\end{equation}
We can further change the coordinates: let $$\xi_k=w_k\cdot t_k^{-\frac{1}{2}}, \hspace{5mm}1\leqslant k\leqslant n-1.$$ Then
(\ref{eqn:estimate-integral-final-form}) becomes
\begin{align*}
\int_{\epsilon\leqslant t_1\leqslant\cdots\leqslant
t_n\leqslant L}\prod_{i=1}^{n}dt_i\int_{\xi_1,\cdots,\xi_{n-1}\in\mathbb{R}^2}\big(\prod_{i=1}^{n-1}\dfrac{||\xi_i||}{t_i^{1/2}}e^{-\frac{||\xi_i||^2}{4}}\big)\cdot\dfrac{||\xi_1||t_1^{1/2}+\cdots+||\xi_{n-1}||t_{n-1}^{1/2}}{t_n^2}\prod_{i=1}^{n-1}d^2\xi_i,
\end{align*}
which is bounded above by
$$\left(\int_{\epsilon\leqslant
t_1\leqslant\cdots\leqslant t_n\leqslant L}\big(\prod_{i=1}^{n-1}t_i^{-\frac{1}{2}}\big)t_n^{-\frac{3}{2}}\prod_{i=1}^{n}
dt_i\right)\cdot\left(\int_{\xi_1,\cdots,\xi_{n-1}\in\mathbb{R}^2}P(||\xi_i||)e^{-\sum_{i=1}^{n-1}||\xi_i||^2/4}\right),$$ where
$P(||\xi_i||)$ is a polynomial in the $||\xi_i||$'s. It is not difficult to see that the first integral converges as
$\epsilon\rightarrow0$ when
$n\geqslant 3$, and that the second integral is finite.
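For completeness, here is a quick check of this convergence claim (the positive constants are immaterial). Integrating the ordered variables $t_1,\cdots,t_{n-1}$ one at a time gives, by a simple induction,
$$
\int_{0\leqslant t_1\leqslant\cdots\leqslant t_{n-1}\leqslant t_n}\prod_{i=1}^{n-1}t_i^{-\frac{1}{2}}\,dt_1\cdots dt_{n-1}=c_n\, t_n^{\frac{n-1}{2}},\qquad c_n>0,
$$
so the first integral is bounded by $c_n\int_0^L t_n^{\frac{n-1}{2}-\frac{3}{2}}\,dt_n=c_n\int_0^L t_n^{\frac{n-4}{2}}\,dt_n$, which is finite precisely when $\frac{n-4}{2}>-1$, i.e. when $n\geqslant 3$; the second integral is finite since $P(||\xi_i||)e^{-\sum_{i}||\xi_i||^2/4}$ is integrable on $\mathbb{R}^{2(n-1)}$.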
(3) $n=2$: Plugging the leading term of
(\ref{eqn:propagator-leading-term}) into the integral (\ref{eqn:Feynman-weight-H}) for $n=2$, we can see that the integral of
the leading term is of the following form:
\begin{equation}\label{eqn:integral}
\begin{aligned}
\int_{t_1,t_2=\epsilon}^Ldt_1dt_2\,\dfrac{1}{t_1^2t_2^2}\int_{(u_0,v_0)\in\mathbb{H}}\int_{(u_1,v_1)\in\mathbb{R}^2}&\Phi(u_0,v_0,u_1,v_1)(x_1-x_2)(y_1-y_2)\\
&\exp\left(-(u_1^2+v_1^2)\Big(\frac{1}{t_1}+\frac{1}{t_2}\Big)\right)du_0dv_0du_1dv_1,
\end{aligned}
\end{equation}
where $\Phi$ is similar to that in the case where $n\geq 3$. The fact that the functions $(x_1-x_2)^2$ and $(y_1-y_2)^2$ do not show up in
equation (\ref{eqn:integral}) follows from the trivial observation that $(dy_1-dy_2)^2=(\frac{dx_1}{y_1}-\frac{dx_2}{y_2})^2=0$. This simple fact, together
with the derivatives of $x_1-x_2$ and $y_1-y_2$ in (\ref{eqn:change-coordinates-derivative}) implies that the leading term in Wick's expansion of
the integral
$$\dfrac{1}{t_1^2t_2^2}
\int_{(u_1,v_1)\in\mathbb{R}^2}\Phi(u_0,v_0,u_1,v_1)(x_1-x_2)(y_1-y_2)\exp\left(-(u_1^2+v_1^2)(\frac{1}{t_1}+\frac{1}{t_2}
)\right)du_1dv_1$$
is given by a multiple of
$$\dfrac{1}{t_1^2t_2^2}
\int_{(u_1,v_1)\in\mathbb{R}^2}u_1^2 v_1^2 \exp\left(-(u_1^2+v_1^2)(\frac{1}{t_1}+\frac{1}{t_2}
)\right)du_1dv_1\propto \dfrac{t_1t_2}{(t_1+t_2)^3}.
$$
The integral of $\dfrac{t_1t_2}{(t_1+t_2)^3}$
on $[\epsilon,L]\times[\epsilon,L]$ clearly converges as $\epsilon\rightarrow 0$. Furthermore, since $\Phi$ has compact support on
$\mathbb{H}\times\mathbb{R}^2$, it is clear that (\ref{eqn:integral}) converges as $\epsilon\rightarrow 0$.
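For the reader's convenience, the proportionality and the convergence above can be checked directly (the overall constants are irrelevant): with $a=\frac{1}{t_1}+\frac{1}{t_2}$ we have
$$
\int_{\mathbb{R}^2}u_1^2v_1^2\,e^{-a(u_1^2+v_1^2)}\,du_1dv_1=\Big(\int_{\mathbb{R}}u^2e^{-au^2}\,du\Big)^2=\frac{\pi}{4a^3}=\frac{\pi}{4}\cdot\frac{t_1^3t_2^3}{(t_1+t_2)^3},
$$
so multiplying by the prefactor $\frac{1}{t_1^2t_2^2}$ indeed yields a multiple of $\frac{t_1t_2}{(t_1+t_2)^3}$; moreover $t_1t_2\leqslant\frac{(t_1+t_2)^2}{4}$, so its integral over $[\epsilon,L]\times[\epsilon,L]$ is bounded by $\frac{1}{4}\int_0^L\!\!\int_0^L\frac{dt_1dt_2}{t_1+t_2}<\infty$.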
\end{proof}
\subsubsection{Obstruction analysis} By construction, the naive quantization $\{I_{naive}[L]|L\in\mathbb{R}_+\}$ satisfies all requirements of a quantization except for
the quantum master equation. In general, there exist potential obstructions to solving the quantum master equation, known as the
anomaly. The analysis of such obstructions is usually very difficult. In \cite{Kevin-book}, Costello has developed a convenient
deformation theory to deal with this problem, which we will follow to compute the obstruction space of the B-twisted
$\sigma$-model.
Recall that $Ob=\bracket{\mathcal O_{loc}^+({\mathcal E}), Q+\fbracket{I_{cl},-}}$ is the deformation-obstruction complex of our theory. Costello's deformation method says that
$$
H^1(Ob)
$$
is the obstruction space for solving the quantum master equation, and
$$
H^0(Ob)
$$
parametrizes the deformation space. Both cohomology groups can be computed via $D$-module techniques. In our case, we can restrict to a subcomplex of $Ob$,
thanks to the $\mathbb{C}^\times$-symmetry.
\begin{defn} We define $\tilde{{\mathcal E}}\subset {\mathcal E}$ to be the subspace
$$
\tilde{{\mathcal E}}:=\mathcal A_{\Sigma_g}\otimes \mathfrak{g}_X[1]
$$
and $\widetilde{Ob}$ to be the reduced deformation-obstruction complex
$$
\widetilde{Ob}:=\bracket{\mathcal O_{loc}^+\bracket{\tilde{{\mathcal E}}}, Q+\fbracket{I_{cl},-}}.
$$
\end{defn}
\begin{prop} The obstruction space for solving the quantum master equation with the prescribed $\mathbb{C}^\times$-symmetry is $H^1\bracket{\widetilde{Ob}}$.
\end{prop}
\begin{proof} This is the same as the holomorphic Chern-Simons theory in \cite{Kevin-CS}.
\end{proof}
To describe the complex $\widetilde{Ob}$, we first introduce some notation. Let
$$
\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}:=\Jet_{\Sigma_g}\bracket{\mathcal A_{\Sigma_g}}\otimes \mathfrak{g}_X[1]
$$
be the sheaf of smooth jets of differential forms on $\Sigma_g$ valued in $\mathfrak{g}_X[1]$, and let $D_{\Sigma_g}$ be the sheaf of smooth differential operators on
$\Sigma_g$. $\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}$ is naturally a $D_{\Sigma_g}$-module, and we define its dual
$$
\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}^\vee:=\Hom_{C^{\infty}(\Sigma_g)\otimes \mathcal A_{X}^{\sharp}}\bracket{\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}},
C^{\infty}(\Sigma_g)\otimes \mathcal A_X^{\sharp}}.
$$
Equivalently,
$$
\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}^\vee= \Jet_{\Sigma_g}\bracket{\mathcal A_{\Sigma_g}}^\vee\otimes \mathfrak{g}_X[1]^\vee,
$$
where $\Jet_{\Sigma_g}\bracket{\mathcal A_{\Sigma_g}}^\vee$ is the complex of dual $D_{\Sigma_g}$-module of $\Jet_{\Sigma_g}\bracket{\mathcal A_{\Sigma_g}}$, with induced differential which we still denote by $d_{\Sigma_g}$. There is a natural identification between complexes of $D_{\Sigma_g}$-modules
$$
\Jet_{\Sigma_g}\bracket{\mathcal A_{\Sigma_g}}^\vee \cong D_{\Sigma_g}\otimes \wedge^* T_{\Sigma_g},
$$
where $T_{\Sigma_g}$ is the smooth tangent bundle, and the right hand side is the usual complex of Spencer's resolution. In particular, we have the quasi-isomorphism
\begin{align}\label{spencer resolution}
(\Jet_{\Sigma_g}\bracket{\mathcal A_{\Sigma_g}}^\vee, d_{\Sigma_g})\simeq C^{\infty}(\Sigma_g).
\end{align}
The dual $\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}^\vee$ is a locally free $D_{\Sigma_g}$-module. We will let $\mathcal A_{\Sigma_g}^{top}$ denote the right $D_{\Sigma_g}$-module of
top differential forms on $\Sigma_g$. According to the definition of local functionals, $\mathcal O_{loc}^+(\tilde{{\mathcal E}})$ is isomorphic to
the global sections of the following complex of sheaves on $\Sigma_g$:
\begin{equation}\label{eqn:deformation-obstruction-jet}
\begin{aligned}
\mathcal A_{\Sigma_g}^{top}\otimes_{D_{\Sigma_g}} \prod_{k\geq 1}\Sym^k_{D_{\Sigma_g}\otimes \mathcal A_X^{\sharp}}\bracket{\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}^\vee}
\end{aligned}
\end{equation}
with the differential induced from $Q+\fbracket{I_{cl},-}$. All the sheaves here, including the sheaf of jets, are sheaves of modules over
smooth functions on $\Sigma_g$. Thus they are all fine sheaves, which implies that the cohomology we want to compute
is nothing but the hypercohomology of the complex (\ref{eqn:deformation-obstruction-jet}) with respect to the global section functor.
\begin{prop}
The cohomology of the deformation-obstruction complex of B-twisted $\sigma$-model is
$$
H^k(\widetilde{Ob})=\sum_{p+q=k+2}H^p_{dR}(\Sigma_g)\otimes H^q(X,\Omega_{cl}^1),
$$
where $\Omega_{cl}^1$ is the sheaf of closed holomorphic 1-forms on $X$. In particular, the obstruction space for the quantization at one-loop is given by
\begin{align*}
H^1(\widetilde{Ob})= \bracket{H^0_{dR}(\Sigma_g)\otimes H^3(X,\Omega_{cl}^1)} \oplus \bracket{H^1_{dR}(\Sigma_g)\otimes
H^2(X,\Omega_{cl}^1)}
\oplus \bracket{H^2_{dR}(\Sigma_g) \otimes H^1(X,\Omega_{cl}^1)}.
\end{align*}
\end{prop}
\begin{proof} We follow the strategy developed in \cite{Kevin-CS}. The Koszul resolution gives a resolution of
the right $D_{\Sigma_g}$-module $\mathcal A_{\Sigma_g}^{top}$:
$$
\mathcal A_{\Sigma_g}(D_{\Sigma_g})[2]\to \mathcal A_{\Sigma_g}^{top},
$$
where $\mathcal A_{\Sigma_g}\bracket{D_{\Sigma_g}}[2]$ is the de Rham complex of $D_{\Sigma_g}$. Together with the quasi-isomorphism \eqref{spencer resolution} and the fact that the
$D_{\Sigma_g}$-module $\prod\limits_{k\geq
1}\Sym^k_{D_{\Sigma_g}\otimes \mathcal A_X^{\sharp}}\bracket{\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}^\vee}$ is flat, we find
quasi-isomorphisms
\begin{align*}
\widetilde{Ob}&\cong \mathcal A_{\Sigma_g}^{top}\otimes^{L}_{D_{\Sigma_g}} \bracket{\prod_{k\geq 1}\Sym^k_{D_{\Sigma_g}\otimes
\mathcal A_X^{\sharp}}\bracket{\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}^\vee}}\\
&\cong \mathcal A_{\Sigma_g}(D_{\Sigma_g})\otimes_{D_{\Sigma_g}}^L\bracket{ \prod_{k\geq 1}\Sym^k_{D_{\Sigma_g}\otimes
\mathcal A_X^{\sharp}}\bracket{\Jet_{\Sigma_g}\bracket{\tilde{{\mathcal E}}}^\vee}}[2]\\
&\cong \mathcal A_{\Sigma_g}\otimes_{\mathbb{C}}\bracket{\prod_{k\geq 1}
\Sym^k_{\mathcal A_{X}^{\sharp}}\bracket{\mathfrak{g}_X[1]^\vee}}[2]=\mathcal A_{\Sigma_g}\otimes_{\mathbb{C}} C^*_{red}\bracket{\mathfrak{g}_X}[2].
\end{align*}
The differential on the last complex is $d_{\Sigma_g}+l_1+\{I_{cl},-\}=d_{\Sigma_g}+d_{CE}$, where $d_{\Sigma_g}$ is the de Rham differential on
$\mathcal A_{\Sigma_g}$ and $d_{CE}$ is the Chevalley-Eilenberg
differential on the reduced Chevalley-Eilenberg complex of $\mathfrak{g}_X$. Therefore
$$
H^k\bracket{\widetilde{Ob}}=\sum_{p+q=k+2} H^p_{dR}\bracket{\Sigma_g}\otimes H^q\bracket{C^*_{red}(\mathfrak{g}_X), d_{CE}}.
$$
Finally, from the following short exact sequence
$$
0\to \mathcal A_X\to C^*\bracket{\mathfrak{g}_X}\to C^*_{red}\bracket{\mathfrak{g}_X}\to 0,
$$
we have the quasi-isomorphism of complexes of sheaves
$$
C^*_{red}\bracket{\mathfrak{g}_X} \simeq \bbracket{\mathcal A_X\to C^*\bracket{\mathfrak{g}_X}}\simeq \bbracket{\mathbb{C}\to {\mathcal O}_X}\simeq \Omega^1_{X, cl},
$$
which implies
$$
H^q\bracket{C^*_{red}(\mathfrak{g}_X), d_{CE}}\cong H^q\bracket{X, \Omega_{X,cl}^1}.
$$
\end{proof}
\subsubsection{Computation of the obstruction}\label{subsection:genuine-quantization}
We now compute the obstruction to the quantization of the B-twisted $\sigma$-model.
In section \ref{subsection:naive-quantization}, we have seen that the $\mathbb{C}^\times$-invariance of $I_{cl}/\hbar$ and of the RG flow
operator guarantees that the naive quantization $\{I_{naive}[L]|L>0\}$ contains only the constant and linear terms in the power
expansion in $\hbar$. The naive quantization automatically satisfies the quantum master equation modulo $\hbar$ since $I_{cl}$
satisfies the classical master equation. Thus, we only need to take care of the one-loop anomaly. We have the following explicit
graphical expression of the one-loop anomaly for general perturbative QFTs:
\begin{thm}\label{theorem:one-loop-anomaly} The one-loop obstruction $O_1$ to quantizing a classical field theory with classical interaction $I_{cl}$ is
given graphically by
\begin{equation}\label{graph:one-loop anomaly}
O_1=\lim_{\epsilon\rightarrow0}\left(\figbox{0.18}{one-loop-edge-K-ep-K0}\right)+\lim_{\epsilon\rightarrow
0}\Big(\figbox{0.18}{one-loop-self}\Big)
\end{equation}
\end{thm}
\begin{rmk}
After fixing a renormalization scheme, we can define the smooth part of a Feynman weight $W_\gamma(P_\epsilon^L, I_{cl})$ for any graph
$\gamma$. We take the smooth part of the term in the dashed red circle.
\end{rmk}
The proof of this theorem is given in Appendix \ref{appendix:one-loop-anomaly}. For the B-twisted $\sigma$-model, the following two
lemmas imply that the first term in (\ref{graph:one-loop anomaly}) vanishes as $\epsilon\rightarrow 0$. We defer the proof of these two
lemmas to Appendix \ref{appendix:Feynman-graph-computation}.
\begin{lem}\label{lem:two-vertex-wheels}
Let $\gamma$ be a genus $1$ graph containing a wheel with $2$ vertices. Then the following Feynman weight vanishes:
\begin{equation}\label{eqn:two-vertex-vanish-susy}
\figbox{0.18}{wheelwithtwovertices}
\end{equation}
\end{lem}
\begin{lem}\label{lem:more-vertex-wheels}
Let $\gamma$ be a genus $1$ graph containing a wheel with $n$ vertices, and let $e$ be an edge of $\gamma$ which is
part of the wheel.
Assume
that $n\geq 3$, then we have $$\lim_{\epsilon\rightarrow 0}
W_{\gamma,e}(\mathbb{P}_{\epsilon}^L,\mathbb{K}_\epsilon-\mathbb{K}_0,I_{cl})=\lim_{\epsilon\rightarrow
0}\left(\figbox{0.18}{anomaly}\right)=0.$$
\end{lem}
Hence the scale $L$ one-loop obstruction is given by
\begin{equation}\label{eqn:obstruction}
O_1[L]=\sum_{\gamma:tree}\lim_{\epsilon\rightarrow
0}\dfrac{1}{|\text{Aut}(\gamma)|}W_\gamma\Big(\mathbb{P}_\epsilon^L,\figbox{0.18}{one-loop-self}\Big).
\end{equation}
By the fact that $\lim_{L\rightarrow 0}(I_{naive}^{(0)}[L])=I_{cl}$, we have:
\begin{equation}\label{eqn:scale-0-obstruction}
O_1=\lim_{L\rightarrow 0}O_1[L]=\hspace{1mm}\lim_{\epsilon\rightarrow 0}\left(\figbox{0.18}{one-loop-self} \right).
\end{equation}
The obstruction $O_1$ contains an analytic part and a combinatorial part. It is clear that the analytic part is given by the limit of the supertrace
of the heat kernel along the diagonal in $\Sigma_g\times \Sigma_g$:
$$
\lim_{\epsilon\rightarrow 0}Str(K_\epsilon(z,z))=(2-2g)\text{dvol}_{\Sigma_g},
$$ where
$\text{dvol}_{\Sigma_g}$ is the normalized volume form on $\Sigma_g$ with respect to the constant curvature metric, and the identity follows from the local
index theorem. As in the holomorphic Chern-Simons theory \cite{Kevin-CS}, the combinatorial factor
of (\ref{eqn:scale-0-obstruction}) gives the first Chern class of the target manifold $X$. Thus, we can conclude this section with:
\begin{thm}\label{thm:obstruction-quantization}
The obstruction to quantizing B-twisted $\sigma$-model is given by $$[(2-2g)\text{dvol}_{\Sigma_g}]\otimes c_1(X)=c_1(\Sigma_g)\otimes
c_1(X)\in H^2_{dR}(\Sigma_g)\otimes
H^1(X,\Omega_{cl}^1)\subset H^1(\widetilde{Ob}),$$ and the topological B-twisted $\sigma$-model can be quantized (on any Riemann
surface $\Sigma_g$) if and only if the target $X$ is Calabi-Yau.
\end{thm}
\subsubsection{One-loop quantum correction}\label{subsection:one-loop-correction} Now let us assume that $X$ is a Calabi-Yau
manifold with a holomorphic volume form $\Omega_X$. By Theorem \ref{thm:obstruction-quantization}, the quantization of our
topological B-twisted $\sigma$-model is unobstructed. This
means that there exists some quantum correction $I_{qc}[L]$ to the naive quantization $I_{naive}[L]$ such that $I_{naive}[L]+\hbar I_{qc}[L]$ solves the quantum
master equation. In this section we give an explicit description of the one-loop quantum correction which will be used in the next section to compute the
quantum correlation functions.
We first have the following lemma:
\begin{lem}\label{lem:obstruction-local-effective}
Let $I_{qc}\in\mathcal{O}_{loc}(\mathcal{E})$ be a local functional on $\mathcal{E}$ satisfying the equation
\begin{equation}\label{eqn:local-version-eqn-one-loop-correction}
QI_{qc}+\{I_{cl},I_{qc}\}=O_1,
\end{equation}
where $O_1$ is the one-loop anomaly described in section \ref{subsection:genuine-quantization}.
Then the effective functionals $$I_{qc}[L]:=\lim_{\epsilon\rightarrow
0}\sum_{\gamma\in\text{trees}, v\in V(\gamma)}W_{\gamma,v}(P_\epsilon^L, I_{cl},I_{qc})$$ satisfy the equation
$$
QI_{qc}[L]+\{I^{(0)}_{naive}[L],I_{qc}[L]\}_L=O_1[L],
$$
where $W_{\gamma,v}(P_\epsilon^L, I_{cl},I_{qc})$ is the Feynman weight associated to the graph $\gamma$ with the vertex $v$
labeled by $I_{qc}$ and all other vertices labeled by $I_{cl}$. In particular, $I_{naive}[L]+\hbar I_{qc}[L]$ solves the quantum
master equation.
\end{lem}
\begin{proof}
The proof of the lemma is a simple Feynman graph calculation. See \cite{Kevin-book}.
\end{proof}
The objective is to find a local functional $I_{qc}$ satisfying equation (\ref{eqn:local-version-eqn-one-loop-correction}). Let
$\Delta$ be the operator on $\text{Sym}^*(\mathfrak{g}_X)\otimes\text{Sym}^*(\mathfrak{g}_X[1]^\vee)$ given by contraction
with the identity in
$\text{End}_{\mathcal{A}_X}(\mathfrak{g}_X\oplus\mathfrak{g}_X^\vee)$, and let $L$ denote the functional on
$\mathfrak{g}_X[1]\oplus\mathfrak{g}_X^\vee$ given by $$L(\alpha+\beta):=\sum_{n\geqslant 0}\dfrac{1}{(n+1)!}\langle
l_n(\alpha^{\otimes n}),\beta\rangle,\hspace{3mm} \alpha\in\mathfrak{g}_X[1],\beta\in\mathfrak{g}_X^\vee.$$ From the graphical expression of $O_1$
in equation (\ref{eqn:scale-0-obstruction}), it is not difficult to see that $O_1$ is only a functional on
$C^\infty(\Sigma_g)\otimes\mathfrak{g}_X[1]$ of the following form: $$(O_1)_k((f_1\otimes
g_1)\otimes\cdots\otimes (f_k\otimes g_k))=(2-2g)(\Delta L)_k(g_1\otimes\cdots\otimes g_k) \int_{\Sigma_g}f_1\cdots f_k\ \text{dvol}_{\Sigma_g},
$$
where $(O_1)_k$ denotes the $k$-th component of $O_1$ in ${\mathcal O}^{(k)}({\mathcal E})$, and similarly for $(\Delta L)_k$. We look for an $I_{qc}$ which is also a functional only on $C^\infty(\Sigma_g)\otimes\mathfrak{g}_X[1]$, of the form
\begin{align}\label{correction-form}(I_{qc})_k((f_1\otimes
g_1)\otimes\cdots\otimes (f_k\otimes g_k))=B_k(g_1\otimes\cdots\otimes g_k)\int_{\Sigma_g}f_1\cdots f_k\ \text{dvol}_{\Sigma_g},
\end{align}
where $B_k\in \text{Sym}^k(\mathfrak{g}_X[1]^\vee)$. With this ansatz, we have $QI_{qc}=l_1 I_{qc}$ for type reasons, and
equation (\ref{eqn:local-version-eqn-one-loop-correction}) is reduced to
\begin{equation}\label{eqn:quantum-correction}
l_1 I_{qc}+\{I_{cl}, I_{qc}\}=O_1.
\end{equation}
Let $B=\sum_{k\geq 0} B_k$; it is clear that
$$
\left(l_1I_{qc}+\{I_{cl},I_{qc}\}\right)((f_1\otimes g_1)\otimes\cdots\otimes (f_k\otimes g_k))=(d_{CE}B)(g_1\otimes\cdots\otimes
g_k) \int_{\Sigma_g}f_1\cdots f_k\ \text{dvol}_{\Sigma_g}.
$$
Equation (\ref{eqn:quantum-correction}) is then reduced to
\begin{equation*}
d_{CE}B=(2-2g)\Delta L
\end{equation*}
which, since the Chevalley-Eilenberg differential $d_{CE}$ is the same as the bracket
$\{L,-\}$, can be further reduced to
\begin{equation}\label{eqn:one-loop-correction-equation}
(2-2g)\Delta L-\{L, B\}=0.
\end{equation}
Since we only need to solve the equation modulo constant functionals, equation (\ref{eqn:one-loop-correction-equation}) is
equivalent to the vanishing of the operator $\{(2-2g)\Delta L-\{L,B\},-\}$.
\begin{lem}\label{lem:one-loop-correction-brackets}
We have the following two identities for any $B$:
\begin{equation*}
\begin{aligned}
\{\{L,B\},-\}&=[\{L,-\},\{B,-\}],\\
\{\Delta L,-\}&=[\Delta,\{L,-\}].
\end{aligned}
\end{equation*}
\end{lem}
\begin{proof}
The first identity follows directly from the Jacobi identity. The second identity follows from the identity
$$
\bbracket{\Delta, \bbracket{\Delta, L}}=0.
$$
\end{proof}
By Lemma \ref{lem:one-loop-correction-brackets}, to solve equation (\ref{eqn:one-loop-correction-equation}), we only need to find
$B\in C^*(\mathfrak{g}_X)$ such that the operator $\Delta+\{B,-\}$ commutes with the Chevalley-Eilenberg differential
$d_{CE}=\fbracket{L,-}$. The following technical
proposition transfers the problem to a geometric context:
\begin{prop}\label{prop:iso-CE-jet}\cite{Kevin-CS}
There is a natural isomorphism of cochain complexes of $\mathcal{A}_X$-modules
\begin{equation}\label{eqn:iso-CE-Jet}
\tilde{K}:\left(C^*(\mathfrak{g}_X,\text{Sym}^*\mathfrak{g}_X),d_{CE}\right)\overset{\sim}{\rightarrow}\left(\mathcal{A}_X\otimes_{
\mathcal{O}_X}\text{Jet}^{hol}_X(\wedge^*T_X),d_{D_X}\right),
\end{equation}
where $d_{CE}$ on the left hand side is the Chevalley-Eilenberg differential of the $\mathfrak{g}_X$-module
$\text{Sym}^*\mathfrak{g}_X$, and $d_{D_X}$ is the differential of the de Rham complex of the holomorphic jet bundle.
\end{prop}
The explicit formula of the above isomorphism is given in Appendix \ref{appendix:L_infty}.
There is a natural second order differential operator on the right hand side of equation (\ref{eqn:iso-CE-Jet}) which commutes
with the differential $d_{D_X}$: let $\Omega_X$ be a holomorphic volume form on $X$ which induces an isomorphism between
holomorphic polyvector fields and holomorphic differential forms via the contraction map:
\begin{equation*}
\begin{aligned}
\wedge^* T_X& \overset{\sim}{\rightarrow} \Omega_X^*\\
\alpha&\mapsto\alpha\lrcorner\ \Omega_X.
\end{aligned}
\end{equation*}
This isomorphism transfers the holomorphic de Rham differential $\partial$ on $\Omega_X^*$ to an operator on polyvector fields:
$$\partial_{\Omega_X}:\Gamma(\wedge^*T_X)\rightarrow
\Gamma(\wedge^{*-1}T_X),$$
which naturally induces a second order operator (denoted by the same symbol)
$$
\partial_{\Omega_X}: \mathcal{A}_X\otimes_{\mathcal{O}_X}\text{Jet}_X^{hol}(\wedge^{*}T_X)\to \mathcal{A}_X\otimes_{\mathcal{O}_X}\text{Jet}_X^{hol}(\wedge^{*-1}T_X),
$$
that commutes with
$d_{D_X}$.
To solve equation (\ref{eqn:one-loop-correction-equation}), we need to transfer the operator $\Delta$ to the de Rham complex of
the jet bundle in (\ref{eqn:iso-CE-Jet}). For simplicity, we still denote this operator by $\Delta$.
\begin{claim}
The two second order differential operators $\Delta$ and $\partial_{\Omega_X}$ on
$\mathcal{A}_X\otimes_{\mathcal{O}_X}\text{Jet}_X^{hol}(\wedge^*T_X)$ have the same symbol.
\end{claim}
\begin{proof}
We prove the claim by a local calculation, from which we also obtain an explicit expression for the functional $B\in C^*(\mathfrak{g}_X)$.
Let $\{z^1,\cdots, z^n\}$ be local holomorphic coordinates on $U\subset X$, where $n=\dim_{\mathbb{C}}X$, such that the holomorphic volume form can be expressed as
$\Omega_X|_U=dz^1\wedge\cdots\wedge dz^n$, and let $\delta z^1,\cdots,\delta z^n$ be the corresponding jet coordinates.
The isomorphism $\tilde{K}$ in equation (\ref{eqn:iso-CE-Jet}) gives rise to (recall Notation \ref{notation-basis}):
\begin{align}\label{local-identification}
\mathcal A_X(U)[[\delta z^i, \pi_2^*(\partial_{z^i})]] =\mathcal{A}_X\otimes_{\mathcal{O}_X}\text{Jet}^{hol}_X(\wedge^*T_X)(U)\cong
\mathcal{A}_X(U)[[\tilde{K}(\widetilde{dz^i}),\tilde{K}(\widetilde{\partial_{z^j}})]].
\end{align}
Let $T$ denote the restriction of ${\rho^*}^{-1}$ in (\ref{eqn:identification-CE-jet}) to
$\Omega_X^1$:
$$
T: \Omega_X^1 \to C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet^{hol}_X({\mathcal O}_X).
$$
Let $\partial_{dR}$ be the internal de Rham differential
$$
\partial_{dR}: \Jet^{hol}_X(\Omega^*_X)\to \Jet^{hol}_X(\Omega^{*+1}_X),
$$
and let $\partial_{dR}\circ T$ be the composition
\begin{equation}\label{eqn:splitting-T}
\partial_{dR}\circ T: \Omega_X^1\to C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet^{hol}_X(\Omega_X^1).
\end{equation}
Let
$$
\abracket{-,-}:\bracket{C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet^{hol}_X(T_X)}\otimes_{C^{\infty}(X)} \bracket{C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet^{hol}_X(\Omega_X^1)}\to
C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet^{hol}_X({\mathcal O}_X)
$$
be the natural pairing induced from that between $T_X$ and $\Omega_X^1$. By our convention, $T(dz^i)=\tilde T(\widetilde{dz^i})$, and
\begin{align}\label{compare-coordinate}
\abracket{\partial_{dR}\circ T(dz^i), \tilde K(\widetilde{\partial_{z^j}})}=\delta_{j}^i, \quad \abracket{\partial_{dR}(\delta z^i), \pi_2^*(\partial_{z^j})}=\delta^i_j.
\end{align}
By construction, there exists an invertible $P\in C^{\infty}(X)\otimes_{{\mathcal O}_X} \text{Jet}_X^{hol}(\mathcal{O}_X)(U)$ such that
\begin{equation}\label{eqn:quantum-correction-exp}
\pi_2^*(dz^1\wedge\cdots \wedge dz^n)=P\cdot\left((\partial_{dR}\circ T)(dz^1)\wedge\cdots\wedge (\partial_{dR}\circ T)(dz^n)\right)\in\text{Jet}_X^{hol}(\Omega_X^*)(U).
\end{equation}
Under the identification \eqref{local-identification},
$$
\Delta=\sum_i\dfrac{\partial}{\partial(\tilde{T}(\widetilde{dz^i}))}\dfrac{\partial}{\partial(\tilde{K}(\widetilde{\partial_{z^i}}))}, \quad
\partial_{\Omega_X}=\sum_i\dfrac{\partial}{\partial(\delta z^i)}\dfrac{\partial}{\partial(\pi_2^*(\partial_{z^i}))}.
$$
By \eqref{compare-coordinate}, \eqref{eqn:quantum-correction-exp},
it is not difficult to see that
\begin{equation}\label{eqn:comparison-operator}
\begin{aligned}
\partial_{\Omega_X}=\Delta+\sum_i \abracket{\partial_{dR}\circ T(dz^i), \log P}\dfrac{\partial}{\partial(\tilde{K}(\widetilde{\partial_{z^i}}))}=\Delta+\{\log P,-\}.
\end{aligned}
\end{equation}
This proves the claim.
\end{proof}
We conclude this section with the following results:
\begin{thm}\label{thm:one-loop-correction} Any pair $(X, \Omega_X)$ leads to a canonical quantization of the topological B-twisted $\sigma$-model, whose one-loop quantum correction,
which will be denoted by $I_{qc}$, is of the form (\ref{correction-form}).
\end{thm}
The theorem follows from the following explicit description of $B$ in \eqref{correction-form}. By taking the top wedge product of $\partial_{dR}\circ T$, we define
$$
\wedge^n\bracket{\partial_{dR}\circ T}: \Omega_X^n\to C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet^{hol}_X(\Omega_X^n).
$$
\begin{prop}\label{correction-formula} The quantum correction associated to the canonical quantization of the pair $(X, \Omega_X)$ has the combinatorial part
$$
B=(2-2g)\log \bracket{{\pi_2^*(\Omega_X) \over \wedge^n\bracket{\partial_{dR}\circ T}(\Omega_X)}}\in
C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet_X^{hol}({\mathcal O}_X)\subset C^*(\mathfrak{g}_X),
$$
where $\pi_2$ is the same as in Definition \ref{def:jet-bundle}.
\end{prop}
\begin{proof}
This follows from \eqref{eqn:one-loop-correction-equation} and the local calculation \eqref{eqn:comparison-operator}.
\end{proof}
\begin{rmk}The existence of the quantum correction is due to the fact that the curved $L_\infty$ structure $\mathfrak{g}_X$ requires the choice of a splitting \cite{Kevin-CS}, although
different choices lead to homotopy equivalent theories. The quantum correction $I_{qc}$ precisely compensates for this choice and links the effective Batalin-Vilkovisky geometry to the canonical Batalin-Vilkovisky
structure on polyvector fields associated to the Calabi-Yau structure.
\end{rmk}
With the one-loop quantum correction term $I_{qc}$, we can give an explicit formula for the constant term $R\in\mathcal A_X$ in the quantum
master equation (\ref{eqn:qunatum-master-equation}), which will be used later in the observable theory:
\begin{lem}\label{lem:S_1-contant term}
Let $(I_{qc})_1$ denote the linear term in the one-loop correction $I_{qc}$, and let $\tilde{l}_0$ denote the functional on ${\mathcal E}$
given by
$$
\tilde{l}_0(\alpha+\beta)=\langle l_0,\beta\rangle.
$$
Then the constant term $R$ is given by:
$$
R=\{(I_{qc})_1,\tilde{l}_0\}.
$$
\end{lem}
\begin{proof}
Let $I[L]=I^{(0)}[L]+\hbar I^{(1)}[L]$ be the scale
$L$ effective interaction. Then the quantum master equation (\ref{eqn:qunatum-master-equation}) can be expanded as
\begin{equation}\label{eqn:QME-constant-term}
Q_{L}I[L]+\frac{1}{2}\{I^{(0)}[L]+\hbar I^{(1)}[L],I^{(0)}[L]+\hbar I^{(1)}[L]\}_L+\hbar\Delta_L
I[L]+\hbar R+F_{l_1}=0.
\end{equation}
It is clear by type reasons that the constant term in (\ref{eqn:QME-constant-term}), other than $\hbar R$, can only come from the
bracket $\{I^{(0)}[L],\hbar I^{(1)}[L]\}_L$.
Thus we only need to find the linear terms in both $I^{(0)}[L]$ and $I^{(1)}[L]$. It is obvious that the
only linear term in $I^{(0)}[L]$ is $\tilde{l}_0$, since $\tilde{l}_0$ does not propagate, again for type reasons. Therefore the only linear term in
$I^{(1)}[L]$ that contributes to $\{I^{(1)}[L], \tilde{l}_0\}_L$ is $(I_{qc})_1$. It follows that
$$
R=\{(I_{qc})_1, \tilde{l}_0\}_L=\{(I_{qc})_1, \tilde{l}_0\}
$$
since $R$ does not depend on $L$.
\end{proof}
\section{Observable theory}
The objective of this section is to study the quantum observables of the B-twisted topological $\sigma$-model, following the general
theory developed by Costello and Gwilliam \cite{Kevin-Owen}. In section
\ref{section:local-observable}, we show that classical and quantum local observables are given by the cohomology of polyvector
fields. In section \ref{section:global-observable}, we study global topological quantum observables on Riemann
surfaces of any genus $g$. Using the local to global factorization map, we define the topological correlation functions of quantum observables. In section \ref{section:correlation-function}, we show that the correlation
functions on $\mathbb{P}^1$ are given by the trace map on the Calabi-Yau manifold, and the partition function on an elliptic curve reproduces the Euler
characteristic of the target manifold. This is in complete agreement with the physics prediction.
\subsection{Classical observables}\label{section:local-observable}
We first recall that classical observables are given by the derived critical locus of the classical action functional \cite{Kevin-Owen}.
\begin{defn} The classical observables of the B-twisted $\sigma$-model form the graded commutative factorization algebra on
$\Sigma_g$ whose value on an open subset $U\subset
\Sigma_g$ is the cochain complex
\begin{equation}\label{eqn:classical-observable}
\text{Obs}^{cl}(U):=\left(\mathcal{O}(\mathcal{E}_U), Q+\{I_{cl},-\}\right).
\end{equation}
Here $I_{cl}$ is the classical interaction functional and
$\mathcal{E}_U=\mathcal{A}_{\Sigma_g}(U)\otimes(\mathfrak{g}_X[1]\oplus\mathfrak{g}_X^\vee)$.
\end{defn}
By definition,
$$
\mathcal{O}(\mathcal{E}_U)=\widehat{\Sym}\bracket{\mathcal{E}_U^\vee}=\prod_{k\geq 0}\Sym^k \bracket{\mathcal{E}_U^\vee}.
$$
With the help of the symplectic pairing, we have the following identification:
$$
\mathcal{E}_U^\vee\cong \overline{\mathcal{A}}_c(U)[2]\otimes(\mathfrak{g}_X^\vee[-1]\oplus\mathfrak{g}_X),
$$
where $\overline{\mathcal A}_c(U)$ is the space of compactly supported distribution-valued differential forms on $U$. Thus we have
\begin{equation*}
\begin{aligned}
\text{Sym}^n(\mathcal{E}_U^\vee)&=\text{Sym}^n\left(\bracket{\mathcal{A}(U)\otimes\bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}}^\vee\right)\\
&\cong\text{Sym}^n\bracket{\overline{\mathcal{A}}_c(U)[2]\otimes \bracket{\mathfrak{g}_X^\vee[-1]\oplus \mathfrak{g}_X}}.\\
\end{aligned}
\end{equation*}
We would like to consider local observables in a small disk on $\Sigma_g$ and define their correlation functions. This can be viewed as the mirror
consideration of observables associated to marked points in Gromov-Witten theory. At the classical level, we have
\begin{prop}\label{prop-classical-ob}
Let $U\subset \Sigma_g$ be a disk. The cohomology of classical local observables of the B-twisted topological $\sigma$-model on
$U$ is given by the cohomology
of polyvector fields:
$$
H^k(\text{Obs}^{cl}(U))\cong \bigoplus_{p+q=k}H^p(X,\wedge^qT_X).
$$
\end{prop}
\begin{proof} Recall that $\text{Obs}^{cl}(U)$ is a dg-algebra over $\mathcal A_X$. Let $\mathcal{A}_X^k$ denote the smooth $k$-forms on $X$. We filter $\text{Obs}^{cl}(U)$ by defining
$$
F^k\text{Obs}^{cl}(U):= \mathcal{A}^k_X\text{Obs}^{cl}(U).
$$
Since the operator $l_1+\{I_{cl},-\}$ increases the
degree of differential forms on $X$ by one while $d_{\Sigma_g}$ preserves it, it is clear that the $E_1$-page of the spectral sequence is obtained by taking the cohomology
with respect to $d_{\Sigma_g}$. By Atiyah-Bott's lemma, the chain complex of currents on $U$ is quasi-isomorphic to the chain complex of compactly supported differential forms.
Thus we have:
$$
E_1=\bracket{\widehat{\Sym}\bracket{H_c^2(U)\otimes\bracket{\mathfrak{g}_X[1]^\vee\oplus \mathfrak{g}_X}}, l_1+\fbracket{I_{cl},-}}.
$$
The next lemma identifies the $E_1$-page of the spectral sequence with the de Rham complex of certain jet bundle on $X$. It is clear that the spectral sequence
degenerates at the $E_2$-page. Thus we have the quasi-isomorphism
\begin{equation*}
\begin{aligned}
\text{Obs}^{cl}(U)&\cong\left(\mathcal{A}_X\otimes_{\mathcal{O}_X}\text{Jet}^{hol}_X(\wedge^*T_X),d_{D_X}\right)\cong(\mathcal{A}_X^{0,*}\otimes_{
\mathcal{O}
_X}\wedge^*T_X,\bar{\partial}).
\end{aligned}
\end{equation*}
The proposition follows by taking the cohomology of the rightmost cochain complex.
\end{proof}
\begin{lem}\label{lem:local observable-iso-jet}
We have the following isomorphism of cochain complexes over the dga $\mathcal A_X$:
\begin{equation*}
\begin{aligned}
&\bracket{\widehat{\Sym}\bracket{H_c^2(U)\otimes\bracket{\mathfrak{g}_X[1]^\vee\oplus \mathfrak{g}_X}}, l_1+\fbracket{I_{cl},-}}
\cong&\left(\mathcal{A}_{X}\otimes_{\mathcal{O}_X}\text{Jet}^{hol}_X(\wedge^*T_X),d_{D_X}\right),
\end{aligned}
\end{equation*} where $d_{D_X}$ denotes the differential of the de Rham complex of
$\text{Jet}^{hol}_X(\wedge^*T_X)$.
\end{lem}
\begin{proof}
Since $U$ is a disk in $\Sigma_g$, we have the canonical isomorphism $H_c^2(U)\cong\mathbb{C}$ induced by the integration of $2$-forms, and the following isomorphism is clear:
$$
\bracket{\widehat{\Sym}\bracket{H_c^2(U)\otimes\bracket{\mathfrak{g}_X[1]^\vee\oplus \mathfrak{g}_X}}, l_1+\fbracket{I_{cl},-}}\cong
(C^*(\mathfrak{g}_X,\text{Sym}^*\mathfrak{g}_X),d_{CE}),
$$
thus the Lemma follows from Proposition \ref{prop:iso-CE-jet}.
\end{proof}
\subsection{Quantum observables} \label{section:global-observable}
Quantum observables are the quantization of classical observables. Let $I[L]$ be a quantization of the classical interaction
$I_{cl}$. The operator $Q_L+\{I[L],-\}_L+\hbar
\Delta_L$ squares to zero (Lemma \ref{quantum-BRST}) and defines a quantization of the classical operator $Q+\{I_{cl},-\}$.
\begin{defn} The quantum observables on $\Sigma_g$ at scale $L$ are defined as the cochain complex
$$\text{Obs}^q(\Sigma_g)[L]:=\bracket{\mathcal{O}(\mathcal{E})[[\hbar]],Q_L+\{I[L],-\}_L+\hbar\Delta_L}.$$
\end{defn}
The definition is independent of the scale $L$ since quantum observables at different scales are homotopy equivalent via the renormalization group flow
(see \cite[Chapter 5, Section 9]{Kevin-book}). Therefore
we will also use $\text{Obs}^q(\Sigma_g)$ to denote quantum observables when the scale is not specified.
The quantum observables form a factorization algebra on $\Sigma_g$ \cite{Kevin-Owen}. To define the quantum observables on an arbitrary open subset $U\subset \Sigma_g$, we need
the concept of parametrices.
\begin{defn} A parametrix $\Phi$ is a distributional section
$$
\Phi \in \Sym^2\bracket{\overline{{\mathcal E}}}
$$
with the following properties:
\begin{enumerate}
\item $\Phi$ is of cohomological degree $1$ and $(Q\otimes 1+1\otimes Q)\Phi=0$,
\item ${1\over 2}\bracket{H\otimes 1+1\otimes H}\Phi- K_{0} \in \Sym^2\bracket{{\mathcal E}}$ is smooth, where $H=[Q,Q^{GF}]$ is the Laplacian and $K_{0}=\lim\limits_{L\to 0}K_L$ is the kernel of the
identity operator.
\end{enumerate}
\end{defn}
\begin{rmk} We have dropped the ``proper'' condition as in \cite{Kevin-Owen}. This is automatic here since we are working with the compact Riemann surface
$\Sigma_g$. We have also symmetrized the $(H\otimes 1)\Phi$ used in \cite{Kevin-Owen}.
\end{rmk}
\begin{defn} We define the propagator $P(\Phi)$ and BV kernel $K_{\Phi}$ associated to a parametrix $\Phi$ by
$$
P(\Phi):={1\over 2}\bracket{Q^{GF}\otimes 1+1\otimes Q^{GF}}\Phi \in \Sym^2\bracket{\overline{{\mathcal E}}}, \quad K_{\Phi}:=K_0-{1\over 2}\bracket{H\otimes 1+1\otimes H}\Phi.
$$
The effective BV operator $\Delta_\Phi:={\partial \over \partial K_\Phi}$ induces a BV bracket $\{-,-\}_\Phi$ on ${\mathcal O}\bracket{{\mathcal E}}$ in a similar way as the scale $L$ BV bracket $\{-,-\}_L$.
\end{defn}
The following identity describes the relation between the propagator $P(\Phi)$ and BV kernel $K_{\Phi}$:
$$
(Q\otimes 1+1\otimes Q)P(\Phi)=K_0-K_{\Phi},
$$
i.e., $P(\Phi)$ gives a homotopy between the singular kernel $K_0$ and the regularized kernel $K_{\Phi}$.
\begin{eg} $\Phi=\int_0^L \mathbb K_t dt$ is the parametrix we have used to define the quantization. Then
$$
P(\Phi)={1\over 2}\int_0^L \bracket{Q^{GF}\otimes 1+1\otimes Q^{GF}}\mathbb K_t dt =\int_0^L \bracket{Q^{GF}\otimes 1}\mathbb K_t dt = \mathbb P_0^L, \quad K_\Phi=\mathbb K_L, \quad \Delta_\Phi=\Delta_L.
$$
\end{eg}
The basic reason we use arbitrary parametrix here is that the usual renormalization group flow $W\bracket{\mathbb P_\epsilon^L,-}$ of observables using length scales does not
preserve the property of being supported in an open subset $U$. Instead, there exist parametrices whose supports are arbitrarily close to the diagonal
$\Delta\subset\Sigma_g\times\Sigma_g$ that we can use to achieve this.
\begin{defn} Let $I[L]$ be a given quantization of $I_{cl}$, and let $\Phi$ be a parametrix. We define the effective quantization $I[\Phi]$ at the parametrix $\Phi$ by
$$
I[\Phi]:=W\bracket{P(\Phi)-\mathbb P_0^L, I[L]}.
$$
\end{defn}
Note that $P(\Phi)-\mathbb P_0^L\in \Sym^2({\mathcal E})$ is a smooth kernel since
$$
(H\otimes 1+1\otimes H)(P(\Phi)-\mathbb P_0^L)=(Q^{GF}\otimes 1+1\otimes Q^{GF})({1\over 2}(H\otimes 1+1\otimes
H)\Phi-\mathbb{K}_0+\mathbb{K}_L)
$$
is smooth and $H$ is an elliptic operator.
$I[\Phi]$ satisfies a version of quantum master equation described by the parametrix $\Phi$ as in \cite{Kevin-Owen} (with a slight modification to include $F_{l_1}$), and defines
the corresponding cochain complex of quantum observables. We leave the details to the reader since we will not use its explicit form in later discussions. Furthermore, different
parametrices $\Phi, \Phi^\prime$ lead to homotopy equivalent cochain complexes which are linked by the renormalization group flow $W(P(\Phi)-P(\Phi^\prime),-)$.
\begin{defn}[\cite{Kevin-Owen}] Given a quantum observable $O[L]$ at scale $L$, we define its value $O[\Phi]$ at the parametrix $\Phi$ by requiring that
$$
I[\Phi]+\delta O[\Phi]:=W\bracket{P(\Phi)-\mathbb P_0^L, I[L]+\delta O[L]},
$$
where $\delta$ is a square-zero parameter. The map $O[L]\mapsto O[\Phi]$ defines a homotopy between the corresponding cochain complexes of observables.
\end{defn}
\iffalse
It is straightforward to prove the following lemma:
\begin{lem}
$I[\Phi]$ satisfies the following $\Phi$-quantum master equation
$$
\bracket{Q_{\Phi}+\hbar \Delta_{\Phi}+{F_{l_1}\over \hbar}}e^{I[\Phi]/\hbar}=R e^{I[\Phi]/\hbar}.
$$
Here $Q_{\Phi}:=Q+l_1^2 \widehat{P(\Phi)}$, where $\widehat{P(\Phi)}$ denotes the linear operator on ${\mathcal E}$ whose kernel is $P(\Phi)$. Moreover, the evaluation map
$$
O[L] \to O[\Phi]
$$
gives an isomorphism of complexes
$$
\bracket{\mathcal{O}(\mathcal{E})[[\hbar]],Q_L+\{I[L],-\}_L+\hbar\Delta_L}\to
\bracket{\mathcal{O}(\mathcal{E})[[\hbar]],Q_\Phi+\{I[\Phi],-\}_\Phi+\hbar\Delta_\Phi}.
$$
\end{lem}
\fi
\begin{defn} Given $O\in {\mathcal O}({\mathcal E})=\prod\limits_{k,i\geq 0}\Sym^i\bracket{{\mathcal E}^\vee} \hbar^k$, we will let $O_{i}^{(k)}$ denote the
corresponding component, i.e.
$$
O=\sum_{k,i\geq 0}O_{i}^{(k)}\hbar^k.
$$
\end{defn}
\begin{defn}[\cite{Kevin-Owen}] We say that a quantum observable $O[L]$ has support in $U$, if for any $k,i\geq 0$, there exists a parametrix $\Phi$ such that
$$
\text{Supp}\bracket{O[\Phi]_{i}^{(k)}}\subset U.
$$
\end{defn}
As shown in \cite{Kevin-Owen}, the subspace of quantum observables supported in $U$ forms a sub-cochain complex of $\text{Obs}^q(\Sigma_g)$, which will be denoted by $\text{Obs}^q(U)$.
\subsubsection{Local quantum observable} Let $U$ be a disk on $\Sigma_g$. As shown in \cite{Kevin-Owen} with great generality, the cohomology of the local quantum observables
$$
H^*\bracket{\text{Obs}^q(U)}
$$
defines a deformation of $H^*\bracket{\text{Obs}^{cl}(U)}$:
\begin{align}\label{local-quantum-classical}
H^*\bracket{\text{Obs}^q(U)}\otimes_{\mathbb{C}[[\hbar]]}\mathbb{C} \cong H^*\bracket{\text{Obs}^{cl}(U)}.
\end{align}
We will construct a splitting map in this subsection, reflecting the vanishing of quantum corrections for observables in our B-model.
Let $\eta\in H^2_c(U)$ be a fixed generator with $\int_U\eta=1$. By the proof of Proposition \ref{prop-classical-ob}, it induces a quasi-isomorphic embedding
$$
\left(\mathcal{A}_X\otimes_{\mathcal{O}_X}\text{Jet}^{hol}_X(\wedge^*T_X),d_{D_X}\right)\hookrightarrow \text{Obs}^{cl}(U),
$$
and different choices of $\eta$ are homotopy equivalent. Let $\mu\in \mathcal{A}_X\otimes_{\mathcal{O}_X}\text{Jet}^{hol}_X(\wedge^*T_X)$, and we will denote by $O_\mu$
the corresponding local classical observable. Let $\{O_\mu[L]|L>0\}$ denote the RG flow of the classical observable $O_\mu$. More
explicitly, we define $O_\mu[L]$ by requiring that
$$
I[L]+\delta O_\mu[L]=\lim_{\epsilon\to 0}W(\mathbb{P}_\epsilon^L,I_{cl}+\hbar I_{qc}+\delta O_\mu),
$$
where $\delta^2=0$, and $I_{qc}$ denotes the one-loop quantum correction in
equation (\ref{eqn:local-version-eqn-one-loop-correction}). The existence of the limit follows from Lemma/Definition
\ref{lem:naive quantization} and the observation that the distribution $O_\mu$ is in fact smooth (tensor products of $\eta$'s). By
construction, $O_\mu[L]$ is a local quantum observable supported in $U$. We denote the above map by
$$
\Psi: \mathcal{A}_X\otimes_{\mathcal{O}_X}\text{Jet}^{hol}_X(\wedge^*T_X)\to \text{Obs}^{q}(U), \quad \mu\mapsto O_\mu[L].
$$
\begin{prop}\label{prop:local-observable} $\Psi$ is a cochain map.
\end{prop}
\begin{proof} Let $U_L=Q_L+\hbar\Delta_L+\{I[L],-\}_L$ be the differential on quantum observables. By construction,
$$
O_\mu[L]e^{I[L]/\hbar}=\lim_{\epsilon\to 0}e^{\hbar {\partial \over \partial \mathbb{P}_\epsilon^L}} \left(O_\mu e^{I_{cl}/\hbar+I_{qc}}\right).
$$
By Lemma \ref{quantum-BRST},
\begin{align*}
&(U_L(O_\mu[L])+O_\mu[L]R)e^{I[L]/\hbar}
=(Q_L+\hbar\Delta_L+F_{l_1}/\hbar)\bracket{O_\mu[L]e^{I[L]/\hbar}}\\
=&\lim_{\epsilon\to 0}e^{\hbar {\partial \over \partial \mathbb{P}_\epsilon^L}} (Q_\epsilon+\hbar\Delta_\epsilon+F_{l_1}/\hbar)\bracket{O_\mu e^{I_{cl}/\hbar+I_{qc}}}\\
=&\lim_{\epsilon\to 0}e^{\hbar {\partial \over \partial \mathbb{P}_\epsilon^L}}\bracket{\bracket{Q_\epsilon O_\mu+ \{I_{cl},O_\mu\}_\epsilon}
e^{I_{cl}/\hbar+I_{qc}}+O_\mu (Q_\epsilon+\hbar\Delta_\epsilon+F_{l_1}/\hbar)e^{I_{cl}/\hbar+I_{qc}}}
\end{align*}
where we have used the fact that both $O_\mu$ and $I_{qc}$ can only have non-trivial inputs for $0$-forms on $\Sigma_g$, hence
$$
\hbar\Delta_\epsilon O_\mu=\{O_\mu, I_{qc}\}_\epsilon=0
$$
by the type reason. Since the distribution $O_\mu$ is in fact smooth, we are safe to take $\epsilon\to 0$ by a similar argument as Lemma \ref{lem:naive quantization}, Lemma \ref{lem:two-vertex-wheels} and Lemma \ref{lem:more-vertex-wheels}. The first term above gives
$$
\lim_{\epsilon\to 0}e^{\hbar {\partial \over \partial \mathbb{P}_\epsilon^L}}\bracket{\bracket{Q_\epsilon O_\mu+ \{I_{cl},O_\mu\}_\epsilon} e^{I_{cl}/\hbar+I_{qc}}}=e^{\hbar {\partial \over \partial \mathbb{P}_0^L}} (O_{d_{D_X}\mu} e^{I_{cl}/\hbar+I_{qc}}).
$$
The quantum master equation implies that the second term is
$$
\lim_{\epsilon\to 0}e^{\hbar {\partial \over \partial \mathbb{P}_\epsilon^L}}\bracket{O_\mu (Q_\epsilon+\hbar\Delta_\epsilon+F_{l_1}/\hbar)e^{I_{cl}/\hbar+I_{qc}}}=e^{\hbar {\partial \over \partial \mathbb{P}_0^L}} (O_\mu R e^{I_{cl}/\hbar+I_{qc}})=O_\mu[L]Re^{I[L]/\hbar}.
$$
It follows that
\begin{align*}
U_L(O_\mu[L])e^{I[L]/\hbar}=&e^{\hbar {\partial \over \partial \mathbb{P}_0^L}} (O_{d_{D_X}\mu}) e^{I_{cl}/\hbar+I_{qc}}\\
=&O_{d_{D_X}\mu}[L]e^{I[L]/\hbar},
\end{align*}
i.e., $
U_L(\Psi(\mu))=\Psi(d_{D_X}\mu)
$
as desired.
\end{proof}
\begin{cor}\label{cohomology-quantum-local}
The cohomology of local quantum observables on a disk $U$ is given by
$$
H^*\bracket{\text{Obs}^q(U)}\cong H^*\bracket{X, \wedge^*T_X}[[\hbar]].
$$
\end{cor}
\begin{proof} In fact, the map $\Psi$ defines a splitting of \eqref{local-quantum-classical}.
\end{proof}
This says that the local observables do not receive quantum corrections via perturbative quantization, which is a very special property of the B-model.
\subsubsection{Global quantum observable}
Now we consider global observables on the Riemann surface $\Sigma_g$. The cochain complex of
global quantum observables on $\Sigma_g$ at scale $L$ is defined as
\begin{equation}\label{eqn:global-quantum-observable}
\text{Obs}^q(\Sigma_g)[L]:=(\mathcal{O}(\mathcal{E})[[\hbar]],Q_L+\{I[L],-\}_L+\hbar\Delta_L).
\end{equation}
Since the complexes of quantum observables are homotopy equivalent for different length scales, we only need to
compute the cohomology of global observables at scale $L=\infty$. By considering the $d_{\Sigma_g}$-cohomology first, the
complex (\ref{eqn:global-quantum-observable}) at $L=\infty$ is quasi-isomorphic to the following complex:
$$
\bracket{{\mathcal O}\bracket{\mathbb{H}^*(\Sigma_g)\otimes \bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}}[[\hbar]],
l_1+\{I^{(0)}[\infty]|_{\H},-\}_{\infty}+\hbar(\{I^{(1)}[\infty]|_{\H},-\}_\infty+\Delta_\infty)},
$$
where $\H^*(\Sigma_g)$ denotes the space of harmonic forms on $\Sigma_g$. $I^{(0)}[\infty]|_{\H}$ and $I^{(1)}[\infty]|_{\H}$ are the
restrictions of the tree-level and one-loop effective interactions to the space of harmonic fields:
$$
\H:=\H^*(\Sigma_g)\otimes \bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}.
$$
\begin{lem}\label{effective-infinity} Restricted to the harmonic fields at scale $L=\infty$, we have
$$
I^{(0)}[\infty]|_{\H}=I_{cl}|_{\H}, \quad I^{(1)}[\infty]|_{\H}=I_{qc}|_{\H}+I^{(1)}_{naive}[\infty]|_{\H}.
$$
\end{lem}
\begin{proof}
We only prove the first identity, and the second one can be proved similarly. Let $\Gamma$ be a tree diagram with at least two vertices. We show that the Feynman weight
$W_{\Gamma}(\mathbb{P}_0^\infty, I_{cl})$ associated to $\Gamma$ vanishes when restricted to harmonic fields,
$$
W_{\Gamma}(\mathbb{P}_0^\infty, I_{cl})|_{\H}=0.
$$
We choose an orientation of the internal edges of $\Gamma$ such that every vertex is connected by a unique oriented path to a vertex
$v_\bullet$ in $\Gamma$, where $v_\bullet$ has only one edge which is oriented toward $v_\bullet$. The vertex $v_\bullet$ will be
called the root. A vertex which has only one edge oriented outward will be called a leaf. By assumption, $\Gamma$ has at least
one leaf which is distinct from the root.
If a leaf has only $\H^0(\Sigma_g)$ and $\H^2(\Sigma_g)$ inputs on its tails, then the propagator $\mathbb{P}_0^\infty$ attached to its edge
will annihilate $W_{\Gamma}(\mathbb{P}_0^\infty, I_{cl})|_{\H}$ since wedge products of harmonic $0$-forms and $2$-forms are still
harmonic, and
$$
{d^*}=0 \quad \mbox{on}\hspace{3mm} \H^*(\Sigma_g).
$$
Similarly, if a leaf has only one input of type $\H^1(\Sigma_g)$, then $W_{\Gamma}(\mathbb{P}_0^\infty, I_{cl})|_{\H}=0$. So we can assume that all leaves have at least two inputs of type
$\H^1(\Sigma_g)$ on their tails (possibly other inputs of type $\H^0(\Sigma_g)$). Since $\mathbb{P}_0^\infty$ is a $1$-form on $\Sigma_g\times\Sigma_g$, it is easy to see by tracing the
path that
the incoming edge of the root $v_\bullet$ has to contribute a $1$-form to the copy of $\Sigma_g$ corresponding to $v_\bullet$
which is $d^*$-exact, and by the type reason there is exactly one extra input of type $\H^1(\Sigma_g)$ on one tail of $v_\bullet$.
Since
$$
\int_{\Sigma_g} d^*(a) \wedge b=0, \quad \forall a\in \mathcal A(\Sigma_g), b\in \H^*(\Sigma_g),
$$
this again implies that $W_{\Gamma}(\mathbb{P}_0^\infty, I_{cl})|_{\H}=0$.
\end{proof}
For later discussions on correlation functions of observables, we also need some description of the one-loop naive interaction as in the following Lemma:
\begin{lem}\label{lem:one-loop-naive-interaction-harmonic-fields}
For Riemann surfaces $\Sigma_g$ of genus $g=0$ and $g=1$, the infinity scale one-loop naive interaction vanishes when restricted to $\H$:
$$
I^{(1)}_{naive}[\infty]|_\H=0.
$$
\end{lem}
\begin{proof}
For both genus $0$ and genus $1$ Riemann surfaces with constant curvature metric, the product of harmonic forms remains harmonic. Thus by the same
argument as in the proof of Lemma \ref{effective-infinity}, if a one-loop graph $\gamma$ is a wheel with nontrivial trees attached to it, then
$$
W_\gamma(\mathbb{P}_0^\infty,I_{cl})|_\H=0.
$$
Hence we only need to deal with wheels. For the genus $0$ Riemann surface $\mathbb{P}^1$, since there are no harmonic $1$-forms, there must be at least one vertex on the wheel,
attached to which all inputs are harmonic $0$-forms by the type reason. The corresponding Feynman integral vanishes since the
composite of two propagators $\mathbb{P}_0^\infty$ on that vertex is zero by $(d^*)^2=0$.
For an elliptic curve $\Sigma_1=\mathbb{C}/(\mathbb{Z}+\mathbb{Z}\tau)$, if the number of vertices on a wheel is even, then the vanishing of the associated Feynman weight can be
proved by the same argument as in Lemma \ref{lem:two-vertex-wheels}. For a wheel with an odd number of vertices, a $\mathbb{Z}_2$-symmetry of the analytic propagator
$P_0^\infty$ results in the vanishing of the Feynman weights: Let $dw$ be a harmonic $1$-form on $\Sigma_1$, and we assume without loss of generality that all the vertices of the
wheel are trivalent, and all the inputs are
of the form $dw\otimes g_i$, where $g_i\in\mathfrak{g}_X$.
$$
\figbox{0.2}{one-loop-odd-vertices}
$$
Similar to \cite[Lemma 17.4.4]{Kevin-CS}, the analytic part of the corresponding Feynman weight $ W(P_0^\infty,I_{cl})(dw)$ will be a linear combination of
$$
\sum_{(a,b)\in\mathbb{Z}^2\setminus\{0\}}\dfrac{1}{(a\tau+b)^{k}(a\bar\tau+b)^{2n+1-k}},
$$ which clearly vanishes.
\end{proof}
Now let us compute the cohomology of the global quantum observables.
\begin{rmk}
In the following discussion, the harmonic forms $\H^k(\Sigma_g)$ sit at degree $k$, and $\Omega_X^1[1]\cong T_X^\vee[1]$ sits at degree $-1$.
\end{rmk}
\begin{lem}\label{splitting-isom}
There is a natural isomorphism of $\mathcal A_X$-modules
\begin{equation}\label{eqn:structure-sheaf-ringed-space-T-g}
\begin{aligned}
&{\mathcal O}\bracket{\mathbb{H}^*(\Sigma_g)\otimes \bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}}
\cong\\
&\mathcal A_X\otimes_{{\mathcal O}_X}
\Jet_X^{hol}\bracket{\widehat{\Sym}\bracket{T_X\otimes \H^1(\Sigma_g)}^\vee \otimes \widehat{\Sym}\bracket{T_X\otimes \H^2(\Sigma_g)}^\vee
\otimes \widehat{\Sym}(\Omega^1_X[1]\otimes\H^*(\Sigma_g))^\vee}
\end{aligned}
\end{equation}
\end{lem}
\begin{proof}
We have the following isomorphisms:
\begin{align*}
\mathcal{O}(\H^*(\Sigma_g)\otimes(\mathfrak{g}_X[1]\oplus\mathfrak{g}_X^\vee))&\cong\mathcal{O}(\H^0(\Sigma_g)\otimes\mathfrak{g}_X[1])\otimes_{\mathcal A_X}\mathcal{O}
\left(\bigoplus_{k=1}^{2}\H^k(\Sigma_g)\otimes\mathfrak{g}_X[1]\oplus\bigoplus_{k=0}^2\H^k(\Sigma_g)\otimes\mathfrak{g}_X^\vee\right)\\
&\cong C^*(\mathfrak{g}_X)\otimes_{\mathcal A_X}\widehat{\Sym}
\left(\bigoplus_{k=1}^{2}\H^k(\Sigma_g)\otimes\mathfrak{g}_X[1]\oplus\bigoplus_{k=0}^2\H^k(\Sigma_g)\otimes\mathfrak{g}_X^\vee\right)^\vee.\\
\end{align*}
It is clear that the tensor products of the isomorphisms $\tilde{T}$ and $\tilde{K}$ in Propositions
\ref{proposition:T-dual-to-jet} and
\ref{proposition:T_X-to-jet} respectively give the desired isomorphism.
\end{proof}
\begin{defn}
Let $\pi_2^*\bracket{\Omega_X^{2g-2}}$ denote the canonical flat section of the jet bundle
$$
\Jet_X^{hol}\bracket{{\Sym}^{2g\cdot \dim_{\mathbb{C}}X}\bracket{T_X\otimes \H^1(\Sigma_g)}^\vee \otimes{\Sym}^{\dim_{\mathbb{C}}X}(\Omega^1_X[1]\otimes \H^0(\Sigma_g))^\vee
\otimes{\Sym}^{\dim_{\mathbb{C}}X}(\Omega^1_X[1]\otimes \H^2(\Sigma_g))^\vee}
$$
induced by the holomorphic volume form $\Omega_X$. Here we use the notation $\pi_2^*$ to be consistent with Definition \ref{def:jet-bundle}.
\end{defn}
We can view $\pi_2^*\bracket{\Omega_X^{2g-2}}$ as a quantum observable via the identification in Lemma \ref{splitting-isom}. The general philosophy in \cite{Kevin-CS} says that
the quantization gives rise to a projective volume form, and the next proposition says that the volume form is exactly given by $\pi_2^*\bracket{\Omega_X^{2g-2}}$.
\begin{prop}\label{trivial-system}
The following embedding
$$
\iota: \mathcal A_X((\hbar)) \hookrightarrow {\mathcal O}\bracket{\mathbb{H}^*(\Sigma_g)\otimes \bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}}((\hbar))
$$
defined by
$$
A\mapsto \iota(A):= \hbar^{-2\dim_{\mathbb{C}}X}A\otimes_{\mathcal{O}_X} {\pi_2^*\bracket{\Omega_X^{2g-2}}}, \quad \forall A\in \mathcal A_X
$$
is a quasi-isomorphism which is equivariant with respect to the $\mathbb{C}^\times$-symmetry defined in Section \ref{section:classical-symmetry}.
\end{prop}
\begin{proof} We first show that $\iota$ respects the differential. This is equivalent to showing that
$\pi_2^*\bracket{\Omega_X^{2g-2}}$ is closed under
$l_1+\fbracket{I^{(0)}[\infty]|_{\H},-}_{\infty}+\hbar(\{I^{(1)}[\infty]|_{\H},-\}_\infty+\Delta_\infty)$. By Lemma
\ref{effective-infinity},
$$
\bracket{l_1+\fbracket{I^{(0)}[\infty]|_{\H},-}_{\infty}} \bracket{\pi_2^*\bracket{\Omega_X^{2g-2}}}=d_{D_X}
\bracket{\pi_2^*\bracket{\Omega_X^{2g-2}}}=0,
$$
since ${\pi_2^*\bracket{\Omega_X^{2g-2}}}$ is flat. Here $d_{D_X}$ is the de Rham differential of the $D_X$-module.
\begin{claim}
$
\fbracket{I^{(1)}_{naive}[\infty], \bracket{\pi_2^*\bracket{\Omega_X^{2g-2}}}}_\infty=0.
$
\end{claim}
\begin{proof}
It is straightforward to check (similar to the proof of Lemma \ref{effective-infinity}) that for any one-loop graph $\gamma$, either the Feynman weight $W_\gamma(\mathbb{P}_0^\infty, I_{cl})$
vanishes, or the operator $\{W_\gamma(\mathbb{P}_0^\infty, I_{cl}),-\}_\infty$ applied to $\pi_2^*\bracket{\Omega_X^{2g-2}}$ will generate
new terms in $(\H^1(\Sigma_g)\otimes\mathfrak{g}_X[1])^\vee$. These terms force the bracket $\fbracket{W_\gamma(\mathbb{P}_0^\infty,
I_{cl}),\pi_2^*\bracket{\Omega_X^{2g-2}}}_\infty$ to vanish since $\pi_2^*\bracket{\Omega_X^{2g-2}}$ already contains the
highest wedge product of $(\H^1(\Sigma_g)\otimes\mathfrak{g}_X[1])^\vee$.
\end{proof}
Hence we only need to consider the operator $\Delta_\infty+\{I_{qc},-\}_\infty$. Let $n=\dim_\mathbb{C} X$. The map
$$
\wedge^n\bracket{\partial_{dR}\circ T}: \Omega_X^n\to C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet^{hol}_X(\Omega_X^n)
$$
in Proposition \ref{correction-formula} induces a natural embedding
\begin{align*}
T':&\bracket{\Omega_X^n}^{\otimes(2g-2)}\hookrightarrow\\
&\hspace{3mm}\Jet_X^{hol}\bracket{{\Sym}^{2g\cdot n}\bracket{T_X\otimes \H^1(\Sigma_g)}^\vee \otimes{\Sym}^{n}\bracket{\Omega^1_X[1]\otimes
\H^0(\Sigma_g)}^\vee\otimes{\Sym}^{n}\bracket{\Omega^1_X[1]\otimes \H^2(\Sigma_g)}^\vee}.
\end{align*}
Let $T'\bracket{\Omega_X^{2g-2}}$ denote the image of the section $(\Omega_X)^{\otimes(2g-2)}$, where $\Omega_X$ denotes the volume form and its negative power denotes its dual.
By construction,
$$
\Delta_\infty T'\bracket{\Omega_X^{2g-2}}=0
$$
and by Proposition \ref{correction-formula}, we have
$$
\pi_2^*\bracket{\Omega_X^{2g-2}}=e^{-I_{qc}}T'\bracket{\Omega_X^{2g-2}}.
$$
It follows that
$$
\Delta_\infty \pi_2^*\bracket{\Omega_X^{2g-2}}=\Delta_\infty
\bracket{e^{-I_{qc}}T'\bracket{\Omega_X^{2g-2}}}=-e^{-I_{qc}}\fbracket{I_{qc},T'\bracket{\Omega_X^{2g-2}}}_\infty=-\fbracket{I_{qc}, \pi_2^*\bracket{\Omega_X^{2g-2}}}_\infty,
$$
as desired.
Now we show that $\iota$ is a quasi-isomorphism. We consider the filtration on $ {\mathcal O}\bracket{\mathbb{H}^*(\Sigma_g)\otimes
\bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}}((\hbar))$ by the degree of the differential forms on $X$:
$$
F^k {\mathcal O}\bracket{\mathbb{H}^*(\Sigma_g)\otimes \bracket{\mathfrak{g}_X[1]\oplus \mathfrak{g}_X^\vee}}((\hbar)):=\mathcal A_X^k\cdot {\mathcal O}\bracket{\mathbb{H}^*(\Sigma_g)\otimes
\bracket{\mathfrak{g}_X[1]\oplus
\mathfrak{g}_X^\vee}}((\hbar)).
$$
The differential of the associated graded complex is given by
$$
d_1= \hbar \bracket{\Delta_\infty+\fbracket{I_{qc},-}_\infty}=\hbar e^{-I_{qc}}\Delta_\infty e^{I_{qc}}.
$$
By the Poincare lemma below, the $d_1$-cohomology is precisely given by $\Im(\iota)$. It follows that $\iota$ is a quasi-isomorphism.
\end{proof}
Recall the following Poincare lemma:
\begin{lem}\label{lem:Poincare-lemma}Let $\{x^i\}$ be even elements and let $\{\xi_i\}$ be odd elements. Then we have
$$
H^*\left(\mathbb{C}[[x^i,\xi_i]],\Delta=\dfrac{\partial}{\partial
x^i}\dfrac{\partial}{\partial\xi_i}\right)=\mathbb{C}\xi_1\wedge\cdots\wedge\xi_n.
$$
\end{lem}
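For instance (the simplest case, spelled out here for illustration), take a single even variable $x$ and a single odd variable $\xi$. A general element can be written as $f(x)+g(x)\xi$, and up to sign
$$
\Delta\bracket{f(x)+g(x)\xi}=\partial_x g(x).
$$
The $\Delta$-closed elements are those with $g$ constant, while the image of $\Delta$ consists of all formal power series in $x$, since every such series admits a formal antiderivative. Hence the cohomology is spanned by the class of $\xi$, as stated in the lemma.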
\begin{cor} \label{quantum-obs-cohomology}
The top cohomology of $\text{Obs}^q(\Sigma_g)[\hbar^{-1}]$ is at degree $(2-2g)\dim_\mathbb{C} X$, given by
$$
H^{(2-2g)\dim_\mathbb{C} X}\bracket{\text{Obs}^q(\Sigma_g)[\hbar^{-1}]}\cong \mathbb{C}((\hbar)).
$$
\end{cor}
\subsection{Correlation function}\label{section:correlation-function}
Proposition \ref{trivial-system} implies that a quantization $I[L]$ defines an integrable projective volume form in the sense of
\cite{Kevin-CS}, which allows us to define correlation functions for quantum observables.
\begin{defn}
Let $O\in \text{Obs}^q(\Sigma_g)$ be a closed element, representing a cohomology class $[O]\in H^k\bracket{\text{Obs}^q(\Sigma_g)}$. We define its correlation
function (via Corollary \ref{quantum-obs-cohomology}) by
$$
\abracket{O}_{\Sigma_g}:=\begin{cases} 0 & \text{if}\ k\neq (2-2g)\dim_\mathbb{C} X\\ [O]\in \mathbb{C}((\hbar)) & \text{if}\ k= (2-2g)\dim_{\mathbb{C}} X \end{cases}
$$
\end{defn}
Recall that the local quantum observables form a factorization algebra on $\Sigma_g$. This structure enables us to define
correlation functions for local observables, for which we first introduce some notation.
\begin{defn}\label{definition:factorization-product}
Let $U_1, \cdots, U_n$ be disjoint open subsets of $\Sigma_g$. The factorization product
$$
\text{Obs}^q(U_1)\times \cdots \times \text{Obs}^q(U_n)\to \text{Obs}^q(\Sigma_g)
$$
of local observables $O_i\in \text{Obs}^q(U_i)$ will be denoted by $O_1\star\cdots\star O_n$. This product descends to cohomologies.
\end{defn}
\begin{defn} Let $U_1, \cdots, U_n$ be disjoint open subsets of $\Sigma_g$, and let $O_{i} \in \text{Obs}^q(U_i)$ be closed
local quantum observables supported on $U_i$. We define their correlation function by
$$
\abracket{O_1,\cdots, O_n}_{\Sigma_g}:=\abracket{O_1\star\cdots \star O_n }_{\Sigma_g}\in \mathbb{C}((\hbar)).
$$
\end{defn}
We would like to compute the correlation functions for the B-twisted topological $\sigma$-model. By the degree reason, the only
nontrivial cases are $g=0,1$. We show that they coincide with the prediction from physics.
For later computation, we give an equivalent definition of correlation functions via the BV integration point of view. Let
$\widehat{T^*\bracket{{T^{\Sigma_g}X}}}[-1]$ denote the ringed space with underlying topological space $X$ and structure sheaf as that of
(\ref{eqn:structure-sheaf-ringed-space-T-g}):
$$
\widehat{T^*\bracket{{T^{\Sigma_g}X}}}[-1]=\left(X,{\mathcal O}\bracket{\mathbb{H}^*(\Sigma_g)\otimes \bracket{\mathfrak{g}_X[1]\oplus
\mathfrak{g}_X^\vee}}\right).
$$
It is clear that the intersection pairing on $\H^*(\Sigma_g)$, together with the canonical pairing between $\mathfrak{g}_X[1]$ and
$\mathfrak{g}_X^\vee$, induces an odd symplectic structure on $\widehat{T^*\bracket{{T^{\Sigma_g}X}}}[-1]$. Let $\mathcal L$ be
the ringed space with underlying topological space $X$ and structure sheaf being generated by the odd generators over
$\mathcal A_X$ in ${\mathcal O}\bracket{\mathbb{H}^*(\Sigma_g)\otimes \bracket{\mathfrak{g}_X[1]\oplus\mathfrak{g}_X^\vee}}$. Thus
$\mathcal L$ can be viewed as a Lagrangian subspace of $\widehat{T^*\bracket{{T^{\Sigma_g}X}}}[-1]$.
It is clear from the form of the jet bundle in \eqref{eqn:structure-sheaf-ringed-space-T-g} that there is a canonical
projection of functions on $\widehat{T^*\bracket{{T^{\Sigma_g}X}}}[-1]$ to the subspace
$$
\mathcal A_X\otimes_{{\mathcal O}_X}
\Jet_X^{hol}\bracket{ \widehat{\Sym}\bracket{T_X\otimes \H^2(\Sigma_g)}^\vee \otimes \widehat{\Sym}\bracket{\Omega^1_X[1]\otimes
\H^1(\Sigma_g)}^\vee}\pi_2^*(\Omega_X^{2g-2})
$$
generated by $\pi_2^*(\Omega_X^{2g-2})$. We denote by $(f)^{TF}$ the projection of $f$, where $TF$ is short for ``top fermions''.
\begin{prop}
The map
\begin{align*}
i_L^*:\text{Obs}^q(\Sigma_g)&\rightarrow \mathcal A_X((\hbar))[(2g-2)\dim_{\mathbb{C}} X]\\
O&\mapsto \hbar^{2\dim_{\mathbb{C}}X} \left((e^{I[\infty]/\hbar}\cdot O|_{\H})^{TF}\big/\pi_2^*(\Omega_X^{2g-2})\right)\Big|_{\mathcal{L}}
\end{align*} is a $\mathbb{C}^\times$-equivariant cochain map. Here the differential on the right hand side is the de Rham differential $d_X$, and
$$
\Big|_{\mathcal{L}}: \mathcal A_X\otimes_{{\mathcal O}_X}
\Jet_X^{hol}\bracket{ \widehat{\Sym}^*\bracket{T_X\otimes \H^2(\Sigma_g)}^\vee \otimes \widehat{\Sym}^*\bracket{\Omega^1_X[1]\otimes
\H^1(\Sigma_g)}^\vee} \to \mathcal A_X
$$
denotes the map which sets all the jet coordinates and that of $T_X\otimes \H^2(\Sigma_g),\Omega^1_X[1]\otimes\H^1(\Sigma_g)$ to be zero.
\end{prop}
\begin{proof} Let $O\in \text{Obs}^q(\Sigma_g)$. Recall that from QME, we have
$$
(Q_\infty O+\hbar\Delta_\infty O+\{I[\infty],O\}_\infty)\cdot
e^{I[\infty]/\hbar}=(Q_\infty+\hbar\Delta_\infty+\frac{F_{l_1}}{\hbar}-R)(e^{I[\infty]/\hbar}\cdot O).
$$
We have the following simple observations:
\begin{enumerate}
\item $\left(\hbar\Delta_\infty(e^{I[\infty]/\hbar}O)\right)^{TF}=0$ since $\Delta_\infty$ annihilates one odd generator,
\item $\dfrac{F_{l_1}}{\hbar}(e^{I[\infty]/\hbar}O)$ vanishes when restricted to the Lagrangian $\mathcal{L}$, since
$F_{l_1}$ contains non-trivial bosonic generators.
\item When restricted to $\H$, $Q_\infty=l_1$.
\end{enumerate}
Thus, we only need to show that
$$
\left(\left((Q_\infty-R)(e^{I[\infty]/\hbar}\cdot
O\big|_{\H})\right)^{TF}/\pi_2^*(\Omega_X^{2g-2})\right)\bigg|_{\mathcal{L}}=d_X\left((e^{I[\infty]/\hbar} \cdot
O\big|_{\H})^{TF}/\pi_2^*(\Omega_X^{2g-2})\right)\Big|_\mathcal{L}.
$$
Since $l_1$ commutes with $ \Big|_{\mathcal{L}}$, we can assume that $(e^{I[\infty]/\hbar}O\big|_{\H})^{TF}$ is of the form
\begin{align*}
(e^{I[\infty]/\hbar}O\big|_{\H})^{TF}=B\cdot \pi_2^*(\Omega_X^{2g-2})
\overset{(1)}{=}B\cdot e^{-I_{qc}} T'(\Omega_X^{2g-2}),
\end{align*}
where $B\in\mathcal A_X$, and identity $(1)$ and the map $T'$ are explained in the proof of Proposition \ref{trivial-system}. Then
\begin{align*}
Q_\infty(B\cdot \pi_2^*(\Omega_X^{2g-2}))
=&d_X(B)\cdot \pi_2^*(\Omega_X^{2g-2})+B\cdot l_1(\pi_2^*(\Omega_X^{2g-2})).
\end{align*}
The fact that $\pi_2^*(\Omega_X^{2g-2})$ is a flat section of the jet bundle is translated to
$$
l_1(\pi_2^*(\Omega_X^{2g-2}))+\{I_{cl},\pi_2^*(\Omega_X^{2g-2})\}=0,
$$
where $I_{cl}=\tilde{l}_0+\sum_{k\geq 2}\tilde{l}_k$ is the classical interaction functional and $\tilde{l}_k$ is defined in equation \eqref{def:tilde-l_k}.
The functionals $\{\tilde{l}_k,\pi_2^*(\Omega_X^{2g-2})\}$ for $k\geq 2$ vanish when
restricted to the Lagrangian $\mathcal{L}$ since they contain jet coordinates. Thus
\begin{align*}
\left(l_1(\pi_2^*(\Omega_X^{2g-2}))\right)\Big|_\mathcal{L} &=-\{\tilde{l}_0,\pi_2^*(\Omega_X^{2g-2})\}\Big|_\mathcal{L}\\
&=-\{\tilde{l}_0,e^{-I_{qc}}T'(\Omega_X^{2g-2})\}\Big|_\mathcal{L}\\
&=-\left(-\{\tilde{l}_0,I_{qc}\}\cdot
e^{-I_{qc}}T'(\Omega_X^{2g-2})+e^{-I_{qc}}\cdot\{\tilde{l}_0,T'(\Omega_X^{2g-2})\}\right)\Big|_{\mathcal{L}}\\
&\overset{(1)}{=}\left(\{\tilde{l}_0,I_{qc}\}\cdot
e^{-I_{qc}}T'(\Omega_X^{2g-2})\right)\Big|_{\mathcal{L}}\\
&\overset{(2)}{=}R\cdot\pi_2^*(\Omega_X^{2g-2}).
\end{align*}
The identity $(1)$ follows from the fact that $\{\tilde{l}_0,T'(\Omega_X^{2g-2})\}=0$ by type reason, and identity $(2)$
follows from Lemma \ref{lem:S_1-contant term}. Thus we have
\begin{align*}
&\left(\left((Q-R)(e^{I[\infty]/\hbar}\cdot O)\big|_{\H}\right)^{TF}/\pi_2^*(\Omega_X^{2g-2})\right)\bigg|_{\L}\\
=&\left((Q-R)(B\cdot \pi_2^*(\Omega_X^{2g-2}))/\pi_2^*(\Omega_X^{2g-2})\right)\Big|_{\L}\\
=&\left((dB+B\cdot R-B\cdot R)\cdot \pi_2^*(\Omega_X^{2g-2})/\pi_2^*(\Omega_X^{2g-2})\right)\Big|_{\L}\\
=& dB.
\end{align*}
\end{proof}
It is clear that the cochain map $i_L^*$ induces an isomorphism on the degree $(2-2g)\dim_{\mathbb{C}} X$ component. Thus we have the following
corollary:
\begin{cor}\label{cor:correlation-function-BV-def}
Let $O$ be a closed global quantum observable. Then the correlation function of $O$ is the same as the integral of $i_L^*(O)$
over $X$: $$\langle O\rangle_{\Sigma_g}=\hbar^{2\dim_{\mathbb{C}}X}\int_X\left((e^{I[\infty]/\hbar}\cdot O\big|_{\H})^{TF}/\pi_2^*(\Omega_X^{2g-2})\right)\Big|_{\L}.$$
\end{cor}
We are ready to compute the topological correlation functions on $\mathbb{P}^1$ and elliptic curves.
\subsubsection{$g=0$}
\begin{lem}\label{lem:factorization-product-genus-0}
Let $\{U_i\}$ be a collection of disjoint disks contained in a larger disk $U\subset \Sigma_g$, and let $[O_{\mu_i, U_i}] \in H^*\bracket{\text{Obs}^q(U_i)}$
be the local quantum observable associated to
$\mu_i \in H^*(X, \wedge^*T_X)$ on $U_i$. Then
$$
[O_{\mu_1, U_1}\star\cdots\star O_{\mu_m, U_m}]=[O_{\mu_1\cdots\mu_m, U}]\in H^*\bracket{\text{Obs}^q(U)}.
$$
\end{lem}
\begin{proof}
For any parametrix $\Phi$, we have
\begin{align*}
(O_{\mu_1, U_1}\star O_{\mu_2, U_2})[\Phi]&\overset{(1)}{=}\lim_{L\rightarrow 0}W(P(\Phi)-\mathbb{P}_0^L,I[L],O_{\mu_1,U_1}[L]\star
O_{\mu_2,U_2}[L])\\
&\overset{(2)}{=}W(P(\Phi),I_{cl}+\hbar I_{qc}, O_{\mu_1,U_1}\cdot O_{\mu_2,U_2})\\
&=O_{\mu_1\mu_2,U}[\Phi].
\end{align*}
Here identity $(1)$ is the definition of factorization product of observables, and identity $(2)$ follows from Proposition
\ref{prop:local-observable}.
\end{proof}
\begin{thm}\label{correlation-P1}
Let $\Sigma_g=\mathbb{P}^1$, and let $\{U_i\}$ be a collection of disjoint disks on $\mathbb P^1$. Let $O_{\mu_i, U_i} \in
H^*\bracket{\text{Obs}^q(U_i)}$ be a local quantum
observable associated to $\mu_i \in H^*(X,\wedge^*T_X)$ supported in $U_i$. Then
$$
\abracket{O_{\mu_1, U_1}, \cdots, O_{\mu_m, U_m}}_{\mathbb{P}^1}=\hbar^{\dim_{\mathbb{C}}X}\int_X \bracket{\mu_1\cdots \mu_m\vdash \Omega_X}\wedge \Omega_X.
$$
\end{thm}
\begin{proof}
We compute the correlation function at the scale $L=\infty$. By the degree reason and the previous lemma, we can assume $m=1$, and $\mu=\mu_1\in H^{\dim_{\mathbb{C}}X}(X, \wedge^{\dim_{\mathbb{C}}X}T_X)$. Let $O_\mu$ be the classical observable represented by $\mu$. By Proposition \ref{prop:local-observable}, the corresponding quantum observable is described by
$$
O_{\mu}[\infty] e^{I[\infty]/\hbar}=\lim_{L\to\infty}\lim_{\epsilon\to 0}e^{\hbar {\partial\over \partial \mathbb P_\epsilon^L}}\bracket{O_\mu
e^{I_{cl}/\hbar+I_{qc}}}.
$$
Since $\H^1(\mathbb{P}^1)=0$, a similar argument as in Lemma \ref{effective-infinity} implies
$$
O_{\mu}[\infty] \big|_{\mathbb H}=O_{\mu}
$$
when restricted to harmonic fields. By Corollary \ref{cor:correlation-function-BV-def},
$$
\abracket{O_\mu[\infty]}_{\mathbb{P}^1}=\hbar^{2\dim_{\mathbb{C}}X}\int_X\left((e^{I[\infty]/\hbar}\cdot O_\mu\big|_{\H})^{TF}/\pi_2^*(\Omega_X^{-2})\right)\Big|_{\L}.
$$
By Lemma \ref{lem:one-loop-naive-interaction-harmonic-fields} and Lemma \ref{effective-infinity},
$$
(e^{I[\infty]/\hbar}\cdot O_\mu\big|_{\H})^{TF}=(e^{I_{cl}/\hbar+I_{qc}}\cdot O_\mu\big|_{\H})^{TF}.
$$
By the type reason, the only terms in $e^{I_{cl}/\hbar+I_{qc}}$ that will contribute after $\big|_{\L}$ are products of $\tilde{l}_0$:
$$
\figbox{0.26}{genus-0}
$$
Let $\{z^i\}$ be holomorphic coordinates on $U\subset X$ such that locally $\Omega_X|_U=dz^1\wedge\cdots\wedge dz^n$. Let
$$
\mu=A d\bar z^1\wedge\cdots \wedge d\bar z^n\otimes \partial_{z^1}\wedge\cdots \wedge \partial_{z^n},
$$
where $n=\dim_{\mathbb{C}} X$. We can choose the following element in the jet bundle representing $\mu$:
$$
O_\mu=Ad\bar z^1\wedge\cdots \wedge d\bar z^n\otimes_{{\mathcal O}_X} \left((\pi_2^*(dz^{1})\otimes
1)^\vee\wedge\cdots\wedge(\pi_2^*(dz^{n})\otimes 1)^\vee\otimes\mathbf{1}\right),
$$
where $\mathbf{1}$ denotes the other component in the jet bundle. On the other hand,
$$
e^{\tilde l_0/\hbar}= \hbar^{-n}\cdot dz^1\wedge\cdots\wedge dz^n\otimes_{{\mathcal O}_X}\bracket{T(\partial_{z^1})\wedge\cdots\wedge T(\partial_{z^n})} +\text{lower
wedge products}.
$$
It follows easily that
$$
\left((e^{I[\infty]/\hbar}\cdot O_\mu\big|_{\H})^{TF}/\pi_2^*(\Omega_X^{-2})\right)\Big|_{\L}=\hbar^{-n}\cdot(\mu\vdash\Omega_X)\wedge \Omega_X.
$$
\end{proof}
\subsubsection{$g=1$} On elliptic curves, the only nontrivial input is at cohomology degree $0$.
\begin{thm}\label{correlation-elliptic} Let $E=\Sigma_1$ be an elliptic curve. Then $
\abracket{1}_E=\chi(X)
$ is the Euler characteristic of $X$.
\end{thm}
\begin{proof}
By Corollary \ref{cor:correlation-function-BV-def}, we only need to look at the term $e^{I[\infty]/\hbar}$. For the case $g=1$, we have shown in Lemma
\ref{lem:one-loop-naive-interaction-harmonic-fields} that $I_{naive}^{(1)}[\infty]\big|_{\H}$ vanishes, and the quantum correction $I_{qc}$ also vanishes. Hence
$I[\infty]=I^{(0)}[\infty]=I_{cl}$. Let $w$ denote the normalized holomorphic coordinate on the elliptic curve $E$ such that
$$
\sqrt{-1}\int_{E}dwd\bar{w}=1.
$$
It is not difficult to see that by type reason, the only terms in $e^{I_{cl}/\hbar}|_{\H}$ that can survive under $\big|_{\L}$ are the following:
\begin{equation}\label{eqn:genus-1-correlation-survival-term}
(1)\hspace{5mm}\figbox{0.26}{genus-1-1},\hspace{20mm}(2)\hspace{5mm}\figbox{0.26}{genus-1-2}
\end{equation}
Let $\{z^i\}$ be local holomorphic coordinates on $X$ as we chose before, then term $(1)$ in (\ref{eqn:genus-1-correlation-survival-term}) can be expressed explicitly as:
\begin{equation}\label{eqn:genus-term1}
A_{ij}^k\otimes_{{\mathcal O}_X}((dw)^\vee\otimes \tilde{T}(\widetilde{dz^i}))\otimes ((d\bar{w})^\vee\otimes
\tilde{T}(\widetilde{dz^j}))\otimes(1^\vee\otimes\tilde{K}(\widetilde{\partial_{z^k}})).
\end{equation}
And term (2) can be expressed as
\begin{equation}\label{eqn:genus-term2}
dz^l\otimes\left((\sqrt{-1}dwd\bar{w})^\vee\otimes \tilde{K}(\widetilde{\partial_{z^l}})\right).
\end{equation}
By the discussion in \cite{Kevin-CS}, the following differential form valued in the bundle $End(T_X)=T_X\otimes_{\mathcal{O}_X}T_X^\vee$
$$
(A_{ij}^kdz^i)\otimes_{{\mathcal O}_X}(dz^j\otimes\frac{\partial}{\partial z^k})
$$
is exactly the Dolbeault representative of the Atiyah class of the holomorphic tangent bundle $T_X$. It is straightforward to
check that
\begin{align*}
\left(\left(\exp(I[\infty]/\hbar)|_{\H}\right)^{TF}/\pi_2^*(\Omega_X^{2-2g})\right)\Big|_\mathcal{L}&=\Tr\left((A_{ij}^kdz^i)\otimes(dz^j\otimes\frac{\partial}{\partial z^k})\right)^{n}\\
&=\Tr(At(T_X))^n\\
&=c_n(X).
\end{align*}
It then follows easily that
\begin{align*}
\langle 1\rangle_E&=\int_X c_n(X)\\
&=\chi(X).
\end{align*}
\end{proof}
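For instance (an illustrative special case, not used elsewhere in the paper), if the target $X$ is a K3 surface then the theorem gives
$$
\abracket{1}_E=\chi(K3)=24,
$$
while for $X$ an abelian surface the partition function on the elliptic curve vanishes.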
\section{Landau-Ginzburg Twisting}
In this section we discuss the Landau-Ginzburg twisting of the B-twisted $\sigma$-model and establish the topological Landau-Ginzburg
B-model via the renormalization method.
\begin{rmk} To avoid confusion, ``twisted'' and ``untwisted'' in this section are always concerned with the twist that arises from the superpotential $W$, instead of
the $B$-twist.
\end{rmk}
\subsection{Classical theory} Let $X$ be a smooth variety with a holomorphic function
$$
W: X\to \mathbb{C}.
$$
Recall that the B-twisted $\sigma$-model describes maps
$$
\bracket{\Sigma_g}_{dR}\to T^*X_{\dbar}[1].
$$
Let
$$
dW\lrcorner: \wedge^* T_X\to \wedge^* T_X
$$
be the contraction with the 1-form $dW$. It induces a differential on ${\mathcal O}\bracket{T^*X_{\dbar}[1]}$ of degree $-1$, commuting
with $d_{D_X}$. By abuse of notation, we still denote this differential by $dW\lrcorner$.
\begin{defn} We define $ T^*X_{\dbar}^W[1]$ to be the $L_\infty$-space with underlying space $X$, and sheaf of functions the
$\mathbb{Z}_2$-graded complex
$$
{\mathcal O}\bracket{ T^*X_{\dbar}^W[1]}:=\mathcal A_X\otimes_{{\mathcal O}_X}\Jet^{hol}_X\bracket{\wedge^* T_X}
$$
with the twisted differential $d_{D_X}+{dW\lrcorner}$.
\end{defn}
\begin{rmk} When $X=\mathbb{C}^n$ and $W$ is a weighted homogeneous polynomial, i.e.,
$$
W(\lambda^{q_i}z^i)=\lambda W(z^i), \quad \forall \lambda\in \mathbb{C}^*
$$
where the $q_i$'s are positive rational numbers called the weights, then there is a natural $\mathbb{Q}$-grading obtained by assigning the weights
$
wt(z^i)=q_i, wt\bracket{\partial_{z^i}}=1-q_i, wt(\bar z^i)= -q_i, wt(d\bar z^i)=-q_i
$. However, we will not explore this further in the current paper.
\end{rmk}
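For concreteness (a simple example included here for illustration), take $X=\mathbb{C}$ and $W(z)=z^3$ with weight $q={1\over 3}$, so that $W(\lambda^{1/3}z)=\lambda W(z)$. The induced weights are
$$
wt(z)={1\over 3},\quad wt\bracket{\partial_{z}}={2\over 3},\quad wt(\bar z)=-{1\over 3},\quad wt(d\bar z)=-{1\over 3}.
$$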
Note that there is a quasi-isomorphism of $\mathbb{Z}_2$-graded complexes of sheaves
$$
{\mathcal O}\bracket{ T^*X_{\dbar}^W[1]}\cong \bracket{\mathcal A_X^{0,*} \bracket{\wedge^* T_X}, \dbar_W}, \quad \dbar_W=\dbar+dW\lrcorner.
$$
Therefore $ T^*X_{\dbar}^W[1]$ can be viewed as the derived critical locus of $W$. The Landau-Ginzburg $B$-model describes
the space of maps
$$
\bracket{\Sigma_g}_{dR}\to T^*X_{\dbar}^W[1].
$$
As in the untwisted case, we consider those maps in the formal neighborhood of constant maps. This corresponds to the physics
statement that path integrals in the B-twisted
Landau-Ginzburg model are localized around constant maps valued in the critical locus of $W$.
Recall that there exists a Poisson bracket (Schouten-Nijenhuis bracket): $\wedge^*T_X\otimes_{\mathbb{C}} \wedge^*T_X\to \wedge^*T_X$.
Viewed as a bi-differential operator, it induces a bracket on the jet bundles, which we denote by
$$
\{-,-\}_{T^*X_{\dbar}[1]}: {\mathcal O}\bracket{ T^*X_{\dbar}[1]} \otimes_{\mathcal A_X} {\mathcal O}\bracket{ T^*X_{\dbar}[1]} \to {\mathcal O}\bracket{
T^*X_{\dbar}[1]}.
$$
\begin{lem-defn} Let $\widetilde{W} \in {\mathcal O}\bracket{ T^*X_{\dbar}[1]}$ be the image of $W$ under the natural embedding
$$
{\mathcal O}_X\hookrightarrow \Jet^{hol}_X({\mathcal O}_X) \hookrightarrow \mathcal A_X\otimes_{{\mathcal O}_X}\Jet^{hol}_X\bracket{\wedge^* T_X}.
$$
Then $\{\widetilde{W},-\}_{T^*X_{\dbar}[1]}=dW\lrcorner$ as operators on ${\mathcal O}\bracket{ T^*X_{\dbar}[1]}$.
\end{lem-defn}
\begin{proof} This follows from the corresponding statement on $\wedge^*T_X$.
\end{proof}
\iffalse
\begin{proof}
Let $\{\widetilde{\partial_{z^i}}\}$ and $\{\widetilde{dz^i}\}$ denote the basis of the $L_\infty$ algebra $\mathfrak{g}_X$ and the dual
$\mathfrak{g}_X^\vee$ over $\mathcal A_X$, and let $\{\widehat{dz^i}\}$ denote the basis of the $\mathfrak{g}_X$-module $C^*(\mathfrak{g}_X,\mathfrak{g}_X^\vee) $. Let
$\tilde{K}$ denote the smooth splitting
$$\tilde{K}: \Omega_X^1\rightarrow C^{\infty}(X)\otimes_{{\mathcal O}_X}\Jet_X^{hol}(\Omega_X^1) $$
as before. Let $f\in{\mathcal O}\bracket{ T^*X_{\dbar}[1]}$, then
we have
\begin{align*}
\{\widetilde{W},f\}=&\sum_i\dfrac{\partial\widetilde{W}}{\partial(\widetilde{dz^i})}\dfrac{\partial f}{
\partial(\widetilde{\partial_{z^i}})}
\overset{(1)}{=}\sum_i\dfrac{\partial(d_R(\widetilde{W}))}{\partial(\widehat{dz^i})}\dfrac{\partial
f}{\partial(\widetilde{\partial_{z^i}})}\\
=&\sum_{i}\sum_{j,k}\dfrac{\partial(d_R(\widetilde{W}))}{\partial(\tilde{K}(dz^j)}\dfrac{\partial(\tilde{K}(dz^j)}{
\partial(\widehat{dz^i})}\dfrac{ \partial f}{\partial(K(\partial_{z^k}))}\dfrac{\partial(K(\partial_{z^k}))}{
\partial(\widetilde{\partial_{z^i}})}\\
=&\sum_{j,k}\dfrac{\partial(d_R(\widetilde{W}))}{\partial(\tilde{K}(dz^j)}\dfrac{ \partial
f}{\partial(K(\partial_{z^k}))}\cdot\langle\tilde{K}(dz^j),K(\partial_{z^k}))\rangle\\
\overset{(2)}{=}&\sum_i\dfrac{\partial(d_R(\widetilde{W}))}{\partial(\tilde{K}(dz^i)}\dfrac{\partial
f}{\partial(K(\partial_{z^i}))}
=dW\lrcorner f.
\end{align*}
Here identity $(1)$ follows from the commutative diagram (\ref{eqn:right-de rham-CE}) and identity $(2)$ follows from Equation
(\ref{eqn:definition-T}).
\end{proof}
\fi
\begin{defn}\label{LG-definition}
The space of fields of topological Landau-Ginzburg $B$-model is
$$
{\mathcal E}:= \mathcal A_{\Sigma_g}\otimes_{\mathbb{C}}\bracket{\mathfrak{g}_{X}[1]\oplus \mathfrak{g}_{X}^\vee},
$$
and the classical action functional $S^W$ is defined by
$$
S^W=S+ I_{W},
$$
where $S$ is the classical action functional of the untwisted case, and $I_W$ is the local functional on $\mathcal A_{\Sigma_g}\otimes
\mathfrak{g}_X[1]$ defined by
$$
I_W\bracket{\alpha}:=\int_{\Sigma_g} \widetilde{W}(\alpha), \quad \alpha \in \mathcal A_{\Sigma_g}\otimes \mathfrak{g}_X[1].
$$
Here $\widetilde W$ is extended linearly in $\mathcal A_{\Sigma_g}$ to $\mathcal A_{\Sigma_g}\otimes \mathfrak{g}_X[1]$. The
LG-twisted interaction is
$$
I^{W}_{cl}=I_{cl}+I_W.
$$
\end{defn}
\begin{rmk} The $\mathbb{C}^\times$-symmetry of the untwisted B-model is broken by the term $I_W$. In particular, the LG-twisted theory is
no longer a cotangent theory in the sense of \cite{Kevin-CS}.
\end{rmk}
\begin{lem}\label{lem:Classical-ME-LG} The classical interaction $I_{cl}^W$ of Landau-Ginzburg $B$-model satisfies the classical master equation $QI^W_{cl}+{1\over
2}\fbracket{I^W_{cl},
I^W_{cl}}+{F_{l_1}}=0$.
\end{lem}
\begin{proof}
\begin{align*}
QI^W_{cl}+{1\over 2}\fbracket{I^W_{cl},I^W_{cl}}+F_{l_1}=&QI_{cl}+{1\over 2}\fbracket{I_{cl},I_{cl}}+F_{l_1}+QI_W+{1\over
2}\fbracket{I_W,I_W}+\fbracket{I_{cl},I_W}\\
\overset{(1)}{=}&QI_W+\fbracket{I_{cl},I_W},
\end{align*}
where $(1)$ follows from the classical master equation of $I_{cl}$ in the untwisted case and the vanishing of $\{I_W,I_W\}$ by type reason. It is not
difficult to see that for $\alpha\in\mathcal A_{\Sigma_g}$,
\begin{align*}
(QI_W+\{I_{cl},I_W\})(\alpha)=\int_{\Sigma_g}d_{D_X}(\widetilde{W})(\alpha)
=0,
\end{align*}
since $\widetilde{W}$ is flat.
\end{proof}
\subsection{Quantization} We assume that $X$ is Calabi-Yau with holomorphic volume form $\Omega_X$. Since $X$ is non-compact, the
choice of $\Omega_X$ will not be unique, and we will always fix one such choice.
Let $I_{qc}$ be the one-loop quantum correction of B-twisted $\sigma$-model associated to $(X, \Omega_X)$, with which the
quantization of the untwisted theory
is defined as
$$
I[L]=W(\mathbb P_0^L, I_{cl}+\hbar I_{qc}):=\lim_{\epsilon\to 0}W(\mathbb P_\epsilon^L, I_{cl}+\hbar I_{qc}).
$$
\begin{defn} We define the Landau-Ginzburg twisting of $I[L]$ by
$$
I^W[L]=W(\mathbb P_0^L, I_{cl}+I_{W}+\hbar I_{qc}):=\lim_{\epsilon\to 0}W(\mathbb P_\epsilon^L, I_{cl}+I_{W}+\hbar I_{qc}).
$$
\end{defn}
Since $I_W$ is only a functional on $\mathcal A_{\Sigma_g}^*\otimes\mathfrak{g}_X[1]$, it is easy to see by the type reason that
$$
I^W[L]=I[L]+W_{tree}(\mathbb P_0^L,I_{cl},I_W).
$$
\begin{prop}\label{prop:QME-LG}
$I^W[L]$ defines a quantization of B-twisted topological Landau-Ginzburg model $S^W$ in the sense of Definition
\ref{defn-quantization}.
\end{prop}
\begin{proof} Let $\delta_W[L]=W_{tree}(\mathbb P_0^L,I_{cl},I_W)$. By the type reason, $\Delta_L \delta_W[L]=\{\delta_W[L],\delta_W[L]\}_L=0$.
Since
$I[L]$ satisfies the QME, we have
\begin{align*}
\bracket{Q_L+\hbar \Delta_L+{F_{l_1}\over \hbar}-R}e^{I^W[L]/\hbar}
=\hbar^{-1}\bracket{Q_L \delta_W[L]+\{I[L], \delta_W[L]\}_L}e^{I^W[L]/\hbar}.
\end{align*}
Since $\delta_W[L]$ is given by a sum over trees, it satisfies the classical RG flow equation. The vanishing of $Q_L
\delta_W[L]+\{I[L], \delta_W[L]\}_L$ then follows from its vanishing in the classical limit
$$
Q I_W+\{I_{cl}, I_W\}=0.
$$
\end{proof}
\subsection{Observable theory and correlation functions}
We would like to explore the correlation functions of local quantum observables of our Landau-Ginzburg theory. One
essential difference with the untwisted case is that it is no longer a cotangent theory due to the term
$I_W$, hence the interpretation of quantization as projective volume forms \cite{Kevin-CS} does not work directly in this case.
However, the BV integration interpretation in Corollary \ref{cor:correlation-function-BV-def} still makes sense in the LG-twisted
case, which we will use to define the correlation functions.
For simplicity, we will assume $X$ to be a Stein domain in $\mathbb{C}^n$, and that $Crit(W)$ is finite. We choose the
holomorphic volume form $\Omega_X=dz^1\wedge\cdots\wedge dz^n$, where $\{z^i\}$ are the linear coordinates on $\mathbb{C}^n$.
\begin{defn} The quantum observables on $\Sigma_g$ at scale $L$ are defined as the cochain complex
$$
\text{Obs}^q(\Sigma_g)[L]:=\bracket{\mathcal{O}(\mathcal{E})[[\hbar]],Q_L+\{I^W[L],-\}_L+\hbar\Delta_L}.
$$
Observables $\text{Obs}^q(U)$ with support in $U\subset \Sigma_g$ are defined in a similar fashion as in Section \ref{section:global-observable}.
\end{defn}
Correlation functions are only defined for a proper subspace of quantum observables which are ``integrable'' in a suitable sense, since we
are working with the non-compact space $X$. We consider the following simplest class:
\begin{defn}
We define the sub-complex $\text{Obs}_c^q(\Sigma_g)[L]\subset \text{Obs}^q(\Sigma_g)[L]$ by
$$
\text{Obs}^q_c(\Sigma_g)[L]:=\bracket{\mathcal{O}_c(\mathcal{E})[[\hbar]],Q_L+\{I^W[L],-\}_L+\hbar\Delta_L},
$$
where ${\mathcal O}_c({\mathcal E}):={\mathcal O}({\mathcal E})\otimes_{\mathcal A(X)}\mathcal A_c(X)$ and $\mathcal A_c(X)$ is the space of compactly supported differential forms on $X$. The
corresponding local observables supported in $U\subset \Sigma_g$ are denoted by $\text{Obs}^q_c(U)$.
\end{defn}
\begin{prop}\label{LG-local-observable}
Let $U\subset \Sigma_g$ be a disk. Then
$$
H^*(\text{Obs}^q(U))\cong H^*(\text{Obs}^q_c(U))\cong \Jac(W)[[\hbar]],
$$
where $\Jac(W)$ is the Jacobian ring of $W$.
\end{prop}
\begin{proof} The strategy is completely parallel to Corollary \ref{cohomology-quantum-local}. We just need to observe that the
cohomology of classical observables in the twisted case is given by
$$
H^*(\mathcal A^{0,*}(X, \wedge^*T_X), \dbar+dW\lrcorner ) \quad \text{or}\hspace{5mm} H^*(\mathcal A_c^{0,*}(X, \wedge^*T_X), \dbar+dW\lrcorner
),
$$
both of which are canonically isomorphic to $\Jac(W)$ (See \cite{LLS}).
\end{proof}
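As a simple illustration (not needed for the general argument), take $X=\mathbb{C}$ and $W(z)=z^3$, whose only critical point is the origin. Then
$$
\Jac(W)=\mathcal{O}(\mathbb{C})/(\partial_z W)=\mathcal{O}(\mathbb{C})/(3z^2)\cong \mathbb{C}\{1,z\},
$$
so the cohomology of local quantum observables on a disk is a free $\mathbb{C}[[\hbar]]$-module of rank $2$.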
Now we define the correlation function of quantum observables. For the Landau-Ginzburg model, notice that the ``top fermion''
$\pi_2^*(\Omega_X^{2g-2})$ can be defined in a similar way as in the untwisted case. Thus we can define the correlation function
of quantum observables via the BV integration in the spirit of Corollary
\ref{cor:correlation-function-BV-def} (see also there for the notations):
\begin{defn}\label{def:correlation-function-LG}
Let $O$ be a closed quantum observable of the Landau-Ginzburg $B$-model. Then the correlation function of $O$ is
defined as
$$
\abracket{O}^W_{\Sigma_g}:=\int_X\left((e^{I^W[\infty]/\hbar} O\big|_{\H})^{TF}\Big/\pi_2^*(\Omega_{X}^{2g-2})\right)\Big|_\mathcal{L}.
$$
\end{defn}
As a parallel to Theorem \ref{correlation-P1}, we have
\begin{prop}\label{thm-LG-correlation}
Let $\{U_i\}$ be disjoint disks on $\Sigma_g$. Let $O_{f_i,U_i}\in H^*(Obs^q_c(U_i))$ be local observables associated to
$f_i\in\Jac(W)$ by Proposition \ref{LG-local-observable}. Then
$$
\langle O_{f_1,U_1}\star\cdots\star O_{f_m,U_m}\rangle_{\Sigma_g}^W=\sum_{p\in Crit(W)}\Res_p \bracket{{f_1\cdots f_m
\det(\partial_i\partial_j W)^{g}dz^1\wedge \cdots \wedge dz^n\over \prod_i \partial_i W }},
$$
where $\star$ is the local to global factorization product, and $\Res_p$ is the residue at the critical point $p$ \cite{GH}.
\end{prop}
\begin{proof}
As in the untwisted case, we can assume that $m=1$ and let $f=f_1\in \Jac(W)$. Let $O_f[L]$ denote the corresponding quantum
observable and $O_f=\lim\limits_{L\to 0} O_f[L]$. By the definition of correlation function, we have
\begin{align*}
\langle O_f[\infty]\rangle_{\Sigma_g}^W
=\int_X\left((e^{I^W[\infty]/\hbar} O_{f}[\infty]\big|_{\H})^{TF}/\pi_2^*(\Omega_{X}^{2g-2})\right)\Big|_\mathcal{L}.
\end{align*}
Since $X\subset \mathbb{C}^n$, we can choose the $L_\infty$ structure on $\mathfrak{g}_X$ such that $l_i=0$ for all $i\geq 2$. It is then
clear that the RG flow of the classical interaction of B-model $I_{cl}$ has only tree parts. Furthermore, when restricted to the
subspace of harmonic fields, there is
\begin{align*}
I[\infty]|_{\H}&=I_{cl}|_\H,\\
W_{tree}(\mathbb{P}_0^L,I_{cl},I_W)|_\H&=I_W|_\H.
\end{align*}
It is then not difficult to see that the only terms in $e^{I^W[\infty]/\hbar}$ that will contribute non-trivially to
$\left((e^{I^W[\infty]/\hbar} O_{f}[\infty]\big|_{\H})^{TF}/\pi_2^*(\Omega_{\mathbb{C}^n}^{2g-2})\right)\Big|_\mathcal{L}$ are the following:
\begin{equation}\label{pic:correlation-LG}
\figbox{0.26}{LG-W-second-derivative}\hspace{15mm}\figbox{0.26}{genus-g}
\end{equation}
In the first picture, the two harmonic $1$-forms on $\Sigma_g$ attached to the tails must be dual to each
other. Since $\dim_\mathbb{C}(\H^1(\Sigma_g))=2g$, the total contribution of the first terms is
$$
\hbar^{-g\cdot n}\left(\det(\partial_i\partial_j W)\right)^g\otimes\pi_2^*(\Omega_X^{2g}).
$$
And the contribution of the second terms in (\ref{pic:correlation-LG}) together with the observable $O_{f}$
is, as in the computation of correlation functions in the untwisted B-model on $\mathbb{P}^1$, given by
$$
\hbar^{-n}((O_{f}\vdash \Omega_X)\wedge\Omega_X)\otimes \pi_2^*(\Omega_X^{-2}).
$$
All together, we have
\begin{align*}
\langle O_{f}[\infty]\rangle_{\Sigma_g}^W=&\hbar^{-(g+1)n}\int_X\left(\det(\partial_i\partial_j W)\right)^g(O_{f}\vdash \Omega_X)\wedge\Omega_X\\
=&\hbar^{-(g+1)n}\cdot\sum_{p\in Crit(W)}\Res_p \bracket{{f \det(\partial_i\partial_j W)^{g}dz^1\wedge \cdots \wedge dz^n \over \prod_i \partial_i W}},
\end{align*}
where the last equality follows from \cite[Proposition 2.5]{LLS}.
\end{proof}
\begin{rmk}
This coincides with Vafa's residue formula for topological Landau-Ginzburg models \cite{vafa}.
\end{rmk}
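To illustrate the residue formula (continuing the example $X=\mathbb{C}$, $W(z)=z^3$ from above, and ignoring the overall power of $\hbar$ appearing in the proof), at genus $g=0$ the correlation function of the local observable associated to $f\in \Jac(W)$ is
$$
\Res_{z=0} \bracket{{f\, dz\over 3z^2}},
$$
which equals ${1\over 3}$ for $f=z$ and vanishes for $f=1$.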
\section{Introduction}
Cosmic neutrinos, as a probe of the universe at the highest energies, are remarkable in many aspects.
Due to their extremely small interaction cross section,
they can travel through the galactic infrared (IR) and cosmic microwave background (CMB) photon fields essentially unimpeded,
while photons of energy above 10 TeV would be attenuated.
Furthermore, being uncharged, they propagate along straight lines and are therefore able to point directly back to their sources,
while protons or other charged particles would be deflected by the magnetic fields in the universe.
Ultra-high energy cosmic rays (UHECRs) have been observed up to $\approx 10^{19.6}$ eV.
The source of such amazingly energetic events remains a mystery.
Above this energy scale, UHECRs interact with CMB photons through the Greisen-Zatsepin-Kuzmin (GZK) process~\cite{gzk_process}.
The GZK cut-off of the cosmic ray energy spectrum was first observed
by the High Resolution Fly's Eye Experiment~\cite{cutoffHiRes}
and later confirmed by the Pierre Auger Observatory~\cite{cutoff}.
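As a rough order-of-magnitude check (with standard particle and CMB parameters inserted here for illustration),
the threshold for photo-pion production through the $\Delta$ resonance, $p+\gamma_{CMB}\to\Delta^+\to N+\pi$,
in a head-on collision with a CMB photon of mean energy $E_\gamma\simeq 6\times10^{-4}$ eV is
$$
E_p^{th}\simeq\frac{m_\Delta^2-m_p^2}{4E_\gamma}\approx\frac{(1232^2-938^2)\times10^{12}~\mathrm{eV}^2}{4\times 6\times10^{-4}~\mathrm{eV}}\approx 3\times10^{20}~\mathrm{eV},
$$
and scattering off the high energy tail of the Planck distribution brings the effective suppression down to the observed cut-off region of a few times $10^{19}$ eV.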
As such, the corresponding GZK neutrinos,
a necessary by-product of the GZK process,
are almost guaranteed to exist.
Nevertheless, none of these have been observed so far.
Detection of the GZK neutrinos would provide critical information
for unraveling the mystery of the origin and evolution of cosmic accelerators,
and will be one of the most exciting prospects in the coming decade~\cite{pisin_whitepaper}.
A promising way of detecting UHE neutrinos is the radio approach.
When an ultra-high energy cosmic neutrino interacts with ordinary matter on Earth,
it produces hadronic debris via either a charged-current or a neutral-current weak interaction.
The former also produces a lepton with corresponding flavor.
Both the high energy leptons and hadronic debris induce particle showers.
As proposed by Askaryan in the 1960's~\cite{Askaryan},
the high energy particle shower developed in a dense medium would have net negative charges.
This charge imbalance appears as a result of the knocked-off electrons being part of the shower,
as well as positrons in the shower annihilating with electrons of the medium.
The net charge of the shower, typically $20\%$ of the total number of shower particles,
serves as a source that emits Cherenkov radiation as it travels in the medium.
The sizes of the showers are quite localized (tens of cm in radial and a few meters in longitudinal development)
compared to those developed in the air (km scale),
and therefore result in coherent radiation for wavelengths longer than the shower size.
The corresponding coherent frequencies turn out to lie in the radio band, from hundreds of MHz to a few GHz.
This Askaryan effect has been confirmed in a series of experiments at the Stanford Linear Accelerator Center (SLAC),
where different dense media such as silica sand, rock salt and ice were used~\cite{aska1, aska2, aska3, aska4}.
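As a rough numerical illustration of this coherence condition (ours, not from the original experiments), the short Python sketch below estimates the frequency up to which the emission stays coherent, assuming full coherence for in-medium wavelengths longer than the quoted lateral shower size; the 10-cm value and the ice index of refraction are representative choices.
\begin{verbatim}
# Rough estimate of the frequency up to which Askaryan emission is coherent,
# assuming coherence for in-medium wavelengths longer than the lateral size.
C = 3.0e8       # vacuum speed of light [m/s]
N_ICE = 1.78    # index of refraction of ice (value adopted later in the text)

def coherence_frequency(size_m, n=N_ICE):
    """Frequency [Hz] whose in-medium wavelength equals the given size."""
    return C / (n * size_m)

print(coherence_frequency(0.1) / 1e9)   # ~1.7 GHz for a ~10 cm lateral spread
\end{verbatim}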
Various experiments have been proposed based on the idea of Askaryan:
the balloon-borne antenna array (e.g. ANITA~\cite{anita1, anita2}),
the ground-based antenna array buried in salt (e.g. SalSA~\cite{salsa}) or ice (e.g. RICE~\cite{rice}),
and the radio telescope searching for lunar signals (e.g. LUNASKA~\cite{lunaska}).
As ground-based experiments have the advantage of low noise and low energy threshold,
they play an important role in the next-generation experiments
aimed at detecting some tens of GZK neutrinos per year.
However, for the extremely high energy neutrino event, especially for the electron neutrino,
it is very likely that the longitudinal size of the shower would become comparable to the distance between the antenna and the shower.
Under this situation, the common assumption of far-field radiation does not apply and near-field effects become nonnegligible.
In this paper, we study the impact of the near-field effect on the radiation pattern via a numerical method.
The organization of this paper is as follows:
Section II will discuss the underlying physics of the shower elongation.
Section III will introduce the numerical method and our setup.
Section IV will present the implementation of parallel computing used to achieve satisfactory efficiency in our numerical calculation.
In Section V, we analyse the results in both time-domain and frequency-domain.
We discuss the features of the near-field radiation pattern
and compare the far-field pattern between our results and the theoretical ones for validation.
\section{Coherent Cherenkov Pulses}
In the radio detection experiment,
signals come from the Cherenkov radiation of the net charge in the shower.
The key concept which makes this technique possible is coherent emission.
In fact, Cherenkov radiation is a broad band emission and the intensity increases as frequency increases.
For a single charged particle,
the Cherenkov signal in the radio band should be the weakest in the spectrum.
The compact size of the shower is what makes the radio signal so special.
Coherent emission greatly enhances the signal strength in the radio band.
The electric field and its associated vector potential of the Cherenkov radiation can be calculated by solving inhomogeneous Maxwell equations,
as demonstrated in the paper of Zas, Halzen and Stanev~\cite{ZHS}.
The vector potential can be obtained by the Green's function method:
\small{
\begin{equation}
\vec{A}_{\omega}(\vec{x})
= \frac{1}{4 \pi \epsilon_0 {\rm c}^2}
\int\!\!d^{3}\!x'\,\frac{\exp{(ik|\vec{x}-\vec{x}\,'|)}}
{|\vec{x}-\vec{x}\,'|}\int\!\!dt'\,\exp{(i\omega t')}
\vec{J}(t',\vec{x}\,'),
\label{eq:GreenMethod}
\end{equation}
}
\normalsize
where $k$ is the wavenumber,
$\vec{x}$ is the position of the detector,
$\vec{x}\,'$ is the position of the shower particles
and $\vec{J}$ is the current source.
Adopting the Fraunhofer approximation
\small{
\begin{equation}
\frac{ \exp{(ik|\vec{x}-\vec{x}\,'|)} }{|\vec{x}-\vec{x}\,'|}
\approx \frac{ \exp{(ik|\vec{x}|-i k\hat{x}\cdot\vec{x}\,')} }{R} ~,
\label{eq:fraun_appr}
\end{equation}
}
\normalsize
where $R$ is the absolute value of $\vec{x}$,
the integration in Eq.~(\ref{eq:GreenMethod}) can be greatly simplified, which
enhances the computational efficiency of Monte Carlo simulations~\cite{ZHS, AZ_1d, AZ_unified, AZ_thinned}.
The validity of this approximation relies on several length scales:
the detection distance ($R$), the spatial size of the shower ($l$) and the wavelengths of interest ($\lambda$).
The Fraunhofer approximation works well under the condition
\small{
\begin{equation}
\frac{l^2}{\lambda R}\,\sin^2\theta \ll 1,
\label{eq:farfield_condition}
\end{equation}
}
\normalsize
where $\theta$ is the angle between the shower axis and the observational direction.
However, for ultra-high energy showers, the longitudinal development is longer,
especially for electromagnetic showers that suffer from Landau-Pomeranchuk-Migdal (LPM) suppression~\cite{LPM, lpm_klein}.
Electromagnetic showers can be produced by the leptons generated in charged-current interactions.
For an electromagnetic shower of EeV-scale energy,
the impact of LPM effect on shower development has been investigated by Monte Carlo simulations~\cite{Niess, Bolmont, AZ_thinned}.
Electromagnetic showers of primary energy $10^{20}$ eV can extend to about 200 m in length, with large fluctuations.
In such cases, the far-field condition cannot be satisfied for distances up to several kilometers.
Meanwhile, the typical detection distance for ground array detectors is about 1 km, as dictated by the attenuation length of radio signals in ice.
In addition,
the separation between detection stations in the proposed layout is also about 1 km,
which makes the possible detection distance even shorter.
Under these circumstances, the Fraunhofer approximation is invalid
and one has to deal with the complicated integration in Eq.~(\ref{eq:GreenMethod}).
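To make the breakdown quantitative, the following minimal Python sketch (an illustration on our part, not a piece of the original simulation chain) evaluates the left-hand side of Eq.~(\ref{eq:farfield_condition}) for a 200-m LPM-elongated shower viewed at the Cherenkov angle from 1 km; the 300 MHz frequency is a representative choice.
\begin{verbatim}
import math

C, N_ICE = 3.0e8, 1.78            # vacuum speed of light [m/s], index of ice

def fraunhofer_parameter(l_m, freq_hz, R_m, theta_rad):
    """l^2 sin^2(theta)/(lambda R); must be << 1 for the far-field formula."""
    lam = C / (N_ICE * freq_hz)   # in-medium wavelength
    return l_m**2 * math.sin(theta_rad)**2 / (lam * R_m)

theta_c = math.acos(1.0 / N_ICE)  # Cherenkov angle in ice
print(fraunhofer_parameter(200.0, 300e6, 1000.0, theta_c))  # ~49, far from << 1
\end{verbatim}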
In the paper of Buniy and Ralston~\cite{BR},
the correction has been made by the saddle-point approximation,
which deals with Fresnel zone radiation where the Fraunhoffer condition fails.
However, it still cannot cope with the extreme case $R \sim l$, i.e., near-field radiation,
which is an unavoidable concern for showers hundreds of meters long.
We address this problem with a numerical method based on first principles,
so that the near-field radiation can be obtained efficiently.
Figure (\ref{fig:near-field sweeping zone}) illustrates the relation among the three different radiation zones.
The Cherenkov radiation can be viewed as an analogy with the diffraction problem in optics.
Although the radiation in near-field is confined within the Cherenkov direction where the diffraction has yet to occur,
it does have a finite width which is at least as wide as the longitudinal development of the shower.
After reaching far-field,
the radiation starts to diffract out of the Cherenkov angle.
While the far-field approximation is able to correctly deal with the diffraction phenomenon in far-field,
it fails to describe the situation in near-field.
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{near_field_sweeping_zone_grey.pdf}
\caption{A cartoon picture that illustrates the diffraction phenomenon in three different radiation zones. In near-field there is little diffraction, but the radiation sweeping zone is as wide as the longitudinal development of the shower. In Fresnel zone the diffraction starts to take effect and part of the radiation leaks out of the Cherenkov angle.}
\label{fig:near-field sweeping zone}
\end{center}
\end{figure}
Hadronic showers, on the other hand, are less affected by the LPM effect~\cite{alvarez98, acorne},
since the sources of electromagnetic components in hadronic showers are the decay of neutral pions,
which tend to interact with the medium instead of decaying at energies above 6.7 PeV.
The far-field condition is well fulfilled for hadronic showers in most practical cases.
\section{Numerical Method}
Numerical algorithms for calculating electromagnetic fields have been in existence for decades.
However, it was not until the recent rapid growth of computational power that this approach became widely adopted.
Among all the existing algorithms,
the finite-difference time-domain (FDTD hereafter) method has several advantages:
\begin{itemize}
\item
It is exceptionally simple to implement in computer programs.
\item
The time-domain approach is well suited to impulse signals;
a broad-band impulse can be calculated in one single run.
\item
The algorithm itself is inherently parallel and efficiency can be largely improved via parallel computing.
\end{itemize}
The idea of FDTD was first proposed by Yee in the 1960's~\cite{Yee},
and has been in use for many years for electromagnetic impulse modeling.
Like most numerical finite difference methods,
space is discretized into small cells and the fields are calculated on each cell by solving the Maxwell equations.
Adopting a special lattice arrangement (known as the Yee lattice),
the E-field and H-field are staggered in both space and time and can be calculated in a leapfrog time-marching way.
In the remainder of this section, we review the algorithm of the FDTD method and our setup.
\subsection{Algorithm}
The Maxwell curl equations in differential forms are
\small{
\begin{equation}
\nabla \times \mathbf{E} = -\mu \frac{\partial \mathbf{H}}{\partial t} ~,
\end{equation}
\begin{equation}
\nabla \times \mathbf{H} = \epsilon \frac{\partial \mathbf{E}}{\partial t} + \sigma \mathbf{E} ~.
\end{equation}
}
\normalsize
The FDTD method approximates derivatives by finite differences.
The central difference is adopted to achieve 2nd order accuracy in both spatial and temporal derivatives.
We assume cylindrical symmetry along the shower axis (defined as z-axis),
and therefore all the derivatives with respect to $\phi$ vanish.
In addition, due to the polarization property of Cherenkov radiations,
the $H_r, H_z, E_\phi$ components also vanish.
This saves a large amount of computer memory and calculation time.
Figure~(\ref{fig:cylind_grid}) shows the configuration of the lattice under these assumptions.
Maxwell equations in a cylindrical coordinate then reduce to:
\small{
\begin{eqnarray}
\frac{\partial E_r}{\partial z} - \frac{\partial E_z}{\partial r} = -\mu \frac{\partial H_\phi}{\partial t}, \\
-\frac{\partial H_\phi}{\partial z} = \epsilon \frac{\partial E_r}{\partial t}, \\
\frac{1}{r} \frac{\partial (rH_\phi)}{\partial r} = \epsilon \frac{\partial E_z}{\partial t}.
\end{eqnarray}
}
\normalsize
where we have set $\sigma = 0$, assuming lossless medium.
The above equations can be discretized in both space and time
with a spatial interval of $\Delta r$ and $\Delta z$ in the r and z direction respectively,
and with a time interval of $\Delta t$.
For example, the discretized field for $E_r(r = i \Delta r, ~z = j \Delta z, ~t = n\Delta t)$ is denoted as $E_r^n(i, j)$.
Replacing derivatives with finite differences, we have
\small{
\begin{eqnarray}
&&H_\phi^{n+1/2}(i+1/2, j+1/2) = H_\phi^{n-1/2}(i+1/2, j+1/2) \nonumber \\
&&+ \frac{\Delta t}{\mu \Delta r}[ E_z^n(i+1, j+1/2) - E_z^n(i, j+1/2) ] \nonumber \\
&&- \frac{\Delta t}{\mu \Delta z}[ E_r^n(i+1/2, j+1) - E_r^n(i+1/2, j) ] \label{eq:fdtd_Hphi}, \\
\nonumber \\
&&E_r^{n+1}(i+1/2, j) = E_r^{n}(i+1/2, j) \nonumber \\
&&- \frac{\Delta t}{\epsilon \Delta z}[ H_\phi^{n+1/2}(i+1/2, j+1/2) \nonumber \\
&&- H_\phi^{n+1/2}(i+1/2, j-1/2) ] \label{eq:fdtd_Er}, \\
\nonumber \\
&&E_z^{n+1}(i, j+1/2) = E_z^{n}(i, j+1/2) \nonumber \\
&&+ \frac{\Delta t}{\epsilon r_i \Delta r}[ r_{i+1/2}H_\phi^{n+1/2}(i+1/2, j+1/2) \nonumber \\
&&- r_{i-1/2}H_\phi^{n+1/2}(i-1/2, j+1/2) ] \label{eq:fdtd_Ez}.
\end{eqnarray}
}
\normalsize
Iteratively, we can use Eq.~\ref{eq:fdtd_Hphi} to update the H-field,
and then use Eq.~\ref{eq:fdtd_Er} and Eq.~\ref{eq:fdtd_Ez} to update the E-field, and so on.
The relation between the spatial and temporal grid sizes has been chosen as
$\Delta r = \Delta z = \delta$, ${\rm c_{ice}}\Delta t = \delta/2$ such that numerical stability is satisfied.
Numerical dispersion can be kept at an acceptable level if the grid size is chosen fine enough~\cite{Courant, Taflove}.
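For concreteness, a heavily simplified Python/NumPy transcription of the leapfrog updates in Eqs.~(\ref{eq:fdtd_Hphi})--(\ref{eq:fdtd_Ez}) is sketched below. It is meant only to show the structure of the scheme (field staggering, update order, the $1/r$ factors); the toy grid size, the absence of the shower current source and of absorbing boundaries, and the material constants are our simplifications, not the production setup.
\begin{verbatim}
import numpy as np

eps0, mu0, n_ice = 8.854e-12, 4e-7 * np.pi, 1.78
eps, mu = eps0 * n_ice**2, mu0
Nr, Nz, delta = 256, 256, 0.05            # toy grid, dr = dz = delta [m]
dt = delta / (2.0 * 3.0e8 / n_ice)        # c_ice * dt = delta / 2

Er = np.zeros((Nr, Nz))                   # E_r   at (i+1/2, j)
Ez = np.zeros((Nr, Nz))                   # E_z   at (i,     j+1/2)
Hp = np.zeros((Nr, Nz))                   # H_phi at (i+1/2, j+1/2)
r_half = (np.arange(Nr) + 0.5) * delta    # radii of the E_r / H_phi rings
r_int  = np.arange(Nr) * delta            # radii of the E_z rings

def step(Er, Ez, Hp):
    """One leapfrog step; a real run adds the shower current source term
    and absorbing boundary conditions at the outer edges of the grid."""
    Hp[:-1, :-1] += (dt / (mu * delta)) * (
        (Ez[1:, :-1] - Ez[:-1, :-1]) - (Er[:-1, 1:] - Er[:-1, :-1]))
    Er[:, 1:] -= (dt / (eps * delta)) * (Hp[:, 1:] - Hp[:, :-1])
    Ez[1:, :] += (dt / (eps * delta)) * (
        r_half[1:, None] * Hp[1:, :] - r_half[:-1, None] * Hp[:-1, :]
    ) / r_int[1:, None]

for _ in range(10):                       # march the fields forward in time
    step(Er, Ez, Hp)
\end{verbatim}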
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{cylind_grid_grey.pdf}
\caption{The adopted lattice configuration.}
\label{fig:cylind_grid}
\end{center}
\end{figure}
\subsection{Simulation Setup}
In order not to lose focus on the RF calculation,
we simply assume a shower model for the longitudinal development rather than invoking Monte Carlo packages.
For electromagnetic showers,
the well known Nishimura-Kamata-Greisen (NKG) parameterization formula~\cite{NKG} that describes the number of shower particles reads
\small{
\begin{equation}
N_e(X) = \frac{0.31}{ \sqrt{\ln{E_0/E_c}} } \exp \left[\left(1-\frac{3}{2}\ln{s}\right)\frac{X}{X_R}\right],
\label{eq:nkg_formula}
\end{equation}
}
\normalsize
where $E_0$ is the energy of the primary particle and $E_c$ is the critical energy,
$X$ is the slant depth, $X_{max}$ the depth where the shower reaches its maximum number,
and $s$ is the (dimensionless) shower age defined as $s = 3X/(X+X_{max})$.
The NKG formula can be fit by a Gaussian distribution as:
\small{
\begin{equation}
N_e(z) = N_{{\rm max}} \exp(-\frac{z^2}{2l^2}),
\label{eq:showerModel}
\end{equation}
}
\normalsize
where $N_{{\rm max}}$ is the particle number at shower maximum
and $l$ is the longitudinal shower length.
For charge distribution in a snapshot,
we assume Gaussian distribution in both radial ($r$) and longitudinal ($z$) directions:
\small{
\begin{equation}
n(z,r) = \frac{N_e}{(2\pi)^{1.5}\sigma_z\sigma_r^2}
\exp \left(-\frac{(z-X)^2}{2\sigma_z^2}\right)
\exp \left(-\frac{r^2}{2\sigma_r^2}\right),
\end{equation}
}
\normalsize
where $z$ and $r$ are the cylindrical coordinates in units of g/cm$^{2}$,
and $\sigma_z$ and $\sigma_r$ represent the standard deviation of the distribution in $z$ and $r$ directions, respectively.
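A minimal Python sketch of this source model is given below, mainly to make explicit how $N_e(z)$ and the Gaussian snapshot profile $n(z,r)$ would be sampled onto the numerical grid; the numerical values in the usage line ($N_{\rm max}$, $\sigma_z$, the evaluation point) are arbitrary placeholders, and any overall charge normalization is left out, as in the text.
\begin{verbatim}
import numpy as np

def n_e_longitudinal(z, n_max, l):
    """Particle number versus longitudinal position (Gaussian fit of NKG)."""
    return n_max * np.exp(-z**2 / (2.0 * l**2))

def snapshot_density(z, r, n_e, z_front, sigma_z, sigma_r):
    """Gaussian snapshot profile n(z, r) around the instantaneous front."""
    norm = (2.0 * np.pi)**1.5 * sigma_z * sigma_r**2
    return (n_e / norm
            * np.exp(-(z - z_front)**2 / (2.0 * sigma_z**2))
            * np.exp(-r**2 / (2.0 * sigma_r**2)))

# Example: density 0.5 m off-axis when the shower front is 5 m past the maximum.
rho = snapshot_density(z=5.0, r=0.5,
                       n_e=n_e_longitudinal(5.0, n_max=1.0e9, l=20.0),
                       z_front=5.0, sigma_z=0.1, sigma_r=1.0)
\end{verbatim}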
The $\sigma_z$ is generally smaller than $\sigma_r$ and thus has no significant effect on the radiation pattern.
We adjust different relations among $R$, $l$, $\sigma_r$ and $\theta$ to produce radiation in different zones.
The value of $E_0$ only affects the overall normalization magnitude and is not our focus,
so we simply set $E_0 = 10^{19}$ eV throughout the study to obtain a reasonable scale of magnitude.
More realistic magnitudes can be obtained from Monte Carlo simulations of the shower.
In the same spirit, we do not separate the number of net charged particles from the total number of charged particles (the ratio is normally about 0.2) in our model.
Our main focus is on the impact of these parameters on the shape of the Cherenkov radiation in both time-domain (the waveform) and frequency-domain (the spectrum).
The medium we use is ice, and the corresponding parameters are set accordingly.
That is, the critical energy $E_c$ is 73 MeV, the density of ice is 0.92 g/cm$^3$,
and the index of refraction is 1.78.
The lateral spread $\sigma_r$ is set to 1 m throughout the study for numerical reasons.
This is about ten times larger than the actual Moliere radius of the shower.
To avoid unmanageable computing times, our numerical choice of the Moliere radius is dictated by the resolution of the FDTD method that we use. This compromise, however,
does not affect the physics we are discussing.
A more realistic situation should again rely on shower Monte Carlo simulations,
which would give the correct lateral size of showers,
and then use the FDTD method with a finer resolution to obtain the radiation pattern.
Of course, finer resolution requires more computing time and resources.
In principle, one can apply a scaling to each quantity in the FDTD method.
The scaling relations between the grid size and the physical quantities are as follows: if the grid size is reduced by a factor of $K$, then the time scale should also decrease by a factor of $K$, while the frequency, the charge current, and the electric field should all increase by a factor of $K$.
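A compact restatement of this scaling (an illustrative Python helper of ours, not taken from the original code) is:
\begin{verbatim}
def rescale_fdtd(q, k):
    """Rescale FDTD quantities when the grid size is reduced by a factor k:
    grid size and time scale shrink by k; frequency, charge current and
    electric field grow by k (the relations quoted in the text)."""
    return {"grid_size": q["grid_size"] / k,
            "time_step": q["time_step"] / k,
            "frequency": q["frequency"] * k,
            "current":   q["current"] * k,
            "e_field":   q["e_field"] * k}
\end{verbatim}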
Figure~(\ref{fig:setup}) is a cartoon that depicts our setup.
The shower travels in the $+z$ direction, emitting Cherenkov radiation.
We define a spherical coordinate system whose origin lies at the intersection of the $z$ axis and the shower maximum.
Desired detector positions are chosen by varying both the detection angle ($\theta$) and the detection distance ($R$),
and the electric field values are recorded as time evolves.
After a single run, we obtain all the simulation results we need.
Here one can see the merit of using a time-domain calculation method.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6.5cm]{setup_grey.pdf}
\caption{Schematic diagram for the simulation set up.}
\label{fig:setup}
\end{center}
\end{figure}
\subsection{Performance Improvement via the GPU Parallelization}
Graphics processing units (GPUs) have become increasingly important in the field of high-performance computation.
Sparked by the needs of computer game markets, GPUs are currently advancing under booming developments.
They have become programmable via the Compute Unified Device Architecture (CUDA) provided by NVIDIA,
and have been adopted in various scientific areas such as molecular dynamics, gravitational N-body simulations and lattice QCD
~\cite{MD, sapporo, cudanbody, gamer, lattQCD}.
CUDA is an extension of the C language, one of the most popular high-level languages in the world.
Programmers who are familiar with C language can utilize GPU computations by simply calling the functions from CUDA.
General parallelization strategies can be found in the CUDA Programming Guide available on their webpage.
Because of their many-core architecture,
GPUs are ideal for implementing parallel algorithms.
The FDTD method is therefore an ideal candidate to benefit from GPU computing.
With the help of GPU parallelization,
we have obtained a great improvement in computing efficiency, which would otherwise be so low as to make the FDTD method impractical.
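The original implementation uses CUDA; purely to illustrate the one-thread-per-cell mapping (this is our sketch written with Numba's CUDA bindings for Python, not the kernel actually used), the $H_\phi$ update could be expressed as:
\begin{verbatim}
from numba import cuda

@cuda.jit
def update_hphi(Hp, Er, Ez, ch):
    # One GPU thread per (r, z) cell; cells of a given field have no mutual
    # data dependence within a half time step, so they update concurrently.
    i, j = cuda.grid(2)
    if i < Hp.shape[0] - 1 and j < Hp.shape[1] - 1:
        Hp[i, j] += ch * ((Ez[i + 1, j] - Ez[i, j]) - (Er[i, j + 1] - Er[i, j]))

# Illustrative launch (requires a CUDA-capable GPU); Hp, Er, Ez are 2-D arrays
# and ch = dt / (mu * delta) as in the update equations:
#   threads = (16, 16)
#   blocks  = ((Nr + 15) // 16, (Nz + 15) // 16)
#   update_hphi[blocks, threads](Hp, Er, Ez, ch)
\end{verbatim}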
We run our code on an NVIDIA GTX285 graphics card provided by
the Center for Quantum Science and Engineering of National Taiwan University (CQSE).
Table~(\ref{tab:benchmark}) shows the improvement from GPU acceleration using an NVIDIA GTX285 graphics card,
compared to an Intel Core i7 920 at 2.66 GHz (sequential code without CPU parallelization).
The computation is sped up by a factor of up to about 250,
depending on the total number of grids.
This is a great efficiency improvement, from more than a day to only ten minutes.
\begin{table}[h]
\begin{center}
\begin{tabular}{ l | c c c c } \hline \hline
$N_{\rm{grid}}$ &$t_{\rm{CPU}}$ &$t_{\rm{GPU}}$ &speed up &GPU performance\\
&(sec) &(sec) &(${t_{\rm{CPU}}}/{t_{\rm{GPU}}}$) &(GFLOPS)\\
\hline
$256^2$ &5.15 &0.040 &128.75 &57.0 \\
$512^2$ &34.94 &0.186 &187.85 &98.1\\
$1024^2$ &294.41 &1.167 &252.28 &125.1\\
$2048^2$ &2122.80 &8.372 &253.56 &139.5\\
$4096^2$ &15751.17 &67.617 &232.95 &138.2\\
$8192^2$ &120617.48 &681.082 &177.11 &109.8\\
\hline \hline
\end{tabular}
\end{center}
\caption{Performance of CPU and GPU codes.}
\renewcommand{\arraystretch}{1.5}
\label{tab:benchmark}
\end{table}
\section{Results}
\subsection{At the Cherenkov Angle}
We first focus on the radiation field right at the Cherenkov angle $\theta_c$.
The magnitude of the E-field, $E_\omega$, of course, decreases as the detection distance $R$ increases.
However, the scaling relation for such a decrease behaves quite differently between the near-field and the far-field regimes.
Figure (\ref{fig:near_match}) and Figure (\ref{fig:far_match}) show the spectrum of E-field at different distances $R$ with different scalings.
The longitudinal length $l$ is assumed to be 20 m,
which is roughly the length of a shower with $10^{18}$ eV primary energy.
For $R$ = 50 m to 100 m, Figure (\ref{fig:near_match}) suggests that the magnitude of the E-field goes like $1/\sqrt{R}$ over the whole frequency range.
On the other hand, for $R$ = 100 m to 200 m, the lower frequency part of the spectrum decreases faster than $1/\sqrt{R}$.
In fact, as Figure (\ref{fig:far_match}) implies,
the lower frequency part goes like $1/R$.
This phenomenon has a simple physical interpretation.
It is a known fact that high frequency waves are less likely to diffract than low frequency ones.
As a result, the high frequency components of the Cherenkov radiation are more confined in the $\theta$-direction and their energy can only spread over the Cherenkov cone,
which makes the energy fall off like $1/R$ for purely geometrical reasons.
The fields thus go like $1/\sqrt{R}$, the behavior of a cylindrical wave.
For the low frequency components,
diffraction opens up another direction (the $\theta$-direction) for their energy to disperse,
which therefore makes them decrease faster.
As $R$ becomes large enough to satisfy the Fraunhofer limit~\cite{AZ_unified, AZ_thinned}, the radiation source can be viewed as a point,
which leads to a spherical wave behavior where energy goes like $1/R^2$.
In consequence, the scaling between $E_\omega$ and $R$ over the whole frequency range again follows a simple relation: $E_\omega \propto 1/R$.
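The scaling statements above can be read off directly from pairs of simulated spectra; a small bookkeeping sketch of ours (Python, assuming the spectra are stored as NumPy arrays on a common frequency grid) is:
\begin{verbatim}
import numpy as np

def scaling_exponent(E1, E2, R1, R2):
    """Per-frequency exponent p in |E_omega| ~ R^(-p) from spectra at R1, R2:
    p ~ 0.5 indicates cylindrical (near-field) behavior,
    p ~ 1.0 indicates spherical (far-field) behavior."""
    return np.log(np.abs(E1) / np.abs(E2)) / np.log(R2 / R1)
\end{verbatim}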
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm]{spec_cylin_R_50_100_200_a_20_ang_0.pdf}
\caption{$\sqrt{R}|E_\omega|$ spectra at $R$ = 50 m, 100 m and 200 m (from top to bottom), for $l$ = 20 m and $\sigma_r$ = 1 m.
These three spectra match well in the high frequency regime,
implying a cylindrical behavior ($E_\omega$ $\propto$ $1/\sqrt{R}$).}
\label{fig:near_match}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{spec_spher_R_50_100_200_a_20_ang_0.pdf}
\caption{$R|E_\omega|$ spectra at $R$ = 200 m, 100 m and 50 m (from top to bottom), for $l$ = 20 m and $\sigma_r$ = 1 m.
The spectra at $R$ = 100 m and 200 m match well in the low frequency regime,
implying a spherical behavior ($E_\omega$ $\propto$ $1/R$).}
\label{fig:far_match}
\end{center}
\end{figure}
In the Fresnel zone, the scaling behavior is a smooth transition from cylindrical to spherical.
Since components of different frequencies reach the Fraunhofer condition at different $R$,
the shape of the spectrum changes within the Fresnel zone.
To be specific, the higher frequency part meets Eq.(\ref{eq:farfield_condition}) at larger $R$,
and thus decreases more slowly in the Fresnel zone.
The peak of the spectrum, as a result,
migrates to higher frequency as $R$ moves further away.
This phenomenon can also be observed in time domain.
Figure~(\ref{fig:waveform_200_to_700m}) shows the waveform from the near-field to the far-field.
There are two interesting features for the waveform.
First, the waveform in the near-field ($R$ = 200 m) has a small tail in its rear part,
which gradually vanishes as $R$ increases.
Second, the waveform in the near-field is an asymmetric bipolar pulse in contrast to the symmetric bipolar pulse in the far-field.
These two features can be understood in a qualitative sense.
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{ang_0_a_50_R_200_to_700.pdf}
\caption{Time domain waveform at $R$ = 200, 300, 400, 500, 600, 700 m (from left to right), for $l$ = 50 m and $\sigma_r$ = 1 m. The tail part in the near-field is clear. Notice the transition of the asymmetric waveform in the near-field to the symmetric waveform in the far-field. }
\label{fig:waveform_200_to_700m}
\end{center}
\end{figure}
Figure (\ref{fig:first_light}) depicts a geometrical relation of Cherenkov radiation for a single charged particle emitted from different points.
Suppose an observer is at point $a$ and the shower travels along the z-axis,
where $z_0$ is the point viewed at the Cherenkov angle,
and $z_+$ ($z_-$) is any point to the right (left) of $z_0$.
Compare the arrival times of signals emitted from point $z_0$ and $z_+$.
When the charged particle passes $z_0$, it emits a signal.
Later on, as the shower passes $z_+$, it emits another signal.
The previous signal, in the meantime, has reached point $b$.
Therefore with the simple triangular relation,
it should be clear that any signal emitted from a point to the right of $z_0$ arrives at the observer later than the signal emitted from $z_0$.
This is commonplace: the signal emitted first would end up arriving first.
However, the situation would be reversed on the other side.
When the shower passes $z_0$ and emits a signal,
the signal emitted from $z_-$ has only traveled a distance equal to that between $z_-$ and $c$, which means the signal has not even reached point $d$.
Therefore, a signal emitted from a point to the left of $z_0$ also arrives at the observer later than the signal emitted from $z_0$.
It may seem counter-intuitive that a signal emitted later ends up arriving earlier.
The crux of this unusual behavior is that the charged particle in ice travels faster than electromagnetic wave does.
The signal emitted from $z_0$ is thus the first signal during the whole process.
With this in mind, we are now able to describe near-field radiation qualitatively.
When the first signal from $z_0$ arrives at the observer,
it generates a sudden rise of the scalar and vector potentials.
The rise can be expected to be relatively strong for two reasons.
First, $z_0$ is at the shower maximum, where the source is strongest.
Second, signals coming from the vicinity of $z_0$ are all squeezed into a very short time interval and arrive at almost the same instant.
This "squeezing effect" can also be found in the case of the ordinary Lienard-Wiechert potential in the vacuum, where the most intensified direction points towards the traveling direction.
After the rise, the potential would experience a smoother decrease,
which results from both sides of $z_0$, where the signals are not squeezed and the number of charged particles is smaller.
The corresponding E-field is just the derivative of the potential,
so the waveform should be a sharp peak coming first, followed by a smooth tail of opposite polarity.
A longer longitudinal length would generate a smoother tail.
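The geometrical argument can be checked numerically. The short Python sketch below computes, for an observer at distance $R$ and angle $\theta$ from the shower maximum, the arrival time of the signal emitted at each point $z_e$ along the axis, assuming the shower front moves at essentially the vacuum speed of light while the signal propagates at $c/n$; for $\theta=\theta_c$ the minimum indeed sits at $z_e=0$, i.e., the first signal comes from $z_0$.
\begin{verbatim}
import numpy as np

C, N_ICE = 3.0e8, 1.78

def arrival_time(z_e, R, theta):
    """Arrival time of the signal emitted at axial position z_e:
    emission at time z_e / c (front ~ vacuum speed of light), then
    propagation at c / n from (r=0, z=z_e) to (R sin(theta), R cos(theta))."""
    obs_r, obs_z = R * np.sin(theta), R * np.cos(theta)
    return z_e / C + np.hypot(obs_r, obs_z - z_e) * N_ICE / C

theta_c = np.arccos(1.0 / N_ICE)
z = np.linspace(-100.0, 100.0, 2001)       # emission points along the axis [m]
t = arrival_time(z, R=300.0, theta=theta_c)
print(z[np.argmin(t)])                     # ~0: the first signal comes from z_0
\end{verbatim}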
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{first_light.pdf}
\caption{The geometrical relation of Cherenkov radiation for a single charged particle emitted from different points. The charged particle travels along z-axis and the observer is located at point $a$.}
\label{fig:first_light}
\end{center}
\end{figure}
In fact, the sharp peak is actually a delta function since the sudden rise in potential is represented by a step function.
A delta function waveform is unphysical and only appears in the one dimensional approximation which neglects the lateral distribution.
In real cases, the lateral distribution of the shower plays a part in interference and smooths the delta function.
Namely, the lateral distribution, based on its size, filters out high frequency components of the spectrum via destructive interference.
As the detection distance moves further away,
signals from the whole range of longitudinal development arrive at the same time.
In consequence, the potential decreases drastically after the sudden rise.
For $R$ large enough (Fraunhofer limit),
the potential vanishes as fast as its rise,
and thus resembles a delta function.
Therefore, its derivative leads to a bipolar waveform symmetric in time domain.
This is the reason why the asymmetric waveform in the near-field gradually transforms into a symmetric one as $R$ increases.
The peak shift of the spectrum can be understood as the longitudinal development gradually losing its impact on the waveform as $R$ increases.
In this case,
the characteristic frequency of the waveform is determined uniquely by the interference of the lateral distribution.
\subsection{Angular Distribution}
We now start to investigate the radiation for a fixed $R$ at different angles.
Figure (\ref{fig:spec_fresnel}) shows the near-field spectrum for $R$ = 300 m, $l$ = 20 m and $\sigma_r$ = 1 m.
We compared the result of the FDTD method with that of the saddle-point approach and found them to be in good agreement.
The peak frequency is highest at the Cherenkov angle and decreases as $\theta$ moves away from $\theta_c$, which is the main feature of diffraction.
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{spec_R_300_a_20_rm_1_ang_0_5_10_20_grey.pdf}
\caption{Fresnel zone $E_\omega$ spectra at $\theta$ = $\theta_c$, $\theta_c$ + 5$^{\circ}$, $\theta_c$ + 10$^{\circ}$ and $\theta_c$ + 20$^{\circ}$ (from top to bottom), for $R$ = 300m, $l$ = 20 m and $\sigma_r$ = 1 m. The solid curves are our results and the dashed curves are obtained by the saddle-point approach. Results from both methods are in good agreement. The peak frequency is highest at the Cherenkov angle.}
\label{fig:spec_fresnel}
\end{center}
\end{figure}
Figure (\ref{fig:spec_near}) shows the near-field spectrum at different angles for $R$ = 300 m, $l$ = 100 m and $\sigma_r$ = 1 m.
The disagreement between the two methods is obvious since the saddle-point approach is not suitable in the near-field regime.
An interesting feature is that at different angles the peak frequency does not change much.
The reason is simple:
in the near-field the detection distance is too short for the diffraction to take effect,
and the radiation only exists within the area swept by the longitudinal development along $\theta_c$.
The peak magnitude at different angles reflects the longitudinal development.
As the radiation travels into the Fresnel zone or even Fraunhofer zone,
the diffraction effect gradually appears and part of radiation (especially lower frequency components) leaks out of the Cherenkov angle,
as illustrated in Figure~(\ref{fig:near-field sweeping zone}).
One thing worth noticing is that the peak frequency at $\theta_c$ in Figure (\ref{fig:spec_near}) is lower than that in Figure (\ref{fig:spec_fresnel}).
It is an example of how the longitudinal length affects the characteristic frequency at the Cherenkov angle.
We can expect a long tail in the waveform in such a case.
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{spec_R_300_a_100_rm_1_grey.pdf}
\caption{Near-field $E_\omega$ spectra at $\theta$ = $\theta_c$, $\theta_c$ + 10$^{\circ}$ and $\theta_c$ + 20$^{\circ}$ (from top to bottom), for $R$ = 300m, $l$ = 100 m and $\sigma_r$ = 1 m. The solid curves are our results and the dashed curves are obtained by the saddle-point approach, which is not suitable in the near-field. The peak frequency is almost a constant at different angles. }
\label{fig:spec_near}
\end{center}
\end{figure}
We now turn to the angular distribution for components of different frequencies.
Figure (\ref{fig:ang_fresnel}) shows the angular distribution in Fresnel zone for $\omega$ = 50, 100 and 200 MHz respectively for $R$ = 300 m, $l$ = 20 m and $\sigma_r $ = 1 m.
The diffraction pattern is clearly shown:
higher frequency components have a narrower angular distribution.
All three curves are symmetric on both sides of the Cherenkov angle.
In near-field, the situation would be quite different.
Figure (\ref{fig:ang_near}) shows the angular distribution in the near-field for $\omega$ = 20, 50 and 100 MHz, respectively, for $R$ = 300 m, $l$ = 100 m and $\sigma_r $ = 1 m.
Radiation is stronger at $\theta < \theta_c$ than on the other side
because that region lies closer to the shower axis and therefore receives a larger signal; the far-field approximation, which replaces $|\vec{x}-\vec{x}\,'|$ with a constant $R$ in the denominator, neglects this effect.
For a large enough $R$,
since the whole longitudinal development can be viewed as located at the same point,
this difference becomes negligible,
resulting in a symmetric angular distribution.
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{angular_R_300_a_20_rm_1_Hz_50_100_200.pdf}
\caption{Fresnel zone angular distribution for $\omega$ = 200, 100, 50 MHz (from top to bottom) with $R$ = 300 m, $l$ = 20 m and $\sigma_r $ = 1 m. We redefine $\theta$ = 0$^{\circ}$ as the Cherenkov angle for clarity. The angular distribution is symmetric at both sides. The diffraction pattern is presented clearly.}
\label{fig:ang_fresnel}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{angular_R_300_a_100_rm_1_Hz_20_50_100.pdf}
\caption{Near-field angular distribution for $\omega$ = 100, 50, 20 MHz (from top to bottom) with $R$ = 300 m, $l$ = 100 m and $\sigma_r $ = 1 m. We redefine $\theta$ = 0$^{\circ}$ as the Cherenkov angle for clarity. It shows an asymmetric angular distribution at both sides.}
\label{fig:ang_near}
\end{center}
\end{figure}
The time domain waveforms also reveal some interesting properties.
Figure (\ref{fig:waveform_near}) shows the waveform in near-field at different angles.
As mentioned above,
the near-field waveform is asymmetric in time.
The characteristic frequency can be inferred from the duration of the pulse.
The duration of the pulse changes very little with angle,
as we have already seen in Figure~(\ref{fig:spec_near}).
One intriguing phenomenon is the arrival-time difference among the waveforms.
This subtlety is due to the fact that the near-field wavefront is a straight line in the r-z plane instead of the arc of the Fraunhofer limit.
Consequently, the radiation arrives last for the observer at $\theta_c$, as can be seen in Figure (\ref{fig:waveform_near}).
In Fresnel zone,
as shown in Figure (\ref{fig:waveform_fresnel}),
the arrival time difference vanishes if we take the zero-crossing instant as the arrival time.
The waveform becomes symmetric at $\theta_c$ while it remains asymmetric at other angles.
As mentioned earlier,
the symmetric waveform results from the ``squeezing effect'',
which leads to a rapid decay of the potential after the arrival of the first signal.
For $\theta$ away from $\theta_c$,
the most squeezed region ($z_0$) corresponds to an area where either the shower has not yet started to develop ($\theta > \theta_c$) or the shower development is already over ($\theta < \theta_c$).
The signals emitted from the whole shower in this case would arrive in order,
either ordinary ($\theta > \theta_c$) or reversed ($\theta < \theta_c$),
which leads to a smooth decay of the potential and thus an asymmetric waveform.
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{waveform_R_300_a_100_rm_1_ang_0_5_10_grey.pdf}
\caption{Waveform in near-field at different angles, for $R$ = 300m, $l$ = 100 m and $\sigma_r$ = 1 m. The solid curve is at $\theta$ = $\theta_c$, dashed curve is at $\theta$ = $\theta_c$ + 5$^{\circ}$ and dashed-dotted curve is at $\theta$ = $\theta_c$ + 10$^{\circ}$. The asymmetric waveforms are the striking feature of near-field radiation. Notice the arrival-time difference of the three waveforms. }
\label{fig:waveform_near}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm, height=6cm ]{waveform_R_300_a_20_rm_1_ang_0_5_10_grey.pdf}
\caption{Waveform in Fresnel zone at different angles, for $R$ = 300m, $l$ = 20 m and $\sigma_r$ = 1 m. The solid curve is at $\theta$ = $\theta_c$, dashed curve is at $\theta$ = $\theta_c$ + 5$^{\circ}$ and dashed-dotted curve is at $\theta$ = $\theta_c$ + 10$^{\circ}$. The waveform becomes symmetric at $\theta_c$ and remains asymmetric at $\theta_c$ + 5$^{\circ}$ and $\theta_c$ + 10$^{\circ}$. }
\label{fig:waveform_fresnel}
\end{center}
\end{figure}
\section{Summary and Conclusions}
We have studied the near-field radiation in both time domain and frequency domain.
We have shown that even for a shower with a symmetric longitudinal development (e.g. a Gaussian distribution),
the resulting near-field waveform would be asymmetric in time.
The longitudinal development is as important as the lateral distribution even at the Cherenkov angle.
Future work on the parameterization of near-field radiation should take this into account.
As the radiation propagates,
the waveform would gradually become symmetric.
Moreover,
this transition occurs at different $R$ for different $\theta$.
To be specific, it occurs at shortest distance for the observer located at the Cherenkov angle.
For a ground array neutrino detector,
as the size of LPM-elongated showers becomes comparable to the typical detection distance,
the near-field effect is an indispensable factor.
The correct distance dependence in the near-field prevents underestimating the signal strength in a Monte Carlo simulation.
Furthermore, the correct angular spread in near-field is necessary in order not to underestimate the detection solid angle of a neutrino detector.
The Fraunhofer approximation leads to an angular spread that is too narrow for LPM-elongated showers.
It is incorrect because a shower hundreds of meters long generates radiation that also spans hundreds of meters in the near-field.
The overall detector sensitivity should therefore be better than that estimated by adopting the traditional Fraunhofer-limit radiation formula in the Monte Carlo simulation.
We plan to derive a parameterization formula suitable for all cases in future work.
On the other hand,
due to the complicated features of near-field radiation,
new reconstruction methods are required.
For example, the normal way to reconstruct the direction of incoming Cherenkov pulses is by the arrival time differences between antennas.
This method treats the wavefront as spherical and is based on the far-field assumption, which, as we have shown, fails in the near-field.
Charged-current interactions of electron neutrinos are the main source of ultra-high energy electromagnetic showers.
At typical detection distances,
the near-field condition should be easily satisfied.
Identification of such radiation therefore implies electron neutrino events.
This opens an opportunity for neutrino flavor identification,
since muon and tau neutrinos only induce hadronic showers, which are too compact to produce near-field radiation at typical detection distances.
\section{Acknowledgements}
We thank Ting-Wai Chiu for the help in GPU-calculations.
This research is supported by the Taiwan National Science Council(NSC) under Project No. NSC98-2811-M-002-501, No. NSC98-2119-M-002-001,
the Center for Quantum Science and Engineering of National Taiwan University(NTU-CQSE) under Nos. 98R0066-65, 98R0066-69,
and the US Department of Energy under Contract No. DE-AC03-76SF00515.
We would also like to thank the Leung Center for Cosmology and Particle Astrophysics and the Taiwan National Center for Theoretical Sciences for their support.
\section{Introduction}
\label{sec:intro}
\vspace*{-3mm}
While
QCD
was established
as a fundamental theory of the strong interaction a few decades ago,
its realization in hadron physics has not been understood
completely. For instance,
(apparent) absence of
``exotic''
states, which are different
from ordinary $q\bar{q}$ mesons and $qqq$ baryons,
has been a long standing problem.
Therefore, the announcement\cite{nakano} of the
discovery of $\Theta^+$ (1540),
whose minimal
configuration is $uudd\bar{s}$, was quite striking.
For the current experimental status,
we refer to Ref.\cite{Nakano-ykis}.
In this report, we review the theoretical effort
to search for the $\Theta^+$ pentaquark state.
The main issue here is whether QCD favors
its existence or not, and the determination of possible
quantum numbers
for the
pentaquark families (if any).
In particular, in order to understand the
narrow width of $\Theta^+$ observed in the experiment,
it is crucial to determine the
spin and parity
directly from QCD.
For this purpose, we employ two frameworks,
the QCD sum rule and the lattice QCD,
where
both
allow the nonperturbative QCD
calculation without models, and have achieved a great success
for
ordinary mesons/baryons.
Note, however, that neither of them is infallible,
and we consider them
as
complementary
to each other.
For instance, the lattice simulation
cannot be performed in a completely realistic setup,
i.e., there often exist artifacts stemming from
discretization error,
finite volume,
heavy u,d-quark masses
and the neglect of dynamical quark effects (quenching), etc.
On the other hand,
the sum rule can be constructed in a realistic setting,
and
is free from such lattice artifacts.
Unfortunately, it
suffers from another type of artifact.
Because a sum rule yields only the dispersion integral of
spectrum,
an interpretive model function has to be assumed
phenomenologically.
Compared to the ordinary hadron analyses,
this procedure may weaken the predictability for the
experimentally uncertain system, such as pentaquarks.
Another artifact in the sum rule is the OPE truncation:
one has to evaluate whether
the OPE convergence is sufficient or not.
We also comment on the important issue
common to both of the methods.
Recall that the decay channel
$\Theta^+ \rightarrow N+K$ is open experimentally.
Considering also that
both methods calculate a two-point
correlator
and search for a pentaquark signal in it,
it is essential to develop a framework which
can distinguish the
pentaquark
from the NK state in the correlator.
In the subsequent sections, we examine
the literature and
see how the above-described issues
have been resolved
or
remain unresolved.
\vspace*{-2mm}
\section{The QCD Sum Rule Work}
\label{sec:qsr}
\vspace*{-2mm}
More than ten sum rule analyses for $\Theta^+$ spectroscopy
exist for $J=1/2$
\cite{Zhu,matheus,SDO,kek-penta,Lee1,Lee2,eidemuller,higher-dim,HJ.Lee,kojo}.
The first parity projected sum rule
was studied by us\cite{SDO} for $I=0$.
The positivity of the pole residue
in the spectral function is proposed as a
signature of the pentaquark signal.
This is a superior criterion to the
consistency check of
predicted/experimental masses,
because it is difficult to achieve the mass prediction
within 100 MeV ($\sim [m(\Theta^+) - m(NK)]$) accuracy.
We also propose the diquark exotic current
$
J_{5q} =\epsilon^{abc}\epsilon^{def}\epsilon^{cfg}
(u_a^T C d_b) (u_d^T C \gamma_5 d_e) C\bar{s}_g^T ,
$
in order to suppress the
NK state contamination.
The OPE is calculated up to dimension 6, checking
that the highest dimensional contribution is reasonably small.
We obtain a possible signal in negative parity.
In the treatment of the NK state,
an improvement is proposed in Ref.\cite{Lee1}.
There, NK contamination is evaluated
using the soft-Kaon theorem.
Note here that the NK contamination calculated by
two-hadron reducible (2HR) diagrams
in the OPE level\cite{kek-penta}
is invalid because
what has to be calculated is the 2HR part at the hadronic level,
not at the
QCD (OPE)
level.
The reanalysis\cite{Lee1} of the sum rule
up to dimension 6 shows that
the subtraction of the NK state
does not change the result of Ref.\cite{SDO}.
Yet, as described in Sec.\ref{sec:intro},
the above sum rules
may suffer from the OPE truncation artifact.
In fact,
the explicit calculation
up to higher dimensions
has shown that this is indeed the case\cite{higher-dim,HJ.Lee,kojo}.
Here,
we refer to the elaborated work
in Ref.\cite{kojo}.
They calculate the OPE for $I(J^P)=0(1/2^\pm)$
up to dimension $D=15$.
It is shown that
the terms with $D>6$ are important as well,
while further high dimensional terms
$D > 15$
are not significant.
Another idea
in Ref.\cite{kojo} is
the use of the combination of two independent pentaquark sum rules.
In fact, the proper combination is found to
suppress the
continuum contamination drastically,
which corresponds to reducing the uncertainty in
the phenomenological model function.
Examining the positivity of
the pole residue,
they conclude the pentaquark exists
in positive parity.
Does the result\cite{kojo} definitely predict the $J^P=1/2^+$ pentaquark?
At this moment, we conservatively point out remaining issues.
The first problem is still the NK contamination.
While such contamination is expected to be partly suppressed
through the continuum suppression,
it is possible that
the obtained signal corresponds to just scattering states.
In this point, Ref.\cite{kojo} argues that the signal
has different dependence on the parameter $\braket{\bar{q}q}$
from the NK state.
We, however, consider
this discussion uncertain,
because $\braket{\bar{q}q}$ is not a free parameter
independent of other condensates.
For further study, the explicit estimate in the soft-Kaon limit\cite{Lee1}
is an interesting check, but the calculation up to high dimensions
has not been worked out yet.
Second issue is related to the OPE.
In the evaluation of the high dimensional condensates,
one has to rely on
the vacuum saturation approximation in practice,
while
the uncertainty originating from this
procedure
is not known.
Furthermore, there exists an issue
for the validity of the OPE itself when considering the sum rule
with high dimensionality.
In fact,
rough
analysis of the gluonic condensates
shows\cite{SVZ} that the {\it nonperturbative} OPE may break down
around $D \gtrsim 11$--$16$.
One may have to consider this effect as well,
through, for instance, the instanton picture\cite{HJ.Lee}.
So far, we have reviewed $J=1/2$ sum rules.
While there are $J=3/2$ works\cite{kek-3/2,zhu-3/2},
it is likely that
they suffer from slow OPE convergence.
Further progress is awaited.
\vspace*{-2mm}
\section{The Lattice QCD Work}
\label{sec:lat}
\vspace*{-2mm}
There are a dozen quenched lattice
calculations\cite{csikor12,chiu,kentucky,ishii12,lasscock12,
csikor122,alexandrou12,rabbit,holland,lasscock32,ishii32,negele,hagen}:
some of them\cite{csikor12,chiu,rabbit,lasscock32}
report
the positive signal,
while
others\cite{kentucky,ishii12,lasscock12,csikor122,ishii32,holland}
report null results.
This apparent inconsistency, however, can be understood
in a unified way, by taking a closer look at
the ``interpretation'' of the numerical results
and the remaining lattice artifacts.
As discussed in Sec.\ref{sec:intro},
the question is how to identify the pentaquark signal
in the correlator, because the correlator
at large Euclidean time
is dominated by the ground state,
the NK scattering state.
To address this point, we developed a new method in Refs.~\cite{ishii12,ishii32}.
Intuitively, this method makes use of the fact that
a scattering state is sensitive
to the spatial boundary condition (BC),
while a compact one-particle state is expected to be insensitive.
Practically, we calculate the correlator under
two spatial BCs:
(1)
periodic BC (PBC) for all u,d,s-quarks,
(2) hybrid BC (HBC) where anti-periodic BC for u,d-quarks
and periodic BC for s-quark.
The consequences are as follows.
In PBC,
all of $\Theta^+$, N, K
are subject to periodic BC.
In HBC, while
$\Theta^+(uudd\bar{s})$ remains subject to periodic BC,
N($uud$,$udd$) and K($\bar{s}d$,$\bar{s}u$) are
subject to anti-periodic BC.
Therefore, the energy of the NK state will shift
in going from PBC to HBC due to the momenta of N and K,
while there is no energy shift for $\Theta^+$.
(Recall that the momentum is quantized on lattice
as $2\vec{n}\pi/L$ for periodic BC
and $(2\vec{n}+1)\pi/L$ for anti-periodic BC,
with spatial lattice extent $L$ and $\vec{n} \in \mbox{\sf Z\zr{-0.45}Z}^3$.)
In this way, the different behavior between NK and $\Theta^+$
can be used to identify whether the signal
is NK or $\Theta^+$.
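As a rough quantitative illustration of the expected shift (ours; it uses the physical N and K masses and an illustrative spatial extent of about 2 fm, whereas the actual simulation is performed at heavier quark masses), the lowest s-wave N+K energy under the two boundary conditions can be estimated as follows.
\begin{verbatim}
import numpy as np

HBARC = 0.1973  # GeV fm

def nk_threshold(L_fm, m_N=0.939, m_K=0.494, antiperiodic=True):
    """Lowest s-wave N+K energy [GeV] in a box of spatial extent L.
    Periodic BC: minimum relative momentum 0.
    Hybrid BC (anti-periodic u,d): minimum momentum (pi/L, pi/L, pi/L)."""
    p2 = 3.0 * (np.pi * HBARC / L_fm)**2 if antiperiodic else 0.0
    return np.sqrt(m_N**2 + p2) + np.sqrt(m_K**2 + p2)

L = 2.2  # fm, illustrative spatial extent
print(nk_threshold(L, antiperiodic=False))  # ~1.43 GeV (PBC threshold)
print(nk_threshold(L, antiperiodic=True))   # ~1.75 GeV: shifted well upward
\end{verbatim}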
We simulate on an anisotropic lattice,
$\beta=5.75$, $V=12^3\times 96$, $a_\sigma/a_\tau=4$,
with the clover fermion action.
The conclusion is:
(1) the signal in $1/2^-$ is found to be s-wave NK
from the HBC analysis. No pentaquark is found up to $\sim$ 200 MeV
above the NK threshold.
(2) the $1/2^+$ state is too massive ($>$ 2 GeV)
to be identified as $\Theta^+(1540)$.
In comparison with other lattice results,
we introduce another powerful method\cite{kentucky}
to distinguish $\Theta^+$ from NK.
This method makes use of that the volume dependence
of the spectral weight
behaves as ${\cal O}(1)$ for one-particle state,
and as ${\cal O}(1/L^3)$ for two-particle state.
Intuitively, the latter factor ${\cal O}(1/L^3)$
can be understood as the encounter probability of the two particles.
The calculation\cite{kentucky} of the spectral weight from
$16^3\times 28$ and $12^3 \times 28$ lattices
reveals that
the ground states of both the $1/2^\pm$ channels
are not the pentaquark, but the scattering states.
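The expected volume scaling is simple enough to state as a one-line check (an illustration of ours, using the two lattice extents quoted above):
\begin{verbatim}
def weight_ratio(L_small, L_large):
    """Expected ratio W(L_large)/W(L_small) of the spectral weight:
    ~1 for a one-particle state, ~(L_small/L_large)^3 for two particles."""
    return {"one_particle": 1.0, "two_particle": (L_small / L_large)**3}

print(weight_ratio(12, 16))   # two-particle expectation ~ (12/16)^3 = 0.42
\end{verbatim}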
Further analysis is performed in Ref.\cite{rabbit}.
There, the 1st excited state in $1/2^-$
is extracted with a $2\times 2$ variational method.
The volume dependence of the spectral weight
indicates that the 1st excited state
is not a scattering state
but a pentaquark state.
This is consistent with Ref.\cite{negele},
where a $19\times 19$ variational method is used to extract the excited states.
Note here that this result is {\it consistent}
with the HBC analysis\cite{ishii12}.
In fact, the HBC analysis excludes a pentaquark up to $\sim$ 200 MeV above the
threshold, while the resonance observed in Ref.\cite{rabbit} lies
200-300 MeV above the threshold.
The question is whether the observed resonance is really
$\Theta^+$, which experimentally lies 100 MeV above the threshold.
To address this question, explicit simulation is necessary
at physically small quark mass without quenching.
In particular, a small quark mass would be important,
considering that Refs.\cite{ishii12,rabbit} are simulated at rather heavy quark masses
and are expected to suffer from large uncertainties in the chiral extrapolation.
Finally, we discuss the $J^P=3/2^\pm$ lattice results.
We performed a
comprehensive study\cite{ishii32} with three different operators
and conclude that
all the lattice signals are too massive ($>$ 2 GeV) for $\Theta^+$,
and are identified as not pentaquarks but scattering states
from the HBC analysis.
On the other hand,
Ref.\cite{lasscock32} claims that
a pentaquark candidate is found in $3/2^+$.
We, however, observe that the latter result
is contaminated by significantly large statistical noise,
which makes it quite uncertain.
Note also that their criterion
to distinguish $\Theta^+$
from scattering states
is based on a rather limited
argument
compared to the HBC analysis.
\vspace*{-2mm}
\section{Conclusions}
\label{sec:conclusion}
\vspace*{-2mm}
We have examined both of the QCD sum rule and lattice QCD works.
In the sum rule, progress in the OPE calculation and in continuum suppression
has led to stable
analyses,
while the subtraction of the NK contamination remains a critical issue.
In lattice QCD, a framework which distinguishes
the pentaquark from the NK state has been successfully established.
In order to resolve the {\it superficial} inconsistency among the
lattice predictions,
the calculation at small quark mass without quenching
is highly desirable.
\vspace*{-2mm}
\section*{Acknowledgements}
\vspace*{-2mm}
This work is completed in collaboration with
Drs.
H.Iida,
N.Ishii,
Y.Nemoto,
M.Oka,
F.Okiharu,
H.Suganuma
and
J.Sugiyama.
T.D. is supported
by Special Postdoctoral Research Program of RIKEN
and by U.S. DOE grant DE-FG05-84ER40154.
\section{Outliers in Abundance Ratio Trends}
Extremely metal poor (EMP) stars were presumably among the first
stars formed in the Galaxy, and hence represent in effect
a local high-redshift population. Such stars
provide important clues to the chemical
history of our Galaxy, the role and type of early SN, the
mode of star formation in the proto-Milky Way, and the formation
of the Galactic halo. \cite{beers05} compiled the small sample of EMP
stars known as of 2005.
The goal of our 0Z Project is to increase this sample
substantially.
Our sample selection is based on mining the database of the Hamburg/ESO Survey
\citep{wis00} for candidate EMP stars with
[Fe/H] $< -3.0$~dex \citep{christlieb03}.
Our abundance determination procedures are described in
\cite{cohen04}. The
determination of stellar parameters, measurement of equivalent widths, and detailed
abundance analyses were all carried out by J.~Cohen.
Our data in general follow the well-established trends between abundance ratios [X/Fe] and overall
metallicity, as measured by [Fe/H], found in numerous studies of Galactic
halo stars \citep[see e.g.][]{cayrel_04,cohen04}.
The interesting question is whether in the low metallicity
regime studied here we can detect
the effect of only a small number of SN contributing to a star's chemical
inventory or inhomogeneous mixing within the ISM at these early stages of formation of the Galaxy.
Thus the size of the scatter around these trends and whether
there are major outliers is of great interest.
After all the abundance analyses were completed, we
looked for {\it{strong}} outliers, either
high or low, in
plots of [X/Fe] vs [Fe/H]. We checked these in detail.
The Ca abundance turned out to be problematical
in those very C-rich stars whose spectra were obtained
prior to the HIRES detector upgrade in mid-2004 and thus
included only a limited wavelength range. In
an effort to derive Ca abundances from these early HIRES spectra, we ended up using
lines which were crowded/blended, presumably by molecular
features. This was only realized
fairly recently when we obtained additional C-star HIRES spectra extending
out to 8000~\AA\ which covered key isolated
Ca~I lines in the 6160~\AA\ region. We found much lower Ca abundances from the additional
Ca lines in these carbon stars. Our
earlier claims in \cite{cohen06} of high Ca/Fe for some C-rich stars
are not correct.
These abundance analyses were carried out
over a period of a decade, and some updates were made in J.~Cohen's
master list of adopted $gf$ values during that period. The
next step, completed in Dec 2011 after the conference,
was to homogenize the $gf$ values.
\section{Linear Fits to Abundance Ratios}
Fig. 1 shows [Ca/Fe] vs [Fe/H] for our sample.
Linear fits to the abundance ratios vs [Fe/H] where there is adequate data
for the species X were calculated, excluding
C-rich stars and a small number of {\it{strong}} outliers.
An example of these fits is shown
for Ca in
Fig~2, where in the lower panel the included stars are shown together
with the fit (thick solid line) and the fit $\pm$0.15~dex (dashed lines).
Note that the fit for [Ca/Fe] vs [Fe/H] is constant, [Ca/Fe] = 0.26~dex. In the
upper panel the histogram of deviations from the linear fit is shown.
Although a number of very deviant low outliers were excluded, an
assymetric distribution of $\delta$([Ca/Fe]) still remains, suggesting the presence
of a small tail of stars with low [Ca/Fe], though not so extreme
that the stars were rejected as strong outliers. This is also apparent
in the lower panel, where there are four stars with [Ca/Fe] $< 0$;
these values were not low enough for them to be rejected as strong outliers.
Current work focuses on determining whether the dispersion about these fits
is larger than the expected uncertainties.
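The fitting procedure itself is standard. A schematic Python version (not the code actually used, and with arbitrary clipping thresholds) that fits [X/Fe] versus [Fe/H] while iteratively rejecting strong outliers could look like:
\begin{verbatim}
import numpy as np

def clipped_linear_fit(feh, xfe, n_sigma=3.0, n_iter=5):
    """Least-squares line xfe = a*feh + b, iteratively dropping outliers."""
    keep = np.ones_like(feh, dtype=bool)
    for _ in range(n_iter):
        a, b = np.polyfit(feh[keep], xfe[keep], 1)
        resid = xfe - (a * feh + b)
        keep = np.abs(resid) < n_sigma * np.std(resid[keep])
    return a, b, keep
\end{verbatim}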
There are six stars which are very deviant low outliers in [Ba/Fe]. One of these
has [Ba/H] below that of Draco~119, the star with the
lowest Ba abundance previously known \citep{draco119}.
\acknowledgements
We are very grateful
to the Palomar, Las Campanas, and Keck time allocation committees for
their long-term support of
this effort.
J.~Cohen acknowledges partial support from NSF grants
AST--0507219 and AST--0908139. I.~Thompson acknowledges partial
support from NSF AST-0507325.
We are grateful to the many people
who have worked to make the Keck Telescopes and their instruments,
and the Magellan Telescopes and their instruments,
a reality and to operate and maintain these observatories.
\section{Introduction}
The standard model (SM) of particle physics is so successful that all
the experiments searching for signs of physics beyond the SM have yielded negative
results, and the SM has been tested to a precision of $10^{-3}$. Yet,
the neutrino oscillation experiments have accumulated enough evidence that
the neutrinos do have masses. Massive neutrinos are thus the only
formally established evidence beyond the SM. Although there are some
other observations which also point to physics beyond the SM, such as
the existence of dark matter, the accelerated expansion of the Universe, and
the matter-antimatter asymmetry, they are not as convincing
as the massive neutrinos.
Extensions or modifications of the SM are often put forward to explain
the neutrino masses and their oscillation patterns. The most celebrated
one is the see-saw mechanism with the introduction of heavy right-handed
(RH) neutrinos at the mass scale of $10^{11-12}$ GeV \cite{see-saw}.
There are variations in the see-saw type models with TeV RH neutrinos
\cite{tev-see-saw}. The advantage of TeV see-saw models is that they
can be tested in the LHC experiments \cite{lhc-see-saw}.
Another type of neutrino mass model is based on loop diagrams, in which
the small neutrino mass is naturally obtained from the loop suppression factor.
Some classic examples are the one-loop Zee model \cite{zee} and
Ma model \cite{ma}, two-loop Zee-Babu model~\cite{babu},
three-loop Krauss-Nasri-Trodden model \cite{knt}, etc.
Often in this type of model, some ad hoc symmetries are introduced to
forbid unwanted contributions, or the see-saw contributions if there
are RH neutrinos in the model.
Recently, there was a $2.6\sigma$ anomaly in lepton-universality violation
measured in the ratio
$R_K \equiv B(B\to K\mu\mu)/B(B\to K ee) = 0.745 ^{+0.090}_{-0.074} \pm 0.036$
by LHCb \cite{lhcb-2014}. Also, sizable deviations from the SM prediction
were recorded in angular distributions of $B \to K^*\mu\mu$ \cite{lhcb-2013}.
The results can be accounted for by a large negative
contribution to the Wilson coefficient $C_9$ of the semileptonic operator
$O_9$, and also contributions to other Wilson coefficients, in particular to
$C'_9$ {\cite{Descotes-Genon:2015uva, Hiller:2014yaa, Hiller:2016kry}}.
Here we propose a simple extension of the SM
with the introduction of a
color-triplet $SU(2)_L$-doublet scalar boson $\eta$ and
a color-antitriplet $SU(2)_L$-triplet scalar boson
$\Delta$ without assuming
further discrete or gauge symmetries. The $SU(3)_C$, $SU(2)_L$, $U(1)_Y$
quantum numbers of the new fields are summarized in Table~\ref{tab:1}.
We shall show that this model can successfully explain the neutrino masses
and the oscillation pattern, as well as solving the anomalies
in $b \to s \ell \bar \ell$ with additional contributions to
$C_{9,10}$ and $C'_{9,10}$, and at the same time
satisfying all the existing constraints of lepton-flavor violations (LFV),
flavor-changing neutral currents (FCNC), and $S,T,U$ parameters.
Furthermore, the masses of $\eta$ and $\Delta$ bosons are in the
TeV scale, and so can be tested in the Drell-Yan process and
lepton-flavor violating production, and also directly in the pair
production via the $\ell\ell jj$ final state. This is the main
result of the work.
This paper is organized as follows.
In Sec.~II, we review the model and describe the constraints.
In Sec.~III, we analyze numerically the parameter space so as to
solve the anomaly in $b\to s \ell\ell$, and calculate the cross sections
for lepton-flavor violating production.
We conclude in Sec.~IV.
\section{Model setup and Constraints}
\begin{table}[t!]
\begin{tabular}{|c||c|c|}
\hline\hline
& ~$\eta$~ & ~$\Delta$ \\\hline
$SU(3)_C$ & $\bm{3}$ & $\bar{\bm3}$ \\\hline
$SU(2)_L$ & $\bm{2}$ & $\bm{3}$ \\\hline
$U(1)_Y$ & $\frac16$ & $\frac13$ \\\hline
\end{tabular}
\caption{\small
Charge assignments of the new fields $\eta$ and $\Delta$
under $SU(3)_C\times SU(2)_L\times U(1)_Y$.}
\label{tab:1}
\end{table}
The new field contents and their charges are shown in Table~\ref{tab:1},
in which
the color-triplet $\eta$ is an $SU(2)_L$ doublet with $1/6$ hypercharge,
while the color-antitriplet $\Delta$ is an $SU(2)_L$ triplet with
$1/3$ hypercharge.
The relevant Lagrangian for the interactions of the $\eta$ and $\Delta$ with
fermions and the Higgs field is given by
\begin{align}
-\mathcal{L}_{Y}
&=
f_{ij} \overline{d_{R_i}} \tilde \eta^\dag L_{L_j} + g_{ij} \overline{ Q^c_{L_i}}
(i\sigma_2) \Delta L_{L_j}
-\mu\Phi^\dag \Delta \eta+ {\rm h.c.},
\label{Eq:lag-flavor}
\end{align}
where $(i,j)=1-3$ are generation indices,
$\tilde\eta\equiv i\sigma_2\eta^*$, $\sigma_2$ is the second
Pauli matrix, and $\Phi$ is the SM Higgs field that develops a
nonzero vacuum expectation value (VEV), which is symbolized by
$\langle\Phi\rangle\equiv v/\sqrt2$.
We work in the basis where all the coefficients are real and positive
for simplicity.
The scalar fields can be parameterized as
\begin{align}
&\Phi =\left[
\begin{array}{c}
w^+\\
\frac{v+\phi+iz}{\sqrt2}
\end{array}\right],\quad
\eta =\left[
\begin{array}{c}
\eta_{2/3}\\
\eta_{-1/3}
\end{array}\right],\quad
\Delta =\left[
\begin{array}{cc}
\frac{\delta_{1/3}}{\sqrt2} & \delta_{4/3} \\
\delta_{-2/3} & -\frac{\delta_{1/3}}{\sqrt2}
\end{array}\right],
\label{component}
\end{align}
where the subscripts of the fields denote the electric charges,
$v \approx 246$ GeV, and $w^\pm$ and $z$ are the
Nambu-Goldstone bosons, which are absorbed as the
longitudinal components of the $W$ and $Z$ bosons, respectively.
Due to the $\mu$ term in Eq.~(\ref{Eq:lag-flavor}), the charged components
with $1/3$ and $2/3$ electric charges mix, such that
their mixing matrices and mass eigenstates are defined as follows:
\begin{align}
&\left[\begin{array}{c} \eta_{i/3} \\ \delta_{i/3} \end{array}\right] =
O_i \left[\begin{array}{c} A_i \\ B_i \end{array}\right],\quad
O_i\equiv
\left[\begin{array}{cc} c_{\alpha_i} & s_{\alpha_i} \\
-s_{\alpha_i} & c_{\alpha_i} \end{array}\right], \quad (i=1,2),
\end{align}
where their masses are denoted as $m_{A_i}$ and $m_{B_i}$ respectively.
The interactions in terms of the mass eigenstates can be written as
\begin{align}
& -\mathcal{L}_{Y}\approx
f_{ij} \overline{ d_{R_i}} \nu_{L_j} (c_{\alpha_1} A_1 +s_{\alpha_1} B_1)
-\frac{g_{ij}}{\sqrt2} \overline{ d_{L_i}^c} \nu_{L_j} (-s_{\alpha_1} A_1 +
c_{\alpha_1} B_1)
\label{eq:neut}
\\
&-
f_{ij} \overline{d_{R_i}} \ell_{L_j} (c_{\alpha_2} A_2 +s_{\alpha_2} B_2)
-
\frac{g_{ij}}{\sqrt2} \overline{u_{L_i}^c} \ell_{L_j} (-s_{\alpha_1} A_1 +
c_{\alpha_1} B_1)
\label{eq:lfvs-1}\\
&
-
{g_{ij}} \overline{d_{L_i}^c} \ell_{L_j}\delta_{4/3} \;
+ \;
{g_{ij}} \overline{u_{L_i}^c} \nu_{L_j} (-s_{\alpha_2} A_{2}^* + c_{\alpha_2} B_{2}^*) .
\label{eq:lfvs-2}
\end{align}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=80mm]{VS.eps}
\caption{ One-loop diagrams for estimating the constraint from vacuum stability.}
\label{fig:VS}
\end{center}
\end{figure}
{\it Vacuum stability}: Since we have charged components
such as $\eta_{1/3,2/3}$ and $\delta_{1/3,2/3,4/3}$,
we have to prevent their quartic couplings from becoming negative by
requiring that the negative one-loop contribution induced by the
$\mu$ term be smaller than the tree-level coupling.
{Estimating the one-loop diagrams in Fig.~\ref{fig:VS}}, these conditions are respectively given by
\begin{align}
\frac{\mu^4}{2(4\pi)^2}\int\frac{dxdy\delta(x+y-1)xy}
{(x m^2_\Phi+ym_\delta^2)^2}\lesssim \lambda_\eta^{\rm tree},\quad
\frac{\mu^4}{2(4\pi)^2}\int\frac{dxdy\delta(x+y-1)xy}
{(x m^2_\Phi+ym_\eta^2)^2}\lesssim \lambda_\delta^{\rm tree},
\end{align}
where $m_{\eta/\delta}$ are the bare masses in the potential.
{Now we estimate the typical upper bound on $\mu$, assuming $m_{\eta/\delta}\approx1$ TeV, as suggested by the collider bounds discussed later, and restricting $\lambda_{\eta/\delta}^{\rm tree}\lesssim4\pi$. Under these assumptions one obtains $|\mu|\lesssim 6.4$ TeV, which gives almost no constraint on a TeV-scale model.
Thus, we do not need to worry about the stability condition.
}
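The quoted bound can be checked with a short numerical estimate; in the sketch below we assume $m_\Phi\approx125$ GeV and $m_{\eta/\delta}\approx1$ TeV and saturate $\lambda^{\rm tree}$ at $4\pi$, so the precise number depends on these choices.
\begin{verbatim}
# Rough numerical cross-check of |mu| <~ 6.4 TeV (a sketch; assumes m_Phi ~ 125 GeV,
# m_eta ~ m_delta ~ 1 TeV and lambda^tree saturated at 4*pi).
import numpy as np
from scipy.integrate import quad

m_phi, m_x = 125.0, 1000.0            # GeV
lam_tree = 4.0 * np.pi                # perturbativity ceiling on the quartic coupling

# Feynman-parameter integral  I = int_0^1 dx  x(1-x) / (x m_phi^2 + (1-x) m_x^2)^2
I, _ = quad(lambda x: x*(1.0 - x) / (x*m_phi**2 + (1.0 - x)*m_x**2)**2, 0.0, 1.0)

# Stability condition  mu^4 I / (2 (4 pi)^2) <~ lambda^tree  =>  upper bound on |mu|
mu_max = (2.0 * (4.0*np.pi)**2 * lam_tree / I)**0.25
print(f"|mu| <~ {mu_max/1e3:.1f} TeV")   # of order 6 TeV, consistent with the text
\end{verbatim}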
\begin{figure}[tb]
\begin{center}
\includegraphics[width=80mm]{neutrino.eps}
\caption{ One-loop diagrams for generating the neutrino mass matrix.}
\label{fig:neutrino}
\end{center}
\end{figure}
\subsection{Neutrino mixing}
The dominant contribution to the active neutrino mass matrix $m_\nu$
is generated at the one-loop level through the interactions in
Eq.~(\ref{eq:neut}), as illustrated in Fig.~\ref{fig:neutrino}, and is given by
\begin{align}
&(m_{\nu})_{ab}
=
\frac{N_c s_{\alpha_1}c_{\alpha_1} }{2(4\pi)^2}\left[1-\frac{m^2_{A_1}}{m^2_{B_1}}\right]
\sum_{i=1}^3
\left[g^T_{bi} m_{d_i} f_{ia}+ f_{ai} m_{d_i} g_{ib}^T \right] F_I(r_{A_1}, r_{m_{d_i}}),\\
&F_I(r_1, r_2)
=
\frac{r_1(r_2-1)\ln r_1 - r_2(r_1-1)\ln r_2}{(r_1-1)(r_2-1)(r_1-r_2)},\quad (r_1\neq 1),
\end{align}
where $N_c=3$ is the color factor and we define $r_{f}\equiv (m_{f}/m_{B_i})^2$.
$({m}_\nu)_{ab}$ is diagonalized by the Pontecorvo-Maki-Nakagawa-Sakata
mixing matrix $V_{\rm MNS}$ (PMNS)~\cite{Maki:1962mu} as
$
({m}_\nu)_{ab} =(V_{\rm MNS} D_\nu V_{\rm MNS}^T)_{ab}$ with $D_\nu\equiv
{\rm diag}(m_{\nu_1},m_{\nu_2},m_{\nu_3})$, where we use the
data in the global analysis~\cite{Forero:2014bxa}.
Then one can parameterize as
\begin{align}
& g^T R f\equiv \frac12\left[V_{\rm MNS} D_\nu V_{\rm MNS}^T+A\right],\quad
R\equiv\frac{N_c s_{\alpha_1}c_{\alpha_1} }{2(4\pi)^2}\left[1-\frac{m^2_{A_1}}{m^2_{B_1}}\right]
\sum_{i=1}^3 m_{d_i} F_I(r_{A_1}, r_{m_{d_i}}),
\end{align}
where $A$ is an arbitrary antisymmetric matrix with complex values.
Finally, we derive the following relations~\cite{Nomura:2016pgg}:
\begin{align}
g
&=\frac12 (V_{\rm MNS}^* D_\nu V_{\rm MNS}^\dag+ A) f^{-1}R^{-1},
\ {\rm or}\
f=\frac12 R^{-1} (g^T)^{-1}(V_{\rm MNS}^* D_\nu V_{\rm MNS}^\dag+ A).
\end{align}
In the numerical analysis, we shall use the former relation for
convenience.
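A minimal numerical sketch of how the former relation could be used to reconstruct $g$ for a given $f$ is shown below; the PMNS angles, neutrino and down-quark masses, and the sample values of $f$, $A$, $\alpha_1$, $m_{A_1}$, $m_{B_1}$ are illustrative placeholders, not the inputs of our actual scan.
\begin{verbatim}
# Sketch: reconstruct g = (1/2)(V* D_nu V^dag + A) f^{-1} R^{-1}  (illustrative inputs).
import numpy as np

def F_I(r1, r2):
    """Loop function F_I(r1, r2) entering the neutrino mass formula (r1, r2 != 1)."""
    return (r1*(r2 - 1)*np.log(r1) - r2*(r1 - 1)*np.log(r2)) / \
           ((r1 - 1)*(r2 - 1)*(r1 - r2))

def pmns(t12, t23, t13, delta=0.0):
    """PMNS matrix in the standard parameterization (Majorana phases dropped)."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s23, c23 = np.sin(t23), np.cos(t23)
    s13, c13 = np.sin(t13), np.cos(t13)
    e = np.exp(-1j*delta)
    return np.array([[ c12*c13,                  s12*c13,                  s13*e  ],
                     [-s12*c23 - c12*s23*s13/e,  c12*c23 - s12*s23*s13/e,  s23*c13],
                     [ s12*s23 - c12*c23*s13/e, -c12*s23 - s12*c23*s13/e,  c23*c13]])

# ---- illustrative placeholder inputs ----
D_nu  = np.diag([0.0, 8.6e-12, 5.0e-11])              # GeV, normal ordering
V     = pmns(0.59, 0.79, 0.15)
m_d   = np.array([4.7e-3, 9.5e-2, 4.18])              # GeV: m_d, m_s, m_b
m_A1, m_B1, alpha1 = 1200.0, 2000.0, 1e-3             # GeV, GeV, rad
f     = np.diag([0.1, 0.2, 0.3])                      # sample Yukawa texture
A     = 1e-10*np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]])   # antisymmetric matrix

# Scalar prefactor R defined above (N_c = 3)
R = 3*np.sin(alpha1)*np.cos(alpha1)/(2*(4*np.pi)**2) * (1 - m_A1**2/m_B1**2) \
    * np.sum(m_d * F_I((m_A1/m_B1)**2, (m_d/m_B1)**2))

g = 0.5 * (V.conj() @ D_nu @ V.conj().T + A) @ np.linalg.inv(f) / R
print(np.round(np.abs(g), 4))
\end{verbatim}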
\subsection{LFVs and FCNCs at tree level}
Leptoquark models often induce LFVs and FCNCs at tree level.
Several processes can arise from the terms involving $g$ and $f$, and these processes
can be estimated with the effective Hamiltonians~\cite{Carpentier:2010ue} as
\begin{align}
({\cal H}_{\rm eff})_{ijkn}^{\bar\ell\ell\bar d d}
&={f_{kj} f_{in}^\dag }
\left(\frac{c_{\alpha_2}^2} {m_{A_2}^2}+\frac{s_{\alpha_2}^2}{m_{B_2}^2}\right)
(\bar\ell_i\gamma^\mu P_L \ell_j)(\bar d_k\gamma_\mu P_R d_n)
-\frac{g_{kj} g_{in}^\dag }{m_\delta^2}
(\bar\ell_i\gamma^\mu P_L \ell_j)(\bar d_k\gamma_\mu P_L d_n)\nonumber\\
&\equiv
C_{LR}^{\bar\ell\ell\bar d d} (\bar\ell_i\gamma^\mu P_L \ell_j)(\bar d_k\gamma_\mu P_R d_n)
+
C_{LL}^{\bar\ell\ell\bar d d}(\bar\ell_i\gamma^\mu P_L \ell_j)(\bar d_k\gamma_\mu P_L d_n), \label{eq:hami1}
\\
({\cal H}_{\rm eff})_{ijkn}^{\bar\ell\ell\bar uu}
&=
-\frac{g_{kj} g_{in}^\dag }{2}
\left(\frac{s_{\alpha_1}^2} {m_{A_1}^2}+\frac{c_{\alpha_1}^2}{m_{B_1}^2}\right)
(\bar\ell_i\gamma^\mu P_L \ell_j)(\bar u_k\gamma_\mu P_L u_n)\nonumber\\
&\equiv C_{LL}^{\bar\ell\ell\bar uu}(\bar\ell_i\gamma^\mu P_L \ell_j)(\bar u_k\gamma_\mu P_L u_n), \label{eq:hami2}
\\
({\cal H}_{\rm eff})_{ijkn}^{\bar\nu\nu\bar qq}
&=
-\frac{g_{kj} g_{in}^\dag }{2}
\left(\frac{s_{\alpha_1}^2} {m_{A_1}^2}+\frac{c_{\alpha_1}^2}{m_{B_1}^2}\right)
(\bar\nu_i\gamma^\mu P_L \nu_j)(\bar q_k\gamma_\mu P_L q_n)\nonumber\\
&\equiv C_{LL}^{\bar\nu\nu\bar qq}(\bar\nu_i\gamma^\mu P_L \nu_j)(\bar q_k\gamma_\mu P_L q_n).\label{eq:hami3}
\end{align}
Then one can rewrite the relevant coefficients as
\begin{align}
\epsilon_{ijkn}^{\bar\ell\ell\bar d d}&=
\frac{\sqrt2}{4{\rm G_F}} (C_{LR}^{\bar\ell\ell\bar d d}+C_{LL}^{\bar\ell\ell\bar d d}),\\
\epsilon_{ijkn}^{\bar\ell\ell\bar uu}&=
\frac{\sqrt2}{4{\rm G_F}} C_{LL}^{\bar\ell\ell\bar uu},\\
\epsilon_{ijkn}^{\bar\nu\nu\bar qq}&=
\frac{\sqrt2}{4{\rm G_F}} C_{LL}^{\bar\nu\nu\bar qq},
\end{align}
where the experimental bounds on each coefficient are summarized in
Tables~\ref{tab:lfvs-fcncs-1}, \ref{tab:lfvs-fcncs-2},
and \ref{tab:lfvs-fcncs-3}~\cite{Carpentier:2010ue}.
{\it $B_{d/s}\to\mu^+\mu^-$ measurements}:
Recently, CMS~\cite{Chatrchyan:2013bka} and LHCb~\cite{Aaij:2013aka}
experiments reported the branching ratios of $B(B_s\to\mu^+\mu^-)$ and
$B(B_d\to\mu^+\mu^-)$, which can place interesting bounds on new physics.
The bounds on the coefficients of the effective Hamiltonians in
Eq.~(\ref{eq:hami1})~\cite{Sahoo:2015wya} are
\begin{align}
\label{eq:Bsmumu}
& B(B_s\to\mu^+\mu^-):\quad 0\lesssim |C_{LR}^{\bar\mu\mu\bar sb}+C_{LL}^{\bar\mu\mu\bar sb}|\lesssim5\times 10^{-9}\ {\rm GeV}^{-2},\\
\label{eq:Bdmumu}
& B(B_d\to\mu^+\mu^-):\quad 1.5\times 10^{-9} \ {\rm GeV}^{-2}
\lesssim |C_{LR}^{\bar\mu\mu\bar db}+C_{LL}^{\bar\mu\mu\bar db}|\lesssim3.9\times 10^{-9}\ {\rm GeV}^{-2},
\end{align}
where the phase is assumed to be zero for simplicity.
The bounds from the other modes are
\begin{align}
& B(B_s\to e^+ e^-):\quad
|C_{LR}^{\bar ee \bar sb}+C_{LL}^{\bar ee\bar sb}|\lesssim2.54\times 10^{-5}\ {\rm GeV}^{-2},\\
& B(B_d\to e^+ e^-):\quad
|C_{LR}^{\bar ee\bar db}+C_{LL}^{\bar ee\bar db }|\lesssim1.73\times 10^{-5}\ {\rm GeV}^{-2},\\
& B(B_s\to \tau^+\tau^-):\quad
|C_{LR}^{\bar \tau\tau \bar sb}+C_{LL}^{\bar \tau\tau \bar sb}|\lesssim1.2\times 10^{-8}\ {\rm GeV}^{-2},\\
& B(B_d\to \tau^+ \tau^-):\quad
|C_{LR}^{\bar \tau\tau\bar db}+C_{LL}^{\bar \tau\tau\bar db}|\lesssim1.28\times 10^{-6}\ {\rm GeV}^{-2}.
\end{align}
\begin{table}[t]
\begin{tabular}{|c|c|c|c|} \hline
${ijkn}$ of $\epsilon_{ijkn}^{\bar\ell\ell\bar d d}$ & Constraints on $\epsilon_{ijkn}^{\bar\ell\ell\bar d d}$ & Observable & Experimental value \\ \hline\hline
$eeds(\to1112)$ & $5.7\times 10^{-5}$ & $B(K^0_L\to \bar ee)$ & $9.0\times 10^{-12}$ \\
$eedb(\to1113)$ & $2.0\times 10^{-4}$ & $\frac{B(B^+\to \pi^+\bar ee)}{B(B^0\to \pi^-\bar e\nu_e)}$ &$<\frac{1.8\times 10^{-7}}{1.34\times 10^{-4}}$ \\
$eesb(\to1123)$ & $1.8\times 10^{-4}$ & $\frac{B(B^+\to K^+\bar ee)}{B(B^0\to D^0\bar e\nu_e)}$ & $\frac{4.9\times 10^{-7}}{2.2\times 10^{-2}}$ \\
$e\mu dd(\to1211)$ & $8.5\times 10^{-7}$ & $\mu-e$ conversion on Ti & $\frac{\sigma(\mu^-Ti\to e^-Ti)}{\sigma(\mu^-Ti\to capture)}<4.3\times 10^{-12}$ \\
$e\mu ds(\to1212)$ & $3.0\times 10^{-7}$ & $B(K^0_L\to \bar e\mu)$ & $<4.7\times10^{-12}$ \\
$e\mu db(\to1213)$ & $2.0\times 10^{-4}$ & $\frac{B(B^+\to \pi^+\bar e\mu)}{B(B^0\to \pi^-\bar e\nu_e)}$ & $\frac{1.7\times 10^{-7}}{1.34\times 10^{-4}}$ \\
$e\mu sb(\to1223)$ & $8\times 10^{-5}$ & $\frac{B(B^+\to K^+\bar e\mu)}{B(B^+\to D^0\bar e\nu_e)}$ & $<\frac{9.1\times 10^{-8}}{2.2\times 10^{-2}}$ \\
$e\tau dd(\to1311)$ & $8.4\times 10^{-4}$ & $\frac{B(\tau\to \pi^0 e)}{B(\tau\to \pi\nu_\tau)}$ & $<\frac{8\times 10^{-8}}{10.91\times 10^{-2}}$ \\
$e\tau ds(\to1312)$ & $4.9\times 10^{-4}$ & $\frac{B(\tau\to eK)}{B(\tau\to \bar\nu K)}$ & $B<3.3\times10^{-8}$ \\
$e\tau db(\to1313)$ & $4.1\times 10^{-3}$ & $B(B^0\to \bar e\tau)$ & $<1.1\times10^{-4}$ \\
$\mu\mu ds(\to2212)$ & $7.8\times 10^{-6}$ & $B(K_L^0\to \bar\mu\mu)$ & $6.84\times10^{-9}$ \\
$\mu\mu db(\to2213)$ & $1.3\times 10^{-4}$ & $\frac{B(B^+\to\pi^+\bar\mu\mu)}{B(B^0\to\pi^- \bar e\nu_e)}$ & $<\frac{6.9\times10^{-8}}{1.3\times 10^{-4}}$ \\
$\mu\tau dd(\to2311)$ & $9.8\times 10^{-4}$ & $\frac{B(\tau\to\pi^0\mu)}{B(\tau\to\pi^- \nu_\tau)}$ & $<\frac{1.1\times10^{-7}}{10.91\times 10^{-2}}$ \\
$\mu\tau ds(\to2312)$ & $5.4\times 10^{-4}$ & $\frac{B(\tau\to\mu K)}{B(\tau\to \bar\nu K)}$ & $B<4.0\times10^{-8}$ \\
$\mu\tau db(\to2313)$ & $2.1\times 10^{-2}$ & $B(B^0\to \bar \mu\tau)$ & $<2.2\times10^{-5}$ \\
$\mu\tau sb(\to2323)$ & $2.3\times 10^{-3}$ & $\frac{B(B^+\to K^+\bar\tau \mu)}{B(B^+\to D^0\bar e \nu)}$ & $<\frac{7.7\times10^{-5}}{2.2\times10^{-2}}$ \\
$\tau\tau db(\to3313)$ & $0.2$ & $B(B^0\to \bar\tau\tau)$ & $<4.1\times10^{-3}$\\
\hline
\end{tabular}
\caption{
Summary for the experimental bounds on $\epsilon_{ijkn}^{\bar\ell\ell\bar d d}$.}
\label{tab:lfvs-fcncs-1}
\end{table}
\begin{table}[t]
\begin{tabular}{|c|c|c|c|} \hline
${ijkn}$ of $\epsilon_{ijkn}^{\bar\ell\ell\bar uu}$ & Constraints on $\epsilon_{ijkn}^{\bar\ell\ell\bar uu}$ & Observable & Experimental value \\ \hline\hline
$eeuc(\to1112)$ & $7.9\times 10^{-3}$ & $\frac{B(D^+\to\pi^+\bar ee)}{B(D^0\to\pi^-\bar e\nu_e)}$ & $<\frac{7.4\times 10^{-6}}{2.83\times10^{-3}}$ \\
$eett(\to1133)$ & $0.092$ & $Z\to\bar ee$ & $R_e=20.804\pm 0.050$ \\
$e\mu uu(\to1211)$ & $8.5\times10^{-7}$ & $\mu-e$ conversion on Ti & $\frac{\sigma(\mu Ti\to eTi)}{\sigma(\mu Ti\to capture)}<4.3\times 10^{-12}$ \\
$e\mu uc(\to1212)$ & $1.7\times10^{-2}$ & $\frac{B(D^+\to \pi^+\bar e\mu)}{B(D^0\to\pi^-\bar e\nu_e)}$ & $<\frac{3.4\times 10^{-5}}{2.83\times10^{-3}}$ \\
$e\mu tt(\to1233)$ & $0.1$ & $Z\to \bar e\mu$ & $B<1.7\times10^{-6}$ \\
$e\tau uu(\to1311)$ & $8.4\times10^{-4}$ & $\frac{B(\tau\to \pi^0 e)}{B(\tau\to\pi^-\nu_\tau)}$ & $<\frac{8\times 10^{-8}}{10.91\times10^{-2}}$ \\
$e\tau tt(\to1233)$ & $0.2$ & $Z\to \bar e\tau$ & $B<9.8\times10^{-6}$ \\
$\mu\mu uc(\to2212)$ & $6.1\times10^{-3}$ & $\frac{B(D^+\to \pi^+ \bar\mu\mu)}{B(D^0\to\pi^-\bar e\nu_e)}$ & $<\frac{3.9\times 10^{-6}}{2.83\times10^{-3}}$ \\
$\mu\mu tt(\to2233)$ & $0.061$ & $Z\to \bar \mu\mu$ & $R_\mu=20.785\pm0.033$ \\
$\mu\tau uu(\to2311)$ & $9.8\times10^{-4}$ & $\frac{B(\tau\to \pi^0 \mu)}{B(\tau\to\pi^-\nu_\tau)}$ & $<\frac{1.1\times 10^{-7}}{10.91\times10^{-2}}$ \\
$\mu\tau tt(\to2333)$ & $1$ & $Z\to \tau\bar\mu$ & $B<12\times10^{-6}$ \\
$\tau\tau tt(\to3333)$ & $0.086$ & $Z\to \bar \tau\tau$ & $R_\tau=20.764\pm0.045$ \\
\hline
\end{tabular}
\caption{
Summary for the experimental bounds on $\epsilon_{ijkn}^{\bar\ell\ell\bar uu}$.}
\label{tab:lfvs-fcncs-2}
\end{table}
\begin{table}[t]
\begin{tabular}{|c|c|c|c|} \hline
${ijkn}$ of $\epsilon_{ijkn}^{\bar\nu_i\nu_j\bar q_kq_n}$ & Constraints on $\epsilon_{ijkn}^{\bar\nu_i\nu_j\bar q_kq_n}$ & Observable & Experimental value \\ \hline\hline
$ijds(\to ij12)$ & $9.4\times 10^{-6}$ & $\frac{B(K^+\to\pi^+\bar \nu\nu)}{B(K^+\to\pi^0\bar e\nu_e)}$ & $\frac{1.5\times 10^{-10}}{5.08\times10^{-2}}$ \\ \hline
\end{tabular}
\caption{Summary for the experimental bounds on $\epsilon_{ijkn}^{\bar\nu_i\nu_j\bar q_kq_n}$, where $(i,j)=(1-3)$.}
\label{tab:lfvs-fcncs-3}
\end{table}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=80mm]{LFV.eps}
\caption{ One-loop diagrams for the LFV processes $\ell_a \to \ell_b \gamma$
where the cross mark on the internal lines indicates the attachment of a
photon line.}
\label{fig:LFV}
\end{center}
\end{figure}
\subsection{LFVs and FCNCs at the one-loop level}
\label{lfv-lu}
{\it LFVs}:
$\ell_a\to\ell_b\gamma$ processes, which arise from Eqs.~(\ref{eq:lfvs-1})
and ~(\ref{eq:lfvs-2}) via one-loop diagrams {as shown in Fig.~\ref{fig:LFV}}, often give
stringent experimental constraints, and the branching ratio is given by
\begin{align}
B(\ell_a\to\ell_b \gamma)
=
\frac{48\pi^3 C_a\alpha_{\rm em}}{{\rm G_F^2} m_a^2 }(|(a_R)_{ab}|^2+|(a_L)_{ab}|^2),
\end{align}
where $m_{a(b)}$ is the mass for the charged-lepton eigenstate,
$C_{a}=(1,1/5)$ for ($a=\mu,\tau$).
$a_L$ and $a_R$ are respectively given by
\begin{align}
&\frac{(a_R)_{ab}}{N_c}\approx
-\frac{ f^\dag_{bi} f_{ia} m_a }{12(4\pi)^2}
\left( \left[\frac{Q_{A_2} c^2_{\alpha_2} }{m_{A_2}^2} + \frac{Q_{B_2} s^2_{\alpha_2} }{m_{B_2}^2}\right]
+
2 \left[\frac{Q_d c^2_{\alpha_2} }{m_{A_2}^2} + \frac{Q_d s^2_{\alpha_2} }{m_{B_2}^2}\right]\right)
\\
&
+\frac{ g^\dag_{bi} g_{ia} {m_a} }{24(4\pi)^2}
\left(
\left[\frac{- Q_{\delta} }{m_\delta^2}+ \frac{2Q_{\bar d}}{m_\delta^2}\right]
-
\left[\frac{Q_{A_1} s^2_{\alpha_1} }{m_{A_1}}+ \frac{Q_{B_1} c^2_{\alpha_1} }{m_{B_1}}\right]
+2 Q_{\bar u}
\left[\frac{s^2_{\alpha_1} }{m_{A_1}}+ \frac{c^2_{\alpha_1}}{m_{B_1}}\right]\right),\nonumber
\end{align}
where we have assumed $m_{d(u)}\ll m_{A_i,B_i,\delta}$ $(i=1,2)$, and
$a_L=a_R(m_a\rightarrow m_b)$,
$Q_{\delta}=-Q_{\bar\delta}\equiv -4/3$, $Q_{A_2}=Q_{B_2}\equiv -2/3$,
$Q_{A_1}=Q_{B_1}\equiv -1/3$, $Q_{d}=-Q_{\bar d}\equiv -1/3$,
$Q_{u}=-Q_{\bar u}\equiv 2/3$.
The current experimental upper bounds are given
by~\cite{TheMEG:2016wtm, Adam:2013mnn}
\begin{align}
B(\mu\rightarrow e\gamma) &\leq4.2\times10^{-13},\quad
B(\tau\rightarrow \mu\gamma)\leq4.4\times10^{-8}, \quad
B(\tau\rightarrow e\gamma) \leq3.3\times10^{-8}~.
\label{expLFV}
\end{align}
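The formula above is straightforward to evaluate numerically; the helper below (a sketch) returns $B(\ell_a\to\ell_b\gamma)$ for given dipole amplitudes $a_{L,R}$ (in GeV$^{-1}$), and the sample amplitudes are arbitrary illustrations rather than model predictions.
\begin{verbatim}
# Sketch: evaluate B(l_a -> l_b gamma) from given dipole amplitudes a_L, a_R.
import numpy as np

ALPHA_EM, G_F = 1.0/137.0, 1.166e-5          # fine-structure const., Fermi const. [GeV^-2]
M_LEPTON = {"mu": 0.1057, "tau": 1.777}      # GeV
C_A      = {"mu": 1.0, "tau": 1.0/5.0}       # C_a = (1, 1/5) for a = mu, tau

def br_la_to_lb_gamma(a, aR, aL):
    m_a = M_LEPTON[a]
    return 48*np.pi**3 * C_A[a] * ALPHA_EM / (G_F**2 * m_a**2) * (abs(aR)**2 + abs(aL)**2)

# Example: size of |a_{L,R}| probed by B(mu -> e gamma) < 4.2e-13
for a_val in (1e-13, 1e-12):                 # GeV^-1, arbitrary sample values
    print(f"|a_L,R| = {a_val:.0e}:  B = {br_la_to_lb_gamma('mu', a_val, a_val):.2e}")
\end{verbatim}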
{\it The muon anomalous magnetic moment (muon $g-2$)}: It
has been measured with
a high precision, and its deviation from the SM prediction is of
order $10^{-9}$. The formula for the muon $g-2$ is given by
\begin{align}
\Delta a_\mu\approx -m_\mu [{(a_R)_{22}+(a_L)_{22}}].\label{damu}
\end{align}
In our model, the typical values are at most $10^{-14}$ and mostly negative in
sign. Although some positive contributions may exist,
the negative ones are always larger than the positive ones
once we impose all the bounds from LFVs and FCNCs.
{\it $Q-\overline{Q}$ mixing}: We also consider the constraint of the
$Q-\overline{Q}$ mixing, where $Q=K,B,D$.
The mixing is characterized by $\Delta m_Q$ given by~\cite{hep-ph/9604387}
{
\begin{align}
\Delta m_K&\approx
\sum_{i,j=1}^3\frac{
5f_K^2 m_K^3\left(
8f_{i2}^\dag f_{1i} f_{1j} f^\dag_{j2} m_{B_1}^2 m_{\delta}^2
+
g_{2i} g^\dag_{i1} g_{2j} g^\dag_{j1} (4 m_{B_1}^2 + m_{\delta}^2)\right)}
{768 \pi^2 m_{A_1}^2 m_{B_1}^2 m_{\delta}^2 (m_d+m_s)^2}
\lesssim 3.48\times10^{-15}[{\rm GeV}],\label{eq:kk}\\
\Delta m_B&\approx
\sum_{i,j=1}^3\frac{
5f_B^2 m_B^3\left(
8f_{i2}^\dag f_{3i} f_{3j} f^\dag_{j2} m_{B_1}^2 m_{\delta}^2
+
g_{2i} g^\dag_{i3} g_{2j} g^\dag_{j3} (4 m_{B_1}^2 + m_{\delta}^2)\right)}
{768 \pi^2 m_{A_1}^2 m_{B_1}^2 m_{\delta}^2 (m_b+m_s)^2}
\lesssim 3.36\times10^{-13} [{\rm GeV}],\label{eq:bb}\\
\Delta m_D&\approx
\sum_{i,j=1}^3\frac{
5f_D^2 m_D^3g_{2i} g^\dag_{i1} g_{2j} g^\dag_{j1} }
{768 \pi^2 m_{B_1}^2 (m_u+m_c)^2}
\lesssim 6.25\times10^{-15}[{\rm GeV}],\label{eq:dd}
\end{align}
where we assume $m_{A_1}\approx m_{A_2}$, $m_{B_1}\approx m_{B_2}$, and $s_{\alpha_{1(2)}}\approx0$.
The last inequality in each of Eqs.~(\ref{eq:kk})--(\ref{eq:dd})
represents the corresponding experimental upper bound \cite{pdg}, and
$f_K\approx0.156$ GeV, $f_B\approx0.191$ GeV, $m_K\approx0.498$ GeV,
and $m_B\approx5.280$ GeV.
}
{\it $b\to s\gamma$}: It can arise from the same terms as the LFV processes, yet
the resulting constraint is always weaker than those from the LFVs. Thus we do not
consider this process further.
\subsection{ Oblique parameters}
Since $\eta$ and $\Delta$ are multiplets under the $SU(2)_L$ gauge symmetry,
we need to take into account the constraints from the oblique parameters
$S$, $T$, and $U$.
Here we focus on the new physics contributions to $S$ and $T$ parameters,
$\Delta S$ and $\Delta T$, which are defined by
\begin{align}
\Delta S&={16\pi} \frac{d}{dq^2}[\Pi_{33}(q^2)-\Pi_{3Q}(q^2)]|_{q^2\to0},\quad
\Delta T=\frac{16\pi}{s_{W}^2 m_Z^2}[\Pi_{\pm}(0)-\Pi_{33}(0)],
\end{align}
where $s_{W}^2\approx0.22$ is the squared sine of the weak mixing (Weinberg) angle and $m_Z$ is the $Z$
boson mass.
The loop factors $\Pi_{33,3Q,QQ,\pm}(q^2)$ are calculated from the one-loop
vacuum-polarization diagrams for $Z$ and $W^\pm$ bosons,
$i \Pi_{Z(W)}^{\mu \nu}$, where new particles run inside the loop diagrams,
as follows;
\begin{align}
\label{eq:piZ}
\Pi_{Z}^{\mu \nu} &= g^{\mu \nu} \frac{e^2}{c_W^2 s_W^2} \left( \Pi_{33}(q^2) - 2 s_W^2 \Pi_{3Q}(q^2) - s_W^4 \Pi_{QQ}(q^2) \right), \\
\label{eq:piW}
\Pi_{W}^{\mu \nu} &= g^{\mu \nu} \frac{e^2}{s_W^2} \Pi_{\pm}(q^2),
\end{align}
The list of new particle contributions is quite lengthy and so we
summarize them in the Appendix.
The experimental bounds are given by \cite{pdg}
\begin{align}
(0.05 - 0.09) \le \Delta S \le (0.05 + 0.09), \quad
(0.08 - 0.07) \le \Delta T \le (0.08 + 0.07).
\end{align}
\subsection{Collider physics}
The interactions of the $\eta$ and $\Delta$ are very similar to leptoquarks
or squarks. The first signature that we consider is their effects on
Drell-Yan production and also the lepton-flavor violating production
processes such as $e^\pm \mu^\mp$, $\mu^\pm \tau^\mp$, and $e^\pm \tau^\mp$ {\cite{Mandal:2015lca}}.
Without loss of generality we take the mixing angles between $\eta$ and
$\Delta$ to be small (indeed required by the $S,T$ parameters), such
that $\eta_{1/3,2/3} \approx A_{1,2}$ and $\delta_{1/3,2/3} \approx B_{1,2}$.
We can write down the amplitude for
$d_{R_i} (p_1) \overline{d_{R_{i'}}} (p_2) \to \ell_{L_j} (q_1) \bar
\ell_{L_{j'}} (q_2) $ with a $t$-channel exchange of $\eta_{2/3}$
\begin{eqnarray}
i {\cal M} &=& - i f_{ij} f_{i'j'} \frac{1}{\hat t - m_{\eta}^2}
\bar u (q_1) P_R u(p_1 ) \; \bar v(p_2) P_L v(q_2) \nonumber \\
& \stackrel{Fierz}{ = } &
- i f_{ij} f_{i'j'} \frac{1}{\hat t - m_{\eta}^2} \frac{1}{2}
\bar u (q_1) \gamma^\mu P_L v (q_2 ) \; \bar v(p_2) \gamma_\mu P_R u(p_1) \;.
\end{eqnarray}
When $|\hat t| \ll m^2_{\eta}$ we can identify this amplitude as a 4-fermion
contact interaction and equate
\begin{equation}
\frac{f_{ij} f_{i'j'} }{2 m_{\eta}^2} = \frac{4\pi}{\Lambda^2_{LR}}
\end{equation}
where $\Lambda_{LR}$, with the $L\,(R)$ chirality referring to the lepton (quark),
is the scale in which limits on 4-fermion contact interactions are conventionally quoted.
Since only the limits on $\Lambda_{LL}$ are quoted in the PDG \cite{pdg}, which
are based on Ref.~\cite{cheung}, we use the limit on $\Lambda_{LR}$ obtained
in Ref.~\cite{cheung}. The limit is $\Lambda_{LR} \approx 11-16$ TeV, depending
on the sign of the 4-fermion contact interaction. Let us simply take
$\Lambda_{LR} = 16$ TeV and translate it into a mass limit on $m_\eta$ as
follows (with $i=i'=1$ and $j=j'=1$ or 2)~\footnote{
{The most up-to-date limits from ATLAS and CMS on the compositeness
scale are $\Lambda_{\pm}(LL) \agt 17-25$ TeV (ATLAS) \cite{atlas-contact}
and $11-18$ TeV (CMS) \cite{cms-contact},
which are somewhat less restrictive
than the limits that we quoted from the PDG. We therefore used the PDG values.
Nevertheless, these limits are not as stringent as the direct search limits of
around 1 TeV, provided that the values of $f_{1j}$ and $g_{1j}$ are less than $O(10^{-1})$.
}}
\begin{equation}
\label{eta-limit}
m_{\eta} \agt f_{1j} \times 3.2 \; {\rm TeV} \qquad (j=1,2) \;.
\end{equation}
{The effect of including the $\hat t$ or $\hat u$ dependence in the leptoquark propagator
has been explicitly worked out in Refs.~\cite{1409.2372,1410.4798}. It was shown
that the limits obtained with the proper leptoquark propagators are weakened
by about 40\% to a few \% for leptoquark masses of 1 TeV to 3 TeV. Nevertheless,
the direct search limits of around 1 TeV are then more restrictive.
}
Note that the approximation $1/(\hat t - m_\eta^2) \simeq 1/(- m_\eta^2)$ may
not be valid for $m_\eta \alt 1$ TeV. Yet, the limit obtained in
Eq.~(\ref{eta-limit}) is a rough estimate of how heavy the $\eta$ boson has to
be in order not to upset the current Drell-Yan data. If the $\eta$ boson
is around 1 TeV, the Drell-Yan invariant-mass distribution
may receive some enhancement at the large invariant-mass end.
Similarly, we can write down the amplitudes for
$ u^c_{L_i} \overline{ u^c_{L_{i'}} } \to \ell_{L_j} \bar \ell_{L_{j'}}$ and
$ d^c_{L_i} \overline{ d^c_{L_{i'}} }\to \ell_{L_j} \bar \ell_{L_{j'}}$
with the exchange
of $\delta_{1/3}$ and $\delta_{4/3}$, respectively.
The corresponding relation determining the mass limit on $m_{\delta}$ reads
\begin{equation}
\frac{g_{1j} g_{1j} }{2 m_\delta^2} = \frac{4 \pi}{\Lambda_{LL}^2} \;.
\end{equation}
With a more severe $\Lambda_{LL} \approx 25$ TeV, we obtain
\begin{equation}
\label{delta-limit}
m_{\delta} \agt g_{1j} \times 5.0 \;{\rm TeV} \qquad (j=1,2) \;.
\end{equation}
We observe that the mass limit on $\delta$ is somewhat stronger
than that on $\eta$, simply because of the chiralities of the quarks and leptons
that they couple to.
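The numerical coefficients in Eqs.~(\ref{eta-limit}) and (\ref{delta-limit}) follow from simple arithmetic, $m \agt f\,\Lambda/\sqrt{8\pi}$, which the short check below reproduces.
\begin{verbatim}
# Quick check: translate Lambda_LR = 16 TeV and Lambda_LL = 25 TeV into mass limits,
# using  coupling^2/(2 m^2) = 4 pi / Lambda^2  =>  m > coupling * Lambda / sqrt(8 pi).
import math

for boson, Lambda_TeV in (("eta   (Lambda_LR = 16 TeV)", 16.0),
                          ("delta (Lambda_LL = 25 TeV)", 25.0)):
    print(f"{boson}:  m > coupling x {Lambda_TeV/math.sqrt(8.0*math.pi):.1f} TeV")
# -> ~3.2 TeV and ~5.0 TeV, as quoted above
\end{verbatim}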
On the other hand, the $\eta$ and $\delta$ bosons can be directly pair
produced by the strong interaction, followed by their decays into
leptons and quarks. Therefore, the typical signature would be a pair
of leptons and a pair of jets in the final state, in which the invariant
mass of one jet and one lepton shows a clear peak.
Note that the jets can be light or heavy flavors depending on the
Yukawa couplings $f_{ij}$ and $g_{ij}$, and the leptons can be neutrinos or
charged leptons of different or same flavors.
The current limits on leptoquarks, using electron or muon plus jets, are
about 1 TeV \cite{lq-limit}.
Pair production cross sections were calculated with NLO accuracy
in Ref.~\cite{pair} a long time ago. The cross section at the 13 TeV LHC is
of order $O(10)$ fb for a 1 TeV $\eta$ or $\delta$ boson.
Combining the direct search limit of about 1 TeV for $\eta$ and $\delta$,
and Eqs.~(\ref{eta-limit}) and (\ref{delta-limit}), we obtain upper limits
for $f_{1j}$ and $g_{1j}$:
\begin{equation}
f_{1j} \alt 0.3, \qquad g_{1j} \alt 0.2 \qquad (j=1,2) \;.
\end{equation}
{A list of more comprehensive collider and low energy constraints can be
found in Ref.~\cite{1603.04993}. }
\section{ $b\to s\bar \ell \ell$ Anomaly and Predictions}
The more striking anomaly was the lepton-universality violation measured
in the ratio
$R_K \equiv B(B\to K\mu\mu)/B(B\to K ee) = 0.745 ^{+0.090}_{-0.074} \pm 0.036$
by LHCb \cite{lhcb-2014}, and the less striking one was in the angular distributions
of $B \to K^*\mu\mu$~\cite{lhcb-2013}.
The new-physics contributions to the effective Hamiltonian
characterizing the decay processes are
\begin{align}
{\cal H}_{\rm eff}^f&=\frac{f_{b\ell} f_{s\ell'} }4
\left(\frac{c_{\alpha_2}^2} {m_{A_2}^2}+\frac{s_{\alpha_2}^2}{m_{B_2}^2}\right)
\left[(\bar s\gamma^\mu P_R b)(\bar \ell'\gamma_\mu \ell)-(\bar s\gamma^\mu P_R b)(\bar \ell'\gamma_\mu\gamma_5 \ell) \right],
\\
{\cal H}_{\rm eff}^g&=-\frac{g_{b\ell} g_{s\ell'} }{4m_{\delta}^2}
\left[(\bar s\gamma^\mu P_L b)(\bar \ell'\gamma_\mu \ell)
{-}
(\bar s\gamma^\mu P_L b)(\bar \ell'\gamma_\mu\gamma_5 \ell) \right].
\end{align}
Therefore, the relevant new-physics Wilson coefficients for the operators
$C^{(')}_{9,10}$ are given by
\begin{align}
(C_9')^{\ell \ell'}&=\frac{1}{C_{\rm SM}}\frac{f_{b\ell} f_{s\ell'}}{4}\left(\frac{c_{\alpha_2}^2} {m_{A_2}^2}+\frac{s_{\alpha_2}^2}{m_{B_2}^2}\right),\quad
(C_{10}' )^{\ell \ell'}=- (C_9' )^{\ell \ell'},\\
(C_9)^{\ell \ell'}&={-}(C_{10})^{\ell \ell'}=-\frac{1}{C_{\rm SM}}\frac{g_{b\ell} g_{s\ell'} }{4m_{\delta}^2},\quad C_{\rm SM}\equiv \frac{V_{tb}V_{ts}^* G_F\alpha_{\rm em}}{\sqrt2\pi},
\end{align}
where $\alpha_{\rm em}\approx1/137$ is the fine-structure constant,
and ${\rm G_F}\approx1.17\times 10^{-5}$ GeV$^{-2}$ is the Fermi
constant. In our analysis, we focus on the case of $\ell=\ell' = \mu$
and we write
the $\mu \mu$ component simply as $C_9(C_{10})$ and $C_9'(C_{10}')$ in the
following. In Table~\ref{tab:C910}~\cite{Descotes-Genon:2015uva}, we
summarize the best fit values of the Wilson coefficients for
explaining the experimental anomalies where we focus on the cases of
$C_9 = {-}C_{10}$ and $C_9' = - C_{10}'$ since most of the allowed
parameter sets provide either $C_9 \ll C_9'$ or $C_9 \gg C_9'$ as we
show in our numerical analysis.
{Note that $(C_9)^{\mu \mu}$ and $(C'_9)^{ \mu\mu}$ are roughly estimated as
\begin{equation}
C_9[C_9'] \sim 3.3 \times 10^2 \left( \frac{1 \, {\rm TeV}}{m_{LQ}} \right)^2 g_{b \mu} g_{s \mu} [f_{b \mu} f_{s \mu}],
\end{equation}
where $m_{LQ}$ indicates the leptoquark mass. Therefore, the product of the couplings is required to be $\sim O(10^{-3})$ in order to obtain the best-fit value for a leptoquark mass of $\sim$1 TeV.
}
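The prefactor $3.3\times10^2$ can be reproduced with a one-line estimate; in the sketch below we take $|V_{tb}V_{ts}^*|\approx0.04$ as an assumed input.
\begin{verbatim}
# Cross-check of C_9 ~ 3.3e2 (1 TeV/m_LQ)^2 g_bmu g_smu, assuming |Vtb Vts*| ~ 0.04.
import math

G_F, alpha_em, VtbVts = 1.166e-5, 1.0/137.0, 0.04           # GeV^-2, -, -
C_SM = VtbVts * G_F * alpha_em / (math.sqrt(2.0)*math.pi)   # GeV^-2

m_LQ = 1000.0                                               # GeV
pref = 1.0 / (4.0 * C_SM * m_LQ**2)                         # coefficient of g_bmu*g_smu
print(f"C_9 ~ {pref:.0f} * g_bmu*g_smu for m_LQ = 1 TeV")              # ~3.3e2
print(f"best fit C_9 = -0.68 needs |g_bmu*g_smu| ~ {0.68/pref:.1e}")   # ~O(10^-3)
\end{verbatim}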
\begin{table}[t]
\begin{tabular}{c|c|c|c} \hline
& ${\rm Best\ fit}$ & $1\sigma$ & $3\sigma$ \\ \hline
$C_{9}={-}C_{10}$ & $-{0.68}$ & $[{-0.85,-0.50}]$& $[{-1.22,-0.18}]$ \\
$C'_{9}=-C'_{10}$ & $0.19$ & $[0.07,0.31]$& $[-0.17,0.55]$ \\ \hline
\end{tabular}
\caption{Summary of the new-physics contributions to $C_{9,10}$ ($C'_{9,10}$) explaining the experimental anomalies in the $b\to s\bar \ell \ell$ processes,
for the cases $C_9 = - C_{10}$ and $C'_9 = - C'_{10}$; in each case the new-physics contribution is nonzero only for the coefficients listed in the table. }
\label{tab:C910}
\end{table}
\subsection{Numerical analysis \label{sec:numerical}}
We are now ready to search for the allowed parameter space, which
satisfies all the constraints that we have discussed above,
in particular those explaining the $b\to s \mu \mu$ anomalies.
First of all, we fix some mass parameters as $m_{A_2}=m_{A_1}$
and $m_{B_2}=m_{\delta}=m_{B_1}$ {where we require degenerate masses for the components of $\eta$ and $\Delta$ to suppress the oblique parameters $\Delta S$ and $\Delta T$}.
We prepare 80 million random sampling points for the relevant input
parameters with the following ranges:
{
\begin{align}
& (m_{A_1},m_{B_1}) \in [1\,, 5\,]\text{TeV},\quad
|A_{12,23,13}| \in \left[10^{-13},\ 10^{-7}\,\right] \text{GeV}, \nonumber\\
& (\alpha_{1} ,\ \alpha_{2} ) \in [10^{-5}, 10^{-2}], \quad
| f_{ij} | \in [10^{-5}, {4 \pi}].
\label{range_scanning}
\end{align}
}
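The scan can be organized as in the following schematic skeleton (not the actual code used); the log-uniform sampling and the placeholder constraint function are our assumptions for illustration.
\begin{verbatim}
# Schematic skeleton of the random scan: draw inputs in the quoted ranges and keep
# the points passing all constraints.  `passes_all_constraints` is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def draw_point():
    m_A1, m_B1 = rng.uniform(1e3, 5e3, 2)                  # GeV
    A_offdiag  = 10**rng.uniform(-13, -7, 3)               # |A_12|, |A_23|, |A_13| in GeV
    alpha1, alpha2 = 10**rng.uniform(-5, -2, 2)            # mixing angles
    f = 10**rng.uniform(-5, np.log10(4*np.pi), (3, 3))     # |f_ij|
    return dict(m_A1=m_A1, m_B1=m_B1, A=A_offdiag, alphas=(alpha1, alpha2), f=f)

def passes_all_constraints(point):
    # placeholder: neutrino fit, LFV/FCNC, meson mixing, S/T and b -> s mu mu go here
    return True

accepted = [p for p in (draw_point() for _ in range(10_000)) if passes_all_constraints(p)]
print(len(accepted), "points kept")
\end{verbatim}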
After scanning, we find 709 parameter sets which can fit the
neutrino oscillation data and satisfy all the constraints.
Note that the mixing angles $\alpha_{1(2)}$ are required to be small
due to the constraints from the $\Delta S$ and $\Delta T$ parameters.
In the left panel of Fig.~\ref{fig:nums}, we show the allowed region in
$m_{A_1}$-$m_{B_1}$ plane, which suggests the relation
$m_{A_1}\lesssim m_{B_1}$ that is mainly required from the constraints
of $\Delta S$ and $\Delta T$.
We also put a vertical line of $m_{A_1} = 1$ TeV onto the
figure, because the collider limit on leptoquarks is roughly 1 TeV.
In the right panel of Fig.~\ref{fig:nums}, we show the allowed region
in the $C_{9}$--$C_{9}'$ plane. It suggests that the relation
$C_9(\in[0,0.2]) \ll C_9'(\in[0,0.7])$
is realized for most of the parameter region, while
a parameter set provides $C_9 \sim C_9'$; the cases of $C_9(={-}C_{10}) < 0$ and
$C_9'(=-C_{10}') < 0$ are disfavored by the constraints on the $B_s$ and
$B_d$ decay branching ratios in Eqs.~(\ref{eq:Bsmumu}) and
(\ref{eq:Bdmumu}).
The case $C_9(={-}C_{10}) \sim C_9'(=-C_{10}')$ is not favored by the global analysis.
In Fig.~\ref{fg}, we show the scatter plots of $f_{11,12,13}$ versus
$m_{A_1} = m_{A_2} \approx m_\eta$ (left panels), and
$g_{11,12,13}$ versus $m_{B_1} = m_{B_2} \approx m_\delta$ (right panels).
$f_{11}$ and $f_{13}$ span the whole scanned range, while $f_{12}$ favors rather larger values.
On the other hand, $g_{11}$ and $g_{12}$ are bounded as $[g_{11},g_{12}] \alt {\cal O}(1)$, while $g_{13}$ remains essentially a free parameter, as shown in Fig.~\ref{fg-gtrend}.
We note that the $\mu \to e \gamma$ process provides the strongest constraint which bounds the combinations of the couplings $|f_{2i}^\dagger f_{i1}|$ and $|g_{2i}^\dagger g_{i1}|$. Therefore, we can see
that the collider limits obtained from the Drell-Yan process in
Eqs.~(\ref{eta-limit}) and (\ref{delta-limit})
are weaker than the direct search mass limits of leptoquarks ($\sim 1$ TeV).
\begin{figure}[tb]
\begin{center}
\includegraphics[width=80mm]{mA1-mB1.eps}
\includegraphics[width=80mm]{C9-C9p.eps}
\caption{
Scatter plots in the plane of $m_{A_1}$ versus $m_{B_1}$ (left
panel) and in the plane of $C_9$ versus $C_{9}'$ (right panel).
The vertical line at 1 TeV is superimposed in the left panel to indicate the
direct-search limit on leptoquarks.}
\label{fig:nums}
\end{center}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=80mm]{A1-f11p.eps}
\includegraphics[width=80mm]{B1-g11p.eps}
\includegraphics[width=80mm]{A1-f12p.eps}
\includegraphics[width=80mm]{B1-g12p.eps}
\includegraphics[width=80mm]{A1-f13p.eps}
\includegraphics[width=80mm]{B1-g13p.eps}
\caption{Scatter plots of $f_{1j}$ versus $m_{A_1}=m_{A_2} \approx m_\eta$ (left
panels), and $g_{1j}$ versus $m_{B_1}=m_{B_2} \approx m_\delta$ (right panels).
{
The vertical line of 1 TeV is superimposed due to the
direct search limit of leptoquarks.}
}
\label{fg}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=80mm]{g11-g12.eps}
\includegraphics[width=80mm]{g11-g13.eps}
\includegraphics[width=80mm]{g12-g13.eps}
\caption{{Scatter plots of $g_{11}$ versus $g_{12}$ with black dots
(upper left), $g_{11}$ versus $g_{13}$ with red dots (upper right), and
$g_{12}$ versus $g_{13}$ with blue dots (lower). These trends mainly come
from $\ell_a\to\ell_b\gamma$ at the one-loop level. In particular, the upper-left
panel (black dots) is more restrictive than the other two,
because both $g_{11}$ and $g_{12}$ enter the most
stringent constraint, $\mu\to e\gamma$.
}}
\label{fg-gtrend}
\end{figure}
\subsection{Collider Predictions}
The most striking signature of the model is the lepton-flavor
violating production via the $\eta$ or $\delta$ bosons in the $t$-channel,
resulting in the final states of $e^\pm \mu^\mp$, $e^\pm \tau^\mp$, or
$\mu^\pm \tau^\mp$. The SM irreducible backgrounds to these final states are
negligible.
Since the $\delta$ boson is in general heavier than the
$\eta$ boson, we use the subprocess $d \bar d \to \ell_i \ell_j$
via the exchange of the $\eta$ boson to estimate the event rates.
We give the signal event rates in Table~\ref{e-mu-tau}, using
the parameters $f_{11}=10^{-2}, f_{12}=f_{13} = 10^{-1}$ and
$m_\eta = 1$ TeV at the 13 TeV LHC with 300 fb$^{-1}$ luminosity.
Naively, the production cross section for $\ell_i \ell_j$ is
proportional to $| f_{1i} f_{1j} |^2$. If we choose $f_{11} = 10^{-1}$ instead
of $10^{-2}$ the event rate will increase by 100 times, although the
number of parameter points for $f_{11} =10^{-1}$ is considerably less.
Therefore, the event rates may be large enough for observation if the
Yukawa couplings $f_{1j}$ are of order $O(10^{-1})$.
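This naive scaling can be made explicit with a two-line estimate; the reference event count below is a hypothetical placeholder rather than a number taken from Table~\ref{e-mu-tau}.
\begin{verbatim}
# Illustration of the |f_1i f_1j|^2 scaling: raising f_11 from 1e-2 to 1e-1 boosts any
# channel involving f_11 by a factor of 100.  N_ref is a hypothetical reference count.
def scaled_events(n_ref, f_old, f_new):
    return n_ref * (f_new / f_old)**2

N_ref = 0.06                                   # events at f_11 = 1e-2 (placeholder)
print(scaled_events(N_ref, 1e-2, 1e-1))        # -> 6.0 events at f_11 = 1e-1
\end{verbatim}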
As we have mentioned above, pair production cross sections for leptoquarks
have been calculated with NLO accuracy \cite{pair} and the
cross sections at the 13 TeV LHC for 1 TeV $\eta$ or $\delta$ bosons are
of order $O(10)$ fb. The final state consists of two leptons and two jets,
among which the corresponding lepton and jet will form an invariant-mass
peak.
On the other hand, the $\eta$ or $\delta$ bosons can also be singly
produced with the subprocess $g q \to \eta \ell$ \cite{singly},
followed by the decay of $\eta$ into
a lepton and a quark. The amplitude for the production involves
a strong coupling and a Yukawa coupling ($f_{ij}$ for $\eta$ but $g_{ij}$
for $\delta$). Nevertheless, since the sizes of $f_{ij}$ and $g_{ij}$ are
very small because of the small neutrino mass, the production cross
section for single $\eta$ or $\delta$ is very suppressed. We shall
not further consider this production mechanism.
\begin{table}[t!]
\caption{\small \label{e-mu-tau}
Event rates for the $e^\pm \mu^\mp$, $e^\pm \tau^\mp$, or
$\mu^\pm \tau^\mp$ final states with exchange of the $\eta$ boson in
the subprocess $d\bar d \to \ell_i \ell_j$
at the 13 TeV LHC with 300 fb$^{-1}$ luminosity.
}
\medskip
\begin{ruledtabular}
\begin{tabular}{lccc}
Inputs & $e^\pm \mu^\mp$ & $e^\pm \tau^\mp$ & $\mu^\pm \tau^\mp$ \\
$f_{11}=10^{-2}$, $f_{12}=f_{13}=10^{-1}$, $m_{\eta}=1$ TeV
& $0.057$ & $5.7$ & $0.057$
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Conclusions}
We have proposed a simple extension of the SM with two leptoquark-like
scalar bosons $\eta$ and $\Delta$, which couple to all three generations
of fermions. It can explain the neutrino masses and oscillation data,
and, most importantly, the anomalies observed in
$b \to s \ell^+ \ell^-$, including the lepton-universality violation and
the angular distributions, while at the same time being consistent with all the
constraints from LFVs, FCNCs, Drell-Yan production, and collider searches.
We offer a few more comments as follows.
\begin{enumerate}
\item The contributions of $\eta$ and $\Delta$ to the muon $g-2$ are
negligible compared to the experimental uncertainties.
\item The contributions of the $\eta_{2/3}$ to the Drell-Yan process,
proportional to $|f_{11}|^4$ ($|f_{12}|^4$) for $e^+ e^- \; (\mu^+ \mu^-)$
final state, will show up as an enhancement in the large invariant-mass
region.
\item The most interesting collider signature for the $\eta$ or $\delta$
boson is the lepton-flavor violating production such as
$e^\pm \mu^\mp$, $e^\pm \tau^\mp$, and $\mu^\pm \tau^\mp$, which are
proportional to $|f_{11} f_{12}|^2$, $|f_{11} f_{13}|^2$, and
$|f_{12} f_{13}|^2$, respectively. The event rates
may be large enough for observation if the
Yukawa couplings $f_{1j}$ are of order $O(10^{-1})$.
\item The direct search limits on $\eta$ or $\delta$ bosons, just like
leptoquarks, are currently stronger than the indirect bounds from the
Drell-Yan process that we obtained in Eqs.~(\ref{eta-limit}) and
(\ref{delta-limit}).
\end{enumerate}
\section*{Acknowledgments}
This work was supported by the Ministry of Science and Technology
of Taiwan under Grants No. MOST-105-2112-M-007-028-MY3.
\begin{appendix}
\section{New particle contributions to vacuum polarization diagram}
Here we summarize the contributions to $\Pi_{\pm}(q^2)$, $\Pi_{33}(q^2)$, $\Pi_{3Q}(q^2)$ and $\Pi_{QQ}(q^2)$ in Eq.~(\ref{eq:piZ}) and (\ref{eq:piW}) from new particles in our model.
\noindent
{\bf Contributions to $\Pi_{\pm}(q^2)$ } \\
The one loop contributions from three-point gauge interaction are denoted by $\Pi_\pm^{XY}(q^2)$ where $X$ and $Y$ indicate the particles inside the loop.
They are summarized as follows;
\begin{align}
& \Pi_{\pm}^{A_1(B_1) \delta_{4/3}} (q^2) = \frac{2}{(4\pi)^2} s_{\alpha_1}^2(c_{\alpha_1}^2) G(q^2, m_{A_1(B_1)}^2, m_\delta^2), \\
& \Pi_{\pm}^{B_1 \delta_{4/3}} (q^2) = \frac{2}{(4\pi)^2} c_{\alpha_1}^2 G(q^2, m_{B_1}^2, m_\delta^2), \\
& \Pi_{\pm}^{A_1A_2} (q^2) = \frac{2}{(4\pi)^2} \left( s_{\alpha_1} s_{\alpha_2} - \frac{1}{\sqrt{2}} c_{\alpha_1} c_{\alpha_2} \right)^2 G(q^2, m_{A_1}^2, m_{A_2}^2), \\
& \Pi_{\pm}^{B_1B_2} (q^2) = \frac{2}{(4\pi)^2} \left( c_{\alpha_1} c_{\alpha_2} - \frac{1}{\sqrt{2}} s_{\alpha_1} s_{\alpha_2} \right)^2 G(q^2, m_{B_1}^2, m_{B_2}^2), \\
& \Pi_{\pm}^{A_1B_2} (q^2) = \frac{2}{(4\pi)^2} \left( s_{\alpha_1} c_{\alpha_2} + \frac{1}{\sqrt{2}} c_{\alpha_1} s_{\alpha_2} \right)^2 G(q^2, m_{A_1}^2, m_{B_2}^2), \\
& \Pi_{\pm}^{B_1A_2} (q^2) = \frac{2}{(4\pi)^2} \left( c_{\alpha_1} s_{\alpha_2} - \frac{1}{\sqrt{2}} s_{\alpha_1} c_{\alpha_2} \right)^2 G(q^2, m_{B_1}^2, m_{A_2}^2),
\end{align}
where
\begin{align}
& G(q^2, m_P^2, m_Q^2) = \int dx dy \delta (1-x-y) \Delta_{PQ} [\Upsilon+1 - \ln \Delta_{PQ}], \nonumber \\
& \Delta_{PQ} = -q^2 x(1-x) + x m_{P}^2 + y m_{Q}^2, \quad \Upsilon = \frac{2}{\epsilon} - \gamma - \ln (4 \pi).
\end{align}
The one loop contributions from four-point gauge interaction are denoted by $\Pi_\pm^{X}(q^2)$ where $X$ indicates the particle inside the loop.
They are summarized as follows;
\begin{align}
& \Pi_\pm^{A_{1}}(q^2) = - \frac{1}{2 (4\pi)^2} (4 s_{\alpha_{1}}^2+ c_{\alpha_{1}}^2) H(m_{A_{1}}^2), \\
& \Pi_\pm^{A_{2}}(q^2) = - \frac{1}{2 (4\pi)^2} (2 s_{\alpha_{2}}^2+ c_{\alpha_{2}}^2) H(m_{A_{2}}^2), \\
& \Pi_\pm^{B_{1}}(q^2) = - \frac{1}{2 (4\pi)^2} (4 c_{\alpha_{1}}^2+ s_{\alpha_{1}}^2) H(m_{B_{1}}^2), \\
& \Pi_\pm^{B_{2}}(q^2) = - \frac{1}{2 (4\pi)^2} (2 c_{\alpha_{2}}^2+ s_{\alpha_{2}}^2) H(m_{B_{2}}^2), \\
& \Pi_\pm^{\delta_{4/3}}(q^2) = - \frac{1}{(4\pi)^2} H(m_{\delta}^2),
\end{align}
where
\begin{equation}
H(m_P^2) = m_P^2 [\Upsilon + 1 - \ln m_P^2].
\end{equation}
\noindent
{\bf Contributions to $\Pi_{33}(q^2)$, $\Pi_{3Q}(q^2)$ and $\Pi_{QQ}(q^2)$ } \\
The one loop contributions from three-point gauge interaction are denoted by $\Pi_{33,3Q,QQ}^{XY}(q^2)$ where $X$ and $Y$ indicate particles inside the loop.
They are summarized as follows;
\begin{align}
& \Pi_{[33,3Q,QQ]}^{A_1 A_1} = \frac{2}{(4\pi)^2} \left[ \frac{1}{4} c_{\alpha_1}^4, \frac{1}{6} c_{\alpha_1}^2, \frac{1}{9} \right] G(q^2, m_{A_1}^2, m_{A_1}^2), \\
& \Pi_{[33,3Q,QQ]}^{B_1 B_1} = \frac{2}{(4\pi)^2} \left[ \frac{1}{4} s_{\alpha_1}^4, \frac{1}{6} s_{\alpha_1}^2, \frac{1}{9} \right] G(q^2, m_{B_1}^2, m_{B_1}^2), \\
& \Pi_{[33,3Q,QQ]}^{A_1 B_1} = \frac{2}{(4\pi)^2} \left[ \frac{1}{4} s_{\alpha_1}^2 c_{\alpha_1}^2, 0, 0 \right] G(q^2, m_{A_1}^2, m_{B_1}^2) \\
& \Pi_{[33,3Q,QQ]}^{A_2 A_2} = \frac{2}{(4\pi)^2} \left[ s_{\alpha_2}^4 - s_{\alpha_2}^2 c_{\alpha_2}^2 + \frac{1}{4} c_{\alpha_2}^4, \frac{2}{3} s_{\alpha_2}^4 - s_{\alpha_2}^2 c_{\alpha_2}^2 + \frac{1}{3} c_{\alpha_2}^4, \frac{4}{9} (s_{\alpha_2}^2 - c_{\alpha_2}^2)^2 \right] G(q^2, m_{A_2}^2, m_{A_2}^2), \\
& \Pi_{[33,3Q,QQ]}^{B_2 B_2} = \frac{2}{(4\pi)^2} \left[ c_{\alpha_2}^4 - s_{\alpha_2}^2 c_{\alpha_2}^2 + \frac{1}{4} s_{\alpha_2}^4, \frac{2}{3} c_{\alpha_2}^4 - s_{\alpha_2}^2 c_{\alpha_2}^2 + \frac{1}{3} s_{\alpha_2}^4, \frac{4}{9} (s_{\alpha_2}^2 - c_{\alpha_2}^2)^2 \right] G(q^2, m_{B_2}^2, m_{B_2}^2), \\
& \Pi_{[33,3Q,QQ]}^{A_2 B_2} = \frac{2}{(4\pi)^2} s_{\alpha_2}^2 c_{\alpha_2}^2 \left[ \frac{9}{4} , 3, 4 \right] G(q^2, m_{A_2}^2, m_{B_2}^2), \\
& \Pi_{[33,3Q,QQ]}^{\delta_{4/3} \delta_{4/3}} = \frac{2}{(4\pi)^2} \left[ 1, \frac{4}{3}, \frac{16}{9} \right] G(q^2, m_{\delta}^2, m_{\delta}^2).
\end{align}
The one loop contributions from four-point gauge interaction are denoted by $\Pi_{33,3Q,QQ}^{X}(q^2)$ where $X$ indicates the particle inside the loop.
They are summarized as follows;
\begin{align}
& \Pi_{[33,3Q,QQ]}^{A_1} = - \frac{2}{(4\pi)^2} \left[s_{\alpha_1}^2 + \frac{1}{4} c_{\alpha_1}^2, \frac{2}{3} s_{\alpha_1}^2+\frac{1}{6} c_{\alpha_1}^2, \frac{4}{9} s_{\alpha_1}^2 + \frac{1}{9} c_{\alpha_1}^2 \right] H( m_{A_1}^2), \\
& \Pi_{[33,3Q,QQ]}^{B_1} = - \frac{2}{(4\pi)^2} \left[c_{\alpha_1}^2 + \frac{1}{4} s_{\alpha_1}^2, \frac{2}{3} c_{\alpha_1}^2+\frac{1}{6} s_{\alpha_1}^2, \frac{4}{9} c_{\alpha_1}^2 + \frac{1}{9} s_{\alpha_1}^2 \right] H( m_{B_1}^2), \\
& \Pi_{[33,3Q,QQ]}^{A_2} = - \frac{2}{(4\pi)^2} \left[ \frac{1}{4} c_{\alpha_2}^2, \frac{1}{3} c_{\alpha_2}^2, \frac{1}{9} s_{\alpha_2}^2 + \frac{4}{9} c_{\alpha_2}^2 \right] H( m_{A_2}^2), \\
& \Pi_{[33,3Q,QQ]}^{B_2} = - \frac{2}{(4\pi)^2} \left[ \frac{1}{4} s_{\alpha_2}^2, \frac{1}{3} s_{\alpha_2}^2, \frac{1}{9} c_{\alpha_2}^2 + \frac{4}{9} s_{\alpha_2}^2 \right] H( m_{B_2}^2), \\
& \Pi_{[33,3Q,QQ]}^{\delta_{4/3}} = - \frac{2}{(4\pi)^2} \left[ 1, \frac{4}{3} , \frac{16}{9} \right] H( m_{\delta}^2).
\end{align}
\end{appendix}
\section{Introduction}
Historically, the first instance of a string amplitude is the Veneziano amplitude, describing the scattering of four `would-be' pions but actually tachyons. Yet, thanks to the powerful constraints of conformal invariance and modular invariance in the perturbative regime, a systematic construction has been successfully achieved for closed-string theories much earlier than for open-string theories \cite{Green:1987sp, Green:1987mn, Polchinski:1998rq, Polchinski:1998rr, Kiritsis:2007zza}. The systematic construction of theories with open unoriented strings was completed in the early 90's \cite{Bianchi:1990yu, Bianchi:1990tb}\footnote{For a review see {\it e.g.}~ \cite{Angelantonj:2002ct}.}. The advent of D-branes \cite{Angelantonj:2002ct} triggered an enormous interest in the subject that led to the construction of exactly solvable models with D-branes and $\Omega$-planes based on tori, orbifolds, free fermions, minimal models, group manifolds and Gepner models \cite{Bianchi:1988ux, Bianchi:1988fr, Bianchi:1989du, Bianchi:1990yu, Bianchi:1990tb, Bianchi:1991rd, Bianchi:1991eu, Angelantonj:1996mw, Angelantonj:1996uy,
Pradisi:1995qy, Recknagel:1997sb, Blumenhagen:2003su, Blumenhagen:2004cg, Dijkstra:2004ym, Anastasopoulos:2006da}. Most of the attention has been devoted to the spectrum {\it i.e.}~ to the one-loop partition function.
Much less work has been devoted to the computation of scattering amplitudes in non-trivial settings based on genuinely interacting CFT's \cite{Bianchi:1991rd, Angelantonj:1996mw, Pradisi:1995qy, Recknagel:1997sb, Blumenhagen:2003su, Blumenhagen:2004cg, Dijkstra:2004ym, Anastasopoulos:2006da}. The only prominent exceptions are twist field correlators on tori and orbifolds \cite{Gava:1997jt, David:2000um, David:2000yn, Cvetic:2003ch,Abel:2003vv,Abel:2003yx}. As reviewed in \cite{Light}, twist fields are essential ingredients in the vertex operators for open string states living at brane intersections. However, one should keep in mind that while closed-string twists are quantized (rational numbers $k/n$), intersection angles are not, since they depend on both discrete (wrappings) and continuous (moduli) parameters.
The aim of the present note is to derive open-string twist-field correlators using algebraic techniques, devised in \cite{Bianchi:1990yu, Bianchi:1990tb} and applied to minimal models \cite{Bianchi:1991rd} and later on to WZW models \cite{Pradisi:1995qy}, complementing the `stress tensor method' \cite{Gava:1997jt, David:2000um, David:2000yn, Cvetic:2003ch,Abel:2003vv,Abel:2003yx}. In this approach one has to carefully decompose the `parent' closed-string amplitude into conformal blocks and then take the `square root' in each sector, thus producing chiral blocks evaluated at a real argument that are to be combined with appropriate coefficients. The latter are tightly constrained by `planar duality' \cite{Bianchi:1991rd} and by correct factorization, {\it i.e.}~ correct normalization and positivity of the residues of the poles.
We consider correlators of four bosonic twist-fields with one or two independent angles, recalling the closed-string computation and then deriving the open string results. In order to take the relevant square-root, we need to rewrite the closed-string result as a sesqui-linear form in the conformal blocks. We comment on possible ambiguities and arbitrariness in the procedure. As mentioned before, open-string twist-fields can be considered `descendants' of closed-string ones only for rational intersection angles. Since rational angles form a dense set, one can safely extend the result to irrational values (in units of $\pi$) of the angles.
We conclude with preliminary considerations on how to generalize our approach to more sophisticated cases (such as WZW or Gepner models) that are not related in such a simple way as orbifolds to free CFT's.
\section{General strategy \label{sec general strategy}}
In the works of \cite{Gava:1997jt, David:2000um, David:2000yn, Cvetic:2003ch,Abel:2003vv,Abel:2003yx} the authors compute open string correlators that contain multiple bosonic twist fields. In order to do so they extend the world-sheet, whose original domain is the upper half complex plane, via the ``doubling trick'' to the full complex plane and employ conformal field theory techniques that are related to the study of bosonic twist fields in the context of closed string theory on orbifolds \cite{Dixon:1986qv,Burwick:1990tu}.
Here we follow a different approach based on the systematic construction of open-string theories from their parent closed-string theories \cite{Bianchi:1990yu, Bianchi:1990tb}. This algebraic procedure has been mostly applied to the determination of the spectrum coded in the four contributions -- torus, Klein-bottle, annulus and M\"obius-strip -- to the one-loop partition function.
One notable extension of the systematic procedure is the computation of 4-point correlation functions of open strings at tree-level (disk) in minimal models that display the desired factorization properties and `planar duality' \cite{Bianchi:1991rd}. Later on this computation was generalized to $SU(2)$ WZW models \cite{Pradisi:1995qy}.
Quite remarkably both one-loop amplitudes and 4-point correlators at tree level depend on one `invariant', which is complex for oriented closed strings and `real' for open and unoriented strings. At one-loop one has the complex modular parameter $\tau$ for the torus, that becomes purely imaginary ($\tau = i \tau_2$) for Klein-bottle and Annulus and has a fixed real part ($\tau_1= 1/2$) for the M\"obius-strip \cite{Bianchi:1988ux, Bianchi:1988fr, Bianchi:1989du, Bianchi:1990yu, Bianchi:1990tb}. At tree level, one has the complex cross-ratio $z=z_{12}z_{34}/z_{13}z_{24}$ for the sphere and a real cross ratio $x=x_{12}x_{34}/x_{13}x_{24}$ for the upper half plane, topologically equivalent to a disk.
As we will momentarily see, the close similarity between tree-level 4-point correlators and one-loop partition functions is more than a mere analogy. Indeed, twist-field correlators on the sphere can be expressed as (fake) torus partition functions once an appropriate branched-cover of the sphere is considered that trivializes the monodromies.
Before entering the details of the twist-field correlators, let us describe our strategy
\cite{BianchiPHD, StanevMaster, Fuchs:1999fh, Fuchs:2000hn}. Generically a closed string correlator takes the form
\begin{align}\nonumber
{\cal A}_{closed} &= \langle \Phi_{h_1,\overline h_1}(z_1, \bar{z}_1) \Phi_{h_2,\overline h_2}(z_2, \bar{z}_2)\Phi_{h_3,\overline h_3}(z_3, \bar{z}_3)\Phi_{h_4,\overline h_4}(z_4, \bar{z}_4)\rangle \\ &= \prod_{i<j} |z_{ij}|^{-(h_i + \overline h_i +h_j +\overline h_j)+{\Delta +\overline \Delta \over 3}}\sum_{i,j} c_{ij} {\cal F}_i(z) \overline {\cal F}_j (\overline z)
\end{align}
where $\Delta= h_1 + h_2 +h_3 +h_4$, $\overline \Delta= \overline h_1 + \overline h_2 + \overline h_3 +\overline h_4$ and $ {\cal F}_i$ and $ \overline {\cal F}_j$ denote the holomorphic and anti-holomorphic conformal blocks, respectively. The index $i$ runs over the conformal families that appear in the s-channel {\it i.e.}~ in the OPE's of both $\Phi_{h_1}\Phi_{h_2}$ and $\Phi_{h_3}\Phi_{h_4}$ exposed in the limit $z\rightarrow 0$. Clearly one can express the same amplitude in the t- or u- channels that give rise to different OPE's and thus different bases of conformal blocks. A defining property of the conformal blocks ${\cal F}_i(z)$ is that, up to an overall factor $z^{h-h_1-h_2}$, they admit an expansion in integer powers of $z$ very much as one-loop characters $\chi_h(w)$ admit an expansion in integer powers of $w=\exp(2\pi i \tau)$, up to an overall factor $w^{h-{c\over 24}}$.
Closely following the systematic construction of one-loop amplitudes, open string 4-point correlators take the form
\begin{align}
{\cal A}_{open} = \langle \phi_{h_1}(x_1) \phi_{h_2}(x_2) \phi_{h_3}(x_3) \phi_{h_4}(x_4)\rangle = \prod_{i<j} x_{ij}^{-(h_i +h_j)+{\Delta \over 3}}\sum_{i} a_i{\cal F}_i(x)
\end{align}
where $x$ is real. The coefficients $a_i$ are tightly constrained by planar duality \cite{Bianchi:1991rd, Pradisi:1995qy} and are adjusted in such a way that they give rise to the correct pole structures. For our purpose some specific limits correspond to the exchange of the identity sector or other untwisted states.
In the simple case of $Z_2$ twist-fields, the above procedure was essentially followed in \cite{Antoniadis:1993jp}. For generic twists, the closed string bosonic correlator with one or two independent angles takes the form
\begin{align}
{\cal A}_{closed} = |K(z)|^2 \, \sum_{\vec{k},\vec{v}} c_{\vec{k}\vec{v}} w(z)^{\frac{\alpha' p^2_L}{4}} \,\,\overline w(\overline z)^{\frac{\alpha' p^2_R}{4}}\,\,,
\label{eq closed string form}
\end{align}
where $w(z)$ is a holomorphic function and $p_L$ and $p_R$ denote the closed string momenta
\begin{align}
p^2_L =\left(\vec{k}+ \frac{\vec{v}}{\alpha'}\right)^2 \qquad \qquad p^2_R = \left(\vec{k}- \frac{\vec{v}}{\alpha'}\right)^2\,\,.
\end{align}
Note that $\vec{k}$ and $\vec{v}$ are two dimensional vectors in the dual lattice $\Lambda^*$ and the lattice $\Lambda$, respectively.
Then the corresponding open string result will be given by
\begin{align}
{\cal A}_{open} = K(x) \, \sum_{p,q} c_{p,q} w(x)^{\alpha' p^2_{open}}
\end{align}
with $p$ and $q$ being integers and $p^2_{open}$ denoting the open string mass. For a D1-brane wrapping a one-cycle on a two torus the mass is given
by \cite{Kors:2001ku,Angelantonj:2005hs}
\begin{align}
p^2_{open}= \frac{1}{L^2} p^2 +\frac{1}{\alpha'^2}\frac{R^2_1 R^2_2}{L^2} q^2\,\,.
\end{align}
Here $R_1$ and $R_2$ are the radii of the two torus and $L$ denotes the length of the brane the open string is attached to. The first term corresponds to the Kaluza-Klein states along the brane while the latter to the winding excitations along the perpendicular direction.
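As a numerical illustration of this mass formula (a sketch with example values), one may take a D1-brane wrapping an $(n,m)$ cycle on a rectangular $T^2$ and identify $L$ with the length of the wrapped cycle, $L=\sqrt{(nR_1)^2+(mR_2)^2}$, which is our assumption here.
\begin{verbatim}
# Open-string KK/winding masses for a D1-brane on a rectangular T^2 (example values).
import math

alpha_p = 1.0                        # alpha' in string units
R1, R2 = 1.5, 2.0                    # torus radii (string units)
n, m = 2, 1                          # wrapping numbers
L = math.hypot(n*R1, m*R2)           # length of the wrapped one-cycle (our assumption)

def p2_open(p, q):
    """KK (p) plus winding (q) contribution to the open-string mass squared."""
    return (p/L)**2 + (R1*R2/(alpha_p*L))**2 * q**2

for p, q in [(0, 0), (1, 0), (0, 1), (2, 1)]:
    print(f"(p, q) = ({p}, {q}):  alpha' p^2_open = {alpha_p*p2_open(p, q):.3f}")
\end{verbatim}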
Thus one of the tasks in our approach is to bring the closed string bosonic twist field correlator into the form of \eqref{eq closed string form}. Such an expression then allows one to make an ansatz for the open string correlator as laid out above.
Given this ansatz one furthermore has to determine the coefficients $c_{p,q}$ by analyzing specific limits that correspond to the exchange of universal states like the gauge bosons (identity sector) or their excitations in the untwisted sector. Positivity of the residues of the poles in all channels puts severe constraints on the $c_{p,q}$ and guarantees `planar duality' \cite{Bianchi:1991rd, Pradisi:1995qy}.
\section{Bosonic twist field correlator with one independent angle}
Generically the closed string correlator is determined via the energy-momentum tensor method and given in Lagrangian form. As already mentioned, in order to extract the open string correlator from the closed string one, one has to manipulate the closed string result in such a way that it takes the form \eqref{eq closed string form}. Specifically, this requires a Poisson resummation in one of the lattice variables.
\subsection{Closed string correlator}
Using the energy-momentum tensor method the quantum part of the closed string bosonic twist field correlator with one independent angle has been shown to be \cite{Dixon:1986qv}
\begin{align}
|z_{\infty}|^{2 \theta (1-\theta)}\,\,\langle \sigma_{1-\theta}(0)\, \sigma_{\theta}(z,\overline z) \,
\sigma_{1-\theta}(1) \, \sigma_{\theta} (\infty)\rangle = {\cal C} \,\, \frac{|z(1-z)|^{-2\theta(1-\theta)}}{F(z)\overline F(1-\overline z) +\overline F( \overline z) F(1-z)}\,\,,
\end{align}
where $0< \theta < 1$ is a rational number encoding the twist ($k/n$) and $F(z)$ denotes the hypergeometric function
\begin{align}
F(z)={_2F}_1[\theta,1-\theta,1,z] \,\,.
\end{align}
The action for the classical solutions is given by (here and in the following we set $\alpha'=2$ for the closed string computations)
\begin{align}
S_{cl}(v_1,v_2)=\frac{\pi}{4 \tau_2 \sin(\pi \theta)} \big[ v_2 \overline v_2 + \tau_1 \left( v_1 \overline v_2 \overline \beta + \overline v_1 v_2 \beta \right) + |\tau|^2 v_1 \overline v_1\big]\,\,,
\end{align}
where $\beta=-i e^{-i\pi \theta}$ denotes a phase and $\tau(z)$ denotes the modulus of a ``fake torus'' introduced in \cite{Dixon:1986qv}
\begin{align}
\tau(z) =\tau_1+i\tau_2=i\frac{F(1-z)}{F(z)} \qquad \qquad \qquad \overline \tau(\overline z) =\tau_1-i\tau_2=-i\frac{\overline F(1-\overline z)}{\overline F(\overline z)}\,\,.
\label{eq definition of tau}
\end{align}
Here $v_1$ and $v_2$ denote vectors belonging to some coset of the lattice $\Lambda$.
Combining both results gives
\begin{align}
|z_{\infty}|^{2 \theta (1-\theta)}\,\, \langle \sigma_{1-\theta}(0)\, \sigma_{\theta}(z,\overline z) \,
\sigma_{1-\theta}(1) \, \sigma_{\theta} (\infty)\rangle = {\cal C} \,\, \frac{|z(1-z)|^{-2\theta(1-\theta)}}{\tau_2(z,\overline z) |F(z)|^2} \,\, \sum_{v_1,v_2} e^{-S_{cl}(v_1,v_2)}\,\,,
\end{align}
where we used \eqref{eq definition of tau} to simplify the quantum part. This is the Lagrangian form of the correlator. In the following we will perform a Poisson resummation over the variable $v_2$ to obtain the Hamiltonian form, which is much more convenient for extracting the corresponding open string result.
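Recall that for a two-dimensional lattice $\Lambda$ with unit cell volume $V_{\Lambda}$ and a positive definite quadratic form $A$, Poisson resummation takes the schematic form (with the sign of the phase depending on the conventions for the shift vector $x$)
\begin{align}
\sum_{v \in \Lambda} e^{-\pi\, (v+x)^T A\, (v+x)} = \frac{1}{V_{\Lambda}\, \sqrt{\det A}}\, \sum_{k \in \Lambda^*} e^{-\pi\, k^T A^{-1} k\, +\, 2\pi i\, k \cdot x}\,\,,
\end{align}
which is the origin of both the volume factor $V_{\Lambda}$ and the phase involving $f_{23}$ in the Hamiltonian form below.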
After Poisson resumming one ends up with a sum over momenta in the dual lattice $\Lambda^*$. However, since $v_2$ runs only over a coset of the lattice $\Lambda$, we first substitute $v_2=-2\,\beta^{-1}\,\sin(\pi \theta) \left( f_{23} + q \right)$, so that the summation over $q$ runs over the entire lattice $\Lambda$. Poisson resummation over $q$ then yields
\begin{align}
\label{eq hamiltonian form closed 1 angle}
|z_{\infty}|^{2 \theta (1-\theta)}\,\, \langle \sigma_{1-\theta}(0)\, \sigma_{\theta}(z,\overline z) \,
\sigma_{1-\theta}(1) \, \sigma_{\theta}& (\infty)\rangle = \frac{{\cal C}}{V_{\Lambda}} \,\, \frac{|z(1-z)|^{-2\theta(1-\theta)}}{ |F(z)|^2}\\ \nonumber
\times &\sum_{k \in \Lambda^*, v \in \Lambda_c} \exp\left[-2 \pi i f_{23} \cdot k \right] \,\, w(z)^{\frac{1}{2}(k+\frac{v}{2})^2}\, \overline w(\overline z)^{\frac{1}{2}(k-\frac{v}{2})^2},
\end{align}
where $\Lambda_c= (1-\theta)(f_{21}+\Lambda)$ denotes the coset over which $v$ runs, $V_{\Lambda}$ is the volume of the unit cell of the lattice $\Lambda$, and $w(z)$ is given by
\begin{align}
w(z)=\exp\left[ \frac{i\pi \tau(z)}{\sin(\pi \theta)}\right]\,\,.
\end{align}
Note that the result \eqref{eq hamiltonian form closed 1 angle} takes the form \eqref{eq closed string form} that allows us to extract the open string result.
\subsection{Open string correlator}
Before we extract the open string correlator from the closed string one, let us briefly discuss such a correlator in the context of intersecting D-branes on a compactified six-torus $T^6=T^2 \times T^2 \times T^2$. In this work we assume that all branes go through the origin of the respective two-tori, thus all Wilson lines vanish. Moreover, we further simplify the setup by assuming that the branes $a$ and $b$ wrap one-cycles on the respective two-tori such that they intersect exactly once. Our results can be easily generalized to the more generic case of an arbitrary number of intersections and non-vanishing Wilson lines. We refer the interested reader to \cite{Cremades:2003qj,Cvetic:2003ch,Abel:2003yx, Abel:2003vv,Cremades:2004wa} for more details.
Figure \ref{fig twistoneangle} depicts the intersection of two D-branes labeled by $a$ and $b$ on one of the three two-tori. At the intersection there exists a massless fermion going from brane $b$ to brane $a$, whose vertex operator contains the bosonic twist field $\sigma_{\theta}$, where $\theta$ denotes the intersection angle on that two-torus.
At the risk of being pedantic, we note that, contrary to the twist in the `parent' closed string, $\theta$ is not quantized in the open-string case.
At the very same intersection there will be its antiparticle, a string going from brane $a$ to brane $b$, whose vertex operator contains an anti-twist field $\sigma_{1-\theta}$\footnote{For a detailed discussion on vertex operators of massless states for arbitrary intersection angles, see \cite{Cvetic:2006iz,Bertolini:2005qh}, for a generalization to massive states see \cite{Light} and for a discussion on instantonic states at the intersection of D-instanton and D-brane at arbitrary angles, see \cite{Cvetic:2009mt}. For an analysis of vertex operators for massive states in heterotic string theories, see \cite{Bianchi:2010es}.}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=5cm]{4sigmas1angle.eps}
\end{center}
\caption{Intersection of two D-brane stacks, $a$ and $b$, in one complex dimension.}
\label{fig twistoneangle}
\end{figure}
Even though in this work we only consider the twist field correlator and not the amplitude of four matter fields, which would contain additional conformal fields, it is worthwhile to note that the unfixed twist field position $x$ should remain in the interval $[0,1]$, since otherwise the Chan-Paton factors make the whole amplitude vanish.
Finally, before we perform the steps laid out in section \ref{sec general strategy} to derive the open string correlator let us discuss our expectations for the two limits $x \rightarrow 0$ and $x \rightarrow 1$. In both limits we expect the exchange of untwisted string states, just as one observes in the closed string correlator. We will use gauge-boson exchange to normalize the correlator.
The result \eqref{eq hamiltonian form closed 1 angle} is of the type that allows us to extract the open string correlator. Since the open string result should only depend on real parameters we expect $w(x)$ to take the form
\begin{align}
w(x)=\exp\left[- \frac{\pi t(x)}{\sin(\pi \theta)}\right]
\label{eq def w(x) one angle}
\end{align}
with $t(x)$ given by \eqref{eq definition of tau}
\begin{align}
t(x) = \frac{1}{2 i} \left( \tau(x) - \overline \tau(x )\right)= \frac{F(1-x)}{F(x)}\,\,.
\end{align}
In the last step we used the fact that $\overline F(x) = F(x)$ for $0\leq x\leq 1 $.
Concerning the quantum part, one expects only the holomorphic part to survive. Finally, the mass squared of the open strings in the untwisted sector is given by \cite{Kors:2001ku,Angelantonj:2005hs}
\begin{align}
M^2_{open}(p,q)= \frac{1}{L^2_a} p^2+ \frac{1}{\alpha'^2}\frac{R^2_1 \, R^2_2}{L^2_a} q^2\,\,,
\label{eq mass KK and winding}
\end{align}
where $R_1, R_2$ are the radii of the two-torus and $L_a$ is the length of the D-brane
\begin{align}
L_a = \sqrt{n^2_a R^2_1+ m^2_a R^2_2 }
\end{align}
with $n_a$, $m_a$ being the wrapping numbers of the brane $a$ on the two torus.
Thus the open string result takes the form
\begin{align}
x^{ \theta (1-\theta)}_{\infty} \langle \sigma_{1-\theta}(0)\, \sigma_{\theta}(x) \,
\sigma_{1-\theta}(1) \, \sigma_{\theta} (\infty)\rangle = {\cal C}_{open} \frac{[x(1-x)]^{-\theta(1-\theta)}}{F(x)} \sum_{p,q} w(x)^{ \left( \frac{\alpha'}{L^2_a} p^2 + \frac{1}{\alpha'}\frac{R^2_1 \, R^2_2}{L^2_a} q^2 \right)}
\label{eq suggested result open one angle}
\end{align}
with $w(x)$ given in \eqref{eq def w(x) one angle}.
This ansatz has to satisfy various non-trivial constraints that generalize `planar duality': in the limit $x\rightarrow 0$ as well as for $x \rightarrow 1$ we expect an exchange of untwisted states. This condition also allows us to fix the constant ${\cal C}_{open}$.
Let us start with the limit $x \rightarrow 0$. In that limit $t(x)$ behaves as
\begin{align}
\lim_{x\rightarrow 0} t(x) =\frac{1}{\pi} \sin(\pi \theta)\, \left( -\ln(x) +\ln(\delta)\right)
\end{align}
with
\begin{align}
\ln(\delta) = 2 \psi(1) -\psi(\theta)- \psi(1-\theta)\,\,.
\end{align}
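This behaviour follows from the standard logarithmic asymptotics of the hypergeometric function near unit argument,
\begin{align}
F(1-x)={_2F}_1[\theta,1-\theta,1,1-x] = \frac{\sin(\pi \theta)}{\pi}\Big[ -\ln(x) + 2 \psi(1) -\psi(\theta)- \psi(1-\theta)\Big] + {\cal O}(x \ln x)\,\,,
\end{align}
together with $F(x)\rightarrow 1$ for $x\rightarrow 0$.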
Plugging this limit into \eqref{eq suggested result open one angle} one obtains
\begin{align}
{\cal C}_{open} x^{-\theta(1-\theta)} \sum_{p,q} \left(\frac{x}{\delta}\right)^{ \left( \frac{\alpha'}{L^2_a} p^2 + \frac{1}{\alpha'}\frac{R^2_1 \, R^2_2}{L^2_a} q^2 \right)}
\label{eq result one angle a}
\end{align}
Let us turn to the $x \rightarrow 1$ limit, which also exhibits the exchange of untwisted states, corresponding to strings starting and ending on brane $b$ rather than on brane $a$. In order to determine this limit we perform a Poisson resummation in the variables $p$ and $q$ and obtain
\begin{align}
{\cal C}_{open} \frac{[x(1-x)]^{-\theta(1-\theta)}}{F(x)} \, \frac{\sin(\pi \theta)}{t(x)} \frac{L^2_a}{R_1 \, R_2} \, \sum_{\tilde{p} \in \Lambda_{p}, \tilde{q} \in {\Lambda}^*_{q}} \exp\left[-\frac{\pi \sin(\pi \theta)}{t(x)} \left( \frac{L^2_a}{\alpha'} \tilde{p}^2 +\frac{ \alpha'\, L^2_a}{R^2_1\, R^2_2} \tilde{q\,}^2 \right) \right]
\end{align}
With the identification (recall that brane $a$ and brane $b$ intersect exactly once on the torus)
\begin{align}
\sin(\pi \theta) = \frac{R_1 R_2}{L_a L_b}
\end{align}
this further simplifies to
\begin{align}
{\cal C}_{open} \frac{[x(1-x)]^{-\theta(1-\theta)}}{F(1-x)} \, \frac{L_a}{L_b} \,\, \sum_{\tilde{p} \in \Lambda_{p}, \tilde{q} \in \Lambda^*_{q}} \exp\left[-\frac{\pi }{\sin(\pi \theta) t(x)} \left( \frac{R^2_1 R^2_2}{ \alpha' \, L^2_b} \tilde{p}^2 +\frac{\alpha'}{L^2_b} \tilde{q}^2 \right) \right]\,\,.
\end{align}
Now performing the limit $x \rightarrow 1$ yields
\begin{align}
{\cal C}_{open} \frac{L_a}{L_b} (1-x)^{-\theta(1-\theta)} \sum_{\tilde{p},\tilde{q}} \left(\frac{1-x}{\delta}\right)^{ \left( \frac{\alpha'}{L^2_b} \, \tilde{q}^2+ \frac{R^2_1 \, R^2_2}{\alpha' L^2_b} \, \tilde{p}^2\right)}\,\,.
\label{eq result one angle b}
\end{align}
Comparing the limits \eqref{eq result one angle a} and \eqref{eq result one angle b} it is natural to assume that the normalization constant is given by ${\cal C}_{open}= \frac{\sqrt{\alpha'}}{L_a}$.
Let us now compare the derived open string expression to the ones in \cite{Gava:1997jt, David:2000um, David:2000yn, Cvetic:2003ch,Abel:2003vv,Abel:2003yx} derived via the energy-momentum tensor method after extending the world-sheet to the whole complex plane. The first thing to notice is that \eqref{eq suggested result open one angle} is given in the Hamiltonian form, while the correlator in \cite{Gava:1997jt, David:2000um, David:2000yn, Cvetic:2003ch,Abel:2003vv,Abel:2003yx} is of Lagrangian type. Thus we have to Poisson resum \eqref{eq suggested result open one angle} over the variable $p$, which gives
\begin{align}
& \sqrt{\sin(\pi \theta)}\frac{[x(1-x)]^{-\theta(1-\theta)}}{\sqrt{F(x) F(1-x)}} \\ & \hspace{1cm}\times \sum_{\tilde{p}, q} \exp\left[-\frac{\pi}{\alpha'} \sin(\pi \theta) F(x) F(1-x) \left(\left( \frac{\tilde{p} L_a}{F(1-x)}\right)^2+
\left( \frac{q L_b}{F(x)}\right)^2\right) \right] \,\,.\nonumber
\end{align}
This is indeed the result obtained via employing conformal field theory techniques. We conclude that in this case there is no ambiguity in the choice of the coefficients of the linear combination of conformal blocks.
Before we turn to the bosonic twist correlator with two independent angles let us determine the sub-dominant poles for the correlator \eqref{eq suggested result open one angle} in the limit $x \rightarrow 0$. This requires the knowledge of the behaviour of $F(x)$ for small $x$, which is (see appendix \ref{app hypergeometric})
\begin{align}
F(x) = \frac{1}{\Gamma(\theta) \, \Gamma(1-\theta)} \sum^{\infty}_{n=0} \frac{\Gamma(\theta+n) \, \Gamma(1-\theta+n)}{\Gamma(n+1)} \frac{x^n}{n !}\,\,.
\end{align}
The fact that the correlator \eqref{eq suggested result open one angle} behaves in the limit $x \rightarrow 0$ as
\begin{align}
\frac{\sqrt{\alpha'}}{L_a} x^{-\theta(1-\theta)} \left( 1 - \theta(1-\theta) x -\frac{1}{2} \theta (1-\theta) (\theta^2 - \theta + 2) x^2+ ... \right)
\sum_{p,q} \left(\frac{x}{\delta}\right)^{ \left( \frac{\alpha'}{L^2_a} p^2 + \frac{1}{\alpha'}\frac{R^2_1 \, R^2_2}{L^2_a} q^2 \right)}
\end{align}
suggests the following OPE
\begin{align}
\sigma_{\theta}(z) \, \sigma_{1-\theta} (w) & \sim \sqrt{\frac{\sqrt{\alpha'}}{L_a}} (z-w)^{-\theta(1-\theta)} \,\, \mathbf{1} \\ \nonumber
& \hspace{15mm} + \sqrt{\frac{\sqrt{\alpha'}}{L_a}} \sqrt{\theta(1-\theta)} (z-w)^{-\theta(1-\theta)+1} \left( e^{i\alpha} \partial X + e^{-i\alpha} \partial \overline X \right) +...
\end{align}
Here the phase $\alpha$ indicates the arbitrariness in the definition of the conformal fields $\sigma$ as well as $\partial X$ and $\partial \overline X$, and can be eliminated by an appropriate redefinition of the latter. One can easily extend this OPE to higher order poles, which then involve conformal fields with larger conformal dimension.
\section{Bosonic twist field correlator with two independent angles}
Now we turn to the bosonic twist correlator with two independent angles. We follow the same procedure as above for the correlator with just one independent angle. We will manipulate the closed string result to obtain an expression of type \eqref{eq closed string form} and extract from that the open string result.
\subsection{Closed string correlator}
Let us display the complete result, containing the quantum as well as the classical part, for the four twist field correlator with two independent angles determined in
\cite{Burwick:1990tu}\footnote{See also \cite{Stieberger:1992bj,Erler:1992gt}.} (again we set $\alpha'=2$)
\begin{align}
\label{eq correlator 2 independent angles}
& |z_{\infty}|^{2 \nu (1-\nu)}\,\,\langle \sigma_{1-\theta}(0)\, \sigma_{\theta}(z,\overline z) \,
\sigma_{1-\nu}(1) \, \sigma_{\nu} (\infty)\rangle \\ & \hspace{2cm} =|z|^{-2\theta(1-\theta)}
\left(1-z\right)^{-\nu(1-\theta)}\, \left(1-\overline z\right)^{-\theta(1-\nu)}
I^{-1}( z, \bar{z}) \,\,\sum_{v_1, v_2}e^{-S_{cl}} \nonumber
\end{align}
with $I(z, \bar{z})$ given by
\begin{align*}
I(z, \bar{z})= \frac{1}{2\pi} \big[B_1(\theta,\nu) \,G_2( z)
\overline H_1(1-\overline z)+ B_2(\theta,\nu) \, \overline G_1(\overline z) {H_2}(1- z)\big]\,\,,
\end{align*}
where
\begin{align*}
\begin{gathered}
B_1(\theta,\nu)=\frac{\Gamma(\theta)\,\Gamma(1-\nu)}{\Gamma(1+\theta-\nu)}\qquad
B_2(\theta,\nu)=\frac{\Gamma(\nu)\,\Gamma(1-\theta)}{\Gamma(1+\nu-\theta)}\\
\\
G_1(z)= {_2F}_1[\theta,1-\nu,1;z]\qquad
G_2(z)= {_2F}_1[1-\theta,\nu,1;z]\\
\\
H_1(z)= {_2F}_1[\theta,1-\nu,1+\theta-\nu;z]\qquad
H_2(z)={_2F}_1[1-\theta,\nu,1-\theta+\nu;z]\,\,.
\end{gathered}
\end{align*}
and $S_{cl}$ is
\begin{align}
S_{cl}=V_{11} v_1 \overline v_1 +V_{12} v_1 \overline v_2 +V^*_{21} v_2 \overline v_1 +V_{22} v_2 \overline v_2 \,\,.
\end{align}
Here the $V_{ij}$ are given by
\begin{align} \nonumber
V_{11} &= \frac{1}{4} \left(\frac{\sin(\pi \theta)}{\pi}\right)^2 |I(z,\overline z)|^{-2} \\ & \nonumber
\hspace{1cm}\left( \,\, B_2 \big|H_2(1-z)\big|^2\Big[ B_1 G_1(z) \overline H_1(1-\overline z)+ B_1 \overline G_1(\overline z) H_1 (1- z) \right.
\\& \left.\nonumber
\hspace{5cm}
+ \pi \left( \cot(\pi \nu) - \cot(\pi \theta) \right)\left| G_1(z)\right|^2\Big]
\right. \\ & \left. \nonumber
\hspace{1cm} +B_1 \big|H_1(1-z)\big|^2\Big[ B_2 G_2(z) \overline H_2(1-\overline z)+ B_2\overline G_2(\overline z) H_2 (1- z)
\right. \\ &\left.
\hspace{5cm}
+ \pi \left( \cot(\pi \theta) - \cot(\pi \nu) \right)\left| G_2(z)\right|^2\Big] \right)
\end{align}
\begin{align} \nonumber
V_{12} &= \frac{e^{i\pi \theta}}{4} \frac{\sin(\pi \theta)}{\pi} |I(z,\overline z)|^{-2} \\ & \nonumber
\hspace{1cm}\left( \,\, B_2 G_2 (z) \overline H_2(1-\overline z)\Big[ B_1 G_1(z) \overline H_1(1-\overline z)+ B_1 \overline G_1(\overline z) H_1 (1- z) \right.
\\& \left.\nonumber
\hspace{5cm}
+ \pi \left( \cot(\pi \nu) - \cot(\pi \theta) \right)\left| G_1(z)\right|^2\Big]
\right. \\ & \left. \nonumber
\hspace{1cm} -B_1 \overline G_1(\overline z ) H_1(1- z)\Big[ B_2 G_2(z) \overline H_2(1-\overline z)+ B_2\overline G_2(\overline z) H_2 (1- z)
\right. \\ &\left.
\hspace{5cm}
+ \pi \left( \cot(\pi \theta) - \cot(\pi \nu) \right)\left| G_2(z)\right|^2\Big] \right)
\end{align}
\begin{align} \nonumber
V_{22} = \frac{1}{4} |I(z,\overline z)|^{-2} \,\, &\left( \,\, G_1(z) G_2 (z) \Big[ B_1 \overline G_2(\overline z) \overline H_1(1-\overline z)+ B_2 \overline G_1(\overline z) \overline H_2 (1-\overline z)\Big] \right.
\\& \left.
\hspace{0.1cm} + \overline G_1( \overline z) \overline G_2 (\overline z) \Big[ B_1 G_2( z) H_1(1- z)+ B_2 G_1(z) H_2 (1-z)\Big]\right)\,\,.
\end{align}
In order to extract the open-string correlator from the closed-string one let us perform a Poisson resummation, just as we did for the four bosonic twist correlator with just one independent angle. The substitution $v_2 = 2 i e^{-i\pi \theta} \sin(\pi \theta) \left( f_{23} + q \right)$ leads to a summation over the lattice $\Lambda$
\begin{align}
\sum_{q, v_1 \in \Lambda} e^{-S_{cl}} = \sum_{q, v_1 \in \Lambda} &\exp\Big[ -V_{11} |v_1|^2 + 2 i\, \sin(\pi \theta) e^{i\pi \theta} V_{12}\,\, v_1 \left( \overline f_{23} + \overline q \right) \\ \nonumber
& -2 i\, \sin(\pi \theta) e^{-i\pi \theta} V^*_{12}\,\, \overline v_1 \left( f_{23} + q \right) - 4 \sin^2(\pi \theta) V_{22} \,\, \left( f_{23} + q \right) \left( \overline f_{23} + \overline q \right) \Big]\,\,,
\end{align}
which allows us to perform the Poisson resummation. One obtains (note that here we omit the quantum part)
\begin{align*}
\frac{\pi}{4 \sin^2(\pi \theta) V_{22}}
\,\,\sum_{k \in \Lambda^*, v \in \Lambda} & \exp\Big[ -\frac{\pi^2}{4 \sin^2(\pi \theta) V_{22} }
|\vec{k}|^2 - 2 \pi i \vec{f}_{23} \cdot \vec{k} + \pi \frac{\left( e^{-i \pi \theta} V_{12} -e^{i \pi \theta} {V_{12}}^*\right)}{2 \sin( \pi \theta)\,\, V_{22} } \, \vec{k} \cdot \vec{v}\\
& + i \pi \frac{\left( e^{-i \pi \theta} V_{12} +e^{i \pi \theta} {V_{12}}^*\right)}{2 \sin( \pi \theta)
\,\, V_{22} } \vec{k}^{\,T}
\left(\begin{array}{cc}
0&1\\
-1& 0
\end{array}
\right) \vec{v} + \left(\frac{V_{12} {V_{12}}^*}{V_{22}} -V_{11} \right) |\vec{v}|^2
\Big]\,\,.
\end{align*}
Some comments are in order. First of all, note that we have renamed $v_1 \rightarrow v$. Moreover, in the last expression $\vec{k}$ and $\vec{v}$ are treated as two-dimensional vectors rather than complex numbers. In addition, one can verify the following very useful relations
\begin{align}
\left(\frac{V_{12} {V_{12}}^*}{V_{22}} -V_{11} \right) = -\frac{\pi^2}{16 \sin^2(\pi \theta) V_{22} }
\end{align}
\begin{align}
\frac{\left( e^{-i \pi \theta} V_{12} +e^{i \pi \theta} {V_{12}}^*\right)}{2 \sin( \pi \theta)
\,\, V_{22} } = \frac{1}{4} \big( \cot(\pi \nu) - \cot(\pi \theta) \big)\,\,.
\end{align}
Finally, it is convenient to define the following quantities
\begin{align}
\tau_1(z,\overline z) = i \, \frac{1}{2 \, V_{22}} \left({V_{12}}^* - V_{12} \right)
\end{align}
and
\begin{align}
\tau_{2} (z, \overline z) =\frac{\pi}{4 \, \sin(\pi \theta)} \,\, \frac{1}{V_{22}} \,\,.
\end{align}
They combine into the ({\it a priori} far from obvious) holomorphic quantity
\begin{align}
\tau(z) =\tau_1(z,\overline z) + i \tau_{2} (z, \overline z) = i \frac{\sin(\pi \theta)}{2\, \pi} \left( \frac{B_1 H_1(1-z)}{G_1(z)} + \frac{B_2 H_2(1-z)}{G_2(z)}\right) \,\,,
\end{align}
which for the special case of equal angles, $\theta =\nu$, takes the form \eqref{eq definition of tau}. With these identifications we can write the Poisson resummed expression as (here we still omit the quantum part)
\begin{align}
\frac{\pi}{4 \sin^2(\pi \theta) V_{22}}
\,\,\sum_{k \in \Lambda^*, v \in \Lambda} e^{i\pi \vec{k}^T B \vec{v} } e^{-2 \pi i \vec{f}_{23} \cdot \vec{k}}\,\, w(z)^{1/2(k+v/2)^2} \,\, \overline w(\overline z)^{1/2(k-v/2)^2}
\label{eq hamiltonian without quantum}
\end{align}
with
\begin{align}
B= \frac{1}{4} \big( \cot(\pi \nu) - \cot(\pi \theta) \big) \,\, \left(\begin{array}{cc}
0&1\\
-1& 0
\end{array} \right)
\end{align}
and
\begin{align}
w(z)=\exp\left[{i\frac{\pi \tau(z)}{\sin(\pi \theta)}} \right]\,\,.
\end{align}
Adding the quantum part (see eq. \eqref{eq correlator 2 independent angles}) to \eqref{eq hamiltonian without quantum} one obtains
\begin{align}
|z_{\infty}|^{2 \nu (1-\nu)}\,\,\langle \sigma_{1-\theta}(0)\, \sigma_{\theta}(z,\overline z) \,
\sigma_{1-\nu}(1) \, \sigma_{\nu} (\infty)\rangle =&
|z|^{-2\theta(1-\theta)}
|1-z|^{-\theta(1-\nu)}\, \frac{1}{|G_1(z)|^2} \\ & \hspace{-2cm} \times
\,\,\sum_{k \in \Lambda^*, v \in \Lambda} e^{i\pi \vec{k}^T B \vec{v} } e^{-2 \pi i \vec{f}_{23} \cdot \vec{k}}\,\, w(z)^{\frac{(k+v/2)^2}{2}} \,\, \overline w(\overline z)^{\frac{(k-v/2)^2}{2}}\,\,. \nonumber
\label{eq poisson resum closed result}
\end{align}
This is exactly equivalent to the considerably more complicated expression \eqref{eq correlator 2 independent angles}. Apart from being obviously much simpler, it is also of the type \eqref{eq closed string form} that allows us to extract the open string result. Before we turn to this derivation we analyze the two interesting limits $z \rightarrow 0$ and $z \rightarrow 1$.
In the limit $z \rightarrow 0$ we expect the exchange of untwisted states. In that limit $w(z)$ behaves as
\begin{align}
\lim_{z \rightarrow 0} w(z) = \frac{z}{\delta}
\end{align}
with $\ln \delta= 2\psi(1) - \psi(\theta) -\psi(1-\theta) -\psi(\nu) - \psi(1-\nu) $. Here we used various relations and limits displayed in appendix \ref{app hypergeometric}.
This gives the expected result which looks similar to the limits $z \rightarrow 0$ and $z \rightarrow 1$ for the closed string correlator with just one independent twist/angle \cite{Dixon:1986qv}
\begin{align}
|z|^{-2\theta(1-\theta)}
\,\,\sum_{k \in \Lambda^*, v \in \Lambda} e^{i\pi \vec{k}^T B \vec{v} } \frac{e^{-2 \pi i \vec{f}_{23} \cdot \vec{k}}}{\delta^{h+\overline h}}\,\,z^h\,\overline z^{\overline h}
\end{align}
with the conformal weights $h$ and $\overline h$ given by
\begin{align}
h=\frac{1}{2}\left(k+\frac{v}{2}\right)^2\qquad \qquad \qquad \overline h=\frac{1}{2}\left(k-\frac{v}{2}\right)^2\,\,.
\end{align}
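Note that the corresponding conformal spins are integer-valued,
\begin{align}
h-\overline h = \vec{k}\cdot \vec{v}\,\,,
\end{align}
since $\vec{k} \in \Lambda^*$ and $\vec{v} \in \Lambda$, as required for mutual locality of the exchanged operators.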
In contrast, in the limit $z \rightarrow 1$ we expect the exchange of twisted states. Thus the OPE's of the closed string twist fields can be derived by taking the limit
$z \rightarrow 1$ of \eqref{eq correlator 2 independent angles}, where we ignore the classical part of the correlator. For $z \rightarrow 1$ the quantum part of \eqref{eq correlator 2 independent angles} behaves as
\begin{align}
\label{eq expansion nu>theta}
2 \pi \,|1-z|^{-2 \theta(1-\nu)} &\frac{\Gamma(1-\theta) \, \Gamma( \nu) \, \Gamma(\nu-\theta)}{\Gamma(\theta) \, \Gamma( 1-\nu) \, \Gamma(1+\theta-\nu)}\\ & \times \left[ 1-\frac{\Gamma^2(1-\theta) \, \Gamma^2(\nu) \, \Gamma(1+\theta-\nu) \, \Gamma(\theta -\nu) }{\Gamma^2(\theta) \, \Gamma^2(1-\nu) \, \Gamma(1-\theta +\nu) \, \Gamma(\nu-\theta) } |1-z|^{2\nu-2\theta} + ...\right] \nonumber
\end{align}
for $\nu >\theta$ and\footnote{The angle $\alpha$ appearing in the bosonic twist field is in the open interval $(0,1)$. Thus for $\nu >\theta$ the resulting angle on the right-hand side is not the naively expected angle $1-\theta+\nu$, but rather $\nu-\theta$.}
\begin{align}
\label{eq expansion nu<theta}
2 \pi \,|1-z|^{-2 \nu(1-\theta)} &\frac{\Gamma(\theta) \, \Gamma( 1-\nu) \, \Gamma(1-\theta+\nu)}{\Gamma(1-\theta) \, \Gamma( \nu) \, \Gamma(\theta-\nu)}\\ & \times \left[ 1-\frac{\Gamma^2(\theta) \, \Gamma^2(1-\nu) \, \Gamma(\nu- \theta) \, \Gamma(1-\theta +\nu) }{\Gamma^2(1-\theta) \, \Gamma^2(\nu) \, \Gamma(\theta -\nu) \, \Gamma(1+\theta-\nu) } |1-z|^{2\theta-2 \nu} + ...\right] \nonumber
\end{align}
for $\theta > \nu$. Using the properties displayed in appendix \ref{app hypergeometric} it is straightforward to show that the hypergeometric functions behave as
\begin{align} \nonumber
&\lim_{z\rightarrow 1} G_1(z) = \frac{\Gamma(\nu-\theta)}{\Gamma(1-\theta) \, \Gamma(\nu)} +(1-z)^{\nu-\theta} \frac{\Gamma(\theta-\nu)}{\Gamma(\theta) \, \Gamma(1-\nu)}\\
&\lim_{z\rightarrow 1} G_2(z) = \frac{\Gamma(\theta-\nu)}{\Gamma(\theta) \, \Gamma(1-\nu)} +(1-z)^{\theta-\nu} \frac{\Gamma(\nu-\theta)}{\Gamma(1-\theta) \, \Gamma(\nu)}\\
&\lim_{z\rightarrow 1} H_1(1-z) = \lim_{z\rightarrow 1} H_2(1-z) =1\nonumber\,\,.
\end{align}
These expressions eventually give rise to the expansions \eqref{eq expansion nu>theta} and \eqref{eq expansion nu<theta}.
Thus the OPE of two bosonic twist fields is given by
\begin{align}
\sigma_{\theta} (z, \overline z) \, \sigma_{1-\nu} (w, \overline w) \sim\,\, & C^-_{\sigma}\, \,|z-w|^{-2 \theta(1-\nu)}\,\sigma_{1+\theta-\nu}\, (w, \overline w) \\
& \hspace{10mm}+ C^-_{\widetilde{\tau}} \,\, |z-w|^{-2 \theta(2-\nu)+2 \nu} \widetilde{\tau}_{1+\theta-\nu}\, (w , \overline w)\, \nonumber
\end{align}
for $\nu > \theta$, and takes the form
\begin{align}
\sigma_{\theta} (z, \overline z) \, \sigma_{1-\nu} (w, \overline w) \sim\,\, & C^+_{\sigma}\, \,|z-w|^{-2 \nu(1-\theta)}\,\sigma_{\theta-\nu}\, (w, \overline w) \\
& \hspace{10mm}+ C^+_{{\tau}} \,\, |z-w|^{-2 \nu(2-\theta)+2 \theta} {\tau}_{\theta-\nu}\, (w , \overline w)\, \nonumber
\end{align}
for $\theta > \nu$. Here the coefficients are
\begin{align}
\label{eq OPE closed yukawa1}
C^-_{\sigma} &= \sqrt{2 \pi \,\,\frac{\Gamma(1-\theta)\,\Gamma(\nu)\, \Gamma(1+\theta-\nu)}{\Gamma(\theta)\,\Gamma(1-\nu)\, \Gamma(\nu-\theta)}}\\
C^-_{\widetilde{\tau}} &=\sqrt{ 2 \pi \,\,\frac{\Gamma^3(1-\theta)\,\Gamma^3(\nu)\, \Gamma^2(1+\theta-\nu)\, \Gamma(\theta-\nu)}{\Gamma^3(\theta)\,\Gamma^3(1-\nu)\, \Gamma^2(\nu-\theta)\, \, \Gamma(1-\theta+\nu)}}\\
\label{eq OPE closed yukawa2}
C^+_{\sigma} &= \sqrt{2 \pi\,\, \frac{\Gamma(\theta)\,\Gamma(1-\nu)\, \Gamma(1-\theta+\nu)}{\Gamma(1-\theta)\,\Gamma(\nu)\, \Gamma(\theta-\nu)}}\\
C^+_{\tau} &=\sqrt{ 2 \pi \,\,\frac{\Gamma^3(\theta)\,\Gamma^3(1-\nu)\, \Gamma^2(1-\theta+\nu)\, \Gamma(\nu-\theta)}{\Gamma^3(1-\theta)\,\Gamma^3(\nu)\, \Gamma^2(\theta-\nu)\, \, \Gamma(1+\theta-\nu)}}\,\,,
\end{align}
where \eqref{eq OPE closed yukawa1} and \eqref{eq OPE closed yukawa2} are related to the quantum part of the physical Yukawa couplings determined in
\cite{Burwick:1990tu,Stieberger:1992bj}. The weights of the respective conformal fields are given by
\begin{align}
h_{\sigma_{\alpha}} &= \frac{1}{2} \alpha \left(1-\alpha \right) \qquad \qquad \qquad \hspace{11mm}\overline h_{\sigma_{\alpha}} = \frac{1}{2} \alpha \left(1-\alpha \right)\\
h_{\widetilde{\tau}_{\alpha}} &= \frac{1}{2} \left(1-\alpha \right)\left(2+\alpha \right) \qquad \qquad \qquad \overline h_{\widetilde{\tau}_{\alpha}} = \frac{1}{2} \left(1-\alpha \right)\left(2+\alpha \right)\\
h_{\tau_{\alpha}} &= \frac{1}{2} \alpha \left(3-\alpha \right) \qquad \qquad \qquad \hspace{11mm}\overline h_{\tau_{\alpha}} = \frac{1}{2} \alpha \left(3-\alpha \right)\,\,.
\end{align}
Let us make a few remarks regarding this result. So far we have not distinguished between the bosonic twist field $\sigma^+_{\alpha}$ and the bosonic anti-twist field $\sigma^{-}_{\alpha}$. As shown in the appendix \ref{app twists} the anti-twist field $\sigma^{-}_{\alpha}$ can be identified with $\sigma^+_{1-\alpha}$. Thus for the case at hand, namely the OPE $\sigma_{1-\theta}(z, \overline z) \, \sigma_{\nu}(w, \overline w) $, it is not surprising to obtain two different scenarios, depending on which angle is larger. One should interpret that OPE as an OPE between a twist field and an anti-twist field. Depending on the choice of angles these two fields couple to a bosonic twist field or a bosonic anti-twist field. Similar arguments apply to the sub-dominant couplings to the excited twist fields $\tau_{\alpha}(z, \overline z)$ and $\widetilde{\tau}_{\alpha}(z, \overline z )$. That suggests that $\tau_{\alpha}(z, \overline z)$
should be interpreted as $\tau^+_{\alpha}(z, \overline z)$, while $\widetilde{\tau}_{\alpha}(z, \overline z)$
should be interpreted as the excited anti-twist field $\tau^-_{\beta}(z, \overline z)$, with $\beta= 1-\alpha$. This also exposes the analogy between the two fields, since they now have the same conformal dimension $h_{\tau^+_{\alpha}}=\overline h_{\tau^+_{\alpha}}=h_{\tau^-_{\alpha}}=\overline h_{\tau^-_{\alpha}}=\frac{1}{2} \alpha(3-\alpha)$.
One can further extend the OPE's \eqref{eq OPE closed yukawa1} and \eqref{eq OPE closed yukawa2} by looking at higher order terms in the expansions \eqref{eq expansion nu>theta} and \eqref{eq expansion nu<theta}, respectively. One finds that the two bosonic twist fields couple to doubly excited twist fields as well. This can be generalized to the statement that they couple to N-times excited twist fields for any $N$. As we will see momentarily this is in contrast to the open string, where one observes an even grading, namely the bosonic twist fields couple only to N-times excited twist fields where $N$ is even.
\subsection{Open string correlator \label{sec open two angles}}
In contrast to the previously discussed case the setup here consists of three D-branes denoted by $a$, $b$ and $c$. Again we assume a simplified setup in which all D-branes go through the origin and all three D-branes intersect each other once. Figure \ref{fig twisttwoangles} depicts the above described setup for one of the three two-tori. The intersection angle between $a$ and $b$ is given by $\theta$, while between $a$ and $c$ it is $\nu$. As for the correlator with just one independent angle, the result is non-vanishing only for $x$ in the interval $0 \leq x \leq 1$\footnote{This statement is slightly modified in un-oriented theories in which one of the D-branes is the orientifold image of one of the other two. Such a situation has been discussed in \cite{Cvetic:2006iz}.}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=5cm]{4sigmas2angles.eps}
\end{center}
\caption{Intersection of three D-brane stacks, $a$, $b$ and $c$, in one complex dimension.}\label{fig twisttwoangles}
\end{figure}
In contrast to the correlator with just one independent angle, here we expect the exchange of untwisted states only in the limit $x \rightarrow 0$. This limit allows us to normalize the correlator. On the other hand, in the limit $x \rightarrow 1$ one observes the exchange of twisted states. As for the closed string correlator, this limit then allows us to determine the proper operator product expansion of two bosonic twist fields even beyond the dominant pole.
Following the same steps as for the four bosonic twist field correlator with just one independent angle we obtain for the correlator
\begin{align} \nonumber
& x^{\nu (1-\nu)}_{\infty}\,\, \langle \sigma_{1-\theta}(0)\, \sigma_{\theta}(x) \,
\sigma_{1-\nu}(1) \, \sigma_{\nu} (\infty)\rangle = \\ & \hspace{2cm} {\cal C}_{open} \frac{x^{-\theta(1-\theta)} (1-x)^{-\theta(1-\nu)}}{G_1(x)} \sum_{p,q} w(x)^{ \left( \frac{\alpha'}{L^2_a} p^2 + \frac{1}{\alpha'}\frac{R^2_1 \, R^2_2}{L^2_a} q^2 \right)}
\label{eq suggested result open two angles}
\end{align}
with $w(x)$ given by
\begin{align}
w(x)=\exp\left[- \frac{\pi t(x)}{\sin(\pi \theta)}\right]
\end{align}
where $t(x)$ is
\begin{align}
t(x) = \frac{\sin(\pi \theta)}{2 \pi} \left( \frac{B_1 H_1(1-x)}{G_1(x)} + \frac{B_2 H_2(1-x)}{G_2(x) }
\right) \,\,.
\end{align}
Here we use the fact that $G_i (x) = \overline G_i (x) $ and $H_i (x) = \overline H_i (x)$ for $0\leq x\leq 1$. Note that, as expected, \eqref{eq suggested result open two angles} reduces to \eqref{eq suggested result open one angle} for $\theta= \nu$ .
Let us determine the constant ${\cal C}_{open}$ which we of course expect to be $\sqrt{\alpha'}/L_a$. Recall that we expect the following OPE
\begin{align}
\sigma_{1-\theta}(z) \, \sigma_{\theta} (w) \sim (z-w)^{-\theta(1-\theta)} \,\,\frac{\sqrt{\alpha'}}{L_a} \,\,\mathbf{1}\,\,,
\label{eq expected result}
\end{align}
where the normalization factor $\sqrt{\alpha'}/L_a$ is crucial for consistency of the four bosonic twist field correlator with just one independent angle. In order to determine the constant ${\cal C}_{open}$ we analyze the limit $x \rightarrow 0 $, which corresponds to the exchange of untwisted states. In that limit $t(x)$ behaves like
\begin{align}
t(x) \approx \frac{\sin (\pi\theta)}{ \pi}\left( -\ln(x) + \ln(\delta)\right)
\end{align}
with $\ln (\delta)$ given by
\begin{align}
\ln(\delta) = 2 \psi(1) -\frac{1}{2} \left(\psi(\theta) + \psi(1-\theta) + \psi(\nu) + \psi(1-\nu) \right)\,\,.
\end{align}
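As a consistency check, for $\nu=\theta$ this reduces to
\begin{align}
\ln(\delta) = 2 \psi(1) -\psi(\theta)- \psi(1-\theta)\,\,,
\end{align}
which is precisely the expression found in the one-angle case.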
That leads to the following expression for \eqref{eq suggested result open two angles} in the limit $x \rightarrow 0$
\begin{align}
{\cal C}_{open}\, x^{-\theta(1-\theta)} \sum_{p,q} \left(\frac{x}{\delta}\right)^{\left( p^2 \frac{\alpha'}{L^2_a} + q^2 \frac{R^2_1 \, R^2_2}{\alpha'\, L^2_a}\right)}
\label{eq lim x -> 0 two angles}
\end{align}
Thus comparing \eqref{eq lim x -> 0 two angles} for $p=q=0$ with \eqref{eq expected result} we indeed get for the normalization constant
${\cal C}_{open} =\sqrt{\alpha'}/L_a$.
Eventually we are interested in the limit $x \rightarrow 1$, which corresponds to the exchange of twisted states. In order to take this limit it is convenient to perform a Poisson resummation over the variable $p$, such that both variables we sum over live in the lattice $\Lambda$. We obtain (already taking into account the normalization constant)
\begin{align}
\frac{x^{-\theta(1-\theta)} (1-x)^{-\theta(1-\nu)}}{G_1(x)} \sqrt{\frac{\sin(\pi \theta)}{t(x)}}
\sum_{\widetilde{p},q}
\exp\left[-\pi \frac{\sin(\pi \theta)} {t(x)} \, \frac{L^2_a}{\alpha'} \, \widetilde{p}^2- \pi \frac{t(x) }{\sin(\pi \theta)} \frac{ R^2_1\, R^2_2}{\alpha' \, L^2_a} \, q^2\right]\,\,.
\label{eq Poisson resummed result}
\end{align}
A comparison with the results obtained in \cite{Cvetic:2003ch,Abel:2003vv,Abel:2003yx} shows that both methods, the one using the doubling trick to extend the world-sheet to the complex plane and then employing conformal field theory techniques, and the other one in which one extracts the open string correlator directly from the closed string correlator, give exactly the same result. Once again we conclude that there is no ambiguity in the choice of the constants $c_{p,q}$ in this case either.
For the limit $x \rightarrow 1$ one has to distinguish two different cases
\begin{align} \label{eq limits t(x) theta>nu}
\lim_{x\rightarrow 1} t (x) &= \frac{\sin(\pi(\theta-\nu) )}{2 \sin(\pi \nu)} \qquad \qquad \text{for} \qquad \theta > \nu\\
\label{eq limits t(x) theta<nu}
\lim_{x\rightarrow 1} t (x) &= \frac{\sin(\pi(\nu-\theta) )}{2 \sin(\pi \nu)} \qquad \qquad \text{for} \qquad \theta < \nu\,\,.
\end{align}
This can be easily derived from
\begin{align} \nonumber
&\lim_{x\rightarrow 1} G_1(x) = \frac{\Gamma(\nu-\theta)}{\Gamma(1-\theta) \, \Gamma(\nu)} +(1-x)^{\nu-\theta} \frac{\Gamma(\theta-\nu)}{\Gamma(\theta) \, \Gamma(1-\nu)}\\ \label{eq limits of hypergeometric}
&\lim_{x\rightarrow 1} G_2(x) = \frac{\Gamma(\theta-\nu)}{\Gamma(\theta) \, \Gamma(1-\nu)} +(1-x)^{\theta-\nu} \frac{\Gamma(\nu-\theta)}{\Gamma(1-\theta) \, \Gamma(\nu)}\\
&\lim_{x\rightarrow 1} H_1(1-x) = \lim_{x\rightarrow 1} H_2(1-x) =1\nonumber\,\,.
\end{align}
Thus for $\nu>\theta$, the correlator \eqref{eq suggested result open two angles} becomes
\begin{align}
(1-x)^{-\theta(1-\nu)} &\frac{\Gamma(1-\theta)\, \Gamma(\nu)}{\Gamma(\nu -\theta)} \sqrt{\frac{2 \sin(\pi \theta) \, \sin(\pi \nu)}{\sin(\pi (\nu-\theta))}} \\
&\times \sum_{\widetilde{p},q} \exp\left[-2\pi \frac{\sin(\pi \theta) \, \sin(\pi \nu) }{\sin(\pi(\nu-\theta))} \, \frac{ L^2_a}{\alpha'}\, \widetilde{p}^2- \frac{\pi}{2} \frac{\sin(\pi (\nu-\theta))}{\sin(\pi \theta)\,\sin(\pi \nu) } \, \,\frac{ R^2_1\, R^2_2}{ \alpha' \,L^2_a} q^2\right]\,\,. \nonumber
\end{align}
The angles can be expressed in the following manner (recall that all the branes intersect exactly once)
\begin{align}
\sin(\pi \theta) = \frac{R_1 R_2}{ L_a L_b} \hspace{1cm} \sin(\pi \nu) = \frac{R_1 R_2}{ L_c L_a} \hspace{1cm} \sin(\pi (\nu-\theta)) = \frac{R_1 R_2}{ L_b L_c}\,\,.
\label{eq sinus definitions}
\end{align}
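These relations follow from elementary geometry: for two branes with wrapping numbers $(n_a,m_a)$ and $(n_b,m_b)$ on a rectangular two-torus one has
\begin{align}
L_a L_b \sin(\pi \theta_{ab}) = \left| n_a m_b - m_a n_b\right| R_1 R_2\,\,,
\end{align}
and the right-hand side equals $R_1 R_2$ for branes intersecting exactly once, as assumed here; the same applies to the pairs $(a,c)$ and $(b,c)$.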
Given \eqref{eq sinus definitions} one obtains
\begin{align}
\sqrt{2 \pi} (1-x)^{-\theta(1-\nu)} & \sqrt{ \frac{\Gamma(1-\theta)\Gamma(\nu)\Gamma(1+\theta-\nu)}
{\Gamma(\theta)\Gamma(1-\nu)\Gamma(\nu-\theta)} } \sum_{\widetilde{p},q} \exp\left[-\frac{2\pi}{\alpha'} R_1 R_2 \tilde{p}^2- \frac{\pi}{2 \alpha'} R_1 R_2 q^2\right]
\end{align}
which, with the redefinition of the summation variables (note that with this definition $r_1$ and $r_2$ are integers)
\begin{align}
r_1 = 2 \tilde{p} +q \qquad \qquad r_2 = 2 \tilde{p}- q
\end{align}
leads to
\begin{align}
\sqrt{2 \pi} (1-x)^{-\theta(1-\nu)} & \sqrt{ \frac{\Gamma(1-\theta)\Gamma(\nu)\Gamma(1+\theta-\nu)}
{\Gamma(\theta)\Gamma(1-\nu)\Gamma(\nu-\theta)} } \sum_{r_1, r_2} \exp\left[-\frac{\pi}{4 \alpha'} R_1 R_2 \left( r^2_1 +r^2_2 \right)\right]\,\,.
\label{eq Yukawa final theta<nu}
\end{align}
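To verify the last step explicitly, note that with $r_1 = 2 \tilde{p} +q$ and $r_2 = 2 \tilde{p}- q$ one has
\begin{align}
r^2_1 + r^2_2 = 8\, \tilde{p}^2 + 2\, q^2\,\,,
\end{align}
so that $\frac{\pi}{4 \alpha'} R_1 R_2 \left( r^2_1 +r^2_2 \right)$ indeed reproduces the exponent $\frac{2\pi}{\alpha'} R_1 R_2\, \tilde{p}^2+ \frac{\pi}{2 \alpha'} R_1 R_2\, q^2$.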
Analogously one obtains for $\theta> \nu$
\begin{align}
\sqrt{2 \pi} (1-x)^{-\nu(1-\theta)} & \sqrt{ \frac{\Gamma(\theta)\Gamma(1-\nu)\Gamma(1-\theta+\nu)}
{\Gamma(1-\theta)\Gamma(\nu)\Gamma(\theta-\nu)} } \sum_{r_1, r_2} \exp\left[-\frac{\pi}{4 \alpha'} R_1 R_2 \left( r^2_1 +r^2_2 \right)\right]\,\,.
\label{eq Yukawa final theta>nu}
\end{align}
Again $r_1$ and $r_2$ are integers. Note that the classical part is exactly twice the world-sheet instanton contribution to the Yukawa coupling arising from a two-torus, as determined in \cite{Cremades:2003qj, Abel:2003vv}. Moreover, \eqref{eq Yukawa final theta<nu} and \eqref{eq Yukawa final theta>nu}, quantum and classical part, agree with the results of \cite{Cremades:2003qj,Cvetic:2003ch, Abel:2003vv} where the authors investigated Yukawa couplings in intersecting brane worlds.
As we saw for the closed string correlator, to extract the OPE between two bosonic twist fields it is sufficient to look at the quantum part of the correlator. For large volumes, namely in the limit $R_1, R_2 \rightarrow \infty$, the only contribution comes from $(\widetilde{p}, q) =(0,0)$ and the whole correlator reduces to the quantum part given by\footnote{Here we also used the identities displayed in \eqref{eq sinus definitions}.}
\begin{align}
\frac{x^{-\theta(1-\theta)} (1-x)^{-\theta(1-\nu)}}{G_1(x)} \sqrt{\frac{\sin(\pi \theta)}{t(x)}}
\end{align}
which is exactly the quantum part computed in \cite{Cvetic:2003ch,Abel:2003yx} using the same conformal field theory techniques as for the closed string after extending the world-sheet to the whole complex plane via the doubling trick.
We already showed above that the dominant pole gives the expected Yukawa couplings. Now we would like to take a look at the sub-dominant terms. Again we will start with $\nu>\theta$. Using \eqref{eq limits t(x) theta<nu} and \eqref{eq limits of hypergeometric} one gets
\begin{align}
\label{eq expansion theta<nu}
(1-x)^{-\theta(1-\nu)} & \sqrt{2 \pi} \sqrt{ \frac{\Gamma(1-\theta)\Gamma(\nu)\Gamma(1+\theta-\nu)}
{\Gamma(\theta)\Gamma(1-\nu)\Gamma(\nu-\theta)} } \\ & \left( 1-\frac{\Gamma^2(1-\theta) \Gamma^2(\nu) \Gamma(\theta-\nu) \Gamma(1+\theta-\nu)}{\Gamma^2(\theta) \Gamma^2(1-\nu) \Gamma(\nu-\theta) \Gamma(1-\theta+\nu)} (1-x)^{2(\nu-\theta)} + ... \right)\,\,.\nonumber
\end{align}
This implies that in the OPE between the two bosonic twist fields there is no coupling to the excited twist field $\tau$; rather, the first sub-dominant pole indicates a coupling to the doubly excited twist field $\rho$. Let us display the OPE explicitly (keep in mind that one has to take the square root of the coefficients in front of the respective poles)
\begin{align}
\sigma_{\theta}(w) \sigma_{1-\nu} (z) \sim & \left( 2\pi \frac{\Gamma(1-\theta)\Gamma(\nu)\Gamma(1+\theta-\nu)}
{\Gamma(\theta)\Gamma(1-\nu)\Gamma(\nu-\theta)} \right)^{\frac{1}{4}} (z-w)^{-\theta(1-\nu)} \sigma_{1+\theta-\nu} (z) \\
& \hspace{-2cm} + \left( 2\pi \frac{\Gamma^5(1-\theta) \Gamma^5(\nu) \Gamma^2(\theta-\nu) \Gamma^3(1+\theta-\nu)}{\Gamma^5(\theta) \Gamma^5(1-\nu) \Gamma^3(\nu-\theta) \Gamma^2(1-\theta+\nu)} \right)^{\frac{1}{4}} (z-w)^{-\theta(3-\nu)+2 \nu}\widetilde{\rho}_{1+\theta-\nu}(z) + ...\nonumber
\end{align}
Analogously one obtains for $\theta>\nu$ in the limit $x \rightarrow 1$
\begin{align}
\label{eq expansion theta>nu}
(1-x)^{-\nu(1-\theta)} & \sqrt{2 \pi} \sqrt{ \frac{\Gamma(\theta)\Gamma(1-\nu)\Gamma(1-\theta+\nu)}
{\Gamma(1-\theta)\Gamma(\nu)\Gamma(\theta-\nu)} }\\ & \left( 1-\frac{\Gamma^2(\theta) \Gamma^2(1-\nu) \Gamma(\nu-\theta) \Gamma(1-\theta+\nu)}{\Gamma^2(1-\theta) \Gamma^2(\nu) \Gamma(\theta-\nu) \Gamma(1+\theta-\nu)} (1-x)^{2(\theta-\nu)} + ... \right)\nonumber
\end{align}
which leads to the OPE
\begin{align}
\sigma_{\theta}(w) \sigma_{1-\nu} (z) \sim & \left( 2\pi \frac{\Gamma(\theta)\Gamma(1-\nu)\Gamma(1-\theta+\nu)}
{\Gamma(1-\theta)\Gamma(\nu)\Gamma(\theta-\nu)} \right)^{\frac{1}{4}} (z-w)^{-\nu(1-\theta)} \sigma_{\theta-\nu} (z) \\
& \hspace{-2.3cm} + \left( 2\pi \frac{\Gamma^5(\theta) \Gamma^5(1-\nu) \Gamma^2(\nu-\theta) \Gamma^3(1-\theta+\nu)}{\Gamma^5(1-\theta) \Gamma^5(\nu) \Gamma^3(\theta-\nu) \Gamma^2(1+\theta-\nu)} \right)^{\frac{1}{4}} (z-w)^{-\nu(3-\theta)+2 \theta} \,\,\rho_{\theta-\nu}(z) + ...\nonumber\,\,.
\end{align}
Note that, in contrast to the closed string, two bosonic open string twist fields $\sigma_{\alpha}(w)$ and $\sigma_{\beta}(z)$ do not couple to the excited twist field $\tau_{\alpha+\beta}$. Their first sub-dominant pole in the OPE indicates a coupling to the doubly excited twist fields $\widetilde{\rho}$ and $\rho$, respectively. By looking at higher order terms in the expansions \eqref{eq expansion theta<nu} and \eqref{eq expansion theta>nu}, this can be generalized
to the statement that the two bosonic twist fields only couple to $N$-times excited twist fields, where $N$ is an even number.
The conformal dimensions of the fields $\sigma_{\alpha}$, $\widetilde{\rho}_{\alpha}$ and ${\rho}_{\alpha}$ are
\begin{align}
h_{\sigma_{\alpha}}= \frac{1}{2} \alpha\left(1-\alpha \right) \qquad \hspace{4mm} h_{\widetilde{\rho}_{\alpha}}= \frac{1}{2} \left(1-\alpha\right)\left(4+\alpha \right) \qquad \hspace{4mm} h_{\rho_{\alpha}}= \frac{1}{2} \alpha\left(5-\alpha \right) \,\,.
\end{align}
Analogously to the closed string result, $\widetilde{\rho}_{\alpha}$ can be interpreted as a doubly-excited anti-twist field $\rho^-_{\beta}$ with $\beta=1-\alpha$. On the other hand, $\rho_{\alpha}$ is the doubly-excited twist field $\rho^+_{\alpha}$. As for the closed string, this exhibits the analogy between these two conformal fields due to their equal conformal dimension $h_{\rho^+_{\alpha}}=h_{\rho^-_{\alpha}}=\frac{1}{2} \alpha \left(5-\alpha \right)$\footnote{For a more detailed discussion on this issue see \cite{Light} and appendix \ref{app twists}.}.
\section{Summary}
In this note we revisited the correlator of four open string bosonic twist-fields with one and two independent angles, respectively. In contrast to previous works we extracted the open string correlator directly from the closed string result. This way we provide a non-trivial check for the method advocated in \cite{Gava:1997jt,David:2000um,Cvetic:2003ch,Abel:2003vv} where the authors extend the world-sheet, whose original domain is the upper half complex plane, via the ``doubling trick'' to the full complex plane and employ conformal field theory techniques.
Given the four-point correlators, we investigated various limits that allowed us to extract the OPE's between two bosonic twist fields beyond the dominant terms. Interestingly one finds, in contrast to the closed string, that two bosonic twist fields do not couple to an excited twist field, but rather to a doubly excited twist field. This can be generalized to
the statement that two open string bosonic twist fields couple only to $N$-times excited twist fields with $N$ being an even positive integer.
Finally, we found an interesting identification between bosonic twist fields and anti-twist fields, even for higher excited twist fields. This allows one to compute
the twist field correlator for just one combination of ``twist'' and ``anti-twist'' fields. Any other combination can then be determined by the appropriate identifications.
Before concluding we would like to sketch how to generalize our discussion to genuinely interacting, albeit rational, CFT's. While in the case of twist-fields the number of blocks was actually infinite, in an RCFT the number of blocks is finite. Conformal blocks can be determined by means of null vectors and the resulting Ward identities. Explicit expressions are available for WZW models and for various minimal models. For RCFT the number of D-branes is at most equal to the number of characters. Given a parent closed-string model, specified by the choice of the one-loop modular invariant combination of characters, there may be many open string `descendants'. By the same token, given a closed-string 4-point correlator we expect different open-string correlators, at least as many as open string descendants. In some simple cases, such as minimal models ({\it e.g.}~ the Ising model), only the simplest open-string descendant with real Chan-Paton charges was considered \cite{Bianchi:1991rd} and shown to give 4-point amplitudes compatible with planar duality. Later on, based on the analysis of $SU(2)$ WZW models \cite{Pradisi:1995qy}, it was noticed that complex Chan-Paton factors were also allowed.
It would be interesting to explore the problem of computing open-string correlators based on the knowledge of their parent closed-string correlators in WZW or, even better, in Gepner models. In the latter case, however, only limited knowledge of the chiral blocks is available at present. Given their relation to solvable compactifications on CY manifolds, this is a rather sad state of affairs.
\section*{Acknowledgements}
We acknowledge M. Cveti{\v c}, F. Fucito, E. Kiritsis, G. Leontaris, J. F. Morales Morera, I. Pesando, G. Pradisi, B. Schellekens, O. Schlotterer, M. Schmidt-Sommerfeld, Ya. Stanev, P. Teresi and T. Weigand for interesting discussions and correspondence. P.~A. is supported by FWF P22000. P.~A. and R.~R. are grateful to the organizers of the school and workshop of ITN ``UNILHC'' PITN-GA-2009-237920 for hospitality during parts of this work. The work of R.~R. was partly supported by the German Science Foundation (DFG) under the Collaborative Research Center (SFB) 676 ``Particles, Strings and the Early Universe''.
The work of M.~B. was partially supported by the ERC Advanced Grant n.226455 Superfields, by the Italian MIUR-PRIN contract 20075ATT78, by the NATO grant PST.CLG.978785. M.~B. and P.~A. would like to thank NORDITA, Stockholm for hospitality during the completion of this project.
\newpage
\section{Introduction}
The onset of massive star formation episodes in galaxies drives their
observational properties in almost any wavelength range. The UV and optical
become dominated by the continuum of massive, hot and young stars, as well as by
the presence of nebular emission lines. After a few Myr of evolution, red
supergiant stars contribute to most of the near infrared emission. The heating
of interstellar dust particles by the powerful UV photons induces the thermal
re-emission of large amounts of energy in the { mid and far infrared} domain.
The injection of ionizing photons into the surrounding gas generates thermal
radio emission, which is replaced by non-thermal emission as the ionizing power
of the burst declines and the more massive stars begin to explode as supernovae.
The direct relation between the strength of the star formation episode and the
intensity of the different observable parameters has allowed a number of star
formation rate calibrators to be defined, such as UV continuum, emission lines
intensity, far infrared or radio luminosities
\citep{Kennicutt98,Rosa02,Bell2003}. These calibrators have proven to be
invaluable for statistical studies of the star formation history of the
Universe.
Star-forming regions are also the source of conspicuous X-ray emission,
generated by individual stars, by the injection of large amounts of
mechanical energy heating the interstellar medium, by supernova remnants, and by
binary systems transferring mass to a compact primary \citep{Cervino02,Persic02}. All of
these individual components are in principle directly linked to the strength of
the star formation episode, so that the X-ray luminosity could also be used as
an estimator of star formation rates ({\em SFR}).
\begin{figure*}
\begin{center}
\includegraphics[width=8 cm,bb=5 39 786 521
dpi,clip=true]{8398fi1a.eps}
\hspace{1.0 cm}
\includegraphics[width=8 cm,bb=17 39 786 527
dpi,clip=true]{8398fi1b.eps}
\end{center}
\caption{Evolution of $L_{\rm{FIR}}$\ (left) and $L_{\rm{soft X}}$\ (right) for IB (dashed line, right
axis) and EB (solid line, left axis) models. IB model predictions are
shown normalized to $1$~M$_\odot$\ of gas transformed into stars, while for EB the
luminosities are scaled to {\em SFR}\ $= 1$ M$_\odot$\ yr$^{-1}$. $L_{\rm{FIR}}$\ has been
plotted (from top to bottom) for {\em E(B-V)}\ $= 1.0$, $0.5$ and $0.1$. $L_{\rm{soft X}}$\ has been
computed (from top to bottom) for $\epsilon_{\rm{xeff}}$\ $= 0.1$, $0.05$ and $0.01$. }
\label{lumin}
\end{figure*}
Several authors have in recent years discussed the feasibility of using the
X-ray luminosity as an {\em SFR}\ estimator. {\citet{Fabbiano02} already concluded
from the analysis of 234 S0/a-Irr galaxies observed with {\em Einstein}, that
the correlation they found between the X-ray and the FIR luminosities in Sc-Irr
galaxies was due to the young stellar populations in these objects.}
\citet{Ranalli03} proposed an empirical calibration of the soft ($0.5$--$2.0$
keV) and hard ($2$--$10$ keV) X-ray luminosities, based on their correlation
with the far infrared (FIR) and radio luminosities, and using the known
calibrations of these parameters as proxies. \citet{Grimm03} studied the
correlation between the number of high-mass X-ray binaries (HMXB) and the {\em SFR},
deriving different calibrations of the hard X-ray luminosity for low and high
star formation rates. \citet{Persic04} obtained a calibration of the hard X-ray
luminosity as a {\em SFR}\ estimator by assuming that most of the emission in this
range is associated with HMXB, and scaling from the number of HMXB to the {\em SFR}\
of our Galaxy. \citet{Gilfanov2004} confirmed the {calibration} of the hard
X-ray luminosity associated with HMXB, using slightly different slopes at high
and low star formation rates. In a recent paper, \citet{Persic07} found indeed
that the collective hard X-ray emission of young point sources correlates
linearly with the star formation rate derived from the far infrared luminosity.
\citet{Strickland04b} demonstrated that the luminosity of diffuse X-ray emission
in star-forming galaxies is directly proportional to the rate of mechanical
energy injection from the young, massive stars into the interstellar medium of
the host galaxies. A similar result was found by \citet{Grimes05} from the
analysis of a sample of starburst galaxies of different types (from dwarf
starbursts to ultraluminous infrared galaxies), which concluded that the
mechanism producing the diffuse X-ray emission in the different types of
starbursts was powered by the mechanical energy injected by stellar winds and
supernovae into the surrounding medium. Recently \citet{Rosa07} confirmed the
reliability of the soft X-ray luminosity as an {\em SFR}\ estimator from the analysis
of a sample of star-forming galaxies in the {\em Chandra Deep Field South} at
redshifts $ z = 0.01 - 0.67$, using the UV continuum luminosity from {\em GALEX}
as a proxy.
While hard X-ray emission from star-forming regions may be dominated by binary
systems, diffuse soft X-ray emission is generated by reprocessed, mechanical
energy from stellar winds and supernovae explosions. This mechanical energy is
related to the strength of the burst of star-formation, and can be calculated
using evolutionary population synthesis models. In this paper, we analyze the
correlation between the soft X-ray and FIR luminosities in star-forming regions,
both predicted by evolutionary synthesis models. We study the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio as a
function of the {evolutionary state, the efficiency of the conversion of
mechanical energy into soft X-ray luminosity, the star formation history
(instantaneous or extended), and the dust abundance,} and compare the computed
values with observations taken from the literature. { Our objective is to derive
a calibration of $L_{\rm{soft X}}$\ as a tracer of the star formation rate, based on the
predictions of evolutionary synthesis models, and to test the validity of the
empirical calibration proposed by \citet{Ranalli03}. }
In Sect.~2 we describe the evolutionary synthesis models that we use in the present study, in Sect.~3 we
present the observational data taken from the literature, and in Sect.~4 we
discuss the predictions and the comparison with the observational values. {Throughout this work we have assumed $H_{\rm 0} = 73$ km s$^{-1}$ Mpc$^{-1}$ to
convert fluxes into luminosities.}
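In practice, for a nearby object at redshift $z$ this conversion amounts to
\begin{equation}
L = 4 \pi\, d_{\rm L}^2\, F\,, \qquad d_{\rm L} \simeq \frac{c\,z}{H_{\rm 0}} \quad (z \ll 1)\,,
\end{equation}
while for more distant objects the full luminosity distance has to be used.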
\begin{figure*}
\begin{center}
\includegraphics[width=8cm,bb=20 39 713 527
dpi,clip=true]{8398fi2a.eps}
\hspace{1.0 cm}
\includegraphics[width=8cm,bb=20 39 713 527
dpi,clip=true]{8398fi2b.eps}
\end{center}
\caption{Evolution of the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio computed for $\epsilon_{\rm{xeff}}$\ $= 0.01, 0.1$ and {\em E(B-V)}\ $= 0.1, 1.0$. Left: predictions for IB models; right: predictions for EB models. }
\label{ratio}
\end{figure*}
\section{Evolutionary synthesis models}
We have computed the predicted $L_{\rm{soft X}}$\ and $L_{\rm{FIR}}$\ values using the evolutionary
population synthesis models of \citet{Cervino02} (hereafter CMHK02 models\footnote{
Downloadable from {\rm http://www.laeff.inta.es/users/mcs/SED/}}), which are
based on the models of \citet{Arnault89}, \citet{MasHesse91} and
\citet{Cervino94}. These models compute the evolution of a cluster of massive
stars, formed at the same time (Instantaneous Bursts, IB), or {during an extended
period of time (typically several tens of Myr) at a constant rate (Extended
Bursts, EB)}, assuming different metallicities and Initial Mass Function (IMF)
slopes. { The CMHK02 models were developed to compute the evolution of a
starburst during the first 30~Myr, following the onset of the star formation
episode.} Once the structure {(number of stars of each spectral type and
luminosity class at a given evolutionary time)} of the stellar population is
derived, the models are used to compute a number of observable parameters, from X-ray to radio wavelengths.
In general, we assume a solar metallicity $Z_{\sun}$, and a Salpeter IMF ($\phi(m)\sim
m^{-2.35}$) with masses ranging between $2 \, M_{\sun}$ and $120 \, M_{\sun}$. {The output of the models is normalized to the star formation rate (mass of gas
transformed into stars per unit time, in $M_{\sun}\,{\rm yr}^{-1}$) for EB cases, or to the star
formation strength (mass of gas transformed into stars at the onset of the
burst, in units of $M_{\sun}$) for IB scenarios. In both cases, the mass
normalization corresponds to the mass integrated between 2 $M_{\sun}$ and 120 $M_{\sun}$
assuming a Salpeter IMF. We emphasize that this normalization might differ
significantly if other mass limits are considered. For example, the ratio
between our mass normalization ($2-120 M_{\sun}$) and the one assumed by
\citet{Kennicutt98} ($0.1-100 M_{\sun}$) is $M_2^{120}/M_{0.1}^{100} = 0.293$.
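This value follows directly from the assumed Salpeter slope, since the mass integral scales as
\begin{equation}
\frac{M_2^{120}}{M_{0.1}^{100}} = \frac{\int_2^{120} m^{-1.35}\, dm}{\int_{0.1}^{100} m^{-1.35}\, dm}= \frac{2^{-0.35}-120^{-0.35}}{0.1^{-0.35}-100^{-0.35}} \simeq \frac{0.60}{2.04} \simeq 0.29\,.
\end{equation}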
It is important to remark that this is the normalization implicitly assumed when
the {\em SFR}\ calibrations proposed by \citet{Kennicutt98} are used.} Under our
assumptions, the soft X-ray ($0.4$ -- $2.4$~keV) and far infrared luminosities
were computed during the first 30~Myr after the onset of the massive starburst
episode.
One of the main sources of soft X-ray emission in a star-forming region is the
diffuse gas heated by the mechanical energy injected by the starburst (massive
stellar winds and supernova explosions) into the surrounding medium. Its
contribution is modeled by a Raymond-Smith thermal plasma with $kT=0.5$ keV;
the fraction of the mechanical energy that heats the gas up to X-ray temperatures is parametrized by the efficiency $\epsilon_{\rm{xeff}}$. The models include the soft X-ray
radiation emitted during the adiabatic phase of the Supernova Remnant (SNR),
which is modeled by a composite Raymond-Smith plasma with $kT=0.23$, $0.76$ and
$1.29$ keV. The total energy emitted by the SNR, during the adiabatic phase, has
been subtracted from the energy of each supernova explosion when computing the
injection of mechanical energy. A more detailed description of our models is provided in \citet{Cervino02}.
The contribution of the stellar atmospheres to the soft X-ray emission was
neglected because it is expected to be two orders of magnitude lower than the
emission from the diffuse gas. The contribution of high-mass X-ray binaries to
the soft X-ray emission has in addition been neglected in this work.
\citet{MasHesse99a} discussed the properties of the HMXB population expected to
form during a massive star-formation burst. Binary systems become X-ray emitters
when the primary collapses into a black hole or neutron star, the atmosphere of
the secondary has started to expand, and the secondary is sufficiently close to
the collapsed primary for mass transfer to begin. Mass is accreted onto the
surface of the compact object and emits X-rays with a typical $L_{\rm{X}}$\ $\sim
10^{38}$ erg s$^{-1}$, peaking at energies between $5$ and $10$ keV \citep{Persic02}.
{The number of HMXB in a young starburst is dependent on many free parameters.
Following \citet{MasHesse99a}, we estimate that only a few HMXB should be active
after the first 5--6 Myr of evolution of starbursts that have transformed
approximately $10^6$ M$_\odot$ of gas into stars. This HMXB population should
contribute a few times $10^{38}$ erg s$^{-1}$ to the total X-ray luminosity,
and a small fraction of this radiation to the soft X-ray emission.} In all
cases, the total contribution of HMXB to the soft X-ray emission remains below
$15$\% for IB and $10$\% for EB models. Only if one or a few of these HMXB develop into
an ultraluminous X-ray source (ULX), with $L_{\rm{X}}$\ $\sim 10^{40}$ erg s$^{-1}$
\citep{Miniutti06}, could the X-ray emission of the entire galaxy, from soft to
hard X-rays, be dominated by the emission of HMXB rather than by that of the
diffuse gas.
Concerning the FIR emission, a thermal equilibrium of dust is assumed, implying
that all the energy absorbed by dust, mostly originating from the UV continuum
of the massive stars, is reemitted in the FIR range. { In this paper, $L_{\rm{FIR}}$\
refers to the total mid- and far-infrared luminosity integrated over the
wavelength range $1 -1000 ~\mu$m. We remark that this parametrization of $L_{\rm{FIR}}$\
implies a value of $L_{\rm{FIR}}$\ that is larger than that calculated using the FIR
parameter proposed by \citet{Helou88}, which is widely used in the literature.
\citet{Helou88} showed that the FIR luminosity computed from the {\em IRAS}
fluxes at $60$ and $100$ ~$\mu$m would intercept about 70\% (0.15 dex) of the
total FIR luminosity from 1 to 1000 $\mu$m, assuming that there is a single
dominant component at a temperature of 30 to 50 K. The discrepancy would be
even larger in the presence of an additional warm dust component. This estimate
of $L_{\rm{FIR}}$\ is consistent with the range ($8 -1000 ~\mu$m) considered by
\citet{Kennicutt98} for the calibration of $L_{\rm{FIR}}$\ as an {\em SFR}\ estimator, since
most of the FIR luminosity in starburst galaxies is emitted at wavelengths in
the range $10 -120 ~\mu$m. The models apply a Galactic extinction law
\citep{Cardelli89} to the synthetic spectral energy distributions, which is
parametrized by the colour excess {\em E(B-V)}. The energy absorbed due to extinction is
calculated using the models. It is assumed that all energy absorbed is reemitted
thermally by the dust, within the mid to far infrared domain, i.e., within the
$\sim$ $8-1000 ~\mu$m range. The models do not predict the shape of the FIR
emission, since we do not make any assumption about the expected dust
temperature. The presence of completely-obscured stars is not taken into
account. Furthermore, the models assume that a fraction $1-f$ of Lyman continuum
photons is directly absorbed by the dust, and does not contribute to the
ionization \citep{Mezger1974}. {\citet{Mezger1978}, for Galactic HII regions, and
\citet{Degioia1992}, for HII regions in the Large Magellanic Cloud, derived $1-f$
values in the range $0.3-0.4$.} We have assumed $(1-f)=0.3$ in this work, as
proposed for starburst galaxies by \citet{Belfort1987}. }
The evolution of both $L_{\rm{FIR}}$\ and $L_{\rm{soft X}}$\ predicted by the models is shown in
Fig.~\ref{lumin}, while their ratio $L_{\rm{soft X}}/L_{\rm{FIR}}$\ is presented in Fig.~\ref{ratio}. For
IB models, the luminosities are shown scaled to $1$~M$_\odot$\ of gas transformed into
stars at the onset of the starburst. In the case of EB models, the luminosities
are normalized to {\em SFR}\ = $1 \, M_{\sun} \, yr^{-1}$. $L_{\rm{FIR}}$\ is presented in the
plot, computed for {\em E(B-V)}\ $= 0.1$, $0.5$ and $1.0$. It can be seen that FIR
emission saturates rapidly for {\em E(B-V)}\ values {above $0.5$.} In the remainder of this work, we consider the value of $L_{\rm{FIR}}$\ calculated by assuming that {\em E(B-V)}\ $=
1.0$, which can be considered an upper limit to the expected FIR luminosity.
In the case of EB models, $L_{\rm{FIR}}$\ reaches an
asymptotic value after approximately $10 - 15$~Myr of evolution, when an equilibrium is
reached between the number of massive stars that die, and those forming
continuously. For coeval starbursts, $L_{\rm{FIR}}$\ declines rapidly after the
first $5$~Myr of evolution, when the most massive stars begin to end their
lifetimes and stop heating the interstellar dust.
\begin{figure}
\centering
\includegraphics[width=8cm,bb=20 39 713 527 dpi,clip=true]
{8398fi3.eps}
\caption{{ Evolution of the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio as a function of metallicity and star
formation regime. {\em E(B-V)}$ =1.0$ and $\epsilon_{\rm{xeff}}$\ $=0.05$.}}
\label{ratio-met}
\end{figure}
$L_{\rm{soft X}}$\ is shown in Fig.~\ref{lumin} computed for $\epsilon_{\rm{xeff}}$\ values of $1$\%, $5$\%
and $10$\%. During the first few Myr, there is a rapid increase in $L_{\rm{soft X}}$\ because
both the luminosity and the strength of the stellar winds of the most massive stars increase. After the first $3$~Myr of evolution, the most massive stars end
their lifetimes and explode as supernovae. The injection of mechanical energy
begins to be dominated by energy released by supernovae explosions, as the
importance of stellar winds rapidly diminishes. { During the first 35 Myr, this
remains the situation for IB models because, for a Salpeter IMF, the supernova
rate declines slowly, while there still exist stars of sufficient mass (of initial mass
above $8$~M$_\odot$) to produce supernovae \citep{Cervino94,Leitherer95}. In the
case of EB models, $L_{\rm{soft X}}$\ is expected to increase slowly after the first 5~Myr,
until an equilibrium is reached between the formation and destruction of stars
that end their lives as supernovae, i.e., at around $40$~Myr at solar
metallicities, according to the evolutionary tracks considered.
\citet{Leitherer95} presented the evolution of both supernova rate and the
injection rate of mechanical energy for longer term evolution, up to ages of
300~Myr. Their Fig.~56 showed that the asymptotic rate of energy injection is
within 0.05 dex of the value predicted at 30~Myr}. Although the computation of
the soft X-ray luminosities in the CMHK02 models is simplistic, the
predictions are in good agreement with the results of \citet{Strickland99},
which were computed using hydrodynamical simulations of a young superbubble
driven by a cluster of massive stars.
\begin{figure}
\centering
\includegraphics[width=7cm,bb=50 34 705 521 dpi,clip=true]
{8398fi4.eps}
\caption{$L_{\rm{soft X}}/L_{\rm{FIR}}$\ histograms for the samples of star-forming galaxies from
\citet{Ranalli03} (dashed line), \citet{Tullmann06b} (thick solid line) and
\citet{Rosa07} (thin solid line). { The bins of the different histograms have been
slightly shifted with respect to each other for clarity.} }
\label{histo}
\end{figure}
Figure~\ref{ratio} shows the predictions for the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio evolution
with time. It can be seen that $L_{\rm{soft X}}/L_{\rm{FIR}}$\ increases continuously with time after an
instantaneous burst, even after the first $5$~Myr, due to the fact that $L_{\rm{soft X}}$\
remains almost constant while $L_{\rm{FIR}}$\ decreases rapidly. In extended burst
models, {the rate of increase of $L_{\rm{soft X}}/L_{\rm{FIR}}$\ with time} after the first $5$~Myr is smaller. As
discussed above, we expect the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio to stabilize after about $40$ Myr
in EB models, when an equilibrium between formation and destruction of stars
susceptible to becoming supernovae has been reached. In both cases, there is a
rapid increase in the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio after the onset of the burst (one order of
magnitude in $3$~Myr in IB models), associated with the rapid increase in the
amount of mechanical energy injected during the first phase of the evolution of
the most massive stars.
{ In Fig.~\ref{ratio-met}, we plot the evolution of the $L_{\rm{soft X}}/L_{\rm{FIR}}$\
ratio as a function of metallicity ($Z = Z_\odot$ and $Z = 0.4\times Z_\odot$)
and star formation history (instantaneous and extended bursts). Varying metallicity highlights two results: first, low-metallicity stars evolve more slowly and
have a longer lifetime, such that the evolution of $L_{\rm{FIR}}$\ is delayed with respect
to solar-metallicity stars. Second, the lower the metallicity, the less
efficient are the stellar winds, and therefore the lower is the amount of mechanical
energy released into the interstellar medium. As seen in Fig.~\ref{ratio-met}, the net
effect is a decrease in the value of $L_{\rm{soft X}}/L_{\rm{FIR}}$\, at intermediate ages by 0.2 dex, or by up to 0.5 dex within the first 4 Myr of evolution. At some ages, the trend is even reversed. For the time interval considered, however, the predicted values of $L_{\rm{soft X}}/L_{\rm{FIR}}$\ are similar for both values of metallicity. }
\begin{figure}[ht]
\centering
\includegraphics[width=8cm, bb= 19 39 705 565
dpi,clip=true]{8398fi5.eps}
\caption{$L_{\rm{soft X}}/L_{\rm{FIR}}$\ histogram for the combined sample, over the evolution of
the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ computed for IB (dashed lines) and EB (solid lines) and $\epsilon_{\rm{xeff}}$\ $=
0.10$ (top) and $0.01$ (bottom). The horizontal line corresponds to the mean
ratio of the complete sample, $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $= -3.92$.}
\label{sxf-histo}
\end{figure}
\section{Observational data sample}
We have compiled $L_{\rm{soft X}}$\ and $L_{\rm{FIR}}$\ data for 62 star-forming galaxies, of
different types and redshifts, to compare with our model predictions.
{The data compilation of \citet{Ranalli03} (hereafter RCS03) is for star-forming
galaxies, from the atlas of \citet{Ho97}, that have detectable X-ray emission in
ASCA and/or BeppoSAX observations. Only spiral and irregular galaxies from Sa to
later types were included in this sample, which was complemented by the authors
with data for six well-known starburst galaxies observed in the southern
hemisphere.}
The sample compiled by \citet{Tullmann06b} (hereafter TUL06) is based on the nine
late-type starburst galaxies of \citet{Tullmann06a} ({\em XMM-Newton} data),
seven star-forming disk galaxies from \citet{Strickland04a} ({\em Chandra} data)
and seven additional late-type star-forming galaxies taken from the literature.
The contribution by obvious point-sources within the extraction regions was
removed from all objects in the original \citet{Tullmann06a} and
\citet{Strickland04a} compilations when possible.
{ In both samples, the X-ray emission was corrected for Galactic neutral
hydrogen absorption, but not for the intrinsic absorption of the galaxies. The
published far-infrared fluxes were computed using the {\em IRAS} 60 and 100
$\mu$m fluxes following the procedure of \citet{Helou88}, corresponding to the energy emitted
within the range $40 - 120 ~\mu$m. \citet{Calzetti2000} found for local starburst galaxies that $F(1-1000)/F(40-120) = 1.75 \pm 0.25$. We have
therefore multiplied the FIR fluxes provided by RCS03 and
TUL06 by 1.75, in order to obtain a more realistic determination of
the total amount of energy being reemitted in the mid- and far- infrared range.
The luminosities were recomputed for all objects from the published fluxes
by using the distances corrected to the Local Group reference frame as given in
the {\em NASA Extragalactic Database}, assuming $H_{\rm0} = 73$ km s$^{-1}$
Mpc$ ^{-1}$.
There is some overlap between the two samples. The $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio for M82 is
$-4.08$ in RCS03 (based on {\em BeppoSAX} data) and $-3.85$ for
TUL06 ({\em Chandra}). For NGC~4631 $L_{\rm{soft X}}/L_{\rm{FIR}}$\ varies between $-3.96$
({\em ASCA}) and $-4.17$ ({\em XMM-Newton}). In these cases we
have taken the $L_{\rm{soft X}}$\ values provided by TUL06. }
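As an illustration of this procedure, the minimal Python sketch below (with hypothetical example fluxes and distance; the conversion constant and function names are ours) scales a Helou-type FIR flux by 1.75 and converts both fluxes into luminosities for the adopted $H_{\rm0}$:
\begin{verbatim}
# Minimal sketch of the luminosity computation described above: Helou-type
# FIR fluxes (40-120 micron) are scaled by 1.75 to approximate the total
# 1-1000 micron emission, then converted to luminosities with the adopted
# distances (NED distances corrected to the Local Group frame, consistent
# with H0 = 73 km/s/Mpc). Input values below are hypothetical.
import math

MPC_IN_CM = 3.0857e24  # cm per Mpc

def luminosity(flux_cgs, distance_mpc):
    """L = 4 pi d^2 F, with F in erg s^-1 cm^-2 and d in Mpc."""
    d_cm = distance_mpc * MPC_IN_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

f_fir_helou = 1.0e-9   # erg s^-1 cm^-2, hypothetical 40-120 micron flux
f_soft_x = 5.0e-13     # erg s^-1 cm^-2, hypothetical soft X-ray flux
d = 10.0               # Mpc, hypothetical distance

L_FIR = 1.75 * luminosity(f_fir_helou, d)  # extrapolated to 1-1000 micron
L_X = luminosity(f_soft_x, d)
print(f"log(Lx/LFIR) = {math.log10(L_X / L_FIR):.2f}")
\end{verbatim}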
To compare with star-forming galaxies outside the Local Universe, we consider
galaxies at redshifts $ z = 0.01 - 0.67$ studied by \citet{Rosa07} (hereafter
ROSA07). Both $L_{\rm{soft X}}$\ and $L_{\rm{FIR}}$\ data are available for these galaxies, which were
observed as part of the Chandra Deep Field South (CDFS) survey. The X-ray data
in this sample come from the catalog of \citet{Alexander03}, and were not
corrected for either Galactic or intrinsic absorption by neutral hydrogen.
Nevertheless, along the line of sight to the {\em CDFS} the expected Galactic
neutral hydrogen absorption in the soft X-ray band is only 4.2\% (0.02
dex) \citep{Alexander03}. { We exclude objects believed to harbour an obscured
AGN, or to be dominated by low-mass X-ray binaries (LMXB). In total, we have
identified 18 objects with detectable soft X-ray and far-infrared fluxes that
appear not to be contaminated by either an AGN or LMXB. }
ROSA07 derived the far infrared luminosities for these objects in the
full 8 -- 1000 $\mu$m band using Spitzer observations at 25 $\mu$m. They used the
empirical calibration of \citet{Takeuchi05} {based on the analysis of a
large sample of galaxies for which fluxes in the four {\em IRAS} bands at 12,
25, 60 and 100 $\mu$m were available. The FIR luminosity computed in this way
should be consistent with the luminosities derived for the RCS03
and TUL06 samples, extrapolated to the 1 -- 1000 $\mu$m range.
ROSA07 assumed $H_{\rm0} = 70$ km s$^{-1}$ Mpc$ ^{-1}$ to convert
fluxes into luminosities. }
We note that the $L_{\rm{soft X}}$\ values are integrated over the $0.5 - 2.0$ keV band by
RCS03, over the $0.2 - 2.0$ keV band by ROSA07, and over the $0.3 -
2.0$ keV band by TUL06, while the model predictions were computed
for the $0.4 - 2.4$ keV band. We have verified that for the typical spectral
properties of the diffuse gas in these objects, the discrepancies associated with
the different bandwidths should be smaller than $9$\% ($0.04$ dex) in any case.
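The small corrections quoted in this section are expressed both as fractions and in dex; to within rounding, the conversion amounts to the trivial helper sketched below (illustrative code only):
\begin{verbatim}
# Illustrative helper: convert a fractional change of a luminosity into
# the corresponding logarithmic offset in dex.
import math

def fraction_to_dex(fraction):
    """E.g. 0.09 (9%) -> log10(1.09) ~ 0.04 dex."""
    return math.log10(1.0 + fraction)

for frac in (0.042, 0.09, 0.5):
    print(f"{frac*100:4.1f}% -> {fraction_to_dex(frac):.3f} dex")
\end{verbatim}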
We have plotted in Fig.~\ref{histo} the histograms of the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ distribution
for each sample. It can be seen that while there is a significant overlap
between them, the star-forming galaxies compiled by RCS03 show the
smallest dispersion. The TUL06 galaxies show generally lower $L_{\rm{soft X}}/L_{\rm{FIR}}$\
values than the objects compiled by RCS03, while the sample of
ROSA07 presents the highest $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios. The mean $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\
values derived for each sample are $-3.94$ (0.27) (RCS03), $-4.34$
(0.48) (TUL06), $-3.37$ (0.45) (ROSA07) and $-3.92$
(0.57) for the whole compilation. Values within parentheses correspond to the
$\sigma$ dispersion of each sample. { We have looked for any possible
correlation between the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios in the TUL06 galaxies and
their morphological type, but there is no clear trend. Both Sc+Sd and SB galaxies in the sample cover a wide range of luminosities and $L_{\rm{soft X}}/L_{\rm{FIR}}$\
values; furthermore, they cover a similar range in luminosities to that of the
galaxies in the RCS03 sample. There is no obvious reason for the
differences between the two samples of local star-forming galaxies. }
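For completeness, the per-sample means and $\sigma$ dispersions quoted above are computed in the straightforward way sketched below (illustrative Python code with placeholder values, not the actual measured ratios):
\begin{verbatim}
# Sketch of the per-sample statistics; the arrays below are placeholders,
# not the measured log(Lsoft X/LFIR) ratios. Sample standard deviations
# (ddof=1) are used here.
import numpy as np

samples = {
    "RCS03":  np.array([-3.9, -4.1, -3.8]),
    "TUL06":  np.array([-4.5, -4.2, -4.3]),
    "ROSA07": np.array([-3.4, -3.2, -3.5]),
}

for name, logratios in samples.items():
    print(f"{name}: mean = {logratios.mean():.2f},"
          f" sigma = {logratios.std(ddof=1):.2f}")

all_values = np.concatenate(list(samples.values()))
print(f"whole compilation: mean = {all_values.mean():.2f},"
      f" sigma = {all_values.std(ddof=1):.2f}")
\end{verbatim}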
\section{Discussion}
Figure~\ref{ratio} indicates that our models predict a strong dependence of the $L_{\rm{soft X}}/L_{\rm{FIR}}$\
ratio on the star formation history (instantaneous or extended), and on the
evolutionary state of the star formation process. Moreover, the ratio is also
strongly dependent on the efficiency of the reprocessing of mechanical energy
and UV photons into soft X-ray and FIR emission, respectively. Therefore, we
would expect a significant scatter in the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ values observed in star-forming
galaxies. This is indeed what we find in Fig.~\ref{histo}, as discussed above.
\begin{figure*}
\begin{center}
\includegraphics[width=8 cm,bb=18 22 705 527
dpi,clip=true]{8398fi6a.eps}
\hspace{1.0 cm}
\includegraphics[width=8 cm,bb=18 22 705 527
dpi,clip=true]{8398fi6b.eps}
\end{center}
\caption{$L_{\rm{soft X}}$\ vs. $L_{\rm{FIR}}$\ for the RCS03 (squares),
TUL06 (triangles) and ROSA07 (circles) samples.
In the left panel we have
overplotted the correlation lines corresponding to the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios predicted
by IB models for different ages and $\epsilon_{\rm{xeff}}$\ values. The right panel shows the
corresponding predictions for EB models.}
\label{mcorr}
\end{figure*}
\citet{MasHesse99b} showed that the star formation episodes taking place in
{ compact} starburst or HII galaxies are generally of short duration, and that
their properties can be better reproduced by evolutionary synthesis models
assuming (nearly) instantaneous bursts than by long-lasting, extended star-formation processes. The generally strong optical emission lines in these
galaxies constrain the evolutionary state of their massive star-formation
episodes to ages below $6$ Myr, { typically within $4-5$ Myr.} On the
other hand, star formation is expected to proceed during long periods of time in
the disks of late-type spiral galaxies, generally in the form of individual
bursts at different times. The accumulation of individual starburst episodes
along the disks of these galaxies mimics a continuous star formation process
\citep{MasHesse99b}, so that extended star formation models might be a better
approximation to reproduce their spatially-integrated properties.
In Fig.~\ref{sxf-histo}, we have added to the predictions of the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ values
the histogram corresponding to the whole sample, as well as the average
observational $L_{\rm{soft X}}/L_{\rm{FIR}}$\ value. As discussed above, there is a degeneracy between
$\epsilon_{\rm{xeff}}$\ and the age of the star formation episode, so that it is not possible to
discriminate between the two parameters without additional constraints on the
evolutionary state. { Apart from a few irregular galaxies, for which an instantaneous
model would better describe the star-formation episodes they are hosting, most
of the galaxies in the sample are large spiral galaxies experiencing long-term
star formation. Figure~\ref{sxf-histo} shows that the mean $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio can be reproduced by
relatively young EB models (after about 10 Myr of evolution) with moderate
efficiencies $\epsilon_{\rm{xeff}}$\ of about $1$\%. Galaxies with higher $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios would
correspond to more evolved extended starbursts reaching the evolutionary
asymptotic phase, with $\epsilon_{\rm{xeff}}$\ in most cases below $10$\%. On the other hand,
the galaxies with lower $L_{\rm{soft X}}/L_{\rm{FIR}}$\ values can only be reproduced by very young
models, after less than 10~Myr of evolution. The galaxies in the sample have been
selected in the various compilations for being {\em star-forming} or {\em
starburst} galaxies, i.e., galaxies experiencing a stronger than average episode
of star formation. The integrated emission in the far infrared and soft X-ray
bands of some of these galaxies could be dominated by a single but intense
burst of star formation. { While star formation proceeds in these objects for a
long time, these individual, intense bursts are not expected to last longer than
a few Myr.} Some of these starbursts could indeed be rather unevolved. We believe
that this is what we see in Fig.~\ref{sxf-histo}: some of the galaxies with low
$L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios could be dominated by a single, intense and relatively unevolved
burst of star formation. A deeper study of the individual galaxies would be
required to confirm this hypothesis, but it is beyond the scope of this work.
\citet{Grimes05} computed the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios for a sample of
ultraluminous infrared, starburst and dwarf starburst galaxies (the galaxies
classified as starburst are already included in the TUL06
sample). For their 7 dwarf starburst galaxies (2 are already included in the
sample by RCS03), they derived a mean $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ of $-4.0$, close
to the average value found for our complete compilation. The $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio of
these galaxies (within the range $-3.58$ to $-4.29$) is properly reproduced by
IB models at approximately $4-5$ Myr, with $\epsilon_{\rm{xeff}}$\ within a realistic range $1-5$\%
\citep{Strickland99,Summers01,Summers04}. The mean $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ value of the 9
ultraluminous infrared galaxies (ULIRG) in their sample is $-4.5$, clearly below
our average. This indicates that the star formation processes in this kind of
galaxy might be relatively unevolved and their emission dominated by a
single, intense episode of star formation. }
In Figure~\ref{mcorr}, we plot the observational $L_{\rm{soft X}}$\ vs. $L_{\rm{FIR}}$\ values
for the galaxies in the three samples. We include the predictions of
IB and EB models at different ages, and for $\epsilon_{\rm{xeff}}$\ values between $1$ and $10$\%.
{ These plots support the main points of our previous discussion: the observational $L_{\rm{soft X}}$\ vs.
$L_{\rm{FIR}}$\ correlation in star-forming galaxies can be well-reproduced by
evolutionary synthesis models assuming realistic parameters: ages below 6 Myr for
IB cases and a spread of young to evolved bursts for EB models, a high
efficiency in the reprocessing of UV photons into far infrared emission, and a
moderate ($1-10$\%) efficiency in the heating of the diffuse interstellar
gas by the mechanical energy released by massive stellar winds. }
We have analyzed the effect of some intrinsic properties of the sampled objects
on the dispersion shown in the correlation plots. First, as noted above, the
measured $L_{\rm{soft X}}$\ values did not include the correction for intrinsic neutral
hydrogen absorption. \citet{Kunth98} measured the column density of neutral
hydrogen in the line of sight to $8$ compact starburst galaxies, by fitting
their Lyman~$\alpha$ profiles, finding values in the range $log(n_H) \sim
19 - 21$, with $n_H$ in cm$^{-2}$. We have computed that for the typical spectral properties of
the hot diffuse gas this correction would be within $0.4$\% -- $50$\% (i.e.
$<0.18$ dex). { A second effect would be the contamination of
the soft X-ray emission by low-mass X-ray binaries associated with the underlying,
older stellar population. ROSA07 estimated that the contribution by LMXB
in their sample of spiral star-forming galaxies was generally of a few percent,
and concluded that the contamination should be negligible for galaxies with
{\em SFR}\ $ > 1 \, M_{\sun} \, yr^{-1}$. In the RCS03 sample only 7
objects have a value of {\em SFR}\ below $1 \, M_{\sun} \, yr^{-1}$, but their average $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\
is $-3.7$ (in any case within $-4.2$ to $-3.2$), i.e., with no deviation at all
from the total average $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio. Therefore, the contamination of $L_{\rm{soft X}}$\ by
LMXB does not seem to be important. Finally, TUL06 removed the
contamination by point sources before computing the integrated $L_{\rm{soft X}}$, so that the
contamination of this sample by LMXB should be negligible. An additional effect
is related to the relative strength of the starburst emission compared to the
galaxy as a whole. While the soft X-ray emission would be associated mostly with
the starburst regions, the far infrared luminosity can include a significant
contribution from the rest of the galaxy, so that the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios obtained
from spatially-integrated measurements would be lower than the intrinsic value
produced by the starburst itself. A detailed morphological analysis of the
galaxies in the sample is out of the scope of this paper, but we expect that at
least in some cases the observational $L_{\rm{soft X}}/L_{\rm{FIR}}$\ value might be contaminated by
emission not related to the star formation episode.}
Finally, $L_{\rm{FIR}}$\ has been computed assuming almost complete reprocessing of UV
stellar continuum photons ({\em E(B-V)}\ $= 1$), as discussed in Sect.~2. {Lower
values of the extinction, of the order of {\em E(B-V)}\ $= 0.1$, would raise the predicted $L_{\rm{soft X}}/L_{\rm{FIR}}$\
ratios by up to $0.15$ dex.} The effective extinction of these objects should be
in between both extreme values {of {\em E(B-V)}.} In conclusion, correcting the soft X-ray
luminosity from intrinsic photoelectric absorption and/or rejecting the far
infrared emission not associated with the starburst regions themselves, could
increase the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios for some objects. On the other hand, smaller interstellar extinction values would increase the
predicted $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio by less than $40$\%.
We consider whether it is possible to derive a calibration that would allow the $L_{\rm{soft X}}$\ luminosity to be used as a tracer of SFR. \citet{Strickland99} found from detailed hydrodynamical
simulations that superbubbles accelerated by the release of mechanical energy in
a starburst, would convert on average approximately $5$\% of the input mechanical energy
into soft X-ray emission. \citet{Summers01} analyzed Mrk~33, a dwarf
star-forming galaxy, and concluded that it is dominated by an intense burst
$5$--$6$ Myr old, and that the rate of injection of mechanical energy from the
starburst is approximately $1.2\times10^{41}$ erg s$^{-1}$. The soft X-ray luminosity of the
central, extended diffuse gas derived by these authors is $L_{\rm{soft
X}}\sim 2.2\times 10^{39}$ erg s$^{-1}$, corresponding to $\epsilon_{\rm{xeff}}$\ $\sim 0.018$.
Similarly, \citet{Summers04} estimated the mechanical energy injection rate from
the starburst in NGC~5253 to be $L_{\rm{mech}} = 3.6\times10^{40}$ erg s$^{-1}$. The
measured thermal X-ray emission associated with the starburst was $L_{\rm{soft X}}$\ $\sim
4\times10^{38}$ erg s$^{-1}$, yielding $\epsilon_{\rm{xeff}}$\ $\sim 0.01$.
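These literature values translate into the efficiencies quoted above simply as $\epsilon_{\rm{xeff}}$\ $= L_{\rm{soft X}}/L_{\rm{mech}}$, as in the following minimal sketch (illustrative code; the input luminosities are the published values just cited):
\begin{verbatim}
# Illustrative computation of the soft X-ray production efficiency implied
# by the literature values quoted above: eps_xeff = L_softX / L_mech.
cases = {
    "Mrk 33 (Summers et al. 2001)":   (2.2e39, 1.2e41),  # (L_softX, L_mech)
    "NGC 5253 (Summers et al. 2004)": (4.0e38, 3.6e40),
}
for name, (l_soft_x, l_mech) in cases.items():
    print(f"{name}: eps_xeff ~ {l_soft_x / l_mech:.3f}")
\end{verbatim}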
{ We can therefore assume that $\epsilon_{\rm{xeff}}$\ is constrained to be in the range
$1-5$\% for typical star-forming galaxies. Our evolutionary synthesis models
predict for Instantaneous Bursts (of ages between $3-6$ Myr) that $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $\sim (-3.1, -4.0)$, with a central value
$log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $\sim -3.5$. On the other hand, Extended Burst models predict
$log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $\sim (-3.0, -3.4)$ after $30$ Myr of evolution, when the number of
supernova explosions begins to stabilize. For less-evolved, extended episodes
(of approximately 10 Myr of continuous star formation) the predicted ratios would be
lower, within the range $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $\sim (-3.5, -4.0)$.
These values are close to the average ratio $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $\sim -3.92$ derived from
our total sample of star-forming galaxies, supporting the use of the soft X-ray
emission as a tracer of the star formation rate in starburst galaxies. Using
the $L_{\rm{soft X}}$\ values predicted by the models, as shown in Fig.~\ref{lumin}, we can
derive the calibration of the star formation rate (or strength) as a function of
$L_{\rm{soft X}}$. For this we have considered the points at which $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $= -3.5$ for IB
models, $log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $= -3.1$ for EB cases in the asymptotic phase of evolution, and
$log(L_{\rm{soft X}}/L_{\rm{FIR}})$\ $= -3.7$ for extended, but non-evolved bursts, with ages of approximately 10~Myr,
according to the discussion above. The calibrations would therefore be:
\vspace{0.2cm}
\centerline { {\em SFR}\ (M$_{\sun}$ yr$^{-1}) = 2\times 10^{-41}$ $L_{\rm{soft X}}$\ (erg s$^{-1}$) (evolved EB)}
\vspace{0.2cm}
\centerline { {\em SFR}\ (M$_{\sun}$ yr$^{-1}) = 8\times 10^{-41}$ $L_{\rm{soft X}}$\ (erg s$^{-1}$) (young EB) }
\vspace{0.2cm}
\noindent For comparison, the semiempirical calibration derived by
\citet{Ranalli03} is
\vspace{0.2cm}
\centerline { {\em SFR}\ (M$_{\sun}$ yr$^{-1}) = 1.1\times 10^{-40}$ $L_{\rm{soft X}}$\ (erg s$^{-1}$) }
\vspace{0.2cm}
\noindent where we have adapted the original calibration to the whole FIR
emission in the $1-1000 ~\mu$m band, to be consistent with \citet{Kennicutt98}
(whose $L_{\rm{FIR}}$\ calibration is used as a proxy), and have scaled the original
\citet{Kennicutt98} mass normalization to the range $2-120 M_{\sun}$ to be
directly comparable with our results.
\noindent These calibrations should be applicable, within the range of validity
shown in Fig.~\ref{lumin}, for galaxies experiencing an extended episode of
star formation at a constant star formation rate. In the case of coeval, instantaneous
starbursts, the parameterization would be:
\vspace{0.2cm}
\centerline { {\em SFS}\ (M$_{\sun}$)$ = 2\times 10^{-34}$ $L_{\rm{soft X}}$\ (erg s$^{-1}$) }
\vspace{0.2cm}
\noindent where the star formation strength ($SFS$) is the total mass
transformed into stars at the onset of the burst.
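For convenience, these calibrations can be applied as in the minimal Python sketch below (illustrative code with a hypothetical input luminosity; the calibrations are only valid within the range shown in Fig.~\ref{lumin} and for the mass normalization discussed in Sect.~2):
\begin{verbatim}
# Minimal sketch applying the calibrations derived above; L_softX in erg/s,
# SFR in Msun/yr and SFS in Msun (mass integrated over 2-120 Msun, Salpeter
# IMF). The input luminosity below is hypothetical.
def sfr_evolved_eb(l_soft_x):
    return 2e-41 * l_soft_x

def sfr_young_eb(l_soft_x):
    return 8e-41 * l_soft_x

def sfs_instantaneous(l_soft_x):
    return 2e-34 * l_soft_x

l_x = 1.0e40  # erg/s, hypothetical soft X-ray luminosity
print(f"evolved EB : SFR ~ {sfr_evolved_eb(l_x):.2f} Msun/yr")
print(f"young EB   : SFR ~ {sfr_young_eb(l_x):.2f} Msun/yr")
print(f"coeval IB  : SFS ~ {sfs_instantaneous(l_x):.1e} Msun")
\end{verbatim}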
The semiempirical calibration proposed by \citet{Ranalli03} is therefore in
good agreement with the calibration derived from synthesis models for relatively
unevolved star formation episodes, but it would tend to overestimate the star
formation rate for galaxies that have been forming massive stars over a long time, for example tens or hundreds of millions of years.
The models do not in principle predict any dependence of the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios
on the strength (i.e. total luminosity) of the star formation episodes, as
proposed by some authors for the hard X-ray luminosity. Nevertheless, we want
to remark that our models do not study the detailed properties of the
medium surrounding the star-forming regions. The intensity of the star-formation episode could, for example, have a direct effect on the dust-grain properties of the interstellar medium. $\epsilon_{\rm{xeff}}$\ may, in addition, be a function of the burst intensity. Both effects could create a correlation between $L_{\rm{soft X}}/L_{\rm{FIR}}$\ and the star-formation burst intensity, although such a correlation is unclear in Fig.~\ref{mcorr}. }
\section{Conclusions}
We have compared the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ values measured in a sample of 62 star-forming
galaxies with the predictions by our evolutionary synthesis models, aiming to
analyze the validity of semiempirical and theoretical calibrations of $L_{\rm{soft X}}$\ as a star
formation rate estimator. The main results can be summarized as follows:
\begin{enumerate}
\item The $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratios are strongly dependent on the efficiency in the
conversion of the mechanical energy released by the young massive starburst into
soft X-ray luminosity, by interaction of the stellar winds and supernova ejecta
with the surrounding interstellar medium. From theoretical predictions and
observational data, we expect an $\epsilon_{\rm{xeff}}$\ value of a few percent in starburst
galaxies.
\item $L_{\rm{soft X}}/L_{\rm{FIR}}$\ is also dependent on the evolutionary status of the star
formation episode. It increases rapidly with time during the first $5$ Myr of
evolution of a massive starburst, and shows a slower increase afterwards. After
a (nearly) instantaneous burst of star formation, $L_{\rm{soft X}}$\ decreases more slowly than
$L_{\rm{FIR}}$, as long as there remains a population of massive stars able to collapse
as supernovae (up to around 35~Myr).
\item When star formation proceeds at a nearly constant rate during
extended periods of time, the $L_{\rm{soft X}}/L_{\rm{FIR}}$\ ratio is expected to stabilize after around
$40$ Myr, when the number of massive stars that produce supernovae has
reached an equilibrium between the death of massive stars and the birth of new ones.
{
\item The $L_{\rm{soft X}}/L_{\rm{FIR}}$\ values measured for the sample of star-forming galaxies are
consistent with the predictions by the models under realistic conditions:
relatively young and unevolved star formation episodes and $\epsilon_{\rm{xeff}}$\ values
within $1$--$10$\%.
\item A calibration of $L_{\rm{soft X}}$\ as a star formation rate estimator, based on the
predictions of evolutionary synthesis models, has been derived. The calibration
proposed by \citet{Ranalli03} is consistent with the predictions for
relatively unevolved, time-extended bursts of star formation.
}
\end{enumerate}
\begin{acknowledgements}
JMMH and HO are partially funded by Spanish MEC grants AYA2004-08260-C03-03 and
ESP2005-07714-C03-03. OH is funded by Spanish FPI grant BES-2006-13489. MCS
acknowledges funding by Spanish MEC grant AYA2004-02703, and by Spanish {\em
Ram\'on y Cajal} fellowship El 01/08/2007.
\end{acknowledgements}
\section{Introduction}\label{introduction}
An analysis of rotational state solutions for main belt asteroids has been performed by many authors. All of these authors observed a deficiency of poles close to the ecliptic plane \citep[e.g.,][]{Magnusson1986,Drummond1988,Pravec2002,Skoglov2002,Kryszczynska2007}. \citet{Hanus2011} showed that this depopulation of spin vectors concerns mainly smaller asteroids ($D\lesssim40$ km), while the larger asteroids \citep[$60\lesssim D\lesssim$~130--150 km,][]{Kryszczynska2007, Paolicchi2012} have a statistically significant excess of prograde rotators, but no evident lack of poles close to the ecliptic plane. The observed anisotropy of pole vectors of smaller asteroids is now believed to be a result of YORP thermal torques\footnote{Yarkovsky--O'Keefe--Radzievskii--Paddack effect, a torque caused by the recoil force due to anisotropic thermal emission, which can alter both rotational periods and orientation of spin axes, see e.g., \citet{Rubincam2000}} and also of collisions that systematically evolve the spin axes away from the ecliptic plane, while the prograde excess of larger asteroids is interpreted as a primordial preference, in agreement with the theoretical work of \citet{Johansen2010}. As the number of asteroids with known rotational states grows, we can study the spin vector distribution not only in the whole MBA or NEA populations, but we can also focus on individual groups of asteroids within these populations, particularly on collisional families (i.e., clusters of asteroids with similar proper orbital elements and often similar spectra, which were formed by catastrophic break-ups of parent bodies or by cratering events).
The theory of dynamical evolution of asteroid families \citep[e.g.,][]{Bottke2006} suggests that the Yarkovsky\footnote{a thermal recoil force affecting rotating asteroids}/YORP effects change the orbital parameters of smaller asteroids ($\lesssim$30--50 km) -- the semi-major axis of prograde rotators slowly grows in the course of time, whereas the semi-major axis of retrograde rotators decreases. This phenomenon is particularly visible when we plot the dependence of the absolute magnitude $H$ on the proper semi-major axis $a_{\mathrm{p}}$ (see an example of such a plot for the Themis family in Figure~\ref{img:a_H}, left panel). In addition, various resonances (e.g., mean-motion resonances with Jupiter or Mars, or secular resonances) can intersect the family and cause a decrease of the number of asteroids in the family by inducing moderate oscillations to their orbital elements \citep{Bottke2001}, as can be seen in Figure~\ref{img:a_H} for the Flora family, where the secular $\nu_6$ resonance with Saturn almost completely eliminated objects to the left of the center of the family (the $\nu_6$ resonance has its center at 2.13 AU for objects with $\sin I\sim0.09$, which is typical for Flora family members; it affects the orbits of objects that come into its proximity). Some resonances can, for example, capture some asteroids at particular semi-major axes \citep{Nesvorny1998}.
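Throughout this work we delimit families with the standard V-shape envelope in the ($a_{\mathrm{p}}$, $H$) plane (see Sect.~\ref{sec:membership}). A minimal sketch of how such an envelope can be evaluated is given below, assuming the commonly used parametrization $0.2\,H = \log_{10}(|a_{\mathrm{p}}-a_{\mathrm{c}}|/C)$; both the functional form and the example numbers are shown for illustration only, the $C$ values actually adopted being those of \citet{Broz2013b}:
\begin{verbatim}
# A minimal sketch of the V-shape border test. The parametrization
# 0.2 H = log10(|a_p - a_c| / C) and the example numbers are assumptions
# used only for illustration.
import math

def inside_v_shape(a_p, H, a_center, C):
    """True if an asteroid with proper semi-major axis a_p (AU) and absolute
    magnitude H lies inside the V-shape envelope of parameter C (AU)."""
    da = abs(a_p - a_center)
    if da == 0.0:
        return True
    return 0.2 * H >= math.log10(da / C)

# hypothetical Themis-like example values
print(inside_v_shape(a_p=3.15, H=13.0, a_center=3.135, C=2.5e-4))
\end{verbatim}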
Laboratory experiments strongly suggest that a collisionally-born cluster should initially have a rotational frequency distribution close to Maxwellian \citep{Giblin1998} and an isotropic spin vector distribution.
\begin{figure*}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{themis_a_H.eps}\,\includegraphics{flora_a_H.eps}}\\
\end{center}
\caption{\label{img:a_H}Dependence of the absolute magnitude $H$ on the proper semi-major axis $a_{\mathrm{p}}$ for the Themis family (left) and for the Flora family (right) with the likely positions of the family centers (vertical lines). We also plot three ($a_{\mathrm{p}}$, $H$) borders of the family for different parameters $C$ (different values correspond to a different initial extent of the family or different age and magnitude of the Yarkovsky semi-major axis drift) by gray lines, the optimal border corresponds to the middle line. The vertical dotted line represents the approximate position of the secular $\nu_6$ resonance for the inclination typical for Flora family members and the horizontal arrow its approximate range.}
\end{figure*}
For several families, we already have age estimates \citep[e.g., $2.5\pm1.0$~Gyr for the Koronis family,][]{Bottke2001}, and so we have a constraint on the time for which the family has been evolving towards its current state. As was shown in \citet{Bottke2001}, the family evolution is dominated by the Yarkovsky and YORP effects, and also by collisions and spin-orbital resonances. The knowledge of the age should constrain some free parameters in various evolutionary models.
The spin-vector properties in an asteroid family were first studied by \citet{Slivan2002} and \citet{Slivan2003}, who revealed an anisotropy of spin vectors for ten members of the Koronis family. This was an unexpected result because a collisionally-born population should have an isotropic spin-vector distribution. The peculiar spin-vector alignment in the Koronis family was explained by \citet{Vokrouhlicky2003} as a result of the YORP torques and spin-orbital resonances that modified the spin states over the time span of 2--3~Gyr. The secular $s_6$ spin-orbital resonance with Saturn may affect the Koronis family members; according to the numerical simulations, it can
(i)~capture some objects and create a population of prograde rotators with periods $P\in(4,7)$~h, similar obliquities ($42^{\circ}$ to $51^{\circ}$) and also with similar ecliptic longitudes in the ranges of ($24^{\circ}$ to $73^{\circ}$) and ($204^{\circ}$ to $259^{\circ}$); or
(ii)~create a group of low-obliquity retrograde rotators with rotational periods $P<5$~h or $P>13$~h.
The prograde rotators trapped in the $s_6$ spin-orbital resonance were referred to by \citet{Vokrouhlicky2003} as being in {\em Slivan states}. Most members of the Koronis family with known rotational states \citep[determined by the lightcurve inversion by][]{Slivan2003,Slivan2009,Hanus2011,Hanus2013a} had the expected properties, except that the periods of observed prograde rotators were shifted to higher values of 7--10~h. Rotational states of asteroids that did not match the properties of the two groups were probably reoriented by recent collisions, which are statistically plausible during the family existence for at least a few Koronis members \citep[e.g., asteroid (832)~Karin was affected by a collision when a small and young collisional family within the Koronis family was born,][]{Slivan2012}.
Another study of rotational states in an asteroid family was performed by \citet{Kryszczynska2013a}, who focused on the Flora family. She distinguished prograde and retrograde groups of asteroids and reported an excess of prograde rotators. This splitting into two groups is likely caused by the Yarkovsky effect, while the prograde excess is likely due to the secular $\nu_6$ resonance, which significantly depopulates the retrograde part of the family (see Figure~\ref{img:a_H}b; only retrograde rotators can drift via the Yarkovsky/YORP effects towards the resonance).
Further studies of rotational properties of collisional families should reveal the influence of the Yarkovsky and YORP effects, and possibly a capture of asteroids in spin-orbital resonances similar to the case of the Koronis family. The Yarkovsky effect should be responsible for spreading the family in a semi-major axis (retrograde rotators drift from their original positions towards the Sun, on the other hand, prograde rotators drift away from the Sun, i.e. towards larger $a_{\mathrm{p}}$'s), and the YORP effect should eliminate the spin vectors close to the ecliptic plane.
Disk-integrated photometric observations of asteroids contain information about object's physical parameters, such as the shape, the sidereal rotational period and the orientation of the spin axis. Photometry acquired at different viewing geometries and apparitions can be used in many cases in a lightcurve inversion method \citep[e.g.,][]{Kaasalainen2001a,Kaasalainen2001b} and a convex 3D shape model including its rotational state can be derived. This inverse method uses all available photometric data, both the classical dense-in-time lightcurves or the sparse-in-time data from astrometric surveys. Most of the asteroid models derived by this technique are publicly available in the Database of Asteroid Models from Inversion Techniques \citep[DAMIT\footnote{\texttt{http://astro.troja.mff.cuni.cz/projects/asteroids3D}},][]{Durech2010}. In February 2013, models of 347 asteroids were included there. About a third of them can be identified as members of various asteroid families. This high number of models of asteroids that belong to asteroid families allows us to investigate the spin-vector properties in at least several families with the largest amount of identified members. Comparison between the observed and synthetic (according to a combined orbital- and spin-evolution model) spin-vector properties could even lead to independent family age estimates.
The paper is organized as follows: in Section~\ref{sec:models}, we investigate the family membership of all asteroids for which we have their models derived by the lightcurve inversion method and present 31 new asteroid models that belong to ten asteroid families. An analysis of spin states within these asteroid families with at least three identified members with known shape models is presented in Section~\ref{sec:spin_state_general}. A combined spin-orbital model for the long-term evolution of a collisional family is described in Section~\ref{sec:simulation}, where we also compare the synthetic and observed spin-vector properties and constrain ages of families Flora and Koronis.
\section{Determination of family members}\label{sec:models}
\subsection{Methods for family membership determination}\label{sec:membership}
For a {\em preliminary} family membership determination, we adopted an on-line catalog published by \citet{Nesvorny2012} who used the Hierarchical Clustering Method\footnote{In this method, mutual distances in proper semi-major axis ($a_{\mathrm{d}}$), proper eccentricity ($e_{\mathrm{d}}$), and proper inclination ($i_{\mathrm{d}}$) space are computed. The members of the family are then separated in the proper element space by less than a selected distance (usually, it has a unit of velocity), a free parameter often denoted as ``cutoff velocity''.} \citep[HCM,][]{Zappala1990,Zappala1994}. \citet{Nesvorny2012} used two different types of proper elements for the family membership identification: semi-analytic and synthetic. The more reliable dataset is the one derived from synthetic proper elements, which were computed numerically using a more complete dynamical model. The majority of asteroids are present in both datasets. A few asteroids that are only in one of the datasets are included in the study as well (e.g., asteroids (390)~Alma in the Eunomia family or (19848)~Yeungchuchiu in the Eos family), because at this stage it is not necessary to remove objects that could still be real family members.
The HCM method selects a group of objects that are separated in the proper element space by less than a selected distance. However, not all of these objects are actually real members of the collisionally-born asteroid family. A~fraction of objects, the so-called interlopers, have orbital elements similar to the typical elements of the asteroid family members only by coincidence. Interlopers can be identified (and removed), for example, by:
\begin{itemize}
\item inspection of reflectance spectra, because they are usually of different taxonomic types than that of the family members, we use the SMASSII \citep{Bus2002} or Tholen taxonomy \citep{Tholen1984,Tholen1989};
\item inspection of colors based on the Sloan Digital Sky Survey Moving Object Catalog 4 \citep[SDSS MOC4,][]{Parker2008}, we used the color indexes $a^{\star}$ and $i-z$, which usually well define the core of the family (see examples for Themis and Eunomia families in Figure~\ref{img:sdss}), for each asteroid with available color indexes, we compared values $a^{\star}$ and $i-z$ to those that define the family;
\item inspection of albedos based on the WISE data \citep{Masiero2011};
\item constructing a diagram of the proper semi-major axis vs. the absolute magnitude (see Figure~\ref{img:a_H}), estimating the {\em V-shape} defined by the Yarkovsky semi-major axis drift and excluding outliers, i.e. relatively large asteroids outside the V-shape \citep[see][for the case of Eos family]{Vokrouhlicky2006}. We refer here the ($a_{\mathrm{p}}$, $H$) border of the family as the border of the V-shape;
\item constructing a size-frequency distribution (SFD) of the cluster, some asteroids can be too large to be created within the family and thus are believed to be interlopers \citep[see, e.g., numerical simulations by][who excluded the asteroid (490)~Veritas from the Veritas family]{Michel2011}.
\end{itemize}
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{24_Themis_a_star_i-z_COLOR_rect.eps}\,\,\includegraphics{15_Eunomia_a_star_i-z_COLOR_rect.eps}}\\
\end{center}
\caption{\label{img:sdss}Dependence of the color indexes $a^{\star}$ and $i-z$ (from the Sloan Digital Sky Survey Moving Object Catalog 4) for a C-type family Themis and S-type family Eunomia. The family corresponds to a compact structure in this parameter space marked by a rectangle. Note a qualitative difference between C- and S-types asteroids.}
\end{figure}
These methods for family membership determination have one common characteristic -- we have to determine or choose a range for a quantity that defines the family members (range of spectra, sizes, or distance from the family center), which affects the number of objects we include in the family. Our criteria are chosen such that usually 99\% of the objects fall within the adopted ranges.
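As a concrete, deliberately simplified illustration, the sketch below combines two of the criteria listed above -- the SDSS colors and the WISE albedo -- into a single membership flag; the color rectangle and the albedo range are hypothetical placeholders that would have to be tuned family by family so that about 99\% of the HCM members fall inside:
\begin{verbatim}
# Simplified sketch of an interloper flag combining two of the criteria
# above; the color box and albedo range below are hypothetical and must be
# chosen for each family.
def is_candidate_member(a_star, i_z, albedo,
                        color_box=((0.0, 0.2), (-0.2, 0.1)),
                        albedo_range=(0.03, 0.10)):
    (a_min, a_max), (iz_min, iz_max) = color_box
    in_colors = a_min <= a_star <= a_max and iz_min <= i_z <= iz_max
    in_albedo = albedo_range[0] <= albedo <= albedo_range[1]
    return in_colors and in_albedo

print(is_candidate_member(a_star=0.1, i_z=-0.05, albedo=0.06))  # True
print(is_candidate_member(a_star=0.4, i_z=-0.05, albedo=0.06))  # likely interloper
\end{verbatim}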
\subsection{New asteroid models}
From the DAMIT database, we adopt 96 models of asteroids that are, according to the HCM method, members of collisional families.
Currently, we have about 100 new asteroid models that have not yet been published. Here, we present new physical models of 31 asteroids from this sample that are identified as members of asteroid families by the HCM method (we choose only asteroids that belong to ten specific families for which we expect a reasonable number of members, i.e. at least three). These convex shape models are derived by the lightcurve inversion method from combined dense and sparse photometry. The derivation process is similar to the one used in \citet{Hanus2013a}. The dense photometry came from two main sources:
(i)~the Uppsala Asteroid Photometric Catalogue \citep[UAPC\footnote{\texttt{http://asteroid.astro.helsinki.fi/}}, ][]{Lagerkvist1987, Piironen2001}, where lightcurves for about 1\,000 asteroids are stored, and
(ii)~the data from a group of individual observers provided by the Minor Planet Center in the Asteroid Lightcurve Data Exchange Format \citep[ALCDEF\footnote{\texttt{http://www.minorplanet.info/alcdef.html}},][]{Warner2009}.
The sparse-in-time photometry is downloaded from the AstDyS site (Asteroids -- Dynamic Site\footnote{\texttt{http://hamilton.dm.unipi.it/}}). We use data from the three most accurate observatories: USNO--Flagstaff station (IAU code 689), Roque de los Muchachos Observatory, La Palma (IAU code 950), and Catalina Sky Survey Observatory \citep[CSS for short, IAU code 703,][]{Larson2003}.
To increase the number of asteroid models for our study of asteroid families, we perform an additional analysis of our previous results of the lightcurve inversion. For many asteroids, we are able to determine a unique rotational period, but get multiple pole solutions (typically 3--5) with similar ecliptic latitudes $\beta$, which is an important parameter. In \citet{Hanus2011}, we presented a reliability test, where we checked the physicality of the solutions derived by the lightcurve inversion (i.e., whether the shape model rotates around the axis with the maximum moment of inertia). By computing models for all possible pole solutions and by checking their physicality, we remove the pole ambiguity for several asteroids, and thus determine their unique solutions (listed in Table~\ref{tab:models}). For other asteroids, the pole ambiguity remains and the models give us accurate period values and also rough estimates of ecliptic latitudes $\beta$ (if the biggest difference in latitudes of the models is $<50^{\circ}$). We call these models {\em partial} and present them in Table~\ref{tab:partials}. For the ecliptic latitude $\beta$, we use the mean value of all different models. We define the parameter $\Delta\equiv|\beta_{\mathrm{max}}-\beta_{\mathrm{min}}|/2$ as the estimated uncertainty of $\beta$, where $\beta_{\mathrm{max}}$ and $\beta_{\mathrm{min}}$ are the extremal values within all $\beta$. The threshold for partial models is $\Delta<25^{\circ}$. We present 31 new models and 24 partial models. References to the dense lightcurves used for the model determination are listed in Table~\ref{tab:references}. In Section~\ref{sec:simulation}, we compare the numbers of asteroids in four quadrants of the ($a_{\mathrm{p}}$, $\beta$) diagram (defined by the center of the family and the value $\beta=0^{\circ}$) with the same quantities based on the synthetic family population. The uncertainties in $\beta$ are rarely larger than 20$^{\circ}$, and the assignment to a specific quadrant is usually not questionable (only in 4 cases out of 136 does the uncertainty interval lie in both quadrants; most of the asteroids have latitudes $|\beta|\gtrsim 30^{\circ}$), and thus the latitudes give us useful information about the rotational properties in asteroid families. Partial models represent about 20\% of our sample of asteroid models.
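In practice, the classification of multiple pole solutions described above reduces to the simple test sketched below (illustrative code; the input latitudes are hypothetical):
\begin{verbatim}
# Sketch of the partial-model criterion: for multiple pole solutions with
# latitudes beta_i, Delta = (beta_max - beta_min)/2, and the model is kept
# as "partial" if Delta < 25 deg. Input latitudes below are hypothetical.
def classify_pole_solutions(betas_deg, threshold_deg=25.0):
    beta_mean = sum(betas_deg) / len(betas_deg)
    delta = (max(betas_deg) - min(betas_deg)) / 2.0
    status = "partial" if delta < threshold_deg else "rejected"
    return beta_mean, delta, status

print(classify_pole_solutions([48.0, 62.0, 55.0]))  # -> partial
print(classify_pole_solutions([10.0, 75.0]))        # too discrepant -> rejected
\end{verbatim}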
The typical error for the orientation of the pole is (5--10$^{\circ}$)/$\cos \beta$ in longitude $\lambda$ and 5--20$^{\circ}$ in latitude $\beta$ (both uncertainties depend on the amount, timespan and quality of the photometry used). Models based purely on dense photometry are typically derived from a large number ($\sim$30--50) of individual dense lightcurves observed during $\sim$5--10 apparitions, and thus the uncertainties of the parameters of the rotational state correspond to the lower values of the aforementioned range. On the other hand, models based on combined sparse-in-time data have larger uncertainties (corresponding to the upper bound of the aforementioned range), due to the poor photometric quality of the sparse data.
Models of asteroids (281)~Lucretia and (1188)~Gothlandia published by \citet{Hanus2013a} were recently determined also by \citet{Kryszczynska2013a} from partly different photometric data sets. Parameters of the rotational state for both models agree within their uncertainties.
The spin vector solution of asteroid (951)~Gaspra based on Galileo images obtained during the October 1991 flyby was already published by \citet{Davis1994}. Similarly, the solution of a Koronis-family member (243)~Ida based on Galileo images and photometric data was previously derived by \citet{Davies1994} and \citet{Binzel1993}. Here we present convex shape models for both these asteroids. Our derived pole orientations agree within only a few degrees with the previously published values (see Table~\ref{tab:families}), which again demonstrates the reliability of the lightcurve inversion method.
\subsection{Family members and interlopers}\label{sec:members}
We revise the family membership assignment by the HCM method according to the above-described criteria for interlopers or borderline cases. Interlopers are asteroids which do not clearly belong to the family; for example, they have different taxonomic types, incompatible albedos or are far from the ($a_{\mathrm{p}}$, $H$) border. On the other hand, borderline cases cannot be directly excluded from the family; their physical or orbital properties are just not typical in the context of other members (higher/lower albedos, close to the ($a_{\mathrm{p}}$, $H$) border). These asteroids are possible family members, but could easily be interlopers as well. In the penultimate column of Table~\ref{tab:families}, we show our revised membership classification of each object ({\em M} is a member, {\em I} an interloper and {\em B} a borderline case). The table also gives the rotational state of the asteroid (the ecliptic coordinates of the pole orientation $\lambda$ and $\beta$ and the period $P$), the semi-major axis $a$, the diameter $D$ and the albedo $p_{\mathrm{V}}$ from WISE \citep{Masiero2011}, the SMASS II \citep{Bus2002} and Tholen taxonomic types \citep{Tholen1984, Tholen1989}, and the reference to the model.
Although the HCM method yielded several members for the Vesta and Nysa/Polana families, we excluded these two families from our further study of spin states. The Vesta family was created by a cratering event, and thus the majority of fragments are rather small and beyond the capabilities of the model determination. Most of the models we currently have (recognized by the HCM method) are not compatible with the SFD of the Vesta family and thus are interlopers. On the other hand, the Nysa/Polana family is a complex of two families (of different age and composition), and thus should be treated individually. Additionally, we have only five member candidates for the whole complex, so even if we assigned them to the subfamilies, the numbers would be too low to draw any valid conclusions.
In Table~\ref{tab:interlopers}, we list asteroids for which the HCM method suggested a membership to families Flora, Koronis, Eos, Eunomia, Phocaea and Alauda, but using the additional methods for the family membership determination described above, we identified them as interlopers or borderline cases.
In Figure~\ref{img:V_shape}, we show the ($a_{\mathrm{p}}$, $H$) diagrams for all eight studied families. We plot the adopted ($a_{\mathrm{p}}$, $H$) border \citep[from][]{Broz2013b} and label the members, borderline cases and interlopers by different colors.
Several asteroids in our sample belong to smaller and younger sub-clusters within the studied families (e.g., (832)~Karin in the Koronis family, (1270)~Datura in the Flora family or (2384)~Schulhof in the Eunomia family). These sub-clusters were likely created by secondary collisions. As a result, the spin states of asteroids in these sub-clusters were randomly reoriented. Because our combined orbital- and spin-evolution model (see Section~\ref{sec:simulation}) includes secondary collisions (reorientations), using asteroids from sub-clusters in the study of the spin-vector distribution is thus essential: asteroids from sub-clusters correspond to reoriented asteroids in our synthetic population.
\begin{figure*}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{flora_a_H_borderline.eps}\includegraphics{koronis_a_H_borderline.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{eos_a_H_borderline.eps}\includegraphics{eunomia_a_H_borderline.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{phocaea_a_H_borderline.eps}\includegraphics{themis_a_H_borderline.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{maria_a_H_borderline.eps}\includegraphics{alauda_a_H_borderline.eps}}\\
\end{center}
\caption{\label{img:V_shape}Dependence of the absolute magnitude $H$ on the proper semi-major axis $a_{\mathrm{p}}$ for the eight studied families: Flora, Koronis, Eos, Eunomia, Phocaea, Themis, Maria and Alauda with the likely positions of the family centers (vertical lines). We also plot the possible range of the ($a_{\mathrm{p}}$, $H$) borders (two thick lines) of each family for values of the parameter $C$ from \citet{Broz2013b} (different values correspond to a different initial extent of the family or different age and magnitude of the Yarkovsky semi-major axis drift). The pink triangles represent the members from our sample (M), green circles borderline cases (B) and blue circles interlopers (I). Note that borderline cases and interlopers are identified by several methods including the position in the ($a_{\mathrm{p}}$, $H$) diagram, and thus could also lie close to the center of the family (e.g., in the case of the Flora family).}
\end{figure*}
\section{Observed spin vectors in families}\label{sec:analysis}
\begin{figure*}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{flora.eps}\includegraphics{koronis.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{eos.eps}\includegraphics{eunomia.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{phocaea.eps}\includegraphics{themis.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{maria.eps}\includegraphics{alauda.eps}}\\
\end{center}
\caption{\label{img:families}Dependence of the pole latitude $\beta$ on the proper semi-major axis $a_{\mathrm{p}}$ for the eight studied asteroid families: Flora, Koronis, Eos, Eunomia, Phocaea, Themis, Maria and Alauda. Family members are marked by circles and borderline cases by squares, whose sizes are scaled proportionally to the diameters (only the symbol for (15)~Eunomia was decreased by half to fit the figure). The vertical lines correspond to the likely centers of the asteroid families, whose uncertainties are usually $<0.01$ AU. The Eos family has an asymmetric V-shape (the ($a_{\mathrm{p}}$, $H$) border is asymmetric), which makes the center determination harder, so we mark two possible positions (one corresponds to the right ($a_{\mathrm{p}}$, $H$) border, the other to the left border). The uncertainties in $\beta$ are usually 5--20$^{\circ}$. In most cases, $|\beta|\gtrsim 30^{\circ}$, and thus the quadrant to which the asteroid belongs (defined by the center of the family and the value $\beta=0^{\circ}$) is {\em not} changed by these uncertainties.}
\end{figure*}
There are eight asteroid families for which we find at least three members (including borderline cases) in our data set of asteroid models (after the family-membership revision; labeled by {\em M} or {\em B} in the last column of Table~\ref{tab:families}): the Flora (38 members), Koronis (23), Eos (16), Eunomia (14), Phocaea (11), Themis (9), Maria (9), and Alauda (3) families. Having the models and the membership, we can now proceed to a discussion of the spin states in the families in general (Section~\ref{sec:spin_state_general}), and of the Flora and Koronis families in particular (Sections~\ref{sec:flora} and~\ref{sec:koronis}).
\subsection{Spin-vector orientations in individual families}\label{sec:spin_state_general}
In Figure~\ref{img:families}, we show the dependence of the asteroids' pole latitudes in ecliptic coordinates on the proper semi-major axes (if there are two possible pole solutions for an asteroid, we take the first one in Table~\ref{tab:models}, because it corresponds to a formally better solution; moreover, the latitudes of the two ambiguous models are usually similar). We mark family members by circles and borderline cases by squares, whose sizes are scaled proportionally to the diameters in order to also show the dependence on size. The vertical lines in Figure~\ref{img:families} correspond to the likely centers of the asteroid families, which we determine by constructing the V-shaped envelope of each family (we use all members of each family assigned by the HCM method, see Figures~\ref{img:a_H} and~\ref{img:V_shape}). Because the Eos family has an asymmetric V-shape in the ($a_{\mathrm{p}}$, $H$) diagram, we compute the centers for both wings of the V-shape individually. For the Flora family, we use only the right wing of the V-shape to derive the center, because the left wing is strongly affected by the $\nu_6$ secular resonance.
In the study of spin-vector properties in families, we simply use the ecliptic coordinates of the pole orientation: the ecliptic longitude $\lambda$ and latitude $\beta$. A formally better approach would be to use coordinates bound to the orbital plane of the asteroid: the orbital longitude $\lambda_{\mathrm{orb}}$ and latitude $\beta_{\mathrm{orb}}$. The orbital latitude can then be easily transformed into the obliquity, which directly tells us whether the asteroid rotates in a prograde or retrograde sense (a short numerical sketch of this transformation is given after the list below). However, for several reasons we prefer the ecliptic coordinates:
(i)~most of the asteroids have low inclinations and thus the difference between their ecliptic and orbital latitudes is only a few degrees; the maximum differences for the families with higher inclinations (Eos, Eunomia, Phocaea, Maria) are 20--30$^{\circ}$;
(ii)~the orbital coordinates of the pole direction cannot be computed for partial models, because we do not know the ecliptic longitude; these models represent about 20\% of our studied sample;
(iii)~the positions of the asteroids in the ($a_{\mathrm{p}}$, $\beta$) diagrams (i.e., to which quadrant they belong), namely whether they have $\beta>0^{\circ}$ or $\beta<0^{\circ}$, are sufficient information. Because most of the asteroids have latitudes larger than 30$^{\circ}$, their positions in the ($a_{\mathrm{p}}$, $\beta_{\mathrm{orb}}$) diagram are similar (this is not true for only three asteroids out of 136); and
(iv)~we compare the ($a_{\mathrm{p}}$, $\beta$) diagrams (the numbers of objects in the quadrants) between the observed and synthetic populations for ecliptic latitudes, so consistency is assured.
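The transformation mentioned above can be sketched as follows; this is a minimal illustration assuming standard spherical geometry (the spin vector built from $\lambda$ and $\beta$, the orbit normal from the inclination and the longitude of the ascending node) and is not part of our processing pipeline.
\begin{verbatim}
import numpy as np

def obliquity(lam, beta, inc, node):
    """Angle between the spin vector (lam, beta) and the orbit normal
    defined by inclination inc and ascending node node; all in degrees."""
    lam, beta, inc, node = np.radians([lam, beta, inc, node])
    s = np.array([np.cos(beta) * np.cos(lam),      # unit spin vector
                  np.cos(beta) * np.sin(lam),
                  np.sin(beta)])
    n = np.array([np.sin(inc) * np.sin(node),      # unit orbit normal
                  -np.sin(inc) * np.cos(node),
                  np.cos(inc)])
    return np.degrees(np.arccos(np.clip(np.dot(s, n), -1.0, 1.0)))

# For a low-inclination orbit the obliquity is close to 90 deg - beta:
print(obliquity(lam=120.0, beta=55.0, inc=5.0, node=80.0))  # ~30-40 deg (prograde)
\end{verbatim}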
In general, we observe similar trends for all studied families:
(i) larger asteroids are situated in the proximity of the family center;
(ii) asteroids with $\beta>0^{\circ}$ are usually found to the right of the family center;
(iii) asteroids with $\beta<0^{\circ}$, on the other hand, to the left of the center;
(iv) the majority of asteroids have large pole-ecliptic latitudes ($|\beta|\gtrsim30^{\circ}$); and finally
(v) some families have a statistically significant excess of asteroids with $\beta>0^{\circ}$ or $\beta<0^{\circ}$.
Case (i) is evident for the Flora, Eunomia, Phocaea, Themis and Maria families; we have no large asteroids in the samples of the remaining families.
Cases (ii) and (iii) are present in all families with the exception of Eos, where all the asteroids are close to the (poorly constrained) center. This phenomenon can be easily explained by the Yarkovsky drift, which changes the asteroids' semi-major axes~$a$; namely, it increases $a$ of prograde rotators and decreases $a$ of retrograde ones. The magnitude of the Yarkovsky drift depends on the asteroid size: it is negligible for asteroids with diameters $D\gtrsim$50 km (the case of Eos) and increases with decreasing diameter. For the Flora, Eunomia, Phocaea or Maria family, we can see that the smallest asteroids in the sample ($D\sim$ 5--10 km) can be situated far from the family center, and we also notice a trend of decreasing size with increasing distance from the center, which probably reflects the magnitude of the Yarkovsky effect and the initial velocities $v_\mathrm{ini}(D)$ the objects gained after the break-up.
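As an order-of-magnitude illustration of this size dependence, the sketch below assumes a fiducial maximum drift rate of $\sim\!2\times10^{-4}$ AU/Myr for a $D=1$ km body (a commonly quoted rule of thumb, not a value derived in this work) and scales it as $1/D$.
\begin{verbatim}
import numpy as np

# Order-of-magnitude sketch of the size dependence of the Yarkovsky drift.
# The 1-km fiducial rate is an external rule of thumb, not a result of this work.
def yarkovsky_drift_au(D_km, obliquity_deg, age_myr, rate_1km=2.0e-4):
    dadt = rate_1km * (1.0 / D_km) * np.cos(np.radians(obliquity_deg))  # AU/Myr
    return dadt * age_myr

for D in (5.0, 10.0, 50.0):
    print(f"D = {D:4.0f} km: {yarkovsky_drift_au(D, 0.0, 1000.0):+.3f} AU per Gyr")
# ~0.04 AU for D = 5 km, but only ~0.004 AU for D = 50 km
\end{verbatim}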
Observation (iv) is a result of the dynamical evolution of the asteroid's spin vector orientations dominated by the YORP effect, which increases the absolute value of the pole-ecliptic latitude \citep[see papers][where this effect is numerically investigated and compared with the observed anisotropic spin vector distribution of the sample of $\sim$300 MBAs]{Hanus2011, Hanus2013a}.
Case (v) concerns the Flora, Eunomia, Phocaea, Themis and Maria families. The different numbers of asteroids with $\beta>0^{\circ}$ and $\beta<0^{\circ}$ in these families are statistically significant and cannot be coincidental. The obvious candidates for an explanation are mean-motion or secular resonances. Indeed, the $\nu_6$ secular resonance removed many objects with $\beta>0^{\circ}$ from the Flora family (see Section~\ref{sec:flora} for a more thorough discussion); the 8:3 resonance with Jupiter truncated the Eunomia family, so that there are no objects with $a_\mathrm{p}>2.70$~AU; and similarly, the 3:1 resonance with Jupiter affected the Maria family, for which we do not observe objects with $a_\mathrm{p}$ smaller than 2.52~AU. The 3:1 resonance with Jupiter is also situated near the Phocaea family, at $a=2.50$~AU. Due to the high inclination of objects in the Phocaea family ($I\sim24^{\circ}$), the resonance affects asteroids with $a_\mathrm{p}>2.40$~AU, which corresponds to the probable center of the family: it removed a significant number of objects between 2.40~AU and 2.45~AU, and all objects with larger $a_\mathrm{p}$.
The asymmetry between asteroids with $\beta>0^{\circ}$ and $\beta<0^{\circ}$ in the Themis family is caused by a selection effect: in the family, there are no objects with absolute magnitude $H<12$~mag (i.e., large asteroids) and $a_\mathrm{p}<3.10$~AU, whereas there are more than a hundred such asteroids with $a_\mathrm{p}>3.10$~AU (see Figure~\ref{img:a_H}a). Our sample of asteroid models derived by the lightcurve inversion method is dominated by larger asteroids, and it is thus not surprising that we did not derive models for Themis family asteroids with $a_\mathrm{p}<3.10$~AU.
The Flora and Koronis families are interesting in other respects as well, and are thus discussed in more detail in Sections~\ref{sec:flora} and~\ref{sec:koronis}.
\subsection{The Flora family}\label{sec:flora}
The Flora cluster is situated in the inner part of the main belt between 2.17 and 2.40 AU; its left part (with respect to the ($a_{\mathrm{p}}$, $H$) diagram) is strongly affected by the secular $\nu_6$ resonance with Saturn, as demonstrated in Figure~\ref{img:a_H}b. The probable center of the family matches the position of asteroid (8)~Flora at $a=2.202$ AU. Because of the relative proximity to the Earth, more photometric measurements of smaller asteroids are available than for more distant families, and thus more models have been derived. So far, we have identified 38 models of asteroids that belong to the Flora family (including borderline cases).
The majority of asteroids in this family have $\beta>0^{\circ}$ ($\sim$68\%; due to the small inclinations of the family members, most objects with $\beta>0^{\circ}$ are definitely prograde rotators, because their obliquities lie between 0$^\circ$ and 90$^\circ$) and lie to the right of the family center, confirming the presence of the Yarkovsky drift. Nine out of the twelve asteroids with $\beta<0^{\circ}$ can be found in Figure~\ref{img:families} near or to the left of the family center. The exceptions are the borderline asteroids (1703)~Barry and (7360)~Moberg, and asteroid (7169)~Linda with $a_{\mathrm{p}}$ close to 2.25~AU (see Figure~\ref{img:families}). The borderline category already suggests that these two asteroids could be interlopers, and their rotational states seem to support this; however, it is also possible that they have been reoriented by non-catastrophic collisions. The rotational state of another borderline asteroid, (800)~Kressmannia, is also not in agreement with the Yarkovsky/YORP predictions, and thus it could be an interloper (or reoriented) as well. The asteroid (7169)~Linda, classified as a member, could still be an interloper that was not detected by our methods for interloper removal, or it could have been recently reoriented by a non-catastrophic collision (the typical timescale for a reorientation \citep[][see~Eq.~\ref{eq:reorientation}]{Farinella1998} of this 4 km-sized asteroid with rotational period $P=27.9$ h is $\tau_{\rm reor} \sim 500$~Myr, which is comparable with the age of the family). The depopulation of poles close to the ecliptic plane is also clearly visible.
The $\nu_6$ resonance to the left of the family center creates an excess of retrograde rotators not only within the family, but also in the whole main-belt population, if we consider the currently available sample of asteroid models: there are $\sim$300 asteroid models in the DAMIT database, and in the Flora family there are 14 more asteroids with $\beta>0^{\circ}$ than with $\beta<0^{\circ}$ (i.e., a prograde excess), which corresponds to about 6\% of the whole sample. This bias needs to be taken into consideration, for example, in studies of the rotational properties of MBAs.
The missing asteroids with $\beta<0^{\circ}$ were delivered by this resonance to orbits crossing those of the terrestrial planets and are responsible, for example, for the retrograde excess of the NEAs \citep{LaSpina2004}: the $\nu_6$ resonance contributes only retrograde rotators to the NEA population, whereas other major mean-motion resonances, such as the 3:1 resonance with Jupiter, deliver prograde and retrograde rotators in similar amounts.
We did not observe a prograde group of asteroids with similar pole-ecliptic longitudes in the Flora family (i.e., a direct analog of the Slivan states in the Koronis family), which was proposed by \citet{Kryszczynska2013a}. Although \citet{Kryszczynska2013a} claims that Slivan states are likely observed in the Flora family, no corresponding clustering of the poles of the prograde rotators, particularly of their ecliptic longitudes, is shown there. We believe that the term {\em Slivan state} was used incorrectly in that work.
\subsection{The Koronis family}\label{sec:koronis}
The Koronis family is located in the middle main belt between 2.83 and 2.95~AU, with the center at $a=2.874$~AU. We identified 23~members (including borderline cases) with determined shape models.
The concept of the Yarkovsky and YORP predictions also works for the Koronis family (asteroids with $\beta<0^{\circ}$ lie to the left of the family center, asteroids with $\beta>0^{\circ}$ to the right, see Figure~\ref{img:families}). In addition, \citet{Slivan2002} and \citet{Slivan2003} noticed that the prograde rotators also have clustered pole longitudes. These asteroids were trapped in a secular spin-orbital resonance $s_6$ and are referred to as being in Slivan states \citep{Vokrouhlicky2003}. Several asteroids were later recognized as being incompatible with the Slivan states, such as (832)~Karin and (263)~Dresda by \citet{Slivan2012}.
Asteroid (832)~Karin is the largest member of a young \citep[$\sim$5.8 Myr,][]{Nesvorny2004} collisional family that is confined within the larger Koronis family. The spin state of (832)~Karin was thus likely affected during this catastrophic event and changed to a random state.
Asteroid (263)~Dresda could have been randomly reoriented by a non-catastrophic collision, which is likely to have happened to at least a few of the 27 asteroids in the Koronis cluster with known spin-state solutions, or its initial rotational state and shape did not allow a capture in the resonance. All four borderline asteroids have rotational states in agreement with the Yarkovsky/YORP concept, which may support their membership in the Koronis cluster. On the other hand, the rotational states of asteroids (277)~Elvira and (321)~Florentina do not match the expected values, and thus these could again be interlopers or affected by reorientations.
Being trapped in the spin-orbital resonance does not necessarily mean that the asteroid is a member of the Koronis family; it rather indicates that its initial orbital position, rotational state and shape were favorable for being trapped in the resonance. For example, asteroids (311)~Claudia, (720)~Bohlinia, (1835)~Gajdariya and (3170)~Dzhanibekov have the expected rotational states but are either rejected from the Koronis family or classified as borderline cases by our membership revision.
\section{Long-term evolution of spin vectors in asteroid families}\label{sec:simulation}
Here we present a comparison of the observed spin-vector orientations in several asteroid families with a numerical model of their temporal spin-vector evolution. We use a {\it combined\/} orbital- and spin-evolution model, described in detail in \citet{Broz2011}. We need to account for the fact that the Yarkovsky semi-major-axis drift depends sensitively on the orientation of the spin axis, which is in turn affected by the YORP effect and non-disruptive collisions. The model includes the following processes, which are briefly described below:
(i)~impact disruption;
(ii)~gravitational perturbations of planets;
(iii)~the Yarkovsky effect;
(iv)~the YORP effect;
(v)~collisions and spin-axis reorientations; and
(vi)~mass shedding.
\paragraph{Impact disruption}
To obtain the initial conditions for the family just after the breakup event, we use a very simple model of an isotropic ejection of fragments from
the work of \citet{Farinella1994}. The distribution of velocities ``at infinity'' follows the function
\begin{equation}
{\rm d}N(v) {\rm d} v = C' v (v^2 + v_{\rm esc}^2)^{-(\alpha+1)/2} {\rm d} v\,,\label{dN_v}
\end{equation}
with the exponent $\alpha$ being a~free parameter, $C'$ a normalization constant and $v_{\rm esc}$ the escape velocity from the parent body,
which is determined by its size~$D_{\rm PB}$ and mean density~$\rho_{\rm PB}$ as
$v_{\rm esc} = \sqrt{(2/3) \pi G \rho_{\rm PB}}\, D_{\rm PB}\,.$
The distribution is usually cut at a selected maximum allowed velocity $v_{\rm max}$ to prevent outliers. The initial velocities $|v|$ of the individual bodies are generated by a straightforward Monte Carlo code, and the orientations of the velocity vectors $\vec v$ in space are assigned randomly. We also assume that the velocity of the fragments is independent of their size.
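A minimal Monte Carlo sketch of this ejection model is given below; the parent-body size, density, exponent $\alpha$ and cut-off $v_{\rm max}$ are illustrative values only, and the velocities are drawn by simple rejection sampling.
\begin{verbatim}
import numpy as np

G = 6.674e-11                                   # gravitational constant [SI]

def v_escape(D_pb, rho_pb):
    """Escape velocity v_esc = sqrt((2/3) pi G rho_PB) * D_PB  [m/s]."""
    return np.sqrt(2.0 / 3.0 * np.pi * G * rho_pb) * D_pb

def sample_velocities(n, alpha, v_esc, v_max, seed=0):
    """Draw n ejection velocities from dN(v) by rejection sampling on [0, v_max]."""
    rng = np.random.default_rng(seed)
    pdf = lambda v: v * (v**2 + v_esc**2) ** (-(alpha + 1.0) / 2.0)
    pdf_max = pdf(np.linspace(0.0, v_max, 1000)).max()
    out = []
    while len(out) < n:
        v = rng.uniform(0.0, v_max)
        if rng.uniform(0.0, pdf_max) < pdf(v):
            out.append(v)
    return np.array(out)

v_esc = v_escape(D_pb=150e3, rho_pb=2500.0)     # a ~150 km parent body
v = sample_velocities(5000, alpha=3.25, v_esc=v_esc, v_max=500.0)
print(f"v_esc = {v_esc:.0f} m/s, median |v| = {np.median(v):.0f} m/s")
\end{verbatim}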
We must also select the initial osculating eccentricity~$e_{\rm i}$ of the parent body, the initial inclination~$i_{\rm i}$, as well as the true anomaly~$f_{\rm imp}$ and the argument of perihelion~$\omega_{\rm imp}$ at the time of the impact disruption, which determine the initial shape of the synthetic family just after the disruption of the parent body.
\paragraph{Gravitational perturbations of planets}
Orbital integrations are performed using the SWIFT package
\citep{Levison1994}, slightly modified to include necessary
online digital filters and a second-order symplectic integrator
\citep{Laskar2001}. The second-order symplectic scheme allows
us to use a time-step up to $\Delta t = 91\,{\rm d}$.
Our simulations include perturbations by four outer planets,
with their masses, initial positions and velocities taken from the
JPL DE405 ephemeris \citep{Standish1997}.
We modify the initial conditions of the planets and asteroids
by a barycentric correction to partially account for the influence
of the terrestrial planets.
The absence of the terrestrial planets as perturbers
is a reasonable approximation in the middle and outer part
of the main belt (for orbits with $a > 2.5\,{\rm AU}$ and $e < 0.6$).%
\footnote{For the Flora family, located in the inner belt, we should account
for the terrestrial planets directly, because of mean-motion resonances
with Mars, but we decided not to do so in order to speed up the computation.
In any case, the major perturbation we need to account for is the
$\nu_6$~secular resonance, which is indeed present in our model.}
Synthetic proper elements are computed as follows.
We first apply a Fourier filter to the (non-singular)
orbital elements in a moving window of 0.7 Myr (with steps
of 0.1 Myr) to eliminate all periods smaller than some threshold
(1.5 kyr in our case); we use a sequence of Kaiser windows
as in \citet{Quinn1991}.
The filtered signal, i.e., the mean orbital elements, is then passed through
a frequency analysis code adapted from \citet{Sidlichovsky1996}
to obtain (planetary) forced and free terms in Fourier representation
of the orbital elements. The isolated free terms are what we use
as the proper orbital elements.
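The first, low-pass filtering step of this procedure can be illustrated on a synthetic signal as follows; a single normalized Kaiser window is used as the filter kernel, the window parameters are illustrative, and the subsequent frequency analysis is not reproduced here.
\begin{verbatim}
import numpy as np

def lowpass_kaiser(x, dt_yr, cutoff_period_yr=1.5e3, beta=8.0):
    """Suppress periods much shorter than ~cutoff_period_yr (illustrative filter)."""
    n_taps = int(4 * cutoff_period_yr / dt_yr) | 1    # odd number of taps
    kernel = np.kaiser(n_taps, beta)
    kernel /= kernel.sum()                            # unit gain at zero frequency
    return np.convolve(x, kernel, mode="same")        # "mean element" series

# Eccentricity-like signal sampled every 10 yr: slow 50 kyr term + fast 500 yr term
t = np.arange(0.0, 1.0e5, 10.0)
e_osc = 0.14 + 0.01*np.sin(2*np.pi*t/5.0e4) + 0.005*np.sin(2*np.pi*t/500.0)
e_mean = lowpass_kaiser(e_osc, dt_yr=10.0)            # short-period term removed
print(round(e_mean[5000], 4))
\end{verbatim}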
\paragraph{Yarkovsky effect}
Both diurnal and seasonal components of the Yarkovsky accelerations
are computed directly in the N-body integrator. We use a theory of
\citet{Vokrouhlicky1998a} and \citet{Vokrouhlicky1999b}
for spherical objects \citep[but the magnitude of the acceleration does
not differ substantially for non-spherical shapes,][]{Vokrouhlicky1998b}.
The implementation within the SWIFT integrator was described
in detail by \citet{Broz2006}.
\paragraph{YORP effect}\label{sec:yorp}
The evolution of the orientation of the spin axis and of the angular velocity is given by:
\begin{eqnarray}
\frac{{\rm d}\omega}{{\rm d} t} &=& c f_i(\epsilon)\,,\qquad i = 1 \dots 200\,,\label{eq:domega}\\
\frac{{\rm d}\epsilon}{{\rm d} t} &=& {c \frac{g_i(\epsilon)}{\omega}}\,,\label{eq:depsil}
\end{eqnarray}
where the $f$- and $g$-functions describing the YORP effect for a set of 200 shapes were calculated numerically by \citet{Capek2004} for bodies with the effective radius $R_0 = 1\,{\rm km}$ and the bulk density $\rho_0 = 2500\,{\rm kg}/{\rm m}^3$, located on a circular orbit with the semi-major axis $a_0 = 2.5\,{\rm AU}$. We assign one of these artificial shapes (denoted by the index~$i$) to each individual asteroid in our sample.
The $f$- and $g$-functions were then scaled by the factor
\begin{equation}\label{cyorp}
c = c_{\rm YORP} \left(\frac{a}{a_0}\right)^{-2} \left(\frac{R}{R_0}\right)^{-2}\left(\frac{\rho_{\rm bulk}}{\rho_0}\right)^{-1}\,,
\end{equation}
where $a$, $R$ and $\rho_{\rm bulk}$ denote the semi-major axis, the radius and the density of the simulated body, respectively, and $c_{\rm YORP}$ is a free scaling parameter reflecting our uncertainty in the shape models and in the magnitude of the YORP torque, which depends on small-scale surface features \citep[even boulders,][]{Statler2009} and on other simplifications in the modeling of the YORP torque. In \citet{Hanus2013a}, we constrained this parameter and found $c_{\rm YORP}=0.2$ to be the optimal value when comparing the results of the simulation with the observed latitude distribution of main-belt asteroids; we use this value of $c_{\rm YORP}$ in our simulation.
The differential equations~(\ref{eq:domega}), (\ref{eq:depsil}) are integrated numerically by a simple Euler integrator. The usual time step is $\Delta t = 1000\,{\rm yr}$.
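The structure of this scheme is illustrated below. The $f$- and $g$-functions are crude analytic placeholders standing in for the tabulated functions of \citet{Capek2004}, so only the scaling of Eq.~(\ref{cyorp}) and the Euler stepping of Eqs.~(\ref{eq:domega}) and~(\ref{eq:depsil}) are meaningful; all numerical values are illustrative.
\begin{verbatim}
import numpy as np

def c_scaling(a_AU, R_km, rho, c_yorp=0.2, a0=2.5, R0=1.0, rho0=2500.0):
    """Scaling factor c of Eq. (cyorp)."""
    return c_yorp * (a_AU/a0)**-2 * (R_km/R0)**-2 * (rho/rho0)**-1

# Crude analytic placeholders for the tabulated f_i and g_i of a single shape;
# their magnitude (~1e-18, SI-like) is illustrative only.
def f_placeholder(eps):
    return 1.0e-18 * np.cos(2.0 * eps)

def g_placeholder(eps):
    return 1.0e-18 * np.sin(2.0 * eps)

def evolve_spin(omega, eps, a_AU, R_km, rho, t_end_yr, dt_yr=1.0e3):
    """Euler integration of d(omega)/dt = c f(eps), d(eps)/dt = c g(eps)/omega."""
    c = c_scaling(a_AU, R_km, rho)
    dt = dt_yr * 3.156e7                              # years -> seconds
    for _ in range(int(t_end_yr / dt_yr)):
        omega += c * f_placeholder(eps) * dt
        eps   += c * g_placeholder(eps) / omega * dt
    return omega, eps

# A 5 km body (R = 2.5 km) at 2.4 AU, P = 6 h, obliquity 60 deg, evolved for 100 Myr:
omega0 = 2.0 * np.pi / (6.0 * 3600.0)
omega, eps = evolve_spin(omega0, np.radians(60.0), 2.4, 2.5, 2500.0, 1.0e8)
print(f"P = {2*np.pi/omega/3600:.1f} h, eps = {np.degrees(eps):.0f} deg")
\end{verbatim}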
\paragraph{Collisions and spin-axis reorientations}\label{sec:reorientation}
We neglect the effect of disruptive collisions because we do not want to lose objects during the simulation, but we include spin-axis reorientations caused by collisions. We use an estimate of the time scale by \citet{Farinella1998},
\begin{equation}\label{eq:reorientation}
\tau_{\rm reor} = B \left(\frac{\omega}{\omega_0}\right)^{\beta_1} \left(\frac{D}{D_0}\right)^{\beta_2}\,,
\end{equation}
where
$B = 84.5\,{\rm kyr}$,
$\beta_1 = 5/6$,
$\beta_2 = 4/3$,
$D_0 = 2\,{\rm m}$ and
$\omega_0$ corresponds to a rotational period of $P = 5$~hours.
These values are characteristic of the main belt.
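A direct transcription of this time scale, checked against the $\sim$500 Myr value quoted for the 4 km asteroid (7169)~Linda in Section~\ref{sec:flora}, reads:
\begin{verbatim}
# Reorientation time scale of Eq. (reorientation); D in metres, P in hours.
def tau_reorientation_myr(D_m, P_h, B_kyr=84.5, beta1=5.0/6.0, beta2=4.0/3.0,
                          D0_m=2.0, P0_h=5.0):
    omega_ratio = P0_h / P_h                  # omega/omega_0 = P_0/P
    tau_kyr = B_kyr * omega_ratio**beta1 * (D_m / D0_m)**beta2
    return tau_kyr / 1.0e3                    # kyr -> Myr

print(f"{tau_reorientation_myr(4000.0, 27.9):.0f} Myr")   # ~500 Myr for (7169) Linda
\end{verbatim}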
\paragraph{Mass shedding}
If the angular velocity approaches a critical value
\begin{equation}
\omega_{\rm crit} = \sqrt{\frac{4}{3} \pi G \rho_{\rm bulk}}\,,
\end{equation}
we assume a mass-shedding event: we keep the orientation of the spin axis and the sense of rotation, but we reset the rotational period~$P = {2\pi/\omega}$ to a random value from the interval $(2.5, 9)$~hours. We also change the assigned shape to a different one, since any change of shape may result in a different YORP effect.
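For reference, the critical period implied by this criterion can be evaluated as in the short sketch below; for the bulk density used in our simulations it is close to the familiar $\sim$2 h spin barrier.
\begin{verbatim}
import numpy as np

G = 6.674e-11                                  # [m^3 kg^-1 s^-2]

def omega_crit(rho_bulk):
    """Critical angular velocity of a strengthless sphere [rad/s]."""
    return np.sqrt(4.0 / 3.0 * np.pi * G * rho_bulk)

def p_crit_hours(rho_bulk):
    return 2.0 * np.pi / omega_crit(rho_bulk) / 3600.0

print(f"P_crit = {p_crit_hours(2500.0):.2f} h")   # ~2.1 h for rho = 2500 kg/m^3
# In the simulation, a body exceeding omega_crit keeps its spin-axis direction,
# its period is reset to a random value in (2.5, 9) h and a new shape is assigned.
\end{verbatim}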
\paragraph{Synthetic Flora, Koronis and Eos families}
\begin{figure*}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{flora-1_abeta_DGT30_1GYR.eps}\includegraphics{flora-1_abeta_DLT30_1GYR.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{koronis-1_abeta_DGT30_4GYR.eps}\includegraphics{koronis-1_abeta_DLT30_4GYR.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{eos-1_MAXWELLIAN_abeta_DGT30_1.5GYR.eps}\includegraphics{eos-1_MAXWELLIAN_abeta_DLT30_1.5GYR.eps}}\\
\end{center}
\caption{\label{img:flora_evolution}A simulation of the long-term evolution of the synthetic Flora (top), Koronis (middle) and Eos (bottom) families in the proper semi-major axis~$a_{\rm p}$ vs. pole latitude~$\beta$ plane. Left: objects with $D > 30\,{\rm km}$, which almost do not evolve in $\beta$. Right: objects with $D \le 30\,{\rm km}$, with the initial conditions denoted by empty circles and an evolved state at 1 Gyr denoted by full circles. The sizes of the symbols correspond to the actual diameters~$D$.
The initial conditions for Flora correspond to an isotropic, size-independent velocity field with $\alpha = 3.25$ and $v_{\rm esc} = 95\,{\rm m}\,{\rm s}^{-1}$, and a uniform distribution of poles (i.e., uniform in $\sin\beta$).
We increase the number of objects by a factor of 10 compared to the observed members of the Flora family (and likewise for Koronis and Eos) in order to improve the statistics, while retaining their size distribution.
Objects of the Flora family are discarded from these plots once they leave the family region (eccentricity $e_{\rm p} = 0.1\hbox{ to }0.18$, inclination $\sin I_{\rm p} = 0.05\hbox{ to }0.13$), because they are affected by strong mean-motion or secular resonances (the $\nu_6$ in this case).
Thermal parameters were set as follows:
the bulk density $\rho_\mathrm{bulk} = 2500\,{\rm kg}\,{\rm m}^{-3}$,
the surface density $\rho_{\rm surf} = 1500\,{\rm kg}\,{\rm m}^{-3}$,
the thermal conductivity $K = 0.001\,{\rm W}\,{\rm m}^{-1}\,{\rm K}^{-1}$,
the thermal capacity $C_{\mathrm{t}} = 680\,{\rm J}\,{\rm kg}^{-1}\,{\rm K}^{-1}$,
the Bond albedo $A = 0.1$ and
the infrared emissivity $\epsilon = 0.9$.
The time step for the orbital integration is ${\rm d} t = 91\,{\rm days}$ and~${\rm d} t_{\rm spin} = 10^3\,{\rm yr}$ for the (parallel) spin integration.
The parameters for Koronis and Eos are chosen similarly, only for Koronis, we use $v_{\rm esc} = 100\,{\rm m}\,{\rm s}^{-1}$, and for Eos $v_{\rm esc} = 225\,{\rm m}\,{\rm s}^{-1}$ and $\rho_{\rm surf} = 2500\,{\rm kg}\,{\rm m}^{-3}$.
}
\end{figure*}
In Figure~\ref{img:flora_evolution} (top row), we show the long-term evolution of the synthetic Flora family in the proper semi-major axis~$a_{\rm p}$ vs. pole latitude~$\beta$ plane for objects larger and smaller than $30\,{\rm km}$. The values of the model parameters are listed in the figure caption. Larger asteroids do not evolve significantly and remain close to their initial positions. On the other hand, smaller asteroids ($D<30\,{\rm km}$) are strongly affected by the Yarkovsky and YORP effects: they drift in semi-major axis, in opposite directions for prograde and retrograde rotators, and their pole orientations become mostly perpendicular to their orbits (which, for small inclinations, also means nearly perpendicular to the ecliptic plane). At $t=1$~Gyr, we observe a deficiency of asteroids with $\beta>0^{\circ}$ to the left of the family center and a deficiency of asteroids with $\beta<0^{\circ}$ to the right of the family center.
The asymmetry of the synthetic Flora family with respect to its center (the red vertical line in Figure~\ref{img:flora_evolution}), caused by the secular $\nu_6$ resonance, is obvious. The lower-right quadrant ($\beta<0^{\circ}$, $a_{\rm p}>2.202$~AU) still contains many objects at $t=1$~Gyr, because for some of them the evolution in $\beta$ and $a_{\rm p}$ is rather slow, while others were delivered to this quadrant by collisional reorientations.
The appearance of the evolved proper semi-major axis~$a_{\rm p}$ vs. pole latitude~$\beta$ diagrams for the Koronis and Eos families is qualitatively similar to that of the Flora family. Because the asteroid samples for the Koronis and Eos families are dominated by intermediate-sized asteroids ($D\sim20$--50~km), the evolution in $a_{\rm p}$ and $\beta$ is on average slower than in the Flora family. We show the state of the simulation at 4 Gyr for the Koronis family and at 1.5 Gyr for the Eos family (based on their expected ages). The Eos family thus appears less evolved than the Koronis family.
We also check whether the distributions of the proper eccentricities and inclinations of the synthetic Flora/Koronis/Eos objects (at least roughly) correspond to those of the observed families. However, the number of objects to compare is rather low and seems insufficient for a detailed comparison of the distributions in the 3D space of proper elements ($a_{\rm p}$, $e_{\rm p}$, $\sin I_{\rm p}$).
\paragraph{Ages of Flora, Koronis and Eos families}
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{flora-1_kvadrant_median_DLT30.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{koronis-1_kvadrant_median.eps}}\\
\resizebox{\hsize}{!}{\includegraphics{eos-1_MAXWELLIAN_kvadrant_median.eps}}\\
\end{center}
\caption{\label{img:ages}Time evolution of the metric $(k_2+k_4)/(k_1+k_3)$, where $k_i$ is the number of synthetic objects in quadrant $i$ ($i=1, 2, 3, 4$) defined by the center of the family and the value $\beta=0^{\circ}$, for the synthetic Flora, Koronis and Eos families (red lines). The spread corresponds to 100 different selections of objects (we simulate 10 times more objects to obtain better statistics); the upper curve denotes the 90\% quantile and the lower one the 10\% quantile. The thick horizontal line is the observed ratio $(k_2+k_4)/(k_1+k_3)$ with its uncertainty interval.}
\end{figure}
To quantitatively compare the simulated long-term evolution of the synthetic families in the proper semi-major axis~$a_{\rm p}$ vs. pole latitude~$\beta$ plane with the observations, we construct the following metric: we divide the ($a_{\rm p}$, $\beta$) plane into four quadrants defined by the center of the family and the value $\beta=0^{\circ}$, and compute the ratio $(k_2+k_4)/(k_1+k_3)$, where $k_i$ is the number of synthetic objects in quadrant $i$ ($i=1, 2, 3, 4$). In Figure~\ref{img:ages}, we show the evolution of this metric during the simulations of the Flora, Koronis and Eos families for all synthetic objects with $D<30$~km, together with the value of the same metric for the observed population for comparison.
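The metric itself can be computed as in the sketch below. The quadrant numbering is an assumed convention (the essential point is that $k_2+k_4$ counts the two quadrants inconsistent with the Yarkovsky drift), and the input values are a toy example, not the observed sample.
\begin{verbatim}
import numpy as np

def quadrant_metric(a_p, beta, a_center):
    """(k2 + k4)/(k1 + k3) for quadrants defined by a_center and beta = 0."""
    a_p, beta = np.asarray(a_p), np.asarray(beta)
    k1 = np.sum((a_p > a_center) & (beta > 0))   # beta > 0, right of center
    k2 = np.sum((a_p < a_center) & (beta > 0))   # beta > 0, left of center
    k3 = np.sum((a_p < a_center) & (beta < 0))   # beta < 0, left of center
    k4 = np.sum((a_p > a_center) & (beta < 0))   # beta < 0, right of center
    return (k2 + k4) / (k1 + k3)

# Toy example only (not the observed sample):
a_p  = [2.18, 2.19, 2.21, 2.23, 2.25, 2.26]
beta = [-40.0, 35.0, 50.0, 60.0, -30.0, 45.0]
print(quadrant_metric(a_p, beta, a_center=2.202))   # -> 0.5
\end{verbatim}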
For the Koronis family (middle panel), the synthetic ratio reaches the observed one after $t=2.5$~Gyr and remains similar until the end of the simulation at $t=4$~Gyr; \citet{Bottke2001} published an age of $t=(2.5\pm1.0)$~Gyr for the Koronis family. Unfortunately, we cannot constrain the age of the Eos family from this simulation, because its objects show relatively little evolution in $a_{\rm p}$ and $\beta$. The fit for the Flora family is not ideal; the reason could be differences in the initial velocity field or in the true anomaly $f_\mathrm{imp}$ of the impact. The best agreement is obtained for an age of $t=(1.0\pm0.5)$~Gyr, which is roughly consistent with the dynamical age of $(1.5\pm0.5)$~Gyr in \citet{Nesvorny2005}.
\section{Conclusions}
We identify 152 asteroids for which we have convex shape models and which the HCM method simultaneously identifies as members of ten collisional families. Due to the large number of expected interlopers in the Vesta and Nysa/Polana families, we exclude these families from the study of rotational properties. In the remaining sample of asteroids from eight families, we identify $\sim20$\% of objects as interlopers or borderline cases (see Table~\ref{tab:interlopers}), using the several methods described in Section~\ref{sec:membership}. The borderline cases are still possible members of the families and are thus included in our study of the spin-vector distribution.
From the dependence of the asteroids' pole latitudes on the semi-major axes, plotted in Figure~\ref{img:families}, we can see the fingerprints of the spreading of families in $a$ and of the spin-axis evolution due to the Yarkovsky and YORP effects: asteroids with $\beta<0^{\circ}$ lie to the left of the family center, while asteroids with $\beta>0^{\circ}$ lie to the right. The asymmetry with respect to the family centers is in most cases caused by various resonances that cut the families; in the case of the Themis family, a selection effect is responsible.
However, we do not observe a {\em perfect} agreement with the Yarkovsky and YORP predictions. The few (eight) individual objects that have incompatible rotational states could:
(i)~be incorrectly determined;
(ii)~be interlopers;
(iii)~have initial rotational states that cause only a small evolution in the ($a_{\rm p}$, $\beta$) space (i.e., they are close to their initial positions after the break-up); or
(iv)~be recently reoriented by collisional events.
In the case of the Flora family, significantly fewer asteroids with $\beta<0^{\circ}$ ($\sim28\%$) than with $\beta>0^{\circ}$ ($\sim72\%$) are present. The secular $\nu_6$ resonance is responsible for this strong deficit: objects with $\beta<0^{\circ}$ drift towards the resonance and are subsequently removed from the family (they become part of the NEA population, where they create an excess of retrograde rotators).
We do not find an analog of the Slivan states (observed in the Koronis family) in any of the other studied families.
We simulate the long-term evolution of the synthetic Flora, Koronis and Eos families (Figure~\ref{img:flora_evolution}) in the proper semi-major axis~$a_{\rm p}$ vs. pole latitude~$\beta$ plane and compare the results with the properties of the observed asteroid families. We obtain a good qualitative agreement between the observed and synthetic spin-vector distributions. For all three families, we compute the evolution of the number of objects in the four quadrants of the ($a_{\rm p}$, $\beta$) diagram, and we estimate ages for the Flora, $(1.0\pm0.5)$~Gyr, and Koronis, 2.5 to 4~Gyr, families that are in agreement with previously published values. However, we cannot estimate the age of the Eos family due to the small evolution of its objects in the ($a_{\rm p}$, $\beta$) diagram.
The uncertainties seem to be dominated by the observed quadrant ratios. We expect that increasing the sample size by a factor of 10 would decrease the relative uncertainty by a factor of about 3, which is a good motivation for further work on this subject.
\begin{acknowledgements}
The work of JH and JD has been supported by grants GACR P209/10/0537 and P209/12/0229 of the Czech Science Foundation, and the work of JD and MB by the Research Program MSM0021620860 of the Czech Ministry of Education. The work of MB has also been supported by grant GACR 13-013085 of the Czech Science Foundation.
\end{acknowledgements}